The Codefresh Runner can be found on GitHub at . It is part of the Codefresh CLI (which also includes additional capabilities for managing existing pipelines and builds).
The runner is a native Kubernetes application. You install it on your cluster and then it takes care of all aspects of pipeline launching, running, cleaning up, etc. If you have auto-scaling enabled on your Kubernetes cluster, the runner will automatically scale as you run more pipelines.
You can install the runner on any compliant Kubernetes distribution. The cluster can be public or private (even behind a firewall). In fact, the Codefresh Runner can even be installed on a local Kubernetes cluster on your workstation, if you have one, making it easy to try out for a quick demo.
It is important to mention that the Codefresh Runner does not need any incoming traffic (it only fetches build information). This means that you don't need to open any firewall ports or tamper with your NAT settings if you choose to install the Runner on a private Kubernetes cluster.
For more information on the Codefresh Runner, see its documentation.
To follow along, you need kubectl access to your cluster. You can use any of the popular cloud solutions such as Google, Azure, AWS, Digital Ocean, etc., or even a local cluster such as Microk8s, Minikube, K3s, etc. For the installation, you can also use the "cloud console" of your cloud provider. If you run any kubectl command (such as kubectl get nodes) and get a valid response, then you are good to go.

You also need to download the Codefresh CLI and authenticate it with your Codefresh account. You can create an API token from your Codefresh account by visiting . Then create an authentication context with:

codefresh auth create-context --api-key {API_KEY}
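Before moving on, it is worth double-checking both prerequisites. The following sketch assumes kubectl and the codefresh CLI are already on your PATH, and that the CLI exposes an auth get-contexts subcommand for listing authentication contexts:

```shell
# Quick prerequisite checks before installing the runner (sketch).
kubectl get nodes            # a valid node list means cluster access works
codefresh auth get-contexts  # confirm the CLI is authenticated against your account
```

If both commands succeed, the wizard in the next step should be able to talk to your cluster and your Codefresh account.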
The Codefresh Runner has multiple installation methods, but the simplest one is the command-line wizard. To start the wizard, execute:
codefresh runner init
You can also inspect the status of the runner components using standard
Kubernetes tools. By default, all components of the runner are in the
“codefresh” namespace.
kostis@ubuntu18-desktop:~$ kubectl get pods -n codefresh
NAME                                              READY   STATUS    RESTARTS   AGE
dind-5ef0a18bd2f8f459f2d32c78                     1/1     Running   0          37s
dind-lv-monitor-runner-7pnf2                      1/1     Running   0          4d22h
dind-lv-monitor-runner-lf746                      1/1     Running   0          4d22h
dind-lv-monitor-runner-xc8lp                      1/1     Running   0          4d22h
dind-volume-provisioner-runner-64994bbb84-fsr6x   1/1     Running   0          4d22h
engine-5ef0a18bd2f8f459f2d32c78                   1/1     Running   0          37s
monitor-697dd5db6f-72s6g                          1/1     Running   0          4d22h
runner-5d549f8bc5-pf9mw                           1/1     Running   0          4d22h
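Beyond listing pods, a few other standard kubectl commands are useful for troubleshooting. This sketch assumes the default "codefresh" namespace and a Deployment named "runner" (matching the runner-* pod above); adjust the names if your installation differs:

```shell
# Inspect the runner components in more depth (sketch, names are assumptions).
kubectl get all -n codefresh                 # overview of all runner resources
kubectl logs -n codefresh deploy/runner      # logs of the runner agent
kubectl describe pod -n codefresh runner-5d549f8bc5-pf9mw   # events, if a pod is not Ready
```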
version: '1.0'
stages:
  - prepare
  - build
  - deploy
steps:
  clone:
    title: Cloning main repository...
    stage: prepare
    type: git-clone
    arguments:
      repo: codefresh-contrib/helm-sample-app
      revision: master
      git: github
  build:
    title: Building Docker Image
    stage: build
    type: build
    working_directory: ./helm-sample-app
    arguments:
      image_name: helm-sample-app-go
      tag: multi-stage
      dockerfile: Dockerfile
  deploy:
    title: Deploying Helm Chart
    type: helm
    stage: deploy
    working_directory: ./helm-sample-app
    arguments:
      action: install
      chart_name: charts/helm-example
      release_name: my-go-chart-prod
      helm_version: 3.0.2
      kube_context: my-demo-k8s-cluster
      custom_values:
        - 'buildID=${{CF_BUILD_ID}}'
        - 'image_pullPolicy=Always'
        - 'image_tag=multi-stage'
        - 'replicaCount=3'
        - 'image_pullSecret=codefresh-generated-r.cfcr.io-cfcr-default'
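To get a feel for how much the helm step abstracts, here is a rough manual equivalent of the deploy step above. This is a sketch, assuming the underscore keys in custom_values map to nested chart values (e.g. image_tag to image.tag) and that your local kubeconfig has a my-demo-k8s-cluster context:

```shell
# Hypothetical manual equivalent of the "deploy" step (sketch, not the exact internals).
helm upgrade --install my-go-chart-prod ./charts/helm-example \
  --kube-context my-demo-k8s-cluster \
  --set buildID="$CF_BUILD_ID" \
  --set image.pullPolicy=Always \
  --set image.tag=multi-stage \
  --set replicaCount=3 \
  --set image.pullSecret=codefresh-generated-r.cfcr.io-cfcr-default
```

In the pipeline, none of this bookkeeping (context selection, release naming, value overrides) has to be spelled out.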
You can find a more detailed explanation in the .
You should also check your pipeline settings and make sure that it is assigned to the cluster that has your runner (because it is possible to have multiple runners in multiple clusters).
One of the advantages of the Codefresh runner is easy access to your internal services without compromising security. The runner can connect to Git repositories, Docker registries, Kubernetes clusters, and other resources that are also behind the firewall.
To enable these services to be used in pipelines you need to visit the integrations screen at .
version: '1.0'
steps:
  BuildMyImage:
    title: Building My Docker image
    type: build
    image_name: my-app-image
    dockerfile: my-custom.Dockerfile
    tag: 1.0.1
    registry: dockerhub
This build step will build a custom Dockerfile, tag the image as my-app-image:1.0.1, and then push the image to Docker Hub. Notice the complete lack of Docker login/tag/push commands. They are all abstracted away, and your pipeline stays as simple as possible.
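For comparison, this is a sketch of the manual workflow that the single build step replaces. The DOCKERHUB_USER and DOCKERHUB_TOKEN variables are hypothetical placeholders for your registry credentials, which the step reads from the integration instead:

```shell
# Manual Docker workflow that the build step abstracts away (sketch).
docker login -u "$DOCKERHUB_USER" --password-stdin <<< "$DOCKERHUB_TOKEN"
docker build -f my-custom.Dockerfile -t my-app-image:1.0.1 .
docker tag my-app-image:1.0.1 "$DOCKERHUB_USER/my-app-image:1.0.1"
docker push "$DOCKERHUB_USER/my-app-image:1.0.1"
```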
You can follow the same pattern with the other integrations (e.g. Kubernetes clusters and Helm charts). Notice that the Kubernetes cluster that is hosting the runner is also available by name so you can deploy applications to it in a declarative way.
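As a sketch of that pattern, a deploy step targeting the runner's own cluster could look like the following. The cluster name my-runner-cluster and the service name my-app are placeholders; use the names shown in your own integrations screen:

```yaml
deploy_to_internal_cluster:
  title: Deploying to the cluster hosting the runner
  type: deploy
  kind: kubernetes
  arguments:
    cluster: my-runner-cluster   # placeholder: cluster name from the integrations screen
    namespace: default
    service: my-app              # placeholder: an existing Kubernetes service
    candidate:
      image: my-app-image:1.0.1
      registry: dockerhub
```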
For more information on YAML pipelines, check the .
Enjoy!
Cover photo by Unsplash at
Previously published at