GitHub actions and workflows are great. We use them to deploy a web app to different cluster environments and I want to show you how we did it. Hopefully, this helps to simplify your deployment process as well. I’ve also written a companion article that describes our GitHub workflows for continuous integration.
Here, we’ll focus on our continuous deployment process.
Workflow Overview
Our continuous deployment workflow has four jobs: it includes the same jobs that we had in our continuous integration workflow, plus two extra ones. As mentioned in the preceding article, we set up these workflows while building a web app for a major property tech startup. Releases are tagged differently depending on the target environment:
Development: When we’re in the process of developing, we push everything to a branch and create a release from that branch. We set the prerelease flag and give it a descriptive tag with the prefix “rc-” (release candidate).
Staging: When we’re ready to push to staging for proper QA, we merge to master and create another release. We select the master branch but keep the prerelease flag. We add a tag that indicates the intended version, but we keep the prefix “rc-”.
Production: When we deploy to production, we add a tag with the prefix “r-”, and the “pre-release” flag is no longer selected.
The “r-” and “rc-” tag prefixes enable us to easily distinguish between “real” releases and release candidates when reviewing the release history. As you’ll soon see, we automatically validate tag prefixes when deploying to a production cluster.
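For illustration, the three kinds of release could also be created from the command line with the GitHub CLI. We normally create them through the GitHub UI, and the tag names and branch below are just examples:
# Hypothetical examples; in practice we create releases through the GitHub UI
gh release create rc-3 --prerelease --target XYZ-123_add_special_field_todo_magic  # development release candidate
gh release create rc-4 --prerelease --target master                                # staging release candidate
gh release create r-3 --target master                                              # production release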
Define the triggering event
We wanted our deployment workflow to kick in whenever someone created a release in GitHub, so we updated our workflow file as follows.
name: CD
on:
release:
types: [created]
Define Jobs
As mentioned in my introduction, we wanted to validate the GitHub tags, test the app, take a snapshot of it, and push it as a Docker image to our container registry (just in case we needed to roll back to an earlier iteration of the app). Finally, we wanted to deploy the image to a Kubernetes cluster. For this purpose, we defined four jobs: one to validate the release tag, one to run the tests, one to build and publish the Docker image, and one to deploy that image to a cluster.
Creating a custom action
To check that people are tagging releases correctly, we created a custom action. GitHub actions are essentially small predefined scripts that execute one specific task. There are plenty of user-contributed actions on the GitHub Marketplace, but in this case, we needed to create our own.
GitHub supports two types of action: one that runs as JavaScript, and one that runs in a Docker container. We set up one that runs in a Docker container since that’s what we’re more familiar with. Our action lives in its own private repo with the following file structure. The most important files are the action metadata (“action.yaml”) and the shell script (“entrypoint.sh”).
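The listing itself isn’t reproduced here, but based on the files referenced in the rest of this article, the action directory looks roughly like this (a sketch; the real repo may contain more files):
validate-tag-action/
├── Dockerfile
├── action.yaml
└── entrypoint.sh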
The action.yaml file defines the metadata for the action, according to GitHub’s metadata syntax for actions.
name: 'Validate tags'
author: 'Hugobert Humperdinck'
description: 'Validate release/pre-release tags'
inputs:
prerelease:
description: 'Tag is prerelease'
required: true
runs:
using: 'docker'
image: 'Dockerfile'
args:
- ${{ inputs.prerelease }}
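The metadata points at a Dockerfile in the same directory, which the article doesn’t show. A minimal sketch could look like this; the base image and installed packages are our assumptions (the script below only needs bash and git):
# Sketch only: base image and packages are assumptions
FROM alpine:3.12
RUN apk add --no-cache bash git
COPY entrypoint.sh /entrypoint.sh
RUN chmod +x /entrypoint.sh
ENTRYPOINT ["/entrypoint.sh"]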
The entrypoint.sh is the shell script to run in the Docker container. It includes all of our validation logic.
#!/bin/bash
set -e
set -o pipefail
echo "Start validation $1"
BRANCH=$(git branch -r --contains ${GITHUB_SHA} | grep "")
RELEASE_VERSION=$(echo ${GITHUB_REF} | sed -e "s/refs\/tags\///g" | sed -e "s/\//-/g")
MASTER_BRANCH_NAME='origin/master'
RELEASE_PREFIX='r-'
if [[ "${INPUT_PRERELEASE}" != true ]] && [[ "$BRANCH" == *"$MASTER_BRANCH_NAME"* ]] && [[ "$RELEASE_VERSION" == "$RELEASE_PREFIX"* ]]; then
echo "Release tag validation succeeded!"
exit 0
elif [[ "${INPUT_PRERELEASE}" == true ]]; then
echo "Pre-Release tag validation succeeded!"
exit 0
else
echo "Tag validation failed!"
exit 1
fi
Set environment variables
We’re using the built-in GitHub environment variables GITHUB_REF and GITHUB_SHA to determine two variables: BRANCH (the remote branches that contain the commit being released, derived from GITHUB_SHA) and RELEASE_VERSION (the release tag, extracted from the end of GITHUB_REF).
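To make that concrete, here’s roughly how the derivation behaves with a hypothetical value (the sed expression is the same one used in entrypoint.sh):
# Hypothetical value, for illustration only
GITHUB_REF="refs/tags/rc-7"
RELEASE_VERSION=$(echo ${GITHUB_REF} | sed -e "s/refs\/tags\///g" | sed -e "s/\//-/g")
echo ${RELEASE_VERSION}   # prints "rc-7"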
Validate Release Tag
We want the tags to be formatted a certain way, but we only care about tags for production releases.
jobs:
validate-release-name:
name: Validate release name
runs-on: 'ubuntu-latest'
steps:
- name: Checkout working branch
uses: actions/checkout@v2
- name: Checkout private actions repo
uses: actions/checkout@v2
with:
repository: acme/private-actions
token: ${{ secrets.GitHub_PAT }} # `GitHub_PAT` is a secret that contains your PAT
path: private-actions
Then, we call the “checkout” action again to check out another private repo — the one that contains our action.
- name: Validate release tag
uses: private-actions/validate-tag-action
with:
prerelease: ${{ github.event.release.prerelease }}
Test the app
These are the same tests that we use in our continuous integration workflow, which I have already covered in the companion article.
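Purely for context, a stripped-down version of that job might look something like this; the job name and commands are placeholders, and the real job is described in the companion article:
tests:
  name: Run tests
  runs-on: 'ubuntu-latest'
  steps:
    - name: Checkout working branch
      uses: actions/checkout@v2
    - name: Run test suite
      # Placeholder command; the real job runs our actual test tooling
      run: make test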
Publish the Docker image
Again, I already covered this job in my companion article, but there is one small difference this time around. We add the tag_names parameter, which changes how we tag the Docker image.
- name: Publish Docker Image
uses: elgohr/Publish-Docker-Github-Action@master
env:
SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
with:
name: ${{ env.DOCKER_IMAGE }}
username: ${{ steps.gcloud.outputs.username }}
password: ${{ steps.gcloud.outputs.password }}
registry: ${{ env.DOCKER_REGISTRY }}
tag_names: true
buildargs: SSH_PRIVATE_KEY
Instead of sticking with the default behaviour, which is to tag the image with the originating branch, we pass the value of the GitHub release tag as our Docker tag. So in the Docker registry, released images look something like this:
Name: 8cd6851d850b Tag: r-3
Again, this enables us to visually distinguish released Docker images from ones that were pushed as part of the continuous integration workflow. As a reminder, pre-release images are tagged with the branch name, like this:
Name: 8cd6851d850b Tag: XYZ-123_add_special_field_todo_magic
Set up the environment
We updated our workflow file as follows:
deployment:
name: Deploy backend to cluster
runs-on: 'ubuntu-latest'
needs: [docker-image]
steps:
- name: Checkout working branch
uses: actions/checkout@v1
- name: Set Release version
run: |
echo ::set-env name=RELEASE_VERSION::$(echo ${GITHUB_REF} |
sed -e "s/refs\/tags\///g" | sed -e "s/\//-/g")
- name: Cluster env for production
if: "!github.event.release.prerelease"
run: |
echo ::set-env name=CLUSTER_ENV::prod
- name: Cluster env for staging/dev
if: "github.event.release.prerelease"
run: |
BRANCH=$(git branch -r --contains ${GITHUB_SHA} | grep "")
MASTER_BRANCH_NAME='origin/master'
if [[ "$BRANCH" == *"$MASTER_BRANCH_NAME"* ]]; then
echo ::set-env name=CLUSTER_ENV::stag
else
echo ::set-env name=CLUSTER_ENV::dev
fi
- name: Set Cluster credentials
run: |
echo ::set-env name=CLUSTER_NAME::acme-gke-${{ env.CLUSTER_ENV }}
echo ::set-env name=CLUSTER_ZONE::europe-west3-a
echo ::set-env name=PROJECT_NAME::acme-555555
We check out the working branch again (you have to do this for each job), then set RELEASE_VERSION by extracting the tag name from the end of the GITHUB_REF (for example, “refs/tags/r-3” becomes “r-3”).
Then we need to set all the variables that we’ll use for the “gcloud” command in a subsequent step:
CLUSTER_ENV: We have some simple logic for defining it: “prod” when the release isn’t flagged as a prerelease, “stag” when it’s a prerelease cut from master, and “dev” when it’s a prerelease from any other branch.
CLUSTER_NAME: We use the CLUSTER_ENV variable to set the suffix for the full name. So it’s either “acme-gke-prod”, “acme-gke-stag”, or “acme-gke-dev”.
CLUSTER_ZONE and PROJECT_NAME: The zone and project name are hardcoded.
Install the necessary tools
Next, we need to install the Kubernetes command-line tool and Helm, which makes it easier to install Kubernetes applications.
- name: Install kubectl
run: |
sudo apt-get install kubectl
- name: Install helm
run: |
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
Deploy the image to a cluster
Finally, we use all the variables that we defined previously to run the gcloud CLI.
- name: Deploy Release on cluster
env:
GCLOUD_KEY: ${{ secrets.GCLOUD_KEY }}
run: |
echo "$GCLOUD_KEY" | base64 --decode > ${HOME}/gcloud.json
gcloud auth activate-service-account --key-file=${HOME}/gcloud.json
gcloud auth configure-docker
gcloud container clusters get-credentials \
${{ env.CLUSTER_NAME }} --zone \
${{ env.CLUSTER_ZONE }} --project ${{ env.PROJECT_NAME }}
# install/upgrade helm chart
helm upgrade --install backend ./deploy/helm/backend \
--values ./deploy/helm/backend/env.values.${{ env.CLUSTER_ENV }}.yaml \
--set env_values.image_version=${{ env.RELEASE_VERSION }}
The --values option defines the yaml file that contains the environment variables. For a production release, it’s “env.values.prod.yaml”.
The --set option overrides a specific variable in the “env.values” yaml file, namely “image_version”. In the yaml file, it’s set to “latest”, but we want it to use our release version, such as “r-3”.
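The values file itself isn’t shown in this article. A minimal sketch of what “env.values.prod.yaml” might contain looks like this; only image_version is confirmed by the helm command above, and everything else about the file is an assumption:
# Sketch of env.values.prod.yaml (only image_version is confirmed by the helm command above)
env_values:
  image_version: latest
During deployment, the --set flag replaces image_version with the release tag, so the production cluster pulls the image tagged, say, “r-3” instead of “latest”.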
As I mentioned in the first part of this two-part series, it was very easy to set these workflows up. The limitation around actions in private repos was a minor irritation, but GitHub is continually improving its built-in features, and this was soon addressed. Plus, there is a growing ecosystem of user-contributed actions for every kind of task. Unless a customer wants us to use something else besides GitHub, we’ll be sticking with GitHub CI/CD workflows for future projects.
A Disclosure Note On The Author and Project A
Merlin blogs about developer innovation and new technologies at Project A, a venture capital investor focusing on early-stage startups. Project A provides operational support to its portfolio companies, including developer expertise. As part of the IT team, he covers the highlights of Project A’s work helping startups to evolve into thriving success stories.