Workflow Overview
Our continuous integration workflow has two jobs: one that lints the code and runs our test suite, and one that builds and publishes our Docker image once the tests pass.

Project Overview
These instructions will make more sense if you understand how our project is set up. It's a web app for a major property tech startup. It consists of four sub-applications which interact with each other as well as several external services.

[Image: A basic overview of the platform architecture]
Aside from the front end, all apps need to interact with Google Pub/Sub message queues (which explains why we use a Google Pub/Sub emulator in our workflow). For example, when someone changes customer contact details in the frontend, events are published to the queue and consumed by the other services that need to know about the change.

Create the workflow file

In your repo, go to the Actions tab and click New Workflow. In the following screen, click "set up a workflow yourself". This creates a commented workflow file which we can then edit.
We name the workflow "CI" and configure it to run on every push, to any branch:

name: CI
on:
  push:
    branches:
      - '**'
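The '**' glob matches every branch. As a variation (not what we use), you could restrict the workflow to pushes on the main branch plus pull requests targeting it:

name: CI
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main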
Define the environment and service containers
To run our tests, we install most of the dependencies directly on the virtual machine. But there are a few components that would be excessively complex to set up if we were to install them directly, so we run them as Docker service containers instead: the PostgreSQL database and the Google Pub/Sub emulator.
jobs:
  tests:                      # Job ID
    name: Tests               # Name of the job displayed on GitHub
    runs-on: ubuntu-latest    # OS for the GitHub-hosted virtual machine
    services:                 # Docker containers to use for this job
      backend-db:             # Container ID for our database container
        image: postgres:12.1  # Image to pull from Docker Hub
        env:
          POSTGRES_USER: user
          POSTGRES_PASSWORD: password
          POSTGRES_DB: backend_db
        ports:
          - 5432:5432         # TCP port to expose on Docker container and host environment
      backend-gcloud-pubsub:  # Container ID for our Pub/Sub container
        image: 'knarz/pubsub-emulator'  # Image to pull from Docker Hub
        ports:
          - '8085:8085'       # TCP port to expose on Docker container and host environment
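As an optional refinement that isn't in our original file, the services syntax also accepts an options key whose flags are passed straight to docker create. You can use it to health-check the database container so the job only proceeds once Postgres is actually accepting connections; a sketch for the backend-db service:

      backend-db:
        image: postgres:12.1
        env:
          POSTGRES_USER: user
          POSTGRES_PASSWORD: password
          POSTGRES_DB: backend_db
        ports:
          - 5432:5432
        options: >-  # flags forwarded to `docker create`
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5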
Define the steps and actions
Next, we start defining the steps in our job. Many of the steps use what GitHub calls "actions", which are essentially small predefined scripts that each execute one specific task. GitHub provides its own built-in actions, and there's a marketplace for actions that other users have contributed. You can also build your own actions; I'll describe how we did this in a companion article about our continuous deployment workflow. But for this guide, we just use actions from the GitHub marketplace.

1) Check out the working branch and set up SSH credentials
We need to check out our working branch on the virtual machine, but instead of running git checkout, we just use GitHub's built-in checkout action.

Next, we set up the SSH credentials on the virtual machine. In this step, we read the SSH key from a repository secret.

    steps:
      - name: Checkout working branch
        uses: actions/checkout@v1
      - name: Setup ssh
        run: |
          mkdir ~/.ssh/
          echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
          chmod 600 ~/.ssh/id_rsa
          touch ~/.ssh/known_hosts
          ssh-keyscan github.com >> ~/.ssh/known_hosts
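If you'd rather not write the key files by hand, the community-maintained webfactory/ssh-agent action loads the key into an ssh-agent for you; a sketch, assuming the same SSH_PRIVATE_KEY secret:

      - name: Setup ssh (alternative)
        uses: webfactory/ssh-agent@v0.5.4
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      # Note: known_hosts still needs to be populated separately,
      # e.g. with the ssh-keyscan line from the step above.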
2) Set up Python and install the PostgreSQL client
Next, we use another built-in action, setup-python, to set up our required Python version. Then we install the PostgreSQL client. We have our database running in a Docker container, but we need a client to access it.

      - name: Set up Python 3.8
        uses: actions/setup-python@v1
        with:
          python-version: 3.8
      - name: Install PostgreSQL 11 client
        run: |
          sudo apt-get -yqq install libpq-dev
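If you ever need to debug the connection to the database container, a throwaway step like this (hypothetical, not part of our workflow) confirms that Postgres is reachable from the runner:

      - name: Check database connectivity (debug only)
        run: |
          sudo apt-get -yqq install postgresql-client
          PGPASSWORD=password psql -h localhost -p 5432 -U user -d backend_db -c 'SELECT 1;'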
3) Set up caching
We cache the dependencies that we have installed from previous runs so that we don't have to keep installing them over again. We set up caching using the cache action. It checks any file where dependencies are defined (such as Python's "requirements.txt") to see if it has been updated. If not, it loads the dependencies from the cache.

However, we don't use "requirements.txt". Instead, we use a tool called Poetry to manage Python dependencies. With this tool, you define your dependencies in a file called "pyproject.toml", which we've committed to our repo. When you initialize or update a Poetry project, Poetry automatically generates a "poetry.lock" file based on the contents of the .toml file.
So we configure the cache action to check for changes to our poetry.lock file instead. It does this by comparing the file hashes for the cached and incoming versions of the lock file.
      - name: Cache Poetry
        uses: actions/cache@v1
        id: cache
        with:
          path: ~/.cache/pip
          key: ${{ runner.os }}-pip-${{ hashFiles('**/poetry.lock') }}
          restore-keys: |
            ${{ runner.os }}-pip-
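A side note on the id: cache line: the cache action exposes a cache-hit output, so later steps can react to a warm cache. Skipping installation entirely only makes sense if you cache the environment itself rather than pip's download cache as we do; a sketch of that variant:

      - name: Cache virtualenv
        uses: actions/cache@v1
        id: venv-cache
        with:
          path: .venv
          key: ${{ runner.os }}-venv-${{ hashFiles('**/poetry.lock') }}
      - name: Install dependencies on cache miss
        if: steps.venv-cache.outputs.cache-hit != 'true'
        run: |
          poetry config virtualenvs.in-project true
          poetry install --no-interaction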
4) Install dependencies
Next, we upgrade pip and install Poetry. Once Poetry is installed, we use it to install the dependencies listed in our poetry.lock file. These dependencies include the linting and testing tools which we'll use in the following steps. We also tell Poetry not to create a virtual environment, so the packages land directly in the runner's Python and are available to all following steps.

      - name: Install dependencies, config poetry virtualenv
        run: |
          python -m pip install --upgrade pip
          pip install poetry
          poetry config virtualenvs.create false
          poetry install --no-interaction
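One refinement worth considering (we don't do this): pin the Poetry version, so a new Poetry release can't silently change CI behaviour between runs. The version below is only an example:

      - name: Install dependencies, config poetry virtualenv
        run: |
          python -m pip install --upgrade pip
          # Pin to a known-good version instead of whatever happens to be latest
          pip install "poetry==1.0.10"
          poetry config virtualenvs.create false
          poetry install --no-interaction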
5) Lint the code
Next, we lint the Python code to make sure it adheres to our style guide. For this, we use the flake8 linter, which we run here.

      - name: Lint with flake8
        run: |
          # Stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
6) Run the tests
We're going to test for any issues with the localizations, so we'll need the gettext package, which we install in a preparation step. In the main test step, we first compile the localization messages, then run our tests with pytest.
We provide pytest with a whole bunch of environment variables, most of which have dummy values. Not all of the variables are used in the tests, but they need to be present when building the app container that we'll use for testing.

      - name: Install gettext for translations
        run: |
          sudo apt-get update && sudo apt-get install -y gettext
      - name: Test with pytest
        run: |
          python manage.py compilemessages
          pytest --verbose
        env:
          CLUSTER_ENV: test
          RUN_DEV: 0
          POSTGRES_DB: backend_db
          POSTGRES_USER: user
          POSTGRES_PASSWORD: password
          POSTGRES_DB_PORT: 5432
          POSTGRES_DB_HOST: localhost
          SECRET_KEY: thisisasecret
          SENDGRID_API_KEY: 'test-key'
          PUBSUB_EMULATOR_HOST: localhost:8085
          GCLOUD_PUBSUB_PROJECT_ID: 'test-project'
          HERBIE_TOKEN: 'random-key'
          HERBIE_HOST: '//localhost:8000'
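If the Pub/Sub emulator were ever slow to start, you could add a guard step before the pytest step that polls it until it responds. A sketch, assuming the emulator answers plain HTTP on port 8085:

      - name: Wait for Pub/Sub emulator
        run: |
          timeout 30 bash -c 'until curl -sf http://localhost:8085; do sleep 1; done'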
Define the run conditions
Now we define the second job, which builds and publishes our Docker image. As with last time, we set the job ID ("docker-image"), give the job a name, and define the host operating system. We also specify that this job requires the previous job to have completed successfully, using the "needs" keyword.

  docker-image:
    name: Build & Publish Docker Image
    needs: [tests]
    runs-on: ubuntu-latest
Define the job steps
1) Check out the branch and set environment variables
The first step is the "check out branch" action which I described in the first job. We have to do this again because data is not persisted from job to job (unless you explicitly configure artifacts). After the checkout step, we define the environment variables we need:

    steps:
      - name: Checkout working branch
        uses: actions/checkout@v1
      - name: Set Docker Registry
        run: echo ::set-env name=DOCKER_REGISTRY::eu.gcr.io
      - name: Set Docker Image
        run: echo ::set-env name=DOCKER_IMAGE::${{ env.DOCKER_REGISTRY }}/acme-555555/backend
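A caveat for anyone reading this later: GitHub has since deprecated the ::set-env workflow command for security reasons, and newer runners disable it by default. The modern equivalent appends to the GITHUB_ENV file:

      - name: Set Docker Registry
        run: echo "DOCKER_REGISTRY=eu.gcr.io" >> $GITHUB_ENV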
2) Log in to the Google Container Registry, then build and push the image
For this task, we use two user-contributed actions: one to log in to the registry and one to build and push the image. GitHub's virtual machines include a lot of preinstalled software, such as the Google Cloud SDK, which our "login" and "push" actions use.

To log in to Google Cloud, we use another secret, "secrets.GCLOUD_KEY", which is the service account key for our Google Cloud project. Again, this secret is stored in the repository settings.
When the login step completes, the action outputs the username and password for the container registry, which we then use in the "build and push" step.

      - name: Login to gcloud registry
        id: gcloud
        uses: elgohr/gcloud-login-action@master
        with:
          account_key: ${{ secrets.GCLOUD_KEY }}
      - name: Publish Docker Image
        uses: elgohr/Publish-Docker-Github-Action@master
        env:
          SSH_PRIVATE_KEY: ${{ secrets.SSH_PRIVATE_KEY }}
        with:
          name: ${{ env.DOCKER_IMAGE }}
          username: ${{ steps.gcloud.outputs.username }}
          password: ${{ steps.gcloud.outputs.password }}
          registry: ${{ env.DOCKER_REGISTRY }}
          buildargs: SSH_PRIVATE_KEY
The publish action names the image according to our DOCKER_IMAGE variable and, by default, tags it with the name of the branch that triggered the workflow. The result looks something like this:

Name: 8cd6851d850b Tag: XYZ-123_add_special_field_todo_magic
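If you preferred commit-based tags, you could bypass the action and tag manually in a plain run step; a rough sketch (not what we do), reusing the credentials from the login step:

      - name: Build and push with a commit SHA tag
        run: |
          # Authenticate docker with the registry credentials from the login step
          echo "${{ steps.gcloud.outputs.password }}" | docker login -u "${{ steps.gcloud.outputs.username }}" --password-stdin ${DOCKER_REGISTRY}
          # Tag with the first 8 characters of the commit SHA
          docker build -t ${DOCKER_IMAGE}:${GITHUB_SHA::8} .
          docker push ${DOCKER_IMAGE}:${GITHUB_SHA::8}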
The action provides other ways to tag the image, too; in my article about our deployment workflow, I'll show you a different method. And that is the end of the workflow. If you want to see the workflow file in its entirety, check out the full file in our repo.

A note about container storage
One day in the distant future, our container registry is going to get quite large. If we're storing a 333 MB image on every push, we could reach 1 GB after just three pushes. A GB of container storage costs $0.026 per month, so it's not exactly going to break the bank. Nevertheless, whether you're using this or another container registry, you might want to manage your storage capacity and clean up older images. Luckily, there are scripts out there to do this; the sketch below shows the general idea.
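A minimal sketch of such a cleanup script for Google Container Registry, assuming gcloud is already authenticated; the image path is our example one, so substitute your own:

#!/usr/bin/env bash
# Delete untagged images from a GCR repository.
IMAGE="eu.gcr.io/acme-555555/backend"

# List the digests of all untagged images, then delete each one.
gcloud container images list-tags "$IMAGE" \
  --filter='-tags:*' --format='get(digest)' |
while read -r digest; do
  gcloud container images delete "$IMAGE@$digest" --quiet
done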
If you found this walkthrough useful, check out my companion article about our deployment workflow.