
CI/CD Pipelines on GitLab and AWS: Speed Up Your Deployments

by Daniel Hoffmann, September 5th, 2021

Too Long; Didn't Read

The GitLab CI/CD pipeline is free to use for a limited number of minutes per month. The project is a pretty simple web application: it does not use server-side logic or a database layer, just pure frontend code. The stack looks like this: S3 to store the actual code, CloudFront to make things faster and provide a free SSL certificate, and Route 53 as the domain name registrar. To get started, I defined two stages for the pipeline. I will just cover my use case in this article.


Introduction

Recently, I decided to make one of my side projects public. After a few initial versions, I got bored with repeating the same manual steps again and again, so I realized I needed a CI/CD pipeline to do the work instead of me. But where should I begin? I didn't want to run a dedicated Jenkins instance for this job; it would be overkill. Luckily, my code is stored in GitLab, so GitLab CI/CD was the obvious choice, and it is free to use for a limited number of minutes per month.

Configuring the AWS stack

The project is a pretty simple web application. It does not use any server-side logic or database layer, just pure frontend code. To keep it simple, I used an S3 bucket to store and serve the application's code. S3 can be slow by itself and can be expensive for retrieving data at a larger scale, so I used CloudFront as a CDN service. It can also provide a free SSL certificate. To register my domain, I simply used Route 53. It is simple and works with CloudFront like a charm.


The stack looks like this:
  • S3 - to store the actual code
  • CloudFront - to make things faster
  • R53 - as a domain name registrar
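
If you want to stand up the same stack from the command line, the S3 part might look like the sketch below. This is a minimal sketch with the AWS CLI, assuming you already have credentials configured; the bucket name and region are hypothetical placeholders, and the CloudFront distribution and Route 53 records are easier to click together in the console.

# Hypothetical bucket name and region - substitute your own
aws s3 mb s3://my-frontend-bucket --region eu-central-1

# Enable static website hosting; SPAs typically route errors back to index.html
aws s3 website s3://my-frontend-bucket \
  --index-document index.html \
  --error-document index.html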


To keep this article brief, I am only going to cover the GitLab CI configuration in depth.

Configuring GitLab CI

You can enable GitLab CI/CD simply by pushing a .gitlab-ci.yml file to the repository's root. I will just cover my use case in this article; for further reference, please read the official GitLab CI/CD documentation.

To get started, I defined two stages for the pipeline.


stages:
  - build
  - deploy

variables:
  # filled in later - see the Prerequisites section below

create_dist:
  stage: build
  image: node:12.20.2-alpine3.10
  allow_failure: false
  script: |
    echo "Hello build"
  only:
    refs:
      - master

deploy_to_storage:
  stage: deploy
  image: python:alpine
  when: manual
  allow_failure: false
  script: |
    echo "Hello deploy"
  only:
    refs:
      - master


The create_dist job will be triggered when a commit is pushed to the master branch. Since the project uses NPM as a package manager, I am using node:12 with Alpine as the base image for the job.

The other job, deploy_to_storage, will also only run on the master branch, but it needs a manual trigger to avoid unwanted deployments. It uses python:alpine as the base image because I want to utilize the AWS CLI pip package.

Prerequisites

To access AWS from GitLab CI, you have to create an IAM user with the programmatic access type and the required permissions. I added the AmazonS3FullAccess and CloudFrontFullAccess policies to my IAM user. After the user is created, you will receive an access key ID and a secret. You should save them in GitLab under the Settings/CI/CD/Variables menu. You then have to declare these variables in the variables section of your build script.
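
For reference, creating that user can also be scripted. This is a minimal sketch with the AWS CLI, assuming you run it under an admin profile; the user name gitlab-ci-deployer is a hypothetical placeholder.

# Hypothetical user name - pick your own
aws iam create-user --user-name gitlab-ci-deployer

# Attach the two managed policies mentioned above
aws iam attach-user-policy --user-name gitlab-ci-deployer \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-user-policy --user-name gitlab-ci-deployer \
  --policy-arn arn:aws:iam::aws:policy/CloudFrontFullAccess

# Prints the access key ID and secret to store in GitLab's CI/CD variables
aws iam create-access-key --user-name gitlab-ci-deployer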

To create a GITLAB_ACCESS_TOKEN, you have to add an Access Token to your profile in Settings/Access Tokens. After that is done, simply add it to the variables like the AWS access keys.
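
If you prefer scripting over clicking, the same CI/CD variables can also be created through GitLab's project variables API. A sketch, assuming a personal access token with the api scope; the token and project ID placeholders are yours to substitute:

curl --request POST \
  --header "PRIVATE-TOKEN: <your-token>" \
  "https://gitlab.com/api/v4/projects/<project-id>/variables" \
  --form "key=MY_AWS_ACCESS_KEY_ID" \
  --form "value=<the-access-key-id>" \
  --form "masked=true"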

variables:
  APP_NAME: ${CI_PROJECT_NAME}
  S3_BUCKET: ${AWS_BUCKET_NAME}
  CDN_DISTRIBUTION_ID: ${CLOUDFRONT_DIST_ID}
  AWS_ID: ${MY_AWS_ID}
  AWS_ACCESS_KEY_ID: ${MY_AWS_ACCESS_KEY_ID}
  AWS_SECRET_ACCESS_KEY: ${MY_AWS_SECRET_ACCESS_KEY}
  AWS_REGION: ${AWS_REGION_NAME}

Let’s build it

The build will be triggered inside the create_dist job; the actual build script looks like the one below.

First, I set up the environment dependencies. After that, I create a git tag with the version number from the package.json, and then I install the npm packages. As the next step, I simply build the distribution with ng build --prod.

After that, I create a new tarball with the build output in it and upload it to GitLab's package registry as a generic package. With this step, I will have a nice collection of previous versions.

create_dist:
  stage: build
  image: node:12.20.2-alpine3.10
  allow_failure: false
  script: |
    echo "Installing curl"
    apk --no-cache add curl

    echo "Installing JQ"
    apk --no-cache add jq

    echo "Installing git"
    apk --no-cache add git

    echo "Creating version"
    APP_VERSION=$(cat ./package.json | jq -r '.version')
    echo "$APP_VERSION"

    echo "Tagging build"
    git config user.email "${GITLAB_USER_EMAIL}"
    git config user.name "${GITLAB_USER_NAME}"
    git remote add api-origin https://oauth2:${GITLAB_ACCESS_TOKEN}@gitlab.com/${CI_PROJECT_PATH}
    git tag -a "$APP_VERSION" -m "Version $APP_VERSION"
    git push api-origin "$APP_VERSION"

    echo "Installing dependencies"
    npm install -g @angular/[email protected]

    echo "Installing npm modules"
    npm install

    echo "Building distribution"
    ng build --prod

    echo "Creating artifact..."
    tar -vzcf ${APP_NAME}_${APP_VERSION}.tar.gz dist
    curl --header "JOB-TOKEN: $CI_JOB_TOKEN" \
      --upload-file ${APP_NAME}_${APP_VERSION}.tar.gz \
      "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/frontend/${APP_VERSION}/${APP_NAME}_${APP_VERSION}.tar.gz"
  only:
    refs:
      - master
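
If you want to double-check that the upload worked, you can list the project's packages through the same API. A sketch, assuming a token with at least the read_api scope:

curl --header "PRIVATE-TOKEN: <your-token>" \
  "https://gitlab.com/api/v4/projects/<project-id>/packages"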

Deploy to S3

For this stage, I am using python:alpine as the base image for the job because I want to utilize the AWS CLI. As a first step, I install it, and after that, I download the previously created package from GitLab's package registry. To start with a clean sheet, I delete the previous version from the S3 bucket before uploading the new one. After the upload, I have to invalidate the CloudFront distribution; without this step, caching issues can occur.

deploy_to_storage:
  stage: deploy
  image: python:alpine
  when: manual
  allow_failure: false
  script: |
    pip install awscli

    echo "Unpacking artifact"
    APP_VERSION=$(cat ./package.json |  python3 -c "import sys, json; print(json.load(sys.stdin)['version'])")
    wget --header="JOB-TOKEN: $CI_JOB_TOKEN" \
      ${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/generic/frontend/${APP_VERSION}/${APP_NAME}_${APP_VERSION}.tar.gz
    mkdir ./package
    tar -vxzf  ${APP_NAME}_${APP_VERSION}.tar.gz  --directory ./package

    echo "Delete previous version"
    aws s3 rm s3://${S3_BUCKET} --recursive --region ${AWS_REGION}

    echo "Uploading verison to S3..."
    aws s3 cp ./package/dist/dev-tools s3://${S3_BUCKET}/ --recursive --region ${AWS_REGION}

    echo "Creating CDN invalidation"
    aws cloudfront create-invalidation \
          --distribution-id ${CDN_DISTRIBUTION_ID} \
          --paths "/*"
  only:
    refs:
      - master
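
One optional refinement: create-invalidation returns immediately, while the invalidation itself takes a minute or two to propagate. If you want the job to wait until it finishes, the AWS CLI ships a waiter for this. A sketch, assuming you capture the invalidation ID from the create call:

INVALIDATION_ID=$(aws cloudfront create-invalidation \
  --distribution-id ${CDN_DISTRIBUTION_ID} \
  --paths "/*" \
  --query 'Invalidation.Id' --output text)

aws cloudfront wait invalidation-completed \
  --distribution-id ${CDN_DISTRIBUTION_ID} \
  --id ${INVALIDATION_ID}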

To sum up

As you can see, you can speed up your deployment process with these easy steps. I hope you found this useful and that it helps you speed up your own deployments. If you have any questions or suggestions, please leave a comment.


If you are interested, you can find the whole script here.

