Kubernetes is now the de facto standard for container orchestration. With more and more organizations adopting Kubernetes, it is essential that we get our fundamental ops infrastructure in place before any migration. This post will focus on pushing out new releases of the application to our Kubernetes cluster, i.e., Continuous Delivery.
Continuous Delivery Pipeline for Kubernetes
A sample of the trigger.properties file that drives the pipeline:
TAG=1.7-SNAPSHOT
ACTION=DEPLOY
Today, we will deploy a sample Nginx service which reads the app version from a pom.xml file and renders it in the browser. The application code and Dockerfile can be found . The script that updates index.html (this is essentially what the Jenkins job does) is shown below:
#!/bin/bash
# Author: Vaibhav Thakur
# Check whether the commit was pushed to master or not.
if_master=$(echo "$payload" | jq '.ref' | grep master)
if [ $? -eq 1 ]; then
  echo "Pipeline should not be triggered"
  exit 2
fi
# Get the tag from pom.xml
TAG=$(grep SNAPSHOT pom.xml | sed 's|[<,>,/,version ]||g')
echo "$TAG"
# Get the action from the commit message
ACTION=$(echo "$payload" | jq -r '.commits[0].message' | cut -d',' -f2)
# Update index.html with the new version
sed -i -- "s/VER/${TAG}/g" app/index.html
# Build and push the image to Docker Hub
docker build -t vaibhavthakur/nginx-demo:"$TAG" .
docker push vaibhavthakur/nginx-demo:"$TAG"
# Hand the tag and action over to Spinnaker via a properties file
echo TAG="${TAG}" > trigger.properties
echo ACTION="${ACTION}" >> trigger.properties
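The two extraction tricks in the script (the sed character-class hack for the pom.xml version, and the cut on the commit message) can be sanity-checked locally against literal strings. The sample inputs below are hypothetical stand-ins for the real pom.xml line and commit message:

```shell
#!/bin/bash
# Hypothetical version line as it would appear in pom.xml
pom_line='<version>1.7-SNAPSHOT</version>'
# Same sed trick as the Jenkins job: the character class strips < > / , the
# lowercase letters of "version", and spaces, leaving only the tag itself.
TAG=$(echo "$pom_line" | grep SNAPSHOT | sed 's|[<,>,/,version ]||g')
echo "$TAG"

# Hypothetical commit message; the action is the part after the comma.
msg='Bump version, DEPLOY'
ACTION=$(echo "$msg" | cut -d',' -f2 | tr -d ' ')
echo "$ACTION"
```

Note that cut leaves a leading space after the comma, which is why the snippet trims it with tr; keep that in mind if the ACTION value is later compared against "DEPLOY" or "PATCH" exactly.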
The Kubernetes manifests for the Namespace, the Deployment, and the Service are shown below (the Deployment uses the apps/v1 API, which requires an explicit selector):
apiVersion: v1
kind: Namespace
metadata:
  name: nginx
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: nginx
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-server
  template:
    metadata:
      annotations:
        prometheus.io/path: "/status/format/prometheus"
        prometheus.io/scrape: "true"
        prometheus.io/port: "80"
      labels:
        app: nginx-server
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - nginx-server
              topologyKey: kubernetes.io/hostname
      containers:
      - name: nginx-demo
        image: vaibhavthakur/nginx-demo:1.0-SNAPSHOT
        imagePullPolicy: Always
        resources:
          limits:
            cpu: 2500m
          requests:
            cpu: 2000m
        ports:
        - containerPort: 80
          name: http
---
apiVersion: v1
kind: Service
metadata:
  namespace: nginx
  name: nginx-service
  annotations:
    cloud.google.com/load-balancer-type: Internal
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: nginx-server
  type: LoadBalancer
Demo-Project configuration
3. Under the Application section, add the pipeline. Make sure that in the Trigger stage of the pipeline, Jenkins is enabled and the artifacts are consumed appropriately. (Don't forget to modify it according to your own credentials and endpoints.)
Spinnaker Pipeline
The pipeline configuration can be found .
4. Once you add it, the pipeline will look something like this:
Continuous Delivery pipeline, triggered by Jenkins, deploying to Staging and Prod.
1. Configuration: This is the stage where you specify the Jenkins endpoint, the job name, and the artifact expected from the job; in our case, trigger.properties.
2. Deploy (Manifest): The trigger.properties file carries an ACTION variable, based on which we decide whether to trigger a new deployment for the new image tag or to patch an existing deployment. The properties file also tells us, via the TAG variable, which version to deploy or patch with.
Expression validation for Deploy Stage
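The stage-enabling check is written as a Spinnaker pipeline expression against the values in the property file. A sketch of what the Deploy stage's "Conditional on Expression" field would contain (the key name ACTION matches the trigger.properties shown earlier; verify the exact syntax against your Spinnaker version):

```
${trigger.properties['ACTION'] == 'DEPLOY'}
```

The Patch stage would use the same expression with 'PATCH' on the right-hand side, so exactly one of the two branches runs per trigger.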
3. Patch (Manifest): Similar to the Deploy stage, this stage checks the same variable, and if it evaluates to “PATCH”, the current deployment is patched. It should be noted that in both of these stages the Kubernetes cluster is used as a staging cluster; therefore, deployments and patches to the staging environment are automatic.
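The Deploy-vs-Patch branching can be sketched as a small shell helper. This is an illustration of the decision logic, not the actual Spinnaker implementation: the kubectl commands are only echoed, the manifest filename nginx.yaml is an assumption, and the namespace, deployment, and container names match the manifests above:

```shell
#!/bin/bash
# Print (not run) the command each pipeline branch would effectively issue,
# based on the ACTION and TAG values from trigger.properties.
choose_stage() {
  local action="$1" tag="$2"
  case "$action" in
    DEPLOY)
      # Fresh rollout of the full manifest set
      echo "kubectl apply -f nginx.yaml"
      ;;
    PATCH)
      # In-place image bump on the existing Deployment
      echo "kubectl -n nginx set image deployment/nginx-deployment nginx-demo=vaibhavthakur/nginx-demo:${tag}"
      ;;
    *)
      # Neither branch matches; the stage is skipped
      echo "skip"
      ;;
  esac
}

choose_stage DEPLOY 1.7-SNAPSHOT
choose_stage PATCH 1.7-SNAPSHOT
```

In the real pipeline this decision is made declaratively by the per-stage expressions, but the effect on the cluster is the same: one branch applies manifests, the other patches the running Deployment's image tag.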
Note: k8-staging-1 under Account Settings
4. Manual Judgement: This is a very important stage. It is here that you decide whether or not to promote the build currently running in the staging cluster to the production cluster. This should be approved only after the staging build has been thoroughly tested by the various stakeholders.
5. Deploy (Manifest) and Patch (Manifest): The final stages in both paths are similar to their pre-approval counterparts; the only difference is that the cluster under Account is a production Kubernetes cluster.
Now you are ready to push out releases for your app. Once triggered, the pipeline will look like this:
Automated Deployment to Staging Environment
Approval Given to Manual Judgement Section
Deployment to Production Successful
The sections in grey have been skipped because the ACTION variable did not evaluate to “PATCH”. Once you deploy, you can view the current version, as well as previous versions, under the Infrastructure section.
Current and Previous Versions of App in the Staging and Production cluster
This article was originally published on .