In my previous article, I delved into the fascinating world of microservices - Microservice Architecture Patterns Part 1: Decomposition Patterns. This was the beginning of my comprehensive article series on microservices and their patterns.
In this article, we aim to demystify the CI/CD process through practical application. We'll take you through a step-by-step tutorial, breaking it down module by module, where you'll build a CI/CD pipeline manually. To do this, we'll harness the power of contemporary DevOps tools like AWS, Docker, Kubernetes, Ansible, Git, Apache Maven, and Jenkins. So, let's begin this journey!
Click the button Create an AWS Account.
Go to https://console.aws.amazon.com/console/home. Click the Sign In button.
Choose EC2 Virtual Server by clicking EC2 Service.
Click the button Launch Instance.
Go to the “Name and tags” section.
Provide a name for a new AWS EC2 Virtual Server instance in the “Name” section.
You can also add additional tags for your virtual server by clicking ”Add additional tags”.
Go to the "Application and OS Images (Amazon Machine Image)" section.
To play with the virtual server for FREE, select an Amazon Machine Image with the “Free tier eligible” tag.
Go to the ”Instance type” section.
To play with the virtual server for FREE, select a type with the “Free tier eligible” tag in the “Instance type” section.
For me it is t2.micro (Family: t2, 1 vCPU, 1 GiB Memory, Current generation: true).
Go to the ”Configure storage” section.
To play with the virtual server for FREE, do not change the default settings. Free tier eligible customers can get 30 GB of EBS General Purpose (SSD) or Magnetic storage.
Go to the “Network settings“ section.
By default, your virtual server is accessible via (Type - SSH, Protocol - TCP, Port - 22). If you need additional connection types, add them by adding additional inbound security group rules.
Go to the ”Key pair (Login)” section.
If you haven't created a “key pair” yet, create one by clicking “Create new key pair” and save the .pem file locally; you will need it to connect to the instance via SSH.
Launch the EC2 Virtual Server instance by clicking the button “Launch instance”.
Then you should go to the “Instances“ section by clicking “View all instances” button.
Now you can see that your AWS EC2 Virtual Server instance is running.
Follow instructions from [Module 1]: AWS EC2 Virtual Server section of this tutorial to finish this step and create an EC2 virtual server instance with the name JenkinsServer.
Do not forget to add a security group setup. It allows Jenkins and SSH to work on ports 8080 and 22, respectively.
Use the name “JenkinsServer” to distinguish your EC2 Virtual Server instance.
Create the “CI_CD_Pipeline” security group and the “CI_CD_Pipeline_Key_Pair” key pair for the new “JenkinsServer” AWS EC2 instance. You can reuse them later in this article.
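If you prefer the command line over the console, a minimal sketch of the same security group setup with the AWS CLI might look like this (the group name comes from this tutorial; opening the ports to 0.0.0.0/0 is acceptable only for a short-lived test setup):
# Create the security group in your default VPC
aws ec2 create-security-group --group-name CI_CD_Pipeline --description "Security group for the CI/CD pipeline tutorial"
# Allow SSH (port 22) and Jenkins (port 8080) from anywhere
aws ec2 authorize-security-group-ingress --group-name CI_CD_Pipeline --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-name CI_CD_Pipeline --protocol tcp --port 8080 --cidr 0.0.0.0/0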
Go to AWS Console home page → EC2 Management Console Dashboard → Instances.
Then choose JenkinsServer and click the “Connect” button.
You will see a new web page; click the “Connect” button again.
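Alternatively, you can connect from your local terminal over SSH; a sketch, assuming you saved the “CI_CD_Pipeline_Key_Pair” key pair locally and use Amazon Linux's default ec2-user account (replace the placeholder with your instance's public IP):
chmod 400 CI_CD_Pipeline_Key_Pair.pem    # SSH refuses keys with open permissions
ssh -i CI_CD_Pipeline_Key_Pair.pem ec2-user@<your-ec2-public-ip>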
sudo wget -O /etc/yum.repos.d/jenkins.repo https://pkg.jenkins.io/redhat-stable/jenkins.repo
Now the Jenkins repository is added to your package manager.
To import the Jenkins key we need to copy the “sudo rpm..” command and execute it.
sudo rpm --import https://pkg.jenkins.io/redhat-stable/jenkins.io-2023.key
This way, the “rpm” package manager can verify that the Jenkins packages you install are exactly the ones published by the Jenkins project, and that they haven't been tampered with or corrupted.
To run Jenkins, we need to install Java on our EC2 virtual server instance.
To install Java, use this command.
sudo amazon-linux-extras install java-openjdk11 -y
Verify whether Java was installed correctly using this command:
java -version
To run Jenkins, you also need to install fontconfig on your EC2 virtual server instance.
Use this command.
sudo yum install fontconfig java-11-openjdk -y
sudo yum install jenkins -y
sudo systemctl start jenkins
sudo systemctl status jenkins
http://<your-ec2-ip>:8080
sudo cat /var/lib/jenkins/secrets/initialAdminPassword
Now, as Jenkins is working fine, you can start creating the Jenkins pipeline. To do that, create a new “Freestyle project”: go to the Jenkins dashboard and click the “New Item” button.
Enter a name for the “Freestyle project” (the name “pipeline” is going to be used further in this article), and then click the “OK” button.
Then provide the Description of the pipeline.
Git is a distributed version control system (VCS) designed to help software teams keep track of every modification to the code in a special kind of database. If a mistake is made, developers can turn back the clock and compare earlier versions of the code to help fix it while minimizing disruption to the rest of the team. A VCS is especially useful for teams working on the same codebase at the same time.
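To make this concrete, here is a sketch of the everyday Git workflow (the repository URL is a placeholder for your own):
git clone https://github.com/<your-username>/hello.git   # copy a remote repository locally
cd hello
git add .                                                # stage your modifications
git commit -m "Describe the change"                      # record a snapshot in the local history
git push origin main                                     # publish the commit to Github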
sudo yum install git -y
git --version
Now Git is working fine on the EC2 Virtual Server instance.
Click the button “Manage Jenkins” and then click the button “Manage Plugins”.
Click the button “Available plugins”.
Find the Github plugin using the search box.
Select the Github plugin, and then click the “Install without restart” button.
Then on the main page, you need to click the button “Manage Jenkins” and then click the button “Global tool configuration”.
Then click the “Apply” and “Save” buttons.
Just copy and paste it into the “Repository URL” input. Then click the “Apply” and “Save” buttons to finish integrating Git with the pipeline.
cd /var/lib/jenkins/workspace/{your pipeline name}
Apache Maven is a widely used build automation and project management tool in software development. It streamlines the process of compiling, testing, and packaging code by managing project dependencies and providing a consistent build lifecycle. Maven employs XML-based configuration files (POM files) to define project structure, dependencies, and tasks, enabling developers to efficiently manage and deploy complex software projects.
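In day-to-day use, that build lifecycle is driven by a handful of commands; a quick sketch (run from the project root, next to the pom.xml file):
mvn clean            # delete previous build output (the target/ directory)
mvn test             # compile the code and run the unit tests
mvn package          # build the deployable artifact, e.g. a .jar, into target/
mvn clean package    # the combination commonly used in CI pipelines like this one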
To download Apache Maven go to the “/opt” directory.
cd /opt
sudo wget https://dlcdn.apache.org/maven/maven-3/3.9.4/binaries/apache-maven-3.9.4-bin.tar.gz
sudo tar -xvzf apache-maven-*.tar.gz
cd ~
Edit .bash_profile file using this command.
vi .bash_profile
Add JAVA_HOME and M2_HOME variables.
Assign the path to JDK11 to the JAVA_HOME variable and the path to the Maven directory to the M2_HOME variable.
sudo find / -name java
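As a sketch, the resulting additions to .bash_profile might look like this (the JDK path below is a placeholder; use the path you found with the find command above, and /opt/apache-maven-3.9.4 is where the archive was extracted earlier):
export JAVA_HOME=/usr/lib/jvm/<your-jdk11-directory>
export M2_HOME=/opt/apache-maven-3.9.4
export PATH=$PATH:$JAVA_HOME/bin:$M2_HOME/bin
Save the file and reload it as shown below.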
source .bash_profile
To verify $PATH, use this command.
echo $PATH
To verify Apache Maven, use this command.
mvn -v
To achieve this, follow these steps:
And then click the “Go back to the top page” button.
To do so, follow these steps:
Then go to the “Maven” section. Click the “Add Maven” button. Uncheck “Install automatically”.
Then add the name and the MAVEN_HOME path (for example, /opt/apache-maven-3.9.4).
Click the “Apply” and “Save” buttons.
Here, you have finished configuring the Apache Maven Jenkins plugin.
To integrate Apache Maven into the pipeline you need to follow these steps:
Finally, you should click the “Apply” and “Save” buttons to finish integrating Apache Maven with the pipeline.
cd /var/lib/jenkins/workspace/{your pipeline name}/target
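You should see the built artifact there, for example:
ls -l *.jar    # e.g. hello-0.0.1-SNAPSHOT.jar, the jar the Dockerfile expects later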
Use the instructions from the “Launch an AWS EC2 Virtual Server instance” section of this tutorial to finish this step. Do not forget to add a security group setup. It allows Docker and SSH to work on ports 8080 and 22, respectively.
sudo mkdir /opt/docker
sudo chown ansible-admin:ansible-admin /opt/docker
sudo yum install docker -y
You need to add the current user “ansible-admin” to the Docker group on the “AnsibleServer” EC2 virtual server to give Docker admin privileges.
sudo usermod -a -G docker ansible-admin
id ansible-admin
sudo systemctl start docker
sudo systemctl status docker
If you used the “Hello” project offered in the “[Module 3]: Git and Github” module, then you don’t need to create a new Dockerfile, as that project repository already contains a Dockerfile.
# Base image with a Java 17 runtime
FROM eclipse-temurin:17-jre-jammy
# Directory inside the container where the application lives
ENV HOME=/opt/app
WORKDIR $HOME
# Copy the jar built by Maven into the image
ADD hello-0.0.1-SNAPSHOT.jar $HOME
# Run the application when the container starts
ENTRYPOINT ["java", "-jar", "/opt/app/hello-0.0.1-SNAPSHOT.jar"]
sudo touch Dockerfile
vim Dockerfile
The Dockerfile is ready to use.
Now that your Dockerfile is prepared for use, proceed by copying your project's JAR artifact from the “JenkinsServer” EC2 instance onto the “AnsibleServer” EC2 instance. It is important to note that this transfer will be automated through the pipeline later.
By completing this step, you'll be ready to test your Dockerfile along with the Docker environment you've set up.
docker login
With this, you have completed the process of logging into Docker and are now ready to proceed with testing.
docker build -t hello:latest .
docker tag hello:latest zufarexplainedit/hello:latest
docker push zufarexplainedit/hello:latest
Follow the instructions from the “[Module 1]: AWS EC2 Virtual Server” section of this tutorial to finish this step and create an EC2 virtual server instance for Ansible.
Do not forget to add a security group setup. It allows Ansible and SSH to work on ports 8080 and 22, respectively.
Use the name “AnsibleServer” to distinguish your EC2 Virtual Server instance.
You can reuse “CI_CD_Pipeline” security group and “CI_CD_Pipeline_Key_Pair“ for a new “AnsibleServer” EC2 instance.
Then click the “Connect” button.
You will see a new web page; click the “Connect” button again.
sudo vi /etc/hostname
Replace this hostname with “ansible-server”. Then, reboot it.
sudo init 6
Now let’s add a new ansible-admin user to the AWS EC2 Virtual Server instance.
To do that use this command:
sudo useradd ansible-admin
Then, set the password for ansible-admin user.
sudo passwd ansible-admin
Also, you need to configure user privileges by editing the sudoers file.
sudo visudo
Add “ansible-admin ALL=(ALL) ALL” to this sudoers file.
Also, you need to edit /etc/ssh/sshd_config file to enable PasswordAuthentication.
sudo vi /etc/ssh/sshd_config
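Inside the file, find the PasswordAuthentication directive, uncomment it if needed, and set it to yes:
PasswordAuthentication yes
Then reload the SSH daemon to apply the change.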
sudo service sshd reload
sudo su - ansible-admin
ssh-keygen
Now you can install Ansible on your “AnsibleServer” EC2 virtual server instance.
Let’s do it.
sudo amazon-linux-extras install ansible2
ansible --version
Now that Ansible is installed on your “AnsibleServer” EC2 virtual server instance, you can configure Jenkins to integrate with it. You need to install the “Publish over SSH” plugin to integrate Jenkins with the EC2 Virtual Server instance where Ansible is installed, and with the other EC2 Virtual Server instances where Kubernetes is installed.
Go to “Dashboard” → “Manage Jenkins” → “Manage Plugins” → “Available plugins”.
Then enter “Publish over SSH“ in the search box.
Click the “Install without restart” button. Wait for the download process to finish.
To do so, follow these steps:
Go to “Dashboard“ → “Manage Jenkins” → “Configure System” → “Publish over SSH”.
Then click the “Apply” and “Save” buttons.
Here you have finished configuring the “Publish over SSH“ Jenkins plugin.
Go to the “/opt” folder on the AnsibleServer EC2 instance.
cd /opt
Create a new folder “docker” there.
sudo mkdir docker
Give privileges to this “docker” folder.
sudo chown ansible-admin:ansible-admin docker
Now, check the “docker” folder privileges by executing this command (ll is an alias for ls -l on Amazon Linux).
ll
You can see that the “docker” folder is accessible with the “ansible-admin” user.
Now that the “Publish over SSH” Jenkins plugin is installed and configured, you can integrate it into the pipeline which you created in “[Module 2]: Jenkins Server” to transfer a project jar artifact from “JenkinsServer” to “AnsibleServer”.
To integrate the “Publish over SSH” plugin into the pipeline, follow these steps:
Finally, click the “Apply” and “Save” buttons to finish integrating the “Publish over SSH” plugin with the pipeline.
Now you can use your updated pipeline to transfer a project jar artifact from “JenkinsServer” to “AnsibleServer”. To do that, click the “Build Now” button. As a result, you will see a successful job result in the build history.
Open your “AnsibleServer” AWS EC2 terminal, where you can check that the pipeline works well. Just use this command.
cd /opt/docker
When you run an Ansible playbook, the hosts parameter tells Ansible which machines to target. The default inventory file, /etc/ansible/hosts, maps group names to a list of IP addresses or hostnames. By editing /etc/ansible/hosts, you can easily manage groups of hosts without having to write out their IP addresses each time you run a playbook.
sudo ifconfig
sudo vi /etc/ansible/hosts
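As a sketch, the inventory could look like this (the group name matches the hosts: ansible entry in the playbook below; the IP is a placeholder for the private IP you found with ifconfig):
[ansible]
<private-ip-of-AnsibleServer>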
sudo ssh-copy-id -i /home/{your user name}/.ssh/id_rsa.pub {your user name}@{your host address}
sudo ssh-copy-id -i /home/ansible-admin/.ssh/id_rsa.pub [email protected]
Now you can see “Number of key(s) added: 1”. It means that the passwordless SSH authentication installation was successfully completed.
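As an optional sanity check, you can ping every host in the “ansible” inventory group with an Ansible ad-hoc command (assuming the inventory sketch above):
ansible ansible -m ping    # each host should answer with "pong"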
Create a new hello-app.yml file.
touch hello-app.yml
Open it up for editing with this command.
vi hello-app.yml
---
- hosts: ansible
  user: root
  tasks:
    - name: create docker image
      command: docker build -t hello:latest .
      args:
        chdir: /opt/docker
    - name: create tag to push image onto dockerhub
      command: docker tag hello:latest zufarexplainedit/hello:latest
    - name: push docker image onto dockerhub
      command: docker push zufarexplainedit/hello:latest
The Ansible playbook for Docker tasks is ready to use.
cd /opt/docker
sudo -u ansible-admin ansible-playbook /opt/docker/hello-app.yml
Now that the “Publish over SSH” Jenkins plugin, Ansible, and Docker are installed and configured, you can integrate them all into the pipeline which you created in “[Module 2]: Jenkins Server” to transfer a project jar artifact from “JenkinsServer” to “AnsibleServer”, build a new Docker image from your project, and push this Docker image onto Dockerhub.
To achieve this, follow these steps:
Finally, click the “Apply” and “Save” buttons to finish integrating the Ansible Docker tasks with the pipeline.
Now you can test your upgraded pipeline: it transfers a project jar artifact from “JenkinsServer” to “AnsibleServer”, builds a new Docker image from your project, and pushes this Docker image onto Dockerhub. To do that, click the “Build Now” button. As a result, you will see a successful job result in the build history.
Now let’s configure K8s on an EC2 instance. You are going to create a new EC2 instance and install the kubectl command-line tool for interacting with a Kubernetes cluster.
Use the instructions from the “Launch an AWS EC2 Virtual Server instance” section of this tutorial to finish this step.
Do not forget to add a security group setup. It allows all tools and SSH to work on ports 8080 and 22, respectively.
Use the name “K8sServer” to distinguish your EC2 Virtual Server instance.
You can reuse “CI_CD_Pipeline” security group and “CI_CD_Pipeline_Key_Pair“ for a new “K8sServer” EC2 instance.
sudo vi /etc/hostname
Replace this hostname with “kubernetes-server” and then reboot it.
sudo init 6
Use this command to check the AWS version.
aws --version
If you see an older version, such as aws-cli/1.18, you should download the latest version.
Copy-paste the curl command.
curl "//awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
Wait for the download process to complete.
unzip awscliv2.zip
sudo ./aws/install
aws --version
Kubectl is a fundamental command-line tool for interacting with any Kubernetes cluster, regardless of the underlying infrastructure. It allows you to manage resources, deploy applications, configure networking, access logs, and perform various other tasks within a Kubernetes cluster.
Now you need to install the kubectl command-line tool for interacting with a Kubernetes cluster. To do that, go to AWS → Documentation → Amazon EKS → User Guide → Installing or updating kubectl → Linux.
curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.27.1/2023-04-19/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin
kubectl version --output=yaml
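With kubectl installed, here is a sketch of a few commands you will use constantly later in this tutorial (they require a configured cluster, which you create in the next steps; the pod name is a placeholder):
kubectl get nodes            # list the worker nodes of the cluster
kubectl get pods             # list pods in the current namespace
kubectl get services         # list services and their external IPs
kubectl logs <pod-name>      # print logs from a pod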
Eksctl is another command-line tool, tailored specifically to the Amazon EKS service. Eksctl can be used to create AWS EKS clusters, manage node groups, and perform EKS-specific tasks such as integrating with IAM roles and other AWS services, abstracting away much of the AWS infrastructure setup and management.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin
eksctl version
You need to create an IAM role and attach it to your “KubernetesServer” EC2 instance.
To do that, you need to find IAM in the search box.
Go to IAM Dashboard → Roles.
Click the button “Create role” on the IAM roles web page.
Then choose “AWS service” and “EC2”. And then click the “Next” button.
Then, find “AmazonEC2FullAccess”, “IAMFullAccess”, and “AWSCloudFormationFullAccess” in the search box, select them, and then click the “Add permissions” button.
And then click the “Next” button.
Then type “Eksctl_Role” into “Role name” input.
And then click the “Create role” button.
Go to the AWS EC2 instance web page. Choose “KubernetesServer”. Then click “Actions” → “Security” → “Modify IAM Role”.
Choose “Eksctl_Role” and then click the “Update IAM role” button.
Now your IAM role is connected with your “KubernetesServer” and the eksctl tool.
An Amazon EKS (Elastic Kubernetes Service) cluster is a managed Kubernetes environment on AWS, automating intricate infrastructure tasks like setup, scaling, and maintenance. It's essential as it provides an efficient, secure, and AWS-optimized platform for deploying, managing, and scaling containerized applications, streamlining operations and freeing developers to focus on coding rather than managing underlying infrastructure.
To achieve this, follow these steps:
eksctl create cluster --name cluster-name \
--region region-name \
--node-type instance-type \
--nodes-min 2 \
--nodes-max 2 \
--zones <AZ-1>,<AZ-2>
eksctl create cluster --name zufarexplainedit \
--region eu-north-1 \
--node-type t3.micro
Execute the modified command and patiently await the completion of the cluster creation process. You will notice that the EKS cluster status is indicated as “creating” on the AWS CloudFormation web page, where you can also verify the successful creation once it finishes.
A Kubernetes Deployment YAML file is a configuration script written in YAML format that defines how to manage and maintain a specific application or service within a Kubernetes cluster. It encapsulates instructions for orchestrating the deployment, scaling, updating, and monitoring of containers running the application. This file includes details such as the container image, the desired number of replicas, resource limits, environment variables, networking settings, and more. When applied to a Kubernetes cluster, the Deployment YAML file ensures the desired state of the application, automatically managing the creation, scaling, and recovery of containers to maintain the desired level of availability and reliability.
touch hello-app-deployment.yaml
vi hello-app-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zufarexplainedit-hello-app
  labels:
    app: hello-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app
    spec:
      containers:
        - name: hello-app
          image: zufarexplainedit/hello
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
A Kubernetes Service YAML file is a configuration script written in YAML format that defines a network abstraction for a set of pods, allowing them to be accessed consistently within a Kubernetes cluster. This file outlines how the service should be discovered, accessed, and load-balanced by other services or external clients. It includes specifications like the service type (ClusterIP, NodePort, LoadBalancer), port numbers, selectors to identify pods, and more. When applied to a Kubernetes cluster, the Service YAML file creates a virtual IP and port that routes traffic to the appropriate pods, abstracting the underlying pod changes and providing a stable endpoint for communication, enabling seamless connectivity and dynamic scaling.
touch hello-app-service.yaml
vi hello-app-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: zufarexplainedit-hello-app-service
  labels:
    app: hello-app
spec:
  selector:
    app: hello-app
  ports:
    - port: 8080
      targetPort: 8080
  type: LoadBalancer
Now hello-app-service.yaml is created and ready to use.
1. Apply Deployment.
Use the following command to apply the deployment configuration.
kubectl apply -f hello-app-deployment.yaml
This will create a deployment with the specified number of replicas and a rolling update strategy, ensuring your application's availability and manageability.
2. Apply Service.
Next, apply the service configuration.
kubectl apply -f hello-app-service.yaml
This will set up a LoadBalancer type service, exposing your application to the internet.
Note that it might take a short while for the LoadBalancer to be provisioned and acquire an external IP address.
3. Check LoadBalancer Status.
Monitor the status of your service using this command.
kubectl get service zufarexplainedit-hello-app-service
When an external IP is assigned, you're almost ready to access your application.
4. Access Your Application.
Using a web browser, enter the assigned external IP address followed by :8080. After a brief moment, the page will load, displaying the "HelloWorld" message. Keep in mind that the initial loading might take a few seconds.
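You can run the same check from a terminal with curl (the address is a placeholder for the external IP assigned to your service):
curl http://<external-ip>:8080    # should return the "HelloWorld" response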
1. Delete All Deployments.
To delete all deployments, you can use the following command.
kubectl delete deployments --all
This action ensures that no active deployment instances are left in your cluster.
2. Delete All Pods.
If you need to delete all pods, whether they are managed by a deployment or not, you can use the following command.
kubectl delete pods --all
Clearing pods can help reset your cluster state or prepare for new deployments.
3. Delete All Services.
To clean up services that expose your applications to the network, you can use the following command.
kubectl delete services --all
Removing services may involve downtime, so consider the implications before proceeding.
To remove all the resources associated with the specified Amazon EKS cluster created with eksctl, including worker nodes, networking components, and other resources, you can use the following command.
eksctl delete cluster --name {your cluster name} --region {your region name}
For me, it is:
eksctl delete cluster --name zufarexplainedit --region eu-north-1
Make sure you are certain about deleting the cluster, as this action is irreversible and will result in data loss.
Now let’s add a new ansible-admin user to “KubernetesServer” AWS EC2 Virtual Server instance.
sudo useradd ansible-admin
Then, set the password for ansible-admin user.
sudo passwd ansible-admin
Also, you need to configure user privileges by editing the sudoers file.
sudo visudo
Add “ansible-admin ALL=(ALL) ALL” to this sudoers file.
Also, you need to edit /etc/ssh/sshd_config file to enable PasswordAuthentication.
sudo vi /etc/ssh/sshd_config
sudo service sshd reload
sudo su - ansible-admin
You are planning to manage remote servers, such as the K8s EC2 virtual server instance, later in this article. That is why you need to set up SSH keys.
ssh-keygen
sudo ssh-copy-id -i /home/{your user name}/.ssh/id_rsa.pub {your user name}@{your host address}
sudo ssh-copy-id -i /home/ansible-admin/.ssh/id_rsa.pub [email protected]
Now you can see “Number of key(s) added: 1”. It means that the passwordless SSH authentication installation was successfully completed.
When you run an Ansible playbook, you specify the hosts it should run on. In this step, you need to specify the KubernetesServer EC2 instance host. To do that, repeat the same steps that you followed in “[Module 6]: Ansible”.
sudo ifconfig
sudo vi /etc/ansible/hosts
Create a new kubernetes-hello-app.yml file.
touch kubernetes-hello-app.yml
Open it up for editing with this command.
vi kubernetes-hello-app.yml
---
- hosts: kubernetes
  tasks:
    - name: deploy regapp on kubernetes
      command: kubectl apply -f hello-app-deployment.yaml
    - name: create service for regapp
      command: kubectl apply -f hello-app-service.yaml
    - name: update deployment with new pods if image updated in docker hub
      command: kubectl rollout restart deployment.apps/zufarexplainedit-hello-app
The Ansible playbook for Kubernetes tasks is ready to use.
sudo -u ansible-admin ansible-playbook /opt/docker/kubernetes-hello-app.yml
Zufar Sunagatov is an experienced senior software engineer who is passionate about designing modern software systems.