This abstraction is needed because we don't want a container image locked to a fixed virtual machine (or physical server); decoupling the two is what gives us high availability, fault tolerance, scalability, and the portability of containers. Remember, these containers are lightweight.

You may have heard the names Kubernetes, Docker Swarm, or Mesos; these are some of the popular container orchestrators available. If we take Kubernetes, for instance, there are container application platforms that offer built-in support to set up a Kubernetes cluster in minutes. These platforms, such as Microsoft Azure and AWS, provide a wide range of features and APIs to simplify the DevOps lifecycle of containers. Handling the underlying complexities of provisioning and managing clusters and cluster nodes, and even providing advanced support for container lifecycle management, are some of their unique selling points. Most of these platforms also offer private container registries and built-in CI/CD pipelines.

The orchestrator typically handles the provisioning complexity of deploying new container images to a cluster. To do this, it pulls the container images from the container registry and provisions them across the cluster nodes.
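For example, with Kubernetes, rolling out a new image version is a single instruction to the orchestrator, which then pulls the image onto the relevant nodes and replaces the running containers. A minimal sketch, assuming an existing Deployment and image name (both placeholders):

```bash
# Point the existing "myapp" Deployment at a new image tag; Kubernetes pulls
# the image from the registry and performs a rolling update of the containers.
kubectl set image deployment/myapp myapp=registry.example.com/myapp:1.0.1

# Watch the rollout progress.
kubectl rollout status deployment/myapp
```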
Note that it's not just one container we are talking about here; it could be tens or even hundreds of different containers running in a single cluster.

So, taking the entire lifecycle described above, each step involves its own set of DevOps operations for building container images, provisioning containers, and maintaining clusters. In the following sections, I'm diving into these steps in more detail from a DevOps perspective.
Building and Publishing Container Images
Building containers is an essential part of DevOps. In development environments, if the container blueprint (e.g., the Dockerfile) changes, rebuilding the container is mandatory. Otherwise, it is possible to have an optimized setup where only the application code is built and pushed into the container already running in the development environment. Either way, it is essential to speed up the container build step with scripts and automation.

For container clusters, you need to build the container images outside the development environment, on a build server. The CI/CD pipeline typically builds the container image; a properly configured pipeline automatically builds the images and publishes them to the container registry as its initial steps.

Some container registries, like DockerHub, simplify the process by providing automated image builds. For example, DockerHub offers a clickable option to connect a source code repository (e.g., GitHub) so that any code modification triggers a rebuild of the image. Publishing the built container image to a container registry is quite straightforward, since registries typically provide command-line tools or APIs for it.

Previously, we discussed building the application container image on a host machine, typically a development machine or a build server where the relevant Docker and application-specific compilation tools are already installed. However, it is also possible to build the container image inside another container, which is one of the use cases for using containers for DevOps. For instance, using a container to build another container supports cross-platform builds of the application code and guarantees the exact same build environment on both development machines and build servers.
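As a minimal sketch of these steps, assuming a Dockerfile at the project root, a Maven-based application, and a placeholder registry and image name, the build-and-publish flow might look like this:

```bash
# Hypothetical image name; adjust the registry and tag to your project.
IMAGE=registry.example.com/myapp:1.0.0

# Optionally compile the application inside a builder container so the build
# environment is identical on developer machines and the build server.
docker run --rm -v "$PWD":/src -w /src maven:3.9-eclipse-temurin-17 mvn -q package

# Build the container image from the Dockerfile at the project root.
docker build -t "$IMAGE" .

# Publish the image (assumes a prior `docker login registry.example.com`).
docker push "$IMAGE"
```

In a CI/CD pipeline, the same commands run on the build server, triggered by a commit or a connected source code repository.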
Besides building application container images inside containers, we can also use containers to run continuous integration (CI) tools like Jenkins. A more effective way of executing the automated tests these tools run is to do so before merging new code changes into the primary source code repository. With Git source control, the best place to perform these tests is when a Pull Request is sent. If any test case fails at that point, the CI tool should automatically report the status to the Pull Request and prevent it from being merged.
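As a rough sketch, assuming Docker is available on the host, the official Jenkins image can be started as a container with a single command; the named volume keeps Jenkins data (jobs, build history) outside the container:

```bash
# Run Jenkins in a container, exposing the web UI on port 8080 and the agent
# port on 50000. The named volume "jenkins_home" persists jobs and build
# history across container restarts and upgrades.
docker run -d --name jenkins \
  -p 8080:8080 -p 50000:50000 \
  -v jenkins_home:/var/jenkins_home \
  jenkins/jenkins:lts
```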
Deploying the Container Image to a Cluster
As discussed in the first section of this article, deploying containers to a cluster basically requires invoking the relevant underlying container platform APIs or orchestrator APIs. The complex task of scheduling the containers is typically the responsibility of a container orchestrator. Orchestrators let us define the rules that handle this scheduling complexity. These rules comprise the following (a minimal manifest sketch illustrating several of them appears at the end of this article):

- How many instances of a particular container image to run at runtime.
- Internal networking rules for connecting with other containers.
- Volumes mounted to the containers.
- Rules specific to container scheduling and lifecycle management on different nodes in the cluster.
- Rules specific to internal container resource management.

Though this may seem complicated, from a DevOps perspective it is a luxury that the orchestrator handles these complexities while we only need to trigger the deployment instruction.

One DevOps use case is using containers as hosts that coordinate the build and deployment of containerized application changes. For example, running Jenkins inside a Docker container could be used for CI/CD. Having multiple instances of Jenkins in containers is especially useful when setting up various CI/CD environments to manage different software projects. However, this comes with a cost: you'll need to mount an external volume to keep track of previous build results.

Although I haven't delved much into post-deployment DevOps operations, containers can also play a significant role there. For instance, we could deploy containers as agents that monitor other containers, performing cross-cutting operations like log streaming, health checks, and resource monitoring.

As you can see, containerized applications benefit from DevOps, and vice versa. Since this is an emerging area, both in terms of application architecture and of DevOps, new tools and technologies are continuously coming out to make things more efficient. Therefore, it is essential to keep an eye on how the existing solutions evolve over time.
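To make the scheduling rules above concrete, here is a minimal sketch of a Kubernetes Deployment, assuming a cluster is available and using placeholder names and image. It captures the instance count, an exposed port for internal networking, a mounted volume, and per-container resource limits:

```bash
# Apply a minimal Deployment manifest; all names and the image are placeholders.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3                       # how many container instances to run
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: registry.example.com/myapp:1.0.0
        ports:
        - containerPort: 8080       # exposed for internal networking
        resources:                  # internal resource management
          requests:
            cpu: "250m"
            memory: "256Mi"
          limits:
            cpu: "500m"
            memory: "512Mi"
        volumeMounts:               # volume mounted into the container
        - name: data
          mountPath: /data
      volumes:
      - name: data
        emptyDir: {}
EOF
```

Applying this manifest is the deployment instruction; the orchestrator takes care of scheduling the three replicas across the cluster nodes and keeping them running.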