When you think of Kubernetes, you think of containers. When you think of containers, you think of Docker. So it came as a big surprise when, in December 2020, the Kubernetes maintainers announced that Docker support was being deprecated. The announcement caused quite a bit of concern and confusion.
Deprecating Docker support in Kubernetes? It sounds alarming, but it is less dramatic than it appears. While the term Docker is synonymous with containers, many do not realize that, as a product, Docker is composed of multiple components: it is an entire tech stack for containers.
One of these components is the container runtime, which Kubernetes needs in order to interact with your containers. The container runtime can be broken down into a high-level runtime and a low-level runtime, which serve different purposes but work together. The high-level runtime focuses on pulling images from registries, managing them, and handing them off to the low-level runtime.
The low-level runtime then creates, deletes, and runs containers from the images it is handed. The two layers follow different specifications: the high-level runtime implements the Container Runtime Interface (CRI), while the low-level runtime follows the Open Container Initiative (OCI) runtime specification.
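The division of labor above can be sketched in a few lines of Python. This is purely illustrative: the real CRI is a gRPC API and real runtimes manage kernel namespaces and filesystems, so every class and method name here is a simplified stand-in, not an actual Kubernetes or containerd interface.

```python
# Illustrative sketch only: these classes are simplified stand-ins for the
# real high-level (CRI) and low-level (OCI) runtimes, not actual APIs.

class LowLevelRuntime:
    """Plays the role of an OCI runtime such as runc: creates, starts,
    and deletes containers from an image it is handed."""
    def __init__(self):
        self.containers = {}

    def create_container(self, container_id, image):
        self.containers[container_id] = {"image": image, "state": "created"}

    def start_container(self, container_id):
        self.containers[container_id]["state"] = "running"

    def delete_container(self, container_id):
        del self.containers[container_id]


class HighLevelRuntime:
    """Plays the role of a CRI runtime such as containerd: pulls and
    manages images, then hands them to the low-level runtime."""
    def __init__(self, low_level):
        self.low_level = low_level
        self.image_cache = {}

    def pull_image(self, ref):
        # A real runtime would contact a registry; here we just record the ref.
        self.image_cache[ref] = {"ref": ref}
        return self.image_cache[ref]

    def run_container(self, container_id, image_ref):
        image = self.image_cache.get(image_ref) or self.pull_image(image_ref)
        self.low_level.create_container(container_id, image)
        self.low_level.start_container(container_id)


runtime = HighLevelRuntime(LowLevelRuntime())
runtime.run_container("web-1", "nginx:1.25")
print(runtime.low_level.containers["web-1"]["state"])  # prints: running
```

Note that the high-level runtime never touches container state directly; it only manages images and delegates, which mirrors why Kubernetes only needs to speak to the CRI layer.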
The Container Runtime Interface (CRI) was introduced as an alpha offering in Kubernetes 1.5. The goal of the CRI was to make the Kubernetes ecosystem more extensible by supplying developers with a blueprint for how Kubernetes interacts with the runtime. How developers design and implement the runtime is entirely up to them, as long as the interface is satisfied.
As cluster maintainers, the standardization of the CRI allows us to decide which container runtime we would like to use for our environments. It also makes Kubernetes more flexible, as it no longer requires special knowledge of each runtime that may come and go. Two popular container runtimes currently available are containerd and CRI-O.
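In practice, choosing a runtime usually comes down to pointing the kubelet at that runtime's CRI socket via the real `--container-runtime-endpoint` flag. A sketch, assuming the common default socket paths (these vary by distribution and install method):

```shell
# Using containerd as the CRI runtime
kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock

# Using CRI-O as the CRI runtime
kubelet --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```

Because both runtimes speak CRI, the kubelet does not care which one sits behind the socket.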
The Open Container Initiative (OCI) originated in 2015, founded by leaders in the container space who decided that the way container images are built should be standardized. Images built to the OCI specification will work with any container runtime that adheres to, and is compliant with, that specification.
So whether you build container images with Docker or another OCI-compliant tool, your images will remain compatible across runtimes and continue to run in your cluster. A few popular OCI runtimes are runc, Kata Containers, and gVisor.
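To make the image standard concrete, here is what a minimal OCI image manifest looks like. The `mediaType` values are the real ones from the OCI image specification; the digests are placeholders and the sizes are made-up illustrative numbers:

```json
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.oci.image.manifest.v1+json",
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "digest": "sha256:<config-digest>",
    "size": 1470
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "digest": "sha256:<layer-digest>",
      "size": 2811478
    }
  ]
}
```

Any runtime that understands this manifest format can unpack the layers and run the container, regardless of which tool produced the image.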
Let’s now revisit the topic of Docker and why it is being deprecated. Docker knows how to interact with containers because it uses containerd as its high-level runtime and runc as its low-level runtime, both of which sit a few layers deep inside Docker and are abstracted away from the user.
Even though containerd and runc are CRI- and OCI-compliant respectively, we run into a problem: Docker itself does not satisfy the requirements of the CRI, which is what Kubernetes uses to interact with runtimes.
This problem was solved with the introduction of the Dockershim, which serves as CRI-compliant middleware between Kubernetes and Docker.
However, a drawback of the Dockershim is that it loads the entire Docker stack: Kubernetes talks to the shim, which talks to Docker, which sends the call down the stack until it reaches containerd.
This adds unnecessary steps to your container workflow, because Kubernetes can instead talk directly to containerd or any other CRI-compliant runtime. The Dockershim was intended as a temporary solution, but it slowly became a maintenance burden, which necessitated its deprecation.
In summary, Kubernetes requires a way to interact with containers. This is handled by container runtimes, which manage the lifecycle of containers. Kubernetes can interact with any container runtime, provided it is compliant with the Container Runtime Interface, which defines how Kubernetes will interact with a supplied runtime.
In addition, all images, containers, and runtimes must adhere to the Open Container Initiative specifications, which define how images and containers should be created. As for Docker, it is a tech stack that abstracts a container runtime and is not itself CRI compliant, so the workaround that let Kubernetes work with Docker (the Dockershim) is being removed to eliminate those pain points.