Docker vs. Kubernetes – A Win-Win Scenario

In today's container world, two names reign supreme: Docker and Kubernetes. Both are extremely popular platforms for managing containers, and to a beginner they may look like competing technologies.

In reality, they are complementary. Docker introduced an easy way to build, deploy, and run containers on Linux (and now Windows) machines, along with the concept of immutable infrastructure and a hierarchy of images for composing new images from existing ones. Kubernetes, on the other hand, builds on Docker's model to manage a cluster of containers, letting you run the same immutable container across dozens of machines with failover, instance scaling, and rolling deployments.
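The image hierarchy mentioned above works by stacking new layers on top of an existing base image. A minimal Dockerfile sketch (the base image, file names, and commands here are illustrative assumptions, not taken from any particular project):

```dockerfile
# Start from an existing, immutable base image in the hierarchy.
FROM python:3.11-slim

# Each instruction below adds a new read-only layer on top of the previous one.
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .

# The built image is immutable: any change means building a new image.
CMD ["python", "app.py"]
```

Once built, the image itself never changes; `docker build` produces a new image instead, which is what makes identical testing, staging, and production environments possible.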

Nonetheless, there is at least one point where the two platforms compete: container orchestration. Kubernetes is, of course, the most popular container orchestration tool nowadays, but it wasn't the first. The idea of a tool to manage large clusters has been pursued by several companies, in both proprietary and open source form. Google launched Kubernetes and Docker launched Swarm with the same purpose in mind, and other players have since jumped on the bandwagon.

The Rise of Containers

Linux Containers (LXC) were first introduced in 2008 and are still widely used today. Before containers gained traction, virtual machines were the standard way to segment physical computing resources, both at cloud providers and in internal data centers. Virtual machines provide resource isolation and segmentation, but they are slow to start and carry virtualization overhead. Hardware extensions like Intel VT-x and AMD-V were designed specifically to avoid full CPU emulation, yet performance still does not equal a bare metal machine with the same specs.

Containers are not virtual machines; there is no separation layer as in VMs. All containers run on the same machine, sharing the same kernel. CPU, memory, disk, network, and other resources are all scheduled by the operating system, with the ability to define priorities, sizes, maximum values, and other sharing settings. Storage, for example, is presented to each container as if it were its own hard disk, and virtual networks can be created to let containers communicate with one another (much like distributed machines). The approach is similar to the sandboxing found in modern mobile operating systems, but it runs at a lower level, with access to virtually all system calls.

In other words, containers can run on bare metal, touching the hardware directly to maximize performance. There is no emulation layer to get in the way.
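The sharing settings described above are exposed directly by the container runtime. A hedged sketch using a Docker Compose file (the service name, image, and limit values are illustrative assumptions):

```yaml
# docker-compose.yml (illustrative): the kernel enforces these limits
# via cgroups; no hypervisor or emulation layer is involved.
services:
  api:
    image: nginx:alpine
    cpus: "0.5"        # at most half a CPU core
    mem_limit: 256m    # hard memory ceiling for this container
    networks:
      - backend        # virtual network for container-to-container traffic

networks:
  backend:             # containers on this network reach each other by service name
```

Because these are just scheduler and cgroup settings, changing a limit does not require rebuilding the image, only restarting the container.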

Docker is by far the best-known container system; it originally built on LXC, later replacing it with its own runtime (libcontainer, and subsequently runc). It is so popular that Docker has become synonymous with containers, although alternatives such as rkt exist. Docker is also the solution most relied upon for infrastructure immutability: once created, an image can't be changed, which guarantees uniform testing, staging, and production environments.

Level Up: Orchestration

With Docker being rapidly adopted by software houses and enterprises, the demands placed on the technology grew. Containers are excellent for creating environments that can be reproduced across stages, but how does one handle a real production environment?

In this type of setting, we have to look closely at some specific operational constraints. If a container breaks, what should we do? If a container can't respond in a reasonable time, how do we increase its processing power? How do we handle version upgrades and rollbacks? If a physical machine breaks, where do we reschedule the containers it was running?

All of these questions are addressed by a class of tools called container orchestrators, and over the last few years intense competition has produced many solutions.

Apache Mesos, for example, has been around since early 2010 and calls itself a data center operating system, as it does more than simply manage containers. As mentioned above, Docker Swarm is Docker's solution for orchestrating containers; its main benefit is the simplicity of managing a set of machines using the same Docker commands. Finally, Kubernetes was open-sourced by Google in 2014, drawing on years of internal experience in cluster management, and reached a stable release in 2015, bringing to the table a mix of powerful functionality, rich declarative commands, and a huge open source community.

Nowadays, Kubernetes is considered the major solution for container orchestration.
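The operational questions raised earlier (failover, scaling, upgrades) are exactly what a Kubernetes Deployment declares. A minimal sketch (the names, label, and image are illustrative assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # keep 3 pods running; reschedule on node failure
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate      # replace pods gradually on version upgrades
    rollingUpdate:
      maxUnavailable: 1      # at most one pod down during a rollout
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Scaling becomes a one-line change (`replicas: 5`, or `kubectl scale`), and `kubectl rollout undo` handles rollbacks through the same mechanism.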

If You Can’t Beat ‘em, Join ‘em

Docker and Kubernetes are not 100% direct rivals. With the exception of Docker Swarm, the two platforms complement one another. Kubernetes uses Docker as its main container engine solution, and Docker recently announced support for Kubernetes across its enterprise platform.

Docker comes in two editions: (1) the open source, free-to-use Community Edition and (2) the paid Enterprise Edition. The Enterprise Edition offers a private image registry, advanced security, and centralized management of the container lifecycle: building, testing, deploying, running, and upgrading a container. The last three (deploying, running, and upgrading) are handled by Docker's orchestration layer, which until recently was limited to Docker Swarm.

It was recently announced that Docker now supports Kubernetes as the orchestration layer of its Enterprise Edition. Moreover, Docker took care to get approved under the Certified Kubernetes™ program, which guarantees that all Kubernetes APIs function as expected.

Inside Docker EE, Kubernetes can take advantage of features such as secure image management, in which Docker EE scans images for known issues before a container uses them, and secure automation, in which organizations remove bottlenecks by automatically enforcing policies such as scanning images for vulnerabilities.

Docker EE also simplifies multi-cloud environments by allowing multiple orchestrators in the same environment. To support this, Docker EE provides multi-tenancy with AD/LDAP support and fine-grained, role-based access controls, so admins can assign different roles across the orchestrators running inside the same Docker EE instance.


Necessity is the mother of invention. The rapid adoption of first Docker and later Kubernetes can be explained by the huge demand for infrastructure automation and cluster management.

This demand coincided with, and facilitated, the rise of DevOps. While Docker established the standard way to build and deploy containers, Kubernetes is shaping the way we handle clusters. The two platforms are complementary; Docker Swarm is the only remaining point of conflict, and its gradual demise looks inevitable.

In essence, Docker and Kubernetes are becoming the new go-to infrastructure stack, and the more integrated they become, the easier they will be to use. Docker EE already claims to have made operating Kubernetes (a known pain point) simpler and easier. At the end of the day, developers and operators seem to be the real winners in this Docker vs. Kubernetes face-off.
