Kubernetes Security Best Practices
The New Cloud-Native World of Containers
Cloud-native is the new standard for modern applications. This usually means container-based applications, using the popular Docker and Kubernetes platforms, and increasingly also service mesh platforms such as Istio and Envoy. With that, container security in general, and Kubernetes security in particular, is at the forefront of engineers’ minds. Docker popularized containers based on the good old but little-used LXC Linux containers. Docker containers (as well as other container tools) provide lightweight virtualization and encapsulation, which brings great advantages, especially as a way of packaging, deploying, and scaling microservices in a microservices architecture. You can easily and quickly spin up new services, run multiple instances on the same host to maximize utilization, and scale out (and in) with demand.
But with the adoption of containers came growing cluster scale and a greater diversity of microservices, and the need for container orchestration became paramount. That’s where Kubernetes comes in. Kubernetes (or K8s in geek talk) is an open source container orchestration tool that can automatically scale, distribute, and handle faults for containers. Created at Google and donated to the Cloud Native Computing Foundation, Kubernetes is prolific in production environments for running Docker containers in a fault-tolerant manner. It also supports other container runtimes, such as CoreOS rkt. In addition to on-premises installations, all the major cloud vendors now offer it as a managed service.
In this post I’ll discuss the special security concerns that arise in Kubernetes environments, and best practices for setting up your environment properly to mitigate vulnerabilities:
- working with namespaces for authentication, authorization, and access control
- working with reliable Docker images and keeping relevant software updated
- defining resource quotas to avoid resource cannibalization
- setting up network policies for proper segmentation and traffic control
Security Challenges of Kubernetes and Container Environments
If you’re in charge of security, the shift to containers may give you a headache with the host of new challenges it brings. These are fairly young platforms (compared to Linux or VMware, for example), so CVEs (Common Vulnerabilities and Exposures) can be found even in the most common utilities. A good example of that is the docker cp command (file copy), which had a critical CVE found last November that let an attacker take full root control of the host and all containers within it.
These are also highly distributed environments with typically large clusters (sets of interacting virtual nodes). This raises additional security pitfalls: a bad network configuration can expose entire computing systems to unauthorized users; a single node with an outdated OS can lead to breaches of all your machines; and a system subjected to a DoS attack could leave one or more machines unusable. The ephemeral nature of containers, which spin up and down frequently, makes them even harder to monitor and trace.
What makes things worse is that traditional wisdom and tools such as firewalls are not well suited to the container domain. Even the operating systems themselves (which all the containers on a host share) are largely oblivious to individual containers and workloads. The shared OS kernel that makes containers lightweight also makes it tough to create good container isolation or prevent abuse of host resources. This can result in malicious activity such as 1) an elevation of privileges, 2) exfiltration of sensitive data, 3) a compromise of operations, or 4) a breach of compliance policies.
In the following sections, we’ll take a deep dive into some Kubernetes security best practices that will help you avoid issues when deploying your own K8s instance.
Authentication, Authorization and Access Control with K8s Namespaces
A cluster can be used for distinct environments and purposes. It can host services for several products; for different environments such as testing, staging, and production; and for independent teams with different roles. It is important to separate these into distinct namespaces, so you can control access to each service’s resources. Namespaces create a logical boundary that groups resources within the same space.
Production environments should always run in a separate cluster with strict access permissions. For other environments, you can create roles per namespace so that, for example, only your QA team can access the testing environment, as sketched below.
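Here is a minimal sketch of such a setup, assuming a testing namespace and a qa-team group coming from your identity provider (both names are placeholders for this example, not prescriptions):

apiVersion: v1
kind: Namespace
metadata:
  name: testing
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: qa-read-only
  namespace: testing
rules:
# read-only access to common workload resources in the testing namespace
- apiGroups: ["", "apps"]
  resources: ["pods", "services", "deployments"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: qa-read-only-binding
  namespace: testing
subjects:
# "qa-team" is a placeholder group name from your identity provider
- kind: Group
  name: qa-team
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: qa-read-only
  apiGroup: rbac.authorization.k8s.io

Because the Role and RoleBinding are namespaced, the same group can be granted different (or no) permissions in other namespaces.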
The APIs are the central interfaces for administrators, users, and applications to operate and communicate in the Kubernetes environment. For that reason, controlling API access is the main task of authentication and authorization within Kubernetes.
Kubernetes controls API access through authentication, authorization, and admission controls. Admission control intercepts and regulates requests to the APIs after authentication and authorization.
You can integrate authentication with your organization’s directory, for example via LDAP, Active Directory, or SAML.
You should also protect the endpoints of Kubelet, which is the primary “node agent” that runs on each node.
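A minimal sketch of a kubelet configuration that hardens these endpoints, assuming your nodes read a KubeletConfiguration file (flag-based setups expose equivalent options):

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  # reject unauthenticated (anonymous) requests to the kubelet API
  anonymous:
    enabled: false
  # authenticate clients against the API server via token review
  webhook:
    enabled: true
authorization:
  # delegate authorization decisions to the API server
  mode: Webhook
# disable the legacy unauthenticated read-only port
readOnlyPort: 0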
Keep Cluster Updated with Reliable Docker Images
Every type of software contains bugs, some of which pose major security vulnerabilities. The active open source community around Kubernetes is a powerful multiplier in flushing out and patching them. But to take advantage of the community, it is up to you to keep your Kubernetes deployment, Docker images, OS, and any other software running on the cluster updated, so flaws are repaired before they can be exploited. Let’s go over the main items:
On your cluster, make sure to monitor the Kubernetes version and the OS version of each node. Subscribing to the mailing lists or security announcements of the software you use, and applying updates promptly, should help you avoid the majority of known Kubernetes security issues.
Your containers run your Docker images. Those images are made of layers. For example, if you have a Java web application, your image might have the following layers: [Distribution Layer], [JRE Layer], [Tomcat Layer], [Your application layer]. Each layer could have a security flaw and must be updated. What makes things harder to control is the open-source developer culture of sharing images, which all too often ends up relying on base images pulled off the web without even checking the registry certificate. I’d recommend using private or official registries, maintaining standard base images for developers to use, and scanning images for known K8s security vulnerabilities with tools and services such as Snyk, Alcide, Sonatype Nexus, CoreOS Clair, and Dockscan.
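A complementary practice is to reference images by an immutable digest rather than a mutable tag such as latest, so the image you scanned and approved is exactly the one that runs. A minimal sketch of a deployment doing this (the registry, image name, and digest are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: foo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: foo
  template:
    metadata:
      labels:
        app: foo
    spec:
      containers:
      - name: foo
        # pin to an immutable digest instead of a mutable tag such as :latest
        image: registry.example.com/foo@sha256:0123456789abcdef0123456789abcdef0123456789abcdef0123456789abcdef
        ports:
        - containerPort: 8080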
As with any other software, containerized or not, make sure you keep your own code updated together with all your dependencies and middleware.
Keep in mind that, if a container is defective, Kubernetes will exacerbate the problem by deploying it across a number of machines.
Kubernetes Namespace Resource Quota
It is important to define quotas for the resources a namespace may consume. Unbounded resources can lead to total cluster unavailability in the case of DoS attacks or malfunctioning applications: a resource that is not bounded can draw all the available hardware resources to itself. Kubernetes offers several resource quota configurations, both for the “classic” resources of CPU, memory, and disk, and for Kubernetes objects such as pods, services, and volumes.
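As an illustration, here is a minimal ResourceQuota sketch for a hypothetical testing namespace (the limits are placeholders to adapt to your capacity):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: testing-quota
  namespace: testing
spec:
  hard:
    # "classic" compute resources
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    # total storage requested across all persistent volume claims
    requests.storage: 100Gi
    # Kubernetes object counts
    pods: "20"
    services: "10"
    persistentvolumeclaims: "10"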
Kubernetes Network Policies
In Kubernetes, you can define network policies for namespaces to segment your network and control access to your various pods and ports. A Network Policy lets you specify, among other things:
- which pods can communicate with which other pods,
- on which ports,
- in which direction (inbound/outbound), and
- allow-list rules for ingress and egress traffic.
For example, here’s a Network Policy definition that restricts outbound traffic from the foo pods to DNS ports only:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: foo-deny-egress
spec:
  podSelector:
    matchLabels:
      app: foo
  policyTypes:
  - Egress
  egress:
  # allow DNS resolution
  - ports:
    - port: 53
      protocol: UDP
    - port: 53
      protocol: TCP
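A common complement, sketched below, is a default-deny policy that blocks all inbound traffic to pods in a namespace unless another policy explicitly allows it:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
spec:
  # an empty pod selector matches every pod in the namespace
  podSelector: {}
  policyTypes:
  - Ingress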
You can find many other useful examples of network policies in this repo.
Endnote
Modern cloud-native architectures are booming, but at the same time they bring new security challenges. With more layers of platforms and virtualization come more potential Kubernetes security challenges: multiple layers of virtualization and utilities (host, VM, Docker daemon, and more), software-defined networking (overlay networks), software-defined storage (attached storage), and Kubernetes orchestration with its own host of services. Following best practices in configuring the environment and keeping it current will prevent many vulnerabilities.
In the next post, I will discuss logging and monitoring Kubernetes systems at runtime to identify and intercept the vulnerabilities that slipped past this first line of defence, along with some useful open source tooling to address them.