10 Indispensable Amazon EKS Features and Updates You Ought to Know
Amazon’s Elastic Kubernetes Service (EKS) is the company’s managed option for Kubernetes clusters. We have several articles on using AWS and Kubernetes on our blog, and felt there was a need to highlight some of the key features that AWS EKS offers. Many of these features have been rolled out or updated over the last year.
We have mentioned some of these features in other posts, such as our comparison of EKS with AKS and GKE. The following is a short and by no means exhaustive list of key EKS features, along with excerpts from larger tutorials on setting them up:
1. Managed Node Groups
Managed node groups (MNGs) automate EC2 node provisioning for Kubernetes clusters in EKS. They were rolled out starting with Kubernetes 1.14, and can be added to existing clusters through the AWS CLI, eksctl, the AWS API, and AWS CloudFormation.
AWS EKS managed node groups don't just manage your EC2 instances; they create them from the outset. All managed nodes are provisioned by Amazon EC2 Auto Scaling groups that span the subnets you specify, and every instance is deployed on the Amazon Linux 2 AMI. There is no premium for using MNGs; you pay only for the resources allocated to them. Node groups can be launched in private or public subnets, and a single cluster can contain multiple managed node groups.
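As an illustration, a managed node group can be added to an existing cluster with eksctl. This is a minimal sketch; the cluster name, node group name, instance type, and sizing values are placeholders to adjust for your own setup:
eksctl create nodegroup --cluster my-cluster --name my-mng --managed --node-type t3.medium --nodes 2 --nodes-min 1 --nodes-max 4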
EKS tags further organize a group's resources so that nodes can be managed effectively by the EKS Kubernetes Cluster Autoscaler. The instances in these groups also carry labels recognizably prefixed with eks.amazonaws.com.
2. EKS Resource Tagging
Amazon EKS resource tagging manages the tags within the EKS clusters mentioned in the previous section. You can add custom metadata tags that will show up when you export your EKS logs; these are not automatic and must be configured. Tags can be applied to new resources through the Amazon EKS console, or via the tags parameter in the EKS API, the AWS SDKs, or the AWS CLI.
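For example, an existing cluster can be tagged from the AWS CLI with the eks tag-resource command; the ARN and tag values below are placeholders:
aws eks tag-resource --resource-arn arn:aws:eks:us-east-1:111122223333:cluster/my-cluster --tags team=devops,env=staging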
3. EKS Control Plane Logging
AWS EKS control plane logs are audit and diagnostic logs delivered from the EKS control plane to CloudWatch Logs. They include five main types: Kubernetes API server component logs (api), audit, authenticator, controllerManager, and scheduler. Each type can be activated or disabled via the EKS API, AWS CLI, or management console. The costs include standard EKS pricing and CloudWatch log ingestion pricing, plus other resources such as EC2 instances deployed.
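For example, all five log types can be enabled at once from the AWS CLI; a minimal sketch, assuming a cluster named my-cluster in your region:
aws eks update-cluster-config --region region-code --name my-cluster --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'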
4. EKS Cluster Autoscaler
The AWS Kubernetes Cluster Autoscaler automates the creation and deletion of nodes based on demand. This comes in handy when pods fail to schedule or more resources are needed for sudden usage spikes. It will also remove nodes that meet predefined criteria for being considered under-utilized, evicting their pods so they can be rescheduled elsewhere. Nodes are created based on predefined parameters and can be manually reconfigured later. Configurations are available for different EC2 Auto Scaling groups of instances.
The following eksctl command will launch a cluster with Auto Scaling group access (we cover eksctl itself later in this list):
eksctl create cluster --name my-cluster --version 1.15 --managed --asg-access
To deploy the Autoscaler, use this command:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml
To add the cluster-autoscaler.kubernetes.io/safe-to-evict annotation, enter this command:
kubectl -n kube-system annotate deployment.apps/cluster-autoscaler cluster-autoscaler.kubernetes.io/safe-to-evict="false"
Edit the deployment with this command:
kubectl -n kube-system edit deployment.apps/cluster-autoscaler
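The edit step is typically where you set your own cluster name in the Autoscaler's --node-group-auto-discovery flag. Once the deployment is running, you can confirm the Autoscaler is watching the cluster by tailing its logs:
kubectl -n kube-system logs -f deployment.apps/cluster-autoscaler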
5. EKS Cluster Access Points
By default, the cluster's Kubernetes API endpoint is public to the internet, with RBAC and AWS IAM controls in place to restrict access. EKS also lets you enable private access and disable public access, so you need to decide how the cluster's access points should be configured. When you turn on private access, the service creates a Route 53 private hosted zone and links it to the cluster's virtual private cloud (VPC). Ensure the IAM principal enabling private access has the route53:AssociateVPCWithHostedZone permission.
The DHCP options set for the VPC should include AmazonProvidedDNS in its domain name servers list, and the following settings should be enabled in the VPC:
enableDnsHostnames: true
enableDnsSupport: true
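Endpoint access itself can be adjusted from the AWS CLI. A minimal sketch, assuming a cluster named my-cluster, that enables the private endpoint and disables the public one:
aws eks update-cluster-config --region region-code --name my-cluster --resources-vpc-config endpointPublicAccess=false,endpointPrivateAccess=true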
6. eksctl
AWS's eksctl is a command line tool made especially for EKS; it works alongside the AWS CLI and kubectl to create and manage clusters. To add it, first make sure that your AWS CLI and AWS credentials are up to date:
pip install awscli --upgrade --user
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: region-code
Default output format [None]: json
After this, install eksctl itself. On macOS, use Homebrew:
brew install weaveworks/tap/eksctl
Or, on Linux, download and unpack the latest release with cURL:
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
Then move the extracted binary into your PATH:
sudo mv /tmp/eksctl /usr/local/bin
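Finally, confirm that the binary is on your PATH and check which version was installed:
eksctl version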
7. Cluster Limits
You can create up to 100 clusters per region in EKS, up from 50 just a year ago. The same maximum applies to label pairs per Fargate cluster, the number of concurrent Fargate pods (per region, per account), and the number of nodes within each managed node group.
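If you want to see the limits that currently apply to your own account, you can query them through the Service Quotas CLI; a sketch, assuming eks is the relevant service code:
aws service-quotas list-service-quotas --service-code eks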
8. Deploy the Kubernetes Metrics Server on EKS
The Kubernetes Metrics Server works with any Kubernetes deployment. On AWS, it can be deployed to monitor EKS clusters and collect cluster metrics for CPU and memory usage. You can deploy it by applying the project's components.yaml manifest:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml
You should then verify the deployment:
kubectl get deployment metrics-server -n kube-system
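Once the Metrics Server is up, the CPU and memory figures it collects are available through kubectl; for example:
kubectl top nodes
kubectl top pods --all-namespaces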
9. Instance Worker Nodes
Worker nodes are deployed on a cluster-specific basis and must be created alongside the original cluster. All instance worker nodes are A1 instances, deployed on either Kubernetes 1.13 or 1.14; as of this writing, those are the only two versions supported.
10. AWS Outpost Clusters
You can now run EKS clusters on AWS Outposts, AWS's on-premises racks, as of Kubernetes 1.14.8 and Amazon EKS platform version eks.5. The worker nodes mentioned above come in handy here because they handle low-latency workloads well. For this to work, make sure Outposts is fully set up in your on-premises data center, that you have a reliably consistent network connection back to AWS, and that your region supports EKS.