VANDEKERCKHOVE

A journey of ideas, insights, and inspiration.

Maximizing Pod Density on Amazon EKS with AL2023

When running workloads on Amazon EKS, the number of pods you can schedule per node directly impacts how efficiently you use your compute resources. Upstream Kubernetes defaults to a cap of 110 pods per node, and out of the box EKS never goes beyond that ceiling, no matter how large the instance. However, larger instances can support far more pods if configured correctly. In this post, we'll walk through how to increase pod density on EKS with Amazon Linux 2023 (AL2023) nodes, using the new nodeadm configuration method.

Why Pod Density Matters

Each pod scheduled onto a node consumes IP addresses and other networking resources. If you hit the default cap (110), you might end up adding more nodes than necessary, which increases infrastructure costs, operational overhead, and even licensing fees for observability tools like IBM Instana or other pay-per-node licensed products.

By tuning maxPods to match the capacity of your instance, you can:

  • Save costs by running fewer, larger nodes.
  • Reduce operational overhead by managing fewer nodes.
  • Optimize licensing if your monitoring or security platform charges per node.

Small vs. Large Instances: The Defaults

There are three important rules to understand:

  1. Without ENI prefix delegation: Each instance type has its own networking limit (ENIs × IPs per ENI). Small instances therefore support far fewer than 110 pods: a t3.small tops out at 11 pods, while an m5.large allows 29.
  2. With ENI prefix delegation enabled: AWS normalizes pod density so that all instance types can support up to 110 pods by default. This matches the upstream Kubernetes default.
  3. Larger instances can do more: An instance like m5.4xlarge actually has enough ENI/IP capacity to run up to 234 pods. But AWS still caps the kubelet default at 110. To go beyond that, you must explicitly override maxPods via nodeadm.
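You can verify these numbers yourself with AWS's max-pods-calculator.sh helper script from the amazon-eks-ami repository. A sketch (the script path may vary with the repo layout, the CNI version shown is an example, and the script needs AWS CLI credentials to query instance metadata):

```shell
# Fetch AWS's helper script from the amazon-eks-ami repository
curl -O https://raw.githubusercontent.com/awslabs/amazon-eks-ami/main/templates/al2/runtime/max-pods-calculator.sh
chmod +x max-pods-calculator.sh

# Default limit without prefix delegation: 8 ENIs x (30 - 1) IPs + 2 = 234
./max-pods-calculator.sh --instance-type m5.4xlarge --cni-version 1.18.1-eksbuild.1

# With prefix delegation, the theoretical ceiling is far higher
./max-pods-calculator.sh --instance-type m5.4xlarge --cni-version 1.18.1-eksbuild.1 \
  --cni-prefix-delegation-enabled
```

The underlying formula without prefix delegation is ENIs × (IPs per ENI − 1) + 2, which is where the 234 figure for m5.4xlarge comes from.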

Enabling Prefix Delegation

By default, the number of pods per node is limited by the maximum number of IP addresses the instance type can handle. Smaller instances often support far fewer than 110 pods. For larger instances, AWS caps the default maximum at 110 pods per node, even though the instance type technically supports more.

To increase this, AWS introduced prefix delegation for the VPC CNI plugin. Instead of assigning individual IPs to pods one by one, the CNI can allocate entire /28 prefixes to each ENI, massively increasing pod density.

You can enable prefix delegation like this:
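One common way (a sketch, assuming the AWS VPC CNI runs as the standard aws-node DaemonSet in kube-system) is to set the environment variables with kubectl:

```shell
# Switch the VPC CNI to /28 prefix assignment
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true

# Keep warm capacity around for fast pod startup
kubectl set env daemonset aws-node -n kube-system WARM_PREFIX_TARGET=1
kubectl set env daemonset aws-node -n kube-system WARM_ENI_TARGET=1
```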

  • ENABLE_PREFIX_DELEGATION=true tells the CNI to allocate /28 prefixes instead of individual IPs.
  • WARM_PREFIX_TARGET=1 keeps one full prefix pre-allocated for fast pod startup.
  • WARM_ENI_TARGET=1 keeps one unused ENI pre-attached for extra headroom.

Important: This setup is specific to the AWS VPC CNI. If you use another CNI (e.g., Calico, Cilium), the process is different since they do not rely on ENIs in the same way.

Of course, if you're like me and like to manage everything in Terraform, you can change this in your aws_eks_addon resource like this:
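A minimal sketch of the vpc-cni add-on resource with these settings (the cluster resource name is a placeholder; the CNI environment is passed through the add-on's configuration_values argument as JSON):

```hcl
resource "aws_eks_addon" "vpc_cni" {
  cluster_name = aws_eks_cluster.this.name # assumes a cluster resource named "this"
  addon_name   = "vpc-cni"

  # CNI environment variables are passed as JSON-encoded configuration values
  configuration_values = jsonencode({
    env = {
      ENABLE_PREFIX_DELEGATION = "true"
      WARM_PREFIX_TARGET       = "1"
      WARM_ENI_TARGET          = "1"
    }
  })
}
```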

Doing this will allow you to run up to 110 pods on supported instance types.

Modifying maxPods with nodeadm on Amazon Linux 2023

Before AL2023, EKS worker nodes used a bootstrap.sh script to configure the kubelet on startup. With AL2023-based EKS AMIs, bootstrap.sh has been removed. Nodes are now initialized by nodeadm, which consumes a Kubernetes-style YAML configuration (NodeConfig) provided through user data in MIME format.

Below is an example of such configuration:
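A sketch of such a NodeConfig (the cluster name, endpoint, certificate authority, and service CIDR are placeholders you must replace; 234 matches the m5.4xlarge capacity discussed above):

```yaml
---
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-cluster                                       # placeholder: your cluster name
    apiServerEndpoint: https://EXAMPLE.gr7.eu-west-1.eks.amazonaws.com # placeholder
    certificateAuthority: LS0tLS1CRUdJTi...                # placeholder: base64 CA bundle
    cidr: 172.20.0.0/16                                    # placeholder: cluster service CIDR
  kubelet:
    config:
      maxPods: 234                                         # override the 110 default
```

With EKS managed node groups, the cluster section is injected automatically and only the kubelet override is needed; for self-managed nodes you must supply the cluster details yourself.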

This YAML should be embedded in your launch template as MIME-encoded user data.

There is an official list of the maximum number of pods each instance type supports, which can be found here: https://github.com/awslabs/amazon-eks-ami/blob/main/templates/shared/runtime/eni-max-pods.txt
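To look up a single instance type without opening the file, a quick sketch:

```shell
# Print the entry for one instance type from AWS's published list
curl -s https://raw.githubusercontent.com/awslabs/amazon-eks-ami/main/templates/shared/runtime/eni-max-pods.txt \
  | grep -w m5.4xlarge
```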

In Terraform you can achieve this by passing the user data to your node group like this:
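A sketch using a launch template whose user data carries the MIME-wrapped NodeConfig (resource names, the IAM role, and the subnet variable are illustrative; assumes an EKS managed node group on the default AL2023 AMI, where nodeadm merges this kubelet override with the EKS-generated cluster config):

```hcl
locals {
  nodeadm_user_data = <<-EOT
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="BOUNDARY"

    --BOUNDARY
    Content-Type: application/node.eks.aws

    ---
    apiVersion: node.eks.aws/v1alpha1
    kind: NodeConfig
    spec:
      kubelet:
        config:
          maxPods: 234
    --BOUNDARY--
  EOT
}

resource "aws_launch_template" "high_density" {
  name_prefix = "eks-high-density-"
  user_data   = base64encode(local.nodeadm_user_data)
}

resource "aws_eks_node_group" "high_density" {
  cluster_name    = aws_eks_cluster.this.name # assumes an existing cluster resource
  node_group_name = "high-density"
  node_role_arn   = aws_iam_role.node.arn     # assumes an existing node role
  subnet_ids      = var.subnet_ids            # assumes a subnet list variable
  instance_types  = ["m5.4xlarge"]

  launch_template {
    id      = aws_launch_template.high_density.id
    version = aws_launch_template.high_density.latest_version
  }

  scaling_config {
    desired_size = 1
    min_size     = 1
    max_size     = 3
  }
}
```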

Putting it all together

  1. Enable ENI prefix delegation so your instances can support more IPs.
  2. Use nodeadm with AL2023 AMIs to override kubelet defaults.
  3. Set maxPods appropriately for your instance size (e.g., 234 for m5.4xlarge).

By doing this, you unlock the full pod capacity of your nodes, reduce your cluster size, and potentially save significantly on costs and licensing.

Important: Test Test Test

While larger instances can support more pods than the default 110, it’s important to test workloads before applying these settings in production. Running nodes near their maximum pod density can:

  • Increase CPU, memory, and network contention per node.
  • Make pod startup slower if ENIs/IPs aren’t pre-warmed.
  • Amplify the impact of a node failure since more pods are concentrated on a single node.
  • Potentially stress the kubelet, leading to higher CPU usage, slower reconciliation, or delayed pod lifecycle events if the node is overloaded.

Recommendation: Start with a smaller node group or staging cluster, gradually increase maxPods, and monitor metrics like pod startup time, kubelet CPU usage, and network throughput.

Conclusion

With AL2023, Amazon EKS changed how worker nodes are bootstrapped. While the removal of bootstrap.sh caused confusion at first, the new nodeadm mechanism provides a clean and Kubernetes-native way to configure nodes. By combining it with ENI prefix delegation and carefully tuning maxPods, you can maximize pod density per node, simplify your cluster, and optimize costs.

Remember: small instances default below 110, larger ones are capped at 110, and you need nodeadm to go beyond that.

⚠️ Important: Always test performance when increasing maxPods above the default. High pod density can stress the kubelet, increase resource contention, and affect pod startup times. Gradual testing in a staging environment helps ensure stability before applying to production.

