When running workloads on Amazon EKS, the number of pods you can schedule per node directly impacts how efficiently you use your compute resources. By default, Kubernetes caps the number of pods per node at 110, and the EKS AMIs enforce that cap out of the box, even on large instances that could handle more. In this post, we’ll walk through how to increase pod density on EKS with Amazon Linux 2023 (AL2023) nodes, using the new nodeadm configuration method.
Each pod scheduled onto a node consumes IP addresses and other networking resources. If you hit the default cap of 110, you may end up adding more nodes than necessary, which increases compute costs, operational overhead, and even licensing fees for pay-per-node observability tools such as IBM Instana.
By tuning maxPods to match the capacity of your instance, you can:
- pack more pods onto each node and use your compute more efficiently,
- run fewer nodes for the same workload, and
- cut infrastructure and per-node licensing costs.
There are three important rules to understand:
- Smaller instance types support fewer than 110 pods by default.
- Larger instance types are capped at 110 pods by default, even when they can support more.
- Going beyond 110 requires raising the kubelet’s maxPods, which on AL2023 means using nodeadm.
By default, the number of pods per node is limited by the maximum number of IP addresses the instance type can handle. Smaller instances often support far fewer than 110 pods. For larger instances, AWS caps the default maximum at 110 pods per node, even though the instance type technically supports more.
To increase this, AWS introduced prefix delegation for the VPC CNI plugin. Instead of assigning individual IPs to pods one by one, the CNI can allocate entire /28 prefixes to each ENI, massively increasing pod density.
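To see the difference prefix delegation makes, you can work through the VPC CNI capacity formulas by hand. The sketch below uses the ENI figures for an m5.4xlarge (8 ENIs, 30 IPv4 addresses per ENI); look up the values for your own instance type in the EC2 documentation.

```shell
# Pod capacity formulas for the AWS VPC CNI
# (m5.4xlarge: 8 ENIs, 30 IPv4 addresses per ENI)
enis=8
ips_per_eni=30

# Standard mode: one secondary IP per pod; the first IP of each ENI is
# reserved, and +2 accounts for host-network pods (aws-node, kube-proxy)
standard=$(( enis * (ips_per_eni - 1) + 2 ))
echo "standard: $standard pods"        # standard: 234 pods

# Prefix delegation: each secondary IP slot holds a /28 (16 addresses)
prefix=$(( enis * (ips_per_eni - 1) * 16 + 2 ))
echo "prefix delegation: $prefix pods" # prefix delegation: 3714 pods
```

The theoretical prefix-delegation number far exceeds what a kubelet can realistically handle; AWS’s general guidance is to stay at or below 110 pods on instances with fewer than 30 vCPUs and 250 on larger ones.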
You can enable prefix delegation like this:
# Enable prefix delegation in the aws-node DaemonSet
kubectl set env daemonset aws-node -n kube-system ENABLE_PREFIX_DELEGATION=true
kubectl set env daemonset aws-node -n kube-system WARM_PREFIX_TARGET=1
kubectl set env daemonset aws-node -n kube-system WARM_ENI_TARGET=1
- ENABLE_PREFIX_DELEGATION=true tells the CNI to allocate /28 prefixes.
- WARM_PREFIX_TARGET=1 keeps one full prefix ready for fast pod startup.
- WARM_ENI_TARGET=1 keeps one unused ENI pre-attached for faster pod startup.
Important: This setup is specific to the AWS VPC CNI. If you use another CNI (e.g., Calico, Cilium), the process is different since they do not rely on ENIs in the same way.
Of course, if you’re like me and prefer to manage everything in Terraform, you can set this in your aws_eks_addon resource like this:
resource "aws_eks_addon" "vpc-cni" {
  cluster_name = aws_eks_cluster.eks.name
  addon_name   = "vpc-cni"

  configuration_values = jsonencode({
    env = {
      # Reference: https://docs.aws.amazon.com/eks/latest/userguide/cni-increase-ip-addresses.html
      ENABLE_PREFIX_DELEGATION = "true"
      WARM_PREFIX_TARGET       = "1"
      WARM_ENI_TARGET          = "1"
    },
    # Enable Network Policies
    enableNetworkPolicy = "true"
  })
}
Prefix delegation lifts the networking bottleneck, so supported instances can reach the 110-pod default. To go beyond 110, you also need to raise the kubelet’s maxPods.
nodeadm

Before AL2023, EKS worker nodes used a bootstrap.sh script to configure the kubelet on startup. With AL2023-based EKS AMIs, bootstrap.sh has been removed. Nodes are now initialized by nodeadm, which consumes a Kubernetes-style YAML configuration (NodeConfig) provided through user data in MIME format.
Below is an example of such configuration:
apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-cluster
    apiServerEndpoint: https://xxxxx.yy.us-east-1.eks.amazonaws.com
    certificateAuthority: <base64-encoded-CA>
    cidr: 10.100.0.0/16
  kubelet:
    config:
      maxPods: 234
This YAML should be embedded in your launch template as MIME-encoded user data.
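If you want to see what that encoding step produces, you can build the user data by hand; this is the same transformation Terraform’s base64encode() performs. The NodeConfig body below is a trimmed illustrative sketch, not the full example above.

```shell
# Wrap the NodeConfig in a MIME multipart document (trimmed sketch)
mime_doc='MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="BOUNDARY"

--BOUNDARY
Content-Type: application/node.eks.aws

apiVersion: node.eks.aws/v1alpha1
kind: NodeConfig
spec:
  cluster:
    name: my-cluster
  kubelet:
    config:
      maxPods: 234
--BOUNDARY--'

# base64-encode it for the launch template (GNU coreutils base64 assumed)
user_data=$(printf '%s' "$mime_doc" | base64)

# Decoding it back shows the original document
printf '%s' "$user_data" | base64 -d | head -n 1   # MIME-Version: 1.0
```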
There is an official list of the maximum number of pods each instance type supports, which can be found here: https://github.com/awslabs/amazon-eks-ami/blob/main/templates/shared/runtime/eni-max-pods.txt
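For a quick lookup without scrolling through the file, you can grep it for your instance type. The sketch below uses a small local excerpt of the list (values copied from the file linked above) so it works offline.

```shell
# Excerpt of eni-max-pods.txt (instance type, recommended max pods);
# download the full file from the amazon-eks-ami repo for other types
cat > /tmp/eni-max-pods.txt <<'EOF'
m5.xlarge 58
m5.2xlarge 58
m5.4xlarge 234
m5.8xlarge 234
EOF

# Print the recommended maxPods for a given instance type
awk -v type="m5.4xlarge" '$1 == type { print $2 }' /tmp/eni-max-pods.txt   # 234
```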
In Terraform, you can achieve this by passing the user data to your node group like this:
locals {
  nodeadm_yaml = <<-EOT
    MIME-Version: 1.0
    Content-Type: multipart/mixed; boundary="BOUNDARY"

    --BOUNDARY
    Content-Type: application/node.eks.aws

    apiVersion: node.eks.aws/v1alpha1
    kind: NodeConfig
    spec:
      cluster:
        name: my-cluster
        apiServerEndpoint: https://xxxxx.yy.us-east-1.eks.amazonaws.com
        certificateAuthority: <base64-encoded-CA>
        cidr: 10.100.0.0/16
      kubelet:
        config:
          maxPods: 234
    --BOUNDARY--
  EOT
}
resource "aws_launch_template" "maxpods_eks" {
  name = "eks-maxpods-worker-${var.env}"

  vpc_security_group_ids = [
    aws_security_group.eks.id,
    aws_eks_cluster.eks.vpc_config[0].cluster_security_group_id
  ]

  block_device_mappings {
    device_name = "/dev/xvda"
    ebs {
      volume_size = 100
    }
  }

  tag_specifications {
    resource_type = "instance"
    tags = {
      Name = "eks-maxpods-worker-${var.env}"
      env  = var.env
    }
  }

  user_data     = base64encode(local.nodeadm_yaml)
  instance_type = var.eks_maxpods_instance_type
}
resource "aws_eks_node_group" "maxpods_eks" {
  cluster_name    = aws_eks_cluster.eks.name
  node_role_arn   = data.aws_iam_role.eks_nodegroup.arn
  subnet_ids      = [module.vpc.private_subnets[0], module.vpc.private_subnets[1]]
  node_group_name = "eks-maxpods-workers-${var.env}"
  capacity_type   = "ON_DEMAND"

  launch_template {
    name    = aws_launch_template.maxpods_eks.name
    version = aws_launch_template.maxpods_eks.latest_version
  }

  scaling_config {
    min_size     = 1
    max_size     = 3
    desired_size = 1
  }

  labels = { role = "workernode" }
}
To recap:
- Enable prefix delegation on the VPC CNI so the network can supply enough IP addresses.
- Use nodeadm with AL2023 AMIs to override kubelet defaults.
- Set maxPods appropriately for your instance size (e.g., 234 for m5.4xlarge).
By doing this, you unlock the full pod capacity of your nodes, reduce your cluster size, and potentially save significantly on costs and licensing.
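The node-count savings are easy to quantify. A rough sketch for a hypothetical 500-pod workload (illustrative numbers, ignoring daemonsets and headroom):

```shell
# Nodes needed for 500 pods at the default cap vs. a tuned maxPods
pods=500
nodes_default=$(( (pods + 110 - 1) / 110 ))   # ceil(500/110)
nodes_tuned=$((   (pods + 234 - 1) / 234 ))   # ceil(500/234)
echo "default: $nodes_default nodes, tuned: $nodes_tuned nodes"
# default: 5 nodes, tuned: 3 nodes
```

With pay-per-node licensing, that difference compounds on top of the EC2 savings.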
While larger instances can support more pods than the default 110, it’s important to test workloads before applying these settings in production. Running nodes near their maximum pod density can:
- put extra load on the kubelet and container runtime,
- increase contention for CPU, memory, and network bandwidth, and
- slow down pod startup times.
Recommendation: Start with a smaller node group or staging cluster, gradually increase maxPods, and monitor metrics like pod startup time, kubelet CPU usage, and network throughput.
With AL2023, Amazon EKS changed how worker nodes are bootstrapped. While the removal of bootstrap.sh caused confusion at first, the new nodeadm mechanism provides a clean and Kubernetes-native way to configure nodes. By combining it with ENI prefix delegation and carefully tuning maxPods, you can maximize pod density per node, simplify your cluster, and optimize costs.
Remember: small instances default below 110, larger ones are capped at 110, and you need nodeadm to go beyond that.
⚠️ Important: Always test performance when increasing maxPods above the default. High pod density can stress the kubelet, increase resource contention, and affect pod startup times. Gradual testing in a staging environment helps ensure stability before applying to production.