Pod Density Estimator
Estimate Kubernetes pod density from workload requests, allocatable node capacity, utilization targets, and pod-slot limits.
Input
Per-pod CPU and memory requests, allocatable node CPU and memory, target utilization, max pods per node, reserved pod slots, worker node count, and a desired replica target.
Result
Estimated pods per node, cluster-wide density, limiting factor, replica fit status, and residual headroom.
Things to check
- CPU and memory ceilings that are far apart often signal an imbalanced workload profile or node shape.
- Very small residual headroom per node means real scheduling behavior may feel tighter than the estimate suggests.
Replica target fit
Efficiency signals
Headroom after full packing
What this means
With a target utilization of 75%, the planner treats each node as having 6.00 usable vCPU and 24.00 GiB of usable memory. Under those assumptions, the estimated workload density is 24 pods per node and 72 pods across the cluster.
The current limiting factor is CPU. CPU allows up to 24 pods per node, memory allows up to 48, and pod slots allow up to 35.
From a max-pods ceiling of 40, the planner reserves 5 pod slots per node for system or cluster overhead, leaving 35 workload pod slots per node.
This workload behaves like a CPU-heavy profile. Higher density will depend much more on available CPU than on available memory.
Density breakdown
- Start with one workload profile: 0.25 vCPU and 0.50 GiB per pod.
- Apply target utilization of 75% to allocatable node capacity: usable node capacity becomes 6.00 vCPU and 24.00 GiB.
- Calculate CPU-based pods per node: floor(6.00 / 0.25) = 24.
- Calculate memory-based pods per node: floor(24.00 / 0.50) = 48.
- Calculate pod-slot-based pods per node: 40 max pods - 5 reserved = 35.
- Take the smallest ceiling as the practical pods-per-node result: min(24, 48, 35) = 24.
- Multiply by 3 worker nodes to get cluster-wide workload density: 24 × 3 = 72 pods.
- Compare desired replica target of 24 against cluster capacity of 72: target fits.
- After full packing, remaining headroom per node is 0.00 vCPU and 12.00 GiB.
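As a sketch, the steps above can be reproduced in a few lines of Python. The 8 vCPU / 32 GiB allocatable figures are inferred from the 75% target utilization and the 6.00 vCPU / 24.00 GiB usable values; all other numbers come straight from the breakdown.

```python
from math import floor

def pod_density(cpu_req, mem_req, node_cpu, node_mem,
                target_util, max_pods, reserved_slots, nodes):
    """Estimate practical pods per node from the three ceilings."""
    # Target utilization shrinks allocatable capacity to usable planning capacity.
    usable_cpu = node_cpu * target_util
    usable_mem = node_mem * target_util
    # One ceiling per constraint; the smallest one wins.
    ceilings = {
        "cpu": floor(usable_cpu / cpu_req),
        "memory": floor(usable_mem / mem_req),
        "pod slots": max_pods - reserved_slots,
    }
    limiter = min(ceilings, key=ceilings.get)
    pods_per_node = ceilings[limiter]
    return {
        "pods_per_node": pods_per_node,
        "cluster_pods": pods_per_node * nodes,
        "limiting_factor": limiter,
        # Residual headroom per node after full packing.
        "cpu_headroom": usable_cpu - pods_per_node * cpu_req,
        "mem_headroom": usable_mem - pods_per_node * mem_req,
    }

# Worked example from the breakdown above.
result = pod_density(0.25, 0.50, 8.0, 32.0, 0.75, 40, 5, 3)
print(result)
```

Running it reproduces the figures above: 24 pods per node, 72 cluster-wide, CPU as the limiting factor, and 0.00 vCPU / 12.00 GiB of residual headroom per node.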
Pod density quick guide
Practical notes for request-based workload packing across Kubernetes worker nodes.
Requests-based model
This tool plans pod density from CPU and memory requests, not limits, so the result reflects schedulable packing assumptions.
Replica fit
The replica-fit section checks whether the desired workload replica count fits inside the modeled cluster density envelope.
Pod-slot ceiling
Real density may be constrained by max pods per node even when CPU and memory still have room available.
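A hypothetical illustration of this effect, using the same 6.00 vCPU / 24.00 GiB of usable node capacity as above but a much lighter 0.125 vCPU / 0.25 GiB pod profile (these request figures are made up for illustration, not taken from the estimate above):

```python
from math import floor

# Light pods on the same 6.00 vCPU / 24.00 GiB of usable node capacity.
by_cpu = floor(6.00 / 0.125)   # 48 pods by CPU
by_mem = floor(24.00 / 0.25)   # 96 pods by memory
by_slots = 40 - 5              # 35 workload pod slots
pods_per_node = min(by_cpu, by_mem, by_slots)
print(pods_per_node)  # 35: the pod-slot ceiling binds first
```

Here CPU and memory both have plenty of room, yet density is capped at 35 pods per node by the slot ceiling alone.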
Planning assumptions
These assumptions define exactly what this version of the estimator models.
- Inputs represent allocatable node capacity, not raw instance size.
- Density is modeled from requests, not limits.
- Target utilization reduces usable planning capacity per node.
- Reserved pod slots reduce application pod capacity per node.
- The tool evaluates CPU, memory, and pod-slot ceilings.
- This tool does not simulate taints, topology spread constraints, DaemonSet CPU/memory reservations, or scheduler-level placement rules.
Common use cases
Typical Kubernetes capacity and packing scenarios.
Estimate how many replicas of a single service profile can fit per node and across the cluster.
Identify whether practical density is limited by CPU, memory, or pod-slot ceilings.
Check whether a target replica count actually fits before changing node shape or cluster size.