Pod Density Estimator

Estimate Kubernetes pod density from workload requests, allocatable node capacity, utilization targets, and pod-slot limits.

Input

Enter the CPU request for a single workload pod.
Enter the memory request for a single workload pod.
Optionally enter a desired replica count to check fit across the cluster.
Use allocatable worker-node CPU, not raw VM size.
Use allocatable worker-node memory, not raw VM memory.
Enter the number of worker nodes in the cluster.
Enter the target utilization; this reduces the usable planning capacity per node.
Enter the pod-slot ceiling per node from your platform or cluster policy.
Reserve pod slots for system pods, daemonsets, or cluster services.

Result

Estimated pods per node, cluster-wide density, limiting factor, replica fit status, and residual headroom.

Things to check

  • CPU and memory ceilings are far apart. That often signals an imbalanced workload profile or node shape.
  • Residual headroom per node is very small. Real scheduling behavior may feel tighter than the estimate suggests.
Pods per node 24
Pods across cluster 72
Limiting factor CPU
Workload profile CPU-heavy
Density status Comfortable
Replica fit status Fits target
CPU-based pods per node 24
Memory-based pods per node 48
Pod-slot-based pods per node 35
Available workload pod slots / node 35
Usable vCPU per node 6.00 vCPU
Usable memory per node 24.00 GiB

Replica target fit

Desired replicas 24
Spare replica capacity 48
CPU spare after target fit 12.00 vCPU
Memory spare after target fit 60.00 GiB
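The replica-fit figures above can be reproduced with a few lines of arithmetic. This is a sketch using the worked-example values (assumed inputs, not the tool's internals): 24 desired replicas, 3 nodes, 24 pods per node, 0.25 vCPU and 0.50 GiB requests, and 6.00 vCPU / 24.00 GiB of usable capacity per node.

```python
# Replica-fit check (sketch; all figures are the example's assumed inputs).
pods_per_node, nodes = 24, 3
desired_replicas = 24
pod_cpu, pod_mem = 0.25, 0.50                  # requests per pod: vCPU, GiB
usable_cpu_per_node, usable_mem_per_node = 6.00, 24.00

cluster_capacity = pods_per_node * nodes              # 72 pods
fits = desired_replicas <= cluster_capacity           # True -> "Fits target"
spare_replicas = cluster_capacity - desired_replicas  # 48

# Spare resources after placing the target, measured against usable capacity.
cpu_spare = nodes * usable_cpu_per_node - desired_replicas * pod_cpu  # 12.00 vCPU
mem_spare = nodes * usable_mem_per_node - desired_replicas * pod_mem  # 60.00 GiB
```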

Efficiency signals

CPU efficiency 100 %
Memory efficiency 50 %
Slot efficiency 69 %
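Each efficiency signal is the packed demand divided by the corresponding usable ceiling. A sketch with the example's assumed figures:

```python
# Efficiency signals (sketch; figures are the worked example's assumptions).
pods_per_node = 24
pod_cpu, pod_mem = 0.25, 0.50        # requests per pod: vCPU, GiB
usable_cpu, usable_mem = 6.00, 24.00 # per-node usable capacity after the 75% target
workload_slots = 35                  # pod-slot ceiling minus reserved slots

cpu_eff = pods_per_node * pod_cpu / usable_cpu   # 1.00  -> 100 %
mem_eff = pods_per_node * pod_mem / usable_mem   # 0.50  ->  50 %
slot_eff = pods_per_node / workload_slots        # ~0.686 -> 69 %
```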

Headroom after full packing

CPU headroom per node 0.00 vCPU
Memory headroom per node 12.00 GiB
CPU headroom across cluster 0.00 vCPU
Memory headroom across cluster 36.00 GiB

What this means

With a target utilization of 75%, the planner treats each node as having 6.00 usable vCPU and 24.00 GiB of usable memory. Under those assumptions, the estimated workload density is 24 pods per node and 72 pods across the cluster.

The current limiting factor is CPU. CPU allows up to 24 pods per node, memory allows up to 48, and pod slots allow up to 35.

From a max-pods ceiling of 40, the planner reserves 5 pod slots per node for system or cluster overhead, leaving 35 workload pod slots per node.

This workload behaves like a CPU-heavy profile. Higher density will depend much more on available CPU than on available memory.

Density breakdown

  1. Start with one workload profile: 0.25 vCPU and 0.50 GiB per pod.
  2. Apply target utilization of 75% to allocatable node capacity: usable node capacity becomes 6.00 vCPU and 24.00 GiB.
  3. Calculate CPU-based pods per node: floor(6.00 / 0.25) = 24.
  4. Calculate memory-based pods per node: floor(24.00 / 0.50) = 48.
  5. Calculate pod-slot-based pods per node: 40 max pods - 5 reserved = 35.
  6. Take the smallest ceiling as the practical pods-per-node result: min(24, 48, 35) = 24.
  7. Multiply by 3 worker node(s) to get cluster-wide workload density: 24 × 3 = 72 pods.
  8. Compare desired replica target of 24 against cluster capacity of 72: target fits.
  9. After full packing, remaining headroom per node is 0.00 vCPU and 12.00 GiB.
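The nine steps above can be sketched as a short Python calculation. The input values are the worked example's assumptions (0.25 vCPU / 0.50 GiB requests, 8 vCPU / 32 GiB allocatable nodes implied by the 75% target, a 40-pod ceiling with 5 reserved slots, 3 nodes); the logic mirrors the breakdown, not the tool's actual implementation.

```python
import math

# Assumed inputs mirroring the worked example above.
pod_cpu, pod_mem = 0.25, 0.50      # requests per pod: vCPU, GiB
node_cpu, node_mem = 8.0, 32.0     # allocatable per worker node
target_util = 0.75                 # planning utilization target
max_pods, reserved_slots = 40, 5   # pod-slot ceiling, system reserve
nodes = 3

# Step 2: apply the utilization target to allocatable capacity.
usable_cpu = node_cpu * target_util   # 6.00 vCPU
usable_mem = node_mem * target_util   # 24.00 GiB

# Steps 3-5: per-resource pod ceilings.
cpu_pods = math.floor(usable_cpu / pod_cpu)   # 24
mem_pods = math.floor(usable_mem / pod_mem)   # 48
slot_pods = max_pods - reserved_slots         # 35

# Steps 6-7: practical density is the smallest ceiling, scaled to the cluster.
pods_per_node = min(cpu_pods, mem_pods, slot_pods)   # 24
cluster_pods = pods_per_node * nodes                 # 72

if pods_per_node == cpu_pods:
    limiting = "CPU"
elif pods_per_node == mem_pods:
    limiting = "memory"
else:
    limiting = "pod slots"

# Step 9: residual headroom per node after full packing.
cpu_headroom = usable_cpu - pods_per_node * pod_cpu  # 0.00 vCPU
mem_headroom = usable_mem - pods_per_node * pod_mem  # 12.00 GiB
```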

Common presets

Pod density quick guide

Practical notes for request-based workload packing across Kubernetes worker nodes.

Requests-based model

This tool plans pod density from CPU and memory requests, not limits, so the result reflects schedulable packing assumptions.

Replica fit

The replica-fit section checks whether the desired workload replica count fits inside the modeled cluster density envelope.

Pod-slot ceiling

Real density may be constrained by max pods per node even when CPU and memory still have room available.

Planning assumptions

These assumptions define exactly what this version of the estimator models.

  • Inputs represent allocatable node capacity, not raw instance size.
  • Density is modeled from requests, not limits.
  • Target utilization reduces usable planning capacity per node.
  • Reserved pod slots reduce application pod capacity per node.
  • The tool evaluates CPU, memory, and pod-slot ceilings.
  • This tool does not simulate taints, topology spread, daemonset CPU/memory reservations, or scheduler-level placement rules.

Common use cases

Typical Kubernetes capacity and packing scenarios.

Estimate how many replicas of a single service profile can fit per node and across the cluster.

Identify whether practical density is limited by CPU, memory, or pod-slot ceilings.

Check whether a target replica count actually fits before changing node shape or cluster size.