K8s Sizing Calculator

Estimate Kubernetes worker node count from workload requests, node capacity, reserve overhead, and target utilization.

Input

Example: a CPU limit ratio of 2.0 means the CPU limit is 2x the request.
Example: a memory limit ratio of 1.5 means the memory limit is 1.5x the request.

This is the planning ceiling for schedulable requested capacity, not a hard runtime guarantee.
Result

Requested demand, effective node capacity, required worker count, and practical planning summaries.

Total pods: 36
Requested CPU: 9,000 m (9.00 cores)
Requested memory: 18,432 MiB (18.00 GiB)
Effective CPU / node: 5,250 m (5.25 cores)
Effective memory / node: 22,221 MiB (21.70 GiB)
Required workers: 2

What this means

The current planning inputs produce 36 total pods, 9,000 millicores of requested CPU, and 18,432 MiB of requested memory.

With reserve overhead applied and node utilization capped at 70%, each worker contributes 5,250 effective millicores and 22,221 effective MiB of schedulable requested capacity. The required worker count is driven by the stricter of CPU and memory pressure.
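The "stricter of CPU and memory pressure" rule can be sketched in a few lines of Python, using the requested totals and effective per-node figures from the result above (the ceiling of each resource's demand-to-capacity ratio, taking the larger of the two):

```python
import math

def required_workers(req_cpu_m, req_mem_mib, eff_cpu_m, eff_mem_mib):
    """Workers needed so requested demand fits under effective per-node capacity."""
    by_cpu = math.ceil(req_cpu_m / eff_cpu_m)      # workers needed for CPU pressure
    by_mem = math.ceil(req_mem_mib / eff_mem_mib)  # workers needed for memory pressure
    return max(by_cpu, by_mem)                     # the stricter resource wins

# Figures from the result: 9,000 m / 18,432 MiB demand vs 5,250 m / 22,221 MiB per node
print(required_workers(9000, 18432, 5250, 22221))  # → 2
```

Here CPU is the binding constraint (ceil(9000/5250) = 2 workers) while memory alone would fit on one node (ceil(18432/22221) = 1).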

Use this as an early sizing model for Kubernetes worker count, then validate the result against workload burst behavior, daemonset footprint, pod spread rules, disruption requirements, autoscaling policy, and failure-domain design.

Kubernetes sizing quick guide

Practical reminders for early cluster sizing and node-count estimation.

Requests drive scheduling

Worker count planning should start from pod requests, because requests determine placement pressure in the scheduler.
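Aggregating that placement pressure is a simple sum over replicas times per-pod requests. A minimal sketch, with hypothetical per-pod figures chosen to be consistent with the totals above (36 pods at 250 m CPU and 512 MiB each):

```python
# Hypothetical workload groups: replicas x per-pod requests (assumed values)
workloads = [
    {"replicas": 24, "cpu_m": 250, "mem_mib": 512},
    {"replicas": 12, "cpu_m": 250, "mem_mib": 512},
]

total_pods = sum(w["replicas"] for w in workloads)
total_cpu_m = sum(w["replicas"] * w["cpu_m"] for w in workloads)
total_mem_mib = sum(w["replicas"] * w["mem_mib"] for w in workloads)

print(total_pods, total_cpu_m, total_mem_mib)  # → 36 9000 18432
```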

Reserve node overhead

Kubelet, OS, daemonsets, and operational headroom reduce usable node capacity. Raw node size is not schedulable capacity.
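A minimal sketch of that reduction, assuming a hypothetical 8-core / 32 GiB node with 500 m CPU and 1 GiB of memory reserved, capped at the 70% target utilization used above (the node shape and reserve figures are assumptions; they happen to reproduce the effective capacities shown in the result):

```python
def effective_capacity(raw, reserved, target_utilization):
    """Schedulable requested capacity after reserve overhead and a utilization cap."""
    return (raw - reserved) * target_utilization

eff_cpu_m = effective_capacity(8000, 500, 0.70)      # (8000 - 500) * 0.7 = 5250.0 m
eff_mem_mib = effective_capacity(32768, 1024, 0.70)  # (32768 - 1024) * 0.7 ≈ 22221 MiB
print(round(eff_cpu_m), round(eff_mem_mib))  # → 5250 22221
```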

Limits are context, not sizing truth

Limits matter for runtime contention analysis, but request-based planning is the safer default for deterministic early sizing.
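The ratio inputs from the form translate to per-pod limits as ratio times request. A sketch using the example ratios from the input section (2.0 for CPU, 1.5 for memory) and assumed per-pod requests of 250 m / 512 MiB:

```python
def limits_from_requests(cpu_req_m, mem_req_mib, cpu_ratio=2.0, mem_ratio=1.5):
    """Derive per-pod limits as ratio x request (contention context, not sizing input)."""
    return cpu_req_m * cpu_ratio, mem_req_mib * mem_ratio

print(limits_from_requests(250, 512))  # → (500.0, 768.0)
```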

Common use cases

Typical situations where cluster sizing estimation is useful.

Estimate starting worker count for new Kubernetes platforms, migration programs, or landing zones.

Compare node shapes against aggregate workload demand for cost, density, and operability trade-offs.

Stress-test request assumptions before moving into capacity planning, autoscaling, and production rollout.