
Kubernetes v1.36 Alpha: Pod-Level Resource Managers for Better Performance and Efficiency

Last updated: 2026-05-06 04:26:13 · Cloud Computing

Kubernetes v1.36 introduces an alpha feature called Pod-Level Resource Managers that overhauls how performance-sensitive workloads handle resource allocation. Instead of the traditional per-container model, this enhancement lets the kubelet's Topology, CPU, and Memory Managers work with pod-level resource specifications (.spec.resources). You can now define a single resource budget for an entire pod, enabling hybrid allocations that keep main containers isolated while sharing resources efficiently among sidecars. The feature sits behind the PodLevelResourceManagers and PodLevelResources feature gates.

What are Pod-Level Resource Managers in Kubernetes v1.36?

Pod-Level Resource Managers are an alpha feature that extends the kubelet's existing resource managers (Topology, CPU, Memory) to support resource specifications at the pod level, rather than only at the container level. In prior versions, each container inside a pod had to declare its own CPU and memory requests and limits. With this new feature, you can set a pod-wide resource budget via spec.resources. The kubelet then uses that budget to perform NUMA alignment and exclusive resource assignments for the entire pod, while allowing a hybrid model where some containers get dedicated resources and others share a pool. This brings flexibility and efficiency to high-performance workloads like ML training or low-latency databases, without sacrificing NUMA alignment or QoS guarantees.
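A minimal sketch of what such a pod-wide budget might look like (the pod name, images, and resource sizes are illustrative; the .spec.resources field requires the PodLevelResources feature gate described later):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-demo
spec:
  # Pod-wide budget: the kubelet treats this as the pod's total envelope.
  resources:
    requests:
      cpu: "4"
      memory: 8Gi
    limits:
      cpu: "4"
      memory: 8Gi
  containers:
  - name: main-app
    image: registry.example.com/app:1.0     # illustrative image
    resources:
      # Integer CPUs with requests == limits: an exclusive,
      # NUMA-aligned slice for the main container.
      requests:
        cpu: "3"
        memory: 6Gi
      limits:
        cpu: "3"
        memory: 6Gi
  - name: log-sidecar
    image: registry.example.com/logger:1.0  # illustrative image
    # No container-level resources: this sidecar runs from the
    # remaining 1 CPU / 2Gi shared pod pool.
```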


Why was there a need for pod-level resource management?

Before this feature, running performance-critical workloads with sidecar containers created a dilemma. To get NUMA-aligned, exclusive CPU and memory for your main application container, you had to allocate integer-based CPUs and memory to every container in the pod—even lightweight sidecars for logging or monitoring. This wasted resources on sidecars that didn't need dedicated cores. Alternatively, you could avoid exclusive allocation, but then the pod would lose its Guaranteed QoS class, hurting performance predictability. Pod-level resource managers solve this by letting you define a single pod budget. The main container gets its exclusive, NUMA-aligned slice, and the remaining resources form a shared pool for sidecars. This eliminates waste while preserving performance isolation.

How do pod-level resource managers differ from per-container allocation?

In the traditional per-container model, each container in a pod specifies its own resources.requests and resources.limits. The kubelet then allocates hardware resources independently for each container, and the Topology Manager aligns them per container scope. This setup forces every container to request integer CPUs if you want exclusive, NUMA-aligned resources for any one container. With pod-level resource managers, you can set a pod-wide budget in spec.resources. The kubelet treats that as the total resource envelope for the pod. It then performs a single NUMA alignment for the entire pod and creates two allocation pools: one for containers that request exclusive resources (like the main app) and a shared pod pool for others that can share. This hybrid approach reduces waste and complexity, especially in pods with many sidecars.
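For contrast, here is a sketch of the traditional per-container model, where achieving exclusive allocation forces even a lightweight sidecar to carry its own integer-CPU, Guaranteed-class spec (names, images, and sizes are illustrative):

```yaml
# Per-container model: to keep the pod Guaranteed and NUMA-aligned,
# every container must request whole CPUs with requests == limits.
apiVersion: v1
kind: Pod
metadata:
  name: per-container-demo
spec:
  containers:
  - name: main-app
    image: registry.example.com/app:1.0     # illustrative image
    resources:
      requests: {cpu: "3", memory: 6Gi}
      limits:   {cpu: "3", memory: 6Gi}
  - name: log-sidecar
    image: registry.example.com/logger:1.0  # illustrative image
    resources:
      # Over-provisioned: the sidecar is handed a whole dedicated
      # core it will barely use.
      requests: {cpu: "1", memory: 1Gi}
      limits:   {cpu: "1", memory: 1Gi}
```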

How does the pod-level resource manager work with the Topology Manager?

The Topology Manager is a key component that ensures NUMA alignment for performance-sensitive workloads. When the pod-level resource manager is enabled and the Topology Manager scope is set to Pod, the kubelet performs a single alignment decision based on the entire pod's resource budget. It calculates the NUMA node that can satisfy the pod's total CPU and memory requests. Then it allocates exclusive, aligned resources to containers that explicitly request them (typically the primary application container). The leftover resources from the pod budget become a pod shared pool on that same NUMA node. Sidecar containers that don't need exclusivity can run from this shared pool, sharing resources among themselves but strictly isolated from the main container and the rest of the node. This ensures NUMA affinity without wasting dedicated cores on auxiliary containers.
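A kubelet configuration that pairs pod-level management with pod-scope alignment might look like the following sketch (the field names come from the existing KubeletConfiguration API; the specific policy choices are illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Align all of the pod's resources on a single NUMA node,
# evaluated once for the whole pod rather than per container.
topologyManagerPolicy: single-numa-node
topologyManagerScope: pod
# Static policies are required for exclusive CPU and memory assignment.
cpuManagerPolicy: static
memoryManagerPolicy: Static
```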

What is a real-world use case for pod-level resource managers?

A perfect example is a tightly-coupled database pod. Imagine a pod containing a main database container, a metrics exporter sidecar, and a backup agent sidecar. Using the pod-level resource manager with the Topology Manager in Pod scope, you define a single pod budget—say, 8 CPUs and 16Gi memory. The kubelet performs a single NUMA alignment for the entire pod. The database container gets its exclusive, NUMA-aligned CPU and memory slices. The remaining budget capacity forms a shared pool on the same NUMA node. The metrics exporter and backup agent run from this shared pool. They can share resources with each other but are completely isolated from the database's exclusive slices and other node resources. This allows safe co-location of auxiliary containers alongside a primary workload without dedicating expensive cores to them, maximizing efficiency and performance.
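The database scenario above could be sketched like this (container names, images, and the 6-CPU split for the database are illustrative choices within the 8 CPU / 16Gi budget from the text):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: db-pod
spec:
  resources:                      # single pod budget: 8 CPUs, 16Gi
    requests: {cpu: "8", memory: 16Gi}
    limits:   {cpu: "8", memory: 16Gi}
  containers:
  - name: database
    image: registry.example.com/db:1.0        # illustrative image
    resources:                    # exclusive, NUMA-aligned slice
      requests: {cpu: "6", memory: 12Gi}
      limits:   {cpu: "6", memory: 12Gi}
  - name: metrics-exporter
    image: registry.example.com/exporter:1.0  # illustrative image
    # no container-level resources: runs from the 2 CPU / 4Gi shared pool
  - name: backup-agent
    image: registry.example.com/backup:1.0    # illustrative image
    # no container-level resources: shares the pool with the exporter
```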

What are the benefits of pod-level resource managers for sidecar containers?

The primary benefit is resource efficiency. Sidecar containers, such as loggers, service mesh proxies, or monitoring agents, typically require minimal CPU and memory. Under the old model, achieving NUMA alignment for the main container meant every sidecar had to request integer CPUs—often over-provisioning them. Pod-level resource managers eliminate this waste by letting sidecars share a pool of resources carved out of the pod's total budget. Additionally, the feature preserves the pod's Guaranteed QoS class, since the pod-level requests and limits are equal. Sidecars also benefit from being strictly isolated from the main container's exclusive resources, preventing interference. This makes the feature ideal for modern, sidecar-heavy deployments where performance and efficiency are both critical.

How can you enable this alpha feature?

To try out Pod-Level Resource Managers in Kubernetes v1.36, you need to enable two feature gates on the kubelet: PodLevelResourceManagers=true and PodLevelResources=true. These are alpha-level, so they are disabled by default. You can enable them by adding them to the kubelet's command-line arguments or configuration file. Once enabled, you can define pod-level resources using the spec.resources field in a Pod spec, as shown in the official examples. Note that the Topology Manager should be configured with scope: Pod to fully leverage the new functionality. As with any alpha feature, it may have limitations and is subject to change, so testing in non-production environments is recommended.
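One way to enable the gates is via the kubelet configuration file, sketched below (the gate names are taken from the text above; as with any alpha feature, confirm them against the v1.36 release notes before relying on them):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  PodLevelResources: true
  PodLevelResourceManagers: true
topologyManagerScope: pod   # needed to fully leverage pod-level alignment
```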