- Release Signoff Checklist
- Summary
- Motivation
- Proposal
- Design Details
- Production Readiness Review Questionnaire
- Implementation History
- Drawbacks
- Alternatives
Items marked with (R) are required prior to targeting to a milestone / release.
- (R) Enhancement issue in release milestone, which links to KEP dir in kubernetes/enhancements (not the initial KEP PR)
- (R) KEP approvers have approved the KEP status as `implementable`
- (R) Design details are appropriately documented
- (R) Test plan is in place, giving consideration to SIG Architecture and SIG Testing input (including test refactors)
- e2e Tests for all Beta API Operations (endpoints)
- (R) Ensure GA e2e tests meet requirements for Conformance Tests
- (R) Minimum Two Week Window for GA e2e tests to prove flake free
- (R) Graduation criteria is in place
- (R) all GA Endpoints must be hit by Conformance Tests
- (R) Production readiness review completed
- (R) Production readiness review approved
- "Implementation History" section is up-to-date for milestone
- User-facing documentation has been created in kubernetes/website, for publication to kubernetes.io
- Supporting documentation—e.g., additional design documents, links to mailing list discussions/SIG meetings, relevant PRs/issues, release notes
We propose an enhancement to the PodResources API to include resources allocated by Dynamic Resource Allocation (DRA). This KEP extends 2043-pod-resource-concrete-assigments and 2403-pod-resources-allocatable-resources.
One of the primary motivations for this KEP is to extend the PodResources API to allow node monitoring agents to access information about resources allocated by DRA.
- To allow node monitoring agents to know the allocated DRA resources for Pods on a node.
- To allow node components to consume the DRA information exposed through the PodResources API to develop new features and integrations.
To enhance the GetAllocatableResources() call in the PodResources API to account for resources managed by DRA; with DRA, for example, there is no standard way to report capacity.
This API is read-only, which removes a large class of risks. The aspects that we consider below are as follows:
- What are the risks associated with the API service itself?
- What are the risks associated with the data itself?
| Risk | Impact | Mitigation |
|---|---|---|
| Too many requests risk impacting the kubelet's performance | High | Implement rate limiting and/or passive caching; follow best practices for gRPC resource management. |
| Improper access to the data | Low | The server listens on a root-owned Unix socket. Access can be limited with proper pod security policies. |
Our proposal is to extend the existing PodResources gRPC service of the kubelet with a repeated `DynamicResource` field in the `ContainerResources` message. This new field will contain information about the DRA claim name, the claim namespace, and the list of claimed resources allocated by a DRA driver. Currently, the claim resources only contain a list of CDI devices, but in the future they may be extended to support plugins that allocate other types of resources (not just CDI devices).
Additionally, we propose adding a `Get()` method to the existing gRPC service to allow querying a specific pod for its allocated resources.
Note: The new `Get()` call is a strict subset of the `List()` call (which returns the list of PodResources for all pods across all namespaces on the node). That is, it allows one to specify a specific pod and namespace to retrieve PodResources from, rather than having to query all of them at once.
The full PodResources API (including our proposed extensions) can be seen below:
```proto
// PodResourcesLister is a service provided by the kubelet that provides information about the
// node resources consumed by pods and containers on the node
service PodResourcesLister {
    rpc List(ListPodResourcesRequest) returns (ListPodResourcesResponse) {}
    rpc GetAllocatableResources(AllocatableResourcesRequest) returns (AllocatableResourcesResponse) {}
    rpc Get(GetPodResourcesRequest) returns (GetPodResourcesResponse) {}
}

message AllocatableResourcesRequest {}

// AllocatableResourcesResponse contains information about all the devices known by the kubelet
message AllocatableResourcesResponse {
    repeated ContainerDevices devices = 1;
    repeated int64 cpu_ids = 2;
    repeated ContainerMemory memory = 3;
}

// ListPodResourcesRequest is the request made to the PodResourcesLister service
message ListPodResourcesRequest {}

// ListPodResourcesResponse is the response returned by the List function
message ListPodResourcesResponse {
    repeated PodResources pod_resources = 1;
}

// PodResources contains information about the node resources assigned to a pod
message PodResources {
    string name = 1;
    string namespace = 2;
    repeated ContainerResources containers = 3;
}

// ContainerResources contains information about the resources assigned to a container
message ContainerResources {
    string name = 1;
    repeated ContainerDevices devices = 2;
    repeated int64 cpu_ids = 3;
    repeated ContainerMemory memory = 4;
    repeated DynamicResource dynamic_resources = 5;
}

// ContainerMemory contains information about memory and hugepages assigned to a container
message ContainerMemory {
    string memory_type = 1;
    uint64 size = 2;
    TopologyInfo topology = 3;
}

// ContainerDevices contains information about the devices assigned to a container
message ContainerDevices {
    string resource_name = 1;
    repeated string device_ids = 2;
    TopologyInfo topology = 3;
}

// TopologyInfo describes the hardware topology of the resource
message TopologyInfo {
    repeated NUMANode nodes = 1;
}

// NUMANode is the representation of a NUMA node
message NUMANode {
    int64 ID = 1;
}

// DynamicResource contains information about the devices assigned to a container by DRA
message DynamicResource {
    // tombstone: removed in 1.31 because claims are no longer associated with one class
    // string class_name = 1;
    string claim_name = 2;
    string claim_namespace = 3;
    repeated ClaimResource claim_resources = 4;
}

// ClaimResource contains resource information. The driver name/pool name/device name
// triplet uniquely identifies the device. Should DRA get extended to other kinds
// of resources, then device_name will be empty and other fields will get added.
// Each device at the DRA API level may map to zero or more CDI devices.
message ClaimResource {
    repeated CDIDevice cdi_devices = 1 [(gogoproto.customname) = "CDIDevices"];
    string driver_name = 2;
    string pool_name = 3;
    string device_name = 4;
}

// CDIDevice specifies CDI device information
message CDIDevice {
    // Fully qualified CDI device name
    // for example: vendor.com/gpu=gpudevice1
    // see more details in the CDI specification:
    // https://github.com/container-orchestrated-devices/container-device-interface/blob/main/SPEC.md
    string name = 1;
}

// GetPodResourcesRequest contains information about the pod
message GetPodResourcesRequest {
    string pod_name = 1;
    string pod_namespace = 2;
}

// GetPodResourcesResponse contains information about the pod and its devices
message GetPodResourcesResponse {
    PodResources pod_resources = 1;
}
```
Under the hood, the information needed to populate the new `DynamicResource` field will be pulled from an in-memory cache stored within the `DRAManager` of the kubelet. This is similar to how the fields for `ContainerDevices` (from the `DeviceManager`) and `cpu_ids` (from the `CPUManager`) are populated today.
The one difference is that the `DeviceManager` and `CPUManager` checkpoint the state necessary to fill their in-memory caches, so that they can be repopulated across a kubelet restart. We will need to add a similar checkpointing mechanism to the `DRAManager` so that it can repopulate its in-memory cache as well. This will ensure that the information needed by the PodResources API is available for all running containers without needing to call out to each DRA resource driver to retrieve it on demand. We will follow the same pattern used by the `DeviceManager` and `CPUManager` to implement this checkpointing mechanism.
Note: Checkpointing is possible in the `DRAManager` because the set of CDI devices allocated to a container cannot change across its lifetime (just as the set of traditional devices injected into a container by the `DeviceManager` cannot change across its lifetime). Moreover, the set of CDI devices that have been injected into a container is not tied to the "availability" of the DRA driver that injected them -- i.e. once a DRA driver allocates a set of CDI devices to a container, that container will have full access to those devices for its entire lifetime (even if the DRA driver that injected them temporarily goes offline). In this way, the in-memory cache maintained by the `DRAManager` will always have the most up-to-date information for all running containers (so long as checkpointing is added as described to repopulate it across kubelet restarts).
- `k8s.io/kubernetes/pkg/kubelet/apis/podresources`: `10-08-2024` - `75.3%`
These cases will be added to the existing integration tests:
- Feature gate enable/disable tests.
- Get API works with DRA and device plugins.
- List API works with DRA and device plugins.
These cases will be added to the existing e2e tests:
- Feature gate enable/disable tests.
- Get API works with DRA and device plugins.
- List API works with DRA and device plugins.
- Feature implemented behind a feature flag. (kubernetes/kubernetes#115847)
- e2e tests completed and enabled. (kubernetes/kubernetes#116846)
- Gather feedback from consumers of the DRA feature.
- No major bugs reported in the previous cycle.
- Allowing time for feedback (1 year).
- Risks have been addressed.
With gRPC, the version is part of the service name. Old and new versions should always be served by the kubelet.
For a cluster admin, upgrading to the newest API version means upgrading Kubernetes to a newer version as well as upgrading the monitoring component.
For a vendor, changes in the API should always be backward compatible.
The kubelet will always be backward compatible, so going forward existing plugins are not expected to break.
- Feature gate (also fill in values in `kep.yaml`)
  - Feature gate name: `DynamicResourceAllocation` is the existing feature gate to enable/disable the DRA feature.
    - Components depending on the feature gate: kube-apiserver, kube-controller-manager, kube-scheduler, kubelet
  - Feature gate name: `KubeletPodResourcesDynamicResources` is a new feature gate to enable/disable the PodResources API List method populating `DynamicResource` information from the `DRAManager`. The `DynamicResourceAllocation` feature gate has to be enabled as well.
    - Components depending on the feature gate: kubelet, 3rd party consumers.
  - Feature gate name: `KubeletPodResourcesGet` is a new feature gate to enable/disable the PodResources API Get method. If `DynamicResourceAllocation` or `KubeletPodResourcesDynamicResources` is disabled and `KubeletPodResourcesGet` is enabled, the Get method will return resources allocated by device plugins, memory, and CPUs (but omit those allocated by DRA resource drivers). If `KubeletPodResourcesGet`, `DynamicResourceAllocation`, and `KubeletPodResourcesDynamicResources` are all enabled, the `Get()` method will also return the resources allocated via DRA.
    - Components depending on the feature gate: kubelet, 3rd party consumers.
No.
Yes, through feature gates.
The API becomes available again. The API is stateless, so no recovery is needed; clients can simply consume the data.
e2e test will demonstrate that when the feature gate is disabled, the API returns the appropriate error code.
Kubelet may fail to start. The new API may report inconsistent data, or may cause the kubelet to crash.
`pod_resources_endpoint_errors_get` - but only with the `KubeletPodResourcesGet` feature gate enabled. Otherwise the API will always return a known error.
Not Applicable.
Is the rollout accompanied by any deprecations and/or removals of features, APIs, fields of API types, flags, etc.?
No.
Look at the `pod_resources_endpoint_requests_list` and `pod_resources_endpoint_requests_get` metrics exposed by the kubelet.
Call the PodResources API and see the result.
- Events
- Event Reason:
- API .status
- Condition name:
- Other field:
- Other (treat as last resort)
- Details:
N/A.
What are the SLIs (Service Level Indicators) an operator can use to determine the health of the service?
- Metrics
  - Metric name: `pod_resources_endpoint_requests_total`, `pod_resources_endpoint_requests_list` and `pod_resources_endpoint_requests_get`.
  - Components exposing the metric: kubelet
Are there any missing metrics that would be useful to have to improve observability of this feature?
As part of this feature enhancement, per-API-endpoint metrics are being added; to observe this feature, the `pod_resources_endpoint_requests_get` and `pod_resources_endpoint_requests_list` metrics should be used. We will also add a `pod_resources_endpoint_errors_get` error counter.
The container runtime must support CDI.
A third-party resource driver is required for allocating resources.
No.
No.
No.
No.
Will enabling / using this feature result in increasing time taken by any operations covered by existing SLIs/SLOs?
No. The feature is out of any existing paths in the kubelet.
Will enabling / using this feature result in non-negligible increase of resource usage (CPU, RAM, disk, IO, ...) in any components?
Flooding the API with requests (e.g., a DoS) can lead to resource exhaustion in the kubelet.
N/A.
The API will always return a well-known error. In normal operation, the API is expected to never return an error and always return a valid response, because it utilizes internal kubelet data which is always available. Bugs may cause the API to return unexpected errors, or to return inconsistent data. Consumers of the API should treat unexpected errors as bugs of this API.
N/A.
- 2023-01-12: KEP created
- 2024-09-10: KEP updated to reflect the current state of the implementation.