[KEP-2400]: Restrict access to swap for containers in high priority Pods #125277
Conversation
/sig node

@iholder101: GitHub didn't allow me to request PR reviews from the following users: fabiand. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@haircommander do we need a backing issue for this PR?
Changelog suggestion:

-Exclude critical pods from having swap access
+Changed Linux swap handling to restrict access to swap for containers in high priority Pods.
+New Pods that have a node- or cluster-critical priority are prohibited from accessing swap on Linux,
+even if your cluster and node configuration could otherwise allow this.
Force-pushed from c92bb66 to c5fbd74 (Compare)
Thanks, done!

/retest

I think we can discuss pros and cons here.

I think the use case is reasonable. Many critical components don't have a memory limit set today.

Is this in line with the KEP? https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2400-node-swap#set-aside-swap-for-system-critical-daemons

/triage accepted
The Kubernetes project has merge-blocking tests that are currently too flaky to consistently pass. This bot retests PRs for certain kubernetes repos according to the following rules:
You can:
/retest
5 similar comments
/hold
@@ -1244,6 +1257,11 @@ func TestGenerateLinuxContainerResourcesWithSwap(t *testing.T) {
 	pod.Spec.Containers[0].Resources = resourceReqsC1
 	pod.Spec.Containers[1].Resources = resourceReqsC2
+
+	if tc.isCriticalPod {
+		pod.Spec.Priority = ptr.To(scheduling.SystemCriticalPriority)
+		assert.Equal(t, true, types.IsCriticalPod(pod), "pod is expected to be critical")
Suggested change:

-assert.Equal(t, true, types.IsCriticalPod(pod), "pod is expected to be critical")
+assert.True(t, types.IsCriticalPod(pod), "pod is expected to be critical")
/lgtm cancel
Force-pushed from c5fbd74 to 353d71a (Compare)
Signed-off-by: Itamar Holder <iholder@redhat.com>
Force-pushed from 353d71a to a6df16a (Compare)
/lgtm

LGTM label has been added. Git tree hash: 8bd81f644c8d8f2768129031f7e36f1877bf2dc4

/hold cancel
What type of PR is this?

/kind feature

What this PR does / why we need it:

Exclude critical pods from gaining swap access.

I believe this is valuable for two main reasons:

P.S. Currently it is possible to opt out of swap for burstable pods by setting requests.memory == limits.memory. However, this approach forces the workload owner to set limits, which is unacceptable for certain workloads. With this change, an administrator can instead classify such burstable pods as critical to opt them out of swap.

Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: