Limitrange request causes indefinite amount of pods spawned #93750
Comments
/sig scheduling
Oh great, it's not just us! We're also on EKS v1.16. This started last week for us... I wonder if it's an EKS problem specifically. Edit: That said, we're not using LimitRange.
@cablespaghetti
I'm downgrading to ami-05ac566a7ec2378db, which is from May but was the previous AMI we were running. I'll report back if that fixes it. Edit: To clarify, this is going from a 1.16.12 or 1.16.13 AMI to a 1.16.8 one.
You don't have anything else, like a policy that applies resources at the container level?
We don't, no. Just heard back from AWS:
I spent a while puzzling over this, and it seems the problem is that the EKS 1.16 control plane is at 1.16.8, which pre-dates this change in later versions of the kubelet.
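For anyone wanting to check whether their cluster has the same skew, a rough check (assuming you have kubectl access) is to compare the API server version with the kubelet version reported by each node:

```sh
# API server (control plane) version
kubectl version --short

# Kubelet version per node is shown in the VERSION column
kubectl get nodes -o wide
```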
Ah ok, I saw it happen numerous times with init containers.
@dza89: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened:
Pods keep spawning indefinitely with an OutOfcpu error
What you expected to happen:
A single error is returned
How to reproduce it (as minimally and precisely as possible):
Add a LimitRange (a sketch is shown after this list)
Make sure your nodes have less than 1 CPU available (set high for this example)
Do a dummy deployment without any resources set
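A minimal sketch of such a reproduction, assuming a 1-CPU default from the LimitRange; the names (cpu-default, dummy) and the pause image are placeholders for illustration, not taken from the original report:

```sh
# Hypothetical reproduction: a LimitRange that defaults every container to 1 CPU,
# plus a deployment with no resources set, so the default is injected at admission.
kubectl apply -f - <<EOF
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-default
spec:
  limits:
  - type: Container
    defaultRequest:
      cpu: "1"
    default:
      cpu: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dummy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dummy
  template:
    metadata:
      labels:
        app: dummy
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF
```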
Anything else we need to know?:
My findings:
Since the limits are set at the container level, the kubelet receives the pod, the request/limit from the LimitRange is applied, and the container no longer fits on the node. The kubelet rejects the pod with OutOfcpu, and the ReplicaSet deploys a new one, which starts an endless loop of pod spawning.
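On an affected cluster the loop is easy to observe: rejected pods accumulate in the Failed phase with reason OutOfcpu while the ReplicaSet keeps creating replacements. A rough way to watch it (commands assume the default namespace):

```sh
# Failed pods pile up with an OutOfcpu status; the count keeps growing
kubectl get pods --field-selector=status.phase=Failed | grep -c OutOfcpu

# Watch the ReplicaSet continuously create replacement pods
kubectl get pods -w
```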
Environment:
OS (cat /etc/os-release): Amazon Linux 2
Kernel (uname -a): 4.14.186-146.268.amzn2.x86_64
kubeletVersion: v1.16.13-eks-2ba888