Fix volume limit for EBS on m5 and c5 instances #66397
Conversation
Force-pushed from 6f1a8c1 to 45b8107
/test pull-kubernetes-e2e-gce

1 similar comment

/test pull-kubernetes-e2e-gce
lgtm from the AWS point of view, i.e. this is what's necessary on AWS and the values are right.
/approve
lgtm
```go
	},
	},
}
os.Unsetenv(KubeMaxPDVols)
```
nit: set it back later?
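One way to address this nit, as a minimal sketch (the helper name is illustrative; `KubeMaxPDVols` is the constant already used in the surrounding test):

```go
import "os"

// Sketch: save the current value of the KUBE_MAX_PD_VOLS environment
// variable, unset it for the test, and return a restore function for
// the caller to defer, so later tests see the original environment.
func unsetKubeMaxPDVols() (restore func()) {
	previous, wasSet := os.LookupEnv(KubeMaxPDVols)
	os.Unsetenv(KubeMaxPDVols)
	return func() {
		if wasSet {
			os.Setenv(KubeMaxPDVols, previous)
		}
	}
}
```

A caller would write `defer unsetKubeMaxPDVols()()` at the top of the test: the outer call unsets the variable immediately, and the deferred inner call restores it when the test returns.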
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: aveshagarwal, gnufied, wongma7. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
Automatic merge from submit-queue (batch tested with PRs 66410, 66398, 66061, 66397, 65558). If you want to cherry-pick this change to another branch, please follow the instructions here.
This is a fix for the lower volume limits on m5 and c5 instance types while we wait for kubernetes/enhancements#554 to land as GA.
This problem became urgent because many of our users are trying to migrate to those instance types in light of the Spectre/Meltdown vulnerabilities, but the lower volume limit on those instance types often causes cluster instability. Users can work around this by configuring the scheduler with a lower limit, but that becomes difficult when the cluster mixes instance types. A sketch of that kind of override follows.
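For reference, the workaround relies on the scheduler honoring the KUBE_MAX_PD_VOLS environment variable. A minimal sketch of that style of override (the function name and exact parsing here are assumptions, not the scheduler's actual code):

```go
import (
	"os"
	"strconv"
)

// Sketch: allow the KUBE_MAX_PD_VOLS environment variable to override
// the default attachable EBS volume limit; fall back to the default
// when the variable is unset or not a positive integer.
func maxVolumesFromEnv(defaultLimit int) int {
	if raw := os.Getenv("KUBE_MAX_PD_VOLS"); raw != "" {
		if n, err := strconv.Atoi(raw); err == nil && n > 0 {
			return n
		}
	}
	return defaultLimit
}
```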
The newer default limits were picked from https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/volume_limits.html
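As a rough sketch of the resulting behavior (constant names are illustrative, not the PR's exact identifiers; 39 is Kubernetes' long-standing EBS default, and 25 is the lower Nitro limit suggested by the AWS documentation linked above):

```go
import "strings"

const (
	defaultMaxEBSVolumes      = 39 // long-standing scheduler default for EBS
	defaultMaxEBSNitroVolumes = 25 // lower limit for Nitro-based m5/c5 types
)

// Sketch: pick the default EBS volume limit from the node's EC2
// instance type, using the lower limit for the m5 and c5 families.
func defaultEBSLimit(instanceType string) int {
	if strings.HasPrefix(instanceType, "m5.") || strings.HasPrefix(instanceType, "c5.") {
		return defaultMaxEBSNitroVolumes
	}
	return defaultMaxEBSVolumes
}
```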
Background on Spectre/Meltdown is available at https://community.bitnami.com/t/spectre-variant-2/54961/5
/sig storage
/sig scheduling