AWS: Move enforcement of attached AWS device limit from kubelet to scheduler #23254
Conversation
Fixes #22994
@justinsb @kubernetes/rh-storage, please review.
@jsafrane edit the original comment to include the
@@ -1050,19 +1050,20 @@ func (self *awsInstance) getMountDevice(volumeID string, assign bool) (assigned
		return mountDevice(""), false, nil
	}

	// Check all the valid mountpoints to see if any of them are free
	valid := instanceType.getEBSMountDevices()
Is there another caller of getEBSMountDevices(), or should you be deleting that code as well?
Was going to say the same thing - it makes sense to inline it because the volume list is now pretty big, but do delete getEBSMountDevices
good point, removed
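(For context on why the inlined volume list is "pretty big": a minimal, illustrative Go sketch of enumerating the 'ba'..'zz' EBS device suffixes the kubelet now accepts. This is not the PR's actual code; it just shows where the 650-volume figure in the description comes from: 25 first letters 'b'..'z' times 26 second letters 'a'..'z'.)

```go
// Illustrative sketch, not the PR's code: enumerate EBS device suffixes "ba".."zz".
package main

import "fmt"

func main() {
	var suffixes []string
	for first := 'b'; first <= 'z'; first++ {
		for second := 'a'; second <= 'z'; second++ {
			suffixes = append(suffixes, string(first)+string(second))
		}
	}
	fmt.Println(len(suffixes)) // 650
}
```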
Force-pushed from c2e88c4 to 37b969c.
TravisCI error:
Looks like flake.
The limit on the number of EBS volumes attached to a node is now enforced by the scheduler and can be adjusted there via the KUBE_MAX_PD_VOLS env. variable. Therefore we don't need the same check in the kubelet. If the system admin wants to attach more, we should allow it. The kubelet limit is now 650 attached volumes ('ba'..'zz').
Force-pushed from 37b969c to e4dc670.
@justinsb, can you please take a look?
Sorry for the delay @jsafrane. I changed the title to be a clearer (?) release note - feel free to tweak!
@k8s-bot test this issue: #IGNORE
GCE e2e build/test passed for commit e4dc670. |
Automatic merge from submit-queue |
The limit on the number of EBS volumes attached to a node is now enforced by the scheduler and can be adjusted there via the KUBE_MAX_PD_VOLS env. variable. Therefore we don't need the same check in the kubelet. If the system admin wants to attach more, we should allow it. The kubelet limit is now 650 attached volumes ('ba'..'zz').
Note that the scheduler counts only pods assigned to a node. When a pod is deleted and a new pod is scheduled on the node, the kubelet starts (slowly) detaching the old volume and (slowly) attaching the new volume. Depending on AWS speed, more than KUBE_MAX_PD_VOLS volumes may actually be attached to a node for some time! The kubelet will clean this up within a few seconds / minutes (both attach and detach are quite slow).
Fixes #22994
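(For readers wondering how KUBE_MAX_PD_VOLS plugs into the scheduler side, here is a hedged sketch, assuming a helper that reads the env variable and falls back to a default. The names maxEBSVolumes, defaultMaxEBSVolumes and fitsEBSVolumeLimit, and the default value of 39, are illustrative assumptions, not necessarily the exact code or value merged here.)

```go
// Hedged sketch of a scheduler-side EBS volume cap; names and the default value
// are assumptions for illustration, not the exact code merged in this PR.
package main

import (
	"fmt"
	"os"
	"strconv"
)

// defaultMaxEBSVolumes is an assumed default per-node EBS attachment limit.
const defaultMaxEBSVolumes = 39

// maxEBSVolumes returns the per-node limit, overridable via KUBE_MAX_PD_VOLS.
func maxEBSVolumes() int {
	if v := os.Getenv("KUBE_MAX_PD_VOLS"); v != "" {
		if n, err := strconv.Atoi(v); err == nil && n > 0 {
			return n
		}
	}
	return defaultMaxEBSVolumes
}

// fitsEBSVolumeLimit is a toy predicate: a pod requesting newVolumes EBS volumes
// fits on a node whose scheduled pods already account for attachedVolumes.
func fitsEBSVolumeLimit(attachedVolumes, newVolumes int) bool {
	return attachedVolumes+newVolumes <= maxEBSVolumes()
}

func main() {
	fmt.Println(fitsEBSVolumeLimit(38, 1)) // true with the assumed default of 39
	fmt.Println(fitsEBSVolumeLimit(39, 1)) // false
}
```

The point of enforcing this in the scheduler rather than the kubelet is that the cap becomes a scheduling decision (counted from pods assigned to the node) instead of a hard attach-time failure, which is why the transient over-attachment described above can occur.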