Add volume reconstruct/cleanup logic in kubelet volume manager #27970
Conversation
@jingxu97 this fails the unit test
@@ -901,6 +907,74 @@ func (kl *Kubelet) setupDataDirs() error {
	return nil
}

func (kl *Kubelet) getPodVolumes(podUID types.UID) ([]volumeTuple, error) {
A general question: is it possible to move all this logic into the volume package?
/cc @saad-ali
Is this confirmed as fixing a bug?
We decided not to use this cleanup code; I found a race condition problem.
Proposed for 1.3.1
Force-pushed e867383 to f4857df (compare)
Currently, kubelet volume management works on the concept of a desired world and an actual world of state. The volume manager periodically compares the two worlds and performs volume mount/unmount and/or attach/detach operations. When kubelet restarts, the caches of those two worlds are gone. Although the desired world can be recovered through the apiserver, the actual world cannot, which means some volumes cannot be cleaned up if their information has been deleted from the apiserver. This change adds reconstruction of the actual world by reading the pod directories from disk. The reconstructed volume information is added to both the desired world and the actual world if it cannot be found in either. The rest of the logic is the same as before: the desired world populator may clean up a volume entry if it is no longer in the apiserver, and the volume manager should then invoke unmount to clean it up.
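The two-worlds idea described above can be sketched in miniature. All names here (World, Reconstruct, Reconcile) are hypothetical and greatly simplified, not the kubelet's real types; the point is only the flow: state found on disk is added to both worlds, and the normal reconcile loop later unmounts anything that drops out of the desired world:

```go
package main

import "fmt"

// World is a set of volume names; a toy stand-in for the kubelet's
// desired/actual world-of-state caches.
type World map[string]bool

// Reconstruct adds volumes discovered on disk to both worlds if neither
// already knows about them.
func Reconstruct(desired, actual World, onDisk []string) {
	for _, v := range onDisk {
		if !desired[v] && !actual[v] {
			desired[v] = true
			actual[v] = true
		}
	}
}

// Reconcile returns the volumes to mount (desired but not actual) and
// to unmount (actual but no longer desired).
func Reconcile(desired, actual World) (mount, unmount []string) {
	for v := range desired {
		if !actual[v] {
			mount = append(mount, v)
		}
	}
	for v := range actual {
		if !desired[v] {
			unmount = append(unmount, v)
		}
	}
	return mount, unmount
}

func main() {
	desired := World{"vol-a": true}
	actual := World{}
	// After a kubelet restart, "vol-b" is found on disk but unknown to both worlds.
	Reconstruct(desired, actual, []string{"vol-b"})
	// The desired-world populator later learns the apiserver no longer wants vol-b.
	delete(desired, "vol-b")
	mount, unmount := Reconcile(desired, actual)
	fmt.Println(mount, unmount) // [vol-a] [vol-b]
}
```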
Force-pushed 8fd2e93 to f19a114 (compare)
GCE e2e build/test passed for commit f19a114.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit f19a114.
Automatic merge from submit-queue
This PR broke the Darwin and Windows builds, I think:
It looks like you're missing changes to
cc @k8s-oncall
Note the breakage in
Yes, I forgot to put the function into pkg/util/mount/mount_unsupported.go.
Jing
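The breakage pattern mentioned here is common in Go projects with platform-specific files: every function the Linux implementation exports must also exist in the *_unsupported.go variant (selected by build tags in the real tree) so non-Linux builds still compile. A minimal sketch of the stub pattern, with build tags omitted and the method name chosen for illustration only:

```go
package main

import (
	"errors"
	"fmt"
)

// errUnsupported mimics the sentinel error an *_unsupported.go file
// would return on platforms where mount operations are not implemented.
var errUnsupported = errors.New("util/mount on this platform is not supported")

// Mounter is a toy stand-in for the mount package's type.
type Mounter struct{}

// GetMountRefs would inspect the mount table on Linux; the unsupported
// stub exists only so the package compiles everywhere, and reports an error.
func (m *Mounter) GetMountRefs(mountPath string) ([]string, error) {
	return nil, errUnsupported
}

func main() {
	_, err := (&Mounter{}).GetMountRefs("/var/lib/kubelet")
	fmt.Println(err) // util/mount on this platform is not supported
}
```

The design point: callers get a compile-time-complete API on every OS, and failures surface as runtime errors only on platforms where the feature genuinely cannot work.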
@jingxu97 did you have a chance to create a PR for this yet? build still seems broken.
Hi Jeff, yes: #30724. From the error log message you sent in the previous email, there is an error at k8s.io/kubernetes/pkg/util/procfs/procfs.go:84. Please let me know if there is any issue. Thank you!
Jing
glog.V(3).Infof("Orphaned pod %q found, but volumes are not cleaned up; err: %v", uid, err)
continue
}
// Check whether volume is still mounted on disk. If so, do not delete directory
if volumeNames, err := kl.getPodVolumeNameListFromDisk(uid); err != nil || len(volumeNames) != 0 {
	glog.V(3).Infof("Orphaned pod %q found, but volumes are still mounted; err: %v, volumes: %v", uid, err, volumeNames)
Why not unmount unused mounted volumes here?
I think that's what #31596 is asking for
@luxas, we do unmount unused mounted volumes; that logic is implemented in pkg/kubelet/volumemanager/reconciler/reconciler.go, because that is the place for all mount/unmount related logic. Please let me know if you have any questions about it.
Jing
Automatic merge from submit-queue: automated cherry pick of kubernetes#27970 kubernetes#30724 onto release-1.3.
Automatic merge from submit-queue
Add volume reconstruct/cleanup logic in kubelet volume manager (commit message duplicates the PR description above). Fixes kubernetes#27653