StorageOS configurable device directory and mount options #58816
Conversation
```diff
@@ -323,27 +346,30 @@ func getLoopDevice(path string, exec mount.Exec) (string, error) {
 	args := []string{"-j", path}
-	out, err := exec.Run(losetupPath, args...)
+	cmd := exec.Command(losetupPath, args...)
```
Please add a comment that you intentionally run where kubelet is (e.g. in kubelet's container).
@jsafrane thanks for the review, but a quick question: I noticed that I reverted the `VolumeHost.GetExec()` change you made without fully understanding it. While we expect the mount utilities (including `losetup`) to run where the kubelet runs, we don't require it; we just need the device and mount to be available in the correct place, where the kubelet can see them. Unless you can think of any gotchas or I'm misunderstanding, I'll revert this bit.

Also FYI: the Block Volume loop device support code seems to have used this as a base, and it doesn't use `VolumeHost.GetExec()`. Not sure if this was intentional, and if it was, do I need the same here? https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/util/util_linux.go. Maybe @mtanino can advise?
In 1.9 there is the possibility to run the mount utilities (mount.nfs, iscsiadm, mkfs.btrfs, ..., losetup) in a dedicated container instead of on the host (which can be a minimal OS like CoreOS) or in the container where kubelet runs (which can contain just the kubelet). The setup is quite elaborate and is currently used only in our e2e tests, which run on a fairly minimal GCI image (though that one does have losetup). `VolumeHost.GetExec()` is used to execute things in these containers with mount utilities instead of on the host. You should probably use this interface for losetup too, just in case someone runs a really minimal OS without it.
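To illustrate the shape of this, here is a minimal sketch (not the actual Kubernetes code) of routing `losetup` through a `mount.Exec`-style interface so the call site no longer cares whether the binary runs on the host, in kubelet's container, or in a dedicated utilities container. The `Exec` interface, `hostExec` helper, and `parseLosetupOutput` parser below are illustrative assumptions:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Exec mirrors the shape of the mount.Exec interface: commands routed
// through it can be redirected into whichever container has the mount
// utilities installed.
type Exec interface {
	Run(cmd string, args ...string) ([]byte, error)
}

// hostExec is the trivial implementation that runs the command where this
// process runs (e.g. on the host or in kubelet's container).
type hostExec struct{}

func (hostExec) Run(cmd string, args ...string) ([]byte, error) {
	return exec.Command(cmd, args...).CombinedOutput()
}

// getLoopDevice asks losetup which loop device backs the given file,
// going through the Exec interface rather than os/exec directly.
func getLoopDevice(path string, e Exec) (string, error) {
	out, err := e.Run("losetup", "-j", path)
	if err != nil {
		return "", fmt.Errorf("losetup -j %s failed: %v (%s)", path, err, out)
	}
	return parseLosetupOutput(string(out)), nil
}

// parseLosetupOutput extracts the device name from a line like
// "/dev/loop0: []: (/var/lib/storageos/volumes/vol1)".
func parseLosetupOutput(out string) string {
	line := strings.TrimSpace(out)
	if i := strings.Index(line, ":"); i > 0 {
		return line[:i]
	}
	return ""
}

func main() {
	fmt.Println(parseLosetupOutput("/dev/loop0: []: (/var/lib/storageos/volumes/vol1)"))
}
```

Swapping `hostExec` for an implementation that execs into a utilities container is then transparent to `getLoopDevice`.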
In addition, it would be great if you could prepare container(s) with the server parts of StorageOS, and e2e tests that would deploy it and test the plugin against it. This way, we could make sure there are no regressions as the plugin evolves.
Thanks, I'll revert this bit. We're working on the e2e tests separately. I'm hoping we can get a PR for them in the next few weeks.
I still see you using `exec.Command` directly; did you plan to revert it back to the `mount.Exec` interface?
```diff
 func makeLoopDevice(path string) (string, error) {
 	args := []string{"-f", "-P", "--show", path}
-	out, err := exec.Run(losetupPath, args...)
+	cmd := exec.Command(losetupPath, args...)
```
Please add a comment that you intentionally run where kubelet is (e.g. in kubelet's container).
```diff
 	args := []string{"-d", device}
-	out, err := exec.Run(losetupPath, args...)
+	cmd := exec.Command(losetupPath, args...)
```
Please add a comment that you intentionally run where kubelet is (e.g. in kubelet's container).
/ok-to-test
/retest
flake: #58975
/retest
Force-pushed from 3500580 to 81456bf
@jsafrane reverted the exec changes, but I just noticed the Godeps conflict; will resolve tomorrow.
Force-pushed from 81456bf to 035a080
This change queries the StorageOS API to retrieve the volume device dir when a volume is attached, rather than relying on the default /var/lib/storageos/volumes directory. This allows all or some nodes to have kubelet running in a container with the StorageOS volume path mounted in, which is required for Azure ACS support. Volume mount options are also supported.
Force-pushed from 035a080 to 1a91403
@jsafrane I think this is good to go now, PTAL - thanks
/lgtm
/assign @thockin
/approve no-issue
/test pull-kubernetes-e2e-kubeadm-gce-canary
/test pull-kubernetes-e2e-kubeadm-gce-canary
@croomes: The following test failed, say `/retest` to rerun them all.

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
/test pull-kubernetes-e2e-kubeadm-gce-canary
In the future, please file bugs so we can xref them in PRs.

/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: croomes, jsafrane, thockin. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/test all [submit-queue is verifying that this PR is safe to merge]

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

/status approved-for-milestone

You must be a member of the kubernetes/kubernetes-milestone-maintainers GitHub team to add status labels.
@saad-ali are you able to help get this into the 1.10 release? I was hoping it would have gone in automatically...
Don't worry, it's merged into master so it's going to be part of 1.10. The release branch splits from master in the upcoming week(s).
Fantastic, thanks for letting me know!
What this PR does / why we need it:
This allows StorageOS volumes to be mounted when the kubelet is running in a container and we are unable to use the default device location (/var/lib/storageos/volumes). With this PR, the node's device location is requested via the StorageOS API, falling back to the current behaviour if not configured. The node's device location can be supplied as an environment variable (DEVICE_DIR) to the StorageOS container. This is backwards compatible and no changes are needed to existing deployments.
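The fallback behaviour described above can be sketched with a hypothetical helper; `deviceDir` and its parameter name are made up for illustration, while the real plugin would read the value reported by the StorageOS API:

```go
package main

import "fmt"

// defaultDeviceDir is the long-standing default device location.
const defaultDeviceDir = "/var/lib/storageos/volumes"

// deviceDir returns the device directory reported by the StorageOS API for
// this node, falling back to the default when the node does not configure
// one (e.g. DEVICE_DIR was not set on the StorageOS container). The
// fallback is what keeps existing deployments working unchanged.
func deviceDir(apiDeviceDir string) string {
	if apiDeviceDir != "" {
		return apiDeviceDir
	}
	return defaultDeviceDir
}

func main() {
	fmt.Println(deviceDir(""))                       // falls back to the default
	fmt.Println(deviceDir("/mnt/storageos/volumes")) // API-provided location wins
}
```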
The PR also allows mount options to be set for StorageOS volumes, in the same way they're enabled for other volume plugins.
The StorageOS API dependency was updated to the latest version, with no functional changes besides the addition of the DeviceDir property to the Controller object.
There is also a small refactor of the loopback device handling code in storageos_utils.go to capture stderr output.
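The intent of capturing stderr can be sketched as follows using Go's `exec.Command` with `CombinedOutput`, so that messages losetup prints to stderr end up in the returned error instead of being lost; `runAndCapture` is a hypothetical helper, not the actual code in storageos_utils.go:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runAndCapture runs a command and returns its combined stdout+stderr.
// On failure the captured output is folded into the error, so callers
// see what the utility actually printed rather than just "exit status 1".
func runAndCapture(name string, args ...string) (string, error) {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		return string(out), fmt.Errorf("%s failed: %v, output: %q", name, err, string(out))
	}
	return string(out), nil
}

func main() {
	// Simulate a failing command that writes its diagnostic to stderr.
	out, err := runAndCapture("sh", "-c", "echo device busy 1>&2; exit 1")
	fmt.Println("captured:", out)
	fmt.Println("got error:", err != nil)
}
```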
Release note:
Not sure why godep changed the comments of unrelated packages in Godeps.json...
/sig storage