Add flocker volume plugin #14328
Conversation
Can one of the admins verify that this patch is reasonable to test? (reply "ok to test", or if you trust the user, reply "add to whitelist") If this message is too spammy, please complain to ixdy.
Force-pushed from fa8fd97 to b2c5c55
cc @rootfs @markturansky - just FYI | review.
Force-pushed from 173fbc2 to c231341
Labelling this PR as size/XXL
}

func (p flockerPlugin) CanSupport(spec *volume.Spec) bool {
	return (spec.PersistentVolume != nil && spec.PersistentVolume.Spec.Flocker != nil) ||
You can just return `spec.Volume != nil && spec.Volume.Flocker != nil`.
This plugin is not a Persistent Volume, so you can ignore that part of the `CanSupport` check.
nm :) I overlooked Flocker in the PVVolumeSource struct. Your check above is correct.
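Putting the two halves of this thread together, the check under review reads roughly as follows; this is a sketch consistent with the diff and the suggestion above, not necessarily the exact merged code:

```go
// CanSupport reports whether the spec carries a Flocker source, either as a
// direct pod volume or as a persistent volume.
func (p flockerPlugin) CanSupport(spec *volume.Spec) bool {
	return (spec.PersistentVolume != nil && spec.PersistentVolume.Spec.Flocker != nil) ||
		(spec.Volume != nil && spec.Volume.Flocker != nil)
}
```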
You guys are cutting this close...
@@ -306,6 +309,8 @@ type PersistentVolumeSource struct {
	CephFS *CephFSVolumeSource `json:"cephfs,omitempty"`
	// FC represents a Fibre Channel resource that is attached to a kubelet's host machine and then exposed to the pod.
	FC *FCVolumeSource `json:"fc,omitempty"`
	// Flocker represents a Flocker volume attached to a kubelet's host machine and exposed to the pod for its usage
Please also document that this depends on the Flocker control plane running.
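One way to address this would be to extend the field's doc comment, roughly as below; the exact field declaration and JSON tag are assumptions based on the surrounding diff:

```go
	// Flocker represents a Flocker volume attached to a kubelet's host machine
	// and exposed to the pod for its usage. This requires the Flocker control
	// service (and its agents) to be running.
	Flocker *FlockerVolumeSource `json:"flocker,omitempty"`
```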
I took a quick look, nothing major jumped out. I'll leave it to Mark and Saad to pick over the details. What this is missing most is a good doc explaining how to use it, including the bits about env variables in the first comment.
// otherwise it returns the default value provided.
//
// Note: it should be GetEnv but I tried to keep the os.Getenv standard.
func GetenvOrFallback(key, defaultValue string) string {
getenvOrFallback?
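For context, the helper under review is along these lines; this is a sketch consistent with the comment above (the package name is an assumption), not necessarily the exact merged body:

```go
package env

import "os"

// GetenvOrFallback returns the value of the environment variable named by key,
// or defaultValue if the variable is unset or empty.
func GetenvOrFallback(key, defaultValue string) string {
	if value := os.Getenv(key); value != "" {
		return value
	}
	return defaultValue
}
```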
@agonzalezro Is the Flocker volume dependent on any Docker release? Does it support rkt? Can you provide a working Pod and running environment in a doc?
	return &builder, nil
}

func (p *flockerPlugin) NewCleaner(datasetName string, podUID types.UID) (volume.Cleaner, error) {
you don't want to delete the volume?
The idea of Flocker is that you can move a volume between pods. I don't see any need at this stage for removing the volume; the Flocker agent will mount/unmount it when needed.
But when the volume is moved between pods and between hosts, don't you want to clean up the orphaned directory on the old host?
Not in this case, no. If and when the same dataset is requested elsewhere (by datasetName in the volume spec), the Flocker agent will umount/detach and reattach/mount the volume. If a volume is not required, at this stage it is a manual operation to destroy it. Hope that helps explain.
This makes me wonder if there is any utility in using Flocker as a PV, since the semantics seem intrinsic to how the Flocker agent handles volumes.
@rootfs Flocker is not docker/rkt dependent. We are working on documentation for it. cc: @mattbates
Force-pushed from c231341 to a604569
	DatasetName string `json:"datasetName"`

	// Optional: the size of the volume in human readable format.
	Size string `json:"size"`
Please use resource.Quantity as this type. (Find other usages elsewhere in this file.)
Meta: it seems odd to specify a size here. Other volumes (including persistent volumes) don't specify a size. Why is this volume different?
You're right that the volume itself does not require this information.
It's irrelevant to a Volume and size is already an attribute of PV.Spec.
+1
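For reference, the first suggestion (resource.Quantity) would look roughly like this, although the thread concludes the field should be dropped from the volume source entirely; the import path shown is the in-tree location of that era and is an assumption:

```go
import "k8s.io/kubernetes/pkg/api/resource"

// Sketch only: size expressed as a resource.Quantity instead of a free-form string.
type FlockerVolumeSource struct {
	// Required: the name of the Flocker dataset to attach.
	DatasetName string `json:"datasetName"`
	// Optional: the maximum size of the requested dataset.
	Size resource.Quantity `json:"size,omitempty"`
}
```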
Unit, integration and GCE e2e build/test failed for commit ca8a0b7563bb3531e8b800c3832ea40f3ea62372.
Hi, what's the problem with Jenkins? Everything looks ok on my end.
CLAs look good, thanks!
@saad-ali I've repushed. But TBH I doubt that anything I have changed could cause the error that you pasted. Let's wait for the checks, but it should be ok! (I hope so :) Thanks!
Flocker [1] is an open-source container data volume manager for Dockerized applications. This PR adds a volume plugin for Flocker. The plugin interfaces with the Flocker Control Service REST API [2] to attach the volume to the pod.

Each kubelet host should run the Flocker agents (Container Agent and Dataset Agent). The kubelet will also require environment variables that contain the host and port of the Flocker Control Service (see the Flocker architecture [3] for more):

- `FLOCKER_CONTROL_SERVICE_HOST`
- `FLOCKER_CONTROL_SERVICE_PORT`

The contribution introduces a new 'flocker' volume type to the API with the following fields:

- `datasetName`: the name of the dataset in Flocker (added to the dataset's metadata);
- `size`: a human-readable number that indicates the maximum size of the requested dataset.

Full documentation can be found in docs/user-guide/volumes.md and examples in the examples/ folder.

[1] https://clusterhq.com/flocker/introduction/
[2] https://docs.clusterhq.com/en/1.3.1/reference/api.html
[3] https://docs.clusterhq.com/en/1.3.1/concepts/architecture.html
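As a rough illustration of how a kubelet-side plugin could resolve the control service endpoint from the `FLOCKER_CONTROL_SERVICE_*` variables documented above (assuming the GetenvOrFallback helper from the earlier thread; the default values, URL scheme and /v1 path are assumptions, not values taken from this PR):

```go
// flockerControlServiceURL builds the REST endpoint of the Flocker Control
// Service from the kubelet's environment. Defaults and URL layout here are
// illustrative assumptions only.
func flockerControlServiceURL() string {
	host := GetenvOrFallback("FLOCKER_CONTROL_SERVICE_HOST", "localhost")
	port := GetenvOrFallback("FLOCKER_CONTROL_SERVICE_PORT", "4523")
	return fmt.Sprintf("https://%s:%s/v1", host, port)
}
```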
Make sure to re-run
@saad-ali I've run that &
Unit, integration and GCE e2e test build/test passed for commit 5d8fd6db1f74d3f826c4ff4fca4fce98b8671b06.
Unit, integration and GCE e2e test build/test passed for commit fa39c2b.
The author of this PR is not in the whitelist for merge, can one of the admins add the 'ok-to-merge' label?
LGTM was before last commit, removing LGTM
Hello @saad-ali, just wondering if you need something else from me to add the
LGTM re-added.
Manually merging
…28-upstream-release-1.1 Automated cherry pick of #14328 upstream release 1.1
…ck-of-#14328-upstream-release-1.1 Automated cherry pick of kubernetes#14328 upstream release 1.1
Flocker [1] is an open-source container data volume manager for
Dockerized applications.
This PR adds a volume plugin for Flocker.
The plugin interfaces with the Flocker Control Service REST API [2] to
request volume creation and attachment.
Each kubelet host should run Flocker agents (Container Agent and Dataset
Agent).
The kubelet will also require environment variables that contain the
host and port of the Flocker Control Service (see the Flocker
architecture [3] for more):

- `FLOCKER_CONTROL_SERVICE_HOST`
- `FLOCKER_CONTROL_SERVICE_PORT`
The contribution introduces a new 'flocker' volume type to the API with
the following fields:

- `datasetName`: the name of the dataset in Flocker (added to the
  dataset's metadata);
- `size`: a human-readable number that indicates the maximum size of the
  requested dataset.
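For illustration, declaring such a volume from Go client code might look roughly like this; the dataset name is a made-up placeholder and the import path reflects the in-tree API package of that era, so treat this as a sketch rather than the PR's own example:

```go
package main

import (
	"fmt"

	"k8s.io/kubernetes/pkg/api"
)

func main() {
	// Sketch only: a pod volume referencing a Flocker dataset by name.
	// "my-postgres-data" is a hypothetical dataset name.
	vol := api.Volume{
		Name: "flocker-data",
		VolumeSource: api.VolumeSource{
			Flocker: &api.FlockerVolumeSource{
				DatasetName: "my-postgres-data",
			},
		},
	}
	fmt.Printf("volume %q uses Flocker dataset %q\n", vol.Name, vol.Flocker.DatasetName)
}
```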
Full documentation (including getting started guides) and e2e tests to
follow.
As the plugin is code complete, we're hoping to make version 1.1.
[1] https://clusterhq.com/flocker/introduction/
[2] https://docs.clusterhq.com/en/1.3.1/reference/api.html
[3] https://docs.clusterhq.com/en/1.3.1/concepts/architecture.html