Added volume.Creater interface and simple HostPath implementation #13650
Conversation
GCE e2e build/test failed for commit 2fb5322deb2d2ebce6bbdfd47f0b23f334f47dd3.
Labelling this PR as size/L
Force-pushed 2fb5322 to 23c584c.
@thockin I applied the same /tmp/.+ validation here in Creater that you suggested in Deleter. This PR is half as large as Deleter. No calling code yet; that's the provisioning controller PR to come.
Force-pushed 23c584c to 247fd86.
GCE e2e build/test passed for commit 247fd865eeb75ce440f21c961f12000a84c429ed.
GCE e2e build/test failed for commit 23c584cd4046ee073a6ab382cc09286d31fcfbdc.
Force-pushed 247fd86 to 753e42c.
GCE e2e build/test failed for commit 753e42cf5019a6c6027aaa346156f8556136712a.
I am a little anxious about the proliferation of volume interfaces, but this flows from the rest, so I have no better answer (or I would have said it already :) @saad-ali for final
Force-pushed 753e42c to f5b3ce6.
GCE e2e build/test passed for commit f5b3ce6b76c289698fb01981ce59d88dda789ff7.
Force-pushed f5b3ce6 to 930ab92.
GCE e2e build/test passed for commit 930ab92a880351654535dac57c85c6eda7e73d2b.
@saad-ali all green for this one, too.
```go
	VolumePlugin

	// NewCreater creates a new volume.Creater which knows how to create this resource
	// in accordance with the underlying storage provider
	NewCreater(spec *Spec) (Creater, error)
```
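To make the shape of the plugin-side contract concrete, here is a minimal, self-contained sketch. The `Spec`, `PersistentVolume`, `fakePlugin`, and `fakeCreater` types below are simplified stand-ins invented for illustration; they are not the real Kubernetes types, which carry much more state.

```go
package main

import (
	"errors"
	"fmt"
)

// Spec and PersistentVolume are simplified stand-ins for the real
// volume.Spec and api.PersistentVolume types.
type Spec struct {
	Name string
}

type PersistentVolume struct {
	Name string
	Path string
}

// Creater mirrors the interface under review: a blocking Create that
// returns the newly provisioned PersistentVolume.
type Creater interface {
	Create() (*PersistentVolume, error)
}

// fakePlugin sketches the plugin side: NewCreater validates the spec
// and hands back a Creater bound to it.
type fakePlugin struct{}

func (p *fakePlugin) NewCreater(spec *Spec) (Creater, error) {
	if spec == nil || spec.Name == "" {
		return nil, errors.New("spec with a name is required")
	}
	return &fakeCreater{name: spec.Name}, nil
}

type fakeCreater struct {
	name string
}

func (c *fakeCreater) Create() (*PersistentVolume, error) {
	return &PersistentVolume{Name: c.name, Path: "/tmp/" + c.name}, nil
}

func main() {
	p := &fakePlugin{}
	c, err := p.NewCreater(&Spec{Name: "pv-demo"})
	if err != nil {
		panic(err)
	}
	pv, _ := c.Create()
	fmt.Println(pv.Name, pv.Path)
}
```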
I think this is going to need to take information about the pod spec, too. See #12944
Don't want that to block this going in, I just want to call it out. I don't know if we'll be getting to volume security for shared storage any time soon.
There won't be a pod when a PV is being created.
Security is important, though, and this constructor func was changed to work exactly like Builder, which takes VolumeOptions. We've got access to the same security info on the options that Builder has.
Force-pushed d55a0ad to 1e0cedd.
@saad-ali This is ready for final review. Follows the same pattern of passing Options to a constructor and plumbing it through.
```go
// Create for hostPath simply creates a local /tmp/hostpath_pv/%s directory as a new PersistentVolume.
// This Creater is meant for development and testing only and WILL NOT WORK in a multi-node cluster.
func (r *hostPathCreater) Create() (*api.PersistentVolume, error) {
	fullpath := fmt.Sprintf("/tmp/hostpath_pv/%s", util.NewUUID())
```
This isn't going to work with SELinux enforcing. The SELinux type will be tmp_t, which is (rightfully) not usable from the SELinux context docker containers get.
That's great, then. We definitely don't want anyone using this by default in any cluster.
It's for development and local testing only. The provisioner will have integration tests that cover it fully. I don't need a writer pod to test this feature.
> It's for development
Do you develop with setenforce 0?
That's right, I think you use a Mac for development. Just make sure someone who wants to develop on this feature doesn't have to setenforce 0, please.
GCE e2e build/test failed for commit 746a759e16360285cf4fd182a3012ce95d94c062.
GCE e2e build/test failed for commit e56ba0568c9ca7c3d6ae6ba98d5ec5daf14f070e.
@k8s-bot test this
GCE e2e build/test passed for commit e56ba0568c9ca7c3d6ae6ba98d5ec5daf14f070e.
Force-pushed e56ba05 to 9c7d03b.
GCE e2e build/test passed for commit 9c7d03b.
Thanks for the clarifications, Mark! All that said, for the sake of expediency, it's fine as is; we can refactor this code later. LGTM
```go
// @@ -65,6 +66,12 @@
type Recycler interface {
	Recycle() error
}

// Create adds a new resource in the storage provider and creates a PersistentVolume for the new resource.
// Calls to Create should block until complete.
type Creater interface {
```
Shouldn't this be Creator? I don't want to pick a nit at the last minute if this is otherwise good. Follow-up if you agree?
I'm in favor of a distinction between a volume's lifecycle with respect to the node and the lifecycle of persistent volumes, specifically regarding Creater. I'll bounce shippable and see if we can get a green. LGTM other than the Creater naming.
There is an API "creater", too, which I was happy to copy, like Recycler. I know Creator is the proper spelling; I'd rather type the wrong one consistently than have the two spellings mixed. I agree with the code factoring; I can move the provisioning stuff elsewhere.
Ugh. Spelling matters to me in this area; I don't want to have to remember the particular wrong spelling we use. Maybe we should change the precedent.
LGTM.
Thanks @pmorie
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit 9c7d03b.
Automatic merge from submit-queue
Auto commit by PR queue bot
```go
// Create adds a new resource in the storage provider and creates a PersistentVolume for the new resource.
// Calls to Create should block until complete.
type Creater interface {
	Create() (*api.PersistentVolume, error)
```
I think now this is a little bit backward, or should at least be 2 operations:
- the provisioner needs to provide a template PV for its plugin according to the volumeOptions.
- the provisioner needs to invoke its internal logic for creation of the resource in the infrastructure.
A Claim's provisioning should be like this:
- Create a PVC with an experimental annotation; it will only bind to a PV with a matching pre-bind annotation.
- Watch claims; create a PV, but one unfulfilled in the infrastructure. A special annotation prevents binding until fulfillment.
- Watch volumes; create the resource, using the real PV name to link to the provider, then remove the unfulfilled annotation and save the real volumeID.
- The claim will bind to the newly provisioned volume.
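The binding gate in that flow can be sketched in a few lines. The annotation keys below are hypothetical placeholders; the discussion above does not settle on the real key names.

```go
package main

import "fmt"

// annUnfulfilled is a hypothetical annotation key standing in for the
// "special annotation that prevents binding until fulfillment".
const annUnfulfilled = "volume.experimental/unfulfilled"

type PV struct {
	Name        string
	Annotations map[string]string
	VolumeID    string
}

// canBind models the rule: a claim must not bind to a PV that is still
// waiting for its backing resource in the infrastructure.
func canBind(pv *PV) bool {
	_, unfulfilled := pv.Annotations[annUnfulfilled]
	return !unfulfilled
}

// fulfill models the controller step that creates the real resource,
// records its ID, and clears the blocking annotation.
func fulfill(pv *PV, volumeID string) {
	pv.VolumeID = volumeID
	delete(pv.Annotations, annUnfulfilled)
}

func main() {
	pv := &PV{
		Name:        "pv-claim-1",
		Annotations: map[string]string{annUnfulfilled: "true"},
	}
	fmt.Println("bindable before fulfillment:", canBind(pv)) // false
	fulfill(pv, "vol-123")
	fmt.Println("bindable after fulfillment:", canBind(pv)) // true
}
```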
Yeah, I think that if we're retooling the controllers this makes more sense to be lifted out of the plugin.
Also, we've done sort of a poor job of defining exactly what logic belongs in the GCE volume code vs what belongs in the GCE cloud provider code. More and more I am feeling that the GCE-API stuff should be in the cloud provider - the GCE specific provider, not as a generic abstraction. That module becomes our "one true place" to interface with GCE APIs. The GCEPD volume uses that to get PDs attached and mounted. (I mean eventually :)
> That module becomes our "one true place" to interface with GCE APIs. The GCEPD volume uses that to get PDs attached and mounted. (I mean eventually :)
Thinking through that a bit -- so we will wind up with a provisioner for GCEPD that gets the cloud provider from the VolumeHost interface, then makes a downcast to GCECloudProvider, and uses some special function of that type to create a new PD.
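That downcast step can be sketched as a plain Go type assertion. The type and method names here (CloudProvider, GCECloudProvider, CreateDisk) are illustrative assumptions, not the real Kubernetes signatures.

```go
package main

import (
	"errors"
	"fmt"
)

// CloudProvider stands in for the generic interface a provisioner
// would get from VolumeHost; empty for this sketch.
type CloudProvider interface{}

// GCECloudProvider is the concrete provider carrying the GCE-specific
// call. CreateDisk is an illustrative name, not the real method.
type GCECloudProvider struct{}

func (g *GCECloudProvider) CreateDisk(name string, sizeGB int) (string, error) {
	return fmt.Sprintf("projects/demo/disks/%s", name), nil
}

// provisionGCEPD shows the downcast: refuse to provision unless the
// cluster's cloud provider is actually GCE.
func provisionGCEPD(cloud CloudProvider, name string, sizeGB int) (string, error) {
	gce, ok := cloud.(*GCECloudProvider)
	if !ok {
		return "", errors.New("cloud provider is not GCE; cannot provision a PD")
	}
	return gce.CreateDisk(name, sizeGB)
}

func main() {
	link, err := provisionGCEPD(&GCECloudProvider{}, "pd-demo", 10)
	if err != nil {
		panic(err)
	}
	fmt.Println(link)
}
```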
That seems simple enough, but it's not going to be as cut and dry for other volume types. NFS, for example:
- You want a provisioner that knows how to provision EFS volumes that are used as NFS volumes
- You want a provisioner that knows how to provision using the netapp ONTAP API
- What happens when you have a cluster with an aws cloud provider and you want to provision some NFS volumes via EFS and some via ONTAP against a machine running the ONTAP server in AWS?
Maybe we solve (3) by differentiating distinct volume types for differently provisioned volumes of the same type, but that doesn't sit quite right with me. I guess the question is whether there is really utility for an encompassing 'type' for each permutation of (client-side-concerns,provisioning-concerns) versus decoupling them.
> Thinking through that a bit -- so we will wind up with a provisioner for GCEPD that gets the cloud provider from the VolumeHost interface, then makes a downcast to GCECloudProvider, and uses some special function of that type to create a new PD.
Correct - it's just a matter of where the code lives.
> That seems simple enough, but it's not going to be as cut and dry for other volume types. NFS, for example:
> - You want a provisioner that knows how to provision EFS volumes that are used as NFS volumes
> - You want a provisioner that knows how to provision using the netapp ONTAP API
> - What happens when you have a cluster with an aws cloud provider and you want to provision some NFS volumes via EFS and some via ONTAP against a machine running the ONTAP server in AWS?
The NetApp NFS provisioner is a totally different beast than the GCE provisioner. I do not think we should have a single provisioner that knows how to provision every volume type.
> Maybe we solve (3) by differentiating distinct volume types for differently provisioned volumes of the same type, but that doesn't sit quite right with me. I guess the question is whether there is really utility for an encompassing 'type' for each permutation of (client-side-concerns, provisioning-concerns) versus decoupling them.
> The NetApp NFS provisioner is a totally different beast than the GCE provisioner. I do not think we should have a single provisioner that knows how to provision every volume type.
I wasn't talking about a single provisioner that knows how to provision all types. I was saying: we'll have an EFS provisioner and a netapp provisioner; do we need to differentiate entire plugins for those, since they're both NFS on the client side (afaik), or do we need a mapping of the PV concerns to plugins that allows us to DNR on volume plugins just to differentiate provisioners?
The provisioner will map what it gets from the PVC onto what it knows to create. That might start with "class = gold" mapping to netapp while "class = bronze" maps to NFS on a RasPi.
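That class-to-provisioner mapping could be as simple as a lookup table. Everything below (the Provisioner interface, the two toy provisioners, the returned volume strings) is invented for illustration under the "gold"/"bronze" example above.

```go
package main

import (
	"errors"
	"fmt"
)

// Provisioner is whatever creates the backing resource; the mapping
// below is the interesting part of this sketch.
type Provisioner interface {
	Provision(claimName string) (string, error)
}

type netappProvisioner struct{}

func (netappProvisioner) Provision(claim string) (string, error) {
	return "netapp:/" + claim, nil
}

type raspiNFSProvisioner struct{}

func (raspiNFSProvisioner) Provision(claim string) (string, error) {
	return "raspi-nfs:/" + claim, nil
}

// Hypothetical class-to-provisioner table, in the spirit of
// "class = gold" -> netapp, "class = bronze" -> NFS on a RasPi.
var provisionersByClass = map[string]Provisioner{
	"gold":   netappProvisioner{},
	"bronze": raspiNFSProvisioner{},
}

func provisionForClaim(class, claim string) (string, error) {
	p, ok := provisionersByClass[class]
	if !ok {
		return "", errors.New("no provisioner for class " + class)
	}
	return p.Provision(claim)
}

func main() {
	v, _ := provisionForClaim("gold", "pvc-1")
	fmt.Println(v) // netapp:/pvc-1
}
```

This is essentially the shape that storage classes later took: the claim names a class, and the class picks the provisioner.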
Part of #6773
volume.Creater interface allows a CreatableVolumePlugin to create volumes in the underlying infrastructure.
This PR is the framework for Creater and a deliberately gimped HostPath implementation for testing/example. Each plugin would implement Creater, as appropriate, in a separate PR.
@thockin @saad-ali @kubernetes/rh-storage