
PersistentDisk: one per cloud, or one for all clouds? #5129

Closed · justinsb opened this issue Mar 6, 2015 · 10 comments
Labels: area/api, area/cloudprovider, priority/awaiting-more-evidence, sig/storage

Comments

@justinsb
Member

justinsb commented Mar 6, 2015

Currently we have "GCEPersistentDisk", and this name is exposed via the API. I am working on adding support for AWS EBS, which is very similar to GCEPersistentDisk.

Should I add another type AWSPersistentDisk, or should we try to make GCEPersistentDisk work for any PD/EBS/Cinder style cloud-block device? (And presumably rename GCEPersistentDisk to CloudPersistentDisk or something similar)

On EC2, EBS volumes are bound to a specific AZ, so I think we'll likely have a "cloud location" specifier even if we have AWSPersistentDisk (cloudLocation: us-west-2b). I'm thinking if we're going to do that, we might as well have "cloudLocation: aws/us-west-2b", in which case there seems less reason to have the different types.

My personal preference would be to have one PersistentDisk type for all the clouds, to avoid code duplication; it also feels like a simpler API to consume.
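To make the choice concrete, here is a rough sketch of the two shapes (type and field names here are illustrative only, not actual Kubernetes source):

```go
package api

// Illustrative sketch; names like CloudPersistentDiskVolumeSource are
// hypothetical and do not appear in pkg/api/types.go.

// Option A: one volume source type per cloud.
type AWSPersistentDiskVolumeSource struct {
	VolumeID string `json:"volumeID"`           // EBS volume ID
	FSType   string `json:"fsType"`             // filesystem type, e.g. "ext4"
	ReadOnly bool   `json:"readOnly,omitempty"`
}

// Option B: one volume source type for all clouds, with a combined
// provider/zone specifier such as "aws/us-west-2b".
type CloudPersistentDiskVolumeSource struct {
	CloudLocation string `json:"cloudLocation"` // e.g. "aws/us-west-2b"
	DiskName      string `json:"diskName"`
	FSType        string `json:"fsType"`
	ReadOnly      bool   `json:"readOnly,omitempty"`
}
```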

@markturansky
Contributor

Even if you had one PD type for all clouds, you would then need an interface (to work with clouds generically) and an implementation (to work with AWS specifically). IMHO, this is a net gain in complexity, not a reduction.

Common mounting utilities recently moved to pkg/util/mount to reduce some of the duplication we saw after introducing the NFS and iSCSI plugins. Surely there will be other common code that can be refactored to promote reuse, but there still needs to be one volume plugin per provider.
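To illustrate: even with a single cross-cloud type, the code underneath still forks per provider, roughly like this (a hedged sketch; the real plugin interfaces in pkg/volume differ in detail):

```go
package volume

// Hedged sketch; interface and method names are illustrative, not the
// actual pkg/volume plugin API.

// Attacher abstracts "attach a cloud block device to a host".
type Attacher interface {
	AttachDisk(diskName, host string, readOnly bool) error
	DetachDisk(diskName, host string) error
}

// One implementation per provider is still required.
type awsAttacher struct {
	// EC2 client, region, credentials, ...
}

func (a *awsAttacher) AttachDisk(diskName, host string, readOnly bool) error {
	// Would call the EC2 AttachVolume API and wait for the attachment.
	return nil
}

func (a *awsAttacher) DetachDisk(diskName, host string) error {
	// Would call the EC2 DetachVolume API.
	return nil
}
```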

I am doing a lot of work in volumes and Red Hat is producing many types of volume plugins for storage. Feel free to reach out to me. I'm happy to work with you. I'm in the #google-containers IRC channel.

@bgrant0607 added the area/api, area/cloudprovider, sig/storage, team/cluster, and priority/awaiting-more-evidence labels on Mar 6, 2015
@bgrant0607
Member

I don't want cloud-specific objects (including GCEPersistentDisk) in the API.

@thockin

@markturansky
Contributor

@bgrant0607 agreed and understood. See #5105.

@thockin
Member

thockin commented Mar 24, 2015

Right - the place for cloud-specific PD implementations is in the persistent-volume structs, not in the volume-source struct (and I'll be happy to put a bullet in the GCEVolumeSource).

But to the original question, I think we want one PD type per implementation, not one uber-abstraction.


@justinsb
Member Author

So, to confirm: I am copying-and-pasting "persistentDisk" in VolumeSource of pkg/api/types.go to "awsPersistentDisk", and copying-and-pasting GCEPersistentDiskVolumeSource to AWSPersistentDiskVolumeSource. For now, API callers will need to know whether they are running on GCE / AWS / OpenStack / VMware / Azure and populate the correct field.

#5105 (when it lands) will then refactor this and replace persistentDisk / awsPersistentDisk / nfs with a single PersistentVolume type, so users will not have to know what type of cloud they're running on.
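Concretely, something like this (a sketch; field and JSON names approximate the current API rather than quoting pkg/api/types.go exactly):

```go
package api

// Sketch of the planned change; shapes simplified for illustration.

type GCEPersistentDiskVolumeSource struct {
	PDName   string `json:"pdName"`
	FSType   string `json:"fsType"`
	ReadOnly bool   `json:"readOnly,omitempty"`
}

type AWSPersistentDiskVolumeSource struct {
	VolumeID string `json:"volumeID"`
	FSType   string `json:"fsType"`
	ReadOnly bool   `json:"readOnly,omitempty"`
}

type VolumeSource struct {
	// Existing GCE field, unchanged.
	GCEPersistentDisk *GCEPersistentDiskVolumeSource `json:"persistentDisk,omitempty"`
	// New field, cloned and adapted for EBS.
	AWSPersistentDisk *AWSPersistentDiskVolumeSource `json:"awsPersistentDisk,omitempty"`
}
```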

Is that correct?

@markturansky
Contributor

Cut/Paste "persistentDisk" and refactor to suit the AWS provider. This provides the volume to Kubernetes.

The struct pointer can go in either/both VolumeSource and PersistentVolumeSource. The former allows pod authors to use it directly to access a known EBS volume. The latter makes it a provisionable resource by a cluster admin.
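In sketch form (hypothetical shapes, not the exact API structs):

```go
package api

type AWSPersistentDiskVolumeSource struct {
	VolumeID string `json:"volumeID"`
}

// Pod authors can reference a known EBS volume directly...
type VolumeSource struct {
	AWSPersistentDisk *AWSPersistentDiskVolumeSource `json:"awsPersistentDisk,omitempty"`
}

// ...while the same pointer in PersistentVolumeSource lets a cluster admin
// offer the disk as a provisionable resource.
type PersistentVolumeSource struct {
	AWSPersistentDisk *AWSPersistentDiskVolumeSource `json:"awsPersistentDisk,omitempty"`
}
```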

@markturansky
Contributor

After copying from GCE, you'll want to change your volume's GetAccessModes method. EBS only supports ReadWriteOnce, I believe, while GCE PD has two modes (it adds ReadOnlyMany).
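Roughly like this (a sketch of the difference only; the real GetAccessModes lives on the volume plugin and returns the API's access-mode constants):

```go
package volume

// Illustrative access-mode constants; the real API defines these elsewhere.
type AccessMode string

const (
	ReadWriteOnce AccessMode = "ReadWriteOnce" // mount read-write on one host
	ReadOnlyMany  AccessMode = "ReadOnlyMany"  // mount read-only on many hosts
)

// EBS attaches to a single instance at a time.
func ebsAccessModes() []AccessMode {
	return []AccessMode{ReadWriteOnce}
}

// GCE PD additionally supports read-only attachment from many instances.
func gcePDAccessModes() []AccessMode {
	return []AccessMode{ReadWriteOnce, ReadOnlyMany}
}
```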

@thockin
Member

thockin commented Mar 27, 2015

clone and customize, but yes :)


@justinsb
Member Author

We've implemented the "one per cloud" API model, so closing.

@bgrant0607
Member

PersistentVolumeClaim is the "one for all clouds" abstraction.

PersistentVolumeSource is "one per cloud" currently, but eventually we'd like to convert it to a plugin.
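In other words, roughly (a simplified sketch, not the exact API types):

```go
package api

// The claim is cloud-agnostic: users ask for access modes and capacity.
type PersistentVolumeClaimSpec struct {
	AccessModes []string `json:"accessModes"`
	// Requested size, e.g. "8Gi" (the real API uses resource.Quantity).
	Storage string `json:"storage"`
}

// The volume source stays one-per-cloud until it becomes a plugin.
type PersistentVolumeSource struct {
	GCEPersistentDisk    *GCEPersistentDiskVolumeSource    `json:"gcePersistentDisk,omitempty"`
	AWSElasticBlockStore *AWSElasticBlockStoreVolumeSource `json:"awsElasticBlockStore,omitempty"`
	NFS                  *NFSVolumeSource                  `json:"nfs,omitempty"`
}

// Minimal stubs so the sketch stands alone.
type GCEPersistentDiskVolumeSource struct{ PDName string }
type AWSElasticBlockStoreVolumeSource struct{ VolumeID string }
type NFSVolumeSource struct{ Server, Path string }
```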
