PersistentDisk: one per cloud, or one for all clouds? #5129
Comments
Even if you had one PD type for all clouds, you would then need an interface (to work with clouds generically) and an implementation (to work with AWS specifically). IMHO, this is a net gain in complexity, not a reduction. Common mounting utilities recently moved to pkg/util/mount to reduce some of the duplication we saw after introducing the NFS and iSCSI plugins. Surely there will be other common code that can be refactored to promote reuse, but there still needs to be one volume plugin per provider. I am doing a lot of work in volumes and Red Hat is producing many types of volume plugins for storage. Feel free to reach out to me. I'm happy to work with you. I'm in the #google-containers IRC channel.
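To illustrate the point about one generic interface with a per-cloud implementation behind it, here is a minimal sketch; the names (BlockVolumePlugin, Attach, Mount, awsEBSPlugin) are hypothetical and are not the actual Kubernetes volume plugin API:

```go
// Hypothetical sketch: one cloud-agnostic interface, one implementation per cloud.
package volume

// BlockVolumePlugin is the generic contract the kubelet would program against.
type BlockVolumePlugin interface {
	// Name identifies the plugin, e.g. "gce-pd" or "aws-ebs".
	Name() string
	// Attach makes the cloud block device visible on the node.
	Attach(volumeID, nodeName string) (devicePath string, err error)
	// Mount mounts the attached device at the given path; shared logic such as
	// formatting and bind-mounting could live in common helpers (pkg/util/mount).
	Mount(devicePath, mountPath string) error
}

// awsEBSPlugin is the AWS-specific implementation; GCE PD would have its own.
type awsEBSPlugin struct{}

func (p *awsEBSPlugin) Name() string { return "aws-ebs" }

func (p *awsEBSPlugin) Attach(volumeID, nodeName string) (string, error) {
	// Would call the EC2 AttachVolume API here.
	return "/dev/xvdf", nil
}

func (p *awsEBSPlugin) Mount(devicePath, mountPath string) error {
	// Would reuse the shared mount utilities here.
	return nil
}
```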
I don't want cloud-specific objects (including GCEPersistentDisk) in the API.
@bgrant0607 agreed and understood. See #5105.
Right - the place for cloud-specific PD implementations is in the volume plugins. But to the original question, I think we want 1 PD type per implementation.
So, to confirm, I am copying-and-pasting "persistentDisk" in VolumeSource of pkg/api/types.go, to "awsPersistentDisk". And copying-and-pasting GCEPersistentDiskVolumeSource -> AWSPersistentDiskVolumeSource. For now, API callers will need to know whether they are running on GCE / AWS / OpenStack / VMWare / Azure, and populate the correct field. #5105 (when it lands) will then refactor this and replace persistentDisk / awsPersistentDisk / nfs with a single PersistentVolume type, so users will not have to know what type of cloud they're running on. Is that correct?
Cut/Paste "persistentDisk" and refactor to suit the AWS provider. This provides the volume to Kubernetes. The struct pointer can go in either or both of VolumeSource and PersistentVolumeSource. The former allows pod authors to use it directly to access a known EBS volume. The latter makes it a resource that a cluster admin can provision.
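As a rough sketch of what that copy-and-rename in pkg/api/types.go could look like (fields trimmed for illustration; these are not the exact upstream structs):

```go
// Condensed sketch of the API types after the copy-and-rename.
package api

// GCEPersistentDiskVolumeSource references a GCE PD by name.
type GCEPersistentDiskVolumeSource struct {
	PDName   string
	FSType   string
	ReadOnly bool
}

// AWSPersistentDiskVolumeSource is the copied-and-adapted EBS counterpart.
type AWSPersistentDiskVolumeSource struct {
	VolumeID string
	FSType   string
	ReadOnly bool
}

// VolumeSource lets a pod author reference a known disk directly.
type VolumeSource struct {
	GCEPersistentDisk *GCEPersistentDiskVolumeSource
	AWSPersistentDisk *AWSPersistentDiskVolumeSource
	// ... other volume types (NFS, iSCSI, ...)
}

// PersistentVolumeSource lets a cluster admin provision the same disks
// as PersistentVolume resources.
type PersistentVolumeSource struct {
	GCEPersistentDisk *GCEPersistentDiskVolumeSource
	AWSPersistentDisk *AWSPersistentDiskVolumeSource
}
```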
After copying from GCE, you'll want to change your volume's GetAccessModes method. EBS only supports ReadWriteOnce, I believe, while GCE has two modes (it adds ReadOnlyMany).
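A hedged sketch of that access-mode difference; GetAccessModes is the method named above, while the surrounding types here are simplified placeholders:

```go
// Illustrative only: simplified access-mode types, not the real API.
package volume

type AccessMode string

const (
	ReadWriteOnce AccessMode = "ReadWriteOnce" // mounted read-write by a single node
	ReadOnlyMany  AccessMode = "ReadOnlyMany"  // mounted read-only by many nodes
)

type awsPersistentDiskPlugin struct{}

// An EBS volume can only be attached to one node at a time.
func (p *awsPersistentDiskPlugin) GetAccessModes() []AccessMode {
	return []AccessMode{ReadWriteOnce}
}

type gcePersistentDiskPlugin struct{}

// A GCE PD additionally supports being mounted read-only by many nodes.
func (p *gcePersistentDiskPlugin) GetAccessModes() []AccessMode {
	return []AccessMode{ReadWriteOnce, ReadOnlyMany}
}
```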
clone and customize, but yes :)
We've implemented the "one per cloud" API model, so closing.
PersistentVolumeClaim is the "one for all clouds" abstraction. PersistentVolumeSource is "one per cloud" currently, but eventually we'd like to convert it to a plugin. |
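A simplified sketch of that split, with the claim as the cloud-agnostic request and the volume carrying the per-cloud source; the struct shapes and the bind helper are illustrative, not the exact API or binder logic:

```go
// Illustrative only: simplified shapes, not the exact Kubernetes API types.
package claims

// PersistentVolumeClaim is the cloud-agnostic request a user makes.
type PersistentVolumeClaim struct {
	AccessModes []string // e.g. "ReadWriteOnce"
	RequestedGi int      // requested capacity in GiB
}

// PersistentVolume is the admin-provisioned resource that carries the
// per-cloud source (GCE PD, AWS EBS, NFS, ...), reduced here to a label.
type PersistentVolume struct {
	SourceKind string // "gcePersistentDisk", "awsPersistentDisk", ...
	SizeGi     int
}

// bind matches a claim to the first volume with enough capacity; the claimant
// never has to know which cloud backs the volume it receives.
func bind(claim PersistentVolumeClaim, volumes []PersistentVolume) *PersistentVolume {
	for i := range volumes {
		if volumes[i].SizeGi >= claim.RequestedGi {
			return &volumes[i]
		}
	}
	return nil
}
```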
Currently we have "GCEPersistentDisk", and this name is exposed via the API. I am working on adding support for AWS EBS, which is very similar to GCEPersistentDisk.
Should I add another type AWSPersistentDisk, or should we try to make GCEPersistentDisk work for any PD/EBS/Cinder style cloud-block device? (And presumably rename GCEPersistentDisk to CloudPersistentDisk or something similar)
On EC2, EBS volumes are bound to a specific AZ, so I think we'll likely have a "cloud location" specifier even if we have AWSPersistentDisk (cloudLocation: us-west-2b). I'm thinking if we're going to do that, we might as well have "cloudLocation: aws/us-west-2b", in which case there seems less reason to have the different types.
My personal preference would be to have one PersistentDisk type for all the clouds, to avoid code duplication; it also feels like a simpler API to consume.
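For comparison, a hypothetical sketch of the "one type for all clouds" alternative proposed here; this is not what was ultimately implemented (the "one per cloud" model won out, as noted in the closing comment above):

```go
// Hypothetical only: a single cross-cloud disk type, as floated in this issue.
package api

// CloudPersistentDiskVolumeSource would name a disk plus a provider-scoped
// location such as "aws/us-west-2b" or "gce/us-central1-a".
type CloudPersistentDiskVolumeSource struct {
	DiskName      string // EBS volume ID, GCE PD name, Cinder volume ID, ...
	CloudLocation string // "<provider>/<zone>"
	FSType        string
	ReadOnly      bool
}
```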