Provide support to attach a vhd as volume on Microsoft Azure #23259
Comments
Sounds reasonable to me. Glad to have more Azure support.
Hi @colrack, I discussed this with some folks, and it sounds like the disk attachment APIs for Azure are not very reliable, and that it would be difficult to build a reliable feature on top of them. I don't have a lot of specifics, but I was told that another developer tried to build a Flocker feature on top of the disk attach APIs and ran into significant problems. We have a feature in the works that would be better suited for these types of use cases, but we don't have anything we can share about the feature or timeline currently.
Thanks @colemickens. Some time ago I looked into https://github.com/sedouard/azure-flocker-driver/, which at that time was a work in progress; I think this is the project you are talking about. Discussion of the issues is at https://github.com/sedouard/azure-flocker-driver/issues/15.
Note this has to be synced with #20262.
Is it going to be in release 1.3?
Automatic merge from submit-queue support Azure data disk volume This is a WIP of supporting azure data disk volume. Will add test and dynamic provisioning support once #29006 is merged replace #25915 fix #23259 @kubernetes/sig-storage @colemickens @brendandburns
With this issue I want to track and share thoughts about introducing the ability to attach a VHD as a volume on Microsoft Azure.
With the recent Kubernetes v1.2.0, the ability to use the Azure File Service as a volume was introduced. I recently gave it a try on a cluster running CoreOS and, with some workarounds (see coreos/bugs#571 and coreos/bugs#358), it seems to work. However, the Azure File Service has a lot of limitations (see https://msdn.microsoft.com/en-us/library/azure/dn744326.aspx) that prevent its use in many scenarios.
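For reference, a minimal sketch of mounting an Azure File share as a volume (the share name and secret name are placeholders; the secret is assumed to hold the storage account name and key):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: azure-file-example
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: share
      mountPath: /mnt/azure
  volumes:
  - name: share
    azureFile:
      secretName: azure-secret  # secret with the storage account name and key
      shareName: myshare
      readOnly: false
```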
Intro
In Azure we can attach a persistent disk to a VM. This is more or less equivalent to attaching a Persistent Disk to a VM on GCE, or attaching an EBS volume to an EC2 instance on AWS. When you attach a VHD to a VM, a new device is created in /dev, and you can then perform operations like format and mount; attaching and detaching a VHD does not require a reboot. Persistent disks in Azure are VHDs. You can create and attach them, and perform other operations, via the REST API. VHDs are stored as page blobs in an Azure Storage Account, inside a blob container (see https://azure.microsoft.com/en-us/documentation/articles/storage-introduction/). In Azure a VHD can be attached to only a single VM at a time, so using a VHD as a volume would work like attaching a gcePersistentDisk or awsElasticBlockStore to a pod, with more or less the same limitations. These limitations are due to the fact that, as noted above, a VHD can be attached to a single VM: VHDs are mounted on a node's filesystem, and the volume associated with that filesystem must be bound to a pod running on the same node. Hypothetically, pods running on the same node could share the same volume for both reading and writing, but this is a complicated and not particularly useful scenario.
Work to do
Add the following under `kubernetes/pkg/volume/azure_vhd/` (a rough skeleton of the plugin entry points is sketched after this list):

- `kubernetes/pkg/volume/azure_vhd/azure_vhd.go`
- `kubernetes/pkg/volume/azure_vhd/azure_vhd_test.go`
- `kubernetes/pkg/volume/azure_vhd/azure_util.go`
- `kubernetes/pkg/volume/azure_vhd/doc.go`
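As a hedged illustration only, here is a very rough sketch of how `azure_vhd.go` might start, modeled on the `gce_pd` plugin layout of that era. The plugin name and the exact `volume.VolumePlugin` method signatures are assumptions; check `pkg/volume/plugins.go` in the target release for the real interface.

```go
// Rough sketch of kubernetes/pkg/volume/azure_vhd/azure_vhd.go, modeled on
// pkg/volume/gce_pd/. Method names and signatures approximate the 1.2-era
// volume.VolumePlugin interface and may not match a given release exactly.
package azure_vhd

import "k8s.io/kubernetes/pkg/volume"

// Hypothetical plugin name; the convention is "kubernetes.io/<type>".
const azureVhdPluginName = "kubernetes.io/azure-vhd"

// ProbeVolumePlugins is the entry point the kubelet and
// controller-manager call to register the plugin.
func ProbeVolumePlugins() []volume.VolumePlugin {
	return []volume.VolumePlugin{&azureVhdPlugin{}}
}

type azureVhdPlugin struct {
	host volume.VolumeHost
}

func (p *azureVhdPlugin) Init(host volume.VolumeHost) error {
	p.host = host
	return nil
}

func (p *azureVhdPlugin) Name() string {
	return azureVhdPluginName
}

func (p *azureVhdPlugin) CanSupport(spec *volume.Spec) bool {
	// Would return true when the spec carries the (hypothetical)
	// azure_vhd volume source; the builders/cleaners that do the
	// actual attach, format, and mount work are omitted here.
	return false
}
```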
You can look at the existing code in `kubernetes/pkg/volume/`, especially `kubernetes/pkg/volume/aws_ebs/` and `kubernetes/pkg/volume/gce_pd/`, since the idea is the same. Basically, the code must call the Azure API to attach the VHD to the node where the pod is scheduled, mount the filesystem on that VM, and share the volume with the running container. Cloud-provider logic is kept separate from volume code (those files live in `kubernetes/pkg/cloudprovider/providers/`), so the Azure-specific pieces must be added under `kubernetes/pkg/cloudprovider/providers/azure` and then used from the files listed above.

Credentials to call the Azure API must be provided to the masters in the cluster. This can be achieved by creating a service principal (see https://azure.microsoft.com/en-us/documentation/articles/resource-group-authenticate-service-principal/) and letting it access resources in the subscription where the kube cluster resides. The Azure SDK for Go is at https://github.com/Azure/azure-sdk-for-go; the package to use is `arm/compute`. Use `func (VirtualMachinesClient) CreateOrUpdate` to update the VM, passing the appropriate parameters (`VirtualMachine -> VirtualMachineProperties -> StorageProfile -> DataDisks -> VirtualHardDisk`).
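A minimal sketch of that call, assuming a 2016-era snapshot of the SDK (client setup and authorizer wiring are omitted, and type and method shapes changed across SDK versions, so treat names as approximate):

```go
// Sketch: attach an existing VHD as a data disk via the arm/compute
// package of the Azure SDK for Go. Field and method shapes are from a
// 2016-era SDK snapshot and varied across versions.
package main

import (
	"github.com/Azure/azure-sdk-for-go/arm/compute"
	"github.com/Azure/go-autorest/autorest/to"
)

func attachVhd(client compute.VirtualMachinesClient, resourceGroup, vmName, vhdURI string) error {
	// Read the current VM model so we can append to its data disks.
	vm, err := client.Get(resourceGroup, vmName, "")
	if err != nil {
		return err
	}

	disks := *vm.Properties.StorageProfile.DataDisks
	disks = append(disks, compute.DataDisk{
		Name:         to.StringPtr("kube-data-disk"),
		Lun:          to.Int32Ptr(0), // must be a LUN not already in use on the VM
		Vhd:          &compute.VirtualHardDisk{URI: to.StringPtr(vhdURI)},
		CreateOption: compute.Attach, // attach an existing VHD, don't create one
	})
	vm.Properties.StorageProfile.DataDisks = &disks

	// CreateOrUpdate with the modified model performs the attach.
	// (Some SDK versions take a cancel channel and/or return result
	// channels instead.)
	_, err = client.CreateOrUpdate(resourceGroup, vmName, vm, nil)
	return err
}
```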
An example pod would look something like this:
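The original example was not preserved; below is a minimal sketch with a hypothetical `azureVhd` volume source (no such field existed in the API at the time; the feature eventually shipped as the `azureDisk` volume type with similar fields):

```yaml
# Hypothetical pod spec; the azureVhd volume source and its fields are
# illustrative only and were never part of the API.
apiVersion: v1
kind: Pod
metadata:
  name: azure-vhd-example
spec:
  containers:
  - name: web
    image: nginx
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html
  volumes:
  - name: data
    azureVhd:
      vhdURI: https://<storage-account>.blob.core.windows.net/vhds/data.vhd
      fsType: ext4
      readOnly: false
```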
Please comment and share your thoughts.
Feel free to point out if I'm missing something or if I'm wrong.
Ciao,
Carlo