Add block volume support to internal provisioners. #64447
e2e tests will follow
/lgtm
on azuredisk & azurefile part
Thanks for the quick review; however, I need the other reviewers too :-)
/approve
Added e2e test. I ran it manually on GCE and AWS. It will be run in the alpha test suite(s), since it depends on an alpha feature.
This seems like a feature, but it's really a bug fix: ideally these volume types wouldn't even provision volumes, but enforcing that may be more difficult than enabling them to provision block volumes. Would like this in as a fix for 1.11.
/kind bug
"lgtm" from gluster side, Thanks Jan!
@mkimuram @screeley44 @msau42 please review
/retest
I agree with you that it is enough for checking the current internal provisioners.
However, IIUC, we are on the way to migrating the provisioning logic for all plugins to CSI. If we don't check it, we won't be able to make file-based backends, like NFS and the file mode of Gluster/Ceph, co-exist in a blockVolume-enabled environment. So I think that we also need to find a way to check it in the external provisioner (*1).
(*1) provisionClaimOperation() for the external provisioner.
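To make the idea concrete, here is a minimal Go sketch of the kind of volumeMode check I mean; the function name and the supportsBlock flag are made up for illustration, not the actual provisionClaimOperation() code:

```go
package example

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// checkProvisionerVolumeMode rejects block-mode claims for plugins that can
// only provision filesystem volumes, e.g. NFS or the file mode of
// Gluster/Ceph. A nil VolumeMode defaults to Filesystem, which every
// provisioner supports.
func checkProvisionerVolumeMode(claim *v1.PersistentVolumeClaim, supportsBlock bool) error {
	if claim.Spec.VolumeMode != nil &&
		*claim.Spec.VolumeMode == v1.PersistentVolumeBlock &&
		!supportsBlock {
		return fmt.Errorf("block volume provisioning is not supported by this plugin")
	}
	return nil
}
```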
For CSI we have an external provisioner that will get updated with a particular Kubernetes release and will check what the PVC wants and whether the CSI driver supports it.
Users could run older versions of the CSI driver, though, so we still need to handle the scenario where the external provisioner ignores any new provisioning parameters.
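One way to handle that, sketched below under the assumption that the controller sees both the claim and the provisioned PV (the helper names are hypothetical), is to verify after provisioning that the PV's mode matches what the claim asked for:

```go
package example

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// effectiveMode returns the volume mode, defaulting to Filesystem when unset.
func effectiveMode(m *v1.PersistentVolumeMode) v1.PersistentVolumeMode {
	if m == nil {
		return v1.PersistentVolumeFilesystem
	}
	return *m
}

// verifyProvisionedMode guards against a provisioner that ignored VolumeMode
// and returned a filesystem PV for a block claim (or vice versa).
func verifyProvisionedMode(claim *v1.PersistentVolumeClaim, pv *v1.PersistentVolume) error {
	if effectiveMode(claim.Spec.VolumeMode) != effectiveMode(pv.Spec.VolumeMode) {
		return fmt.Errorf("volumeMode mismatch: claim requests %s, provisioned PV has %s",
			effectiveMode(claim.Spec.VolumeMode), effectiveMode(pv.Spec.VolumeMode))
	}
	return nil
}
```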
AWS and e2e LGTM
To discuss a similar check for the external provisioner, I made a PR for it.
As described there, a similar issue to CSI also happens in the flex volume plugin.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: andyzhangx, jsafrane, rootfs
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
I got reviews from all volume plugins (@rootfs for Cinder and RBD).
/sig storage
/status approved-for-milestone
[MILESTONENOTIFIER] Milestone Pull Request: Up-to-date for process
@andyzhangx @copejon @jsafrane @mtanino @rootfs
Pull Request Labels
/test all [submit-queue is verifying that this PR is safe to merge] |
@jsafrane: The following tests failed, say /retest to rerun them:
Full PR test history. Your PR dashboard.
Please help us cut down on flakes by linking to an open issue when you hit one in your PR. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.
Automatic merge from submit-queue (batch tested with PRs 63348, 63839, 63143, 64447, 64567). If you want to cherry-pick this change to another branch, please follow the instructions here.
What this PR does / why we need it:
Currently, internal provisioners create filesystem PVs even when block PVs are requested. This leads to unbindable PVCs.
In this PR, volume plugins that support block volumes provision block PVs when block is requested. All the other provisioners return a clear error, visible in kubectl describe pvc.
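As an illustration of how such an error can surface there: events recorded on the claim are what kubectl describe pvc shows under Events. A minimal sketch follows, with the recorder wiring and function name being illustrative rather than the actual controller code:

```go
package example

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

// reportProvisioningFailure records a warning event on the claim; events on a
// PVC are what kubectl describe pvc lists under Events.
// Sketch only: the real PV controller wires its event recorder differently.
func reportProvisioningFailure(recorder record.EventRecorder, claim *v1.PersistentVolumeClaim, msg string) {
	recorder.Event(claim, v1.EventTypeWarning, "ProvisioningFailed", msg)
}
```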
cc @kubernetes/vmware for vsphere changes
cc @andyzhangx for Azure changes
/assign @copejon @mtanino