AccessMode doesn't work properly within PVC #47333
Comments
@pkutishch There are no sig labels on this issue. Please add a sig label by mentioning @kubernetes/sig-storage or by commenting /sig storage.
The only function AccessModes has right now is at binding time, ensuring a PVC gets bound to a PV with the AccessModes it wants: think of it as a first-class label selector. The way Kubernetes thinks of it is that RWO is a subset of RWX, so a provisioned RWX glusterfs volume satisfies your claim. But I agree this is a bug; we should provision ReadOnly when ReadOnly is explicitly requested. The value of the ReadOnly setting of PersistentVolumeSource (which is what says "mount with ro") can be totally independent of the AccessModes, i.e. you can have a PV with RWX AccessModes but ReadOnly set to true. But the provisioners are in a position to be smarter and set ReadOnly according to what AccessModes were requested. Let me plug #47274: once it's merged, provisioners will stop ignoring AccessModes and will start parsing them on a per-plugin basis, though for a different reason than what's requested here. When implemented this way it will be a "breaking bugfix", so we will need approval before we break anything.
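To make that independence concrete, here is a minimal hand-written PV sketch (all names are placeholders, not taken from this issue): the accessModes list advertises ReadWriteMany, while the glusterfs source still forces a read-only mount.

```yaml
# Sketch only: accessModes and the source-level readOnly flag are independent.
# Names (pv-demo, gluster-cluster, demo-vol) are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-demo
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany          # only constrains which claims can bind to this PV
  glusterfs:
    endpoints: gluster-cluster
    path: demo-vol
    readOnly: true           # this is what actually makes the mount read-only
```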
@pkutishch what does your pod definition look like? You can specify …
@wongma7 yeah, I think we can indeed set ReadOnly when ROX is requested.
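A minimal sketch of that idea, assuming the truncated suggestion above refers to the readOnly flag on the persistentVolumeClaim volume source (pod and claim names are placeholders, not from the original report):

```yaml
# Workaround sketch: request a read-only mount explicitly in the pod,
# since the PVC's accessModes alone are not applied at mount time.
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod             # placeholder name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data
          readOnly: true     # mark the container mount read-only as well
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: demo-claim   # placeholder PVC name
        readOnly: true          # ask for a read-only mount of the bound PV
```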
There are 2 separate issues here.

a. AccessMode is not applied at mount time, so if the user specifies ROX, volumes are still mounted as RW. As @wongma7 said, this is because AccessMode is only used during the binding phase and forgotten later on. This problem is relatively easy to solve, and I think we can set ReadOnly on volumes if the ROX access mode is specified in the PVC.

b. The other problem, of course, is that k8s doesn't enforce an accessMode check during pod creation/scheduling, i.e. a RWO PVC can be used from multiple pods, and that inevitably fails (unless, with some luck, the 2 pods land on the same node). We have merged PR #45346, which prevents the attach operation from happening if the volume is attached elsewhere, but that PR only applies to Cinder and Azure. The problem we are seeing is that in many cases users aren't aware that they aren't supposed to use the same PVC in 2 pods, or they are scaling a deployment that uses a PVC to 2 or more replicas. This state usually leads to a huge number of invalid CloudProvider API calls, which I think is undesirable. We have 2 choices to fix problem (b):
@saad-ali @kubernetes/sig-storage-pr-reviews
@gnufied, if we detect the multi-pod usage from the scheduler side, is there any way for the scheduler to know whether or not a PVC is already bound to a pod?
@xingzhou as per my understanding, the way is to check the pod spec, where the bound PVC is defined in the volumes block.
Yes, that's my concern. The PVC status does not include the bound-pod info, so it might not be easy to stop the pod at the scheduling phase at present. If we check from the pod spec, I don't know whether there is any way to "lock" the resource efficiently, since a cluster can run multiple schedulers simultaneously.
@xingzhou as far as I know, k8s stores its information in etcd. I'm not sure whether it stores information about storage, but you could set a lock flag in etcd and check for it when assigning a PVC to a pod in RWO mode, and not set the lock flag for RWX and ROX. I'm not sure that this approach is suitable, though.
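To make the multi-pod RWO scenario from problem (b) concrete, here is a minimal hypothetical Deployment sketch (names are placeholders, and apps/v1beta1 is chosen to match the 1.6-era cluster in this thread; apps/v1 works on current clusters): two replicas share one ReadWriteOnce claim, so the second attach fails unless both pods happen to land on the same node.

```yaml
# Anti-pattern sketch: two replicas mounting the same ReadWriteOnce PVC.
# The attach for the second pod fails (or loops with invalid cloud-provider
# API calls) unless both pods are scheduled onto the same node.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: rwo-shared            # placeholder name
spec:
  replicas: 2                 # both replicas reference the same RWO claim
  template:
    metadata:
      labels:
        app: rwo-shared
    spec:
      containers:
        - name: app
          image: busybox
          command: ["sleep", "3600"]
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: rwo-claim   # placeholder: a ReadWriteOnce PVC
```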
Kubernetes version (use kubectl version):
Client Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.2", GitCommit:"477efc3cbe6a7effca06bd1452fa356e2201e1ee", GitTreeState:"clean", BuildDate:"2017-04-19T20:33:11Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.3", GitCommit:"0480917b552be33e2dba47386e51decb1a211df6", GitTreeState:"clean", BuildDate:"2017-05-10T15:38:08Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}
Environment:
NAME="Container Linux by CoreOS"
ID=coreos
VERSION=1381.1.0
VERSION_ID=1381.1.0
BUILD_ID=2017-04-25-2232
PRETTY_NAME="Container Linux by CoreOS 1381.1.0 (Ladybug)"
ANSI_COLOR="38;5;75"
HOME_URL="https://coreos.com/"
BUG_REPORT_URL="https://issues.coreos.com"
COREOS_BOARD="amd64-usr"
Linux server37 4.10.12-coreos #1 SMP Tue Apr 25 22:08:35 UTC 2017 x86_64 Intel(R) Xeon(R) CPU E5-2650 v2 @ 2.60GHz GenuineIntel GNU/Linux
hyperkube
Access modes for PVC resources do not work as expected. For example, when using dynamic provisioning with ROX mode and the glusterFS provisioner, the volume still allows write operations; moreover, it can be used as RWX.
I expect accessModes to work properly: at the very least, if I create an RO volume it should be mounted as RO.
To reproduce: create a PVC with accessModes set to ReadOnlyMany, then assign it to a pod and try to write something, for example touch <path/to/mount>/test; you will be able to write a file on the supposedly "ReadOnly" filesystem.
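A sketch of that reproduction, assuming a glusterfs-backed StorageClass named glusterfs exists in the cluster (the class name and object names are assumptions, not taken from the original report):

```yaml
# Reproduction sketch: a dynamically provisioned ReadOnlyMany claim that,
# per this report, still ends up writable inside the pod.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rox-claim
spec:
  storageClassName: glusterfs   # assumption: points at the glusterfs provisioner
  accessModes:
    - ReadOnlyMany              # expected to result in a read-only mount
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: rox-test
spec:
  containers:
    - name: writer
      image: busybox
      # The touch succeeds even though the claim asked for ReadOnlyMany,
      # which is the behavior being reported as a bug.
      command: ["touch", "/mnt/test"]
      volumeMounts:
        - name: data
          mountPath: /mnt
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rox-claim
```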