The ScaleIO volume plugin is not applying fsGroup value properly #50794
Comments
@vladimirvivien
/sig storage
…f-#48999-upstream-release-1.7 Automatic merge from submit-queue

ScaleIO Volume Plugin - volume attribute updates

This commit introduces the following updates and fixes:
- Enables ScaleIO volume multi-mapping based on accessMode
- No longer uses "default" as the default value for storage pool and protection domain
- Validates capacity when capacity is zero
- Better naming for PVs and volumes
- Makes the mount read-only when accessModes contains ROM

**Special notes for your reviewer**:
- Related bug: #50794
- This is a cherry-pick of PR #48999

Fixes: #50794

**Release note**:
```release-note
ScaleIO: fixed enforcement of fsGroup, enabled multiple-instance volume mapping, adjusted alignment of PVC, PV, and volume names for dynamic provisioning
```
This was fixed in 1.8 and backported to 1.7.
Is this a BUG REPORT or FEATURE REQUEST?:
/kind bug
What happened:
The ScaleIO volume plugin is not applying the `fsGroup` value properly.

What you expected to happen:
When an `fsGroup` value is specified in the pod spec, the mounted volume should be owned by that group; instead, the mount does not reflect the expected `fsGroup` value.

How to reproduce it (as minimally and precisely as possible):
Set an `fsGroup` value in a pod spec, for example:
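A minimal illustrative spec of the kind described (not the reporter's original manifest; names such as `fsgroup-test`, `sio-secret`, the gateway URL, and the pool/domain/volume values are hypothetical placeholders):

```yaml
# Sketch only: a pod that sets securityContext.fsGroup and mounts a
# pre-provisioned ScaleIO volume. All names and endpoints are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-test
spec:
  securityContext:
    fsGroup: 1234            # expected group ownership of the mounted volume
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: siovol
      mountPath: /data
  volumes:
  - name: siovol
    scaleIO:
      gateway: https://scaleio-gateway:443/api   # placeholder gateway endpoint
      system: scaleio-system                     # placeholder system name
      protectionDomain: pd01                     # placeholder protection domain
      storagePool: sp01                          # placeholder storage pool
      volumeName: vol-01                         # placeholder ScaleIO volume
      secretRef:
        name: sio-secret                         # placeholder credentials secret
      fsType: xfs
```

With a spec like this, group ownership can be checked from inside the container (for example with `ls -ln /data`); if `fsGroup` were applied, the mount would show group 1234.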
After the pod is deployed, the mounted directory for the volume does not reflect the `fsGroup` value.
Environment:
- Kubernetes version (use `kubectl version`): 1.7.x and lower