remove unneeded permissions for volume controllers #125995
Conversation
This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/cc @xing-yang @jsafrane @msau42 for storage review
/sig storage
@@ -286,14 +282,7 @@ func buildControllerRoles() ([]rbacv1.ClusterRole, []rbacv1.ClusterRoleBinding)
rbacv1helpers.NewRule("get", "list", "watch", "update").Groups(legacyGroup).Resources("persistentvolumeclaims").RuleOrDie(),
rbacv1helpers.NewRule("update").Groups(legacyGroup).Resources("persistentvolumeclaims/status").RuleOrDie(),
rbacv1helpers.NewRule("list", "watch", "get", "create", "delete").Groups(legacyGroup).Resources("pods").RuleOrDie(),
// glusterfs
rbacv1helpers.NewRule("get", "list", "watch").Groups(storageGroup).Resources("storageclasses").RuleOrDie(),
The PV controller requires storageclass permissions, so keep it.
do you have a link to this? knowing how you audited what was still in use would help build confidence that the other things being removed from this role are not in use by the controller in some other way
- PV controller: https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/persistentvolume/pv_controller_base.go#L63
  - informers: pv, pvc, sc, pod, node (list, watch)
    - It is so strange that the `persistent-volume-binder` clusterrole doesn't contain the watch verb for the node resource, and the list verb is added in the openstack section. @jsafrane @msau42 @liggitt should I add the watch and list verbs for the node resource to the `persistent-volume-binder` clusterrole?
    - And the sc permission is added in the glusterfs section.
  - kubeClient: pv (get, update), pvc (get, update)
  - others: event
  - volume plugins: `func ProbeControllerVolumePlugins(logger klog.Logger, config persistentvolumeconfig.VolumeConfiguration) ([]volume.VolumePlugin, error)`
    - hostpath:
      - recycler: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/hostpath/host_path.go#L156 (kube client)
      - deleter: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/hostpath/host_path.go#L354 (no kube client used)
      - provisioner: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/hostpath/host_path.go#L293 (no kube client used)
    - nfs:
      - recycler: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/nfs/nfs.go#L156 (kube client)
    - portworx:
      - deleter: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/portworx/portworx.go#L383 (uses the kube client to get a service when it deletes the volume: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/portworx/portworx_util.go#L289)
      - provisioner: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/portworx/portworx.go#L394 (uses the kube client to get a service when it creates the volume: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/portworx/portworx_util.go#L289)
    - These volume plugins don't access the `endpoints` and `nodes` resources. I can't find any code where they use the kube client to access those resources.
- expand controller: https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/volume/expand/expand_controller.go#L100
  - informers: pvc (list, watch)
  - kubeClient: pv (get), e.g. `pv, err := expc.kubeClient.CoreV1().PersistentVolumes().Get(ctx, volumeName, metav1.GetOptions{})` (see the client-go sketch right after this list)
  - others: event
  - volume plugins: https://github.com/kubernetes/kubernetes/blob/master/cmd/kube-controller-manager/app/plugins.go#L68, used to expand the volume.
    - portworx:
      - expander: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/portworx/portworx.go#L204 (uses the kube client to get a service when it expands the volume: https://github.com/kubernetes/kubernetes/blob/master/pkg/volume/portworx/portworx_util.go#L289)
      - And the get verb for service is added in the glusterfs section.
    - fc: it doesn't implement the expander interface.
    - These volume plugins don't access the `endpoints`, `storageclasses` and `secrets` resources. I can't find any code where they use the kube client to access those resources.
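As referenced in the expand-controller item above, here is a minimal client-go sketch of that single PV read. The config/client setup and the volumeName value are illustrative placeholders; only the Get call mirrors expand_controller.go:

```go
// Sketch of the expand controller's PV lookup quoted above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Assumes in-cluster credentials, as kube-controller-manager would have.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Placeholder name; the controller takes it from the PVC's spec.volumeName.
	volumeName := "pvc-0123"

	// Mirrors expc.kubeClient.CoreV1().PersistentVolumes().Get(...) in
	// expand_controller.go; this is what the "get" verb on persistentvolumes covers.
	pv, err := client.CoreV1().PersistentVolumes().Get(context.TODO(), volumeName, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("resolved PV:", pv.Name, "storageClass:", pv.Spec.StorageClassName)
}
```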
/hold
I'd like storage folks to do the first pass review here since they have domain knowledge of the controller loops
I went through the PR and I can see that it removes a bunch of unneeded permissions of removed volume plugins.
About the selected line here (read storageclasses), the PV controller needs to find a CSI driver / provisioner responsible for provisioning of a PVC. It is not a feature related to Gluster; the comment is wrong.
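To make that concrete, here is a simplified sketch of the lookup. The real controller resolves the class through an informer/lister; a direct client Get is used here only for brevity. The Provisioner field on the StorageClass is what tells the PV controller which in-tree plugin or external CSI driver should provision the claim:

```go
// Simplified sketch: resolve the StorageClass named on a PVC and read which
// provisioner (in-tree plugin or CSI driver) is responsible for it.
package pvsketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// provisionerForClaim returns the provisioner responsible for dynamically
// provisioning the given claim. Reading the StorageClass is why the
// persistent-volume-binder role keeps read access to storageclasses,
// independent of any glusterfs code.
func provisionerForClaim(ctx context.Context, client kubernetes.Interface, pvc *corev1.PersistentVolumeClaim) (string, error) {
	if pvc.Spec.StorageClassName == nil || *pvc.Spec.StorageClassName == "" {
		return "", fmt.Errorf("claim %s/%s has no storage class", pvc.Namespace, pvc.Name)
	}
	sc, err := client.StorageV1().StorageClasses().Get(ctx, *pvc.Spec.StorageClassName, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	// For CSI volumes this is the CSI driver name, e.g. "ebs.csi.aws.com".
	return sc.Provisioner, nil
}
```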
/hold cancel
@jsafrane do we have test coverage that would notice if we dropped a permission here that was actually still required?
/test pull-kubernetes-e2e-gce
/lgtm
LGTM label has been added. Git tree hash: 238fa31f813576e9fbc274fa953bb19355f04bce
/assign @mikedanese
/cc @liggitt Can you take a look?
…inder and system:controller:expand-controller clusterroles (force-pushed from b788e76 to ae9e381)
the AD controller supports the … The flexVolume plugin implements the …
Is it a bug? The flexVolume plugin has been marked as …
#67851 implemented the Flexvolume resize and said that the flex plugins are not installed on the controller. The flex plugin implements the …
I'm wondering, if the flex plugins are not installed on the master, why does the ADC need to probe volume plugins dynamically? If we don't add the flex plugins to the expand controller, once the portworx plugin completes CSI migration, can we deprecate the expand controller and remove it in the future? @aniket-s-kulkarni @gnufied
/assign
@carlory can you file a separate issue about deprecation and removal of expand_controller from in-tree code? The flex volume plugin IMO does support allowing expansion from the control plane too. It is just that most deployments typically don't install the flex volume plugin on the control plane for various reasons, but if they do and the plugin implements …
Node-only deployment is now the only way to deploy flex. The flex plugin also optionally supports …
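To illustrate the point about control-plane expansion, here is a hypothetical sketch; the interfaces below are illustrative stand-ins for pkg/volume's plugin interfaces, not their exact definitions. The expand controller can only act on a plugin that is actually installed where the controller runs and that implements the expandable interface:

```go
// Hypothetical sketch of controller-side expansion gating.
package flexsketch

// VolumePlugin is an illustrative stand-in for a probed volume plugin.
type VolumePlugin interface {
	GetPluginName() string
}

// ExpandableVolumePlugin is an illustrative stand-in for a plugin that can
// grow a volume from the control plane.
type ExpandableVolumePlugin interface {
	VolumePlugin
	RequiresFSResize() bool
}

// canExpandFromControlPlane reports whether the expand controller could use
// this plugin: the driver must be installed on the control-plane host (true
// for compiled-in plugins, usually false for flex deployments) and the plugin
// must implement the expandable interface.
func canExpandFromControlPlane(p VolumePlugin, installedOnControlPlane bool) bool {
	if !installedOnControlPlane {
		return false
	}
	_, ok := p.(ExpandableVolumePlugin)
	return ok
}
```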
/lgtm
LGTM label has been added. Git tree hash: d2b293971a2e546f1647150118f60c8a48a091ad
@@ -188,11 +188,10 @@ func buildControllerRoles() ([]rbacv1.ClusterRole, []rbacv1.ClusterRoleBinding)
rbacv1helpers.NewRule("get", "list", "watch", "update", "patch").Groups(legacyGroup).Resources("persistentvolumes").RuleOrDie(),
rbacv1helpers.NewRule("update", "patch").Groups(legacyGroup).Resources("persistentvolumeclaims/status").RuleOrDie(),
rbacv1helpers.NewRule("get", "list", "watch").Groups(legacyGroup).Resources("persistentvolumeclaims").RuleOrDie(),
// glusterfs
is there an out of tree CSI gluster controller now? does it deploy its own role/rolebindings to grant required permissions or reference this default role? do we need to open an issue to update the manifest there to add these removed permissions?
About the selected line here (read storageclasses), the PV controller needs to find a CSI driver / provisioner responsible for provisioning of a PVC. It is not a feature related to Gluster, the comment is wrong.
@jsafrane does this also apply to the expand controller? (does it need storageclass read permission independent of glusterfs?)
I am not sure if there is a supported glusterfs CSI driver currently. The driver Google shows is deprecated: https://github.com/gluster/gluster-csi-driver
But yes, the permissions associated with the expand-controller that were given here for the in-tree glusterfs driver shouldn't matter; any external CSI driver should run with its own serviceaccount and permissions.
(does it need storageclass read permission independent of glusterfs?)
It shouldn't. The only other place we read storageclasses for expansion purposes (apart from external-resizer, which runs with its own permissions) is for admission: https://github.com/kubernetes/kubernetes/blob/master/plugin/pkg/admission/storage/persistentvolume/resize/admission.go#L62
But that code shouldn't run with the permissions specified here, so we should be fine. I triggered the alpha test suite just in case, because it has more expansion-related tests.
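For reference, here is a simplified sketch of that admission-side check; it runs inside kube-apiserver with the API server's own access, not with this controller role, and the real plugin uses a StorageClass lister rather than taking the class as an argument:

```go
// Simplified sketch of the PVC-resize admission check linked above: a resize
// is only admitted when the claim has a StorageClass and that class has
// allowVolumeExpansion set to true.
package resizesketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	storagev1 "k8s.io/api/storage/v1"
)

func checkExpansionAllowed(pvc *corev1.PersistentVolumeClaim, sc *storagev1.StorageClass) error {
	if pvc.Spec.StorageClassName == nil || *pvc.Spec.StorageClassName == "" {
		return fmt.Errorf("claim %s/%s has no storage class; only dynamically provisioned claims can be resized", pvc.Namespace, pvc.Name)
	}
	if sc == nil || sc.AllowVolumeExpansion == nil || !*sc.AllowVolumeExpansion {
		return fmt.Errorf("storage class %q does not allow volume expansion", *pvc.Spec.StorageClassName)
	}
	return nil
}
```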
/test gce-cos-master-alpha-features
@gnufied: The specified target(s) for /test were not found.
The following commands are available to trigger optional jobs: …
Use …
In response to this: …
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test pull-kubernetes-e2e-gce-cos-alpha-features
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: carlory, jsafrane, liggitt
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files: …
Approvers can indicate their approval by writing /approve in a comment.
What type of PR is this?
/kind bug
What this PR does / why we need it:
The glusterfs and OpenStack volume plugins have been removed from the in-tree codebase, so the controller roles no longer need the permissions that existed only for them.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: