
core: define empty securityContext for pods to fix CIS 5.7.3 #14823

Merged
merged 1 commit into rook:master on Oct 17, 2024

Conversation

prazumovsky
Contributor

Is this a bug report or feature request?

  • Feature Request

What should the feature do:

Rook deployments should be CIS compliant.
Currently, rook/rook fails to pass some Kubernetes CIS tests/recommendations.

What is use case behind this feature:

5.7.3 Apply Security Context to Your Pods and Containers

When designing your containers and pods, make sure
that you configure the security context for your pods,
containers, and volumes.

Almost all Rook deployments / daemonsets / jobs have no securityContext defined at the pod level (spec.template.spec). These objects could satisfy the CIS 5.7.3 rule even with an empty securityContext explicitly defined at the pod level, as sketched below.
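In Go terms, the proposed pattern is just a non-nil, empty PodSecurityContext on the pod template. A minimal sketch (the function name is illustrative, not Rook's actual helper):

```go
package main

import v1 "k8s.io/api/core/v1"

// emptyPodSecurityContextTemplate sketches the proposal: an explicitly empty
// (non-nil) pod-level securityContext, which serializes as
// `securityContext: {}` under spec.template.spec, so CIS 5.7.3 scanners see
// the field as defined without changing runtime behavior.
func emptyPodSecurityContextTemplate() v1.PodTemplateSpec {
	return v1.PodTemplateSpec{
		Spec: v1.PodSpec{
			SecurityContext: &v1.PodSecurityContext{}, // empty but defined
		},
	}
}
```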

@prazumovsky
Contributor Author

prazumovsky commented Oct 9, 2024

If possible, please add the "backport-release-1.14" label, since we are using the rook 1.14 version.

@prazumovsky prazumovsky changed the title Define empty securityContext for pods to fix CIS 5.7.3 core: define empty securityContext for pods to fix CIS 5.7.3 Oct 9, 2024
Resolves the Pods part of CIS benchmark rule 5.7.3. SecurityContext
should be explicitly defined at the pod level of the Pod spec.
Specifying an empty securityContext is sufficient to satisfy
the CIS 5.7.3 rule.

5.7.3 Apply Security Context to Your Pods and Containers

When designing your containers and pods, make sure
that you configure the security context for your pods,
containers, and volumes.

Signed-off-by: Peter Razumovsky <prazumovsky@mirantis.com>
@@ -162,6 +162,7 @@ func (c *ClusterController) cleanUpJobTemplateSpec(cluster *cephv1.CephCluster,
Volumes: volumes,
RestartPolicy: v1.RestartPolicyOnFailure,
PriorityClassName: cephv1.GetCleanupPriorityClassName(cluster.Spec.PriorityClassNames),
SecurityContext: &v1.PodSecurityContext{},
Member


If we set the pod SecurityContext, we need to test thoroughly that it does not break the container security context that is already set. While I understand that when both are set the container security context overrides the pod security context, we must confirm that this does not cause issues with the volumes that are mounted. IIRC the pod security context may only be used for volumes, which could break the volumes. See where we are setting the container security context with the controller.PodSecurityContext() method. The best way to test this will be on an openshift cluster, where the security contexts are necessary. Do you have an openshift cluster for testing?
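For reference, a minimal sketch of the precedence being described (illustrative names and values, not Rook's code):

```go
package main

import v1 "k8s.io/api/core/v1"

// Pod-level securityContext fields act as defaults for all containers,
// fsGroup applies to supported volumes at mount time, and a container-level
// securityContext overrides overlapping pod-level fields for that container
// only. All values here are illustrative, not Rook defaults.
func examplePodSpec() v1.PodSpec {
	podUser := int64(1000)
	ctrUser := int64(2000)
	fsGroup := int64(3000)
	return v1.PodSpec{
		SecurityContext: &v1.PodSecurityContext{
			RunAsUser: &podUser, // default UID for every container in the pod
			FSGroup:   &fsGroup, // changes group ownership of supported volumes
		},
		Containers: []v1.Container{{
			Name:  "example",
			Image: "example:latest",
			SecurityContext: &v1.SecurityContext{
				RunAsUser: &ctrUser, // overrides the pod-level RunAsUser for this container only
			},
		}},
	}
}
```

An empty &v1.PodSecurityContext{}, as in this PR, sets none of these fields, so there is nothing for container contexts to override and no fsGroup to apply to volumes; the concern is whether any code path treats a non-nil-but-empty struct differently from nil.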

Contributor Author


Do you have an openshift cluster for testing?

I have no openshift cluster for testing. We tested this on our side on a metal3 bare-metal cluster (Cluster API provider plus bare-metal-operator) with the rook-ceph chart installed and found nothing to be worried about. However, I understand that this is not proof (and we cannot share the results of the testing), and I understand your concern, so asking back: how can I get an openshift cluster for free to test the changes? Are there any upstream jobs to test such changes?

Member


We don't have any upstream jobs to test openshift, unfortunately. I can potentially test it on an openshift cluster; it will just take me a few days to get to it. I'll plan on it next week.

Contributor Author

prazumovsky commented Oct 10, 2024


I can potentially test it on an openshift cluster, it will just take me a few days to get to it. I'll plan on it next week.

That would be brilliant, many thanks! I'll wait for the results then.

Member


The testing looks good in my openshift cluster; I could not see any side effects of setting the pod's securityContext to empty like this. It also seems low-risk enough to go ahead and backport to 1.15.

Contributor Author


great news, thanks for merging!


@travisn travisn merged commit 6428693 into rook:master Oct 17, 2024
55 of 56 checks passed
mergify bot added a commit that referenced this pull request Oct 17, 2024
core: define empty securityContext for pods to fix CIS 5.7.3 (backport #14823)
travisn added a commit that referenced this pull request Oct 21, 2024
core: define empty securityContext for pods to fix CIS 5.7.3 (backport #14823)