fixed conflict resolution behavior while apply podpresets #47864
Conversation
Hi @droot. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with @k8s-bot ok to test. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
hack/local-up-cluster.sh
Outdated
@@ -27,6 +27,8 @@ ALLOW_SECURITY_CONTEXT=${ALLOW_SECURITY_CONTEXT:-""}
 PSP_ADMISSION=${PSP_ADMISSION:-""}
 NODE_ADMISSION=${NODE_ADMISSION:-""}
 RUNTIME_CONFIG=${RUNTIME_CONFIG:-""}
+# RUNTIME_CONFIG=${RUNTIME_CONFIG:-"settings.k8s.io/v1alpha1=true,settings.k8s.io/v1alpha1/podpreset=true"}
+RUNTIME_CONFIG=${RUNTIME_CONFIG:-"api/all=true"}
What's your intended goal for "RUNTIME_CONFIG"? You're attempting to declare it on both lines 29 and 31.
This file got checked in by mistake. I have cleaned it up and enabled "api/all" by default. PTAL.
@droot I don't see any changes to this file.
/assign @jpeeler
@droot: GitHub didn't allow me to assign the following users: jpeeler. Note that only kubernetes members can be assigned. In response to this: /assign @jpeeler
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from a8ce205 to 78f7974 (compare)
hack/local-up-cluster.sh
Outdated
@@ -26,7 +26,7 @@ ALLOW_PRIVILEGED=${ALLOW_PRIVILEGED:-""}
 ALLOW_SECURITY_CONTEXT=${ALLOW_SECURITY_CONTEXT:-""}
 PSP_ADMISSION=${PSP_ADMISSION:-""}
 NODE_ADMISSION=${NODE_ADMISSION:-""}
-RUNTIME_CONFIG=${RUNTIME_CONFIG:-""}
+RUNTIME_CONFIG=${RUNTIME_CONFIG:-"api/all=true"}
Are these debugging changes?
Took it out; it is not needed. I thought it was required to enable the settings.k8s.io/v1alpha1 API endpoint, but I was wrong. PTAL.
Force-pushed from 78f7974 to 51cd10a (compare)
/unassign
hack/local-up-cluster.sh
Outdated
@@ -215,7 +215,7 @@ RKT_PATH=${RKT_PATH:-""}
 RKT_STAGE1_IMAGE=${RKT_STAGE1_IMAGE:-""}
 CHAOS_CHANCE=${CHAOS_CHANCE:-0.0}
 CPU_CFS_QUOTA=${CPU_CFS_QUOTA:-true}
-ENABLE_HOSTPATH_PROVISIONER=${ENABLE_HOSTPATH_PROVISIONER:-"false"}
+ENABLE_HOSTPATH_PROVISIONER=${ENABLE_HOSTPATH_PROVISIONER:-"true"}
This is also a debugging change, right? Definitely out of scope for this PR.
Good catch. Not scoped for this PR, took it out.
pod.ObjectMeta.Annotations[fmt.Sprintf("%s/podpreset-%s", annotationPrefix, pip.GetName())] = pip.GetResourceVersion()

func safeToApplyPodPresetsOnContainer(ctr *api.Container, podPresets []*settings.PodPreset) error {
	var errs []error
	if _, err := MergeEnv(ctr.Env, podPresets); err != nil {
Seems like MergeEnv is probably unintentionally exported; we should make it private unless it's used elsewhere.
Good point. Made it private.
// if there were no original envvar just return the pip envvar
if original == nil {
	return pip.Spec.Env, nil

func MergeEnv(envVars []api.EnvVar, podPresets []*settings.PodPreset) ([]api.EnvVar, error) {
Hrm, maybe it is intentional?
Don't think so. Made it private.
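To make the thread concrete, here is a minimal sketch of the merge-with-conflict-detection pattern being discussed, using simplified stand-in types instead of the real api.EnvVar and settings.PodPreset; the function in the PR differs in detail, and the unexported name reflects the outcome of this thread.

```go
package podpreset

import (
	"fmt"
	"reflect"
)

// EnvVar and PodPreset are simplified stand-ins for api.EnvVar and
// settings.PodPreset, for illustration only.
type EnvVar struct {
	Name  string
	Value string
}

type PodPreset struct {
	Name string
	Env  []EnvVar
}

// mergeEnv merges the presets' env vars into the original list. A variable
// defined on both sides must be identical, otherwise it is a conflict.
func mergeEnv(original []EnvVar, presets []*PodPreset) ([]EnvVar, error) {
	// Index the original vars by name for O(1) lookups.
	orig := make(map[string]EnvVar, len(original))
	for _, v := range original {
		orig[v.Name] = v
	}
	merged := append([]EnvVar{}, original...) // copy; never mutate the input
	for _, pp := range presets {
		for _, v := range pp.Env {
			found, ok := orig[v.Name]
			if !ok {
				// Not set on the container yet: safe to add.
				merged = append(merged, v)
				orig[v.Name] = v
				continue
			}
			// Set on both sides: require identical definitions.
			if !reflect.DeepEqual(found, v) {
				return nil, fmt.Errorf("merging env for podpreset %s: conflict on %s", pp.Name, v.Name)
			}
		}
	}
	return merged, nil
}
```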
var mergedEnvFrom []api.EnvFromSource

mergedEnvFrom = append(mergedEnvFrom, envSources...)
Is there a reason you chose a different form of allocate-and-copy here vs. mergeEnv?
I followed the original code and didn't change it. I am trying to dig more into why EnvFrom is treated differently. A quick look suggests it should be handled in a similar way to Env, but I need to confirm that.
EDIT: Took a look at EnvFrom. EnvFrom is used by env vars to source their values from one of several sources (ConfigMap, Secret, or a Pod or Container resource, i.e. the downward API). It is a list of references to those objects, so there is no question of a conflict here; the lists just need to be merged. We can end up with duplicates in the list referring to the same source, but that is harmless.
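Given that explanation, the EnvFrom merge needs no conflict path at all. A sketch in the same illustrative package as the example above (EnvFromSource is a stand-in, not the real api.EnvFromSource):

```go
// EnvFromSource is a stand-in for a reference to a source object (for
// example a ConfigMap or Secret name) rather than a key/value pair.
type EnvFromSource struct {
	SourceName string
}

// mergeEnvFrom concatenates the presets' source references onto the
// original list. There is no per-key value to conflict on, so unlike
// mergeEnv there is no error path; duplicate references to the same
// source are redundant but harmless.
func mergeEnvFrom(original, fromPresets []EnvFromSource) []EnvFromSource {
	merged := append([]EnvFromSource{}, original...) // copy; don't mutate the input
	return append(merged, fromPresets...)
}
```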
// make sure they are identical or throw an error
if !reflect.DeepEqual(found, v) {
	errs = append(errs, fmt.Errorf("merging volume mounts for %s has a conflict on %s: \n%#v\ndoes not match\n%#v\n in container", pp.GetName(), v.Name, v, found))
It might be helpful if this message identified the name after 'for' as a podpreset.
I didn't understand that. Can you explain?
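One plausible reading of the suggestion (my assumption; the thread never resolves it) is to label the first %s explicitly as a podpreset, so the reader of the error knows what kind of object the name refers to:

```go
// Hypothetical rewording of the existing error, with "podpreset" added.
errs = append(errs, fmt.Errorf("merging volume mounts for podpreset %s has a conflict on %s:\n%#v\ndoes not match\n%#v\nin container", pp.GetName(), v.Name, v, found))
```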
	continue
}

// make sure they are identical or throw an error
I'm wondering whether identical but duplicated mounts should be an error. Is there a use case?
Hmm, good point. Have to think more about it. Made a note of it.
for _, v := range original {
	orig[v.MountPath] = v
}
for _, pp := range podPresets {
It seems like we can do all of this in a single nested loop, since this is orthogonal to the above loop but covers the same space
Good suggestion, refactored it.
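A sketch of the refactored shape under the same stand-in types as above (VolumeMount is again simplified): index the original mounts by path once, then do conflict detection and merging in a single pass instead of two loops over the same space.

```go
// VolumeMount is a simplified stand-in for api.VolumeMount.
type VolumeMount struct {
	Name      string
	MountPath string
	ReadOnly  bool
}

// mergeVolumeMounts merges preset mounts into the original list in one
// nested pass: each preset mount is either new (append it) or shares a
// mount path with an existing mount (require it to be identical).
func mergeVolumeMounts(original, fromPresets []VolumeMount) ([]VolumeMount, error) {
	orig := make(map[string]VolumeMount, len(original))
	for _, v := range original {
		orig[v.MountPath] = v
	}
	merged := append([]VolumeMount{}, original...)
	for _, v := range fromPresets {
		found, ok := orig[v.MountPath]
		if !ok {
			// New mount path: safe to add.
			merged = append(merged, v)
			orig[v.MountPath] = v
			continue
		}
		// Same path on both sides: make sure they are identical or error out.
		if !reflect.DeepEqual(found, v) {
			return nil, fmt.Errorf("conflict on mount path %s: %#v does not match %#v", v.MountPath, v, found)
		}
	}
	return merged, nil
}
```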
	continue
}

// make sure they are identical or throw an error
Also not sure about duplicate but equal volumes.
Yeah, thinking more about it.
	}
}

func applyPodPresetsOnContainer(ctr *api.Container, podPresets []*settings.PodPreset) {
I would add a comment explaining that it should be safe to ignore these errors.
Done.
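The resulting shape, sketched with the stand-ins from the examples above (Container is a simplified stand-in for api.Container); the comment is exactly the kind this thread asked for:

```go
// Container is a simplified stand-in for api.Container.
type Container struct {
	Env []EnvVar
}

// applyPodPresetsOnContainer injects preset data into a container. The
// merge error is deliberately ignored: by the time this runs, a separate
// safe-to-apply pass has already verified the merge is conflict-free,
// so it cannot fail here.
func applyPodPresetsOnContainer(ctr *Container, presets []*PodPreset) {
	env, _ := mergeEnv(ctr.Env, presets)
	ctr.Env = env
}
```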
Force-pushed from 51cd10a to 09bfe6f (compare)
Thanks @pmorie for the review. I have addressed all comments except the one related to handling the merge of EnvFrom. Will be investigating that more today. PTAL.
hack/local-up-cluster.sh
Outdated
@@ -403,7 +403,7 @@ function start_apiserver {
     fi

     # Admission Controllers to invoke prior to persisting objects in cluster
-    ADMISSION_CONTROL=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount${security_admission},ResourceQuota,DefaultStorageClass,DefaultTolerationSeconds
+    ADMISSION_CONTROL=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount${security_admission},ResourceQuota,DefaultStorageClass,DefaultTolerationSeconds,PodPreset
@pmorie Changed the order here. Moved PodPreset before the Pod Security Policy plugin because we want PSP to evaluate the modified PodSpec instead of the original PodSpec.
@pmorie addressed all the comments. Also updated the comment about EnvFrom. PTAL. (Also, I am not able to see the review comment threads, which is very strange.)
@k8s-bot ok to test
/release-note
Force-pushed from 09bfe6f to 7e7c3e9 (compare)
/test pull-kubernetes-verify
/test pull-kubernetes-node-e2e
/assign @shiywang
Just a couple nits; I also think godoc for each method would be fantastic to have.
hack/local-up-cluster.sh
Outdated
@@ -406,8 +406,7 @@ function start_apiserver {
     fi

     # Admission Controllers to invoke prior to persisting objects in cluster
-    ADMISSION_CONTROL=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount${security_admission},ResourceQuota,DefaultStorageClass,DefaultTolerationSeconds
-
+    ADMISSION_CONTROL=Initializers,NamespaceLifecycle,LimitRanger,PodPreset,ServiceAccount${security_admission},ResourceQuota,DefaultStorageClass,DefaultTolerationSeconds
If we were to do it again, I would say to make this change in another PR.
Makes sense. Took it out. Will make a separate PR for this.
if err != nil {
	// add event to pod
	c.addEvent(pod, pip, err.Error())

glog.Infof("applied podpresets successfully on Pod: %s ", pod.GetGenerateName())
Is this always going to yield the right name?
Changed it to pod.Name. Also improved the logging a bit by making it more informative (added the applied podpreset names as well).
pod.Name (i.e. pod.GetName()) is empty here, so I kept pod.GetGenerateName(). I guess each pod instance will get a generated name.
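For background (my gloss, consistent with the observation above): a pod created through a generateName template has an empty metadata.name at admission time, because the API server only assigns the final name later, so the generateName prefix is the only identifier available. A hypothetical helper for picking a log-friendly identifier:

```go
// podLogName returns the pod's name when it is set, falling back to the
// generateName prefix, which is all that exists at admission time for
// pods created from a generateName template.
func podLogName(name, generateName string) string {
	if name != "" {
		return name
	}
	return generateName
}
```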
Force-pushed from 7e7c3e9 to 91f804d (compare)
@pmorie addressed the comments. Please take a look.
@k8s-bot test this
/test pull-kubernetes-kubemark-e2e-gce
/lgtm
/approve
Removing label
/approve
Force-pushed from 91f804d to 4d5b96f (compare)
Rebased on master, so this will need a fresh LGTM.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: derekwaynecarr, droot, pmorie, pwittrock
Associated issue: 47861
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these OWNERS files:
You can indicate your approval by writing /approve in a comment when ready.
Automatic merge from submit-queue (batch tested with PRs 49444, 47864, 48584, 49395, 49118)
What this PR does / why we need it:
This fixes the PodPreset application behavior when conflicts occur while merging the Pod's information with a PodPreset's. More details are in issue #47861.
Which issue this PR fixes (optional, in fixes #<issue number> format; will close that issue when the PR gets merged): fixes #47861
Special notes for your reviewer:
We are splitting the PodPreset application logic into two phases. In the first phase, we try to detect conflicts in the information merge without modifying the Pod at all. If a conflict occurs, we reject the PodPreset injection. If there are no conflicts, we apply the PodPresets and merge the information. A sketch of this two-phase shape follows.
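A minimal sketch of that two-phase shape, reusing the illustrative mergeEnv, Container, and PodPreset stand-ins from the review threads above (the names mirror the PR's functions, but the bodies are simplified):

```go
// safeToApplyPodPresetsOnContainer is the phase-one check: run the merge
// as a dry run and report any conflict without mutating the container.
func safeToApplyPodPresetsOnContainer(ctr *Container, presets []*PodPreset) error {
	_, err := mergeEnv(ctr.Env, presets)
	return err
}

// admitPod applies presets in two phases: a read-only conflict check over
// every container first, then the actual mutation only if all checks pass.
func admitPod(containers []Container, presets []*PodPreset) error {
	// Phase one: detect conflicts without modifying anything.
	for i := range containers {
		if err := safeToApplyPodPresetsOnContainer(&containers[i], presets); err != nil {
			return err // reject the injection entirely; the pod is left untouched
		}
	}
	// Phase two: no conflicts found, so these merges cannot fail.
	for i := range containers {
		applyPodPresetsOnContainer(&containers[i], presets)
	}
	return nil
}
```

Splitting the check from the mutation keeps the admission decision all-or-nothing: either every preset applies cleanly or the pod passes through unmodified.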
Release note: