Run l7 controller on master #26048
Conversation
@@ -1,33 +1,26 @@
{% set kube_uid = "" -%}
{% if pillar['kube_uid'] is defined -%}
{% set kube_uid = pillar['kube_uid'] %}
If GKE keeps re-salting clusters it defeats the purpose of file-based persistence, and there are too many edge cases anyway, so it's safer to maintain and allocate the uid in the l7 pod.
You'll need to update the gci configs for master as well.
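A minimal sketch of that uid-in-the-pod approach, assuming the controller persists the value through the API in a ConfigMap; the ConfigMap name, namespace, and generation scheme below are illustrative assumptions, not necessarily what the controller does:

# Hypothetical sketch only: allocate a cluster uid once and persist it in the
# API so the l7 controller survives master restarts and re-salting.
if ! kubectl get configmap ingress-uid --namespace=kube-system >/dev/null 2>&1; then
  uid="$(head -c 16 /dev/urandom | md5sum | cut -c1-8)"
  kubectl create configmap ingress-uid --namespace=kube-system --from-literal=uid="${uid}"
fi
uid="$(kubectl get configmap ingress-uid --namespace=kube-system -o jsonpath='{.data.uid}')"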
Commits 8b38707 to b535d0c
GCE e2e build/test passed for commit c01a058.
@kubernetes/goog-testing we used to configure logging through e2e/core.go, but I don't see that anymore. Have we just automated the collection of master logs from /var/log?
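For reference, a hedged sketch of what automated master log collection could look like in an e2e teardown script; the instance name, zone, and log path are assumptions:

# Copy the l7 controller log off the master after a test run; tolerate failure
# so teardown still completes if the file is missing.
gcloud compute ssh "${MASTER_NAME}" --zone="${ZONE}" \
  --command="sudo cat /var/log/glbc.log" > "${REPORT_DIR}/glbc.log" || true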
Also updated startup scripts so we deploy an l7 static pod on a gci master too (which is the default currently).
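Roughly, that startup-script change amounts to copying the glbc manifest into the kubelet's static pod directory on the master when the feature is enabled; the manifest path below is an assumption:

# Sketch: deploy the l7 controller as a static pod on a GCI master.
if [[ "${ENABLE_L7_LOADBALANCING:-}" == "glbc" ]]; then
  cp "${KUBE_HOME}/kube-manifests/kubernetes/gci-trusty/glbc.manifest" \
     /etc/kubernetes/manifests/
fi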
LGTM
This PR will fix the GKE ingress e2e flake. GKE is currently infrequently leaking resources because a salt variable reflecting the cluster uid wasn't piped through with the appropriate internal cls. Leaving priority as is because there are few enough PRs in the queue. Let's hope there aren't any more configuration discrepancies between GCE and GKE.
Ok, nope, more people are jumping on the sq, so p1-ing this as a flake fix.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit c01a058.
Automatic merge from submit-queue
Automatic merge from submit-queue. Collect l7 controller e2e logs. As noted in #26048 (comment), I meant to check e2e output and see if the logs were being collected, but it merged before I could.
@@ -706,9 +705,6 @@ function start-kube-addons {
  fi
  if [[ "${ENABLE_L7_LOADBALANCING:-}" == "glbc" ]]; then
    setup-addon-manifests "addons" "cluster-loadbalancing/glbc"
This line can be moved into function start-lb-controller.
But then it won't start-lb-controller without the right env vars, which makes the function name a lie, right?
Oh, no, I guess it can. It splits addon deployment but consolidates loadbalancer deployment, so I'll send a follow-up.
Sorry for the late comment, I just came back to the office. It is not a functionality problem, just an improvement, so you don't have to make a PR dedicated to the correction. It is up to you.
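For completeness, a sketch of the suggested follow-up, assuming a helper named start-lb-controller that consolidates the addon manifest setup and the master static pod copy; the variable and path names are illustrative, not the actual implementation:

function start-lb-controller {
  # Deploy the glbc addon resources and the master static pod in one place,
  # gated on the same env var the current startup script checks.
  if [[ "${ENABLE_L7_LOADBALANCING:-}" == "glbc" ]]; then
    setup-addon-manifests "addons" "cluster-loadbalancing/glbc"
    cp "${KUBE_HOME}/kube-manifests/kubernetes/gci-trusty/glbc.manifest" \
       /etc/kubernetes/manifests/
  fi
}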
Automatic merge from submit-queue. GCI: correct the fix in #26363. This PR is mainly for correcting the fix to the 'find' command in #26363. I added "-maxdepth 1" in an earlier change, and #26363 tried to fix it by changing the search path. This is potentially incorrect when yaml files are more than one layer deep. The real fix is to remove the "-maxdepth 1" flag from the 'find' command. This PR also updates two minor places in configure-helper.sh introduced by two previous PRs, #26413 and #26048. @roberthbailey @wonderfly cc/ @dchen1107 @fabioy @kubernetes/goog-image
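To illustrate the difference (the directory path here is made up):

# With -maxdepth 1 only top-level yaml files are found.
find /etc/kubernetes/addons -maxdepth 1 -name "*.yaml"
# Dropping the flag lets find descend into nested addon directories.
find /etc/kubernetes/addons -name "*.yaml"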
Fixes #23663, needs kubernetes-retired/contrib#680
@roberthbailey @kubernetes/goog-cluster