federation: Creating kubeconfig files to be used for creating secrets for clusters on aws and gke #27332
Conversation
Am testing it out
Verified that it works as expected on GKE
```diff
@@ -131,7 +131,6 @@ function validate-cluster {
 # CLUSTER_IP_RANGE (optional)
 # GKE_CREATE_FLAGS (optional, space delineated)
 function kube-up() {
-  echo "... in gke:kube-up()" >&2
```
Why remove this?
Thanks for catching. Reverted.
One minor nit. LGTM otherwise. Please feel free to self-apply the label.
Thanks, fixed. Adding LGTM. Also adding P1, since this is for federation, which is a release blocker for 1.3.
Turns out "_" is not allowed in secret names, and the context that the GKE script creates in kubeconfig has underscores in its name (contexts of the form `gke_<project>_<zone>_<cluster>`).
Removing LGTM.
Pushed another commit to fix the context name before using it as cluster and secret names. PTAL.
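For reference, a minimal sketch of that kind of sanitization; the context name below is illustrative, not taken from the PR:

```bash
# Hedged sketch, not the PR's actual code. The context name is illustrative.
# Kubernetes secret names must be valid DNS labels, so "_" and uppercase
# characters are rejected by the API server.
context="gke_my-project_us-central1-f_my-cluster"
safe_name=$(echo -n "${context}" | tr '_' '-' | tr '[:upper:]' '[:lower:]')
echo "${safe_name}"   # -> gke-my-project-us-central1-f-my-cluster
```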
Tracking issue for the flake is #26290
@quinton-hoole |
@nikhiljindal the kubeconfig contexts should already be created for you. I don't understand why you are creating additional kubeconfig files in the various provider scripts; by the time the e2e framework is running, those contexts already exist.
@colhom You are right. By the time the e2e framework is done, the default kubeconfig is set up with credentials for all the clusters, which lets kubectl talk to any cluster. That flow is working great.

This change is about creating a secret. This secret is used by the federation controller manager (specifically the cluster controller) to talk to each k8s cluster that is part of the federation. There is one secret per k8s cluster, and each secret contains the kubeconfig record corresponding to that cluster. For this we create a new kubeconfig file that contains the record for only one cluster (vs the default kubeconfig, which contains records for all clusters), and then use that newly created kubeconfig file to create the secret.
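A minimal sketch of that mechanism, using standard kubectl flags; the context name, output path, and namespace are assumptions for illustration, not the PR's actual values:

```bash
# Hedged sketch, not the PR's code. CONTEXT, paths, and namespace are illustrative.
CONTEXT="federation-cluster-1"
OUT_DIR="/tmp/kubeconfigs/${CONTEXT}"
mkdir -p "${OUT_DIR}"

# Write a kubeconfig containing only this cluster's record, with credentials
# inlined (--flatten) and everything unrelated stripped (--minify).
kubectl config view --minify --flatten --context="${CONTEXT}" \
  > "${OUT_DIR}/kubeconfig"

# Wrap that single-cluster kubeconfig in a secret for the cluster controller.
kubectl create secret generic "${CONTEXT}" \
  --from-file="${OUT_DIR}/kubeconfig" \
  --namespace="federation"
```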
And yes, to be clear, there is another way to do this. Instead of creating those individual kubeconfigs in these shell scripts at cluster bring-up time, we can extract the individual records from the single unified kubeconfig (which `framework.GetUnderlyingFederatedContexts` uses). That way we won't have to write cloud-provider-specific code, and it would be Go code, so we could have good tests. Changing the shell script for GCE was easy enough (just 4 lines of code) and the changes were a lot less brittle than the alternative, so I went that route.
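The comment above describes doing the extraction in Go; the same idea sketched in shell, to make the contrast concrete (paths are illustrative, and this assumes kubectl's `config get-contexts -o name`):

```bash
# Hedged sketch of the alternative approach, not code from this PR.
# Derive per-cluster kubeconfigs from the one unified kubeconfig,
# instead of writing them out at cluster bring-up time.
for ctx in $(kubectl config get-contexts -o name); do
  out="/tmp/kubeconfigs/${ctx//_/-}"   # sanitize "_" so the name can be reused for secrets
  mkdir -p "${out}"
  kubectl config view --minify --flatten --context="${ctx}" \
    > "${out}/kubeconfig"
done
```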
@nikhiljindal I don't see the advantage of keeping redundant copies of the kubeconfig around on a per-provider basis. I do see a number of detriments (maintainability, further bloating the artifact tree, more potentially non-portable build state, etc.).
@nikhiljindal and I discussed this offline. I agree that this is the "least bad" way of doing this as of now.
LGTM
[sorry for the late comment]
Updated the code to use
lgtm |
GCE e2e build/test passed for commit 864b267.
LGTM. |
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit 864b267.
Automatic merge from submit-queue |
Extension of #26914, which created the kubeconfig files for GCE clusters.
This PR extends it to AWS, Vagrant, and GKE.
The change for AWS and Vagrant is exactly the same as for GCE.
For GKE, since `gcloud container clusters create` generates the kubeconfig entry itself, we just copy the generated kubeconfig to the desired location.

cc @kubernetes/sig-cluster-federation @colhom
@roberthbailey for GKE
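For illustration, a minimal sketch of the GKE path described above; the paths, cluster name, and zone are assumptions, not the PR's actual values:

```bash
# Hedged sketch, not the PR's code. Paths and names are illustrative.
# gcloud writes credentials for the new cluster into $KUBECONFIG
# (or ~/.kube/config), so the per-cluster file can simply be copied out.
export KUBECONFIG="/tmp/gke/kubeconfig"
gcloud container clusters create "my-cluster" --zone="us-central1-f"

DEST="/tmp/kubeconfigs/gke-my-cluster"
mkdir -p "${DEST}"
cp "${KUBECONFIG}" "${DEST}/kubeconfig"
```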