add kubeconfig types #2861
Conversation
Force-pushed from 0f23338 to 379d5e8, then from 379d5e8 to 43bff2b.
@smarterclayton simple renames squashed in. APIServer and APIVersion capitalized. CmdBuilder clarified to FlagBasedBuilder. IsReady() renamed to Complete().
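For readers skimming, a sketch of what the renamed surface could look like; the method set, signatures, and import path are assumptions based only on the names in this comment, not the PR's actual code:

```go
package kubeconfig

import "github.com/GoogleCloudPlatform/kubernetes/pkg/client"

// FlagBasedBuilder sketches the renamed builder discussed above: a
// config builder driven by command-line flags rather than a generic
// "CmdBuilder". Illustrative only.
type FlagBasedBuilder interface {
	// Complete reports whether enough information has been gathered
	// to build a config (previously named IsReady).
	Complete() bool

	// Config produces the resolved client configuration.
	Config() (*client.Config, error)
}
```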
Force-pushed from e162676 to d8acc0a.
@smarterclayton read path complete, including merging and overriding.
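As a rough illustration of the merge-and-override half of the read path (every name below is hypothetical, not the PR's code), one simple scheme is to layer partially filled configs so that the earliest non-empty value wins:

```go
package kubeconfig

// rawConfig is an illustrative partially-filled config layer.
type rawConfig struct {
	APIServer  string
	APIVersion string
	AuthPath   string
}

// mergeConfigs layers several configs, letting the earliest non-empty
// value win for each field. The PR's real merge logic may differ in
// precedence and granularity; this only shows the overall idea.
func mergeConfigs(layers ...rawConfig) rawConfig {
	var out rawConfig
	for _, layer := range layers {
		if out.APIServer == "" {
			out.APIServer = layer.APIServer
		}
		if out.APIVersion == "" {
			out.APIVersion = layer.APIVersion
		}
		if out.AuthPath == "" {
			out.AuthPath = layer.AuthPath
		}
	}
	return out
}
```

Called as mergeConfigs(fromFlags, fromFile, fromDefaults), this gives explicit flags the highest precedence, which is the overriding behavior described above.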
Force-pushed from d8acc0a to eb2fa2c.
@smarterclayton Comments addressed, except:

It does work when you don't want to skew server and client versions. If my intent is to get a client.Client for a particular server/authInfo pair, it doesn't seem unreasonable to provide checking that the client is likely to work: a sort of validation check before returning. Given that we already have a flag for this in both Kubernetes and OpenShift, it seems reasonable to have a common spot where it is checked. Using the factory doesn't seem very clean; it prevents this code from standing alone, and the Factory in kubectl seems only tangentially related to what this code is trying to do.
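What is being defended here is a pre-flight check modeled on the --match-server-version flag: build the client, then verify the server reports the same version the client was compiled with before handing it back. A minimal sketch, assuming era-appropriate helpers like client.New, Client.ServerVersion, and version.Get; treat the exact names and import paths as assumptions:

```go
package kubeconfig

import (
	"fmt"

	"github.com/GoogleCloudPlatform/kubernetes/pkg/client"
	"github.com/GoogleCloudPlatform/kubernetes/pkg/version"
)

// newMatchedClient builds a client for the given config, then fails
// fast if the server's reported version differs from the client's
// compiled-in version. Illustrative only; the PR's helper may differ.
func newMatchedClient(config *client.Config) (*client.Client, error) {
	c, err := client.New(config)
	if err != nil {
		return nil, err
	}
	clientVersion := version.Get()
	serverVersion, err := c.ServerVersion()
	if err != nil {
		return nil, err
	}
	if clientVersion.String() != serverVersion.String() {
		return nil, fmt.Errorf("server version (%s) differs from client version (%s)", serverVersion, clientVersion)
	}
	return c, nil
}
```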
Force-pushed from 30c1305 to 7f61ebc.
@smarterclayton moved match-server-version.
```go
	return util.SliceToError(validationErrors)
}

// validateAuthInfo looks for conflicts and errors in the cluster info
```
Should be `validateClusterInfo looks for conflicts...`

Minor nit on godoc, but looks good to me.
Will look later today
@derekwaynecarr typo fixed.
Force-pushed from 4f3503d to 8405eb3, then from c67192a to ffc4f5e.
@ghodss @jlowdermilk This enables reading multiple CLI configuration profiles (#1755). Do you guys have comments?
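The profile idea from #1755 is that one file holds several named cluster/user pairings and a context chooses among them. A hypothetical selection helper (types and fields are illustrative, not the PR's actual kubeconfig types):

```go
package kubeconfig

import "fmt"

// Context names a cluster/authInfo pairing; Config holds the named
// profiles plus a default. Field names here are assumptions.
type Context struct {
	Cluster  string
	AuthInfo string
}

type Config struct {
	CurrentContext string
	Contexts       map[string]Context
}

// resolveContext picks an explicitly requested profile if one was
// given, otherwise falls back to the file's current context.
func resolveContext(cfg Config, explicit string) (Context, error) {
	name := explicit
	if name == "" {
		name = cfg.CurrentContext
	}
	ctx, ok := cfg.Contexts[name]
	if !ok {
		return Context{}, fmt.Errorf("context %q not found in kubeconfig", name)
	}
	return ctx, nil
}
```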
```go
func (builder *builder) Config() (*client.Config, error) {
	if builder.config != nil {
		return builder.config, nil
	}
	// Client implements KubeConfig
```
Forget to delete this?
Yes. Tidied up.
LGTM, this fits the design proposed in #1755 nicely. Just nits above. Might be helpful to add a .md doc that includes an explanation of load order. Maybe cut/paste the text of the comment on getKubeConfig.
Force-pushed from ffc4f5e to ade2236.
@jlowdermilk Just a doc under kubernetes/docs?
Some final comments (one substantial concern about validation, one gap with flag names); otherwise this is in good shape to me.
@smarterclayton Got the renames. I put the error type export in a separate commit so you can confirm that's how you want it.
Force-pushed from a1e7c03 to af1247c, then af1247c to cece022, cece022 to 131deef, and 131deef to 0e688dc.
@smarterclayton tweaked, rebased, squashed.
LGTM
Fix inflight merge conflict by adapting rollingupdate to #2861
I think that this broke e2e because we didn't make changes to cluster turn-up to write the appropriate config file.
I would lean towards rolling this back, and then doing a re-revert with changes to the kube-up functions in cluster/*/. Sorry!
/sub (chasing this with @brendanburns)
@brendanburns All command-line flags were kept exactly the same, and the file isn't supposed to be required; however, Vagrant e2e has always been hit or miss, so I got the same results as I got on master and considered it good. Would you like me to do the revert? Also, do you have a gist with the failure?
…ter_conflict Revert "Fix inflight merge conflict by adapting rollingupdate to #2861"
The e2e failure looks about like this:
Ok, well that's pretty bad. I see the problem: .kubernetes_auth contains information for the server to identify the user and information for the client to identify the server. I lost the latter half during a refactor. I'll put together a change to restore that and show it succeeding. I'm sorry about this.
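To make the two halves concrete, here is an illustrative sketch of a .kubernetes_auth-style structure; only the Insecure field is confirmed by this thread, and the rest of the field set is an assumption:

```go
package kubeconfig

// authFileInfo sketches the two halves of a .kubernetes_auth file
// described above. Field names are illustrative; only "Insecure" is
// confirmed by this conversation.
type authFileInfo struct {
	// Client-to-server identity: lets the server identify the user.
	User     string
	Password string

	// Server identity: lets the client identify and trust the server.
	// This is the half that was lost during the refactor.
	CAFile   string
	Insecure *bool // if true, skip verifying the server's certificate
}
```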
@zmerlynn Forgive my ignorance, but is there an easy way for me to run e2e using the GKE provider from outside Google (getting started lists GCE, but not GKE)? While there was a bug in respecting "Insecure" from a .kubernetes_auth file, that didn't affect the Vagrant environment either way, and I noticed that "--auth_path" is handled differently for GKE depending on whether it is kubecfg or kubectl. Basically, making sure auth for e2e runs properly on Vagrant (it did before, too) doesn't mean that it will work on GKE. Alternatively, if the ~/.kubernetes_auth from your failing test had "Insecure":true, then we're golden.
Adds the ability to read a kubeconfig file (issue #1755).
Addresses #2644 to allow binding to different names and provides suggested names with a user-specified prefix.
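For reference, a sketch of the rough shape such kubeconfig types can take, with named clusters, named auth infos, and contexts pairing them; the exact fields and names in the PR may differ:

```go
package kubeconfig

// Config is a sketch of a top-level kubeconfig type: named clusters,
// named auth infos, and named contexts that pair them, plus a default
// context to use when none is specified. Illustrative only.
type Config struct {
	Clusters       map[string]Cluster
	AuthInfos      map[string]AuthInfo
	Contexts       map[string]Context
	CurrentContext string
}

// Cluster identifies a server endpoint.
type Cluster struct {
	Server     string // e.g. https://host:port
	APIVersion string
}

// AuthInfo identifies the user to the server.
type AuthInfo struct {
	AuthPath string // path to a .kubernetes_auth-style file
	Token    string
}

// Context names a (cluster, authInfo) pairing.
type Context struct {
	Cluster  string
	AuthInfo string
}
```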