Don't write password in get-password #1915

Closed
erictune wants to merge 1 commit

Conversation

erictune
Member

get-password overwrites the .kubernetes_auth file
without preserving fields that were added to it.

For GCE, it is not necessary to write the file
in kube-up() because the file is written later
in the same function.

In kube-push, it is not necessary to write it,
since the cluster is already created.

For Vagrant, the credentials were not being saved anyway, so no change there.

Other providers are in the "icebox"; TODOs were added for those.

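For context, a rough sketch of the kind of clobbering being described, assuming a JSON-style ~/.kubernetes_auth holding the user/pass/certs/key fields mentioned in the TODO below (the exact field names here are illustrative, not the verbatim upstream format):

# Illustrative only: suppose ~/.kubernetes_auth has accumulated fields beyond
# the basic pair, e.g.
#   {"User": "admin", "Password": "...",
#    "CAFile": "ca.crt", "CertFile": "client.crt", "KeyFile": "client.key"}
#
# A get-password that unconditionally rewrites the file with only the
# user/password pair drops those added fields:
printf '{"User": "%s", "Password": "%s"}\n' "$KUBE_USER" "$KUBE_PASSWORD" \
  > "$HOME/.kubernetes_auth"
chmod 0600 "$HOME/.kubernetes_auth"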
@smarterclayton
Contributor

LGTM

@erictune
Member Author

1.3 passed and 1.2 failed with a timeout. Please kick travis?

@jbeda jbeda self-assigned this Oct 20, 2014
@@ -241,6 +241,8 @@ kube-up() {

detect-minions

# TODO: write .kubernetes_auth file with user/pass/certs/key
Contributor

#1832 is about to go in and we have to rationalize this.

@jbeda
Contributor

jbeda commented Oct 20, 2014

At this point we could just delete get-password from kube-push and solve the problem. We actually don't need/use it there.

But I'm not sure I understand the problem. When is this clobbering your password file? If the file is already there it shouldn't re-write it. It should only write it if it isn't there or the username/password can't be read.
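For reference, a minimal sketch of the behavior described above (the function body, JSON field names, and password generation here are illustrative assumptions, not the actual cluster script):

# Sketch: only write credentials when they cannot be read back from the file.
get-password() {
  local file="$HOME/.kubernetes_auth"
  if [[ -r "$file" ]]; then
    KUBE_USER=$(python -c 'import json,sys; print(json.load(sys.stdin)["User"])' < "$file" 2>/dev/null)
    KUBE_PASSWORD=$(python -c 'import json,sys; print(json.load(sys.stdin)["Password"])' < "$file" 2>/dev/null)
    # Reuse the existing file when both values could be read.
    [[ -n "$KUBE_USER" && -n "$KUBE_PASSWORD" ]] && return 0
  fi
  # Otherwise generate credentials and write the file for the first time.
  KUBE_USER=admin
  KUBE_PASSWORD=$(python -c 'import string,random; print("".join(random.SystemRandom().choice(string.ascii_letters + string.digits) for _ in range(16)))')
  printf '{"User": "%s", "Password": "%s"}\n' "$KUBE_USER" "$KUBE_PASSWORD" > "$file"
  chmod 0600 "$file"
}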

@erictune
Member Author

It is not fixing a bug. I am working on generating multiple passwords (at least one for the user and one for the kubelets). For that, I want to separate the step of generating the passwords from the step of making an htpasswd file. This was an incremental change towards that.

@jbeda
Contributor

jbeda commented Oct 21, 2014

@erictune Gotcha!

I'm not sure I'm totally cool with this change, though. If something happens during the startup of the cluster and you Ctrl-C, there is a good chance that you'll have a cluster that you don't have the password for. We are carrying the password in memory for quite a while here.

Is there some way we can write the password out ASAP if we generate it?

If this is blocking you, we can get this in and I can clean things up after the rest of the flow becomes more obvious.

One idea: How about we leverage kubectl to maintain this password list? Ideally we'd modify this file instead of clobbering it but the tools to do that from bash are limited. We could either do something like lean on python or we could build password management stuff into kubectl.
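As a rough sketch of the "lean on python" option (purely illustrative, not what this PR does): merge fields into the existing file rather than rewriting it wholesale, so values added by other steps survive.

# Merge user/password into ~/.kubernetes_auth without discarding existing fields.
python - "$HOME/.kubernetes_auth" "$KUBE_USER" "$KUBE_PASSWORD" <<'PYEOF'
import json, os, sys
path, user, password = sys.argv[1], sys.argv[2], sys.argv[3]
auth = {}
if os.path.exists(path):
    with open(path) as f:
        auth = json.load(f)
# Only fill in values that are missing; keep anything already present.
auth.setdefault("User", user)
auth.setdefault("Password", password)
with open(path, "w") as f:
    json.dump(auth, f)
os.chmod(path, 0o600)
PYEOF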

@smarterclayton
Contributor

@derekwaynecarr was going to look at namespace additions to kubectl, and one thing we'd like kubectl to manage would be the auth per namespace/server as well as other config. So you'd say kubectl namespace <foo> which would inherit your default server settings, but then be able to switch to other namespaces and use other tokens. Ultimately one kubectl is going to talk to multiple servers, and probably multiple namespaces, so I think we should start from the assumption that each of those args/vars/config is scoped.
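A tiny hypothetical sketch of the workflow being floated (the per-namespace auth lookup is the proposal under discussion, not existing kubectl behavior):

# Switch the client's current namespace; default server settings are inherited.
kubectl namespace infrastructure
# Under the proposal, subsequent calls would use whatever token/credentials the
# local client config associates with the "infrastructure" namespace.
kubectl get pods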

@erictune
Member Author

@smarterclayton Why would you have another authentication token just because you are using a different namespace? You are still the same person. The authorizer should know what namespaces a user is authorized to use based on a policy file.

When you say that kubectl is going to talk to multiple servers, are you talking about:
{apiserver for cluster A, apiserver for cluster B, ...},
or about:
{apiserver for cluster A, build-management server, ...}

@derekwaynecarr
Member

I am working on a PR for this afternoon that will add the basic kubectl namespace support that was in kubecfg. We can look at a follow-on PR for the richer server-settings association that @smarterclayton discusses, after iterating on an issue discussion once it's a little clearer what we want.

@smarterclayton
Contributor

I don't think it has to be the same person - imagine this set of namespaces in a single cluster apiserver:

infrastructure: run by operations team for self-hosting Kube, log servers, proxies
appA: ops team 1, self contained
appB: ops team 2, self contained

Admins want restricted access to infrastructure. They use an access token for infrastructure that has reduced privileges so that client compromise doesn't cause issues. Additionally, they may be logging in to A and B as different people to occasionally debug things.

What I was mostly getting at is that the smallest unit of switching is namespace - you're unlikely to be two different people in the same namespace. You're slightly more likely to be using two different accounts on the same server, and almost certainly using two different accounts on two different servers. So scoping all of the client level config (server info, preferences, etc) to the namespace is valuable, but you're right, you're likely to share settings/config/auth across namespaces on the same server. I was just arguing for not forcing the client to require that auth be stored per server when you have the ability to quickly change namespaces and servers.

@erictune
Member Author

Having the client remember what namespace you are in definitely makes sense.
I agree that we should support the use case you gave as an example.
I can see why one user might want to have multiple authentication tokens (e.g. having one on each device).
I see that it is possible for the client to send different credentials based on namespace, but I don't see that it is necessary, or particularly desirable, for me to have namespace-specific authentication tokens.

Thinking about github, I have:

  • one account, @erictune (some people have two or three, but for different reasons than you give in your example.)
  • multiple credentials (one public/private key pair per machine I use, plus web login)
  • multiple repositories (~= namespaces) that I can access
  • different privileges in different repositories
  • any one of my credentials can be used to access any one of the repositories that I can access.
  • my client (git) knows what repository I am working on from local config files.

@erictune erictune closed this Oct 23, 2014
@erictune erictune deleted the passwd branch September 29, 2015 15:16