Key/Cert rotation for Kubernetes clients. #4672
the .kubeconfig loader -> client.Config -> client.Client transformation steps are currently lossy... they transform and return an object disconnected from the config source
Ugh.
Has there been any traction on this? I'd be willing to take a look if no one is / will be in the next few days.
Given the upcoming (US) holiday, I doubt anyone is going to be working on this in the next few days. Go for it!
It might be helpful to discuss the approach you plan to take before a lot of code gets written... how kubeconfigs will remain connected to resulting client configs, how clients built from configs will be rebuilt (or have transports swapped) on update, etc.
@liggitt sure, going to take a look today and come up with a proposal.
OK, did a little digging, want to circle back for input on a few things.

**platform compatibility**

The best approach for this is probably to use https://github.com/go-fsnotify/fsnotify, which is limited by the compatibility listed at https://github.com/go-fsnotify/fsnotify#file-system-notifications-for-go. As far as I can tell the limitation won't affect us.

**runtime replacement on change**

The relationship between the resulting client and the kubeconfig is completely severed once the client is built. Instead, if we compromise to only watch the files mentioned in the kubeconfig (i.e. changes to the certificates mentioned by path in the kubeconfig, as opposed to watching the kubeconfig itself for changes), we may be able to integrate the watch with the client itself. Otherwise, to address changes in the kubeconfig itself, we'd need to keep the client connected to its config source.

@liggitt @roberthbailey what do you think?
@deads2k who wrote the client builders.
There are two main cases:

suggest attacking number 2 first.
Thinking through the downstream impacts, I'd like to see this developed with a mechanism that calls a registered function to decide how to handle expired credentials. This would allow us to do things like:
That means I have cases where I want to react to certain kinds of rejections and where I want to proactively push new credentials.
@erictune generally agree with your points in 1), except for the usage of...

As for 2), credentials for service accounts are stored in their own files, but kubeconfig also allows them to be embedded within the yaml file itself. This is the case for tokens, certs/keys, and CA certs. I believe that the config should be immutable, and in order to have changing credentials they should be referenced from the kubeconfig file, not embedded within it. This allows handling the changes further down the hierarchy without having to maintain a reference to configuration-loading code within the client itself. Consider the use case of non-service-account credentials, for example a cert/key mounted from a hostPath into a manifest pod for authenticated kube master components.

@deads2k Handling rejection is probably much more complicated, i.e. for certs it implies knowledge of / an interface with a certificate authority or signing service. I like the idea of a function that re-reads the config, and that's generally what I was trying to say in my second proposal above, but that implies that a kubeconfig should be allowed to change over time while a server remains running.

I guess the question is: should a config be immutable throughout the life of a server, or should it be allowed to be modified? My stance is that a config should be immutable, but the references it makes to external files (certificates, keys, tokens, etc.) should be kept up to date with changes to those files.
I disagree... I wouldn't want to special case key/cert rotation... what if they are using token auth (which is specified in the kubeconfig)?
Check out... When this factory is used, then there is no kubeconfig involved.
@erictune I see that, yeah. This case is outside both the case of monitoring a kubeconfig or the files mentioned within it, and it's also completely disjoint from the resulting client: all it ends up with is a string extracted from the file. Without integrating the config source into the client it won't be possible to monitor that file for changes. Also, I'm pretty certain tokens and certs are mutually exclusive; in this case it doesn't use certs.

@liggitt then maybe integrating the config loading and the refresh into the client is a way to get this a bit cleaner. (src)

```go
type Client struct {
	*RESTClient
	*ExtensionsClient
	// TODO: remove this when we re-structure pkg/client.
	*DiscoveryClient
	ConfigSource func() *Config
}

type Reloadable interface {
	Watch() error
	Reload() error
}

func (c *Client) Watch() error {
	// fsnotify loop over the files referenced by the config
	if err := c.Reload(); err != nil {
		return err
	}
	return nil
}

func (c *Client) Reload() error {
	cfg := c.ConfigSource()
	// TODO: update the necessary components (swap transports, etc.) from cfg
	_ = cfg
	return nil
}
```

Callers would construct the client with a config source and start the watch:

```go
c := unversioned.New(func() *Config {
	// load configuration here
})
go c.Watch()
```

Something like this could extend to reactive credential updates if the config source is invoked again on rejected credentials. @deads2k does this align with what you were thinking?
Nit, but tokens are not mutually exclusive with certs. Certs are TLS-level; tokens (or basic auth) are header-level. Some providers require both.
Structurally, I don't think I'd want the config source hanging off the Client itself. @lavalamp is currently working in the client code. He may have a stronger opinion. I do still want the ability to register some sort of error handler for specific HTTP status codes in the client.
[MILESTONENOTIFIER] Milestone Removed. Important: the issue must specify exactly one kind label; removing it from the milestone.
Now that the kubelet supports auto-rotation of certs using the Certificate Signing Request API (see #45059), can other clients do the same thing? One potential issue is how CSRs will be approved for non-node clients. After #45619, these criteria need to be matched for a CSR to be auto-approved:
Hopefully components can add this rolebinding when they're first installed on the system.
Just hit this on a cluster I set up a year ago, with certs that expired for service account token validation. Really frustrating to fix, as you have to delete all the service account tokens from secrets to get them to refresh, and then restart any pods that use those secrets.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with `/close`. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
@mikedanese @liggitt - seems that one way to go with this is:
As for the logic, we might want to factor it under client-go, so every client can use the cert rotation workflow. This has the benefit of keeping the same behavior as what exists today for the kubelet.
We may want to rotate keys/certs periodically for things that use pkg/client.
Suggest we assume that whatever key rotation plan is used (out of scope of this bug), that old and new tokens/keypairs will always have overlapping lifetimes. If not, we'd need to teach .kubeconfig to have a list of creds and teach pkg/client to retry. (I prefer the former option).
Suggest pkg/client watches the .kubeconfig file and any file mentioned by it, using inotify.
@deads2k @liggitt @roberthbailey