Clean up Kubelet RESTful APIs #2098
Kubelet should also use the apiserver infrastructure.
In order to properly secure the kubelet, we'll want to reuse the same authentication / authorization interfaces we have in the master. We also want to have a consistently structured API. The initial work here would be to replace the Kubelet server with a true apiserver object, with a versioned beta, and begin evolving that API.
My name is Nataliia and I am a student who plans to apply for GSoC this round. Unfortunately, there are few development docs, and the issues are formulated in a way that is understandable to insiders, so without background it is really hard to understand what is going on. My current understanding is this: there is the primary/master API of Kubernetes, which is versioned and more-or-less stable (spread among many files; I cannot find a single file containing it). There is also the API of the individual Kubelet (which lives in /pkg/kubelet/server.go), which is currently in an 'unofficial' state: it just exposes what it exposes, and the implementation is the only documentation for it. So the idea is to structure it and make it more predictable and stable. To do that, you would first migrate to the same libraries the master API uses (apiserver and all the supporting auth code), and then make the API more consistent and predictable. So the questions are:
Sorry if the questions are too far from reality :)
Nataliia, We can do this piece by piece, replacing one http handler at a time. I wouldn't worry too much about conversions.go and types.go, they are […]. Please feel free to join us on IRC at #google-containers and we can give […]. Many thanks for your interest!
@AAzza Welcome! Yes, as @brendandburns also responded, you have the right idea. The apiserver library is used by at least one other component, the OpenShift master, so there's some hope that it's general enough, but we won't really know until we try. We may need a new generic registry implementation, though, since the master components both use etcd-backed registries. Hopefully you won't need to do much with the representation conversions; we do have brand new documentation that explains how the API machinery works:
BTW: handlers are registered here: Note that we're in the process of eliminating many of these handlers; Kubelet will send information to apiserver instead. #4561 #4562 I would eventually like Kubelet to export the same pod API as apiserver, though. Also note that we're in the process of moving to v1beta3 for all components, though kubelet currently only supports v1beta1. #5475
And there is #4685, which attempts to replace the '/stats' endpoint.
The logs and remote execution endpoints are somewhat special - I would save those for last, because #3481 will change how they behave on the apiserver and we're likely to want the kubelet API to be consistent (same options). @csrwng
Thank you for the replies! They really help me understand what is going on and what to do. I just have one last concern. GSoC will run during the summer. By then, I hope, some of these issues/pull requests will be solved or merged (and surely a lot of others will appear). But how does this correlate with any planned releases? This project could be a non-trivial change (if the Kubelet API is changed, or whatever), and if there is a 1.0 release around that time, there could be problems with that. I just have no clue what the release plans are, and haven't found anything except the roadmap and versioning docs, which have no specific dates/milestones, only a description of how releases should look when they are done.
@AAzza If the work can be decomposed into small enough pieces, we'd prefer to just merge them into the master branch. If that's too hard to do, we may need to keep the work in a branch in your fork until we create a 1.0 branch, which should happen sometime in June. As part of this, we should ensure our generic API client library is capable of targeting Kubelet. In fact, I'd like kubectl to be able to target Kubelet directly, as well. (Assuming the node is reachable from wherever the tool is invoked.)
Additional details:
@bgrant0607 Thank you for such a detailed answer! If the 1.0 release will be around mid-June, we probably won't be able to do a stable transition to apiserver, so that will likely be merged after the branch is created. Before that, we can probably do simple tasks, like removing deprecated endpoints if they still exist. About the client library: for now, as I understand it, the Kubelet client lives in a separate file, https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/client/kubelet.go, but after the unification it could probably share the client's main code for nodes and pods. Thank you for pointing this out; I didn't notice it at first :) So we want /stats (https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/kubelet/server.go#L656) and /spec (https://github.com/GoogleCloudPlatform/kubernetes/blob/master/pkg/kubelet/server.go#L3560) to also be versioned? One crazy idea to evaluate: is it worth spending a week changing the URL scheme before moving to apiserver and before the 1.0 release? It would be a little ugly to do without go-restful for pods and nodes (see the code for the /stats endpoint above, where the URL is parsed directly), but the benefit is that in 1.0 the Kubelet would expose at least some sort of consistent API, which could later be rewritten onto apiserver with more elegant code. Is that worth it, or am I overestimating the importance of the Kubelet API for 1.0? :)
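As a sketch of what a versioned URL scheme for `/stats` might look like, here is a small parser for paths of the form `/api/<version>/stats/<podID>/<containerName>`. The segment layout is an assumption for illustration, not the real Kubelet scheme:

```go
package main

import (
	"fmt"
	"strings"
)

// parseStatsPath splits a hypothetical versioned stats path into its parts,
// replacing the ad-hoc string slicing a hand-rolled handler would do.
// It accepts paths like /api/v1beta1/stats/<podID>/<containerName>.
func parseStatsPath(path string) (version, podID, container string, err error) {
	parts := strings.Split(strings.Trim(path, "/"), "/")
	if len(parts) != 5 || parts[0] != "api" || parts[2] != "stats" {
		return "", "", "", fmt.Errorf("malformed stats path: %q", path)
	}
	return parts[1], parts[3], parts[4], nil
}

func main() {
	v, pod, ctr, err := parseStatsPath("/api/v1beta1/stats/mypod/mycontainer")
	fmt.Println(v, pod, ctr, err)

	// An old-style unversioned path fails the check.
	_, _, _, err = parseStatsPath("/stats/mypod")
	fmt.Println(err != nil)
}
```

A library like go-restful would express the same scheme declaratively with path parameters; the stdlib version above just shows that the versioned layout itself is cheap to adopt.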
@AAzza the half-way point with internal improvements sounds like a good idea. It sounds like you want to do the public-facing changes first (before 1.0), leaving the internal cleanups to happen in between. This should give us flexibility going forward. One concern is how well documented that versioned API would be; initially we could do it manually, before we move on to swagger-based docs. Let's see what @bgrant0607 thinks :)
Yes, cleaning up the API is more important/urgent than improving/changing the implementation, but changing the implementation would be less controversial and less design-intensive. We could try to hash out the API details first and then decide which way to go based on how much progress is made.
We should close this and open new issues on Kubelet's API(s).
This is fairly out of date at this point.
Was kubelet rest api auth[n/z] ever completed? Is there some doc/issues that discuss this?
There is an authz/n filter in place, @liggitt
@mikedanese #11816 (comment) mentions authn/authz for kubelet. #14700 added interfaces for authn/authz to the kubelet but didn't wire them to command line options yet. Some of the work around webhook authn and authz could allow the kubelet to use the same authn/authz as the master if we wanted it to.
@liggitt thanks for clarifying.
More and more external systems, such as Heapster, are using the Kubelet RESTful API, and we are stabilizing the Kubernetes API, so now is the time to clean up the Kubelet's.