How to structure independent API endpoints in Kubernetes #635
Comments
Agree with the approach, and I'd like to take this on as part of my scheduler separation work. I may quibble with the particular groups, and I'd like it to be possible to put the same resource in multiple groups. (e.g., scheduler and replication controller both need to see pods.)
I think groups can depend on others, they just have to do so over an api connection. And conversely, if a group needs no knowledge of another resource, that resource should not be in that group. So a pod does not need to know about a replication controller or service, and so those should not be in the same group as a pod.
I guess I would structure it a bit differently. My initial thought is that the scheduler endpoint should serve everything the scheduler needs, the replicationController endpoint serves everything the replicationController needs, etc. I think the possibility of running in an arbitrary API group endpoint should be sufficient to keep the code totally separate all the way down to the storage layer. Go interfaces make it really easy to write code that will use a resource either over an API connection or a locally served one. The latter comes with efficiency gains, so I don't think we should completely veto it.
I get what you mean now - making an endpoint composed of a set of resources targeted to a consumer, potentially having different security or access pattern constraints. Makes sense. Local interfaces: For sure - meant purely logical separation, not enforced.
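The point about Go interfaces can be made concrete with a small sketch. All names here (`PodLister`, `localRegistry`, `apiClient`, `schedule`) are hypothetical, not actual Kubernetes code: a consumer like the scheduler depends only on an interface, and either an in-process registry or a remote API client can satisfy it.

```go
package main

import "fmt"

// PodLister is a hypothetical interface; a component such as the
// scheduler depends only on this, not on where the data lives.
type PodLister interface {
	ListPods() ([]string, error)
}

// localRegistry serves pods from in-process storage (the efficiency
// gain mentioned above).
type localRegistry struct {
	pods []string
}

func (r *localRegistry) ListPods() ([]string, error) {
	return r.pods, nil
}

// apiClient would fetch the same data over an API connection.
// It is stubbed out to keep this sketch self-contained.
type apiClient struct {
	endpoint string
}

func (c *apiClient) ListPods() ([]string, error) {
	// A real client would issue an HTTP request to c.endpoint.
	return nil, fmt.Errorf("no live server in this sketch: %s", c.endpoint)
}

// schedule works against either implementation unchanged.
func schedule(l PodLister) (int, error) {
	pods, err := l.ListPods()
	if err != nil {
		return 0, err
	}
	return len(pods), nil
}

func main() {
	n, _ := schedule(&localRegistry{pods: []string{"pod-a", "pod-b"}})
	fmt.Println(n)
}
```

Swapping `&localRegistry{...}` for `&apiClient{...}` changes nothing in the consumer, which is what keeps the local-serving option open without coupling the code.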
@smarterclayton @lavalamp Sounds pretty good. We also need to split up types.go. We could start by pulling out parts of it that we'd want to use in new APIs, such as the ReasonType constants. |
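As a rough illustration of pulling shared pieces out of types.go: small declarations like the ReasonType constants could live in a package every API group imports without dragging in the full type set. The constant values below are illustrative, not the real ones.

```go
package main

import "fmt"

// Hypothetical sketch: a type like this would move out of the
// monolithic types.go into a small shared package that each API
// group can import independently.
type ReasonType string

const (
	// Illustrative values only, not the actual constants.
	ReasonTypeUnknown ReasonType = ""
	ReasonTypeCreated ReasonType = "created"
	ReasonTypePulled  ReasonType = "pulled"
)

func main() {
	fmt.Println(ReasonTypeCreated)
}
```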
Some concrete examples from OpenShift of how we're using APIGroup now (haven't yet integrated Eric's changes w.r.t. Master and authentication) https://github.com/openshift/origin/blob/master/pkg/cmd/server/origin/master.go#L178
The first loop there is injecting other API groups into our Mux (for the case when we want to load Kubernetes API as well for our standalone binary). That code is at https://github.com/openshift/origin/blob/master/pkg/cmd/server/kubernetes/master.go#L48 and does:
So we want to keep api groups separate if possible from being tied to Kubernetes, because we want to be able to
Once we have designed ComponentPlugins enough that our pattern works, it seems like the APIs of registered plugins should show up in the central API doc. Each group of versions is a unit, and it would be ideal to install those to a server as a unit so that we can have these consistent patterns.
This is superseded by #3201
Discussion in #503 and #592 (see comment) raised the issue of separating different logical api services at both a code level (now) and hosting level (future) to enforce boundaries, prevent coupling, allow higher security, allow api versioning at different rates, and allow experimentation for competing implementations. The last is important for us (we have some build, deploy, and image management prototypes we'd like to iterate on publicly). I'd like to propose some concrete steps to enable that:
- `pkg/registry` based on api groups (into their own packages?)

Thoughts?
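One way the proposed separation could look, sketched under heavy assumptions (all names invented, not the real registry code): each api group lives in its own package and contributes its own storage map, and the master composes the groups rather than owning one flat map.

```go
package main

import "fmt"

// Storage stands in for whatever interface a registry entry satisfies;
// the real code is richer, this is only a shape.
type Storage interface {
	Kind() string
}

type podStorage struct{}

func (podStorage) Kind() string { return "Pod" }

type buildStorage struct{}

func (buildStorage) Kind() string { return "Build" }

// In the proposal each of these would live in its own package under
// pkg/registry; here they are plain functions for brevity.
func coreGroup() map[string]Storage {
	return map[string]Storage{"pods": podStorage{}}
}

func buildGroup() map[string]Storage {
	return map[string]Storage{"builds": buildStorage{}}
}

// composeGroups merges the per-group maps; a name collision between
// groups would be a wiring error caught at startup.
func composeGroups(groups ...map[string]Storage) map[string]Storage {
	merged := map[string]Storage{}
	for _, g := range groups {
		for name, s := range g {
			merged[name] = s
		}
	}
	return merged
}

func main() {
	merged := composeGroups(coreGroup(), buildGroup())
	fmt.Println(len(merged))
}
```

The point of the split is visible in the composition step: an experimental group (build, deploy, image management) can be added or removed without touching the core group's package.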