
How to structure independent api endpoints in kubernetes #635

Closed
smarterclayton opened this issue Jul 26, 2014 · 7 comments
Labels: area/api, area/extensibility, kind/design, priority/backlog

Comments

@smarterclayton (Contributor)

Discussion in #503 and #592 (see comment) raised the issue of separating different logical api services at both the code level (now) and the hosting level (future) to enforce boundaries, prevent coupling, allow higher security, allow api versioning at different rates, and allow experimentation with competing implementations. The last point is important for us (we have some build, deploy, and image management prototypes we'd like to iterate on publicly). I'd like to propose some concrete steps to enable that:

  • propose that a group of api rest resources be called an "api group" (better names welcome)
  • in code, ensure that an apiserver "master" can be created from multiple independent "api groups" with no coupling between groups
    • ensure that it is easy to combine multiple groups into the same golang http.Mux for testing and simple deployment - do not require separate compilation yet (a rough sketch follows this list)
    • begin to split up pkg/registry based on api groups (into their own packages?)
  • put pods/podTemplates/bindings into an api group "pod scheduling"
  • put services into an api group
  • put replicationController into an api group
  • put minions into an api group
  • leave the api groups in the same path prefix for now
  • define a policy for how and where "experimental" features can be prototyped in the master branch
    • should these be separate api groups under a distinct path prefix?
    • should the storage, registry, and controllers associated with these features be in their own golang package?
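
To make the http.Mux point concrete, here is a rough sketch (the APIGroup interface and the group names below are hypothetical, not the actual apiserver code) of how independent groups could each install their own routes on a shared mux for testing or an all-in-one deployment:

    package main

    import (
        "log"
        "net/http"
    )

    // APIGroup is a hypothetical interface: each group of rest resources knows how
    // to install its own handlers under a path prefix, with no knowledge of the
    // other groups it is combined with.
    type APIGroup interface {
        InstallREST(mux *http.ServeMux, prefix string)
    }

    // podGroup and serviceGroup stand in for independent api groups.
    type podGroup struct{}

    func (podGroup) InstallREST(mux *http.ServeMux, prefix string) {
        mux.HandleFunc(prefix+"/pods", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("pod list placeholder\n"))
        })
    }

    type serviceGroup struct{}

    func (serviceGroup) InstallREST(mux *http.ServeMux, prefix string) {
        mux.HandleFunc(prefix+"/services", func(w http.ResponseWriter, r *http.Request) {
            w.Write([]byte("service list placeholder\n"))
        })
    }

    func main() {
        // Combine several independent groups into one mux for simple deployment;
        // the same groups could later be hosted by separate servers without
        // changing the group code itself.
        mux := http.NewServeMux()
        for _, g := range []APIGroup{podGroup{}, serviceGroup{}} {
            g.InstallREST(mux, "/api/v1beta1")
        }
        log.Fatal(http.ListenAndServe(":8080", mux))
    }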

Thoughts?

@lavalamp (Member)

Agree with the approach, and I'd like to take this on as part of my scheduler separation work.

I may quibble with the particular groups, and I'd like it to be possible to put the same resource in multiple groups. (e.g., scheduler and replication controller both need to see pods.)

@smarterclayton (Contributor, Author)

I think groups can depend on others; they just have to do so over an api connection. Conversely, if a group needs no knowledge of another resource, that resource should not be in that group. A pod does not need to know about a replication controller or service, so those should not be in the same group as a pod.

@lavalamp (Member)

I guess I would structure it a bit differently. My initial thought is that the scheduler endpoint should serve everything the scheduler needs, the replicationController endpoint serves everything the replicationController needs, etc.

I think the possibility of running in an arbitrary API group endpoint should be sufficient to keep the code totally separate all the way down to the storage layer.

Go interfaces make it really easy to write code that will use a resource either over an API connection or a locally served one. The latter comes with efficiency gains, so I don't think we should completely veto it.
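
As a sketch of that point (hypothetical code, not anything from the repo), a consumer such as the replication controller could depend on a small interface and be handed either an API-backed or a locally served implementation:

    package main

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // PodLister is a hypothetical consumer-facing interface; the caller does not
    // care whether pods come from a remote api group or an in-process registry.
    type PodLister interface {
        ListPods() ([]string, error)
    }

    // apiPodLister fetches pods over an API connection.
    type apiPodLister struct {
        baseURL string
    }

    func (l *apiPodLister) ListPods() ([]string, error) {
        resp, err := http.Get(l.baseURL + "/pods")
        if err != nil {
            return nil, err
        }
        defer resp.Body.Close()
        var pods []string
        if err := json.NewDecoder(resp.Body).Decode(&pods); err != nil {
            return nil, err
        }
        return pods, nil
    }

    // localPodLister serves pods from memory, skipping the network hop when the
    // groups happen to be compiled into the same binary.
    type localPodLister struct {
        pods []string
    }

    func (l *localPodLister) ListPods() ([]string, error) {
        return l.pods, nil
    }

    // syncOnce stands in for a controller loop that only depends on the interface.
    func syncOnce(lister PodLister) {
        pods, err := lister.ListPods()
        if err != nil {
            fmt.Println("list failed:", err)
            return
        }
        fmt.Printf("observed %d pods\n", len(pods))
    }

    func main() {
        // Same controller code, two wirings: local for an all-in-one binary,
        // remote when the pod group runs behind its own endpoint.
        syncOnce(&localPodLister{pods: []string{"pod-a", "pod-b"}})
        syncOnce(&apiPodLister{baseURL: "http://localhost:8080/api/v1beta1"})
    }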

@smarterclayton (Contributor, Author)

I get what you mean now - making an endpoint composed of a set of resources targeted to a consumer, potentially having different security or access pattern constraints. Makes sense.

Local interfaces: For sure - meant purely logical separation, not enforced.

@bgrant0607 (Member)

@smarterclayton @lavalamp Sounds pretty good. We also need to split up types.go. We could start by pulling out parts of it that we'd want to use in new APIs, such as the ReasonType constants.
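
For instance (a hypothetical layout, not the actual package split), the shared constants could live in a small package that both the existing API and any new api groups import:

    // Hypothetical file: pkg/api/reason/reason.go
    package reason

    // ReasonType is a machine-readable explanation for an object's state. Keeping
    // it in its own package lets new APIs reuse it without importing all of types.go.
    type ReasonType string

    // Example values only; the real constants would be moved here from types.go.
    const (
        ReasonUnknown ReasonType = "unknown"
        ReasonCreated ReasonType = "created"
    )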

@smarterclayton (Contributor, Author)

Some concrete examples from OpenShift of how we're using APIGroup now (we haven't yet integrated Eric's changes w.r.t. Master and authentication):

https://github.com/openshift/origin/blob/master/pkg/cmd/server/origin/master.go#L178

    // Let each additional installer (the Kubernetes API groups, in the all-in-one
    // case) register its routes on the shared mux; InstallAPI returns startup messages.
    var extra []string
    for _, i := range installers {
        extra = append(extra, i.InstallAPI(osMux)...)
    }
    // Install the OpenShift v1beta1 API group and the shared support endpoints.
    apiserver.NewAPIGroup(storage, v1beta1.Codec, OpenShiftAPIPrefixV1Beta1, latest.SelfLinker).InstallREST(osMux, OpenShiftAPIPrefixV1Beta1)
    apiserver.InstallSupport(osMux)

    handler := http.Handler(osMux)
    if c.RequireAuthentication {
        handler = c.wrapHandlerWithAuthentication(handler)
    }
    if len(c.CORSAllowedOrigins) > 0 {
        handler = apiserver.CORS(handler, c.CORSAllowedOrigins, nil, nil, "true")
    }

    handler = apiserver.RecoverPanics(handler)

    server := &http.Server{
        Addr:           c.BindAddr,
        Handler:        handler,
        ReadTimeout:    5 * time.Minute,
        WriteTimeout:   5 * time.Minute,
        MaxHeaderBytes: 1 << 20,
    }

The first loop there injects other API groups into our mux (for the case where we want to load the Kubernetes API as well in our standalone binary). That code is at https://github.com/openshift/origin/blob/master/pkg/cmd/server/kubernetes/master.go#L48

and does:

    // Mount the Kubernetes v1beta1 and v1beta2 API groups under their own path
    // prefixes, then report where each was mounted.
    apiserver.NewAPIGroup(m.API_v1beta1()).InstallREST(mux, KubeAPIPrefixV1Beta1)
    apiserver.NewAPIGroup(m.API_v1beta2()).InstallREST(mux, KubeAPIPrefixV1Beta2)

    return []string{
        fmt.Sprintf("Started Kubernetes API at %%s%s", KubeAPIPrefixV1Beta1),
        fmt.Sprintf("Started Kubernetes API at %%s%s", KubeAPIPrefixV1Beta2),
    }

So, where possible, we want to keep API groups from being tied to Kubernetes, because we want to be able to:

  1. Reuse the core apiserver implementation so we have 100% fidelity with the Kube API conventions
  2. Add blocks of API functionality in the all-in-one binary (for when you don't have a Kube deployment)
  3. Omit certain API groups when we connect to an existing Kube deployment, and handle registering ourselves as a ComponentPlugin at startup.

Once we have designed ComponentPlugins far enough that our pattern works, the APIs of registered plugins should show up in the central API doc. Each group of versions is a unit, and it would be ideal to install them into a server as a unit so that these patterns stay consistent.

bgrant0607 added the priority/backlog label on Dec 3, 2014
@smarterclayton (Contributor, Author)

This is superseded by #3201
