Proposal: Kubernetes Build Plugin #994
Conversation
This is an evolution of the prototype work done for #662 and is an exploration of offering a high-level abstraction around builds that could run on top of Kubernetes. This is not a core component, but we do see value in trying to offer concepts within the ecosystem that leverage the platform and its capabilities. We see features like build as key to folks building real-world Docker environments, and so as part of OpenShift Origin we want to see if there are concepts and patterns that we can converge on as a K8S community.
So, first I want to say that having a build system on top of k8s would be wonderful and a compelling use case for lots of people, especially if it was easy. That said, I feel like there's a big murky question of what sort of thing is a plugin to k8s itself, and what sort of thing is something you run on top of k8s?

The analogy my brain comes up with is kernel : gcc :: k8s : build service. That is, you don't build gcc into your kernel, and an equivalently scaled-up build system shouldn't be a plugin to your cloud kernel, either. I see k8s as being a cloud OS; a build system is certainly an awesome use case, and we should make it installable on your k8s cluster with a one-line command if possible. But I don't think that makes it a "plugin".

So what sort of thing would be a plugin? Again, the analogy my brain comes up with is device drivers and filesystems; the corresponding k8s options might be doling out your tape drives and the ability to mount storage provided by your custom service into a container. I may be being overly pedantic here, or maybe I just have a much more limited scope for the word "plugin".

Anyway, whatever we call this, I am in favor of a system that does builds on top of k8s, and I'm in favor of k8s having the primitives necessary to support such a system, because those primitives will be needed elsewhere. I also think k8s primitives probably should not include the word "Build" in their names; that is too specific an operation for a kernel module.
(for eventual deployment in Kubernetes).
2. As a user of Kubernetes, I want to build an image from a binary input (docker context, artifact) and push it to a registry (for eventual deployment in Kubernetes).
3. As a provider of a service that involves building docker images, I want to offload the resource
This bullet is sort of opaque to me. If you offload all those things, what service exactly are you providing?
You are still offering a service to do builds, which is something the core of K8s does not do. The core would be used to schedule the resources instead of inventing a separate system to do so.
My main thought is similar to @lavalamp's - why is this a k8s component and not just an application?
I think we should sort out terminology between plugins, applications, components, etc. How about:

Plugins are Go code implementing internal interfaces (scheduler algorithm) or external endpoints (make a POST to X, get back Y, act based on what is in Y). They mutate core or component functionality. Services are currently a plugin because they alter pod creation (inject env). All plugins attach to components or the core.

Applications are code that runs in containers on top of a Kubernetes system.

Components (need a better name: system-services? k8s-components?) are layered functionality on top of the k8s core that solve container/service domain-specific problems. cAdvisor is potentially a component: it offers resource-use information in parallel to the kubelet, and a scheduler plugin can use that info to make better placement decisions. It's not the only type of component that can offer that feature, just one that works well. A component is usually an application, but it does not have to be. Applications may depend on specific components in order to function (as API consumers), and components may depend on other components. Every resource except pods is a component (?). Components should be logically grouped with related components and organized into layers that identify dependencies.

A distribution of Kubernetes may include multiple components, and the distributor may choose to run and deploy those components in ways that make sense to the targeted consumers. A component or application should be portable across distributions that expose the same versions of the components it depends upon. A distribution should not introduce incompatibilities that alter the client contracts of exposed component APIs (you can proxy, add authn/authz, aggregate by path, and alter the process / compilation setup of a component, but not break well-written clients when you do so).

Image builds and image metadata tracking are, to me, components that offer a domain solution (I need to create the images I'm using in pods, and trigger actions based on new images being available) in a particular way. They are not the only possible way to offer those capabilities. Because they have utility to a large set of potential k8s users, it is beneficial to use them as examples of how to integrate components with the core, and it's beneficial to drive at least their early development in a highly visible way in the community. I believe feedback around these components across the community will help evolve an ecosystem for component authors and consumers and clarify the default expectations of components.

Standard (however that's defined) plugins and components should be in the Kubernetes tree. Common components should either be in tree (with suitable tests / documentation and maintainers), or catalogued in the tree and located in other repos. Components should be written in a way that allows a distribution to easily embed, package, and deploy those components as it sees fit. The client contract for a component to fit within a k8s system (API behavior that allows config mechanisms to function) should be defined.
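To make the "component-provided resource" idea more concrete, here is a minimal sketch in Go of what such a resource could look like. The names and fields (Build, BuildSource, BuildStatus, imageTag) are illustrative assumptions rather than the actual openshift/origin types; the point is only that a component defines and serves its own resource alongside the core objects instead of modifying the core.

```go
// A minimal sketch of a resource a build component might expose. All names
// and fields here are hypothetical illustrations, not the real
// openshift/origin types; the shape simply mirrors how core resources carry
// metadata plus a domain-specific spec and status.
package build

// Build is a single request to produce an image from some source input.
type Build struct {
	// ID and CreationTimestamp mirror the metadata core resources carry.
	ID                string `json:"id,omitempty"`
	CreationTimestamp string `json:"creationTimestamp,omitempty"`

	// Source describes where the build inputs come from.
	Source BuildSource `json:"source"`

	// ImageTag is the registry location the resulting image is pushed to.
	ImageTag string `json:"imageTag"`

	// Status is maintained by the build component as the build progresses.
	Status BuildStatus `json:"status,omitempty"`
}

// BuildSource is a hypothetical union of supported inputs: a git repository
// or a pre-built docker context uploaded somewhere reachable.
type BuildSource struct {
	GitURI     string `json:"gitURI,omitempty"`
	ContextURL string `json:"contextURL,omitempty"`
}

// BuildStatus enumerates coarse lifecycle states for a build.
type BuildStatus string

const (
	BuildNew      BuildStatus = "new"
	BuildRunning  BuildStatus = "running"
	BuildComplete BuildStatus = "complete"
	BuildFailed   BuildStatus = "failed"
)
```

A component serving a type like this would schedule the actual work as ordinary pods through the core API, which is what keeps it a layer on top of the core rather than a change to it.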
Relatedly, @smarterclayton, have you considered what we might call the pieces that comprise infrastructure-y software layers on which Kubernetes might run, such as Mesos?
@smarterclayton In your taxonomy, I see "components" as just being the composition of applications. They're not special, or shouldn't be. (Back to my analogy: building is not a system daemon, either.) We might need some more advanced discovery to make this work well; we need a way for a CLI to ask the k8s master, "Hey, I'm looking for something that can handle requests of type RHLMAGICBUILD," and get a useful response. (Should be a small addition to services?)
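As a rough illustration of that discovery idea, here is a sketch under the assumption that handlers advertise themselves through labels on ordinary services and that a CLI can list services by label selector. The label key (handlesRequestType), endpoint URL, and response shape are all hypothetical, not an existing API.

```go
// A sketch of CLI-side discovery: find services labeled as handlers for a
// given request type by asking the API server for services matching a label
// selector. The label key, URL, and JSON shape are assumptions for
// illustration; the real mechanism would be whatever discovery addition the
// services API grows.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

type serviceList struct {
	Items []struct {
		ID     string            `json:"id"`
		Labels map[string]string `json:"labels"`
	} `json:"items"`
}

// findHandlers returns the IDs of services claiming to handle requestType.
func findHandlers(apiServer, requestType string) ([]string, error) {
	// Ask for services whose labels match handlesRequestType=<requestType>.
	q := url.Values{"labels": {"handlesRequestType=" + requestType}}
	resp, err := http.Get(apiServer + "/services?" + q.Encode())
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var list serviceList
	if err := json.NewDecoder(resp.Body).Decode(&list); err != nil {
		return nil, err
	}
	ids := make([]string, 0, len(list.Items))
	for _, s := range list.Items {
		ids = append(ids, s.ID)
	}
	return ids, nil
}

func main() {
	ids, err := findHandlers("http://localhost:8080/api/v1beta1", "RHLMAGICBUILD")
	if err != nil {
		fmt.Println("discovery failed:", err)
		return
	}
	fmt.Println("services that can handle builds:", ids)
}
```

Whether that lookup belongs in the services API or in a separate registry is exactly the kind of small core addition being debated here.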
To reverse the analogy: gcc and llvm and tools like them form a critical part of the ecosystem, and an enormous amount of work goes into ensuring they're available in all environments, because without them the OS doesn't do much useful work. And packagers spend more time on fundamental packages and services (bash, ssh, etc.) that provide key value up the stack, so as to reduce the need to reinvent those tools. Are components "special"? Only in the context that they provide value for other applications. I agree they don't need special semantics at the core to make them work, except possibly simple API discovery (which is in some small part "special", and so we should consider it carefully). Things that are part of a "config" sound to me like components, and everything else is an app. We have to have some common level of pattern there to make the idea of config work.
I think it's worth making a distinction between things that simply run on K8s and others that make use of potentially reusable parts of K8s, such as API server code, the registry, etc. Builds imho would fall in this latter category.
I think we're going to have to disagree on this; I'll be flattered if the API stuff we make is good enough that people reuse it (and I think we're definitely headed in the right direction, if not completely there yet). But this just means that k8s is itself a good example of the sort of application we want people to write. It shouldn't mean your application gets special treatment from the k8s system itself.
@lavalamp - after reading #991 I see what you mean. You'd expect each contributor to the API to run its own process, so builds could reuse some of the stuff in base K8s on their own. One question I'd have is how you'd see this model working with something like project scoping (#1017). I may want my add-on resources also scoped to a project. Is this something each contributor would need to implement, or would we have something in core to enforce it?
@csrwng Yeah, I don't see building as being a plugin in the #991 sense. It should just be a regular Kubernetes application. However, I do see that it could be very useful to allow apps to use whatever identity stuff we come up with. But we shouldn't do something special to make it work with builds; rather, building should be implementable on top of what we provide.
An implementation of this now exists in https://github.com/openshift/origin/tree/master/pkg/build. We'll move this doc to that repo and add a link back into the README or an integration doc for folks to discover if they want to iterate on or leverage the requirements. The package is not tied to anything outside of Kube, and eventually it should be a composable resource.
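For a feel of how a client might interact with such a component, here is a hypothetical usage sketch. The endpoint path (/builds) and payload shape follow the illustrative Build type sketched earlier in this thread, not the actual openshift/origin API.

```go
// A hypothetical client creating a Build through the build component's REST
// endpoint. The URL and payload fields are assumptions for illustration only.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Describe the build: where the source lives and where the image goes.
	payload := map[string]interface{}{
		"source":   map[string]string{"gitURI": "https://example.com/myapp.git"},
		"imageTag": "registry.example.com/myteam/myapp:latest",
	}
	body, _ := json.Marshal(payload)

	// POST the build request to the component, which would then schedule the
	// actual work as pods via the core Kubernetes API.
	resp, err := http.Post("http://localhost:8080/builds", "application/json", bytes.NewReader(body))
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("build submitted, status:", resp.Status)
}
```

In this sketch the component accepts the request, records the Build, and lets the core schedule the resources, consistent with the division of labor discussed above.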
@pmorie go ahead and move this doc to origin and then open a pull to add this into a "Kubernetes Extensions" |
We propose that a build plugin and corresponding framework be added to kubernetes to facilitate: