Higher level image and deployment concepts in Kubernetes #503
I think there's value in those abstractions, but to my naive ears they

On Thu, Jul 17, 2014 at 11:46 AM, Clayton Coleman <notifications@github.com> wrote:
Agreed - I don't think pods or replication controllers know anything about builds or deployments; in fact, the layering is reversed. A type of build should be able to use a run-once pod to accomplish its goal, while a type of deployment may depend on a particular sequence of calls to replication controllers.
We think a comprehensive platform should include deployment capabilities and a means to build images without requiring external infrastructure. To build images, you need hosting infrastructure; at scale, we'd prefer to use the cluster's resources where possible and schedule builds just like any other task (i.e., as pods). To model this problem, we need a notion of a pod that runs only once. We wanted to get a feel for what this integration might feel like, so we've been working on a prototype that adds the ability to build images in Kubernetes.

We feel there should be something fundamental between a build and a pod, so we've also added a simple job framework and a POC implementation of run-once semantics for pods. A job contains a pod template, a status, a success flag, and a reference to the resulting pod. We expect that there will be different types of jobs in the future - for example, running a process inside an existing container, or running a pod with multiple containers that all have to complete. We also expect to add dependency information, such as predecessors and successors, to jobs.

A new job controller (similar to the replication controller) looks for new jobs in storage and acts on them: it creates a run-once pod from the job's pod template and monitors the pod's status to completion. The job controller will support different job types through delegation in the future.

A build is a user's request to create a new Docker image from one or more inputs (such as a Dockerfile or Docker context). Our POC implements Dockerfile builds; we expect to support multiple build types, such as STI (source-to-images), packer, Dockerfile2, etc. We are especially interested in feedback about how this problem should be modeled to facilitate other build extensions.

A new build controller (similar to the replication controller) looks for new builds in storage and acts on them: it creates a job for the build, executes the job, and monitors its status to completion (success or failure). The build controller can support different build implementations, with the initial prototype defining a container that runs its own Docker daemon (Docker-in-Docker) and then executes the build.

Implementation Notes: We had to prototype/provide a couple of new capabilities to implement this proof of concept.

Link to our prototype: https://github.com/ironcladlou/kubernetes/tree/build-poc

We'll have a screencast demonstrating our prototype shortly! We appreciate all feedback - thanks!
Screencast link: https://www.youtube.com/watch?v=ae2xYeL-RFs Supplementals: Dockerfile: https://gist.github.com/pmorie/b7a0270bab01b86091aa |
On a Venn diagram, a Job and a ReplicationController definitely overlap. To me, there was value in a Job object that could be driven by an external state machine, with the job status used as the state register (with the special states NOTSTARTED, RUNNING, and COMPLETE). I'd be interested in how others would model a consumable state machine on top of pod execution for reuse, or whether you would instead implement independent resources dependent only on pods. This pushed me toward separating Job from Build, but I could equally see it without that shared concept.
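The state-register idea above can be sketched as a tiny state machine. This is an assumed illustration of the concept, not code from the prototype: an external driver reads the job's status, applies the one legal transition, and writes the status back.

```go
package main

import "fmt"

// State is the job status used as a state register, with the three
// special states named in the discussion.
type State string

const (
	NotStarted State = "NOTSTARTED"
	Running    State = "RUNNING"
	Complete   State = "COMPLETE"
)

// transitions encodes the legal job lifecycle; COMPLETE is terminal.
var transitions = map[State]State{
	NotStarted: Running,
	Running:    Complete,
}

// Advance returns the next state, or an error if the current state is
// terminal or unknown.
func Advance(s State) (State, error) {
	next, ok := transitions[s]
	if !ok {
		return s, fmt.Errorf("no transition from %s", s)
	}
	return next, nil
}

func main() {
	s := NotStarted
	for {
		next, err := Advance(s)
		if err != nil {
			break // reached the terminal state
		}
		fmt.Printf("%s -> %s\n", s, next)
		s = next
	}
}
```

Because the register lives on the Job object itself, any external controller (a build, a deployment, something else entirely) could drive the same machine without the pod layer knowing about it.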
I definitely think this is a topic worth discussing, and we could host solutions in the kubernetes repo, either in the main tree or in subrepos or something, even if the APIs don't necessarily land in the main apiserver. We definitely need to do something (or multiple somethings) to make deployment simpler. Some deployment mechanisms have been discussed, such as declarative configuration (#113), pod templates and configuration generation (#170), and rolling updates (several issues). So, we'd love to hear your ideas.
Whatever comes out of this, we will consider it for merging into upstream Docker.
@shykes which part(s) specifically?

We need a deployment solution and should document the recommended approach(es).

Not urgent, but how hard would it be to package most of this functionality as independent plugins (once we have a plugin mechanism)?
Very little. Builds are decoupled except for the ability to watch for upstream images that change, and the same applies to deployments.
Is this now a dupe of #1743? Or do we still want to keep this open for the idea of builds? In that case, it may help to close this and fork it into new issues, given that this issue as-is covers a lot.
I'm closing this now. Jobs and Deployments are underway. If builds appear in Kubernetes, it will be as some kind of extension. We might need image metadata at some point, but that's not really discussed here in any detail. |
In Kubernetes, the reference from a container manifest to an image is a "name" - that name is arbitrary, and it is up to the user to specify how that name interacts with their docker build and docker registry scenarios. That includes ensuring that the name and label the user uses to refer to their image is not changed accidentally (so that new images aren't introduced outside of a controlled deployment process), and that the registry DNS that hosts the images is continuously available as long as that image may be needed (see the docker image discussions for how this might change).
That loose coupling is valuable for flexibility, but the lack of a concrete process leaves room for error and requires thought and control. In addition, the resolution of those names is tightly bound to the execution of the container in the Kubelet.
We think there is value in Kubernetes providing a set of higher level concepts above pods/replication controllers that can be used to create deployable units of containers. Two concepts we see as valuable are "builds" and "deployments" - the former can be used to compose new images (by leveraging the Kubernetes cluster for build slaves with resource control) and the latter can manage the process of transitioning between one set of podTemplates to another (and can be triggered by builds).
First, is this something that should be in Kubernetes? Should it live on top of Kubernetes as a separate server? Or is it something that could be optionally enabled by those who wish to work on it? We have some ideas for how we could make this flow work really cleanly with Docker and images, but we'd want to get feedback on those ideas.