
cmd: build minimal docker images for all cmds #1444

Closed
wants to merge 1 commit into from

Conversation

proppy
Contributor

@proppy proppy commented Sep 25, 2014

Build the images using context tar injection (see moby/moby#5715) on top of FROM scratch.

This could be cleaner (and work with the hub!) once moby/moby#8021 is merged.

$ ./hack/build-docker-images.sh
...
kubernetes/proxy                latest                c8ecf48b90e2        4 days ago          6.073 MB
kubernetes/kubelet              latest                e0aa1bbef6f0        4 days ago          6.946 MB
kubernetes/kubecfg              latest                310aef0bed1c        4 days ago          6.398 MB
kubernetes/integration          latest                d7e55ac70f5d        4 days ago          7.614 MB
kubernetes/controller-manager   latest                0bc6f96a1151        4 days ago          6.094 MB
kubernetes/apiserver            latest                cd8907e1464f        4 days ago          8.945 MB

Note: because scratch doesn't have hostname, you have to pass --hostname_override to the kubelet when starting.
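
For readers unfamiliar with the technique, the core of the context tar injection idea looks roughly like this (a minimal sketch, not the exact contents of hack/build-docker-images.sh; the file and image names are illustrative):

# Write a tiny Dockerfile next to the statically linked binary...
cat > Dockerfile <<'EOF'
FROM scratch
ADD kubelet /kubelet
ENTRYPOINT ["/kubelet"]
EOF

# ...then feed both to docker build as a tar stream on stdin, so the binary
# itself is the whole build context and no base image layers are pulled in.
tar -cf - Dockerfile kubelet | docker build -t kubernetes/kubelet -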

/cc @gbin

@thockin thockin assigned thockin and jbeda and unassigned thockin Sep 25, 2014
# from hack/build-docker-images.sh: resolve the repo root, build the builder
# image, then iterate over each cmd to build its per-binary image
CONTEXT=$(readlink -f $(dirname ${BASH_SOURCE[0]})/..)

docker build -t kube-builder ${CONTEXT}
for dir in ${CONTEXT}/cmd/*; do
Contributor

Build targets are already defined in hack/config-go.sh (returned by function kube::default_build_targets). As it is now, you won't compile integration but you will have the scheduler.
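
Something along these lines would reuse the canonical target list instead of globbing cmd/* (a sketch only; it assumes hack/config-go.sh can be sourced on its own and that kube::default_build_targets prints target paths such as cmd/kubelet):

# Sketch: drive the per-cmd image builds from the same target list the rest of the build uses
source "${CONTEXT}/hack/config-go.sh"
for target in $(kube::default_build_targets); do
  cmd=$(basename "${target}")   # e.g. cmd/kubelet -> kubelet
  echo "building image for ${cmd}"
  # ... existing per-cmd build steps go here ...
done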

@jbeda
Contributor

jbeda commented Sep 25, 2014

@proppy We should talk about this stuff if you want to go deeper.

This doesn't fix #19. If we just want docker images with our binaries you can do BUILD_RUN_IMAGES=y ./build/release.sh. We need a workflow that includes not only building docker containers but also installing them and using them as part of our default cluster deploy. We also need this workflow to be reasonably fast and reliable for everyday development. As things stand, uploading/downloading to the docker hub on every test deploy is too slow. We need to find more efficient ways to move the docker images around when doing local/test deploys.
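
For the local/test deploy case, one way to move an image to a node without a registry round trip is to pipe docker save into docker load over ssh (a sketch; the image name and host are placeholders):

# Sketch: ship a freshly built image straight to a test node, no docker hub involved
docker save kubernetes/apiserver | ssh user@test-node docker load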

Other things we need to consider:

  • I should be able to deploy to vagrant without uploading my binaries to the docker hub.
  • I agree that we should look at switching to the pure-go statically linked binaries. However, I don't want to have that only happen when we deploy to a cluster. All of our testing and integration tests should also be built with that so that we don't get any surprises.
  • When we do a release we want to make sure we can tar up the binaries in a way that isn't embedded in docker containers.
  • When we have official images on the docker hub we are going to want to make sure that we can test/version those that match up with the rest of the tools. Things like the kubelet or kubecfg won't run under docker and we want to release this stuff as a set. For this reason, I doubt that we'll ever have docker doing the build/releasing for us right out of git.

@josselin-c As part of moving over to build/* for doing stuff I'd like to eliminate most of what is currently in hack. The "official" way to build kubernetes would then be in a docker container. As part of that the common definitions in hack/config-go.sh would then be moved/integrated with build/*.

@proppy
Contributor Author

proppy commented Sep 25, 2014

This doesn't fix #19.

Updated the description.

If we just want docker images with our binaries you can do BUILD_RUN_IMAGES=y ./build/release.sh

Cool! I didn't know (or remember) that existed.

After taking a look at BUILD_RUN_IMAGES, there are a few things this PR does differently (sketched below):
a/ use the golang base image to build the binaries in a builder image
b/ the builder image can output a valid context for each binary when run with -e cmd=<binary>
c/ each run image is minimal (FROM scratch).
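
Concretely, (b) and (c) together amount to something like the following (a sketch under the assumption that the builder image's entrypoint compiles the requested cmd and writes a tar'ed build context to stdout; the invocation is illustrative, not copied from the PR):

# Build the builder once, then emit one minimal image per cmd:
docker build -t kube-builder .
# The builder streams a context (FROM scratch Dockerfile + static binary) for the
# requested cmd, which docker build consumes directly from stdin.
docker run -e cmd=kubelet kube-builder | docker build -t kubernetes/kubelet -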

I'd be happy to update release/; just let me know which ones you care about.

When we do a release we want to make sure we can tar up the binaries in a way that isn't embedded in docker containers.

Can you expand on this? I'm not sure I'm parsing it correctly.

Things like the kubelet or kubecfg won't run under docker

I'm interested in running the kubelet under docker to give docker users a way to try out a pod.yaml manifest without spawning a whole cluster.
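
Roughly what I have in mind (a hypothetical invocation, not something this PR implements; it assumes the kubelet's --config flag accepts a directory of pod manifests and that mounting the docker socket is enough for it to manage containers):

# Sketch: run the kubelet image against the host's docker daemon and a local manifest dir
docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $(pwd)/manifests:/manifests \
  kubernetes/kubelet \
  --hostname_override=127.0.0.1 --config=/manifests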

@jbeda
Contributor

jbeda commented Sep 25, 2014

Look a little closer at what is going on. We are building in docker images there. It creates a kube-build image (including the setup for cross compilation) and then copies the binaries back out locally if you are running boot2docker. We do this as not everything will end up in a docker image (things like kubelet and kubecfg).

I agree that building from scratch would be nice, but we need to build/test static images everywhere. That is an orthogonal change.

As for tarring up binaries -- we want to have a binary release of built executables (including both client and server). We are distributing/building more than just docker images.

As for kubelet under docker -- perhaps it makes sense but I'm not sure it is worthwhile yet. It is really our bootstrap that makes sure that we can manage other containers in a kube friendly way. There is no reason that users can't run it outside of docker locally without a whole cluster.

@proppy
Contributor Author

proppy commented Sep 25, 2014

Look a little closer at what is going on. We are building in docker images there. It creates a kube-build image (including the setup for cross compilation) and then copies the binaries back out locally if you are running boot2docker. We do this as not everything will end up in a docker image (things like kubelet and kubecfg).

I did look at this and didn't mean to imply the current approach was bad. I think using the golang base image (a) and/or the builder pattern (b) could still be a useful contribution.

As for kubelet under docker -- perhaps it makes sense but I'm not sure it is worthwhile yet. It is really our bootstrap that makes sure that we can manage other containers in a kube friendly way. There is no reason that users can't run it outside of docker locally without a whole cluster.

Yes, but we could have both. I'm not advocating for this to be the default way to run the kubelet, just a convenient way for people to run the kubelet in docker.

@jbeda
Contributor

jbeda commented Sep 25, 2014

I'm going to close this for now without merging as we talked about it on IRC and I want to avoid Yet Another way of building.
