Add docker-in-docker based multi-node development cluster provider #26147
Conversation
@ingvagabund that issue is fixed now.
@@ -0,0 +1,42 @@
#!/usr/bin/env bash

# Copyright 2015 The Kubernetes Authors All rights reserved.
2016
This file is actually copyrighted 2015 ;-)
Thanks for the help @sttts, here are my steps to get it running on Fedora:

Then run … Once you are finished with the cluster, run … If something goes wrong along the way, make sure you call …
cc myself, as I've been working on …
bin="$(cd "$(dirname "${BASH_SOURCE}")" && pwd -P)"

# create the kube-system and static-pods namespaces
"${kubectl}" apply -f "${KUBE_ROOT}/cluster/docker-in-docker/kube-system-ns.yaml"
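As an aside on the `bin=` line above, it uses a common bash idiom for locating the script's own directory. A minimal, self-contained sketch (the echo is illustrative, not part of the PR):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the directory-resolution idiom used above:
# cd into the directory containing this script, then print its physical
# (symlink-free) path, so relative resources can be found no matter
# where the script is invoked from.
bin="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd -P)"
echo "script lives in: ${bin}"
```

Quoting every expansion keeps the idiom safe for paths containing spaces, and `pwd -P` resolves symlinks, which matters when the checkout is reached through one.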
kube-system is created automatically with v1.3.0-alpha.4 and over: #25196
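Since newer apiservers pre-create kube-system, one hedged way the script could guard the creation instead of applying unconditionally (a hypothetical helper, not the actual patch):

```shell
#!/usr/bin/env bash
# Hypothetical guard (not part of this PR): skip creating a namespace
# that the apiserver already provides, since kube-system is pre-created
# on v1.3.0-alpha.4 and later.
ensure_namespace() {
  local ns="$1"
  if kubectl get namespace "${ns}" >/dev/null 2>&1; then
    echo "namespace ${ns} already exists, skipping"
  else
    kubectl create namespace "${ns}"
  fi
}
```

A plain create would still be needed for namespaces the server does not pre-create, such as the static-pods one used here.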
fixed
force-pushed from 99be434 to eadd8ad
@sttts Please see #25631 for how I've integrated …
I browsed through …

With cluster/docker-in-docker I would like to keep life-cycle questions out …

With this reasoning I think the addon-manager is not really the right tool …

On Wed, May 25, 2016 at 2:52 PM, Lucas Käldström notifications@github.com wrote: …
@ingvagabund I have added your instructions to …
@sttts thanks, now I can call that user-friendly :)
https://github.com/kubernetes/kube-deploy is a better repo for this work. We are trying to remove cluster/ from the main repo.
@mikedanese Does this mean that the …
Having said that, I am happy to split the docker-in-docker code from the cluster directory and turn it into a self-contained script if that helps. It doesn't use much infrastructure from there anyway.
Our goal is to split out cluster/ into a separate repo, kube-deploy, and people are actively working towards this, e.g. #26031. It's going to be a process that takes time, but this has very little dependency on anything in cluster, so I don't see a very good reason to add it there temporarily when we can put it in a proper place from the get-go. I'm going to push back on all new automation that is proposed in cluster/ and declare maintenance mode. We are discussing this in #23174
@mikedanese Sounds all reasonable for clusters which are used with named release versions. My fear is that you always have to search for the right revision in the kube-deploy repo which happens to use the right command-line flags to work with the kubernetes/kubernetes repo at hand. I wonder how you solve the same problem with e2e tests on GCE or in the vagrant cluster. Will read through #23174, maybe my concerns are solved already.
@mikedanese having read through #23174 I still feel that this PR is misplaced in the discussion of cluster v2 in #23174. It mostly fits into what @bgrant0607 calls (1) in #23174 (comment): purely for development and without any machine to ssh/ansible into, i.e. (0) hack/local-up-cluster < (0+) docker-in-docker < (1) minikube < (2) good enough cluster ... Having said that, I think there is no proper home for this at the moment if cluster/ is deprecated. I can move this PR into its own repo or into an independent directory of kube-deploy. Either way makes it harder to use and to find for new developers. It would probably end up like

$ cd kubernetes
$ git clone git@github.com:some/repo/docker-in-docker docker-in-docker
$ docker-in-docker/up.sh

That workflow is fine for my use-case.
force-pushed from 948fbc7 to 0420a2c
CC @batikanu, @zreigz, @f-higashi, @taimir
... into a pure Kubernetes cluster: KUBERNETES_PROVIDER=docker-in-docker cluster/kube-up.sh will launch a 2-node cluster running completely in Docker.
force-pushed from 0420a2c to 2aac3f6
GCE e2e build/test passed for commit 952e01e.
@sttts you pushed this change to the main repo rather than to your private fork. Could you please close it, remove the branch and then work in your private repo?
@piosz will do
Followed-up by #27459
@sttts thanks. BTW you can invoke something like this …
Based on cluster/mesos/docker, with all Mesos dependencies removed.

… gives you a 2-node k8s cluster running completely within Docker containers, using docker-in-docker for each kubelet. Moreover, kube-dns and the dashboard are deployed.
This was originally used heavily by the Mesos Kubernetes team. Spinning up a cluster is a matter of a minute.
It's perfectly suited for e2e test troubleshooting due to multi-node support. You can even run more nodes by setting NUM_NODES.
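The NUM_NODES override mentioned above presumably follows the usual env-var-with-default pattern of the cluster scripts; a minimal sketch (the echo is illustrative, not taken from the PR):

```shell
#!/usr/bin/env bash
# Sketch of the NUM_NODES override: fall back to the 2-node default
# unless the caller exported a different value before kube-up.sh runs.
NUM_NODES="${NUM_NODES:-2}"
echo "starting ${NUM_NODES} docker-in-docker node container(s)"
```

So something like `NUM_NODES=4 KUBERNETES_PROVIDER=docker-in-docker cluster/kube-up.sh` would bring up four kubelet containers instead of two.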