Default should be a Debian image with memory cgroup enabled #368
What is necessary to activate this? Is it a kernel parameter? Do we need to build in a different module? Thanks
Just update the GRUB config and reboot.
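For reference, a minimal sketch of that change on a stock Debian image; the exact GRUB_CMDLINE_LINUX contents vary per image, and swapaccount=1 is optional:

    # Prepend the cgroup flags to the kernel command line in /etc/default/grub
    sudo sed -i 's/^GRUB_CMDLINE_LINUX="/GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1 /' /etc/default/grub
    sudo update-grub    # regenerate /boot/grub/grub.cfg
    sudo reboot         # the kernel only reads these flags at boot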
Ok, we need to do this in the image build stage, so that it is built into the container VM image. --brendan
@dchen1107, are you still looking to build an image?
It's already there for the container VM image, either in the currently live version or the upcoming one. --brendan
I think this boils down to changing the default image from the Debian backports image to the container-VM image.
SGTM, we were headed there anyway as part of the desire to run all of the Kubernetes binaries inside containers.
So this would be a default for our container-optimized docs/examples?
Just the default for the Kubernetes scripts for now :) The examples are next, and I think we stop there.
SGTM.
Tried the container-vm image by setting it in config-default.sh. The VMs booted, but I was unable to bring up a replicationController.
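For anyone following along, the edit being described in cluster/gce/config-default.sh looks roughly like this; the variable and image names below are illustrative, not the exact values from this thread:

    # Hypothetical override in cluster/gce/config-default.sh:
    IMAGE=container-vm-v20140522      # container-VM image (memory cgroup already enabled)
    IMAGE_PROJECT=google-containers   # instead of the debian-cloud backports image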
Hm. We released that with Docker 1.0.1. I hope there's not already an incompatibility.
In general, Docker expects the CLI and daemon to be the same version, which is the error here. Somehow those ended up different in the Kubernetes setup.
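A quick way to spot that skew on a minion; this is a sketch, and the exact output format differs across Docker releases:

    $ docker version
    Client version: 1.1.0    # CLI installed by the Kubernetes salt scripts
    ...
    Server version: 1.0.1    # daemon that came baked into the image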
There shouldn't be any Docker version skew within the image as shipped.
Hrm, that seems highly unlikely. The Kubernetes setup uses a single Docker install (unless cAdvisor ships with a docker client?). --brendan
I think the image brings up the Docker daemon (v1.0.1) before Kubernetes installs the new Docker package (v1.1.0), so the CLI invocations come from the new client against the old daemon. Kubernetes should not need to install Docker if it is already installed.
Ah, I missed that you had tried switching to container-vm. Can you disable the Docker install on the minion? Go into cluster/saltbase/salt/top.sls and comment out 'docker' for the minions. --brendan
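The shape of that edit, as a sketch only; the real top.sls targets and state names may differ:

    # cluster/saltbase/salt/top.sls (illustrative structure)
    base:
      '*':
        - base
      'roles:kubernetes-pool':
        - match: grain
        # - docker      # commented out so Salt won't install Docker over the image's own
        - kubelet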
You will need to ./cluster/kube-down.sh and re-create your cluster. --brendan
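That is:

    ./cluster/kube-down.sh   # tear down the existing cluster
    ./cluster/kube-up.sh     # bring it back up with the modified salt config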
Yup, that seems to be the issue there.
OK, I started a controller, which gets registered with k8s; only 1 of 4 pods shows as started in the output. I also noticed something at the end of the dev-build-and-up.sh output.
I will build a new container-vm image with Docker 1.1. Should that mitigate the issue here?
@dchen1107 I think at this point it is just working through the issues of being on a different image. They should be similar enough to get it to work. @verdverm not sure about the last error, but is there anything in the Kubelet log to say why the other 3 haven't come up? They may take some time depending on the size of the Docker image.
Only until the next build comes out. Really, the fix is to have Kubernetes's setup skip installing Docker when the image already ships it.
The fact that they aren't in the pod list means that the master never got the requests to create them. My guess is that your cluster only has a single machine, and your pod is unable to schedule there. --brendan
I'm spinning up 4 minions; the same startup script works with the Debian backports image. I then run my usual startup script, which can talk to the k8s master.
@verdverm is the Kubelet complaining of something? It may be that the environment is not set up correctly for them (we did want to move that to Docker containers eventually).
How would I find out if the Kubelet is complaining? On the master or a minion?
Can you try "kubecfg.sh list minions"? Brendan
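For example (the output shape here is illustrative; the point is that an empty list means the master never registered the minions):

    $ cluster/kubecfg.sh list minions
    Minion identifier
    ----------
    kubernetes-minion-1
    kubernetes-minion-2
    ...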
No complaints; still no Docker containers running on any of the minions.
What do the minion logs say? /var/log/kubelet.log, I think.
Also, look at /var/log/controller-manager.log on the kubernetes-master. Thanks!
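Putting the two suggestions together:

    # on the master:
    tail -n 50 /var/log/controller-manager.log /var/log/apiserver.log
    # on each minion:
    tail -n 50 /var/log/kubelet.log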
MASTER:
MINION-3:
Is there more on the master after "Too few replicas, creating 4"? Thanks
FULL MASTER LOG:
Is the controller-manager still running (ps -ef | grep controller-manager)? I wonder if the request to create the pod is hanging?
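E.g.:

    ps -ef | grep [c]ontroller-manager   # the [c] keeps grep from matching itself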
OK, my guess is the API request to create a pod is hanging (clearly we need better error reporting there). What's in /var/log/apiserver.log on the master? Brendan
/var/log/apiserver.log is full of errors.
I noticed cAdvisor is not running on any of the minions.
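One way to check that on a minion, assuming cAdvisor runs as a container whose name or image contains "cadvisor" (an assumption here):

    sudo docker ps | grep cadvisor   # no output means the Kubelet never started it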
Here's the head of the apiserver.log:
cAdvisor is a Docker container, so the Kubelet may be having issues starting it.
It sounds like this issue got a little unfocused, and it also sounds like the original request is going to happen (at least for GCE) when we grab the next version of the image we use. If my understanding is incorrect, please reopen and add detail on what would satisfy this request.
Any reason not to do this? Otherwise we get no memory stats. This only deals with the image we use by default; the user can always use their own image, and we'll gracefully degrade as we do today.
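For completeness, a quick check of whether the memory cgroup is actually enabled on a node; paths assume the cgroup v1 layout of that era:

    grep memory /proc/cgroups    # last column ("enabled") should be 1
    ls /sys/fs/cgroup/memory 2>/dev/null || echo "memory cgroup not mounted"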