Add a resource specifying number of Pods that are allowed to run on Kubelet. #5547

Merged: 1 commit into kubernetes:master on May 18, 2015

Conversation

gmarek
Contributor

gmarek commented Mar 17, 2015

Related to #5507

@gmarek
Contributor Author

gmarek commented Mar 17, 2015

Integration tests failure is easy to fix, so please comment just the general direction.

@vmarmol
Contributor

vmarmol commented Mar 17, 2015

Assigning @davidopp for more detailed review on direction, but it looks good to me.

@vmarmol vmarmol assigned vmarmol and davidopp and unassigned vmarmol Mar 17, 2015
@davidopp
Member

See my comment in issue #5507.

@gmarek gmarek force-pushed the client3 branch 2 times, most recently from c10d314 to 19520e7 Compare March 18, 2015 13:02
@gmarek gmarek force-pushed the client3 branch 2 times, most recently from c8a756e to 5d1e95a Compare April 28, 2015 13:36
@gmarek gmarek changed the title from "WIP: add IPs as Resource for #5507" to "Add a resource specifying number of Pods that are allowed to run on Kubelet." Apr 28, 2015
@gmarek
Contributor Author

gmarek commented Apr 28, 2015

As #4910 is marked as v1.0, I updated this PR. PTAL

@@ -99,6 +99,7 @@ type KubeletServer struct {
 	CertDirectory string
 	NodeStatusUpdateFrequency time.Duration
 	ResourceContainer string
+	AvailablePods int

Member

Available implies the "remaining". Call this "MaxPods"?

Member

+1

Contributor Author

Done.
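
For orientation, a minimal sketch of how a max-pods setting can be advertised as a "pods" entry in a node's capacity, which is the kind of resource this PR's title describes. It is only an illustration: the import paths are the present-day ones, and podsCapacity is a hypothetical helper, not code from this PR.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// podsCapacity is a hypothetical helper showing how a max-pods setting can be
// exposed as the "pods" entry of a node's capacity ResourceList.
func podsCapacity(maxPods int) v1.ResourceList {
	return v1.ResourceList{
		v1.ResourcePods: *resource.NewQuantity(int64(maxPods), resource.DecimalSI),
	}
}

func main() {
	capacity := podsCapacity(100)
	pods := capacity[v1.ResourcePods]
	fmt.Println("pods capacity:", pods.Value()) // prints: pods capacity: 100
}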

@gmarek
Contributor Author

gmarek commented May 11, 2015

Rebased. PTAL.

@@ -211,6 +212,7 @@ func (s *KubeletServer) AddFlags(fs *pflag.FlagSet) {
 	fs.StringVar(&s.ResourceContainer, "resource-container", s.ResourceContainer, "Absolute name of the resource-only container to create and run the Kubelet in (Default: /kubelet).")
 	fs.StringVar(&s.CgroupRoot, "cgroup_root", s.CgroupRoot, "Optional root cgroup to use for pods. This is handled by the container runtime on a best effort basis. Default: '', which means use the container runtime default.")
 	fs.StringVar(&s.ContainerRuntime, "container_runtime", s.ContainerRuntime, "The container runtime to use. Possible values: 'docker', 'rkt'. Default: 'docker'.")
+	fs.IntVar(&s.MaxPods, "max_pods", 32, "Number of Pods that can run on this Kubelet.")

Member

I wonder if we should set this higher for now (like 200 or something) since we're doing experiments where we run lots of pods per node, and it would be annoying to have to change the flag default.

Contributor Author

Wojtek and Filip believe that 30 should be enough. The only place where a higher number appears is the Density test, where we allow 50 and 100 Pods per Node, but those variants are disabled by default. I changed it to 100, but I'm not really convinced that's a good call.
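
To illustrate why the default matters for density experiments, a rough sketch of how a "pods" capacity can gate placement; fitsPodCount is a hypothetical helper, not the scheduler's actual predicate code, and the package paths are the present-day ones.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

// fitsPodCount is a hypothetical helper: a node that already runs as many
// pods as its "pods" capacity advertises cannot accept another one.
func fitsPodCount(capacity v1.ResourceList, podsOnNode int) bool {
	allowed := capacity[v1.ResourcePods]
	return int64(podsOnNode) < allowed.Value()
}

func main() {
	capacity := v1.ResourceList{
		v1.ResourcePods: *resource.NewQuantity(100, resource.DecimalSI),
	}
	fmt.Println(fitsPodCount(capacity, 99))  // true: one slot left
	fmt.Println(fitsPodCount(capacity, 100)) // false: node is full
}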

@davidopp
Member

Other than that, LGTM.

@davidopp davidopp added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 14, 2015
@davidopp
Member

Well, up to you. I'm just worried about surprising people who may be running load tests manually before 1.0 (e.g. whatever stuff the RedHat folks might cook up on their own). Maybe we should set the flag to 100 for now, and then change it back to 30 once we're done with the intensive performance work (e.g. between the close of code for 1.0 and the release)?

Anyway, if you do change it back to 30, please make sure the Density test overrides it.

@gmarek
Contributor Author

gmarek commented May 14, 2015

I'm playing with Salt to create a separate-ish config for the test environment, but as I have absolutely no idea about Salt, it may take some time.

@davidopp
Member

I would suggest just using a limit of 100 and merging it now, then doing 30 plus a custom environment for tests in a subsequent PR.

@gmarek
Contributor Author

gmarek commented May 14, 2015

Done.

@gmarek
Contributor Author

gmarek commented May 14, 2015

e2e is green

@vmarmol
Contributor

vmarmol commented May 14, 2015

@gmarek can you try to re-push to poke Shippable? If we wait for Travis it'll be a bunch of hours :(

@vmarmol
Contributor

vmarmol commented May 14, 2015

Will need a rebase, btw.

@gmarek
Contributor Author

gmarek commented May 15, 2015

Rebased, as I do each morning. It's getting to be a habit of mine :)

@vmarmol
Contributor

vmarmol commented May 18, 2015

Tests are green and this is LGTM'd. Merging.

vmarmol added a commit that referenced this pull request May 18, 2015
Add a resource specifying number of Pods that are allowed to run on Kubelet.
@vmarmol vmarmol merged commit 45874d5 into kubernetes:master May 18, 2015
@dchen1107
Member

cc/ @bgrant0607

@@ -226,6 +227,7 @@ func (s *KubeletServer) AddFlags(fs *pflag.FlagSet) {
 	fs.StringVar(&s.ContainerRuntime, "container_runtime", s.ContainerRuntime, "The container runtime to use. Possible values: 'docker', 'rkt'. Default: 'docker'.")
 	fs.StringVar(&s.DockerDaemonContainer, "docker-daemon-container", s.DockerDaemonContainer, "Optional resource-only container in which to place the Docker Daemon. Empty for no container (Default: /docker-daemon).")
 	fs.BoolVar(&s.ConfigureCBR0, "configure-cbr0", s.ConfigureCBR0, "If true, kubelet will configure cbr0 based on Node.Spec.PodCIDR.")
+	fs.IntVar(&s.MaxPods, "max_pods", 100, "Number of Pods that can run on this Kubelet.")

Member

@gmarek @davidopp
Look at all other flags. What do you notice?

@bgrant0607
Member

#4910 was marked post-1.0 (not referenced by this PR, btw) and #5507 was closed. However, since this was merged, the issues cited above must be fixed, or this must be reverted.

@dchen1107 is working on an alternative approach that uses the existing LimitRange mechanism to set a minimum default cpu for pods in the default namespace, which will reduce node oversubscription in the absence of resource-aware scheduling and/or auto-sizing.
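
For context on that alternative, a hedged sketch of what such a LimitRange could look like, written against present-day Go types; the object name, namespace, and CPU values below are only examples, not what was actually shipped.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// defaultCPULimitRange sketches a LimitRange for the "default" namespace that
// gives every container a non-zero default CPU request, so nodes are not
// silently oversubscribed when pod specs omit resources. Values are examples.
var defaultCPULimitRange = corev1.LimitRange{
	ObjectMeta: metav1.ObjectMeta{Name: "default-cpu", Namespace: "default"},
	Spec: corev1.LimitRangeSpec{
		Limits: []corev1.LimitRangeItem{
			{
				Type:           corev1.LimitTypeContainer,
				DefaultRequest: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("100m")},
				Min:            corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("10m")},
			},
		},
	},
}

func main() {
	cpu := defaultCPULimitRange.Spec.Limits[0].DefaultRequest[corev1.ResourceCPU]
	fmt.Printf("containers in %q default to a %s CPU request\n",
		defaultCPULimitRange.Namespace, cpu.String())
}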
