Add a resource specifying number of Pods that are allowed to run on Kubelet. #5547
Conversation
The integration test failure is easy to fix, so please comment just on the general direction.
Assigning @davidopp for more detailed review on direction, but it looks good to me.
See my comment in issue #5507.
As #4910 is marked for v1.0, I updated this PR. PTAL.
@@ -99,6 +99,7 @@ type KubeletServer struct {
 	CertDirectory             string
 	NodeStatusUpdateFrequency time.Duration
 	ResourceContainer         string
+	AvailablePods             int
"Available" implies the remaining count. Call this "MaxPods"?
+1
Done.
Rebased. PTAL.
@@ -211,6 +212,7 @@ func (s *KubeletServer) AddFlags(fs *pflag.FlagSet) {
 	fs.StringVar(&s.ResourceContainer, "resource-container", s.ResourceContainer, "Absolute name of the resource-only container to create and run the Kubelet in (Default: /kubelet).")
 	fs.StringVar(&s.CgroupRoot, "cgroup_root", s.CgroupRoot, "Optional root cgroup to use for pods. This is handled by the container runtime on a best effort basis. Default: '', which means use the container runtime default.")
 	fs.StringVar(&s.ContainerRuntime, "container_runtime", s.ContainerRuntime, "The container runtime to use. Possible values: 'docker', 'rkt'. Default: 'docker'.")
+	fs.IntVar(&s.MaxPods, "max_pods", 32, "Number of Pods that can run on this Kubelet.")
I wonder if we should set this higher for now (like 200 or something) since we're doing experiments where we run lots of pods per node, and it would be annoying to have to change the flag default.
Wojtek and Filip believe that 30 should be enough. The only place a higher number appears is in the Density test, where we allow 50 and 100 Pods per Node, but those are disabled by default. I changed it to 100, but I'm not really convinced that's a good call.
Other than that, LGTM.
Well, up to you. I'm just worried about surprising people who may be running load tests manually before 1.0 (e.g. whatever stuff the RedHat folks might cook up on their own). Maybe we should set the flag to 100 for now, and then change it back to 30 once we're done with the intensive performance work (e.g. between the close of code for 1.0 and the release)? Anyway, if you do change it back to 30, please make sure the Density test overrides it.
I'm playing with Salt to create a separate-ish config for the test environment, but as I have absolutely no idea about Salt, it may take some time.
I would suggest just using a limit of 100 and merging it now, then doing 30 plus a custom environment for tests in a subsequent PR.
Done. |
e2e is green.
@gmarek, can you try to re-push to poke Shippable? If we wait for Travis it'll be a bunch of hours :(
Will need a rebase, btw.
Rebased, as I do each morning. It's getting to be a habit of mine :)
Tests are green and this is LGTM'd; merging.
Add a resource specifying number of Pods that are allowed to run on Kubelet.
cc/ @bgrant0607
@@ -226,6 +227,7 @@ func (s *KubeletServer) AddFlags(fs *pflag.FlagSet) {
 	fs.StringVar(&s.ContainerRuntime, "container_runtime", s.ContainerRuntime, "The container runtime to use. Possible values: 'docker', 'rkt'. Default: 'docker'.")
 	fs.StringVar(&s.DockerDaemonContainer, "docker-daemon-container", s.DockerDaemonContainer, "Optional resource-only container in which to place the Docker Daemon. Empty for no container (Default: /docker-daemon).")
 	fs.BoolVar(&s.ConfigureCBR0, "configure-cbr0", s.ConfigureCBR0, "If true, kubelet will configure cbr0 based on Node.Spec.PodCIDR.")
+	fs.IntVar(&s.MaxPods, "max_pods", 100, "Number of Pods that can run on this Kubelet.")
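For readers following the hunks above, here is a minimal, self-contained sketch of the same pflag pattern: an int field on a config struct bound to a named command-line flag with a default. This is not the kubelet code; the `Config` type and the `kubelet-sketch` name are illustrative, and only the flag name and default mirror the PR.

```go
// Minimal sketch (not the actual kubelet code) of binding a config-struct
// field to a command-line flag with github.com/spf13/pflag.
package main

import (
	"fmt"
	"os"

	"github.com/spf13/pflag"
)

// Config stands in for KubeletServer; only the field relevant here is shown.
type Config struct {
	MaxPods int // maximum number of pods this kubelet will run
}

// AddFlags registers the flag against a FlagSet, as KubeletServer.AddFlags does.
func (c *Config) AddFlags(fs *pflag.FlagSet) {
	fs.IntVar(&c.MaxPods, "max_pods", 100, "Number of Pods that can run on this Kubelet.")
}

func main() {
	cfg := &Config{}
	fs := pflag.NewFlagSet("kubelet-sketch", pflag.ExitOnError)
	cfg.AddFlags(fs)
	if err := fs.Parse(os.Args[1:]); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("max pods:", cfg.MaxPods)
}
```

Running the sketch with `--max_pods=30` overrides the default of 100, which is exactly how operators would dial the limit back down after the performance work discussed above.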
#4910 was marked post-1.0 (not referenced by this PR, btw) and #5507 was closed. However, since this was merged, the issues cited above must be fixed, or this must be reverted. @dchen1107 is working on an alternative approach that uses the existing LimitRange mechanism to set a minimum default cpu for pods in the default namespace, which will reduce node oversubscription in the absence of resource-aware scheduling and/or auto-sizing.
Related to #5507
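For context on the LimitRange alternative mentioned above, here is a rough sketch of a LimitRange that gives containers in the default namespace a default CPU request. It is written against today's k8s.io/api packages, not the code base this PR targeted; the object name, the 100m value, and the choice of a default request (rather than a hard minimum) are assumptions for illustration only.

```go
// Rough sketch (assumption: modern k8s.io/api types, not the 2015 code in
// this PR) of a LimitRange that applies a default CPU request to containers
// in the "default" namespace, so the scheduler stops treating them as free.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	lr := &corev1.LimitRange{
		ObjectMeta: metav1.ObjectMeta{Name: "cpu-defaults", Namespace: "default"},
		Spec: corev1.LimitRangeSpec{
			Limits: []corev1.LimitRangeItem{{
				Type: corev1.LimitTypeContainer,
				// Applied to containers that do not specify their own CPU request.
				DefaultRequest: corev1.ResourceList{
					corev1.ResourceCPU: resource.MustParse("100m"),
				},
			}},
		},
	}
	fmt.Printf("default CPU request: %+v\n", lr.Spec.Limits[0].DefaultRequest)
}
```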