Added pods-per-core to kubelet. #25762 #25813
Conversation
Force-pushed from d9fac00 to 5c9ed79
/cc @kubernetes/sig-node
@rrati needs a test.
@@ -258,4 +259,5 @@ func (s *KubeletServer) AddFlags(fs *pflag.FlagSet) {
fs.StringVar(&s.EvictionSoftGracePeriod, "eviction-soft-grace-period", s.EvictionSoftGracePeriod, "A set of eviction grace periods (e.g. memory.available=1m30s) that correspond to how long a soft eviction threshold must hold before triggering a pod eviction.")
fs.DurationVar(&s.EvictionPressureTransitionPeriod.Duration, "eviction-pressure-transition-period", s.EvictionPressureTransitionPeriod.Duration, "Duration for which the kubelet has to wait before transitioning out of an eviction pressure condition.")
fs.Int32Var(&s.EvictionMaxPodGracePeriod, "eviction-max-pod-grace-period", s.EvictionMaxPodGracePeriod, "Maximum allowed grace period (in seconds) to use when terminating pods in response to a soft eviction threshold being met. If negative, defer to pod specified value.")
fs.Int32Var(&s.PodsPerCore, "pods-per-core", s.PodsPerCore, "Number of Pods per core that can run on this Kubelet. A value of 0 disables this limit. Cannot exceed max-pods")
What happens if it exceeds max-pods?
Also, is it this value that cannot exceed max-pods, or this value * number of cores?
max-pods is the total # of pods allowed, so if pods-per-core * cores > max-pods, then the kubelet will report max-pods. Should I make the statement more clear?
yes please
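To make the interplay concrete, here is a minimal sketch of the capping rule described in the comment above. It is illustrative only, not the kubelet's actual implementation; `effectiveMaxPods` and its parameter names are hypothetical.

```go
package main

import "fmt"

// effectiveMaxPods is a hypothetical illustration of the rule described
// above: the per-core limit scales with the number of cores, but the
// result is never allowed to exceed max-pods.
func effectiveMaxPods(maxPods, podsPerCore, numCores int32) int32 {
	if podsPerCore <= 0 {
		// A value of 0 disables the per-core limit; only max-pods applies.
		return maxPods
	}
	limit := podsPerCore * numCores
	if limit > maxPods {
		// pods-per-core * cores > max-pods, so the kubelet reports max-pods.
		return maxPods
	}
	return limit
}

func main() {
	// 2-core machine, pods-per-core=100, max-pods=110: capped at 110.
	fmt.Println(effectiveMaxPods(110, 100, 2)) // 110
	// 2-core machine, pods-per-core=20, max-pods=110: 40.
	fmt.Println(effectiveMaxPods(110, 20, 2)) // 40
}
```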
lgtm - just one minor comment
tests with 2 core system:
kubelet --pods-per-core=20
kubelet --pods-per-core=100
kubelet --pods-per-core=10 --max-pods=10
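For reference, assuming the kubelet's default max-pods of 110 applies where not overridden (the thread does not state it for the first two runs), the expected effective limits on that 2-core system are:
--pods-per-core=20 → 20 × 2 = 40
--pods-per-core=100 → 100 × 2 = 200, capped at max-pods (110)
--pods-per-core=10 --max-pods=10 → 10 × 2 = 20, capped at max-pods (10)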
Force-pushed from 5c9ed79 to 5f584a9
Logged issue #25875 to address automated testing
lgtm
Force-pushed from 5f584a9 to da62b8f
@k8s-bot Please unit test this issue #IGNORE
@k8s-bot unit test this issue #IGNORE
Force-pushed from da62b8f to e0bf300
@yujuhong no, it will be 0 by default; it was just requested by RedHat. I would like to take advantage of it in the next release, hopefully.
@k8s-bot test this issue #IGNORE
Force-pushed from e3c7978 to 9b6e26c
For docs: "pods-per-core" is a simple scaling factor that allows us to get past the previous limit of 110 on larger systems. OpenShift could lay down a config like so (via installer): --pods-per-core=10. Compared with the OpenShift 3.2 setting of max-pods=110, the effective behavior of the new pods-per-core algorithm is that for 1-10 cores, max-pods is reduced. Examples:
8 cores * pods-per-core(10) = 80
16 cores * pods-per-core(10) = 160
32 cores * pods-per-core(10) = 320
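Note that, per the earlier discussion, these products are still subject to the max-pods ceiling: with max-pods left at 110, the 16- and 32-core figures above would be capped at 110 (with pods-per-core=10, the cap dominates above 11 cores, since 110 / 10 = 11). Reaching the larger counts requires raising max-pods as well, as the merge description's pods-per-core=10 / max-pods=250 combination does.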
Buildcop: Seems worth a release-note, so swapped labels. LMK if that's wrong.
Force-pushed from 9b6e26c to 8164a31
Force-pushed from 8164a31 to 2d487f7
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit 2d487f7.
Automatic merge from submit-queue
…core to 10 and max-pods to 250
Should the code throw an error if running_pods > max_pods?
That's a valid question. I think after a restart, the kubelet will admit each pod again, and some of the pods will get rejected due to the MaxPods restriction. The containers belonging to those pods will be stopped.
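A minimal sketch of the re-admission behavior described above; this is illustrative only and does not mirror the kubelet's real admission code (the `Pod` type, `admitPods` function, and `maxPods` parameter are hypothetical).

```go
package main

import "fmt"

// Pod is a stand-in for a pod known to the kubelet after restart.
type Pod struct{ Name string }

// admitPods illustrates the behavior described above: each existing pod
// is re-admitted in turn, and pods beyond the max-pods limit are rejected
// (their containers would then be stopped).
func admitPods(pods []Pod, maxPods int) (admitted, rejected []Pod) {
	for _, p := range pods {
		if len(admitted) < maxPods {
			admitted = append(admitted, p)
		} else {
			rejected = append(rejected, p)
		}
	}
	return admitted, rejected
}

func main() {
	pods := []Pod{{"a"}, {"b"}, {"c"}}
	admitted, rejected := admitPods(pods, 2)
	fmt.Println(len(admitted), len(rejected)) // 2 1
}
```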
mark.