Scale kube-proxy conntrack limits by cores (new default behavior) #28876
Conversation
Looks good. I think the default for conntrack-max is not 0, so users will have to explicitly set it to zero to get per-core scaling? I think the behaviour when both flags are set is a little surprising. What I would have guessed is that the max was always an upper bound if non-zero, but if per-core is non-zero this becomes the default value. Something like:
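(The snippet that originally followed "Something like:" isn't preserved in this excerpt; below is a hypothetical Go sketch of the semantics being suggested, with made-up names.)

```go
// Hypothetical sketch of the suggested semantics: per-core scaling supplies
// the value, and a non-zero conntrack-max always acts as an upper bound.
func effectiveConntrackMax(max, perCore, cores int32) int32 {
	if perCore != 0 {
		scaled := perCore * cores
		if max != 0 && scaled > max {
			return max // max stays an upper bound
		}
		return scaled
	}
	return max
}
```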
Or is the intention that we will change the default for conntrack-max to zero and deprecate it entirely? (If more bike-shedding is mandatory, why scale based on cores vs on total RAM?)
On Tue, Jul 12, 2016 at 10:57 PM, Justin Santa Barbara wrote:
The default was set in code, which this PR changes.
I wanted the behavior for anyone who sets the older flag to be preserved: an explicit conntrack-max still takes precedence.
I considered a per-core and per-GB scalar, but decided that we scale well enough by cores alone.
Ah sorry, I totally missed the nuance of the default change. Was definitely not suggesting scaling on both CPU & Memory - cores are great, it was really just curiosity :-)
Force-pushed from 6d0a499 to 6771346.
fs.Int32Var(&s.ConntrackMax, "conntrack-max", s.ConntrackMax,
	"Maximum number of NAT connections to track (0 to leave as-is).")
fs.Int32Var(&s.ConntrackMaxPerCore, "conntrack-max-per-core", s.ConntrackMaxPerCore,
	"Maximum number of NAT connections to track per CPU core (0 to leave as-is). This is only considered if conntrack-max is 0.")
Would it be useful to add the default value here? (32 * 1024)
The default value gets set in code elsewhere. I'm not really happy with the state of it all, but I didn't want to buck the trend for this PR. When you run kube-proxy -? it shows the 32k default.
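(For context, a sketch of how that in-code default might look; the actual struct and constructor names in kube-proxy may differ.)

```go
// Sketch only: the per-core default lives in the options constructor rather
// than in the flag help text, which is why kube-proxy -? reports 32k.
func newProxyConntrackConfig() *proxyConntrackConfig {
	return &proxyConntrackConfig{
		ConntrackMax:        0,         // 0 defers to per-core scaling
		ConntrackMaxPerCore: 32 * 1024, // the 32k default reported by kube-proxy -?
	}
}
```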
Ack. thanks!
Force-pushed from 6771346 to e5e98aa.
fs.Int32Var(&s.ConntrackMax, "conntrack-max", s.ConntrackMax, "Maximum number of NAT connections to track (0 to leave as-is)")
fs.Int32Var(&s.ConntrackMax, "conntrack-max", s.ConntrackMax,
	"Maximum number of NAT connections to track (0 to leave as-is).")
fs.Int32Var(&s.ConntrackMaxPerCore, "conntrack-max-per-core", s.ConntrackMaxPerCore,
@thockin Is there an easy way to let user know if they set both that is incorrect?
Fair enough. Done.
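(One plausible shape for that check, sketched here in Go; the exact message and placement in the PR may differ.)

```go
// Sketch: reject the ambiguous case up front, since conntrack-max-per-core
// is ignored whenever conntrack-max is non-zero.
if s.ConntrackMax != 0 && s.ConntrackMaxPerCore != 0 {
	return fmt.Errorf("--conntrack-max and --conntrack-max-per-core are mutually exclusive; set at most one of them")
}
```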
Force-pushed from e5e98aa to 266d82f.
Force-pushed from 266d82f to 833f05c.
Force-pushed from 833f05c to 85a0106.
For large machines we want more conntrack entries than smaller machines.
Force-pushed from 85a0106 to 1f37281.
GCE e2e build/test passed for commit 85a0106d6086e9d7f333bd103d8b48d0a36a3d8d.
@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]
GCE e2e build/test passed for commit 1f37281.
Automatic merge from submit-queue
…k-of-#28876-upstream-release-1.3
Automatic merge from submit-queue
Automated cherry pick of kubernetes#28876 upstream release 1.3
Cherry pick of kubernetes#28876 onto release-1.3: Scale kube-proxy conntrack limits by cores.
For large machines we want more conntrack entries than smaller machines.
For large machines we want more conntrack entries than smaller machines.
Commence bike-shedding on names and values and semantics. I'd love to get this in as a patch to 1.3.x since we have had some trouble for users on large machines.
Fixes #28867
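To make the intended effect concrete, a minimal sketch of the end result (scale by detected cores, then apply the value through the standard nf_conntrack_max knob) might look like this; the exact mechanism kube-proxy uses may differ.

```go
package main

import (
	"fmt"
	"os"
	"runtime"
)

func main() {
	perCore := int32(32 * 1024)
	limit := perCore * int32(runtime.NumCPU()) // larger machines get more entries

	// Equivalent to: sysctl -w net.netfilter.nf_conntrack_max=<limit>
	err := os.WriteFile("/proc/sys/net/netfilter/nf_conntrack_max",
		[]byte(fmt.Sprintf("%d", limit)), 0644)
	if err != nil {
		fmt.Fprintf(os.Stderr, "failed to set nf_conntrack_max: %v\n", err)
	}
}
```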