
Switch to token bucket rate limiter with better performance #10545

Merged (2 commits into kubernetes:master on Jul 1, 2015)

Conversation

@liggitt (Member) commented Jun 30, 2015

The current token bucket implementation uses a ticker, which constantly consumes CPU, even at rest. This PR switches to an implementation that consumes no CPU at rest. The behavior is identical (bucket is initially filled to burst capacity and requests are smoothed).

Fixes #10445
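
For context, here is a minimal sketch (in the spirit of this PR, not the actual library code) of a ticker-free token bucket: instead of a background goroutine adding tokens on every tick, available tokens are recomputed lazily from the elapsed time whenever a caller asks for one, so the limiter does no work at rest. The names and QPS/burst values are illustrative only.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// tokenBucket is an illustrative lazy token bucket. It starts full
// (burst capacity) and refills based on elapsed wall-clock time, with
// no ticker and no background goroutine.
type tokenBucket struct {
	mu       sync.Mutex
	capacity float64   // burst size; bucket starts full
	tokens   float64   // tokens currently available
	rate     float64   // tokens added per second (QPS)
	last     time.Time // last time tokens were recomputed
}

func newTokenBucket(qps float64, burst int) *tokenBucket {
	return &tokenBucket{
		capacity: float64(burst),
		tokens:   float64(burst),
		rate:     qps,
		last:     time.Now(),
	}
}

// TryAccept takes a token if one is available, without blocking.
// Tokens are credited for the time elapsed since the last call, so
// nothing runs between calls.
func (b *tokenBucket) TryAccept() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens >= 1 {
		b.tokens--
		return true
	}
	return false
}

func main() {
	rl := newTokenBucket(10, 5) // 10 QPS, burst of 5 (hypothetical values)
	for i := 0; i < 8; i++ {
		// The first 5 calls succeed from the initial burst; later calls
		// depend on how much time has elapsed.
		fmt.Println(i, rl.TryAccept())
	}
}
```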

@k8s-bot commented Jun 30, 2015

GCE e2e build/test passed for commit e1fc9d8d757fd4e71c18cb8a90de45cdb3309de3.

@zmerlynn (Member)

LGTM

@zmerlynn added the lgtm label Jun 30, 2015
@k8s-bot commented Jun 30, 2015

GCE e2e build/test passed for commit f265d5c.

@zmerlynn (Member)

cc @davidopp @lavalamp for second LGTM, etcetc

@lavalamp (Member)

Hm-- are people setting rate limits high enough that this is a problem in practice?

@lavalamp (Member)

And in which binaries is it a problem?

@liggitt (Member, Author) commented Jun 30, 2015

Rates higher than 10/second have noticeable CPU usage (several components set a QPS of 20 by default), and creating multiple clients compounds the problem, because each spawned ticker lives forever: clients don't surface a way to call Stop().
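
For illustration, a rough sketch (not the actual Kubernetes code) of the ticker-driven pattern being described: the refill goroutine wakes up QPS times per second per client whether or not any requests arrive, and because nothing exposes a Stop(), the ticker and goroutine live for the life of the process.

```go
package main

import (
	"fmt"
	"time"
)

// tickerLimiter illustrates the ticker-based approach being replaced.
type tickerLimiter struct {
	tokens chan struct{}
	ticker *time.Ticker
}

func newTickerLimiter(qps, burst int) *tickerLimiter {
	l := &tickerLimiter{
		tokens: make(chan struct{}, burst),
		ticker: time.NewTicker(time.Second / time.Duration(qps)),
	}
	for i := 0; i < burst; i++ {
		l.tokens <- struct{}{} // start with a full bucket
	}
	go func() {
		// At 20 QPS this loop wakes 20 times per second per client,
		// even when no one calls Accept, and it never exits because
		// the limiter exposes no Stop().
		for range l.ticker.C {
			select {
			case l.tokens <- struct{}{}:
			default: // bucket already full; drop the token
			}
		}
	}()
	return l
}

// Accept blocks until a token is available.
func (l *tickerLimiter) Accept() {
	<-l.tokens
}

func main() {
	rl := newTickerLimiter(20, 5) // hypothetical QPS/burst values
	rl.Accept()
	fmt.Println("accepted one request; the refill goroutine keeps ticking forever")
}
```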

@lavalamp (Member)

OK. LGTM

@lavalamp (Member)

@davidopp is sick; @quinton-hoole for ok-to-merge

@lavalamp assigned ghost Jun 30, 2015
@ghost commented Jun 30, 2015

Taking a look...

@ghost commented Jun 30, 2015

Are we OK with all of our components taking on a whole new dependency on juju just before v1.0, just for a more efficient rate-limiter library? How complex would it be to improve our current rate-limiter implementation instead?

cc @thockin @bgrant0607 for their opinion

@liggitt (Member, Author) commented Jun 30, 2015

I tried tweaking the existing implementation first, but the ticker at the heart of it was the root cause of the CPU usage at rest... I could see deferring this post-1.0.

@liggitt (Member, Author) commented Jun 30, 2015

(though the CPU impact can become significant with higher QPS... we were seeing roughly 20% usage in a Vagrant env at rest with no data loaded)

@ghost commented Jun 30, 2015

That's just horrible, and almost definitely needs to be fixed pre-v1.0. The only question in my mind is whether we implement our own efficient rate limiter (which doesn't seem tremendously difficult), or take on the big juju dependency instead. I'm leaning towards the former.

@lavalamp (Member)

I was on the fence about this, too; the dependency seems like overkill. But it is expedient, assuming it works.

@zmerlynn (Member)

I didn't enjoy the dep, either, FWIW, but it's tiny.

@ghost commented Jun 30, 2015

OK, let's merge this PR and open a separate issue to remove the dependency.

@ghost added the ok-to-merge label Jun 30, 2015
@thockin (Member) commented Jul 1, 2015

LGTM

zmerlynn added a commit that referenced this pull request Jul 1, 2015
Switch to token bucket rate limiter with better performance
@zmerlynn merged commit 03be2f3 into kubernetes:master Jul 1, 2015
@liggitt deleted the rate_limit branch July 10, 2015
@liggitt unassigned ghost Aug 12, 2015