Adding rate limit to pod webhook #3340
Comments
Hm... I think a rate limit makes sense, but we should also look into allowing people to set a label selector for the mutating webhook configuration for pods. That way the operator's webhook would only look at the pods you care about.
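For illustration, a minimal sketch of what such a selector could look like on the pod webhook entry, using the `k8s.io/api/admissionregistration/v1` types; the webhook name and label key/value here are placeholders I made up, not the operator's actual contract:

```go
package webhookconfig

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// labelScopedPodWebhook sketches the relevant fragment of a
// MutatingWebhookConfiguration entry: an objectSelector that makes the
// API server send only opted-in pods to the operator's pod webhook.
func labelScopedPodWebhook() admissionregistrationv1.MutatingWebhook {
	failurePolicy := admissionregistrationv1.Ignore
	return admissionregistrationv1.MutatingWebhook{
		Name:          "mpod.example.io", // placeholder webhook name
		FailurePolicy: &failurePolicy,
		// Pods without this (hypothetical) label never reach the webhook,
		// so a burst of unrelated pod creations puts no load on the operator.
		ObjectSelector: &metav1.LabelSelector{
			MatchLabels: map[string]string{
				"instrumentation.opentelemetry.io/inject": "enabled",
			},
		},
		// ClientConfig, rules (CREATE pods), sideEffects, and
		// admissionReviewVersions would stay as in the existing manifest;
		// omitted here for brevity.
	}
}
```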
The otel operator pod webhook perf test

Result summary: application-level rate limiting has little impact on the resource consumption of the otel operator, probably because under this test setup the pod instrumentation logic is simple enough that most of the resources are spent at the web-server level (e.g. handling TCP connections). Creating multiple replicas can distribute the load across individual operator instances. Compared with the istiod operator, which has a higher per-request resource cost, unsetting the resource limit and adding autoscaling should be preferred to resolve the perf issue we had.

Setup

Result

Appendix
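As a rough illustration of the "add autoscaling" conclusion, a sketch of an HPA targeting the operator Deployment, built with the `k8s.io/api/autoscaling/v2` types; the names, replica counts, and CPU target are assumptions, not values from the charts:

```go
package autoscale

import (
	autoscalingv2 "k8s.io/api/autoscaling/v2"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// operatorHPA sketches a HorizontalPodAutoscaler that scales the operator
// Deployment on CPU utilization instead of relying on a fixed replica count.
func operatorHPA() *autoscalingv2.HorizontalPodAutoscaler {
	minReplicas := int32(1)
	targetCPU := int32(75) // illustrative utilization target
	return &autoscalingv2.HorizontalPodAutoscaler{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "opentelemetry-operator",
			Namespace: "opentelemetry-operator-system",
		},
		Spec: autoscalingv2.HorizontalPodAutoscalerSpec{
			ScaleTargetRef: autoscalingv2.CrossVersionObjectReference{
				APIVersion: "apps/v1",
				Kind:       "Deployment",
				Name:       "opentelemetry-operator",
			},
			MinReplicas: &minReplicas,
			MaxReplicas: 5,
			Metrics: []autoscalingv2.MetricSpec{{
				Type: autoscalingv2.ResourceMetricSourceType,
				Resource: &autoscalingv2.ResourceMetricSource{
					Name: corev1.ResourceCPU,
					Target: autoscalingv2.MetricTarget{
						Type:               autoscalingv2.UtilizationMetricType,
						AverageUtilization: &targetCPU,
					},
				},
			}},
		},
	}
}
```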
Component(s)
collector, auto-instrumentation
Is your feature request related to a problem? Please describe.
When a significant number of pods are created at the same time, the resulting load can bring down the OpenTelemetry operator and, subsequently, the Kubernetes API server, taking down the entire cluster.
Describe the solution you'd like
Add a rate limit to the pod webhook. Users could enable or disable it and configure the maximum request rate when it is on; see the sketch below.
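A minimal sketch of how such an opt-in limit might wrap the webhook's HTTP handler, assuming a token-bucket limiter from `golang.org/x/time/rate`; the function name, flag semantics, and defaults are hypothetical, not part of the operator's existing API:

```go
package main

import (
	"net/http"

	"golang.org/x/time/rate"
)

// rateLimit wraps the webhook's HTTP handler with a token-bucket limiter.
// Requests beyond the configured rate are rejected with 429 so a burst of
// pod creations does not overwhelm the operator; the API server then acts
// according to the webhook's failurePolicy.
func rateLimit(next http.Handler, rps float64, burst int, enabled bool) http.Handler {
	if !enabled {
		return next // feature disabled: behave exactly as today
	}
	limiter := rate.NewLimiter(rate.Limit(rps), burst)
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if !limiter.Allow() {
			// Shedding load here protects both the operator and the
			// API server sitting behind the admission call.
			http.Error(w, "pod webhook rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}
```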
Describe alternatives you've considered
No response
Additional context
This is a follow-up: users need auto-instrumentation but also need to protect the cluster.
open-telemetry/opentelemetry-helm-charts#1115