linkerd-proxy 2.11 crashes when the application issues requests resulting in Connection refused #7103
I don't think that the linkerd proxy is crashing, exactly. Rather, it looks like the proxy is failing its readiness probes and the kubelet is forcibly killing it.
When it's in this state I'm also unable to contact the metrics endpoint on the admin server. We'll need to dig in a little more to understand why the admin server is being starved, or whether it's a lower-level problem like the OS being unable to create new connections.
I'm able to reproduce this with a pretty low-load configuration:
At lower rates, this problem is not observed. With all logging enabled, we see requests flow for about 2.5s. At the 5s mark we see the identity client issue a DNS query, and then the proxy becomes completely unresponsive: no further logs are emitted and the admin server stops accepting connections. The admin server runs in a dedicated thread, separate from the proxy thread pool. We could somehow be deadlocking on the policy check; this should be easy to confirm. This behavior can be reproduced with logging set to
We'll continue to debug this. Thanks for the helpful report @kwencel!
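For reference, here is a minimal sketch of the thread layout mentioned above: the admin server gets its own OS thread and runtime, separate from the proxy's worker pool. This is illustrative only (it is not the linkerd2-proxy source, and it assumes the tokio crate with the "full" feature):

```rust
// Illustrative sketch only; not the actual linkerd2-proxy source.
// Assumes the `tokio` crate with the "full" feature enabled.
use std::thread;

fn main() {
    // Dedicated admin thread with its own single-threaded runtime. Even if
    // the proxy runtime deadlocks, this thread can in principle keep serving
    // readiness probes and metrics, unless the whole process is starved of CPU.
    let admin = thread::spawn(|| {
        let rt = tokio::runtime::Builder::new_current_thread()
            .enable_all()
            .build()
            .expect("build admin runtime");
        rt.block_on(async {
            // ... bind the admin listener and serve /ready and /metrics here ...
        });
    });

    // Main proxy thread pool, separate from the admin thread.
    let rt = tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .expect("build proxy runtime");
    rt.block_on(async {
        // ... run the data plane here ...
    });

    admin.join().expect("admin thread panicked");
}
```

With a layout like this, a deadlock confined to the proxy pool would not by itself take the admin endpoints down, which is why CPU starvation of the whole process turned out to be the more consistent explanation (see below).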
No problem, I hope you'll be able to find and fix it quickly :) I also thought it might be related to probes, but in my testing the proxy died pretty quickly, in less than 10s IIRC, which is why I assumed it had just terminated itself. It seems the probes are configured to restart a non-responding proxy quite fast, and that's what confused me. Thinking about it, though, aggressive probes make sense for a component as crucial as the proxy, which should be available all the time. Please keep us posted on the progress!
I'm pretty sure I've found and fixed the issue. I'm still sorting out how to write a test to catch it, but we should have a fix ready for this week's edge release (and, subsequently, the 2.10.1 patch release). I was able to get an lldb session running while the proxy was in this state, and we see:
It turns out that this function can (incorrectly) loop infinitely, consuming the process's CPU allocation and, presumably, starving the admin thread so that it cannot serve connections.
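To make the failure mode concrete, here is a minimal sketch of searching an error's source() chain for an HTTP/2 error. It is not the proxy's actual `downgrade_h2_error` code; the function name is made up and the `h2` crate is assumed. The point is the single step whose absence turns the search into a busy loop:

```rust
// Illustrative sketch only; not the proxy's actual `downgrade_h2_error` code.
// Assumes the `h2` crate. `find_h2_error` is a made-up name.
use std::error::Error as StdError;

fn find_h2_error(err: &(dyn StdError + 'static)) -> Option<&h2::Error> {
    let mut current: Option<&(dyn StdError + 'static)> = Some(err);
    while let Some(e) = current {
        if let Some(h2) = e.downcast_ref::<h2::Error>() {
            return Some(h2);
        }
        // Advancing to the next source is what terminates the loop. If the
        // cursor were ever reassigned to the same error instead, this would
        // spin forever, pinning a core and starving the rest of the process.
        current = e.source();
    }
    None
}
```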
Yes, that would make sense, as we observed 100% CPU usage for a few seconds just before the proxy was restarted. I assume you meant the 2.11.1 patch release? The bug is not present in the 2.10 family. When do you think we can expect the 2.11.1 release with this bug fixed? Thank you very much for the prompt response to this issue!
@kwencel oops, correct: 2.11.1. I'm hoping for this near the end of the month. Though, you should be able to run edge-channel proxies with the 2.11.0 control plane without issue in the meantime. |
578d979 introduced a bug: when the proxy handles errors on HTTP/2-upgraded connections, it can get stuck in an infinite loop when searching the error chain for an HTTP/2 error. This change fixes the infinite loop and adds a test that exercises this behavior to ensure that `downgrade_h2_error` behaves as expected. Fixes linkerd/linkerd2#7103
@kwencel Thanks so much for providing a manifest to reproduce this issue :) That made it much easier to track down. If you want to test the fix for yourself, you can set the pod annotations:

config.linkerd.io/proxy-image: ghcr.io/olix0r/l2-proxy
config.linkerd.io/proxy-version: main.ec441e581

Let us know if you see anything else unexpected!
Bug Report
What is the issue?
linkerd-proxy 2.11 crashes when the application it encapsulates issues requests resulting in Connection refused.
It also crashes on the latest edge-21.10.1.
It does not crash on stable-2.10, so I strongly suspect this is a regression introduced after that version.
How can it be reproduced?
We have prepared a YAML manifest which spawns a little Locust-based stress test (not really a stress test, as it takes very few requests per second to crash the linkerd-proxy). It tries to access an existing Kubernetes service (your own `linkerd-identity` in this example, but it can be anything) using a port which the service does not expose. As a result, the request ends with Connection refused and linkerd-proxy dies. It crashes on our production infrastructure (an on-prem Rancher-provisioned cluster), but for simplicity we have also prepared a minikube-based scenario:
1. Spawn a minikube cluster
2. Install Linkerd 2.11
3. Run our YAML
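As an aside on the trigger itself: connecting to a port with no listener fails immediately with Connection refused (ECONNREFUSED), which is exactly the error the load test makes the meshed application hit at a steady rate. A tiny standalone illustration follows; the address and port are hypothetical and assumed to be closed:

```rust
// Standalone illustration of the trigger; unrelated to Locust or the proxy itself.
use std::io::ErrorKind;
use std::net::TcpStream;

fn main() {
    // 127.0.0.1:9 is assumed to have no listener, like the unexposed service port.
    match TcpStream::connect("127.0.0.1:9") {
        Err(e) if e.kind() == ErrorKind::ConnectionRefused => {
            println!("connection refused, as expected: {e}")
        }
        Ok(_) => println!("unexpectedly connected"),
        Err(e) => println!("failed with a different error: {e}"),
    }
}
```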
Logs, error output, etc
And shortly after, linkerd-proxy dies.
`linkerd check` output
Environment
Additional context
At first sight, it might look like a not-so-important bug, but we have encountered it in our production usage. We had a haproxy-based load balancer outside of the Kubernetes cluster which managed connections to several backends (also outside of the cluster).
We observed that when a request from the k8s cluster to the load balancer was made during the restart of one of those backends, haproxy would often return an error, since it hadn't had enough time to notice the backend was down. To our surprise, that error triggered the caller's linkerd-proxy to crash. We have reverted to 2.10 and it is working fine now.