Cloud SQL Proxy side-car container issuing periodic SIGTERM requests #1789
The Proxy won't ever send SIGTERM signals. In v2 we've fixed the exit code handling so that the Proxy exits with a non-zero code if there are still active connections. It sounds like Kubernetes is shutting down your pod (for reasons unknown to me and beyond what the Proxy can see), and the non-zero exit is making things worse. Here are a few things to try:
Thanks @enocom for the quick response, appreciate it very much. We run Cloud SQL Proxy as a sidecar container, so my question is: where do I set the `--max-sigterm-delay` flag?
@dheerajd3v You just set it as one of your container args:

```yaml
name: cloud-sql-proxy
image: "gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.2.0"
args:
  - "--private-ip"
  - "--port=5432"
  - "--max-sigterm-delay=30"
```

The above will wait 30 seconds for connections to close after receiving a TERM signal.
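For the delay to actually help, the kubelet must not SIGKILL the pod before it elapses, so the pod's termination grace period should be at least as long as the flag value. A minimal sketch of such a sidecar spec (the `my-app` container name and image are hypothetical placeholders, not from this thread):

```yaml
# Sketch only: "my-app" and its image are placeholders for your workload.
spec:
  # Should be >= --max-sigterm-delay so the kubelet does not SIGKILL
  # the proxy while it is still waiting for connections to close.
  terminationGracePeriodSeconds: 30
  containers:
    - name: my-app
      image: my-app:latest
    - name: cloud-sql-proxy
      image: "gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.2.0"
      args:
        - "--private-ip"
        - "--port=5432"
        - "--max-sigterm-delay=30"
```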
Thanks @jackwotherspoon, I will make the changes and test. On a different note, I also see that there is a Cloud SQL Proxy Operator in Preview; any thoughts on migrating to use this operator instead of the sidecar pattern? This is the link I am referring to: https://cloud.google.com/sql/docs/postgres/connect-proxy-operator. I am not sure why the official Google docs recommend using a
I am not sure how to do this. This is from the GCP official docs.
Hi @dheerajd3v, the Cloud SQL Auth Proxy Operator will make it easier to set up proxy sidecar containers for your apps. We expect the operator to reach GA in the next 2 months. Give it a try and let us know if it works for you. We recommend running the proxy as a sidecar container for two reasons:
Init containers serve a different purpose than sidecar containers: init containers must exit before the pod's main containers start. See Init Containers.

Unfortunately, there is no pure Kubernetes way to avoid the race condition when a pod's containers start up, because Kubernetes does not let you specify a startup order for a pod's containers. We recommend that you write your app to be resilient to failed database connections. Your app should retry a failed database connection attempt for a reasonable period of time (maybe 30 seconds) before exiting with a failure.
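The retry advice above can be sketched as a small helper. This is a minimal illustration, not from the proxy's codebase; the `connect` callable and the 30-second budget are assumptions:

```python
import time


def connect_with_retry(connect, timeout_s=30.0, base_delay_s=0.5):
    """Retry a failing connection attempt with exponential backoff.

    Keeps calling `connect` (a hypothetical zero-arg callable that raises
    on failure) until it succeeds or `timeout_s` elapses, then re-raises
    the last error so the process exits with a failure.
    """
    deadline = time.monotonic() + timeout_s
    delay = base_delay_s
    while True:
        try:
            return connect()
        except Exception:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                raise  # budget exhausted: let the app fail
            time.sleep(min(delay, remaining))
            delay = min(delay * 2, 5.0)  # cap backoff at 5 seconds
```

This way a pod whose proxy sidecar is not yet listening simply retries for a bounded window instead of crashing immediately on startup.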
Going to go ahead and close this, as I believe the initial question has been answered 😄 If there are any follow-up questions, feel free to re-open this issue or create a new one. Have a great day, and thanks for using the Cloud SQL Proxy.
Bug Description
We moved from Cloud SQL Proxy image v1 to v2 a couple of weeks ago, and our k8s deployment was running fine until yesterday, when we started seeing SIGTERM signals telling the Cloud SQL Proxy to shut down.
Earlier today, one of our k8s services went down for 10 minutes. After some investigation, I believe the cause was a SIGTERM signal sent to the cloudsql-proxy container, causing it to shut down in a "not so graceful way".
Here is the sequence of events that I believe occurred: because cloudsql-proxy receives a SIGTERM signal while it still has 8 active connections to the database, it exits with exit code 2 rather than exit code 0 (references to exit codes on SIGTERM: Feature Request: Perform a graceful shutdown upon SIGTERM #128 (comment), #128 (comment)).
The main issue is that our microservice shouldn't lose its connection to cloudsql-proxy. The proxy shouldn't receive a SIGTERM, because it causes this odd scenario where one container is ready (the microservice) and the other is not (the database proxy).
Note: We are using a k8s Deployment with the sidecar pattern.
Example code (or command)
Stacktrace
No response
Steps to reproduce?
...
Environment
- Cloud SQL Proxy version (`./cloud-sql-proxy --version`): v2.0.0
- Proxy invocation command (e.g., `./cloud-sql-proxy --enable_iam_login --dir /path/to/dir INSTANCE_CONNECTION_NAME`):

```yaml
image: "gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.0.0"
args:
  - "--private-ip"
  - "--port=5432"
  - ""
  - "--credentials-file="
```
Additional Details
No response