Possible Bug: Error syncing deployment, replica set already exists #26673
Comments
After doing some more testing, I am able to reproduce this.
@kubernetes/deployment
What are the back-to-back deployments? Is this just two deployments run back-to-back?
This error message indicates that the deployment was updated between reading the deployment and writing it. If you are using
Marking as P2 since this may already be addressed.
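For readers following along, the read-then-write conflict described above can be reproduced by hand with plain kubectl. This is only a hypothetical illustration (the deployment name is a placeholder), not a command sequence taken from this thread:

```sh
# Read the live object, including its resourceVersion, into a file.
kubectl get deployment my-deployment -o yaml > my-deployment.yaml

# ... edit my-deployment.yaml locally while some other client updates the
# live Deployment (e.g. the controller or another operator writes to it) ...

# The write is rejected with a 409 Conflict because the resourceVersion in
# the file no longer matches the current one stored by the API server.
kubectl replace -f my-deployment.yaml
```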
@pwittrock thanks for your response!
Yes, back-to-back.
That sounds very promising!
Closing this issue since the PR to address it has been merged. Re-open if this continues to be an issue on kubectl version 1.3+.
We've not seen a repeat of this since upgrading to 1.3. It's been about 10 days so far...
I believe I am getting this same error on 1.3 running on AWS. I was running some test deployments using the deployment below. The {{ version }} gets set to the current timestamp. I had changed maxSurge from 1 to 2 and redeployed using pykube, which I think does a patch. The new ReplicaSet was never created, and the log fills with many of the "already exists" entries. I just redeployed again and it worked itself out.
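The manifest itself did not survive in this copy of the thread. Purely as an illustration of the settings described above (maxSurge under a RollingUpdate strategy and a templated {{ version }}), a roughly comparable Deployment might look like the sketch below; every name and image here is a placeholder, not the reporter's actual configuration:

```yaml
# Hypothetical sketch only; the reporter's actual manifest was not preserved.
# Names and the image are placeholders, and where the {{ version }} template
# variable is substituted is an assumption.
apiVersion: extensions/v1beta1   # Deployment API group used in the 1.3 era
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 2          # the value changed from 1 to 2 before the redeploy
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example-app:{{ version }}  # {{ version }} set to a timestamp
```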
From the kube-controller-manager.log file:
I hit the same problem as well after the call to
I just ran into this as well, on 1.3.7. I noticed controller-manager pegs at 100% CPU and spams the already-mentioned message. I'm not sure yet what triggered it.
We're using Deployment resources. Last night, one of our k8s clusters started spewing a lot of logs, to the tune of 70K messages per hour (~1 GB/hr). The same message was repeated over and over again. The errors look like this:
This is a long-lived deployment. It seems that after several hours the problem resolved itself.
Our deployment process runs (roughly) the following commands synchronously:
(1) kubectl apply -f supernova-deployment.yml
(2) kubectl annotate deployment supernova 'kubernetes.io/change-cause=something'
Maybe that has something to do with it?
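As an aside (not something discussed in this thread), the kubernetes.io/change-cause annotation set in step (2) is what kubectl surfaces in the CHANGE-CAUSE column of the rollout history, which can help correlate a burst of errors with a particular deploy:

```sh
# Show the revision history of the Deployment; the CHANGE-CAUSE column is
# populated from the kubernetes.io/change-cause annotation set in step (2).
kubectl rollout history deployment supernova
```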
Kubernetes Version:
On CoreOS 899.17.0
Any tips on how to triage this? Not sure where to begin.
kubectl get replicasets does not reveal the problematic ReplicaSet.
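No triage steps were recorded in this thread, but as a rough sketch of where one might start (assuming the supernova deployment above; the controller-manager log path will vary by installation):

```sh
# List the ReplicaSets in the namespace along with their labels, to see
# whether a ReplicaSet with the colliding name already exists.
kubectl get replicasets --show-labels

# Dump the Deployment's status, conditions, and recent events.
kubectl describe deployment supernova

# Search the controller-manager log for the "already exists" messages to
# find the name of the ReplicaSet being fought over.
grep 'already exists' kube-controller-manager.log | tail -n 20
```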