Running the same test listed in #5650 shows that cleanup now takes orders of magnitude longer than before: the same test used to clean the cluster in ~1 minute, and now takes well over 5 minutes.
kubectl get pods shows all clear, but running docker ps via pdsh across the cluster shows the pods' containers still running for an extended period.
Previously we leaked pods in some cases by deleting the rc instead of calling stop; the delete was quick. #5745 should give us the ability to watch for 0 replicas in stop, instead of polling.
The docker ps issue might be unrelated, since the rc will claim victory once the pods are dead (there's typically some delay, but it won't wait on the containers themselves).
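For illustration, here is a minimal sketch of the watch-based approach using today's client-go API; this is not necessarily the mechanism #5745 implements, and the namespace, rc name, and kubeconfig path are assumptions. Per the caveat above, the rc reporting 0 replicas does not guarantee the containers themselves have exited:

```go
// Sketch: wait for an rc to reach 0 replicas by watching, not polling.
package main

import (
	"context"
	"fmt"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes a kubeconfig at the default location (~/.kube/config).
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Watch the rc instead of polling: the server pushes each status update,
	// so we can stop the moment the observed replica count hits zero.
	w, err := client.CoreV1().ReplicationControllers("default").Watch(
		context.TODO(),
		metav1.ListOptions{
			FieldSelector: "metadata.name=my-rc", // hypothetical rc name
		},
	)
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()

	for event := range w.ResultChan() {
		rc, ok := event.Object.(*corev1.ReplicationController)
		if !ok {
			continue
		}
		if rc.Status.Replicas == 0 {
			// Note: this only means the rc sees no replicas; the kubelet may
			// still be tearing down containers on the nodes.
			fmt.Println("all replicas gone; safe to delete the rc")
			return
		}
	}
}
```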