[Kubernetes 1.14.0] Watch on aggregated API resource hangs until first event #75837
Description
What happened:
Prior to 1.14, when you created a watch on a resource served by an aggregated API, you instantly obtained the watch.Interface object and could stop it whenever you wanted.
Since 1.14, the same call blocks: the request.Watch() method only returns once the server sends its first event. Resources served directly by the central API server (core resources, CRDs, etc.) are not affected by this issue.
When you use higher-level APIs like informers, this can make your code hang: attempts to stop the informer appear to be ignored, because the informer's cancellation logic only takes effect once the underlying watch.Interface has been retrieved.
This is hitting https://github.com/docker/cli, and a workaround has been found there: docker/cli#1784. However, this change of behavior means that current versions of the CLI can hang when deploying Kubernetes workloads.
What you expected to happen:
As with resources hosted by the central API server, the watch.Interface object should be returned immediately.
How to reproduce it (as minimally and precisely as possible):
I'll try to set up an automated test case for this with compose-on-kubernetes.
In the meantime:
- deploy a kube 1.14 cluster with kind / minikube / ...
- deploy compose-on-kubernetes on it (see https://github.com/docker/compose-on-kubernetes/blob/master/docs/install-on-minikube.md)
- download the latest official Docker CLI release and try to deploy a simple Compose file to Kubernetes using:
docker stack deploy -c compose-file.yml my-stack --orchestrator=kubernetes
Anything else we need to know?:
After digging a little with a debugging HTTP proxy, I saw that the big difference seems to be that:
- prior to 1.14, when such a watch is issued, the response headers are immediately sent to the client
- with 1.14, on central-API resources, this is unchanged
- with 1.14, on aggregated API resources, the response headers are only sent along with the first event
Environment:
- Kubernetes version (use `kubectl version`):
  Client Version: version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:04:45Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"windows/amd64"}
  Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.0", GitCommit:"641856db18352033a0d96dbc99153fa3b27298e5", GitTreeState:"clean", BuildDate:"2019-03-25T23:47:43Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
- Cloud provider or hardware configuration: using Kind; also happens with internal Docker Desktop builds
- OS (e.g. `cat /etc/os-release`): both Kind and LinuxKit
- Kernel (e.g. `uname -a`):
- Install tools:
- Others: