kubectl logs -f failed with 'unexpected stream type "" ' #47800

Closed
0xmichalis opened this issue Jun 20, 2017 · 19 comments · Fixed by #50381
Labels
kind/bug Categorizes issue or PR as related to a bug. sig/cli Categorizes an issue or PR as relevant to SIG CLI. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@0xmichalis
Contributor

I was running a kubectl logs -f, not for long, and it broke unexpectedly with:

{"level":"info","msg":"Sync time: 2.276207ms","time":"2017-06-20T17:30:02Z"}
{"level":"info","msg":"Sync time: 5.663623ms","time":"2017-06-20T17:30:32Z"}
unexpected stream type ""

Doesn't seem normal.

$ kc version
Client Version: version.Info{Major:"1", Minor:"8+", GitVersion:"v1.8.0-alpha.0.533+1168552fade5fd", GitCommit:"1168552fade5fd138e4af664ed7b31ec40c6a20f", GitTreeState:"clean", BuildDate:"2017-06-10T16:56:39Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"6", GitVersion:"v1.6.4", GitCommit:"d6f433224538d4f9ca2f7ae19b252e6fcb66a3ae", GitTreeState:"clean", BuildDate:"2017-05-19T18:33:17Z", GoVersion:"go1.7.5", Compiler:"gc", Platform:"linux/amd64"}

@kubernetes/sig-node-bugs
@kubernetes/sig-cli-bugs

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. kind/bug Categorizes issue or PR as related to a bug. sig/cli Categorizes an issue or PR as relevant to SIG CLI. labels Jun 20, 2017
@SleepyBrett

I'm seeing this on 1.6.2

@yujuhong
Contributor

@Kargakis @SleepyBrett do you use docker? If so, what log driver do you use?
If you use the default json-file log driver and have access to the node that the specific container ran on, it'd be great if you could provide the content of the raw log file (located in /var/log/containers/).

@0xmichalis
Contributor Author

I saw this in a GKE cluster so I don't have access to the node....

@yujuhong
Contributor

I saw this in a GKE cluster so I don't have access to the node....

Whose GKE cluster? If you are the owner of the GKE cluster, you do have access to the node VMs. You can gcloud compute ssh into them.

@0xmichalis
Contributor Author

So I have access after all :) Docker 1.12 is used and the log driver is json-file. Unfortunately, I have scaled the deployment down since I had the issue, so now I cannot find the container log under /var/log/containers. Noted for next time, though.

@SleepyBrett

Docker 1.12.6 using json-file

I'll grab the log off the host next time I see it. I assume about 20 lines above and below where kubectl logs fails would be sufficient?

@yujuhong
Contributor

I'll grab the log off the host next time I see it. I assume about 20 lines above and below where kubectl logs fails would be sufficient?

Yes, that'd be great.

@mdaniel
Contributor

mdaniel commented Jun 23, 2017

In case it helps someone else, here's how I grabbed the logs in question, since jq is installed on the host:

ssh $host_ip env dock_id=$dock_id bash -x <<'FOO'
docker inspect $dock_id | \
    jq -r '.[0] as $i |
"sudo cp -iv "+$i.Config.Labels["io.kubernetes.container.logpath"]+" "+$i.LogPath+" /tmp"' | \
    bash -x
FOO

The client-side experience

The output from kubectl --context=ops2 --namespace=app logs -f --since=5m -c api platform-04229880a89cda8c27b9360f64d40adb-mjc5x:

2017-06-23 00:55:57.878  INFO 1 --- [nio-8080-exec-2] c.s.c.StripeWebhooksController.webhooks  : Stripe event: invoice.payment_succeeded
2017-06-23 00:55:58.467  INFO 1 --- [nio-8080-exec-4] c.s.c.StripeWebhooksController.webhooks  : Stripe event: customer.subscription.created
unexpected stream type ""

And here is the api_0.log from that container's io.kubernetes.container.logpath (mildly redacted; I know you asked for about 20 lines, but the lines above those contain some sensitive bits that I would prefer not to have to go through and redact, though I will if you think it's necessary):

{"log":"2017-06-22 23:19:47.359  INFO 1 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring FrameworkServlet 'dispatcherServlet'\n","stream":"stdout","time":"2017-06-22T23:19:47.36010412Z"}
{"log":"2017-06-22 23:19:47.359  INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : FrameworkServlet 'dispatcherServlet': initialization started\n","stream":"stdout","time":"2017-06-22T23:19:47.360140818Z"}
{"log":"2017-06-22 23:19:47.409  INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : FrameworkServlet 'dispatcherServlet': initialization completed in 49 ms\n","stream":"stdout","time":"2017-06-22T23:19:47.418559307Z"}
{"log":"2017-06-22 23:59:28.732  INFO 1 --- [nio-8080-exec-8] c.s.services.audit.AuditClient           : timestamp:1498175968729, actor:2765, targetUser:2765, sourceIp:10.2.0.10, operation:USER_EMAIL_SET, successful:true\n","stream":"stdout","time":"2017-06-22T23:59:28.732605896Z"}
{"log":"2017-06-22 23:59:28.759  INFO 1 --- [nio-8080-exec-8] c.s.services.audit.AuditClient           : timestamp:1498175968759, actor:2765, targetUser:2765, sourceIp:10.2.0.10, operation:USER_EMAIL_VERIFY_INIT, successful:true\n","stream":"stdout","time":"2017-06-22T23:59:28.759734436Z"}
{"log":"2017-06-22 23:59:28.774  INFO 1 --- [nio-8080-exec-8] c.s.services.audit.AuditClient           : timestamp:1498175968774, actor:2765, targetUser:2765, sourceIp:10.2.0.10, operation:USER_UPDATE, successful:true, reason:XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX\n","stream":"stdout","time":"2017-06-22T23:59:28.77499916Z"}
{"log":"2017-06-23 00:55:57.878  INFO 1 --- [nio-8080-exec-2] c.s.c.StripeWebhooksController.webhooks  : Stripe event: invoice.payment_succeeded\n","stream":"stdout","time":"2017-06-23T00:55:57.879240851Z"}
{"log":"2017-06-23 00:55:58.467  INFO 1 --- [nio-8080-exec-4] c.s.c.StripeWebhooksController.webhooks  : Stripe event: customer.subscription.created\n","stream":"stdout","time":"2017-06-23T00:55:58.467870349Z"}
{"log":"2017-06-23 00:56:30.340  INFO 1 --- [nio-8080-exec-6] c.s.c.StripeWebhooksController.webhooks  : Stripe event: customer.created\n","stream":"stdout","time":"2017-06-23T00:56:30.340702714Z"}
{"log":"2017-06-23 00:56:58.157  INFO 1 --- [nio-8080-exec-6] c.s.c.StripeWebhooksController.webhooks  : Stripe event: customer.subscription.deleted\n","stream":"stdout","time":"2017-06-23T00:56:58.15801668Z"}

I have the contents of .[0].LogPath, too, but they are the exact same content (identical md5 checksums).


Environment information

Regrettably, we are using the rkt-based kubelet-wrapper from CoreOS, invoked as follows:

Environment=KUBELET_VERSION=v1.6.4_coreos.0
Environment=KUBELET_ACI=quay.io/coreos/hyperkube
Environment="RKT_OPTS=--volume=resolv,kind=host,source=/etc/resolv.conf --mount volume=resolv,target=/etc/resolv.conf"
ExecStartPre=/usr/bin/rkt gc --grace-period=30s
ExecStartPre=/usr/bin/docker ps
ExecStart=/usr/lib64/coreos/kubelet-wrapper  \
  --api-servers=https://k8s.example.com:443 \
  --register-node=true \
  --allow-privileged=true \
  --pod-manifest-path=/etc/kubernetes/manifests \
  --cluster-dns=10.20.0.53 \
  --cluster-domain=cluster.local \
  --cloud-provider=aws \
  --kubeconfig=/etc/kubernetes/worker-kubeconfig.yaml \
  --tls-cert-file=/etc/kubernetes/ssl/worker.pem \
  --tls-private-key-file=/etc/kubernetes/ssl/worker-key.pem
$ docker version
Client:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.7.5
 Git commit:   a82d35e
 Built:        Mon Jun 19 23:04:34 2017
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.6
 API version:  1.24
 Go version:   go1.7.5
 Git commit:   a82d35e
 Built:        Mon Jun 19 23:04:34 2017
 OS/Arch:      linux/amd64
$ cat /etc/lsb-release
DISTRIB_ID="Container Linux by CoreOS"
DISTRIB_RELEASE=1409.2.0
DISTRIB_CODENAME="Ladybug"
DISTRIB_DESCRIPTION="Container Linux by CoreOS 1409.2.0 (Ladybug)"

@Random-Liu
Member

Random-Liu commented Jun 23, 2017

@mdaniel Given the code, I really don't understand why this happens.

The only possibility is that the kubelet is reading and parsing a partial line. However, if the JSON log line is incomplete, json.Unmarshal should return an error and the kubelet will discard that line; otherwise, the JSON is complete, and I don't understand why streamType is empty when the actual log line contains everything.

Could you check the kubelet log to see whether there is anything suspicious, although it's unlikely.
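
(To make the mechanism concrete, here is a minimal, self-contained Go sketch of parsing a json-file log line; the struct and field names are simplified assumptions, not the actual kubelet types. It shows that a truncated line fails json.Unmarshal outright, while a complete line parses with a non-empty stream, which is what makes the empty stream reported here so surprising.)

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Simplified stand-in for a Docker json-file log entry
// (field names assumed from the raw log lines shown above).
type dockerJSONLog struct {
	Log    string `json:"log"`
	Stream string `json:"stream"`
	Time   string `json:"time"`
}

func main() {
	complete := `{"log":"hello\n","stream":"stdout","time":"2017-06-23T00:55:57.879240851Z"}`
	partial := `{"log":"hello\n","stre` // line truncated mid-read

	var entry dockerJSONLog
	if err := json.Unmarshal([]byte(complete), &entry); err == nil {
		fmt.Printf("stream=%q log=%q\n", entry.Stream, entry.Log) // stream="stdout"
	}

	entry = dockerJSONLog{}
	if err := json.Unmarshal([]byte(partial), &entry); err != nil {
		// A truncated line fails to unmarshal, so it can simply be discarded;
		// an empty Stream on a successfully parsed line is the puzzling case.
		fmt.Println("partial line rejected:", err)
	}
}
```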

@mdaniel
Contributor

mdaniel commented Jul 25, 2017

Sorry for the delay in getting back to you; I finally caught something interesting in the kubelet log:

Jul 25 23:15:15 k8s-worker-1485319434 kubelet-wrapper[18830]: E0725 23:15:15.223587   18830 kuberuntime_logs.go:201] Failed with err unexpected stream type "" when writing log for log file "/var/log/pods/a7115c15-6d1a-11e7-ac09-02e6dc269b1f/snowplow-stream-enrich_0.log": &{timestamp:{sec:63636274423 nsec:579450457 loc:0x6c85580} stream: log:[]}
Jul 25 23:15:56 k8s-worker-1485319434 kubelet-wrapper[18830]: E0725 23:15:56.398231   18830 kuberuntime_logs.go:201] Failed with err write tcp 10.1.1.240:10250->10.1.1.139:37468: write: broken pipe when writing log for log file "/var/log/pods/6985ce2c-7187-11e7-ac09-02e6dc269b1f/sentry_0.log": &{timestamp:{sec:63636621356 nsec:397737543 loc:0x6c85580} stream:stderr log:[49 48 46 49 48 46 49 51 46 49 32 45 32 45 32 91 50 53 47 74 117 108 47 50 48 49 55 58 50 51 58 49 53 58 53 54 32 43 48 48 48 48 93 32 34 71 69 84 32 47 95 104 101 97 108 116 104 47 32 72 84 84 80 47 49 46 49 34 32 50 48 48 32 50 48 48 32 34 45 34 32 34 71 111 45 104 116 116 112 45 99 108 105 101 110 116 47 49 46 49 34 10]}
Jul 25 23:16:06 k8s-worker-1485319434 kubelet-wrapper[18830]: E0725 23:16:06.398164   18830 kuberuntime_logs.go:201] Failed with err unexpected stream type "" when writing log for log file "/var/log/pods/a7115c15-6d1a-11e7-ac09-02e6dc269b1f/snowplow-stream-enrich_0.log": &{timestamp:{sec:63636348843 nsec:430964770 loc:0x6c85580} stream: log:[]}
Jul 25 23:17:55 k8s-worker-1485319434 kubelet-wrapper[18830]: E0725 23:17:55.380456   18830 kuberuntime_logs.go:201] Failed with err unexpected stream type "" when writing log for log file "/var/log/pods/6985ce2c-7187-11e7-ac09-02e6dc269b1f/sentry_0.log": &{timestamp:{sec:63636621475 nsec:380019366 loc:0x6c85580} stream: log:[]}
Jul 25 23:21:05 k8s-worker-1485319434 kubelet-wrapper[18830]: E0725 23:21:05.575363   18830 kuberuntime_logs.go:201] Failed with err unexpected stream type "" when writing log for log file "/var/log/pods/6985ce2c-7187-11e7-ac09-02e6dc269b1f/sentry_0.log": &{timestamp:{sec:63636621665 nsec:574763058 loc:0x6c85580} stream: log:[]}
Jul 25 23:25:35 k8s-worker-1485319434 kubelet-wrapper[18830]: E0725 23:25:35.791648   18830 kuberuntime_logs.go:201] Failed with err unexpected stream type "" when writing log for log file "/var/log/pods/6985ce2c-7187-11e7-ac09-02e6dc269b1f/sentry_0.log": &{timestamp:{sec:63636621935 nsec:790985155 loc:0x6c85580} stream: log:[]}

I elided the info-level lines in between those error-level lines, as they were "MountVolume.Setup succeeded" and didn't seem relevant.

I also came here to see if there was another issue regarding intermingled log output, but searching for that in the issues is extremely hard given how many ways there are in English to describe the behavior:

line-1
line-2
other-pod-line-1
other-pod-line-2
line-3
unexpected stream type ""

where other-pod-line-1 is very clearly the log output from a separate Pod on that same Node.

@bbeaudreault

I think this is a concurrency issue. I've not been able to reliably reproduce it but anecdotally have noticed that the logging is much less stable when multiple people are tailing at once. I can tail logs off-hours when no one is around and it will be fine for hours without this error.

But during the work day, when multiple people are tailing the same log, we will all see the error quite regularly. Pretty much every time the error happens, if I ask around, it turns out someone else had just started tailing the same log.

@mdaniel
Contributor

mdaniel commented Jul 28, 2017

In my case, I doubt that it is multiple people tailing simultaneously, but I will try to remember to start several kubectl logs -f and see if that impacts things.

I experience it pretty regularly, but I haven't tried reproducing it in any self-contained system -- there are so many variables to consider :(

@bbeaudreault do you also experience log lines being intermixed with another Pod's output?

@bbeaudreault

@mdaniel I've never seen that. We have a few hundred pods, and I haven't witnessed it once or seen anyone else complain about it. But if somehow the streams are being crossed for you, that could possibly also point to a concurrency issue. In your case it may not be multiple readers, but possibly multiple writers. I've not looked at the code, though, so this is just a guess.

@mdaniel
Contributor

mdaniel commented Jul 28, 2017

Thanks for your suggestion, Bryan, as I am able to blow up the kubelet at will now; I get one of these explosions for each kubectl that attaches:

Jul 28 17:16:16 k8s-worker-1485319434 kubelet-wrapper[18830]: E0728 17:16:16.670207   18830 kuberuntime_logs.go:201] Failed with err unexpected stream type "" when writing log for log file "/var/log/pods/82e9ea36-6d35-11e7-ac09-02e6dc269b1f/kong_0.log": &{timestamp:{sec:63636858862 nsec:712137987 loc:0x6c85580} stream: log:[]}
Jul 28 17:16:16 k8s-worker-1485319434 kubelet-wrapper[18830]: I0728 17:16:16.670936   18830 logs.go:41] http: multiple response.WriteHeader calls
Jul 28 17:16:16 k8s-worker-1485319434 kubelet-wrapper[18830]: E0728 17:16:16.892852   18830 kuberuntime_logs.go:201] Failed with err unexpected stream type "" when writing log for log file "/var/log/pods/82e9ea36-6d35-11e7-ac09-02e6dc269b1f/kong_0.log": &{timestamp:{sec:63636858790 nsec:698286771 loc:0x6c85580} stream: log:[]}
Jul 28 17:16:16 k8s-worker-1485319434 kubelet-wrapper[18830]: I0728 17:16:16.892926   18830 logs.go:41] http: multiple response.WriteHeader calls
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: I0728 17:16:17.057411   18830 logs.go:41] http: panic serving 10.1.1.139:54006: runtime error: invalid memory address or nil pointer dereference
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: goroutine 22001312 [running]:
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: net/http.(*conn).serve.func1(0xc4238ca800)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /usr/local/go/src/net/http/server.go:1491 +0x12a
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: panic(0x3a4c580, 0xc42001a030)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /usr/local/go/src/runtime/panic.go:458 +0x243
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/pkg/kubelet/kuberuntime.parseDockerJSONLog(0xc42470d080, 0x17d, 0x17d, 0xc4238f2a40, 0x17d, 0x0)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_logs.go:299 +0x123
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/pkg/kubelet/kuberuntime.(*kubeGenericRuntimeManager).ReadLogs(0xc420f1a0e0, 0xc4238f29c0, 0x3d, 0xc42320c329, 0x40, 0xc4232a5260, 0x6ca20e0, 0xc422970fa0, 0x6ca20e0, 0xc422970fa0, ...)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_logs.go:191 +0xc2e
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/pkg/kubelet/kuberuntime.(*kubeGenericRuntimeManager).GetContainerLogs(0xc420f1a0e0, 0xc421765500, 0xc42320c320, 0x6, 0xc42320c329, 0x40, 0xc4232a5260, 0x6ca20e0, 0xc422970fa0, 0x6ca20e0, ...)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kuberuntime/kuberuntime_container.go:701 +0x17e
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/pkg/kubelet.(*Kubelet).GetKubeletContainerLogs(0xc420e96c00, 0xc4237be8a0, 0xe, 0xc42433d062, 0x4, 0xc4232a5260, 0x6ca20e0, 0xc422970fa0, 0x6ca20e0, 0xc422970fa0, ...)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet_pods.go:1050 +0x524
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/pkg/kubelet/server.(*Server).getContainerLogs(0xc420f01130, 0xc423274f30, 0xc4232a5200)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/server.go:515 +0x864
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/pkg/kubelet/server.(*Server).(k8s.io/kubernetes/pkg/kubelet/server.getContainerLogs)-fm(0xc423274f30, 0xc4232a5200)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/server.go:375 +0x3e
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch.func6(0xc423274f30, 0xc4232a5200)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:267 +0x3e
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*FilterChain).ProcessFilter(0xc4232750e0, 0xc423274f30, 0xc4232a5200)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/filter.go:21 +0x95
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/pkg/kubelet/server.(*Server).InstallAuthFilter.func1(0xc423274f30, 0xc4232a5200, 0xc4232750e0)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/server.go:255 +0x151
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*FilterChain).ProcessFilter(0xc4232750e0, 0xc423274f30, 0xc4232a5200)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/filter.go:19 +0x68
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).dispatch(0xc420f1c120, 0x6cca9e0, 0xc4238b0b60, 0xc4236f82d0)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:269 +0x996
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).(k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.dispatch)-fm(0x6cca9e0, 0xc4238b0b60, 0xc4236f82d0)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:120 +0x48
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: net/http.HandlerFunc.ServeHTTP(0xc420f2e600, 0x6cca9e0, 0xc4238b0b60, 0xc4236f82d0)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /usr/local/go/src/net/http/server.go:1726 +0x44
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: net/http.(*ServeMux).ServeHTTP(0xc421457770, 0x6cca9e0, 0xc4238b0b60, 0xc4236f82d0)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /usr/local/go/src/net/http/server.go:2022 +0x7f
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/vendor/github.com/emicklei/go-restful.(*Container).ServeHTTP(0xc420f1c120, 0x6cca9e0, 0xc4238b0b60, 0xc4236f82d0)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/emicklei/go-restful/container.go:287 +0x4d
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: k8s.io/kubernetes/pkg/kubelet/server.(*Server).ServeHTTP(0xc420f010e0, 0x6cca9e0, 0xc4238b0b60, 0xc4236f82d0)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/server/server.go:777 +0x10e
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: net/http.serverHandler.ServeHTTP(0xc4202a1400, 0x6cd0ea0, 0xc422ddf6c0, 0xc4236f82d0)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /usr/local/go/src/net/http/server.go:2202 +0x7d
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: net/http.(*conn).serve(0xc4238ca800, 0x6cd3fa0, 0xc4233e8cc0)
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /usr/local/go/src/net/http/server.go:1579 +0x4b7
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]: created by net/http.(*Server).Serve
Jul 28 17:16:17 k8s-worker-1485319434 kubelet-wrapper[18830]:         /usr/local/go/src/net/http/server.go:2293 +0x44d

I'll see if I can get it to explode in a more isolated environment, since it doesn't appear to be related to the pods that are running.

@williamstein

I see this bug frequently when calling kubectl logs many times in parallel on my kubeadm Kubernetes 1.7.4 cluster. I was using multiple threads (from a Python 3 process) to get many logs at once using kubectl logs, and that caused me to hit this bug on about 25% of the requests. Switching to grabbing the logs one after the other (rather than running many kubectl logs shell commands at once) works around the problem, though it is an order of magnitude slower.

To be clear: I am not using the follow -f option. I am using the tail option, typically --tail=10000.
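
(For anyone trying to reproduce this, here is a rough Go sketch of the parallel-fetch pattern described above. It assumes kubectl is on PATH and that the pod names are placeholders; on affected versions, some of these concurrent fetches intermittently fail with unexpected stream type "".)

```go
package main

import (
	"fmt"
	"os/exec"
	"sync"
)

func main() {
	pods := []string{"pod-a", "pod-b", "pod-c"} // hypothetical pod names
	var wg sync.WaitGroup
	for _, pod := range pods {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			// Mirrors running many `kubectl logs --tail=10000 <pod>` commands at once.
			out, err := exec.Command("kubectl", "logs", "--tail=10000", pod).CombinedOutput()
			if err != nil {
				fmt.Printf("%s: %v\n%s", pod, err, out)
				return
			}
			fmt.Printf("%s: fetched %d bytes\n", pod, len(out))
		}(pod)
	}
	wg.Wait()
}
```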

@feiskyer
Member

Also reproduced the problem by fetching container logs in multiple processes. #50381 should fix the problem.

k8s-github-robot pushed a commit that referenced this issue Sep 1, 2017
Automatic merge from submit-queue (batch tested with PRs 50381, 51307, 49645, 50995, 51523)

Bugfix: Use local JSON log buffer in parseDockerJSONLog.

**What this PR does / why we need it**:
The issue described in #47800 is due to a race condition in `ReadLogs`: Because the JSON log buffer (`dockerJSONLog`) is package-scoped, any two goroutines modifying the buffer could race and overwrite the other's changes. In particular, one goroutine could unmarshal a JSON log line into the buffer, then another goroutine could `Reset()` the buffer, and the resulting `Stream` would be empty (`""`). This empty `Stream` is caught in a `case` block and raises an `unexpected stream type` error.

This PR creates a new buffer for each execution of `parseDockerJSONLog`, so each goroutine is guaranteed to have a local instance of the buffer.

**Which issue this PR fixes**: fixes #47800

**Release note**:
```release-note
Fixed an issue (#47800) where `kubectl logs -f` failed with `unexpected stream type ""`.
```
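
(To illustrate the race described in the PR above, here is a simplified Go sketch contrasting a shared package-level buffer with the per-call buffer approach that #50381 adopts. This is not the actual kubelet code; the type and function names are assumptions.)

```go
package logparse

import "encoding/json"

// Simplified stand-in for the Docker json-file log entry.
type dockerJSONLog struct {
	Log    string `json:"log"`
	Stream string `json:"stream"`
	Time   string `json:"time"`
}

// Racy variant: every goroutine serving a log request shares this one buffer,
// so one goroutine's reset can wipe out another goroutine's parsed Stream.
var sharedLog = &dockerJSONLog{}

func parseSharedBuffer(line []byte) (string, string, error) {
	*sharedLog = dockerJSONLog{} // reset races with a concurrent reader
	if err := json.Unmarshal(line, sharedLog); err != nil {
		return "", "", err
	}
	return sharedLog.Stream, sharedLog.Log, nil // Stream may read back as ""
}

// Fixed variant (the approach #50381 takes): each call gets its own buffer,
// so concurrent log readers can no longer clobber each other's state.
func parseLocalBuffer(line []byte) (string, string, error) {
	var l dockerJSONLog
	if err := json.Unmarshal(line, &l); err != nil {
		return "", "", err
	}
	return l.Stream, l.Log, nil
}
```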
k8s-github-robot pushed a commit that referenced this issue Sep 5, 2017
Automatic merge from submit-queue

Automated cherry pick of #50381 to release-1.6

**What this PR does / why we need it**:

Use local JSON log buffer in parseDockerJSONLog.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47800

**Special notes for your reviewer**:

**Release note**:

```release-note
Fixed an issue (#47800) where `kubectl logs -f` failed with `unexpected stream type ""`.
```
@ssakhuja57

ssakhuja57 commented Sep 18, 2017

I am able to reproduce this with minikube v0.22.1 (which runs Kubernetes 1.7.5). When following logs for separate pods concurrently (using /api/v1/namespaces/<ns>/pods/<pod_id>/log?follow=true), I very quickly see unexpected stream type "". The funny thing is there's always one random pod for which the follow doesn't crash with this error (at least not for a long time) and keeps running successfully.
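
(A hedged sketch of the concurrent-follow pattern described above, going through the API server rather than kubectl. It assumes `kubectl proxy` is running on localhost:8001 and that the namespace and pod names are placeholders.)

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"sync"
)

func follow(ns, pod string, wg *sync.WaitGroup) {
	defer wg.Done()
	url := fmt.Sprintf("http://localhost:8001/api/v1/namespaces/%s/pods/%s/log?follow=true", ns, pod)
	resp, err := http.Get(url)
	if err != nil {
		fmt.Println(pod, "request failed:", err)
		return
	}
	defer resp.Body.Close()
	scanner := bufio.NewScanner(resp.Body)
	for scanner.Scan() {
		// On affected kubelets, one of these streams tends to terminate early,
		// with the `unexpected stream type ""` message in the response body.
		fmt.Printf("[%s] %s\n", pod, scanner.Text())
	}
}

func main() {
	var wg sync.WaitGroup
	for _, pod := range []string{"pod-a", "pod-b", "pod-c"} { // hypothetical pods
		wg.Add(1)
		go follow("default", pod, &wg)
	}
	wg.Wait()
}
```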

@wzhliang

wzhliang commented Nov 7, 2017

Seeing this error message with a 1.7.3 bare-metal cluster.

@yujuhong
Contributor

yujuhong commented Nov 7, 2017

FYI, the fix is in 1.7.6.
