kubectl logs -f failed with 'unexpected stream type "" ' #47800
I'm seeing this on 1.6.2
@Kargakis @SleepyBrett do you use docker? If so, what log driver do you use?
I saw this in a GKE cluster so I don't have access to the node...
Whose GKE cluster? If you are the owner of the GKE cluster, you do have access to the node VMs; you can SSH into them.
So I have access after all :) docker 1.11.2 is used and the log driver is json-file. Unfortunately, I have scaled the deployment down since I had the issue, so now I cannot find the container log under /var/log/containers. Noted for the next time though.
docker 11.2.6 using json-file. I'll grab the log off the host next time I see it. I assume roughly 20 lines above and below where kubectl logs fails would be sufficient?
Yes, that'd be great.
In case it helps someone else, here's how I grabbed the logs in question:

```
ssh $host_ip env dock_id=$dock_id bash -x <<'FOO'
docker inspect $dock_id | \
  jq -r '.[0] as $i |
    "sudo cp -iv "+$i.Config.Labels["io.kubernetes.container.logpath"]+" "+$i.LogPath+" /tmp"' | \
  bash -x
FOO
```

The client-side experience, the output from […], and the […]: I have the contents of […]. Environmental information: regrettably we are using the […].
@mdaniel Given the code, I really don't understand why this happens. The only possibility is that kubelet is reading and parsing a partial line. However, if the json log line is incomplete, the unmarshal should simply fail with an error. Could you check the kubelet log to see whether there is anything suspicious, although it's unlikely.
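To illustrate that point, here is a minimal sketch (not kubelet's actual code; the struct, field names, and sample line are my own assumptions) showing that a docker json-file entry is one complete JSON object per line, so unmarshalling a truncated line fails outright instead of quietly yielding an empty stream field:

```go
// Minimal sketch: parsing a complete vs. truncated docker json-file log line.
// The struct below is a stand-in for illustration, not kubelet's actual type.
package main

import (
	"encoding/json"
	"fmt"
)

type jsonLogLine struct {
	Log    string `json:"log"`
	Stream string `json:"stream"`
	Time   string `json:"time"`
}

func main() {
	full := `{"log":"hello\n","stream":"stdout","time":"2017-06-21T00:00:00Z"}`
	partial := full[:len(full)/2] // simulate reading only part of the line

	var a jsonLogLine
	err := json.Unmarshal([]byte(full), &a)
	fmt.Printf("full:    err=%v stream=%q\n", err, a.Stream)

	var b jsonLogLine
	err = json.Unmarshal([]byte(partial), &b)
	fmt.Printf("partial: err=%v stream=%q\n", err, b.Stream)
	// The truncated line returns a JSON syntax error and decodes nothing,
	// so a partial read alone would surface as a parse error, not as an
	// empty stream type slipping through to the stream-type switch.
}
```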
Sorry for the delay in getting back to you; I finally caught something interesting in the kubelet log: […]

I elided the info-level lines in between those error-level lines, as they were "MountVolume.SetUp succeeded" and didn't seem relevant.

I also came here to see if there was another issue regarding intermingled log output, but searching for that in the issues is extremely hard given how many ways there are in English to describe the behavior: […] where other-pod-line-1 is very clearly the log output from a separate Pod on that same Node.
I think this is a concurrency issue. I've not been able to reliably reproduce it, but anecdotally I have noticed that the logging is much less stable when multiple people are tailing at once. I can tail logs off-hours when no one is around and it will be fine for hours without this error, but during the work day when multiple people are tailing the same log we will all see the error quite regularly. Pretty much every time the error happens, if I ask around it turns out someone else had just started tailing the same log.
In my case, I doubt that it is multiple people tailing simultaneously, but I will try to remember to start several kubectl logs -f sessions at once. I experience it pretty regularly, but I haven't tried reproducing it in any self-contained system; there are so many variables to consider :( @bbeaudreault do you also experience log lines being intermixed with another Pod's output?
@mdaniel I've never seen that. We have a few hundred pods, and haven't witnessed it once or seen anyone else complain about it. But if somehow the streams are being crossed for you, that could also point to a concurrency issue. In your case it may not be multiple readers, but possibly multiple writers. I've not looked at the code though, so just a guess.
Thanks for your suggestion, Bryan; I am able to blow up the kubelet at will now, and I get one of these explosions for each kubectl that attaches: […]
I'll see if I can get it to explode in a more isolated environment, since it doesn't appear to be related to the pods that are running.
I see this bug frequently when calling kubectl logs. To be clear: I am not using the follow (-f) flag.
Also reproduced the problem by fetching container logs in multiple processes. #50381 should fix the problem.
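For anyone who wants to poke at this, here is a rough repro sketch along those lines (the pod name, namespace, and reader count are assumptions, and it assumes kubectl reports the failure on stderr; it simply shells out to kubectl from several goroutines and watches for the error):

```go
// Rough repro sketch: run several concurrent `kubectl logs -f` readers against
// the same pod and report any "unexpected stream type" errors seen on stderr.
package main

import (
	"bufio"
	"fmt"
	"os/exec"
	"strings"
	"sync"
)

func main() {
	const namespace = "default" // assumption: adjust for your cluster
	const pod = "my-noisy-pod"  // assumption: any pod that logs steadily
	const readers = 5

	var wg sync.WaitGroup
	for i := 0; i < readers; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			cmd := exec.Command("kubectl", "logs", "-f", "-n", namespace, pod)
			stderr, err := cmd.StderrPipe()
			if err != nil {
				fmt.Printf("reader %d: pipe error: %v\n", id, err)
				return
			}
			if err := cmd.Start(); err != nil {
				fmt.Printf("reader %d: start error: %v\n", id, err)
				return
			}
			// Stdout is left nil and therefore discarded; we only scan stderr,
			// assuming that is where kubectl reports the streaming failure.
			scanner := bufio.NewScanner(stderr)
			for scanner.Scan() {
				if strings.Contains(scanner.Text(), "unexpected stream type") {
					fmt.Printf("reader %d hit the bug: %s\n", id, scanner.Text())
				}
			}
			cmd.Wait()
		}(i)
	}
	wg.Wait()
}
```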
Automatic merge from submit-queue (batch tested with PRs 50381, 51307, 49645, 50995, 51523)

Bugfix: Use local JSON log buffer in parseDockerJSONLog.

**What this PR does / why we need it**: The issue described in #47800 is due to a race condition in `ReadLogs`: because the JSON log buffer (`dockerJSONLog`) is package-scoped, any two goroutines modifying the buffer could race and overwrite the other's changes. In particular, one goroutine could unmarshal a JSON log line into the buffer, then another goroutine could `Reset()` the buffer, and the resulting `Stream` would be empty (`""`). This empty `Stream` is caught in a `case` block and raises an `unexpected stream type` error. This PR creates a new buffer for each execution of `parseDockerJSONLog`, so each goroutine is guaranteed to have a local instance of the buffer.

**Which issue this PR fixes**: fixes #47800

**Release note**:

```release-note
Fixed an issue (#47800) where `kubectl logs -f` failed with `unexpected stream type ""`.
```
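A stripped-down sketch of the race that PR describes and of the shape of the fix (the names mirror the PR text, but this is an illustration under my own assumptions, not the kubelet source):

```go
// Illustration of the race: a package-scoped JSON log buffer shared by
// concurrent log readers can be reset by one goroutine after another has
// unmarshalled into it, leaving Stream == "" at the stream-type switch.
package main

import (
	"encoding/json"
	"fmt"
	"sync"
)

type dockerJSONLog struct {
	Log    string `json:"log"`
	Stream string `json:"stream"`
}

// Buggy shape: one package-scoped buffer for every caller.
var sharedBuf dockerJSONLog

func parseShared(line []byte) (string, error) {
	sharedBuf = dockerJSONLog{} // the Reset() from the PR description
	if err := json.Unmarshal(line, &sharedBuf); err != nil {
		return "", err
	}
	// Another goroutine may reset sharedBuf between Unmarshal and this switch.
	switch sharedBuf.Stream {
	case "stdout", "stderr":
		return sharedBuf.Stream, nil
	default:
		return "", fmt.Errorf("unexpected stream type %q", sharedBuf.Stream)
	}
}

// Fixed shape (what #50381 does conceptually): a fresh local buffer per call.
func parseLocal(line []byte) (string, error) {
	var buf dockerJSONLog
	if err := json.Unmarshal(line, &buf); err != nil {
		return "", err
	}
	switch buf.Stream {
	case "stdout", "stderr":
		return buf.Stream, nil
	default:
		return "", fmt.Errorf("unexpected stream type %q", buf.Stream)
	}
}

func main() {
	line := []byte(`{"log":"hello\n","stream":"stdout"}`)
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if _, err := parseShared(line); err != nil {
				fmt.Println("shared buffer:", err) // fires intermittently (data race)
			}
			if _, err := parseLocal(line); err != nil {
				fmt.Println("local buffer:", err) // never fires
			}
		}()
	}
	wg.Wait()
}
```

Running this with `go run -race` flags the shared-buffer variant immediately, which matches the PR's diagnosis of why giving each call its own buffer removes the error.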
Automatic merge from submit-queue

Automated cherry pick of #50381 to release-1.6

**What this PR does / why we need it**: Use local JSON log buffer in parseDockerJSONLog.

**Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes #47800

**Special notes for your reviewer**:

**Release note**:

```release-note
Fixed an issue (#47800) where `kubectl logs -f` failed with `unexpected stream type ""`.
```
I am able to reproduce this with minikube v0.22.1 (which runs Kubernetes 1.7.5) when following logs for separate pods concurrently with kubectl logs -f.
Seeing this error message with a 1.7.3 bare metal cluster. |
FYI, the fix is in 1.7.6. |
I was running a kubectl logs -f, not for long, and it broke unexpectedly with the unexpected stream type "" error in the title.
Doesn't seem normal.
@kubernetes/sig-node-bugs
@kubernetes/sig-cli-bugs