
no realtime output by implement kubectl logs -f podname #63225

Closed
calvinyv opened this issue Apr 27, 2018 · 10 comments
Labels
kind/bug Categorizes issue or PR as related to a bug.
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.
sig/cli Categorizes an issue or PR as relevant to SIG CLI.

Comments

calvinyv commented Apr 27, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
Deployed the basic sample pod from https://kubernetes.io/docs/concepts/cluster-administration/logging/, a busybox container that echoes a counter message in a loop.
Running kubectl logs -f counter shows no output.
If I instead run kubectl logs counter several times, I can see that new messages are being generated.
What you expected to happen:
kubectl logs -f should stream new log lines from the pod as they appear.
How to reproduce it (as minimally and precisely as possible):
I'm not sure whether this reproduces in other Kubernetes environments; I just followed the guide in the official docs and deployed that basic sample pod.
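For reference, this is (approximately) the counter pod from the linked logging doc, reproduced here for convenience; treat the doc itself as the canonical version:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: counter
spec:
  containers:
  - name: count
    image: busybox
    # Print an incrementing counter with a timestamp once per second.
    args: [/bin/sh, -c,
           'i=0; while true; do echo "$i: $(date)"; i=$((i+1)); sleep 1; done']
```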

Anything else we need to know?:
All other kubectl commands work well, and the cluster is healthy.
docker logs -f on the node works without this problem.

Environment:

  • Kubernetes version (use kubectl version):
    Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:28:34Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
    Server Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.4", GitCommit:"9befc2b8928a9426501d3bf62f72849d5cbcd5a3", GitTreeState:"clean", BuildDate:"2017-11-20T05:17:43Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}

  • Cloud provider or hardware configuration:
A Kubernetes cluster provisioned by QingCloud: 1 master node and 4 worker nodes deployed on KVM boxes.

  • OS (e.g. from /etc/os-release):
    NAME="Ubuntu"
    VERSION="16.04.3 LTS (Xenial Xerus)"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 16.04.3 LTS"
    VERSION_ID="16.04"
    HOME_URL="http://www.ubuntu.com/"
    SUPPORT_URL="http://help.ubuntu.com/"
    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
    VERSION_CODENAME=xenial
    UBUNTU_CODENAME=xenial

  • Kernel (e.g. uname -a):
Linux i-di6arm5w 4.4.0-112-generic #135-Ubuntu SMP Fri Jan 19 11:48:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux

  • Install tools:

  • Others:
    docker version
    Client:
    Version: 17.12.0-ce
    API version: 1.35
    Go version: go1.9.2
    Git commit: c97c6d6
    Built: Wed Dec 27 20:11:19 2017
    OS/Arch: linux/amd64

Server:
Engine:
Version: 17.12.0-ce
API version: 1.35 (minimum version 1.12)
Go version: go1.9.2
Git commit: c97c6d6
Built: Wed Dec 27 20:09:53 2017
OS/Arch: linux/amd64
Experimental: false

@k8s-ci-robot k8s-ci-robot added needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. kind/bug Categorizes issue or PR as related to a bug. labels Apr 27, 2018
@calvinyv

/sig cli

@k8s-ci-robot k8s-ci-robot added sig/cli Categorizes an issue or PR as relevant to SIG CLI. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Apr 27, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 26, 2018
@nikhita

nikhita commented Aug 11, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 11, 2018
@tammersaleh

tammersaleh commented Nov 7, 2018

I'm seeing this as well. I've deployed to GKE:

$ gcloud container clusters create lab --image-type=cos_containerd --cluster-version=1.11.2-gke.9 --num-nodes=2 --machine-type=n1-standard-2

And here's my kubectl:

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.4", GitCommit:"bf9a868e8ea3d3a8fa53cbb22f566771b3f8068b", GitTreeState:"clean", BuildDate:"2018-10-25T19:17:06Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"11+", GitVersion:"v1.11.2-gke.9", GitCommit:"869cf55e493b5bc10ae080bdb3ac60338e868eff", GitTreeState:"clean", BuildDate:"2018-10-02T20:45:12Z", GoVersion:"go1.10.3b4", Compiler:"gc", Platform:"linux/amd64"}

I'm deploying a DaemonSet that echoes to stdout every 5 seconds:

DaemonSet manifest
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nosey
spec:
  selector:
    matchLabels:
      name: nosey
  template:
    metadata:
      labels:
        name: nosey
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: nosey
        image: superorbital/toolbox:latest
        args: ["/bin/bash", "-c", "while true; do echo there are $(ls -d /var/log/pods/* | wc -l) containers on this pod; sleep 5; done"]
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log

I can view these messages with kubectl logs nosey-1234, but if I add the -f flag, I get no output. I've also verified that the messages are appearing every 5 seconds in /var/log/containers/nosey-UUID on the node.
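When debugging a silent kubectl logs -f, a few variations can narrow down where the stream stalls. A hedged sketch using real kubectl flags; "nosey-1234" is the placeholder pod name from the report above, and the snippet is guarded so it is a no-op where kubectl is unavailable:

```shell
#!/bin/sh
# Diagnostic sketch; pod name is a placeholder.
if command -v kubectl >/dev/null 2>&1; then
  # Ask for an explicit initial tail before following, to force a first read:
  kubectl logs --tail=10 -f nosey-1234
  # Bypass kubectl's streaming path and hit the log subresource directly:
  kubectl get --raw "/api/v1/namespaces/default/pods/nosey-1234/log?follow=true&tailLines=10"
  # Bump client verbosity to inspect the underlying HTTP request/response:
  kubectl logs -f -v=8 nosey-1234
fi
```

If the --raw request streams lines but kubectl logs -f does not, the problem is on the client side; if neither streams, suspect the apiserver-to-kubelet path or an intermediate proxy buffering the response.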

@tammersaleh

More info: this seems to happen from my local environment (macOS behind a home router), but not from my cloud development environment running on a GCE VM.

@alculquicondor

I'm seeing this only for a pod with more than one container.
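For multi-container pods, kubectl needs the target container named explicitly via -c/--container. A hedged sketch (pod and container names are placeholders; guarded so it is a no-op where kubectl is unavailable):

```shell
#!/bin/sh
# Multi-container pod sketch; "mypod" and "app" are placeholder names.
if command -v kubectl >/dev/null 2>&1; then
  # Follow one specific container of a multi-container pod:
  kubectl logs -f mypod -c app
  # Or follow all containers at once (supported in newer kubectl releases):
  kubectl logs -f mypod --all-containers=true
fi
```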

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 27, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Mar 29, 2019
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

6 participants