Feature request: add ability to view logs of pod in failed state #4640

Closed
derekwaynecarr opened this issue Feb 20, 2015 · 21 comments · Fixed by #7973

@derekwaynecarr
Member

While working to build applications on Kubernetes, I found it difficult to view the logs of a container that went into a failed state. I had to SSH into the node and run docker ps -a and docker logs to see the root cause. We should provide a mechanism in the kubelet, accessible via kubectl, to view the logs of containers that have not yet been garbage collected, to improve developer productivity.

/cc @smarterclayton
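
For illustration, the manual workaround described above looks roughly like this (node name and container ID are placeholders):

```sh
# SSH to the node that ran the failed pod, find the exited container, and read its logs.
ssh node-1.example.com
docker ps -a --filter status=exited   # list containers that are no longer running
docker logs <exited-container-id>     # inspect the output of the failed container
```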

@dchen1107 added the area/introspection and sig/api-machinery labels Feb 20, 2015
@bgrant0607 added this to the v1.0 milestone Feb 20, 2015
@bgrant0607 added the priority/important-soon, priority/backlog, and area/usability labels and removed the priority/important-soon label Feb 20, 2015
@smarterclayton
Contributor

Ah, this was a regression due to #4373. It's very harmful for RestartPolicyNever pods, slightly less important for RestartAlways.

@smarterclayton
Contributor

@bgrant0607 I'd like to bump this back to P1; it makes it impossible to view logs for run-once pods.

@satnam6502
Contributor

Are you using Elasticsearch cluster-level logging?
I have been pondering the idea of producing a pod that makes it easy to browse the logs of current and recent pods through a web interface by fetching the information from Elasticsearch and then cross-referencing with the master to map pods to Docker container IDs, etc.
Would that be useful for you? If yes, I can make that a priority after I nail down the end-to-end test for cluster-level logging.

@smarterclayton
Contributor

We are not, quite yet. We're trying to have both the simpler level and the higher level work at the same time (so you can do streaming without Elasticsearch, and if you have ES you get the better experience).

@smarterclayton
Contributor

But we do want to consume it.

@bgrant0607 added the priority/important-soon label and removed the priority/backlog label Feb 23, 2015
@dchen1107
Member

Yes, #4373 added the restriction to only running containers, and we should provide the ability to view the logs of a pod in a failed state. But the previous behavior was too ad hoc: when you viewed the log, you could get either the running container or the failed one from just before the Kubelet restarted it, when the policy is RestartAlways.

@bgrant0607
Member

@saad-ali

@bgrant0607
Member

@vishh

@vishh self-assigned this Feb 23, 2015
@bgrant0607
Member

@smarterclayton Priority bumped up. Agree that #4373 was a regression.

@smarterclayton
Contributor

New algorithm: always find the most recent container attempt and show that?

@dchen1107
Member

@smarterclayton Yes, that is how we are going to support it.
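
For reference, the user-facing shape of this behavior in kubectl looks roughly like the following (flag names reflect later kubectl releases, not necessarily the exact change in #7973; pod and container names are placeholders):

```sh
# Logs of the current (most recent) attempt of a container in a pod:
kubectl logs my-pod -c my-container

# Logs of the previous attempt, e.g. the one that failed before the Kubelet restarted it:
kubectl logs my-pod -c my-container --previous
```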

@satnam6502
Contributor

I've created an issue #4736 ... if it gets traction I am happy to work on it. I also plan to tweak our Elasticsearch cluster-level bring-up code to replace the last shred of GCE-specific code (a call to get the public IP of a service) with the equivalent kubectl call (although this feature would not be needed for a cluster-level log viewer).

@dchen1107
Member

cc/ @ArtfulCoder

@vishh
Contributor

vishh commented Mar 4, 2015

This should be fixed on HEAD

@vishh closed this as completed Mar 4, 2015
@vishh
Contributor

vishh commented Mar 5, 2015

Kubelet does not track the last failed container instance and so this issue isn't fully resolved. Re-opening it.

@vishh
Contributor

vishh commented Mar 30, 2015

@dchen1107: I assume you are working on this. Can you confirm?

@vishh
Contributor

vishh commented Apr 15, 2015

Ping @dchen1107

@dchen1107 assigned dchen1107 and unassigned vishh Apr 15, 2015
@dchen1107
Member

Yes, I just re-assigned it to myself. I will deal with it tomorrow.

@derekwaynecarr
Member Author

Since we are now running kube-apiserver and other master components in a pod, will it be possible via some quick shortcut to go directly to the kubelet to read failed kube-apiserver logs when the kube-apiserver itself fails to start?

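A rough sketch of what such a shortcut could look like, assuming the kubelet exposes a containerLogs endpoint on its API port and that the node is reachable directly (the port, path, and names below are assumptions, not a confirmed interface):

```sh
# Hypothetical direct query against the kubelet on the master, bypassing the failed kube-apiserver.
# All values below are placeholders; adjust for your cluster.
NODE=master-node            # node running the kube-apiserver pod
NAMESPACE=kube-system       # namespace of the static pod
POD=kube-apiserver-master   # pod name
CONTAINER=kube-apiserver    # container name

curl -k "https://${NODE}:10250/containerLogs/${NAMESPACE}/${POD}/${CONTAINER}"
```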

@bgrant0607
Member

Status: @dchen1107 has made some progress, but other issues have preempted this, so it won't be done for a couple weeks.

@smarterclayton
Contributor

We can assist if necessary.
