Feature request: add ability to view logs of pod in failed state #4640
Comments
Ah, this was a regression due to #4373. It's very harmful for RestartPolicyNever pods, slightly less important for RestartAlways.
@bgrant0607 I'd like to bump this back to P1; it makes it impossible to view logs on run-once pods.
Are you using Elasticsearch cluster-level logging?
We are not, quite yet - we're trying to have both the simpler level and the higher level work at the same time (so you can do streaming without Elasticsearch, and if you have Elasticsearch you get the better experience).
But we do want to consume it.
Yes, #4373 added the restriction to only the running container, and we should provide the ability to view the logs of a pod in a failed state. But the previous behavior was too ad hoc: when you viewed the log, you could get either the running container or the previously failed one right before the Kubelet restarted it when the policy is RestartAlways.
@smarterclayton Priority bumped up. Agree that #4373 was a regression.
New algorithm: always find the most recent container attempt and show that?
@smarterclayton Yes, we are going to support it like this.
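For illustration, a minimal sketch of what that behavior looks like from the command line; the pod name is hypothetical, and the `--previous` flag is how later kubectl releases exposed the logs of the prior container attempt, not something specified in this thread:

```sh
# Logs of the most recent container attempt, even if it has already failed.
kubectl logs my-failed-pod

# Logs of the previous, terminated instance of the container
# (relevant when RestartPolicy is Always and the container keeps crashing).
kubectl logs --previous my-failed-pod
```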
I've created an issue #4736 ... if it gets traction I am happy to work on it. I also plan to tweak our Elasticsearch cluster-level bring-up code to replace the last shred of a GCE-specific call (to get the public IP of a service) with the equivalent
cc/ @ArtfulCoder
This should be fixed on HEAD.
Kubelet does not track the last failed container instance and so this issue isn't fully resolved. Re-opening it.
@dchen1107: I assume you are working on this. Can you confirm?
Ping @dchen1107
Yes, just re-assign it to me. Will deal with it tomorrow.
Since we are now running kube-apiserver and other master components in a pod, will it be possible via some quick shortcut to go directly to the kubelet to read failed kube-apiserver logs when the kube-apiserver itself fails to start?
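A rough sketch of what such a shortcut could look like, assuming the kubelet exposes a containerLogs endpoint on its default secure port 10250; the endpoint path, port, and all names below are assumptions for illustration, not something confirmed in this thread:

```sh
# Hypothetical: query the kubelet on the master node directly for the
# kube-apiserver container's logs, bypassing the (possibly down) apiserver.
# Path format <namespace>/<pod>/<container> and port 10250 are assumed.
curl -k https://<master-node>:10250/containerLogs/<namespace>/<pod-name>/<container-name>
```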
Status: @dchen1107 has made some progress, but other issues have preempted this, so it won't be done for a couple weeks.
We can assist if necessary.
While working to build applications on Kubernetes, I found it difficult to view the logs of a container that went into a failed state. I had to SSH into the node and run docker ps -a and docker logs to see the root cause. We should provide a mechanism via the kubelet to view the logs of containers that have not yet been garbage collected, accessible through kubectl, to improve developer productivity.
/cc @smarterclayton
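The manual workaround described above, spelled out as commands; the node name and container ID are placeholders:

```sh
# 1. SSH to the node that ran the failed pod (however you normally reach it).
ssh <node-name>

# 2. List all containers, including exited ones, to find the failed container.
sudo docker ps -a

# 3. Dump the logs of the exited container to find the root cause.
sudo docker logs <container-id>
```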