Image pull progress should be exposed #19077
Comments
I have a similar use case for this, so I would also like to see this added :)
@kubernetes/goog-ux
Yeah, very common request. We've ended up adding heuristics like "if PodStatus is Pending and there are no containers, display a message to the user".
We generate events for this purpose. We currently have a … AFAIK, docker does not surface image pull progress. Did that change?
Practically speaking, people use pod status to figure out what the pod is doing.
By Status, are you referring to the output of …?
The requester seems to want a progress measure, e.g. 30% pulled or x/y.
The use case people have raised with us is an obvious indicator of a pull in Pod Status. Progress is nice but practically not required for well-run infra. Right now there is no distinction between waiting for scheduling and waiting for a pull, which is a common state people get into.
The output of kubectl describe pod already includes these events:
FirstSeen LastSeen Count From SubobjectPath Type Reason Message
--------- -------- ----- ---- ------------- -------- ------ -------
8s 8s 1 {default-scheduler } Normal Scheduled Successfully assigned busybox-573201948-x4rg4 to kubernetes-minion-31zg
8s 7s 2 {kubelet kubernetes-minion-31zg} spec.containers{busybox} Normal Pulling pulling image "busybox"
7s 7s 1 {kubelet kubernetes-minion-31zg} spec.containers{busybox} Normal Created Created container with docker id 7ac36eac5dc5
7s 7s 1 {kubelet kubernetes-minion-31zg} spec.containers{busybox} Normal Started Started container with docker id 7ac36eac5dc5
7s 6s 2 {kubelet kubernetes-minion-31zg} spec.containers{busybox} Normal Pulled Successfully pulled image "busybox"
6s 6s 1 {kubelet kubernetes-minion-31zg} spec.containers{busybox} Normal Created Created container with docker id 8a1d39975f1c
6s 6s 1 {kubelet kubernetes-minion-31zg} spec.containers{busybox} Normal Started Started container with docker id 8a1d39975f1c

We can possibly add more events that include the progress, if an image pull were to take longer than expected. I don't see why #25032 is needed yet.
Events aren't that useful to UIs, because you have to read and parse the event stream and correlate it with the pod. It would be much better to have a container condition in the pod status indicating that.
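For illustration, a minimal client-go sketch of the correlation a UI has to do today to decide whether a pod is pulling. This is an assumption about how such a client might be written, not code from any existing UI; it assumes a configured clientset and relies on the standard "Pulling"/"Pulled" event reasons shown above.

```go
// Package uistatus sketches how a UI today has to correlate events with a
// pod to detect an in-progress image pull.
package uistatus

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isPulling lists the events referencing the pod and reports whether the most
// recent "Pulling" event is newer than the most recent "Pulled" event.
func isPulling(ctx context.Context, clientset *kubernetes.Clientset, namespace, podName string) (bool, error) {
	events, err := clientset.CoreV1().Events(namespace).List(ctx, metav1.ListOptions{
		FieldSelector: fmt.Sprintf("involvedObject.name=%s,involvedObject.kind=Pod", podName),
	})
	if err != nil {
		return false, err
	}
	var lastPulling, lastPulled metav1.Time
	for _, e := range events.Items {
		switch e.Reason {
		case "Pulling":
			if e.LastTimestamp.After(lastPulling.Time) {
				lastPulling = e.LastTimestamp
			}
		case "Pulled":
			if e.LastTimestamp.After(lastPulled.Time) {
				lastPulled = e.LastTimestamp
			}
		}
	}
	return lastPulling.After(lastPulled.Time), nil
}
```

Every list view would have to repeat this per pod, which is the overhead the comment above is pointing at.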
Wouldn't UIs include events as well, at least the critical ones?
They should, but in large list views that correlation and setup are complicated. Both kubectl get pods and a naive UI should be able to easily show "pulling", because 95% of the time that's what is actually happening; when it isn't, that's really important (after 10s of pending without seeing a pull, you could infer something else is wrong).
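A rough sketch of that heuristic written against the current pod status API. The display strings and the 10-second threshold are illustrative assumptions; today's pod status has no explicit "pulling" signal, which is exactly the gap this issue describes.

```go
// Package podview sketches the guesswork a naive client has to do today to
// label a Pending pod in a list view.
package podview

import (
	"time"

	corev1 "k8s.io/api/core/v1"
)

// displayState guesses a human-readable state for list views. Without an
// explicit "pulling" indicator in pod status, this is only a guess.
func displayState(pod *corev1.Pod, now time.Time) string {
	if pod.Status.Phase != corev1.PodPending {
		return string(pod.Status.Phase)
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if w := cs.State.Waiting; w != nil && w.Reason != "" {
			return w.Reason // e.g. ErrImagePull, ImagePullBackOff
		}
	}
	// No pull signal at all: after ~10s of Pending we can only assume the
	// pull (or something else) is taking longer than expected.
	if now.Sub(pod.CreationTimestamp.Time) > 10*time.Second {
		return "Pending (possibly pulling, possibly stuck)"
	}
	return "Pending"
}
```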
@smarterclayton It is hard to update the image pull progress in pod status for now, because pod status is only updated before each sync. The plan for now is to periodically (maybe every 5 or 10 seconds) send an event reporting the current image pull progress, which will at least tell the user whether the image pull is stuck.
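A minimal sketch of that periodic-event plan using client-go's event recorder. The helper, the interval, the progress callback, and the message format are assumptions for illustration, not the actual kubelet implementation.

```go
// Package pullevents sketches emitting a progress event every 10 seconds
// while an image pull is running.
package pullevents

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/record"
)

// reportPullProgress emits a Normal "Pulling" event on a fixed interval until
// ctx is cancelled (i.e. the pull finishes or is aborted). The progress
// callback is a hypothetical hook reporting bytes pulled so far and the total.
func reportPullProgress(ctx context.Context, recorder record.EventRecorder, pod *corev1.Pod, image string, progress func() (pulled, total int64)) {
	ticker := time.NewTicker(10 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			pulled, total := progress()
			recorder.Eventf(pod, corev1.EventTypeNormal, "Pulling",
				"Still pulling image %q: %d/%d bytes", image, pulled, total)
		}
	}
}
```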
Doesn't pulling then effectively block all sync progress for the other containers?
@smarterclayton Assuming that exposing image pull progress is mainly for human consumption, would the following work?
This would essentially push the burden of generating human-friendly pod and container status to the Kubelet.
I don't even think progress is required; I'd be happy with a single condition indicating that a pull is in progress.
Basically, yes, I think that would make 90% of clients better.
@stuartbassett Will you be able to repurpose your PR (#25032) to do what's mentioned in #19077 (comment)? I can provide more specific design details if you have difficulty parsing #19077 (comment).
@smarterclayton Yeah, for now it will block the start of all other containers, but we definitely do not want that and should improve it in the future. :)
/remove-lifecycle stale
Do we know if there is any better way of achieving this?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
We could write up a KEP for this and pitch it in SIG Node. I'd be happy to drive this topic forward, but we should get at least 3 people on board. Who is in? 🙃
@saschagrunert I am interested - what sort of commitment do you need?
I never wrote a KEP, but I'd be thrilled to write one. I would just need relevant input, review, and maybe implementation support. We could create a small working group in Slack if you want. :)
I commented on the cri-o issue, but I think we could separate this into two parts:
The former is definitely Kube, since it would have to be exposed via an API. The second, however, is likely to be fairly specific to the container runtime implementation and the storage that backs it. Given the improvements in monitoring since this issue was opened, and that most deployments likely have Prometheus or something like it watching their container runtimes, the second might be the best place to start: it gives admins a concrete first step now while also teaching us more about how we might expose progress. I do not think the former item was intended to solve the latter, and the latter is probably more broadly applicable, since the vast majority of clusters are owned by a single team.
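For the runtime/monitoring half, here is a hedged sketch of what exposing pull progress as Prometheus metrics could look like. The metric names and labels are invented for illustration; they are not existing CRI-O or containerd metrics.

```go
// Sketch: a container runtime (or a sidecar watching it) exporting image pull
// progress as Prometheus gauges. All metric names here are hypothetical.
package main

import (
	"log"
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var pullBytes = prometheus.NewGaugeVec(prometheus.GaugeOpts{
	Name: "image_pull_bytes_downloaded", // hypothetical metric name
	Help: "Bytes downloaded so far for an in-progress image pull.",
}, []string{"image"})

var pullBytesExpected = prometheus.NewGaugeVec(prometheus.GaugeOpts{
	Name: "image_pull_bytes_expected", // hypothetical metric name
	Help: "Expected total size in bytes of an in-progress image pull.",
}, []string{"image"})

func main() {
	prometheus.MustRegister(pullBytes, pullBytesExpected)

	// The runtime would update these as layers are downloaded, for example:
	pullBytes.WithLabelValues("docker.io/library/busybox:latest").Set(12_345_678)
	pullBytesExpected.WithLabelValues("docker.io/library/busybox:latest").Set(45_000_000)

	http.Handle("/metrics", promhttp.Handler())
	log.Fatal(http.ListenAndServe(":9090", nil))
}
```

An admin dashboard could then chart downloaded versus expected bytes per image, without any API change in Kubernetes itself.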
FWIW my interest comes from working on tools layered on top of Kubernetes, where the rollout is initiated by something other than … Status needs to be tied to a specific update, since multiple overlapping updates can be issued. I can't immediately see how Prometheus solves this requirement.
I'd be happy to push both topics forward. From the runtime perspective: if we have the data at hand, then we could expose it to any interface. I see four major points, where the first one could be dropped from here:

Anything else?
I wrote an email to the SIG Node mailing list regarding the topic and the plan:
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Is there a ticket where one could vote for disabling fejta-bot? It pollutes long-standing discussions and eventually closes important issues where, I suspect, people simply got tired of interacting with the bot. I'm certainly annoyed at getting notifications from it, both as an author and as a participant in a few issues.
Is this ever going to be a thing?
Is there no progress on this most-wanted feature request?
For those of you interested in this issue, please coordinate your interest into something actionable, which in our community is a KEP. Please feel free to use community resources (the sig-node mailing list, the agenda of sig-node meetings, Google Docs to seed discussion, etc.) to figure out what needs to be in the KEP and how the feature progresses through the community process. Some good info can be found in:
You should probably read one of the existing KEPs from sig-node, for example: Looking forward to folks stepping up to help with this! Thanks in advance.
Closing the loop, there is a KEP: kubernetes/enhancements#3542
If a container is waiting for an image to be pulled before it can start, it would be nice to see the progress of that pull in kubectl, so that the user can know if they have time for another cup of coffee.
An API endpoint to give a progress update, possibly with a watch option, would be ideal.
It would also be helpful to include this in the container information in each pod.
For example, running
kubectl describe pod/<pod>
should return, in addition to all info currently returned, a field containing the % pulled (or number of bytes) of the image that each container uses. Additionally, running
kubectl pull-progress pod/<pod>
should return a JSON-encoded summary of each image being pulled in order to start the pod. This should also support a watch option, to notify the client of changes in the progress. There should be an equivalent HTTP API endpoint for this. I'm interested in using this capability to provide loading bars on a UI.
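As a sketch only, the requested JSON summary might look like the Go types below, with a simple poll loop standing in for the proposed watch support. The command, endpoint, and every field name here are hypothetical; no such API exists in Kubernetes today.

```go
// Package pullui sketches a client for the hypothetical pull-progress
// endpoint described above, suitable for driving a loading bar.
package pullui

import (
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// ImagePullProgress is a hypothetical per-image progress record.
type ImagePullProgress struct {
	Container   string `json:"container"`
	Image       string `json:"image"`
	BytesPulled int64  `json:"bytesPulled"`
	BytesTotal  int64  `json:"bytesTotal"` // 0 if the registry did not report a size
	Done        bool   `json:"done"`
}

// pollProgress repeatedly fetches the hypothetical endpoint and prints a
// percentage per image until every pull is done. A real client would use a
// watch rather than polling.
func pollProgress(url string) error {
	for {
		resp, err := http.Get(url)
		if err != nil {
			return err
		}
		var progress []ImagePullProgress
		err = json.NewDecoder(resp.Body).Decode(&progress)
		resp.Body.Close()
		if err != nil {
			return err
		}

		done := true
		for _, p := range progress {
			if !p.Done {
				done = false
			}
			if p.BytesTotal > 0 {
				fmt.Printf("%s: %.0f%%\n", p.Image, 100*float64(p.BytesPulled)/float64(p.BytesTotal))
			}
		}
		if done {
			return nil
		}
		time.Sleep(time.Second)
	}
}
```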