
Make label metadata from container images available to Pods and their containers #47368

Open
kent-h opened this issue Jun 12, 2017 · 51 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@kent-h

kent-h commented Jun 12, 2017

It should be possible for the user to apply custom labels to a pod's containers, and have those labels propagate down into docker.
This would be helpful for interacting with non-k8s-specific programs that access docker directly.

Related: #3764

Outdated: #13513 (I don't want to fight a reopen-war, but this probably should not have been closed.)

@0xmichalis
Contributor

@kubernetes/sig-node-feature-requests

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. kind/feature Categorizes issue or PR as related to a new feature. labels Jun 12, 2017
@evie404
Contributor

evie404 commented Jun 14, 2017

would pod labels through downward api be enough? https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/
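For reference, a minimal sketch of that downward API suggestion (pod and volume names are illustrative). It exposes the pod's labels as the file /etc/podinfo/labels inside the container; note this makes the data visible only inside the container, not to the Docker daemon:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labels-demo        # illustrative name
  labels:
    app: web
spec:
  containers:
    - name: main
      image: nginx         # any image works here
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: "labels"
            fieldRef:
              fieldPath: metadata.labels
```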

@kent-h
Author

kent-h commented Jun 14, 2017

@rickypai No, that does not work for what I'm proposing. I'm looking to store custom metadata (labels) in docker. This is so that external/management systems can query docker for this data, without needing to be aware of k8s.

The downward api allows metadata to be made accessible inside a single container.
I want to make metadata available to docker-aware containers & external docker-aware systems.
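The kind of external, Kubernetes-unaware query being described might look like this sketch, which filters `docker inspect`-style records by a custom label (the records and the label key `com.example.owner` are invented for illustration; in practice the JSON would come from the Docker API or CLI):

```python
import json

# Minimal records in the shape `docker inspect` reports (Config.Labels);
# the data and the label key are made up for illustration.
inspect_output = json.loads("""
[
  {"Name": "/web",   "Config": {"Labels": {"com.example.owner": "team-a"}}},
  {"Name": "/cache", "Config": {"Labels": {}}}
]
""")

def containers_with_label(records, key):
    """Return (name, value) pairs for containers carrying the label `key`."""
    return [
        (r["Name"], r["Config"]["Labels"][key])
        for r in records
        if key in r.get("Config", {}).get("Labels", {})
    ]

print(containers_with_label(inspect_output, "com.example.owner"))
# [('/web', 'team-a')]
```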

@boosh

boosh commented Nov 16, 2017

This seems like the only open docker label issue at the moment.

I'm adding docker labels to containers in my CD pipeline (git hash, build ID, etc.) and want this data to be displayed by a status endpoint on my microservices, so I can be unequivocally certain of which version of an application is deployed, both manually and via automation.

I can't currently find a way of retrieving docker labels from within k8s, hence I'm here. Obviously using pod labels isn't appropriate for this since they're not immutably set at container build time.

Is there any progress on this?
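As a workaround sketch for the status-endpoint use case: since image labels aren't visible from inside the container either, one option is to have the CD pipeline bake the same values in as environment variables at build time and surface them from the service. Variable names here are illustrative:

```python
import json
import os

def build_info():
    """Collect build metadata baked into the image as environment variables."""
    return {
        "git_sha": os.environ.get("GIT_SHA", "unknown"),
        "build_id": os.environ.get("BUILD_ID", "unknown"),
    }

# Simulate a value that `docker build --build-arg` plus ENV would have set:
os.environ["GIT_SHA"] = "abc123"
print(json.dumps(build_info()))
# {"git_sha": "abc123", "build_id": "unknown"}
```

A /status HTTP handler would simply serialize this dictionary.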

@lambertjosh

This would be extremely useful as well, for improving the ability of monitoring systems to utilize cAdvisor data.

We use labels to denote different deployments within the cluster, and it would be so much easier to monitor if these labels were included in the Prometheus metrics that are output by cAdvisor.

Without this, we are left trying to back out the deployment from a combination of the existing labels like image, namespace, etc.

@Blasterdick

As I understand it, cAdvisor requires a Docker label on the container pointing to a JSON config with the custom-metrics endpoint URL to scrape metrics from. So if you have a multi-container pod with Nginx and Redis, you have to add a label to both. And it seems like Docker labels are ignored, so we need a Kubernetes label on every container to make this work.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 21, 2018
@kent-h
Author

kent-h commented May 22, 2018

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 22, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 20, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 19, 2018
@boosh

boosh commented Sep 21, 2018

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Sep 21, 2018
@eli-tsikel

We would also like to have this feature. We want to use Argo to define our workflows; it has its own spec to define a container (not a Pod), and we'd like to add labels in the container section.

Thanks.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 3, 2019
@erhangullu

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 28, 2019
@sonykus

sonykus commented Apr 23, 2019

We've also been looking for this, thinking that by 2019 it should probably be a given already.
Unfortunately, it turns out that's not the case yet.

Of course, one could say "Well, write your own admission controller that adds this functionality, and you're done" (as if I've been writing admission controllers in golang my entire life). I think it would be pretty useful if this just worked out of the box, and the community could widely benefit from the goodness of metadata / cross-label searching.

@jribeauv

Hi,

I totally agree with and support sonykus's request.
Hope somebody will be able to convince contributors to add this feature.
In the meantime, is there really no "tricky" way to send metadata to a Docker container that is part of a Kubernetes deployment generated from a docker-compose file?
Thanks.
JP

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jul 29, 2019
@kent-h
Author

kent-h commented Jul 31, 2019

/remove-lifecycle stale

@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot
Contributor

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@coreyoconnor

IMO this request is still valid, though I can't currently offer any assistance, nor does it look like anybody else is available. Still, it's best to keep valid requests open?

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 14, 2020
@troyfontaine

/reopen

@k8s-ci-robot
Contributor

@troyfontaine: You can't reopen an issue/PR unless you authored it or you are a collaborator.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@jdef
Contributor

jdef commented Sep 9, 2020

/reopen

@k8s-ci-robot
Contributor

@jdef: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot reopened this Sep 9, 2020
@grosser

grosser commented Sep 26, 2020

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Sep 26, 2020
@ryanrolds

ryanrolds commented Feb 23, 2023

This would be valuable even if it didn't pass down in to the CRI. In cases where sidecars are being injected, labels and annotations can help track information - like ownership - that differs from the pod or other containers.

@devopstales

Similar to this: kubernetes/enhancements#1866

@virdz

virdz commented Sep 1, 2023

is there any movement on this?

@andrea-tomassi

I totally agree; this feature would be very useful for deeper interoperability with a registry, of course without the burden of writing an admission controller from scratch. +1

@sftim
Contributor

sftim commented Sep 6, 2023

/retitle Make label metadata from container images available to Pods and their containers

@k8s-ci-robot k8s-ci-robot changed the title Add Optional Label Metadata to Pods' Containers Make label metadata from container images available to Pods and their containers Sep 6, 2023
@sftim
Contributor

sftim commented Sep 6, 2023

This issue is ready for a volunteer to produce an initial design and present it to SIG Node.

@sftim
Contributor

sftim commented Sep 6, 2023

(to nudge Prow)
/remove-triage duplicate

@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Sep 6, 2023
@k8s-ci-robot
Contributor

@sftim: Those labels are not set on the issue: triage/duplicate

In response to this:

(to nudge Prow)
/remove-triage duplicate

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@trevor-vaughan

Ended up here when looking for what seemed to be an obvious auditing feature in k8s. Without being able to bind data authoritatively upstream, reverse auditing is very difficult.

You don't want the information inside the container because that may unnecessarily expose internal build system information to the hosted application.

@sftim
Contributor

sftim commented Nov 21, 2023

@trevor-vaughan - if you'd like help to contribute the feature, you can ask and we'll try to facilitate.
There are two relevant channels in Slack: #kubernetes-new-contributors and #sig-node. To join Slack, if you're not already in the workspace, visit https://slack.k8s.io/ and you can get an invitation.

Also if you're more interested in sponsoring someone to write the feature, you could look at https://cncf.landscape2.io/?group=Certified+partners+and+providers

@trevor-vaughan

@sftim Heh, I'll add it to my ever-growing list of FOSS contributions that I want to make and never seem to have time for :-/.

That said, how do you all practically handle container-level tracking metadata at runtime? Post-run modification of labels? Something more magical?

I do see how binding the information into the container could work but that seems to raise an unnecessary risk for a compromised application.

It seems logical to:

  • build -> add labels
  • push to registry -> add more labels
  • pull from registry -> persist labels

Given the standard stack of OCI labels, it could be extremely useful to provide discoverable awareness of the running system.
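For the "build -> add labels" step, the predefined OCI annotation keys would carry this metadata; a hedged Dockerfile fragment with placeholder values:

```dockerfile
# org.opencontainers.image.* are the standard OCI annotation keys;
# the values below are placeholders a CI pipeline would fill in.
LABEL org.opencontainers.image.source="https://git.example.com/org/repo" \
      org.opencontainers.image.revision="<git-commit-sha>" \
      org.opencontainers.image.version="1.2.3"
```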

Finally, if you could toss a pointer to the section of code where the images are loaded, that would be really helpful. It's possible that the fix could be quite straightforward, since it should be a map of pod to container hashref (already included) to label set.

@sftim
Contributor

sftim commented Nov 21, 2023

Those questions belong elsewhere really @trevor-vaughan - maybe #kubernetes-users in the same Slack workspace for the first question, and #sig-node for the second question?

@trevor-vaughan

Ah, so pointers to a workaround for a feature request, as well as tie-in information for helping to fix a feature request, go into systems that are not tied into the feature request?

I think I get it but is there a map somewhere?

@trevor-vaughan

trevor-vaughan commented Nov 21, 2023

Ah, found it (surprisingly not quite straightforward). Will take things to the mailing list and try to cross-reference if I can.

@artemptushkin

artemptushkin commented Mar 27, 2024

I'd like to join the others in saying that this feature request would be very useful. It would allow exposing build metadata, i.e. the labels of an image, to the container and pod. It would improve observability, so we could see things like the git commit author of a deployed application without needing to reverse-engineer it through the SHA or other means.

It's sad that the bot closed the enhancement
