
Identify which version of a secret a pod uses. #4949

Closed
erictune opened this issue Mar 2, 2015 · 12 comments
Labels
area/api Indicates an issue on api area.
area/usability
kind/feature Categorizes issue or PR as related to a new feature.
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.
priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done.
sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery.
sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@erictune
Member

erictune commented Mar 2, 2015

Suppose I create a secret like this:

   "typeMeta": {
      "kind": "secret",
    },
    "objectMeta": {
      "name": "foo",
      "uid": "123",
      "resourceVersion": "789"
    },
    "data" : "abc"
}

And I make a pod that references that secret, using an ObjectReference, like this:

   "typeMeta": {
      "kind": "pod",
    },
    "spec" : { 
      "volumes" : [
        { 
           "name" : "shh",
           "source": {
                "secret" : {
                  "target" : {
                      "name": "shh",
                      "namespace": "myproj"
                 }
            }
     }
}

When is the ObjectReference bound? How complete an ObjectReference can/should the creator specify?

If the pod specifies name and namespace, but not uid or resourceVersion, then binding could happen in the apiserver or at kubelet.

Do we agree that users need to be able to see what binding was made, so that e.g. the user can see if all pods are updated to use a new value of a secret?

If binding happens in the apiserver then:

  • apiserver could fill in the missing uid and resourceVersion of the objectReference at pod creation time
  • this goes in the podSpec.Volume[i].source.secret.target
  • user could see binding by getting pod again and looking in the podSpec
  • does rebinding happen on update or not?
  • What would kubelet do if it is unable to find the matching uid and resourceVersion at startup time due to a race with updating a secret? Fail setup? That seems bad.
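To illustrate the first option: after apiserver-side binding, the stored podSpec might carry the fully resolved reference. This is only a sketch of the proposal; the uid and resourceVersion fields shown inside the secret target are hypothetical, not an implemented API:

```json
"volumes": [
  {
    "name": "shh",
    "source": {
      "secret": {
        "target": {
          "name": "shh",
          "namespace": "myproj",
          "uid": "123",
          "resourceVersion": "789"
        }
      }
    }
  }
]
```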

If binding happens in the kubelet, then:

  • kubelet needs to report back what binding was made.

The latter seems preferable.
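If the kubelet does the binding, it would need somewhere in pod status to report the reference it resolved. A hypothetical shape for that report (none of these status fields exist; this is just to make the idea concrete):

```json
"status": {
  "volumes": [
    {
      "name": "shh",
      "resolvedSecret": {
        "name": "shh",
        "namespace": "myproj",
        "uid": "123",
        "resourceVersion": "789"
      }
    }
  ]
}
```

With this, a user could list pods and compare each resolved resourceVersion against the secret's current one to see which pods have picked up an updated value.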

@erictune
Member Author

erictune commented Mar 2, 2015

@pmorie @bgrant0607

@pmorie
Member

pmorie commented Mar 2, 2015

@erictune @bgrant0607

I think the binding should happen in the kubelet. At what point during the pod lifecycle should the binding happen? Before pod start seems like a good place to start discussion.

Regardless of where/when binding happens, if we think it is a valid use-case to be able to specify the uid or resourceVersion, we need to think through the case where the specific version is unavailable. At the least we should create an event with cause information. I don't think you want the pod to start in this case unless the exact specified version is available.

I have some other comments about status and how this should appear to the user but I'll comment on #4950 with those.

@markturansky
Contributor

Adding "namespace" to the object reference seems redundant. You're already in a namespace when you're working with a pod. I needed the same thing, though, so I added a defaulting function to assign the pod's namespace to the object reference I am using in my PR.

@smarterclayton
Contributor

Object reference namespaces should remain unset when stored so they can be copied to other objects without requiring a change. An object reference without a namespace means "use this namespace", which is not the same as "use the namespace named foo".

@markturansky
Contributor

What if you don't want to copy the object reference to another object without requiring a change of namespace?

Eric's JSON example above sans namespace:

"source": {
                "secret" : {
                  "target" : {
                      "name": "shh",
                 }
            }

@smarterclayton are you saying this is incorrect? If so, why? It looks like a foreign key to something in the same namespace, but it doesn't require the user to enter the namespace again in the "target" struct. They are already in the namespace. Meanwhile, you need the namespace in the volume plugin to look up the Secret.

@bgrant0607 bgrant0607 added area/api Indicates an issue on api area. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. area/usability labels Mar 3, 2015
@smarterclayton
Contributor

On Mar 3, 2015, at 12:09 AM, Mark Turansky notifications@github.com wrote:

What if you don't want to copy the object reference to another object without requiring a change of namespace?

Then you should fill out the namespace. But secrets are the last thing you want a user to be accessing across namespaces. And remember, secrets are subordinate to the service account, which means that an initializer still has to do that mapping. It may be that secrets are forced to be a lookup to the namespace of the pod (not a default).
Eric's JSON example above sans namespace:

"source": {
"secret" : {
"target" : {
"name": "shh",
}
}
@smarterclayton are you saying this is incorrect? if so, why? Because it seems so like a foreign key here to something in the same namespace, but it doesn't require the user to enter the namespace again in the "target" struct. They are already in the namespace. Meanwhile, you need the namespace in the volume plugin to lookup the Secret.

Then the pod namespace should be made available to the volume plugin.


@goltermann goltermann added the priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. label Mar 4, 2015
@bgrant0607 bgrant0607 added team/api and removed sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Sep 16, 2015
@k8s-github-robot

@erictune There are no sig labels on this issue. Please add a sig label by:
(1) mentioning a sig: @kubernetes/sig-<team-name>-misc
(2) specifying the label manually: /sig <label>

Note: method (1) will trigger a notification to the team. You can find the team list here.

@k8s-github-robot k8s-github-robot added the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label May 31, 2017
@0xmichalis
Contributor

/sig node
/sig api-machinery

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Jun 10, 2017
@k8s-github-robot k8s-github-robot removed the needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. label Jun 10, 2017
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 26, 2017
@bgrant0607
Member

/remove-lifecycle stale
/lifecycle frozen

see also #22368

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 22, 2018
@liggitt liggitt added the kind/feature Categorizes issue or PR as related to a new feature. label Jan 14, 2019
@ffromani
Contributor

/close
Pretty much no activity in more than two years, and it is not very clear what the next steps are.
Feel free to re-open once we have clearly actionable items on the kubelet side.

@k8s-ci-robot
Contributor

@fromanirh: Closing this issue.

In response to this:

/close
Pretty much no activity in more than two years, and it is not very clear what the next steps are.
Feel free to re-open once we have clearly actionable items on the kubelet side.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
