
Authorization for referring to Secrets #4957

Closed · erictune opened this issue Mar 2, 2015 · 34 comments

Labels: area/security, kind/feature, lifecycle/frozen, priority/awaiting-more-evidence, sig/auth

Comments

erictune (Member) commented Mar 2, 2015

When a pod is created or updated with a volumeSource of type "secret", we should validate that the user creating the pod has permission to use that secret.

pmorie (Member) commented Mar 2, 2015

@deads2k @liggitt

erictune (Member, Author) commented Mar 2, 2015

Using the volume in a pod is equivalent to having read access on the secret, so the pod creator/updater should probably be required to have read permission on the referenced secret.

Questions:

  • does the authorization happen at pod creation time (in the apiserver), or at volume pull time?
    • If in the apiserver, the user gets feedback sooner.
    • If in the kubelet, then the kubelet needs to do the authorization check, which requires some kind of authority delegation or impersonation. One option is the authorization check API proposed by @deads2k.
  • if in the apiserver, then there could be skew between the secret version checked and the secret version used (see Identify which version of a secret a pod uses. #4949). Maybe the answer to this is: don't write policies on secrets that refer to specific UIDs.
  • if in the kubelet, then we have to fail the setup late.

I don't think it is sufficient to say that a secret can be referenced simply because it is in the same namespace.
If you can start a pod that references a secret, then you can probably read the secret. But I assume we want to be able to have people in namespace "s" who can start some pods but cannot read all the secrets. Otherwise, why have secrets use an objectReference?
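
(For illustration: if the check happened in the apiserver at pod create/update time, it could be expressed as an authorizer query like the one below. This is only a sketch using today's SubjectAccessReview API (authorization.k8s.io/v1), which did not exist in this form at the time of the discussion; the user and secret names are hypothetical.)

# subject-access-review-sketch.yaml (hypothetical)
apiVersion: authorization.k8s.io/v1
kind: SubjectAccessReview
spec:
  user: alice                 # the user creating/updating the pod
  resourceAttributes:
    namespace: s              # the pod's (and secret's) namespace
    verb: get
    resource: secrets
    name: db-password         # the secret named in the pod's volumeSource

Submitting this (e.g. kubectl create -f subject-access-review-sketch.yaml -o yaml) returns status.allowed, which an admission-time check could use to accept or reject the pod.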

erictune (Member, Author) commented Mar 2, 2015

@liggitt said in #4954:

It feels a little odd to imply ACL to the secret is based on the pod creator, when the ACL that actually matters is the node's permissions at resolve time. That means if you revoke the pod creator's access to the secret, the pod can keep getting scheduled and resolving the secret just fine.

To be clear, I'm not advocating basing the ACL at secret resolve time on the pod creator. If I have permission in a namespace, create some pods, then leave the company and my permissions get revoked, those pods should continue working for other people in the namespace to manage. I just think it's weird to base the check at pod creation time on the pod creator.

erictune (Member, Author) commented Mar 2, 2015

If I have permission to make pods but not to see a secret, can I update an existing pod that references a secret I can't see?

pmorie (Member) commented Mar 2, 2015

@erictune And also, to what degree does the answer to your question depend on your specific relationship with the pod's service account?

liggitt (Member) commented Mar 2, 2015

(or even more indirectly, a pod that references a proposed securitycontext/serviceaccount thing which in turn references secrets)

erictune (Member, Author) commented Mar 2, 2015

If our goals for secrets are:

  • prevent accidental leakage when moving around pod json
  • protect secrets from principals who only have "Pod Reader"-role type permissions in a namespace

then just basing it on same-namespace is probably fine.

But if they also include:

  • restrict knowledge of secrets to a subset of "Pod Writer"-role principals in the same project

then we need to think about this more.

pmorie (Member) commented Mar 2, 2015

I think the pod-writer-role case is a valid one.

erictune (Member, Author) commented Mar 2, 2015

To be clear, I mean the following use case:

  • people with the "production role" can create pods and see all secrets
  • people with the "tester role" can create pods but can only see certain secrets, or no secrets.

erictune (Member, Author) commented Mar 2, 2015

the alternative would be to have two namespaces. I don't like that approach, but I'm putting it out there.

pmorie (Member) commented Mar 2, 2015

@erictune That's how I understood it; I think this will be a common use case.

erictune (Member, Author) commented Mar 2, 2015

@liggitt yes thinking about serviceaccounts too.

deads2k (Contributor) commented Mar 3, 2015

Just to be sure I'm caught up, the story breaks down like this.

Inside of the hammer namespace, I have two secrets: SecretOne and SecretTwo. david can see SecretOne and jordan can see SecretTwo. We want to allow david to create a pod that references SecretOne, but we want to prevent david from creating a pod that can see SecretTwo. So far this is pretty easy to do with the right policy rules.

When it comes time to schedule the pod, we have a few problems and I'm not sure which ones we're trying to solve.

  1. We don't want the kubelet to be able to read any secret it wants, so when the kubelet requests the secret, we want to make sure that it is authorized to read that particular one.
  2. When the pod is scheduled, it's possible that david no longer has rights to read SecretOne. If we allow SecretOne to be made available to the pod, then david could have written code in the pod that emails himself the password and we have leaked information. While it is technically possible to write an authorizer that prevents this condition, openshift hasn't done this and I don't think stock kubernetes has either.
  3. Even though a user with read access to a secret could find a way to leak it, we probably don't want to make it easy for them to get the secret. Should there be a special level of permission that says david can use the secret, but can't do a straight "get SecretOne"? (See the sketch below.)
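
(For illustration of point 3: stock Kubernetes authorization never grew a separate "use" verb for secrets, so the closest existing lever is scoping read access to a single named secret with resourceNames. The sketch below uses today's RBAC v1 API, which postdates this thread, and borrows the names from the example above, lower-cased to be valid object names.)

# role-secret-one-reader.yaml (sketch, RBAC v1)
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: hammer
  name: secret-one-reader
rules:
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["secret-one"]   # "SecretOne" from the example, as a valid object name
    verbs: ["get"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: hammer
  name: david-reads-secret-one
subjects:
  - kind: User
    name: david
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-one-reader
  apiGroup: rbac.authorization.k8s.io

Note that this still grants a plain "get" on the named secret, so by itself it does not express "can use but not read".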

smarterclayton (Contributor) commented:

On Mar 3, 2015, at 3:11 PM, David Eads notifications@github.com wrote:

> Just to be sure I'm caught up, the story breaks down like this.
>
> Inside of the hammer namespace, I have two secrets: SecretOne and SecretTwo. david can see SecretOne and jordan can see SecretTwo. We want to allow david to create a pod that references SecretOne, but we want to prevent david from creating a pod that can see SecretTwo. So far this is pretty easy to do with the right policy rules.
>
> When it comes time to schedule the pod, we have a few problems and I'm not sure which ones we're trying to solve.
>
> We don't want the kubelet to be able to read any secret it wants, so when the kubelet requests the secret, we want to make sure that it is authorized to read that particular one.

The kubelet is authorized to read secrets associated with the service account of pods it is running.

> When the pod is scheduled, it's possible that david no longer has rights to read SecretOne. If we allow SecretOne to be made available to the pod, then david could have written code in the pod that emails himself the password and we have leaked information. While it is technically possible to write an authorizer that prevents this condition, openshift hasn't done this and I don't think stock kubernetes has either.

If you can run code (create a pod of your own devising) under a service account, you can see the secret. If you can't create a custom pod (can only clone from existing) then you can't see secrets.

> Even though a user with read access to a secret could find a way to leak it, we probably don't want to make it easy for them to get the secret. Should there be a special level of permission that says david can use the secret, but he can't do a straight "get SecretOne"?


goltermann added the priority/awaiting-more-evidence and sig/api-machinery labels Mar 4, 2015
deads2k (Contributor) commented Mar 4, 2015

> If you can run code (create a pod of your own devising) under a service account, you can see the secret. If you can't create a custom pod (can only clone from existing) then you can't see secrets.

Ok, so that makes my number 2 less important and my number 3 more important. If being able to use a secret does not imply the ability to read that secret, we probably want a way to express those two concepts in authorization rules.

smarterclayton (Contributor) commented:

The user can't use a secret directly, but the kubelet is using the secret on their behalf. The only case we have to support is "can't read secrets, but can clone a pod template / existing pod", and that's not a very important story. I'm not sure most users would be able to set a volume secret source at all - that's an initializer or admission controller making that decision.


smarterclayton (Contributor) commented:

@liggitt has an incoming brain dump of the entire flow to put here - we had quite a few recommendations.

liggitt (Member) commented Mar 10, 2015

Sorry for the delay... sorting out my thoughts here piecemeal before I get distracted.

First, there seem to be two main divisions of secrets:

  1. Secrets that will apply to the pod, but not be readable by the containers running in the pod. The pod doesn't need to know anything about the content/manifestation of these secrets. Examples:
    • .dockercfg with credentials used by the node to docker pull the images for the pod
    • API credentials injected into API calls from the pod via some sort of proxy (c.f. LOAS Daemon #2209, though I'm not sure if that proposal is still a direction we want to go)
  2. Secrets that are readable by the pod. The pod might need to know (and possibly even specify) how these secrets manifest themselves (file paths, etc). Examples:
    • .dockercfg with credentials a builder container would use to push its output to a docker registry
    • ssh key used to do Git pulls/pushes
    • API credentials a component running in a container uses directly

For secrets a container uses directly, I don't think we'll ever get the default secret manifestations correct to the point where a container would never need to override them. That means that containers would still (potentially) have to set up volumeMounts, secret volumes with secret references, etc.
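
(For illustration of that override case, a minimal pod sketch in which the container chooses where the secret manifests. All of the names below — build-bot, push-creds, registry-dockercfg, /root/.docker — are made up for the example.)

# pod-explicit-secret-mount.yaml (hypothetical names)
apiVersion: v1
kind: Pod
metadata:
  name: build-bot
spec:
  containers:
    - name: builder
      image: example.com/builder:latest
      volumeMounts:
        - name: push-creds
          mountPath: /root/.docker   # container-chosen path, not a default manifestation
          readOnly: true
  volumes:
    - name: push-creds
      secret:
        secretName: registry-dockercfg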

smarterclayton (Contributor) commented:

Subtype of 2 - Secrets that can be generically injected into /run/secrets for the container to use (i.e., pod doesn't need to know how the secrets manifest)


erictune (Member, Author) commented:

Now that #5807 is in, pods can only reference secrets in the same namespace.

erictune (Member, Author) commented:

Question for @pmorie or @deads2k

In the OpenShift role-based access control model, can someone with the Reader role in namespace N read secrets in that namespace? I'm assuming so.

smarterclayton (Contributor) commented:

In our model, only if they have policy-level access. Read/write access to secrets was reserved for "Edit". We talked about having a default role slightly below "Edit" called "EditWithSecrets" that allowed you to read secrets but not write them, or a "ViewSecrets" role that was "View" with the added ability to see secrets.

We talked about enforcing access to secrets through a service account, by enabling an endpoint /namespaces/bar/pod/foo/secrets that follows the same permission rules (the kubelet is granted access to it because the pod is scheduled to that kubelet). We hadn't gotten very far on that though.

----- Original Message -----

> Question for @pmorie or @deads2k
>
> In the openshift role-based access control model, can someone with Reader
> role in namespace N read secrets in that namespace? Assuming so?
>
>   • In the future, we may want finer grained permissions.
>     • a role that can see pods, but not their secrets, such as a UI or
>       monitoring component or auditor role.
>     • multiple secrets in a namespace, and different visibility for them (e.g.
>       test database keys are different from the webserver private key).
>       However, this can be worked around for now by using multiple namespaces.
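
(For illustration of the "EditWithSecrets" / "ViewSecrets" idea above, a minimal sketch of a read-only role that also covers secrets, written against today's RBAC v1 API. The role name, namespace, and resource list are illustrative, not the OpenShift definitions.)

# role-view-with-secrets.yaml (sketch, RBAC v1)
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: bar                      # namespace name is hypothetical
  name: view-with-secrets
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "secrets"]
    verbs: ["get", "list", "watch"]   # read-only, including secrets; no write access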

erictune (Member, Author) commented Mar 25, 2015

@liggitt @pmorie @smarterclayton @deads2k
Have you thought about whether the "pod-can-use-secret" check would happen at the kubelet or the apiserver, or both?

I sort of like the idea of doing it in the apiserver, because it gives immediate feedback.

smarterclayton (Contributor) commented:

Just thinking it through: it simplifies the kubelet interaction if it can get secrets for a pod by API, although that means they may want to denormalize based on service account as a performance optimization. Someone from our ops team asked me about this the other day, and the more the kubelet has to know in order to do the right thing, the harder it is to verify that it is secure.

If there were /pods/foo/secrets or /podSecrets/foo, the lookup could be mediated by the server, the pod-to-kubelet access check would be a simple O(1) check, and the secret lookup just a constant factor.

We haven't talked about the kubelet security role, but we proposed making it a programmatic policy check (the user kubelet:node123 has access to any pod that has status.host == node123) vs trying to articulate it in a generic fashion using only verbs. For now we've punted on that until Jordan lands his node security fixes, and probably until after 1.0.


pmorie (Member) commented Mar 27, 2015

@erictune I have thought about it, but I'm not sure where I stand on it.

cjcullen (Member) commented:

/subscribe

bgrant0607 added the team/api label and removed the sig/api-machinery label Sep 16, 2015
0xmichalis added the sig/auth label and removed the team/api (deprecated - do not use) label Mar 20, 2017
liubog2008 commented:
Is there any progress? I tried to use RBAC for authorization, but RBAC still has this security problem.

Environment

  1. RBAC alpha
  2. Kubernetes v1.5.2

How to reproduce

  1. "caicloud" creates a secret from secret.yaml.
  2. "other" tries to access the secret, and the request is denied.
  3. "other" creates a pod "test-pod" which uses the secret "test".
  4. "test-pod" is created successfully.
  5. "other" can now read the secret via exec.

secret and pod

# secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: test
type: Opaque
data:
  username: dGVzdC11c2Vy
  password: dGVzdC1wYXNz
# test.json
{
    "kind": "Pod",
    "apiVersion": "v1",
    "metadata": {
        "name": "test-pod"
    },
    "spec": {
        "containers": [
            {
                "name": "foo",
                "image": "cargo.caicloud.io/caicloud/nginx",
                "volumeMounts": [{
                    "name": "foo",
                    "mountPath": "/etc/foo",
                    "readOnly": true
                }]
            }
        ],
        "volumes": [{
            "name": "foo",
            "secret": {
                "secretName": "test"
            }
        }]
    }
}

User who cannot read secret

kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: user-cannot-read-secret
rules:
  - apiGroups: ["*"]
    resources:
    - pods
    - pods/exec
    verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: user-cannot-read-secret-role-binding
subjects:
  - kind: User
    name: other
roleRef:
  kind: Role
  name: user-cannot-read-secret
  apiGroup: rbac.authorization.k8s.io

User who is namespace owner

kind: Role
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: namespace-owner
rules:
  - apiGroups: ["*"]
    resources: ["*"]
    verbs: ["*"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1alpha1
metadata:
  name: namespace-owner
subjects:
  - kind: User
    name: caicloud
roleRef:
  kind: Role
  name: namespace-owner
  apiGroup: rbac.authorization.k8s.io

liggitt (Member) commented Apr 5, 2017

Correct, that is currently working as designed. The ability to create pods in a namespace currently implies the ability to read secrets in the namespace (even if you couldn't exec in, you could run an image that would send the mounted secrets to you).
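
(To make that concrete, a sketch reusing the names from the reproduction above — secret "test", keys "username"/"password", mount path /etc/foo: a pod that mounts the secret and simply prints it to its own logs, so no exec permission is even needed.)

# leak-demo.yaml (sketch)
apiVersion: v1
kind: Pod
metadata:
  name: leak-demo
spec:
  restartPolicy: Never
  containers:
    - name: leak
      image: busybox
      # Print the mounted secret to the container log; anyone who can read pod
      # logs (or who points the command at an external address) then sees it.
      command: ["sh", "-c", "cat /etc/foo/username /etc/foo/password"]
      volumeMounts:
        - name: foo
          mountPath: /etc/foo
          readOnly: true
  volumes:
    - name: foo
      secret:
        secretName: test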

ddysher (Contributor) commented Apr 10, 2017

@liggitt But this seems to conflict with what ABAC or RBAC promises. If we have ABAC or RBAC policies that clearly rule out access to secrets, like the user-cannot-read-secret role above, then I would expect the user not to be able to access secrets at all. In other words, if I wanted them to be able to mount secrets, I would use this role instead (or maybe two rules, if creating secrets should not be allowed):

rules:
  - apiGroups: ["*"]
    resources:
    - pods
    - pods/exec
    - secrets
    verbs: ["*"]

If we consider this working as designed, is there a way for us to 'tell' the secret owner about this (i.e., that a pod creator/updater can consume secrets or configs in this namespace even if you didn't grant that permission in the role)?

fejta-bot commented:
Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Dec 23, 2017
liggitt added the kind/feature label Jan 6, 2018
mikedanese self-assigned this Jan 26, 2018
mikedanese (Member) commented:

ref kubernetes/community#1604

fejta-bot commented:
Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Feb 25, 2018
mikedanese added the lifecycle/frozen label and removed the lifecycle/rotten label Feb 25, 2018
liggitt (Member) commented Apr 3, 2019

The ACL boundary for the content of pod specs is the namespace level (a pod spec can refer to a service account, secret, or configmap in its namespace)

/close

k8s-ci-robot (Contributor) commented:

@liggitt: Closing this issue.

In response to this:

> The ACL boundary for the content of pod specs is the namespace level (a pod spec can refer to a service account, secret, or configmap in its namespace)
>
> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
