
Namespace as kind #3613

Merged 2 commits into kubernetes:master on Feb 10, 2015

Conversation

derekwaynecarr
Member

A Namespace object is defined to support discovery and traversal of a Kubernetes cluster.

Model changes:

A Namespace contains the following attributes:

  1. Name
  2. Labels
  3. Annotations

A Namespace is not namespace scoped.

kube-apiserver

  • The apiserver creates a default namespace and ensures it exists.

REST API

A Namespace operates like any other resource in Kubernetes.

Like Node, it is cluster-scoped.

HTTP Semantics:

| Action | HTTP Verb | Path | Description |
| --- | --- | --- | --- |
| CREATE | POST | /api/{version}/namespaces/ | Create a namespace |
| GET | GET | /api/{version}/namespaces/{name} | Get namespace {name} |
| UPDATE | PUT | /api/{version}/namespaces/{name} | Update namespace {name} |
| DELETE | DELETE | /api/{version}/namespaces/{name} | Delete namespace {name} |
| LIST | GET | /api/{version}/namespaces/ | List all namespaces |
| WATCH | GET | /api/{version}/watch/namespaces/ | Watch all namespaces |
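The table above can be sketched as a small URL builder. This helper is purely illustrative (its name and shape are not part of the PR's code):

```go
package main

import "fmt"

// buildURL maps the REST actions from the table above to method/path pairs.
// Illustrative only; not code from this PR.
func buildURL(version, action, name string) (method, path string) {
	base := "/api/" + version
	switch action {
	case "CREATE":
		return "POST", base + "/namespaces/"
	case "GET":
		return "GET", base + "/namespaces/" + name
	case "UPDATE":
		return "PUT", base + "/namespaces/" + name
	case "DELETE":
		return "DELETE", base + "/namespaces/" + name
	case "LIST":
		return "GET", base + "/namespaces/"
	case "WATCH":
		return "GET", base + "/watch/namespaces/"
	}
	return "", ""
}

func main() {
	m, p := buildURL("v1beta3", "GET", "default")
	fmt.Println(m, p) // GET /api/v1beta3/namespaces/default
}
```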

This PR modifies the v1beta3 URL structure for resources that are scoped to a namespace.

| URL | Description |
| --- | --- |
| /api/{version}/namespaces/{namespace}/{resource}/{name} | Fetch the resource of type {resource} with name {name} in namespace {namespace} |
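The v1beta3 URL structure above can be expressed as a one-line path builder; a minimal sketch (the helper name is hypothetical):

```go
package main

import "fmt"

// resourcePath builds the v1beta3 path for a namespace-scoped resource:
// /api/{version}/namespaces/{namespace}/{resource}/{name}
// Illustrative helper only, not code from this PR.
func resourcePath(version, namespace, resource, name string) string {
	return fmt.Sprintf("/api/%s/namespaces/%s/%s/%s", version, namespace, resource, name)
}

func main() {
	fmt.Println(resourcePath("v1beta3", "default", "pods", "web-1"))
}
```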

Data Storage

A Namespace object is not in a namespace; it is stored in an etcd key space at /registry/namespace/{name} to ensure uniqueness.
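The storage layout described above means the key is derived from the name alone, which is what guarantees cluster-wide uniqueness. A minimal sketch (the function is an illustration, not the PR's registry code):

```go
package main

import "fmt"

// etcdKey mirrors the storage layout described above: Namespaces are not
// themselves namespaced, so the key is built from the name alone, which is
// what enforces cluster-wide uniqueness. Illustrative only.
func etcdKey(name string) string {
	return "/registry/namespace/" + name
}

func main() {
	fmt.Println(etcdKey("default")) // /registry/namespace/default
}
```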

Client
kubectl can list namespaces

$ kubectl get namespaces

@derekwaynecarr
Member Author

Resolves #2159

@derekwaynecarr
Member Author

Items remaining:

  1. Test cases
  2. Ability to enforce existence of a namespace prior to creating resources in that namespace

@derekwaynecarr derekwaynecarr force-pushed the namespace_as_kind branch 2 times, most recently from d5d7279 to 674337c Compare January 20, 2015 00:19
@derekwaynecarr
Member Author

Added 2 admission control plug-ins to control behavior as deployment choice:

  1. NamespaceExists - requires predeclaration of a namespace before usage
  2. NamespaceAutoProvision - automatically creates the namespace upon first usage
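The difference between the two plug-ins can be sketched as pure logic. The types and names below are simplified stand-ins, not the apiserver's actual admission interface:

```go
package main

import (
	"errors"
	"fmt"
)

// Policy selects how a request into an unknown namespace is handled,
// mirroring the two admission plug-ins described above. Simplified
// illustration only; not the PR's admission-control API.
type Policy int

const (
	NamespaceExists        Policy = iota // reject if the namespace was not predeclared
	NamespaceAutoProvision               // create the namespace on first use
)

// admit either rejects the request (NamespaceExists) or provisions the
// namespace (NamespaceAutoProvision), returning the updated namespace set.
func admit(p Policy, known map[string]bool, ns string) (map[string]bool, error) {
	if known[ns] {
		return known, nil
	}
	switch p {
	case NamespaceExists:
		return known, errors.New("namespace " + ns + " does not exist")
	case NamespaceAutoProvision:
		known[ns] = true
		return known, nil
	}
	return known, nil
}

func main() {
	known := map[string]bool{"default": true}
	_, err := admit(NamespaceExists, known, "foo")
	fmt.Println(err) // namespace foo does not exist
	known, _ = admit(NamespaceAutoProvision, known, "foo")
	fmt.Println(known["foo"]) // true
}
```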

NamespaceAutoProvision is probably best used pre-1.0 upstream to maintain backwards compatibility with current behavior.

@derekwaynecarr
Member Author

Just need to add a few more unit tests. Will handle namespace clean-up process in a follow-on PR, but want to keep this initial PR to a minimum set.

@erictune
Member

I think I like the semantics you describe. It looks like they allow creating a sort of hierarchy.

I don't see the point of making the "NamespaceExists" check optional. If we think this is the right thing to do, shouldn't we do it for everyone? Otherwise we have to reason about multiple different semantics for something which is central to the system. And it won't get any easier to make it mandatory than it is today.

@erictune erictune self-assigned this Jan 20, 2015
@derekwaynecarr
Member Author

I included the auto-provision use case for backwards compatibility, and for you guys, where I thought you were not always intent on requiring pre-declaration prior to use. I thought @jbeda noted this explicitly at our recent meet-up.

Basically, I didn't want this PR to get held up this week because it would be a breaking change. We have downstream use cases for this in the next couple of weeks, so I wanted to get the basics in this week if possible.

If at 1.0, or some future time, we mandate NamespaceExists, then I wouldn't even make it a command line argument choice and instead just have it be an implicit admission handler.

@thockin
Member

thockin commented Jan 20, 2015

In your semantics, as per the table above, what is context?
/api/{version}/ns/{context}/namespaces

I don't understand the /api/{version}/ns/{ns}/namespaces/{ns} construct. Why have {ns} twice, as opposed to just /api/{version}/ns/{ns}? Can the different {ns} placeholders be different names? It feels like this is a workaround to a problem I don't see?

@derekwaynecarr
Member Author

I ended up viewing the namespace object a lot like the minion object.

Here is the bootstrapping scenario I was working towards:

Alice admin does a cluster/kube-up.sh to create a new cluster.

The cluster has a default namespace with our default pod, service, replication controller, policy [future] items.

Alice wants to see what namespaces exist. If the policy in the default namespace had an ALLOW rule to LIST namespaces, the following would be accepted.

cluster/kubectl.sh get namespaces  
Request: GET /api/{version}/ns/default/namespaces
Response: []Namespace{ {Name: "default"} }

Alice wants to create a new namespace foo. If the policy in the default namespace had an ALLOW rule to create new namespaces, the following would be accepted.

cluster/kubectl.sh create -f foo_namespace.json 
Request: POST /api/{version}/ns/default/namespaces

Alice verifies her namespace was created by listing them from the default context.

cluster/kubectl.sh get namespaces  
Request: GET /api/{version}/ns/default/namespaces
Response: []Namespace{ {Name: "default"}, {Name: "foo"} }

Alice switches to now work in the foo namespace context.

cluster/kubectl.sh namespace foo
Using namespace foo

FUTURE: This is when policy is persisted
Alice wants to restrict users' ability to create new namespaces from foo.
Alice wants to restrict users' ability to list other namespaces from foo.
Alice wants to give Bob limited access to foo to create pods, etc.

cluster/kubectl.sh create -f policy.json
Request: POST /api/{version}/ns/foo/policies
Response: 201

Alice tries to list namespaces from foo, but the policy rejects it:

cluster/kubectl.sh get namespaces  
Request: GET /api/{version}/ns/foo/namespaces
Response: 403 Forbidden

Alice tries to create a new namespace from foo namespace context:

cluster/kubectl.sh create -f foo_namespace.json 
Request: POST /api/{version}/ns/foo/namespaces
Response: 403 Forbidden

So in this case context in my original discussion was the requesting namespace scope. The pattern of /ns/context/namespaces was proposed to provide a policy evaluation context.

Coming at this from the experience of bootstrapping a cluster and locking down who could or could not create and list namespaces from the initial bootstrapped default namespace is how I arrived at the current proposal.

Note internally, if you want to LIST/WATCH all namespaces to support controllers, we would support the following:

GET /api/{version}/namespaces  (LIST all namespaces)
GET /api/{version}/watch/namespaces (WATCH all namespaces)

I would plan to use the above APIs to support a controller to handle clean-up of resources in a namespace when a namespace is marked for deletion.
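The decision step of that cleanup controller can be sketched as follows. The event shape and field names here are hypothetical stand-ins for whatever the watch API delivers, not types from this PR:

```go
package main

import "fmt"

// event models a namespace watch notification. Hypothetical stand-in
// for the watch API's event type; not code from this PR.
type event struct {
	typ      string // "ADDED", "MODIFIED", "DELETED"
	name     string
	deleting bool // the namespace has been marked for deletion
}

// needsCleanup reports whether a watch event should trigger cleanup of the
// resources inside the namespace: the namespace is marked for deletion but
// has not yet been removed from storage.
func needsCleanup(e event) bool {
	return e.typ != "DELETED" && e.deleting
}

func main() {
	events := []event{
		{"ADDED", "default", false},
		{"MODIFIED", "foo", true},
	}
	for _, e := range events {
		fmt.Println(e.name, needsCleanup(e))
	}
}
```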

Hope this helps understand what is proposed.

@smarterclayton
Contributor

I also had the confusion. Is namespace a security context? I hadn't viewed it as such before - instead I had assumed that the policy engine made a decision on whether you could create namespaces based on other criteria (since you don't create namespaces into another namespace). I don't know that I follow the subtlety of the reasoning, so let me look at it some more.

@thockin
Member

thockin commented Jan 21, 2015

Thanks for the explanation. I now understand what you did. It is a bit tricky (and a bit elegant). In effect, creating a namespace "under" a context really creates a global namespace IFF the context is allowed to do so. So if I create /api/.../ns/foo/namespaces/bar, it also magically manifests as /api/.../ns/default/namespaces/bar?

How is this reflected in etcd storage?


@derekwaynecarr
Member Author

@thockin - in etcd storage, namespaces are stored as /registry/namespace/<name>. This is why I say a Namespace does not have a namespace. It is a globally unique name in the etcd repository. I do not intend to nest Namespace objects; I think that leads to insanity in the short term.

If you guys are not comfortable with using the security context of one namespace to allow you to derive or list others, I am fine with modifying this PR to instead just have a path structure like:

POST /api/version/ns
BODY <namespace object>
Result: create a namespace

GET /api/version/ns
Result: list all namespaces

GET /api/version/ns/{ns}
Result: get namespace named {ns}

Note if we go this path, the HTTP semantics are nice, but the policy around who can create these things will need to be defined in a Policy related PR. I just went this route since I asked what would I expect to happen from the bootstrapping scenario.

The alternate approach will require some special logic to map a RESTStorage to these base paths, but I think that is not a major concern and can be worked out. How kubectl handles this resource type will have some special behavior, but I also do not think that is a major issue and should not be a problem.

I am going to let the PR stay as currently defined, but can we try to reach consensus on what we want to do with the approach from @smarterclayton, @thockin, @erictune by end of week after you have had some time to reflect?

I would like to get this basic support in Kubernetes early next week to make some progress on issues specific to OpenShift Origin that this blocks. In interim, I can look at some other work.

Given that @erictune has thought a lot on Policy, I would be interested in getting his preference, and trust his recommendation assuming he understood what I was initially proposing.

@derekwaynecarr
Member Author

In effect, creating a namespace "under" a context really creates a global namespace IFF the context is allowed to do so. So if I create /api/.../ns/foo/namespaces/bar, it also magically manifests as /api/.../ns/default/namespaces/bar ?

Yes - this is why it's equivalent to Node.

If I create a Node, it is addressable from any namespace context because it is global.

As a result, both of these requests return the same value today.

GET /api/version/ns/default/nodes/node1
GET /api/version/ns/foo/nodes/node1

I imagined the ability to CRUD Nodes (or any global resource like a Namespace), would follow the policy pattern outlined.

@smarterclayton
Contributor

I'm provisionally ok with it, especially since bootstrapping overlap with a policy engine has some tricky bits (which I agree with in your outline).

Imagine a scenario where you have a very large cluster. You want to subdivide parts of the cluster such that user level cluster tools can interact with the cluster and "enhance" the experience for some subsets (ie you have a PaaS running across certain nodes and namespaces). The infrastructure for that tool is running on the root infrastructure as pods in a namespace (call it "dev-infra"), and has certain elevated privileges on a subset of Kube nodes, namespaces, and resources. The pods need to be able to act against the master in some context that is not "root", but is nevertheless elevated for that subset of resources. With a sufficiently rich policy engine you can define rules that apply to actors in "dev-infra" that control "dev1", "dev2", "dev3", etc. The policy for the admins of dev-infra and the pods themselves looks very similar - perhaps defined on that namespace, but reflected in the policy engine running in "default" with more sophisticated delegation rules. The admin that creates "dev1" is doing so from within "dev-infra", so "dev1" is given a descendant policy from "dev-infra". The policy that allowed that action is the parent policy of "dev1", etc.

Namespace is a nice "context" simplification - it only goes so far, but it covers a lot of space.

Also, namespaces might need to be shardable, so having their mutation be a sub path of a parent path is superior to making actions on the parent path special. It also normalizes tools - no special logic for create/update/delete.


@thockin
Member

thockin commented Jan 22, 2015

Yeah, I am pretty OK with it (though I think we need a doc with a really great explanation that I can refer back to when I forget how it works). I'll defer to Eric, though, since he holds context on policy stuff.


@erictune
Member

@derekwaynecarr

TL;DR: please switch to the form you suggested in your next-to-last comment. Expressing the policy for bootstrapping and the use cases you mention won't be a problem. I'll gladly merge after that.

Long version:

There are some nice properties to the /api/{version}/ns/{foo}/namespaces/{bar} form, but what is interesting could also be confusing for users, as evidenced by the length of the conversation on this thread.

I think that the namespaced nodes thing we have is sort of funky. I think I'd like it better if, at some point, we deprecated the current /api/version/ns/ANYTHING/nodes/node1 format and just made it like /api/version/nodes/node1. (Different PR, of course, and perhaps different author).

Likewise, that would suggest a form like this for namespace operations: /api/version/ns/{ns}. You offered to do that above, and I prefer that you do.

So, then that leaves the question of how to handle bootstrapping and the other use cases you mentioned. Here are my suggestions:

  • At cluster creation, Alice admin has a policy like {user: "alice"}. This matches everything. So Alice can do anything, including create or list a namespace. That should work now IIUC.
  • Later, we may wish to teach the KindAndNamespace function in pkg/apiserver/handlers.go that a URL like /api/version/ns/foo has attributes {Kind: "ns", Namespace: "", Name: "foo"} and that /api/version/node/bar has attributes {Kind: "node", Namespace: "", Name: "bar"}, where empty string and undefined are the same for policy evaluation.
  • You mentioned the use case: Alice wants to restrict users' ability to create new namespaces from foo. I'd rephrase that use case as Alice wants to restrict foo-team's ability to create new namespaces, which would be implemented with this policy: {Kind: "ns", Group: "foo-team", ReadOnly: true}
  • You mentioned the use case: Alice wants to restrict users' ability to list other namespaces from foo. I think by that you really mean these three use cases:
    • Alice wants to restrict foo-team's ability to list all namespaces. That would require no particular policy.
    • Alice wants to allow foo-team to get the properties of namespace "foo". That would have this policy: {Kind: "ns", Group: "foo-team", ReadOnly: true, Name: "foo"}.
    • Alice wants to allow foo-team to list the several namespaces they have access to, e.g. "foo1", "foo2", "foo3". I don't have a solution for that yet. I don't think this PR does either. Let's defer that question.
  • You mentioned the use case: Alice wants to give Bob limited access to foo to create pods, etc. We already know we can do that.
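The policy tuples in the bullets above can be sketched as a simple matcher. This is a simplified illustration of the semantics being discussed, not the actual policy engine:

```go
package main

import "fmt"

// rule mirrors the policy tuples in the bullets above, e.g.
// {Kind: "ns", Group: "foo-team", ReadOnly: true, Name: "foo"}.
// Simplified illustration; not the real policy engine.
type rule struct {
	Kind, Group, Name string
	ReadOnly          bool
}

// attrs describes a request being evaluated against a rule.
type attrs struct {
	Kind, Group, Name string
	ReadOnly          bool // true for GET/LIST, false for mutations
}

// allows treats empty rule fields as wildcards; a read-only rule
// only permits read-only requests.
func allows(r rule, a attrs) bool {
	if r.Kind != "" && r.Kind != a.Kind {
		return false
	}
	if r.Group != "" && r.Group != a.Group {
		return false
	}
	if r.Name != "" && r.Name != a.Name {
		return false
	}
	if r.ReadOnly && !a.ReadOnly {
		return false
	}
	return true
}

func main() {
	r := rule{Kind: "ns", Group: "foo-team", ReadOnly: true, Name: "foo"}
	fmt.Println(allows(r, attrs{"ns", "foo-team", "foo", true}))  // true: foo-team may GET ns "foo"
	fmt.Println(allows(r, attrs{"ns", "foo-team", "bar", false})) // false: foo-team may not create ns "bar"
}
```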

@smarterclayton
Contributor

The problem with this is every client needs to know whether opaque resource types are namespaced or not. Right now, the client can assume that /ns/{} can be specified without the server rejecting it (i.e., the server can handle that context problem). If you omit /ns/{} from nodes, then the client has to know whether a resource is namespaced before it tries to POST it. If only namespace follows this behavior, that's one thing. But if nodes and other namespaceless resources start being used, every client has to know the difference. Also, in the future, what if we do decide to namespace nodes? Then old clients are broken.

I don't think of namespace as "aspect of resource" - I think of it as "fundamental server context". Having to know whether "fundamental server context" is required for a call or not complicates clients.


@derekwaynecarr
Member Author

I think the discussion of tools, clients, etc should move away from this PR and go on #3806.

I noted I was ok with either approach last week, so I do not want to go back on that statement. I have rewritten a lot of code in this project, so I am fine to continue doing that ;-)

> Alice wants to allow foo-team to list the several namespaces they have access to, e.g. "foo1", "foo2", "foo3". I don't have a solution for that yet. I don't think this PR does either. Let's defer that question.

I am curious if you think this is a question we will attempt to answer soon, pre or post v1, and if the answer to that question should be part of #3806.

@smarterclayton
Contributor

Derek pointed out very reasonably that admin resources do not necessarily have to be namespace scoped (they could be bound to a namespace), in which case they are likely outside the common client path and called from a different CLI than kubectl (which is aggressively end-user focused). So I'm reasonably ok with assuming everything kubectl accesses is namespace scoped, and that any server-level resource that should be exposed to end users should also be namespace scoped (i.e. if you want to expose nodes so that a user can see them, that's different than the nodes an admin sees).


@derekwaynecarr derekwaynecarr force-pushed the namespace_as_kind branch 2 times, most recently from c3da481 to ed6b962 Compare February 4, 2015 18:48
@derekwaynecarr derekwaynecarr changed the title WIP: Namespace as kind Namespace as kind Feb 4, 2015
@derekwaynecarr
Member Author

This is ready for final review.

I updated the original PR description to document what is done.

TL;DR:

  1. namespace is cluster-scoped
  2. v1beta3 URL formats are /api/{version}/namespaces/{namespace}/pods/{name}
  3. kubectl works fine

Always enabling the plug-in that enforces existence of a namespace prior to usage can be done in a follow-on PR, but the Vagrant cluster requires the namespace to exist now.

@derekwaynecarr
Member Author

Rebased

@erictune
Member

erictune commented Feb 5, 2015

Will look at it Monday when I return to work. Or feel free to reassign to Clayton for a faster merge.

@abonas
Contributor

abonas commented Feb 7, 2015

This is a bit confusing (from one of the examples above): "GET /api/{version}/ns/default/namespaces" - getting all namespaces from a namespace?

@derekwaynecarr
Member Author

That was an earlier iteration of the work. The API is as presented in the main PR description.

GET /api/{version}/namespaces will just return all namespaces as expected.

@smarterclayton
Contributor

No other comments besides those above.

@derekwaynecarr
Member Author

Code review comments completed.

@derekwaynecarr
Member Author

Rebased again.

@@ -134,3 +134,30 @@ func (nodeStrategy) Validate(obj runtime.Object) errors.ValidationErrorList {
node := obj.(*api.Node)
return validation.ValidateMinion(node)
}

// namespaceStrategy implements behavior for nodes
Member

...for namespaces.

@erictune
Member

Very minor comments, and needs squash. Then LGTM.

@derekwaynecarr
Member Author

squashed commits, addressed last round of comments.

@smarterclayton
Contributor

LGTM

@smarterclayton
Contributor

Rerunning travis before merging

@smarterclayton smarterclayton added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 10, 2015
smarterclayton added a commit that referenced this pull request Feb 10, 2015
@smarterclayton smarterclayton merged commit dce4cd8 into kubernetes:master Feb 10, 2015
@derekwaynecarr derekwaynecarr deleted the namespace_as_kind branch April 17, 2015 17:56