Namespace as kind #3613
Conversation
Resolves #2159

Items remaining: …
Force-pushed d5d7279 to 674337c
Added 2 admission control plug-ins to control behavior as a deployment choice: …

NamespaceAutoProvision is probably best used pre-1.0 for upstream, to maintain backwards compatibility with current behavior.
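As a rough sketch of the two plug-in behaviors — a toy model with an in-memory store, not the actual admission code in this PR; the store type and function names are invented for illustration:

```go
package main

import "fmt"

// namespaceStore is a toy stand-in for the real namespace registry.
type namespaceStore map[string]bool

// namespaceExists models the NamespaceExists plug-in: reject any
// request whose namespace has not been declared up front.
func namespaceExists(store namespaceStore, ns string) error {
	if !store[ns] {
		return fmt.Errorf("namespace %q does not exist", ns)
	}
	return nil
}

// namespaceAutoProvision models the NamespaceAutoProvision plug-in:
// create the namespace on first use, preserving current behavior.
func namespaceAutoProvision(store namespaceStore, ns string) error {
	store[ns] = true
	return nil
}

func main() {
	store := namespaceStore{"default": true}
	fmt.Println(namespaceExists(store, "dev") != nil) // rejected before declaration
	namespaceAutoProvision(store, "dev")              // provisions "dev" on first use
	fmt.Println(namespaceExists(store, "dev") == nil) // accepted afterwards
}
```

The deployment choice is then just which of the two handlers the apiserver is started with.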
Force-pushed 3e37047 to bfbf84b
Just need to add a few more unit tests. Will handle the namespace clean-up process in a follow-on PR, but want to keep this initial PR to a minimal set.

I think I like the semantics you describe. It looks like it allows creating hierarchies of a sort. I don't see the point of making the "NamespaceExists" check optional. If we think this is the right thing to do, shouldn't we do it for everyone? Otherwise we have to reason about multiple different semantics for something which is central to the system. And it won't get any easier to make it mandatory than it is today.

I included the auto-provision use case for backwards compatibility, and for you guys, where I thought you were not always intent on requiring pre-declaration prior to use. I thought @jbeda noted this explicitly at our recent meet-up. Basically, I didn't want this PR to get held up this week because it would be a breaking change. We have downstream use cases for this in the next couple of weeks, so I wanted to get the basics in this week if possible. If at 1.0, or some future time, we mandate NamespaceExists, then I wouldn't even make it a command-line argument choice and would instead just have it be an implicit admission handler.

In your semantics, as per the table above, what is context? I don't understand the …
I ended up viewing the namespace object a lot like the minion object. Here is the bootstrapping scenario I was working towards:

- Alice admin does a …
- The cluster has a …
- Alice wants to see what namespaces exist. If the policy in the …
- Alice wants to create a new namespace …
- Alice verifies her namespace was created by listing them from the default context.
- Alice switches to now work in the …
- FUTURE: This is when policy is persisted …
- Alice tries to list namespaces from …
- Alice tries to create a new namespace from …

So in this case … Coming at this from the experience of bootstrapping a cluster and locking down who could or could not create and list namespaces from the initial bootstrapped …

Note: internally, if you want to LIST/WATCH all namespaces to support controllers, we would support the following: …

I would plan to use the above APIs to support a controller to handle clean-up of resources in a namespace when a namespace is marked for deletion. Hope this helps understand what is proposed.
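A toy sketch of that clean-up controller idea — the struct and its markedForDeletion flag are invented stand-ins for whatever the real object will carry, and the controller would discover namespaces via the cluster-wide LIST/WATCH endpoints mentioned above:

```go
package main

import "fmt"

// namespace is a toy model of the resource; the real object carries
// much more state than a deletion flag.
type namespace struct {
	name              string
	markedForDeletion bool
}

// toPurge returns the namespaces whose contents a clean-up controller
// would need to remove on its next sweep.
func toPurge(all []namespace) []string {
	var out []string
	for _, ns := range all {
		if ns.markedForDeletion {
			out = append(out, ns.name)
		}
	}
	return out
}

func main() {
	all := []namespace{{"default", false}, {"dev", true}}
	fmt.Println(toPurge(all)) // [dev]
}
```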
Force-pushed e182bf0 to a9a5fa6
I also had that confusion. Is namespace a security context? I hadn't viewed it as such before - instead I had assumed that the policy engine made a decision on whether you could create namespaces based on other criteria (since you don't create namespaces inside another namespace). I don't know that I follow the subtlety of the reasoning, so let me look at it some more.
Thanks for the explanation. I now understand what you did. It is a bit … How is this reflected in etcd storage?
@thockin - in etcd storage, namespaces are stored as …

If you guys are not comfortable with using the security context of one namespace to allow you to derive or list others, I am fine with modifying this PR to instead just have a path structure like: …

Note if we go this path, the HTTP semantics are nice, but the policy around who can create these things will need to be defined in a Policy-related PR. I just went this route since I asked what I would expect to happen from the bootstrapping scenario. The alternate approach will require some special logic to map a …

I am going to let the PR stay as currently defined, but can we try to reach consensus on what we want to do with the approach from @smarterclayton, @thockin, @erictune by end of week, after you have had some time to reflect? I would like to get this basic support into Kubernetes early next week to make progress on issues specific to OpenShift Origin that this blocks. In the interim, I can look at some other work. Given that @erictune has thought a lot on …
Yes - this is why it's equivalent to … If I create a … As a result, both of these requests return the same value today: …
I imagined the ability to CRUD Nodes (or any global resource, like a Namespace) would follow the policy pattern outlined.
I'm provisionally ok with it, especially since bootstrapping overlap with a policy engine has some tricky bits (which I agree with in your outline).

Imagine a scenario where you have a very large cluster. You want to subdivide parts of the cluster such that user-level cluster tools can interact with the cluster and "enhance" the experience for some subsets (ie you have a PaaS running across certain nodes and namespaces). The infrastructure for that tool is running on the root infrastructure as pods in a namespace (call it "dev-infra"), and has certain elevated privileges on a subset of Kube nodes, namespaces, and resources. The pods need to be able to act against the master in some context that is not "root", but is nevertheless elevated for that subset of resources.

With a sufficiently rich policy engine you can define rules that apply to actors in "dev-infra" that control "dev1", "dev2", "dev3", etc. The policy for the admins of dev-infra and the pods themselves looks very similar - perhaps defined on that namespace, but reflected in the policy engine running in "default" with more sophisticated delegation rules. The admin that creates "dev1" is doing so from within "dev-infra", so "dev1" is given a descendant policy from "dev-infra". The policy that allowed that action is the parent policy of "dev1", etc.

Namespace is a nice "context" simplification - it only goes so far, but it covers a lot of space. Also, namespaces might need to be shardable, so having their mutation be a sub-path of a parent path is superior to making actions on the parent path special. It also normalizes tools - no special logic for create/update/delete.
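The delegation idea in the dev-infra example can be sketched as a walk up a parent map. This is entirely hypothetical — the PR does not implement hierarchical policy — and the names simply follow the example:

```go
package main

import "fmt"

// policyChain walks from a namespace up through its ancestors; a
// delegating policy engine could evaluate rules in this order, most
// specific first.
func policyChain(parents map[string]string, ns string) []string {
	chain := []string{ns}
	for {
		p, ok := parents[ns]
		if !ok {
			return chain
		}
		chain = append(chain, p)
		ns = p
	}
}

func main() {
	// "dev1" was created from within "dev-infra", so it gets a
	// descendant policy; "dev-infra" itself hangs off "default".
	parents := map[string]string{"dev1": "dev-infra", "dev-infra": "default"}
	fmt.Println(policyChain(parents, "dev1")) // [dev1 dev-infra default]
}
```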
Yeah, I am pretty OK with it (though I think we need a doc with a really …)
Force-pushed 0eb698b to c680258
TL;DR: please switch to the form you suggested in your next-to-last comment. Expressing the policy for bootstrapping and the use cases you mention won't be a problem. I'll gladly merge after that.

Long version: There are some nice properties with the …

I think that the namespaced-nodes thing we have is sort of funky. I think I'd like it better if, at some point, we deprecated the current …

Likewise, that would suggest a form like this for namespace operations: …

So, then that leaves the question of how to handle bootstrapping and the other use cases you mentioned. Here are my suggestions: …
The problem with this is every client needs to know whether opaque resource types are namespaced or not. Right now, the client can assume that /ns/{} can be specified without the server rejecting it (i.e., the server can handle that context problem). If you omit /ns/{} from nodes, then the client has to know whether a resource is namespaced before it tries to POST it. If only namespace follows this behavior, that's one thing. But if nodes and other namespaceless resources start being used, every client has to know the difference. Also, in the future, what if we do decide to namespace nodes? Then old clients are broken.

I don't think of namespace as "aspect of resource" - I think of it as "fundamental server context". Having to know whether "fundamental server context" is required for a call or not complicates clients.
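The client-side burden being argued here can be made concrete with a tiny sketch (a hypothetical helper, not code from this PR; the path shapes follow the /ns/{} convention discussed in the thread):

```go
package main

import "fmt"

// resourcePath shows the concern above: if cluster-scoped kinds drop
// the /ns/{} segment, every client must know up front whether a
// resource is namespaced before it can even build a URL.
func resourcePath(version, namespace, resource string, namespaced bool) string {
	if namespaced {
		return fmt.Sprintf("/api/%s/ns/%s/%s", version, namespace, resource)
	}
	return fmt.Sprintf("/api/%s/%s", version, resource)
}

func main() {
	fmt.Println(resourcePath("v1beta3", "default", "pods", true)) // /api/v1beta3/ns/default/pods
	fmt.Println(resourcePath("v1beta3", "", "nodes", false))      // /api/v1beta3/nodes
}
```

The `namespaced` flag is exactly the per-resource knowledge the comment says clients should not need.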
I think the discussion of tools, clients, etc. should move away from this PR and go on #3806. I noted I was ok with either approach last week, so I do not want to go back on that statement. I have rewritten a lot of code in this project, so I am fine to continue doing that ;-)

I am curious whether you think this is a question we will attempt to answer soon, pre- or post-v1, and if the answer to that question should be part of #3806.
Derek pointed out very reasonably that admin resources do not necessarily have to be namespace scoped (they could be bound to a namespace), in which case they are likely outside the common client path and called from a different CLI than kubectl (which is aggressively end-user focused). So I'm reasonably ok with assuming everything kubectl accesses is namespace scoped, and that any server-level resource that should be exposed to end users should also be namespace scoped (i.e. if you want to expose nodes so that a user can see them, it's different than the nodes that an admin sees).
Force-pushed c3da481 to ed6b962
This is ready for final review. I updated the original PR description to document what is done. tl;dr:

Always enabling the plug-in to enforce existence of a namespace prior to usage can be done in a follow-on PR, but the vagrant cluster requires the namespace to exist now.
Force-pushed ed6b962 to da4c541
Rebased.
Will look at it Monday when I return to work. Or feel free to reassign to …
Force-pushed da4c541 to a8f64e5
This is a bit confusing (from one of the examples above): "GET /api/{version}/ns/default/namespaces" - getting all namespaces from a namespace?
That was an earlier iteration of the work. The API is as presented in the main PR description. GET /api/{version}/namespaces will just return all namespaces, as expected.
No other comments besides those above.
Force-pushed af25406 to 8284535
Code review comments completed.
Force-pushed 8284535 to 48c0a32
Rebased again.
```diff
@@ -134,3 +134,30 @@ func (nodeStrategy) Validate(obj runtime.Object) errors.ValidationErrorList {
 	node := obj.(*api.Node)
 	return validation.ValidateMinion(node)
 }
+
+// namespaceStrategy implements behavior for nodes
```
...for namespaces.
Very minor comments, and needs squash. Then LGTM.
Add example for using namespaces
Force-pushed 48c0a32 to 0bd0e12
Squashed commits, addressed last round of comments.
LGTM
Rerunning Travis before merging.
A Namespace object is defined to support discovery and traversal of a Kubernetes cluster.

Model changes:

- A Namespace contains the following attributes: …
- A Namespace is not namespace scoped.
- kube-apiserver provisions the default namespace by default and ensures it exists.

REST API:

A Namespace operates like any other resource in Kubernetes. Like Node, it has cluster scope.

HTTP semantics: this PR modifies the v1beta3 URL structure for resources that are scoped to a namespace.

Data storage:

A Namespace object is not in a namespace, and is stored in the etcd key-space /registry/namespace/{name} to ensure uniqueness.

Client:

kubectl can list namespaces.
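A minimal sketch of the storage layout described above — the helper name is invented for illustration, but the key shape comes straight from the description:

```go
package main

import "fmt"

// namespaceKey returns the etcd key for a Namespace under the flat
// /registry/namespace/{name} key-space; storing every namespace under
// one prefix is what guarantees name uniqueness across the cluster.
func namespaceKey(name string) string {
	return "/registry/namespace/" + name
}

func main() {
	fmt.Println(namespaceKey("default")) // /registry/namespace/default
}
```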