Name+label+selector scoping for configuration composition #1698
We also need to auto-populate namespace (#1905).
Looks like OpenShift is doing part of this -- adding deployment labels to everything: https://github.com/openshift/origin/blob/master/pkg/config/config.go#L92
I could also imagine auto-populating name prefixes in this pass based on values associated with specified label keys, such as
One issue we flushed out with the update pass is correctly identifying which fields within a created object need to be parameterized, i.e., which fields are labels or references in templates. We didn't have a good answer for it beyond (a) let the server tell you which fields are parameterized (a new verb on each resource, which requires N round trips) or (b) try to define a convention for labels and references.
v1beta1 is a mess, but v1beta3 should be completely uniform in object metadata, which would cover names and labels. Selectors are trickier. Creating a selector type (#1303) isn't enough, even if we had a way of fetching object schemas, because we also plan to use them for purposes other than targeting pods (and other deployed objects), such as node constraints. I'd be ok with renaming all selector fields that should be scoped in this way if that helped distinguish them from others. There's just one reference in v1beta3 -- for replication controller referring to the pod template. Probably has issues similar to selectors. If we thought field name conventions wouldn't work, other options:
To clarify: We needn't block on the long-term solution. I'm perfectly happy to hardcode the transformation policy for specific fields of specific object types in the initial implementation.
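For illustration, a minimal sketch of what hardcoding the transformation policy per object type could look like, assuming a simple table keyed by kind; the kinds and field paths listed are examples, not an actual implementation:

```go
package main

import "fmt"

// Hardcoded policy: for each kind, the fields (as JSON paths) that a
// scoping pass would rewrite. Illustrative only, not an existing API.
var scopedFields = map[string][]string{
	"Service":               {"metadata.name", "metadata.labels", "spec.selector"},
	"ReplicationController": {"metadata.name", "metadata.labels", "spec.selector", "spec.template.metadata.labels"},
}

func main() {
	// A scoping pass would consult the table for each object it processes.
	for _, path := range scopedFields["Service"] {
		fmt.Println("would rewrite:", path)
	}
}
```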
One thought here is that any UI that depends on labels and has objects which templatize other objects needs the bare minimum of a hint from the server to establish a relationship. It might be enough to identify templates and selectors for generic clients via the "ComponentPlugin discovery" endpoint. That would be pretty powerful for visualizer APIs.
I checked in a standalone pass in #1980, which used field name conventions:
Also, the simple config generators, simplegen and srvexpand, exploit knowledge of the objects they are generating to thread common metadata throughout. Other generators and template abstractions could do something similar. UIs could infer hierarchical and/or Venn-style relationships by comparing label sets, and pods targeted by label selectors should be associated with their services/controllers by reverse lookup (#1348).
My proposal in #3233 would be a good starting point:
We should also add command-line flags for a curated set of labels, such as
A general
The trickiest part is updating selectors in services and replication controllers. At least for now, that probably requires understanding of those objects. We can figure out how to make it work for plugins later.
Adding the comment here so I don't forget: one option for setting selectors on services and replication controllers is to have them default to the labels on the service or RC (exact match). The problem would be indicating that you want "all pods", which we have no concrete way to express in the query selector syntax until the advanced selector syntax is part of the API.
@smarterclayton I agree. I like that it would remove the need for detailed schema knowledge from the client. What do you mean by "all pods"?
Sorry, global selector (service that includes all pods in a namespace).
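A minimal sketch of the selector-defaulting idea from this exchange, using plain maps rather than the actual Service/ReplicationController types; treating a nil selector as "default to my labels" and deferring "all pods" to a future explicit syntax are assumptions for the example, not settled API behavior:

```go
package main

import "fmt"

// defaultSelector implements "selector defaults to the object's own labels":
// a nil selector means "copy my labels"; anything explicitly set is kept.
// How to express "all pods in a namespace" is left to a richer selector syntax.
func defaultSelector(labels, selector map[string]string) map[string]string {
	if selector != nil {
		return selector
	}
	out := make(map[string]string, len(labels))
	for k, v := range labels {
		out[k] = v
	}
	return out
}

func main() {
	labels := map[string]string{"app": "frontend", "env": "dev"}
	fmt.Println(defaultSelector(labels, nil)) // selects pods with app=frontend,env=dev
}
```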
Namespaces seem ideally suited as a scoping mechanism, since they bound the scope of both names and label selectors. I was already thinking that users would create different namespaces for dev, test, staging, and production environments, as well as for different teams. What consequences would there be if users launched every app/tier/microservice into a different namespace? One consequence would be that users would be even more likely to need to deal with multiple namespaces. At minimum, we'd need to change some of the kubectl tooling to make this easier. Would labels be sufficient to organize groups of namespaces, or would we need to add another level to the hierarchy? What would need to be common amongst multiple namespaces? Service accounts and security policies probably? The service environment variables wouldn't be of much use, then. DNS would be able to bridge namespaces, but I'm guessing we'd need to create an explicit mechanism for cross-namespace visibility, since a number of people are working on network isolation. cc @erictune @derekwaynecarr @smarterclayton @ghodss @deads2k @liggitt
Today, we encourage deploying units of code into individual namespaces -
We've been looking at how we can use namespaces in workflow - such that
I think there are advantages to having service accounts per namespace -
We do however believe that some sort of multi-cluster authentication and
We believe that we need a way to identify cross namespace relationships for
@eparis can talk about the services + network isolation I believe.
As expected, resource names and label values are among the most commonly templated fields:
Identifying embedded selectors, labels in inline pod templates, and object references and names is the most challenging aspect of automatic name/label customization. We could annotate those fields in types.go and in OpenAPI with enough information to determine whether they should be updated in a given scenario. For example, we could specify that PodSpec.ServiceAccountName is a local (i.e., same namespace) reference to a Service Account of that name.
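One hedged sketch of how such annotations could look in types.go; the marker comments below are hypothetical, not existing Kubernetes code-generation directives:

```go
package api

// Hypothetical marker comments describing how fields relate to other objects,
// so a generic customization pass could decide which fields to rewrite.
type PodSpec struct {
	// +reference:kind=ServiceAccount,scope=namespace
	// A local (same-namespace) reference to a ServiceAccount by name;
	// a name-scoping pass may need to rewrite it.
	ServiceAccountName string `json:"serviceAccountName,omitempty"`

	// +selector:target=Node
	// Selects nodes rather than deployed objects, so a label-scoping
	// pass should leave it untouched.
	NodeSelector map[string]string `json:"nodeSelector,omitempty"`
}
```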
Selector and reference info: #3676, #22675
How this fits into the bigger picture: https://goo.gl/T66ZcD
@kubernetes/sig-apps-feature-requests @kubernetes/sig-cli-feature-requests
/remove-lifecycle stale
Closing this in favor of other issues.
Let's say one has a configuration comprised of a set of objects (e.g., a simple service, replication controller, and template, as in #1695). Within that configuration, objects may refer to each other by object reference (as with replication controller to template, as of v1beta3) and/or by label selector (as with service and replication controller to pods generated from the pod template).
If one wants to create multiple instances of that configuration, such as for dev and prod deployments (aka horizontal composition) or to embed them in composite macro-services (aka hierarchical composition), the names must be uniquified and the label selectors must be scoped to just one instance of the configuration, by adding deployment-specific labels and label selector requirements (e.g., env=dev, macroservice=coolapp). The uniquifying name suffixes could be deterministically generated by hashing the deployment-specific labels.
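As an illustration of the deterministic suffix idea, a small sketch that hashes the sorted deployment-specific labels; the choice of FNV and the output format are arbitrary for the example:

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// nameSuffix hashes the deployment-specific labels so that every object in
// one instance of a configuration gets the same deterministic suffix.
func nameSuffix(labels map[string]string) string {
	keys := make([]string, 0, len(labels))
	for k := range labels {
		keys = append(keys, k)
	}
	sort.Strings(keys) // map iteration order is random; sort for determinism

	h := fnv.New32a()
	for _, k := range keys {
		fmt.Fprintf(h, "%s=%s;", k, labels[k])
	}
	return fmt.Sprintf("%08x", h.Sum32())
}

func main() {
	suffix := nameSuffix(map[string]string{"env": "dev", "macroservice": "coolapp"})
	fmt.Println("frontend-" + suffix) // e.g. "frontend-<hash>"
}
```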
This could be done as a transformation pass, as described in #1694. The advantage of that approach is that the generated objects would be fully explicit. However, note that subsequent introspection and management operations will need to operate on the extended names and label sets. That could work for operations (e.g., updates) driven off configuration data.
The insertions could also be done in the client library invoked by kubectl, not just for creation but also for update, get, etc., or on the server. The latter would have the flavor of uniquified names, as needed for bulk pod creation (#170).
My preference is to create a single-purpose transformation pass for this. It would need to be possible to identify all names, references, label sets, and label selectors that must be uniquified/extended. In v1beta3 we hopefully will have enough API uniformity to do that automatically based on field names. If not, we could also use knowledge of the specific types and/or annotations (e.g., YAML directives) to identify them.
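A rough sketch of such a convention-based pass over decoded JSON objects, assuming that fields named "labels" and "selector" get the deployment-specific labels merged in and fields named "name" get the suffix appended; real objects would need per-type or annotation-driven knowledge wherever those conventions don't hold (e.g., container or port names):

```go
package main

import "fmt"

// scope walks a decoded object (generic JSON) and, based on field name
// conventions, merges extra labels into every "labels" and "selector" map
// and appends the suffix to every "name" field.
func scope(obj map[string]interface{}, extra map[string]string, suffix string) {
	for key, val := range obj {
		switch v := val.(type) {
		case map[string]interface{}:
			if key == "labels" || key == "selector" {
				// Add the deployment-specific labels / selector requirements.
				for k, ev := range extra {
					v[k] = ev
				}
				continue // don't treat label values as names below
			}
			scope(v, extra, suffix)
		case []interface{}:
			for _, item := range v {
				if m, ok := item.(map[string]interface{}); ok {
					scope(m, extra, suffix)
				}
			}
		case string:
			if key == "name" {
				obj[key] = v + "-" + suffix
			}
		}
	}
}

func main() {
	svc := map[string]interface{}{
		"kind":     "Service",
		"metadata": map[string]interface{}{"name": "frontend", "labels": map[string]interface{}{"app": "frontend"}},
		"spec":     map[string]interface{}{"selector": map[string]interface{}{"app": "frontend"}},
	}
	scope(svc, map[string]string{"env": "dev"}, "dev1")
	fmt.Println(svc)
}
```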
In general, I think we'll want to make introspection and management operations more user-friendly by supporting label selectors for all operations.
The alternative to a domain-specific pass like this would be to require users to use a generic templating mechanism, such as Mustache, but the scoping mechanism would need to be reimplemented in every templating language, and it would also make configurations more complex.