Proposal for Deferred Creation of Addon Objects #3579
+1 to some equivalent to an
I'm not sure I quite get this. Sadly, DNS config involves passing new flags to kubelets. How does that happen from init.d?
That part is still getting plumbed through salt. All this is doing is
We want to start the scheduler and controller-manager in pods. We should make sure the same solution works for those.
Would the init.d script be run-once? Or would it keep trying to ensure that the objects from the tarball exist on the master with the right values? If the former, then users can later delete/modify the pods that implement DNS, monitoring, etc., and bork the system, resulting in more support overhead. We could potentially put those pods in a special namespace for system services which the admin account does not have permission to modify (depends on getting per-namespace permissions code checked into OSS, but that may be ready in a few weeks).
I don't understand the "load deferred installation into a container". What starts that container? There has to be a list of containers to start initially, and there has to be something with permission to start that pod. The init.d running on the master seems like a fine way to do that.
For the firewall rules: should these be automatically created by the master based on attributes of a service? (And should there be a service for heapster, @vishh?)
@erictune: It's currently run-once. The intent is to put them in separate namespaces to prevent deletion later, and also to make this into a loop that continually enforces that the objects are around with the right values. (The current
@erictune: If we load the deferred install into a container, there's still a chicken-and-egg problem, yes. I'm actually not completely sold on the complexity of phase 2, either.
@erictune: A service for 'heapster' itself will be added soon which will most likely not need a firewall rule. There are two other services for monitoring that require firewall rules. We can get rid of these firewall rules once the proxy on the apiserver works for Grafana.
@zmerlynn is there an issue about services creating firewalls, and if not
Just another brainstorming idea: instead of introducing an init.d script, how about the master working like the kubelet, taking service/pod configuration from a file or a directory of files managed by the master?
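For context, the kubelet side of that analogy is a flag pointing at a directory of pod manifests; this sketch uses the 2015-era flag name and an illustrative path, both of which should be treated as assumptions:

```bash
# The kubelet can already source pods from a directory of manifest files;
# the suggestion above is to give the master an equivalent mechanism.
# (--config is the 2015-era kubelet flag for this; it was later renamed
# --pod-manifest-path. The path below is illustrative.)
kubelet --config=/etc/kubernetes/manifests &

# Dropping a manifest into the directory is enough for the kubelet to
# start (and keep) running that pod:
cp skydns-pod.yaml /etc/kubernetes/manifests/
```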
This implements phase 1 of the proposal in kubernetes#3579, moving the creation of the pods, RCs, and services to the master after the apiserver is available. This is such a wide commit because our existing initial config story is special:

* Add kube-addons service and associated salt configuration:
  * We configure /etc/kubernetes/addons to be a directory of objects that are appropriately configured for the current cluster.
  * "/etc/init.d/kube-addons start" slurps up everything in that dir. (Most of the difficulty is the business logic in salt around getting that directory built at all.)
  * We cheat and overlay cluster/addons into saltbase/salt/kube-addons as config files for the kube-addons meta-service.
* Change .yaml.in files to salt templates.
* Rename {setup,teardown}-{monitoring,logging} to {setup,teardown}-{monitoring,logging}-firewall to properly reflect their real purpose now (the purpose of these functions is now ONLY to bring up the firewall rules, and possibly to relay the IP to the user).
* Rework GCE {setup,teardown}-{monitoring,logging}-firewall: Both functions were improperly configuring global rules, yet used lifecycles tied to the cluster. Use $NODE_INSTANCE_PREFIX with the rule. The logging rule needed a $NETWORK specifier. The monitoring rule tried gcloud describe first, but given the instancing, this feels like a waste of time now.
* Plumb ENABLE_CLUSTER_MONITORING, ENABLE_CLUSTER_LOGGING, ELASTICSEARCH_LOGGING_REPLICAS and DNS_REPLICAS down to the master, since these are needed there now. (Desperately want just a yaml or json file we can share between providers that has all this crap. Maybe kubernetes#3525 is an answer?)

Huge caveats: I've done fairly firm testing on GCE, including twiddling the env variables and making sure the objects I expect to come up, come up. I've tested that it doesn't break GKE bringup. But I haven't had a chance to test the other providers.
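The firewall rework in that commit is roughly the following shape; the rule name, port, and target tag here are illustrative assumptions, not the actual values from the commit:

```bash
#!/bin/bash
# Sketch of the reworked per-cluster GCE firewall rules. Rule name, port,
# and target tag are illustrative assumptions.

# Scope the rule to this cluster via NODE_INSTANCE_PREFIX rather than a
# global name, and pin it to the cluster's network.
gcloud compute firewall-rules create "${NODE_INSTANCE_PREFIX}-logging" \
  --network "${NETWORK}" \
  --allow tcp:5601 \
  --target-tags "${NODE_INSTANCE_PREFIX}-node"

# Teardown is the inverse; with the instanced name there is no need to
# probe with "gcloud ... describe" first.
gcloud compute firewall-rules delete "${NODE_INSTANCE_PREFIX}-logging" --quiet
```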
Zach, can you please update this with remaining work? And also how upgrades will work when the new version has a new addon? Is this just another reconciler that the master runs for a well-defined set of cluster services, where if DESIRED_LOGGING_ENABLED=true, the controller will make sure the appropriate pod+firewalls+node capabilities+etc. are created? And likewise, it supports disabling them?
Current status: We are still in Phase 1, with no immediate plans to move to Phase 2. I've revised the add-on plan to leave out all mention of firewall rules, though, because all add-ons are now going through the proxy instead, making this a little cleaner.

Upgrade status: This is a mess. If you "upgrade", which today means

What this needs is probably a defined namespace add-ons so that we can say
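One hedged reading of that last idea, assuming a dedicated namespace named kube-addons and a manifest directory shipped with the new release (both names are assumptions, since the comment is truncated):

```bash
#!/bin/bash
# Hypothetical upgrade flow for a dedicated add-ons namespace: wipe the
# namespace's objects, then recreate them from the new release's manifests.
# Namespace name and manifest path are illustrative assumptions.
NS="kube-addons"
NEW_ADDON_DIR="/etc/kubernetes/addons"

# Remove the old release's add-on objects in one scoped sweep.
kubectl delete replicationcontrollers,services --all --namespace="${NS}"

# Recreate everything from the new release's manifests.
for f in "${NEW_ADDON_DIR}"/*.yaml; do
  kubectl create -f "${f}" --namespace="${NS}"
done
```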
Alternative view on Upgrades
@marekbiskup doesn't work on Kubernetes anymore. @mikedanese might be interested. I can't say I ever completely understood this issue, but it seems like it might be related to #7459
@zmerlynn Can you provide an update on where we are at and where we are likely to go with this? I think that with the Deployment API and
Otherwise, yes, it looks like things got drastically simpler here.
Are deployments ready yet? (Sorry, I haven't followed Kubernetes for some time.)
No, Deployment is not done yet.
any update in the last year+? I just ran across this in kubernetes/build-tools/lib/release.sh (line 290 at commit 74a3b77)
Much of what is talked about in this PR is basically complete. |
Deferred Creation of "Addon Objects"
In PRs #2224 and #3292, we added a considerable amount of initialization of state in `kube-up.sh` after the actual kube-up call. This pattern really doesn't work very well for GKE: on GKE, kube-up is essentially a single API call that can be made from a variety of clients that aren't `kube-up.sh`. This bug proposes a way to move pieces of the initialization in `kube-up.sh` onto the master, so that, aspirationally, `kube-up.sh` results in a single call to a provider-specific kube-up.
Phase 1: Cheesy `init.d` script on the master to create objects

In this phase, we move (some) initialization to an `init.d` script on the master. The actual YAML files for the created objects are distributed in the salt tarball. We add a tiny `/etc/init.d` script via salt to the master that essentially does the exact same logic that `kube-up.sh` was doing, with the guiding environment variables plumbed through appropriately to salt on the master (e.g. `ENABLE_CLUSTER_DNS`, `ENABLE_CLUSTER_MONITORING`, etc.). The only complexity for this script is that it needs to wait for the apiserver to be ready before it starts creating objects.
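A minimal sketch of what that start action could look like, assuming a local apiserver on 127.0.0.1:8080 and a salt-built `/etc/kubernetes/addons` directory (the poll cadence and paths are illustrative, not the actual script):

```bash
#!/bin/bash
# Sketch of the phase-1 /etc/init.d/kube-addons "start" action. The
# apiserver address, addon directory, and retry cadence are assumptions.
APISERVER="http://127.0.0.1:8080"
ADDON_DIR="/etc/kubernetes/addons"

# The one real complexity: wait until the apiserver answers before
# creating anything.
until kubectl --server="${APISERVER}" get nodes >/dev/null 2>&1; do
  sleep 5
done

# Slurp up everything in the addons directory. Salt has already templated
# its contents from ENABLE_CLUSTER_DNS, DNS_REPLICAS, etc., so only the
# objects enabled for this cluster are present here.
for obj in "${ADDON_DIR}"/*.yaml; do
  kubectl --server="${APISERVER}" create -f "${obj}"
done
```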
Phase 2: Move deferred addon objects to a container

Rather than having an `init.d` script handle this, load the deferred installation into a container. This has the added complexity of needing a `kubectl` that can auth to the master (so possibly cheat and use `hostDir` to steal the kubelet's creds), but it makes this piece robust. Turn this from a silly script into a bit more of a daemon that actually enforces that these "system containers" are present at any given time.

The main benefits of a container are:
The complexity is high primarily around the build, hence phase 1.
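Under the same assumptions as the phase-1 sketch, the phase-2 daemon could be as small as a loop that keeps re-asserting the objects; the kubeconfig path below stands in for the `hostDir`-mounted kubelet credentials and is purely illustrative:

```bash
#!/bin/bash
# Sketch of the phase-2 enforcement daemon, meant to run in a container on
# the master. The credential path is an assumption standing in for the
# hostDir "cheat" described above.
KUBECONFIG_FILE="/var/lib/kubelet/kubeconfig"
ADDON_DIR="/etc/kubernetes/addons"

while true; do
  for obj in "${ADDON_DIR}"/*.yaml; do
    # Re-assert every system object. "create" fails harmlessly when the
    # object already exists, which makes this loop a crude reconciler.
    kubectl --kubeconfig="${KUBECONFIG_FILE}" create -f "${obj}" \
      2>/dev/null || true
  done
  sleep 60
done
```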
This proposal is meant to address #3305 and follow-on issues from PR #3292. I hope to have the phase 1 work complete soon, so comments appreciated.
cc @thockin @satnam6502 @vishh @brendanburns