Convert all component command-line flags to versioned configuration #12245
Scheduler config, as an example: https://github.com/kubernetes/kubernetes/tree/master/plugin/pkg/scheduler/api
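For readers landing here later: the scheduler's config has since become a versioned API of its own. A minimal file in its current form looks roughly like this (the kubeconfig path is illustrative):

```yaml
apiVersion: kubescheduler.config.k8s.io/v1
kind: KubeSchedulerConfiguration
clientConnection:
  # Path is an example; point this at the scheduler's kubeconfig.
  kubeconfig: /etc/kubernetes/scheduler.conf
```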
@mikedanese I started yesterday to convert KubeletServer (which is poorly named) to use a JSON settings file. I'd be happy to follow the lead of your work done on the scheduler config, as it does cover a couple things I missed.
I haven't done much with this yet. I've begun to add a component config api group in this commit: Is this a similar direction you are taking? Let's discuss how we want to redistribute the configuration work tomorrow.
@mikedanese @stefwalter Can you guys confirm that you intend to combine this with the mechanism described in #6477 ?
@stefwalter can you please point me to the branch where you are working on this?
All new configuration formats we create should go into new API groups #12951
@mikedanese I stopped working on it once I saw your work. I did about half an hour ... before posting what I was doing ... https://github.com/stefwalter/kubernetes/tree/kubelet-settings I'd be happy to help out with the tedious bits or wherever necessary.
Some additional motivation: the current way we configure components via command-line flags is unnecessarily convoluted and hard to standardize, since some deployments run the components in containers, some use init, some use systemd, some use supervisord, etc. The life of configuring a flag on GCE (if this is wrong, that's another indication of how convoluted it is; it differs for every cloud platform):
Not for 1.1
I'm not sure if this is relevant or not to the conversation, but if daemonsets are enhanced to support the linked feature, along with daemonset rolling upgrade support, maybe this will be useful:
@mikedanese is going to be driving the component configuration standardization for 1.7. |
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale |
Guide on how to implement componentconfig: https://docs.google.com/document/d/1FdaEJUEh091qf5B98HM6_8MS764iXrxxigNIdwHYW9c/ |
doc is private |
It is shared with the kubernetes-dev group.
This topic has multiple stakeholding SIGs rather than cluster-lifecycle as the sole owner. /sig architecture
The design idea is already in place and the component owners are graduating their configs to v1.
Apologies for bothering you; I'm wondering if there's any update on the progress of this issue? Recently I found controller-runtime deprecating its ComponentConfig implementation, so I came across this issue in k/k and wanted to know the community's current opinion and roadmap on this.
Similarly to controller-runtime, the k/k componentconfig APIs have made rather slow progress toward graduation to v1. Some API versions are stale and should be higher, i.e. their maturity is not accurately represented. There has not been a discussion to drop the effort entirely. The APIs are widely used despite not being v1 -- e.g. the kubelet is still at v1beta1. Removing such APIs would be very disruptive to users; instead we should push them to graduation and maintain them.
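For context, the kubelet's versioned config mentioned here is the v1beta1 KubeletConfiguration file; a minimal example looks like this (field values are illustrative, matching common defaults):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
address: 0.0.0.0
port: 10250
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10
```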
@neolit123 Thanks for the rapid reply! I would also be happy to see these configurations get maintained; is there anybody or any group currently responsible for this? Also, I've found that not every component of k/k supports being configured by a versioned configuration file, and the kubelet implements the whole processing logic in its own code base, which cannot be easily reused. So I'm wondering if there's any chance that we build a common mechanism (e.g. in the
currently, different groups / sigs own components and their respective configs.
A working group called "component standard" would have been the appropriate forum for this discussion, but it disbanded. Currently each component has the freedom to decide on its versioned config implementation details. If you want to open the discussion, the next suitable place would be the SIG Architecture regular Zoom call: https://github.com/kubernetes/community/blob/master/sig-architecture/README.md#meetings Please add the topic to the agenda doc there if you'd like.
Forked from #1627, #5472, #246, #12201, and other issues.
We currently configure Kubernetes components like the apiserver and controller-manager with command-line flags. Every time we change these flags, we break everyone's turnup automation, at least for anything not in our repo: hosted products, other repos, blog posts, ...
OTOH, the scheduler and kubectl read configuration files. We should be treating all configuration similarly, as versioned APIs, with backward compatibility, deprecation policies, documentation, etc. How we distribute the configuration is a separate issue -- #1627.
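The core of the "versioned API" idea above can be sketched with just the standard library: a config file declares its apiVersion and kind, the component rejects versions it doesn't understand, and defaulting fills unset fields. This is a minimal illustration, not the real machinery -- actual components wire this through k8s.io/apimachinery schemes, and the ExampleConfiguration type and example.config.k8s.io group here are hypothetical:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// TypeMeta carries the version discriminator every versioned config starts with.
type TypeMeta struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
}

// ExampleConfiguration is a hypothetical stand-in for a component's config type.
type ExampleConfiguration struct {
	TypeMeta
	Address string `json:"address,omitempty"`
	Port    int32  `json:"port,omitempty"`
}

// setDefaults fills unset fields, mirroring a scheme's defaulting step.
func setDefaults(c *ExampleConfiguration) {
	if c.Address == "" {
		c.Address = "0.0.0.0"
	}
	if c.Port == 0 {
		c.Port = 10250
	}
}

// load decodes data, rejecting unknown versions so old automation fails
// loudly instead of silently misreading a newer config format.
func load(data []byte) (*ExampleConfiguration, error) {
	var meta TypeMeta
	if err := json.Unmarshal(data, &meta); err != nil {
		return nil, err
	}
	if meta.APIVersion != "example.config.k8s.io/v1beta1" || meta.Kind != "ExampleConfiguration" {
		return nil, fmt.Errorf("unsupported config %s/%s", meta.APIVersion, meta.Kind)
	}
	cfg := &ExampleConfiguration{}
	if err := json.Unmarshal(data, cfg); err != nil {
		return nil, err
	}
	setDefaults(cfg)
	return cfg, nil
}

func main() {
	cfg, err := load([]byte(`{"apiVersion":"example.config.k8s.io/v1beta1","kind":"ExampleConfiguration","port":10255}`))
	if err != nil {
		panic(err)
	}
	fmt.Println(cfg.Address, cfg.Port) // address is defaulted, port is explicit
}
```

Because the version is part of the payload rather than a flag spelling, turnup automation keeps working across releases until a deprecation of the whole API version, with a documented policy.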
cc @davidopp @thockin @dchen1107 @lavalamp