kubeadm: Upload configuration used at 'kubeadm init' time to ConfigMap for easier upgrades #50320
Conversation
/retest
cmd/kubeadm/app/cmd/init.go

@@ -269,6 +270,11 @@ func (i *Init) Run(out io.Writer) error {
		return fmt.Errorf("couldn't parse kubernetes version %q: %v", i.cfg.KubernetesVersion, err)
	}

	// Upload currently used configuration to the cluster
	if err := saveconfigphase.SaveConfiguration(i.cfg, client); err != nil {
I have a bit of a nitpick about the verb here. "Save" is a little vague to me, and immediately brings up questions like "what happens if the config already exists? Will it get clobbered? Will it just save a new copy? Is it expected that this configuration was already created, and I'm just saving an update to it?"

I like how our other APIs are extremely informative about their contracts, like tokenphase.CreateBootstrapConfigMapIfNotExists() or tokenphase.UpdateOrCreateToken(). Can the phase be named something like configphase and the method be CreateOrUpdateConfiguration()? That also gives us the flexibility of having other complementary APIs in the same phase, like CreateConfigIfNotExists(), in the future if there's demand for other variants.
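The CreateOrUpdate contract being proposed could be sketched roughly like this. This is a stdlib-only sketch with a fake in-memory client standing in for the real clientset; every name below is hypothetical and not kubeadm's actual API (real code would classify errors with apierrors.IsAlreadyExists):

```go
package main

import (
	"errors"
	"fmt"
)

// errAlreadyExists stands in for the AlreadyExists status error a real
// Kubernetes API server returns on a duplicate Create call.
var errAlreadyExists = errors.New("already exists")

// fakeConfigMaps is a hypothetical in-memory stand-in for a clientset's
// ConfigMap interface, used only to illustrate the contract.
type fakeConfigMaps struct {
	data map[string]string
}

func (f *fakeConfigMaps) Create(name, payload string) error {
	if _, ok := f.data[name]; ok {
		return fmt.Errorf("configmaps %q: %w", name, errAlreadyExists)
	}
	f.data[name] = payload
	return nil
}

func (f *fakeConfigMaps) Update(name, payload string) error {
	f.data[name] = payload
	return nil
}

// CreateOrUpdateConfigMap spells out the contract in its name: try Create
// first, and fall back to Update only when the object already exists.
func CreateOrUpdateConfigMap(c *fakeConfigMaps, name, payload string) error {
	if err := c.Create(name, payload); err != nil {
		if !errors.Is(err, errAlreadyExists) {
			return err // a genuine failure, not a duplicate
		}
		return c.Update(name, payload)
	}
	return nil
}

func main() {
	c := &fakeConfigMaps{data: map[string]string{}}
	fmt.Println(CreateOrUpdateConfigMap(c, "kubeadm-config", "cfg-v1")) // <nil>
	fmt.Println(CreateOrUpdateConfigMap(c, "kubeadm-config", "cfg-v2")) // <nil>: idempotent
	fmt.Println(c.data["kubeadm-config"])                               // cfg-v2
}
```

The point of the naming is that a caller can read the contract off the function signature without wondering whether a second invocation clobbers or errors.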
I changed this to UploadConfiguration. Does that sound good to you?
To me, "upload" is just as vague as "save," since it isn't obvious what happens if the data already exists. We already have the pattern of explicitly saying "CreateOrUpdate" elsewhere in our phases, so I'm wondering why we don't want to follow that. If we're trying to have kubeadm taken seriously as a library / toolkit, I think having a consistent API experience is pretty important.
But, three different people had three different ideas for the naming in this PR :-). Since this is basically introducing a new kubeadm API, maybe we should cast the net a little wider and get a little bit of consensus on how to name it?
Or, if we already have precedent of "all of the phases are alpha anyway, so we can rename them whenever we want," maybe it's not so important to get it right the first time. I haven't been involved in the kubeadm adoption working group, so I'm not sure what the sentiment is around using these phases.
func NewCmdSaveConfig() *cobra.Command {
	var cfgPath, kubeConfigFile string
	cmd := &cobra.Command{
		Use:     "save-config <node-name>",
node-name? Copy-paste error?
Good catch, yeah, a copy-paste error; there shouldn't be anything there.
		Use:     "save-config <node-name>",
		Short:   "Upload the currently used configuration for kubeadm to a ConfigMap in the cluster.",
		Aliases: []string{"saveconfig"},
		Run: func(_ *cobra.Command, args []string) {
I don't know if we have this pattern for the other commands, but if we're not expecting anything in args, I would error in the case anything is provided. Someone might run kubeadm phase saveconfig my-file thinking that they're overriding the filename to use, but not realize that they forgot --config and accidentally use the wrong file.
I can add a comment about that as a follow-up. My node bootstrap token PR has a func that can do that.
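The guard being discussed could look something like this hypothetical stdlib-only helper (cobra itself also ships Args validators such as cobra.NoArgs for exactly this purpose):

```go
package main

import "fmt"

// validateArgCount rejects unexpected positional arguments so that a
// command like `kubeadm phase saveconfig my-file` fails loudly instead of
// silently ignoring my-file. Hypothetical helper, not kubeadm's actual code.
func validateArgCount(cmdName string, args []string, want int) error {
	if len(args) != want {
		return fmt.Errorf("%s expects %d positional argument(s), got %d: %v (did you mean --config?)",
			cmdName, want, len(args), args)
	}
	return nil
}

func main() {
	// Simulating a user passing a stray filename to a no-arg command.
	err := validateArgCount("save-config", []string{"my-file"}, 0)
	fmt.Println(err != nil) // true: the stray argument is rejected with a hint
}
```

Wiring a check like this into the cobra Run func turns the silent-wrong-file failure mode into an immediate, explanatory error.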
)

// SaveConfiguration saves the MasterConfiguration used for later reference (when upgrading, for instance)
func SaveConfiguration(cfg *kubeadmapi.MasterConfiguration, client clientset.Interface) error {
Are there tests we can write for this? In particular, if we want this to be idempotent like our other phases so that kubeadm init can be used for upgrading, we should assert that AlreadyExists errors are handled properly, while other errors are terminal.
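A table-driven sketch of the kind of assertions being asked for. The error classification below is a stand-in for the real phase code, which would use apierrors.IsAlreadyExists against a fake clientset; the sentinel errors and function names are hypothetical:

```go
package main

import (
	"errors"
	"fmt"
)

// Sentinel errors standing in for Kubernetes API status errors.
var (
	errAlreadyExists = errors.New("already exists")
	errServerDown    = errors.New("connection refused")
)

// saveConfiguration models the idempotency contract under discussion: an
// AlreadyExists error from Create is tolerated (the phase would update the
// existing ConfigMap instead), while any other error is terminal.
func saveConfiguration(createErr error) error {
	if createErr != nil && !errors.Is(createErr, errAlreadyExists) {
		return createErr
	}
	return nil
}

func main() {
	cases := []struct {
		name    string
		err     error
		wantErr bool
	}{
		{"fresh create succeeds", nil, false},
		{"already exists is idempotent", errAlreadyExists, false},
		{"other errors are terminal", errServerDown, true},
	}
	for _, tc := range cases {
		gotErr := saveConfiguration(tc.err) != nil
		fmt.Printf("%s: pass=%v\n", tc.name, gotErr == tc.wantErr)
	}
}
```

The middle case is the one that makes `kubeadm init` safe to re-run: a second invocation must not fail just because the ConfigMap is already there.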
func NewCmdSaveConfig() *cobra.Command {
	var cfgPath, kubeConfigFile string
	cmd := &cobra.Command{
		Use:     "save-config <node-name>",
I'm not sure of the name of the phase here. Save sounds like it is absolutely necessary whereas this is more of a helper type of thing. Perhaps "upload-config"?
Also, change the description a little to something like "Upload the currently used configuration for kubeadm to a ConfigMap in the cluster for future use in reconfiguration and upgrades of the cluster."
LGTM modulo @pipejakob's comments. /approve
…p for easier upgrades (force-pushed from b02f736 to 728d0f9)
/lgtm I had some questions about using an object with a stricter schema instead of a ConfigMap, but @timothysc said that was already discussed and punted until later.
/retest
Same request for new unit tests for the new code. It will never be easier to add tests than it is right now.
/lgtm
As discussed on Slack, there won't be a mechanism in the API for validating that the configuration you're uploading to the cluster is valid anytime soon.
I agree with you. This is still in the design phase and I love input here. Please create an issue @pipejakob
Yes -- right. We're defining APIs here, but the hard thing is the timeline and the fact that this PR is in a dependency chain for upgrades. Adding this functionality is not trivial in the chain of ~10 PRs I have posted or am posting as we speak. Getting through the SQ is not trivial either for multiple PRs in a row. So, my proposal is that you create an issue for your concerns. We will discuss things there and spawn a PR with a naming-only change as soon as we find a better name. Until then, we're not blocking the upgrade chain; merging this now-ish lets us work in parallel.
Again -- totally agree. However, I'm going to be away for about 3.5 days, so I'd really want to get this merged now. I can promise to make a unit test for this before the next SIG meeting if you let me. Does this sound at all reasonable? Sorry for the bad timing -- help here would be very appreciated.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: jbeda, luxas, mattmoyer, pipejakob, timothysc. Associated issue: 373. The full list of commands accepted by this bot can be found here.
/test all

[submit-queue is verifying that this PR is safe to merge]
Automatic merge from submit-queue
What this PR does / why we need it:
Uploads the config used at init time to a ConfigMap so we can upgrade a cluster seamlessly, without forcing the user to re-specify all the options they specified the first time.
Which issue this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close that issue when PR gets merged): fixes kubernetes/kubeadm#373
Special notes for your reviewer:
This should be a cherrypick-candidate for easier upgrading to v1.8
cc @wojtek-t
Release note:
@kubernetes/sig-cluster-lifecycle-pr-reviews @timothysc