Use kubeconfig in several components #6969
Conversation
The --master flag is still supported for distros that need it, but the new --kubeconfig flag can now be used instead, or in addition, to specify the auth info and/or the location of the master. A subsequent PR will change salt to generate a kubeconfig, and to make kube-proxy use it, for salt-based clouds.
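To make the intended precedence concrete (the kubeconfig supplies the auth info and host, and --master, when set, overrides the host), here is a minimal sketch of the flag pair wired up through the present-day client-go clientcmd helper. This PR predates client-go, so the import path and the BuildConfigFromFlags helper are assumptions about the modern equivalent, not the code in this change.

```go
package main

import (
	"flag"
	"log"

	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Both flags are optional.  --master, when set, overrides whatever server
	// address the kubeconfig names, matching the help text in the diff below.
	master := flag.String("master", "", "The address of the Kubernetes API server (overrides any value in kubeconfig)")
	kubeconfig := flag.String("kubeconfig", "", "Path to a kubeconfig file with auth info and master location")
	flag.Parse()

	// BuildConfigFromFlags loads the kubeconfig (if any) and then applies the
	// master override on top of it.
	cfg, err := clientcmd.BuildConfigFromFlags(*master, *kubeconfig)
	if err != nil {
		log.Fatalf("failed to build client config: %v", err)
	}
	log.Printf("talking to the API server at %s", cfg.Host)
}
```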
@jlowdermilk PTAL
or @deads2k
@@ -130,6 +131,8 @@ func (s *CMServer) AddFlags(fs *pflag.FlagSet) {
 	fs.Var(resource.NewQuantityFlagValue(&s.NodeMemory), "node_memory", "The amount of memory (in bytes) provisioned on each node")
 	fs.StringVar(&s.ClusterName, "cluster_name", s.ClusterName, "The instance prefix for the cluster")
 	fs.BoolVar(&s.EnableProfiling, "profiling", false, "Enable profiling via web interface host:port/debug/pprof/")
+	fs.StringVar(&s.Master, "master", s.Master, "The address of the Kubernetes API server (overrides any value in kubeconfig)")
we call this --server in kubectl commands
Roger. All the existing system components take a --master flag, and I don't want to break existing cluster setups.
Eventually, I expect this won't be needed since all the info is in the kubeconfig.
Okay, PTAL
@@ -151,11 +154,23 @@ func (s *CMServer) verifyMinionFlags() {
 func (s *CMServer) Run(_ []string) error {
 	s.verifyMinionFlags()

-	if len(s.ClientConfig.Host) == 0 {
-		glog.Fatal("usage: controller-manager --master <master>")
+	if s.Kubeconfig == "" || s.Master == "" {
&&
Double check the warning condition in all three files, but otherwise lgtm.
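To spell out the reviewer's point: the warning should fire only when neither flag was given, so the check wants && rather than ||. A small sketch of the corrected condition, using a stand-in for the server struct from the hunk above; the exact warning text is illustrative, not quoted from the PR.

```go
package cmserver

import "github.com/golang/glog"

// Stand-in for the CMServer fields touched in the hunk above.
type cmServer struct {
	Master     string
	Kubeconfig string
}

// warnIfDefaultClient warns only when *both* flags are empty; with "||" the
// message would also fire when exactly one of --kubeconfig and --master was
// supplied, which is a perfectly valid configuration.
func (s *cmServer) warnIfDefaultClient() {
	if s.Kubeconfig == "" && s.Master == "" {
		glog.Warning("Neither --kubeconfig nor --master was specified. Using a default API client, which probably won't work.")
	}
}
```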
--master option still supported. --kubeconfig option added to kube-proxy, kube-scheduler, and kube-controller-manager binaries. Kube-proxy now always makes some kind of API source, since that is its only kind of config. Warn if it is using a default client, which probably won't work. Uses the clientcmd builder.
Force-pushed from e696c96 to 6081fa5.
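The commit message above mentions the clientcmd builder. A rough sketch of that loading pattern, written against present-day client-go names (the in-tree clientcmd package this PR used differed in detail, so treat the import paths and helper names as assumptions): take an explicit kubeconfig path from --kubeconfig and apply --master as a server override.

```go
package proxyconfig

import (
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

// buildClientConfig loads the kubeconfig named by kubeconfigPath (if any) and
// then lets masterURL override the server address, mirroring in spirit the
// clientcmd-builder usage the commit message describes.
func buildClientConfig(kubeconfigPath, masterURL string) (*rest.Config, error) {
	loadingRules := &clientcmd.ClientConfigLoadingRules{ExplicitPath: kubeconfigPath}
	overrides := &clientcmd.ConfigOverrides{}
	overrides.ClusterInfo.Server = masterURL

	return clientcmd.NewNonInteractiveDeferredLoadingClientConfig(loadingRules, overrides).ClientConfig()
}
```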
Thanks for review. Fixed warning conditions.
squashed
@cjcullen PTAL. |
has lgtm from deads2k so just needs a glance and a merge. |
lgtm |
Use kubeconfig in several components
thx
kube-proxy, kube-controller-manager, and kube-scheduler learn how to read a kubeconfig file, via the --kubeconfig flag.
They continue to support the --master flag, which is used by all distros.
They forget these flags: --api_version, --insecure_skip_tls_verify, --client_certificate, --client_key, --certificate_authority, --max_outgoing_qps, --max_outgoing_burst. There was no config in the repo using those flags, and we can set them in the kubeconfig if we need to (see the sketch after this comment).
I've manually tested that the binaries can read kubeconfigs, but I'll modify their config to actually use the kubeconfigs in a subsequent PR.
My cluster passes validation with this PR.
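On the flags the description says the components "forget": most of what they covered (the CA file, the client cert and key, and the TLS-verification toggle) has a natural home in a kubeconfig file, which is why dropping them is safe. Purely as an illustration, and using present-day client-go types rather than anything from this PR, such a kubeconfig could be generated like this (all cluster names, user names, and paths are made up):

```go
package main

import (
	"log"

	"k8s.io/client-go/tools/clientcmd"
	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// Illustrative cluster/user names and file paths; nothing here comes from
	// the PR itself.
	cfg := clientcmdapi.NewConfig()
	cfg.Clusters["local"] = &clientcmdapi.Cluster{
		Server:               "https://10.0.0.1:6443",
		CertificateAuthority: "/srv/kubernetes/ca.crt", // stands in for --certificate_authority
	}
	cfg.AuthInfos["controller-manager"] = &clientcmdapi.AuthInfo{
		ClientCertificate: "/srv/kubernetes/client.crt", // stands in for --client_certificate
		ClientKey:         "/srv/kubernetes/client.key", // stands in for --client_key
	}
	cfg.Contexts["local"] = &clientcmdapi.Context{Cluster: "local", AuthInfo: "controller-manager"}
	cfg.CurrentContext = "local"

	if err := clientcmd.WriteToFile(*cfg, "/var/lib/kube-controller-manager/kubeconfig"); err != nil {
		log.Fatalf("failed to write kubeconfig: %v", err)
	}
}
```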