
Kubelet default argument inconsistency between flags and config file #68939

Closed
chuckha opened this issue Sep 21, 2018 · 23 comments
Labels: kind/bug, lifecycle/rotten, sig/node

Comments

chuckha commented Sep 21, 2018

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug

What happened:
I tried to start a kubelet in standalone mode but got an error:
failed to run Kubelet: no client provided, cannot use webhook authentication

What you expected to happen:
I expected the kubelet to start up without needing a client because kubelet --help says:

      --authorization-mode string                                                                                 Authorization mode for Kubelet server. Valid options are AlwaysAllow or Webhook. Webhook mode uses the SubjectAccessReview API to determine authorization. (default "AlwaysAllow") (DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.)

However looking at the code I see:

https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/apis/config/v1beta1/defaults.go#L75

(defaults to Webhook). Since I'm using a config file, as the help text recommends, the help should either state that the flag defaults to AlwaysAllow while the config file defaults to Webhook, or the defaults should be made consistent.
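For reference, here is a rough side-by-side of the two sets of defaults, reconstructed from the flag help text and defaults.go (a sketch, not authoritative kubelet output):

# Flag defaults (legacy):
#   --anonymous-auth=true
#   --authentication-token-webhook=false
#   --authorization-mode=AlwaysAllow
# Config-file (v1beta1) defaults, expressed as the equivalent YAML:
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook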

How to reproduce it (as minimally and precisely as possible):
Run kubelet, maybe with a config file

Anything else we need to know?:
kubelet systemd file:

[Service]
# Uncomment if you are using containerd
Environment="KUBELET_CRI_ENDPOINT=--container-runtime-endpoint=unix:///var/run/containerd/container.sock"
ExecStart=
ExecStart=/usr/bin/kubelet --config /var/lib/kubelet/config.yaml --allow-privileged $KUBELET_CRI_ENDPOINT
Restart=always

kubelet config file:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 127.0.0.1
staticPodPath: /etc/kubernetes/manifests

Environment:
a recent version of kubelet

root@ip-10-0-0-8:~# kubelet --version
Kubernetes v1.13.0-alpha.0.1353+d1111a57d9243c
@k8s-ci-robot k8s-ci-robot added kind/bug Categorizes issue or PR as related to a bug. needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 21, 2018
chuckha commented Sep 21, 2018

/sig node

@k8s-ci-robot k8s-ci-robot added sig/node Categorizes an issue or PR as relevant to SIG Node. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Sep 21, 2018
Pingan2017 commented Sep 22, 2018

See #59666. Webhook auth should be disabled when using --config in standalone mode.
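A minimal sketch of such a config for standalone mode (v1beta1 format as above; values inferred from this thread rather than from official docs, and a fuller, confirmed example appears later in the thread):

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false
authorization:
  mode: AlwaysAllow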

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 21, 2018
@george-angel

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 21, 2018
@fejta-bot
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 21, 2019
mtaufen commented Mar 22, 2019

The defaults are intentionally different, since the transition from flags to config provided the opportunity to fix some insecure flag defaults.

Agree that the help should do a better job of telling folks what the defaults are in each case.

@george-angel

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 24, 2019

tmlbl commented May 9, 2019

I am running into this trying to run kubelet in standalone mode on v1.12.8 and it is quite confusing. I can start with:

kubelet --cgroup-driver=systemd --pod-manifest-path /etc/kubelet.d

But the ostensibly equivalent config file:

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 127.0.0.1
cgroupDriver: systemd
staticPodPath: /etc/kubelet.d
authorization:
  mode: AlwaysAllow

gives the following error:

# kubelet --config kubelet.yaml 
I0509 18:57:23.797635   25011 server.go:408] Version: v1.12.8
I0509 18:57:23.797789   25011 plugins.go:99] No cloud provider specified.
W0509 18:57:23.797825   25011 server.go:553] standalone mode, no API client
F0509 18:57:23.797841   25011 server.go:262] failed to run Kubelet: no client provided, cannot use webhook authentication
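The likely gap: the fatal error is about webhook authentication, while the config above only overrides authorization. Since the config-file default for authentication.webhook.enabled is true, the file would presumably also need:

authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false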

mtaufen commented May 15, 2019

Standalone mode isn't the mode the defaults are configured to target, so it's expected that users who opt into it may have to reconfigure the auth settings, among others, to fit that mode of operation.

Users didn't have to do this with flags because the defaults for server mode were insecure. We fixed this in the config file, which represents a new version of the config API.

That defaults may differ is intentional, and documented here: https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/#start-a-kubelet-process-configured-via-the-config-file

If someone wants to fix the help, I'm happy to review the PR...

@fejta-bot
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 13, 2019
@george-angel

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 14, 2019

mwhahaha commented Sep 10, 2019

For the next person who hits this, I figured out the expected values. When you use --config, authentication defaults to webhook, whereas via the CLI it's set to anonymous by default.

kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 127.0.0.1
cgroupDriver: systemd
staticPodPath: /etc/kubelet.d
authentication:
  anonymous:
    enabled: true
  webhook:
    enabled: false
authorization:
  mode: AlwaysAllow

NOTE: it would be awesome to generate default configs in the actual yaml/json structure somewhere, rather than having to work backwards from the Go source (which I am not that familiar with).
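With a config like the one above saved to a file (path hypothetical), standalone startup then reduces to:

kubelet --config /etc/kubelet/config.yaml

As far as I know there is no built-in flag to dump the fully defaulted config, so until something like that exists, the v1beta1 defaults have to be read out of the defaults.go linked in the issue description.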

@furkanmustafa

authorization:
  mode: AlwaysAllow

Even with this ☝️ it doesn't work for me.

W0913 17:03:04.201922    4165 server.go:557] standalone mode, no API client
F0913 17:03:04.202053    4165 server.go:266] failed to run Kubelet: no client provided, cannot use webhook authentication

It does recognize the setting, though:

I0913 17:03:04.183668    4165 flags.go:33] FLAG: --authorization-mode="AlwaysAllow"
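(Note: the FLAG: lines logged at startup echo command-line flag values, including flag defaults; they do not reflect the merged --config file values. So an AlwaysAllow line here can coexist with a webhook authentication error that comes from the config-file defaults.)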

@furkanmustafa

DAMN. It was a typo: webook instead of webhook.

authentication:
  anonymous:
    enabled: true
  webook:
    enabled: false

@mwhahaha

@furkanmustafa good spot, I've updated the example in my comment.

@fejta-bot
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 12, 2019
@george-angel

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 13, 2019
@fejta-bot
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 12, 2020
@george-angel

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 12, 2020
@fejta-bot
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 10, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 10, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot

@fejta-bot: Closing this issue.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
