Salt reconfiguration to get rid of nginx on GCE #6618
@@ -251,23 +251,17 @@ EOF
 }

 # This should only happen on cluster initialization. Uses
-# MASTER_HTPASSWD to generate the nginx/htpasswd file, and the
-# KUBELET_TOKEN, plus /dev/urandom, to generate known_tokens.csv
-# (KNOWN_TOKENS_FILE). After the first boot and on upgrade, these
-# files exist on the master-pd and should never be touched again
-# (except perhaps an additional service account, see NB below.)
+# KUBE_BEARER_TOKEN, KUBELET_TOKEN, and /dev/urandom to generate
+# known_tokens.csv (KNOWN_TOKENS_FILE). After the first boot and
+# on upgrade, this file exists on the master-pd and should never
+# be touched again (except perhaps an additional service account,
+# see NB below.)
 function create-salt-auth() {
-  local -r htpasswd_file="/srv/salt-overlay/salt/nginx/htpasswd"
-
-  if [ ! -e "${htpasswd_file}" ]; then
-    mkdir -p /srv/salt-overlay/salt/nginx
-    echo "${MASTER_HTPASSWD}" > "${htpasswd_file}"
-  fi
-
   if [ ! -e "${KNOWN_TOKENS_FILE}" ]; then
     mkdir -p /srv/salt-overlay/salt/kube-apiserver
     (umask 077;
-      echo "${KUBELET_TOKEN},kubelet,kubelet" > "${KNOWN_TOKENS_FILE}")
+      echo "${KUBE_BEARER_TOKEN},admin,admin" > "${KNOWN_TOKENS_FILE}";
+      echo "${KUBELET_TOKEN},kubelet,kubelet" >> "${KNOWN_TOKENS_FILE}")
Review comment: If I'm being pedantic, this has to handle the upgrade case. See the note on line 273/267 below.

Reply: Maybe this should actually be split up into a token directory and reassembled here (like how .conf files are done with foo.d/* directories)? Just a random thought so that any future upgrades are less obnoxious.

Reply: I think your suggestions are orthogonal to this PR (seeing as how the token file already has many lines with the service account tokens), but they are good points. Since we don't yet support upgrade, I'm leaning towards leaving this as a breaking change between 0.15.0 and 0.16.0.

Reply: SGTM
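If the token-directory idea floated above were adopted, it might look roughly like the sketch below. The known_tokens.d path, fragment file names, and reassembly step are hypothetical illustrations of the foo.d/* pattern, not anything this PR implements.

```sh
# Hypothetical known_tokens.d layout (not part of this PR): each credential
# lives in its own fragment file, and the csv the apiserver reads is
# reassembled from the fragments, so a later upgrade can drop in new
# fragments without rewriting existing ones.
tokens_dir="/srv/salt-overlay/salt/kube-apiserver/known_tokens.d"
mkdir -p "${tokens_dir}"

(umask 077
  echo "${KUBE_BEARER_TOKEN},admin,admin" > "${tokens_dir}/admin.csv"
  echo "${KUBELET_TOKEN},kubelet,kubelet" > "${tokens_dir}/kubelet.csv"
  cat "${tokens_dir}"/*.csv > "${KNOWN_TOKENS_FILE}")
```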

     mkdir -p /srv/salt-overlay/salt/kubelet
     kubelet_auth_file="/srv/salt-overlay/salt/kubelet/kubernetes_auth"
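For context on the variables in the hunk above: each known_tokens.csv line has the form token,user,uid, and the function comment notes the tokens ultimately come from /dev/urandom. Below is a rough sketch of deriving such a token; the exact helper the cluster scripts use may size or sanitize the output differently.

```sh
# Derive a 32-character bearer token from /dev/urandom (sketch only).
gen_token() {
  dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64 | tr -d "=+/" | dd bs=32 count=1 2>/dev/null
}

KUBE_BEARER_TOKEN=$(gen_token)   # written to known_tokens.csv as "<token>,admin,admin"
KUBELET_TOKEN=$(gen_token)       # written to known_tokens.csv as "<token>,kubelet,kubelet"
```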
Review comment: Also check to see if a bearer token is present?

Reply: What would you recommend doing if it isn't? The previous code just assumes that all of the necessary vars are set, and after a transition period I'd like to remove the username/password option from all cloud providers and transition everyone to using bearer tokens. I suppose we could skip writing any credentials to the kubeconfig file if they aren't set, but that means there is a bug in the startup scripts and the user won't be able to interact with their cluster.
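To make the trade-off above concrete, one alternative to silently writing an auth file without credentials would be to fail the boot loudly when the expected token is missing. The sketch below is only an illustration of that option, not what this PR does, and the JSON payload shape is illustrative rather than the file's actual format.

```sh
# Sketch: refuse to write the kubelet auth file if the token never reached
# the VM, so the misconfiguration surfaces at boot instead of at first use.
if [[ -z "${KUBELET_TOKEN:-}" ]]; then
  echo "KUBELET_TOKEN is unset; cannot write ${kubelet_auth_file}" >&2
  exit 1
fi

(umask 077
  cat > "${kubelet_auth_file}" <<EOF
{"BearerToken": "${KUBELET_TOKEN}", "Insecure": true}
EOF
)
```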