kubeadm: Adds dry-run support for kubeadm using the `--dry-run` option #50631
Conversation
Force-pushed from 4e722f3 to 6b10bcc
This will be great to have! It's nice for `init` and `token`, and for debugging during development, but it will be especially nice for `upgrade`.
My only general comment is that it needs unit tests. It would also be good to get a review from someone more familiar with the API machinery/client-go code.
cmd/kubeadm/app/cmd/init.go
Outdated
```diff
@@ -248,15 +258,11 @@ func (i *Init) Run(out io.Writer) error {
 		return err
 	}
 
-	client, err := kubeconfigutil.ClientSetFromFile(kubeadmconstants.GetAdminKubeConfigPath())
+	client, err := createClientsetAndOptionallyWaitForReady(i.cfg, i.dryRun)
```
I think it might be clearer to put `if dryRun {` here and rename `createClientsetAndOptionallyWaitForReady()` to just `createClientsetAndWaitForReady()`.
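A minimal sketch of the suggested shape, with the dry-run branch lifted to the call site; the helper names and types here are hypothetical stand-ins, not the PR's actual code:

```go
package main

import "fmt"

// Clientset is a stand-in for the real clientset.Interface.
type Clientset interface{}

// createDryRunClientset and createClientsetAndWaitForReady are hypothetical
// helpers that only illustrate moving the dry-run branch to the caller.
func createDryRunClientset() (Clientset, error)          { return struct{}{}, nil }
func createClientsetAndWaitForReady() (Clientset, error) { return struct{}{}, nil }

// newClientset keeps the branch at the call site, so the happy-path helper
// keeps a single, descriptive name.
func newClientset(dryRun bool) (Clientset, error) {
	if dryRun {
		return createDryRunClientset()
	}
	return createClientsetAndWaitForReady()
}

func main() {
	c, err := newClientset(true)
	fmt.Println(c, err)
}
```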
cmd/kubeadm/app/cmd/init.go
Outdated
```go
fmt.Printf("[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory %q\n", kubeadmconstants.GetStaticPodDirectory())
// TODO: Adjust this timeout or start polling the kubelet API
if err := apiclient.WaitForAPI(client, 30*time.Minute); err != nil {
```
`30*time.Minute` should probably be pulled out into a constant.
added a TODO; not for this PR
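For illustration, the kind of extraction being suggested looks like the sketch below; `apiCallTimeout` is a hypothetical name, not a constant defined in the PR:

```go
package main

import (
	"fmt"
	"time"
)

// apiCallTimeout is a hypothetical name for the 30-minute control-plane wait
// discussed above; pulling it out gives the magic number a single home.
const apiCallTimeout = 30 * time.Minute

func main() {
	// Call sites would then read apiCallTimeout instead of 30*time.Minute.
	fmt.Printf("waiting up to %v for the control plane\n", apiCallTimeout)
}
```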
```go
start := time.Now()
wait.PollInfinite(kubeadmconstants.APICallRetryInterval, func() (bool, error) {
```
Nice, does this take some latency out of `init` as well? If so it might be worth cherry-picking a simpler fix (just `s/PollInfinite/PollImmediateInfinite/`).
I don't think so; it often takes ~15 secs for the control plane to come up, so whether we start polling immediately or after 0.5 secs doesn't seem to matter.
But this is cleaner :)
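For context, a small standalone sketch of the difference being discussed, using the apimachinery `wait` helpers (not the PR's code): `PollInfinite` sleeps one interval before the first check, while `PollImmediateInfinite` checks right away.

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Stand-in for the real health check against the API server.
	ready := func() (bool, error) { return true, nil }

	// PollInfinite waits one interval before the first check...
	_ = wait.PollInfinite(500*time.Millisecond, ready)

	// ...while PollImmediateInfinite runs the first check immediately, which is
	// the one-line change floated above.
	_ = wait.PollImmediateInfinite(500*time.Millisecond, ready)

	fmt.Println("both variants returned")
}
```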
```diff
@@ -319,6 +319,18 @@ type DeleteAction interface {
 	GetName() string
 }
 
+type DeleteCollectionAction interface {
```
I don't see `DeleteCollectionAction` used elsewhere. Unused/WIP code?
No, it's more for consistency. That generic interface and `PatchAction` were missing somehow...
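Roughly, the per-verb pattern being filled in looks like the sketch below; the interfaces are simplified stand-ins, not the upstream client-go testing definitions:

```go
package main

// Action is a stripped-down stand-in for the base action interface in the
// client-go testing package; not the upstream definition.
type Action interface {
	GetVerb() string
	GetResource() string
}

// DeleteAction follows the existing per-verb pattern: base Action plus
// verb-specific accessors.
type DeleteAction interface {
	Action
	GetName() string
}

// DeleteCollectionAction mirrors that pattern for the delete-collection verb;
// the verb-specific details (e.g. list restrictions) are elided here.
type DeleteCollectionAction interface {
	Action
}

func main() {}
```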
```go
		return false, nil
	}
	for _, pod := range apiPods.Items {
		fmt.Printf("[apiclient] Pod %s status: %s\n", pod.Name, pod.Status.Phase)
```
This `Printf` seems useful. Is it just too noisy?
This was in the last PR; it was way too noisy, spitting things out twice every second. Instead, this now prints on changed state.
But in the last PR, not this one
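The "print on changed state" idea can be sketched like this (illustrative names only, not the PR's helper):

```go
package main

import "fmt"

func main() {
	// Remember the last reported phase per pod and stay quiet while it is unchanged.
	lastPhase := map[string]string{}

	report := func(pod, phase string) {
		if lastPhase[pod] == phase {
			return // unchanged since the previous poll; skip the noisy print
		}
		lastPhase[pod] = phase
		fmt.Printf("[apiclient] Pod %s status: %s\n", pod, phase)
	}

	// Simulated polling results: only the first and third calls print.
	report("kube-apiserver", "Pending")
	report("kube-apiserver", "Pending")
	report("kube-apiserver", "Running")
}
```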
```go
		return true, nil, fmt.Errorf("unable to get first IP address from the given CIDR (%s): %v", svcSubnet.String(), err)
	}

	// We can safely return a NotFound error here as the code will just proceed normally and don't care about modifying this clusterrolebinding
```
The comment here is wrong.
done
```go
	// We can't handle this event
	return false, nil, nil
}
// We can safely return a NotFound error here as the code will just proceed normally and don't care about modifying this clusterrolebinding
```
This comment is wrong as well.
```go
{
	Name:       "https",
	Port:       443,
	TargetPort: intstr.FromInt(6443),
```
This target port might vary per cluster, right?
This doesn't matter; the only important thing here is the ClusterIP.
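For illustration, the faked "kubernetes" Service such a dry-run getter could hand back might look like the sketch below; the ClusterIP is hard-coded purely for the example, and the port values are placeholders:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// fakeKubernetesService builds a stand-in "kubernetes" Service; callers in the
// init flow only consume the ClusterIP, so the ports don't matter much here.
func fakeKubernetesService(clusterIP string) *corev1.Service {
	return &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "kubernetes", Namespace: "default"},
		Spec: corev1.ServiceSpec{
			ClusterIP: clusterIP,
			Ports: []corev1.ServicePort{
				{Name: "https", Port: 443, TargetPort: intstr.FromInt(6443)},
			},
		},
	}
}

func main() {
	svc := fakeKubernetesService("10.96.0.1")
	fmt.Println(svc.Spec.ClusterIP)
}
```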
```go
// handleGetNode
func (idr *InitDryRunGetter) handleGetNode(action core.GetAction) (bool, runtime.Object, error) {
	if action.GetResource().Resource != "nodes" {
```
Should we also check against `idr.masterName` here?
Good idea, now done
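The extra guard amounts to something like the sketch below; the types are stripped-down stand-ins for `core.GetAction` and the real `InitDryRunGetter`:

```go
package main

import "fmt"

// getAction is a simplified stand-in for core.GetAction.
type getAction struct {
	resource string
	name     string
}

// initDryRunGetter is a simplified stand-in holding the master's node name.
type initDryRunGetter struct {
	masterName string
}

// handlesGetNode answers only for "nodes" GETs that target the node this
// dry-run is pretending to initialize.
func (idr *initDryRunGetter) handlesGetNode(action getAction) bool {
	if action.resource != "nodes" {
		return false
	}
	return action.name == idr.masterName
}

func main() {
	idr := &initDryRunGetter{masterName: "master-0"}
	fmt.Println(idr.handlesGetNode(getAction{resource: "nodes", name: "master-0"})) // true
	fmt.Println(idr.handlesGetNode(getAction{resource: "nodes", name: "worker-1"})) // false
}
```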
```go
	// We can't handle this event
	return false, nil, nil
}
// We can safely return a NotFound error here as the code will just proceed normally and don't care about modifying this clusterrolebinding
```
This comment says "clusterrolebinding" but this is the bootstrap token.
poke me when the dependent PRs are merged.
@timothysc ack, I'll do that.
Force-pushed from 6b10bcc to 79a6a08
@timothysc @mattmoyer Rebased and fixed up a few small things. Should now work very well.
As stated above, I'll still write unit tests 😄 cc @sttts @ncdc: you are familiar with the API machinery, could you take a look and check if this is sane usage of it?
@ncdc is there a de-facto client-set stub for validating client behavior that you are aware of?
```go
func createClientsetAndOptionallyWaitForReady(cfg *kubeadmapi.MasterConfiguration, dryRun bool) (clientset.Interface, error) {
	if dryRun {
		// If we're dry-running; we should create a faked client that answers some GETs in order to be able to do the full init flow and just logs the rest of requests
		dryRunGetter := apiclient.NewInitDryRunGetter(cfg.NodeName, cfg.Networking.ServiceSubnet)
```
So there are a number of different clientset stubs, I'm wondering if we can dedupe.
```diff
@@ -33,6 +33,11 @@ import (
 	"k8s.io/kubernetes/pkg/api"
 )
 
+const (
+	// selfHostingWaitTimeout describes the maximum amount of time a self-hosting wait process should wait before timing out
+	selfHostingWaitTimeout = 2 * time.Minute
```
This seems orthogonal to dry-run.
@timothysc could you expand on the exact use case here please?
…ix_race Automatic merge from submit-queue

kubeadm: Fix self-hosting race condition

**What this PR does / why we need it**: Split out from kubernetes#50766. Waits for the Static Pod to be deleted before proceeding with checking the API health. Otherwise there is a race condition where we're checking the health of the static-pod API server, not the self-hosted one that we expect. Also improves the logging output and adds reasonable timeouts for the process.

**Which issue this PR fixes**: fixes #

**Special notes for your reviewer**: Dependency for
- kubernetes#50766
- kubernetes#50631
- kubernetes#48899

**Release note**:
```release-note
NONE
```

@kubernetes/sig-cluster-lifecycle-pr-reviews
@ncdc This use case is for a set of stubbed-out interactions with the main API. You basically want a hook mechanism to route some calls to another function.
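That hook mechanism maps onto the reactor chain of client-go's fake clientset; a minimal sketch against a recent `k8s.io/client-go` (the PR itself layers its own `DryRunGetter` on top rather than using this directly):

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/client-go/kubernetes/fake"
	clienttesting "k8s.io/client-go/testing"
)

func main() {
	client := fake.NewSimpleClientset()

	// Route GETs on "nodes" to our own function; all other calls keep the
	// default fake behaviour.
	client.PrependReactor("get", "nodes", func(action clienttesting.Action) (bool, runtime.Object, error) {
		get := action.(clienttesting.GetAction)
		fmt.Printf("[dryrun] Would perform action GET on %q, name %q\n",
			action.GetResource().Resource, get.GetName())
		return true, &corev1.Node{ObjectMeta: metav1.ObjectMeta{Name: get.GetName()}}, nil
	})

	node, err := client.CoreV1().Nodes().Get(context.TODO(), "master-0", metav1.GetOptions{})
	fmt.Println(node.Name, err)
}
```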
/assign @caesarxuchao
Force-pushed from e30a5f6 to e27339f
@timothysc @mattmoyer @caesarxuchao @ncdc @sttts This is now rebased; please take a look and review. Unit tests are now in place as well.
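For readers curious what such tests tend to look like: a rough table-driven sketch using client-go's testing actions; `handles` is a hypothetical stand-in for the getter's decision logic, not the PR's actual test code.

```go
package dryrun_test

import (
	"testing"

	"k8s.io/apimachinery/pkg/runtime/schema"
	clienttesting "k8s.io/client-go/testing"
)

// handles is a hypothetical stand-in for the dry-run getter's node handler.
func handles(action clienttesting.GetAction, masterName string) bool {
	return action.GetResource().Resource == "nodes" && action.GetName() == masterName
}

func TestHandleGetNode(t *testing.T) {
	nodesGVR := schema.GroupVersionResource{Version: "v1", Resource: "nodes"}
	podsGVR := schema.GroupVersionResource{Version: "v1", Resource: "pods"}

	tests := []struct {
		name     string
		action   clienttesting.GetAction
		expected bool
	}{
		{"matching node name", clienttesting.NewGetAction(nodesGVR, "", "master-0"), true},
		{"other node name", clienttesting.NewGetAction(nodesGVR, "", "worker-1"), false},
		{"other resource", clienttesting.NewGetAction(podsGVR, "", "master-0"), false},
	}
	for _, tt := range tests {
		if got := handles(tt.action, "master-0"); got != tt.expected {
			t.Errorf("%s: expected %v, got %v", tt.name, tt.expected, got)
		}
	}
}
```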
/approve
The client-go changes lgtm.
Force-pushed from e27339f to 0bf84aa
/lgtm
/lgtm
TBH, I don't want to block this, but I'm not really a fan of much of what is done in here. The dry-run behavior stubbing is kind of weird.
Given that it's stub code and not critical path, I'm OK with it. But it looks like you may be missing some UTs in this patch.
For now I'll unblock, but could you open a follow-on issue to address cleanup & re-evaluation?
```go
var _ DryRunGetter = &ClientBackedDryRunGetter{}

// NewClientBackedDryRunGetter creates a new ClientBackedDryRunGetter instance based on the rest.Config object
func NewClientBackedDryRunGetter(config *rest.Config) *ClientBackedDryRunGetter {
```
So the naming here is weird: `clientbacked_dryrun.NewClientBackedDryRunGetter()`?
```go
}

// NewClientBackedDryRunGetterFromKubeconfig creates a new ClientBackedDryRunGetter instance from the given KubeConfig file
func NewClientBackedDryRunGetterFromKubeconfig(file string) (*ClientBackedDryRunGetter, error) {
```
same.
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: caesarxuchao, luxas, mattmoyer, timothysc
Associated issue: 389
The full list of commands accepted by this bot can be found here.
/retest
… On 19 Aug 2017, at 06:35, k8s-ci-robot ***@***.***> wrote:
@luxas: The following test failed, say /retest to rerun them all:
| Test name | Commit | Details | Rerun command |
| --- | --- | --- | --- |
| pull-kubernetes-e2e-gce-etcd3 | 0bf84aa | link | /test pull-kubernetes-e2e-gce-etcd3 |
Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.
/retest
Review the full test history for this PR.
12 similar comments
Automatic merge from submit-queue (batch tested with PRs 47896, 50678, 50620, 50631, 51005)
Automatic merge from submit-queue (batch tested with PRs 51134, 51122, 50562, 50971, 51327) kubeadm: Fully implement --dry-run **What this PR does / why we need it**: Finishes the work begun in #50631 - Implements dry-run functionality for phases certs/kubeconfig/controlplane/etcd as well by making the outDir configurable - Prints the controlplane manifests to stdout, but not the certs/kubeconfig files due to the sensitive nature. However, kubeadm outputs the directory to go and look in for those. - Fixes a small yaml marshal error where `apiVersion` and `kind` wasn't printed earlier. **Which issue this PR fixes** *(optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged)*: fixes # fixes: kubernetes/kubeadm#389 **Special notes for your reviewer**: Full `kubeadm init --dry-run` output: ``` [kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters. [init] Using Kubernetes version: v1.7.4 [init] Using Authorization mode: [Node RBAC] [preflight] Running pre-flight checks [preflight] WARNING: docker service is not enabled, please run 'systemctl enable docker.service' [preflight] Starting the kubelet service [kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0) [certificates] Generated ca certificate and key. [certificates] Generated apiserver certificate and key. [certificates] apiserver serving cert is signed for DNS names [thegopher kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.200.101] [certificates] Generated apiserver-kubelet-client certificate and key. [certificates] Generated sa key and public key. [certificates] Generated front-proxy-ca certificate and key. [certificates] Generated front-proxy-client certificate and key. 
[certificates] Valid certificates and keys now exist in "/tmp/kubeadm-init-dryrun477531930" [kubeconfig] Wrote KubeConfig file to disk: "admin.conf" [kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf" [kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf" [kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf" [dryrun] Wrote certificates, kubeconfig files and control plane manifests to "/tmp/kubeadm-init-dryrun477531930" [dryrun] Won't print certificates or kubeconfig files due to the sensitive nature of them [dryrun] Please go and examine the "/tmp/kubeadm-init-dryrun477531930" directory for details about what would be written [dryrun] Would write file "/etc/kubernetes/manifests/kube-apiserver.yaml" with content: apiVersion: v1 kind: Pod metadata: annotations: scheduler.alpha.kubernetes.io/critical-pod: "" creationTimestamp: null labels: component: kube-apiserver tier: control-plane name: kube-apiserver namespace: kube-system spec: containers: - command: - kube-apiserver - --allow-privileged=true - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname - --requestheader-extra-headers-prefix=X-Remote-Extra- - --service-cluster-ip-range=10.96.0.0/12 - --tls-cert-file=/etc/kubernetes/pki/apiserver.crt - --tls-private-key-file=/etc/kubernetes/pki/apiserver.key - --proxy-client-cert-file=/etc/kubernetes/pki/front-proxy-client.crt - --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,ResourceQuota - --experimental-bootstrap-token-auth=true - --client-ca-file=/etc/kubernetes/pki/ca.crt - --kubelet-client-key=/etc/kubernetes/pki/apiserver-kubelet-client.key - --secure-port=6443 - --proxy-client-key-file=/etc/kubernetes/pki/front-proxy-client.key - --insecure-port=0 - --requestheader-username-headers=X-Remote-User - --requestheader-group-headers=X-Remote-Group - --requestheader-allowed-names=front-proxy-client - --advertise-address=192.168.200.101 - --service-account-key-file=/etc/kubernetes/pki/sa.pub - --kubelet-client-certificate=/etc/kubernetes/pki/apiserver-kubelet-client.crt - --requestheader-client-ca-file=/etc/kubernetes/pki/front-proxy-ca.crt - --authorization-mode=Node,RBAC - --etcd-servers=http://127.0.0.1:2379 image: gcr.io/google_containers/kube-apiserver-amd64:v1.7.4 livenessProbe: failureThreshold: 8 httpGet: host: 127.0.0.1 path: /healthz port: 6443 scheme: HTTPS initialDelaySeconds: 15 timeoutSeconds: 15 name: kube-apiserver resources: requests: cpu: 250m volumeMounts: - mountPath: /etc/kubernetes/pki name: k8s-certs readOnly: true - mountPath: /etc/ssl/certs name: ca-certs readOnly: true - mountPath: /etc/pki name: ca-certs-etc-pki readOnly: true hostNetwork: true volumes: - hostPath: path: /etc/kubernetes/pki name: k8s-certs - hostPath: path: /etc/ssl/certs name: ca-certs - hostPath: path: /etc/pki name: ca-certs-etc-pki status: {} [dryrun] Would write file "/etc/kubernetes/manifests/kube-controller-manager.yaml" with content: apiVersion: v1 kind: Pod metadata: annotations: scheduler.alpha.kubernetes.io/critical-pod: "" creationTimestamp: null labels: component: kube-controller-manager tier: control-plane name: kube-controller-manager namespace: kube-system spec: containers: - command: - kube-controller-manager - --address=127.0.0.1 - --kubeconfig=/etc/kubernetes/controller-manager.conf - --cluster-signing-cert-file=/etc/kubernetes/pki/ca.crt - --cluster-signing-key-file=/etc/kubernetes/pki/ca.key - --leader-elect=true - 
--use-service-account-credentials=true - --controllers=*,bootstrapsigner,tokencleaner - --root-ca-file=/etc/kubernetes/pki/ca.crt - --service-account-private-key-file=/etc/kubernetes/pki/sa.key image: gcr.io/google_containers/kube-controller-manager-amd64:v1.7.4 livenessProbe: failureThreshold: 8 httpGet: host: 127.0.0.1 path: /healthz port: 10252 scheme: HTTP initialDelaySeconds: 15 timeoutSeconds: 15 name: kube-controller-manager resources: requests: cpu: 200m volumeMounts: - mountPath: /etc/kubernetes/pki name: k8s-certs readOnly: true - mountPath: /etc/ssl/certs name: ca-certs readOnly: true - mountPath: /etc/kubernetes/controller-manager.conf name: kubeconfig readOnly: true - mountPath: /etc/pki name: ca-certs-etc-pki readOnly: true hostNetwork: true volumes: - hostPath: path: /etc/kubernetes/pki name: k8s-certs - hostPath: path: /etc/ssl/certs name: ca-certs - hostPath: path: /etc/kubernetes/controller-manager.conf name: kubeconfig - hostPath: path: /etc/pki name: ca-certs-etc-pki status: {} [dryrun] Would write file "/etc/kubernetes/manifests/kube-scheduler.yaml" with content: apiVersion: v1 kind: Pod metadata: annotations: scheduler.alpha.kubernetes.io/critical-pod: "" creationTimestamp: null labels: component: kube-scheduler tier: control-plane name: kube-scheduler namespace: kube-system spec: containers: - command: - kube-scheduler - --leader-elect=true - --kubeconfig=/etc/kubernetes/scheduler.conf - --address=127.0.0.1 image: gcr.io/google_containers/kube-scheduler-amd64:v1.7.4 livenessProbe: failureThreshold: 8 httpGet: host: 127.0.0.1 path: /healthz port: 10251 scheme: HTTP initialDelaySeconds: 15 timeoutSeconds: 15 name: kube-scheduler resources: requests: cpu: 100m volumeMounts: - mountPath: /etc/kubernetes/scheduler.conf name: kubeconfig readOnly: true hostNetwork: true volumes: - hostPath: path: /etc/kubernetes/scheduler.conf name: kubeconfig status: {} [markmaster] Will mark node thegopher as master by adding a label and a taint [dryrun] Would perform action GET on resource "nodes" in API group "core/v1" [dryrun] Resource name: "thegopher" [dryrun] Would perform action PATCH on resource "nodes" in API group "core/v1" [dryrun] Resource name: "thegopher" [dryrun] Attached patch: {"metadata":{"labels":{"node-role.kubernetes.io/master":""}},"spec":{"taints":[{"effect":"NoSchedule","key":"node-role.kubernetes.io/master","timeAdded":null}]}} [markmaster] Master thegopher tainted and labelled with key/value: node-role.kubernetes.io/master="" [token] Using token: 96efd6.98bbb2f4603c026b [dryrun] Would perform action GET on resource "secrets" in API group "core/v1" [dryrun] Resource name: "bootstrap-token-96efd6" [dryrun] Would perform action CREATE on resource "secrets" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 data: description: VGhlIGRlZmF1bHQgYm9vdHN0cmFwIHRva2VuIGdlbmVyYXRlZCBieSAna3ViZWFkbSBpbml0Jy4= expiration: MjAxNy0wOC0yM1QyMzoxOTozNCswMzowMA== token-id: OTZlZmQ2 token-secret: OThiYmIyZjQ2MDNjMDI2Yg== usage-bootstrap-authentication: dHJ1ZQ== usage-bootstrap-signing: dHJ1ZQ== kind: Secret metadata: creationTimestamp: null name: bootstrap-token-96efd6 type: bootstrap.kubernetes.io/token [bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials [dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: 
creationTimestamp: null name: kubeadm:kubelet-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-bootstrapper subjects: - kind: Group name: system:bootstrappers [bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token [dryrun] Would perform action CREATE on resource "clusterroles" in API group "rbac.authorization.k8s.io/v1beta1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRole metadata: creationTimestamp: null name: system:certificates.k8s.io:certificatesigningrequests:nodeclient rules: - apiGroups: - certificates.k8s.io resources: - certificatesigningrequests/nodeclient verbs: - create [dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: kubeadm:node-autoapprove-bootstrap roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:certificates.k8s.io:certificatesigningrequests:nodeclient subjects: - kind: Group name: system:bootstrappers [bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace [dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 data: kubeconfig: | apiVersion: v1 clusters: - cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFM01EZ3lNakl3TVRrek1Gb1hEVEkzTURneU1ESXdNVGt6TUZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTFk0CnZWZ1FSN3pva3VzbWVvQ3JwZ1lFdEFHSldhSWVVUXE0ZE8wcVA4TDFKQk10ZTdHcXVHeXlWdVlyejBBeXdGdkMKaEh3Tm1pbmpIWFdNYkgrQVdIUXJOZmtZMmRBdnVuL0NYZWd6RlRZZG56M1JzYU5EaW0wazVXaVhEamQwM21YVApicGpvMGxpT2ZtY0xlOHpYUXZNaHpmN2FMV24wOVJoN05Ld0M0eW84cis5MDNHNjVxRW56cnUybmJKTEJ1TFk0CkFsL3UxTElVSGV4dmExZjgzampOQ1NmQXJScGh1d0oyS1NTWXhoaEJpNHBJMzd0ZEFpN3diTUF0cG4zdU9rVEQKU0dtdGpkbFZoUlAzV1dHQzNQTjF3M1JRakpmTW5weFFZbFFmalU2UE9Pbzg4ODBwN3dnUXFDUU11bjU5UWlBWgpwNkI1c3lrUitMemhoZVpkMWtjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFHaTVrcUJzMTdOMU5pRWx2RGJaWGFSeXk5anUKR3ZuRjRjSnczQ0dPR2hpdHgySmdxRkt5WXRIdlJUSFNYRXpBNTlteEs2RlJWUWpBZmJMdjhSZUNKUjYrSzdRdQo0U21uTVVxVXRTZFUzaHozVXZlMjVOTHVwMnhsYVpZbzVwdVRrOWhZdUszd09MbWgxZTFoRzcyUFpoZE5yOGd5Ck5lTFN3bjI4OEVUSlNCcWpob0FkV2w0YzZtcnpwWll4ekNrcEpUSDFPWnBCQzFUYmY3QW5HenVwRzB1Q1RSYWsKWTBCSERyL01uVGJKKzM5NEJyMXBId0NtQ3ZrWUY0RjVEeW9UTFQ0UFhGTnJSV3UweU9rMXdDdEFKbEs3eFlUOAp5Z015cUlRSG4rNjYrUGlsSUprcU81ODRoVm5ENURva1dLcEdISFlYNmNpRGYwU1hYZUI1d09YQ0xjaz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo= server: https://192.168.200.101:6443 name: "" contexts: [] current-context: "" kind: Config preferences: {} users: [] kind: ConfigMap metadata: creationTimestamp: null name: cluster-info namespace: kube-public [dryrun] Would perform action CREATE on resource "roles" in API group "rbac.authorization.k8s.io/v1beta1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1beta1 kind: Role metadata: creationTimestamp: null name: kubeadm:bootstrap-signer-clusterinfo namespace: kube-public rules: - apiGroups: - "" resourceNames: - cluster-info resources: - configmaps verbs: - get [dryrun] Would perform action CREATE on resource "rolebindings" in API 
group "rbac.authorization.k8s.io/v1beta1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1beta1 kind: RoleBinding metadata: creationTimestamp: null name: kubeadm:bootstrap-signer-clusterinfo namespace: kube-public roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: kubeadm:bootstrap-signer-clusterinfo subjects: - kind: User name: system:anonymous [uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace [dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 data: MasterConfiguration: | api: advertiseAddress: 192.168.200.101 bindPort: 6443 apiServerCertSANs: [] apiServerExtraArgs: null authorizationModes: - Node - RBAC certificatesDir: /etc/kubernetes/pki cloudProvider: "" controllerManagerExtraArgs: null etcd: caFile: "" certFile: "" dataDir: /var/lib/etcd endpoints: [] extraArgs: null image: "" keyFile: "" featureFlags: null imageRepository: gcr.io/google_containers kubernetesVersion: v1.7.4 networking: dnsDomain: cluster.local podSubnet: "" serviceSubnet: 10.96.0.0/12 nodeName: thegopher schedulerExtraArgs: null token: 96efd6.98bbb2f4603c026b tokenTTL: 86400000000000 unifiedControlPlaneImage: "" kind: ConfigMap metadata: creationTimestamp: null name: kubeadm-config namespace: kube-system [dryrun] Would perform action GET on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1" [dryrun] Resource name: "system:node" [dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: kube-dns namespace: kube-system [dryrun] Would perform action GET on resource "services" in API group "core/v1" [dryrun] Resource name: "kubernetes" [dryrun] Would perform action CREATE on resource "deployments" in API group "extensions/v1beta1" [dryrun] Attached object: apiVersion: extensions/v1beta1 kind: Deployment metadata: creationTimestamp: null labels: k8s-app: kube-dns name: kube-dns namespace: kube-system spec: selector: matchLabels: k8s-app: kube-dns strategy: rollingUpdate: maxSurge: 10% maxUnavailable: 0 template: metadata: creationTimestamp: null labels: k8s-app: kube-dns spec: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: beta.kubernetes.io/arch operator: In values: - amd64 containers: - args: - --domain=cluster.local. 
- --dns-port=10053 - --config-dir=/kube-dns-config - --v=2 env: - name: PROMETHEUS_PORT value: "10055" image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.14.4 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /healthcheck/kubedns port: 10054 scheme: HTTP initialDelaySeconds: 60 successThreshold: 1 timeoutSeconds: 5 name: kubedns ports: - containerPort: 10053 name: dns-local protocol: UDP - containerPort: 10053 name: dns-tcp-local protocol: TCP - containerPort: 10055 name: metrics protocol: TCP readinessProbe: httpGet: path: /readiness port: 8081 scheme: HTTP initialDelaySeconds: 3 timeoutSeconds: 5 resources: limits: memory: 170Mi requests: cpu: 100m memory: 70Mi volumeMounts: - mountPath: /kube-dns-config name: kube-dns-config - args: - -v=2 - -logtostderr - -configDir=/etc/k8s/dns/dnsmasq-nanny - -restartDnsmasq=true - -- - -k - --cache-size=1000 - --log-facility=- - --server=/cluster.local/127.0.0.1#10053 - --server=/in-addr.arpa/127.0.0.1#10053 - --server=/ip6.arpa/127.0.0.1#10053 image: gcr.io/google_containers/k8s-dns-dnsmasq-nanny-amd64:1.14.4 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /healthcheck/dnsmasq port: 10054 scheme: HTTP initialDelaySeconds: 60 successThreshold: 1 timeoutSeconds: 5 name: dnsmasq ports: - containerPort: 53 name: dns protocol: UDP - containerPort: 53 name: dns-tcp protocol: TCP resources: requests: cpu: 150m memory: 20Mi volumeMounts: - mountPath: /etc/k8s/dns/dnsmasq-nanny name: kube-dns-config - args: - --v=2 - --logtostderr - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.14.4 imagePullPolicy: IfNotPresent livenessProbe: failureThreshold: 5 httpGet: path: /metrics port: 10054 scheme: HTTP initialDelaySeconds: 60 successThreshold: 1 timeoutSeconds: 5 name: sidecar ports: - containerPort: 10054 name: metrics protocol: TCP resources: requests: cpu: 10m memory: 20Mi dnsPolicy: Default serviceAccountName: kube-dns tolerations: - key: CriticalAddonsOnly operator: Exists - effect: NoSchedule key: node-role.kubernetes.io/master volumes: - configMap: name: kube-dns optional: true name: kube-dns-config status: {} [dryrun] Would perform action CREATE on resource "services" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 kind: Service metadata: creationTimestamp: null labels: k8s-app: kube-dns kubernetes.io/cluster-service: "true" kubernetes.io/name: KubeDNS name: kube-dns namespace: kube-system resourceVersion: "0" spec: clusterIP: 10.96.0.10 ports: - name: dns port: 53 protocol: UDP targetPort: 53 - name: dns-tcp port: 53 protocol: TCP targetPort: 53 selector: k8s-app: kube-dns status: loadBalancer: {} [addons] Applied essential addon: kube-dns [dryrun] Would perform action CREATE on resource "serviceaccounts" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 kind: ServiceAccount metadata: creationTimestamp: null name: kube-proxy namespace: kube-system [dryrun] Would perform action CREATE on resource "configmaps" in API group "core/v1" [dryrun] Attached object: apiVersion: v1 data: kubeconfig.conf: | apiVersion: v1 kind: Config clusters: - cluster: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt server: https://192.168.200.101:6443 name: default contexts: - context: cluster: default namespace: default user: default name: default current-context: default users: - name: default user: 
tokenFile: /var/run/secrets/kubernetes.io/serviceaccount/token kind: ConfigMap metadata: creationTimestamp: null labels: app: kube-proxy name: kube-proxy namespace: kube-system [dryrun] Would perform action CREATE on resource "daemonsets" in API group "extensions/v1beta1" [dryrun] Attached object: apiVersion: extensions/v1beta1 kind: DaemonSet metadata: creationTimestamp: null labels: k8s-app: kube-proxy name: kube-proxy namespace: kube-system spec: selector: matchLabels: k8s-app: kube-proxy template: metadata: creationTimestamp: null labels: k8s-app: kube-proxy spec: containers: - command: - /usr/local/bin/kube-proxy - --kubeconfig=/var/lib/kube-proxy/kubeconfig.conf image: gcr.io/google_containers/kube-proxy-amd64:v1.7.4 imagePullPolicy: IfNotPresent name: kube-proxy resources: {} securityContext: privileged: true volumeMounts: - mountPath: /var/lib/kube-proxy name: kube-proxy - mountPath: /run/xtables.lock name: xtables-lock hostNetwork: true serviceAccountName: kube-proxy tolerations: - effect: NoSchedule key: node-role.kubernetes.io/master - effect: NoSchedule key: node.cloudprovider.kubernetes.io/uninitialized value: "true" volumes: - configMap: name: kube-proxy name: kube-proxy - hostPath: path: /run/xtables.lock name: xtables-lock updateStrategy: type: RollingUpdate status: currentNumberScheduled: 0 desiredNumberScheduled: 0 numberMisscheduled: 0 numberReady: 0 [dryrun] Would perform action CREATE on resource "clusterrolebindings" in API group "rbac.authorization.k8s.io/v1beta1" [dryrun] Attached object: apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: creationTimestamp: null name: kubeadm:node-proxier roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:node-proxier subjects: - kind: ServiceAccount name: kube-proxy namespace: kube-system [addons] Applied essential addon: kube-proxy Your Kubernetes master has initialized successfully! To start using your cluster, you need to run (as a regular user): mkdir -p $HOME/.kube sudo cp -i /tmp/kubeadm-init-dryrun477531930/admin.conf $HOME/.kube/config sudo chown $(id -u):$(id -g) $HOME/.kube/config You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: http://kubernetes.io/docs/admin/addons/ You can now join any number of machines by running the following on each node as root: kubeadm join --token 96efd6.98bbb2f4603c026b 192.168.200.101:6443 --discovery-token-ca-cert-hash sha256:ccb794198ae65cb3c9e997be510c18023e0e9e064225a588997b9e6c64ebf9f1 ``` **Release note**: ```release-note kubeadm: Implement a `--dry-run` mode and flag for `kubeadm` ``` @kubernetes/sig-cluster-lifecycle-pr-reviews @ncdc @sttts
What this PR does / why we need it:
Adds dry-run support to kubeadm by creating a fake clientset that can return totally fake values (like in the init case), or delegate GETs/LISTs to a real API server while discarding all edits like POST/PUT/PATCH/DELETE.
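A minimal sketch of that split (simplified stand-ins for the PR's `DryRunGetter`/fake-clientset plumbing): reads get answered, writes only get logged.

```go
package main

import "fmt"

// request is a simplified stand-in for a client action.
type request struct {
	verb     string // "GET", "LIST", "POST", "PUT", "PATCH", "DELETE"
	resource string
	name     string
}

func isReadVerb(verb string) bool { return verb == "GET" || verb == "LIST" }

// handle answers reads (from canned data in the init case, or by delegating to
// a real API server in the upgrade case) and only reports mutating verbs.
func handle(req request, delegateRead func(request) string) string {
	if isReadVerb(req.verb) {
		return delegateRead(req)
	}
	return fmt.Sprintf("[dryrun] Would perform action %s on resource %q, name %q",
		req.verb, req.resource, req.name)
}

func main() {
	canned := func(req request) string { return "faked " + req.resource + "/" + req.name }
	fmt.Println(handle(request{"GET", "nodes", "master-0"}, canned))
	fmt.Println(handle(request{"PATCH", "nodes", "master-0"}, canned))
}
```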
Which issue this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format, will close that issue when PR gets merged): fixes kubernetes/kubeadm#389
Special notes for your reviewer:
This PR depends on #50626; the first three commits are from there.
This PR is a dependency for #48899 (kubeadm upgrades).
I still have some small things to fix up and unit tests to write, but PTAL if you think this is going in the right direction.
Release note:
cc @kubernetes/sig-cluster-lifecycle-pr-reviews @kubernetes/sig-api-machinery-pr-reviews