
Automated cherry pick of #122893: Fix EnsureAdminClusterRoleBindingImpl error handling #122898

Conversation

danwinship
Contributor

Cherry pick of #122893 on release-1.29.

#122893: Fix EnsureAdminClusterRoleBindingImpl error handling

For details on the cherry pick process, see the cherry pick requests page.


The code assumed Create() returns a nil object on failure, but that's only true
for the fake clients used in unit tests.
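
To make the described anti-pattern concrete, here is a minimal sketch (hypothetical helpers, not the actual kubeadm code): a real client returns a non-nil zero-value object alongside a non-nil error, while the fake clientset used in unit tests returns nil, so only the returned error is a reliable failure signal.

```go
package sketch

import (
	"context"
	"errors"

	rbacv1 "k8s.io/api/rbac/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// ensureCRBBuggy shows the anti-pattern: it tests the returned object instead of
// the returned error. Outside of unit tests the object is non-nil even on failure,
// so the error is silently dropped.
func ensureCRBBuggy(ctx context.Context, client clientset.Interface, crb *rbacv1.ClusterRoleBinding) error {
	created, _ := client.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	if created == nil { // only the fake client behaves this way
		return errors.New("failed to create ClusterRoleBinding")
	}
	return nil
}

// ensureCRBFixed branches on the error value, which behaves the same way against
// real and fake clients.
func ensureCRBFixed(ctx context.Context, client clientset.Interface, crb *rbacv1.ClusterRoleBinding) error {
	_, err := client.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	return err
}
```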
@k8s-ci-robot
Contributor

@danwinship: All 'parent' PRs of a cherry-pick PR must have one of the "release-note" or "release-note-action-required" labels, or this PR must follow the standard/parent release note labeling requirement.

The following parent PRs have neither the "release-note" nor the "release-note-action-required" labels: #122893.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added this to the v1.29 milestone Jan 21, 2024
@k8s-ci-robot k8s-ci-robot added do-not-merge/cherry-pick-not-approved Indicates that a PR is not yet approved to merge into a release branch. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. labels Jan 21, 2024
@k8s-ci-robot
Contributor

This cherry pick PR is for a release branch and has not yet been approved by Release Managers.
Adding the do-not-merge/cherry-pick-not-approved label.

To merge this cherry pick, it must first be approved (/lgtm + /approve) by the relevant OWNERS.

If you didn't cherry-pick this change to all supported release branches, please leave a comment explaining why the other cherry-picks are not needed, to speed up the review process.

If you're not sure whether this change needs to be cherry-picked to all supported release branches, please consult the cherry-pick guidelines document.

AFTER it has been approved by code owners, please leave the following comment on a line by itself, with no leading whitespace: /cc kubernetes/release-managers

(This command will request a cherry pick review from Release Managers and should work for all GitHub users, whether they are members of the Kubernetes GitHub organization or not.)

For details on the patch release process and schedule, see the Patch Releases page.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot
Contributor

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected. Please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jan 21, 2024
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: danwinship
Once this PR has been reviewed and has the lgtm label, please assign neolit123 for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the do-not-merge/needs-kind Indicates a PR lacks a `kind/foo` label and requires one. label Jan 21, 2024
@k8s-ci-robot k8s-ci-robot requested a review from SataQiu January 21, 2024 23:10
@k8s-ci-robot k8s-ci-robot added sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jan 21, 2024
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-priority Indicates a PR lacks a `priority/foo` label and requires one. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jan 21, 2024
@danwinship
Contributor Author

/assign @neolit123

@danwinship
Contributor Author

/hold
for resolution of #122900 / #122901
/cc @pacoxu

@k8s-ci-robot k8s-ci-robot requested a review from pacoxu January 22, 2024 13:06
@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Jan 22, 2024
@pacoxu
Member

pacoxu commented Jan 24, 2024

Cherry pick #122893 with #122893 is OK to me. @danwinship Could you update this PR?

@danwinship
Contributor Author

@pacoxu actually, given that (as per the analysis in #122901) the bug was only in the unit test, and the code worked correctly in production, is this still worth backporting?

In particular, the bot is complaining "All 'parent' PRs of a cherry-pick PR must have one of the "release-note" or "release-note-action-required" labels" but there is nothing to release-note here; there is no user-visible change.
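
For reference, the label check keys off a fenced release-note block in the parent PR's description; a change with no user-visible effect is usually marked with the block below, which normally makes the bot apply a release-note-none label instead of blocking the merge.

```release-note
NONE
```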

@pacoxu
Member

pacoxu commented Jan 25, 2024

> @pacoxu actually, given that (as per the analysis in #122901) the bug was only in the unit test, and the code worked correctly in production, is this still worth backporting?

My mistake. It sounds like it's not needed now.

@danwinship danwinship closed this Jan 25, 2024
@danwinship danwinship deleted the automated-cherry-pick-of-#122893-origin-release-1.29 branch January 25, 2024 14:51
@dhruvapg

Hi,
I'm hitting a "clusterrolebindings.rbac.authorization.k8s.io "kubeadm:cluster-admins" already exists" error during kubeadm init phase mark-control-plane on K8s 1.29.3. Even if I delete the ClusterRoleBinding manually, it gets recreated automatically by kubeadm in some sync loop, and then the kubeadm init phase mark-control-plane step fails with the "clusterrolebinding already exists" error again and is stuck in this error state.
```
I0627 08:12:10.094815 538426 kubeconfig.go:606] ensuring that the ClusterRoleBinding for the kubeadm:cluster-admins Group exists
I0627 08:12:10.102894 538426 kubeconfig.go:682] creating the ClusterRoleBinding for the kubeadm:cluster-admins Group by using super-admin.conf
clusterrolebindings.rbac.authorization.k8s.io "kubeadm:cluster-admins" already exists
unable to create the kubeadm:cluster-admins ClusterRoleBinding by using super-admin.conf
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.EnsureAdminClusterRoleBindingImpl
    cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:708
k8s.io/kubernetes/cmd/kubeadm/app/phases/kubeconfig.EnsureAdminClusterRoleBinding
    cmd/kubeadm/app/phases/kubeconfig/kubeconfig.go:595
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*initData).Client
    cmd/kubeadm/app/cmd/init.go:526
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runMarkControlPlane
    cmd/kubeadm/app/cmd/phases/init/markcontrolplane.go:60
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).BindToCommand.func1.1
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:372
github.com/spf13/cobra.(*Command).execute
    vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
    vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
    vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
    cmd/kubeadm/app/kubeadm.go:50
main.main
    cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:267
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1650
could not bootstrap the admin user in file admin.conf
k8s.io/kubernetes/cmd/kubeadm/app/cmd.(*initData).Client
    cmd/kubeadm/app/cmd/init.go:528
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runMarkControlPlane
    cmd/kubeadm/app/cmd/phases/init/markcontrolplane.go:60
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:259
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:446
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:232
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).BindToCommand.func1.1
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:372
github.com/spf13/cobra.(*Command).execute
    vendor/github.com/spf13/cobra/command.go:940
github.com/spf13/cobra.(*Command).ExecuteC
    vendor/github.com/spf13/cobra/command.go:1068
github.com/spf13/cobra.(*Command).Execute
    vendor/github.com/spf13/cobra/command.go:992
k8s.io/kubernetes/cmd/kubeadm/app.Run
    cmd/kubeadm/app/kubeadm.go:50
main.main
    cmd/kubeadm/kubeadm.go:25
runtime.main
```

The kube-apiserver audit logs show that the kubeadm:cluster-admins ClusterRoleBinding is automatically recreated by kubeadm soon after I delete it manually:

```
audit/kube-apiserver.log:{"kind":"Event","apiVersion":"audit.k8s.io/v1","level":"RequestResponse","auditID":"81d9e046-37ae-4814-9e4b-56e87cc05c56","stage":"ResponseComplete","requestURI":"/apis/rbac.authorization.k8s.io/v1/clusterrolebindings?timeout=10s","verb":"create","user":{"username":"kubernetes-super-admin","groups":["system:masters","system:authenticated"]},"userAgent":"kubeadm/v1.29.3+(linux/amd64) kubernetes/4ab1a82","objectRef":{"resource":"clusterrolebindings","name":"kubeadm:cluster-admins","apiGroup":"rbac.authorization.k8s.io","apiVersion":"v1"},"responseStatus":{"metadata":{},"code":201},"requestObject":{"kind":"ClusterRoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:cluster-admins","creationTimestamp":null},"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"kubeadm:cluster-admins"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"cluster-admin"}},"responseObject":{"kind":"ClusterRoleBinding","apiVersion":"rbac.authorization.k8s.io/v1","metadata":{"name":"kubeadm:cluster-admins","uid":"629da920-2bd3-4a98-9348-86708ccf6e4e","resourceVersion":"65240","creationTimestamp":"2024-06-27T06:54:24Z","managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"rbac.authorization.k8s.io/v1","time":"2024-06-27T06:54:24Z","fieldsType":"FieldsV1","fieldsV1":{"f:roleRef":{},"f:subjects":{}}}]},"subjects":[{"kind":"Group","apiGroup":"rbac.authorization.k8s.io","name":"kubeadm:cluster-admins"}],"roleRef":{"apiGroup":"rbac.authorization.k8s.io","kind":"ClusterRole","name":"cluster-admin"}},"requestReceivedTimestamp":"2024-06-27T06:54:24.611747Z","stageTimestamp":"2024-06-27T06:54:24.617174Z","annotations":{"authorization.k8s.io/decision":"allow","authorization.k8s.io/reason":""}}
```

Is there a way to fix this error without backporting ec1516b to 1.29?
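
For context on the report above: the failure is the ClusterRoleBinding Create call surfacing an AlreadyExists error instead of treating the existing binding as success. A minimal sketch of an idempotent create, assuming client-go's apierrors helpers and a hypothetical helper name (not the actual kubeadm code, whose behavior depends on which commits a given release contains):

```go
package sketch

import (
	"context"
	"fmt"

	rbacv1 "k8s.io/api/rbac/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	clientset "k8s.io/client-go/kubernetes"
)

// createCRBIdempotent creates the ClusterRoleBinding and treats "already exists"
// as success, so re-running the step against a cluster that already has the
// binding does not fail.
func createCRBIdempotent(ctx context.Context, client clientset.Interface, crb *rbacv1.ClusterRoleBinding) error {
	_, err := client.RbacV1().ClusterRoleBindings().Create(ctx, crb, metav1.CreateOptions{})
	if err != nil && !apierrors.IsAlreadyExists(err) {
		return fmt.Errorf("unable to create ClusterRoleBinding %s: %w", crb.Name, err)
	}
	return nil
}
```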

@neolit123
Member

please log a ticket in kubernetes/kubeadm

Labels
area/kubeadm
cncf-cla: yes Indicates the PR's author has signed the CNCF CLA.
do-not-merge/cherry-pick-not-approved Indicates that a PR is not yet approved to merge into a release branch.
do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command.
do-not-merge/needs-kind Indicates a PR lacks a `kind/foo` label and requires one.
do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels.
needs-priority Indicates a PR lacks a `priority/foo` label and requires one.
needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.
sig/cluster-lifecycle Categorizes an issue or PR as relevant to SIG Cluster Lifecycle.
size/XS Denotes a PR that changes 0-9 lines, ignoring generated files.