
pkg/kubelet: ensure node object exists before syncing pods #129464

Open
wants to merge 1 commit into base: master

Conversation


@bcho bcho commented Jan 3, 2025

The nodeLister loop runs in parallel with the kubelet's node registration call. In rare cases the nodeLister can list an empty node result and the subsequent watch can fail because kube-apiserver closes the connection for an unexpected reason. This leaves the node informer with an empty cache that is nevertheless marked as synced. The node informer then has to back off until the next list-watch call before it discovers the created node object.

If kubelet syncs pods with node affinity before that next list-watch call, those pods fail with a NodeAffinity admission error.

This commit attempts to fix the issue by adding an extra check that the node object exists in the node informer cache before starting the pod sync.
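
A minimal sketch of that check in Go; the helper name and the hasSynced / nodeLister / nodeName wiring are illustrative assumptions, not the exact identifiers in the patch:

```go
package kubelet

import (
	corelisters "k8s.io/client-go/listers/core/v1"
)

// nodeInformerHasNode reports whether the node informer cache is both synced
// and actually contains this kubelet's Node object. A cache that is marked
// synced but is still empty is exactly the race described above.
func nodeInformerHasNode(hasSynced func() bool, nodeLister corelisters.NodeLister, nodeName string) bool {
	if !hasSynced() {
		return false
	}
	if _, err := nodeLister.Get(nodeName); err != nil {
		// Not found (or any lister error): treat the cache as not ready so
		// pod sync does not run NodeAffinity admission against a missing node.
		return false
	}
	return true
}
```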

fix: #129463

What type of PR is this?

/kind bug

What this PR does / why we need it:

Which issue(s) this PR fixes:

Fixes #129463

Special notes for your reviewer:

Does this PR introduce a user-facing change?


Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


The nodeLister loop runs in parallel with the kubelet node registration call.
In rare cases the nodeLister can list an empty node result and the
subsequent watch can fail because kube-apiserver closes the connection for
an unexpected reason. This leaves the node informer with an empty cache
marked as synced. The node informer then has to back off until the next
list-watch call to discover the created node object.

If kubelet syncs pods with affinity before the next list-watch call, these
pods will fail with a NodeAffinity admission error.

This commit attempts to fix the issue by adding an extra check for the node
object's existence in the node informer cache before starting the pod sync.

fix: kubernetes#129463
@k8s-ci-robot
Contributor

Adding the "do-not-merge/release-note-label-needed" label because no release-note block was detected, please follow our release note process to remove it.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message. size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. kind/bug Categorizes issue or PR as related to a bug. labels Jan 3, 2025
@k8s-ci-robot
Contributor

Keywords which can automatically close issues and at(@) or hashtag(#) mentions are not allowed in commit messages.

The list of commits with invalid commit messages:

  • 17ea736 pkg/kubelet: ensure node object exists before syncing pods

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@k8s-ci-robot k8s-ci-robot added do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Jan 3, 2025
@k8s-ci-robot
Contributor

Welcome @bcho!

It looks like this is your first PR to kubernetes/kubernetes 🎉. Please refer to our pull request process documentation to help your PR have a smooth ride to approval.

You will be prompted by a bot to use commands during the review process. Do not be afraid to follow the prompts! It is okay to experiment. Here is the bot commands documentation.

You can also check if kubernetes/kubernetes has its own contribution guidelines.

You may want to refer to our testing guide if you run into trouble with your tests not passing.

If you are having difficulty getting your pull request seen, please follow the recommended escalation practices. Also, for tips and tricks in the contribution process you may want to read the Kubernetes contributor cheat sheet. We want to make sure your contribution gets all the attention it needs!

Thank you, and welcome to Kubernetes. 😃

@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Jan 3, 2025
@k8s-ci-robot
Contributor

Hi @bcho. Thanks for your PR.

I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the needs-priority Indicates a PR lacks a `priority/foo` label and requires one. label Jan 3, 2025
@k8s-ci-robot k8s-ci-robot added area/kubelet sig/node Categorizes an issue or PR as relevant to SIG Node. and removed do-not-merge/needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Jan 3, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: bcho
Once this PR has been reviewed and has the lgtm label, please assign mrunalp for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@pacoxu
Member

pacoxu commented Jan 3, 2025

/cc @neolit123
/assign @SergeyKanzhelev @derekwaynecarr

Member

@neolit123 neolit123 left a comment


listing the node doesn't mean it has synced, correct?

from the ticket:

Kubelet should back off the node list-watch calls until the node object has been populated in the node informer cache.

but that can block indefinitely? how about a configurable timeout that can cover use cases like yours.

However, in rare scenarios the node lister might be failing for some other reason before the first successful call (waiting for TLS bootstrapping, for example) and have increased its back-off (maxing out at 30s); in theory the gap between T6 and T5 could then be as long as 30s + jitter, which aligns with the log timestamps we observed above.

30 seconds for a node list sounds like a lot. almost seems like this cluster would need 'priority and fairness' configured.

Or, we should invalidate the node informer cache after successfully registering the node from kubelet to maintain the correct version of node object in memory.

that seems like something that has to be done either way.
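
For illustration, a configurable wait along those lines might look roughly like this (the interval, the timeout parameter, and the helper name are assumptions, not anything this PR implements):

```go
package kubelet

import (
	"context"
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
	corelisters "k8s.io/client-go/listers/core/v1"
)

// waitForNodeInCache polls the informer cache until the node object shows up
// or the configurable timeout expires, instead of blocking indefinitely.
func waitForNodeInCache(ctx context.Context, nodeLister corelisters.NodeLister, nodeName string, timeout time.Duration) error {
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			_, getErr := nodeLister.Get(nodeName)
			return getErr == nil, nil // keep polling while the node is not listed
		})
	if err != nil {
		return fmt.Errorf("node %q not observed in informer cache within %v: %w", nodeName, timeout, err)
	}
	return nil
}
```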

@bcho
Author

bcho commented Jan 3, 2025

hi @neolit123

listing the node doesn't mean it has synced, correct?

yep, this simple fix is to check whether the informer has synced and has the node record read from kube-apiserver

but that can block indefinitely? how about a configurable timeout that can cover uses cases like yours.

If the node informer never reads the node object from the remote, it's supposed to block or be restarted because the states have drifted. But I didn't find a good way to do so in the current implementation.

30 seconds for a node list sounds like a lot. almost seems like this cluster would need 'priority and fairness' configured.

Agree, but these are the current default settings from the back-off manager used in the node lister. This is also why it might be better to start the node lister informer, or invalidate its cache, only after registering the node. However, I am not sure what the impact on the static pod creation process would be.
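
To put a number on that: with an exponential back-off that starts around 800ms, doubles, and caps at 30s (roughly the shape of the default list-watch retry back-off; the exact values here are assumptions for illustration), a node missed by the first list can take close to the cap to be rediscovered:

```go
package main

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

func main() {
	// Jitter omitted for readability; the delays climb 800ms, 1.6s, 3.2s, ...
	// and then pin at the 30s cap, which is the T6-T5 gap discussed above.
	b := wait.Backoff{Duration: 800 * time.Millisecond, Factor: 2.0, Steps: 10, Cap: 30 * time.Second}
	for i := 1; i <= 7; i++ {
		fmt.Printf("retry %d after %v\n", i, b.Step())
	}
}
```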

@neolit123
Member

hi @neolit123

listing the node doesn't mean it has synced, correct?

yep, this simple fix is to check whether the informer has synced and has the node record read from kube-apiserver

what are the implications of doing it like return IsSynced && nodeIsListed instead of listing after sync fails in the helper function?

but that can block indefinitely? how about a configurable timeout that can cover use cases like yours.

If the node informer never reads the node object from the remote, it's supposed to block or be restarted because the states have drifted. But I didn't find a good way to do so in the current implementation.

indeed, that might be non-trivial.

30 seconds for a node list sounds like a lot. almost seems like this cluster would need 'priority and fairness' configured.

Agree, but these are the current default settings from the back-off manager used in the node lister. This is also why it might be better to start the node lister informer, or invalidate its cache, only after registering the node. However, I am not sure what the impact on the static pod creation process would be.

SIG Node (kubelet owners) should review and advise here.
you can also join their weekly zoom meeting and/or email the mailing list:
https://github.com/kubernetes/community/blob/master/sig-node/README.md

from my POV a configurable backoff / timeout option might make sense, if that can be set on the informer.

i will run ok-to-test on this PR, static pods are quite tricky and i don't think we have full test coverage for all their edge cases. in any case, we must not break them.

/ok-to-test

@k8s-ci-robot k8s-ci-robot added ok-to-test Indicates a non-member PR verified by an org member that is safe to test. and removed needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. labels Jan 3, 2025
@bcho
Author

bcho commented Jan 3, 2025

hi @neolit123

listing the node doesn't mean it has synced, correct?

yep, this simple fix is to check whether the informer has synced and has the node record read from kube-apiserver

what are the implications of doing it like return IsSynced && nodeIsListed instead of listing after sync fails in the helper function?

The main issue is that the informer hides the short-watch signal. Even if we do a full list immediately after a short watch, there is still a chance the node object is not yet registered on the kube-apiserver side (event T4 in the issue). Given that this helper function runs in parallel with node registration, the only viable way in the current implementation is to back off when the node is not listed, until node registration happens. This is the main reason why I thought the optimal solution is to start the node informer only after node registration.
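
A rough sketch of that ordering, assuming a shared informer factory and a placeholder registerNode callback (neither is how kubelet actually wires this up today):

```go
package kubelet

import (
	"fmt"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/tools/cache"
)

// startNodeInformerAfterRegistration registers the node first and only then
// starts the node informer, so the informer's first list cannot race with
// registration and end up with an empty cache that is marked synced.
func startNodeInformerAfterRegistration(factory informers.SharedInformerFactory, stopCh <-chan struct{}, registerNode func() error) error {
	if err := registerNode(); err != nil {
		return err
	}
	nodeInformer := factory.Core().V1().Nodes().Informer()
	go nodeInformer.Run(stopCh)
	// Block until the initial list, which now includes this node, is cached.
	if !cache.WaitForCacheSync(stopCh, nodeInformer.HasSynced) {
		return fmt.Errorf("node informer cache did not sync before shutdown")
	}
	return nil
}
```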

Will try to join the node sig call, thanks for the pointer and reviews!

Labels
  • area/kubelet
  • cncf-cla: yes Indicates the PR's author has signed the CNCF CLA.
  • do-not-merge/invalid-commit-message Indicates that a PR should not merge because it has an invalid commit message.
  • do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels.
  • kind/bug Categorizes issue or PR as related to a bug.
  • needs-priority Indicates a PR lacks a `priority/foo` label and requires one.
  • needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.
  • ok-to-test Indicates a non-member PR verified by an org member that is safe to test.
  • sig/node Categorizes an issue or PR as relevant to SIG Node.
  • size/XS Denotes a PR that changes 0-9 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

kubelet could reject pods with NodeAffinity error due to incomplete informer cache on the node object
6 participants