
Expose the race condition on pod preemption #94358

Closed
wants to merge 1 commit

Conversation

chendave
Member

@chendave chendave commented Aug 31, 2020

  • Fix a potential test flake.
  • Add a new test case to expose the race condition on pod preemption.
  • Fix a typo along the way.

Signed-off-by: Dave Chen dave.chen@arm.com

What type of PR is this?

Add one of the following kinds:
/kind bug

What this PR does / why we need it:

Which issue(s) this PR fixes:

Ref #93505

Special notes for your reviewer:

Does this PR introduce a user-facing change?:


Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:


@k8s-ci-robot k8s-ci-robot added the following labels on Aug 31, 2020:
  • kind/bug - Categorizes issue or PR as related to a bug.
  • size/M - Denotes a PR that changes 30-99 lines, ignoring generated files.
  • kind/api-change - Categorizes issue or PR as related to adding, removing, or otherwise changing an API.
  • do-not-merge/release-note-label-needed - Indicates that a PR should not merge because it's missing one of the release note labels.
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • kind/deprecation - Categorizes issue or PR as related to a feature/enhancement marked for deprecation.
  • kind/failing-test - Categorizes issue or PR as related to a consistently or frequently failing test.
  • kind/flake - Categorizes issue or PR as related to a flaky test.
  • kind/regression - Categorizes issue or PR as related to a regression from a prior release.
  • needs-sig - Indicates an issue or PR lacks a `sig/foo` label and requires one.
  • needs-priority - Indicates a PR lacks a `priority/foo` label and requires one.
@chendave
Member Author

/release-note-none

@k8s-ci-robot
Contributor

@chendave: you can not set the release note label to "release-note-none" because the PR has the label "kind/deprecation".

In response to this:

/release-note-none

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: chendave
To complete the pull request process, please assign huang-wei
You can assign the PR to them by writing /assign @huang-wei in a comment when ready.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot requested review from ahg-g and wgliang August 31, 2020 09:46
@k8s-ci-robot k8s-ci-robot added sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. and removed needs-sig Indicates an issue or PR lacks a `sig/foo` label and requires one. labels Aug 31, 2020
@chendave
Member Author

/remove-kind api-change
/remove-kind deprecation
/remove-kind regression

@k8s-ci-robot k8s-ci-robot removed kind/api-change Categorizes issue or PR as related to adding, removing, or otherwise changing an API kind/deprecation Categorizes issue or PR as related to a feature/enhancement marked for deprecation. kind/regression Categorizes issue or PR as related to a regression from a prior release. labels Aug 31, 2020
@chendave
Member Author

/release-note-none

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. and removed do-not-merge/release-note-label-needed Indicates that a PR should not merge because it's missing one of the release note labels. labels Aug 31, 2020
@chendave
Member Author

/cc @Huang-Wei @ahg-g @alculquicondor @soulxu

I will propose a fix later; the preemption could eventually converge if we skip the move request when the unschedulableQ is empty.
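As a rough sketch of that idea (a simplified toy model, not the actual kube-scheduler code; the type and field names below only loosely mirror the internal PriorityQueue), skipping the move request when there is nothing to move could look like this:

package queue

import "sync"

// pod is a stand-in for the scheduler's queued pod info in this toy model.
type pod struct{ name string }

// toyPriorityQueue loosely mirrors the internal PriorityQueue fields relevant
// to this discussion; it is not the real implementation.
type toyPriorityQueue struct {
	lock              sync.Mutex
	backoffQ          []*pod
	unschedulablePods map[string]*pod
	schedulingCycle   int64
	moveRequestCycle  int64
}

// moveAllToBackoffQueue sketches the proposed fix: when there is nothing in
// the unschedulable pods map, return without recording the move request, so a
// preemptor that fails in the current cycle still lands with the unschedulable
// pods (waiting for the victim's deletion) instead of being parked in backoffQ
// by a no-op move.
func (q *toyPriorityQueue) moveAllToBackoffQueue() {
	q.lock.Lock()
	defer q.lock.Unlock()
	if len(q.unschedulablePods) == 0 {
		return // skip the move request entirely
	}
	for name, p := range q.unschedulablePods {
		q.backoffQ = append(q.backoffQ, p)
		delete(q.unschedulablePods, name)
	}
	q.moveRequestCycle = q.schedulingCycle
}

The real change would live in the scheduling queue's move handling; the point is only that an empty unschedulable set should not bump moveRequestCycle.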

@k8s-ci-robot
Contributor

@chendave: GitHub didn't allow me to request PR reviews from the following users: soulxu.

Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs.

In response to this:

/cc @Huang-Wei @ahg-g @alculquicondor @soulxu

I will propose a fix later; the preemption could eventually converge if we skip the move request when the unschedulableQ is empty.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

pkg/scheduler/internal/queue/scheduling_queue_test.go (Outdated)
	if q.podBackoffQ.Len() != 1 {
		t.Error("Expected 1 item to be in podBackoffQ")
	}
	// the lowPriority pod is popped and gets scheduled while the highPriorityPod is stuck in the
Member


This is by design to avoid starvation from high priority pods. Let's discuss in the original issue. However, we intend to reduce the problems of this with #94009

Member Author

@chendave chendave Sep 1, 2020


The problem here is that the highPriorityPod should not be in backoffQ in the first place; it was moved to backoffQ only because a move request happened first.
Popping the lowPriority pod here should be fine, but there should be a way for the preemption to converge. Please see the following code:

	// another pod is added to activeQ.
	q.Add(&unschedulablePod)
	if q.activeQ.Len() != 1 {
		t.Error("Expected 1 item to be in activeQ")
	}

It's possible that a new pod is added to the queue while the highPriorityPod is still backing off. After the backoff time is up, the highPriorityPod needs to preempt the lowPriority pod again, and if a move request is detected, the highPriorityPod backs off again.
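To make that loop concrete, here is a small self-contained simulation (purely illustrative; it does not use the real scheduling queue API): as long as a move request is observed during every failed attempt, the highPriorityPod keeps bouncing between activeQ and backoffQ and the preemption never converges.

package main

import "fmt"

func main() {
	location := "unschedulableQ"
	for cycle := 1; cycle <= 3; cycle++ {
		// Backoff expires (or an event moves the pod), so it is tried again.
		location = "activeQ"
		fmt.Printf("cycle %d: highPriorityPod popped from %s, preemption attempted\n", cycle, location)

		// The victim has not been deleted yet, so scheduling fails again; a
		// concurrent Add() produced a move request during this cycle, so the
		// pod is parked in backoffQ instead of unschedulableQ.
		moveRequestSeen := true
		if moveRequestSeen {
			location = "backoffQ"
		} else {
			location = "unschedulableQ" // where it would wait for the victim's deletion
		}
		fmt.Printf("cycle %d: highPriorityPod ends up in %s\n", cycle, location)
	}
}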

Member


I prefer this to be split into a different PR so we can already merge the tests using the deterministic clock.

- Fix a potential test flake.
- Add a new test case to expose the race condition on pod preemption.
- Fix a typo along the way.

Signed-off-by: Dave Chen <dave.chen@arm.com>
@chendave
Member Author

chendave commented Sep 1, 2020

/retest

@k8s-ci-robot
Contributor

k8s-ci-robot commented Sep 1, 2020

@chendave: The following test failed, say /retest to rerun all failed tests:

Test name: pull-kubernetes-e2e-kind
Commit: 12de87f
Details: link
Rerun command: /test pull-kubernetes-e2e-kind

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository. I understand the commands that are listed here.

	}
	// the lowPriority pod is popped and gets scheduled while the highPriorityPod is stuck in the
	// backoffQ.
	q.Pop()
Contributor


I'm still stuck on the problem that the low priority pod shouldn't get a chance to schedule successfully, since when the low priority pod is being scheduled, the resources have already been taken by the high priority pod.

Member Author


Please think about the case where the high priority pod is still in the unschedulableQ or backoffQ, and the low priority pod is the only one in the activeQ.
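For illustration only (a toy model, not the real scheduling queue types or API): Pop only serves activeQ, so when the high priority pod is parked in unschedulableQ or backoffQ, the lone low priority pod in activeQ is what the scheduler gets next, and it can bind to the very resources the preemptor was waiting for.

package main

import "fmt"

type toyPod struct {
	name     string
	priority int
}

// toyQueue keeps only the two sub-queues needed for this example.
type toyQueue struct {
	activeQ  []toyPod
	backoffQ []toyPod
}

// pop returns the next pod to schedule; it deliberately ignores backoffQ,
// mirroring the behaviour discussed in this thread.
func (q *toyQueue) pop() (toyPod, bool) {
	if len(q.activeQ) == 0 {
		return toyPod{}, false
	}
	p := q.activeQ[0]
	q.activeQ = q.activeQ[1:]
	return p, true
}

func main() {
	q := &toyQueue{
		activeQ:  []toyPod{{name: "lowPriorityPod", priority: 10}},
		backoffQ: []toyPod{{name: "highPriorityPod", priority: 100}},
	}
	if next, ok := q.pop(); ok {
		fmt.Printf("scheduling %s (priority %d) while highPriorityPod is still backing off\n",
			next.name, next.priority)
	}
}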

@chendave
Member Author

chendave commented Sep 3, 2020

/hold

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Sep 3, 2020
@k8s-ci-robot
Contributor

@chendave: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 23, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 22, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 21, 2021
@chendave
Member Author

/close

@k8s-ci-robot
Contributor

@chendave: Closed this PR.

In response to this:

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
