sched: make CycleState's Read()/Write()/Delete() thread-safe #101542

Merged

Conversation

Huang-Wei
Member

@Huang-Wei Huang-Wei commented Apr 28, 2021

What type of PR is this?

/kind bug
/sig scheduling

What this PR does / why we need it:

This is a follow-up to the TBD item in #95887 (comment).

The scheduler uses and exposes a map-backed struct called CycleState to store transient data within one scheduling cycle. It is usually keyed by a particular plugin name plus phase (PreFilter or PreScore), with the concrete data as the value. Without this PR, this struct is:

  1. written without locking in PreFilter / PreScore phases
  2. read without locking in Filter / Score phases

Behavior 2 should be adjusted to acquire a read lock before reading, because we shouldn't assume out-of-tree plugins behave like in-tree ones (in-tree plugins don't write data to CycleState during Filter/Score) - that assumption is exactly how the race condition in #95887 occurred.

As for behavior 1, in-tree plugins only call Write() in sequential phases, so it could in principle stay lock-free. But again, to eventually remove the misleading Lock()/Unlock() functions from CycleState, we decided to make no assumption about which phase Write() can or should be called in - in other words, to add locking inside Write() as well.

Note that the race pattern this PR addresses is multiple plugins running in parallel and competing for CycleState. Another race pattern is the same plugin running in parallel (on different nodes); that one must be resolved by the plugin's own implementation (see #96777 for more details).
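
For illustration, here is a minimal sketch of the intended end state - not the actual in-tree code, and with simplified stand-ins for the framework types - showing Read()/Write()/Delete() guarding a plain map with an internal sync.RWMutex:

```go
package framework

import (
	"errors"
	"sync"
)

// Simplified stand-ins for the real framework types.
type StateKey string
type StateData interface{}

var errNotFound = errors.New("not found")

// CycleState stores transient data for one scheduling cycle; the embedded
// RWMutex makes Read/Write/Delete safe for plugins running in parallel.
type CycleState struct {
	mx      sync.RWMutex
	storage map[StateKey]StateData
}

func NewCycleState() *CycleState {
	return &CycleState{storage: make(map[StateKey]StateData)}
}

// Read returns the data stored under key, taking a read lock internally.
func (c *CycleState) Read(key StateKey) (StateData, error) {
	c.mx.RLock()
	defer c.mx.RUnlock()
	if v, ok := c.storage[key]; ok {
		return v, nil
	}
	return nil, errNotFound
}

// Write stores val under key, taking a write lock internally.
func (c *CycleState) Write(key StateKey, val StateData) {
	c.mx.Lock()
	defer c.mx.Unlock()
	c.storage[key] = val
}

// Delete removes the data stored under key, taking a write lock internally.
func (c *CycleState) Delete(key StateKey) {
	c.mx.Lock()
	defer c.mx.Unlock()
	delete(c.storage, key)
}
```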

Which issue(s) this PR fixes:

Fixes #95887

Special notes for your reviewer:

I didn't see an obvious performance degradation according to the bench diff:

SchedulingBasic/5000Nodes
+--------------------------+---------------------------+----------+--------+---------+---------+---------+
|          METRIC          |           GROUP           | QUANTILE |  UNIT  |   OLD   |   NEW   |  DIFF   |
+--------------------------+---------------------------+----------+--------+---------+---------+---------+
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Average  | pods/s |  166.67 |  162.56 | -2.47%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc50   | pods/s |  178.56 |  184.89 | +3.55%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc90   | pods/s |  208.00 |  202.30 | -2.74%  |
| SchedulingThroughput     | SchedulingBasic/5000Nodes | Perc99   | pods/s |  209.00 |  202.50 | -3.11%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Average  | ms     |   26.75 |   25.30 | -5.41%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     |   12.53 |   12.52 | -0.09%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc90   | ms     |   29.98 |   30.88 | +3.01%  |
| scheduler_e2e_scheduling | SchedulingBasic/5000Nodes | Perc99   | ms     |  524.08 |  446.84 | -14.74% |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Average  | ms     | 2382.24 | 2498.31 | +4.87%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     | 2447.23 | 2604.43 | +6.42%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc90   | ms     | 4565.14 | 4613.83 | +1.07%  |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc99   | ms     | 5064.51 | 5069.38 | +0.10%  |
+--------------------------+---------------------------+----------+--------+---------+---------+---------+
Preemption/5000Nodes
+--------------------------+----------------------+----------+--------+-----------+-----------+--------+
|          METRIC          |        GROUP         | QUANTILE |  UNIT  |    OLD    |    NEW    |  DIFF  |
+--------------------------+----------------------+----------+--------+-----------+-----------+--------+
| SchedulingThroughput     | Preemption/5000Nodes | Average  | pods/s |      9.90 |      9.78 | -1.18% |
| SchedulingThroughput     | Preemption/5000Nodes | Perc50   | pods/s |      0.00 |      0.00 | NaN%   |
| SchedulingThroughput     | Preemption/5000Nodes | Perc90   | pods/s |     42.40 |     42.20 | -0.47% |
| SchedulingThroughput     | Preemption/5000Nodes | Perc99   | pods/s |     67.40 |     66.62 | -1.15% |
| scheduler_e2e_scheduling | Preemption/5000Nodes | Average  | ms     |     36.57 |     36.53 | -0.10% |
| scheduler_e2e_scheduling | Preemption/5000Nodes | Perc50   | ms     |     26.96 |     27.08 | +0.44% |
| scheduler_e2e_scheduling | Preemption/5000Nodes | Perc90   | ms     |     85.95 |     84.09 | -2.16% |
| scheduler_e2e_scheduling | Preemption/5000Nodes | Perc99   | ms     |    194.85 |    189.65 | -2.67% |
| scheduler_pod_scheduling | Preemption/5000Nodes | Average  | ms     | 456669.70 | 460623.34 | +0.87% |
| scheduler_pod_scheduling | Preemption/5000Nodes | Perc50   | ms     | 490881.59 | 491069.79 | +0.04% |
| scheduler_pod_scheduling | Preemption/5000Nodes | Perc90   | ms     | 622464.32 | 622501.96 | +0.01% |
| scheduler_pod_scheduling | Preemption/5000Nodes | Perc99   | ms     | 652070.43 | 652074.20 | +0.00% |
+--------------------------+----------------------+----------+--------+-----------+-----------+--------+
SchedulingNodeAffinity/5000Nodes
+--------------------------+----------------------------------+----------+--------+---------+---------+---------+
|          METRIC          |              GROUP               | QUANTILE |  UNIT  |   OLD   |   NEW   |  DIFF   |
+--------------------------+----------------------------------+----------+--------+---------+---------+---------+
| SchedulingThroughput     | SchedulingNodeAffinity/5000Nodes | Average  | pods/s |  125.00 |  125.00 | +0.00%  |
| SchedulingThroughput     | SchedulingNodeAffinity/5000Nodes | Perc50   | pods/s |  117.00 |  120.00 | +2.56%  |
| SchedulingThroughput     | SchedulingNodeAffinity/5000Nodes | Perc90   | pods/s |  190.50 |  187.40 | -1.63%  |
| SchedulingThroughput     | SchedulingNodeAffinity/5000Nodes | Perc99   | pods/s |  190.50 |  187.40 | -1.63%  |
| scheduler_e2e_scheduling | SchedulingNodeAffinity/5000Nodes | Average  | ms     |   91.10 |   77.56 | -14.86% |
| scheduler_e2e_scheduling | SchedulingNodeAffinity/5000Nodes | Perc50   | ms     |   15.11 |   15.00 | -0.73%  |
| scheduler_e2e_scheduling | SchedulingNodeAffinity/5000Nodes | Perc90   | ms     |   67.56 |   57.52 | -14.86% |
| scheduler_e2e_scheduling | SchedulingNodeAffinity/5000Nodes | Perc99   | ms     | 2483.95 | 1880.22 | -24.31% |
| scheduler_pod_scheduling | SchedulingNodeAffinity/5000Nodes | Average  | ms     | 3058.69 | 3080.09 | +0.70%  |
| scheduler_pod_scheduling | SchedulingNodeAffinity/5000Nodes | Perc50   | ms     | 3297.63 | 3227.48 | -2.13%  |
| scheduler_pod_scheduling | SchedulingNodeAffinity/5000Nodes | Perc90   | ms     | 4762.16 | 4757.87 | -0.09%  |
| scheduler_pod_scheduling | SchedulingNodeAffinity/5000Nodes | Perc99   | ms     | 6228.38 | 6765.56 | +8.62%  |
+--------------------------+----------------------------------+----------+--------+---------+---------+---------+
SchedulingPodAffinity/5000Nodes
+--------------------------+---------------------------------+----------+--------+----------+----------+--------+
|          METRIC          |              GROUP              | QUANTILE |  UNIT  |   OLD    |   NEW    |  DIFF  |
+--------------------------+---------------------------------+----------+--------+----------+----------+--------+
| SchedulingThroughput     | SchedulingPodAffinity/5000Nodes | Average  | pods/s |    33.65 |    33.23 | -1.25% |
| SchedulingThroughput     | SchedulingPodAffinity/5000Nodes | Perc50   | pods/s |    34.10 |    33.22 | -2.57% |
| SchedulingThroughput     | SchedulingPodAffinity/5000Nodes | Perc90   | pods/s |    42.90 |    42.50 | -0.93% |
| SchedulingThroughput     | SchedulingPodAffinity/5000Nodes | Perc99   | pods/s |    46.60 |    45.10 | -3.22% |
| scheduler_e2e_scheduling | SchedulingPodAffinity/5000Nodes | Average  | ms     |    45.07 |    45.56 | +1.09% |
| scheduler_e2e_scheduling | SchedulingPodAffinity/5000Nodes | Perc50   | ms     |    39.93 |    39.74 | -0.48% |
| scheduler_e2e_scheduling | SchedulingPodAffinity/5000Nodes | Perc90   | ms     |    92.52 |    90.29 | -2.40% |
| scheduler_e2e_scheduling | SchedulingPodAffinity/5000Nodes | Perc99   | ms     |   206.22 |   205.90 | -0.16% |
| scheduler_pod_scheduling | SchedulingPodAffinity/5000Nodes | Average  | ms     | 14263.98 | 14465.57 | +1.41% |
| scheduler_pod_scheduling | SchedulingPodAffinity/5000Nodes | Perc50   | ms     | 14118.47 | 14445.44 | +2.32% |
| scheduler_pod_scheduling | SchedulingPodAffinity/5000Nodes | Perc90   | ms     | 33113.01 | 33451.37 | +1.02% |
| scheduler_pod_scheduling | SchedulingPodAffinity/5000Nodes | Perc99   | ms     | 40175.30 | 40209.14 | +0.08% |
+--------------------------+---------------------------------+----------+--------+----------+----------+--------+
SchedulingPodAntiAffinity/5000Nodes
+--------------------------+-------------------------------------+----------+--------+---------+---------+---------+
|          METRIC          |                GROUP                | QUANTILE |  UNIT  |   OLD   |   NEW   |  DIFF   |
+--------------------------+-------------------------------------+----------+--------+---------+---------+---------+
| SchedulingThroughput     | SchedulingPodAntiAffinity/5000Nodes | Average  | pods/s |  118.00 |  117.57 | -0.37%  |
| SchedulingThroughput     | SchedulingPodAntiAffinity/5000Nodes | Perc50   | pods/s |  115.60 |  111.40 | -3.63%  |
| SchedulingThroughput     | SchedulingPodAntiAffinity/5000Nodes | Perc90   | pods/s |  178.90 |  178.00 | -0.50%  |
| SchedulingThroughput     | SchedulingPodAntiAffinity/5000Nodes | Perc99   | pods/s |  178.90 |  178.00 | -0.50%  |
| scheduler_e2e_scheduling | SchedulingPodAntiAffinity/5000Nodes | Average  | ms     |   88.81 |  131.50 | +48.07% |
| scheduler_e2e_scheduling | SchedulingPodAntiAffinity/5000Nodes | Perc50   | ms     |   13.04 |   13.54 | +3.83%  |
| scheduler_e2e_scheduling | SchedulingPodAntiAffinity/5000Nodes | Perc90   | ms     |  119.43 |  152.76 | +27.91% |
| scheduler_e2e_scheduling | SchedulingPodAntiAffinity/5000Nodes | Perc99   | ms     | 2038.12 | 2649.58 | +30.00% |
| scheduler_pod_scheduling | SchedulingPodAntiAffinity/5000Nodes | Average  | ms     | 1028.14 | 1061.80 | +3.27%  |
| scheduler_pod_scheduling | SchedulingPodAntiAffinity/5000Nodes | Perc50   | ms     | 1028.81 | 1064.57 | +3.48%  |
| scheduler_pod_scheduling | SchedulingPodAntiAffinity/5000Nodes | Perc90   | ms     | 2225.85 | 2267.39 | +1.87%  |
| scheduler_pod_scheduling | SchedulingPodAntiAffinity/5000Nodes | Perc99   | ms     | 3165.23 | 3599.39 | +13.72% |
+--------------------------+-------------------------------------+----------+--------+---------+---------+---------+
TopologySpreading/5000Nodes
+--------------------------+-----------------------------+----------+--------+----------+----------+--------+
|          METRIC          |            GROUP            | QUANTILE |  UNIT  |   OLD    |   NEW    |  DIFF  |
+--------------------------+-----------------------------+----------+--------+----------+----------+--------+
| SchedulingThroughput     | TopologySpreading/5000Nodes | Average  | pods/s |    64.75 |    63.56 | -1.84% |
| SchedulingThroughput     | TopologySpreading/5000Nodes | Perc50   | pods/s |    69.50 |    67.60 | -2.73% |
| SchedulingThroughput     | TopologySpreading/5000Nodes | Perc90   | pods/s |    80.90 |    79.90 | -1.24% |
| SchedulingThroughput     | TopologySpreading/5000Nodes | Perc99   | pods/s |    86.80 |    85.50 | -1.50% |
| scheduler_e2e_scheduling | TopologySpreading/5000Nodes | Average  | ms     |    26.64 |    27.61 | +3.64% |
| scheduler_e2e_scheduling | TopologySpreading/5000Nodes | Perc50   | ms     |    18.13 |    19.30 | +6.45% |
| scheduler_e2e_scheduling | TopologySpreading/5000Nodes | Perc90   | ms     |    46.88 |    47.63 | +1.60% |
| scheduler_e2e_scheduling | TopologySpreading/5000Nodes | Perc99   | ms     |   149.39 |   153.74 | +2.91% |
| scheduler_pod_scheduling | TopologySpreading/5000Nodes | Average  | ms     | 14399.95 | 14615.29 | +1.50% |
| scheduler_pod_scheduling | TopologySpreading/5000Nodes | Perc50   | ms     | 14435.52 | 14594.17 | +1.10% |
| scheduler_pod_scheduling | TopologySpreading/5000Nodes | Perc90   | ms     | 33046.80 | 33264.58 | +0.66% |
| scheduler_pod_scheduling | TopologySpreading/5000Nodes | Perc99   | ms     | 40168.68 | 40190.46 | +0.05% |
+--------------------------+-----------------------------+----------+--------+----------+----------+--------+

Does this PR introduce a user-facing change?

Scheduler's CycleState now embeds internal read/write locking inside its Read() and Write() functions. Meanwhile, the Lock() and Unlock() functions have been removed.

Action required: scheduler plugin developers are now required to remove calls to CycleState#Lock() and CycleState#Unlock(). Simply use Read() and Write(), as they are natively thread-safe now.
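
For out-of-tree plugin authors the migration is mechanical. A hedged sketch of the new usage (the framework import path is the one used by recent releases, and preFilterState/preFilterStateKey are hypothetical plugin-local names, not framework API):

```go
// Hypothetical plugin-side helper showing the migrated pattern.
package myplugin

import (
	"fmt"

	"k8s.io/kubernetes/pkg/scheduler/framework"
)

const preFilterStateKey framework.StateKey = "PreFilterMyPlugin"

// preFilterState is example per-cycle data computed in PreFilter.
type preFilterState struct{}

// Clone satisfies framework.StateData.
func (s *preFilterState) Clone() framework.StateData { return s }

func getPreFilterState(cycleState *framework.CycleState) (*preFilterState, error) {
	// Before this change: cycleState.RLock(); defer cycleState.RUnlock() around Read().
	// After this change: Read() locks internally, so the explicit lock calls go away.
	c, err := cycleState.Read(preFilterStateKey)
	if err != nil {
		return nil, fmt.Errorf("reading %q from cycleState: %w", preFilterStateKey, err)
	}
	s, ok := c.(*preFilterState)
	if !ok {
		return nil, fmt.Errorf("unexpected state type %T", c)
	}
	return s, nil
}
```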

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. size/M Denotes a PR that changes 30-99 lines, ignoring generated files. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. labels Apr 28, 2021
@k8s-ci-robot
Contributor

@Huang-Wei: This issue is currently awaiting triage.

If a SIG or subproject determines this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot k8s-ci-robot added needs-priority Indicates a PR lacks a `priority/foo` label and requires one. sig/storage Categorizes an issue or PR as relevant to SIG Storage. labels Apr 28, 2021
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Huang-Wei

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Apr 28, 2021
@Huang-Wei
Member Author

/hold

I don't expect the added read lock to introduce significant overhead. Will run perf tests to verify.

@k8s-ci-robot k8s-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Apr 28, 2021
@Huang-Wei
Member Author

Updated the test results in the "Special notes for your reviewer" section.

/cc @alculquicondor @ahg-g
FYI @fabiokung

@k8s-ci-robot k8s-ci-robot requested a review from ahg-g April 29, 2021 16:41
@fabiokung
Contributor

Thank you for pushing this forward @Huang-Wei! 👏

@alculquicondor
Member

I didn't see an obvious performance degradation according to the bench diff:

| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Average  | ms     |  778.43 |  887.20 | +13.97% |
| scheduler_pod_scheduling | SchedulingBasic/5000Nodes | Perc50   | ms     |  524.26 |  814.97 | +55.45% |

That is very significant.

@Huang-Wei
Member Author

That is very significant.

To ensure it's not an outlier, I will rerun the SchedulingBasic test for more rounds.

@Huang-Wei
Member Author

Huang-Wei commented Apr 29, 2021

Reran SchedulingBasic and SchedulingNodeAffinity 20 times and updated the test results. @alculquicondor

@ahg-g
Member

ahg-g commented Apr 29, 2021

The results are quite noisy, but I am not concerned about performance; an RLock should be very lightweight (assuming no contention).
/lgtm
/hold

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Apr 29, 2021
// race-free via cycleState#Lock()/Unlock().
cycleState.RLock()
defer cycleState.RUnlock()

c, err := cycleState.Read(preFilterStateKey)
Member

Why not add the lock in the Read function?

Member

I was wondering the same, but I think the issue here is that adding the rlock inside Read syncs access to the map only, not the value.
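
A tiny standalone illustration of that distinction (plain Go with hypothetical names, not the framework types): a lock around the map makes the lookup safe, but the value both goroutines retrieve still needs its own synchronization.

```go
package main

import (
	"fmt"
	"sync"
)

type counter struct {
	mu sync.Mutex
	n  int
}

type store struct {
	mu sync.RWMutex
	m  map[string]*counter
}

// read is analogous to a locking CycleState.Read: the lock guards the map
// lookup only, not the value that is returned.
func (s *store) read(k string) *counter {
	s.mu.RLock()
	defer s.mu.RUnlock()
	return s.m[k]
}

func main() {
	s := &store{m: map[string]*counter{"k": {}}}
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c := s.read("k") // safe: the map access is synchronized
			c.mu.Lock()      // still required: the shared value is not
			c.n++
			c.mu.Unlock()
		}()
	}
	wg.Wait()
	fmt.Println(s.read("k").n) // prints 100
}
```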

Member

but this is not about locking the value.

Member

@ahg-g ahg-g May 3, 2021

True. Adding RLock to Read but not lock in the Write function is strange though.

Member Author

Why not add the lock in the Read function?

An enforced read lock inside Read is inappropriate, as users should have the option to choose what kind of lock to take around Read - either a simple read lock, like the in-tree plugins, or a write lock, as the original issue stated.

Member Author

@ahg-g I think we're on the same page about how to use CycleState efficiently. What we're talking about here is how to deal with misuse of CycleState, from the perspective of defensive programming.

because I want to remove all Lock related functions, they are confusing and will likely be misused and cause problems.

This is fair. So @alculquicondor @ahg-g what's your take on this:

  • keep 4 functions: Read/SafeRead/Write/SafeWrite, and remove Lock/Unlock
    • in terms of sequential phases (PreFilter/PreScore), use Write or SafeWrite?
  • keep 2 functions: Read/Write, and remove Lock/Unlock - i.e., we only have the locking versions of read/write.

Member

I say, let's start with option 2 and evaluate performance. Please make sure preemption performance is not affected as much. I think we would lock for each Pod that is added/removed.

Member

sounds good, we also need to Lock inside Delete (not sure if anyone uses it).

Member Author

SG, will update and run the perf test.

Member Author

Updated the PR and the test results.

Note that the value of scheduler_e2e_scheduling varies a lot due to unpredictable binding duration. In my test, I tried my best to exclude the obvious outliers, but it's still challenging for this metric.

- add internal locking to CycleState's Read()/Write()/Delete() functions
- remove Lock() and Unlock() functions
@Huang-Wei Huang-Wei force-pushed the sched-plugin-read-locking branch from 0644c7f to 9c45e8a Compare May 5, 2021 19:01
@k8s-ci-robot k8s-ci-robot removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 5, 2021
@Huang-Wei Huang-Wei changed the title sched: add read lock for shared cycleState sched: make CycleState's Read()/Write()/Delete() thread-safe May 5, 2021
@alculquicondor
Member

Awesome, I'm happy with the results and the code change. I'll leave the LGTM to @ahg-g

@Huang-Wei
Member Author

I guess I need to add some "Action Required" notes to the release notes, as we removed the Lock()/Unlock() functions.

@k8s-ci-robot k8s-ci-robot added release-note-action-required Denotes a PR that introduces potentially breaking changes that require user action. and removed release-note Denotes a PR that will be considered when it comes time to generate release notes. labels May 5, 2021
@ahg-g
Member

ahg-g commented May 6, 2021

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label May 6, 2021
@Huang-Wei
Member Author

/hold cancel

@k8s-ci-robot k8s-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label May 6, 2021
@binacs
Member

binacs commented May 6, 2021

/retest

@k8s-ci-robot k8s-ci-robot merged commit ca38d18 into kubernetes:master May 6, 2021
@k8s-ci-robot k8s-ci-robot added this to the v1.22 milestone May 6, 2021
@Huang-Wei Huang-Wei deleted the sched-plugin-read-locking branch May 6, 2021 16:36
@lining2020x

@Huang-Wei @ahg-g Hello, will this MR be backported to v1.19? If so, when will this be done?

@lining2020x

@Huang-Wei @ahg-g Hello, will this MR be backported to v1.19? If so, when will this be done?

I have found the cherry-pick guide 😂. With the help of hack/cherry_pick_pull.sh, I have just submitted a cherry-pick PR for the v1.19 branch.

@lining2020x

As well as PRs for the release-1.20 and release-1.21 branches.

@alculquicondor
Member

/remove-kind bug
/kind feature

Unfortunately, I don't think this qualifies for a backport: it's not a user-facing bug.

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. and removed kind/bug Categorizes issue or PR as related to a bug. labels Jul 26, 2021
@Huang-Wei
Member Author

@lining2020x I don't think it quite qualifies for backporting. One reason, as @alculquicondor mentioned, is that it won't impact end users; the other is that out-of-tree plugin developers still have workarounds: e.g., inject a top-level state object as a placeholder, and then compose the locking logic inside that top-level state object in your plugin (sketched below).
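
A hedged sketch of that workaround - names like pluginState, stateKey, and recordVictim are purely illustrative, and the import path shown is the one used by recent releases (older branches use the v1alpha1 framework package). The state object is written once in the sequential PreFilter phase, and later parallel phases synchronize on the object's own lock rather than on CycleState:

```go
package myplugin

import (
	"sync"

	"k8s.io/kubernetes/pkg/scheduler/framework"
)

const stateKey framework.StateKey = "MyPluginState"

// pluginState is the single top-level object the plugin stores in CycleState.
// Its own mutex - not CycleState's - guards the mutable fields.
type pluginState struct {
	sync.Mutex
	victims map[string][]string // e.g. node name -> candidate pods (plugin-specific)
}

func (s *pluginState) Clone() framework.StateData { return s }

// writeState runs in PreFilter, which is sequential, so the single write to
// CycleState happens before any parallel phase starts.
func writeState(cs *framework.CycleState) {
	cs.Write(stateKey, &pluginState{victims: make(map[string][]string)})
}

// recordVictim may be called concurrently from Filter (one goroutine per node).
// Reading the CycleState map concurrently is safe because nothing writes to it
// anymore; mutation of the shared value is guarded by the object's own lock.
func recordVictim(cs *framework.CycleState, node, pod string) error {
	c, err := cs.Read(stateKey)
	if err != nil {
		return err
	}
	s, ok := c.(*pluginState)
	if !ok {
		return nil // unexpected type; a real plugin would surface an error here
	}
	s.Lock()
	defer s.Unlock()
	s.victims[node] = append(s.victims[node], pod)
	return nil
}
```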

@lining2020x

lining2020x commented Jul 27, 2021

@Huang-Wei I think this is a developer-friendly enhancement to the scheduler framework. According to the test results above, the added read/write locking did not bring obvious negative effects on performance. Although I think it would be worth it, I will close my cherry-pick PRs.

Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. kind/feature Categorizes issue or PR as related to a new feature. lgtm "Looks good to me", indicates that a PR is ready to be merged. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. release-note-action-required Denotes a PR that introduces potentially breaking changes that require user action. sig/scheduling Categorizes an issue or PR as relevant to SIG Scheduling. sig/storage Categorizes an issue or PR as relevant to SIG Storage. size/M Denotes a PR that changes 30-99 lines, ignoring generated files.
Development

Successfully merging this pull request may close these issues.

sched: concurrent map read/write when scheduler plugins write to CycleState during the Filter phase
7 participants