
add initial KEP for maxUnavailable in StatefulSets #678

Merged 2 commits on Mar 30, 2019

Conversation

krmayankk

/sig apps

@k8s-ci-robot k8s-ci-robot added the sig/apps Categorizes an issue or PR as relevant to SIG Apps. label Jan 7, 2019
@k8s-ci-robot k8s-ci-robot requested a review from jdumars January 7, 2019 07:35
@k8s-ci-robot k8s-ci-robot added the kind/kep Categorizes KEP tracking issues and PRs modifying the KEP directory label Jan 7, 2019
@k8s-ci-robot k8s-ci-robot added sig/architecture Categorizes an issue or PR as relevant to SIG Architecture. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. sig/pm cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Jan 7, 2019
@krmayankk
Author

/assign @bgrant0607

@krmayankk
Author

/assign @kow3ns @janetkuo @Kargakis

faster rollout.
3: My Stateful clustered application, has followers and leaders, with followers being many more than 1. My application can tolerate many followers going
down at the same time. I want to be able to do faster rollouts by bringing down 2 or more followers at the same time. This is only possible if StatefulSet
supports maxUnaavailable in Rolling Updates.


typo

3: My Stateful clustered application, has followers and leaders, with followers being many more than 1. My application can tolerate many followers going
down at the same time. I want to be able to do faster rollouts by bringing down 2 or more followers at the same time. This is only possible if StatefulSet
supports maxUnaavailable in Rolling Updates.
4: Sometimes i just want easier tracking of revisions of a rolling update. Deployment does it through ReplicaSets and has its own nuances. Understanding


I am not sure why this is an argument for adding maxUnavailable in StatefulSets.

Author

The argument is not a very strong one, but nonetheless here it goes:

  • I am a Deployment user who doesn't care much about pod identity or per-pod state, although stable pod identity doesn't hurt if I get it.
  • I don't care about maxSurge.
  • I frequently check my revisions, and here is what I see, which doesn't tell me anything useful at all, like when the last revision was deployed or what that revision contains. To find all that information I need to dig into ReplicaSets.
kubectl rollout history deployment.v1.apps/gcp-test-app-nginx -n csc-sam
deployment.apps/gcp-test-app-nginx 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>
  • When using StatefulSets, I can see the revisions, their age, and more information using the following:
kubectl get controllerrevisions -n csc-sam --context gke_gsf-core-devmvp-sam2_us-central1_sam
NAME                                                    CONTROLLER                           REVISION      AGE
stateful-func-pvc-75d57f7557   statefulset.apps/stateful-func-pvc   1          13h
stateful-func-pvc-767dd4dfbc   statefulset.apps/stateful-func-pvc   2          7h
  • StatefulSets already support rolling updates, but they are slow. Only if they supported maxUnavailable could I choose to move my Deployments to StatefulSets, just to get the nicer revision history.

@Kargakis I can remove this reason if you think it's pointless.


This is more of a kubectl rollout history issue than a Deployments issue. While I agree that controller revisions are more intuitive than ReplicaSets, I don't see how this relates to maxUnavailable, so I would remove it.

during Rolling Update.

### Non-Goals
maxUnavailable is only implemeted to affect the Rolling Update of StatefulSet. Considering maxUnavailable for Pod Management Policy is beyond the purview


I think this proposal should at least discuss how any existing StatefulSet upgrade feature is affected by maxUnavailable.

Author

@krmayankk krmayankk Feb 27, 2019

@Kargakis are you asking about upgrading from a release without maxUnavailable to a release that has this feature? We will keep the default value at 1, so there will be no surprises.


1: My containers publish metrics to a time series system. If I am using a Deployment, each rolling update creates a new pod name and hence the metrics
published by these new pod starts a new time series which makes tracking metrics for the application difficult. While this could be mitigated,
it requires some tricks on the time series collection side. It would be so much better, If we could use a StatefulSet object so my object names doesnt


Is this fixed if Prometheus is able to treat all pods under a service as a single source of metrics? Sounds like the right way to fix this issue.

Author

@krmayankk krmayankk Feb 25, 2019

We want Prometheus to treat a service as composed of n distinct entities sending metrics (where n is the number of replicas in the service), not as n * (number of rolling updates to the service) distinct entities. Imagine that node-0 is sending metrics and, after an upgrade, node-0 starts sending metrics with some weird behavior, which lets you know node-0 started misbehaving after the upgrade. With Deployments, node-xaqr was sending metrics and now node-uehd is sending metrics, and there is no relation between them.

published by these new pod starts a new time series which makes tracking metrics for the application difficult. While this could be mitigated,
it requires some tricks on the time series collection side. It would be so much better, If we could use a StatefulSet object so my object names doesnt
change and hence all metrics goes to a single time series. This will be easier if StatefulSet is at feature parity with Deployments.
2: My Container does some initial startup tasks like loading up cache or something that takes a lot of time. If we used StatefulSet, we can only go one


But does this container-example need to use statefulsets at all?

Author

@krmayankk krmayankk Feb 25, 2019

Yes, because each of these containers has a per-container volume/state which is not shared with other containers. This per-container volume is used in myriad ways:

  • on startup, each container for example uses an init container
    • to download data from, let's say, a blob store AND/OR
    • to do some non-trivial pre-processing on the data in the volume, which takes some time


Why should this KEP _not_ be implemented.

## Alternatives


Ideally, another alternative would be that you should be able to use OnDelete and deploy your own upgrade logic in your own custom controller.

Author

Will add the alternative.

pod at a time which would result in a slow rolling update. If we did maxUnavailable for StatefulSet with a greater than 1 number, it would allow for a
faster rollout.
3: My Stateful clustered application, has followers and leaders, with followers being many more than 1. My application can tolerate many followers going
down at the same time. I want to be able to do faster rollouts by bringing down 2 or more followers at the same time. This is only possible if StatefulSet


We had similar cases for apps in a large cluster. It is quite normal that after an upgrade, several pods among thousands will remain non-ready for a long time, possibly due to problems downloading images or starting containers. We don't want these long-tail pods to delay the entire rolling upgrade process, since our app can tolerate many failed pods. Currently, whenever such a delay happens, a human operator must be involved to diagnose the problem. If StatefulSet supported maxUnavailable greater than 1, the rolling update process could be much quicker.


+1 for this feature, as StatefulSet should not assume the application can't tolerate down instances. Actually, most stateful applications are well-designed distributed software.

### User Stories

#### Story 1
As a User of Kubernetes, I can create a StatefulSet with RollingUpdate Strategy and specify maxUnavailable.
Contributor

Instead of saying "specify maxUnavailable", this needs to talk about what you want that would cause you to use maxUnavailable. For example: "...and ensure enough instances of my application are running to handle the workload."

Can you update the language to not use maxUnavailable as the reason for asking for it?

Author

Good point, will do.

...
```

### Risks and Mitigations
Member

Please consider the risks and mitigations of the API modification.


### Risks and Mitigations

## Graduation Criteria
Member

Please add a plan for implementation and delivery. Use your best estimate for when you think you can implement each phase of the modification. Consider how/if the feature will be gated at alpha. What will the criteria for beta and GA be? Once enabled, can it be rolled back safely?

Consider the following scenarios:-

1: My containers publish metrics to a time series system. If I am using a Deployment, each rolling update creates a new pod name and hence the metrics
published by these new pod starts a new time series which makes tracking metrics for the application difficult. While this could be mitigated,


these new pod
Do you mean just the new pod, or the collection of all old pods and the new pod?

Author

The old pod has an ongoing time series based on its old hostname (pod name), call it x.
The new pod starts a new time series based on its new hostname (pod name), call it y.

// RollingUpdateStatefulSetStrategy is used to communicate parameter for RollingUpdateStatefulSetStrategyType.
type RollingUpdateStatefulSetStrategy struct {
// Partition indicates the ordinal at which the StatefulSet should be
// partitioned.
@crimsonfaith91 crimsonfaith91 Jan 15, 2019

An example here would be clearer.
You could add the 5-replica example written below here.

Author

Sorry, I should have made this clearer. This is not a new field but an existing field.

// partitioned.
// Default value is 0.
// +optional
Partition *int32 `json:"partition,omitempty" protobuf:"varint,1,opt,name=partition"`


Do we also want to support >2 partitions in this KEP?
For example, given 5 replicas, there are 3 partitions (0 | 1,2 | 3,4).
If we want to support this, this field could be made into a slice.


I am not sure why you need >2 partitions. What are the revisions of the 3 partitions (0 | 1,2 | 3,4)?

Author

As previously mentioned, Partition is not a new field and we are not changing anything there.

// partitioned.
// Default value is 0.
// +optional
Partition *int32 `json:"partition,omitempty" protobuf:"varint,1,opt,name=partition"`


PartitionOrdinal
We should make it clear that this field records the first ordinal of the second partition. Simply Partition may lead to the misunderstanding that this field means the number of partitions.

Author

I have clarified that this is an existing field and not a new field being proposed.
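
For reference, a minimal sketch of how the strategy struct could look once the proposed field is added next to the existing Partition field (the field name, the intstr.IntOrString type, and the protobuf tag below are assumptions of this sketch, not settled API):

```go
package apps

import "k8s.io/apimachinery/pkg/util/intstr"

// RollingUpdateStatefulSetStrategy is used to communicate parameters for RollingUpdateStatefulSetStrategyType.
type RollingUpdateStatefulSetStrategy struct {
	// Partition indicates the ordinal at which the StatefulSet should be
	// partitioned. This field already exists today; default value is 0.
	// +optional
	Partition *int32 `json:"partition,omitempty" protobuf:"varint,1,opt,name=partition"`

	// MaxUnavailable is the proposed addition: the maximum number of pods
	// that can be unavailable during a rolling update, given as an absolute
	// number or a percentage of replicas. Defaults to 1, which preserves the
	// current one-pod-at-a-time behavior.
	// +optional
	MaxUnavailable *intstr.IntOrString `json:"maxUnavailable,omitempty" protobuf:"bytes,2,opt,name=maxUnavailable"`
}
```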

@bgrant0607 bgrant0607 removed their assignment Jan 16, 2019
Member

@justaugustus justaugustus left a comment

Please remove any references to NEXT_KEP_NUMBER and rename the KEP to just be the draft date and KEP title.
KEP numbers will be obsolete once #703 merges.

@krmayankk
Author

Thanks for all the feedback; I should have the updated KEP in a couple of days.

@bgrant0607
Member

Some historical background.

The original issue I filed that motivated what became StatefulSet just a few days after we open-sourced Kubernetes was http://issues.k8s.io/260

It was partly inspired by Borg's Job API: https://ai.google/research/pubs/pub43438

Not only was it intended to support stateful applications, but also distributed applications where each instance required a distinct, stable network identity, which motivated many of the original name ideas, such as nominal services: kubernetes/kubernetes#260 (comment)

For large such applications, updating one instance at a time can be too constraining, and can prolong the overall disruption of the service by slowing the upgrade process.

In many cases, completing the upgrade faster may be preferable, such as:

  • instances aren't replicated
  • shards are statically assigned, but spread intelligently (e.g., x and N/2+x)
  • shards are dynamically assigned, with some overprovisioned warm spares (e.g., N+K)

In all of these cases, maxUnavailable (with minReadySeconds) would address the problem, and would have the benefit of being consistent with Deployment and DaemonSet.

Now, I haven't read this full proposal (sorry), and haven't had time to think about how maxUnavailable would interact with some of the other functionality that has since been added to StatefulSet (e.g., partition, OrderedReady), but I believe the use case is valid, something like this is necessary for StatefulSets of non-trivial size, and it's reasonable to expect StatefulSet to reach parity with the other workload controllers in this respect.

@krmayankk
Author

Working on updating this today, @bgrant0607.


The purpose of this enhancement is to implement maxUnavailable for StatefulSet during RollingUpdate. When a StatefulSet’s
`.spec.updateStrategy.type` is set to `RollingUpdate`, the StatefulSet controller will delete and recreate each Pod
in the StatefulSet. The updating of each Pod currently happens one at a time. With support for `maxUnavailable`, the updating

More specifically, the updating of each Pod happens one at a time when podManagementPolicy is set to OrderedReady, which is the default value. If podManagementPolicy is set to Parallel, all Pods that satisfy the partition will be updated at the same time.


```

- By Default, if maxUnavailable is not specified, its value will be assumed to be 1 and StatefulSets will follow their old behavior.
- if MaxUnavailable is specified, it cannot be greater than total number of replicas

Maybe maxUnavailable just cannot be 0. It could be greater than replicas, which would have the same effect as maxUnavailable == replicas.

Author

I would throw an error if maxUnavailable > replicas, since it has no meaning.


Maybe we can refer to the Deployment maxUnavailable validation:

  • must be a valid percentage and cannot be more than 100% when it is a string
  • must be non-negative and cannot be less than 1 when it is an int

https://github.com/kubernetes/kubernetes/blob/master/pkg/apis/apps/validation/validation.go#L425
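
For illustration, a rough sketch of validation along those lines (the function name and exact rules below are assumptions based on this thread, not the actual validation.go code):

```go
package validation

import (
	"fmt"
	"strconv"
	"strings"

	"k8s.io/apimachinery/pkg/util/intstr"
)

// validateMaxUnavailable is a hypothetical helper sketching the rules discussed
// above: an int must be at least 1 and no larger than replicas; a string must
// be a percentage in the range (0%, 100%].
func validateMaxUnavailable(maxUnavailable intstr.IntOrString, replicas int32) error {
	switch maxUnavailable.Type {
	case intstr.Int:
		v := maxUnavailable.IntValue()
		if v < 1 {
			return fmt.Errorf("maxUnavailable must be at least 1, got %d", v)
		}
		if int32(v) > replicas {
			return fmt.Errorf("maxUnavailable (%d) must not be greater than replicas (%d)", v, replicas)
		}
	case intstr.String:
		if !strings.HasSuffix(maxUnavailable.StrVal, "%") {
			return fmt.Errorf("maxUnavailable must be an integer or a percentage, got %q", maxUnavailable.StrVal)
		}
		p, err := strconv.Atoi(strings.TrimSuffix(maxUnavailable.StrVal, "%"))
		if err != nil || p < 1 || p > 100 {
			return fmt.Errorf("maxUnavailable percentage must be between 1%% and 100%%, got %q", maxUnavailable.StrVal)
		}
	}
	return nil
}
```

Whether 0 should be allowed (as Deployment permits when maxSurge is non-zero) is exactly the open question in this thread.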

- if MaxUnavailable is specified, it cannot be greater than total number of replicas
- If a partition is specified, maxUnavailable will only apply to all the pods which are staged by the partition. Lets say total replicas
is 5 and partition is set to 2 and maxUnavailable is set to 2. If the image is changed in this scenario, pods with ordinal 4 and 3 will go
down at the same time(because of maxUnavailable), once they are running and ready, pods with ordinal 2 will go down. Pods with ordinal 0
@FillZpp FillZpp Feb 27, 2019

once they are running and ready, pods with ordinal 2 will go down

I think this should change to: once one of 3 or 4 is running and ready, the pod with ordinal 2 will go down.

Author

@FillZpp the ordering is still maintained, so until both 3 and 4 are updated, it cannot proceed to 2. It's a different debate whether we want to support maxUnavailable for the Parallel pod management policy as well, in which case your assumption would be true. I don't see any drawbacks to supporting maxUnavailable with Parallel pod management. Looking for others to chime in on any gotchas or issues with doing that.

@FillZpp FillZpp Feb 28, 2019

@krmayankk I don't completely agree with you.

If 2 can only update when both 3 and 4 are already updated, then it is not a maxUnavailable strategy; it might be called batch or something. maxUnavailable means there are at most 2 pods updating at one time, like a sliding window.

So I think this may be the right way:

  • If podManagementPolicy == Parallel, then once one of 3 or 4 is running and ready, the pod with ordinal 2 will go down.
  • If podManagementPolicy == OrderedReady (the default), then once 4 is running and ready, the pod with ordinal 2 will go down.

There is no need to wait until both 3 and 4 are running and ready.

Author

@FillZpp I am open to suggestions here on what the right behavior should be. Your suggestion for OrderedReady seems to violate the ordering, since 2 goes down before 3. Also, it's not clear what the difference is between Parallel and OrderedReady in your example.

My suggestion is still the following:

  • for podManagementPolicy == OrderedReady, with replicas=5, partition=2, and maxUnavailable=2, 4 and 3 will start updating at the same time; once both have finished updating, 2 will go down
  • for podManagementPolicy == Parallel, I need to test whether partition affects it or not. I thought partition doesn't change anything, but I think what you are suggesting makes sense. Basically any 2 can go down, e.g. 2 and 3, and then at the end 4.


@krmayankk I don't think that updating 3 and 2 together once 4 is finished violates the ordering; it is not 2 going down before 3.

The ordering is 3 and 4 updating first, and once 4 is finished, you can update 2 immediately. So 2 may update together with 3, but it is not updating before 3. It is just like updating 4 and 3 at first.

For example, for podManagementPolicy == OrderedReady && replicas == 10 && partition == 2 && maxUnavailable == 2, do you think both 8 and 9 finish, then 6 and 7, then 4 and 5, then 2 and 3? I don't think this is 'maxUnavailable'; we might call it a 'batch' strategy.

What do you think about it? @kow3ns @janetkuo @Kargakis


What @FillZpp describes makes sense to me regarding how OrderedReady and maxUnavailable interact. To be honest, though, I can't think of any use case for such a combo. Do you have any in mind?
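
To make the two readings concrete, here is a small, purely illustrative sketch (not proposed controller code) that lists which ordinals may be down at a given moment for replicas=10, partition=2, maxUnavailable=2:

```go
package main

import "fmt"

// batchCandidates models the "batch" reading: ordinals are grouped into fixed
// windows of size maxUnavailable from the top, and a window may only start
// updating once every higher window is fully updated and ready.
func batchCandidates(replicas, partition, maxUnavailable int, ready map[int]bool) []int {
	for hi := replicas - 1; hi >= partition; hi -= maxUnavailable {
		lo := hi - maxUnavailable + 1
		if lo < partition {
			lo = partition
		}
		var pending []int
		for ord := hi; ord >= lo; ord-- {
			if !ready[ord] {
				pending = append(pending, ord)
			}
		}
		if len(pending) > 0 {
			return pending // this window is still in progress; lower ordinals wait
		}
	}
	return nil
}

// slidingCandidates models the "sliding window" reading: at any moment at most
// maxUnavailable pods with ordinal >= partition may be unavailable, always
// taking the highest not-yet-ready ordinals first.
func slidingCandidates(replicas, partition, maxUnavailable int, ready map[int]bool) []int {
	var out []int
	for ord := replicas - 1; ord >= partition && len(out) < maxUnavailable; ord-- {
		if !ready[ord] {
			out = append(out, ord)
		}
	}
	return out
}

func main() {
	// Ordinal 9 is already updated and ready; everything else is on the old revision.
	ready := map[int]bool{9: true}
	fmt.Println("batch:  ", batchCandidates(10, 2, 2, ready))   // [8]   -> 7 must wait for 8
	fmt.Println("sliding:", slidingCandidates(10, 2, 2, ready)) // [8 7] -> 7 may start now
}
```

Under the batch reading the rollout stalls on the slowest pod in each window, which is the behavior being argued against above.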

@krmayankk
Author

@justaugustus I have removed the KEP number and any such references. PTAL.

@justaugustus
Member

@krmayankk -- Thanks for removing the KEP number references! :)

Could you fix this error:

Verifying verify-spelling.sh
keps/sig-apps/20190226-maxunavailable-for-statefulsets.md:190:33: "wih" is a misspelling of "with"
FAILED verify-spelling.sh	12s
========================
FAILED TESTS
========================
./hack/../hack/verify-spelling.sh

and then squash your commits?

@krmayankk
Author

@justaugustus fixed

@justaugustus
Member

@janetkuo @Kargakis @kow3ns -- You're listed as assignees here. Looks like this KEP has gone through a few cycles of review. Would you consider it in a good state to merge at this point, especially given that it's provisional?


Consider the following scenarios:-

1: My containers publish metrics to a time series system. If I am using a Deployment, each rolling update creates a new pod name and hence the metrics
Member

This is not formatted as a numbered list. You can fix it by using 1. instead of 1:.

Author

fixed

`.spec.updateStrategy.type` is set to `RollingUpdate`, the StatefulSet controller will delete and recreate each Pod
in the StatefulSet. The updating of each Pod currently happens one at a time when `spec.podManagementPolicy` is `OrderedReady`.
With support for `maxUnavailable`, the updating will proceed `maxUnavailable` number of pods at a time in `OrderedReady` case
only.
Member

Isn't podManagementPolicy orthogonal to the update policy? It's a policy for scaling, not for updates.

Author

Correct, removed its reference. maxUnavailable is only applicable to rolling updates. It has nothing to do with podManagementPolicy.


#### Implementation

https://github.com/kubernetes/kubernetes/blob/master/pkg/controller/statefulset/stateful_set_control.go#L504
Member

The master blob will change over time. If you want to include this, you need to pin it to a commit.

Author

fixed

if podsDeleted < set.Spec.UpdateStrategy.RollingUpdate.MaxUnavailable {
    podsDeleted++
    continue
}
Member

@janetkuo janetkuo Mar 19, 2019

We should describe how this feature can be implemented and how it interacts with existing features, such as partition and pod management policy, rather than a code block.

Author

I think the implementation cannot be concretely defined until we agree on the semantics. I will update the doc with the semantics in the partition case. Also, as you rightly mentioned earlier, maxUnavailable has nothing to do with the pod management policy, so I will update the docs accordingly. Once we have agreed on the semantics, I will update the implementation section as well.
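
Once the semantics are settled, the loop could look roughly like the following self-contained sketch (stand-in types and names, not the StatefulSet controller's actual code), which reproduces the earlier example of replicas=5, partition=2, maxUnavailable=2 taking ordinals 4 and 3 down together:

```go
package main

import "fmt"

// pod is a stand-in for the real Pod object in this sketch.
type pod struct {
	revision string
	ready    bool
}

// ordinalsToDelete walks ordinals from highest down to the partition and
// selects old-revision pods for deletion while the number of unavailable
// pods stays below maxUnavailable.
func ordinalsToDelete(pods []pod, partition, maxUnavailable int, updateRevision string) []int {
	unavailable := 0
	for _, p := range pods {
		if !p.ready {
			unavailable++
		}
	}
	var toDelete []int
	for ord := len(pods) - 1; ord >= partition; ord-- {
		if pods[ord].revision == updateRevision {
			continue // already on the new revision
		}
		if unavailable >= maxUnavailable {
			break // unavailability budget exhausted; wait for pods to become ready
		}
		toDelete = append(toDelete, ord)
		unavailable++
	}
	return toDelete
}

func main() {
	pods := make([]pod, 5)
	for i := range pods {
		pods[i] = pod{revision: "old", ready: true}
	}
	fmt.Println(ordinalsToDelete(pods, 2, 2, "new")) // [4 3]
}
```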

@krmayankk
Author

@janetkuo @kow3ns can you review this again?

will also help while upgrading from a release which doesnt support maxUnavailable to a release which supports this field.
- If maxUnavailable is specified, it cannot be greater than total number of replicas.
- If maxUnavailable is specified and partition is also specified, MaxUnavailable cannot be greater than `replicas-partition`
- If a partition is specified, maxUnavailable will only apply to all the pods which are staged by the partition. Which means all Pods
Author

@kow3ns @janetkuo @Kargakis @FillZpp I have added three options below for further discussion; please review.

Author

Pinging again @kow3ns @janetkuo: who should I follow up with to get this merged, and should the follow-up discussion happen in a new PR or an issue for further design details?

@kow3ns
Member

kow3ns commented Mar 29, 2019

/approve

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 29, 2019
@kow3ns
Member

kow3ns commented Mar 29, 2019

As noted, this still needs some work, but we can follow up in subsequent PRs. The approach we have taken with the Sidecars KEP seems to be scalable.

@justaugustus
Member

(Adding an lgtm, since @kow3ns has approved it as provisional)
/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Mar 30, 2019
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: justaugustus, kow3ns, krmayankk

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment
