
cAdvisor /stats/summary endpoint in kubelet returns incorrect cpu usage numbers #27194

Closed
hasbro17 opened this issue Jun 10, 2016 · 66 comments · Fixed by #27591
Labels
kind/bug Categorizes issue or PR as related to a bug. priority/critical-urgent Highest priority. Must be actively worked on as someone's top priority right now. sig/node Categorizes an issue or PR as relevant to SIG Node.

Comments

@hasbro17

Environment

Kubernetes version: 1.2.3
Docker version: 1.10.3
3-node (c4.xlarge) cluster on AWS running CoreOS 1010.4.0.

Issue

After facing an issue with incorrect metrics being reported by heapster (kubernetes-retired/heapster#1177), I tried querying the cAdvisor /stats/summary endpoint directly to see if that would give me consistent values for node CPU usage.
I have one pod with cpu request=1000m and limit=1000m. In that pod I run a busy loop to consume 100% of the CPU. This is what top shows on the node:
[screenshot: top output on the node, 2016-06-10 10:07 AM]

I query the /stats/summary endpoint every 5 seconds; however, it seems that the latest timestamps are only updated every 15 seconds or so. Checking the summary.Node.CPU.UsageNanoCores value from the returned summary gives me the following output (formatted):

TS:2016-06-09T20:05:20-07:00, Percentage:105.539566 Val:1055395658
TS:2016-06-09T20:05:31-07:00, Percentage:107.416097 Val:1074160974
TS:2016-06-09T20:05:44-07:00, Percentage:108.195877 Val:1081958770
TS:2016-06-09T20:05:59-07:00, Percentage:106.670910 Val:1066709101
TS:2016-06-09T20:06:19-07:00, Percentage:14.360576 Val:143605762
TS:2016-06-09T20:06:31-07:00, Percentage:108.000277 Val:1080002769
TS:2016-06-09T20:06:41-07:00, Percentage:108.373232 Val:1083732315
TS:2016-06-09T20:06:56-07:00, Percentage:107.025070 Val:1070250700
TS:2016-06-09T20:07:16-07:00, Percentage:13.004869 Val:130048687
TS:2016-06-09T20:07:31-07:00, Percentage:106.839146 Val:1068391461
TS:2016-06-09T20:07:48-07:00, Percentage:107.614464 Val:1076144640
TS:2016-06-09T20:08:06-07:00, Percentage:4.232330 Val:42323305
TS:2016-06-09T20:08:20-07:00, Percentage:106.009173 Val:1060091732
TS:2016-06-09T20:08:35-07:00, Percentage:108.121440 Val:1081214401
TS:2016-06-09T20:08:50-07:00, Percentage:106.659561 Val:1066595609
TS:2016-06-09T20:09:07-07:00, Percentage:1.724644 Val:17246439
TS:2016-06-09T20:09:19-07:00, Percentage:106.633227 Val:1066332268
TS:2016-06-09T20:09:38-07:00, Percentage:9.938621 Val:99386209
TS:2016-06-09T20:09:53-07:00, Percentage:107.046112 Val:1070461118
TS:2016-06-09T20:10:10-07:00, Percentage:3.373636 Val:33736361
TS:2016-06-09T20:10:25-07:00, Percentage:107.338541 Val:1073385413
TS:2016-06-09T20:10:39-07:00, Percentage:108.575783 Val:1085757834
TS:2016-06-09T20:10:54-07:00, Percentage:107.055382 Val:1070553817
TS:2016-06-09T20:11:13-07:00, Percentage:7.869509 Val:78695088
TS:2016-06-09T20:11:32-07:00, Percentage:11.476262 Val:114762620
TS:2016-06-09T20:11:45-07:00, Percentage:106.928681 Val:1069286811
TS:2016-06-09T20:12:03-07:00, Percentage:3.309632 Val:33096320
TS:2016-06-09T20:12:15-07:00, Percentage:105.832345 Val:1058323450
TS:2016-06-09T20:12:34-07:00, Percentage:5.079409 Val:50794090
TS:2016-06-09T20:12:47-07:00, Percentage:106.305439 Val:1063054389
TS:2016-06-09T20:13:05-07:00, Percentage:3.613690 Val:36136900
TS:2016-06-09T20:13:24-07:00, Percentage:9.785441 Val:97854409
TS:2016-06-09T20:13:43-07:00, Percentage:12.661783 Val:126617830

As you can see, I'm not getting a steady report of near-100% CPU usage values for UsageNanoCores. Any idea why this might be the case, or how I can debug this issue? Also, is there any way to change the resolution of the summary stats to get more fine-grained reporting?
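A minimal sketch of the kind of polling loop used to produce the output above (assumptions: the kubelet read-only port 10255 is reachable from the node, only the node-level CPU fields are decoded, and the Percentage column is UsageNanoCores / 1e7, which matches the values shown):

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// Only the fields we need from the /stats/summary response.
type summary struct {
    Node struct {
        CPU struct {
            Time           time.Time `json:"time"`
            UsageNanoCores uint64    `json:"usageNanoCores"`
        } `json:"cpu"`
    } `json:"node"`
}

func main() {
    for {
        resp, err := http.Get("http://localhost:10255/stats/summary")
        if err != nil {
            fmt.Println("error:", err)
            time.Sleep(5 * time.Second)
            continue
        }
        var s summary
        if err := json.NewDecoder(resp.Body).Decode(&s); err != nil {
            fmt.Println("decode error:", err)
        }
        resp.Body.Close()
        // 1e9 nanocores == 1 core, so nanocores/1e7 == percent of one core.
        fmt.Printf("TS:%s, Percentage:%f Val:%d\n",
            s.Node.CPU.Time.Format(time.RFC3339),
            float64(s.Node.CPU.UsageNanoCores)/1e7,
            s.Node.CPU.UsageNanoCores)
        time.Sleep(5 * time.Second)
    }
}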

@hongchaodeng
Contributor

/cc @xiang90

@xiang90
Contributor

xiang90 commented Jun 10, 2016

/cc @vishh

@dchen1107 dchen1107 added sig/node Categorizes an issue or PR as relevant to SIG Node. kind/support Categorizes issue or PR as a support question. labels Jun 13, 2016
@philips
Contributor

philips commented Jun 13, 2016

@xiang90 @vishh is on vacation. I am guessing @timstclair is taking over while he is out?

@piosz
Member

piosz commented Jun 15, 2016

cc @fgrzadkowski @mwielgus

@piosz
Member

piosz commented Jun 15, 2016

cc @jszczepkowski

@dchen1107
Member

OK, we have received several similar reports over different channels. It looks like there is a regression in the node monitoring pipeline. But @vishh and @timstclair are out this week; the rest of the node team will take a look.

@dchen1107 dchen1107 assigned dchen1107 and Random-Liu and unassigned vishh Jun 15, 2016
@dchen1107
Member

A little clarification here:

  1. This is not a regression from the 1.2 release; the issue was reported against the 1.2 release.
  2. The issue is filed against CoreOS nodes. There is a known integration issue between cAdvisor and CoreOS, but we need to verify whether it is CoreOS-specific.
  3. There are many compatibility fixes in cAdvisor after 1.2. We need to verify whether this particular issue is handled.

@Random-Liu could you please see if we can reproduce the issue on GCE first? Then we can dig deeper.

@Random-Liu
Member

Random-Liu commented Jun 15, 2016

I can reproduce this in my GCE cluster.

The pod spec:

apiVersion: v1
kind: Pod
metadata:
  name: busyloop
spec:
  containers:
  - name: busyloop
    image: busybox:1.24
    resources:
      limits:
        cpu: "1000m"
      requests:
        cpu: "1000m"
    command:
    - "/bin/sh"
    - "-c"
    - "while true; do let a=a+1; done"

The CPU usage shown on the node:
[screenshot: top output on the node]

The CPU usage from the summary API:

"2016-06-15T18:18:50Z"
42617410 <------
"2016-06-15T18:19:08Z"
22084312 <------
"2016-06-15T18:19:20Z"
1041369086
"2016-06-15T18:19:20Z"
1041369086
"2016-06-15T18:19:37Z"
1038933723
"2016-06-15T18:19:54Z"
1036406938
"2016-06-15T18:20:10Z"
1060789362
"2016-06-15T18:20:25Z"
1036855643
"2016-06-15T18:20:36Z"
1035081293
"2016-06-15T18:20:55Z"
37240397 <------
"2016-06-15T18:21:14Z"
63126343 <------

@dchen1107
Member

I turned up a cluster and checked the cAdvisor reports; there is no issue on the cAdvisor side. This is good news. If there is an issue, it should be on the kubelet side when generating the summary report.

@Random-Liu
Member

Random-Liu commented Jun 15, 2016

  1. This happens to both the node total CPU usage and the container CPU usage.
  2. This has been happening since the summary API was initially added in ba5be34:
{
  "usageNanoCores": 92771898,
  "usageCoreNanoSeconds": 706834898989
}
{
  "usageNanoCores": 107032388,
  "usageCoreNanoSeconds": 727412973043
}
{
  "usageNanoCores": 1027715766,
  "usageCoreNanoSeconds": 738008086317
}
{
  "usageNanoCores": 1031570384,
  "usageCoreNanoSeconds": 748728467038
}
{
  "usageNanoCores": 1032540287,
  "usageCoreNanoSeconds": 766018195725
}
{
  "usageNanoCores": 1030987975,
  "usageCoreNanoSeconds": 783682285808
}
{
  "usageNanoCores": 54850471,
  "usageCoreNanoSeconds": 803166613887
}
{
  "usageNanoCores": 1033434503,
  "usageCoreNanoSeconds": 821532921078
}
{
  "usageNanoCores": 51873488,
  "usageCoreNanoSeconds": 840957415420
}
{
  "usageNanoCores": 1032030277,
  "usageCoreNanoSeconds": 853306743742
}
{
  "usageNanoCores": 1032416963,
  "usageCoreNanoSeconds": 868911126177
}
{
  "usageNanoCores": 1033925228,
  "usageCoreNanoSeconds": 884786175548
}
{
  "usageNanoCores": 1052274284,
  "usageCoreNanoSeconds": 900235028759
}
{
  "usageNanoCores": 1028935005,
  "usageCoreNanoSeconds": 918629560058
}
{
  "usageNanoCores": 1037630827,
  "usageCoreNanoSeconds": 930626761245
}
{
  "usageNanoCores": 1033762611,
  "usageCoreNanoSeconds": 946582359518
}
{
  "usageNanoCores": 1032846972,
  "usageCoreNanoSeconds": 960848048815
}
{
  "usageNanoCores": 103042617,
  "usageCoreNanoSeconds": 981330261506
}
{
  "usageNanoCores": 102881754,
  "usageCoreNanoSeconds": 1001823960435
}
{
  "usageNanoCores": 61051671,
  "usageCoreNanoSeconds": 1021430463957
}

@Random-Liu
Member

Random-Liu commented Jun 15, 2016

This is definitely a cAdvisor issue. I got the following data with ba5be34:

summary 1054030207
cadvisor 1054030207
summary 1054030207
cadvisor 1054030207
summary 1050024459
cadvisor 1050024459
summary 1050024459
cadvisor 1050024459
summary 135275808 <------
cadvisor 135275808 <------
summary 1043402283
cadvisor 1043402283
summary 1043402283
cadvisor 1043402283
summary 16972453  <------
cadvisor 16972453  <------
summary 1031975045
cadvisor 1031975045
summary 1031879659
cadvisor 1031879659
summary 1031879659
cadvisor 1031879659
summary 95699418  <------
cadvisor 95699418  <------
summary 95699418  <------
cadvisor 95699418  <------
summary 1034200592
cadvisor 1034200592
summary 1030551299
cadvisor 1030551299
summary 1030551299
cadvisor 1030551299
summary 1030428344
cadvisor 1030428344
summary 1031580099
cadvisor 1031580099
summary 1031580099
cadvisor 1031580099
summary 1029971315
cadvisor 1029971315
summary 1035317622
cadvisor 1035317622

The script I use:

#!/bin/bash
# Poll the kubelet summary API and the cAdvisor v2.1 API side by side,
# printing the instantaneous node CPU usage (in nanocores) reported by each.
while true
do
  echo "summary" `curl -s http://localhost:10255/stats/summary | ./jq '.node.cpu.usageNanoCores'`
  echo "cadvisor" `curl -s http://localhost:4194/api/v2.1/stats | ./jq '(."/" | reverse)[0].cpu_inst.usage.total'`
  sleep 10
done

@dchen1107
Member

I think I know the root cause based on the data collected by @Random-Liu above, but need to verify.

UsageNanoCores was introduced to record the total CPU usage (sum of all cores) averaged over the sample window, but I couldn't find the code that sums the per-core usages together. In @Random-Liu's test, I think the node has 2 cores and the busyloop container is running on both of them. I believe cAdvisor does report the usage on both cores, but the summary only reports the first one here.

In summary, it is a kubelet summary code bug, not a cAdvisor issue.

@dchen1107
Member

@Random-Liu Since I don't have the test environment ready for this yet, could you please help me quickly validate my theory?

Please update your busyloop container's cpuset.cpus to 0. You can simply modify /sys/fs/cgroup/cpuset//cpuset.cpus from 0-1 to 0. Then run your stats collection script.

@timstclair

@dchen1107 - I sort of doubt that's the issue, since the summary just copies the field from the cAdvisor API: https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/stats/summary.go#L289

Without looking too deeply (vacation and all), how do the cumulative numbers look (UsageCoreNanoSeconds)? If the issue is only with the "instantaneous" stats, I suspect there's a problem in the conversion logic found here.

@dchen1107
Member

@timstclair you should be on vacation :-)

Yes, I just looked at the code; we simply use Total from the cAdvisor API. Also, @Random-Liu just mentioned to me that the initial report above is for node usage, not pod usage.

@Random-Liu
Member

@timstclair @dchen1107 If we manually calculate the rate from the cumulative UsageCoreNanoSeconds values, the result is correct!
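For reference, a minimal sketch of that manual calculation, using the first two cumulative samples from the dump above and an assumed 20-second gap between them (the timestamps for those samples aren't shown there):

package main

import (
    "fmt"
    "time"
)

// rateFromCumulative derives an instantaneous usageNanoCores value from two
// cumulative usageCoreNanoSeconds samples, going through float64 so that the
// multiplication by 1e9 cannot overflow a uint64.
func rateFromCumulative(prev, cur uint64, window time.Duration) uint64 {
    deltaNs := cur - prev // CPU time consumed between the two samples, in ns
    return uint64(float64(deltaNs) / float64(window.Nanoseconds()) * 1e9)
}

func main() {
    // First two cumulative samples from the dump above; the 20 s window is an
    // assumption, since their timestamps aren't shown there.
    fmt.Println(rateFromCumulative(706834898989, 727412973043, 20*time.Second))
    // prints 1028903702, i.e. roughly one full core
}

With the assumed window, the cumulative delta works out to roughly one full core, even though the reported usageNanoCores for those same samples was only about 0.1 core.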

@dchen1107
Member

@Random-Liu found the root cause here https://github.com/google/cadvisor/blob/master/info/v2/conversion.go#L209:

    convertToRate := func(lastValue, curValue uint64) (uint64, error) {
        if curValue < lastValue {
            return 0, fmt.Errorf("cumulative stats decrease")
        }
        valueDelta := curValue - lastValue
        return (valueDelta * 1e9) / timeDeltaNs, nil
    }

When valueDelta is too big, multiplying it by 1e9 makes the uint64 overflow.
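To make that concrete, here is a small sketch with assumed numbers (a ~15 s sample window and ~20 s of accumulated CPU time, roughly what a busy two-core node produces between summary samples). Once valueDelta exceeds 2^64 / 1e9 ≈ 18446744073 ns, i.e. about 18.4 s of accumulated CPU time, the multiplication wraps around:

package main

import (
    "fmt"
    "math"
)

func main() {
    // Assumed numbers: ~20 s of accumulated CPU time over a ~15 s window,
    // roughly what a busy two-core node produces between summary samples.
    var valueDelta, timeDeltaNs uint64 = 20e9, 15e9

    fmt.Println(uint64(math.MaxUint64) / 1e9)   // 18446744073: any valueDelta above this wraps
    fmt.Println(valueDelta * 1e9 / timeDeltaNs) // 103550395: wrapped, looks like ~0.1 cores
    fmt.Println(uint64(float64(valueDelta) / float64(timeDeltaNs) * 1e9)) // 1333333333: the expected ~1.33 cores
}

The wrapped result lands in the tens of millions of nanocores, which is consistent with the bogus ~0.01-0.14 core samples seen above.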

@dchen1107
Member

Can we use https://golang.org/pkg/math/big/ in the above conversion code?
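Something along those lines could look like the sketch below (illustrative only, not necessarily the fix that actually landed via #27591; the surrounding cAdvisor plumbing is omitted and timeDeltaNs is passed in explicitly):

package main

import (
    "fmt"
    "math/big"
)

// convertToRate rewritten with math/big so that valueDelta*1e9 cannot wrap,
// as suggested above. Sketch only.
func convertToRate(lastValue, curValue, timeDeltaNs uint64) (uint64, error) {
    if curValue < lastValue {
        return 0, fmt.Errorf("cumulative stats decrease")
    }
    delta := new(big.Int).SetUint64(curValue - lastValue)
    delta.Mul(delta, big.NewInt(1e9))
    delta.Div(delta, new(big.Int).SetUint64(timeDeltaNs))
    return delta.Uint64(), nil
}

func main() {
    rate, _ := convertToRate(0, 20e9, 15e9) // ~20 s of CPU over a 15 s window
    fmt.Println(rate)                       // 1333333333 nanocores, no wrap-around
}

Dividing before multiplying, or doing the arithmetic in float64, would also avoid the overflow without pulling in math/big, at the cost of a little precision.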

@mcabranches

Team, thanks for your attention!

@dchen1107 do you mean the cpu_usage provided by heapster?

If so, when I use any summary-based "source" in heapster (e.g. kubernetes.summary_api:'' or kubernetes.summary_api:https://kubernetes.default), the cpu/usage metric doesn't get populated!

curl 10.125.7.224:31580/api/v1/model/namespaces/rubis-ns/pods/rubis-vhqc6/metrics/cpu/usage
{
"metrics": [],
"latestTimestamp": "0001-01-01T00:00:00Z"
}

@fgrzadkowski
Contributor

@mwielgus Can you please help with debugging this? This seems important.

@fgrzadkowski
Contributor

Can you please include logs from heapster?

@mcabranches

@fgrzadkowski! Here are the logs from heapster:

#kubectl logs heapster-p46du --namespace=kube-system
I0731 00:04:40.341956 1 heapster.go:65] /heapster --source=kubernetes.summary_api:https://kubernetes.default
I0731 00:04:40.342228 1 heapster.go:66] Heapster version 1.1.0
I0731 00:04:40.344169 1 configs.go:60] Using Kubernetes client with master "https://kubernetes.default" and version "v1"
I0731 00:04:40.344224 1 configs.go:61] Using kubelet port 10255
I0731 00:04:40.348266 1 heapster.go:92] Starting with Metric Sink
I0731 00:04:40.442059 1 heapster.go:171] Starting heapster on port 8082
I0731 00:04:57.792526 1 handlers.go:178] No metrics for pod rubis-ns/rubis-vhqc6
I0731 00:05:05.054048 1 manager.go:79] Scraping metrics start: 2016-07-31 00:04:00 +0000 UTC, end: 2016-07-31 00:05:00 +0000 UTC
I0731 00:05:05.543036 1 manager.go:152] ScrapeMetrics: time: 488.782574ms size: 42
I0731 00:05:27.792658 1 handlers.go:178] No metrics for pod rubis-ns/rubis-vhqc6
I0731 00:05:57.793659 1 handlers.go:178] No metrics for pod rubis-ns/rubis-vhqc6
I0731 00:06:05.000190 1 manager.go:79] Scraping metrics start: 2016-07-31 00:05:00 +0000 UTC, end: 2016-07-31 00:06:00 +0000 UTC
I0731 00:06:05.672868 1 manager.go:152] ScrapeMetrics: time: 672.507236ms size: 42
I0731 00:07:05.000263 1 manager.go:79] Scraping metrics start: 2016-07-31 00:06:00 +0000 UTC, end: 2016-07-31 00:07:00 +0000 UTC
I0731 00:07:05.712846 1 manager.go:152] ScrapeMetrics: time: 712.422692ms size: 42
I0731 00:08:05.000290 1 manager.go:79] Scraping metrics start: 2016-07-31 00:07:00 +0000 UTC, end: 2016-07-31 00:08:00 +0000 UTC
I0731 00:08:06.143908 1 manager.go:152] ScrapeMetrics: time: 1.143036471s size: 42
I0731 00:09:05.000321 1 manager.go:79] Scraping metrics start: 2016-07-31 00:08:00 +0000 UTC, end: 2016-07-31 00:09:00 +0000 UTC
I0731 00:09:05.552872 1 manager.go:152] ScrapeMetrics: time: 552.247551ms size: 42

@derekwaynecarr
Member

I am not sure if this is related, but I just fixed a bug in cAdvisor where it reported incorrect usage stats in the v2 API due to an incorrect for-loop.

google/cadvisor#1395

I need to sweep cAdvisor to see if that pattern exists elsewhere, but thought it may help in this case when debugging.


@ichekrygin

ichekrygin commented Aug 18, 2016

I am running v1.3.5 and this still seems to be an issue:

curl http://localhost:8001/api/v1/proxy/namespaces/kube-system/services/heapster/api/v1/model/namespaces/production/pods/rain-377063289-tz0tq/metrics/cpu/usage_rate
{
  "metrics": [
   {
    "timestamp": "2016-08-18T03:04:00Z",
    "value": 18446744073709551449
   },
   {
    "timestamp": "2016-08-18T03:05:00Z",
    "value": 705
   },
   {
    "timestamp": "2016-08-18T03:06:00Z",
    "value": 18446744073709551028
   },
   {
    "timestamp": "2016-08-18T03:07:00Z",
    "value": 696
   },
   {
    "timestamp": "2016-08-18T03:08:00Z",
    "value": 18446744073709550587
   },
   {
    "timestamp": "2016-08-18T03:09:00Z",
    "value": 0
   },

Node version:

kubelet --version
Kubernetes v1.3.5

@timstclair timstclair reopened this Aug 18, 2016
@dchen1107
Member

Looks like we have overflow in some other places in our stack.
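(For reference, 18446744073709551449 is 2^64 − 167, so the bad samples above look like small negative rates being stored in an unsigned 64-bit integer somewhere further up the pipeline.)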

@dchen1107
Member

@timstclair and I looked more closely at @ichekrygin's node stats together, and the kubelet / cAdvisor stats look sane. I believe the overflow is in heapster. I will close this one and open another one for heapster.

@derekwaynecarr
Member

@dchen1107 - ha, I was grepping the code for similar math that could be wrong and came to the same conclusion. Can you link to the heapster issue when it's opened?

@ichekrygin

ichekrygin commented Aug 18, 2016

@dchen1107 - for now I am commenting on kubernetes-retired/heapster#1168 (which is closed). I hope to get it re-opened - if not, I will create a new one

@dchen1107
Member

@ichekrygin Thanks for pointing me to the proper heapster issue. I just opened #30939 and marked it for the 1.4 milestone. Thanks!

@shamil

shamil commented Sep 9, 2016

I'm facing a similar issue, but with the kubelet /metrics endpoint. I use Prometheus to scrape that endpoint, and it seems like the container_cpu_usage_seconds_total results are not updated properly. The container shows about 14% usage, but the kubelet doesn't update the metric's value at all.

Here is the PromQL I use:

container_cpu_usage_seconds_total{kubernetes_namespace="default", kubernetes_container_name="logstash"}

And here is the actual top snapshot from the container:

top - 12:34:46 up 9 days,  3:50,  0 users,  load average: 1.47, 1.46, 1.72
Tasks:   4 total,   1 running,   3 sleeping,   0 stopped,   0 zombie
%Cpu(s):  4.2 us,  0.8 sy,  0.0 ni, 94.8 id,  0.0 wa,  0.0 hi,  0.2 si,  0.1 st
KiB Mem:  33016188 total, 16315132 used, 16701056 free,   591984 buffers
KiB Swap:        0 total,        0 used,        0 free. 10561156 cached Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND     
    1 logstash  20   0 7490020 922680  19096 S  13.6  2.8 927:10.96 java        
10141 root      20   0   21972   3728   3204 S   0.0  0.0   0:00.00 bash        
11432 root      20   0    4336    712    624 S   0.0  0.0   0:00.00 sh          
11443 root      20   0   23620   2696   2356 R   0.0  0.0   0:00.00 top     

@timstclair

@shamil That sounds like a separate issue, could you open a new issue?

@shamil

shamil commented Sep 9, 2016

@timstclair isn't it related to the kubelet? The issue title states that the kubelet returns incorrect CPU usage numbers; just the endpoint is different (/metrics instead of /stats/summary). I believe the returned data is the same, just the format is different...

@timstclair

It is processed differently, and goes through a different pipeline. The original source of the numbers is the same, but I think these issues are not related.

@shamil

shamil commented Sep 9, 2016

OK, submitted #32414

k8s-github-robot pushed a commit that referenced this issue Sep 29, 2016
Automatic merge from submit-queue

Rewrite summary e2e test to check metric sanity

Take two, forked from #28195

This adds a test library that extends the Ginkgo matchers to check nested data structures, then uses the new matcher library to thoroughly check the validity of every field in the summary metrics API. This approach is more flexible than the previous one since it allows for different tests per field and makes it easier to add case-by-case exceptions. It also places the lower and upper bounds side by side, making the test much easier to read and reason about.

Most fields are expected to be within some bounds. This is not intended to be a performance test, so the metric bounds are very loose. Rather, I'm looking to check that the values are sane, to catch bugs like #27194.

Fixes #23411, #31989

/cc @kubernetes/sig-node