
scheduler: initialize podsWithAffinity #33967

Merged · 1 commit · Oct 5, 2016

Conversation

xiang90 (Contributor) commented Oct 3, 2016

Without initializing podsWithAffinity, the scheduler panics when deleting a pod from a node that has never had a pod with affinity scheduled to it.

Initialize podsWithAffinity to avoid scheduler panic

Fix #33772
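
For readers following along, here is a minimal, hypothetical sketch of the kind of initialization the description refers to. Only the podsWithAffinity field name comes from the description; the surrounding types and constructor are illustrative stand-ins, not the actual schedulercache code.

```go
// Illustrative stand-in types, not the real scheduler code.
package scratch

// NodeInfo mimics a cache entry that tracks pods with affinity.
type NodeInfo struct {
	podsWithAffinity map[string]struct{} // nil until explicitly initialized
}

// NewNodeInfo initializes the map so that later writes cannot panic.
func NewNodeInfo() *NodeInfo {
	return &NodeInfo{
		// Without this make(), podsWithAffinity is a nil map, and a write
		// such as n.podsWithAffinity["default/mypod"] = struct{}{} would
		// panic with "assignment to entry in nil map".
		podsWithAffinity: make(map[string]struct{}),
	}
}
```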


xiang90 (Contributor, Author) commented Oct 3, 2016

@davidopp
davidopp (Member) commented Oct 3, 2016

LGTM

Thanks!

@davidopp davidopp added lgtm "Looks good to me", indicates that a PR is ready to be merged. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Oct 3, 2016
davidopp (Member) commented Oct 3, 2016

BTW the bot says you haven't signed the Linux Foundation CLA...

xiang90 (Contributor, Author) commented Oct 3, 2016

@davidopp Just signed. Do you know how I can trigger the CLA check again?

@davidopp davidopp closed this Oct 3, 2016
@davidopp davidopp reopened this Oct 3, 2016
davidopp (Member) commented Oct 3, 2016

Not sure. I closed and reopened the PR; maybe that will help?

philips (Contributor) commented Oct 3, 2016

@davidopp FYI, we signed the CNCF corp CLA and it should cover all @coreos.com emails; I escalated to @caniszczyk and @sarahnovotny over email as it isn't working elsewhere.

@k8s-github-robot k8s-github-robot added size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. release-note-label-needed do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. labels Oct 3, 2016
@xiang90 xiang90 added release-note-none Denotes a PR that doesn't merit a release note. and removed release-note-label-needed do-not-merge DEPRECATED. Indicates that a PR should not merge. Label can only be manually applied/removed. labels Oct 3, 2016
@k8s-ci-robot (Contributor)

Jenkins GCI GCE e2e failed for commit 8ddac82abadc4dc050b64db52404c56ed92ff0ad. Full PR test history.

The magic incantation to run this job again is @k8s-bot gci gce e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

@k8s-ci-robot (Contributor)

Jenkins GCI GKE smoke e2e failed for commit 8ddac82abadc4dc050b64db52404c56ed92ff0ad. Full PR test history.

The magic incantation to run this job again is @k8s-bot gci gke e2e test this. Please help us cut down flakes by linking to an open flake issue when you hit one in your PR.

kdima commented Oct 3, 2016

@xiang90 I have just tried this patch on our cluster and unfortunately I still see the nil pointer exception, with the exact same stack trace.

xiang90 (Contributor, Author) commented Oct 3, 2016

@kdima How did you reproduce this? Can you try printing out the variable n around line 228?

kdima commented Oct 3, 2016

I can see the following errors before the first exception happens:

1 listers.go:68] can not retrieve list of objects using index : object has no meta: object does not implement the Object interfaces
Oct 03 22:40:05 h-stg-core-2 docker[4353]: I1003 22:40:05.567754       1 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"memcached-cl802", UID:"54790899-89ba-11e6-bdf7-42010a140002", APIVersion:"v1", ResourceVersion:"3117", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned memcached-cl802 to h-stg-wk2-0
Oct 03 22:40:05 h-stg-core-2 docker[4353]: I1003 22:40:05.571129       1 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"memcached-i04gz", UID:"54793a8c-89ba-11e6-bdf7-42010a140002", APIVersion:"v1", ResourceVersion:"3119", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned memcached-i04gz to h-stg-wk2-1
Oct 03 22:40:05 h-stg-core-2 docker[4353]: W1003 22:40:05.580528       1 listers.go:68] can not retrieve list of objects using index : object has no meta: object does not implement the Object interfaces
Oct 03 22:40:05 h-stg-core-2 docker[4353]: I1003 22:40:05.584154       1 scheduler.go:135] Failed to bind pod: default/tasks-fix-kibana-x2mnx
Oct 03 22:40:05 h-stg-core-2 docker[4353]: E1003 22:40:05.584168       1 scheduler.go:137] scheduler cache ForgetPod failed: pod state wasn't assumed but get forgotten. Pod key: default/tasks-fix-kibana-x2mnx
Oct 03 22:40:05 h-stg-core-2 docker[4353]: E1003 22:40:05.584173       1 factory.go:530] Error scheduling default tasks-fix-kibana-x2mnx: Operation cannot be fulfilled on pods/binding "tasks-fix-kibana-x2mnx": pod tasks-fix-kibana-x2mnx is already assigned to node "h-stg-wk2-0"; retrying
Oct 03 22:40:05 h-stg-core-2 docker[4353]: I1003 22:40:05.584202       1 factory.go:607] Updating pod condition for default/tasks-fix-kibana-x2mnx to (PodScheduled==False)
Oct 03 22:40:05 h-stg-core-2 docker[4353]: I1003 22:40:05.584320       1 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"default", Name:"tasks-fix-kibana-x2mnx", UID:"2f730ddf-89ba-11e6-bdf7-42010a140002", APIVersion:"v1", ResourceVersion:"2217", FieldPath:""}): type: 'Normal' reason: 'FailedScheduling' Binding rejected: Operation cannot be fulfilled on pods/binding "tasks-fix-kibana-x2mnx": pod tasks-fix-kibana-x2mnx is already assigned to node "h-stg-wk2-0"
Oct 03 22:40:05 h-stg-core-2 docker[4353]: W1003 22:40:05.586715       1 listers.go:68] can not retrieve list of objects using index : object has no meta: object does not implement the Object interfaces
Oct 03 22:40:05 h-stg-core-2 docker[4353]: E1003 22:40:05.586967       1 scheduler.go:116] scheduler cache AssumePod failed: pod state wasn't initial but get assumed. Pod key: default/tasks-fix-elasticsearch-02zw8
Oct 03 22:40:05 h-stg-core-2 docker[4353]: E1003 22:40:05.587258       1 runtime.go:64] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)

kdima commented Oct 3, 2016

@xiang90 Yes, sure, give me a second. It usually happens within 10-15 minutes of provisioning a cluster; I am not quite sure how exactly to reproduce it.

kdima commented Oct 3, 2016

@xiang90 Actually, as I was investigating this before, I saw that n is nil at that point. I can print out the whole cache though; hopefully that can help.

xiang90 (Contributor, Author) commented Oct 3, 2016

That would be helpful. It seems that there are multiple issues with the cache.

kdima commented Oct 3, 2016

@xiang90 Yeah, that was my impression as well. I hit quite a few "pod is already assigned to node" errors; I thought they were caused by the nil pointer, but it might be the other way around.

kdima commented Oct 3, 2016

@xiang90 Yeah, n is nil at that point.
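
A nil n here matches the stack traces posted further down, where (*NodeInfo).removePod is entered with a nil receiver (the 0x0 first argument). A minimal, self-contained illustration of that failure mode, using simplified stand-in types rather than the real scheduler code:

```go
package main

import "fmt"

// NodeInfo is a simplified stand-in for schedulercache.NodeInfo, not the real type.
type NodeInfo struct {
	pods []string
}

// removePod can legally be called on a nil *NodeInfo; the panic only
// happens when the body dereferences the receiver via n.pods.
func (n *NodeInfo) removePod(pod string) {
	for i, p := range n.pods { // with a nil receiver, this dereference panics
		if p == pod {
			n.pods = append(n.pods[:i], n.pods[i+1:]...)
			return
		}
	}
}

func main() {
	nodes := map[string]*NodeInfo{} // cache with no entry for this node
	n := nodes["h-stg-wk2-0"]       // missing key returns the zero value: nil
	fmt.Println(n == nil)           // true
	n.removePod("default/mypod")    // runtime error: invalid memory address or nil pointer dereference
}
```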

kdima commented Oct 3, 2016

So I checked whether it is related to nodes being created or destroyed, but it does not correlate.

kdima commented Oct 3, 2016

I just saw the exception on three schedulers (i.e. all of them) running on three separate instances at the same time. I thought only one of them was supposed to be active via leader election?

Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:70
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:63
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:49
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /usr/lib/go/src/runtime/asm_amd64.s:479
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /usr/lib/go/src/runtime/panic.go:458
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /usr/lib/go/src/runtime/panic.go:62
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /usr/lib/go/src/runtime/sigpanic_unix.go:24
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/node_info.go:193
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/cache.go:236
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/cache.go:259
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:196
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:134
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:182
Oct 03 23:33:36 h-stg-core-1 docker[2192]: <autogenerated>:52
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:318
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/delta_fifo.go:420
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:126
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:102
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:84
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:85
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:47
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:102
Oct 03 23:33:36 h-stg-core-1 docker[2192]: /usr/lib/go/src/runtime/asm_amd64.s:2086
Oct 03 23:33:36 h-stg-core-1 docker[2192]: panic: runtime error: invalid memory address or nil pointer dereference [recovered]
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         panic: runtime error: invalid memory address or nil pointer dereference
Oct 03 23:33:36 h-stg-core-1 docker[2192]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x1a9241c]
Oct 03 23:33:36 h-stg-core-1 docker[2192]: goroutine 41 [running]:
Oct 03 23:33:36 h-stg-core-1 docker[2192]: panic(0x2ce11a0, 0xc420016050)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /usr/lib/go/src/runtime/panic.go:500 +0x1a1
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x126
Oct 03 23:33:36 h-stg-core-1 docker[2192]: panic(0x2ce11a0, 0xc420016050)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /usr/lib/go/src/runtime/panic.go:458 +0x243
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache.(*NodeInfo).removePod(0x0, 0xc42124f400, 0xc420a7d9f0, 0x1)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/node_info.go:193 +0x7c
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache.(*schedulerCache).removePod(0xc420051580, 0xc42124f400, 0xc4212a7e60, 0x1d)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/cache.go:236 +0x368
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache.(*schedulerCache).RemovePod(0xc420051580, 0xc42124f400, 0x0, 0x0)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/cache.go:259 +0x22d
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/plugin/pkg/scheduler/factory.(*ConfigFactory).deletePodFromCache(0xc4201b64d0, 0x31f5040, 0xc42124f400)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:196 +0xa9
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/plugin/pkg/scheduler/factory.(*ConfigFactory).(k8s.io/kubernetes/plugin/pkg/scheduler/factory.deletePodFromCache)-fm(0x31f5040, 0xc42124f400)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:134 +0x3e
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/client/cache.ResourceEventHandlerFuncs.OnDelete(0xc420285170, 0xc420285180, 0xc4202851a0, 0x31f5040, 0xc42124f400)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:182 +0x49
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/client/cache.(*ResourceEventHandlerFuncs).OnDelete(0xc42089b280, 0x31f5040, 0xc42124f400)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         <autogenerated>:52 +0x78
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/client/cache.NewIndexerInformer.func1(0x2d48f60, 0xc4212a7e20, 0xc4212a7e20, 0x2d48f60)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:318 +0x4ee
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/client/cache.(*DeltaFIFO).Pop(0xc4201b6580, 0xc42089c870, 0x0, 0x0, 0x0, 0x0)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/delta_fifo.go:420 +0x22a
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/client/cache.(*Controller).processLoop(0xc420494a80)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:126 +0x3c
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/client/cache.(*Controller).(k8s.io/kubernetes/pkg/client/cache.processLoop)-fm()
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:102 +0x2a
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/util/wait.JitterUntil.func1(0xc420a7df70)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:84 +0x19
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc420a7df70, 0x3b9aca00, 0x0, 0x2b5b201, 0xc4202249c0)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:85 +0xad
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/util/wait.Until(0xc420a7df70, 0x3b9aca00, 0xc4202249c0)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:47 +0x4d
Oct 03 23:33:36 h-stg-core-1 docker[2192]: k8s.io/kubernetes/pkg/client/cache.(*Controller).Run(0xc420494a80, 0xc4202249c0)
Oct 03 23:33:36 h-stg-core-1 docker[2192]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:102 +0x1af

The log from the other box:

Oct 03 23:33:36 h-stg-core-0 docker[3530]: E1003 23:33:36.836743       1 runtime.go:64] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:70
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:63
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:49
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /usr/lib/go/src/runtime/asm_amd64.s:479
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /usr/lib/go/src/runtime/panic.go:458
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /usr/lib/go/src/runtime/panic.go:62
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /usr/lib/go/src/runtime/sigpanic_unix.go:24
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/node_info.go:193
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/cache.go:236
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/cache.go:259
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:196
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:134
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:182
Oct 03 23:33:36 h-stg-core-0 docker[3530]: <autogenerated>:52
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:318
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/delta_fifo.go:420
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:126
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:102
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:84
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:85
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:47
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:102
Oct 03 23:33:36 h-stg-core-0 docker[3530]: /usr/lib/go/src/runtime/asm_amd64.s:2086
Oct 03 23:33:36 h-stg-core-0 docker[3530]: panic: runtime error: invalid memory address or nil pointer dereference [recovered]
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         panic: runtime error: invalid memory address or nil pointer dereference
Oct 03 23:33:36 h-stg-core-0 docker[3530]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x28 pc=0x1a9241c]
Oct 03 23:33:36 h-stg-core-0 docker[3530]: goroutine 57 [running]:
Oct 03 23:33:36 h-stg-core-0 docker[3530]: panic(0x2ce11a0, 0xc420016050)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /usr/lib/go/src/runtime/panic.go:500 +0x1a1
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/pkg/util/runtime.HandleCrash(0x0, 0x0, 0x0)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/runtime/runtime.go:56 +0x126
Oct 03 23:33:36 h-stg-core-0 docker[3530]: panic(0x2ce11a0, 0xc420016050)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /usr/lib/go/src/runtime/panic.go:458 +0x243
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache.(*NodeInfo).removePod(0x0, 0xc420a06780, 0xc4210219f0, 0x1)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/node_info.go:193 +0x7c
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache.(*schedulerCache).removePod(0xc42028c0c0, 0xc420a06780, 0xc420a035c0, 0x1d)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/cache.go:236 +0x368
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache.(*schedulerCache).RemovePod(0xc42028c0c0, 0xc420a06780, 0x0, 0x0)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/schedulercache/cache.go:259 +0x22d
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/plugin/pkg/scheduler/factory.(*ConfigFactory).deletePodFromCache(0xc4201e7130, 0x31f5040, 0xc420a06780)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:196 +0xa9
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/plugin/pkg/scheduler/factory.(*ConfigFactory).(k8s.io/kubernetes/plugin/pkg/scheduler/factory.deletePodFromCache)-fm(0x31f5040, 0xc420a06780)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/plugin/pkg/scheduler/factory/factory.go:134 +0x3e
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/pkg/client/cache.ResourceEventHandlerFuncs.OnDelete(0xc4202dfe30, 0xc4202dfe40, 0xc4202dfe50, 0x31f5040, 0xc420a06780)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:182 +0x49
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/pkg/client/cache.(*ResourceEventHandlerFuncs).OnDelete(0xc420b39940, 0x31f5040, 0xc420a06780)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         <autogenerated>:52 +0x78
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/pkg/client/cache.NewIndexerInformer.func1(0x2d48f60, 0xc420a03580, 0xc420a03580, 0x2d48f60)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:318 +0x4ee
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/pkg/client/cache.(*DeltaFIFO).Pop(0xc4201e71e0, 0xc4200f8ba0, 0x0, 0x0, 0x0, 0x0)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/delta_fifo.go:420 +0x22a
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/pkg/client/cache.(*Controller).processLoop(0xc4201ad030)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:126 +0x3c
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/pkg/client/cache.(*Controller).(k8s.io/kubernetes/pkg/client/cache.processLoop)-fm()
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/client/cache/controller.go:102 +0x2a
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/pkg/util/wait.JitterUntil.func1(0xc421021f70)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:84 +0x19
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/pkg/util/wait.JitterUntil(0xc421021f70, 0x3b9aca00, 0x0, 0x2b5b201, 0xc4200874a0)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:85 +0xad
Oct 03 23:33:36 h-stg-core-0 docker[3530]: k8s.io/kubernetes/pkg/util/wait.Until(0xc421021f70, 0x3b9aca00, 0xc4200874a0)
Oct 03 23:33:36 h-stg-core-0 docker[3530]:         /home/dima/Private/Work/k8s/src/k8s.io/kubernetes/pkg/util/wait/wait.go:47 +0x4d

ravilr (Contributor) commented Oct 5, 2016

Please also cherry-pick this to 1.4.1.

@k8s-cherrypick-bot

Removing label cherrypick-candidate because no release milestone was set. This is an invalid state and thus this PR is not being considered for cherry-pick to any release branch. Please add an appropriate release milestone and then re-add the label.

@davidopp davidopp added lgtm "Looks good to me", indicates that a PR is ready to be merged. cherrypick-candidate labels Oct 5, 2016
@davidopp davidopp added this to the v1.4 milestone Oct 5, 2016
davidopp (Member) commented Oct 5, 2016

LGTM

Thanks for fixing!

@wojtek-t wojtek-t removed the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 5, 2016
wojtek-t (Member) commented Oct 5, 2016

@xiang90 - thanks a lot for working on it.

However, I don't understand why the first commit is needed. In fact, I'm pretty sure the first commit is not needed and we should just keep the second one. I temporarily removed the LGTM because of that, so can you please clarify?

xiang90 (Contributor, Author) commented Oct 5, 2016

@wojtek-t

OK. I thought we could not range over a nil map, but it seems that ranging over a nil map is actually fine.

A nil map behaves like an empty map when reading, but attempts to write to a nil map will cause a 
runtime panic; don't do that. To initialize a map, use the built in make function:

It is good practice to initialize the map, but I guess it is not necessary for the cache. I am going to remove the first commit.
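
A short, self-contained demo of the nil-map behavior quoted above, using plain Go with no scheduler types:

```go
package main

import "fmt"

func main() {
	var m map[string]int // declared but never made: a nil map

	// Reading from and ranging over a nil map is safe; it behaves
	// like an empty map.
	fmt.Println(len(m))      // 0
	fmt.Println(m["absent"]) // 0 (the zero value), no panic
	for k, v := range m {    // loop body simply never runs
		fmt.Println(k, v)
	}

	// Writing is the one operation that panics on a nil map:
	// m["x"] = 1 // panic: assignment to entry in nil map

	// Initialize with make (or a map literal) before writing.
	m = make(map[string]int)
	m["x"] = 1
	fmt.Println(m) // map[x:1]
}
```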

xiang90 (Contributor, Author) commented Oct 5, 2016

@wojtek-t Fixed. PTAL.

@k8s-github-robot k8s-github-robot added size/XS Denotes a PR that changes 0-9 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Oct 5, 2016
wojtek-t (Member) commented Oct 5, 2016

OK. I thought we could not range over a nil map, but it seems that ranging over a nil map is actually fine.

Yes we can.

@xiang90 - thanks a lot. LGTM

@wojtek-t wojtek-t added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Oct 5, 2016
@k8s-github-robot

@k8s-bot test this [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot

Automatic merge from submit-queue

@k8s-github-robot k8s-github-robot merged commit 7856e46 into kubernetes:master Oct 5, 2016
@jessfraz jessfraz added the cherry-pick-approved Indicates a cherry-pick PR into a release branch has been approved by the release branch manager. label Oct 6, 2016
jessfraz (Contributor) commented Oct 6, 2016

@xiang90 Can you open the PR to cherry-pick this into release-1.4?

@jessfraz jessfraz added release-note Denotes a PR that will be considered when it comes time to generate release notes. and removed release-note-none Denotes a PR that doesn't merit a release note. labels Oct 6, 2016
k8s-github-robot pushed a commit that referenced this pull request Oct 6, 2016
#33163-#33227-#33359-#33605-#33967-#33977-#34158-origin-release-1.4

Automatic merge from submit-queue

Automated cherry pick of #32914 #33163 #33227 #33359 #33605 #33967 #33977 #34158 origin release 1.4

Cherry pick of #32914 #33163 #33227 #33359 #33605 #33967 #33977 #34158 on release-1.4.

#32914: Limit the number of names per image reported in the node
#33163: fix the appending bug
#33227: remove cpu limits for dns pod. The current limits are not
#33359: Fix goroutine leak in federation service controller
#33605: Add periodic ingress reconciliations.
#33967: scheduler: cache.delete deletes the pod from node specified
#33977: Heal the namespaceless ingresses in federation e2e.
#34158: Add missing argument to log message in federated ingress
@k8s-cherrypick-bot

Commit found in the "release-1.4" branch appears to be this PR. Removing the "cherrypick-candidate" label. If this is an error find help to get your PR picked.

shyamjvs pushed a commit to shyamjvs/kubernetes that referenced this pull request Dec 1, 2016
…ck-of-#32914-kubernetes#33163-kubernetes#33227-kubernetes#33359-kubernetes#33605-kubernetes#33967-kubernetes#33977-kubernetes#34158-origin-release-1.4

Automatic merge from submit-queue

Automated cherry pick of kubernetes#32914 kubernetes#33163 kubernetes#33227 kubernetes#33359 kubernetes#33605 kubernetes#33967 kubernetes#33977 kubernetes#34158 origin release 1.4

Cherry pick of kubernetes#32914 kubernetes#33163 kubernetes#33227 kubernetes#33359 kubernetes#33605 kubernetes#33967 kubernetes#33977 kubernetes#34158 on release-1.4.

kubernetes#32914: Limit the number of names per image reported in the node
kubernetes#33163: fix the appending bug
kubernetes#33227: remove cpu limits for dns pod. The current limits are not
kubernetes#33359: Fix goroutine leak in federation service controller
kubernetes#33605: Add periodic ingress reconciliations.
kubernetes#33967: scheduler: cache.delete deletes the pod from node specified
kubernetes#33977: Heal the namespaceless ingresses in federation e2e.
kubernetes#34158: Add missing argument to log message in federated ingress