kubectl/apiserver problems if minion down #2951

Closed
eparis opened this issue Dec 15, 2014 · 5 comments · Fixed by #3083
Labels
area/apiserver, area/client-libraries, area/usability, priority/important-soon

Comments

@eparis (Contributor) commented on Dec 15, 2014:

# kubectl version
Client Version: version.Info{Major:"0", Minor:"6+", GitVersion:"v0.6.0-297-g5ef34bf5231190-dirty", GitCommit:"5ef34bf52311901b997119cc49eff944c610081b", GitTreeState:"dirty"}
Server Version: &version.Info{Major:"0", Minor:"6+", GitVersion:"v0.6.0-297-g5ef34bf5231190-dirty", GitCommit:"5ef34bf52311901b997119cc49eff944c610081b", GitTreeState:"dirty"}
# kubectl get minions
NAME                LABELS
10.13.137.14        <none>
10.13.137.156       <none>
10.13.137.64        <none>
10.13.137.228       <none>
# kubectl get pods
NAME                                   IMAGE(S)                                 HOST                LABELS                                       STATUS
d6db45ed-84ab-11e4-8f20-5254000d45bb   kubernetes/example-guestbook-php-redis   10.13.137.14/       name=frontend,uses=redisslave,redis-master   Running
d6db9804-84ab-11e4-8f20-5254000d45bb   kubernetes/example-guestbook-php-redis   10.13.137.14/       name=frontend,uses=redisslave,redis-master   Running
redis-master                           dockerfile/redis                         10.13.137.14/       name=redis-master                            Running
5028e707-84a9-11e4-8f20-5254000d45bb   brendanburns/redis-slave                 10.13.137.14/       name=redisslave,uses=redis-master            Running
50291a36-84a9-11e4-8f20-5254000d45bb   brendanburns/redis-slave                 10.13.137.14/       name=redisslave,uses=redis-master            Running
df2e94f2-84a9-11e4-8f20-5254000d45bb   kubernetes/example-guestbook-php-redis   10.13.137.14/       name=frontend,uses=redisslave,redis-master   Running
df2f5f30-84a9-11e4-8f20-5254000d45bb   kubernetes/example-guestbook-php-redis   10.13.137.14/       name=frontend,uses=redisslave,redis-master   Running

Pull the power plug on 10.13.137.14 (not sure why every pod was scheduled on the single minion, but whatever).

# kubectl get pods
   [30 second-ish hang the first time]
F1215 17:40:42.100056    1669 get.go:75] The requested resource does not exist.

Power 10.13.137.14 back up and everything works swimmingly again.

@bgrant0607 added the area/usability, priority/important-soon, area/client-libraries, and area/apiserver labels on Dec 16, 2014
@bgrant0607 (Member) commented:

I strongly suspect this is due to fetching pod status from minions on demand when missing in the pod cache. We need to totally change how status information is maintained: #2726.
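
To make the suspected failure mode concrete, here is a minimal Go sketch of that pattern. It is not the actual apiserver code; the cache type, the kubelet port, and the /podInfo path are assumptions for illustration only. The point is that a cache miss turns into a synchronous call to the pod's minion, so a powered-off minion stalls the request until the client timeout expires and then fails it, which would match the ~30 second hang and error reported above.

```go
// Hypothetical sketch of the suspected failure mode (not the real
// Kubernetes 0.6 code): a pod-status cache whose misses fall through to a
// synchronous HTTP call against the kubelet on the pod's host.
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

type statusCache struct {
	mu      sync.Mutex
	entries map[string]string // pod ID -> last known status
	client  *http.Client
}

// get returns the cached status, or synchronously asks the minion when the
// cache has no entry. The blocking fallback is the problem: with the host
// powered off, the call only returns once the client timeout expires.
func (c *statusCache) get(podID, host string) (string, error) {
	c.mu.Lock()
	status, ok := c.entries[podID]
	c.mu.Unlock()
	if ok {
		return status, nil
	}
	// Port and path are illustrative placeholders, not the real kubelet API.
	resp, err := c.client.Get(fmt.Sprintf("http://%s:10250/podInfo?podID=%s", host, podID))
	if err != nil {
		return "", fmt.Errorf("fetching status for %s from %s: %w", podID, host, err)
	}
	defer resp.Body.Close()
	// ... decode the response, store it in c.entries ...
	return "Running", nil
}

func main() {
	cache := &statusCache{
		entries: map[string]string{},
		client:  &http.Client{Timeout: 30 * time.Second},
	}
	// Simulates a request hitting a cold cache for a pod on a dead minion:
	// this blocks for up to the full timeout before erroring out.
	if _, err := cache.get("redis-master", "10.13.137.14"); err != nil {
		fmt.Println("get pods fails:", err)
	}
}
```

If status were maintained in the apiserver (per #2726) instead of pulled on demand, a dead minion would at worst leave stale status rather than block the read path.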

@bgrant0607 (Member) commented:

/cc @satnam6502

@lavalamp self-assigned this on Dec 19, 2014
@lavalamp (Member) commented:

I started taking a look at this. There are multiple problems: we make at least two RPCs per pod when you list pods, and a third if the cache is empty. All of these RPCs are unneeded, though some are easier to remove than others. I'll make a few PRs.
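
As a rough illustration of that fan-out (hypothetical types and names, not the real list handler), the sketch below contrasts a list that makes a synchronous status call per pod, where a single dead minion turns into N blocking calls, with a list answered entirely from state the apiserver already holds.

```go
// Illustrative sketch (not the real apiserver code) of the per-pod RPC
// fan-out described above: a naive list handler that makes a synchronous
// status call for every pod, versus one that answers from cached state.
package main

import (
	"fmt"
	"time"
)

type pod struct {
	Name, Host, CachedStatus string
}

// fetchStatus stands in for a synchronous RPC to the kubelet on p.Host;
// here it just simulates the latency of talking to an unreachable minion.
func fetchStatus(p pod, timeout time.Duration) (string, error) {
	time.Sleep(timeout) // pretend the connect attempt timed out
	return "", fmt.Errorf("minion %s unreachable", p.Host)
}

// listWithPerPodRPCs mirrors the problematic shape: every pod in the result
// triggers at least one remote call before the response can be assembled,
// so a dead minion costs roughly N x timeout of wall time.
func listWithPerPodRPCs(pods []pod, timeout time.Duration) {
	for _, p := range pods {
		if _, err := fetchStatus(p, timeout); err != nil {
			fmt.Printf("%s: %v\n", p.Name, err)
		}
	}
}

// listFromCache answers from state the apiserver already holds, so a dead
// minion cannot stall or fail the list.
func listFromCache(pods []pod) {
	for _, p := range pods {
		fmt.Printf("%s: %s (cached)\n", p.Name, p.CachedStatus)
	}
}

func main() {
	pods := []pod{
		{Name: "redis-master", Host: "10.13.137.14", CachedStatus: "Running"},
		{Name: "frontend-1", Host: "10.13.137.14", CachedStatus: "Running"},
	}
	listWithPerPodRPCs(pods, 100*time.Millisecond) // blocks once per pod
	listFromCache(pods)                            // no remote calls at all
}
```

Removing the per-pod RPCs collapses the wall time of a list from roughly N times the RPC timeout to the cost of reading local state.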

@brendandburns (Contributor) commented:

#3051 addresses part of this.

@brendandburns (Contributor) commented:

Closing this, as I can't reproduce at head given recent changes.
