Processes may be leaked when docker is killed repeatedly in a short time frame #41450
Comments
Are these processes still tracked by their respective cgroups?
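One way to check, as a rough sketch (assumes cgroup v1, which the gci image used at the time, and that at least one leaked pause process is still running; paths are illustrative):
```
# Pick one of the leaked pause processes.
pid=$(pgrep -o pause)

# cgroup v1: one hierarchy per line. A process still tracked by its container
# cgroup will show docker/<container-id> paths here.
cat "/proc/${pid}/cgroup"

# The container's cgroup directory, if it still exists, lives under the
# cgroupfs mounts, e.g.:
# ls /sys/fs/cgroup/memory/docker/
```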
|
Didn't check when I still had the node. Should be easy to reproduce and verify though. |
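For anyone re-verifying on a fresh node, a rough sketch of the check (leaked processes show up reparented to PID 1 while the daemon reports nothing):
```
# Leaked pause processes should appear with PPID 1 (reparented to init)...
ps -C pause -o pid=,ppid=,args=

# ...while the daemon no longer lists any corresponding container.
docker ps --no-trunc
```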
/cc @kubernetes/rh-cluster-infra to see if anyone has encountered this issue. |
Does docker 1.12 have the same problem? |
Don't know. No one has verified yet. |
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Need to verify whether newer docker + COS image has the same issue |
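When re-checking, a quick way to record the daemon and OS image versions for comparison (standard commands; exact fields vary by release):
```
# Docker engine (server) version on the node.
docker version --format '{{.Server.Version}}'

# OS image identity; GCI/COS publish this in /etc/os-release.
grep -E '^(NAME|VERSION|BUILD_ID)=' /etc/os-release
```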
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta. |
Forked from #37580
docker version: 1.11.2
OS: gci
If docker gets killed repeatedly in a short time frame (while kubelet is running and trying to create containers), some container processes may get reparented to PID 1 and continue running, but are no longer visible from the docker daemon.
This can be reproduced by running the "Network should recover from ip leaks" node e2e test:
- The test creates 100 pods with the pause image.
- The test restarts docker (systemctl restart docker) 6 times, with a 20s interval in between.
- The test completes successfully.
- Run ps -C "pause" -f and see multiple processes running the pause command still alive.
- Run docker ps and see no running container.
Running the test a few times (< 3) should reproduce the issue.
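The restart-and-check portion of the steps above can also be scripted outside the e2e framework; a rough sketch (assumes the pause containers are already running on the node):
```
#!/usr/bin/env bash
# Sketch of the restart-and-check steps; not the actual e2e test.
set -u

for i in $(seq 1 6); do
  echo "docker restart #${i}"
  systemctl restart docker
  sleep 20
done

echo "--- pause processes still alive on the host ---"
ps -C pause -f || echo "(none)"

echo "--- containers the daemon still knows about ---"
docker ps
```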
EDIT: This issue has been there since the test was first introduced in Nov 2016. We will need to check and see if this is fixed in newer docker versions.
/cc @kubernetes/sig-node-bugs