Update monitor to subscribe to channel #24
Conversation
monitor.go
Outdated
if status, ok := exitErr.Sys().(syscall.WaitStatus); ok {
	return status.ExitStatus(), nil

func (m *defaultMonitor) Start(c *exec.Cmd) (chan Exit, error) {
	ec := make(chan Exit, 1)
nit: this can be moved just before the go func to avoid an extra close(ec) 2 lines below
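For context, a minimal sketch of the ordering this nit suggests; the Exit struct follows go-runc's shape, but the Start body below is a reconstruction for illustration, not the PR's actual diff:

```go
package monitor

import (
	"os/exec"
	"syscall"
	"time"
)

// Exit mirrors the shape used in the diff: when the process exited,
// its pid, and its exit status.
type Exit struct {
	Timestamp time.Time
	Pid       int
	Status    int
}

type defaultMonitor struct{}

// Start creates the channel only after c.Start has succeeded, right before
// the goroutine, so the error path never needs a close(ec).
func (m *defaultMonitor) Start(c *exec.Cmd) (chan Exit, error) {
	if err := c.Start(); err != nil {
		return nil, err // no channel created yet, nothing to close
	}
	ec := make(chan Exit, 1)
	go func() {
		status := 0
		if err := c.Wait(); err != nil {
			status = -1
			if exitErr, ok := err.(*exec.ExitError); ok {
				if ws, ok := exitErr.Sys().(syscall.WaitStatus); ok {
					status = ws.ExitStatus()
				}
			}
		}
		ec <- Exit{
			Timestamp: time.Now(),
			Pid:       c.Process.Pid,
			Status:    status,
		}
		close(ec)
	}()
	return ec, nil
}
```

With the channel created after c.Start succeeds, the error path simply returns and there is no channel to clean up.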
}
return 0, nil

ec <- Exit{
	Timestamp: time.Now(),
nit: get the time as soon as c.Wait() returns for more accuracy?
I don't think you will get any closer with the error handling
Mostly because I don't trust the scheduler :)
it sends the value, so the fields are all set by then
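Concretely, the timing nit amounts to something like the sketch below. waitAndStamp is a hypothetical helper (reusing the Exit type and imports from the earlier sketch), not code from the PR:

```go
// waitAndStamp captures the timestamp immediately after Wait returns, before
// any error unpacking, so time spent in the error handling (or waiting to be
// rescheduled) does not skew it.
func waitAndStamp(c *exec.Cmd) Exit {
	err := c.Wait()
	ts := time.Now() // taken as soon as Wait returns
	status := 0
	if err != nil {
		status = -1
		if exitErr, ok := err.(*exec.ExitError); ok {
			if ws, ok := exitErr.Sys().(syscall.WaitStatus); ok {
				status = ws.ExitStatus()
			}
		}
	}
	return Exit{
		Timestamp: ts,
		Pid:       c.Process.Pid,
		Status:    status,
	}
}
```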
Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
dbe04b9 to d9f4d00
@mlaventure updated
LGTM
I wonder if we could use context to better manage the shutdown. There might be a few deadlock cases if the caller doesn't handle the channels appropriately, but that would be on them. ;)
@stevvooe ya, this is all internal code for the most part, you can see the full PR for containerd here: containerd/containerd#1452
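As a rough illustration of the context idea: StartContext below is purely hypothetical, not part of this PR or go-runc; it layers over the Start sketch above and would need "context" added to the imports.

```go
// StartContext wraps Start so a caller can stop waiting for the exit event
// when its context is cancelled, instead of blocking on the channel forever.
func (m *defaultMonitor) StartContext(ctx context.Context, c *exec.Cmd) (<-chan Exit, error) {
	ec, err := m.Start(c)
	if err != nil {
		return nil, err
	}
	out := make(chan Exit, 1)
	go func() {
		defer close(out)
		select {
		case e, ok := <-ec:
			if ok {
				out <- e
			}
		case <-ctx.Done():
			// the caller gave up; drop the exit event rather than leave
			// this goroutine blocked on a channel nobody reads
		}
	}()
	return out, nil
}
```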
LGTM
Depends on containerd/go-runc#24

There is currently a race with the reaper where you could miss some exit events from processes. The problem before, and why the reaper was so complex, was that processes could fork, get a pid, and then fail on an execve before we had time to register the process with the reaper. This could cause pids to fill up in a map that was kept around as a way to reduce the race.

This change makes the reaper handle multiple subscribers so that the caller can handle locking, for when they want to wait for a specific pid, without affecting other callers using the reaper code. Exit events are broadcast to multiple subscribers, in this case the runc commands and the container pids that we get from a pid-file. Locking while the entire container starts no longer affects runc commands where you want to call `runc create` and wait until it has completed.

Signed-off-by: Michael Crosby <crosbymichael@gmail.com>
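To make the quoted description concrete, here is a rough sketch of the subscribe/broadcast pattern it outlines. The names and types are assumptions for illustration, not containerd's actual reaper API:

```go
package reaper

import (
	"sync"
	"time"
)

// Exit describes one reaped process exit.
type Exit struct {
	Timestamp time.Time
	Pid       int
	Status    int
}

// Monitor fans exit events out to every subscriber, so a caller that blocks
// waiting for one specific pid does not stall the others.
type Monitor struct {
	mu          sync.Mutex
	subscribers map[chan Exit]struct{}
}

// Subscribe registers a buffered channel that will receive all future exits.
func (m *Monitor) Subscribe() chan Exit {
	c := make(chan Exit, 32)
	m.mu.Lock()
	defer m.mu.Unlock()
	if m.subscribers == nil {
		m.subscribers = make(map[chan Exit]struct{})
	}
	m.subscribers[c] = struct{}{}
	return c
}

// Unsubscribe removes and closes a previously registered channel.
func (m *Monitor) Unsubscribe(c chan Exit) {
	m.mu.Lock()
	defer m.mu.Unlock()
	delete(m.subscribers, c)
	close(c)
}

// Publish broadcasts a single exit event to every current subscriber; the
// buffered channels keep a slow consumer from stalling the broadcast until
// its buffer fills.
func (m *Monitor) Publish(e Exit) {
	m.mu.Lock()
	defer m.mu.Unlock()
	for c := range m.subscribers {
		c <- e
	}
}
```

A caller that runs `runc create` would subscribe, watch the stream for its pid, and unsubscribe when done, without taking any lock shared with other callers.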