Add probe triggered log and shift the periodics timer after manual run #119089
Conversation
Hi @mochizuki875. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/sig node
/ok-to-test
/triage accepted
/assign
@bart0sh kubernetes/pkg/kubelet/prober/worker.go Lines 185 to 186 in 2f90380
Should I remove the regular log?
The right fix will be to shift the periodic timer after the manual run. Also, maybe update the docs for probes. /lgtm cancel
@SergeyKanzhelev
How about updating the periodic timer to run the probe again at intervals of e.g. Assume that
(force-pushed from 2f90380 to b3f9f16)
I've fixed it as below.
In addition, I've created a PR which updates the docs. Best regards.
/retest |
pkg/kubelet/prober/worker.go (outdated)

```go
// @@ -178,7 +178,13 @@ probeLoop:
case <-w.stopCh:
	break probeLoop
case <-probeTicker.C:
	klog.V(4).InfoS("Triggerd Probe by periodSeconds", "probeType", w.probeType, "pod", klog.KObj(w.pod), "podUID", w.pod.UID, "containerName", w.container.Name)
```
I don't think this message is necessary because every probe emits a level 4 log when it runs:
kubernetes/pkg/kubelet/prober/prober.go
Lines 136 to 179 in de6db3f
```go
func (pb *prober) runProbe(ctx context.Context, probeType probeType, p *v1.Probe, pod *v1.Pod, status v1.PodStatus, container v1.Container, containerID kubecontainer.ContainerID) (probe.Result, string, error) {
	timeout := time.Duration(p.TimeoutSeconds) * time.Second
	switch {
	case p.Exec != nil:
		klog.V(4).InfoS("Exec-Probe runProbe", "pod", klog.KObj(pod), "containerName", container.Name, "execCommand", p.Exec.Command)
		command := kubecontainer.ExpandContainerCommandOnlyStatic(p.Exec.Command, container.Env)
		return pb.exec.Probe(pb.newExecInContainer(ctx, container, containerID, command, timeout))
	case p.HTTPGet != nil:
		req, err := httpprobe.NewRequestForHTTPGetAction(p.HTTPGet, &container, status.PodIP, "probe")
		if err != nil {
			return probe.Unknown, "", err
		}
		if klogV4 := klog.V(4); klogV4.Enabled() {
			port := req.URL.Port()
			host := req.URL.Hostname()
			path := req.URL.Path
			scheme := req.URL.Scheme
			headers := p.HTTPGet.HTTPHeaders
			klogV4.InfoS("HTTP-Probe", "scheme", scheme, "host", host, "port", port, "path", path, "timeout", timeout, "headers", headers)
		}
		return pb.http.Probe(req, timeout)
	case p.TCPSocket != nil:
		port, err := probe.ResolveContainerPort(p.TCPSocket.Port, &container)
		if err != nil {
			return probe.Unknown, "", err
		}
		host := p.TCPSocket.Host
		if host == "" {
			host = status.PodIP
		}
		klog.V(4).InfoS("TCP-Probe", "host", host, "port", port, "timeout", timeout)
		return pb.tcp.Probe(host, port, timeout)
	case p.GRPC != nil:
		host := status.PodIP
		service := ""
		if p.GRPC.Service != nil {
			service = *p.GRPC.Service
		}
		klog.V(4).InfoS("GRPC-Probe", "host", host, "service", service, "port", p.GRPC.Port, "timeout", timeout)
		return pb.grpc.Probe(host, service, int(p.GRPC.Port), timeout)
```
So, we can already distinguish whether a probe run is periodic or manual by searching the log for the manual-run message added below.
In addition, this message looks noisy because it is emitted even while the probe is on hold. That means this message is logged for a startup probe even while the container is running.
@hshiina
Indeed, you are right.
I've removed the periodic log.
/lgtm
LGTM label has been added. Git tree hash: 3194f64f70f439205c9021bd1b57ec52312dc6fd
ping for approval!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: mochizuki875, SergeyKanzhelev. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
This will be tricky to test; merging as-is is fine. Thanks!
@SergeyKanzhelev /remove-hold
What type of PR is this?
/kind cleanup

What this PR does / why we need it:
Output a log when a probe is triggered. Especially for readinessProbe, it will be executed by two triggers: the periodic timer (every periodSeconds, which is a field of readinessProbe) and a manual run. However, there is no way to know which one executed the readinessProbe, and that may cause confusion like #118815. In addition, shift the periodic timer of the readinessProbe after a manual run.

Which issue(s) this PR fixes:
Fixes #118815
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: