Need some indication of taskruns never leaving pending state before timeouts #779
Description
Expected Behavior
A TaskRun that never leaves the Pending state, with its underlying pod never started, should have this fact made clear in log storage
Actual Behavior
No such information is stored
Steps to Reproduce the Problem
- Define ResourceQuotas such that Pods cannot be started in a namespace (see the sketch after this list)
- Start a TaskRun with a timeout
- Analyze Results after that TaskRun times out
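One way to set up the first step is a ResourceQuota with a hard Pod count of zero, so any Pod the TaskRun needs is rejected and the TaskRun stays Pending until its timeout fires. This is only an illustrative sketch using client-go; the namespace name and quota name are made up, not part of the reported setup:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const namespace = "quota-test" // hypothetical namespace for the repro

	// Build a clientset from the local kubeconfig.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	kc := kubernetes.NewForConfigOrDie(config)

	// A hard limit of 0 Pods means any Pod created for a TaskRun in this
	// namespace is rejected, leaving the TaskRun stuck in Pending.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "no-pods", Namespace: namespace},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods: resource.MustParse("0"),
			},
		},
	}
	if _, err := kc.CoreV1().ResourceQuotas(namespace).Create(context.Background(), quota, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("ResourceQuota no-pods created; new Pods in", namespace, "will be rejected")
}
```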
Additional Info
With #699 we fixed the general case so that, even when a timeout/cancel occurs, we still go on to fetch/store the underlying pod logs.
However, in systems with quotas or severe node pressure at the k8s level, TaskRuns can stay stuck in Pending and any created Pods will never get started.
If you look at the comments at
you'll see the prior observations that tkn makes distinguishing the kinds of errors difficult, and thus errors from tkn while getting logs are ignored. That is proving unusable for users who may not have access to view events, Pods, or etcd entities in general before the attempt to store logs occurs, after which the PipelineRun/TaskRun are potentially pruned from etcd.
Before exiting the streamLogs
code needs to confirm whether any underlying Pods for the TaskRun exist, and if not, store helpful debug information in what is sent to the gRPC UpdateLog
call and/or direct S3 storage (a minimal sketch follows the list). In particular:
- the TaskRun YAML
- a listing of Pods in the namespace in question
- a list of events for the TaskRun, i.e. the eventList retrieved at
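As a rough illustration of that fallback, the sketch below gathers those three pieces of debug information and writes them to a log stream. The function name, package, and the io.Writer plumbing are assumptions for illustration only; the real change would hook into streamLogs and feed the gRPC UpdateLog call and/or direct S3 storage:

```go
package logs

import (
	"context"
	"fmt"
	"io"

	pipelinev1 "github.com/tektoncd/pipeline/pkg/apis/pipeline/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"sigs.k8s.io/yaml"
)

// writePendingDebugInfo is a hypothetical fallback: when a TaskRun timed out
// or was cancelled while still Pending (no Pod ever started), write debug
// context into the log stream instead of storing nothing.
func writePendingDebugInfo(ctx context.Context, w io.Writer, kc kubernetes.Interface, tr *pipelinev1.TaskRun) error {
	ns := tr.Namespace

	// 1. The TaskRun itself, as YAML.
	trYAML, err := yaml.Marshal(tr)
	if err != nil {
		return fmt.Errorf("marshalling TaskRun %s/%s: %w", ns, tr.Name, err)
	}
	fmt.Fprintf(w, "TaskRun %s/%s never left Pending; no Pod was started.\n\n--- TaskRun ---\n%s\n", ns, tr.Name, trYAML)

	// 2. A listing of Pods in the namespace, to show what (if anything) is running.
	pods, err := kc.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{})
	if err != nil {
		return fmt.Errorf("listing pods in %s: %w", ns, err)
	}
	fmt.Fprintf(w, "--- Pods in namespace %s ---\n", ns)
	for _, p := range pods.Items {
		fmt.Fprintf(w, "%s\t%s\n", p.Name, p.Status.Phase)
	}

	// 3. Events referencing the TaskRun (quota rejections, scheduling errors, etc.).
	events, err := kc.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
		FieldSelector: fmt.Sprintf("involvedObject.name=%s", tr.Name),
	})
	if err != nil {
		return fmt.Errorf("listing events for %s/%s: %w", ns, tr.Name, err)
	}
	fmt.Fprintf(w, "\n--- Events for %s ---\n", tr.Name)
	for _, e := range events.Items {
		fmt.Fprintf(w, "%s\t%s\t%s\n", e.Type, e.Reason, e.Message)
	}
	return nil
}
```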
I'll also attach a PR/TR which was timed out/cancelled where the TaskRun never left the Pending state.
You'll see from the annotations that they go from pending straight to a terminal state, meaning a pod never got associated.
@khrm @sayan-biswas @avinal @enarha FYI / PTAL / WDYT