
Add a new metric to indicate the current queue length. #1969

Not planned
@whalecold

Description

Should we add a metric named controller_runtime_queue_length to indicate the length of the queue in the controller? Sometimes Reconcile has not been invoked, and I want to know whether the queue is empty.

Activity

FillZpp (Contributor) commented on Aug 9, 2022

You should use workqueue_depth, which reports the current depth of the workqueue. Here are the metrics defined for the workqueue: https://github.com/kubernetes-sigs/controller-runtime/blob/master/pkg/metrics/workqueue.go#L41
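
For context, a minimal sketch of how these metrics surface, assuming a pre-v0.15 controller-runtime where the metrics endpoint is configured via Options.MetricsBindAddress (newer releases moved this under Options.Metrics). Once the manager is running, each controller's queue depth is exposed as workqueue_depth{name="<controller name>"}:

```go
package main

import (
	ctrl "sigs.k8s.io/controller-runtime"
)

func main() {
	// The manager serves Prometheus metrics, including the workqueue
	// family, on the configured bind address. MetricsBindAddress is the
	// pre-v0.15 field name; newer releases use Options.Metrics instead.
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{
		MetricsBindAddress: ":8080",
	})
	if err != nil {
		panic(err)
	}

	// ... register controllers with the manager here ...

	// Each controller's queue depth is then scrapeable as
	//   workqueue_depth{name="<controller name>"}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```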

Somefive commented on Sep 30, 2022

Will there be any conflict between

workqueue.SetProvider(workqueueMetricsProvider{})

and https://github.com/kubernetes/component-base/blob/03d57670a9cda43def5d9c960823d6d4558e99ff/metrics/prometheus/workqueue/metrics.go#L101?

Both repositories try to set the provider, but only the earliest call takes effect. If the component-base library is initialized first, the workqueue_depth metric from component-base is used and the one in controller-runtime never works, so the metrics exposed by controller-runtime's default endpoint cannot show the workqueue_depth number.

Is there any hint or recommendation for handling that?
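
To make the failure mode concrete, here is a minimal sketch against the client-go API of that era; the tagProvider and noop types are hypothetical, used only to show that workqueue.SetProvider honors just its first caller (it is guarded by a sync.Once internally):

```go
package main

import (
	"fmt"

	"k8s.io/client-go/util/workqueue"
)

// noop satisfies all of the workqueue metric interfaces with no-ops.
type noop struct{}

func (noop) Inc()            {}
func (noop) Dec()            {}
func (noop) Observe(float64) {}
func (noop) Set(float64)     {}

// tagProvider is a hypothetical MetricsProvider used only to show which
// call to SetProvider actually took effect.
type tagProvider struct{ tag string }

func (p tagProvider) NewDepthMetric(name string) workqueue.GaugeMetric {
	fmt.Printf("depth metric for %q created by provider %q\n", name, p.tag)
	return noop{}
}
func (p tagProvider) NewAddsMetric(string) workqueue.CounterMetric            { return noop{} }
func (p tagProvider) NewLatencyMetric(string) workqueue.HistogramMetric      { return noop{} }
func (p tagProvider) NewWorkDurationMetric(string) workqueue.HistogramMetric { return noop{} }
func (p tagProvider) NewUnfinishedWorkSecondsMetric(string) workqueue.SettableGaugeMetric {
	return noop{}
}
func (p tagProvider) NewLongestRunningProcessorSecondsMetric(string) workqueue.SettableGaugeMetric {
	return noop{}
}
func (p tagProvider) NewRetriesMetric(string) workqueue.CounterMetric { return noop{} }

func main() {
	workqueue.SetProvider(tagProvider{tag: "first"})  // takes effect
	workqueue.SetProvider(tagProvider{tag: "second"}) // silently ignored

	// Creating a named queue instantiates its metrics via the winning provider.
	_ = workqueue.NewNamedRateLimitingQueue(workqueue.DefaultControllerRateLimiter(), "demo")
	// Prints: depth metric for "demo" created by provider "first"
}
```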

FillZpp (Contributor) commented on Oct 10, 2022

@Somefive k/component-base is synced from k/k/staging/src/k8s.io/component-base and is mostly for the core components of Kubernetes, such as KCM and kube-scheduler, which do not import controller-runtime.

On the other hand, most custom operators based on controller-runtime probably don't need to rely on component-base. But if both are imported by a project, you will find that they both register via workqueue.SetProvider. So why do you need component-base?

Somefive commented on Oct 21, 2022

> @Somefive k/component-base is synced from k/k/staging/src/k8s.io/component-base and is mostly for the core components of Kubernetes, such as KCM and kube-scheduler, which do not import controller-runtime.
>
> On the other hand, most custom operators based on controller-runtime probably don't need to rely on component-base. But if both are imported by a project, you will find that they both register via workqueue.SetProvider. So why do you need component-base?

The component-base library might not be a direct dependency. However, other libraries such as k8s.io/apiextensions-apiserver, sigs.k8s.io/controller-runtime, github.com/coreos/prometheus-operator, and many others depend on it. If code in these libraries calls component-base functions, the init logic in component-base runs and might call workqueue.SetProvider before controller-runtime does, which prevents controller-runtime from later setting its own workqueue_depth metric.
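
One way to check which provider won, sketched against controller-runtime's exported metrics.Registry (checkWorkqueueDepth is a hypothetical helper, not part of either library): if another provider was registered first, controller-runtime's workqueue_depth gauge vector should never gain a series, so the family should be absent when the registry is gathered after the controllers have started their queues.

```go
package main

import (
	"fmt"

	"sigs.k8s.io/controller-runtime/pkg/metrics"
)

// checkWorkqueueDepth is a hypothetical diagnostic: it reports whether any
// workqueue_depth series exists in controller-runtime's own registry. If
// another library (e.g. component-base) won the workqueue.SetProvider race,
// controller-runtime never instantiates these gauges, so the family should
// be missing from the gathered output.
func checkWorkqueueDepth() (bool, error) {
	families, err := metrics.Registry.Gather()
	if err != nil {
		return false, err
	}
	for _, f := range families {
		if f.GetName() == "workqueue_depth" {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	// Call this after the manager's controllers have started.
	ok, err := checkWorkqueueDepth()
	if err != nil {
		panic(err)
	}
	fmt.Println("controller-runtime workqueue_depth present:", ok)
}
```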

k8s-triage-robot commented on Jan 19, 2023

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

added lifecycle/stale (Denotes an issue or PR has remained open with no activity and has become stale.) on Jan 19, 2023

k8s-triage-robot commented on Feb 18, 2023

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

added lifecycle/rotten (Denotes an issue or PR that has aged beyond stale and will be auto-closed.) and removed lifecycle/stale on Feb 18, 2023

k8s-triage-robot commented on Mar 20, 2023

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot (Contributor) commented on Mar 20, 2023

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

RainbowMango (Member) commented on Dec 12, 2024

/reopen

As mentioned by @Somefive, the issue indeed exists and has not yet been resolved.

k8s-ci-robot (Contributor) commented on Dec 12, 2024

@RainbowMango: Reopened this issue.

In response to this:

/reopen

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

5 remaining items


Metadata

Assignees

No one assigned

Labels

lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Type

No type

Projects

No projects

Milestone

No milestone

Relationships

None yet

Development

No branches or pull requests
