
Expose endpoint to return query with injected labels #231

Open
c3-tiffanyhui opened this issue Jun 26, 2024 · 2 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature.

Comments

@c3-tiffanyhui

c3-tiffanyhui commented Jun 26, 2024

Currently, all prom-label-proxy endpoints serve purely as a passthrough to Prometheus (i.e., they all act as a true proxy).

Would it be possible to also expose an endpoint that takes in a query and returns that query with the injected labels?

Use case: a client defines an alert rule, but before configuring it on Prometheus, application logic calls this endpoint to retrieve the modified expression, which is then sent as the `expr` to be configured in Prometheus.

@simonpasquier
Contributor

No strong opinion on the request, but I'm not sure I understand why users would want to configure the rule with the modified expression.

Two remarks off the top of my head:

  • The endpoint path should have a prefix that avoids any clash with the proxied APIs.
  • Authentication & authorization are still outside the scope of prom-label-proxy, even for this additional endpoint.

@simonpasquier simonpasquier added the kind/feature Categorizes issue or PR as related to a new feature. label Jun 26, 2024
@c3-tiffanyhui
Author

Thanks for the response, @simonpasquier. I completely agree with the two remarks.

To expand upon the use case:
  • tenant1 authors the alert rule `avg(container_memory_working_set_bytes) by (pod) / avg(kube_pod_container_resource_limits{resource="memory", unit="byte"}) by (pod) > 0.75`
  • tenant2 authors the same alert rule `avg(container_memory_working_set_bytes) by (pod) / avg(kube_pod_container_resource_limits{resource="memory", unit="byte"}) by (pod) > 0.75`
  • tenant3 authors a similar alert rule with a different threshold: `avg(container_memory_working_set_bytes) by (pod) / avg(kube_pod_container_resource_limits{resource="memory", unit="byte"}) by (pod) > 0.5`

Note that there is no tenant filtering (e.g., `tenant="tenant1"`) in the expressions.

If these 3 rules are configured on Prometheus as is, my understanding is that each rule will be triggered by every tenant's resources, whereas the desired behavior is for each rule to fire only on the specific tenant's resources. The alert instances will not have `tenant` available as a label to filter on (since the aggregation is `by (pod)`). Hardcoding `tenant` as a label on each alert rule would also misattribute instances (since tenant1 can trigger tenant2's rule).
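The requested endpoint would solve this by rewriting each expression to carry a tenant matcher before the rule is configured, so every selector is scoped to the authoring tenant. A real implementation should rewrite the expression with a proper PromQL parser (which is what prom-label-proxy relies on internally); purely to illustrate the intended input/output behavior, here is a naive, regex-based Python sketch. The function name, the `tenant` label, and the regex handling are all assumptions for this example, and it only covers plain vector selectors plus `by`/`without`/`on`/`ignoring` grouping clauses (no `offset`, subqueries, etc.):

```python
import re

# Assumed helper, for illustration only: a real implementation should use a
# PromQL parser instead of regexes. Grouping clauses and quoted strings are
# matched first so their contents are left untouched.
_GROUPING = r'\b(?:by|without|on|ignoring)\s*\([^)]*\)'
_STRING = r'"[^"]*"'
# A metric name, optionally followed by a {label matcher} list; the trailing
# lookahead excludes function names such as avg(...).
_SELECTOR = r'[a-zA-Z_:][a-zA-Z0-9_:]*(?![a-zA-Z0-9_:])(?:\s*\{[^}]*\})?(?!\s*\()'
_TOKEN = re.compile(f'({_GROUPING}|{_STRING})|({_SELECTOR})')


def inject_label(expr: str, name: str, value: str) -> str:
    """Return expr with a `name="value"` matcher added to every vector selector."""
    matcher = f'{name}="{value}"'

    def repl(m: re.Match) -> str:
        if m.group(1) is not None:      # grouping clause or string literal: keep as-is
            return m.group(1)
        sel = m.group(2)
        if sel.endswith('}'):           # selector already has a matcher list
            head = sel[:-1].rstrip()
            if head.endswith('{'):      # empty list: metric{}
                return head + matcher + '}'
            return head + ', ' + matcher + '}'
        return sel + '{' + matcher + '}'  # bare metric name

    return _TOKEN.sub(repl, expr)
```

Applied to tenant1's rule above, this yields the expression with `tenant="tenant1"` injected into both selectors, which could then be configured as the rule's `expr` without any cross-tenant firing.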
