k8sattributes processor evaluating pod identifier result empty field #29630
Comments
Pinging code owners:
See Adding Labels via Comments if you do not have permissions to add labels yourself. |
Hi, same issue here on a k3s cluster (Rancher-like). |
This can be mitigated by adding the needed env vars to pods directly:
|
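The exact snippet wasn't captured above; a rough sketch of what adding such env vars via the Kubernetes Downward API might look like (the K8S_POD_IP name and the image are illustrative, and this assumes the application's SDK reads OTEL_RESOURCE_ATTRIBUTES):

    apiVersion: v1
    kind: Pod
    metadata:
      name: example-app                  # illustrative name
    spec:
      containers:
        - name: app
          image: example/app:latest      # illustrative image
          env:
            # Expose the pod IP via the Downward API...
            - name: K8S_POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            # ...and attach it as a resource attribute the k8sattributes
            # processor can use for pod association.
            - name: OTEL_RESOURCE_ATTRIBUTES
              value: k8s.pod.ip=$(K8S_POD_IP)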
Thanks @fculpo, is there any way to get the "deployment name"? |
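The thread doesn't answer this directly, but the processor's extract.metadata list can include the deployment name; a minimal sketch of such a processor config:

    k8sattributes:
      extract:
        metadata:
          - k8s.namespace.name
          - k8s.pod.name
          - k8s.deployment.name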
Do you have a way to reproduce this with a simple setup? |
Hi, spawning a simple k3s cluster with the k8sattributes processor should reproduce it: spans will not be enhanced with metadata. |
I've not tested that yet; I've focused on configuring the processor on our standard clusters for now. |
@joegoldman2 what kind of k8s cluster are you using? I have still not been able to reproduce this issue. |
The debug message is coming from here:
The preset's pod associations are:

    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
      - sources:
          - from: resource_attribute
            name: k8s.pod.uid
      - sources:
          - from: connection

@vnStrawHat in your config you've removed the
@joegoldman2 for you it looks like your data doesn't contain |
I was also unable to reproduce with the latest minikube and collector. |
Can you reproduce on Rancher-based k8s (i.e. k3s)?
I could not get any metadata there, while GKE and AKS clusters were fine.
|
To clarify, you were able to get AKS to populate correctly? |
I had Grafana Tempo on AKS instrumenting itself and sending to Grafana Agent (using
the k8sattributes processor), which was working, so I was surprised that k3s
did not work, even after trying a lot of processor configurations.
|
Ok cool, this was my suspicion. I have no idea why the direct pod connection isn't working as expected; it could be something unique to the AKS setup. @jinja2 any ideas? |
I haven't looked at IPAM/network setup in AKS specifically, but I would guess the pod IP might be getting SNAT'd to that of the node's primary ip address, possibly due to pod cidr not being routable in the Azure subnet. I would suggest looking at your cluster's networking setup to understand why this is happening and if AKS provides a CNI option to preserve pod ip. |
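If the connection-based source is what breaks under SNAT, one workaround (building on the env-var approach mentioned earlier in the thread) is to associate pods by a resource attribute stamped by the workload before falling back to the connection IP; a sketch, not a config taken from this thread:

    k8sattributes:
      pod_association:
        # Prefer attributes set by the workload itself...
        - sources:
            - from: resource_attribute
              name: k8s.pod.ip
        - sources:
            - from: resource_attribute
              name: k8s.pod.uid
        # ...and only then fall back to the (possibly SNAT'd) connection IP.
        - sources:
            - from: connection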
@jinja2 I am so glad you are part of this project because I don't know any of the networking/infra stuff you just said lol |
This issue has been inactive for 60 days. It will be closed in 60 days if there is no further activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners.
See Adding Labels via Comments if you do not have permissions to add labels yourself. |
Has anyone found a way to make this work in EKS? When doing the below, it almost works, except it then causes all of the traces from the service mesh to no longer be associated.
|
We are using EKS with the collector running both as a Deployment and as a DaemonSet, and we are also experiencing this issue. |
Hey, I think I figured it out. You need to make sure that your k8sattributes processor runs first [source],
so for your case, swap batch and k8sattributes:

    service:
      pipelines:
        traces:
          receivers: [ ... ]
          processors: [k8sattributes, batch]
          exporters: [ ... ]
I also had a problem with an Istio sidecar confusing the collector with the sidecar loopback IP 127.0.0.6. I'm not sure if the pod_association handles the X-Forwarded-For header, so I just disabled Istio on the collector pod. |
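For reference, disabling the sidecar on the collector pod is usually done with the standard Istio injection setting on the pod template; an illustrative fragment (depending on the Istio version this may need to be a label rather than an annotation):

    metadata:
      annotations:
        sidecar.istio.io/inject: "false"   # skip sidecar injection for the collector pod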
That did not fix anything on our end. |
@jseiser sorry to hear that. My answer was intended for the original question rather than for your use case in EKS. However, the processor order needs to be correct in EKS too. Maybe you can set the log level to INFO and share it; that's how I figured out what was happening. |
We are still running into this issue. |
That fixed it for us. |
Component(s)
processor/k8sattributes
Describe the issue you're reporting
Hello everyone,
I am trying to use the k8sattributes processor on a Rancher RKE cluster, but the debug log shows all fields empty:
K8s environment:
The OTel Collector pod is created by the OpenTelemetry Operator with the config below:
Collector.yaml
services_account.yaml
I tried to change pod_association, but the result is the same.
The same config works as expected in a native k8s cluster (without Rancher).
The debug log does not show much information that can be used to debug the k8sattributes processor.
Pod log:
Is my config correct?
Is there any option to get more logs from the k8sattributes processor for debugging?
Has anyone successfully used k8sattributes in Rancher RKE?