Help configuring Distributed Tracing using env var #8535
Comments
Hello Vincent, thank you for your help!! I just edited the ConfigMap, but the key looks suspicious.
But the result looks the same. I don't see any references to it.
@alvarolop yeah, the field needs to not be nested under that key. It should look a bit like the following.

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-tracing
  namespace: tekton-pipelines
  labels:
    app.kubernetes.io/instance: default
    app.kubernetes.io/part-of: tekton-pipelines
data:
  enabled: "true"
  credentialsSecret: "jaeger-creds"
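If you also need to point the controller at a specific collector, the endpoint key sits at the same level under data; a sketch, with the URL below only a placeholder for wherever your collector listens:

data:
  enabled: "true"
  # Placeholder collector URL; replace with your own service address
  endpoint: "http://jaeger-collector.jaeger.svc.cluster.local:14268/api/traces"
  credentialsSecret: "jaeger-creds"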
Hahaha ok, it was a nice challenge, but it was not clear to me how to configure it :) So, now I see in the Tekton controller logs that it is using my OTEL endpoint, but it complains with the following, depending on which config I set:
Do you have any example of this configuration?
The host is parsed and the various parts are passed to pipeline/pkg/tracing/tracing.go, lines 156 to 180 in cef86d1.
Do you see the request reaching OTEL on your side? Any log there that may help?
Oh, and does this look like the OTLP protocol? I saw the port, which looks like Thrift HTTP, so that is why I configured the OTEL collector like this:

receivers:
  # Enable Thrift receivers for Tekton traces
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_binary:
        endpoint: 0.0.0.0:6832
      thrift_compact:
        endpoint: 0.0.0.0:6831
      thrift_http:
        endpoint: 0.0.0.0:14268

Do you know if it is possible to use OTLP? Or is that not yet supported, as stated here: #7175? Anyway, no, I don't see any traces reaching the OTEL collector.
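If OTLP were an option, the receiver side on my collector would just be the standard OTLP block (a sketch using the usual default ports; adjust to your deployment):

receivers:
  # Standard OTLP receiver; 4317 is the default gRPC port, 4318 the default HTTP port
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318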
The work on enabling a gRPC endpoint was started in #7721 but unfortunately never finished. Would that have helped?
For us, it would simplify the OpenTelemetry collector configuration, as the rest of our applications send traces in OTLP format. Fewer dependencies, less configuration, and we definitely think that OTLP is the way forward :) Also, this is my first app sending traces in Jaeger format, so maybe the collector config is not correct and that's why it is failing.
Could you confirm that the protocol is
Just an update: I used the OTLP HTTP port and sent it to the OTLP HTTP receiver in OpenTelemetry, and it worked. Thank you all for your help. I was expecting something else (like clearly seeing all the traces and spans), but it works :) Is it possible to see traces and spans clearly without modifying the actual pipelines? Do you know what I mean?
The reconciler code is instrumented so that you can see all function invocations in each reconciliation cycle. Unfortunately, we lost our contributor who was implementing tracing in Tekton, so implementation stopped at the level you can see today. The initial plan included more features that are not yet implemented (or fully designed):
If you're interested in contributing or know someone who might be, please let me know; I'd be happy to mentor new contributors and help make this happen.
Wow, that would be amazing to see!! Sadly, I don't have enough knowledge to contribute to this.
Expected Behavior
I'm deploying Tekton on OpenShift using the operator and I'm trying to configure distributed tracing for Tasks and Pipelines as specified in https://github.com/tektoncd/community/blob/main/teps/0124-distributed-tracing-for-tasks-and-pipelines.md
What I was expecting is that, after configuring the OTEL_EXPORTER_JAEGER_ENDPOINT parameter for a pipeline, the traces and spans of that pipeline would automatically be sent to my OpenTelemetry Collector, which was also deployed using the Operator.
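For reference, a minimal sketch of how I understood that variable could be set, assuming it is read by the tekton-pipelines-controller Deployment (the collector URL is only a placeholder for my own service):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tekton-pipelines-controller
  namespace: tekton-pipelines
spec:
  template:
    spec:
      containers:
        - name: tekton-pipelines-controller
          env:
            # Assumption: the controller reads this variable; the URL is a placeholder
            - name: OTEL_EXPORTER_JAEGER_ENDPOINT
              value: "http://otel-collector.observability.svc:14268/api/traces"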
Also, I'm not finding the actual documentation about this feature, which I think is pretty cool! 😄
Actual Behavior
What I see in the logs of the pipelines controller and webhook are some references to Knative tracing, but I don't think this is really doing anything.
Steps to Reproduce the Problem
Additional Info
Kubernetes version:
Output of kubectl version:
Tekton Pipeline version:
Output of tkn version or kubectl get pods -n tekton-pipelines -l app=tekton-pipelines-controller -o=jsonpath='{.items[0].metadata.labels.version}':