Configure application instrumentation
After the Observe Agent is up and running, follow these steps to direct your application’s trace data to Observe:
Note
To view all available Agent Chart versions, run: helm search repo observe --versions | grep observe/agent.
If you’re currently using Agent Chart versions 0.38 through 0.40, please upgrade to version 0.41 or later.
- Install the OpenTelemetry App if you have not already done so. 
- Create a token associated with the OpenTelemetry App’s datastream.
- Add TRACE_TOKEN to the existing agent-credentials secret. Replace <YOUR ANOTHER OBSERVE TOKEN> with your instance’s token:
kubectl get secret agent-credentials -n observe -o json | jq --arg tracetoken "$(echo -n <YOUR ANOTHER OBSERVE TOKEN> | base64)" '.data["TRACE_TOKEN"]=$tracetoken' | kubectl apply -f -
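To confirm the secret now contains the new key, you can decode it back out of the cluster (an optional sanity check):
kubectl get secret agent-credentials -n observe -o jsonpath='{.data.TRACE_TOKEN}' | base64 --decode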
- Restart the pods. 
kubectl rollout restart deployment -n observe
kubectl rollout restart daemonset -n observe
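After the restart, you can confirm that the Agent pods come back up cleanly (optional check):
# All pods in the observe namespace should return to the Running state
kubectl get pods -n observe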
- Once the Observe Agent is up and running on your Kubernetes cluster, you can configure applications running in the same cluster to send telemetry data to the Agent using one of the following addresses:
Note
When setting up the endpoint to send traces, make sure you use the path that your OTLP library requires. Some libraries need traces to go to /v1/traces, while others expect them at the root path /.
- OTLP/HTTP endpoint: http://observe-agent-forwarder.observe.svc.cluster.local:4318
- OTLP/gRPC endpoint: http://observe-agent-forwarder.observe.svc.cluster.local:4317
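If you are instrumenting your own application with an OpenTelemetry SDK, the standard OTEL_EXPORTER_OTLP_ENDPOINT environment variable is usually enough to point its exporter at the Agent. The sketch below shows this for a hypothetical my-app Deployment; the name, labels, and image are placeholders, and most OTLP exporters read this variable automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                             # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:latest   # placeholder image
        env:
        # Standard OpenTelemetry SDK variable; OTLP exporters pick it up automatically
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://observe-agent-forwarder.observe.svc.cluster.local:4318"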
For example, if you are using the OpenTelemetry Astronomy Shop Demo app, create the following app.yaml:
default:
  envOverrides:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: 'http://observe-agent-forwarder.observe.svc.cluster.local:4318'
Then upgrade the Helm release for the OpenTelemetry Astronomy Shop Demo app:
helm upgrade --reuse-values -f app.yaml my-otel-demo open-telemetry/opentelemetry-demo
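If you want to verify connectivity before instrumenting a real workload, you can port-forward the forwarder service and send an empty OTLP/HTTP request (a quick sanity check; an empty body is a valid request that records no spans):
kubectl port-forward -n observe svc/observe-agent-forwarder 4318:4318
# In a second terminal:
curl -sS -X POST http://localhost:4318/v1/traces -H 'Content-Type: application/json' -d '{}'
# An empty JSON object in the response (for example {"partialSuccess":{}}) indicates the receiver accepted the request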
- Explore your trace data using the Trace Explorer and the Service Explorer.
- Install the OpenTelemetry App if you have not already done so. 
- Create a token associated with the OpenTelemetry App’s datastream.
- Create the trace-values.yaml file with the following config. Replace <YOUR ANOTHER OBSERVE TOKEN> (your instance’s token) and <YOUR OBSERVE COLLECTION ENDPOINT> (your instance’s collection endpoint) in the config below.
agent:
  config:
    nodeLogsMetrics:
      # This config specifies both a new receiver and exporter and sets up a new traces pipeline with them
      receivers:
        otlp/app-telemetry:
          protocols:
            grpc:
              endpoint: ${env:MY_POD_IP}:4317
            http:
              endpoint: ${env:MY_POD_IP}:4318
      processors:
        attributes/debug_source_app_traces:
          actions:
          - action: insert
            key: debug_source
            value: app_traces
        attributes/debug_source_app_logs:
          actions:
          - action: insert
            key: debug_source
            value: app_logs
        attributes/debug_source_app_metrics:
          actions:
          - action: insert
            key: debug_source
            value: app_metrics
      exporters:
        otlphttp/observe-traces:
          # (ex: https://123456789012.collect.observeinc.com/v2/otel)
          endpoint: "<YOUR OBSERVE COLLECTION ENDPOINT>/v2/otel"
          headers:
            # (ex: Bearer a1b2c3d4e5f6g7h8i9k0:l1m2n3o4p5q6r7s8t9u0v1w2x3y4z5a6)
            authorization: "Bearer <YOUR ANOTHER OBSERVE TOKEN>"
      service:
        pipelines:
          traces/observe-forward:
            receivers: [otlp/app-telemetry]
            processors: [memory_limiter, k8sattributes, batch, resourcedetection/cloud, resource/observe_common, attributes/debug_source_app_traces]
            exporters: [otlphttp/observe-traces]
          logs/observe-forward:
            receivers: [otlp/app-telemetry]
            processors: [memory_limiter, k8sattributes, batch, resourcedetection/cloud, resource/observe_common, attributes/debug_source_app_logs]
            exporters: [otlphttp/observe/base]
          metrics/observe-forward:
            receivers: [otlp/app-telemetry]
            processors: [memory_limiter, k8sattributes, batch, resourcedetection/cloud, resource/observe_common, attributes/debug_source_app_metrics]
            exporters: [prometheusremotewrite]
node-logs-metrics:
  service:
    enabled: true
    type: ClusterIP
  networkPolicy:
    enabled: true
- Redeploy the Observe Agent with the updated config. 
helm upgrade --reuse-values observe-agent observe/agent -n observe --values trace-values.yaml --version 0.37.0
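To confirm that the overrides from trace-values.yaml were merged into the release, you can inspect the user-supplied values (optional):
helm get values observe-agent -n observe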
- Restart the pods. 
kubectl rollout restart deployment -n observe
kubectl rollout restart daemonset -n observe
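Before pointing applications at the Agent, confirm that the new ClusterIP service created by the node-logs-metrics override exists; its name matches the endpoints used in the next step:
kubectl get svc observe-agent-node-logs-metrics -n observe
# The listed ports should include the OTLP ports 4317 and 4318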
- Once the Observe Agent is up and running on your Kubernetes cluster, you can configure applications running in the same cluster to send telemetry data to the Agent using one of the following addresses:
Note
When setting up the endpoint to send traces, make sure you use the path that your OTLP library requires. Some libraries need traces to go to /v1/traces, while others expect them at the root path /.
- OTLP/HTTP endpoint: http://observe-agent-node-logs-metrics.observe.svc.cluster.local:4318
- OTLP/gRPC endpoint: http://observe-agent-node-logs-metrics.observe.svc.cluster.local:4317
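For an application instrumented with an OpenTelemetry SDK, the standard environment variables are usually the simplest way to target either endpoint. The sketch below uses the gRPC endpoint for a hypothetical my-app Deployment; the name, labels, and image are placeholders, and OTEL_EXPORTER_OTLP_PROTOCOL=grpc covers SDKs whose exporters default to OTLP/HTTP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                             # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-registry/my-app:latest   # placeholder image
        env:
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: "http://observe-agent-node-logs-metrics.observe.svc.cluster.local:4317"
        # Standard OpenTelemetry variable; ensures the SDK exports over gRPC rather than OTLP/HTTP
        - name: OTEL_EXPORTER_OTLP_PROTOCOL
          value: "grpc"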
For example, if you are using the OpenTelemetry Astronomy Shop Demo app, create the following app.yaml:
default:
  envOverrides:
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    value: 'http://observe-agent-node-logs-metrics.observe.svc.cluster.local:4318'
Then upgrade the Helm release for the OpenTelemetry Astronomy Shop Demo app:
helm upgrade --reuse-values -f app.yaml my-otel-demo open-telemetry/opentelemetry-demo
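To confirm the override reached the demo workloads, you can grep the rendered Deployments for the endpoint variable (an optional check, run against the namespace where the demo is installed):
kubectl get deployments -o yaml | grep -A 1 OTEL_EXPORTER_OTLP_ENDPOINT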
- Explore your trace data using the Trace Explorer and the Service Explorer.