Send data from an existing OpenTelemetry Collector¶
Note
Observe distributes an Agent that wraps the OpenTelemetry Collector and is configured out of the box to work with Observe Kubernetes and APM use cases. These steps are not required when using the Agent.
Observe provides an OTLP endpoint which can receive OpenTelemetry data over HTTP/protobuf.
Configure the Collector to export to Observe’s OTLP endpoint¶
You’ll first need to create a datastream and authentication token in Observe.
Configure an OTLP HTTP exporter with the token in the authorization header:
exporters:
  ...
  otlphttp:
    # (ex: https://123456789012.collect.observeinc.com/v2/otel)
    endpoint: "${OBSERVE_COLLECTION_ENDPOINT}/v2/otel"
    headers:
      # (ex: Bearer a1b2c3d4e5f6g7h8i9k0:l1m2n3o4p5q6r7s8t9u0v1w2x3y4z5a6)
      authorization: "Bearer ${OBSERVE_TOKEN}"
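The ${OBSERVE_COLLECTION_ENDPOINT} and ${OBSERVE_TOKEN} references are resolved by the Collector's environment variable substitution at startup. If you run the Collector in Kubernetes, one way to supply them is through the container's env section. The sketch below is illustrative only; the Secret name observe-credentials and its key are assumptions:
# Fragment of a Collector Deployment/DaemonSet pod spec (names and image tag are illustrative)
containers:
  - name: otel-collector
    image: otel/opentelemetry-collector-contrib:latest
    env:
      - name: OBSERVE_COLLECTION_ENDPOINT
        value: "https://123456789012.collect.observeinc.com"
      - name: OBSERVE_TOKEN
        valueFrom:
          secretKeyRef:
            name: observe-credentials   # assumed Secret holding the datastream token
            key: token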
Finally, include the exporter in your pipeline for logs, metrics, and traces:
service:
  ...
  pipelines:
    logs:
      ...
      exporters: [otlphttp]
    metrics:
      ...
      exporters: [otlphttp]
    traces:
      ...
      exporters: [otlphttp]
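Put together, a minimal end-to-end configuration looks like the following. This is a sketch rather than a full production setup: it assumes an otlp receiver on the default ports and adds only a batch processor.
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

processors:
  batch:

exporters:
  otlphttp:
    endpoint: "${OBSERVE_COLLECTION_ENDPOINT}/v2/otel"
    headers:
      authorization: "Bearer ${OBSERVE_TOKEN}"

service:
  pipelines:
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]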
Enrich telemetry with Kubernetes metadata¶
To correlate APM and Kubernetes data, add the k8sattributesprocessor to your Collector configuration.
The specific configuration depends on how your OpenTelemetry Collectors are deployed. The example below covers the most common Kubernetes deployment pattern: the Collector running as an agent (for example, a sidecar or DaemonSet) that exports data to the observability backend.
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.namespace.name
        - k8s.deployment.name
        - k8s.replicaset.name
        - k8s.statefulset.name
        - k8s.daemonset.name
        - k8s.cronjob.name
        - k8s.job.name
        - k8s.node.name
        - k8s.node.uid
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.cluster.uid
        - k8s.container.name
        - container.id
    passthrough: false
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
      - sources:
          - from: resource_attribute
            name: k8s.pod.uid
      - sources:
          - from: connection
service:
  ...
  pipelines:
    logs:
      ...
      processors: [k8sattributes]
      ...
    metrics:
      ...
      processors: [k8sattributes]
      ...
    traces:
      ...
      processors: [k8sattributes]
      ...
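When it runs inside the cluster, the k8sattributes processor looks up pod metadata from the Kubernetes API, so the Collector's service account needs read access to those resources. A minimal RBAC sketch is shown below; the ClusterRole, ClusterRoleBinding, ServiceAccount, and namespace names are assumptions, and the resource list should match the metadata you extract:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: otel-collector-k8sattributes    # assumed name
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces", "nodes"]
    verbs: ["get", "watch", "list"]
  - apiGroups: ["apps"]
    resources: ["replicasets"]           # needed to resolve k8s.deployment.name
    verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: otel-collector-k8sattributes    # assumed name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: otel-collector-k8sattributes
subjects:
  - kind: ServiceAccount
    name: otel-collector                 # assumed service account used by the Collector pods
    namespace: default                   # adjust to your namespace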
Alternatively, the Collector can be configured to extract Kubernetes metadata from pod labels and annotations. For example, to extract the environment from a pod label:
In the deployment definition:
spec:
  template:
    metadata:
      labels:
        observeinc.com/env: prod
In the Collector configuration:
processors:
  k8sattributes:
    extract:
      labels:
        - tag_name: deployment.environment
          key: observeinc.com/env
          from: pod
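Annotations can be mapped the same way with an extract.annotations block. The sketch below assumes a hypothetical observeinc.com/service-version pod annotation:
processors:
  k8sattributes:
    extract:
      annotations:
        - tag_name: service.version              # resource attribute to set
          key: observeinc.com/service-version    # assumed annotation on the pod
          from: pod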
For more configuration options, please refer to the OpenTelemetry documentation on this processor.
Send histogram metrics¶
OpenTelemetry histogram metrics are not supported today.
Use the Prometheus Remote Write Exporter to convert your histogram metrics to the Prometheus format and export them to our Prometheus endpoint.
Example configuration:
exporters:
  ...
  prometheusremotewrite:
    # (ex: https://123456789012.collect.observeinc.com/v1/prometheus)
    endpoint: "${OBSERVE_COLLECTION_ENDPOINT}/v1/prometheus"
    headers:
      # (ex: Bearer a1b2c3d4e5f6g7h8i9k0:l1m2n3o4p5q6r7s8t9u0v1w2x3y4z5a6)
      authorization: "Bearer ${OBSERVE_TOKEN}"
    resource_to_telemetry_conversion:
      enabled: true # Convert resource attributes to metric labels
    send_metadata: true
...
service:
  ...
  pipelines:
    metrics:
      ...
      exporters: [prometheusremotewrite]
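If you want only histograms to take the Prometheus Remote Write path while everything else stays on OTLP, one option is to split the metrics pipeline with the filter processor. The sketch below assumes an otlp receiver and shows one possible routing; the pipeline and processor names are illustrative:
processors:
  filter/drop-histograms:
    metrics:
      metric:
        - 'type == METRIC_DATA_TYPE_HISTOGRAM'   # drop histograms from the OTLP pipeline
  filter/histograms-only:
    metrics:
      metric:
        - 'type != METRIC_DATA_TYPE_HISTOGRAM'   # drop everything except histograms

service:
  pipelines:
    metrics/otlp:
      receivers: [otlp]
      processors: [filter/drop-histograms]
      exporters: [otlphttp]
    metrics/histograms:
      receivers: [otlp]
      processors: [filter/histograms-only]
      exporters: [prometheusremotewrite]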
Sending data directly from application instrumentation to Observe¶
While it is possible for application instrumentation to bypass the OpenTelemetry Collector entirely and export telemetry data directly to Observe, we do not recommend this in production systems for several reasons:
- Any changes to data collection, processing, or ingestion require direct code modifications in your application, increasing development and operational effort.
- When telemetry data is exported directly from your application, the export work (e.g., batching, retries, and serialization) consumes CPU, memory, and network bandwidth. This can degrade application performance, especially under high load or bursts of telemetry data.
- Network failures or backend issues can result in lost telemetry data, because there is no Collector in the path to buffer and retry exports.
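Instead, point the application's OpenTelemetry SDK at a nearby Collector using the standard OTLP environment variables and let the Collector handle batching, retries, and export to Observe. A sketch for Kubernetes, assuming a node-local Collector (for example, a DaemonSet) listening on the default OTLP/HTTP port:
# Fragment of an application pod spec; the app name, image, and node-local Collector port are assumptions
containers:
  - name: my-app
    image: my-app:latest
    env:
      - name: NODE_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://$(NODE_IP):4318"   # OTLP/HTTP port of the Collector agent on the same node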