Configure your own OTel collector without Kubernetes

Learn how to configure your own OTel collector to send telemetry data to Observe from any environment.

This topic describes how you can send OpenTelemetry data (logs, metrics, and traces) to Observe from any environment, including CI pipelines, standalone applications, and serverless functions, without the Observe Agent.

Note

The Observe Agent is distributed by Observe and wraps the OpenTelemetry Collector. It is configured out-of-the-box to work with Observe Kubernetes and APM use cases. The instructions on this page are not required when using the Observe Agent.

Observe provides an OTLP endpoint that can receive OpenTelemetry data over HTTP/protobuf.

Create an ingest token

Before you begin, use the Add Data portal to create an ingest token.

  1. In Observe, select Data & integrations > Add data from the left navigation rail.
  2. Click the Linux card.
  3. Click the Install tab.
  4. Click Create in the Create a new ingest token section, then copy the token that is generated.

You don't need to complete the rest of the integration; you only need the token.

Configure the Collector to export to Observe's OTLP endpoint

Configure the OTLP/HTTP exporters as shown below. Replace <YOUR_INGEST_TOKEN> with your ingest token (for example, a1b2c3d4e5f6g7h8i9k0:l1m2n3o4p5q6r7s8t9u0v1w2x3y4z5a6) and <YOUR_OBSERVE_COLLECTION_ENDPOINT> with your instance's collection endpoint, without a trailing slash (for example, https://123456789012.collect.observeinc.com).

exporters:
  ...
  otlphttp/observelogs:
    # (ex: https://123456789012.collect.observeinc.com/v2/otel)
    endpoint: "<YOUR_OBSERVE_COLLECTION_ENDPOINT>/v2/otel"
    headers:
      # (ex: Bearer a1b2c3d4e5f6g7h8i9k0:l1m2n3o4p5q6r7s8t9u0v1w2x3y4z5a6)
      authorization: "Bearer <YOUR_INGEST_TOKEN>"
      x-observe-target-package: "Host Explorer"
    sending_queue:
      num_consumers: 4
      queue_size: 100
    retry_on_failure:
      enabled: true
    compression: zstd
  otlphttp/observemetrics:
    endpoint: "<YOUR_OBSERVE_COLLECTION_ENDPOINT>/v2/otel"
    headers:
      authorization: "Bearer <YOUR_INGEST_TOKEN>"
      x-observe-target-package: "Metrics"
    sending_queue:
      num_consumers: 4
      queue_size: 100
    retry_on_failure:
      enabled: true
    compression: zstd
  otlphttp/observetracing:
    endpoint: "<YOUR_OBSERVE_COLLECTION_ENDPOINT>/v2/otel"
    headers:
      authorization: "Bearer <YOUR_INGEST_TOKEN>"
      x-observe-target-package: "Tracing"
    sending_queue:
      num_consumers: 4
      queue_size: 100
    retry_on_failure:
      enabled: true
    compression: zstd

Finally, include the exporters in your logs, metrics, and traces pipelines:

service:
  ...
  pipelines:
    logs:
      ...
      exporters: [otlphttp/observelogs]
    metrics:
      ...
      exporters: [otlphttp/observemetrics]
    traces:
      ...
      exporters: [otlphttp/observetracing]
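
Put together, a minimal end-to-end Collector configuration for traces might look like the sketch below. The OTLP receiver and batch processor here are illustrative choices, not requirements; extend the same pattern with the logs and metrics exporters shown above:

```yaml
receivers:
  otlp:
    protocols:
      http:
        endpoint: 0.0.0.0:4318

processors:
  # Batching reduces the number of outbound requests to Observe.
  batch: {}

exporters:
  otlphttp/observetracing:
    endpoint: "<YOUR_OBSERVE_COLLECTION_ENDPOINT>/v2/otel"
    headers:
      authorization: "Bearer <YOUR_INGEST_TOKEN>"
      x-observe-target-package: "Tracing"
    compression: zstd

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp/observetracing]
```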

Enrich telemetry with Kubernetes metadata

To correlate APM and Kubernetes data, add the k8sattributes processor to your Collector configuration.

The specific configuration depends on how your OpenTelemetry Collectors are deployed. The example below covers the most common Kubernetes deployment pattern: the Collector running as an agent (for example, a sidecar or DaemonSet) that exports data to the observability backend.

processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.namespace.name
        - k8s.deployment.name
        - k8s.replicaset.name
        - k8s.statefulset.name
        - k8s.daemonset.name
        - k8s.cronjob.name
        - k8s.job.name
        - k8s.node.name
        - k8s.node.uid
        - k8s.pod.name
        - k8s.pod.uid
        - k8s.cluster.uid
        - k8s.container.name
        - container.id
    passthrough: false
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
      - sources:
          - from: resource_attribute
            name: k8s.pod.uid
      - sources:
          - from: connection
service:
  ...
  pipelines:
    logs:
      ...
      processors: [k8sattributes]
      ...
    metrics:
      ...
      processors: [k8sattributes]
      ...
    traces:
      ...
      processors: [k8sattributes]
      ...

Alternatively, the Collector can be configured to extract Kubernetes metadata from pod labels and annotations. For example, to extract the deployment environment from a pod label:

In the deployment definition:

spec:
  template:
    metadata:
      labels:
        observeinc.com/env: prod

In the Collector configuration:

processors:
  k8sattributes:
    extract:
      labels:
        - tag_name: deployment.environment
          key: observeinc.com/env
          from: pod

For more configuration options, refer to the OpenTelemetry documentation for the k8sattributes processor.

Send data directly from application instrumentation to Observe

While it is possible for application instrumentation to bypass the OpenTelemetry Collector entirely and export telemetry data directly to Observe, we do not recommend this in production systems for several reasons:

  • Any changes to data collection, processing, or ingestion require direct code modifications in your application, increasing development and operational effort.
  • When telemetry data is exported directly from your application, the export processes (e.g., batching, retries, and serialization) consume CPU, memory, and network bandwidth. This can degrade the performance of the application, especially under high load or when handling bursts of telemetry data.
  • Network failures or backend issues might result in lost telemetry data.
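
If you do export directly anyway (for example, from a short-lived CI job), the standard OTLP exporter environment variables let you point an SDK at Observe without hard-coding the endpoint in application code. A sketch, assuming an OpenTelemetry SDK that honors these variables:

```shell
# Standard OTLP exporter settings honored by most OpenTelemetry SDKs.
export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
export OTEL_EXPORTER_OTLP_ENDPOINT="<YOUR_OBSERVE_COLLECTION_ENDPOINT>/v2/otel"
# Note: some SDK versions require URL-encoding the space as Bearer%20<token>.
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <YOUR_INGEST_TOKEN>"
```

Because the export path then runs inside your application process, the caveats above about performance overhead and potential data loss still apply.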