Configuration
All user-provided configuration is in the observe-agent.yaml file. The agent processes this file when it starts and produces a final otel-collector configuration. The default configuration fragments for otel-collector are organized by connection type in the connections folder.
Enable or Disable Connections
These configuration fragments are tied to specific features that can be enabled or disabled. For example, the host_monitoring connection type has the fragments logs.yaml and metrics.yaml. Each of these is tied to a boolean field in the observe-agent.yaml file and is included or omitted based on the value there.
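For example, here is a sketch of the observe-agent.yaml fields that toggle the host_monitoring fragments. The values shown are illustrative; the field names follow the schema table later on this page.

```yaml
# Illustrative fragment of observe-agent.yaml
host_monitoring:
  enabled: true
  logs:
    enabled: true     # include the logs.yaml fragment
  metrics:
    host:
      enabled: true   # include host-level metrics
    process:
      enabled: false  # omit process-level metrics
```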
Note
Since there’s no guarantee that any given feature will be enabled or disabled, these fragments must be independent of each other and cannot reference anything defined in other fragments. Referencing configuration between fragments could produce a broken otel-collector configuration.
Overriding existing OTEL Collector Configuration
You can also override existing components that are defined in the default fragments. To do so, find the name of the component you want to override and redefine it in the otel_config_overrides section of observe-agent.yaml. This section overrides any prior definitions of components with the same name.
otel_config_overrides:
  exporters:
    debug:
      verbosity: detailed
      sampling_initial: 5
      sampling_thereafter: 200
  service:
    pipelines:
      # This will override the existing metrics/host_monitoring pipeline and output to stdout debug instead
      metrics/host_monitoring:
        receivers: [hostmetrics/host-monitoring-host]
        processors: [memory_limiter]
        exporters: [debug]
observe-agent.yaml Schema
Field | Default | Description
---|---|---
token | | Observe authentication token
observe_url | | Observe data collection endpoint
debug | false | Sets the agent log level to DEBUG
host_monitoring | | Specifies options for the Host Monitoring Connection
host_monitoring.enabled | true | Enables the Host Monitoring Connection
host_monitoring.logs | | Specifies options for the logs component within Host Monitoring
host_monitoring.logs.enabled | true | Enables the logs component within Host Monitoring
host_monitoring.logs.include | [] | Specifies file paths from which to pull host logs
host_monitoring.metrics | | Specifies options for the metrics component within Host Monitoring
host_monitoring.metrics.host | | Specifies options for host-level metrics within Host Monitoring
host_monitoring.metrics.host.enabled | true | Enables host-level metrics within Host Monitoring
host_monitoring.metrics.process | | Specifies options for process-level metrics within Host Monitoring
host_monitoring.metrics.process.enabled | false | Enables process-level metrics within Host Monitoring
otel_config_overrides | | Defines overrides to be added to the OTEL Collector configuration
Note
The observe_url value is composed as https://${OBSERVE_CUSTOMER_ID}.collect.${OBSERVE_INSTANCE}. For example, if you typically log in to https://123456789012.observeinc.com, your collection endpoint (${OBSERVE_COLLECTION_ENDPOINT}) is https://123456789012.collect.observeinc.com.
Note
Some Observe instances may optionally use a name instead of Customer ID; if this is the case for your instance, contact your Observe Data Engineer to discuss implementation. A stem name will work as is, but a DNS redirect name may require client configuration.
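Putting this together, a minimal observe-agent.yaml needs only the token and the collection endpoint. Both values below are placeholders; substitute your own token and customer ID.

```yaml
# Minimal observe-agent.yaml (placeholder values)
token: "<your-observe-token>"
observe_url: "https://123456789012.collect.observeinc.com"
```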
Adding custom OTEL Collector Configuration
The top-level observe-agent.yaml includes a section for providing additional OTEL collector configuration, otel_config_overrides. For example, to add a new exporter and a new pipeline that uses it, you could define both in this section as follows:
otel_config_overrides:
  exporters:
    debug:
      verbosity: detailed
      sampling_initial: 5
      sampling_thereafter: 200
  service:
    pipelines:
      metrics/debug:
        receivers: [hostmetrics/host-monitoring-host]
        processors: [memory_limiter]
        exporters: [debug]
When the agent starts, it adds this section to the otel-collector configuration and loads it.
Tailing a log file
For example, this otel_config_overrides section tails a log file. On Linux:
# this should be added to the existing observe-agent.yaml
otel_config_overrides:
  receivers:
    filelog/custom-name:
      # Define directories or files to include here
      include: [/opt/ProductionApp/RiskLogs/**/*.log]
      include_file_path: true
      storage: file_storage
      retry_on_failure:
        enabled: true
      max_log_size: 4MiB
  service:
    pipelines:
      logs/custom-logs:
        receivers: [filelog/custom-name]
        processors: [memory_limiter, transform/truncate, resourcedetection, batch]
        exporters: [otlphttp/observe, count]
The equivalent on Windows, with backslash-escaped paths:
# this should be added to the existing observe-agent.yaml
otel_config_overrides:
  receivers:
    filelog/custom-name:
      # Define directories or files to include here
      include: [D:\\Production\ App\\Risk\ Logs\\**\\*.log]
      include_file_path: true
      storage: file_storage
      retry_on_failure:
        enabled: true
      max_log_size: 4MiB
  service:
    pipelines:
      logs/custom-logs:
        receivers: [filelog/custom-name]
        processors: [memory_limiter, transform/truncate, resourcedetection, batch]
        exporters: [otlphttp/observe, count]
Handling multiline log records
The multiline setting of the File Log Receiver can split log entries on a pattern other than newlines. Suppose you have the following log records and need to treat each multiline record as a single record, or one observation.
[2024-11-09T05:47:56.722293Z] INFO: User login successful. User ID: 12345
[2024-11-09T06:00:23.982100Z] WARNING: API request failed.
Error: ConnectionTimeout
Endpoint: /api/data/fetch
Retrying in 10 seconds...
[2024-11-09T06:01:13.742852Z] DEBUG: Starting backup process.
Directory: /data/backup
Estimated files: 543
[2024-11-09T06:02:45.123812Z] ERROR: Failed to load configuration file.
File path: /etc/app/config.yaml
Cause: FileNotFoundError
Stack trace:
File "/app/main.py", line 23, in load_config
config = open(config_path, 'r')
FileNotFoundError: [Errno 2] No such file or directory: '/etc/app/config.yaml'
The following line_start_pattern: \[\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{6}Z\] matches a timestamp format like [2024-11-09T05:47:56.722293Z].
# this should be added to the existing observe-agent.yaml
otel_config_overrides:
  receivers:
    filelog/host_monitoring:
      include:
        - /var/log/**/*.log
        - /var/log/syslog
      include_file_path: true
      storage: file_storage
      retry_on_failure:
        enabled: true
      max_log_size: 4MiB
      operators:
        - type: filter
          expr: 'body matches "otel-contrib"'
      multiline:
        line_start_pattern: \[\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{6}Z\]
Filtering metrics
The Filter Processor can be configured to filter metrics. Suppose you’d like to collect only metrics that meet one of the following conditions:
- starts with process. and ends with .time
- starts with system.cpu.
- ends with .utilization
If you’d like to filter out metrics matching the same conditions instead, replace include: with exclude:.
# this should be added to the existing observe-agent.yaml
otel_config_overrides:
  processors:
    filter/metrics:
      metrics:
        include:
          match_type: regexp
          metric_names:
            - process\..*\.time
            - system\.cpu\..*
            - .*\.utilization
  service:
    pipelines:
      metrics/forward:
        receivers: [otlp]
        processors: [resourcedetection, resourcedetection/cloud, filter/metrics]
        exporters: [otlphttp/observe]
      metrics/host_monitoring_host:
        receivers: [hostmetrics/host-monitoring-host]
        processors: [memory_limiter, resourcedetection, resourcedetection/cloud, batch, filter/metrics]
        exporters: [otlphttp/observe]
      metrics/agent-filestats:
        receivers: [filestats/agent]
        processors: [resourcedetection, resourcedetection/cloud, filter/metrics]
        exporters: [otlphttp/observe]
      metrics/agent-internal:
        receivers: [prometheus/agent, count]
        processors: [memory_limiter, transform/truncate, resourcedetection, resourcedetection/cloud, batch, filter/metrics]
        exporters: [otlphttp/observe]
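For reference, a sketch of the exclude form of the same processor, which drops rather than keeps the matched metrics. Only the processors section changes; the pipelines are unchanged.

```yaml
# exclude variant of filter/metrics: drops the matched metric names
processors:
  filter/metrics:
    metrics:
      exclude:
        match_type: regexp
        metric_names:
          - process\..*\.time
          - system\.cpu\..*
          - .*\.utilization
```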
If you are collecting process metrics, use the following configuration.
# this should be added to the existing observe-agent.yaml
otel_config_overrides:
  processors:
    filter/metrics:
      metrics:
        include:
          match_type: regexp
          metric_names:
            - process\..*\.time
            - system\.cpu\..*
            - .*\.utilization
  service:
    pipelines:
      metrics/forward:
        receivers: [otlp]
        processors: [resourcedetection, resourcedetection/cloud, filter/metrics]
        exporters: [otlphttp/observe]
      metrics/host_monitoring_host:
        receivers: [hostmetrics/host-monitoring-host]
        processors: [memory_limiter, resourcedetection, resourcedetection/cloud, batch, filter/metrics]
        exporters: [otlphttp/observe]
      metrics/host_monitoring_process:
        receivers: [hostmetrics/host-monitoring-process]
        processors: [memory_limiter, resourcedetection, resourcedetection/cloud, batch, filter/metrics]
        exporters: [otlphttp/observe]
      metrics/agent-filestats:
        receivers: [filestats/agent]
        processors: [resourcedetection, resourcedetection/cloud, filter/metrics]
        exporters: [otlphttp/observe]
      metrics/agent-internal:
        receivers: [prometheus/agent, count]
        processors: [memory_limiter, transform/truncate, resourcedetection, resourcedetection/cloud, batch, filter/metrics]
        exporters: [otlphttp/observe]
Receiving data from a Splunk Forwarder
To use the Observe Agent to receive data from Splunk forwarders, you need a Splunk Enterprise or Cloud instance alongside either a Splunk Universal Forwarder or a Splunk Heavy Forwarder routing data to your Splunk instance. The Observe Agent receives data from the forwarder over TCP port 9997, which requires the following configurations for the Observe Agent and the Splunk forwarder.
# this should be added to the existing observe-agent.yaml
otel_config_overrides:
  receivers:
    tcplog/s2s:
      add_attributes: true
      listen_address: 0.0.0.0:9997
      operators:
        - field: attributes.log_type
          type: add
          value: splunk_tcp
  service:
    pipelines:
      logs/forward:
        receivers: [tcplog/s2s]
        exporters: [otlphttp/observe]
On the forwarder, add the following stanza to outputs.conf, replacing hostname with the host running the Observe Agent:
[tcpout]
defaultGroup = observeAgent
[tcpout:observeAgent]
server = hostname:9997
compressed = false
useACK = false
sendCookedData = false
Sampling spans or log records
The Probabilistic Sampling Processor can be configured to sample spans or log records. For example, to sample 50% of spans:
# this should be added to the existing observe-agent.yaml
otel_config_overrides:
  processors:
    probabilistic_sampler:
      sampling_percentage: 50.0
  service:
    pipelines:
      traces/forward:
        receivers: [otlp]
        processors: [probabilistic_sampler]
        exporters: [debug]
To sample 15% of log records instead, apply the processor to the logs pipelines:
# this should be added to the existing observe-agent.yaml
otel_config_overrides:
  processors:
    probabilistic_sampler:
      sampling_percentage: 15
  service:
    pipelines:
      logs/forward:
        receivers: [otlp]
        processors: [resourcedetection, resourcedetection/cloud, probabilistic_sampler]
        exporters: [otlphttp/observe, count]
      logs/host_monitoring-file:
        receivers: [filelog/host_monitoring]
        processors: [memory_limiter, transform/truncate, resourcedetection, resourcedetection/cloud, batch, probabilistic_sampler]
        exporters: [otlphttp/observe, count]
      logs/host_monitoring-journald:
        receivers: [journald/host_monitoring]
        processors: [memory_limiter, transform/truncate, resourcedetection, resourcedetection/cloud, batch, probabilistic_sampler]
        exporters: [otlphttp/observe, count]
Collecting metrics from a MongoDB instance
The MongoDB Receiver can be configured to collect metrics from a MongoDB instance.
# this should be added to the existing observe-agent.yaml
otel_config_overrides:
  receivers:
    mongodb:
      hosts:
        - endpoint: localhost:27017
      username: otel
      password: ${env:MONGODB_PASSWORD}
      initial_delay: 1s
      tls:
        insecure: true
        insecure_skip_verify: true
  service:
    pipelines:
      metrics/mongo:
        receivers: [mongodb]
        processors: [memory_limiter, resourcedetection, resourcedetection/cloud, batch]
        exporters: [otlphttp/observe]
Collecting logs from a MongoDB instance
The File Log Receiver can be configured to collect logs from a MongoDB instance. Additionally, you can append metadata like host.name to the logs.
# this should be added to the existing observe-agent.yaml
otel_config_overrides:
  receivers:
    filelog/mongod:
      include:
        - /log/mongod.log
  processors:
    resource/mongod:
      attributes:
        - key: host.name
          value: "my_host"
          action: upsert
  service:
    pipelines:
      logs/new-pipeline:
        receivers: [filelog/mongod]
        processors: [memory_limiter, resource/mongod]
        exporters: [otlphttp/observe]