Elastic Beats¶
Elastic provides a collection of data forwarders, called Beats, built on a common underlying library. Data from Beats can be ingested into Observe using the Elastic endpoint.
Installation¶
Before you start, ensure you are running the Apache 2.0 (OSS) licensed version of the appropriate Beat. The corresponding binaries can be downloaded from the Elastic downloads page.
Note
Functionbeat does not have an Apache 2.0 licensed version. Use our Lambda function to collect AWS data instead.
Getting Started¶
To send data to Observe, configure an elasticsearch output in your configuration file:
```yaml
setup.dashboards.enabled: false
setup.template.enabled: false
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["https://${OBSERVE_CUSTOMER:?OBSERVE_CUSTOMER not set}.collect.observeinc.com:443/v1/elastic"]
  headers:
    Authorization: "Bearer ${OBSERVE_TOKEN:?OBSERVE_TOKEN not set}"
  compression_level: 4
  slow_start: true
  allow_older_versions: true
```
Important
Observe accepts data over the Elastic endpoint, but does not run Elasticsearch software under the hood. As such, you must disable any configuration unrelated to raw data ingestion. This includes Kibana dashboards, template loading, pipelines, index lifecycle management, as well as any modules which use these features.
The provided snippet expects OBSERVE_CUSTOMER and OBSERVE_TOKEN values as environment variables. compression_level is not required, but Observe recommends setting it to reduce egress traffic. Observe also recommends setting slow_start to reduce the number of events in a batch if a request fails because the payload exceeds the maximum body size limit for the Observe API.
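For example, the two environment variables can be exported in the shell before starting a Beat. The values below are hypothetical placeholders, not real credentials:

```shell
# Export placeholder credentials before starting the Beat.
# Both values are hypothetical; substitute your own customer ID
# and ingest token.
export OBSERVE_CUSTOMER="123456789012"
export OBSERVE_TOKEN="some-datastream-token"
```

With these set, the ${OBSERVE_CUSTOMER:?...} and ${OBSERVE_TOKEN:?...} expressions in the configuration expand at startup; if either variable is missing, the Beat exits with the error message after the `:?`.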
This section contains examples of working configurations for different Beats. These are intended as starting points, and should be modified as needed.
Note
The examples below assume that OBSERVE_CUSTOMER and OBSERVE_TOKEN values are available as environment variables.
Filebeat¶
The following configuration reads data from a local file and sends it to Observe:
```yaml
name: docs-example

# Disable several unneeded features
setup.ilm.enabled: false
setup.dashboards.enabled: false
setup.template.enabled: false

# Send logs from file example.log
filebeat.inputs:
- type: log
  enabled: true
  max_bytes: 131072
  paths:
  - example.log

# Where to send the inputs defined above
output.elasticsearch:
  hosts: ["https://${OBSERVE_CUSTOMER:?OBSERVE_CUSTOMER not set}.collect.observeinc.com:443/v1/elastic"]
  headers:
    Authorization: "Bearer ${OBSERVE_TOKEN:?OBSERVE_TOKEN not set}"
  compression_level: 4
  allow_older_versions: true

# Add additional metadata
# host recommended for everyone, cloud and/or docker if using
processors:
- add_host_metadata: ~
# - add_cloud_metadata: ~
# - add_docker_metadata: ~
```
To use this example, save the above snippet as example.yaml and run:

```shell
$ filebeat -e -c example.yaml 2>&1 | tee -a example.log
...
INFO log/harvester.go:302 Harvester started for file: example.log
INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(elasticsearch(https://{OBSERVE_CUSTOMER}.collect.observeinc.com:443/v1/elastic))
INFO [esclientleg] eslegclient/connection.go:314 Attempting to connect to Elasticsearch version 7.0.0
INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(https://{OBSERVE_CUSTOMER}.collect.observeinc.com:443/v1/elastic)) established
```
The -e flag directs Filebeat to log to stderr, which the command above pipes into example.log. This file therefore contains the Filebeat logs and is also the file monitored by Filebeat, so Filebeat ships its own logs to Observe.
Observe recommends using the max_bytes option to cap the maximum size of a log line sent to Observe. While Observe accepts a maximum log line size of 4MB, we suggest a more conservative limit of 128KB for most use cases.
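The 131072 used for max_bytes in the Filebeat example above is simply the suggested 128KB limit expressed in bytes:

```shell
# 128 KB expressed in bytes, matching the max_bytes value above
echo $((128 * 1024))
# prints 131072
```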
Metricbeat¶
The following configuration sends CPU data to Observe:
```yaml
name: docs-example

setup.dashboards.enabled: false
setup.template.enabled: false
setup.ilm.enabled: false

metricbeat.modules:
- module: system
  metricsets: [cpu]
  cpu.metrics: [percentages, normalized_percentages, ticks]

output.elasticsearch:
  hosts: ["https://${OBSERVE_CUSTOMER:?OBSERVE_CUSTOMER not set}.collect.observeinc.com/v1/elastic"]
  headers:
    Authorization: "Bearer ${OBSERVE_TOKEN:?OBSERVE_TOKEN not set}"
  compression_level: 4
  slow_start: true
  allow_older_versions: true

# Add additional metadata
# host recommended for everyone, cloud and/or docker if using
processors:
- add_host_metadata: ~
# - add_cloud_metadata: ~
# - add_docker_metadata: ~
```
To use this example, save the above snippet as example.yaml and run the following:

```shell
$ metricbeat -e -c example.yaml
...
INFO [publisher] pipeline/module.go:113 Beat name: docs-example
INFO instance/beat.go:468 metricbeat start running.
INFO [monitoring] log/log.go:117 Starting metrics logging every 30s
INFO [publisher_pipeline_output] pipeline/output.go:143 Connecting to backoff(elasticsearch(https://{OBSERVE_CUSTOMER}.collect.observeinc.com/v1/elastic))
INFO [publisher] pipeline/retry.go:219 retryer: send unwait signal to consumer
INFO [publisher] pipeline/retry.go:223 done
INFO [esclientleg] eslegclient/connection.go:314 Attempting to connect to Elasticsearch version 7.0.0
INFO [publisher_pipeline_output] pipeline/output.go:151 Connection to backoff(elasticsearch(https://{OBSERVE_CUSTOMER}.collect.observeinc.com/v1/elastic)) established
```
How do Beats handle failures?¶
Each Beat has a retry mechanism. For example, Filebeat uses a max_retries setting. See the Beats documentation for more information.
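As a sketch, retry behavior for the elasticsearch output can be tuned alongside the settings shown earlier. The values below are illustrative defaults, not recommendations; consult the Beats documentation for your version before relying on them:

```yaml
output.elasticsearch:
  # Number of times to retry publishing a batch before dropping its events
  max_retries: 3
  # Initial and maximum wait between reconnection attempts after a failure
  backoff.init: 1s
  backoff.max: 60s
```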