Elastic Beats

Elastic has a collection of data forwarders, known as Beats, built on a common underlying library. Data from Beats can be ingested into Observe using the Elastic endpoint.

Installation

Before you start, ensure you are running the Apache 2.0 OSS-licensed version of the appropriate Beat.

The corresponding OSS binaries can be downloaded from Elastic's download pages for each Beat.
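
As an illustration, a Linux installation of OSS Filebeat might fetch and unpack a tarball along these lines; the exact URL depends on the Beat, version, and platform, so check Elastic's downloads site for the current one:

$ curl -LO "https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-oss-<version>-linux-x86_64.tar.gz"
$ tar xzf "filebeat-oss-<version>-linux-x86_64.tar.gz"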

Note

Functionbeat does not have an Apache 2.0-licensed version. Use our AWS Lambda function to collect AWS data instead.

Getting Started

To send data to Observe, configure an elasticsearch output in your configuration file:

setup.dashboards.enabled: false
setup.template.enabled: false
setup.ilm.enabled: false

output.elasticsearch:
  hosts: ["https://collect.observeinc.com/v1/elastic"]
  username: ${OBSERVE_CUSTOMER:?OBSERVE_CUSTOMER not set}
  password: ${OBSERVE_TOKEN:?OBSERVE_TOKEN not set}
  compression_level: 4
  slow_start: true

Important

Observe accepts data over the Elastic endpoint, but does not run Elasticsearch software under the hood. As such, you must disable any configuration unrelated to raw data ingestion. This includes Kibana dashboards, template loading, pipelines, index lifecycle management, and any modules that use the aforementioned features.

The above snippet expects the OBSERVE_CUSTOMER and OBSERVE_TOKEN values to be provided as environment variables. compression_level is not required, but we recommend setting it to reduce egress traffic. We also recommend setting slow_start so that, if a request fails because the payload exceeds the maximum body size limit for our API, the Beat reduces the number of events per batch.
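
For example, in a shell you might export both variables before starting a Beat; the values shown are placeholders for your Observe customer ID and ingest token:

$ export OBSERVE_CUSTOMER="<your customer id>"
$ export OBSERVE_TOKEN="<your ingest token>"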

This section contains examples of working configurations for different Beats. These are intended as starting points, and should be modified as needed.

Note

The examples below assume that OBSERVE_CUSTOMER and OBSERVE_TOKEN values are available as environment variables.

Filebeat

The following configuration reads data from a local file and sends it to Observe:

name: docs-example

# Disable several unneeded features
setup.ilm.enabled: false
setup.dashboards.enabled: false
setup.template.enabled: false

# Send logs from file example.log
filebeat.inputs:
- type: log
  enabled: true
  max_bytes: 131072
  paths:
    - example.log

# Where to send the inputs defined above
output.elasticsearch:
  hosts: ["https://collect.observeinc.com:443/v1/elastic"]
  username: ${OBSERVE_CUSTOMER:?OBSERVE_CUSTOMER not set}
  password: ${OBSERVE_TOKEN:?OBSERVE_TOKEN not set}
  compression_level: 4

# Add additional metadata
# host recommended for everyone, cloud and/or docker if using
processors:
  - add_host_metadata: ~
#  - add_cloud_metadata: ~
#  - add_docker_metadata: ~

To use this example, save the above snippet as example.yaml and run:

$ filebeat -e -c example.yaml 2>&1 | tee -a example.log
...
INFO    log/harvester.go:302    Harvester started for file: example.log
INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(elasticsearch(https://collect.observeinc.com:443/v1/elastic))
INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.0.0
INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(elasticsearch(https://collect.observeinc.com:443/v1/elastic)) established

The -e flag directs Filebeat to log to stderr. The command pipes that output to example.log, so Filebeat ends up monitoring a file containing its own logs.

We recommend using the max_bytes option to cap the size of any single log line sent to Observe. While Observe accepts log lines up to 1MB, we suggest a more conservative limit of 128KB for most use cases.
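
If you want to verify the setup before shipping data, Filebeat's test subcommands can check the configuration file and attempt a connection to the configured output:

$ filebeat test config -c example.yaml
$ filebeat test output -c example.yaml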

Metricbeat

The following configuration sends CPU data to Observe:

name: docs-example

setup.dashboards.enabled: false
setup.template.enabled: false
setup.ilm.enabled: false

metricbeat.modules:
- module: system
  metricsets: [cpu]
  cpu.metrics: [percentages, normalized_percentages, ticks]

output.elasticsearch:
  hosts: ["https://collect.observeinc.com:443/v1/elastic"]
  username: ${OBSERVE_CUSTOMER:?OBSERVE_CUSTOMER not set}
  password: ${OBSERVE_TOKEN:?OBSERVE_TOKEN not set}
  compression_level: 4
  slow_start: true

To use this example, save the above snippet as example.yaml and run:

$ metricbeat -e -c example.yaml
...
INFO    [publisher]     pipeline/module.go:113  Beat name: docs-example
INFO    instance/beat.go:468    metricbeat start running.
INFO    [monitoring]    log/log.go:117  Starting metrics logging every 30s
INFO    [publisher_pipeline_output]     pipeline/output.go:143  Connecting to backoff(elasticsearch(https://collect.observeinc.com/v1/elastic))
INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
INFO    [publisher]     pipeline/retry.go:223     done
INFO    [esclientleg]   eslegclient/connection.go:314   Attempting to connect to Elasticsearch version 7.0.0
INFO    [publisher_pipeline_output]     pipeline/output.go:151  Connection to backoff(elasticsearch(https://collect.observeinc.com/v1/elastic)) established
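
The system module is not limited to CPU metrics. As a sketch, you could broaden the metricsets list to also collect memory, network, and filesystem data; period controls how often samples are taken:

metricbeat.modules:
- module: system
  period: 10s
  metricsets: [cpu, memory, network, filesystem]
  cpu.metrics: [percentages, normalized_percentages, ticks]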

FAQ

How are failures handled?

Each Beat has its own retry mechanism; for example, the Elasticsearch output supports a max_retries setting. See the Beats documentation for more information.
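
Retry behavior is tuned on the Elasticsearch output itself. As a sketch, the standard output.elasticsearch retry settings (max_retries and the backoff.* options) can be made explicit; the values below are illustrative, not recommendations:

output.elasticsearch:
  hosts: ["https://collect.observeinc.com/v1/elastic"]
  username: ${OBSERVE_CUSTOMER:?OBSERVE_CUSTOMER not set}
  password: ${OBSERVE_TOKEN:?OBSERVE_TOKEN not set}
  # Number of times to retry publishing a failed batch before dropping it
  max_retries: 3
  # Exponential backoff between reconnection attempts after a network error
  backoff.init: 1s
  backoff.max: 60s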