The Observe AWS Integration streamlines the process of collecting data from AWS. Install it once and ingest logs and metrics from several common AWS services. Configure ingest for additional services by sending that data to forwarders that are already set up for you.

The AWS Integration works with the datasets in your workspace. Contact us for assistance creating datasets and modeling the relationships between them. We can automate many common data modeling tasks for you, ensuring an accurate picture of your infrastructure.

If you are already ingesting AWS data, we are happy to discuss whether the AWS Integration could enhance your existing data collection strategy.

What data does it ingest?

Standard ingest sources

The AWS Integration automatically ingests the following types of data from a single region:

To do this, it creates several forwarding paths:

These forwarders work in a single region, as many AWS services are specific to a particular region. For information about multi-region collection, see How do I collect data from multiple regions? in the FAQ.

Additional ingest sources

Once these are configured and working, add more services by configuring them to write to the bucket or to send logs to one of the forwarders. Details for common services can be found in our documentation:

  • API Gateway execution and access logs from your REST API

  • AppSync request logs from your GraphQL API

  • CloudWatch logs from EC2, Route53, and other services

  • GuardDuty security findings for threat detection

  • S3 access logs for requests to S3 buckets
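For example, the pattern above can be applied to S3 server access logging with the AWS CLI. In this sketch, both bucket names are placeholders; substitute your own source bucket and the bucket created by the integration:

```shell
# Enable server access logging on a source bucket (placeholder name),
# targeting the bucket created by the AWS Integration (also a placeholder).
aws s3api put-bucket-logging \
  --bucket my-app-bucket \
  --bucket-logging-status '{
    "LoggingEnabled": {
      "TargetBucket": "observe-aws-integration-bucket-1a2b3c4d5e",
      "TargetPrefix": "s3-access-logs/"
    }
  }'
```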

Using AWS Integration data

After shaping, the incoming data populates datasets like these:

  • CloudWatch Log Group - Application errors, log event detail

  • IAM

    • IAM Group - Which groups are accessing resources

    • IAM Policy - Policies in use, their descriptions and contents

    • IAM Role - Compare role permissions over time

    • IAM User - Most active users

  • EC2

    • EC2 EBS Volume - Volumes in use, size, usage and performance metrics

    • EC2 Instance - What instances are in which VPCs, instance type, IP address

    • EC2 Network Interface - Associated instance, type, DNS name

    • EC2 Subnet - CIDR block, number of addresses available

    • EC2 VPC - Account and region, if default

  • Account - View resources by account

  • Lambda Function - Active functions, associated Log Group, invocation metrics

  • S3 Bucket - Buckets by account and region



Installation

Use our CloudFormation template to automate installing the AWS Integration. To install via the AWS Console:

  1. Navigate to the CloudFormation console and view existing stacks.

  2. Click Create stack. If prompted, select With new resources.

  3. Provide the template details:

    1. Under Specify template, select Amazon S3 URL.

    2. In the Amazon S3 URL field, enter

    3. Click Next to continue. (You may be prompted to view the function in Designer. Click Next again to skip.)

  4. Specify the stack details:

    1. In Stack name, provide a name for this stack. It must be unique within a region, and is used to name created resources.

    2. Under Required Parameters, provide your Customer ID in ObserveCustomer and ingest token in ObserveToken.

    3. Click Next.

  5. Under Configure stack options, there are no required options to configure. Click Next to continue.

  6. Review your stack options:

    1. Under Capabilities, check the box to acknowledge that this stack may create IAM resources.

    2. Click Create stack.


Alternatively, you can deploy the CloudFormation template using the awscli utility:


If you have multiple AWS profiles, make sure you configure the appropriate AWS_REGION and AWS_PROFILE environment variables in addition to OBSERVE_CUSTOMER and OBSERVE_TOKEN.

$ curl -Lo collection.yaml
$ aws cloudformation deploy --template-file ./collection.yaml \
	  --stack-name ObserveLambda \
	  --capabilities CAPABILITY_NAMED_IAM \
	  --parameter-overrides ObserveCustomer="${OBSERVE_CUSTOMER?}" ObserveToken="${OBSERVE_TOKEN?}"
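Once the deploy command returns, you can confirm the stack finished successfully. The stack name here matches the example above:

```shell
# Print the status of the stack; expect CREATE_COMPLETE on success
aws cloudformation describe-stacks \
  --stack-name ObserveLambda \
  --query 'Stacks[0].StackStatus' \
  --output text
```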

You may also use our Terraform module to install the AWS Integration and create the needed Kinesis Firehose delivery stream. The following is an example instantiation of this module:

module "observe_collection" {
  source           = ""
  observe_customer = "${OBSERVE_CUSTOMER}"
  observe_token    = "${OBSERVE_TOKEN}"
}

We recommend that you pin the module version to the latest tagged version.
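Assuming the module block above is saved in your Terraform configuration, a minimal apply workflow looks like this:

```shell
# Initialize providers and modules, then review and apply the plan
terraform init
terraform plan
terraform apply
```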


FAQ

Where are the integration’s forwarders located?

All resources are created in the region where you installed the AWS Integration, such as us-east-1. They are named based on the CloudFormation stack name or Terraform module name you provided.

For example, a CloudFormation stack called Observe-AWS-Integration would result in names like:

  • Lambda function Observe-AWS-Integration

  • S3 bucket observe-aws-integration-bucket-1a2b3c4d5e

  • Kinesis Firehose delivery stream Observe-AWS-Integration-Delivery-Stream-1a2b3c4d5e


To ensure the generated resources comply with AWS naming rules, your stack or module name should be at most 30 characters long and contain only:

  • Letters (A-Z and a-z)

  • Numbers (0-9)

  • Hyphens (-)
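These rules can be checked before creating the stack. The snippet below is a hypothetical pre-flight check, not part of the integration:

```shell
# Validate a proposed stack name against the naming rules above:
# at most 30 characters, using only letters, numbers, and hyphens.
name="Observe-AWS-Integration"
if [ "${#name}" -le 30 ] && printf '%s' "$name" | grep -Eq '^[A-Za-z0-9-]+$'; then
  echo "valid"
else
  echo "invalid"
fi
```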

How do I collect data from multiple regions?

The Observe AWS integration operates on a per-region basis because some sources, such as CloudWatch metrics, are specific to a single region. For multiple regions, we recommend installing the integration in each region. You may do this with a CloudFormation StackSet, or by tying the Terraform module into your existing manifests.
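As a sketch of the StackSet approach, assuming the template has been downloaded as collection.yaml (the stack set name, account ID, and regions below are illustrative):

```shell
# Create a stack set from the downloaded template, then deploy
# instances of it into several regions of one account.
aws cloudformation create-stack-set \
  --stack-set-name ObserveCollection \
  --template-body file://collection.yaml \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameters ParameterKey=ObserveCustomer,ParameterValue="${OBSERVE_CUSTOMER?}" \
               ParameterKey=ObserveToken,ParameterValue="${OBSERVE_TOKEN?}"

aws cloudformation create-stack-instances \
  --stack-set-name ObserveCollection \
  --accounts 123456789012 \
  --regions us-east-1 us-west-2
```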

What permissions are required?

The integration periodically queries the AWS API for information about certain services. To do this, its Lambda function has permissions to execute the following actions:

  • dynamodb:List*

  • dynamodb:Describe*

  • ec2:Describe*

  • ecs:List*

  • ecs:Describe*

  • elasticache:Describe*

  • elasticloadbalancing:Describe*

  • firehose:List*

  • firehose:Describe*

  • iam:Get*

  • iam:List*

  • lambda:List*

  • logs:Describe*

  • rds:Describe*

  • redshift:Describe*

  • route53:List*

  • s3:List*

You may change these permissions if needed. If the Lambda function does not have permission for a particular service, it will not collect that information.

The integration’s S3 bucket is subscribed to the Observe Lambda, with permissions that allow other AWS services, such as ELB access logging or VPC Flow Logs, to write to it.