Deploy to a serverless EKS Fargate cluster

Serverless Kubernetes services, such as AWS EKS Fargate, do not support daemonsets, so the observe-agent must be configured not to install any. To accomplish this, add the following to your values.yaml file:

node:
  # Disables the node-logs-metrics daemonset.
  # This workload is currently not supported in serverless kubernetes.
  enabled: false
  forwarder:
    enabled: true

forwarder:
  # Changes the forwarder from a daemonset to a deployment
  mode: deployment
  # Sets the number of replicas for the forwarder deployment.
  # This can be adjusted based on your needs.
  replicaCount: 2

After this, you can continue sending OTLP data to the forwarder using the same service URI as before.
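
If you need to look up that URI, you can list the services in the agent's namespace. The service name below is a placeholder; the OTLP ports shown are the standard defaults:

# Find the forwarder service created by the Helm chart
kubectl get svc -n observe

# In-cluster applications can then point their OTLP exporters at, for example:
#   http://[forwarder service].observe.svc.cluster.local:4317  (OTLP gRPC)
#   http://[forwarder service].observe.svc.cluster.local:4318  (OTLP HTTP)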

Collecting Pod Metrics and Logs on EKS Fargate

Prerequisites

Before deploying the Observe Agent on EKS Fargate, ensure that you do the following:

  1. Create a fargate profile associated with the observe namespace, where the Observe Agent will be installed:

    # fill in your cluster name and region
    eksctl create fargateprofile \
    --cluster demo-fargate-cluster \
    --name observe-profile \
    --namespace observe \
    --region us-east-2
  2. Install the opentelemetry-operator Helm Chart in the observe namespace.

    helm install opentelemetry-operator open-telemetry/opentelemetry-operator \
    --set "manager.collectorImage.repository=ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-k8s" \
    --set admissionWebhooks.certManager.enabled=false \
    --set admissionWebhooks.autoGenerateCert.enabled=true \
    --namespace observe
  3. Wait for the new pods to be running and ready. Run kubectl get pods -n observe; you should see a pod named opentelemetry-operator-[hash string] in the Running state.
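
If the helm install in step 2 fails because the chart cannot be found, you may need to add the OpenTelemetry Helm repository first. You can also confirm that the Fargate profile from step 1 exists before proceeding:

# Add the OpenTelemetry Helm chart repository (skip if already added)
helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
helm repo update

# Confirm the Fargate profile covering the observe namespace exists
eksctl get fargateprofile --cluster demo-fargate-cluster --region us-east-2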

Pod Metrics from EKS Fargate

To configure the Observe Agent to collect pod metrics from EKS Fargate, do the following:

  1. Add the following to your values.yaml file and install/upgrade the Helm Chart:

    nodeless:
      enabled: true
      hostingPlatform: fargate
      metrics:
        enabled: true

      # this is a map from namespaces to service accounts within that
      # namespace. It will apply the cluster role for that namespace
      # and serviceAccount that you would otherwise apply manually
      # in the next step.
      serviceAccounts:
        dev: ["devServiceAccount1", "devServiceAccount2"]
        production: ["productionServiceAccount1", "productionServiceAccount2"]
  2. (Only needed if you did not configure the serviceAccounts map above.) To grant permissions manually, create a cluster role that allows the sidecar to query the kubelet API, then apply it with kubectl apply -f cluster-role.yaml:

    # create a file: cluster-role.yaml
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: otel-sidecar-role
    rules:
      - apiGroups: [""]
        resources:
          - nodes
          - nodes/proxy
          - namespaces
          - pods
        verbs: ["get", "list", "watch"]
    
      - apiGroups: ["apps"]
        resources:
          - replicasets
        verbs: ["get", "list", "watch"]
    
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRoleBinding
    metadata:
      name: otel-sidecar-role-binding
    subjects:
    - kind: ServiceAccount
      name: [your service account]
      namespace: [namespace to monitor]
    roleRef:
      kind: ClusterRole
      name: otel-sidecar-role
      apiGroup: rbac.authorization.k8s.io
  3. Annotate the pods that you would like to monitor (see the Pod Annotation section below).
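
Once the pods are annotated and restarted, you can confirm that the operator injected the collector sidecar. The collector resource name below is taken from the annotation used in the Pod Annotation section; adjust it if yours differs:

# The sidecar collector definition the operator injects from
kubectl get opentelemetrycollector fargate-collector -n observe

# An annotated pod should now list an injected collector container
# (typically named otc-container) alongside your application container
kubectl get pod [your pod] -n [namespace to monitor] -o jsonpath='{.spec.containers[*].name}'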

EKS Fargate Container Logs

You can also collect container logs from EKS Fargate containers with the Observe Agent. To do so:

  1. Add the following to your values.yaml file and install/upgrade the Helm Chart:

    nodeless:
      enabled: true
      hostingPlatform: fargate
      logs:
        enabled: true
        containerNameFromFile: false # This parameter is for fargate specifically
        retryOnFailure:
          enabled: true
          initialInterval: 1s
          maxInterval: 30s
          maxElapsedTime: 5m
        include: '["/applogs/**/*.log", "/applogs/**/*.log.*"]'
        exclude: '["**/*.gz", "**/*.tmp"]'
        multiline:
        lookbackPeriod: 24h
        startAt: end
        autoMultilineDetection: false

      # this is a map from namespaces to service accounts within that namespace.
      # It will apply the cluster role for that namespace and serviceAccount that
      # you would otherwise apply manually in the next step.
      serviceAccounts:
        dev: ["devServiceAccount1", "devServiceAccount2"]
        production: ["productionServiceAccount1", "productionServiceAccount2"]
    The configuration options under logs include most of the options from the node-logs-metrics log collection config. containerNameFromFile is specific to EKS Fargate and controls how the container name is determined for the monitored container. When false, the name of the first container in the pod is used. When true, the name of the file that the application writes to is used; for example, /applogs/container1.log results in a container name of container1.
  2. As with the Metrics, you can grant serviceAccount permissions manually. Refer to Step 2 of the Metrics setup above.
  3. After deploying the agent, add the following to each container you would like to monitor:
     volumeMounts:
       - name: log-storage
         mountPath: /applogs
         readOnly: false
    And add the following to the pod spec:
     volumes:
     - name: log-storage
       emptyDir:
         sizeLimit: 50Mi

    Please note that if you skip these additions and try to monitor pod logs, the monitored pod will crash, because our injected sidecars depend on the volume existing. A combined example Deployment follows this list.
  4. Modify your application to write logs to /applogs/ as well as stdout and stderr. For example, in a Dockerfile, you can add the following:
     CMD ["sh", "-c", "instruction > >(tee -a /applogs/container.log > >&1) 2> >(tee -a /applogs/container.log >&2)"]
    Logs are automatically rotated by the logging-sidecar container that is injected into all monitored pods.
  5. Annotate the pods that you would like to monitor (see the Pod Annotation section below).
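
Putting steps 3-5 together, here is an illustrative sketch of a monitored Deployment. The deployment name, image, command, namespace, and service account are placeholders drawn from the examples above; replace them with your own:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # placeholder name
  namespace: dev                # a namespace listed under serviceAccounts above
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
      annotations:
        sidecar.opentelemetry.io/inject: "observe/fargate-collector"
    spec:
      serviceAccountName: devServiceAccount1
      containers:
        - name: my-app
          image: my-app:latest  # placeholder image
          # write logs to /applogs/ as well as stdout and stderr (see step 4)
          command: ["sh", "-c", "instruction > >(tee -a /applogs/container.log) 2> >(tee -a /applogs/container.log >&2)"]
          volumeMounts:
            - name: log-storage
              mountPath: /applogs
              readOnly: false
      volumes:
        - name: log-storage
          emptyDir:
            sizeLimit: 50Mi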

Pod Annotation

Add "sidecar.opentelemetry.io/inject": "observe/fargate-collector" as an annotation to all deployments whose pods you wish to monitor. To quickly do this for all deployments in a namespace, run the following:

for d in $(kubectl get deployments -n $TARGET_NAMESPACE -o name); do
  kubectl patch $d -n $TARGET_NAMESPACE --type='merge' -p '{"spec": {"template": {"metadata": {"annotations": {"sidecar.opentelemetry.io/inject": "observe/fargate-collector"}}}}}'
done

Patching the annotation forces a rolling restart of the pods in that namespace, which is necessary for the operator to inject a sidecar into the application pods. If a restart is not triggered, you can perform it manually by running the following:

kubectl -n [namespace with pods to monitor] rollout restart deploy
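
After the restart, you can spot-check that the sidecars were injected by listing each pod's container names; containers beyond your own application container indicate successful injection:

kubectl get pods -n [namespace with pods to monitor] \
  -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.containers[*].name}{"\n"}{end}'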