Deploy to a serverless EKS Fargate cluster
Serverless Kubernetes services, such as AWS EKS Fargate, may not support daemonsets, so the observe-agent must be configured not to install them. To accomplish this, add the following to your `values.yaml` file:
```yaml
node:
  # Disables the node-logs-metrics daemonset.
  # This workload is currently not supported in serverless kubernetes.
  enabled: false
  forwarder:
    enabled: true
forwarder:
  # Changes the forwarder from a daemonset to a deployment
  mode: deployment
  # Sets the number of replicas for the forwarder deployment.
  # This can be adjusted based on your needs.
  replicaCount: 2
```

After this, you can continue sending OTLP data to the forwarder with the same service URI. For example:
- `http://observe-agent-forwarder.observe.svc.cluster.local:4318` for OTLP/HTTP
- `http://observe-agent-forwarder.observe.svc.cluster.local:4317` for OTLP/gRPC
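If your workloads emit telemetry through an OpenTelemetry SDK, pointing them at the forwarder usually only requires setting the standard OTLP environment variables. The snippet below is a minimal sketch, not part of the agent configuration itself; the container name and image are hypothetical, and it assumes your SDK honors `OTEL_EXPORTER_OTLP_ENDPOINT` and `OTEL_EXPORTER_OTLP_PROTOCOL`:

```yaml
# Hypothetical container spec fragment: point an OpenTelemetry SDK at the forwarder service.
containers:
  - name: my-app            # assumption: your application container
    image: my-app:latest    # assumption: your application image
    env:
      # Send OTLP over HTTP to the forwarder's 4318 port.
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://observe-agent-forwarder.observe.svc.cluster.local:4318"
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: "http/protobuf"
```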
Collecting Pod Metrics and Logs on EKS Fargate
Prerequisites
Before deploying the Observe Agent on EKS Fargate, ensure that you do the following:
1. Create a Fargate profile associated with the `observe` namespace, where the Observe Agent will be installed:

   ```bash
   # fill in your cluster name and region
   eksctl create fargateprofile \
     --cluster demo-fargate-cluster \
     --name observe-profile \
     --namespace observe \
     --region us-east-2
   ```

2. Install the `opentelemetry-operator` Helm Chart in the `observe` namespace:

   ```bash
   helm install opentelemetry-operator open-telemetry/opentelemetry-operator \
     --set "manager.collectorImage.repository=ghcr.io/open-telemetry/opentelemetry-collector-releases/opentelemetry-collector-k8s" \
     --set admissionWebhooks.certManager.enabled=false \
     --set admissionWebhooks.autoGenerateCert.enabled=true \
     --namespace observe
   ```

3. Wait for the new pods to be running and ready. (Run `kubectl get pods -n observe`, and you should see a pod named `opentelemetry-operator-[hash string]`.)
Pod Metrics from EKS Fargate
To configure the Observe Agent to collect pod metrics from EKS Fargate, do the following:
1. Add the following to your `values.yaml` file and install/upgrade the Helm Chart:

   ```yaml
   nodeless:
     enabled: true
     hostingPlatform: fargate
     metrics:
       enabled: true
     # this is a map from namespaces to service accounts within that
     # namespace. It will apply the cluster role for that namespace
     # and serviceAccount that you would otherwise apply manually
     # in the next step.
     serviceAccounts:
       dev: ["devServiceAccount1", "devServiceAccount2"]
       production: ["productionServiceAccount1", "productionServiceAccount2"]
   ```
2. (Only needed if you did not map the service accounts in `values.yaml` above.) To grant permissions to the serviceAccounts manually, apply a cluster role that allows the sidecar to query the kubelet API, then apply the changes with `kubectl apply -f cluster-role.yaml`:

   ```yaml
   # create a file: cluster-role.yaml
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRole
   metadata:
     name: otel-sidecar-role
   rules:
     - apiGroups: [""]
       resources:
         - nodes
         - nodes/proxy
         - namespaces
         - pods
       verbs: ["get", "list", "watch"]
     - apiGroups: ["apps"]
       resources:
         - replicasets
       verbs: ["get", "list", "watch"]
   ---
   apiVersion: rbac.authorization.k8s.io/v1
   kind: ClusterRoleBinding
   metadata:
     name: otel-sidecar-role-binding
   subjects:
     - kind: ServiceAccount
       name: [your service account]
       namespace: [namespace to monitor]
   roleRef:
     kind: ClusterRole
     name: otel-sidecar-role
     apiGroup: rbac.authorization.k8s.io
   ```
3. Annotate the pods that you would like to monitor (see Pod Annotation below); a minimal deployment sketch follows this list.
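For reference, here is a minimal sketch of what the annotation looks like on a deployment's pod template. The deployment name, labels, and image are illustrative only; the annotation value and the example service account come from the configuration above and the Pod Annotation section below:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # example deployment name
  namespace: dev            # a namespace listed under serviceAccounts in values.yaml
spec:
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
      annotations:
        # Tells the opentelemetry-operator to inject the Fargate collector sidecar.
        sidecar.opentelemetry.io/inject: "observe/fargate-collector"
    spec:
      serviceAccountName: devServiceAccount1   # a service account granted the cluster role above
      containers:
        - name: demo-app
          image: demo-app:latest               # example image
```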
EKS Fargate Container Logs
You can also collect container logs from EKS Fargate containers with the Observe Agent. To do so:
1. Add the following to your `values.yaml` file and install/upgrade the Helm chart:

   ```yaml
   nodeless:
     enabled: true
     hostingPlatform: fargate
     logs:
       enabled: true
       containerNameFromFile: false # This parameter is for fargate specifically
       retryOnFailure:
         enabled: true
         initialInterval: 1s
         maxInterval: 30s
         maxElapsedTime: 5m
       include: '["/applogs/**/*.log", "/applogs/**/*.log.*"]'
       exclude: '["**/*.gz", "**/*.tmp"]'
       multiline:
         lookbackPeriod: 24h
       startAt: end
       autoMultilineDetection: false
     # this is a map from namespaces to service accounts within that namespace.
     # It will apply the cluster role for that namespace and serviceAccount that
     # you would otherwise apply manually in the next step.
     serviceAccounts:
       dev: ["devServiceAccount1", "devServiceAccount2"]
       production: ["productionServiceAccount1", "productionServiceAccount2"]
   ```

   There are some configuration options under `logs`. These include most of the config options from the `node-logs-metrics` log collection config. `containerNameFromFile` is specific to EKS Fargate and controls how the name of the monitored container is determined: when false, the first container in the pod is used; when true, the name of the log file that the application writes to is used. For example, `applogs/container1.log` would result in a container name of `container1`.

2. As with the Metrics, you can grant serviceAccount permissions manually. Refer to Step 2 of the Metrics setup above.
3. After deploying the agent, add the following to each container you would like to monitor:

   ```yaml
   volumeMounts:
     - name: log-storage
       mountPath: /applogs
       readOnly: false
   ```

   And add the following to the pod spec:

   ```yaml
   volumes:
     - name: log-storage
       emptyDir:
         sizeLimit: 50Mi
   ```

   Please note that if you do not add these and try to monitor pod logs, the monitored pod will crash, as our injected sidecars depend on the volume existing.

4. Modify your application to write logs to `/applogs/` as well as `stdout` and `stderr`. For example, in a `Dockerfile`, you can add the following:

   ```dockerfile
   CMD ["sh", "-c", "instruction > >(tee -a /applogs/container.log >&1) 2> >(tee -a /applogs/container.log >&2)"]
   ```

   Logs will automatically be rotated by the `logging-sidecar` sidecar that is injected into all monitored pods.

5. Annotate the pods that you would like to monitor (see Pod Annotation below). A combined sketch of the volume mount, volume, and annotation follows this list.
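Putting steps 3 through 5 together, the following is a minimal sketch of a pod template configured for log collection. The container name and image are illustrative only; the volume names, mount path, size limit, and annotation value come from the steps above:

```yaml
# Illustrative pod template fragment combining the annotation, volume mount, and volume.
template:
  metadata:
    annotations:
      sidecar.opentelemetry.io/inject: "observe/fargate-collector"
  spec:
    containers:
      - name: my-app                 # assumption: your application container
        image: my-app:latest         # assumption: your application image
        volumeMounts:
          - name: log-storage        # mount where the application writes its log files
            mountPath: /applogs
            readOnly: false
    volumes:
      - name: log-storage
        emptyDir:
          sizeLimit: 50Mi            # the injected logging sidecar reads and rotates files here
```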
Pod Annotation
Add "sidecar.opentelemetry.io/inject": "observe/fargate-collector" as an annotation to all deployments whose pods you wish to monitor. To quickly do this for all deployments in a namespace, run the following:
```bash
for d in $(kubectl get deployments -n $TARGET_NAMESPACE -o name); do
  kubectl patch $d -n $TARGET_NAMESPACE --type='merge' -p '{"spec": {"template": {"metadata": {"annotations": {"sidecar.opentelemetry.io/inject": "observe/fargate-collector"}}}}}'
done
```

This should force a rolling restart of pods in that namespace, which is necessary for the operator to inject a sidecar into the application pods. To do so manually, you can run the following:
```bash
kubectl -n [namespace with pods to monitor] rollout restart deploy
```