Filter Logs or Metrics

Filter Logs

Let’s suppose you want to exclude all logs with a path starting with /var/log/pods/default_log-generator-csv.

  1. Create filter-logs-values.yaml with the following configuration.

node:
  containers:
    logs:
      enabled: true
      # log lines above this size will be truncated
      maxLogSize: 512kb
      # If true, the receiver will pause reading a file and attempt to resend the current batch of logs if it encounters an error from downstream components.
      retryOnFailure:
        enabled: true
      # A list of file glob patterns that match the file paths to be read.
      include: '["/var/log/pods/*/*/*.log", "/var/log/kube-apiserver-audit.log"]'
      # A list of file glob patterns to exclude from reading. This is applied against the paths matched by include.
      exclude: '["/var/log/pods/default_log-generator-csv*/**"]'
      # exclude files whose last modification time is older than this; time units: 1m, 1h, 1d
      excludeOlderThan: 1d
      # At startup, where to start reading logs from the file. Options are beginning or end.
      startAt: end
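
To sanity-check the exclude glob before redeploying, you can list the paths it matches from inside one of the agent's node pods, which mount /var/log/pods from the host. This is a minimal sketch; the label selector is an assumption, so substitute whatever matches your release:

kubectl exec -n observe \
  "$(kubectl get pods -n observe -l app.kubernetes.io/name=observe-agent -o name | head -n 1)" \
  -- sh -c 'ls -d /var/log/pods/default_log-generator-csv*'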
  2. Redeploy the Observe Agent

Run the following command to redeploy the Observe Agent in the observe namespace with the log filtering configuration.

helm upgrade --reuse-values observe-agent observe/agent -n observe --values filter-logs-values.yaml
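
Because --reuse-values merges the new file over the release's existing values, you can confirm the merged result with a standard Helm command:

helm get values observe-agent -n observe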
  3. Restart pods

Run the following commands to restart the pods with the updated configuration.

kubectl rollout restart deployment -n observe
kubectl rollout restart daemonset -n observe
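
If you want to block until the restarts finish, a small loop over the workloads with kubectl rollout status works:

for w in $(kubectl get deployment,daemonset -n observe -o name); do
  kubectl rollout status "$w" -n observe
done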

Run the following command to make sure the Observe Agent has been redeployed successfully.

kubectl get pods -o wide -n observe
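
To confirm the exclusion itself, you can also grep the agent's own logs for the files its filelog receiver is tailing; paths under /var/log/pods/default_log-generator-csv should no longer appear. A sketch, assuming the receiver's usual "Started watching file" log line and a label selector you may need to adjust:

kubectl logs -n observe -l app.kubernetes.io/name=observe-agent --tail=500 | grep 'Started watching file'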

For more examples, see Generating sample logs.

Filter Metrics By Name

The following config block can be used to filter Prometheus metrics scraped from pods.

application:
  # use this option to scrape prometheus metrics from pods
  prometheusScrape:
    enabled: true
    interval: 10s
    # namespaces to exclude from scraping
    namespaceDropRegex: (.*istio.*|.*ingress.*|kube-system)
    # namespaces to explicitly include for scraping; use | to include several, e.g. (ns1|ns2)
    namespaceKeepRegex: (default)
    # port names to scrape from; use | to match several, e.g. .*metrics|otherportname
    portKeepRegex: .*metrics|web
    # metrics to drop
    metricDropRegex: .*gc_heap_allocs.*
    # metrics to keep
    metricKeepRegex: (.*)
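
Metrics whose names match metricDropRegex are dropped, and only those matching metricKeepRegex are kept. You can dry-run a pattern against candidate metric names with grep -E before deploying; this is an approximation, since grep's extended syntax is close to but not identical to the regex engine used for relabeling:

# prints only the names the pattern would drop
printf '%s\n' go_gc_heap_allocs_bytes_total http_requests_total | grep -E '.*gc_heap_allocs.*'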

Let’s suppose you’d like to stop collecting scrape_samples_post_metric_relabeling.

  1. Create filter-prometheus-metrics.yaml with the following configuration.

application:
  # use this option to scrape prometheus metrics from pods
  prometheusScrape:
    enabled: true
    interval: 60s
    # namespaces to exclude from scraping
    namespaceDropRegex: (.*istio.*|.*ingress.*|kube-system)
    # namespaces to explicitly include for scraping; use | to include several, e.g. (ns1|ns2)
    namespaceKeepRegex: (default)
    # port names to scrape from; use | to match several, e.g. .*metrics|otherportname
    portKeepRegex: .*metrics|web
    # metrics to drop
    metricDropRegex: (scrape_samples_post_metric_relabeling)
    # metrics to keep
    metricKeepRegex: (.*)
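
scrape_samples_post_metric_relabeling is one of the synthetic series that Prometheus-style scraping records alongside the scraped metrics (others include scrape_duration_seconds, scrape_samples_scraped, and scrape_series_added). If you later want to drop those as well, the same field accepts an alternation; a sketch of that one field only:

    metricDropRegex: (scrape_samples_post_metric_relabeling|scrape_duration_seconds|scrape_samples_scraped|scrape_series_added)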
  2. Redeploy the Observe Agent

Run the following command to redeploy the Observe Agent in the observe namespace with the prometheusScrape configuration.

helm upgrade --reuse-values observe-agent observe/agent -n observe --values filter-prometheus-metrics.yaml
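
If you have the helm-diff plugin installed (it is not part of core Helm), you can preview what the upgrade will change first:

helm diff upgrade --reuse-values observe-agent observe/agent -n observe --values filter-prometheus-metrics.yaml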
  3. Restart pods

Run the following commands to restart the pods with the updated configuration.

kubectl rollout restart deployment -n observe
kubectl rollout restart daemonset -n observe
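
Optionally, block until every pod reports Ready before verifying:

kubectl wait --for=condition=Ready pods --all -n observe --timeout=120s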

Run the following command to make sure the Observe Agent has been redeployed successfully.

kubectl get pods -o wide -n observe
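
To double-check that the drop pattern made it into the rendered collector configuration, you can grep the ConfigMaps in the namespace; this assumes the chart renders the collector config into ConfigMaps, which you may want to verify for your chart version:

kubectl get configmaps -n observe -o yaml | grep scrape_samples_post_metric_relabeling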

For more examples, see Prometheus pod metrics scrape example.