Tutorial: Instrumenting Single Container Java Apps¶
Real-world Observability can be full of edge cases. While we would like all important systems to be continuously built as microservices from a monorepo, sometimes important systems are not built that way. Luckily, we do not have to go without Observability for these edge cases. In this tutorial, we will look at two ways to instrument a standalone Java Docker application running on Amazon Elastic Compute Cloud (EC2) instances. With no `docker-compose` or any orchestration system, we will enable tracing of that application.
Best Practice Method¶
To ensure reliable telemetry collection, we want to run the collection components in a separate container alongside the application. We will do this with a shared Docker network bridge connecting your Java application and the OpenTelemetry (OTel) Collector Exporter container.
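The commands in this tutorial assume a user-defined bridge network named otelnetwork. If it does not already exist on the host, create it first:
docker network create otelnetwork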
Create the OTel Collector Exporter Configuration¶
Create a configuration file for the OTel Collector Exporter container. This file will define how the collector receives, processes, and exports telemetry data.
Save the following configuration as otel-collector-config.yaml:
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318 # This will accept traffic on all interfaces
processors:
  batch:
exporters:
  logging:
  otlphttp:
    endpoint: "https://{OBSERVE_CUSTOMER}.collect.observeinc.com/v2/otel"
    headers:
      authorization: "Bearer {OBSERVE_TOKEN}"
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlphttp]
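Optionally, you can sanity-check the file before deploying it. Recent collector builds include a validate subcommand; a minimal check, assuming the same otel/opentelemetry-collector-contrib image used below, might look like this:
docker run --rm -v $(pwd)/otel-collector-config.yaml:/etc/otel-collector-config.yaml otel/opentelemetry-collector-contrib validate --config /etc/otel-collector-config.yaml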
Alternative: Use an Observe Agent Container¶
Observe also packages the OTel collector as a container image. See the documentation for configuration details.
Deploy the OTel Collector Exporter Container¶
Deploy the OTel Collector Exporter container using the otel-collector-config.yaml configuration file. Ensure that the command is run from the directory containing the otel-collector-config.yaml file, or adjust the path as needed.
docker run -v $(pwd)/otel-collector-config.yaml:/etc/otel-collector-config.yaml -v /var/logs:/tmp --network=otelnetwork --name OTelcol -p 4318:4318 -p 4317:4317 otel/opentelemetry-collector-contrib --config /etc/otel-collector-config.yaml
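Once the container is up, you can check its logs to confirm that the collector started and that the OTLP receivers are listening on ports 4317 and 4318:
docker logs OTelcol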
Deploy or Connect to the Application Container¶
Finally, we need to let the Java application container know about the bridge. This is done by deploying it on the same otelnetwork bridge network, which enables communication with the OTel Collector Exporter container. If your application container is already running, you can simply connect it to the otelnetwork bridge.
Deploying a New Application Container¶
To deploy a new Java application container on the otelnetwork bridge:
docker run --network=otelnetwork -p 8080:8080 my_app
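Joining the bridge only provides network connectivity; the application still needs to be configured to send its telemetry to the collector. For example, if the application is instrumented with the OpenTelemetry Java agent or SDK (see the alternative method below), you can point its exporter at the collector container by name, since Docker resolves container names on user-defined bridge networks. The environment variables here are standard OTel SDK settings and my_app is a placeholder image name:
docker run --network=otelnetwork -p 8080:8080 -e OTEL_EXPORTER_OTLP_ENDPOINT=http://OTelcol:4318 -e OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf my_app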
Connect to a Running Application Container¶
If your Java application container is already running, connect it to the otelnetwork bridge:
Stop the existing container, if needed:
docker stop <existing_container_name>
Connect the stopped container to the otelnetwork bridge:
docker network connect otelnetwork <existing_container_name>
Restart the container:
docker start <existing_container_name>
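To confirm that both the application container and OTelcol are now attached to the bridge, inspect the network and review the Containers section of the output:
docker network inspect otelnetwork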
Submit a Test Span through the OpenTelemetry Collector¶
To validate your Observe setup, send a test span through the OpenTelemetry Collector to confirm the configuration and connection to the Observe backend.
Create a Test Span:
Save the following JSON in a file named span.json:
{
  "resourceSpans": [
    {
      "resource": {
        "attributes": [
          {
            "key": "service.name",
            "value": {
              "stringValue": "test-service"
            }
          }
        ]
      },
      "scopeSpans": [
        {
          "scope": {
            "name": "manual-test"
          },
          "spans": [
            {
              "traceId": "71699b6fe85982c7c8995ea3d9c95df2",
              "spanId": "3c191d03fa8be065",
              "name": "spanitron",
              "kind": 2,
              "droppedAttributesCount": 0,
              "events": [],
              "droppedEventsCount": 0,
              "status": {
                "code": 1
              }
            }
          ]
        }
      ]
    }
  ]
}
Send the Test Span:
Use the following curl command to send the test span to the OpenTelemetry Collector:
curl -i http://localhost:4318/v1/traces -X POST -H "Content-Type: application/json" -d @span.json
Note
-i: include the response headers in the output, for troubleshooting
http://localhost:4318/v1/traces: the URL and port where the OTel Collector container can receive OTLP/HTTP traces
-X POST: the HTTP method to use
-H "Content-Type: application/json": informs the OTel Collector that the payload is JSON data
-d @span.json: sends the data from the span.json file
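If the collector accepts the span, curl should print a 200 response. The exact headers vary by collector version, but a successful reply looks roughly like this (a partialSuccess field may appear if some spans were rejected):
HTTP/1.1 200 OK
Content-Type: application/json

{}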
Verify in the Observe instance:
Log in to your Observe instance and navigate to your data to confirm that the test span arrived.
Alternative Method¶
A single-container Java application’s administrator may not be willing to set up a separate collector or network bridge. In this case, we can simplify telemetry collection by instrumenting the application with the OpenTelemetry Java agent. This method ships traces directly from the Java agent (SDK) to Observe. However, this approach has some limitations:
In environments with multiple Java applications in separate containers, there’s a risk of dropping payloads due to the absence of an intermediary agent to batch and flush traces. Network issues or endpoint unavailability can prevent successful trace collection.
Each application instance sends its own data, which can cause increased network traffic.
Managing individual telemetry configurations and ensuring reliable delivery from each container can become complex and error-prone for large-scale deployments.
To instrument the application, we will make the following changes to its Dockerfile:
Note
Some Observe instances may optionally use a name instead of Customer ID; if this is the case for your instance, contact your Observe Data Engineer to discuss implementation. A stem name will work as is, but a DNS redirect name may require client configuration.
# Copy the built application from the build stage
COPY --from=build /app/build/libs/java-app.jar /app/java-app.jar
# Download the OpenTelemetry Java agent and save it to the /app directory
RUN wget https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/latest/download/opentelemetry-javaagent.jar -O /app/opentelemetry-javaagent.jar
# Update the package lists from APK (Alpine Linux package manager)
RUN apk update
# Install curl, a command-line tool for transferring data with URLs
RUN apk add curl
# Expose port 8080 to allow outside access to the application
EXPOSE 8080
# Set environment variables for OpenTelemetry
# OTEL_SERVICE_NAME: Name of the service for traces
ENV OTEL_SERVICE_NAME=java-app
# OTEL_TRACES_EXPORTER: Exporter for traces, set to use OTLP
ENV OTEL_TRACES_EXPORTER=otlp
# OTEL_METRICS_EXPORTER: Exporter for metrics, set to use OTLP
ENV OTEL_METRICS_EXPORTER=otlp
# OTEL_LOGS_EXPORTER: Exporter for logs, set to use OTLP
ENV OTEL_LOGS_EXPORTER=otlp
# OTEL_EXPORTER_OTLP_ENDPOINT: Endpoint for sending telemetry data
# Note: Replace OBSERVE_CUSTOMER with your Observe Customer ID
ENV OTEL_EXPORTER_OTLP_ENDPOINT=https://{OBSERVE_CUSTOMER}.collect.observeinc.com/v2/otel
# OTEL_EXPORTER_OTLP_PROTOCOL: Protocol to use for OTLP (HTTP with protocol buffer encoding)
ENV OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf"
# OTEL_EXPORTER_OTLP_TRACES_HEADERS: Authorization header for OTLP traces export
# Note: The value includes special characters that need to be URL encoded.
# For example:
# - A space should be encoded as %20
# - A colon (:) should be encoded as %3A
# The example below shows the Authorization header with these encodings; replace the example token with your own Observe token.
ENV OTEL_EXPORTER_OTLP_TRACES_HEADERS="Authorization=Bearer%20ds1LHhHGf9YsAjODYuCq%3ABDnB88ZbMRsnj-g6Mw23w45Nr22b3vJG"
# Command to run the application with OpenTelemetry Java agent
CMD ["java", "-javaagent:/app/opentelemetry-javaagent.jar", "-jar", "/app/java-app.jar"]
Edit the command line as needed, and test thoroughly. For more information on this approach, you may find the Observe OTEL Instrumentation Tutorial or OpenTelemetry’s Java documentation useful.