Python instrumentation for LLM observability¶
Supported Frameworks and Libraries¶
OpenLLMetry automatically instruments the following provider APIs and libraries:
OpenAI / Azure OpenAI
Anthropic
Cohere
Ollama
Mistral AI
HuggingFace
Bedrock (AWS)
SageMaker (AWS)
Replicate
Vertex AI (GCP)
Google Generative AI (Gemini)
IBM Watsonx AI
Together AI
Aleph Alpha
Groq
Supported Vector Databases¶
Chroma
Pinecone
Qdrant
Weaviate
Milvus
Marqo
LanceDB
Supported Frameworks¶
LangChain
LlamaIndex
Haystack
LiteLLM
CrewAI
Miscellaneous¶
Model Context Protocol
Install & configure the Python instrumentation¶
Add the Traceloop SDK as a dependency:
pip install traceloop-sdk
Initialize the SDK
from traceloop.sdk import Traceloop

Traceloop.init(
    app_name="<YOUR_SERVICE_NAME>",
    telemetry_enabled=False,
)
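Once initialized, calls made through any supported library are traced automatically. A minimal end-to-end sketch, assuming the openai package is installed and OPENAI_API_KEY is set (the model and prompt are placeholders):

from openai import OpenAI
from traceloop.sdk import Traceloop

# Initialize once, as early as possible in the process lifetime.
Traceloop.init(app_name="<YOUR_SERVICE_NAME>", telemetry_enabled=False)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# This call is instrumented automatically: a span carrying the model name,
# token counts, and (if content tracing is enabled) the prompt and
# completion is exported to your OTLP endpoint.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)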
Set the following environment variables:¶
| Environment variable | Example Values | Description | Optional? |
|---|---|---|---|
| TRACELOOP_BASE_URL | http://<YOUR_OBSERVE_AGENT_HOSTNAME>:4318 | OTLP endpoint | No |
| TRACELOOP_TRACE_CONTENT | true / false | Enables or disables extraction of inputs and outputs from LLM calls | Yes |
| OTEL_RESOURCE_ATTRIBUTES | deployment.environment=dev,service.namespace=inference | A list of key=value resource attributes you wish to add to your spans | Yes |
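These variables are normally set in your deployment environment. For quick local testing you can also set them in-process before initializing the SDK; a minimal sketch, assuming an agent listening on localhost:4318 and the variable names above (the values are illustrative):

import os

# Illustrative values only; in production, set these in the deployment
# environment rather than in application code.
os.environ["TRACELOOP_BASE_URL"] = "http://localhost:4318"
os.environ["TRACELOOP_TRACE_CONTENT"] = "true"
os.environ["OTEL_RESOURCE_ATTRIBUTES"] = "deployment.environment=dev,service.namespace=inference"

from traceloop.sdk import Traceloop

# init() is called after the variables are set so the SDK picks them up.
Traceloop.init(app_name="<YOUR_SERVICE_NAME>", telemetry_enabled=False)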
Add attributes at runtime¶
To annotate your span data with custom attributes like customer_id, user_id, and so on, we recommend OpenLLMetry’s set_association_properties. For example:
Traceloop.set_association_properties({
    "user_id": "user12345",
    "chat_id": "chat12345",
})
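In a service you would typically set these per request, before any LLM calls, so every span created in that context carries the identifiers. A sketch using OpenLLMetry’s workflow decorator (the handle_chat function and its placeholder body are illustrative):

from traceloop.sdk import Traceloop
from traceloop.sdk.decorators import workflow

@workflow(name="chat")
def handle_chat(user_id: str, chat_id: str, prompt: str) -> str:
    # Attach identifiers before the model call; spans created in this
    # context (including auto-instrumented LLM calls) inherit them.
    Traceloop.set_association_properties({"user_id": user_id, "chat_id": chat_id})
    # ... call your LLM client here and return its answer ...
    return prompt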
Frequently Asked Questions¶
Instrumentation does not work in a Gunicorn-based app¶
This may happen because frameworks like Gunicorn use a fork process model. The recommended approach for Gunicorn is to call the instrumentor in the post_fork server hook. For example, modify your gunicorn_conf.py to add the following:
from traceloop.sdk import Traceloop
from typing import Any

def post_fork(server: Any, worker: Any) -> None:
    # Gunicorn forks worker processes after the master starts, so the SDK
    # must be initialized in each worker rather than in the master process.
    Traceloop.init(
        app_name="<YOUR_SERVICE_NAME>",
        telemetry_enabled=False,
    )
Interoperability with OpenTelemetry zero-code or automatic instrumentation¶
To ensure OpenLLMetry does not conflict with OpenTelemetry, ensure the following:

- If you use zero-code instrumentation via opentelemetry-instrument, switch to a programmatic approach. To ensure correct ordering of instrumentations, auto_instrumentation.initialize() must be called after Traceloop.init(...) (see the sketch after this list).
- Add an environment variable to inform the OpenTelemetry SDK to ignore the instrumentations OpenLLMetry already provides: OTEL_PYTHON_DISABLED_INSTRUMENTATIONS=pinecone_client,qdrant_client,mistralai,haystack-ai,chromadb,llama-index,crewai,lancedb,openai,langchain,milvus,aleph_alpha_client,google_generativeai,weaviate_client,marqo,cohere,replicate,groq,ibm-watson-machine-learning,together,google_cloud_aiplatform,ollama,mcp
- The app_name in your Traceloop.init(…) must match your OpenTelemetry SDK identifiers or the OTEL_SERVICE_NAME environment variable.
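Putting the ordering requirement together, a programmatic setup might look like this sketch (assuming the opentelemetry-distro and opentelemetry-instrumentation packages are installed and OTEL_PYTHON_DISABLED_INSTRUMENTATIONS is set as above):

from opentelemetry.instrumentation import auto_instrumentation
from traceloop.sdk import Traceloop

# Initialize OpenLLMetry first so its instrumentations register before
# the OpenTelemetry auto-instrumentation runs.
Traceloop.init(app_name="<YOUR_SERVICE_NAME>", telemetry_enabled=False)

# Then initialize OpenTelemetry auto-instrumentation programmatically.
auto_instrumentation.initialize()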