Other instrumentation for LLM observability

Since OpenLLMetry only supports Python and JavaScript/TypeScript, applications written in other languages must be instrumented manually using the OpenTelemetry SDKs.

In such cases, model your distributed traces so that they embed the following attributes where relevant:

Instrument your LLM-powered app

  • Install the relevant OpenTelemetry SDK for your application. See Language APIs & SDKs in the OpenTelemetry documentation to find the installation instructions for your programming language.
  • Create spans using the SDK, setting the general attributes and LLM span classifiers described below.

General attributes

| Attribute | Value Type | Description |
| --- | --- | --- |
| gen_ai.is_llm_root | string | Indicates the current span is the root LLM operation. |
| gen_ai.system | string | GenAI system, such as OpenAI. |
| llm.request.model | string | Requested LLM model name, such as gpt-4o. |
| llm.response.model | string | Model used by the LLM response, such as gpt-4o. |
| gen_ai.usage.prompt_tokens | int64 | Prompt token usage. |
| gen_ai.usage.completion_tokens | int64 | Completion token usage. |
| gen_ai.token.type | string | Token type (prompt, completion, input, or output). |
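As a sketch of how the token-usage attributes map from a provider response, the following helper derives them from the usage object most chat APIs return (the helper name and the input field names are assumptions modeled on the OpenAI response shape):

```python
def usage_attributes(usage: dict) -> dict:
    """Map a provider usage object (assumed OpenAI-style field names)
    onto the OpenTelemetry token-usage span attributes."""
    return {
        "gen_ai.usage.prompt_tokens": int(usage["prompt_tokens"]),
        "gen_ai.usage.completion_tokens": int(usage["completion_tokens"]),
    }
```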

LLM span classifiers

| Attribute | Allowed Values | Description |
| --- | --- | --- |
| traceloop.span.kind | agent, workflow, task, or tool | A step or workflow triggered from the application. |
| llm.request.type | chat, embedding, completion, or rerank | The type of LLM call, if the span is an LLM request. |

Record inputs and outputs

📘 Note

We strongly recommend that you do not ship sensitive or personally identifiable information (PII) to Observe.

LLM spans commonly include text and media, such as images and videos, as inputs and outputs. Recording inputs and outputs is optional, but can be useful depending on the data and your troubleshooting requirements. You can choose either of the following approaches:

  • Text-based inputs and outputs: record the content directly as text attributes.
  • Media-based inputs and outputs: record a hash of the media or a back-reference to it, such as a URL.

Attributes for text-based inputs and outputs

| Attribute | Value Type | Description |
| --- | --- | --- |
| gen_ai.prompt.0.content to gen_ai.prompt.N.content | string | A list of sequential prompts (inputs). |
| gen_ai.completion.0.content to gen_ai.completion.N.content | string | A list of sequential completions (outputs). |
| gen_ai.prompt.0.role to gen_ai.prompt.N.role | user, assistant, or system | A list of attributes sequentially indicating the role involved in each prompt. |
| gen_ai.completion.0.role to gen_ai.completion.N.role | user, assistant, or system | A list of attributes sequentially indicating the role involved in each completion. |
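When recording text, a small helper can flatten an ordered message list into the indexed attributes above (a sketch; the helper name and the message shape are assumptions modeled on chat-style APIs):

```python
def message_attributes(kind: str, messages: list[dict]) -> dict:
    """Flatten an ordered list of chat messages into indexed
    gen_ai.<kind>.N.role / gen_ai.<kind>.N.content span attributes.
    `kind` is "prompt" for inputs or "completion" for outputs."""
    attrs = {}
    for i, msg in enumerate(messages):
        attrs[f"gen_ai.{kind}.{i}.role"] = msg["role"]
        attrs[f"gen_ai.{kind}.{i}.content"] = msg["content"]
    return attrs
```

The resulting dictionary can be passed directly as span attributes, since each key maps to a string value.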

Attributes for media-based inputs and outputs

Observe does not support storing binary data types. When using LLM models to process, generate, or translate media, we suggest storing a minimal hash of the media or a back-reference to it, such as a URL.

| Attribute | Value Type | Description |
| --- | --- | --- |
| gen_ai.prompt.0.content to gen_ai.prompt.N.content | string | A list of attributes sequentially indicating a signature of the media (inputs). |
| gen_ai.completion.0.content to gen_ai.completion.N.content | string | A list of attributes sequentially indicating a signature of the media (outputs). |