# Instrumenting other languages for LLM observability

OpenLLMetry only supports Python and JavaScript/TypeScript; for other languages, you can rely on manual instrumentation using the OpenTelemetry SDKs. In that case, ensure you model your distributed traces such that spans carry the following attributes, where relevant.
## Instrumenting your LLM-powered app

1. Install the relevant OpenTelemetry SDK for your application: https://opentelemetry.io/docs/languages/
2. Create spans using the SDK, with the attributes described below.
### General Attributes

| Attribute | Value Type | Description |
|---|---|---|
| | | Indicates the current span is the root LLM operation |
| `gen_ai.system` | string | GenAI system. E.g. `openai` |
| `gen_ai.request.model` | string | Requested LLM model name. E.g. `gpt-4` |
| `gen_ai.response.model` | string | Model used by the LLM response. E.g. `gpt-4-0613` |
| `gen_ai.usage.prompt_tokens` | int | Prompt token usage |
| `gen_ai.usage.completion_tokens` | int | Completion token usage |
| | | Token type |
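Span attributes are flat key/value pairs, so recording the table above amounts to attaching one entry per attribute to the active span. The Go sketch below uses a plain map as a stand-in for whatever span-attribute setter your chosen OpenTelemetry SDK exposes; the attribute names follow the OpenTelemetry GenAI semantic conventions, and the vendor, model names, and token counts are illustrative.

```go
package main

import "fmt"

// llmSpanAttributes builds the general GenAI attributes for one LLM call.
// A plain map stands in for the SDK's span-attribute API.
func llmSpanAttributes(reqModel, respModel string, promptTokens, completionTokens int) map[string]any {
	return map[string]any{
		"gen_ai.system":                  "openai", // illustrative vendor
		"gen_ai.request.model":           reqModel,
		"gen_ai.response.model":          respModel,
		"gen_ai.usage.prompt_tokens":     promptTokens,
		"gen_ai.usage.completion_tokens": completionTokens,
	}
}

func main() {
	attrs := llmSpanAttributes("gpt-4", "gpt-4-0613", 42, 128)
	fmt.Println(attrs["gen_ai.request.model"], attrs["gen_ai.usage.prompt_tokens"])
}
```

In a real application you would attach each entry to the span at request time (request model, prompt tokens) and response time (response model, completion tokens), rather than building them all at once.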
### LLM Span Classifiers

| Attribute | Allowed Values | Description |
|---|---|---|
| `traceloop.span.kind` | `workflow`, `task`, `agent`, `tool` | A step or workflow triggered from the application |
| `llm.request.type` | `chat`, `completion`, `embedding` | The type of LLM call, if the span is an LLM request |
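Because classifiers only accept a fixed set of values, it can be worth validating them before attaching. The sketch below assumes OpenLLMetry-compatible classifier attributes (`traceloop.span.kind` with `workflow`/`task`/`agent`/`tool`, and `llm.request.type` with `chat`/`completion`/`embedding`); both the attribute names and the allowed-value sets are assumptions based on OpenLLMetry's conventions, so adjust them to whatever your pipeline expects.

```go
package main

import "fmt"

// Allowed values are assumptions based on OpenLLMetry's conventions.
var spanKinds = map[string]bool{"workflow": true, "task": true, "agent": true, "tool": true}
var requestTypes = map[string]bool{"chat": true, "completion": true, "embedding": true}

// classifierAttributes returns the classifier attributes for a span, or an
// error if a value falls outside the allowed sets. reqType may be empty for
// spans that are not LLM requests.
func classifierAttributes(kind, reqType string) (map[string]string, error) {
	if !spanKinds[kind] {
		return nil, fmt.Errorf("unknown span kind %q", kind)
	}
	attrs := map[string]string{"traceloop.span.kind": kind}
	if reqType != "" {
		if !requestTypes[reqType] {
			return nil, fmt.Errorf("unknown request type %q", reqType)
		}
		attrs["llm.request.type"] = reqType
	}
	return attrs, nil
}

func main() {
	attrs, err := classifierAttributes("task", "chat")
	fmt.Println(attrs, err)
}
```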
## Recording Inputs and Outputs

Note: It is highly recommended that you do not ship sensitive data or PII to Observe.

LLM span inputs and outputs commonly comprise text and media (e.g. images / video). Recording inputs and outputs is optional, but can be useful depending on the data and your troubleshooting requirements. You may use either of the following approaches on its own (e.g. a pure media -> media operation) or mix them (e.g. a text -> media operation).
### Attributes for text-based inputs / outputs

| Attribute | Value Type | Description |
|---|---|---|
| `gen_ai.prompt.0.content` to `gen_ai.prompt.N.content` | string | A list of sequential prompts (inputs) |
| `gen_ai.completion.0.content` to `gen_ai.completion.N.content` | string | A list of sequential completions (outputs) |
| `gen_ai.prompt.0.role` to `gen_ai.prompt.N.role` | One of `user`, `assistant`, or `system` | A list of attributes sequentially indicating the role involved in each prompt |
| `gen_ai.completion.0.role` to `gen_ai.completion.N.role` | One of `user`, `assistant`, or `system` | A list of attributes sequentially indicating the role involved in each completion |
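Since OpenTelemetry attributes are flat key/value pairs, the "list" is encoded by numbering the keys from `0` to `N`. A small helper (hypothetical, stdlib-only) that flattens a chat history into these indexed attributes might look like this:

```go
package main

import "fmt"

type message struct {
	Role    string // user, assistant, or system
	Content string
}

// promptAttributes flattens a chat history into the indexed
// gen_ai.prompt.N.role / gen_ai.prompt.N.content span attributes.
func promptAttributes(history []message) map[string]string {
	attrs := make(map[string]string, 2*len(history))
	for i, m := range history {
		attrs[fmt.Sprintf("gen_ai.prompt.%d.role", i)] = m.Role
		attrs[fmt.Sprintf("gen_ai.prompt.%d.content", i)] = m.Content
	}
	return attrs
}

func main() {
	attrs := promptAttributes([]message{
		{Role: "system", Content: "You are a helpful assistant."},
		{Role: "user", Content: "What is OpenTelemetry?"},
	})
	fmt.Println(attrs["gen_ai.prompt.1.role"]) // prints "user"
}
```

The same numbering scheme applies to completions via `gen_ai.completion.N.role` / `gen_ai.completion.N.content`.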
### Attributes for media-based inputs / outputs

Since Observe does not support storing binary data types, when using LLM models to process, generate, or translate media, we suggest storing a minimum viable hash, or a back-reference such as a URL, to the media.

| Attribute | Value Type | Description |
|---|---|---|
| `gen_ai.prompt.0.content` to `gen_ai.prompt.N.content` | string | A list of attributes sequentially indicating a signature of the media (inputs) |
| `gen_ai.completion.0.content` to `gen_ai.completion.N.content` | string | A list of attributes sequentially indicating a signature of the media (outputs) |
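One way to produce such a signature is a content hash. The stdlib-only sketch below (function name hypothetical) records a SHA-256 digest of the media bytes, which is compact and reproducible, so the span can later be correlated with the original asset stored elsewhere:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// mediaSignature returns a compact, reproducible signature for media bytes,
// suitable as a gen_ai.prompt.N.content / gen_ai.completion.N.content value
// when the media itself cannot be shipped. A URL back-reference to the
// stored asset works equally well.
func mediaSignature(media []byte) string {
	sum := sha256.Sum256(media)
	return "sha256:" + hex.EncodeToString(sum[:])
}

func main() {
	fmt.Println(mediaSignature([]byte("fake-image-bytes")))
}
```

Prefixing the digest with the algorithm name (`sha256:`) keeps the scheme self-describing if you later change hash functions.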