Instrumenting other languages for LLM observability

OpenLLMetry supports only Python and JavaScript/TypeScript. For other languages, you can instrument manually using the OpenTelemetry SDKs.

When instrumenting manually, model your distributed traces so that spans carry the following attributes, where relevant:

Instrumenting your LLM-powered app

General Attributes

| Attribute | Value Type | Description |
| --- | --- | --- |
| `gen_ai.is_llm_root` | string | Indicates that the current span is the root LLM operation |
| `gen_ai.system` | string | The GenAI system, e.g. `OpenAI` |
| `llm.request.model` | string | Requested LLM model name, e.g. `gpt-4o` |
| `llm.response.model` | string | Model reported in the LLM response, e.g. `gpt-4o` |
| `gen_ai.usage.prompt_tokens` | int64 | Prompt token usage |
| `gen_ai.usage.completion_tokens` | int64 | Completion token usage |
| `gen_ai.token.type` | string | Token type: `prompt`, `completion`, `input`, or `output` |
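As a sketch of how these general attributes might be attached in a language OpenLLMetry does not cover, here is a Go example. The tiny `Span` type below is a hypothetical stand-in for an OpenTelemetry span (in a real service you would call `span.SetAttributes` via the OpenTelemetry Go SDK); the attribute keys come from the table above, while the model names and token counts are illustrative values.

```go
package main

import "fmt"

// Span is a minimal stand-in for an OpenTelemetry span. Real code would use
// trace.Span and span.SetAttributes from the OpenTelemetry Go SDK.
type Span struct {
	Name  string
	Attrs map[string]any
}

// SetAttribute records a single key/value attribute on the span.
func (s *Span) SetAttribute(key string, value any) {
	if s.Attrs == nil {
		s.Attrs = map[string]any{}
	}
	s.Attrs[key] = value
}

// recordChatCall annotates a span with the general GenAI attributes
// after an LLM call completes.
func recordChatCall(span *Span, reqModel, respModel string, promptTokens, completionTokens int64) {
	span.SetAttribute("gen_ai.is_llm_root", "true")
	span.SetAttribute("gen_ai.system", "OpenAI")
	span.SetAttribute("llm.request.model", reqModel)
	span.SetAttribute("llm.response.model", respModel)
	span.SetAttribute("gen_ai.usage.prompt_tokens", promptTokens)
	span.SetAttribute("gen_ai.usage.completion_tokens", completionTokens)
}

func main() {
	span := &Span{Name: "openai.chat"}
	recordChatCall(span, "gpt-4o", "gpt-4o", 42, 128)
	fmt.Println(span.Attrs["llm.request.model"], span.Attrs["gen_ai.usage.completion_tokens"])
}
```

The same pattern applies in any OpenTelemetry-supported language: open a span around the LLM call, then set these attributes once the response (and its token usage) is known.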

LLM Span Classifiers

| Attribute | Allowed Values | Description |
| --- | --- | --- |
| `traceloop.span.kind` | `agent`, `workflow`, `task`, `tool` | A step or workflow triggered from the application |
| `llm.request.type` | `chat`, `embedding`, `completion`, `rerank` | The type of LLM call, if the span is an LLM request |
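A short sketch of how these classifiers might be applied, continuing the stand-in `Span` type (a real implementation would set these via the OpenTelemetry SDK for your language). The span names and the choice of `workflow`/`task` kinds here are illustrative assumptions, not required values.

```go
package main

import "fmt"

// Span is a minimal stand-in for an OpenTelemetry span; real code would
// call span.SetAttributes via the OpenTelemetry SDK for your language.
type Span struct {
	Name  string
	Attrs map[string]string
}

// SetAttribute records a single key/value attribute on the span.
func (s *Span) SetAttribute(key, value string) {
	if s.Attrs == nil {
		s.Attrs = map[string]string{}
	}
	s.Attrs[key] = value
}

func main() {
	// A workflow span for the overall application step...
	workflow := &Span{Name: "summarize_document"}
	workflow.SetAttribute("traceloop.span.kind", "workflow")

	// ...and a child span for the LLM request it triggers,
	// classified with llm.request.type since it is an LLM call.
	llmCall := &Span{Name: "openai.chat"}
	llmCall.SetAttribute("traceloop.span.kind", "task")
	llmCall.SetAttribute("llm.request.type", "chat")

	fmt.Println(workflow.Attrs["traceloop.span.kind"], llmCall.Attrs["llm.request.type"])
}
```

Classifying spans this way lets you distinguish orchestration steps (workflows, tasks, tools, agents) from the LLM requests they trigger when querying traces.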

Recording Inputs and Outputs

Note: It is highly recommended that you do not ship sensitive or PII data to Observe. LLM spans commonly carry text and media (e.g. image / video) as inputs and outputs. Recording inputs and outputs is optional, but can be useful depending on the data and your troubleshooting requirements. You may use either of the following approaches on its own (e.g. for a pure media-to-media operation) or combine them (e.g. for a text-to-media operation).

  • Attributes for text-based inputs / outputs

    | Attribute | Value Type | Description |
    | --- | --- | --- |
    | `gen_ai.prompt.0.content` to `gen_ai.prompt.N.content` | string | Sequential prompt contents (inputs) |
    | `gen_ai.completion.0.content` to `gen_ai.completion.N.content` | string | Sequential completion contents (outputs) |
    | `gen_ai.prompt.0.role` to `gen_ai.prompt.N.role` | One of `user`, `assistant`, or `system` | The role associated with each prompt, indexed sequentially |
    | `gen_ai.completion.0.role` to `gen_ai.completion.N.role` | One of `user`, `assistant`, or `system` | The role associated with each completion, indexed sequentially |

  • Attributes for media-based inputs / outputs

    Since Observe does not support storing binary data, when using LLM models to process, generate, or translate media, we suggest recording a compact hash of the media, or a back-reference such as a URL, rather than the payload itself.

    | Attribute | Value Type | Description |
    | --- | --- | --- |
    | `gen_ai.prompt.0.content` to `gen_ai.prompt.N.content` | string | A signature of each media input, indexed sequentially |
    | `gen_ai.completion.0.content` to `gen_ai.completion.N.content` | string | A signature of each media output, indexed sequentially |
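To tie both approaches together, here is a sketch of recording indexed prompt attributes for text, plus a SHA-256 fingerprint as a media signature in place of binary content. The `Span` type is again a hypothetical stand-in for an OpenTelemetry span, and the `sha256:` prefix on the signature is an assumed convention, not one mandated by Observe.

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Span is a minimal stand-in for an OpenTelemetry span; real code would use
// the OpenTelemetry SDK for your language.
type Span struct{ Attrs map[string]string }

// SetAttribute records a single key/value attribute on the span.
func (s *Span) SetAttribute(key, value string) {
	if s.Attrs == nil {
		s.Attrs = map[string]string{}
	}
	s.Attrs[key] = value
}

// recordPrompt records the i-th prompt using the indexed attribute convention.
func recordPrompt(span *Span, i int, role, content string) {
	span.SetAttribute(fmt.Sprintf("gen_ai.prompt.%d.role", i), role)
	span.SetAttribute(fmt.Sprintf("gen_ai.prompt.%d.content", i), content)
}

// mediaSignature returns a compact SHA-256 fingerprint of media bytes,
// suitable for recording in place of the binary payload itself.
func mediaSignature(media []byte) string {
	return fmt.Sprintf("sha256:%x", sha256.Sum256(media))
}

func main() {
	span := &Span{}
	recordPrompt(span, 0, "system", "You are a helpful assistant.")
	recordPrompt(span, 1, "user", "Describe this image.")
	// For a media input, record a signature rather than the bytes.
	span.SetAttribute("gen_ai.prompt.2.content", mediaSignature([]byte{0xFF, 0xD8}))
	fmt.Println(span.Attrs["gen_ai.prompt.1.role"], span.Attrs["gen_ai.prompt.2.content"])
}
```

A URL back-reference works the same way: store the link to the media in the indexed `content` attribute instead of a hash, depending on which is easier to resolve during troubleshooting.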