OpenAI

Required OpenAI Attributes

The following attributes are required for Beakpoint to calculate costs for traces originating from OpenAI API calls.

Always Required Fields

These fields are always required for Beakpoint to calculate OpenAI costs:

| Attribute Name | Example Value | Allowed Values |
| --- | --- | --- |
| gen_ai.system | openai | openai (must be this exact value) |
| gen_ai.request.model | gpt-4.1 | Any valid OpenAI model name |
| gen_ai.usage.input_tokens | 512 | Non-negative integer |
| gen_ai.usage.output_tokens | 128 | Non-negative integer |
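If you set these attributes yourself rather than through an auto-instrumentation library, a small validator can catch missing or malformed fields before spans are exported. The helper below is an illustrative sketch (not part of Beakpoint) that encodes the rules from the table above:

```python
REQUIRED = (
    "gen_ai.system",
    "gen_ai.request.model",
    "gen_ai.usage.input_tokens",
    "gen_ai.usage.output_tokens",
)

def validate_required(attrs):
    """Check span attributes against the always-required fields.

    Illustrative helper, not Beakpoint's actual validation logic.
    Returns (ok, message).
    """
    missing = [key for key in REQUIRED if key not in attrs]
    if missing:
        return False, f"missing attributes: {missing}"
    if attrs["gen_ai.system"] != "openai":
        return False, "gen_ai.system must be exactly 'openai'"
    for key in ("gen_ai.usage.input_tokens", "gen_ai.usage.output_tokens"):
        value = attrs[key]
        if not isinstance(value, int) or value < 0:
            return False, f"{key} must be a non-negative integer"
    return True, "ok"

ok, msg = validate_required({
    "gen_ai.system": "openai",
    "gen_ai.request.model": "gpt-4.1",
    "gen_ai.usage.input_tokens": 512,
    "gen_ai.usage.output_tokens": 128,
})
```

Running a check like this in a unit test is a cheap way to ensure your spans will be priceable before they reach Beakpoint.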

Optional Enrichment Attributes

These fields are optional but improve cost accuracy when provided:

| Attribute Name | Example Value | Description |
| --- | --- | --- |
| gen_ai.response.model | gpt-4.1-2025-04-14 | The exact model version returned in the response. When present, this takes precedence over gen_ai.request.model for pricing lookups. |
| gen_ai.usage.input_tokens.cached | 256 | Number of input tokens served from the prompt cache. Cached tokens are billed at a reduced rate. |
| gen_ai.usage.output_tokens.reasoning | 64 | Number of tokens used for internal reasoning (o-series models). Billed at the standard output token rate. |
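The precedence rule for gen_ai.response.model can be sketched as a small lookup helper. This is illustrative code, not Beakpoint's implementation:

```python
def pricing_model(attrs):
    """Return the model name to use for pricing lookups.

    gen_ai.response.model, when present, takes precedence over
    gen_ai.request.model (illustrative helper, not Beakpoint's code).
    """
    return attrs.get("gen_ai.response.model") or attrs["gen_ai.request.model"]

attrs = {
    "gen_ai.request.model": "gpt-4.1",
    "gen_ai.response.model": "gpt-4.1-2025-04-14",
}
model = pricing_model(attrs)  # the dated response model wins
```

Preferring the response model matters because the API may resolve an alias like gpt-4.1 to a dated snapshot with its own pricing entry.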

Supported Models

Beakpoint calculates costs for the following OpenAI models. Prices are per 1 million tokens (USD).

| Model | Input ($/M) | Cached Input ($/M) | Output ($/M) |
| --- | --- | --- | --- |
| gpt-4.1 | $2.00 | $0.50 | $8.00 |
| gpt-4.1-mini | $0.40 | $0.10 | $1.60 |
| gpt-4.1-nano | $0.10 | $0.025 | $0.40 |
Note: Prices reflect OpenAI list pricing and may change. Beakpoint keeps these rates up to date, but check the OpenAI pricing page for the latest figures.
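Putting the table and the cached-token discount together, the cost of a single call can be estimated as below. This sketch assumes cached tokens are counted within gen_ai.usage.input_tokens, so they are subtracted before applying the full input rate:

```python
# Per-million-token rates from the table above (USD)
PRICING = {
    "gpt-4.1":      {"input": 2.00, "cached": 0.50,  "output": 8.00},
    "gpt-4.1-mini": {"input": 0.40, "cached": 0.10,  "output": 1.60},
    "gpt-4.1-nano": {"input": 0.10, "cached": 0.025, "output": 0.40},
}

def estimate_cost(model, input_tokens, output_tokens, cached_tokens=0):
    """Estimate the USD cost of one call.

    Illustrative sketch; assumes cached tokens are a subset of
    input_tokens, so only the uncached portion pays the full input rate.
    """
    rates = PRICING[model]
    uncached = input_tokens - cached_tokens
    return (uncached * rates["input"]
            + cached_tokens * rates["cached"]
            + output_tokens * rates["output"]) / 1_000_000

# e.g. gpt-4.1-mini call: 512 input tokens (256 from cache), 128 output
cost = estimate_cost("gpt-4.1-mini", 512, 128, cached_tokens=256)
```

For the example values above, 256 uncached input tokens at $0.40/M, 256 cached tokens at $0.10/M, and 128 output tokens at $1.60/M sum to roughly $0.00033.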

Python Example

The quickest way to emit the required attributes is with the opentelemetry-instrumentation-openai-v2 package, which automatically attaches GenAI semantic conventions to every OpenAI API call.

```shell
pip install opentelemetry-instrumentation-openai-v2
```

```python
from openai import OpenAI
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

# Instrument before creating the client
OpenAIInstrumentor().instrument()

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
```

The instrumentation automatically sets gen_ai.system, gen_ai.request.model, gen_ai.usage.input_tokens, gen_ai.usage.output_tokens, and the optional enrichment attributes whenever they are available in the API response.

For full setup instructions, including how to configure the OpenTelemetry exporter for Beakpoint, see the Track LLM Costs guide.