# OpenAI

## Required OpenAI Attributes
Beakpoint uses the following span attributes to calculate costs for traces originating from OpenAI API calls.
### Always Required Fields
These fields are always required for Beakpoint to calculate OpenAI costs:
| Attribute Name | Example Value | Allowed Values |
|---|---|---|
| `gen_ai.system` | `openai` | `openai` (must be this exact value) |
| `gen_ai.request.model` | `gpt-4.1` | Any valid OpenAI model name |
| `gen_ai.usage.input_tokens` | `512` | Non-negative integer |
| `gen_ai.usage.output_tokens` | `128` | Non-negative integer |
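The checks in the table above can be sketched as a small validator. This is an illustrative helper, not part of Beakpoint's API; the function name `validate_openai_span` and its return shape are assumptions for the example:

```python
REQUIRED_ATTRIBUTES = {
    "gen_ai.system",
    "gen_ai.request.model",
    "gen_ai.usage.input_tokens",
    "gen_ai.usage.output_tokens",
}


def validate_openai_span(attributes: dict) -> list[str]:
    """Return a list of problems; an empty list means the span
    carries everything needed for OpenAI cost calculation."""
    problems = [
        f"missing attribute: {name}"
        for name in sorted(REQUIRED_ATTRIBUTES - attributes.keys())
    ]
    # gen_ai.system must be exactly "openai"
    if attributes.get("gen_ai.system") != "openai":
        problems.append("gen_ai.system must be exactly 'openai'")
    # Token counts must be non-negative integers
    for name in ("gen_ai.usage.input_tokens", "gen_ai.usage.output_tokens"):
        value = attributes.get(name)
        if value is not None and (not isinstance(value, int) or value < 0):
            problems.append(f"{name} must be a non-negative integer")
    return problems
```

A span using the example values from the table validates cleanly, while a span with the wrong `gen_ai.system` value does not.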
### Optional Enrichment Attributes
These fields are optional but improve cost accuracy when provided:
| Attribute Name | Example Value | Description |
|---|---|---|
| `gen_ai.response.model` | `gpt-4.1-2025-04-14` | The exact model version returned in the response. When present, this takes precedence over `gen_ai.request.model` for pricing lookups. |
| `gen_ai.usage.input_tokens.cached` | `256` | Number of input tokens served from the prompt cache. Cached tokens are billed at a reduced rate. |
| `gen_ai.usage.output_tokens.reasoning` | `64` | Number of tokens used for internal reasoning (o-series models). Billed at the standard output token rate. |
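The precedence rule for model names can be expressed in one line. A hypothetical sketch, assuming the lookup simply prefers the response model when it is set:

```python
def resolve_pricing_model(attributes: dict) -> str:
    """Pick the model name used for the pricing lookup: the exact
    response model when present, otherwise the requested model."""
    return attributes.get("gen_ai.response.model") or attributes["gen_ai.request.model"]
```

This matters because a request for an alias like `gpt-4.1` may be served by a dated snapshot such as `gpt-4.1-2025-04-14`, and the snapshot is the more precise key for pricing.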
## Supported Models
Beakpoint calculates costs for the following OpenAI models. Prices are per 1 million tokens (USD).
| Model | Input ($/M) | Cached Input ($/M) | Output ($/M) |
|---|---|---|---|
| gpt-4.1 | $2.00 | $0.50 | $8.00 |
| gpt-4.1-mini | $0.40 | $0.10 | $1.60 |
| gpt-4.1-nano | $0.10 | $0.025 | $0.40 |
Prices reflect OpenAI list pricing and may change. Beakpoint keeps these rates up to date, but check the OpenAI pricing page for the latest figures.
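The per-call cost implied by the table can be estimated as follows. This is an illustrative sketch of the arithmetic, not Beakpoint's actual implementation; cached tokens are billed at the reduced cached-input rate and the remaining input tokens at the full rate:

```python
# USD per 1 million tokens, copied from the table above
PRICES_PER_MILLION = {
    "gpt-4.1":      {"input": 2.00, "cached_input": 0.50,  "output": 8.00},
    "gpt-4.1-mini": {"input": 0.40, "cached_input": 0.10,  "output": 1.60},
    "gpt-4.1-nano": {"input": 0.10, "cached_input": 0.025, "output": 0.40},
}


def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int,
                      cached_input_tokens: int = 0) -> float:
    """Estimate the USD cost of one call from its token counts."""
    p = PRICES_PER_MILLION[model]
    uncached = input_tokens - cached_input_tokens
    return (uncached * p["input"]
            + cached_input_tokens * p["cached_input"]
            + output_tokens * p["output"]) / 1_000_000
```

With the example values used throughout this page (gpt-4.1, 512 input tokens of which 256 are cached, 128 output tokens), this comes to $0.001664.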
## Python Example
The quickest way to emit the required attributes is with the opentelemetry-instrumentation-openai-v2 package, which automatically attaches GenAI semantic conventions to every OpenAI API call.
```shell
pip install opentelemetry-instrumentation-openai-v2
```
```python
from openai import OpenAI
from opentelemetry.instrumentation.openai_v2 import OpenAIInstrumentor

# Instrument before creating the client
OpenAIInstrumentor().instrument()

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4.1-mini",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
The instrumentation automatically sets `gen_ai.system`, `gen_ai.request.model`, `gen_ai.usage.input_tokens`, `gen_ai.usage.output_tokens`, and the optional enrichment attributes whenever they are available in the API response.
For full setup instructions, including how to configure the OpenTelemetry exporter for Beakpoint, see the Track LLM Costs guide.
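If you export over OTLP, the exporter can typically be pointed at your backend with the standard OpenTelemetry environment variables. The endpoint and key below are placeholders, not real Beakpoint values; substitute the ones from your project settings:

```shell
# Standard OTLP exporter settings (placeholder values shown)
export OTEL_EXPORTER_OTLP_ENDPOINT="https://<your-beakpoint-endpoint>"
export OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <your-api-key>"
export OTEL_SERVICE_NAME="my-llm-app"
```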