The genai_input_cost function calculates the cost of input tokens (prompt tokens) for a GenAI API call based on the model name and number of input tokens. This helps you understand and track the cost of prompts separately from responses. You can use this function to analyze prompt costs, optimize prompt engineering for cost efficiency, track input spending separately, or create detailed cost breakdowns.

For users of other query languages

If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you need to look up pricing and calculate costs manually:
| lookup model_pricing model OUTPUT input_price
| eval input_cost=(input_tokens * input_price / 1000000)
In ANSI SQL, you join with a pricing table and calculate input costs:
SELECT 
  l.*,
  (l.input_tokens * p.input_price / 1000000) as input_cost
FROM ai_logs l
JOIN model_pricing p ON l.model = p.model_name

Usage

Syntax

genai_input_cost(model, input_tokens)

Parameters

  • model (string, required): The name of the AI model (for example, 'gpt-4', 'claude-3-opus', 'gpt-3.5-turbo').
  • input_tokens (long, required): The number of input tokens (prompt tokens) used in the API call.

Returns

Returns a real number representing the cost in dollars (USD) for the input tokens based on the model’s pricing.
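The arithmetic behind this function can be sketched as follows: prices for GenAI models are typically quoted per million tokens, so the cost is the token count multiplied by the per-million price, divided by 1,000,000. The pricing values and helper below are illustrative placeholders, not Axiom's actual pricing table or implementation.

```python
# Sketch of how an input-token cost is computed.
# The per-million-token prices below are placeholder values,
# not real pricing for these models.
PRICE_PER_MILLION_INPUT = {
    "gpt-4": 30.0,          # placeholder USD per 1M input tokens
    "gpt-3.5-turbo": 0.50,  # placeholder
}

def input_cost(model: str, input_tokens: int) -> float:
    """Return the input-token cost in USD, or 0.0 for unknown models."""
    price = PRICE_PER_MILLION_INPUT.get(model, 0.0)
    return input_tokens * price / 1_000_000

print(input_cost("gpt-4", 152_000))  # 152k tokens at a $30/1M rate -> 4.56
```

Because the divisor is one million, small prompts cost fractions of a cent; aggregating with `sum()` over many calls (as in the query example below) is what makes the totals meaningful.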

Use case examples

  • Log analysis
  • OpenTelemetry traces
  • Security logs
Analyze input costs separately to understand how much you spend on prompts versus responses.

Query
['sample-http-logs']
| where uri contains '/api/openai'
| extend model_name = tostring(todynamic(response_body)['model'])
| extend prompt_tokens = tolong(todynamic(response_body)['usage']['prompt_tokens'])
| extend prompt_cost = genai_input_cost(model_name, prompt_tokens)
| summarize total_prompt_cost = sum(prompt_cost) by model_name, bin(_time, 1h)
Output
| _time                | model_name    | total_prompt_cost |
| -------------------- | ------------- | ----------------- |
| 2024-01-15T10:00:00Z | gpt-4         | 4.56              |
| 2024-01-15T10:00:00Z | gpt-3.5-turbo | 0.23              |
This query breaks down prompt costs by model and time, helping you understand where prompt spending occurs.

List of related functions

  • genai_output_cost: Calculates output token cost. Use this alongside input costs to understand the full cost breakdown.
  • genai_cost: Calculates total cost (input + output). Use this when you need combined costs.
  • genai_get_pricing: Gets pricing information. Use this to understand the pricing structure behind cost calculations.
  • genai_estimate_tokens: Estimates token count from text. Combine with input cost to predict prompt costs before API calls.
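Combining token estimation with input pricing, as the last bullet suggests, can be sketched like this. The chars-divided-by-4 heuristic and the $30-per-million price are rough placeholder assumptions, not the actual behavior of genai_estimate_tokens or any real model's pricing.

```python
# Sketch: predict a prompt's input cost before making an API call.
# Both the token heuristic and the price are illustrative assumptions.
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def estimate_prompt_cost(text: str, price_per_million: float = 30.0) -> float:
    """Estimated input cost in USD at a placeholder per-million-token price."""
    return estimate_tokens(text) * price_per_million / 1_000_000

prompt = "Summarize the following document in three bullet points: ..."
print(f"~{estimate_tokens(prompt)} tokens, ~${estimate_prompt_cost(prompt):.6f}")
```

A pre-call estimate like this is useful for budgeting or rejecting oversized prompts; the authoritative cost still comes from the actual prompt_tokens count returned by the API.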