The genai_is_truncated function checks whether an AI model response was truncated due to reaching token limits or other constraints. It analyzes the finish reason returned by the API to determine if the response was cut short. You can use this function to identify incomplete responses, monitor quality issues, detect token limit problems, or track when conversations need continuation.

For users of other query languages

If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
In Splunk SPL, you would check the finish_reason field manually.
| eval is_truncated=if(finish_reason="length", "true", "false")
In ANSI SQL, you would check the finish_reason field value.
SELECT
  conversation_id,
  CASE WHEN finish_reason = 'length' THEN true ELSE false END AS is_truncated
FROM ai_logs

Usage

Syntax

genai_is_truncated(messages, finish_reason)

Parameters

  • messages (dynamic, required): An array of message objects from a GenAI conversation. Each message typically contains role and content fields.
  • finish_reason (string, required): The finish reason returned by the AI API (such as 'stop', 'length', 'content_filter', 'tool_calls').

Returns

Returns a boolean value: true if the response was truncated (typically when finish_reason is 'length'), false otherwise.
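As a rough illustration of the decision this function makes, here is a minimal Python sketch. This is an approximation, not Axiom's implementation: it assumes truncation is signaled solely by a finish reason of 'length', and accepts the messages array only for parity with the APL signature.

```python
def genai_is_truncated_sketch(messages, finish_reason):
    """Approximate the truncation check: a response is considered
    truncated when the API stopped generating because it hit a
    token limit (finish_reason == 'length')."""
    return finish_reason == "length"

# Typical finish reasons and the resulting value
print(genai_is_truncated_sketch([], "length"))          # True
print(genai_is_truncated_sketch([], "stop"))            # False
print(genai_is_truncated_sketch([], "content_filter"))  # False
```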

Use case examples

  • Log analysis
  • OpenTelemetry traces
  • Security logs
Monitor the rate of truncated responses to understand if token limits are causing quality issues.

Query
['sample-http-logs']
| where uri contains '/api/chat'
| extend finish = tostring(todynamic(response_body)['choices'][0]['finish_reason'])
| extend is_truncated = genai_is_truncated(todynamic(response_body)['messages'], finish)
| summarize 
    truncated_count = countif(is_truncated),
    total_count = count(),
    truncation_rate = round(100.0 * countif(is_truncated) / count(), 2)
by bin(_time, 1h)
Output
_time                  truncated_count  total_count  truncation_rate
2024-01-15T10:00:00Z   45               1450         3.10
2024-01-15T11:00:00Z   52               1523         3.41
This query tracks the rate of truncated responses over time, helping you identify when token limits are causing problems.
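The same aggregation can be sketched outside APL. The following Python snippet, using hypothetical log records invented here for illustration, groups responses into hourly buckets and computes the truncation rate the way the query's summarize step does:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical log records: (timestamp, finish_reason)
logs = [
    ("2024-01-15T10:05:00Z", "length"),
    ("2024-01-15T10:12:00Z", "stop"),
    ("2024-01-15T10:47:00Z", "stop"),
    ("2024-01-15T11:03:00Z", "length"),
    ("2024-01-15T11:30:00Z", "stop"),
]

# Equivalent of `bin(_time, 1h)`: truncate each timestamp to the hour
buckets = defaultdict(lambda: {"truncated": 0, "total": 0})
for ts, finish in logs:
    hour = datetime.fromisoformat(ts.replace("Z", "+00:00")).replace(
        minute=0, second=0)
    buckets[hour]["total"] += 1
    if finish == "length":       # the condition genai_is_truncated tests
        buckets[hour]["truncated"] += 1

# Equivalent of countif/count and the rounded percentage
for hour, b in sorted(buckets.items()):
    rate = round(100.0 * b["truncated"] / b["total"], 2)
    print(hour.isoformat(), b["truncated"], b["total"], rate)
```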
List of related functions

  • genai_estimate_tokens: Estimates token count. Use this to predict if responses might be truncated before making API calls.
  • genai_conversation_turns: Counts conversation turns. Analyze this alongside truncation to understand context length issues.
  • genai_extract_assistant_response: Extracts assistant responses. Use this to examine truncated responses.
  • strlen: Returns string length. Use this to analyze the length of truncated responses.