Overview

In long-running voice AI conversations, context grows with every exchange. This increases token usage, raises costs, and can eventually hit context window limits. Pipecat includes built-in context summarization that automatically compresses older conversation history while preserving recent messages and important context.

How It Works

Context summarization triggers automatically when either of the following conditions is met:
  • Token limit reached: Context size exceeds max_context_tokens (estimated using ~4 characters per token)
  • Message count reached: The number of messages added since the last summary exceeds max_unsummarized_messages
When triggered, the system:
  1. Sends an LLMContextSummaryRequestFrame to the LLM service
  2. The LLM generates a concise summary of older messages
  3. Context is reconstructed as: [system_message] + [summary] + [recent_messages]
  4. Incomplete function call sequences and recent messages are preserved
Context summarization is asynchronous and happens in the background without blocking the pipeline. The system uses request IDs to match summary requests with results and handles interruptions gracefully.
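As a rough illustration of the trigger logic (a sketch, not Pipecat's internal code), the check amounts to comparing a character-based token estimate and a count of messages added since the last summary against the configured limits:

# Illustrative sketch only -- not the library's actual implementation.
def should_summarize(messages, new_message_count,
                     max_context_tokens=8000, max_unsummarized_messages=20):
    # Estimate tokens at roughly 4 characters per token
    estimated_tokens = sum(
        len(m["content"]) for m in messages if isinstance(m.get("content"), str)
    ) // 4
    return (
        estimated_tokens > max_context_tokens
        or new_message_count > max_unsummarized_messages
    )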

Enabling Context Summarization

Enable summarization by setting enable_context_summarization=True in LLMAssistantAggregatorParams:
from pipecat.processors.aggregators.llm_response_universal import (
    LLMAssistantAggregatorParams,
    LLMContextAggregatorPair,
)

# Create aggregators with summarization enabled
# (`context` is the conversation context object created earlier)
user_aggregator, assistant_aggregator = LLMContextAggregatorPair(
    context,
    assistant_params=LLMAssistantAggregatorParams(
        enable_context_summarization=True,
    ),
)
With the default configuration, summarization triggers at 8000 estimated tokens or after 20 new messages, whichever comes first.
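In a typical Pipecat pipeline, the user aggregator sits before the LLM service and the assistant aggregator after the transport output. The sketch below assumes transport, stt, llm, and tts are services you have already configured; no extra pipeline stages are needed for summarization, since it runs inside the assistant aggregator:

from pipecat.pipeline.pipeline import Pipeline

# Sketch of where the aggregators sit; service setup omitted.
pipeline = Pipeline([
    transport.input(),
    stt,
    user_aggregator,        # aggregates user messages into the context
    llm,
    tts,
    transport.output(),
    assistant_aggregator,   # aggregates assistant responses; triggers summarization
])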

Customizing Behavior

Use LLMContextSummarizationConfig to control when and how summarization occurs:
from pipecat.utils.context.llm_context_summarization import LLMContextSummarizationConfig

user_aggregator, assistant_aggregator = LLMContextAggregatorPair(
    context,
    assistant_params=LLMAssistantAggregatorParams(
        enable_context_summarization=True,
        context_summarization_config=LLMContextSummarizationConfig(
            max_context_tokens=8000,           # Trigger at 8000 tokens
            target_context_tokens=6000,        # Target summary size
            max_unsummarized_messages=20,      # Or trigger after 20 new messages
            min_messages_after_summary=4,      # Keep last 4 messages uncompressed
            summarization_prompt=None,         # Custom prompt (optional)
        ),
    ),
)
Configuration parameters:
  • max_context_tokens (default: 8000): Maximum context size, in estimated tokens, before summarization triggers
  • target_context_tokens (default: 6000): Target token count for the generated summary
  • max_unsummarized_messages (default: 20): Maximum number of new messages before summarization triggers
  • min_messages_after_summary (default: 4): Number of recent messages to preserve uncompressed
  • summarization_prompt (default: None): Custom prompt for summary generation (uses the built-in default if None)
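For example, a cost-sensitive deployment might summarize more aggressively by lowering the limits. The values below are illustrative, not recommendations:

aggressive_config = LLMContextSummarizationConfig(
    max_context_tokens=4000,        # summarize sooner
    target_context_tokens=2500,     # aim for a smaller summary
    max_unsummarized_messages=10,   # or after 10 new messages
    min_messages_after_summary=2,   # keep only the last 2 messages verbatim
)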

What Gets Preserved

Context summarization intelligently preserves:
  • System messages: The first system message (defining assistant behavior) is always kept
  • Recent messages: The last N messages stay uncompressed (configured by min_messages_after_summary)
  • Function call sequences: Incomplete function call/result pairs are not split during summarization
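Conceptually, a summarized context looks like the sketch below (message contents abbreviated; the actual summary text is produced by the LLM):

# Before summarization (illustrative):
#   [system, user_1, assistant_1, ..., user_12, assistant_12]
#
# After summarization, with min_messages_after_summary=4 (illustrative):
#   [system,                                  # first system message always kept
#    summary_of(user_1 ... assistant_10),     # older turns compressed into one message
#    user_11, assistant_11,
#    user_12, assistant_12]                   # last 4 messages left uncompressed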

Custom Summarization Prompts

You can override the default summarization prompt to control how the LLM generates summaries:
custom_prompt = """Summarize this conversation concisely.
Focus on: key decisions, user preferences, and action items.
Keep the summary under {target_tokens} tokens."""

config = LLMContextSummarizationConfig(
    summarization_prompt=custom_prompt,
)
When no custom prompt is provided, Pipecat uses a built-in prompt that instructs the LLM to create a concise summary preserving key information, user preferences, and conversation flow.
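Putting it together, the custom prompt is passed through the summarization config into the assistant aggregator params, just as in the earlier examples:

user_aggregator, assistant_aggregator = LLMContextAggregatorPair(
    context,
    assistant_params=LLMAssistantAggregatorParams(
        enable_context_summarization=True,
        context_summarization_config=LLMContextSummarizationConfig(
            summarization_prompt=custom_prompt,
        ),
    ),
)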

Next Steps