TranscriptProcessor
Factory for creating and managing user and assistant transcript processors with shared event handling
Overview
TranscriptProcessor is a factory that creates and manages processors for handling conversation transcripts from both users and assistants. It provides unified access to transcript processors with shared event handling, making it easy to track and respond to conversation updates in real time.
The processor normalizes messages from various LLM services (OpenAI, Anthropic, Google) into a consistent format and emits events when new messages are added to the conversation.
Check out the transcript processor examples for OpenAI, Anthropic, and Google Gemini to see it in action.
Factory Methods
user(): Creates or returns a UserTranscriptProcessor instance that handles user messages
assistant(): Creates or returns an AssistantTranscriptProcessor instance that handles assistant messages
event_handler(): Registers event handlers that are applied to both processors
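To illustrate how one registration reaches both processors, here is a minimal, self-contained stand-in sketch of the factory pattern in plain Python. The class and method names below mirror this page but are not Pipecat's actual implementation:

```python
# Stand-in sketch of the factory pattern (not Pipecat's real implementation):
# a handler registered once is attached to both processors, whether they are
# created before or after registration.
class _Processor:
    def __init__(self):
        self._handlers = {}

    def add_event_handler(self, event, handler):
        self._handlers.setdefault(event, []).append(handler)

    def emit(self, event, *args):
        # Call every handler registered for this event, passing the
        # processor itself first (matching the documented handler signature).
        for handler in self._handlers.get(event, []):
            handler(self, *args)


class TranscriptProcessorFactory:
    def __init__(self):
        self._user = None
        self._assistant = None
        self._pending = []  # handlers registered before a processor exists

    def user(self):
        # Create on first call, then return the same instance.
        if self._user is None:
            self._user = _Processor()
            for event, handler in self._pending:
                self._user.add_event_handler(event, handler)
        return self._user

    def assistant(self):
        if self._assistant is None:
            self._assistant = _Processor()
            for event, handler in self._pending:
                self._assistant.add_event_handler(event, handler)
        return self._assistant

    def event_handler(self, event):
        # Decorator: record the handler and attach it to any processor
        # that already exists.
        def decorator(handler):
            self._pending.append((event, handler))
            for proc in (self._user, self._assistant):
                if proc is not None:
                    proc.add_event_handler(event, handler)
            return handler
        return decorator
```

The point of the pattern is that user() and assistant() are memoized, so the pipeline and the event-handler registration always see the same two instances.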
Events
on_transcript_update: Emitted when new messages are added to the conversation transcript. The handler receives:
- processor: the TranscriptProcessor instance that emitted the event
- frame: a TranscriptionUpdateFrame containing the new messages
Transcription Messages
Transcription messages are normalized to this format:
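A sketch of the normalized shape, with field names assumed from the rest of this page (roles, message content, timestamps); the library defines the actual class:

```python
from dataclasses import dataclass
from typing import Literal, Optional

# Stand-in sketch of the normalized message shape; field names are
# assumptions based on this page, not the library's own definition.
@dataclass
class TranscriptionMessage:
    role: Literal["user", "assistant"]  # who produced the message
    content: str                        # the transcribed or generated text
    timestamp: Optional[str] = None     # ISO 8601 time the message was added
```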
Pipeline Integration
The TranscriptProcessor is designed to be used in a pipeline to process conversation transcripts in real time. In the pipeline:
- The UserTranscriptProcessor (transcript.user()) handles TranscriptionFrames from the STT service
- The AssistantTranscriptProcessor (transcript.assistant()) handles TextFrames from the LLM service
Ensure that the processors are placed after their respective services in the pipeline. See the Basic Usage example below for more details.
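The placement rule above might look like the following wiring sketch. The transport, stt, llm, tts, and context_aggregator objects are hypothetical and assumed to be configured elsewhere; the point is where the two processors sit:

```python
# Hypothetical pipeline wiring sketch; the key detail is that each
# transcript processor is placed after the service it observes.
transcript = TranscriptProcessor()

pipeline = Pipeline([
    transport.input(),
    stt,                      # speech-to-text service
    transcript.user(),        # after STT: receives TranscriptionFrames
    context_aggregator.user(),
    llm,
    transcript.assistant(),   # after the LLM: receives TextFrames
    tts,
    transport.output(),
    context_aggregator.assistant(),
])
```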
Usage Examples
Basic Usage
This example shows basic usage of the TranscriptProcessor factory.
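A minimal sketch, assuming the import path and event name used in the Pipecat examples linked above (both may differ between versions):

```python
# Sketch only: the module path and the "on_transcript_update" event name
# are taken from Pipecat's examples and may vary by version.
from pipecat.processors.transcript_processor import TranscriptProcessor

transcript = TranscriptProcessor()

@transcript.event_handler("on_transcript_update")
async def handle_transcript_update(processor, frame):
    # frame.messages contains only the newly added messages
    for message in frame.messages:
        print(f"{message.role}: {message.content}")
```

The transcript.user() and transcript.assistant() processors are then placed in the pipeline as described under Pipeline Integration.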
Maintaining Conversation History
This example extends Basic Usage with a custom handler that maintains conversation history and logs each new message with its timestamp.
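A self-contained sketch of such a handler. TranscriptionMessage here is a stand-in for the normalized message type described earlier, so the snippet can run on its own; in a real pipeline the handler would be registered via event_handler("on_transcript_update"):

```python
from dataclasses import dataclass

# Stand-in for the normalized message type described in
# "Transcription Messages"; the library provides the real one.
@dataclass
class TranscriptionMessage:
    role: str
    content: str
    timestamp: str


class TranscriptHandler:
    """Accumulates the full conversation and logs each new message."""

    def __init__(self):
        self.messages = []  # complete conversation history

    async def on_transcript_update(self, processor, frame):
        # frame.messages holds only the newly added messages
        for msg in frame.messages:
            self.messages.append(msg)
            print(f"[{msg.timestamp}] {msg.role}: {msg.content}")
```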
Frame Flow
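At a high level, frames move through the two processors as follows (a simplified sketch):

```
STT service ──TranscriptionFrame──▶ transcript.user()      ──▶ TranscriptionUpdateFrame ──▶ on_transcript_update handlers
LLM service ──TextFrame──────────▶ transcript.assistant() ──▶ TranscriptionUpdateFrame ──▶ on_transcript_update handlers
```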
Notes
- Supports multiple LLM services (OpenAI, Anthropic, Google)
- Normalizes message formats from different services
- Maintains conversation history with timestamps
- Emits events for real-time transcript updates
- Thread-safe for concurrent processing