OpenTelemetry Tracing
Monitor and analyze your Pipecat conversational pipelines using OpenTelemetry
Overview
Pipecat includes built-in support for OpenTelemetry tracing, allowing you to gain deep visibility into your voice applications. Tracing helps you:
- Track latency and performance across your conversation pipeline
- Monitor service health and identify bottlenecks
- Visualize conversation turns and service dependencies
- Collect usage metrics and operational analytics
Installation
To use OpenTelemetry tracing with Pipecat, install the tracing dependencies:
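If you install from PyPI, the optional tracing extra (the extra name here is an assumption; check the packaging for your Pipecat version) pulls in the OpenTelemetry SDK and exporters:

```bash
pip install "pipecat-ai[tracing]"
```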
For local development and testing, we recommend using Jaeger as a trace collector. You can run it with Docker:
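For example, the standard Jaeger all-in-one image exposes the UI on port 16686 and accepts OTLP over gRPC (4317) and HTTP (4318):

```bash
docker run -d --name jaeger \
  -p 16686:16686 \
  -p 4317:4317 \
  -p 4318:4318 \
  jaegertracing/all-in-one:latest
```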
Then access the Jaeger UI at http://localhost:16686.
Basic Setup
Enabling tracing in your Pipecat application requires two steps (sketched below):
- Initialize the OpenTelemetry SDK with your preferred exporter
- Enable tracing in your PipelineTask
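A minimal sketch of both steps, assuming the setup_tracing() helper and PipelineTask parameters documented later on this page (import paths may differ across Pipecat versions):

```python
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.utils.tracing.setup import setup_tracing

# Step 1: initialize the OpenTelemetry SDK with your preferred exporter
setup_tracing(
    service_name="my-voice-app",
    exporter=OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True),
)

# Step 2: enable tracing on the PipelineTask; enable_metrics populates
# TTFB and usage attributes on the service spans
task = PipelineTask(
    pipeline,  # your existing Pipeline instance
    params=PipelineParams(enable_metrics=True),
    enable_tracing=True,
)
```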
For complete working examples, see our sample implementations:
- Jaeger Tracing Example - Uses gRPC exporter with Jaeger
- Langfuse Tracing Example - Uses HTTP exporter with Langfuse for LLM-focused observability
Trace Structure
Pipecat organizes traces hierarchically, following the natural structure of conversations:
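A representative hierarchy (span names vary with the services in your pipeline):

```
conversation
└── turn
    ├── stt   (speech-to-text)
    ├── llm   (language model inference)
    └── tts   (text-to-speech synthesis)
```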
For real-time multimodal services like Gemini Live and OpenAI Realtime, the structure adapts to their specific patterns:
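For example, a sketch based on the span types documented below (the exact span names depend on the service):

```
conversation
└── turn
    ├── setup span               (session configuration, tools)
    ├── request span             (OpenAI Realtime: context sent to the model)
    ├── response span            (assistant output, token usage)
    └── tool call / result spans (Gemini Live function calling)
```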
This hierarchical structure makes it easy to:
- Track the full lifecycle of a conversation
- Measure latency for individual turns
- Identify which services are contributing to delays
- Compare performance across different conversations
Exporter Options
Pipecat supports any OpenTelemetry-compatible exporter. Common options include:
OTLP Exporter (for Jaeger, Grafana, etc.)
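A minimal gRPC OTLP exporter pointed at a local collector; the endpoint below matches the Jaeger all-in-one defaults and should be adjusted for your backend:

```python
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

from pipecat.utils.tracing.setup import setup_tracing

exporter = OTLPSpanExporter(
    endpoint="http://localhost:4317",  # collector gRPC endpoint
    insecure=True,                     # no TLS for local development
)

setup_tracing(service_name="my-voice-app", exporter=exporter)
```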
HTTP OTLP Exporter (for Langfuse, etc.)
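A sketch of an HTTP exporter configured for Langfuse; the endpoint and environment variable names are assumptions, so defer to the Langfuse example for the authoritative setup:

```python
import base64
import os

from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

# Langfuse accepts OTLP over HTTP with Basic auth built from your API keys
# (LANGFUSE_PUBLIC_KEY / LANGFUSE_SECRET_KEY are assumed environment variables)
auth = base64.b64encode(
    f"{os.environ['LANGFUSE_PUBLIC_KEY']}:{os.environ['LANGFUSE_SECRET_KEY']}".encode()
).decode()

exporter = OTLPSpanExporter(
    endpoint="https://cloud.langfuse.com/api/public/otel/v1/traces",
    headers={"Authorization": f"Basic {auth}"},
)
```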
See our Langfuse example for details on configuring this exporter.
Console Exporter (for debugging)
The console exporter can be enabled alongside any other exporter by setting console_export=True.
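A quick sketch (the exporter variable stands in for whichever primary exporter you configured above):

```python
from pipecat.utils.tracing.setup import setup_tracing

setup_tracing(
    service_name="my-voice-app",
    exporter=exporter,      # your primary exporter from the sections above
    console_export=True,    # additionally print spans to the console
)
```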
Cloud Provider Exporters
Many cloud providers offer OpenTelemetry-compatible observability services:
- AWS X-Ray
- Google Cloud Trace
- Azure Monitor
- Datadog APM
Check the OpenTelemetry documentation for specific exporter configurations: OpenTelemetry Vendors
Span Attributes
Pipecat enriches spans with detailed attributes about service operations:
TTS Service Spans
- gen_ai.system: Service provider (e.g., “cartesia”)
- gen_ai.request.model: Model ID/name
- voice_id: Voice identifier
- text: The text being synthesized
- metrics.character_count: Number of characters in the text
- metrics.ttfb: Time to first byte in seconds
- settings.*: Service-specific configuration parameters
STT Service Spans
- gen_ai.system: Service provider (e.g., “deepgram”)
- gen_ai.request.model: Model ID/name
- transcript: The transcribed text
- is_final: Whether the transcription is final
- language: Detected or configured language
- vad_enabled: Whether voice activity detection is enabled
- metrics.ttfb: Time to first byte in seconds
- settings.*: Service-specific configuration parameters
LLM Service Spans
- gen_ai.system: Service provider (e.g., “openai”, “gcp.gemini”)
- gen_ai.request.model: Model ID/name
- gen_ai.operation.name: Operation type (e.g., “chat”)
- stream: Whether streaming is enabled
- input: JSON-serialized input messages
- output: Complete response text
- tools: JSON-serialized tools configuration
- tools.count: Number of tools available
- tools.names: Comma-separated tool names
- system: System message content
- gen_ai.usage.input_tokens: Number of prompt tokens
- gen_ai.usage.output_tokens: Number of completion tokens
- metrics.ttfb: Time to first byte in seconds
- gen_ai.request.*: Standard parameters (temperature, max_tokens, etc.)
Multimodal Service Spans (Gemini Live & OpenAI Realtime)
Setup Spans
- gen_ai.system: “gcp.gemini” or “openai”
- gen_ai.request.model: Model identifier
- tools.count: Number of available tools
- tools.definitions: JSON-serialized tool schemas
- system_instruction: System prompt (truncated)
- session.*: Session configuration parameters
Request Spans (OpenAI Realtime)
- input: JSON-serialized context messages being sent
- gen_ai.operation.name: “llm_request”
Response Spans
- output: Complete assistant response text
- output_modality: “TEXT” or “AUDIO” (Gemini Live)
- gen_ai.usage.input_tokens: Prompt tokens used
- gen_ai.usage.output_tokens: Completion tokens generated
- function_calls.count: Number of function calls made
- function_calls.names: Comma-separated function names
- metrics.ttfb: Time to first response in seconds
Tool Call/Result Spans (Gemini Live)
- tool.function_name: Name of the function being called
- tool.call_id: Unique identifier for the call
- tool.arguments: Function arguments (truncated)
- tool.result: Function execution result (truncated)
- tool.result_status: “completed”, “error”, or “parse_error”
Turn Spans
- turn.number: Sequential turn number
- turn.type: Type of turn (e.g., “conversation”)
- turn.duration_seconds: Duration of the turn
- turn.was_interrupted: Whether the turn was interrupted
- conversation.id: ID of the parent conversation
Conversation Spans
- conversation.id: Unique identifier for the conversation
- conversation.type: Type of conversation (e.g., “voice”)
Usage Metrics
Pipecat’s tracing implementation automatically captures usage metrics for LLM and TTS services:
LLM Token Usage
Token usage is captured in LLM spans as:
- gen_ai.usage.input_tokens
- gen_ai.usage.output_tokens
TTS Character Count
Character count is captured in TTS spans as:
- metrics.character_count
Performance Metrics
Pipecat traces capture key performance metrics for each service:
Time To First Byte (TTFB)
The time it takes for a service to produce its first response:
- metrics.ttfb (in seconds)
Processing Duration
The total time spent processing in each service is captured in the span duration.
Configuration Options
PipelineTask Parameters
- enable_tracing: Enable or disable tracing for the pipeline
- enable_turn_tracking: Whether to enable turn tracking
- conversation_id: Custom ID for the conversation. If not provided, a UUID will be generated
- additional_span_attributes: Any additional attributes to add to the top-level OpenTelemetry conversation span
setup_tracing() Parameters
- service_name: Name of the service for traces
- exporter: A pre-configured OpenTelemetry span exporter instance
- console_export: Whether to also export traces to the console (useful for debugging)
Example
Here’s a complete example showing OpenTelemetry tracing setup with Jaeger:
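A sketch of the end-to-end setup, assuming the Jaeger all-in-one collector from the Installation section; the pipeline contents are placeholders for your own processors:

```python
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.utils.tracing.setup import setup_tracing

# 1. Configure the OpenTelemetry SDK with a gRPC OTLP exporter pointed at Jaeger
setup_tracing(
    service_name="pipecat-demo",
    exporter=OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True),
    console_export=False,
)

# 2. Build your pipeline as usual (transport, STT, LLM, TTS, aggregators, ...)
pipeline = Pipeline([...])  # placeholder: substitute your processors

# 3. Enable tracing on the task; enable_metrics populates TTFB and usage attributes
task = PipelineTask(
    pipeline,
    params=PipelineParams(enable_metrics=True),
    enable_tracing=True,
    enable_turn_tracking=True,
    conversation_id="customer-123",  # optional custom conversation ID
)
```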
Troubleshooting
If you’re having issues with tracing:
- No Traces Visible: Ensure the OpenTelemetry packages are installed and that your collector endpoint is correct
- Missing Service Data: Verify that enable_metrics=True is set in PipelineParams
- Debugging Tracing: Enable console export with console_export=True to view traces in your logs
- Connection Errors: Check network connectivity to your trace collector
- Collector Configuration: Verify your collector is properly set up to receive traces