Monitor and analyze your Pipecat conversational pipelines using OpenTelemetry tracing. During development, setting `console_export=True` prints spans to the console so you can inspect traces directly in your logs.
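To see what that option amounts to in plain OpenTelemetry terms, the sketch below configures a global tracer provider with an OTLP exporter plus a console exporter. The endpoint, service name, and backend are illustrative assumptions, not Pipecat defaults.

```python
# A minimal sketch of the OpenTelemetry setup behind console and OTLP export.
# The endpoint and service name are illustrative values, not Pipecat defaults.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "pipecat-voice-app"})
)

# Ship spans to an OTLP-compatible backend (Jaeger, Tempo, Grafana, etc.).
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True))
)

# Mirror spans to stdout -- the effect of enabling console export.
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))

# Register globally so instrumented code, including Pipecat, picks it up.
trace.set_tracer_provider(provider)
```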
Pipecat records the following attributes on each type of span.

**TTS service spans**

- `gen_ai.system`: Service provider (e.g., "cartesia")
- `gen_ai.request.model`: Model ID/name
- `voice_id`: Voice identifier
- `text`: The text being synthesized
- `metrics.character_count`: Number of characters in the text
- `metrics.ttfb`: Time to first byte in seconds
- `settings.*`: Service-specific configuration parameters
: Service provider (e.g., “deepgram”)gen_ai.request.model
: Model ID/nametranscript
: The transcribed textis_final
: Whether the transcription is finallanguage
: Detected or configured languagevad_enabled
: Whether voice activity detection is enabledmetrics.ttfb
: Time to first byte in secondssettings.*
: Service-specific configuration parametersgen_ai.system
**LLM service spans**

- `gen_ai.system`: Service provider (e.g., "openai", "gcp.gemini")
- `gen_ai.request.model`: Model ID/name
- `gen_ai.operation.name`: Operation type (e.g., "chat")
- `stream`: Whether streaming is enabled
- `input`: JSON-serialized input messages
- `output`: Complete response text
- `tools`: JSON-serialized tools configuration
- `tools.count`: Number of tools available
- `tools.names`: Comma-separated tool names
- `system`: System message content
- `gen_ai.usage.input_tokens`: Number of prompt tokens
- `gen_ai.usage.output_tokens`: Number of completion tokens
- `metrics.ttfb`: Time to first byte in seconds
- `gen_ai.request.*`: Standard parameters (temperature, max_tokens, etc.)
**LLM setup spans**

- `gen_ai.system`: "gcp.gemini" or "openai"
- `gen_ai.request.model`: Model identifier
- `tools.count`: Number of available tools
- `tools.definitions`: JSON-serialized tool schemas
- `system_instruction`: System prompt (truncated)
- `session.*`: Session configuration parameters
**LLM request spans**

- `input`: JSON-serialized context messages being sent
- `gen_ai.operation.name`: "llm_request"
**LLM response spans**

- `output`: Complete assistant response text
- `output_modality`: "TEXT" or "AUDIO" (Gemini Live)
- `gen_ai.usage.input_tokens`: Prompt tokens used
- `gen_ai.usage.output_tokens`: Completion tokens generated
- `function_calls.count`: Number of function calls made
- `function_calls.names`: Comma-separated function names
- `metrics.ttfb`: Time to first response in seconds
**Tool call spans**

- `tool.function_name`: Name of the function being called
- `tool.call_id`: Unique identifier for the call
- `tool.arguments`: Function arguments (truncated)
- `tool.result`: Function execution result (truncated)
- `tool.result_status`: "completed", "error", or "parse_error"
**Turn spans**

- `turn.number`: Sequential turn number
- `turn.type`: Type of turn (e.g., "conversation")
- `turn.duration_seconds`: Duration of the turn
- `turn.was_interrupted`: Whether the turn was interrupted
- `conversation.id`: ID of the parent conversation
**Conversation spans**

- `conversation.id`: Unique identifier for the conversation
- `conversation.type`: Type of conversation (e.g., "voice")
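Since these are ordinary OpenTelemetry span attributes, they can be consumed with a standard span processor. The sketch below is an illustration rather than part of Pipecat: it tallies token usage and surfaces failed tool calls and interrupted turns as spans end, using the attribute names listed above.

```python
# Illustrative consumer of the span attributes listed above, using only the
# standard OpenTelemetry SDK. Attach it to your tracer provider.
from opentelemetry.sdk.trace import ReadableSpan, SpanProcessor


class UsageAndErrorProcessor(SpanProcessor):
    """Tallies token usage and logs failed tool calls and interrupted turns."""

    def __init__(self) -> None:
        self.input_tokens = 0
        self.output_tokens = 0

    def on_end(self, span: ReadableSpan) -> None:
        attrs = span.attributes or {}

        # Aggregate token usage across LLM spans.
        self.input_tokens += attrs.get("gen_ai.usage.input_tokens", 0)
        self.output_tokens += attrs.get("gen_ai.usage.output_tokens", 0)

        # Surface failed tool calls.
        if attrs.get("tool.result_status") in ("error", "parse_error"):
            print(f"Tool call failed: {attrs.get('tool.function_name')}")

        # Flag interrupted turns.
        if attrs.get("turn.was_interrupted"):
            print(f"Turn {attrs.get('turn.number')} was interrupted")


# Attach to the tracer provider configured earlier:
# provider.add_span_processor(UsageAndErrorProcessor())
```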
Spans also carry usage metrics: token counts (`gen_ai.usage.input_tokens`, `gen_ai.usage.output_tokens`), TTS character counts (`metrics.character_count`), and time to first byte (`metrics.ttfb`, in seconds). These metrics are only captured when `enable_metrics=True` is set in `PipelineParams`. To view traces in your logs while debugging, enable `console_export=True`.
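For reference, a sketch of the pipeline-side configuration: `enable_metrics` is taken from the text above, while the `enable_tracing` flag and the exact import path are assumptions that may differ between Pipecat versions.

```python
# Sketch of enabling the metrics that feed the span attributes above.
# `enable_metrics` comes from this doc; `enable_tracing` and the import path
# are assumptions and may differ in your Pipecat version.
from pipecat.pipeline.task import PipelineParams, PipelineTask


def make_task(pipeline) -> PipelineTask:
    """Wrap an assembled Pipecat pipeline in a task with metrics enabled."""
    return PipelineTask(
        pipeline,
        params=PipelineParams(
            enable_metrics=True,   # required for ttfb/token/character metrics
            enable_tracing=True,   # assumed flag for emitting OpenTelemetry spans
        ),
    )
```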