Add function calling to your Pipecat bot. Examples are available for each LLM provider supported in Pipecat. View Recipe →
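A minimal sketch of function-call registration, assuming a recent Pipecat release (the `FunctionCallParams` handler style; older releases use a positional handler signature) and OpenAI as the provider. The weather tool and its schema are invented for illustration.

```python
from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import ToolsSchema
from pipecat.services.llm_service import FunctionCallParams
from pipecat.services.openai.llm import OpenAILLMService

async def fetch_weather(params: FunctionCallParams):
    location = params.arguments.get("location")
    # A real handler would call a weather API here.
    await params.result_callback({"location": location, "conditions": "sunny"})

weather_fn = FunctionSchema(
    name="get_current_weather",
    description="Get the current weather for a location",
    properties={"location": {"type": "string"}},
    required=["location"],
)
tools = ToolsSchema(standard_tools=[weather_fn])

llm = OpenAILLMService(api_key="...", model="gpt-4o")
llm.register_function("get_current_weather", fetch_weather)
# Pass `tools` into your LLM context so the model knows the function exists.
```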
Collect audio frames from the user and bot for later processing or storage. View Recipe →
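A rough sketch, assuming the `AudioBufferProcessor` API from recent releases; the actual WAV-writing is left out, and the placement shown in the comment is one common choice, not the only one.

```python
from pipecat.processors.audio.audio_buffer_processor import AudioBufferProcessor

audiobuffer = AudioBufferProcessor()

# Placed after transport.output() so it captures both the user's and the
# bot's audio, e.g.:
# Pipeline([transport.input(), stt, ..., tts, transport.output(), audiobuffer])

@audiobuffer.event_handler("on_audio_data")
async def on_audio_data(buffer, audio: bytes, sample_rate: int, num_channels: int):
    # `audio` is raw PCM; write it to a WAV file, upload it, etc.
    print(f"Got {len(audio)} bytes at {sample_rate} Hz")

# Recording starts on demand, typically when the client connects:
# await audiobuffer.start_recording()
```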
Capture user and bot transcripts for later processing or storage. View Recipe →
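A minimal sketch, assuming the `TranscriptProcessor` API from recent releases: one processor factory produces a user-side and an assistant-side stage, and a single event fires as transcript messages accumulate.

```python
from pipecat.processors.transcript_processor import TranscriptProcessor

transcript = TranscriptProcessor()

# transcript.user() sits after the STT service, transcript.assistant() after
# transport.output(), so each side of the conversation is captured.

@transcript.event_handler("on_transcript_update")
async def on_transcript_update(processor, frame):
    for message in frame.messages:
        print(f"[{message.timestamp}] {message.role}: {message.content}")
```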
Play a background sound in your Pipecat bot. The audio is mixed with the transport audio to create a single integrated audio stream. View Recipe →
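A short sketch of wiring a background-audio mixer into the output transport, assuming the `SoundfileMixer` API; the file name is a placeholder, and the file should match the transport's output sample rate.

```python
from pipecat.audio.mixers.soundfile_mixer import SoundfileMixer
from pipecat.transports.base_transport import TransportParams

# "office-ambience.wav" is a placeholder for any file soundfile can read.
mixer = SoundfileMixer(
    sound_files={"office": "office-ambience.wav"},
    default_sound="office",
    volume=0.4,
)

# The mixer blends its audio with the bot's speech into one output stream.
params = TransportParams(audio_out_enabled=True, audio_out_mixer=mixer)
```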
Specify a mute strategy to mute user input, letting the bot continue speaking without interruption. View Recipe →
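A minimal sketch, assuming the `STTMuteFilter` API: strategies are passed as a set, and the filter typically sits between transport input and the STT service. The two strategies chosen here are just examples.

```python
from pipecat.processors.filters.stt_mute_filter import (
    STTMuteConfig,
    STTMuteFilter,
    STTMuteStrategy,
)

# Mute the user until the bot finishes its first response and during
# function calls; other strategies (e.g. ALWAYS) are also available.
stt_mute = STTMuteFilter(
    config=STTMuteConfig(
        strategies={
            STTMuteStrategy.MUTE_UNTIL_FIRST_BOT_COMPLETION,
            STTMuteStrategy.FUNCTION_CALL,
        }
    )
)
# Typical placement: Pipeline([transport.input(), stt_mute, stt, ...])
```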
Use a wake phrase to activate your Pipecat bot. View Recipe →
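A rough sketch, assuming `WakeCheckFilter` takes a list of wake phrases; check the constructor signature in your release, and treat the phrase itself as a placeholder.

```python
from pipecat.processors.filters.wake_check_filter import WakeCheckFilter

# Drops transcriptions until a wake phrase is heard, then lets speech
# through for a keepalive window before going back to sleep.
wake_filter = WakeCheckFilter(["Hey Pipecat"])

# Typical placement: Pipeline([transport.input(), stt, wake_filter, ...])
```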
Play sound effects in your Pipecat bot. View Recipe →
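One straightforward way to play an effect is to load a WAV file into an `OutputAudioRawFrame` and queue it for playback; the file name here is a placeholder, and `task` is assumed to be your running `PipelineTask`.

```python
import wave

from pipecat.frames.frames import OutputAudioRawFrame

def load_sound(path: str) -> OutputAudioRawFrame:
    """Read a WAV file into a frame the output transport can play."""
    with wave.open(path, "rb") as f:
        return OutputAudioRawFrame(
            audio=f.readframes(f.getnframes()),
            sample_rate=f.getframerate(),
            num_channels=f.getnchannels(),
        )

ding = load_sound("ding.wav")  # placeholder file
# Play it by queueing it on the running task (or pushing from a processor):
# await task.queue_frame(ding)
```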
A ParallelPipeline example showing how to dynamically switch languages. View Recipe →
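A sketch of the branching idea: each language gets its own branch with its own TTS, and `FunctionFilter` gates which branch frames flow into. `transport`, `stt`, `llm`, and the two TTS services are assumed to be constructed elsewhere.

```python
from pipecat.pipeline.parallel_pipeline import ParallelPipeline
from pipecat.pipeline.pipeline import Pipeline
from pipecat.processors.filters.function_filter import FunctionFilter

current_language = "english"  # flipped at runtime, e.g. by a function call

async def english_filter(frame) -> bool:
    return current_language == "english"

async def spanish_filter(frame) -> bool:
    return current_language == "spanish"

# Only the branch matching the active language receives frames.
pipeline = Pipeline([
    transport.input(), stt, llm,
    ParallelPipeline(
        [FunctionFilter(filter=english_filter), english_tts],
        [FunctionFilter(filter=spanish_filter), spanish_tts],
    ),
    transport.output(),
])
```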
Detect when a user is idle and automatically respond. View Recipe →
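A minimal sketch, assuming the retry-count callback variant of `UserIdleProcessor` from recent releases (older releases use a single-argument callback); the timeout and nudge message are arbitrary.

```python
from pipecat.frames.frames import TTSSpeakFrame
from pipecat.processors.user_idle_processor import UserIdleProcessor

async def handle_user_idle(user_idle: UserIdleProcessor, retry_count: int) -> bool:
    if retry_count <= 2:
        # Nudge the user a couple of times...
        await user_idle.push_frame(TTSSpeakFrame("Are you still there?"))
        return True   # keep monitoring
    return False      # ...then stop prompting

user_idle = UserIdleProcessor(callback=handle_user_idle, timeout=8.0)
# Typical placement: Pipeline([transport.input(), user_idle, stt, ...])
```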
Learn how to debug your Pipecat bot with an Observer that watches frames flowing through the pipeline. View Recipe →
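A rough sketch of a custom observer, assuming the `FramePushed` data-object callback from recent releases (older releases pass positional arguments) and that `PipelineTask` accepts an `observers` list; `pipeline` is assumed to be built already.

```python
from pipecat.observers.base_observer import BaseObserver, FramePushed
from pipecat.pipeline.task import PipelineTask

class FrameLogger(BaseObserver):
    """Logs every frame pushed between two processors in the pipeline."""

    async def on_push_frame(self, data: FramePushed):
        print(f"{data.source} -> {data.destination}: {data.frame}")

task = PipelineTask(pipeline, observers=[FrameLogger()])
```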
A live graphical debugger for the Pipecat voice and multimodal conversational AI framework. It lets you visualize pipelines and debug frames in real time, so you can see exactly what your bot is thinking and doing. View Recipe →
Parse a user's email address from the LLM response. View Recipe →
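One way to do this is a custom `FrameProcessor` that regex-scans the LLM's text output; the class and pattern here are invented for illustration, and since the LLM streams text in chunks, a robust version would buffer across frames before matching.

```python
import re

from pipecat.frames.frames import Frame, TextFrame
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

class EmailParser(FrameProcessor):
    """Scans LLM text output for email addresses."""

    async def process_frame(self, frame: Frame, direction: FrameDirection):
        await super().process_frame(frame, direction)
        if isinstance(frame, TextFrame):
            for email in EMAIL_RE.findall(frame.text):
                print(f"Found email: {email}")
        await self.push_frame(frame, direction)

# Placed after the LLM so it sees the streamed response text.
```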
Detect when a user has finished speaking and automatically respond. Learn more about the smart-turn model. View Recipe →
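A rough sketch, assuming the smart-turn analyzer plugs into the transport's `turn_analyzer` parameter alongside a VAD analyzer; the module paths, class names, and constructor arguments here follow recent releases and may differ in yours.

```python
from pipecat.audio.turn.smart_turn.local_smart_turn_v2 import LocalSmartTurnAnalyzerV2
from pipecat.audio.vad.silero import SileroVADAnalyzer
from pipecat.transports.base_transport import TransportParams

params = TransportParams(
    audio_in_enabled=True,
    vad_analyzer=SileroVADAnalyzer(),          # detects raw speech vs. silence
    turn_analyzer=LocalSmartTurnAnalyzerV2(),  # judges whether the turn is complete
)
```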
Use MCP tools to interact with external services. View Recipe →
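A minimal sketch, assuming Pipecat's `MCPClient` and a stdio MCP server; the filesystem server and its arguments are just an example, and `llm` is assumed to be constructed elsewhere.

```python
from mcp import StdioServerParameters

from pipecat.services.mcp_service import MCPClient

mcp = MCPClient(
    server_params=StdioServerParameters(
        command="npx",
        args=["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
    )
)

# Inside your async setup, with `llm` already constructed:
# tools = await mcp.register_tools(llm)
# ...then pass `tools` into the LLM context as with any function calling.
```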
Learn how to configure interruption strategies for your Pipecat bot. View Recipe →
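A short sketch, assuming interruption strategies are passed via `PipelineParams` as in recent releases; `pipeline` is assumed to be built already, and the three-word threshold is arbitrary.

```python
from pipecat.audio.interruptions.min_words_interruption_strategy import (
    MinWordsInterruptionStrategy,
)
from pipecat.pipeline.task import PipelineParams, PipelineTask

# The bot is only interrupted once the user has spoken at least three words.
task = PipelineTask(
    pipeline,
    params=PipelineParams(
        allow_interruptions=True,
        interruption_strategies=[MinWordsInterruptionStrategy(min_words=3)],
    ),
)
```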
Pass a video frame from a live video stream to a model and get a description. View Recipe →
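A rough sketch using Moondream as the vision service; class and module names follow the foundational examples and may differ across releases, and `rgb_bytes` stands in for a frame captured from your video transport.

```python
from pipecat.frames.frames import VisionImageRawFrame
from pipecat.services.moondream.vision import MoondreamService

moondream = MoondreamService()

# Wrap a captured frame plus a question into one frame for the service.
frame = VisionImageRawFrame(
    text="What do you see?",
    image=rgb_bytes,  # raw image bytes from your video transport
    size=(1280, 720),
    format="RGB",
)
# Pushed into a pipeline containing `moondream`, this yields a text description.
```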
Handle user and bot end-of-turn events to add custom logic after a turn. View Recipe →
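One simple way to react to turn boundaries is a custom `FrameProcessor` that watches the speaking-state frames; the class here is invented for illustration, and the recipe may use a different mechanism.

```python
from pipecat.frames.frames import (
    BotStoppedSpeakingFrame,
    Frame,
    UserStoppedSpeakingFrame,
)
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor

class TurnWatcher(FrameProcessor):
    """Runs custom logic whenever a user or bot turn ends."""

    async def process_frame(self, frame: Frame, direction: FrameDirection):
        await super().process_frame(frame, direction)
        if isinstance(frame, UserStoppedSpeakingFrame):
            print("User turn ended")  # custom post-turn logic goes here
        elif isinstance(frame, BotStoppedSpeakingFrame):
            print("Bot turn ended")
        await self.push_frame(frame, direction)
```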