Strategy for managing context during transitions to this node.
```python
# Example context strategy configuration
"context_strategy": ContextStrategyConfig(
    strategy=ContextStrategy.RESET_WITH_SUMMARY,
    summary_prompt="Summarize the key points discussed so far."
)
```
Actions that execute before the LLM inference. For example, you can send a message to the TTS to speak a phrase (e.g. “Hold on a moment…”), which may be effective if an LLM function call takes time to execute.
```python
# Example pre_actions
"pre_actions": [
    {
        "type": "tts_say",
        "text": "Hold on a moment..."
    }
],
```
Required when using RESET_WITH_SUMMARY. Prompt text for generating the
conversation summary.
```python
# Example usage
config = ContextStrategyConfig(
    strategy=ContextStrategy.RESET_WITH_SUMMARY,
    summary_prompt="Summarize the key points discussed so far."
)
```
You cannot specify both transition_to and transition_callback in the same
function schema.
Example usage:
```python
from pipecat_flows import FlowsFunctionSchema

# Define a function schema
collect_name_schema = FlowsFunctionSchema(
    name="collect_name",
    description="Record the user's name",
    properties={
        "name": {
            "type": "string",
            "description": "The user's name"
        }
    },
    required=["name"],
    handler=collect_name_handler,
    transition_to="next_node"
)

# Use in node configuration
node_config = {
    "task_messages": [
        {"role": "system", "content": "Ask the user for their name."}
    ],
    "functions": [collect_name_schema]
}

# Pass to flow manager
await flow_manager.set_node("greeting", node_config)
```
```python
# Access current conversation context
context = flow_manager.get_current_context()

# Use in handlers
async def process_response(args: FlowArgs) -> FlowResult:
    context = flow_manager.get_current_context()
    # Process conversation history
    return {"status": "success"}
```
Function handlers can be defined with three different signatures:
```python
async def handler_with_flow_manager(args: FlowArgs, flow_manager: FlowManager) -> FlowResult:
    """Modern handler that receives both arguments and FlowManager access."""
    # Access state
    previous_data = flow_manager.state.get("stored_data")

    # Access pipeline resources
    await flow_manager.task.queue_frame(TTSSpeakFrame("Processing your request..."))

    # Store data in state for later
    flow_manager.state["new_data"] = args["input"]

    return {
        "status": "success",
        "result": "Processed with flow access"
    }
```
The framework automatically detects which signature your handler is using and calls it appropriately.
pre_actions and post_actions are used to manage conversation flow. They are included in the NodeConfig and executed before and after the LLM completion, respectively.
Three kinds of actions are available:
- Pre-canned actions: These actions perform common tasks and require little configuration.
- Function actions: These actions run developer-defined functions at the appropriate time.
- Custom actions: These are fully developer-defined actions, providing flexibility at the expense of complexity.
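To make the distinction concrete, here is a sketch of a `post_actions` list mixing all three kinds. The `"function"` action shape and the custom `"my_custom"` type are assumptions about the API for illustration, not verbatim from this page:

```python
# Pre-canned action: common task, minimal configuration
say_goodbye = {"type": "tts_say", "text": "Thanks for chatting!"}

# Function action: runs a developer-defined function in sequence
# (assumption: function actions take the shape {"type": "function", "handler": ...})
async def log_completion(action: dict) -> None:
    print("Node completed")

log_action = {"type": "function", "handler": log_completion}

# Custom action: a developer-defined type, previously registered with
# the flow manager (assumed registration API; check the Flows reference)
custom_action = {"type": "my_custom"}

node_config = {
    "post_actions": [say_goodbye, log_action, custom_action],
}
```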
Actions that run developer-defined functions at the appropriate time. For example, if used in post_actions, they’ll run after the bot has finished talking and after any previous post_actions have finished.
Fully developer-defined actions, providing flexibility at the expense of complexity.
Here’s the complexity: because these actions aren’t queued in the Pipecat pipeline, they may execute seemingly early if used in post_actions; they’ll run immediately after the LLM completion is triggered but won’t wait around for the bot to finish talking.
Why would you want this behavior? You might be writing an action that:
Itself just queues another Frame into the Pipecat pipeline (meaning there would be no benefit to waiting around for sequencing purposes)
Does work that can be done a bit sooner, like logging that the LLM was updated