Each node is defined by a NodeConfig and may contain the following properties:

- name: The name of the node, used as a reference when transitioning to the node.
- role_message: A string defining the bot's role/personality. Sent as the LLM's system instruction and persists across transitions until changed. Typically set once in the initial node.
- task_messages: A list of message dicts defining the current node's objectives.
- functions: A list of function call definitions and their corresponding handlers.
- pre_actions: Actions to execute before LLM inference. Actions run once upon transitioning to a node.
- post_actions: Actions to execute after LLM inference. Actions run once after the node's initial LLM inference.
- context_strategy: Strategy for updating context during transitions. The default behavior is to append messages to the context.
- respond_immediately: Whether to run LLM inference as soon as the node is set. The default is True.
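Putting these properties together, a node might look like the following sketch as a plain Python dict. The node name, prompts, and values here are illustrative, not taken from the Pipecat Flows source:

```python
# A sketch of a node configuration; only task_messages is required.
# All names and prompt text below are illustrative assumptions.
initial_node = {
    "name": "greeting",
    "role_message": "You are a friendly assistant for a pizza shop.",
    "task_messages": [
        {
            "role": "system",
            "content": "Greet the caller and ask for their name.",
        }
    ],
    "respond_immediately": True,  # default: run LLM inference on node entry
}
```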
The only required field is task_messages, as your bot always needs a prompt to advance the conversation.

Messages
Messages define what your bot should do and how it should behave at each node in your conversation flow.

Message Types
There are two types of messages you can configure:

Role Message (Optional)
Defines your bot's personality, tone, and overall behavior as a plain string. This is sent as the LLM's system instruction and persists across node transitions until a new node explicitly sets it again. Typically set once in the initial node.

Task Messages (Required)
Define the specific objective your bot should accomplish in the current node. These messages focus the LLM on the immediate task at hand, such as asking a specific question or processing particular information.

Message Format
The role message is a plain string, while task messages use OpenAI format as a list of dicts:
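For instance, the two message types might look like this sketch (the prompt text is illustrative):

```python
# role_message: a plain string describing the bot's persona.
role_message = "You are a concise, friendly scheduling assistant."

# task_messages: OpenAI-format message dicts for this node's objective.
task_messages = [
    {
        "role": "system",
        "content": "Ask the user which day they'd like to book an appointment.",
    }
]
```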
Cross-Provider Compatibility
Task messages use Pipecat's default OpenAI message format and are automatically translated to work with your chosen LLM provider. The role_message is sent as the LLM's system instruction via LLMUpdateSettingsFrame, which is handled by each provider's implementation.
Respond Immediately
For each node in the conversation, you can decide whether the LLM should respond immediately upon entering the node (the default behavior) or wait for the user to speak first. You control this with the respond_immediately field.
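For example, a node that stays silent until the user speaks might be configured like this sketch (the node name and prompt are illustrative):

```python
# With respond_immediately set to False, the bot does not run LLM
# inference on entering the node; it waits for the user to speak first.
listening_node = {
    "name": "wait_for_user",
    "task_messages": [
        {
            "role": "system",
            "content": "Once the user speaks, answer their question briefly.",
        }
    ],
    "respond_immediately": False,
}
```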