
The activation model

In a multi-agent system, only one agent is active at a time (per bridge). The active agent receives frames from the bus; inactive agents exist but don't process frames. Every agent has an active property that defaults to True, so in handoff scenarios the voice agents should start with active=False and the parent then activates the first one explicitly, keeping only one active at a time. You control which agent is active using three methods:
Method                         What it does
activate_agent(name, args)     Activate another agent
deactivate_agent(name)         Deactivate another agent
handoff_to(name, args)         Deactivate self, then activate the target
handoff_to() is the most common — it’s a single call that transfers control from the current agent to another.
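The relationship between the three methods can be sketched with a toy registry. This is illustrative Python only, not the library's implementation; ToyAgent and registry are stand-ins invented here to show that handoff_to() behaves like deactivating self and then activating the target:

```python
# Toy sketch of the activation model (not the library's real code).
class ToyAgent:
    def __init__(self, name, registry, active=False):
        self.name = name
        self.active = active
        self.registry = registry
        registry[name] = self

    def activate_agent(self, name):
        self.registry[name].active = True

    def deactivate_agent(self, name):
        self.registry[name].active = False

    def handoff_to(self, name):
        # Deactivate self, then activate the target -- a single transfer call
        self.deactivate_agent(self.name)
        self.activate_agent(name)

registry = {}
greeter = ToyAgent("greeter", registry, active=True)
support = ToyAgent("support", registry)

greeter.handoff_to("support")
active = [a.name for a in registry.values() if a.active]
# active == ["support"] -- exactly one agent is active after the handoff
```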

Building a handoff system

Let’s walk through how the two-agent handoff works. You need three pieces:

1. A main agent with a bus bridge

The main agent owns the transport (audio I/O) and places a BusBridgeProcessor in its pipeline instead of an LLM. The bridge routes frames to whichever LLM agent is active:
import os

# Pipecat import paths vary by version; these match recent releases
from pipecat.audio.vad.silero import SileroVADAnalyzer
from pipecat.pipeline.pipeline import Pipeline
from pipecat.services.cartesia.tts import CartesiaTTSService
from pipecat.services.deepgram.stt import DeepgramSTTService

from pipecat_subagents.agents import BaseAgent
from pipecat_subagents.bus import BusBridgeProcessor

class MainAgent(BaseAgent):
    def __init__(self, name, *, bus, transport):
        super().__init__(name, bus=bus)
        self._transport = transport

    async def build_pipeline(self) -> Pipeline:
        stt = DeepgramSTTService(api_key=os.getenv("DEEPGRAM_API_KEY"))
        tts = CartesiaTTSService(api_key=os.getenv("CARTESIA_API_KEY"))

        context = LLMContext()
        context_aggregator = LLMContextAggregatorPair(
            context,
            user_params=LLMUserAggregatorParams(vad_analyzer=SileroVADAnalyzer()),
        )

        bridge = BusBridgeProcessor(bus=self.bus, agent_name=self.name)

        return Pipeline([
            self._transport.input(),
            stt,
            context_aggregator.user(),
            bridge,                        # Where the LLM would go
            tts,
            self._transport.output(),
            context_aggregator.assistant(),
        ])

2. LLM agents with bridged=()

LLM agents set bridged=() to receive frames from the bus. Each one runs its own LLM and defines its own system prompt:
import os

from pipecat_subagents.agents import LLMAgent

class GreeterAgent(LLMAgent):
    def __init__(self, name, *, bus):
        super().__init__(name, bus=bus, bridged=())

    def build_llm(self) -> LLMService:
        return OpenAILLMService(
            api_key=os.getenv("OPENAI_API_KEY"),
            settings=OpenAILLMSettings(
                system_instruction="You are a friendly greeter. Route product questions to support.",
            ),
        )
bridged=() means the agent receives frames from all bridges. You can filter by bridge name with bridged=("voice",) if you have multiple bridges.
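The filtering rule above can be written out as a one-line predicate. This is a toy restatement of the documented semantics, not library code; receives_from is a name made up here:

```python
# Toy sketch of bridge filtering semantics (illustrative, not library code).
def receives_from(bridged, bridge_name):
    # bridged=() -> receive from every bridge; otherwise only the named ones
    return bridged == () or bridge_name in bridged

receives_from((), "voice")          # True: empty tuple matches all bridges
receives_from(("voice",), "voice")  # True: bridge name is listed
receives_from(("voice",), "video")  # False: bridge name is filtered out
```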

3. Handoff via tools

The LLM decides when to transfer by calling a tool. The tool calls handoff_to():
from pipecat.services.llm_service import FunctionCallParams  # path may vary by pipecat version
from pipecat_subagents.agents import LLMAgentActivationArgs, tool

class GreeterAgent(LLMAgent):
    # ... build_llm() as above ...

    @tool(cancel_on_interruption=False)
    async def transfer_to_agent(self, params: FunctionCallParams, agent: str, reason: str):
        """Transfer the user to another agent.

        Args:
            agent (str): The agent to transfer to (e.g. 'support').
            reason (str): Why the user is being transferred.
        """
        await self.handoff_to(
            agent,
            activation_args=LLMAgentActivationArgs(
                messages=[{"role": "developer", "content": reason}],
            ),
            result_callback=params.result_callback,
        )
For LLMAgent, handoff_to() also accepts a messages parameter. These messages are injected and spoken by the current agent before the transfer happens — useful for announcing the handoff:
await self.handoff_to(
    agent,
    messages=[{"role": "developer", "content": f"Tell the user about the transfer ({reason})."}],
    activation_args=LLMAgentActivationArgs(
        messages=[{"role": "developer", "content": reason}],
    ),
    result_callback=params.result_callback,
)
When the greeter calls handoff_to("support", ...):
  1. The greeter is deactivated — it stops receiving frames from the bus
  2. The support agent is activated with the provided arguments
  3. The support agent’s on_activated() fires, injecting the reason message into its LLM context
  4. The support agent starts responding to the user
The transition is seamless — the user experiences it as a natural conversation flow.
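The four steps above can be sketched as a toy sequence. These classes are illustrative stand-ins, not the library's real LLMAgent; the point is the order of operations: deactivate the source, then activate the target and inject the activation messages into its context:

```python
# Toy sketch of the handoff sequence (illustrative only).
class ToyLLMAgent:
    def __init__(self, name):
        self.name = name
        self.active = False
        self.context = []  # stand-in for the agent's LLM message history

    def on_activated(self, messages):
        # Step 3: activation messages are injected into the LLM context
        self.context.extend(messages)
        self.active = True

agents = {"greeter": ToyLLMAgent("greeter"), "support": ToyLLMAgent("support")}
agents["greeter"].active = True

def handoff(source, target, messages):
    agents[source].active = False          # step 1: source stops receiving frames
    agents[target].on_activated(messages)  # steps 2-3: target activated with args

handoff("greeter", "support", [{"role": "developer", "content": "Product question"}])
# agents["support"] is now active, with the reason message in its context
```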

Activation arguments

When activating an LLMAgent, you can pass LLMAgentActivationArgs to give the target agent context about why it was activated:
from pipecat_subagents.agents import LLMAgentActivationArgs

await self.activate_agent(
    "support",
    activation_args=LLMAgentActivationArgs(
        messages=[{"role": "developer", "content": "The user asked about Rocket Boots."}],
    ),
)
The target agent receives these messages in its LLM context and immediately runs the LLM to respond.

Waiting for agents to be ready

Before you can activate an agent, it needs to be ready (its pipeline must be started). Use the @agent_ready decorator to declare a handler that fires when a specific agent registers:
from pipecat_subagents.agents import AgentReadyData, LLMAgentActivationArgs, agent_ready

class MainAgent(BaseAgent):
    @agent_ready(name="greeter")
    async def on_greeter_ready(self, data: AgentReadyData) -> None:
        await self.activate_agent(
            "greeter",
            activation_args=LLMAgentActivationArgs(
                messages=[{"role": "developer", "content": "Welcome the user."}],
            ),
        )
The framework automatically watches for the named agent when the main agent starts. If the agent is already registered, the handler fires immediately. This is the typical pattern: the main agent waits for the first LLM agent, then activates it to start the conversation.
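The two-sided behavior described above (fire on registration, or immediately if already registered) can be modeled with a small toy registry. This is a sketch of the semantics, not the framework's implementation; ToyRegistry is invented here:

```python
# Toy sketch of @agent_ready semantics: a callback fires when the named
# agent registers, or immediately if it is already registered.
class ToyRegistry:
    def __init__(self):
        self.ready = set()
        self.waiters = {}  # agent name -> list of pending callbacks

    def on_ready(self, name, callback):
        if name in self.ready:
            callback()  # agent already registered: fire immediately
        else:
            self.waiters.setdefault(name, []).append(callback)

    def register(self, name):
        self.ready.add(name)
        for cb in self.waiters.pop(name, []):
            cb()  # agent just became ready: fire pending handlers

reg = ToyRegistry()
fired = []
reg.on_ready("greeter", lambda: fired.append("greeter"))  # queued
reg.register("greeter")                                   # fires the handler
reg.on_ready("greeter", lambda: fired.append("again"))    # fires immediately
# fired == ["greeter", "again"]
```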

Putting it all together

Here’s the full flow:
1. Runner starts: AgentRunner creates the bus and starts the main agent's pipeline.
2. Client connects: The main agent creates child LLM agents and adds them via add_agent().
3. Agent becomes ready: The @agent_ready handler fires on the parent. The main agent activates the greeter.
4. User speaks: Audio flows: transport -> STT -> BusBridge -> bus -> greeter's LLM -> bus -> BusBridge -> TTS -> transport.
5. LLM decides to transfer: The greeter's LLM calls transfer_to_agent. The tool calls handoff_to("support", ...).
6. Handoff completes: Greeter deactivates, support activates with context. Audio now flows through the support agent.

What’s next

Handoff transfers a conversation between agents. But sometimes you need agents to do work in parallel. Next, let’s look at task coordination.

Task Coordination

Dispatch work to multiple agents in parallel