In a multi-agent system, only one agent is active at a time (per bridge). The active agent receives frames from the bus; inactive agents exist but don't process frames.

Every agent has an `active` property that defaults to `True`. In handoff scenarios, voice agents should start with `active=False` so only one is active at a time; the parent then activates the first one explicitly.

You control which agent is active using three methods:
| Method | What it does |
| --- | --- |
| `activate_agent(name, args)` | Activate another agent |
| `deactivate_agent(name)` | Deactivate another agent |
| `handoff_to(name, args)` | Deactivate self, then activate the target |
`handoff_to()` is the most common: a single call that transfers control from the current agent to another.
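The semantics of these three calls can be sketched with a minimal, framework-agnostic model. This is an illustration of the activation invariant, not the library's implementation; every class and attribute name below is hypothetical:

```python
class ToyAgent:
    """Minimal stand-in for an agent with an active flag."""

    def __init__(self, name, registry, active=False):
        self.name = name
        self.active = active  # voice agents start inactive in handoff setups
        self.registry = registry
        registry[name] = self

    def activate_agent(self, name):
        self.registry[name].active = True

    def deactivate_agent(self, name):
        self.registry[name].active = False

    def handoff_to(self, name):
        # Deactivate self, then activate the target: the single-call transfer.
        self.active = False
        self.registry[name].active = True


registry = {}
greeter = ToyAgent("greeter", registry, active=True)  # parent activated it first
support = ToyAgent("support", registry)

greeter.handoff_to("support")
print(greeter.active, support.active)  # False True
```

After the handoff, exactly one agent in the registry is active, which is the invariant the bridge relies on when routing frames.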
The main agent owns the transport (audio I/O) and places a BusBridgeProcessor in its pipeline instead of an LLM. The bridge routes frames to whichever LLM agent is active:
```python
from pipecat_subagents.agents import BaseAgent
from pipecat_subagents.bus import BusBridgeProcessor


class MainAgent(BaseAgent):
    def __init__(self, name, *, bus, transport):
        super().__init__(name, bus=bus)
        self._transport = transport

    async def build_pipeline(self) -> Pipeline:
        stt = DeepgramSTTService(api_key=os.getenv("DEEPGRAM_API_KEY"))
        tts = CartesiaTTSService(api_key=os.getenv("CARTESIA_API_KEY"))

        context = LLMContext()
        context_aggregator = LLMContextAggregatorPair(
            context,
            user_params=LLMUserAggregatorParams(vad_analyzer=SileroVADAnalyzer()),
        )

        bridge = BusBridgeProcessor(bus=self.bus, agent_name=self.name)

        return Pipeline([
            self._transport.input(),
            stt,
            context_aggregator.user(),
            bridge,  # Where the LLM would go
            tts,
            self._transport.output(),
            context_aggregator.assistant(),
        ])
```
The LLM decides when to transfer by calling a tool. The tool calls handoff_to():
```python
from pipecat_subagents.agents import tool, LLMAgentActivationArgs


class GreeterAgent(LLMAgent):
    # ... build_llm() as above ...

    @tool(cancel_on_interruption=False)
    async def transfer_to_agent(self, params: FunctionCallParams, agent: str, reason: str):
        """Transfer the user to another agent.

        Args:
            agent (str): The agent to transfer to (e.g. 'support').
            reason (str): Why the user is being transferred.
        """
        await self.handoff_to(
            agent,
            activation_args=LLMAgentActivationArgs(
                messages=[{"role": "developer", "content": reason}],
            ),
            result_callback=params.result_callback,
        )
```
For `LLMAgent`, `handoff_to()` also accepts a `messages` parameter. These messages are injected and spoken by the current agent before the transfer happens, which is useful for announcing the handoff:
```python
await self.handoff_to(
    agent,
    messages=[{"role": "developer", "content": f"Tell the user about the transfer ({reason})."}],
    activation_args=LLMAgentActivationArgs(
        messages=[{"role": "developer", "content": reason}],
    ),
    result_callback=params.result_callback,
)
```
When the greeter calls `handoff_to("support", ...)`:

1. The greeter is deactivated and stops receiving frames from the bus.
2. The support agent is activated with the provided arguments.
3. The support agent's `on_activated()` fires, injecting the reason message into its LLM context.
4. The support agent starts responding to the user.
The transition is seamless — the user experiences it as a natural conversation flow.
When activating an `LLMAgent`, you can pass `LLMAgentActivationArgs` to give the target agent context about why it was activated:
```python
from pipecat_subagents.agents import LLMAgentActivationArgs

await self.activate_agent(
    "support",
    activation_args=LLMAgentActivationArgs(
        messages=[{"role": "developer", "content": "The user asked about Rocket Boots."}],
    ),
)
```
The target agent receives these messages in its LLM context and immediately runs the LLM to respond.
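The effect on the target agent can be modeled in plain Python. This is a sketch of the behavior described above, not the library's code; the class and method names are hypothetical:

```python
class ToyLLMAgent:
    """Toy model: activation messages land in the agent's LLM context."""

    def __init__(self, name):
        self.name = name
        self.context = []   # stands in for the LLM message history
        self.active = False

    def on_activated(self, activation_messages):
        # Inject the handoff context; the real agent would then run its LLM
        # immediately so it can respond to the user.
        self.active = True
        self.context.extend(activation_messages)


support = ToyLLMAgent("support")
support.on_activated([{"role": "developer", "content": "The user asked about Rocket Boots."}])
print(support.context[-1]["content"])  # The user asked about Rocket Boots.
```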
Before you can activate an agent, it needs to be ready (its pipeline must be started). Use the `@agent_ready` decorator to declare a handler that fires when a specific agent registers:
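The original example is not included in this excerpt, but the decorator's semantics (fire when the named agent registers, or immediately if it is already registered) can be sketched framework-agnostically. All names below are hypothetical stand-ins, not the library's API:

```python
class ToyFramework:
    """Toy registry modeling @agent_ready semantics."""

    def __init__(self):
        self.registered = set()
        self.waiters = {}   # agent name -> pending handlers

    def agent_ready(self, name):
        """Decorator: declare a handler for when `name` becomes ready."""
        def decorator(handler):
            if name in self.registered:
                handler(name)          # already registered: fire immediately
            else:
                self.waiters.setdefault(name, []).append(handler)
            return handler
        return decorator

    def register(self, name):
        self.registered.add(name)
        for handler in self.waiters.pop(name, []):
            handler(name)              # fire any handlers waiting on this agent


fw = ToyFramework()
activated = []

@fw.agent_ready("greeter")
def start_conversation(name):
    activated.append(name)   # stand-in for activate_agent(name)

fw.register("greeter")       # the handler fires on registration
print(activated)             # ['greeter']
```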
The framework automatically watches for the named agent when the main agent starts. If the agent is already registered, the handler fires immediately. This is the typical pattern: the main agent waits for the first LLM agent, then activates it to start the conversation.