LLMAgent overview
LLMAgent extends BaseAgent with everything you need to run an LLM-powered agent:
- A pipeline with your LLM service, built automatically
- Tool registration via the @tool decorator
- Activation handling that injects messages and runs the LLM
Subclass LLMAgent and implement build_llm():
The default pipeline contains your LLM (with tools from build_tools() automatically registered). When bridged=() is set, the framework wraps this pipeline with edge processors that connect it to the bus.
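A minimal sketch of the subclassing pattern described above. Note that LLMAgent here is a stand-in base class, and the returned dict stands in for a real LLM service instance; the real framework's classes are assumed, not shown.

```python
# Stand-in base class for illustration only; the real LLMAgent provides
# the pipeline, tool registration, and activation handling.
class LLMAgent:
    def build_tools(self):
        return []

class SupportAgent(LLMAgent):
    def build_llm(self):
        # In the real framework this would return an LLM service instance;
        # a plain dict stands in here.
        return {"service": "my-llm", "tools": self.build_tools()}

agent = SupportAgent()
```

The framework then builds the pipeline around whatever build_llm() returns.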
The @tool decorator
The @tool decorator marks a method as an LLM-callable tool. The framework automatically collects all @tool-decorated methods and registers them with the LLM service.
Parameter descriptions are taken from the Args section in the docstring.
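A hedged sketch of a @tool-decorated method with an Args section. The tool decorator below is a stand-in that simply tags the function; the real framework's decorator is assumed to collect tagged methods in a similar way.

```python
# Stand-in decorator: the real @tool also extracts the schema from the
# signature and docstring.
def tool(func):
    func._is_tool = True
    return func

class WeatherAgent:
    @tool
    async def get_weather(self, params, location: str):
        """Look up current weather conditions.

        Args:
            location: City name to look up, e.g. "Berlin".
        """
        await params.result_callback({"location": location, "conditions": "sunny"})
```

The docstring's first line becomes the tool description, and the Args entry documents the location parameter for the LLM.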
Tool options
The @tool decorator accepts options:
| Option | Default | Description |
|---|---|---|
| cancel_on_interruption | True | Cancel the tool if the user interrupts |
| timeout | None | Maximum execution time in seconds |
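A sketch of passing these options, using a stand-in decorator that records them on the function; the real decorator is assumed to accept the same keyword arguments as the table above.

```python
# Stand-in @tool supporting both bare and parameterized usage.
def tool(func=None, *, cancel_on_interruption=True, timeout=None):
    def wrap(f):
        f._tool_options = {
            "cancel_on_interruption": cancel_on_interruption,
            "timeout": timeout,
        }
        return f
    return wrap if func is None else wrap(func)

class Agent:
    @tool(timeout=30.0, cancel_on_interruption=False)
    async def slow_search(self, params, query: str):
        """Run a slow search that should finish even if the user interrupts."""
        await params.result_callback({"query": query})
```

Here the tool is given 30 seconds to run and keeps running through user interruptions.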
Tool parameters
Every tool method receives self and params: FunctionCallParams as the first two arguments. Additional arguments are the tool’s parameters that the LLM fills in.
The params object gives you access to:
- params.result_callback(result): return the result to the LLM
- params.llm: the LLM service instance, useful for queuing frames
Returning results
Always call params.result_callback() to return the tool result to the LLM:
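A minimal illustration of the callback contract, using a stand-in for FunctionCallParams (the real class has more fields, such as llm):

```python
import asyncio

# Stand-in params object exposing only result_callback.
class FakeParams:
    def __init__(self):
        self.results = []

    async def result_callback(self, result):
        self.results.append(result)

async def get_time(params, timezone: str):
    # Hand the result back to the LLM via the callback.
    await params.result_callback({"timezone": timezone, "time": "12:00"})

params = FakeParams()
asyncio.run(get_time(params, "UTC"))
```

If a tool returns without calling the callback, the LLM never receives a result for the function call.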
Activation with messages
When an LLMAgent is activated, you can inject messages into its context:
The default on_activated() implementation:
- Sets the tools from build_tools()
- Injects the provided messages into the LLM context
- Runs the LLM if run_llm is True (the default)
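The activation steps above can be sketched with stand-in classes; the on_activated() signature shown here is an assumption for illustration, not the framework's confirmed API.

```python
import asyncio

class FakeLLMAgent:
    def __init__(self):
        self.tools = []
        self.messages = []
        self.llm_runs = 0

    def build_tools(self):
        return ["get_weather"]

    async def on_activated(self, messages=None, run_llm=True):
        self.tools = self.build_tools()     # step 1: set the tools
        if messages:
            self.messages.extend(messages)  # step 2: inject messages
        if run_llm:
            self.llm_runs += 1              # step 3: run the LLM

agent = FakeLLMAgent()
asyncio.run(agent.on_activated(messages=[{"role": "user", "content": "hi"}]))
```

Passing run_llm=False would inject the messages without triggering a completion.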
Custom pipelines
If you need more control, you can override build_pipeline() entirely. For example, to add TTS to the agent’s own pipeline:
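A hypothetical sketch of such an override. Pipeline and the service names below are stand-ins, not the framework's real API; the point is only the shape of the override.

```python
# Stand-in pipeline that just holds an ordered list of stages.
class Pipeline:
    def __init__(self, stages):
        self.stages = stages

class SpeakingAgent:
    def build_llm(self):
        return "llm-service"

    def build_tts(self):
        return "tts-service"

    def build_pipeline(self):
        # Place TTS after the LLM so generated text is converted to
        # speech inside the agent's own pipeline.
        return Pipeline([self.build_llm(), self.build_tts()])

pipeline = SpeakingAgent().build_pipeline()
```

Because the framework wraps whatever build_pipeline() returns, the bridged edge processors still connect the custom pipeline to the bus.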