Enable LLMs to interact with external services and APIs in your voice AI pipeline
Pipecat provides a standardized `FunctionSchema` that works across all supported LLM providers. This makes it easy to define functions once and use them with any provider.
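A minimal sketch of defining a function with the standard schema, assuming current Pipecat import paths (`pipecat.adapters.schemas.function_schema` and `pipecat.adapters.schemas.tools_schema`); property definitions follow JSON Schema conventions:

```python
from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import ToolsSchema

# Define the function once; adapters translate it per provider
weather_function = FunctionSchema(
    name="get_current_weather",
    description="Get the current weather for a location",
    properties={
        "location": {
            "type": "string",
            "description": "The city and state, e.g. San Francisco, CA",
        },
        "format": {
            "type": "string",
            "enum": ["celsius", "fahrenheit"],
            "description": "The temperature unit to use",
        },
    },
    required=["location"],
)

tools = ToolsSchema(standard_tools=[weather_function])
```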
As a shorthand, you could also bypass specifying a function configuration at all and instead use "direct" functions. Under the hood, these are converted to `FunctionSchema`s.
The `ToolsSchema` will be automatically converted to the correct format for your LLM provider through adapters.
With direct functions, you can skip specifying a function configuration (either as a `FunctionSchema` or in a provider-specific format) and instead pass the function directly to your `ToolsSchema`. Pipecat will auto-configure the function, gathering relevant metadata from its signature and docstring. Metadata includes:

- Function name
- Description
- Properties, including individual property descriptions
- List of required properties

The function is expected to have a specific signature: its first parameter must be a `FunctionCallParams`, followed by any others the function needs.
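For illustration, a direct-function sketch; it assumes `FunctionCallParams` is importable from `pipecat.services.llm_service` (the path may differ across Pipecat versions):

```python
from pipecat.adapters.schemas.tools_schema import ToolsSchema
from pipecat.services.llm_service import FunctionCallParams

async def get_current_weather(params: FunctionCallParams, location: str, format: str):
    """Get the current weather for a location.

    Args:
        location: The city and state, e.g. San Francisco, CA.
        format: The temperature unit to use, "celsius" or "fahrenheit".
    """
    # Name, description, and property metadata are extracted from the
    # signature and docstring above
    result = {"location": location, "conditions": "sunny", "format": format}
    await params.result_callback(result)

# Pass the function itself; no FunctionSchema needed
tools = ToolsSchema(standard_tools=[get_current_weather])
```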
Register a handler for each function with your LLM service: use `register_function` for functions defined via a `FunctionSchema` (or a provider-specific format), and `register_direct_function` for direct functions.
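A sketch of both registration styles; the `OpenAILLMService` import path and constructor arguments are assumptions based on recent Pipecat releases:

```python
from pipecat.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(api_key="...", model="gpt-4o")

# Schema-defined function: the name must match the FunctionSchema name
llm.register_function("get_current_weather", fetch_weather_from_api)

# Direct function: metadata (including the name) comes from the function itself
llm.register_direct_function(get_current_weather)
```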
When registering a handler, you can control what happens to an in-flight function call if the user interrupts:

- `cancel_on_interruption=True` (default): the function call is cancelled if the user interrupts
- `cancel_on_interruption=False`: the function call continues even if the user interrupts

Use `cancel_on_interruption=False` for critical operations that should complete even if the user starts speaking. Function calls are async, so the conversation can continue while the function executes. Once the result returns, the LLM will automatically incorporate it into the conversation context. LLMs vary in terms of how well they incorporate changes to previous messages, so you may need to experiment with your LLM provider to see how it handles this.
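For example (the handler name here is hypothetical), assuming `cancel_on_interruption` is passed at registration time:

```python
# Let a critical operation finish even if the user starts speaking
llm.register_function(
    "process_payment",
    process_payment_handler,
    cancel_on_interruption=False,
)
```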
Your function handler receives a `FunctionCallParams` object containing all the information needed for execution, including:
- `params.arguments`: the arguments the LLM supplied for the call
- `params.result_callback(result)`: call this async callback to return your function's result
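A minimal handler sketch using those two fields:

```python
async def fetch_weather_from_api(params: FunctionCallParams):
    location = params.arguments.get("location", "Unknown")

    # Replace this stub with a real API call (e.g. via aiohttp)
    result = {"location": location, "conditions": "sunny", "temperature_f": 75}

    # Return the result so the LLM can incorporate it into the conversation
    await params.result_callback(result)
```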
You can control what happens after your function returns by passing a `FunctionCallResultProperties` object to the result callback. It provides fine-grained control over LLM execution:
- `run_llm=True`: run the LLM after the function call (default behavior)
- `run_llm=False`: don't run the LLM after the function call (useful for chained calls)
- `on_context_updated`: async callback executed after the function result is added to context

Only skip the completion (`run_llm=False`) when you have back-to-back function calls. If you skip a completion, you must manually trigger one from the context aggregator.
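A sketch of the chained-call pattern; it assumes `FunctionCallResultProperties` is importable from `pipecat.frames.frames` and that `result_callback` accepts a `properties` keyword (`fake_db_query` is a hypothetical helper):

```python
from pipecat.frames.frames import FunctionCallResultProperties

async def query_database(params: FunctionCallParams):
    results = await fake_db_query(params.arguments)  # hypothetical helper

    async def on_context_updated():
        # Runs once the result has been added to the context;
        # a good place to kick off the next step of a chained call
        print("Context updated with database results")

    properties = FunctionCallResultProperties(
        run_llm=False,  # skip the automatic completion; trigger one manually later
        on_context_updated=on_context_updated,
    )
    await params.result_callback(results, properties=properties)
```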