Learn how to configure language models to generate intelligent responses in your voice AI pipeline
LLM services emit `LLMTextFrame`s, which are used by subsequent processors to create audio output for the bot.
**Input:**

- `OpenAILLMContextFrame`: Contains the conversation history

**Output:**

- `LLMFullResponseStartFrame`: Marks the start of a response
- `LLMTextFrame`s: Contain response tokens, streamed to downstream processors (enables real-time TTS processing)
- `LLMFullResponseEndFrame`: Marks the completion of the response

**Function calling:**

- `FunctionCallsStartedFrame`: Indicates function execution beginning
- `FunctionCallInProgressFrame`: Indicates a function is currently executing
- `FunctionCallResultFrame`: Contains results from executed functions

Many providers offer OpenAI-compatible APIs, which you can target by changing the `base_url` parameter.
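The start/text/end frame sequence above can be sketched with plain dataclasses. This is an illustrative model only: the real frame classes live in the framework itself, and `stream_llm_response` and `push_frame` are hypothetical names used here to show the ordering.

```python
import asyncio
from dataclasses import dataclass


# Illustrative stand-ins for the frame types described above (not the
# framework's actual classes).
@dataclass
class LLMFullResponseStartFrame:
    pass


@dataclass
class LLMTextFrame:
    text: str


@dataclass
class LLMFullResponseEndFrame:
    pass


async def stream_llm_response(tokens, push_frame):
    """Emit the start / text / end sequence an LLM service produces."""
    await push_frame(LLMFullResponseStartFrame())
    for tok in tokens:
        # Each token is pushed immediately, so a downstream TTS processor
        # can begin synthesizing before the full response is complete.
        await push_frame(LLMTextFrame(text=tok))
    await push_frame(LLMFullResponseEndFrame())


async def main():
    frames = []

    async def push_frame(frame):
        frames.append(frame)

    await stream_llm_response(["Hello", ", ", "world!"], push_frame)
    return frames


frames = asyncio.run(main())
```

Downstream processors can key off the start and end frames to know when a single response begins and completes, while acting on each text frame as it arrives.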
These services share a `BaseOpenAILLMService` that most providers extend, enabling easy switching between compatible services:
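A minimal sketch of the pattern: a base service keyed by `base_url`, so switching providers only means overriding the endpoint and model name. The class shape, the model names, and the endpoint URL below are assumptions for illustration, not the framework's exact API.

```python
from dataclasses import dataclass


# Illustrative sketch of an OpenAI-compatible service configuration.
# Field names mirror the pattern described above but are assumptions.
@dataclass
class BaseOpenAILLMService:
    model: str
    base_url: str = "https://api.openai.com/v1"


# Default configuration targets the OpenAI endpoint.
openai_llm = BaseOpenAILLMService(model="gpt-4o")

# Switching to a compatible provider is just a matter of overriding
# base_url and the model name (both hypothetical here).
compatible_llm = BaseOpenAILLMService(
    model="llama-3.1-70b",
    base_url="https://api.example.com/v1",
)
```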
All LLM services inherit from the `LLMService` base class, which provides shared configuration options:

- `run_in_parallel`: Controls whether function calls execute simultaneously or sequentially
  - `True` (default): Faster execution when multiple functions are called
  - `False`: Sequential execution for dependent function calls

The base class also provides event handlers:

- `on_completion_timeout`: Triggered when an LLM request times out
- `on_function_calls_started`: Triggered when function calls are initiated
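The `run_in_parallel` behavior can be sketched with `asyncio`: parallel mode gathers all calls concurrently, sequential mode awaits them one at a time. The function names, delays, and `run_calls` helper below are hypothetical stand-ins, not the framework's internals.

```python
import asyncio
import time


async def call_function(name, delay):
    """Stand-in for executing one LLM-requested function call."""
    await asyncio.sleep(delay)
    return name


async def run_calls(calls, run_in_parallel=True):
    if run_in_parallel:
        # All calls execute concurrently: faster when the LLM requests
        # several independent functions.
        return await asyncio.gather(*(call_function(n, d) for n, d in calls))
    # Calls execute one after another: needed when a later call depends
    # on an earlier call's result.
    return [await call_function(n, d) for n, d in calls]


calls = [("get_weather", 0.05), ("get_time", 0.05)]

start = time.monotonic()
parallel = asyncio.run(run_calls(calls, run_in_parallel=True))
parallel_elapsed = time.monotonic() - start

start = time.monotonic()
sequential = asyncio.run(run_calls(calls, run_in_parallel=False))
sequential_elapsed = time.monotonic() - start
```

With two independent 50 ms calls, the parallel run finishes in roughly the time of one call, while the sequential run takes roughly the sum of both.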