
Overview

LLMSwitcher is a specialized version of ServiceSwitcher designed specifically for managing multiple LLM services. Beyond basic service switching, it provides convenient methods for running ad-hoc inferences and registering function handlers across all LLMs simultaneously. This is particularly useful when you need to switch between different LLM providers based on task complexity, cost optimization, or specific model capabilities while maintaining a consistent function calling interface.

Constructor

from pipecat.pipeline.llm_switcher import LLMSwitcher
from pipecat.pipeline.service_switcher import ServiceSwitcherStrategyManual

switcher = LLMSwitcher(
    llms=[openai_service, gemini_service],
    strategy_type=ServiceSwitcherStrategyManual
)
llms (List[LLMService], required): List of LLM service instances to switch between.

strategy_type (Type[ServiceSwitcherStrategy], required): The strategy class to use for switching logic. Pass the class itself, not an instance.

Properties

llms (List[LLMService]): The list of LLM services managed by this switcher.

active_llm (Optional[LLMService]): The currently active LLM service, or None if no LLMs are configured.
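
For example, you can inspect the active service before doing any out-of-band work. This is a minimal sketch assuming the llm_switcher from the constructor example above:

# Check which service is currently active
if llm_switcher.active_llm is not None:
    print(f"Active LLM: {type(llm_switcher.active_llm).__name__}")
else:
    print("No LLM services configured")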

Methods

run_inference()

Run a one-shot inference with the currently active LLM, outside of the normal pipeline flow.
result = await llm_switcher.run_inference(context=llm_context)
context (LLMContext, required): The LLM context containing conversation history and messages.

Returns: Optional[str]. The LLM’s response as a string, or None if no LLM is active.
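
For instance, a one-shot out-of-band summarization might look like the following sketch. The LLMContext import path and its messages constructor argument are assumptions based on common Pipecat usage; verify them against your installed version:

from pipecat.processors.aggregators.llm_context import LLMContext  # assumed import path

# Build a minimal context for a single out-of-band request
llm_context = LLMContext(
    messages=[{"role": "user", "content": "Summarize the conversation so far."}]
)

result = await llm_switcher.run_inference(context=llm_context)
if result is None:
    print("No LLM is currently active")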

register_function()

Register a function handler with all LLMs in the switcher, regardless of which is currently active.
llm_switcher.register_function(
    function_name="get_weather",
    handler=handle_weather,
    cancel_on_interruption=True
)
function_name (Optional[str], required): The name of the function to handle. Use None for a catch-all handler that processes all function calls.

handler (Callable, required): The async function handler. It should accept a single FunctionCallParams parameter.

start_callback (Optional[Callable], deprecated): Legacy callback function. Put initialization code at the top of your handler instead.

cancel_on_interruption (bool, default: True): Whether to cancel this function call when a user interruption occurs.
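
A matching handler might look like the following sketch. The FunctionCallParams import path and the result_callback pattern follow common Pipecat function-calling examples, but treat them as assumptions and check them against your version:

from pipecat.services.llm_service import FunctionCallParams  # assumed import path

async def handle_weather(params: FunctionCallParams):
    # params.arguments holds the parsed arguments from the LLM's function call
    location = params.arguments.get("location", "unknown")
    # Deliver the result back to whichever LLM issued the call
    await params.result_callback({"location": location, "conditions": "sunny"})

Because the handler is registered with every LLM in the switcher, the same handler runs no matter which provider is active when the function is called.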

Usage Examples

Basic LLM Switching

import os

from pipecat.frames.frames import ManuallySwitchServiceFrame
from pipecat.pipeline.llm_switcher import LLMSwitcher
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.service_switcher import ServiceSwitcherStrategyManual
from pipecat.services.google.llm import GoogleLLMService
from pipecat.services.openai.llm import OpenAILLMService

# Create LLM services
openai = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"))
gemini = GoogleLLMService(api_key=os.getenv("GOOGLE_API_KEY"))

# Create switcher
llm_switcher = LLMSwitcher(
    llms=[openai, gemini],
    strategy_type=ServiceSwitcherStrategyManual
)

# Use in a pipeline (transport, stt, tts, context_aggregator, and task
# are assumed to be created elsewhere)
pipeline = Pipeline([
    transport.input(),
    stt,
    context_aggregator.user(),
    llm_switcher,
    tts,
    transport.output(),
    context_aggregator.assistant()
])

# Switch to the cheaper model for simple tasks
await task.queue_frame(ManuallySwitchServiceFrame(service=gemini))
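
Switching back is symmetric: queue another ManuallySwitchServiceFrame targeting the other service.

# Later, switch back to OpenAI for more complex tasks
await task.queue_frame(ManuallySwitchServiceFrame(service=openai))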