Settings are the runtime-configurable properties of AI services, including things like the model, voice, language, temperature, and other provider-specific options. Every service exposes a Settings class that you can use in two ways:
- **Configure at initialization:** pass a `settings=` argument when constructing a service.
- **Update at runtime:** push an `*UpdateSettingsFrame` through the pipeline to change settings mid-conversation.
## Configuring settings at initialization

Pass a `settings=` argument to any service constructor using that service's `Settings` class:
```python
import os

from pipecat.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    settings=OpenAILLMService.Settings(
        model="gpt-4o",
        temperature=0.7,
        system_instruction="You are a helpful assistant.",
    ),
)
```
```python
import os

from pipecat.services.cartesia.tts import CartesiaTTSService

tts = CartesiaTTSService(
    api_key=os.getenv("CARTESIA_API_KEY"),
    settings=CartesiaTTSService.Settings(
        voice="71a7ad14-091c-4e8e-a314-022ece01c121",
    ),
)
```
```python
import os

from pipecat.services.deepgram.stt import DeepgramSTTService

stt = DeepgramSTTService(
    api_key=os.getenv("DEEPGRAM_API_KEY"),
    settings=DeepgramSTTService.Settings(
        model="nova-3",
        language="en",
        smart_format=True,
    ),
)
```
You only need to specify the settings you want to override. Everything else uses the service’s defaults.
## Common settings by service type
Each service type has a base set of settings. Individual services may extend these with provider-specific fields.
### LLM settings

| Setting | Type | Description |
|---|---|---|
| `model` | `str` | Model identifier (e.g. `"gpt-4o"`, `"claude-sonnet-4-5-20250929"`) |
| `system_instruction` | `str` | System prompt for the model |
| `temperature` | `float` | Sampling temperature |
| `max_tokens` | `int` | Maximum tokens to generate |
| `top_p` | `float` | Nucleus sampling probability |
| `top_k` | `int` | Top-k sampling parameter |
| `frequency_penalty` | `float` | Frequency penalty |
| `presence_penalty` | `float` | Presence penalty |
| `seed` | `int` | Random seed for reproducibility |
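Several of the base fields above can be combined in a single `Settings` instance; the values below are illustrative, and whether a given field is honored depends on the provider:

```python
import os

from pipecat.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    settings=OpenAILLMService.Settings(
        model="gpt-4o",
        temperature=0.7,
        max_tokens=1024,  # cap the response length
        top_p=0.9,        # nucleus sampling
        seed=42,          # fix the seed for more reproducible output
    ),
)
```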
### TTS settings

| Setting | Type | Description |
|---|---|---|
| `model` | `str` | TTS model identifier |
| `voice` | `str` | Voice identifier or name |
| `language` | `Language \| str` | Language for speech synthesis |
### STT settings

| Setting | Type | Description |
|---|---|---|
| `model` | `str` | STT model identifier |
| `language` | `Language \| str` | Language for speech recognition |
Individual services extend these base settings with provider-specific fields. For example, Deepgram STT adds `endpointing`, `smart_format`, `diarize`, and more. See the individual service documentation for the full list of available settings.
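As the tables above note, the base `language` field accepts either a `Language` enum value or a plain string. A sketch showing both forms (voice ID and model names reuse the values from the earlier examples):

```python
import os

from pipecat.services.cartesia.tts import CartesiaTTSService
from pipecat.services.deepgram.stt import DeepgramSTTService
from pipecat.transcriptions.language import Language

# Enum form: type-checked language constant
tts = CartesiaTTSService(
    api_key=os.getenv("CARTESIA_API_KEY"),
    settings=CartesiaTTSService.Settings(
        voice="71a7ad14-091c-4e8e-a314-022ece01c121",
        language=Language.ES,
    ),
)

# Plain-string form: a language code
stt = DeepgramSTTService(
    api_key=os.getenv("DEEPGRAM_API_KEY"),
    settings=DeepgramSTTService.Settings(
        model="nova-3",
        language="es",
    ),
)
```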
## Updating settings at runtime
You can change service settings while the pipeline is running by pushing an update settings frame. This is useful for scenarios like switching languages, changing voices, or adjusting LLM parameters mid-conversation.
Use these frame types:
| Frame | Target |
|---|---|
| `LLMUpdateSettingsFrame` | LLM services |
| `TTSUpdateSettingsFrame` | TTS services |
| `STTUpdateSettingsFrame` | STT services |
Only include the fields you want to change. Unspecified fields are left as-is.
```python
from pipecat.frames.frames import TTSUpdateSettingsFrame
from pipecat.services.cartesia.tts import CartesiaTTSService

# Change TTS voice mid-conversation
await task.queue_frame(
    TTSUpdateSettingsFrame(
        delta=CartesiaTTSService.Settings(voice="new-voice-id")
    )
)
```
```python
from pipecat.frames.frames import LLMUpdateSettingsFrame
from pipecat.services.openai.llm import OpenAILLMService

# Lower the temperature for more deterministic responses
await task.queue_frame(
    LLMUpdateSettingsFrame(
        delta=OpenAILLMService.Settings(temperature=0.2)
    )
)
```
```python
from pipecat.frames.frames import STTUpdateSettingsFrame
from pipecat.services.deepgram.stt import DeepgramSTTService
from pipecat.transcriptions.language import Language

# Switch STT language to Spanish
await task.queue_frame(
    STTUpdateSettingsFrame(
        delta=DeepgramSTTService.Settings(language=Language.ES)
    )
)
```
Update settings frames are uninterruptible: they will always be processed even if a user interruption occurs.
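The "only include the fields you want to change" behavior can be sketched in plain Python. This is an illustrative model, not Pipecat's implementation: it assumes unset fields on a delta default to `None`, and that applying a delta overwrites only the fields that were explicitly set.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class Settings:
    """Toy stand-in for a service Settings class; all fields optional."""
    voice: Optional[str] = None
    language: Optional[str] = None
    speed: Optional[float] = None

def apply_delta(current: Settings, delta: Settings) -> Settings:
    """Copy `current`, overwriting only the fields set (non-None) on `delta`."""
    merged = Settings(**{f.name: getattr(current, f.name) for f in fields(current)})
    for f in fields(delta):
        value = getattr(delta, f.name)
        if value is not None:
            setattr(merged, f.name, value)
    return merged

current = Settings(voice="old-voice", language="en", speed=1.0)
updated = apply_delta(current, Settings(voice="new-voice"))
# updated.voice is "new-voice"; language and speed keep their old values
```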
## Service-specific settings
Services can extend the base settings with provider-specific fields. For example:
- Cartesia TTS adds `generation_config` (volume, speed, emotion) and `pronunciation_dict_id`
- OpenAI TTS adds `instructions` and `speed`
- Deepgram STT adds `endpointing`, `smart_format`, `diarize`, `punctuate`, and many more
- OpenAI LLM adds `max_completion_tokens`
These service-specific fields work the same way as the base fields: set them at initialization or update them at runtime.
```python
import os

from pipecat.services.cartesia.tts import CartesiaTTSService, GenerationConfig

tts = CartesiaTTSService(
    api_key=os.getenv("CARTESIA_API_KEY"),
    settings=CartesiaTTSService.Settings(
        voice="71a7ad14-091c-4e8e-a314-022ece01c121",
        generation_config=GenerationConfig(speed=1.2, emotion="excited"),
    ),
)
```
See the individual service documentation for a complete list of available settings.
Every `Settings` class includes an `extra` dict for passing provider-specific parameters that Pipecat doesn't have a dedicated field for. This is useful when a provider supports options that haven't been explicitly added to the `Settings` dataclass yet:
```python
import os

from pipecat.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    settings=OpenAILLMService.Settings(
        model="gpt-4o",
        extra={"logprobs": True, "top_logprobs": 5},
    ),
)
```
Values in `extra` are passed through to the underlying API call. You can also update `extra` at runtime:
```python
from pipecat.frames.frames import LLMUpdateSettingsFrame
from pipecat.services.openai.llm import OpenAILLMService

await task.queue_frame(
    LLMUpdateSettingsFrame(
        delta=OpenAILLMService.Settings(
            extra={"logprobs": False},
        )
    )
)
```
The `extra` dict is merged on updates: new keys are added and existing keys are overwritten, but keys not present in the delta are left unchanged.
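That merge rule behaves like a plain Python dict update, which can be sketched directly (an illustrative model of the semantics, not Pipecat's code):

```python
def merge_extra(current: dict, delta: dict) -> dict:
    """Merge a delta's extra dict into the current one."""
    merged = dict(current)  # keys not present in the delta are left unchanged
    merged.update(delta)    # delta keys are added, or overwrite existing keys
    return merged

current = {"logprobs": True, "top_logprobs": 5}
delta = {"logprobs": False}
merged = merge_extra(current, delta)
# merged is {"logprobs": False, "top_logprobs": 5}:
# "logprobs" was overwritten, "top_logprobs" was left unchanged
```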
The `InputParams` / `params=` pattern is deprecated as of v0.0.105. Use `Settings` / `settings=` instead.

Before:

```python
llm = OpenAILLMService(
    model="gpt-4o",
    params=OpenAILLMService.InputParams(
        temperature=0.7,
    ),
)
```

After:

```python
llm = OpenAILLMService(
    settings=OpenAILLMService.Settings(
        model="gpt-4o",
        temperature=0.7,
    ),
)
```
Note that `model` has moved from a top-level constructor argument into `Settings`. Both the old and new patterns still work during the deprecation period, but `settings` values take precedence when both are provided.
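The precedence rule means that if both patterns are mixed during the deprecation period, the `settings=` value wins. An illustrative sketch:

```python
import os

from pipecat.services.openai.llm import OpenAILLMService

# Both patterns provided: the effective temperature is 0.2, because
# settings= takes precedence over the deprecated params=.
llm = OpenAILLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    model="gpt-4o",  # old top-level argument, still accepted for now
    params=OpenAILLMService.InputParams(temperature=0.7),
    settings=OpenAILLMService.Settings(temperature=0.2),
)
```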