# Ollama

LLM service implementation using Ollama with an OpenAI-compatible interface.

## Overview

`OLLamaLLMService` provides access to locally-run Ollama models through an OpenAI-compatible interface. It inherits from `BaseOpenAILLMService` and allows you to run various open-source models locally while maintaining compatibility with OpenAI's API format.
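A minimal construction sketch (the import path and defaults below reflect recent Pipecat releases and may differ in older versions; the model name is just an example):

```python
from pipecat.services.ollama.llm import OLLamaLLMService

# Point the service at a locally running Ollama server. The base_url shown is
# Ollama's default OpenAI-compatible endpoint; the model must already be
# available locally (e.g. pulled with `ollama pull llama3.2`).
llm = OLLamaLLMService(
    model="llama3.2",
    base_url="http://localhost:11434/v1",
)
```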
Related resources:

- Complete API documentation and method details
- Official Ollama documentation and model library
- Download and setup instructions for Ollama
## Installation

To use Ollama services, you need to install both Ollama and the Pipecat dependency:
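For example (the `[ollama]` extra follows Pipecat's optional-dependency naming; the model pulled below is just an illustration):

```bash
# Install Pipecat with the Ollama extra
pip install "pipecat-ai[ollama]"

# Install Ollama separately (see the Ollama download page), then pull a model
ollama pull llama3.2
```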
## Frames

### Input

- `OpenAILLMContextFrame` - Conversation context and history
- `LLMMessagesFrame` - Direct message list
- `VisionImageRawFrame` - Images for vision models
- `LLMUpdateSettingsFrame` - Runtime parameter updates

### Output

- `LLMFullResponseStartFrame` / `LLMFullResponseEndFrame` - Response boundaries
- `LLMTextFrame` - Streamed completion chunks
- `FunctionCallInProgressFrame` / `FunctionCallResultFrame` - Function call lifecycle
- `ErrorFrame` - Connection or processing errors

## Function Calling

Learn how to implement function calling with standardized schemas, register handlers, manage context properly, and control execution flow in your conversational AI applications.
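A hedged sketch of the registration flow, reusing the `llm` instance from the construction sketch above (the schema classes and `FunctionCallParams` handler signature follow recent Pipecat releases; `get_weather` and its handler are hypothetical):

```python
from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import ToolsSchema
from pipecat.services.llm_service import FunctionCallParams

# Describe the function with a standardized schema (hypothetical example).
weather_function = FunctionSchema(
    name="get_weather",
    description="Get the current weather for a location",
    properties={"location": {"type": "string", "description": "City name"}},
    required=["location"],
)
tools = ToolsSchema(standard_tools=[weather_function])

# The handler receives FunctionCallParams and reports its result via the callback.
async def fetch_weather(params: FunctionCallParams):
    await params.result_callback({"conditions": "sunny", "temperature_c": 21})

llm.register_function("get_weather", fetch_weather)
# The tools schema is then supplied through the LLM context (see the
# context sketch below).
```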
## Context Management

Learn how to manage conversation context, handle message history, and integrate context aggregators for consistent conversational experiences.
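A minimal sketch of wiring a context aggregator into a pipeline, assuming the `llm` instance from above (transports and other processors are elided):

```python
from pipecat.pipeline.pipeline import Pipeline
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

# Seed the shared context; pass `tools=tools` here as well when using
# function calling.
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
context_aggregator = llm.create_context_aggregator(context)

# The user aggregator collects user messages into the context before the LLM;
# the assistant aggregator appends the LLM's responses after it.
pipeline = Pipeline([
    context_aggregator.user(),
    llm,
    context_aggregator.assistant(),
])
```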
## Metrics

Inherits all OpenAI metrics capabilities for local monitoring:

- Time to First Byte (TTFB)
- Processing duration
- Token usage (prompt, completion, and total tokens)

Enable with the standard Pipecat pipeline task parameters (a sketch; `pipeline` is assumed from the context example above):
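```python
from pipecat.pipeline.task import PipelineParams, PipelineTask

# Metrics are reported per service; usage metrics cover token counts.
task = PipelineTask(
    pipeline,
    params=PipelineParams(
        enable_metrics=True,
        enable_usage_metrics=True,
    ),
)
```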