Overview
AzureLLMService provides access to Azure OpenAI’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management with enterprise-grade security and compliance.
Azure LLM API Reference
Pipecat’s API methods for Azure OpenAI integration
Example Implementation
Complete example with function calling
Azure OpenAI Documentation
Official Azure OpenAI documentation and setup
Azure Portal
Create OpenAI resources and get credentials
Installation
To use Azure OpenAI services, install the required dependency:

Prerequisites
Azure OpenAI Setup
Before using Azure OpenAI LLM services, you need:

- Azure Account: Sign up at the Azure Portal
- OpenAI Resource: Create an Azure OpenAI resource in your subscription
- Model Deployment: Deploy your chosen model (GPT-4, GPT-4o, etc.)
- Credentials: Get your API key, endpoint, and deployment name
Required Environment Variables
- AZURE_CHATGPT_API_KEY: Your Azure OpenAI API key
- AZURE_CHATGPT_ENDPOINT: Your Azure OpenAI endpoint URL
- AZURE_CHATGPT_MODEL: Your model deployment name
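Before constructing the service, it can help to verify these variables are set. This is a minimal standard-library sketch; the helper name `load_azure_config` is ours, not part of Pipecat:

```python
import os

# Environment variable names used throughout this page.
REQUIRED_VARS = ("AZURE_CHATGPT_API_KEY", "AZURE_CHATGPT_ENDPOINT", "AZURE_CHATGPT_MODEL")


def load_azure_config() -> dict:
    """Read Azure OpenAI credentials from the environment, failing fast if any are missing."""
    missing = [name for name in REQUIRED_VARS if not os.getenv(name)]
    if missing:
        raise RuntimeError(f"Missing environment variables: {', '.join(missing)}")
    return {
        "api_key": os.environ["AZURE_CHATGPT_API_KEY"],
        "endpoint": os.environ["AZURE_CHATGPT_ENDPOINT"],
        "model": os.environ["AZURE_CHATGPT_MODEL"],
    }
```

Failing fast here gives a clearer error than a connection failure deep inside the pipeline.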
Configuration
Azure OpenAI API key for authentication.
Azure OpenAI endpoint URL (e.g., "https://your-resource.openai.azure.com/").

Azure model deployment name. This is the name you gave when deploying the model, not the base model name.
Azure OpenAI API version string.
Since AzureLLMService inherits from OpenAILLMService, it also accepts the following parameters:
Runtime-configurable model settings. See OpenAI InputParams for details.
Request timeout in seconds. Used when retry_on_timeout is enabled to determine when to retry.

Whether to retry the request once if it times out. The retry attempt has no timeout limit.
InputParams
AzureLLMService uses the same InputParams as OpenAILLMService. See the OpenAI InputParams section for the full parameter reference.
Usage
Basic Setup
With Custom Parameters
Updating Settings at Runtime
Model settings can be changed mid-conversation using UpdateSettingsFrame:
Notes
- Deployment name vs. model name: The model parameter should be your Azure deployment name, not the underlying model name (e.g., use "my-gpt4-deployment" instead of "gpt-4").
- API version: Different API versions support different features. Check the Azure OpenAI documentation for version-specific capabilities.
- Full OpenAI compatibility: Since AzureLLMService inherits from OpenAILLMService, it supports all the same features, including function calling, vision input, and streaming responses.
Event Handlers
AzureLLMService supports the same event handlers as OpenAILLMService, inherited from LLMService:
| Event | Description |
|---|---|
| on_completion_timeout | Called when an LLM completion request times out |
| on_function_calls_started | Called when function calls are received and execution is about to start |