Azure
Large Language Model service implementation using Azure OpenAI API
Overview
`AzureLLMService` provides access to Azure OpenAI's language models through an OpenAI-compatible interface. It inherits from `OpenAILLMService` and supports streaming responses, function calling, and context management.
API Reference
Complete API documentation and method details
Azure OpenAI Docs
Official Azure OpenAI documentation and setup
Example Code
Working example with function calling
Installation
To use Azure OpenAI services, install the required dependency:
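A typical install, assuming the package follows Pipecat's optional-extras convention (the `azure` extra name is the documented pattern for this integration):

```shell
pip install "pipecat-ai[azure]"
```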
You’ll need to set up your Azure OpenAI credentials:
- `AZURE_CHATGPT_API_KEY` - Your Azure OpenAI API key
- `AZURE_CHATGPT_ENDPOINT` - Your Azure OpenAI endpoint URL
- `AZURE_CHATGPT_MODEL` - Your model deployment name
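For example, you can export these variables in your shell before running your app (the placeholder values are illustrative):

```shell
export AZURE_CHATGPT_API_KEY="your-api-key"
export AZURE_CHATGPT_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_CHATGPT_MODEL="your-deployment-name"
```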
Get your credentials from the Azure Portal under your Azure OpenAI resource.
Frames
Input
- `OpenAILLMContextFrame` - Conversation context and history
- `LLMMessagesFrame` - Direct message list
- `VisionImageRawFrame` - Images for vision processing
- `LLMUpdateSettingsFrame` - Runtime parameter updates
Output
- `LLMFullResponseStartFrame` / `LLMFullResponseEndFrame` - Response boundaries
- `LLMTextFrame` - Streamed completion chunks
- `FunctionCallInProgressFrame` / `FunctionCallResultFrame` - Function call lifecycle
- `ErrorFrame` - API or processing errors
Azure vs OpenAI Differences
| Feature | Azure OpenAI | Standard OpenAI |
|---|---|---|
| Authentication | API key + endpoint | API key only |
| Deployment | Custom deployment names | Model names directly |
| Compliance | Enterprise SOC, HIPAA | Standard compliance |
| Regional | Multiple Azure regions | OpenAI regions only |
| Pricing | Azure billing integration | OpenAI billing |
Function Calling
Function Calling Guide
Learn how to implement function calling with standardized schemas, register handlers, manage context properly, and control execution flow in your conversational AI applications.
Context Management
Context Management Guide
Learn how to manage conversation context, handle message history, and integrate context aggregators for consistent conversational experiences.
Usage Example
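A minimal sketch of constructing the service from the environment variables above. The import paths and constructor keyword names (`api_key`, `endpoint`, `model`) are assumptions based on Pipecat's conventions; check the API reference for the exact signature:

```python
import os

# Assumed import paths; verify against your installed pipecat version.
from pipecat.services.azure.llm import AzureLLMService
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

llm = AzureLLMService(
    api_key=os.getenv("AZURE_CHATGPT_API_KEY"),
    endpoint=os.getenv("AZURE_CHATGPT_ENDPOINT"),
    # Azure uses your deployment name here, not an OpenAI model name.
    model=os.getenv("AZURE_CHATGPT_MODEL"),
)

# Seed the conversation context and create aggregators that keep the
# message history in sync as the pipeline runs.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
]
context = OpenAILLMContext(messages)
context_aggregator = llm.create_context_aggregator(context)
```

From here, `llm` and the two aggregators (`context_aggregator.user()` and `context_aggregator.assistant()`) would typically be placed in a pipeline between transport input and output.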
Metrics
Inherits all OpenAI metrics capabilities:
- Time to First Byte (TTFB) - Response latency measurement
- Processing Duration - Total request processing time
- Token Usage - Prompt tokens, completion tokens, and totals
Enable metrics when creating your pipeline task:
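A hedged sketch of enabling metrics via `PipelineParams`, which is how Pipecat exposes metrics flags on a `PipelineTask`; `pipeline` is assumed to be an already-constructed `Pipeline` instance:

```python
from pipecat.pipeline.task import PipelineParams, PipelineTask

# `pipeline` is assumed to exist (a pipecat Pipeline containing the LLM service).
task = PipelineTask(
    pipeline,
    params=PipelineParams(
        enable_metrics=True,        # TTFB and processing-duration metrics
        enable_usage_metrics=True,  # prompt/completion token counts
    ),
)
```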
Additional Notes
- OpenAI Compatibility: Full compatibility with OpenAI API features and parameters
- Regional Deployment: Deploy in your preferred Azure region for compliance and latency
- Deployment Names: Use your Azure deployment name as the model parameter, not OpenAI model names
- Automatic Retries: Built-in retry logic handles transient Azure service issues