# Anthropic

Large Language Model service implementation using Anthropic's Claude API.

## Overview

`AnthropicLLMService` provides integration with Anthropic's Claude models, supporting streaming responses, function calling, and prompt caching. It includes specialized context handling for Anthropic's message format.
## Installation

To use `AnthropicLLMService`, install the required dependencies:
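```bash
# The "anthropic" extra is assumed; check the extras list for your Pipecat release.
pip install "pipecat-ai[anthropic]"
```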
You'll also need to set up your Anthropic API key as an environment variable: `ANTHROPIC_API_KEY`.
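```bash
export ANTHROPIC_API_KEY=your-api-key
```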
## Configuration

### Constructor Parameters

- `api_key` (str, required): Your Anthropic API key
- `model` (str): Model identifier (for example, `claude-3-5-sonnet-20240620`)
- `params` (InputParams): Model configuration parameters
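A minimal construction sketch (the import path and default model name may vary between Pipecat releases):

```python
import os

from pipecat.services.anthropic import AnthropicLLMService

llm = AnthropicLLMService(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    model="claude-3-5-sonnet-20240620",
)
```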
### Input Parameters

- `enable_prompt_caching_beta` (bool, optional): Enables beta prompt caching functionality
- `extra` (dict, optional): Additional parameters to pass to the model
- `max_tokens` (int, optional): Maximum number of tokens to generate. Must be greater than or equal to 1
- `temperature` (float, optional): Controls randomness in the output. Range: [0.0, 1.0]
- `top_k` (int, optional): Limits sampling to the k most likely tokens. Must be greater than or equal to 0
- `top_p` (float, optional): Controls diversity via nucleus sampling. Range: [0.0, 1.0]
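These map onto the service's `InputParams` model. A sketch, assuming `InputParams` is exposed as a nested class on the service (verify against your release):

```python
import os

from pipecat.services.anthropic import AnthropicLLMService

llm = AnthropicLLMService(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    model="claude-3-5-sonnet-20240620",
    params=AnthropicLLMService.InputParams(
        temperature=0.7,
        max_tokens=1024,
        enable_prompt_caching_beta=True,
    ),
)
```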
### Input Frames

- `OpenAILLMContextFrame`: Contains the conversation context
- `LLMMessagesFrame`: Contains conversation messages
- `VisionImageRawFrame`: Contains an image for vision processing
- `LLMUpdateSettingsFrame`: Updates model settings
- `LLMEnablePromptCachingFrame`: Controls prompt caching behavior
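For example, settings can be adjusted mid-session by queueing an update frame (a sketch; the frame's constructor fields are an assumption):

```python
from pipecat.frames.frames import LLMUpdateSettingsFrame

# Queue a settings update ahead of the LLM service; the keys are assumed
# to mirror the input parameters listed above.
await task.queue_frame(LLMUpdateSettingsFrame(settings={"temperature": 0.5}))
```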
### Output Frames

- `TextFrame`: Contains generated text
- `FunctionCallInProgressFrame`: Indicates an in-progress function call
- `FunctionCallResultFrame`: Contains function call results
## Context Management

The Anthropic service uses specialized context management to handle conversations and message formatting. This includes managing conversation history, system prompts, and function calls, as well as converting between the OpenAI and Anthropic message formats.

### AnthropicLLMContext

The base context manager for Anthropic conversations:
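```python
from pipecat.services.anthropic import AnthropicLLMContext

# Constructor arguments shown here are illustrative assumptions.
context = AnthropicLLMContext(
    messages=[{"role": "user", "content": "What's the weather like?"}],
    system="You are a helpful assistant.",
)
```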
### Context Aggregators

Context aggregators handle message format conversion and management. The service provides a method to create paired aggregators:
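```python
# Sketch of the signature; exact types and defaults may differ by version.
def create_context_aggregator(
    context: OpenAILLMContext,
    *,
    assistant_expect_stripped_words: bool = True,
):
    ...
```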
Creates user and assistant aggregators for handling message formatting.
#### Parameters

- `context` (OpenAILLMContext): The context object containing conversation history and settings
- `assistant_expect_stripped_words` (bool): Controls text preprocessing for assistant responses
#### Usage Example
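A sketch of creating and using the paired aggregators (`llm` is assumed to be a configured `AnthropicLLMService`):

```python
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
aggregators = llm.create_context_aggregator(context)

# user() collects inbound user messages into the context before the LLM;
# assistant() appends the model's responses to the context after it.
user_aggregator = aggregators.user()
assistant_aggregator = aggregators.assistant()
```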
The context management system ensures proper message formatting and history tracking throughout the conversation while handling the conversion between OpenAI and Anthropic message formats automatically.
## Methods

See the LLM base class methods for additional functionality.

## Usage Examples

### Basic Usage
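A minimal voice-pipeline sketch (`transport` is assumed to be a configured Pipecat transport; import paths may vary between releases):

```python
import os

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineTask
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.anthropic import AnthropicLLMService

llm = AnthropicLLMService(
    api_key=os.getenv("ANTHROPIC_API_KEY"),
    model="claude-3-5-sonnet-20240620",
)

context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
context_aggregator = llm.create_context_aggregator(context)

pipeline = Pipeline([
    transport.input(),
    context_aggregator.user(),
    llm,
    transport.output(),
    context_aggregator.assistant(),
])

task = PipelineTask(pipeline)
```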
### With Function Calling
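Continuing the basic example, a sketch of registering a function handler (the handler signature below matches one Pipecat release line; newer releases may bundle these arguments into a single params object):

```python
# Tools are declared in OpenAI format; the service converts them for Anthropic.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "parameters": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
}]

context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}],
    tools=tools,
)

# Handler signature is an assumption; check your release.
async def fetch_weather(function_name, tool_call_id, args, llm, context, result_callback):
    await result_callback({"conditions": "sunny", "temperature_f": 72})

llm.register_function("get_weather", fetch_weather)
```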
## Frame Flow
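In outline: context and message frames flow into the service, which responds with an `LLMFullResponseStartFrame`, a stream of `TextFrame`s (interleaved with `FunctionCallInProgressFrame` and `FunctionCallResultFrame` when tools are invoked), and a closing `LLMFullResponseEndFrame`.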
## Metrics Support
The service collects various metrics:
- Token usage (prompt and completion)
- Cache metrics (creation and read tokens)
- Processing time
- Time to first byte (TTFB)
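Metrics are reported through the standard pipeline metrics machinery. A sketch of enabling them (parameter names assume the current `PipelineParams` model):

```python
from pipecat.pipeline.task import PipelineParams, PipelineTask

task = PipelineTask(
    pipeline,
    params=PipelineParams(enable_metrics=True, enable_usage_metrics=True),
)
```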
## Features

### Prompt Caching
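Prompt caching can cut cost and latency when a large, stable prefix (for example, a long system prompt) is reused across turns. A sketch of toggling it at runtime, assuming `LLMEnablePromptCachingFrame` takes an `enable` flag:

```python
from pipecat.frames.frames import LLMEnablePromptCachingFrame

# Enable or disable caching mid-session by queueing a control frame.
await task.queue_frame(LLMEnablePromptCachingFrame(enable=True))
```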
### Message Format Conversion

Automatically handles conversion between:

- OpenAI message format
- Anthropic message format
- Function calling format
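For example, an OpenAI-style message is rewritten into Anthropic's content-block shape (illustrative data, not library output):

```python
# OpenAI-style message, as you would write it:
openai_message = {"role": "user", "content": "What's the weather like?"}

# Roughly the Anthropic-style equivalent the API receives:
anthropic_message = {
    "role": "user",
    "content": [{"type": "text", "text": "What's the weather like?"}],
}
```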
### Token Estimation
Provides token usage tracking and estimation:
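```python
import re

def estimate_tokens(text: str) -> int:
    """Rough word-based estimate; a heuristic sketch, not the library's exact method."""
    words = [w for w in re.split(r"\W+", text) if w]
    return int(len(words) * 1.3)  # ~1.3 tokens per word is a common rule of thumb
```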
## Notes
- Supports streaming responses
- Handles function calling
- Provides prompt caching
- Manages conversation context
- Supports vision inputs
- Includes metrics collection
- Thread-safe processing
- Handles interruptions