Overview

AnthropicLLMService provides integration with Anthropic’s Claude models, supporting streaming responses, function calling, and prompt caching. It includes specialized context handling for Anthropic’s message format.

Installation

To use AnthropicLLMService, install the required dependencies:

pip install "pipecat-ai[anthropic]"

You’ll also need to set your Anthropic API key in the ANTHROPIC_API_KEY environment variable.
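
A minimal construction sketch, assuming the import path pipecat.services.anthropic and reading the key from the environment:

import os

from pipecat.services.anthropic import AnthropicLLMService

# Read the key from the environment rather than hard-coding it
llm_service = AnthropicLLMService(api_key=os.getenv("ANTHROPIC_API_KEY"))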

Configuration

Constructor Parameters

api_key
str
required

Anthropic API key

model
str
default: "claude-3-5-sonnet-20240620"

Model identifier

params
InputParams

Model configuration parameters

Input Parameters

enable_prompt_caching_beta
Optional[bool]
default: "False"

Enables beta prompt caching functionality

extra
Optional[Dict[str, Any]]
default: "{}"

Additional parameters to pass to the model

max_tokens
Optional[int]
default: "4096"

Maximum number of tokens to generate. Must be greater than or equal to 1

temperature
Optional[float]
default: "NOT_GIVEN"

Controls randomness in the output. Range: [0.0, 1.0]

top_k
Optional[int]
default: "NOT_GIVEN"

Limits sampling to the k most likely tokens (top-k sampling). Must be greater than or equal to 0

top_p
Optional[float]
default: "NOT_GIVEN"

Controls diversity via nucleus sampling. Range: [0.0, 1.0]
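
Parameters left at NOT_GIVEN are simply omitted from the API request. A sketch of a typical configuration (the values here are illustrative):

params = AnthropicLLMService.InputParams(
    enable_prompt_caching_beta=True,  # opt in to the prompt caching beta
    max_tokens=2048,                  # cap response length
    temperature=0.7,                  # moderate randomness
    top_p=0.9,                        # nucleus sampling cutoff
)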

Input Frames

OpenAILLMContextFrame
Frame

Contains conversation context

LLMMessagesFrame
Frame

Contains conversation messages

VisionImageRawFrame
Frame

Contains image for vision processing

LLMUpdateSettingsFrame
Frame

Updates model settings

LLMEnablePromptCachingFrame
Frame

Controls prompt caching behavior
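
For example, new conversation turns can be queued as an LLMMessagesFrame. A sketch, assuming task is a running PipelineTask:

from pipecat.frames.frames import LLMMessagesFrame

messages = [{"role": "user", "content": "What's the weather in Paris?"}]

# Queue the messages into the running pipeline
await task.queue_frame(LLMMessagesFrame(messages))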

Output Frames

TextFrame
Frame

Contains generated text

FunctionCallInProgressFrame
Frame

Indicates ongoing function call

FunctionCallResultFrame
Frame

Contains function call results
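
A hypothetical downstream processor that consumes the streamed TextFrames might look like this sketch (not part of the service itself):

from pipecat.frames.frames import TextFrame
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor

class TextPrinter(FrameProcessor):
    """Illustrative processor that prints streamed text as it arrives."""

    async def process_frame(self, frame, direction: FrameDirection):
        await super().process_frame(frame, direction)
        if isinstance(frame, TextFrame):
            print(frame.text, end="", flush=True)
        # Always pass frames along so the rest of the pipeline sees them
        await self.push_frame(frame, direction)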

Context Management

The Anthropic service uses specialized context management to handle conversations and message formatting. This includes managing the conversation history, system prompts, function calls, and converting between OpenAI and Anthropic message formats.

AnthropicLLMContext

The base context manager for Anthropic conversations:

context = AnthropicLLMContext(
    messages=[],  # Conversation history
    tools=[],     # Available function calling tools
    system="You are a helpful assistant"  # System prompt
)

Context Aggregators

Context aggregators handle message format conversion and management. The service provides a method to create paired aggregators:

create_context_aggregator
static method

Creates user and assistant aggregators for handling message formatting.

@staticmethod
def create_context_aggregator(
    context: OpenAILLMContext,
    *,
    assistant_expect_stripped_words: bool = True
) -> AnthropicContextAggregatorPair

Parameters

context
OpenAILLMContext
required

The context object containing conversation history and settings

assistant_expect_stripped_words
bool
default: "True"

Whether assistant output arrives as individual whitespace-stripped words, which the aggregator rejoins with spaces

Usage Example


# 1. Create the context
context = AnthropicLLMContext(
    messages=[],
    system="You are a helpful assistant"
)

# 2. Create aggregators for message handling
aggregators = AnthropicLLMService.create_context_aggregator(context)

# 3. Access individual aggregators
user_aggregator = aggregators.user()      # Handles user message formatting
assistant_aggregator = aggregators.assistant()  # Handles assistant responses

# 4. Use in a pipeline
pipeline = Pipeline([
    user_aggregator,
    llm_service,
    assistant_aggregator
])

The context management system ensures proper message formatting and history tracking throughout the conversation while handling the conversion between OpenAI and Anthropic message formats automatically.

Methods

See the LLM base class methods for additional functionality.

Usage Examples

Basic Usage

# Configure service
llm_service = AnthropicLLMService(
    api_key="your-api-key",
    model="claude-3-5-sonnet-20240620",
    params=AnthropicLLMService.InputParams(
        temperature=0.7,
        max_tokens=1000
    )
)

# Create pipeline
pipeline = Pipeline([
    context_manager,
    llm_service,
    response_handler
])

With Function Calling

# Configure function calling
context = AnthropicLLMContext(
    system="You are a helpful assistant",
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather information",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                }
            }
        }
    }]
)

# Use in pipeline
pipeline = Pipeline([
    context_aggregator,
    llm_service,
    function_handler
])
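
To have those tool calls executed, register a handler with the service. A sketch assuming the register_function API; the weather lookup itself is hypothetical:

async def fetch_weather(function_name, tool_call_id, args, llm, context, result_callback):
    # Hypothetical implementation; a real handler would call a weather API
    await result_callback({"location": args["location"], "conditions": "sunny"})

llm_service.register_function("get_weather", fetch_weather)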

Frame Flow

Context, message, image, and settings frames flow into the service; generated text streams back as TextFrames, with FunctionCallInProgressFrame and FunctionCallResultFrame emitted around tool use.

Metrics Support

The service collects various metrics:

  • Token usage (prompt and completion)
  • Cache metrics (creation and read tokens)
  • Processing time
  • Time to first byte (TTFB)
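
Metrics are surfaced through the pipeline task; a sketch, assuming PipelineParams exposes an enable_metrics flag:

from pipecat.pipeline.task import PipelineParams, PipelineTask

# Emit token-usage and TTFB metrics frames from the pipeline
task = PipelineTask(pipeline, params=PipelineParams(enable_metrics=True))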

Features

Prompt Caching

# Enable prompt caching
await pipeline.push_frame(
    LLMEnablePromptCachingFrame(enable=True)
)
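
The enable_prompt_caching_beta constructor parameter sets the initial caching behavior; this frame then toggles it at runtime.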

Message Format Conversion

Automatically handles conversion between:

  • OpenAI message format
  • Anthropic message format
  • Function calling format
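
In practice this means messages can be supplied as familiar OpenAI-style role/content dicts; a sketch of what the conversion handles:

# OpenAI-style messages...
context = AnthropicLLMContext(
    messages=[{"role": "user", "content": "Hello"}],
    system="You are a helpful assistant",
)
# ...are translated to Anthropic's message format (including tool-use
# messages) before each API call.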

Token Estimation

Provides token usage tracking and estimation:

def _estimate_tokens(self, text: str) -> int:
    """Estimates token count for partial responses"""

Notes

  • Supports streaming responses
  • Handles function calling
  • Provides prompt caching
  • Manages conversation context
  • Supports vision inputs
  • Includes metrics collection
  • Thread-safe processing
  • Handles interruptions