Overview

OpenAI LLM services provide chat completion capabilities using OpenAI’s API. The implementation includes two main classes:

  • BaseOpenAILLMService: Base class providing core OpenAI chat completion functionality
  • OpenAILLMService: Implementation with context aggregation support

Installation

To use OpenAI services, install the required dependencies:

pip install "pipecat-ai[openai]"

You’ll also need to set your OpenAI API key in the OPENAI_API_KEY environment variable.
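In a POSIX shell, the key can be exported for the current session before running your app (the value shown is a placeholder):

```shell
# Make the API key available to the process (placeholder value)
export OPENAI_API_KEY="sk-..."
```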

BaseOpenAILLMService

Constructor Parameters

model
str
required

OpenAI model identifier (e.g., “gpt-4”, “gpt-3.5-turbo”)

api_key
str

OpenAI API key (defaults to environment variable)

base_url
str

Custom API endpoint URL

params
InputParams

Model configuration parameters

Input Parameters

extra
Optional[Dict[str, Any]]

Additional parameters to pass to the model

frequency_penalty
Optional[float]

Reduces likelihood of repeating tokens based on their frequency. Range: [-2.0, 2.0]

max_completion_tokens
Optional[int]

Maximum number of tokens in the completion. Must be greater than or equal to 1

max_tokens
Optional[int]

Maximum number of tokens to generate. Must be greater than or equal to 1. Note that OpenAI has deprecated max_tokens for chat completions in favor of max_completion_tokens

presence_penalty
Optional[float]

Reduces likelihood of repeating any tokens that have appeared. Range: [-2.0, 2.0]

seed
Optional[int]

Random seed for deterministic generation. Must be greater than or equal to 0

temperature
Optional[float]

Controls randomness in the output. Range: [0.0, 2.0]

top_p
Optional[float]

Controls diversity via nucleus sampling. Range: [0.0, 1.0]
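The ranges above can be enforced before a request is sent. The following is a minimal stdlib sketch of that validation; SketchInputParams is a hypothetical stand-in for the real InputParams class, which may validate differently:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class SketchInputParams:
    # Hypothetical stand-in for InputParams, enforcing the documented ranges.
    frequency_penalty: Optional[float] = None  # [-2.0, 2.0]
    presence_penalty: Optional[float] = None   # [-2.0, 2.0]
    temperature: Optional[float] = None        # [0.0, 2.0]
    top_p: Optional[float] = None              # [0.0, 1.0]
    max_tokens: Optional[int] = None           # >= 1
    seed: Optional[int] = None                 # >= 0

    def __post_init__(self):
        # Range-check each bounded float parameter if it was provided.
        ranges = {
            "frequency_penalty": (-2.0, 2.0),
            "presence_penalty": (-2.0, 2.0),
            "temperature": (0.0, 2.0),
            "top_p": (0.0, 1.0),
        }
        for name, (lo, hi) in ranges.items():
            value = getattr(self, name)
            if value is not None and not (lo <= value <= hi):
                raise ValueError(f"{name} must be in [{lo}, {hi}], got {value}")
        if self.max_tokens is not None and self.max_tokens < 1:
            raise ValueError("max_tokens must be >= 1")
        if self.seed is not None and self.seed < 0:
            raise ValueError("seed must be >= 0")


params = SketchInputParams(temperature=0.7, top_p=0.9, max_tokens=256)
```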

Input Frames

OpenAILLMContextFrame
Frame

Contains OpenAI-specific conversation context

LLMMessagesFrame
Frame

Contains conversation messages

VisionImageRawFrame
Frame

Contains image for vision model processing

LLMUpdateSettingsFrame
Frame

Updates model settings

Output Frames

TextFrame
Frame

Contains generated text chunks

FunctionCallInProgressFrame
Frame

Indicates the start of a function call

FunctionCallResultFrame
Frame

Contains function call results
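Because responses are streamed, downstream processors typically accumulate TextFrame chunks into the full completion. A minimal stdlib sketch, where TextFrame is a simplified stand-in for the real frame class:

```python
from dataclasses import dataclass


@dataclass
class TextFrame:
    # Simplified stand-in for pipecat's TextFrame.
    text: str


def assemble(frames):
    """Concatenate streamed text chunks into the full completion."""
    return "".join(f.text for f in frames if isinstance(f, TextFrame))


chunks = [TextFrame("Hello"), TextFrame(", "), TextFrame("world!")]
full_text = assemble(chunks)  # "Hello, world!"
```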

OpenAILLMService

Extended implementation with context aggregation support.

Constructor Parameters

model
str
default: "gpt-4"

OpenAI model identifier

params
BaseOpenAILLMService.InputParams

Model configuration parameters

Context Management

The OpenAI service uses specialized context management to handle conversations and message formatting. This includes managing conversation history, system prompts, and tool calls.

OpenAILLMContext

The base context manager for OpenAI conversations:

context = OpenAILLMContext(
    messages=[
        # The system prompt is the first message in the history
        {"role": "system", "content": "You are a helpful assistant"}
    ],
    tools=[]  # Available function calling tools
)
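Messages follow the OpenAI chat format and can be illustrated with plain dictionaries, each carrying a role and content:

```python
# OpenAI chat-format message history, starting with the system prompt.
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
]

# Each new turn is appended to the history as the conversation proceeds.
messages.append({"role": "user", "content": "And of Italy?"})
```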

Context Aggregators

Context aggregators handle message format conversion and management. The service provides a method to create paired aggregators:

create_context_aggregator
static method

Creates user and assistant aggregators for handling message formatting.

@staticmethod
def create_context_aggregator(
    context: OpenAILLMContext,
    *,
    assistant_expect_stripped_words: bool = True
) -> OpenAIContextAggregatorPair

Parameters

context
OpenAILLMContext
required

The context object containing conversation history and settings

assistant_expect_stripped_words
bool
default: "True"

Controls text preprocessing for assistant responses

Usage Example


# 1. Create the context
context = OpenAILLMContext(
    messages=[],
    system="You are a helpful assistant"
)

# 2. Create aggregators for message handling
aggregators = OpenAILLMService.create_context_aggregator(context)

# 3. Access individual aggregators
user_aggregator = aggregators.user()      # Handles user message formatting
assistant_aggregator = aggregators.assistant()  # Handles assistant responses

# 4. Use in a pipeline
pipeline = Pipeline([
    user_aggregator,
    llm_service,
    assistant_aggregator
])

The context management system ensures proper message formatting and history tracking throughout the conversation.

Methods

See the LLM base class methods for additional functionality.

Usage Examples

Basic Usage

from pipecat.services.openai import OpenAILLMService

# Configure service
llm_service = OpenAILLMService(
    model="gpt-4",
    params=OpenAILLMService.InputParams(
        temperature=0.7,
        max_tokens=1000
    )
)

# Create pipeline
pipeline = Pipeline([
    context_manager,
    llm_service,
    response_handler
])

With Function Calling

# Configure function calling
context = OpenAILLMContext(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."}
    ],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather information",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {"type": "string"}
                }
            }
        }
    }]
)

# Create context aggregators
aggregators = OpenAILLMService.create_context_aggregator(context)
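When the model requests a tool, the service reports the call's name and JSON-encoded arguments. Routing that call to application code can be sketched in plain Python; the handler and registry here are hypothetical, not part of the pipecat API:

```python
import json


def get_weather(location: str) -> dict:
    # Hypothetical handler; a real one would call a weather API.
    return {"location": location, "forecast": "sunny"}


# Registry mapping tool names (as declared in the context) to handlers.
HANDLERS = {"get_weather": get_weather}


def dispatch(name: str, arguments: str) -> dict:
    """Look up the requested tool and invoke it with its JSON arguments."""
    handler = HANDLERS[name]
    return handler(**json.loads(arguments))


result = dispatch("get_weather", '{"location": "Paris"}')
# {"location": "Paris", "forecast": "sunny"}
```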

Frame Flow

Context, message, vision, and settings-update frames flow into the service; it responds with streamed TextFrame chunks and, during tool use, FunctionCallInProgressFrame and FunctionCallResultFrame.

Notes

  • Supports streaming responses
  • Handles function calling
  • Provides metrics collection
  • Supports vision models
  • Manages conversation context
  • Thread-safe processing