OpenAI
Large Language Model services using OpenAI’s chat completion API
Overview
OpenAI LLM services provide chat completion capabilities using OpenAI’s API. The implementation includes two main classes:
- BaseOpenAILLMService: Base class providing core OpenAI chat completion functionality
- OpenAILLMService: Implementation with context aggregation support
Installation
To use OpenAI services, install the required dependencies:
You’ll also need to set your OpenAI API key as the OPENAI_API_KEY environment variable.
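A typical setup, assuming the standard Pipecat distribution with its OpenAI extra (adjust the package name if your project pins a different build):

```shell
# Install Pipecat with the OpenAI extra
pip install "pipecat-ai[openai]"

# Make your API key available to the service
export OPENAI_API_KEY=your-api-key
```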
BaseOpenAILLMService
Constructor Parameters
- model: OpenAI model identifier (e.g., “gpt-4”, “gpt-3.5-turbo”)
- api_key: OpenAI API key (defaults to the OPENAI_API_KEY environment variable)
- base_url: Custom API endpoint URL
- params: Model configuration parameters
Input Parameters
- extra: Additional parameters to pass to the model
- frequency_penalty: Reduces likelihood of repeating tokens based on their frequency. Range: [-2.0, 2.0]
- max_completion_tokens: Maximum number of tokens in the completion. Must be greater than or equal to 1
- max_tokens: Maximum number of tokens to generate. Must be greater than or equal to 1
- presence_penalty: Reduces likelihood of repeating any tokens that have already appeared. Range: [-2.0, 2.0]
- seed: Random seed for deterministic generation. Must be greater than or equal to 0
- temperature: Controls randomness in the output. Range: [0.0, 2.0]
- top_p: Controls diversity via nucleus sampling. Range: [0.0, 1.0]
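A sketch of constructing the service with tuned parameters. The import path and the InputParams field names are assumptions based on recent Pipecat releases and may differ in your version:

```python
from pipecat.services.openai import OpenAILLMService

# Field names mirror OpenAI's chat completion parameters (assumed here)
params = OpenAILLMService.InputParams(
    temperature=0.7,            # randomness, [0.0, 2.0]
    top_p=0.9,                  # nucleus sampling, [0.0, 1.0]
    frequency_penalty=0.5,      # [-2.0, 2.0]
    presence_penalty=0.0,       # [-2.0, 2.0]
    seed=42,                    # >= 0, for deterministic generation
    max_completion_tokens=512,  # >= 1
)

llm = OpenAILLMService(
    model="gpt-4",  # model identifier, as described above
    params=params,  # api_key falls back to OPENAI_API_KEY when omitted
)
```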
Input Frames
- OpenAILLMContextFrame: Contains OpenAI-specific conversation context
- LLMMessagesFrame: Contains conversation messages
- VisionImageRawFrame: Contains an image for vision model processing
- LLMUpdateSettingsFrame: Updates model settings
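For illustration, a sketch of pushing these frames into a running pipeline. Here `task` is assumed to be a Pipecat PipelineTask, and the frame import path and the settings-dict shape follow recent releases:

```python
from pipecat.frames.frames import LLMMessagesFrame, LLMUpdateSettingsFrame

async def seed_conversation(task):  # task: a running PipelineTask (assumed)
    # Trigger a completion from a plain OpenAI-format message list
    messages = [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "What is Pipecat?"},
    ]
    await task.queue_frames([LLMMessagesFrame(messages)])

    # Adjust model settings mid-session without rebuilding the service
    await task.queue_frames([LLMUpdateSettingsFrame(settings={"temperature": 0.2})])
```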
Output Frames
- TextFrame: Contains generated text chunks
- FunctionCallInProgressFrame: Indicates the start of a function call
- FunctionCallResultFrame: Contains function call results
OpenAILLMService
Extended implementation with context aggregation support.
Constructor Parameters
- model: OpenAI model identifier
- params: Model configuration parameters
Context Management
The OpenAI service uses specialized context management to handle conversations and message formatting, including conversation history, system prompts, and tool calls.
OpenAILLMContext
The base context manager for OpenAI conversations:
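A minimal sketch of creating and updating a context; the import path and method names are assumptions based on recent Pipecat releases:

```python
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

# Create a context seeded with a system prompt
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful voice assistant."}]
)

# Messages use OpenAI's chat format and accumulate as the conversation runs
context.add_message({"role": "user", "content": "Hello!"})
```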
Context Aggregators
Context aggregators handle message format conversion and management. The service provides a method to create paired aggregators:
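A sketch of creating the paired aggregators, assuming recent Pipecat import paths and the `create_context_aggregator` factory:

```python
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.openai import OpenAILLMService

llm = OpenAILLMService(model="gpt-4")
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)

# Returns a pair of frame processors that share the same context object
context_aggregator = llm.create_context_aggregator(context)
user_aggregator = context_aggregator.user()            # folds user turns into the context
assistant_aggregator = context_aggregator.assistant()  # records assistant responses
```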
Usage Example
The context management system ensures proper message formatting and history tracking throughout the conversation.
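One way the pieces fit together in a pipeline (a sketch, assuming recent Pipecat import paths; a real pipeline would also include transport input/output processors):

```python
from pipecat.pipeline.pipeline import Pipeline
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.openai import OpenAILLMService

llm = OpenAILLMService(model="gpt-4")
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
aggregators = llm.create_context_aggregator(context)

# The user aggregator runs upstream of the LLM so each user turn lands in the
# context before inference; the assistant aggregator runs downstream so the
# streamed response is recorded into the same context.
pipeline = Pipeline([
    aggregators.user(),
    llm,
    aggregators.assistant(),
])
```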
Methods
See the LLM base class methods for additional functionality.
Usage Examples
Basic Usage
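A minimal end-to-end sketch, assuming recent Pipecat import paths and a valid OPENAI_API_KEY in the environment (this makes a live API call, so it is not runnable offline):

```python
import asyncio

from pipecat.frames.frames import EndFrame, LLMMessagesFrame
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask
from pipecat.services.openai import OpenAILLMService

async def main():
    llm = OpenAILLMService(model="gpt-4")  # api_key read from OPENAI_API_KEY
    task = PipelineTask(Pipeline([llm]))

    # Queue a message frame to trigger a completion, then an EndFrame to stop
    messages = [{"role": "user", "content": "Say hello in one sentence."}]
    await task.queue_frames([LLMMessagesFrame(messages), EndFrame()])

    await PipelineRunner().run(task)

asyncio.run(main())
```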
With Function Calling
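A sketch of registering a tool. The tool schema follows OpenAI's function-calling format; the handler signature shown here follows recent Pipecat releases and has changed between versions, and the weather lookup is hypothetical:

```python
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.openai import OpenAILLMService

# OpenAI-format tool schema
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

llm = OpenAILLMService(model="gpt-4")

# Handler signature assumed from recent Pipecat releases; older versions differ
async def get_weather(function_name, tool_call_id, args, llm, context, result_callback):
    # A real handler would call a weather API here (hypothetical result below)
    await result_callback({"city": args["city"], "conditions": "sunny"})

llm.register_function("get_weather", get_weather)

# Pass the tool schema through the context so the model can invoke it
context = OpenAILLMContext(
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
    tools=tools,
)
```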
Frame Flow

Context, message, or image frames arrive at the service, which streams the completion back as a response start frame, a sequence of text frames, and a response end frame; function call frames are emitted in between when the model invokes a tool.
Notes
- Supports streaming responses
- Handles function calling
- Provides metrics collection
- Supports vision models
- Manages conversation context
- Thread-safe processing