OpenAI
Large Language Model services using OpenAI’s chat completion API
Overview
`OpenAILLMService` provides chat completion capabilities using OpenAI’s API, supporting features like streaming responses, function calling, vision input, and advanced context management.
Installation
To use OpenAI services, install the required dependencies:
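```bash
pip install "pipecat-ai[openai]"
```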
You’ll also need to set up your OpenAI API key as an environment variable named `OPENAI_API_KEY`:
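```bash
export OPENAI_API_KEY="your-api-key"  # replace with your actual key
```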
You can obtain an OpenAI API key from the OpenAI platform.
OpenAILLMService
Constructor Parameters
- `model`: OpenAI model identifier. See OpenAI’s docs for the latest supported models.
- `api_key`: OpenAI API key (defaults to the `OPENAI_API_KEY` environment variable)
- `base_url`: Custom API endpoint URL for alternative OpenAI-compatible services
- `organization`: OpenAI organization identifier
- `project`: OpenAI project identifier
- `params`: Model configuration parameters (see below)
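For example, a minimal construction might look like this (a sketch; the import path follows recent Pipecat releases and may differ in older versions):

```python
from pipecat.services.openai.llm import OpenAILLMService

# api_key is read from the OPENAI_API_KEY environment variable if omitted
llm = OpenAILLMService(model="gpt-4o")
```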
Input Parameters
The `params` object accepts the following configuration settings:
- `frequency_penalty`: Reduces the likelihood of repeating tokens based on their frequency in the output so far. Range: [-2.0, 2.0], where higher values reduce repetition more.
- `presence_penalty`: Reduces the likelihood of repeating any tokens that have appeared in the output so far. Range: [-2.0, 2.0], where higher values reduce repetition more.
- `temperature`: Controls randomness/creativity in the output. Lower values are more deterministic; higher values are more creative. Range: [0.0, 2.0]
- `top_p`: Controls diversity via nucleus sampling: only tokens within the top_p cumulative probability mass are considered. Range: [0.0, 1.0]
- `max_tokens`: Maximum number of tokens to generate. Set to limit response length.
- `max_completion_tokens`: Alternative way to specify the maximum completion length.
- `seed`: Seed for deterministic generation. Useful for reproducible outputs.
- `extra`: Additional parameters to pass to the OpenAI API.
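As a sketch, assuming the parameter object is exposed as `OpenAILLMService.InputParams` (as in recent Pipecat releases; the values shown are illustrative, not recommendations):

```python
from pipecat.services.openai.llm import OpenAILLMService

llm = OpenAILLMService(
    model="gpt-4o",
    params=OpenAILLMService.InputParams(
        temperature=0.7,
        top_p=0.9,
        max_tokens=1000,
        frequency_penalty=0.5,
    ),
)
```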
Input Frames
The service processes the following input frames:
- `OpenAILLMContextFrame`: Contains an OpenAI-specific conversation context
- `LLMMessagesFrame`: Contains conversation messages
- `VisionImageRawFrame`: Contains an image for vision model processing
- `LLMUpdateSettingsFrame`: Updates model settings
- `UserImageRequestFrame`: Requests an image from a user
- `UserImageRawFrame`: Contains user-provided image data
Output Frames
The service produces the following output frames:
- `LLMFullResponseStartFrame`: Indicates the start of a response
- `LLMFullResponseEndFrame`: Indicates the end of a response
- `TextFrame`: Contains streamed completion chunks
- `FunctionCallInProgressFrame`: Indicates the start of a function call
- `FunctionCallResultFrame`: Contains function call results
- `ErrorFrame`: Contains error information
Methods
See the LLM base class methods for additional functionality.
Context Management
OpenAI’s API requires a specific format for conversation history and tools. The `OpenAILLMContext` class manages this conversation state, including:
- Message history (user, assistant, system messages)
- Function/tool definitions
- Tool choice preferences
- Image and multimedia content
Constructor Parameters
- `messages`: Initial list of conversation messages. Each message should be an object with at least “role” (user, assistant, or system) and “content” fields. Defaults to an empty list.
- `tools`: Function definitions the model can call. Use these to integrate external data sources or capabilities. Defaults to NOT_GIVEN (no functions available).
- `tool_choice`: Controls when the model can call functions. Options include “auto” (the model decides), “required” (must call a function), or a specific function name. Defaults to NOT_GIVEN (equivalent to “auto”).
Creating a Context
The simplest way to create a context is:
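A minimal sketch (import path per recent Pipecat releases):

```python
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

messages = [{"role": "system", "content": "You are a helpful assistant."}]
context = OpenAILLMContext(messages)
```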
Context Aggregators
To integrate the context into a pipeline, you need paired aggregators that handle:
- Adding user messages to the context before sending to the LLM
- Capturing assistant responses and adding them to the context
The context is shared between both aggregators, so all messages are captured in conversation history automatically.
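For example, assuming `llm` is an `OpenAILLMService` instance as above:

```python
context = OpenAILLMContext(messages)
context_aggregator = llm.create_context_aggregator(context)

# The pair shares the same context object
user_aggregator = context_aggregator.user()
assistant_aggregator = context_aggregator.assistant()
```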
Using Context Aggregators in a Pipeline
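A sketch of the typical processor ordering (`transport` and `tts` are placeholders for whatever transport and TTS service your app uses):

```python
from pipecat.pipeline.pipeline import Pipeline

pipeline = Pipeline([
    transport.input(),               # incoming audio / transcriptions
    context_aggregator.user(),       # add user messages to the context
    llm,                             # generate the completion
    tts,                             # speak the response
    transport.output(),              # outgoing audio
    context_aggregator.assistant(),  # add the assistant response to the context
])
```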
Multimodal Capabilities
OpenAI’s latest models (like GPT-4o) support multimodal inputs including images:
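A sketch using the context’s image helper (`add_image_frame_message` in Pipecat’s `OpenAILLMContext`; `image_bytes` and its size are placeholders):

```python
# Add an image (raw RGB bytes) plus a text prompt to the conversation
context.add_image_frame_message(
    format="RGB",
    size=(1024, 768),
    image=image_bytes,
    text="Describe what you see in this image.",
)
```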
Function Calling
This service supports function calling (also known as tool calling), which allows the LLM to request information from external services and APIs. For example, you can enable your bot to:
- Check current weather conditions
- Query databases
- Access external APIs
- Perform custom actions
Function Calling Guide
Learn how to implement function calling with standardized schemas, register handlers, manage context properly, and control execution flow in your conversational AI applications.
Usage Examples
Basic Usage
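A minimal sketch wiring the service into a pipeline task. `transport` and `tts` are placeholders for your chosen transport and TTS service, and import paths follow recent Pipecat releases:

```python
import asyncio

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.openai.llm import OpenAILLMService


async def main():
    llm = OpenAILLMService(model="gpt-4o")

    context = OpenAILLMContext(
        [{"role": "system", "content": "You are a helpful assistant."}]
    )
    context_aggregator = llm.create_context_aggregator(context)

    # transport and tts are placeholders for your transport (e.g. a
    # WebRTC or telephony transport) and TTS service
    pipeline = Pipeline([
        transport.input(),
        context_aggregator.user(),
        llm,
        tts,
        transport.output(),
        context_aggregator.assistant(),
    ])

    runner = PipelineRunner()
    await runner.run(PipelineTask(pipeline))


asyncio.run(main())
```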
With Function Calling
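A sketch using Pipecat’s standardized function schemas; the weather function and its handler are illustrative, and the single-argument handler signature follows recent Pipecat releases:

```python
from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import ToolsSchema

# Define a function the model may call
weather_function = FunctionSchema(
    name="get_current_weather",
    description="Get the current weather for a location",
    properties={
        "location": {
            "type": "string",
            "description": "City and state, e.g. San Francisco, CA",
        },
    },
    required=["location"],
)
tools = ToolsSchema(standard_tools=[weather_function])

# Register a handler; params.arguments holds the parsed call arguments
async def fetch_weather(params):
    # ... call a real weather API here ...
    await params.result_callback({"conditions": "sunny", "temperature": "75°F"})

llm.register_function("get_current_weather", fetch_weather)

# Pass the tools when creating the context
context = OpenAILLMContext(messages, tools=tools)
```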
Frame Flow
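In a typical completion, the service brackets its output between `LLMFullResponseStartFrame` and `LLMFullResponseEndFrame`, streaming `TextFrame` chunks (and any function call frames) in between.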
Error Handling
The service handles common API errors including:
- Authentication errors
- Rate limiting
- Invalid requests
- Network connectivity issues
- API timeouts
For unexpected errors, the service emits an `ErrorFrame` with details.