# Fireworks AI

LLM service implementation using Fireworks AI’s API with an OpenAI-compatible interface.
## Overview

`FireworksLLMService` provides access to Fireworks AI’s language models through an OpenAI-compatible interface. It inherits from `BaseOpenAILLMService` and supports streaming responses, function calling, and context management.
## Installation

To use `FireworksLLMService`, install the required dependencies:
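```shell
# Assumes pipecat's "fireworks" extra; the extra name may vary by version.
pip install "pipecat-ai[fireworks]"
```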
You’ll also need to set up your Fireworks API key as an environment variable, `FIREWORKS_API_KEY`:
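```shell
export FIREWORKS_API_KEY="your-api-key"
```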
## Configuration

### Constructor Parameters

| Parameter | Description |
|-----------|-------------|
| `api_key` | Your Fireworks AI API key |
| `model` | Model identifier |
| `base_url` | Fireworks AI API endpoint |
### Input Parameters

Inherits all input parameters from `BaseOpenAILLMService`; see the sketch below.
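A minimal sketch of passing inference parameters, assuming the OpenAI-style `InputParams` fields shown here; check `BaseOpenAILLMService.InputParams` in your pipecat version for the exact set and module paths.

```python
import os

from pipecat.services.fireworks import FireworksLLMService
from pipecat.services.openai import BaseOpenAILLMService

# Illustrative values for the OpenAI-style sampling parameters that
# BaseOpenAILLMService typically exposes.
params = BaseOpenAILLMService.InputParams(
    temperature=0.7,
    top_p=0.9,
    max_tokens=1024,
)

llm = FireworksLLMService(
    api_key=os.getenv("FIREWORKS_API_KEY"),
    params=params,
)
```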
## Input Frames

- `OpenAILLMContextFrame`: Contains OpenAI-specific conversation context
- `LLMMessagesFrame`: Contains conversation messages
- `VisionImageRawFrame`: Contains an image for vision model processing
- `LLMUpdateSettingsFrame`: Updates model settings
## Output Frames

- `TextFrame`: Contains generated text chunks
- `FunctionCallInProgressFrame`: Indicates the start of a function call
- `FunctionCallResultFrame`: Contains function call results
## Methods
See the LLM base class methods for additional functionality.
## Usage Example
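A minimal sketch of constructing the service and a conversation context; module paths and the aggregator helper follow common pipecat conventions and may differ across versions.

```python
import os

from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.fireworks import FireworksLLMService

# Configure the service; FIREWORKS_API_KEY must be set in the environment.
llm = FireworksLLMService(
    api_key=os.getenv("FIREWORKS_API_KEY"),
    model="accounts/fireworks/models/firefunction-v1",
)

# Create a conversation context and its user/assistant aggregator pair.
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}],
)
context_aggregator = llm.create_context_aggregator(context)

# In a full application the service sits between the aggregators in a
# Pipeline, e.g.:
#   Pipeline([transport.input(), context_aggregator.user(), llm,
#             context_aggregator.assistant(), transport.output()])
```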
## Function Calling

Supports OpenAI-compatible function calling with the `firefunction-v1` model:
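A sketch of registering a function handler, assuming the `register_function` API and handler signature used by recent pipecat releases; `get_weather` and its handler are hypothetical.

```python
import os

from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.fireworks import FireworksLLMService

llm = FireworksLLMService(
    api_key=os.getenv("FIREWORKS_API_KEY"),
    model="accounts/fireworks/models/firefunction-v1",
)

# Hypothetical handler: invoked when the model calls "get_weather"; the
# result is fed back into the conversation via result_callback.
async def fetch_weather(function_name, tool_call_id, args, llm, context, result_callback):
    await result_callback({"conditions": "sunny", "temperature_f": 72})

llm.register_function("get_weather", fetch_weather)

# Tool schema in the standard OpenAI function-calling format, supplied
# through the conversation context.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a location",
            "parameters": {
                "type": "object",
                "properties": {"location": {"type": "string"}},
                "required": ["location"],
            },
        },
    }
]
context = OpenAILLMContext(
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    tools=tools,
)
```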
## Available Models

Fireworks AI provides access to various models, including:

| Model Name | Description |
|------------|-------------|
| `accounts/fireworks/models/firefunction-v1` | Optimized for function calling |
| `accounts/fireworks/models/llama-v2-7b` | Llama 2 7B base model |
| `accounts/fireworks/models/llama-v2-13b` | Llama 2 13B base model |
| `accounts/fireworks/models/llama-v2-70b` | Llama 2 70B base model |
| `accounts/fireworks/models/mixtral-8x7b` | Mixtral 8x7B model |
## Frame Flow

Inherits the `BaseOpenAILLMService` frame flow: context and message frames go in, and streamed text frames plus function call frames come out.
## Metrics Support

The service collects standard LLM metrics:
- Token usage (prompt and completion)
- Processing duration
- Time to First Byte (TTFB)
- Function call metrics
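A sketch of enabling these metrics at the task level, assuming the `PipelineParams` flags available in recent pipecat versions; flag names may differ in yours.

```python
import os

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.services.fireworks import FireworksLLMService

llm = FireworksLLMService(api_key=os.getenv("FIREWORKS_API_KEY"))

# Metrics are enabled per pipeline task.
task = PipelineTask(
    Pipeline([llm]),
    params=PipelineParams(
        enable_metrics=True,        # processing duration and TTFB
        enable_usage_metrics=True,  # prompt/completion token usage
    ),
)
```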
## Notes
- OpenAI-compatible interface
- Supports streaming responses
- Handles function calling
- Manages conversation context
- Includes token usage tracking
- Thread-safe processing
- Automatic error handling