Overview
FireworksLLMService provides access to Fireworks AI’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management with optimized inference infrastructure.
- Fireworks LLM API Reference: Pipecat's API methods for Fireworks AI integration
- Example Implementation: Complete example with function calling
- Fireworks Documentation: Official Fireworks AI API documentation and features
- Fireworks Platform: Access models and manage API keys
Installation
To use Fireworks AI services, install the required dependency.

Prerequisites
Fireworks AI Account Setup
Before using Fireworks AI LLM services, you need:
- Fireworks Account: Sign up at Fireworks AI
- API Key: Generate an API key from your account dashboard
- Model Selection: Choose from available open-source and proprietary models
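For the Installation step above, the dependency is typically installed with pip. The `fireworks` extra name below is an assumption based on Pipecat's optional-dependency convention; check the Pipecat docs for the exact extra for your version:

```shell
pip install "pipecat-ai[fireworks]"
```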
Required Environment Variables
FIREWORKS_API_KEY: Your Fireworks AI API key for authentication
Configuration
- api_key: Fireworks AI API key for authentication.
- model: Model identifier to use.
- base_url: Base URL for the Fireworks API endpoint.
InputParams
This service uses the same input parameters as OpenAILLMService. See OpenAI LLM for details.
Usage
Basic Setup
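A minimal construction sketch. The import path and the model identifier are assumptions based on Pipecat's module layout and the Fireworks model naming convention; adjust for your Pipecat version:

```python
import os

# Import path is an assumption; older Pipecat versions used
# pipecat.services.fireworks instead of the nested module.
from pipecat.services.fireworks.llm import FireworksLLMService

# Model names use the accounts/fireworks/models/ prefix format.
llm = FireworksLLMService(
    api_key=os.getenv("FIREWORKS_API_KEY"),
    model="accounts/fireworks/models/firefunction-v2",
)
```

The service can then be placed in a Pipecat pipeline like any other OpenAI-compatible LLM service.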
With Custom Parameters
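A sketch of passing custom generation parameters. Since FireworksLLMService inherits from OpenAILLMService, it reuses OpenAILLMService.InputParams; the import paths and field names here are assumptions based on that inheritance:

```python
import os

from pipecat.services.fireworks.llm import FireworksLLMService
from pipecat.services.openai.llm import OpenAILLMService

# Fireworks does not support seed, max_completion_tokens, or
# stream_options, so max_tokens is used to bound the response.
llm = FireworksLLMService(
    api_key=os.getenv("FIREWORKS_API_KEY"),
    model="accounts/fireworks/models/firefunction-v2",
    params=OpenAILLMService.InputParams(
        temperature=0.7,
        max_tokens=1000,
    ),
)
```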
Notes
- Fireworks does not support the seed, max_completion_tokens, or stream_options parameters. Use max_tokens instead.
- Model identifiers use the accounts/fireworks/models/ prefix format.