Overview

FireworksLLMService provides access to Fireworks AI’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management with optimized inference infrastructure.

Installation

To use Fireworks AI services, install the required dependency:
pip install "pipecat-ai[fireworks]"

Prerequisites

Fireworks AI Account Setup

Before using Fireworks AI LLM services, you need:
  1. Fireworks Account: Sign up at Fireworks AI
  2. API Key: Generate an API key from your account dashboard
  3. Model Selection: Choose from available open-source and proprietary models

Required Environment Variables

  • FIREWORKS_API_KEY: Your Fireworks AI API key for authentication

Configuration

  • api_key (str, required): Fireworks AI API key for authentication.
  • model (str, default: "accounts/fireworks/models/firefunction-v2"): Model identifier to use.
  • base_url (str, default: "https://api.fireworks.ai/inference/v1"): Base URL for the Fireworks API endpoint.

InputParams

This service uses the same input parameters as OpenAILLMService. See OpenAI LLM for details.

Usage

Basic Setup

import os
from pipecat.services.fireworks import FireworksLLMService

llm = FireworksLLMService(
    api_key=os.getenv("FIREWORKS_API_KEY"),
    model="accounts/fireworks/models/firefunction-v2",
)

With Custom Parameters

import os
from pipecat.services.fireworks import FireworksLLMService

llm = FireworksLLMService(
    api_key=os.getenv("FIREWORKS_API_KEY"),
    model="accounts/fireworks/models/firefunction-v2",
    params=FireworksLLMService.InputParams(
        temperature=0.7,
        top_p=0.9,
        max_tokens=1024,
    ),
)
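Because the service exposes an OpenAI-compatible interface, function-calling tools use the standard OpenAI tool schema. A minimal sketch of a tool definition (the get_weather function and its parameters are illustrative placeholders, not part of Pipecat or Fireworks):

```python
# Standard OpenAI-style tool schema, usable with any OpenAI-compatible
# function-calling model such as firefunction-v2.
weather_tool = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {
                    "type": "string",
                    "description": "City name, e.g. San Francisco",
                },
            },
            "required": ["location"],
        },
    },
}
```

A list of such dictionaries is then passed as the tools for the LLM context, the same way it would be for OpenAILLMService.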

Notes

  • Fireworks does not support the seed, max_completion_tokens, or stream_options parameters. Use max_tokens instead of max_completion_tokens.
  • Model identifiers use the accounts/fireworks/models/ prefix format.
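The prefix convention above can be handled with a small helper when model names come from configuration (qualify_model is a hypothetical convenience for illustration, not part of the library):

```python
def qualify_model(name: str) -> str:
    """Prepend the accounts/fireworks/models/ prefix if it is absent."""
    prefix = "accounts/fireworks/models/"
    return name if name.startswith("accounts/") else prefix + name
```

With this, both "firefunction-v2" and the fully qualified form resolve to the same model identifier.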