Overview

GroqLLMService provides access to Groq’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management with ultra-fast inference speeds.

Installation

To use Groq services, install the required dependency:
pip install "pipecat-ai[groq]"

Prerequisites

Groq Account Setup

Before using Groq LLM services, you need:
  1. Groq Account: Sign up at Groq Console
  2. API Key: Generate an API key from your console dashboard
  3. Model Selection: Choose from available models with ultra-fast inference

Required Environment Variables

  • GROQ_API_KEY: Your Groq API key for authentication
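For local development, the key can be exported in the shell before running your app. The key value below is a placeholder, not a real key:

```shell
# Export the Groq API key so the service can read it via os.getenv("GROQ_API_KEY").
# "gsk_your_key_here" is a placeholder; use your own key from the Groq Console.
export GROQ_API_KEY="gsk_your_key_here"
```

In production, prefer a secrets manager over shell exports so the key never lands in shell history or process listings.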

Configuration

  • api_key (str, required): Groq API key for authentication.
  • base_url (str, default: "https://api.groq.com/openai/v1"): Base URL for the Groq API endpoint.
  • model (str, default: "llama-3.3-70b-versatile"): Model identifier to use.

InputParams

This service uses the same input parameters as OpenAILLMService. See OpenAI LLM for details.

Usage

Basic Setup

import os
from pipecat.services.groq import GroqLLMService

llm = GroqLLMService(
    api_key=os.getenv("GROQ_API_KEY"),
    model="llama-3.3-70b-versatile",
)

With Custom Parameters

import os
from pipecat.services.groq import GroqLLMService

llm = GroqLLMService(
    api_key=os.getenv("GROQ_API_KEY"),
    model="llama-3.3-70b-versatile",
    params=GroqLLMService.InputParams(
        temperature=0.7,
        top_p=0.9,
        max_completion_tokens=1024,
    ),
)

Notes

  • Groq provides ultra-fast inference using custom LPU (Language Processing Unit) hardware.
  • Groq supports most of the OpenAI-compatible parameter set inherited from OpenAILLMService; check Groq’s API documentation for any fields its endpoint does not accept.
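Because the endpoint is OpenAI-compatible, requests follow the standard chat-completions shape. The sketch below uses only the standard library to build (but not send) such a request against the default base_url; the message content and temperature are illustrative, not defaults:

```python
import json
from urllib.parse import urljoin

# Default Groq endpoint from the configuration above
# (trailing slash added so urljoin keeps the /openai/v1 path).
BASE_URL = "https://api.groq.com/openai/v1/"

# OpenAI-style chat-completions path appended to the base URL.
url = urljoin(BASE_URL, "chat/completions")

# Standard OpenAI-compatible payload; the model matches the service default.
payload = {
    "model": "llama-3.3-70b-versatile",
    "messages": [{"role": "user", "content": "Hello!"}],
    "temperature": 0.7,
    "stream": True,  # GroqLLMService streams responses
}

# This serialized body is what an OpenAI-compatible client would POST,
# with an "Authorization: Bearer $GROQ_API_KEY" header.
body = json.dumps(payload).encode("utf-8")
print(url)
```

This is why GroqLLMService can inherit from OpenAILLMService: only the base URL, API key, and model name change, while the wire format stays the same.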