Overview

GrokLLMService provides access to Grok’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management, as well as Grok’s reasoning capabilities.

Installation

To use Grok services, install the required dependencies:
pip install "pipecat-ai[grok]"

Prerequisites

Grok Account Setup

Before using Grok LLM services, you need:
  1. X.AI Account: Sign up at X.AI Console
  2. API Key: Generate an API key from your console dashboard
  3. Model Selection: Choose from available Grok models

Required Environment Variables

  • XAI_API_KEY: Your X.AI API key for authentication
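
For local development, you can load this variable from a .env file. A minimal sketch, assuming python-dotenv is installed (it is not required by Pipecat or Grok):

import os
from dotenv import load_dotenv

load_dotenv()  # reads XAI_API_KEY from a local .env file, if present
api_key = os.environ["XAI_API_KEY"]  # raises KeyError if the key is not set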

Configuration

  • api_key (str, required): X.AI API key for authentication.
  • base_url (str, default: "https://api.x.ai/v1"): Base URL for the Grok API endpoint.
  • model (str, default: "grok-3-beta"): Model identifier to use.
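
Putting the three parameters together, construction might look like the following sketch. The base_url shown is the documented default, passed explicitly here only for illustration (you would normally override it only when routing through a proxy or gateway):

import os
from pipecat.services.grok import GrokLLMService

llm = GrokLLMService(
    api_key=os.getenv("XAI_API_KEY"),
    base_url="https://api.x.ai/v1",  # default endpoint; override for a proxy or gateway
    model="grok-3-beta",
)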

InputParams

This service uses the same input parameters as OpenAILLMService. See OpenAI LLM for details.

Usage

Basic Setup

import os
from pipecat.services.grok import GrokLLMService

llm = GrokLLMService(
    api_key=os.getenv("XAI_API_KEY"),
    model="grok-3-beta",
)
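
The service is typically placed in a pipeline between context aggregators. A sketch of that wiring, assuming the standard Pipecat OpenAILLMContext aggregator pattern (transport and runner setup omitted):

from pipecat.pipeline.pipeline import Pipeline
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
context_aggregator = llm.create_context_aggregator(context)

pipeline = Pipeline([
    context_aggregator.user(),       # collects user messages into the context
    llm,                             # GrokLLMService instance from the snippet above
    context_aggregator.assistant(),  # appends model responses to the context
])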

With Custom Parameters

import os
from pipecat.services.grok import GrokLLMService

llm = GrokLLMService(
    api_key=os.getenv("XAI_API_KEY"),
    model="grok-3-beta",
    params=GrokLLMService.InputParams(
        temperature=0.7,
        top_p=0.9,
        max_completion_tokens=1024,
    ),
)
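
Because GrokLLMService inherits OpenAI-style function calling, tool use follows the same registration pattern as OpenAILLMService. A sketch with a hypothetical get_weather tool; the handler signature shown follows the positional style and may differ across Pipecat versions:

from openai.types.chat import ChatCompletionToolParam

tools = [
    ChatCompletionToolParam(
        type="function",
        function={
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    )
]

async def get_weather(function_name, tool_call_id, args, llm, context, result_callback):
    # Hypothetical handler: a real implementation would call a weather API.
    await result_callback({"city": args["city"], "conditions": "sunny"})

llm.register_function("get_weather", get_weather)

Pass the tools list when building the context (e.g., OpenAILLMContext(messages, tools)) so the model can request the function.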

Notes

  • Grok uses incremental token reporting. The service accumulates token usage metrics during processing and reports the final totals at the end of each request.
  • Grok supports prompt caching and reasoning tokens, which are tracked in usage metrics when available.
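
To observe the reported totals, you can watch for metrics frames downstream of the service. A sketch, assuming Pipecat’s MetricsFrame and LLMUsageMetricsData types (exact names and fields may differ between Pipecat versions):

from pipecat.frames.frames import Frame, MetricsFrame
from pipecat.metrics.metrics import LLMUsageMetricsData
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor

class UsageLogger(FrameProcessor):
    async def process_frame(self, frame: Frame, direction: FrameDirection):
        await super().process_frame(frame, direction)
        if isinstance(frame, MetricsFrame):
            for data in frame.data:
                if isinstance(data, LLMUsageMetricsData):
                    usage = data.value  # token usage totals for the request
                    print(
                        f"prompt={usage.prompt_tokens} "
                        f"completion={usage.completion_tokens} "
                        f"reasoning={usage.reasoning_tokens}"  # may be None
                    )
        await self.push_frame(frame, direction)

Metrics frames are only emitted when metrics are enabled on the pipeline task (e.g., PipelineParams(enable_metrics=True)).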