Grok
LLM service implementation using Grok’s API with an OpenAI-compatible interface
Overview
GrokLLMService provides access to Grok’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management.
Installation
To use GrokLLMService, install the required dependencies.
You’ll also need to set your Grok API key in the GROK_API_KEY environment variable.
Configuration
Constructor Parameters
- api_key: Your Grok API key
- model: Model identifier (for example, grok-beta)
- base_url: Grok API endpoint
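A minimal construction sketch follows, assuming a Pipecat-style import path for GrokLLMService; the exact module path and defaults may differ in your installed version:

```python
import os

# Assumed import path; adjust to match your installed package layout.
from pipecat.services.grok import GrokLLMService

llm = GrokLLMService(
    api_key=os.getenv("GROK_API_KEY"),  # read the key from the environment
    model="grok-beta",                  # model from the Available Models table below
)
```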
Input Parameters
Inherits OpenAI-compatible parameters (a sketch of passing them follows the list):
- frequency_penalty: Reduces the likelihood of repeating tokens based on their frequency. Range: [-2.0, 2.0]
- max_tokens: Maximum number of tokens to generate. Must be greater than or equal to 1
- presence_penalty: Reduces the likelihood of repeating any tokens that have already appeared. Range: [-2.0, 2.0]
- temperature: Controls randomness in the output. Range: [0.0, 2.0]
- top_p: Controls diversity via nucleus sampling. Range: [0.0, 1.0]
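Because the service inherits from OpenAILLMService, these parameters are typically bundled into the base class’s InputParams container and passed at construction time. The params keyword and the InputParams field names below are assumptions based on that OpenAI-compatible base service:

```python
import os

# Assumed import path; InputParams is inherited from the OpenAI base service.
from pipecat.services.grok import GrokLLMService

llm = GrokLLMService(
    api_key=os.getenv("GROK_API_KEY"),
    model="grok-beta",
    params=GrokLLMService.InputParams(
        temperature=0.7,   # moderate randomness
        top_p=1.0,         # sample from the full nucleus distribution
        max_tokens=1000,   # cap the completion length
    ),
)
```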
Usage Example
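A typical setup places the service between user and assistant context aggregators so the conversation context is managed automatically. The sketch below assumes Pipecat-style imports and helper names (OpenAILLMContext, create_context_aggregator, Pipeline); adjust paths and names to your installed version:

```python
import os

# Assumed Pipecat-style import paths.
from pipecat.pipeline.pipeline import Pipeline
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.grok import GrokLLMService

llm = GrokLLMService(api_key=os.getenv("GROK_API_KEY"), model="grok-beta")

# Shared conversation context, kept up to date by the aggregators below.
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You are a helpful assistant."}]
)
context_aggregator = llm.create_context_aggregator(context)

# User input flows into the context, through the LLM, and the streamed
# response is appended back onto the context.
pipeline = Pipeline(
    [
        context_aggregator.user(),
        llm,
        context_aggregator.assistant(),
    ]
)
```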
Methods
See the LLM base class methods for additional functionality.
Function Calling
Supports OpenAI-compatible function calling:
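The tools schema uses the standard OpenAI function-calling format. The register_function call and the handler signature in this sketch are assumptions based on the OpenAI-compatible base service and vary between versions:

```python
import os

# Assumed import paths.
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.grok import GrokLLMService

llm = GrokLLMService(api_key=os.getenv("GROK_API_KEY"), model="grok-beta")

# Standard OpenAI-style tool definition.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

# Hypothetical handler; the exact callback signature depends on your version.
async def get_weather(function_name, tool_call_id, args, llm, context, result_callback):
    await result_callback({"city": args["city"], "conditions": "sunny"})

llm.register_function("get_weather", get_weather)

# Expose the tools to the model through the conversation context.
context = OpenAILLMContext(
    messages=[{"role": "system", "content": "You can look up the weather."}],
    tools=tools,
)
```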
Available Models
| Model Name | Description |
| --- | --- |
| grok-beta | Grok’s beta model |
See Grok’s docs for a complete list of supported models.
Frame Flow
Inherits the OpenAI LLM Service frame flow: context frames are consumed as input, and the service emits streamed response frames (response start/end, text, and function-call frames) downstream.
Metrics Support
The service collects standard LLM metrics (a sketch for enabling them follows the list):
- Token usage (prompt and completion)
- Processing duration
- Time to First Byte (TTFB)
- Function call metrics
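Metrics collection is typically switched on when the pipeline is run. The PipelineTask and PipelineParams names and flags below are assumptions based on a Pipecat-style runner:

```python
# Assumed import path and flag names.
from pipecat.pipeline.task import PipelineParams, PipelineTask

# "pipeline" is the Pipeline built in the usage example above.
task = PipelineTask(
    pipeline,
    params=PipelineParams(
        enable_metrics=True,        # processing duration and TTFB
        enable_usage_metrics=True,  # prompt/completion token usage
    ),
)
```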
Notes
- OpenAI-compatible interface
- Supports streaming responses
- Handles function calling
- Manages conversation context
- Includes token usage tracking
- Thread-safe processing
- Automatic error handling