Groq
LLM service implementation using Groq’s API with OpenAI-compatible interface
Overview
`GroqLLMService` provides access to Groq’s language models through an OpenAI-compatible interface. It inherits from `OpenAILLMService` and supports streaming responses, function calling, and context management.
- API Reference - Complete API documentation and method details
- Groq Docs - Official Groq API documentation and features
- Example Code - Working example with function calling
Installation
To use Groq services, install the required dependency:
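The command below assumes the standard Pipecat extras naming for the Groq integration:

```bash
pip install "pipecat-ai[groq]"
```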
You’ll also need to set up your Groq API key as an environment variable: `GROQ_API_KEY`.
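For example, in your shell:

```bash
export GROQ_API_KEY=your_groq_api_key
```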
Get your API key for free from Groq Console.
Frames
Input
- `OpenAILLMContextFrame` - Conversation context and history
- `LLMMessagesFrame` - Direct message list
- `VisionImageRawFrame` - Images for vision processing (select models)
- `LLMUpdateSettingsFrame` - Runtime parameter updates
Output
- `LLMFullResponseStartFrame` / `LLMFullResponseEndFrame` - Response boundaries
- `LLMTextFrame` - Streamed completion chunks
- `FunctionCallInProgressFrame` / `FunctionCallResultFrame` - Function call lifecycle
- `ErrorFrame` - API or processing errors
Function Calling
Function Calling Guide
Learn how to implement function calling with standardized schemas, register handlers, manage context properly, and control execution flow in your conversational AI applications.
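As a rough sketch, here is how a tool might be registered with this service, assuming Pipecat’s `FunctionSchema` and a `FunctionCallParams`-style handler; exact module paths and handler signatures may vary by Pipecat version:

```python
from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.services.llm_service import FunctionCallParams

# Hypothetical weather tool, used only for illustration
weather_function = FunctionSchema(
    name="get_current_weather",
    description="Get the current weather for a location",
    properties={"location": {"type": "string", "description": "City and state"}},
    required=["location"],
)

async def fetch_weather(params: FunctionCallParams):
    # Look the weather up here, then hand the result back to the LLM
    await params.result_callback({"conditions": "sunny", "temperature_f": 72})

# llm is a GroqLLMService instance (see the Usage Example below)
llm.register_function("get_current_weather", fetch_weather)
```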
Context Management
Context Management Guide
Learn how to manage conversation context, handle message history, and integrate context aggregators for consistent conversational experiences.
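A minimal sketch of wiring up context for this service, assuming Pipecat’s `OpenAILLMContext` and the service’s `create_context_aggregator` helper:

```python
from pipecat.adapters.schemas.tools_schema import ToolsSchema
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

messages = [{"role": "system", "content": "You are a helpful voice assistant."}]
tools = ToolsSchema(standard_tools=[weather_function])  # schema from the sketch above

# llm is a GroqLLMService instance (see the Usage Example below)
context = OpenAILLMContext(messages, tools)
context_aggregator = llm.create_context_aggregator(context)
```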
Usage Example
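A minimal sketch, assuming the `pipecat.services.groq.llm` module path and the `llama-3.3-70b-versatile` model name; check the API reference for the exact import path and Groq’s docs for current model names. Transport, STT, and TTS processors are omitted for brevity:

```python
import os

from pipecat.pipeline.pipeline import Pipeline
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.groq.llm import GroqLLMService

# Configure the Groq LLM service
llm = GroqLLMService(
    api_key=os.getenv("GROQ_API_KEY"),
    model="llama-3.3-70b-versatile",  # model name assumed; see Groq docs
)

# Create the conversation context and its aggregator pair
messages = [{"role": "system", "content": "You are a helpful voice assistant."}]
context = OpenAILLMContext(messages)
context_aggregator = llm.create_context_aggregator(context)

# Assemble the LLM stages of a pipeline; in a full voice application a
# transport, STT, and TTS would surround these processors
pipeline = Pipeline(
    [
        context_aggregator.user(),       # collects user messages into the context
        llm,                             # streams completions from Groq
        context_aggregator.assistant(),  # records assistant responses
    ]
)
```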
Metrics
Inherits all OpenAI metrics capabilities:
- Time to First Byte (TTFB) - Ultra-low latency measurements
- Processing Duration - Hardware-accelerated processing times
- Token Usage - Prompt tokens, completion tokens, and totals
- Function Call Metrics - Tool usage and execution tracking
Enable with:
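For example, assuming the usual `PipelineParams` flags:

```python
from pipecat.pipeline.task import PipelineParams, PipelineTask

task = PipelineTask(
    pipeline,
    params=PipelineParams(
        enable_metrics=True,        # TTFB and processing duration
        enable_usage_metrics=True,  # token usage
    ),
)
```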
Additional Notes
- OpenAI Compatibility: Full compatibility with OpenAI API features and parameters
- Real-time Optimized: Ideal for conversational AI and streaming applications
- Open Source Models: Access to Llama, Mixtral, and other open-source models
- Vision Support: Select models support image understanding capabilities
- Free Tier: Generous free tier available for development and testing