OpenPipe
LLM service implementation using OpenPipe for LLM request logging and fine-tuning
Overview
OpenPipeLLMService extends BaseOpenAILLMService to provide integration with OpenPipe, enabling request logging, model fine-tuning, and performance monitoring. It maintains compatibility with OpenAI's API while adding OpenPipe's logging and optimization capabilities.
- API Reference - Complete API documentation and method details
- OpenPipe Docs - Official OpenPipe API documentation and features
Installation
To use OpenPipeLLMService, install the required dependencies:
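The original install command appears here in the source docs; a sketch assuming Pipecat ships OpenPipe support as an optional package extra named `openpipe`:

```shell
# Install Pipecat with the OpenPipe extra (extra name is an assumption)
pip install "pipecat-ai[openpipe]"
```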
You’ll need to set up both API keys as environment variables:
- OPENPIPE_API_KEY - Your OpenPipe API key
- OPENAI_API_KEY - Your OpenAI API key
Get your OpenPipe API key from OpenPipe Dashboard.
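For example, the two keys can be exported in your shell before starting the application (the values below are placeholders):

```shell
# Placeholder values -- substitute your real keys
export OPENPIPE_API_KEY="your-openpipe-api-key"
export OPENAI_API_KEY="your-openai-api-key"
```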
Frames
Input
- OpenAILLMContextFrame - Conversation context and history
- LLMMessagesFrame - Direct message list
- VisionImageRawFrame - Images for vision processing
- LLMUpdateSettingsFrame - Runtime parameter updates
Output
- LLMFullResponseStartFrame / LLMFullResponseEndFrame - Response boundaries
- LLMTextFrame - Streamed completion chunks
- FunctionCallInProgressFrame / FunctionCallResultFrame - Function call lifecycle
- ErrorFrame - API or processing errors
Function Calling
- Function Calling Guide - Learn how to implement function calling with standardized schemas, register handlers, manage context properly, and control execution flow in your conversational AI applications.
Context Management
- Context Management Guide - Learn how to manage conversation context, handle message history, and integrate context aggregators for consistent conversational experiences.
Usage Example
Metrics
Inherits all OpenAI metrics plus OpenPipe-specific logging:
- Time to First Byte (TTFB) - Response latency measurement
- Processing Duration - Total request processing time
- Token Usage - Detailed consumption tracking
Enable with:
Additional Notes
- OpenAI Compatibility: Full compatibility with OpenAI API features and parameters
- Privacy Aware: Configurable data retention and filtering policies
- Cost Optimization: Detailed analytics help optimize model usage and costs
- Fine-tuning Pipeline: Seamless transition from logging to custom model training