Overview

OpenPipeLLMService extends BaseOpenAILLMService to integrate with OpenPipe, enabling request logging, data collection for model fine-tuning, and performance monitoring. It maintains compatibility with the OpenAI API while adding OpenPipe's logging and optimization capabilities.

Installation

To use OpenPipeLLMService, install the required dependencies:

pip install "pipecat-ai[openpipe]"

You’ll need to set up the following environment variables:

  • OPENPIPE_API_KEY - Your OpenPipe API key
  • OPENAI_API_KEY - Your OpenAI API key
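
For example, a minimal sketch that reads both keys from the environment at construction time (assuming the variables above are already exported in your shell):

import os

from pipecat.services.openpipe import OpenPipeLLMService

# Read credentials from the environment so keys never appear in source code.
service = OpenPipeLLMService(
    api_key=os.getenv("OPENAI_API_KEY"),
    openpipe_api_key=os.getenv("OPENPIPE_API_KEY"),
)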

Configuration

Constructor Parameters

  • model (str, default: "gpt-4o"): Model identifier
  • api_key (str | None): OpenAI API key
  • base_url (str | None): OpenAI API endpoint
  • openpipe_api_key (str | None): OpenPipe API key
  • openpipe_base_url (str, default: "https://app.openpipe.ai/api/v1"): OpenPipe API endpoint
  • tags (Dict[str, str] | None): Custom tags for request logging
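
For reference, a sketch that sets every constructor parameter explicitly. The key values are placeholders, and the base URL shown is the documented default:

from pipecat.services.openpipe import OpenPipeLLMService

service = OpenPipeLLMService(
    model="gpt-4o",                                      # default model
    api_key="your-openai-key",                           # placeholder
    base_url=None,                                       # None selects the standard OpenAI endpoint
    openpipe_api_key="your-openpipe-key",                # placeholder
    openpipe_base_url="https://app.openpipe.ai/api/v1",  # documented default
    tags={"environment": "staging"},                     # attached to every logged request
)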

Input Parameters

Inherits all input parameters from BaseOpenAILLMService:

  • extra (Optional[Dict[str, Any]]): Additional parameters to pass to the model
  • frequency_penalty (Optional[float]): Reduces likelihood of repeating tokens based on their frequency. Range: [-2.0, 2.0]
  • max_completion_tokens (Optional[int]): Maximum number of tokens in the completion. Must be greater than or equal to 1
  • max_tokens (Optional[int]): Maximum number of tokens to generate. Must be greater than or equal to 1
  • presence_penalty (Optional[float]): Reduces likelihood of repeating any tokens that have appeared. Range: [-2.0, 2.0]
  • seed (Optional[int]): Random seed for deterministic generation. Must be greater than or equal to 0
  • temperature (Optional[float]): Controls randomness in the output. Range: [0.0, 2.0]
  • top_p (Optional[float]): Controls diversity via nucleus sampling. Range: [0.0, 1.0]
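
A sketch combining several of these inherited parameters; the values are illustrative, and each sits inside its documented range:

from pipecat.services.openpipe import OpenPipeLLMService

params = OpenPipeLLMService.InputParams(
    temperature=0.7,        # [0.0, 2.0]
    top_p=0.9,              # [0.0, 1.0]
    frequency_penalty=0.5,  # [-2.0, 2.0]
    seed=42,                # >= 0, for reproducible generation
    max_tokens=512,         # >= 1
)

service = OpenPipeLLMService(
    openpipe_api_key="your-openpipe-key",  # placeholder
    params=params,
)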

Usage Example

from pipecat.pipeline.pipeline import Pipeline
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.openpipe import OpenPipeLLMService

# Configure service with tags
service = OpenPipeLLMService(
    model="gpt-4",
    openpipe_api_key="your-openpipe-key",
    tags={
        "environment": "production",
        "feature": "customer-support"
    },
    params=OpenPipeLLMService.InputParams(
        temperature=0.7,
        max_tokens=1000
    )
)

# Create context
context = OpenAILLMContext(
    messages=[
        {"role": "system", "content": "You are a helpful assistant"},
        {"role": "user", "content": "How do I optimize my API usage?"}
    ]
)

# Use in pipeline (context_manager and text_handler are placeholder processors)
pipeline = Pipeline([
    context_manager,  # Manages conversation context
    service,          # Processes LLM requests with logging
    text_handler      # Handles responses
])

Request Logging

OpenPipe automatically logs requests with configurable tags:

# Configure detailed logging
service = OpenPipeLLMService(
    tags={
        "environment": "production",
        "feature": "chat",
        "version": "v1.0",
        "user_type": "premium"
    }
)

Frame Flow

Inherits the BaseOpenAILLMService frame flow with request logging added. As with other Pipecat LLM services, an incoming LLMMessagesFrame or OpenAILLMContextFrame produces a streamed LLMFullResponseStartFrame, TextFrame(s), LLMFullResponseEndFrame sequence downstream; with OpenPipe, each completed request and response is also logged with the configured tags.

Metrics Support

The service collects standard metrics plus OpenPipe-specific data:

  • Token usage (prompt and completion)
  • Processing duration
  • Time to First Byte (TTFB)
  • Request logs and metadata
  • Performance metrics
  • Cost tracking
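
A minimal sketch of turning these metrics on, assuming the Pipeline built in the usage example above and Pipecat's standard task parameters:

from pipecat.pipeline.task import PipelineParams, PipelineTask

# "pipeline" is the Pipeline constructed in the usage example above.
task = PipelineTask(
    pipeline,
    params=PipelineParams(
        enable_metrics=True,        # processing duration and TTFB
        enable_usage_metrics=True,  # prompt/completion token counts
    ),
)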

Common Use Cases

  1. Performance Monitoring

    • Request latency tracking
    • Token usage monitoring
    • Cost analysis
  2. Model Optimization

    • Data collection for fine-tuning
    • Response quality monitoring
    • Usage pattern analysis
  3. Development and Testing

    • Request logging for debugging
    • A/B testing (see the sketch after this list)
    • Quality assurance
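
For the A/B testing case, one possible sketch: assign each session a variant and record it as a tag so the OpenPipe logs can be split by variant. The experiment name, variant labels, and model choices are illustrative:

import random

from pipecat.services.openpipe import OpenPipeLLMService

# Illustrative only: pick a variant per session and record it in the request logs.
variant = random.choice(["control", "candidate"])

service = OpenPipeLLMService(
    model="gpt-4o" if variant == "control" else "gpt-4o-mini",
    openpipe_api_key="your-openpipe-key",  # placeholder
    tags={"experiment": "model-comparison", "variant": variant},
)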

Notes

  • Maintains OpenAI API compatibility
  • Automatic request logging
  • Support for custom tags
  • Fine-tuning data collection
  • Performance monitoring
  • Cost tracking capabilities
  • Thread-safe processing