Overview

OpenPipeLLMService extends the BaseOpenAILLMService to provide integration with OpenPipe, enabling request logging, model fine-tuning, and performance monitoring. It maintains compatibility with OpenAI’s API while adding OpenPipe’s logging and optimization capabilities.

Installation

To use OpenPipeLLMService, install the required dependencies:

pip install "pipecat-ai[openpipe]"

You’ll need to set up both API keys as environment variables:

  • OPENPIPE_API_KEY - Your OpenPipe API key
  • OPENAI_API_KEY - Your OpenAI API key

Get your OpenPipe API key from the OpenPipe Dashboard.
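
Before constructing the service, you can verify that both keys are available, for example (a minimal sketch; the python-dotenv usage is optional and assumes you keep the keys in a local .env file):

import os
from dotenv import load_dotenv  # optional: assumes python-dotenv is installed

load_dotenv()  # pull OPENPIPE_API_KEY and OPENAI_API_KEY from a local .env file

for var in ("OPENPIPE_API_KEY", "OPENAI_API_KEY"):
    if not os.getenv(var):
        raise RuntimeError(f"Missing required environment variable: {var}")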

Frames

Input

  • OpenAILLMContextFrame - Conversation context and history
  • LLMMessagesFrame - Direct message list
  • VisionImageRawFrame - Images for vision processing
  • LLMUpdateSettingsFrame - Runtime parameter updates

Output

  • LLMFullResponseStartFrame / LLMFullResponseEndFrame - Response boundaries
  • LLMTextFrame - Streamed completion chunks
  • FunctionCallInProgressFrame / FunctionCallResultFrame - Function call lifecycle
  • ErrorFrame - API or processing errors
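
For example, once a pipeline is running you can drive the service by queueing an LLMMessagesFrame; the service streams back LLMTextFrame chunks bracketed by LLMFullResponseStartFrame and LLMFullResponseEndFrame. A minimal sketch, assuming a PipelineTask named task like the one created in the Metrics section below:

from pipecat.frames.frames import LLMMessagesFrame

# Queue an initial message list; the greeting prompt is illustrative.
await task.queue_frames([
    LLMMessagesFrame([
        {"role": "system", "content": "You are a helpful voice assistant."},
        {"role": "user", "content": "Say hello to the caller."},
    ])
])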

Function Calling

Function Calling Guide

Learn how to implement function calling with standardized schemas, register handlers, manage context properly, and control execution flow in your conversational AI applications.

Context Management

Context Management Guide

Learn how to manage conversation context, handle message history, and integrate context aggregators for consistent conversational experiences.

Usage Example

import os
from pipecat.pipeline.pipeline import Pipeline
from pipecat.services.openpipe.llm import OpenPipeLLMService
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.adapters.schemas.function_schema import FunctionSchema
from pipecat.adapters.schemas.tools_schema import ToolsSchema

# Configure OpenPipe service with comprehensive logging
llm = OpenPipeLLMService(
    model="gpt-4o",
    api_key=os.getenv("OPENAI_API_KEY"),
    openpipe_api_key=os.getenv("OPENPIPE_API_KEY"),
    tags={
        "environment": "production",
        "feature": "conversational-ai",
        "deployment": "voice-assistant",
        "version": "v1.2"
    },
    params=OpenPipeLLMService.InputParams(
        temperature=0.7,
        max_completion_tokens=1000
    )
)

# Define function for monitoring tool usage
weather_function = FunctionSchema(
    name="get_weather",
    description="Get current weather information",
    properties={
        "location": {
            "type": "string",
            "description": "City and state, e.g. San Francisco, CA"
        }
    },
    required=["location"]
)

tools = ToolsSchema(standard_tools=[weather_function])

# Create context with system optimization
context = OpenAILLMContext(
    messages=[
        {
            "role": "system",
            "content": """You are a helpful voice assistant. Keep responses
            concise and natural for speech synthesis. All conversations are
            logged for quality improvement."""
        }
    ],
    tools=tools
)

# Create context aggregators
context_aggregator = llm.create_context_aggregator(context)

# Register function with logging awareness
async def get_weather(params):
    location = params.arguments["location"]
    # Function calls are automatically logged by OpenPipe
    await params.result_callback(f"Weather in {location}: 72°F and sunny")

llm.register_function("get_weather", get_weather)

# Use in pipeline - all requests automatically logged
pipeline = Pipeline([
    transport.input(),
    stt,
    context_aggregator.user(),
    llm,  # Automatic logging happens here
    tts,
    transport.output(),
    context_aggregator.assistant()
])
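
To run the example end to end, wrap the pipeline in a task and a runner (a minimal sketch; transport, stt, and tts are assumed to be configured elsewhere, and metrics can be enabled on the task as shown in the Metrics section below):

from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask

task = PipelineTask(pipeline)
runner = PipelineRunner()
await runner.run(task)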

Metrics

Inherits all metrics from BaseOpenAILLMService, with every request additionally logged to OpenPipe:

  • Time to First Byte (TTFB) - Response latency measurement
  • Processing Duration - Total request processing time
  • Token Usage - Detailed consumption tracking

Enable with:

task = PipelineTask(
    pipeline,
    params=PipelineParams(
        enable_metrics=True,
        enable_usage_metrics=True
    )
)
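
Metrics travel through the pipeline as MetricsFrame objects, so you can also inspect them yourself. A minimal sketch of a custom processor that logs them (the MetricsLogger class is illustrative, not part of Pipecat, and assumes MetricsFrame exposes its measurements on a data attribute):

from pipecat.frames.frames import MetricsFrame
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor

class MetricsLogger(FrameProcessor):
    """Illustrative processor that prints metrics as they pass through."""

    async def process_frame(self, frame, direction: FrameDirection):
        await super().process_frame(frame, direction)
        if isinstance(frame, MetricsFrame):
            print(f"Metrics: {frame.data}")  # assumption: TTFB, duration, and token usage entries
        await self.push_frame(frame, direction)

Place an instance in the pipeline (for example, right after llm) to see per-request TTFB, processing duration, and token usage.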

Additional Notes

  • OpenAI Compatibility: Full compatibility with OpenAI API features and parameters
  • Privacy Aware: Configurable data retention and filtering policies
  • Cost Optimization: Detailed analytics help optimize model usage and costs
  • Fine-tuning Pipeline: Seamless transition from logging to custom model training