Overview
PerplexityLLMService provides access to Perplexity’s language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses and context management, with special handling for Perplexity’s incremental token reporting and built-in internet search capabilities.
- Perplexity LLM API Reference: Pipecat's API methods for Perplexity integration
- Example Implementation: Complete example with search capabilities
- Perplexity Documentation: Official Perplexity API documentation and features
- Perplexity Platform: Access search-enhanced models and API keys
Installation
To use Perplexity services, install the required dependencies:
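The command below is a sketch; it assumes the `pipecat-ai` package exposes a `perplexity` extra. Because the service builds on OpenAILLMService, the `openai` extra may also cover the required dependencies.

```bash
pip install "pipecat-ai[perplexity]"
```

Prerequisites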
Perplexity Account Setup
Before using Perplexity LLM services, you need:
- Perplexity Account: Sign up at Perplexity
- API Key: Generate an API key from your account dashboard
- Model Selection: Choose from available models with built-in search capabilities
Required Environment Variables
PERPLEXITY_API_KEY: Your Perplexity API key for authentication
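For example, the key can be exported in your shell before running the bot (placeholder value shown):

```bash
export PERPLEXITY_API_KEY="your-api-key"
```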
Unlike other LLM services, Perplexity does not support function calling. Instead, it provides built-in internet search that does not require special function calls.
Configuration
- api_key: Perplexity API key for authentication.
- base_url: Base URL for the Perplexity API endpoint.
- model: Model identifier to use.
InputParams
This service uses the same input parameters as OpenAILLMService. See OpenAI LLM for details.
Usage
Basic Setup
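A minimal sketch of constructing the service and a context aggregator. The import path and the `sonar` model name are assumptions based on Pipecat's service layout and Perplexity's current model lineup.

```python
import os

from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.perplexity.llm import PerplexityLLMService

# Sketch: construct the service with an API key from the environment.
# Module path and model name are assumptions.
llm = PerplexityLLMService(
    api_key=os.getenv("PERPLEXITY_API_KEY"),
    model="sonar",  # Perplexity model with built-in web search
)

# Set up a conversation context; no tool definitions are needed since
# Perplexity's search is built in.
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant with access to current information.",
    }
]
context = OpenAILLMContext(messages)
context_aggregator = llm.create_context_aggregator(context)
```

The service then slots into a pipeline like any other OpenAI-compatible LLM service, typically between the context aggregator's user and assistant processors.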
With Custom Parameters
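A sketch of tuning generation settings through the shared OpenAI-style InputParams, as the InputParams section above implies. The specific fields shown (temperature, max_tokens) and the import paths are assumptions based on OpenAILLMService.

```python
import os

from pipecat.services.openai.llm import OpenAILLMService
from pipecat.services.perplexity.llm import PerplexityLLMService

# Sketch: pass generation parameters via OpenAILLMService.InputParams.
llm = PerplexityLLMService(
    api_key=os.getenv("PERPLEXITY_API_KEY"),
    model="sonar-pro",
    params=OpenAILLMService.InputParams(
        temperature=0.7,
        max_tokens=1000,
    ),
)
```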
Notes
- Perplexity does not support function calling or tools. The service only sends messages to the API, without tool definitions.
- Perplexity uses incremental token reporting. The service accumulates token usage metrics during processing and reports the final totals at the end of each request.
- Perplexity models have built-in internet search capabilities, providing up-to-date information without requiring additional tool configuration.