# Perplexity

LLM service implementation using Perplexity's API with an OpenAI-compatible interface.
## Overview

`PerplexityLLMService` provides access to Perplexity's language models through an OpenAI-compatible interface. It inherits from `OpenAILLMService` and supports streaming responses and context management, with special handling for Perplexity's incremental token reporting.
- **API Reference** - Complete API documentation and method details
- **Perplexity Docs** - Official Perplexity API documentation and features
- **Example Code** - Working example with search capabilities
Unlike other LLM services, Perplexity does not support function calling. Instead, it provides built-in internet search without requiring special function calls.
## Installation

To use `PerplexityLLMService`, install the required dependencies:
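A typical install looks like the following (the `perplexity` extra is an assumption; check the extras available in your Pipecat version):

```shell
pip install "pipecat-ai[perplexity]"
```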
You'll also need to set your Perplexity API key as an environment variable: `PERPLEXITY_API_KEY`.
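For example, in a POSIX shell:

```shell
export PERPLEXITY_API_KEY="your-api-key-here"
```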
Get your API key from Perplexity API.
## Frames

### Input

- `OpenAILLMContextFrame` - Conversation context and history
- `LLMMessagesFrame` - Direct message list
- `LLMUpdateSettingsFrame` - Runtime parameter updates

### Output

- `LLMFullResponseStartFrame` / `LLMFullResponseEndFrame` - Response boundaries
- `LLMTextFrame` - Streamed completion chunks with citations
- `ErrorFrame` - API or processing errors
## Context Management

**Context Management Guide** - Learn how to manage conversation context, handle message history, and integrate context aggregators for consistent conversational experiences.
## Usage Example
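A minimal sketch of constructing the service and a context aggregator. The import paths and the `sonar` model name are assumptions; check your Pipecat version and Perplexity's current model list:

```python
import os

from pipecat.services.perplexity.llm import PerplexityLLMService
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext

# Configure the service (model name is an assumption; see Perplexity's docs)
llm = PerplexityLLMService(
    api_key=os.getenv("PERPLEXITY_API_KEY"),
    model="sonar",
)

# Seed the conversation context
messages = [
    {
        "role": "system",
        "content": "You are a helpful assistant with access to current information.",
    },
    {"role": "user", "content": "What are today's top tech headlines?"},
]
context = OpenAILLMContext(messages)

# The aggregator pairs user/assistant turns around the LLM in a pipeline
context_aggregator = llm.create_context_aggregator(context)
```

In a pipeline, the service typically sits between `context_aggregator.user()` and `context_aggregator.assistant()` so each turn is appended to the shared context.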
## Metrics

The service provides specialized token tracking for Perplexity's incremental reporting:
- Time to First Byte (TTFB) - Response latency measurement
- Processing Duration - Total request processing time
- Token Usage - Accumulated prompt and completion tokens
Enable with:
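A sketch of enabling metrics on a pipeline task, assuming the standard Pipecat `PipelineParams` flags:

```python
from pipecat.pipeline.task import PipelineParams, PipelineTask

# `pipeline` is your assembled Pipeline instance
task = PipelineTask(
    pipeline,
    params=PipelineParams(
        enable_metrics=True,        # TTFB and processing duration
        enable_usage_metrics=True,  # accumulated token usage
    ),
)
```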
## Additional Notes

- **No Function Calling**: Perplexity doesn't support traditional function calling, but its built-in web search covers many retrieval use cases without tool definitions
- **Real-time Data**: Access to current information without complex function orchestration
- **Source Citations**: Automatic citation of web sources in responses
- **OpenAI Compatible**: Uses the familiar OpenAI-style interface and parameters