LLM service implementation using Perplexity's API with an OpenAI-compatible interface.

## Overview

`PerplexityLLMService` provides access to Perplexity's language models through an OpenAI-compatible interface. It inherits from `OpenAILLMService` and supports streaming responses and context management, with special handling for Perplexity's incremental token reporting.
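The "incremental token reporting" mentioned above refers to Perplexity including running usage totals on streamed chunks rather than a single final count, so per-chunk metrics have to be derived as differences between successive snapshots. Here is a minimal, self-contained sketch of that accounting idea — a simplified stand-in, not Pipecat's actual internal code; the `Usage` class and `usage_deltas` helper are hypothetical:

```python
from dataclasses import dataclass


@dataclass
class Usage:
    """Hypothetical stand-in for a cumulative usage snapshot on a chunk."""
    prompt_tokens: int
    completion_tokens: int


def usage_deltas(snapshots):
    """Convert cumulative usage snapshots (one per streamed chunk)
    into per-chunk token deltas for metrics reporting."""
    deltas = []
    prev = Usage(0, 0)
    for snap in snapshots:
        deltas.append(Usage(snap.prompt_tokens - prev.prompt_tokens,
                            snap.completion_tokens - prev.completion_tokens))
        prev = snap
    return deltas


# Cumulative totals reported across three streamed chunks:
snaps = [Usage(12, 3), Usage(12, 7), Usage(12, 11)]
print(usage_deltas(snaps))
```

The prompt tokens appear in full on the first snapshot and stay constant, so only the first delta carries a prompt-token count; completion tokens accrue chunk by chunk.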
## Installation

To use `PerplexityLLMService`, install the required dependencies:
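Assuming the standard Pipecat extras naming, installation looks like:

```shell
pip install "pipecat-ai[perplexity]"
```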
You'll also need to set your Perplexity API key as an environment variable: `PERPLEXITY_API_KEY`.
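For example, with a placeholder value in place of a real key:

```shell
export PERPLEXITY_API_KEY=your_api_key
```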
## Frames

### Input

- `OpenAILLMContextFrame` - Conversation context and history
- `LLMMessagesFrame` - Direct message list
- `LLMUpdateSettingsFrame` - Runtime parameter updates

### Output

- `LLMFullResponseStartFrame` / `LLMFullResponseEndFrame` - Response boundaries
- `LLMTextFrame` - Streamed completion chunks with citations
- `ErrorFrame` - API or processing errors
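To illustrate how the output frames bracket a streamed response, here is a minimal sketch using simplified stand-in classes (not Pipecat's actual frame implementations): a full response is delimited by start and end frames, with text frames streamed in between.

```python
from dataclasses import dataclass


# Simplified stand-ins for the frame types listed above (illustration only).
class LLMFullResponseStartFrame:
    pass


class LLMFullResponseEndFrame:
    pass


@dataclass
class LLMTextFrame:
    text: str


def stream_response(chunks):
    """Yield frames in the order a downstream processor would see them."""
    yield LLMFullResponseStartFrame()
    for chunk in chunks:
        yield LLMTextFrame(text=chunk)
    yield LLMFullResponseEndFrame()


frames = list(stream_response(["Hel", "lo ", "world"]))
text = "".join(f.text for f in frames if isinstance(f, LLMTextFrame))
print(text)  # Hello world
```

A downstream processor can rely on the start/end boundary frames to know when to begin and finish aggregating the streamed text.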