Overview
SarvamSTTService provides real-time speech recognition over Sarvam AI’s WebSocket API, with support for Indian-language transcription, Voice Activity Detection (VAD), and multiple audio formats.
Sarvam STT API Reference
Pipecat’s API methods for Sarvam STT integration
Example Implementation
Complete example with interruption handling
Sarvam Documentation
Official Sarvam AI STT documentation and features
Sarvam AI Platform
Access API keys and speech models
Installation
To use Sarvam services, install Pipecat with its Sarvam dependencies.
Prerequisites
Sarvam AI Account Setup
Before using Sarvam STT services, you need:
- Sarvam AI Account: Sign up at Sarvam AI
- API Key: Generate an API key from your account dashboard
- Model Access: Access to Saarika (STT) or Saaras (STT-Translate) models, including the `saaras:v3` model with support for multiple modes (transcribe, translate, verbatim, translit, codemix)
Required Environment Variables
SARVAM_API_KEY: Your Sarvam AI API key for authentication
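A sketch of the setup steps above. The `pipecat-ai[sarvam]` extra name is an assumption based on Pipecat's usual optional-dependency convention; check the Pipecat docs for the exact extra:

```shell
# Install Pipecat with Sarvam support (extra name assumed)
pip install "pipecat-ai[sarvam]"

# Make the API key available to your bot process
export SARVAM_API_KEY="your-sarvam-api-key"
```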
Configuration
SarvamSTTService
- Sarvam API key for authentication.
- Sarvam model to use. Allowed values: `"saarika:v2.5"` (standard STT), `"saaras:v2.5"` (STT-Translate, auto-detects language), `"saaras:v3"` (advanced, supports mode and prompts).
- Audio sample rate in Hz. Defaults to 16000 if not specified.
- Audio codec/format of the input file.
- Configuration parameters. See InputParams below.
- Seconds of no audio before sending silence to keep the connection alive. `None` disables keepalive.
- Seconds between idle checks when keepalive is enabled.
InputParams
| Parameter | Type | Default | Description |
|---|---|---|---|
| `language` | `Language` | `None` | Target language for transcription. Behavior varies by model: `saarika:v2.5` defaults to "unknown" (auto-detect), `saaras:v2.5` ignores this (auto-detects), `saaras:v3` defaults to "en-IN". |
| `prompt` | `str` | `None` | Optional prompt to guide transcription/translation style. Only applicable to `saaras` models (v2.5 and v3). |
| `mode` | `str` | `None` | Mode of operation for `saaras:v3` only. Options: `"transcribe"`, `"translate"`, `"verbatim"`, `"translit"`, `"codemix"`. Defaults to `"transcribe"` for `saaras:v3`. |
| `vad_signals` | `bool` | `None` | Enable VAD signals in responses. When enabled, the service broadcasts `UserStartedSpeakingFrame` and `UserStoppedSpeakingFrame` from the server. |
| `high_vad_sensitivity` | `bool` | `None` | Enable high VAD sensitivity for more responsive speech detection. |
Usage
Basic Setup
With Language and Model Configuration
With Server-Side VAD
Notes
- Supported languages: Bengali (bn-IN), Gujarati (gu-IN), Hindi (hi-IN), Kannada (kn-IN), Malayalam (ml-IN), Marathi (mr-IN), Tamil (ta-IN), Telugu (te-IN), Punjabi (pa-IN), Odia (od-IN), English (en-IN), and Assamese (as-IN).
- Model-specific parameter validation: The service validates that parameters are compatible with the selected model. For example, `prompt` is not supported with `saarika:v2.5`, and `language` is not supported with `saaras:v2.5` (which auto-detects language).
- VAD modes: When `vad_signals=False` (default), the service relies on Pipecat’s local VAD and flushes the server buffer on `VADUserStoppedSpeakingFrame`. When `vad_signals=True`, the service uses Sarvam’s server-side VAD and broadcasts speaking frames from the server.
Event Handlers
In addition to the standard service connection events (`on_connected`, `on_disconnected`, `on_connection_error`), Sarvam STT provides:
| Event | Description |
|---|---|
| `on_speech_started` | Speech detected in the audio stream |
| `on_speech_stopped` | Speech stopped |
| `on_utterance_end` | End of utterance detected |