DEPRECATED: FalSmartTurnAnalyzer is deprecated. Please use LocalSmartTurnAnalyzerV3 instead, which provides fast CPU inference without requiring external API calls.
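If you are migrating, the change is typically a one-line swap in the turn-stop strategy shown later on this page. A minimal sketch, assuming the LocalSmartTurnAnalyzerV3 import path below matches your installed Pipecat version:

import os

from pipecat.audio.turn.smart_turn.local_smart_turn_v3 import LocalSmartTurnAnalyzerV3
from pipecat.turns.user_stop import TurnAnalyzerUserTurnStopStrategy

# Runs entirely on CPU, so no API key or aiohttp session is needed.
stop_strategy = TurnAnalyzerUserTurnStopStrategy(
    turn_analyzer=LocalSmartTurnAnalyzerV3()
)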
Overview
FalSmartTurnAnalyzer provides an easy way to use Smart Turn detection via Fal.ai’s cloud infrastructure. This implementation requires minimal setup - just an API key - and offers scalable inference without having to manage your own servers.
Installation
pip install "pipecat-ai[remote-smart-turn]"
Requirements
- A Fal.ai account and API key (get one at Fal.ai)
- Internet connectivity for making API calls
Configuration
Constructor Parameters
- api_key (Optional[str], default: None): Your Fal.ai API key for authentication. Required unless you are using a custom deployment.
- url (str, default: "https://fal.run/fal-ai/smart-turn/raw"): URL endpoint for the Smart Turn API. Defaults to the official Fal deployment.
- aiohttp_session (aiohttp.ClientSession, required): An aiohttp client session used for making HTTP requests.
- sample_rate (Optional[int], default: None): Audio sample rate. Set by the transport if not provided.
- params (SmartTurnParams, default: SmartTurnParams()): Configuration parameters for turn detection. See SmartTurnParams for details.
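Putting the parameters together, a minimal construction sketch (the url argument is shown with its default value and can be omitted; the environment variable name is just an example):

import os

import aiohttp

from pipecat.audio.turn.smart_turn.fal_smart_turn import FalSmartTurnAnalyzer


def create_analyzer(session: aiohttp.ClientSession) -> FalSmartTurnAnalyzer:
    # The analyzer reuses the caller's session for its HTTP requests,
    # so keep the session open for the analyzer's lifetime.
    return FalSmartTurnAnalyzer(
        api_key=os.getenv("FAL_SMART_TURN_API_KEY"),
        url="https://fal.run/fal-ai/smart-turn/raw",  # default, shown for clarity
        aiohttp_session=session,
    )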
Example
import os
import aiohttp
from pipecat.audio.turn.smart_turn.fal_smart_turn import FalSmartTurnAnalyzer
from pipecat.audio.vad.silero import SileroVADAnalyzer
from pipecat.audio.vad.vad_analyzer import VADParams
from pipecat.processors.aggregators.llm_response_universal import (
LLMContextAggregatorPair,
LLMUserAggregatorParams,
)
from pipecat.transports.base_transport import TransportParams
from pipecat.turns.user_stop import TurnAnalyzerUserTurnStopStrategy
from pipecat.turns.user_turn_strategies import UserTurnStrategies
# Note: SmallWebRTCTransport, webrtc_connection, and context are assumed to be
# created elsewhere in your application.
async def setup_transport():
    async with aiohttp.ClientSession() as session:
        transport = SmallWebRTCTransport(
            webrtc_connection=webrtc_connection,
            params=TransportParams(
                audio_in_enabled=True,
                audio_out_enabled=True,
                vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.2)),
            ),
        )

        # Configure Smart Turn Detection via user turn strategies
        user_aggregator, assistant_aggregator = LLMContextAggregatorPair(
            context,
            user_params=LLMUserAggregatorParams(
                user_turn_strategies=UserTurnStrategies(
                    stop=[
                        TurnAnalyzerUserTurnStopStrategy(
                            turn_analyzer=FalSmartTurnAnalyzer(
                                api_key=os.getenv("FAL_SMART_TURN_API_KEY"),
                                aiohttp_session=session,
                            )
                        )
                    ]
                ),
            ),
        )

        # Continue with pipeline setup...
Custom Deployment
You can also deploy the Smart Turn model yourself on Fal.ai and point to your custom deployment:
TurnAnalyzerUserTurnStopStrategy(
    turn_analyzer=FalSmartTurnAnalyzer(
        url="https://fal.run/your-username/your-deployment/raw",
        api_key=os.getenv("FAL_API_KEY"),
        aiohttp_session=session,
    )
)
Usage Considerations
- Latency: While Fal provides global infrastructure, network round trips add latency compared to local inference
- Reliability: Turn detection depends on network connectivity and Fal.ai service availability; see the sketch after this list for bounding request time
- Scalability: Fal scales inference automatically based on your usage
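Because every inference is an HTTP call, it can help to bound request time on the session you pass in. A sketch, assuming the analyzer issues its requests through the provided session (the 5-second total is illustrative):

import os

import aiohttp

from pipecat.audio.turn.smart_turn.fal_smart_turn import FalSmartTurnAnalyzer


async def create_analyzer_with_timeout() -> FalSmartTurnAnalyzer:
    # Cap total request time so a slow or unreachable endpoint cannot stall
    # turn detection. The caller is responsible for closing the session.
    session = aiohttp.ClientSession(timeout=aiohttp.ClientTimeout(total=5))
    return FalSmartTurnAnalyzer(
        api_key=os.getenv("FAL_SMART_TURN_API_KEY"),
        aiohttp_session=session,
    )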
Notes
- Fal handles the model hosting, scaling, and infrastructure management
- The session timeout is controlled by the stop_secs parameter (see the sketch below)
- For high-throughput applications, consider deploying your own inference service
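For example, to lengthen the end-of-turn silence window, you can pass your own SmartTurnParams. A sketch, assuming SmartTurnParams lives at the import path below and exposes stop_secs (the 3-second value is illustrative):

import os

import aiohttp

from pipecat.audio.turn.smart_turn.base_smart_turn import SmartTurnParams
from pipecat.audio.turn.smart_turn.fal_smart_turn import FalSmartTurnAnalyzer


def create_analyzer(session: aiohttp.ClientSession) -> FalSmartTurnAnalyzer:
    # stop_secs controls the session timeout: how long the analyzer waits in
    # silence before forcing the end of the user's turn.
    return FalSmartTurnAnalyzer(
        api_key=os.getenv("FAL_SMART_TURN_API_KEY"),
        aiohttp_session=session,
        params=SmartTurnParams(stop_secs=3.0),
    )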