Advanced conversational turn detection powered by the smart-turn model
Turn detection is enabled by passing an analyzer to the `turn_analyzer` parameter in your transport configuration.
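The sketch below shows the general shape, assuming a Daily transport with Silero VAD; the transport class, import paths, and parameter names are assumptions that may vary across versions:

```python
from pipecat.audio.vad.silero import SileroVADAnalyzer
from pipecat.audio.vad.vad_analyzer import VADParams
from pipecat.transports.services.daily import DailyParams, DailyTransport

room_url = "https://example.daily.co/room"  # placeholder values
token = None

# `turn_analyzer` accepts any of the analyzers described below
# (FalSmartTurnAnalyzer, LocalCoreMLSmartTurnAnalyzer, LocalSmartTurnAnalyzerV2).
analyzer = ...

transport = DailyTransport(
    room_url,
    token,
    "My Bot",
    DailyParams(
        audio_in_enabled=True,
        # Smart turn detection runs alongside VAD; keep stop_secs short (0.2 s)
        # so speech segments are handed to the turn model quickly.
        vad_analyzer=SileroVADAnalyzer(params=VADParams(stop_secs=0.2)),
        turn_analyzer=analyzer,
    ),
)
```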
Smart turn detection works alongside your VAD analyzer, so set a short VAD `stop_secs` value. We recommend 0.2 seconds. All analyzers use the same `SmartTurnParams` class to configure behavior.
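As a sketch, the fields below reflect commonly documented parameters; the exact field names and defaults are assumptions and may differ in your installed version:

```python
from pipecat.audio.turn.smart_turn.base_smart_turn import SmartTurnParams

# Assumed field names; check your installed version for the exact signature.
params = SmartTurnParams(
    stop_secs=3.0,          # silence duration that force-completes a turn
    pre_speech_ms=0.0,      # audio to include before detected speech
    max_duration_secs=8.0,  # maximum audio segment sent to the model
)
```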
The `FalSmartTurnAnalyzer` class uses a remote service for turn detection inference.
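A minimal sketch of constructing the remote analyzer, assuming the constructor accepts an API key and an aiohttp session (the argument names and environment variable are assumptions):

```python
import os

import aiohttp

from pipecat.audio.turn.smart_turn.fal_smart_turn import FalSmartTurnAnalyzer


async def make_turn_analyzer() -> FalSmartTurnAnalyzer:
    # Assumed constructor arguments: an API key for the hosted inference
    # service and an aiohttp session used for HTTP requests.
    session = aiohttp.ClientSession()
    return FalSmartTurnAnalyzer(
        api_key=os.getenv("FAL_SMART_TURN_API_KEY"),
        aiohttp_session=session,
    )
```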
The `LocalCoreMLSmartTurnAnalyzer` class runs inference locally using CoreML, providing lower latency and no network dependencies.
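A sketch of constructing the CoreML analyzer, assuming it takes a path to a locally downloaded model; the import path, argument name, and environment variable are assumptions:

```python
import os

from pipecat.audio.turn.smart_turn.local_coreml_smart_turn import LocalCoreMLSmartTurnAnalyzer

# Assumed: the analyzer is pointed at a local copy of the smart-turn model
# via an environment variable; the argument name may differ in your version.
turn_analyzer = LocalCoreMLSmartTurnAnalyzer(
    smart_turn_model_path=os.getenv("LOCAL_SMART_TURN_MODEL_PATH"),
)
```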
We currently recommend using the PyTorch implementation with the MPS backend on Apple Silicon, rather than CoreML, due to improved performance.
The `LocalSmartTurnAnalyzerV2` class runs inference locally using PyTorch and Hugging Face Transformers, providing a cross-platform solution.
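A sketch of constructing the cross-platform analyzer; the model-path argument and environment variable are assumptions:

```python
import os

from pipecat.audio.turn.smart_turn.local_smart_turn_v2 import LocalSmartTurnAnalyzerV2

# Assumed: point the analyzer at a local copy of the smart-turn model weights.
turn_analyzer = LocalSmartTurnAnalyzerV2(
    smart_turn_model_path=os.getenv("LOCAL_SMART_TURN_MODEL_PATH"),
)
```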
To use the `LocalCoreMLSmartTurnAnalyzer` or `LocalSmartTurnAnalyzerV2`, you need to set up the model locally.
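One way to do this, as a sketch, is to download the published model weights from Hugging Face and point the analyzer at them through an environment variable; the repository id and variable name here are assumptions:

```python
import os

from huggingface_hub import snapshot_download

# Download the smart-turn model weights to a local directory.
# The repo id below is an assumption; substitute the model you intend to use.
model_path = snapshot_download(repo_id="pipecat-ai/smart-turn-v2")

# Make the path available to the analyzer (variable name is an assumption).
os.environ["LOCAL_SMART_TURN_MODEL_PATH"] = model_path
```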
If silence continues beyond `stop_secs`, the turn is automatically marked as complete.
Adjust the `stop_secs` parameter based on your application's needs for responsiveness.

`LocalSmartTurnAnalyzerV2` will use CUDA or MPS if available, and will otherwise run on CPU.
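For reference, a sketch of the fallback order this implies, using standard PyTorch availability checks (illustrative only, not the analyzer's actual internals):

```python
import torch

# Illustrative device selection mirroring the described fallback order:
# CUDA if available, then Apple's MPS backend, otherwise CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")
else:
    device = torch.device("cpu")

print(f"Running smart-turn inference on: {device}")
```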