The OpenAI Realtime WebRTC transport enables real-time audio communication with the OpenAI Realtime API over a direct WebRTC connection from the client device.

Installation

Add the transport dependency to your build.gradle:

implementation "ai.pipecat:openai-realtime-webrtc-transport:0.3.3"

Usage

Create a client:

// Transport factory that connects directly to OpenAI over WebRTC
val transport = OpenAIRealtimeWebRTCTransport.Factory(context)

val options = RTVIClientOptions(
    params = RTVIClientParams(
        // No Pipecat backend is involved, so no base URL is needed
        baseUrl = null,
        // Bundle the API key, initial conversation context, and session settings
        config = OpenAIRealtimeWebRTCTransport.buildConfig(
            apiKey = apiKey,
            initialMessages = listOf(
                LLMContextMessage(role = "user", content = "How tall is the Eiffel Tower?")
            ),
            initialConfig = OpenAIRealtimeSessionConfig(
                voice = "ballad",
                turnDetection = Value.Object("type" to Value.Str("semantic_vad")),
                inputAudioNoiseReduction = Value.Object("type" to Value.Str("near_field")),
                inputAudioTranscription = Value.Object("model" to Value.Str("gpt-4o-transcribe"))
            )
        )
    )
)

// callbacks: an RTVIEventCallbacks implementation (see the sketch below)
val client = RTVIClient(transport, callbacks, options)

client.start().withCallback {
    // Invoked when the connection attempt completes
    // ...
}
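
The callbacks argument is an RTVIEventCallbacks implementation that receives transport and bot events. A minimal sketch, assuming the onBackendError override from the core Pipecat Android client API; a real app will typically override additional event methods:

// Assumes RTVIEventCallbacks from the core Pipecat/RTVI Android client
// and android.util.Log for logging
val callbacks = object : RTVIEventCallbacks() {
    override fun onBackendError(message: String) {
        // Log errors reported during the session
        Log.e("PipecatClient", "Backend error: $message")
    }
}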

Resources

Pipecat Android Client Reference

Complete API documentation for the Pipecat Android client.