The Pipecat iOS SDK provides a Swift implementation for building voice and multimodal AI applications on iOS. It handles:

  • Real-time audio streaming
  • Bot communication and state management
  • Media device handling
  • Configuration management
  • Event handling

Installation

Add the SDK to your project using Swift Package Manager:

// Core SDK
.package(url: "https://github.com/pipecat-ai/pipecat-client-ios.git", from: "0.3.0"),

// Daily transport implementation
.package(url: "https://github.com/pipecat-ai/pipecat-client-ios-daily.git", from: "0.3.0"),

Then add the dependencies to your target:

.target(name: "YourApp", dependencies: [
    .product(name: "PipecatClientIOS", package: "pipecat-client-ios"),
    .product(name: "PipecatClientIOSDaily", package: "pipecat-client-ios-daily")
]),
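
If your app is itself a Swift package, the pieces above fit into a single manifest. Here is a minimal sketch; the swift-tools-version and iOS platform floor are assumptions, so adjust them for your project:

// swift-tools-version: 5.9
import PackageDescription

let package = Package(
    name: "YourApp",
    platforms: [.iOS(.v15)], // assumed minimum; raise or lower as needed
    dependencies: [
        // Core SDK
        .package(url: "https://github.com/pipecat-ai/pipecat-client-ios.git", from: "0.3.0"),
        // Daily transport implementation
        .package(url: "https://github.com/pipecat-ai/pipecat-client-ios-daily.git", from: "0.3.0"),
    ],
    targets: [
        .target(name: "YourApp", dependencies: [
            .product(name: "PipecatClientIOS", package: "pipecat-client-ios"),
            .product(name: "PipecatClientIOSDaily", package: "pipecat-client-ios-daily")
        ])
    ]
)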

Example

Here’s a simple example using Daily as the transport layer. It configures Together AI as the LLM and Cartesia for TTS, then connects to the bot:

import PipecatClientIOS
import PipecatClientIOSDaily

let clientConfig = [
    ServiceConfig(
        service: "llm",
        options: [
            Option(name: "model", value: .string("meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo")),
            Option(name: "messages", value: .array([
                .object([
                    "role" : .string("system"),
                    "content": .string("You are a helpful assistant.")
                ])
            ]))
        ]
    ),
    ServiceConfig(
        service: "tts",
        options: [
            Option(name: "voice", value: .string("79a125e8-cd45-4c13-8a67-188112f4dd22"))
        ]
    )
]

let options = VoiceClientOptions(
    services: ["llm": "together", "tts": "cartesia"],
    config: clientConfig
)

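// Replace YOUR_TRANSPORT with a concrete transport instance, e.g. the Daily
// transport provided by the PipecatClientIOSDaily package.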
let client = VoiceClient(baseUrl: "your-api-url", transport: YOUR_TRANSPORT, options: options)
try await client.connect()
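
Because connect() is async and throwing, in app code you would typically call it from an async context and handle failure. A minimal sketch (the wrapper function and messages here are illustrative, not part of the SDK):

func startSession() async {
    do {
        try await client.connect()
        // Connected: audio is streaming and the bot can respond.
    } catch {
        // Connection failed (bad base URL, network error, etc.); surface it to the user.
        print("Failed to connect: \(error)")
    }
}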

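The client reports bot and transport events through a delegate. The protocol and callback names below are assumptions for illustration; check the API reference for the exact surface. The pattern itself is standard Swift delegation:

// Illustrative sketch: delegate protocol and callback names are assumptions,
// not confirmed SDK API.
class BotEventHandler: VoiceClientDelegate {
    func onConnected() {
        print("Transport connected")
    }

    func onBotReady() {
        print("Bot is ready to talk")
    }
}

// Keep a strong reference to the handler; delegate properties are typically weak.
let eventHandler = BotEventHandler()
client.delegate = eventHandler
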
Documentation