The Pipecat client handles media at two levels: local devices (the user’s mic, camera, and speakers) and media tracks (the live audio/video streams flowing between client and bot). This page covers how to work with both.

Bot audio output
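The bot's audio arrives as a remote media track. Most transports play it automatically, but you can attach it to your own element for custom playback. A minimal sketch, assuming the client emits an RTVIEvent.TrackStarted event carrying the track and its participant (names taken from the Pipecat JS client API):

```typescript
import { RTVIEvent } from "@pipecat-ai/client-js";

// Attach the bot's audio track to an <audio> element as soon as it starts.
// Manual attachment like this is only needed for custom playback.
client.on(RTVIEvent.TrackStarted, (track, participant) => {
  if (track.kind === "audio" && !participant?.local) {
    const audioEl = document.createElement("audio");
    audioEl.autoplay = true;
    audioEl.srcObject = new MediaStream([track]);
    document.body.appendChild(audioEl);
  }
});
```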


Microphone

Enabling and muting

Set enableMic: true in the constructor to start the session with the mic active. This is the default:
const client = new PipecatClient({
  transport: new DailyTransport(),
  enableMic: true,
});
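To mute or unmute after construction, toggle the mic at runtime. A sketch, assuming the client's enableMic() method:

```typescript
// Mute the local microphone mid-session...
client.enableMic(false);

// ...and unmute when the user is ready to speak again.
client.enableMic(true);
```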

Switching microphones
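A sketch of enumerating inputs and switching to one of them, assuming the client exposes getAllMics() and updateMic() device methods (the findMic helper and DeviceClient interface below are illustrative, not part of the SDK):

```typescript
// Minimal device shape we rely on (a subset of MediaDeviceInfo).
type MicInfo = { deviceId: string; label: string };

// Pure helper: find a mic whose label contains the given substring.
function findMic(mics: MicInfo[], labelFragment: string): MicInfo | undefined {
  return mics.find((m) =>
    m.label.toLowerCase().includes(labelFragment.toLowerCase())
  );
}

// Structural stand-in for the parts of the client this sketch uses.
interface DeviceClient {
  getAllMics(): Promise<MicInfo[]>;
  updateMic(deviceId: string): void;
}

// Switch the active input to the first mic whose label mentions "headset".
async function switchToHeadset(client: DeviceClient): Promise<void> {
  const headset = findMic(await client.getAllMics(), "headset");
  if (headset) {
    client.updateMic(headset.deviceId);
  }
}
```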


Camera

Camera is disabled by default. Enable it in the constructor:
const client = new PipecatClient({
  transport: new DailyTransport(),
  enableCam: true,
});
Not all transports support video. enableCam has no effect on transports that don’t support it (e.g. WebSocket).
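The camera can also be toggled or switched mid-session. A sketch, assuming the client's enableCam(), getAllCams(), and updateCam() methods:

```typescript
// Turn the camera off (and back on) mid-session.
client.enableCam(false);
client.enableCam(true);

// Switch to a different camera by device ID.
const cams = await client.getAllCams();
if (cams.length > 1) {
  client.updateCam(cams[1].deviceId);
}
```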

Speakers
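Output devices follow the same pattern as mics and cameras. A sketch, assuming the client exposes getAllSpeakers() and updateSpeaker() (the pickSpeaker helper and SpeakerClient interface are illustrative, not part of the SDK):

```typescript
// Minimal device shape we rely on (a subset of MediaDeviceInfo).
type SpeakerInfo = { deviceId: string; label: string };

// Pure helper: prefer the OS "default" device, else the first one listed.
function pickSpeaker(speakers: SpeakerInfo[]): SpeakerInfo | undefined {
  return speakers.find((s) => s.deviceId === "default") ?? speakers[0];
}

// Structural stand-in for the parts of the client this sketch uses.
interface SpeakerClient {
  getAllSpeakers(): Promise<SpeakerInfo[]>;
  updateSpeaker(deviceId: string): void;
}

// Route bot audio to the preferred output device.
async function useDefaultSpeaker(client: SpeakerClient): Promise<void> {
  const speaker = pickSpeaker(await client.getAllSpeakers());
  if (speaker) {
    client.updateSpeaker(speaker.deviceId);
  }
}
```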


Device initialization before connecting

By default, device access is requested when connect() is called. To enumerate or test devices before the session starts (for example, to show a device picker pre-call), call initDevices() first:
await client.initDevices();
// devices are now available; user hasn't connected yet
const mics = await client.getAllMics();

Media tracks

For advanced use cases such as custom rendering, audio processing, or recording, you can access the raw MediaStreamTrack objects directly.
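A sketch of pulling tracks out of the client, assuming tracks() returns the current local and bot MediaStreamTracks grouped as { local, bot } (shape taken from the Pipecat JS client API; the #local-video element is illustrative):

```typescript
const tracks = client.tracks();

if (tracks.local.video) {
  // Render the local camera into a <video> element.
  const videoEl = document.querySelector<HTMLVideoElement>("#local-video")!;
  videoEl.srcObject = new MediaStream([tracks.local.video]);
}

if (tracks.bot?.audio) {
  // Feed the bot's audio into Web Audio for custom processing.
  const ctx = new AudioContext();
  const source = ctx.createMediaStreamSource(new MediaStream([tracks.bot.audio]));
  source.connect(ctx.destination);
}
```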

Audio visualization
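One common pattern is driving a level meter from the bot's audio track with a Web Audio AnalyserNode. A sketch under that assumption (the audioLevel and visualize helpers are illustrative, not part of the SDK; the track would come from something like client.tracks().bot.audio):

```typescript
// Pure helper: average byte-frequency data down to a 0..1 level.
function audioLevel(data: Uint8Array): number {
  if (data.length === 0) return 0;
  let sum = 0;
  for (const v of data) sum += v;
  return sum / (data.length * 255);
}

// Wire an audio track into an AnalyserNode and report a level
// on every animation frame.
function visualize(
  track: MediaStreamTrack,
  onLevel: (level: number) => void
): void {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256;
  ctx.createMediaStreamSource(new MediaStream([track])).connect(analyser);

  const data = new Uint8Array(analyser.frequencyBinCount);
  const tick = () => {
    analyser.getByteFrequencyData(data);
    onLevel(audioLevel(data));
    requestAnimationFrame(tick);
  };
  tick();
}
```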


API reference