The Pipecat JavaScript client provides a comprehensive set of methods for managing bot interactions and media handling. These core methods are documented below.

Session connectivity

connect()

async connect(connectParams): Promise<unknown>

This method initiates the connection process. You can pass it either the parameters your transport class requires to establish a connection, or an endpoint on your server from which to obtain those parameters.

connectParams
TransportConnectionParams | ConnectionEndpoint
required

An object containing either the TransportConnectionParams your Transport expects or a ConnectionEndpoint from which to obtain them.

Check your transport class documentation for the expected shape of TransportConnectionParams. For example, the DailyTransport expects a url and token.

When passing an endpoint, the client will make a request to the endpoint to obtain the connection parameters. The endpoint should return a JSON object with the necessary parameters for your transport. For Pipecat driven applications, it is common practice for this endpoint to also initialize the server-side bot process.
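
For example, an endpoint used with the DailyTransport might return a JSON body along these lines (the exact fields depend on what your transport expects; check its documentation):

{
  "url": "https://example.daily.co/my-room",
  "token": "DAILY_MEETING_TOKEN"
}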

The ConnectionEndpoint object should have a shape along the following lines (as used in the example further below; check the client’s type definitions for any additional optional fields):
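
{
  endpoint: string;      // URL of your connection endpoint
  requestData?: object;  // optional JSON-serializable data sent with the request
}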

You can wrap this method in a try/catch to handle errors at startup:

try {
  await pcClient.connect({
    endpoint: "http://my-server/connect",
    requestData: {
      llm_provider: "openai",
      initial_prompt: "You are a pirate captain",
    },
  });
} catch (error) {
  console.error("Error connecting to the bot:", error);
}

During the connection process, the transport state will transition through the following states: “authenticating”, “authenticated”, “connecting”, “connected”, “ready”. If you provide the transport connection parameters directly without an endpoint, the transport will skip the authentication states.

The Promise returned by connect() resolves when both the bot and client signal that they are ready. See messages and events. If you want to call connect() without await, you can use the onBotReady callback or BotReady event to know when you can interact with the bot.
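
For example, a minimal sketch of connecting without await, assuming PipecatClient and DailyTransport are imported from your Pipecat client packages and the onBotReady callback is registered in the constructor options:

const pcClient = new PipecatClient({
  transport: new DailyTransport(),
  callbacks: {
    onBotReady: () => {
      // Both the bot and client have signaled ready; safe to interact now
    },
  },
});

// Fire-and-forget connect; connection errors surface via the returned Promise
pcClient
  .connect({ endpoint: "http://my-server/connect" })
  .catch((error) => console.error("Error connecting to the bot:", error));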

Attempting to call connect() when the transport is already in a ‘connected’ or ‘ready’ state will throw an error. You should disconnect from a session first before attempting to connect again.

disconnect()

async disconnect(): Promise<void>

Disconnects from the active session. The transport state will transition to “disconnecting” and then “disconnected”.

It is common practice for bots to exit and clean up when the client disconnects.

await pcClient.disconnect();

disconnectBot()

disconnectBot(): void

Triggers the bot to disconnect from the session, leaving the client connected.

pcClient.disconnectBot();

Messages

Custom messaging between the client and the bot. This is useful for sending data to the bot, triggering specific actions, reacting to server events, or querying the server.

For more, see: messages and events.

sendClientMessage()

sendClientMessage(msgType: string, data?: unknown): void

Sends a custom message to the bot and does not expect a response. This is useful for sending data to the bot or triggering specific actions.

msgType
string
required

A string identifying the message.

data
unknown

Optional data to send with the message. This can be any JSON-serializable object.
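
For example (the message type and payload here are application-defined, not part of the client API):

pcClient.sendClientMessage("set-language", { language: "en-US" });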

sendClientRequest()

async sendClientRequest(msgType: string, data: unknown, timeout?: number): Promise<unknown>

Sends a custom request to the bot and expects a response. This is useful for querying the server or triggering specific actions that require a response. The method returns a Promise that resolves with the data from the response.

msgType
string
required

A string identifying the message.

data
unknown
required

Data to send with the message. This can be any JSON-serializable object.

timeout
number
default: 10000

Optional timeout in milliseconds for the request. If the request does not receive a response within this time, the returned Promise will reject with an RTVIMessage of type 'error-response'.
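
For example, a hypothetical request (the message type and payload are application-defined) with a 5 second timeout:

try {
  const response = await pcClient.sendClientRequest(
    "get-conversation-state",
    { include_history: false },
    5000
  );
  console.log("Server responded with:", response);
} catch (error) {
  console.error("Request failed or timed out:", error);
}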

Devices

initDevices()

async initDevices(): Promise<void>

Initializes the media device selection machinery, based on enableCam/enableMic selections and defaults (i.e. turns on the local cam/mic). This method can be called before connect() to test and switch between camera and microphone sources.

await pcClient.initDevices();

getAllMics()

async getAllMics(): Promise<MediaDeviceInfo[]>

Returns a list of available microphones in the form of MediaDeviceInfo[].

const mics = await pcClient.getAllMics();

getAllCams()

async getAllCams(): Promise<MediaDeviceInfo[]>

Returns a list of available cameras in the form of MediaDeviceInfo[].

const cams = await pcClient.getAllCams();

getAllSpeakers()

async getAllSpeakers(): Promise<MediaDeviceInfo[]>

Returns a list of available speakers in the form of MediaDeviceInfo[].

const speakers = await pcClient.getAllSpeakers();

selectedMic

selectedMic: MediaDeviceInfo | {}

The currently selected microphone, represented as a MediaDeviceInfo object. If no microphone is selected, it returns an empty object.

const currentMic = pcClient.selectedMic;

selectedCam

selectedCam: MediaDeviceInfo | {}

The currently selected camera, represented as a MediaDeviceInfo object. If no camera is selected, it returns an empty object.

const currentCam = pcClient.selectedCam;

selectedSpeaker

selectedSpeaker: MediaDeviceInfo | {}

The currently selected speaker, represented as a MediaDeviceInfo object. If no speaker is selected, it returns an empty object.

const currentSpeaker = pcClient.selectedSpeaker;

updateMic()

updateMic(micId: string): void

Switches to the microphone identified by the provided micId, which should match a deviceId in the list returned from getAllMics().

micId
string
required

The deviceId of the microphone to use, as returned by getAllMics().

pcClient.updateMic(deviceId);
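
For example, to switch to the first microphone returned by getAllMics():

const mics = await pcClient.getAllMics();
if (mics.length > 0) {
  pcClient.updateMic(mics[0].deviceId);
}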

updateCam()

updateCam(camId: string): void

Switches to the camera identified by the provided camId, which should match a deviceId in the list returned from getAllCams().

camId
string
required

The deviceId of the camera to use, as returned by getAllCams().

pcClient.updateCam(deviceId);

updateSpeaker()

updateSpeaker(speakerId: string): void

Switches to the speaker identified by the provided speakerId, which should match a deviceId in the list returned from getAllSpeakers().

speakerId
string
required

The deviceId of the speaker to use, as returned by getAllSpeakers().

pcClient.updateSpeaker(deviceId);

enableMic(enable: boolean)

enableMic(enable: boolean): void

Turns the client’s microphone input on or off (unmutes or mutes it).

enable
boolean
required

A boolean indicating whether to enable (true) or disable (false) the microphone.

pcClient.enableMic(true);

enableCam(enable: boolean)

enableCam(enable: boolean): void

Turns the client’s camera input on or off.

enable
boolean
required

A boolean indicating whether to enable (true) or disable (false) the camera.

pcClient.enableCam(true);

enableScreenShare(enable: boolean)

enableScreenShare(enable: boolean): void

Starts or stops a screen share from the client’s device.

enable
boolean
required

A boolean indicating whether to enable (true) or disable (false) screen sharing.

pcClient.enableScreenShare(true);

isMicEnabled

isMicEnabled: boolean

An accessor to determine if the client’s microphone is enabled.

const micEnabled = pcClient.isMicEnabled;

isCamEnabled

isCamEnabled: boolean

An accessor to determine if the client’s camera is enabled.

const camEnabled = pcClient.isCamEnabled;

isSharingScreen

isSharingScreen: boolean

An accessor to determine if the client is sharing their screen.

const sharingScreen = pcClient.isSharingScreen;

Tracks (audio and video)

tracks()

tracks(): Tracks

Returns a Tracks object with available MediaStreamTrack objects for both the client and the bot.

const liveTracks = pcClient.tracks();

Tracks Type

{
  local: {
    audio?: MediaStreamTrack;
    video?: MediaStreamTrack;
  },
  bot?: {
    audio?: MediaStreamTrack;
    video?: MediaStreamTrack;
  }
}
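
For example, a minimal sketch that attaches the bot’s audio track to an audio element (the element id here is hypothetical):

const tracks = pcClient.tracks();
const botAudio = tracks.bot?.audio;
if (botAudio) {
  const audioEl = document.getElementById("bot-audio"); // assumes an <audio id="bot-audio"> element
  audioEl.srcObject = new MediaStream([botAudio]);
  audioEl.play();
}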

Advanced LLM Interactions

appendToContext()

async appendToContext(context: LLMContextMessage): Promise<boolean>

A method to append data to the bot’s context. This is useful for providing additional information or context to the bot during the conversation.

context
LLMContextMessage
required

The context message to append. A sketch of the typical shape is shown below; check the library’s LLMContextMessage type definition for the authoritative fields.
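
{
  role: "user" | "assistant";    // whose side of the conversation the message belongs to
  content: unknown;              // the message content, e.g. a string
  run_immediately?: boolean;     // whether the bot should respond right away
}

For example:

await pcClient.appendToContext({
  role: "user",
  content: "The user has navigated to the settings page.",
  run_immediately: false,
});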

registerFunctionCallHandler()

registerFunctionCallHandler(functionName: string, callback: FunctionCallCallback): void

Registers a function call handler that will be called when the bot requests a function call. This is useful when the server-side function handler needs information from the client to execute the function call, or when the client needs to perform some action in response to a function call.

functionName
string
required

The name of the function to handle. This should match the function name in the bot’s context.

callback
FunctionCallCallback
required

type FunctionCallCallback = (fn: FunctionCallParams) => Promise<LLMFunctionCallResult | void>

The callback function to call when the bot sends a function call request. It receives a FunctionCallParams object describing the requested call; see the sketch below.

The callback should return a Promise that resolves with the result of the function call or void if no result is needed. If returning a result, it should be a string or Record<string, unknown>.
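
For example, a minimal sketch of a handler (the function name here is hypothetical and must match a function defined in the bot’s context):

pcClient.registerFunctionCallHandler("get_local_time", async (fn) => {
  // fn describes the requested call, typically including the function name
  // and the arguments provided by the LLM as a JSON-serializable object.
  console.log("Function call requested:", fn);

  // Return a string or Record<string, unknown> to send a result back to the
  // bot, or return nothing if no result is needed.
  return { time: new Date().toLocaleTimeString() };
});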

Other

transport

transport: Transport

A safe accessor for the transport instance used by the client. This is useful for accessing transport-specific methods or properties that are not exposed directly on the client.

const transport = pcClient.transport as DailyTransport;
transport.getSessionInfo();

setLogLevel()

setLogLevel(level: LogLevel): void

Sets the log level for the client. This is useful for debugging and controlling the verbosity of logs. The log levels are defined in the LogLevel enum:

export enum LogLevel {
  NONE = 0,
  ERROR = 1,
  WARN = 2,
  INFO = 3,
  DEBUG = 4,
}

By default, the log level is set to LogLevel.DEBUG.

pcClient.setLogLevel(LogLevel.INFO);