An overview of the RTVI standard and its key features
Every RTVI message carries the label 'rtvi-ai'.
'client-ready'
- string: The version of the RTVI standard being used. This is useful for ensuring compatibility between client and server implementations.
- AboutClient Object: An object containing information about the client, such as its rtvi-version, client library, and any other relevant metadata.
'bot-ready'
- string: The version of the RTVI standard being used. This is useful for ensuring compatibility between client and server implementations.
- any (optional): An object containing information about the server or bot. Its structure and value are undefined by default, which provides the flexibility to include any relevant metadata your client may need to know about the server at connection time. There are no built-in protections here, so be mindful of the data you include and of any security concerns that may arise from exposing sensitive information.
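As a rough illustration, the client-ready/bot-ready handshake can be sketched as plain payloads. The envelope fields (label, type, id, data) and the data field names (version, about) below are inferred from the descriptions above and are not authoritative wire names.

```typescript
// Sketch of the connection handshake, assuming an envelope of
// { label, type, id, data } and data field names inferred from above.

// Sent by the client once its transport is ready.
const clientReady = {
  label: "rtvi-ai",
  type: "client-ready",
  id: "msg-1", // illustrative message id
  data: {
    version: "1.0.0", // RTVI standard version (assumed field name)
    about: {
      // AboutClient object; exact structure not reproduced in this section
      library: "example-web-client",
    },
  },
};

// Sent by the server/bot in reply.
const botReady = {
  label: "rtvi-ai",
  type: "bot-ready",
  id: "msg-2",
  data: {
    version: "1.0.0",
    about: { region: "us-east" }, // optional; structure is up to the server
  },
};
```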
'disconnect-bot'
- undefined
'error'
- string: Description of the error.
- boolean: Indicates whether the error is fatal to the session.
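For example, a fatal error event might look like the following sketch; the data field names (message, fatal) are assumptions based on the descriptions above.

```typescript
// Hypothetical error message; field names `message` and `fatal` are assumed.
const errorMessage = {
  label: "rtvi-ai",
  type: "error",
  id: "msg-3",
  data: {
    message: "TTS service unavailable", // description of the error
    fatal: true,                        // the session cannot continue
  },
};
```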
'user-started-speaking'
'user-stopped-speaking'
'bot-started-speaking'
'bot-stopped-speaking'
'user-transcription'
- string: The transcribed text of the user.
- boolean: Indicates whether this is a final transcription or a partial result.
- string: The timestamp when the transcription was generated.
- string: Identifier for the user who spoke.
'bot-transcription'
- string: The transcribed text from the bot, typically aggregated at a per-sentence level. For finer-grained output, see the bot-tts-text message type.
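A minimal sketch of the two transcription events, with assumed data field names (text, final, timestamp, user_id):

```typescript
// Partial (interim) transcription of the user's speech.
const userTranscription = {
  label: "rtvi-ai",
  type: "user-transcription",
  id: "msg-4",
  data: {
    text: "what's the weather in",
    final: false,                      // still a partial result
    timestamp: "2024-01-01T12:00:00Z", // when the transcription was generated
    user_id: "user-123",               // who spoke (assumed field name)
  },
};

// Sentence-level transcription of the bot's speech.
const botTranscription = {
  label: "rtvi-ai",
  type: "bot-transcription",
  id: "msg-5",
  data: { text: "The weather in Lisbon is sunny today." },
};
```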
'server-message'
To handle responses from the client, see the client-message message type.
- any: The data can be any JSON-serializable object, formatted according to your own specifications.
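Since the payload is unconstrained, a server-message is essentially an envelope around arbitrary JSON. The application-specific fields in this sketch are purely illustrative.

```typescript
// A server-message carrying application-defined data; the shape of `data`
// is entirely up to your application.
const serverMessage = {
  label: "rtvi-ai",
  type: "server-message",
  id: "msg-6",
  data: {
    kind: "order-status", // illustrative, app-specific fields
    orderId: "A-1001",
    status: "shipped",
  },
};
```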
'client-message'
To handle responses from the server, see the server-response message type.
- t (string): A field indicating the type of message.
- d (unknown, optional): A field containing any custom, corresponding data needed for the message.
'server-response'
Sent in response to a client-message. IMPORTANT: The id should match the id of the original client-message to correlate the response with the request.
- t (string): A field indicating the type of message.
- d (unknown, optional): A field containing any custom, corresponding data needed for the message.
'error-response'
The id should match the id of the original client-message to correlate the response with the request.
- string
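Putting these message types together, a request/response exchange might look like the sketch below. The response type name 'server-response' follows the reference above, and the id correlation is the key detail: both the success and the error reply reuse the id of the originating client-message.

```typescript
// 1. Client asks the server something via a client-message.
const request = {
  label: "rtvi-ai",
  type: "client-message",
  id: "req-42",
  data: {
    t: "get-user-profile",     // t: the kind of message
    d: { userId: "user-123" }, // d: optional custom payload
  },
};

// 2a. On success, the server replies with a server-response whose id
//     matches the original request so the client can correlate them.
const response = {
  label: "rtvi-ai",
  type: "server-response",
  id: "req-42",
  data: {
    t: "get-user-profile",
    d: { name: "Ada", plan: "pro" },
  },
};

// 2b. On failure, the server replies with an error-response instead,
//     again reusing the request id.
const failure = {
  label: "rtvi-ai",
  type: "error-response",
  id: "req-42",
  data: "Unknown user id", // error description as a plain string
};
```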
'append-to-context'
- "user" | "assistant": The role the context should be appended to. Currently only "user" and "assistant" are supported.
- unknown: The content to append to the context. This can be any data structure the LLM understands.
- boolean (optional): Indicates whether the context should be run immediately after appending. Defaults to false. If set to false, the context is appended but not executed until the next LLM run.
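For example, appending a user turn and triggering an immediate LLM run could look like this; the data field names (role, content, run_immediately) are assumptions based on the descriptions above.

```typescript
// Append a user message to the LLM context and run it right away.
const appendToContext = {
  label: "rtvi-ai",
  type: "append-to-context",
  id: "msg-7",
  data: {
    role: "user",                           // "user" or "assistant"
    content: "Summarize our conversation.", // anything the LLM understands
    run_immediately: true,                  // defaults to false if omitted
  },
};
```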
'llm-function-call'
- string: Name of the function to be called.
- string: Unique identifier for this function call.
- Record<string, unknown>: Arguments to be passed to the function.
'llm-function-call-result'
- string: Name of the called function.
- string: Identifier matching the original function call.
- Record<string, unknown>: Arguments that were passed to the function.
- Record<string, unknown> | string: The result returned by the function.
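The function-call flow is a matched pair: one side announces a call and the other returns the result, keyed by the same identifier. A sketch with assumed data field names (function_name, tool_call_id, args, result):

```typescript
// The LLM wants a function executed.
const functionCall = {
  label: "rtvi-ai",
  type: "llm-function-call",
  id: "msg-8",
  data: {
    function_name: "get_weather",
    tool_call_id: "call-001", // unique id for this call
    args: { location: "Lisbon" },
  },
};

// The result, echoing the same identifier so it can be matched to the call.
const functionCallResult = {
  label: "rtvi-ai",
  type: "llm-function-call-result",
  id: "msg-9",
  data: {
    function_name: "get_weather",
    tool_call_id: "call-001",
    args: { location: "Lisbon" },
    result: { temperature_c: 21, conditions: "sunny" },
  },
};
```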
'bot-llm-search-response'
- string (optional): Raw search result text.
- string (optional): Formatted version of the search results.
- Array<Origin Object>: Source information and confidence scores for search results.
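A sketch of a search response follows. The field names (search_result, rendered_content, origins) are assumptions, and the Origin Object structure is not detailed in this section, so the origins array is left loosely typed.

```typescript
// Hypothetical bot-llm-search-response; origins left loosely typed because
// the Origin Object structure is not reproduced here.
const searchResponse = {
  label: "rtvi-ai",
  type: "bot-llm-search-response",
  id: "msg-10",
  data: {
    search_result: "Lisbon weather: 21°C, sunny ...", // raw text (optional)
    rendered_content: "**Lisbon**: 21°C, sunny",      // formatted text (optional)
    origins: [] as unknown[],                         // source info and confidence scores
  },
};
```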
'bot-llm-started'
'bot-llm-stopped'
'user-llm-text'
- string: The user's input text to be processed by the LLM.
'bot-llm-text'
- string: The token text from the LLM.
'bot-tts-started'
'bot-tts-stopped'
'bot-tts-text'
- string: The text representation of the generated bot speech.
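The LLM and TTS text events arrive in small increments while the bot is responding. A sketch with an assumed `text` field name:

```typescript
// Token-level text from the LLM as it is generated.
const botLlmText = {
  label: "rtvi-ai",
  type: "bot-llm-text",
  id: "msg-11",
  data: { text: "Hello" },
};

// Text corresponding to the speech the TTS service is producing,
// typically finer-grained than bot-transcription.
const botTtsText = {
  label: "rtvi-ai",
  type: "bot-tts-text",
  id: "msg-12",
  data: { text: "Hello" },
};
```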
'metrics'
Metric types include processing, ttfb, and characters.
- string: The name of the processor or service that generated the metric.
- number: The value of the metric, typically in milliseconds or a character count.
- string (optional): The model of the service that generated the metric, if applicable.
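For instance, a ttfb measurement from a TTS service might be reported like the sketch below. The field names (processor, value, model) and the grouping of entries under the metric type are assumptions based on the descriptions above.

```typescript
// Hypothetical metrics message reporting a time-to-first-byte measurement.
// Grouping measurements under the metric type name is an assumed layout.
const metricsMessage = {
  label: "rtvi-ai",
  type: "metrics",
  id: "msg-13",
  data: {
    ttfb: [
      {
        processor: "ExampleTTSService", // processor/service that produced the metric
        value: 312,                     // milliseconds
        model: "tts-model-x",           // optional
      },
    ],
  },
};
```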