Understanding Function Calling

Function calling (also known as tool calling) allows LLMs to request information from external services and APIs. This enables your bot to access real-time data and perform actions, neither of which it could do from its training data alone.

For example, you could give your bot the ability to:

  • Check current weather conditions
  • Look up stock prices
  • Query a database
  • Control smart home devices
  • Schedule appointments

Here’s how it works:

  1. You define functions the LLM can use
  2. When needed, the LLM requests a function call
  3. Your application executes the function
  4. The result is sent back to the LLM
  5. The LLM uses this information in its response
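
To make step 2 concrete: with OpenAI, the model requests a function call by emitting an assistant message shaped like the one below (shown for illustration; other providers use their own formats).

# An OpenAI-style assistant message requesting a function call (step 2)
{
    "role": "assistant",
    "tool_calls": [
        {
            "id": "call_abc123",
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "arguments": '{"location": "San Francisco, CA", "format": "fahrenheit"}',
            },
        }
    ],
}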

Implementation

1. Define Functions

Functions are defined differently depending on your LLM provider, so consult your provider's documentation for the exact schema.
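
As an example, here's a sketch of a weather function in OpenAI's JSON Schema-based tools format (property names and descriptions are illustrative):

# Tool definitions passed to the LLM (OpenAI format)
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_current_weather",
            "description": "Get the current weather for a given location",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA",
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use",
                    },
                },
                "required": ["location", "format"],
            },
        },
    }
]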

2. Register Function Handlers

Register handlers for your functions using the LLM service’s register_function method:

llm = OpenAILLMService(api_key="your-api-key", model="gpt-4")

# Optional start callback - called when function execution begins
async def start_fetch_weather(function_name, llm, context):
    logger.debug(f"Starting weather fetch: {function_name}")

# Main function handler - called to execute the function
async def fetch_weather_from_api(function_name, tool_call_id, args, llm, context, result_callback):
    # Fetch weather data from your API
    weather_data = {"conditions": "sunny", "temperature": "75"}
    await result_callback(weather_data)

# Register the function
llm.register_function(
    "get_current_weather",
    fetch_weather_from_api,
    start_callback=start_fetch_weather
)
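
A common use for the start callback is keeping the conversation from going silent while the function runs. Here's a minimal sketch, assuming Pipecat's TTSSpeakFrame is available in your setup:

from pipecat.frames.frames import TTSSpeakFrame

async def start_fetch_weather(function_name, llm, context):
    # Speak a short filler phrase while the weather lookup runs
    await llm.push_frame(TTSSpeakFrame("Let me check on that."))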

Alternatively, register a handler for all function calls:

# Or register a handler for all functions
llm.register_function(None, fetch_weather_from_api, start_callback=start_fetch_weather)

3. Create the Pipeline

Include your LLM service in your pipeline with the registered functions:

# Initialize the LLM context, including messages and tool definitions
context = OpenAILLMContext(messages, tools)

# Create the context aggregator to collect the user and assistant context
context_aggregator = llm.create_context_aggregator(context)

# Create the pipeline
# 1. Input from the transport
# 2. User context aggregation
# 3. LLM processing
# 4. TTS processing
# 5. Output to the transport
# 6. Assistant context aggregation
pipeline = Pipeline([
    transport.input(),
    context_aggregator.user(),
    llm,
    tts,
    transport.output(),
    context_aggregator.assistant(),
])
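
To actually run the pipeline, wrap it in a task and hand it to a runner. A minimal sketch, assuming Pipecat's PipelineTask and PipelineRunner:

from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask

# Wrap the pipeline in a task and run it until the transport disconnects
task = PipelineTask(pipeline)
runner = PipelineRunner()
await runner.run(task)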

Function Handler Details

Handler Parameters

  • function_name: Name of the called function
  • tool_call_id: Unique identifier for the function call
  • args: Arguments passed by the LLM
  • llm: Reference to the LLM service
  • context: Current conversation context
  • result_callback: Async function to return results

Return Values

  • Return data through the result_callback
  • Return None to ignore the function call
  • Errors should be handled within your function
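
For example, a handler can catch failures and return an error payload that the LLM can relay to the user. A sketch, where fetch_weather is a hypothetical API client call:

async def fetch_weather_from_api(function_name, tool_call_id, args, llm, context, result_callback):
    try:
        # args holds the LLM-provided arguments, e.g. {"location": "San Francisco, CA"}
        weather_data = await fetch_weather(args["location"])  # hypothetical API call
        await result_callback(weather_data)
    except Exception as e:
        # Return the error as data so the LLM can explain the failure
        await result_callback({"error": str(e)})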

Next steps

  • Check out the function calling examples to see complete examples for specific LLM providers.
  • Refer to your LLM provider's documentation to learn more about its function calling capabilities.