Overview

NimLLMService provides access to NVIDIA’s NIM language models through an OpenAI-compatible interface. It inherits from OpenAILLMService and supports streaming responses, function calling, and context management, with special handling for NVIDIA’s incremental token reporting.

Installation

To use NimLLMService, install the required dependencies:

pip install "pipecat-ai[nim]"

You’ll also need an NVIDIA NIM API key, set as the NIM_API_KEY environment variable.
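For example, you can read the key from the environment when constructing the service (a minimal sketch using the NIM_API_KEY variable named above):

import os

from pipecat.services.nim import NimLLMService

llm = NimLLMService(api_key=os.getenv("NIM_API_KEY"))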

Configuration

Constructor Parameters

  • api_key (str, required): Your NVIDIA NIM API key
  • model (str, default: "nvidia/llama-3.1-nemotron-70b-instruct"): Model identifier
  • base_url (str, default: "https://integrate.api.nvidia.com/v1"): NVIDIA NIM API endpoint
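
If you run a self-hosted NIM microservice, you can point base_url at your own deployment. A short sketch, where the localhost URL is a placeholder rather than a real endpoint:

llm = NimLLMService(
    api_key="your-nim-api-key",
    model="nvidia/llama-3.1-nemotron-70b-instruct",
    base_url="http://localhost:8000/v1",  # placeholder for a self-hosted NIM endpoint
)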

Input Parameters

Inherits OpenAI-compatible parameters:

  • frequency_penalty (Optional[float]): Penalizes tokens in proportion to how often they have already appeared, so positive values reduce repetition. Range: [-2.0, 2.0]
  • max_tokens (Optional[int]): Maximum number of tokens to generate. Must be greater than or equal to 1
  • presence_penalty (Optional[float]): Penalizes any token that has already appeared at least once, regardless of frequency, so positive values encourage new topics. Range: [-2.0, 2.0]
  • temperature (Optional[float]): Controls randomness in the output. Range: [0.0, 2.0]
  • top_p (Optional[float]): Controls diversity via nucleus sampling. Range: [0.0, 1.0]
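
These can be supplied at construction time. A minimal sketch, assuming NimLLMService exposes the InputParams model it inherits from OpenAILLMService and accepts it via a params argument:

llm = NimLLMService(
    api_key="your-nim-api-key",
    params=NimLLMService.InputParams(
        temperature=0.7,
        max_tokens=256,
        top_p=0.9,
    ),
)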

Usage Example

from pipecat.services.nim import NimLLMService
from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from openai.types.chat import ChatCompletionToolParam
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.task import PipelineParams, PipelineTask

# Configure service
llm = NimLLMService(
    api_key="your-nim-api-key",
    model="nvidia/llama-3.1-nemotron-70b-instruct"
)

# Define tools for function calling
tools = [
    ChatCompletionToolParam(
        type="function",
        function={
            "name": "get_current_weather",
            "description": "Get the current weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "The city and state, e.g. San Francisco, CA"
                    },
                    "format": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "The temperature unit to use"
                    }
                },
                "required": ["location", "format"]
            }
        }
    )
]

# Create context with system message and tools
context = OpenAILLMContext(
    messages=[
        {
            "role": "system",
            "content": "You are a helpful assistant in a voice conversation. Keep responses concise."
        }
    ],
    tools=tools
)

# Register function handlers. Passing None as the name registers the handler
# for all function calls; pass "get_current_weather" to scope it to one tool.
async def fetch_weather(function_name, tool_call_id, args, llm, context, result_callback):
    await result_callback({"conditions": "nice", "temperature": "75"})

llm.register_function(None, fetch_weather)

# Create context aggregator for message handling
context_aggregator = llm.create_context_aggregator(context)

# Set up pipeline (transport and tts are assumed to be configured elsewhere)
pipeline = Pipeline([
    transport.input(),
    context_aggregator.user(),
    llm,
    tts,
    transport.output(),
    context_aggregator.assistant()
])

# Create and configure task
task = PipelineTask(
    pipeline,
    PipelineParams(
        allow_interruptions=True,
        enable_metrics=True,
        enable_usage_metrics=True,
    ),
)
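
To run the example, hand the task to a pipeline runner (a short sketch; assumes pipecat's PipelineRunner and an async context):

from pipecat.pipeline.runner import PipelineRunner

runner = PipelineRunner()
await runner.run(task)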

Methods

See the LLM base class methods for additional functionality.

Function Calling

Supports OpenAI-compatible function calling:

# Define tools
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get weather information",
        "parameters": {
            "type": "object",
            "properties": {
                "location": {"type": "string"}
            }
        }
    }
}]

# Configure context with tools
context = OpenAILLMContext(
    messages=[],
    tools=tools
)

# Register a handler for the tool; the service invokes it when the model
# calls get_weather and feeds the result back through result_callback.
async def handle_weather(function_name, tool_call_id, args, llm, context, result_callback):
    await result_callback({"temperature": 72, "condition": "sunny"})

llm.register_function("get_weather", handle_weather)

Available Models

NVIDIA NIM provides access to various models:

Model Name                              Description
nvidia/llama-3.1-nemotron-70b-instruct  Llama 3.1 70B Nemotron instruct
nvidia/llama-3.1-nemotron-13b-instruct  Llama 3.1 13B Nemotron instruct
nvidia/llama-3.1-nemotron-8b-instruct   Llama 3.1 8B Nemotron instruct

See NVIDIA’s NIM console for a complete list of supported models.

Token Usage Handling

NimLLMService includes special handling for token usage metrics:

  1. Accumulates incremental token updates from NIM
  2. Records prompt tokens on first appearance
  3. Tracks completion tokens as they increase
  4. Reports final totals at the end of processing

This ensures compatibility with OpenAI’s token reporting format while maintaining accurate metrics.
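
A minimal sketch of that accumulation logic (the names here are illustrative, not pipecat's actual internals):

from dataclasses import dataclass

@dataclass
class TokenUsage:
    prompt_tokens: int = 0
    completion_tokens: int = 0

def accumulate(totals: TokenUsage, update: TokenUsage) -> TokenUsage:
    # Prompt tokens arrive once; record them on first appearance.
    if totals.prompt_tokens == 0 and update.prompt_tokens:
        totals.prompt_tokens = update.prompt_tokens
    # Completion tokens only grow during streaming; keep the running maximum.
    totals.completion_tokens = max(totals.completion_tokens, update.completion_tokens)
    return totals

# After the stream ends, totals holds the final counts to report.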

Frame Flow

Inherits the OpenAI LLM Service frame flow.

Metrics Support

The service collects standard LLM metrics:

  • Token usage (prompt and completion)
  • Processing duration
  • Time to First Byte (TTFB)
  • Function call metrics

Notes

  • OpenAI-compatible interface
  • Supports streaming responses
  • Handles function calling
  • Manages conversation context
  • Custom token usage tracking for NIM’s incremental reporting
  • Thread-safe processing
  • Automatic error handling