# Mem0

Long-term conversation memory service powered by Mem0
## Overview

`Mem0MemoryService` provides long-term memory capabilities for conversational agents by integrating with Mem0’s API. It automatically stores conversation history and retrieves relevant past context based on the current conversation, enhancing LLM responses with persistent memory across sessions.
## Installation
To use the Mem0 memory service, install the required dependencies:
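The extras name below is an assumption based on Pipecat's packaging conventions:

```shell
pip install "pipecat-ai[mem0]"
```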
You’ll also need to set your Mem0 API key in the `MEM0_API_KEY` environment variable.
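For example, in a POSIX shell (the key value is a placeholder):

```shell
export MEM0_API_KEY="your-mem0-api-key"
```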
You can obtain a Mem0 API key by signing up at mem0.ai.
## Mem0MemoryService

### Constructor Parameters

| Parameter | Description |
| --- | --- |
| `api_key` | Mem0 API key for accessing the service |
| `user_id` | Unique identifier for the end user to associate with memories |
| `agent_id` | Identifier for the agent using the memory service |
| `run_id` | Identifier for the specific conversation session |
| `params` | Configuration parameters for memory retrieval (see below) |

At least one of `user_id`, `agent_id`, or `run_id` must be provided to organize memories.
### Input Parameters

The `params` object accepts the following configuration settings:

| Parameter | Description |
| --- | --- |
| `search_limit` | Maximum number of relevant memories to retrieve per query |
| `search_threshold` | Relevance threshold for memory retrieval (0.0 to 1.0) |
| `api_version` | Mem0 API version to use |
| `system_prompt` | Prefix text to add before retrieved memories |
| `add_as_system_message` | Whether to add memories as a system message (`True`) or a user message (`False`) |
| `position` | Position in the context where memories should be inserted |
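As an illustration, these settings can be modeled as a small dataclass. The field names and default values shown here are assumptions for the sketch, not guaranteed to match the library:

```python
from dataclasses import dataclass

@dataclass
class InputParams:
    """Sketch of the retrieval settings above (defaults are assumptions)."""
    search_limit: int = 10              # max memories returned per query
    search_threshold: float = 0.1       # minimum relevance score (0.0 to 1.0)
    api_version: str = "v2"             # Mem0 API version
    system_prompt: str = "Based on previous conversations, I recall:\n"
    add_as_system_message: bool = True  # inject as system vs. user message
    position: int = 0                   # index in the context for insertion
```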
## Input Frames

The service processes the following input frames:

- `OpenAILLMContextFrame`: contains OpenAI-specific conversation context
- `LLMMessagesFrame`: contains conversation messages in standard format
## Output Frames

The service may produce the following output frames:

- `LLMMessagesFrame`: enhanced messages with relevant memories included
- `OpenAILLMContextFrame`: enhanced OpenAI context with memories included
- `ErrorFrame`: contains error information if memory operations fail
## Memory Operations

The service performs two main operations automatically:

### Message Storage

All conversation messages are stored in Mem0 for future reference. The service:
- Captures full message history from context frames
- Associates messages with the specified user/agent/run IDs
- Stores metadata to enable efficient retrieval
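The storage step can be sketched as follows. `build_storage_payload` is a hypothetical helper for illustration, not part of the library, and the metadata keys are assumptions:

```python
def build_storage_payload(messages, user_id=None, agent_id=None, run_id=None):
    """Associate the captured message history with the configured IDs."""
    ids = {
        key: value
        for key, value in
        {"user_id": user_id, "agent_id": agent_id, "run_id": run_id}.items()
        if value is not None
    }
    if not ids:
        raise ValueError("At least one of user_id, agent_id, or run_id is required")
    # Metadata shape here is illustrative; Mem0's actual schema may differ.
    return {"messages": messages, "metadata": {"source": "pipecat"}, **ids}
```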
### Memory Retrieval

When a new user message is detected, the service:
- Uses the message as a search query
- Retrieves relevant past memories from Mem0
- Formats memories with the configured system prompt
- Adds the formatted memories to the conversation context
- Passes the enhanced context downstream in the pipeline
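The formatting and injection steps can be sketched like this. `inject_memories` is a hypothetical helper showing how the configured settings interact, not the library's actual implementation:

```python
def inject_memories(messages, memories,
                    system_prompt="Based on previous conversations, I recall:\n",
                    search_threshold=0.1, add_as_system_message=True, position=0):
    """Filter retrieved memories by relevance, format them, insert into the context."""
    relevant = [m["memory"] for m in memories
                if m.get("score", 0.0) >= search_threshold]
    if not relevant:
        return messages
    text = system_prompt + "\n".join(f"- {m}" for m in relevant)
    role = "system" if add_as_system_message else "user"
    enhanced = list(messages)  # leave the original context untouched
    enhanced.insert(position, {"role": role, "content": text})
    return enhanced
```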
## Pipeline Positioning

The memory service should be positioned after the user context aggregator but before the LLM service.
This ensures that:
- The user’s latest message is included in the context
- The memory service can enhance the context before the LLM processes it
- The LLM receives the enhanced context with relevant memories
## Usage Examples
### Basic Integration
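A sketch of a typical setup. The import path, surrounding processors (`transport`, `stt`, `llm`, `tts`, `context_aggregator`), and identifiers are assumptions based on common Pipecat pipelines, so this snippet is illustrative rather than runnable as-is:

```python
import os

from pipecat.pipeline.pipeline import Pipeline
from pipecat.services.mem0 import Mem0MemoryService  # import path is an assumption

memory = Mem0MemoryService(
    api_key=os.getenv("MEM0_API_KEY"),
    user_id="user-123",  # hypothetical end-user identifier
)

pipeline = Pipeline([
    transport.input(),
    stt,
    context_aggregator.user(),   # aggregates the latest user message
    memory,                      # stores history, injects relevant memories
    llm,                         # receives the memory-enhanced context
    tts,
    transport.output(),
    context_aggregator.assistant(),
])
```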
### Frame Flow
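One way to picture the flow through the service:

```
OpenAILLMContextFrame / LLMMessagesFrame
                 |
                 v
        Mem0MemoryService
 (store messages, retrieve memories)
                 |
                 v
Enhanced LLMMessagesFrame / OpenAILLMContextFrame
        (ErrorFrame on failure)
```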
## Error Handling
The service includes basic error handling to ensure conversation flow continues even when memory operations fail:
- Exceptions during memory storage and retrieval are caught and logged
- If an error occurs during frame processing, an `ErrorFrame` is emitted with error details
- The original frame is still passed downstream to prevent the pipeline from stalling
- Connection and authentication errors from the Mem0 API will be logged but won’t interrupt the conversation
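The fail-open behavior described above can be sketched as follows; `process_with_memory` is a hypothetical helper, not the library's code:

```python
import logging

logger = logging.getLogger(__name__)

def process_with_memory(frame, enhance):
    """Try to enhance the frame with memories; on failure, log and fail open."""
    try:
        return enhance(frame)
    except Exception as exc:  # storage, retrieval, or Mem0 API errors
        logger.error("Memory operation failed: %s", exc)
        return frame  # pass the original frame downstream unchanged
```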
While the service attempts to handle errors gracefully, memory operations that fail may result in missing context in conversations. Monitor your application logs for memory-related errors.