The CrewAI framework provides a sophisticated memory system designed to significantly enhance the capabilities of AI agents. CrewAI offers **two distinct memory approaches** to serve different use cases:
- **Basic Memory System** - Built-in short-term, long-term, and entity memory
- **External Memory** - Standalone external memory providers
Memory System Components
| Component | Description |
|---|---|
| Short-Term Memory | Temporarily stores recent interactions and outcomes using RAG, enabling agents to recall and use information relevant to their current context during the current execution. |
| Long-Term Memory | Preserves valuable insights and learnings from past executions, allowing agents to build and refine their knowledge over time. |
| Entity Memory | Captures and organizes information about entities (people, places, concepts) encountered during tasks, facilitating deeper understanding and relationship mapping. Uses RAG for storing entity information. |
| Contextual Memory | Maintains the context of interactions by combining Short-Term, Long-Term, External, and Entity Memory, aiding in the coherence and relevance of agent responses across a sequence of tasks or a conversation. |
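Each of these components can also be constructed and customized explicitly instead of relying on the memory=True defaults. The following is a minimal sketch, assuming the ShortTermMemory, LongTermMemory, and EntityMemory classes from crewai.memory and the LTMSQLiteStorage backend shown later in this guide; the db_path is illustrative:
from crewai import Crew
from crewai.memory import LongTermMemory, ShortTermMemory, EntityMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage

# Wire up each memory component explicitly
crew = Crew(
    agents=[...],
    tasks=[...],
    memory=True,
    short_term_memory=ShortTermMemory(),  # RAG-backed, current execution only
    entity_memory=EntityMemory(),         # RAG-backed entity tracking
    long_term_memory=LongTermMemory(      # SQLite-backed, persists across sessions
        storage=LTMSQLiteStorage(db_path="./memory/long_term_memory_storage.db")
    ),
)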
1. Basic Memory System (Recommended)
The simplest and most commonly used approach. Enable memory for your crew with a single parameter:
Quick Start
from crewai import Crew, Agent, Task, Process
# Enable basic memory system
crew = Crew(
agents=[...],
tasks=[...],
process=Process.sequential,
memory=True, # Enables short-term, long-term, and entity memory
verbose=True
)
How It Works
- **Short-Term Memory**: Uses ChromaDB with RAG for the current context
- **Long-Term Memory**: Uses SQLite3 to store task results across sessions
- **Entity Memory**: Uses RAG to track entities (people, places, concepts)
- **Storage Location**: Platform-specific location determined via the appdirs package
- **Custom Storage Directory**: Set the CREWAI_STORAGE_DIR environment variable
Storage Location Transparency
**Understand your storage locations**: CrewAI uses platform-specific directories that follow operating system conventions for storing memory and knowledge files. Knowing these locations helps with production deployment, backups, and debugging.
Where CrewAI Stores Files
By default, CrewAI uses the appdirs library to determine storage locations following platform conventions. Here is exactly where your files are stored:
macOS
~/Library/Application Support/CrewAI/{project_name}/
├── knowledge/ # Knowledge base ChromaDB files
├── short_term_memory/ # Short-term memory ChromaDB files
├── long_term_memory/ # Long-term memory ChromaDB files
├── entities/ # Entity memory ChromaDB files
└── long_term_memory_storage.db # SQLite database
Linux
~/.local/share/CrewAI/{project_name}/
├── knowledge/
├── short_term_memory/
├── long_term_memory/
├── entities/
└── long_term_memory_storage.db
Windows
C:\Users\{username}\AppData\Local\CrewAI\{project_name}\
├── knowledge\
├── short_term_memory\
├── long_term_memory\
├── entities\
└── long_term_memory_storage.db
Finding Your Storage Location
To see exactly where CrewAI stores files on your system:
from crewai.utilities.paths import db_storage_path
import os
# Get the base storage path
storage_path = db_storage_path()
print(f"CrewAI storage location: {storage_path}")
# List all CrewAI storage directories
if os.path.exists(storage_path):
print("\nStored files and directories:")
for item in os.listdir(storage_path):
item_path = os.path.join(storage_path, item)
if os.path.isdir(item_path):
print(f"📁 {item}/")
# Show ChromaDB collections
if os.path.exists(item_path):
for subitem in os.listdir(item_path):
print(f" └── {subitem}")
else:
print(f"📄 {item}")
else:
print("No CrewAI storage directory found yet.")
Controlling Storage Locations
Option 1: Environment Variable (Recommended)
import os
from crewai import Crew
# Set custom storage location
os.environ["CREWAI_STORAGE_DIR"] = "./my_project_storage"
# All memory and knowledge will now be stored in ./my_project_storage/
crew = Crew(
agents=[...],
tasks=[...],
memory=True
)
Option 2: Custom Storage Paths
import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
# Configure custom storage location
custom_storage_path = "./storage"
os.makedirs(custom_storage_path, exist_ok=True)
crew = Crew(
memory=True,
long_term_memory=LongTermMemory(
storage=LTMSQLiteStorage(
db_path=f"{custom_storage_path}/memory.db"
)
)
)
Option 3: Project-Specific Storage
import os
from pathlib import Path
# Store in project directory
project_root = Path(__file__).parent
storage_dir = project_root / "crewai_storage"
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
# Now all storage will be in your project directory
Embedding Provider Defaults
**Default embedding provider**: CrewAI defaults to OpenAI embeddings for consistency and reliability. You can easily customize this to match your LLM provider or to use local embeddings.
Understanding Default Behavior
# When using Claude as your LLM...
from crewai import Agent, LLM
agent = Agent(
role="Analyst",
goal="Analyze data",
backstory="Expert analyst",
llm=LLM(provider="anthropic", model="claude-3-sonnet") # Using Claude
)
# CrewAI will use OpenAI embeddings by default for consistency
# You can easily customize this to match your preferred provider
Customizing Embedding Providers
from crewai import Crew
# Option 1: Match your LLM provider
crew = Crew(
agents=[agent],
tasks=[task],
memory=True,
embedder={
"provider": "anthropic", # Match your LLM provider
"config": {
"api_key": "your-anthropic-key",
"model": "text-embedding-3-small"
}
}
)
# Option 2: Use local embeddings (no external API calls)
crew = Crew(
agents=[agent],
tasks=[task],
memory=True,
embedder={
"provider": "ollama",
"config": {"model": "mxbai-embed-large"}
}
)
Debugging Storage Issues
Check Storage Permissions
import os
from crewai.utilities.paths import db_storage_path
storage_path = db_storage_path()
print(f"Storage path: {storage_path}")
print(f"Path exists: {os.path.exists(storage_path)}")
print(f"Is writable: {os.access(storage_path, os.W_OK) if os.path.exists(storage_path) else 'Path does not exist'}")
# Create with proper permissions
if not os.path.exists(storage_path):
os.makedirs(storage_path, mode=0o755, exist_ok=True)
print(f"Created storage directory: {storage_path}")
Inspect ChromaDB Collections
import os
import chromadb
from crewai.utilities.paths import db_storage_path
# Connect to CrewAI's ChromaDB
storage_path = db_storage_path()
chroma_path = os.path.join(storage_path, "knowledge")
if os.path.exists(chroma_path):
client = chromadb.PersistentClient(path=chroma_path)
collections = client.list_collections()
print("ChromaDB Collections:")
for collection in collections:
print(f" - {collection.name}: {collection.count()} documents")
else:
print("No ChromaDB storage found")
Reset Storage (Debugging)
from crewai import Crew
# Reset all memory storage
crew = Crew(agents=[...], tasks=[...], memory=True)
# Reset specific memory types
crew.reset_memories(command_type='short') # Short-term memory
crew.reset_memories(command_type='long') # Long-term memory
crew.reset_memories(command_type='entity') # Entity memory
crew.reset_memories(command_type='knowledge') # Knowledge storage
Production Best Practices
- **Set CREWAI_STORAGE_DIR** to a known location in production for better control
- **Choose explicit embedding providers** to match your LLM setup
- **Monitor storage directory size** in large-scale deployments
- **Include storage directories** in your backup strategy
- **Set appropriate file permissions** (0o755 for directories, 0o644 for files)
- **Use project-relative paths** for containerized deployments (a consolidated sketch follows this list)
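A minimal production-setup sketch combining the practices above; the backup destination is illustrative:
import os
import shutil
from pathlib import Path
from crewai.utilities.paths import db_storage_path

# Pin storage to a known, project-relative location (container-friendly)
storage_dir = Path(__file__).parent / "crewai_storage"
os.environ["CREWAI_STORAGE_DIR"] = str(storage_dir)
os.makedirs(storage_dir, mode=0o755, exist_ok=True)

# Include the storage tree in your backup strategy (destination path is illustrative)
backup_dir = Path("./backups/crewai_storage")
shutil.copytree(db_storage_path(), backup_dir, dirs_exist_ok=True)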
Common Storage Issues
"ChromaDB permission denied" errors
# Fix permissions
chmod -R 755 ~/.local/share/CrewAI/
"Database is locked" errors
# Ensure only one CrewAI instance accesses storage
import fcntl  # POSIX-only
import os
from crewai.utilities.paths import db_storage_path
storage_path = db_storage_path()
lock_file = os.path.join(storage_path, ".crewai.lock")
with open(lock_file, 'w') as f:
fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
# Your CrewAI code here
Storage not persisting between runs
# Verify storage location is consistent
import os
from crewai.utilities.paths import db_storage_path
print("CREWAI_STORAGE_DIR:", os.getenv("CREWAI_STORAGE_DIR"))
print("Current working directory:", os.getcwd())
print("Computed storage path:", db_storage_path())
Custom Embedder Configuration
CrewAI supports multiple embedding providers, giving you the flexibility to choose the best option for your use case. Here is a comprehensive guide to configuring different embedding providers for your memory system.
Why Choose Different Embedding Providers?
- **Cost optimization**: Local embeddings (Ollama) are free after the initial setup
- **Privacy**: Keep your data local with Ollama, or use your preferred cloud provider
- **Performance**: Some models work better for specific domains or languages
- **Consistency**: Match your embedding provider to your LLM provider
- **Compliance**: Meet specific regulatory or organizational requirements
OpenAI Embeddings (Default)
OpenAI provides reliable, high-quality embeddings suitable for most use cases.
from crewai import Crew
# Basic OpenAI configuration (uses environment OPENAI_API_KEY)
crew = Crew(
agents=[...],
tasks=[...],
memory=True,
embedder={
"provider": "openai",
"config": {
"model_name": "text-embedding-3-small" # or "text-embedding-3-large"
}
}
)
# Advanced OpenAI configuration
crew = Crew(
memory=True,
embedder={
"provider": "openai",
"config": {
"api_key": "your-openai-api-key", # Optional: override env var
"model_name": "text-embedding-3-large",
"dimensions": 1536, # Optional: reduce dimensions for smaller storage
"organization_id": "your-org-id" # Optional: for organization accounts
}
}
)
Azure OpenAI Embeddings
For enterprise users with Azure OpenAI deployments.
crew = Crew(
memory=True,
embedder={
"provider": "openai", # Use openai provider for Azure
"config": {
"api_key": "your-azure-api-key",
"api_base": "https://your-resource.openai.azure.com/",
"api_type": "azure",
"api_version": "2023-05-15",
"model_name": "text-embedding-3-small",
"deployment_id": "your-deployment-name" # Azure deployment name
}
}
)
Google AI Embeddings
Use Google's text embedding models, integrated with Google Cloud services.
crew = Crew(
memory=True,
embedder={
"provider": "google-generativeai",
"config": {
"api_key": "your-google-api-key",
"model_name": "gemini-embedding-001" # or "text-embedding-005", "text-multilingual-embedding-002"
}
}
)
Vertex AI Embeddings
For Google Cloud users with Vertex AI access.
crew = Crew(
memory=True,
embedder={
"provider": "vertexai",
"config": {
"project_id": "your-gcp-project-id",
"region": "us-central1", # or your preferred region
"api_key": "your-service-account-key",
"model_name": "textembedding-gecko"
}
}
)
Ollama Embeddings (Local)
Run embeddings locally for privacy and cost savings.
# First, install and run Ollama locally, then pull an embedding model:
# ollama pull mxbai-embed-large
crew = Crew(
memory=True,
embedder={
"provider": "ollama",
"config": {
"model": "mxbai-embed-large", # or "nomic-embed-text"
"url": "https://:11434/api/embeddings" # Default Ollama URL
}
}
)
# For custom Ollama installations
crew = Crew(
memory=True,
embedder={
"provider": "ollama",
"config": {
"model": "mxbai-embed-large",
"url": "http://your-ollama-server:11434/api/embeddings"
}
}
)
Cohere Embeddings
Use Cohere's embedding models for multilingual support.
crew = Crew(
memory=True,
embedder={
"provider": "cohere",
"config": {
"api_key": "your-cohere-api-key",
"model_name": "embed-english-v3.0" # or "embed-multilingual-v3.0"
}
}
)
VoyageAI Embeddings
High-performance embeddings optimized for retrieval tasks.
crew = Crew(
memory=True,
embedder={
"provider": "voyageai",
"config": {
"api_key": "your-voyage-api-key",
"model": "voyage-3", # or "voyage-3-lite", "voyage-code-3"
"input_type": "document" # or "query"
}
}
)
AWS Bedrock Embeddings
For AWS users with Bedrock access.
crew = Crew(
memory=True,
embedder={
"provider": "bedrock",
"config": {
"aws_access_key_id": "your-access-key",
"aws_secret_access_key": "your-secret-key",
"region_name": "us-east-1",
"model": "amazon.titan-embed-text-v1"
}
}
)
Hugging Face Embeddings
Use open-source models from Hugging Face.
crew = Crew(
memory=True,
embedder={
"provider": "huggingface",
"config": {
"api_key": "your-hf-token", # Optional for public models
"model": "sentence-transformers/all-MiniLM-L6-v2",
"api_url": "https://api-inference.huggingface.co" # or your custom endpoint
}
}
)
IBM Watson Embeddings
For IBM Cloud users.
crew = Crew(
memory=True,
embedder={
"provider": "watson",
"config": {
"api_key": "your-watson-api-key",
"url": "your-watson-instance-url",
"model": "ibm/slate-125m-english-rtrvr"
}
}
)
Mem0 Provider
Both short-term memory and entity memory support tight integration with Mem0 OSS and the Mem0 Client as providers. Here is how to use Mem0 as a provider.
from crewai.memory.short_term.short_term_memory import ShortTermMemory
from crewai.memory.entity.entity_memory import EntityMemory
mem0_oss_embedder_config = {
"provider": "mem0",
"config": {
"user_id": "john",
"local_mem0_config": {
"vector_store": {"provider": "qdrant","config": {"host": "localhost", "port": 6333}},
"llm": {"provider": "openai","config": {"api_key": "your-api-key", "model": "gpt-4"}},
"embedder": {"provider": "openai","config": {"api_key": "your-api-key", "model": "text-embedding-3-small"}}
},
"infer": True # Optional defaults to True
},
}
mem0_client_embedder_config = {
"provider": "mem0",
"config": {
"user_id": "john",
"org_id": "my_org_id", # Optional
"project_id": "my_project_id", # Optional
"api_key": "custom-api-key" # Optional - overrides env var
"run_id": "my_run_id", # Optional - for short-term memory
"includes": "include1", # Optional
"excludes": "exclude1", # Optional
"infer": True # Optional defaults to True
"custom_categories": new_categories # Optional - custom categories for user memory
},
}
short_term_memory_mem0_oss = ShortTermMemory(embedder_config=mem0_oss_embedder_config) # Short Term Memory with Mem0 OSS
short_term_memory_mem0_client = ShortTermMemory(embedder_config=mem0_client_embedder_config) # Short Term Memory with Mem0 Client
entity_memory_mem0_oss = EntityMemory(embedder_config=mem0_oss_embedder_config) # Entity Memory with Mem0 OSS
entity_memory_mem0_client = EntityMemory(embedder_config=mem0_client_embedder_config) # Entity Memory with Mem0 Client
crew = Crew(
memory=True,
short_term_memory=short_term_memory_mem0_oss, # or short_term_memory_mem0_client
entity_memory=entity_memory_mem0_oss # or entity_memory_mem0_client
)
Choosing the Right Embedding Provider
When selecting an embedding provider, consider factors such as performance, privacy, cost, and integration needs.
Here is a comparison to help you decide:
| Provider | Best For | Pros | Cons |
|---|---|---|---|
| OpenAI | General use, high reliability | High quality, widely tested | Paid service, requires API key |
| Ollama | Privacy-focused, cost savings | Free, runs locally, fully private | Requires local installation/setup |
| Google AI | Integration with the Google ecosystem | Strong performance, well supported | Requires a Google account |
| Azure OpenAI | Enterprise and compliance needs | Enterprise features, security | More complex setup |
| Cohere | Multilingual content | Excellent language support | More niche use case |
| VoyageAI | Information retrieval and search | Optimized for retrieval tasks | Relatively new provider |
| Mem0 | User personalization | Search-optimized embeddings | Paid service, requires API key |
Environment Variable Configuration
For security, store API keys in environment variables:
import os
# Set environment variables
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["GOOGLE_API_KEY"] = "your-google-key"
os.environ["COHERE_API_KEY"] = "your-cohere-key"
# Use without exposing keys in code
crew = Crew(
memory=True,
embedder={
"provider": "openai",
"config": {
"model": "text-embedding-3-small"
# API key automatically loaded from environment
}
}
)
Testing Different Embedding Providers
Compare embedding providers for your specific use case:
from crewai import Crew
from crewai.utilities.paths import db_storage_path
# Test different providers with the same data
providers_to_test = [
{
"name": "OpenAI",
"config": {
"provider": "openai",
"config": {"model": "text-embedding-3-small"}
}
},
{
"name": "Ollama",
"config": {
"provider": "ollama",
"config": {"model": "mxbai-embed-large"}
}
}
]
for provider in providers_to_test:
print(f"\nTesting {provider['name']} embeddings...")
# Create crew with specific embedder
crew = Crew(
agents=[...],
tasks=[...],
memory=True,
embedder=provider['config']
)
# Run your test and measure performance
result = crew.kickoff()
print(f"{provider['name']} completed successfully")
Troubleshooting Embedding Issues
Model not found errors
# Verify model availability
from crewai.rag.embeddings.configurator import EmbeddingConfigurator
configurator = EmbeddingConfigurator()
try:
embedder = configurator.configure_embedder({
"provider": "ollama",
"config": {"model": "mxbai-embed-large"}
})
print("Embedder configured successfully")
except Exception as e:
print(f"Configuration error: {e}")
API key issues
import os
# Check if API keys are set
required_keys = ["OPENAI_API_KEY", "GOOGLE_API_KEY", "COHERE_API_KEY"]
for key in required_keys:
if os.getenv(key):
print(f"✅ {key} is set")
else:
print(f"❌ {key} is not set")
Performance comparison
import time
def test_embedding_performance(embedder_config, test_text="This is a test document"):
start_time = time.time()
crew = Crew(
agents=[...],
tasks=[...],
memory=True,
embedder=embedder_config
)
# Simulate memory operation
crew.kickoff()
end_time = time.time()
return end_time - start_time
# Compare performance
openai_time = test_embedding_performance({
"provider": "openai",
"config": {"model": "text-embedding-3-small"}
})
ollama_time = test_embedding_performance({
"provider": "ollama",
"config": {"model": "mxbai-embed-large"}
})
print(f"OpenAI: {openai_time:.2f}s")
print(f"Ollama: {ollama_time:.2f}s")
Entity Memory Batching Behavior
Entity memory supports batching when saving multiple entities at once. When you pass a list of EntityMemoryItem objects, the system will:
- Emit a single MemorySaveStartedEvent with an entity_count
- Save each entity internally, collecting any partial errors
- Emit a MemorySaveCompletedEvent with aggregated metadata (saved count, errors)
- Raise a partial-save exception (including counts) if some entities fail
This improves performance and observability when writing many entities in one operation.
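A minimal sketch of a batched save, assuming the EntityMemoryItem constructor takes name, type, description, and relationships; the entity values are illustrative:
from crewai.memory.entity.entity_memory import EntityMemory
from crewai.memory.entity.entity_memory_item import EntityMemoryItem

entity_memory = EntityMemory()

# Saving a list emits one MemorySaveStartedEvent with entity_count=2
items = [
    EntityMemoryItem(
        name="Acme Corp",
        type="organization",
        description="Customer mentioned during the research task",
        relationships="employs Jane Doe",
    ),
    EntityMemoryItem(
        name="Jane Doe",
        type="person",
        description="Primary contact at Acme Corp",
        relationships="works at Acme Corp",
    ),
]
entity_memory.save(items)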
2. External Memory
External Memory provides a standalone memory system that operates independently of the crew's built-in memory. This is ideal for specialized memory providers or for sharing memory across applications.
Basic External Memory with Mem0
import os
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
# Create external memory instance with local Mem0 Configuration
external_memory = ExternalMemory(
embedder_config={
"provider": "mem0",
"config": {
"user_id": "john",
"local_mem0_config": {
"vector_store": {
"provider": "qdrant",
"config": {"host": "localhost", "port": 6333}
},
"llm": {
"provider": "openai",
"config": {"api_key": "your-api-key", "model": "gpt-4"}
},
"embedder": {
"provider": "openai",
"config": {"api_key": "your-api-key", "model": "text-embedding-3-small"}
}
},
"infer": True # Optional defaults to True
},
}
)
crew = Crew(
agents=[...],
tasks=[...],
external_memory=external_memory, # Separate from basic memory
process=Process.sequential,
verbose=True
)
Advanced External Memory with the Mem0 Client
When using the Mem0 Client, you can further customize the memory configuration with parameters such as "includes", "excludes", "custom_categories", "infer", and "run_id" (short-term memory only). You can find more details in the Mem0 documentation.
import os
from crewai import Agent, Crew, Process, Task
from crewai.memory.external.external_memory import ExternalMemory
new_categories = [
{"lifestyle_management_concerns": "Tracks daily routines, habits, hobbies and interests including cooking, time management and work-life balance"},
{"seeking_structure": "Documents goals around creating routines, schedules, and organized systems in various life areas"},
{"personal_information": "Basic information about the user including name, preferences, and personality traits"}
]
os.environ["MEM0_API_KEY"] = "your-api-key"
# Create external memory instance with Mem0 Client
external_memory = ExternalMemory(
embedder_config={
"provider": "mem0",
"config": {
"user_id": "john",
"org_id": "my_org_id", # Optional
"project_id": "my_project_id", # Optional
"api_key": "custom-api-key" # Optional - overrides env var
"run_id": "my_run_id", # Optional - for short-term memory
"includes": "include1", # Optional
"excludes": "exclude1", # Optional
"infer": True # Optional defaults to True
"custom_categories": new_categories # Optional - custom categories for user memory
},
}
)
crew = Crew(
agents=[...],
tasks=[...],
external_memory=external_memory, # Separate from basic memory
process=Process.sequential,
verbose=True
)
Custom Storage Implementation
from crewai.memory.external.external_memory import ExternalMemory
from crewai.memory.storage.interface import Storage
class CustomStorage(Storage):
def __init__(self):
self.memories = []
def save(self, value, metadata=None, agent=None):
self.memories.append({
"value": value,
"metadata": metadata,
"agent": agent
})
def search(self, query, limit=10, score_threshold=0.5):
# Implement your search logic here
return [m for m in self.memories if query.lower() in str(m["value"]).lower()]
def reset(self):
self.memories = []
# Use custom storage
external_memory = ExternalMemory(storage=CustomStorage())
crew = Crew(
agents=[...],
tasks=[...],
external_memory=external_memory
)
🧠 Memory System Comparison
| Feature | Basic Memory | External Memory |
|---|---|---|
| Setup complexity | Simple | Medium |
| Integration | Built-in (contextual) | Standalone |
| Storage | Local files | Custom / Mem0 |
| Cross-session support | ✅ | ✅ |
| User-specific memory | ❌ | ✅ |
| Custom providers | Limited | Any provider |
| Recommended for | Most general use cases | Specialized/custom needs |
Supported Embedding Providers
OpenAI (Default)
crew = Crew(
memory=True,
embedder={
"provider": "openai",
"config": {"model": "text-embedding-3-small"}
}
)
Ollama
crew = Crew(
memory=True,
embedder={
"provider": "ollama",
"config": {"model": "mxbai-embed-large"}
}
)
Google AI
crew = Crew(
memory=True,
embedder={
"provider": "google-generativeai",
"config": {
"api_key": "your-api-key",
"model_name": "gemini-embedding-001"
}
}
)
Azure OpenAI
crew = Crew(
memory=True,
embedder={
"provider": "openai",
"config": {
"api_key": "your-api-key",
"api_base": "https://your-resource.openai.azure.com/",
"api_version": "2023-05-15",
"model_name": "text-embedding-3-small"
}
}
)
Vertex AI
crew = Crew(
memory=True,
embedder={
"provider": "vertexai",
"config": {
"project_id": "your-project-id",
"region": "your-region",
"api_key": "your-api-key",
"model_name": "textembedding-gecko"
}
}
)
Security Best Practices
Environment Variables
import os
from crewai import Crew
# Store sensitive data in environment variables
crew = Crew(
memory=True,
embedder={
"provider": "openai",
"config": {
"api_key": os.getenv("OPENAI_API_KEY"),
"model": "text-embedding-3-small"
}
}
)
Storage Security
import os
from crewai import Crew
from crewai.memory import LongTermMemory
from crewai.memory.storage.ltm_sqlite_storage import LTMSQLiteStorage
# Use secure storage paths
storage_path = os.getenv("CREWAI_STORAGE_DIR", "./storage")
os.makedirs(storage_path, mode=0o700, exist_ok=True) # Restricted permissions
crew = Crew(
memory=True,
long_term_memory=LongTermMemory(
storage=LTMSQLiteStorage(
db_path=f"{storage_path}/memory.db"
)
)
)
Troubleshooting
Common Issues
Memory not persisting between sessions?
- Check the CREWAI_STORAGE_DIR environment variable
- Ensure the storage directory has write permissions
- Verify that memory is enabled via memory=True
Mem0 authentication errors?
- Verify that the MEM0_API_KEY environment variable is set
- Check the API key permissions in the Mem0 dashboard
- Ensure the mem0ai package is installed
High memory usage with large datasets?
- Consider using External Memory with custom storage
- Implement pagination in your custom storage's search method (see the sketch after this list)
- Use smaller embedding models to reduce the memory footprint
Performance Tips
- Use memory=True for most use cases (simplest and fastest)
- Only use user-specific memory when you need per-user persistence
- Consider External Memory for large-scale or specialized needs
- Choose smaller embedding models for faster processing
- Set appropriate search limits to control the size of memory retrieval
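A minimal sketch of paginated search for a custom storage backend; the offset parameter is an illustrative addition, not part of the Storage interface:
from crewai.memory.storage.interface import Storage

class PaginatedStorage(Storage):
    def __init__(self):
        self.memories = []

    def save(self, value, metadata=None, agent=None):
        self.memories.append({"value": value, "metadata": metadata, "agent": agent})

    def search(self, query, limit=10, score_threshold=0.5, offset=0):
        # Filter first, then return a single page instead of every match
        matches = [m for m in self.memories if query.lower() in str(m["value"]).lower()]
        return matches[offset:offset + limit]

    def reset(self):
        self.memories = []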
Benefits of Using CrewAI's Memory System
- 🦾 Adaptive Learning: Crews become more efficient over time, adapting to new information and refining their approach to tasks.
- 🫡 Enhanced Personalization: Memory enables agents to remember user preferences and past interactions, delivering personalized experiences.
- 🧠 Improved Problem Solving: Access to a rich memory store helps agents make more informed decisions, drawing on past experiences and contextual insights.
Memory Events
CrewAI's event system provides powerful insight into memory operations. By leveraging memory events, you can monitor, debug, and optimize the performance and behavior of your memory system.
Available Memory Events
CrewAI emits the following memory-related events:
| Event | Description | Key Properties |
|---|---|---|
| MemoryQueryStartedEvent | Emitted when a memory query begins | query, limit, score_threshold |
| MemoryQueryCompletedEvent | Emitted when a memory query completes successfully | query, results, limit, score_threshold, query_time_ms |
| MemoryQueryFailedEvent | Emitted when a memory query fails | query, limit, score_threshold, error |
| MemorySaveStartedEvent | Emitted when a memory save operation begins | value, metadata, agent_role |
| MemorySaveCompletedEvent | Emitted when a memory save operation completes successfully | value, metadata, agent_role, save_time_ms |
| MemorySaveFailedEvent | Emitted when a memory save operation fails | value, metadata, agent_role, error |
| MemoryRetrievalStartedEvent | Emitted when memory retrieval for a task prompt begins | task_id |
| MemoryRetrievalCompletedEvent | Emitted when memory retrieval completes successfully | task_id, memory_content, retrieval_time_ms |
Practical Applications
1. Memory Performance Monitoring
Track memory operation timing to optimize your application:
from crewai.events import (
BaseEventListener,
MemoryQueryCompletedEvent,
MemorySaveCompletedEvent
)
import time
class MemoryPerformanceMonitor(BaseEventListener):
def __init__(self):
super().__init__()
self.query_times = []
self.save_times = []
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(MemoryQueryCompletedEvent)
def on_memory_query_completed(source, event: MemoryQueryCompletedEvent):
self.query_times.append(event.query_time_ms)
print(f"Memory query completed in {event.query_time_ms:.2f}ms. Query: '{event.query}'")
print(f"Average query time: {sum(self.query_times)/len(self.query_times):.2f}ms")
@crewai_event_bus.on(MemorySaveCompletedEvent)
def on_memory_save_completed(source, event: MemorySaveCompletedEvent):
self.save_times.append(event.save_time_ms)
print(f"Memory save completed in {event.save_time_ms:.2f}ms")
print(f"Average save time: {sum(self.save_times)/len(self.save_times):.2f}ms")
# Create an instance of your listener
memory_monitor = MemoryPerformanceMonitor()
2. Memory Content Logging
Log memory operations for debugging and insight:
from crewai.events import (
BaseEventListener,
MemorySaveStartedEvent,
MemoryQueryStartedEvent,
MemoryRetrievalCompletedEvent
)
import logging
# Configure logging
logger = logging.getLogger('memory_events')
class MemoryLogger(BaseEventListener):
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(MemorySaveStartedEvent)
def on_memory_save_started(source, event: MemorySaveStartedEvent):
if event.agent_role:
logger.info(f"Agent '{event.agent_role}' saving memory: {event.value[:50]}...")
else:
logger.info(f"Saving memory: {event.value[:50]}...")
@crewai_event_bus.on(MemoryQueryStartedEvent)
def on_memory_query_started(source, event: MemoryQueryStartedEvent):
logger.info(f"Memory query started: '{event.query}' (limit: {event.limit})")
@crewai_event_bus.on(MemoryRetrievalCompletedEvent)
def on_memory_retrieval_completed(source, event: MemoryRetrievalCompletedEvent):
if event.task_id:
logger.info(f"Memory retrieved for task {event.task_id} in {event.retrieval_time_ms:.2f}ms")
else:
logger.info(f"Memory retrieved in {event.retrieval_time_ms:.2f}ms")
logger.debug(f"Memory content: {event.memory_content}")
# Create an instance of your listener
memory_logger = MemoryLogger()
3. Error Tracking and Notification
Capture and respond to memory errors:
from crewai.events import (
BaseEventListener,
MemorySaveFailedEvent,
MemoryQueryFailedEvent
)
import logging
from typing import Optional
# Configure logging
logger = logging.getLogger('memory_errors')
class MemoryErrorTracker(BaseEventListener):
def __init__(self, notify_email: Optional[str] = None):
super().__init__()
self.notify_email = notify_email
self.error_count = 0
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(MemorySaveFailedEvent)
def on_memory_save_failed(source, event: MemorySaveFailedEvent):
self.error_count += 1
agent_info = f"Agent '{event.agent_role}'" if event.agent_role else "Unknown agent"
error_message = f"Memory save failed: {event.error}. {agent_info}"
logger.error(error_message)
if self.notify_email and self.error_count % 5 == 0:
self._send_notification(error_message)
@crewai_event_bus.on(MemoryQueryFailedEvent)
def on_memory_query_failed(source, event: MemoryQueryFailedEvent):
self.error_count += 1
error_message = f"Memory query failed: {event.error}. Query: '{event.query}'"
logger.error(error_message)
if self.notify_email and self.error_count % 5 == 0:
self._send_notification(error_message)
def _send_notification(self, message):
# Implement your notification system (email, Slack, etc.)
print(f"[NOTIFICATION] Would send to {self.notify_email}: {message}")
# Create an instance of your listener
error_tracker = MemoryErrorTracker(notify_email="[email protected]")
4. Analytics Integration
Memory events can be forwarded to analytics and monitoring platforms to track performance metrics, detect anomalies, and visualize memory usage patterns:
from crewai.events import (
BaseEventListener,
MemoryQueryCompletedEvent,
MemorySaveCompletedEvent
)
class MemoryAnalyticsForwarder(BaseEventListener):
def __init__(self, analytics_client):
super().__init__()
self.client = analytics_client
def setup_listeners(self, crewai_event_bus):
@crewai_event_bus.on(MemoryQueryCompletedEvent)
def on_memory_query_completed(source, event: MemoryQueryCompletedEvent):
# Forward query metrics to analytics platform
self.client.track_metric({
"event_type": "memory_query",
"query": event.query,
"duration_ms": event.query_time_ms,
"result_count": len(event.results) if hasattr(event.results, "__len__") else 0,
"timestamp": event.timestamp
})
@crewai_event_bus.on(MemorySaveCompletedEvent)
def on_memory_save_completed(source, event: MemorySaveCompletedEvent):
# Forward save metrics to analytics platform
self.client.track_metric({
"event_type": "memory_save",
"agent_role": event.agent_role,
"duration_ms": event.save_time_ms,
"timestamp": event.timestamp
})
Best Practices for Memory Event Listeners
- **Keep handlers lightweight**: Avoid complex processing in event handlers to prevent performance impact
- **Use appropriate log levels**: INFO for normal operations, DEBUG for details, ERROR for problems
- **Batch metrics when possible**: Accumulate metrics before sending them to external systems (see the sketch after this list)
- **Handle exceptions gracefully**: Make sure your event handlers don't crash on unexpected data
- **Consider memory consumption**: Be mindful of storing large amounts of event data
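A minimal sketch of the batching pattern, assuming a hypothetical analytics_client exposing a track_batch method; the flush threshold is illustrative:
from crewai.events import BaseEventListener, MemoryQueryCompletedEvent

class BatchingMetricsListener(BaseEventListener):
    def __init__(self, analytics_client, flush_every=50):
        super().__init__()
        self.client = analytics_client  # hypothetical analytics client
        self.flush_every = flush_every
        self.buffer = []

    def setup_listeners(self, crewai_event_bus):
        @crewai_event_bus.on(MemoryQueryCompletedEvent)
        def on_query_completed(source, event: MemoryQueryCompletedEvent):
            # Keep the handler lightweight: buffer now, send later
            self.buffer.append({"query": event.query, "duration_ms": event.query_time_ms})
            if len(self.buffer) >= self.flush_every:
                self.client.track_batch(self.buffer)  # hypothetical batch API
                self.buffer = []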
Integrating CrewAI's memory system into your projects is straightforward. By leveraging the provided memory components and configurations, you can quickly equip your agents with the ability to remember, reason, and learn from their interactions, unlocking new levels of intelligence and capability.