QdrantVectorSearchTool

The Qdrant vector search tool provides your CrewAI agents with semantic search capabilities by leveraging Qdrant, a vector similarity search engine. It allows your agents to search documents stored in a Qdrant collection by semantic similarity.

Installation

Install the required package:

uv add qdrant-client

Basic Usage

Here's a minimal example of how to use the tool:

from crewai import Agent
from crewai_tools import QdrantVectorSearchTool

# Initialize the tool
qdrant_tool = QdrantVectorSearchTool(
    qdrant_url="your_qdrant_url",
    qdrant_api_key="your_qdrant_api_key",
    collection_name="your_collection"
)

# Create an agent that uses the tool
agent = Agent(
    role="Research Assistant",
    goal="Find relevant information in documents",
    backstory="You excel at locating relevant passages with semantic search.",
    tools=[qdrant_tool]
)

# The tool will automatically use OpenAI embeddings
# and return the 3 most relevant results with scores > 0.35

Complete Working Example

Here's a complete example showing how to:

  1. Extract text from a PDF
  2. Generate embeddings with OpenAI
  3. Store them in Qdrant
  4. Build a CrewAI agentic RAG workflow for semantic search

import os
import uuid
import pdfplumber
from openai import OpenAI
from dotenv import load_dotenv
from crewai import Agent, Task, Crew, Process, LLM
from crewai_tools import QdrantVectorSearchTool
from qdrant_client import QdrantClient
from qdrant_client.models import PointStruct, Distance, VectorParams

# Load environment variables
load_dotenv()

# Initialize OpenAI client
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

# Extract text from PDF
def extract_text_from_pdf(pdf_path):
    text = []
    with pdfplumber.open(pdf_path) as pdf:
        for page in pdf.pages:
            page_text = page.extract_text()
            if page_text:
                text.append(page_text.strip())
    return text

# Generate OpenAI embeddings
def get_openai_embedding(text):
    response = client.embeddings.create(
        input=text,
        model="text-embedding-3-small"
    )
    return response.data[0].embedding

# Store text and embeddings in Qdrant
def load_pdf_to_qdrant(pdf_path, qdrant, collection_name):
    # Extract text from PDF
    text_chunks = extract_text_from_pdf(pdf_path)
    
    # Create Qdrant collection
    if qdrant.collection_exists(collection_name):
        qdrant.delete_collection(collection_name)
    qdrant.create_collection(
        collection_name=collection_name,
        vectors_config=VectorParams(size=1536, distance=Distance.COSINE)
    )

    # Store embeddings
    points = []
    for chunk in text_chunks:
        embedding = get_openai_embedding(chunk)
        points.append(PointStruct(
            id=str(uuid.uuid4()),
            vector=embedding,
            payload={"text": chunk}
        ))
    qdrant.upsert(collection_name=collection_name, points=points)

# Initialize Qdrant client and load data
qdrant = QdrantClient(
    url=os.getenv("QDRANT_URL"),
    api_key=os.getenv("QDRANT_API_KEY")
)
collection_name = "example_collection"
pdf_path = "path/to/your/document.pdf"
load_pdf_to_qdrant(pdf_path, qdrant, collection_name)

# Initialize Qdrant search tool
qdrant_tool = QdrantVectorSearchTool(
    qdrant_url=os.getenv("QDRANT_URL"),
    qdrant_api_key=os.getenv("QDRANT_API_KEY"),
    collection_name=collection_name,
    limit=3,
    score_threshold=0.35
)

# Create CrewAI agents
search_agent = Agent(
    role="Senior Semantic Search Agent",
    goal="Find and analyze documents based on semantic search",
    backstory="""You are an expert research assistant who can find relevant 
    information using semantic search in a Qdrant database.""",
    tools=[qdrant_tool],
    verbose=True
)

answer_agent = Agent(
    role="Senior Answer Assistant",
    goal="Generate answers to questions based on the context provided",
    backstory="""You are an expert answer assistant who can generate 
    answers to questions based on the context provided.""",
    tools=[qdrant_tool],
    verbose=True
)

# Define tasks
search_task = Task(
    description="""Search for relevant documents about the {query}.
    Your final answer should include:
    - The relevant information found
    - The similarity scores of the results
    - The metadata of the relevant documents""",
    expected_output="A summary of the relevant passages with their similarity scores and metadata.",
    agent=search_agent
)

answer_task = Task(
    description="""Given the context and metadata of relevant documents,
    generate a final answer based on the context.""",
    expected_output="A concise answer to the question, grounded in the retrieved context.",
    agent=answer_agent
)

# Run CrewAI workflow
crew = Crew(
    agents=[search_agent, answer_agent],
    tasks=[search_task, answer_task],
    process=Process.sequential,
    verbose=True
)

result = crew.kickoff(
    inputs={"query": "What is the role of X in the document?"}
)
print(result)
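
The example above stores one embedding per PDF page, which can exceed the embedding model's token limit on dense pages. A minimal chunking helper (an illustrative addition, not part of the tool) that splits page text into overlapping word windows before embedding:

```python
def chunk_text(text: str, max_words: int = 200, overlap: int = 20) -> list[str]:
    """Split text into overlapping word windows so each chunk stays
    well under the embedding model's token limit."""
    words = text.split()
    if len(words) <= max_words:
        return [text] if words else []
    chunks = []
    step = max_words - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_words]))
        if start + max_words >= len(words):
            break
    return chunks

# Each chunk can then be embedded and upserted exactly like the
# page-level chunks in load_pdf_to_qdrant above.
```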

Tool Parameters

Required Parameters

  • qdrant_url (str): The URL of your Qdrant server
  • qdrant_api_key (str): API key for authenticating with Qdrant
  • collection_name (str): Name of the Qdrant collection to search

Optional Parameters

  • limit (int): Maximum number of results to return (default: 3)
  • score_threshold (float): Minimum similarity score threshold (default: 0.35)
  • custom_embedding_fn (Callable[[str], list[float]]): Custom function for text vectorization

Search Parameters

The tool accepts these parameters in its schema:

  • query (str): The search query for finding similar documents
  • filter_by (str, optional): Metadata field to filter on
  • filter_value (str, optional): Value to filter by

Return Format

The tool returns results in JSON format:

[
  {
    "metadata": {
      // Any metadata stored with the document
    },
    "context": "The actual text content of the document",
    "distance": 0.95  // Similarity score
  }
]
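
Since the tool returns a JSON string, agents (or your own post-processing code) can parse it with the standard library. A small sketch, using made-up result data in the shape shown above:

```python
import json

# Illustrative result string in the tool's return format
raw = """[
  {"metadata": {"source": "doc.pdf"},
   "context": "The actual text content of the document",
   "distance": 0.95}
]"""

results = json.loads(raw)
# Pick the result with the highest similarity score
best = max(results, key=lambda r: r["distance"])
print(best["context"])
print(best["metadata"]["source"])
```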

Default Embeddings

By default, the tool uses OpenAI's text-embedding-3-small model for vectorization. This requires:

  • An OpenAI API key set in your environment: OPENAI_API_KEY

Custom Embeddings

Instead of the default embedding model, you may want to use your own embedding function when you:

  1. Want to use a different embedding model (e.g., Cohere, HuggingFace, Ollama)
  2. Need to reduce costs by using an open-source embedding model
  3. Have specific requirements for vector dimensions or embedding quality
  4. Want domain-specific embeddings (e.g., for medical or legal text)

Here's an example using a HuggingFace model:

from transformers import AutoTokenizer, AutoModel
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')
model = AutoModel.from_pretrained('sentence-transformers/all-MiniLM-L6-v2')

def custom_embeddings(text: str) -> list[float]:
    # Tokenize and run the model (no gradients needed for inference)
    inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # Use mean pooling to get a single text embedding
    embeddings = outputs.last_hidden_state.mean(dim=1)

    # Convert to a list of floats and return
    return embeddings[0].tolist()

# Use custom embeddings with the tool
tool = QdrantVectorSearchTool(
    qdrant_url="your_url",
    qdrant_api_key="your_key",
    collection_name="your_collection",
    custom_embedding_fn=custom_embeddings  # Pass your custom function
)
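
One caveat when swapping embedding models: the collection's vector size must match the model's output dimension. all-MiniLM-L6-v2 produces 384-dimensional vectors, while the earlier example created the collection with size=1536 for text-embedding-3-small. A tiny sanity-check helper (illustrative, not part of the tool) to catch the mismatch before upserting:

```python
def check_embedding_dim(embedding_fn, expected_size: int) -> bool:
    """Probe an embedding function once and check that its output
    dimension matches the collection's configured vector size."""
    return len(embedding_fn("dimension probe")) == expected_size

# Stand-in for all-MiniLM-L6-v2, which outputs 384-dim vectors
# (vs. 1536 for OpenAI's text-embedding-3-small):
fake_minilm = lambda text: [0.0] * 384
assert check_embedding_dim(fake_minilm, 384)
```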

Error Handling

The tool handles these specific errors:

  • Raises ImportError if qdrant-client is not installed (with an option to auto-install it)
  • Raises ValueError if QDRANT_URL is not set
  • Prompts you to install qdrant-client with uv add qdrant-client when it is missing

Environment Variables

Required environment variables:

export QDRANT_URL="your_qdrant_url"  # If not provided in constructor
export QDRANT_API_KEY="your_api_key"  # If not provided in constructor
export OPENAI_API_KEY="your_openai_key"  # If using default embeddings
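
A small convenience check (illustrative, not part of the tool) that reports which of these variables are still unset before kicking off the crew:

```python
import os

def missing_env_vars(names: list[str]) -> list[str]:
    """Return the names of environment variables that are unset or empty."""
    return [name for name in names if not os.getenv(name)]

# Fail fast before running the workflow, e.g.:
# missing = missing_env_vars(["QDRANT_URL", "QDRANT_API_KEY", "OPENAI_API_KEY"])
# if missing:
#     raise RuntimeError(f"Set these environment variables first: {missing}")
```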