Execution hooks give you fine-grained control over CrewAI agent runtime behavior. Unlike kickoff hooks, which run before and after crew execution, execution hooks intercept specific operations during agent execution, letting you modify behavior, enforce safety checks, and add comprehensive monitoring.

Types of Execution Hooks

CrewAI provides two main categories of execution hooks:

1. LLM Call Hooks

Control and monitor language model interactions:
  • Before LLM call: modify prompts, validate inputs, enforce approval gates
  • After LLM call: transform responses, sanitize outputs, update conversation history
Use cases:
  • Iteration limits
  • Cost tracking and token-usage monitoring (see the sketch below)
  • Response sanitization and content filtering
  • Human-in-the-loop approval for LLM calls
  • Adding safety guidelines or context
  • Debug logging and request/response inspection
See the LLM Hooks documentation →
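
A sketch of the cost-tracking use case above, using the after_llm_call decorator and the context's response field documented later on this page; the chars/4 token estimate is a rough heuristic of our own, not CrewAI's accounting:
from crewai.hooks import after_llm_call

usage = {"calls": 0, "approx_tokens": 0}

@after_llm_call
def track_usage(context):
    """Approximate usage from response length (chars/4 heuristic)."""
    usage["calls"] += 1
    if context.response:
        usage["approx_tokens"] += len(context.response) // 4
    return None  # Leave the response unchanged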

2. Tool Call Hooks

Control and monitor tool execution:
  • Before tool call: modify inputs, validate parameters, block dangerous operations
  • After tool call: transform results, sanitize outputs, log execution details
Use cases:
  • Safety guardrails for destructive operations
  • Human approval for sensitive actions
  • Input validation and sanitization
  • Result caching and rate limiting (see the sketch below)
  • Tool-usage analytics
  • Debug logging and monitoring
See the Tool Hooks documentation →
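
A minimal rate-limiting sketch, assuming the blocking semantics described in the execution flow below (returning False blocks the call); the 2-second interval is an arbitrary choice:
import time
from collections import defaultdict

from crewai.hooks import before_tool_call

_last_call = defaultdict(float)
MIN_INTERVAL = 2.0  # Seconds between calls to the same tool (arbitrary)

@before_tool_call
def rate_limit(context):
    """Block a tool call if the same tool ran too recently."""
    now = time.time()
    if now - _last_call[context.tool_name] < MIN_INTERVAL:
        print(f"⏳ Rate limited: {context.tool_name}")
        return False
    _last_call[context.tool_name] = now
    return None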

Hook Registration Methods

1. Global Hook Decorators

The cleanest, most Pythonic way to register hooks:
from crewai.hooks import before_llm_call, after_llm_call, before_tool_call, after_tool_call

@before_llm_call
def limit_iterations(context):
    """Prevent infinite loops by limiting iterations."""
    if context.iterations > 10:
        return False  # Block execution
    return None

@after_llm_call
def sanitize_response(context):
    """Remove sensitive data from LLM responses."""
    if "API_KEY" in context.response:
        return context.response.replace("API_KEY", "[REDACTED]")
    return None

@before_tool_call
def block_dangerous_tools(context):
    """Block destructive operations."""
    if context.tool_name == "delete_database":
        return False  # Block execution
    return None

@after_tool_call
def log_tool_result(context):
    """Log tool execution."""
    print(f"Tool {context.tool_name} completed")
    return None

2. Crew-Scoped Hooks

Apply hooks only to a specific crew instance:
from crewai import Crew, Process
from crewai.project import CrewBase, crew
from crewai.hooks import before_llm_call_crew, after_tool_call_crew

@CrewBase
class MyProjCrew:
    @before_llm_call_crew
    def validate_inputs(self, context):
        # Only applies to this crew
        print(f"LLM call in {self.__class__.__name__}")
        return None

    @after_tool_call_crew
    def log_results(self, context):
        # Crew-specific logging
        print(f"Tool result: {context.tool_result[:50]}...")
        return None

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential
        )

Hook Execution Flow

LLM Call Flow

Agent needs to call LLM

[Before LLM Call Hooks Execute]
    ├→ Hook 1: Validate iteration count
    ├→ Hook 2: Add safety context
    └→ Hook 3: Log request

If any hook returns False:
    ├→ Block LLM call
    └→ Raise ValueError

If all hooks return True/None:
    ├→ LLM call proceeds
    └→ Response generated

[After LLM Call Hooks Execute]
    ├→ Hook 1: Sanitize response
    ├→ Hook 2: Log response
    └→ Hook 3: Update metrics

Final response returned

Tool Call Flow

Agent needs to execute tool

[Before Tool Call Hooks Execute]
    ├→ Hook 1: Check if tool is allowed
    ├→ Hook 2: Validate inputs
    └→ Hook 3: Request approval if needed

If any hook returns False:
    ├→ Block tool execution
    └→ Return error message

If all hooks return True/None:
    ├→ Tool execution proceeds
    └→ Result generated

[After Tool Call Hooks Execute]
    ├→ Hook 1: Sanitize result
    ├→ Hook 2: Cache result
    └→ Hook 3: Log metrics

Final result returned

Hook Context Objects

LLMCallHookContext

Provides access to LLM execution state:
class LLMCallHookContext:
    executor: CrewAgentExecutor  # Full executor access
    messages: list               # Mutable message list
    agent: Agent                 # Current agent
    task: Task                   # Current task
    crew: Crew                   # Crew instance
    llm: BaseLLM                 # LLM instance
    iterations: int              # Current iteration
    response: str | None         # LLM response (after hooks)
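
A small sketch that reads these fields from a before hook (all attribute names as listed above):
from crewai.hooks import before_llm_call

@before_llm_call
def inspect_state(context):
    """Log the executor state exposed by the context object."""
    print(f"Agent: {context.agent.role}")
    print(f"Iteration: {context.iterations}")
    print(f"Messages so far: {len(context.messages)}")
    return None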

ToolCallHookContext

Provides access to tool execution state:
class ToolCallHookContext:
    tool_name: str               # Tool being called
    tool_input: dict             # Mutable input parameters
    tool: CrewStructuredTool     # Tool instance
    agent: Agent | None          # Agent executing
    task: Task | None            # Current task
    crew: Crew | None            # Crew instance
    tool_result: str | None      # Tool result (after hooks)
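
Because tool_input is documented above as mutable, a before hook can normalize parameters in place; the "path" key here is purely illustrative:
from crewai.hooks import before_tool_call

@before_tool_call
def normalize_input(context):
    """Trim whitespace from a path parameter before the tool runs."""
    path = context.tool_input.get("path")  # "path" is an illustrative key
    if isinstance(path, str):
        context.tool_input["path"] = path.strip()
    return None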

Common Patterns

Safety and Validation

@before_tool_call
def safety_check(context):
    """Block destructive operations."""
    dangerous = ['delete_file', 'drop_table', 'system_shutdown']
    if context.tool_name in dangerous:
        print(f"🛑 Blocked: {context.tool_name}")
        return False
    return None

@before_llm_call
def iteration_limit(context):
    """Prevent infinite loops."""
    if context.iterations > 15:
        print("⛔ Maximum iterations exceeded")
        return False
    return None

Human-in-the-Loop

@before_tool_call
def require_approval(context):
    """Require approval for sensitive operations."""
    sensitive = ['send_email', 'make_payment', 'post_message']

    if context.tool_name in sensitive:
        response = context.request_human_input(
            prompt=f"Approve {context.tool_name}?",
            default_message="Type 'yes' to approve:"
        )

        if response.lower() != 'yes':
            return False

    return None

Monitoring and Analytics

from collections import defaultdict
import time

metrics = defaultdict(lambda: {'count': 0, 'total_time': 0})

@before_tool_call
def start_timer(context):
    context.tool_input['_start'] = time.time()
    return None

@after_tool_call
def track_metrics(context):
    start = context.tool_input.get('_start', time.time())
    duration = time.time() - start

    metrics[context.tool_name]['count'] += 1
    metrics[context.tool_name]['total_time'] += duration

    return None

# View metrics
def print_metrics():
    for tool, data in metrics.items():
        avg = data['total_time'] / data['count']
        print(f"{tool}: {data['count']} calls, {avg:.2f}s avg")

Response Sanitization

import re

@after_llm_call
def sanitize_llm_response(context):
    """Remove sensitive data from LLM responses."""
    if not context.response:
        return None

    result = context.response
    result = re.sub(r'(api[_-]?key)["\']?\s*[:=]\s*["\']?[\w-]+',
                   r'\1: [REDACTED]', result, flags=re.IGNORECASE)
    return result

@after_tool_call
def sanitize_tool_result(context):
    """Remove sensitive data from tool results."""
    if not context.tool_result:
        return None

    result = context.tool_result
    result = re.sub(r'\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b',
                   '[EMAIL-REDACTED]', result)
    return result

Hook Management

Clearing All Hooks

from crewai.hooks import clear_all_global_hooks

# Clear all hooks at once
result = clear_all_global_hooks()
print(f"Cleared {result['total']} hooks")
# Example return value: {'llm_hooks': (2, 1), 'tool_hooks': (1, 2), 'total': (3, 3)}

Clearing Specific Hook Types

from crewai.hooks import (
    clear_before_llm_call_hooks,
    clear_after_llm_call_hooks,
    clear_before_tool_call_hooks,
    clear_after_tool_call_hooks
)

# Clear specific types
llm_before_count = clear_before_llm_call_hooks()
tool_after_count = clear_after_tool_call_hooks()

Unregistering Individual Hooks

from crewai.hooks import (
    register_before_llm_call_hook,
    unregister_before_llm_call_hook,
    unregister_after_tool_call_hook
)

def my_hook(context):
    ...

# Register
register_before_llm_call_hook(my_hook)

# Later, unregister
success = unregister_before_llm_call_hook(my_hook)
print(f"Unregistered: {success}")

Best Practices

1. Keep Hooks Focused

Each hook should have a single, clear responsibility:
# ✅ Good - focused responsibility
@before_tool_call
def validate_file_path(context):
    if context.tool_name == 'read_file':
        if '..' in context.tool_input.get('path', ''):
            return False
    return None

# ❌ Bad - too many responsibilities
@before_tool_call
def do_everything(context):
    # Validation + logging + metrics + approval...
    ...

2. Handle Errors Gracefully

@before_llm_call
def safe_hook(context):
    try:
        # Your logic (some_condition is a placeholder for your own check)
        if some_condition:
            return False
    except Exception as e:
        print(f"Hook error: {e}")
        return None  # Allow execution despite error

3. Modify Context In-Place

# ✅ Correct - modify in-place
@before_llm_call
def add_context(context):
    context.messages.append({"role": "system", "content": "Be concise"})

# ❌ Wrong - replaces reference
@before_llm_call
def wrong_approach(context):
    context.messages = [{"role": "system", "content": "Be concise"}]

4. Use Type Hints

from crewai.hooks import LLMCallHookContext, ToolCallHookContext

def my_llm_hook(context: LLMCallHookContext) -> bool | None:
    # IDE autocomplete and type checking
    return None

def my_tool_hook(context: ToolCallHookContext) -> str | None:
    return None

5. Clean Up in Tests

import pytest
from crewai.hooks import clear_all_global_hooks

@pytest.fixture(autouse=True)
def clean_hooks():
    """Reset hooks before each test."""
    yield
    clear_all_global_hooks()

When to Use Which Hook

Use LLM Hooks when:

  • Enforcing iteration limits
  • Adding context or safety guidelines to prompts
  • Tracking token usage and costs
  • Sanitizing or transforming responses
  • Implementing approval gates for LLM calls
  • Debugging prompt/response interactions

Use Tool Hooks when:

  • Blocking dangerous or destructive operations
  • Validating tool inputs before execution
  • Implementing approval gates for sensitive actions
  • Caching tool results
  • Tracking tool usage and performance
  • Sanitizing tool outputs
  • Rate limiting tool calls

Use Both when:

Building comprehensive observability, safety, or approval systems that need to monitor all agent operations, as in the sketch below.
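
A sketch of such a combined setup, feeding a single event log from both hook families (the events list and the recorded fields are illustrative choices):
from crewai.hooks import after_llm_call, after_tool_call

events = []  # Shared event log (illustrative)

@after_llm_call
def record_llm(context):
    """Record every LLM call alongside tool calls for unified auditing."""
    events.append({"type": "llm", "iteration": context.iterations})
    return None

@after_tool_call
def record_tool(context):
    """Record every tool call in the same log."""
    events.append({"type": "tool", "name": context.tool_name})
    return None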

Alternative Registration Methods

Programmatic Registration (Advanced)

For dynamic hook registration, or when you need to register hooks programmatically:
from crewai.hooks import (
    register_before_llm_call_hook,
    register_after_tool_call_hook
)

def my_hook(context):
    return None

# Register programmatically
register_before_llm_call_hook(my_hook)

# Useful for:
# - Loading hooks from configuration
# - Conditional hook registration
# - Plugin systems
Note: for most use cases, decorators are cleaner and easier to maintain.
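
As an example of conditional registration, a hook might be installed only when an environment flag is set (the ENABLE_SAFETY_HOOKS variable name is illustrative):
import os

from crewai.hooks import register_before_tool_call_hook

def safety_hook(context):
    return None  # Placeholder for your own check

# Install the hook only when the flag is set (flag name is illustrative)
if os.getenv("ENABLE_SAFETY_HOOKS") == "1":
    register_before_tool_call_hook(safety_hook)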

Performance Considerations

  1. Keep hooks fast: hooks run on every call - avoid heavy computation
  2. Cache where possible: store expensive validations or lookups (see the caching sketch below)
  3. Be selective: use crew-scoped hooks when global hooks are not needed
  4. Monitor hook overhead: profile hook execution time in production
  5. Import lazily: import heavy dependencies only when needed
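
A sketch for point 2, caching an expensive validation with functools.lru_cache so it runs once per distinct input (the policy check is a stand-in):
from functools import lru_cache

from crewai.hooks import before_tool_call

@lru_cache(maxsize=256)
def is_path_allowed(path: str) -> bool:
    """Stand-in for an expensive lookup (e.g. a policy service call)."""
    return ".." not in path

@before_tool_call
def cached_validation(context):
    """Validate file paths without repeating the expensive check."""
    if context.tool_name == "read_file":
        if not is_path_allowed(context.tool_input.get("path", "")):
            return False
    return None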

Debugging Hooks

Enable Debug Logging

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

@before_llm_call
def debug_hook(context):
    logger.debug(f"LLM call: {context.agent.role}, iteration {context.iterations}")
    return None

Hook Execution Order

Hooks execute in registration order. If a before hook returns False, subsequent hooks do not execute:
from crewai.hooks import register_before_tool_call_hook

# Register order matters!
register_before_tool_call_hook(hook1)  # Executes first
register_before_tool_call_hook(hook2)  # Executes second
register_before_tool_call_hook(hook3)  # Executes third

# If hook2 returns False:
# - hook1 executed
# - hook2 executed and returned False
# - hook3 NOT executed
# - Tool call blocked

Conclusion

Execution hooks provide powerful control over agent runtime behavior. Use them to implement safety guardrails, approval workflows, comprehensive monitoring, and custom business logic. Combined with proper error handling, type safety, and attention to performance, hooks enable production-ready, secure, and observable agent systems.