How to review tool calls (Functional API)¶
This guide demonstrates how to implement human-in-the-loop workflows in a ReAct agent using LangGraph's Functional API.
We will build on the agent created in the How to create a ReAct agent (Functional API) guide.
Specifically, we will demonstrate how to review tool calls generated by a chat model prior to their execution. This can be accomplished through use of the interrupt function at key points in our application.
Preview:
We will implement a simple function that reviews tool calls generated by our chat model, and call it from inside our application's entrypoint:
```python
def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call, returning a validated version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":
        return tool_call
    elif review_action == "update":
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
```
Setup¶
First, let's install the required packages and set our API keys:
```python
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
```
Set up LangSmith for better debugging
Sign up for LangSmith to quickly spot problems and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. Read more about how to get started in the docs.
Define model and tools¶
Let's first define the tools and model we will use for our example. As in the ReAct agent guide, we will use a single placeholder tool that gets a description of the weather for a location.
We will use an OpenAI chat model for this example, but any model supporting tool-calling will suffice.
API Reference: ChatOpenAI | tool
```python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

model = ChatOpenAI(model="gpt-4o-mini")


@tool
def get_weather(location: str):
    """Call to get the weather from a specific location."""
    # This is a placeholder for the actual implementation
    if any([city in location.lower() for city in ["sf", "san francisco"]]):
        return "It's sunny!"
    elif "boston" in location.lower():
        return "It's rainy!"
    else:
        return f"I am not sure what the weather is in {location}"


tools = [get_weather]
```
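Because get_weather is an ordinary function underneath the @tool wrapper, its matching logic is easy to sanity-check in isolation. Below is the same branching reproduced as plain Python, without the decorator; this is a standalone sketch for illustration, not part of the guide's agent code:

```python
def describe_weather(location: str) -> str:
    # Same placeholder branching as get_weather, minus the @tool wrapper
    if any(city in location.lower() for city in ["sf", "san francisco"]):
        return "It's sunny!"
    elif "boston" in location.lower():
        return "It's rainy!"
    return f"I am not sure what the weather is in {location}"


print(describe_weather("San Francisco, CA"))  # matching is case-insensitive
print(describe_weather("Boston"))
print(describe_weather("Paris"))
```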
Define tasks¶
Our tasks are unchanged from the ReAct agent guide:
- Call model: We want to query our chat model with a list of messages.
- Call tool: If our model generates tool calls, we want to execute them.
API Reference: ToolCall | ToolMessage | entrypoint | task
```python
from langchain_core.messages import ToolCall, ToolMessage
from langgraph.func import entrypoint, task

tools_by_name = {tool.name: tool for tool in tools}


@task
def call_model(messages):
    """Call model with a sequence of messages."""
    response = model.bind_tools(tools).invoke(messages)
    return response


@task
def call_tool(tool_call):
    tool = tools_by_name[tool_call["name"]]
    observation = tool.invoke(tool_call["args"])
    return ToolMessage(content=observation, tool_call_id=tool_call["id"])
```
Define entrypoint¶
To review tool calls before they are executed, we add a review_tool_call function that calls interrupt. When this function is called, execution will be paused until we issue a command to resume it.
Given a tool call, our function will interrupt for human review. At that point we can either:
- Accept the tool call;
- Revise the tool call and continue;
- Generate a custom ToolMessage (e.g., instructing the model to re-format its tool call).
We will demonstrate all three in the usage examples below.
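Concretely, these three options correspond to three shapes of the payload we will later pass back in via Command(resume=...). The dictionaries below are plain-Python illustrations of those shapes; the "action" and "data" keys are the ones our review_tool_call function reads, and the example values are placeholders:

```python
# The three resume payloads that review_tool_call understands:
accept = {"action": "continue"}  # execute the tool call as-is
revise = {"action": "update", "data": {"location": "SF, CA"}}  # replace the call's args
feedback = {"action": "feedback", "data": "Please format as <City>, <State>."}  # custom ToolMessage

for payload in (accept, revise, feedback):
    # review_tool_call reads "action", and "data" for the latter two
    print(payload["action"], payload.get("data"))
```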
```python
from typing import Union


def review_tool_call(tool_call: ToolCall) -> Union[ToolCall, ToolMessage]:
    """Review a tool call, returning a validated version."""
    human_review = interrupt(
        {
            "question": "Is this correct?",
            "tool_call": tool_call,
        }
    )
    review_action = human_review["action"]
    review_data = human_review.get("data")
    if review_action == "continue":
        return tool_call
    elif review_action == "update":
        updated_tool_call = {**tool_call, **{"args": review_data}}
        return updated_tool_call
    elif review_action == "feedback":
        return ToolMessage(
            content=review_data, name=tool_call["name"], tool_call_id=tool_call["id"]
        )
```
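The "update" branch relies on plain dict merging: the reviewed args overwrite the model's args, while the call's name, id, and type are preserved. A quick standalone check of that merge (the tool-call dict and its id below are made up for illustration):

```python
tool_call = {
    "name": "get_weather",
    "args": {"location": "san francisco"},
    "id": "call_123",  # hypothetical ID, for illustration only
    "type": "tool_call",
}
review_data = {"location": "SF, CA"}

# Same merge as the "update" branch of review_tool_call
updated_tool_call = {**tool_call, **{"args": review_data}}

print(updated_tool_call["args"])  # the reviewed args win
print(updated_tool_call["id"])    # name/id/type are untouched
```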
We can now update our entrypoint to review the generated tool calls. If a tool call is accepted or revised, we execute it as before. Otherwise, we just append the ToolMessage supplied by the human.
Tip
The results of prior tasks (in this case the initial model call) are persisted, so they will not be run again following the interrupt.
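Conceptually, this behaves like memoizing each task's result in the checkpoint: on resume, saved results are replayed instead of re-executed. The cache below is a rough plain-Python analogy of that idea, not LangGraph's actual implementation:

```python
call_count = 0
_saved_results: dict[str, str] = {}  # stands in for the checkpointer's saved task results


def call_model_once(key: str) -> str:
    global call_count
    if key in _saved_results:  # resumed run: replay the saved result
        return _saved_results[key]
    call_count += 1            # first run: actually execute the task
    _saved_results[key] = f"response for {key}"
    return _saved_results[key]


first = call_model_once("initial model call")   # executes
second = call_model_once("initial model call")  # replayed, as after a resume
print(first == second, call_count)  # True 1
```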
API Reference: MemorySaver | add_messages | Command | interrupt
```python
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph.message import add_messages
from langgraph.types import Command, interrupt

checkpointer = MemorySaver()


@entrypoint(checkpointer=checkpointer)
def agent(messages, previous):
    if previous is not None:
        messages = add_messages(previous, messages)

    llm_response = call_model(messages).result()
    while True:
        if not llm_response.tool_calls:
            break

        # Review tool calls
        tool_results = []
        tool_calls = []
        for i, tool_call in enumerate(llm_response.tool_calls):
            review = review_tool_call(tool_call)
            if isinstance(review, ToolMessage):
                tool_results.append(review)
            else:  # is a validated tool call
                tool_calls.append(review)
                if review != tool_call:
                    llm_response.tool_calls[i] = review  # update message

        # Execute remaining tool calls
        tool_result_futures = [call_tool(tool_call) for tool_call in tool_calls]
        remaining_tool_results = [fut.result() for fut in tool_result_futures]

        # Append to message list
        messages = add_messages(
            messages,
            [llm_response, *tool_results, *remaining_tool_results],
        )

        # Call model again
        llm_response = call_model(messages).result()

    # Generate final response
    messages = add_messages(messages, llm_response)
    return entrypoint.final(value=llm_response, save=messages)
```
Usage¶
Let's demonstrate some scenarios.
```python
def _print_step(step: dict) -> None:
    for task_name, result in step.items():
        if task_name == "agent":
            continue  # just stream from tasks
        print(f"\n{task_name}:")
        if task_name in ("__interrupt__", "review_tool_call"):
            print(result)
        else:
            result.pretty_print()
```
Accept a tool call¶
To accept a tool call, we just indicate in the data we provide in the Command that the tool call should pass through.
```python
# The checkpointer requires a thread ID to track the conversation state
config = {"configurable": {"thread_id": "1"}}

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
```
```
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_Bh5cSwMqCpCxTjx7AjdrQTPd)
 Call ID: call_Bh5cSwMqCpCxTjx7AjdrQTPd
  Args:
    location: San Francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'call_Bh5cSwMqCpCxTjx7AjdrQTPd', 'type': 'tool_call'}}, resumable=True, ns=['agent:22fcc9cd-3573-b39b-eea7-272a025903e2'], when='during'),)
```
```python
human_input = Command(resume={"action": "continue"})

for step in agent.stream(human_input, config):
    _print_step(step)
```
```
call_tool:
================================= Tool Message =================================

It's sunny!

call_model:
================================== Ai Message ==================================

The weather in San Francisco is sunny!
```
Revise a tool call¶
To revise a tool call, we can supply updated arguments.
```python
config = {"configurable": {"thread_id": "2"}}  # use a fresh thread for this scenario

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
```
```
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_b9h8e18FqH0IQm3NMoeYKz6N)
 Call ID: call_b9h8e18FqH0IQm3NMoeYKz6N
  Args:
    location: san francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'san francisco'}, 'id': 'call_b9h8e18FqH0IQm3NMoeYKz6N', 'type': 'tool_call'}}, resumable=True, ns=['agent:9559a81d-5720-dc19-a457-457bac7bdd83'], when='during'),)
```
```python
human_input = Command(resume={"action": "update", "data": {"location": "SF, CA"}})

for step in agent.stream(human_input, config):
    _print_step(step)
```
```
call_tool:
================================= Tool Message =================================

It's sunny!

call_model:
================================== Ai Message ==================================

The weather in San Francisco is sunny!
```
Generate a custom ToolMessage¶
To generate a custom ToolMessage, we supply the content of the message. In this case we will ask the model to re-format its tool call.
```python
config = {"configurable": {"thread_id": "3"}}  # use a fresh thread for this scenario

user_message = {"role": "user", "content": "What's the weather in san francisco?"}
print(user_message)

for step in agent.stream([user_message], config):
    _print_step(step)
```
```
{'role': 'user', 'content': "What's the weather in san francisco?"}

call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_VqGjKE7uu8HdWs9XuY1kMV18)
 Call ID: call_VqGjKE7uu8HdWs9XuY1kMV18
  Args:
    location: San Francisco

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco'}, 'id': 'call_VqGjKE7uu8HdWs9XuY1kMV18', 'type': 'tool_call'}}, resumable=True, ns=['agent:4b3b372b-9da3-70be-5c68-3d9317346070'], when='during'),)
```
```python
human_input = Command(
    resume={
        "action": "feedback",
        "data": "Please format as <City>, <State>.",
    },
)

for step in agent.stream(human_input, config):
    _print_step(step)
```
```
call_model:
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_xoXkK8Cz0zIpvWs78qnXpvYp)
 Call ID: call_xoXkK8Cz0zIpvWs78qnXpvYp
  Args:
    location: San Francisco, CA

__interrupt__:
(Interrupt(value={'question': 'Is this correct?', 'tool_call': {'name': 'get_weather', 'args': {'location': 'San Francisco, CA'}, 'id': 'call_xoXkK8Cz0zIpvWs78qnXpvYp', 'type': 'tool_call'}}, resumable=True, ns=['agent:4b3b372b-9da3-70be-5c68-3d9317346070'], when='during'),)
```
```python
human_input = Command(resume={"action": "continue"})

for step in agent.stream(human_input, config):
    _print_step(step)
```
```
call_tool:
================================= Tool Message =================================

It's sunny!

call_model:
================================== Ai Message ==================================

The weather in San Francisco, CA is sunny!
```