Add human-in-the-loop controls
Agents can be unreliable and may need human input to successfully accomplish tasks. Similarly, for some actions, you may want to require human approval before running, to ensure that everything runs as intended.
LangGraph's persistence layer supports human-in-the-loop workflows, allowing execution to pause and resume based on user feedback. The primary interface to this capability is the interrupt function. Calling interrupt inside a node will pause execution. Execution can then be resumed, together with new input from a human, by passing in a Command. interrupt is ergonomically similar to Python's built-in input(), with some caveats.
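Before diving in, here is a minimal, self-contained sketch of that pause/resume round trip, separate from this tutorial's chatbot. The state shape, node name, and thread_id are illustrative only:
from typing import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.types import Command, interrupt

class State(TypedDict):
    answer: str

def ask(state: State):
    # Execution pauses here; the payload is surfaced to the caller.
    reply = interrupt({"question": "Proceed?"})
    return {"answer": reply}

builder = StateGraph(State)
builder.add_node("ask", ask)
builder.add_edge(START, "ask")
builder.add_edge("ask", END)
graph = builder.compile(checkpointer=MemorySaver())

config = {"configurable": {"thread_id": "demo"}}
graph.invoke({"answer": ""}, config)         # pauses at the interrupt
graph.invoke(Command(resume="yes"), config)  # interrupt() returns "yes"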
Note
This tutorial builds on Add memory.
1. Add the human_assistance tool
Starting with the existing code from the Add memory to the chatbot tutorial, add the human_assistance tool to the chatbot. This tool uses interrupt to receive information from a human.
Let's first select a chat model:
import os
from langchain.chat_models import init_chat_model
os.environ["OPENAI_API_KEY"] = "sk-..."
llm = init_chat_model("openai:gpt-4.1")
👉 Read the OpenAI integration docs
import os
from langchain.chat_models import init_chat_model
os.environ["ANTHROPIC_API_KEY"] = "sk-..."
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
👉 Read the Anthropic integration docs
import os
from langchain.chat_models import init_chat_model
os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"
llm = init_chat_model(
"azure_openai:gpt-4.1",
azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)
👉 Read the Azure integration docs
import os
from langchain.chat_models import init_chat_model
os.environ["GOOGLE_API_KEY"] = "..."
llm = init_chat_model("google_genai:gemini-2.0-flash")
👉 Read the Google GenAI integration docs
from langchain.chat_models import init_chat_model
# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html
llm = init_chat_model(
"anthropic.claude-3-5-sonnet-20240620-v1:0",
model_provider="bedrock_converse",
)
👉 Read the AWS Bedrock integration docs
We can now incorporate it into our StateGraph with an additional tool:
from typing import Annotated
from langchain_tavily import TavilySearch
from langchain_core.tools import tool
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt
class State(TypedDict):
messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
@tool
def human_assistance(query: str) -> str:
"""Request assistance from a human."""
human_response = interrupt({"query": query})
return human_response["data"]
tool = TavilySearch(max_results=2)
tools = [tool, human_assistance]
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
message = llm_with_tools.invoke(state["messages"])
# Because we will be interrupting during tool execution,
# we disable parallel tool calling to avoid repeating any
# tool invocations when we resume.
assert len(message.tool_calls) <= 1
return {"messages": [message]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
"chatbot",
tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
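As an aside: the assert in chatbot only guards against multiple tool calls being generated. Some providers let you actually disable parallel tool calling when binding tools; for example, langchain-openai's ChatOpenAI supports this kwarg, but treat it as provider-specific rather than universal:
# Provider-specific (e.g., langchain-openai); not all chat models accept this kwarg.
llm_with_tools = llm.bind_tools(tools, parallel_tool_calls=False)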
Tip
For more information and examples of human-in-the-loop workflows, see Human-in-the-loop.
2. Compile the graph
We compile the graph with a checkpointer, exactly as before:
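memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)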
3. Visualize the graph (optional)
Visualize the graph, and you get the same layout as before, just with the added tool!
from IPython.display import Image, display
try:
display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
# This requires some extra dependencies and is optional
pass
4. Prompt the chatbot
Now, prompt the chatbot with a question that will engage the new human_assistance tool:
user_input = "I need some expert guidance for building an AI agent. Could you request assistance for me?"
config = {"configurable": {"thread_id": "1"}}
events = graph.stream(
{"messages": [{"role": "user", "content": user_input}]},
config,
stream_mode="values",
)
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================ Human Message =================================
I need some expert guidance for building an AI agent. Could you request assistance for me?
================================== Ai Message ==================================
[{'text': "Certainly! I'd be happy to request expert assistance for you regarding building an AI agent. To do this, I'll use the human_assistance function to relay your request. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_01ABUqneqnuHNuo1vhfDFQCW', 'input': {'query': 'A user is requesting expert guidance for building an AI agent. Could you please provide some expert advice or resources on this topic?'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
human_assistance (toolu_01ABUqneqnuHNuo1vhfDFQCW)
Call ID: toolu_01ABUqneqnuHNuo1vhfDFQCW
Args:
query: A user is requesting expert guidance for building an AI agent. Could you please provide some expert advice or resources on this topic?
The chatbot generated a tool call, but execution was then interrupted. If you inspect the graph state, you will see that it stopped at the tools node.
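To verify this, you can read back the checkpointed state with get_state; its next attribute lists the node(s) the graph will run when it resumes:
snapshot = graph.get_state(config)
snapshot.next  # expect: ('tools',)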
Info
Take a closer look at the human_assistance tool:
@tool
def human_assistance(query: str) -> str:
"""Request assistance from a human."""
human_response = interrupt({"query": query})
return human_response["data"]
Similar to Python's built-in input() function, calling interrupt inside the tool will pause execution. Progress is persisted based on the checkpointer; so if it is persisting with Postgres, it can resume at any time as long as the database is alive. Here, it is persisting with the in-memory checkpointer and can resume any time as long as the Python kernel is running.
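For reference, swapping in a durable backend is a small change at compile time. This sketch assumes the optional langgraph-checkpoint-postgres package and a reachable database; the connection string is a placeholder:
from langgraph.checkpoint.postgres import PostgresSaver

# Placeholder connection string; point this at your own database.
DB_URI = "postgresql://user:pass@localhost:5432/langgraph"
with PostgresSaver.from_conn_string(DB_URI) as checkpointer:
    checkpointer.setup()  # create the checkpoint tables on first use
    graph = graph_builder.compile(checkpointer=checkpointer)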
5. Resume execution
To resume execution, pass a Command object containing data expected by the tool. The format of this data can be customized as needed. For this example, use a dict with a key "data":
human_response = (
"We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
" It's much more reliable and extensible than simple autonomous agents."
)
human_command = Command(resume={"data": human_response})
events = graph.stream(human_command, config, stream_mode="values")
for event in events:
if "messages" in event:
event["messages"][-1].pretty_print()
================================== Ai Message ==================================
[{'text': "Certainly! I'd be happy to request expert assistance for you regarding building an AI agent. To do this, I'll use the human_assistance function to relay your request. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_01ABUqneqnuHNuo1vhfDFQCW', 'input': {'query': 'A user is requesting expert guidance for building an AI agent. Could you please provide some expert advice or resources on this topic?'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
human_assistance (toolu_01ABUqneqnuHNuo1vhfDFQCW)
Call ID: toolu_01ABUqneqnuHNuo1vhfDFQCW
Args:
query: A user is requesting expert guidance for building an AI agent. Could you please provide some expert advice or resources on this topic?
================================= Tool Message =================================
Name: human_assistance
We, the experts are here to help! We'd recommend you check out LangGraph to build your agent. It's much more reliable and extensible than simple autonomous agents.
================================== Ai Message ==================================
Thank you for your patience. I've received some expert advice regarding your request for guidance on building an AI agent. Here's what the experts have suggested:
The experts recommend that you look into LangGraph for building your AI agent. They mention that LangGraph is a more reliable and extensible option compared to simple autonomous agents.
LangGraph is likely a framework or library designed specifically for creating AI agents with advanced capabilities. Here are a few points to consider based on this recommendation:
1. Reliability: The experts emphasize that LangGraph is more reliable than simpler autonomous agent approaches. This could mean it has better stability, error handling, or consistent performance.
2. Extensibility: LangGraph is described as more extensible, which suggests that it probably offers a flexible architecture that allows you to easily add new features or modify existing ones as your agent's requirements evolve.
3. Advanced capabilities: Given that it's recommended over "simple autonomous agents," LangGraph likely provides more sophisticated tools and techniques for building complex AI agents.
...
2. Look for tutorials or guides specifically focused on building AI agents with LangGraph.
3. Check if there are any community forums or discussion groups where you can ask questions and get support from other developers using LangGraph.
If you'd like more specific information about LangGraph or have any questions about this recommendation, please feel free to ask, and I can request further assistance from the experts.
Our input has been received and processed as a tool message. Review this call's LangSmith trace to see the exact work that was done in the call above. Note that the state is loaded in the first step so that our chatbot can continue where it left off.
Congratulations! You've used interrupt to add human-in-the-loop execution to your chatbot, allowing for human oversight and intervention when needed. This opens up the range of UIs you can create with your AI systems. Since you have already added a checkpointer, the graph can be paused indefinitely and resumed at any time as if nothing had happened, as long as the underlying persistence layer is running.
Check out the code snippet below to review the graph from this tutorial:
import os
from langchain.chat_models import init_chat_model
os.environ["OPENAI_API_KEY"] = "sk-..."
llm = init_chat_model("openai:gpt-4.1")
👉 Read the OpenAI integration docs
import os
from langchain.chat_models import init_chat_model
os.environ["ANTHROPIC_API_KEY"] = "sk-..."
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
👉 Read the Anthropic integration docs
import os
from langchain.chat_models import init_chat_model
os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"
llm = init_chat_model(
"azure_openai:gpt-4.1",
azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)
👉 Read the Azure integration docs
import os
from langchain.chat_models import init_chat_model
os.environ["GOOGLE_API_KEY"] = "..."
llm = init_chat_model("google_genai:gemini-2.0-flash")
👉 Read the Google GenAI integration docs
from langchain.chat_models import init_chat_model
# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html
llm = init_chat_model(
"anthropic.claude-3-5-sonnet-20240620-v1:0",
model_provider="bedrock_converse",
)
👉 Read the AWS Bedrock integration docs
API 参考:TavilySearch | tool | MemorySaver | StateGraph | START | END | add_messages | ToolNode | tools_condition | Command | interrupt
from typing import Annotated
from langchain_tavily import TavilySearch
from langchain_core.tools import tool
from typing_extensions import TypedDict
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt
class State(TypedDict):
messages: Annotated[list, add_messages]
graph_builder = StateGraph(State)
@tool
def human_assistance(query: str) -> str:
"""Request assistance from a human."""
human_response = interrupt({"query": query})
return human_response["data"]
tool = TavilySearch(max_results=2)
tools = [tool, human_assistance]
llm_with_tools = llm.bind_tools(tools)
def chatbot(state: State):
message = llm_with_tools.invoke(state["messages"])
    # Because we will be interrupting during tool execution,
    # we disable parallel tool calling to avoid repeating any
    # tool invocations when we resume.
    assert len(message.tool_calls) <= 1
return {"messages": [message]}
graph_builder.add_node("chatbot", chatbot)
tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)
graph_builder.add_conditional_edges(
"chatbot",
tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)
Next steps
So far, the tutorial examples have relied on a simple state with one entry: a list of messages. You can go far with this simple state, but if you want to define complex behavior without relying on the message list, you can add additional fields to the state, as sketched below.
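As a taste of what that looks like (the extra field names here are illustrative), extending the state is just a matter of adding keys to the TypedDict; nodes can then read and return those keys like any other:
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class State(TypedDict):
    messages: Annotated[list, add_messages]
    # Illustrative extra fields; any node may read them from `state`
    # and update them by including them in its return value.
    name: str
    birthday: str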