How to wait for user input using interrupt¶

Human-in-the-loop (HIL) interactions are crucial for agentic systems. Waiting for user input is a common HIL interaction pattern: it allows the agent to ask the user a clarifying question and wait for input before proceeding. We can implement this in LangGraph using the interrupt() function. interrupt allows us to stop graph execution to collect input from the user, and then continue execution with the collected input.
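At a glance, the pattern looks like this (a minimal sketch; the node name ask_user and the user_input state key are illustrative names, not part of the API):

from langgraph.types import Command, interrupt


def ask_user(state):
    # Pause the graph run here and surface this prompt to the caller
    answer = interrupt("What would you like to do?")
    # When the run is resumed, `answer` holds the value passed via Command(resume=...)
    return {"user_input": answer}


# The caller later resumes the paused run, for example:
# graph.stream(Command(resume="proceed"), thread_config)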
Setup¶
First, we need to install the required packages.
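In a notebook, the install typically looks like the following (the exact package list is an assumption based on the imports used later in this guide):

%%capture --no-stderr
%pip install --quiet -U langgraph langchain_anthropic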
Next, we need to set API keys for Anthropic and/or OpenAI (the LLMs we will use).
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")
Set up LangSmith for LangGraph development

Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph; read more about how to get started here.
Simple Usage¶
Let's explore a basic example of using human feedback. A straightforward approach is to create a node, human_feedback, designed specifically to collect user input. This lets us gather feedback at a specific, chosen point in our graph.
Steps:

- Call interrupt() inside the human_feedback node.
- Set up a checkpointer to save the graph's state up to this node.
- Use Command(resume=...) to provide the requested value to the human_feedback node and resume execution.
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.types import Command, interrupt
from langgraph.checkpoint.memory import MemorySaver
from IPython.display import Image, display


class State(TypedDict):
    input: str
    user_feedback: str


def step_1(state):
    print("---Step 1---")
    pass


def human_feedback(state):
    print("---human_feedback---")
    feedback = interrupt("Please provide feedback:")
    return {"user_feedback": feedback}


def step_3(state):
    print("---Step 3---")
    pass


builder = StateGraph(State)
builder.add_node("step_1", step_1)
builder.add_node("human_feedback", human_feedback)
builder.add_node("step_3", step_3)
builder.add_edge(START, "step_1")
builder.add_edge("step_1", "human_feedback")
builder.add_edge("human_feedback", "step_3")
builder.add_edge("step_3", END)

# Set up memory
memory = MemorySaver()

# Add the checkpointer when compiling
graph = builder.compile(checkpointer=memory)

# View the graph
display(Image(graph.get_graph().draw_mermaid_png()))
API Reference: StateGraph | START | END | Command | interrupt | MemorySaver
Run until our breakpoint at human_feedback:
# Input
initial_input = {"input": "hello world"}

# Thread
thread = {"configurable": {"thread_id": "1"}}

# Run the graph until the first interruption
for event in graph.stream(initial_input, thread, stream_mode="updates"):
    print(event)
    print("\n")
---Step 1---
{'step_1': None}
---human_feedback---
{'__interrupt__': (Interrupt(value='Please provide feedback:', resumable=True, ns=['human_feedback:e9a51d27-22ed-8c01-3f17-0ed33209b554'], when='during'),)}
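Before resuming, we can optionally verify that the graph is paused and waiting on the human_feedback node. This is a quick sanity check using LangGraph's checkpointer-backed state API:

# Inspect the saved state for this thread; `next` lists the node(s) awaiting execution
state = graph.get_state(thread)
print(state.next)  # expected: ('human_feedback',)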
# Continue the graph execution
for event in graph.stream(
    Command(resume="go to step 3!"), thread, stream_mode="updates"
):
    print(event)
    print("\n")
---human_feedback---
{'human_feedback': {'user_feedback': 'go to step 3!'}}
---Step 3---
{'step_3': None}
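Once the run completes, the collected feedback lives in the graph state, and we can read it back from the thread's latest checkpoint:

print(graph.get_state(thread).values["user_feedback"])  # 'go to step 3!'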
Agent¶
In the context of agents, waiting for user feedback is especially useful for asking clarifying questions. To illustrate this, we'll create a simple ReAct-style agent capable of tool calling.

For this example, we'll use Anthropic's chat model along with a mock tool (purely for demonstration purposes).
Using Pydantic with LangChain

This notebook uses Pydantic v2 BaseModel, which requires langchain-core >= 0.3. Using langchain-core < 0.3 will result in errors due to mixing of Pydantic v1 and v2 BaseModels.
# Set up the state
from langgraph.graph import MessagesState, START

# Set up the tool
# We will have one real tool - a search tool
# We'll also have one "fake" tool - an "ask_human" tool
# Here we define any ACTUAL tools
from langchain_core.tools import tool
from langgraph.prebuilt import ToolNode


@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder for the actual implementation
    # Don't let the LLM know this though 😊
    return f"I looked up: {query}. Result: It's sunny in San Francisco, but you better look out if you're a Gemini 😈."


tools = [search]
tool_node = ToolNode(tools)

# Set up the model
from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-latest")

from pydantic import BaseModel


# We are going to "bind" all tools to the model
# We have the ACTUAL tools from above, but we also need a mock tool to ask a human
# Since `bind_tools` takes in tools but also just tool definitions,
# we can define a tool definition for `ask_human`
class AskHuman(BaseModel):
    """Ask the human a question"""

    question: str


model = model.bind_tools(tools + [AskHuman])
# Define nodes and conditional edges


# Define the function that determines whether to continue or not
def should_continue(state):
    messages = state["messages"]
    last_message = messages[-1]
    # If there is no tool call, then we finish
    if not last_message.tool_calls:
        return END
    # If the tool call is asking the human, we route to that node
    # You could also add logic here to notify some system that human input is needed
    # For example, send a Slack message, etc.
    elif last_message.tool_calls[0]["name"] == "AskHuman":
        return "ask_human"
    # Otherwise, we continue and execute the tool call
    else:
        return "action"


# Define the function that calls the model
def call_model(state):
    messages = state["messages"]
    response = model.invoke(messages)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}


# We define a fake node to ask the human
def ask_human(state):
    tool_call_id = state["messages"][-1].tool_calls[0]["id"]
    location = interrupt("Please provide your location:")
    tool_message = [{"tool_call_id": tool_call_id, "type": "tool", "content": location}]
    return {"messages": tool_message}
# Build the graph
from langgraph.graph import END, StateGraph

# Define a new graph
workflow = StateGraph(MessagesState)

# Define the three nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
workflow.add_node("ask_human", ask_human)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
)

# We now add a normal edge from `action` to `agent`.
# This means that after `action` is called, the `agent` node is called next.
workflow.add_edge("action", "agent")

# After we get back the human response, we go back to the agent
workflow.add_edge("ask_human", "agent")

# Set up memory
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()

# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
# Note: no breakpoint is needed before `ask_human`; the interrupt() call
# inside that node pauses execution for us
app = workflow.compile(checkpointer=memory)

display(Image(app.get_graph().draw_mermaid_png()))
API Reference: START | tool | ToolNode | ChatAnthropic | END | StateGraph | MemorySaver
Interacting with the Agent¶

We can now interact with the agent. Let's ask it to ask the user where they are, and then tell them the weather. This should make it first use the ask_human tool, then use the normal search tool.
config = {"configurable": {"thread_id": "2"}}

for event in app.stream(
    {
        "messages": [
            (
                "user",
                "Use the search tool to ask the user where they are, then look up the weather there",
            )
        ]
    },
    config,
    stream_mode="values",
):
    event["messages"][-1].pretty_print()
================================ Human Message =================================
Use the search tool to ask the user where they are, then look up the weather there
================================== Ai Message ==================================
[{'text': "I'll help you with that. Let me first ask the user about their location.", 'type': 'text'}, {'id': 'toolu_01KNvb7RCVu8yKYUuQQSKN1x', 'input': {'question': 'Where are you located?'}, 'name': 'AskHuman', 'type': 'tool_use'}]
Tool Calls:
AskHuman (toolu_01KNvb7RCVu8yKYUuQQSKN1x)
Call ID: toolu_01KNvb7RCVu8yKYUuQQSKN1x
Args:
question: Where are you located?
You can see that our graph got interrupted inside the ask_human node, which is now waiting for a location to be provided. We can supply this value by invoking the graph with a Command(resume="<location>") input:
for event in app.stream(Command(resume="san francisco"), config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================== Ai Message ==================================
[{'text': "I'll help you with that. Let me first ask the user about their location.", 'type': 'text'}, {'id': 'toolu_01KNvb7RCVu8yKYUuQQSKN1x', 'input': {'question': 'Where are you located?'}, 'name': 'AskHuman', 'type': 'tool_use'}]
Tool Calls:
AskHuman (toolu_01KNvb7RCVu8yKYUuQQSKN1x)
Call ID: toolu_01KNvb7RCVu8yKYUuQQSKN1x
Args:
question: Where are you located?
================================= Tool Message =================================
san francisco
================================== Ai Message ==================================
[{'text': "Now I'll search for the weather in San Francisco.", 'type': 'text'}, {'id': 'toolu_01Y5C4rU9WcxBqFLYSMGjV1F', 'input': {'query': 'current weather in san francisco'}, 'name': 'search', 'type': 'tool_use'}]
Tool Calls:
search (toolu_01Y5C4rU9WcxBqFLYSMGjV1F)
Call ID: toolu_01Y5C4rU9WcxBqFLYSMGjV1F
Args:
query: current weather in san francisco
================================= Tool Message =================================
Name: search
I looked up: current weather in san francisco. Result: It's sunny in San Francisco, but you better look out if you're a Gemini 😈.
================================== Ai Message ==================================
Based on the search results, it's currently sunny in San Francisco. Note that this is the current weather at the time of our conversation, and conditions can change throughout the day.
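If you want to confirm that nothing is left pending on this thread, you can inspect its state with the same get_state API shown earlier; an empty next tuple means the run has fully completed:

snapshot = app.get_state(config)
print(snapshot.next)  # expected: () once the run has finished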