
How to add static breakpoints

Prerequisites

This guide assumes familiarity with the following concepts

Human-in-the-loop (HIL) interactions are crucial for agentic systems. Breakpoints are a common HIL interaction pattern: they allow the graph to stop at specific steps and seek human approval before continuing (e.g., for sensitive actions).

Breakpoints are built on top of LangGraph checkpoints, which save the graph's state after each node execution. Checkpoints are saved in threads, which preserve the graph's state and can be accessed after a run completes. This allows graph execution to pause at specific points, await human approval, and then resume execution from the last checkpoint.
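The pause-and-resume mechanics described above can be sketched in plain Python. This is a conceptual illustration only; the names below are not LangGraph APIs.

```python
# Conceptual sketch of checkpoint-based pause/resume (illustrative names
# only; this is not the LangGraph implementation).

thread = {"checkpoints": []}  # a thread keeps every checkpoint of the run

def save_checkpoint(thread, state):
    """Snapshot the graph state after a node finishes executing."""
    thread["checkpoints"].append(dict(state))

def latest_state(thread):
    """Resuming a paused run starts from the most recent checkpoint."""
    return dict(thread["checkpoints"][-1])

state = {"messages": ["what's the weather in sf"]}
save_checkpoint(thread, state)      # saved after a node runs
# ... the run pauses here, waiting for human approval ...
resumed = latest_state(thread)      # approval granted: resume from here
```

Because every step is checkpointed, nothing is lost while the run waits: the resumed state is exactly the state that was saved.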

Setup

Graph code

In this how-to guide we use a simple ReAct-style hosted graph (you can see the full code defining it here). The important thing is that the graph has two nodes (one named agent, which calls the LLM, and one named action, which calls the tool), as well as a routing function from agent that decides whether to call action next or to simply end the run (the action node always calls the agent node after it executes).
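The routing decision described above can be sketched as a plain function over the graph state. The state shape below is an assumption for illustration; the hosted graph's actual schema may differ.

```python
# Sketch of the router that runs after the "agent" node (illustrative;
# the real graph's state schema may differ).

END = "__end__"  # sentinel for terminating the run

def route_after_agent(state: dict) -> str:
    """Go to the 'action' node if the last AI message requested a
    tool call; otherwise end the run."""
    last_message = state["messages"][-1]
    if last_message.get("tool_calls"):
        return "action"   # execute the requested tool, then back to "agent"
    return END

# An AI message with a pending tool call routes to the tool node:
state = {"messages": [{"type": "ai", "tool_calls": [{"name": "tavily_search_results_json"}]}]}
assert route_after_agent(state) == "action"
```

A breakpoint before action therefore pauses the run exactly at the moment a tool call has been proposed but not yet executed.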

SDK initialization

Python

from langgraph_sdk import get_client

client = get_client(url=<DEPLOYMENT_URL>)
# Using the graph deployed with the name "agent"
assistant_id = "agent"
thread = await client.threads.create()

Javascript

import { Client } from "@langchain/langgraph-sdk";

const client = new Client({ apiUrl: <DEPLOYMENT_URL> });
// Using the graph deployed with the name "agent"
const assistantId = "agent";
const thread = await client.threads.create();

CURL

curl --request POST \
  --url <DEPLOYMENT_URL>/threads \
  --header 'Content-Type: application/json' \
  --data '{}'

Adding a breakpoint

Now we want to add a breakpoint to our graph run, which we will do before a tool is called. We can do this by adding interrupt_before=["action"], which tells us to interrupt before calling the action node. We can do this either when compiling the graph or when kicking off a run. Here we will do it when kicking off a run; if you would like to do it at compile time, you will need to edit the python file where your graph is defined and add the interrupt_before parameter when you call .compile.
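The effect of interrupt_before can be sketched in plain Python (illustrative names only, not LangGraph internals): execution stops just before a listed node, leaving the proposed step visible but not yet taken.

```python
# Illustrative sketch of interrupt_before semantics (not LangGraph internals).

def run(state, steps, interrupt_before=()):
    """Execute steps in order, stopping *before* any step whose name is
    listed in interrupt_before, so a human can inspect and approve."""
    for name, fn in steps:
        if name in interrupt_before:
            return {"status": "interrupted", "next": name, "state": state}
        state = fn(state)
    return {"status": "done", "state": state}

steps = [
    ("agent", lambda s: s + ["ai: call tool"]),
    ("action", lambda s: s + ["tool: result"]),
]
result = run([], steps, interrupt_before=["action"])
# The run pauses with the tool call proposed but not yet executed.
assert result == {"status": "interrupted", "next": "action", "state": ["ai: call tool"]}
```

The compile-time variant mentioned above would, assuming your graph builder is named workflow, look like graph = workflow.compile(interrupt_before=["action"]) in the file defining the graph.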

Having already connected to our hosted LangGraph instance through the SDK above, let's now kick off a run that interrupts before the tool node

Python

input = {"messages": [{"role": "user", "content": "what's the weather in sf"}]}

async for chunk in client.runs.stream(
    thread["thread_id"],
    assistant_id,
    input=input,
    stream_mode="updates",
    interrupt_before=["action"],
):
    print(f"Receiving new event of type: {chunk.event}...")
    print(chunk.data)
    print("\n\n")

Javascript

const input = { messages: [{ role: "human", content: "what's the weather in sf" }] };

const streamResponse = client.runs.stream(
  thread["thread_id"],
  assistantId,
  {
    input: input,
    streamMode: "updates",
    interruptBefore: ["action"]
  }
);

for await (const chunk of streamResponse) {
  console.log(`Receiving new event of type: ${chunk.event}...`);
  console.log(chunk.data);
  console.log("\n\n");
}

CURL

curl --request POST \
 --url <DEPLOYMENT_URL>/threads/<THREAD_ID>/runs/stream \
 --header 'Content-Type: application/json' \
 --data "{
   \"assistant_id\": \"agent\",
   \"input\": {\"messages\": [{\"role\": \"human\", \"content\": \"what's the weather in sf\"}]},
   \"interrupt_before\": [\"action\"],
   \"stream_mode\": [
     \"messages\"
   ]
 }" | \
 sed 's/\r$//' | \
 awk '
 /^event:/ {
     if (data_content != "") {
         print data_content "\n"
     }
     sub(/^event: /, "Receiving event of type: ", $0)
     printf "%s...\n", $0
     data_content = ""
 }
 /^data:/ {
     sub(/^data: /, "", $0)
     data_content = $0
 }
 END {
     if (data_content != "") {
         print data_content "\n"
     }
 }
 '

Output

Receiving new event of type: metadata...
{'run_id': '3b77ef83-687a-4840-8858-0371f91a92c3'}



Receiving new event of type: data...
{'agent': {'messages': [{'content': [{'id': 'toolu_01HwZqM1ptX6E15A5LAmyZTB', 'input': {'query': 'weather in san francisco'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}], 'additional_kwargs': {}, 'response_metadata': {}, 'type': 'ai', 'name': None, 'id': 'run-e5d17791-4d37-4ad2-815f-a0c4cba62585', 'example': False, 'tool_calls': [{'name': 'tavily_search_results_json', 'args': {'query': 'weather in san francisco'}, 'id': 'toolu_01HwZqM1ptX6E15A5LAmyZTB'}], 'invalid_tool_calls': []}]}}



Receiving new event of type: end...
None