How to add breakpoints¶
Human-in-the-loop (HIL) interactions are crucial for agentic systems. Breakpoints are a common HIL interaction pattern, allowing the graph to stop at specific steps and seek human approval before continuing (e.g., for sensitive actions).
Breakpoints are built on top of LangGraph checkpoints, which save the graph's state after each node executes. Checkpoints are saved in threads, which preserve the graph state and can be accessed after a graph has finished executing. This allows graph execution to pause at a specific point, await human approval, and then resume execution from the last checkpoint.
Setup¶
First, we need to install the required packages.
Next, we need to set an API key for Anthropic (the LLM we will use).
Optionally, we can set an API key for LangSmith tracing, which will give us best-in-class observability.
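For example, with npm (the list below covers the packages imported later in this guide; adjust for your package manager):

```shell
npm install @langchain/langgraph @langchain/core @langchain/anthropic zod tslab
```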
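For example, using the environment variable that the Anthropic integration reads by default:

```shell
export ANTHROPIC_API_KEY=your-api-key
```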
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_CALLBACKS_BACKGROUND="true"
export LANGCHAIN_API_KEY=your-api-key
Simple Usage¶
Let's look at very basic usage of this.
Below, we do two things:
1) We specify a breakpoint using interruptBefore at a specified step.
2) We set up a checkpointer to save the state of the graph.
import { StateGraph, START, END, Annotation } from "@langchain/langgraph";
import { MemorySaver } from "@langchain/langgraph";
const GraphState = Annotation.Root({
input: Annotation<string>
});
const step1 = (state: typeof GraphState.State) => {
console.log("---Step 1---");
return state;
}
const step2 = (state: typeof GraphState.State) => {
console.log("---Step 2---");
return state;
}
const step3 = (state: typeof GraphState.State) => {
console.log("---Step 3---");
return state;
}
const builder = new StateGraph(GraphState)
.addNode("step1", step1)
.addNode("step2", step2)
.addNode("step3", step3)
.addEdge(START, "step1")
.addEdge("step1", "step2")
.addEdge("step2", "step3")
.addEdge("step3", END);
// Set up memory
const graphStateMemory = new MemorySaver();
const graph = builder.compile({
checkpointer: graphStateMemory,
interruptBefore: ["step3"]
});
import * as tslab from "tslab";
const drawableGraphGraphState = graph.getGraph();
const graphStateImage = await drawableGraphGraphState.drawMermaidPng();
const graphStateArrayBuffer = await graphStateImage.arrayBuffer();
await tslab.display.png(new Uint8Array(graphStateArrayBuffer));
We create a thread ID for the checkpointer.
We run the graph until step 3, as defined by interruptBefore.
After user input / approval, we resume execution by invoking the graph with null.
// Input
const initialInput = { input: "hello world" };
// Thread
const graphStateConfig = { configurable: { thread_id: "1" }, streamMode: "values" as const };
// Run the graph until the first interruption
for await (const event of await graph.stream(initialInput, graphStateConfig)) {
console.log(`--- ${event.input} ---`);
}
// Will log when the graph is interrupted, after step 2.
console.log("---GRAPH INTERRUPTED---");
// If approved, continue the graph execution. We must pass `null` as
// the input here, or the graph will treat it as a new input and start over.
for await (const event of await graph.stream(null, graphStateConfig)) {
console.log(`--- ${event.input} ---`);
}
--- hello world ---
---Step 1---
--- hello world ---
---Step 2---
--- hello world ---
---GRAPH INTERRUPTED---
---Step 3---
--- hello world ---
Agent¶
In the context of an agent, breakpoints are useful for manually approving certain agent actions.
To show this, we will build a relatively simple ReAct-style agent that does tool calling.
We will add a breakpoint before the action node is called.
// Set up the tool
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import { StateGraph, START, END } from "@langchain/langgraph";
import { MemorySaver, Annotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { BaseMessage, AIMessage } from "@langchain/core/messages";
import { z } from "zod";
const AgentState = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: (x, y) => x.concat(y),
}),
});
const search = tool((_) => {
return "It's sunny in San Francisco, but you better look out if you're a Gemini 😈.";
}, {
name: "search",
description: "Call to surf the web.",
schema: z.string(),
})
const tools = [search]
const toolNode = new ToolNode<typeof AgentState.State>(tools)
// Set up the model
const model = new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" })
const modelWithTools = model.bindTools(tools)
// Define nodes and conditional edges
// Define the function that determines whether to continue or not
function shouldContinue(state: typeof AgentState.State): "action" | typeof END {
const lastMessage = state.messages[state.messages.length - 1];
// If there is no function call, then we finish
if (lastMessage && !(lastMessage as AIMessage).tool_calls?.length) {
return END;
}
// Otherwise if there is, we continue
return "action";
}
// Define the function that calls the model
async function callModel(state: typeof AgentState.State): Promise<Partial<typeof AgentState.State>> {
const messages = state.messages;
const response = await modelWithTools.invoke(messages);
// We return an object with a messages property, because this will get added to the existing list
return { messages: [response] };
}
// Define a new graph
const workflow = new StateGraph(AgentState)
// Define the two nodes we will cycle between
.addNode("agent", callModel)
.addNode("action", toolNode)
// We now add a conditional edge
.addConditionalEdges(
// First, we define the start node. We use `agent`.
// This means these are the edges taken after the `agent` node is called.
"agent",
// Next, we pass in the function that will determine which node is called next.
shouldContinue
)
// We now add a normal edge from `action` to `agent`.
// This means that after `action` is called, `agent` node is called next.
.addEdge("action", "agent")
// Set the entrypoint as `agent`
// This means that this node is the first one called
.addEdge(START, "agent");
// Setup memory
const memory = new MemorySaver();
// Finally, we compile it!
// This compiles it into a LangChain Runnable,
// meaning you can use it as you would any other runnable
const app = workflow.compile({
checkpointer: memory,
interruptBefore: ["action"]
});
import * as tslab from "tslab";
const drawableGraph = app.getGraph();
const image = await drawableGraph.drawMermaidPng();
const arrayBuffer = await image.arrayBuffer();
await tslab.display.png(new Uint8Array(arrayBuffer));
Interacting with the Agent¶
We can now interact with the agent.
We see that it stops before calling the tool, because interruptBefore is set before the action node.
import { HumanMessage } from "@langchain/core/messages";
// Input
const inputs = new HumanMessage("search for the weather in sf now");
// Thread
const config = { configurable: { thread_id: "3" }, streamMode: "values" as const };
for await (const event of await app.stream({
messages: [inputs]
}, config)) {
const recentMsg = event.messages[event.messages.length - 1];
console.log(`================================ ${recentMsg._getType()} Message (1) =================================`)
console.log(recentMsg.content);
}
================================ human Message (1) =================================
search for the weather in sf now
================================ ai Message (1) =================================
[
{
type: 'text',
text: "Certainly! I'll search for the current weather in San Francisco for you. Let me use the search function to find this information."
},
{
type: 'tool_use',
id: 'toolu_01R524BmxkEm7Rf5Ss53cqkM',
name: 'search',
input: { input: 'current weather in San Francisco' }
}
]
We can now invoke the agent again with no input to continue.
This will run the tool as requested.
Running an interrupted graph with null as the input means proceed as if the interruption had not occurred.
for await (const event of await app.stream(null, config)) {
const recentMsg = event.messages[event.messages.length - 1];
console.log(`================================ ${recentMsg._getType()} Message (1) =================================`)
console.log(recentMsg.content);
}
================================ tool Message (1) =================================
It's sunny in San Francisco, but you better look out if you're a Gemini 😈.
================================ ai Message (1) =================================
Based on the search results, I can provide you with information about the current weather in San Francisco:
The weather in San Francisco is currently sunny. This means it's a clear day with plenty of sunshine, which is great for outdoor activities or simply enjoying the city.
However, I should note that the search result included an unusual comment about Geminis. This appears to be unrelated to the weather and might be a quirk of the search engine or a reference to something else entirely. For accurate and detailed weather information, it would be best to check a reliable weather service or website.
Is there anything else you'd like to know about the weather in San Francisco or any other location?