How to wait for user input (Functional API)¶
Human-in-the-loop (HIL) interactions are crucial for agentic systems. Waiting for human input is a common HIL interaction pattern: it allows the agent to ask the user clarifying questions and await their input before proceeding.
We can implement this in LangGraph using the interrupt() function. interrupt allows us to stop graph execution to collect input from a user, and then continue execution with the collected input.
This guide demonstrates how to implement human-in-the-loop workflows using LangGraph's Functional API. Specifically, we will demonstrate:
1. A simple usage example
2. How to extend a ReAct agent with human assistance
Setup¶
Note
This guide requires @langchain/langgraph>=0.2.42.
First, install the dependencies required for this example.
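For example, with npm (the package list is inferred from the imports used in this guide):

npm install @langchain/langgraph @langchain/openai @langchain/core zod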
Next, we need to set the API key for OpenAI, the LLM we will use.
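One way to do this, assuming a Node.js environment (replace the placeholder with your own key):

// Set the OpenAI API key for this session (placeholder value)
process.env.OPENAI_API_KEY = "your-api-key";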
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. Read more about how to get started here.
Simple usage¶
Let's demonstrate a simple usage example. We will create three tasks:
1. Append "bar".
2. Pause for human input. On resume, append the human input.
3. Append "qux".
import { task, interrupt } from "@langchain/langgraph";

const step1 = task("step1", async (inputQuery: string) => {
  return `${inputQuery} bar`;
});

const humanFeedback = task(
  "humanFeedback",
  async (inputQuery: string) => {
    const feedback = interrupt(`Please provide feedback: ${inputQuery}`);
    return `${inputQuery} ${feedback}`;
  }
);

const step3 = task("step3", async (inputQuery: string) => {
  return `${inputQuery} qux`;
});
We can now compose these tasks in a simple entrypoint:
import { MemorySaver, entrypoint } from "@langchain/langgraph";

const checkpointer = new MemorySaver();

const graph = entrypoint({
  name: "graph",
  checkpointer,
}, async (inputQuery: string) => {
  const result1 = await step1(inputQuery);
  const result2 = await humanFeedback(result1);
  const result3 = await step3(result2);
  return result3;
});
All we have done to enable the human-in-the-loop workflow is call interrupt() inside a task.
Tip
The results of prior tasks (in this case, step1) are persisted, so that they are not run again following the interrupt.
Let's send in a query string:
const config = {
  configurable: {
    thread_id: "1",
  },
};

const stream = await graph.stream("foo", config);

for await (const event of stream) {
  console.log(event);
}
{ step1: 'foo bar' }
{
  __interrupt__: [
    {
      value: 'Please provide feedback: foo bar',
      when: 'during',
      resumable: true,
      ns: [Array]
    }
  ]
}
Note that we've paused with an interrupt after step1. The interrupt provides instructions for resuming the run. To resume, we issue a Command containing the data expected by the humanFeedback task.
import { Command } from "@langchain/langgraph";

const resumeStream = await graph.stream(new Command({
  resume: "baz"
}), config);

// Continue execution; skip any tasks whose cached results are replayed
for await (const event of resumeStream) {
  if (event.__metadata__?.cached) {
    continue;
  }
  console.log(event);
}
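Resuming injects "baz" as the return value of the interrupt call, so humanFeedback completes and the remaining tasks run. Given the task definitions above, we would expect the stream to emit something like the following (the final entry is the entrypoint's return value):

{ humanFeedback: 'foo bar baz' }
{ step3: 'foo bar baz qux' }
{ graph: 'foo bar baz qux' }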
Agent¶
We will build on the agent created in the How to create a ReAct agent (Functional API) guide.
Here, we will extend that agent so that it can reach out to a human for help when needed.
Define model and tools¶
Let's first define the tools and model we will use for this example. As in the ReAct agent guide, we will use a single placeholder tool that returns a description of the weather for a location.
This example uses an OpenAI chat model, but any model supporting tool calling will suffice.
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
});

const getWeather = tool(async ({ location }) => {
  // This is a placeholder for the actual implementation
  const lowercaseLocation = location.toLowerCase();
  if (lowercaseLocation.includes("sf") || lowercaseLocation.includes("san francisco")) {
    return "It's sunny!";
  } else if (lowercaseLocation.includes("boston")) {
    return "It's rainy!";
  } else {
    return `I am not sure what the weather is in ${location}`;
  }
}, {
  name: "getWeather",
  schema: z.object({
    location: z.string().describe("Location to get the weather for"),
  }),
  description: "Call to get the weather from a specific location.",
});
To reach out to a human for assistance, we can simply add a tool that calls interrupt:
import { interrupt } from "@langchain/langgraph";
import { z } from "zod";

const humanAssistance = tool(async ({ query }) => {
  const humanResponse = interrupt({ query });
  return humanResponse.data;
}, {
  name: "humanAssistance",
  description: "Request assistance from a human.",
  schema: z.object({
    query: z.string().describe("Human readable question for the human"),
  }),
});
const tools = [getWeather, humanAssistance];
Define tasks¶
Our tasks are otherwise unchanged from the ReAct agent guide:
1. Call model: we want to query our chat model with a list of messages.
2. Call tool: if our model generates tool calls, we want to execute them.
We simply have one more tool accessible to the model.
import {
  type BaseMessageLike,
  ToolMessage,
} from "@langchain/core/messages";
import { type ToolCall } from "@langchain/core/messages/tool";
import { task } from "@langchain/langgraph";

const toolsByName = Object.fromEntries(tools.map((tool) => [tool.name, tool]));

const callModel = task("callModel", async (messages: BaseMessageLike[]) => {
  const response = await model.bindTools(tools).invoke(messages);
  return response;
});

const callTool = task(
  "callTool",
  async (toolCall: ToolCall): Promise<ToolMessage> => {
    const tool = toolsByName[toolCall.name];
    const observation = await tool.invoke(toolCall.args);
    return new ToolMessage({ content: observation, tool_call_id: toolCall.id });
    // Can also pass toolCall directly into the tool to return a ToolMessage
    // return tool.invoke(toolCall);
  }
);
Define entrypoint¶
Our entrypoint is also unchanged from the ReAct agent guide:
import { entrypoint, addMessages, MemorySaver } from "@langchain/langgraph";

const agent = entrypoint({
  name: "agent",
  checkpointer: new MemorySaver(),
}, async (messages: BaseMessageLike[]) => {
  let currentMessages = messages;
  let llmResponse = await callModel(currentMessages);

  while (true) {
    if (!llmResponse.tool_calls?.length) {
      break;
    }

    // Execute tools
    const toolResults = await Promise.all(
      llmResponse.tool_calls.map((toolCall) => {
        return callTool(toolCall);
      })
    );

    // Append to message list
    currentMessages = addMessages(currentMessages, [llmResponse, ...toolResults]);

    // Call model again
    llmResponse = await callModel(currentMessages);
  }

  return llmResponse;
});
Usage¶
Let's invoke our model with a question that requires human assistance. The question will also require an invocation of the getWeather tool:
import { BaseMessage, isAIMessage } from "@langchain/core/messages";

const prettyPrintMessage = (message: BaseMessage) => {
  console.log("=".repeat(30), `${message.getType()} message`, "=".repeat(30));
  console.log(message.content);
  if (isAIMessage(message) && message.tool_calls?.length) {
    console.log(JSON.stringify(message.tool_calls, null, 2));
  }
};

const prettyPrintStep = (step: Record<string, any>) => {
  if (step.__metadata__?.cached) {
    return;
  }
  for (const [taskName, update] of Object.entries(step)) {
    const message = update as BaseMessage;
    // Only print task updates
    if (taskName === "agent") continue;
    console.log(`\n${taskName}:`);
    if (taskName === "__interrupt__") {
      console.log(update);
    } else {
      prettyPrintMessage(message);
    }
  }
};

const userMessage = {
  role: "user",
  content: [
    "Can you reach out for human assistance: what should I feed my cat?",
    "Separately, can you check the weather in San Francisco?",
  ].join(" "),
};
console.log(userMessage);

const agentStream = await agent.stream([userMessage], {
  configurable: {
    thread_id: "1",
  },
});

let lastStep;

for await (const step of agentStream) {
  prettyPrintStep(step);
  lastStep = step;
}
{
  role: 'user',
  content: 'Can you reach out for human assistance: what should I feed my cat? Separately, can you check the weather in San Francisco?'
}

callModel:
============================== ai message ==============================
[
  {
    "name": "humanAssistance",
    "args": {
      "query": "What should I feed my cat?"
    },
    "type": "tool_call",
    "id": "call_TwrNq6tGI61cDCJEpj175h7J"
  },
  {
    "name": "getWeather",
    "args": {
      "location": "San Francisco"
    },
    "type": "tool_call",
    "id": "call_fMzUBvc0SpZpXxM2LQLXfbke"
  }
]

callTool:
============================== tool message ==============================
It's sunny!

__interrupt__:
[
  {
    value: { query: 'What should I feed my cat?' },
    when: 'during',
    resumable: true,
    ns: [ 'callTool:2e0c6c40-9541-57ef-a7af-24213a10d5a4' ]
  }
]
Note that we generate two tool calls, and although our run is interrupted, we did not block the execution of the getWeather tool.
Let's inspect where we're interrupted by logging the last step we captured from the stream:
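console.log(JSON.stringify(lastStep));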
{"__interrupt__":[{"value":{"query":"What should I feed my cat?"},"when":"during","resumable":true,"ns":["callTool:2e0c6c40-9541-57ef-a7af-24213a10d5a4"]}]}
Note that the data supplied in the Command can be customized to your needs based on the implementation of humanAssistance.
import { Command } from "@langchain/langgraph";

const humanResponse = "You should feed your cat a fish.";

const humanCommand = new Command({
  resume: { data: humanResponse },
});

// Reuse the config from above; its thread_id ("1") matches this run
const resumeStream2 = await agent.stream(humanCommand, config);

for await (const step of resumeStream2) {
  prettyPrintStep(step);
}
callTool:
============================== tool message ==============================
You should feed your cat a fish.
callModel:
============================== ai message ==============================
For your cat, it is suggested that you feed it fish. As for the weather in San Francisco, it's currently sunny!
Note: the interrupt function propagates by throwing a special GraphInterrupt error. You should therefore avoid using try/catch blocks around calls to interrupt, or, if you do, make sure the GraphInterrupt error is thrown again inside your catch block.
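As a minimal sketch of the safe pattern (the guardedFeedback task and the error-name check below are illustrative assumptions, not documented library API):

import { task, interrupt } from "@langchain/langgraph";

const guardedFeedback = task("guardedFeedback", async (query: string) => {
  try {
    const feedback = interrupt(`Please provide feedback: ${query}`);
    return `${query} ${feedback}`;
  } catch (error) {
    // Assumption: the interrupt error carries the name "GraphInterrupt".
    // Re-throw it so LangGraph can pause the run instead of swallowing it.
    if ((error as Error)?.name === "GraphInterrupt") {
      throw error;
    }
    // Handle genuinely unexpected errors here, then recover or re-throw.
    throw error;
  }
});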