How to create a ReAct agent from scratch (Functional API)¶
This guide demonstrates how to implement a ReAct agent using the LangGraph Functional API.
The ReAct agent is a tool-calling agent that operates as follows:
- Queries are issued to a chat model;
- if the model generates no tool calls, we return the model response;
- if the model generates tool calls, we execute them with the available tools, append the results to the message list as tool messages, and repeat the process.
This is a simple and versatile setup that can be extended with memory, human-in-the-loop features, and more. See the dedicated how-to guides for examples.
Setup¶
Note
This guide requires @langchain/langgraph>=0.2.42.
First, install the dependencies required for this example:
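For example, with npm (these are the packages imported later in this guide):
npm install @langchain/langgraph @langchain/core @langchain/openai zod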
Next, we need to set an API key for OpenAI (the LLM we will use):
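One option in Node is to set the environment variable in code before constructing the model (the key below is a placeholder; use your own):
// Set the OpenAI API key read by ChatOpenAI (placeholder value).
process.env.OPENAI_API_KEY = "YOUR_API_KEY";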
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor LLM applications built with LangGraph. Read more about how to get started here.
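Tracing is controlled through environment variables; a typical setup, assuming you have a LangSmith API key, looks like:
// Optional: enable LangSmith tracing for this project (placeholder key).
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "YOUR_LANGSMITH_API_KEY";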
Create ReAct agent¶
Now that you have installed the required packages and set your environment variables, we can create our agent.
Define model and tools¶
Let's first define the tools and model we will use for our example. Here we will use a single placeholder tool that gets a description of the weather for a location.
We will use an OpenAI chat model for this example, but any model supporting tool calling will work.
import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const model = new ChatOpenAI({
model: "gpt-4o-mini",
});
const getWeather = tool(async ({ location }) => {
const lowercaseLocation = location.toLowerCase();
if (lowercaseLocation.includes("sf") || lowercaseLocation.includes("san francisco")) {
return "It's sunny!";
} else if (lowercaseLocation.includes("boston")) {
return "It's rainy!";
} else {
return `I am not sure what the weather is in ${location}`;
}
}, {
name: "getWeather",
schema: z.object({
location: z.string().describe("location to get the weather for"),
}),
description: "Call to get the weather from a specific location."
});
const tools = [getWeather];
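Before wiring the tool into the agent, you can sanity-check it by invoking it directly with structured arguments; a quick sketch using the definitions above:
// Direct tool invocation returns the raw string observation.
const observation = await getWeather.invoke({ location: "sf" });
console.log(observation); // "It's sunny!"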
Define tasks¶
Next, we define the tasks we will execute. Here there are two distinct tasks:
- Call model: we want to query our chat model with a list of messages.
- Call tool: if our model generates tool calls, we want to execute them.
import {
type BaseMessageLike,
AIMessage,
ToolMessage,
} from "@langchain/core/messages";
import { type ToolCall } from "@langchain/core/messages/tool";
import { task } from "@langchain/langgraph";
const toolsByName = Object.fromEntries(tools.map((tool) => [tool.name, tool]));
const callModel = task("callModel", async (messages: BaseMessageLike[]) => {
const response = await model.bindTools(tools).invoke(messages);
return response;
});
const callTool = task(
"callTool",
async (toolCall: ToolCall): Promise<ToolMessage> => {
const tool = toolsByName[toolCall.name];
const observation = await tool.invoke(toolCall.args);
return new ToolMessage({ content: observation, tool_call_id: toolCall.id });
// Can also pass toolCall directly into the tool to return a ToolMessage
// return tool.invoke(toolCall);
});
Define entrypoint¶
Our entrypoint will handle the orchestration of these two tasks. As described above, when our callModel task generates tool calls, the callTool task will generate a response for each of them. We append all messages to a single message list.
import { entrypoint, addMessages } from "@langchain/langgraph";
const agent = entrypoint(
"agent",
async (messages: BaseMessageLike[]) => {
let currentMessages = messages;
let llmResponse = await callModel(currentMessages);
while (true) {
if (!llmResponse.tool_calls?.length) {
break;
}
// Execute tools
const toolResults = await Promise.all(
llmResponse.tool_calls.map((toolCall) => {
return callTool(toolCall);
})
);
// Append to message list
currentMessages = addMessages(currentMessages, [llmResponse, ...toolResults]);
// Call model again
llmResponse = await callModel(currentMessages);
}
return llmResponse;
}
);
Usage¶
To use our agent, we invoke it with a list of messages. Based on our implementation, these can be LangChain message objects or OpenAI-style objects:
import { BaseMessage, isAIMessage } from "@langchain/core/messages";
const prettyPrintMessage = (message: BaseMessage) => {
console.log("=".repeat(30), `${message.getType()} message`, "=".repeat(30));
console.log(message.content);
if (isAIMessage(message) && message.tool_calls?.length) {
console.log(JSON.stringify(message.tool_calls, null, 2));
}
}
// Usage example
const userMessage = { role: "user", content: "What's the weather in san francisco?" };
console.log(userMessage);
const stream = await agent.stream([userMessage]);
for await (const step of stream) {
for (const [taskName, update] of Object.entries(step)) {
const message = update as BaseMessage;
// Only print task updates
if (taskName === "agent") continue;
console.log(`\n${taskName}:`);
prettyPrintMessage(message);
}
}
{ role: 'user', content: "What's the weather in san francisco?" }
callModel:
============================== ai message ==============================
[
{
"name": "getWeather",
"args": {
"location": "San Francisco"
},
"type": "tool_call",
"id": "call_m5jZoH1HUtH6wA2QvexOHutj"
}
]
callTool:
============================== tool message ==============================
It's sunny!
callModel:
============================== ai message ==============================
The weather in San Francisco is sunny!
The agent correctly called the getWeather tool and responded to the user after receiving the information from the tool. Check out the LangSmith trace here.
Add thread-level persistence¶
Adding thread-level persistence lets us support conversational experiences with our agent: subsequent invocations will append to the prior message list, retaining the full conversational context.
To add thread-level persistence to our agent:
- Select a checkpointer: here we will use MemorySaver, a simple in-memory checkpointer.
- Update our entrypoint to read the previous message state via getPreviousState(). Here, we simply append the new message updates to the previous message sequence.
- Choose which values will be returned from the workflow and which will be saved by the checkpointer: if we return a value from entrypoint.final, it becomes available via getPreviousState() on the next invocation (optional).
import {
MemorySaver,
getPreviousState,
} from "@langchain/langgraph";
const checkpointer = new MemorySaver();
const agentWithMemory = entrypoint({
name: "agentWithMemory",
checkpointer,
}, async (messages: BaseMessageLike[]) => {
const previous = getPreviousState<BaseMessage>() ?? [];
let currentMessages = addMessages(previous, messages);
let llmResponse = await callModel(currentMessages);
while (true) {
if (!llmResponse.tool_calls?.length) {
break;
}
// Execute tools
const toolResults = await Promise.all(
llmResponse.tool_calls.map((toolCall) => {
return callTool(toolCall);
})
);
// Append to message list
currentMessages = addMessages(currentMessages, [llmResponse, ...toolResults]);
// Call model again
llmResponse = await callModel(currentMessages);
}
// Append final response for storage
currentMessages = addMessages(currentMessages, llmResponse);
return entrypoint.final({
value: llmResponse,
save: currentMessages,
});
});
We now need to pass in a config when running our application. The config specifies an identifier for the conversation thread.
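A minimal config uses the standard configurable.thread_id key (the ID value itself is arbitrary):
// Identify the conversation thread; the checkpointer keys saved state by this ID.
const config = { configurable: { thread_id: "1" } };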
We can launch a thread the same way as before, this time passing in the config:
const streamWithMemory = await agentWithMemory.stream([{
role: "user",
content: "What's the weather in san francisco?",
}], config);
for await (const step of streamWithMemory) {
for (const [taskName, update] of Object.entries(step)) {
const message = update as BaseMessage;
// Only print task updates
if (taskName === "agentWithMemory") continue;
console.log(`\n${taskName}:`);
prettyPrintMessage(message);
}
}
callModel:
============================== ai message ==============================
[
{
"name": "getWeather",
"args": {
"location": "san francisco"
},
"type": "tool_call",
"id": "call_4vaZqAxUabthejqKPRMq0ngY"
}
]
callTool:
============================== tool message ==============================
It's sunny!
callModel:
============================== ai message ==============================
The weather in San Francisco is sunny!
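We can now ask a follow-up question on the same thread. Because the previous messages were saved by the checkpointer, the agent understands that "it" refers to the weather in San Francisco: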
const followupStreamWithMemory = await agentWithMemory.stream([{
role: "user",
content: "How does it compare to Boston, MA?",
}], config);
for await (const step of followupStreamWithMemory) {
for (const [taskName, update] of Object.entries(step)) {
const message = update as BaseMessage;
// Only print task updates
if (taskName === "agentWithMemory") continue;
console.log(`\n${taskName}:`);
prettyPrintMessage(message);
}
}
callModel:
============================== ai message ==============================
[
{
"name": "getWeather",
"args": {
"location": "boston, ma"
},
"type": "tool_call",
"id": "call_YDrNfZr5XnuBBq5jlIXaxC5v"
}
]
callTool:
============================== tool message ==============================
It's rainy!
callModel:
============================== ai message ==============================
In comparison, while San Francisco is sunny, Boston, MA is experiencing rain.