How to manage agent steps¶
In this example, we will build a ReAct agent that explicitly manages intermediate steps.
The previous examples just put all messages into the model, but that extra context can distract the agent and increase latency of the API calls. In this example we will only include the N most recent messages in the chat history. Note that this is meant to illustrate general state management.
Setup¶
First, we need to install the required packages.
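The install command itself isn't shown here; based on the imports used below, a typical setup would be (exact package set is an assumption):

```shell
npm install @langchain/langgraph @langchain/core @langchain/openai zod
```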
Next, we need to set an API key for OpenAI (the LLM we will use).
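For example (ChatOpenAI reads OPENAI_API_KEY from the environment by default; replace the placeholder with your own key):

```typescript
// Set the API key for the model provider (placeholder value)
process.env.OPENAI_API_KEY = "sk-...";
```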
Optionally, we can set an API key for LangSmith tracing, which will give us best-in-class observability.
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls__...";
process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT = "Managing Agent Steps: LangGraphJS";
Set up the state¶
The main type of graph in langgraph is the StateGraph. This graph is parameterized by a state object that it passes around to each node. Each node then returns operations to update that state. These operations can either SET specific attributes on the state (e.g. overwrite the existing values) or ADD to the existing attribute. Whether to set or add is denoted in the state object you construct the graph with.
For this example, the state we will track will just be a list of messages. We want each node to simply add messages to that list. Therefore, we define the state as follows:
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";
const AgentState = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: (x, y) => x.concat(y),
}),
});
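As a quick, library-free sketch of what this reducer does (plain array concatenation; the strings below stand in for real BaseMessage objects):

```typescript
// The reducer appends each node's returned messages to the existing list,
// rather than overwriting it.
const reducer = <T>(x: T[], y: T[]): T[] => x.concat(y);

const history = ["human: hi", "ai: hello"];
const update = ["human: what's the weather?"];
const next = reducer(history, update);
console.log(next.length); // 3 - the two existing messages plus the update
```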
Set up the tools¶
We will first define the tools we want to use. For this simple example, we will create a placeholder search engine. It is really easy to create your own tools - see the documentation here on how to do that.
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";
const searchTool = new DynamicStructuredTool({
name: "search",
description: "Call to surf the web.",
schema: z.object({
query: z.string().describe("The query to use in your search."),
}),
func: async ({}: { query: string }) => {
// This is a placeholder, but don't tell the LLM that...
return "Try again in a few seconds! Checking with the weathermen... Call be again next.";
},
});
const tools = [searchTool];
We can now wrap these tools in a simple ToolNode. This is a simple class that takes in a list of messages containing AIMessages with tool_calls, runs the tools, and returns the output as ToolMessages.
import { ToolNode } from "@langchain/langgraph/prebuilt";
const toolNode = new ToolNode<typeof AgentState.State>(tools);
Set up the model¶
Now we need to load the chat model we want to use. This should satisfy two criteria:
- It should work with messages, since our state is primarily a list of messages (chat history).
- It should work with tool calling, since we are using a prebuilt ToolNode.
Note: these model requirements are not requirements for using LangGraph - they are just requirements for this particular example.
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});
// After we've done this, we should make sure the model knows that it has these tools available to call.
// We can do this by binding the tools to the model class.
const boundModel = model.bindTools(tools);
Define the nodes¶
We now need to define a few different nodes in our graph. In langgraph, a node can be either a function or a runnable. There are two main nodes we need:
- The agent: responsible for deciding what (if any) actions to take.
- A function to invoke tools: if the agent decides to take an action, this node will then execute that action.
We will also need to define some edges. Some of these edges may be conditional. The reason they are conditional is that, based on the output of a node, one of several paths may be taken. The path taken is not known until that node is run (the LLM decides).
- Conditional edge: after the agent is called, we should either: a. run the function to invoke tools if the agent said to take an action, or b. finish if the agent said it was done.
- Normal edge: after the tools are invoked, it should always go back to the agent to decide what to do next.
Let's define the nodes, as well as a function to decide which conditional edge to take.
import { END } from "@langchain/langgraph";
import { AIMessage, ToolMessage } from "@langchain/core/messages";
import { RunnableConfig } from "@langchain/core/runnables";
// Define the function that determines whether to continue or not
const shouldContinue = (state: typeof AgentState.State) => {
const { messages } = state;
const lastMessage = messages[messages.length - 1] as AIMessage;
// If there is no function call, then we finish
if (!lastMessage.tool_calls || lastMessage.tool_calls.length === 0) {
return END;
}
// Otherwise if there is, we continue
return "tools";
};
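To make the routing concrete, here is a standalone sketch of the same check against stubbed messages (END in LangGraph is the string constant "__end__", inlined here so the snippet runs without the library):

```typescript
// Standalone sketch of shouldContinue's routing, using plain objects
// in place of real AIMessages.
const END = "__end__"; // LangGraph exports END as this string constant

type StubAIMessage = { tool_calls?: { name: string; args: object }[] };

const route = (messages: StubAIMessage[]): string => {
  const last = messages[messages.length - 1];
  // No tool calls on the last AI message -> the agent is done
  if (!last.tool_calls || last.tool_calls.length === 0) return END;
  // Otherwise, run the tools node next
  return "tools";
};

console.log(route([{ tool_calls: [{ name: "search", args: {} }] }])); // "tools"
console.log(route([{ tool_calls: [] }])); // "__end__"
```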
// **MODIFICATION**
//
// Here we don't pass all messages to the model but rather only pass the `N` most recent. Note that this is a terribly simplistic way to handle messages meant as an illustration, and there may be other methods you may want to look into depending on your use case. We also have to make sure we don't truncate the chat history to include the tool message first, as this would cause an API error.
const callModel = async (
state: typeof AgentState.State,
config?: RunnableConfig,
) => {
let modelMessages = [];
for (let i = state.messages.length - 1; i >= 0; i--) {
modelMessages.push(state.messages[i]);
if (modelMessages.length >= 5) {
if (!ToolMessage.isInstance(modelMessages[modelMessages.length - 1])) {
break;
}
}
}
modelMessages.reverse();
const response = await boundModel.invoke(modelMessages, config);
// We return an object, because this will get added to the existing list
return { messages: [response] };
};
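The windowing loop in callModel can be hard to follow, so here is a library-free sketch of the same logic with type tags standing in for the real message classes. It shows why the ToolMessage check matters: a naive "last 5" window could start at a tool message, which would trigger an API error.

```typescript
// Sketch of the truncation loop: keep the last 5 messages, but extend the
// window backwards while the oldest kept message is a tool message, so a
// tool message is never sent without the AI message that requested it.
type Stub = { type: "human" | "ai" | "tool"; id: number };

const truncate = (messages: Stub[], n = 5): Stub[] => {
  const kept: Stub[] = [];
  for (let i = messages.length - 1; i >= 0; i--) {
    kept.push(messages[i]);
    // Same check as above: only stop once the window is big enough AND
    // doesn't begin with a tool message.
    if (kept.length >= n && kept[kept.length - 1].type !== "tool") break;
  }
  return kept.reverse();
};

const history: Stub[] = [
  { type: "human", id: 1 },
  { type: "ai", id: 2 },
  { type: "tool", id: 3 },
  { type: "ai", id: 4 },
  { type: "tool", id: 5 },
  { type: "ai", id: 6 },
  { type: "tool", id: 7 },
];
// A naive last-5 window would start at the tool message (id 3); the loop
// extends it to also include the ai message (id 2) that produced that call.
console.log(truncate(history).map((m) => m.id)); // [ 2, 3, 4, 5, 6, 7 ]
```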
Define the graph¶
We can now put it all together and define the graph!
import { START, StateGraph } from "@langchain/langgraph";
// Define a new graph
const workflow = new StateGraph(AgentState)
.addNode("agent", callModel)
.addNode("tools", toolNode)
.addEdge(START, "agent")
.addConditionalEdges(
"agent",
shouldContinue,
)
.addEdge("tools", "agent");
// Finally, we compile it!
// This compiles it into a LangChain Runnable,
// meaning you can use it as you would any other runnable
const app = workflow.compile();
Use it!¶
We can now use it! This now exposes the same interface as all other LangChain runnables.
import { HumanMessage, isAIMessage } from "@langchain/core/messages";
import { GraphRecursionError } from "@langchain/langgraph";
const prettyPrint = (message: BaseMessage) => {
let txt = `[${message._getType()}]: ${message.content}`;
if (
  isAIMessage(message) &&
  ((message as AIMessage)?.tool_calls?.length ?? 0) > 0
) {
const tool_calls = (message as AIMessage)?.tool_calls
?.map((tc) => `- ${tc.name}(${JSON.stringify(tc.args)})`)
.join("\n");
txt += ` \nTools: \n${tool_calls}`;
}
console.log(txt);
};
const inputs = {
messages: [
new HumanMessage(
"what is the weather in sf? Don't give up! Keep using your tools.",
),
],
};
// Setting the recursionLimit will set a max number of steps. We expect this to endlessly loop :)
try {
for await (
const output of await app.stream(inputs, {
streamMode: "values",
recursionLimit: 10,
})
) {
const lastMessage = output.messages[output.messages.length - 1];
prettyPrint(lastMessage);
console.log("-----\n");
}
} catch (e) {
// Since we are truncating the chat history, the agent never gets the chance
// to see enough information to know to stop, so it will keep looping until we hit the
// maximum recursion limit.
if ((e as GraphRecursionError).name === "GraphRecursionError") {
console.log("As expected, maximum steps reached. Exiting.");
} else {
console.error(e);
}
}
[human]: what is the weather in sf? Don't give up! Keep using your tools.
-----
[ai]:
Tools:
- search({"query":"current weather in San Francisco"})
-----
[tool]: Try again in a few seconds! Checking with the weathermen... Call be again next.
-----
[ai]:
Tools:
- search({"query":"current weather in San Francisco"})
-----
[tool]: Try again in a few seconds! Checking with the weathermen... Call be again next.
-----
[ai]:
Tools:
- search({"query":"current weather in San Francisco"})
-----
[tool]: Try again in a few seconds! Checking with the weathermen... Call be again next.
-----
[ai]:
Tools:
- search({"query":"current weather in San Francisco"})
-----
[tool]: Try again in a few seconds! Checking with the weathermen... Call be again next.
-----
[ai]:
Tools:
- search({"query":"current weather in San Francisco"})
-----
As expected, maximum steps reached. Exiting.