How to force an agent to call a tool¶
In this example, we will build a ReAct agent that always calls a certain tool first, before doing any planning. We will create an agent with a search tool, but force the agent to call that search tool first (and then let it do whatever it wants afterwards). This is useful when you know you want a specific action to run in your application, but you also want the LLM to have the flexibility to follow up on the user's query after that fixed sequence.
Setup¶
First we need to install the required packages.
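For example, with npm (package names inferred from the imports used later in this guide; adjust to your package manager as needed):
npm install @langchain/langgraph @langchain/openai @langchain/core zod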
Next, we need to set API keys for OpenAI (the LLM we will use). Optionally, we can set the API key for LangSmith tracing, which will give us best-in-class observability.
// process.env.OPENAI_API_KEY = "sk_...";
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls__...";
// process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT = "Force Calling a Tool First: LangGraphJS";
Set up the tools¶
We first define the tools we want to use. For this simple example, we will use a mock search tool that stands in for a real search tool (such as Tavily's built-in search) and simply returns a hard-coded result. However, it is really easy to create your own tools - see the documentation here on how to do that.
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";
const searchTool = new DynamicStructuredTool({
name: "search",
description:
"Use to surf the web, fetch current information, check the weather, and retrieve other information.",
schema: z.object({
query: z.string().describe("The query to use in your search."),
}),
func: async ({}: { query: string }) => {
// This is a placeholder for the actual implementation
return "Cold, with a low of 13 ℃";
},
});
await searchTool.invoke({ query: "What's the weather like?" });
const tools = [searchTool];
We can now wrap these tools in a ToolNode. This is a prebuilt node that takes the tool calls generated by a LangChain chat model, invokes the corresponding tool, and returns the output.
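The graph defined later refers to this node as toolNode, so we create it here. A minimal sketch, assuming the prebuilt ToolNode export from @langchain/langgraph/prebuilt:
import { ToolNode } from "@langchain/langgraph/prebuilt";
const toolNode = new ToolNode(tools);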
Set up the model¶
Now we need to load the chat model we want to use. Importantly, this should satisfy two criteria:
- It should work with messages. We will represent all agent state in the form of messages, so it needs to be able to work well with them.
- It should work with OpenAI function calling. This means it should either be an OpenAI model or a model that exposes a similar interface.
Note: these model requirements are not requirements for using LangGraph; they are just requirements for this example.
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({
temperature: 0,
model: "gpt-4o",
});
After we've done this, we should make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format for OpenAI function calling and then binding them to the model class.
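The callModel node defined later streams from a boundModel, so we bind the tools to the model here. A minimal sketch using the chat model's bindTools method:
const boundModel = model.bindTools(tools);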
Define the agent state¶
The main type of graph in langgraph is the StateGraph. This graph is parameterized by a state object that it passes around to each node. Each node then returns operations to update that state.
For this example, the state we will track will just be a list of messages. We want each node to simply add messages to that list. Therefore, we will define the agent state as an object with one key (messages) whose value specifies how to update the state.
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";
const AgentState = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: (x, y) => x.concat(y),
}),
});
Define the nodes¶
We now need to define a few different nodes in our graph. In langgraph, a node can be either a function or a runnable. There are two main nodes we need for this:
- The agent: responsible for deciding what (if any) actions to take.
- A function to invoke tools: if the agent decides to take an action, this node will then execute that action.
We will also need to define some edges. Some of these may be conditional. The reason they are conditional is that, based on the output of a node, one of several paths may be taken. The path that is taken is not known until that node is run (the LLM decides).
- Conditional edge: after the agent is called, we should either: a. if the agent said to take an action, call the function that invokes tools, or b. if the agent said it is finished, then finish.
- Normal edge: after the tools are invoked, it should always go back to the agent to decide what to do next.
Let's define the nodes, as well as a function that decides which conditional edge to take.
import { AIMessage, AIMessageChunk } from "@langchain/core/messages";
import { RunnableConfig } from "@langchain/core/runnables";
import { concat } from "@langchain/core/utils/stream";
// Define logic that will be used to determine which conditional edge to go down
const shouldContinue = (state: typeof AgentState.State) => {
const { messages } = state;
const lastMessage = messages[messages.length - 1] as AIMessage;
// If there is no function call, then we finish
if (!lastMessage.tool_calls || lastMessage.tool_calls.length === 0) {
return "end";
}
// Otherwise if there is, we continue
return "continue";
};
// Define the function that calls the model
const callModel = async (
state: typeof AgentState.State,
config?: RunnableConfig,
) => {
const { messages } = state;
let response: AIMessageChunk | undefined;
for await (const message of await boundModel.stream(messages, config)) {
if (!response) {
response = message;
} else {
response = concat(response, message);
}
}
// We return an object, because this will get added to the existing list
return {
messages: response ? [response as AIMessage] : [],
};
};
MODIFICATION
Here we create a node that returns an AIMessage with a tool call; we will use this at the start to force the agent to call a tool.
// This is the new first node: the first call of the model, where we explicitly hard-code the action we want
const firstModel = async (state: typeof AgentState.State) => {
const humanInput = state.messages[state.messages.length - 1].content || "";
return {
messages: [
new AIMessage({
content: "",
tool_calls: [
{
name: "search",
args: {
query: humanInput,
},
id: "tool_abcd123",
},
],
}),
],
};
};
Define the graph¶
We can now put it all together and define the graph!
MODIFICATION
We will define a firstModel node and set it as the entrypoint.
import { END, START, StateGraph } from "@langchain/langgraph";
// Define a new graph
const workflow = new StateGraph(AgentState)
// Define the new entrypoint
.addNode("first_agent", firstModel)
// Define the two nodes we will cycle between
.addNode("agent", callModel)
.addNode("action", toolNode)
// Set the entrypoint as `first_agent`
// by creating an edge from the virtual __start__ node to `first_agent`
.addEdge(START, "first_agent")
// We now add a conditional edge
.addConditionalEdges(
// First, we define the start node. We use `agent`.
// This means these are the edges taken after the `agent` node is called.
"agent",
// Next, we pass in the function that will determine which node is called next.
shouldContinue,
// Finally we pass in a mapping.
// The keys are strings, and the values are other nodes.
// END is a special node marking that the graph should finish.
// What will happen is we will call `should_continue`, and then the output of that
// will be matched against the keys in this mapping.
// Based on which one it matches, that node will then be called.
{
// If `continue`, then we call the tool node.
continue: "action",
// Otherwise we finish.
end: END,
},
)
// We now add a normal edge from `action` to `agent`.
// This means that after `action` is called, the `agent` node is called next.
.addEdge("action", "agent")
// After we call the first agent, we know we want to go to action
.addEdge("first_agent", "action");
// Finally, we compile it!
// This compiles it into a LangChain Runnable,
// meaning you can use it as you would any other runnable
const app = workflow.compile();
Use it!¶
We can now use it! This exposes the same interface as all other LangChain runnables.
import { HumanMessage } from "@langchain/core/messages";
const inputs = {
messages: [new HumanMessage("what is the weather in sf")],
};
for await (const output of await app.stream(inputs)) {
console.log(output);
console.log("-----\n");
}
{
first_agent: {
messages: [
AIMessage {
"content": "",
"additional_kwargs": {},
"response_metadata": {},
"tool_calls": [
{
"name": "search",
"args": {
"query": "what is the weather in sf"
},
"id": "tool_abcd123"
}
],
"invalid_tool_calls": []
}
]
}
}
-----
{
action: {
messages: [
ToolMessage {
"content": "Cold, with a low of 13 ℃",
"name": "search",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "tool_abcd123"
}
]
}
}
-----
{
agent: {
messages: [
AIMessageChunk {
"id": "chatcmpl-9y562g16z0MUNBJcS6nKMsDuFMRsS",
"content": "The current weather in San Francisco is cold, with a low of 13°C.",
"additional_kwargs": {},
"response_metadata": {
"prompt": 0,
"completion": 0,
"finish_reason": "stop",
"system_fingerprint": "fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27fp_3aa7262c27"
},
"tool_calls": [],
"tool_call_chunks": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 104,
"output_tokens": 18,
"total_tokens": 122
}
}
]
}
}
-----
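Because the compiled graph is itself a runnable, you can also call invoke instead of stream when you only need the final result. A minimal sketch; the return value is the final agent state, so the last entry of messages holds the model's answer:
const finalState = await app.invoke(inputs);
const finalMessage = finalState.messages[finalState.messages.length - 1];
console.log(finalMessage.content);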