How to stream state updates of your graph¶
LangGraph supports multiple streaming modes. The main ones are:
- values: This streaming mode streams back values of the graph. This is the full state of the graph after each node is called.
- updates: This streaming mode streams back updates to the graph. This is the update to the state of the graph after each node is called.
This guide covers streamMode="updates".
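To make the difference concrete, here is a rough sketch of the chunk shapes the two modes yield for the graph built later in this guide (illustrative comments only; the node name "agent" comes from the graph below):
// streamMode: "values" — each chunk is the full graph state after a node runs:
//   { messages: [HumanMessage, AIMessage, ...] }
// streamMode: "updates" — each chunk maps a node name to just the update it returned:
//   { agent: { messages: [AIMessage] } }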
Define the state¶
The state is the interface for all of the nodes in our graph.
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

const StateAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    // Append each node's returned messages to the accumulated list
    reducer: (x, y) => x.concat(y),
  }),
});
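As a quick illustration of the reducer semantics (a hypothetical example, not part of the graph below): when a node returns { messages: [...] }, the reducer appends those messages to the accumulated list rather than replacing it.
import { HumanMessage, AIMessage } from "@langchain/core/messages";

// Hypothetical update: the reducer is (x, y) => x.concat(y), so updates append.
const existing = [new HumanMessage("what's the weather in sf")];
const update = [new AIMessage("Let me check that for you.")];
const merged = existing.concat(update); // both messages are retained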
Set up the tools¶
We will first define the tools we want to use. For this simple example, we will create a placeholder search engine. It is really easy to create your own tools, though - see the documentation on how to do that.
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchTool = tool(async ({ query: _query }: { query: string }) => {
  // This is a placeholder for the actual implementation
  return "Cold, with a low of 3℃";
}, {
  name: "search",
  description:
    "Use to surf the web, fetch current information, check the weather, and retrieve other information.",
  schema: z.object({
    query: z.string().describe("The query to use in your search."),
  }),
});
await searchTool.invoke({ query: "What's the weather like?" });
const tools = [searchTool];
We can now wrap these tools in a simple ToolNode. This object will actually run the tools (functions) whenever they are invoked by our LLM.
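A minimal setup using the prebuilt ToolNode; this defines the toolNode referenced when we build the graph below:
import { ToolNode } from "@langchain/langgraph/prebuilt";

const toolNode = new ToolNode(tools);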
Set up the model¶
Now we will load the chat model.
- It should work with messages. We will represent all agent state in the form of messages, so it needs to be able to work well with them.
- It should work with tool calling, meaning it can return function arguments in its response.
Note
These model requirements are not general requirements for using LangGraph - they are just requirements for this one example.
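A minimal sketch of loading such a model. The run output below shows OpenAI-style metadata, so we assume ChatOpenAI here; the specific model name is an assumption, and any chat model that supports tool calling will do:
import { ChatOpenAI } from "@langchain/openai";

// Assumption: an OpenAI tool-calling model; swap in any tool-calling chat model.
const model = new ChatOpenAI({ model: "gpt-4o" });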
After we've done this, we should make sure the model knows that it has these tools available to call. We can do that by calling bindTools.
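This produces the boundModel that the agent node invokes below:
const boundModel = model.bindTools(tools);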
Define the graph¶
We can now put it all together.
import { END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";

const routeMessage = (state: typeof StateAnnotation.State) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  // If no tools are called, we can finish (respond to the user)
  if (!lastMessage?.tool_calls?.length) {
    return END;
  }
  // Otherwise, we continue and call the tools
  return "tools";
};

const callModel = async (state: typeof StateAnnotation.State) => {
  const { messages } = state;
  const responseMessage = await boundModel.invoke(messages);
  return { messages: [responseMessage] };
};

const workflow = new StateGraph(StateAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", routeMessage)
  .addEdge("tools", "agent");

const graph = workflow.compile();
Stream updates¶
We can now interact with the agent.
let inputs = { messages: [{ role: "user", content: "what's the weather in sf" }] };

for await (
  const chunk of await graph.stream(inputs, {
    streamMode: "updates",
  })
) {
  // Each chunk maps a node name to the state update that node produced
  for (const [node, values] of Object.entries(chunk)) {
    console.log(`Receiving update from node: ${node}`);
    console.log(values);
    console.log("\n====\n");
  }
}
Receiving update from node: agent
{
messages: [
AIMessage {
"id": "chatcmpl-9y654VypbD3kE1xM8v4xaAHzZEOXa",
"content": "",
"additional_kwargs": {
"tool_calls": [
{
"id": "call_OxlOhnROermwae2LPs9SanmD",
"type": "function",
"function": "[Object]"
}
]
},
"response_metadata": {
"tokenUsage": {
"completionTokens": 17,
"promptTokens": 70,
"totalTokens": 87
},
"finish_reason": "tool_calls",
"system_fingerprint": "fp_3aa7262c27"
},
"tool_calls": [
{
"name": "search",
"args": {
"query": "current weather in San Francisco"
},
"type": "tool_call",
"id": "call_OxlOhnROermwae2LPs9SanmD"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 70,
"output_tokens": 17,
"total_tokens": 87
}
}
]
}
====
Receiving update from node: tools
{
messages: [
ToolMessage {
"content": "Cold, with a low of 3℃",
"name": "search",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "call_OxlOhnROermwae2LPs9SanmD"
}
]
}
====
Receiving update from node: agent
{
messages: [
AIMessage {
"id": "chatcmpl-9y654dZ0zzZhPYm6lb36FkG1Enr3p",
"content": "It looks like it's currently quite cold in San Francisco, with a low temperature of around 3°C. Make sure to dress warmly!",
"additional_kwargs": {},
"response_metadata": {
"tokenUsage": {
"completionTokens": 28,
"promptTokens": 103,
"totalTokens": 131
},
"finish_reason": "stop",
"system_fingerprint": "fp_3aa7262c27"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 103,
"output_tokens": 28,
"total_tokens": 131
}
}
]
}
====
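For comparison, here is a sketch of the same invocation with streamMode: "values" (not run here). Each chunk is then the full state of the graph rather than a per-node delta, so the message list grows with every step:
for await (
  const chunk of await graph.stream(inputs, {
    streamMode: "values",
  })
) {
  // Each chunk is the entire state: the full, accumulated message list.
  console.log(`Messages so far: ${chunk.messages.length}`);
}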