How to stream state updates of your graph
LangGraph supports multiple streaming modes. The main ones are:

- values: streams back the values of the graph. This is the **full state of the graph** after each node is called.
- updates: streams back the updates to the graph. This is the **update to the state of the graph** after each node is called.

This guide covers streamMode="updates".
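As a rough sketch (hypothetical shapes, not the library's exact types), a values chunk carries the whole state, while an updates chunk is keyed by the node that produced it:

```typescript
// Hypothetical chunk shapes for a graph with a single "agent" node.
// streamMode: "values" → the complete state after the node ran:
const valuesChunk = { messages: ["user: hi", "ai: hello"] };

// streamMode: "updates" → only the delta, keyed by node name:
const updatesChunk = { agent: { messages: ["ai: hello"] } };

console.log(Object.keys(valuesChunk));  // state keys
console.log(Object.keys(updatesChunk)); // node names
```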
In [1]:
// process.env.OPENAI_API_KEY = "sk-...";
Define the state

The state is the interface for all of the nodes in our graph.
In [2]:
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

const StateAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
  }),
});
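The reducer controls how each node's return value is merged into the existing state. A minimal sketch of the concatenation reducer above, with plain strings standing in for BaseMessage:

```typescript
// Same merge logic as the reducer above, using strings in place of BaseMessage.
const concatReducer = (existing: string[], update: string[]) =>
  existing.concat(update);

let messages: string[] = [];
messages = concatReducer(messages, ["user: what's the weather in sf"]);
messages = concatReducer(messages, ["ai: <tool call>"]);
console.log(messages.length); // 2 — updates accumulate instead of overwriting
```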
In [3]:
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchTool = tool(async ({ query: _query }: { query: string }) => {
  // This is a placeholder for the actual implementation
  return "Cold, with a low of 3℃";
}, {
  name: "search",
  description:
    "Use to surf the web, fetch current information, check the weather, and retrieve other information.",
  schema: z.object({
    query: z.string().describe("The query to use in your search."),
  }),
});

await searchTool.invoke({ query: "What's the weather like?" });

const tools = [searchTool];
We can now wrap these tools in a simple ToolNode. This object will actually run the tools (functions) whenever they are invoked by our LLM.
In [4]:
import { ToolNode } from "@langchain/langgraph/prebuilt";
const toolNode = new ToolNode(tools);
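Conceptually, a tool node reads the tool calls on the last AI message and invokes the matching tool by name. A simplified, hypothetical version of that dispatch (not ToolNode's actual implementation):

```typescript
// Simplified sketch of tool-call dispatch, keyed by tool name.
type ToolCall = { name: string; args: Record<string, unknown> };

const toolImpls: Record<string, (args: Record<string, unknown>) => string> = {
  search: () => "Cold, with a low of 3℃", // placeholder, like searchTool above
};

const runToolCalls = (calls: ToolCall[]): string[] =>
  calls.map((call) => toolImpls[call.name](call.args));

console.log(runToolCalls([{ name: "search", args: { query: "weather in sf" } }]));
// [ 'Cold, with a low of 3℃' ]
```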
In [5]:
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({ model: "gpt-4o" });
After we've done this, we should make sure the model knows that it has these tools available to call. We can do that by calling bindTools.
In [6]:
const boundModel = model.bindTools(tools);
Define the graph

We can now put it all together.
In [7]:
import { END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";

const routeMessage = (state: typeof StateAnnotation.State) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  // If no tools are called, we can finish (respond to the user)
  if (!lastMessage?.tool_calls?.length) {
    return END;
  }
  // Otherwise if there are, we continue and call the tools
  return "tools";
};

const callModel = async (
  state: typeof StateAnnotation.State,
) => {
  const { messages } = state;
  const responseMessage = await boundModel.invoke(messages);
  return { messages: [responseMessage] };
};

const workflow = new StateGraph(StateAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", routeMessage)
  .addEdge("tools", "agent");

const graph = workflow.compile();
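The compiled graph behaves like the following loop (a hypothetical plain-TypeScript paraphrase of the edges above, not LangGraph's execution engine):

```typescript
// Plain-TS paraphrase of the graph's control flow: the agent runs, and as long
// as it emits tool calls we run the tools and loop back; otherwise we finish.
type Turn = { toolCalls?: string[] };

const simulateGraph = (agentTurns: Turn[]): string[] => {
  const visited: string[] = [];
  for (const turn of agentTurns) {
    visited.push("agent");
    if (!turn.toolCalls?.length) break; // routeMessage → END
    visited.push("tools");              // routeMessage → "tools"
  }
  return visited;
};

// One tool-calling turn followed by a final answer:
console.log(simulateGraph([{ toolCalls: ["search"] }, {}]));
// [ 'agent', 'tools', 'agent' ]
```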
Stream updates

We can now interact with the agent.
In [8]:
let inputs = { messages: [{ role: "user", content: "what's the weather in sf" }] };

for await (
  const chunk of await graph.stream(inputs, {
    streamMode: "updates",
  })
) {
  for (const [node, values] of Object.entries(chunk)) {
    console.log(`Receiving update from node: ${node}`);
    console.log(values);
    console.log("\n====\n");
  }
}
Receiving update from node: agent
{
  messages: [
    AIMessage {
      "id": "chatcmpl-9y654VypbD3kE1xM8v4xaAHzZEOXa",
      "content": "",
      "additional_kwargs": {
        "tool_calls": [
          {
            "id": "call_OxlOhnROermwae2LPs9SanmD",
            "type": "function",
            "function": "[Object]"
          }
        ]
      },
      "response_metadata": {
        "tokenUsage": {
          "completionTokens": 17,
          "promptTokens": 70,
          "totalTokens": 87
        },
        "finish_reason": "tool_calls",
        "system_fingerprint": "fp_3aa7262c27"
      },
      "tool_calls": [
        {
          "name": "search",
          "args": {
            "query": "current weather in San Francisco"
          },
          "type": "tool_call",
          "id": "call_OxlOhnROermwae2LPs9SanmD"
        }
      ],
      "invalid_tool_calls": [],
      "usage_metadata": {
        "input_tokens": 70,
        "output_tokens": 17,
        "total_tokens": 87
      }
    }
  ]
}

====

Receiving update from node: tools
{
  messages: [
    ToolMessage {
      "content": "Cold, with a low of 3℃",
      "name": "search",
      "additional_kwargs": {},
      "response_metadata": {},
      "tool_call_id": "call_OxlOhnROermwae2LPs9SanmD"
    }
  ]
}

====

Receiving update from node: agent
{
  messages: [
    AIMessage {
      "id": "chatcmpl-9y654dZ0zzZhPYm6lb36FkG1Enr3p",
      "content": "It looks like it's currently quite cold in San Francisco, with a low temperature of around 3°C. Make sure to dress warmly!",
      "additional_kwargs": {},
      "response_metadata": {
        "tokenUsage": {
          "completionTokens": 28,
          "promptTokens": 103,
          "totalTokens": 131
        },
        "finish_reason": "stop",
        "system_fingerprint": "fp_3aa7262c27"
      },
      "tool_calls": [],
      "invalid_tool_calls": [],
      "usage_metadata": {
        "input_tokens": 103,
        "output_tokens": 28,
        "total_tokens": 131
      }
    }
  ]
}

====
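The iteration pattern in the loop above can be exercised without a live model call. A hypothetical stand-in for the updates stream shows the same one-chunk-per-node shape seen in the output:

```typescript
// Hypothetical stand-in for the chunks that streamMode: "updates" yields:
// each chunk maps a node name to that node's state update.
const chunks = [
  { agent: { messages: ["ai: <tool call>"] } },
  { tools: { messages: ["tool: Cold, with a low of 3℃"] } },
  { agent: { messages: ["ai: It's cold in SF."] } },
];

const nodes: string[] = [];
for (const chunk of chunks) {
  for (const [node, values] of Object.entries(chunk)) {
    nodes.push(node);
    console.log(`Receiving update from node: ${node}`, values);
  }
}
console.log(nodes); // [ 'agent', 'tools', 'agent' ]
```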