How to stream state updates of your graph

LangGraph supports multiple streaming modes. The main ones are:

  • values: This streaming mode streams back values of the graph. This is the full state of the graph after each node is called.
  • updates: This streaming mode streams back updates to the graph. This is the update to the state of the graph after each node is called.

This guide covers streamMode="updates".
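For intuition, here is roughly what a single streamed chunk looks like in each mode (a sketch, assuming a graph with a single messages channel and a node named "agent"; the exact message objects will vary):

// streamMode: "values": each chunk is the full state after a node runs,
// e.g. { messages: [HumanMessage, AIMessage, ...] }

// streamMode: "updates": each chunk contains only what the node that just
// ran returned, keyed by that node's name,
// e.g. { agent: { messages: [AIMessage] } }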

First, make sure your OpenAI API key is set, since we will be using an OpenAI chat model below:

// process.env.OPENAI_API_KEY = "sk-...";

Define the state

The state is the interface for all of the nodes in our graph.

import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

const StateAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    // The reducer appends each node's returned messages to the running history
    reducer: (x, y) => x.concat(y),
  }),
});

Set up the tools

We will first define the tools we want to use. For this simple example, we will create a placeholder search engine. However, it is really easy to create your own tools - see the documentation here on how to do that.

import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchTool = tool(async ({ query: _query }: { query: string }) => {
  // This is a placeholder for the actual implementation
  return "Cold, with a low of 3℃";
}, {
  name: "search",
  description:
    "Use to surf the web, fetch current information, check the weather, and retrieve other information.",
  schema: z.object({
    query: z.string().describe("The query to use in your search."),
  }),
});

await searchTool.invoke({ query: "What's the weather like?" });
// "Cold, with a low of 3℃"

const tools = [searchTool];

We can now wrap these tools in a simple ToolNode. This object will actually run the tools (functions) whenever they are invoked by our LLM.

import { ToolNode } from "@langchain/langgraph/prebuilt";

const toolNode = new ToolNode(tools);
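As a quick illustration of what this does (a sketch, not part of the original flow; the tool call id here is hypothetical), ToolNode reads the tool_calls on the last AIMessage in state, runs the matching tools, and returns the results as ToolMessages:

import { AIMessage } from "@langchain/core/messages";

const exampleToolCallMessage = new AIMessage({
  content: "",
  tool_calls: [
    // Hypothetical tool call, shaped like what the model will produce
    { name: "search", args: { query: "weather" }, id: "example_call_id", type: "tool_call" },
  ],
});

// Returns { messages: [ToolMessage { content: "Cold, with a low of 3℃", ... }] }
await toolNode.invoke({ messages: [exampleToolCallMessage] });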

Set up the model

Now we will load the chat model.

  1. It should work with messages. We will represent all agent state in the form of messages, so it needs to be able to work well with them.
  2. It should work with tool calling, meaning it can return function arguments in its response.

Note

These model requirements are not general requirements for using LangGraph - they are just requirements for this one example.

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

Once we've done this, we should make sure the model knows that it has these tools available to call. We can do this by calling bindTools.

const boundModel = model.bindTools(tools);
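As a quick sanity check (a sketch; the model's actual output will vary from run to run), the bound model can now populate tool_calls on its responses:

const exampleResponse = await boundModel.invoke([
  { role: "user", content: "what's the weather in sf" },
]);

// If the model decided to call the tool, this logs entries like:
// [ { name: "search", args: { query: "..." }, id: "...", type: "tool_call" } ]
console.log(exampleResponse.tool_calls);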

Define the graph

We can now put it all together.

import { END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";

const routeMessage = (state: typeof StateAnnotation.State) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  // If no tools are called, we can finish (respond to the user)
  if (!lastMessage?.tool_calls?.length) {
    return END;
  }
  // Otherwise, route to the "tools" node to run the requested tools
  return "tools";
};

const callModel = async (
  state: typeof StateAnnotation.State,
) => {
  const { messages } = state;
  const responseMessage = await boundModel.invoke(messages);
  return { messages: [responseMessage] };
};

const workflow = new StateGraph(StateAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", routeMessage)
  .addEdge("tools", "agent");

const graph = workflow.compile();

Stream updates

We can now interact with the agent.

let inputs = { messages: [{ role: "user", content: "what's the weather in sf" }] };

for await (
  const chunk of await graph.stream(inputs, {
    streamMode: "updates",
  })
) {
  for (const [node, values] of Object.entries(chunk)) {
    console.log(`Receiving update from node: ${node}`);
    console.log(values);
    console.log("\n====\n");
  }
}
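Each chunk is keyed by the name of the node that just ran, with that node's returned update as the value: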
Receiving update from node: agent
{
  messages: [
    AIMessage {
      "id": "chatcmpl-9y654VypbD3kE1xM8v4xaAHzZEOXa",
      "content": "",
      "additional_kwargs": {
        "tool_calls": [
          {
            "id": "call_OxlOhnROermwae2LPs9SanmD",
            "type": "function",
            "function": "[Object]"
          }
        ]
      },
      "response_metadata": {
        "tokenUsage": {
          "completionTokens": 17,
          "promptTokens": 70,
          "totalTokens": 87
        },
        "finish_reason": "tool_calls",
        "system_fingerprint": "fp_3aa7262c27"
      },
      "tool_calls": [
        {
          "name": "search",
          "args": {
            "query": "current weather in San Francisco"
          },
          "type": "tool_call",
          "id": "call_OxlOhnROermwae2LPs9SanmD"
        }
      ],
      "invalid_tool_calls": [],
      "usage_metadata": {
        "input_tokens": 70,
        "output_tokens": 17,
        "total_tokens": 87
      }
    }
  ]
}

====

Receiving update from node: tools
{
  messages: [
    ToolMessage {
      "content": "Cold, with a low of 3℃",
      "name": "search",
      "additional_kwargs": {},
      "response_metadata": {},
      "tool_call_id": "call_OxlOhnROermwae2LPs9SanmD"
    }
  ]
}

====

Receiving update from node: agent
{
  messages: [
    AIMessage {
      "id": "chatcmpl-9y654dZ0zzZhPYm6lb36FkG1Enr3p",
      "content": "It looks like it's currently quite cold in San Francisco, with a low temperature of around 3°C. Make sure to dress warmly!",
      "additional_kwargs": {},
      "response_metadata": {
        "tokenUsage": {
          "completionTokens": 28,
          "promptTokens": 103,
          "totalTokens": 131
        },
        "finish_reason": "stop",
        "system_fingerprint": "fp_3aa7262c27"
      },
      "tool_calls": [],
      "invalid_tool_calls": [],
      "usage_metadata": {
        "input_tokens": 103,
        "output_tokens": 28,
        "total_tokens": 131
      }
    }
  ]
}

====