
Persistence

Many AI applications need memory to share context across multiple interactions within a single conversational "thread." In LangGraph, this kind of thread-level memory can be added to any graph using checkpointers.

Just compile the graph with a compatible checkpointer. Below is an example using the simple in-memory MemorySaver:

import { MemorySaver } from "@langchain/langgraph";

const checkpointer = new MemorySaver();
const graph = workflow.compile({ checkpointer });

This guide shows how to add thread-level persistence to your graph.

Note: Multi-conversation memory

If you need memory that is shared across multiple conversations or users (cross-thread persistence), check out this how-to guide.

Note

In this how-to guide, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent(model, tools=tool, checkpointer=checkpointer) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.

Setup

This guide will use OpenAI's GPT-4o model. We will optionally set our API key for LangSmith tracing, which will give us best-in-class observability.

// process.env.OPENAI_API_KEY = "sk_...";

// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls__...";
process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT = "Persistence: LangGraphJS";

Define the state

The state is the interface for all of the nodes in our graph.

import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

const GraphState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
  }),
});
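To see what the reducer on messages does, here is a minimal standalone sketch (plain strings stand in for BaseMessage objects, purely for illustration): each update a node returns for the channel is merged into the existing value via concat, so the conversation history accumulates instead of being overwritten.

```typescript
// Minimal sketch: plain strings stand in for BaseMessage objects.
// The reducer appends each node's update to the accumulated history.
const reducer = (x: string[], y: string[]) => x.concat(y);

let messages: string[] = [];
messages = reducer(messages, ["Hi, I'm Yu."]); // update from the user
messages = reducer(messages, ["Hello Yu!"]);   // update from the agent
console.log(messages); // ["Hi, I'm Yu.", "Hello Yu!"]
```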

Set up the tools

We will first define the tools we want to use. For this simple example, we will create a placeholder search engine. It is really easy to create your own tools, though - see the documentation here on how to do that.

import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchTool = tool(async ({}: { query: string }) => {
  // This is a placeholder for the actual implementation
  return "Cold, with a low of 13 ℃";
}, {
  name: "search",
  description:
    "Use to surf the web, fetch current information, check the weather, and retrieve other information.",
  schema: z.object({
    query: z.string().describe("The query to use in your search."),
  }),
});

await searchTool.invoke({ query: "What's the weather like?" });

const tools = [searchTool];

We can now wrap these tools in a simple ToolNode. This object will actually run the tools (functions) whenever they are invoked by our LLM.

import { ToolNode } from "@langchain/langgraph/prebuilt";

const toolNode = new ToolNode(tools);

Set up the model

Now we will load the chat model.

  1. It should work with messages. We will represent all agent state in the form of messages, so it needs to be able to work well with them.
  2. It should work with tool calling, meaning it can return function arguments in its response.

Note

These model requirements are not general requirements for using LangGraph - they are just the requirements for this example.

import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

After we've done this, we should make sure the model knows that it has these tools available to call. We can do that by calling bindTools.

const boundModel = model.bindTools(tools);

Define the graph

We can now put it all together. We will run it first without a checkpointer:

import { END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";
import { RunnableConfig } from "@langchain/core/runnables";

const routeMessage = (state: typeof GraphState.State) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  // If no tools are called, we can finish (respond to the user)
  if (!lastMessage.tool_calls?.length) {
    return END;
  }
  // Otherwise if there is, we continue and call the tools
  return "tools";
};

const callModel = async (
  state: typeof GraphState.State,
  config?: RunnableConfig,
) => {
  const { messages } = state;
  const response = await boundModel.invoke(messages, config);
  return { messages: [response] };
};

const workflow = new StateGraph(GraphState)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", routeMessage)
  .addEdge("tools", "agent");

const graph = workflow.compile();
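The conditional edge above routes on whether the last message carries tool calls. Here is a simplified standalone sketch of that decision, with plain objects standing in for AIMessage and the string "end" standing in for the END sentinel (both substitutions are for illustration only):

```typescript
// Simplified sketch of routeMessage: plain objects stand in for AIMessage.
type MockMessage = { content: string; tool_calls?: { name: string }[] };

// Returns "end" (standing in for END) when the model answered directly,
// or "tools" when the model requested a tool call.
const route = (messages: MockMessage[]): "end" | "tools" => {
  const last = messages[messages.length - 1];
  return last.tool_calls?.length ? "tools" : "end";
};

console.log(route([{ content: "Hello!" }])); // "end"
console.log(route([{ content: "", tool_calls: [{ name: "search" }] }])); // "tools"
```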

let inputs = { messages: [{ role: "user", content: "Hi I'm Yu, nice to meet you." }] };
for await (
  const { messages } of await graph.stream(inputs, {
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
Hi I'm Yu, nice to meet you.
-----

Hi Yu! Nice to meet you too. How can I assist you today?
-----

inputs = { messages: [{ role: "user", content: "Remember my name?" }] };
for await (
  const { messages } of await graph.stream(inputs, {
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
Remember my name?
-----

You haven't shared your name with me yet. What's your name?
-----

Add Memory

Let's try it again with a checkpointer. We will use the MemorySaver, which will "save" checkpoints in memory.

import { MemorySaver } from "@langchain/langgraph";

// Here we only save in-memory
const memory = new MemorySaver();
const persistentGraph = workflow.compile({ checkpointer: memory });
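Conceptually, the checkpointer namespaces saved state by thread_id, so each conversation accumulates its own independent history. The following is an illustrative sketch of that idea only - it is not the real MemorySaver API:

```typescript
// Conceptual sketch only - not the real MemorySaver API.
// Checkpoints are keyed by thread_id, so each thread keeps its own history.
class ThreadCheckpointSketch {
  private checkpoints = new Map<string, string[]>();

  save(threadId: string, message: string): void {
    const history = this.checkpoints.get(threadId) ?? [];
    this.checkpoints.set(threadId, [...history, message]);
  }

  load(threadId: string): string[] {
    return this.checkpoints.get(threadId) ?? [];
  }
}

const sketch = new ThreadCheckpointSketch();
sketch.save("conversation-num-1", "Hi I'm Jo, nice to meet you.");
sketch.save("conversation-2", "you forgot?");
console.log(sketch.load("conversation-num-1")); // ["Hi I'm Jo, nice to meet you."]
console.log(sketch.load("conversation-2"));     // ["you forgot?"]
```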

let config = { configurable: { thread_id: "conversation-num-1" } };
inputs = { messages: [{ role: "user", content: "Hi I'm Jo, nice to meet you." }] };
for await (
  const { messages } of await persistentGraph.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
Hi I'm Jo, nice to meet you.
-----

Hello Jo, nice to meet you too! How can I assist you today?
-----

inputs = { messages: [{ role: "user", content: "Remember my name?"}] };
for await (
  const { messages } of await persistentGraph.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
Remember my name?
-----

Yes, I'll remember that your name is Jo. How can I assist you today?
-----

New Conversational Thread

If we want to start a new conversation, we can pass in a different thread_id. Poof! All the memories are gone (just kidding, they'll always live in that other thread)!

config = { configurable: { thread_id: "conversation-2" } };

inputs = { messages: [{ role: "user", content: "you forgot?" }] };
for await (
  const { messages } of await persistentGraph.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
you forgot?
-----
Could you please provide more context or details about what you are referring to? This will help me assist you better.
-----