Persistence¶
Many AI applications need memory to share context across multiple interactions within a single conversational "thread." In LangGraph, this kind of conversation-level memory can be added to any graph using Checkpointers.
Just compile the graph with a compatible checkpointer. Below is an example using the simple in-memory MemorySaver:
import { MemorySaver } from "@langchain/langgraph";
const checkpointer = new MemorySaver();
const graph = workflow.compile({ checkpointer });
This guide shows how to add thread-level persistence to your graph.
Note: multi-conversation memory
If you need memory that is shared across multiple conversations or users (cross-thread persistence), check out this how-to guide.
Note
In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent(model, tools=tool, checkpointer=checkpointer) (API doc) constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.
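For reference, here is a minimal sketch of that prebuilt equivalent, assuming the model, tools, and checkpointer defined later in this guide; the exact option names vary by version (some accept checkpointSaver, others checkpointer), so check the API doc for yours:
import { createReactAgent } from "@langchain/langgraph/prebuilt";

// Hypothetical usage: a prebuilt ReAct-style agent with persistence.
// "prebuiltAgent" is an illustrative name, not from the original guide.
const prebuiltAgent = createReactAgent({
  llm: model,
  tools,
  checkpointSaver: checkpointer,
});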
Setup¶
This guide will use OpenAI's GPT-4o model. We will optionally set our API key for LangSmith tracing, which will give us best-in-class observability.
// process.env.OPENAI_API_KEY = "sk_...";
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls__...";
process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT = "Persistence: LangGraphJS";
Define the state¶
The state is the interface for all of the nodes in our graph.
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";
const GraphState = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: (x, y) => x.concat(y),
}),
});
Set up the tools¶
We will first define the tools we want to use. For this simple example, we will create a placeholder search engine. However, it is really easy to create your own tools - see the documentation here for how to do that.
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const searchTool = tool(async ({}: { query: string }) => {
// This is a placeholder for the actual implementation
return "Cold, with a low of 13 ℃";
}, {
name: "search",
description:
"Use to surf the web, fetch current information, check the weather, and retrieve other information.",
schema: z.object({
query: z.string().describe("The query to use in your search."),
}),
});
await searchTool.invoke({ query: "What's the weather like?" });
const tools = [searchTool];
Now we can wrap these tools in a simple ToolNode. This object will actually run the tools (functions) whenever they are invoked by our LLM.
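The toolNode referenced when building the graph below can be defined with the prebuilt ToolNode; a minimal sketch:
import { ToolNode } from "@langchain/langgraph/prebuilt";

// Executes any tool calls found on the most recent AIMessage in state
const toolNode = new ToolNode(tools);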
Set up the model¶
Now we will load the chat model.
- It should work with messages. We will represent all agent state in the form of messages, so it needs to work well with them.
- It should work with tool calling, meaning it can return function arguments in its response.
Note
These model requirements are not general requirements for using LangGraph - they are just the requirements for this one example.
After we've done this, we should make sure the model knows that it has these tools available to call. We can do that by calling bindTools.
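A minimal sketch, assuming GPT-4o via the ChatOpenAI class from @langchain/openai (boundModel is the name the callModel node below invokes):
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });
// Bind the tools so the model can return tool calls for them
const boundModel = model.bindTools(tools);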
Define the graph¶
We can now put it all together. We will first run it without a checkpointer:
import { END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";
import { RunnableConfig } from "@langchain/core/runnables";
const routeMessage = (state: typeof GraphState.State) => {
const { messages } = state;
const lastMessage = messages[messages.length - 1] as AIMessage;
// If no tools are called, we can finish (respond to the user)
if (!lastMessage.tool_calls?.length) {
return END;
}
// Otherwise if there is, we continue and call the tools
return "tools";
};
const callModel = async (
state: typeof GraphState.State,
config?: RunnableConfig,
) => {
const { messages } = state;
const response = await boundModel.invoke(messages, config);
return { messages: [response] };
};
const workflow = new StateGraph(GraphState)
.addNode("agent", callModel)
.addNode("tools", toolNode)
.addEdge(START, "agent")
.addConditionalEdges("agent", routeMessage)
.addEdge("tools", "agent");
const graph = workflow.compile();
let inputs = { messages: [{ role: "user", content: "Hi I'm Yu, nice to meet you." }] };
for await (
const { messages } of await graph.stream(inputs, {
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
inputs = { messages: [{ role: "user", content: "Remember my name?" }] };
for await (
const { messages } of await graph.stream(inputs, {
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
Add Memory¶
Let's try it again with a checkpointer. We will use the MemorySaver, which will "save" checkpoints in memory.
import { MemorySaver } from "@langchain/langgraph";
// Here we only save in-memory
const memory = new MemorySaver();
const persistentGraph = workflow.compile({ checkpointer: memory });
let config = { configurable: { thread_id: "conversation-num-1" } };
inputs = { messages: [{ role: "user", content: "Hi I'm Jo, nice to meet you." }] };
for await (
const { messages } of await persistentGraph.stream(inputs, {
...config,
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
Hi I'm Jo, nice to meet you.
-----
Hello Jo, nice to meet you too! How can I assist you today?
-----
inputs = { messages: [{ role: "user", content: "Remember my name?"}] };
for await (
const { messages } of await persistentGraph.stream(inputs, {
...config,
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
New Conversation Thread¶
If we want to start a new conversation, we can pass in a different thread_id. Poof! All the memories are gone (just kidding, they'll always live on in that other thread)!
// Start a fresh thread; the specific thread_id value here is arbitrary
config = { configurable: { thread_id: "conversation-2" } };
inputs = { messages: [{ role: "user", content: "you forgot?" }] };
for await (
const { messages } of await persistentGraph.stream(inputs, {
...config,
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}