How to add cross-thread persistence to your graph¶
In the previous guide, you learned how to persist graph state across multiple interactions on a single thread. LangGraph.js also allows you to persist data across multiple threads. For instance, you can store information about a user (their name or preferences) in shared memory and reuse it in new conversational threads.
In this guide, we will show how to construct and use a graph that has shared memory implemented using the Store interface.
Note
Support for the Store API used in this guide was added in LangGraph.js v0.2.10.
Setup¶
First, let's install the required packages and set our API keys.
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph; read more about how to get started here.
// process.env.ANTHROPIC_API_KEY = "your api key";
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "lsv2__...";
// process.env.LANGCHAIN_TRACING_V2 = "true";
// process.env.LANGCHAIN_PROJECT = "Cross-thread persistence: LangGraphJS";
Define store¶
In this example, we will create a graph that can retrieve information about a user's preferences. We will do so by defining an InMemoryStore - an object that can store data in memory and query it - and then passing that store object when compiling the graph. This gives every node in the graph access to the store: LangGraph automatically attaches the store you compiled the graph with to the config argument (LangGraphRunnableConfig) passed to each node function.
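The wiring can be sketched in plain TypeScript (hypothetical names, not the actual LangGraph API): a compile step closes over the store once, and every node then receives it through its config argument.

```typescript
// Minimal sketch of compile-time store injection (hypothetical names, not the LangGraph API).
type Store = Map<string, string>;
type Config = { store: Store };
type Node = (state: string, config: Config) => string;

// "Compiling" binds the store once; every node then receives it via config.
function compile(node: Node, store: Store): (state: string) => string {
  return (state) => node(state, { store });
}

const store: Store = new Map([["name", "Bob"]]);
const greet: Node = (state, config) =>
  `${state}, ${config.store.get("name") ?? "stranger"}`;

const app = compile(greet, store);
console.log(app("Hello")); // "Hello, Bob"
```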
When storing objects using the Store interface, you define two things:
- the namespace for the object, a tuple (similar to a directory)
- the object key (similar to a filename)
In our example, we will use ("memories", <userId>) as the namespace and a random UUID as the key for each new memory.
Importantly, to determine the user, we will pass the userId via the config argument of the node function.
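This two-part addressing scheme can be sketched in plain TypeScript (a toy store, not the real Store interface): the namespace tuple acts like a directory path, the key like a filename, and searching a namespace lists every memory filed under it.

```typescript
import { randomUUID } from "node:crypto";

// Toy illustration of namespace + key addressing (not the real Store interface).
const memories = new Map<string, Map<string, { data: string }>>();

function put(namespace: string[], key: string, value: { data: string }) {
  const ns = namespace.join("/"); // the namespace tuple acts like a directory path
  if (!memories.has(ns)) memories.set(ns, new Map());
  memories.get(ns)!.set(key, value);
}

function search(namespace: string[]): { data: string }[] {
  return [...(memories.get(namespace.join("/"))?.values() ?? [])];
}

// Each user's memories live under ("memories", <userId>); keys are random UUIDs.
put(["memories", "1"], randomUUID(), { data: "My name is Bob" });
put(["memories", "2"], randomUUID(), { data: "I like hiking" });

console.log(search(["memories", "1"])); // [ { data: 'My name is Bob' } ]
```

Because each user ID gets its own namespace, one user's memories never leak into a search for another user's namespace.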
Let's first define an InMemoryStore that the graph can read memories from and write memories to.
Create graph¶
import { v4 as uuidv4 } from "uuid";
import { ChatAnthropic } from "@langchain/anthropic";
import { BaseMessage } from "@langchain/core/messages";
import {
Annotation,
StateGraph,
START,
MemorySaver,
InMemoryStore,
LangGraphRunnableConfig,
messagesStateReducer,
} from "@langchain/langgraph";
const inMemoryStore = new InMemoryStore();
const StateAnnotation = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: messagesStateReducer,
default: () => [],
}),
});
const model = new ChatAnthropic({ modelName: "claude-3-5-sonnet-20240620" });
// NOTE: we're passing the Store param to the node --
// this is the Store we compile the graph with
const callModel = async (
state: typeof StateAnnotation.State,
config: LangGraphRunnableConfig
): Promise<{ messages: any }> => {
const store = config.store;
if (!store) {
throw new Error("store is required when compiling the graph");
}
if (!config.configurable?.userId) {
throw new Error("userId is required in the config");
}
const namespace = ["memories", config.configurable?.userId];
const memories = await store.search(namespace);
const info = memories.map((d) => d.value.data).join("\n");
const systemMsg = `You are a helpful assistant talking to the user. User info: ${info}`;
// Store new memories if the user asks the model to remember
const lastMessage = state.messages[state.messages.length - 1];
if (
typeof lastMessage.content === "string" &&
lastMessage.content.toLowerCase().includes("remember")
) {
await store.put(namespace, uuidv4(), { data: lastMessage.content });
}
const response = await model.invoke([
{ type: "system", content: systemMsg },
...state.messages,
]);
return { messages: response };
};
const builder = new StateGraph(StateAnnotation)
.addNode("call_model", callModel)
.addEdge(START, "call_model");
// NOTE: we're passing the store object here when compiling the graph
const graph = builder.compile({
checkpointer: new MemorySaver(),
store: inMemoryStore,
});
Note
If you're using LangGraph Cloud or LangGraph Studio, you don't need to pass the store or checkpointer when compiling the graph, since it's done automatically.
Run the graph!¶
Now let's specify a user ID in the config and tell the model our name:
let config = { configurable: { thread_id: "1", userId: "1" } };
let inputMessage = { type: "user", content: "Hi! Remember: my name is Bob" };
for await (const chunk of await graph.stream(
{ messages: [inputMessage] },
{ ...config, streamMode: "values" }
)) {
console.log(chunk.messages[chunk.messages.length - 1]);
}
HumanMessage {
"id": "ef28a40a-fd75-4478-929a-5413f2a6b044",
"content": "Hi! Remember: my name is Bob",
"additional_kwargs": {},
"response_metadata": {}
}
AIMessage {
"id": "msg_01UcHJnSAuVDFuDmqaYkxWAf",
"content": "Hello Bob! It's nice to meet you. I'll remember that your name is Bob. How can I assist you today?",
"additional_kwargs": {
"id": "msg_01UcHJnSAuVDFuDmqaYkxWAf",
"type": "message",
"role": "assistant",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 28,
"output_tokens": 29
}
},
"response_metadata": {
"id": "msg_01UcHJnSAuVDFuDmqaYkxWAf",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 28,
"output_tokens": 29
},
"type": "message",
"role": "assistant"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 28,
"output_tokens": 29,
"total_tokens": 57
}
}
Now let's run the graph on a different thread for the same user. The model should recall the name from the shared store:
config = { configurable: { thread_id: "2", userId: "1" } };
inputMessage = { type: "user", content: "what is my name?" };
for await (const chunk of await graph.stream(
{ messages: [inputMessage] },
{ ...config, streamMode: "values" }
)) {
console.log(chunk.messages[chunk.messages.length - 1]);
}
HumanMessage {
"id": "eaaa4e1c-1560-4b0a-9c2d-396313cb000c",
"content": "what is my name?",
"additional_kwargs": {},
"response_metadata": {}
}
AIMessage {
"id": "msg_01VfqUerYCND1JuWGvbnAacP",
"content": "Your name is Bob. It's nice to meet you, Bob!",
"additional_kwargs": {
"id": "msg_01VfqUerYCND1JuWGvbnAacP",
"type": "message",
"role": "assistant",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 33,
"output_tokens": 17
}
},
"response_metadata": {
"id": "msg_01VfqUerYCND1JuWGvbnAacP",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 33,
"output_tokens": 17
},
"type": "message",
"role": "assistant"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 33,
"output_tokens": 17,
"total_tokens": 50
}
}
We can also inspect the in-memory store directly and verify that the memory was in fact saved for the user:
const memories = await inMemoryStore.search(["memories", "1"]);
for (const memory of memories) {
console.log(memory.value);
}
Finally, let's run the graph for a different user to verify that memories about the first user are self-contained:
config = { configurable: { thread_id: "3", userId: "2" } };
inputMessage = { type: "user", content: "what is my name?" };
for await (const chunk of await graph.stream(
{ messages: [inputMessage] },
{ ...config, streamMode: "values" }
)) {
console.log(chunk.messages[chunk.messages.length - 1]);
}
HumanMessage {
"id": "1006b149-de8d-4d8e-81f4-c78c51a7144b",
"content": "what is my name?",
"additional_kwargs": {},
"response_metadata": {}
}
AIMessage {
"id": "msg_01MjpYZ65NjwZMYq42BWa2Ze",
"content": "I apologize, but I don't have any information about your name or personal details. As an AI assistant, I don't have access to personal information about individual users unless it's specifically provided in our conversation. Is there something else I can help you with?",
"additional_kwargs": {
"id": "msg_01MjpYZ65NjwZMYq42BWa2Ze",
"type": "message",
"role": "assistant",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 25,
"output_tokens": 56
}
},
"response_metadata": {
"id": "msg_01MjpYZ65NjwZMYq42BWa2Ze",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 25,
"output_tokens": 56
},
"type": "message",
"role": "assistant"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 25,
"output_tokens": 56,
"total_tokens": 81
}
}