How to add persistence ("memory") to your graph
Many AI applications need memory to share context across multiple interactions. In LangGraph, memory is provided for any StateGraph through a checkpointer.
When creating any LangGraph workflow, you can set it up to persist its state with:
- a checkpointer, such as MemorySaver
- a call to compile({ checkpointer: myCheckpointer }) when compiling the graph.
Example
import { MemorySaver, Annotation } from "@langchain/langgraph";
const GraphState = Annotation.Root({ ... });
const workflow = new StateGraph(GraphState);
// ... Add nodes and edges
// Initialize any compatible checkpointer
const memory = new MemorySaver();
const persistentGraph = workflow.compile({ checkpointer: memory });
A complete example follows.
Note
In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the createReactAgent({ model, tools, checkpointer }) (API docs) prebuilt constructor. This may be more appropriate if you are used to LangChain's AgentExecutor class.
Setup
This guide will use OpenAI's GPT-4o model. We will optionally set our API key for LangSmith tracing, which will give us best-in-class observability.
In [1]:
// process.env.OPENAI_API_KEY = "sk_...";
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls__...";
process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT = "Persistence: LangGraphJS";
Persistence: LangGraphJS
Define the state
The state is the interface for all of the nodes in our graph.
In [2]:
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";
const GraphState = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: (x, y) => x.concat(y),
}),
});
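The reducer determines how each node's partial update is merged into the messages channel. Here is a minimal, framework-free sketch of that concat-based merge, with plain strings standing in for BaseMessage objects:

```typescript
// The same reducer shape used in GraphState: append new messages to the history.
const reducer = (x: string[], y: string[]) => x.concat(y);

// Simulate two node updates being folded into the channel state in turn.
let channel: string[] = [];
channel = reducer(channel, ["Hi I'm Yu"]); // user turn
channel = reducer(channel, ["Hello Yu!"]); // model turn

console.log(channel); // ["Hi I'm Yu", "Hello Yu!"]
```

Because the reducer appends rather than replaces, each node only needs to return the new messages it produced; the accumulated history is maintained for it.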
In [3]:
import { tool } from "@langchain/core/tools";
import { z } from "zod";
const searchTool = tool(async ({}: { query: string }) => {
// This is a placeholder for the actual implementation
return "Cold, with a low of 13 ℃";
}, {
name: "search",
description:
"Use to surf the web, fetch current information, check the weather, and retrieve other information.",
schema: z.object({
query: z.string().describe("The query to use in your search."),
}),
});
await searchTool.invoke({ query: "What's the weather like?" });
const tools = [searchTool];
We can now wrap these tools in a simple ToolNode. This object will actually run the tools (functions) whenever they are invoked by our LLM.
In [4]:
import { ToolNode } from "@langchain/langgraph/prebuilt";
const toolNode = new ToolNode(tools);
In [5]:
import { ChatOpenAI } from "@langchain/openai";
const model = new ChatOpenAI({ model: "gpt-4o" });
After we've done this, we should make sure the model knows that it has these tools available to call. We can do that by calling bindTools.
In [6]:
const boundModel = model.bindTools(tools);
Define the graph
Now we can put it all together. We will run it first without a checkpointer:
In [7]:
import { END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";
import { RunnableConfig } from "@langchain/core/runnables";
const routeMessage = (state: typeof GraphState.State) => {
const { messages } = state;
const lastMessage = messages[messages.length - 1] as AIMessage;
// If no tools are called, we can finish (respond to the user)
if (!lastMessage.tool_calls?.length) {
return END;
}
// Otherwise if there is, we continue and call the tools
return "tools";
};
const callModel = async (
state: typeof GraphState.State,
config?: RunnableConfig,
) => {
const { messages } = state;
const response = await boundModel.invoke(messages, config);
return { messages: [response] };
};
const workflow = new StateGraph(GraphState)
.addNode("agent", callModel)
.addNode("tools", toolNode)
.addEdge(START, "agent")
.addConditionalEdges("agent", routeMessage)
.addEdge("tools", "agent");
const graph = workflow.compile();
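The conditional edge above hinges entirely on whether the last message carries tool calls. That decision can be sketched in isolation, with a local END constant and a simplified message type standing in for the real imports:

```typescript
// Stand-ins for the imported END constant and the AIMessage shape (assumptions).
const END = "__end__";
type FakeAIMessage = { content: string; tool_calls?: { name: string }[] };

const route = (messages: FakeAIMessage[]): string => {
  const last = messages[messages.length - 1];
  // No tool calls: the model answered directly, so the run can finish.
  if (!last.tool_calls?.length) return END;
  // Otherwise, hand off to the tools node.
  return "tools";
};

console.log(route([{ content: "Hi there!" }])); // "__end__"
console.log(route([{ content: "", tool_calls: [{ name: "search" }] }])); // "tools"
```

Only the most recent message matters here: older tool calls have already been executed on previous passes through the loop.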
In [8]:
let inputs = { messages: [{ role: "user", content: "Hi I'm Yu, nice to meet you." }] };
for await (
const { messages } of await graph.stream(inputs, {
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
Hi I'm Yu, nice to meet you. ----- Hi Yu! Nice to meet you too. How can I assist you today? -----
In [9]:
inputs = { messages: [{ role: "user", content: "Remember my name?" }] };
for await (
const { messages } of await graph.stream(inputs, {
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
Remember my name? ----- You haven't shared your name with me yet. What's your name? -----
Add memory
Let's try it again with a checkpointer. We will use the MemorySaver, which will "save" checkpoints in memory.
In [10]:
import { MemorySaver } from "@langchain/langgraph";
// Here we only save in-memory
const memory = new MemorySaver();
const persistentGraph = workflow.compile({ checkpointer: memory });
In [11]:
let config = { configurable: { thread_id: "conversation-num-1" } };
inputs = { messages: [{ role: "user", content: "Hi I'm Jo, nice to meet you." }] };
for await (
const { messages } of await persistentGraph.stream(inputs, {
...config,
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
Hi I'm Jo, nice to meet you. ----- Hello Jo, nice to meet you too! How can I assist you today? -----
In [12]:
inputs = { messages: [{ role: "user", content: "Remember my name?"}] };
for await (
const { messages } of await persistentGraph.stream(inputs, {
...config,
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
Remember my name? ----- Yes, I'll remember that your name is Jo. How can I assist you today? -----
New conversation thread
If we want to start a new conversation, we can pass in a different thread_id. Poof! All the memories are gone (just kidding, they'll always live on in that other thread)!
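Conceptually, the checkpointer keys saved state by thread_id, so each thread only ever sees its own history. A toy sketch of that idea (an illustration of the concept, not the actual MemorySaver internals):

```typescript
// Hypothetical in-memory store keyed by thread_id, illustrating isolation.
const checkpoints = new Map<string, string[]>();

const save = (threadId: string, message: string) => {
  const history = checkpoints.get(threadId) ?? [];
  checkpoints.set(threadId, [...history, message]);
};

save("conversation-num-1", "Hi I'm Jo");
save("conversation-2", "you forgot?");

// Each thread only sees its own messages.
console.log(checkpoints.get("conversation-num-1")); // ["Hi I'm Jo"]
console.log(checkpoints.get("conversation-2")); // ["you forgot?"]
```

This is why switching thread_id below makes the model "forget": the new thread starts with an empty history, while the old one keeps everything saved so far.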
In [13]:
config = { configurable: { thread_id: "conversation-2" } };
{ configurable: { thread_id: 'conversation-2' } }
In [14]:
inputs = { messages: [{ role: "user", content: "you forgot?" }] };
for await (
const { messages } of await persistentGraph.stream(inputs, {
...config,
streamMode: "values",
})
) {
let msg = messages[messages?.length - 1];
if (msg?.content) {
console.log(msg.content);
} else if (msg?.tool_calls?.length > 0) {
console.log(msg.tool_calls);
} else {
console.log(msg);
}
console.log("-----\n");
}
you forgot? -----
Could you please provide more context or details about what you are referring to? This will help me assist you better. -----