How to add thread-level persistence to your graph
Many AI applications need memory to share context across multiple interactions. In LangGraph, this kind of memory can be added to any StateGraph using thread-level persistence.
When creating any LangGraph graph, you can set it up to persist its state by adding a checkpointer when compiling the graph:
from langgraph.checkpoint.memory import MemorySaver
checkpointer = MemorySaver()
graph.compile(checkpointer=checkpointer)
This guide shows how to add thread-level persistence to your graph.
Note
If you need memory that is shared across multiple conversations or users (cross-thread persistence), check out this how-to guide.
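The idea behind thread-level persistence can be illustrated with a toy, stdlib-only sketch (this is not LangGraph's actual implementation, and `ToyCheckpointer` is a hypothetical name): saved state is keyed by a thread ID, so each conversation thread sees only its own history.

```python
from collections import defaultdict

class ToyCheckpointer:
    """Toy illustration: stores a list of messages per thread_id."""

    def __init__(self):
        self._store = defaultdict(list)

    def get(self, thread_id):
        # Return the history saved for this thread (empty for new threads).
        return list(self._store[thread_id])

    def put(self, thread_id, message):
        # Append a message to this thread's history.
        self._store[thread_id].append(message)

checkpointer = ToyCheckpointer()
checkpointer.put("1", "hi! I'm bob")
checkpointer.put("1", "what's my name?")

print(checkpointer.get("1"))  # both messages from thread "1"
print(checkpointer.get("2"))  # a fresh thread starts empty
```

Cross-thread persistence, by contrast, would keep memories in a store shared across all thread IDs.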
Setup
First, we need to install the required packages:
In [1]:
%%capture --no-stderr
%pip install --quiet -U langgraph langchain_anthropic
Next, we need to set the API key for Anthropic (the LLM we will use):
In [2]:
import getpass
import os
def _set_env(var: str):
if not os.environ.get(var):
os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("ANTHROPIC_API_KEY")
In [3]:
from langchain_anthropic import ChatAnthropic
model = ChatAnthropic(model="claude-3-5-sonnet-20240620")
Now we can define our StateGraph and add our model-calling node:
In [4]:
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph import StateGraph, MessagesState, START
def call_model(state: MessagesState):
response = model.invoke(state["messages"])
return {"messages": response}
builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
graph = builder.compile()
If we try to use this graph, the context of the conversation will not persist across interactions:
In [5]:
input_message = {"type": "user", "content": "hi! I'm bob"}
for chunk in graph.stream({"messages": [input_message]}, stream_mode="values"):
chunk["messages"][-1].pretty_print()
input_message = {"type": "user", "content": "what's my name?"}
for chunk in graph.stream({"messages": [input_message]}, stream_mode="values"):
chunk["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob

================================== Ai Message ==================================

Hello Bob! It's nice to meet you. How are you doing today? Is there anything I can help you with or would you like to chat about something in particular?

================================ Human Message =================================

what's my name?

================================== Ai Message ==================================

I apologize, but I don't have access to your personal information, including your name. I'm an AI language model designed to provide general information and answer questions to the best of my ability based on my training data. I don't have any information about individual users or their personal details. If you'd like to share your name, you're welcome to do so, but I won't be able to recall it in future conversations.
Add persistence
To add persistence, we need to pass in a checkpointer when compiling the graph.
In [6]:
from langgraph.checkpoint.memory import MemorySaver
memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
# If you're using LangGraph Cloud or LangGraph Studio, you don't need to pass the checkpointer when compiling the graph, since it's done automatically.
Note
If you're using LangGraph Cloud or LangGraph Studio, you don't need to pass the checkpointer when compiling the graph, since it's done automatically.
We can now interact with the agent and see that it remembers previous messages!
In [7]:
config = {"configurable": {"thread_id": "1"}}
input_message = {"type": "user", "content": "hi! I'm bob"}
for chunk in graph.stream({"messages": [input_message]}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob

================================== Ai Message ==================================

Hello Bob! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions you have that I can help you with?
You can always resume previous threads:
In [8]:
input_message = {"type": "user", "content": "what's my name?"}
for chunk in graph.stream({"messages": [input_message]}, config, stream_mode="values"):
chunk["messages"][-1].pretty_print()
================================ Human Message =================================

what's my name?

================================== Ai Message ==================================

Your name is Bob, as you introduced yourself at the beginning of our conversation.
If we want to start a new conversation, we can pass in a different thread_id. Poof! All the memories are gone!
In [9]:
input_message = {"type": "user", "content": "what's my name?"}
for chunk in graph.stream(
{"messages": [input_message]},
{"configurable": {"thread_id": "2"}},
stream_mode="values",
):
chunk["messages"][-1].pretty_print()
================================ Human Message =================================

what's my name?

================================== Ai Message ==================================

I apologize, but I don't have access to your personal information, including your name. As an AI language model, I don't have any information about individual users unless it's provided within the conversation. If you'd like to share your name, you're welcome to do so, but otherwise, I won't be able to know or guess it.
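The whole flow above can be mimicked end to end with a stdlib-only toy sketch (hypothetical names, not LangGraph's API): on each invocation we load the history for the given thread, call a fake model, and save the updated history, which is essentially what compiling with a checkpointer does for you.

```python
# Toy sketch of thread-level persistence (not LangGraph's implementation).
store = {}  # plays the role of the checkpointer: thread_id -> message history

def fake_model(messages):
    # Remembers a name only if it appeared earlier in this thread's history.
    for m in messages:
        if "i'm bob" in m.lower():
            return "Your name is Bob."
    return "I don't know your name."

def invoke(thread_id, user_message):
    # Load this thread's saved history, append the new message,
    # call the model, and save the updated history back.
    history = store.setdefault(thread_id, [])
    history.append(user_message)
    reply = fake_model(history)
    history.append(reply)
    return reply

invoke("1", "hi! I'm bob")
print(invoke("1", "what's my name?"))  # same thread: "Your name is Bob."
print(invoke("2", "what's my name?"))  # new thread: "I don't know your name."
```

Swapping the in-process `store` dict for a database-backed one is what distinguishes the in-memory `MemorySaver` from durable checkpointers.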