
How to add thread-level persistence to your graph

Prerequisites

This guide assumes familiarity with the following:

Not needed for LangGraph API users

If you're using the LangGraph API, you don't need to implement a checkpointer manually — the API handles checkpointing for you automatically. This guide applies when you implement LangGraph in your own custom server.

Many AI applications need memory to share context across multiple interactions. In LangGraph, this kind of memory can be added to any StateGraph using thread-level persistence.

When creating any LangGraph graph, you can set up state persistence by adding a checkpointer when compiling the graph:

API Reference: MemorySaver

from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
graph.compile(checkpointer=checkpointer)

This guide shows how to add thread-level persistence to your graph.

Note

If you need memory that is shared across multiple conversations or users (cross-thread persistence), check out this how-to guide.

Setup

First we need to install the required packages:

pip install --quiet -U langgraph langchain_anthropic

Next, we need to set an API key for Anthropic (the LLM we will use).

import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")
ANTHROPIC_API_KEY:  ········


Define the graph

We will use a single-node graph that calls a chat model.

First, let's define the model we will use:

API Reference: ChatAnthropic

from langchain_anthropic import ChatAnthropic

model = ChatAnthropic(model="claude-3-5-sonnet-20240620")

Now we can define our StateGraph and add our model-calling node:

API Reference: StateGraph | START

from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, MessagesState, START


def call_model(state: MessagesState):
    response = model.invoke(state["messages"])
    return {"messages": response}


builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
graph = builder.compile()

If we try to use this graph, the context of the conversation will not persist across interactions:

input_message = {"role": "user", "content": "hi! I'm bob"}
for chunk in graph.stream({"messages": [input_message]}, stream_mode="values"):
    chunk["messages"][-1].pretty_print()

input_message = {"role": "user", "content": "what's my name?"}
for chunk in graph.stream({"messages": [input_message]}, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob
================================== Ai Message ==================================

Hello Bob! It's nice to meet you. How are you doing today? Is there anything I can help you with or would you like to chat about something in particular?
================================ Human Message =================================

what's my name?
================================== Ai Message ==================================

I apologize, but I don't have access to your personal information, including your name. I'm an AI language model designed to provide general information and answer questions to the best of my ability based on my training data. I don't have any information about individual users or their personal details. If you'd like to share your name, you're welcome to do so, but I won't be able to recall it in future conversations.

Add persistence

To add persistence, we need to pass in a checkpointer when compiling the graph:

API Reference: MemorySaver

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
# If you're using LangGraph Cloud or LangGraph Studio, you don't need to pass the checkpointer when compiling the graph, since it's done automatically.

Note

If you're using LangGraph Cloud or LangGraph Studio, you don't need to pass the checkpointer when compiling the graph, since it's done automatically.

Now we can interact with the agent and see that it remembers previous messages!

config = {"configurable": {"thread_id": "1"}}
input_message = {"role": "user", "content": "hi! I'm bob"}
for chunk in graph.stream({"messages": [input_message]}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob
================================== Ai Message ==================================

Hello Bob! It's nice to meet you. How are you doing today? Is there anything in particular you'd like to chat about or any questions you have that I can help you with?
You can always resume a previous thread:

input_message = {"role": "user", "content": "what's my name?"}
for chunk in graph.stream({"messages": [input_message]}, config, stream_mode="values"):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================

what's my name?
================================== Ai Message ==================================

Your name is Bob, as you introduced yourself at the beginning of our conversation.
If we want to start a new conversation, we can pass in a different thread_id. Poof! All the memories are gone!

input_message = {"role": "user", "content": "what's my name?"}
for chunk in graph.stream(
    {"messages": [input_message]},
    {"configurable": {"thread_id": "2"}},
    stream_mode="values",
):
    chunk["messages"][-1].pretty_print()
================================ Human Message =================================

what's my name?
================================== Ai Message ==================================

I apologize, but I don't have access to your personal information, including your name. As an AI language model, I don't have any information about individual users unless it's provided within the conversation. If you'd like to share your name, you're welcome to do so, but otherwise, I won't be able to know or guess it.
