
Add memory

The chatbot can now use tools to answer user questions, but it does not remember the context of previous interactions. This limits its ability to have coherent, multi-turn conversations.

LangGraph solves this problem through persistent checkpointing. If you provide a checkpointer when compiling the graph and a thread_id when calling the graph, LangGraph automatically saves the state after each step. When you invoke the graph again using the same thread_id, the graph loads its saved state, allowing the chatbot to pick up where it left off.

We will see later that checkpointing is much more powerful than simple chat memory: it lets you save and resume complex state at any time for error recovery, human-in-the-loop workflows, time-travel interactions, and more. But first, let's add checkpointing to enable multi-turn conversations.
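The core idea above can be sketched in plain Python. This is a toy illustration, not LangGraph's actual implementation: the `checkpoints` dict and `invoke` function here are hypothetical stand-ins showing how state is saved after each step under a thread_id key, and how a call with the same thread_id resumes from the saved state.

```python
# Toy illustration of thread-keyed checkpointing (not LangGraph's real code).
checkpoints: dict[str, list] = {}  # thread_id -> saved message state


def invoke(thread_id: str, user_message: str) -> list:
    # Load prior state for this thread (a new thread starts empty).
    messages = checkpoints.get(thread_id, [])
    messages = messages + [("user", user_message)]
    # ... a real graph would run its nodes here ...
    messages = messages + [("assistant", f"echo: {user_message}")]
    # Save state after the step, keyed by thread_id.
    checkpoints[thread_id] = messages
    return messages


invoke("1", "Hi there! My name is Will.")
print(len(invoke("1", "Remember my name?")))  # same thread: resumes, 4 messages
print(len(invoke("2", "Remember my name?")))  # new thread: starts fresh, 2 messages
```

Calling with thread "1" twice accumulates history, while thread "2" knows nothing about it, which mirrors the behavior demonstrated in the steps below.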

Note

This tutorial builds on Add tools.

1. Create a MemorySaver checkpointer

Create a MemorySaver checkpointer:

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()

This is an in-memory checkpointer, which is convenient for this tutorial. However, in a production application, you would likely change this to use SqliteSaver or PostgresSaver and connect a database.
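To see why a database-backed saver matters, here is a minimal stdlib sketch of the idea: per-thread state persisted in SQLite so it survives process restarts. The `save_state`/`load_state` helpers and schema are hypothetical illustrations only; the real SqliteSaver and PostgresSaver classes have their own APIs.

```python
import json
import sqlite3

# Toy sketch of a database-backed checkpoint store (illustration only;
# not the actual SqliteSaver/PostgresSaver API).
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE IF NOT EXISTS checkpoints (thread_id TEXT PRIMARY KEY, state TEXT)"
)


def save_state(thread_id: str, messages: list) -> None:
    # Upsert the serialized state for this thread.
    conn.execute(
        "INSERT OR REPLACE INTO checkpoints (thread_id, state) VALUES (?, ?)",
        (thread_id, json.dumps(messages)),
    )
    conn.commit()


def load_state(thread_id: str) -> list:
    row = conn.execute(
        "SELECT state FROM checkpoints WHERE thread_id = ?", (thread_id,)
    ).fetchone()
    return json.loads(row[0]) if row else []


save_state("1", [{"role": "user", "content": "Hi there! My name is Will."}])
print(load_state("1")[0]["content"])  # Hi there! My name is Will.
print(load_state("2"))                # [] (unknown thread starts fresh)
```

With a file path instead of ":memory:", the saved threads would outlive the process, which is the property that distinguishes a database-backed saver from MemorySaver.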

2. Compile the graph

Compile the graph with the provided checkpointer, which will checkpoint the State as the graph works through each node:

graph = graph_builder.compile(checkpointer=memory)
from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass

3. Interact with your chatbot

Now you can interact with your bot!

  1. Pick a thread to use as the key for this conversation.

    config = {"configurable": {"thread_id": "1"}}
    
  2. Call your chatbot:

    user_input = "Hi there! My name is Will."
    
    # The config is the **second positional argument** to stream() or invoke()!
    events = graph.stream(
        {"messages": [{"role": "user", "content": user_input}]},
        config,
        stream_mode="values",
    )
    for event in events:
        event["messages"][-1].pretty_print()
    
    ================================ Human Message =================================
    
    Hi there! My name is Will.
    ================================== Ai Message ==================================
    
    Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?
    

    Note

    The config was provided as the second positional argument when calling the graph. Importantly, it is not nested within the graph inputs ({'messages': []}).

4. Ask a follow up question

Ask a follow up question:

user_input = "Remember my name?"

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================

Remember my name?
================================== Ai Message ==================================

Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.

Notice that we aren't using an external list for memory: it's all handled by the checkpointer! You can inspect the full execution in this LangSmith trace to see what's going on.

Don't believe me? Try this using a different config.

# The only difference is we change the `thread_id` here to "2" instead of "1"
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    {"configurable": {"thread_id": "2"}},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================

Remember my name?
================================== Ai Message ==================================

I apologize, but I don't have any previous context or memory of your name. As an AI assistant, I don't retain information from past conversations. Each interaction starts fresh. Could you please tell me your name so I can address you properly in this conversation?

Notice that the only change we've made is to modify the thread_id in the config. See this call's LangSmith trace for comparison.

5. Inspect the state

By now, we have made a few checkpoints across two different threads. But what goes into a checkpoint? To inspect a graph's state for a given config at any time, call get_state(config).

snapshot = graph.get_state(config)
snapshot
StateSnapshot(values={'messages': [HumanMessage(content='Hi there! My name is Will.', additional_kwargs={}, response_metadata={}, id='8c1ca919-c553-4ebf-95d4-b59a2d61e078'), AIMessage(content="Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?", additional_kwargs={}, response_metadata={'id': 'msg_01WTQebPhNwmMrmmWojJ9KXJ', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 405, 'output_tokens': 32}}, id='run-58587b77-8c82-41e6-8a90-d62c444a261d-0', usage_metadata={'input_tokens': 405, 'output_tokens': 32, 'total_tokens': 437}), HumanMessage(content='Remember my name?', additional_kwargs={}, response_metadata={}, id='daba7df6-ad75-4d6b-8057-745881cea1ca'), AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}, next=(), config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-93e0-6acc-8004-f2ac846575d2'}}, metadata={'source': 'loop', 'writes': {'chatbot': {'messages': [AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? 
I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}}, 'step': 4, 'parents': {}}, created_at='2024-09-27T19:30:10.820758+00:00', parent_config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-859f-6206-8003-e1bd3c264b8f'}}, tasks=())
snapshot.next  # (since the graph ended this turn, `next` is empty. If you fetch a state from within a graph invocation, next tells which node will execute next)

The snapshot above contains the current state values, the corresponding config, and the next node to process. In our case, the graph has reached the END state, so next is empty.
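Those three pieces can be thought of as a small record, roughly like the following. This is a simplified sketch for intuition only; the hypothetical `Snapshot` class below is not the real StateSnapshot, which carries additional fields such as metadata, created_at, and tasks.

```python
from typing import NamedTuple


class Snapshot(NamedTuple):
    """Simplified sketch of the fields discussed above."""
    values: dict  # current channel values, e.g. {"messages": [...]}
    next: tuple   # nodes that would run next; empty once the graph has ended
    config: dict  # config (thread_id, checkpoint_id) identifying this checkpoint


snapshot = Snapshot(
    values={"messages": ["Hi there! My name is Will.", "Hello Will! ..."]},
    next=(),  # the graph reached END this turn, so nothing is left to execute
    config={"configurable": {"thread_id": "1"}},
)
print(snapshot.next)  # ()
```

Fetching a snapshot mid-invocation would instead show the pending node(s) in `next`, which is what makes snapshots useful for interrupting and resuming a run.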

Congratulations! Your chatbot can now maintain conversation state across sessions thanks to LangGraph's checkpointing system. This opens up exciting possibilities for more natural, contextual interactions. LangGraph's checkpointing even handles arbitrarily complex graph states, which is much more expressive and powerful than simple chat memory.

Check out the code snippet below to review the graph from this tutorial:

pip install -U "langchain[openai]"
import os
from langchain.chat_models import init_chat_model

os.environ["OPENAI_API_KEY"] = "sk-..."

llm = init_chat_model("openai:gpt-4.1")

👉 Read the OpenAI integration docs

pip install -U "langchain[anthropic]"
import os
from langchain.chat_models import init_chat_model

os.environ["ANTHROPIC_API_KEY"] = "sk-..."

llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")

👉 Read the Anthropic integration docs

pip install -U "langchain[openai]"
import os
from langchain.chat_models import init_chat_model

os.environ["AZURE_OPENAI_API_KEY"] = "..."
os.environ["AZURE_OPENAI_ENDPOINT"] = "..."
os.environ["OPENAI_API_VERSION"] = "2025-03-01-preview"

llm = init_chat_model(
    "azure_openai:gpt-4.1",
    azure_deployment=os.environ["AZURE_OPENAI_DEPLOYMENT_NAME"],
)

👉 Read the Azure integration docs

pip install -U "langchain[google-genai]"
import os
from langchain.chat_models import init_chat_model

os.environ["GOOGLE_API_KEY"] = "..."

llm = init_chat_model("google_genai:gemini-2.0-flash")

👉 Read the Google GenAI integration docs

pip install -U "langchain[aws]"
from langchain.chat_models import init_chat_model

# Follow the steps here to configure your credentials:
# https://docs.aws.amazon.com/bedrock/latest/userguide/getting-started.html

llm = init_chat_model(
    "anthropic.claude-3-5-sonnet-20240620-v1:0",
    model_provider="bedrock_converse",
)

👉 Read the AWS Bedrock integration docs

from typing import Annotated

from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

class State(TypedDict):
    messages: Annotated[list, add_messages]

graph_builder = StateGraph(State)

tool = TavilySearch(max_results=2)
tools = [tool]
llm_with_tools = llm.bind_tools(tools)

def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}

graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

Next steps

In the next tutorial, you will add human-in-the-loop capabilities to the chatbot, to handle situations where it may need guidance or validation before proceeding.