How to delete messages¶
A common state for a graph is a list of messages. Usually you only add messages to that state. Sometimes, however, you may want to remove messages (either by directly modifying the state or as part of the graph). To do that, you can use the RemoveMessage modifier. In this guide, we will go over how to do that.
The key idea is that every state key has a reducer. The reducer specifies how updates to that key are combined. The default MessagesState has a messages key, and the reducer for that key accepts RemoveMessage modifiers, using them to delete messages from the key.
Note that just because your graph state has a key that is a list of messages, it does not mean that RemoveMessage modifiers will work. You also have to define a reducer that knows how to use them.
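For example, if you were to define your own state instead of using the prebuilt MessagesState, you would need to annotate the messages key with a reducer such as add_messages (a minimal sketch; this state class is illustrative and is not used in the notebook below):
from typing import Annotated
from typing_extensions import TypedDict

from langgraph.graph.message import add_messages


class CustomState(TypedDict):
    # `add_messages` is the same reducer MessagesState uses under the hood;
    # it knows how to apply RemoveMessage modifiers as well as append messages.
    messages: Annotated[list, add_messages]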
NOTE: many models expect message lists to follow certain rules. For example, some expect them to start with a user message, while others expect all messages with tool calls to be followed by tool messages. When deleting messages, you will want to make sure you don't violate these rules.
Setup¶
First, let's build a simple graph that uses messages. Note that it uses MessagesState, which has the required reducer.
%%capture --no-stderr
%pip install --quiet -U langgraph langchain_anthropic
Next, we need to set the API key for Anthropic (the LLM we will use).
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")
ANTHROPIC_API_KEY: ········
Build the agent¶
Now let's build a simple ReAct-style agent.
from typing import Literal

from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import MessagesState, StateGraph, START, END
from langgraph.prebuilt import ToolNode

memory = MemorySaver()


@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder for the actual implementation
    # Don't let the LLM know this though 😊
    return "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."


tools = [search]
tool_node = ToolNode(tools)
model = ChatAnthropic(model_name="claude-3-haiku-20240307")
bound_model = model.bind_tools(tools)


def should_continue(state: MessagesState):
    """Return the next node to execute."""
    last_message = state["messages"][-1]
    # If there is no function call, then we finish
    if not last_message.tool_calls:
        return END
    # Otherwise if there is, we continue
    return "action"


# Define the function that calls the model
def call_model(state: MessagesState):
    # Use the tool-bound model so the agent can actually emit tool calls
    response = bound_model.invoke(state["messages"])
    # We return a list, because this will get added to the existing list
    return {"messages": response}
# Define a new graph
workflow = StateGraph(MessagesState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Next, we pass in the path map - all the possible nodes this edge could go to
    ["action", END],
)
# We now add a normal edge from `action` to `agent`.
# This means that after `action` is called, the `agent` node is called next.
workflow.add_edge("action", "agent")
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile(checkpointer=memory)
from langchain_core.messages import HumanMessage

config = {"configurable": {"thread_id": "2"}}
input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()

input_message = HumanMessage(content="what's my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
================================ Human Message =================================

hi! I'm bob

================================== Ai Message ==================================

It's nice to meet you, Bob! I'm an AI assistant created by Anthropic. I'm here to help out with any questions or tasks you might have. Please let me know if there's anything I can assist you with.

================================ Human Message =================================

what's my name?

================================== Ai Message ==================================

You said your name is Bob.
Manually deleting messages¶
First, we will cover how to manually delete messages. Let's take a look at the current state of the thread:
messages = app.get_state(config).values["messages"]
messages
[HumanMessage(content="hi! I'm bob", additional_kwargs={}, response_metadata={}, id='db576005-3a60-4b3b-8925-dc602ac1c571'), AIMessage(content="It's nice to meet you, Bob! I'm an AI assistant created by Anthropic. I'm here to help out with any questions or tasks you might have. Please let me know if there's anything I can assist you with.", additional_kwargs={}, response_metadata={'id': 'msg_01BKAnYxmoC6bQ9PpCuHk8ZT', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 12, 'output_tokens': 52}}, id='run-3a60c536-b207-4c56-98f3-03f94d49a9e4-0', usage_metadata={'input_tokens': 12, 'output_tokens': 52, 'total_tokens': 64}), HumanMessage(content="what's my name?", additional_kwargs={}, response_metadata={}, id='2088c465-400b-430b-ad80-fad47dc1f2d6'), AIMessage(content='You said your name is Bob.', additional_kwargs={}, response_metadata={'id': 'msg_013UWTLTzwZi81vke8mMQ2KP', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 72, 'output_tokens': 10}}, id='run-3a6883be-0c52-4938-af98-e9e7476659eb-0', usage_metadata={'input_tokens': 72, 'output_tokens': 10, 'total_tokens': 82})]
We can call update_state and pass in the id of the first message. This will delete that message.
from langchain_core.messages import RemoveMessage
app.update_state(config, {"messages": RemoveMessage(id=messages[0].id)})
{'configurable': {'thread_id': '2', 'checkpoint_ns': '', 'checkpoint_id': '1ef75157-f251-6a2a-8005-82a86a6593a0'}}
If we look at the messages now, we can verify that the first one was deleted.
messages = app.get_state(config).values["messages"]
messages
[AIMessage(content="It's nice to meet you, Bob! I'm Claude, an AI assistant created by Anthropic. How can I assist you today?", response_metadata={'id': 'msg_01XPSAenmSqK8rX2WgPZHfz7', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 12, 'output_tokens': 32}}, id='run-1c69af09-adb1-412d-9010-2456e5a555fb-0', usage_metadata={'input_tokens': 12, 'output_tokens': 32, 'total_tokens': 44}), HumanMessage(content="what's my name?", id='f3c71afe-8ce2-4ed0-991e-65021f03b0a5'), AIMessage(content='Your name is Bob, as you introduced yourself at the beginning of our conversation.', response_metadata={'id': 'msg_01BPZdwsjuMAbC1YAkqawXaF', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 52, 'output_tokens': 19}}, id='run-b2eb9137-2f4e-446f-95f5-3d5f621a2cf8-0', usage_metadata={'input_tokens': 52, 'output_tokens': 19, 'total_tokens': 71})]
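As a side note, the same reducer also accepts a list of RemoveMessage modifiers, so a sketch like the following (illustrative, not executed as part of this guide) should let you delete several messages in a single update_state call:
# Delete the first two messages in one state update (illustrative sketch)
app.update_state(
    config,
    {"messages": [RemoveMessage(id=m.id) for m in messages[:2]]},
)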
Programmatically deleting messages¶
We can also delete messages programmatically from inside the graph. Here we will modify the graph so that, at the end of a graph run, it deletes any old messages (anything earlier than the last 3 messages).
from langchain_core.messages import RemoveMessage
from langgraph.graph import END


def delete_messages(state):
    messages = state["messages"]
    if len(messages) > 3:
        return {"messages": [RemoveMessage(id=m.id) for m in messages[:-3]]}


# We need to modify the logic to call delete_messages rather than end right away
def should_continue(state: MessagesState) -> Literal["action", "delete_messages"]:
    """Return the next node to execute."""
    last_message = state["messages"][-1]
    # If there is no function call, then we call our delete_messages function
    if not last_message.tool_calls:
        return "delete_messages"
    # Otherwise if there is, we continue
    return "action"


# Define a new graph
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)

# This is our new node we're defining
workflow.add_node(delete_messages)

workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
)
workflow.add_edge("action", "agent")

# This is the new edge we're adding: after we delete messages, we finish
workflow.add_edge("delete_messages", END)

app = workflow.compile(checkpointer=memory)
Now we can try this out. We can call the graph twice and then check the state:
from langchain_core.messages import HumanMessage

config = {"configurable": {"thread_id": "3"}}
input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    print([(message.type, message.content) for message in event["messages"]])

input_message = HumanMessage(content="what's my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    print([(message.type, message.content) for message in event["messages"]])
[('human', "hi! I'm bob")]
[('human', "hi! I'm bob"), ('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you.")]
[('human', "hi! I'm bob"), ('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you."), ('human', "what's my name?")]
[('human', "hi! I'm bob"), ('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you."), ('human', "what's my name?"), ('ai', 'You said your name is Bob, so that is the name I have for you.')]
[('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you."), ('human', "what's my name?"), ('ai', 'You said your name is Bob, so that is the name I have for you.')]
If we check the state now, we should see that it is only three messages long. This is because we just deleted the earlier messages; otherwise it would be four!
messages = app.get_state(config).values["messages"]
messages
[AIMessage(content="Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you.", response_metadata={'id': 'msg_01XPEgPPbcnz5BbGWUDWTmzG', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 12, 'output_tokens': 48}}, id='run-eded3820-b6a9-4d66-9210-03ca41787ce6-0', usage_metadata={'input_tokens': 12, 'output_tokens': 48, 'total_tokens': 60}), HumanMessage(content="what's my name?", id='a0ea2097-3280-402b-92e1-67177b807ae8'), AIMessage(content='You said your name is Bob, so that is the name I have for you.', response_metadata={'id': 'msg_01JGT62pxhrhN4SykZ57CSjW', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 68, 'output_tokens': 20}}, id='run-ace3519c-81f8-45fe-a777-91f42d48b3a3-0', usage_metadata={'input_tokens': 68, 'output_tokens': 20, 'total_tokens': 88})]
Remember, when deleting messages you will want to make sure that the remaining message list is still valid. This message list may actually not be! That is because it currently starts with an AI message, which some models do not allow.
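One way to guard against this (a minimal sketch; the helper name is hypothetical and it only enforces the "start with a human message" rule, not the tool-call pairing rule) is to grow the window of kept messages until the remaining list starts with a HumanMessage:
from langchain_core.messages import HumanMessage, RemoveMessage


def delete_messages_safely(state, keep_last: int = 3):
    messages = state["messages"]
    # Grow the kept window until the first remaining message is a HumanMessage
    while keep_last < len(messages) and not isinstance(
        messages[-keep_last], HumanMessage
    ):
        keep_last += 1
    if keep_last >= len(messages):
        # Nothing can be safely deleted
        return {"messages": []}
    return {"messages": [RemoveMessage(id=m.id) for m in messages[:-keep_last]]}
You could swap something like this in for the delete_messages node above if your model requires conversations to start with a user message.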