How to delete messages¶
One of the common states for a graph is a list of messages. Usually you only add messages to that state. Sometimes, however, you may want to remove messages (either by directly modifying the state or as part of the graph). To do that, you can use the RemoveMessage modifier. In this guide, we will cover how to do that.
The key idea is that every state key has a reducer. The reducer specifies how updates to that key are combined. The default MessagesState has a messages key, and the reducer for that key accepts RemoveMessage modifiers; the reducer then uses those RemoveMessage objects to delete messages from the key.
So note that just because your graph state has a key that is a list of messages, it does not mean that RemoveMessage modifiers will work: you also need a reducer defined that knows how to handle them.
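For illustration, here is a minimal sketch of a custom state that wires in the prebuilt add_messages reducer; this is essentially what MessagesState already does for its messages key, and it is what makes RemoveMessage updates take effect:
from typing import Annotated
from typing_extensions import TypedDict
from langgraph.graph.message import add_messages

class State(TypedDict):
    # add_messages appends normal messages and, when it sees a RemoveMessage,
    # deletes the stored message whose id matches.
    messages: Annotated[list, add_messages]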
Note: many models expect certain rules around lists of messages. For example, some expect the list to start with a user message, while others expect all messages with tool calls to be followed by a tool message. When deleting messages, you will need to make sure you don't violate these rules.
Setup¶
First, let's build a simple graph that uses messages. Note that it's using MessagesState, which has the required reducer.
Next, we need to set the API key for Anthropic (the LLM we will use).
import getpass
import os
def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")
_set_env("ANTHROPIC_API_KEY")
Set up LangSmith for LangGraph development
Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. Read more here about how to get started.
Build the agent¶
Now, let's build a simple ReAct-style agent.
from typing import Literal
from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import MessagesState, StateGraph, START, END
from langgraph.prebuilt import ToolNode
memory = MemorySaver()
@tool
def search(query: str):
    """Call to surf the web."""
    # This is a placeholder for the actual implementation
    # Don't let the LLM know this though 😊
    return "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
tools = [search]
tool_node = ToolNode(tools)
model = ChatAnthropic(model_name="claude-3-haiku-20240307")
bound_model = model.bind_tools(tools)
def should_continue(state: MessagesState):
    """Return the next node to execute."""
    last_message = state["messages"][-1]
    # If there is no function call, then we finish
    if not last_message.tool_calls:
        return END
    # Otherwise if there is, we continue
    return "action"
# Define the function that calls the model
def call_model(state: MessagesState):
    response = bound_model.invoke(state["messages"])
    # We return a list, because this will get added to the existing list
    return {"messages": response}
# Define a new graph
workflow = StateGraph(MessagesState)
# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.add_edge(START, "agent")
# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Next, we pass in the path map - all the possible nodes this edge could go to
    ["action", END],
)
# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("action", "agent")
# Finally, we compile it!
# This compiles it into a LangChain Runnable,
# meaning you can use it as you would any other runnable
app = workflow.compile(checkpointer=memory)
API Reference: ChatAnthropic | tool | MemorySaver | StateGraph | START | END | ToolNode
from langchain_core.messages import HumanMessage
config = {"configurable": {"thread_id": "2"}}
input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
input_message = HumanMessage(content="what's my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    event["messages"][-1].pretty_print()
API Reference: HumanMessage
================================ Human Message =================================
hi! I'm bob
================================== Ai Message ==================================
It's nice to meet you, Bob! I'm an AI assistant created by Anthropic. I'm here to help out with any questions or tasks you might have. Please let me know if there's anything I can assist you with.
================================ Human Message =================================
what's my name?
================================== Ai Message ==================================
You said your name is Bob.
Manually deleting messages¶
First, we will cover how to manually delete messages. Let's take a look at the current state of the thread:
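For example, we can read the checkpointed state via the compiled graph's get_state method and pull out the message list (we will reference the message ids below):
messages = app.get_state(config).values["messages"]
messages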
[HumanMessage(content="hi! I'm bob", additional_kwargs={}, response_metadata={}, id='db576005-3a60-4b3b-8925-dc602ac1c571'),
AIMessage(content="It's nice to meet you, Bob! I'm an AI assistant created by Anthropic. I'm here to help out with any questions or tasks you might have. Please let me know if there's anything I can assist you with.", additional_kwargs={}, response_metadata={'id': 'msg_01BKAnYxmoC6bQ9PpCuHk8ZT', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 12, 'output_tokens': 52}}, id='run-3a60c536-b207-4c56-98f3-03f94d49a9e4-0', usage_metadata={'input_tokens': 12, 'output_tokens': 52, 'total_tokens': 64}),
HumanMessage(content="what's my name?", additional_kwargs={}, response_metadata={}, id='2088c465-400b-430b-ad80-fad47dc1f2d6'),
AIMessage(content='You said your name is Bob.', additional_kwargs={}, response_metadata={'id': 'msg_013UWTLTzwZi81vke8mMQ2KP', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 72, 'output_tokens': 10}}, id='run-3a6883be-0c52-4938-af98-e9e7476659eb-0', usage_metadata={'input_tokens': 72, 'output_tokens': 10, 'total_tokens': 82})]
We can call update_state and pass in the id of the first message. This will delete that message.
from langchain_core.messages import RemoveMessage
app.update_state(config, {"messages": RemoveMessage(id=messages[0].id)})
API Reference: RemoveMessage
{'configurable': {'thread_id': '2',
'checkpoint_ns': '',
'checkpoint_id': '1ef75157-f251-6a2a-8005-82a86a6593a0'}}
If we now look at the messages, we can verify that the first one was deleted.
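For example, re-fetching the state the same way shows the updated list:
messages = app.get_state(config).values["messages"]
messages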
[AIMessage(content="It's nice to meet you, Bob! I'm Claude, an AI assistant created by Anthropic. How can I assist you today?", response_metadata={'id': 'msg_01XPSAenmSqK8rX2WgPZHfz7', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 12, 'output_tokens': 32}}, id='run-1c69af09-adb1-412d-9010-2456e5a555fb-0', usage_metadata={'input_tokens': 12, 'output_tokens': 32, 'total_tokens': 44}),
HumanMessage(content="what's my name?", id='f3c71afe-8ce2-4ed0-991e-65021f03b0a5'),
AIMessage(content='Your name is Bob, as you introduced yourself at the beginning of our conversation.', response_metadata={'id': 'msg_01BPZdwsjuMAbC1YAkqawXaF', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 52, 'output_tokens': 19}}, id='run-b2eb9137-2f4e-446f-95f5-3d5f621a2cf8-0', usage_metadata={'input_tokens': 52, 'output_tokens': 19, 'total_tokens': 71})]
Programmatically deleting messages¶
We can also delete messages programmatically from inside the graph. Here we will modify the graph so that, at the end of a graph run, it deletes any old messages (older than the last three).
from langchain_core.messages import RemoveMessage
from langgraph.graph import END
def delete_messages(state):
    messages = state["messages"]
    if len(messages) > 3:
        return {"messages": [RemoveMessage(id=m.id) for m in messages[:-3]]}
# We need to modify the logic to call delete_messages rather than end right away
def should_continue(state: MessagesState) -> Literal["action", "delete_messages"]:
    """Return the next node to execute."""
    last_message = state["messages"][-1]
    # If there is no function call, then we call our delete_messages function
    if not last_message.tool_calls:
        return "delete_messages"
    # Otherwise if there is, we continue
    return "action"
# Define a new graph
workflow = StateGraph(MessagesState)
workflow.add_node("agent", call_model)
workflow.add_node("action", tool_node)
# This is our new node we're defining
workflow.add_node(delete_messages)
workflow.add_edge(START, "agent")
workflow.add_conditional_edges(
    "agent",
    should_continue,
)
workflow.add_edge("action", "agent")
# This is the new edge we're adding: after we delete messages, we finish
workflow.add_edge("delete_messages", END)
app = workflow.compile(checkpointer=memory)
API Reference: RemoveMessage | END
We can now try this out. We can call the graph twice and then check the state:
from langchain_core.messages import HumanMessage
config = {"configurable": {"thread_id": "3"}}
input_message = HumanMessage(content="hi! I'm bob")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    print([(message.type, message.content) for message in event["messages"]])
input_message = HumanMessage(content="what's my name?")
for event in app.stream({"messages": [input_message]}, config, stream_mode="values"):
    print([(message.type, message.content) for message in event["messages"]])
API Reference: HumanMessage
[('human', "hi! I'm bob")]
[('human', "hi! I'm bob"), ('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you.")]
[('human', "hi! I'm bob"), ('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you."), ('human', "what's my name?")]
[('human', "hi! I'm bob"), ('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you."), ('human', "what's my name?"), ('ai', 'You said your name is Bob, so that is the name I have for you.')]
[('ai', "Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you."), ('human', "what's my name?"), ('ai', 'You said your name is Bob, so that is the name I have for you.')]
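If we check the state now (again via get_state, for example), we can see that only three messages remain. This is because we just deleted the earlier ones; otherwise there would be four:
messages = app.get_state(config).values["messages"]
messages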
[AIMessage(content="Hello Bob! It's nice to meet you. I'm an AI assistant created by Anthropic. I'm here to help with any questions or tasks you might have. Please let me know how I can assist you.", response_metadata={'id': 'msg_01XPEgPPbcnz5BbGWUDWTmzG', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 12, 'output_tokens': 48}}, id='run-eded3820-b6a9-4d66-9210-03ca41787ce6-0', usage_metadata={'input_tokens': 12, 'output_tokens': 48, 'total_tokens': 60}),
HumanMessage(content="what's my name?", id='a0ea2097-3280-402b-92e1-67177b807ae8'),
AIMessage(content='You said your name is Bob, so that is the name I have for you.', response_metadata={'id': 'msg_01JGT62pxhrhN4SykZ57CSjW', 'model': 'claude-3-haiku-20240307', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 68, 'output_tokens': 20}}, id='run-ace3519c-81f8-45fe-a777-91f42d48b3a3-0', usage_metadata={'input_tokens': 68, 'output_tokens': 20, 'total_tokens': 88})]
Remember, when deleting messages you will want to make sure that the remaining message list is still valid. This message list may actually not be valid, because it currently starts with an AI message, which some models do not allow.
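As one possible guard (a sketch, not part of the original guide, and assuming the same MessagesState setup as above), the deletion node could trim to the last few messages and then keep dropping from the front until the remaining history starts with a human message:
from langchain_core.messages import HumanMessage, RemoveMessage

def delete_messages_safely(state):
    """Hypothetical variant of `delete_messages` that keeps the history valid."""
    messages = state["messages"]
    if len(messages) <= 3:
        return {"messages": []}
    keep = messages[-3:]
    # Drop leading messages until the kept history starts with a human message,
    # since some models reject conversations that begin with an AI message.
    while keep and not isinstance(keep[0], HumanMessage):
        keep = keep[1:]
    keep_ids = {m.id for m in keep}
    return {"messages": [RemoveMessage(id=m.id) for m in messages if m.id not in keep_ids]}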