How to delete messages

One of the common states for a graph is a list of messages. Usually you only add messages to that state. However, sometimes you may want to remove messages (either by directly modifying the state or as part of the graph). To do that, you can use the RemoveMessage modifier. In this guide, we will go over how to do that.

The key idea is that every state key has a reducer. The reducer specifies how updates to the state are combined. The prebuilt MessagesAnnotation has a messages key, and the reducer for that key accepts these RemoveMessage modifiers. The reducer then uses these RemoveMessages to delete messages from the key.

So note that just because your graph state has a key that is a list of messages, it does not mean that the RemoveMessage modifier will work. You also have to define a reducer that knows how to work with it.
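If you define your own state rather than using the prebuilt MessagesAnnotation, a minimal sketch of such a reducer looks like the following (CustomAnnotation is an illustrative name; messagesStateReducer is the same reducer MessagesAnnotation uses under the hood, and it understands RemoveMessage):

import { Annotation, messagesStateReducer } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

// A custom state annotation whose `messages` channel uses `messagesStateReducer`.
// This reducer appends new messages, replaces messages whose IDs already exist,
// and removes messages when it receives a RemoveMessage with a matching ID.
const CustomAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: messagesStateReducer,
    default: () => [],
  }),
});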

Note: many models expect certain rules around lists of messages. For example, some expect the list to start with a user message, while others expect all messages with tool calls to be followed by a tool message. When deleting messages, you will need to make sure you don't violate these rules.
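For illustration only (this helper is hypothetical and not used later in this guide), a sketch of checking the first of those rules - that the list should start with a human message - might look like this:

import { BaseMessage } from "@langchain/core/messages";

// Hypothetical sanity check: verify that the messages that would remain after a
// deletion still start with a human message. Extend the checks to whatever rules
// the model you are using actually enforces.
function remainingListLooksValid(remaining: BaseMessage[]): boolean {
  return remaining.length > 0 && remaining[0]._getType() === "human";
}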

Setup

First, install the dependencies required for this example:

npm install @langchain/langgraph @langchain/openai @langchain/core zod uuid

Next, we need to set an API key for OpenAI (the LLM we will use):

process.env.OPENAI_API_KEY = 'YOUR_API_KEY';

Optionally, we can set an API key for LangSmith tracing, which will give us best-in-class observability.

process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_API_KEY = "YOUR_API_KEY";

Now, let's build a simple graph that uses messages.

Build the agent

Let's now build a simple ReAct-style agent.

import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { MemorySaver } from "@langchain/langgraph-checkpoint";
import { MessagesAnnotation, StateGraph, START, END } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { z } from "zod";

const memory = new MemorySaver();

const search = tool((_) => {
  // This is a placeholder for the actual implementation
  // Don't let the LLM know this though 😊
  return [
    "It's sunny in San Francisco, but you better look out if you're a Gemini 😈.",
  ];
}, {
  name: "search",
  description: "Call to surf the web.",
  schema: z.object({
    query: z.string(),
  })
});

const tools = [search];
const toolNode = new ToolNode<typeof MessagesAnnotation.State>(tools);
const model = new ChatOpenAI({ model: "gpt-4o" });
const boundModel = model.bindTools(tools);

function shouldContinue(state: typeof MessagesAnnotation.State): "action" | typeof END {
  const lastMessage = state.messages[state.messages.length - 1];
  if (
    "tool_calls" in lastMessage &&
    Array.isArray(lastMessage.tool_calls) &&
    lastMessage.tool_calls.length
  ) {
    return "action";
  }
  // If there is no tool call, then we finish
  return END;
}

// Define the function that calls the model
async function callModel(state: typeof MessagesAnnotation.State) {
  const response = await boundModel.invoke(state.messages);
  return { messages: [response] };
}

// Define a new graph
const workflow = new StateGraph(MessagesAnnotation)
  // Define the two nodes we will cycle between
  .addNode("agent", callModel)
  .addNode("action", toolNode)
  // Set the entrypoint as `agent`
  // This means that this node is the first one called
  .addEdge(START, "agent")
  // We now add a conditional edge
  .addConditionalEdges(
    // First, we define the start node. We use `agent`.
    // This means these are the edges taken after the `agent` node is called.
    "agent",
    // Next, we pass in the function that will determine which node is called next.
    shouldContinue
  )
  // We now add a normal edge from `tools` to `agent`.
  // This means that after `tools` is called, `agent` node is called next.
  .addEdge("action", "agent");

// Finally, we compile it!
// This compiles it into a LangChain Runnable,
// meaning you can use it as you would any other runnable
const app = workflow.compile({ checkpointer: memory });

import { HumanMessage } from "@langchain/core/messages";
import { v4 as uuidv4 } from "uuid";

const config = { configurable: { thread_id: "2" }, streamMode: "values" as const };
const inputMessage = new HumanMessage({
  id: uuidv4(),
  content: "hi! I'm bob",
});

for await (const event of await app.stream(
  { messages: [inputMessage] },
  config,
)) {
  const lastMsg = event.messages[event.messages.length - 1];
  console.dir(
    {
      type: lastMsg._getType(),
      content: lastMsg.content,
      tool_calls: lastMsg.tool_calls,
    },
    { depth: null }
  )
}

const inputMessage2 = new HumanMessage({
  id: uuidv4(),
  content: "What's my name?",
});
for await (const event of await app.stream(
  { messages: [inputMessage2] },
  config,
)) {
  const lastMsg = event.messages[event.messages.length - 1];
  console.dir(
    {
      type: lastMsg._getType(),
      content: lastMsg.content,
      tool_calls: lastMsg.tool_calls,
    },
    { depth: null }
  )
}
{ type: 'human', content: "hi! I'm bob", tool_calls: undefined }
{
  type: 'ai',
  content: 'Hi Bob! How can I assist you today?',
  tool_calls: []
}
{ type: 'human', content: "What's my name?", tool_calls: undefined }
{ type: 'ai', content: 'Your name is Bob.', tool_calls: [] }

Manually deleting messages

First, we will cover how to manually delete messages. Let's take a look at the current state of the thread:

const messages = (await app.getState(config)).values.messages;
console.dir(
  messages.map((msg) => ({
    id: msg.id,
    type: msg._getType(),
    content: msg.content,
    tool_calls: msg.tool_calls,
  })),
  { depth: null }
);
[
  {
    id: '24187daa-00dd-40d8-bc30-f4e24ff78165',
    type: 'human',
    content: "hi! I'm bob",
    tool_calls: undefined
  },
  {
    id: 'chatcmpl-9zYV9yHLiZmR2ZVHEhHcbVEshr3qG',
    type: 'ai',
    content: 'Hi Bob! How can I assist you today?',
    tool_calls: []
  },
  {
    id: 'a67e53c3-5dcf-4ddc-83f5-309b72ac61f4',
    type: 'human',
    content: "What's my name?",
    tool_calls: undefined
  },
  {
    id: 'chatcmpl-9zYV9mmpJrm3SQ7ngMJZ1XBHzHfL6',
    type: 'ai',
    content: 'Your name is Bob.',
    tool_calls: []
  }
]

We can call updateState and pass in the ID of the first message. This will delete that message.

import { RemoveMessage } from "@langchain/core/messages";

await app.updateState(config, { messages: new RemoveMessage({ id: messages[0].id }) })
{
  configurable: {
    thread_id: '2',
    checkpoint_ns: '',
    checkpoint_id: '1ef61abf-1fc2-6431-8005-92730e9d667c'
  }
}

If we now look at the messages, we can verify that the first one was deleted.

const updatedMessages = (await app.getState(config)).values.messages;
console.dir(
  updatedMessages.map((msg) => ({
    id: msg.id,
    type: msg._getType(),
    content: msg.content,
    tool_calls: msg.tool_calls,
  })),
  { depth: null }
);
[
  {
    id: 'chatcmpl-9zYV9yHLiZmR2ZVHEhHcbVEshr3qG',
    type: 'ai',
    content: 'Hi Bob! How can I assist you today?',
    tool_calls: []
  },
  {
    id: 'a67e53c3-5dcf-4ddc-83f5-309b72ac61f4',
    type: 'human',
    content: "What's my name?",
    tool_calls: undefined
  },
  {
    id: 'chatcmpl-9zYV9mmpJrm3SQ7ngMJZ1XBHzHfL6',
    type: 'ai',
    content: 'Your name is Bob.',
    tool_calls: []
  }
]

Programmatically deleting messages

We can also delete messages programmatically from inside the graph. Here, we'll modify the graph so that at the end of a graph run it deletes any old messages (those more than 3 messages back).

import { RemoveMessage } from "@langchain/core/messages";
import { StateGraph, START, END } from "@langchain/langgraph";
import { MessagesAnnotation } from "@langchain/langgraph";

function deleteMessages(state: typeof MessagesAnnotation.State) {
  const messages = state.messages;
  if (messages.length > 3) {
    return { messages: messages.slice(0, -3).map(m => new RemoveMessage({ id: m.id })) };
  }
  return {};
}

// We need to modify the logic to call deleteMessages rather than end right away
function shouldContinue2(state: typeof MessagesAnnotation.State): "action" | "delete_messages" {
  const lastMessage = state.messages[state.messages.length - 1];
  if (
    "tool_calls" in lastMessage &&
    Array.isArray(lastMessage.tool_calls) &&
    lastMessage.tool_calls.length
  ) {
    return "action";
  }
  // Otherwise if there aren't, we finish
  return "delete_messages";
}

// Define a new graph
const workflow2 = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addNode("action", toolNode)
  // This is our new node we're defining
  .addNode("delete_messages", deleteMessages)
  .addEdge(START, "agent")
  .addConditionalEdges(
    "agent",
    shouldContinue2
  )
  .addEdge("action", "agent")
  // This is the new edge we're adding: after we delete messages, we finish
  .addEdge("delete_messages", END);

const app2 = workflow2.compile({ checkpointer: memory });

We can now try this out. We can call the graph twice and then check the state:

import { HumanMessage } from "@langchain/core/messages";
import { v4 as uuidv4 } from "uuid";

const config2 = { configurable: { thread_id: "3" }, streamMode: "values" as const };

const inputMessage3 = new HumanMessage({
  id: uuidv4(),
  content: "hi! I'm bob",
});

console.log("--- FIRST ITERATION ---\n");
for await (const event of await app2.stream(
  { messages: [inputMessage3] },
  config2
)) {
  console.log(event.messages.map((message) => [message._getType(), message.content]));
}

const inputMessage4 = new HumanMessage({
  id: uuidv4(),
  content: "what's my name?",
});

console.log("\n\n--- SECOND ITERATION ---\n");
for await (const event of await app2.stream(
  { messages: [inputMessage4] },
  config2
)) {
  console.log(event.messages.map((message) => [message._getType(), message.content]), "\n");
}
--- FIRST ITERATION ---

[ [ 'human', "hi! I'm bob" ] ]
[
  [ 'human', "hi! I'm bob" ],
  [ 'ai', 'Hi Bob! How can I assist you today?' ]
]


--- SECOND ITERATION ---

[
  [ 'human', "hi! I'm bob" ],
  [ 'ai', 'Hi Bob! How can I assist you today?' ],
  [ 'human', "what's my name?" ]
] 

[
  [ 'human', "hi! I'm bob" ],
  [ 'ai', 'Hi Bob! How can I assist you today?' ],
  [ 'human', "what's my name?" ],
  [ 'ai', "Based on what you've told me, your name is Bob." ]
] 

[
  [ 'ai', 'Hi Bob! How can I assist you today?' ],
  [ 'human', "what's my name?" ],
  [ 'ai', "Based on what you've told me, your name is Bob." ]
]

If we now check the state, we should see that it is only three messages long. This is because we just deleted the earlier messages - otherwise it would be four!

const messages3 = (await app.getState(config2)).values["messages"]
console.dir(
  messages3.map((msg) => ({
    id: msg.id,
    type: msg._getType(),
    content: msg.content,
    tool_calls: msg.tool_calls,
  })),
  { depth: null }
);
[
  {
    id: 'chatcmpl-9zYVAEiiC9D7bb0wF4KLXgY0OAG8O',
    type: 'ai',
    content: 'Hi Bob! How can I assist you today?',
    tool_calls: []
  },
  {
    id: 'b93e5f35-cfa3-4ca6-9b59-154ce2bd476b',
    type: 'human',
    content: "what's my name?",
    tool_calls: undefined
  },
  {
    id: 'chatcmpl-9zYVBHJWtEM6pw2koE8dykzSA0XSO',
    type: 'ai',
    content: "Based on what you've told me, your name is Bob.",
    tool_calls: []
  }
]

Remember, when deleting messages you will want to make sure that the remaining message list is still valid. This message list may actually not be valid - that's because it currently starts with an AI message, which some models do not allow.
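
One way to handle this, sketched below under the assumption that your model requires conversations to start with a human message, is to move the deletion cut-off earlier until the first surviving message is a human message. The deleteMessagesKeepingValidStart function is a hypothetical variant of the deleteMessages node above, not part of the original example:

import { RemoveMessage } from "@langchain/core/messages";
import { MessagesAnnotation } from "@langchain/langgraph";

// A variant of deleteMessages that avoids leaving an AI message at the front.
// It starts from the same "keep the last 3 messages" cut-off, then moves the
// cut-off earlier until the first surviving message is a human message.
function deleteMessagesKeepingValidStart(state: typeof MessagesAnnotation.State) {
  const messages = state.messages;
  if (messages.length <= 3) {
    return {};
  }
  let cutoff = messages.length - 3;
  // Walk the cut-off back until the first remaining message is a human message.
  while (cutoff > 0 && messages[cutoff]._getType() !== "human") {
    cutoff -= 1;
  }
  if (cutoff === 0) {
    return {};
  }
  return {
    messages: messages.slice(0, cutoff).map((m) => new RemoveMessage({ id: m.id })),
  };
}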