
# How to view and update past graph state

## Prerequisites

This guide assumes familiarity with the following concepts.

Once you start checkpointing your graphs, you can easily get or update the state of the agent at any point in time. This permits a few things:

  1. You can surface a state during an interrupt to a user to let them accept an action.
  2. You can rewind the graph to reproduce or avoid issues.
  3. You can modify the state to embed your agent into a larger system, or to let the user better control its actions.
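The three capabilities above can be pictured with a hypothetical, dependency-free sketch (this is not the LangGraph API; `Timeline`, `save`, `get`, and `update` are made-up names for illustration): every step is stored as a snapshot, so you can read any past state, or "branch" by applying an update on top of an earlier snapshot.

```typescript
// Hypothetical sketch of checkpoint semantics - NOT the LangGraph API.
type Snapshot = { id: number; parent?: number; values: string[] };

class Timeline {
  private snaps: Snapshot[] = [];

  // Save a new snapshot, optionally recording which snapshot it came from.
  save(values: string[], parent?: number): number {
    const id = this.snaps.length;
    this.snaps.push({ id, parent, values });
    return id;
  }

  // Read any past state.
  get(id: number): Snapshot {
    return this.snaps[id];
  }

  // Apply an update on top of an *earlier* snapshot, forking the history
  // instead of overwriting it.
  update(id: number, extra: string): number {
    return this.save([...this.get(id).values, extra], id);
  }
}

const t = new Timeline();
const first = t.save(["Hi I'm Jo."]);
const second = t.save([...t.get(first).values, "Hello, Jo!"], first);
// Branch off the first snapshot - the original second snapshot is untouched.
const fork = t.update(first, "What's the weather like in SF currently?");
console.log(t.get(second).values.length, t.get(fork).parent); // → 2 0
```

A real checkpointer additionally keys snapshots by thread and checkpoint IDs, which is what the `config` objects later in this guide carry.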

The key methods used for this functionality are `getState` and `updateState`.

Note: this requires passing in a checkpointer.


This works for
<a href="/langgraphjs/reference/classes/langgraph.StateGraph.html">StateGraph</a>
and all its subclasses, such as
<a href="/langgraphjs/reference/classes/langgraph.MessageGraph.html">MessageGraph</a>.

Below is an example.

<div class="admonition tip">
    <p class="admonition-title">Note</p>
    <p>
        In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the <code>createReactAgent(model, tools=tool, checkpointer=checkpointer)</code> (<a href="/langgraphjs/reference/functions/langgraph_prebuilt.createReactAgent.html">API doc</a>) constructor. This may be more appropriate if you are used to LangChain's <a href="https://js.langchain.ac.cn/docs/how_to/agent_executor">AgentExecutor</a> class.
    </p>
</div>

## Setup

This guide will use OpenAI's GPT-4o model. We will optionally set our API key
for [LangSmith tracing](https://smith.langchain.com/), which will give us
best-in-class observability.


```typescript
// process.env.OPENAI_API_KEY = "sk_...";

// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls__...";
process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT = "Time Travel: LangGraphJS";
```

Time Travel: LangGraphJS

## Define the state

The state is the interface between all of the nodes in our graph.

```typescript
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

const StateAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
  }),
});
```
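A simplified, dependency-free sketch of what that `reducer` means: each node returns a partial state, and the reducer merges it into the existing channel value (here, by appending) rather than overwriting it. Plain strings stand in for `BaseMessage` objects.

```typescript
// Same shape as the reducer above, with strings standing in for messages.
const reducer = (x: string[], y: string[]) => x.concat(y);

let channel: string[] = [];
for (const update of [["Hi I'm Jo."], ["Hello, Jo! How can I assist you today?"]]) {
  channel = reducer(channel, update); // each update is appended, never replaces
}
console.log(channel.length); // → 2
```

This append-only behavior is why, later in this guide, each checkpoint's `messages` value grows as the conversation proceeds.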

## Set up the tools

We will first define the tools we want to use. For this simple example, we will use a placeholder search engine. It is really easy to create your own tools, though - see the documentation on how to do that.

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchTool = tool(async (_) => {
  // This is a placeholder for the actual implementation
  return "Cold, with a low of 13 ℃";
}, {
  name: "search",
  description:
    "Use to surf the web, fetch current information, check the weather, and retrieve other information.",
  schema: z.object({
    query: z.string().describe("The query to use in your search."),
  }),
});

await searchTool.invoke({ query: "What's the weather like?" });

const tools = [searchTool];
```

We can now wrap these tools in a simple ToolNode.


This object will actually run the tools (functions) whenever they are invoked by our LLM.

```typescript
import { ToolNode } from "@langchain/langgraph/prebuilt";

const toolNode = new ToolNode(tools);
```

## Set up the model

Now we will load the chat model:

  1. It should work with messages. We will represent all agent state in the form of messages, so it needs to be able to work well with them.
  2. It should work with tool calling, meaning it can return function arguments in its response.


Note: these model requirements are not general requirements for using LangGraph - they are just requirements for this example.

```typescript
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });
```

After we've done that, we should make sure the model knows that it has these tools available to call. We can do this by calling `bindTools`.

```typescript
const boundModel = model.bindTools(tools);
```

## Define the graph

We can now put it all together. Time travel requires a checkpointer to save the state - otherwise you wouldn't have anything to `get` or `update`. We will use the MemorySaver, which "saves" checkpoints in-memory.


```typescript
import { END, START, StateGraph } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";
import { RunnableConfig } from "@langchain/core/runnables";
import { MemorySaver } from "@langchain/langgraph";

const routeMessage = (state: typeof StateAnnotation.State) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  // If no tools are called, we can finish (respond to the user)
  if (!lastMessage?.tool_calls?.length) {
    return END;
  }
  // Otherwise if there is, we continue and call the tools
  return "tools";
};

const callModel = async (
  state: typeof StateAnnotation.State,
  config?: RunnableConfig,
): Promise<Partial<typeof StateAnnotation.State>> => {
  const { messages } = state;
  const response = await boundModel.invoke(messages, config);
  return { messages: [response] };
};

const workflow = new StateGraph(StateAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", routeMessage)
  .addEdge("tools", "agent");

// Here we only save in-memory
let memory = new MemorySaver();
const graph = workflow.compile({ checkpointer: memory });
```
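The control flow this wiring encodes can be sketched in a dependency-free way (a hypothetical illustration, not the compiled graph's actual internals; `nextNode` and `SimpleMessage` are made-up names): after "agent" runs, we route on whether the last message has tool calls, and "tools" always hands control back to "agent".

```typescript
// Hypothetical sketch of the routing encoded by the graph above.
type SimpleMessage = { content: string; toolCalls?: string[] };

const nextNode = (current: string, last: SimpleMessage): string => {
  if (current === "agent") {
    // Mirrors routeMessage: no tool calls means the run is finished.
    return last.toolCalls?.length ? "tools" : "__end__";
  }
  if (current === "tools") return "agent"; // addEdge("tools", "agent")
  return "agent"; // addEdge(START, "agent")
};

console.log(nextNode("agent", { content: "", toolCalls: ["search"] })); // → tools
console.log(nextNode("agent", { content: "Hello, Jo!" })); // → __end__
```

Because a checkpoint is written after each of these steps, every hop in this loop becomes a state you can later inspect, resume from, or branch off.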

## Interacting with the agent

We can now interact with the agent. Between interactions you can get and update state.

```typescript
let config = { configurable: { thread_id: "conversation-num-1" } };
let inputs = { messages: [{ role: "user", content: "Hi I'm Jo." }] } as any;
for await (
  const { messages } of await graph.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
Hi I'm Jo.
-----

Hello, Jo! How can I assist you today?
-----
See the LangSmith example run here: https://smith.langchain.com/public/b3feb09b-bcd2-4ad5-ad1d-414106148448/r

Here you can see the "agent" node ran, and then our edge returned `__end__`, so the graph stopped execution there.

Let's check the current graph state.

```typescript
let checkpoint = await graph.getState(config);
checkpoint.values;
```
{
  messages: [
    { role: 'user', content: "Hi I'm Jo." },
    AIMessage {
      "id": "chatcmpl-A3FGf3k3QQo9q0QjT6Oc5h1XplkHr",
      "content": "Hello, Jo! How can I assist you today?",
      "additional_kwargs": {},
      "response_metadata": {
        "tokenUsage": {
          "completionTokens": 12,
          "promptTokens": 68,
          "totalTokens": 80
        },
        "finish_reason": "stop",
        "system_fingerprint": "fp_fde2829a40"
      },
      "tool_calls": [],
      "invalid_tool_calls": []
    }
  ]
}
The current state is the two messages we've seen above: 1. the HumanMessage we sent in, and 2. the AIMessage we got back from the model.

The `next` values are empty since the graph has terminated (transitioned to `__end__`).

```typescript
checkpoint.next;
```

[]

## Let's get it to execute a tool

When we call the graph again, it will create a checkpoint after each internal execution step. Let's get it to run a tool, then look at the checkpoint.

```typescript
inputs = { messages: [{ role: "user", content: "What's the weather like in SF currently?" }] } as any;
for await (
  const { messages } of await graph.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
What's the weather like in SF currently?
-----
[
  {
    name: 'search',
    args: { query: 'current weather in San Francisco' },
    type: 'tool_call',
    id: 'call_ZtmtDOyEXDCnXDgowlit5dSd'
  }
]
-----

Cold, with a low of 13 ℃
-----

The current weather in San Francisco is cold, with a low of 13°C.
-----
See the trace of the above execution here: https://smith.langchain.com/public/0ef426fd-0da1-4c02-a50b-64ae1e68338e/r

We can see it planned the tool execution (the "agent" node), then the "should_continue" edge returned "continue", so we proceeded to the "action" node, which executed the tool, and then the "agent" node emitted the final response, which made the "should_continue" edge return "end". Let's see how we can have more control over this.

### Pause before tools

If you notice below, we now will add `interruptBefore=["action"]` - this means that before any actions are taken we pause. This is a great moment to allow the user to correct and update the state! This is very useful when you want to have a human-in-the-loop to validate (and potentially change) the action to take.

```typescript
memory = new MemorySaver();
const graphWithInterrupt = workflow.compile({
  checkpointer: memory,
  interruptBefore: ["tools"],
});

inputs = { messages: [{ role: "user", content: "What's the weather like in SF currently?" }] } as any;
for await (
  const { messages } of await graphWithInterrupt.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
}
What's the weather like in SF currently?
-----

[
  {
    name: 'search',
    args: { query: 'current weather in San Francisco' },
    type: 'tool_call',
    id: 'call_OsKnTv2psf879eeJ9vx5GeoY'
  }
]
-----

## Get State

You can fetch the latest graph checkpoint using `getState(config)`.

```typescript
let snapshot = await graphWithInterrupt.getState(config);
snapshot.next;
```

[ 'tools' ]

## Resume

You can resume by running the graph with a `null` input. The checkpoint is loaded, and with no new inputs, it will execute as if no interrupt had occurred.

```typescript
for await (
  const { messages } of await graphWithInterrupt.stream(null, {
    ...snapshot.config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
Cold, with a low of 13 ℃
-----

Currently, it is cold in San Francisco, with a temperature around 13°C (55°F).
-----

## Check the full history

Let's browse the history of this thread, from newest to oldest.

```typescript
let toReplay;
const states = await graphWithInterrupt.getStateHistory(config);
for await (const state of states) {
  console.log(state);
  console.log("--");
  if (state.values?.messages?.length === 2) {
    toReplay = state;
  }
}
if (!toReplay) {
  throw new Error("No state to replay");
}
```
{
  values: {
    messages: [
      [Object],
      AIMessage {
        "id": "chatcmpl-A3FGhKzOZs0GYZ2yalNOCQZyPgbcp",
        "content": "",
        "additional_kwargs": {
          "tool_calls": [
            {
              "id": "call_OsKnTv2psf879eeJ9vx5GeoY",
              "type": "function",
              "function": "[Object]"
            }
          ]
        },
        "response_metadata": {
          "tokenUsage": {
            "completionTokens": 17,
            "promptTokens": 72,
            "totalTokens": 89
          },
          "finish_reason": "tool_calls",
          "system_fingerprint": "fp_fde2829a40"
        },
        "tool_calls": [
          {
            "name": "search",
            "args": {
              "query": "current weather in San Francisco"
            },
            "type": "tool_call",
            "id": "call_OsKnTv2psf879eeJ9vx5GeoY"
          }
        ],
        "invalid_tool_calls": []
      },
      ToolMessage {
        "content": "Cold, with a low of 13 ℃",
        "name": "search",
        "additional_kwargs": {},
        "response_metadata": {},
        "tool_call_id": "call_OsKnTv2psf879eeJ9vx5GeoY"
      },
      AIMessage {
        "id": "chatcmpl-A3FGiYripPKtQLnAK1H3hWLSXQfOD",
        "content": "Currently, it is cold in San Francisco, with a temperature around 13°C (55°F).",
        "additional_kwargs": {},
        "response_metadata": {
          "tokenUsage": {
            "completionTokens": 21,
            "promptTokens": 105,
            "totalTokens": 126
          },
          "finish_reason": "stop",
          "system_fingerprint": "fp_fde2829a40"
        },
        "tool_calls": [],
        "invalid_tool_calls": []
      }
    ]
  },
  next: [],
  tasks: [],
  metadata: { source: 'loop', writes: { agent: [Object] }, step: 3 },
  config: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-9c3a-6bd1-8003-d7f030ff72b2'
    }
  },
  createdAt: '2024-09-03T04:17:20.653Z',
  parentConfig: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-9516-6200-8002-43d2c6dc603f'
    }
  }
}
--
{
  values: {
    messages: [
      [Object],
      AIMessage {
        "id": "chatcmpl-A3FGhKzOZs0GYZ2yalNOCQZyPgbcp",
        "content": "",
        "additional_kwargs": {
          "tool_calls": [
            {
              "id": "call_OsKnTv2psf879eeJ9vx5GeoY",
              "type": "function",
              "function": "[Object]"
            }
          ]
        },
        "response_metadata": {
          "tokenUsage": {
            "completionTokens": 17,
            "promptTokens": 72,
            "totalTokens": 89
          },
          "finish_reason": "tool_calls",
          "system_fingerprint": "fp_fde2829a40"
        },
        "tool_calls": [
          {
            "name": "search",
            "args": {
              "query": "current weather in San Francisco"
            },
            "type": "tool_call",
            "id": "call_OsKnTv2psf879eeJ9vx5GeoY"
          }
        ],
        "invalid_tool_calls": []
      },
      ToolMessage {
        "content": "Cold, with a low of 13 ℃",
        "name": "search",
        "additional_kwargs": {},
        "response_metadata": {},
        "tool_call_id": "call_OsKnTv2psf879eeJ9vx5GeoY"
      }
    ]
  },
  next: [ 'agent' ],
  tasks: [
    {
      id: '612efffa-3b16-530f-8a39-fd01c31e7b8b',
      name: 'agent',
      interrupts: []
    }
  ],
  metadata: { source: 'loop', writes: { tools: [Object] }, step: 2 },
  config: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-9516-6200-8002-43d2c6dc603f'
    }
  },
  createdAt: '2024-09-03T04:17:19.904Z',
  parentConfig: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-9455-6410-8001-1c78a97f63e6'
    }
  }
}
--
{
  values: {
    messages: [
      [Object],
      AIMessage {
        "id": "chatcmpl-A3FGhKzOZs0GYZ2yalNOCQZyPgbcp",
        "content": "",
        "additional_kwargs": {
          "tool_calls": [
            {
              "id": "call_OsKnTv2psf879eeJ9vx5GeoY",
              "type": "function",
              "function": "[Object]"
            }
          ]
        },
        "response_metadata": {
          "tokenUsage": {
            "completionTokens": 17,
            "promptTokens": 72,
            "totalTokens": 89
          },
          "finish_reason": "tool_calls",
          "system_fingerprint": "fp_fde2829a40"
        },
        "tool_calls": [
          {
            "name": "search",
            "args": {
              "query": "current weather in San Francisco"
            },
            "type": "tool_call",
            "id": "call_OsKnTv2psf879eeJ9vx5GeoY"
          }
        ],
        "invalid_tool_calls": []
      }
    ]
  },
  next: [ 'tools' ],
  tasks: [
    {
      id: '767116b0-55b6-5af4-8f74-ce45fb6e31ed',
      name: 'tools',
      interrupts: []
    }
  ],
  metadata: { source: 'loop', writes: { agent: [Object] }, step: 1 },
  config: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-9455-6410-8001-1c78a97f63e6'
    }
  },
  createdAt: '2024-09-03T04:17:19.825Z',
  parentConfig: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-8c4b-6261-8000-c51e5807fbcd'
    }
  }
}
--
{
  values: { messages: [ [Object] ] },
  next: [ 'agent' ],
  tasks: [
    {
      id: '5b0ed7d1-1bb7-5d78-b4fc-7a8ed40e7291',
      name: 'agent',
      interrupts: []
    }
  ],
  metadata: { source: 'loop', writes: null, step: 0 },
  config: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-8c4b-6261-8000-c51e5807fbcd'
    }
  },
  createdAt: '2024-09-03T04:17:18.982Z',
  parentConfig: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-8c4b-6260-ffff-6ec582916c42'
    }
  }
}
--
{
  values: {},
  next: [ '__start__' ],
  tasks: [
    {
      id: 'a4250d5c-d025-5da1-b588-cae2b3f4a8c7',
      name: '__start__',
      interrupts: []
    }
  ],
  metadata: { source: 'input', writes: { messages: [Array] }, step: -1 },
  config: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-8c4b-6260-ffff-6ec582916c42'
    }
  },
  createdAt: '2024-09-03T04:17:18.982Z',
  parentConfig: undefined
}
--

## Replay a past state

To replay from this place we just need to pass its config back to the agent.

```typescript
for await (
  const { messages } of await graphWithInterrupt.stream(null, {
    ...toReplay.config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
Cold, with a low of 13 ℃
-----

The current weather in San Francisco is cold, with a low of 13°C.
-----

## Branch off a past state

Using LangGraph's checkpointing, you can do more than just replay past states. You can branch off previous locations to let the agent explore alternate trajectories, or to let a user "version control" changes in a workflow.

### First, update a previous checkpoint

Updating the state will create a new snapshot by applying the update to the previous checkpoint. Let's add a tool message to simulate calling the tool.

```typescript
const tool_calls =
  toReplay.values.messages[toReplay.values.messages.length - 1].tool_calls;
const branchConfig = await graphWithInterrupt.updateState(
  toReplay.config,
  {
    messages: [
      { role: "tool", content: "It's sunny out, with a high of 38 ℃.", tool_call_id: tool_calls[0].id },
    ],
  },
  // Updates are applied "as if" they were coming from a node. By default,
  // the updates will come from the last node to run. In our case, we want to treat
  // this update as if it came from the tools node, so that the next node to run will be
  // the agent.
  "tools",
);

const branchState = await graphWithInterrupt.getState(branchConfig);
console.log(branchState.values);
console.log(branchState.next);
```
{
  messages: [
    {
      role: 'user',
      content: "What's the weather like in SF currently?"
    },
    AIMessage {
      "id": "chatcmpl-A3FGhKzOZs0GYZ2yalNOCQZyPgbcp",
      "content": "",
      "additional_kwargs": {
        "tool_calls": [
          {
            "id": "call_OsKnTv2psf879eeJ9vx5GeoY",
            "type": "function",
            "function": "[Object]"
          }
        ]
      },
      "response_metadata": {
        "tokenUsage": {
          "completionTokens": 17,
          "promptTokens": 72,
          "totalTokens": 89
        },
        "finish_reason": "tool_calls",
        "system_fingerprint": "fp_fde2829a40"
      },
      "tool_calls": [
        {
          "name": "search",
          "args": {
            "query": "current weather in San Francisco"
          },
          "type": "tool_call",
          "id": "call_OsKnTv2psf879eeJ9vx5GeoY"
        }
      ],
      "invalid_tool_calls": []
    },
    {
      role: 'tool',
      content: "It's sunny out, with a high of 38 ℃.",
      tool_call_id: 'call_OsKnTv2psf879eeJ9vx5GeoY'
    }
  ]
}
[ 'agent' ]

### Now, you can run from this branch

Just use the updated config (containing the new checkpoint ID). The trajectory will follow the new branch.

```typescript
for await (
  const { messages } of await graphWithInterrupt.stream(null, {
    ...branchConfig,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
The current weather in San Francisco is sunny, with a high of 38°C.
-----