# How to view and update past graph state

Once you start checkpointing your graphs, you can easily get or update the state of the agent at any point in time. This permits a few things:

- You can surface a state during an interrupt to a user to let them accept an action.
- You can rewind the graph to reproduce or avoid issues.
- You can modify the state to embed your agent into a larger system, or to let the user better control its actions.

The key methods used for this functionality are:

- getState: fetch the values from the target config
- updateState: apply the given values to the target state

Note: this requires passing in a checkpointer.
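To build intuition for these two methods, here is a toy, LangGraph-free sketch. The types and function names below are illustrative only, not the real API: a checkpointer is conceptually a store of state snapshots keyed by thread.

```typescript
// Toy checkpointer: each thread_id maps to a list of state snapshots.
type State = { messages: string[] };
const checkpoints = new Map<string, State[]>();

function saveCheckpoint(threadId: string, state: State): void {
  const history = checkpoints.get(threadId) ?? [];
  history.push({ messages: [...state.messages] });
  checkpoints.set(threadId, history);
}

// getState: fetch the most recent snapshot for a thread.
function getState(threadId: string): State | undefined {
  const history = checkpoints.get(threadId);
  return history?.[history.length - 1];
}

// updateState: apply values on top of the latest snapshot as a *new*
// checkpoint, leaving the earlier snapshots untouched.
function updateState(threadId: string, update: State): void {
  const current = getState(threadId) ?? { messages: [] };
  saveCheckpoint(threadId, {
    messages: [...current.messages, ...update.messages],
  });
}

saveCheckpoint("conversation-num-1", { messages: ["Hi I'm Jo."] });
updateState("conversation-num-1", { messages: ["Hello, Jo!"] });
console.log(getState("conversation-num-1")?.messages.length); // 2
console.log(checkpoints.get("conversation-num-1")?.length);   // 2
```

The important property, which the real checkpointer shares, is that updates append new snapshots rather than overwrite old ones — which is what makes rewinding and branching possible later in this guide.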
This works for
<a href="/langgraphjs/reference/classes/langgraph.StateGraph.html">StateGraph</a>
and all its subclasses, such as
<a href="/langgraphjs/reference/classes/langgraph.MessageGraph.html">MessageGraph</a>.
Below is an example.
<div class="admonition tip">
<p class="admonition-title">Note</p>
<p>
In this how-to, we will create our agent from scratch to be transparent (but verbose). You can accomplish similar functionality using the <code>createReactAgent(model, tools=tool, checkpointer=checkpointer)</code> (<a href="/langgraphjs/reference/functions/langgraph_prebuilt.createReactAgent.html">API doc</a>) constructor. This may be more appropriate if you are used to LangChain's <a href="https://js.langchain.ac.cn/docs/how_to/agent_executor">AgentExecutor</a> class.
</p>
</div>
## Setup
This guide will use OpenAI's GPT-4o model. We will optionally set our API key
for [LangSmith tracing](https://smith.langchain.com/), which will give us
best-in-class observability.
```typescript
// process.env.OPENAI_API_KEY = "sk_...";

// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls__...";
process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT = "Time Travel: LangGraphJS";
```
## Define the state

The state is the interface for all of the nodes in our graph.

```typescript
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";

const StateAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
  }),
});
```
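The `reducer` above is what lets every node append to `messages` instead of overwriting it. The idea in plain TypeScript, with no LangGraph required:

```typescript
// A reducer describes how a node's returned update is merged into the
// existing channel value - here, by concatenating message arrays.
type Reducer<T> = (current: T, update: T) => T;

const messagesReducer: Reducer<string[]> = (x, y) => x.concat(y);

// Simulate two nodes each returning one new message.
let channel: string[] = [];
channel = messagesReducer(channel, ["Hi I'm Jo."]);
channel = messagesReducer(channel, ["Hello, Jo! How can I assist you today?"]);
console.log(channel.length); // 2
```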
## Set up the tools

We will first define the tools we want to use. For this simple example, we will create a placeholder search engine. It is really easy to create your own tools, though - see the documentation on how to do that.

```typescript
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const searchTool = tool(async (_) => {
  // This is a placeholder for the actual implementation
  return "Cold, with a low of 13 ℃";
}, {
  name: "search",
  description:
    "Use to surf the web, fetch current information, check the weather, and retrieve other information.",
  schema: z.object({
    query: z.string().describe("The query to use in your search."),
  }),
});

await searchTool.invoke({ query: "What's the weather like?" });

const tools = [searchTool];
```
We can now wrap these tools in a simple ToolNode. This object will actually run the tools (functions) whenever they are invoked by our LLM.

## Set up the model

Now we will load the chat model.

- It should work with messages. We will represent all agent state in the form of messages, so it needs to be able to work well with them.
- It should work with tool calling, meaning it can return function arguments in its response.

<div class="admonition tip">
  <p class="admonition-title">Note</p>
  <p>
    These model requirements are not general requirements for using LangGraph - they are only requirements for this one example.
  </p>
</div>

After we've done this, we should make sure the model knows that it has these tools available to call. We can do this by calling bindTools.
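The graph in the next section expects a `toolNode` and a `boundModel` to exist. A minimal sketch of their setup, assuming the standard `ToolNode` prebuilt from `@langchain/langgraph/prebuilt` and the `ChatOpenAI` model class, could look like this:

```typescript
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

// Executes any tool calls emitted by the LLM.
const toolNode = new ToolNode(tools);

// Load the chat model and bind the tools so the model knows it may call them.
const model = new ChatOpenAI({ model: "gpt-4o" });
const boundModel = model.bindTools(tools);
```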
## Define the graph

We can now put it all together. Time travel requires a checkpointer to save the state - otherwise you wouldn't have anything to `get` or `update`. We will use the MemorySaver, which "saves" checkpoints in-memory.
```typescript
import { END, START, StateGraph, MemorySaver } from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";
import { RunnableConfig } from "@langchain/core/runnables";

const routeMessage = (state: typeof StateAnnotation.State) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  // If no tools are called, we can finish (respond to the user)
  if (!lastMessage?.tool_calls?.length) {
    return END;
  }
  // Otherwise if there is, we continue and call the tools
  return "tools";
};

const callModel = async (
  state: typeof StateAnnotation.State,
  config?: RunnableConfig,
): Promise<Partial<typeof StateAnnotation.State>> => {
  const { messages } = state;
  const response = await boundModel.invoke(messages, config);
  return { messages: [response] };
};

const workflow = new StateGraph(StateAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge(START, "agent")
  .addConditionalEdges("agent", routeMessage)
  .addEdge("tools", "agent");

// Here we only save in-memory
let memory = new MemorySaver();
const graph = workflow.compile({ checkpointer: memory });
```
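The compiled graph loops agent → tools → agent until the model stops emitting tool calls. The control flow alone, as a plain-TypeScript sketch with scripted model replies:

```typescript
// Scripted model replies: first a tool call, then a final answer.
type Reply = { content: string; toolCalls: string[] };
const replies: Reply[] = [
  { content: "", toolCalls: ["search"] },
  { content: "The weather is cold.", toolCalls: [] },
];

const trace: string[] = [];
let step = 0;
while (true) {
  trace.push("agent");
  const reply = replies[step++];
  // routeMessage: no tool calls means we are done (END).
  if (reply.toolCalls.length === 0) break;
  trace.push("tools"); // otherwise run the tools node and loop back
}
console.log(trace.join(" -> ")); // agent -> tools -> agent
```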
## Interacting with the Agent

We can now interact with the agent. Between interactions you can get and update state.
```typescript
let config = { configurable: { thread_id: "conversation-num-1" } };
let inputs = { messages: [{ role: "user", content: "Hi I'm Jo." }] } as any;
for await (
  const { messages } of await graph.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
Here you can see the "agent" node ran, and then our edge returned `__end__`, so the graph stopped execution there.

Let's check the current graph state.
```
{
  messages: [
    { role: 'user', content: "Hi I'm Jo." },
    AIMessage {
      "id": "chatcmpl-A3FGf3k3QQo9q0QjT6Oc5h1XplkHr",
      "content": "Hello, Jo! How can I assist you today?",
      "additional_kwargs": {},
      "response_metadata": {
        "tokenUsage": {
          "completionTokens": 12,
          "promptTokens": 68,
          "totalTokens": 80
        },
        "finish_reason": "stop",
        "system_fingerprint": "fp_fde2829a40"
      },
      "tool_calls": [],
      "invalid_tool_calls": []
    }
  ]
}
```
The state contains the two messages we saw above: 1. the HumanMessage we sent in, and 2. the AIMessage we got back from the model. The `next` value is empty because the graph has terminated (transitioned to `__end__`).
## Let's get it to execute a tool

When we call the graph again, it will create a checkpoint after each internal execution step. Let's get it to run a tool, then look at the checkpoint.
```typescript
inputs = { messages: [{ role: "user", content: "What's the weather like in SF currently?" }] } as any;
for await (
  const { messages } of await graph.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
```
What's the weather like in SF currently?
-----

[
  {
    name: 'search',
    args: { query: 'current weather in San Francisco' },
    type: 'tool_call',
    id: 'call_ZtmtDOyEXDCnXDgowlit5dSd'
  }
]
-----

Cold, with a low of 13 ℃
-----

The current weather in San Francisco is cold, with a low of 13°C.
-----
```
The "agent" node emitted a tool call, the tools node ran the search, and the agent then emitted a final response, which made the routing edge return `END`. Let's see how we can have a bit more control over this.

### Pause before tools

If you notice below, we now will add `interruptBefore: ["tools"]` - this means that before any tool is executed we pause. This is a great moment to allow the user to correct and update the state! This is very useful when you want to have a human-in-the-loop to validate (and potentially change) the action to take.
```typescript
memory = new MemorySaver();
const graphWithInterrupt = workflow.compile({
  checkpointer: memory,
  interruptBefore: ["tools"],
});

inputs = { messages: [{ role: "user", content: "What's the weather like in SF currently?" }] } as any;
for await (
  const { messages } of await graphWithInterrupt.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
```
What's the weather like in SF currently?
-----

[
  {
    name: 'search',
    args: { query: 'current weather in San Francisco' },
    type: 'tool_call',
    id: 'call_OsKnTv2psf879eeJ9vx5GeoY'
  }
]
-----
```
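What `interruptBefore: ["tools"]` does can be sketched in plain TypeScript: execution stops just before a listed node, after the current state has been checkpointed.

```typescript
const interruptBefore = ["tools"];
// The steps the graph would take for this input.
const plan = ["agent", "tools", "agent"];

const executed: string[] = [];
for (const node of plan) {
  if (interruptBefore.includes(node)) {
    // Pause here; the checkpointed state lets us resume (or edit it) later.
    break;
  }
  executed.push(node);
}
console.log(executed.join(",")); // agent
```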
### Get State

You can fetch the latest graph checkpoint using `getState(config)`.
### Resume

You can resume by running the graph with a `null` input. The checkpoint is loaded, and with no new input, it will execute as if no interrupt had occurred.
```typescript
// Fetch the latest checkpoint first; its config points at the paused state.
const snapshot = await graphWithInterrupt.getState(config);

for await (
  const { messages } of await graphWithInterrupt.stream(null, {
    ...snapshot.config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
```
Cold, with a low of 13 ℃
-----

Currently, it is cold in San Francisco, with a temperature around 13°C (55°F).
-----
```
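Resuming with a `null` input just means "continue from the paused step"; schematically:

```typescript
const plan = ["agent", "tools", "agent"];
const pausedBefore = 1; // we interrupted before the "tools" step

// With no new input, execution picks up exactly where it stopped.
const resumed = plan.slice(pausedBefore);
console.log(resumed.join(",")); // tools,agent
```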
## Check the full history

Let's browse the history of this thread, from newest to oldest.
```typescript
let toReplay;
const states = await graphWithInterrupt.getStateHistory(config);
for await (const state of states) {
  console.log(state);
  console.log("--");
  if (state.values?.messages?.length === 2) {
    toReplay = state;
  }
}
if (!toReplay) {
  throw new Error("No state to replay");
}
```
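The loop above simply walks the snapshots newest-first and remembers the one matching a predicate. With toy data (not the real `StateSnapshot` shape):

```typescript
type Snapshot = { step: number; messages: string[] };

// getStateHistory yields snapshots from newest to oldest.
const history: Snapshot[] = [
  { step: 3, messages: ["question", "tool call", "tool result", "answer"] },
  { step: 2, messages: ["question", "tool call", "tool result"] },
  { step: 1, messages: ["question", "tool call"] },
  { step: 0, messages: ["question"] },
];

let toReplay: Snapshot | undefined;
for (const snapshot of history) {
  // Keep the snapshot with exactly two messages.
  if (snapshot.messages.length === 2) toReplay = snapshot;
}
console.log(toReplay?.step); // 1
```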
```
{
  values: {
    messages: [
      [Object],
      AIMessage {
        "id": "chatcmpl-A3FGhKzOZs0GYZ2yalNOCQZyPgbcp",
        "content": "",
        "additional_kwargs": {
          "tool_calls": [
            {
              "id": "call_OsKnTv2psf879eeJ9vx5GeoY",
              "type": "function",
              "function": "[Object]"
            }
          ]
        },
        "response_metadata": {
          "tokenUsage": {
            "completionTokens": 17,
            "promptTokens": 72,
            "totalTokens": 89
          },
          "finish_reason": "tool_calls",
          "system_fingerprint": "fp_fde2829a40"
        },
        "tool_calls": [
          {
            "name": "search",
            "args": {
              "query": "current weather in San Francisco"
            },
            "type": "tool_call",
            "id": "call_OsKnTv2psf879eeJ9vx5GeoY"
          }
        ],
        "invalid_tool_calls": []
      },
      ToolMessage {
        "content": "Cold, with a low of 13 ℃",
        "name": "search",
        "additional_kwargs": {},
        "response_metadata": {},
        "tool_call_id": "call_OsKnTv2psf879eeJ9vx5GeoY"
      },
      AIMessage {
        "id": "chatcmpl-A3FGiYripPKtQLnAK1H3hWLSXQfOD",
        "content": "Currently, it is cold in San Francisco, with a temperature around 13°C (55°F).",
        "additional_kwargs": {},
        "response_metadata": {
          "tokenUsage": {
            "completionTokens": 21,
            "promptTokens": 105,
            "totalTokens": 126
          },
          "finish_reason": "stop",
          "system_fingerprint": "fp_fde2829a40"
        },
        "tool_calls": [],
        "invalid_tool_calls": []
      }
    ]
  },
  next: [],
  tasks: [],
  metadata: { source: 'loop', writes: { agent: [Object] }, step: 3 },
  config: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-9c3a-6bd1-8003-d7f030ff72b2'
    }
  },
  createdAt: '2024-09-03T04:17:20.653Z',
  parentConfig: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-9516-6200-8002-43d2c6dc603f'
    }
  }
}
--
{
  values: {
    messages: [
      [Object],
      AIMessage {
        "id": "chatcmpl-A3FGhKzOZs0GYZ2yalNOCQZyPgbcp",
        "content": "",
        "additional_kwargs": {
          "tool_calls": [
            {
              "id": "call_OsKnTv2psf879eeJ9vx5GeoY",
              "type": "function",
              "function": "[Object]"
            }
          ]
        },
        "response_metadata": {
          "tokenUsage": {
            "completionTokens": 17,
            "promptTokens": 72,
            "totalTokens": 89
          },
          "finish_reason": "tool_calls",
          "system_fingerprint": "fp_fde2829a40"
        },
        "tool_calls": [
          {
            "name": "search",
            "args": {
              "query": "current weather in San Francisco"
            },
            "type": "tool_call",
            "id": "call_OsKnTv2psf879eeJ9vx5GeoY"
          }
        ],
        "invalid_tool_calls": []
      },
      ToolMessage {
        "content": "Cold, with a low of 13 ℃",
        "name": "search",
        "additional_kwargs": {},
        "response_metadata": {},
        "tool_call_id": "call_OsKnTv2psf879eeJ9vx5GeoY"
      }
    ]
  },
  next: [ 'agent' ],
  tasks: [
    {
      id: '612efffa-3b16-530f-8a39-fd01c31e7b8b',
      name: 'agent',
      interrupts: []
    }
  ],
  metadata: { source: 'loop', writes: { tools: [Object] }, step: 2 },
  config: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-9516-6200-8002-43d2c6dc603f'
    }
  },
  createdAt: '2024-09-03T04:17:19.904Z',
  parentConfig: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-9455-6410-8001-1c78a97f63e6'
    }
  }
}
--
{
  values: {
    messages: [
      [Object],
      AIMessage {
        "id": "chatcmpl-A3FGhKzOZs0GYZ2yalNOCQZyPgbcp",
        "content": "",
        "additional_kwargs": {
          "tool_calls": [
            {
              "id": "call_OsKnTv2psf879eeJ9vx5GeoY",
              "type": "function",
              "function": "[Object]"
            }
          ]
        },
        "response_metadata": {
          "tokenUsage": {
            "completionTokens": 17,
            "promptTokens": 72,
            "totalTokens": 89
          },
          "finish_reason": "tool_calls",
          "system_fingerprint": "fp_fde2829a40"
        },
        "tool_calls": [
          {
            "name": "search",
            "args": {
              "query": "current weather in San Francisco"
            },
            "type": "tool_call",
            "id": "call_OsKnTv2psf879eeJ9vx5GeoY"
          }
        ],
        "invalid_tool_calls": []
      }
    ]
  },
  next: [ 'tools' ],
  tasks: [
    {
      id: '767116b0-55b6-5af4-8f74-ce45fb6e31ed',
      name: 'tools',
      interrupts: []
    }
  ],
  metadata: { source: 'loop', writes: { agent: [Object] }, step: 1 },
  config: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-9455-6410-8001-1c78a97f63e6'
    }
  },
  createdAt: '2024-09-03T04:17:19.825Z',
  parentConfig: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-8c4b-6261-8000-c51e5807fbcd'
    }
  }
}
--
{
  values: { messages: [ [Object] ] },
  next: [ 'agent' ],
  tasks: [
    {
      id: '5b0ed7d1-1bb7-5d78-b4fc-7a8ed40e7291',
      name: 'agent',
      interrupts: []
    }
  ],
  metadata: { source: 'loop', writes: null, step: 0 },
  config: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-8c4b-6261-8000-c51e5807fbcd'
    }
  },
  createdAt: '2024-09-03T04:17:18.982Z',
  parentConfig: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-8c4b-6260-ffff-6ec582916c42'
    }
  }
}
--
{
  values: {},
  next: [ '__start__' ],
  tasks: [
    {
      id: 'a4250d5c-d025-5da1-b588-cae2b3f4a8c7',
      name: '__start__',
      interrupts: []
    }
  ],
  metadata: { source: 'input', writes: { messages: [Array] }, step: -1 },
  config: {
    configurable: {
      thread_id: 'conversation-num-1',
      checkpoint_ns: '',
      checkpoint_id: '1ef69ab6-8c4b-6260-ffff-6ec582916c42'
    }
  },
  createdAt: '2024-09-03T04:17:18.982Z',
  parentConfig: undefined
}
--
```
## Replay a past state

To replay from this place we just need to pass its config back to the agent.
```typescript
for await (
  const { messages } of await graphWithInterrupt.stream(null, {
    ...toReplay.config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```
```
Cold, with a low of 13 ℃
-----

The current weather in San Francisco is cold, with a low of 13°C.
-----
```
## Branch off a past state

Using LangGraph's checkpointing, you can do more than just replay past states. You can branch off previous locations to let the agent explore alternate trajectories, or to let a user "version control" changes in a workflow.

### First, update a previous checkpoint

Updating the state will create a new snapshot by applying the update to the previous checkpoint. Let's add a tool message to simulate calling the tool.
```typescript
const tool_calls =
  toReplay.values.messages[toReplay.values.messages.length - 1].tool_calls;
const branchConfig = await graphWithInterrupt.updateState(
  toReplay.config,
  {
    messages: [
      { role: "tool", content: "It's sunny out, with a high of 38 ℃.", tool_call_id: tool_calls[0].id },
    ],
  },
  // Updates are applied "as if" they were coming from a node. By default,
  // the updates will come from the last node to run. In our case, we want to treat
  // this update as if it came from the tools node, so that the next node to run will be
  // the agent.
  "tools",
);

const branchState = await graphWithInterrupt.getState(branchConfig);
console.log(branchState.values);
console.log(branchState.next);
```
```
{
  messages: [
    {
      role: 'user',
      content: "What's the weather like in SF currently?"
    },
    AIMessage {
      "id": "chatcmpl-A3FGhKzOZs0GYZ2yalNOCQZyPgbcp",
      "content": "",
      "additional_kwargs": {
        "tool_calls": [
          {
            "id": "call_OsKnTv2psf879eeJ9vx5GeoY",
            "type": "function",
            "function": "[Object]"
          }
        ]
      },
      "response_metadata": {
        "tokenUsage": {
          "completionTokens": 17,
          "promptTokens": 72,
          "totalTokens": 89
        },
        "finish_reason": "tool_calls",
        "system_fingerprint": "fp_fde2829a40"
      },
      "tool_calls": [
        {
          "name": "search",
          "args": {
            "query": "current weather in San Francisco"
          },
          "type": "tool_call",
          "id": "call_OsKnTv2psf879eeJ9vx5GeoY"
        }
      ],
      "invalid_tool_calls": []
    },
    {
      role: 'tool',
      content: "It's sunny out, with a high of 38 ℃.",
      tool_call_id: 'call_OsKnTv2psf879eeJ9vx5GeoY'
    }
  ]
}
[ 'agent' ]
```
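`updateState` never rewrites history: it appends a new checkpoint whose parent is the checkpoint you updated, so the thread forks. A toy sketch of that bookkeeping:

```typescript
type Checkpoint = { id: number; parent?: number; messages: string[] };

// A linear thread: question -> tool call -> real tool result.
const store: Checkpoint[] = [
  { id: 1, messages: ["question"] },
  { id: 2, parent: 1, messages: ["question", "tool call"] },
  { id: 3, parent: 2, messages: ["question", "tool call", "Cold, 13 ℃"] },
];

// Fork from an earlier checkpoint with a fabricated tool result.
function branchFrom(parentId: number, extra: string): Checkpoint {
  const parent = store.find((c) => c.id === parentId)!;
  const branch: Checkpoint = {
    id: store.length + 1,
    parent: parentId,
    messages: [...parent.messages, extra],
  };
  store.push(branch);
  return branch;
}

const branch = branchFrom(2, "It's sunny out, with a high of 38 ℃.");
console.log(branch.parent); // 2
console.log(store.length);  // 4 - the original checkpoint 3 still exists
```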
### Now you can run from this branch

Just use the updated config (containing the new checkpoint ID). The trajectory will follow the new branch.
```typescript
for await (
  const { messages } of await graphWithInterrupt.stream(null, {
    ...branchConfig,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
```