How to handle tool calling errors¶
LLMs aren't perfect at calling tools. The model may try to call a tool that doesn't exist, or fail to return arguments that match the requested schema. Strategies like keeping schemas simple, reducing the number of tools you pass at once, and using good names and descriptions can help mitigate this risk, but they aren't foolproof.
This guide covers some ways to build error handling into your graphs to mitigate these failure modes.
Compatibility
This guide requires @langchain/langgraph>=0.0.28, @langchain/anthropic>=0.2.6, and @langchain/core>=0.2.17. For help upgrading, see this guide.
Using the prebuilt ToolNode¶
To start, define a mock weather tool that has some hidden restrictions on its input query. The intent here is to simulate a real-world case where a model fails to call a tool correctly:
import { z } from "zod";
import { tool } from "@langchain/core/tools";
const getWeather = tool(async ({ location }) => {
if (location === "SAN FRANCISCO") {
return "It's 60 degrees and foggy";
} else if (location.toLowerCase() === "san francisco") {
throw new Error("Input queries must be all capitals");
} else {
throw new Error("Invalid input.");
}
}, {
name: "get_weather",
description: "Call to get the current weather",
schema: z.object({
location: z.string(),
}),
});
Next, set up a graph implementation of the ReAct agent. This agent takes some query as input, then repeatedly calls tools until it has enough information to resolve the query. We'll use the prebuilt ToolNode to execute called tools, and a small, fast model powered by Anthropic:
import { StateGraph, MessagesAnnotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatAnthropic } from "@langchain/anthropic";
import { BaseMessage, isAIMessage } from "@langchain/core/messages";
const toolNode = new ToolNode([getWeather]);
const modelWithTools = new ChatAnthropic({
model: "claude-3-haiku-20240307",
temperature: 0,
}).bindTools([getWeather]);
const shouldContinue = async (state: typeof MessagesAnnotation.State) => {
const { messages } = state;
const lastMessage = messages[messages.length - 1];
if (isAIMessage(lastMessage) && lastMessage.tool_calls?.length) {
return "tools";
}
return "__end__";
}
const callModel = async (state: typeof MessagesAnnotation.State) => {
const { messages } = state;
const response = await modelWithTools.invoke(messages);
return { messages: [response] };
}
const app = new StateGraph(MessagesAnnotation)
.addNode("agent", callModel)
.addNode("tools", toolNode)
.addEdge("__start__", "agent")
.addEdge("tools", "agent")
.addConditionalEdges("agent", shouldContinue, {
// Explicitly list possible destinations so that
// we can automatically draw the graph below.
tools: "tools",
__end__: "__end__",
})
.compile();
import * as tslab from "tslab";
const graph = app.getGraph();
const image = await graph.drawMermaidPng();
const arrayBuffer = await image.arrayBuffer();
await tslab.display.png(new Uint8Array(arrayBuffer));
When you try to call the tool, you can see that the model calls it with a bad input, causing the tool to throw an error. The prebuilt ToolNode that executes the tool has some built-in error handling that catches the error and passes it back to the model so that it can try again:
const response = await app.invoke({
messages: [
{ role: "user", content: "what is the weather in san francisco?"},
]
});
for (const message of response.messages) {
// Anthropic returns tool calls in content as well as in `AIMessage.tool_calls`
const content = JSON.stringify(message.content, null, 2);
console.log(`${message._getType().toUpperCase()}: ${content}`);
}
HUMAN: "what is the weather in san francisco?"
AI: [
{
"type": "text",
"text": "Okay, let's check the weather in San Francisco:"
},
{
"type": "tool_use",
"id": "toolu_015dywEMjSJsjkgP91VDbm52",
"name": "get_weather",
"input": {
"location": "San Francisco"
}
}
]
TOOL: "Error: Input queries must be all capitals\n Please fix your mistakes."
AI: [
{
"type": "text",
"text": "Apologies, let me try that again with the location in all capital letters:"
},
{
"type": "tool_use",
"id": "toolu_01Qw6t7p9UGk8aHQh7qtLJZT",
"name": "get_weather",
"input": {
"location": "SAN FRANCISCO"
}
}
]
TOOL: "It's 60 degrees and foggy"
AI: "The weather in San Francisco is 60 degrees and foggy."
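Under the hood, this catch-and-report behavior amounts to wrapping each tool invocation in a try/catch and surfacing the error text as a tool message. The following is a minimal, dependency-free sketch of that pattern; the types and helper names here are simplified stand-ins, not the actual ToolNode internals:

```typescript
// Sketch of a ToolNode-style executor: a tool's thrown error is caught
// and handed back to the model as a tool message instead of crashing
// the graph. Illustrative types, not the real ToolNode implementation.
type ToolCall = { name: string; args: { location: string }; id: string };
type ToolMessageLike = { tool_call_id: string; content: string };

const getWeatherMock = (location: string): string => {
  if (location === "SAN FRANCISCO") return "It's 60 degrees and foggy";
  if (location.toLowerCase() === "san francisco") {
    throw new Error("Input queries must be all capitals");
  }
  throw new Error("Invalid input.");
};

const executeToolCall = (call: ToolCall): ToolMessageLike => {
  try {
    return { tool_call_id: call.id, content: getWeatherMock(call.args.location) };
  } catch (e: any) {
    // Report the error text so the model can self-correct on its next turn.
    return {
      tool_call_id: call.id,
      content: `Error: ${e.message}\n Please fix your mistakes.`,
    };
  }
};

const bad = executeToolCall({ name: "get_weather", args: { location: "San Francisco" }, id: "1" });
console.log(bad.content);
// Error: Input queries must be all capitals
//  Please fix your mistakes.
```

Because the error surfaces as an ordinary tool message, the model sees it on the next turn and can retry with corrected arguments, which is exactly the loop visible in the transcript above.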
Custom strategies¶
This is a fine default in many cases, but there are cases where custom fallbacks may be better.
For example, the tool below requires as input a list of elements of a specific length, which is tricky for a small model! We'll also intentionally avoid pluralizing topic to trick the model into thinking it should pass a string:
import { StringOutputParser } from "@langchain/core/output_parsers";
const haikuRequestSchema = z.object({
topic: z.array(z.string()).length(3),
});
const masterHaikuGenerator = tool(async ({ topic }) => {
const model = new ChatAnthropic({
model: "claude-3-haiku-20240307",
temperature: 0,
});
const chain = model.pipe(new StringOutputParser());
const topics = topic.join(", ");
const haiku = await chain.invoke(`Write a haiku about ${topics}`);
return haiku;
}, {
name: "master_haiku_generator",
description: "Generates a haiku based on the provided topics.",
schema: haikuRequestSchema,
});
const customStrategyToolNode = new ToolNode([masterHaikuGenerator]);
const customStrategyModel = new ChatAnthropic({
model: "claude-3-haiku-20240307",
temperature: 0,
});
const customStrategyModelWithTools = customStrategyModel.bindTools([masterHaikuGenerator]);
const customStrategyShouldContinue = async (state: typeof MessagesAnnotation.State) => {
const { messages } = state;
const lastMessage = messages[messages.length - 1];
if (isAIMessage(lastMessage) && lastMessage.tool_calls?.length) {
return "tools";
}
return "__end__";
}
const customStrategyCallModel = async (state: typeof MessagesAnnotation.State) => {
const { messages } = state;
const response = await customStrategyModelWithTools.invoke(messages);
return { messages: [response] };
}
const customStrategyApp = new StateGraph(MessagesAnnotation)
.addNode("tools", customStrategyToolNode)
.addNode("agent", customStrategyCallModel)
.addEdge("__start__", "agent")
.addEdge("tools", "agent")
.addConditionalEdges("agent", customStrategyShouldContinue, {
// Explicitly list possible destinations so that
// we can automatically draw the graph below.
tools: "tools",
__end__: "__end__",
})
.compile();
const response2 = await customStrategyApp.invoke(
{
messages: [{ role: "user", content: "Write me an incredible haiku about water." }],
},
{ recursionLimit: 10 }
);
for (const message of response2.messages) {
// Anthropic returns tool calls in content as well as in `AIMessage.tool_calls`
const content = JSON.stringify(message.content, null, 2);
console.log(`${message._getType().toUpperCase()}: ${content}`);
}
HUMAN: "Write me an incredible haiku about water."
AI: [
{
"type": "text",
"text": "Okay, let's generate a haiku about water using the master haiku generator tool:"
},
{
"type": "tool_use",
"id": "toolu_01CMvVu3MhPeCk5X7F8GBv8f",
"name": "master_haiku_generator",
"input": {
"topic": [
"water"
]
}
}
]
TOOL: "Error: Received tool input did not match expected schema\n Please fix your mistakes."
AI: [
{
"type": "text",
"text": "Oops, looks like I need to provide 3 topics for the haiku generator. Let me try again with 3 water-related topics:"
},
{
"type": "tool_use",
"id": "toolu_0158Nz2scGSWvYor4vmJbSDZ",
"name": "master_haiku_generator",
"input": {
"topic": [
"ocean",
"waves",
"rain"
]
}
}
]
TOOL: "Here is a haiku about the ocean, waves, and rain:\n\nWaves crash on the shore,\nRhythmic dance of water's song,\nRain falls from the sky."
AI: "The haiku generator has produced a beautiful and evocative poem about the different aspects of water - the ocean, waves, and rain. I hope you enjoy this creative take on a water-themed haiku!"
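The "Received tool input did not match expected schema" error above is raised because the arguments are validated against the zod schema before the tool body ever runs. As an illustration only (not zod's actual internals), the `z.array(z.string()).length(3)` constraint behaves like this hand-rolled validator, assuming the same error message:

```typescript
// Hand-rolled stand-in for z.array(z.string()).length(3): a mismatch is
// surfaced as an error before the tool body executes, so the model can
// react to it. Illustrative only, not zod's real implementation.
const validateTopics = (topic: unknown): string[] => {
  const isValid =
    Array.isArray(topic) &&
    topic.length === 3 &&
    topic.every((t) => typeof t === "string");
  if (!isValid) {
    throw new Error("Received tool input did not match expected schema");
  }
  return topic as string[];
};

try {
  validateTopics(["water"]); // what the model first tried
} catch (e: any) {
  console.log(e.message); // Received tool input did not match expected schema
}
console.log(validateTopics(["ocean", "waves", "rain"]).join(", ")); // ocean, waves, rain
```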
A better strategy might be to trim the failed attempt to reduce distraction, then fall back to a more advanced model. Here's an example. Note the custom-built tool-calling node instead of the prebuilt ToolNode:
import { AIMessage, ToolMessage, RemoveMessage } from "@langchain/core/messages";
const haikuRequestSchema2 = z.object({
topic: z.array(z.string()).length(3),
});
const masterHaikuGenerator2 = tool(async ({ topic }) => {
const model = new ChatAnthropic({
model: "claude-3-haiku-20240307",
temperature: 0,
});
const chain = model.pipe(new StringOutputParser());
const topics = topic.join(", ");
const haiku = await chain.invoke(`Write a haiku about ${topics}`);
return haiku;
}, {
name: "master_haiku_generator",
description: "Generates a haiku based on the provided topics.",
schema: haikuRequestSchema2,
});
const callTool2 = async (state: typeof MessagesAnnotation.State) => {
const { messages } = state;
const toolsByName = { master_haiku_generator: masterHaikuGenerator2 };
const lastMessage = messages[messages.length - 1] as AIMessage;
const outputMessages: ToolMessage[] = [];
for (const toolCall of lastMessage.tool_calls) {
try {
const toolResult = await toolsByName[toolCall.name].invoke(toolCall);
outputMessages.push(toolResult);
} catch (error: any) {
// Return the error if the tool call fails
outputMessages.push(
new ToolMessage({
content: error.message,
name: toolCall.name,
tool_call_id: toolCall.id!,
additional_kwargs: { error }
})
);
}
}
return { messages: outputMessages };
};
const model = new ChatAnthropic({
model: "claude-3-haiku-20240307",
temperature: 0,
});
const modelWithTools2 = model.bindTools([masterHaikuGenerator2]);
const betterModel = new ChatAnthropic({
model: "claude-3-5-sonnet-20240620",
temperature: 0,
});
const betterModelWithTools = betterModel.bindTools([masterHaikuGenerator2]);
const shouldContinue2 = async (state: typeof MessagesAnnotation.State) => {
const { messages } = state;
const lastMessage = messages[messages.length - 1];
if (isAIMessage(lastMessage) && lastMessage.tool_calls?.length) {
return "tools";
}
return "__end__";
}
const shouldFallback = async (state: typeof MessagesAnnotation.State) => {
const { messages } = state;
const failedToolMessages = messages.find((message) => {
return message._getType() === "tool" && message.additional_kwargs.error !== undefined;
});
if (failedToolMessages) {
return "remove_failed_tool_call_attempt";
}
return "agent";
}
const callModel2 = async (state: typeof MessagesAnnotation.State) => {
const { messages } = state;
const response = await modelWithTools2.invoke(messages);
return { messages: [response] };
}
const removeFailedToolCallAttempt = async (state: typeof MessagesAnnotation.State) => {
const { messages } = state;
// Remove all messages from the most recent
// instance of AIMessage onwards.
const lastAIMessageIndex = messages
.map((msg, index) => ({ msg, index }))
.reverse()
.findIndex(({ msg }) => isAIMessage(msg));
const messagesToRemove = messages.slice(messages.length - lastAIMessageIndex - 1);
return { messages: messagesToRemove.map(m => new RemoveMessage({ id: m.id })) };
}
const callFallbackModel = async (state: typeof MessagesAnnotation.State) => {
const { messages } = state;
const response = await betterModelWithTools.invoke(messages);
return { messages: [response] };
}
const app2 = new StateGraph(MessagesAnnotation)
.addNode("tools", callTool2)
.addNode("agent", callModel2)
.addNode("remove_failed_tool_call_attempt", removeFailedToolCallAttempt)
.addNode("fallback_agent", callFallbackModel)
.addEdge("__start__", "agent")
.addConditionalEdges("agent", shouldContinue2, {
// Explicitly list possible destinations so that
// we can automatically draw the graph below.
tools: "tools",
__end__: "__end__",
})
.addConditionalEdges("tools", shouldFallback, {
remove_failed_tool_call_attempt: "remove_failed_tool_call_attempt",
agent: "agent",
})
.addEdge("remove_failed_tool_call_attempt", "fallback_agent")
.addEdge("fallback_agent", "tools")
.compile();
The tools node will now return ToolMessages with an error field in additional_kwargs if a tool call fails. If that happens, it will go to another node that removes the failed tool call messages, and has a better model retry the tool call generation. We also add a trimming step by returning the special message modifier RemoveMessage to remove previous messages from the state.
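The trimming step hinges on locating the most recent AIMessage: findIndex on a reversed copy of the history counts from the end of the array, so it must be mapped back to an index into the original array before slicing. A dependency-free sketch of that index arithmetic, with plain objects standing in for LangChain messages:

```typescript
// Finding the most recent AI message and collecting everything from it
// onwards for removal. Plain objects stand in for LangChain messages.
type MsgLike = { type: "human" | "ai" | "tool"; id: string };

const history: MsgLike[] = [
  { type: "human", id: "1" }, // original request
  { type: "ai", id: "2" },    // failed tool call attempt
  { type: "tool", id: "3" },  // error ToolMessage
];

// findIndex on the reversed copy counts from the end of the array...
const reversedIndex = [...history].reverse().findIndex((m) => m.type === "ai");
// ...so map it back to an index into the original, unreversed array.
const lastAIMessageIndex = history.length - reversedIndex - 1;

// These are the messages the graph would wrap in RemoveMessage modifiers.
const idsToRemove = history.slice(lastAIMessageIndex).map((m) => m.id);
console.log(idsToRemove); // [ '2', '3' ]
```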
The diagram below shows this visually:
import * as tslab from "tslab";
const graph2 = app2.getGraph();
const image2 = await graph2.drawMermaidPng();
const arrayBuffer2 = await image2.arrayBuffer();
await tslab.display.png(new Uint8Array(arrayBuffer2));
Let's try it out. To emphasize the removal steps, let's stream the responses from the model so that we can see each executed node:
const stream = await app2.stream(
{ messages: [{ role: "user", content: "Write me an incredible haiku about water." }] },
{ recursionLimit: 10 },
)
for await (const chunk of stream) {
console.log(chunk);
}
{
agent: {
messages: [
AIMessage {
"id": "msg_01HqvhPuubXqerWgYRNFqPrd",
"content": [
{
"type": "text",
"text": "Okay, let's generate a haiku about water using the master haiku generator tool:"
},
{
"type": "tool_use",
"id": "toolu_01QFmyc5vhQBFfzF7hCGTRc1",
"name": "master_haiku_generator",
"input": {
"topic": "[Array]"
}
}
],
"additional_kwargs": {
"id": "msg_01HqvhPuubXqerWgYRNFqPrd",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "tool_use",
"stop_sequence": null,
"usage": {
"input_tokens": 392,
"output_tokens": 77
}
},
"response_metadata": {
"id": "msg_01HqvhPuubXqerWgYRNFqPrd",
"model": "claude-3-haiku-20240307",
"stop_reason": "tool_use",
"stop_sequence": null,
"usage": {
"input_tokens": 392,
"output_tokens": 77
},
"type": "message",
"role": "assistant"
},
"tool_calls": [
{
"name": "master_haiku_generator",
"args": {
"topic": "[Array]"
},
"id": "toolu_01QFmyc5vhQBFfzF7hCGTRc1",
"type": "tool_call"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 392,
"output_tokens": 77,
"total_tokens": 469
}
}
]
}
}
{
tools: {
messages: [
ToolMessage {
"id": "502c7399-4d95-4afd-8a86-ece864d2bc7f",
"content": "Received tool input did not match expected schema",
"name": "master_haiku_generator",
"additional_kwargs": {
"error": {
"output": "{\"topic\":[\"water\"]}"
}
},
"response_metadata": {},
"tool_call_id": "toolu_01QFmyc5vhQBFfzF7hCGTRc1"
}
]
}
}
{
remove_failed_tool_call_attempt: {
messages: [
BaseMessage {
"id": "msg_01HqvhPuubXqerWgYRNFqPrd",
"content": "",
"additional_kwargs": {},
"response_metadata": {}
},
BaseMessage {
"id": "502c7399-4d95-4afd-8a86-ece864d2bc7f",
"content": "",
"additional_kwargs": {},
"response_metadata": {}
}
]
}
}
{
fallback_agent: {
messages: [
AIMessage {
"id": "msg_01EQSawL2oxNhph9be99k7Yp",
"content": [
{
"type": "text",
"text": "Certainly! I'd be happy to help you create an incredible haiku about water. To do this, we'll use the master_haiku_generator function, which requires three topics as input. Since you've specified water as the main theme, I'll add two related concepts to create a more vivid and interesting haiku. Let's use \"water,\" \"flow,\" and \"reflection\" as our three topics.\n\nHere's the function call to generate your haiku:"
},
{
"type": "tool_use",
"id": "toolu_017hrp13SsgfdJTdhkJDMaQy",
"name": "master_haiku_generator",
"input": {
"topic": "[Array]"
}
}
],
"additional_kwargs": {
"id": "msg_01EQSawL2oxNhph9be99k7Yp",
"type": "message",
"role": "assistant",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "tool_use",
"stop_sequence": null,
"usage": {
"input_tokens": 422,
"output_tokens": 162
}
},
"response_metadata": {
"id": "msg_01EQSawL2oxNhph9be99k7Yp",
"model": "claude-3-5-sonnet-20240620",
"stop_reason": "tool_use",
"stop_sequence": null,
"usage": {
"input_tokens": 422,
"output_tokens": 162
},
"type": "message",
"role": "assistant"
},
"tool_calls": [
{
"name": "master_haiku_generator",
"args": {
"topic": "[Array]"
},
"id": "toolu_017hrp13SsgfdJTdhkJDMaQy",
"type": "tool_call"
}
],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 422,
"output_tokens": 162,
"total_tokens": 584
}
}
]
}
}
{
tools: {
messages: [
ToolMessage {
"id": "3d24d291-7501-4a65-9286-10dc47239b5b",
"content": "Here is a haiku about water, flow, and reflection:\n\nRippling waters flow,\nMirroring the sky above,\nTranquil reflection.",
"name": "master_haiku_generator",
"additional_kwargs": {},
"response_metadata": {},
"tool_call_id": "toolu_017hrp13SsgfdJTdhkJDMaQy"
}
]
}
}
{
agent: {
messages: [
AIMessage {
"id": "msg_01Jy7Vw8DN77sjVWcB4TcJR6",
"content": "I hope you enjoy this haiku about the beauty and serenity of water. Please let me know if you would like me to generate another one.",
"additional_kwargs": {
"id": "msg_01Jy7Vw8DN77sjVWcB4TcJR6",
"type": "message",
"role": "assistant",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 601,
"output_tokens": 35
}
},
"response_metadata": {
"id": "msg_01Jy7Vw8DN77sjVWcB4TcJR6",
"model": "claude-3-haiku-20240307",
"stop_reason": "end_turn",
"stop_sequence": null,
"usage": {
"input_tokens": 601,
"output_tokens": 35
},
"type": "message",
"role": "assistant"
},
"tool_calls": [],
"invalid_tool_calls": [],
"usage_metadata": {
"input_tokens": 601,
"output_tokens": 35,
"total_tokens": 636
}
}
]
}
}
You can also inspect this LangSmith trace, which shows the failed initial call to the smaller model.
Next steps¶
You've now seen how to implement some strategies to handle tool calling errors.
Next, check out some of the other LangGraph how-to guides here.