
How to review tool calls (Functional API)

Prerequisites

This guide assumes familiarity with the following:

This guide demonstrates how to implement human-in-the-loop workflows in a ReAct agent using the LangGraph Functional API.

We will build on the agent created in the How to create a ReAct agent using the Functional API guide.

Specifically, we will demonstrate how to review tool calls generated by a chat model before those tools are executed. This can be accomplished by using the interrupt function at key points in our application.

Preview:

We will implement a simple function that reviews tool calls generated by our chat model, and call it from our application's entrypoint:

function reviewToolCall(toolCall: ToolCall): ToolCall | ToolMessage {
  // Interrupt for human review
  const humanReview = interrupt({
    question: "Is this correct?",
    tool_call: toolCall,
  });

  const { action, data } = humanReview;

  if (action === "continue") {
    return toolCall;
  } else if (action === "update") {
    return {
      ...toolCall,
      args: data,
    };
  } else if (action === "feedback") {
    return new ToolMessage({
      content: data,
      name: toolCall.name,
      tool_call_id: toolCall.id,
    });
  }
  throw new Error(`Unsupported review action: ${action}`);
}

Setup

Note

This guide requires @langchain/langgraph>=0.2.42.

First, install the required dependencies for this example:

npm install @langchain/langgraph @langchain/openai @langchain/core zod

Next, we need to set an API key for OpenAI (the LLM we will use):

process.env.OPENAI_API_KEY = "YOUR_API_KEY";

Set up LangSmith for LangGraph development

Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph. Read more about how to get started here.
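As with the OpenAI key above, tracing can be enabled by setting environment variables before running your application. A minimal sketch; the API key value is a placeholder you would replace with your own:

```typescript
// Enable LangSmith tracing for this process.
process.env.LANGCHAIN_TRACING_V2 = "true";
// Placeholder: substitute your own LangSmith API key.
process.env.LANGCHAIN_API_KEY = "YOUR_LANGSMITH_API_KEY";
```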

Define model and tools

Let's first define the tools and model we will use for our example. As in the ReAct agent guide, we will use a single placeholder tool that gets a description of the weather for a location.

We will use an OpenAI chat model for this example, but any model supporting tool calling will suffice.

import { ChatOpenAI } from "@langchain/openai";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const model = new ChatOpenAI({
  model: "gpt-4o-mini",
});

const getWeather = tool(async ({ location }) => {
  // This is a placeholder for the actual implementation
  const lowercaseLocation = location.toLowerCase();
  if (lowercaseLocation.includes("sf") || lowercaseLocation.includes("san francisco")) {
    return "It's sunny!";
  } else if (lowercaseLocation.includes("boston")) {
    return "It's rainy!";
  } else {
    return `I am not sure what the weather is in ${location}`;
  }
}, {
  name: "getWeather",
  schema: z.object({
    location: z.string().describe("Location to get the weather for"),
  }),
  description: "Call to get the weather from a specific location.",
});

const tools = [getWeather];

Define tasks

Our tasks are the same as in the ReAct agent guide:

  1. Call model: we want to query our chat model with a list of messages.
  2. Call tool: if our model generates tool calls, we want to execute them.

import {
  type BaseMessageLike,
  AIMessage,
  ToolMessage,
} from "@langchain/core/messages";
import { type ToolCall } from "@langchain/core/messages/tool";
import { task } from "@langchain/langgraph";

const toolsByName = Object.fromEntries(tools.map((tool) => [tool.name, tool]));

const callModel = task("callModel", async (messages: BaseMessageLike[]) => {
  const response = await model.bindTools(tools).invoke(messages);
  return response;
});

const callTool = task(
  "callTool",
  async (toolCall: ToolCall): Promise<ToolMessage> => {
    const tool = toolsByName[toolCall.name];
    const observation = await tool.invoke(toolCall.args);
    return new ToolMessage({ content: observation, tool_call_id: toolCall.id });
    // Can also pass toolCall directly into the tool to return a ToolMessage
    // return tool.invoke(toolCall);
  });

Define entrypoint

To review tool calls before they are executed, we add a reviewToolCall function that calls interrupt. When this function is called, execution pauses until we issue a command to resume it.

Given a tool call, our function will interrupt for human review. At that point, we may:

  • accept the tool call;
  • revise the tool call and continue; or
  • generate a custom tool message (e.g., instructing the model to re-format its tool call).

We will demonstrate all three cases in the usage examples below.

import { interrupt } from "@langchain/langgraph";

function reviewToolCall(toolCall: ToolCall): ToolCall | ToolMessage {
  // Interrupt for human review
  const humanReview = interrupt({
    question: "Is this correct?",
    tool_call: toolCall,
  });

  const { action, data } = humanReview;

  if (action === "continue") {
    return toolCall;
  } else if (action === "update") {
    return {
      ...toolCall,
      args: data,
    };
  } else if (action === "feedback") {
    return new ToolMessage({
      content: data,
      name: toolCall.name,
      tool_call_id: toolCall.id,
    });
  }
  throw new Error(`Unsupported review action: ${action}`);
}

We can now update our entrypoint to review the generated tool calls. If a tool call is accepted or revised, we execute it the same way as before. Otherwise, we just append the ToolMessage supplied by the human.

Tip

The results of prior tasks (in this case, the initial model call) are persisted, so they will not be run again following the interrupt.

import {
  MemorySaver,
  addMessages,
  entrypoint,
  getPreviousState,
} from "@langchain/langgraph";

const checkpointer = new MemorySaver();

const agent = entrypoint({
  checkpointer,
  name: "agent",
}, async (messages: BaseMessageLike[]) => {
  const previous = getPreviousState<BaseMessageLike[]>() ?? [];
  let currentMessages = addMessages(previous, messages);
  let llmResponse = await callModel(currentMessages);
  while (true) {
    if (!llmResponse.tool_calls?.length) {
      break;
    }
    // Review tool calls
    const toolResults: ToolMessage[] = [];
    const toolCalls: ToolCall[] = [];

    for (let i = 0; i < llmResponse.tool_calls.length; i++) {
      const review = await reviewToolCall(llmResponse.tool_calls[i]);
      if (review instanceof ToolMessage) {
        toolResults.push(review);
      } else { // is a validated tool call
        toolCalls.push(review);
        if (review !== llmResponse.tool_calls[i]) {
          llmResponse.tool_calls[i] = review;
        }
      }
    }
    // Execute remaining tool calls
    const remainingToolResults = await Promise.all(
      toolCalls.map((toolCall) => callTool(toolCall))
    );

    // Append to message list
    currentMessages = addMessages(
      currentMessages,
      [llmResponse, ...toolResults, ...remainingToolResults]
    );

    // Call model again
    llmResponse = await callModel(currentMessages);
  }
  // Generate final response
  currentMessages = addMessages(currentMessages, llmResponse);
  return entrypoint.final({
    value: llmResponse,
    save: currentMessages
  });
});

Usage

Let's demonstrate some scenarios.

import { BaseMessage, isAIMessage } from "@langchain/core/messages";

const prettyPrintMessage = (message: BaseMessage) => {
  console.log("=".repeat(30), `${message.getType()} message`, "=".repeat(30));
  console.log(message.content);
  if (isAIMessage(message) && message.tool_calls?.length) {
    console.log(JSON.stringify(message.tool_calls, null, 2));
  }
}

const printStep = (step: Record<string, any>) => {
  if (step.__metadata__?.cached) {
    return;
  }
  for (const [taskName, result] of Object.entries(step)) {
    if (taskName === "agent") {
      continue; // just stream from tasks
    }

    console.log(`\n${taskName}:`);
    if (taskName === "__interrupt__" || taskName === "reviewToolCall") {
      console.log(JSON.stringify(result, null, 2));
    } else {
      prettyPrintMessage(result);
    }
  }
};

Accept tool call

To accept the tool call, we just indicate in the data we provide in the Command that the tool call should pass through.

const config = {
  configurable: {
    thread_id: "1"
  }
};

const userMessage = {
  role: "user",
  content: "What's the weather in san francisco?"
};
console.log(userMessage);

const stream = await agent.stream([userMessage], config);

for await (const step of stream) {
  printStep(step);
}
{ role: 'user', content: "What's the weather in san francisco?" }

callModel:
============================== ai message ==============================

[
  {
    "name": "getWeather",
    "args": {
      "location": "San Francisco"
    },
    "type": "tool_call",
    "id": "call_pe7ee3A4lOO4Llr2NcfRukyp"
  }
]

__interrupt__:
[
  {
    "value": {
      "question": "Is this correct?",
      "tool_call": {
        "name": "getWeather",
        "args": {
          "location": "San Francisco"
        },
        "type": "tool_call",
        "id": "call_pe7ee3A4lOO4Llr2NcfRukyp"
      }
    },
    "when": "during",
    "resumable": true,
    "ns": [
      "agent:dcee519a-80f5-5950-9e1c-e8bb85ed436f"
    ]
  }
]

import { Command } from "@langchain/langgraph";

const humanInput = new Command({
  resume: {
    action: "continue",
  },
});

const resumedStream = await agent.stream(humanInput, config);

for await (const step of resumedStream) {
  printStep(step);
}
callTool:
============================== tool message ==============================
It's sunny!

callModel:
============================== ai message ==============================
The weather in San Francisco is sunny!

Revise tool call

To revise the tool call, we can supply updated arguments.

const config2 = {
  configurable: {
    thread_id: "2"
  }
};

const userMessage2 = {
  role: "user",
  content: "What's the weather in san francisco?"
};

console.log(userMessage2);

const stream2 = await agent.stream([userMessage2], config2);

for await (const step of stream2) {
  printStep(step);
}
{ role: 'user', content: "What's the weather in san francisco?" }

callModel:
============================== ai message ==============================

[
  {
    "name": "getWeather",
    "args": {
      "location": "San Francisco"
    },
    "type": "tool_call",
    "id": "call_JEOqaUEvYJ4pzMtVyCQa6H2H"
  }
]

__interrupt__:
[
  {
    "value": {
      "question": "Is this correct?",
      "tool_call": {
        "name": "getWeather",
        "args": {
          "location": "San Francisco"
        },
        "type": "tool_call",
        "id": "call_JEOqaUEvYJ4pzMtVyCQa6H2H"
      }
    },
    "when": "during",
    "resumable": true,
    "ns": [
      "agent:d5c54c67-483a-589a-a1e7-2a8465b3ef13"
    ]
  }
]

const humanInput2 = new Command({
  resume: {
    action: "update",
    data: { location: "SF, CA" },
  },
});

const resumedStream2 = await agent.stream(humanInput2, config2);

for await (const step of resumedStream2) {
  printStep(step);
}
callTool:
============================== tool message ==============================
It's sunny!

callModel:
============================== ai message ==============================
The weather in San Francisco is sunny!
The LangSmith traces for this run are particularly informative:

  • In the trace before the interrupt, we generate a tool call for the location "San Francisco".
  • In the trace after resuming, we see that the tool call in the message has been updated to "SF, CA".

Generate a custom ToolMessage

To generate a custom ToolMessage, we supply the content of the message. In this case, we will ask the model to re-format its tool call.

const config3 = {
  configurable: {
    thread_id: "3"
  }
};

const userMessage3 = {
  role: "user",
  content: "What's the weather in san francisco?"
};

console.log(userMessage3);

const stream3 = await agent.stream([userMessage3], config3);

for await (const step of stream3) {
  printStep(step);
}
{ role: 'user', content: "What's the weather in san francisco?" }

callModel:
============================== ai message ==============================

[
  {
    "name": "getWeather",
    "args": {
      "location": "San Francisco"
    },
    "type": "tool_call",
    "id": "call_HNRjJLJo4U78dtk0uJ9YZF6V"
  }
]

__interrupt__:
[
  {
    "value": {
      "question": "Is this correct?",
      "tool_call": {
        "name": "getWeather",
        "args": {
          "location": "San Francisco"
        },
        "type": "tool_call",
        "id": "call_HNRjJLJo4U78dtk0uJ9YZF6V"
      }
    },
    "when": "during",
    "resumable": true,
    "ns": [
      "agent:6f313de8-c19e-5c3e-bdff-f90cdd68d0de"
    ]
  }
]

const humanInput3 = new Command({
  resume: {
    action: "feedback",
    data: "Please format as <City>, <State>.",
  },
});

const resumedStream3 = await agent.stream(humanInput3, config3);

for await (const step of resumedStream3) {
  printStep(step);
}
callModel:
============================== ai message ==============================

[
  {
    "name": "getWeather",
    "args": {
      "location": "San Francisco, CA"
    },
    "type": "tool_call",
    "id": "call_5V4Oj4JV2DVfeteM4Aaf2ieD"
  }
]

__interrupt__:
[
  {
    "value": {
      "question": "Is this correct?",
      "tool_call": {
        "name": "getWeather",
        "args": {
          "location": "San Francisco, CA"
        },
        "type": "tool_call",
        "id": "call_5V4Oj4JV2DVfeteM4Aaf2ieD"
      }
    },
    "when": "during",
    "resumable": true,
    "ns": [
      "agent:6f313de8-c19e-5c3e-bdff-f90cdd68d0de"
    ]
  }
]
Once it is re-formatted, we can accept it:

const continueCommand = new Command({
  resume: {
    action: "continue",
  },
});

const continueStream = await agent.stream(continueCommand, config3);

for await (const step of continueStream) {
  printStep(step);
}
callTool:
============================== tool message ==============================
It's sunny!

callModel:
============================== ai message ==============================
The weather in San Francisco, CA is sunny!