How to pass runtime values to tools

This guide shows how to define tools that depend on dynamically defined variables. These values are provided by your program, not by the LLM.

Tools can access the config.configurable field for values like user IDs that are known when a graph is initially executed, as well as managed values from the store for cross-thread persistence.
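
As a quick illustration (a sketch assuming a compiled graph and a tool handler like the ones defined later in this guide), such values are supplied under configurable at invocation time and read back from the tool's config argument:

// Values such as a user ID are passed via `configurable` when invoking the graph
await graph.invoke(
  { messages: [{ role: "user", content: "hi" }] },
  { configurable: { thread_id: "1", userId: "a-user" } }
);

// ...and inside a tool handler, they are read back from the config argument:
// const userId = config.configurable?.userId;
// const store = config.store;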

However, it can be convenient to access intermediate runtime values that are not known ahead of time, but are progressively generated as the graph executes, such as the current graph state. This guide covers two techniques for achieving this: the getCurrentTaskInput utility function and closures.

Setup

Install the following to run this guide:

npm install @langchain/langgraph @langchain/openai @langchain/core

Next, configure your environment to connect to your model provider.

export OPENAI_API_KEY=your-api-key

Optionally, set your API key for LangSmith tracing, which will give us best-in-class observability.

export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_CALLBACKS_BACKGROUND="true"
export LANGCHAIN_API_KEY=your-api-key

The getCurrentTaskInput utility function

The getCurrentTaskInput utility function makes it easier to get the current state in areas of your application that might be called indirectly, such as tool handlers.

Compatibility

This functionality was added in @langchain/langgraph>=0.2.53.

It also requires async_hooks support, which is available in many popular JavaScript environments (such as Node.js, Deno, and Cloudflare Workers), but not all of them (mainly web browsers). If you are deploying to an environment where this is not supported, see the closures section below.
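
If you are unsure whether your target runtime supports it, one way to check (an illustrative snippet, not part of the agent built in this guide) is to try importing the module:

// Node.js and compatible runtimes expose async_hooks; most web browsers do not.
try {
  await import("node:async_hooks");
  console.log("async_hooks is available - getCurrentTaskInput will work");
} catch {
  console.log("async_hooks is unavailable - use the closures approach below");
}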

Let's start off by defining a tool that an LLM can use to update a user's pet preferences. The tool will retrieve the current state of the graph from the current context.

Define the agent state

Since we're only tracking messages, we'll use MessagesAnnotation:

import { MessagesAnnotation } from "@langchain/langgraph";

Now, declare a tool as shown below. The tool receives values in three different ways:

  1. It will receive a list of pets generated by the LLM in its input.
  2. It will pull a userId populated from the initial graph invocation.
  3. It will fetch the input that was passed to the currently executing task (either a StateGraph node handler, or a Functional API entrypoint or task) via the getCurrentTaskInput function.

It will then use LangGraph's cross-thread persistence to save the preferences:

import { z } from "zod";
import { tool } from "@langchain/core/tools";
import {
  getCurrentTaskInput,
  LangGraphRunnableConfig,
} from "@langchain/langgraph";

const updateFavoritePets = tool(async (input, config: LangGraphRunnableConfig) => {
  // Some arguments are populated by the LLM; these are included in the schema below
  const { pets } = input;
  // Fetch the current input to the task that called this tool.
  // This will be identical to the input that was passed to the `ToolNode` that called this tool.
  const currentState = getCurrentTaskInput() as typeof MessagesAnnotation.State;
  // Other information (such as a UserID) are most easily provided via the config
  // This is set when invoking or streaming the graph
  const userId = config.configurable?.userId;
  // LangGraph's managed key-value store is also accessible from the config
  const store = config.store;
  await store.put([userId, "pets"], "names", pets);
  // Store the initial input message from the user as a note.
  // Using the same key will override previous values - you could
  // use something different if you wanted to store many interactions.
  await store.put([userId, "pets"], "context", { content: currentState.messages[0].content });

  return "update_favorite_pets called.";
},
{
  // The LLM "sees" the following schema:
  name: "update_favorite_pets",
  description: "add to the list of favorite pets.",
  schema: z.object({
    pets: z.array(z.string()),
  }),
});

If we look at the tool call schema, which is what is passed to the model for tool-calling, we can see that only pets is being passed:

import { zodToJsonSchema } from "zod-to-json-schema";

console.log(zodToJsonSchema(updateFavoritePets.schema));
{
  type: 'object',
  properties: { pets: { type: 'array', items: [Object] } },
  required: [ 'pets' ],
  additionalProperties: false,
  '$schema': 'http://json-schema.org/draft-07/schema#'
}

Let's also declare another tool so that our agent can retrieve previously set preferences:

const getFavoritePets = tool(
  async (_, config: LangGraphRunnableConfig) => {
    const userId = config.configurable?.userId;
    // LangGraph's managed key-value store is also accessible via the config
    const store = config.store;
    const petNames = await store.get([userId, "pets"], "names");
    const context = await store.get([userId, "pets"], "context");
    return JSON.stringify({
      pets: petNames.value,
      context: context.value.content,
    });
  },
  {
    // The LLM "sees" the following schema:
    name: "get_favorite_pets",
    description: "retrieve the list of favorite pets for the given user.",
    schema: z.object({}),
  }
);
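
Note that store.get resolves to null when nothing has been saved yet, so the tool above assumes update_favorite_pets has already run. A more defensive variant of the tool body (a sketch, not part of the original guide) might guard before reading .value:

// Defensive variant: store.get resolves with no item when no value
// has been saved under the given namespace and key.
const petNames = await store.get([userId, "pets"], "names");
const context = await store.get([userId, "pets"], "context");
if (petNames == null || context == null) {
  return "No favorite pets have been saved for this user yet.";
}
return JSON.stringify({
  pets: petNames.value,
  context: context.value.content,
});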

Define the nodes

From here, there's really nothing special that needs to be done. This approach works with both StateGraph and functional agents, and it works just as well with prebuilt agents like createReactAgent! We'll demonstrate it by defining a custom ReAct agent using StateGraph. This is very similar to the agent you'd get by calling createReactAgent instead.

Let's first define the nodes for our graph.

  1. The agent: responsible for deciding what (if any) actions to take.
  2. A function to invoke tools: if the agent decides to take an action, this node will then execute that action.

We'll also need to define some edges.

  1. After the agent is called, we should either invoke the tool node or finish.
  2. After the tool node is invoked, it should always go back to the agent to decide what to do next.

import {
  END,
  START,
  StateGraph,
  MemorySaver,
  InMemoryStore,
} from "@langchain/langgraph";
import { AIMessage } from "@langchain/core/messages";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { ChatOpenAI } from "@langchain/openai";

const model = new ChatOpenAI({ model: "gpt-4o" });

const tools = [getFavoritePets, updateFavoritePets];

const routeMessage = (state: typeof MessagesAnnotation.State) => {
  const { messages } = state;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  // If no tools are called, we can finish (respond to the user)
  if (!lastMessage?.tool_calls?.length) {
    return END;
  }
  // Otherwise if there is, we continue and call the tools
  return "tools";
};

const callModel = async (state: typeof MessagesAnnotation.State) => {
  const { messages } = state;
  const modelWithTools = model.bindTools(tools);
  const responseMessage = await modelWithTools.invoke([
    {
      role: "system",
      content: "You are a personal assistant. Store any preferences the user tells you about."
    },
    ...messages
  ]);
  return { messages: [responseMessage] };
};

const workflow = new StateGraph(MessagesAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", new ToolNode(tools))
  .addEdge(START, "agent")
  .addConditionalEdges("agent", routeMessage)
  .addEdge("tools", "agent");

const memory = new MemorySaver();
const store = new InMemoryStore();

const graph = workflow.compile({ checkpointer: memory, store: store });

import * as tslab from "tslab";

const graphViz = graph.getGraph();
const image = await graphViz.drawMermaidPng();
const arrayBuffer = await image.arrayBuffer();

await tslab.display.png(new Uint8Array(arrayBuffer));

Use it!

Let's use our graph now!

import {
  BaseMessage,
  isAIMessage,
  isHumanMessage,
  isToolMessage,
  HumanMessage,
  ToolMessage,
} from "@langchain/core/messages";

let inputs = {
  messages: [ new HumanMessage({ content: "My favorite pet is a terrier. I saw a cute one on Twitter." }) ],
};

let config = {
  configurable: {
    thread_id: "1",
    userId: "a-user",
  },
};

function printMessages(messages: BaseMessage[]) {
  for (const message of messages) {
    if (isHumanMessage(message)) {
      console.log(`User: ${message.content}`);
    } else if (isAIMessage(message)) {
      const aiMessage = message as AIMessage;
      if (aiMessage.content) {
        console.log(`Assistant: ${aiMessage.content}`);
      }
      if (aiMessage.tool_calls) {
        for (const toolCall of aiMessage.tool_calls) {
          console.log(`Tool call: ${toolCall.name}(${JSON.stringify(toolCall.args)})`);
        }
      }
    } else if (isToolMessage(message)) {
      const toolMessage = message as ToolMessage;
      console.log(`${toolMessage.name} tool output: ${toolMessage.content}`);
    }
  }
}

let { messages } = await graph.invoke(inputs, config);

printMessages(messages);
User: My favorite pet is a terrier. I saw a cute one on Twitter.
Tool call: update_favorite_pets({"pets":["terrier"]})
update_favorite_pets tool output: update_favorite_pets called.
Assistant: I've added "terrier" to your list of favorite pets. If you have any more favorites, feel free to let me know!

Now verify that it can properly fetch the stored preferences and cite where it got the information from:

inputs = { messages: [new HumanMessage({ content: "What're my favorite pets and what did I say when I told you about them?" })] };
config = {
  configurable: {
    thread_id: "2", // New thread ID, so the conversation history isn't present.
    userId: "a-user"
  }
};

messages = (await graph.invoke(inputs, config)).messages;

printMessages(messages);
User: What're my favorite pets and what did I say when I told you about them?
Tool call: get_favorite_pets({})
get_favorite_pets tool output: {"pets":["terrier"],"context":"My favorite pet is a terrier. I saw a cute one on Twitter."}
Assistant: Your favorite pet is a terrier. You mentioned, "My favorite pet is a terrier. I saw a cute one on Twitter."

You can see that the agent is able to properly cite that the information came from Twitter!

Closures

If you cannot use context variables in your environment, you can use closures to create tools with access to dynamic content. Here is a high-level example:

function generateTools(state: typeof MessagesAnnotation.State) {
  const updateFavoritePets = tool(
    async (input, config: LangGraphRunnableConfig) => {
      // Some arguments are populated by the LLM; these are included in the schema below
      const { pets } = input;
      // Others (such as a UserID) are best provided via the config
      // This is set when invoking or streaming the graph
      const userId = config.configurable?.userId;
      // LangGraph's managed key-value store is also accessible via the config
      const store = config.store;
      await store.put([userId, "pets"], "names", pets);
      await store.put([userId, "pets"], "context", { content: state.messages[0].content });

      return "update_favorite_pets called.";
    },
    {
      // The LLM "sees" the following schema:
      name: "update_favorite_pets",
      description: "add to the list of favorite pets.",
      schema: z.object({
        pets: z.array(z.string()),
      }),
    }
  );
  return [updateFavoritePets];
}

Then, when laying out your graph, you will need to call the above method whenever you bind or invoke tools. For example:

const toolNodeWithClosure = async (state: typeof MessagesAnnotation.State) => {
  // We fetch the tools any time this node is reached to
  // form a closure and let it access the latest messages
  const tools = generateTools(state);
  const toolNodeWithConfig = new ToolNode(tools);
  return toolNodeWithConfig.invoke(state);
};
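
The same applies to the model-calling node: since the tools are regenerated from state, they must be re-bound to the model on each call so the closure captures the latest state. A minimal sketch, assuming the model and generateTools defined above:

const callModelWithClosure = async (state: typeof MessagesAnnotation.State) => {
  // Regenerate the tools on every call so each closure sees the current state
  const toolsWithClosure = generateTools(state);
  const modelWithTools = model.bindTools(toolsWithClosure);
  const responseMessage = await modelWithTools.invoke(state.messages);
  return { messages: [responseMessage] };
};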