How to return structured output from the prebuilt ReAct agent

To return structured output from the prebuilt ReAct agent, you can provide a responseFormat parameter with the desired output schema to createReactAgent:

import { z } from "zod";
import { createReactAgent } from "@langchain/langgraph/prebuilt";

const responseFormat = z.object({
    // Respond to the user in this format
    mySpecialOutput: z.string(),
});

const graph = createReactAgent({
    llm: llm,
    tools: tools,
    // specify the schema for the structured output using `responseFormat` parameter
    responseFormat: responseFormat
});

The agent will return the output in the format specified by the responseFormat schema by making an additional LLM call at the end of the conversation, once no more tool calls need to be made. You can read this guide to learn about an alternate way of getting structured output from an agent: treating the structured output as just another tool.

Setup

First, we need to install the required packages.

yarn add @langchain/langgraph @langchain/openai @langchain/core zod

This guide will use OpenAI's GPT-4o model. We will optionally set our API key for LangSmith tracing, which will give us best-in-class observability.

// process.env.OPENAI_API_KEY = "sk_...";

// Optional, add tracing in LangSmith
// process.env.LANGSMITH_API_KEY = "ls__..."
process.env.LANGSMITH_TRACING = "true";
process.env.LANGSMITH_PROJECT = "ReAct Agent with system prompt: LangGraphJS";

Code

import { ChatOpenAI } from "@langchain/openai";
import { createReactAgent } from "@langchain/langgraph/prebuilt";
import { tool } from "@langchain/core/tools";
import { z } from "zod";

const weatherTool = tool(
  async (input): Promise<string> => {
    if (input.city === "nyc") {
      return "It might be cloudy in nyc";
    } else if (input.city === "sf") {
      return "It's always sunny in sf";
    } else {
      throw new Error("Unknown city");
    }
  },
  {
    name: "get_weather",
    description: "Use this to get weather information.",
    schema: z.object({
      city: z.enum(["nyc", "sf"]).describe("The city to get weather for"),
    }),
  }
);

const WeatherResponseSchema = z.object({
  conditions: z.string().describe("Weather conditions"),
});

const tools = [weatherTool];

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o", temperature: 0 }),
  tools: tools,
  responseFormat: WeatherResponseSchema,
}); 
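
Since responseFormat is just a Zod schema, you can optionally derive a static TypeScript type for the structured response from it. This is a small convenience on top of the example above, not something createReactAgent requires:

// Optional: derive a TypeScript type from the Zod schema so the
// structured response can be used with static typing elsewhere.
type WeatherResponse = z.infer<typeof WeatherResponseSchema>;
// => { conditions: string }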

Usage

Now let's test our agent:

const response = await agent.invoke({
  messages: [
    {
      role: "user",
      content: "What's the weather in NYC?",
    },
  ],
});

You can see that, in addition to the message history under the messages key, the agent output contains a structuredResponse key holding structured output that conforms to the specified WeatherResponseSchema.

response.structuredResponse
{ conditions: 'cloudy' }
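
The message history, including the tool call and the tool's result, remains available under the messages key. As a quick sanity check (the exact message count and content depend on the run), you could inspect it like this:

// The structured response is produced in addition to the regular message history.
console.log(response.messages.length);          // number of messages in the conversation
console.log(response.messages.at(-1)?.content); // final assistant message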

Custom system prompt

You might need to further customize the second LLM call used to generate the structured output and provide it with a system prompt. To do so, you can pass an object with prompt and schema keys to the responseFormat parameter:

const agent = createReactAgent({
  llm: new ChatOpenAI({ model: "gpt-4o", temperature: 0 }),
  tools: tools,
  responseFormat: {
    prompt: "Always return capitalized weather conditions",
    schema: WeatherResponseSchema,
  }
}); 

const response = await agent.invoke({
  messages: [
    {
      role: "user",
      content: "What's the weather in NYC?",
    },
  ],
});

You can verify that the structured response now contains a capitalized value:

response.structuredResponse
{ conditions: 'Cloudy' }