Build multi-agent systems

A single agent might struggle if it needs to specialize in multiple domains or manage many tools. To tackle this, you can break your agent into smaller, independent agents and compose them into a multi-agent system.

In multi-agent systems, agents need to communicate with each other. They do so via handoffs, a primitive that describes which agent to hand control off to and the payload to send to that agent.

This guide covers handoffs, controlling agent inputs, building a multi-agent system, multi-turn conversations, and the prebuilt implementations.

To get started with building multi-agent systems, check out LangGraph's prebuilt implementations of the two most popular multi-agent architectures: supervisor and swarm.

Handoffs

To set up communication between agents in a multi-agent system, you can use handoffs, a pattern in which one agent hands off control to another. Handoffs let you specify the following (see the sketch after this list):

  • Destination: the target agent to navigate to (e.g., the name of the LangGraph node to go to)
  • Payload: the information to pass to that agent (e.g., a state update)
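
As a minimal sketch, a handoff expressed with the Command primitive could look like the following; the node and state-key names here are placeholders:

from typing import Literal

from langgraph.types import Command


def agent(state) -> Command[Literal["agent", "another_agent"]]:
    return Command(
        # Destination: the agent (node) to navigate to next
        goto="another_agent",
        # Payload: the state update to pass along
        update={"my_state_key": "my_state_value"},
    )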

Create handoffs

To implement handoffs, you can return Command objects from agent nodes or tools:

API reference: tool | InjectedToolCallId | create_react_agent | InjectedState | StateGraph | START | Command

from typing import Annotated
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.prebuilt import create_react_agent, InjectedState
from langgraph.graph import StateGraph, START, MessagesState
from langgraph.types import Command

def create_handoff_tool(*, agent_name: str, description: str | None = None):
    name = f"transfer_to_{agent_name}"
    description = description or f"Transfer to {agent_name}"

    @tool(name, description=description)
    def handoff_tool(
        state: Annotated[MessagesState, InjectedState], # (1)!
        tool_call_id: Annotated[str, InjectedToolCallId],
    ) -> Command:
        tool_message = {
            "role": "tool",
            "content": f"Successfully transferred to {agent_name}",
            "name": name,
            "tool_call_id": tool_call_id,
        }
        return Command(  # (2)!
            goto=agent_name,  # (3)!
            update={"messages": state["messages"] + [tool_message]},  # (4)!
            graph=Command.PARENT,  # (5)!
        )
    return handoff_tool
  1. Access the state of the agent that is calling the handoff tool using the InjectedState annotation. See this guide for more information.
  2. The Command primitive allows specifying a state update and a node transition as a single operation, making it useful when implementing handoffs.
  3. Name of the agent or node to hand off to.
  4. Take the agent's messages and **add** them to the parent's **state** as part of the handoff. The next agent will see the parent state.
  5. Indicate to LangGraph that we need to navigate to the agent node in a **parent** multi-agent graph.

Tip

If you want to use tools that return Command, you can either use the prebuilt create_react_agent / ToolNode components, or implement your own tool-executing node that collects the Command objects returned by the tools and returns a list of them, e.g.:

def call_tools(state):
    # Collect the tool calls requested by the last AI message
    tool_calls = state["messages"][-1].tool_calls
    # Invoke each tool; handoff tools return `Command` objects
    commands = [tools_by_name[tool_call["name"]].invoke(tool_call) for tool_call in tool_calls]
    return commands

Important

This handoff implementation assumes that:

  • Each agent in the multi-agent system takes the overall message history (across all agents) as its input. If you want more control over agent inputs, see this section.
  • Each agent outputs its internal message history to the overall message history of the multi-agent system. If you want more control over **how agent outputs are added**, wrap the agent in a separate node function:

    def call_hotel_assistant(state):
        # return agent's final response,
        # excluding inner monologue
        response = hotel_assistant.invoke(state)
        return {"messages": response["messages"][-1]}
    

Control agent inputs

You can use the Send() primitive to directly send data to the worker agents during the handoff. For example, you can require the calling agent to populate a task description for the next agent:

API reference: tool | InjectedToolCallId | InjectedState | StateGraph | START | Command | Send

from typing import Annotated
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.prebuilt import InjectedState
from langgraph.graph import StateGraph, START, MessagesState
from langgraph.types import Command, Send

def create_task_description_handoff_tool(
    *, agent_name: str, description: str | None = None
):
    name = f"transfer_to_{agent_name}"
    description = description or f"Ask {agent_name} for help."

    @tool(name, description=description)
    def handoff_tool(
        # this is populated by the calling agent
        task_description: Annotated[
            str,
            "Description of what the next agent should do, including all of the relevant context.",
        ],
        # these parameters are ignored by the LLM
        state: Annotated[MessagesState, InjectedState],
    ) -> Command:
        task_description_message = {"role": "user", "content": task_description}
        agent_input = {**state, "messages": [task_description_message]}
        return Command(
            goto=[Send(agent_name, agent_input)],
            graph=Command.PARENT,
        )

    return handoff_tool

See the multi-agent supervisor tutorial for a full example of using Send() in handoffs.
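
As a rough sketch of how a calling agent might use this tool, the example below gives the task-description handoff tool to a supervisor-style agent; the supervisor name, prompt, and tool description are illustrative, and a hotel_assistant node is assumed to exist in the parent graph:

from langgraph.prebuilt import create_react_agent

transfer_to_hotel_assistant = create_task_description_handoff_tool(
    agent_name="hotel_assistant",
    description="Assign a hotel-booking task to the hotel assistant.",
)

# The LLM fills in `task_description` when it decides to delegate; the handoff
# then sends only that task description (not the full history) to the
# `hotel_assistant` node in the parent graph.
supervisor_agent = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    tools=[transfer_to_hotel_assistant],
    prompt="You manage a hotel booking assistant. Delegate hotel tasks to it.",
    name="supervisor",
)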

Build a multi-agent system

You can use handoffs in any agents built with LangGraph. We recommend using the prebuilt agent or ToolNode, as they natively support handoff tools that return Command. Below is an example of how you can implement a multi-agent system for booking travel using handoffs:

API reference: create_react_agent | StateGraph | START

from langgraph.prebuilt import create_react_agent
from langgraph.graph import StateGraph, START, MessagesState

def create_handoff_tool(*, agent_name: str, description: str | None = None):
    # same implementation as above
    ...
    return Command(...)

# Handoffs
transfer_to_hotel_assistant = create_handoff_tool(agent_name="hotel_assistant")
transfer_to_flight_assistant = create_handoff_tool(agent_name="flight_assistant")

# Define agents
flight_assistant = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    tools=[..., transfer_to_hotel_assistant],
    name="flight_assistant"
)
hotel_assistant = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    tools=[..., transfer_to_flight_assistant],
    name="hotel_assistant"
)

# Define multi-agent graph
multi_agent_graph = (
    StateGraph(MessagesState)
    .add_node(flight_assistant)
    .add_node(hotel_assistant)
    .add_edge(START, "flight_assistant")
    .compile()
)
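
Once compiled, the graph can be invoked or streamed like any other LangGraph graph; a minimal sketch (the user message is illustrative):

result = multi_agent_graph.invoke({
    "messages": [
        {"role": "user", "content": "book a flight from BOS to JFK and a stay at McKittrick Hotel"}
    ]
})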
Full example: multi-agent system for booking travel
from typing import Annotated
from langchain_core.messages import convert_to_messages
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.prebuilt import create_react_agent, InjectedState
from langgraph.graph import StateGraph, START, MessagesState
from langgraph.types import Command

# We'll use `pretty_print_messages` helper to render the streamed agent outputs nicely

def pretty_print_message(message, indent=False):
    pretty_message = message.pretty_repr(html=True)
    if not indent:
        print(pretty_message)
        return

    indented = "\n".join("\t" + c for c in pretty_message.split("\n"))
    print(indented)


def pretty_print_messages(update, last_message=False):
    is_subgraph = False
    if isinstance(update, tuple):
        ns, update = update
        # skip parent graph updates in the printouts
        if len(ns) == 0:
            return

        graph_id = ns[-1].split(":")[0]
        print(f"Update from subgraph {graph_id}:")
        print("\n")
        is_subgraph = True

    for node_name, node_update in update.items():
        update_label = f"Update from node {node_name}:"
        if is_subgraph:
            update_label = "\t" + update_label

        print(update_label)
        print("\n")

        messages = convert_to_messages(node_update["messages"])
        if last_message:
            messages = messages[-1:]

        for m in messages:
            pretty_print_message(m, indent=is_subgraph)
        print("\n")


def create_handoff_tool(*, agent_name: str, description: str | None = None):
    name = f"transfer_to_{agent_name}"
    description = description or f"Transfer to {agent_name}"

    @tool(name, description=description)
    def handoff_tool(
        state: Annotated[MessagesState, InjectedState], # (1)!
        tool_call_id: Annotated[str, InjectedToolCallId],
    ) -> Command:
        tool_message = {
            "role": "tool",
            "content": f"Successfully transferred to {agent_name}",
            "name": name,
            "tool_call_id": tool_call_id,
        }
        return Command(  # (2)!
            goto=agent_name,  # (3)!
            update={"messages": state["messages"] + [tool_message]},  # (4)!
            graph=Command.PARENT,  # (5)!
        )
    return handoff_tool

# Handoffs
transfer_to_hotel_assistant = create_handoff_tool(
    agent_name="hotel_assistant",
    description="Transfer user to the hotel-booking assistant.",
)
transfer_to_flight_assistant = create_handoff_tool(
    agent_name="flight_assistant",
    description="Transfer user to the flight-booking assistant.",
)

# Simple agent tools
def book_hotel(hotel_name: str):
    """Book a hotel"""
    return f"Successfully booked a stay at {hotel_name}."

def book_flight(from_airport: str, to_airport: str):
    """Book a flight"""
    return f"Successfully booked a flight from {from_airport} to {to_airport}."

# Define agents
flight_assistant = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    tools=[book_flight, transfer_to_hotel_assistant],
    prompt="You are a flight booking assistant",
    name="flight_assistant"
)
hotel_assistant = create_react_agent(
    model="anthropic:claude-3-5-sonnet-latest",
    tools=[book_hotel, transfer_to_flight_assistant],
    prompt="You are a hotel booking assistant",
    name="hotel_assistant"
)

# Define multi-agent graph
multi_agent_graph = (
    StateGraph(MessagesState)
    .add_node(flight_assistant)
    .add_node(hotel_assistant)
    .add_edge(START, "flight_assistant")
    .compile()
)

# Run the multi-agent graph
for chunk in multi_agent_graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "book a flight from BOS to JFK and a stay at McKittrick Hotel"
            }
        ]
    },
    subgraphs=True
):
    pretty_print_messages(chunk)
  1. Access the agent's state.
  2. The Command primitive allows specifying a state update and a node transition as a single operation, making it useful when implementing handoffs.
  3. Name of the agent or node to hand off to.
  4. Take the agent's messages and **add** them to the parent's **state** as part of the handoff. The next agent will see the parent state.
  5. Indicate to LangGraph that we need to navigate to the agent node in a **parent** multi-agent graph.

Multi-turn conversation

Users might want to engage in a multi-turn conversation with one or more agents. To build a system that can handle this, you can create a node that uses an interrupt to collect user input and routes back to the **active** agent.

Agents can then be implemented as nodes in a graph that executes agent steps and determines the next action:

  1. Wait for user input to continue the conversation, or
  2. Route to another agent (or back to itself, such as in a loop) via a handoff
from typing import Literal

from langgraph.types import Command, interrupt


def human(state) -> Command[Literal["agent", "another_agent"]]:
    """A node for collecting user input."""
    user_input = interrupt(value="Ready for user input.")

    # Determine the active agent.
    active_agent = ...

    ...
    return Command(
        update={
            "messages": [{
                "role": "human",
                "content": user_input,
            }]
        },
        goto=active_agent
    )

def agent(state) -> Command[Literal["agent", "another_agent", "human"]]:
    # The condition for routing/halting can be anything, e.g. LLM tool call / structured output, etc.
    goto = get_next_agent(...)  # 'agent' / 'another_agent'
    if goto:
        return Command(goto=goto, update={"my_state_key": "my_state_value"})
    else:
        return Command(goto="human") # Go to human node
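
Because the human node pauses at interrupt, running this pattern requires compiling the graph with a checkpointer and resuming with the Command primitive. A minimal sketch, assuming the nodes above have been added to a StateGraph builder named builder, with illustrative messages:

from langgraph.checkpoint.memory import MemorySaver

graph = builder.compile(checkpointer=MemorySaver())
config = {"configurable": {"thread_id": "1"}}

# Runs until the `human` node hits `interrupt(...)`
graph.invoke({"messages": [{"role": "user", "content": "hi"}]}, config)

# Feeds the user's reply back into the paused `human` node
graph.invoke(Command(resume="tell me more"), config)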
Full example: multi-agent system for travel recommendations

In this example, we will build a team of travel assistant agents that can communicate with each other via handoffs.

We will create 2 agents:

  • travel_advisor: can help with travel destination recommendations. Can ask hotel_advisor for help.
  • hotel_advisor: can help with hotel recommendations. Can ask travel_advisor for help.
import random
from typing import Annotated, Literal

from langchain_anthropic import ChatAnthropic
from langchain_core.tools import tool, InjectedToolCallId
from langgraph.graph import MessagesState, StateGraph, START
from langgraph.prebuilt import create_react_agent, InjectedState
from langgraph.types import Command, interrupt
from langgraph.checkpoint.memory import MemorySaver


model = ChatAnthropic(model="claude-3-5-sonnet-latest")


# Simple example tools for the agents below (illustrative stubs)
@tool
def get_travel_recommendations():
    """Get recommendations for warm travel destinations."""
    return random.choice(["aruba", "turks and caicos"])


@tool
def get_hotel_recommendations(location: Literal["aruba", "turks and caicos"]):
    """Get hotel recommendations for a given destination."""
    return {
        "aruba": [
            "The Ritz-Carlton, Aruba (Palm Beach)",
            "Bucuti & Tara Beach Resort (Eagle Beach)",
        ],
        "turks and caicos": ["Grace Bay Club"],
    }[location]


# Handoff-tool factory, following the same pattern as `create_handoff_tool`
# shown earlier in this guide
def make_handoff_tool(*, agent_name: str):
    name = f"transfer_to_{agent_name}"

    @tool(name, description=f"Ask {agent_name} for help.")
    def handoff_tool(
        state: Annotated[MessagesState, InjectedState],
        tool_call_id: Annotated[str, InjectedToolCallId],
    ) -> Command:
        tool_message = {
            "role": "tool",
            "content": f"Successfully transferred to {agent_name}",
            "name": name,
            "tool_call_id": tool_call_id,
        }
        return Command(
            goto=agent_name,
            update={"messages": state["messages"] + [tool_message]},
            graph=Command.PARENT,
        )

    return handoff_tool

class MultiAgentState(MessagesState):
    last_active_agent: str


# Define travel advisor tools and ReAct agent
travel_advisor_tools = [
    get_travel_recommendations,
    make_handoff_tool(agent_name="hotel_advisor"),
]
travel_advisor = create_react_agent(
    model,
    travel_advisor_tools,
    prompt=(
        "You are a general travel expert that can recommend travel destinations (e.g. countries, cities, etc). "
        "If you need hotel recommendations, ask 'hotel_advisor' for help. "
        "You MUST include human-readable response before transferring to another agent."
    ),
)


def call_travel_advisor(
    state: MultiAgentState,
) -> Command[Literal["hotel_advisor", "human"]]:
    # You can also add additional logic like changing the input to the agent / output from the agent, etc.
    # NOTE: we're invoking the ReAct agent with the full history of messages in the state
    response = travel_advisor.invoke(state)
    update = {**response, "last_active_agent": "travel_advisor"}
    return Command(update=update, goto="human")


# Define hotel advisor tools and ReAct agent
hotel_advisor_tools = [
    get_hotel_recommendations,
    make_handoff_tool(agent_name="travel_advisor"),
]
hotel_advisor = create_react_agent(
    model,
    hotel_advisor_tools,
    prompt=(
        "You are a hotel expert that can provide hotel recommendations for a given destination. "
        "If you need help picking travel destinations, ask 'travel_advisor' for help."
        "You MUST include human-readable response before transferring to another agent."
    ),
)


def call_hotel_advisor(
    state: MultiAgentState,
) -> Command[Literal["travel_advisor", "human"]]:
    response = hotel_advisor.invoke(state)
    update = {**response, "last_active_agent": "hotel_advisor"}
    return Command(update=update, goto="human")


def human_node(
    state: MultiAgentState, config
) -> Command[Literal["hotel_advisor", "travel_advisor", "human"]]:
    """A node for collecting user input."""

    user_input = interrupt(value="Ready for user input.")
    active_agent = state["last_active_agent"]

    return Command(
        update={
            "messages": [
                {
                    "role": "human",
                    "content": user_input,
                }
            ]
        },
        goto=active_agent,
    )


builder = StateGraph(MultiAgentState)
builder.add_node("travel_advisor", call_travel_advisor)
builder.add_node("hotel_advisor", call_hotel_advisor)

# This adds a node to collect human input, which will route
# back to the active agent.
builder.add_node("human", human_node)

# We'll always start with a general travel advisor.
builder.add_edge(START, "travel_advisor")


checkpointer = MemorySaver()
graph = builder.compile(checkpointer=checkpointer)

Let's test a multi-turn conversation with this application.

import uuid

thread_config = {"configurable": {"thread_id": str(uuid.uuid4())}}

inputs = [
    # 1st round of conversation,
    {
        "messages": [
            {"role": "user", "content": "i wanna go somewhere warm in the caribbean"}
        ]
    },
    # Since we're using `interrupt`, we'll need to resume using the Command primitive.
    # 2nd round of conversation,
    Command(
        resume="could you recommend a nice hotel in one of the areas and tell me which area it is."
    ),
    # 3rd round of conversation,
    Command(
        resume="i like the first one. could you recommend something to do near the hotel?"
    ),
]

for idx, user_input in enumerate(inputs):
    print()
    print(f"--- Conversation Turn {idx + 1} ---")
    print()
    print(f"User: {user_input}")
    print()
    for update in graph.stream(
        user_input,
        config=thread_config,
        stream_mode="updates",
    ):
        for node_id, value in update.items():
            if isinstance(value, dict) and value.get("messages", []):
                last_message = value["messages"][-1]
                if isinstance(last_message, dict) or last_message.type != "ai":
                    continue
                print(f"{node_id}: {last_message.content}")
--- Conversation Turn 1 ---

User: {'messages': [{'role': 'user', 'content': 'i wanna go somewhere warm in the caribbean'}]}

travel_advisor: Based on the recommendations, Aruba would be an excellent choice for your Caribbean getaway! Aruba is known as "One Happy Island" and offers:
- Year-round warm weather with consistent temperatures around 82°F (28°C)
- Beautiful white sand beaches like Eagle Beach and Palm Beach
- Clear turquoise waters perfect for swimming and snorkeling
- Minimal rainfall and location outside the hurricane belt
- A blend of Caribbean and Dutch culture
- Great dining options and nightlife
- Various water sports and activities

Would you like me to get some specific hotel recommendations in Aruba for your stay? I can transfer you to our hotel advisor who can help with accommodations.

--- Conversation Turn 2 ---

User: Command(resume='could you recommend a nice hotel in one of the areas and tell me which area it is.')

hotel_advisor: Based on the recommendations, I can suggest two excellent options:

1. The Ritz-Carlton, Aruba - Located in Palm Beach
- This luxury resort is situated in the vibrant Palm Beach area
- Known for its exceptional service and amenities
- Perfect if you want to be close to dining, shopping, and entertainment
- Features multiple restaurants, a casino, and a world-class spa
- Located on a pristine stretch of Palm Beach

2. Bucuti & Tara Beach Resort - Located in Eagle Beach
- An adults-only boutique resort on Eagle Beach
- Known for being more intimate and peaceful
- Award-winning for its sustainability practices
- Perfect for a romantic getaway or peaceful vacation
- Located on one of the most beautiful beaches in the Caribbean

Would you like more specific information about either of these properties or their locations?

--- Conversation Turn 3 ---

User: Command(resume='i like the first one. could you recommend something to do near the hotel?')

travel_advisor: Near the Ritz-Carlton in Palm Beach, here are some highly recommended activities:

1. Visit the Palm Beach Plaza Mall - Just a short walk from the hotel, featuring shopping, dining, and entertainment
2. Try your luck at the Stellaris Casino - It's right in the Ritz-Carlton
3. Take a sunset sailing cruise - Many depart from the nearby pier
4. Visit the California Lighthouse - A scenic landmark just north of Palm Beach
5. Enjoy water sports at Palm Beach:
   - Jet skiing
   - Parasailing
   - Snorkeling
   - Stand-up paddleboarding

Would you like more specific information about any of these activities or would you like to know about other options in the area?

Prebuilt implementations

LangGraph provides prebuilt implementations of the two most popular multi-agent architectures:

  • Supervisor: individual agents are coordinated by a central supervisor agent. The supervisor controls all communication flow and task delegation, deciding which agent to call based on the current context and task requirements. You can use the langgraph-supervisor library to create a supervisor multi-agent system (see the sketch after this list).
  • Swarm: agents dynamically hand off control to one another based on their specializations. The system remembers which agent was last active, ensuring that on subsequent interactions the conversation resumes with that agent. You can use the langgraph-swarm library to create a swarm multi-agent system.
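
As a rough illustration of the supervisor approach, here is a minimal sketch using langgraph-supervisor. It assumes the flight_assistant and hotel_assistant agents from the travel-booking example above, and the prompt wording is illustrative:

from langchain_anthropic import ChatAnthropic
from langgraph_supervisor import create_supervisor

# Reuses the `flight_assistant` and `hotel_assistant` agents created with
# `create_react_agent` in the travel-booking example above.
supervisor = create_supervisor(
    agents=[flight_assistant, hotel_assistant],
    model=ChatAnthropic(model="claude-3-5-sonnet-latest"),
    prompt=(
        "You manage a flight booking assistant and a hotel booking assistant. "
        "Assign work to them as needed."
    ),
).compile()

supervisor.invoke(
    {"messages": [{"role": "user", "content": "book a flight from BOS to JFK"}]}
)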