Prebuilt

create_react_agent(model: Union[str, LanguageModelLike], tools: Union[Sequence[Union[BaseTool, Callable]], ToolNode], *, prompt: Optional[Prompt] = None, response_format: Optional[Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]] = None, state_schema: Optional[StateSchemaType] = None, config_schema: Optional[Type[Any]] = None, checkpointer: Optional[Checkpointer] = None, store: Optional[BaseStore] = None, interrupt_before: Optional[list[str]] = None, interrupt_after: Optional[list[str]] = None, debug: bool = False, version: Literal['v1', 'v2'] = 'v1', name: Optional[str] = None) -> CompiledGraph

Creates a graph that works with a chat model that utilizes tool calling.

Parameters

  • model (Union[str, LanguageModelLike]) –

    A LangChain chat model that supports tool calling.

  • tools (Union[Sequence[Union[BaseTool, Callable]], ToolNode]) –

    A list of tools or a ToolNode instance. If an empty list is provided, the agent will consist of a single LLM node without tool calling.

  • prompt (Optional[Prompt], default: None ) –

    An optional prompt for the LLM. It can take a few different forms:

    • str: This is converted to a SystemMessage and added to the beginning of the list of messages in state["messages"].
    • SystemMessage: This is added to the beginning of the list of messages in state["messages"].
    • Callable: This function should take in the full graph state, and its output is then passed to the language model (see the callable-prompt example below).
    • Runnable: This runnable should take in the full graph state, and its output is then passed to the language model.
  • response_format (Optional[Union[StructuredResponseSchema, tuple[str, StructuredResponseSchema]]], default: None ) –

    An optional schema for the final agent output.

    If provided, the output will be formatted to match the given schema and returned under the 'structured_response' state key. If not provided, structured_response will not be present in the output state. Can be passed in as one of the following (a structured-response example appears at the end of the Examples section below):

    • an OpenAI function/tool schema,
    • a JSON Schema,
    • a TypedDict class,
    • a Pydantic class,
    • a tuple (prompt, schema), where schema is one of the above. The prompt will be used together with the model that is being used to generate the structured response.

    Important

    response_format requires the model to support .with_structured_output.

    Note

    The graph will make a separate call to the LLM to generate the structured response after the agent loop is finished. This is not the only strategy to get structured responses; see more options in this guide.

  • state_schema (Optional[StateSchemaType], default: None ) –

    An optional state schema that defines the graph state. Must have messages and remaining_steps keys. Defaults to AgentState, which defines those two keys.

  • config_schema (Optional[Type[Any]], default: None ) –

    An optional schema for configuration. Use this to expose configurable parameters via agent.config_specs.

  • checkpointer (Optional[Checkpointer], default: None ) –

    An optional checkpoint saver object. This is used for persisting the state of the graph (e.g., as chat memory) for a single thread (e.g., a single conversation).

  • store (Optional[BaseStore], default: None ) –

    An optional store object. This is used for persisting data across multiple threads (e.g., multiple conversations / users).

  • interrupt_before (Optional[list[str]], default: None ) –

    An optional list of node names to interrupt before. Should be one of the following: "agent", "tools". This is useful if you want to add a user confirmation or other interrupt before taking an action.

  • interrupt_after (Optional[list[str]], default: None ) –

    An optional list of node names to interrupt after. Should be one of the following: "agent", "tools". This is useful if you want to return directly or run additional processing on an output.

  • debug (bool, default: False ) –

    A flag indicating whether to enable debug mode.

  • version (Literal['v1', 'v2'], default: 'v1' ) –

    Determines the version of the graph to create. Can be one of:

    • "v1":工具节点处理单条消息。消息中的所有工具调用都在工具节点内并行执行。
    • "v2":工具节点处理工具调用。工具调用使用 Send API 分布在工具节点的多个实例中。
  • name (Optional[str], default: None ) –

    An optional name for the CompiledStateGraph. This name will be automatically used when adding the ReAct agent graph to another graph as a subgraph node. It is particularly useful for building multi-agent systems (see the subgraph example below).

Returns

  • CompiledGraph

    A compiled LangChain runnable that can be used for chat interactions.

The resulting graph looks like this:

stateDiagram-v2
    [*] --> Start
    Start --> Agent
    Agent --> Tools : continue
    Tools --> Agent
    Agent --> End : end
    End --> [*]

    classDef startClass fill:#ffdfba;
    classDef endClass fill:#baffc9;
    classDef otherClass fill:#fad7de;

    class Start startClass
    class End endClass
    class Agent,Tools otherClass

The "agent" node calls the language model with the messages list (after applying the messages modifier). If the resulting AIMessage contains tool_calls, the graph will then call the "tools" node. The "tools" node executes the tools (one tool per tool_call) and adds the responses to the messages list as ToolMessage objects. The agent node then calls the language model again. The process repeats until no more tool_calls are present in the response. The agent then returns the full list of messages as a dictionary containing the key "messages".

    sequenceDiagram
        participant U as User
        participant A as Agent (LLM)
        participant T as Tools
        U->>A: Initial input
        Note over A: Messages modifier + LLM
        loop while tool_calls present
            A->>T: Execute tools
            T-->>A: ToolMessage for each tool_calls
        end
        A->>U: Return final state

Examples

Use with a simple tool

>>> from langchain_openai import ChatOpenAI
>>> from langgraph.prebuilt import create_react_agent


>>> def check_weather(location: str) -> str:
...     '''Return the weather forecast for the specified location.'''
...     return f"It's always sunny in {location}"
>>>
>>> tools = [check_weather]
>>> model = ChatOpenAI(model="gpt-4o")
>>> graph = create_react_agent(model, tools=tools)
>>> inputs = {"messages": [("user", "what is the weather in sf")]}
>>> for s in graph.stream(inputs, stream_mode="values"):
...     message = s["messages"][-1]
...     if isinstance(message, tuple):
...         print(message)
...     else:
...         message.pretty_print()
('user', 'what is the weather in sf')
================================== Ai Message ==================================
Tool Calls:
check_weather (call_LUzFvKJRuaWQPeXvBOzwhQOu)
Call ID: call_LUzFvKJRuaWQPeXvBOzwhQOu
Args:
    location: San Francisco
================================= Tool Message =================================
Name: check_weather
It's always sunny in San Francisco
================================== Ai Message ==================================
The weather in San Francisco is sunny.
Add a system prompt for the LLM

>>> system_prompt = "You are a helpful bot named Fred."
>>> graph = create_react_agent(model, tools, prompt=system_prompt)
>>> inputs = {"messages": [("user", "What's your name? And what's the weather in SF?")]}
>>> for s in graph.stream(inputs, stream_mode="values"):
...     message = s["messages"][-1]
...     if isinstance(message, tuple):
...         print(message)
...     else:
...         message.pretty_print()
('user', "What's your name? And what's the weather in SF?")
================================== Ai Message ==================================
Hi, my name is Fred. Let me check the weather in San Francisco for you.
Tool Calls:
check_weather (call_lqhj4O0hXYkW9eknB4S41EXk)
Call ID: call_lqhj4O0hXYkW9eknB4S41EXk
Args:
    location: San Francisco
================================= Tool Message =================================
Name: check_weather
It's always sunny in San Francisco
================================== Ai Message ==================================
The weather in San Francisco is currently sunny. If you need any more details or have other questions, feel free to ask!

Add a more complex prompt for the LLM

>>> from langchain_core.prompts import ChatPromptTemplate
>>> prompt = ChatPromptTemplate.from_messages([
...     ("system", "You are a helpful bot named Fred."),
...     ("placeholder", "{messages}"),
...     ("user", "Remember, always be polite!"),
... ])
>>>
>>> graph = create_react_agent(model, tools, prompt=prompt)
>>> inputs = {"messages": [("user", "What's your name? And what's the weather in SF?")]}
>>> for s in graph.stream(inputs, stream_mode="values"):
...     message = s["messages"][-1]
...     if isinstance(message, tuple):
...         print(message)
...     else:
...         message.pretty_print()
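
Use a callable prompt (a minimal sketch; it reuses the model and tools defined above, and the callable simply prepends a system message to the state's messages)

>>> def format_prompt(state):
...     '''Build the model input from the full graph state.'''
...     return [("system", "You are a helpful bot named Fred.")] + state["messages"]
>>>
>>> graph = create_react_agent(model, tools, prompt=format_prompt)
>>> inputs = {"messages": [("user", "What's your name?")]}
>>> for s in graph.stream(inputs, stream_mode="values"):
...     s["messages"][-1].pretty_print()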

Add a complex prompt with custom graph state

>>> from typing_extensions import Annotated, TypedDict
>>>
>>> from langchain_core.messages import BaseMessage
>>> from langgraph.graph.message import add_messages
>>> from langgraph.managed import IsLastStep
>>> prompt = ChatPromptTemplate.from_messages(
...     [
...         ("system", "Today is {today}"),
...         ("placeholder", "{messages}"),
...     ]
... )
>>>
>>> class CustomState(TypedDict):
...     today: str
...     messages: Annotated[list[BaseMessage], add_messages]
...     is_last_step: IsLastStep
>>>
>>> graph = create_react_agent(model, tools, state_schema=CustomState, prompt=prompt)
>>> inputs = {"messages": [("user", "What's today's date? And what's the weather in SF?")], "today": "July 16, 2004"}
>>> for s in graph.stream(inputs, stream_mode="values"):
...     message = s["messages"][-1]
...     if isinstance(message, tuple):
...         print(message)
...     else:
...         message.pretty_print()

Add thread-level "chat memory" to the graph

>>> from langgraph.checkpoint.memory import MemorySaver
>>> graph = create_react_agent(model, tools, checkpointer=MemorySaver())
>>> config = {"configurable": {"thread_id": "thread-1"}}
>>> def print_stream(graph, inputs, config):
...     for s in graph.stream(inputs, config, stream_mode="values"):
...         message = s["messages"][-1]
...         if isinstance(message, tuple):
...             print(message)
...         else:
...             message.pretty_print()
>>> inputs = {"messages": [("user", "What's the weather in SF?")]}
>>> print_stream(graph, inputs, config)
>>> inputs2 = {"messages": [("user", "Cool, so then should i go biking today?")]}
>>> print_stream(graph, inputs2, config)
('user', "What's the weather in SF?")
================================== Ai Message ==================================
Tool Calls:
check_weather (call_ChndaktJxpr6EMPEB5JfOFYc)
Call ID: call_ChndaktJxpr6EMPEB5JfOFYc
Args:
    location: San Francisco
================================= Tool Message =================================
Name: check_weather
It's always sunny in San Francisco
================================== Ai Message ==================================
The weather in San Francisco is sunny. Enjoy your day!
================================ Human Message =================================
Cool, so then should i go biking today?
================================== Ai Message ==================================
Since the weather in San Francisco is sunny, it sounds like a great day for biking! Enjoy your ride!

Add an interrupt to let the user confirm before taking an action

>>> graph = create_react_agent(
...     model, tools, interrupt_before=["tools"], checkpointer=MemorySaver()
... )
>>> config = {"configurable": {"thread_id": "thread-1"}}

>>> inputs = {"messages": [("user", "What's the weather in SF?")]}
>>> print_stream(graph, inputs, config)
>>> snapshot = graph.get_state(config)
>>> print("Next step: ", snapshot.next)
>>> print_stream(graph, None, config)

Add cross-thread memory to the graph

>>> from typing import Annotated
>>> from langchain_core.runnables import RunnableConfig
>>> from langgraph.prebuilt import InjectedStore
>>> from langgraph.prebuilt.chat_agent_executor import AgentState
>>> from langgraph.store.base import BaseStore

>>> def save_memory(memory: str, *, config: RunnableConfig, store: Annotated[BaseStore, InjectedStore()]) -> str:
...     '''Save the given memory for the current user.'''
...     # This is a **tool** the model can use to save memories to storage
...     user_id = config.get("configurable", {}).get("user_id")
...     namespace = ("memories", user_id)
...     store.put(namespace, f"memory_{len(store.search(namespace))}", {"data": memory})
...     return f"Saved memory: {memory}"

>>> def prepare_model_inputs(state: AgentState, config: RunnableConfig, store: BaseStore):
...     # Retrieve user memories and add them to the system message
...     # This function is called **every time** the model is prompted. It converts the state to a prompt
...     user_id = config.get("configurable", {}).get("user_id")
...     namespace = ("memories", user_id)
...     memories = [m.value["data"] for m in store.search(namespace)]
...     system_msg = f"User memories: {', '.join(memories)}"
...     return [{"role": "system", "content": system_msg}] + state["messages"]

>>> from langgraph.checkpoint.memory import MemorySaver
>>> from langgraph.store.memory import InMemoryStore
>>> store = InMemoryStore()
>>> graph = create_react_agent(model, [save_memory], prompt=prepare_model_inputs, store=store, checkpointer=MemorySaver())
>>> config = {"configurable": {"thread_id": "thread-1", "user_id": "1"}}

>>> inputs = {"messages": [("user", "Hey I'm Will, how's it going?")]}
>>> print_stream(graph, inputs, config)
('user', "Hey I'm Will, how's it going?")
================================== Ai Message ==================================
Hello Will! It's nice to meet you. I'm doing well, thank you for asking. How are you doing today?

>>> inputs2 = {"messages": [("user", "I like to bike")]}
>>> print_stream(graph, inputs2, config)
================================ Human Message =================================
I like to bike
================================== Ai Message ==================================
That's great to hear, Will! Biking is an excellent hobby and form of exercise. It's a fun way to stay active and explore your surroundings. Do you have any favorite biking routes or trails you enjoy? Or perhaps you're into a specific type of biking, like mountain biking or road cycling?

>>> config = {"configurable": {"thread_id": "thread-2", "user_id": "1"}}
>>> inputs3 = {"messages": [("user", "Hi there! Remember me?")]}
>>> print_stream(graph, inputs3, config)
================================ Human Message =================================
Hi there! Remember me?
================================== Ai Message ==================================
User memories:
Hello! Of course, I remember you, Will! You mentioned earlier that you like to bike. It's great to hear from you again. How have you been? Have you been on any interesting bike rides lately?
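
Use the agent as a subgraph node in another graph (a minimal sketch; the parent graph and its wiring are illustrative, reusing the model and tools defined above)

>>> from langgraph.graph import StateGraph, START, MessagesState
>>> weather_agent = create_react_agent(model, tools, name="weather_agent")
>>> builder = StateGraph(MessagesState)
>>> builder.add_node(weather_agent)  # the agent's name is used as the node name
>>> builder.add_edge(START, "weather_agent")
>>> parent_graph = builder.compile()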

Add a timeout for a given step

>>> import time
>>> def check_weather(location: str) -> str:
...     '''Return the weather forecast for the specified location.'''
...     time.sleep(2)
...     return f"It's always sunny in {location}"
>>>
>>> tools = [check_weather]
>>> graph = create_react_agent(model, tools)
>>> graph.step_timeout = 1 # Seconds
>>> for s in graph.stream({"messages": [("user", "what is the weather in sf")]}):
...     print(s)
TimeoutError: Timed out at step 2
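
Return a structured response via response_format (a minimal sketch; it assumes the model supports .with_structured_output, and the WeatherReport schema is illustrative)

>>> from pydantic import BaseModel
>>> class WeatherReport(BaseModel):
...     '''The final weather report.'''
...     location: str
...     conditions: str
>>>
>>> graph = create_react_agent(model, tools, response_format=WeatherReport)
>>> result = graph.invoke({"messages": [("user", "what is the weather in sf")]})
>>> result["structured_response"]  # a WeatherReport instance, e.g. WeatherReport(location='San Francisco', conditions='sunny')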

ToolNode

Bases: RunnableCallable

A node that runs the tools called in the last AIMessage.

It can be used in a StateGraph with a "messages" state key (or a custom key passed via ToolNode's 'messages_key'). If multiple tool calls are requested, they will be run in parallel. The output will be a list of ToolMessages, one for each tool call.

Tool calls can also be passed directly as a list of ToolCall dicts.

Parameters

  • tools (Sequence[Union[BaseTool, Callable]]) –

    A sequence of tools that the ToolNode can invoke.

  • name (str, default: 'tools' ) –

    The name of the ToolNode in the graph. Defaults to "tools".

  • tags (Optional[list[str]], default: None ) –

    Optional tags to associate with the node. Defaults to None.

  • handle_tool_errors (Union[bool, str, Callable[..., str], tuple[type[Exception], ...]], default: True ) –

    How to handle tool errors raised by tools inside the node (a short error-handling sketch follows this parameter list). Defaults to True. Must be one of the following:

    • True: all errors will be caught, and a ToolMessage with the default error message (TOOL_CALL_ERROR_TEMPLATE) will be returned.
    • str: all errors will be caught, and a ToolMessage with the string value of 'handle_tool_errors' will be returned.
    • tuple[type[Exception], ...]: exceptions in the tuple will be caught, and a ToolMessage with the default error message (TOOL_CALL_ERROR_TEMPLATE) will be returned.
    • Callable[..., str]: exceptions matching the callable's signature will be caught, and a ToolMessage with the string value of the result of the 'handle_tool_errors' callable will be returned.
    • False: none of the errors raised by the tools will be caught.
  • messages_key (str, default: 'messages' ) –

    The state key in the input that contains the list of messages. The same key will be used for the output of the ToolNode. Defaults to "messages".
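
For example, a custom error handler can be passed via handle_tool_errors (a minimal sketch; the handler and its message are illustrative, and check_weather is the tool defined in the examples above):

def handle_errors(e: ValueError) -> str:
    # The returned string becomes the content of the error ToolMessage.
    return f"Invalid input: {e}. Please fix your mistakes."

# Only ValueError (matching the handler's signature) is caught; other errors are raised.
tool_node = ToolNode([check_weather], handle_tool_errors=handle_errors)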

The ToolNode is roughly analogous to:

tools_by_name = {tool.name: tool for tool in tools}
def tool_node(state: dict):
    result = []
    for tool_call in state["messages"][-1].tool_calls:
        tool = tools_by_name[tool_call["name"]]
        observation = tool.invoke(tool_call["args"])
        result.append(ToolMessage(content=observation, tool_call_id=tool_call["id"]))
    return {"messages": result}

Tool calls can also be passed directly to a ToolNode. This can be useful when using the Send API, e.g., in a conditional edge:

def example_conditional_edge(state: dict) -> List[Send]:
    tool_calls = state["messages"][-1].tool_calls
    # If tools rely on state or store variables (whose values are not generated
    # directly by a model), you can inject them into the tool calls.
    tool_calls = [
        tool_node.inject_tool_args(call, state, store)
        for call in tool_calls
    ]
    return [Send("tools", [tool_call]) for tool_call in tool_calls]
Important
  • The input state can be one of the following:
    • a dict with a messages key containing a list of messages.
    • a list of messages.
    • a list of tool calls.
  • If operating on a message list, the last message must be an AIMessage with tool_calls populated.
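
For instance, a list of tool calls can be passed to the node directly (a minimal sketch, reusing the check_weather tool from the examples above):

tool_node = ToolNode([check_weather])
tool_call = {"name": "check_weather", "args": {"location": "sf"}, "id": "1", "type": "tool_call"}
tool_node.invoke([tool_call])  # expected: a list with one ToolMessage for the call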

inject_tool_args(tool_call: ToolCall, input: Union[list[AnyMessage], dict[str, Any], BaseModel], store: Optional[BaseStore]) -> ToolCall

Injects the state and store into the tool call.

Tool arguments with types annotated as InjectedState or InjectedStore are ignored in the tool schema for generation purposes. This method injects them into the tool calls for tool execution.

Parameters

  • tool_call (ToolCall) –

    The tool call to inject state and store into.

  • input (Union[list[AnyMessage], dict[str, Any], BaseModel]) –

    The input state to inject.

  • store (Optional[BaseStore]) –

    The store to inject.

Returns

  • ToolCall (ToolCall) –

    The tool call with injected state and store.

InjectedState

Bases: InjectedToolArg

An annotation for a tool argument that is meant to be populated with the graph state.

Any tool argument annotated with InjectedState will be hidden from the tool-calling model, so that the model doesn't attempt to generate the argument. If using ToolNode, the appropriate graph state field will be automatically injected into the model-generated tool args.

Parameters

  • field (Optional[str], default: None ) –

    The key from the state to insert. If None, the entire state is expected to be passed in.

Example
from typing import List
from typing_extensions import Annotated, TypedDict

from langchain_core.messages import BaseMessage, AIMessage
from langchain_core.tools import tool

from langgraph.prebuilt import InjectedState, ToolNode


class AgentState(TypedDict):
    messages: List[BaseMessage]
    foo: str

@tool
def state_tool(x: int, state: Annotated[dict, InjectedState]) -> str:
    '''Do something with state.'''
    if len(state["messages"]) > 2:
        return state["foo"] + str(x)
    else:
        return "not enough messages"

@tool
def foo_tool(x: int, foo: Annotated[str, InjectedState("foo")]) -> str:
    '''Do something else with state.'''
    return foo + str(x + 1)

node = ToolNode([state_tool, foo_tool])

tool_call1 = {"name": "state_tool", "args": {"x": 1}, "id": "1", "type": "tool_call"}
tool_call2 = {"name": "foo_tool", "args": {"x": 1}, "id": "2", "type": "tool_call"}
state = {
    "messages": [AIMessage("", tool_calls=[tool_call1, tool_call2])],
    "foo": "bar",
}
node.invoke(state)
[
    ToolMessage(content='not enough messages', name='state_tool', tool_call_id='1'),
    ToolMessage(content='bar2', name='foo_tool', tool_call_id='2')
]

InjectedStore

Bases: InjectedToolArg

An annotation for a tool argument that is meant to be populated with a LangGraph store.

Any tool argument annotated with InjectedStore will be hidden from the tool-calling model, so that the model doesn't attempt to generate the argument. If using ToolNode, the appropriate store field will be automatically injected into the model-generated tool args. Note: if a graph is compiled with a store object, the store will be automatically propagated to the tools with InjectedStore args when using ToolNode.

Warning

The InjectedStore annotation requires langchain-core >= 0.3.8.

Example
from typing import Any
from typing_extensions import Annotated

from langchain_core.messages import AIMessage
from langchain_core.tools import tool

from langgraph.store.memory import InMemoryStore
from langgraph.prebuilt import InjectedStore, ToolNode

store = InMemoryStore()
store.put(("values",), "foo", {"bar": 2})

@tool
def store_tool(x: int, my_store: Annotated[Any, InjectedStore()]) -> str:
    '''Do something with store.'''
    stored_value = my_store.get(("values",), "foo").value["bar"]
    return stored_value + x

node = ToolNode([store_tool])

tool_call = {"name": "store_tool", "args": {"x": 1}, "id": "1", "type": "tool_call"}
state = {
    "messages": [AIMessage("", tool_calls=[tool_call])],
}

node.invoke(state, store=store)
{
    "messages": [
        ToolMessage(content='3', name='store_tool', tool_call_id='1'),
    ]
}

tools_condition(state: Union[list[AnyMessage], dict[str, Any], BaseModel], messages_key: str = 'messages') -> Literal['tools', '__end__']

Use in the conditional_edge to route to the ToolNode if the last message has tool calls. Otherwise, route to the end.

Parameters

  • state (Union[list[AnyMessage], dict[str, Any], BaseModel]) –

    The state to check for tool calls. Must have a list of messages (MessageGraph) or the "messages" key (StateGraph).

Returns

  • Literal['tools', '__end__']

    The next node to route to.

Examples

Create a custom ReAct-style agent with tools:

>>> from langchain_anthropic import ChatAnthropic
>>> from langchain_core.tools import tool
...
>>> from langgraph.graph import StateGraph
>>> from langgraph.prebuilt import ToolNode, tools_condition
>>> from langgraph.graph.message import add_messages
...
>>> from typing import Annotated
>>> from typing_extensions import TypedDict
...
>>> @tool
... def divide(a: float, b: float) -> float:
...     """Return a / b."""
...     return a / b
...
>>> llm = ChatAnthropic(model="claude-3-haiku-20240307")
>>> tools = [divide]
...
>>> class State(TypedDict):
...     messages: Annotated[list, add_messages]
>>>
>>> graph_builder = StateGraph(State)
>>> graph_builder.add_node("tools", ToolNode(tools))
>>> graph_builder.add_node("chatbot", lambda state: {"messages":llm.bind_tools(tools).invoke(state['messages'])})
>>> graph_builder.add_edge("tools", "chatbot")
>>> graph_builder.add_conditional_edges(
...     "chatbot", tools_condition
... )
>>> graph_builder.set_entry_point("chatbot")
>>> graph = graph_builder.compile()
>>> graph.invoke({"messages": {"role": "user", "content": "What's 329993 divided by 13662?"}})

This module provides a ValidationNode class that can be used to validate tool calls in a langchain graph. It applies a pydantic schema to the tool_calls in the model's outputs and returns a ToolMessage with the validated content. If the schema is not valid, it returns a ToolMessage with the error message. The ValidationNode can be used in a StateGraph with a "messages" key or in a MessageGraph. If multiple tool calls are requested, they will be run in parallel.

ValidationNode

Bases: RunnableCallable

A node that validates all tool requests from the last AIMessage.

It can be used in a StateGraph with a "messages" key or in a MessageGraph.

Note

This node does not actually run the tools; it only validates the tool calls. This is useful for extraction and other use cases where you need to generate structured output that conforms to a complex schema without losing the original messages and tool IDs (for use in multi-turn conversations).

Parameters

  • schemas (Sequence[Union[BaseTool, Type[BaseModel], Callable]]) –

    A list of schemas to validate the tool calls with. These can be any of the following:
    • a pydantic BaseModel class
    • a BaseTool instance (the args_schema will be used)
    • a function (a schema will be created from the function signature)

  • format_error (Optional[Callable[[BaseException, ToolCall, Type[BaseModel]], str]], default: None ) –

    A function that takes an exception, a ToolCall, and a schema, and returns a formatted error string. By default, it returns the exception repr and a message to respond again after fixing the validation errors (a custom format_error sketch appears at the end of this section).

  • name (str, default: 'validation' ) –

    The name of the node.

  • tags (Optional[list[str]], default: None ) –

    A list of tags to add to the node.

Returns

  • Union[Dict[str, List[ToolMessage]], Sequence[ToolMessage]]

    A list of ToolMessages with the validated content or error messages.

Example

Example usage for re-prompting the model to generate a valid response:

>>> from typing import Literal, Annotated
>>> from typing_extensions import TypedDict
...
>>> from langchain_anthropic import ChatAnthropic
>>> from pydantic import BaseModel, field_validator
...
>>> from langgraph.graph import END, START, StateGraph
>>> from langgraph.prebuilt import ValidationNode
>>> from langgraph.graph.message import add_messages
...
...
>>> class SelectNumber(BaseModel):
...     a: int
...
...     @field_validator("a")
...     def a_must_be_meaningful(cls, v):
...         if v != 37:
...             raise ValueError("Only 37 is allowed")
...         return v
...
...
>>> builder = StateGraph(Annotated[list, add_messages])
>>> llm = ChatAnthropic(model="claude-3-5-haiku-latest").bind_tools([SelectNumber])
>>> builder.add_node("model", llm)
>>> builder.add_node("validation", ValidationNode([SelectNumber]))
>>> builder.add_edge(START, "model")
...
...
>>> def should_validate(state: list) -> Literal["validation", "__end__"]:
...     if state[-1].tool_calls:
...         return "validation"
...     return END
...
...
>>> builder.add_conditional_edges("model", should_validate)
...
...
>>> def should_reprompt(state: list) -> Literal["model", "__end__"]:
...     for msg in state[::-1]:
...         # None of the tool calls were errors
...         if msg.type == "ai":
...             return END
...         if msg.additional_kwargs.get("is_error"):
...             return "model"
...     return END
...
...
>>> builder.add_conditional_edges("validation", should_reprompt)
...
...
>>> graph = builder.compile()
>>> res = graph.invoke(("user", "Select a number, any number"))
>>> # Show the retry logic
>>> for msg in res:
...     msg.pretty_print()
================================ Human Message =================================
Select a number, any number
================================== Ai Message ==================================
[{'id': 'toolu_01JSjT9Pq8hGmTgmMPc6KnvM', 'input': {'a': 42}, 'name': 'SelectNumber', 'type': 'tool_use'}]
Tool Calls:
SelectNumber (toolu_01JSjT9Pq8hGmTgmMPc6KnvM)
Call ID: toolu_01JSjT9Pq8hGmTgmMPc6KnvM
Args:
    a: 42
================================= Tool Message =================================
Name: SelectNumber
ValidationError(model='SelectNumber', errors=[{'loc': ('a',), 'msg': 'Only 37 is allowed', 'type': 'value_error'}])
Respond after fixing all validation errors.
================================== Ai Message ==================================
[{'id': 'toolu_01PkxSVxNxc5wqwCPW1FiSmV', 'input': {'a': 37}, 'name': 'SelectNumber', 'type': 'tool_use'}]
Tool Calls:
SelectNumber (toolu_01PkxSVxNxc5wqwCPW1FiSmV)
Call ID: toolu_01PkxSVxNxc5wqwCPW1FiSmV
Args:
    a: 37
================================= Tool Message =================================
Name: SelectNumber
{"a": 37}
