# How to create a ReAct agent from scratch
Using the prebuilt ReAct agent (create_react_agent) is a great way to get started, but sometimes you need more control and customization. In those cases, you can create a custom ReAct agent. This guide shows how to implement a ReAct agent from scratch using LangGraph.
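For comparison, the prebuilt version is only a couple of lines; everything below reimplements the same loop by hand. A minimal sketch, assuming the same `model` and `tools` that we define later in this guide:

```python
# Prebuilt equivalent (for comparison only) -- the rest of this guide
# rebuilds this agent loop explicitly, node by node.
from langgraph.prebuilt import create_react_agent

graph = create_react_agent(model, tools=tools)
```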
## Setup

First, let's install the required packages and set our API keys:
```python
%%capture --no-stderr
%pip install -U langgraph langchain-openai
```
```python
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
```
```python
from typing import (
    Annotated,
    Sequence,
    TypedDict,
)

from langchain_core.messages import BaseMessage
from langgraph.graph.message import add_messages


class AgentState(TypedDict):
    """The state of the agent."""

    # add_messages is a reducer
    # See https://github.langchain.ac.cn/langgraph/concepts/low_level/#reducers
    messages: Annotated[Sequence[BaseMessage], add_messages]
```
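To see what the add_messages reducer does, here is a minimal illustration: it appends state updates to the existing message list (assigning ids to messages that lack them) instead of overwriting the key. A quick sketch:

```python
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

existing = [HumanMessage(content="hi")]
update = [AIMessage(content="hello!")]

# The reducer merges the update into the existing list rather than replacing it
merged = add_messages(existing, update)
assert len(merged) == 2
```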
## Define model and tools

Next, let's define the tools and model we will use for our example.
```python
from langchain_openai import ChatOpenAI
from langchain_core.tools import tool

model = ChatOpenAI(model="gpt-4o-mini")


@tool
def get_weather(location: str):
    """Call to get the weather from a specific location."""
    # This is a placeholder for the actual implementation
    # Don't let the LLM know this though 😊
    if any([city in location.lower() for city in ["sf", "san francisco"]]):
        return "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
    else:
        return f"I am not sure what the weather is in {location}"


tools = [get_weather]

model = model.bind_tools(tools)
```
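Tools decorated with @tool are runnables, so you can sanity-check them directly before wiring them into the graph:

```python
# Invoke the tool directly with a dict of arguments
print(get_weather.invoke({"location": "sf"}))
# -> "It's sunny in San Francisco, but you better look out if you're a Gemini 😈."
```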
```python
import json

from langchain_core.messages import ToolMessage, SystemMessage
from langchain_core.runnables import RunnableConfig

tools_by_name = {tool.name: tool for tool in tools}


# Define our tool node
def tool_node(state: AgentState):
    outputs = []
    for tool_call in state["messages"][-1].tool_calls:
        tool_result = tools_by_name[tool_call["name"]].invoke(tool_call["args"])
        outputs.append(
            ToolMessage(
                content=json.dumps(tool_result),
                name=tool_call["name"],
                tool_call_id=tool_call["id"],
            )
        )
    return {"messages": outputs}


# Define the node that calls the model
def call_model(
    state: AgentState,
    config: RunnableConfig,
):
    # this is similar to customizing the create_react_agent with state_modifier, but is a lot more flexible
    system_prompt = SystemMessage(
        "You are a helpful AI assistant, please respond to the users query to the best of your ability!"
    )
    response = model.invoke([system_prompt] + state["messages"], config)
    # We return a list, because this will get added to the existing list
    return {"messages": [response]}


# Define the conditional edge that determines whether to continue or not
def should_continue(state: AgentState):
    messages = state["messages"]
    last_message = messages[-1]
    # If there is no function call, then we finish
    if not last_message.tool_calls:
        return "end"
    # Otherwise if there is, we continue
    else:
        return "continue"
```
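Because the routing function only inspects the last message, you can check its behavior in isolation. A quick sketch with hand-built messages:

```python
from langchain_core.messages import AIMessage

# An AI message without tool calls routes to "end" ...
state = {"messages": [AIMessage(content="All done!")]}
assert should_continue(state) == "end"

# ... while one with tool calls routes to "continue"
state = {
    "messages": [
        AIMessage(
            content="",
            tool_calls=[
                {"name": "get_weather", "args": {"location": "sf"}, "id": "call_1"}
            ],
        )
    ]
}
assert should_continue(state) == "continue"
```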
## Define the graph

Now that we have defined all of our nodes and edges, we can define and compile our graph. If you have added more nodes or different edges, you will need to edit this to fit your specific use case.
```python
from langgraph.graph import StateGraph, END

# Define a new graph
workflow = StateGraph(AgentState)

# Define the two nodes we will cycle between
workflow.add_node("agent", call_model)
workflow.add_node("tools", tool_node)

# Set the entrypoint as `agent`
# This means that this node is the first one called
workflow.set_entry_point("agent")

# We now add a conditional edge
workflow.add_conditional_edges(
    # First, we define the start node. We use `agent`.
    # This means these are the edges taken after the `agent` node is called.
    "agent",
    # Next, we pass in the function that will determine which node is called next.
    should_continue,
    # Finally we pass in a mapping.
    # The keys are strings, and the values are other nodes.
    # END is a special node marking that the graph should finish.
    # What will happen is we will call `should_continue`, and then the output of that
    # will be matched against the keys in this mapping.
    # Based on which one it matches, that node will then be called.
    {
        # If `tools`, then we call the tool node.
        "continue": "tools",
        # Otherwise we finish.
        "end": END,
    },
)

# We now add a normal edge from `tools` to `agent`.
# This means that after `tools` is called, `agent` node is called next.
workflow.add_edge("tools", "agent")

# Now we can compile and visualize our graph
graph = workflow.compile()

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass
```
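Note that set_entry_point is equivalent to adding an edge from the special START node; newer LangGraph code often writes the entry point this way instead:

```python
from langgraph.graph import START

# Equivalent to workflow.set_entry_point("agent"); shown here only as an
# alternative -- use one or the other when building the graph.
workflow.add_edge(START, "agent")
```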
## Use the ReAct agent

Now that we have created our ReAct agent, let's actually put it to the test!
```python
# Helper function for formatting the stream nicely
def print_stream(stream):
    for s in stream:
        message = s["messages"][-1]
        if isinstance(message, tuple):
            print(message)
        else:
            message.pretty_print()


inputs = {"messages": [("user", "what is the weather in sf")]}
print_stream(graph.stream(inputs, stream_mode="values"))
```
```
================================ Human Message =================================

what is the weather in sf
================================== Ai Message ==================================
Tool Calls:
  get_weather (call_azW0cQ4XjWWj0IAkWAxq9nLB)
 Call ID: call_azW0cQ4XjWWj0IAkWAxq9nLB
  Args:
    location: San Francisco
================================= Tool Message =================================
Name: get_weather

"It's sunny in San Francisco, but you better look out if you're a Gemini \ud83d\ude08."
================================== Ai Message ==================================

The weather in San Francisco is sunny! However, it seems there's a playful warning for Geminis. Enjoy the sunshine!
```
Perfect! The graph correctly calls the get_weather tool and responds to the user after receiving the information from the tool.