How to integrate LangGraph with AutoGen, CrewAI, and other frameworks¶
This guide shows how to integrate AutoGen agents with LangGraph to take advantage of features like persistence, streaming, and memory, and then deploy the integrated solution to LangGraph Platform for scalable production use. The walkthrough builds a LangGraph chatbot that wraps an AutoGen agent, but you can use the same approach with other frameworks.
Integrating AutoGen with LangGraph provides several benefits:

- Enhanced features: add persistence, streaming, short- and long-term memory, and more to your AutoGen agents.
- Multi-agent systems: build multi-agent systems in which individual agents are built with different frameworks.
- Production deployment: deploy your integrated solution to LangGraph Platform for scalable production use.
Prerequisites¶
- Python 3.9+
- AutoGen: `pip install autogen`
- LangGraph: `pip install langgraph`
- An OpenAI API key
Setup¶
Set up your environment:
```python
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("OPENAI_API_KEY")
```
1. Define the AutoGen agent¶
Create an AutoGen agent that can execute code. This example is adapted from AutoGen's official tutorial:
```python
import autogen
import os

config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

autogen_agent = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "web",
        "use_docker": False,
    },  # Please set use_docker=True if docker is available to run the generated code. Using docker is safer than running the generated code directly.
    llm_config=llm_config,
    system_message="Reply TERMINATE if the task has been solved at full satisfaction. Otherwise, reply CONTINUE, or the reason why the task is not solved yet.",
)
```
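If you want to sanity-check the AutoGen pair before wiring it into LangGraph, you can call `initiate_chat` directly. A minimal, optional sketch (the prompt string is just an illustration):

```python
# Optional smoke test: run the assistant/user-proxy loop once.
# initiate_chat returns a ChatResult; chat_history holds the exchanged
# messages as dicts, so the last entry is the final reply.
result = user_proxy.initiate_chat(
    autogen_agent,
    message="Compute 123 * 456 and reply TERMINATE when done.",
)
print(result.chat_history[-1]["content"])
```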
2. Create the graph¶
We will now create a LangGraph chatbot graph that calls the AutoGen agent.
API reference: convert_to_openai_messages | StateGraph | START | MemorySaver
```python
from langchain_core.messages import convert_to_openai_messages
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.memory import MemorySaver


def call_autogen_agent(state: MessagesState):
    # Convert LangGraph messages to OpenAI format for AutoGen
    messages = convert_to_openai_messages(state["messages"])
    # Get the last user message
    last_message = messages[-1]
    # Pass previous message history as context (excluding the last message)
    carryover = messages[:-1] if len(messages) > 1 else []
    # Initiate chat with AutoGen
    response = user_proxy.initiate_chat(
        autogen_agent,
        message=last_message,
        carryover=carryover,
    )
    # Extract the final response from the agent
    final_content = response.chat_history[-1]["content"]
    # Return the response in LangGraph format
    return {"messages": {"role": "assistant", "content": final_content}}


# Create the graph with memory for persistence
checkpointer = MemorySaver()

# Build the graph
builder = StateGraph(MessagesState)
builder.add_node("autogen", call_autogen_agent)
builder.add_edge(START, "autogen")

# Compile with checkpointer for persistence
graph = builder.compile(checkpointer=checkpointer)
```
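Since the compiled graph is a single node wired from `START`, you can optionally verify the topology before invoking it; a small sketch:

```python
# Optional: print the graph structure as Mermaid to confirm the wiring
# (a single "autogen" node reachable from START).
print(graph.get_graph().draw_mermaid())
```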
3. Test the graph locally¶
Before deploying to LangGraph Platform, you can test the graph locally:
```python
# Pass a thread ID to persist agent outputs for future interactions
config = {"configurable": {"thread_id": "1"}}

for chunk in graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "Find numbers between 10 and 30 in fibonacci sequence",
            }
        ]
    },
    config,
):
    print(chunk)
```
Output:
```
user_proxy (to assistant):

Find numbers between 10 and 30 in fibonacci sequence

--------------------------------------------------------------------------------
assistant (to user_proxy):

To find numbers between 10 and 30 in the Fibonacci sequence, we can generate the Fibonacci sequence and check which numbers fall within this range. Here's a plan:

1. Generate Fibonacci numbers starting from 0.
2. Continue generating until the numbers exceed 30.
3. Collect and print the numbers that are between 10 and 30.

...
```
Because we're leveraging LangGraph's persistence features, we can now continue the conversation using the same thread ID; LangGraph automatically passes the previous history to the AutoGen agent:
```python
for chunk in graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": "Multiply the last number by 3",
            }
        ]
    },
    config,
):
    print(chunk)
```
Output:
```
user_proxy (to assistant):

Multiply the last number by 3
Context:
Find numbers between 10 and 30 in fibonacci sequence
The Fibonacci numbers between 10 and 30 are 13 and 21.

These numbers are part of the Fibonacci sequence, which is generated by adding the two preceding numbers to get the next number, starting from 0 and 1.

The sequence goes: 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, ...

As you can see, 13 and 21 are the only numbers in this sequence that fall between 10 and 30.

TERMINATE

--------------------------------------------------------------------------------
assistant (to user_proxy):

The last number in the Fibonacci sequence between 10 and 30 is 21. Multiplying 21 by 3 gives:

21 * 3 = 63

TERMINATE

--------------------------------------------------------------------------------
{'autogen': {'messages': {'role': 'assistant', 'content': 'The last number in the Fibonacci sequence between 10 and 30 is 21. Multiplying 21 by 3 gives:\n\n21 * 3 = 63\n\nTERMINATE'}}}
```
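Because each turn is checkpointed under the thread ID, you can also read the accumulated conversation back out of the checkpointer; a minimal sketch using the compiled graph's `get_state`:

```python
# Inspect the persisted state for thread "1"; snapshot.values holds
# the accumulated MessagesState.
snapshot = graph.get_state(config)
for message in snapshot.values["messages"]:
    print(f"{message.type}: {message.content[:80]}")
```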
4. Prepare for deployment¶
To deploy to LangGraph Platform, create a file structure like the following:
```
my-autogen-agent/
├── agent.py          # Your main agent code
├── requirements.txt  # Python dependencies
└── langgraph.json    # LangGraph configuration
```
`agent.py` contains the agent and graph definition:

```python
import os

import autogen
from langchain_core.messages import convert_to_openai_messages
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.checkpoint.memory import MemorySaver

# AutoGen configuration
config_list = [{"model": "gpt-4o", "api_key": os.environ["OPENAI_API_KEY"]}]

llm_config = {
    "timeout": 600,
    "cache_seed": 42,
    "config_list": config_list,
    "temperature": 0,
}

# Create AutoGen agents
autogen_agent = autogen.AssistantAgent(
    name="assistant",
    llm_config=llm_config,
)

user_proxy = autogen.UserProxyAgent(
    name="user_proxy",
    human_input_mode="NEVER",
    max_consecutive_auto_reply=10,
    is_termination_msg=lambda x: x.get("content", "").rstrip().endswith("TERMINATE"),
    code_execution_config={
        "work_dir": "/tmp/autogen_work",
        "use_docker": False,
    },
    llm_config=llm_config,
    system_message="Reply TERMINATE if the task has been solved at full satisfaction.",
)


def call_autogen_agent(state: MessagesState):
    """Node function that calls the AutoGen agent"""
    messages = convert_to_openai_messages(state["messages"])
    last_message = messages[-1]
    carryover = messages[:-1] if len(messages) > 1 else []

    response = user_proxy.initiate_chat(
        autogen_agent,
        message=last_message,
        carryover=carryover,
    )

    final_content = response.chat_history[-1]["content"]
    return {"messages": {"role": "assistant", "content": final_content}}


# Create and compile the graph
def create_graph():
    checkpointer = MemorySaver()
    builder = StateGraph(MessagesState)
    builder.add_node("autogen", call_autogen_agent)
    builder.add_edge(START, "autogen")
    return builder.compile(checkpointer=checkpointer)


# Export the graph for LangGraph Platform
graph = create_graph()
```
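The other two files are small. A plausible `requirements.txt` (exact pins are up to you) and a `langgraph.json` that registers the exported `graph` variable might look like this; the `"agent"` key is an arbitrary graph name:

```
autogen
langgraph
langchain-core
```

```json
{
  "dependencies": ["."],
  "graphs": {
    "agent": "./agent.py:graph"
  },
  "env": ".env"
}
```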
5. Deploy to LangGraph Platform¶
Deploy the graph using the LangGraph Platform CLI.
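For example, assuming you're using the `langgraph-cli` package (the exact hosted-deployment flow depends on your LangGraph Platform setup), you can install the CLI and first exercise the project with the local development server:

```bash
# Install the LangGraph CLI; the "inmem" extra enables the local dev server
pip install -U "langgraph-cli[inmem]"

# From the my-autogen-agent/ directory, start a local development server
langgraph dev
```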