⚡ Building language agents as graphs ⚡
LangGraph.js is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph lets you define flows that involve cycles, which are essential for most agentic architectures, differentiating it from DAG-based solutions. As a very low-level framework, it provides fine-grained control over both the flow and the state of your application, which is crucial for creating reliable agents. Additionally, LangGraph includes built-in persistence, enabling advanced human-in-the-loop and memory features.
LangGraph is inspired by Pregel and Apache Beam. The public interface draws inspiration from NetworkX. LangGraph is built by LangChain Inc, the creators of LangChain, but it can be used without LangChain.
npm install @langchain/langgraph @langchain/core
One of LangGraph's central concepts is state. Each graph execution creates a state that is passed between the nodes in the graph as they execute, and each node updates this internal state with its return value after it runs. How the graph merges these updates into its internal state is defined by the type of graph chosen or by a custom function.
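For example, here is a minimal sketch of how a reducer defines that merge behavior (the state channel name and reducer here are illustrative, not part of the agent example below):

import { Annotation } from "@langchain/langgraph";

// Hypothetical state channel: node return values are merged by the reducer
// rather than overwriting the previous value.
const CounterStateAnnotation = Annotation.Root({
  total: Annotation<number>({
    // A node returning { total: 2 } adds 2 to the running total.
    reducer: (current, update) => current + update,
    default: () => 0,
  }),
});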
Let's take a look at an example of an agent that can use a search tool.
First, install the required dependencies
npm install @langchain/anthropic
Then set the required environment variables
export ANTHROPIC_API_KEY=sk-...
Optionally, set up LangSmith for best-in-class observability
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY=ls__...
Now let's define our agent
import { AIMessage, BaseMessage, HumanMessage } from "@langchain/core/messages";
import { tool } from "@langchain/core/tools";
import { z } from "zod";
import { ChatAnthropic } from "@langchain/anthropic";
import { StateGraph } from "@langchain/langgraph";
import { MemorySaver, Annotation } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
// Define the graph state
// See here for more info: https://github.langchain.ac.cn/langgraphjs/how-tos/define-state/
const StateAnnotation = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
  }),
});
// Define the tools for the agent to use
const weatherTool = tool(async ({ query }) => {
  // This is a placeholder for the actual implementation
  if (query.toLowerCase().includes("sf") || query.toLowerCase().includes("san francisco")) {
    return "It's 60 degrees and foggy.";
  }
  return "It's 90 degrees and sunny.";
}, {
  name: "weather",
  description: "Call to get the current weather for a location.",
  schema: z.object({
    query: z.string().describe("The query to use in your search."),
  }),
});
const tools = [weatherTool];
const toolNode = new ToolNode(tools);
const model = new ChatAnthropic({
  model: "claude-3-5-sonnet-20240620",
  temperature: 0,
}).bindTools(tools);
// Define the function that determines whether to continue or not
// We can extract the state typing via `StateAnnotation.State`
function shouldContinue(state: typeof StateAnnotation.State) {
  const messages = state.messages;
  const lastMessage = messages[messages.length - 1] as AIMessage;
  // If the LLM makes a tool call, then we route to the "tools" node
  if (lastMessage.tool_calls?.length) {
    return "tools";
  }
  // Otherwise, we stop (reply to the user)
  return "__end__";
}
// Define the function that calls the model
async function callModel(state: typeof StateAnnotation.State) {
  const messages = state.messages;
  const response = await model.invoke(messages);
  // We return a list, because this will get added to the existing list
  return { messages: [response] };
}
// Define a new graph
const workflow = new StateGraph(StateAnnotation)
  .addNode("agent", callModel)
  .addNode("tools", toolNode)
  .addEdge("__start__", "agent")
  .addConditionalEdges("agent", shouldContinue)
  .addEdge("tools", "agent");
// Initialize memory to persist state between graph runs
const checkpointer = new MemorySaver();
// Finally, we compile it!
// This compiles it into a LangChain Runnable.
// Note that we're (optionally) passing the memory when compiling the graph
const app = workflow.compile({ checkpointer });
// Use the Runnable
const finalState = await app.invoke(
  { messages: [new HumanMessage("what is the weather in sf")] },
  { configurable: { thread_id: "42" } }
);
console.log(finalState.messages[finalState.messages.length - 1].content);
This will output
Based on the information I received, the current weather in San Francisco is:
Temperature: 60 degrees Fahrenheit
Conditions: Foggy
San Francisco is known for its foggy weather, especially during certain times of the year. The moderate temperature of 60°F (about 15.5°C) is quite typical for the city, which generally has mild weather year-round due to its coastal location.
Is there anything else you'd like to know about the weather in San Francisco or any other location?
Now when we pass the same "thread_id", the conversation context is retained via the saved state (i.e., the stored list of messages).
const nextState = await app.invoke(
  { messages: [new HumanMessage("what about ny")] },
  { configurable: { thread_id: "42" } }
);
console.log(nextState.messages[nextState.messages.length - 1].content);
Based on the information I received, the current weather in New York is:
Temperature: 90 degrees Fahrenheit (approximately 32.2 degrees Celsius)
Conditions: Sunny
New York is experiencing quite warm weather today. A temperature of 90°F is considered hot for most people, and it's significantly warmer than the San Francisco weather we just checked. The sunny conditions suggest it's a clear day without cloud cover, which can make it feel even warmer.
On a day like this in New York, it would be advisable for people to stay hydrated, seek shade when possible, and use sun protection if spending time outdoors.
Is there anything else you'd like to know about the weather in New York or any other location?
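Because we compiled the graph with a checkpointer, we can also inspect the persisted state of a thread directly. A minimal sketch using the app defined above:

// Read back the checkpointed state for thread "42".
const snapshot = await app.getState({ configurable: { thread_id: "42" } });
// snapshot.values is the current graph state, including the stored message list.
console.log(snapshot.values.messages.length);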
Here is a step-by-step breakdown of what happens when the graph runs:

1. Initialize the model and tools.
- We use ChatAnthropic as our LLM. Note: we need to make sure the model knows that it has these tools available to call. We can do this by converting the LangChain tools into the format for Anthropic tool calling using the .bindTools() method.

2. Initialize the graph with state.
- We initialize the graph (StateGraph) by passing in the state interface (AgentState). The StateAnnotation object defines how updates from each node should be merged into the graph's state.

3. Define the graph nodes. We need two main nodes:
- The agent node: responsible for deciding what (if any) actions to take.
- The tools node: if the agent decides to take an action, this node then executes that action.

4. Define the entry point and graph edges.
- First, we set the entry point for graph execution: the agent node.
- Then we define one normal edge and one conditional edge. A conditional edge means that the destination depends on the contents of the graph's state (AgentState). In our case, the destination is not known until the agent (the LLM) decides.

5. Compile the graph.
- Compiling turns the graph into a LangChain Runnable, which automatically enables calling .invoke(), .stream(), and .batch() with your inputs (a streaming sketch follows this list).
- Optionally, we pass a checkpointer to persist state between graph runs; here we use MemorySaver, a simple in-memory checkpointer.

6. Execute the graph.
- LangGraph adds the input message to the internal state, then passes the state to the entry point node, "agent".
- The "agent" node executes, invoking the chat model.
- The chat model returns an AIMessage. LangGraph adds it to the state.
- The graph cycles through the following steps until there are no more tool_calls on the AIMessage:
  - If the AIMessage has tool_calls, the "tools" node executes.
  - The "agent" node executes again and returns an AIMessage.
- Execution progresses to the special __end__ value and outputs the final state. As a result, we get a list of all our chat messages as output.
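Since the compiled graph is a Runnable, you can also stream intermediate state instead of waiting for the final result. A minimal sketch, assuming the app defined above (the thread_id here is arbitrary; streamMode "values" emits the full state after each step):

const stream = await app.stream(
  { messages: [new HumanMessage("what is the weather in sf")] },
  { configurable: { thread_id: "stream-demo" }, streamMode: "values" }
);
for await (const chunk of stream) {
  // Each chunk is the full state; log the latest message.
  const lastMessage = chunk.messages[chunk.messages.length - 1];
  console.log(lastMessage.content);
}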
Note that the *.ipynb notebooks in the examples/ folder require tslab to be installed. To run these notebooks in VSCode, you will also need the Jupyter VSCode extension installed. After cloning this repository, you can run yarn build in the root directory. Then you should be good to go!
If you still run into trouble, try adding the following tsconfig.json file to the examples/ directory
{
  "compilerOptions": {
    "esModuleInterop": true,
    "moduleResolution": "node",
    "target": "ES2020",
    "module": "ES2020",
    "lib": ["ES2020"],
    "strict": true,
    "baseUrl": ".",
    "paths": {
      "@langchain/langgraph": ["../langgraph/src"]
    }
  },
  "include": ["./**/*.ts", "./**/*.tsx"],
  "exclude": ["node_modules"]
}