How to add runtime configuration to your graph¶
Once you've created an app in LangGraph, you may want to allow configuration at runtime. For instance, you may want to dynamically choose an LLM or prompt, or configure a user's user_id to enforce row-level security, and so on.
In LangGraph, configuration and other "out-of-band" communication is done via the RunnableConfig, which is always the second positional argument when invoking your application.
Below, we walk through an example that lets you configure a user ID and pick which model to use.
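As a minimal sketch of the pattern (plain TypeScript with no LangGraph dependency; the type and function names here are hypothetical, for illustration only), a node is just a function whose second argument carries the config, and user-defined runtime values live under config.configurable:

```typescript
// Hypothetical sketch: a node reads runtime values from its second
// positional argument. The `configurable` field is a free-form map
// of user-defined values passed through at invocation time.
type RunnableConfigLike = { configurable?: Record<string, unknown> };

const chooseModel = (
  _state: { messages: string[] },
  config?: RunnableConfigLike,
): string => {
  // Fall back to a default when no model is configured.
  const model = config?.configurable?.model;
  return typeof model === "string" ? model : "gpt-4o";
};

console.log(chooseModel({ messages: [] }, { configurable: { model: "claude" } })); // "claude"
console.log(chooseModel({ messages: [] })); // "gpt-4o"
```

The real RunnableConfig carries more fields than this (callbacks, tags, metadata, and so on), but configurable is the one intended for user-defined runtime values.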
Setup¶
This guide will use Anthropic's Claude 3 Haiku and OpenAI's GPT-4o models. We will also optionally set our API key for LangSmith tracing, which will give us best-in-class observability.
In [1]
// process.env.OPENAI_API_KEY = "sk_...";
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls__...";
// process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT = "Configuration: LangGraphJS";
Configuration: LangGraphJS
Define graph¶
We will create an exceedingly simple message graph for this example.
In [2]
import { BaseMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableConfig } from "@langchain/core/runnables";
import {
  END,
  START,
  StateGraph,
  Annotation,
} from "@langchain/langgraph";

const AgentState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
  }),
  userInfo: Annotation<string | undefined>({
    reducer: (x, y) => {
      return y ? y : x ? x : "N/A";
    },
    default: () => "N/A",
  }),
});

const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant.\n\n## User Info:\n{userInfo}"],
  ["placeholder", "{messages}"],
]);

const callModel = async (
  state: typeof AgentState.State,
  config?: RunnableConfig,
) => {
  const { messages, userInfo } = state;
  const modelName = config?.configurable?.model;
  const model = modelName === "claude"
    ? new ChatAnthropic({ model: "claude-3-haiku-20240307" })
    : new ChatOpenAI({ model: "gpt-4o" });
  const chain = promptTemplate.pipe(model);
  const response = await chain.invoke(
    {
      messages,
      userInfo,
    },
    config,
  );
  return { messages: [response] };
};

const fetchUserInformation = async (
  _: typeof AgentState.State,
  config?: RunnableConfig,
) => {
  const userDB = {
    user1: {
      name: "John Doe",
      email: "[email protected]",
      phone: "+1234567890",
    },
    user2: {
      name: "Jane Doe",
      email: "[email protected]",
      phone: "+0987654321",
    },
  };
  const userId = config?.configurable?.user;
  if (userId) {
    const user = userDB[userId as keyof typeof userDB];
    if (user) {
      return {
        userInfo:
          `Name: ${user.name}\nEmail: ${user.email}\nPhone: ${user.phone}`,
      };
    }
  }
  return { userInfo: "N/A" };
};

const workflow = new StateGraph(AgentState)
  .addNode("fetchUserInfo", fetchUserInformation)
  .addNode("agent", callModel)
  .addEdge(START, "fetchUserInfo")
  .addEdge("fetchUserInfo", "agent")
  .addEdge("agent", END);

const graph = workflow.compile();
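The userInfo channel's reducer above prefers the incoming update, then the existing value, then falls back to "N/A". Pulled out as a standalone sketch (plain TypeScript, hypothetical name, no LangGraph dependency):

```typescript
// Standalone sketch of the userInfo reducer's merge rule:
// prefer the incoming value `y`, else the existing value `x`, else "N/A".
const mergeUserInfo = (x?: string, y?: string): string =>
  y ? y : x ? x : "N/A";

console.log(mergeUserInfo(undefined, "Name: John Doe")); // "Name: John Doe"
console.log(mergeUserInfo("Name: John Doe", undefined)); // "Name: John Doe"
console.log(mergeUserInfo(undefined, undefined));        // "N/A"
```

Note that because this uses truthiness, an empty-string update also falls through to the previous value (or "N/A") rather than overwriting it.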
Call with config¶
In [3]
import { HumanMessage } from "@langchain/core/messages";

const config = {
  configurable: {
    model: "openai",
    user: "user1",
  },
};
const inputs = {
  messages: [new HumanMessage("Could you remind me of my email??")],
};

for await (
  const { messages } of await graph.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
Could you remind me of my email??
-----

Could you remind me of my email??
-----

Sure, your email is [email protected]. Is there anything else you need help with?
-----
Change the config¶
Now let's try the same input with a different user.
In [4]
const config2 = {
  configurable: {
    model: "openai",
    user: "user2",
  },
};
const inputs2 = {
  messages: [new HumanMessage("Could you remind me of my email??")],
};

for await (
  const { messages } of await graph.stream(inputs2, {
    ...config2,
    streamMode: "values",
  })
) {
  let msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
Could you remind me of my email??
-----

Could you remind me of my email??
-----

Sure, Jane! Your email is [email protected].
-----
Check out the LangSmith trace for this run (link) to "see what the LLM sees".
Config schema¶
You can also pass an annotation defining the shape of config.configurable into your graph. Currently, this will only filter out extraneous keys and expose type information on the compiled graph.
In [6]
import { MessagesAnnotation } from "@langchain/langgraph";

const ConfigAnnotation = Annotation.Root({
  expectedField: Annotation<string>,
});

const printNode = async (state: typeof MessagesAnnotation.State, config: RunnableConfig) => {
  console.log("Expected", config.configurable?.expectedField);
  console.log("Unexpected", config.configurable?.unexpectedField);
  return {};
};

const graphWithConfigSchema = new StateGraph(MessagesAnnotation, ConfigAnnotation)
  .addNode("printNode", printNode)
  .addEdge(START, "printNode")
  .compile();

const result = await graphWithConfigSchema.invoke({
  messages: [{ role: "user", content: "Echo!" }],
}, { configurable: { expectedField: "I am expected", unexpectedField: "I will be filtered out" } });
Expected I am expected
Unexpected undefined
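The key filtering shown above can be sketched in isolation. This is not LangGraph's actual implementation, just a hypothetical illustration of the semantics: keys absent from the config schema never reach the node.

```typescript
// Hypothetical sketch of the key-filtering semantics: only keys
// declared in the config schema survive into the node's config.
const filterConfigurable = (
  schemaKeys: string[],
  configurable: Record<string, unknown>,
): Record<string, unknown> =>
  Object.fromEntries(
    Object.entries(configurable).filter(([k]) => schemaKeys.includes(k)),
  );

const filtered = filterConfigurable(["expectedField"], {
  expectedField: "I am expected",
  unexpectedField: "I will be filtered out",
});
console.log(filtered); // only expectedField remains
```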