How to add runtime configuration to your graph¶
Once you've created an app in LangGraph, you may want to permit configuration at runtime.
For example, you may want to be able to dynamically choose the LLM or prompt, configure a user's user_id to enforce row-level security, and so on.
In LangGraph, configuration and other "out-of-band" communication is done via the RunnableConfig, which is always the second positional argument when invoking your application.
Below, we will walk through an example that lets you configure a user ID and pick which model to use.
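To see the mechanism in isolation before the full walkthrough, here is a minimal sketch (the greet node, demoGraph, and the userId key are placeholder names invented for illustration, not part of the example below): a node receives the graph state as its first argument and the RunnableConfig as its second, and whatever you pass under configurable at call time is available on that config.
import { RunnableConfig } from "@langchain/core/runnables";
import { END, MessagesAnnotation, START, StateGraph } from "@langchain/langgraph";

// A node function: state comes first, the RunnableConfig second.
const greet = async (
  _state: typeof MessagesAnnotation.State,
  config?: RunnableConfig,
) => {
  // Anything passed under `configurable` when the graph is invoked shows up here.
  console.log("Invoked for user:", config?.configurable?.userId ?? "unknown");
  return {};
};

const demoGraph = new StateGraph(MessagesAnnotation)
  .addNode("greet", greet)
  .addEdge(START, "greet")
  .addEdge("greet", END)
  .compile();

// Configuration rides along in the second positional argument of `invoke`/`stream`.
await demoGraph.invoke(
  { messages: [{ role: "user", content: "hi" }] },
  { configurable: { userId: "user-123" } },
);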
Setup¶
This guide will use Anthropic's Claude 3 Haiku and OpenAI's GPT-4o models. We will optionally set our API key for LangSmith tracing, which will give us best-in-class observability.
// process.env.OPENAI_API_KEY = "sk_...";
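// The ChatAnthropic model used below reads ANTHROPIC_API_KEY from the environment as well
// process.env.ANTHROPIC_API_KEY = "sk_...";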
// Optional, add tracing in LangSmith
// process.env.LANGCHAIN_API_KEY = "ls__...";
// process.env.LANGCHAIN_CALLBACKS_BACKGROUND = "true";
process.env.LANGCHAIN_TRACING_V2 = "true";
process.env.LANGCHAIN_PROJECT = "Configuration: LangGraphJS";
Define the graph¶
We will create a very simple message graph for this example.
import { BaseMessage } from "@langchain/core/messages";
import { ChatOpenAI } from "@langchain/openai";
import { ChatAnthropic } from "@langchain/anthropic";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { RunnableConfig } from "@langchain/core/runnables";
import {
  END,
  START,
  StateGraph,
  Annotation,
} from "@langchain/langgraph";
const AgentState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
  }),
  userInfo: Annotation<string | undefined>({
    reducer: (x, y) => {
      return y ? y : x ? x : "N/A";
    },
    default: () => "N/A",
  }),
});
const promptTemplate = ChatPromptTemplate.fromMessages([
  ["system", "You are a helpful assistant.\n\n## User Info:\n{userInfo}"],
  ["placeholder", "{messages}"],
]);
const callModel = async (
  state: typeof AgentState.State,
  config?: RunnableConfig,
) => {
  const { messages, userInfo } = state;
  // Pick the model based on the "model" value passed via config.configurable.
  const modelName = config?.configurable?.model;
  const model = modelName === "claude"
    ? new ChatAnthropic({ model: "claude-3-haiku-20240307" })
    : new ChatOpenAI({ model: "gpt-4o" });
  const chain = promptTemplate.pipe(model);
  const response = await chain.invoke(
    {
      messages,
      userInfo,
    },
    config,
  );
  return { messages: [response] };
};
const fetchUserInformation = async (
  _: typeof AgentState.State,
  config?: RunnableConfig,
) => {
  // Hardcoded user "database" for this example.
  const userDB = {
    user1: {
      name: "John Doe",
      email: "jod@langchain.ai",
      phone: "+1234567890",
    },
    user2: {
      name: "Jane Doe",
      email: "jad@langchain.ai",
      phone: "+0987654321",
    },
  };
  // The user to look up is provided at call time via config.configurable.
  const userId = config?.configurable?.user;
  if (userId) {
    const user = userDB[userId as keyof typeof userDB];
    if (user) {
      return {
        userInfo:
          `Name: ${user.name}\nEmail: ${user.email}\nPhone: ${user.phone}`,
      };
    }
  }
  return { userInfo: "N/A" };
};
const workflow = new StateGraph(AgentState)
  .addNode("fetchUserInfo", fetchUserInformation)
  .addNode("agent", callModel)
  .addEdge(START, "fetchUserInfo")
  .addEdge("fetchUserInfo", "agent")
  .addEdge("agent", END);

const graph = workflow.compile();
Call with config¶
import { HumanMessage } from "@langchain/core/messages";
const config = {
  configurable: {
    model: "openai",
    user: "user1",
  },
};

const inputs = {
  messages: [new HumanMessage("Could you remind me of my email??")],
};
for await (
  const { messages } of await graph.stream(inputs, {
    ...config,
    streamMode: "values",
  })
) {
  const msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
Could you remind me of my email??
-----
Could you remind me of my email??
-----
Your email is jod@langchain.ai.
-----
Change the config¶
Now let's try the same input with a different user.
const config2 = {
  configurable: {
    model: "openai",
    user: "user2",
  },
};

const inputs2 = {
  messages: [new HumanMessage("Could you remind me of my email??")],
};
for await (
  const { messages } of await graph.stream(inputs2, {
    ...config2,
    streamMode: "values",
  })
) {
  const msg = messages[messages?.length - 1];
  if (msg?.content) {
    console.log(msg.content);
  } else if (msg?.tool_calls?.length > 0) {
    console.log(msg.tool_calls);
  } else {
    console.log(msg);
  }
  console.log("-----\n");
}
Could you remind me of my email??
-----
Could you remind me of my email??
-----
Your email address is jad@langchain.ai.
-----
Config schema¶
You can also pass an annotation that defines the shape of config.configurable into your graph. This currently only exposes type information on the compiled graph; it does not filter out keys:
import { MessagesAnnotation } from "@langchain/langgraph";
const ConfigurableAnnotation = Annotation.Root({
  expectedField: Annotation<string>,
});
const printNode = async (
  state: typeof MessagesAnnotation.State,
  config: RunnableConfig<typeof ConfigurableAnnotation.State>
) => {
  console.log("Expected", config.configurable?.expectedField);
  // @ts-expect-error This field will be present at runtime even though it is not in the typing
  console.log("Unexpected", config.configurable?.unexpectedField);
  return {};
};
const graphWithConfigSchema = new StateGraph(MessagesAnnotation, ConfigurableAnnotation)
  .addNode("printNode", printNode)
  .addEdge(START, "printNode")
  .compile();

const result = await graphWithConfigSchema.invoke({
  messages: [{ role: "user", content: "Echo!" }],
}, { configurable: { expectedField: "I am expected", unexpectedField: "I am unexpected but present" } });
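Because the config schema only affects typing, both fields are still present at runtime, so running this should log something along these lines:
Expected I am expected
Unexpected I am unexpected but present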