Setup
First, we need to install the required packages:
npm install @langchain/langgraph @langchain/anthropic @langchain/core zod
Next, we need to set an API key for Anthropic (the LLM we will use):
export ANTHROPIC_API_KEY=your-api-key
Optionally, we can set an API key for LangSmith tracing, which will give us best-in-class observability.
export LANGCHAIN_TRACING_V2="true"
export LANGCHAIN_CALLBACKS_BACKGROUND="true"
export LANGCHAIN_API_KEY=your-api-key
Simple usage
Let's look at basic usage of this approach. One intuitive way is to simply create a node named humanFeedback whose job is to collect user feedback. This lets us place feedback collection at a specific, chosen point in our graph.
- We specify the breakpoint with interruptBefore on the humanFeedback node.
- We set up a checkpointer to save the graph's state up to this node.
- We use .updateState() to update the graph state with the human response we receive.
- We use the asNode parameter to apply this state update as the specified node, humanFeedback.
- The graph then resumes execution as if the humanFeedback node had just run.
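Before looking at the real code, the checkpoint-and-resume loop described above can be sketched in a few lines. This is a minimal, self-contained illustration of the mechanics only, not the LangGraph API; the MiniGraph class and its method names are hypothetical:

```typescript
// A linear "graph" that checkpoints state per thread and pauses before a named node.
type State = { input: string; userFeedback?: string };
type NodeFn = (s: State) => State;

class MiniGraph {
  private saved = new Map<string, { state: State; next: number }>();
  constructor(
    private nodes: { name: string; fn: NodeFn }[],
    private interruptBefore: string[],
  ) {}

  // Pass a state to start a thread, or null to resume from its checkpoint.
  run(threadId: string, input: State | null): { state: State; interrupted: boolean } {
    let cp = input !== null ? { state: input, next: 0 } : this.saved.get(threadId)!;
    let resuming = input === null;
    while (cp.next < this.nodes.length) {
      const node = this.nodes[cp.next];
      if (!resuming && this.interruptBefore.includes(node.name)) {
        this.saved.set(threadId, cp); // checkpoint and pause before this node
        return { state: cp.state, interrupted: true };
      }
      resuming = false; // only skip the interrupt check once, on resume
      cp = { state: node.fn(cp.state), next: cp.next + 1 };
    }
    this.saved.set(threadId, cp);
    return { state: cp.state, interrupted: false };
  }

  // Merge an update into the thread's checkpointed state (like updateState).
  updateState(threadId: string, update: Partial<State>) {
    const cp = this.saved.get(threadId)!;
    cp.state = { ...cp.state, ...update };
  }
}

const graph = new MiniGraph(
  [
    { name: "step1", fn: (s) => s },
    { name: "humanFeedback", fn: (s) => s },
    { name: "step3", fn: (s) => s },
  ],
  ["humanFeedback"],
);

const first = graph.run("1", { input: "hello world" });  // pauses before humanFeedback
graph.updateState("1", { userFeedback: "Go to step 3!!" });
const second = graph.run("1", null);                     // resumes and runs to the end
```

The real library does much more (persistence backends, reducers, branching), but the shape of the interaction — run, interrupt, update, resume with null — is the same.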
In [1]
import { StateGraph, Annotation, START, END } from "@langchain/langgraph";
import { MemorySaver } from "@langchain/langgraph";
const StateAnnotation = Annotation.Root({
input: Annotation<string>,
userFeedback: Annotation<string>
});
const step1 = (state: typeof StateAnnotation.State) => {
console.log("---Step 1---");
return state;
}
const humanFeedback = (state: typeof StateAnnotation.State) => {
console.log("--- humanFeedback ---");
return state;
}
const step3 = (state: typeof StateAnnotation.State) => {
console.log("---Step 3---");
return state;
}
const builder = new StateGraph(StateAnnotation)
.addNode("step1", step1)
.addNode("humanFeedback", humanFeedback)
.addNode("step3", step3)
.addEdge(START, "step1")
.addEdge("step1", "humanFeedback")
.addEdge("humanFeedback", "step3")
.addEdge("step3", END);
// Set up memory
const memory = new MemorySaver()
// Add a breakpoint before the humanFeedback node
const graph = builder.compile({
checkpointer: memory,
interruptBefore: ["humanFeedback"]
});
In [2]
import * as tslab from "tslab";
const drawableGraph = graph.getGraph();
const image = await drawableGraph.drawMermaidPng();
const arrayBuffer = await image.arrayBuffer();
await tslab.display.png(new Uint8Array(arrayBuffer));
Run until the breakpoint at humanFeedback
In [3]
// Input
const initialInput = { input: "hello world" };
// Thread
const config = { configurable: { thread_id: "1" }, streamMode: "values" as const };
// Run the graph until the first interruption
for await (const event of await graph.stream(initialInput, config)) {
console.log(`--- ${event.input} ---`);
}
// Will log when the graph is interrupted, after step1 and before humanFeedback.
console.log("--- GRAPH INTERRUPTED ---");
--- hello world --- ---Step 1--- --- hello world --- --- GRAPH INTERRUPTED ---
We can now manually update the graph state with the user input:
In [4]
// You should replace this with actual user input from a source, e.g stdin
const userInput = "Go to step 3!!";
// We now update the state as if we are the humanFeedback node
await graph.updateState(config, { userFeedback: userInput }, "humanFeedback");
// We can check the state
console.log("--- State after update ---")
console.log(await graph.getState(config));
// We can check the next node, showing that it is step3 (which follows humanFeedback)
(await graph.getState(config)).next
--- State after update --- { values: { input: 'hello world', userFeedback: 'Go to step 3!!' }, next: [ 'humanFeedback' ], metadata: { source: 'update', step: 2, writes: { step1: [Object] } }, config: { configurable: { thread_id: '1', checkpoint_id: '1ef5e8fb-89dd-6360-8002-5ff9e3c15c57' } }, createdAt: '2024-08-20T01:01:24.246Z', parentConfig: undefined } [ 'humanFeedback' ]
We can now proceed past our breakpoint:
In [5]
// Continue the graph execution
for await (const event of await graph.stream(null, config)) {
console.log(`--- ${event.input} ---`);
}
--- humanFeedback --- --- hello world --- ---Step 3--- --- hello world ---
We can see our feedback was added to the state:
In [6]
(await graph.getState(config)).values
{ input: 'hello world', userFeedback: 'Go to step 3!!' }
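The injected userFeedback survives across runs because the checkpointer stores state per thread_id. As a rough mental model (an assumed simplification, not MemorySaver's real interface, which stores full checkpoint histories):

```typescript
// A checkpointer is, at its core, a map from thread id to saved state.
type State = { input: string; userFeedback?: string };

class MiniMemorySaver {
  private store = new Map<string, State>();
  put(threadId: string, state: State) { this.store.set(threadId, state); }
  get(threadId: string): State | undefined { return this.store.get(threadId); }
}

const saver = new MiniMemorySaver();
saver.put("1", { input: "hello world", userFeedback: "Go to step 3!!" });
saver.put("2", { input: "other thread" });

// Each thread_id resumes from its own checkpoint; threads don't interfere.
const thread1 = saver.get("1");
const thread2 = saver.get("2");
```

This is why passing a different thread_id in the config starts from a clean slate.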
Agent
In the context of agents, waiting for user feedback is useful for asking clarifying questions.
To illustrate this, we will build a relatively simple ReAct-style agent that does tool calling.
We will use Anthropic's model and a fake tool (just for demonstration purposes).
In [1]
// Set up the tool
import { ChatAnthropic } from "@langchain/anthropic";
import { tool } from "@langchain/core/tools";
import { StateGraph, Annotation, START, END, messagesStateReducer } from "@langchain/langgraph";
import { MemorySaver } from "@langchain/langgraph";
import { ToolNode } from "@langchain/langgraph/prebuilt";
import { BaseMessage, AIMessage } from "@langchain/core/messages";
import { z } from "zod";
const GraphMessagesAnnotation = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: messagesStateReducer,
}),
});
const search = tool((_) => {
return "It's sunny in San Francisco, but you better look out if you're a Gemini 😈.";
}, {
name: "search",
description: "Call to surf the web.",
schema: z.string(),
})
const tools = [search]
const toolNode = new ToolNode<typeof GraphMessagesAnnotation.State>(tools)
// Set up the model
const model = new ChatAnthropic({ model: "claude-3-5-sonnet-20240620" })
const askHumanTool = tool((_) => {
return "The human said XYZ";
}, {
name: "askHuman",
description: "Ask the human for input.",
schema: z.string(),
});
const modelWithTools = model.bindTools([...tools, askHumanTool])
// Define nodes and conditional edges
// Define the function that determines whether to continue or not
function shouldContinue(state: typeof GraphMessagesAnnotation.State): "action" | "askHuman" | typeof END {
const lastMessage = state.messages[state.messages.length - 1];
const castLastMessage = lastMessage as AIMessage;
// If there is no function call, then we finish
if (castLastMessage && !castLastMessage.tool_calls?.length) {
return END;
}
// If tool call is askHuman, we return that node
// You could also add logic here to let some system know that there's something that requires Human input
// For example, send a slack message, etc
if (castLastMessage.tool_calls?.[0]?.name === "askHuman") {
console.log("--- ASKING HUMAN ---")
return "askHuman";
}
// Otherwise if it isn't, we continue with the action node
return "action";
}
// Define the function that calls the model
async function callModel(state: typeof GraphMessagesAnnotation.State): Promise<Partial<typeof GraphMessagesAnnotation.State>> {
const messages = state.messages;
const response = await modelWithTools.invoke(messages);
// We return an object with a messages property, because this will get added to the existing list
return { messages: [response] };
}
// We define a fake node to ask the human
function askHuman(state: typeof GraphMessagesAnnotation.State): Partial<typeof GraphMessagesAnnotation.State> {
return state;
}
// Define a new graph
const messagesWorkflow = new StateGraph(GraphMessagesAnnotation)
// Define the two nodes we will cycle between
.addNode("agent", callModel)
.addNode("action", toolNode)
.addNode("askHuman", askHuman)
// We now add a conditional edge
.addConditionalEdges(
// First, we define the start node. We use `agent`.
// This means these are the edges taken after the `agent` node is called.
"agent",
// Next, we pass in the function that will determine which node is called next.
shouldContinue
)
// We now add a normal edge from `action` to `agent`.
// This means that after `action` is called, `agent` node is called next.
.addEdge("action", "agent")
// After we get back the human response, we go back to the agent
.addEdge("askHuman", "agent")
// Set the entrypoint as `agent`
// This means that this node is the first one called
.addEdge(START, "agent");
// Setup memory
const messagesMemory = new MemorySaver();
// Finally, we compile it!
// This compiles it into a LangChain Runnable,
// meaning you can use it as you would any other runnable
const messagesApp = messagesWorkflow.compile({
checkpointer: messagesMemory,
interruptBefore: ["askHuman"]
});
In [2]
import * as tslab from "tslab";
const drawableGraph2 = messagesApp.getGraph();
const image2 = await drawableGraph2.drawMermaidPng();
const arrayBuffer2 = await image2.arrayBuffer();
await tslab.display.png(new Uint8Array(arrayBuffer2));
In [3]
import { HumanMessage } from "@langchain/core/messages";
// Input
const inputs = new HumanMessage("Use the search tool to ask the user where they are, then look up the weather there");
// Thread
const config2 = { configurable: { thread_id: "3" }, streamMode: "values" as const };
for await (const event of await messagesApp.stream({
messages: [inputs]
}, config2)) {
const recentMsg = event.messages[event.messages.length - 1];
console.log(`================================ ${recentMsg._getType()} Message (1) =================================`)
console.log(recentMsg.content);
}
console.log("next: ", (await messagesApp.getState(config2)).next)
================================ human Message (1) ================================= Use the search tool to ask the user where they are, then look up the weather there --- ASKING HUMAN --- ================================ ai Message (1) ================================= [ { type: 'text', text: "Certainly! I'll use the askHuman tool to ask the user about their location, and then use the search tool to look up the weather for that location. Let's start by asking the user where they are." }, { type: 'tool_use', id: 'toolu_01RN181HAAL5BcnMXkexbA1r', name: 'askHuman', input: { input: 'Where are you located? Please provide your city and country.' } } ] next: [ 'askHuman' ]
We now want to update this thread with a response from the user, and then kick off another run.
Because we are treating this as a tool call, we need to update the state as if the response came from a tool call. To do so, we first inspect the state to get the ID of the pending tool call.
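The key detail is that the tool response must carry the same id as the tool call it answers, or the model will reject the conversation. A small self-contained sketch of that matching step (the message shapes here are simplified stand-ins, not the real LangChain classes):

```typescript
// Simplified message shapes, for illustration only.
type ToolCall = { id: string; name: string; args: Record<string, unknown> };
type AIMsg = { role: "ai"; tool_calls: ToolCall[] };
type ToolMsg = { role: "tool"; tool_call_id: string; content: string };
type Msg = AIMsg | ToolMsg | { role: "human"; content: string };

// Build the tool response for the most recent AI message's first tool call.
function answerPendingToolCall(messages: Msg[], content: string): ToolMsg {
  const last = messages[messages.length - 1];
  if (last.role !== "ai" || last.tool_calls.length === 0) {
    throw new Error("last message has no pending tool call");
  }
  return { role: "tool", tool_call_id: last.tool_calls[0].id, content };
}

const history: Msg[] = [
  { role: "human", content: "Ask the user where they are" },
  { role: "ai", tool_calls: [{ id: "toolu_123", name: "askHuman", args: {} }] },
];

const reply = answerPendingToolCall(history, "san francisco");
```

The cell below does exactly this, but reads the id out of the real graph state and wraps the reply in a ToolMessage.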
In [4]
import { ToolMessage } from "@langchain/core/messages";
const currentState = await messagesApp.getState(config2);
const toolCallId = currentState.values.messages[currentState.values.messages.length - 1].tool_calls[0].id;
// We now create the tool call with the id and the response we want
const toolMessage = new ToolMessage({
tool_call_id: toolCallId,
content: "san francisco"
});
console.log("next before update state: ", (await messagesApp.getState(config2)).next)
// We now update the state
// Notice that we are also specifying `asNode: "askHuman"`
// This will apply this update as this node,
// which will make it so that afterwards it continues as normal
await messagesApp.updateState(config2, { messages: [toolMessage] }, "askHuman");
// We can check the state
// We can see that the state currently has the `agent` node next
// This is based on how we define our graph,
// where after the `askHuman` node goes (which we just triggered)
// there is an edge to the `agent` node
console.log("next AFTER update state: ", (await messagesApp.getState(config2)).next)
// await messagesApp.getState(config)
next before update state: [ 'askHuman' ] next AFTER update state: [ 'agent' ]
We can now tell the agent to continue. We can pass null as the input to the graph, since no additional input is needed:
In [5]
for await (const event of await messagesApp.stream(null, config2)) {
console.log(event)
const recentMsg = event.messages[event.messages.length - 1];
console.log(`================================ ${recentMsg._getType()} Message (1) =================================`)
if (recentMsg._getType() === "tool") {
console.log({
name: recentMsg.name,
content: recentMsg.content
})
} else if (recentMsg._getType() === "ai") {
console.log(recentMsg.content)
}
}
{ messages: [ HumanMessage { "id": "a80d5763-0f27-4a00-9e54-8a239b499ea1", "content": "Use the search tool to ask the user where they are, then look up the weather there", "additional_kwargs": {}, "response_metadata": {} }, AIMessage { "id": "msg_01CsrDn46VqNXrdkpVHbcMKA", "content": [ { "type": "text", "text": "Certainly! I'll use the askHuman tool to ask the user about their location, and then use the search tool to look up the weather for that location. Let's start by asking the user where they are." }, { "type": "tool_use", "id": "toolu_01RN181HAAL5BcnMXkexbA1r", "name": "askHuman", "input": { "input": "Where are you located? Please provide your city and country." } } ], "additional_kwargs": { "id": "msg_01CsrDn46VqNXrdkpVHbcMKA", "type": "message", "role": "assistant", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 465, "output_tokens": 108 } }, "response_metadata": { "id": "msg_01CsrDn46VqNXrdkpVHbcMKA", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 465, "output_tokens": 108 }, "type": "message", "role": "assistant" }, "tool_calls": [ { "name": "askHuman", "args": { "input": "Where are you located? Please provide your city and country." }, "id": "toolu_01RN181HAAL5BcnMXkexbA1r", "type": "tool_call" } ], "invalid_tool_calls": [] }, ToolMessage { "id": "9159f841-0e15-4366-96a9-cc5ee0662da0", "content": "san francisco", "additional_kwargs": {}, "response_metadata": {}, "tool_call_id": "toolu_01RN181HAAL5BcnMXkexbA1r" }, AIMessage { "id": "msg_017hfZ8kdhX5nKD97THKWpPx", "content": [ { "type": "text", "text": "Thank you for providing your location. Now, I'll use the search tool to look up the weather in San Francisco." 
}, { "type": "tool_use", "id": "toolu_01QCcxzRjojWW5JqQp7WTN82", "name": "search", "input": { "input": "current weather in San Francisco" } } ], "additional_kwargs": { "id": "msg_017hfZ8kdhX5nKD97THKWpPx", "type": "message", "role": "assistant", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 587, "output_tokens": 81 } }, "response_metadata": { "id": "msg_017hfZ8kdhX5nKD97THKWpPx", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 587, "output_tokens": 81 }, "type": "message", "role": "assistant" }, "tool_calls": [ { "name": "search", "args": { "input": "current weather in San Francisco" }, "id": "toolu_01QCcxzRjojWW5JqQp7WTN82", "type": "tool_call" } ], "invalid_tool_calls": [], "usage_metadata": { "input_tokens": 587, "output_tokens": 81, "total_tokens": 668 } } ] } ================================ ai Message (1) ================================= [ { type: 'text', text: "Thank you for providing your location. Now, I'll use the search tool to look up the weather in San Francisco." }, { type: 'tool_use', id: 'toolu_01QCcxzRjojWW5JqQp7WTN82', name: 'search', input: { input: 'current weather in San Francisco' } } ] { messages: [ HumanMessage { "id": "a80d5763-0f27-4a00-9e54-8a239b499ea1", "content": "Use the search tool to ask the user where they are, then look up the weather there", "additional_kwargs": {}, "response_metadata": {} }, AIMessage { "id": "msg_01CsrDn46VqNXrdkpVHbcMKA", "content": [ { "type": "text", "text": "Certainly! I'll use the askHuman tool to ask the user about their location, and then use the search tool to look up the weather for that location. Let's start by asking the user where they are." }, { "type": "tool_use", "id": "toolu_01RN181HAAL5BcnMXkexbA1r", "name": "askHuman", "input": { "input": "Where are you located? Please provide your city and country." 
} } ], "additional_kwargs": { "id": "msg_01CsrDn46VqNXrdkpVHbcMKA", "type": "message", "role": "assistant", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 465, "output_tokens": 108 } }, "response_metadata": { "id": "msg_01CsrDn46VqNXrdkpVHbcMKA", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 465, "output_tokens": 108 }, "type": "message", "role": "assistant" }, "tool_calls": [ { "name": "askHuman", "args": { "input": "Where are you located? Please provide your city and country." }, "id": "toolu_01RN181HAAL5BcnMXkexbA1r", "type": "tool_call" } ], "invalid_tool_calls": [] }, ToolMessage { "id": "9159f841-0e15-4366-96a9-cc5ee0662da0", "content": "san francisco", "additional_kwargs": {}, "response_metadata": {}, "tool_call_id": "toolu_01RN181HAAL5BcnMXkexbA1r" }, AIMessage { "id": "msg_017hfZ8kdhX5nKD97THKWpPx", "content": [ { "type": "text", "text": "Thank you for providing your location. Now, I'll use the search tool to look up the weather in San Francisco." 
}, { "type": "tool_use", "id": "toolu_01QCcxzRjojWW5JqQp7WTN82", "name": "search", "input": { "input": "current weather in San Francisco" } } ], "additional_kwargs": { "id": "msg_017hfZ8kdhX5nKD97THKWpPx", "type": "message", "role": "assistant", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 587, "output_tokens": 81 } }, "response_metadata": { "id": "msg_017hfZ8kdhX5nKD97THKWpPx", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 587, "output_tokens": 81 }, "type": "message", "role": "assistant" }, "tool_calls": [ { "name": "search", "args": { "input": "current weather in San Francisco" }, "id": "toolu_01QCcxzRjojWW5JqQp7WTN82", "type": "tool_call" } ], "invalid_tool_calls": [], "usage_metadata": { "input_tokens": 587, "output_tokens": 81, "total_tokens": 668 } }, ToolMessage { "id": "0bf52bcd-ffbd-4f82-9ee1-7ba2108f0d27", "content": "It's sunny in San Francisco, but you better look out if you're a Gemini 😈.", "name": "search", "additional_kwargs": {}, "response_metadata": {}, "tool_call_id": "toolu_01QCcxzRjojWW5JqQp7WTN82" } ] } ================================ tool Message (1) ================================= { name: 'search', content: "It's sunny in San Francisco, but you better look out if you're a Gemini 😈." } { messages: [ HumanMessage { "id": "a80d5763-0f27-4a00-9e54-8a239b499ea1", "content": "Use the search tool to ask the user where they are, then look up the weather there", "additional_kwargs": {}, "response_metadata": {} }, AIMessage { "id": "msg_01CsrDn46VqNXrdkpVHbcMKA", "content": [ { "type": "text", "text": "Certainly! I'll use the askHuman tool to ask the user about their location, and then use the search tool to look up the weather for that location. Let's start by asking the user where they are." 
}, { "type": "tool_use", "id": "toolu_01RN181HAAL5BcnMXkexbA1r", "name": "askHuman", "input": { "input": "Where are you located? Please provide your city and country." } } ], "additional_kwargs": { "id": "msg_01CsrDn46VqNXrdkpVHbcMKA", "type": "message", "role": "assistant", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 465, "output_tokens": 108 } }, "response_metadata": { "id": "msg_01CsrDn46VqNXrdkpVHbcMKA", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 465, "output_tokens": 108 }, "type": "message", "role": "assistant" }, "tool_calls": [ { "name": "askHuman", "args": { "input": "Where are you located? Please provide your city and country." }, "id": "toolu_01RN181HAAL5BcnMXkexbA1r", "type": "tool_call" } ], "invalid_tool_calls": [] }, ToolMessage { "id": "9159f841-0e15-4366-96a9-cc5ee0662da0", "content": "san francisco", "additional_kwargs": {}, "response_metadata": {}, "tool_call_id": "toolu_01RN181HAAL5BcnMXkexbA1r" }, AIMessage { "id": "msg_017hfZ8kdhX5nKD97THKWpPx", "content": [ { "type": "text", "text": "Thank you for providing your location. Now, I'll use the search tool to look up the weather in San Francisco." 
}, { "type": "tool_use", "id": "toolu_01QCcxzRjojWW5JqQp7WTN82", "name": "search", "input": { "input": "current weather in San Francisco" } } ], "additional_kwargs": { "id": "msg_017hfZ8kdhX5nKD97THKWpPx", "type": "message", "role": "assistant", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 587, "output_tokens": 81 } }, "response_metadata": { "id": "msg_017hfZ8kdhX5nKD97THKWpPx", "model": "claude-3-5-sonnet-20240620", "stop_reason": "tool_use", "stop_sequence": null, "usage": { "input_tokens": 587, "output_tokens": 81 }, "type": "message", "role": "assistant" }, "tool_calls": [ { "name": "search", "args": { "input": "current weather in San Francisco" }, "id": "toolu_01QCcxzRjojWW5JqQp7WTN82", "type": "tool_call" } ], "invalid_tool_calls": [], "usage_metadata": { "input_tokens": 587, "output_tokens": 81, "total_tokens": 668 } }, ToolMessage { "id": "0bf52bcd-ffbd-4f82-9ee1-7ba2108f0d27", "content": "It's sunny in San Francisco, but you better look out if you're a Gemini 😈.", "name": "search", "additional_kwargs": {}, "response_metadata": {}, "tool_call_id": "toolu_01QCcxzRjojWW5JqQp7WTN82" }, AIMessage { "id": "msg_01NuhYbiu36DSgW7brfoKMr8", "content": "Based on the search results, I can provide you with information about the current weather in San Francisco:\n\nThe weather in San Francisco is currently sunny. \n\nIt's worth noting that the search result included an unusual comment about Geminis, which doesn't seem directly related to the weather. 
If you'd like more detailed weather information, such as temperature, humidity, or forecast, please let me know, and I can perform another search for more specific weather data.\n\nIs there anything else you'd like to know about the weather in San Francisco or any other information you need?", "additional_kwargs": { "id": "msg_01NuhYbiu36DSgW7brfoKMr8", "type": "message", "role": "assistant", "model": "claude-3-5-sonnet-20240620", "stop_reason": "end_turn", "stop_sequence": null, "usage": { "input_tokens": 701, "output_tokens": 121 } }, "response_metadata": { "id": "msg_01NuhYbiu36DSgW7brfoKMr8", "model": "claude-3-5-sonnet-20240620", "stop_reason": "end_turn", "stop_sequence": null, "usage": { "input_tokens": 701, "output_tokens": 121 }, "type": "message", "role": "assistant" }, "tool_calls": [], "invalid_tool_calls": [], "usage_metadata": { "input_tokens": 701, "output_tokens": 121, "total_tokens": 822 } } ] } ================================ ai Message (1) ================================= Based on the search results, I can provide you with information about the current weather in San Francisco: The weather in San Francisco is currently sunny. It's worth noting that the search result included an unusual comment about Geminis, which doesn't seem directly related to the weather. If you'd like more detailed weather information, such as temperature, humidity, or forecast, please let me know, and I can perform another search for more specific weather data. Is there anything else you'd like to know about the weather in San Francisco or any other information you need?