🚀 LangGraph Quickstart

In this tutorial, we will build a support chatbot with LangGraph that can:

✅ Answer common questions by searching the web
✅ Maintain conversation state across calls
✅ Route complex queries to a human for review
✅ Use custom state to control its behavior
✅ Rewind and explore alternative conversation paths

We'll start with a basic chatbot and progressively add more sophisticated capabilities, introducing key LangGraph concepts along the way. Let's dive in! 🌟

Setup

First, install the required packages and configure your environment:

pip install -U langgraph langsmith "langchain[anthropic]"
import getpass
import os


def _set_env(var: str):
    if not os.environ.get(var):
        os.environ[var] = getpass.getpass(f"{var}: ")


_set_env("ANTHROPIC_API_KEY")

Set up LangSmith for LangGraph development

Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor LLM apps built with LangGraph. Read more about how to get started here.

Part 1: Build a Basic Chatbot

We'll first create a simple chatbot using LangGraph. This chatbot will respond directly to user messages. Though simple, it will illustrate the core concepts of building with LangGraph. By the end of this section, you will have built a rudimentary chatbot.

Start by creating a StateGraph. A StateGraph object defines the structure of our chatbot as a "state machine". We'll add nodes to represent the LLM and the functions our chatbot can call, and edges to specify how the bot should transition between these functions.

API Reference: StateGraph | START | END | add_messages

from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    # Messages have the type "list". The `add_messages` function
    # in the annotation defines how this state key should be updated
    # (in this case, it appends messages to the list, rather than overwriting them)
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)

Our graph can now handle two key tasks:

  1. Each node can receive the current State as input and output an update to the state.
  2. Updates to messages will be appended to the existing list rather than overwriting it, thanks to the prebuilt add_messages function used with the Annotated syntax (see the short sketch below).
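
To make the reducer's behavior concrete, here is a small standalone sketch (it is not part of the tutorial's graph) that calls add_messages directly. In practice you never call it yourself; LangGraph applies it whenever a node returns an update for messages.

from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

existing = [HumanMessage(content="hi")]
update = [AIMessage(content="hello!")]

# The reducer appends the update to the existing list instead of replacing it.
merged = add_messages(existing, update)
print(len(merged))  # 2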

Concept

When defining a graph, the first step is to define its State. The State includes the graph's schema and the reducer functions that handle state updates. In our example, State is a TypedDict with a single key: messages. The prebuilt add_messages reducer function is used to append new messages to the list rather than overwrite it. Keys without a reducer annotation will overwrite previous values. Learn more about state, reducers, and related concepts in this guide.


Next, add a "chatbot" node. Nodes represent units of work. They are typically regular Python functions.

API Reference: init_chat_model

from langchain.chat_models import init_chat_model

llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")


def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot", chatbot)

Notice how the chatbot node function takes the current State as input and returns a dictionary containing an updated messages list under the key "messages". This is the basic pattern for all LangGraph node functions.

The add_messages function in our State will append the LLM's response messages to whatever messages are already in the state.
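
As an illustration of that pattern only (this node is hypothetical and is not added to the tutorial's graph), any other node has the same shape: it accepts the current State and returns a partial update.

from langchain_core.messages import AIMessage

def canned_reply(state: State):
    # Hypothetical node: same signature and return shape as `chatbot` above.
    # The returned dict is a partial state update; `add_messages` appends it.
    return {"messages": [AIMessage(content="Thanks, let me look into that.")]}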

Next, add an entry point. This tells our graph where to start its work each time we run it:

graph_builder.add_edge(START, "chatbot")

Similarly, set a finish point. This instructs the graph: **"any time this node runs, you can exit."**

graph_builder.add_edge("chatbot", END)

Finally, we'll want to be able to run our graph. To do so, call "compile()" on the graph builder. This creates a "CompiledGraph" we can invoke on our state.

graph = graph_builder.compile()

You can visualize the graph using the get_graph method and one of the "draw" methods, like draw_ascii or draw_png. Each of these draw methods requires additional dependencies.

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass

Now let's run the chatbot!

Tip: You can exit the chat loop at any time by typing "quit", "exit", or "q".

def stream_graph_updates(user_input: str):
    for event in graph.stream({"messages": [{"role": "user", "content": user_input}]}):
        for value in event.values():
            print("Assistant:", value["messages"][-1].content)


while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break
        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
Assistant: LangGraph is a library designed to help build stateful multi-agent applications using language models. It provides tools for creating workflows and state machines to coordinate multiple AI agents or language model interactions. LangGraph is built on top of LangChain, leveraging its components while adding graph-based coordination capabilities. It's particularly useful for developing more complex, stateful AI applications that go beyond simple query-response interactions.
Goodbye!
Congratulations! You've built your first chatbot using LangGraph. This bot can engage in basic conversation by taking user input and generating responses with an LLM. You can inspect a LangSmith Trace for the call above at the provided link.

However, you may have noticed that the bot's knowledge is limited to what's in its training data. In the next part, we'll add a web search tool to expand the bot's knowledge and make it more capable.

Here is the full code for this section for your reference:

Full Code
API Reference: init_chat_model | StateGraph | add_messages

from typing import Annotated

from langchain.chat_models import init_chat_model
from typing_extensions import TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")


def chatbot(state: State):
    return {"messages": [llm.invoke(state["messages"])]}


# The first argument is the unique node name
# The second argument is the function or object that will be called whenever
# the node is used.
graph_builder.add_node("chatbot", chatbot)
graph_builder.set_entry_point("chatbot")
graph_builder.set_finish_point("chatbot")
graph = graph_builder.compile()

Part 2: 🛠️ Enhancing the Chatbot with Tools

To handle queries our chatbot can't answer "from memory", we'll integrate a web search tool. Our bot can use this tool to find relevant information and provide better responses.

Requirements

Before you start, make sure you have the necessary packages installed and API keys set up:

First, install the requirements to use the Tavily Search Engine, and set your TAVILY_API_KEY:

pip install -U langchain-tavily

_set_env("TAVILY_API_KEY")
TAVILY_API_KEY:  ········
Next, define the tool:

API Reference: TavilySearch

from langchain_tavily import TavilySearch

tool = TavilySearch(max_results=2)
tools = [tool]
tool.invoke("What's a 'node' in LangGraph?")
{'query': "What's a 'node' in LangGraph?",
 'follow_up_questions': None,
 'answer': None,
 'images': [],
 'results': [{'title': "Introduction to LangGraph: A Beginner's Guide - Medium",
   'url': 'https://medium.com/@cplog/introduction-to-langgraph-a-beginners-guide-14f9be027141',
   'content': 'Stateful Graph: LangGraph revolves around the concept of a stateful graph, where each node in the graph represents a step in your computation, and the graph maintains a state that is passed around and updated as the computation progresses. LangGraph supports conditional edges, allowing you to dynamically determine the next node to execute based on the current state of the graph. We define nodes for classifying the input, handling greetings, and handling search queries. def classify_input_node(state): LangGraph is a versatile tool for building complex, stateful applications with LLMs. By understanding its core concepts and working through simple examples, beginners can start to leverage its power for their projects. Remember to pay attention to state management, conditional edges, and ensuring there are no dead-end nodes in your graph.',
   'score': 0.7065353,
   'raw_content': None},
  {'title': 'LangGraph Tutorial: What Is LangGraph and How to Use It?',
   'url': 'https://www.datacamp.com/tutorial/langgraph-tutorial',
   'content': 'LangGraph is a library within the LangChain ecosystem that provides a framework for defining, coordinating, and executing multiple LLM agents (or chains) in a structured and efficient manner. By managing the flow of data and the sequence of operations, LangGraph allows developers to focus on the high-level logic of their applications rather than the intricacies of agent coordination. Whether you need a chatbot that can handle various types of user requests or a multi-agent system that performs complex tasks, LangGraph provides the tools to build exactly what you need. LangGraph significantly simplifies the development of complex LLM applications by providing a structured framework for managing state and coordinating agent interactions.',
   'score': 0.5008063,
   'raw_content': None}],
 'response_time': 1.38}

The results are page summaries our chatbot can use to answer questions.

Next, we'll start defining our graph. Everything below is identical to Part 1, except we have added bind_tools on our LLM. This lets the LLM know the correct JSON format to use if it wants to use our search engine.

API Reference: init_chat_model | StateGraph | START | END | add_messages

from typing import Annotated

from langchain.chat_models import init_chat_model
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
# Modification: tell the LLM which tools it can call
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

Next, we need to create a function to actually run the tools if they are called. We'll do this by adding the tools to a new node.

Below, we implement a BasicToolNode that checks the most recent message in the state and calls tools if the message contains tool_calls. It relies on the LLM's tool_calling support, which is available in Anthropic, OpenAI, Google Gemini, and a number of other LLM providers.

We will later replace this with LangGraph's prebuilt ToolNode to speed things up, but building it ourselves first is instructive.

API Reference: ToolMessage

import json

from langchain_core.messages import ToolMessage


class BasicToolNode:
    """A node that runs the tools requested in the last AIMessage."""

    def __init__(self, tools: list) -> None:
        self.tools_by_name = {tool.name: tool for tool in tools}

    def __call__(self, inputs: dict):
        if messages := inputs.get("messages", []):
            message = messages[-1]
        else:
            raise ValueError("No message found in input")
        outputs = []
        for tool_call in message.tool_calls:
            tool_result = self.tools_by_name[tool_call["name"]].invoke(
                tool_call["args"]
            )
            outputs.append(
                ToolMessage(
                    content=json.dumps(tool_result),
                    name=tool_call["name"],
                    tool_call_id=tool_call["id"],
                )
            )
        return {"messages": outputs}


tool_node = BasicToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

With the tool node added, we can define the conditional_edges.

Recall that edges route the control flow from one node to the next. Conditional edges usually contain "if" statements to route to different nodes depending on the current graph state. These functions receive the current graph state and return a string or list of strings indicating which node(s) to call next.

Below, define a router function called route_tools that checks for tool_calls in the chatbot's output. Provide this function to the graph by calling add_conditional_edges, which tells the graph that whenever the chatbot node completes, it should check this function to see where to go next.

The condition will route to tools if tool calls are present, and to END if not.

Later, we will replace this with the prebuilt tools_condition to be more concise, but implementing it ourselves first makes things clearer.

def route_tools(
    state: State,
):
    """
    Use in the conditional_edge to route to the ToolNode if the last message
    has tool calls. Otherwise, route to the end.
    """
    if isinstance(state, list):
        ai_message = state[-1]
    elif messages := state.get("messages", []):
        ai_message = messages[-1]
    else:
        raise ValueError(f"No messages found in input state to tool_edge: {state}")
    if hasattr(ai_message, "tool_calls") and len(ai_message.tool_calls) > 0:
        return "tools"
    return END


# The `tools_condition` function returns "tools" if the chatbot asks to use a tool, and "END" if
# it is fine directly responding. This conditional routing defines the main agent loop.
graph_builder.add_conditional_edges(
    "chatbot",
    route_tools,
    # The following dictionary lets you tell the graph to interpret the condition's outputs as a specific node
    # It defaults to the identity function, but if you
    # want to use a node named something else apart from "tools",
    # You can update the value of the dictionary to something else
    # e.g., "tools": "my_tools"
    {"tools": "tools", END: END},
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")
graph = graph_builder.compile()

Notice that conditional edges start from a single node. This tells the graph: "any time the 'chatbot' node runs, either go to 'tools' if it calls a tool, or end the loop if it responds directly."

Like the prebuilt tools_condition, our function returns the END string if no tool calls are made. When the graph transitions to END, it has no more tasks to complete and ceases execution. Because the condition can return END, we don't need to explicitly set a finish_point this time. Our graph already has a way to finish!

Let's visualize the graph we've built. The following function has some additional dependencies to run that are unimportant for this tutorial.

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass

Now we can ask the bot questions outside its training data.

while True:
    try:
        user_input = input("User: ")
        if user_input.lower() in ["quit", "exit", "q"]:
            print("Goodbye!")
            break

        stream_graph_updates(user_input)
    except:
        # fallback if input() is not available
        user_input = "What do you know about LangGraph?"
        print("User: " + user_input)
        stream_graph_updates(user_input)
        break
Assistant: [{'text': "To provide you with accurate and up-to-date information about LangGraph, I'll need to search for the latest details. Let me do that for you.", 'type': 'text'}, {'id': 'toolu_01Q588CszHaSvvP2MxRq9zRD', 'input': {'query': 'LangGraph AI tool information'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Assistant: [{"url": "https://langchain.ac.cn/langgraph", "content": "LangGraph sets the foundation for how we can build and scale AI workloads \u2014 from conversational agents, complex task automation, to custom LLM-backed experiences that 'just work'. The next chapter in building complex production-ready features with LLMs is agentic, and with LangGraph and LangSmith, LangChain delivers an out-of-the-box solution ..."}, {"url": "https://github.com/langchain-ai/langgraph", "content": "Overview. LangGraph is a library for building stateful, multi-actor applications with LLMs, used to create agent and multi-agent workflows. Compared to other LLM frameworks, it offers these core benefits: cycles, controllability, and persistence. LangGraph allows you to define flows that involve cycles, essential for most agentic architectures ..."}]
Assistant: Based on the search results, I can provide you with information about LangGraph:

1. Purpose:
   LangGraph is a library designed for building stateful, multi-actor applications with Large Language Models (LLMs). It's particularly useful for creating agent and multi-agent workflows.

2. Developer:
   LangGraph is developed by LangChain, a company known for its tools and frameworks in the AI and LLM space.

3. Key Features:
   - Cycles: LangGraph allows the definition of flows that involve cycles, which is essential for most agentic architectures.
   - Controllability: It offers enhanced control over the application flow.
   - Persistence: The library provides ways to maintain state and persistence in LLM-based applications.

4. Use Cases:
   LangGraph can be used for various applications, including:
   - Conversational agents
   - Complex task automation
   - Custom LLM-backed experiences

5. Integration:
   LangGraph works in conjunction with LangSmith, another tool by LangChain, to provide an out-of-the-box solution for building complex, production-ready features with LLMs.

6. Significance:
   LangGraph is described as setting the foundation for building and scaling AI workloads. It's positioned as a key tool in the next chapter of LLM-based application development, particularly in the realm of agentic AI.

7. Availability:
   LangGraph is open-source and available on GitHub, which suggests that developers can access and contribute to its codebase.

8. Comparison to Other Frameworks:
   LangGraph is noted to offer unique benefits compared to other LLM frameworks, particularly in its ability to handle cycles, provide controllability, and maintain persistence.

LangGraph appears to be a significant tool in the evolving landscape of LLM-based application development, offering developers new ways to create more complex, stateful, and interactive AI systems.
Goodbye!
Congratulations! You've created a conversational agent in LangGraph that can use a search engine to retrieve updated information when needed. Now it can handle a wider range of user queries. To inspect all the steps your agent just took, check out this LangSmith trace.

Our chatbot still can't remember past interactions on its own, which limits its ability to have coherent, multi-turn conversations. In the next part, we'll add memory to address this.

The full code for the graph we created in this section is reproduced below, replacing our BasicToolNode with the prebuilt ToolNode and our route_tools condition with the prebuilt tools_condition.

Full Code
API Reference: init_chat_model | TavilySearch | BaseMessage | StateGraph | add_messages | ToolNode | tools_condition

from typing import Annotated

from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


tool = TavilySearch(max_results=2)
tools = [tool]
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
graph = graph_builder.compile()

Part 3: Adding Memory to the Chatbot

Our chatbot can now use tools to answer user questions, but it doesn't remember the context of previous interactions. This limits its ability to have coherent, multi-turn conversations.

LangGraph solves this problem through persistent checkpointing. If you provide a checkpointer when compiling the graph and a thread_id when calling your graph, LangGraph automatically saves the state after each step. When you invoke the graph again using the same thread_id, the graph loads its saved state, allowing the chatbot to pick up where it left off.

We will see later that checkpointing is much more powerful than simple chat memory: it lets you save and resume complex state at any time for error recovery, human-in-the-loop workflows, time travel interactions, and more. But before we get too far ahead of ourselves, let's add checkpointing to enable multi-turn conversations.
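
As a compact preview of the pattern this section builds up to (a sketch only, assuming a graph_builder like the one from Part 2 is already defined), the two ingredients look roughly like this:

from langgraph.checkpoint.memory import MemorySaver

# 1. Compile with a checkpointer so state is saved after every step.
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

# 2. Invoke with a thread_id; reusing the same id resumes that conversation.
config = {"configurable": {"thread_id": "1"}}
graph.invoke({"messages": [{"role": "user", "content": "Hi!"}]}, config)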

First, create a MemorySaver checkpointer.

API Reference: MemorySaver

from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()

Notice that we're using an in-memory checkpointer. This is convenient for our tutorial (it saves everything in memory). In a production application, you would likely change this to use SqliteSaver or PostgresSaver and connect to your own database.
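
For example, a rough sketch of swapping in a SQLite-backed checkpointer might look like the following. This assumes the separate langgraph-checkpoint-sqlite package is installed; check the current API before relying on it.

import sqlite3

from langgraph.checkpoint.sqlite import SqliteSaver

# Persist checkpoints to a local SQLite file instead of process memory.
conn = sqlite3.connect("checkpoints.db", check_same_thread=False)
memory = SqliteSaver(conn)

# The rest of the section is unchanged: compile the graph with this saver,
# e.g. graph = graph_builder.compile(checkpointer=memory)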

Next, define the graph. Now that you've built your own BasicToolNode, we'll replace it with LangGraph's prebuilt ToolNode and tools_condition, since these do some nice things like parallel API execution. Apart from that, everything below is copied from Part 2.

API Reference: init_chat_model | TavilySearch | BaseMessage | StateGraph | START | END | add_messages | ToolNode | tools_condition

from typing import Annotated

from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


tool = TavilySearch(max_results=2)
tools = [tool]
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
# Any time a tool is called, we return to the chatbot to decide the next step
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

Finally, compile the graph with the provided checkpointer.

graph = graph_builder.compile(checkpointer=memory)

Notice that the connectivity of the graph hasn't changed since Part 2. All we are doing is checkpointing the State as the graph works through each node.

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass

Now you can interact with your bot! First, pick a thread to use as the key for this conversation.

config = {"configurable": {"thread_id": "1"}}

Next, call your chatbot.

user_input = "Hi there! My name is Will."

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================

Hi there! My name is Will.
================================== Ai Message ==================================

Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?
Note: The config was provided as the second positional argument when calling our graph. Importantly, it is not nested within the graph inputs ({'messages': []}).

Let's ask a follow-up question: see if it remembers your name.

user_input = "Remember my name?"

# The config is the **second positional argument** to stream() or invoke()!
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================

Remember my name?
================================== Ai Message ==================================

Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.
Notice that we aren't using an external list for memory: it's all handled by the checkpointer! You can inspect the full execution in this LangSmith trace to see what's going on.

Don't believe me? Try this using a different config.

# The only difference is we change the `thread_id` here to "2" instead of "1"
events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    {"configurable": {"thread_id": "2"}},
    stream_mode="values",
)
for event in events:
    event["messages"][-1].pretty_print()
================================ Human Message =================================

Remember my name?
================================== Ai Message ==================================

I apologize, but I don't have any previous context or memory of your name. As an AI assistant, I don't retain information from past conversations. Each interaction starts fresh. Could you please tell me your name so I can address you properly in this conversation?
Notice that the only change we've made is to modify the thread_id in the config. See this call's LangSmith trace for comparison.

By now, we have made a few checkpoints across two different threads. But what goes into a checkpoint? To inspect a graph's state for a given config at any time, call get_state(config).

snapshot = graph.get_state(config)
snapshot
StateSnapshot(values={'messages': [HumanMessage(content='Hi there! My name is Will.', additional_kwargs={}, response_metadata={}, id='8c1ca919-c553-4ebf-95d4-b59a2d61e078'), AIMessage(content="Hello Will! It's nice to meet you. How can I assist you today? Is there anything specific you'd like to know or discuss?", additional_kwargs={}, response_metadata={'id': 'msg_01WTQebPhNwmMrmmWojJ9KXJ', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 405, 'output_tokens': 32}}, id='run-58587b77-8c82-41e6-8a90-d62c444a261d-0', usage_metadata={'input_tokens': 405, 'output_tokens': 32, 'total_tokens': 437}), HumanMessage(content='Remember my name?', additional_kwargs={}, response_metadata={}, id='daba7df6-ad75-4d6b-8057-745881cea1ca'), AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}, next=(), config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-93e0-6acc-8004-f2ac846575d2'}}, metadata={'source': 'loop', 'writes': {'chatbot': {'messages': [AIMessage(content="Of course, I remember your name, Will. I always try to pay attention to important details that users share with me. Is there anything else you'd like to talk about or any questions you have? I'm here to help with a wide range of topics or tasks.", additional_kwargs={}, response_metadata={'id': 'msg_01E41KitY74HpENRgXx94vag', 'model': 'claude-3-5-sonnet-20240620', 'stop_reason': 'end_turn', 'stop_sequence': None, 'usage': {'input_tokens': 444, 'output_tokens': 58}}, id='run-ffeaae5c-4d2d-4ddb-bd59-5d5cbf2a5af8-0', usage_metadata={'input_tokens': 444, 'output_tokens': 58, 'total_tokens': 502})]}}, 'step': 4, 'parents': {}}, created_at='2024-09-27T19:30:10.820758+00:00', parent_config={'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1ef7d06e-859f-6206-8003-e1bd3c264b8f'}}, tasks=())
snapshot.next  # (since the graph ended this turn, `next` is empty. If you fetch a state from within a graph invocation, next tells which node will execute next)
()

The snapshot above contains the current state values, the corresponding config, and the next node to process. In our case, the graph has reached an END state, so next is empty.

Congratulations! Your chatbot can now maintain conversation state across sessions thanks to LangGraph's checkpointing system. This opens up exciting possibilities for more natural, contextual interactions. LangGraph's checkpointing even handles arbitrarily complex graph states, which is much more expressive and powerful than simple chat memory.

In the next part, we'll introduce human oversight for our bot to handle situations where it may need guidance or verification before proceeding.

Check out the code snippet below to review the graph from this section.

Full Code
API Reference: init_chat_model | TavilySearch | BaseMessage | MemorySaver | StateGraph | add_messages | ToolNode | tools_condition

from typing import Annotated

from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


tool = TavilySearch(max_results=2)
tools = [tool]
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.set_entry_point("chatbot")
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

Part 4: Human-in-the-loop

Agents can be unreliable and may need human input to successfully accomplish tasks. Similarly, for some actions, you may want to require human approval before running to ensure that everything runs as intended.

LangGraph's persistence layer supports human-in-the-loop workflows, allowing execution to pause and resume based on user feedback. The primary interface for this capability is the interrupt function. Calling interrupt inside a node will pause execution. Execution can be resumed, together with new input from a human, by passing in a Command. interrupt is ergonomically similar to Python's built-in input(), with some caveats. We demonstrate an example below.

First, start with our existing code from Part 3. We will make one change: add a simple human_assistance tool that the chatbot can access. This tool uses interrupt to receive information from a human.

API Reference: init_chat_model | TavilySearch | tool | MemorySaver | StateGraph | START | END | add_messages | ToolNode | tools_condition | Command | interrupt

from typing import Annotated

from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.tools import tool
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition

from langgraph.types import Command, interrupt


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


@tool
def human_assistance(query: str) -> str:
    """Request assistance from a human."""
    human_response = interrupt({"query": query})
    return human_response["data"]


tool = TavilySearch(max_results=2)
tools = [tool, human_assistance]
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    message = llm_with_tools.invoke(state["messages"])
    # Because we will be interrupting during tool execution,
    # we disable parallel tool calling to avoid repeating any
    # tool invocations when we resume.
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

Tip

Check out the human-in-the-loop section of the how-to guides for more examples of human-in-the-loop workflows, including how to review and edit tool calls before they are executed.


We compile the graph with a checkpointer, as before:

memory = MemorySaver()

graph = graph_builder.compile(checkpointer=memory)

Visualizing the graph, we get the same layout as before. We have just added a tool!

from IPython.display import Image, display

try:
    display(Image(graph.get_graph().draw_mermaid_png()))
except Exception:
    # This requires some extra dependencies and is optional
    pass

Now, let's prompt the chatbot with a question that will engage the new human_assistance tool:

user_input = "I need some expert guidance for building an AI agent. Could you request assistance for me?"
config = {"configurable": {"thread_id": "1"}}

events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message =================================

I need some expert guidance for building an AI agent. Could you request assistance for me?
================================== Ai Message ==================================

[{'text': "Certainly! I'd be happy to request expert assistance for you regarding building an AI agent. To do this, I'll use the human_assistance function to relay your request. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_01ABUqneqnuHNuo1vhfDFQCW', 'input': {'query': 'A user is requesting expert guidance for building an AI agent. Could you please provide some expert advice or resources on this topic?'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
  human_assistance (toolu_01ABUqneqnuHNuo1vhfDFQCW)
 Call ID: toolu_01ABUqneqnuHNuo1vhfDFQCW
  Args:
    query: A user is requesting expert guidance for building an AI agent. Could you please provide some expert advice or resources on this topic?
The chatbot generated a tool call, but then execution was interrupted! Note that if we inspect the graph state, we see that it stopped at the tools node.

snapshot = graph.get_state(config)
snapshot.next
('tools',)

Let's take a closer look at the human_assistance tool:

@tool
def human_assistance(query: str) -> str:
    """Request assistance from a human."""
    human_response = interrupt({"query": query})
    return human_response["data"]

Similar to Python's built-in input() function, calling interrupt inside the tool will pause execution. Progress is persisted based on our choice of checkpointer: if we were persisting with Postgres, we could resume at any time as long as the database is alive. Here we are persisting with the in-memory checkpointer, so we can resume at any time as long as our Python kernel is running.

To resume execution, we pass a Command object containing the data expected by the tool. The format of this data can be customized to our needs. Here, we just need a dict with a key "data".

human_response = (
    "We, the experts are here to help! We'd recommend you check out LangGraph to build your agent."
    " It's much more reliable and extensible than simple autonomous agents."
)

human_command = Command(resume={"data": human_response})

events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================== Ai Message ==================================

[{'text': "Certainly! I'd be happy to request expert assistance for you regarding building an AI agent. To do this, I'll use the human_assistance function to relay your request. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_01ABUqneqnuHNuo1vhfDFQCW', 'input': {'query': 'A user is requesting expert guidance for building an AI agent. Could you please provide some expert advice or resources on this topic?'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
  human_assistance (toolu_01ABUqneqnuHNuo1vhfDFQCW)
 Call ID: toolu_01ABUqneqnuHNuo1vhfDFQCW
  Args:
    query: A user is requesting expert guidance for building an AI agent. Could you please provide some expert advice or resources on this topic?
================================= Tool Message =================================
Name: human_assistance

We, the experts are here to help! We'd recommend you check out LangGraph to build your agent. It's much more reliable and extensible than simple autonomous agents.
================================== Ai Message ==================================

Thank you for your patience. I've received some expert advice regarding your request for guidance on building an AI agent. Here's what the experts have suggested:

The experts recommend that you look into LangGraph for building your AI agent. They mention that LangGraph is a more reliable and extensible option compared to simple autonomous agents.

LangGraph is likely a framework or library designed specifically for creating AI agents with advanced capabilities. Here are a few points to consider based on this recommendation:

1. Reliability: The experts emphasize that LangGraph is more reliable than simpler autonomous agent approaches. This could mean it has better stability, error handling, or consistent performance.

2. Extensibility: LangGraph is described as more extensible, which suggests that it probably offers a flexible architecture that allows you to easily add new features or modify existing ones as your agent's requirements evolve.

3. Advanced capabilities: Given that it's recommended over "simple autonomous agents," LangGraph likely provides more sophisticated tools and techniques for building complex AI agents.

To get started with LangGraph, you might want to:

1. Search for the official LangGraph documentation or website to learn more about its features and how to use it.
2. Look for tutorials or guides specifically focused on building AI agents with LangGraph.
3. Check if there are any community forums or discussion groups where you can ask questions and get support from other developers using LangGraph.

If you'd like more specific information about LangGraph or have any questions about this recommendation, please feel free to ask, and I can request further assistance from the experts.
Our input has been received and processed as a tool message. Review this call's LangSmith trace to see the exact work that was done in the above call. Notice that the state is loaded in the first step so that our chatbot can continue where it left off.

Congratulations! You've used interrupt to add human-in-the-loop execution to your chatbot, allowing for human oversight and intervention when needed. This opens up the potential UIs you can create with your AI systems. Since we have already added a checkpointer, the graph can be paused **indefinitely** and resumed at any point as if nothing had happened, as long as the underlying persistence layer is live.

Human-in-the-loop workflows enable a variety of new workflows and user experiences. Check out this section of the how-to guides for more examples, including how to review and edit tool calls before they are executed.

Full Code
API Reference: init_chat_model | TavilySearch | tool | MemorySaver | StateGraph | START | END | add_messages | ToolNode | tools_condition | Command | interrupt

from typing import Annotated

from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.tools import tool
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


@tool
def human_assistance(query: str) -> str:
    """Request assistance from a human."""
    human_response = interrupt({"query": query})
    return human_response["data"]


tool = TavilySearch(max_results=2)
tools = [tool, human_assistance]
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    message = llm_with_tools.invoke(state["messages"])
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

Part 5: Customizing State

So far, we've relied on a simple state with one entry: a list of messages. You can go far with this simple state, but if you want to define complex behavior without relying on the message list, you can add additional fields to the state. Here we will demonstrate a new scenario, in which the chatbot uses its search tool to find specific information and forwards it to a human for review. Let's have the chatbot research the birthday of an entity. We will add name and birthday keys to the state:

API Reference: add_messages

from typing import Annotated

from typing_extensions import TypedDict

from langgraph.graph.message import add_messages


class State(TypedDict):
    messages: Annotated[list, add_messages]
    name: str
    birthday: str

Adding this information to the state makes it easily accessible by other graph nodes (e.g., a downstream node that stores or processes the information), as well as the graph's persistence layer.

Here, we will populate the state keys inside of our human_assistance tool. This allows a human to review the information before it is stored in the state. We will again use Command, this time to issue a state update from inside our tool. Read more about use cases for Command here.

API Reference: ToolMessage | InjectedToolCallId | tool | Command | interrupt

from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool

from langgraph.types import Command, interrupt


@tool
# Note that because we are generating a ToolMessage for a state update, we
# generally require the ID of the corresponding tool call. We can use
# LangChain's InjectedToolCallId to signal that this argument should not
# be revealed to the model in the tool's schema.
def human_assistance(
    name: str, birthday: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> str:
    """Request assistance from a human."""
    human_response = interrupt(
        {
            "question": "Is this correct?",
            "name": name,
            "birthday": birthday,
        },
    )
    # If the information is correct, update the state as-is.
    if human_response.get("correct", "").lower().startswith("y"):
        verified_name = name
        verified_birthday = birthday
        response = "Correct"
    # Otherwise, receive information from the human reviewer.
    else:
        verified_name = human_response.get("name", name)
        verified_birthday = human_response.get("birthday", birthday)
        response = f"Made a correction: {human_response}"

    # This time we explicitly update the state with a ToolMessage inside
    # the tool.
    state_update = {
        "name": verified_name,
        "birthday": verified_birthday,
        "messages": [ToolMessage(response, tool_call_id=tool_call_id)],
    }
    # We return a Command object in the tool to update our state.
    return Command(update=state_update)

Otherwise, the rest of our graph is the same:

API Reference: init_chat_model | TavilySearch | MemorySaver | StateGraph | START | END | ToolNode | tools_condition

from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.prebuilt import ToolNode, tools_condition


tool = TavilySearch(max_results=2)
tools = [tool, human_assistance]
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    message = llm_with_tools.invoke(state["messages"])
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

Let's prompt our application to look up the "birthday" of the LangGraph library. We will direct the chatbot to reach out to the human_assistance tool once it has the required information. Note that setting name and birthday in the arguments for the tool forces the chatbot to generate proposals for these fields.

user_input = (
    "Can you look up when LangGraph was released? "
    "When you have the answer, use the human_assistance tool for review."
)
config = {"configurable": {"thread_id": "1"}}

events = graph.stream(
    {"messages": [{"role": "user", "content": user_input}]},
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message =================================

Can you look up when LangGraph was released? When you have the answer, use the human_assistance tool for review.
================================== Ai Message ==================================

[{'text': "Certainly! I'll start by searching for information about LangGraph's release date using the Tavily search function. Then, I'll use the human_assistance tool for review.", 'type': 'text'}, {'id': 'toolu_01JoXQPgTVJXiuma8xMVwqAi', 'input': {'query': 'LangGraph release date'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
  tavily_search_results_json (toolu_01JoXQPgTVJXiuma8xMVwqAi)
 Call ID: toolu_01JoXQPgTVJXiuma8xMVwqAi
  Args:
    query: LangGraph release date
================================= Tool Message =================================
Name: tavily_search_results_json

[{"url": "https://blog.langchain.ac.cn/langgraph-cloud/", "content": "We also have a new stable release of LangGraph. By LangChain 6 min read Jun 27, 2024 (Oct '24) Edit: Since the launch of LangGraph Cloud, we now have multiple deployment options alongside LangGraph Studio - which now fall under LangGraph Platform. LangGraph Cloud is synonymous with our Cloud SaaS deployment option."}, {"url": "https://changelog.langchain.ac.cn/announcements/langgraph-cloud-deploy-at-scale-monitor-carefully-iterate-boldly", "content": "LangChain - Changelog | ☁ 🚀 LangGraph Cloud: Deploy at scale, monitor LangChain LangSmith LangGraph LangChain LangSmith LangGraph LangChain LangSmith LangGraph LangChain Changelog Sign up for our newsletter to stay up to date DATE: The LangChain Team LangGraph LangGraph Cloud ☁ 🚀 LangGraph Cloud: Deploy at scale, monitor carefully, iterate boldly DATE: June 27, 2024 AUTHOR: The LangChain Team LangGraph Cloud is now in closed beta, offering scalable, fault-tolerant deployment for LangGraph agents. LangGraph Cloud also includes a new playground-like studio for debugging agent failure modes and quick iteration: Join the waitlist today for LangGraph Cloud. And to learn more, read our blog post announcement or check out our docs. Subscribe By clicking subscribe, you accept our privacy policy and terms and conditions."}]
================================== Ai Message ==================================

[{'text': "Based on the search results, it appears that LangGraph was already in existence before June 27, 2024, when LangGraph Cloud was announced. However, the search results don't provide a specific release date for the original LangGraph. \n\nGiven this information, I'll use the human_assistance tool to review and potentially provide more accurate information about LangGraph's initial release date.", 'type': 'text'}, {'id': 'toolu_01JDQAV7nPqMkHHhNs3j3XoN', 'input': {'name': 'Assistant', 'birthday': '2023-01-01'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
  human_assistance (toolu_01JDQAV7nPqMkHHhNs3j3XoN)
 Call ID: toolu_01JDQAV7nPqMkHHhNs3j3XoN
  Args:
    name: Assistant
    birthday: 2023-01-01
We've hit the interrupt in the human_assistance tool again. In this case, the chatbot failed to identify the correct date, so we can supply it:

human_command = Command(
    resume={
        "name": "LangGraph",
        "birthday": "Jan 17, 2024",
    },
)

events = graph.stream(human_command, config, stream_mode="values")
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================== Ai Message ==================================

[{'text': "Based on the search results, it appears that LangGraph was already in existence before June 27, 2024, when LangGraph Cloud was announced. However, the search results don't provide a specific release date for the original LangGraph. \n\nGiven this information, I'll use the human_assistance tool to review and potentially provide more accurate information about LangGraph's initial release date.", 'type': 'text'}, {'id': 'toolu_01JDQAV7nPqMkHHhNs3j3XoN', 'input': {'name': 'Assistant', 'birthday': '2023-01-01'}, 'name': 'human_assistance', 'type': 'tool_use'}]
Tool Calls:
  human_assistance (toolu_01JDQAV7nPqMkHHhNs3j3XoN)
 Call ID: toolu_01JDQAV7nPqMkHHhNs3j3XoN
  Args:
    name: Assistant
    birthday: 2023-01-01
================================= Tool Message =================================
Name: human_assistance

Made a correction: {'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}
================================== Ai Message ==================================

Thank you for the human assistance. I can now provide you with the correct information about LangGraph's release date.

LangGraph was initially released on January 17, 2024. This information comes from the human assistance correction, which is more accurate than the search results I initially found.

To summarize:
1. LangGraph's original release date: January 17, 2024
2. LangGraph Cloud announcement: June 27, 2024

It's worth noting that LangGraph had been in development and use for some time before the LangGraph Cloud announcement, but the official initial release of LangGraph itself was on January 17, 2024.
Note that these fields are now reflected in the state:

snapshot = graph.get_state(config)

{k: v for k, v in snapshot.values.items() if k in ("name", "birthday")}
{'name': 'LangGraph', 'birthday': 'Jan 17, 2024'}

This makes them easily accessible to downstream nodes (e.g., a node that further processes or stores the information).
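
As an illustration only (this node is hypothetical and is not added to the tutorial's graph), a downstream node could read the new keys the same way it reads messages:

def save_entity(state: State):
    # Hypothetical downstream node: read the human-verified fields from state.
    print(f"Storing record: {state['name']} (born {state['birthday']})")
    # A node that doesn't modify the state can return an empty update.
    return {}

# graph_builder.add_node("save_entity", save_entity)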

Manually updating state

LangGraph gives you a high degree of control over the application state. For instance, at any point (including when interrupted), we can manually override a key using graph.update_state:

graph.update_state(config, {"name": "LangGraph (library)"})
{'configurable': {'thread_id': '1',
  'checkpoint_ns': '',
  'checkpoint_id': '1efd4ec5-cf69-6352-8006-9278f1730162'}}

If we call graph.get_state, we can see the new value is reflected:

snapshot = graph.get_state(config)

{k: v for k, v in snapshot.values.items() if k in ("name", "birthday")}
{'name': 'LangGraph (library)', 'birthday': 'Jan 17, 2024'}

Manual state updates will even generate a trace in LangSmith. If desired, they can also be used to control human-in-the-loop workflows, as described in this guide. Use of the interrupt function is generally recommended instead, as it allows data to be transmitted in a human-in-the-loop interaction independently of state updates.

Congratulations! You've added custom keys to the state to facilitate a more complex workflow, and learned how to generate state updates from inside tools.

We're almost done with the tutorial, but there is one more concept we'd like to review before finishing, one that connects checkpointing and state updates.

This section's code is reproduced below for your reference.

Full Code
API Reference: init_chat_model | TavilySearch | ToolMessage | InjectedToolCallId | tool | MemorySaver | StateGraph | START | END | add_messages | ToolNode | tools_condition | Command | interrupt

from typing import Annotated

from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.messages import ToolMessage
from langchain_core.tools import InjectedToolCallId, tool
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.types import Command, interrupt



class State(TypedDict):
    messages: Annotated[list, add_messages]
    name: str
    birthday: str


@tool
def human_assistance(
    name: str, birthday: str, tool_call_id: Annotated[str, InjectedToolCallId]
) -> str:
    """Request assistance from a human."""
    human_response = interrupt(
        {
            "question": "Is this correct?",
            "name": name,
            "birthday": birthday,
        },
    )
    if human_response.get("correct", "").lower().startswith("y"):
        verified_name = name
        verified_birthday = birthday
        response = "Correct"
    else:
        verified_name = human_response.get("name", name)
        verified_birthday = human_response.get("birthday", birthday)
        response = f"Made a correction: {human_response}"

    state_update = {
        "name": verified_name,
        "birthday": verified_birthday,
        "messages": [ToolMessage(response, tool_call_id=tool_call_id)],
    }
    return Command(update=state_update)


tool = TavilySearch(max_results=2)
tools = [tool, human_assistance]
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    message = llm_with_tools.invoke(state["messages"])
    assert len(message.tool_calls) <= 1
    return {"messages": [message]}


graph_builder = StateGraph(State)
graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=tools)
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

Part 6: Time Travel

In a typical chatbot workflow, the user interacts with the bot one or more times to accomplish a task. In the previous parts, we saw how to add memory and human-in-the-loop capability so that we can checkpoint our graph state and control future responses.

But what if you want to let your user start from a previous response and "branch off" to explore a different outcome? Or what if you want users to be able to "rewind" your assistant's work to fix mistakes or try a different strategy (common in applications like autonomous software engineers)?

You can create both of these experiences and more using LangGraph's built-in "time travel" functionality.

In this section, you will "rewind" your graph by fetching a checkpoint using the graph's get_state_history method. You can then resume execution at that earlier point in time.
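
As a rough sketch of that pattern (using the compiled graph with a checkpointer that we build below), replaying from an earlier checkpoint looks something like this:

# Walk the saved checkpoints for a thread, newest first, and pick one.
to_replay = None
for state in graph.get_state_history({"configurable": {"thread_id": "1"}}):
    if len(state.values["messages"]) == 6:  # any selection criterion works
        to_replay = state

# Streaming with `None` as the input and the checkpoint's config resumes
# execution from that saved point in time.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()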

To do this, let's use our simple chatbot with tools from Part 3:

API Reference: init_chat_model | TavilySearch | BaseMessage | MemorySaver | StateGraph | START | END | add_messages | ToolNode | tools_condition

from typing import Annotated

from langchain.chat_models import init_chat_model
from langchain_tavily import TavilySearch
from langchain_core.messages import BaseMessage
from typing_extensions import TypedDict

from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.prebuilt import ToolNode, tools_condition


class State(TypedDict):
    messages: Annotated[list, add_messages]


graph_builder = StateGraph(State)


tool = TavilySearch(max_results=2)
tools = [tool]
llm = init_chat_model("anthropic:claude-3-5-sonnet-latest")
llm_with_tools = llm.bind_tools(tools)


def chatbot(state: State):
    return {"messages": [llm_with_tools.invoke(state["messages"])]}


graph_builder.add_node("chatbot", chatbot)

tool_node = ToolNode(tools=[tool])
graph_builder.add_node("tools", tool_node)

graph_builder.add_conditional_edges(
    "chatbot",
    tools_condition,
)
graph_builder.add_edge("tools", "chatbot")
graph_builder.add_edge(START, "chatbot")

memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)

Let's have our graph take a couple of steps. Every step will be checkpointed in its state history.

config = {"configurable": {"thread_id": "1"}}
events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "I'm learning LangGraph. "
                    "Could you do some research on it for me?"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message =================================

I'm learning LangGraph. Could you do some research on it for me?
================================== Ai Message ==================================

[{'text': "Certainly! I'd be happy to research LangGraph for you. To get the most up-to-date and accurate information, I'll use the Tavily search engine to look this up. Let me do that for you now.", 'type': 'text'}, {'id': 'toolu_01BscbfJJB9EWJFqGrN6E54e', 'input': {'query': 'LangGraph latest information and features'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
  tavily_search_results_json (toolu_01BscbfJJB9EWJFqGrN6E54e)
 Call ID: toolu_01BscbfJJB9EWJFqGrN6E54e
  Args:
    query: LangGraph latest information and features
================================= Tool Message =================================
Name: tavily_search_results_json

[{"url": "https://blockchain.news/news/langchain-new-features-upcoming-events-update", "content": "LangChain, a leading platform in the AI development space, has released its latest updates, showcasing new use cases and enhancements across its ecosystem. According to the LangChain Blog, the updates cover advancements in LangGraph Cloud, LangSmith's self-improving evaluators, and revamped documentation for LangGraph."}, {"url": "https://blog.langchain.ac.cn/langgraph-platform-announce/", "content": "With these learnings under our belt, we decided to couple some of our latest offerings under LangGraph Platform. LangGraph Platform today includes LangGraph Server, LangGraph Studio, plus the CLI and SDK. ... we added features in LangGraph Server to deliver on a few key value areas. Below, we'll focus on these aspects of LangGraph Platform."}]
================================== Ai Message ==================================

Thank you for your patience. I've found some recent information about LangGraph for you. Let me summarize the key points:

1. LangGraph is part of the LangChain ecosystem, which is a leading platform in AI development.

2. Recent updates and features of LangGraph include:

   a. LangGraph Cloud: This seems to be a cloud-based version of LangGraph, though specific details weren't provided in the search results.

   b. LangGraph Platform: This is a newly introduced concept that combines several offerings:
      - LangGraph Server
      - LangGraph Studio
      - CLI (Command Line Interface)
      - SDK (Software Development Kit)

3. LangGraph Server: This component has received new features to enhance its value proposition, though the specific features weren't detailed in the search results.

4. LangGraph Studio: This appears to be a new tool in the LangGraph ecosystem, likely providing a graphical interface for working with LangGraph.

5. Documentation: The LangGraph documentation has been revamped, which should make it easier for learners like yourself to understand and use the tool.

6. Integration with LangSmith: While not directly part of LangGraph, LangSmith (another tool in the LangChain ecosystem) now features self-improving evaluators, which might be relevant if you're using LangGraph as part of a larger LangChain project.

As you're learning LangGraph, it would be beneficial to:

1. Check out the official LangChain documentation, especially the newly revamped LangGraph sections.
2. Explore the different components of the LangGraph Platform (Server, Studio, CLI, and SDK) to see which best fits your learning needs.
3. Keep an eye on LangGraph Cloud developments, as cloud-based solutions often provide an easier starting point for learners.
4. Consider how LangGraph fits into the broader LangChain ecosystem, especially its interaction with tools like LangSmith.

Is there any specific aspect of LangGraph you'd like to know more about? I'd be happy to do a more focused search on particular features or use cases.

events = graph.stream(
    {
        "messages": [
            {
                "role": "user",
                "content": (
                    "Ya that's helpful. Maybe I'll "
                    "build an autonomous agent with it!"
                ),
            },
        ],
    },
    config,
    stream_mode="values",
)
for event in events:
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================ Human Message =================================

Ya that's helpful. Maybe I'll build an autonomous agent with it!
================================== Ai Message ==================================

[{'text': "That's an exciting idea! Building an autonomous agent with LangGraph is indeed a great application of this technology. LangGraph is particularly well-suited for creating complex, multi-step AI workflows, which is perfect for autonomous agents. Let me gather some more specific information about using LangGraph for building autonomous agents.", 'type': 'text'}, {'id': 'toolu_01QWNHhUaeeWcGXvA4eHT7Zo', 'input': {'query': 'Building autonomous agents with LangGraph examples and tutorials'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
  tavily_search_results_json (toolu_01QWNHhUaeeWcGXvA4eHT7Zo)
 Call ID: toolu_01QWNHhUaeeWcGXvA4eHT7Zo
  Args:
    query: Building autonomous agents with LangGraph examples and tutorials
================================= Tool Message =================================
Name: tavily_search_results_json

[{"url": "https://towardsdatascience.com/building-autonomous-multi-tool-agents-with-gemini-2-0-and-langgraph-ad3d7bd5e79d", "content": "Building Autonomous Multi-Tool Agents with Gemini 2.0 and LangGraph | by Youness Mansar | Jan, 2025 | Towards Data Science Building Autonomous Multi-Tool Agents with Gemini 2.0 and LangGraph A practical tutorial with full code examples for building and running multi-tool agents Towards Data Science LLMs are remarkable — they can memorize vast amounts of information, answer general knowledge questions, write code, generate stories, and even fix your grammar. In this tutorial, we are going to build a simple LLM agent that is equipped with four tools that it can use to answer a user’s question. This Agent will have the following specifications: Follow Published in Towards Data Science --------------------------------- Your home for data science and AI. Follow Follow Follow"}, {"url": "https://github.com/anmolaman20/Tools_and_Agents", "content": "GitHub - anmolaman20/Tools_and_Agents: This repository provides resources for building AI agents using Langchain and Langgraph. This repository provides resources for building AI agents using Langchain and Langgraph. This repository provides resources for building AI agents using Langchain and Langgraph. This repository serves as a comprehensive guide for building AI-powered agents using Langchain and Langgraph. It provides hands-on examples, practical tutorials, and resources for developers and AI enthusiasts to master building intelligent systems and workflows. AI Agent Development: Gain insights into creating intelligent systems that think, reason, and adapt in real time. This repository is ideal for AI practitioners, developers exploring language models, or anyone interested in building intelligent systems. This repository provides resources for building AI agents using Langchain and Langgraph."}]
================================== Ai Message ==================================

Great idea! Building an autonomous agent with LangGraph is definitely an exciting project. Based on the latest information I've found, here are some insights and tips for building autonomous agents with LangGraph:

1. Multi-Tool Agents: LangGraph is particularly well-suited for creating autonomous agents that can use multiple tools. This allows your agent to have a diverse set of capabilities and choose the right tool for each task.

2. Integration with Large Language Models (LLMs): You can combine LangGraph with powerful LLMs like Gemini 2.0 to create more intelligent and capable agents. The LLM can serve as the "brain" of your agent, making decisions and generating responses.

3. Workflow Management: LangGraph excels at managing complex, multi-step AI workflows. This is crucial for autonomous agents that need to break down tasks into smaller steps and execute them in the right order.

4. Practical Tutorials Available: There are tutorials available that provide full code examples for building and running multi-tool agents. These can be incredibly helpful as you start your project.

5. Langchain Integration: LangGraph is often used in conjunction with Langchain. This combination provides a powerful framework for building AI agents, offering features like memory management, tool integration, and prompt management.

6. GitHub Resources: There are repositories available (like the one by anmolaman20) that provide comprehensive resources for building AI agents using Langchain and LangGraph. These can be valuable references as you develop your agent.

7. Real-time Adaptation: LangGraph allows you to create agents that can think, reason, and adapt in real-time, which is crucial for truly autonomous behavior.

8. Customization: You can equip your agent with specific tools tailored to your use case. For example, you might include tools for web searching, data analysis, or interacting with specific APIs.

To get started with your autonomous agent project:

1. Familiarize yourself with LangGraph's documentation and basic concepts.
2. Look into tutorials that specifically deal with building autonomous agents, like the one mentioned from Towards Data Science.
3. Decide on the specific capabilities you want your agent to have and identify the tools it will need.
4. Start with a simple agent and gradually add complexity as you become more comfortable with the framework.
5. Experiment with different LLMs to find the one that works best for your use case.
6. Pay attention to how you structure the agent's decision-making process and workflow.
7. Don't forget to implement proper error handling and safety measures, especially if your agent will be interacting with external systems or making important decisions.

Building an autonomous agent is an iterative process, so be prepared to refine and improve your agent over time. Good luck with your project! If you need any more specific information as you progress, feel free to ask.
Now that the agent has taken several steps, we can replay the full state history to see everything that happened.

to_replay = None
for state in graph.get_state_history(config):
    print("Num Messages: ", len(state.values["messages"]), "Next: ", state.next)
    print("-" * 80)
    if len(state.values["messages"]) == 6:
        # We are somewhat arbitrarily selecting a specific state based on the number of chat messages in the state.
        to_replay = state
Num Messages:  8 Next:  ()
--------------------------------------------------------------------------------
Num Messages:  7 Next:  ('chatbot',)
--------------------------------------------------------------------------------
Num Messages:  6 Next:  ('tools',)
--------------------------------------------------------------------------------
Num Messages:  5 Next:  ('chatbot',)
--------------------------------------------------------------------------------
Num Messages:  4 Next:  ('__start__',)
--------------------------------------------------------------------------------
Num Messages:  4 Next:  ()
--------------------------------------------------------------------------------
Num Messages:  3 Next:  ('chatbot',)
--------------------------------------------------------------------------------
Num Messages:  2 Next:  ('tools',)
--------------------------------------------------------------------------------
Num Messages:  1 Next:  ('chatbot',)
--------------------------------------------------------------------------------
Num Messages:  0 Next:  ('__start__',)
--------------------------------------------------------------------------------
Notice that checkpoints are saved for every step of the graph. This spans invocations, so you can rewind across the full history of the thread. We've selected to_replay as the state to resume from: it is the state after the chatbot node in the second graph invocation above.

Resuming from this point should invoke the tools node next.

print(to_replay.next)
print(to_replay.config)
('tools',)
{'configurable': {'thread_id': '1', 'checkpoint_ns': '', 'checkpoint_id': '1efd43e3-0c1f-6c4e-8006-891877d65740'}}
Notice that the checkpoint's config (to_replay.config) contains a checkpoint_id timestamp. Providing this checkpoint_id value tells LangGraph's checkpointer to load the state from that point in time. Let's try it:

# The `checkpoint_id` in the `to_replay.config` corresponds to a state we've persisted to our checkpointer.
for event in graph.stream(None, to_replay.config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()
================================== Ai Message ==================================

[{'text': "That's an exciting idea! Building an autonomous agent with LangGraph is indeed a great application of this technology. LangGraph is particularly well-suited for creating complex, multi-step AI workflows, which is perfect for autonomous agents. Let me gather some more specific information about using LangGraph for building autonomous agents.", 'type': 'text'}, {'id': 'toolu_01QWNHhUaeeWcGXvA4eHT7Zo', 'input': {'query': 'Building autonomous agents with LangGraph examples and tutorials'}, 'name': 'tavily_search_results_json', 'type': 'tool_use'}]
Tool Calls:
  tavily_search_results_json (toolu_01QWNHhUaeeWcGXvA4eHT7Zo)
 Call ID: toolu_01QWNHhUaeeWcGXvA4eHT7Zo
  Args:
    query: Building autonomous agents with LangGraph examples and tutorials
================================= Tool Message =================================
Name: tavily_search_results_json

[{"url": "https://towardsdatascience.com/building-autonomous-multi-tool-agents-with-gemini-2-0-and-langgraph-ad3d7bd5e79d", "content": "Building Autonomous Multi-Tool Agents with Gemini 2.0 and LangGraph | by Youness Mansar | Jan, 2025 | Towards Data Science Building Autonomous Multi-Tool Agents with Gemini 2.0 and LangGraph A practical tutorial with full code examples for building and running multi-tool agents Towards Data Science LLMs are remarkable — they can memorize vast amounts of information, answer general knowledge questions, write code, generate stories, and even fix your grammar. In this tutorial, we are going to build a simple LLM agent that is equipped with four tools that it can use to answer a user’s question. This Agent will have the following specifications: Follow Published in Towards Data Science --------------------------------- Your home for data science and AI. Follow Follow Follow"}, {"url": "https://github.com/anmolaman20/Tools_and_Agents", "content": "GitHub - anmolaman20/Tools_and_Agents: This repository provides resources for building AI agents using Langchain and Langgraph. This repository provides resources for building AI agents using Langchain and Langgraph. This repository provides resources for building AI agents using Langchain and Langgraph. This repository serves as a comprehensive guide for building AI-powered agents using Langchain and Langgraph. It provides hands-on examples, practical tutorials, and resources for developers and AI enthusiasts to master building intelligent systems and workflows. AI Agent Development: Gain insights into creating intelligent systems that think, reason, and adapt in real time. This repository is ideal for AI practitioners, developers exploring language models, or anyone interested in building intelligent systems. This repository provides resources for building AI agents using Langchain and Langgraph."}]
================================== Ai Message ==================================

Great idea! Building an autonomous agent with LangGraph is indeed an excellent way to apply and deepen your understanding of the technology. Based on the search results, I can provide you with some insights and resources to help you get started:

1. Multi-Tool Agents:
   LangGraph is well-suited for building autonomous agents that can use multiple tools. This allows your agent to have a variety of capabilities and choose the appropriate tool based on the task at hand.

2. Integration with Large Language Models (LLMs):
   There's a tutorial that specifically mentions using Gemini 2.0 (Google's LLM) with LangGraph to build autonomous agents. This suggests that LangGraph can be integrated with various LLMs, giving you flexibility in choosing the language model that best fits your needs.

3. Practical Tutorials:
   There are tutorials available that provide full code examples for building and running multi-tool agents. These can be invaluable as you start your project, giving you a concrete starting point and demonstrating best practices.

4. GitHub Resources:
   There's a GitHub repository (github.com/anmolaman20/Tools_and_Agents) that provides resources for building AI agents using both Langchain and Langgraph. This could be a great resource for code examples, tutorials, and understanding how LangGraph fits into the broader LangChain ecosystem.

5. Real-Time Adaptation:
   The resources mention creating intelligent systems that can think, reason, and adapt in real-time. This is a key feature of advanced autonomous agents and something you can aim for in your project.

6. Diverse Applications:
   The materials suggest that these techniques can be applied to various tasks, from answering questions to potentially more complex decision-making processes.

To get started with your autonomous agent project using LangGraph, you might want to:

1. Review the tutorials mentioned, especially those with full code examples.
2. Explore the GitHub repository for hands-on examples and resources.
3. Decide on the specific tasks or capabilities you want your agent to have.
4. Choose an LLM to integrate with LangGraph (like GPT, Gemini, or others).
5. Start with a simple agent that uses one or two tools, then gradually expand its capabilities.
6. Implement decision-making logic to help your agent choose between different tools or actions.
7. Test your agent thoroughly with various inputs and scenarios to ensure robust performance.

Remember, building an autonomous agent is an iterative process. Start simple and gradually increase complexity as you become more comfortable with LangGraph and its capabilities.

Would you like more information on any specific aspect of building your autonomous agent with LangGraph?
Notice that the graph resumed execution from the **tools** node. You can tell this is the case because, after re-emitting the checkpoint's last message (the assistant's pending tool call), the next value printed above is the response from our search engine tool rather than a new chatbot turn.

Congratulations! You've now used time-travel checkpoint traversal in LangGraph. Being able to rewind and explore alternative paths opens up a world of possibilities for debugging, experimentation, and interactive applications.
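For example, rather than only replaying a checkpoint, you can fork the thread at that checkpoint and send it down a different path. Below is a minimal sketch of one way to do this with graph.update_state, assuming the to_replay snapshot selected above and the tavily_search_results_json tool from earlier in this tutorial; the replacement query string is purely illustrative:

from langchain_core.messages import AIMessage

# The checkpoint's last message is the assistant turn that issued the tool call.
existing_message = to_replay.values["messages"][-1]

# Build a replacement with a different search query. Reusing the original
# message id makes the add_messages reducer overwrite the old message
# instead of appending a new one.
new_message = AIMessage(
    content=existing_message.content,
    tool_calls=[
        {
            "name": "tavily_search_results_json",
            "args": {"query": "LangGraph human-in-the-loop agent patterns"},  # illustrative query
            "id": existing_message.tool_calls[0]["id"],
        }
    ],
    id=existing_message.id,
)

# update_state writes a new checkpoint that branches off to_replay and
# returns a config pointing at that fork.
forked_config = graph.update_state(to_replay.config, {"messages": [new_message]})

# Resuming from the forked config runs the tools node with the edited query,
# leaving the original history intact.
for event in graph.stream(None, forked_config, stream_mode="values"):
    if "messages" in event:
        event["messages"][-1].pretty_print()

Because every fork is just another checkpoint on the same thread, you can later list both branches with graph.get_state_history(config) and compare how each one unfolded.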

Next steps

Take your journey further by exploring deployment and advanced features:

LangGraph Server quickstart

LangGraph Cloud

Expand your knowledge with these resources:

LangGraph framework

LangGraph Platform