{ "cells": [ { "attachments": {}, "cell_type": "markdown", "id": "d2eecb96-cf0e-47ed-8116-88a7eaa4236d", "metadata": {}, "source": [ "# How to add cross-thread persistence to your graph\n", "\n", "
\n", "

Prerequisites

\n", "

\n", " This guide assumes familiarity with the following:\n", "

\n", "

\n", "
\n", "\n", "In the [previous guide](https://langchain-ai.github.io/langgraph/how-tos/persistence/) you learned how to persist graph state across multiple interactions on a single [thread](). LangGraph also allows you to persist data across **multiple threads**. For instance, you can store information about users (their names or preferences) in a shared memory and reuse them in the new conversational threads.\n", "\n", "In this guide, we will show how to construct and use a graph that has a shared memory implemented using the [Store](https://langchain-ai.github.io/langgraph/reference/store/#langgraph.store.base.BaseStore) interface.\n", "\n", "
\n", "

Note

\n", "

\n", " Support for the Store API that is used in this guide was added in LangGraph v0.2.32.\n", "

\n", "
\n", "\n", "## Setup\n", "\n", "First, let's install the required packages and set our API keys" ] }, { "cell_type": "code", "execution_count": 1, "id": "3457aadf", "metadata": {}, "outputs": [], "source": [ "%%capture --no-stderr\n", "%pip install -U langchain_openai langgraph" ] }, { "cell_type": "code", "execution_count": 2, "id": "aa2c64a7", "metadata": {}, "outputs": [ { "name": "stdin", "output_type": "stream", "text": [ "ANTHROPIC_API_KEY: ········\n" ] } ], "source": [ "import getpass\n", "import os\n", "\n", "\n", "def _set_env(var: str):\n", " if not os.environ.get(var):\n", " os.environ[var] = getpass.getpass(f\"{var}: \")\n", "\n", "\n", "_set_env(\"ANTHROPIC_API_KEY\")" ] }, { "cell_type": "markdown", "id": "51b6817d", "metadata": {}, "source": [ "
\n", "

Set up LangSmith for LangGraph development

\n", "

\n", " Sign up for LangSmith to quickly spot issues and improve the performance of your LangGraph projects. LangSmith lets you use trace data to debug, test, and monitor your LLM apps built with LangGraph — read more about how to get started here. \n", "

\n", "
" ] }, { "cell_type": "markdown", "id": "c4c550b5-1954-496b-8b9d-800361af17dc", "metadata": {}, "source": [ "## Define store\n", "\n", "In this example we will create a graph that will be able to retrieve information about a user's preferences. We will do so by defining an `InMemoryStore` - an object that can store data in memory and query that data. We will then pass the store object when compiling the graph. This allows each node in the graph to access the store: when you define node functions, you can define `store` keyword argument, and LangGraph will automatically pass the store object you compiled the graph with.\n", "\n", "When storing objects using the `Store` interface you define two things:\n", "\n", "* the namespace for the object, a tuple (similar to directories)\n", "* the object key (similar to filenames)\n", "\n", "In our example, we'll be using `(\"memories\", )` as namespace and random UUID as key for each new memory.\n", "\n", "Importantly, to determine the user, we will be passing `user_id` via the config keyword argument of the node function.\n", "\n", "Let's first define an `InMemoryStore` which is already populated with some memories about the users." ] }, { "cell_type": "code", "execution_count": 3, "id": "a7f303d6-612e-4e34-bf36-29d4ed25d802", "metadata": {}, "outputs": [], "source": [ "from langgraph.store.memory import InMemoryStore\n", "\n", "in_memory_store = InMemoryStore()" ] }, { "cell_type": "markdown", "id": "3389c9f4-226d-40c7-8bfc-ee8aac24f79d", "metadata": {}, "source": [ "## Create graph" ] }, { "cell_type": "code", "execution_count": 4, "id": "2a30a362-528c-45ee-9df6-630d2d843588", "metadata": {}, "outputs": [], "source": [ "import uuid\n", "from typing import Annotated\n", "from typing_extensions import TypedDict\n", "\n", "from langchain_anthropic import ChatAnthropic\n", "from langchain_core.runnables import RunnableConfig\n", "from langgraph.graph import StateGraph, MessagesState, START\n", "from langgraph.checkpoint.memory import MemorySaver\n", "from langgraph.store.base import BaseStore\n", "\n", "\n", "model = ChatAnthropic(model=\"claude-3-5-sonnet-20240620\")\n", "\n", "\n", "# NOTE: we're passing the Store param to the node --\n", "# this is the Store we compile the graph with\n", "def call_model(state: MessagesState, config: RunnableConfig, *, store: BaseStore):\n", " user_id = config[\"configurable\"][\"user_id\"]\n", " namespace = (\"memories\", user_id)\n", " memories = store.search(namespace)\n", " info = \"\\n\".join([d.value[\"data\"] for d in memories])\n", " system_msg = f\"You are a helpful assistant talking to the user. 
User info: {info}\"\n", "\n", " # Store new memories if the user asks the model to remember\n", " last_message = state[\"messages\"][-1]\n", " if \"remember\" in last_message.content.lower():\n", " memory = \"User name is Bob\"\n", " store.put(namespace, str(uuid.uuid4()), {\"data\": memory})\n", "\n", " response = model.invoke(\n", " [{\"type\": \"system\", \"content\": system_msg}] + state[\"messages\"]\n", " )\n", " return {\"messages\": response}\n", "\n", "\n", "builder = StateGraph(MessagesState)\n", "builder.add_node(\"call_model\", call_model)\n", "builder.add_edge(START, \"call_model\")\n", "\n", "# NOTE: we're passing the store object here when compiling the graph\n", "graph = builder.compile(checkpointer=MemorySaver(), store=in_memory_store)\n", "# If you're using LangGraph Cloud or LangGraph Studio, you don't need to pass the store or checkpointer when compiling the graph, since it's done automatically." ] }, { "cell_type": "markdown", "id": "f22a4a18-67e4-4f0b-b655-a29bbe202e1c", "metadata": {}, "source": [ "
\n", "

Note

\n", "

\n", " If you're using LangGraph Cloud or LangGraph Studio, you don't need to pass store when compiling the graph, since it's done automatically.\n", "

\n", "
" ] }, { "cell_type": "markdown", "id": "552d4e33-556d-4fa5-8094-2a076bc21529", "metadata": {}, "source": [ "## Run the graph!" ] }, { "cell_type": "markdown", "id": "1842c626-6cd9-4f58-b549-58978e478098", "metadata": {}, "source": [ "Now let's specify a user ID in the config and tell the model our name:" ] }, { "cell_type": "code", "execution_count": 5, "id": "c871a073-a466-46ad-aafe-2b870831057e", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "================================\u001b[1m Human Message \u001b[0m=================================\n", "\n", "Hi! Remember: my name is Bob\n", "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "\n", "Hello Bob! It's nice to meet you. I'll remember that your name is Bob. How can I assist you today?\n" ] } ], "source": [ "config = {\"configurable\": {\"thread_id\": \"1\", \"user_id\": \"1\"}}\n", "input_message = {\"type\": \"user\", \"content\": \"Hi! Remember: my name is Bob\"}\n", "for chunk in graph.stream({\"messages\": [input_message]}, config, stream_mode=\"values\"):\n", " chunk[\"messages\"][-1].pretty_print()" ] }, { "cell_type": "code", "execution_count": 6, "id": "d862be40-1f8a-4057-81c4-b7bf073dc4c1", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "================================\u001b[1m Human Message \u001b[0m=================================\n", "\n", "what is my name?\n", "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "\n", "Your name is Bob.\n" ] } ], "source": [ "config = {\"configurable\": {\"thread_id\": \"2\", \"user_id\": \"1\"}}\n", "input_message = {\"type\": \"user\", \"content\": \"what is my name?\"}\n", "for chunk in graph.stream({\"messages\": [input_message]}, config, stream_mode=\"values\"):\n", " chunk[\"messages\"][-1].pretty_print()" ] }, { "cell_type": "markdown", "id": "80fd01ec-f135-4811-8743-daff8daea422", "metadata": {}, "source": [ "We can now inspect our in-memory store and verify that we have in fact saved the memories for the user:" ] }, { "cell_type": "code", "execution_count": 7, "id": "76cde493-89cf-4709-a339-207d2b7e9ea7", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "{'data': 'User name is Bob'}\n" ] } ], "source": [ "for memory in in_memory_store.search((\"memories\", \"1\")):\n", " print(memory.value)" ] }, { "cell_type": "markdown", "id": "23f5d7eb-af23-4131-b8fd-2a69e74e6e55", "metadata": {}, "source": [ "Let's now run the graph for another user to verify that the memories about the first user are self contained:" ] }, { "cell_type": "code", "execution_count": 8, "id": "d362350b-d730-48bd-9652-983812fd7811", "metadata": {}, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "================================\u001b[1m Human Message \u001b[0m=================================\n", "\n", "what is my name?\n", "==================================\u001b[1m Ai Message \u001b[0m==================================\n", "\n", "I apologize, but I don't have any information about your name. As an AI assistant, I don't have access to personal information about users unless it has been specifically shared in our conversation. 
If you'd like, you can tell me your name and I'll be happy to use it in our discussion.\n" ] } ], "source": [ "config = {\"configurable\": {\"thread_id\": \"3\", \"user_id\": \"2\"}}\n", "input_message = {\"type\": \"user\", \"content\": \"what is my name?\"}\n", "for chunk in graph.stream({\"messages\": [input_message]}, config, stream_mode=\"values\"):\n", " chunk[\"messages\"][-1].pretty_print()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.9" } }, "nbformat": 4, "nbformat_minor": 5 }