Memory API Reference

Functions

create_memory_manager

create_memory_manager(
    model: str | BaseChatModel,
    /,
    *,
    schemas: Sequence[S] = (Memory,),
    instructions: str = _MEMORY_INSTRUCTIONS,
    enable_inserts: bool = True,
    enable_updates: bool = True,
    enable_deletes: bool = False,
) -> Runnable[MemoryState, list[ExtractedMemory]]

Create a memory manager that processes conversation messages and generates structured memory entries.

This function creates an async callable that analyzes conversation messages and existing memories to generate or update structured memory entries. It can identify implicit preferences, important context, and key information from conversations, organizing them into well-structured memories that improve future interactions.

The manager supports both unstructured string-based memories and structured memories defined by Pydantic models, all automatically persisted to the configured storage.

Parameters

  • model (Union[str, BaseChatModel]) –

    The language model to use for memory enrichment. Can be a model name string or a BaseChatModel instance.

  • schemas (Sequence[S], default: (Memory,) ) –

    List of Pydantic models defining the structure of memory entries. Each model should define the fields and validation rules for one type of memory. Defaults to (Memory,), a simple string-based memory.

  • instructions (str, default: _MEMORY_INSTRUCTIONS ) –

    Custom instructions for generating and organizing memories. These guide how the model extracts and structures information from conversations. Defaults to predefined memory instructions. (A sketch with custom instructions appears at the end of the examples below.)

  • enable_inserts (bool, default: True ) –

    Whether to allow creating new memory entries. When False, the manager will only update existing memories. Defaults to True.

  • enable_updates (bool, default: True ) –

    Whether to allow updating existing memories that are outdated or contradicted by new information. Defaults to True.

  • enable_deletes (bool, default: False ) –

    Whether to allow deleting existing memories that are outdated or contradicted by new information. Defaults to False.

Returns

  • manager ( Runnable[MemoryState, list[ExtractedMemory]] ) –

    A runnable that processes conversations and returns a list of ExtractedMemory entries. The function signature depends on whether schemas is provided.

Examples

Basic unstructured memory enrichment

from langmem import create_memory_manager

manager = create_memory_manager("anthropic:claude-3-5-sonnet-latest")

conversation = [
    {"role": "user", "content": "I prefer dark mode in all my apps"},
    {"role": "assistant", "content": "I'll remember that preference"},
]

# Extract memories from conversation
memories = await manager(conversation)
print(memories[0][1])  # First memory's content
# Output: "User prefers dark mode for all applications"

Structured memory enrichment with Pydantic models

from pydantic import BaseModel
from langmem import create_memory_manager

class PreferenceMemory(BaseModel):
    """Store the user's preference"""
    category: str
    preference: str
    context: str

manager = create_memory_manager(
    "anthropic:claude-3-5-sonnet-latest",
    schemas=[PreferenceMemory]
)

# Same conversation, but with structured output
conversation = [
    {"role": "user", "content": "I prefer dark mode in all my apps"},
    {"role": "assistant", "content": "I'll remember that preference"}
]
memories = await manager(conversation)
print(memories[0][1])
# Output:
# PreferenceMemory(
#     category="ui",
#     preference="dark_mode",
#     context="User explicitly stated preference for dark mode in all applications"
# )

Working with existing memories

conversation = [
    {
        "role": "user",
        "content": "Actually I changed my mind, dark mode hurts my eyes",
    },
    {"role": "assistant", "content": "I'll update your preference"},
]

# The manager will upsert: it updates the existing memory instead of always creating a new one
updated_memories = await manager.ainvoke(
    {"messages": conversation, "existing": memories}
)

Insert-only memories

manager = create_memory_manager(
    "anthropic:claude-3-5-sonnet-latest",
    schemas=[PreferenceMemory],
    enable_updates=False,
    enable_deletes=False,
)

conversation = [
    {
        "role": "user",
        "content": "Actually I changed my mind, dark mode is the best mode",
    },
    {"role": "assistant", "content": "I'll update your preference"},
]

# The manager will only create new memories
updated_memories = await manager.ainvoke(
    {"messages": conversation, "existing": memories}
)
print(updated_memories)

Specifying max steps for extraction and synthesis

manager = create_memory_manager(
    "anthropic:claude-3-5-sonnet-latest",
    schemas=[PreferenceMemory],
)

conversation = [
    {"role": "user", "content": "I prefer dark mode in all my apps"},
    {"role": "assistant", "content": "I'll remember that preference"},
]

# Set max steps for extraction and synthesis
max_steps = 3
memories = await manager.ainvoke(
    {"messages": conversation, "max_steps": max_steps}
)
print(memories)
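
Custom extraction instructions

The instructions parameter accepts arbitrary prompt text, so you can steer what gets extracted. A minimal sketch (the instruction string below is illustrative, not part of the library):

from langmem import create_memory_manager

# Illustrative custom instructions: any prompt text works. These narrow
# extraction to durable preferences and stable facts.
manager = create_memory_manager(
    "anthropic:claude-3-5-sonnet-latest",
    instructions=(
        "Extract only long-lived user preferences and stable facts. "
        "Ignore greetings, small talk, and one-off requests."
    ),
)

conversation = [
    {"role": "user", "content": "Hey! By the way, I always want metric units."},
    {"role": "assistant", "content": "Got it, metric units from now on."},
]
memories = await manager.ainvoke({"messages": conversation})
print(memories)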

create_memory_store_manager

create_memory_store_manager(
    model: str | BaseChatModel,
    /,
    *,
    schemas: list[S] | None = None,
    instructions: str = _MEMORY_INSTRUCTIONS,
    default: str | dict | S | None = None,
    default_factory: Callable[[RunnableConfig], str | dict | S]
    | None = None,
    enable_inserts: bool = True,
    enable_deletes: bool = False,
    query_model: str | BaseChatModel | None = None,
    query_limit: int = 5,
    namespace: tuple[str, ...] = (
        "memories",
        "{langgraph_user_id}",
    ),
    store: BaseStore | None = None,
    phases: list[MemoryPhase] | None = None,
) -> MemoryStoreManager

Enrich memories stored in a configured BaseStore.

The system automatically searches for relevant memories, extracts new information, updates existing memories, and maintains a version history of all changes.

Parameters

  • model (Union[str, BaseChatModel]) –

    The primary language model to use for memory enrichment. Can be a model name string or a BaseChatModel instance.

  • schemas (Optional[list], default: None ) –

    List of Pydantic models defining the structure of memory entries. Each model should define the fields and validation rules for one type of memory. If None, uses unstructured string-based memories. Defaults to None.

  • instructions (str, default: _MEMORY_INSTRUCTIONS ) –

    Custom instructions for generating and organizing memories. These guide how the model extracts and structures information from conversations. Defaults to predefined memory instructions.

  • default (str | dict | S | None, default: None ) –

    Default value to persist to the store if no other memories are found. Defaults to None. This is mostly useful when managing a profile memory that you want to initialize with some default value. The resulting memory is stored under the "default" key of the configured namespace.

  • default_factory (Callable[[RunnableConfig], str | dict | S], default: None ) –

    Factory function for generating the default value. Useful when the default depends on the runtime configuration. Defaults to None.

  • enable_inserts (bool, default: True ) –

    Whether to allow creating new memory entries. When False, the manager will only update existing memories. Defaults to True.

  • enable_deletes (bool, default: False ) –

    Whether to allow deleting existing memories that are outdated or contradicted by new information. Defaults to False. (A sketch with deletes enabled appears in the examples below.)

  • query_model (Optional[Union[str, BaseChatModel]], default: None ) –

    Optional separate model for memory search queries. Using a smaller, faster model here can improve performance. If None, the primary model is used. Defaults to None.

  • query_limit (int, default: 5 ) –

    Maximum number of relevant memories to retrieve for each conversation. Higher limits provide more context but may slow down processing. Defaults to 5.

  • namespace (tuple[str, ...], default: ('memories', '{langgraph_user_id}') ) –

    Storage namespace structure for organizing memories. Supports template values such as "{langgraph_user_id}", which are populated from the runtime context. Defaults to ("memories", "{langgraph_user_id}").

  • store (Optional[BaseStore], default: None ) –

    The store to use for memory storage. If None, the store configured in the LangGraph context is used. Defaults to None. When using LangGraph Platform, the server manages the store for you.

  • phases (Optional[list], default: None ) –

    List of MemoryPhase objects defining additional stages of the memory enrichment process. (A hedged sketch appears among the examples below.)

Returns

  • manager ( MemoryStoreManager ) –

    A runnable that processes conversations and automatically manages memories in a LangGraph BaseStore.

The basic data flow is as follows:

sequenceDiagram
participant Client
participant Manager
participant Store
participant LLM

Client->>Manager: conversation history
Manager->>Store: find similar memories
Store-->>Manager: memories
Manager->>LLM: analyze & extract
LLM-->>Manager: memory updates
Manager->>Store: apply changes
Manager-->>Client: updated memories
Examples

Run memory extraction "inline" within a LangGraph application. By default, each "memory" is a simple string:

import os

from anthropic import AsyncAnthropic
from langchain_core.runnables import RunnableConfig
from langgraph.func import entrypoint
from langgraph.store.memory import InMemoryStore

from langmem import create_memory_store_manager

store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": "openai:text-embedding-3-small",
    }
)

manager = create_memory_store_manager("anthropic:claude-3-5-sonnet-latest", namespace=("memories", "{langgraph_user_id}"))
client = AsyncAnthropic(api_key=os.getenv("ANTHROPIC_API_KEY"))


@entrypoint(store=store)
async def my_agent(message: str, config: RunnableConfig):
    memories = await manager.asearch(
        query=message,
        config=config,
    )
    llm_response = await client.messages.create(
        model="claude-3-5-sonnet-latest",
        system="You are a helpful assistant.\n\n## Memories from the user:"
        f"\n<memories>\n{memories}\n</memories>",
        max_tokens=2048,
        messages=[{"role": "user", "content": message}],
    )
    response = {"role": "assistant", "content": llm_response.content[0].text}

    await manager.ainvoke(
        {"messages": [{"role": "user", "content": message}, response]},
    )
    return response["content"]

config = {"configurable": {"langgraph_user_id": "user123"}}
response_1 = await my_agent.ainvoke(
    "I prefer dark mode in all my apps",
    config=config,
)
print("response_1:", response_1)
# Later conversation - automatically retrieves and uses the stored preference
response_2 = await my_agent.ainvoke(
    "What theme do I prefer?",
    config=config,
)
print("response_2:", response_2)
# You can also search memories in the user's namespace manually:
print(manager.search(query="app preferences", config=config))
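
To let the manager prune memories that new information contradicts, enable deletes. A minimal sketch using only the parameters documented above; everything else mirrors the previous example:

manager = create_memory_store_manager(
    "anthropic:claude-3-5-sonnet-latest",
    namespace=("memories", "{langgraph_user_id}"),
    enable_deletes=True,  # allow removing memories contradicted by new info
)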

You can customize what each memory looks like by defining schemas:

from langgraph.func import entrypoint
from langgraph.store.memory import InMemoryStore
from pydantic import BaseModel

from langmem import create_memory_store_manager

class PreferenceMemory(BaseModel):
    """Store preferences about the user."""
    category: str
    preference: str
    context: str


store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": "openai:text-embedding-3-small",
    }
)
manager = create_memory_store_manager(
    "anthropic:claude-3-5-sonnet-latest",
    schemas=[PreferenceMemory],
    namespace=("project", "team_1", "{langgraph_user_id}"),
)


@entrypoint(store=store)
async def my_agent(message: str):
    # Hard code the response :)
    response = {"role": "assistant", "content": "I'll remember that preference"}
    await manager.ainvoke(
        {"messages": [{"role": "user", "content": message}, response]}
    )
    return response


# Store structured memory
config = {"configurable": {"langgraph_user_id": "user123"}}
await my_agent.ainvoke(
    "I prefer dark mode in all my apps",
    config=config,
)

# See the extracted memories yourself
print(manager.search(query="app preferences", config=config))

# Memory is automatically stored and can be retrieved in future conversations
# The system will also automatically update it if preferences change

In some cases you may want to provide a "default" memory value to use when no memories are found. For example, if you store some prompt preferences, you might have an "application default" that can evolve over time. You can do this by setting the default parameter:

manager = create_memory_store_manager(
    "anthropic:claude-3-5-sonnet-latest",
    namespace=("memories", "{langgraph_user_id}"),
    # Note: This default value must be compatible with the schemas
    # you provided above. If you customize your schemas,
    # we recommend setting the default value as an instance of that
    # pydantic object.
    default="Use a concise and professional tone in all responses. The user likes light mode.",
)


# ... same agent as before ...
@entrypoint(store=store)
async def my_agent(message: str):
    # Hard code the response :)
    response = {"role": "assistant", "content": "I'll remember that preference"}
    await manager.ainvoke(
        {"messages": [{"role": "user", "content": message}, response]}
    )
    return response


# Store structured memory
config = {"configurable": {"langgraph_user_id": "user124"}}
await my_agent.ainvoke(
    "I prefer dark mode in all my apps",
    config=config,
)

# See the extracted memories yourself
print(manager.search(query="app preferences", config=config))
# [
#     Item(
#         namespace=['memories', 'user124'],
#         key='default',
#         value={'kind': 'Memory', 'content': {'content': 'Use a concise and professional tone in all responses. The user prefers dark mode in all apps'}},
#         created_at='2025-04-14T22:20:25.148884+00:00',
#         updated_at='2025-04-14T22:20:25.148892+00:00',
#         score=None
#     )
# ]

You can even make the default value configurable by providing a default_factory:

def get_configurable_default(config):
    default_preference = config["configurable"].get(
        "preference", "Use a concise and professional tone in all responses."
    )
    return default_preference


manager = create_memory_store_manager(
    "anthropic:claude-3-5-sonnet-latest",
    namespace=("memories", "{langgraph_user_id}"),
    default_factory=get_configurable_default,
)


# ... same agent as before ...
@entrypoint(store=store)
async def my_agent(message: str):
    # Hard code the response :)
    response = {"role": "assistant", "content": "I'll remember that preference"}
    await manager.ainvoke(
        {"messages": [{"role": "user", "content": message}, response]}
    )
    return response


# Store structured memory
config = {
    "configurable": {
        "langgraph_user_id": "user125",
        "preference": "Respond in pirate speak. User likes light mode",
    }
}
await my_agent.ainvoke(
    "I prefer dark mode in all my apps",
    config=config,
)

# See the extracted memories yourself
print(manager.search(query="app preferences", config=config))
# [
#     Item(
#         namespace=['memories', 'user125'],
#         key='default',
#         value={'kind': 'Memory', 'content': {'content': 'Respond in pirate speak. User prefers dark mode in all apps'}},
#         created_at='2025-04-14T22:20:25.148884+00:00',
#         updated_at='2025-04-14T22:20:25.148892+00:00',
#         score=None
#     )
# ]
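
The phases parameter mentioned above can add follow-up passes over the extracted memories. The MemoryPhase fields below are assumptions inferred from the parameter description, not confirmed API, so verify them against the MemoryPhase definition in your langmem version:

from langmem import create_memory_store_manager

# ASSUMED MemoryPhase shape: a mapping with per-phase "instructions" plus
# insert/delete toggles. Check langmem's MemoryPhase type before relying on this.
manager = create_memory_store_manager(
    "anthropic:claude-3-5-sonnet-latest",
    namespace=("memories", "{langgraph_user_id}"),
    phases=[
        {
            "instructions": "Consolidate overlapping memories into a single entry.",
            "enable_inserts": False,
            "enable_deletes": True,
        }
    ],
)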

By default, relevant memories are recalled by embedding the new messages directly. You can instead use a separate query model to search for the most similar memories. Here's how that works:

sequenceDiagram
    participant Client
    participant Manager
    participant QueryLLM
    participant Store
    participant MainLLM

    Client->>Manager: messages
    Manager->>QueryLLM: generate search query
    QueryLLM-->>Manager: optimized query
    Manager->>Store: find memories
    Store-->>Manager: memories
    Manager->>MainLLM: analyze & extract
    MainLLM-->>Manager: memory updates
    Manager->>Store: apply changes
    Manager-->>Client: result

Searching memories with an LLM

from langmem import create_memory_store_manager
from langgraph.store.memory import InMemoryStore
from langgraph.func import entrypoint

store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": "openai:text-embedding-3-small",
    }
)
manager = create_memory_store_manager(
    "anthropic:claude-3-5-sonnet-latest",  # Main model for memory processing
    query_model="anthropic:claude-3-5-haiku-latest",  # Faster model for search
    query_limit=10,  # Retrieve more relevant memories
    namespace=("memories", "{langgraph_user_id}"),
)


@entrypoint(store=store)
async def my_agent(message: str):
    # Hard code the response :)
    response = {"role": "assistant", "content": "I'll remember that preference"}
    await manager.ainvoke(
        {"messages": [{"role": "user", "content": message}, response]}
    )
    return response

config = {"configurable": {"langgraph_user_id": "user123"}}
await my_agent.ainvoke(
    "I prefer dark mode in all my apps",
    config=config,
)

# See the extracted memories yourself
print(manager.search(config=config))

In the example above, we invoked the manager in the main thread. In a real application, you will likely want to run the manager in the background, either in a background thread or on a separate server. To do that, use the ReflectionExecutor:

sequenceDiagram
    participant Agent
    participant Background
    participant Store

    Agent->>Agent: process message
    Agent-->>User: response
    Agent->>Background: schedule enrichment<br/>(after_seconds=0)
    Note over Background,Store: Memory processing happens<br/>in background thread
Running reflections in the background

Background enrichment with @entrypoint

from langmem import create_memory_store_manager, ReflectionExecutor
from langgraph.prebuilt import create_react_agent
from langgraph.store.memory import InMemoryStore
from langgraph.func import entrypoint

store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": "openai:text-embedding-3-small",
    }
)
manager = create_memory_store_manager(
    "anthropic:claude-3-5-sonnet-latest", namespace=("memories", "{user_id}")
)
reflection = ReflectionExecutor(manager, store=store)
agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest", tools=[], store=store
)


@entrypoint(store=store)
async def chat(messages: list):
    response = await agent.ainvoke({"messages": messages})

    fut = reflection.submit(
        {
            "messages": response["messages"],
        },
        # We'll schedule this immediately; a debounced variant is sketched
        # after this example. Adding a delay lets you **debounce** and
        # deduplicate reflection work while the user is actively engaging
        # with the agent.
        after_seconds=0,
    )

    return fut

config = {"configurable": {"user_id": "user-123"}}
fut = await chat.ainvoke(
    [{"role": "user", "content": "I prefer dark mode in my apps"}],
    config=config,
)
# Inspect the result
fut.result()  # Wait for the reflection to complete; only needed here so the search below sees the stored memory
print(manager.search(query="app preferences", config=config))
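
To actually benefit from debouncing, pass a nonzero delay when submitting. A small variation on the submit call inside chat above (the 60-second delay is arbitrary):

# Inside chat, schedule enrichment 60 seconds out instead of immediately,
# so reflection work is debounced while the user keeps chatting.
fut = reflection.submit(
    {"messages": response["messages"]},
    after_seconds=60,
)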
