Memory Tools API Reference¶
Functions
- create_manage_memory_tool – Create a tool for managing persistent memories in conversations.
- create_search_memory_tool – Create a tool for searching memories stored in a LangGraph BaseStore.
create_manage_memory_tool ¶
create_manage_memory_tool(
    namespace: tuple[str, ...] | str,
    *,
    instructions: str = "Proactively call this tool when you:\n\n1. Identify a new USER preference.\n2. Receive an explicit USER request to remember something or otherwise alter your behavior.\n3. Are working and want to record important context.\n4. Identify that an existing MEMORY is incorrect or outdated.\n",
    schema: Type = str,
    actions_permitted: Optional[
        tuple[Literal["create", "update", "delete"], ...]
    ] = ("create", "update", "delete"),
    store: Optional[BaseStore] = None,
    name: str = "manage_memory",
)
Create a tool for managing persistent memories in conversations.
This function creates a tool that lets an AI assistant create, update, and delete memories that persist across conversations, helping it maintain context and user preferences between sessions.
Parameters
- instructions (str, default: "Proactively call this tool when you:\n\n1. Identify a new USER preference.\n2. Receive an explicit USER request to remember something or otherwise alter your behavior.\n3. Are working and want to record important context.\n4. Identify that an existing MEMORY is incorrect or outdated.\n") – Custom instructions for when to use the memory tool. Defaults to a predefined set of guidelines for proactive memory management.
- namespace (tuple[str, ...] | str) – The namespace structure used to organize memories in LangGraph's BaseStore. It may contain runtime placeholders such as {langgraph_user_id}, which are filled in from the run configuration; see the short sketch after this list.
- store (Optional[BaseStore], default: None) – The BaseStore to use. If not provided, the tool uses the BaseStore configured in your graph or entrypoint. Only set this if you intend to use the tool outside a LangGraph context.
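The namespace can take a few shapes. A brief illustrative sketch (the namespace values below are made-up examples, not taken from the reference):
from langmem import create_manage_memory_tool

# A single static segment passed as a plain string
profile_tool = create_manage_memory_tool(namespace="agent_memories")

# A static multi-segment namespace passed as a tuple
team_tool = create_manage_memory_tool(namespace=("agent_memories", "team_alpha"))

# A templated namespace; "{langgraph_user_id}" is filled at runtime from
# config["configurable"]["langgraph_user_id"]
user_tool = create_manage_memory_tool(
    namespace=("agent_memories", "{langgraph_user_id}")
)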
Returns
- memory_tool (Tool) – A decorated async function usable as a memory-management tool. It supports creating, updating, and deleting memories, with appropriate validation.
The generated tool's signature looks roughly like the following.
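This is a minimal sketch, inferred from the usage examples on this page (which pass content, id, and action fields); exact parameter names, order, and defaults may differ:
def manage_memory(
    content: str | None = None,  # the memory content (typed by `schema`, str by default)
    action: Literal["create", "update", "delete"] = "create",
    id: str | None = None,  # required when updating or deleting an existing memory
) -> str: ...  # returns a confirmation such as 'created memory <id>'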
Note: the tool supports both synchronous and asynchronous usage.
Namespace configuration
The namespace is configured at runtime through the config parameter, as shown in the example below.
Tip
This tool connects to the LangGraph BaseStore configured in your graph or entrypoint. It will not work if no store is provided.
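If you need the tool outside a graph or entrypoint, pass the store parameter explicitly. A minimal sketch, assuming a static namespace and an InMemoryStore (illustrative only):
from langmem import create_manage_memory_tool
from langgraph.store.memory import InMemoryStore

standalone_store = InMemoryStore()
standalone_tool = create_manage_memory_tool(
    namespace=("standalone_memories",),
    # Explicit store, since no graph-configured BaseStore is available here
    store=standalone_store,
)
print(standalone_tool.invoke({"content": "Prefers dark mode", "action": "create"}))
# e.g. 'created memory <generated-id>'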
Example
from langmem import create_manage_memory_tool
from langgraph.func import entrypoint
from langgraph.store.memory import InMemoryStore

memory_tool = create_manage_memory_tool(
    # All memories saved to this tool will live within this namespace
    # The brackets will be populated at runtime by the configurable values
    namespace=("project_memories", "{langgraph_user_id}"),
)
store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": "openai:text-embedding-3-small",
    }
)

@entrypoint(store=store)
async def workflow(state: dict, *, previous=None):
    # Other work....
    result = await memory_tool.ainvoke(state)
    print(result)
    return entrypoint.final(value=result, save={})

config = {
    "configurable": {
        # This value will be formatted into the namespace you configured above ("project_memories", "{langgraph_user_id}")
        "langgraph_user_id": "123e4567-e89b-12d3-a456-426614174000"
    }
}

# Create a new memory
await workflow.ainvoke(
    {"content": "Team prefers to use Python for backend development"},
    config=config,
)
# Output: 'created memory 123e4567-e89b-12d3-a456-426614174000'

# Update an existing memory
result = await workflow.ainvoke(
    {
        "id": "123e4567-e89b-12d3-a456-426614174000",
        "content": "Team uses Python for backend and TypeScript for frontend",
        "action": "update",
    },
    config=config,
)
print(result)
# Output: 'updated memory 123e4567-e89b-12d3-a456-426614174000'
You can use it with LangGraph's prebuilt create_react_agent:
from langgraph.prebuilt import create_react_agent
from langgraph.config import get_config, get_store

def prompt(state):
    config = get_config()
    memories = get_store().search(
        # Search within the same namespace as the one
        # we've configured for the agent
        ("memories", config["configurable"]["langgraph_user_id"]),
    )
    system_prompt = f"""You are a helpful assistant.
<memories>
{memories}
</memories>
"""
    system_message = {"role": "system", "content": system_prompt}
    return [system_message, *state["messages"]]

agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest",
    prompt=prompt,
    tools=[
        create_manage_memory_tool(namespace=("memories", "{langgraph_user_id}")),
    ],
    store=store,
)

agent.invoke(
    {"messages": [{"role": "user", "content": "We've decided we like golang more than python for backend work"}]},
    config=config,
)
If you want to customize the expected schema of the memories, you can do so by providing the schema parameter:
from pydantic import BaseModel

class UserProfile(BaseModel):
    name: str
    age: int | None = None
    recent_memories: list[str] = []
    preferences: dict | None = None

memory_tool = create_manage_memory_tool(
    # All memories saved to this tool will live within this namespace
    # The brackets will be populated at runtime by the configurable values
    namespace=("memories", "{langgraph_user_id}", "user_profile"),
    schema=UserProfile,
    actions_permitted=["create", "update"],
    instructions="Update the existing user profile (or create a new one if it doesn't exist) based on the shared information.",
)
store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": "openai:text-embedding-3-small",
    }
)

agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest",
    prompt=prompt,
    tools=[
        memory_tool,
    ],
    store=store,
)
result = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "I'm 60 years old and have been programming for 5 days.",
            }
        ]
    },
    config=config,
)
result["messages"][-1].pretty_print()
# I've created a memory with your age of 60 and noted that you started programming 5 days ago...

result = agent.invoke(
    {
        "messages": [
            {"role": "user", "content": "Just had by 61'st birthday today!!"}
        ]
    },
    config=config,
)
result["messages"][-1].pretty_print()
# Happy 61st birthday! 🎂 I've updated your profile to reflect your new age. Is there anything else I can help you with?

print(
    store.search(
        ("memories", "123e4567-e89b-12d3-a456-426614174000", "user_profile")
    )
)
# [Item(
#     namespace=['memories', '123e4567-e89b-12d3-a456-426614174000', 'user_profile'],
#     key='1528553b-0900-4363-8dc2-c6b72844096e',
#     value={
#         'content': UserProfile(
#             name='User',
#             age=61,
#             recent_memories=['Started programming 5 days ago'],
#             preferences={'programming_experience': '5 days'}
#         )
#     },
#     created_at='2025-02-07T01:12:14.383762+00:00',
#     updated_at='2025-02-07T01:12:14.383763+00:00',
#     score=None
# )]
If you want to restrict which actions the tool may perform, you can do so with the actions_permitted parameter, as demonstrated above with actions_permitted=["create", "update"].
create_search_memory_tool ¶
create_search_memory_tool(
    namespace: tuple[str, ...] | str,
    *,
    instructions: str = _MEMORY_SEARCH_INSTRUCTIONS,
    store: BaseStore | None = None,
    response_format: Literal[
        "content", "content_and_artifact"
    ] = "content",
    name: str = "search_memory",
)
Create a tool for searching memories stored in a LangGraph BaseStore.
This function creates a tool that lets an AI assistant search previously stored memories using semantic or exact matching. The tool returns both the memory contents and the raw memory objects for advanced use.
Parameters
- instructions (str, default: _MEMORY_SEARCH_INSTRUCTIONS) – Custom instructions for when to use the search tool. Defaults to a predefined set of guidelines.
- namespace (tuple[str, ...] | str) – The namespace structure used to organize memories in LangGraph's BaseStore. It may contain runtime placeholders such as {langgraph_user_id}. See Memory Namespaces.
- store (BaseStore | None, default: None) – The BaseStore to search. If not provided, the tool uses the BaseStore configured in your graph or entrypoint. Only set this if you intend to use the tool outside a LangGraph context.
Returns
- search_tool (Tool) – A decorated function usable as a memory-search tool. It returns both the serialized memories and the raw memory objects.
The generated tool has the following signature:
def search_memory(
    query: str,  # Search query to match against memories
    limit: int = 10,  # Maximum number of results to return
    offset: int = 0,  # Number of results to skip
    filter: dict | None = None,  # Additional filter criteria
) -> tuple[list[dict], list]: ...  # Returns (serialized memories, raw memories)
Note: the tool supports both synchronous and asynchronous usage.
Tip
This tool connects to the LangGraph BaseStore configured in your graph or entrypoint. It will not work if no store is provided.
Example
from langmem import create_search_memory_tool
from langgraph.func import entrypoint
from langgraph.store.memory import InMemoryStore

search_tool = create_search_memory_tool(
    namespace=("project_memories", "{langgraph_user_id}"),
)
store = InMemoryStore(
    index={
        "dims": 1536,
        "embed": "openai:text-embedding-3-small",
    }
)

@entrypoint(store=store)
async def workflow(state: dict, *, previous=None):
    # Search for memories about Python
    memories, _ = await search_tool.ainvoke(
        {"query": "Python preferences", "limit": 5}
    )
    print(memories)
    return entrypoint.final(value=memories, save={})
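Both tools can also be handed to the prebuilt agent shown earlier, so the model can store and retrieve memories in the same namespace. A brief sketch, reusing the store and config objects defined in the examples above (illustrative wiring, not part of the reference):
from langgraph.prebuilt import create_react_agent
from langmem import create_manage_memory_tool, create_search_memory_tool

namespace = ("memories", "{langgraph_user_id}")
agent = create_react_agent(
    "anthropic:claude-3-5-sonnet-latest",
    tools=[
        create_manage_memory_tool(namespace=namespace),
        create_search_memory_tool(namespace=namespace),
    ],
    store=store,  # the InMemoryStore configured above
)
agent.invoke(
    {"messages": [{"role": "user", "content": "What do you remember about my backend preferences?"}]},
    config=config,  # supplies the langgraph_user_id used in the namespace template
)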