Prompt Optimization API Reference

Functions

create_prompt_optimizer

create_prompt_optimizer(
    model: str | BaseChatModel,
    /,
    *,
    kind: KINDS = "gradient",
    config: Union[
        GradientOptimizerConfig,
        MetapromptOptimizerConfig,
        None,
    ] = None,
) -> Runnable[OptimizerInput, str]

Create a prompt optimizer that improves prompt effectiveness.

This function creates an optimizer that analyzes and improves prompts for better language model performance. It supports multiple optimization strategies that iteratively enhance prompt quality and effectiveness.

Parameters

  • model (Union[str, BaseChatModel]) –

    The language model to use for optimization. Can be a model name string or a BaseChatModel instance.

  • kind (Literal[gradient, prompt_memory, metaprompt], default: 'gradient' ) –

    The optimization strategy to use. Each strategy has different strengths:

    • gradient: separates the concerns of discovering improvements and recommending updates
    • prompt_memory: a simple single-shot metaprompt
    • metaprompt: supports reflection, but each step is a single LLM call
  • config (Optional[OptimizerConfig], default: None ) –

    Configuration options for the optimizer. The type depends on the chosen strategy:

    • GradientOptimizerConfig for kind="gradient"
    • PromptMemoryConfig for kind="prompt_memory"
    • MetapromptOptimizerConfig for kind="metaprompt"
    

    Defaults to None.

Returns

  • optimizer ( Runnable[OptimizerInput, str] ) –

    A callable that takes conversation trajectories and/or prompts and returns an optimized version.

Optimization Strategies

1. Gradient Optimizer
sequenceDiagram
    participant U as User
    participant O as Optimizer
    participant R as Reflection
    participant U2 as Update

    U->>O: Prompt + Feedback
    loop For min_steps to max_steps
        O->>R: Think/Critique Current State
        R-->>O: Proposed Improvements
        O->>U2: Apply Update
        U2-->>O: Updated Prompt
    end
    O->>U: Final Optimized Prompt

The gradient optimizer uses reflection to propose improvements:

  1. Analyzes the prompt and feedback through reflection cycles
  2. Proposes specific improvements
  3. Applies the update in a single step

Configuration (GradientOptimizerConfig); a usage sketch follows this list

  • gradient_prompt: custom prompt for predicting what needs improvement
  • metaprompt: custom prompt for applying the improvements
  • max_reflection_steps: maximum number of reflection iterations (default: 3)
  • min_reflection_steps: minimum number of reflection iterations (default: 1)
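
For illustration only (this sketch is not part of the original reference), these fields can be supplied as a plain config dict, mirroring how the metaprompt example later on this page passes its configuration; the step counts below are arbitrary sample values, not defaults.

from langmem import create_prompt_optimizer

# Sketch: configure the gradient strategy via a config dict.
# The step counts are illustrative values, not library defaults.
optimizer = create_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest",
    kind="gradient",
    config={
        "max_reflection_steps": 5,  # upper bound on reflection iterations
        "min_reflection_steps": 2,  # always reflect at least twice
    },
)
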
2. Metaprompt Optimizer
sequenceDiagram
    participant U as User
    participant M as MetaOptimizer
    participant A as Analysis
    participant U2 as Update

    U->>M: Prompt + Examples
    M->>A: Analyze Examples
    A-->>M: Proposed Update
    M->>U2: Apply Update
    U2-->>U: Enhanced Prompt

Uses meta-learning to propose updates directly:

  1. Analyzes examples to understand patterns
  2. Proposes prompt updates directly
  3. Applies the update in one step

Configuration (MetapromptOptimizerConfig)

  • metaprompt: custom instructions for how to update the prompt
  • max_reflection_steps: maximum number of meta-learning steps (default: 3)
  • min_reflection_steps: minimum number of meta-learning steps (default: 1)

3. Prompt Memory Optimizer
sequenceDiagram
    participant U as User
    participant P as PromptMemory
    participant M as Memory

    U->>P: Prompt + History
    P->>M: Extract Patterns
    M-->>P: Success Patterns
    P->>U: Updated Prompt

Learns from conversation history:

  1. Extracts successful patterns from past interactions
  2. Identifies areas for improvement from feedback
  3. Applies learned patterns to new prompts

No additional configuration required.

Examples

Basic prompt optimization

from langmem import create_prompt_optimizer

optimizer = create_prompt_optimizer("anthropic:claude-3-5-sonnet-latest")

# Example conversation with feedback
conversation = [
    {"role": "user", "content": "Tell me about the solar system"},
    {"role": "assistant", "content": "The solar system consists of..."},
]
feedback = {"clarity": "needs more structure"}

# Use conversation history to improve the prompt
trajectories = [(conversation, feedback)]
better_prompt = await optimizer.ainvoke(
    {"trajectories": trajectories, "prompt": "You are an astronomy expert"}
)
print(better_prompt)
# Output: 'Provide a comprehensive overview of the solar system...'

Optimizing with conversation feedback

from langmem import create_prompt_optimizer

optimizer = create_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest", kind="prompt_memory"
)

# Conversation with feedback about what could be improved
conversation = [
    {"role": "user", "content": "How do I write a bash script?"},
    {"role": "assistant", "content": "Let me explain bash scripting..."},
]
feedback = "Response should include a code example"

# Use the conversation and feedback to improve the prompt
trajectories = [(conversation, {"feedback": feedback})]
better_prompt = await optimizer(trajectories, "You are a coding assistant")
print(better_prompt)
# Output: 'You are a coding assistant that always includes...'

Metaprompt optimization for complex tasks

from langmem import create_prompt_optimizer

optimizer = create_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest",
    kind="metaprompt",
    config={"max_reflection_steps": 3, "min_reflection_steps": 1},
)

# Complex conversation that needs better structure
conversation = [
    {"role": "user", "content": "Explain quantum computing"},
    {"role": "assistant", "content": "Quantum computing uses..."},
]
feedback = "Need better organization and concrete examples"

# Optimize with meta-learning
trajectories = [(conversation, feedback)]
improved_prompt = await optimizer(
    trajectories, "You are a quantum computing expert"
)

Performance Considerations

Each strategy has a different LLM call pattern:

  • prompt_memory: 1 LLM call total
    • Fastest, since it only requires a single pass
  • metaprompt: 1-5 LLM calls (configurable)
    • Each step is one LLM call
    • Default range: minimum 2, maximum 5 reflection steps
  • gradient: 2-10 LLM calls (configurable)
    • Each step requires 2 LLM calls (think + critique)
    • Default range: minimum 2, maximum 5 reflection steps

Strategy Selection

Choose based on your needs:

  1. Prompt Memory: the simplest strategy
    • Limited ability to learn from complex patterns
  2. Metaprompt: balances speed and improvement
    • Moderate cost (2-5 LLM calls)
  3. Gradient: most thorough but most expensive
    • Highest cost (4-10 LLM calls)
    • Uses separation of concerns to extract feedback from more conversational contexts

create_multi_prompt_optimizer

create_multi_prompt_optimizer(
    model: str | BaseChatModel,
    /,
    *,
    kind: Literal[
        "gradient", "prompt_memory", "metaprompt"
    ] = "gradient",
    config: Optional[dict] = None,
) -> Runnable[MultiPromptOptimizerInput, list[Prompt]]

Create a multi-prompt optimizer that improves the effectiveness of multiple prompts.

This function creates an optimizer that analyzes and improves multiple prompts at once using the same optimization strategy. Each prompt is optimized with the selected strategy (see create_prompt_optimizer for strategy details).

Parameters

  • model (Union[str, BaseChatModel]) –

    The language model to use for optimization. Can be a model name string or a BaseChatModel instance.

  • kind (Literal[gradient, prompt_memory, metaprompt], default: 'gradient' ) –

    The optimization strategy to use. Each strategy has different strengths:

    • gradient: iteratively improves via reflection
    • prompt_memory: uses past successful prompts
    • metaprompt: learns optimal patterns via meta-learning

    Defaults to "gradient".

  • config (Optional[OptimizerConfig], default: None ) –

    Configuration options for the optimizer. The type depends on the chosen strategy:

    • GradientOptimizerConfig for kind="gradient"
    • PromptMemoryConfig for kind="prompt_memory"
    • MetapromptOptimizerConfig for kind="metaprompt"

    Defaults to None.

Returns

  • optimizer ( Runnable[MultiPromptOptimizerInput, list[Prompt]] ) –

    A callable that takes conversation trajectories and a list of prompts, and returns optimized versions of those prompts.

sequenceDiagram
    participant U as User
    participant M as Multi-prompt Optimizer
    participant C as Credit Assigner
    participant O as Single-prompt Optimizer
    participant P as Prompts

    U->>M: Annotated Trajectories + Prompts
    activate M
    Note over M: Using pre-initialized<br/>single-prompt optimizer

    M->>C: Analyze trajectories
    activate C
    Note over C: Determine which prompts<br/>need improvement
    C-->>M: Credit assignment results
    deactivate C

    loop For each prompt needing update
        M->>O: Optimize prompt
        activate O
        O->>P: Apply optimization strategy
        Note over O,P: Gradient/Memory/Meta<br/>optimization
        P-->>O: Optimized prompt
        O-->>M: Return result
        deactivate O
    end

    M->>U: Return optimized prompts
    deactivate M

The system optimizer

Examples

Basic prompt optimization

from langmem import create_multi_prompt_optimizer

optimizer = create_multi_prompt_optimizer("anthropic:claude-3-5-sonnet-latest")

# Example conversation with feedback
conversation = [
    {"role": "user", "content": "Tell me about the solar system"},
    {"role": "assistant", "content": "The solar system consists of..."},
]
feedback = {"clarity": "needs more structure"}

# Use conversation history to improve the prompts
trajectories = [(conversation, feedback)]
prompts = [
    {"name": "research", "prompt": "Research the given topic thoroughly"},
    {"name": "summarize", "prompt": "Summarize the research findings"},
]
better_prompts = await optimizer.ainvoke(
    {"trajectories": trajectories, "prompts": prompts}
)
print(better_prompts)

Optimizing with conversation feedback

from langmem import create_multi_prompt_optimizer

optimizer = create_multi_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest", kind="prompt_memory"
)

# Conversation with feedback about what could be improved
conversation = [
    {"role": "user", "content": "How do I write a bash script?"},
    {"role": "assistant", "content": "Let me explain bash scripting..."},
]
feedback = "Response should include a code example"

# Use the conversation and feedback to improve the prompts
trajectories = [(conversation, {"feedback": feedback})]
prompts = [
    {"name": "explain", "prompt": "Explain the concept"},
    {"name": "example", "prompt": "Provide a practical example"},
]
better_prompts = await optimizer(trajectories, prompts)

Controlling the maximum number of reflection steps

from langmem import create_multi_prompt_optimizer

optimizer = create_multi_prompt_optimizer(
    "anthropic:claude-3-5-sonnet-latest",
    kind="metaprompt",
    config={"max_reflection_steps": 3, "min_reflection_steps": 1},
)

# Complex conversation that needs better structure
conversation = [
    {"role": "user", "content": "Explain quantum computing"},
    {"role": "assistant", "content": "Quantum computing uses..."},
]
# Explicit feedback is optional
feedback = None

# Optimize with meta-learning
trajectories = [(conversation, feedback)]
prompts = [
    {"name": "concept", "prompt": "Explain quantum concepts"},
    {"name": "application", "prompt": "Show practical applications"},
    {"name": "example", "prompt": "Give concrete examples"},
]
improved_prompts = await optimizer(trajectories, prompts)

Classes

  • Prompt

    TypedDict for structured prompt management and optimization.

  • OptimizerInput

    Input for single-prompt optimization.

  • MultiPromptOptimizerInput

    Input for optimizing multiple prompts together while preserving consistency.

  • AnnotatedTrajectory

    Conversation history (a list of messages) with optional feedback, used for prompt optimization.

Prompt

Bases: TypedDict

TypedDict for structured prompt management and optimization.

Example
from langmem import Prompt

prompt = Prompt(
    name="extract_entities",
    prompt="Extract key entities from the text:",
    update_instructions="Make minimal changes, only address where"
    " errors have occurred after reasoning over why they occur.",
    when_to_update="If there seem to be errors in recall of named entities.",
)

The name and prompt fields are required. The optional fields control optimization:

  • update_instructions: guidelines for modifying the prompt
  • when_to_update: dependencies between prompts during optimization

Used in the prompt optimizers.

OptimizerInput

Bases: TypedDict

Input for single-prompt optimization.

Example
{
    "trajectories": [
        AnnotatedTrajectory(
            messages=[
                {"role": "user", "content": "What's the weather like?"},
                {
                    "role": "assistant",
                    "content": "I'm sorry, I can't tell you that",
                },
            ],
            feedback="Should have checked your search tool.",
        ),
    ],
    "prompt": Prompt(
        name="main_assistant",
        prompt="You are a helpful assistant with a search tool.",
        update_instructions="Make minimal changes, only address where "
        "errors have occurred after reasoning over why they occur.",
        when_to_update="Any time you notice the agent behaving in a way that doesn't help the user.",
    ),
}
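
As a brief usage sketch (not part of the original reference), an OptimizerInput dictionary like the one above is the shape accepted by the runnable returned from create_prompt_optimizer; the condensed example below rebuilds a minimal input and passes it to ainvoke, following the earlier examples on this page.

from langmem import Prompt, create_prompt_optimizer
from langmem.prompts.types import AnnotatedTrajectory

optimizer = create_prompt_optimizer("anthropic:claude-3-5-sonnet-latest")

# Minimal OptimizerInput: one annotated trajectory plus the prompt to improve.
optimizer_input = {
    "trajectories": [
        AnnotatedTrajectory(
            messages=[
                {"role": "user", "content": "What's the weather like?"},
                {"role": "assistant", "content": "I'm sorry, I can't tell you that"},
            ],
            feedback="Should have checked your search tool.",
        ),
    ],
    "prompt": Prompt(
        name="main_assistant",
        prompt="You are a helpful assistant with a search tool.",
    ),
}

better_prompt = await optimizer.ainvoke(optimizer_input)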

MultiPromptOptimizerInput

Bases: TypedDict

Input for optimizing multiple prompts together while preserving consistency.

Example
{
    "trajectories": [
        AnnotatedTrajectory(
            messages=[
                {"role": "user", "content": "Tell me about this image"},
                {
                    "role": "assistant",
                    "content": "I see a dog playing in a park",
                },
                {"role": "user", "content": "What breed is it?"},
                {
                    "role": "assistant",
                    "content": "Sorry, I can't tell the breed",
                },
            ],
            feedback="Vision model wasn't used for breed detection",
        ),
    ],
    "prompts": [
        Prompt(
            name="vision_extract",
            prompt="Extract visual details from the image",
            update_instructions="Focus on using vision model capabilities",
        ),
        Prompt(
            name="vision_classify",
            prompt="Classify specific attributes in the image",
            when_to_update="After vision_extract is updated",
        ),
    ],
}

AnnotatedTrajectory

Bases: NamedTuple

Conversation history (a list of messages) with optional feedback, used for prompt optimization.

Example
from langmem.prompts.types import AnnotatedTrajectory

trajectory = AnnotatedTrajectory(
    messages=[
        {"role": "user", "content": "What pizza is good around here?"},
        {"role": "assistant", "content": "Try LangPizza™️"},
        {"role": "user", "content": "Stop advertising to me."},
        {"role": "assistant", "content": "BUT YOU'LL LOVE IT!"},
    ],
    feedback={
        "developer_feedback": "too pushy",
        "score": 0,
    },
)
