Corrective RAG (CRAG)
Self-reflection can enhance RAG, enabling correction of poor quality retrieval or generations.
Several recent papers focus on this theme, but implementing the ideas can be tricky.
Here we show how to implement ideas from the `Corrective RAG (CRAG)` paper using LangGraph.
Dependencies
Set the `OPENAI_API_KEY` and `TAVILY_API_KEY` environment variables (Tavily is used for web search below).
Setup
Load environment variables
Add your variables to a `.env` file at the root of the repo.
Install dependencies
npm install cheerio zod langchain @langchain/community @langchain/openai @langchain/core @langchain/textsplitters @langchain/langgraph
CRAG Details
Corrective RAG (CRAG) is a recent paper that introduces an interesting approach for self-reflective RAG.
The framework grades retrieved documents relative to the question:

1. Correct documents
    - If at least one document exceeds the threshold for relevance, it proceeds to generation.
    - Before generation, it performs knowledge refinement.
    - This partitions the document into "knowledge strips".
    - It grades each strip, and filters out irrelevant ones.
2. Ambiguous or incorrect documents
    - If all documents fall below the relevance threshold, or if the grader is unsure, the framework seeks an additional data source.
    - It will use web search to supplement retrieval.
    - The diagrams in the paper also suggest that query re-writing is used here.
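As a rough sketch of the routing logic above (the three-way grade and the function name here are our own shorthand, not the paper's API):

```typescript
// Minimal illustration of the CRAG routing decision (hypothetical names).
type Grade = "correct" | "ambiguous" | "incorrect";

function routeAfterGrading(grades: Grade[]): "generate" | "webSearch" {
  // At least one sufficiently relevant document: refine and generate.
  // Otherwise: supplement retrieval with web search.
  return grades.some((g) => g === "correct") ? "generate" : "webSearch";
}
```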
Let's implement some of these ideas from scratch using LangGraph.
Retriever
Let's index 3 blog posts.
import { CheerioWebBaseLoader } from "@langchain/community/document_loaders/web/cheerio";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";
import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { OpenAIEmbeddings } from "@langchain/openai";
const urls = [
"https://lilianweng.github.io/posts/2023-06-23-agent/",
"https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/",
"https://lilianweng.github.io/posts/2023-10-25-adv-attack-llm/",
];
const docs = await Promise.all(
urls.map((url) => new CheerioWebBaseLoader(url).load()),
);
const docsList = docs.flat();
const textSplitter = new RecursiveCharacterTextSplitter({
chunkSize: 250,
chunkOverlap: 0,
});
const docSplits = await textSplitter.splitDocuments(docsList);
// Add to vectorDB
const vectorStore = await MemoryVectorStore.fromDocuments(
docSplits,
new OpenAIEmbeddings(),
);
const retriever = vectorStore.asRetriever();
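As a quick sanity check (the query string here is just an example), you can invoke the retriever directly and inspect the returned chunks:

```typescript
// Hypothetical smoke test: retrieve a few chunks for a sample question.
const sampleDocs = await retriever.invoke("What is an autonomous agent?");
console.log(sampleDocs.length, sampleDocs[0].pageContent.slice(0, 100));
```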
State
We will define a graph. Our state will be an `object`, which we can access from any graph node as `state.key`.
import { Annotation } from "@langchain/langgraph";
import { DocumentInterface } from "@langchain/core/documents";
// Represents the state of our graph.
const GraphState = Annotation.Root({
documents: Annotation<DocumentInterface[]>({
reducer: (x, y) => y ?? x ?? [],
}),
question: Annotation<string>({
reducer: (x, y) => y ?? x ?? "",
}),
generation: Annotation<string>({
reducer: (x, y) => y ?? x,
}),
});
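A quick illustration of how these reducers merge node output into state (mirroring the `question` channel above): a node returns a partial object, and any omitted channel keeps its previous value.

```typescript
// Illustration only: `y ?? x ?? ""` prefers the incoming value and falls back
// to the existing one, so a node that omits a channel leaves it unchanged.
const questionReducer = (x?: string, y?: string) => y ?? x ?? "";
console.log(questionReducer("original question", undefined)); // "original question"
console.log(questionReducer("original question", "rewritten")); // "rewritten"
```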
Nodes and Edges
Each `node` will simply modify the `state`, and each `edge` will choose which `node` to call next.
We can make some simplifications from the paper:
- Let's skip the knowledge refinement phase as a first pass. It can be added back as a node if desired; see the sketch after this list.
- If *any* documents are irrelevant, let's opt to supplement retrieval with web search.
- We'll use Tavily Search for web search.
- Let's use query re-writing to optimize the query for web search.
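For reference, a knowledge-refinement node could look roughly like the sketch below. This is an assumption-laden sketch, not code from the paper: the strip size is arbitrary, and the strip-level grading (which would mirror `gradeDocuments` below) is omitted.

```typescript
import { Document } from "@langchain/core/documents";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Hypothetical knowledge-refinement node (not part of this tutorial's graph).
// It splits the graded documents into small "knowledge strips"; a real
// implementation would then grade each strip and drop the irrelevant ones.
async function refineKnowledge(
  state: typeof GraphState.State
): Promise<Partial<typeof GraphState.State>> {
  const stripSplitter = new RecursiveCharacterTextSplitter({
    chunkSize: 100, // strip size is an arbitrary choice
    chunkOverlap: 0,
  });
  // Re-wrap as Document instances before splitting into strips.
  const docs = state.documents.map(
    (d) => new Document({ pageContent: d.pageContent, metadata: d.metadata })
  );
  const strips = await stripSplitter.splitDocuments(docs);
  return { documents: strips };
}
```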
Here is our graph flow:
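`START` → `retrieve` → `gradeDocuments`, then a conditional edge: if any documents survived grading, go straight to `generate`; otherwise `transformQuery` → `webSearch` → `generate`, and finally `END`. The conditional edge after `gradeDocuments` is what implements the corrective behavior.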
import { TavilySearchResults } from "@langchain/community/tools/tavily_search";
import { Document } from "@langchain/core/documents";
import { z } from "zod";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { pull } from "langchain/hub";
import { ChatOpenAI } from "@langchain/openai";
import { StringOutputParser } from "@langchain/core/output_parsers";
import { formatDocumentsAsString } from "langchain/util/document";
// Define the LLM once. We'll reuse it throughout the graph.
const model = new ChatOpenAI({
model: "gpt-4o",
temperature: 0,
});
/**
* Retrieve documents
*
* @param {typeof GraphState.State} state The current state of the graph.
* @returns {Promise<Partial<typeof GraphState.State>>} The new state object.
*/
async function retrieve(
state: typeof GraphState.State
): Promise<Partial<typeof GraphState.State>> {
console.log("---RETRIEVE---");
const documents = await retriever
.withConfig({ runName: "FetchRelevantDocuments" })
.invoke(state.question);
return {
documents,
};
}
/**
* Generate answer
*
* @param {typeof GraphState.State} state The current state of the graph.
* @returns {Promise<Partial<typeof GraphState.State>>} The new state object.
*/
async function generate(
state: typeof GraphState.State
): Promise<Partial<typeof GraphState.State>> {
console.log("---GENERATE---");
const prompt = await pull<ChatPromptTemplate>("rlm/rag-prompt");
// Construct the RAG chain by piping the prompt, model, and output parser
const ragChain = prompt.pipe(model).pipe(new StringOutputParser());
const generation = await ragChain.invoke({
context: formatDocumentsAsString(state.documents),
question: state.question,
});
return {
generation,
};
}
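Note that `rlm/rag-prompt` is fetched from the LangChain prompt hub at runtime; it expects `context` and `question` input variables, which is why the retrieved documents are flattened into a single string with `formatDocumentsAsString`.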
/**
* Determines whether the retrieved documents are relevant to the question.
*
* @param {typeof GraphState.State} state The current state of the graph.
* @returns {Promise<Partial<typeof GraphState.State>>} The new state object.
*/
async function gradeDocuments(
state: typeof GraphState.State
): Promise<Partial<typeof GraphState.State>> {
console.log("---CHECK RELEVANCE---");
// pass the name & schema to `withStructuredOutput` which will force the model to call this tool.
const llmWithTool = model.withStructuredOutput(
z
.object({
binaryScore: z
.enum(["yes", "no"])
.describe("Relevance score 'yes' or 'no'"),
})
.describe(
"Grade the relevance of the retrieved documents to the question. Either 'yes' or 'no'."
),
{
name: "grade",
}
);
const prompt = ChatPromptTemplate.fromTemplate(
`You are a grader assessing relevance of a retrieved document to a user question.
Here is the retrieved document:
{context}
Here is the user question: {question}
If the document contains keyword(s) or semantic meaning related to the user question, grade it as relevant.
Give a binary score 'yes' or 'no' score to indicate whether the document is relevant to the question.`
);
// Chain
const chain = prompt.pipe(llmWithTool);
const filteredDocs: Array<DocumentInterface> = [];
for await (const doc of state.documents) {
const grade = await chain.invoke({
context: doc.pageContent,
question: state.question,
});
if (grade.binaryScore === "yes") {
console.log("---GRADE: DOCUMENT RELEVANT---");
filteredDocs.push(doc);
} else {
console.log("---GRADE: DOCUMENT NOT RELEVANT---");
}
}
return {
documents: filteredDocs,
};
}
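Because of `withStructuredOutput`, each `chain.invoke` call returns a plain object conforming to the Zod schema (for example `{ binaryScore: "yes" }`), so no separate output parser is needed for grading.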
/**
* Transform the query to produce a better question.
*
* @param {typeof GraphState.State} state The current state of the graph.
* @returns {Promise<Partial<typeof GraphState.State>>} The new state object.
*/
async function transformQuery(
state: typeof GraphState.State
): Promise<Partial<typeof GraphState.State>> {
console.log("---TRANSFORM QUERY---");
// Construct the prompt
const prompt = ChatPromptTemplate.fromTemplate(
`You are generating a question that is well optimized for semantic search retrieval.
Look at the input and try to reason about the underlying semantic intent / meaning.
Here is the initial question:
\n ------- \n
{question}
\n ------- \n
Formulate an improved question: `
);
// Chain
const chain = prompt.pipe(model).pipe(new StringOutputParser());
const betterQuestion = await chain.invoke({ question: state.question });
return {
question: betterQuestion,
};
}
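As an illustrative example only (actual output depends on the model), a terse input like "agent memory types?" might be rewritten to something closer to "What are the different types of memory used by LLM-powered autonomous agents?".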
/**
* Web search based on the re-phrased question using Tavily API.
*
* @param {typeof GraphState.State} state The current state of the graph.
* @returns {Promise<Partial<typeof GraphState.State>>} The new state object.
*/
async function webSearch(
state: typeof GraphState.State
): Promise<Partial<typeof GraphState.State>> {
console.log("---WEB SEARCH---");
const tool = new TavilySearchResults();
const docs = await tool.invoke({ input: state.question });
const webResults = new Document({ pageContent: docs });
const newDocuments = state.documents.concat(webResults);
return {
documents: newDocuments,
};
}
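Note that `TavilySearchResults` reads a `TAVILY_API_KEY` environment variable and returns its results as one JSON string, which is why the node wraps the entire response in a single `Document` before appending it to the existing documents.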
/**
* Determines whether to generate an answer, or re-generate a question.
*
* @param {typeof GraphState.State} state The current state of the graph.
* @returns {"transformQuery" | "generate"} Next node to call
*/
function decideToGenerate(state: typeof GraphState.State) {
console.log("---DECIDE TO GENERATE---");
const filteredDocs = state.documents;
if (filteredDocs.length === 0) {
// All documents were filtered out by the relevance check.
// We will re-generate a new query
console.log("---DECISION: TRANSFORM QUERY---");
return "transformQuery";
}
// We have relevant documents, so generate answer
console.log("---DECISION: GENERATE---");
return "generate";
}
Build Graph
This simply follows the flow we outlined above.
import { END, START, StateGraph } from "@langchain/langgraph";
const workflow = new StateGraph(GraphState)
// Define the nodes
.addNode("retrieve", retrieve)
.addNode("gradeDocuments", gradeDocuments)
.addNode("generate", generate)
.addNode("transformQuery", transformQuery)
.addNode("webSearch", webSearch);
// Build graph
workflow.addEdge(START, "retrieve");
workflow.addEdge("retrieve", "gradeDocuments");
workflow.addConditionalEdges(
"gradeDocuments",
decideToGenerate,
);
workflow.addEdge("transformQuery", "webSearch");
workflow.addEdge("webSearch", "generate");
workflow.addEdge("generate", END);
// Compile
const app = workflow.compile();
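As an optional sanity check (assuming the Mermaid export available on compiled LangGraph graphs), you can print the wiring and compare it against the flow described earlier:

```typescript
// Optional: render the compiled graph as a Mermaid diagram.
console.log(app.getGraph().drawMermaid());
```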
const inputs = {
question: "Explain how the different types of agent memory work.",
};
const config = { recursionLimit: 50 };
let finalGeneration;
for await (const output of await app.stream(inputs, config)) {
for (const [key, value] of Object.entries(output)) {
console.log(`Node: '${key}'`);
// Optional: log full state at each node
// console.log(JSON.stringify(value, null, 2));
finalGeneration = value;
}
console.log("\n---\n");
}
// Log the final generation.
console.log(JSON.stringify(finalGeneration, null, 2));
---RETRIEVE---
Node: 'retrieve'
---
---CHECK RELEVANCE---
---GRADE: DOCUMENT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT NOT RELEVANT---
---GRADE: DOCUMENT RELEVANT---
---DECIDE TO GENERATE---
---DECISION: GENERATE---
Node: 'gradeDocuments'
---
---GENERATE---
Node: 'generate'
---
{
"generation": "Different types of agent memory include long-term memory, which allows the agent to retain and recall information over extended periods, often using an external vector store for fast retrieval. This enables the agent to remember and utilize vast amounts of information efficiently."
}
See the LangSmith trace here.