Class ConversationalRetrievalQAChain

Class for conducting conversational question-answering tasks with a retrieval component. Extends the BaseChain class and implements the ConversationalRetrievalQAChainInput interface.

Example

// Import paths below assume LangChain JS 0.1.x; adjust them to match your installed packages.
import * as fs from "node:fs";
import { ChatAnthropic } from "@langchain/anthropic";
import { OpenAIEmbeddings } from "@langchain/openai";
import { RecursiveCharacterTextSplitter } from "langchain/text_splitter";
import { HNSWLib } from "@langchain/community/vectorstores/hnswlib";
import { ConversationalRetrievalQAChain } from "langchain/chains";

// Model used to condense the follow-up question and to answer it.
const model = new ChatAnthropic({});

// Load the source document and split it into chunks.
const text = fs.readFileSync("state_of_the_union.txt", "utf8");
const textSplitter = new RecursiveCharacterTextSplitter({ chunkSize: 1000 });
const docs = await textSplitter.createDocuments([text]);

// Index the chunks in an in-memory vector store.
const vectorStore = await HNSWLib.fromDocuments(docs, new OpenAIEmbeddings());

// Build the chain from the model and the vector store's retriever.
const chain = ConversationalRetrievalQAChain.fromLLM(
  model,
  vectorStore.asRetriever(),
);

const question = "What did the president say about Justice Breyer?";

// `call` is deprecated in favor of `invoke`, but is kept here to match the documented API.
const res = await chain.call({ question, chat_history: "" });
console.log(res);

// Pass the prior exchange back in as chat history for the follow-up question.
const chatHistory = `${question}\n${res.text}`;
const followUpRes = await chain.call({
  question: "Was that nice?",
  chat_history: chatHistory,
});
console.log(followUpRes);

Hierarchy

  • BaseChain
      ↳ ConversationalRetrievalQAChain

Implements

  • ConversationalRetrievalQAChainInput

Constructors

Properties

chatHistoryKey: string = "chat_history"
combineDocumentsChain: BaseChain<ChainValues, ChainValues>
inputKey: string = "question"
questionGeneratorChain: LLMChain<string, any>
retriever: BaseRetrieverInterface
returnGeneratedQuestion: boolean = false
returnSourceDocuments: boolean = false
memory?: any
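
The inputKey and chatHistoryKey defaults above are why the example passes { question, chat_history } to the chain. When returnSourceDocuments or returnGeneratedQuestion is enabled, the chain's output contains extra fields alongside the answer. A minimal sketch, reusing the chain from the example above (the extra output field names shown are assumptions based on the flag names):

// Assumes the chain was created with returnSourceDocuments and returnGeneratedQuestion set to true.
const res = await chain.call({ question: "What did the president say about Justice Breyer?", chat_history: "" });
console.log(res.text);              // the answer
console.log(res.sourceDocuments);   // documents returned by the retriever
console.log(res.generatedQuestion); // the standalone question produced by the question generator chain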

Accessors

Methods

  • apply

    Parameters

    • inputs: ChainValues[]
    • Optional config: any[]

    Returns Promise<ChainValues[]>

    Deprecated

    Use .batch() instead. Will be removed in 0.2.0.

    Call the chain on all inputs in the list.
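
    As the deprecation note says, new code should use .batch() instead. A minimal sketch, reusing the chain from the example above (the input objects are an assumption based on the default input keys):

    // .batch() runs the chain over several inputs, mirroring the deprecated apply().
    const results = await chain.batch([
      { question: "What did the president say about Justice Breyer?", chat_history: "" },
      { question: "What did the president say about the economy?", chat_history: "" },
    ]);
    console.log(results.length); // one ChainValues result per input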

  • call

    Parameters

    • values: any
    • Optional config: any
    • Optional tags: string[]

      Deprecated

    Returns Promise<ChainValues>

    Deprecated

    Use .invoke() instead. Will be removed in 0.2.0.

    Run the core logic of this chain and add to output if desired.

    Wraps _call and handles memory.

  • invoke

    Invokes the chain with the provided input and returns the output.

    Parameters

    • input: ChainValues

      Input values for the chain run.

    • Optional config: any

      Optional configuration for the Runnable.

    Returns Promise<ChainValues>

    Promise that resolves with the output of the chain run.
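
    Example (a minimal sketch, reusing the chain from the example above and the default input keys):

    const res = await chain.invoke({
      question: "What did the president say about Justice Breyer?",
      chat_history: "",
    });
    // The answer is returned under the "text" key, as in the call example above.
    console.log(res.text);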

  • prepOutputs

    Parameters

    • inputs: Record<string, unknown>
    • outputs: Record<string, unknown>
    • returnOnlyOutputs: boolean = false

    Returns Promise<Record<string, unknown>>

  • run

    Parameters

    • input: any
    • Optional config: any

    Returns Promise<string>

    Deprecated

    Use .invoke() instead. Will be removed in 0.2.0.

  • Static fromLLM

    Static method to create a new ConversationalRetrievalQAChain from a BaseLanguageModel and a BaseRetriever.

    Parameters

    • llm: BaseLanguageModelInterface

Language model instance used to generate a new standalone question.

    • retriever: BaseRetrieverInterface

Retriever instance used to retrieve relevant documents.

    • options: {
          outputKey?: string;
          qaChainOptions?: QAChainParams;
          qaTemplate?: string;
          questionGeneratorChainOptions?: {
              llm?: any;
              template?: string;
          };
          questionGeneratorTemplate?: string;
          returnSourceDocuments?: boolean;
      } & Omit<ConversationalRetrievalQAChainInput, "retriever" | "combineDocumentsChain" | "questionGeneratorChain"> = {}

    Returns ConversationalRetrievalQAChain

    A new instance of ConversationalRetrievalQAChain.
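
    Example (a sketch of the options object, reusing model and vectorStore from the example above; the prompt placeholder names {context}, {question}, and {chat_history} are assumptions based on LangChain's default prompts and are not spelled out on this page):

    const chain = ConversationalRetrievalQAChain.fromLLM(model, vectorStore.asRetriever(), {
      // Also return the retrieved documents alongside the answer.
      returnSourceDocuments: true,
      // Prompt used to answer the question from the retrieved documents.
      qaTemplate:
        "Use the following context to answer the question.\n\n{context}\n\nQuestion: {question}\nHelpful answer:",
      questionGeneratorChainOptions: {
        // A separate (possibly cheaper) model may be used just for condensing the follow-up question.
        llm: new ChatAnthropic({}),
        template:
          "Given the conversation and a follow up question, rephrase the follow up question to be a standalone question.\n\nChat history:\n{chat_history}\nFollow up question: {question}\nStandalone question:",
      },
    });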

  • Static getChatHistoryString

    Static method to convert the chat history input into a formatted string.

    Parameters

    • chatHistory: string | BaseMessage[] | string[][]

      Chat history input which can be a string, an array of BaseMessage instances, or an array of string arrays.

    Returns string

    A formatted string representing the chat history.
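
    Example (a sketch of the accepted input shapes; the message import path assumes LangChain JS 0.1.x, and the exact formatting of the returned transcript is not specified on this page):

    import { AIMessage, HumanMessage } from "@langchain/core/messages";

    // An array of string pairs (assumed here to be [human, ai] turns)...
    const fromPairs = ConversationalRetrievalQAChain.getChatHistoryString([
      ["What did the president say about Justice Breyer?", "He thanked him for his service."],
    ]);

    // ...or an array of BaseMessage instances.
    const fromMessages = ConversationalRetrievalQAChain.getChatHistoryString([
      new HumanMessage("What did the president say about Justice Breyer?"),
      new AIMessage("He thanked him for his service."),
    ]);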
