Class implementing the Large Language Model (LLM) interface using the Hugging Face Inference API for text generation.

Example

const model = new HuggingFaceInference({
  model: "gpt2",
  temperature: 0.7,
  maxTokens: 50,
});

const res = await model.call(
  "Question: What would be a good company name for a company that makes colorful socks?\nAnswer:"
);
console.log({ res });

Properties

apiKey: undefined | string = undefined

API key to use.

endpointUrl: undefined | string = undefined

Custom inference endpoint URL to use.

frequencyPenalty: undefined | number = undefined

Penalizes repeated tokens according to their frequency.

includeCredentials: undefined | string | boolean = undefined

Credentials to use for the request. If this is a string, it is passed through unchanged. If it is a boolean, true maps to "include" and false sends no credentials at all.

maxTokens: undefined | number = undefined

Maximum number of tokens to generate in the completion.

model: string = "gpt2"

Model to use.

stopSequences: undefined | string[] = undefined

The model will stop generating text when one of the strings in the list is generated.

temperature: undefined | number = undefined

Sampling temperature to use.

topK: undefined | number = undefined

Number of highest-probability tokens to consider at each sampling step (top-k sampling).

topP: undefined | number = undefined

Total probability mass of tokens to consider at each step (nucleus sampling).
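Taken together, the properties above form the constructor options. A minimal sketch of an options object mirroring that list (the interface below is illustrative, not the library's exported type, and reading the API key from the HUGGINGFACEHUB_API_KEY environment variable when omitted is an assumption):

```typescript
// Illustrative shape of the constructor options, matching the property list above.
interface HuggingFaceInferenceOptions {
  apiKey?: string;
  endpointUrl?: string;
  model?: string; // defaults to "gpt2"
  temperature?: number;
  maxTokens?: number;
  topK?: number;
  topP?: number;
  frequencyPenalty?: number;
  stopSequences?: string[];
  includeCredentials?: string | boolean;
}

const options: HuggingFaceInferenceOptions = {
  // apiKey may be picked up from the environment when omitted (assumption);
  // pass it explicitly to be safe.
  model: "gpt2",
  temperature: 0.7,
  maxTokens: 50,
  topK: 50,
  topP: 0.95,
  stopSequences: ["\n\n"],
};

console.log(options.model); // "gpt2"
```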

Methods

  • Parameters

    • Optional options: unknown

    Returns {
        model: string;
        parameters: {
            max_new_tokens: undefined | number;
            repetition_penalty: undefined | number;
            return_full_text: boolean;
            stop: any;
            temperature: undefined | number;
            top_k: undefined | number;
            top_p: undefined | number;
        };
    }

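The camelCase properties documented above map onto the snake_case parameters object in this return type. A sketch of that mapping, under the documented shape (the function name and the return_full_text default are assumptions for illustration):

```typescript
// Sketch: translating the documented camelCase fields into the snake_case
// parameters object shown in the return type above.
function invocationParams(fields: {
  model: string;
  maxTokens?: number;
  frequencyPenalty?: number;
  temperature?: number;
  topK?: number;
  topP?: number;
  stopSequences?: string[];
}) {
  return {
    model: fields.model,
    parameters: {
      max_new_tokens: fields.maxTokens,
      repetition_penalty: fields.frequencyPenalty,
      return_full_text: false, // assumed default
      stop: fields.stopSequences,
      temperature: fields.temperature,
      top_k: fields.topK,
      top_p: fields.topP,
    },
  };
}

const params = invocationParams({ model: "gpt2", maxTokens: 50, temperature: 0.7 });
console.log(params.parameters.max_new_tokens); // 50
```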

Generated using TypeDoc