
NAME

OpenAI::API::Request::Completion - Request class for OpenAI API text completion

SYNOPSIS

    use OpenAI::API::Request::Completion;

    my $completion = OpenAI::API::Request::Completion->new(
        model      => 'text-davinci-003',
        prompt     => 'Once upon a time',
        max_tokens => 50,
    );

    my $res  = $completion->send();         # or: my $res = $completion->send( http_response => 1 );
    my $text = $res->{choices}[0]{text};    # or: my $text = "$res";

DESCRIPTION

This module provides a request class for interacting with the OpenAI API's text completion endpoint. It inherits from OpenAI::API::Request.
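
The SYNOPSIS above relies on the module's default configuration (typically an API key taken from the OPENAI_API_KEY environment variable). The sketch below shows how an explicit configuration object might be passed in; the config parameter and the api_key attribute used here are assumptions based on the SEE ALSO references, not verified behaviour.

    use OpenAI::API::Config;
    use OpenAI::API::Request::Completion;

    # Hedged sketch: the `config` parameter and the api_key attribute
    # are assumptions; consult OpenAI::API::Config for the real interface.
    my $config = OpenAI::API::Config->new(
        api_key => $ENV{OPENAI_API_KEY},
    );

    my $completion = OpenAI::API::Request::Completion->new(
        config     => $config,
        model      => 'text-davinci-003',
        prompt     => 'Once upon a time',
        max_tokens => 50,
    );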

ATTRIBUTES

model

ID of the model to use.

See the Models overview in the OpenAI documentation for a list of available models and their capabilities.

prompt

The prompt for the text generation.

suffix [optional]

The suffix that comes after a completion of inserted text.
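
For example, prompt and suffix can be combined for insert-style completions, where the model generates text to fit between the two (model support for this varies). A minimal sketch:

    # The generated text is meant to fit between `prompt` and `suffix`.
    my $insert = OpenAI::API::Request::Completion->new(
        model      => 'text-davinci-003',
        prompt     => "sub add {\n    my (\$x, \$y) = \@_;\n",
        suffix     => "\n}\n",
        max_tokens => 64,
    );
    my $res = $insert->send();
    print $res->{choices}[0]{text}, "\n";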

max_tokens [optional]

The maximum number of tokens to generate.

Most models have a context length of 2048 tokens (except for the newest models, which support 4096). The prompt's token count plus max_tokens cannot exceed the model's context length.
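
For instance, a request that caps the reply at 256 tokens, leaving the rest of the context window for the prompt:

    # The prompt's tokens and max_tokens share the same context window.
    my $req = OpenAI::API::Request::Completion->new(
        model      => 'text-davinci-003',
        prompt     => 'Summarize the plot of Hamlet in three sentences.',
        max_tokens => 256,
    );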

temperature [optional]

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

top_p [optional]

An alternative to sampling with temperature, called nucleus sampling, where the model considers only the tokens comprising the top_p probability mass. For example, 0.1 means only the tokens making up the top 10% of probability mass are considered.

We generally recommend altering this or temperature but not both.
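
A short sketch of the two sampling controls; set one and leave the other at its default:

    # Focused, mostly deterministic output: low temperature.
    my $focused = OpenAI::API::Request::Completion->new(
        model       => 'text-davinci-003',
        prompt      => 'List three uses for a paperclip.',
        temperature => 0.2,
    );

    # Nucleus sampling: consider only the top 10% probability mass,
    # leaving temperature at its default.
    my $nucleus = OpenAI::API::Request::Completion->new(
        model  => 'text-davinci-003',
        prompt => 'List three uses for a paperclip.',
        top_p  => 0.1,
    );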

n [optional]

How many completions to generate for each prompt.

Use carefully and ensure that you have reasonable settings for max_tokens and stop.
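
For example, requesting several completions in one call and iterating over the returned choices:

    my $res = OpenAI::API::Request::Completion->new(
        model      => 'text-davinci-003',
        prompt     => 'Suggest a name for a coffee shop:',
        n          => 3,
        max_tokens => 10,
    )->send();

    # Each requested completion appears as an element of {choices}.
    for my $choice ( @{ $res->{choices} } ) {
        print $choice->{text}, "\n";
    }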

stop [optional]

Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence.
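
A sketch with two stop sequences; note that passing them as an array reference is an assumption about the attribute's type (a single string such as "\n" may also be accepted):

    # Stop when the model starts a new line or a new question.
    my $req = OpenAI::API::Request::Completion->new(
        model      => 'text-davinci-003',
        prompt     => "Q: What is the capital of France?\nA:",
        max_tokens => 20,
        stop       => [ "\n", 'Q:' ],   # arrayref form is an assumption
    );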

frequency_penalty [optional]

Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.

presence_penalty [optional]

Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
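
A sketch contrasting the two penalties: frequency_penalty discourages verbatim repetition, while presence_penalty nudges the model toward new topics.

    my $req = OpenAI::API::Request::Completion->new(
        model             => 'text-davinci-003',
        prompt            => 'Write a short product description for a reusable water bottle.',
        max_tokens        => 120,
        frequency_penalty => 0.5,   # penalize tokens by how often they have already appeared
        presence_penalty  => 0.6,   # penalize tokens that have appeared at all
    );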

best_of [optional]

Generates best_of completions server-side and returns the "best" (the one with the highest log probability per token).

Use carefully and ensure that you have reasonable settings for max_tokens and stop.
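
When combined with n, best_of controls how many candidates are generated server-side and n how many of them are returned, so best_of should be at least as large as n. For example:

    # Generate 5 candidates server-side and return the best 2.
    # All 5 candidates count against token usage.
    my $req = OpenAI::API::Request::Completion->new(
        model      => 'text-davinci-003',
        prompt     => 'Write a one-line tagline for a bakery.',
        max_tokens => 20,
        best_of    => 5,
        n          => 2,
    );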

INHERITED METHODS

This module inherits the following methods from OpenAI::API::Request:

send(%args)

send_async(%args)
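
The SYNOPSIS shows send( http_response => 1 ); the sketch below assumes that this returns the underlying HTTP response object rather than the decoded hash, and that send_async() returns a promise or future-like value. Both return types are assumptions and should be checked against OpenAI::API::Request.

    my $res     = $completion->send();                        # decoded hash reference
    my $raw     = $completion->send( http_response => 1 );    # raw HTTP response (assumed)
    my $pending = $completion->send_async();                  # promise/future-like value (assumed)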

SEE ALSO

OpenAI::API::Request, OpenAI::API::Config