Create Text Completion with Anthropic
Perform a text completion with Anthropic's Claude LLM
by @pixies
How to Use
No instructions provided yet. Have questions?
Ask the PixieBrix Community!
Inputs
| Name | Required | Type | Description |
|---|---|---|---|
| topK | | number | Only sample from the top K options for each subsequent token. Used to remove "long tail" low-probability responses. Defaults to -1, which disables it. |
| topP | | number | Performs nucleus sampling: the cumulative probability distribution over the options for each subsequent token is computed in decreasing probability order and cut off once it reaches the probability specified by top_p. Defaults to -1, which disables it. Note that you should alter either temperature or top_p, but not both. |
| model | | string | ID of the model to use. See the list of models at https://console.anthropic.com/docs/api/reference#-v1-complete |
| prompt | | string | The prompt you want Claude to complete. |
| anthropic | | anthropic/api integration | The Anthropic API integration configuration. |
| maxTokens | | integer | The maximum number of tokens to generate before stopping. |
| temperature | | number | Amount of randomness injected into the response. Ranges from 0 to 1. Use a temperature closer to 0 for analytical / multiple-choice tasks, and closer to 1 for creative and generative tasks. |
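Under the hood, these inputs correspond to the parameters of Anthropic's legacy `/v1/complete` text-completion endpoint. A minimal sketch of how such a request could be assembled in Python (the endpoint URL, header names, `anthropic-version` value, and the `Human:`/`Assistant:` prompt framing follow Anthropic's legacy text-completions API; the brick's actual implementation may differ, and the key and model shown are placeholders):

```python
import json

# Legacy text-completions endpoint (per Anthropic's API reference)
API_URL = "https://api.anthropic.com/v1/complete"

def build_complete_request(api_key, model, prompt,
                           max_tokens=256, temperature=1.0,
                           top_k=-1, top_p=-1.0):
    """Assemble headers and JSON body mirroring the brick's inputs.

    The legacy completion API expects prompts framed as alternating
    "\n\nHuman:" / "\n\nAssistant:" turns.
    """
    headers = {
        "x-api-key": api_key,               # supplied via the anthropic integration
        "anthropic-version": "2023-06-01",
        "content-type": "application/json",
    }
    body = {
        "model": model,
        "prompt": f"\n\nHuman: {prompt}\n\nAssistant:",
        "max_tokens_to_sample": max_tokens,  # the brick's maxTokens input
        "temperature": temperature,
        "top_k": top_k,                      # -1 disables top-K sampling
        "top_p": top_p,                      # -1 disables nucleus sampling
    }
    return headers, json.dumps(body)

# Placeholder key and model ID for illustration only
headers, payload = build_complete_request("sk-...", "claude-2", "Say hello")
```

The actual HTTP POST (e.g. via `requests.post(API_URL, headers=headers, data=payload)`) is omitted here, since the brick performs the call for you.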
Outputs
| Name | Required | Type | Description |
|---|---|---|---|
| stop | | string | If stop_reason is stop_sequence, this contains the actual stop sequence (from the stop_sequences list passed in) that was seen. |
| completion | | string | The resulting completion, up to but excluding the stop sequences. |
| stop_reason | | string | The reason sampling stopped: stop_sequence if one of your provided stop_sequences was reached, or max_tokens if max_tokens_to_sample was exceeded. |
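Downstream bricks will often want to branch on stop_reason, for example to detect truncated completions. A small sketch of interpreting the output object (field names and values as documented in the table above; the function name is illustrative):

```python
def summarize_result(result: dict) -> str:
    """Turn the brick's output object into a short status line."""
    if result["stop_reason"] == "stop_sequence":
        # "stop" holds the matched sequence from stop_sequences
        return f"stopped at {result['stop']!r}: {result['completion']}"
    if result["stop_reason"] == "max_tokens":
        # The completion was cut off; consider raising maxTokens
        return f"truncated at token limit: {result['completion']}"
    return result["completion"]

example = {"completion": "Hello!", "stop_reason": "max_tokens", "stop": None}
```

Here `summarize_result(example)` would flag the completion as truncated, signaling that maxTokens may need to be raised.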