ibm-granite/granite-3.3-8b-instruct

Granite-3.3-8B-Instruct is an 8-billion-parameter language model with a 128K context length, fine-tuned for improved reasoning and instruction-following capabilities.

Input
Configure the inputs for the AI model.

Random seed. Leave unspecified to randomize the seed.

A list of sequences to stop generation at. For example, ["<end>","<stop>"] will stop generation at the first instance of "<end>" or "<stop>".

Tools for request. Passed to the chat template.


The number of highest probability tokens to consider for generating the output. If > 0, only keep the top k tokens with highest probability (top-k filtering).


A probability threshold for generating the output. If < 1.0, only keep the top tokens with cumulative probability >= top_p (nucleus filtering). Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751).
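The interaction between top-k and nucleus (top-p) filtering described above can be sketched in plain Python. This is an illustrative reimplementation, not the server's actual sampler: top-k truncates to the k most probable tokens, then top-p keeps the smallest high-probability prefix whose cumulative mass reaches the threshold.

```python
def top_k_top_p_filter(probs, top_k=0, top_p=1.0):
    """Return the (index, probability) pairs that survive top-k and
    nucleus (top-p) filtering, per Holtzman et al. (2019)."""
    # Rank token probabilities from highest to lowest.
    ranked = sorted(enumerate(probs), key=lambda kv: kv[1], reverse=True)
    # Top-k: if k > 0, keep only the k most probable tokens.
    if top_k > 0:
        ranked = ranked[:top_k]
    # Top-p: keep the smallest prefix with cumulative probability >= top_p.
    if top_p < 1.0:
        kept, cumulative = [], 0.0
        for idx, p in ranked:
            kept.append((idx, p))
            cumulative += p
            if cumulative >= top_p:
                break
        ranked = kept
    return ranked
```

With `probs = [0.5, 0.3, 0.15, 0.05]` and `top_p=0.9`, the first three tokens survive (cumulative 0.5, 0.8, 0.95), and the long tail is cut off.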

Completion API user prompt.

Request streaming response. Defaults to False.

Chat completion API messages.

Documents for request. Passed to the chat template.


The maximum number of tokens to generate. max_tokens is deprecated in favor of the max_completion_tokens field.


The minimum number of tokens the model should generate as output.


The value used to modulate the next token probabilities.
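"Modulating the next token probabilities" means dividing the model's logits by the temperature before the softmax. A minimal sketch of that scaling (not the serving stack's actual code): lower temperature sharpens the distribution toward the top token, higher temperature flattens it.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    """Convert raw logits to next-token probabilities, scaled by
    temperature: p_i = exp(l_i / T) / sum_j exp(l_j / T)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]
```

For logits `[2.0, 1.0, 0.0]`, the top token gets about 87% of the mass at temperature 0.5 but only about 51% at temperature 2.0.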

Tool choice for request. If the choice is a specific function, this should be specified as a JSON string.
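A sketch of what that JSON string might look like, assuming the OpenAI-compatible "named function" form used by common serving stacks; `get_weather` is a hypothetical tool name for illustration.

```python
import json

# Hypothetical tool name for illustration; the schema follows the
# OpenAI-compatible "named function" form for forcing a specific tool.
tool_choice = json.dumps({
    "type": "function",
    "function": {"name": "get_weather"},
})
```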

A template to format the prompt with. If not specified, the chat template provided by the model will be used.

An object specifying the format that the model must output.


Presence penalty. Positive values penalize tokens that have already appeared in the text so far, encouraging the model to introduce new topics.


Frequency penalty. Positive values penalize tokens in proportion to how often they have appeared so far, reducing verbatim repetition.

Additional arguments to be passed to the chat template.

Add generation prompt. Passed to the chat template. Defaults to True.


An upper bound for the number of tokens that can be generated for a completion, including visible output tokens and reasoning tokens.
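The inputs above can be combined into a single chat-completions request body. This is a sketch assuming an OpenAI-compatible API shape; the exact endpoint and accepted parameter ranges depend on the serving stack behind the playground.

```python
import json

# Sketch of a chat-completions request body using the fields described
# above. Field names follow the OpenAI-compatible API convention.
payload = {
    "model": "ibm-granite/granite-3.3-8b-instruct",
    "messages": [
        {"role": "user", "content": "Summarize nucleus sampling in one sentence."}
    ],
    "temperature": 0.7,
    "top_p": 0.9,
    "max_completion_tokens": 256,  # upper bound, including reasoning tokens
    "stop": ["<end>", "<stop>"],   # stop sequences, as in the example above
    "seed": 42,                    # fix for reproducibility; omit to randomize
    "stream": False,
}
body = json.dumps(payload)
```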

Output
The generated output will appear here.


granite-3.3-8b-instruct - ikalos.ai