Models

Manage your models and add custom models

Global Settings

GPT-4

GPT-4o mini

o1-preview

o1-mini

DeepSeek Chat V3

DeepSeek Coder

DeepSeek Reasoner

Claude 3.5 Sonnet

Claude 3.5 Haiku

Claude 3 Opus

Claude 3 Sonnet

xAI/Grok 2 Vision

xAI/Grok 2

xAI/Grok Beta

xAI/Grok Vision Beta

Gemini 2.0 Flash Thinking Experimental

Gemini 2.0 Flash Experimental

Gemini 1.5 Pro

Gemini 1.5 Flash

Groq Llama 3 70B

Groq Llama 3 8B Instant

Groq Mixtral 8x7B

Default Model:

System Instruction:

Context Limit:

Temperature:

Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.

Precise
Neutral
Creative
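
The effect described above can be sketched as dividing each logit by the temperature before the softmax. This toy example (plain Python, not the app's actual sampler) shows why a low value like 0.2 behaves "precise" and a higher value like 0.8 behaves "creative":

```python
import math

def softmax_with_temperature(logits, temperature):
    """Turn raw logits into probabilities, after dividing
    each logit by the sampling temperature."""
    scaled = [l / temperature for l in logits]
    peak = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
precise = softmax_with_temperature(logits, 0.2)   # sharpens the distribution
creative = softmax_with_temperature(logits, 0.8)  # flattens it

assert precise[0] > creative[0]  # the top token dominates more at T = 0.2
```

With these example logits, the top token takes roughly 99% of the probability mass at temperature 0.2 but only about 73% at 0.8, which is exactly the "focused vs. random" trade-off the slider controls.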

Top K:

Only sample from the top K options for each subsequent token. Used to remove "long tail" low-probability responses. Minimum: 0.
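
Top-K filtering can be sketched as keeping the K most probable candidates and renormalizing. This toy example is an illustration of the idea, not any provider's actual implementation:

```python
def top_k_filter(probs, k):
    """Keep only the k most probable tokens and renormalize.
    probs: dict mapping token -> probability."""
    if k <= 0:  # k = 0 is commonly treated as "disabled"
        return dict(probs)
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    kept = dict(ranked[:k])
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"the": 0.5, "a": 0.3, "cat": 0.15, "zxqv": 0.05}
filtered = top_k_filter(probs, 2)  # drops the "long tail": "cat", "zxqv"
```

With K = 2 only "the" and "a" survive, and their probabilities are rescaled so they again sum to 1.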

Top P:

An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.

Precise
Creative
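
Nucleus sampling can be sketched the same way: walk the candidates in descending probability until the cumulative mass reaches top_p, then renormalize. A toy illustration (not any provider's actual code):

```python
def top_p_filter(probs, top_p):
    """Keep the smallest set of tokens whose cumulative
    probability reaches top_p, then renormalize."""
    kept, cumulative = {}, 0.0
    for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
        kept[tok] = p
        cumulative += p
        if cumulative >= top_p:
            break
    total = sum(kept.values())
    return {tok: p / total for tok, p in kept.items()}

probs = {"the": 0.6, "a": 0.25, "cat": 0.1, "zxqv": 0.05}
# top_p = 0.1 keeps only "the": it alone already covers the top 10% mass.
# top_p = 0.9 keeps "the", "a", "cat" (cumulative 0.95 >= 0.9).
```

Unlike Top K's fixed cutoff, the nucleus adapts: when the model is confident, few tokens survive; when the distribution is flat, more do.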

Presence Penalty:

How much to penalize new tokens based on whether they appear in the text so far. Higher values increase the model's likelihood to talk about new topics.

Balanced
Open-minded

Frequency Penalty:

How much to penalize new tokens based on their existing frequency in the text so far. Higher values decrease the model's likelihood to repeat the same line verbatim.
Balanced
Open-minded
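
Both penalties can be sketched as adjustments to the logits, following the formula OpenAI documents: the frequency penalty scales with how many times a token has already been generated, while the presence penalty is a flat one-time deduction for any token that has appeared at all. A toy illustration:

```python
from collections import Counter

def apply_penalties(logits, generated_tokens, presence_penalty, frequency_penalty):
    """Adjust each token's logit based on the generation so far:
    subtract frequency_penalty once per prior occurrence, plus
    presence_penalty once if the token has appeared at all."""
    counts = Counter(generated_tokens)
    adjusted = {}
    for tok, logit in logits.items():
        c = counts[tok]
        adjusted[tok] = logit - frequency_penalty * c - presence_penalty * (1 if c > 0 else 0)
    return adjusted

logits = {"cat": 2.0, "dog": 2.0}
history = ["cat", "cat", "cat"]
adjusted = apply_penalties(logits, history, presence_penalty=0.5, frequency_penalty=0.3)
# "cat" drops to about 0.6 (2.0 - 0.3*3 - 0.5); "dog" is untouched.
```

The repeated token becomes less likely with each occurrence (frequency), while even a single prior mention makes a topic slightly less likely to recur (presence).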

Max Tokens:

The maximum number of tokens to generate before stopping.
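
Taken together, these settings map onto the body of an OpenAI-compatible chat-completions request. The field names below assume that API; other providers may name or support parameters differently (for example, top_k is an Anthropic/Google parameter, not an OpenAI one), and the values shown are placeholders, not recommended defaults:

```python
# Sketch of a request body using the settings on this page.
request = {
    "model": "gpt-4o-mini",                       # Default Model
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},  # System Instruction
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,
    "top_p": 1.0,
    "presence_penalty": 0.0,
    "frequency_penalty": 0.0,
    "max_tokens": 1024,                           # Max Tokens: stop after this many
}
```

The Context Limit setting is typically applied client-side, by trimming older messages out of `messages` before the request is sent, rather than being a field in the request itself.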