
Bot Configuration

Every bot in AI Supreme Council is defined by a configuration object that controls its provider, model, behavior, and appearance. This page documents the full configuration schema and explains each field.

The Config Panel

The config panel slides out from the right side of the screen. It is visible by default on desktop and accessible via the settings icon on mobile. The panel contains:

  • Provider and model selectors at the top
  • System prompt text area
  • Generation parameters (temperature, max tokens, etc.)
  • Advanced settings (expandable section)
  • Persona settings (icon, description, color)
  • Per-bot API key field

Changes are saved automatically as you edit.

Core Fields

Provider

The LLM provider that hosts the model. Select from the dropdown:

Provider       API Key Source           Notes
Anthropic      console.anthropic.com    Claude models -- advanced reasoning
OpenAI         platform.openai.com      GPT-4o, o3, DALL-E
xAI            console.x.ai             Grok models
Google Gemini  aistudio.google.com      Free Flash tier available
OpenRouter     openrouter.ai            300+ models, free tier
Groq           console.groq.com         Ultra-fast LPU inference
DeepSeek       platform.deepseek.com    R1 and V3 reasoning models
Mistral        console.mistral.ai       Large, Small, Codestral
Ollama         N/A (local)              Runs locally, no API key needed
Custom         User-defined             Any OpenAI-compatible endpoint

Changing the provider updates the model dropdown to show available models for that provider.

Model

The specific model from the selected provider. The dropdown is populated from the community model registry and shows:

  • Model name and ID
  • Pricing tier (free or paid)
  • Context window size
  • Capabilities (vision, reasoning, tools)

Model info card

When you select a model, an info card appears below the dropdown showing pricing (input/output per million tokens), context window size, max output tokens, and capability tags. This helps you choose the right model for your task.

System Prompt

Instructions sent to the model at the beginning of every conversation. Defines the bot's behavior, personality, and constraints. See System Prompts for detailed guidance and templates.

The system prompt field includes a template dropdown with 5 built-in presets. Select a template and customize it to your needs.

Temperature

Controls the randomness/creativity of responses.

Value    Behavior
0        Deterministic -- always picks the most likely token
0.3      Conservative -- focused, consistent responses
0.7      Balanced (default) -- good mix of creativity and coherence
1.0      Creative -- more varied responses
1.5-2.0  Very creative -- unpredictable, experimental

Max Tokens

Maximum number of tokens the model can generate in a single response. Range: 128 to 16,384 (or higher for models that support it). Default: 4,096.

Tip

Setting max tokens too low may cause responses to be cut off mid-sentence. Setting it too high may increase costs for paid models. 4,096 is a good default for most use cases.
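In the raw config format (documented under Config Short Keys below), temperature and max tokens map to the t and x short keys. A minimal fragment using the defaults:

```json
{
  "t": 0.7,
  "x": 4096
}
```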

Advanced Fields

Expand the Advanced section in the config panel to access these parameters.

Reasoning / Thinking Effort

For models that support extended thinking (Claude, GPT o-series, DeepSeek R1, Gemini thinking models), this controls how much computation the model spends reasoning before responding.

Value    Description
Off      No extended thinking (standard response)
Low      Minimal reasoning, fastest responses
Medium   Moderate reasoning depth
High     Deep reasoning, more thorough
Highest  Maximum reasoning effort
Custom   Set a custom token budget for thinking

Note

Not all models support reasoning/thinking mode. The field only appears when the selected model or provider supports it. Models known to support reasoning include Anthropic Claude (extended thinking), OpenAI o1/o3 series, DeepSeek R1, xAI Grok, and Google Gemini thinking models.

Top P (Nucleus Sampling)

Controls the cumulative probability threshold for token selection. Default: 1.0 (no filtering).

  • 1.0 -- consider all tokens (default)
  • 0.9 -- consider tokens comprising the top 90% of probability mass
  • 0.1 -- very restrictive, almost deterministic

Warning

It is generally recommended to adjust either Temperature or Top P, not both at the same time. Changing both can produce unpredictable results.

Frequency Penalty

Reduces the likelihood of the model repeating tokens that have already appeared. Range: 0 to 2. Default: 0.

  • 0 -- no penalty (default)
  • 0.5 -- mild repetition reduction
  • 1.0-2.0 -- strong repetition reduction

Presence Penalty

Encourages the model to talk about new topics by penalizing tokens that have appeared at all. Range: 0 to 2. Default: 0.

  • 0 -- no penalty (default)
  • 0.5 -- mild topic diversity encouragement
  • 1.0-2.0 -- strong push toward new topics

Seed

An integer value for reproducible outputs. When the same seed, prompt, and parameters are used, the model attempts to return the same response. Not all providers support this feature.
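For the most repeatable runs, pair a fixed seed with temperature 0. In short-key form (see Config Short Keys below), that looks like the following; the seed value 42 is just illustrative:

```json
{
  "t": 0,
  "se": 42
}
```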

Stop Sequences

A comma-separated list of custom stop tokens. When the model generates any of these sequences, it stops producing further output. Useful for structured output or role-playing scenarios.

Example: END,---,</answer> causes the model to stop when it generates END, ---, or </answer>.
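In the raw config, stop sequences are stored under the st short key (a string, per Config Short Keys below); the comma-separated form mirrors what you type in the panel:

```json
{
  "st": "END,---,</answer>"
}
```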

Response Format

Controls the output format:

Value  Description
text   Standard text output (default)
json   Forces JSON output (model must support structured output)

Note

JSON mode requires the system prompt to instruct the model to produce JSON. Simply setting the response format to JSON without mentioning it in the prompt may result in errors or malformed output.
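Putting both requirements together, a sketch of a JSON-mode bot in raw form (short keys are documented below; the prompt wording is illustrative, not a built-in template):

```json
{
  "rf": "json",
  "s": "You are a classifier. Respond only with a JSON object of the form {\"label\": string, \"confidence\": number}."
}
```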

Tool Calling

Configure function/tool calling for models that support it. This is an advanced feature for building bots that can invoke external tools or structured actions.

Per-Bot API Key

Each bot can use its own API key instead of the global key:

  1. Find the API Key field in the config panel (below the provider selector)
  2. Enter a key specific to this bot
  3. This key overrides the global key for this bot only

Per-bot keys are stored locally and are never included in shared bot URLs.

Config Short Keys

Internally, bot configurations use short keys for efficient URL compression. This is useful if you are building integrations or debugging shared URLs:

Short Key  Full Name            Type    Default
n          Name                 string  "New Bot"
p          Provider             string  "anthropic"
m          Model                string  --
s          System Prompt        string  ""
t          Temperature          number  0.7
x          Max Tokens           number  4096
tp         Top P                number  1.0
fp         Frequency Penalty    number  0
pp         Presence Penalty     number  0
se         Seed                 number  --
re         Reasoning Effort     string  --
st         Stop Sequences       string  --
rf         Response Format      string  "text"
tc         Tool Choice          string  --
tl         Tool List            array   --
pi         Persona Icon         string  --
pd         Persona Description  string  --
pc         Persona Color        string  --

Council Config (c key)

When the bot is a council (2+ members), the c key contains:

Short Key  Full Name                Type     Default
cs         Council Style            string   "council"
ms         Members                  array    --
ch         Chairman                 number   0
vm         Voting Mode              string   "weighted"
sr         Skip Review              boolean  false
mx         Max Tokens (per member)  number   1024
dr         Deliberation Rounds      number   2
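As a sketch, a two-member council in raw form might look like this. The shape of each ms entry is an assumption here (each member shown as a nested bot config); the remaining keys follow the table above:

```json
{
  "n": "Review Council",
  "c": {
    "cs": "council",
    "ms": [
      { "n": "Critic", "p": "anthropic", "m": "claude-sonnet-4-20250514" },
      { "n": "Advocate", "p": "openai", "m": "gpt-4o" }
    ],
    "ch": 0,
    "vm": "weighted",
    "sr": false,
    "mx": 1024,
    "dr": 2
  }
}
```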

Chat Settings

These per-bot settings control the chat experience:

Setting           Short Key  Description                                   Default
Context Limit     cl         Max messages sent to the API (0 = unlimited)  Unlimited
Streaming         sm         Stream tokens in real time                    Enabled
Auto-Title        at         Auto-generate conversation titles             Disabled
Markdown          mr         Render markdown in responses                  Enabled
Show Token Count  stc        Display token usage per message               Disabled
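In raw form, these settings use the short keys above. The value types shown are assumptions (a number for the context limit, booleans for the toggles):

```json
{
  "cl": 50,
  "sm": true,
  "at": false,
  "mr": true,
  "stc": false
}
```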

Editing Raw Config

The config panel provides a visual editor for all fields. You do not need to edit raw JSON under normal use. However, the internal format is documented here for developers building integrations, custom providers, or tools that generate bot configurations programmatically.

Example raw config:

{
  "n": "Code Assistant",
  "p": "anthropic",
  "m": "claude-sonnet-4-20250514",
  "s": "You are an expert software engineer. Provide clear, well-documented code with explanations.",
  "t": 0.3,
  "x": 8192,
  "pi": "💻",
  "pd": "Software engineering assistant",
  "pc": "#4a90d9"
}