Bot Configuration
Every bot in AI Supreme Council is defined by a configuration object that controls its provider, model, behavior, and appearance. This page documents the full configuration schema and explains each field.
The Config Panel
The config panel slides out from the right side of the screen. It is visible by default on desktop and accessible via the settings icon on mobile. The panel contains:
- Provider and model selectors at the top
- System prompt text area
- Generation parameters (temperature, max tokens, etc.)
- Advanced settings (expandable section)
- Persona settings (icon, description, color)
- Per-bot API key field
Changes are saved automatically as you edit.
Core Fields
Provider
The LLM provider that hosts the model. Select from the dropdown:
| Provider | API Key Source | Notes |
|---|---|---|
| Anthropic | console.anthropic.com | Claude models -- advanced reasoning |
| OpenAI | platform.openai.com | GPT-4o, o3, DALL-E |
| xAI | console.x.ai | Grok models |
| Google Gemini | aistudio.google.com | Free Flash tier available |
| OpenRouter | openrouter.ai | 300+ models, free tier |
| Groq | console.groq.com | Ultra-fast LPU inference |
| DeepSeek | platform.deepseek.com | R1 and V3 reasoning models |
| Mistral | console.mistral.ai | Large, Small, Codestral |
| Ollama | N/A (local) | Runs locally, no API key needed |
| Custom | User-defined | Any OpenAI-compatible endpoint |
Changing the provider updates the model dropdown to show available models for that provider.
Model
The specific model from the selected provider. The dropdown is populated from the community model registry and shows:
- Model name and ID
- Pricing tier (free or paid)
- Context window size
- Capabilities (vision, reasoning, tools)
When you select a model, an info card appears below the dropdown showing pricing (input/output per million tokens), context window size, max output tokens, and capability tags. This helps you choose the right model for your task.
System Prompt
Instructions sent to the model at the beginning of every conversation. Defines the bot's behavior, personality, and constraints. See System Prompts for detailed guidance and templates.
The system prompt field includes a template dropdown with 5 built-in presets. Select a template and customize it to your needs.
Temperature
Controls the randomness/creativity of responses.
| Value | Behavior |
|---|---|
| 0 | Deterministic -- always picks the most likely token |
| 0.3 | Conservative -- focused, consistent responses |
| 0.7 | Balanced (default) -- good mix of creativity and coherence |
| 1.0 | Creative -- more varied responses |
| 1.5-2.0 | Very creative -- unpredictable, experimental |
Max Tokens
Maximum number of tokens the model can generate in a single response. Range: 128 to 16,384 (or higher for models that support it). Default: 4,096.
Setting max tokens too low may cause responses to be cut off mid-sentence. Setting it too high may increase costs for paid models. 4,096 is a good default for most use cases.
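For reference, these core fields map onto the short keys documented under Config Short Keys below. A minimal sketch of a higher-temperature brainstorming bot (the bot name is illustrative, and the provider and model IDs are assumptions; check the model dropdown for the exact IDs in your registry):

```json
{
  "n": "Brainstormer",
  "p": "openai",
  "m": "gpt-4o",
  "t": 1.0,
  "x": 2048
}
```

A higher temperature paired with a modest token cap keeps output varied without letting individual responses run long.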
Advanced Fields
Expand the Advanced section in the config panel to access these parameters.
Reasoning / Thinking Effort
For models that support extended thinking (Claude, GPT o-series, DeepSeek R1, Gemini thinking models), this controls how much computation the model spends reasoning before responding.
| Value | Description |
|---|---|
| Off | No extended thinking (standard response) |
| Low | Minimal reasoning, fastest responses |
| Medium | Moderate reasoning depth |
| High | Deep reasoning, more thorough |
| Highest | Maximum reasoning effort |
| Custom | Set a custom budget token count |
Not all models support reasoning/thinking mode. The field only appears when the selected model or provider supports it. Models known to support reasoning include Anthropic Claude (extended thinking), OpenAI o1/o3 series, DeepSeek R1, xAI Grok, and Google Gemini thinking models.
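In the raw config this setting is stored under the re short key (see Config Short Keys below). A minimal sketch, assuming the effort levels serialize as lowercase versions of the labels above:

```json
{
  "n": "Deep Thinker",
  "p": "anthropic",
  "m": "claude-sonnet-4-20250514",
  "re": "high",
  "x": 8192
}
```

Higher effort levels spend more computation before answering, which typically means slower and costlier responses, so reserve High and Highest for genuinely hard problems.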
Top P (Nucleus Sampling)
Controls the cumulative probability threshold for token selection. Default: 1.0 (no filtering).
- 1.0 -- consider all tokens (default)
- 0.9 -- consider tokens comprising the top 90% of probability mass
- 0.1 -- very restrictive, almost deterministic
It is generally recommended to adjust either Temperature or Top P, not both at the same time. Changing both can produce unpredictable results.
Frequency Penalty
Reduces the likelihood of the model repeating tokens that have already appeared. Range: 0 to 2. Default: 0.
- 0 -- no penalty (default)
- 0.5 -- mild repetition reduction
- 1.0-2.0 -- strong repetition reduction
Presence Penalty
Encourages the model to talk about new topics by penalizing tokens that have appeared at all. Range: 0 to 2. Default: 0.
- 0 -- no penalty (default)
- 0.5 -- mild topic diversity encouragement
- 1.0-2.0 -- strong push toward new topics
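These three sampling controls map to the tp, fp, and pp short keys and sit alongside the core fields in the same config object. A sketch that follows the advice above by leaving Temperature at its default and only nudging the other knobs (omitted keys fall back to the defaults listed under Config Short Keys below):

```json
{
  "t": 0.7,
  "tp": 0.9,
  "fp": 0.5,
  "pp": 0.5
}
```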
Seed
An integer value for reproducible outputs. When the same seed, prompt, and parameters are used, the model attempts to return the same response. Not all providers support this feature.
Stop Sequences
A comma-separated list of custom stop tokens. When the model generates any of these sequences, it stops producing further output. Useful for structured output or role-playing scenarios.
Example: `END,---,</answer>` causes the model to stop when it generates `END`, `---`, or `</answer>`.
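Both fields have short keys in the raw config: se for the seed and st for the comma-separated stop list. A small sketch combining the example above with a fixed seed (reproducibility still depends on provider support):

```json
{
  "se": 42,
  "st": "END,---,</answer>"
}
```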
Response Format
Controls the output format:
| Value | Description |
|---|---|
| text | Standard text output (default) |
| json | Forces JSON output (model must support structured output) |
JSON mode requires the system prompt to instruct the model to produce JSON. Simply setting the response format to JSON without mentioning it in the prompt may result in errors or malformed output.
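As an illustration, a JSON-mode bot should both set the response format and describe the expected structure in its system prompt. The bot name and schema below are made up for the example:

```json
{
  "n": "Sentiment Classifier",
  "s": "You are a sentiment classifier. Respond only with JSON of the form {\"sentiment\": \"positive\" | \"negative\" | \"neutral\", \"confidence\": 0.0-1.0}.",
  "rf": "json"
}
```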
Tool Calling
Configure function/tool calling for models that support it. This is an advanced feature for building bots that can invoke external tools or structured actions.
Per-Bot API Key
Each bot can use its own API key in place of the global key:
- Find the API Key field in the config panel (below the provider selector)
- Enter a key specific to this bot
- This key overrides the global key for this bot only
Per-bot keys are stored locally and are never included in shared bot URLs.
Config Short Keys
Internally, bot configurations use short keys for efficient URL compression. This is useful if you are building integrations or debugging shared URLs:
| Short Key | Full Name | Type | Default |
|---|---|---|---|
| n | Name | string | "New Bot" |
| p | Provider | string | "anthropic" |
| m | Model | string | -- |
| s | System Prompt | string | "" |
| t | Temperature | number | 0.7 |
| x | Max Tokens | number | 4096 |
| tp | Top P | number | 1.0 |
| fp | Frequency Penalty | number | 0 |
| pp | Presence Penalty | number | 0 |
| se | Seed | number | -- |
| re | Reasoning Effort | string | -- |
| st | Stop Sequences | string | -- |
| rf | Response Format | string | "text" |
| tc | Tool Choice | string | -- |
| tl | Tool List | array | -- |
| pi | Persona Icon | string | -- |
| pd | Persona Description | string | -- |
| pc | Persona Color | string | -- |
Council Config (c key)
When the bot is a council (2+ members), the c key contains:
| Short Key | Full Name | Type | Default |
|---|---|---|---|
| cs | Council Style | string | "council" |
| ms | Members | array | -- |
| ch | Chairman | number | 0 |
| vm | Voting Mode | string | "weighted" |
| sr | Skip Review | boolean | false |
| mx | Max Tokens (per member) | number | 1024 |
| dr | Deliberation Rounds | number | 2 |
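Putting the pieces together, here is a hedged sketch of a two-member council config. It assumes ch is an index into the ms array and that each member entry is itself a bot config using the same short keys; the names, providers, and model IDs are illustrative:

```json
{
  "n": "Review Council",
  "c": {
    "cs": "council",
    "ms": [
      { "n": "Optimist", "p": "anthropic", "m": "claude-sonnet-4-20250514" },
      { "n": "Skeptic", "p": "openai", "m": "gpt-4o" }
    ],
    "ch": 0,
    "vm": "weighted",
    "sr": false,
    "mx": 1024,
    "dr": 2
  }
}
```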
Chat Settings
These per-bot settings control the chat experience:
| Setting | Short Key | Description | Default |
|---|---|---|---|
| Context Limit | cl | Max messages sent to the API (0 = unlimited) | Unlimited |
| Streaming | sm | Stream tokens in real-time | Enabled |
| Auto-Title | at | Auto-generate conversation titles | Disabled |
| Markdown | mr | Render markdown in responses | Enabled |
| Show Token Count | stc | Display token usage per message | Disabled |
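Using the same short keys, a sketch of chat settings that caps the context at the 20 most recent messages while leaving the other options at their defaults (it is assumed these keys live in the same per-bot config object):

```json
{
  "cl": 20,
  "sm": true,
  "at": false,
  "mr": true,
  "stc": false
}
```

A smaller context limit keeps requests cheaper in long conversations, at the cost of the bot losing sight of earlier messages.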
Editing Raw Config
The config panel provides a visual editor for all fields. You do not need to edit raw JSON under normal use. However, the internal format is documented here for developers building integrations, custom providers, or tools that generate bot configurations programmatically.
Example raw config:
```json
{
  "n": "Code Assistant",
  "p": "anthropic",
  "m": "claude-sonnet-4-20250514",
  "s": "You are an expert software engineer. Provide clear, well-documented code with explanations.",
  "t": 0.3,
  "x": 8192,
  "pi": "💻",
  "pd": "Software engineering assistant",
  "pc": "#4a90d9"
}
```