Providers Overview
AI Supreme Council connects to large language model (LLM) providers directly from your browser. There is no proxy server in between -- your API keys and conversations go straight to the provider's API endpoint. This is the BYOK (Bring Your Own Key) model.
How It Works
- You obtain an API key from a provider (e.g., Anthropic, OpenAI, Google)
- You paste the key into AI Supreme Council's settings
- The key is stored locally in your browser (localStorage) -- it never touches our servers
- When you send a message, the browser calls the provider's API directly
- Responses stream back to you in real time via Server-Sent Events (SSE)
API keys are stored exclusively in your browser's localStorage. They are never included in shared bot URLs, never sent to AI Supreme Council servers, and never logged. Only the provider you are chatting with receives your key.
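As a rough illustration of this flow, the sketch below reads a key from localStorage and streams a response from an OpenAI-compatible endpoint. The storage key name, endpoint, and model are assumptions for the example, not the app's internal code.

```ts
// Minimal sketch of the BYOK flow. The localStorage key name ("openai_api_key"),
// the endpoint, and the model are hypothetical examples.
async function streamChat(prompt: string, onToken: (t: string) => void): Promise<void> {
  const apiKey = localStorage.getItem("openai_api_key");
  if (!apiKey) throw new Error("No API key configured");

  // The browser calls the provider directly; no proxy sits in between.
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${apiKey}` },
    body: JSON.stringify({
      model: "gpt-4.1",
      stream: true, // ask for Server-Sent Events
      messages: [{ role: "user", content: prompt }],
    }),
  });

  // Read the SSE stream and surface each content delta as it arrives.
  // (A real implementation would buffer partial lines across chunks.)
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const line of decoder.decode(value).split("\n")) {
      if (!line.startsWith("data: ") || line.includes("[DONE]")) continue;
      const delta = JSON.parse(line.slice(6)).choices?.[0]?.delta?.content;
      if (delta) onToken(delta);
    }
  }
}
```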
Provider Comparison
| Provider | API Key Required | Free Tier | Notable Models | Reasoning | Vision |
|---|---|---|---|---|---|
| Google Gemini | Yes | Yes (no credit card) | Gemini 2.5 Flash, 2.5 Pro, 3 Flash Preview | Yes | Yes |
| OpenRouter | Yes | Yes (20+ free models) | 300+ models from all providers | Yes | Yes |
| Groq | Yes | Yes (rate limited) | Llama 3.3 70B, DeepSeek R1 Distill, Compound Beta | Yes | Yes |
| Anthropic | Yes | No | Claude Opus 4.6, Sonnet 4.5, Haiku 4.5 | Yes | Yes |
| OpenAI | Yes | No | GPT-5, GPT-4.1, o3, o4-mini | Yes | Yes |
| xAI | Yes | No | Grok 4.1 Fast, Grok 4, Grok 3 | Yes | Yes |
| DeepSeek | Yes | No | DeepSeek V3.2, R1, V3.2 Reasoner | Yes | No |
| Mistral | Yes | No | Mistral Large 3, Codestral, Devstral 2 | No | Yes |
| Ollama | No | Free (local) | Any model you install locally | Varies | Varies |
The fastest way to get started is with Google Gemini (free API key, no credit card) or OpenRouter (20+ free models including DeepSeek R1, Qwen 3, and Llama 3.3). See Getting Started for a walkthrough.
Adding an API Key
- Open AI Supreme Council at aiscouncil.com
- Click the Settings gear icon in the sidebar
- Go to the AI Model tab
- Find the provider you want and paste your API key
- The key is saved immediately and persisted in your browser
You can also enter API keys during the first-run wizard when creating your first bot profile.
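Conceptually, the persistence step looks something like the sketch below; the "apiKeys" storage key and its JSON shape are illustrative assumptions rather than the app's actual schema.

```ts
// Illustrative per-provider key storage in localStorage. The "apiKeys" storage key
// and its JSON shape are assumptions, not the app's real schema.
function saveApiKey(provider: string, key: string): void {
  const keys = JSON.parse(localStorage.getItem("apiKeys") ?? "{}");
  keys[provider] = key;
  localStorage.setItem("apiKeys", JSON.stringify(keys)); // survives page reloads
}

function loadApiKey(provider: string): string | undefined {
  return JSON.parse(localStorage.getItem("apiKeys") ?? "{}")[provider];
}
```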
Per-Bot API Keys
Each bot profile can have its own API key that overrides the global key for that provider. This is useful if you have separate keys for different projects or billing accounts. Set a per-bot key in the bot's configuration panel (right sidebar).
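The override logic amounts to a simple fallback, roughly like the hypothetical sketch below (the field names are assumed, not the app's actual types):

```ts
// Hypothetical key resolution: a per-bot key wins over the global key for that provider.
interface BotProfile {
  provider: string;
  model: string;
  apiKey?: string; // optional per-bot override
}

function resolveApiKey(bot: BotProfile, globalKeys: Record<string, string | undefined>): string {
  const key = bot.apiKey ?? globalKeys[bot.provider];
  if (!key) throw new Error(`No API key configured for ${bot.provider}`);
  return key;
}
```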
How Provider Selection Works
When you create a bot profile, you choose a provider and a model. The provider determines which API endpoint is called, and the model determines which specific AI you are chatting with.
Models are loaded from the community model registry, which is updated independently of the app. New models appear automatically when the registry is refreshed (every 24 hours, or on page reload).
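A refresh policy like that could be implemented along these lines; the registry URL, cache key, and entry shape below are assumptions for illustration only.

```ts
// Sketch of a 24-hour registry cache. URL, storage key, and entry fields are hypothetical.
const REGISTRY_URL = "https://registry.example.com/models.json";
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

interface ModelEntry {
  id: string;
  provider: string;
  reasoning?: boolean;
  vision?: boolean;
}

async function loadModelRegistry(): Promise<ModelEntry[]> {
  const cached = localStorage.getItem("modelRegistry");
  if (cached) {
    const { fetchedAt, models } = JSON.parse(cached);
    if (Date.now() - fetchedAt < ONE_DAY_MS) return models; // still fresh
  }
  // Stale or missing: fetch a fresh copy (a page reload can also trigger this path).
  const models: ModelEntry[] = await (await fetch(REGISTRY_URL)).json();
  localStorage.setItem("modelRegistry", JSON.stringify({ fetchedAt: Date.now(), models }));
  return models;
}
```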
API Formats
Most providers use the OpenAI-compatible Chat Completions API format; Anthropic and Google Gemini are the two exceptions:
| Format | Providers | Notes |
|---|---|---|
| OpenAI-compatible | OpenAI, xAI, OpenRouter, DeepSeek, Mistral, Groq, Ollama, and others | Standard POST /v1/chat/completions with Bearer auth |
| Anthropic | Anthropic | Custom Messages API with x-api-key header |
| Gemini | Google Gemini | Native generateContent API with ?key= query param |
These differences are handled automatically -- you do not need to worry about API formats when using the app.
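For reference, the three request shapes differ roughly as follows. The keys, model IDs, and bodies are illustrative placeholders; the app builds the correct request for whichever provider a bot uses.

```ts
// Illustrative request shapes for the three formats. Keys and model IDs are placeholders.
const openaiKey = "sk-...";
const anthropicKey = "sk-ant-...";
const geminiKey = "AIza...";

// OpenAI-compatible: POST /v1/chat/completions with Bearer auth.
await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json", Authorization: `Bearer ${openaiKey}` },
  body: JSON.stringify({ model: "gpt-4.1", messages: [{ role: "user", content: "Hi" }] }),
});

// Anthropic: Messages API authenticated with an x-api-key header.
await fetch("https://api.anthropic.com/v1/messages", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": anthropicKey,
    "anthropic-version": "2023-06-01",
  },
  body: JSON.stringify({
    model: "claude-sonnet-4-5",
    max_tokens: 1024,
    messages: [{ role: "user", content: "Hi" }],
  }),
});

// Gemini: native generateContent endpoint with the key as a query parameter.
await fetch(
  `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-flash:generateContent?key=${geminiKey}`,
  {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: "Hi" }] }] }),
  },
);
```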
Reasoning / Thinking Support
Several providers support reasoning or "thinking" modes where the model shows its step-by-step thought process before answering:
| Provider | Feature Name | How to Enable |
|---|---|---|
| Anthropic | Extended Thinking | Set reasoning effort in config panel (budget tokens or preset) |
| Google Gemini | Thinking Config | Set reasoning effort in config panel (budget tokens or preset) |
| OpenAI-compatible | Reasoning Effort | Set to low, medium, or high in config panel |
Reasoning output appears in a collapsible "thinking" block above the model's response.
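Under the hood these settings map to different request parameters. The bodies below are a hedged sketch of the common shapes; the budgets and model IDs are examples, and exact field names can vary by provider and model.

```ts
// Example reasoning settings per format. Budgets and model IDs are illustrative.

// Anthropic Extended Thinking: an explicit thinking block with a token budget.
const anthropicBody = {
  model: "claude-sonnet-4-5",
  max_tokens: 4096,
  thinking: { type: "enabled", budget_tokens: 2048 },
  messages: [{ role: "user", content: "Why is the sky blue?" }],
};

// Google Gemini Thinking Config: a thinking budget inside generationConfig.
const geminiBody = {
  contents: [{ parts: [{ text: "Why is the sky blue?" }] }],
  generationConfig: { thinkingConfig: { thinkingBudget: 2048 } },
};

// OpenAI-compatible Reasoning Effort: a low / medium / high preset.
const openaiBody = {
  model: "o4-mini",
  reasoning_effort: "medium",
  messages: [{ role: "user", content: "Why is the sky blue?" }],
};
```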
Custom Providers
You can add any OpenAI-compatible API endpoint as a custom provider:
- Open Settings > AI Model
- Scroll to Custom Providers
- Enter a name, API endpoint URL, and API key
- The custom provider appears in the provider dropdown when creating bot profiles
Custom providers are persisted in localStorage and support all standard features (streaming, tool calling, etc.) as long as the endpoint implements the OpenAI Chat Completions format.
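As a sketch, a custom provider boils down to a base URL plus a key, with requests sent in the standard Chat Completions shape. The config fields and endpoint below are hypothetical.

```ts
// Hypothetical custom provider config and a standard Chat Completions call against it.
interface CustomProvider {
  name: string;
  baseUrl: string; // must serve an OpenAI-compatible API
  apiKey: string;
}

const gateway: CustomProvider = {
  name: "Internal LLM gateway",
  baseUrl: "https://llm.example.internal/v1",
  apiKey: "sk-...",
};

const res = await fetch(`${gateway.baseUrl}/chat/completions`, {
  method: "POST",
  headers: { "Content-Type": "application/json", Authorization: `Bearer ${gateway.apiKey}` },
  body: JSON.stringify({
    model: "my-model",
    stream: true, // streaming works as long as the endpoint implements it
    messages: [{ role: "user", content: "Hello" }],
  }),
});
```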
Usage Tracking
AI Supreme Council tracks token usage per provider in Settings > Usage. You can see input tokens, output tokens, and estimated costs across all your chat sessions. This helps you monitor spending without needing to check each provider's dashboard separately.
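Conceptually, this amounts to accumulating the usage block that most providers return with each response. The sketch below uses an invented price table, since real pricing differs per model and provider.

```ts
// Sketch of per-provider usage accounting. The price table is invented for illustration.
interface Usage {
  prompt_tokens: number;
  completion_tokens: number;
}

const totals: Record<string, { input: number; output: number; estCost: number }> = {};

// Hypothetical prices in dollars per million tokens.
const pricePerMTok: Record<string, { input: number; output: number }> = {
  openai: { input: 2.0, output: 8.0 },
};

function recordUsage(provider: string, usage: Usage): void {
  const t = (totals[provider] ??= { input: 0, output: 0, estCost: 0 });
  t.input += usage.prompt_tokens;
  t.output += usage.completion_tokens;
  const price = pricePerMTok[provider];
  if (price) {
    t.estCost +=
      (usage.prompt_tokens * price.input + usage.completion_tokens * price.output) / 1_000_000;
  }
}
```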