Google Gemini
Google Gemini provides powerful AI models with massive context windows (up to 1M tokens) and a generous free tier that requires no credit card. This makes it one of the best starting points for new users.
Getting a Free API Key
- Go to aistudio.google.com/apikey
- Sign in with your Google account
- Click Create API key
- Copy the key (starts with AIza...)
- Paste it in AI Supreme Council under Settings > AI Model > Google Gemini
Google Gemini's free tier is truly free -- no credit card, no billing setup, no trial period. You can start chatting immediately after generating your API key.
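AI Supreme Council builds requests for you, but as a rough sketch of what a raw call with your key looks like (the helper name is illustrative; the endpoint, query-parameter auth, and `contents` shape match Gemini's native format described later on this page):

```python
import json
from urllib.parse import urlencode

API_BASE = "https://generativelanguage.googleapis.com/v1beta"

def build_request(api_key: str, model: str, prompt: str):
    """Build the URL and JSON body for a single-turn generateContent call."""
    # The key travels as a URL query parameter, not an Authorization header.
    url = f"{API_BASE}/models/{model}:generateContent?{urlencode({'key': api_key})}"
    # Gemini's native format: a `contents` array with role "user"/"model".
    body = {"contents": [{"role": "user", "parts": [{"text": prompt}]}]}
    return url, json.dumps(body)

url, body = build_request("AIza...", "gemini-2.5-flash", "Hello!")
```

POSTing that body to the URL returns a JSON response containing the model's reply.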
Supported Models
| Model | Context Window | Max Output | Tier | Rate Limit (Free) | Capabilities |
|---|---|---|---|---|---|
| Gemini 3 Pro Preview | 1M | 65K | Paid | -- | Vision, tools, reasoning, code |
| Gemini 3 Flash Preview | 1M | 65K | Free | 10 RPM | Vision, tools, code |
| Gemini 2.5 Pro | 1M | 65K | Free + Paid | 5 RPM (free tier) | Vision, tools, reasoning, code |
| Gemini 2.5 Flash | 1M | 65K | Free | 10 RPM | Vision, tools, code |
| Gemini 2.5 Flash-Lite | 1M | 65K | Free | 30 RPM | Vision, code |
Free Tier Rate Limits
| Model | Requests/Min | Tokens/Min | Requests/Day |
|---|---|---|---|
| Gemini 3 Flash Preview | 10 | 250K | 500 |
| Gemini 2.5 Flash | 10 | 250K | 500 |
| Gemini 2.5 Pro | 5 | 250K | 100 |
| Gemini 2.5 Flash-Lite | 30 | 250K | 1,500 |
RPM = requests per minute, TPM = tokens per minute, RPD = requests per day.
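If you drive the API yourself rather than through the app, a small client-side throttle keeps you under these per-minute caps. This is a sketch, not part of AI Supreme Council; the class name is made up:

```python
from collections import deque

class RpmThrottle:
    """Client-side limiter that keeps requests under a per-minute cap."""

    def __init__(self, rpm: int):
        self.rpm = rpm
        self.sent = deque()  # timestamps (seconds) of requests in the last 60 s

    def wait_time(self, now: float) -> float:
        """Seconds to wait before the next request is allowed."""
        # Drop timestamps that have aged out of the rolling 60-second window.
        while self.sent and now - self.sent[0] >= 60.0:
            self.sent.popleft()
        if len(self.sent) < self.rpm:
            return 0.0
        # Oldest request in the window dictates when a slot frees up.
        return 60.0 - (now - self.sent[0])

    def record(self, now: float):
        """Call after actually sending a request."""
        self.sent.append(now)
```

For example, with `RpmThrottle(rpm=10)` for Gemini 2.5 Flash, call `wait_time(time.time())` before each request and sleep for the returned duration if it is nonzero.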
For higher rate limits, you can enable billing on your Google Cloud account. Paid pricing for Gemini 2.5 Pro is $1.25/$10.00 per MTok (input/output). Free models (Flash, Flash-Lite) remain free even with billing enabled.
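To sanity-check paid usage, per-request cost at those base rates is simple arithmetic (a sketch using only the $1.25/$10.00 per MTok figures above; any long-context or cached-token pricing tiers are not modeled):

```python
def gemini_25_pro_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimated cost in USD at $1.25 input / $10.00 output per million tokens."""
    return input_tokens / 1_000_000 * 1.25 + output_tokens / 1_000_000 * 10.00

# e.g. a 100K-token prompt with a 5K-token answer costs about $0.175
cost = gemini_25_pro_cost(100_000, 5_000)
```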
Thinking / Reasoning Support
Gemini 2.5 Pro and Gemini 3 Pro Preview support thinking mode, where the model reasons through a problem step by step before answering. Thinking output appears in a collapsible block above the response.
To enable thinking, set the Reasoning Effort in the bot configuration panel:
| Setting | Budget Tokens | Best For |
|---|---|---|
| low | 8,192 | Quick reasoning, simple problems |
| medium | 32,768 | Moderate analysis |
| high | 128,000 | Deep reasoning, complex problems |
| max | Model limit - 1024 | Maximum reasoning depth |
| Custom number | Your value | Fine-tuned control |
Thinking is implemented via Gemini's thinkingConfig parameter with a thinkingBudget value. When thinking is enabled, the app automatically increases maxOutputTokens to accommodate both thinking and response tokens. Thinking content is streamed with a thought: true flag on the response parts.
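Concretely, a request body with thinking enabled might be assembled like this (a sketch: the `thinkingConfig`/`thinkingBudget` fields and budget values follow the description above, while the helper name, the budget map, and the exact way `maxOutputTokens` is increased are illustrative assumptions):

```python
# Budget mapping mirroring the Reasoning Effort table above.
REASONING_BUDGETS = {"low": 8_192, "medium": 32_768, "high": 128_000}

def build_thinking_body(prompt: str, effort: str,
                        base_max_output: int = 65_536) -> dict:
    """Assemble a generateContent body with a thinking budget applied."""
    budget = REASONING_BUDGETS[effort]
    return {
        "contents": [{"role": "user", "parts": [{"text": prompt}]}],
        "generationConfig": {
            "thinkingConfig": {"thinkingBudget": budget},
            # Grow maxOutputTokens so thinking and response tokens both fit.
            "maxOutputTokens": base_max_output + budget,
        },
    }
```

In the streamed response, parts carrying reasoning arrive with `thought: true`, which is how the app knows to render them in the collapsible block.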
Vision Support
All Gemini models support vision input. You can:
- Paste images directly into the chat input (Ctrl+V / Cmd+V)
- Upload images using the attachment button
- Drag and drop images into the chat area
Images are sent as inline base64 data to the Gemini API. Gemini models excel at multimodal tasks -- analyzing images, reading documents, and understanding visual content.
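As a sketch of that wire format, an image can be wrapped as an inline-data part alongside the text of a user turn (the `inline_data`/`mime_type` field names follow Gemini's REST JSON; the helper names are illustrative):

```python
import base64

def image_part(image_bytes: bytes, mime_type: str = "image/png") -> dict:
    """Wrap raw image bytes as a Gemini inline-data part."""
    return {"inline_data": {
        "mime_type": mime_type,
        "data": base64.b64encode(image_bytes).decode("ascii"),
    }}

def vision_message(prompt: str, image_bytes: bytes) -> dict:
    # Text and image travel together as parts of a single user turn.
    return {"role": "user", "parts": [{"text": prompt}, image_part(image_bytes)]}
```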
Technical Details
Gemini uses its own native API format, which differs from the OpenAI-compatible format used by most other providers:
| Aspect | Gemini | OpenAI-compatible |
|---|---|---|
| Endpoint | generativelanguage.googleapis.com/v1beta | /v1/chat/completions |
| Auth | URL query parameter (?key=) | Bearer token header |
| Messages | contents array with role: "user"/"model" | messages array with role: "user"/"assistant" |
| System prompt | systemInstruction field | System message in messages array |
| Streaming | ?alt=sse parameter | stream: true in body |
AI Supreme Council handles all format differences automatically. You do not need to worry about the API format -- just select Gemini as the provider and start chatting.
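For the curious, the message-format translation can be sketched as a small function (an illustrative simplification that handles text-only messages, following the role and field mappings in the table above):

```python
def to_gemini(openai_messages: list) -> dict:
    """Translate OpenAI-style chat messages into Gemini's native body shape."""
    body = {"contents": []}
    for msg in openai_messages:
        if msg["role"] == "system":
            # System prompts move to the dedicated systemInstruction field.
            body["systemInstruction"] = {"parts": [{"text": msg["content"]}]}
        else:
            # "assistant" becomes "model"; "user" stays "user".
            role = "model" if msg["role"] == "assistant" else "user"
            body["contents"].append(
                {"role": role, "parts": [{"text": msg["content"]}]}
            )
    return body
```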
Configuration
When creating a bot profile, select Google Gemini as the provider and choose your preferred model. You can set a per-bot API key in the bot configuration panel to override the global key.
The Gemini provider uses the native streamGenerateContent API with SSE streaming. API keys are passed as a URL query parameter (not a header), which avoids CORS preflight requests and improves performance.
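A sketch of how that streaming URL is assembled (the helper is illustrative; the endpoint, `?alt=sse`, and `?key=` parameters are as described above):

```python
from urllib.parse import urlencode

def stream_url(model: str, api_key: str) -> str:
    """URL for SSE streaming via the streamGenerateContent method."""
    params = urlencode({"alt": "sse", "key": api_key})
    return ("https://generativelanguage.googleapis.com/v1beta"
            f"/models/{model}:streamGenerateContent?{params}")
```

Because the key rides in the query string rather than a custom header, the browser can send the request without an extra preflight round trip in many cases.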
Tips for Best Results
- Start here if you are new. Gemini's free tier with no credit card requirement makes it the lowest-friction way to start using AI Supreme Council.
- Use Flash-Lite for high-volume tasks. At 30 RPM free, it handles rapid-fire queries better than any other free option.
- Leverage the 1M context window. All Gemini models support 1 million tokens of context -- you can paste entire books, codebases, or document collections.
- Enable thinking for Pro models. Gemini 2.5 Pro with thinking enabled is competitive with the best reasoning models from any provider.
- Combine with OpenRouter. Use Gemini for free-tier tasks and OpenRouter for accessing models from other providers, all without a credit card.