Using LLM Providers
IdeaTwister works with any LLM provider LiteLLM supports. The three first-class providers (Gemini, Anthropic, OpenAI) are auto-detected. To use anything else, export that provider's API key and set IDEATWISTER_MODEL.
1. The provider's API key. Each provider names it differently; the tables below show which one to export.
2. IDEATWISTER_MODEL=<provider>/<model>. Only needed for non-first-class providers. Format example: openrouter/meta-llama/llama-3.3-70b-instruct.
First-class providers
Export one of these keys and the installer picks a sensible default model. No IDEATWISTER_MODEL needed. Defaults are editable per-install in config.json.
| Provider | Default model | Env var | Get a key |
|---|---|---|---|
| Gemini | gemini/gemini-3-flash-preview | GEMINI_API_KEY | aistudio.google.com/app/apikey |
| Anthropic | anthropic/claude-sonnet-4-5 | ANTHROPIC_API_KEY | console.anthropic.com |
| OpenAI | openai/gpt-4o | OPENAI_API_KEY | platform.openai.com/api-keys |
If you export more than one of these, the installer picks Gemini first, then Anthropic, then OpenAI.
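That priority order can be sketched as a small shell helper. This is illustrative only, not part of IdeaTwister — the installer's actual detection code may differ:

```shell
# Hypothetical sketch of the installer's provider detection order:
# Gemini first, then Anthropic, then OpenAI.
pick_provider() {
  if   [ -n "${GEMINI_API_KEY:-}" ];    then echo "gemini"
  elif [ -n "${ANTHROPIC_API_KEY:-}" ]; then echo "anthropic"
  elif [ -n "${OPENAI_API_KEY:-}" ];    then echo "openai"
  else echo "none"
  fi
}
```

If you want a specific first-class provider, the simplest move is to export only that provider's key.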
Other LiteLLM providers
For anything below, export the provider's API key and set IDEATWISTER_MODEL to a model id from that provider. The source of truth for the full list is the LiteLLM provider docs.
| Provider | Sample IDEATWISTER_MODEL | API key env var | Get a key | Notes |
|---|---|---|---|---|
| OpenRouter | openrouter/meta-llama/llama-3.3-70b-instruct | OPENROUTER_API_KEY | openrouter.ai/keys | Routes to many models behind one key. Cheapest entry into Llama, Mistral, and friends. |
| DeepSeek | deepseek/deepseek-chat | DEEPSEEK_API_KEY | platform.deepseek.com | Strong code and reasoning models at low cost. |
| Together AI | together_ai/meta-llama/Llama-3.3-70B-Instruct-Turbo | TOGETHER_AI_API_KEY | api.together.ai | Open-weight models with fast inference. |
| Groq | groq/llama-3.3-70b-versatile | GROQ_API_KEY | console.groq.com | Sub-second latency for open-weight models. |
| Mistral | mistral/mistral-large-latest | MISTRAL_API_KEY | console.mistral.ai | EU-hosted. Good for regulated workloads. |
| Cohere | cohere/command-r-plus | COHERE_API_KEY | dashboard.cohere.com/api-keys | Strong RAG and grounding models. |
| Perplexity | perplexity/llama-3.1-sonar-large-128k-online | PERPLEXITYAI_API_KEY | docs.perplexity.ai | Built-in web search. Overlaps with the orchestrator’s research phase. |
| Fireworks | fireworks_ai/accounts/fireworks/models/llama-v3p3-70b-instruct | FIREWORKS_API_KEY | fireworks.ai | Fast open-weight inference. |
| xAI | xai/grok-2-latest | XAI_API_KEY | console.x.ai | |
| Ollama (local) | ollama/llama3.3 | (none, runs locally) | ollama.com | Free and runs on your machine. Slow without a GPU. |
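The local Ollama route needs no API key at all — just a one-time model download and the env var. A sketch, assuming Ollama is already installed from ollama.com:

```shell
# One-time model download (requires the Ollama daemon to be running):
#   ollama pull llama3.3
# Then point IdeaTwister at the local model -- no API key involved:
export IDEATWISTER_MODEL="ollama/llama3.3"
```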
Provider-native web search is wired up only for Gemini, Anthropic, and OpenAI. Other providers run without it. Variations still ship, just less grounded in fresh sources.
After switching providers, run python scripts/precheck.py. It does a live ping to confirm the key works before you spend tokens on a full run.
Worked example
Here is the full flow for OpenRouter:
```shell
# In your shell rc (~/.zshrc or ~/.bashrc):
export OPENROUTER_API_KEY="sk-or-..."
export IDEATWISTER_MODEL="openrouter/meta-llama/llama-3.3-70b-instruct"

# Restart your AI CLI so it picks up the new env vars. Then:
/ideatwister some idea here
```
DeepSeek looks the same with one swap:
```shell
export DEEPSEEK_API_KEY="sk-..."
export IDEATWISTER_MODEL="deepseek/deepseek-chat"
```
Model validation (strict mode)
LiteLLM keeps a registry of known model ids, but it lags real model releases by a few days to a few weeks. By default the precheck is non-strict: an unknown model id produces a warning and the run proceeds. The live ping is what actually gates the run.
If you want hard-fail behaviour on unknown ids (useful for catching typos in CI-style automation), edit the installed config.json for your platform and set "strictModelCheck": true.
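For example, a config.json with strict mode on might look like this. The "model" key and its value are illustrative — keep whatever your install already has and only add or flip the strictModelCheck flag:

```json
{
  "model": "gemini/gemini-3-flash-preview",
  "strictModelCheck": true
}
```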
Where the file lives
- Claude Code: ~/.claude/skills/ideatwister/scripts/config.json
- Cursor: ~/.cursor/skills/ideatwister/scripts/config.json
- Other CLI platforms: ~/.agents/skills/ideatwister/scripts/config.json
Troubleshooting
Precheck failed with key-mismatch
Your IDEATWISTER_MODEL points at a provider you have not exported a key for. Either export the matching key, or change the model id to one that matches a key you do have.
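The check amounts to mapping the model id's provider prefix to the env var that provider expects. A sketch of that mapping — the real precheck covers more providers than the four shown here:

```shell
# Map a <provider>/<model> id to the API key env var it requires.
# Hypothetical helper; IdeaTwister's actual precheck logic may differ.
required_key_var() {
  case "${1%%/*}" in
    openrouter) echo "OPENROUTER_API_KEY" ;;
    deepseek)   echo "DEEPSEEK_API_KEY" ;;
    groq)       echo "GROQ_API_KEY" ;;
    mistral)    echo "MISTRAL_API_KEY" ;;
    *)          echo "" ;;
  esac
}
```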
Precheck failed with smoke-test-failed
Your key is set, but the live API call did not go through. Common causes: wrong key for the project, expired key, a typo in the model name, or a network block. Check the precheck report for the exact short reason.
Variations feel shallow on a non-first-class provider
Provider-native web search is wired up only for Gemini, Anthropic, and OpenAI. Other providers run without web search, so variations get less grounding from fresh sources. Either add a Gemini, Anthropic, or OpenAI key for the research phase, or accept the shallower depth.
How do I switch back to a first-class provider?
Unset IDEATWISTER_MODEL. Auto-detection takes over again and picks Gemini, Anthropic, or OpenAI based on which key you have exported.
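In shell terms:

```shell
unset IDEATWISTER_MODEL   # auto-detection resumes on the next run
```

Remove the export line from your shell rc as well, or the variable comes back in your next session.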
Where is the full LiteLLM provider list?
See the LiteLLM docs at docs.litellm.ai/docs/providers. Anything they support works here. The table above is the curated set we test against most often.