Using LLM Providers

IdeaTwister works with any LLM provider LiteLLM supports. The three first-class providers (Gemini, Anthropic, OpenAI) are auto-detected. To use anything else, export that provider's API key and set IDEATWISTER_MODEL.

The two env vars that matter
  1. The provider's API key. Each provider names it differently. The tables below show which one to export.
  2. IDEATWISTER_MODEL=<provider>/<model>. Only needed for non-first-class providers. Format example: openrouter/meta-llama/llama-3.3-70b-instruct.

First-class providers

Export one of these keys and the installer picks a sensible default model. No IDEATWISTER_MODEL needed. Defaults are editable per-install in config.json.

| Provider  | Default model                 | Env var           | Get a key                      |
|-----------|-------------------------------|-------------------|--------------------------------|
| Gemini    | gemini/gemini-3-flash-preview | GEMINI_API_KEY    | aistudio.google.com/app/apikey |
| Anthropic | anthropic/claude-sonnet-4-5   | ANTHROPIC_API_KEY | console.anthropic.com          |
| OpenAI    | openai/gpt-4o                 | OPENAI_API_KEY    | platform.openai.com/api-keys   |

If you export more than one of these, the installer picks Gemini first, then Anthropic, then OpenAI.
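
For example, a minimal Anthropic-only setup (the key value is a placeholder):

```bash
# First-class setup: export one key, skip IDEATWISTER_MODEL entirely.
export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder; get a real key at console.anthropic.com

# The installer defaults to anthropic/claude-sonnet-4-5 (editable in config.json).
/ideatwister some idea here
```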

Other LiteLLM providers

For anything below, export the API key and set IDEATWISTER_MODEL to a model id from that provider. Source of truth for the full list is the LiteLLM provider docs.

| Provider | Sample IDEATWISTER_MODEL | API key env var | Get a key | Notes |
|---|---|---|---|---|
| OpenRouter | openrouter/meta-llama/llama-3.3-70b-instruct | OPENROUTER_API_KEY | openrouter.ai/keys | Routes to many models behind one key. Cheapest entry into Llama, Mistral, and friends. |
| DeepSeek | deepseek/deepseek-chat | DEEPSEEK_API_KEY | platform.deepseek.com | Strong code and reasoning models at low cost. |
| Together AI | together_ai/meta-llama/Llama-3.3-70B-Instruct-Turbo | TOGETHER_AI_API_KEY | api.together.ai | Open-weight models with fast inference. |
| Groq | groq/llama-3.3-70b-versatile | GROQ_API_KEY | console.groq.com | Sub-second latency for open-weight models. |
| Mistral | mistral/mistral-large-latest | MISTRAL_API_KEY | console.mistral.ai | EU-hosted. Good for regulated workloads. |
| Cohere | cohere/command-r-plus | COHERE_API_KEY | dashboard.cohere.com/api-keys | Strong RAG and grounding models. |
| Perplexity | perplexity/llama-3.1-sonar-large-128k-online | PERPLEXITYAI_API_KEY | docs.perplexity.ai | Built-in web search. Overlaps with the orchestrator's research phase. |
| Fireworks | fireworks_ai/accounts/fireworks/models/llama-v3p3-70b-instruct | FIREWORKS_API_KEY | fireworks.ai | Fast open-weight inference. |
| xAI | xai/grok-2-latest | XAI_API_KEY | console.x.ai | |
| Ollama (local) | ollama/llama3.3 | (none, runs locally) | ollama.com | Free and runs on your machine. Slow without a GPU. |

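Ollama is the odd one out: no API key, but the model must exist locally before IdeaTwister can call it. A minimal sketch, assuming the standard ollama CLI:

```bash
# Run a local model with no API key. Requires the ollama daemon (ollama.com).
ollama pull llama3.3                        # download the model weights locally
export IDEATWISTER_MODEL="ollama/llama3.3"  # point IdeaTwister at the local model
```
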
Web search caveat

Provider-native web search is wired up only for Gemini, Anthropic, and OpenAI. Other providers run without it. Variations still ship, just less grounded in fresh sources.

Always run precheck

After switching providers, run python scripts/precheck.py. It does a live ping to confirm the key works before you spend tokens on a full run.
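
A typical switch-and-verify sequence, using the Groq row from the table above (the key value is a placeholder):

```bash
export GROQ_API_KEY="gsk_..."                            # placeholder key
export IDEATWISTER_MODEL="groq/llama-3.3-70b-versatile"  # model id from the table above
python scripts/precheck.py                               # live ping; fails before tokens are spent
```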

Worked example

Here is the full flow for OpenRouter:

```bash
# In your shell rc (~/.zshrc or ~/.bashrc):
export OPENROUTER_API_KEY="sk-or-..."
export IDEATWISTER_MODEL="openrouter/meta-llama/llama-3.3-70b-instruct"

# Restart your AI CLI so it picks up the new env vars. Then:
/ideatwister some idea here
```

DeepSeek looks the same with one swap:

```bash
export DEEPSEEK_API_KEY="sk-..."
export IDEATWISTER_MODEL="deepseek/deepseek-chat"
```

Model validation (strict mode)

LiteLLM keeps a registry of known model ids, but it lags real model releases by a few days to a few weeks. By default the precheck is non-strict: an unknown model id produces a warning and the run proceeds. The live ping is what actually gates the run.

If you want hard-fail behaviour on unknown ids (useful for catching typos in CI-style automation), edit the installed config.json for your platform and set "strictModelCheck": true; a one-line sketch follows the path list below.

Where the file lives

  • Claude Code: ~/.claude/skills/ideatwister/scripts/config.json
  • Cursor: ~/.cursor/skills/ideatwister/scripts/config.json
  • Other CLI platforms: ~/.agents/skills/ideatwister/scripts/config.json
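
Editing by hand works; so does a one-liner with jq (Claude Code path shown, swap in your platform's path from the list above):

```bash
# Enable hard-fail on unknown model ids (requires jq).
CFG=~/.claude/skills/ideatwister/scripts/config.json
jq '.strictModelCheck = true' "$CFG" > "$CFG.tmp" && mv "$CFG.tmp" "$CFG"
```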

Troubleshooting

Precheck failed with key-mismatch

Your IDEATWISTER_MODEL points at a provider you have not exported a key for. Either export the matching key, or change the model id to one that matches a key you do have.
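
Both fixes, with values borrowed from the provider table (key values are placeholders):

```bash
# Option 1: export the key the model id expects.
export GROQ_API_KEY="gsk_..."                       # if IDEATWISTER_MODEL starts with groq/

# Option 2: point the model id at a provider whose key you already have.
export IDEATWISTER_MODEL="deepseek/deepseek-chat"   # assuming DEEPSEEK_API_KEY is set
```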

Precheck failed with smoke-test-failed

Your key is set, but the live API call did not go through. Common causes: wrong key for the project, expired key, a typo in the model name, or a network block. Check the precheck report for the exact short reason.
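
Two quick shell checks rule out the most common local causes before you re-run precheck:

```bash
# Is the key actually set in this shell? (Prints only a prefix, not the whole secret.)
printf '%.10s...\n' "$OPENROUTER_API_KEY"

# Does the model id match the provider table exactly? Typos here fail the smoke test.
echo "$IDEATWISTER_MODEL"
```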

Variations feel shallow on a non-first-class provider

Provider-native web search is wired up only for Gemini, Anthropic, and OpenAI. Other providers run without web search, so variations get less grounding from fresh sources. Either add a Gemini, Anthropic, or OpenAI key for the research phase, or accept the shallower depth.
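
A sketch of the mixed setup, assuming the research phase auto-detects a first-class key the same way the installer does (that pickup is an assumption, not something this page confirms):

```bash
# Main model stays on OpenRouter; a Gemini key grounds the research phase.
export OPENROUTER_API_KEY="sk-or-..."
export IDEATWISTER_MODEL="openrouter/meta-llama/llama-3.3-70b-instruct"
export GEMINI_API_KEY="..."   # assumption: picked up for web-search-backed research
```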

How do I switch back to a first-class provider?

Unset IDEATWISTER_MODEL. Auto-detection takes over again and picks Gemini, Anthropic, or OpenAI based on which key you have exported.
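
In practice that is one command:

```bash
unset IDEATWISTER_MODEL   # next run auto-detects Gemini, then Anthropic, then OpenAI
```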

Where is the full LiteLLM provider list?

See the LiteLLM docs at docs.litellm.ai/docs/providers. Anything they support works here. The table above is the curated set we test against most often.

Related