# Custom model support
Configure which AI model to use for each Nimbus role. Supports Anthropic (default), OpenAI, and Ollama (local).
## View current configuration

```bash
nimbus models
```

Output:

```
planner      claude-opus-4-6      (default)
implementer  claude-sonnet-4-6    (default)
reviewer     gpt-4o               (custom)
chat         claude-sonnet-4-6    (default)
```
## Configuration

In `~/.nimbus/config.toml`:

```toml
[models]
planner = "claude-opus-4-6"
implementer = "claude-sonnet-4-6"
reviewer = "claude-haiku-4-5-20251001"
chat = "claude-sonnet-4-6"

# OpenAI (requires: pip install openai + OPENAI_API_KEY)
[models.openai]
enabled = false
api_key = ""

# Ollama — local models, fully offline
[models.ollama]
enabled = false
base_url = "http://localhost:11434"
planner = "deepseek-coder:33b"
```
## Supported models

Anthropic (default):

- `claude-opus-4-6` — most capable, used for planning
- `claude-sonnet-4-6` — fast and capable, used for implementation
- `claude-haiku-4-5-20251001` — fastest, used for review
OpenAI (requires `pip install openai`):

- `gpt-4o`
- `gpt-4o-mini`
- `gpt-4-turbo`
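The OpenAI provider needs the client library installed and a key available. Per the comment in the config template above, the key can come from the `OPENAI_API_KEY` environment variable instead of the `api_key` field:

```bash
pip install openai
export OPENAI_API_KEY="sk-..."  # alternative to setting api_key in config.toml
```

With the key in place, set `enabled = true` under `[models.openai]` and assign a role to one of the models above.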
Ollama (local, air-gapped):

- Any model available in your Ollama instance
- Examples: `deepseek-coder:33b`, `codellama:34b`, `llama3:70b`
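Per-role overrides work inside the provider section, as the `planner` line in the config template above shows. Presumably the other role names are accepted the same way; the `implementer` line below is an assumption extrapolated from that example:

```toml
[models.ollama]
enabled = true
base_url = "http://localhost:11434"
planner = "deepseek-coder:33b"
implementer = "codellama:34b"  # assumed: other role keys mirror the planner example
```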
## When to use Ollama
Ollama is ideal for:
- Air-gapped environments where no data can leave the machine
- Organizations with strict data residency requirements
- Experimenting with open-source models
- Reducing API costs for high-volume usage
Install Ollama from ollama.ai, pull a model (`ollama pull deepseek-coder:33b`), then set `enabled = true` under `[models.ollama]` in your config.
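Putting those steps together, a minimal offline setup might look like this (`ollama pull` and `ollama list` are standard Ollama CLI commands; the config edit follows the template above):

```bash
# 1. Pull a model into your local Ollama instance
ollama pull deepseek-coder:33b

# 2. Confirm it is available locally
ollama list

# 3. Enable the provider in ~/.nimbus/config.toml:
#      [models.ollama]
#      enabled = true

# 4. Verify that Nimbus picks it up
nimbus models
```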