# Model Providers
docker-agent supports multiple AI model providers. Choose the right one for your use case, or use multiple providers in the same configuration.
## Supported Providers

- **OpenAI**: GPT-4o, GPT-5, GPT-5-mini. The most widely used AI models.
- **Anthropic**: Claude Sonnet 4, Claude Sonnet 4.5. Excellent for coding and analysis.
- **Google Gemini**: Gemini 2.5 Flash, Gemini 3 Pro. Fast and cost-effective.
- **AWS Bedrock**: Access Claude, Nova, Llama, and more through AWS infrastructure.
- **Docker Model Runner**: Run models locally with Docker. No API keys, no costs.
- **Provider Definitions**: Define reusable provider configurations with shared defaults for any provider type.
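A provider definition might look like the sketch below. Note that the top-level `providers:` key and its field names here are assumptions for illustration, not the documented schema; the `models:` alias style matches the multiple-providers example later on this page.

```yaml
# Illustrative sketch only: the `providers:` key and field names are assumptions.
providers:
  corp-openai:
    type: openai                                  # any supported provider type
    base_url: https://llm-gateway.example.com/v1  # shared default for every model below
models:
  gpt:
    provider: corp-openai                         # reuses the definition and its defaults
    model: gpt-4o
```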
## Quick Comparison

| Provider | Key | Local? | Strengths |
|---|---|---|---|
| OpenAI | openai | No | Broad model selection, tool calling, multimodal |
| Anthropic | anthropic | No | Strong coding, extended thinking, large context |
| Google Gemini | google | No | Fast inference, competitive pricing, multimodal |
| AWS Bedrock | amazon-bedrock | No | Enterprise features, multiple models, AWS integration |
| Docker Model Runner | dmr | Yes | No API costs, data privacy, offline capable |
## Additional Built-in Providers

docker-agent also includes built-in aliases for these providers:

| Provider | Alias | API Key / Env Variable |
|---|---|---|
| Mistral | mistral | MISTRAL_API_KEY |
| xAI (Grok) | xai | XAI_API_KEY |
| Nebius | nebius | NEBIUS_API_KEY |
| MiniMax | minimax | MINIMAX_API_KEY |
| Requesty | requesty | REQUESTY_API_KEY |
| Azure OpenAI | azure | AZURE_OPENAI_API_KEY + base_url |
| Ollama | ollama | None (local; optional base_url) |
| GitHub Copilot | github-copilot | GitHub CLI auth (gh auth login) |
```yaml
# Use built-in providers inline
agents:
  root:
    model: mistral/mistral-large-latest
```
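Some aliases need more than an API key. Here is a hedged sketch for Azure OpenAI, which the alias table above lists as requiring a base_url; the placement of the `base_url` field and the endpoint value are assumptions, not the documented schema:

```yaml
# Assumed configuration: the placement of base_url is illustrative.
models:
  azure-gpt:
    provider: azure
    model: gpt-4o                               # your Azure deployment name
    base_url: https://example.openai.azure.com  # placeholder endpoint
agents:
  root:
    model: azure-gpt
```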
Use expensive models for complex reasoning and cheaper or local models for routine tasks, as in the example below.
## Using Multiple Providers

Different agents can use different providers in the same configuration:

```yaml
models:
  claude:
    provider: anthropic
    model: claude-sonnet-4-0
    max_tokens: 64000
  gpt:
    provider: openai
    model: gpt-4o
  local:
    provider: dmr
    model: ai/qwen3
agents:
  root:
    model: claude      # coordinator uses Claude
    sub_agents: [coder, helper]
  coder:
    model: gpt         # coder uses GPT-4o
  helper:
    model: local       # helper runs locally for free
```
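Hosted providers read credentials from the environment. A minimal shell setup for the hosted models in the example above; ANTHROPIC_API_KEY and OPENAI_API_KEY are the conventional variable names for those providers (assumed here, since this page only documents the aliases' variables), and the local dmr model needs no key:

```shell
# Conventional API-key variables for the hosted providers (names assumed);
# the local dmr model needs no credentials.
export ANTHROPIC_API_KEY="sk-ant-..."   # placeholder value
export OPENAI_API_KEY="sk-..."          # placeholder value
```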