# Configuration Overview

docker-agent uses YAML configuration files to define agents, models, tools, and their relationships.
## File Structure

A docker-agent YAML config has these main sections:

```yaml
# 1. Version — configuration schema version (optional but recommended)
version: 8

# 2. Metadata — optional agent metadata for distribution
metadata:
  author: my-org
  description: My helpful agent
  version: "1.0.0"

# 3. Models — define AI models with their parameters
models:
  claude:
    provider: anthropic
    model: claude-sonnet-4-0
    max_tokens: 64000

# 4. Agents — define AI agents with their behavior
agents:
  root:
    model: claude
    description: A helpful assistant
    instruction: You are helpful.
    toolsets:
      - type: think

# 5. RAG — define retrieval-augmented generation sources (optional)
rag:
  docs:
    docs: ["./docs"]
    strategies:
      - type: chunked-embeddings
        model: openai/text-embedding-3-small

# 6. MCPs — reusable MCP server definitions (optional)
mcps:
  github:
    remote:
      url: https://api.githubcopilot.com/mcp
      transport_type: sse

# 7. Providers — optional reusable provider definitions
providers:
  my_provider:
    provider: anthropic # or openai (default), google, amazon-bedrock, etc.
    token_key: MY_API_KEY
    max_tokens: 16384

# 8. Permissions — agent-level tool permission rules (optional)
# For user-wide global permissions, see ~/.config/cagent/config.yaml
permissions:
  allow: ["read_*"]
  deny: ["shell:cmd=sudo*"]
```
## Minimal Config

The simplest possible configuration — a single agent with an inline model:

```yaml
agents:
  root:
    model: openai/gpt-4o
    description: A helpful assistant
    instruction: You are a helpful assistant.
```
## Inline vs Named Models

Models can be referenced inline or defined in the `models` section:

### Inline

Quick and simple. Use `provider/model` syntax directly.

```yaml
model: openai/gpt-4o
```

### Named

Full control over parameters. Reusable across agents.

```yaml
model: my_claude
```
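For instance, a named model lets several agents share one definition and its parameters (a sketch; the `my_claude` name and the second `helper` agent are illustrative):

```yaml
models:
  my_claude:
    provider: anthropic
    model: claude-sonnet-4-0
    max_tokens: 64000

agents:
  root:
    model: my_claude # named reference
    description: A helpful assistant
    instruction: You are helpful.
  helper:
    model: my_claude # same definition, reused
    description: A second agent sharing the model
    instruction: You assist the root agent.
```

Changing `max_tokens` (or any other parameter) in one place then applies to every agent that references the name.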
## Config Sections

- **Agent Config**: all agent properties, including model, instruction, tools, sub-agents, hooks, and more.
- **Model Config**: provider setup, parameters, thinking budget, and provider-specific options.
- **Tool Config**: built-in tools, MCP tools, Docker MCP, LSP, API tools, and tool filtering.

## Advanced Configuration

- **Hooks**: run shell commands at lifecycle events like tool calls and session start/end.
- **Permissions**: control which tools auto-approve, require confirmation, or are blocked.
- **Sandbox Mode**: run agents in an isolated Docker container for security.
- **Structured Output**: constrain agent responses to match a specific JSON schema.
## Environment Variables

API keys and secrets are read from environment variables — never stored in config files. See Managing Secrets for all the ways to provide credentials (env files, Docker Compose secrets, macOS Keychain, pass):

| Variable | Provider |
|---|---|
| `OPENAI_API_KEY` | OpenAI |
| `ANTHROPIC_API_KEY` | Anthropic |
| `GOOGLE_API_KEY` | Google Gemini |
| `MISTRAL_API_KEY` | Mistral |
| `XAI_API_KEY` | xAI |
| `NEBIUS_API_KEY` | Nebius |
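As one of those options, credentials can live in an env file rather than the config (a sketch; the key value is a placeholder and must never be committed):

```shell
# .env — keep out of version control
ANTHROPIC_API_KEY=sk-ant-xxxxxxxx
OPENAI_API_KEY=sk-xxxxxxxx
```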
**Tool Auto-Installation:**

| Variable | Description |
|---|---|
| `DOCKER_AGENT_AUTO_INSTALL` | Set to `false` to disable automatic tool installation |
| `DOCKER_AGENT_TOOLS_DIR` | Override the base directory for installed tools (default: `~/.cagent/tools/`) |
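As a sketch, both variables can be set from a shell before launching docker-agent (the custom directory path here is just an example):

```shell
# Disable automatic tool installation
export DOCKER_AGENT_AUTO_INSTALL=false
# Install tools under a custom directory instead of ~/.cagent/tools/
export DOCKER_AGENT_TOOLS_DIR="$HOME/agent-tools"
```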
Model references are case-sensitive: `openai/gpt-4o` is not the same as `openai/GPT-4o`.
## Validation

docker-agent validates your configuration at startup:

- Local `sub_agents` must reference agents defined in the config (external OCI references like `agentcatalog/pirate` are pulled from registries automatically)
- Named model references must exist in the `models` section
- Provider names must be valid (`openai`, `anthropic`, `google`, `dmr`, etc.)
- Required environment variables (API keys) must be set
- Tool-specific fields are validated (e.g., `path` is only valid for `memory`)
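For instance, this hypothetical config fails the named-model check at startup, because the agent references a model name that is never defined:

```yaml
models:
  claude:
    provider: anthropic
    model: claude-sonnet-4-0

agents:
  root:
    model: gpt # error: no model named "gpt" in the models section
    description: Broken example
    instruction: You are helpful.
```

Renaming the reference to `claude` (or switching to an inline `provider/model` reference) resolves the error.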
## JSON Schema

For editor autocompletion and validation, use the Docker Agent JSON Schema. Add this to the top of your YAML file:

```yaml
# yaml-language-server: $schema=https://raw.githubusercontent.com/docker/docker-agent/main/agent-schema.json
```
## Config Versioning

docker-agent configs are versioned. The current version is 8. Add the version at the top of your config:

```yaml
version: 8

agents:
  root:
    model: openai/gpt-4o
    # ...
```

When you load an older config, docker-agent automatically migrates it to the latest schema. It’s recommended to include the version to ensure consistent behavior.
## Metadata Section

Optional metadata for agent distribution via OCI registries:

```yaml
metadata:
  author: my-org
  license: Apache-2.0
  description: A helpful coding assistant
  readme: | # Displayed in registries
    This agent helps with coding tasks.
  version: "1.0.0"
```

| Field | Description |
|---|---|
| `author` | Author or organization name |
| `license` | License identifier (e.g., `Apache-2.0`, `MIT`) |
| `description` | Short description for the agent |
| `readme` | Longer markdown description |
| `version` | Semantic version string |

See Agent Distribution for publishing agents to registries.
## Reusable MCP Servers (`mcps:`)

The top-level `mcps:` section defines named MCP server configurations that agents can reference with `toolsets: [{type: mcp, ref: <name>}]`. This avoids repeating the same command / URL / headers across agents and keeps credentials in one place.

```yaml
mcps:
  github:
    remote:
      url: https://api.githubcopilot.com/mcp
      transport_type: sse
  playwright:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-playwright"]

agents:
  root:
    model: openai/gpt-4o
    toolsets:
      - type: mcp
        ref: github # reuse the definition above
      - type: mcp
        ref: playwright
```

An `mcps` entry accepts every field a regular `type: mcp` toolset accepts (`command`/`args`/`env`, `remote` with `url`/`transport_type`/`headers`/`oauth`, `tools` filter, `instruction`, `defer`, …) — the `type: mcp` is implicit. See the Tool Config page for all options and the Remote MCP Servers guide for remote setups.
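As a sketch of some of those extra fields on a reusable entry (the header name and tool name below are illustrative, not real GitHub MCP values):

```yaml
mcps:
  github:
    remote:
      url: https://api.githubcopilot.com/mcp
      transport_type: sse
      headers:
        X-Example-Header: demo # illustrative custom header
    tools: ["search_issues"] # expose only this tool to agents (name illustrative)
```

Every agent that uses `ref: github` then inherits the same headers and tool filter.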
## Custom Providers Section

Define reusable provider configurations with shared defaults. Providers can wrap any provider type — not just OpenAI-compatible endpoints:

```yaml
providers:
  # OpenAI-compatible custom endpoint
  azure:
    api_type: openai_chatcompletions
    base_url: https://my-resource.openai.azure.com/openai/deployments/gpt-4o
    token_key: AZURE_OPENAI_API_KEY

  # Anthropic with shared model defaults
  team_anthropic:
    provider: anthropic
    token_key: TEAM_ANTHROPIC_KEY
    max_tokens: 32768
    thinking_budget: high

models:
  azure_gpt:
    provider: azure
    model: gpt-4o
  claude:
    provider: team_anthropic
    model: claude-sonnet-4-5
    # Inherits max_tokens, thinking_budget from provider

agents:
  root:
    model: claude
```

| Field | Description |
|---|---|
| `provider` | Underlying provider type: `openai` (default), `anthropic`, `google`, `amazon-bedrock`, etc. |
| `api_type` | API schema: `openai_chatcompletions` (default) or `openai_responses`. OpenAI-only. |
| `base_url` | Base URL for the API endpoint. Required for OpenAI-compatible providers. |
| `token_key` | Environment variable name for the API token. |
| `temperature` | Default sampling temperature. |
| `max_tokens` | Default maximum response tokens. |
| `thinking_budget` | Default reasoning effort/budget. |
| `task_budget` | Default total token budget for an agentic task (Anthropic; honored by Claude Opus 4.7 today). |
| `top_p` | Default top-p sampling parameter. |
| `frequency_penalty` | Default frequency penalty. |
| `presence_penalty` | Default presence penalty. |
| `parallel_tool_calls` | Enable parallel tool calls by default. |
| `track_usage` | Track token usage by default. |
| `provider_opts` | Provider-specific options. |

See Provider Definitions for more details.