# OpenAI

Use GPT-4o, GPT-5, GPT-5-mini, and other OpenAI models with docker-agent.
## Setup

```bash
# Set your API key
export OPENAI_API_KEY="sk-..."
```
## Configuration

### Inline

```yaml
agents:
  root:
    model: openai/gpt-4o
```
### Named Model

```yaml
models:
  gpt:
    provider: openai
    model: gpt-4o
    temperature: 0.7
    max_tokens: 4000
```
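An agent can then reference the named model by its key. This is a minimal sketch; it assumes agents select a named model via the same `model` field shown in the inline form above, using the key from the `models:` block:

```yaml
# Sketch: an agent referring to the "gpt" model defined under models:
agents:
  root:
    model: gpt  # the key of the named model above (assumed referencing scheme)
```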
## Available Models

| Model | Best For |
|---|---|
| `gpt-5` | Most capable, complex reasoning |
| `gpt-5-mini` | Fast, cost-effective, good reasoning |
| `gpt-4o` | Multimodal, balanced performance |
| `gpt-4o-mini` | Cheapest, fast for simple tasks |
Find more model names at modelname.ai.
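One way to apply the table is to define one named model per cost/capability tier and assign them to agents as needed. A sketch, following the Named Model schema above:

```yaml
# Sketch: named models for two tiers from the table above
models:
  strong:
    provider: openai
    model: gpt-5        # complex reasoning
  cheap:
    provider: openai
    model: gpt-4o-mini  # simple, high-volume tasks
```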
## Thinking Budget

OpenAI uses effort-level strings rather than token counts:

```yaml
models:
  gpt-thinking:
    provider: openai
    model: gpt-5-mini
    thinking_budget: low # minimal | low | medium (default) | high
```
> 💡 **Custom endpoints**
>
> Use `base_url` for proxies and OpenAI-compatible services. See Custom Providers for full setup.
## Custom Endpoint

Use `base_url` to connect to OpenAI-compatible APIs:

```yaml
models:
  custom:
    provider: openai
    model: gpt-4o
    base_url: https://your-proxy.example.com/v1
```
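The same mechanism can point at a locally hosted OpenAI-compatible server such as Ollama, which exposes an OpenAI-style API under `/v1`. A sketch; the model name depends on what you have pulled locally, and the port is Ollama's default:

```yaml
# Sketch: base_url pointing at a local Ollama server (default port 11434)
models:
  local:
    provider: openai
    model: llama3  # any model available on the local server (assumed name)
    base_url: http://localhost:11434/v1
```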
## WebSocket Transport

For OpenAI Responses API models (gpt-4.1+, o-series, gpt-5), you can use WebSocket streaming instead of the default SSE (Server-Sent Events):

```yaml
models:
  fast-gpt:
    provider: openai
    model: gpt-4.1
    provider_opts:
      transport: websocket # Use WebSocket instead of SSE
```
### Benefits
- ~40% faster for workflows with 20+ tool calls
- Persistent connection reduces per-turn overhead
- Server-side caching of connection state
- Automatic fallback to SSE if WebSocket fails
### Requirements

- Only works with Responses API models: `gpt-4.1+`, `o1`, `o3`, `o4`, `gpt-5`
- NOT compatible with the `--gateway` flag (automatically falls back to SSE)
- Requires the `OPENAI_API_KEY` environment variable
### Example

See `examples/websocket_transport.yaml` for a complete example.