# Custom Providers
Connect cagent to any OpenAI-compatible API endpoint — without modifying cagent’s source code.
## Overview

The `providers` section in your agent YAML lets you define custom providers that work with any OpenAI-compatible API. This is useful for:
- Self-hosted models (vLLM, Ollama, LocalAI, etc.)
- API proxies and routers (Requesty, LiteLLM, etc.)
- Enterprise deployments with custom endpoints
- Any service with an OpenAI-compatible chat completions API
> ℹ️ **Works with any OpenAI-compatible API**
>
> If a service supports the `/v1/chat/completions` endpoint, you can use it with cagent. No source code changes needed.
## Configuration

```yaml
providers:
  my_provider:
    api_type: openai_chatcompletions # or openai_responses
    base_url: https://api.example.com/v1
    token_key: MY_API_KEY # env var name

models:
  my_model:
    provider: my_provider
    model: gpt-4o
    max_tokens: 32768

agents:
  root:
    model: my_model
    instruction: You are a helpful assistant.
```
## Provider Properties

| Property | Description | Default |
|---|---|---|
| `api_type` | API schema: `openai_chatcompletions` or `openai_responses` | `openai_chatcompletions` |
| `base_url` | Base URL for the API endpoint | — |
| `token_key` | Name of the environment variable containing the API token | — |
## Shorthand Syntax

Once a custom provider is defined, you can use the shorthand `provider/model` syntax:

```yaml
agents:
  root:
    model: my_provider/gpt-4o-mini # uses the provider's base_url and token
```
## API Types

- `openai_chatcompletions` — Standard OpenAI Chat Completions API. Works with most OpenAI-compatible endpoints.
- `openai_responses` — OpenAI Responses API. For newer models that require the Responses API format.
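For the Responses API, only `api_type` changes; the rest of the provider definition is the same. A minimal sketch, assuming a placeholder endpoint URL and env var name:

```yaml
providers:
  responses_provider:
    api_type: openai_responses
    base_url: https://api.example.com/v1
    token_key: MY_API_KEY
```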
## Examples

### vLLM / Ollama

```yaml
providers:
  local_llm:
    api_type: openai_chatcompletions
    base_url: http://localhost:8000/v1

agents:
  root:
    model: local_llm/llama-3.1-8b
```
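Note that local servers often require no authentication, so `token_key` can be omitted. Ollama's OpenAI-compatible endpoint listens on port 11434 by default; a sketch, assuming the default port and a locally pulled `llama3.1` model:

```yaml
providers:
  ollama:
    api_type: openai_chatcompletions
    base_url: http://localhost:11434/v1

agents:
  root:
    model: ollama/llama3.1
```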
### API Router (Requesty, LiteLLM)

```yaml
providers:
  router:
    api_type: openai_chatcompletions
    base_url: https://router.requesty.ai/v1
    token_key: REQUESTY_API_KEY

agents:
  root:
    model: router/anthropic/claude-sonnet-4-0
```
### Azure OpenAI

```yaml
models:
  azure_model:
    provider: azure
    model: gpt-4o
    base_url: https://your-llm.openai.azure.com
    provider_opts:
      api_version: 2024-12-01-preview
```
## How It Works

When you reference a custom provider:

- The provider’s `base_url` is applied to the model (if not already set)
- The provider’s `token_key` is applied to the model (if not already set)
- The provider’s `api_type` is stored in `provider_opts.api_type`
- The model is used with the appropriate API client
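Because provider settings are applied only "if not already set", a model can override them locally. A sketch of this precedence, assuming placeholder URLs (the `models`-level `base_url` wins over the provider's):

```yaml
providers:
  my_provider:
    api_type: openai_chatcompletions
    base_url: https://api.example.com/v1
    token_key: MY_API_KEY

models:
  staging_model:
    provider: my_provider
    model: gpt-4o
    base_url: https://staging.example.com/v1 # overrides the provider's base_url
```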