Configure Codex CLI, OpenAI’s open-source local coding agent, to use Eden AI for access to 500+ models behind a single API key.
Documentation Index
Fetch the complete documentation index at: https://www.edenai.co/docs/llms.txt
Use this file to discover all available pages before exploring further.
Overview
Codex CLI runs in your terminal and edits code in your project. Pointing it at Eden AI gives you:
- 500+ models: switch between Codex variants, Claude, Gemini, and more without reinstalling
- One API key: unified billing and monitoring across providers
- Provider failover: keep working when an upstream provider has an incident
Prerequisites
- Codex CLI installed (install instructions)
- Eden AI API key from app.edenai.run → API Keys
Setup
1. Configure ~/.codex/config.toml
Codex CLI reads its provider settings from ~/.codex/config.toml. Add an edenai provider and point Codex at it:
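A minimal config might look like the following sketch. The provider id `edenai`, the `EDENAI_API_KEY` variable name, and the default model are assumptions to adapt to your setup; the `base_url` must be exactly the value shown, since Codex appends `/chat/completions` itself:

```toml
# ~/.codex/config.toml
model_provider = "edenai"          # must match the table name below
model = "openai/gpt-5.1-codex"     # any provider/model id from the Eden AI catalog

[model_providers.edenai]
name = "Eden AI"
base_url = "https://api.edenai.run/v3"  # Codex appends /chat/completions itself
env_key = "EDENAI_API_KEY"              # env var that holds your Eden AI API key
wire_api = "chat"                       # OpenAI-compatible chat wire format
```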
2. Export your API key
Add the export line to ~/.bashrc or ~/.zshrc and reload your shell.
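For example (`EDENAI_API_KEY` is an assumed variable name; it must match the `env_key` in your config.toml):

```shell
# Append this line to ~/.bashrc or ~/.zshrc, then open a new shell
# (or re-source the file) so Codex can read the variable:
export EDENAI_API_KEY="your-eden-ai-api-key"
```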
3. Launch Codex
Run codex from your project directory; it picks up the provider and model configured in config.toml.
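For example:

```shell
# Start an interactive session from your project root;
# Codex reads the provider and model from ~/.codex/config.toml
codex
```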
Switching models
Eden AI uses the provider/model format. Update the model field in ~/.codex/config.toml (or pass --model on the command line) to switch:
The full model catalog is available from GET https://api.edenai.run/v3/models.
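For example, a per-session override (the model id is illustrative; substitute any id from the catalog):

```shell
# Override the configured model for one session
codex --model openai/gpt-5.1-codex
```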
Troubleshooting
Authentication errors
Verify the API key is loaded in your shell and that it has LLM access.
Model not found
Use the full provider/model string (e.g. openai/gpt-5.1-codex, not gpt-5.1-codex). Confirm the ID is in the catalog returned by GET /v3/models.
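A quick way to check both issues at once is to list the catalog with your key. This sketch assumes Eden AI accepts the key as a Bearer token in the Authorization header:

```shell
# Lists every model id your key can use; an auth error here means
# the key is missing from the shell or lacks LLM access.
curl -s https://api.edenai.run/v3/models \
  -H "Authorization: Bearer $EDENAI_API_KEY"
```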
Connection issues
Confirm base_url is exactly https://api.edenai.run/v3 — Codex appends /chat/completions itself. Check Eden AI status at app-edenai.instatus.com.
Next Steps
- Claude Code — Anthropic’s official CLI on Eden AI
- OpenCode — terminal coding agent with auto-generated config
- Chat Completions — full reference for the underlying endpoint