This guide explains how to configure the OpenAI Codex CLI to use Azure AI Foundry (Azure OpenAI) as the model provider.
- OpenAI Codex CLI installed (`npm install -g @openai/codex` or via other methods)
- An Azure AI Foundry resource with a deployed model
- Your Azure OpenAI API key
```bash
export AZURE_OPENAI_API_KEY="your-api-key-here"
```

Add this to your shell profile (`~/.zshrc`, `~/.bashrc`, etc.) for persistence.
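Before running Codex, it can help to confirm the variable is actually visible in the current shell session; a quick check like the following avoids confusing authentication errors later:

```shell
# Report whether the key is present in the current environment
if [ -z "${AZURE_OPENAI_API_KEY:-}" ]; then
  echo "AZURE_OPENAI_API_KEY is not set"
else
  echo "AZURE_OPENAI_API_KEY is set (${#AZURE_OPENAI_API_KEY} characters)"
fi
```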
```toml
# Set your model deployment name and provider
model = "gpt-5.1-codex-max" # Replace with your actual Azure deployment name
model_provider = "azure"

# Optional: Set reasoning effort for reasoning models
model_reasoning_effort = "high"

# Configure the Azure provider
[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1"
env_key = "AZURE_OPENAI_API_KEY"
wire_api = "responses"
```

| Field | Description |
|---|---|
| `model` | Your Azure deployment name (not the model name). This is the name you gave when deploying the model in Azure. |
| `model_provider` | Must be `"azure"` to use the Azure provider defined below. |
| `base_url` | Your Azure OpenAI endpoint. Important: do NOT include `/responses` at the end; Codex appends it automatically. |
| `env_key` | The environment variable name containing your API key. |
| `wire_api` | Use `"responses"` for the new Responses API (required for newer models like GPT-5.1, the o-series, etc.). |
To avoid being prompted each time, add trusted directories:
```toml
[projects."/path/to/your/project"]
trust_level = "trusted"
```

To find your endpoint:

- Go to Azure AI Foundry or the Azure Portal
- Navigate to your Azure OpenAI resource
- Find the endpoint URL (it looks like `https://YOUR-RESOURCE-NAME.openai.azure.com/`)
- Append `/openai/v1` to get the `base_url`
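The last step above can be sketched in the shell (the resource name here is a placeholder for your own):

```shell
# Take the endpoint from the portal, drop any trailing slash, append /openai/v1
endpoint="https://my-azure-resource.openai.azure.com/"
base_url="${endpoint%/}/openai/v1"
echo "$base_url"
# prints https://my-azure-resource.openai.azure.com/openai/v1
```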
```toml
model = "gpt-5.1-codex-max"
model_provider = "azure"
model_reasoning_effort = "high"

[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://my-azure-resource.openai.azure.com/openai/v1"
env_key = "AZURE_OPENAI_API_KEY"
wire_api = "responses"

[projects."/Users/myuser/projects/myapp"]
trust_level = "trusted"
```

Run a simple test:
```bash
codex exec "say hello"
```

If you encounter issues, enable debug logging:

```bash
RUST_LOG=debug codex exec "test"
```

- Your model may not support the Chat Completions API; use `wire_api = "responses"` for newer models.
- Remove `/responses` from your `base_url`. Codex appends it automatically when `wire_api = "responses"`.
- Verify that `AZURE_OPENAI_API_KEY` is set correctly.
- Check that your API key has access to the deployment.
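The first two checks above can be scripted; this is a minimal sketch that assumes the config lives at `~/.codex/config.toml` (adjust the path if yours differs):

```shell
# Check the API key is exported in this shell
[ -n "${AZURE_OPENAI_API_KEY:-}" ] || echo "AZURE_OPENAI_API_KEY is not set"

# Check base_url does not end in /responses (Codex appends that itself)
config="$HOME/.codex/config.toml"
if grep -q 'base_url.*/responses"' "$config" 2>/dev/null; then
  echo "base_url in $config should not end with /responses"
fi
```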