@weltonrodrigo
Created December 27, 2025 23:38

Configuring OpenAI Codex CLI with Azure AI Foundry

This guide explains how to configure the OpenAI Codex CLI to use Azure AI Foundry (Azure OpenAI) as the model provider.

Prerequisites

  • OpenAI Codex CLI installed (npm install -g @openai/codex or via other methods)
  • An Azure AI Foundry resource with a deployed model
  • Your Azure OpenAI API key

Setup

1. Set your API key as an environment variable

export AZURE_OPENAI_API_KEY="your-api-key-here"

Add this to your shell profile (~/.zshrc, ~/.bashrc, etc.) for persistence.
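A minimal sketch of persisting the key, assuming zsh (adjust the profile path for your shell; the key value is a placeholder):

```shell
# Append the export to the shell profile so new sessions inherit it.
# PROFILE and the key value are placeholders; adjust for your setup.
PROFILE="$HOME/.zshrc"
printf '\nexport AZURE_OPENAI_API_KEY="your-api-key-here"\n' >> "$PROFILE"
```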

2. Create or edit ~/.codex/config.toml

# Set your model deployment name and provider
model = "gpt-5.1-codex-max"  # Replace with your actual Azure deployment name
model_provider = "azure"

# Optional: Set reasoning effort for reasoning models
model_reasoning_effort = "high"

# Configure the Azure provider
[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai/v1"
env_key = "AZURE_OPENAI_API_KEY"
wire_api = "responses"

Key Configuration Notes

  • model — Your Azure deployment name, not the underlying model name. This is the name you gave the deployment when you created it in Azure.
  • model_provider — Must be "azure" to select the Azure provider defined below.
  • base_url — Your Azure OpenAI endpoint. Important: do NOT include /responses at the end; Codex appends it automatically.
  • env_key — The name of the environment variable that holds your API key.
  • wire_api — Use "responses" for the Responses API (required for newer models such as GPT-5.1 and the o-series).
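If you target the legacy (non-v1) Azure endpoint instead, some setups also pass an explicit api-version query parameter. A hedged variant; the api-version string here is an example, so check which versions your resource supports:

```toml
[model_providers.azure]
name = "Azure OpenAI"
# Legacy endpoint style: /openai without /v1, plus an explicit api-version.
base_url = "https://YOUR-RESOURCE-NAME.openai.azure.com/openai"
env_key = "AZURE_OPENAI_API_KEY"
query_params = { api-version = "2025-04-01-preview" }
wire_api = "responses"
```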

3. Trust your project directory (optional)

To avoid being prompted each time, add trusted directories:

[projects."/path/to/your/project"]
trust_level = "trusted"

Finding Your Azure Endpoint

  1. Go to Azure AI Foundry or Azure Portal
  2. Navigate to your Azure OpenAI resource
  3. Find the endpoint URL (looks like https://YOUR-RESOURCE-NAME.openai.azure.com/)
  4. Append /openai/v1 to get the base_url
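The steps above can be sketched as a short shell snippet; the endpoint value is an example, substitute your own resource's endpoint:

```shell
# Build base_url from the portal endpoint: strip any trailing slash,
# then append /openai/v1. ENDPOINT is an example value.
ENDPOINT="https://my-azure-resource.openai.azure.com/"
BASE_URL="${ENDPOINT%/}/openai/v1"
echo "$BASE_URL"   # https://my-azure-resource.openai.azure.com/openai/v1
```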

Example Complete Config

model = "gpt-5.1-codex-max"
model_provider = "azure"
model_reasoning_effort = "high"

[model_providers.azure]
name = "Azure OpenAI"
base_url = "https://my-azure-resource.openai.azure.com/openai/v1"
env_key = "AZURE_OPENAI_API_KEY"
wire_api = "responses"

[projects."/Users/myuser/projects/myapp"]
trust_level = "trusted"

Testing

Run a simple test:

codex exec "say hello"

If you encounter issues, enable debug logging:

RUST_LOG=debug codex exec "test"

Troubleshooting

"OperationNotSupported" Error

  • Your deployment may not support the Chat Completions API. Set wire_api = "responses" for newer models.

Duplicated /responses in the URL (…/responses/responses)

  • Remove /responses from your base_url. Codex appends it automatically when wire_api = "responses".
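A quick way to check for and strip the stray suffix in a shell, using an example value:

```shell
# If base_url mistakenly ends in /responses, strip that suffix;
# otherwise the value is left unchanged. Example value shown.
BASE_URL="https://my-azure-resource.openai.azure.com/openai/v1/responses"
BASE_URL="${BASE_URL%/responses}"
echo "$BASE_URL"   # https://my-azure-resource.openai.azure.com/openai/v1
```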

Authentication Errors

  • Verify AZURE_OPENAI_API_KEY is set correctly
  • Check that your API key has access to the deployment
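A small sketch to rule out a missing or empty key before digging deeper; it prints only the key length, never the key itself:

```shell
# Sanity check: verify the key variable is set without printing it.
check_key() {
  if [ -z "${AZURE_OPENAI_API_KEY:-}" ]; then
    echo "AZURE_OPENAI_API_KEY is not set" >&2
    return 1
  fi
  echo "key present (${#AZURE_OPENAI_API_KEY} chars)"
}
check_key || true   # report, but don't abort an interactive shell
```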
