This guide shows how to connect a local model served with MLX to OpenCode, so you can code against a model running entirely on your own machine.
1. Install OpenCode
curl -fsSL https://opencode.ai/install | bash
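To confirm the install, you can check that the binary landed on your PATH (the --version flag is an assumption about your build; most CLI releases support it):
opencode --version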
2. Install mlx-lm
pip install mlx-lm
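A quick way to check the install is to ask the server entry point for its usage; mlx_lm.server is the same command used later in this guide, and --help only prints the available options:
mlx_lm.server --help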
3. Configure OpenCode
Open ~/.config/opencode/opencode.json and paste the following (if you already have a config, just add the MLX provider):
{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "mlx": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "MLX (local)",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "mlx-community/NVIDIA-Nemotron-3-Nano-30B-A3B-4bit": {
          "name": "Nemotron 3 Nano"
        }
      }
    }
  }
}
4. Start the MLX server
mlx_lm.server
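With the server running, you can sanity-check the OpenAI-compatible endpoint before involving OpenCode. A minimal sketch, assuming the default host and port and that the server loads (and, if necessary, downloads) the model named in the request:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "mlx-community/NVIDIA-Nemotron-3-Nano-30B-A3B-4bit", "messages": [{"role": "user", "content": "Say hi"}], "max_tokens": 32}'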
5. Run OpenCode
In the repo you plan to work in, type:
opencode
Once inside the OpenCode TUI:
- Enter /connect
- Type MLX and select it
- For the API key, enter none
- Select the model
- Start planning and building


Nice! So I don't need to specify the model checkpoint as a command-line option to mlx_lm.server, correct? Will OpenCode attach the model name to the request, triggering the server to load the model?
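For reference, a hedged sketch of the alternative the question alludes to: mlx_lm.server also accepts a --model argument if you would rather pin and pre-load a single checkpoint at startup instead of relying on the model field of each request:
mlx_lm.server --model mlx-community/NVIDIA-Nemotron-3-Nano-30B-A3B-4bit --port 8080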