OpenCode with MLX

This guide shows you how to connect a local model served with MLX to OpenCode for local coding.

1. Install OpenCode

curl -fsSL https://opencode.ai/install | bash

2. Install mlx-lm

pip install mlx-lm
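
Optionally, sanity-check the install with a one-off generation. This is a quick sketch using the same model as the config below; the first run downloads the weights from Hugging Face, and any MLX model on the Hub works here:

mlx_lm.generate --model mlx-community/NVIDIA-Nemotron-3-Nano-30B-A3B-4bit --prompt "Hello"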

3. Make a custom provider for OpenCode

Open ~/.config/opencode/opencode.json and paste the following (if you already have a config, just add the mlx provider):

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "mlx": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "MLX (local)",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "mlx-community/NVIDIA-Nemotron-3-Nano-30B-A3B-4bit": {
          "name": "Nemotron 3 Nano"
        }
      }
    }
  }
}

4. Start the mlx-lm server

mlx_lm.server
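
By default the server listens on 127.0.0.1:8080, which matches the baseURL in the config above (pass --port to change it). Started this way, the server loads whichever model a request names, so the first request can take a while. You can check that it's up with a quick request; a minimal sketch using the model from step 3:

curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "mlx-community/NVIDIA-Nemotron-3-Nano-30B-A3B-4bit",
    "messages": [{"role": "user", "content": "Say hello"}],
    "max_tokens": 32
  }'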

5. Start OpenCode and select the provider

In the repo you plan to work in, run opencode.

Once inside the OpenCode TUI:

  1. Enter /connect
  2. Type MLX and select it
  3. For the API key enter none
  4. Select the model
  5. Start planning and building

awni commented Jan 6, 2026

> because I can load 2 models on the same base URL?

Each provider (e.g. MLX) has a URL (localhost for local providers).
Each provider can have an arbitrary number of models.
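
For example, to expose two models behind the same local server, list both under the mlx provider. A sketch; the second model name is just an illustration, and any MLX model on the Hub can go there:

{
  "$schema": "https://opencode.ai/config.json",
  "provider": {
    "mlx": {
      "npm": "@ai-sdk/openai-compatible",
      "name": "MLX (local)",
      "options": {
        "baseURL": "http://127.0.0.1:8080/v1"
      },
      "models": {
        "mlx-community/NVIDIA-Nemotron-3-Nano-30B-A3B-4bit": {
          "name": "Nemotron 3 Nano"
        },
        "mlx-community/Qwen3-4B-4bit": {
          "name": "Qwen3 4B"
        }
      }
    }
  }
}

Since the server loads the requested model on demand, you can switch between them from the OpenCode model picker.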
