Open WebUI and Ollama API via Wireguard VPN

Host Open WebUI on your VPS and connect it to your PC to use local LLMs.

As an alternative to Open WebUI you can consider Kurczak, a simplified, lightweight UI for Ollama that I made.

+ --------------------- +                                      + ---------------- +
|    Ollama API (PC)    | <====  WireGuard VPN tunnel  =====>  | Open WebUI (VPS) |
| http://10.0.0.2:11434 |            10.0.0.0/24               |     10.0.0.1     |
+ --------------------- +                                      + ---------------- +

PC side

Ollama (PC)

I'm starting the Ollama server manually, but you can put the environment variables into your init system.

You should set a higher context length for coding and longer discussions.

The default is 4096, but remember that higher values mean higher VRAM usage.

export OLLAMA_HOST=10.0.0.2
export OLLAMA_CONTEXT_LENGTH=8096
ollama serve
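
If you prefer the init system route mentioned above, here is a minimal sketch for systemd, assuming your Ollama installation ships an ollama.service unit (the unit name is an assumption, adjust it to your setup):

systemctl edit ollama
# put the following lines into the drop-in override and save:
# [Service]
# Environment="OLLAMA_HOST=10.0.0.2"
# Environment="OLLAMA_CONTEXT_LENGTH=8096"
systemctl restart ollama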

You can now install some models, e.g.

ollama run qwen3-vl:8b
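
To confirm the model is installed, list the local models:

ollama list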

Wireguard (PC)

Install Wireguard

Example configuration (client side)

cd /etc/wireguard
wg genkey > private
chmod 600 private
wg pubkey < private > public
touch wg0.conf

Edit wg0.conf

[Interface]
Address = 10.0.0.2/24
ListenPort = 51820
PrivateKey = <pc_private_key>
MTU = 1384

[Peer]
PublicKey = <vps_public_key>
AllowedIPs = 10.0.0.1/32
Endpoint = <VPS_PUBLIC_IP>:51820
PersistentKeepalive = 25

Start the tunnel:

wg-quick up wg0

If your endpoint is an IPv6 address, put it in brackets, e.g.

Endpoint = [x:x:x:x::x]:51820
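
To bring the tunnel up automatically at boot, assuming systemd and the wg-quick unit template shipped with wireguard-tools:

systemctl enable wg-quick@wg0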

VPS side

Wireguard (VPS)

Install Wireguard

Example configuration (server side)

cd /etc/wireguard
wg genkey > private
chmod 600 private
wg pubkey < private > public
touch wg0.conf

Edit wg0.conf

[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps_private_key>
MTU = 1420

[Peer]
PublicKey = <pc_public_key>
AllowedIPs = 10.0.0.2/32
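
Since the PC connects to the VPS's public address, UDP port 51820 has to be reachable from the internet. A hedged example, assuming ufw (adjust to whatever firewall your VPS uses):

ufw allow 51820/udp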

Start the tunnel:

wg-quick up wg0

Check connection:

wg
ping 10.0.0.2

Open WebUI (VPS)

Run Open WebUI directly or via Docker.

In the case of Docker you can pass -e OLLAMA_BASE_URL=http://10.0.0.2:11434, but you can also set it in the Admin Panel.
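
A minimal Docker sketch, assuming the upstream ghcr.io/open-webui/open-webui:main image and its default internal port 8080 (adjust the published port and volume name to your setup):

docker run -d \
  --name open-webui \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://10.0.0.2:11434 \
  -v open-webui:/app/backend/data \
  --restart always \
  ghcr.io/open-webui/open-webui:main

With -p 3000:8080 the UI becomes reachable on port 3000 of the VPS.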

If you want to run it directly but your Python version is too recent, use Miniconda to install a specific version of Python.

conda create --name example python=3.11
conda activate example
pip install open-webui
open-webui serve

  1. Go to Admin Panel -> Settings -> Connections
  2. Edit Ollama API (don't confuse it with OpenAI API)
  3. Set the URL to http://10.0.0.2:11434 (you don't need API Key)

Troubleshooting

If Open WebUI isn't able to connect to the Ollama API, it could be because of a firewall somewhere between the two sides.

Make sure the tunnel is up on both sides:

wg
ping 10.0.0.1
ping 10.0.0.2

From the VPS you can probe the Ollama API with curl, telnet or nc:

curl -v http://10.0.0.2:11434
timeout 2 telnet 10.0.0.2 11434
timeout 3 nc -z 10.0.0.2 11434 || echo "failed"

From the PC you can also make sure the Ollama API is up and running:

ss -lnpt | grep 11434 || echo "not found"
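
If the listener is there but the probes from the VPS still fail, a host firewall on the PC may be dropping traffic arriving on wg0. A hedged example for an iptables-based firewall (interface and rule placement are assumptions about your setup):

iptables -I INPUT -i wg0 -p tcp --dport 11434 -j ACCEPT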