Host Open WebUI on your VPS and connect it to your PC to use local LLMs.
As an alternative to Open WebUI you can consider Kurczak, a simplified, lightweight UI for Ollama that I've made.
+-----------------------+                                    +------------------+
| Ollama API (PC)       |  <==== WireGuard VPN tunnel ====>  | Open WebUI (VPS) |
| http://10.0.0.2:11434 |            10.0.0.0/24             | 10.0.0.1         |
+-----------------------+                                    +------------------+
I'm starting the Ollama server manually, but you can put the environment variables into your init system instead.
You should set a higher context length for coding and longer discussions.
The default is 4096, but remember that higher values mean higher VRAM usage.
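If you go the init-system route and Ollama was installed with its systemd service, a drop-in override is one way to set the variables. This is only a sketch: the ollama.service unit name is an assumption, so check how your install registered it.
sudo mkdir -p /etc/systemd/system/ollama.service.d
sudo tee /etc/systemd/system/ollama.service.d/override.conf > /dev/null <<'EOF'
[Service]
Environment="OLLAMA_HOST=10.0.0.2"
Environment="OLLAMA_CONTEXT_LENGTH=8096"
EOF
sudo systemctl daemon-reload
sudo systemctl restart ollama
To start it manually instead: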
export OLLAMA_HOST=10.0.0.2
export OLLAMA_CONTEXT_LENGTH=8096
ollama serve
You can now install some models, e.g.
ollama run qwen3-vl:8b
Example configuration (client side)
cd /etc/wireguard
wg genkey > private
chmod 600 private
wg pubkey < private > public
touch wg0.conf
Edit wg0.conf:
[Interface]
Address = 10.0.0.2/24
ListenPort = 51820
PrivateKey = <pc_private_key>
MTU = 1384
[Peer]
PublicKey = <vps_public_key>
AllowedIPs = 10.0.0.1/32
Endpoint = <VPS_PUBLIC_IP>:51820
PersistentKeepalive = 25
Start the tunnel:
wg-quick up wg0
If your endpoint is an IPv6 address, put it in brackets, e.g.
Endpoint = [x:x:x:x::x]:51820
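To bring the tunnel up automatically at boot on either side, you can use the wg-quick@ template unit that ships with wireguard-tools (assuming a systemd distro):
sudo systemctl enable --now wg-quick@wg0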
Example configuration (server side)
cd /etc/wireguard
wg genkey > private
chmod 600 private
wg pubkey < private > public
touch wg0.conf
Edit wg0.conf:
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps_private_key>
MTU = 1420
[Peer]
PublicKey = <pc_public_key>
AllowedIPs = 10.0.0.2/32
Start the tunnel:
wg-quick up wg0
Check the connection:
wg
ping 10.0.0.2
Run Open WebUI directly or via Docker.
With Docker you can pass -e OLLAMA_BASE_URL=http://10.0.0.2:11434, but you can also set it later in the Admin Panel.
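A sketch of a Docker invocation, based on the project's documented quick-start; the image tag and the host port 3000 are assumptions, so adjust them to your setup:
docker run -d --name open-webui --restart always \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://10.0.0.2:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main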
If you want to run it directly but your system Python is too recent, use Miniconda to install a specific Python version.
conda create --name example python=3.11
conda activate example
pip install open-webui
open-webui serve
- Go to Admin Panel -> Settings -> Connections
- Edit Ollama API (don't confuse it with OpenAI API)
- Set the URL to
http://10.0.0.2:11434 (you don't need an API key)
If Open WebUI isn't able to connect to the Ollama API, it could be because of a firewall somewhere between the two sides.
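For example, if both machines happen to use ufw (an assumption; adapt to whatever firewall you run), rules like these would open the relevant ports:
# On the VPS: allow incoming WireGuard traffic
sudo ufw allow 51820/udp
# On the PC: allow the VPS's tunnel address to reach the Ollama port over wg0
sudo ufw allow in on wg0 from 10.0.0.1 to any port 11434 proto tcp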
Make sure the tunnel on both sides is up with
wg
ping 10.0.0.1
ping 10.0.0.2
From the VPS you can probe the Ollama API with curl, telnet or nc:
curl -v http://10.0.0.2:11434
timeout 2 telnet 10.0.0.2 11434
timeout 3 nc -z 10.0.0.2 11434 || echo "failed"
From the PC you can also make sure the Ollama API is up and running:
ss -lnpt | grep 11434 || echo "not found"
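Once the tunnel and the API both respond, a quick end-to-end check from the VPS is to list the installed models through Ollama's API (the /api/tags endpoint):
curl -s http://10.0.0.2:11434/api/tags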