Gist by @kinncj · Last active December 22, 2025

NVIDIA DGX Spark – OpenWebUI + ComfyUI (Blackwell-Optimized)

This gist provides a production-ready docker-compose.yaml for running OpenWebUI + ComfyUI on NVIDIA DGX Spark (Grace Blackwell / GB10).

NVIDIA's blueprints are a good baseline, but they break down once you combine ComfyUI, Ollama, and OpenWebUI in a multi-service setup — mainly due to dependency drift, frontend changes, and memory assumptions.

This configuration closes those gaps while preserving NVIDIA’s Blackwell-optimized stack.


Key Optimizations (GB10 / Blackwell)

  • Native sm_121 support
    Uses nvcr.io/nvidia/pytorch:25.10-py3, which includes PyTorch 2.9.0a0 compiled for Blackwell and CUDA 13.

  • Shared memory tuning
    Sets shm_size: 16gb and appropriate ulimits to avoid bus errors during large tensor transfers. Required to use the Spark’s 128GB unified memory effectively.

  • Dependency protection
    Installs missing Python packages (transformers, torchsde, einops, av, comfyui-frontend-package, etc.) without overwriting NVIDIA’s optimized PyTorch build.

  • ComfyUI frontend handling
    Explicitly installs comfyui-frontend-package and workflow templates, which are now mandatory after recent ComfyUI changes.
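The shared-memory tuning above maps to a few lines in the compose service definition. A minimal sketch (the `16gb` value comes from the notes above; the ulimit figures are common settings for NVIDIA containers and are shown here as assumptions, not measured requirements):

```yaml
services:
  comfyui:
    shm_size: 16gb        # large /dev/shm avoids bus errors on big tensor transfers
    ulimits:
      memlock: -1          # unlimited locked memory (assumed value)
      stack: 67108864      # 64 MB stack, a common NVIDIA-container setting (assumed)
```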


Prerequisites

  • Hardware: Dell GB10 | NVIDIA DGX Spark (GB10)
  • Driver: 580.95.05 or newer (required for sm_121)
  • Software: Docker Engine with NVIDIA Container Toolkit (runtime: nvidia)
  • Secrets: .env file with GOOGLE_API_KEY and GOOGLE_CX (used by OpenWebUI RAG search)
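The `.env` file can be as simple as the following (placeholder values; the variable names match what the compose file interpolates):

```shell
# .env - placed next to docker-compose.yaml; replace with real credentials
GOOGLE_API_KEY=replace-with-your-api-key
GOOGLE_CX=replace-with-your-search-engine-id
```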

Deployment

  1. Configure Docker runtime (if not already done):

    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker
  2. Start the stack:

    docker compose up -d
  3. Monitor first startup:

    docker logs -f comfyui

    Initial run takes ~60 seconds to clone ComfyUI and install media dependencies.


OpenWebUI → ComfyUI Integration

Use the following Node ID mappings from the Blackwell-optimized default.json workflow:

Feature       | Node ID | Class Type
--------------|---------|-----------------------
Prompt        | 6       | CLIPTextEncode
Model         | 4       | CheckpointLoaderSimple
Sampler       | 3       | KSampler
Latent / Size | 5       | EmptyLatentImage
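To show how those node IDs are used, here is a sketch that patches a ComfyUI API-format workflow by node ID before submitting it. The workflow dict below is illustrative (a stripped-down stand-in for default.json, not the real file), and `apply_settings` is a hypothetical helper:

```python
import json

# Illustrative API-format workflow keyed by the node IDs from the table above.
workflow = {
    "6": {"class_type": "CLIPTextEncode", "inputs": {"text": "", "clip": ["4", 1]}},
    "4": {"class_type": "CheckpointLoaderSimple", "inputs": {"ckpt_name": "model.safetensors"}},
    "3": {"class_type": "KSampler", "inputs": {"seed": 0, "steps": 20, "model": ["4", 0]}},
    "5": {"class_type": "EmptyLatentImage", "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
}

def apply_settings(wf, prompt, width, height, seed):
    """Return a copy of the workflow with user settings patched in by node ID."""
    wf = json.loads(json.dumps(wf))        # cheap deep copy of a JSON-safe dict
    wf["6"]["inputs"]["text"] = prompt     # node 6: Prompt (CLIPTextEncode)
    wf["5"]["inputs"]["width"] = width     # node 5: Latent / Size
    wf["5"]["inputs"]["height"] = height
    wf["3"]["inputs"]["seed"] = seed       # node 3: Sampler (KSampler)
    return wf

patched = apply_settings(workflow, "a photo of a cat", 512, 512, 42)
print(patched["6"]["inputs"]["text"])      # prints: a photo of a cat
```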

Notes

  • Blackwell coherency
    Leverages NVLink-C2C (≈900 GB/s) between Grace CPU and Blackwell GPU.

  • Root pip warnings
    Expected. The container installs dependencies globally to ensure NVIDIA libraries stay correctly linked.

  • Unified memory usage
    Designed to run large diffusion workflows and very large LLMs (100B–200B via Ollama) concurrently within the 128GB unified memory pool.
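As a rough sanity check that both workloads fit in the pool — every number below is an illustrative assumption, not a measurement:

```python
# Back-of-envelope budget for the 128 GB unified memory pool.
# Assumptions: a 120B-parameter LLM quantized to 4 bits (0.5 bytes/param),
# and a generous 20 GB budget for an SDXL-class diffusion workflow.
params_billion = 120
llm_gb = params_billion * 4 / 8            # 4-bit weights -> 60 GB
diffusion_gb = 20
headroom_gb = 128 - (llm_gb + diffusion_gb)
print(llm_gb, headroom_gb)                 # prints: 60.0 48.0
```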

docker-compose.yaml

services:
  open-webui:
    image: ghcr.io/open-webui/open-webui:v0.5.6
    container_name: open-webui
    platform: linux/arm64
    ports:
      - "12000:8080"
    volumes:
      - open-webui:/app/backend/data
    environment:
      - TIKA_SERVER_URL=http://tika:9998
      - ENABLE_RAG_WEB_SEARCH=True
      - RAG_WEB_SEARCH_ENGINE=google_pse
      - GOOGLE_PSE_API_KEY=${GOOGLE_API_KEY}
      - GOOGLE_PSE_CX=${GOOGLE_CX}
      - RAG_WEB_SEARCH_RESULT_COUNT=3
      - WEB_LOADER_ENGINE=playwright
      - PLAYWRIGHT_WS_URL=ws://playwright:3000
      - OLLAMA_BASE_URL=http://ollama:11434
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: always

  ollama:
    image: ollama/ollama:latest
    container_name: ollama
    platform: linux/arm64
    volumes:
      - open-webui-ollama:/root/.ollama
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    restart: always

  tika:
    image: apache/tika:latest
    container_name: tika
    platform: linux/arm64
    restart: always

  playwright:
    image: mcr.microsoft.com/playwright:v1.56.0-noble
    container_name: playwright
    platform: linux/arm64
    restart: always
    command: npx -y playwright@1.56.0 run-server --port 3000 --host 0.0.0.0
    ipc: host

  comfyui:
    build:
      context: .
      dockerfile: Dockerfile.comfyui
    container_name: comfyui
    platform: linux/arm64
    ports:
      - "8188:8188"
    ipc: host
    shm_size: 16gb  # see "Shared memory tuning" above
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
      - NVIDIA_DRIVER_CAPABILITIES=compute,utility
    volumes:
      - comfyui-workspace:/workspace
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
    command: >
      /bin/bash -c "
      cd /workspace;
      [ ! -d 'ComfyUI' ] && git clone https://github.com/comfyanonymous/ComfyUI.git;
      cd /workspace/ComfyUI;
      python3 main.py --listen 0.0.0.0 --port 8188 --highvram --use-flash-attention"
    restart: always

volumes:
  open-webui:
  open-webui-ollama:
  comfyui-workspace:
Dockerfile.comfyui

FROM nvcr.io/nvidia/pytorch:25.10-py3

# Allow system-wide installs for the few things we need
ENV PIP_BREAK_SYSTEM_PACKAGES=1

# Install only the system-level libraries needed by OpenCV and media decoding
RUN apt-get update && apt-get install -y libgl1 libglib2.0-0 git ffmpeg && rm -rf /var/lib/apt/lists/*

# Install only the missing AI libs, without overwriting NVIDIA's optimized PyTorch build
RUN python3 -m pip install --no-cache-dir \
    transformers==4.47.0 \
    huggingface_hub \
    tokenizers==0.21.0 \
    pyyaml regex fsspec filelock \
    einops torchsde safetensors aiohttp tqdm scipy pillow av kornia spandrel \
    pydantic-settings comfyui-frontend-package comfyui-workflow-templates \
    ultralytics toml alembic GitPython scikit-image piexif psutil
Dependency list (requirements-file format, for reference):

# --- CORE COMFYUI DEPENDENCIES ---
# We pin these to ensure stability with Transformers 4.47+
transformers==4.47.0
huggingface_hub
tokenizers==0.21.0
pyyaml
regex
fsspec
filelock
# --- IMAGE & MATH PROCESSING ---
einops
torchsde
safetensors
aiohttp
tqdm
scipy
pillow
av
kornia
spandrel
psutil
# --- UI & WORKFLOW UTILS ---
pydantic-settings
comfyui-frontend-package
comfyui-workflow-templates
toml
alembic
GitPython
# --- PHOTOREALISM & IMPACT PACK ---
# These are the ones previously causing 'ModuleNotFound' errors
ultralytics
scikit-image
piexif
# --- AUDIO (ACE / MMAudio) ---
# Note: Dockerfile will use the system's torchaudio to prevent CUDA issues
# but we list it here for completeness
torchaudio