This document outlines how to host the Keystone live-streaming system in development, staging, and production.
| Component | Role | Ports / protocol | Depends on |
|---|---|---|---|
| control-api | REST API (auth, streams, sessions) | HTTP 8081 | Postgres, Redis |
| edge-gateway | WHIP/WHEP auth proxy in front of SRS | HTTP 8082 | Redis, SRS |
| PostgreSQL | Persistent data (users, streams) | TCP 5432 | — |
| Redis | Tokens, visibility cache, rate limit | TCP 6379 | — |
| SRS | RTMP, WHIP/WHEP, HLS, SRT, WebRTC | 1935, 1985, 8080, 8000/udp, 10080/udp | — |
| Web UI | Static frontend (Vite build) | Served over HTTP/HTTPS | control-api (API) |
| Caddy (opt) | Reverse proxy (API + rtc + UI) | e.g. 443, 80 | All above |
The Android publisher app is a client; it only needs the public base URLs for the control-api and SRS/edge (and optionally RTMP/SRT host/ports).
- Already supported: `make up` (Postgres, Redis, SRS in Docker); `make run-control`, `make run-gateway`, `make run-web` on the host.
- Optional: Caddy + ngrok for a single public URL (see README “Single ngrok domain”).
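A typical loop with those targets (each `run-*` target in its own terminal; ports per the component table above):

```sh
make up            # Postgres, Redis, SRS in Docker
make run-control   # control-api on :8081
make run-gateway   # edge-gateway on :8082
make run-web       # Web UI dev server (Vite)
```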
- One server or small cluster with all services.
- Use real TLS (e.g. Let’s Encrypt) and production-like env (e.g. `EDGE_BASE_URL`, `CONTROL_API_CORS_ORIGINS`, `JWT_SECRET`).
- Same topology as production but smaller instance sizes and a single region.
- Control plane and edge can be scaled; SRS and media ports need careful placement (see below).
- Prefer managed Postgres and Redis where possible; run control-api, edge-gateway, and SRS on VMs/containers.
Idea: One machine runs all services in Docker Compose. No host binaries: control-api, edge-gateway, Postgres, Redis, SRS, Web UI, and Caddy are containers. Caddy is the single entrypoint for HTTP; SRS ports are exposed for RTMP/SRT/WebRTC/HLS.
| Service | Image / build | Role |
|---|---|---|
| postgres | `postgres:16-alpine` | control-api database |
| redis | `redis:7-alpine` | Tokens, visibility cache, rate limit |
| control-api | Build from `Dockerfile.app` | REST API (auth, streams, sessions) |
| edge-gateway | Same image, different CMD | WHIP/WHEP proxy in front of SRS |
| srs | `ossrs/srs:5` | RTMP, WHIP/WHEP, HLS, SRT, WebRTC |
| web | Build from `web/Dockerfile` | Static Vite app (nginx) |
| caddy | `caddy:2-alpine` | Reverse proxy: `/` → web, `/api` → control-api, `/rtc` → edge-gateway |
- control-api and edge-gateway share one Go image (`Dockerfile.app`); each service overrides `command` in Compose.
- SRS uses `deploy/srs.conf.container`, which points `http_hooks` at `http://control-api:8081` (Docker network), so no `host.docker.internal` is needed.
- Web is built with `VITE_API_BASE=/api` so the UI calls the same origin `/api` in production.
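A rough sketch of the shared-image pattern in Compose terms (the real definitions live in `deploy/docker-compose.option-a.yml`; the binary paths below are assumptions, not taken from the repo):

```sh
# Both services build from the same Dockerfile.app image; only the
# command differs per service.
cat > compose.sketch.yml <<'EOF'
services:
  control-api:
    build:
      context: .
      dockerfile: Dockerfile.app
    command: ["/bin/control-api"]    # assumed binary path
  edge-gateway:
    build:
      context: .
      dockerfile: Dockerfile.app
    command: ["/bin/edge-gateway"]   # assumed binary path
EOF
```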
| File | Purpose |
|---|---|
| `Dockerfile.app` | Multi-stage Go build for control-api and edge-gateway |
| `web/Dockerfile` | Build Vite app, serve with nginx |
| `web/nginx.conf` | nginx config for SPA (`try_files`) and static assets |
| `deploy/srs.conf.container` | SRS config with hooks to `control-api:8081` |
| `deploy/Caddyfile.option-a` | Caddy routes: `/api` → control-api, `/rtc` → edge-gateway, `/` → web |
| `deploy/docker-compose.option-a.yml` | Full stack: all services + optional `migrate` profile |
```sh
# First time: run migrations (uses profile "migrate")
docker compose -f deploy/docker-compose.option-a.yml --profile migrate run --rm migrate

# Start everything
docker compose -f deploy/docker-compose.option-a.yml up -d
```

- UI: `http://localhost/`
- API: `http://localhost/api` (e.g. `http://localhost/api/v1/streams`)
- WHIP/WHEP: `http://localhost/rtc/v1/whip/...`, `.../whep/...`
- HLS (direct to SRS): `http://localhost:8083/live/<stream-id>.m3u8`
- RTMP: `localhost:1935`
- SRT: `localhost:10080/udp`
Optional: create a `.env` in `deploy/` (or the repo root) with `JWT_SECRET`, `EDGE_BASE_URL`, `HLS_BASE_URL`, `CONTROL_API_CORS_ORIGINS`, etc. Compose passes these into the containers. For local use, the defaults in the Compose file are enough.
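A minimal `deploy/.env` sketch for this layout (values are placeholders, and the exact URL shapes each variable expects are assumptions; confirm against the Compose file’s defaults):

```sh
# deploy/.env -- illustrative only
JWT_SECRET=replace-with-a-long-random-string
EDGE_BASE_URL=http://localhost/rtc           # assumed shape; /rtc is Caddy's gateway route
HLS_BASE_URL=http://localhost:8083           # direct SRS HLS port (see table below)
CONTROL_API_CORS_ORIGINS=http://localhost
```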
| Port | Protocol | Service | Purpose |
|---|---|---|---|
| 80 | TCP | Caddy | HTTP (UI + API + /rtc) |
| 1935 | TCP | SRS | RTMP ingest/play |
| 8083 | TCP | SRS | HLS/HTTP-FLV playback |
| 8000 | UDP | SRS | WebRTC RTP |
| 10080 | UDP | SRS | SRT |
Ports 1985 (SRS HTTP API), 8081 (control-api), 8082 (edge-gateway), 6379 (Redis), 5432 (Postgres) are not published; only Caddy and the SRS media ports are. Clients use Caddy for API and WHIP/WHEP; they use the host’s 1935, 8083, 8000/udp, 10080/udp for RTMP, HLS, WebRTC, SRT.
- In `Caddyfile.option-a`, switch from `:80` to your domain and let Caddy handle TLS (e.g. `https://keystone.example.com`).
- Expose 443 in `docker-compose.option-a.yml` for Caddy, and set `EDGE_BASE_URL`, `HLS_BASE_URL`, `CONTROL_API_CORS_ORIGINS` to `https://keystone.example.com` (with `/rtc`, `/api` appended as needed).
- For WebRTC, set SRS `rtc_server.candidate` in `srs.conf.container` to the server’s public IP or hostname.
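A sketch of that switch in Caddy terms (routes as in `Caddyfile.option-a`; whether the `/api` and `/rtc` prefixes are stripped is an assumption to verify against the real file):

```sh
# Once the site address is a real domain, Caddy provisions TLS automatically.
cat > deploy/Caddyfile <<'EOF'
keystone.example.com {
    handle_path /api/* {
        reverse_proxy control-api:8081
    }
    handle_path /rtc/* {
        reverse_proxy edge-gateway:8082
    }
    handle {
        reverse_proxy web:80
    }
}
EOF
```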
- Pros: Single `docker compose up`, no host Go/Node; good for staging or a small production VPS; all dependencies containerized.
- Cons: Single host; scaling = a bigger instance or a move to Option B/C.
Suggested minimum: 2 vCPU, 4 GB RAM; 4 vCPU, 8 GB RAM if you expect multiple concurrent streams.
Idea: Run control-api, edge-gateway, and SRS on one or two “app” servers; use managed Postgres and Redis.
| Component | Where |
|---|---|
| Postgres | Managed: Neon, Supabase, AWS RDS, GCP Cloud SQL, etc. |
| Redis | Managed: Upstash, AWS ElastiCache, Redis Cloud, etc. |
| control-api | VM/container (e.g. same host as edge-gateway). |
| edge-gateway | VM/container; same host as SRS to keep media path short. |
| SRS | VM/container; same host as edge-gateway preferred. |
| Caddy / LB | Same host or separate; TLS and routing. |
| Web UI | Static hosting: same Caddy, or S3 + CloudFront / equivalent. |
- Pros: DB and Redis are backed up and maintained; you scale app+SRS separately.
- Cons: SRS still needs UDP (8000, 10080) and multiple ports; some managed platforms are TCP-only.
Idea: control-api, edge-gateway, SRS, Caddy as workloads; Postgres/Redis can be managed or in-cluster.
- control-api / edge-gateway: Deployments + Services; env from ConfigMap/Secret.
- SRS: Deployment + Service; needs NodePort or LoadBalancer for 1935, 1985, 8080, 8000/udp, 10080/udp. Ingress usually does not handle UDP; use a LoadBalancer or host networking for SRS (see the Service sketch after this list).
- Web: Ingress + static files (e.g. from ConfigMap or object storage).
- Postgres / Redis: Prefer managed; otherwise StatefulSets + persistent volumes.
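As referenced above, a sketch of exposing SRS’s media ports through a LoadBalancer Service (name and selector are assumptions; mixed TCP/UDP in one Service needs Kubernetes ≥ 1.26, otherwise split it into two Services):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: srs-media            # assumed name
spec:
  type: LoadBalancer
  selector:
    app: srs                 # assumed label
  ports:
    - { name: rtmp, port: 1935,  protocol: TCP }
    - { name: api,  port: 1985,  protocol: TCP }
    - { name: http, port: 8080,  protocol: TCP }
    - { name: rtc,  port: 8000,  protocol: UDP }
    - { name: srt,  port: 10080, protocol: UDP }
EOF
```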
Pros: Scaling, rolling updates, multi-region possible.
Cons: More ops; UDP and multi-port for SRS need explicit handling.
- Single domain (recommended for simplicity):
  - Caddy (or Nginx) on 443/80.
  - Routes: `/api/*` → control-api, `/rtc/*` → edge-gateway, `/` → Web UI (and/or static).
  - Set `EDGE_BASE_URL` and `CONTROL_API_CORS_ORIGINS` to that domain (e.g. `https://keystone.example.com`).
- Ports to expose (if not behind one HTTPS domain):
  - 443 (HTTPS) for API + rtc + UI.
  - If SRS is reached directly by clients (e.g. RTMP/SRT): 1935 (RTMP), 10080/udp (SRT), and optionally 8000/udp for WebRTC (depends on SRS config). These are often on the same host as edge-gateway.
- SRS and NAT: for WebRTC, SRS `rtc_server.candidate` must be the public IP or hostname that clients use. In production, set this in `deploy/srs.conf` (or an override) to your server’s public IP or a TURN-like endpoint.
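The relevant `srs.conf` fragment looks roughly like this (SRS 5.x syntax; verify against `deploy/srs.conf.container` and merge with the existing `rtc_server` block):

```sh
# Print the fragment; fold it into deploy/srs.conf by hand.
cat <<'EOF'
rtc_server {
    enabled   on;
    listen    8000;           # UDP; matches the published 8000/udp
    candidate 203.0.113.10;   # public IP/hostname clients can reach,
                              # not a private or Docker-internal address
}
EOF
```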
Per-service environment:

- control-api: `DATABASE_URL`, `REDIS_ADDR` (and `REDIS_PASSWORD` if used), `JWT_SECRET`, `EDGE_BASE_URL`, `CONTROL_API_CORS_ORIGINS`, `HLS_BASE_URL` (and optionally `RTMP_BASE_URL`, `SRT_BASE_URL`) set to the public URLs clients will use.
- edge-gateway: `SRS_UPSTREAM` (internal URL to SRS, e.g. `http://localhost:1985` or `http://srs:1985`), `REDIS_ADDR` (and `REDIS_PASSWORD` if used).
- SRS: `http_hooks` must point at control-api (e.g. `http://control-api:8081/v1/hooks/...` in Docker, or the internal host/IP); `rtc_server.candidate` = public IP/hostname for WebRTC.
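Put together, a hedged example of one production host’s environment (variable names from the list above; hosts and URL shapes are illustrative assumptions):

```sh
# control-api
DATABASE_URL=postgres://keystone:secret@db.internal:5432/keystone?sslmode=require
REDIS_ADDR=redis.internal:6379
JWT_SECRET=replace-with-a-long-random-string
EDGE_BASE_URL=https://keystone.example.com/rtc
HLS_BASE_URL=https://keystone.example.com
CONTROL_API_CORS_ORIGINS=https://keystone.example.com
# edge-gateway
SRS_UPSTREAM=http://srs:1985
REDIS_ADDR=redis.internal:6379
```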
- control-api / edge-gateway: Stateless; scale horizontally behind a load balancer. Redis and Postgres must be shared.
- SRS: CPU/memory scale with concurrent streams and resolution. For many streams, run multiple SRS nodes and put a load balancer in front of the edge-gateway (or multiple edge-gateway instances), with sticky or stream-based routing so a given stream always hits the same SRS instance (a routing sketch follows this list).
- Redis: Size for token count and visibility cache; use managed Redis for persistence and failover if needed.
- Postgres: Size for users and stream metadata; use managed Postgres for backups and HA.
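One way to get the stream-based routing mentioned above, sketched with Caddy’s `uri_hash` load-balancing policy (backend names are hypothetical; the repo does not ship this):

```sh
# Print the fragment; it belongs inside the site block of the Caddyfile.
cat <<'EOF'
handle /rtc/* {
    # uri_hash keys on the request URI, so a given stream's WHIP/WHEP
    # path consistently lands on the same gateway/SRS pair.
    reverse_proxy edge-gw-1:8082 edge-gw-2:8082 {
        lb_policy uri_hash
    }
}
EOF
```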
- In the app, set the Control API Base URL to the public API base (e.g. `https://keystone.example.com/api` if Caddy strips `/api` and forwards to control-api).
- Set the Media / SRS host (and ports) to the host that serves RTMP/SRT/WHIP (often the same domain or a dedicated media host). Use the same TLS domain where possible to avoid mixed content.
- Postgres: managed or backed up; migrations run (`make migrate` with the prod `DATABASE_URL`).
- Redis: persistent and (in prod) managed or HA.
- `JWT_SECRET` and any Redis password: strong and sourced from secrets.
- TLS on all client-facing endpoints (Caddy/Ingress).
- `EDGE_BASE_URL`, `HLS_BASE_URL`, CORS, and SRS `candidate` set for public access.
- SRS `http_hooks` target reachable from SRS (e.g. control-api on the same network or a reachable host).
- Web UI built and served (`cd web && npm run build`; serve `web/dist`).
- Android app configured with the production API and media base URLs.
- Health checks: `/healthz` (and optionally `/metrics`) for control-api and edge-gateway; monitor SRS and DB/Redis.
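A quick smoke test against the internal service ports (paths from the checklist above; the versions endpoint is the stock SRS HTTP API):

```sh
curl -fsS http://localhost:8081/healthz           # control-api
curl -fsS http://localhost:8082/healthz           # edge-gateway
curl -fsS http://localhost:1985/api/v1/versions   # SRS HTTP API
```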
- Staging / small prod (Option A): Single VPS (e.g. 4 GB RAM, 2 vCPU): ~$20–50/month; plus optional managed DB/Redis ~$15–40/month.
- Production (Option B): Managed Postgres + Redis ~$30–80/month; one or two app VMs (4–8 GB RAM) ~$40–100/month; optional CDN for static web ~$5–20/month. Total roughly $75–200/month for a small-to-medium deployment.
- Kubernetes (Option C): Cluster cost (e.g. GKE/EKS) plus nodes and managed DB/Redis; typically $150+/month depending on region and size.
Use this as a starting point; adjust for traffic, number of streams, and chosen providers.