HLD: SSP Platform (MVP + Evolutionary Path)

Objective

Build a lightweight, self-hosted Supply-Side Platform (SSP) to monetize web and CTV/OTT video inventory via server-to-server header bidding, starting with Magnite and FreeWheel, with transparent reporting and reconciliation. Reduce reliance on third-party ad servers while maintaining operational simplicity.


Core Principles

  • Start small, scale deliberately: MVP supports ≤500 RPS; architecture is horizontally scalable but not over-engineered.
  • Own the stack: Self-hosted Prebid Server for full control over auction logic and data.
  • Video-first: VAST 3.0/4.0 compliant; bid caching enabled.
  • Reconciliation-ready: Reporting prioritizes accuracy over real-time speed (5–60 min latency acceptable).
  • Future-proof: No architectural blockers for display, mobile, or direct campaigns later.

MVP Scope vs. Future

| Capability Area | MVP | Post-MVP |
| --- | --- | --- |
| Inventory Formats | Web video, CTV/OTT (VAST) | Mobile app, display, audio |
| Demand Sources | Magnite, FreeWheel (RTB only) | Additional DSPs, direct-sold campaigns, PMPs |
| Auction Logic | Static first-price auction, fixed timeouts/floors | Dynamic floors, priority rules, hybrid auctions |
| Compliance | ads.txt, sellers.json, schain propagation | Advanced privacy controls, consent-aware routing |
| User Data & Targeting | None (pass-through IDs only) | Segmentation, blacklists, supply/DSP-specific rules |
| Tracking & Measurement | Basic PBS logging | Unified event endpoint, pixel multiplexing, dynamic pixel injection |
| Reporting | Hourly aggregates + partner reconciliation | Near-real-time KPIs, anomaly detection, attribution-ready |
| Optimization | Manual configuration | Automated KPI-driven tuning (eCPM, sellout, CPA, etc.) |

Key Components (MVP)

| Component Name | Function | MVP? | Readymade Solutions (GCP / AWS) | Approx. Workload for Delivery |
| --- | --- | --- | --- | --- |
| Inventory Gateway | Accepts VAST/player requests; normalizes to OpenRTB; injects metadata & schain | ✅ Yes | Cloud Run (GCP) / Lambda + API Gateway (AWS) | 1 week |
| Prebid Server | Orchestrates S2S auction with demand partners; handles bid caching, timeouts, schain | ✅ Yes | Self-hosted on GKE (GCP) / EKS or Fargate (AWS) | 1 week (config + deploy) |
| Log Storage | Stores raw PBS request/response logs for reporting | ✅ Yes | Cloud Storage (GCP) / S3 (AWS) | <1 day |
| Batch ETL Pipeline | Ingests logs hourly; pulls partner reports; enriches & loads into DWH | ✅ Yes | Cloud Dataflow / Composer (GCP) / Glue + EventBridge (AWS) | 2 weeks |
| Data Warehouse (DWH) | Central store for aggregated metrics (impressions, wins, eCPM, etc.) | ✅ Yes | BigQuery (GCP) / Redshift or Athena (AWS) | <1 day (provisioning) |
| Reconciliation Dashboard | Visualizes internal vs. partner-reported metrics | ✅ Yes | Looker (GCP) / QuickSight or Metabase on EC2 (AWS) / self-hosted Grafana | 1 week |
| SSP Configuration UI | Web interface to manage demand partners, bid floors, timeouts, schain settings, and reporting views | ❌ No | Custom development | 4–6 weeks (Post-MVP) |
| CDP (Customer Data Platform) | Hosts user segments, blacklists; applies rules per supply/DSP | ❌ No | Custom development | 4–6 weeks (Post-MVP) |
| Event Tracking Service | Aggregates tracking pixels; exposes a unified /track endpoint; injects pixels into ad markup | ❌ No | Custom development | 3–5 weeks (Post-MVP) |
| Post-Optimization Service | Analyzes performance data; auto-adjusts floors, weights, campaign settings | ❌ No | Custom development | 4–6 weeks (Post-MVP) |
| Direct Campaign DB | Stores direct-sold line items, targeting rules, priorities | ❌ No | Firestore / Cloud SQL (GCP) / DynamoDB / RDS (AWS) | 2–3 weeks (Post-MVP) |
| Compliance Assets | Hosts ads.txt, sellers.json, schain validation | ✅ Yes | Cloud CDN / Firebase Hosting (GCP) / S3 + CloudFront (AWS) | <1 day |
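To make the Inventory Gateway's role concrete, here is a minimal sketch of its normalization step: mapping player parameters onto an OpenRTB video request, injecting the schain node, and forwarding the result to Prebid Server. The player parameter names, seller constants, and internal PBS endpoint are illustrative assumptions, and the auction path should be verified against the deployed PBS version.

```python
# Illustrative sketch of the Inventory Gateway's normalization step.
# Player parameter names, seller constants, and the PBS endpoint below are
# assumptions for this example, not a finalized contract.
import uuid

import requests  # any HTTP client will do; requests keeps the sketch short

PBS_AUCTION_URL = "http://prebid-server.internal/openrtb2/auction"  # assumed internal endpoint
SELLER_DOMAIN = "ssp.example.com"  # placeholder seller identity for the schain node
SELLER_ID = "example-seller-id"    # placeholder seller account id


def build_openrtb_request(player_params: dict) -> dict:
    """Map player/VAST tag parameters onto an OpenRTB 2.x video bid request."""
    return {
        "id": str(uuid.uuid4()),
        "imp": [{
            "id": "1",
            "video": {
                "mimes": ["video/mp4"],
                "protocols": [3, 7],  # VAST 3.0 / VAST 4.0 per the OpenRTB protocol enum
                "w": int(player_params.get("width", 1920)),
                "h": int(player_params.get("height", 1080)),
            },
            "bidfloor": float(player_params.get("floor", 0.0)),
        }],
        "site": {"page": player_params.get("page_url", "")},
        "device": {"ua": player_params.get("ua", ""), "ip": player_params.get("ip", "")},
        "tmax": 600,  # conservative auction timeout (ms), per the risk table below
        "source": {"ext": {"schain": {  # supply-chain object injected by the gateway
            "ver": "1.0",
            "complete": 1,
            "nodes": [{"asi": SELLER_DOMAIN, "sid": SELLER_ID, "hp": 1}],
        }}},
    }


def forward_to_pbs(player_params: dict) -> dict:
    """Send the normalized request to Prebid Server and return its response."""
    ortb = build_openrtb_request(player_params)
    resp = requests.post(PBS_AUCTION_URL, json=ortb, timeout=1.0)
    resp.raise_for_status()
    return resp.json()
```

Running this as a thin Cloud Run or Lambda handler keeps the gateway stateless, which is what the autoscaling mitigation later in this document relies on.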

Overview

  • All MVP components can be built with minimal custom code (mostly config + light glue logic).
  • Post-MVP services require custom development but can leverage cloud-managed backends (e.g., DWH, auth, queues).
  • Total MVP engineering effort: ~4 weeks
  • While not strictly required for MVP (configs can be managed via YAML/CI), a configuration UI becomes essential for managing dynamic floors, DSP routing, direct campaigns and optimization rules. It’s best prioritized in early Post-MVP (Phase 2) to avoid operational bottlenecks.
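Likewise, the reconciliation check that the Batch ETL feeds into the dashboard can start as a simple hourly comparison. The sketch below assumes hourly aggregates have already been loaded into the DWH from PBS logs and the partner reporting APIs; the record shape, field names, and the 5% tolerance are placeholders to be agreed with finance/ops.

```python
# Hypothetical reconciliation check run after the hourly Batch ETL load.
# Record fields and the 5% tolerance are assumptions, not a spec.
from dataclasses import dataclass


@dataclass
class HourlyAggregate:
    hour: str          # e.g. "2026-02-03T07:00Z"
    partner: str       # "magnite" or "freewheel"
    impressions: int
    revenue_usd: float


def reconcile(internal: list, partner: list, tolerance: float = 0.05) -> list:
    """Flag (hour, partner) buckets where internal and partner-reported
    revenue diverge by more than the tolerance. Partner-reported spend is
    treated as the source of truth, per the risk mitigations below."""
    partner_idx = {(p.hour, p.partner): p for p in partner}
    discrepancies = []
    for row in internal:
        ref = partner_idx.get((row.hour, row.partner))
        if ref is None or ref.revenue_usd == 0:
            continue  # nothing to compare against yet
        delta = abs(row.revenue_usd - ref.revenue_usd) / ref.revenue_usd
        if delta > tolerance:
            discrepancies.append({
                "hour": row.hour,
                "partner": row.partner,
                "internal_usd": row.revenue_usd,
                "partner_usd": ref.revenue_usd,
                "delta_pct": round(delta * 100, 2),
            })
    return discrepancies
```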

Architectural Diagram

```mermaid
graph LR
    A[[Supply Side Player/SDK]] ==>|Bid Request| B
    B ==> O
    O((Ad Template)) ==> |Bid Response| A
    A -.->|Event Tracking|M
    M --->|Inject Pixels| O

    subgraph "Core SSP (MVP)"
        B(Inventory Gateway)
        B <==>|OpenRTB| C[Prebid Server]
        C -->|Structured Logs| F[("Log Storage")]
    end

    subgraph "External"
        D[Magnite]
        E[FreeWheel]
        N[Verification Vendors]
        I[Magnite API]
        J[FreeWheel API]
        C --> D
        C --> E
    end

    subgraph "Reporting & Ops (MVP)"
        F --> G[Batch ETL]
        G --> H[("DWH")]
        I --> G
        J --> G
        H --> K[Reconciliation Dashboard]
    end

    subgraph "Post-MVP"
        L[Custom CDP]
        Q[(Campaign Database)]
        B <--> |User Context|L
        L -->|Segments/Blacklists| C
        C -->|Win Notification| M[Event Tracking Service]
        C --> |Direct Sales/PMP| Q
        M --> |Session-based Analytics| H
        M -.->|Multiplex| D
        M -.->|Multiplex| E
        M -.->|Multiplex| N
        H --> P[Post-Optimization Service]
        P -->|Dynamic Floors / Rules| C
        P -->|Campaign Settings| Q
    end

    classDef mvp fill:#000,stroke:#2e8b57;
    classDef future fill:#555,stroke:#1e5dbf;

    class B,C,D,E,F,G,H,I,J,K,O mvp
    class L,M,N,P,Q future
```

Legend

  • Green (MVP): Components required for the initial launch.
  • Blue (Post-MVP): The intelligence and control layer that enables advanced optimization, privacy-safe tracking, and data-driven decisioning.
  • The Inventory Gateway acts as the central entry point, enriching requests with context from the CDP.
  • The Event Tracking Service decouples your supply from vendor-specific tracking, giving you control and auditability.
  • The Post-Optimization Service closes the loop by using reporting data to automatically tune the auction.
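To illustrate the decoupling the Event Tracking Service provides, here is a hedged sketch of the pixel-multiplexing core behind a unified /track endpoint. The vendor URL templates and event names are invented placeholders; in practice the mapping would live in configuration (and eventually in the SSP Configuration UI).

```python
# Hypothetical pixel-multiplexing core of the Event Tracking Service.
# Vendor URL templates and event names are illustrative placeholders.
from urllib.parse import urlencode

import requests

# One internal event fans out to several vendor-specific pixels.
VENDOR_PIXELS = {
    "impression": [
        "https://tracking.vendor-a.example/imp?{qs}",    # placeholder verification vendor
        "https://tracking.vendor-b.example/pixel?{qs}",  # placeholder demand partner
    ],
    "complete": [
        "https://tracking.vendor-a.example/q100?{qs}",
    ],
}


def track(event: str, auction_id: str, creative_id: str) -> list:
    """Handle a single /track call: fire every vendor pixel mapped to the event
    and return the HTTP status codes (to be logged into the DWH pipeline)."""
    qs = urlencode({"auction_id": auction_id, "creative_id": creative_id})
    statuses = []
    for template in VENDOR_PIXELS.get(event, []):
        try:
            resp = requests.get(template.format(qs=qs), timeout=0.5)
            statuses.append(resp.status_code)
        except requests.RequestException:
            statuses.append(0)  # record the failure instead of breaking playback
    return statuses
```

Keeping this fan-out server-side is what makes vendor tracking auditable and lets the same events flow into the DWH for session-based analytics.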

Why Prebid Server?

  • Provides battle-tested OpenRTB auction logic, concurrency, and timeout handling.
  • Avoids rebuilding core RTB infrastructure (saves 3–6 months of dev effort).
  • Enables rapid integration with future demand partners via community adapters.
  • Fully inspectable and auditable—aligns with transparency goals.

No custom code is needed in the MVP if standard Prebid adapters exist for both partners; if not, expect minimal adapter development (~1–2 weeks).


Core Risks & Mitigations

| Risk | Impact | Likelihood | Mitigation Strategy |
| --- | --- | --- | --- |
| Demand partner lacks an adapter for PBS | Delivery delayed by 1–2 weeks; revenue impact | Low–Medium | • Audit adapter availability immediately<br>• Budget 2 weeks for custom adapter dev<br>• Use a generic OpenRTB adapter as a fallback if endpoints are standard |
| Reporting reconciliation mismatches (internal vs. partner numbers diverge) | Erodes trust; difficulty meeting KPI commitments | High | • Define a canonical event schema early (impression = win + visible render)<br>• Use partner-reported spend as the source of truth<br>• Bring in a data analyst early to localize discrepancies |
| Autoscaling failure under an unexpected traffic spike (e.g., viral CTV content → 2K+ RPS) | Revenue loss, latency spikes, auction timeouts | Medium | • Design a stateless, containerized PBS deployment<br>• Keep extra capacity online during peak hours<br>• Load-test before launch<br>• Add a circuit breaker: degrade gracefully (e.g., disable slow bidders) |
| Understaffing post-MVP (especially UI or data engineers) | Post-MVP features stall; tech debt accumulates; production incidents become costly | High | • Hire or assign at least 1 full-stack engineer with UI experience during MVP planning<br>• Prioritize the UI early (even basic CRUD) to avoid a config-as-code bottleneck<br>• Consider low-code dashboards (Metabase, Retool) to reduce frontend load |
| Lack of observability in production (can't debug low fill rates or latency) | Slow incident response; poor optimization | High | • Enforce structured JSON logging from Day 1<br>• Monitor key SLOs: p95 latency <800ms, error rate <1%<br>• Set up alerting for important KPIs and partner bid response rates |
| Video-specific issues (VAST parsing errors, creative validation, player timeouts) | Poor user experience; impression losses | Medium | • Validate VAST responses in the gateway<br>• Consider PBS bid caching for video<br>• Set conservative timeouts (≤600ms for the auction) |
| Compliance gaps (missing schain, invalid sellers.json) | Bids rejected by premium DSPs | Low | • Automate schain propagation in PBS<br>• Validate everything with the IAB toolset<br>• Include compliance checks in the CI pipeline |
| Over-reliance on open-source Prebid Server without an upgrade path | Security/feature stagnation; future changes become costly | Medium | • Pin to a stable PBS version but monitor releases<br>• Avoid deep forks; use config/hooks for customization<br>• Allocate 10% of engineering time to dependency updates<br>• Identify essential requirements after 6–12 months of operations and consider building a custom replacement |
| Business success outpaces infrastructure (e.g., 10x traffic in 3 months) | System instability, missed revenue | Medium | • Choose cloud-native, elastic services from Day 1 (no VMs)<br>• Design the log pipeline, CDP, and pixel multiplexing to scale independently<br>• Establish cost-per-RPS monitoring to forecast budget needs |
| No clear ownership of optimization logic (who tunes floors? who defines KPIs?) | Post-MVP optimization stalls; weaker traction with partners | High | • Define a product/engineering partnership model early<br>• Start with a simple rule engine (e.g., "raise floor if fill >90%") before heavy-duty automation (see the sketch below the table) |
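As a concrete starting point for the "simple rule engine" mitigation in the last row above, a minimal floor-adjustment rule might look like the sketch below. The thresholds, step size, and cap are placeholders to be tuned against real reporting data before any automation is trusted.

```python
# Minimal floor-adjustment rule, per the "raise floor if fill > 90%" example.
# Thresholds, step sizes, and bounds are placeholders for illustration only.


def adjust_floor(current_floor: float, fill_rate: float,
                 step: float = 0.10, floor_cap: float = 20.0) -> float:
    """Nudge the bid floor up when inventory is nearly sold out, and back
    down when fill drops, keeping the result within sane bounds."""
    if fill_rate > 0.90:
        new_floor = current_floor * (1 + step)
    elif fill_rate < 0.60:
        new_floor = current_floor * (1 - step)
    else:
        new_floor = current_floor
    return round(min(max(new_floor, 0.01), floor_cap), 2)


# Example: 94% fill at a $5.00 floor suggests raising the floor to $5.50.
print(adjust_floor(5.00, 0.94))
```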

Key Takeaways

  • Hiring risk is real: UI and/or Ops debt will cripple iteration speed
  • Scale surprises are typical in OLV: a single popular title can spike CTV traffic 10–100x, so design failsafes up front or accept overspending on infrastructure
  • Reconciliation is expensive: resolving the problems behind reporting irregularities quickly requires having the relevant competency on the team
  • Start observability early: debugging RTB issues without logs/metrics is a hopeless gamble (see the logging sketch below)
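A minimal sketch of the structured logging the observability takeaway (and the related risk mitigation) calls for, assuming one log line per bidder response; the field names are illustrative and should be aligned with the log schema the deployed PBS actually emits.

```python
# Illustrative structured JSON log line for the request path; field names
# are assumptions to be aligned with the PBS log schema actually deployed.
import json
import logging
import time
from typing import Optional

logger = logging.getLogger("ssp.gateway")
logging.basicConfig(level=logging.INFO, format="%(message)s")


def log_auction(auction_id: str, bidder: str, status: str,
                latency_ms: float, bid_price: Optional[float]) -> None:
    """Emit one machine-parseable line per bidder response so the Batch ETL
    can compute p95 latency, error rate, and bid response rate per partner."""
    logger.info(json.dumps({
        "ts": time.time(),
        "auction_id": auction_id,
        "bidder": bidder,
        "status": status,          # e.g. "bid", "no_bid", "timeout", "error"
        "latency_ms": latency_ms,
        "bid_price": bid_price,
    }))


log_auction("a1b2c3", "magnite", "bid", 412.0, 7.25)
```

With lines like this in Log Storage, the reporting pipeline can derive the SLOs and per-partner KPIs listed in the risk table without any additional instrumentation.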

Team & Timeline (Estimate)

  • Team:
    • 1 BE Engineer
    • 1 DevOps / Data Engineer (shared)
  • MVP TTM: 4 weeks
    • Week 1: Deploy & configure Prebid Server + gateway
    • Week 2: Integrate demand partners + logging
    • Weeks 3-4: Reporting pipeline + reconciliation validation
  • Post-MVP Team:
    • 2 BE Engineers
    • 1 FE / Full-Stack Engineer
    • 1 DevOps / Data Engineer
    • 1 Data Engineer / Data Analyst