@tanvirraj · Created December 3, 2025 14:27

You:

I want to learn how to build a sync engine like Linear from scratch. How can I do that, and what resources are available?

You can search GitHub to find some easy-to-grok projects that implement a real-time sync engine from scratch, not using any service like ElectricSQL or Zero Sync.

I want to implement it totally myself.


ChatGPT:

You can absolutely build your own Linear-style sync engine, but it is a big project—so the trick is to tackle it in layers and steal as many ideas as possible from good prior art.

Below I’ll give you:

  1. What a “Linear-like sync engine” actually is (in simple terms)

  2. A concrete step-by-step plan to build one from scratch

  3. Very specific GitHub repos to read / copy ideas from

  4. Reading / video list about Linear, Replicache, local-first sync, etc.


  1. What you’re trying to build (Linear-style sync)

For a Linear-style app you basically want:

  • Local DB on every client (IndexedDB / SQLite / whatever). UI reads/writes only to local data → instant, offline-capable. typeonce.dev+1

  • Authoritative DB on the server (Postgres, etc).

  • Sync engine that glues them together:

    • Initial sync (download everything you care about)

    • Incremental sync (only deltas)

    • Push mutations from client → server

    • Broadcast changes from server → all relevant clients

    • Handle conflicts & offline edits → eventual consistency

Convex’s “Object Sync Engine” and Typeonce’s article both describe this as three parts: local store, server store, and sync protocol. stack.convex.dev+1

Linear specifically uses: local IndexedDB + MobX models on the client, a Postgres-backed sync server that tails the replication log, and a protocol for initial + incremental sync plus optimistic updates. stack.convex.dev+2GitHub+2

You’re trying to build your own version of that.


  2. Step-by-step plan to build your own sync engine

I’ll assume TypeScript + Node + a web client (React or similar), but you can translate to any stack.

Phase 0 – Study how Linear & friends do it

Before coding, skim these:

  • Linear’s own talk: “Scaling the Linear Sync Engine” (Tuomas Artman) – high-level architecture and scaling concerns. linear.app

  • Reverse engineering Linear:

    • GitHub: wzhudev/reverse-linear-sync-engine – super detailed write-up plus annotated code. GitHub

    • Blog on DEV: “I Reversed Linear’s Sync Engine to See How It Works”. DEV Community

  • “A Map of Sync” & “Object Sync Engine” (Convex) – gives you a taxonomy of sync engines & shows how Linear fits into the “object sync” space. stack.convex.dev+1

You don’t need to understand every detail on first read; just try to understand:

  • They keep a log / versioning of changes.

  • They distinguish local models vs server truth.

  • They have a query-based / subscription-based sync (not just “full table”).

Then start building your own tiny version.


Phase 1 – Tiny vertical slice: sync a single table (todos)

Goal: a minimal engine that syncs a Todo list across multiple browsers in real time, with offline support.

Use this as a reference implementation:
matheuslanduci/sync-engine + the article “Building a Sync Engine” – it walks through local DB (Dexie/IndexedDB), WebSockets, in-memory server store, and real-time sync. matheuslanduci.com

1. Client local store

Pick one: Dexie (IndexedDB) is the simplest; SQLocal or PGlite work if you prefer SQL in the browser. The reference repo uses Dexie.

Schema for a simple todo:

type Todo = {
  id: string;       // UUID created on client
  content: string;
  done: boolean;
  version: number;  // simple counter per item
  clientId: string; // which client created/edited it
  createdAt: string;
}

Client rules:

  • All UI reads/writes go to local DB. No direct server calls from components.

  • A background process (hook, web worker, or effect) watches for local changes and sends them to the server.

2. WebSocket connection + protocol

Create a WS connection like:

// Client
const ws = new WebSocket(`ws://localhost:3000/ws?clientId=${clientId}`);

Define simple messages:

type WSMessage =
  | { event: "item:put"; todo: Todo }
  | { event: "item:delete"; todoId: string };
  • When local user creates/updates/deletes a todo, you:

    1. Apply it locally (DB transaction).

    2. Enqueue a message in an outbox table.

    3. Background loop reads the outbox and sends messages over WebSocket.

Matheus’s article literally does this with CustomEvent + Dexie + WebSocket and is very approachable. matheuslanduci.com
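To make the outbox concrete, here is a minimal sketch of that background loop with Dexie, reusing the Todo and WSMessage types and the ws connection from above (the todos/outbox table names are illustrative, not taken from the article):

import Dexie, { type Table } from "dexie";

// Hypothetical local database: one data table plus an outbox table.
class LocalDB extends Dexie {
  todos!: Table<Todo, string>;
  outbox!: Table<{ seq?: number; message: WSMessage }, number>;
  constructor() {
    super("todo-sync");
    this.version(1).stores({ todos: "id", outbox: "++seq" });
  }
}

const db = new LocalDB();

// Background loop: drain the outbox whenever the socket is open.
async function drainOutbox() {
  const pending = await db.outbox.orderBy("seq").toArray();
  for (const entry of pending) {
    if (ws.readyState !== WebSocket.OPEN) return; // retry on the next tick
    ws.send(JSON.stringify(entry.message));
    await db.outbox.delete(entry.seq!); // naive: a real engine waits for a server ack
  }
}

setInterval(drainOutbox, 1000);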

3. Minimal server

Server has:

  • A simple in-memory or SQLite todos table.

  • A Map<clientId, WebSocket> of connected clients.

  • On item:put:

    • Upsert into server store.

    • Broadcast to all clients (including the sender).

  • On item:delete:

    • Delete in server store.

    • Broadcast delete.

Again, the article’s server code is almost exactly this. matheuslanduci.com

✅ At the end of this phase you have:

  • Realtime sync

  • Multiple clients see the same data

  • Local storage + WebSockets

  • Very naive conflict handling (last writer wins, probably by createdAt or version).

This is already a “sync engine” — just tiny and with only one table.


Phase 2 – Upgrade to a versioned change log (Replicache-style)

Now move closer to what Replicache and Convex describe: /pull + /push endpoints with a monotonically increasing server version. stack.convex.dev+1

Server schema (example)

Add a changes log:

changes(
  server_seq     BIGSERIAL PRIMARY KEY, -- strictly increasing
  client_id      TEXT,
  client_mut_id  BIGINT,      -- per-client mutation id
  table_name     TEXT,        -- "table" is a reserved word in SQL, so use table_name
  row_id         TEXT,
  op             TEXT,        -- 'put' | 'delete'
  patch          JSONB,       -- the actual data or diff
  ts             TIMESTAMPTZ DEFAULT now()
);
  • Whenever a mutation happens on the server, you:

    • Apply it to the canonical tables.

    • Append an entry to changes.

Replicache / Convex “Object Sync Engine” both basically work like this (with more polish). stack.convex.dev+2stack.convex.dev+2
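A sketch of that server-side rule with node-postgres, assuming the changes table above and an illustrative todos table (applyPut is a hypothetical helper, not Replicache's or Convex's API):

import { Pool } from "pg";

const pool = new Pool(); // connection config via env vars

// Apply a mutation to the canonical table and append to the change log in
// one transaction, so the log can never diverge from the data.
async function applyPut(todo: { id: string; content: string; done: boolean }) {
  const client = await pool.connect();
  try {
    await client.query("BEGIN");
    await client.query(
      `INSERT INTO todos (id, content, done) VALUES ($1, $2, $3)
       ON CONFLICT (id) DO UPDATE SET content = $2, done = $3`,
      [todo.id, todo.content, todo.done]
    );
    await client.query(
      `INSERT INTO changes (table_name, row_id, op, patch)
       VALUES ('todos', $1, 'put', $2)`,
      [todo.id, JSON.stringify(todo)]
    );
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  } finally {
    client.release();
  }
}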

Sync endpoints

Borrow the Replicache-style protocol:

  • POST /push

    • Body: { clientID, mutations: [{ id, name, args, baseVersion }] }

    • Server:

      • Runs each mutation in order.

      • Writes to DB and to changes.

      • Returns { lastMutationID, nextServerVersion }.

  • POST /pull

    • Body: { clientID, sinceVersion }

    • Server:

      • Looks at changes where server_seq > sinceVersion.

      • Returns:
        { changes: [...], lastServerVersion }.

Convex calls these responsibilities “initial sync, incremental sync, and mutations”, and their article lays out exactly these concepts. stack.convex.dev
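In TypeScript, those wire shapes might look like this (a sketch; the field names just mirror the bullets above):

type Mutation = {
  id: number;          // per-client, monotonically increasing
  name: string;        // e.g. "createTodo"
  args: unknown;
  baseVersion: number; // server version the client had when it mutated
};

type PushRequest = { clientID: string; mutations: Mutation[] };
type PushResponse = { lastMutationID: number; nextServerVersion: number };

type PullRequest = { clientID: string; sinceVersion: number };

type PulledChange = {
  serverSeq: number;
  table: string;
  rowId: string;
  op: "put" | "delete";
  patch: unknown; // full row or diff, matching the changes.patch column
};

type PullResponse = { changes: PulledChange[]; lastServerVersion: number };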

Client logic

Client keeps:

let lastServerVersion: number;   // stored in local DB
let lastMutationID: number;      // per client, stored locally

Process:

  1. On startup:

    • Load local DB.

    • Call /pull with sinceVersion = lastServerVersion (0 if first time).

    • Apply any changes into the local DB.

  2. When user mutates:

    • Apply an optimistic transaction to local DB.

    • Enqueue a mutation { id: ++lastMutationID, name, args } in an outbox table.

  3. Background sync loop:

    • If there are pending mutations, call /push.

    • Update lastMutationID from server.

    • Then call /pull with updated sinceVersion.

This is the “generic engine” piece. As soon as you have this working for multiple tables (issues, comments, etc.), you’ve basically built your own small Linear-style sync core.
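Putting that loop into code (a sketch; pendingMutations, removeAcknowledged, and applyChanges are hypothetical helpers over your local DB, and Mutation is from the types sketched earlier):

declare const clientID: string;                                     // stable per-client id
declare function pendingMutations(): Promise<Mutation[]>;           // read the outbox
declare function removeAcknowledged(lastMutationID: number): Promise<void>;
declare function applyChanges(changes: unknown[]): Promise<void>;   // write local DB

let lastServerVersion = 0; // persisted in the local DB in practice

async function syncOnce() {
  // 1. Push queued mutations, oldest first.
  const mutations = await pendingMutations();
  if (mutations.length > 0) {
    const pushRes = await fetch("/push", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ clientID, mutations }),
    });
    const { lastMutationID } = await pushRes.json();
    await removeAcknowledged(lastMutationID); // drop acked entries from the outbox
  }

  // 2. Pull everything newer than what we have.
  const pullRes = await fetch("/pull", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ clientID, sinceVersion: lastServerVersion }),
  });
  const { changes, lastServerVersion: next } = await pullRes.json();
  await applyChanges(changes);
  lastServerVersion = next;
}

setInterval(syncOnce, 2000); // or run on a server "poke" over WebSocket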


Phase 3 – Conflict resolution & offline behavior

At first, use simple Last-Write-Wins (LWW):

  • Each row has updatedAt or server_seq.

  • When applying server changes to local state:

    • If incoming server_seq > local version → overwrite.
  • For deletions, treat them as a special op with their own server_seq.

For many Linear-like use cases (issues, labels, states), this is fine. The Linear reverse-engineering write-up notes that OT/CRDT is overkill for simple CRUD data and that LWW with good modeling can be enough. GitHub+1
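A minimal sketch of that LWW check when applying a pulled change, assuming a hypothetical localStore over IndexedDB and the Todo type from Phase 1:

declare const localStore: {
  get(id: string): Promise<Todo | undefined>;
  put(todo: Todo): Promise<void>;
  delete(id: string): Promise<void>;
};

async function applyServerChange(change: {
  server_seq: number;
  row_id: string;
  op: "put" | "delete";
  patch?: Todo; // full row for 'put'
}) {
  const local = await localStore.get(change.row_id);
  // LWW: only apply if the server change is newer than the local copy.
  if (local && local.version >= change.server_seq) return;
  if (change.op === "delete") {
    await localStore.delete(change.row_id);
  } else if (change.patch) {
    await localStore.put({ ...change.patch, version: change.server_seq });
  }
}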

If you ever need true collaborative structures (rich text, lists with concurrent inserts, etc.), your sync engine can embed a CRDT per field:

  • Typeonce explains using Loro (CRDT) for merging updates and just syncing bytes. typeonce.dev+1

  • Automerge and Yjs repos show similar patterns. GitHub+1

But you can keep your “Linear engine” mostly relational + LWW and only use CRDTs at edges.


Phase 4 – Query-based / partial sync (closer to Linear)

Linear doesn’t just “sync whole tables”; it syncs a subset of objects per workspace / view. Convex’s object sync article and the Linear section describe this architecture: the sync server tails Postgres logs and pushes only objects relevant to each client’s queries. stack.convex.dev+2stack.convex.dev+2

Simpler way for your own engine:

  • On the client, define named queries (like issues:my-active).

  • Client sends to server:

    {
      "subscriptions": [
        { "key": "issues:my-active", "filter": { "assigneeId": "me", "status": "open" } }
      ]
    }
  • Server:

    • Evaluates that filter against DB.

    • Uses your changes log to decide which changes affect that subscription.

    • For now, you can just re-run the query each /pull and diff results.

Later (if you want to go full nerd) you can:

  • Index subscriptions by affected rows.

  • Maintain per-subscription change lists.

  • Use GraphQL-style fragments like LunaDB/Linear. stack.convex.dev

But even a dumb, re-query-everything implementation will give you a decent partial-sync engine for small data sizes.
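Even the re-query-everything version is only a few lines: re-run the subscription's query, then diff the two result sets by id and version (a sketch):

function diffResults<T extends { id: string; version: number }>(
  previous: T[],
  current: T[]
): { puts: T[]; deletes: string[] } {
  const prevById = new Map(previous.map((r) => [r.id, r]));
  const currentIds = new Set(current.map((r) => r.id));
  // Rows new to the result set, or whose version bumped, get re-sent.
  const puts = current.filter((r) => {
    const prev = prevById.get(r.id);
    return !prev || prev.version !== r.version;
  });
  // Rows that fell out of the result set become deletes (for this view only).
  const deletes = previous.filter((r) => !currentIds.has(r.id)).map((r) => r.id);
  return { puts, deletes };
}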


  3. GitHub projects that are actually easy to grok

You asked specifically for repos that implement sync themselves (no ElectricSQL/ZeroSync backend), and are readable. Here’s a curated list:

Starter / tutorial-grade

  1. matheuslanduci/sync-engine

    • From the article “Building a Sync Engine” – local-first todos with Dexie (IndexedDB), Node/Hono backend, WebSockets, and a simple in-memory server store. matheuslanduci.com

    • Very easy to run & read. Great first codebase to internalize.

  2. typeonce-dev/sync-engine-web

    • “A Sync Engine for the web: React, Web Workers, Loro CRDT, server.” GitHub+1

    • Monorepo with:

      • Client (React + IndexedDB + web workers)

      • Server (REST + WebSocket)

      • Shared schema / types

    • Much closer to a full production-ish engine, but still small enough to study.

  3. brandon-schabel/bnk-sync

    • Type-safe WebSocket sync libraries for Bun + TS: server, client, React helpers, with examples. GitHub

    • Focus is on the wire protocol and type-safe message handling, not local DB, so it’s great to copy patterns for your own WS layer.

CRDT-centric, but with small sync servers

  1. automerge/automerge-repo + automerge/automerge-repo-sync-server

    • automerge-repo: local-first CRDT toolkit. GitHub+1

    • automerge-repo-sync-server: tiny Express sync server for CRDT documents over WebSockets. GitHub

    • Very small code that shows:

      • How to manage client connections

      • How to send “sync messages” back and forth

      • How to persist changes.

  2. jhovadev/elysiajs-react-lorocrdt-example

    • Example: Loro CRDT + Elysia (Bun) websockets + React. GitHub+1

    • Good if you want to see a minimal CRDT + WebSocket round-trip without a complex framework.

  3. oneakash/ResilientDocs

    • Real-time collaborative editor using React, TipTap, Yjs CRDT and WebSocket sync. GitHub

    • Shows:

      • How to wire up client presence

      • How to stream binary CRDT updates

      • How to persist them server-side.

Linear-specific research

  1. wzhudev/reverse-linear-sync-engine

    • Not an engine you drop in, but an incredibly detailed reverse-engineering of Linear’s sync engine, with diagrams, code snippets, and commentary. GitHub+1

    • If you want “a Linear-like engine”, this is essential reading.


  4. Reading / video list (to go deeper)

These are worth working through in roughly this order:

Core sync / local-first concepts

  • “How to Sync Anything: Building a Sync Engine from Scratch (Parts 1–3)” – CouchDB folks explaining fundamental sync patterns, idempotence, conflict handling, etc. DEV Community+1

  • “Building a Sync Engine” (Matheus Landuci) – tutorial with complete code (the sync-engine repo above). matheuslanduci.com

  • “How to implement a Sync Engine for the Web” (Typeonce) – local DB + web worker + WebSocket + CRDT, all step-by-step. typeonce.dev

  • “An Object Sync Engine for Local-first Apps” + “A Map of Sync” (Convex) – great mental models for thinking about queries, consistency, and protocol design. stack.convex.dev+1

Linear / Replicache / local-first ecosystem

  • Tuomas Artman – Scaling the Linear Sync Engine (Linear blog + talk). linear.app

  • “I Reversed Linear’s Sync Engine to See How It Works” + the associated repo. DEV Community+1

  • Replicache notes – short conceptual overview of how Replicache structures /pull /push and consistency. Tushar Tripathi

  • “Some notes on Local-First Development” (bricolage) – good context on why sync engines exist and where they’re headed. bricolage.io


  5. How I’d personally approach building your Linear-like engine

If I were in your shoes and wanted to “implement totally myself” (no ElectricSQL / ZeroSync):

  1. Clone & run matheuslanduci/sync-engine. Break it, modify it, make sure you understand every line. matheuslanduci.com

  2. Refactor it to:

    • Add a second table (e.g., projects + todos with FK).

    • Replace the ad-hoc broadcast logic with a changes log + /pull /push API.

  3. Introduce a local DB layer that feels nice (Dexie or a TS wrapper) and make all UI go through it.

  4. Add per-client “subscriptions”: only sync todos for a given project/workspace.

  5. Read the Linear reverse-engineering repo and see how your design differs; borrow ideas for:

    • Lazy hydration

    • Undo/redo

    • Partial sync

  6. Optionally, embed CRDTs for any fields that truly need collaborative editing (comments text, descriptions), using Loro/Yjs/Automerge as reference. typeonce.dev+2GitHub+2

If you tell me what stack you’re planning to use (TS/Rust/Go/etc.), I can sketch a more concrete “here are the exact files and interfaces I’d create” for your first version.


You:

Can you find a tutorial and implement a real-time sync engine from scratch without CRDTs?


ChatGPT:

Yes! Let’s do this in a very concrete way:

  • I’ll point you to one good tutorial to read.

  • Then I’ll show you a small, complete real-time sync engine (no CRDTs) you can build from scratch:

    • Node.js server (HTTP + WebSocket)

    • In-memory “database” + change log

    • Minimal protocol for push/pull + realtime broadcast

You can plug any UI on top (React, vanilla, mobile, whatever).


  1. Tutorial you should read first (no CRDTs)

The one that best matches what you want:

📝 “Building a Sync Engine” — Matheus Landuci matheuslanduci.com+1

  • Tech: Node.js (Hono), TypeScript, React, Dexie (IndexedDB), WebSockets.

  • It builds a local-first todo app:

    • Data stored locally with Dexie.

    • Mutations trigger item:put / item:delete events.

    • WebSocket server receives those and broadcasts them to all clients.

  • No CRDTs; conflict resolution is basically “central server is source of truth + LWW”.

Clone his repo and run it to get a feel for the flow, then compare to the implementation below. matheuslanduci.com

For deeper theory (but still non-CRDT), the CouchDB series “How to Sync Anything: Building a Sync Engine from Scratch” explains background sync, ids, and change tracking. neighbourhood.ie+1


  2. Minimal real-time sync engine (no CRDT)

What this example does

  • Data model: a simple Todo list.

  • Server:

    • Keeps todos + a monotonic version counter + change log.

    • Exposes:

      • GET /changes?since=<version> → incremental sync.

      • WebSocket /ws → accepts mutations and broadcasts changes.

  • Client:

    • Keeps local in-memory state (you can later persist with IndexedDB/Dexie).

    • On startup:

      • Fetches initial changes (since=0).

      • Connects WebSocket and starts sending mutations.

    • On mutation:

      • Applies change locally (optimistic).

      • Sends mutation to server.

    • On remote changes:

      • Applies them into local state.

No CRDTs, just server-authoritative state + versioned change log + LWW.


  3. Data shapes (shared types)

Use these shapes on both client & server:

// Shared types (pseudo-TS, can be plain JS + JSDoc)

/** A todo item on the server */
export interface Todo {
  id: string;
  text: string;
  completed: boolean;
  version: number; // last server version that touched this row
}

/** A change recorded on the server */
export interface Change {
  version: number;
  op: "put" | "delete";
  todo: Todo | { id: string }; // for delete we only need id
}

/** Messages from client to server over WebSocket */
export type ClientToServerMessage =
  | {
      type: "hello";
      clientId: string;
      lastKnownVersion: number;
    }
  | {
      type: "mutate";
      clientId: string;
      mutationId: number; // per-client increasing id
      op: "put" | "delete";
      todo: { id: string; text?: string; completed?: boolean };
    };

/** Messages from server to client over WebSocket */
export type ServerToClientMessage =
  | {
      type: "snapshot";
      todos: Todo[];
      serverVersion: number;
    }
  | {
      type: "changes";
      changes: Change[];
      serverVersion: number;
    }
  | {
      type: "mutationAck";
      mutationId: number;
    };

  4. Server implementation (Node.js + Express + ws)

4.1 Install deps

npm init -y
npm install express ws uuid
npm install --save-dev typescript @types/node @types/express @types/ws
npx tsc --init

4.2 server.ts

import express from "express";
import { createServer } from "http";
import { WebSocketServer, WebSocket } from "ws";
import { randomUUID } from "crypto";

// ---------- Data model in memory ----------

interface Todo {
  id: string;
  text: string;
  completed: boolean;
  version: number;
}

interface Change {
  version: number;
  op: "put" | "delete";
  todo: Todo | { id: string };
}

interface ClientToServerMessage {
  type: "hello" | "mutate";
  clientId: string;
  lastKnownVersion?: number;
  mutationId?: number;
  op?: "put" | "delete";
  todo?: { id: string; text?: string; completed?: boolean };
}

interface ServerToClientMessageSnapshot {
  type: "snapshot";
  todos: Todo[];
  serverVersion: number;
}

interface ServerToClientMessageChanges {
  type: "changes";
  changes: Change[];
  serverVersion: number;
}

interface ServerToClientMessageMutationAck {
  type: "mutationAck";
  mutationId: number;
}

type ServerToClientMessage =
  | ServerToClientMessageSnapshot
  | ServerToClientMessageChanges
  | ServerToClientMessageMutationAck;

// In-memory "database"
const todos = new Map<string, Todo>();
let serverVersion = 0;
const changeLog: Change[] = [];

// Connected clients
const clients = new Set<WebSocket>();

// ---------- Helpers ----------

function recordChange(op: "put" | "delete", todo: Todo | { id: string }) {
  serverVersion += 1;
  if (op === "put" && "version" in todo) {
    todo.version = serverVersion;
  }
  const change: Change = { version: serverVersion, op, todo };
  changeLog.push(change);
  // (for a real app you’d prune or persist changeLog)
  return change;
}

function getChangesSince(since: number): Change[] {
  return changeLog.filter((c) => c.version > since);
}

function applyMutation(msg: ClientToServerMessage): Change | null {
  if (msg.type !== "mutate" || !msg.op || !msg.todo) return null;

  const { op, todo } = msg;

  if (op === "put") {
    const existing = todos.get(todo.id);
    const updated: Todo = {
      id: todo.id,
      text: todo.text ?? existing?.text ?? "",
      completed: todo.completed ?? existing?.completed ?? false,
      version: existing?.version ?? 0,
    };
    todos.set(updated.id, updated);
    return recordChange("put", updated);
  }

  if (op === "delete") {
    todos.delete(todo.id);
    return recordChange("delete", { id: todo.id });
  }

  return null;
}

function broadcast(message: ServerToClientMessage) {
  const json = JSON.stringify(message);
  for (const ws of clients) {
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(json);
    }
  }
}

// ---------- HTTP + WS server ----------

const app = express();
app.use(express.json());

// Simple endpoint for incremental sync via HTTP
app.get("/changes", (req, res) => {
  const since = Number(req.query.since ?? "0");
  const changes = getChangesSince(since);
  res.json({ changes, serverVersion });
});

// Optional: seed demo data
app.post("/seed", (req, res) => {
  const id = randomUUID();
  const todo: Todo = {
    id,
    text: req.body.text ?? "Example todo",
    completed: false,
    version: 0,
  };
  todos.set(id, todo);
  recordChange("put", todo);
  res.json({ ok: true });
});

const httpServer = createServer(app);

const wss = new WebSocketServer({ server: httpServer, path: "/ws" });

wss.on("connection", (ws) => {
  clients.add(ws);

  ws.on("message", (data) => {
    let msg: ClientToServerMessage;
    try {
      msg = JSON.parse(data.toString());
    } catch {
      console.error("Bad JSON from client");
      return;
    }

    if (msg.type === "hello") {
      const last = Number(msg.lastKnownVersion ?? 0);
      const changes = getChangesSince(last);
      if (last === 0) {
        // send full snapshot
        const allTodos = Array.from(todos.values());
        const snap: ServerToClientMessageSnapshot = {
          type: "snapshot",
          todos: allTodos,
          serverVersion,
        };
        ws.send(JSON.stringify(snap));
      } else if (changes.length > 0) {
        const payload: ServerToClientMessageChanges = {
          type: "changes",
          changes,
          serverVersion,
        };
        ws.send(JSON.stringify(payload));
      }
      return;
    }

    if (msg.type === "mutate") {
      const change = applyMutation(msg);
      if (!change) return;

      // acknowledge to the sender if mutationId present
      if (typeof msg.mutationId === "number") {
        const ack: ServerToClientMessageMutationAck = {
          type: "mutationAck",
          mutationId: msg.mutationId,
        };
        ws.send(JSON.stringify(ack));
      }

      // broadcast change to everyone (including sender)
      broadcast({
        type: "changes",
        changes: [change],
        serverVersion,
      });
    }
  });

  ws.on("close", () => {
    clients.delete(ws);
  });
});

const PORT = 3000;
httpServer.listen(PORT, () => {
  console.log(`HTTP+WS server listening on http://localhost:${PORT}`);
});

That’s a complete engine core:

  • Tracks all mutations with a change log.

  • Gives you incremental sync via /changes.

  • Streams real-time updates via WebSockets.

  • No CRDTs: the server is the single source of truth; clients optimistically mutate and reconcile via versions (LWW).
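For example, a client that was offline can catch up over plain HTTP before (or instead of) reconnecting the socket. A sketch against the /changes endpoint above, assuming a client-side todos map shaped like the one in the SyncClient below:

const todos = new Map<
  string,
  { id: string; text: string; completed: boolean; version: number }
>();

async function catchUp(lastServerVersion: number): Promise<number> {
  const res = await fetch(`http://localhost:3000/changes?since=${lastServerVersion}`);
  const { changes, serverVersion } = await res.json();
  for (const change of changes) {
    // Same apply logic as the WebSocket "changes" handler.
    if (change.op === "put") todos.set(change.todo.id, change.todo);
    else todos.delete(change.todo.id);
  }
  return serverVersion; // persist as the new lastServerVersion
}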


  5. Minimal client-side sync loop

Here’s a tiny client “sync engine” you can drop into a browser app. For brevity this keeps todos in memory; you can swap that for IndexedDB/Dexie later.

// syncClient.ts

type Todo = {
  id: string;
  text: string;
  completed: boolean;
  version: number;
};

type Change = {
  version: number;
  op: "put" | "delete";
  todo: Todo | { id: string };
};

type ServerMessage =
  | { type: "snapshot"; todos: Todo[]; serverVersion: number }
  | { type: "changes"; changes: Change[]; serverVersion: number }
  | { type: "mutationAck"; mutationId: number };

type Listener = (todos: Todo[]) => void;

export class SyncClient {
  private ws: WebSocket | null = null;
  private todos = new Map<string, Todo>();
  private listeners = new Set<Listener>();
  private lastServerVersion = 0;
  private mutationCounter = 0;
  private clientId: string;

  constructor(private url = "ws://localhost:3000/ws") {
    this.clientId =
      localStorage.getItem("clientId") ??
      crypto.randomUUID?.() ??
      String(Math.random());
    localStorage.setItem("clientId", this.clientId);
  }

  subscribe(listener: Listener) {
    this.listeners.add(listener);
    listener(Array.from(this.todos.values())); // initial
    return () => this.listeners.delete(listener);
  }

  private notify() {
    const list = Array.from(this.todos.values()).sort((a, b) =>
      a.version === b.version ? a.id.localeCompare(b.id) : a.version - b.version
    );
    for (const l of this.listeners) l(list);
  }

  connect() {
    if (this.ws && this.ws.readyState === WebSocket.OPEN) return;
    this.ws = new WebSocket(this.url);
    this.ws.onopen = () => {
      // initial hello, ask for any missed changes
      this.ws?.send(
        JSON.stringify({
          type: "hello",
          clientId: this.clientId,
          lastKnownVersion: this.lastServerVersion,
        })
      );
    };
    this.ws.onmessage = (ev) => {
      const msg: ServerMessage = JSON.parse(ev.data.toString());
      if (msg.type === "snapshot") {
        this.todos.clear();
        msg.todos.forEach((t) => this.todos.set(t.id, t));
        this.lastServerVersion = msg.serverVersion;
        this.notify();
      } else if (msg.type === "changes") {
        for (const change of msg.changes) {
          if (change.op === "put" && "version" in change.todo) {
            this.todos.set(change.todo.id, change.todo as Todo);
          } else if (change.op === "delete") {
            this.todos.delete((change.todo as { id: string }).id);
          }
        }
        this.lastServerVersion = msg.serverVersion;
        this.notify();
      } else if (msg.type === "mutationAck") {
        // could mark mutation as delivered if you keep an outbox
      }
    };
    this.ws.onclose = () => {
      // naive reconnect
      setTimeout(() => this.connect(), 1000);
    };
  }

  /** Local optimistic upsert */
  putTodo(partial: { id?: string; text: string; completed?: boolean }) {
    const id = partial.id ?? crypto.randomUUID?.() ?? String(Math.random());
    const existing = this.todos.get(id);
    const optimistic: Todo = {
      id,
      text: partial.text,
      completed: partial.completed ?? existing?.completed ?? false,
      version: existing?.version ?? this.lastServerVersion,
    };
    // apply locally
    this.todos.set(id, optimistic);
    this.notify();

    // send mutation
    const mutationId = ++this.mutationCounter;
    this.ws?.send(
      JSON.stringify({
        type: "mutate",
        clientId: this.clientId,
        mutationId,
        op: "put",
        todo: { id, text: optimistic.text, completed: optimistic.completed },
      })
    );
  }

  deleteTodo(id: string) {
    this.todos.delete(id);
    this.notify();
    const mutationId = ++this.mutationCounter;
    this.ws?.send(
      JSON.stringify({
        type: "mutate",
        clientId: this.clientId,
        mutationId,
        op: "delete",
        todo: { id },
      })
    );
  }
}

Then in your UI (React example):

// App.tsx
import React, { useEffect, useState } from "react";
import { SyncClient } from "./syncClient";

const client = new SyncClient();

export function App() {
  const [todos, setTodos] = useState<{ id: string; text: string; completed: boolean }[]>([]);

  useEffect(() => {
    client.connect();
    const unsubscribe = client.subscribe(setTodos);
    return unsubscribe;
  }, []);

  const onAdd = (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    const form = e.currentTarget;
    const input = form.elements.namedItem("text") as HTMLInputElement;
    if (!input.value.trim()) return;
    client.putTodo({ text: input.value.trim() });
    input.value = "";
  };

  return (
    <div>
      <h1>Realtime Todos</h1>
      <form onSubmit={onAdd}>
        <input name="text" placeholder="New todo..." />
        <button type="submit">Add</button>
      </form>
      <ul>
        {todos.map((t) => (
          <li key={t.id}>
            <label>
              <input
                type="checkbox"
                checked={t.completed}
                onChange={(e) =>
                  client.putTodo({
                    id: t.id,
                    text: t.text,
                    completed: e.target.checked,
                  })
                }
              />
              {t.text}
            </label>
            <button onClick={() => client.deleteTodo(t.id)}>✕</button>
          </li>
        ))}
      </ul>
    </div>
  );
}

That’s a full working real-time sync engine:

  • No CRDTs.

  • Real-time updates.

  • Incremental sync via version + changeLog.


  6. How to evolve this toward “Linear-style”

Once you’ve got this running, next steps (all still without CRDT):

  1. Persist client state with Dexie (or SQLocal/PGlite) like in Matheus’s tutorial. matheuslanduci.com

  2. Add an outbox table so offline mutations are stored and pushed when connected (CouchDB’s “How to Sync Anything” discusses idempotent operations and retries). neighbourhood.ie+2DEV Community+2

  3. Add more tables (projects, issues, comments) and share the same version + changeLog mechanism.

  4. Later, if you really want query-based sync like Linear/Replicache, you can turn Change into {"table": "issues", "row": {...}} and filter by subscriptions. Stack+1
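For step 4, the Change type generalizes naturally. A sketch of a table-aware change (the table names are illustrative):

type TableName = "projects" | "issues" | "comments";

interface MultiTableChange {
  version: number;   // same global counter as before
  table: TableName;  // which table the change belongs to
  op: "put" | "delete";
  row: { id: string } & Record<string, unknown>; // full row for put, just id for delete
}

// The client routes by table when applying pulled changes:
function applyChange(
  stores: Record<TableName, Map<string, unknown>>,
  change: MultiTableChange
) {
  const store = stores[change.table];
  if (change.op === "put") store.set(change.row.id, change.row);
  else store.delete(change.row.id);
}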


If you tell me your preferred stack (TS/Node/Postgres, Go, Rust, etc.), I can adapt this into a more “production-ish” layout with folders, interfaces and maybe a simple Postgres schema (still without any CRDTs).


You:

We are seeing that when new changes happen in any particular table, we update the version, like:

version: number; // last server version that touched this row

serverVersion

My question: it's an integer value, so what is the highest value it can hold? Let's say this app is running for the next 5 years.


ChatGPT:

Good question—that “version” counter is just an integer, so it’s smart to ask how far it can go.

I’ll assume you’re using TypeScript/JavaScript on the server (like in the examples we discussed). If you’re using another language/DB I’ll mention that too.


  1. If you use JS number (the default in TS/Node)

In JavaScript/TypeScript, number is a 64-bit floating point value (IEEE-754).
The important limit is MAX_SAFE_INTEGER:

Number.MAX_SAFE_INTEGER === 9007199254740991; // 2^53 - 1

That’s:

9,007,199,254,740,991

You can safely increment an integer up to that value without losing precision.

How many increments is that in practice?

Let’s do some rough worst-case math:

  • 5 years ≈
    5 × 365 × 24 × 60 × 60 = 157,680,000 seconds

  • Imagine your app does:

    • 1,000,000 (1 million) version increments per second (which is insanely high for most CRUD apps)

Total increments in 5 years:

157,680,000 × 1,000,000 = 157,680,000,000,000
1.5768 × 10¹⁴

Compare that to Number.MAX_SAFE_INTEGER ≈ 9.0 × 10¹⁵.

You’d still be more than 50× below the safe integer limit.

So for a realistic app:

  • Even at crazy throughput (millions of changes per second, every second for 5 years),

  • You will not get anywhere near the JS safe integer limit.

Conclusion:
👉 Using a monotonic serverVersion: number that just increments is totally fine for many years, in JavaScript/TypeScript.


  2. If you store it in a database

Most people back this with a DB field:

  • Postgres BIGINT / BIGSERIAL

    • 64-bit signed integer

    • Max ≈ 9,223,372,036,854,775,807 (≈ 9 × 10¹⁸)

    • That’s three orders of magnitude larger than JS’s safe integer range.

  • MySQL BIGINT, similar story.

  • If you stick to DB-side incrementing (BIGSERIAL, sequences), it’s essentially impossible to overflow with any realistic usage.

Even if you somehow had 1 billion changes per second (which you won’t):

  • Over 5 years:
    1,000,000,000 × 157,680,000 ≈ 1.5768 × 10¹⁷

  • Still under 9 × 10¹⁸.

So DB is ultra-safe.


  3. Recommended pattern for your sync engine

On the DB

  • Use BIGINT / BIGSERIAL for:

    • server_version

    • changes.version (or server_seq)

  • Let the DB sequence or trigger handle incrementing.

In your Node/TS code

  • If you’re happy with JS number and your throughput is normal:

    • Parse to number and use it directly; you’ll be fine for many years.
  • If you want to be ultra-conservative or expect insane throughput:

    • Use bigint type in TS:

      let serverVersion: bigint = 0n;
      serverVersion += 1n;
    • Convert to string when serializing over JSON (JSON.stringify doesn’t support bigint directly).

But honestly, for a Linear-style app, the plain number is enough. The bottleneck will be CPU/IO, not your counter overflowing.
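If you do go the bigint route, serialization is the only fiddly part. A minimal sketch, assuming you encode bigints as strings on the wire:

// JSON.stringify throws on bigint, so convert in a replacer.
function serialize(value: unknown): string {
  return JSON.stringify(value, (_key, v) =>
    typeof v === "bigint" ? v.toString() : v
  );
}

serialize({ type: "changes", serverVersion: 9007199254740993n });
// => '{"type":"changes","serverVersion":"9007199254740993"}'

// The receiving side converts back explicitly where it expects a version:
const parsed = JSON.parse(serialize({ serverVersion: 42n }));
const serverVersion = BigInt(parsed.serverVersion); // 42n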


  4. Short answer

How high can this go, and is it safe for 5 years?

  • With JS number, you can safely go up to 9,007,199,254,740,991.

  • For 5+ years of usage, even at absurd write rates, you’re nowhere close to that.

  • With Postgres BIGINT/BIGSERIAL, you’re even safer.

So you don’t need to worry about serverVersion overflowing in the next 5 years (or 50), as long as you’re using normal 64-bit integer semantics.


You:

What do you think about the implementation of this sync engine: https://github.com/ccorcos/chet-stack/tree/v0? Browse the code carefully; I also attached the codebase in a txt file (ccorcos-chet-stack-8a5edab282632443.txt).


ChatGPT:

Short answer: this is a really solid example of a “Linear-style” sync engine without CRDTs. It’s more complete than what you strictly need, but as a reference architecture it’s excellent.

Let me walk through how it works and what I think about it.


  1. How the sync engine actually works

Data model + versioning

  • Every row in every table has an id and a version field; that version is crucial for sync. ccorcos-chet-stack-8a5edab28263…

  • Writes are done via transactions made up of operations (set, listInsert, listRemove, create, etc.). ccorcos-chet-stack-8a5edab28263…

  • On the server, apis/write.ts:

    • Loads all records being touched into beforeRecordMap. ccorcos-chet-stack-8a5edab28263…

    • Clones to afterRecordMap and applies each operation to mutate it. ccorcos-chet-stack-8a5edab28263…

    • Builds a versionMap capturing the old version of each record before applying operations. ccorcos-chet-stack-8a5edab28263…

    • Validates schemas and permissions by comparing beforeRecordMap vs afterRecordMap. ccorcos-chet-stack-8a5edab28263…

    • Writes all records to the DB with that versionMap as an optimistic concurrency check. ccorcos-chet-stack-8a5edab28263…

The docs clarify the rule: every time you apply an operation, you increment the record’s version; on write you assert the old version still matches DB, otherwise reject and retry (classic optimistic concurrency + LWW). ccorcos-chet-stack-8a5edab28263…

Database conflict detection

Database.write (JSON file backend) uses that VersionMap:

  • It looks up each record’s current version in its in-memory RecordMap.

  • If any record’s current version doesn’t equal the expected version in versionMap, it throws TransactionConflictError. ccorcos-chet-stack-8a5edab28263…

  • Otherwise it writes the new records (with incremented version) and saves to disk.

So concurrency is enforced at record level: any two overlapping writes to the same record will conflict; one will win, one must retry.

Realtime pub/sub

After the DB write succeeds, apis/write publishes updates:

  • It sends version updates keyed by “subscription keys” (like getRecord:user/123) using the PubsubServer. ccorcos-chet-stack-8a5edab28263…

  • The client has a WebsocketPubsubClient that:

    • On reconnect, resubscribes to all active subscription keys. ccorcos-chet-stack-8a5edab28263…

    • On change, decodes the key to know whether it’s getRecord, getThreads, or getMessages and acts accordingly. ccorcos-chet-stack-8a5edab28263…

For individual records:

  • onChange for a getRecord key passes the new version to syncRecord(...). ccorcos-chet-stack-8a5edab28263…

  • syncRecord:

    • Checks if the record exists locally.

    • If local record.version < version, calls getRecords to re-fetch the record and writes it into cache + storage. ccorcos-chet-stack-8a5edab28263…

So the websocket messages are just version numbers, not the whole record. The “real” data sync happens via HTTP getRecords. ccorcos-chet-stack-8a5edab28263…

Client caching + offline

On the client:

  • RecordCache is an in-memory RecordMap.

  • RecordStorage is persistent (IndexedDB) for offline. ccorcos-chet-stack-8a5edab28263…

  • The “loader” pattern (loadRecord, loadThreads, etc.) uses stale-while-revalidate:

    • First load from IndexedDB and put into RecordCache.

    • Also fetch from API; whichever has higher version wins. ccorcos-chet-stack-8a5edab28263… & ccorcos-chet-stack-8a5edab28263…

SubscriptionCache tracks which records/queries are currently used by React components:

  • When subscribing, it calls pubsub.subscribe(key).

  • On un-subscribe, it calls pubsub.unsubscribe(key), then unloads records after a delay. ccorcos-chet-stack-8a5edab28263…

Transactions, optimistic UI, undo/redo, offline queue

Client-side writes go through client/actions/write.ts:

  • It inverts the operations and pushes them to an undo stack (with batching) for undo/redo. ccorcos-chet-stack-8a5edab28263…

  • It performs optimisticWrite: apply operations directly to cached records and RecordStorage. ccorcos-chet-stack-8a5edab28263…

  • Then enqueues the transaction in TransactionQueue. ccorcos-chet-stack-8a5edab28263…

TransactionQueue:

  • Persists queued transactions (localStorage) so they’re retried when back online.

  • Exposes subscribeIsPendingWrite so the UI can show “pending” state for a record. ccorcos-chet-stack-8a5edab28263…

  • On conflict (e.g. TransactionConflictError), onRollback refetches latest records and overwrites caches to get back to a consistent state. ccorcos-chet-stack-8a5edab28263…

Offline:

  • Service worker provides offline assets.

  • Data side: loaders fall back to RecordStorage when HTTP fails with status 0 (offline). ccorcos-chet-stack-8a5edab28263…


  2. What’s good about this design

For your goal (“build a Linear-like sync engine from scratch, no CRDTs”), this is honestly a fantastic reference.

a) Very clear, minimal core idea

The heart of the system is tiny:

  1. Each record has a monotonically increasing version. ccorcos-chet-stack-8a5edab28263…

  2. Server uses optimistic concurrency on that version (VersionMap) to detect conflicts. ccorcos-chet-stack-8a5edab28263…

  3. After write, broadcast updated versions over pubsub; clients refetch records when their cached version is behind. ccorcos-chet-stack-8a5edab28263…

This is easy to reason about and aligns well with what you wanted (no CRDT, transactional server, last-write-wins).

b) Clean separation of responsibilities

  • Database knows nothing about websockets or React. It just reads/writes RecordMaps and enforces version checks. ccorcos-chet-stack-8a5edab28263…

  • PubsubServer is just websocket pub/sub with keys. ccorcos-chet-stack-8a5edab28263…

  • write.ts is the authoritative mutation pipeline: load → apply ops → validate schema → validate permission → DB write → pubsub. ccorcos-chet-stack-8a5edab28263…

  • Client splits: RecordCache, RecordStorage, LoaderCache, SubscriptionCache, TransactionQueue, WebsocketPubsubClient. ccorcos-chet-stack-8a5edab28263…

You can swap:

  • JSON file → Postgres/SQLite.

  • In-process pubsub → Redis / NATS / etc. ccorcos-chet-stack-8a5edab28263…

This is exactly the kind of separation you want if you plan to grow from “toy” to “serious” later.

c) Great offline + optimistic UX story

  • Optimistic write against local cache, then enqueue transaction → instant UI. ccorcos-chet-stack-8a5edab28263…

  • Offline queue with automatic retry. ccorcos-chet-stack-8a5edab28263…

  • SWR pattern for data: local IndexedDB first, then network. ccorcos-chet-stack-8a5edab28263…

  • Undo/redo is built on the same operation model. ccorcos-chet-stack-8a5edab28263…

This gives you “Notion/Linear-ish” feeling with relatively little complexity.

d) Permissions integrated with sync

On reads:

  • loadRecordsWithPermissionRecords loads both target records and any records needed to compute permissions. ccorcos-chet-stack-8a5edab28263…

  • filterRecordMapForPermission strips out anything the user shouldn’t see. ccorcos-chet-stack-8a5edab28263…

On writes:

  • validateWrite checks that the transition from beforeRecordMap → afterRecordMap is allowed. ccorcos-chet-stack-8a5edab28263…

Most “tutorial sync engines” ignore permissions; this one doesn’t, which makes it much closer to a real app (like Linear).


  3. Trade-offs / limitations

None of these are “this is bad”; they’re conscious trade-offs. But good to know:

a) Conflict granularity = record-level LWW

Any overlapping writes to the same record will conflict and require retry:

  • There’s no field-level merging: if user A edits title and user B edits description at the same time, one wins and the other must re-apply.

  • This is perfect for many business apps (Linear-style), but not for Google Docs-style collaborative editing.

If you needed fine-grained merges, you’d eventually want per-field versions or CRDTs. For now, what it does is exactly what you said you wanted.

b) Pubsub isn’t transactional with DB

The comment in write.ts explicitly calls this out:

// Ideally everything after this comment should be transactional but we can
// settle with eventually consistent.

In practice:

  • DB write succeeds but pubsub fails → some clients won’t immediately see the update (they’ll eventually see it via other reads).

  • For most SaaS apps, that’s acceptable. If you really cared, you’d wire your DB + message bus more tightly (e.g., outbox pattern).

c) JSON file DB is only for demo

The author says “use JSON now, but replace with SQLite/Postgres later.” ccorcos-chet-stack-8a5edab28263…

For your own implementation:

  • Copy the ideas, not the JSON DB.

  • Start with SQLite or Postgres from day one if you already know them.

d) Simple “version only” pubsub messages

Pubsub payloads are just { version } (or similar) for a subscription key:

  • Pro: messages are tiny; you don’t risk accidentally leaking data over pubsub.

  • Con: always a second HTTP roundtrip (getRecords) before the UI sees the actual change.

For many apps this is a good trade-off. If you need ultra-low latency or want to reduce queries, you might publish small diffs or even full records instead.

e) A bit of boilerplate for “realtime queries”

The docs admit that adding new realtime queries (getThreads, getMessages, etc.) is slightly tedious: you must wire up indices, loader, pubsub, and subscription handling. ccorcos-chet-stack-8a5edab28263…

That’s fine for a boilerplate, but you may want to generate some of this or build thin abstractions.


  4. Is this a good base for your sync engine?

Yes. If your target is a Linear-like app with:

  • central transactional server,

  • optimistic UI,

  • offline queue,

  • LWW conflict resolution,

  • permission-aware sync,

…then this is almost exactly the architecture you’d want.

If I were you, I’d treat Chet as a reference manual and build your own smaller version:

Minimal core you should copy

Server:

  1. Schema: every record table has { id, version, ...fields }.

  2. Write endpoint:

    • Accept { txId, authorId, operations[] }.

    • Load records → build beforeRecordMap.

    • Clone → apply operations → afterRecordMap.

    • Validate schema & permissions.

    • Build VersionMap of old versions.

    • DB write with optimistic concurrency on VersionMap.

    • Publish pubsub updates for updated records.

  3. Read endpoint:

    • getRecords({ pointers }) → { recordMap }.

Client:

  1. RecordCache (memory) + RecordStorage (IndexedDB) + SWR loader.

  2. TransactionQueue for offline + retry.

  3. write() helper that:

    • Inverts operations for undo/redo (optional).

    • Applies optimistic changes to cache.

    • Enqueues the transaction.

  4. WebsocketPubsubClient:

    • subscribe(key) / unsubscribe(key).

    • On getRecord update: call syncRecord, which refetches if the local version is old.

  5. SubscriptionCache that tracks which keys are in use by UI hooks.

If you build just that, you basically have a stripped-down Chet Stack sync engine.

Things you can ignore at first

  • The whole email/thread UI (App.tsx, messages, threads) – that’s just the sample app. ccorcos-chet-stack-8a5edab28263…

  • Queue tasks (sendWelcomeMessage, autoindex), file upload, etc.

  • All the service-worker details at the beginning.


  5. TL;DR judgment

  • Conceptually: very clean, easy to reason about, nice separation of responsibilities.

  • Practically: closer to a real production app than most tutorials (permissions, offline, undo).

  • For you:

    • Great as a guide / pattern library.

    • Don’t blindly copy everything; instead replicate the core sync pipeline and adapt it to your domain and DB choice.

If you want, next step we can:

  • Extract just the sync pieces into a small pseudo-code “blueprint” tailored to your app (e.g., your tables, your version semantics), or

  • Compare this with your current design (the serverVersion integer you mentioned earlier) and see exactly what you’re missing / doing differently.

You:

OK, but it's a very different backend style for writing to the DB. What if we use Drizzle + PostgreSQL? Then apis/write.ts would be different, and we don't write by directly calling tables.


ChatGPT:

Yeah, totally fair: Chet’s backend is “JSON file as a DB”, and if you’re using Drizzle + Postgres your write path will look really different.

The good news: you can keep almost all of apis/write.ts the same and just swap out the Database implementation behind environment.db.write. That’s exactly what Chet is designed for. ccorcos-chet-stack-8a5edab28263…

Let’s walk through how to adapt it.


  1. What stays the same

In Chet, the write API looks like this: ccorcos-chet-stack-8a5edab28263…

export async function write(environment: ServerEnvironment, args: Transaction) {
  const { txId, authorId, operations } = args;

  const pointers = uniqWith(
    operations.map(({ table, id }) => ({ table, id }) as RecordPointer),
    isEqual
  );

  const beforeRecordMap = await loadRecordsWithPermissionRecords(environment, pointers);
  const afterRecordMap = cloneDeep(beforeRecordMap);

  // 1. Build versionMap from beforeRecordMap
  const versionMap: VersionMap = {};
  for (const pointer of pointers) {
    const record = getRecordMap(afterRecordMap, pointer);
    if (record) setRecordMap(versionMap, pointer, record.version);
  }

  // 2. Apply operations (increment version inside applyOperation)
  for (const operation of operations) {
    applyOperation(afterRecordMap, operation);
  }

  // 3. Validate schema + permissions...

  // 4. Build records array and call db.write
  const records = pointers.map((pointer) => ({
    ...pointer,
    record: getRecordMap(afterRecordMap, pointer) as any,
  }));

  await environment.db.write(records, versionMap);

  // 5. Publish pubsub updates...
}

So apis/write.ts only knows:

  • It has records: { table, id, record }[]

  • It has versionMap: { [table][id] -> previousVersion }

  • It calls environment.db.write(records, versionMap)

You don’t have to touch any Drizzle tables here at all.


  2. Where you plug in Drizzle: Database.write

Right now, Database.write is implemented for a JSON file. It:

  1. Checks that the current record.version in the JSON matches versionMap for each record.

  2. If any mismatch → TransactionConflictError.

  3. Otherwise, writes the new records. ccorcos-chet-stack-8a5edab28263…

You’ll replace that with a Drizzle+Postgres implementation.

2.1. High-level idea with Drizzle

For each record in records:

  • If it’s new (no previous version in versionMap):

    • INSERT row with version = 0 (or whatever record.version is).
  • If it’s existing:

    • UPDATE ... SET ... WHERE id = ? AND version = previousVersion.

    • If the UPDATE affects 0 rows → another transaction has already touched it → throw TransactionConflictError.

All of this should be inside a single DB transaction so it’s atomic.


  3. Example: Drizzle Database service

Assume you have Drizzle set up roughly like:

// db.ts
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";
export const pool = new Pool({ /* connection config */ });
export const db = drizzle(pool);

And some table definitions:

// schema.ts
import { pgTable, text, integer, timestamp } from "drizzle-orm/pg-core";

export const users = pgTable("users", {
  id: text("id").primaryKey(),
  version: integer("version").notNull(),
  username: text("username").notNull(),
  createdAt: timestamp("created_at").notNull(),
  updatedAt: timestamp("updated_at").notNull(),
});

export const threads = pgTable("threads", {
  id: text("id").primaryKey(),
  version: integer("version").notNull(),
  subject: text("subject").notNull(),
  // ...
});

Now implement Database something like:

// src/server/services/Database.ts
import { db } from "../drizzle/db";
import { users, threads /* ... */ } from "../drizzle/schema";
import { eq, and, inArray } from "drizzle-orm";
import {
  RecordWithTable,
  RecordPointer,
  VersionMap,
  RecordMap,
  RecordTable,
} from "../../shared/schema";
import { TransactionConflictError } from "../../shared/errors";
import { getRecordMap } from "../../shared/recordMapHelpers";

export class Database {
  constructor() {}

  /** Helper: map logical table name -> drizzle table + serializer */
  private tableMap = {
    user: {
      table: users,
      toRow: (record: any) => ({
        id: record.id,
        version: record.version,
        username: record.username,
        createdAt: new Date(record.created_at),
        updatedAt: new Date(record.updated_at),
      }),
    },
    thread: {
      table: threads,
      toRow: (record: any) => ({
        id: record.id,
        version: record.version,
        subject: record.subject,
        // ...
      }),
    },
    // add others...
  } as const;

  async getRecords(pointers: RecordPointer[]): Promise<RecordMap> {
    // Simple version: fetch each table separately with WHERE id IN (...)
    // For brevity I'll sketch just "user" and "thread".

    const recordMap: RecordMap = {};

    const byTable: Record<string, string[]> = {};
    for (const { table, id } of pointers) {
      if (!byTable[table]) byTable[table] = [];
      byTable[table].push(id);
    }

    // Users
    if (byTable.user?.length) {
      const rows = await db
        .select()
        .from(users)
        .where(inArray(users.id, byTable.user)); // drizzle's inArray, not a column method
      for (const row of rows) {
        recordMap.user ??= {};
        recordMap.user[row.id] = {
          id: row.id,
          version: row.version,
          username: row.username,
          created_at: row.createdAt.toISOString(),
          updated_at: row.updatedAt.toISOString(),
        };
      }
    }

    // Threads...
    // same pattern

    return recordMap;
  }

  async write(records: RecordWithTable[], versionMap: VersionMap): Promise<void> {
    await db.transaction(async (tx) => {
      for (const { table, id, record } of records) {
        const mapping = (this.tableMap as any)[table];
        if (!mapping) throw new Error(`Unknown table: ${table}`);

        const drizzleTable = mapping.table;
        const row = mapping.toRow(record);

        // previousVersion we saw before applying operations
        const previousVersion: number | undefined = getRecordMap(versionMap, { table, id });

        if (previousVersion === undefined) {
          // New record: insert, but make sure we don't clobber something that
          // already exists. You can either:
          // - (a) just insert and rely on PK conflict, or
          // - (b) use onConflictDoNothing and check result length.

          await tx.insert(drizzleTable).values(row);
        } else {
          // Existing record: optimistic concurrency using version in WHERE clause.
          const result = await tx
            .update(drizzleTable)
            .set(row)
            .where(and(eq(drizzleTable.id, id), eq(drizzleTable.version, previousVersion)))
            .returning({ id: drizzleTable.id });

          if (result.length === 0) {
            // Another transaction updated this row first
            throw new TransactionConflictError(`Conflict on ${table}:${id}`);
          }
        }
      }
    });
  }
}

Key points:

  • We still use VersionMap just like the JSON version. ccorcos-chet-stack-8a5edab28263…

  • The optimistic concurrency now lives inside SQL WHERE version = previousVersion, not manual JSON checks.

  • The public interface (getRecords, write) is the same, so apis/write.ts doesn’t care whether it’s JSON or Postgres.


  4. “We don’t write directly calling table”

That’s actually good — it matches Chet’s pattern.

Right now Chet uses environment.db.write(...) and environment.db.getRecords(...), not raw table access from APIs. ccorcos-chet-stack-8a5edab28263…

To keep that style with Drizzle:

  • Keep Database as a small service that wraps all Drizzle usage.

  • Export only methods like:

    • getRecord(pointer)

    • getRecords(pointers)

    • getUserByUsername(username)

    • write(records, versionMap)

  • The rest of your server code (all of server/apis/*) should only talk to environment.db, never to Drizzle tables directly. ccorcos-chet-stack-8a5edab28263…

That way, if you later change schema or DB, you only touch Database.ts, not the sync logic.


  5. Summary of changes to make

  1. Leave apis/write.ts alone (maybe tiny tweaks but logic stays). It already does:

    • load records

    • apply operations

    • build versionMap

    • validate

    • call db.write ccorcos-chet-stack-8a5edab28263…

  2. Rewrite src/server/services/Database.ts to:

    • Use Drizzle tables instead of JSON.

    • Implement getRecord(s) using SELECT.

    • Implement write(records, versionMap) using INSERT/UPDATE ... WHERE version = previousVersion inside a transaction.

  3. Keep pubsub, client-side sync, TransactionQueue, etc. exactly the same — they don’t care what DB you use. ccorcos-chet-stack-8a5edab28263…

If you want, in the next step I can:

  • Sketch a full Database.write that uses INSERT .. ON CONFLICT DO UPDATE WHERE ... instead of separate insert/update, or

  • Show a smaller example just for 1 table (e.g., issue) so you can copy-paste the pattern for the rest.


You:

https://pketh.org/building-offline.html

Read this. Do you think it's similar to the ChetStack pattern for building?


ChatGPT:

Yeah, they’re very similar in philosophy, and ChetStack is basically a more “engine-ified” version of the pattern Peter describes for Kinopio.

Let’s line them up.


  1. Multi-storage loading (Kinopio) vs RecordStorage + RecordCache (Chet)

From the article:

  • Each “space” is cached locally (stringified JSON under a space-ID key). pketh.org

  • On load, Kinopio combines local cache for speed + remote server as source of truth so you get “local speed and remote truth,” and local still works offline. pketh.org

ChetStack does the same thing, but with more structure:

  • RecordStorage = persistent storage (IndexedDB / filesystem).

  • RecordCache = in-memory cache.

  • loadRecord:

    • First loads from RecordStorage into RecordCache.

    • Then calls api.getRecords and merges by version (higher version wins). ccorcos-chet-stack-8a5edab28263…

So:

  • Kinopio: “multi-storage loading” with local JSON + remote fetch. pketh.org

  • Chet: the same idea, just wrapped in loadRecord / RecordStorage / RecordCache and used everywhere. ccorcos-chet-stack-8a5edab28263…

Conceptually: ✅ same pattern.


  2. Operations queue vs TransactionQueue

Kinopio:

  • All updates go through centralized update methods that:

    • Update local cache and app state.

    • Update real-time broadcast stream.

    • Add an entry to an API operations queue so many rapid updates (dragging, typing) get grouped. pketh.org+1

  • The queue is persisted in localStorage so it can be resumed when connectivity comes back. pketh.org

ChetStack:

  • client/actions/write does optimistic updates into RecordCache + RecordStorage, then enqueues the transaction. ccorcos-chet-stack-8a5edab28263…

  • TransactionQueue:

    • Holds a queue of Transaction objects (operations[]).

    • Persists them to localStorage under "transactions" for offline. ccorcos-chet-stack-8a5edab28263…

    • Dequeues and sends them to server/apis/write when online. ccorcos-chet-stack-8a5edab28263…

    • Tracks pending writes per record so the UI can show “this record has unsynced changes”. ccorcos-chet-stack-8a5edab28263…

So:

  • Kinopio: operations queue to batch high-frequency updates and retry later. pketh.org

  • Chet: TransactionQueue doing batched, offline-resilient writes and optimistic UI. ccorcos-chet-stack-8a5edab28263…

Again: ✅ same core idea, Chet just expresses it as typed “transactions” with schema/permissions.


  3. Offline UX / status

Kinopio:

  • Uses a PWA service worker to cache assets so the app boots offline. pketh.org

  • Has an explicit offline status UI: badges, dialogs, and hiding features that aren’t available offline. pketh.org

ChetStack:

  • Service worker caches web assets and falls back to cache when no connection. ccorcos-chet-stack-8a5edab28263…

  • HTTP helper returns status: 0 for offline; loaders then try RecordStorage and surface “offline cache miss” if nothing is cached. ccorcos-chet-stack-8a5edab28263… ccorcos-chet-stack-8a5edab28263…

  • TransactionQueue saves transactions locally and resumes them when window.online fires. ccorcos-chet-stack-8a5edab28263… ccorcos-chet-stack-8a5edab28263…

Chet doesn’t yet have as nice a visual offline UI as Kinopio describes, but the plumbing (assets cached, data cached, queue persisted, offline detection) is the same.


  4. Where Chet goes further than the article

The article is more high-level; ChetStack bakes in some extra engine-y stuff:

  • Per-record versions + optimistic concurrency (VersionMap + DB write with conflict detection). ccorcos-chet-stack-8a5edab28263… ccorcos-chet-stack-8a5edab28263…

  • Realtime sync via pubsub:

    • Server publishes version bumps per subscription key.

    • Client WebsocketPubsubClient uses those to decide when to refetch and update RecordCache. ccorcos-chet-stack-8a5edab28263… ccorcos-chet-stack-8a5edab28263…

  • Permissions layered into sync (validation of before/after maps on write, plus read-side filtering). ccorcos-chet-stack-8a5edab28263…

  • Undo/redo built from the same operations that get queued and synced. ccorcos-chet-stack-8a5edab28263…

Kinopio clearly has realtime collaboration and undo/redo, but the blog post doesn’t spell out those internals the way Chet’s code does.


  5. So, are they “the same pattern”?

I’d summarize it like this:

  • Conceptually:
    Yes — both use the same building blocks:

    • Local-first storage (fast + offline)

    • Server as source of truth

    • Multi-storage loading (local + remote)

    • Centralized operations queue stored locally for offline + batching

    • PWA/service worker + explicit offline handling

  • Practically / implementation detail:
    ChetStack is like a typed, generic framework version of what Kinopio describes:

    • More generic: per-record versions instead of “per space JSON”.

    • Built-in concurrency control, permissions, pubsub wiring.

    • Ready to swap JSON DB → Postgres/Drizzle without changing the sync semantics.

If you like how Kinopio’s article feels, ChetStack is a very faithful, “batteries-included” implementation of that same philosophy that you can study and then reimplement with your own tech stack (e.g., Drizzle + Postgres).


You:

ok, great. do you think i can implement the same features (real-time sync) with my choice of tech instead of Chet Stack? like use Hono instead of Express, use PostgreSQL and Drizzle, and on the client side use https://tanstack.com/db/latest

what do you think? i mean i want real-time sync and all the good engineering patterns from Chet, but i want to implement it with this


ChatGPT:

Totally, yes — Chet’s “sync engine” ideas are almost completely tech-agnostic. You can absolutely steal the patterns and rebuild them with:

  • Server: Hono + Postgres + Drizzle

  • Client: TanStack DB for local state + your own sync layer

  • Realtime: WebSockets (or SSE) on top of Hono

Think of ChetStack as a design doc, not a tech stack you must copy.


  1. What parts are “universal” (you should copy)

These pieces are independent of Express / JSON DB / whatever:

  1. Per-record versioning

    • Every row: { id, version, ...rest }

    • On every write, increment version.

    • When writing from the server, do optimistic concurrency:

      • “Update this row IF version = oldVersion”

      • If 0 rows updated → conflict → retry / rollback.

  2. Transaction + VersionMap

    • API takes a transaction = { txId, authorId, operations[] } (type sketch after this list)

    • Server:

      • Loads all records being touched.

      • Builds versionMap[table][id] = oldVersion from those records.

      • Applies operations → new records.

      • Validates schema + permissions.

      • Calls db.write(records, versionMap).

  3. Local-first + SWR

    On the client:

    • Local persistent DB (your TanStack DB backed by IndexedDB/SQLite).

    • In-memory cache.

    • Loader pattern:

      • Read from local DB first → show instantly.

      • Fetch from server → if newer version, overwrite local.

  4. TransactionQueue + optimistic UI

    • All user writes go through a central write():

      • Apply operations optimistically to local DB.

      • Push transaction into a queue persisted locally.

    • Background worker:

      • Dequeues, calls /api/write.

      • On success: pop from queue.

      • On conflict: fetch latest records, reconcile, maybe re-apply.

  5. Pubsub → “this record changed”

    • Server: after a successful write, publishes events like
      key = getRecord:user/123, version = 42

    • Client:

      • Keeps SubscriptionCache of which keys are in use.

      • WebSocket client listens to pubsub messages.

      • On getRecord:user/123 with version v:

        • If local version < v → re-fetch record, update local DB.

None of that cares whether you’re using Express or Hono, JSON or Postgres, React or Svelte.
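
To make those pieces concrete, here is a sketch of the shared types the list above keeps referring to (modeled on Chet's Transaction / VersionMap, but these exact shapes are my assumption):

// shared sync types (sketch)
type RecordPointer = { table: string; id: string };

// an operation is a small, serializable description of one change
type Operation =
  | { type: "set"; pointer: RecordPointer; key: string; value: unknown }
  | { type: "delete"; pointer: RecordPointer };

type Transaction = {
  txId: string;
  authorId: string;
  operations: Operation[];
};

// versionMap[table][id] = the version the client last saw for that record;
// the server only writes if the DB still holds exactly that version
type VersionMap = { [table: string]: { [id: string]: number } };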


  2. Mapping Chet → your stack

Server side

Chet:

  • Express + JSON “Database” service.

  • environment.db.write(records, versionMap) is where concurrency happens.

  • PubsubServer wraps a WebSocket server.

You (Hono + Drizzle + Postgres):

  1. HTTP routes with Hono

    • POST /api/write → your port of apis/write.ts

    • POST /api/getRecords → fetch multiple records by { table, id }[]

    • Optionally GET /api/changes or other query endpoints.

  2. DB service using Drizzle

    You implement the same interface Chet uses:

    interface Database {
      getRecords(pointers: RecordPointer[]): Promise<RecordMap>;
      write(records: RecordWithTable[], versionMap: VersionMap): Promise<void>;
    }

    Internally, write(...) uses Drizzle + Postgres:

    • Wrap in db.transaction.

    • For each record:

      • If it’s new (no entry in versionMap): INSERT.

      • If it’s existing:

        await tx
          .update(table)
          .set(row)
          .where(and(eq(table.id, id), eq(table.version, previousVersion)));
      • If affectedRows === 0 → throw TransactionConflictError.

  3. Pubsub over WebSockets in Hono

    • Use Hono + a WS helper (or raw ws) at /ws.

    • Maintain Map<clientId, websocket> and Map<subscriptionKey, Set<clientId>>.

    • subscribe(key) → add clientId to that set.

    • publish(key, payload):

      • For each client in that set, send { key, ...payload }.

    Then from your write handler, after DB commit:

    pubsub.publish(`getRecord:${table}/${id}`, { version: newVersion });

    Exactly like Chet does, just with Hono.
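
A minimal in-memory version of that keyed pubsub could look like this (a sketch using the ws package; a real one also needs auth and per-message validation):

// keyed pubsub over WebSockets (sketch)
import { WebSocket } from "ws";

export class PubsubServer {
  private clients = new Map<string, WebSocket>();
  private subscriptions = new Map<string, Set<string>>(); // key -> clientIds

  addClient(clientId: string, ws: WebSocket) {
    this.clients.set(clientId, ws);
    ws.on("close", () => this.removeClient(clientId));
  }

  removeClient(clientId: string) {
    this.clients.delete(clientId);
    for (const subscribers of this.subscriptions.values()) {
      subscribers.delete(clientId);
    }
  }

  subscribe(clientId: string, key: string) {
    let subscribers = this.subscriptions.get(key);
    if (!subscribers) this.subscriptions.set(key, (subscribers = new Set()));
    subscribers.add(clientId);
  }

  unsubscribe(clientId: string, key: string) {
    this.subscriptions.get(key)?.delete(clientId);
  }

  publish(key: string, payload: Record<string, unknown>) {
    const message = JSON.stringify({ key, ...payload });
    for (const clientId of this.subscriptions.get(key) ?? []) {
      const ws = this.clients.get(clientId);
      if (ws && ws.readyState === WebSocket.OPEN) ws.send(message);
    }
  }
}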


Client side

Chet:

  • RecordCache + RecordStorage + SubscriptionCache.

  • TransactionQueue for offline.

  • WebSocket client (WebsocketPubsubClient).

You (TanStack DB + your sync client):

You want a small “sync layer” that uses TanStack DB as the local store.

Think in terms of a SyncClient:

  1. Local DB with TanStack DB

    • Define tables mirroring your server:

      • users, threads, messages, etc. with version columns.
    • This is your “RecordStorage + RecordCache”.

  2. Loaders using TanStack DB + fetch

    • For getRecord(user/123):

      • Query TanStack DB immediately → show stale-or-fresh record.

      • In parallel, call /api/getRecords to fetch from server.

      • If server version > local, update TanStack DB row.

    You can integrate this with TanStack Query for caching + suspense if you want.

  3. TransactionQueue

    Implement a tiny queue that:

    • Lives in memory but is also stored in localStorage or a TanStack DB table pending_transactions.

    • Each item: { txId, operations, affectedPointers }.

    • Exposes:

      • enqueue(transaction)

      • subscribeIsPendingWrite(pointer) for UI badges.

    • Background loop:

      • If online & queue not empty:

        • Send POST /api/write.

        • On success: delete from queue.

        • On TransactionConflictError: fetch latest records for affectedPointers, update TanStack DB, maybe retry.

  4. WebSocket pubsub client

    • On app start, connect to ws://.../ws.

    • Provide APIs:

      • subscribeRecord(table, id)

      • unsubscribeRecord(table, id)

    Internally:

    • Keep SubscriptionCache (map from key → refCount, and key → callback); a sketch follows this list.

    • When refCount goes from 0 → 1, send {"type": "subscribe", "key": "getRecord:users/123"}.

    • When a message { key, version } arrives:

        • If local version < receivedVersion, call the loader (or directly fetch /api/getRecords) and update TanStack DB.

  5. Hooks for your UI

    For example, a React hook:

    import { useEffect, useState } from "react";

    function useSyncedRecord(table: string, id: string) {
      const [record, setRecord] = useState<YourRecordType | null>(null);
    
      useEffect(() => {
        // 1. subscribe to record
        syncClient.subscribeRecord(table, id);
    
        // 2. load from local DB and then server (SWR)
        syncClient.loadRecord(table, id).then(setRecord);
    
        return () => {
          syncClient.unsubscribeRecord(table, id);
        };
      }, [table, id]);
    
      const isPending = syncClient.isPendingWrite({ table, id });
    
      return { record, isPending };
    }

    Under the hood, loadRecord uses TanStack DB to read + write; the hook just reacts to DB changes / events.
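
Here is the SubscriptionCache from step 4 as a small sketch (it assumes an already-connected socket; reconnect-and-resubscribe is left out):

// refcounted subscriptions (sketch)
export class SubscriptionCache {
  private refCounts = new Map<string, number>();

  constructor(private socket: WebSocket) {}

  subscribe(key: string) {
    const count = this.refCounts.get(key) ?? 0;
    this.refCounts.set(key, count + 1);
    // only tell the server on the 0 -> 1 transition
    if (count === 0) {
      this.socket.send(JSON.stringify({ type: "subscribe", key }));
    }
  }

  unsubscribe(key: string) {
    const count = this.refCounts.get(key) ?? 0;
    if (count <= 1) {
      this.refCounts.delete(key);
      this.socket.send(JSON.stringify({ type: "unsubscribe", key }));
    } else {
      this.refCounts.set(key, count - 1);
    }
  }
}

A React hook would call subscribe in useEffect and unsubscribe in the cleanup, so the refcount tracks how many mounted components care about each key.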


  3. Any gotchas?

A few things to keep in mind:

  • TanStack DB is a local DB, not a sync engine.
    You still need the Chet-like parts:

    • TransactionQueue

    • Pubsub client

    • Versioned records

    • Permissions and schema validation on the server

  • Keep a clean abstraction layer like Chet:

    • Database service hides Drizzle/Postgres from the rest of your server.

    • SyncClient hides WebSocket + queue + TanStack DB details from your UI.

    • That’s what makes Chet feel “clean”.

  • Start small.
    Don’t try to rebuild everything at once. A good path:

    1. Single table (todos) with { id, text, completed, version }.

    2. Drizzle + Postgres + write(records, versionMap) using Hono.

    3. TanStack DB for local storage, loader: local-first then server.

    4. TransactionQueue (no pubsub yet).

    5. Add WebSocket pubsub → realtime updates.

    6. Then grow to multiple tables, permissions, undo/redo, etc.


  4. Short answer

Can I implement the same “good engineering” and real-time sync as ChetStack, but with Hono + Postgres/Drizzle + TanStack DB?

Yes. The “engine” part of ChetStack is just:

  • versioned records + optimistic concurrency,

  • local-first DB + SWR,

  • transaction queue,

  • pubsub-driven invalidation.

All of that ports cleanly to your stack. You’ll basically end up with:

  • A Drizzle-backed Database service implementing Chet-like write.

  • A Hono WebSocket pubsub server.

  • A SyncClient that wires TanStack DB + TransactionQueue + WebSocket pubsub.

If you want, I can sketch a concrete folder structure + minimal type definitions for a “Chet-style sync engine” using your tech choices as if it were a new repo.


You:

can you implement this

Start small. Don’t try to rebuild everything at once. A good path:

Single table (todos) with { id, text, completed, version }.

Drizzle + Postgres + write(records, versionMap) using Hono.

TanStack DB for local storage, loader: local-first then server.

TransactionQueue (no pubsub yet).

Add WebSocket pubsub → realtime updates.

Then grow to multiple tables, permissions, undo/redo, etc.


ChatGPT:

Yes, you totally can — and this is a really nice stack to do it with.

I’ll give you a concrete mini-implementation for each step:

  • Hono + Drizzle + Postgres backend

  • Single todos table with version + optimistic concurrency

  • Client with TanStack DB (React) as local store

  • TransactionQueue (no pubsub)

  • Then WebSocket pubsub for realtime

You can copy-paste this into a monorepo and iterate.


  1. Data model + Drizzle + Postgres

SQL table

CREATE TABLE todos (
  id        text PRIMARY KEY,
  text      text NOT NULL,
  completed boolean NOT NULL DEFAULT false,
  version   integer NOT NULL DEFAULT 0
);

Drizzle schema

// src/server/db/schema.ts
import { pgTable, text, boolean, integer } from "drizzle-orm/pg-core";

export const todos = pgTable("todos", {
  id: text("id").primaryKey(),
  text: text("text").notNull(),
  completed: boolean("completed").notNull().default(false),
  version: integer("version").notNull().default(0),
});

export type TodoRow = typeof todos.$inferSelect;
export type NewTodoRow = typeof todos.$inferInsert;

Drizzle client

// src/server/db/client.ts
import { drizzle } from "drizzle-orm/node-postgres";
import { Pool } from "pg";

const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
});

export const db = drizzle(pool);

  2. write(records, versionMap) with Hono + Drizzle

We’ll mimic the Chet-style write(records, versionMap) but for a single todos table.

Types for the write API

// src/shared/types.ts

export type Todo = {
  id: string;
  text: string;
  completed: boolean;
  version: number; // last known server version
};

export type TodoOperation =
  | {
      type: "put"; // upsert
      todo: { id: string; text: string; completed: boolean };
      expectedVersion: number | null; // null = new row
    }
  | {
      type: "delete";
      id: string;
      expectedVersion: number | null;
    };

export type WriteRequest = {
  txId: string;
  operations: TodoOperation[];
};

export type WriteResponse = {
  todos: Todo[]; // the new versions as written to the DB
};

Hono app with /api/todos/write & /api/todos

// src/server/app.ts
import { Hono } from "hono";
import { db } from "./db/client";
import { todos } from "./db/schema";
import { eq, and } from "drizzle-orm";
import type { WriteRequest, WriteResponse, Todo } from "../shared/types";

export const app = new Hono();

// Fetch all todos (simple)
app.get("/api/todos", async (c) => {
  const rows = await db.select().from(todos);
  const data: Todo[] = rows;
  return c.json(data);
});

// Optimistic write endpoint
app.post("/api/todos/write", async (c) => {
  const body = (await c.req.json()) as WriteRequest;

  const result = await db.transaction(async (tx) => {
    const updated: Todo[] = [];

    for (const op of body.operations) {
      if (op.type === "put") {
        const { id, text, completed } = op.todo;
        const expected = op.expectedVersion;

        if (expected == null) {
          // New todo: insert with version 1
          const inserted = await tx
            .insert(todos)
            .values({ id, text, completed, version: 1 })
            .returning();
          updated.push(inserted[0]);
        } else {
          // Existing: update where id AND version match expectedVersion
          const nextVersion = expected + 1;
          const res = await tx
            .update(todos)
            .set({ text, completed, version: nextVersion })
            .where(and(eq(todos.id, id), eq(todos.version, expected)))
            .returning();

          if (res.length === 0) {
            // Conflict: someone else wrote first
            // For now: throw; client will refetch & retry
            throw new Error(`Conflict on todo ${id}`);
          }

          updated.push(res[0]);
        }
      } else if (op.type === "delete") {
        const { id, expectedVersion } = op;
        if (expectedVersion == null) {
          // If we thought it didn't exist, just delete blindly
          await tx.delete(todos).where(eq(todos.id, id));
        } else {
          const res = await tx
            .delete(todos)
            .where(and(eq(todos.id, id), eq(todos.version, expectedVersion)))
            .returning();

          if (res.length === 0) {
            throw new Error(`Conflict on delete ${id}`);
          }
        }
      }
    }

    return updated;
  });

  const response: WriteResponse = { todos: result };
  return c.json(response);
});

Then boot it:

// src/server/index.ts
import { app } from "./app";
import { serve } from "@hono/node-server";

serve({ fetch: app.fetch, port: 3000 });

So far we have:

  • Single todos table

  • Optimistic concurrency via version and expectedVersion in WHERE clause.
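
Before touching the client, you can smoke-test the endpoint with a plain fetch (assuming the server above is running on port 3000; run this in a browser console or any ESM script):

// smoke test for the write endpoint
const res = await fetch("http://localhost:3000/api/todos/write", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({
    txId: "tx-1",
    operations: [
      {
        type: "put",
        todo: { id: "t1", text: "hello", completed: false },
        expectedVersion: null, // null = brand new row
      },
    ],
  }),
});
console.log(await res.json()); // -> { todos: [{ id: "t1", ..., version: 1 }] }

Running it a second time with expectedVersion: null fails on the duplicate primary key; with expectedVersion: 1 it bumps the row to version 2.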


  3. TanStack DB on the client (local-first loader, no queue yet)

We’ll use:

  • @tanstack/react-db for DB + hooks

  • @tanstack/query-db-collection for loading from REST into a collection via TanStack Query. tanstack.com+1

Install:

npm install @tanstack/react-db @tanstack/query-db-collection @tanstack/react-query

TanStack DB client & collection

// src/client/db/todoCollection.ts
import { z } from "zod";
import { createCollection } from "@tanstack/react-db";
import { queryCollectionOptions } from "@tanstack/query-db-collection";
import { QueryClient } from "@tanstack/react-query";

const todoSchema = z.object({
  id: z.string(),
  text: z.string(),
  completed: z.boolean(),
  version: z.number(),
});

export type Todo = z.infer<typeof todoSchema>;

// Create a QueryCollection that uses our REST API
export function createTodoCollection(queryClient: QueryClient) {
  return createCollection(
    queryCollectionOptions({
      id: "todos",
      queryClient,
      schema: todoSchema,
      queryKey: ["todos"],
      queryFn: async () => {
        const res = await fetch("/api/todos");
        if (!res.ok) throw new Error("Failed to load todos");
        const data = (await res.json()) as Todo[];
        return data;
      },
      getKey: (todo: Todo) => todo.id,
      // We'll handle mutations manually via our TransactionQueue later,
      // so don't configure onInsert/onUpdate here. Local writes will go
      // through the collection's direct-write utils (collection.utils.write*).
    })
  );
}

TanStack DB client bootstrap

// src/client/db/client.tsx
import type { ReactNode } from "react";
import { QueryClient, QueryClientProvider } from "@tanstack/react-query";
import { createTodoCollection } from "./todoCollection";

const queryClient = new QueryClient();
const todoCollection = createTodoCollection(queryClient);

export function AppProviders({ children }: { children: ReactNode }) {
  // TanStack DB collections are plain module-level objects, so the only
  // React provider we need is React Query's.
  return (
    <QueryClientProvider client={queryClient}>{children}</QueryClientProvider>
  );
}

export { todoCollection };
export type { Todo } from "./todoCollection";

Simple local-first usage with useLiveQuery

// src/client/App.tsx
import React from "react";
import { useLiveQuery } from "@tanstack/react-db";
import { todoCollection } from "./db/client";

export function App() {
  // live, reactive read over the local collection
  const { data: todos } = useLiveQuery((q) =>
    q
      .from({ t: todoCollection })
      .select(({ t }) => ({
        id: t.id,
        text: t.text,
        completed: t.completed,
        version: t.version,
      }))
      .orderBy(({ t }) => t.id)
  );

  return (
    <div>
      <h1>Todos</h1>
      <pre>{JSON.stringify(todos, null, 2)}</pre>
    </div>
  );
}

This already gives you:

  • Local TanStack DB collection of todos

  • Fetches from /api/todos into that collection (TanStack Query + QueryCollection). tanstack.com+1

  • useLiveQuery = reactive read, blazing-fast local queries.

Next we add writes with a TransactionQueue.


  4. TransactionQueue (optimistic, no pubsub yet)

We’ll:

  • Mutate the TanStack DB collection optimistically (via the collection's direct-write utils, since we haven't configured mutation handlers).

  • Enqueue operations in a local queue (in-memory + localStorage).

  • A background loop drains the queue by calling /api/todos/write.

TransactionQueue implementation

// src/client/sync/transactionQueue.ts
import { nanoid } from "nanoid";
import { todoCollection, type Todo } from "../db/client";
import type { TodoOperation, WriteRequest, WriteResponse } from "../../shared/types";

type QueuedTx = {
  id: string;
  operations: TodoOperation[];
};

const STORAGE_KEY = "todo-tx-queue-v1";

function loadQueue(): QueuedTx[] {
  try {
    const raw = localStorage.getItem(STORAGE_KEY);
    if (!raw) return [];
    return JSON.parse(raw);
  } catch {
    return [];
  }
}

function saveQueue(queue: QueuedTx[]) {
  localStorage.setItem(STORAGE_KEY, JSON.stringify(queue));
}

let queue: QueuedTx[] = loadQueue();
let isDraining = false;

export function enqueueTransaction(operations: TodoOperation[]) {
  const tx: QueuedTx = { id: nanoid(), operations };
  queue.push(tx);
  saveQueue(queue);
  drainQueue(); // fire-and-forget
}

// naive online detection
function isOnline() {
  return window.navigator.onLine;
}

async function drainQueue() {
  if (isDraining) return;
  isDraining = true;

  try {
    while (queue.length > 0 && isOnline()) {
      const tx = queue[0];

      const body: WriteRequest = {
        txId: tx.id,
        operations: tx.operations,
      };

      const res = await fetch("/api/todos/write", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
      });

      if (!res.ok) {
        // on conflict or server error, for now just break;
        // caller could add smarter handling later.
        console.error("write failed", await res.text());
        break;
      }

      const data = (await res.json()) as WriteResponse;

      // Reconcile authoritative server versions into the collection.
      // writeUpsert is the query-collection "direct write" util: it updates
      // local state without going through mutation handlers (which we
      // deliberately haven't configured).
      for (const updated of data.todos) {
        todoCollection.utils.writeUpsert(updated);
      }

      queue.shift();
      saveQueue(queue);
    }
  } finally {
    isDraining = false;
  }
}

// Listen for going back online
window.addEventListener("online", () => {
  drainQueue();
});
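
If you want something smarter than break on a failed write, one simple strategy is: refetch authoritative state, overwrite the local store, and drop the conflicting transaction. A sketch you could add to this file (not wired into drainQueue above; assumes the direct-write utils accept arrays):

// conflict recovery (sketch): call when /api/todos/write reports a conflict
async function recoverFromConflict(failedTx: QueuedTx) {
  const res = await fetch("/api/todos");
  if (!res.ok) return; // still offline or server error: keep the tx queued

  // overwrite local state with server truth (direct write, no handlers)
  const serverTodos = (await res.json()) as Todo[];
  todoCollection.utils.writeUpsert(serverTodos);

  // drop the conflicting transaction instead of retrying it blindly
  queue = queue.filter((tx) => tx.id !== failedTx.id);
  saveQueue(queue);
}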

Optimistic mutations into TanStack DB + queue

// src/client/sync/todoActions.ts
import { nanoid } from "nanoid";
import { todoCollection, type Todo } from "../db/client";
import type { TodoOperation } from "../../shared/types";
import { enqueueTransaction } from "./transactionQueue";

// Create
export function createTodo(text: string) {
  const id = nanoid();

  const optimistic: Todo = {
    id,
    text,
    completed: false,
    version: 0, // unknown server version
  };

  // 1. Optimistic local insert (direct write; no onInsert handler is configured)
  todoCollection.utils.writeInsert(optimistic);

  // 2. Queue mutation
  const op: TodoOperation = {
    type: "put",
    todo: { id, text, completed: false },
    expectedVersion: null,
  };
  enqueueTransaction([op]);
}

// Toggle
export function toggleTodo(todo: Todo) {
  const newCompleted = !todo.completed;

  // 1. Optimistic local update (direct write of the full row)
  todoCollection.utils.writeUpsert({ ...todo, completed: newCompleted });

  // 2. Queue mutation with expectedVersion
  const op: TodoOperation = {
    type: "put",
    todo: { id: todo.id, text: todo.text, completed: newCompleted },
    expectedVersion: todo.version,
  };
  enqueueTransaction([op]);
}

// Delete
export function deleteTodo(todo: Todo) {
  // 1. Optimistic local delete (direct write)
  todoCollection.utils.writeDelete(todo.id);

  // 2. Queue mutation
  const op: TodoOperation = {
    type: "delete",
    id: todo.id,
    expectedVersion: todo.version,
  };
  enqueueTransaction([op]);
}

Use these actions in the UI

// src/client/App.tsx (updated)
import React, { useState } from "react";
import { useLiveQuery } from "@tanstack/react-db";
import { todoCollection } from "./db/client";
import type { Todo } from "./db/todoCollection";
import { createTodo, toggleTodo, deleteTodo } from "./sync/todoActions";

export function App() {
  const { data: todos } = useLiveQuery((q) =>
    q.from({ t: todoCollection }).select(({ t }) => ({
      id: t.id,
      text: t.text,
      completed: t.completed,
      version: t.version,
    }))
  );

  const [text, setText] = useState("");

  const onSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (!text.trim()) return;
    createTodo(text.trim());
    setText("");
  };

  return (
    <div>
      <h1>Todos (local-first + queued writes)</h1>

      <form onSubmit={onSubmit}>
        <input
          value={text}
          onChange={(e) => setText(e.target.value)}
          placeholder="New todo..."
        />
        <button type="submit">Add</button>
      </form>

      <ul>
        {todos?.map((t: Todo) => (
          <li key={t.id}>
            <label>
              <input
                type="checkbox"
                checked={t.completed}
                onChange={() => toggleTodo(t)}
              />
              {t.text}{" "}
              <small style={{ opacity: 0.6 }}>v{t.version ?? 0}</small>
            </label>
            <button onClick={() => deleteTodo(t)}>Delete</button>
          </li>
        ))}
      </ul>
    </div>
  );
}

At this point you have:

  • Local TanStack DB for reads

  • Optimistic local writes

  • TransactionQueue that syncs with Hono+Postgres in the background

  • Basic conflict detection via version and expectedVersion

No realtime yet — clients only see others’ changes when they re-query.


  5. Add WebSocket pubsub → realtime updates

Now we push it closer to Chet:

  • Server broadcasts: “todo X now has version N”.

  • Client listens and reconciles.

WebSocket server with Hono

Install WS:

npm install ws

Server side:

// src/server/ws.ts
import { WebSocketServer, WebSocket } from "ws";
import type { TodoRow } from "./db/schema";

type Client = WebSocket;

const clients = new Set<Client>();

export function createWSServer(server: import("http").Server) {
  const wss = new WebSocketServer({ server, path: "/ws" });

  wss.on("connection", (ws) => {
    clients.add(ws);

    ws.on("close", () => {
      clients.delete(ws);
    });
  });

  return wss;
}

export function broadcastTodoUpdated(todo: TodoRow) {
  const message = JSON.stringify({
    type: "todoUpdated" as const,
    todo: todo,
  });

  for (const ws of clients) {
    if (ws.readyState === WebSocket.OPEN) {
      ws.send(message);
    }
  }
}

Wire it up in index.ts:

// src/server/index.ts
import { serve } from "@hono/node-server";
import type { Server } from "http";
import { app } from "./app";
import { createWSServer } from "./ws";

// serve() returns the underlying Node http server, so we can attach
// the WebSocket server to it (app.fetch takes a Fetch API Request, not
// Node's req/res, so we can't pass it to http.createServer directly).
const server = serve({ fetch: app.fetch, port: 3000 }, () => {
  console.log("Server listening on http://localhost:3000");
});

createWSServer(server as Server);

Now call broadcastTodoUpdated from the write endpoint:

// src/server/app.ts (change in write handler)
import { broadcastTodoUpdated } from "./ws";

// ...

app.post("/api/todos/write", async (c) => {
  const body = (await c.req.json()) as WriteRequest;

  const result = await db.transaction(async (tx) => {
    const updated: Todo[] = [];

    // ... same put/delete logic as before, pushing every written
    // row into `updated` ...

    return updated;
  });

  // Broadcast only after the transaction has committed, so clients
  // never observe uncommitted state.
  for (const t of result) {
    broadcastTodoUpdated(t);
  }

  const response: WriteResponse = { todos: result };
  return c.json(response);
});

WebSocket client → update TanStack DB

// src/client/sync/realtime.ts
import { todoCollection, type Todo } from "../db/client";

let socket: WebSocket | null = null;

export function connectRealtime() {
  if (socket && socket.readyState === WebSocket.OPEN) return;

  const url =
    (location.protocol === "https:" ? "wss://" : "ws://") +
    location.host +
    "/ws";

  socket = new WebSocket(url);

  socket.onmessage = (ev) => {
    try {
      const msg = JSON.parse(String(ev.data));
      if (msg.type === "todoUpdated") {
        const updated = msg.todo as Todo;
        // LWW merge by version: only apply broadcasts that are newer
        // than what we already have locally.
        const existing = todoCollection.get(updated.id);
        if (!existing || (existing.version ?? 0) < (updated.version ?? 0)) {
          todoCollection.utils.writeUpsert(updated);
        }
      }
    } catch (e) {
      console.error("bad ws message", e);
    }
  };

  socket.onclose = () => {
    // naive reconnect
    setTimeout(connectRealtime, 1000);
  };
}

Call this once near app startup:

// src/client/main.tsx
import React from "react";
import ReactDOM from "react-dom/client";
import { AppProviders } from "./db/client";
import { App } from "./App";
import { connectRealtime } from "./sync/realtime";

connectRealtime();

ReactDOM.createRoot(document.getElementById("root")!).render(
  <AppProviders>
    <App />
  </AppProviders>
);

Now you’ve got:

  • One client creates/updates/deletes todos → optimistic & queued → written to Postgres.

  • Server broadcasts todoUpdated over WS after commit.

  • Every client merges that update into its TanStack DB collection, LWW via version.

  • useLiveQuery picks up the change instantly.

That’s a real-time sync engine for a single table, using:

  • Hono + Drizzle + Postgres

  • TanStack DB as the local query engine

  • TransactionQueue for offline-ish writes

  • WebSocket pubsub for realtime

From here, you can:

  • Add more tables (issues, comments, etc.)

  • Generalize TodoOperation → generic operations per table (sketched below)

  • Add permission checks in the write handler

  • Add undo/redo on the client by storing inverse operations like Chet does

  • Introduce better conflict handling than just “throw on mismatch”
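
For a taste of the "generic operations per table" step, the request shape might look something like this (names are my assumption, not Chet's exact types):

// generalized operations (sketch)
type TableName = "todos" | "issues" | "comments";

type GenericOperation =
  | {
      type: "put";
      table: TableName;
      record: { id: string } & Record<string, unknown>;
      expectedVersion: number | null;
    }
  | {
      type: "delete";
      table: TableName;
      id: string;
      expectedVersion: number | null;
    };

type GenericWriteRequest = { txId: string; operations: GenericOperation[] };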

If you want next, we can generalize this to a “transaction with multiple tables” that looks even closer to Chet’s write(environment, transaction) but stays in your Hono/Drizzle/TanStack DB stack.


You:

can you export this whole thread as markdown for future LLMs?


ChatGPT:

Nice idea. I’ve bundled everything we’ve been talking about into a single markdown “design doc” you can feed into another LLM later.

You can download it here:

Download the markdown file

It contains:

  • The overall goal and mental model (Linear-style sync, no CRDTs)

  • Server design (Hono + Drizzle + Postgres, version-based writes)

  • Client design (TanStack DB, local-first, TransactionQueue)

  • WebSocket realtime integration

  • All the key TypeScript/SQL snippets we walked through

  • Notes on how this maps to Chet Stack + Kinopio patterns

  • Suggested next steps (multi-table, permissions, undo/redo, etc.)

You can keep extending that file as your own “sync engine spec” and paste it into future LLM sessions as context.
