Charlotte/Runme vs JupyterLab: Architecture Comparison

This document compares the Charlotte/Runme notebook architecture with JupyterLab, analyzing their fundamental design decisions, the trade-offs behind them, and which capabilities each system makes easy or hard to implement.


Executive Summary

| Aspect | Charlotte/Runme | JupyterLab |
|---|---|---|
| Primary Use Case | DevOps runbooks, operational workflows | Data science, research, exploration |
| File Format | Plain Markdown | JSON (.ipynb) |
| Kernel Model | Persistent shell session | Isolated language-specific kernels |
| Frontend Framework | React + Lit Web Components | Lumino (Phosphor) widgets |
| Backend Language | Go | Python |
| Communication | gRPC + Protocol Buffers | REST + WebSocket + ZMQ |
| Extension Model | Component composition | Token-based dependency injection |

1. Document Format

Charlotte/Runme: Markdown-Native

Approach: Notebooks are standard Markdown files (.md, .mdx, .mdr) with fenced code blocks.

````markdown
# Setup Guide

Install dependencies:

```bash
npm install
```

Run the server:

```bash
npm run dev
```
````
**Pros**:
- **Version control friendly**: Clean diffs, easy code review, works naturally with Git
- **Universal readability**: Any text editor or Markdown viewer can display the content
- **Documentation IS code**: Single source of truth - docs are the executable notebook
- **No output bloat**: Source files remain small; outputs stored separately or not at all
- **Editor agnostic**: Edit in VS Code, vim, GitHub web UI, or any tool

**Cons**:
- **Output persistence challenge**: Difficult to store rich outputs (charts, images) inline
- **Metadata limitations**: Cell configuration requires non-standard annotations
- **Format ambiguity**: Code blocks in documentation could be misinterpreted as executable
- **No built-in execution state**: Harder to show "this was run and produced X"

### JupyterLab: JSON Document Model

**Approach**: Notebooks are JSON files (`.ipynb`) containing cells, metadata, and outputs.

```json
{
  "cells": [
    {
      "cell_type": "code",
      "source": ["print('hello')"],
      "outputs": [{"output_type": "stream", "text": ["hello\n"]}],
      "execution_count": 1
    }
  ],
  "metadata": {...},
  "nbformat": 4,
  "nbformat_minor": 5
}
```

Pros:

  • Self-contained documents: Outputs stored with code; notebooks are complete records
  • Rich output support: Native storage for images, HTML, interactive widgets
  • Execution history: Clear record of what was run and in what order
  • Standardized format: the nbformat library provides validation and migration (see the sketch below)
  • Mature tooling: nbconvert, nbviewer, papermill, and extensive ecosystem

Cons:

  • Version control nightmare: JSON diffs are massive and unreadable
  • Output bloat: Notebooks grow large with embedded images and outputs
  • Merge conflicts: Almost impossible to resolve notebook merge conflicts
  • Special tooling required: Need Jupyter-aware tools to view or edit
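
As a minimal sketch of that nbformat tooling (the notebook path is hypothetical), this reads an .ipynb, validates it against the schema, and lists the outputs stored in each code cell:

```python
import nbformat

# Load as nbformat version 4 and check it against the schema
nb = nbformat.read("analysis.ipynb", as_version=4)
nbformat.validate(nb)  # raises ValidationError if the JSON is malformed

# Outputs live alongside the source in each code cell
for i, cell in enumerate(nb.cells):
    if cell.cell_type == "code":
        print(i, cell.execution_count, len(cell.outputs), "outputs")
```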

Inherent Trade-off

Runme makes documentation-first workflows easy but struggles with output persistence. JupyterLab makes reproducible artifacts easy but struggles with collaboration and version control.


2. Kernel Architecture

Charlotte/Runme: Persistent Shell Session

Model: A single shell session maintains state across all cells, similar to a terminal.

```
Cell 1: export API_KEY="secret"
Cell 2: echo $API_KEY  # Works! Environment persists
Cell 3: curl -H "Authorization: $API_KEY" $URL  # Variables available
```

Characteristics:

  • Environment variables persist across cells
  • Previous cell output available via $__ variable
  • Named cells export as environment variables
  • Shell state (functions, aliases) maintained
  • Polyglot via shebang (#!/usr/bin/env python)

Pros:

  • Natural DevOps workflow: Matches how engineers actually work in terminals
  • Variable sharing is trivial: Just use environment variables
  • Multi-language support built-in: Any language with a CLI interpreter works
  • Lightweight: No heavy kernel process per language

Cons:

  • No true language kernels: Python cells don't maintain Python objects between runs
  • Limited introspection: No access to in-memory data structures
  • Restart means restart everything: Can't selectively reset state
  • Shell-centric: Data science workflows (DataFrames, plots) are awkward

JupyterLab: Language-Specific Kernels

Model: Separate kernel processes for each language, communicating via Jupyter Protocol.

Browser → WebSocket → Jupyter Server → ZMQ → Kernel Process (Python/R/Julia)

Characteristics:

  • Full language runtime in kernel process
  • Objects persist in kernel memory
  • Rich introspection (tab completion, object inspection)
  • Standardized messaging protocol across languages
  • Can interrupt, restart, or switch kernels

Pros:

  • True language integration: Access to full language features and state
  • Rich introspection: Tab completion, docstrings, variable explorer
  • Interactive computing: REPL-style exploration with persistent objects
  • Mature kernel ecosystem: IPython, IRkernel, IJulia, and 100+ community kernels
  • Kernel management: Restart, interrupt, switch without losing document

Cons:

  • Heavyweight: Each kernel is a separate process with memory overhead
  • Language-specific: Need different kernels for different languages
  • Complex communication: ZMQ + WebSocket + REST adds latency and failure modes
  • Single language per notebook: Polyglot requires workarounds (cell magics)

Inherent Capabilities

| Capability | Runme Difficulty | JupyterLab Difficulty |
|---|---|---|
| Cross-cell variable sharing (shell) | Easy | Medium (magics) |
| Cross-cell object sharing (Python) | Hard | Easy |
| Multi-language in one notebook | Easy (shebang) | Hard (separate kernels) |
| Rich introspection/completion | Hard | Easy |
| Lightweight execution | Easy | Hard |
| Interactive debugging | Hard | Easy (via ipdb) |
| Long-running background tasks | Easy | Medium |
| Distributed kernel execution | Hard | Easy (Enterprise Gateway) |

But Doesn't Jupyter Have Bash?

A common question: Jupyter has a bash kernel and ! shell commands—why is Runme better for DevOps?

Jupyter's ! commands run as isolated subprocesses:

```python
# Cell 1
!export API_KEY="secret"

# Cell 2
!echo $API_KEY  # Empty! The subprocess already exited.
```

Each ! command spawns a new shell, executes, and exits, so environment variables don't persist.
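
The usual workaround in a Python kernel is to set variables on the kernel process itself, since every later ! subprocess inherits the kernel's environment. A minimal sketch (the key is a placeholder); this persists state, but as kernel memory rather than a real shell session:

```python
import os

# Set the variable on the kernel process (or use the IPython magic: %env API_KEY=secret)
os.environ["API_KEY"] = "secret"

# In a later cell, subprocesses spawned with ! inherit it:
# !printenv API_KEY   -> prints "secret"
```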

Jupyter's bash kernel is better but still differs:

The bash kernel does maintain some state, but it's designed around Jupyter's cell-isolation model rather than true terminal behavior.

Runme's shell session works like a real terminal:

```bash
# Cell 1
export API_KEY="secret"
source ~/.bashrc

# Cell 2
echo $API_KEY        # Works! Same shell session.
my_alias             # Works! Aliases from .bashrc available.

# Cell 3
curl -H "Auth: $API_KEY" $URL  # Still available
```

This matters for DevOps because real workflows:

  • Chain environment variables across steps
  • Source configuration files
  • Build up shell context incrementally
  • Use functions and aliases defined earlier

Polyglot without kernel juggling:

```sh
# Install deps
npm install
```

```sh
#!/usr/bin/env python
# Validate config
import yaml
print(yaml.safe_load(open('config.yml')))
```

```sh
# Deploy
kubectl apply -f manifests/
```

In Jupyter, this requires %%bash magics, separate kernels, or subprocess calls. In Runme, shebang just works naturally.
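
For comparison, a hedged sketch of driving the same steps from a single Python kernel with subprocess calls (paths and manifests are placeholders):

```python
import subprocess
import yaml

# Install deps
subprocess.run(["npm", "install"], check=True)

# Validate config
with open("config.yml") as f:
    print(yaml.safe_load(f))

# Deploy
subprocess.run(["kubectl", "apply", "-f", "manifests/"], check=True)
```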


3. Frontend Architecture

Charlotte/Runme: React + Lit + WASM

Stack:

  • React 19: Higher-level UI components
  • Lit Web Components: Terminal rendering (console-view)
  • xterm.js: Terminal emulation
  • WASM: Client-side Markdown parsing (Go compiled to WASM)
  • Tailwind CSS: Styling

Architecture:

```
React Components (@runmedev/react-components)
    ↓
React Console (@runmedev/react-console)
    ↓
Web Components (@runmedev/renderers - Lit)
    ↓
xterm.js (terminal)
```

Pros:

  • Modern stack: React ecosystem, familiar to most frontend developers
  • Client-side parsing: WASM enables offline-first workflows
  • Component composition: Easy to embed in other React applications
  • Lighter weight: No complex widget system to learn

Cons:

  • Less mature: Fewer ready-made components for data visualization
  • Limited layout system: No built-in dock panels or split views
  • Terminal-centric: UI optimized for shell output, not rich media

JupyterLab: Lumino Widget System

Stack:

  • Lumino (Phosphor): Widget toolkit with layouts and events
  • React (optional): Via ReactWidget wrapper
  • CodeMirror: Code editing
  • RenderMime: Pluggable output renderers

Architecture:

```
JupyterLab Application Shell
    ↓
Lumino DockPanel (layout)
    ↓
NotebookPanel (widget)
    ↓
CellWidget → CodeMirror (input) + OutputArea (output)
```

Pros:

  • Desktop-like interface: Dock panels, split views, drag-and-drop
  • Rich layout system: Resize events, lifecycle hooks, advanced composition
  • Mature rendering: RenderMime handles diverse MIME types
  • Performance optimizations: Windowed rendering for large notebooks

Cons:

  • Steep learning curve: Lumino is complex and poorly documented
  • Imperative model: Different paradigm from React's declarative approach
  • Heavy framework: Significant bundle size and complexity
  • Limited React integration: ReactWidget is a wrapper, not native

Inherent Capabilities

| Capability | Runme Difficulty | JupyterLab Difficulty |
|---|---|---|
| Terminal output rendering | Easy | Medium |
| Rich data visualization | Medium | Easy |
| Dock panels/split views | Hard | Easy |
| Embedding in other apps | Easy | Hard |
| Custom output renderers | Medium | Easy (RenderMime) |
| Mobile-friendly UI | Medium | Hard |
| Real-time collaboration UI | Medium | Medium |
| Custom keyboard shortcuts | Medium | Easy |

4. Backend Architecture

Charlotte/Runme: Go + gRPC

Components:

  • Runme Server: Go binary exposing gRPC services
  • ParserService: Markdown ↔ Notebook conversion (Goldmark)
  • RunnerService: Command execution with session management
  • ProjectService: Task discovery across codebase

Communication:

Frontend → WebSocket → gRPC → Runme Go Server → Shell

Pros:

  • Performance: Go's concurrency handles many concurrent sessions
  • Type safety: Protocol Buffers provide strongly-typed APIs
  • Single binary: Easy deployment, no runtime dependencies
  • Efficient serialization: Binary protobuf faster than JSON

Cons:

  • Less flexible: Go is harder to extend than Python
  • Smaller ecosystem: Fewer libraries for scientific computing
  • Learning curve: gRPC/protobuf more complex than REST

JupyterLab: Python + Tornado + ZMQ

Components:

  • Jupyter Server: Python/Tornado web server
  • Kernel Manager: Spawns and manages kernel processes
  • Contents Manager: File system abstraction
  • Session Manager: Maps notebooks to kernels

Communication:

Frontend → REST/WebSocket → Jupyter Server → ZMQ → Kernel
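
To make the REST leg concrete, here is a minimal sketch against Jupyter Server's HTTP API (base URL and token are placeholders); execution itself then flows over a WebSocket tied to the returned kernel ID:

```python
import requests

BASE = "http://localhost:8888"                   # placeholder server
HEADERS = {"Authorization": "token YOUR_TOKEN"}  # placeholder token

# List running kernels
print(requests.get(f"{BASE}/api/kernels", headers=HEADERS).json())

# Start a new python3 kernel
kernel = requests.post(f"{BASE}/api/kernels", json={"name": "python3"}, headers=HEADERS).json()
print(kernel["id"])
```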

Pros:

  • Extensibility: Python is easy to extend and customize
  • Rich ecosystem: NumPy, Pandas, matplotlib available to kernels
  • Mature infrastructure: Proven at scale (JupyterHub, Binder)
  • Standard protocols: REST and WebSocket are universal

Cons:

  • Performance: Python GIL limits concurrency
  • Complex deployment: Multiple processes, ZMQ sockets
  • Resource intensive: Each kernel is a separate Python process

Deployment Comparison: Concrete Details

Runme deployment:

```bash
# That's it. Single binary, no dependencies.
./runme agent serve --port 9977
```

Jupyter deployment:

```bash
# Typical setup
python -m venv jupyter-env
source jupyter-env/bin/activate
pip install jupyterlab jupyter-server
pip install bash_kernel && python -m bash_kernel.install
jupyter lab --port 8888
```

What's actually running:

| Aspect | Runme | Jupyter |
|---|---|---|
| Processes | 1 (Go binary) | 1 server + N kernel processes |
| Memory baseline | ~20MB | ~100MB + ~50MB per kernel |
| Container image | ~50MB | ~500MB+ (Python + deps) |
| External deps | None | ZMQ C library, optionally Node.js |
| Config files | 1 YAML file | jupyter_config.py + kernel specs |
| Install command | Download binary | pip install + kernel installs |

For production multi-user setup:

| Runme | Jupyter |
|---|---|
| Run more instances behind a load balancer | Add JupyterHub (separate service) |
| | Add a database for user state |
| | Configure a spawner (Docker/K8s) |
| | Set up an authenticator |

The complexity difference is architectural: Runme's Go binary with goroutines vs Jupyter's Python server spawning separate kernel processes communicating over ZMQ.

Inherent Capabilities

| Capability | Runme Difficulty | JupyterLab Difficulty |
|---|---|---|
| High-concurrency execution | Easy | Hard |
| Scientific computing in kernel | Hard | Easy |
| Simple deployment | Easy | Medium |
| Custom content managers | Hard | Easy |
| Remote kernel execution | Medium | Easy |
| Container orchestration | Easy | Medium |
| Real-time streaming output | Easy | Medium |

5. Extension System

Charlotte/Runme: Component Composition

Model: Extensions are React/Lit components composed together.

Extension Points:

  • Custom output renderers via MIME types
  • React component wrapping and extension
  • gRPC service extensions
  • VS Code extension integration

Pros:

  • Simple mental model: Just compose React components
  • Web standards: Web Components are framework-agnostic
  • Easy to start: Lower barrier to entry

Cons:

  • Less structured: No formal plugin discovery/registration
  • Limited hooks: Fewer extension points than JupyterLab
  • Immature ecosystem: Few third-party extensions

JupyterLab: Token-Based Dependency Injection

Model: Plugins register and consume services via Lumino Tokens.

```typescript
// Provider
const plugin: JupyterFrontEndPlugin<INotebookTracker> = {
  id: 'notebook-tracker',
  provides: INotebookTracker,
  activate: (app) => new NotebookTracker()
};

// Consumer
const extension: JupyterFrontEndPlugin<void> = {
  id: 'my-extension',
  requires: [INotebookTracker],
  activate: (app, tracker) => { /* use tracker */ }
};
```

Pros:

  • Loose coupling: Extensions don't depend on each other directly
  • Automatic ordering: JupyterLab resolves activation order
  • Type safety: TypeScript interfaces on tokens
  • Rich ecosystem: Hundreds of community extensions
  • Server + frontend: Can extend both sides

Cons:

  • Complex: Dependency injection is a learning curve
  • Over-engineered: Simple extensions require lots of boilerplate
  • Fragile: Token changes can break dependent extensions

Inherent Capabilities

| Capability | Runme Difficulty | JupyterLab Difficulty |
|---|---|---|
| Simple UI extension | Easy | Medium |
| Complex multi-service extension | Hard | Easy |
| Server-side extension | Medium | Easy |
| Custom file handlers | Hard | Easy |
| Theme customization | Easy | Easy |
| Menu/toolbar extension | Medium | Easy |
| Third-party ecosystem | Hard (immature) | Easy (mature) |

6. Use Case Fit

Charlotte/Runme Excels At

  1. DevOps Runbooks: Operational procedures documented as executable scripts
  2. Onboarding Documentation: Setup guides that actually run
  3. CLI Workflows: Tasks involving shell commands and environment setup
  4. GitOps: Markdown-based workflows that version control cleanly
  5. Multi-language Scripts: Polyglot shell scripts with Python/Node/Ruby
  6. CI/CD Integration: Headless execution in pipelines
  7. Cloud Operations: AWS/GCP console integration, infrastructure tasks

JupyterLab Excels At

  1. Data Exploration: Interactive analysis with rich visualizations
  2. Scientific Research: Reproducible experiments with embedded results
  3. Machine Learning: Model training with inline plots and metrics
  4. Teaching: Interactive coding tutorials with visible outputs
  5. Report Generation: Notebooks as living documents with embedded results
  6. Collaborative Analysis: Shared notebooks with complete execution history
  7. Language-Specific Work: Deep Python/R/Julia integration

7. Key Architectural Trade-offs

Format vs Version Control

  • Runme: Optimizes for Git workflows at the cost of output persistence
  • JupyterLab: Optimizes for self-contained artifacts at the cost of diff-ability

Kernel Simplicity vs Language Power

  • Runme: Simple shell kernel enables multi-language but limits introspection
  • JupyterLab: Complex kernel protocol enables rich language features but adds overhead

Modern Stack vs Mature Ecosystem

  • Runme: React/Go/gRPC is modern but has smaller extension ecosystem
  • JupyterLab: Lumino/Python/ZMQ is complex but has hundreds of extensions

DevOps vs Data Science

  • Runme: Purpose-built for operational workflows
  • JupyterLab: Purpose-built for computational exploration

What Jupyter's Complexity Buys You

Jupyter's heavier architecture isn't accidental—it enables capabilities that Runme's simpler model cannot provide:

1. True Language Runtimes

IPython kernel provides:

  • %debug - Post-mortem debugging, step through exceptions
  • %timeit - Accurate micro-benchmarking
  • Tab completion on live objects (df.col<TAB> shows actual columns)
  • ? and ?? for docstrings and source inspection
  • Variable explorer showing actual in-memory state

Runme can run Python, but can't inspect Python objects between cells.
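
As a rough illustration, an object created in one Jupyter cell remains a live, inspectable value in later cells (the DataFrame here is a made-up example):

```python
import pandas as pd

# Cell 1: the object lives in kernel memory
df = pd.DataFrame({"price": [1.0, 2.5, 4.0], "qty": [3, 4, 5]})

# Cell 2: introspect the same in-memory object
#   df.<TAB>          -> completes on real columns and methods
#   df.head?          -> docstring;  df.head?? -> source
#   %timeit df.sum()  -> benchmark against the live DataFrame
df.info()
```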

2. Interactive Widgets

```python
import ipywidgets as widgets
from IPython.display import display

slider = widgets.IntSlider(value=50)
display(slider)
# Slider state lives in kernel memory and updates reactively
```

This requires a persistent kernel process—shell execution can't maintain widget state.

3. Kernel Ecosystem

100+ community kernels with deep language integration:

  • IRkernel (R with full tidyverse support)
  • IJulia (Julia with native performance)
  • Xeus-cling (C++ with JIT compilation)
  • SoS (polyglot workflows with data exchange between languages)

Each provides language-native introspection, not just "run this script."

4. Enterprise Features

  • JupyterHub: Multi-user server with authentication, spawners, resource limits
  • Enterprise Gateway: Run kernels on remote clusters (Kubernetes, YARN, Docker Swarm)
  • Binder: Reproducible environments from Git repos
  • nbgrader: Automated grading for educational use

These exist because Jupyter's architecture separates concerns cleanly.

5. Rich Output Persistence

Notebooks store outputs inline—a notebook is a complete record:

  • Plots render without re-execution
  • Share notebooks with results visible
  • Version outputs alongside code (debatable benefit for Git)

Bottom line: If you need to inspect a pandas DataFrame, debug Python interactively, or use Jupyter widgets, Jupyter's complexity is essential. Runme's shell-centric model deliberately trades these capabilities for simplicity and DevOps alignment.


8. Recommendations

Choose Runme When:

  • Documentation is the primary artifact
  • Workflows involve many shell commands
  • Git-based collaboration is critical
  • Lightweight deployment is needed
  • DevOps/infrastructure is the domain

Choose JupyterLab When:

  • Data exploration is the primary workflow
  • Rich visualizations are needed
  • Language-specific features (debugging, introspection) matter
  • Self-contained reproducible artifacts are required
  • Mature extension ecosystem is valuable

Consider Hybrid Approaches:

  • Use Runme for setup/deployment documentation alongside Jupyter for analysis
  • Convert between formats at different stages of the workflow (see the sketch after this list)
  • Embed Jupyter outputs in Markdown documentation
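
As one hedged example of the conversion route (file names are placeholders), nbconvert can export an executed notebook, outputs included, to Markdown that sits naturally beside Runme docs:

```python
from nbconvert import MarkdownExporter

# Equivalent CLI: jupyter nbconvert --to markdown analysis.ipynb
body, resources = MarkdownExporter().from_filename("analysis.ipynb")

with open("analysis.md", "w") as f:
    f.write(body)  # extracted image outputs, if any, are returned in `resources`
```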

9. Future Considerations

Convergence Opportunities

Both architectures could benefit from learning from each other:

  1. JupyterLab could adopt: Better Markdown support, simpler extension model, lighter kernels
  2. Runme could adopt: Richer output persistence, kernel introspection, RenderMime-style rendering

Emerging Alternatives

  • Observable: Web-native reactive notebooks
  • Marimo: Python notebooks stored as scripts
  • Quarto: Markdown-based publishing with Jupyter execution
  • VS Code Notebooks: Native notebook API with multiple kernel types

Appendix: Technical Deep Dive

A. Runme gRPC Services

```protobuf
service ParserService {
  rpc Deserialize(DeserializeRequest) returns (Notebook);
  rpc Serialize(SerializeRequest) returns (Markdown);
}

service RunnerService {
  rpc Execute(ExecuteRequest) returns (stream ExecuteResponse);
  rpc CreateSession(CreateSessionRequest) returns (Session);
}

service ProjectService {
  rpc DiscoverTasks(DiscoverRequest) returns (TaskList);
}
```

B. JupyterLab Token System

```typescript
// Token definition
const INotebookTracker = new Token<INotebookTracker>('notebook-tracker');

// Service interface
interface INotebookTracker {
  currentWidget: NotebookPanel | null;
  widgetAdded: ISignal<this, NotebookPanel>;
}

// Plugin activation
activate: (app: JupyterFrontEnd, tracker: INotebookTracker) => {
  tracker.widgetAdded.connect((sender, panel) => {
    console.log('Notebook opened:', panel.title.label);
  });
}
```

C. Jupyter Messaging Protocol

```
execute_request → shell channel
  ↓
status: busy → iopub channel
  ↓
stream (stdout/stderr) → iopub channel
  ↓
execute_result/display_data → iopub channel
  ↓
status: idle → iopub channel
  ↓
execute_reply → shell channel
```
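
A minimal sketch of driving this flow from Python with jupyter_client (assuming the standard python3 kernel spec is installed):

```python
from jupyter_client.manager import start_new_kernel

km, kc = start_new_kernel(kernel_name="python3")   # spawns the kernel process
try:
    msg_id = kc.execute("print('hello')")          # execute_request on the shell channel
    while True:
        msg = kc.get_iopub_msg(timeout=10)         # busy / stream / idle messages
        if msg["parent_header"].get("msg_id") != msg_id:
            continue
        print(msg["msg_type"], msg["content"])
        if msg["msg_type"] == "status" and msg["content"]["execution_state"] == "idle":
            break
    reply = kc.get_shell_msg(timeout=10)           # execute_reply on the shell channel
    print(reply["content"]["status"])
finally:
    kc.stop_channels()
    km.shutdown_kernel()
```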

Generated from comprehensive architecture research comparing Charlotte/Runme and JupyterLab codebases and documentation.
