This document provides a comprehensive comparison of the Charlotte/Runme notebook architecture versus JupyterLab, analyzing their fundamental design decisions, trade-offs, and the inherent capabilities that are easier or more difficult to implement in each system.
| Aspect | Charlotte/Runme | JupyterLab |
|---|---|---|
| Primary Use Case | DevOps runbooks, operational workflows | Data science, research, exploration |
| File Format | Plain Markdown | JSON (.ipynb) |
| Kernel Model | Persistent shell session | Isolated language-specific kernels |
| Frontend Framework | React + Lit Web Components | Lumino (Phosphor) widgets |
| Backend Language | Go | Python |
| Communication | gRPC + Protocol Buffers | REST + WebSocket + ZMQ |
| Extension Model | Component composition | Token-based dependency injection |
### Charlotte/Runme: Plain Markdown Document Model
**Approach**: Notebooks are standard Markdown files (`.md`, `.mdx`, `.mdr`) with fenced code blocks.
````markdown
# Setup Guide

Install dependencies:

```bash
npm install
```

Run the server:

```bash
npm run dev
```
````
**Pros**:
- **Version control friendly**: Clean diffs, easy code review, works naturally with Git
- **Universal readability**: Any text editor or Markdown viewer can display the content
- **Documentation IS code**: Single source of truth - docs are the executable notebook
- **No output bloat**: Source files remain small; outputs stored separately or not at all
- **Editor agnostic**: Edit in VS Code, vim, GitHub web UI, or any tool
**Cons**:
- **Output persistence challenge**: Difficult to store rich outputs (charts, images) inline
- **Metadata limitations**: Cell configuration requires non-standard annotations
- **Format ambiguity**: Code blocks in documentation could be misinterpreted as executable
- **No built-in execution state**: Harder to show "this was run and produced X"
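Because the source is plain Markdown, a notebook parser only needs to recognize fenced code blocks. A minimal sketch in Python (Runme's real parser is Goldmark, in Go; the regex version below is illustrative only):

```python
import re

# Minimal sketch of extracting executable cells from a Markdown runbook.
# The fence delimiter is built programmatically to keep this example readable.
TICKS = "`" * 3
FENCE = re.compile(TICKS + r"(\w+)\n(.*?)" + TICKS, re.DOTALL)

doc = f"""# Setup Guide
Install dependencies:
{TICKS}bash
npm install
{TICKS}
"""

cells = [{"lang": m.group(1), "source": m.group(2)}
         for m in FENCE.finditer(doc)]
print(cells)  # one bash cell containing "npm install"
```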
### JupyterLab: JSON Document Model
**Approach**: Notebooks are JSON files (`.ipynb`) containing cells, metadata, and outputs.
```json
{
"cells": [
{
"cell_type": "code",
"source": ["print('hello')"],
"outputs": [{"output_type": "stream", "text": ["hello\n"]}],
"execution_count": 1
}
],
"metadata": {...},
"nbformat": 4,
"nbformat_minor": 5
}
```

**Pros**:
- **Self-contained documents**: Outputs stored with code; notebooks are complete records
- **Rich output support**: Native storage for images, HTML, interactive widgets
- **Execution history**: Clear record of what was run and in what order
- **Standardized format**: The `nbformat` library provides validation and migration
- **Mature tooling**: nbconvert, nbviewer, papermill, and an extensive ecosystem

**Cons**:
- **Version control nightmare**: JSON diffs are massive and unreadable
- **Output bloat**: Notebooks grow large with embedded images and outputs
- **Merge conflicts**: Notebook merge conflicts are nearly impossible to resolve by hand
- **Special tooling required**: Jupyter-aware tools are needed to view or edit
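The verbosity is easy to see by round-tripping a minimal cell through JSON; a sketch using only the standard library (real notebooks are created and validated with the `nbformat` package, which is not used here):

```python
import json

# A minimal ipynb-style document built as a plain dict (illustrative only).
nb = {
    "cells": [{
        "cell_type": "code",
        "source": ["print('hello')"],
        "outputs": [{"output_type": "stream", "name": "stdout",
                     "text": ["hello\n"]}],
        "execution_count": 1,
        "metadata": {},
    }],
    "metadata": {},
    "nbformat": 4,
    "nbformat_minor": 5,
}

text = json.dumps(nb, indent=1)
# Round-tripping is lossless, but a one-line program costs roughly twenty
# lines of JSON, which is why .ipynb diffs are so noisy in version control.
assert json.loads(text) == nb
```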
Runme makes documentation-first workflows easy but struggles with output persistence. JupyterLab makes reproducible artifacts easy but struggles with collaboration and version control.
**Model**: A single shell session maintains state across all cells, similar to a terminal.

```bash
# Cell 1
export API_KEY="secret"

# Cell 2
echo $API_KEY  # Works! Environment persists

# Cell 3
curl -H "Authorization: $API_KEY" $URL  # Variables available
```
Characteristics:
- Environment variables persist across cells
- Previous cell output available via `$__variable`
- Named cells export as environment variables
- Shell state (functions, aliases) maintained
- Polyglot via shebang (`#!/usr/bin/env python`)
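The persistent-session model can be approximated with a single long-lived shell process; a sketch in Python (not Runme's actual Go implementation):

```python
import subprocess

# One long-lived bash process receives every "cell", so exported variables,
# functions, and aliases persist between cells -- Runme's kernel model.
shell = subprocess.Popen(["bash"], stdin=subprocess.PIPE,
                         stdout=subprocess.PIPE, text=True)
cells = [
    'export API_KEY="secret"',   # cell 1: set state
    'echo "key=$API_KEY"',       # cell 2: state is still visible
]
out, _ = shell.communicate("\n".join(cells) + "\n")
print(out)  # -> key=secret
```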
Pros:
- Natural DevOps workflow: Matches how engineers actually work in terminals
- Variable sharing is trivial: Just use environment variables
- Multi-language support built-in: Any language with a CLI interpreter works
- Lightweight: No heavy kernel process per language
Cons:
- No true language kernels: Python cells don't maintain Python objects between runs
- Limited introspection: No access to in-memory data structures
- Restart means restart everything: Can't selectively reset state
- Shell-centric: Data science workflows (DataFrames, plots) are awkward
**Model**: Separate kernel processes for each language, communicating via the Jupyter messaging protocol.
Browser → WebSocket → Jupyter Server → ZMQ → Kernel Process (Python/R/Julia)
Characteristics:
- Full language runtime in kernel process
- Objects persist in kernel memory
- Rich introspection (tab completion, object inspection)
- Standardized messaging protocol across languages
- Can interrupt, restart, or switch kernels
Pros:
- True language integration: Access to full language features and state
- Rich introspection: Tab completion, docstrings, variable explorer
- Interactive computing: REPL-style exploration with persistent objects
- Mature kernel ecosystem: IPython, IRkernel, IJulia, and 100+ community kernels
- Kernel management: Restart, interrupt, switch without losing document
Cons:
- Heavyweight: Each kernel is a separate process with memory overhead
- Language-specific: Need different kernels for different languages
- Complex communication: ZMQ + WebSocket + REST adds latency and failure modes
- Single language per notebook: Polyglot requires workarounds (cell magics)
| Capability | Runme Difficulty | JupyterLab Difficulty |
|---|---|---|
| Cross-cell variable sharing (shell) | Easy | Medium (magics) |
| Cross-cell object sharing (Python) | Hard | Easy |
| Multi-language in one notebook | Easy (shebang) | Hard (separate kernels) |
| Rich introspection/completion | Hard | Easy |
| Lightweight execution | Easy | Hard |
| Interactive debugging | Hard | Easy (via ipdb) |
| Long-running background tasks | Easy | Medium |
| Distributed kernel execution | Hard | Easy (Enterprise Gateway) |
A common question: Jupyter has a bash kernel and `!` shell commands, so why is Runme better for DevOps?

Jupyter's `!` commands run as isolated subprocesses:
```python
# Cell 1
!export API_KEY="secret"

# Cell 2
!echo $API_KEY  # Empty! The subprocess already exited.
```

Each `!` command spawns a new shell, executes, and exits, so environment variables don't persist.
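The isolation is reproducible from plain Python: each command below runs in its own shell, mirroring what `!` does (illustrative sketch):

```python
import subprocess

# Each "cell" spawns a fresh bash process, as Jupyter's `!` syntax does,
# so the export in cell 1 is gone by the time cell 2 runs.
subprocess.run(["bash", "-c", 'export API_KEY="secret"'])        # cell 1
r = subprocess.run(["bash", "-c", 'echo "key=$API_KEY"'],        # cell 2
                   capture_output=True, text=True)
print(r.stdout)  # -> "key=" -- the variable did not survive
```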
Jupyter's bash kernel is better but still differs:
The bash kernel does maintain some state, but it's designed around Jupyter's cell-isolation model rather than true terminal behavior.
Runme's shell session works like a real terminal:
```bash
# Cell 1
export API_KEY="secret"
source ~/.bashrc

# Cell 2
echo $API_KEY  # Works! Same shell session.
my_alias       # Works! Aliases from .bashrc available.

# Cell 3
curl -H "Auth: $API_KEY" $URL  # Still available
```

This matters for DevOps because real workflows:
- Chain environment variables across steps
- Source configuration files
- Build up shell context incrementally
- Use functions and aliases defined earlier
Polyglot without kernel juggling:
```bash
# Install deps
npm install
```

```python
#!/usr/bin/env python
# Validate config
import yaml
print(yaml.safe_load(open('config.yml')))
```

```bash
# Deploy
kubectl apply -f manifests/
```

In Jupyter, this requires `%%bash` magics, separate kernels, or subprocess calls. In Runme, the shebang just works.
Stack:
- React 19: Higher-level UI components
- Lit Web Components: Terminal rendering (`console-view`)
- xterm.js: Terminal emulation
- WASM: Client-side Markdown parsing (Go compiled to WASM)
- Tailwind CSS: Styling
Architecture:
React Components (@runmedev/react-components)
↓
React Console (@runmedev/react-console)
↓
Web Components (@runmedev/renderers - Lit)
↓
xterm.js (terminal)
Pros:
- Modern stack: React ecosystem, familiar to most frontend developers
- Client-side parsing: WASM enables offline-first workflows
- Component composition: Easy to embed in other React applications
- Lighter weight: No complex widget system to learn
Cons:
- Less mature: Fewer ready-made components for data visualization
- Limited layout system: No built-in dock panels or split views
- Terminal-centric: UI optimized for shell output, not rich media
Stack:
- Lumino (Phosphor): Widget toolkit with layouts and events
- React (optional): Via `ReactWidget` wrapper
- CodeMirror: Code editing
- RenderMime: Pluggable output renderers
Architecture:
JupyterLab Application Shell
↓
Lumino DockPanel (layout)
↓
NotebookPanel (widget)
↓
CellWidget → CodeMirror (input) + OutputArea (output)
Pros:
- Desktop-like interface: Dock panels, split views, drag-and-drop
- Rich layout system: Resize events, lifecycle hooks, advanced composition
- Mature rendering: RenderMime handles diverse MIME types
- Performance optimizations: Windowed rendering for large notebooks
Cons:
- Steep learning curve: Lumino is complex and poorly documented
- Imperative model: Different paradigm from React's declarative approach
- Heavy framework: Significant bundle size and complexity
- Limited React integration: ReactWidget is a wrapper, not native
| Capability | Runme Difficulty | JupyterLab Difficulty |
|---|---|---|
| Terminal output rendering | Easy | Medium |
| Rich data visualization | Medium | Easy |
| Dock panels/split views | Hard | Easy |
| Embedding in other apps | Easy | Hard |
| Custom output renderers | Medium | Easy (RenderMime) |
| Mobile-friendly UI | Medium | Hard |
| Real-time collaboration UI | Medium | Medium |
| Custom keyboard shortcuts | Medium | Easy |
Components:
- Runme Server: Go binary exposing gRPC services
- ParserService: Markdown ↔ Notebook conversion (Goldmark)
- RunnerService: Command execution with session management
- ProjectService: Task discovery across codebase
Communication:
Frontend → WebSocket → gRPC → Runme Go Server → Shell
Pros:
- Performance: Go's concurrency handles many concurrent sessions
- Type safety: Protocol Buffers provide strongly-typed APIs
- Single binary: Easy deployment, no runtime dependencies
- Efficient serialization: Binary protobuf faster than JSON
Cons:
- Less flexible: Go is harder to extend than Python
- Smaller ecosystem: Fewer libraries for scientific computing
- Learning curve: gRPC/protobuf more complex than REST
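The streaming shape of `RunnerService.Execute` can be sketched without gRPC; the generator below yields output chunks as they arrive, the way a server-streaming RPC would (names and message shapes are illustrative, not the real API):

```python
import subprocess

def execute(command):
    """Yield output chunks as they arrive, like a server-streaming RPC."""
    proc = subprocess.Popen(["bash", "-c", command],
                            stdout=subprocess.PIPE, text=True)
    for line in proc.stdout:
        yield {"stdout": line}            # one response message per chunk
    proc.wait()
    yield {"exit_code": proc.returncode}  # final message carries the exit code

chunks = list(execute("echo one; echo two"))
print(chunks)  # two stdout chunks, then {"exit_code": 0}
```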
Components:
- Jupyter Server: Python/Tornado web server
- Kernel Manager: Spawns and manages kernel processes
- Contents Manager: File system abstraction
- Session Manager: Maps notebooks to kernels
Communication:
Frontend → REST/WebSocket → Jupyter Server → ZMQ → Kernel
Pros:
- Extensibility: Python is easy to extend and customize
- Rich ecosystem: NumPy, Pandas, matplotlib available to kernels
- Mature infrastructure: Proven at scale (JupyterHub, Binder)
- Standard protocols: REST and WebSocket are universal
Cons:
- Performance: Python GIL limits concurrency
- Complex deployment: Multiple processes, ZMQ sockets
- Resource intensive: Each kernel is a separate Python process
Runme deployment:

```bash
# That's it. Single binary, no dependencies.
./runme agent serve --port 9977
```

Jupyter deployment:

```bash
# Typical setup
python -m venv jupyter-env
source jupyter-env/bin/activate
pip install jupyterlab jupyter-server
pip install bash_kernel && python -m bash_kernel.install
jupyter lab --port 8888
```

What's actually running:
| Aspect | Runme | Jupyter |
|---|---|---|
| Processes | 1 (Go binary) | 1 server + N kernel processes |
| Memory baseline | ~20MB | ~100MB + ~50MB per kernel |
| Container image | ~50MB | ~500MB+ (Python + deps) |
| External deps | None | ZMQ C library, optionally Node.js |
| Config files | 1 YAML file | jupyter_config.py + kernel specs |
| Install command | Download binary | pip install + kernel installs |
For production multi-user setup:
| Runme | Jupyter |
|---|---|
| Run more instances behind a load balancer | Add JupyterHub (a separate service) |
| | Add a database for user state |
| | Configure a spawner (Docker/K8s) |
| | Set up an authenticator |
The complexity difference is architectural: Runme's Go binary with goroutines vs Jupyter's Python server spawning separate kernel processes communicating over ZMQ.
| Capability | Runme Difficulty | JupyterLab Difficulty |
|---|---|---|
| High-concurrency execution | Easy | Hard |
| Scientific computing in kernel | Hard | Easy |
| Simple deployment | Easy | Medium |
| Custom content managers | Hard | Easy |
| Remote kernel execution | Medium | Easy |
| Container orchestration | Easy | Medium |
| Real-time streaming output | Easy | Medium |
**Model**: Extensions are React/Lit components composed together.
Extension Points:
- Custom output renderers via MIME types
- React component wrapping and extension
- gRPC service extensions
- VS Code extension integration
Pros:
- Simple mental model: Just compose React components
- Web standards: Web Components are framework-agnostic
- Easy to start: Lower barrier to entry
Cons:
- Less structured: No formal plugin discovery/registration
- Limited hooks: Fewer extension points than JupyterLab
- Immature ecosystem: Few third-party extensions
**Model**: Plugins register and consume services via Lumino Tokens.

```typescript
// Provider
const plugin: JupyterFrontEndPlugin<INotebookTracker> = {
  id: 'notebook-tracker',
  provides: INotebookTracker,
  activate: (app) => new NotebookTracker()
};

// Consumer
const extension: JupyterFrontEndPlugin<void> = {
  id: 'my-extension',
  requires: [INotebookTracker],
  activate: (app, tracker) => { /* use tracker */ }
};
```

Pros:
- Loose coupling: Extensions don't depend on each other directly
- Automatic ordering: JupyterLab resolves activation order
- Type safety: TypeScript interfaces on tokens
- Rich ecosystem: Hundreds of community extensions
- Server + frontend: Can extend both sides
Cons:
- Complex: Dependency injection is a learning curve
- Over-engineered: Simple extensions require lots of boilerplate
- Fragile: Token changes can break dependent extensions
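The token pattern is small enough to sketch in a few lines of Python (a toy resolver, not Lumino's actual implementation):

```python
# Toy token-based dependency injection: plugins declare what they provide
# and require, and the resolver activates them in dependency order.
class Token:
    def __init__(self, name):
        self.name = name

INotebookTracker = Token("notebook-tracker")

plugins = [
    {"id": "consumer", "requires": [INotebookTracker],
     "activate": lambda tracker: f"consumer saw {tracker!r}"},
    {"id": "provider", "provides": INotebookTracker,
     "activate": lambda: "tracker-service"},
]

services, order = {}, []
pending = list(plugins)
while pending:  # activate any plugin whose requirements are all satisfied
    for p in list(pending):
        deps = p.get("requires", [])
        if all(d in services for d in deps):
            result = p["activate"](*(services[d] for d in deps))
            if "provides" in p:
                services[p["provides"]] = result
            order.append(p["id"])
            pending.remove(p)

print(order)  # -> ['provider', 'consumer'], despite declaration order
```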
| Capability | Runme Difficulty | JupyterLab Difficulty |
|---|---|---|
| Simple UI extension | Easy | Medium |
| Complex multi-service extension | Hard | Easy |
| Server-side extension | Medium | Easy |
| Custom file handlers | Hard | Easy |
| Theme customization | Easy | Easy |
| Menu/toolbar extension | Medium | Easy |
| Third-party ecosystem | Hard (immature) | Easy (mature) |
Runme excels at:
- DevOps Runbooks: Operational procedures documented as executable scripts
- Onboarding Documentation: Setup guides that actually run
- CLI Workflows: Tasks involving shell commands and environment setup
- GitOps: Markdown-based workflows that version control cleanly
- Multi-language Scripts: Polyglot shell scripts with Python/Node/Ruby
- CI/CD Integration: Headless execution in pipelines
- Cloud Operations: AWS/GCP console integration, infrastructure tasks
JupyterLab excels at:
- Data Exploration: Interactive analysis with rich visualizations
- Scientific Research: Reproducible experiments with embedded results
- Machine Learning: Model training with inline plots and metrics
- Teaching: Interactive coding tutorials with visible outputs
- Report Generation: Notebooks as living documents with embedded results
- Collaborative Analysis: Shared notebooks with complete execution history
- Language-Specific Work: Deep Python/R/Julia integration
File format:
- Runme: Optimizes for Git workflows at the cost of output persistence
- JupyterLab: Optimizes for self-contained artifacts at the cost of diff-ability

Kernel model:
- Runme: Simple shell kernel enables multi-language but limits introspection
- JupyterLab: Complex kernel protocol enables rich language features but adds overhead

Technology stack:
- Runme: React/Go/gRPC is modern but has a smaller extension ecosystem
- JupyterLab: Lumino/Python/ZMQ is complex but has hundreds of extensions

Design philosophy:
- Runme: Purpose-built for operational workflows
- JupyterLab: Purpose-built for computational exploration
Jupyter's heavier architecture isn't accidental—it enables capabilities that Runme's simpler model cannot provide:
1. True Language Runtimes
IPython kernel provides:
- `%debug` - Post-mortem debugging, step through exceptions
- `%timeit` - Accurate micro-benchmarking
- Tab completion on live objects (`df.col<TAB>` shows actual columns)
- `?` and `??` for docstrings and source inspection
- Variable explorer showing actual in-memory state
Runme can run Python, but can't inspect Python objects between cells.
2. Interactive Widgets
```python
import ipywidgets as widgets
slider = widgets.IntSlider(value=50)
display(slider)
# Slider state lives in kernel memory, updates reactively
```

This requires a persistent kernel process—shell execution can't maintain widget state.
3. Kernel Ecosystem
100+ community kernels with deep language integration:
- IRkernel (R with full tidyverse support)
- IJulia (Julia with native performance)
- Xeus-cling (C++ with JIT compilation)
- SoS (polyglot workflows with data exchange between languages)
Each provides language-native introspection, not just "run this script."
4. Enterprise Features
- JupyterHub: Multi-user server with authentication, spawners, resource limits
- Enterprise Gateway: Run kernels on remote clusters (Kubernetes, YARN, Docker Swarm)
- Binder: Reproducible environments from Git repos
- nbgrader: Automated grading for educational use
These exist because Jupyter's architecture separates concerns cleanly.
5. Rich Output Persistence
Notebooks store outputs inline—a notebook is a complete record:
- Plots render without re-execution
- Share notebooks with results visible
- Version outputs alongside code (debatable benefit for Git)
Bottom line: If you need to inspect a pandas DataFrame, debug Python interactively, or use Jupyter widgets, Jupyter's complexity is essential. Runme's shell-centric model deliberately trades these capabilities for simplicity and DevOps alignment.
Choose Runme when:
- Documentation is the primary artifact
- Workflows involve many shell commands
- Git-based collaboration is critical
- Lightweight deployment is needed
- DevOps/infrastructure is the domain
Choose JupyterLab when:
- Data exploration is the primary workflow
- Rich visualizations are needed
- Language-specific features (debugging, introspection) matter
- Self-contained reproducible artifacts are required
- Mature extension ecosystem is valuable
Consider a hybrid approach:
- Use Runme for setup/deployment documentation alongside Jupyter for analysis
- Convert between formats for different stages of workflow
- Embed Jupyter outputs in Markdown documentation
Both architectures could benefit from learning from each other:
- JupyterLab could adopt: Better Markdown support, simpler extension model, lighter kernels
- Runme could adopt: Richer output persistence, kernel introspection, RenderMime-style rendering
Other systems exploring this design space:
- Observable: Web-native reactive notebooks
- Marimo: Python notebooks stored as scripts
- Quarto: Markdown-based publishing with Jupyter execution
- VS Code Notebooks: Native notebook API with multiple kernel types
Runme gRPC service definitions:

```protobuf
service ParserService {
  rpc Deserialize(DeserializeRequest) returns (Notebook);
  rpc Serialize(SerializeRequest) returns (Markdown);
}

service RunnerService {
  rpc Execute(ExecuteRequest) returns (stream ExecuteResponse);
  rpc CreateSession(CreateSessionRequest) returns (Session);
}

service ProjectService {
  rpc DiscoverTasks(DiscoverRequest) returns (TaskList);
}
```

JupyterLab token and plugin API:

```typescript
// Token definition
const INotebookTracker = new Token<INotebookTracker>('notebook-tracker');

// Service interface
interface INotebookTracker {
  currentWidget: NotebookPanel | null;
  widgetAdded: ISignal<this, NotebookPanel>;
}

// Plugin activation
activate: (app: JupyterFrontEnd, tracker: INotebookTracker) => {
  tracker.widgetAdded.connect((sender, panel) => {
    console.log('Notebook opened:', panel.title.label);
  });
}
```

Jupyter kernel message flow for one execution:

execute_request → shell channel
↓
status: busy → iopub channel
↓
stream (stdout/stderr) → iopub channel
↓
execute_result/display_data → iopub channel
↓
status: idle → iopub channel
↓
execute_reply → shell channel
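The lifecycle above can be modeled as plain data; a sketch of the busy/idle bracket that surrounds every execution (message fields abbreviated from the real protocol):

```python
# Simulate the iopub/shell message sequence for one execute_request:
# a busy/idle status bracket on iopub around the output messages,
# followed by an execute_reply on the shell channel.
def run_cell(stdout_text):
    yield ("iopub", {"msg_type": "status", "execution_state": "busy"})
    yield ("iopub", {"msg_type": "stream", "name": "stdout",
                     "text": stdout_text})
    yield ("iopub", {"msg_type": "status", "execution_state": "idle"})
    yield ("shell", {"msg_type": "execute_reply", "status": "ok"})

msgs = list(run_cell("hello\n"))
assert msgs[0][1]["execution_state"] == "busy"
assert msgs[-1][0] == "shell"
```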
Generated from comprehensive architecture research comparing Charlotte/Runme and JupyterLab codebases and documentation.