kalomaze / microgpt_y2k6.py (forked from karpathy/microgpt.py)
"""
The most atomic way to train and inference a GPT LLM in pure, dependency-free Python.
Differences from GPT-2 are minor: rmsnorm instead of layer norm, no biases, squared ReLU instead of GeLU nonlinearity.
The contents of this file are everything algorithmically needed to train a GPT. Everything else is just efficiency.
Art project by @karpathy. Ported to Python 2.5 by Claude Opus 4.6 because why not.
"""
from __future__ import division        # true division, matching Python 3 semantics
from __future__ import with_statement  # enables `with` blocks on Python 2.5
import os # for os.path.exists
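
# A minimal sketch (not part of the original file) of the two GPT-2 tweaks the
# docstring names, written out in pure dependency-free Python. The helper names
# _rmsnorm_sketch and _sqrelu_sketch are hypothetical and chosen to avoid
# clashing with the real definitions used for training further down this file.
import math

def _rmsnorm_sketch(xs, eps=1e-5):
    # rmsnorm rescales by the root-mean-square of the vector. Unlike layer
    # norm there is no mean subtraction, and (per the docstring) no bias.
    ms = sum(x * x for x in xs) / len(xs)
    scale = 1.0 / math.sqrt(ms + eps)
    return [x * scale for x in xs]

def _sqrelu_sketch(x):
    # squared ReLU: max(0, x)**2, a simpler stand-in for the GeLU nonlinearity.
    return max(0.0, x) ** 2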