John johnandersen777

@karpathy
karpathy / microgpt.py
Last active February 13, 2026 14:13
microgpt
"""
The most atomic way to train and inference a GPT in pure, dependency-free Python.
This file is the complete algorithm.
Everything else is just efficiency.
@karpathy
"""
import os # os.path.exists
import math # math.log, math.exp
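The preview cuts off after the imports. As a taste of what "pure, dependency-free Python" means in practice, here is a hedged sketch (my illustration, not code from microgpt itself) of two primitives any such GPT needs, written with only the `math` module:

```python
import math

def softmax(logits):
    # Subtract the max logit for numerical stability before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(logits, target_index):
    # Negative log-probability assigned to the correct class.
    probs = softmax(logits)
    return -math.log(probs[target_index])

# Two equal logits give probability 0.5 each, so the loss is log(2).
loss = cross_entropy([0.0, 0.0], 0)
```

Everything else in a full training loop (attention, backprop, the optimizer) composes out of loops and arithmetic like this; as the docstring says, the rest is efficiency.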
@zicklag
zicklag / exampleBrowserScript.ts
Last active September 9, 2025 16:58
Simple helper to make it easy to communicate with typed interfaces over browser MessagePorts
import MySharedWorker from './mySharedWorker.ts?sharedworker'; // using Vite shared worker import
import { messagePortInterface } from './messagePortInterface';
//
// We need to define the interfaces that we use on both sides of the message port.
//
// This is the interface we use for the frontend, i.e. when the worker wants to call
// a function remotely against the UI thread.
export type MainThreadInterface = {
  // (remainder of this gist preview truncated in the listing)
};
GET /beta/deviceLocalCredentials/[DEVICE-ID]?$select=credentials HTTP/1.1
ocp-client-version: 1.0
client-request-id: 96cbfa59-dbfc-4a92-b261-7f77bd8f4b9b
ocp-client-name: Get-LapsAADPassword Windows LAPS Cmdlet
User-Agent: Mozilla/5.0 (Windows NT 10.0; Microsoft Windows 10.0.22621; en-US) PowerShell/5.1.22621.963 Invoke-MgGraphRequest
SdkVersion: graph-powershell/1.26.0, Graph-dotnet-1.25.1
FeatureFlag: 00000047
Cache-Control: no-store, no-cache
Authorization: Bearer [AAD-JWT-HERE]
Accept-Encoding: gzip
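The raw request above can be reproduced programmatically. This is a hedged sketch using only the standard library; the `graph.microsoft.com` host is my assumption (the capture shows only the path), and the `[DEVICE-ID]` and `[AAD-JWT-HERE]` placeholders are left exactly as captured:

```python
import urllib.request

# Placeholders from the capture above; substitute real values before sending.
device_id = "[DEVICE-ID]"
token = "[AAD-JWT-HERE]"

# Host is an assumption: the capture shows only the request path.
url = (
    "https://graph.microsoft.com/beta/deviceLocalCredentials/"
    f"{device_id}?$select=credentials"
)

req = urllib.request.Request(url, method="GET")
req.add_header("Authorization", f"Bearer {token}")
req.add_header("Accept-Encoding", "gzip")
req.add_header("Cache-Control", "no-store, no-cache")

# response = urllib.request.urlopen(req)  # uncomment with real credentials
```

The `ocp-*`, `client-request-id`, and `SdkVersion` headers in the capture are added by the Get-LapsAADPassword cmdlet's Graph SDK stack; only the `Authorization` bearer token is required to authorize the call.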
@shawwn
shawwn / example.sh
Created March 6, 2023 05:17
How I run 65B using my fork of llama at https://github.com/shawwn/llama
mp=1; size=7B; # to run 7B
mp=8; size=65B; # to run 65B
for seed in $(randint 1000000)  # randint: helper from the author's environment, not a standard command
do
export TARGET_FOLDER=~/ml/data/llama/LLaMA
time python3 -m torch.distributed.run --nproc_per_node $mp example.py --ckpt_dir $TARGET_FOLDER/$size --tokenizer_path $TARGET_FOLDER/tokenizer.model --seed $seed --max_seq_len 2048 --max_gen_len 2048 --count 0 | tee -a ${size}_startrek.txt
done
{
  "method": "ProtocolsConfigure",
  "protocol": "https://identity.foundation/schemas/wallet",
  "protocolVersion": "0.0.1",
  "protocolDefinition": {
    "labels": {
      "persona": {
        "schema": "https://schema.org/Person",
        "dataFormat": [
          "application/json"
        ]
      }
    }
  }
}

{
  "interface": "Protocols",
  "method": "Configure",
  "definition": {
    "protocol": "https://chat.protocol/",
    "types": {
      "thread": {
        "schema": "https://chat.protocol/schemas/thread",
        "dataFormat": [
          "application/json"
        ]
      }
    }
  }
}

{
  "method": "ProtocolsConfigure",
  "protocol": "https://decentralized-music.org/protocol",
  "protocolVersion": "1.0.0",
  "protocolDefinition": {
    "labels": {
      "playlist": {
        "schema": "https://decentralized-music.org/protocol/playlist",
        "dataFormat": [
          "application/json"
        ]
      }
    }
  }
}
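The fragments above share a common ProtocolsConfigure shape. As a hedged illustration, here is a quick structural check in Python; the required-key list is my assumption from the examples themselves, not a citation of the DWN specification:

```python
import json

def looks_like_protocols_configure(raw: str) -> bool:
    # Assumed required keys, inferred from the example messages above.
    msg = json.loads(raw)
    if msg.get("method") != "ProtocolsConfigure":
        return False
    return all(
        k in msg for k in ("protocol", "protocolVersion", "protocolDefinition")
    )

example = json.dumps({
    "method": "ProtocolsConfigure",
    "protocol": "https://decentralized-music.org/protocol",
    "protocolVersion": "1.0.0",
    "protocolDefinition": {"labels": {}},
})
```

A check like this catches malformed messages before they are handed to a Decentralized Web Node; real validation would follow the protocol's own schema.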
@hwayne
hwayne / Filters.tla
Created October 28, 2022 17:13
Email Filters
---- MODULE Filters ----
EXTENDS TLC, Integers
VARIABLE push_msgs, emails, i, filtered, pushed
vars == <<push_msgs, emails, i, filtered, pushed>>
set ++ x == set \union {x}
set -- x == set \ {x}
TypeInv ==
/\ emails \subseteq (1..3)
@eyeseast
eyeseast / python.md
Last active January 26, 2026 17:05
How to set up Python in 2022

I have an updated version of this on my blog here: https://chrisamico.com/blog/2023-01-14/python-setup/.

Python

This is my recommended Python setup, as of Fall 2022. The Python landscape can be a confusing mess of overlapping tools that sometimes don't work well together. This is an effort to standardize our approach and environments.

Tools and helpful links:

  • Python docs: https://docs.python.org/3/
  • Python Standard Library: start here when you're trying to solve a specific problem