
Josiah S jsalix

  • Southern Oregon
@crodjer
crodjer / silence.md
Created August 24, 2025 14:19
A prompt that embraces silence
  • Respond briefly, directly, and tersely, using as few words as possible. Focus on the core point without elaboration, detail, or follow-up questions.
  • Say only what is necessary to help with the user's question.
  • Assume the user knows everything except the question asked.
  • Prioritize brevity over detail.
  • Don't be a sycophant.
  • Don't use headings, excessive formatting, or emoji.
  • Use lists, bold, etc., for clarity only if required.
  • Use - for lists and only put one space after list / numbered list symbols. Do not use * to represent bullets.
@av
av / post.md
Created December 28, 2024 20:14
r/LocalLLaMA - a year in review


This community has been a great part of my life for the past two years, so as 2024 comes to a close, I wanted to feed my nostalgia a bit. Let me take you back to the most notable things that happened here this year.

This isn't a log of model releases or research, but rather of the things that were discussed and upvoted by the people here. So the notable omissions are also, in their own way, an indication of what was going on. I hope it also shows the amount of progress and development that happened in just a single year, and makes you even more excited for what's to come in 2025.


The year started with excitement about Phi-2 (443 upvotes, by u/steph_pop). Phi-2 feels like ancient history these days; it's fascinating that we end 2024 with Phi-4. Just one week later, people discovered that it apparently [was trained on the software engineer's diary](https://reddit.com/r/LocalLLaMA/comments/1

@gittb
gittb / config.json
Last active November 19, 2024 15:54
Continue.dev: Qwen2.5-Coder 32B Instruct Chat and Autocomplete FIM config
{
  "models": [
    {
      "apiBase": "YOURLOCALMODEL:8000/v1",
      "title": "Qwen2.5-Coder-32B-Instruct",
      "model": "/models/Qwen2.5-Coder-32B-Instruct",
      "provider": "openai",
      "apiKey": "YOURKEY"
    }
  ],
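The preview cuts off before the autocomplete half that the gist title promises. In Continue's `config.json` schema, the FIM autocomplete model lives under a separate `tabAutocompleteModel` key; a hedged sketch of how that section likely continues, reusing the same placeholder values from above (the actual gist may differ):

```json
"tabAutocompleteModel": {
  "title": "Qwen2.5-Coder-32B-Instruct",
  "model": "/models/Qwen2.5-Coder-32B-Instruct",
  "provider": "openai",
  "apiBase": "YOURLOCALMODEL:8000/v1",
  "apiKey": "YOURKEY"
}
```

Replace `YOURLOCALMODEL:8000/v1` and `YOURKEY` with your own endpoint and key, as in the `models` entry.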
@mausch
mausch / flake.nix
Last active June 14, 2025 16:46
llama-vicuna.nix
{
  description = "llama.cpp running vicuna";
  inputs = {
    llama.url = "github:ggerganov/llama.cpp/aaf3b23debc1fe1a06733c8c6468fb84233cc44f";
    flake-utils.url = "github:numtide/flake-utils/033b9f258ca96a10e543d4442071f614dc3f8412";
    nixpkgs.url = "github:NixOS/nixpkgs/d9f759f2ea8d265d974a6e1259bd510ac5844c5d";
  };
  outputs = { self, flake-utils, llama, nixpkgs }:
@khvn26
khvn26 / steamos-vscode-docker-guide.md
Last active October 11, 2025 20:08
SteamOS VSCode + Docker guide


  1. Install Docker:

    sudo pacman -S docker
  2. Enable Docker systemctl unit:

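The preview cuts off after step 2's heading. As a hedged sketch, on an Arch-based system like SteamOS the unit is typically enabled like this (an assumption based on standard systemd usage; the full guide may do it differently):

```shell
# enable and start the Docker daemon (assumes a standard systemd setup)
sudo systemctl enable --now docker

# optional: let your user reach the daemon without sudo
# (a hypothetical convenience step, not confirmed by the truncated guide)
sudo usermod -aG docker "$USER"
```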
@clample
clample / dualboot.md
Last active November 2, 2025 01:45
NixOS Ubuntu Dual Boot


Why?

After using NixOS for a year, I've found it to be a great operating system. When the software I need is in nixpkgs, things work out great. When I need to install software from outside of nixpkgs, though, it can become a pain: figuring out the quirks of some closed-source application can get pretty complicated. It would be great to package it and contribute it back to nixpkgs, but a lot of the time I just want the application working as soon as possible.

Since Ubuntu is a more standard Linux distribution, I hope it's better supported by some of these closed-source applications. By dual booting, it's possible to get the best of both worlds.

Alternatives to Dual Booting

@karpathy
karpathy / min-char-rnn.py
Last active December 31, 2025 01:12
Minimal character-level language model with a Vanilla Recurrent Neural Network, in Python/numpy
"""
Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy)
BSD License
"""
import numpy as np
# data I/O
data = open('input.txt', 'r').read() # should be simple plain text file
chars = list(set(data))
data_size, vocab_size = len(data), len(chars)
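The preview stops at the data I/O lines. To make the vocabulary handling concrete, here is a self-contained sketch of the char/index mappings the model needs next. It departs from the gist as shown in two ways: it uses an inline string instead of `input.txt`, and it sorts the character set for a deterministic ordering (the gist's `list(set(data))` order varies between runs):

```python
# Sketch of the character vocabulary and encode/decode round trip.
data = "hello world"                      # stand-in for the input.txt contents
chars = sorted(set(data))                 # sorted for a deterministic ordering
data_size, vocab_size = len(data), len(chars)

# map each character to an integer index and back
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}

# encode the text as indices, then decode back to verify the round trip
encoded = [char_to_ix[ch] for ch in data]
decoded = ''.join(ix_to_char[i] for i in encoded)
```

These index lists are what the RNN consumes: each integer selects a one-hot input vector of length `vocab_size`.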