- Respond briefly, directly, and tersely, using as few words as possible. Focus on the core point without elaboration, detail, or follow-up questions.
- Say only what is necessary to help with the user's question.
- Assume the user knows everything except the question asked.
- Prioritize brevity over detail.
- Don't be a sycophant.
- Don't use headings, excessive formatting, or emoji.
- Use lists, bold, etc., for clarity only if required.
- Use `-` for lists and only put one space after list / numbered list symbols. Do not use `*` to represent bullets.
Default meta prompt collection: https://gist.github.com/pyros-projects/c77402249b5b45f0a501998870766ae9
Meta prompt collection for creating summaries and context sync (use these when working with Cline or other coding assistants): https://gist.github.com/pyros-projects/f6430df8ac6f1ac37e5cfb6a8302edcf
This community was a great part of my life for the past two years, so as 2024 comes to a close, I wanted to feed my nostalgia a bit. Let me take you back to the most notable things that happened here this year.
This isn't a log of model releases or research, but rather of the things that were discussed and upvoted by the people here, so anything notable that's missing is also, in a way, an indication of what was going on. I hope it also shows how much progress and development happened in just a single year and makes you even more excited for what's to come in 2025.
The year started with excitement about Phi-2 (443 upvotes, by u/steph_pop). Phi-2 feels like ancient history these days; it's also fascinating that we end 2024 with Phi-4. Just one week later, people discovered that apparently it [was trained on the software engineer's diary](https://reddit.com/r/LocalLLaMA/comments/1
```json
{
  "models": [
    {
      "apiBase": "YOURLOCALMODEL:8000/v1",
      "title": "Qwen2.5-Coder-32B-Instruct",
      "model": "/models/Qwen2.5-Coder-32B-Instruct",
      "provider": "openai",
      "apiKey": "YOURKEY"
    }
  ],
```
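This looks like a Continue-style `config.json` entry pointing at a local OpenAI-compatible server; the snippet is truncated after the `models` array. To sanity-check that the endpoint behind it answers, one option is to query it directly with the `openai` Python client. This is only a sketch under the assumption that the backend really exposes the OpenAI chat API at that `apiBase`; the URL, key, and model path below are just the placeholders from the config, not values from the original post.

```python
# Minimal sketch: query the same OpenAI-compatible endpoint the config above points to.
# The base URL, API key, and model path are the placeholder values from the config,
# not anything confirmed by the original post.
from openai import OpenAI

client = OpenAI(
    base_url="http://YOURLOCALMODEL:8000/v1",  # matches "apiBase"
    api_key="YOURKEY",                         # matches "apiKey"
)

response = client.chat.completions.create(
    model="/models/Qwen2.5-Coder-32B-Instruct",  # matches "model"
    messages=[{"role": "user", "content": "Write a one-line docstring for a function that reverses a string."}],
    max_tokens=64,
)
print(response.choices[0].message.content)
```

If the request succeeds, Continue should be able to reach the same server with the config shown above.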
```nix
{
  description = "llama.cpp running vicuna";

  inputs = {
    llama.url = "github:ggerganov/llama.cpp/aaf3b23debc1fe1a06733c8c6468fb84233cc44f";
    flake-utils.url = "github:numtide/flake-utils/033b9f258ca96a10e543d4442071f614dc3f8412";
    nixpkgs.url = "github:NixOS/nixpkgs/d9f759f2ea8d265d974a6e1259bd510ac5844c5d";
  };

  outputs = { self, flake-utils, llama, nixpkgs }:
```
After using NixOS for a year, I've found it to be a great operating system. When the software I need is in nixpkgs, things work out great. When I need to install software from outside of nixpkgs, though, it can become a pain: figuring out the quirks of some closed-source application can get pretty complicated. It would be great to package it and contribute it back to nixpkgs, but a lot of the time I just want the application working as soon as possible.
Since Ubuntu is a more standard Linux distribution, I hope it's better supported by some of these closed-source applications. By dual booting, it's possible to get the best of both worlds.
| """ | |
| Minimal character-level Vanilla RNN model. Written by Andrej Karpathy (@karpathy) | |
| BSD License | |
| """ | |
| import numpy as np | |
| # data I/O | |
| data = open('input.txt', 'r').read() # should be simple plain text file | |
| chars = list(set(data)) | |
| data_size, vocab_size = len(data), len(chars) |
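The snippet cuts off here. The natural next step in a character-level model is to map characters to integer indices and encode each character as a one-hot vector before feeding it to the RNN. The sketch below illustrates that step with hypothetical helper names; it is my own illustration and isn't claimed to match the original file verbatim.

```python
# Sketch of the encoding step that typically follows: character <-> index lookup
# tables, plus a helper that turns a character into a (vocab_size, 1) one-hot vector.
char_to_ix = {ch: i for i, ch in enumerate(chars)}
ix_to_char = {i: ch for i, ch in enumerate(chars)}

def one_hot(ch):
    """Return a one-hot column vector for character ch."""
    x = np.zeros((vocab_size, 1))
    x[char_to_ix[ch]] = 1
    return x

# Example: encode the first character of the training text.
x0 = one_hot(data[0])
```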