@roninjin10
roninjin10 / prommcp.ts
Created December 29, 2025 19:51
vibe coded prom mcp
#!/usr/bin/env bun
/**
* Prometheus MCP Server
*
* Enables AI agents to query Prometheus metrics for debugging and observability.
* Provides tools to:
* - Query instant metrics
* - Query range metrics over time
* - List available metrics
* - Get service health status
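The preview above is truncated, but the four listed tools map naturally onto Prometheus's HTTP API. A minimal sketch of that mapping (the base URL and helper names are placeholders, not taken from the gist):

```typescript
// Sketch: one URL-builder per tool, against Prometheus's documented HTTP API.
// PROM_URL is a placeholder; a real server would read it from config.
const PROM_URL = "http://localhost:9090";

/** Instant query: GET /api/v1/query?query=<promql> */
function instantQueryUrl(promql: string): string {
  return `${PROM_URL}/api/v1/query?${new URLSearchParams({ query: promql })}`;
}

/** Range query: GET /api/v1/query_range with start/end (unix seconds) and step. */
function rangeQueryUrl(promql: string, start: number, end: number, step = "30s"): string {
  const params = new URLSearchParams({
    query: promql,
    start: String(start),
    end: String(end),
    step,
  });
  return `${PROM_URL}/api/v1/query_range?${params}`;
}

/** List available metric names: GET /api/v1/label/__name__/values */
function listMetricsUrl(): string {
  return `${PROM_URL}/api/v1/label/__name__/values`;
}

/** Scrape-target health: GET /api/v1/targets */
function targetsUrl(): string {
  return `${PROM_URL}/api/v1/targets`;
}
```

Each MCP tool handler would fetch one of these URLs and return the JSON body to the agent.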
@roninjin10
roninjin10 / prompt.example.md
Created December 29, 2025 09:18
Prompt example
@roninjin10
roninjin10 / greatprompt.md
Created December 24, 2025 07:11
Great prompt

Task: Route All Plue Traffic Through Cloudflare Edge

Objective

Refactor Plue's architecture so ALL traffic flows through Cloudflare:

  1. HTTP/API → Cloudflare Edge Worker (already exists, add SIWE auth)
  2. Git SSH → Cloudflare Spectrum (new)
  3. Origin Protection → mTLS with custom certificates (new)

After this change, the origin server will ONLY accept connections from Cloudflare.
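The edge-side gate implied by step 1 can be sketched as a pure routing decision. This is a hypothetical illustration only: the SIWE session check is stubbed to a boolean, and the `/public/` allowlist is an assumption, not part of the task:

```typescript
// Sketch: the Worker either forwards a request to the mTLS-protected origin
// or rejects it at the edge. hasValidSiweSession stands in for a real
// SIWE (EIP-4361) session verification, which is out of scope here.
type Verdict = "forward" | "reject";

function routeDecision(path: string, hasValidSiweSession: boolean): Verdict {
  if (path.startsWith("/public/")) return "forward"; // hypothetical unauthenticated assets
  return hasValidSiweSession ? "forward" : "reject";
}
```

Git SSH traffic never reaches this code path; Spectrum proxies it at the TCP layer.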

@roninjin10
roninjin10 / workflowexample.py
Created December 23, 2025 08:31
workflow example
"""
AI-powered code review workflow with multiple focus areas.
Each focus runs an independent review pass with tool access to explore
the codebase and gather context before making judgments.
"""
from plue import workflow, pull_request
from plue.prompts import CodeReview
from plue.tools import readfile, grep, glob, websearch
@roninjin10
roninjin10 / mcpexample.ts
Created December 23, 2025 04:30
mcp server
#!/usr/bin/env bun
/**
* Prometheus MCP Server
*
* Enables AI agents to query Prometheus metrics for debugging and observability.
* Provides tools to:
* - Query instant metrics
* - Query range metrics over time
* - List available metrics
* - Get service health status
@roninjin10
roninjin10 / hillclimbing.context.md
Created December 18, 2025 14:05
Hill Climbing Context Draft 3

Hill Climbing Context

The control surface for agents

Most people talk about “getting better at prompting” as if the goal were to discover a perfect incantation that unlocks the model’s intelligence. That framing misses the actual control surface we have.

A modern LLM call is stateless. It does not remember your repo, your last run, or what you meant five minutes ago. It only sees what you send this time. The “context window” is not a mystical entity; it’s the tokens in the request: instructions, the plan, tool outputs, snippets of files, diffs, logs—everything the model can condition on.
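That mechanical view can be made concrete. A toy sketch of assembling one request, where the field names (`plan`, `toolOutputs`, `snippets`) are illustrative and not any real SDK's API:

```typescript
// Illustrative only: the "context window" is literally the payload you send.
interface ContextParts {
  instructions: string;
  plan: string;
  toolOutputs: string[];
  snippets: string[];
}

function buildMessages(parts: ContextParts) {
  // Everything the model can condition on must appear in this array;
  // nothing else survives between calls.
  return [
    { role: "system", content: parts.instructions },
    {
      role: "user",
      content: [parts.plan, ...parts.toolOutputs, ...parts.snippets].join("\n\n"),
    },
  ];
}
```

If a file, diff, or log line is not in `buildMessages`'s output, the model has never seen it.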

Once that clicks, a blunt conclusion follows:

@roninjin10
roninjin10 / context.hill.climbing.2.md
Created December 18, 2025 00:55
Context Hill Climbing Draft 2

Context Hill Climbing

How agents get “smarter” without the model changing

(Expanded draft; clarity-first; using your literal examples and workflows)

Most people talk about “getting better at prompting” as if the goal were to discover a perfect incantation that unlocks the model’s intelligence. That framing misses the actual control surface you have.

A modern LLM call is stateless. It doesn’t remember your repo, your last run, or what you meant five minutes ago. It only sees what you send this time. The “context window” is not a mystical thing; it’s just the tokens in the request: system prompt + user prompt + tool outputs + any files/snippets you included.

@roninjin10
roninjin10 / hillclimbing.context.md
Created December 17, 2025 15:40
Hill Climbing Context

Hill Climbing Context: The Control Surface for Agents

A first draft

Most people talk about “getting better at prompting” as if the goal were to discover a perfect incantation that unlocks the model’s intelligence. That framing misses the actual control surface we have.

A modern LLM call is stateless. It does not “remember” anything from last time unless you send it again. Your application can simulate state (a chat transcript, tool outputs, a memory file), but the model itself is still just: input tokens → output tokens. The “context window” is not a mystical entity; it’s whatever you put in the HTTP request.

And once you accept that, a fairly strong claim becomes hard to unsee:

@roninjin10
roninjin10 / memorytool.md
Created December 15, 2025 16:13
memory tool

How MemoryTool Works in Pydantic AI

The MemoryTool in Pydantic AI is a built-in tool that enables agents to persist information across conversations. Here's how it works:

Architecture

  1. Pydantic AI Side: The MemoryTool class (builtin_tools.py:336-345) is a simple marker that tells Anthropic's API to enable the memory capability:

@dataclass(kw_only=True)
class MemoryTool(AbstractBuiltinTool):

@roninjin10
roninjin10 / implementationretro.md
Created December 15, 2025 01:46
Implementation Retrospective (companion files: lizardbrain.plan.to.plan.md, lizard.plan.md, gist:8f6cc951c9d440411a8a24776846ec29). We give this lizard-brain metaprompt to the agent as instructions to implement https://gist.github.com/roninjin10/a4016d3144140ac94fe3ffbb7479180b

Implementation Retrospective

What Went Smoothly

  1. Parallelization worked well - The foundation files (signal.ts, trigger.ts, handler.ts) and handlers (compaction.ts, struggling.ts) were implemented in parallel without issues. The spec's interface definitions were clear enough that agents could work independently.
  2. Pattern discovery - The initial exploration agent provided excellent context on Bus, Instance.state, and namespace patterns. This made subsequent implementation consistent with the codebase.
  3. TypeScript caught issues early - The type system immediately flagged that the SDK types also needed updating for the "lizard" trigger.

Issues Encountered