Multi Injected Provider Discovery - wallets announce themselves, dapps discover them.
Spec: https://eips.ethereum.org/EIPS/eip-6963
Reference: https://github.com/wevm/mipd
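For a dapp, the whole protocol is two DOM events. The sketch below is a minimal TypeScript version of the dapp side; the event names and the shape of event.detail come from the EIP, while the interface names and the Map are just illustrative choices.

// Dapp side of EIP-6963 discovery.
// Wallets dispatch "eip6963:announceProvider" with { info, provider } in event.detail;
// dispatching "eip6963:requestProvider" asks already-injected wallets to announce again.
interface EIP6963ProviderInfo {
  uuid: string;  // per-session UUIDv4
  name: string;  // human-readable wallet name
  icon: string;  // data URI for the wallet icon
  rdns: string;  // reverse-DNS identifier, e.g. "io.metamask"
}

interface EIP6963AnnounceProviderEvent extends CustomEvent {
  detail: { info: EIP6963ProviderInfo; provider: unknown };
}

const discovered = new Map<string, EIP6963AnnounceProviderEvent["detail"]>();

window.addEventListener("eip6963:announceProvider", (event) => {
  const { detail } = event as EIP6963AnnounceProviderEvent;
  discovered.set(detail.info.uuid, detail);  // keep every wallet that announces itself
});

// Wallets that injected themselves before this listener ran will re-announce in response.
window.dispatchEvent(new Event("eip6963:requestProvider"));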
#!/usr/bin/env bun
/**
 * Prometheus MCP Server
 *
 * Enables AI agents to query Prometheus metrics for debugging and observability.
 * Provides tools to:
 * - Query instant metrics
 * - Query range metrics over time
 * - List available metrics
 * - Get service health status
 */
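Each of those tools can be a thin wrapper over Prometheus's HTTP API. As one example, an instant query is a single GET to /api/v1/query; the sketch below assumes a PROMETHEUS_URL environment variable and uses a helper name of my choosing, not the server's actual code.

// Building block for the "query instant metrics" tool: call the Prometheus HTTP API directly.
// PROMETHEUS_URL is an assumed env var, e.g. "http://localhost:9090".
const PROMETHEUS_URL = process.env.PROMETHEUS_URL ?? "http://localhost:9090";

async function queryInstant(promql: string, time?: string): Promise<unknown> {
  const params = new URLSearchParams({ query: promql });
  if (time) params.set("time", time); // RFC 3339 or unix timestamp; defaults to "now"

  const res = await fetch(`${PROMETHEUS_URL}/api/v1/query?${params}`);
  const body = await res.json();

  // Responses are wrapped as { status: "success" | "error", data: { resultType, result } }.
  if (body.status !== "success") {
    throw new Error(`Prometheus query failed: ${body.error ?? res.statusText}`);
  }
  return body.data.result;
}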
Refactor Plue's architecture so ALL traffic flows through Cloudflare:
After this change, the origin server will ONLY accept connections from Cloudflare.
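One common way to enforce the Cloudflare-only rule (sketched here, not necessarily how Plue will do it) is to combine two layers: firewall the origin so only Cloudflare's published IP ranges (https://www.cloudflare.com/ips/) can reach it, and have Cloudflare add a shared-secret request header at the edge that the origin checks on every request. The header name, env var, and Bun handler below are illustrative.

// Origin-side check: refuse anything that did not come through Cloudflare.
// Assumes Cloudflare is configured (e.g. via a Transform Rule) to add
// "x-origin-auth: <secret>" to proxied requests; all names here are hypothetical.
const ORIGIN_SECRET = process.env.CF_ORIGIN_SECRET;

Bun.serve({
  port: 8080,
  fetch(req) {
    if (!ORIGIN_SECRET || req.headers.get("x-origin-auth") !== ORIGIN_SECRET) {
      // Missing or wrong secret: the request bypassed Cloudflare, so reject it.
      return new Response("Forbidden", { status: 403 });
    }
    return new Response("OK: request arrived via Cloudflare");
  },
});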
| """ | |
| AI-powered code review workflow with multiple focus areas. | |
| Each focus runs an independent review pass with tool access to explore | |
| the codebase and gather context before making judgments. | |
| """ | |
| from plue import workflow, pull_request | |
| from plue.prompts import CodeReview | |
| from plue.tools import readfile, grep, glob, websearch |
Most people talk about “getting better at prompting” as if the goal were to discover a perfect incantation that unlocks the model’s intelligence. That framing misses the actual control surface we have.
A modern LLM call is stateless. It does not remember your repo, your last run, or what you meant five minutes ago. It only sees what you send this time. The “context window” is not a mystical entity; it’s the tokens in the request: instructions, the plan, tool outputs, snippets of files, diffs, logs—everything the model can condition on.
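To make that concrete, here is roughly what a chat-style completion request body looks like (OpenAI-style field names; the contents are illustrative). Everything the model can condition on, including anything that feels like "memory", is in this one payload.

// Illustrative chat-completions request. The model sees exactly these tokens and nothing else:
// if the plan, the repo layout, or the last tool output isn't in here, it doesn't exist.
const body = {
  model: "gpt-4o", // any chat model; the name is illustrative
  messages: [
    { role: "system", content: "You are a careful refactoring assistant." },
    { role: "user", content: "Rename fetchUser to loadUser across the repo." },
    { role: "assistant", content: "I'll locate the call sites first." },
    // "Memory" is just more tokens your application chose to send again this turn:
    { role: "user", content: "grep output:\nsrc/api.ts:12: fetchUser(...)\nsrc/ui.tsx:40: fetchUser(...)" },
  ],
};

await fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "content-type": "application/json",
    authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify(body),
});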
Once that clicks, a blunt conclusion follows:
(Expanded draft; clarity-first; using your literal examples and workflows)
Most people talk about “getting better at prompting” as if the goal were to discover a perfect incantation that unlocks the model’s intelligence. That framing misses the actual control surface you have.
A modern LLM call is stateless. It doesn’t remember your repo, your last run, or what you meant five minutes ago. It only sees what you send this time. The “context window” is not a mystical thing; it’s just the tokens in the request: system prompt + user prompt + tool outputs + any files/snippets you included.
A first draft
Most people talk about “getting better at prompting” as if the goal were to discover a perfect incantation that unlocks the model’s intelligence. That framing misses the actual control surface we have.
A modern LLM call is stateless. It does not “remember” anything from last time unless you send it again. Your application can simulate state (a chat transcript, tool outputs, a memory file), but the model itself is still just: input tokens → output tokens. The “context window” is not a mystical entity; it’s whatever you put in the HTTP request.
And once you accept that, a fairly strong claim becomes hard to unsee:
How MemoryTool Works in Pydantic AI
The MemoryTool in Pydantic AI is a built-in tool that enables agents to persist information across conversations. Here's how it works:
Architecture
@dataclass(kw_only=True)
class MemoryTool(AbstractBuiltinTool):
Implementation Retrospective
What Went Smoothly
Issues Encountered