
@Colelyman
Created February 6, 2026 18:41
Pi Coding Agent Extensions - Debug, Guardrails, Review, Todos, Emacs Bridge, Web Browser

Pi Coding Agent Extensions

Custom extensions for pi, a CLI coding agent.

Directory Structure

Place these files in ~/.pi/agent/extensions/:

~/.pi/agent/extensions/
├── debug.ts                    # Systematic debugging extension
├── guardrails.ts               # Safety guardrails extension  
├── review.ts                   # Code review extension
├── todos.ts                    # Todo management extension
├── emacs-bridge/
│   └── index.ts                # ← emacs-bridge.ts in this gist
└── web-browser/
    ├── index.ts                # ← web-browser.ts in this gist
    ├── package.json            # Dependencies (ws for WebSocket)
    └── scripts/
        ├── cdp.js              # Chrome DevTools Protocol client
        ├── dismiss-cookies.js  # Auto-dismiss cookie consent dialogs
        ├── eval.js             # Evaluate JavaScript in browser
        ├── logs-tail.js        # Tail browser console/network logs
        ├── nav.js              # Navigate to URLs
        ├── net-summary.js      # Summarize network activity
        ├── pick.js             # Interactive element picker
        ├── screenshot.js       # Take screenshots
        ├── start.js            # Start Chrome with remote debugging
        └── watch.js            # Background logger for console/network

Extensions

debug.ts

Provides a /debug command for systematic debugging sessions. It enforces finding the root cause before attempting fixes, tracks fix attempts via /fix-attempt (and warns when the architecture may be the problem after three or more failed fixes), offers /trace for following a bad value back to its origin, and /end-debug to close the session.

guardrails.ts

Safety guardrails that prompt for confirmation before:

  • Deleting files not tracked in git
  • Installing packages system-wide

Also sends OS notifications when pi needs input.

review.ts

Code review extension with commands:

  • /review - Review uncommitted changes, branches, commits, or PRs
  • /post-review - Post findings to GitHub PR as review comments
  • /end-review - Complete review session

todos.ts

File-based todo management stored in .pi/todos/:

  • /todo - List and manage todos
  • /plan - Socratic planning to refine ideas into well-specified todos

emacs-bridge.ts → emacs-bridge/index.ts

Integrates pi with Emacs via HTTP. Requires pi-bridge.el running in Emacs.

Tools: emacs_buffers, emacs_read, emacs_context, emacs_goto, emacs_insert, emacs_edit, emacs_eval, emacs_project_files, emacs_project_buffers

web-browser.ts → web-browser/index.ts

Browser automation via Chrome DevTools Protocol. Use /browser to activate.

# Install dependency
cd ~/.pi/agent/extensions/web-browser && npm install

# Usage
node scripts/start.js              # Start Chrome with remote debugging
node scripts/nav.js https://...    # Navigate
node scripts/screenshot.js         # Take screenshot
node scripts/eval.js 'document.title'
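
All of these scripts build on scripts/cdp.js, which speaks the Chrome DevTools Protocol directly: JSON messages over a WebSocket, where each command carries an id that the response echoes back, plus a sessionId when targeting a specific tab. A navigation exchange looks roughly like this (ids and values here are illustrative):

```json
{ "id": 1, "method": "Page.navigate", "params": { "url": "https://example.com" }, "sessionId": "ABC123" }
{ "id": 1, "result": { "frameId": "F123", "loaderId": "L456" } }
```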

Installation

  1. Clone or copy the files to ~/.pi/agent/extensions/, maintaining the directory structure above
  2. Rename emacs-bridge.ts to emacs-bridge/index.ts
  3. Rename web-browser.ts to web-browser/index.ts
  4. For web-browser, run npm install in that directory
  5. Restart pi

License

MIT

cdp.js

/**
 * Minimal CDP client - no puppeteer, no hangs
 */
import WebSocket from "ws";

export async function connect(timeout = 5000) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeout);
  try {
    const resp = await fetch("http://localhost:9222/json/version", {
      signal: controller.signal,
    });
    const { webSocketDebuggerUrl } = await resp.json();
    clearTimeout(timeoutId);
    return new Promise((resolve, reject) => {
      const ws = new WebSocket(webSocketDebuggerUrl);
      const connectTimeout = setTimeout(() => {
        ws.close();
        reject(new Error("WebSocket connect timeout"));
      }, timeout);
      ws.on("open", () => {
        clearTimeout(connectTimeout);
        resolve(new CDP(ws));
      });
      ws.on("error", (e) => {
        clearTimeout(connectTimeout);
        reject(e);
      });
    });
  } catch (e) {
    clearTimeout(timeoutId);
    if (e.name === "AbortError") {
      throw new Error("Connection timeout - is Chrome running with --remote-debugging-port=9222?");
    }
    throw e;
  }
}
class CDP {
  constructor(ws) {
    this.ws = ws;
    this.id = 0;
    this.callbacks = new Map();
    this.sessions = new Map();
    this.eventHandlers = new Map();
    ws.on("message", (data) => {
      const msg = JSON.parse(data.toString());
      if (msg.id && this.callbacks.has(msg.id)) {
        const { resolve, reject } = this.callbacks.get(msg.id);
        this.callbacks.delete(msg.id);
        if (msg.error) {
          reject(new Error(msg.error.message));
        } else {
          resolve(msg.result);
        }
        return;
      }
      if (msg.method) {
        this.emit(msg.method, msg.params || {}, msg.sessionId || null);
      }
    });
  }
  on(method, handler) {
    if (!this.eventHandlers.has(method)) {
      this.eventHandlers.set(method, new Set());
    }
    this.eventHandlers.get(method).add(handler);
    return () => this.off(method, handler);
  }
  off(method, handler) {
    const handlers = this.eventHandlers.get(method);
    if (!handlers) return;
    handlers.delete(handler);
    if (handlers.size === 0) {
      this.eventHandlers.delete(method);
    }
  }
  emit(method, params, sessionId) {
    const handlers = this.eventHandlers.get(method);
    if (!handlers || handlers.size === 0) return;
    for (const handler of handlers) {
      try {
        handler(params, sessionId);
      } catch {
        // Ignore handler errors to keep CDP session alive.
      }
    }
  }
  send(method, params = {}, sessionId = null, timeout = 10000) {
    return new Promise((resolve, reject) => {
      const msgId = ++this.id;
      const msg = { id: msgId, method, params };
      if (sessionId) msg.sessionId = sessionId;
      const timeoutId = setTimeout(() => {
        this.callbacks.delete(msgId);
        reject(new Error(`CDP timeout: ${method}`));
      }, timeout);
      this.callbacks.set(msgId, {
        resolve: (result) => {
          clearTimeout(timeoutId);
          resolve(result);
        },
        reject: (err) => {
          clearTimeout(timeoutId);
          reject(err);
        },
      });
      this.ws.send(JSON.stringify(msg));
    });
  }
  async getPages() {
    const { targetInfos } = await this.send("Target.getTargets");
    return targetInfos.filter((t) => t.type === "page");
  }
  async attachToPage(targetId) {
    const { sessionId } = await this.send("Target.attachToTarget", {
      targetId,
      flatten: true,
    });
    return sessionId;
  }
  async evaluate(sessionId, expression, timeout = 30000) {
    const result = await this.send(
      "Runtime.evaluate",
      {
        expression,
        returnByValue: true,
        awaitPromise: true,
      },
      sessionId,
      timeout
    );
    if (result.exceptionDetails) {
      throw new Error(
        result.exceptionDetails.exception?.description ||
          result.exceptionDetails.text
      );
    }
    return result.result?.value;
  }
  async screenshot(sessionId, timeout = 10000) {
    const { data } = await this.send(
      "Page.captureScreenshot",
      { format: "png" },
      sessionId,
      timeout
    );
    return Buffer.from(data, "base64");
  }
  async navigate(sessionId, url, timeout = 30000) {
    await this.send("Page.navigate", { url }, sessionId, timeout);
  }
  async getFrameTree(sessionId) {
    const { frameTree } = await this.send("Page.getFrameTree", {}, sessionId);
    return frameTree;
  }
  async evaluateInFrame(sessionId, frameId, expression, timeout = 30000) {
    // Create isolated world for the frame
    const { executionContextId } = await this.send(
      "Page.createIsolatedWorld",
      { frameId, worldName: "cdp-eval" },
      sessionId
    );
    const result = await this.send(
      "Runtime.evaluate",
      {
        expression,
        contextId: executionContextId,
        returnByValue: true,
        awaitPromise: true,
      },
      sessionId,
      timeout
    );
    if (result.exceptionDetails) {
      throw new Error(
        result.exceptionDetails.exception?.description ||
          result.exceptionDetails.text
      );
    }
    return result.result?.value;
  }
  close() {
    this.ws.close();
  }
}
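
The client above multiplexes many in-flight commands over one WebSocket by tagging each request with an incrementing id and resolving the matching pending callback when a reply with the same id arrives. That correlation scheme can be illustrated without Chrome using a mock transport; everything below is a toy sketch, not part of the real client:

```javascript
// Toy version of the id-correlation scheme used by CDP.send above.
// MockTransport stands in for the WebSocket: it echoes a result back
// asynchronously, tagged with the request's id.
class MockTransport {
  send(json) {
    const msg = JSON.parse(json);
    setTimeout(() => {
      this.onmessage(JSON.stringify({ id: msg.id, result: { ok: msg.method } }));
    }, 0);
  }
}

function makeClient(transport) {
  let nextId = 0;
  const pending = new Map(); // id -> resolve, like CDP's callbacks map
  transport.onmessage = (data) => {
    const msg = JSON.parse(data);
    const resolve = pending.get(msg.id);
    if (resolve) {
      pending.delete(msg.id);
      resolve(msg.result);
    }
  };
  return {
    send(method, params = {}) {
      return new Promise((resolve) => {
        const id = ++nextId;
        pending.set(id, resolve);
        transport.send(JSON.stringify({ id, method, params }));
      });
    },
  };
}

const client = makeClient(new MockTransport());
const result = await client.send("Target.getTargets");
console.log(result.ok); // logs "Target.getTargets"
```

Because replies are matched by id rather than by order, slow commands (a long Runtime.evaluate) never block fast ones issued afterwards.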
debug.ts

/**
 * Systematic Debugging Extension
 *
 * Provides a `/debug` command that enforces a disciplined debugging process:
 * 1. Find the root cause FIRST - no fixes until you understand WHY
 * 2. Trace backwards through the call chain to find the original trigger
 * 3. Test hypotheses minimally - one change at a time
 * 4. Fix at the source, not the symptom
 *
 * Based on: https://github.com/obra/superpowers/tree/main/skills/systematic-debugging
 *
 * Usage:
 * - `/debug` - Start debugging session (prompts for problem description)
 * - `/debug test failure in auth module` - Start with specific problem
 * - `/end-debug` - Complete debugging session and return
 */
import type { ExtensionAPI, ExtensionContext, ExtensionCommandContext } from "@mariozechner/pi-coding-agent";
import { BorderedLoader } from "@mariozechner/pi-coding-agent";
import { Text } from "@mariozechner/pi-tui";

// State for tracking debug session
let debugOriginId: string | undefined = undefined;
let debugFixAttempts = 0;

const DEBUG_STATE_TYPE = "debug-session";

type DebugSessionState = {
  active: boolean;
  originId?: string;
  fixAttempts: number;
  problem?: string;
};

function setDebugWidget(ctx: ExtensionContext, active: boolean, fixAttempts = 0) {
  if (!ctx.hasUI) return;
  if (!active) {
    ctx.ui.setWidget("debug", undefined);
    return;
  }
  ctx.ui.setWidget("debug", (_tui, theme) => {
    const attemptsText = fixAttempts > 0 ? ` • ${fixAttempts} fix attempt${fixAttempts > 1 ? "s" : ""}` : "";
    const warningText = fixAttempts >= 3 ? " ⚠️ QUESTION ARCHITECTURE" : "";
    const text = new Text(
      theme.fg("warning", `Debug session active${attemptsText}${warningText} — use /end-debug to return`),
      0, 0
    );
    return {
      render(width: number) { return text.render(width); },
      invalidate() { text.invalidate(); },
    };
  });
}

function getDebugState(ctx: ExtensionContext): DebugSessionState | undefined {
  let state: DebugSessionState | undefined;
  for (const entry of ctx.sessionManager.getBranch()) {
    if (entry.type === "custom" && entry.customType === DEBUG_STATE_TYPE) {
      state = entry.data as DebugSessionState | undefined;
    }
  }
  return state;
}

function applyDebugState(ctx: ExtensionContext) {
  const state = getDebugState(ctx);
  if (state?.active && state.originId) {
    debugOriginId = state.originId;
    debugFixAttempts = state.fixAttempts ?? 0;
    setDebugWidget(ctx, true, debugFixAttempts);
    return;
  }
  debugOriginId = undefined;
  debugFixAttempts = 0;
  setDebugWidget(ctx, false);
}
// The systematic debugging prompt
const DEBUG_PROMPT = `# Systematic Debugging Session
You are entering DEBUG MODE. Your goal is to find the ROOT CAUSE before attempting ANY fix.
## The Iron Law
\`\`\`
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
\`\`\`
Symptom fixes are failure. Quick patches mask underlying issues. You MUST complete Phase 1 before proposing any fix.
## The Problem
{problem}
---
## Phase 1: Root Cause Investigation (REQUIRED FIRST)
Complete ALL of these before moving to Phase 2:
### 1.1 Read Error Messages Carefully
- Don't skip past errors or warnings
- Read stack traces COMPLETELY
- Note line numbers, file paths, error codes
- They often contain the exact solution
### 1.2 Reproduce Consistently
- Can you trigger it reliably?
- What are the exact steps?
- If not reproducible → gather more data, don't guess
### 1.3 Check Recent Changes
- What changed that could cause this?
- \`git diff\`, recent commits
- New dependencies, config changes
### 1.4 Gather Evidence (for multi-component systems)
Before proposing fixes, add diagnostic instrumentation:
\`\`\`
For EACH component boundary:
- Log what data enters component
- Log what data exits component
- Verify environment/config propagation
- Check state at each layer
Run once to gather evidence showing WHERE it breaks
THEN analyze evidence to identify failing component
\`\`\`
### 1.5 Trace Data Flow Backwards
When error is deep in call stack:
- Where does bad value originate?
- What called this with bad value?
- Keep tracing UP until you find the source
- Fix at source, not at symptom
**Ask yourself:** "What called this? And what called that?"
---
## Phase 2: Pattern Analysis
### 2.1 Find Working Examples
- Locate similar working code in same codebase
- What works that's similar to what's broken?
### 2.2 Compare Against References
- If implementing a pattern, read reference implementation COMPLETELY
- Don't skim - understand fully
### 2.3 Identify Differences
- What's different between working and broken?
- List every difference, however small
---
## Phase 3: Hypothesis and Testing
### 3.1 Form Single Hypothesis
State clearly: "I think X is the root cause because Y"
Be specific, not vague.
### 3.2 Test Minimally
- Make the SMALLEST possible change to test hypothesis
- ONE variable at a time
- Don't fix multiple things at once
### 3.3 Verify Before Continuing
- Did it work? → Phase 4
- Didn't work? → Form NEW hypothesis, return to Phase 1 with new info
- DON'T add more fixes on top
---
## Phase 4: Implementation
### 4.1 Create Failing Test Case
- Simplest possible reproduction
- Automated test if possible
- MUST have before fixing
### 4.2 Implement Single Fix
- Address the root cause identified
- ONE change at a time
- No "while I'm here" improvements
### 4.3 Verify Fix
- Test passes now?
- No other tests broken?
- Issue actually resolved?
### 4.4 Add Defense-in-Depth
After fixing, add validation at EVERY layer data passes through:
- Layer 1: Entry point validation (API boundary)
- Layer 2: Business logic validation
- Layer 3: Environment guards (prevent dangerous ops in specific contexts)
- Layer 4: Debug instrumentation (for forensics)
---
## ⚠️ The 3-Fix Rule
**If 3+ fixes have failed:** STOP and question the architecture.
Pattern indicating architectural problem:
- Each fix reveals new shared state/coupling in different place
- Fixes require "massive refactoring" to implement
- Each fix creates new symptoms elsewhere
This is NOT a failed hypothesis - this is a wrong architecture.
Discuss with your human partner before attempting more fixes.
---
## 🚨 Red Flags - STOP and Return to Phase 1
If you catch yourself thinking:
- "Quick fix for now, investigate later"
- "Just try changing X and see if it works"
- "I don't fully understand but this might work"
- "It's probably X, let me fix that"
- Proposing solutions before tracing data flow
- "One more fix attempt" (when already tried 2+)
**ALL of these mean: STOP. Return to Phase 1.**
---
## Your Process
1. **Acknowledge** the problem and what you understand so far
2. **Investigate** - gather evidence, read errors, trace data flow
3. **Ask questions** if you need more information (ONE at a time)
4. **Form hypothesis** - state clearly what you think is wrong and why
5. **Test minimally** - propose the smallest possible verification
6. **Fix at source** - only after root cause is confirmed
7. **Add defenses** - make the bug structurally impossible
Start by reading any error messages or logs, then trace backwards to find the root cause.`;
// Prompt for incrementing fix attempts
const FIX_ATTEMPT_WARNING = `
---
## ⚠️ Fix Attempt #{count}
You are about to attempt fix #{count}. Remember:
{warning}
Before proceeding:
1. Have you confirmed the root cause through investigation?
2. Is this fix addressing the SOURCE, not the symptom?
3. Have you tested the hypothesis minimally first?
If you're not confident, return to Phase 1 and gather more evidence.
`;
function getFixAttemptWarning(count: number): string {
  if (count >= 3) {
    return `**STOP: You have attempted ${count - 1} fixes already.**
This pattern suggests an ARCHITECTURAL problem, not a bug:
- Each fix reveals new issues in different places
- You're playing whack-a-mole with symptoms
Before attempting another fix:
1. Step back and question the fundamental approach
2. Is this pattern/architecture sound?
3. Would refactoring be better than more patches?
Discuss with your human partner before continuing.`;
  } else if (count === 2) {
    return `You've already tried one fix that didn't fully work.
Before trying again:
- What NEW information did the failed fix reveal?
- Did you trace the data flow completely?
- Is there a deeper root cause you missed?`;
  } else {
    return `This is your first fix attempt. Make sure you:
- Have completed Phase 1 (Root Cause Investigation)
- Can state the root cause clearly
- Are fixing at the SOURCE, not the symptom`;
  }
}
// Summary prompt for end-debug
const DEBUG_SUMMARY_PROMPT = `Summarize this debugging session for future reference.
Include:
1. **The Problem:** What was being debugged
2. **Investigation:** What evidence was gathered, what was traced
3. **Root Cause:** What was the actual source of the bug (not the symptom)
4. **Fix Applied:** What was changed and why
5. **Defenses Added:** What validation/guards were added to prevent recurrence
6. **Lessons Learned:** What would help debug similar issues faster
If the bug wasn't fully resolved, note:
- Current hypothesis
- What's been ruled out
- Suggested next steps`;
export default function debugExtension(pi: ExtensionAPI) {
  // Restore debug state on session events
  pi.on("session_start", (_event, ctx) => applyDebugState(ctx));
  pi.on("session_switch", (_event, ctx) => applyDebugState(ctx));
  pi.on("session_tree", (_event, ctx) => applyDebugState(ctx));

  // Register /debug command
  pi.registerCommand("debug", {
    description: "Start a systematic debugging session (find root cause before fixing)",
    handler: async (args, ctx) => {
      if (!ctx.hasUI) {
        ctx.ui.notify("Debug mode requires interactive mode", "error");
        return;
      }
      if (debugOriginId) {
        ctx.ui.notify("Already in a debug session. Use /end-debug first.", "warning");
        return;
      }
      // Get problem description
      let problem = args?.trim();
      if (!problem) {
        problem = await ctx.ui.editor(
          "Describe the problem you're debugging:",
          "e.g., Test failure in auth module - login returns 401 even with valid credentials"
        );
        if (!problem?.trim()) {
          ctx.ui.notify("Debug session cancelled", "info");
          return;
        }
        problem = problem.trim();
      }
      // Check if we should use fresh session
      const entries = ctx.sessionManager.getEntries();
      const messageCount = entries.filter(e => e.type === "message").length;
      let useFreshSession = false;
      if (messageCount > 0) {
        const choice = await ctx.ui.select("Start debug session in:", ["Empty branch (recommended)", "Current session"]);
        if (choice === undefined) {
          ctx.ui.notify("Debug session cancelled", "info");
          return;
        }
        useFreshSession = choice === "Empty branch (recommended)";
      }
      if (useFreshSession) {
        const originId = ctx.sessionManager.getLeafId() ?? undefined;
        if (!originId) {
          ctx.ui.notify("Failed to determine origin", "error");
          return;
        }
        debugOriginId = originId;
        const lockedOriginId = originId;
        const firstUserMessage = entries.find(e => e.type === "message" && e.message.role === "user");
        if (!firstUserMessage) {
          ctx.ui.notify("No user message found in session", "error");
          debugOriginId = undefined;
          return;
        }
        try {
          const result = await ctx.navigateTree(firstUserMessage.id, { summarize: false, label: "debug-session" });
          if (result.cancelled) {
            debugOriginId = undefined;
            return;
          }
        } catch (error) {
          debugOriginId = undefined;
          ctx.ui.notify(`Failed to start debug session: ${error instanceof Error ? error.message : String(error)}`, "error");
          return;
        }
        debugOriginId = lockedOriginId;
        ctx.ui.setEditorText("");
        debugFixAttempts = 0;
        setDebugWidget(ctx, true, 0);
        pi.appendEntry(DEBUG_STATE_TYPE, { active: true, originId: lockedOriginId, fixAttempts: 0, problem });
      }
      const fullPrompt = DEBUG_PROMPT.replace("{problem}", problem);
      ctx.ui.notify("Debug session started. Find the root cause before attempting any fix!", "info");
      pi.sendUserMessage(fullPrompt);
    },
  });

  // Register /fix-attempt command to track fix attempts
  pi.registerCommand("fix-attempt", {
    description: "Record a fix attempt (tracks count, warns at 3+)",
    handler: async (args, ctx) => {
      if (!debugOriginId) {
        const state = getDebugState(ctx);
        if (state?.active) {
          debugOriginId = state.originId;
          debugFixAttempts = state.fixAttempts ?? 0;
        }
      }
      debugFixAttempts++;
      setDebugWidget(ctx, true, debugFixAttempts);
      // Update persisted state
      if (debugOriginId) {
        pi.appendEntry(DEBUG_STATE_TYPE, {
          active: true,
          originId: debugOriginId,
          fixAttempts: debugFixAttempts
        });
      }
      const warning = FIX_ATTEMPT_WARNING
        .replace(/{count}/g, String(debugFixAttempts))
        .replace("{warning}", getFixAttemptWarning(debugFixAttempts));
      if (debugFixAttempts >= 3) {
        ctx.ui.notify(`⚠️ Fix attempt #${debugFixAttempts} - STOP and question the architecture!`, "warning");
      } else {
        ctx.ui.notify(`Fix attempt #${debugFixAttempts} recorded`, "info");
      }
      pi.sendUserMessage(warning);
    },
  });

  // Register /end-debug command
  pi.registerCommand("end-debug", {
    description: "Complete debugging session and return to original position",
    handler: async (args, ctx) => {
      if (!ctx.hasUI) {
        ctx.ui.notify("Requires interactive mode", "error");
        return;
      }
      if (!debugOriginId) {
        const state = getDebugState(ctx);
        if (state?.active && state.originId) {
          debugOriginId = state.originId;
          debugFixAttempts = state.fixAttempts ?? 0;
        } else if (state?.active) {
          setDebugWidget(ctx, false);
          pi.appendEntry(DEBUG_STATE_TYPE, { active: false, fixAttempts: 0 });
          ctx.ui.notify("Debug state cleared", "warning");
          return;
        } else {
          ctx.ui.notify("Not in a debug session", "info");
          return;
        }
      }
      const choice = await ctx.ui.select("Summarize debugging session?", ["Summarize", "No summary"]);
      if (choice === undefined) {
        ctx.ui.notify("Cancelled. Use /end-debug to try again.", "info");
        return;
      }
      const wantsSummary = choice === "Summarize";
      const originId = debugOriginId;
      if (wantsSummary) {
        const result = await ctx.ui.custom<{ cancelled: boolean; error?: string } | null>((tui, theme, _kb, done) => {
          const loader = new BorderedLoader(tui, theme, "Summarizing debug session...");
          loader.onAbort = () => done(null);
          ctx.navigateTree(originId!, {
            summarize: true,
            customInstructions: DEBUG_SUMMARY_PROMPT,
            replaceInstructions: true,
          })
            .then(done)
            .catch(err => done({ cancelled: false, error: err instanceof Error ? err.message : String(err) }));
          return loader;
        });
        if (result === null) {
          ctx.ui.notify("Cancelled. Use /end-debug to try again.", "info");
          return;
        }
        if (result.error) {
          ctx.ui.notify(`Failed: ${result.error}`, "error");
          return;
        }
        setDebugWidget(ctx, false);
        debugOriginId = undefined;
        debugFixAttempts = 0;
        pi.appendEntry(DEBUG_STATE_TYPE, { active: false, fixAttempts: 0 });
        if (result.cancelled) {
          ctx.ui.notify("Navigation cancelled", "info");
          return;
        }
        if (!ctx.ui.getEditorText().trim()) {
          ctx.ui.setEditorText("Apply the fix identified in the debug session");
        }
        ctx.ui.notify("Debug session complete!", "info");
      } else {
        try {
          const result = await ctx.navigateTree(originId!, { summarize: false });
          if (result.cancelled) {
            ctx.ui.notify("Navigation cancelled", "info");
            return;
          }
          setDebugWidget(ctx, false);
          debugOriginId = undefined;
          debugFixAttempts = 0;
          pi.appendEntry(DEBUG_STATE_TYPE, { active: false, fixAttempts: 0 });
          ctx.ui.notify("Debug session complete!", "info");
        } catch (error) {
          ctx.ui.notify(`Failed: ${error instanceof Error ? error.message : String(error)}`, "error");
        }
      }
    },
  });

  // Register /trace command for quick root cause tracing
  pi.registerCommand("trace", {
    description: "Trace a value backwards through the call chain to find its origin",
    handler: async (args, ctx) => {
      if (!ctx.hasUI) {
        ctx.ui.notify("Requires interactive mode", "error");
        return;
      }
      let value = args?.trim();
      if (!value) {
        value = await ctx.ui.input(
          "What value/error do you want to trace?",
          "e.g., empty string in projectDir, null user object, 401 error"
        );
        if (!value?.trim()) {
          ctx.ui.notify("Cancelled", "info");
          return;
        }
        value = value.trim();
      }
      const tracePrompt = `# Root Cause Tracing
Trace the origin of: **${value}**
## The Process
### 1. Observe the Symptom
Where does this bad value appear? What error does it cause?
### 2. Find Immediate Cause
What code directly produces or uses this value?
### 3. Trace Backwards
Ask repeatedly: "What called this? What value was passed?"
\`\`\`
[Current location] ← receives bad value
↑ called by [caller 1] with value X
↑ called by [caller 2] with value Y
↑ called by [caller 3] ... ← FIND THE SOURCE
\`\`\`
### 4. Add Stack Traces if Needed
If you can't trace manually, add instrumentation:
\`\`\`typescript
console.error('DEBUG:', {
value,
stack: new Error().stack,
context: relevantContext
});
\`\`\`
### 5. Find the Original Trigger
Keep tracing until you find WHERE the bad value originates.
This is where the fix belongs - not where the error appears.
---
Start by identifying where "${value}" is used, then trace backwards to find where it comes from.`;
      pi.sendUserMessage(tracePrompt);
    },
  });
}
dismiss-cookies.js

#!/usr/bin/env node
/**
 * Cookie Consent Dismissal Helper
 *
 * Automatically accepts cookie consent dialogs on EU websites.
 * Supports common CMPs: OneTrust, Cookiebot, Didomi, Quantcast, Google, BBC, Amazon, etc.
 *
 * Usage:
 *   ./dismiss-cookies.js            # Accept cookies
 *   ./dismiss-cookies.js --reject   # Reject cookies (where possible)
 */
import { connect } from "./cdp.js";

const DEBUG = process.env.DEBUG === "1";
const log = DEBUG ? (...args) => console.error("[debug]", ...args) : () => {};

const reject = process.argv.includes("--reject");
const mode = reject ? "reject" : "accept";
const COOKIE_DISMISS_SCRIPT = `(acceptCookies) => {
  const clicked = [];
  const isVisible = (el) => {
    if (!el) return false;
    const style = getComputedStyle(el);
    return style.display !== 'none' &&
      style.visibility !== 'hidden' &&
      style.opacity !== '0' &&
      (el.offsetParent !== null || style.position === 'fixed' || style.position === 'sticky');
  };
  const tryClick = (selector, description) => {
    const el = typeof selector === 'string' ? document.querySelector(selector) : selector;
    if (isVisible(el)) {
      el.click();
      clicked.push(description || selector);
      return true;
    }
    return false;
  };
  const findButtonByText = (patterns, container = document) => {
    const buttons = Array.from(container.querySelectorAll('button, [role="button"], a.button, input[type="submit"], input[type="button"]'));
    // Sort patterns by length descending to match more specific patterns first
    const sortedPatterns = [...patterns].sort((a, b) => b.length - a.length);
    // Check patterns in order of specificity (longest first)
    for (const pattern of sortedPatterns) {
      for (const btn of buttons) {
        const text = (btn.textContent || btn.value || '').trim().toLowerCase();
        if (text.length > 100) continue; // Skip buttons with very long text
        if (!isVisible(btn)) continue; // Skip hidden buttons
        if (typeof pattern === 'string' ? text.includes(pattern) : pattern.test(text)) {
          return btn;
        }
      }
    }
    return null;
  };
  const acceptPatterns = [
    'accept all', 'accept cookies', 'allow all', 'allow cookies',
    'i agree', 'i accept', 'yes, i agree', 'agree and continue',
    'alle akzeptieren', 'akzeptieren', 'alle zulassen', 'zustimmen',
    'annehmen', 'einverstanden',
    'accepter tout', 'tout accepter', "j'accepte", 'accepter et continuer', 'accepter',
    'accetta tutti', 'accetta', 'accetto',
    'aceptar todo', 'aceptar', 'acepto',
    'aceitar tudo', 'aceitar',
    'continue', 'agree',
  ];
  const rejectPatterns = [
    'reject all', 'decline all', 'deny all', 'refuse all',
    'i do not agree', 'i disagree', 'no thanks',
    'alle ablehnen', 'ablehnen', 'nicht zustimmen',
    'refuser tout', 'tout refuser', 'refuser',
    'rifiuta tutti', 'rifiuta',
    'rechazar todo', 'rechazar',
    'rejeitar tudo', 'rejeitar',
    'only necessary', 'necessary only', 'nur notwendige',
    'essential only', 'nur essentielle',
  ];
  const patterns = acceptCookies ? acceptPatterns : rejectPatterns;
  // OneTrust
  if (document.querySelector('#onetrust-banner-sdk')) {
    const selector = acceptCookies ? '#onetrust-accept-btn-handler' : '#onetrust-reject-all-handler';
    if (tryClick(selector, 'OneTrust')) return clicked;
  }
  // Google
  if (document.querySelector('[data-consent-dialog]') || document.querySelector('form[action*="consent.google"]') || document.querySelector('#CXQnmb')) {
    const selector = acceptCookies ? '#L2AGLb' : '#W0wltc';
    if (tryClick(selector, 'Google Consent')) return clicked;
  }
  // YouTube (Google-owned, custom consent element)
  if (document.querySelector('ytd-consent-bump-v2-lightbox')) {
    const btn = Array.from(document.querySelectorAll('ytd-consent-bump-v2-lightbox button'))
      .find(b => acceptCookies
        ? b.textContent.includes('Accept all') || b.ariaLabel?.includes('Accept')
        : b.textContent.includes('Reject all') || b.ariaLabel?.includes('Reject'));
    if (btn) {
      btn.click();
      clicked.push('YouTube');
      return clicked;
    }
  }
  // Cookiebot
  if (document.querySelector('#CybotCookiebotDialog')) {
    const selector = acceptCookies
      ? '#CybotCookiebotDialogBodyLevelButtonLevelOptinAllowAll, #CybotCookiebotDialogBodyButtonAccept'
      : '#CybotCookiebotDialogBodyButtonDecline, #CybotCookiebotDialogBodyLevelButtonLevelOptinDeclineAll';
    if (tryClick(selector, 'Cookiebot')) return clicked;
  }
  // Didomi
  if (document.querySelector('#didomi-host') || window.Didomi) {
    const selector = acceptCookies ? '#didomi-notice-agree-button' : '#didomi-notice-disagree-button, [data-testid="disagree-button"]';
    if (tryClick(selector, 'Didomi')) return clicked;
  }
  // Quantcast
  if (document.querySelector('.qc-cmp2-container')) {
    const selector = acceptCookies
      ? '.qc-cmp2-summary-buttons button[mode="primary"], .qc-cmp2-button[data-testid="accept-all"]'
      : '.qc-cmp2-summary-buttons button[mode="secondary"], .qc-cmp2-button[data-testid="reject-all"]';
    if (tryClick(selector, 'Quantcast')) return clicked;
  }
  // Usercentrics (shadow DOM)
  const ucRoot = document.querySelector('#usercentrics-root');
  if (ucRoot && ucRoot.shadowRoot) {
    const shadow = ucRoot.shadowRoot;
    const btn = acceptCookies
      ? shadow.querySelector('[data-testid="uc-accept-all-button"]')
      : shadow.querySelector('[data-testid="uc-deny-all-button"]');
    if (btn) { btn.click(); clicked.push('Usercentrics'); return clicked; }
  }
  // TrustArc
  if (document.querySelector('#truste-consent-track') || document.querySelector('.trustarc-banner')) {
    const selector = acceptCookies ? '#truste-consent-button, .trustarc-agree-btn' : '.trustarc-decline-btn';
    if (tryClick(selector, 'TrustArc')) return clicked;
  }
  // Klaro
  if (document.querySelector('.klaro')) {
    const selector = acceptCookies ? '.klaro .cm-btn-accept-all, .klaro .cm-btn-success' : '.klaro .cm-btn-decline';
    if (tryClick(selector, 'Klaro')) return clicked;
  }
  // BBC
  if (document.querySelector('#bbccookies, .bbccookies-banner')) {
    if (acceptCookies && tryClick('#bbccookies-continue-button', 'BBC')) return clicked;
  }
  // Amazon
  if (document.querySelector('#sp-cc') || document.querySelector('#sp-cc-accept')) {
    const selector = acceptCookies ? '#sp-cc-accept' : '#sp-cc-rejectall-link, #sp-cc-decline';
    if (tryClick(selector, 'Amazon')) return clicked;
  }
  // CookieYes
  if (document.querySelector('#cookie-law-info-bar') || document.querySelector('.cky-consent-container')) {
    const selector = acceptCookies ? '#cookie_action_close_header, .cky-btn-accept' : '.cky-btn-reject';
    if (tryClick(selector, 'CookieYes')) return clicked;
  }
  // Generic containers
  const consentContainers = [
    '[class*="cookie-banner"]', '[class*="cookie-consent"]', '[class*="cookie-notice"]',
    '[class*="cookieBanner"]', '[class*="cookieConsent"]', '[class*="cookieNotice"]',
    '[id*="cookie-banner"]', '[id*="cookie-consent"]', '[id*="cookie-notice"]',
    '[class*="consent-banner"]', '[class*="consent-modal"]', '[class*="consent-dialog"]',
    '[class*="gdpr"]', '[id*="gdpr"]', '[class*="privacy-banner"]', '[class*="privacy-notice"]',
    '[role="dialog"][aria-label*="cookie" i]', '[role="dialog"][aria-label*="consent" i]',
  ];
  for (const containerSel of consentContainers) {
    const containers = document.querySelectorAll(containerSel);
    for (const container of containers) {
      if (!isVisible(container)) continue;
      // Skip html/body elements that might match
      if (container.tagName === 'HTML' || container.tagName === 'BODY') continue;
      const btn = findButtonByText(patterns, container);
      if (btn) { btn.click(); clicked.push('Generic (' + containerSel + ')'); return clicked; }
    }
  }
  // Last resort: find button near cookie-related text content
  // Look for visible containers that mention "cookie" and have accept/reject buttons
  // Include custom elements (Reddit uses rpl-modal-card, etc.)
  const allContainers = document.querySelectorAll('div, section, aside, [class*="modal"], [class*="dialog"], [role="dialog"]');
  for (const container of allContainers) {
    if (!isVisible(container)) continue;
    const text = container.textContent?.toLowerCase() || '';
    // Must mention cookies and be reasonably sized (not the whole page)
    if (text.includes('cookie') && text.length > 100 && text.length < 3000) {
      const btn = findButtonByText(patterns, container);
      if (btn && isVisible(btn)) {
        btn.click();
        clicked.push('Generic (text-based)');
        return clicked;
      }
    }
  }
  // Final fallback: look for any visible button with exact accept/reject text
  // that appears alongside cookie-related content on the page
  if (document.body.textContent?.toLowerCase().includes('cookie')) {
    const exactPatterns = acceptCookies
      ? ['accept all', 'accept cookies', 'allow all', 'i agree', 'alle akzeptieren']
      : ['reject all', 'decline all', 'reject optional', 'alle ablehnen'];
    const singleWordPatterns = acceptCookies ? ['accept', 'agree'] : ['reject', 'decline'];
    const buttons = document.querySelectorAll('button, [role="button"]');
    for (const btn of buttons) {
      if (!isVisible(btn)) continue;
      const text = (btn.textContent || '').trim().toLowerCase();
      if (exactPatterns.some(p => text.includes(p))) {
        btn.click();
        clicked.push('Generic (exact match)');
        return clicked;
      }
    }
    // Try single-word matches as last resort
    for (const btn of buttons) {
      if (!isVisible(btn)) continue;
      const text = (btn.textContent || '').trim().toLowerCase();
      if (singleWordPatterns.some(p => text === p)) {
        btn.click();
        clicked.push('Generic (single word)');
        return clicked;
      }
    }
  }
  return clicked;
}`;
const IFRAME_DISMISS_SCRIPT = `(acceptCookies) => {
const clicked = [];
const isVisible = (el) => {
if (!el) return false;
const style = getComputedStyle(el);
return style.display !== 'none' &&
style.visibility !== 'hidden' &&
style.opacity !== '0' &&
(el.offsetParent !== null || style.position === 'fixed' || style.position === 'sticky');
};
const rejectIndicators = ['do not', "don't", 'nicht', 'no ', 'refuse', 'reject', 'decline', 'deny', 'disagree', 'ablehnen', 'refuser', 'rifiuta', 'rechazar', 'manage', 'settings', 'options', 'customize'];
const acceptIndicators = ['accept', 'agree', 'allow', 'yes', 'ok', 'got it', 'continue', 'akzeptieren', 'zustimmen', 'accepter', 'accetta', 'aceptar'];
const isRejectButton = (text) => rejectIndicators.some(p => text.includes(p));
const isAcceptButton = (text) => acceptIndicators.some(p => text.includes(p)) && !isRejectButton(text);
const buttons = document.querySelectorAll('button, [role="button"]');
for (const btn of buttons) {
const text = (btn.textContent || '').trim().toLowerCase();
if (!isVisible(btn)) continue;
const shouldClick = acceptCookies ? isAcceptButton(text) : isRejectButton(text);
if (shouldClick) { btn.click(); clicked.push('iframe: ' + text.slice(0, 30)); return clicked; }
}
const spBtn = acceptCookies
? document.querySelector('[title="Accept All"], [title="Accept"], [aria-label*="Accept"]')
: document.querySelector('[title="Reject All"], [title="Reject"], [aria-label*="Reject"]');
if (spBtn) { spBtn.click(); clicked.push('Sourcepoint iframe'); return clicked; }
return clicked;
}`;
// Recursively collect all frames
function collectFrames(frameTree, frames = []) {
frames.push({ id: frameTree.frame.id, url: frameTree.frame.url });
if (frameTree.childFrames) {
for (const child of frameTree.childFrames) {
collectFrames(child, frames);
}
}
return frames;
}
// Global timeout
const globalTimeout = setTimeout(() => {
console.error("✗ Global timeout exceeded (30s)");
process.exit(1);
}, 30000);
try {
log("connecting...");
const cdp = await connect(5000);
log("getting pages...");
const pages = await cdp.getPages();
const page = pages.at(-1);
if (!page) {
console.error("✗ No active tab found");
process.exit(1);
}
log("attaching to page...");
const sessionId = await cdp.attachToPage(page.targetId);
// Wait a bit for consent dialogs to appear
await new Promise((r) => setTimeout(r, 500));
log("trying main page...");
let result = await cdp.evaluate(sessionId, `(${COOKIE_DISMISS_SCRIPT})(${!reject})`);
// If nothing found, try iframes
if (result.length === 0) {
log("trying iframes...");
try {
const frameTree = await cdp.getFrameTree(sessionId);
const frames = collectFrames(frameTree);
for (const frame of frames) {
if (frame.url === "about:blank" || frame.url.startsWith("javascript:")) continue;
if (
frame.url.includes("sp_message") ||
frame.url.includes("consent") ||
frame.url.includes("privacy") ||
frame.url.includes("cmp") ||
frame.url.includes("sourcepoint") ||
frame.url.includes("cookie") ||
frame.url.includes("privacy-mgmt")
) {
log("trying frame:", frame.url.slice(0, 60));
try {
const frameResult = await cdp.evaluateInFrame(
sessionId,
frame.id,
`(${IFRAME_DISMISS_SCRIPT})(${!reject})`
);
if (frameResult.length > 0) {
result = frameResult;
break;
}
} catch (e) {
log("frame error:", e.message);
}
}
}
} catch (e) {
log("getFrameTree error:", e.message);
}
}
if (result.length > 0) {
console.log(`✓ Dismissed cookie dialog (${mode}): ${result.join(", ")}`);
} else {
console.log(`○ No cookie dialog found to ${mode}`);
}
log("closing...");
cdp.close();
log("done");
} catch (e) {
console.error("✗", e.message);
process.exit(1);
} finally {
clearTimeout(globalTimeout);
setTimeout(() => process.exit(0), 100);
}
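The iframe fallback above classifies consent buttons purely by label text, and it checks reject indicators before accept indicators so a label like "do not accept" never counts as an accept button. A standalone sketch of that heuristic (indicator lists abridged from IFRAME_DISMISS_SCRIPT):

```javascript
// Text-based consent-button classification, as in IFRAME_DISMISS_SCRIPT.
// Reject indicators are checked first: a label containing both kinds of
// keyword (e.g. "do not accept") must count as a reject button.
const rejectIndicators = ["do not", "don't", "no ", "refuse", "reject", "decline", "deny", "ablehnen", "manage", "settings"];
const acceptIndicators = ["accept", "agree", "allow", "yes", "ok", "got it", "continue", "akzeptieren"];

const isRejectButton = (text) => rejectIndicators.some((p) => text.includes(p));
const isAcceptButton = (text) =>
  acceptIndicators.some((p) => text.includes(p)) && !isRejectButton(text);

console.log(isAcceptButton("accept all cookies")); // true
console.log(isRejectButton("reject all"));         // true
console.log(isAcceptButton("do not accept"));      // false: "do not" takes precedence
```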
/**
* Emacs Bridge Extension for pi
*
* Provides tools to interact with Emacs via pi-bridge.el HTTP server.
*
* Tools:
* - emacs_buffers: List open buffers
* - emacs_read: Read buffer contents
* - emacs_context: Get current editing context
* - emacs_goto: Navigate to file/line/buffer
* - emacs_insert: Insert text
* - emacs_edit: Edit a region
 * - emacs_replace: Replace entire buffer contents
* - emacs_eval: Evaluate elisp
* - emacs_project_files: List project files
* - emacs_project_buffers: List project buffers
*/
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { Type } from "@sinclair/typebox";
const EMACS_PORT = 9876;
const EMACS_HOST = "127.0.0.1";
const BASE_URL = `http://${EMACS_HOST}:${EMACS_PORT}`;
// Helper to make requests to Emacs
async function emacsRequest(
method: "GET" | "POST",
endpoint: string,
body?: object,
signal?: AbortSignal
): Promise<{ data?: any; error?: string }> {
try {
const url = `${BASE_URL}${endpoint}`;
const options: RequestInit = {
method,
signal,
headers: body ? { "Content-Type": "application/json" } : undefined,
body: body ? JSON.stringify(body) : undefined,
};
const response = await fetch(url, options);
const data = await response.json();
if (data.error) {
return { error: data.message || "Unknown error from Emacs" };
}
return { data };
} catch (e: any) {
if (e.cause?.code === "ECONNREFUSED") {
return {
error:
"Cannot connect to Emacs. Make sure pi-bridge is running: M-x pi-bridge-start",
};
}
return { error: e.message };
}
}
function formatResult(data: any): string {
if (typeof data === "string") return data;
return JSON.stringify(data, null, 2);
}
export default function (pi: ExtensionAPI) {
// ═══════════════════════════════════════════════════════════════════════════
// emacs_buffers - List open buffers
// ═══════════════════════════════════════════════════════════════════════════
pi.registerTool({
name: "emacs_buffers",
label: "Emacs Buffers",
description:
"List all open buffers in Emacs with their metadata (name, file, mode, modified status)",
parameters: Type.Object({}),
async execute(toolCallId, params, signal) {
const { data, error } = await emacsRequest("GET", "/buffers", undefined, signal);
if (error) {
return {
content: [{ type: "text", text: `Error: ${error}` }],
isError: true,
};
}
const buffers = data.buffers || [];
let output = `Found ${buffers.length} buffers:\n\n`;
for (const buf of buffers) {
const modified = buf.modified ? " [modified]" : "";
const readonly = buf.readonly ? " [readonly]" : "";
output += `• ${buf.name}${modified}${readonly}\n`;
if (buf.file) output += ` File: ${buf.file}\n`;
output += ` Mode: ${buf.mode}, Line: ${buf.line}, Size: ${buf.size}\n\n`;
}
return {
content: [{ type: "text", text: output }],
details: { buffers },
};
},
});
// ═══════════════════════════════════════════════════════════════════════════
// emacs_read - Read buffer contents
// ═══════════════════════════════════════════════════════════════════════════
pi.registerTool({
name: "emacs_read",
label: "Emacs Read Buffer",
description:
"Read the contents of an Emacs buffer. If no name provided, reads the current buffer.",
parameters: Type.Object({
name: Type.Optional(
Type.String({ description: "Buffer name. Omit for current buffer." })
),
}),
async execute(toolCallId, params, signal) {
const endpoint = params.name ? `/buffer?name=${encodeURIComponent(params.name)}` : "/buffer";
const { data, error } = await emacsRequest("GET", endpoint, undefined, signal);
if (error) {
return {
content: [{ type: "text", text: `Error: ${error}` }],
isError: true,
};
}
let output = `Buffer: ${data.name}\n`;
if (data.file) output += `File: ${data.file}\n`;
output += `Mode: ${data.mode}\n`;
output += `Point: ${data.point} (line ${data.line}, col ${data.column})\n`;
output += `Modified: ${data.modified}\n`;
output += `\n--- Content ---\n${data.content}`;
return {
content: [{ type: "text", text: output }],
details: {
name: data.name,
file: data.file,
mode: data.mode,
point: data.point,
line: data.line,
},
};
},
});
// ═══════════════════════════════════════════════════════════════════════════
// emacs_context - Get current editing context
// ═══════════════════════════════════════════════════════════════════════════
pi.registerTool({
name: "emacs_context",
label: "Emacs Context",
description:
"Get current editing context: buffer, point, line, region, current function, project root",
parameters: Type.Object({}),
async execute(toolCallId, params, signal) {
const { data, error } = await emacsRequest("GET", "/context", undefined, signal);
if (error) {
return {
content: [{ type: "text", text: `Error: ${error}` }],
isError: true,
};
}
let output = `Current Context:\n`;
output += ` Buffer: ${data.buffer}\n`;
if (data.file) output += ` File: ${data.file}\n`;
output += ` Mode: ${data.mode}\n`;
output += ` Position: line ${data.line}, column ${data.column} (point ${data.point})\n`;
if (data.defun) output += ` Function: ${data.defun}\n`;
if (data.project) output += ` Project: ${data.project}\n`;
output += ` Current line: ${data.line_content}`;
if (data.region_active) {
output += `\n\nActive region:\n${data.region_text}`;
}
return {
content: [{ type: "text", text: output }],
details: data,
};
},
});
// ═══════════════════════════════════════════════════════════════════════════
// emacs_goto - Navigate to location
// ═══════════════════════════════════════════════════════════════════════════
pi.registerTool({
name: "emacs_goto",
label: "Emacs Goto",
description:
"Navigate to a location in Emacs. Can open a file, switch to a buffer, or go to a specific line/column.",
parameters: Type.Object({
file: Type.Optional(Type.String({ description: "File path to open" })),
buffer: Type.Optional(Type.String({ description: "Buffer name to switch to" })),
line: Type.Optional(Type.Number({ description: "Line number (1-indexed)" })),
column: Type.Optional(Type.Number({ description: "Column number (0-indexed)" })),
}),
async execute(toolCallId, params, signal) {
const { data, error } = await emacsRequest("POST", "/goto", params, signal);
if (error) {
return {
content: [{ type: "text", text: `Error: ${error}` }],
isError: true,
};
}
return {
content: [
{
type: "text",
text: `Navigated to ${data.buffer} at line ${data.line}`,
},
],
details: data,
};
},
});
// ═══════════════════════════════════════════════════════════════════════════
// emacs_insert - Insert text
// ═══════════════════════════════════════════════════════════════════════════
pi.registerTool({
name: "emacs_insert",
label: "Emacs Insert",
description:
"Insert text into an Emacs buffer at point, or replace the active region.",
parameters: Type.Object({
text: Type.String({ description: "Text to insert" }),
buffer: Type.Optional(Type.String({ description: "Target buffer name" })),
point: Type.Optional(Type.Number({ description: "Position to insert at" })),
replace_region: Type.Optional(
Type.Boolean({ description: "If true, replace the active region" })
),
}),
async execute(toolCallId, params, signal) {
const { data, error } = await emacsRequest("POST", "/insert", params, signal);
if (error) {
return {
content: [{ type: "text", text: `Error: ${error}` }],
isError: true,
};
}
return {
content: [
{
type: "text",
text: `Inserted text into ${data.buffer} at point ${data.point}`,
},
],
details: data,
};
},
});
// ═══════════════════════════════════════════════════════════════════════════
// emacs_edit - Edit a region
// ═══════════════════════════════════════════════════════════════════════════
pi.registerTool({
name: "emacs_edit",
label: "Emacs Edit Region",
description:
"Replace text between start and end positions with new text.",
parameters: Type.Object({
start: Type.Number({ description: "Start position (point)" }),
end: Type.Number({ description: "End position (point)" }),
text: Type.String({ description: "Replacement text" }),
buffer: Type.Optional(Type.String({ description: "Target buffer name" })),
}),
async execute(toolCallId, params, signal) {
const { data, error } = await emacsRequest("POST", "/edit-region", params, signal);
if (error) {
return {
content: [{ type: "text", text: `Error: ${error}` }],
isError: true,
};
}
return {
content: [
{
type: "text",
text: `Edited region in ${data.buffer}`,
},
],
details: data,
};
},
});
// ═══════════════════════════════════════════════════════════════════════════
// emacs_replace - Replace entire buffer
// ═══════════════════════════════════════════════════════════════════════════
pi.registerTool({
name: "emacs_replace",
label: "Emacs Replace Buffer",
description: "Replace the entire contents of a buffer.",
parameters: Type.Object({
content: Type.String({ description: "New buffer content" }),
buffer: Type.Optional(Type.String({ description: "Target buffer name" })),
}),
async execute(toolCallId, params, signal) {
const { data, error } = await emacsRequest("POST", "/replace", params, signal);
if (error) {
return {
content: [{ type: "text", text: `Error: ${error}` }],
isError: true,
};
}
return {
content: [
{
type: "text",
text: `Replaced contents of ${data.buffer}`,
},
],
details: data,
};
},
});
// ═══════════════════════════════════════════════════════════════════════════
// emacs_eval - Evaluate elisp
// ═══════════════════════════════════════════════════════════════════════════
pi.registerTool({
name: "emacs_eval",
label: "Emacs Eval",
description:
"Evaluate arbitrary Emacs Lisp code. Use for operations not covered by other tools.",
parameters: Type.Object({
code: Type.String({ description: "Elisp code to evaluate" }),
}),
async execute(toolCallId, params, signal) {
const { data, error } = await emacsRequest("POST", "/eval", params, signal);
if (error) {
return {
content: [{ type: "text", text: `Error: ${error}` }],
isError: true,
};
}
return {
content: [
{
type: "text",
text: `Result: ${data.result}`,
},
],
details: data,
};
},
});
// ═══════════════════════════════════════════════════════════════════════════
// emacs_project_files - List project files
// ═══════════════════════════════════════════════════════════════════════════
pi.registerTool({
name: "emacs_project_files",
label: "Emacs Project Files",
description: "List all files in the current Emacs project.",
parameters: Type.Object({}),
async execute(toolCallId, params, signal) {
const { data, error } = await emacsRequest("GET", "/project/files", undefined, signal);
if (error) {
return {
content: [{ type: "text", text: `Error: ${error}` }],
isError: true,
};
}
const files = data.files || [];
let output = `Project: ${data.root}\n`;
output += `Files (${files.length}):\n\n`;
output += files.slice(0, 100).join("\n");
if (files.length > 100) {
output += `\n\n... and ${files.length - 100} more`;
}
return {
content: [{ type: "text", text: output }],
details: { root: data.root, count: files.length },
};
},
});
// ═══════════════════════════════════════════════════════════════════════════
// emacs_project_buffers - List project buffers
// ═══════════════════════════════════════════════════════════════════════════
pi.registerTool({
name: "emacs_project_buffers",
label: "Emacs Project Buffers",
description: "List all buffers belonging to the current Emacs project.",
parameters: Type.Object({}),
async execute(toolCallId, params, signal) {
const { data, error } = await emacsRequest(
"GET",
"/project/buffers",
undefined,
signal
);
if (error) {
return {
content: [{ type: "text", text: `Error: ${error}` }],
isError: true,
};
}
const buffers = data.buffers || [];
let output = `Project: ${data.root}\n`;
output += `Open buffers (${buffers.length}):\n\n`;
for (const buf of buffers) {
const modified = buf.modified ? " [modified]" : "";
output += `• ${buf.name}${modified}\n`;
output += ` ${buf.file}\n`;
output += ` Mode: ${buf.mode}, Line: ${buf.line}\n\n`;
}
return {
content: [{ type: "text", text: output }],
details: { root: data.root, buffers },
};
},
});
// Register /emacs command for quick status check
pi.registerCommand("emacs", {
description: "Check Emacs bridge connection status",
handler: async (args, ctx) => {
const { data, error } = await emacsRequest("GET", "/health");
if (error) {
ctx.ui.notify(`Emacs bridge: ${error}`, "error");
} else {
ctx.ui.notify(
`Emacs bridge connected (v${data.version})`,
"success"
);
}
},
});
}
#!/usr/bin/env node
import { connect } from "./cdp.js";
const DEBUG = process.env.DEBUG === "1";
const log = DEBUG ? (...args) => console.error("[debug]", ...args) : () => {};
const code = process.argv.slice(2).join(" ");
if (!code) {
console.log("Usage: eval.js 'code'");
console.log("\nExamples:");
console.log(' eval.js "document.title"');
console.log(" eval.js \"document.querySelectorAll('a').length\"");
process.exit(1);
}
// Global timeout
const globalTimeout = setTimeout(() => {
console.error("✗ Global timeout exceeded (45s)");
process.exit(1);
}, 45000);
try {
log("connecting...");
const cdp = await connect(5000);
log("getting pages...");
const pages = await cdp.getPages();
const page = pages.at(-1);
if (!page) {
console.error("✗ No active tab found");
process.exit(1);
}
log("attaching to page...");
const sessionId = await cdp.attachToPage(page.targetId);
log("evaluating...");
const expression = `(async () => { return (${code}); })()`;
const result = await cdp.evaluate(sessionId, expression);
log("formatting result...");
if (Array.isArray(result)) {
for (let i = 0; i < result.length; i++) {
if (i > 0) console.log("");
for (const [key, value] of Object.entries(result[i])) {
console.log(`${key}: ${value}`);
}
}
} else if (typeof result === "object" && result !== null) {
for (const [key, value] of Object.entries(result)) {
console.log(`${key}: ${value}`);
}
} else {
console.log(result);
}
log("closing...");
cdp.close();
log("done");
} catch (e) {
console.error("✗", e.message);
process.exit(1);
} finally {
clearTimeout(globalTimeout);
setTimeout(() => process.exit(0), 100);
}
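eval.js accepts both plain expressions and code using `await` because it wraps the user's code in an async IIFE, producing a single promise for the CDP evaluate call to settle (assuming cdp.js awaits promises when evaluating). A local sketch of the same wrapping:

```javascript
// The wrapper used by eval.js: an async IIFE around the user's code.
const wrap = (code) => `(async () => { return (${code}); })()`;

// Evaluating the wrapped string always yields a promise,
// whether or not the code itself uses await.
const p1 = eval(wrap("1 + 2"));
const p2 = eval(wrap("await Promise.resolve('ok')"));
p1.then((v) => console.log(v)); // 3
p2.then((v) => console.log(v)); // ok
```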
/**
* Minimal guardrails extension for pi
*
* Prompts for confirmation before:
* - Deleting files not tracked in git
* - Installing packages system-wide
*
* Also sends OS notification when pi needs input.
*/
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { isToolCallEventType } from "@mariozechner/pi-coding-agent";
import { execSync, exec } from "node:child_process";
import { platform } from "node:os";
// Patterns for system-wide installs
const SYSTEM_INSTALL_PATTERNS = [
/\bbrew\s+install\b/,
/\bapt(-get)?\s+install\b/,
/\byum\s+install\b/,
/\bdnf\s+install\b/,
/\bpacman\s+-S\b/,
/\bnpm\s+(install|i)\s+(-g|--global)\b/,
/\bnpm\s+(-g|--global)\s+(install|i)\b/,
/\bpnpm\s+(add|install)\s+(-g|--global)\b/,
/\byarn\s+global\s+add\b/,
/\bpip3?\s+install\b(?!.*(-e|--editable|venv|virtualenv|\.venv))/,
/\bgem\s+install\b/,
/\bcargo\s+install\b/,
/\bgo\s+install\b/,
/\bsudo\b/,
];
// Patterns for file deletion
const DELETE_PATTERNS = [
/\brm\s+(-[rRfiv]+\s+)*(.+)/,
/\brmdir\b/,
/\bunlink\b/,
/\btrash\b/,
/\bgit\s+clean\s+-[a-zA-Z]*f/,
];
function isSystemInstall(command: string): string | null {
for (const pattern of SYSTEM_INSTALL_PATTERNS) {
if (pattern.test(command)) {
return command.match(pattern)?.[0] || "system install";
}
}
return null;
}
function extractDeleteTargets(command: string): string[] {
const targets: string[] = [];
// Match rm commands and extract file paths
const rmMatch = command.match(/\brm\s+(-[rRfiv]+\s+)*(.+)/);
if (rmMatch) {
const pathsPart = rmMatch[2];
// Split on spaces, but handle quoted paths simply
const paths = pathsPart.split(/\s+/).filter(p =>
p && !p.startsWith('-') && p !== '&&' && p !== '||' && p !== ';'
);
targets.push(...paths);
}
return targets;
}
function isTrackedByGit(filepath: string, cwd: string): boolean {
try {
// Check if file is tracked in git
execSync(`git ls-files --error-unmatch "${filepath}"`, {
cwd,
stdio: 'pipe',
});
return true;
} catch {
return false;
}
}
function isInGitRepo(cwd: string): boolean {
try {
execSync('git rev-parse --git-dir', { cwd, stdio: 'pipe' });
return true;
} catch {
return false;
}
}
function sendNotification(title: string, message: string) {
  const os = platform();
  // Escape double quotes so interpolated text cannot break out of the shell command
  const t = title.replace(/"/g, '\\"');
  const m = message.replace(/"/g, '\\"');
  try {
    if (os === "darwin") {
      // macOS - use osascript
      exec(`osascript -e 'display notification "${m}" with title "${t}" sound name "Glass"'`);
    } else if (os === "linux") {
      // Linux - use notify-send
      exec(`notify-send "${t}" "${m}"`);
    } else if (os === "win32") {
      // Windows - use PowerShell (requires the BurntToast module)
      exec(`powershell -Command "New-BurntToastNotification -Text '${t}', '${m}'"`);
    }
  } catch {
    // Silently fail if notification fails
  }
}
export default function (pi: ExtensionAPI) {
// Notify when agent completes and needs input
pi.on("agent_end", async (_event, ctx) => {
// Only notify if we have a UI (interactive mode)
if (!ctx.hasUI) return;
sendNotification("pi", "Ready for input");
});
pi.on("tool_call", async (event, ctx) => {
if (!isToolCallEventType("bash", event)) return;
if (!ctx.hasUI) return; // Skip in non-interactive modes
const command = event.input.command;
// Check for system-wide installs
const installMatch = isSystemInstall(command);
if (installMatch) {
const ok = await ctx.ui.confirm(
"⚠️ System-wide Install",
`This will install packages system-wide:\n\n ${command}\n\nAllow?`
);
if (!ok) {
return { block: true, reason: "User declined system-wide install" };
}
}
// Check for file deletions
for (const pattern of DELETE_PATTERNS) {
if (pattern.test(command)) {
const targets = extractDeleteTargets(command);
const inGitRepo = isInGitRepo(ctx.cwd);
// Find untracked files
const untrackedFiles = targets.filter(target => {
// Skip obvious temp/generated paths
if (target.includes('node_modules') ||
target.includes('.cache') ||
target.includes('/tmp/') ||
target.startsWith('/tmp/')) {
return false;
}
if (!inGitRepo) return true; // Not in git repo = always warn
return !isTrackedByGit(target, ctx.cwd);
});
if (untrackedFiles.length > 0 || !inGitRepo) {
const message = inGitRepo
? `Deleting untracked files:\n\n ${untrackedFiles.join('\n ')}\n\nThese are NOT in version control. Allow?`
: `Deleting files outside git repository:\n\n ${command}\n\nAllow?`;
const ok = await ctx.ui.confirm("⚠️ File Deletion", message);
if (!ok) {
return { block: true, reason: "User declined file deletion" };
}
}
break;
}
}
});
}
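The bash-command hooks above hinge on regex matching, and the pip pattern's negative lookahead in particular is easy to get wrong. A minimal sketch exercising a subset of SYSTEM_INSTALL_PATTERNS copied from this file:

```javascript
// Subset of SYSTEM_INSTALL_PATTERNS from guardrails.ts.
const patterns = [
  /\bnpm\s+(install|i)\s+(-g|--global)\b/,
  /\bpip3?\s+install\b(?!.*(-e|--editable|venv|virtualenv|\.venv))/,
  /\bsudo\b/,
];
const isSystemInstall = (command) => patterns.some((p) => p.test(command));

console.log(isSystemInstall("npm install -g typescript")); // true: global npm install
console.log(isSystemInstall("npm install typescript"));    // false: local install
console.log(isSystemInstall("pip install -e ."));          // false: lookahead skips editable installs
console.log(isSystemInstall("sudo apt-get update"));       // true: any sudo command
```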
#!/usr/bin/env node
import { existsSync, readdirSync, readFileSync, statSync, watch } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";
const LOG_ROOT = join(homedir(), ".cache/agent-web/logs");
function findLatestFile() {
if (!existsSync(LOG_ROOT)) return null;
const dirs = readdirSync(LOG_ROOT)
.filter((name) => /^\d{4}-\d{2}-\d{2}$/.test(name))
.map((name) => join(LOG_ROOT, name))
.filter((path) => statSafe(path)?.isDirectory())
.sort();
if (dirs.length === 0) return null;
const latestDir = dirs[dirs.length - 1];
const files = readdirSync(latestDir)
.filter((name) => name.endsWith(".jsonl"))
.map((name) => join(latestDir, name))
.map((path) => ({ path, mtime: statSafe(path)?.mtimeMs || 0 }))
.sort((a, b) => b.mtime - a.mtime);
return files[0]?.path || null;
}
function statSafe(path) {
try {
return statSync(path);
} catch {
return null;
}
}
const argIndex = process.argv.indexOf("--file");
const filePath = argIndex !== -1 ? process.argv[argIndex + 1] : findLatestFile();
const follow = process.argv.includes("--follow");
if (!filePath) {
console.error("✗ No log file found");
process.exit(1);
}
function readAll() {
if (!existsSync(filePath)) return;
const data = readFileSync(filePath, "utf8");
if (data.length > 0) process.stdout.write(data);
}
let offset = 0;
function readNew() {
if (!existsSync(filePath)) return;
const data = readFileSync(filePath, "utf8");
if (data.length <= offset) return;
const chunk = data.slice(offset);
offset = data.length;
process.stdout.write(chunk);
}
try {
readAll();
if (!follow) process.exit(0);
offset = statSafe(filePath)?.size || 0;
watch(filePath, { persistent: true }, () => readNew());
console.log(`✓ tailing ${filePath}`);
} catch (e) {
console.error("✗ tail failed:", e.message);
process.exit(1);
}
#!/usr/bin/env node
import { connect } from "./cdp.js";
const DEBUG = process.env.DEBUG === "1";
const log = DEBUG ? (...args) => console.error("[debug]", ...args) : () => {};
const url = process.argv[2];
const newTab = process.argv[3] === "--new";
if (!url) {
console.log("Usage: nav.js <url> [--new]");
console.log("\nExamples:");
console.log(" nav.js https://example.com # Navigate current tab");
console.log(" nav.js https://example.com --new # Open in new tab");
process.exit(1);
}
// Global timeout
const globalTimeout = setTimeout(() => {
console.error("✗ Global timeout exceeded (45s)");
process.exit(1);
}, 45000);
try {
log("connecting...");
const cdp = await connect(5000);
log("getting pages...");
let targetId;
if (newTab) {
log("creating new tab...");
const { targetId: newTargetId } = await cdp.send("Target.createTarget", {
url: "about:blank",
});
targetId = newTargetId;
} else {
const pages = await cdp.getPages();
const page = pages.at(-1);
if (!page) {
console.error("✗ No active tab found");
process.exit(1);
}
targetId = page.targetId;
}
log("attaching to page...");
const sessionId = await cdp.attachToPage(targetId);
log("navigating...");
await cdp.navigate(sessionId, url);
console.log(newTab ? "✓ Opened:" : "✓ Navigated to:", url);
log("closing...");
cdp.close();
log("done");
} catch (e) {
console.error("✗", e.message);
process.exit(1);
} finally {
clearTimeout(globalTimeout);
setTimeout(() => process.exit(0), 100);
}
#!/usr/bin/env node
import { existsSync, readdirSync, readFileSync, statSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";
const LOG_ROOT = join(homedir(), ".cache/agent-web/logs");
function statSafe(path) {
try {
return statSync(path);
} catch {
return null;
}
}
function findLatestFile() {
if (!existsSync(LOG_ROOT)) return null;
const dirs = readdirSync(LOG_ROOT)
.filter((name) => /^\d{4}-\d{2}-\d{2}$/.test(name))
.map((name) => join(LOG_ROOT, name))
.filter((path) => statSafe(path)?.isDirectory())
.sort();
if (dirs.length === 0) return null;
const latestDir = dirs[dirs.length - 1];
const files = readdirSync(latestDir)
.filter((name) => name.endsWith(".jsonl"))
.map((name) => join(latestDir, name))
.map((path) => ({ path, mtime: statSafe(path)?.mtimeMs || 0 }))
.sort((a, b) => b.mtime - a.mtime);
return files[0]?.path || null;
}
const argIndex = process.argv.indexOf("--file");
const filePath = argIndex !== -1 ? process.argv[argIndex + 1] : findLatestFile();
if (!filePath) {
console.error("✗ No log file found");
process.exit(1);
}
const statusCounts = new Map();
const failures = [];
let totalResponses = 0;
let totalRequests = 0;
try {
const data = readFileSync(filePath, "utf8");
const lines = data.split("\n").filter(Boolean);
for (const line of lines) {
let entry;
try {
entry = JSON.parse(line);
} catch {
continue;
}
if (entry.type === "network.request") {
totalRequests += 1;
} else if (entry.type === "network.response") {
totalResponses += 1;
const status = String(entry.status ?? "unknown");
statusCounts.set(status, (statusCounts.get(status) || 0) + 1);
} else if (entry.type === "network.failure") {
failures.push({
requestId: entry.requestId,
errorText: entry.errorText,
});
}
}
} catch (e) {
console.error("✗ summary failed:", e.message);
process.exit(1);
}
console.log(`file: ${filePath}`);
console.log(`requests: ${totalRequests}`);
console.log(`responses: ${totalResponses}`);
const statuses = Array.from(statusCounts.entries()).sort((a, b) => {
  const na = Number(a[0]);
  const nb = Number(b[0]);
  if (Number.isNaN(na)) return 1; // non-numeric statuses (e.g. "unknown") sort last
  if (Number.isNaN(nb)) return -1;
  return na - nb;
});
for (const [status, count] of statuses) {
console.log(`status ${status}: ${count}`);
}
if (failures.length > 0) {
console.log("failures:");
for (const failure of failures.slice(0, 10)) {
console.log(`- ${failure.errorText || "unknown"} (${failure.requestId})`);
}
if (failures.length > 10) {
console.log(`- ... ${failures.length - 10} more`);
}
}
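net-summary.js and logs-tail.js both read the newest JSONL file under ~/.cache/agent-web/logs (presumably written by watch.js). A sketch of the entry shapes the summarizer understands, run against simulated lines with the same counting logic as above; the field names (type, status, requestId, errorText) match the parser, the sample values are made up:

```javascript
// Illustrative JSONL log entries in the shape net-summary.js parses.
const lines = [
  '{"type":"network.request","requestId":"42.1"}',
  '{"type":"network.response","status":200}',
  '{"type":"network.response","status":404}',
  '{"type":"network.failure","requestId":"42.1","errorText":"net::ERR_ABORTED"}',
  "not json", // malformed lines are skipped, as in the real parser
];
const statusCounts = new Map();
const failures = [];
let totalRequests = 0;
let totalResponses = 0;
for (const line of lines) {
  let entry;
  try {
    entry = JSON.parse(line);
  } catch {
    continue;
  }
  if (entry.type === "network.request") {
    totalRequests += 1;
  } else if (entry.type === "network.response") {
    totalResponses += 1;
    const status = String(entry.status ?? "unknown");
    statusCounts.set(status, (statusCounts.get(status) || 0) + 1);
  } else if (entry.type === "network.failure") {
    failures.push(entry.errorText);
  }
}
console.log(totalRequests, totalResponses); // 1 2
console.log(statusCounts.get("404"));       // 1
console.log(failures[0]);                   // net::ERR_ABORTED
```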
{
"name": "web-browser",
"type": "module",
"dependencies": {
"ws": "^8.18.0"
},
"pi": {
"extensions": ["./index.ts"]
}
}
#!/usr/bin/env node
import { connect } from "./cdp.js";
const DEBUG = process.env.DEBUG === "1";
const log = DEBUG ? (...args) => console.error("[debug]", ...args) : () => {};
const message = process.argv.slice(2).join(" ");
if (!message) {
console.log("Usage: pick.js 'message'");
console.log("\nExample:");
console.log(' pick.js "Click the submit button"');
process.exit(1);
}
// Global timeout - 5 minutes for interactive picking
const globalTimeout = setTimeout(() => {
console.error("✗ Global timeout exceeded (5m)");
process.exit(1);
}, 300000);
const PICK_SCRIPT = `(message) => {
if (!message) throw new Error("pick() requires a message parameter");
return new Promise((resolve) => {
const selections = [];
const selectedElements = new Set();
const overlay = document.createElement("div");
overlay.style.cssText = "position:fixed;top:0;left:0;width:100%;height:100%;z-index:2147483647;pointer-events:none";
const highlight = document.createElement("div");
highlight.style.cssText = "position:absolute;border:2px solid #3b82f6;background:rgba(59,130,246,0.1);transition:all 0.1s";
overlay.appendChild(highlight);
const banner = document.createElement("div");
banner.style.cssText = "position:fixed;bottom:20px;left:50%;transform:translateX(-50%);background:#1f2937;color:white;padding:12px 24px;border-radius:8px;font:14px sans-serif;box-shadow:0 4px 12px rgba(0,0,0,0.3);pointer-events:auto;z-index:2147483647";
const updateBanner = () => {
banner.textContent = message + " (" + selections.length + " selected, Cmd/Ctrl+click to add, Enter to finish, ESC to cancel)";
};
updateBanner();
document.body.append(banner, overlay);
const cleanup = () => {
document.removeEventListener("mousemove", onMove, true);
document.removeEventListener("click", onClick, true);
document.removeEventListener("keydown", onKey, true);
overlay.remove();
banner.remove();
selectedElements.forEach((el) => { el.style.outline = ""; });
};
const onMove = (e) => {
const el = document.elementFromPoint(e.clientX, e.clientY);
if (!el || overlay.contains(el) || banner.contains(el)) return;
const r = el.getBoundingClientRect();
highlight.style.cssText = "position:absolute;border:2px solid #3b82f6;background:rgba(59,130,246,0.1);top:" + r.top + "px;left:" + r.left + "px;width:" + r.width + "px;height:" + r.height + "px";
};
const buildElementInfo = (el) => {
const parents = [];
let current = el.parentElement;
while (current && current !== document.body) {
const parentInfo = current.tagName.toLowerCase();
const id = current.id ? "#" + current.id : "";
const cls = typeof current.className === "string" && current.className.trim() ? "." + current.className.trim().split(/\\s+/).join(".") : "";
parents.push(parentInfo + id + cls);
current = current.parentElement;
}
return {
tag: el.tagName.toLowerCase(),
id: el.id || null,
class: el.className || null,
text: (el.textContent || "").trim().slice(0, 200) || null,
html: el.outerHTML.slice(0, 500),
parents: parents.join(" > "),
};
};
const onClick = (e) => {
if (banner.contains(e.target)) return;
e.preventDefault();
e.stopPropagation();
const el = document.elementFromPoint(e.clientX, e.clientY);
if (!el || overlay.contains(el) || banner.contains(el)) return;
if (e.metaKey || e.ctrlKey) {
if (!selectedElements.has(el)) {
selectedElements.add(el);
el.style.outline = "3px solid #10b981";
selections.push(buildElementInfo(el));
updateBanner();
}
} else {
cleanup();
const info = buildElementInfo(el);
resolve(selections.length > 0 ? selections : info);
}
};
const onKey = (e) => {
if (e.key === "Escape") {
e.preventDefault();
cleanup();
resolve(null);
} else if (e.key === "Enter" && selections.length > 0) {
e.preventDefault();
cleanup();
resolve(selections);
}
};
document.addEventListener("mousemove", onMove, true);
document.addEventListener("click", onClick, true);
document.addEventListener("keydown", onKey, true);
});
}`;
try {
log("connecting...");
const cdp = await connect(5000);
log("getting pages...");
const pages = await cdp.getPages();
const page = pages.at(-1);
if (!page) {
console.error("✗ No active tab found");
process.exit(1);
}
log("attaching to page...");
const sessionId = await cdp.attachToPage(page.targetId);
log("waiting for user pick...");
const expression = `(${PICK_SCRIPT})(${JSON.stringify(message)})`;
const result = await cdp.evaluate(sessionId, expression, 300000);
log("formatting result...");
if (Array.isArray(result)) {
for (let i = 0; i < result.length; i++) {
if (i > 0) console.log("");
for (const [key, value] of Object.entries(result[i])) {
console.log(`${key}: ${value}`);
}
}
} else if (typeof result === "object" && result !== null) {
for (const [key, value] of Object.entries(result)) {
console.log(`${key}: ${value}`);
}
} else {
console.log(result);
}
log("closing...");
cdp.close();
log("done");
} catch (e) {
console.error("✗", e.message);
process.exit(1);
} finally {
clearTimeout(globalTimeout);
setTimeout(() => process.exit(0), 100);
}
/**
* Code Review Extension
*
* Provides a `/review` command focused on:
* 1. Validating code correctness
* 2. Surfacing important design decisions
*
* Usage:
* - `/review` - show interactive selector
* - `/review uncommitted` - review uncommitted changes
* - `/review branch main` - review against main branch
* - `/review commit abc123` - review specific commit
* - `/review pr 123` - review PR #123 (checks out locally)
* - `/review file src/foo.ts` - review specific files (snapshot)
* - `/review custom "focus on error handling"` - custom focus
* - `/post-review` - post findings to GitHub PR (interactive selection)
* - `/post-review 123` - post findings to specific PR
* - `/post-review P1` - only post P1 priority findings
* - `/post-review P0 P1` - only post P0 and P1 findings
* - `/post-review design` - only post design findings
* - `/post-review 123 P1 correctness` - combine PR number and filters
* - `/resume-review` - restore review session after /reload
* - `/end-review` - complete review and return to original position
*
* Project-specific guidelines:
* - If REVIEW_GUIDELINES.md exists in project root, it's included in the prompt.
*
* GitHub Integration:
* - Uses `gh` CLI for PR operations (checkout, post comments)
* - `/post-review` parses review findings and posts as PR review with inline comments
*/
import type { ExtensionAPI, ExtensionContext, ExtensionCommandContext } from "@mariozechner/pi-coding-agent";
import { Text } from "@mariozechner/pi-tui";
import path from "node:path";
import { promises as fs } from "node:fs";
// State for fresh session review tracking
let reviewOriginId: string | undefined = undefined;
let currentPrNumber: number | undefined = undefined;
let currentPrBaseBranch: string | undefined = undefined;
const REVIEW_STATE_TYPE = "review-session";
type ReviewSessionState = {
active: boolean;
originId?: string;
prNumber?: number;
prBaseBranch?: string;
};
// Types for PR review comments
type ReviewFinding = {
type: "correctness" | "design";
priority?: "P0" | "P1" | "P2" | "P3";
file: string;
line?: number;
issue: string;
details: string;
suggestion?: string;
};
type ParsedReview = {
findings: ReviewFinding[];
summary?: string;
};
function setReviewWidget(ctx: ExtensionContext, active: boolean) {
if (!ctx.hasUI) return;
if (!active) {
ctx.ui.setWidget("review", undefined);
return;
}
ctx.ui.setWidget("review", (_tui, theme) => {
const text = new Text(theme.fg("warning", "Review session active — use /end-review to return"), 0, 0);
return {
render(width: number) { return text.render(width); },
invalidate() { text.invalidate(); },
};
});
}
function getReviewState(ctx: ExtensionContext): ReviewSessionState | undefined {
let state: ReviewSessionState | undefined;
for (const entry of ctx.sessionManager.getBranch()) {
if (entry.type === "custom" && entry.customType === REVIEW_STATE_TYPE) {
state = entry.data as ReviewSessionState | undefined;
}
}
return state;
}
function applyReviewState(ctx: ExtensionContext) {
const state = getReviewState(ctx);
if (state?.active && state.originId) {
reviewOriginId = state.originId;
currentPrNumber = state.prNumber;
currentPrBaseBranch = state.prBaseBranch;
setReviewWidget(ctx, true);
return;
}
reviewOriginId = undefined;
currentPrNumber = undefined;
currentPrBaseBranch = undefined;
setReviewWidget(ctx, false);
}
// Parse review findings from assistant messages
function parseReviewFindings(messages: string[]): ParsedReview {
const findings: ReviewFinding[] = [];
let summary: string | undefined;
const combinedText = messages.join("\n\n");
// Extract findings using regex patterns matching the review output format
// Format: **[CORRECTNESS]** or **[DESIGN]** followed by **File:** `path:line`
const findingPattern = /\*\*\[(CORRECTNESS|DESIGN)\]\*\*(?:\s*\*\*\[(P[0-3])\]\*\*)?\s*\n+\*\*File:\*\*\s*`([^`]+)`\s*\n+\*\*Issue:\*\*\s*([^\n]+)\s*\n+\*\*Details:\*\*\s*([\s\S]*?)(?=\*\*Suggestion:\*\*|\*\*\[(?:CORRECTNESS|DESIGN)\]\*\*|## Summary|$)\s*(?:\*\*Suggestion:\*\*\s*([\s\S]*?))?(?=\*\*\[(?:CORRECTNESS|DESIGN)\]\*\*|## Summary|$)/gi;
let match;
while ((match = findingPattern.exec(combinedText)) !== null) {
const [, type, priority, filePath, issue, details, suggestion] = match;
// Parse file:line format
const fileMatch = filePath.match(/^(.+?)(?::(\d+))?$/);
if (fileMatch) {
findings.push({
type: type.toLowerCase() as "correctness" | "design",
priority: priority as "P0" | "P1" | "P2" | "P3" | undefined,
file: fileMatch[1].trim(),
line: fileMatch[2] ? parseInt(fileMatch[2], 10) : undefined,
issue: issue.trim(),
details: details.trim(),
suggestion: suggestion?.trim(),
});
}
}
// Extract summary section
const summaryMatch = combinedText.match(/## Summary\s*([\s\S]*?)(?=## Key Questions|$)/i);
if (summaryMatch) {
summary = summaryMatch[1].trim();
}
return { findings, summary };
}
// Get the latest assistant messages from the session for parsing
function getAssistantMessages(ctx: ExtensionContext, searchAll = false): string[] {
const messages: string[] = [];
const allAssistantMessages: string[] = [];
const entries = ctx.sessionManager.getBranch();
// Get messages from current review session (after the review prompt)
let foundReviewPrompt = false;
for (const entry of entries) {
if (entry.type === "message") {
const msg = entry.message;
// Collect all assistant messages as fallback
if (msg.role === "assistant") {
const extractText = (content: any): string[] => {
if (Array.isArray(content)) {
return content.filter(p => p.type === "text").map(p => p.text);
} else if (typeof content === "string") {
return [content];
}
return [];
};
allAssistantMessages.push(...extractText(msg.content));
}
if (msg.role === "user") {
const content = typeof msg.content === "string" ? msg.content :
Array.isArray(msg.content) ? msg.content.filter(p => p.type === "text").map(p => p.text).join("\n") : "";
if (content.includes("# Code Review Guidelines") || (content.includes("Review the") && content.includes("changes"))) {
foundReviewPrompt = true;
continue;
}
}
if (foundReviewPrompt && msg.role === "assistant") {
if (Array.isArray(msg.content)) {
for (const part of msg.content) {
if (part.type === "text") {
messages.push(part.text);
}
}
} else if (typeof msg.content === "string") {
messages.push(msg.content);
}
}
}
}
// If we didn't find a review prompt or searchAll is true, return all assistant messages
// that contain review-like content
if (messages.length === 0 || searchAll) {
// Filter to only messages that look like they contain review findings
const reviewMessages = allAssistantMessages.filter(m =>
m.includes("[CORRECTNESS]") ||
m.includes("[DESIGN]") ||
m.includes("**File:**") ||
m.includes("## Summary")
);
if (reviewMessages.length > 0) {
return reviewMessages;
}
// Last resort: return all assistant messages
return allAssistantMessages;
}
return messages;
}
// Format a finding as a GitHub review comment body
function formatFindingAsComment(finding: ReviewFinding): string {
let body = "";
// Header with type and priority
const typeEmoji = finding.type === "correctness" ? "🐛" : "💡";
const priorityBadge = finding.priority ? ` **[${finding.priority}]**` : "";
body += `${typeEmoji} **${finding.type.toUpperCase()}**${priorityBadge}\n\n`;
// Issue
body += `**Issue:** ${finding.issue}\n\n`;
// Details
body += `${finding.details}\n`;
// Suggestion
if (finding.suggestion) {
body += `\n**Suggestion:** ${finding.suggestion}`;
}
return body;
}
// Get the diff position for a file:line in the PR diff
async function getDiffPosition(
pi: ExtensionAPI,
prNumber: number,
file: string,
line: number
): Promise<number | null> {
// Get the PR diff
const { stdout, code } = await pi.exec("gh", [
"api",
`repos/{owner}/{repo}/pulls/${prNumber}`,
"-H", "Accept: application/vnd.github.diff",
]);
if (code !== 0) return null;
// Parse diff to find position
// The position is the line number in the diff, not the file
const lines = stdout.split("\n");
let currentFile = "";
let position = 0;
let inFile = false;
let newLineNumber = 0;
for (const diffLine of lines) {
if (diffLine.startsWith("diff --git")) {
// Extract filename from diff header
const match = diffLine.match(/b\/(.+)$/);
currentFile = match ? match[1] : "";
inFile = currentFile === file || file.endsWith(currentFile) || currentFile.endsWith(file);
position = 0;
newLineNumber = 0;
} else if (diffLine.startsWith("@@") && inFile) {
// Parse hunk header: @@ -old,count +new,count @@
const hunkMatch = diffLine.match(/@@ -\d+(?:,\d+)? \+(\d+)(?:,\d+)? @@/);
if (hunkMatch) {
newLineNumber = parseInt(hunkMatch[1], 10) - 1;
}
// GitHub counts positions from the first "@@" header: the line directly below
// it is position 1, so only subsequent hunk headers consume a position themselves.
if (position > 0) position++;
} else if (inFile && (diffLine.startsWith("+") || diffLine.startsWith("-") || diffLine.startsWith(" "))) {
position++;
if (!diffLine.startsWith("-")) {
newLineNumber++;
if (newLineNumber === line) {
return position;
}
}
}
}
return null;
}
// Review targets
type ReviewTarget =
| { type: "uncommitted" }
| { type: "baseBranch"; branch: string }
| { type: "commit"; sha: string; title?: string }
| { type: "pullRequest"; prNumber: number; baseBranch: string; title: string }
| { type: "file"; paths: string[] }
| { type: "custom"; instructions: string };
// The review rubric focused on correctness and design decisions
const REVIEW_RUBRIC = `# Code Review Guidelines
You are reviewing code changes with two primary goals:
1. **Validate correctness** — Find bugs, logic errors, and potential runtime issues
2. **Surface design decisions** — Identify architectural choices that deserve discussion
## What to Look For
### Correctness Issues (MUST flag)
- Logic errors, off-by-one bugs, incorrect conditions
- Null/undefined handling gaps
- Race conditions, deadlocks, resource leaks
- Error handling that swallows or mishandles errors
- Type mismatches or unsafe casts
- Security vulnerabilities (SQL injection, XSS, path traversal, etc.)
- Incorrect API usage or protocol violations
- Edge cases that would cause failures
### Design Decisions (SHOULD surface)
- New abstractions, interfaces, or patterns introduced
- Changes to data models or schemas
- New dependencies or significant library choices
- Performance trade-offs (time vs space, caching strategies)
- Error handling strategies (fail-fast vs graceful degradation)
- API design choices (naming, signatures, contracts)
- Coupling/cohesion changes
- Backwards compatibility implications
- Testing strategy choices
## Review Process
1. **Understand the change** — What is this trying to accomplish?
2. **Trace the data flow** — Follow inputs through transformations to outputs
3. **Consider edge cases** — What happens with empty, null, huge, or malformed inputs?
4. **Check error paths** — Are errors handled correctly and propagated appropriately?
5. **Evaluate the design** — Is this the right abstraction? Will it scale? Is it maintainable?
## Output Format
### For each finding, provide:
**[CORRECTNESS]** or **[DESIGN]** tag
**File:** \`path/to/file.ts:line\`
**Issue:** One-sentence summary
**Details:** Brief explanation (1-2 paragraphs max)
- Why this is a problem (for correctness) or why it deserves discussion (for design)
- Specific scenario where it would fail (for correctness)
**Suggestion:** How to fix it or what to consider
---
### Priority Levels (for correctness issues only):
- **[P0]** — Will cause data loss, security breach, or crash in production
- **[P1]** — Will cause incorrect behavior in common scenarios
- **[P2]** — Will cause incorrect behavior in edge cases
- **[P3]** — Code smell or potential future issue
### At the end, provide:
## Summary
**Correctness:** [PASS / NEEDS FIXES] — Brief assessment
**Design:** [List 2-3 most important design decisions that should be discussed]
## Key Questions for the Author
1. [Question about a design decision]
2. [Question about intended behavior]
3. [Question about trade-off made]
---
Focus on findings the author would want to know about. Skip trivial style issues unless they impact readability or correctness.`;
// Prompt templates
const UNCOMMITTED_PROMPT = `Review the current uncommitted changes (staged, unstaged, and untracked files).
Run \`git status\` and \`git diff\` to see the changes, then review them.`;
const BASE_BRANCH_PROMPT = `Review the code changes against the base branch '{branch}'.
First find the merge base: \`git merge-base HEAD {branch}\`
Then review the diff: \`git diff <merge-base-sha>\`
Focus on what would be merged into {branch}.`;
const COMMIT_PROMPT = `Review commit {sha}{titlePart}.
Run \`git show {sha}\` to see the changes, then review them.`;
const PR_PROMPT = `Review pull request #{prNumber} "{title}" against {baseBranch}.
First find the merge base: \`git merge-base HEAD {baseBranch}\`
Then review the diff: \`git diff <merge-base-sha>\`
This represents what would be merged.`;
const FILE_PROMPT = `Review the code in these files (snapshot review, not a diff):
{paths}
Read each file and review for correctness issues and design patterns.`;
async function loadProjectGuidelines(cwd: string): Promise<string | null> {
const guidelinesPath = path.join(cwd, "REVIEW_GUIDELINES.md");
try {
const content = await fs.readFile(guidelinesPath, "utf8");
return content.trim() || null;
} catch {
return null;
}
}
async function getLocalBranches(pi: ExtensionAPI): Promise<string[]> {
const { stdout, code } = await pi.exec("git", ["branch", "--format=%(refname:short)"]);
if (code !== 0) return [];
return stdout.trim().split("\n").filter(b => b.trim());
}
async function getRecentCommits(pi: ExtensionAPI, limit = 15): Promise<Array<{ sha: string; title: string }>> {
const { stdout, code } = await pi.exec("git", ["log", "--oneline", "-n", String(limit)]);
if (code !== 0) return [];
return stdout.trim().split("\n").filter(line => line.trim()).map(line => {
const [sha, ...rest] = line.trim().split(" ");
return { sha, title: rest.join(" ") };
});
}
async function hasUncommittedChanges(pi: ExtensionAPI): Promise<boolean> {
const { stdout, code } = await pi.exec("git", ["status", "--porcelain"]);
return code === 0 && stdout.trim().length > 0;
}
async function hasPendingChanges(pi: ExtensionAPI): Promise<boolean> {
const { stdout, code } = await pi.exec("git", ["status", "--porcelain"]);
if (code !== 0) return false;
const lines = stdout.trim().split("\n").filter(line => line.trim());
return lines.filter(line => !line.startsWith("??")).length > 0;
}
async function getCurrentBranch(pi: ExtensionAPI): Promise<string | null> {
const { stdout, code } = await pi.exec("git", ["branch", "--show-current"]);
return code === 0 && stdout.trim() ? stdout.trim() : null;
}
async function getDefaultBranch(pi: ExtensionAPI): Promise<string> {
const { stdout, code } = await pi.exec("git", ["symbolic-ref", "refs/remotes/origin/HEAD", "--short"]);
if (code === 0 && stdout.trim()) {
return stdout.trim().replace("origin/", "");
}
const branches = await getLocalBranches(pi);
if (branches.includes("main")) return "main";
if (branches.includes("master")) return "master";
return "main";
}
function parsePrReference(ref: string): number | null {
const trimmed = ref.trim();
const num = parseInt(trimmed, 10);
if (!isNaN(num) && num > 0) return num;
const urlMatch = trimmed.match(/github\.com\/[^/]+\/[^/]+\/pull\/(\d+)/);
if (urlMatch) return parseInt(urlMatch[1], 10);
return null;
}
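// Illustrative: parsePrReference("123") and
// parsePrReference("https://github.com/owner/repo/pull/123") both return 123;
// a reference matching neither form yields null.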
async function getPrInfo(pi: ExtensionAPI, prNumber: number): Promise<{ baseBranch: string; title: string; headBranch: string } | null> {
const { stdout, code } = await pi.exec("gh", ["pr", "view", String(prNumber), "--json", "baseRefName,title,headRefName"]);
if (code !== 0) return null;
try {
const data = JSON.parse(stdout);
return { baseBranch: data.baseRefName, title: data.title, headBranch: data.headRefName };
} catch {
return null;
}
}
async function checkoutPr(pi: ExtensionAPI, prNumber: number): Promise<{ success: boolean; error?: string }> {
const { stdout, stderr, code } = await pi.exec("gh", ["pr", "checkout", String(prNumber)]);
if (code !== 0) return { success: false, error: stderr || stdout || "Failed to checkout PR" };
return { success: true };
}
function buildPrompt(target: ReviewTarget): string {
switch (target.type) {
case "uncommitted":
return UNCOMMITTED_PROMPT;
case "baseBranch":
return BASE_BRANCH_PROMPT.replace(/{branch}/g, target.branch);
case "commit": {
const titlePart = target.title ? ` ("${target.title}")` : "";
return COMMIT_PROMPT.replace("{sha}", target.sha).replace("{titlePart}", titlePart);
}
case "pullRequest":
return PR_PROMPT
.replace("{prNumber}", String(target.prNumber))
.replace("{title}", target.title)
.replace("{baseBranch}", target.baseBranch);
case "file":
return FILE_PROMPT.replace("{paths}", target.paths.map(p => `- ${p}`).join("\n"));
case "custom":
return target.instructions;
}
}
function getTargetDescription(target: ReviewTarget): string {
switch (target.type) {
case "uncommitted": return "uncommitted changes";
case "baseBranch": return `changes vs ${target.branch}`;
case "commit": return `commit ${target.sha.slice(0, 7)}${target.title ? `: ${target.title}` : ""}`;
case "pullRequest": return `PR #${target.prNumber}: ${target.title}`;
case "file": return `files: ${target.paths.join(", ")}`;
case "custom": return target.instructions.slice(0, 40) + (target.instructions.length > 40 ? "..." : "");
}
}
const REVIEW_PRESETS = [
{ value: "uncommitted", label: "Review uncommitted changes", description: "" },
{ value: "baseBranch", label: "Review against a branch", description: "(PR-style diff)" },
{ value: "commit", label: "Review a specific commit", description: "" },
{ value: "pullRequest", label: "Review a GitHub PR", description: "(checks out locally)" },
{ value: "file", label: "Review specific files", description: "(snapshot)" },
{ value: "custom", label: "Custom review focus", description: "" },
] as const;
export default function reviewExtension(pi: ExtensionAPI) {
// Restore review state on session events
pi.on("session_start", (_event, ctx) => applyReviewState(ctx));
pi.on("session_switch", (_event, ctx) => applyReviewState(ctx));
pi.on("session_tree", (_event, ctx) => applyReviewState(ctx));
async function getSmartDefault(): Promise<string> {
if (await hasUncommittedChanges(pi)) return "uncommitted";
const current = await getCurrentBranch(pi);
const defaultBranch = await getDefaultBranch(pi);
if (current && current !== defaultBranch) return "baseBranch";
return "commit";
}
async function showSelector(ctx: ExtensionContext): Promise<ReviewTarget | null> {
const smartDefault = await getSmartDefault();
// Sort presets with smart default first
const sortedPresets = REVIEW_PRESETS
.slice()
.sort((a, b) => {
if (a.value === smartDefault) return -1;
if (b.value === smartDefault) return 1;
return 0;
});
// Use simple ctx.ui.select() for better compatibility with non-TUI environments (e.g., Emacs RPC mode)
const options = sortedPresets.map(p => p.label);
while (true) {
const selected = await ctx.ui.select("Select review type:", options);
if (selected === undefined) return null;
// Find the preset by label
const preset = sortedPresets.find(p => p.label === selected);
if (!preset) return null;
switch (preset.value) {
case "uncommitted":
return { type: "uncommitted" };
case "baseBranch": {
const branches = await getLocalBranches(pi);
const defaultBranch = await getDefaultBranch(pi);
const sorted = branches.sort((a, b) => {
if (a === defaultBranch) return -1;
if (b === defaultBranch) return 1;
return a.localeCompare(b);
});
// Add "(default)" suffix to default branch for visibility
const branchOptions = sorted.map(b => b === defaultBranch ? `${b} (default)` : b);
const selectedBranch = await ctx.ui.select("Select base branch:", branchOptions);
if (selectedBranch === undefined) continue; // Go back to main menu
// Strip "(default)" suffix if present
const branch = selectedBranch.replace(" (default)", "");
return { type: "baseBranch", branch };
}
case "commit": {
const commits = await getRecentCommits(pi, 20);
if (!commits.length) {
ctx.ui.notify("No commits found", "error");
continue;
}
const commitOptions = commits.map(c => `${c.sha.slice(0, 7)} ${c.title}`);
const selectedCommit = await ctx.ui.select("Select commit:", commitOptions);
if (selectedCommit === undefined) continue; // Go back to main menu
// Find the commit by matching the option string
const commit = commits.find(c => `${c.sha.slice(0, 7)} ${c.title}` === selectedCommit);
if (commit) return { type: "commit", sha: commit.sha, title: commit.title };
continue;
}
case "pullRequest": {
if (await hasPendingChanges(pi)) {
ctx.ui.notify("Cannot checkout PR: uncommitted changes exist. Commit or stash first.", "error");
continue;
}
const prRef = await ctx.ui.input("PR number or URL:", "123");
if (!prRef?.trim()) continue;
const prNumber = parsePrReference(prRef);
if (!prNumber) {
ctx.ui.notify("Invalid PR reference", "error");
continue;
}
ctx.ui.notify(`Fetching PR #${prNumber}...`, "info");
const prInfo = await getPrInfo(pi, prNumber);
if (!prInfo) {
ctx.ui.notify(`Could not find PR #${prNumber}. Is gh authenticated?`, "error");
continue;
}
ctx.ui.notify(`Checking out PR #${prNumber}...`, "info");
const checkout = await checkoutPr(pi, prNumber);
if (!checkout.success) {
ctx.ui.notify(`Checkout failed: ${checkout.error}`, "error");
continue;
}
ctx.ui.notify(`Checked out PR #${prNumber}`, "info");
return { type: "pullRequest", prNumber, baseBranch: prInfo.baseBranch, title: prInfo.title };
}
case "file": {
const input = await ctx.ui.input("Files/folders to review (space-separated):", "src/");
if (!input?.trim()) continue;
const paths = input.trim().split(/\s+/).filter(p => p);
if (paths.length) return { type: "file", paths };
continue;
}
case "custom": {
const instructions = await ctx.ui.editor("Review focus:", "Check for security issues and race conditions");
if (instructions?.trim()) return { type: "custom", instructions: instructions.trim() };
continue;
}
}
}
}
function parseArgs(args: string | undefined): ReviewTarget | { type: "pr"; ref: string } | null {
if (!args?.trim()) return null;
const parts = args.trim().split(/\s+/);
const cmd = parts[0]?.toLowerCase();
switch (cmd) {
case "uncommitted":
return { type: "uncommitted" };
case "branch": {
const branch = parts[1];
return branch ? { type: "baseBranch", branch } : null;
}
case "commit": {
const sha = parts[1];
if (!sha) return null;
const title = parts.slice(2).join(" ") || undefined;
return { type: "commit", sha, title };
}
case "pr": {
const ref = parts[1];
return ref ? { type: "pr", ref } : null;
}
case "file": {
const paths = parts.slice(1);
return paths.length ? { type: "file", paths } : null;
}
case "custom": {
const instructions = parts.slice(1).join(" ");
return instructions ? { type: "custom", instructions } : null;
}
default:
return null;
}
}
async function handlePrCheckout(ctx: ExtensionContext, ref: string): Promise<ReviewTarget | null> {
if (await hasPendingChanges(pi)) {
ctx.ui.notify("Cannot checkout PR: uncommitted changes. Commit or stash first.", "error");
return null;
}
const prNumber = parsePrReference(ref);
if (!prNumber) {
ctx.ui.notify("Invalid PR reference", "error");
return null;
}
ctx.ui.notify(`Fetching PR #${prNumber}...`, "info");
const prInfo = await getPrInfo(pi, prNumber);
if (!prInfo) {
ctx.ui.notify(`Could not find PR #${prNumber}`, "error");
return null;
}
ctx.ui.notify(`Checking out PR #${prNumber}...`, "info");
const checkout = await checkoutPr(pi, prNumber);
if (!checkout.success) {
ctx.ui.notify(`Checkout failed: ${checkout.error}`, "error");
return null;
}
ctx.ui.notify(`Checked out PR #${prNumber}`, "info");
return { type: "pullRequest", prNumber, baseBranch: prInfo.baseBranch, title: prInfo.title };
}
async function executeReview(ctx: ExtensionCommandContext, target: ReviewTarget, useFreshSession: boolean): Promise<void> {
if (reviewOriginId) {
ctx.ui.notify("Already in a review. Use /end-review first.", "warning");
return;
}
if (useFreshSession) {
const originId = ctx.sessionManager.getLeafId() ?? undefined;
if (!originId) {
ctx.ui.notify("Failed to determine origin. Try from a session with messages.", "error");
return;
}
reviewOriginId = originId;
const lockedOriginId = originId;
const entries = ctx.sessionManager.getEntries();
const firstUserMessage = entries.find(e => e.type === "message" && e.message.role === "user");
if (!firstUserMessage) {
ctx.ui.notify("No user message found in session", "error");
reviewOriginId = undefined;
return;
}
try {
const result = await ctx.navigateTree(firstUserMessage.id, { summarize: false, label: "code-review" });
if (result.cancelled) {
reviewOriginId = undefined;
return;
}
} catch (error) {
reviewOriginId = undefined;
ctx.ui.notify(`Failed to start review: ${error instanceof Error ? error.message : String(error)}`, "error");
return;
}
reviewOriginId = lockedOriginId;
ctx.ui.setEditorText("");
setReviewWidget(ctx, true);
// Track PR info if this is a PR review
if (target.type === "pullRequest") {
currentPrNumber = target.prNumber;
currentPrBaseBranch = target.baseBranch;
pi.appendEntry(REVIEW_STATE_TYPE, {
active: true,
originId: lockedOriginId,
prNumber: target.prNumber,
prBaseBranch: target.baseBranch,
});
} else {
currentPrNumber = undefined;
currentPrBaseBranch = undefined;
pi.appendEntry(REVIEW_STATE_TYPE, { active: true, originId: lockedOriginId });
}
}
const prompt = buildPrompt(target);
const projectGuidelines = await loadProjectGuidelines(ctx.cwd);
let fullPrompt = `${REVIEW_RUBRIC}\n\n---\n\n${prompt}`;
if (projectGuidelines) {
fullPrompt += `\n\n---\n\n## Project-Specific Guidelines\n\n${projectGuidelines}`;
}
const desc = getTargetDescription(target);
const mode = useFreshSession ? " (fresh session)" : "";
ctx.ui.notify(`Starting review: ${desc}${mode}`, "info");
pi.sendUserMessage(fullPrompt);
}
// Register /review command
pi.registerCommand("review", {
description: "Review code for correctness and design decisions",
handler: async (args, ctx) => {
if (!ctx.hasUI) {
ctx.ui.notify("Review requires interactive mode", "error");
return;
}
if (reviewOriginId) {
ctx.ui.notify("Already in a review. Use /end-review first.", "warning");
return;
}
const { code } = await pi.exec("git", ["rev-parse", "--git-dir"]);
if (code !== 0) {
ctx.ui.notify("Not a git repository", "error");
return;
}
let target: ReviewTarget | null = null;
let fromSelector = false;
const parsed = parseArgs(args);
if (parsed) {
if (parsed.type === "pr") {
target = await handlePrCheckout(ctx, parsed.ref);
} else {
target = parsed;
}
}
if (!target) fromSelector = true;
while (true) {
if (!target && fromSelector) {
target = await showSelector(ctx);
}
if (!target) {
ctx.ui.notify("Review cancelled", "info");
return;
}
const entries = ctx.sessionManager.getEntries();
const messageCount = entries.filter(e => e.type === "message").length;
let useFreshSession = false;
if (messageCount > 0) {
const choice = await ctx.ui.select("Start review in:", ["Empty branch", "Current session"]);
if (choice === undefined) {
if (fromSelector) {
target = null;
continue;
}
ctx.ui.notify("Review cancelled", "info");
return;
}
useFreshSession = choice === "Empty branch";
}
await executeReview(ctx, target, useFreshSession);
return;
}
},
});
// Register /resume-review command to restore state after reload
pi.registerCommand("resume-review", {
description: "Resume a review session (restores state after /reload)",
handler: async (args, ctx) => {
if (!ctx.hasUI) {
ctx.ui.notify("Requires interactive mode", "error");
return;
}
// Check if already in a review
if (reviewOriginId) {
ctx.ui.notify("Already in an active review session", "info");
return;
}
// Try to restore state from session
const state = getReviewState(ctx);
if (state?.active) {
reviewOriginId = state.originId;
currentPrNumber = state.prNumber;
currentPrBaseBranch = state.prBaseBranch;
setReviewWidget(ctx, true);
let msg = "Review session restored";
if (currentPrNumber) {
msg += ` (PR #${currentPrNumber})`;
}
ctx.ui.notify(msg, "success");
return;
}
// No active review state, but check if there are review findings in the session
// Search all messages since old reviews didn't save state
const messages = getAssistantMessages(ctx, true);
if (messages.length > 0) {
const parsed = parseReviewFindings(messages);
if (parsed.findings.length > 0) {
ctx.ui.notify(
`Found ${parsed.findings.length} review findings in conversation. Use /post-review <pr-number> to post them.`,
"success"
);
return;
}
}
ctx.ui.notify("No review findings found in this session. Make sure you're in the right session branch.", "warning");
},
});
// Review summary prompt
const REVIEW_SUMMARY_PROMPT = `Summarize this code review for future reference.
Include:
1. What was reviewed (scope, files)
2. Key findings:
- Correctness issues found (with priority)
- Design decisions surfaced
3. Overall verdict
4. Action items
Format as a structured summary that can be acted upon later.`;
// Register /end-review command
pi.registerCommand("end-review", {
description: "Complete review and return to original position",
handler: async (args, ctx) => {
if (!ctx.hasUI) {
ctx.ui.notify("Requires interactive mode", "error");
return;
}
if (!reviewOriginId) {
const state = getReviewState(ctx);
if (state?.active && state.originId) {
reviewOriginId = state.originId;
} else if (state?.active) {
setReviewWidget(ctx, false);
pi.appendEntry(REVIEW_STATE_TYPE, { active: false });
ctx.ui.notify("Review state cleared", "warning");
return;
} else {
ctx.ui.notify("Not in a review session", "info");
return;
}
}
const choice = await ctx.ui.select("Summarize review?", ["Summarize", "No summary"]);
if (choice === undefined) {
ctx.ui.notify("Cancelled. Use /end-review to try again.", "info");
return;
}
const wantsSummary = choice === "Summarize";
const originId = reviewOriginId;
if (wantsSummary) {
// Show notification instead of custom loader UI for better compatibility with non-TUI environments
ctx.ui.notify("Summarizing review...", "info");
try {
const result = await ctx.navigateTree(originId!, {
summarize: true,
customInstructions: REVIEW_SUMMARY_PROMPT,
replaceInstructions: true,
});
setReviewWidget(ctx, false);
reviewOriginId = undefined;
currentPrNumber = undefined;
currentPrBaseBranch = undefined;
pi.appendEntry(REVIEW_STATE_TYPE, { active: false });
if (result.cancelled) {
ctx.ui.notify("Navigation cancelled", "info");
return;
}
try {
if (!ctx.ui.getEditorText().trim()) {
ctx.ui.setEditorText("Address the review findings");
}
} catch {
// getEditorText/setEditorText may not be available in all environments
}
ctx.ui.notify("Review complete!", "info");
} catch (error) {
ctx.ui.notify(`Failed: ${error instanceof Error ? error.message : String(error)}`, "error");
}
} else {
try {
const result = await ctx.navigateTree(originId!, { summarize: false });
if (result.cancelled) {
ctx.ui.notify("Navigation cancelled", "info");
return;
}
setReviewWidget(ctx, false);
reviewOriginId = undefined;
currentPrNumber = undefined;
currentPrBaseBranch = undefined;
pi.appendEntry(REVIEW_STATE_TYPE, { active: false });
ctx.ui.notify("Review complete!", "info");
} catch (error) {
ctx.ui.notify(`Failed: ${error instanceof Error ? error.message : String(error)}`, "error");
}
}
},
});
// Register /post-review command for posting findings to GitHub PR
pi.registerCommand("post-review", {
description: "Post review findings as GitHub PR comments. Use /post-review [PR#] [P0|P1|P2|P3|design|correctness]",
handler: async (args, ctx) => {
if (!ctx.hasUI) {
ctx.ui.notify("Requires interactive mode", "error");
return;
}
// Try to restore state from session if not in memory (e.g., after /reload)
if (!currentPrNumber) {
const state = getReviewState(ctx);
if (state?.prNumber) {
currentPrNumber = state.prNumber;
currentPrBaseBranch = state.prBaseBranch;
reviewOriginId = state.originId;
if (state.active) {
setReviewWidget(ctx, true);
}
}
}
// Parse args: [PR number] [priority/type filters...]
// e.g., "/post-review 123 P1 P0" or "/post-review P1" or "/post-review design"
let prNumber = currentPrNumber;
let baseBranch = currentPrBaseBranch;
const priorityFilters: Set<string> = new Set();
const typeFilters: Set<string> = new Set();
if (args?.trim()) {
const parts = args.trim().split(/\s+/);
for (const part of parts) {
const upper = part.toUpperCase();
if (/^P[0-3]$/i.test(part)) {
priorityFilters.add(upper);
} else if (part.toLowerCase() === "design") {
typeFilters.add("design");
} else if (part.toLowerCase() === "correctness") {
typeFilters.add("correctness");
} else {
// Try to parse as PR number
const parsed = parsePrReference(part);
if (parsed) {
prNumber = parsed;
const prInfo = await getPrInfo(pi, prNumber);
if (prInfo) {
baseBranch = prInfo.baseBranch;
}
}
}
}
}
if (!prNumber) {
ctx.ui.notify("No PR specified. Use /post-review <pr-number> or run from a PR review session.", "error");
return;
}
// Check gh CLI is available
const { code: ghCode } = await pi.exec("gh", ["auth", "status"]);
if (ghCode !== 0) {
ctx.ui.notify("GitHub CLI not authenticated. Run: gh auth login", "error");
return;
}
// Parse review findings from session
let messages = getAssistantMessages(ctx);
let parsed = parseReviewFindings(messages);
// If no findings found, try searching all messages (for old reviews without state)
if (parsed.findings.length === 0) {
messages = getAssistantMessages(ctx, true);
parsed = parseReviewFindings(messages);
}
if (messages.length === 0) {
ctx.ui.notify("No review messages found in session", "error");
return;
}
if (parsed.findings.length === 0) {
ctx.ui.notify("No structured findings found. Make sure the review output includes **[CORRECTNESS]** or **[DESIGN]** tags.", "warning");
return;
}
// Apply priority/type filters
let filteredFindings = parsed.findings;
if (priorityFilters.size > 0 || typeFilters.size > 0) {
filteredFindings = parsed.findings.filter(f => {
const matchesPriority = priorityFilters.size === 0 || (f.priority && priorityFilters.has(f.priority));
const matchesType = typeFilters.size === 0 || typeFilters.has(f.type);
return matchesPriority && matchesType;
});
}
if (filteredFindings.length === 0) {
const filterDesc = [...priorityFilters, ...typeFilters].join(", ") || "specified filters";
ctx.ui.notify(`No findings match ${filterDesc}`, "warning");
return;
}
ctx.ui.notify(`Found ${filteredFindings.length} findings (of ${parsed.findings.length} total)`, "info");
// Let user select which findings to post
// Offer bulk posting, per-finding selection, or cancel
const selectionMode = await ctx.ui.select(
`Post ${filteredFindings.length} findings:`,
["Post all", "Select which to post", "Cancel"]
);
if (selectionMode === undefined || selectionMode === "Cancel") {
ctx.ui.notify("Cancelled", "info");
return;
}
let selectedFindings: ReviewFinding[] = [];
if (selectionMode === "Post all") {
selectedFindings = filteredFindings;
} else {
// Let user pick individual findings
for (let i = 0; i < filteredFindings.length; i++) {
const finding = filteredFindings[i];
const preview = formatFindingAsComment(finding);
const loc = finding.line ? `${finding.file}:${finding.line}` : finding.file;
const action = await ctx.ui.select(
`[${i + 1}/${filteredFindings.length}] ${loc}`,
["Include", "Edit & include", "Skip"]
);
if (action === undefined) {
ctx.ui.notify("Cancelled", "info");
return;
}
if (action === "Skip") {
continue;
}
if (action === "Edit & include") {
// Let user edit the comment
const edited = await ctx.ui.editor(
`Edit comment for ${loc}:`,
preview
);
if (edited === undefined) {
continue; // Skip this one
}
// Create a modified finding with the edited content
selectedFindings.push({
...finding,
// Store the edited body directly - we'll use it later
_editedBody: edited.trim(),
} as ReviewFinding & { _editedBody?: string });
} else {
selectedFindings.push(finding);
}
}
}
if (selectedFindings.length === 0) {
ctx.ui.notify("No findings selected", "info");
return;
}
// Ask for review action
const reviewAction = await ctx.ui.select("Post review as:", [
"Comment (no approval status)",
"Request changes",
"Approve with comments",
]);
if (reviewAction === undefined) {
ctx.ui.notify("Cancelled", "info");
return;
}
const event = reviewAction === "Request changes" ? "REQUEST_CHANGES"
: reviewAction === "Approve with comments" ? "APPROVE"
: "COMMENT";
// Optionally edit the review summary
let reviewBody = "";
if (parsed.summary) {
reviewBody = `## Review Summary\n\n${parsed.summary}`;
}
const editSummary = await ctx.ui.confirm(
"Edit summary?",
"Do you want to edit the review summary before posting?"
);
if (editSummary) {
const editedSummary = await ctx.ui.editor(
"Edit review summary:",
reviewBody || `Code review with ${selectedFindings.length} findings.`
);
if (editedSummary !== undefined) {
reviewBody = editedSummary.trim();
}
}
if (!reviewBody) {
reviewBody = `Code review completed with ${selectedFindings.length} findings.`;
}
// Build review comments
const comments: Array<{ path: string; line?: number; body: string }> = [];
const generalComments: string[] = [];
for (const finding of selectedFindings) {
// Use edited body if available, otherwise format normally
const body = (finding as any)._editedBody || formatFindingAsComment(finding);
if (finding.line) {
comments.push({
path: finding.file,
line: finding.line,
body,
});
} else {
// No line number - add to general comments
generalComments.push(`**${finding.file}**\n\n${body}`);
}
}
// Add general comments to review body
if (generalComments.length > 0) {
reviewBody += (reviewBody ? "\n\n---\n\n" : "") + "## Additional Findings\n\n" + generalComments.join("\n\n---\n\n");
}
// Post using gh api
// First, we need to get the latest commit SHA for the PR
const { stdout: prData, stderr: prErr, code: prCode } = await pi.exec("gh", [
"pr", "view", String(prNumber), "--json", "headRefOid"
]);
if (prCode !== 0) {
ctx.ui.notify(`Failed to get PR info: ${prErr || prData}`, "error");
return;
}
let commitSha: string;
try {
const prJson = JSON.parse(prData);
commitSha = prJson.headRefOid;
} catch {
ctx.ui.notify("Failed to parse PR info", "error");
return;
}
// Build the review payload
// Inline comments go through the pull request reviews API, which requires a
// commit_id. We use the newer "line"/"side" comment parameters rather than
// computing raw diff positions.
const reviewPayload: {
body: string;
event: string;
commit_id: string;
comments?: Array<{ path: string; line: number; body: string; side: string }>;
} = {
body: reviewBody,
event,
commit_id: commitSha,
};
// Add line comments if we have any
if (comments.length > 0) {
reviewPayload.comments = comments
.filter(c => c.line !== undefined)
.map(c => ({
path: c.path,
line: c.line!,
body: c.body,
side: "RIGHT", // Comment on the new version of the file
}));
}
// Post the review
ctx.ui.notify("Posting review to GitHub...", "info");
const payloadJson = JSON.stringify(reviewPayload);
// Write payload to temp file since pi.exec doesn't support stdin
const tempFile = `/tmp/pi-review-${Date.now()}.json`;
await fs.writeFile(tempFile, payloadJson);
const { stdout: reviewResult, stderr: reviewError, code: reviewCode } = await pi.exec("gh", [
"api",
"--method", "POST",
`repos/{owner}/{repo}/pulls/${prNumber}/reviews`,
"--input", tempFile,
]);
// Clean up temp file
await fs.unlink(tempFile).catch(() => {});
if (reviewCode !== 0) {
// Try to extract error message
const errorMsg = reviewError || reviewResult || "Unknown error";
// Common error: line not part of diff
if (errorMsg.includes("pull_request_review_thread.line") || errorMsg.includes("not part of the diff")) {
ctx.ui.notify("Some line comments failed (lines not in diff). Posting as general review...", "warning");
// Retry without line comments
const fallbackPayload = {
body: reviewBody + "\n\n---\n\n## Line-specific Findings\n\n" +
comments.map(c => `**${c.path}:${c.line}**\n\n${c.body}`).join("\n\n---\n\n"),
event,
commit_id: commitSha,
};
const fallbackFile = `/tmp/pi-review-fallback-${Date.now()}.json`;
await fs.writeFile(fallbackFile, JSON.stringify(fallbackPayload));
const { code: fallbackCode } = await pi.exec("gh", [
"api",
"--method", "POST",
`repos/{owner}/{repo}/pulls/${prNumber}/reviews`,
"--input", fallbackFile,
]);
await fs.unlink(fallbackFile).catch(() => {});
if (fallbackCode !== 0) {
ctx.ui.notify("Failed to post review. Check gh auth and permissions.", "error");
return;
}
// Fallback succeeded: line findings were folded into the review body
ctx.ui.notify(
`✓ Posted review to PR #${prNumber} (line findings in review body, ${event.toLowerCase().replace("_", " ")})`,
"success"
);
return;
} else {
ctx.ui.notify(`Failed to post review: ${errorMsg}`, "error");
return;
}
}
const commentCount = reviewPayload.comments?.length || 0;
ctx.ui.notify(
`✓ Posted review to PR #${prNumber} (${commentCount} inline comments, ${event.toLowerCase().replace("_", " ")})`,
"success"
);
},
});
}
#!/usr/bin/env node
import { tmpdir } from "node:os";
import { join } from "node:path";
import { writeFileSync } from "node:fs";
import { connect } from "./cdp.js";
const DEBUG = process.env.DEBUG === "1";
const log = DEBUG ? (...args) => console.error("[debug]", ...args) : () => {};
// Global timeout
const globalTimeout = setTimeout(() => {
console.error("✗ Global timeout exceeded (15s)");
process.exit(1);
}, 15000);
try {
log("connecting...");
const cdp = await connect(5000);
log("getting pages...");
const pages = await cdp.getPages();
const page = pages.at(-1);
if (!page) {
console.error("✗ No active tab found");
process.exit(1);
}
log("attaching to page...");
const sessionId = await cdp.attachToPage(page.targetId);
log("taking screenshot...");
const data = await cdp.screenshot(sessionId);
const timestamp = new Date().toISOString().replace(/[:.]/g, "-");
const filename = `screenshot-${timestamp}.png`;
const filepath = join(tmpdir(), filename);
writeFileSync(filepath, data);
console.log(filepath);
log("closing...");
cdp.close();
log("done");
} catch (e) {
console.error("✗", e.message);
process.exit(1);
} finally {
clearTimeout(globalTimeout);
// Force exit shortly after, in case the CDP socket keeps the event loop alive
setTimeout(() => process.exit(0), 100);
}
#!/usr/bin/env node
import { spawn, execSync } from "node:child_process";
import { dirname, join } from "node:path";
import { fileURLToPath } from "node:url";
import { existsSync } from "node:fs";
import { platform } from "node:os";
const useProfile = process.argv[2] === "--profile";
if (process.argv[2] && process.argv[2] !== "--profile") {
console.log("Usage: start.js [--profile]");
console.log("\nOptions:");
console.log(" --profile Copy your default Chrome profile (cookies, logins)");
console.log("\nExamples:");
console.log(" start.js # Start with fresh profile");
console.log(" start.js --profile # Start with your Chrome profile");
process.exit(1);
}
// Detect Chrome path based on platform
function getChromePath() {
const os = platform();
if (os === "darwin") {
const paths = [
"/Applications/Google Chrome.app/Contents/MacOS/Google Chrome",
"/Applications/Chromium.app/Contents/MacOS/Chromium",
];
for (const p of paths) {
if (existsSync(p)) return p;
}
return paths[0]; // Default to Chrome
} else if (os === "linux") {
const paths = [
"/usr/bin/google-chrome",
"/usr/bin/google-chrome-stable",
"/usr/bin/chromium",
"/usr/bin/chromium-browser",
];
for (const p of paths) {
if (existsSync(p)) return p;
}
return "google-chrome"; // Assume it's in PATH
} else if (os === "win32") {
const paths = [
"C:\\Program Files\\Google\\Chrome\\Application\\chrome.exe",
"C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe",
];
for (const p of paths) {
if (existsSync(p)) return p;
}
return paths[0]; // Default to 64-bit install path
}
return "google-chrome";
}
// Kill existing Chrome (platform-specific)
function killChrome() {
try {
if (platform() === "darwin") {
execSync("killall 'Google Chrome' 2>/dev/null || true", { stdio: "ignore" });
} else if (platform() === "linux") {
execSync("pkill -f 'chrome|chromium' 2>/dev/null || true", { stdio: "ignore" });
} else if (platform() === "win32") {
execSync("taskkill /F /IM chrome.exe 2>nul || exit 0", { stdio: "ignore" });
}
} catch {}
}
// Get Chrome profile directory
function getProfileDir() {
const os = platform();
if (os === "darwin") {
return `${process.env["HOME"]}/Library/Application Support/Google/Chrome/`;
} else if (os === "linux") {
return `${process.env["HOME"]}/.config/google-chrome/`;
} else if (os === "win32") {
return `${process.env["LOCALAPPDATA"]}\\Google\\Chrome\\User Data\\`;
}
return `${process.env["HOME"]}/.config/google-chrome/`;
}
killChrome();
// Wait a bit for processes to fully die
await new Promise((r) => setTimeout(r, 1000));
// Setup profile directory
const cacheDir = `${process.env["HOME"]}/.cache/scraping`;
execSync(`mkdir -p "${cacheDir}"`, { stdio: "ignore" });
if (useProfile) {
const profileDir = getProfileDir();
if (existsSync(profileDir)) {
// Sync profile with rsync (much faster on subsequent runs)
execSync(`rsync -a --delete "${profileDir}" "${cacheDir}/"`, { stdio: "pipe" });
}
}
const chromePath = getChromePath();
// Start Chrome in background (detached so Node can exit)
spawn(
chromePath,
[
"--remote-debugging-port=9222",
`--user-data-dir=${cacheDir}`,
"--profile-directory=Default",
"--disable-search-engine-choice-screen",
"--no-first-run",
"--disable-features=ProfilePicker",
],
{ detached: true, stdio: "ignore" }
).unref();
// Wait for Chrome to be ready by checking the debugging endpoint
let connected = false;
for (let i = 0; i < 30; i++) {
try {
const response = await fetch("http://localhost:9222/json/version");
if (response.ok) {
connected = true;
break;
}
} catch {
// Chrome not listening yet
}
// Delay between attempts (applies to non-ok responses too, not just errors)
await new Promise((r) => setTimeout(r, 500));
}
if (!connected) {
console.error("✗ Failed to connect to Chrome");
process.exit(1);
}
// Start background watcher for logs/network (detached)
const scriptDir = dirname(fileURLToPath(import.meta.url));
const watcherPath = join(scriptDir, "watch.js");
spawn(process.execPath, [watcherPath], { detached: true, stdio: "ignore" }).unref();
console.log(
`✓ Chrome started on :9222${useProfile ? " with your profile" : ""}`
);
/**
* Todo extension for managing file-based todos with plan mode.
*
* Todos are stored as markdown files in .pi/todos/<id>.md with JSON frontmatter.
*
* File format:
* {
* "id": "deadbeef",
* "title": "Add tests",
* "tags": ["qa"],
* "status": "open",
* "created_at": "2026-01-25T17:00:00.000Z"
* }
*
* Notes about the work go here.
*
* Commands:
* - `/todo` - List and manage todos
* - `/plan` - Start a Socratic planning session to refine an idea into a todo
*
* The plan mode uses the Socratic method: asking one question at a time,
* preferring multiple choice when possible, to collaboratively refine
* an idea into a well-specified todo with clear scope and success criteria.
*/
import type { ExtensionAPI, ExtensionContext, Theme } from "@mariozechner/pi-coding-agent";
import { StringEnum } from "@mariozechner/pi-ai";
import { Type } from "@sinclair/typebox";
import path from "node:path";
import fs from "node:fs/promises";
import { existsSync, readFileSync, readdirSync } from "node:fs";
import crypto from "node:crypto";
import { Text } from "@mariozechner/pi-tui";
const TODO_DIR_NAME = ".pi/todos";
const TODO_ID_PREFIX = "TODO-";
const TODO_ID_PATTERN = /^[a-f0-9]{8}$/i;
interface TodoFrontMatter {
id: string;
title: string;
tags: string[];
status: string;
created_at: string;
}
interface TodoRecord extends TodoFrontMatter {
body: string;
}
const TodoParams = Type.Object({
action: StringEnum([
"list",
"get",
"create",
"update",
"append",
"delete",
"close",
"reopen",
] as const),
id: Type.Optional(
Type.String({ description: "Todo id (TODO-<hex> or raw hex filename)" }),
),
title: Type.Optional(Type.String({ description: "Short summary shown in lists" })),
status: Type.Optional(Type.String({ description: "Todo status (open, closed, etc.)" })),
tags: Type.Optional(Type.Array(Type.String({ description: "Todo tag" }))),
body: Type.Optional(
Type.String({ description: "Long-form details (markdown). Update replaces; append adds." }),
),
});
type TodoAction = "list" | "get" | "create" | "update" | "append" | "delete" | "close" | "reopen";
type TodoToolDetails =
| { action: "list"; todos: TodoFrontMatter[]; error?: string }
| { action: "get" | "create" | "update" | "append" | "delete" | "close" | "reopen"; todo?: TodoRecord; error?: string };
function formatTodoId(id: string): string {
return `${TODO_ID_PREFIX}${id}`;
}
function normalizeTodoId(id: string): string {
let trimmed = id.trim();
if (trimmed.startsWith("#")) {
trimmed = trimmed.slice(1);
}
if (trimmed.toUpperCase().startsWith(TODO_ID_PREFIX)) {
trimmed = trimmed.slice(TODO_ID_PREFIX.length);
}
return trimmed.toLowerCase();
}
function validateTodoId(id: string): { id: string } | { error: string } {
const normalized = normalizeTodoId(id);
if (!normalized || !TODO_ID_PATTERN.test(normalized)) {
return { error: `Invalid todo id "${id}". Expected TODO-<hex> format.` };
}
return { id: normalized };
}
function isTodoClosed(status: string): boolean {
return ["closed", "done"].includes(status.toLowerCase());
}
function getTodosDir(cwd: string): string {
return path.resolve(cwd, TODO_DIR_NAME);
}
function getTodoPath(todosDir: string, id: string): string {
return path.join(todosDir, `${id}.md`);
}
function findJsonObjectEnd(content: string): number {
let depth = 0;
let inString = false;
let escaped = false;
for (let i = 0; i < content.length; i += 1) {
const char = content[i];
if (inString) {
if (escaped) {
escaped = false;
continue;
}
if (char === "\\") {
escaped = true;
continue;
}
if (char === "\"") {
inString = false;
}
continue;
}
if (char === "\"") {
inString = true;
continue;
}
if (char === "{") {
depth += 1;
continue;
}
if (char === "}") {
depth -= 1;
if (depth === 0) return i;
}
}
return -1;
}
function splitFrontMatter(content: string): { frontMatter: string; body: string } {
if (!content.startsWith("{")) {
return { frontMatter: "", body: content };
}
const endIndex = findJsonObjectEnd(content);
if (endIndex === -1) {
return { frontMatter: "", body: content };
}
const frontMatter = content.slice(0, endIndex + 1);
const body = content.slice(endIndex + 1).replace(/^\r?\n+/, "");
return { frontMatter, body };
}
function parseFrontMatter(text: string, idFallback: string): TodoFrontMatter {
const data: TodoFrontMatter = {
id: idFallback,
title: "",
tags: [],
status: "open",
created_at: "",
};
const trimmed = text.trim();
if (!trimmed) return data;
try {
const parsed = JSON.parse(trimmed) as Partial<TodoFrontMatter> | null;
if (!parsed || typeof parsed !== "object") return data;
if (typeof parsed.id === "string" && parsed.id) data.id = parsed.id;
if (typeof parsed.title === "string") data.title = parsed.title;
if (typeof parsed.status === "string" && parsed.status) data.status = parsed.status;
if (typeof parsed.created_at === "string") data.created_at = parsed.created_at;
if (Array.isArray(parsed.tags)) {
data.tags = parsed.tags.filter((tag): tag is string => typeof tag === "string");
}
} catch {
return data;
}
return data;
}
function parseTodoContent(content: string, idFallback: string): TodoRecord {
const { frontMatter, body } = splitFrontMatter(content);
const parsed = parseFrontMatter(frontMatter, idFallback);
return {
id: idFallback,
title: parsed.title,
tags: parsed.tags ?? [],
status: parsed.status,
created_at: parsed.created_at,
body: body ?? "",
};
}
function serializeTodo(todo: TodoRecord): string {
const frontMatter = JSON.stringify(
{
id: todo.id,
title: todo.title,
tags: todo.tags ?? [],
status: todo.status,
created_at: todo.created_at,
},
null,
2,
);
const body = todo.body ?? "";
const trimmedBody = body.replace(/^\n+/, "").replace(/\s+$/, "");
if (!trimmedBody) return `${frontMatter}\n`;
return `${frontMatter}\n\n${trimmedBody}\n`;
}
async function ensureTodosDir(todosDir: string) {
await fs.mkdir(todosDir, { recursive: true });
}
async function readTodoFile(filePath: string, idFallback: string): Promise<TodoRecord> {
const content = await fs.readFile(filePath, "utf8");
return parseTodoContent(content, idFallback);
}
async function writeTodoFile(filePath: string, todo: TodoRecord) {
await fs.writeFile(filePath, serializeTodo(todo), "utf8");
}
async function generateTodoId(todosDir: string): Promise<string> {
for (let attempt = 0; attempt < 10; attempt += 1) {
const id = crypto.randomBytes(4).toString("hex");
const todoPath = getTodoPath(todosDir, id);
if (!existsSync(todoPath)) return id;
}
throw new Error("Failed to generate unique todo id");
}
async function listTodos(todosDir: string): Promise<TodoFrontMatter[]> {
let entries: string[] = [];
try {
entries = await fs.readdir(todosDir);
} catch {
return [];
}
const todos: TodoFrontMatter[] = [];
for (const entry of entries) {
if (!entry.endsWith(".md")) continue;
const id = entry.slice(0, -3);
const filePath = path.join(todosDir, entry);
try {
const content = await fs.readFile(filePath, "utf8");
const { frontMatter } = splitFrontMatter(content);
const parsed = parseFrontMatter(frontMatter, id);
todos.push({
id,
title: parsed.title,
tags: parsed.tags ?? [],
status: parsed.status,
created_at: parsed.created_at,
});
} catch {
// ignore unreadable todo
}
}
// Sort: open first, then by creation date
return [...todos].sort((a, b) => {
const aClosed = isTodoClosed(a.status);
const bClosed = isTodoClosed(b.status);
if (aClosed !== bClosed) return aClosed ? 1 : -1;
return (a.created_at || "").localeCompare(b.created_at || "");
});
}
function listTodosSync(todosDir: string): TodoFrontMatter[] {
let entries: string[] = [];
try {
entries = readdirSync(todosDir);
} catch {
return [];
}
const todos: TodoFrontMatter[] = [];
for (const entry of entries) {
if (!entry.endsWith(".md")) continue;
const id = entry.slice(0, -3);
const filePath = path.join(todosDir, entry);
try {
const content = readFileSync(filePath, "utf8");
const { frontMatter } = splitFrontMatter(content);
const parsed = parseFrontMatter(frontMatter, id);
todos.push({
id,
title: parsed.title,
tags: parsed.tags ?? [],
status: parsed.status,
created_at: parsed.created_at,
});
} catch {
// ignore
}
}
return [...todos].sort((a, b) => {
const aClosed = isTodoClosed(a.status);
const bClosed = isTodoClosed(b.status);
if (aClosed !== bClosed) return aClosed ? 1 : -1;
return (a.created_at || "").localeCompare(b.created_at || "");
});
}
function serializeTodoForAgent(todo: TodoRecord): string {
const payload = { ...todo, id: formatTodoId(todo.id) };
return JSON.stringify(payload, null, 2);
}
function serializeTodoListForAgent(todos: TodoFrontMatter[]): string {
const open = todos.filter(t => !isTodoClosed(t.status));
const closed = todos.filter(t => isTodoClosed(t.status));
const mapTodo = (todo: TodoFrontMatter) => ({ ...todo, id: formatTodoId(todo.id) });
return JSON.stringify({ open: open.map(mapTodo), closed: closed.map(mapTodo) }, null, 2);
}
function renderTodoHeading(theme: Theme, todo: TodoFrontMatter): string {
const closed = isTodoClosed(todo.status);
const titleColor = closed ? "dim" : "text";
const tagText = todo.tags.length ? theme.fg("dim", ` [${todo.tags.join(", ")}]`) : "";
return (
theme.fg("accent", formatTodoId(todo.id)) +
" " +
theme.fg(titleColor, todo.title || "(untitled)") +
tagText
);
}
function renderTodoList(theme: Theme, todos: TodoFrontMatter[], expanded: boolean): string {
if (!todos.length) return theme.fg("dim", "No todos");
const open = todos.filter(t => !isTodoClosed(t.status));
const closed = todos.filter(t => isTodoClosed(t.status));
const lines: string[] = [];
const pushSection = (label: string, sectionTodos: TodoFrontMatter[]) => {
lines.push(theme.fg("muted", `${label} (${sectionTodos.length})`));
if (!sectionTodos.length) {
lines.push(theme.fg("dim", " none"));
return;
}
const maxItems = expanded ? sectionTodos.length : Math.min(sectionTodos.length, 5);
for (let i = 0; i < maxItems; i++) {
lines.push(` ${renderTodoHeading(theme, sectionTodos[i])}`);
}
if (!expanded && sectionTodos.length > maxItems) {
lines.push(theme.fg("dim", ` ... ${sectionTodos.length - maxItems} more`));
}
};
pushSection("Open todos", open);
lines.push("");
pushSection("Closed todos", closed);
return lines.join("\n");
}
function renderTodoDetail(theme: Theme, todo: TodoRecord, expanded: boolean): string {
const summary = renderTodoHeading(theme, todo);
if (!expanded) return summary;
const tags = todo.tags.length ? todo.tags.join(", ") : "none";
const createdAt = todo.created_at || "unknown";
const bodyText = todo.body?.trim() ? todo.body.trim() : "No details yet.";
const bodyLines = bodyText.split("\n");
const lines = [
summary,
theme.fg("muted", `Status: ${todo.status}`),
theme.fg("muted", `Tags: ${tags}`),
theme.fg("muted", `Created: ${createdAt}`),
"",
theme.fg("muted", "Body:"),
...bodyLines.map((line) => theme.fg("text", ` ${line}`)),
];
return lines.join("\n");
}
// Plan mode prompt for Socratic questioning
function buildPlanPrompt(idea: string): string {
return `You are entering PLAN MODE to help refine an idea into a well-specified todo.
## The Idea
${idea}
## Your Role
Use the Socratic method to collaboratively refine this idea. Your goal is to understand what the user really wants and help them think through it clearly.
## Process
**1. Understanding Phase**
- Ask questions ONE AT A TIME - never multiple questions in one message
- Prefer multiple choice questions when possible (easier to answer)
- Focus on: purpose, scope, constraints, success criteria
- If something is unclear, ask for clarification before moving on
**2. Exploring Approaches** (when appropriate)
- Propose 2-3 different approaches with trade-offs
- Lead with your recommendation and explain why
- Keep it conversational
**3. Presenting the Design**
- Once you understand the scope, present a summary in small sections (200-300 words each)
- Check after each section: "Does this look right so far?"
- Cover: what will be done, what won't be done, how to know it's complete
**4. Creating the Todo**
- When the user confirms the design, use the \`todo\` tool to create it
- Title: Clear, actionable summary (e.g., "Add unit tests for auth module")
- Body: Include the refined spec with:
- **Goal**: What we're trying to achieve
- **Scope**: What's included and excluded
- **Approach**: How to tackle it
- **Done when**: Clear success criteria
- Tags: Relevant categories
## Key Principles
- **One question at a time** - Don't overwhelm
- **Multiple choice preferred** - When you can offer good options
- **YAGNI** - Help remove unnecessary scope
- **Validate incrementally** - Check understanding before moving on
- **Be flexible** - Go back if something doesn't make sense
Start by acknowledging the idea and asking your first clarifying question.`;
}
// Refine mode prompt for existing todos
function buildRefinePrompt(todoId: string, title: string, body: string): string {
return `You are entering PLAN MODE to refine an existing todo.
## Current Todo
**${formatTodoId(todoId)}**: ${title}
${body ? `**Current details:**\n${body}` : "*No details yet.*"}
## Your Role
Use the Socratic method to help clarify and improve this todo. The goal is to make it more actionable with clearer scope and success criteria.
## Process
**1. Review & Question**
- Ask ONE question at a time about unclear aspects
- Prefer multiple choice when possible
- Focus on: missing details, unclear scope, ambiguous success criteria
**2. Refine Together**
- Propose improvements based on answers
- Check each change: "Does this capture what you meant?"
**3. Update the Todo**
- When refined, use the \`todo\` tool with action "update" to save changes
- Keep the same id: ${formatTodoId(todoId)}
- Update body with refined spec including:
- **Goal**: What we're trying to achieve
- **Scope**: What's included and excluded
- **Approach**: How to tackle it
- **Done when**: Clear success criteria
## Key Principles
- **One question at a time**
- **Multiple choice preferred**
- **YAGNI** - Help trim unnecessary scope
- **Validate incrementally**
Start by reviewing the todo and asking your first question about what needs clarification.`;
}
export default function todosExtension(pi: ExtensionAPI) {
pi.on("session_start", async (_event, ctx) => {
const todosDir = getTodosDir(ctx.cwd);
await ensureTodosDir(todosDir);
});
const todosDirLabel = TODO_DIR_NAME;
pi.registerTool({
name: "todo",
label: "Todo",
description:
`Manage file-based todos in ${todosDirLabel} (list, get, create, update, append, delete, close, reopen). ` +
"Title is the short summary; body is long-form markdown notes (update replaces, append adds). " +
"Todo ids are shown as TODO-<hex>; id parameters accept TODO-<hex> or the raw hex filename.",
parameters: TodoParams,
async execute(_toolCallId, params, _signal, _onUpdate, ctx) {
const todosDir = getTodosDir(ctx.cwd);
const action: TodoAction = params.action;
switch (action) {
case "list": {
const todos = await listTodos(todosDir);
return {
content: [{ type: "text", text: serializeTodoListForAgent(todos) }],
details: { action: "list", todos },
};
}
case "get": {
if (!params.id) {
return {
content: [{ type: "text", text: "Error: id required" }],
details: { action: "get", error: "id required" },
};
}
const validated = validateTodoId(params.id);
if ("error" in validated) {
return {
content: [{ type: "text", text: validated.error }],
details: { action: "get", error: validated.error },
};
}
const normalizedId = validated.id;
const displayId = formatTodoId(normalizedId);
const filePath = getTodoPath(todosDir, normalizedId);
if (!existsSync(filePath)) {
return {
content: [{ type: "text", text: `Todo ${displayId} not found` }],
details: { action: "get", error: "not found" },
};
}
const todo = await readTodoFile(filePath, normalizedId);
return {
content: [{ type: "text", text: serializeTodoForAgent(todo) }],
details: { action: "get", todo },
};
}
case "create": {
if (!params.title) {
return {
content: [{ type: "text", text: "Error: title required" }],
details: { action: "create", error: "title required" },
};
}
await ensureTodosDir(todosDir);
const id = await generateTodoId(todosDir);
const filePath = getTodoPath(todosDir, id);
const todo: TodoRecord = {
id,
title: params.title,
tags: params.tags ?? [],
status: params.status ?? "open",
created_at: new Date().toISOString(),
body: params.body ?? "",
};
await writeTodoFile(filePath, todo);
return {
content: [{ type: "text", text: serializeTodoForAgent(todo) }],
details: { action: "create", todo },
};
}
case "update": {
if (!params.id) {
return {
content: [{ type: "text", text: "Error: id required" }],
details: { action: "update", error: "id required" },
};
}
const validated = validateTodoId(params.id);
if ("error" in validated) {
return {
content: [{ type: "text", text: validated.error }],
details: { action: "update", error: validated.error },
};
}
const normalizedId = validated.id;
const displayId = formatTodoId(normalizedId);
const filePath = getTodoPath(todosDir, normalizedId);
if (!existsSync(filePath)) {
return {
content: [{ type: "text", text: `Todo ${displayId} not found` }],
details: { action: "update", error: "not found" },
};
}
const existing = await readTodoFile(filePath, normalizedId);
if (params.title !== undefined) existing.title = params.title;
if (params.status !== undefined) existing.status = params.status;
if (params.tags !== undefined) existing.tags = params.tags;
if (params.body !== undefined) existing.body = params.body;
await writeTodoFile(filePath, existing);
return {
content: [{ type: "text", text: serializeTodoForAgent(existing) }],
details: { action: "update", todo: existing },
};
}
case "append": {
if (!params.id) {
return {
content: [{ type: "text", text: "Error: id required" }],
details: { action: "append", error: "id required" },
};
}
const validated = validateTodoId(params.id);
if ("error" in validated) {
return {
content: [{ type: "text", text: validated.error }],
details: { action: "append", error: validated.error },
};
}
const normalizedId = validated.id;
const displayId = formatTodoId(normalizedId);
const filePath = getTodoPath(todosDir, normalizedId);
if (!existsSync(filePath)) {
return {
content: [{ type: "text", text: `Todo ${displayId} not found` }],
details: { action: "append", error: "not found" },
};
}
const existing = await readTodoFile(filePath, normalizedId);
if (params.body && params.body.trim()) {
const spacer = existing.body.trim().length ? "\n\n" : "";
existing.body = `${existing.body.replace(/\s+$/, "")}${spacer}${params.body.trim()}\n`;
}
await writeTodoFile(filePath, existing);
return {
content: [{ type: "text", text: serializeTodoForAgent(existing) }],
details: { action: "append", todo: existing },
};
}
case "close": {
if (!params.id) {
return {
content: [{ type: "text", text: "Error: id required" }],
details: { action: "close", error: "id required" },
};
}
const validated = validateTodoId(params.id);
if ("error" in validated) {
return {
content: [{ type: "text", text: validated.error }],
details: { action: "close", error: validated.error },
};
}
const normalizedId = validated.id;
const displayId = formatTodoId(normalizedId);
const filePath = getTodoPath(todosDir, normalizedId);
if (!existsSync(filePath)) {
return {
content: [{ type: "text", text: `Todo ${displayId} not found` }],
details: { action: "close", error: "not found" },
};
}
const existing = await readTodoFile(filePath, normalizedId);
existing.status = "closed";
await writeTodoFile(filePath, existing);
return {
content: [{ type: "text", text: serializeTodoForAgent(existing) }],
details: { action: "close", todo: existing },
};
}
case "reopen": {
if (!params.id) {
return {
content: [{ type: "text", text: "Error: id required" }],
details: { action: "reopen", error: "id required" },
};
}
const validated = validateTodoId(params.id);
if ("error" in validated) {
return {
content: [{ type: "text", text: validated.error }],
details: { action: "reopen", error: validated.error },
};
}
const normalizedId = validated.id;
const displayId = formatTodoId(normalizedId);
const filePath = getTodoPath(todosDir, normalizedId);
if (!existsSync(filePath)) {
return {
content: [{ type: "text", text: `Todo ${displayId} not found` }],
details: { action: "reopen", error: "not found" },
};
}
const existing = await readTodoFile(filePath, normalizedId);
existing.status = "open";
await writeTodoFile(filePath, existing);
return {
content: [{ type: "text", text: serializeTodoForAgent(existing) }],
details: { action: "reopen", todo: existing },
};
}
case "delete": {
if (!params.id) {
return {
content: [{ type: "text", text: "Error: id required" }],
details: { action: "delete", error: "id required" },
};
}
const validated = validateTodoId(params.id);
if ("error" in validated) {
return {
content: [{ type: "text", text: validated.error }],
details: { action: "delete", error: validated.error },
};
}
const normalizedId = validated.id;
const displayId = formatTodoId(normalizedId);
const filePath = getTodoPath(todosDir, normalizedId);
if (!existsSync(filePath)) {
return {
content: [{ type: "text", text: `Todo ${displayId} not found` }],
details: { action: "delete", error: "not found" },
};
}
const existing = await readTodoFile(filePath, normalizedId);
await fs.unlink(filePath);
return {
content: [{ type: "text", text: `Deleted ${serializeTodoForAgent(existing)}` }],
details: { action: "delete", todo: existing },
};
}
}
},
renderCall(args, theme) {
const action = typeof args.action === "string" ? args.action : "";
const id = typeof args.id === "string" ? args.id : "";
const normalizedId = id ? normalizeTodoId(id) : "";
const title = typeof args.title === "string" ? args.title : "";
let text = theme.fg("toolTitle", theme.bold("todo ")) + theme.fg("muted", action);
if (normalizedId) {
text += " " + theme.fg("accent", formatTodoId(normalizedId));
}
if (title) {
text += " " + theme.fg("dim", `"${title}"`);
}
return new Text(text, 0, 0);
},
renderResult(result, { expanded, isPartial }, theme) {
const details = result.details as TodoToolDetails | undefined;
if (isPartial) {
return new Text(theme.fg("warning", "Processing..."), 0, 0);
}
if (!details) {
const text = result.content[0];
return new Text(text?.type === "text" ? text.text : "", 0, 0);
}
if (details.error) {
return new Text(theme.fg("error", `Error: ${details.error}`), 0, 0);
}
if (details.action === "list") {
const text = renderTodoList(theme, details.todos, expanded);
return new Text(text, 0, 0);
}
if (!details.todo) {
const text = result.content[0];
return new Text(text?.type === "text" ? text.text : "", 0, 0);
}
let text = renderTodoDetail(theme, details.todo, expanded);
const actionLabel =
details.action === "create" ? "Created" :
details.action === "update" ? "Updated" :
details.action === "append" ? "Appended to" :
details.action === "delete" ? "Deleted" :
details.action === "close" ? "Closed" :
details.action === "reopen" ? "Reopened" :
null;
if (actionLabel) {
const lines = text.split("\n");
lines[0] = theme.fg("success", "✓ ") + theme.fg("muted", `${actionLabel} `) + lines[0];
text = lines.join("\n");
}
return new Text(text, 0, 0);
},
});
// /todo command - list and manage todos
pi.registerCommand("todo", {
description: "List and manage todos from .pi/todos",
getArgumentCompletions: (argumentPrefix: string) => {
const todos = listTodosSync(getTodosDir(process.cwd()));
if (!todos.length) return null;
// Filter by prefix if provided
const prefix = argumentPrefix.toLowerCase();
const matches = todos.filter(todo => {
const searchText = `${formatTodoId(todo.id)} ${todo.title}`.toLowerCase();
return searchText.includes(prefix);
});
if (!matches.length) return null;
return matches.map((todo) => {
const title = todo.title || "(untitled)";
const tags = todo.tags.length ? ` • ${todo.tags.join(", ")}` : "";
return {
value: formatTodoId(todo.id),
label: `${formatTodoId(todo.id)} ${title}`,
description: `${todo.status || "open"}${tags}`,
};
});
},
handler: async (args, ctx) => {
const todosDir = getTodosDir(ctx.cwd);
const todos = await listTodos(todosDir);
if (!ctx.hasUI) {
// Non-interactive mode: just print the list
const open = todos.filter(t => !isTodoClosed(t.status));
const closed = todos.filter(t => isTodoClosed(t.status));
console.log(`\nOpen todos (${open.length}):`);
if (open.length === 0) {
console.log(" none");
} else {
for (const todo of open) {
const tags = todo.tags.length ? ` [${todo.tags.join(", ")}]` : "";
console.log(` ${formatTodoId(todo.id)} ${todo.title || "(untitled)"}${tags}`);
}
}
console.log(`\nClosed todos (${closed.length}):`);
if (closed.length === 0) {
console.log(" none");
} else {
for (const todo of closed) {
const tags = todo.tags.length ? ` [${todo.tags.join(", ")}]` : "";
console.log(` ${formatTodoId(todo.id)} ${todo.title || "(untitled)"}${tags}`);
}
}
console.log("");
return;
}
// Interactive mode: show select dialog
if (todos.length === 0) {
ctx.ui.notify("No todos yet. Use /plan to create one!", "info");
return;
}
const options = todos.map(todo => {
const closed = isTodoClosed(todo.status);
const statusIndicator = closed ? "✓ " : "○ ";
const tags = todo.tags.length ? ` [${todo.tags.join(", ")}]` : "";
return `${statusIndicator}${formatTodoId(todo.id)} ${todo.title || "(untitled)"}${tags}`;
});
const selected = await ctx.ui.select("Todos:", options);
if (selected === undefined) return;
const selectedTodo = todos[selected];
if (!selectedTodo) return;
// Show actions for the selected todo
const closed = isTodoClosed(selectedTodo.status);
const actions = closed
? ["View details", "Refine (plan mode)", "Reopen", "Delete"]
: ["View details", "Refine (plan mode)", "Close", "Delete"];
const action = await ctx.ui.select(`${formatTodoId(selectedTodo.id)}: ${selectedTodo.title}`, actions);
if (action === undefined) return;
const filePath = getTodoPath(todosDir, selectedTodo.id);
if (actions[action] === "View details") {
const todo = await readTodoFile(filePath, selectedTodo.id);
const body = todo.body?.trim() || "No details.";
ctx.ui.notify(`${formatTodoId(todo.id)}: ${todo.title}\n\n${body}`, "info");
} else if (actions[action] === "Refine (plan mode)") {
const todo = await readTodoFile(filePath, selectedTodo.id);
const prompt = buildRefinePrompt(todo.id, todo.title, todo.body);
ctx.ui.setEditorText(prompt);
} else if (actions[action] === "Close") {
const todo = await readTodoFile(filePath, selectedTodo.id);
todo.status = "closed";
await writeTodoFile(filePath, todo);
ctx.ui.notify(`Closed ${formatTodoId(todo.id)}`, "success");
} else if (actions[action] === "Reopen") {
const todo = await readTodoFile(filePath, selectedTodo.id);
todo.status = "open";
await writeTodoFile(filePath, todo);
ctx.ui.notify(`Reopened ${formatTodoId(todo.id)}`, "success");
} else if (actions[action] === "Delete") {
const confirm = await ctx.ui.confirm("Delete todo?", `Delete ${formatTodoId(selectedTodo.id)}: ${selectedTodo.title}?`);
if (confirm) {
await fs.unlink(filePath);
ctx.ui.notify(`Deleted ${formatTodoId(selectedTodo.id)}`, "info");
}
}
},
});
// /plan command - Socratic planning mode
pi.registerCommand("plan", {
description: "Start a Socratic planning session to refine an idea into a todo",
handler: async (args, ctx) => {
if (!ctx.hasUI) {
console.log("Plan mode requires interactive mode.");
return;
}
const idea = args?.trim();
if (!idea) {
// No idea provided, prompt for one
const input = await ctx.ui.input("What would you like to plan?", "e.g., Add user authentication");
if (!input?.trim()) {
ctx.ui.notify("No idea provided. Cancelled.", "info");
return;
}
const prompt = buildPlanPrompt(input.trim());
ctx.ui.setEditorText(prompt);
ctx.ui.notify("Plan mode started. Answer the questions to refine your idea.", "info");
} else {
// Idea provided as argument
const prompt = buildPlanPrompt(idea);
ctx.ui.setEditorText(prompt);
ctx.ui.notify("Plan mode started. Answer the questions to refine your idea.", "info");
}
},
});
}
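The non-interactive `/todo` listing above partitions todos into open and closed buckets before printing. The same partition can be sketched standalone in plain Node; `isClosed` here is a stand-in for the extension's `isTodoClosed` helper and simply treats a `"closed"` status as closed (the real helper may recognize more states):

```javascript
// Partition todo records by status, mirroring the non-interactive /todo listing.
// `isClosed` is an assumption: the extension's isTodoClosed may accept more statuses.
const isClosed = (status) => status === "closed";

function partitionTodos(todos) {
  const open = todos.filter((t) => !isClosed(t.status));
  const closed = todos.filter((t) => isClosed(t.status));
  return { open, closed };
}

const { open, closed } = partitionTodos([
  { id: "a1", title: "Write docs", status: "open" },
  { id: "b2", title: "Fix bug", status: "closed" },
  { id: "c3", title: "Add tests", status: "open" },
]);
console.log(`Open: ${open.length}, Closed: ${closed.length}`); // → Open: 2, Closed: 1
```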
#!/usr/bin/env node
// watch.js: background logger that attaches to every page target over CDP and
// appends console, exception, and network events to dated JSONL log files.
import { createWriteStream, existsSync, mkdirSync, readFileSync, writeFileSync } from "node:fs";
import { homedir } from "node:os";
import { join } from "node:path";
import { connect } from "./cdp.js";
const LOG_ROOT = join(homedir(), ".cache/agent-web/logs");
const PID_FILE = join(LOG_ROOT, ".pid");
function ensureDir(dir) {
if (!existsSync(dir)) {
mkdirSync(dir, { recursive: true });
}
}
function isProcessAlive(pid) {
try {
// Signal 0 checks that the process exists without actually signalling it.
process.kill(pid, 0);
return true;
} catch {
return false;
}
}
function getDateDir() {
const now = new Date();
const yyyy = String(now.getFullYear());
const mm = String(now.getMonth() + 1).padStart(2, "0");
const dd = String(now.getDate()).padStart(2, "0");
return join(LOG_ROOT, `${yyyy}-${mm}-${dd}`);
}
function safeFileName(value) {
return value.replace(/[^a-zA-Z0-9._-]/g, "_");
}
function compactStack(stackTrace) {
if (!stackTrace || !Array.isArray(stackTrace.callFrames)) return null;
// Cap at the top 8 frames to keep log records small.
return stackTrace.callFrames.slice(0, 8).map((frame) => ({
functionName: frame.functionName || null,
url: frame.url || null,
lineNumber: frame.lineNumber,
columnNumber: frame.columnNumber,
}));
}
function serializeRemoteObject(obj) {
if (!obj || typeof obj !== "object") return obj;
const value =
Object.prototype.hasOwnProperty.call(obj, "value")
? obj.value
: obj.unserializableValue || obj.description || null;
return {
type: obj.type || null,
subtype: obj.subtype || null,
value,
description: obj.description || null,
};
}
ensureDir(LOG_ROOT);
if (existsSync(PID_FILE)) {
try {
const existing = Number(readFileSync(PID_FILE, "utf8").trim());
if (existing && isProcessAlive(existing)) {
console.log("✓ watch already running");
process.exit(0);
}
} catch {
// Ignore and overwrite stale pid.
}
}
writeFileSync(PID_FILE, String(process.pid));
const dateDir = getDateDir();
ensureDir(dateDir);
const targetState = new Map();
const sessionToTarget = new Map();
function getStreamForTarget(targetId) {
const state = targetState.get(targetId);
if (state?.stream) return state.stream;
const filename = `${safeFileName(targetId)}.jsonl`;
const filepath = join(dateDir, filename);
const stream = createWriteStream(filepath, { flags: "a" });
if (state) state.stream = stream;
return stream;
}
function writeLog(targetId, payload) {
const stream = getStreamForTarget(targetId);
const record = {
ts: new Date().toISOString(),
targetId,
...payload,
};
stream.write(`${JSON.stringify(record)}\n`);
}
async function enableSession(cdp, sessionId) {
await cdp.send("Runtime.enable", {}, sessionId);
await cdp.send("Log.enable", {}, sessionId);
await cdp.send("Network.enable", {}, sessionId);
await cdp.send("Page.enable", {}, sessionId);
}
async function attachToTarget(cdp, targetInfo) {
if (targetInfo.type !== "page") return;
if (targetState.has(targetInfo.targetId)) return;
const { sessionId } = await cdp.send("Target.attachToTarget", {
targetId: targetInfo.targetId,
flatten: true,
});
targetState.set(targetInfo.targetId, {
sessionId,
url: targetInfo.url || null,
title: targetInfo.title || null,
stream: null,
});
sessionToTarget.set(sessionId, targetInfo.targetId);
await enableSession(cdp, sessionId);
writeLog(targetInfo.targetId, {
type: "target.attached",
url: targetInfo.url || null,
title: targetInfo.title || null,
});
}
async function main() {
const cdp = await connect(5000);
cdp.on("Target.targetCreated", async (params) => {
try {
await attachToTarget(cdp, params.targetInfo);
} catch (e) {
console.error("watch: attach error:", e.message);
}
});
cdp.on("Target.targetDestroyed", (params) => {
const targetId = params.targetId;
const state = targetState.get(targetId);
if (state?.stream) state.stream.end();
targetState.delete(targetId);
if (state?.sessionId) sessionToTarget.delete(state.sessionId);
});
cdp.on("Target.targetInfoChanged", (params) => {
const info = params.targetInfo;
const state = targetState.get(info.targetId);
if (state) {
state.url = info.url || state.url;
state.title = info.title || state.title;
writeLog(info.targetId, {
type: "target.info",
url: info.url || null,
title: info.title || null,
});
}
});
cdp.on("Runtime.consoleAPICalled", (params, sessionId) => {
const targetId = sessionToTarget.get(sessionId);
if (!targetId) return;
writeLog(targetId, {
type: "console",
level: params.type || null,
args: (params.args || []).map(serializeRemoteObject),
stack: compactStack(params.stackTrace),
});
});
cdp.on("Runtime.exceptionThrown", (params, sessionId) => {
const targetId = sessionToTarget.get(sessionId);
if (!targetId) return;
const details = params.exceptionDetails || {};
writeLog(targetId, {
type: "exception",
text: details.text || null,
description: details.exception?.description || null,
lineNumber: details.lineNumber,
columnNumber: details.columnNumber,
url: details.url || null,
stack: compactStack(details.stackTrace),
});
});
cdp.on("Log.entryAdded", (params, sessionId) => {
const targetId = sessionToTarget.get(sessionId);
if (!targetId) return;
const entry = params.entry || {};
writeLog(targetId, {
type: "log",
level: entry.level || null,
source: entry.source || null,
text: entry.text || null,
url: entry.url || null,
lineNumber: entry.lineNumber ?? null,
});
});
cdp.on("Network.requestWillBeSent", (params, sessionId) => {
const targetId = sessionToTarget.get(sessionId);
if (!targetId) return;
const request = params.request || {};
writeLog(targetId, {
type: "network.request",
requestId: params.requestId,
method: request.method || null,
url: request.url || null,
documentURL: params.documentURL || null,
initiator: params.initiator?.type || null,
hasPostData: !!request.hasPostData,
});
});
cdp.on("Network.responseReceived", (params, sessionId) => {
const targetId = sessionToTarget.get(sessionId);
if (!targetId) return;
const response = params.response || {};
writeLog(targetId, {
type: "network.response",
requestId: params.requestId,
url: response.url || null,
status: response.status,
statusText: response.statusText || null,
mimeType: response.mimeType || null,
fromDiskCache: !!response.fromDiskCache,
fromServiceWorker: !!response.fromServiceWorker,
});
});
cdp.on("Network.loadingFailed", (params, sessionId) => {
const targetId = sessionToTarget.get(sessionId);
if (!targetId) return;
writeLog(targetId, {
type: "network.failure",
requestId: params.requestId,
errorText: params.errorText || null,
canceled: !!params.canceled,
});
});
await cdp.send("Target.setDiscoverTargets", { discover: true });
const pages = await cdp.getPages();
for (const page of pages) {
await attachToTarget(cdp, page);
}
console.log("✓ watch started");
}
try {
await main();
} catch (e) {
console.error("✗ watch failed:", e.message);
process.exit(1);
}
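Each record `writeLog()` appends is one JSON object per line (`{ ts, targetId, type, ... }`), so the logs under `~/.cache/agent-web/logs/YYYY-MM-DD/` can be tallied with a few lines of Node. A minimal sketch over sample records (the bundled net-summary.js covers network records in more depth); the sample values are illustrative, not real log output:

```javascript
// Tally watch.js log records by event type. Each line of a
// <targetId>.jsonl file is one JSON object of the shape writeLog() produces.
const sampleLines = [
  '{"ts":"2026-02-06T18:41:00Z","targetId":"T1","type":"console","level":"log"}',
  '{"ts":"2026-02-06T18:41:01Z","targetId":"T1","type":"network.request","method":"GET"}',
  '{"ts":"2026-02-06T18:41:02Z","targetId":"T1","type":"network.response","status":200}',
  '{"ts":"2026-02-06T18:41:03Z","targetId":"T1","type":"console","level":"error"}',
];

const counts = {};
for (const line of sampleLines) {
  if (!line.trim()) continue; // skip blank lines (e.g. trailing newline)
  const record = JSON.parse(line);
  counts[record.type] = (counts[record.type] ?? 0) + 1;
}
console.log(counts); // → { console: 2, 'network.request': 1, 'network.response': 1 }
```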
// web-browser/index.ts: registers the /browser command, which injects CDP usage
// instructions covering the helper scripts in ./scripts.
import type { ExtensionAPI } from "@mariozechner/pi-coding-agent";
import { Text } from "@mariozechner/pi-tui";
import { dirname } from "node:path";
import { fileURLToPath } from "node:url";
const __dirname = dirname(fileURLToPath(import.meta.url));
const scriptsDir = `${__dirname}/scripts`;
const BROWSER_INSTRUCTIONS = `# Web Browser Skill
You can control a Chrome browser using the Chrome DevTools Protocol (CDP) via the scripts in:
${scriptsDir}
## Quick Reference
### Start Chrome (required first)
\`\`\`bash
node ${scriptsDir}/start.js # Fresh profile
node ${scriptsDir}/start.js --profile # Copy user's profile (cookies, logins)
\`\`\`
Starts Chrome on :9222 with remote debugging enabled.
### Navigate
\`\`\`bash
node ${scriptsDir}/nav.js https://example.com # Navigate current tab
node ${scriptsDir}/nav.js https://example.com --new # Open in new tab
\`\`\`
### Evaluate JavaScript
\`\`\`bash
node ${scriptsDir}/eval.js 'document.title'
node ${scriptsDir}/eval.js 'document.querySelectorAll("a").length'
node ${scriptsDir}/eval.js 'JSON.stringify(Array.from(document.querySelectorAll("a")).map(a => ({ text: a.textContent.trim(), href: a.href })))'
\`\`\`
Executes JavaScript in the active tab. Use single quotes around the expression. Returns the evaluated result.
### Screenshot
\`\`\`bash
node ${scriptsDir}/screenshot.js
\`\`\`
Takes a screenshot of the current viewport and returns a temp file path. Use the Read tool to view the image.
### Pick Elements (Interactive)
\`\`\`bash
node ${scriptsDir}/pick.js "Click the submit button"
\`\`\`
Interactive element picker. User clicks to select, Cmd/Ctrl+Click for multi-select, Enter to finish.
Returns element info (tag, id, class, text, html, parents).
### Dismiss Cookie Dialogs
\`\`\`bash
node ${scriptsDir}/dismiss-cookies.js # Accept cookies
node ${scriptsDir}/dismiss-cookies.js --reject # Reject cookies
\`\`\`
Automatically dismisses EU cookie consent dialogs. Run after navigating to a page.
### Background Logging
Started automatically by start.js. Logs console messages, errors, and network activity to:
~/.cache/agent-web/logs/YYYY-MM-DD/<targetId>.jsonl
\`\`\`bash
node ${scriptsDir}/logs-tail.js # Dump current log
node ${scriptsDir}/logs-tail.js --follow # Keep following
node ${scriptsDir}/net-summary.js # Summarize network responses
\`\`\`
## Common Patterns
### Navigate and screenshot
\`\`\`bash
node ${scriptsDir}/nav.js https://example.com && sleep 2 && node ${scriptsDir}/screenshot.js
\`\`\`
### Fill a form field
\`\`\`bash
node ${scriptsDir}/eval.js 'document.querySelector("#email").value = "test@example.com"'
\`\`\`
### Click a button
\`\`\`bash
node ${scriptsDir}/eval.js 'document.querySelector("button[type=submit]").click()'
\`\`\`
### Get page content
\`\`\`bash
node ${scriptsDir}/eval.js 'document.body.innerText'
\`\`\`
### Wait for element
\`\`\`bash
node ${scriptsDir}/eval.js 'await new Promise(r => { const check = () => document.querySelector(".loaded") ? r(true) : setTimeout(check, 100); check(); })'
\`\`\`
`;
export default function (pi: ExtensionAPI) {
pi.registerCommand("browser", {
description: "Enable web browser control via Chrome DevTools Protocol",
handler: async (args, ctx) => {
// Inject the browser instructions as a message so the LLM knows how to use the tools
pi.sendMessage(
{
customType: "web-browser-skill",
content: BROWSER_INSTRUCTIONS,
display: true,
},
{ triggerTurn: false }
);
if (args) {
// If args provided, treat as initial instruction
pi.sendUserMessage(`Using the web browser: ${args}`);
} else {
ctx.ui.notify("Web browser skill activated. Chrome scripts available.", "info");
}
},
});
// Register a custom renderer for the browser skill messages
pi.registerMessageRenderer("web-browser-skill", {
render(message, theme) {
const header = theme.fg("accent", theme.bold("🌐 Web Browser Skill Activated"));
const hint = theme.fg("muted", `Scripts in: ${scriptsDir}`);
return new Text(`${header}\n${hint}`, 1, 0);
},
});
}