Creating an implementation plan - a simple comparison

DocSearch Hook Design

Date: 2026-02-04
Author: Viktor & Claude
Status: Approved

Overview

A Claude Code PreToolUse hook that intercepts WebSearch tool calls and redirects documentation-related queries to local RAG databases via LEANN MCP server. The hook provides an intelligent escape hatch allowing Claude to retry web searches if RAG results are insufficient.

Architecture

Core Components

  1. PreToolUse Hook Script (docsearch.py) - Python 3.12+ script that intercepts WebSearch calls
  2. Configuration File (~/.claude/hooks/docsearch-config.json) - Maps keywords to RAG database metadata
  3. State Files (~/.claude/hooks/docsearch-state-{session_id}.json) - Per-session tracking of denied searches
  4. LEANN MCP Server - External component, assumed configured in Claude Code MCP settings

Flow Diagram

WebSearch tool call
    ↓
PreToolUse hook fires (docsearch.py)
    ↓
Check state: Is this a retry? (same params as last call)
    ├─ Yes → Allow through (exit 0)
    └─ No → Continue
        ↓
    Parse query for configured keywords (case-insensitive)
        ├─ No match → Allow through (exit 0)
        └─ Match(es) found → Store params, Deny (exit 2) + add context
            ↓
        Claude receives denial + context about RAG database(s)
            ↓
        Claude calls LEANN MCP tool(s) (in parallel if multiple matches)
            ├─ Success → Done
            └─ Fail/Unsatisfied → Claude retries WebSearch
                ↓
            Hook sees same params → Allows through

Configuration Structure

Config File Location

~/.claude/hooks/docsearch-config.json

Schema

{
  "databases": [
    {
      "keywords": ["gitlab", "gl", "gitlab-ci"],
      "path": "/Users/viktor/.leann/databases/gitlab",
      "mcp_tool_name": "mcp__leann__search",
      "description": "GitLab documentation from docs.gitlab.com"
    },
    {
      "keywords": ["kubernetes", "k8s", "kubectl"],
      "path": "/Users/viktor/.leann/databases/kubernetes",
      "mcp_tool_name": "mcp__leann__search",
      "description": "Kubernetes official documentation"
    }
  ]
}

Fields

  • keywords (required): Array of strings to match in search queries

    • Case-insensitive matching
    • Exact word boundary matching (e.g., "gitla" does NOT match "gitlab")
    • Multiple keywords can map to one database
  • path (required): Absolute path to LEANN database directory

  • mcp_tool_name (required): Exact MCP tool name to suggest to Claude

  • description (required): Human-readable description passed to Claude in additionalContext

Keyword Matching Logic

  1. Split query into words
  2. Check each word against all configured keywords (case-insensitive, word boundary)
  3. Collect ALL matches (multiple databases can match same query)
  4. Order in config file determines priority when suggesting tools
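
A minimal sketch of this matching logic, consistent with the rules above (the helper name mirrors the later implementation plan; the exact signature is illustrative):

import re


def find_matching_databases(query: str, databases: list[dict]) -> list[dict]:
    """Return every configured database whose keywords appear in the query."""
    matches = []
    for db in databases:
        for keyword in db.get("keywords", []):
            # Case-insensitive, whole-word match: "gitla" or "ungitlabbed"
            # never matches the keyword "gitlab".
            if re.search(rf"\b{re.escape(keyword)}\b", query, re.IGNORECASE):
                matches.append(db)
                break  # record each database once; config-file order is preserved
    return matches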

State File & Escape Hatch

State File Location

~/.claude/hooks/docsearch-state-{session_id}.json

Each Claude Code session has isolated state to prevent cross-session interference.

Schema

{
  "last_denied": {
    "query": "how to configure gitlab ci",
    "allowed_domains": ["docs.gitlab.com"],
    "blocked_domains": [],
    "timestamp": 1738704000
  }
}

Escape Hatch Logic

  1. On WebSearch interception:

    • Load state file for current session (if exists)
    • Compare current tool_input against last_denied
    • If exact match (query + domains) → Allow through, clear state, exit 0
    • If no match → Continue to keyword matching
  2. On keyword match (denying search):

    • Store current tool_input in session-specific state file
    • Exit 2 with permissionDecision: deny and additionalContext
  3. State cleanup:

    • Clear last_denied after successful retry
    • Clear stale state files on session start
    • Optional: Add 5-minute timestamp expiry as safety net

Parameter Comparison

  • Exact string match on query
  • Arrays compared as sets (order-independent) for allowed_domains and blocked_domains
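
A minimal sketch of this comparison (the function name is illustrative; the optional 5-minute expiry check described above is omitted here):

def is_retry_of_last_denied(tool_input: dict, last_denied: dict) -> bool:
    """True when the incoming WebSearch repeats the previously denied call."""
    return (
        # Exact string match on the query text
        tool_input.get("query", "") == last_denied.get("query", "")
        # Domain lists compared as sets, so ordering differences are ignored
        and set(tool_input.get("allowed_domains") or []) == set(last_denied.get("allowed_domains") or [])
        and set(tool_input.get("blocked_domains") or []) == set(last_denied.get("blocked_domains") or [])
    )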

Hook Implementation Details

Hook Type

Shell-based PreToolUse hook (Python script executed as subprocess)

Hook Location

~/.claude/hooks/PreToolUse/docsearch.py

Hook Responsibilities

  1. Filter for WebSearch only - Exit early (code 0) if tool_name != "WebSearch"
  2. Load and parse config - Read docsearch-config.json, handle missing/invalid gracefully
  3. Check escape hatch - Load session state file, compare parameters, allow if match
  4. Keyword detection - Parse query, match against configured keywords using word boundaries
  5. Multi-keyword handling - Detect ALL matching databases in a single query
  6. Deny + guide - If match(es) found, store state and return denial with structured additionalContext

Output Format

Single Keyword Match

{
  "hookSpecificOutput": {
    "hookEventName": "PreToolUse",
    "permissionDecision": "deny",
    "permissionDecisionReason": "Query matches 'gitlab' - using RAG database instead",
    "additionalContext": "This query should use the LEANN MCP tool 'mcp__leann__search' to search the GitLab documentation RAG database at /Users/viktor/.leann/databases/gitlab instead of web search."
  }
}

Multiple Keyword Matches

{
  "hookSpecificOutput": {
    "hookEventName": "PreToolUse",
    "permissionDecision": "deny",
    "permissionDecisionReason": "Query matches 'gitlab' and 'kubernetes' - using RAG databases instead",
    "additionalContext": "This query matches multiple documentation databases. Please use these LEANN MCP tools IN PARALLEL:\n1. 'mcp__leann__search' for GitLab documentation at /Users/viktor/.leann/databases/gitlab\n2. 'mcp__leann__search' for Kubernetes official documentation at /Users/viktor/.leann/databases/kubernetes"
  }
}

Error Handling

Error Philosophy: Fail open - when in doubt, allow the WebSearch through.

| Error Scenario | Behavior |
| --- | --- |
| Config file missing/unreadable | Allow search through (exit 0) |
| Config file invalid JSON | Allow search through (exit 0), log to stderr |
| State file corrupted | Treat as no previous denial, continue |
| Invalid hook input JSON | Allow search through (exit 0) |
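
One way to guarantee this fail-open behavior is to wrap the entire entry point in a catch-all; a minimal sketch (the stderr message wording is an assumption):

import sys


def main() -> int:
    """Placeholder for the real hook logic (parse stdin, match keywords, ...)."""
    return 0


if __name__ == "__main__":
    try:
        sys.exit(main())
    except Exception as exc:
        # Any unhandled error allows the WebSearch through (exit 0).
        print(f"docsearch hook error: {exc}", file=sys.stderr)
        sys.exit(0)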

Logging

  • Critical errors only logged to stderr (e.g., invalid config JSON)
  • Users debug by checking config file syntax manually
  • Keep logging minimal to avoid noise

Testing & Edge Cases

Testing Strategy

  1. Unit Tests (tests/test_hook.py):

    • Mock hook input JSON via stdin
    • Verify correct exit codes (0 for allow, 2 for deny)
    • Test keyword matching (single, multiple, partial, case variations)
    • Verify state file read/write operations with session_id
    • Test escape hatch logic
  2. Integration Tests:

    • Configure real LEANN MCP server
    • Test full flow: WebSearch → Hook → MCP → Retry
    • Verify parallel MCP calls for multi-keyword queries

Edge Cases

  1. Multiple keywords in same query: "How to use GitLab with Kubernetes?"

    • Detect ALL matching databases
    • additionalContext mentions all MCP tools with instruction to call in parallel
  2. Partial word matches: Query "ungitlabbed" contains "gitlab"

    • Use word boundary regex: \bgitlab\b (case-insensitive)
    • Should NOT match
  3. Case variations: "GITLAB", "GitLab", "gitlab"

    • All should match (case-insensitive)
  4. Concurrent hook calls: Multiple Claude sessions running

    • State file per session: docsearch-state-{session_id}.json
    • Each session has isolated state
  5. Stale state files: User restarts Claude between denial and retry

    • Clear state file on session start
    • Fallback: 5-minute timestamp expiry
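
The word-boundary and case-handling rules from edge cases 2 and 3 can be checked directly with the proposed regex; a quick illustration:

import re

pattern = re.compile(r"\bgitlab\b", re.IGNORECASE)

assert pattern.search("How do I configure GitLab CI?")   # case variation matches
assert pattern.search("GITLAB runner setup")             # all-caps matches
assert not pattern.search("my ungitlabbed workflow")     # partial word does not match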

Technology Choices

Implementation Language

Python 3.12+

Rationale:

  • Native JSON handling (stdlib)
  • Excellent regex support for word boundaries
  • Easy to test and maintain
  • Widely available on development systems
  • No external dependencies required

Dependencies

  • Python 3.12+ standard library only:
    • json - Config and state file parsing
    • re - Keyword matching with word boundaries
    • sys - stdin/stdout/stderr/exit codes
    • pathlib - File path handling

User Setup & Usage

Prerequisites

  1. LEANN installed and configured
  2. LEANN MCP server configured in Claude Code's MCP settings
  3. RAG databases built manually using LEANN tools (see future CLI issue)

Setup Steps

  1. Install the hook script:

    mkdir -p ~/.claude/hooks/PreToolUse
    cp docsearch.py ~/.claude/hooks/PreToolUse/docsearch.py
    chmod +x ~/.claude/hooks/PreToolUse/docsearch.py
  2. Create configuration file:

    cp config.example.json ~/.claude/hooks/docsearch-config.json
    # Edit with your database paths and keywords
  3. Verify LEANN MCP is configured in Claude Code MCP settings

  4. Test the setup:

    • Start Claude Code
    • Ask: "How do I configure GitLab CI runners?"
    • Verify hook intercepts and Claude uses MCP tool
    • If MCP fails, verify Claude retries with WebSearch

User Experience Flow

User: "How do I configure GitLab CI runners?"
    ↓
Hook detects "gitlab" → Denies WebSearch
    ↓
Claude sees context → Calls mcp__leann__search with GitLab database
    ↓
If MCP succeeds → User gets RAG-based answer
If MCP fails → Claude retries WebSearch → Hook allows through → User gets web results

Project Structure

docsearch-hook/
├── README.md                          # Setup instructions, usage guide
├── LICENSE
├── docsearch.py                       # Main hook script (Python 3.12+)
├── config.example.json                # Example configuration
├── tests/
│   ├── test_hook.py                   # Unit tests for hook logic
│   └── fixtures/                      # Test data (mock configs, inputs)
├── docs/
│   └── plans/
│       └── 2026-02-04-docsearch-design.md  # This document
└── .github/
    └── ISSUE_TEMPLATE/

Future Work (GitHub Issues)

Issue 1: Database Sharing Feature

Title: Enable sharing pre-built RAG databases between users

Description: Currently users must build their own LEANN databases. Add functionality to:

  • Export database metadata and files in shareable format
  • Import shared databases with verification
  • Community repository of common documentation databases (GitLab, K8s, etc.)

Benefits:

  • Reduce setup friction for new users
  • Standardize database quality for popular documentation sources
  • Community contribution model

Issue 2: CLI Setup Command

Title: Add CLI command for automated database creation

Description: Provide docsearch-hook setup <keyword> <url> command that:

  • Crawls documentation website using LEANN
  • Builds RAG database
  • Adds entry to config file automatically
  • Validates MCP server configuration

Benefits:

  • Eliminates manual LEANN tool usage
  • Reduces errors in database creation
  • Streamlines onboarding experience

Success Criteria

  1. Functional:

    • Hook correctly intercepts WebSearch for configured keywords
    • Multi-keyword queries trigger parallel MCP calls
    • Escape hatch allows retry after MCP failure
    • Per-session state isolation works correctly
  2. Reliability:

    • Hook never breaks Claude's core functionality (fail open)
    • Handle all edge cases gracefully
    • Stale state cleanup prevents confusion
  3. Usability:

    • Clear setup documentation
    • Example config provided
    • Error messages guide users to fixes
  4. Maintainability:

    • Clean Python code with type hints
    • Comprehensive unit tests
    • Well-documented edge case handling

A small comparison of generating implementation plans in three different ways with Claude Code:

Implementation Plan: mcp-docsearch

A prioritized implementation plan for the mcp-docsearch Claude Code PreToolUse hook.

Reference: Design Document

Last Updated: 2026-02-05T06:45Z

Verification Status: ✅ Independently verified via codebase analysis:

  • No Python source code exists (confirmed: **/*.py glob returns 0 files)
  • No .gitignore file exists (confirmed)
  • No tests/ directory exists (confirmed)
  • No config.example.json exists (confirmed)
  • .mcp.json server name is leann-docs-search (confirmed - needs rename to leann)
  • README.md contains only # mcp-docsearch (confirmed - stub)
  • .leann/indexes/ pre-built with HNSW/contriever backend, 179 passages (confirmed)
  • All P0 items remain pending

Status Summary

| Component | Status | Priority | Notes |
| --- | --- | --- | --- |
| LICENSE | ✅ Complete | - | MIT License |
| Design Document | ✅ Complete | - | 373 lines, comprehensive |
| .leann/ indexes | ✅ Complete | - | Pre-built, HNSW/contriever |
| .mcp.json | ⚠️ Needs fix | P0 | BLOCKING: Server name leann-docs-search → tool name mcp__leann-docs-search__search (design expects mcp__leann__search). Must resolve before config.example.json |
| docsearch.py | ❌ Not started | P0 | Critical path blocker |
| .gitignore | ❌ Not started | P1 | Quick win |
| config.example.json | ❌ Not started | P1 | Quick win (blocked by .mcp.json decision) |
| tests/ | ❌ Not started | P2 | Blocked by docsearch.py |
| README.md | ⚠️ Stub only | P2 | Currently just # mcp-docsearch |
| GitHub templates | ❌ Not started | P3 | Low priority |
| PROMPT_refinement.md | ⚠️ To delete | P3 | Development artifact, not part of deliverables |

Priority-Ordered Task List

Items sorted by implementation priority. Complete in order.

P0 — Critical Path (Blocks Everything)

  • Resolve .mcp.json server naming

    • DECISION REQUIRED: Design doc uses mcp__leann__search, current config produces mcp__leann-docs-search__search
    • Recommended action: Rename server from leann-docs-search to leann in .mcp.json
    • This unblocks config.example.json creation (P1)
    • Must be resolved BEFORE any code references tool names
  • docsearch.py: Create skeleton with constants

    • Location: /workspace/repo/docsearch.py
    • Shebang: #!/usr/bin/env python3
    • Imports: json, re, sys, pathlib, time (stdlib only)
    • Module docstring explaining hook purpose
    • Type hints throughout (Python 3.12+)
    • Constants:
      • CONFIG_PATH = Path.home() / ".claude/hooks/docsearch-config.json"
      • STATE_DIR = Path.home() / ".claude/hooks/"
      • STATE_EXPIRY_SECONDS = 300 # 5 minutes
  • docsearch.py: Implement configuration loading

    • Function: load_config() -> dict | None
    • Path: ~/.claude/hooks/docsearch-config.json
    • Validate databases array with required fields: keywords, path, mcp_tool_name, description
    • Return None on any error (fail-open philosophy)
    • No stderr logging on missing file (expected during setup)
  • docsearch.py: Implement hook input parsing

    • Function: parse_hook_input() -> dict | None
    • Read JSON from stdin
    • Extract: hook_event_name, tool_name, tool_input, session_id
    • Return None on malformed JSON (triggers fail-open exit 0)
  • docsearch.py: Implement keyword matching

    • Function: find_matching_databases(query: str, databases: list) -> list
    • Word boundary regex: r'\b' + re.escape(keyword) + r'\b' with re.IGNORECASE
    • Return ALL matching database entries (multi-keyword support)
    • Order preserved from config file
  • docsearch.py: Implement state file operations (sketched after this P0 list)

    • Functions:
      • get_state_path(session_id: str) -> Path
      • load_state(session_id: str) -> dict | None
      • save_state(session_id: str, tool_input: dict) -> None
      • clear_state(session_id: str) -> None
      • cleanup_stale_states() -> None
      • ensure_hooks_directory() -> None (create ~/.claude/hooks/ if missing)
    • Path pattern: ~/.claude/hooks/docsearch-state-{session_id}.json
    • State schema: {"last_denied": {"query": str, "allowed_domains": list, "blocked_domains": list, "timestamp": int}}
    • Handle missing/corrupted files gracefully (return None)
    • load_state must validate schema structure (has last_denied with required subfields: query, allowed_domains, blocked_domains, timestamp), not just JSON validity
    • cleanup_stale_states removes state files older than 5 minutes (approximates session-start cleanup per design doc line 126)
    • ensure_hooks_directory must be called before any file operations to handle first-run scenario
  • docsearch.py: Implement escape hatch logic

    • Function: should_allow_retry(tool_input: dict, state: dict) -> bool
    • Compare current tool_input against last_denied:
      • Exact string match on query
      • Set comparison (order-independent) for allowed_domains and blocked_domains
      • Check timestamp < 5 minutes old
    • If match and not expired: return True (caller clears state, exits 0)
  • docsearch.py: Implement denial output generation

    • Function: generate_denial(matched_databases: list) -> dict
    • Single match format:
      {
        "hookSpecificOutput": {
          "hookEventName": "PreToolUse",
          "permissionDecision": "deny",
          "permissionDecisionReason": "Query matches '{keyword}' - using RAG database instead",
          "additionalContext": "Use LEANN MCP tool '{mcp_tool_name}' to search {description} at {path}"
        }
      }
    • Multiple matches: List all databases with explicit "IN PARALLEL" instruction (per design doc line 176)
  • docsearch.py: Implement main() with error wrapper

    • Orchestrate full flow:
      1. Parse hook input (exit 0 if None)
      2. Early exit if not PreToolUse or not WebSearch (exit 0)
      3. Load config (exit 0 if None)
      4. Cleanup stale state files (opportunistic, non-blocking)
      5. Check escape hatch (exit 0 if should allow, clears state)
      6. Find matching databases (exit 0 if empty)
      7. Save state for session
      8. Output denial JSON to stdout (exit 2)
    • Wrap entire main in try/except: exit 0 on any unhandled error
    • Log critical errors to stderr only (keep logging minimal per design doc line 196)
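
A condensed sketch of the state-file helpers listed above, assuming the stdlib-only approach from the design doc (signatures follow the list; error-handling granularity and the use of file modification time for staleness are assumptions):

import json
import time
from pathlib import Path

STATE_DIR = Path.home() / ".claude" / "hooks"
STATE_EXPIRY_SECONDS = 300  # 5 minutes


def ensure_hooks_directory() -> None:
    """Create ~/.claude/hooks/ on first run so state writes cannot fail."""
    STATE_DIR.mkdir(parents=True, exist_ok=True)


def get_state_path(session_id: str) -> Path:
    return STATE_DIR / f"docsearch-state-{session_id}.json"


def load_state(session_id: str) -> dict | None:
    """Return the saved state, or None if it is missing, corrupted, or schema-invalid."""
    try:
        state = json.loads(get_state_path(session_id).read_text())
    except (OSError, json.JSONDecodeError):
        return None
    last = state.get("last_denied")
    required = ("query", "allowed_domains", "blocked_domains", "timestamp")
    if not isinstance(last, dict) or any(key not in last for key in required):
        return None
    return state


def save_state(session_id: str, tool_input: dict) -> None:
    ensure_hooks_directory()
    state = {
        "last_denied": {
            "query": tool_input.get("query", ""),
            "allowed_domains": tool_input.get("allowed_domains", []),
            "blocked_domains": tool_input.get("blocked_domains", []),
            "timestamp": int(time.time()),
        }
    }
    get_state_path(session_id).write_text(json.dumps(state))


def clear_state(session_id: str) -> None:
    get_state_path(session_id).unlink(missing_ok=True)


def cleanup_stale_states() -> None:
    """Best-effort removal of state files older than the expiry window."""
    cutoff = time.time() - STATE_EXPIRY_SECONDS
    for path in STATE_DIR.glob("docsearch-state-*.json"):
        try:
            if path.stat().st_mtime < cutoff:
                path.unlink()
        except OSError:
            pass  # never let cleanup break the hook (fail open)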

P1 — Quick Wins (No Dependencies)

  • Create .gitignore

    • Location: /workspace/repo/.gitignore
    • Contents:
      # Python
      __pycache__/
      *.pyc
      *.pyo
      *.pyd
      .python-version
      
      # Virtual environments
      venv/
      .venv/
      env/
      
      # IDE
      .vscode/
      .idea/
      *.swp
      *.swo
      
      # Testing
      .pytest_cache/
      .coverage
      htmlcov/
      .mypy_cache/
      .tox/
      
      # State files (session-specific, should not be committed)
      docsearch-state-*.json
      
      # OS
      .DS_Store
      Thumbs.db
      
  • Create config.example.json

    • Location: /workspace/repo/config.example.json
    • Contents:
      {
        "databases": [
          {
            "keywords": ["gitlab", "gl", "gitlab-ci"],
            "path": "/path/to/your/.leann/databases/gitlab",
            "mcp_tool_name": "mcp__leann__search",
            "description": "GitLab documentation"
          },
          {
            "keywords": ["kubernetes", "k8s", "kubectl"],
            "path": "/path/to/your/.leann/databases/kubernetes",
            "mcp_tool_name": "mcp__leann__search",
            "description": "Kubernetes official documentation"
          }
        ]
      }

P2 — Testing (Blocked by P0)

  • Create tests/ directory structure

    • tests/__init__.py (empty)
    • tests/test_hook.py (main test file)
    • tests/fixtures/ (directory for test data)
  • Create test fixtures

    • tests/fixtures/valid_config.json
    • tests/fixtures/invalid_config.json (malformed JSON)
    • tests/fixtures/missing_fields_config.json
    • tests/fixtures/websearch_input.json
    • tests/fixtures/other_tool_input.json
    • tests/fixtures/multi_keyword_input.json
    • Note: Fixtures should use mock paths (e.g., /tmp/test-db) rather than real LEANN paths
  • Test: Configuration loading

    • Valid config loads correctly
    • Missing config returns None
    • Invalid JSON returns None
    • Missing required fields returns None
  • Test: Keyword matching

    • Single keyword exact match
    • Case-insensitive: "GITLAB", "GitLab", "gitlab" all match
    • Word boundary: "ungitlabbed" does NOT match "gitlab"
    • Multiple databases matching same query
    • No matches returns empty list
    • Special regex characters in keywords are escaped
  • Test: State file operations

    • save_state creates correct JSON content
    • load_state reads existing state
    • load_state returns None for missing/corrupted files
    • load_state returns None for valid JSON with invalid schema (missing last_denied or subfields)
    • clear_state removes file
    • cleanup_stale_states removes old files but keeps recent ones
    • Session isolation (different session_ids don't interfere)
  • Test: Escape hatch logic

    • Exact match allows through (returns True)
    • Query mismatch continues to matching (returns False)
    • Domain arrays compared as sets (order-independent)
    • Timestamp expiry: >5 min = expired (returns False)
    • State cleared after successful escape
  • Test: Denial output format

    • Single match structure matches design doc
    • Multiple matches include all databases
    • Multiple matches contain "IN PARALLEL" instruction text
    • Exit code is 2 on denial
  • Test: Error handling

    • Unhandled exception results in exit 0
    • Non-WebSearch tool causes exit 0
  • Test: Integration (optional)

    • Full flow: WebSearch with keyword → deny
    • Full flow: Same params retry → allow (escape hatch)
    • Full flow: Non-matching query → allow
    • Full flow: Config missing → allow all

P2 — Documentation (Blocked by P0)

  • Update README.md: Project overview

    • What the hook does (1-2 paragraphs)
    • Link to design document
    • Prerequisites: Python 3.12+, LEANN, Claude Code
  • Update README.md: Installation instructions

    • Clone repository
    • Copy docsearch.py to ~/.claude/hooks/PreToolUse/
    • Set executable permission: chmod +x
    • Copy and customize config to ~/.claude/hooks/docsearch-config.json
  • Update README.md: Configuration guide

    • Config file location and full schema
    • Keyword matching behavior (case-insensitive, word boundaries)
    • Multiple database example
  • Update README.md: Usage examples

    • Query that triggers hook
    • Escape hatch behavior
    • Multi-keyword parallel query
  • Update README.md: Troubleshooting

    • Config syntax validation
    • LEANN MCP verification
    • Hook executable check
    • State file inspection
  • Add docstrings to docsearch.py

    • Module-level overview
    • Function docstrings with Args and Returns
    • Inline comments for complex logic (escape hatch, keyword matching)

P3 — Polish (Low Priority)

  • Create .github/ISSUE_TEMPLATE/bug_report.md
  • Create .github/ISSUE_TEMPLATE/feature_request.md
  • Create GitHub Issue: Database Sharing Feature
  • Create GitHub Issue: CLI Setup Command
  • Delete PROMPT_refinement.md
    • Development artifact, not part of final deliverables
    • Should be deleted, not archived (it's a one-time prompt for refinement)

P4 — Validation (Final Phase)

  • Manual testing in Claude Code

    • Install hook in real environment
    • Test with actual LEANN MCP server
    • Verify keyword interception works
    • Verify escape hatch allows retry
    • Verify multi-keyword triggers parallel guidance
    • Verify fail-open on errors
  • Design compliance audit

    • Exit codes match spec (0=allow, 2=deny)
    • Output JSON format matches design doc exactly
    • Config schema matches design doc
    • State schema matches design doc
  • Verify LEANN MCP server configuration

    • Confirm .mcp.json server naming aligns with LEANN tool expectations
    • Verify mcp__leann__search tool name resolves correctly
    • Test MCP tool invocation with actual LEANN server

Implementation Notes

Design Decisions (from design doc)

  1. Fail Open: Any error = exit 0 (allow WebSearch through)
  2. Stdlib Only: No pip dependencies (json, re, sys, pathlib, time)
  3. Session Isolation: State files keyed by session_id
  4. 5-Minute Expiry: Safety net for stale state

Key Technical Details

  • Hook Location: ~/.claude/hooks/PreToolUse/docsearch.py
  • Config Location: ~/.claude/hooks/docsearch-config.json
  • State Location: ~/.claude/hooks/docsearch-state-{session_id}.json
  • Exit Codes: 0 = allow, 2 = deny with guidance

Hook Input Schema (Verified from Claude Code Official Docs)

PreToolUse hooks receive JSON via stdin with these fields:

{
  "session_id": "abc123",
  "transcript_path": "/path/to/transcript.jsonl",
  "cwd": "/current/working/directory",
  "permission_mode": "default",
  "hook_event_name": "PreToolUse",
  "tool_name": "WebSearch",
  "tool_input": {
    "query": "search query text",
    "allowed_domains": ["example.com"],
    "blocked_domains": []
  },
  "tool_use_id": "toolu_01ABC123..."
}

WebSearch-specific tool_input fields:

  • query (string): The search query
  • allowed_domains (array, optional): Only include results from these domains
  • blocked_domains (array, optional): Exclude results from these domains

Clarifications Resolved

  1. State cleanup: Design doc mentions "clear stale state files on session start" (line 126) but hooks cannot detect session boundaries. Resolution: opportunistic cleanup of all state files >5 minutes old on each hook invocation approximates this behavior. Combined with 5-minute timestamp expiry check in escape hatch logic, this provides robust stale state handling.
  2. MCP parameters: Include path in additionalContext, let Claude determine params
  3. Same tool name: Different databases differentiated by path in additionalContext
  4. Parallel MCP calls: Multi-keyword denials must include explicit "IN PARALLEL" text per design doc line 176 to ensure Claude calls MCP tools concurrently
  5. MCP tool naming: The .mcp.json server name determines tool name prefix. Current leann-docs-search produces mcp__leann-docs-search__search. DECISION: Rename to leann for cleaner mcp__leann__search as per design doc examples. (Elevated to P0)
  6. Directory creation: The hooks directory ~/.claude/hooks/ may not exist on first run. State file operations must create it if missing.

Success Criteria

From design document:

  1. Functional: Hook intercepts WebSearch for keywords, multi-keyword parallel calls work, escape hatch allows retry
  2. Reliability: Never breaks Claude (fail open), handles all edge cases
  3. Usability: Clear docs, example config, helpful error messages
  4. Maintainability: Type hints, unit tests, documented edge cases

DocSearch Hook Implementation Plan

For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

Goal: Build a Claude Code PreToolUse hook that intercepts WebSearch tool calls and redirects documentation-related queries to local RAG databases via LEANN MCP server.

Architecture: A Python 3.12+ script (docsearch.py) acts as a PreToolUse hook. It reads configuration from ~/.claude/hooks/docsearch-config.json, tracks state in session-specific files, and uses exit codes to allow (0) or deny (2) WebSearch calls. When denying, it provides additionalContext guiding Claude to use LEANN MCP tools instead. An escape hatch allows retries if RAG results are insufficient.

Tech Stack: Python 3.12+ standard library only (json, re, sys, pathlib, os, time)

Reference: See docs/plans/2026-02-04-docsearch-hook-design.md for full design specification.


Implementation Status

| Task | Status | Description | Priority | Order |
| --- | --- | --- | --- | --- |
| Task 2 | NOT STARTED | Core Hook Script - Skeleton and Input Parsing | P0 - Foundation | 1 |
| Task 3 | NOT STARTED | Configuration Loading | P0 - Core | 2 |
| Task 4 | NOT STARTED | Keyword Matching with Word Boundaries | P0 - Core | 3 |
| Task 6a | NOT STARTED | Session ID Sanitization (Security) | P0 - Security | 4 |
| Task 6 | NOT STARTED | Session State Management for Escape Hatch | P0 - Core | 5 |
| Task 12 | NOT STARTED | Make Script Executable and Add Shebang | P0 - Core | 6 |
| Task 5 | NOT STARTED | Multiple Keyword Matching (Verification Tests) | P1 - Feature | 7 |
| Task 7 | NOT STARTED | State Cleanup (Stale State Expiry) | P1 - Enhancement | 8 |
| Task 7a | NOT STARTED | Session Start State Cleanup | P1 - Enhancement | 9 |
| Task 3a | NOT STARTED | Configuration Schema Validation | P1 - Quality | 10 |
| Task 3b | NOT STARTED | Keywords Element Type Validation | P1 - Quality | 11 |
| Task 8 | NOT STARTED | Error Logging to stderr | P1 - Enhancement | 12 |
| Task 9 | NOT STARTED | Complete Test Coverage and Edge Cases | P1 - Quality | 13 |
| Task 9a | NOT STARTED | Permission Error Tests | P1 - Quality | 14 |
| Task 9b | NOT STARTED | Session Isolation Tests | P1 - Quality | 15 |
| Task 1 | NOT STARTED | Project Structure and Example Config | P2 - Documentation | 16 |
| Task 10 | NOT STARTED | README Documentation | P2 - Documentation | 17 |
| Task 11 | NOT STARTED | Final Integration Testing | P2 - Validation | 18 |

Codebase Analysis (2026-02-05)

Current State: No implementation exists. Repository contains only:

  • README.md - Empty placeholder ("# mcp-docsearch")
  • LICENSE - MIT license file
  • docs/plans/ - Design and implementation plan documents
  • .leann/ - LEANN index files (not relevant to implementation)
  • .mcp.json, .claude/settings.local.json - Configuration files

All 18 tasks listed above remain to be implemented.

Gap Analysis Notes

The following gaps were identified when comparing this plan against the design spec:

Previously Identified (Addressed in Plan)

  1. Session Start State Cleanup (Task 7a): Design spec mentions "Clear state file on session start" as a complementary mechanism to timestamp expiry - added as new task.
  2. Config Schema Validation (Task 3a): Design spec marks config fields as "required" but original plan silently defaults missing fields - added as new task.
  3. Tech Stack: Design spec should include os and time modules (used in implementation).

Newly Identified (2026-02-05 Analysis)

Critical Gaps:

  4. Type validation for config fields missing: Task 3a only checks field presence, not types. keywords should be validated as an array of strings, not merely as present.
  5. Empty keywords array not handled: A database entry with keywords: [] will silently fail to match anything. Should log a warning and skip.

Important Gaps:

  6. Path format validation missing: Design spec says path should be an "Absolute path" but no validation exists. Should warn on relative paths.
  7. Task 7a cleanup timing differs from design: Design says "Clear stale state files on session start" but Task 7a cleans during hook execution. Semantically equivalent but worth noting.
  8. No permission error tests: Implementation silently handles state/config permission errors but these edge cases aren't tested.
  9. No concurrent session isolation test: Tests use different session_ids but don't verify true isolation.

Minor Gaps (Can Address Post-MVP):

  10. GitHub issue templates not created: Design mentions .github/ISSUE_TEMPLATE/ but no task creates it.
  11. Success criteria not all testable: Design lists "Clean Python code with type hints" as a success criterion but this is never verified.
  12. State file naming uses raw session_id: No sanitization of session_id for filesystem safety (special characters).

Security Gap (2026-02-05 Deep Analysis) - MUST FIX

CRITICAL - Session ID Path Traversal Vulnerability:

  13. Session ID sanitization required: The get_state_file() function uses raw session_id in the filename without sanitization. This could allow path traversal attacks if a malicious session_id like "../../etc/passwd" or "foo/bar" is provided. Add Task 6a: Session ID Sanitization to address this before Task 6.
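
A minimal sketch of the sanitization Task 6a calls for, allowing only alphanumerics, dashes, and underscores (the "default" fallback mirrors the missing-session_id handling elsewhere in the plan and is otherwise an assumption):

import re


def sanitize_session_id(session_id: str) -> str:
    """Reduce a session_id to a filesystem-safe token for state file names."""
    # Strip anything outside [A-Za-z0-9_-] so values like "../../etc/passwd"
    # or "foo/bar" cannot escape the state directory.
    cleaned = re.sub(r"[^A-Za-z0-9_-]", "", session_id)
    return cleaned or "default"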

TDD Issues Identified (2026-02-05 Deep Analysis)

  1. Task 3 "Expected: FAIL" reason is incorrect: The test would actually PASS because Task 2's implementation returns exit 0 for WebSearch tools. Need to fix the expected failure reason.
  2. Task 5 violates TDD principles: Tests are expected to pass immediately (verification tests, not TDD). Should be relabeled or moved.
  3. Task 7a test doesn't verify cleanup: test_stale_state_file_cleaned_on_unrelated_query should assert that stale files were actually deleted.
  4. Task 5 order test missing null checks: Should verify find() doesn't return -1 before comparing positions.
  5. Keywords element type validation missing: Task 3a validates keywords is a list but not that all elements are strings.

Parallelization Opportunities

  1. Tasks 5, 3a, and 8 can run in parallel after Task 4 completes (no dependencies between them).
  2. Task 1 could be P2 since it's just an example config file, not required for core functionality.

Prioritized Remaining Work (Bullet Points)

Phase 1: Core Functionality (P0) - MUST HAVE

All items below are required for a minimal viable hook:

  • Task 2: Core Hook Script - Skeleton and Input Parsing

    • Create tests/test_hook.py with run_hook() helper and input parsing tests
    • Create docsearch.py with main() entry point, stdin JSON parsing, WebSearch filtering
    • Verify tests pass, commit
  • Task 3: Configuration Loading

    • Add tests for missing/invalid config file handling (fail-open behavior)
    • Implement get_config_path() and load_config() functions
    • Support DOCSEARCH_CONFIG_PATH environment variable for testing
    • Verify tests pass, commit
  • Task 4: Keyword Matching with Word Boundaries

    • Create tests/fixtures/valid_config.json test fixture
    • Add tests for single keyword match, no match, case-insensitive, word boundary
    • Implement find_matching_databases() with \b regex word boundaries
    • Implement build_deny_response() for single/multiple database responses
    • Verify tests pass, commit
  • Task 6a: Session ID Sanitization (Security) (NEW - MUST BE BEFORE Task 6)

    • Add tests for path traversal and special character handling in session_id
    • Implement sanitize_session_id() using regex to allow only alphanumeric, dash, underscore
    • Create get_state_file() stub that uses sanitized session_id
    • Verify tests pass, commit
  • Task 6: Session State Management for Escape Hatch

    • Add tests for state file creation, escape hatch retry, different query denial
    • Implement get_state_dir(), expand get_state_file(), load_state(), save_state()
    • Implement params_match() for exact query + set-based domain comparison
    • Update main() with escape hatch logic before keyword matching
    • Verify tests pass, commit
  • Task 12: Make Script Executable and Add Shebang

    • Verify shebang line #!/usr/bin/env python3 present
    • Run full test suite
    • Verify script executes with ./docsearch.py < /dev/null (exit 0)
    • Final commit

Phase 2: Enhanced Features (P1) - IMPORTANT

These enhance functionality but can ship without:

  • Task 5: Multiple Keyword Matching

    • Add tests for queries matching multiple databases
    • Add test for k8s alias matching kubernetes
    • Add test for database order preservation in output
    • Verify implementation handles multiple matches with "IN PARALLEL" instruction
    • Commit tests
  • Task 7: State Cleanup (Stale State Expiry)

    • Add tests for expired state (>5 min) being ignored
    • Add tests for recent state (<5 min) being used
    • Add STATE_EXPIRY_SECONDS = 300 constant
    • Implement is_state_expired() function
    • Update escape hatch check to include expiry validation
    • Verify tests pass, commit
  • Task 7a: Session Start State Cleanup (NEW - from gap analysis)

    • Add test for clearing stale state files on hook initialization
    • Implement optional cleanup of state files older than expiry threshold
    • This complements timestamp expiry as a safety mechanism
    • Verify tests pass, commit
  • Task 3a: Configuration Schema Validation (NEW - from gap analysis; sketched after this phase list)

    • Add tests for config with missing required fields (keywords, path, etc.)
    • Add validation that logs warnings for missing required fields
    • Maintain fail-open behavior (allow search through on invalid config)
    • Verify tests pass, commit
  • Task 3b: Keywords Element Type Validation (NEW)

    • Add test for keywords array with non-string elements (integers, nulls, dicts)
    • Validate all keyword elements are strings using isinstance(k, str)
    • Log warning and skip entry if validation fails
    • Verify tests pass, commit
  • Task 8: Error Logging to stderr

    • Add test for invalid config JSON logging to stderr
    • Update load_config() to log JSON parse errors to stderr
    • Maintain silent behavior for missing config file (expected during setup)
    • Verify tests pass, commit
  • Task 9: Complete Test Coverage and Edge Cases

    • Add test for empty query allowing through
    • Add test for missing query field allowing through
    • Add test for missing tool_input field allowing through
    • Add test for missing session_id using "default"
    • Add test for empty databases config allowing through
    • Add test for domains compared as sets (order-independent)
    • Add test for special characters in keywords (c++, c#, .net)
    • Add test verifying all required output fields present
    • Verify all tests pass, commit
  • Task 9a: Permission Error Tests (NEW)

    • Add test for unreadable config file (chmod 000) - should fail open
    • Add test for unwritable state directory - should still deny
    • Verify fail-open behavior for permission scenarios
    • Commit tests
  • Task 9b: Session Isolation Tests (NEW)

    • Add test that state from session A doesn't affect session B
    • Add test that escape hatch only works for the session that was denied
    • Verify session isolation works correctly
    • Commit tests
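
A minimal sketch of the validation Tasks 3a and 3b describe, keeping the fail-open philosophy (the helper name and warning wording are illustrative; the required fields come from the design doc):

import sys

REQUIRED_FIELDS = ("keywords", "path", "mcp_tool_name", "description")


def validate_databases(config: dict) -> list[dict]:
    """Return only well-formed database entries, warning about the rest."""
    valid = []
    for i, db in enumerate(config.get("databases", [])):
        if not isinstance(db, dict) or any(field not in db for field in REQUIRED_FIELDS):
            print(f"docsearch: database entry {i} is missing required fields, skipping", file=sys.stderr)
            continue
        keywords = db["keywords"]
        # Task 3b: keywords must be a non-empty list of strings.
        if not isinstance(keywords, list) or not keywords or not all(isinstance(k, str) for k in keywords):
            print(f"docsearch: database entry {i} has an invalid keywords array, skipping", file=sys.stderr)
            continue
        valid.append(db)
    return valid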

Phase 3: Documentation & Validation (P2) - POLISH

Final documentation and validation:

  • Task 1: Project Structure and Example Config (moved from P0)

    • Create config.example.json with GitLab and Kubernetes database examples
    • Commit the example configuration file
  • Task 10: README Documentation

    • Update README.md with comprehensive setup instructions
    • Document features, prerequisites, installation steps
    • Add configuration guide with field descriptions
    • Document escape hatch behavior
    • Add troubleshooting section
    • Commit
  • Task 11: Final Integration Testing

    • Create tests/test_integration.py as testing guide
    • Document manual test scenarios:
      • Basic interception flow
      • Escape hatch retry flow
      • Multiple keyword parallel MCP calls
      • Non-matching query passthrough
    • Commit

Priority Order for Implementation

Phase 1: Core Functionality (P0) - Tasks 2, 3, 4, 6a, 6, 12

Must-have for minimal viable hook:

  1. Task 2 - Hook skeleton with input parsing
  2. Task 3 - Configuration loading
  3. Task 4 - Keyword matching (single keyword)
  4. Task 6a - Session ID sanitization (SECURITY - must come before Task 6)
  5. Task 6 - Escape hatch state management
  6. Task 12 - Make script executable

Phase 2: Enhanced Features (P1) - Tasks 5, 7, 7a, 3a, 3b, 8, 9, 9a, 9b

Important but can ship without:

  7. Task 5 - Multiple keyword matching (verification tests)
  8. Task 7 - Stale state expiry (5-minute timeout)
  9. Task 7a - Session start state cleanup
  10. Task 3a - Config schema validation
  11. Task 3b - Keywords element type validation (NEW)
  12. Task 8 - Error logging to stderr
  13. Task 9 - Edge case test coverage
  14. Task 9a - Permission error tests (NEW)
  15. Task 9b - Session isolation tests (NEW)

Parallelization Note: Tasks 5, 3a, 3b, and 8 can run in parallel after Task 4.

Phase 3: Documentation & Validation (P2) - Tasks 1, 10, 11

Polish and documentation:

  16. Task 1 - Project structure and example config (moved from P0)
  17. Task 10 - README documentation
  18. Task 11 - Integration testing guide


Task 1: Project Structure and Example Config

Files:

  • Create: config.example.json

Step 1: Write the example configuration file

{
  "databases": [
    {
      "keywords": ["gitlab", "gl", "gitlab-ci"],
      "path": "/Users/viktor/.leann/databases/gitlab",
      "mcp_tool_name": "mcp__leann__search",
      "description": "GitLab documentation from docs.gitlab.com"
    },
    {
      "keywords": ["kubernetes", "k8s", "kubectl"],
      "path": "/Users/viktor/.leann/databases/kubernetes",
      "mcp_tool_name": "mcp__leann__search",
      "description": "Kubernetes official documentation"
    }
  ]
}

Step 2: Commit

git add config.example.json
git commit -m "feat: add example configuration file for docsearch hook"

Task 2: Core Hook Script - Skeleton and Input Parsing

Files:

  • Create: docsearch.py
  • Create: tests/test_hook.py

Step 1: Write the failing test for hook script existence and basic input parsing

Create tests/test_hook.py:

"""Unit tests for docsearch.py hook script."""
import json
import os
import subprocess
import sys
import time
from pathlib import Path

HOOK_SCRIPT = Path(__file__).parent.parent / "docsearch.py"
FIXTURES_DIR = Path(__file__).parent / "fixtures"


def run_hook(stdin_data: dict, env: dict | None = None) -> tuple[int, str, str]:
    """Run the hook script with given stdin and return (exit_code, stdout, stderr)."""
    result = subprocess.run(
        [sys.executable, str(HOOK_SCRIPT)],
        input=json.dumps(stdin_data),
        capture_output=True,
        text=True,
        env=env,
    )
    return result.returncode, result.stdout, result.stderr


class TestInputParsing:
    """Tests for hook input parsing."""

    def test_non_websearch_tool_allows_through(self):
        """Non-WebSearch tool calls should be allowed (exit 0)."""
        hook_input = {
            "tool_name": "Read",
            "tool_input": {"file_path": "/some/file.txt"},
        }
        exit_code, stdout, stderr = run_hook(hook_input)
        assert exit_code == 0

    def test_invalid_json_allows_through(self):
        """Invalid JSON input should fail open (exit 0)."""
        result = subprocess.run(
            [sys.executable, str(HOOK_SCRIPT)],
            input="not valid json",
            capture_output=True,
            text=True,
        )
        assert result.returncode == 0

Step 2: Run test to verify it fails

Run: pytest tests/test_hook.py -v Expected: FAIL (docsearch.py doesn't exist)

Step 3: Write minimal implementation

Create docsearch.py:

#!/usr/bin/env python3
"""
DocSearch Hook - PreToolUse hook that redirects documentation queries to RAG databases.

This hook intercepts WebSearch tool calls and checks if the query matches configured
documentation keywords. If matched, it denies the search and guides Claude to use
LEANN MCP tools instead. Includes an escape hatch for retrying web search if RAG fails.
"""
import json
import sys


def main() -> int:
    """Main entry point for the hook."""
    # Read and parse input from stdin
    try:
        stdin_data = sys.stdin.read()
        hook_input = json.loads(stdin_data)
    except json.JSONDecodeError:
        # Invalid JSON - fail open
        return 0

    # Get tool name - if not WebSearch, allow through
    tool_name = hook_input.get("tool_name", "")
    if tool_name != "WebSearch":
        return 0

    # Placeholder for future implementation
    return 0


if __name__ == "__main__":
    sys.exit(main())

Step 4: Run test to verify it passes

Run: pytest tests/test_hook.py -v Expected: PASS

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: add hook skeleton with input parsing"

Task 3: Configuration Loading

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py

Step 1: Write the failing test for configuration loading

Add to tests/test_hook.py:

class TestConfigLoading:
    """Tests for configuration file loading."""

    def test_missing_config_allows_through(self, tmp_path):
        """Missing config file should fail open (exit 0)."""
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "how to configure gitlab ci"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input, env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(tmp_path / "nonexistent.json")}
        )
        assert exit_code == 0

    def test_invalid_json_config_allows_through(self, tmp_path):
        """Invalid JSON config should fail open (exit 0)."""
        config_file = tmp_path / "config.json"
        config_file.write_text("not valid json")

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "how to configure gitlab ci"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input, env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)}
        )
        assert exit_code == 0

Step 2: Run test to verify it fails

Run: pytest tests/test_hook.py::TestConfigLoading -v Expected: FAIL (config loading not implemented)

Step 3: Write minimal implementation

Update docsearch.py:

#!/usr/bin/env python3
"""
DocSearch Hook - PreToolUse hook that redirects documentation queries to RAG databases.

This hook intercepts WebSearch tool calls and checks if the query matches configured
documentation keywords. If matched, it denies the search and guides Claude to use
LEANN MCP tools instead. Includes an escape hatch for retrying web search if RAG fails.
"""
import json
import os
import sys
from pathlib import Path


def get_config_path() -> Path:
    """Get the configuration file path."""
    if env_path := os.environ.get("DOCSEARCH_CONFIG_PATH"):
        return Path(env_path)
    return Path.home() / ".claude" / "hooks" / "docsearch-config.json"


def load_config() -> dict | None:
    """Load and parse the configuration file. Returns None on any error."""
    config_path = get_config_path()
    try:
        with open(config_path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError, OSError):
        return None


def main() -> int:
    """Main entry point for the hook."""
    # Read and parse input from stdin
    try:
        stdin_data = sys.stdin.read()
        hook_input = json.loads(stdin_data)
    except json.JSONDecodeError:
        # Invalid JSON - fail open
        return 0

    # Get tool name - if not WebSearch, allow through
    tool_name = hook_input.get("tool_name", "")
    if tool_name != "WebSearch":
        return 0

    # Load configuration - if missing or invalid, allow through
    config = load_config()
    if config is None:
        return 0

    # Placeholder for keyword matching
    return 0


if __name__ == "__main__":
    sys.exit(main())

Step 4: Run test to verify it passes

Run: pytest tests/test_hook.py -v Expected: PASS

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: add configuration file loading with fail-open behavior"

Task 4: Keyword Matching with Word Boundaries

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py
  • Create: tests/fixtures/ directory
  • Create: tests/fixtures/valid_config.json

Step 1: Create test fixtures

Create tests/fixtures/valid_config.json:

{
  "databases": [
    {
      "keywords": ["gitlab", "gl", "gitlab-ci"],
      "path": "/mock/path/gitlab",
      "mcp_tool_name": "mcp__leann__search",
      "description": "GitLab documentation"
    },
    {
      "keywords": ["kubernetes", "k8s", "kubectl"],
      "path": "/mock/path/kubernetes",
      "mcp_tool_name": "mcp__leann__search",
      "description": "Kubernetes documentation"
    }
  ]
}

Step 2: Write the failing test for keyword matching

Add to tests/test_hook.py:

class TestKeywordMatching:
    """Tests for keyword detection in queries."""

    def test_single_keyword_match_denies(self):
        """Query containing configured keyword should be denied (exit 2)."""
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "how to configure gitlab ci runners"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
        )
        assert exit_code == 2

        # Verify output is valid JSON with correct structure
        output = json.loads(stdout)
        assert output["hookSpecificOutput"]["permissionDecision"] == "deny"
        assert output["hookSpecificOutput"]["hookEventName"] == "PreToolUse"
        assert "gitlab" in output["hookSpecificOutput"]["permissionDecisionReason"].lower()

    def test_no_keyword_match_allows(self):
        """Query without configured keywords should be allowed (exit 0)."""
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "how to make a sandwich"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
        )
        assert exit_code == 0

    def test_case_insensitive_matching(self):
        """Keyword matching should be case-insensitive."""
        for query in ["GITLAB ci", "GitLab CI", "gitlab ci"]:
            hook_input = {
                "tool_name": "WebSearch",
                "tool_input": {"query": query},
            }
            exit_code, stdout, stderr = run_hook(
                hook_input,
                env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
            )
            assert exit_code == 2, f"Failed for query: {query}"

    def test_word_boundary_matching(self):
        """Partial word matches should NOT trigger denial."""
        # "ungitlabbed" contains "gitlab" but should not match due to word boundaries
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "ungitlabbed workflow"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
        )
        assert exit_code == 0

Step 3: Run test to verify it fails

Run: pytest tests/test_hook.py::TestKeywordMatching -v Expected: FAIL (keyword matching not implemented)

Step 4: Write minimal implementation

Update docsearch.py:

#!/usr/bin/env python3
"""
DocSearch Hook - PreToolUse hook that redirects documentation queries to RAG databases.

This hook intercepts WebSearch tool calls and checks if the query matches configured
documentation keywords. If matched, it denies the search and guides Claude to use
LEANN MCP tools instead. Includes an escape hatch for retrying web search if RAG fails.
"""
import json
import os
import re
import sys
from pathlib import Path


def get_config_path() -> Path:
    """Get the configuration file path."""
    if env_path := os.environ.get("DOCSEARCH_CONFIG_PATH"):
        return Path(env_path)
    return Path.home() / ".claude" / "hooks" / "docsearch-config.json"


def load_config() -> dict | None:
    """Load and parse the configuration file. Returns None on any error."""
    config_path = get_config_path()
    try:
        with open(config_path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError, OSError):
        return None


def find_matching_databases(query: str, config: dict) -> list[dict]:
    """Find all databases with keywords matching the query.

    Uses word boundary matching (case-insensitive).
    Returns list of matching database configs.
    """
    matches = []
    query_lower = query.lower()

    for db in config.get("databases", []):
        for keyword in db.get("keywords", []):
            # Word boundary regex for exact word match
            pattern = rf"\b{re.escape(keyword.lower())}\b"
            if re.search(pattern, query_lower):
                matches.append(db)
                break  # Only add each database once

    return matches


def build_deny_response(matches: list[dict]) -> dict:
    """Build the JSON response for denying a WebSearch."""
    if len(matches) == 1:
        db = matches[0]
        matched_keywords = db["keywords"][0]  # Use first keyword for message
        reason = f"Query matches '{matched_keywords}' - using RAG database instead"
        context = (
            f"This query should use the LEANN MCP tool '{db['mcp_tool_name']}' "
            f"to search the {db['description']} RAG database at {db['path']} instead of web search."
        )
    else:
        keyword_list = " and ".join(f"'{db['keywords'][0]}'" for db in matches)
        reason = f"Query matches {keyword_list} - using RAG databases instead"
        lines = ["This query matches multiple documentation databases. Please use these LEANN MCP tools IN PARALLEL:"]
        for i, db in enumerate(matches, 1):
            lines.append(f"{i}. '{db['mcp_tool_name']}' for {db['description']} at {db['path']}")
        context = "\n".join(lines)

    return {
        "hookSpecificOutput": {
            "hookEventName": "PreToolUse",
            "permissionDecision": "deny",
            "permissionDecisionReason": reason,
            "additionalContext": context,
        }
    }


def main() -> int:
    """Main entry point for the hook."""
    # Read and parse input from stdin
    try:
        stdin_data = sys.stdin.read()
        hook_input = json.loads(stdin_data)
    except json.JSONDecodeError:
        # Invalid JSON - fail open
        return 0

    # Get tool name - if not WebSearch, allow through
    tool_name = hook_input.get("tool_name", "")
    if tool_name != "WebSearch":
        return 0

    # Load configuration - if missing or invalid, allow through
    config = load_config()
    if config is None:
        return 0

    # Get the query from tool input
    tool_input = hook_input.get("tool_input", {})
    query = tool_input.get("query", "")
    if not query:
        return 0

    # Find matching databases
    matches = find_matching_databases(query, config)
    if not matches:
        return 0

    # Deny and provide guidance
    response = build_deny_response(matches)
    print(json.dumps(response))
    return 2


if __name__ == "__main__":
    sys.exit(main())

Step 5: Run test to verify it passes

Run: pytest tests/test_hook.py -v Expected: PASS

Step 6: Commit

mkdir -p tests/fixtures
git add docsearch.py tests/test_hook.py tests/fixtures/valid_config.json
git commit -m "feat: add keyword matching with word boundaries and denial responses"

Task 5: Multiple Keyword Matching

Files:

  • Modify: tests/test_hook.py

Step 1: Write the test for multiple keyword matches

Add to tests/test_hook.py:

class TestMultipleKeywordMatching:
    """Tests for queries matching multiple databases."""

    def test_multiple_keywords_match_all_databases(self):
        """Query with multiple keywords should mention all matching databases."""
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "how to deploy gitlab on kubernetes"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
        )
        assert exit_code == 2

        output = json.loads(stdout)
        context = output["hookSpecificOutput"]["additionalContext"]

        # Both databases should be mentioned
        assert "gitlab" in context.lower()
        assert "kubernetes" in context.lower()
        assert "IN PARALLEL" in context

    def test_k8s_alias_matches_kubernetes(self):
        """Alternative keywords like 'k8s' should match kubernetes database."""
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "k8s pod configuration"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
        )
        assert exit_code == 2

        output = json.loads(stdout)
        assert "kubernetes" in output["hookSpecificOutput"]["additionalContext"].lower()

    def test_database_order_preserved_in_output(self):
        """Databases should appear in config file order in output."""
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "gitlab kubernetes deployment"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
        )
        assert exit_code == 2

        output = json.loads(stdout)
        context = output["hookSpecificOutput"]["additionalContext"]

        # GitLab appears first in config, so should be listed as item 1
        gitlab_pos = context.find("GitLab")
        kubernetes_pos = context.find("Kubernetes")
        assert gitlab_pos < kubernetes_pos, "GitLab should appear before Kubernetes (config order)"

Step 2: Run test to verify it passes

Run: pytest tests/test_hook.py::TestMultipleKeywordMatching -v Expected: PASS (already implemented in Task 4)

Step 3: Commit

git add tests/test_hook.py
git commit -m "test: add tests for multiple keyword matching and config order preservation"

Task 6: Session State Management for Escape Hatch

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py

Step 1: Write the failing test for state file management

Add to tests/test_hook.py:

class TestStateManagement:
    """Tests for session state file management."""

    def test_first_search_stores_state_and_denies(self, tmp_path):
        """First matching search should store state and deny."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "how to configure gitlab ci",
                "allowed_domains": [],
                "blocked_domains": [],
            },
            "session_id": "test-session-123",
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        assert exit_code == 2

        # State file should be created
        state_file = state_dir / "docsearch-state-test-session-123.json"
        assert state_file.exists()

        state = json.loads(state_file.read_text())
        assert state["last_denied"]["query"] == "how to configure gitlab ci"

    def test_retry_same_params_allows_through(self, tmp_path):
        """Retry with exact same params should allow through (escape hatch)."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Pre-create state file simulating a previous denial
        state_file = state_dir / "docsearch-state-test-session-456.json"
        state_file.write_text(json.dumps({
            "last_denied": {
                "query": "how to configure gitlab ci",
                "allowed_domains": [],
                "blocked_domains": [],
                "timestamp": int(time.time()),
            }
        }))

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "how to configure gitlab ci",
                "allowed_domains": [],
                "blocked_domains": [],
            },
            "session_id": "test-session-456",
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        assert exit_code == 0

        # State file should be cleared after successful retry
        state = json.loads(state_file.read_text())
        assert state.get("last_denied") is None

    def test_different_query_denies_again(self, tmp_path):
        """Different query should deny even with existing state."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Pre-create state file with different query
        state_file = state_dir / "docsearch-state-test-session-789.json"
        state_file.write_text(json.dumps({
            "last_denied": {
                "query": "gitlab runners setup",
                "allowed_domains": [],
                "blocked_domains": [],
                "timestamp": int(time.time()),
            }
        }))

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "how to configure gitlab ci",  # Different query
                "allowed_domains": [],
                "blocked_domains": [],
            },
            "session_id": "test-session-789",
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        assert exit_code == 2

    def test_corrupted_state_file_fails_open(self, tmp_path):
        """Corrupted state file should be treated as no previous denial."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Pre-create corrupted state file
        state_file = state_dir / "docsearch-state-test-session-corrupted.json"
        state_file.write_text("{invalid json content")

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "how to configure gitlab ci",
                "allowed_domains": [],
                "blocked_domains": [],
            },
            "session_id": "test-session-corrupted",
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        # Should deny (no valid state to trigger escape hatch)
        assert exit_code == 2

Step 2: Run test to verify it fails

Run: pytest tests/test_hook.py::TestStateManagement -v Expected: FAIL (state management not implemented)

Step 3: Write minimal implementation

Update docsearch.py to add state management:

#!/usr/bin/env python3
"""
DocSearch Hook - PreToolUse hook that redirects documentation queries to RAG databases.

This hook intercepts WebSearch tool calls and checks if the query matches configured
documentation keywords. If matched, it denies the search and guides Claude to use
LEANN MCP tools instead. Includes an escape hatch for retrying web search if RAG fails.
"""
import json
import os
import re
import sys
import time
from pathlib import Path


def get_config_path() -> Path:
    """Get the configuration file path."""
    if env_path := os.environ.get("DOCSEARCH_CONFIG_PATH"):
        return Path(env_path)
    return Path.home() / ".claude" / "hooks" / "docsearch-config.json"


def get_state_dir() -> Path:
    """Get the state directory path."""
    if env_path := os.environ.get("DOCSEARCH_STATE_DIR"):
        return Path(env_path)
    return Path.home() / ".claude" / "hooks"


def get_state_file(session_id: str) -> Path:
    """Get the state file path for a session."""
    return get_state_dir() / f"docsearch-state-{session_id}.json"


def load_config() -> dict | None:
    """Load and parse the configuration file. Returns None on any error."""
    config_path = get_config_path()
    try:
        with open(config_path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError, OSError):
        return None


def load_state(session_id: str) -> dict:
    """Load session state. Returns empty dict on any error."""
    state_file = get_state_file(session_id)
    try:
        with open(state_file) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError, OSError):
        return {}


def save_state(session_id: str, state: dict) -> None:
    """Save session state."""
    state_file = get_state_file(session_id)
    try:
        state_file.parent.mkdir(parents=True, exist_ok=True)
        with open(state_file, "w") as f:
            json.dump(state, f)
    except OSError:
        pass  # Fail silently - state is optional


def params_match(current: dict, previous: dict) -> bool:
    """Check if current tool_input matches previous denied params.

    Compares query exactly and domains as sets (order-independent).
    """
    if current.get("query") != previous.get("query"):
        return False

    # Compare domains as sets (order-independent)
    current_allowed = set(current.get("allowed_domains", []) or [])
    previous_allowed = set(previous.get("allowed_domains", []) or [])
    if current_allowed != previous_allowed:
        return False

    current_blocked = set(current.get("blocked_domains", []) or [])
    previous_blocked = set(previous.get("blocked_domains", []) or [])
    if current_blocked != previous_blocked:
        return False

    return True


def find_matching_databases(query: str, config: dict) -> list[dict]:
    """Find all databases with keywords matching the query.

    Uses word boundary matching (case-insensitive).
    Returns list of matching database configs.
    """
    matches = []
    query_lower = query.lower()

    for db in config.get("databases", []):
        for keyword in db.get("keywords", []):
            # Word boundary regex for exact word match
            pattern = rf"\b{re.escape(keyword.lower())}\b"
            if re.search(pattern, query_lower):
                matches.append(db)
                break  # Only add each database once

    return matches


def build_deny_response(matches: list[dict]) -> dict:
    """Build the JSON response for denying a WebSearch."""
    if len(matches) == 1:
        db = matches[0]
        matched_keyword = db["keywords"][0]  # Use first keyword for message
        reason = f"Query matches '{matched_keyword}' - using RAG database instead"
        context = (
            f"This query should use the LEANN MCP tool '{db['mcp_tool_name']}' "
            f"to search the {db['description']} RAG database at {db['path']} instead of web search."
        )
    else:
        keyword_list = " and ".join(f"'{db['keywords'][0]}'" for db in matches)
        reason = f"Query matches {keyword_list} - using RAG databases instead"
        lines = ["This query matches multiple documentation databases. Please use these LEANN MCP tools IN PARALLEL:"]
        for i, db in enumerate(matches, 1):
            lines.append(f"{i}. '{db['mcp_tool_name']}' for {db['description']} at {db['path']}")
        context = "\n".join(lines)

    return {
        "hookSpecificOutput": {
            "hookEventName": "PreToolUse",
            "permissionDecision": "deny",
            "permissionDecisionReason": reason,
            "additionalContext": context,
        }
    }


def main() -> int:
    """Main entry point for the hook."""
    # Read and parse input from stdin
    try:
        stdin_data = sys.stdin.read()
        hook_input = json.loads(stdin_data)
    except json.JSONDecodeError:
        # Invalid JSON - fail open
        return 0

    # Get tool name - if not WebSearch, allow through
    tool_name = hook_input.get("tool_name", "")
    if tool_name != "WebSearch":
        return 0

    # Load configuration - if missing or invalid, allow through
    config = load_config()
    if config is None:
        return 0

    # Get the query from tool input
    tool_input = hook_input.get("tool_input", {})
    query = tool_input.get("query", "")
    if not query:
        return 0

    # Get session ID for state management
    session_id = hook_input.get("session_id", "default")

    # Check escape hatch - if this is a retry of the same params, allow through
    state = load_state(session_id)
    last_denied = state.get("last_denied")
    if last_denied and params_match(tool_input, last_denied):
        # Clear state and allow through
        save_state(session_id, {"last_denied": None})
        return 0

    # Find matching databases
    matches = find_matching_databases(query, config)
    if not matches:
        return 0

    # Store current params in state for escape hatch
    save_state(session_id, {
        "last_denied": {
            "query": tool_input.get("query", ""),
            "allowed_domains": tool_input.get("allowed_domains", []),
            "blocked_domains": tool_input.get("blocked_domains", []),
            "timestamp": int(time.time()),
        }
    })

    # Deny and provide guidance
    response = build_deny_response(matches)
    print(json.dumps(response))
    return 2


if __name__ == "__main__":
    sys.exit(main())

Step 4: Run test to verify it passes

Run: pytest tests/test_hook.py -v Expected: PASS
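
To make the escape-hatch comparison concrete, here is a small sketch (assuming docsearch.py is importable) showing that a retry with the same query and the same domains — in any order — counts as a match, while a different query does not:

from docsearch import params_match

denied = {"query": "how to configure gitlab ci", "allowed_domains": ["a.com", "b.com"], "blocked_domains": []}
retry = {"query": "how to configure gitlab ci", "allowed_domains": ["b.com", "a.com"], "blocked_domains": []}
other = {"query": "gitlab runners setup", "allowed_domains": [], "blocked_domains": []}

print(params_match(retry, denied))  # True  -> hook clears state and allows the WebSearch
print(params_match(other, denied))  # False -> hook denies again and stores the new params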

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: add session state management for escape hatch"

Task 7: State Cleanup (Stale State Expiry)

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py

Step 1: Write the failing test for stale state cleanup

Add to tests/test_hook.py:

class TestStaleStateCleanup:
    """Tests for stale state file cleanup."""

    def test_expired_state_is_ignored(self, tmp_path):
        """State older than 5 minutes should be ignored."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Pre-create state file with old timestamp (6 minutes ago)
        old_timestamp = int(time.time()) - 360  # 6 minutes ago
        state_file = state_dir / "docsearch-state-test-session-old.json"
        state_file.write_text(json.dumps({
            "last_denied": {
                "query": "how to configure gitlab ci",
                "allowed_domains": [],
                "blocked_domains": [],
                "timestamp": old_timestamp,
            }
        }))

        # Same query should be denied again (state expired)
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "how to configure gitlab ci",
                "allowed_domains": [],
                "blocked_domains": [],
            },
            "session_id": "test-session-old",
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        assert exit_code == 2  # Should deny, not allow through

    def test_recent_state_is_used(self, tmp_path):
        """State less than 5 minutes old should be used."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Pre-create state file with recent timestamp (2 minutes ago)
        recent_timestamp = int(time.time()) - 120  # 2 minutes ago
        state_file = state_dir / "docsearch-state-test-session-recent.json"
        state_file.write_text(json.dumps({
            "last_denied": {
                "query": "how to configure gitlab ci",
                "allowed_domains": [],
                "blocked_domains": [],
                "timestamp": recent_timestamp,
            }
        }))

        # Same query should be allowed (escape hatch)
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "how to configure gitlab ci",
                "allowed_domains": [],
                "blocked_domains": [],
            },
            "session_id": "test-session-recent",
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        assert exit_code == 0  # Should allow through

Step 2: Run test to verify it fails

Run: pytest tests/test_hook.py::TestStaleStateCleanup -v Expected: FAIL (timestamp expiry not implemented)

Step 3: Write minimal implementation

Add near the top of docsearch.py:

# State expiry timeout in seconds (5 minutes)
STATE_EXPIRY_SECONDS = 300


def is_state_expired(last_denied: dict) -> bool:
    """Check if the state entry has expired (older than 5 minutes)."""
    timestamp = last_denied.get("timestamp", 0)
    return (int(time.time()) - timestamp) > STATE_EXPIRY_SECONDS

Update the escape hatch check in main():

    # Check escape hatch - if this is a retry of the same params, allow through
    state = load_state(session_id)
    last_denied = state.get("last_denied")
    if last_denied and not is_state_expired(last_denied) and params_match(tool_input, last_denied):
        # Clear state and allow through
        save_state(session_id, {"last_denied": None})
        return 0
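
A quick sanity check of the expiry rule (assuming docsearch.py is importable): entries older than STATE_EXPIRY_SECONDS no longer open the escape hatch.

import time
from docsearch import is_state_expired

fresh = {"timestamp": int(time.time()) - 120}  # 2 minutes old
stale = {"timestamp": int(time.time()) - 360}  # 6 minutes old

print(is_state_expired(fresh))  # False - still eligible for the escape hatch
print(is_state_expired(stale))  # True  - treated as if no denial happened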

Step 4: Run test to verify it passes

Run: pytest tests/test_hook.py -v Expected: PASS

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: add 5-minute expiry for stale state entries"

Task 8: Error Logging to stderr

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py

Step 1: Write the failing test for error logging

Add to tests/test_hook.py:

class TestErrorLogging:
    """Tests for error logging to stderr."""

    def test_invalid_config_logs_to_stderr(self, tmp_path):
        """Invalid config JSON should log error to stderr."""
        config_file = tmp_path / "bad_config.json"
        config_file.write_text("{invalid json")

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "test query"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)},
        )
        assert exit_code == 0  # Fail open
        assert "error" in stderr.lower() or "json" in stderr.lower()

Step 2: Run test to verify it fails

Run: pytest tests/test_hook.py::TestErrorLogging -v Expected: FAIL (no stderr logging)

Step 3: Write minimal implementation

Update load_config() in docsearch.py:

def load_config() -> dict | None:
    """Load and parse the configuration file. Returns None on any error."""
    config_path = get_config_path()
    try:
        with open(config_path) as f:
            return json.load(f)
    except FileNotFoundError:
        return None  # Silent - expected during first-time setup
    except json.JSONDecodeError as e:
        print(f"Error: Invalid JSON in config file {config_path}: {e}", file=sys.stderr)
        return None
    except OSError as e:
        print(f"Error: Could not read config file {config_path}: {e}", file=sys.stderr)
        return None

Step 4: Run test to verify it passes

Run: pytest tests/test_hook.py -v Expected: PASS
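
To see the behaviour end to end, this sketch (assuming docsearch.py sits in the current directory) runs the hook against a deliberately broken config: the exit code stays 0, stdout stays empty, and the parse error lands on stderr.

import json
import os
import subprocess
import tempfile
from pathlib import Path

bad_config = Path(tempfile.mkdtemp()) / "bad.json"
bad_config.write_text("{invalid json")

result = subprocess.run(
    ["python3", "docsearch.py"],
    input=json.dumps({"tool_name": "WebSearch", "tool_input": {"query": "gitlab ci"}}),
    capture_output=True,
    text=True,
    env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(bad_config)},
)
print(result.returncode, repr(result.stdout))  # 0 ''
print(result.stderr.strip())  # Error: Invalid JSON in config file ...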

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: add error logging to stderr for config issues"

Task 9: Complete Test Coverage and Edge Cases

Files:

  • Modify: tests/test_hook.py

Step 1: Add comprehensive edge case tests

Add to tests/test_hook.py:

class TestEdgeCases:
    """Tests for edge cases and boundary conditions."""

    def test_empty_query_allows_through(self):
        """Empty query should be allowed through."""
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": ""},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
        )
        assert exit_code == 0

    def test_missing_query_allows_through(self):
        """Missing query field should be allowed through."""
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
        )
        assert exit_code == 0

    def test_missing_tool_input_allows_through(self):
        """Missing tool_input field should be allowed through."""
        hook_input = {
            "tool_name": "WebSearch",
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
        )
        assert exit_code == 0

    def test_missing_session_id_uses_default(self, tmp_path):
        """Missing session_id should use 'default' session."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "gitlab setup"},
            # Note: no session_id
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        assert exit_code == 2

        # Should use default session
        state_file = state_dir / "docsearch-state-default.json"
        assert state_file.exists()

    def test_empty_databases_config_allows_through(self, tmp_path):
        """Config with empty databases array should allow through."""
        config_file = tmp_path / "empty_config.json"
        config_file.write_text('{"databases": []}')

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "gitlab ci configuration"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)},
        )
        assert exit_code == 0

    def test_domains_compared_as_sets(self, tmp_path):
        """Domain arrays should be compared as sets (order-independent)."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Pre-create state with domains in one order
        state_file = state_dir / "docsearch-state-test-domains.json"
        state_file.write_text(json.dumps({
            "last_denied": {
                "query": "gitlab ci",
                "allowed_domains": ["b.com", "a.com"],  # Different order
                "blocked_domains": [],
                "timestamp": int(time.time()),
            }
        }))

        # Query with same domains in different order
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "gitlab ci",
                "allowed_domains": ["a.com", "b.com"],  # Same domains, different order
                "blocked_domains": [],
            },
            "session_id": "test-domains",
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        assert exit_code == 0  # Should match and allow through

    def test_special_characters_in_keywords(self, tmp_path):
        """Keywords with regex special characters should match correctly."""
        config_file = tmp_path / "special_config.json"
        config_file.write_text(json.dumps({
            "databases": [
                {
                    "keywords": ["c++", "c#", ".net"],
                    "path": "/mock/path/dotnet",
                    "mcp_tool_name": "mcp__leann__search",
                    "description": ".NET documentation"
                }
            ]
        }))

        # Test the C++ keyword. Note: \b only matches between a word and a
        # non-word character, so a keyword ending in a symbol (like "c++")
        # only matches when a word character follows it (e.g. "c++17");
        # "c++ templates" would NOT match.
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "c++17 template tutorial"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)},
        )
        assert exit_code == 2

    def test_output_contains_all_required_fields(self):
        """Output JSON should contain all required hookSpecificOutput fields."""
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "gitlab ci"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json")},
        )
        assert exit_code == 2

        output = json.loads(stdout)
        hook_output = output["hookSpecificOutput"]

        # Verify all required fields present
        assert "hookEventName" in hook_output
        assert "permissionDecision" in hook_output
        assert "permissionDecisionReason" in hook_output
        assert "additionalContext" in hook_output

        # Verify field values
        assert hook_output["hookEventName"] == "PreToolUse"
        assert hook_output["permissionDecision"] == "deny"

Step 2: Run all tests

Run: pytest tests/test_hook.py -v Expected: PASS

Step 3: Commit

git add tests/test_hook.py
git commit -m "test: add comprehensive edge case coverage"

Task 10: README Documentation

Files:

  • Modify: README.md

Step 1: Write comprehensive README

# DocSearch Hook

A Claude Code PreToolUse hook that intercepts WebSearch tool calls and redirects documentation-related queries to local RAG databases via LEANN MCP server.

## Features

- **Keyword-based interception**: Configure keywords that trigger RAG lookups instead of web searches
- **Multiple database support**: Match queries against multiple documentation databases
- **Smart escape hatch**: If RAG results are insufficient, retry the same search to use web
- **Fail-open design**: Any errors gracefully fall back to normal web search
- **Session isolation**: Per-session state prevents cross-session interference

## Prerequisites

1. Python 3.12+
2. [LEANN](https://github.com/user/leann) installed and configured
3. LEANN MCP server configured in Claude Code's MCP settings
4. RAG databases built using LEANN tools

## Installation

1. **Install the hook script:**

   ```bash
   mkdir -p ~/.claude/hooks/PreToolUse
   cp docsearch.py ~/.claude/hooks/PreToolUse/docsearch.py
   chmod +x ~/.claude/hooks/PreToolUse/docsearch.py
   ```

2. **Create the configuration file:**

       cp config.example.json ~/.claude/hooks/docsearch-config.json
       # Edit with your database paths and keywords

3. **Configure Claude Code to use the hook** by adding to your Claude Code settings:

       {
         "hooks": {
           "PreToolUse": ["~/.claude/hooks/PreToolUse/docsearch.py"]
         }
       }

## Configuration

Edit `~/.claude/hooks/docsearch-config.json`:

{
  "databases": [
    {
      "keywords": ["gitlab", "gl", "gitlab-ci"],
      "path": "/path/to/.leann/databases/gitlab",
      "mcp_tool_name": "mcp__leann__search",
      "description": "GitLab documentation from docs.gitlab.com"
    }
  ]
}

### Configuration Fields

| Field | Required | Description |
|-------|----------|-------------|
| `keywords` | Yes | Array of keywords to match (case-insensitive, word boundaries) |
| `path` | Yes | Absolute path to LEANN database directory |
| `mcp_tool_name` | Yes | Exact MCP tool name for Claude to use |
| `description` | Yes | Human-readable description shown to Claude |

## How It Works

  1. You ask Claude a question containing a configured keyword (e.g., "How do I configure GitLab CI?")
  2. Claude attempts to use WebSearch
  3. The hook intercepts and denies the search
  4. Claude receives guidance to use the LEANN MCP tool instead
  5. If RAG results are insufficient, Claude can retry the exact same WebSearch
  6. The hook recognizes the retry and allows it through

## Escape Hatch

If the RAG database doesn't have what you need, Claude can simply retry the same web search. The hook tracks the last denied search per session and allows identical retries through. State expires after 5 minutes as a safety net.

## Testing

pytest tests/test_hook.py -v

## Troubleshooting

### Hook not intercepting searches

  • Verify the hook script is executable: `chmod +x ~/.claude/hooks/PreToolUse/docsearch.py`
  • Check that the config file exists: `cat ~/.claude/hooks/docsearch-config.json`
  • Verify the JSON syntax: `python -m json.tool ~/.claude/hooks/docsearch-config.json`

### Config errors

Check stderr for error messages. The hook logs JSON parsing errors to stderr.

### Keyword not matching

  • Keywords use word boundary matching (`\b` regex)
  • "gitla" won't match "gitlab" - only complete words match
  • Matching is case-insensitive

## License

MIT License - see LICENSE file


Step 2: Commit

git add README.md
git commit -m "docs: add comprehensive README with setup and usage instructions"

Task 11: Final Integration Testing

Files:

  • Create: tests/test_integration.py (manual testing guide)

Step 1: Create integration test guide

Create tests/test_integration.py:

"""
Integration Testing Guide for DocSearch Hook

These tests require a real LEANN MCP server configured.
Run these manually to verify end-to-end functionality.

Setup:
1. Configure LEANN MCP server in Claude Code
2. Build a test RAG database with LEANN
3. Add the database to docsearch-config.json
4. Run Claude Code and test the flow

Test Scenarios:
1. Basic interception: Ask about configured keyword topic
   - Verify hook denies WebSearch
   - Verify Claude uses MCP tool
   - Verify answer comes from RAG

2. Escape hatch: Ask about topic where RAG fails
   - Verify first search denied
   - Verify Claude can retry
   - Verify retry uses web search

3. Multiple keywords: Ask about two topics in one query
   - Verify both databases mentioned
   - Verify Claude calls MCP tools in parallel

4. Non-matching query: Ask about unconfigured topic
   - Verify hook allows WebSearch through
"""

Step 2: Commit

git add tests/test_integration.py
git commit -m "docs: add integration testing guide"

Task 12: Make Script Executable and Final Verification

Files:

  • Verify: docsearch.py

Step 1: Verify shebang line

The shebang is already present: #!/usr/bin/env python3

Step 2: Run full test suite

Run: pytest tests/test_hook.py -v --tb=short Expected: All tests PASS

Step 3: Verify script is executable

Run: chmod +x docsearch.py && ./docsearch.py < /dev/null; echo "Exit code: $?" Expected: Exit code: 0 (fail open on no input)

Step 4: Final commit

git add -A
git commit -m "chore: final cleanup and verification"

Task 3a: Configuration Schema Validation (NEW)

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py

Step 1: Write the failing test for config validation

Add to tests/test_hook.py:

class TestConfigValidation:
    """Tests for configuration schema validation."""

    def test_missing_keywords_logs_warning(self, tmp_path):
        """Config entry missing 'keywords' should log warning and skip entry."""
        config_file = tmp_path / "incomplete_config.json"
        config_file.write_text(json.dumps({
            "databases": [
                {
                    "path": "/mock/path/test",
                    "mcp_tool_name": "mcp__leann__search",
                    "description": "Test database"
                    # Missing: "keywords"
                }
            ]
        }))

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "some query"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)},
        )
        # Should allow through (no valid databases)
        assert exit_code == 0
        # Should log warning about missing field
        assert "keywords" in stderr.lower() or "missing" in stderr.lower()

    def test_missing_path_logs_warning(self, tmp_path):
        """Config entry missing 'path' should log warning and skip entry."""
        config_file = tmp_path / "incomplete_config.json"
        config_file.write_text(json.dumps({
            "databases": [
                {
                    "keywords": ["test"],
                    "mcp_tool_name": "mcp__leann__search",
                    "description": "Test database"
                    # Missing: "path"
                }
            ]
        }))

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "test query"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)},
        )
        # Should allow through (no valid databases after validation)
        assert exit_code == 0
        # Should log warning
        assert "path" in stderr.lower() or "missing" in stderr.lower()

    def test_keywords_not_array_logs_warning(self, tmp_path):
        """Config entry with keywords as string (not array) should log warning."""
        config_file = tmp_path / "bad_type_config.json"
        config_file.write_text(json.dumps({
            "databases": [
                {
                    "keywords": "gitlab",  # Should be ["gitlab"]
                    "path": "/mock/path/test",
                    "mcp_tool_name": "mcp__leann__search",
                    "description": "Test database"
                }
            ]
        }))

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "gitlab ci"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)},
        )
        # Should allow through (invalid entry skipped)
        assert exit_code == 0
        # Should log warning about type
        assert "keywords" in stderr.lower() or "array" in stderr.lower() or "list" in stderr.lower()

    def test_empty_keywords_array_logs_warning(self, tmp_path):
        """Config entry with empty keywords array should log warning."""
        config_file = tmp_path / "empty_keywords_config.json"
        config_file.write_text(json.dumps({
            "databases": [
                {
                    "keywords": [],  # Empty array
                    "path": "/mock/path/test",
                    "mcp_tool_name": "mcp__leann__search",
                    "description": "Test database"
                }
            ]
        }))

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "test query"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)},
        )
        # Should allow through (no valid databases)
        assert exit_code == 0
        # Should log warning about empty keywords
        assert "keywords" in stderr.lower() or "empty" in stderr.lower()

    def test_relative_path_logs_warning(self, tmp_path):
        """Config entry with relative path should log warning but still work."""
        config_file = tmp_path / "relative_path_config.json"
        config_file.write_text(json.dumps({
            "databases": [
                {
                    "keywords": ["test"],
                    "path": "relative/path/database",  # Should be absolute
                    "mcp_tool_name": "mcp__leann__search",
                    "description": "Test database"
                }
            ]
        }))

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "test query"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)},
        )
        # Should still deny (relative path is a warning, not an error)
        assert exit_code == 2
        # Should log warning about relative path
        assert "path" in stderr.lower() or "absolute" in stderr.lower() or "relative" in stderr.lower()

Step 2: Run test to verify it fails

Run: pytest tests/test_hook.py::TestConfigValidation -v Expected: FAIL (validation not implemented)

Step 3: Write minimal implementation

Add validation function to docsearch.py:

REQUIRED_DATABASE_FIELDS = ["keywords", "path", "mcp_tool_name", "description"]


def validate_database_entry(db: dict, index: int) -> bool:
    """Validate a database entry has all required fields and correct types.

    Returns True if valid, False if invalid (logs warning to stderr).
    Maintains fail-open behavior - warns but allows through when possible.
    """
    # Check required fields are present
    missing = [f for f in REQUIRED_DATABASE_FIELDS if f not in db]
    if missing:
        print(
            f"Warning: Database entry {index} missing required fields: {missing}",
            file=sys.stderr
        )
        return False

    # Validate keywords is a non-empty list
    keywords = db.get("keywords")
    if not isinstance(keywords, list):
        print(
            f"Warning: Database entry {index} 'keywords' must be an array, got {type(keywords).__name__}",
            file=sys.stderr
        )
        return False

    if len(keywords) == 0:
        print(
            f"Warning: Database entry {index} 'keywords' array is empty",
            file=sys.stderr
        )
        return False

    # Warn (but don't fail) for relative paths
    path = db.get("path", "")
    if path and not path.startswith("/"):
        print(
            f"Warning: Database entry {index} 'path' should be absolute, got relative path: {path}",
            file=sys.stderr
        )
        # Continue anyway - relative path might still work

    return True

Update find_matching_databases() to skip invalid entries:

def find_matching_databases(query: str, config: dict) -> list[dict]:
    """Find all databases with keywords matching the query."""
    matches = []
    query_lower = query.lower()

    for i, db in enumerate(config.get("databases", [])):
        # Skip invalid database entries
        if not validate_database_entry(db, i):
            continue

        for keyword in db.get("keywords", []):
            pattern = rf"\b{re.escape(keyword.lower())}\b"
            if re.search(pattern, query_lower):
                matches.append(db)
                break

    return matches
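
A small sketch (assuming docsearch.py is importable) of how an invalid entry is skipped: the entry without a "path" produces a stderr warning and never matches, while the valid one still does.

from docsearch import find_matching_databases

config = {
    "databases": [
        {"keywords": ["gitlab"], "mcp_tool_name": "mcp__leann__search", "description": "broken entry"},  # missing "path"
        {"keywords": ["kubernetes", "k8s"], "path": "/tmp/k8s", "mcp_tool_name": "mcp__leann__search", "description": "Kubernetes docs"},
    ]
}

matches = find_matching_databases("k8s pod limits", config)
print([db["description"] for db in matches])  # ['Kubernetes docs'] (warning about entry 0 on stderr)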

Step 4: Run test to verify it passes

Run: pytest tests/test_hook.py -v Expected: PASS

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: add configuration schema validation with type checking"

Task 7a: Session Start State Cleanup (NEW)

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py

Step 1: Write the failing test for stale file cleanup

Add to tests/test_hook.py:

class TestSessionStartCleanup:
    """Tests for cleaning stale state files on session start."""

    def test_stale_state_file_cleaned_on_unrelated_query(self, tmp_path):
        """Very old state files should be cleaned up when processing new queries."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Create multiple stale state files (older than 5 minutes)
        old_timestamp = int(time.time()) - 600  # 10 minutes ago

        stale_file1 = state_dir / "docsearch-state-old-session-1.json"
        stale_file1.write_text(json.dumps({
            "last_denied": {
                "query": "old query 1",
                "allowed_domains": [],
                "blocked_domains": [],
                "timestamp": old_timestamp,
            }
        }))

        stale_file2 = state_dir / "docsearch-state-old-session-2.json"
        stale_file2.write_text(json.dumps({
            "last_denied": {
                "query": "old query 2",
                "allowed_domains": [],
                "blocked_domains": [],
                "timestamp": old_timestamp,
            }
        }))

        # Run a hook call for a new session (triggers cleanup)
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "unrelated query no keywords"},
            "session_id": "new-session",
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )

        # Query should pass through (no keyword match)
        assert exit_code == 0

        # Stale files should have been removed by the cleanup pass in main()
        assert not stale_file1.exists()
        assert not stale_file2.exists()

    def test_recent_state_file_preserved(self, tmp_path):
        """Recent state files should NOT be cleaned up."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Create a recent state file (2 minutes ago)
        recent_timestamp = int(time.time()) - 120
        recent_file = state_dir / "docsearch-state-active-session.json"
        recent_file.write_text(json.dumps({
            "last_denied": {
                "query": "gitlab ci",
                "allowed_domains": [],
                "blocked_domains": [],
                "timestamp": recent_timestamp,
            }
        }))

        # Run a hook call for a different session
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "unrelated query"},
            "session_id": "other-session",
        }
        run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )

        # Recent file should still exist
        assert recent_file.exists()

Step 2: Run test to verify it fails

Run: pytest tests/test_hook.py::TestSessionStartCleanup -v Expected: FAIL (cleanup not implemented)

Step 3: Write minimal implementation

Add cleanup function to docsearch.py:

def cleanup_stale_state_files() -> None:
    """Clean up state files older than the expiry threshold.

    This is a best-effort cleanup that runs periodically to prevent
    state file accumulation. Errors are silently ignored.
    """
    state_dir = get_state_dir()
    if not state_dir.exists():
        return

    try:
        for state_file in state_dir.glob("docsearch-state-*.json"):
            try:
                with open(state_file) as f:
                    state = json.load(f)
                last_denied = state.get("last_denied")
                if last_denied and is_state_expired(last_denied):
                    state_file.unlink()
            except (json.JSONDecodeError, OSError, KeyError):
                # Corrupted or unreadable - remove it
                try:
                    state_file.unlink()
                except OSError:
                    pass
    except OSError:
        pass  # Can't list directory - skip cleanup

Add cleanup call at the start of main() (after config loading):

def main() -> int:
    """Main entry point for the hook."""
    # ... existing code ...

    # Load configuration - if missing or invalid, allow through
    config = load_config()
    if config is None:
        return 0

    # Best-effort cleanup of stale state files from any session.
    # Runs on every invocation; a glob over the state directory is cheap.
    cleanup_stale_state_files()

    # ... rest of main() ...
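
A minimal sketch of the cleanup pass (assuming docsearch.py is importable): it points DOCSEARCH_STATE_DIR at a scratch directory, plants an expired state file, and shows it being removed.

import json
import os
import tempfile
import time
from pathlib import Path

os.environ["DOCSEARCH_STATE_DIR"] = tempfile.mkdtemp()  # scratch dir for the demo

from docsearch import cleanup_stale_state_files

stale = Path(os.environ["DOCSEARCH_STATE_DIR"]) / "docsearch-state-demo.json"
stale.write_text(json.dumps({"last_denied": {"query": "x", "timestamp": int(time.time()) - 600}}))

cleanup_stale_state_files()
print(stale.exists())  # False - the expired entry was deleted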

Step 4: Run test to verify it passes

Run: pytest tests/test_hook.py -v Expected: PASS

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: add periodic cleanup of stale state files"

Task 6a: Session ID Sanitization (NEW - SECURITY)

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py

Step 1: Write the failing test for session ID sanitization

Add to tests/test_hook.py:

class TestSessionIdSanitization:
    """Tests for session ID sanitization to prevent path traversal."""

    def test_session_id_with_path_traversal_is_sanitized(self, tmp_path):
        """Session ID with path traversal characters should be sanitized."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Attempt path traversal attack
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "gitlab ci setup"},
            "session_id": "../../etc/passwd",  # Malicious session_id
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        assert exit_code == 2  # Should still work (deny)

        # State file should be created with sanitized name, NOT traverse paths
        # Should NOT create file at tmp_path/etc/passwd
        assert not (tmp_path / "etc").exists()

        # Should create file with sanitized session_id (special chars replaced)
        state_files = list(state_dir.glob("docsearch-state-*.json"))
        assert len(state_files) == 1
        # Filename should not contain path separators
        assert "/" not in state_files[0].name
        assert ".." not in state_files[0].name

    def test_session_id_with_special_chars_is_sanitized(self, tmp_path):
        """Session ID with special filesystem characters should be sanitized."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "gitlab ci setup"},
            "session_id": "test<>:\"|?*session",  # Invalid filesystem chars
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        assert exit_code == 2

        # State file should be created with sanitized name
        state_files = list(state_dir.glob("docsearch-state-*.json"))
        assert len(state_files) == 1

Step 2: Run test to verify it fails

Run: pytest tests/test_hook.py::TestSessionIdSanitization -v Expected: FAIL (sanitization not implemented)

Step 3: Write minimal implementation

Add sanitization function to docsearch.py:

def sanitize_session_id(session_id: str) -> str:
    """Sanitize session_id to prevent path traversal and invalid filenames.

    Only allows alphanumeric characters, dashes, and underscores.
    All other characters are replaced with underscores.
    """
    return re.sub(r'[^a-zA-Z0-9_-]', '_', session_id)

Update get_state_file():

def get_state_file(session_id: str) -> Path:
    """Get the state file path for a session."""
    safe_id = sanitize_session_id(session_id)
    return get_state_dir() / f"docsearch-state-{safe_id}.json"
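
For reference, the substitution maps every disallowed character to an underscore. A small self-contained sketch of the same rule:

import re

def sanitize_session_id(session_id: str) -> str:
    # Keep only [a-zA-Z0-9_-]; replace everything else with "_"
    return re.sub(r'[^a-zA-Z0-9_-]', '_', session_id)

print(sanitize_session_id("../../etc/passwd"))    # ______etc_passwd
print(sanitize_session_id('test<>:"|?*session'))  # test_______session
print(sanitize_session_id("session-ABC_123"))     # session-ABC_123 (unchanged)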

Step 4: Run test to verify it passes

Run: pytest tests/test_hook.py -v Expected: PASS

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "security: add session ID sanitization to prevent path traversal"

Task 3b: Keywords Element Type Validation (NEW)

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py

Step 1: Write the failing test for keyword element type validation

Add to tests/test_hook.py in TestConfigValidation class:

    def test_keywords_with_non_string_elements_logs_warning(self, tmp_path):
        """Config entry with non-string keyword elements should log warning."""
        config_file = tmp_path / "bad_keywords_config.json"
        config_file.write_text(json.dumps({
            "databases": [
                {
                    "keywords": ["valid", 123, None, {"nested": "dict"}],
                    "path": "/mock/path/test",
                    "mcp_tool_name": "mcp__leann__search",
                    "description": "Test database"
                }
            ]
        }))

        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {"query": "valid query"},
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)},
        )
        # Should allow through (invalid entry skipped)
        assert exit_code == 0
        # Should log warning about non-string elements
        assert "string" in stderr.lower() or "keywords" in stderr.lower()

Step 2: Run test to verify it fails

Run: pytest tests/test_hook.py::TestConfigValidation::test_keywords_with_non_string_elements_logs_warning -v Expected: FAIL (type validation not implemented)

Step 3: Write minimal implementation

Update validate_database_entry() in docsearch.py:

def validate_database_entry(db: dict, index: int) -> bool:
    """Validate a database entry has all required fields and correct types."""
    # ... existing checks ...

    # Validate all keyword elements are strings
    if not all(isinstance(k, str) for k in keywords):
        print(
            f"Warning: Database entry {index} 'keywords' contains non-string elements",
            file=sys.stderr
        )
        return False

    # ... rest of function ...

Step 4: Run test to verify it passes

Run: pytest tests/test_hook.py -v Expected: PASS

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: validate all keyword elements are strings"

Task 9a: Permission Error Tests (NEW)

Files:

  • Modify: tests/test_hook.py

Step 1: Write permission error tests

Add to tests/test_hook.py:

class TestPermissionErrors:
    """Tests for permission error handling (fail-open behavior)."""

    def test_unreadable_config_allows_through(self, tmp_path):
        """Unreadable config file should fail open (exit 0)."""
        config_file = tmp_path / "unreadable_config.json"
        config_file.write_text('{"databases": [{"keywords": ["test"], "path": "/test", "mcp_tool_name": "test", "description": "test"}]}')
        config_file.chmod(0o000)  # No permissions

        try:
            hook_input = {
                "tool_name": "WebSearch",
                "tool_input": {"query": "test query"},
            }
            exit_code, stdout, stderr = run_hook(
                hook_input,
                env={**os.environ, "DOCSEARCH_CONFIG_PATH": str(config_file)},
            )
            # Should fail open
            assert exit_code == 0
        finally:
            config_file.chmod(0o644)  # Restore for cleanup

    def test_unwritable_state_dir_still_denies(self, tmp_path):
        """Unwritable state directory should still deny (state is optional)."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()
        state_dir.chmod(0o555)  # Read-only

        try:
            hook_input = {
                "tool_name": "WebSearch",
                "tool_input": {"query": "gitlab ci setup"},
                "session_id": "test-session",
            }
            exit_code, stdout, stderr = run_hook(
                hook_input,
                env={
                    **os.environ,
                    "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                    "DOCSEARCH_STATE_DIR": str(state_dir),
                },
            )
            # Should still deny (state write failure is silent)
            assert exit_code == 2
        finally:
            state_dir.chmod(0o755)  # Restore for cleanup

Step 2: Run tests

Run: pytest tests/test_hook.py::TestPermissionErrors -v Expected: PASS (implementation already handles these cases)

Step 3: Commit

git add tests/test_hook.py
git commit -m "test: add permission error handling tests"

Task 9b: Session Isolation Tests (NEW)

Files:

  • Modify: tests/test_hook.py

Step 1: Write session isolation tests

Add to tests/test_hook.py:

class TestSessionIsolation:
    """Tests for session state isolation between concurrent sessions."""

    def test_different_sessions_have_isolated_state(self, tmp_path):
        """State from session A should not affect session B."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Create state for session A (previous denial)
        state_file_a = state_dir / "docsearch-state-session-A.json"
        state_file_a.write_text(json.dumps({
            "last_denied": {
                "query": "gitlab ci setup",
                "allowed_domains": [],
                "blocked_domains": [],
                "timestamp": int(time.time()),
            }
        }))

        # Session B with SAME query should be denied (no escape hatch)
        hook_input = {
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "gitlab ci setup",  # Same query as A's state
                "allowed_domains": [],
                "blocked_domains": [],
            },
            "session_id": "session-B",  # Different session
        }
        exit_code, stdout, stderr = run_hook(
            hook_input,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )

        # Session B should be denied (its own first request)
        assert exit_code == 2

        # Session A's state should be unchanged
        state_a = json.loads(state_file_a.read_text())
        assert state_a["last_denied"]["query"] == "gitlab ci setup"

        # Session B should have its own state file
        state_file_b = state_dir / "docsearch-state-session-B.json"
        assert state_file_b.exists()

    def test_session_escape_hatch_only_affects_own_session(self, tmp_path):
        """Escape hatch retry should only work for the session that was denied."""
        state_dir = tmp_path / "state"
        state_dir.mkdir()

        # Create state for session A
        state_file_a = state_dir / "docsearch-state-session-A.json"
        state_file_a.write_text(json.dumps({
            "last_denied": {
                "query": "gitlab ci setup",
                "allowed_domains": [],
                "blocked_domains": [],
                "timestamp": int(time.time()),
            }
        }))

        # Session A retries same query - should be allowed (escape hatch)
        hook_input_a = {
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "gitlab ci setup",
                "allowed_domains": [],
                "blocked_domains": [],
            },
            "session_id": "session-A",
        }
        exit_code_a, _, _ = run_hook(
            hook_input_a,
            env={
                **os.environ,
                "DOCSEARCH_CONFIG_PATH": str(FIXTURES_DIR / "valid_config.json"),
                "DOCSEARCH_STATE_DIR": str(state_dir),
            },
        )
        assert exit_code_a == 0  # Escape hatch works

        # Session A's state should be cleared
        state_a = json.loads(state_file_a.read_text())
        assert state_a.get("last_denied") is None

Step 2: Run tests

Run: pytest tests/test_hook.py::TestSessionIsolation -v Expected: PASS (implementation already handles isolation)

Step 3: Commit

git add tests/test_hook.py
git commit -m "test: add session isolation tests"

Summary

Files Created/Modified

| File | Action | Description |
|------|--------|-------------|
| docsearch.py | Create | Main hook script (Python 3.12+) |
| config.example.json | Create | Example configuration file |
| tests/test_hook.py | Create | Comprehensive unit tests |
| tests/fixtures/valid_config.json | Create | Test fixture configuration |
| tests/test_integration.py | Create | Integration testing guide |
| README.md | Modify | Setup and usage documentation |

Key Implementation Details

  1. Fail-open design: All errors result in allowing WebSearch through
  2. Word boundary matching: Uses \b regex to prevent partial matches (note: a keyword that ends in a symbol, e.g. "c++", only matches when a word character follows it)
  3. Session isolation: State files named with sanitized session_id
  4. 5-minute expiry: Stale state entries are ignored
  5. Set comparison: Domain arrays compared order-independently
  6. Config validation: Missing required fields AND type validation logged as warnings (Task 3a)
    • Validates keywords is a non-empty array
    • Validates all keyword elements are strings (Task 3b)
    • Warns on relative paths (but allows)
    • Skips invalid database entries entirely
  7. Stale file cleanup: Periodic cleanup of expired state files (Task 7a)
  8. Session ID sanitization: Prevents path traversal attacks (Task 6a)

Testing Commands

# Run all tests
pytest tests/test_hook.py -v

# Run specific test class
pytest tests/test_hook.py::TestKeywordMatching -v

# Run with coverage
pytest tests/test_hook.py -v --cov=docsearch

# Run new validation tests
pytest tests/test_hook.py::TestConfigValidation -v

# Run new cleanup tests
pytest tests/test_hook.py::TestSessionStartCleanup -v

Total Tasks: 18

| Phase | Tasks (in execution order) | Status |
|-------|----------------------------|--------|
| P0 - Core | 2, 3, 4, 6a, 6, 12 | 0/6 complete |
| P1 - Enhanced | 5, 7, 7a, 3a, 3b, 8, 9, 9a, 9b | 0/9 complete |
| P2 - Polish | 1, 10, 11 | 0/3 complete |
| Total | 18 tasks | 0/18 complete |

IMPORTANT: Task 6a (Session ID Sanitization) MUST be completed before Task 6 (Session State Management) for security reasons.

DocSearch Hook Implementation Plan

For Claude: REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.

Goal: Build a Claude Code PreToolUse hook that intercepts WebSearch calls and redirects documentation queries to local RAG databases with intelligent escape hatch for retries.

Architecture: Python 3.12+ script reads hook input from stdin, matches queries against configured keywords, stores per-session state for retry detection, and outputs structured JSON denial with MCP tool guidance.

Tech Stack: Python 3.12+ stdlib (json, re, sys, pathlib), pytest for testing
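
Before the tasks, a hypothetical sketch of the I/O contract the plan builds toward (not the final script): the tool-call JSON arrives on stdin; a structured denial goes to stdout with exit code 2, while exit code 0 with no output lets the call proceed.

import json
import sys

payload = json.loads(sys.stdin.read() or "{}")
query = payload.get("tool_input", {}).get("query", "")

if payload.get("tool_name") == "WebSearch" and "gitlab" in query.lower():
    print(json.dumps({
        "hookSpecificOutput": {
            "hookEventName": "PreToolUse",
            "permissionDecision": "deny",
            "permissionDecisionReason": "Query matches 'gitlab' - use the RAG database instead",
            "additionalContext": "Use the LEANN MCP tool 'mcp__leann__search' against the gitlab database.",
        }
    }))
    sys.exit(2)

sys.exit(0)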


Task 1: Project Structure Setup

Files:

  • Create: docsearch.py
  • Create: config.example.json
  • Create: tests/test_hook.py
  • Create: tests/fixtures/hook_input.json
  • Create: tests/fixtures/config.json
  • Create: .gitignore

Step 1: Create .gitignore

cat > .gitignore << 'EOF'
__pycache__/
*.py[cod]
*$py.class
*.so
.Python
.pytest_cache/
*.egg-info/
dist/
build/
.coverage
htmlcov/
.venv/
venv/
EOF

Step 2: Create example config file

cat > config.example.json << 'EOF'
{
  "databases": [
    {
      "keywords": ["gitlab", "gl", "gitlab-ci"],
      "path": "/Users/viktor/.leann/databases/gitlab",
      "mcp_tool_name": "mcp__leann__search",
      "description": "GitLab documentation from docs.gitlab.com"
    },
    {
      "keywords": ["kubernetes", "k8s", "kubectl"],
      "path": "/Users/viktor/.leann/databases/kubernetes",
      "mcp_tool_name": "mcp__leann__search",
      "description": "Kubernetes official documentation"
    }
  ]
}
EOF

Step 3: Create test fixtures directory

mkdir -p tests/fixtures

Step 4: Create placeholder hook script

cat > docsearch.py << 'EOF'
#!/usr/bin/env python3
# ABOUTME: Claude Code PreToolUse hook that redirects WebSearch to local RAG databases
# ABOUTME: Intercepts documentation queries and routes them to LEANN MCP server

import sys

if __name__ == "__main__":
    # Placeholder - will be implemented via TDD
    sys.exit(0)
EOF
chmod +x docsearch.py

Step 5: Create placeholder test file

cat > tests/test_hook.py << 'EOF'
# ABOUTME: Unit tests for docsearch PreToolUse hook
# ABOUTME: Tests keyword matching, state management, and escape hatch logic

import pytest

# Tests will be added incrementally via TDD
EOF

Step 6: Commit project structure

git add .gitignore config.example.json docsearch.py tests/
git commit -m "feat: initialize docsearch hook project structure"

Task 2: Non-WebSearch Tool Pass-Through

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py
  • Create: tests/fixtures/non_websearch_input.json

Step 1: Write failing test for non-WebSearch tools

Add to tests/test_hook.py:

import json
import subprocess
from pathlib import Path

def test_non_websearch_tool_passes_through():
    """Hook should allow non-WebSearch tools through with exit 0"""
    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "Bash",
        "tool_input": {"command": "ls"},
        "session_id": "test-session-123"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    assert result.returncode == 0
    assert result.stdout == ""

Step 2: Run test to verify it fails

pytest tests/test_hook.py::test_non_websearch_tool_passes_through -v

Expected: May pass trivially since the placeholder already exits 0 with no output - verify the assertions describe the intended behavior before implementing

Step 3: Implement minimal pass-through logic

Replace docsearch.py content:

#!/usr/bin/env python3
# ABOUTME: Claude Code PreToolUse hook that redirects WebSearch to local RAG databases
# ABOUTME: Intercepts documentation queries and routes them to LEANN MCP server

import json
import sys

def main():
    try:
        hook_input = json.loads(sys.stdin.read())

        # Pass through non-WebSearch tools
        if hook_input.get("tool_name") != "WebSearch":
            sys.exit(0)

        # Placeholder for WebSearch handling
        sys.exit(0)

    except Exception:
        # Fail open - allow tool through on any error
        sys.exit(0)

if __name__ == "__main__":
    main()

Step 4: Run test to verify it passes

pytest tests/test_hook.py::test_non_websearch_tool_passes_through -v

Expected: PASS

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: add non-WebSearch tool pass-through logic"

Task 3: Config File Loading with Fail-Open

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py
  • Create: tests/fixtures/valid_config.json
  • Create: tests/fixtures/invalid_config.json

Step 1: Write test fixtures

Create tests/fixtures/valid_config.json:

{
  "databases": [
    {
      "keywords": ["gitlab"],
      "path": "/test/gitlab",
      "mcp_tool_name": "mcp__leann__search",
      "description": "GitLab docs"
    }
  ]
}

Create tests/fixtures/invalid_config.json:

{
  "databases": [
    {"keywords": "not-an-array"}
  ]
}

Step 2: Write failing tests for config loading

Add to tests/test_hook.py:

import os
from pathlib import Path

def test_missing_config_fails_open(tmp_path, monkeypatch):
    """Missing config file should allow search through"""
    monkeypatch.setenv("HOME", str(tmp_path))

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "gitlab ci"},
        "session_id": "test-123"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    assert result.returncode == 0

def test_invalid_config_fails_open(tmp_path, monkeypatch):
    """Invalid config JSON should allow search through"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text("invalid json{")

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "gitlab ci"},
        "session_id": "test-123"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    assert result.returncode == 0

Step 3: Run tests to verify they fail

pytest tests/test_hook.py::test_missing_config_fails_open -v
pytest tests/test_hook.py::test_invalid_config_fails_open -v

Expected: Tests may pass trivially since current code exits 0 - verify logic is correct

Step 4: Implement config loading with fail-open

Update docsearch.py:

#!/usr/bin/env python3
# ABOUTME: Claude Code PreToolUse hook that redirects WebSearch to local RAG databases
# ABOUTME: Intercepts documentation queries and routes them to LEANN MCP server

import json
import sys
from pathlib import Path

def load_config():
    """Load config file from ~/.claude/hooks/docsearch-config.json

    Returns config dict or None if missing/invalid (fail open)
    """
    config_path = Path.home() / ".claude" / "hooks" / "docsearch-config.json"

    if not config_path.exists():
        return None

    try:
        with open(config_path) as f:
            config = json.load(f)

        # Validate basic structure
        if not isinstance(config.get("databases"), list):
            sys.stderr.write("Invalid config: databases must be an array\n")
            return None

        return config

    except json.JSONDecodeError as e:
        sys.stderr.write(f"Invalid config JSON: {e}\n")
        return None
    except Exception as e:
        sys.stderr.write(f"Error loading config: {e}\n")
        return None

def main():
    try:
        hook_input = json.loads(sys.stdin.read())

        # Pass through non-WebSearch tools
        if hook_input.get("tool_name") != "WebSearch":
            sys.exit(0)

        # Load config - fail open if missing/invalid
        config = load_config()
        if config is None:
            sys.exit(0)

        # Placeholder for keyword matching
        sys.exit(0)

    except Exception:
        # Fail open - allow tool through on any error
        sys.exit(0)

if __name__ == "__main__":
    main()

Step 5: Run tests to verify they pass

pytest tests/test_hook.py::test_missing_config_fails_open -v
pytest tests/test_hook.py::test_invalid_config_fails_open -v

Expected: PASS

Step 6: Commit

git add docsearch.py tests/
git commit -m "feat: add config file loading with fail-open error handling"

Task 4: Keyword Matching Logic

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py

Step 1: Write failing tests for keyword matching

Add to tests/test_hook.py:

def test_no_keyword_match_passes_through(tmp_path, monkeypatch):
    """Query with no matching keywords should allow search through"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "how to cook pasta"},
        "session_id": "test-123"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    assert result.returncode == 0

def test_single_keyword_match_denies(tmp_path, monkeypatch):
    """Query with matching keyword should deny with exit 2"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "how to configure gitlab ci"},
        "session_id": "test-123"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    assert result.returncode == 2
    output = json.loads(result.stdout)
    assert output["hookSpecificOutput"]["permissionDecision"] == "deny"
    assert "gitlab" in output["hookSpecificOutput"]["permissionDecisionReason"].lower()

def test_case_insensitive_matching(tmp_path, monkeypatch):
    """Keyword matching should be case-insensitive"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "how to configure GITLAB CI"},
        "session_id": "test-123"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    assert result.returncode == 2

def test_word_boundary_matching(tmp_path, monkeypatch):
    """Keyword matching should respect word boundaries"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "ungitlabbed workflows"},
        "session_id": "test-123"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    assert result.returncode == 0  # Should NOT match

Step 2: Run tests to verify they fail

pytest tests/test_hook.py -k "keyword or matching" -v

Expected: FAIL (keyword matching not implemented)

Step 3: Implement keyword matching logic

Update docsearch.py:

#!/usr/bin/env python3
# ABOUTME: Claude Code PreToolUse hook that redirects WebSearch to local RAG databases
# ABOUTME: Intercepts documentation queries and routes them to LEANN MCP server

import json
import re
import sys
from pathlib import Path

def load_config():
    """Load config file from ~/.claude/hooks/docsearch-config.json

    Returns config dict or None if missing/invalid (fail open)
    """
    config_path = Path.home() / ".claude" / "hooks" / "docsearch-config.json"

    if not config_path.exists():
        return None

    try:
        with open(config_path) as f:
            config = json.load(f)

        # Validate basic structure
        if not isinstance(config.get("databases"), list):
            sys.stderr.write("Invalid config: databases must be an array\n")
            return None

        return config

    except json.JSONDecodeError as e:
        sys.stderr.write(f"Invalid config JSON: {e}\n")
        return None
    except Exception as e:
        sys.stderr.write(f"Error loading config: {e}\n")
        return None

def find_matching_databases(query, config):
    """Find databases with keywords matching the query

    Args:
        query: Search query string
        config: Config dict with databases list

    Returns:
        List of matching database configs
    """
    matches = []
    query_lower = query.lower()

    for db in config["databases"]:
        for keyword in db.get("keywords", []):
            # Use word boundary regex for exact word matching
            pattern = r'\b' + re.escape(keyword.lower()) + r'\b'
            if re.search(pattern, query_lower):
                matches.append(db)
                break  # Don't add same DB multiple times

    return matches

def main():
    try:
        hook_input = json.loads(sys.stdin.read())

        # Pass through non-WebSearch tools
        if hook_input.get("tool_name") != "WebSearch":
            sys.exit(0)

        # Load config - fail open if missing/invalid
        config = load_config()
        if config is None:
            sys.exit(0)

        # Extract query from tool input
        tool_input = hook_input.get("tool_input", {})
        query = tool_input.get("query", "")

        # Find matching databases
        matching_dbs = find_matching_databases(query, config)

        if not matching_dbs:
            # No matches - allow search through
            sys.exit(0)

        # Build denial response
        matched_keywords = [db["keywords"][0] for db in matching_dbs]
        keywords_str = "' and '".join(matched_keywords)

        if len(matching_dbs) == 1:
            db = matching_dbs[0]
            response = {
                "hookSpecificOutput": {
                    "hookEventName": "PreToolUse",
                    "permissionDecision": "deny",
                    "permissionDecisionReason": f"Query matches '{matched_keywords[0]}' - using RAG database instead",
                    "additionalContext": f"This query should use the LEANN MCP tool '{db['mcp_tool_name']}' to search the {db['description']} RAG database at {db['path']} instead of web search."
                }
            }
        else:
            # Multiple matches
            tools_list = "\n".join([
                f"{i+1}. '{db['mcp_tool_name']}' for {db['description']} at {db['path']}"
                for i, db in enumerate(matching_dbs)
            ])
            response = {
                "hookSpecificOutput": {
                    "hookEventName": "PreToolUse",
                    "permissionDecision": "deny",
                    "permissionDecisionReason": f"Query matches '{keywords_str}' - using RAG databases instead",
                    "additionalContext": f"This query matches multiple documentation databases. Please use these LEANN MCP tools IN PARALLEL:\n{tools_list}"
                }
            }

        print(json.dumps(response))
        sys.exit(2)

    except Exception:
        # Fail open - allow tool through on any error
        sys.exit(0)

if __name__ == "__main__":
    main()

Step 4: Run tests to verify they pass

pytest tests/test_hook.py -k "keyword or matching" -v

Expected: PASS

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: implement keyword matching with word boundaries and case-insensitive search"

Task 5: State File Management for Escape Hatch

Files:

  • Modify: docsearch.py
  • Modify: tests/test_hook.py

Step 1: Write failing tests for state management

Add to tests/test_hook.py:

def test_retry_with_same_params_allows_through(tmp_path, monkeypatch):
    """Retrying same search after denial should allow through"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "gitlab ci setup"},
        "session_id": "test-session-456"
    }

    # First call should deny
    result1 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )
    assert result1.returncode == 2

    # Second call with same params should allow through
    result2 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )
    assert result2.returncode == 0
    assert result2.stdout == ""

    # State file should be cleared
    state_file = config_dir / "docsearch-state-test-session-456.json"
    if state_file.exists():
        state = json.loads(state_file.read_text())
        assert state.get("last_denied") is None

def test_different_query_denies_again(tmp_path, monkeypatch):
    """Different query after denial should deny again"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    # First query
    result1 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps({
            "hookEventName": "PreToolUse",
            "tool_name": "WebSearch",
            "tool_input": {"query": "gitlab ci setup"},
            "session_id": "test-789"
        }),
        capture_output=True,
        text=True
    )
    assert result1.returncode == 2

    # Different query with same keyword
    result2 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps({
            "hookEventName": "PreToolUse",
            "tool_name": "WebSearch",
            "tool_input": {"query": "gitlab runners configuration"},
            "session_id": "test-789"
        }),
        capture_output=True,
        text=True
    )
    assert result2.returncode == 2

def test_session_isolation(tmp_path, monkeypatch):
    """Different sessions should have isolated state"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "gitlab ci setup"},
    }

    # Session 1 - deny
    result1 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps({**hook_input, "session_id": "session-1"}),
        capture_output=True,
        text=True
    )
    assert result1.returncode == 2

    # Session 2 - should also deny (not affected by session 1)
    result2 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps({**hook_input, "session_id": "session-2"}),
        capture_output=True,
        text=True
    )
    assert result2.returncode == 2

Step 2: Run tests to verify they fail

pytest tests/test_hook.py -k "retry or session or different" -v

Expected: FAIL (state management not implemented)

Step 3: Implement state file management

Update docsearch.py:

#!/usr/bin/env python3
# ABOUTME: Claude Code PreToolUse hook that redirects WebSearch to local RAG databases
# ABOUTME: Intercepts documentation queries and routes them to LEANN MCP server

import json
import re
import sys
from pathlib import Path

def load_config():
    """Load config file from ~/.claude/hooks/docsearch-config.json

    Returns config dict or None if missing/invalid (fail open)
    """
    config_path = Path.home() / ".claude" / "hooks" / "docsearch-config.json"

    if not config_path.exists():
        return None

    try:
        with open(config_path) as f:
            config = json.load(f)

        # Validate basic structure
        if not isinstance(config.get("databases"), list):
            sys.stderr.write("Invalid config: databases must be an array\n")
            return None

        return config

    except json.JSONDecodeError as e:
        sys.stderr.write(f"Invalid config JSON: {e}\n")
        return None
    except Exception as e:
        sys.stderr.write(f"Error loading config: {e}\n")
        return None

def get_state_file_path(session_id):
    """Get path to session-specific state file"""
    return Path.home() / ".claude" / "hooks" / f"docsearch-state-{session_id}.json"

def load_state(session_id):
    """Load state for session, returns None if not found or invalid"""
    state_path = get_state_file_path(session_id)

    if not state_path.exists():
        return None

    try:
        with open(state_path) as f:
            return json.load(f)
    except Exception:
        return None

def save_state(session_id, state):
    """Save state for session"""
    state_path = get_state_file_path(session_id)
    state_path.parent.mkdir(parents=True, exist_ok=True)

    try:
        with open(state_path, 'w') as f:
            json.dump(state, f, indent=2)
    except Exception as e:
        sys.stderr.write(f"Error saving state: {e}\n")

def clear_last_denied(session_id):
    """Clear last_denied from state"""
    state = load_state(session_id) or {}
    state["last_denied"] = None
    save_state(session_id, state)

def params_match(tool_input, last_denied):
    """Check if tool_input matches last_denied params"""
    if not last_denied:
        return False

    # Compare query
    if tool_input.get("query") != last_denied.get("query"):
        return False

    # Compare domains as sets (order-independent)
    allowed1 = set(tool_input.get("allowed_domains", []))
    allowed2 = set(last_denied.get("allowed_domains", []))
    if allowed1 != allowed2:
        return False

    blocked1 = set(tool_input.get("blocked_domains", []))
    blocked2 = set(last_denied.get("blocked_domains", []))
    if blocked1 != blocked2:
        return False

    return True

def find_matching_databases(query, config):
    """Find databases with keywords matching the query

    Args:
        query: Search query string
        config: Config dict with databases list

    Returns:
        List of matching database configs
    """
    matches = []
    query_lower = query.lower()

    for db in config["databases"]:
        for keyword in db.get("keywords", []):
            # Use word boundary regex for exact word matching
            pattern = r'\b' + re.escape(keyword.lower()) + r'\b'
            if re.search(pattern, query_lower):
                matches.append(db)
                break  # Don't add same DB multiple times

    return matches

def main():
    try:
        hook_input = json.loads(sys.stdin.read())

        # Pass through non-WebSearch tools
        if hook_input.get("tool_name") != "WebSearch":
            sys.exit(0)

        # Load config - fail open if missing/invalid
        config = load_config()
        if config is None:
            sys.exit(0)

        # Extract params
        session_id = hook_input.get("session_id", "unknown")
        tool_input = hook_input.get("tool_input", {})

        # Check escape hatch - is this a retry?
        state = load_state(session_id)
        if state and params_match(tool_input, state.get("last_denied")):
            # This is a retry - allow through and clear state
            clear_last_denied(session_id)
            sys.exit(0)

        # Extract query
        query = tool_input.get("query", "")

        # Find matching databases
        matching_dbs = find_matching_databases(query, config)

        if not matching_dbs:
            # No matches - allow search through
            sys.exit(0)

        # Save state before denying
        save_state(session_id, {
            "last_denied": {
                "query": tool_input.get("query"),
                "allowed_domains": tool_input.get("allowed_domains", []),
                "blocked_domains": tool_input.get("blocked_domains", [])
            }
        })

        # Build denial response
        matched_keywords = [db["keywords"][0] for db in matching_dbs]
        keywords_str = "' and '".join(matched_keywords)

        if len(matching_dbs) == 1:
            db = matching_dbs[0]
            response = {
                "hookSpecificOutput": {
                    "hookEventName": "PreToolUse",
                    "permissionDecision": "deny",
                    "permissionDecisionReason": f"Query matches '{matched_keywords[0]}' - using RAG database instead",
                    "additionalContext": f"This query should use the LEANN MCP tool '{db['mcp_tool_name']}' to search the {db['description']} RAG database at {db['path']} instead of web search."
                }
            }
        else:
            # Multiple matches
            tools_list = "\n".join([
                f"{i+1}. '{db['mcp_tool_name']}' for {db['description']} at {db['path']}"
                for i, db in enumerate(matching_dbs)
            ])
            response = {
                "hookSpecificOutput": {
                    "hookEventName": "PreToolUse",
                    "permissionDecision": "deny",
                    "permissionDecisionReason": f"Query matches '{keywords_str}' - using RAG databases instead",
                    "additionalContext": f"This query matches multiple documentation databases. Please use these LEANN MCP tools IN PARALLEL:\n{tools_list}"
                }
            }

        print(json.dumps(response))
        sys.exit(2)

    except Exception:
        # Fail open - allow tool through on any error
        sys.exit(0)

if __name__ == "__main__":
    main()

Step 4: Run tests to verify they pass

pytest tests/test_hook.py -k "retry or session or different" -v

Expected: PASS

Step 5: Commit

git add docsearch.py tests/test_hook.py
git commit -m "feat: implement state file management for escape hatch retry logic"

Task 6: Multi-Keyword Detection

Files:

  • Modify: tests/test_hook.py
  • Create: tests/fixtures/multi_db_config.json

Step 1: Create multi-database config fixture

Create tests/fixtures/multi_db_config.json:

{
  "databases": [
    {
      "keywords": ["gitlab", "gl"],
      "path": "/test/gitlab",
      "mcp_tool_name": "mcp__leann__search",
      "description": "GitLab documentation"
    },
    {
      "keywords": ["kubernetes", "k8s"],
      "path": "/test/k8s",
      "mcp_tool_name": "mcp__leann__search",
      "description": "Kubernetes documentation"
    }
  ]
}

Step 2: Write failing test for multi-keyword queries

Add to tests/test_hook.py:

def test_multiple_keyword_match_denies_with_all_tools(tmp_path, monkeypatch):
    """Query matching multiple keywords should suggest all tools in parallel"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/multi_db_config.json").read_text()
    )

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "how to deploy gitlab on kubernetes"},
        "session_id": "test-multi"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    assert result.returncode == 2
    output = json.loads(result.stdout)

    context = output["hookSpecificOutput"]["additionalContext"]
    reason = output["hookSpecificOutput"]["permissionDecisionReason"]

    # Should mention both keywords
    assert "gitlab" in reason.lower() or "gl" in reason.lower()
    assert "kubernetes" in reason.lower() or "k8s" in reason.lower()

    # Should mention parallel execution
    assert "PARALLEL" in context

    # Should mention both databases
    assert "GitLab" in context
    assert "Kubernetes" in context

Step 3: Run test to verify current implementation passes

pytest tests/test_hook.py::test_multiple_keyword_match_denies_with_all_tools -v

Expected: PASS (already implemented in Task 4)

Step 4: Commit test

git add tests/
git commit -m "test: add multi-keyword detection test coverage"

Task 7: Domain Filtering Support

Files:

  • Modify: tests/test_hook.py

Step 1: Write tests for domain filtering in state

Add to tests/test_hook.py:

def test_retry_with_different_domains_denies_again(tmp_path, monkeypatch):
    """Retry with different domain filters should deny again"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    # First call with allowed_domains
    result1 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps({
            "hookEventName": "PreToolUse",
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "gitlab ci setup",
                "allowed_domains": ["docs.gitlab.com"]
            },
            "session_id": "domain-test"
        }),
        capture_output=True,
        text=True
    )
    assert result1.returncode == 2

    # Second call with different allowed_domains
    result2 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps({
            "hookEventName": "PreToolUse",
            "tool_name": "WebSearch",
            "tool_input": {
                "query": "gitlab ci setup",
                "allowed_domains": ["stackoverflow.com"]
            },
            "session_id": "domain-test"
        }),
        capture_output=True,
        text=True
    )
    assert result2.returncode == 2  # Should deny again (different params)

def test_retry_with_same_domains_allows_through(tmp_path, monkeypatch):
    """Retry with same domain filters should allow through"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {
            "query": "gitlab ci setup",
            "allowed_domains": ["docs.gitlab.com"],
            "blocked_domains": ["spam.com"]
        },
        "session_id": "domain-test-2"
    }

    # First call should deny
    result1 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )
    assert result1.returncode == 2

    # Second call with same params should allow
    result2 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )
    assert result2.returncode == 0

Step 2: Run tests to verify they pass

pytest tests/test_hook.py -k "domain" -v

Expected: PASS (already implemented in Task 5)

Step 3: Commit tests

git add tests/test_hook.py
git commit -m "test: add domain filtering test coverage for state comparison"

Task 8: Error Handling Edge Cases

Files:

  • Modify: tests/test_hook.py

Step 1: Write tests for error scenarios

Add to tests/test_hook.py:

def test_corrupted_state_file_continues(tmp_path, monkeypatch):
    """Corrupted state file should be treated as no previous denial"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    # Create corrupted state file
    (config_dir / "docsearch-state-corrupt.json").write_text("invalid{json")

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "gitlab ci"},
        "session_id": "corrupt"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    # Should deny (not crash)
    assert result.returncode == 2

def test_invalid_hook_input_fails_open(tmp_path, monkeypatch):
    """Invalid hook input JSON should allow through"""
    result = subprocess.run(
        ["python3", "docsearch.py"],
        input="invalid json{",
        capture_output=True,
        text=True
    )

    assert result.returncode == 0

def test_missing_query_field_fails_open(tmp_path, monkeypatch):
    """Missing query field should allow through"""
    monkeypatch.setenv("HOME", str(tmp_path))
    config_dir = tmp_path / ".claude" / "hooks"
    config_dir.mkdir(parents=True)
    (config_dir / "docsearch-config.json").write_text(
        Path("tests/fixtures/valid_config.json").read_text()
    )

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {},  # No query field
        "session_id": "no-query"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    assert result.returncode == 0

Step 2: Run tests to verify they pass

pytest tests/test_hook.py -k "corrupted or invalid or missing" -v

Expected: PASS (already implemented with fail-open strategy)

Step 3: Commit tests

git add tests/test_hook.py
git commit -m "test: add error handling test coverage for edge cases"

Task 9: Documentation and Installation Instructions

Files:

  • Modify: README.md

Step 1: Write comprehensive README

Replace README.md:

# DocSearch Hook for Claude Code

A Claude Code PreToolUse hook that intelligently redirects documentation-related WebSearch queries to local RAG databases via the LEANN MCP server.

## Features

- **Automatic Search Interception**: Detects documentation queries and redirects to local RAG databases
- **Intelligent Escape Hatch**: Allows Claude to retry web searches if RAG results are insufficient
- **Multi-Database Support**: Query multiple documentation sources in parallel
- **Session Isolation**: Per-session state management prevents cross-session interference
- **Fail-Open Design**: Never breaks Claude's functionality - errors allow searches through

## Prerequisites

1. **Python 3.12+** (tested with Python 3.14)
2. **LEANN** installed and configured
3. **LEANN MCP server** configured in Claude Code's MCP settings
4. **RAG databases** built using LEANN tools

## Installation

### 1. Install the Hook Script

```bash
# Clone or download this repository
git clone https://github.com/yourusername/docsearch-hook.git
cd docsearch-hook

# Copy hook to Claude Code hooks directory
mkdir -p ~/.claude/hooks/PreToolUse
cp docsearch.py ~/.claude/hooks/PreToolUse/docsearch.py
chmod +x ~/.claude/hooks/PreToolUse/docsearch.py

### 2. Create Configuration File

# Copy example config
cp config.example.json ~/.claude/hooks/docsearch-config.json

# Edit with your database paths and keywords
# Example config structure:
{
  "databases": [
    {
      "keywords": ["gitlab", "gl", "gitlab-ci"],
      "path": "/Users/yourname/.leann/databases/gitlab",
      "mcp_tool_name": "mcp__leann__search",
      "description": "GitLab documentation from docs.gitlab.com"
    }
  ]
}

### 3. Verify LEANN MCP Server Configuration

Ensure your ~/.claude/mcp-config.json includes the LEANN server:

{
  "mcpServers": {
    "leann": {
      "command": "leann",
      "args": ["mcp"]
    }
  }
}

### 4. Test the Setup

# Run tests
pytest tests/

# Start Claude Code and try a query
# Example: "How do I configure GitLab CI runners?"
# The hook should intercept and suggest using the RAG database

Configuration

Config File Location

~/.claude/hooks/docsearch-config.json

Schema

{
  "databases": [
    {
      "keywords": ["keyword1", "keyword2"],
      "path": "/absolute/path/to/database",
      "mcp_tool_name": "mcp__leann__search",
      "description": "Human-readable description"
    }
  ]
}

Fields

  • keywords (required): Array of strings to match in queries (case-insensitive, word-boundary matching)
  • path (required): Absolute path to LEANN database directory
  • mcp_tool_name (required): Exact MCP tool name (usually mcp__leann__search)
  • description (required): Description shown to Claude in denial context

How It Works

User asks: "How to configure GitLab CI runners?"
    ↓
Hook detects "gitlab" keyword → Denies WebSearch
    ↓
Claude receives denial + context about RAG database
    ↓
Claude calls mcp__leann__search with GitLab database
    ↓
If successful → User gets RAG-based answer
If unsuccessful → Claude retries WebSearch → Hook allows through

State Management

  • Per-session state: Each Claude Code session has isolated state in ~/.claude/hooks/docsearch-state-{session_id}.json
  • Escape hatch: If Claude retries the exact same search (same query and domain filters), the hook allows it through
  • Automatic cleanup: State is cleared after successful retry

Testing

# Run all tests
pytest tests/ -v

# Run specific test categories
pytest tests/ -k "keyword" -v      # Keyword matching tests
pytest tests/ -k "retry" -v        # Escape hatch tests
pytest tests/ -k "session" -v      # Session isolation tests

# Run with coverage
pytest tests/ --cov=docsearch --cov-report=html

Troubleshooting

Hook Not Triggering

  1. Check hook script location: ~/.claude/hooks/PreToolUse/docsearch.py
  2. Verify executable permissions: chmod +x ~/.claude/hooks/PreToolUse/docsearch.py
  3. Check config file exists: ~/.claude/hooks/docsearch-config.json

Config File Errors

  • Validate JSON syntax: python3 -m json.tool ~/.claude/hooks/docsearch-config.json
  • Check stderr output when running Claude Code
  • Verify all required fields are present

State File Issues

  • State files location: ~/.claude/hooks/docsearch-state-*.json
  • Delete stale state files manually if needed
  • Each session creates its own state file

Development

Running Tests During Development

# Install pytest
pip install pytest

# Run tests with output
pytest tests/ -v -s

# Run specific test
pytest tests/test_hook.py::test_single_keyword_match_denies -v

Adding New Databases

  1. Build LEANN database using LEANN tools
  2. Add entry to ~/.claude/hooks/docsearch-config.json
  3. Test with relevant query in Claude Code

License

MIT License - See LICENSE file for details

Contributing

Contributions welcome! Please:

  1. Fork the repository
  2. Create a feature branch
  3. Add tests for new functionality
  4. Ensure all tests pass
  5. Submit a pull request

Future Enhancements

See GitHub Issues for planned features:

  • CLI setup command for automated database creation
  • Database sharing functionality
  • Community database repository

Step 2: Commit README

git add README.md
git commit -m "docs: add comprehensive installation and usage documentation"

Task 10: Integration Testing Setup

Files:

  • Create: tests/integration/test_full_flow.py
  • Create: tests/integration/README.md

Step 1: Create integration test directory

mkdir -p tests/integration

Step 2: Create integration test README

Create tests/integration/README.md:

# Integration Tests

These tests require a real LEANN MCP server configuration and database.

## Setup

1. Ensure LEANN is installed
2. Build a test database
3. Configure MCP server in Claude Code
4. Run integration tests manually (not in CI)

## Running

```bash
# Skip in normal test runs
pytest tests/test_hook.py

# Run integration tests manually
pytest tests/integration/ -v

## Note

Integration tests are provided as examples and documentation. They require manual setup and are not run in automated CI.


Step 3: Create example integration test

Create tests/integration/test_full_flow.py:

# ABOUTME: Integration tests for full docsearch hook flow with real LEANN MCP server
# ABOUTME: Requires manual setup - not run in automated CI

import json
import subprocess
import pytest

# Mark all tests in this file as integration tests
pytestmark = pytest.mark.integration

@pytest.mark.skip(reason="Requires manual LEANN setup")
def test_full_flow_with_real_mcp():
    """
    End-to-end test with real LEANN MCP server

    Manual setup required:
    1. Build LEANN database for a test documentation site
    2. Configure ~/.claude/hooks/docsearch-config.json
    3. Ensure LEANN MCP server is running
    4. Update this test with your actual config
    """
    # This is a template - customize for your setup
    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "your test query here"},
        "session_id": "integration-test"
    }

    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    # First call should deny
    assert result.returncode == 2

    # At this point, you would manually verify:
    # 1. Claude Code receives the denial context
    # 2. Claude calls the MCP tool
    # 3. MCP returns results or fails
    # 4. If MCP fails, Claude retries WebSearch
    # 5. Hook allows the retry through

    # Retry should allow through
    result2 = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True
    )

    assert result2.returncode == 0

Step 4: Update pytest configuration

Create pytest.ini:

[pytest]
markers =
    integration: marks tests as integration tests (deselect with '-m "not integration"')

# By default, skip integration tests
addopts = -m "not integration"
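# Integration tests can still be selected explicitly, e.g.: pytest tests/integration/ -m integration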

Step 5: Commit integration test setup

git add tests/integration/ pytest.ini
git commit -m "test: add integration test framework and documentation"

Task 11: Final Validation and Cleanup

Files:

  • Create: .github/workflows/test.yml (optional)
  • Modify: README.md

Step 1: Run full test suite

pytest tests/ -v --tb=short

Expected: All unit tests PASS

Step 2: Verify hook script is executable

ls -la docsearch.py

Expected: -rwxr-xr-x (executable flag set)

Step 3: Validate example config

python3 -m json.tool config.example.json > /dev/null && echo "Valid JSON"

Expected: "Valid JSON"

Step 4: Run hook manually with example input

echo '{
  "hookEventName": "PreToolUse",
  "tool_name": "WebSearch",
  "tool_input": {"query": "test"},
  "session_id": "manual-test"
}' | python3 docsearch.py
echo "Exit code: $?"

Expected: Exit code 0 (no config file, fails open)
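
To also exercise the deny path by hand without touching the real ~/.claude directory, a throwaway driver along these lines can help. This is illustrative only; it relies on the hook resolving paths via Path.home(), so it points HOME at a scratch directory exactly as the unit tests do:

```python
import json
import os
import subprocess
import tempfile
from pathlib import Path

with tempfile.TemporaryDirectory() as tmp:
    # Write a minimal config under the scratch HOME
    hooks_dir = Path(tmp) / ".claude" / "hooks"
    hooks_dir.mkdir(parents=True)
    (hooks_dir / "docsearch-config.json").write_text(json.dumps({
        "databases": [{
            "keywords": ["gitlab"],
            "path": "/test/gitlab",
            "mcp_tool_name": "mcp__leann__search",
            "description": "GitLab docs",
        }]
    }))

    hook_input = {
        "hookEventName": "PreToolUse",
        "tool_name": "WebSearch",
        "tool_input": {"query": "gitlab ci"},
        "session_id": "manual-deny-test",
    }
    result = subprocess.run(
        ["python3", "docsearch.py"],
        input=json.dumps(hook_input),
        capture_output=True,
        text=True,
        env={**os.environ, "HOME": tmp},
    )
    print(result.returncode)  # expected: 2 (deny)
    print(result.stdout)      # expected: JSON with permissionDecision "deny"
```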

Step 5: Check for any TODO or FIXME comments

grep -r "TODO\|FIXME" docsearch.py tests/

Expected: No output (or document any intentional TODOs)

Step 6: Verify all files have proper headers

head -n 2 docsearch.py tests/test_hook.py

Expected: All files have "ABOUTME" comment headers

Step 7: Final commit

git add -A
git commit -m "chore: final validation and cleanup"

Task 12: Create GitHub Issues for Future Work

Files:

  • Create issues manually in GitHub (or use gh CLI)

Step 1: Create database sharing issue

gh issue create --title "Enable sharing pre-built RAG databases" --body "$(cat <<'EOF'
## Goal
Allow users to share pre-built LEANN databases to reduce setup friction.

## Features
- Export database metadata and files in shareable format
- Import shared databases with verification
- Community repository of common documentation databases (GitLab, K8s, etc.)

## Benefits
- Reduce setup friction for new users
- Standardize database quality for popular documentation sources
- Enable community contribution model

## Related
See design document section "Future Work - Issue 1"
EOF
)"

Step 2: Create CLI setup command issue

gh issue create --title "Add CLI command for automated database creation" --body "$(cat <<'EOF'
## Goal
Provide automated database creation to eliminate manual LEANN tool usage.

## Command Interface
```bash
docsearch-hook setup <keyword> <url>
```

## Features
- Crawl documentation website using LEANN
- Build RAG database automatically
- Add entry to config file
- Validate MCP server configuration

## Benefits
- Eliminates manual LEANN tool usage
- Reduces errors in database creation
- Streamlines onboarding experience

## Related
See design document section "Future Work - Issue 2"
EOF
)"


Step 3: Commit (if using file-based issue tracking)

git add -A
git commit -m "docs: create GitHub issues for future enhancements"

Completion Checklist

  • All unit tests passing
  • Config example provided
  • README.md complete with installation instructions
  • Hook script executable
  • State file management working
  • Escape hatch retry logic tested
  • Multi-keyword detection tested
  • Error handling with fail-open validated
  • Integration test framework documented
  • Future work issues created

Post-Implementation

After completing all tasks:

  1. Manual testing: Install hook in real Claude Code environment
  2. Documentation review: Ensure README is accurate
  3. Performance check: Verify hook doesn't slow down Claude noticeably
  4. Edge case validation: Test with real-world queries
