**Resume current task:**

```bash
cat .rtts/tasks.md | grep "In Progress" -A 20
```

**Check what's next:**

```bash
cat .rtts/tasks.md | grep "Next (Prioritized" -A 15
```

**Start a new task:**

```bash
git checkout -b feature/REQ-ID-description
```

**Commit format:**

```bash
git commit -m "[REQ-ID] Description"
```

**Task-completion checklist:**

1. Move task to "Completed This Session"

2. Update requirements.md: [~] → [x]

3. Update tests.md: 🟡/🔴 → ✅

4. Capture learnings

5. Update session context


---

*Session started: [timestamp] | Last update: [timestamp]*

**specs.md**

# Project Specifications

> **TL;DR:** What we're building and why

---

## Vision

[2-4 sentences describing the product vision, target users, and core value proposition]

**Example:**
A task management app for remote teams that emphasizes asynchronous communication. Designed for distributed teams across time zones who need visibility without constant meetings. Core value: Replace status meetings with contextual updates.

---

## Architecture Decisions

### [YYYY-MM-DD] Decision Title

**REQ-IDs:** [Comma-separated list of affected requirements]

**Context:**  
What problem are we solving? What constraints exist?

**Decision:**  
What we chose to do.

**Alternatives Considered:**
- **Option A:** Why we didn't choose this
- **Option B:** Why we didn't choose this

**Consequences:**
- ✅ Benefit we gain
- ✅ Another benefit
- ⚠️ Tradeoff we accept
- ⚠️ Another tradeoff

---

### Example: 2026-01-22 Use JWT for authentication

**REQ-IDs:** FR-AUTH-001, FR-AUTH-002, FR-AUTH-003

**Context:**  
Need stateless authentication for API that scales horizontally. Mobile clients need long-lived sessions. No existing session infrastructure.

**Decision:**  
Use JWT tokens stored in httpOnly cookies. 15-minute access tokens with 30-day refresh tokens.

**Alternatives Considered:**
- **Session cookies + Redis:** More secure but requires Redis infrastructure and complicates horizontal scaling
- **OAuth only:** Better for third-party auth but overkill for email/password and adds complexity

**Consequences:**
- ✅ Stateless, scales horizontally without coordination
- ✅ Mobile clients get seamless experience
- ⚠️ Token revocation requires blacklist (add Redis later if needed)
- ⚠️ Slightly larger cookie payload vs session ID

---

## Technical Stack

**Language:** [Primary language + version]  
**Framework:** [Main framework + version]  
**Database:** [Database + version]  
**Infrastructure:** [Hosting platform]  
**Key Libraries:** [Critical dependencies]

**Example:**
- **Language:** TypeScript 5.3
- **Frontend:** React 18, Next.js 14
- **Backend:** Node.js 20, Express 4
- **Database:** PostgreSQL 16, Prisma ORM
- **Auth:** jose (JWT), bcrypt
- **Infrastructure:** Vercel (frontend), Railway (backend)

---

## Conventions

### Code Style

- **File naming:** kebab-case for files, PascalCase for components
- **Folder structure:** Feature-based (`/features/auth/`, `/features/tasks/`)
- **Imports:** Absolute imports with `@/` prefix
- **Functions:** Verb-first naming (`getUserById`, `createTask`)

### Git Workflow

- **Branches:** `feature/REQ-ID-description`
- **Commits:** `[REQ-ID] Description` (imperative mood)
- **PR Title:** Same as first commit message
- **Merge:** Squash and merge to main

### API Design

- **REST principles:** Standard HTTP methods
- **Routes:** Plural nouns (`/api/users`, not `/api/user`)
- **Response format:**
```json
  {
    "data": { ... },
    "meta": { "timestamp": "...", "version": "..." }
  }
```
- **Error format:**
```json
  {
    "error": {
      "code": "AUTH_INVALID",
      "message": "User-friendly message",
      "field": "email" // optional
    }
  }
```

### Testing

- **Unit tests:** `*.test.ts` next to implementation
- **Integration tests:** `/tests/integration/`
- **Test naming:** `describe('what it does', () => { it('should [behavior]') })`
- **Coverage target:** 80% (don't chase 100%)

### Documentation

- **5-minute rule:** Every module has README or docstring
- **Complex functions:** Inline comments for "why", not "what"
- **API docs:** OpenAPI spec in `/docs/api.yaml`
- **Architecture diagrams:** Mermaid in specs.md

---

## Session Memory

### User Preferences

- [Discovered preference about UX]
- [Discovered preference about code organization]
- [Discovered preference about error handling]

**Example:**
- Prefers card-based layouts over tables
- Likes verbose variable names over abbreviations
- Wants error messages user-friendly, not technical

### Active Context

**Working On:** [Current focus area]  
**Next After This:** [What's queued]  
**Blocked On:** [What's waiting for external input]

**Example:**
- **Working On:** Authentication MVP (login, reset, remember-me)
- **Next After This:** User profile management
- **Blocked On:** Email service credentials from Ops (ETA: 2026-01-30)

### Known Issues

- [Issue that needs tracking but not urgent]
- [Technical debt to address later]

**Example:**
- Rate limiter uses in-memory store (switch to Redis before production)
- Email templates hardcoded in code (extract to `/templates` folder)

---

*Last updated: [YYYY-MM-DD]*

## The RTTS Workflow

### Daily Workflow

1. Read tasks.md → "What am I doing?"
2. Check requirements.md → "What's the REQ-ID?"
3. Review tests.md → "What tests exist?"
4. Code with commits → "[REQ-ID] Description"
5. Update all 3 files → Status markers
6. Capture session context → End of day

### Adding a New Feature

1. Add to requirements.md
   - Create REQ-ID
   - Write acceptance criteria
   - Add "Why" statement

2. Create test placeholder in tests.md
   - Add TEST-ID (status: ⚫ Not Started)
   - Link to REQ-ID

3. Add to tasks.md backlog
   - Create TASK-XXX
   - Link to REQ-ID
   - Estimate effort

4. When ready to start:
   - Move from backlog to "Next"
   - Then to "In Progress"
   - Update requirements.md: [ ] → [~]

### Working on a Task

1. Update tasks.md
   - Move task to "In Progress"
   - Note start time

2. Write tests first
   - Update tests.md status: ⚫ → 🟡

3. Implement feature
   - Commit format: [REQ-ID] Description
   - Keep commits atomic

4. Verify tests pass
   - Update tests.md status: 🟡 → ✅

5. Complete task
   - Update requirements.md: [~] → [x]
   - Move task to "Completed This Session"
   - Capture learnings

### Making an Architecture Decision

1. Add to specs.md
   - Date the decision
   - Document context and alternatives
   - List consequences

2. Link affected REQ-IDs
   - Update requirements if criteria change

3. Update conventions if needed
   - Add to specs.md conventions section

4. Capture in session memory
   - Note in tasks.md "Decisions Made Today"

### End of Session

1. In tasks.md:
   - Update "In Progress" with current state
   - Note what's left to do
   - Capture session context (decisions, blockers, preferences)
   - Update timestamp

2. Commit and push
   - Ensure all status updates are committed
   - Push .rtts/ changes to remote

3. Next session starts by reading tasks.md

## REQ-ID Format

### Structure

```
[CATEGORY]-[FEATURE]-[NUMBER]
```

**Examples:**

- FR-AUTH-001 (Functional Requirement - Authentication - 001)
- NFR-PERF-002 (Non-Functional Requirement - Performance - 002)
- BUG-LOGIN-005 (Bug - Login - 005)
- TECH-REFACTOR-001 (Technical Debt - Refactor - 001)

### Category Prefixes

| Prefix | Meaning | Use For |
|--------|---------|---------|
| FR | Functional Requirement | User-facing features |
| NFR | Non-Functional Requirement | Performance, security, scalability |
| BUG | Bug Fix | Defects in existing functionality |
| TECH | Technical Work | Refactors, infrastructure, tooling |
| L | Legacy | Pre-RTTS code (brownfield) |

### Numbering

- Sequential within feature area
- Pad with zeros: 001, 002, 010, 100
- Never reuse numbers
- Gaps are okay (deleted requirements)
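Both the structure and the numbering rules are machine-checkable. A minimal validator sketch (not part of RTTS itself; the prefix list mirrors the Category Prefixes table above):

```typescript
// Sketch: validate REQ-IDs against the format described above.
// Prefixes mirror the Category Prefixes table; \d{3,} allows 001 through 1000+.
const REQ_ID = /^(FR|NFR|BUG|TECH|L)-[A-Z]+-\d{3,}$/;

export function isValidReqId(id: string): boolean {
  return REQ_ID.test(id);
}

// isValidReqId('FR-AUTH-001') → true
// isValidReqId('fr-auth-1')   → false (wrong case, missing zero padding)
```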

## Test Naming

### Structure

```
TEST-[FEATURE]-[NUMBER]
```

Maps 1:1 with REQ-IDs: the category prefix (FR, NFR, BUG, TECH) is swapped for `TEST`, and the feature name and number are kept. A helper that derives this is sketched after the examples.

**Examples:**

- FR-AUTH-001 → TEST-AUTH-001
- NFR-PERF-002 → TEST-PERF-002
- BUG-LOGIN-005 → TEST-LOGIN-005
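Because the mapping is mechanical, an agent or script can derive it. A sketch:

```typescript
// Derive a TEST-ID from a REQ-ID by swapping the category prefix for TEST.
// FR-AUTH-001 → TEST-AUTH-001, NFR-PERF-002 → TEST-PERF-002
export function reqIdToTestId(reqId: string): string {
  const [, ...rest] = reqId.split('-'); // drop FR/NFR/BUG/TECH/L
  return ['TEST', ...rest].join('-');
}
```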

### Test Types

| Type | Scope | When to Use |
|------|-------|-------------|
| Unit | Single function/module | Pure logic, utilities, helpers |
| Integration | Multiple modules + I/O | API endpoints, database queries |
| E2E | Full user flow | Critical paths, happy paths |
| Manual | Human verification | Email delivery, visual checks |
| Load | Performance under stress | NFR performance requirements |

## Agent Constitutional Protocol

Constitutional principles are enforced through agent behavior, not documents.

### Anti-Ghost Policy

```yaml
before_code:
  - CHECK: "Does REQ-ID exist in requirements.md?"
  - CHECK: "Does TEST-ID exist in tests.md?"
  - BLOCK: If either missing
  - PROMPT: "Create REQ-ID first or link to existing?"

during_code:
  - ENFORCE: Commit format "[REQ-ID] Description"
  - ENFORCE: Link to requirements.md in PR description
  - WARN: If file has no REQ-ID in recent commits

after_code:
  - UPDATE: requirements.md status → [~] to [x]
  - UPDATE: tests.md status → 🟡 to ✅
  - UPDATE: tasks.md → Move to "Completed"
  - APPEND: Commit hash to requirements.md
```

### Traceability

```yaml
on_commit:
  - VALIDATE: Commit message starts with [REQ-ID]
  - APPEND: Commit hash to requirements.md
  - LOG: Files changed for this REQ-ID

on_revert:
  - SEARCH: All commits with target REQ-ID
  - PROMPT: "Found N commits. Revert all? (y/n)"
  - EXECUTE: Git revert if approved
  - UPDATE: All 4 files to reflect revert
```
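As a concrete sketch of the `on_commit` validation: Git passes the path of the commit-message file to the commit-msg hook as its first argument, so a Node-based hook (hypothetical, e.g. wired up through husky) could be as small as:

```typescript
// commit-msg hook sketch: reject commits whose subject lacks a [REQ-ID] prefix.
import { readFileSync } from 'node:fs';

const msgFile = process.argv[2]; // Git passes the message file path as $1
const subject = readFileSync(msgFile, 'utf8').split('\n')[0];

if (!/^\[(FR|NFR|BUG|TECH|L)-[A-Z]+-\d+\]/.test(subject)) {
  console.error(`Rejected: subject must start with [REQ-ID], got "${subject}"`);
  process.exit(1);
}
```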

### Test-First

```yaml
workflow:
  - WRITE: Test specification in tests.md first
  - CREATE: Actual test file (status 🔴 Fail)
  - IMPLEMENT: Code to make test pass
  - VERIFY: Test passes (status ✅ Pass)
  - UPDATE: All status markers

enforcement:
  - BLOCK: Status change to [x] without ✅ in tests.md
  - WARN: Large implementation without corresponding tests
  - SUGGEST: "Write test first" when task starts
```
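The BLOCK rule can be approximated with a small check against the file templates in this handbook (a sketch; it assumes tests.md uses the `### TEST-ID:` headings shown in the example cycle later):

```typescript
// Sketch: allow a requirement to move to [x] only if its test shows ✅.
import { readFileSync } from 'node:fs';

export function canMarkComplete(reqId: string): boolean {
  const testId = 'TEST-' + reqId.split('-').slice(1).join('-');
  const tests = readFileSync('.rtts/tests.md', 'utf8');
  // Isolate this test's section: from its heading to the next ### heading.
  const section = (tests.split(`### ${testId}`)[1] ?? '').split('\n### ')[0];
  return section.includes('✅');
}
```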

### Documentation

```yaml
five_minute_rule:
  - CHECK: New module has README or docstring?
  - WARN: If missing, ask "Should I add docs?"
  - SUGGEST: Template based on module type

clarity:
  - PREFER: Examples over abstract explanations
  - PREFER: "Run this command" over "You could try..."
  - PREFER: Specific over vague
  - SHOW: Actual code snippets

commit_messages:
  - REQUIRE: Imperative mood ("Add feature" not "Added feature")
  - REQUIRE: [REQ-ID] prefix
  - SUGGEST: 50 char subject, 72 char body wrap
```

### Session Management

```yaml
on_pause:
  - CAPTURE: Current task state in tasks.md
  - CAPTURE: Decisions made this session
  - CAPTURE: User preferences discovered
  - CAPTURE: Blockers encountered
  - UPDATE: Timestamp
  - COMMIT: ".rtts/ changes before pause"

on_resume:
  - READ: tasks.md "In Progress" section
  - DISPLAY: "You were working on [TASK]"
  - DISPLAY: "Next step: [concrete action]"
  - LOAD: Relevant REQ-ID from requirements.md
  - LOAD: Test status from tests.md
```

## Agent Workflow Example

```typescript
// Pseudocode for agent behavior

async function start_feature(description: string) {
  // 1. Constitutional check
  const existing_req = await search_requirements(description);

  if (!existing_req) {
    const should_create = await ask_user(
      "No matching REQ-ID found. Should I:\n" +
      "a) Create new REQ-ID\n" +
      "b) Link to existing REQ-ID\n" +
      "c) Cancel"
    );

    if (should_create === 'a') {
      await create_requirement(description);
    }
  }

  const req_id = await get_active_req_id();
  const test_id = 'TEST-' + req_id.split('-').slice(1).join('-'); // FR-AUTH-001 → TEST-AUTH-001

  // 2. Test-first enforcement
  const test_exists = await check_test_exists(req_id);

  if (!test_exists) {
    await create_test_placeholder(req_id);
    return "Created TEST-ID. Writing test specification...";
  }

  // 3. Implementation
  await write_tests(req_id);
  await implement_feature(req_id);

  // 4. Traceability
  const commit_hash = await commit(`[${req_id}] ${description}`);
  await append_commit_to_requirements(req_id, commit_hash);

  // 5. Status synchronization
  await update_status('requirements.md', req_id, '[x]');
  await update_status('tests.md', test_id, '✅');
  await move_task_to_completed(req_id);

  return "Feature complete. All files updated.";
}

async function resume_session() {
  // Read current state
  const in_progress = await read_section('tasks.md', 'In Progress');

  if (in_progress.length === 0) {
    const next_task = await read_section('tasks.md', 'Next');
    return `No active task. Next up: ${next_task[0]}`;
  }

  const current_task = in_progress[0];
  const req_id = current_task.req_id;
  const next_step = current_task.next_step;

  // Load context
  const requirement = await load_requirement(req_id);
  const test_status = await load_test_status(req_id);

  return {
    message: `Resuming: ${current_task.description}`,
    next_step: next_step,
    context: { requirement, test_status }
  };
}
```

## Brownfield Adoption

### Phase 1: Setup (Day 1)

```bash
# 1. Create RTTS structure
mkdir .rtts
cd .rtts
touch requirements.md tests.md tasks.md specs.md

# 2. Copy templates from this handbook
```

### Phase 2: Inventory (Day 1-2)

```markdown
# In requirements.md, create Legacy section:

## Legacy (Pre-RTTS)

**L-USER-001** `[x]` User registration
**Why:** Existed before RTTS adoption
**Test:** Manual testing only
**Location:** `src/auth/register.ts`

**L-TASK-001** `[x]` Create task
**Why:** Core feature, already deployed
**Test:** Some unit tests in `tests/tasks/`
**Location:** `src/tasks/create.ts`
```

### Phase 3: Prioritize (Day 2)

```markdown
# In tasks.md:

## Migration Priorities

### Critical Path (Touch frequently)
- [ ] L-USER-001: Add proper tests, convert to FR-USER-001
- [ ] L-TASK-001: Add proper tests, convert to FR-TASK-001

### Stable (Low priority)
- [ ] L-SETTINGS-001: Working fine, migrate when changed

### Deprecated (Remove)
- [ ] L-OLDFEATURE-001: Schedule for removal
```

### Phase 4: New Work (Day 3+)

- All new features follow full RTTS
- Bug fixes get BUG-XXX-NNN REQ-IDs
- Refactors get TECH-XXX-NNN REQ-IDs
- When touching legacy code, add REQ-ID and tests

### Phase 5: Gradual Migration

- Touch legacy → Add to requirements.md
- Fix bug in legacy → Create BUG-XXX-NNN
- Refactor legacy → Create TECH-XXX-NNN
- Eventually: All code has REQ-IDs (the audit sketched below can check progress)
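To see how far along the migration is, an audit can flag files whose commit history never mentions a REQ-ID. A sketch, assuming Git, Node, and source under `src/`:

```typescript
// Brownfield audit sketch: list files with no [REQ-ID] in any commit subject.
import { execSync } from 'node:child_process';

const files = execSync('git ls-files src', { encoding: 'utf8' }).trim().split('\n');

for (const file of files) {
  const subjects = execSync(`git log --format=%s -- "${file}"`, { encoding: 'utf8' });
  if (!/\[(FR|NFR|BUG|TECH|L)-[A-Z]+-\d+\]/.test(subjects)) {
    console.log(`ghost candidate: ${file}`);
  }
}
```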

## Tips & Best Practices

### Do's ✅

- **Read tasks.md first every session** - It's your entry point
- **Write acceptance criteria before code** - Clarity prevents rework
- **Keep commits atomic** - One logical change per commit
- **Update status markers immediately** - Don't let them drift
- **Capture session context** - Future you will thank you
- **Ask "why?" three times** - Ensures requirements make sense
- **Prefer small REQ-IDs** - Easier to test, easier to review

### Don'ts ❌

- **Don't code without REQ-ID** - No exceptions, ever
- **Don't skip tests** - They're mandatory, not optional
- **Don't update requirements.md without tests.md** - Keep them in sync
- **Don't write vague acceptance criteria** - "Works well" is not testable
- **Don't forget session context** - Decisions get lost without capture
- **Don't create mega-requirements** - Split into smaller REQ-IDs
- **Don't orphan tasks** - Every task needs a REQ-ID

### Common Pitfalls

**"This is just a quick fix, I don't need a REQ-ID"**
→ Quick fixes become ghost code. Create BUG-XXX-NNN even for 1-line changes.

**"I'll write tests after I get it working"**
→ Tests never get written. Write test spec first, even if implementation comes later.

**"The requirement is obvious, no need to document why"**
→ Future developers won't have context. Always include "Why" statement.

**"I'll update the docs after finishing all features"**
→ Docs never get updated. Update .rtts/ files as you work.

**"This is legacy code, doesn't need RTTS"**
→ Technical debt compounds. Tag as [L] in requirements.md at minimum.


## FAQ

### Do I need all 4 files?

Yes. Each file has a specific purpose:

- **requirements.md** = Source of truth
- **tests.md** = Verification proof
- **tasks.md** = Session continuity
- **specs.md** = Decision context

### Can I add more files to .rtts/?

Yes, but be careful. The power is in simplicity. Common additions:

- **metrics.md** - Performance baselines, KPIs
- **incidents.md** - Production issues log
- **glossary.md** - Domain terminology

### What about documentation for users?

RTTS is for development documentation. User-facing docs live elsewhere:

- `/docs/` for user guides
- `/api/` for API docs
- `README.md` in root for project overview

### How do I handle cross-cutting requirements?

Create a REQ-ID that references multiple areas:

```markdown
**NFR-SEC-001** `[x]` All API routes require authentication

**Why:** Security baseline for entire API surface

**Acceptance Criteria:**
- Every route in /api/* checks for valid JWT
- Exceptions explicitly listed: /api/auth/*, /api/health

**Test:** `TEST-SEC-001` (integration suite)

**Affects:** FR-USER-*, FR-TASK-*, FR-PROJ-*
```
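For illustration, NFR-SEC-001 could be satisfied by a single middleware. This sketch is not the handbook's prescribed implementation; it assumes Express with cookie-parser, the jose library from the example stack, and a cookie named `token` (an assumption), and it reuses the error envelope from the API conventions:

```typescript
import type { Request, Response, NextFunction } from 'express';
import { jwtVerify } from 'jose';

// Paths exempt from the JWT check, per NFR-SEC-001's explicit exception list.
const PUBLIC_PREFIXES = ['/api/auth/', '/api/health'];

// Symmetric signing key; JWT_SECRET is assumed to be set in the environment.
const secret = new TextEncoder().encode(process.env.JWT_SECRET);

export async function requireAuth(req: Request, res: Response, next: NextFunction) {
  if (PUBLIC_PREFIXES.some((prefix) => req.path.startsWith(prefix))) return next();

  // Token lives in an httpOnly cookie per the JWT decision in specs.md;
  // the cookie name and cookie-parser middleware are assumptions.
  const token = req.cookies?.token;
  if (!token) {
    return res.status(401).json({
      error: { code: 'AUTH_INVALID', message: 'Please sign in to continue.' },
    });
  }

  try {
    await jwtVerify(token, secret); // throws on bad signature or expiry
    next();
  } catch {
    res.status(401).json({
      error: { code: 'AUTH_INVALID', message: 'Your session has expired. Please sign in again.' },
    });
  }
}
```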

### What if I disagree with agent suggestions?

You're in charge. Agents enforce rules to help, but:

- You can override with explanation
- You can adjust preferences in specs.md
- You can pause enforcement for experiments
- Document your reasoning in specs.md

### How do I handle refactoring?

Create TECH-REFACTOR-NNN requirements:

```markdown
**TECH-REFACTOR-001** `[~]` Extract auth middleware

**Why:** Auth logic duplicated across 5 routes

**Acceptance Criteria:**
- Single middleware in /src/middleware/auth.ts
- All routes use same middleware
- Existing tests still pass

**Test:** `TEST-REFACTOR-001` (verify no behavior change)
```

### Do tests.md tests match actual test files exactly?

No. tests.md is the specification. Actual test files implement it.

Think of it like:

- **tests.md** = "What needs verification"
- **test files** = "How it's verified"

### Can I use RTTS with [framework/language]?

Yes. RTTS is tool-agnostic. It's just markdown files. Works with:

- Any language (Python, Go, Rust, JS, etc.)
- Any framework (React, Vue, Django, Rails, etc.)
- Any testing framework (Jest, pytest, Go test, etc.)

### How do I handle external dependencies?

Document in specs.md, track in requirements.md:

```markdown
**NFR-EXT-001** `[!]` Email service integration

**Why:** Password reset requires email delivery

**Acceptance Criteria:**
- SMTP credentials configured
- Test email can be sent
- Error handling for failed sends

**Test:** `TEST-EXT-001`

**Blocked By:** Ops team setting up SendGrid account
```

## CLI Tool (Optional)

While RTTS works perfectly with manual file editing, a CLI can help enforce rules:

```bash
# Hypothetical rtts CLI

rtts init                    # Scaffold .rtts/ files
rtts add-req "User login"    # Add to requirements.md
rtts add-test REQ-001        # Add to tests.md
rtts status                  # Show coverage matrix
rtts validate                # Check for ghost code
rtts resume                  # Display current task
```

**Note:** CLI is optional. RTTS is markdown-first. Build CLI only if it adds value.
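For a sense of scale, the heart of a hypothetical `rtts status` fits in a dozen lines of Node. The regex matches the requirement template used throughout this handbook; everything else here is an assumption:

```typescript
// Sketch of `rtts status`: join REQ-IDs from requirements.md with tests.md.
import { readFileSync } from 'node:fs';

const reqs = readFileSync('.rtts/requirements.md', 'utf8');
const tests = readFileSync('.rtts/tests.md', 'utf8');

// Matches template lines like: **FR-AUTH-001** `[x]` User registration
const reqPattern = /\*\*([A-Z]+-[A-Z0-9]+-\d+)\*\* `(\[.?\])`/g;

for (const [, reqId, status] of reqs.matchAll(reqPattern)) {
  const testId = 'TEST-' + reqId.split('-').slice(1).join('-');
  const covered = tests.includes(testId) ? 'spec found' : 'NO TEST';
  console.log(`${reqId}  ${status}  ${testId}: ${covered}`);
}
```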


## Example: Complete Feature Cycle

### Step 1: Add Requirement

```markdown
# In requirements.md:

**FR-EXPORT-001** `[ ]` Export tasks as CSV

**Why:** Users need to analyze tasks in Excel

**Acceptance Criteria:**
- GET /api/tasks/export returns CSV file
- Includes columns: ID, title, status, assignee, created, due
- Respects current filters (status, assignee, date range)
- Max 10,000 rows per export

**Test:** `TEST-EXPORT-001`
```

### Step 2: Add Test Placeholder

```markdown
# In tests.md:

### TEST-EXPORT-001: Export tasks as CSV

**REQ-ID:** FR-EXPORT-001  
**Type:** Integration  
**Status:** ⚫ Not Started
```
```typescript
describe('GET /api/tasks/export', () => {
  it('returns CSV with correct headers')
  it('includes all task data in rows')
  it('respects status filter')
  it('respects date range filter')
  it('limits to 10,000 rows')
})
```

### Step 3: Add to Tasks

```markdown
# In tasks.md:

## Next

### TASK-042: CSV export endpoint `[ ]`

**REQ-ID:** FR-EXPORT-001  
**Est:** 3 hours

**Subtasks:**
1. Create CSV serialization utility
2. Implement /api/tasks/export endpoint
3. Add filter support
4. Write tests
5. Update status markers
```

### Step 4: Start Work

```
# Update tasks.md:
Move TASK-042 to "In Progress"
Update status: [ ] → [~]

# Update requirements.md:
Change FR-EXPORT-001: [ ] → [~]
```

### Step 5: Write Tests

```bash
# Create test file
touch tests/tasks/export.test.ts

# Implement tests (they'll fail - that's good)
# Update tests.md:
# Change TEST-EXPORT-001 status: ⚫ → 🔴 Fail (0/5)
```
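A first cut of the test file might look like this. It is a sketch: supertest and Jest are assumptions consistent with the example stack, `app` is an assumed export, and the exact header row is one reading of the acceptance criteria:

```typescript
// tests/tasks/export.test.ts, written before the endpoint exists, so it fails.
import request from 'supertest';
import { app } from '../../src/app'; // assumed application entry point

describe('GET /api/tasks/export', () => {
  it('returns CSV with correct headers', async () => {
    const res = await request(app).get('/api/tasks/export');
    expect(res.status).toBe(200);
    expect(res.headers['content-type']).toContain('text/csv');
    expect(res.text.split('\n')[0]).toBe('id,title,status,assignee,created,due');
  });

  it('limits to 10,000 rows', async () => {
    const res = await request(app).get('/api/tasks/export');
    // Header line plus at most 10,000 data rows.
    expect(res.text.trim().split('\n').length).toBeLessThanOrEqual(10_001);
  });
});
```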

### Step 6: Implement Feature

```bash
# Write code
touch src/utils/csv.ts
touch src/api/tasks/export.ts

# Commit atomically
git commit -m "[FR-EXPORT-001] Add CSV serialization utility"
git commit -m "[FR-EXPORT-001] Implement export endpoint"
git commit -m "[FR-EXPORT-001] Add filter support"
```

### Step 7: Verify Tests

```bash
npm test -- export.test.ts
# All 5 tests pass ✅

# Update tests.md:
# Change TEST-EXPORT-001 status: 🔴 → ✅ Pass (5/5)
```

### Step 8: Complete Task

```
# Update requirements.md:
Change FR-EXPORT-001: [~] → [x]
Append commits: `a1b2c3d`, `e4f5g6h`, `i7j8k9l`

# Update tasks.md:
Move TASK-042 to "Completed This Session"
Capture learnings:
- csv-stringify library works well
- Filter logic reusable from list endpoint
- Consider pagination for >10k rows later
```

### Step 9: Document Decision (if made)

```markdown
# In specs.md (if architectural choice):

### 2026-01-28 Use csv-stringify for CSV generation

**REQ-IDs:** FR-EXPORT-001

**Decision:** Use csv-stringify npm package

**Alternatives:**
- Manual CSV string building: Error-prone with escaping
- Papa Parse: Overkill, designed for parsing not generation

**Consequences:**
- ✅ Handles edge cases (commas, quotes, newlines)
- ✅ Well-maintained, 3M weekly downloads
- ⚠️ Additional 50KB dependency
```

## Success Metrics

You're doing RTTS right if:

- ✅ Every file in /src has a REQ-ID in git blame
- ✅ Every REQ-ID has test coverage in tests.md
- ✅ You can pause and resume without context loss
- ✅ New developers understand what to work on from tasks.md
- ✅ You can revert features by REQ-ID, not by guessing commits
- ✅ "Why does this code exist?" is answerable from requirements.md
- ✅ No code review asks "where's the test?"
- ✅ Architecture decisions are documented with rationale

Warning signs:

- ⚠️ Files committed without [REQ-ID] prefix
- ⚠️ requirements.md out of sync with code
- ⚠️ tests.md shows ⚫ Not Started for merged code
- ⚠️ tasks.md not updated for weeks
- ⚠️ Someone asks "why did we decide X?" and no one knows
- ⚠️ New features have no acceptance criteria
- ⚠️ "Ghost code" appearing without traceability


## Comparison with Full STRIVE

| Aspect | Full STRIVE | RTTS |
|--------|-------------|------|
| Files | 7+ docs + `phases/` | 4 files |
| Constitution | `01-Constitution.md` | Agent behavior |
| Complexity | High ceremony | Minimal ceremony |
| Phases | Formal DISCUSS → PLAN → EXECUTE → VERIFY | Continuous flow |
| Team size | 5+ people | 1-4 people |
| CLI | Required | Optional |
| Learning curve | Steep | Gentle |
| Enforcement | Automated audits | Agent + manual |

**What's Preserved:**

- Anti-ghost policy (REQ-ID traceability)
- Test coverage mandate
- Architecture decisions
- Session continuity
- Git-aware workflow

**What's Simplified:**

- No separate Constitution doc
- No formal phase gates
- No complex folder structure
- No Timeline document
- No multi-agent orchestration

## Getting Help

### Read First

1. This handbook (you're reading it!)
2. Your .rtts/tasks.md (where am I?)
3. Your .rtts/requirements.md (what's the REQ-ID?)

### Common Questions

- "Where do I start?" → Read tasks.md
- "How do I add a feature?" → See "Adding a New Feature" workflow
- "What if I need to refactor?" → Create TECH-XXX-NNN requirement
- "How do I handle legacy code?" → Tag as [L] in requirements.md

### Philosophy

- Traceability over documentation volume
- Examples over explanations
- Action over theory
- Agent enforcement over human memory

License & Attribution

RTTS is derived from STRIVE (Synchronized Testing, Requirements, Integration, Visualization, Engineering).

  • STRIVE: Full-featured methodology for teams
  • RTTS: Lightweight subset for solo devs and small teams

Both emphasize:

  • No ghost code
  • Test-driven development
  • Requirements traceability
  • Session continuity

Choose STRIVE for large teams with complex coordination needs.
Choose RTTS for solo/small teams who want discipline without overhead.


## Version History

### v1.0 (2026-01-28)

- Initial release
- 4-file structure: requirements, tests, tasks, specs
- Agent constitutional protocol
- Brownfield adoption guide
- Complete workflow examples

---

*RTTS Handbook v1.0 — Requirements. Tests. Tasks. Specs.*
