FanFlix Story Writing Agent — MVP Plan

1. Problem Statement

FanFlix needs a way to generate complete short stories autonomously from a premise or story idea. The existing LMWFY writing system only handles paragraph-level operations (continue, rewrite, insert) with a human in the loop. There is no capability to produce a full, coherent multi-chapter story end-to-end without manual intervention at every step.

2. Solution Overview

Add a new story_agent product to the LMWFY backend that orchestrates complete story generation. A single API call with a premise, genre, and optional structure hints triggers an agent that plans the story (outline, chapters, character arcs), then writes each chapter sequentially — passing accumulated context forward so the narrative stays coherent. The client polls one job ID and receives chapters as they complete, with the full story available at the end.
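For concreteness, the generate request might look like the sketch below. Every field name is an illustrative assumption rather than a finalized schema.

```python
# Hypothetical body for POST /api/story-agent/generate.
# All field names here are assumptions, not a finalized contract.
generate_request = {
    "premise": "A detective in 1920s Paris discovers that the city's "
               "gargoyles come alive at night",
    "genre": "mystery",                        # optional
    "chapter_count": 3,                        # optional, MVP range 2-5
    "tone": "dramatic",                        # optional (P1)
    "character_names": ["Inspector Lenoir"],   # optional, illustrative name
    "content_rating": "SFW",                   # "SFW" or "NSFW"
}
```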

3. Comparison Table

| Approach | Pros | Cons |
| --- | --- | --- |
| Single monolithic job (one LLM call for entire story) | Simple, one job | Context window limits, no progress visibility, poor quality for longer stories |
| Orchestrator + chapter jobs (chosen) | Progress per chapter, context management, reusable chapter handler, can retry individual chapters | More complex, needs parent-child job relationship |
| Client-driven orchestration (app calls chapter API in sequence) | Backend stays simple | Client complexity, unreliable on mobile (app backgrounding), duplicates agent logic per client |

4. User Flow

  1. User provides a story premise (e.g., "A detective in 1920s Paris discovers that the city's gargoyles come alive at night") plus optional parameters: genre, target chapter count, tone, character names, content rating
  2. Client calls POST /api/story-agent/generate with the premise and parameters
  3. Backend creates an orchestrator job (status: pending) and returns the job ID
  4. Worker picks up the orchestrator job and runs the story agent handler
  5. Agent Step 1 — Plan: LLM generates a story outline (title, chapter summaries, character list, narrative arc) and stores it in the job's progress field
  6. Agent Step 2 — Write: For each chapter in the outline, the agent generates the chapter text, passing the outline + all previously written chapters as context. Each completed chapter is appended to the job's progress
  7. Agent Step 3 — Polish (optional): A final pass reviews the complete story for consistency and generates a synopsis
  8. Client polls GET /api/story-agent/job/{id} and sees: current phase (planning/writing/polishing), chapters completed so far, and when done — the full story with metadata (a minimal client sketch follows this list)
  9. On completion, the job's output contains: title, synopsis, chapter list with text, character list, and generation metadata
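A minimal client-side sketch of steps 2, 8, and 9, assuming the endpoints above; the job_id, status, phase, chapters, and output field names are assumptions about the response shape.

```python
import time
import requests

BASE = "https://api.example.com"  # placeholder host

# Step 2: create the orchestrator job; the backend replies with a job ID (202).
resp = requests.post(
    f"{BASE}/api/story-agent/generate",
    json={"premise": "A detective in 1920s Paris discovers that the city's "
                     "gargoyles come alive at night"},
)
job_id = resp.json()["job_id"]  # field name assumed

# Step 8: poll until the job finishes, displaying incremental progress.
while True:
    job = requests.get(f"{BASE}/api/story-agent/job/{job_id}").json()
    print(job.get("phase"), "-", len(job.get("chapters", [])), "chapters done")
    if job.get("status") in ("completed", "failed"):
        break
    time.sleep(5)

# Step 9: on completion the output carries title, synopsis, chapters, metadata.
if job["status"] == "completed":
    story = job["output"]  # field name assumed
    print(story["title"])
    print(story["synopsis"])
```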

5. Scope

P0 — Must Have (MVP)

  • New story_agent product registered in the backend product registry
  • Orchestrator job type (generate_story) with a dedicated handler (sketched after this list)
  • Story planning phase: LLM generates structured outline from premise
  • Chapter writing phase: sequential chapter generation with full prior context
  • Progress tracking: job record updated after each chapter so clients can see incremental progress
  • API endpoints: POST to create story job, GET to poll status and retrieve chapters
  • New system and user prompts for story planning and chapter writing stored in the prompts collection
  • Credit charging for the full story generation (single charge on completion)
  • Support for genre, chapter count (2-5), and content rating (SFW/NSFW) parameters
  • Error handling: if a chapter fails, the job fails with partial results preserved
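The orchestrator handler might be structured roughly as below. This is a sketch only: call_llm and save_progress are injected placeholders rather than existing LMWFY helpers, and the prompt text is illustrative.

```python
import json
from typing import Callable

def handle_generate_story(params: dict,
                          call_llm: Callable[[str], str],
                          save_progress: Callable[[dict], None]) -> dict:
    """Sketch of the generate_story orchestrator: plan first, then write chapters in order."""
    # Phase 1 - Plan: ask for a structured JSON outline built from the premise.
    outline = json.loads(call_llm(
        f"Plan a {params.get('chapter_count', 3)}-chapter "
        f"{params.get('genre', 'general fiction')} story.\n"
        f"Premise: {params['premise']}\n"
        "Return JSON with 'title', 'characters', and 'chapters' (one summary per chapter)."
    ))
    save_progress({"phase": "planning", "outline": outline})

    # Phase 2 - Write: every chapter prompt carries the outline plus all prior chapters.
    chapters = []
    for idx, summary in enumerate(outline["chapters"], start=1):
        prior_text = "\n\n".join(c["text"] for c in chapters)
        text = call_llm(
            f"Outline: {json.dumps(outline)}\n"
            f"Story so far:\n{prior_text}\n"
            f"Write chapter {idx}: {summary}"
        )
        chapters.append({"index": idx, "text": text})
        save_progress({"phase": "writing", "chapters": chapters})  # progress after each chapter

    return {"title": outline["title"], "outline": outline, "chapters": chapters}
```

Because progress is saved after every chapter, a failure during chapter N leaves chapters 1 through N-1 in the job record, which matches the P0 requirement to preserve partial results.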

P1 — Should Have

  • Polish phase: consistency review pass after all chapters are written
  • Character voice consistency: character descriptions fed into each chapter prompt
  • Synopsis generation: auto-generate a 2-3 sentence synopsis on completion
  • Configurable tone/style parameter (literary, casual, dramatic, humorous)
  • Retry logic: failed chapter generation retries once before failing the whole job (a minimal sketch follows this list)
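A minimal sketch of the single-retry behavior; generate stands in for the chapter LLM call and is a placeholder, not an existing function.

```python
def generate_chapter_with_retry(generate, prompt: str, retries: int = 1) -> str:
    """Retry a failed chapter generation before letting the whole job fail (P1: retries=1).

    Any exception raised on the final attempt propagates so the orchestrator can
    fail the job while preserving previously written chapters.
    """
    for attempt in range(retries + 1):
        try:
            return generate(prompt)
        except Exception:
            if attempt == retries:
                raise
```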

v2+ — Out of Scope

  • Interactive editing of agent-generated stories (use existing LMWFY paragraph ops)
  • Multi-model support (e.g., different models for planning vs writing)
  • Cover art generation pipeline
  • Story serialization (ongoing multi-session stories)
  • Reader feedback loop that improves future stories
  • Collaborative stories (multiple users influencing the agent)
  • Translation to other languages

6. Risks & Mitigations

| Risk | Impact | Mitigation |
| --- | --- | --- |
| Context window overflow for longer stories (chapters 4-5 may exceed token limits) | Chapter quality degrades or API errors | MVP caps at 5 chapters; use rolling context window (outline + last 2 chapters + summary of earlier ones) |
| Story coherence breaks across chapters | Inconsistent characters, plot holes | Pass full outline to every chapter prompt; include character sheet in context |
| Long generation time (5 chapters = 5+ LLM calls, potentially 2-3 min total) | Client timeout, user frustration | Progress field shows completed chapters; client can display them incrementally |
| High credit cost per story vs per paragraph | Users burn credits fast | Set clear credit cost; show estimate before starting |
| LLM generates low-quality outline leading to bad story | Wasted generation, poor output | Use a dedicated planning prompt with structured output format (JSON outline) |
| Worker blocks on long-running orchestrator job | Other jobs in queue starved | Story agent runs in its own collection with dedicated worker(s) |
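A sketch of the rolling context window mitigation from the first row above, under the assumption that earlier chapters are reduced to one-line summaries while the most recent two stay verbatim; the index, summary, and text field names are assumptions.

```python
import json

def build_chapter_context(outline: dict, chapters: list[dict], keep_full: int = 2) -> str:
    """Build the next chapter's context without exceeding the token budget.

    Always include the outline; earlier chapters contribute only a summary,
    the last `keep_full` chapters contribute their full text.
    """
    cut = max(len(chapters) - keep_full, 0)
    earlier, recent = chapters[:cut], chapters[cut:]

    parts = ["Outline:\n" + json.dumps(outline, indent=2)]
    parts += [f"Chapter {c['index']} summary: {c['summary']}" for c in earlier]
    parts += [f"Chapter {c['index']} (full text):\n{c['text']}" for c in recent]
    return "\n\n".join(parts)
```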

7. Success Criteria Checklist

Story Agent Registration

  • story_agent product registered in product registry with its own collection
  • Worker(s) started for story_agent jobs at app startup
  • Health endpoint returns healthy status

Story Generation Flow

  • POST endpoint accepts premise + parameters and returns job ID (202)
  • GET endpoint returns job status, current phase, and completed chapters
  • Planning phase produces a structured outline (title, chapter summaries, characters)
  • Each chapter is written with outline + prior chapters as context
  • Completed job contains full story: title, all chapters, synopsis, metadata
  • Failed job preserves any chapters that were successfully generated

Quality & Constraints

  • Generated stories have coherent narrative arc across all chapters
  • Character names and traits remain consistent chapter to chapter
  • Stories respect the requested genre and content rating
  • Chapter count matches the requested count (default: 3)
  • Each chapter is 800-1500 words (configurable via prompt tuning)

Integration

  • Credits charged once on story completion (not per chapter)
  • Prompts stored in MongoDB prompts collection (not hardcoded; loading sketched below)
  • Job schema compatible with existing job dashboard/monitoring
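To keep prompts out of the codebase, the handler could load them from the prompts collection along these lines; the database name, prompt naming, and document fields are assumptions.

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
db = client["lmwfy"]                               # database name assumed

def load_prompt(name: str) -> dict:
    """Fetch a named prompt (e.g. 'story_agent.chapter_writing') from the prompts collection."""
    doc = db["prompts"].find_one({"name": name})
    if doc is None:
        raise KeyError(f"prompt not found: {name}")
    # 'system' and 'user_template' are assumed field names for the stored prompt parts.
    return {"system": doc["system"], "user_template": doc["user_template"]}
```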

8. End-to-End Test List

  • E2E-1: Create a story job with just a premise (no optional params) → job completes with 3 chapters, title, and synopsis (a pytest sketch of this test follows the list)
  • E2E-2: Create a story job with all params (premise, genre, 5 chapters, NSFW, character names) → job respects all parameters
  • E2E-3: Poll a story job mid-generation → response shows current phase and partially completed chapters
  • E2E-4: Create a story job with invalid/empty premise → returns 400 error, no job created
  • E2E-5: Simulate LLM failure during chapter 3 → job status is failed, chapters 1-2 preserved in output
  • E2E-6: Create story job → verify credits charged only once on completion, not on failure
  • E2E-7: Two story jobs created concurrently → both complete independently without interference
  • E2E-8: Story job with 5 chapters → verify chapter 5 references events from chapter 1 (coherence check)
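As an illustration, E2E-1 might look like the pytest sketch below, assuming an HTTP test client fixture and the illustrative response fields used earlier; polling details are placeholders.

```python
import time

def test_premise_only_story_completes(client):
    """E2E-1 sketch: a premise-only request completes with 3 chapters, a title, and a synopsis.

    `client` is assumed to be an HTTP test client fixture (e.g. FastAPI's TestClient);
    job_id/status/output field names follow the illustrative schema above.
    """
    resp = client.post(
        "/api/story-agent/generate",
        json={"premise": "A lighthouse keeper finds a message from the future"},
    )
    assert resp.status_code == 202
    job_id = resp.json()["job_id"]

    # Poll with a hard deadline so the test stays bounded.
    deadline = time.time() + 300
    job = {}
    while time.time() < deadline:
        job = client.get(f"/api/story-agent/job/{job_id}").json()
        if job.get("status") in ("completed", "failed"):
            break
        time.sleep(5)

    assert job.get("status") == "completed"
    output = job["output"]
    assert output["title"] and output["synopsis"]
    assert len(output["chapters"]) == 3  # default chapter count
```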

9. Manual Testing Checklist

Smoke Test (2 min)

  • POST a simple premise → get 202 with job_id
  • Poll job_id → see it move through pending → in_progress → completed
  • Read the output → story has title, chapters with text, makes sense

Feature Test (5 min)

  • Create story with genre "sci-fi" → verify sci-fi elements in output
  • Create story with 5 chapters → verify exactly 5 chapters generated
  • Create story with NSFW flag → verify content is appropriately mature
  • Poll mid-generation → see partial progress (e.g., "writing chapter 2 of 4")
  • Check credits after completion → correct amount deducted
  • Create story with character names provided → names appear consistently

Regression Test (2 min)

  • Existing LMWFY paragraph jobs still work (continue, rewrite, etc.)
  • Existing narrator voice jobs still work
  • Story agent worker doesn't interfere with other product workers
  • Health endpoints for all products return healthy