@siddhant-mohan
Created February 7, 2026 05:18

A Complete Guide to Building Mission Control: How We Built 20 AI Employees Working 24/7

We run Qodex with 20 AI employees working around the clock. They create content, optimize SEO, write code, verify deployments, and report back. They take tasks, complete them, and hand them off to each other.

This is not a vision deck. This is what happened yesterday.


What Happened Yesterday: A Day in the Life of Mission Control

Let me show you exactly what our AI team accomplished in a single 24-hour period.

At 2:00 AM, while everyone slept, our Growth Skill woke up. It connected to Google Search Console, Google Analytics, and our SERP tracking APIs. It pulled performance data across 2,400+ indexed pages. Then it ran an analysis.

The finding: 72 keywords where we had strong impressions (20K+) but CTR was bleeding. Average CTR across these keywords? 0.8%. The top competitors? 4-6%. The root cause was clear: our meta titles were not optimized for click-through.

The Growth Skill did not send an email asking for help. It created 72 micro-tasks in Notion. Each task had the keyword, the current title, the current CTR, the competitors' titles, and specific recommendations. Each was assigned to the SEO Skill with priority P2.

By 3:00 AM, the SEO Skill started picking up tasks. It analyzed Google's top 5 results for each keyword. It studied what made those titles clickable. It wrote optimized title and meta description recommendations and appended its output to each Notion task. Then it changed the skill assignment from "seo" to "blog" and reset the status to Not Started so the next skill would pick the task up.

By 6:00 AM, the Blog Skill was working through content updates. For tasks that needed copy changes, it revised headlines and intro paragraphs. It added its output to the Notion pages and assigned them to the Dev Skill.

By 10:00 AM, the Dev Skill picked up the first batch. It went to GitHub. It created pull requests with the SEO changes. Each PR was linked back to the Notion task. It tagged the human reviewer (Aditya) and added a comment: "PR ready for review."

By 2:00 PM, Aditya had reviewed and merged 28 PRs. The changes deployed automatically.

By 3:00 PM, our QA Skill detected the deployments. It visited each live URL. It verified the new meta titles were rendering correctly. It checked that no pages were broken. It submitted the updated pages to Google's Indexing API. Everything passed. Status: Verified.

But the QA Skill did one more thing: it created follow-up tasks scheduled for 4 weeks out. Those tasks will trigger the Growth Skill to check whether CTR improved on those specific keywords. If it improved, we learn what works. If it did not, we iterate again.

This is the continuous improvement loop running 24/7 without human intervention.

By the time the team woke up, 47 SEO improvements were live. The remaining 25 were queued for human review. Total human time required: 2 hours of PR review.

That is Mission Control.


[DIAGRAM PLACEHOLDER: Flow diagram showing Growth → SEO → Blog → Dev → Human Review → QA → Follow-up Loop]


Why We Built This: The Problem

Six months ago, we had AI assistants. They were good at individual tasks. But they worked in silos.

Our SEO agent did not know what the content agent wrote. Our content agent did not know what keywords the SEO agent researched. Our dev agent did not know the business context of either.

Everything required manual coordination. A human had to read the SEO output, then manually copy it into a prompt for the content agent. Then take that output and create a task for the dev agent. Then remember to check the deployment a week later.

We had AI-powered tools, but we still needed a human project manager orchestrating every handoff.

The real problem was not capability. It was coordination.


The Foundation: Understanding OpenClaw

Before we built Mission Control, we needed a platform to run agents on. We chose OpenClaw.

OpenClaw is an agent runtime that lets you run Claude-based agents with persistent memory, scheduled jobs, and tool access. Think of it as the operating system for our AI employees.

Key concepts:

Workspaces. Each product gets its own workspace. A workspace is a folder containing the agent's configuration, skills, memory, and drafts. Our Qodex workspace lives at /home/ubuntu/.openclaw/workspace-outx/.

Job Scheduling. OpenClaw runs cron jobs that wake up agents at configured intervals. Our main processing loop runs every 5 minutes. The agent checks Notion for tasks, picks one up, and executes it.
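The scheduling behavior above can be sketched as a due-job check that a cron-style runner calls on each tick. This is a minimal illustration; the job names and the function are our own, not OpenClaw's actual API:

```python
from datetime import datetime, timedelta

# Illustrative job table: name -> how often it should run.
# Intervals mirror the rhythms described in this guide.
JOBS = {
    "process-notion-tasks": timedelta(minutes=5),
    "heartbeat": timedelta(hours=4),
}

def due_jobs(last_run: dict, now: datetime) -> list:
    """Return the jobs whose interval has elapsed since their last run."""
    return [
        name for name, interval in JOBS.items()
        if now - last_run.get(name, datetime.min) >= interval
    ]
```

A runner that wakes up every minute, calls `due_jobs`, and executes whatever comes back gives you the same effect as per-job cron entries, with one place to see the whole schedule.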

Tool Access. Agents can read/write files, execute shell commands, make API calls, browse the web, and interact with external services. We gave our agents access to GitHub (via PAT), Google APIs (via service account), and Notion (via integration token).

Memory. Each agent has a memory folder for persistent context. This is where we store things the agent needs to remember across sessions: recent wins, ongoing experiments, lessons learned.


[DIAGRAM PLACEHOLDER: OpenClaw architecture showing Runtime → Workspaces → Jobs → Tools]


From One Agent to Ten: The Architecture Decision

When we started, the obvious approach was to build separate agents: SEO Agent, Content Agent, Social Agent, Dev Agent, QA Agent.

We built it. It was a mistake.

Each agent needed:

  • Its own product knowledge (what is Qodex? who are our competitors?)
  • Its own brand guidelines (what tone do we use? what phrases to avoid?)
  • Its own context about the current state of the business

We were duplicating context across 8 agents. When we updated the product messaging, we had to update 8 different prompts. When agents handed off work, context was lost because each agent had its own limited memory.

The realization: We were building a team of strangers who had to constantly re-introduce themselves.

The solution: One unified agent with multiple skills.

We now have a single agent (codename: "outx") that has 9 specialized skills. The agent is one person who knows how to do many jobs, rather than a team of specialists who do not talk to each other.

Each skill is a markdown file with detailed instructions for that specific capability. When a task comes in tagged with a skill (like "seo" or "blog"), the agent reads the relevant skill file and executes that capability.
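The tag-to-skill dispatch can be sketched as a file lookup. The directory layout matches the folder structure shown later in this guide; the function itself is our illustration, not OpenClaw code:

```python
from pathlib import Path

def load_skill(workspace: Path, skill: str) -> str:
    """Read the instruction file for the skill a task is tagged with."""
    skill_file = workspace / "skills" / skill / "SKILL.md"
    if not skill_file.exists():
        raise ValueError(f"Unknown skill tag: {skill!r}")
    return skill_file.read_text()
```

Because the skill is just a markdown file, updating a capability means editing one file, and an unknown tag fails loudly instead of silently running the wrong instructions.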

Why this works:

  1. Shared product knowledge. The agent always knows the full business context. No duplication.
  2. Consistent brand voice. One set of writing guidelines applies everywhere.
  3. Clean handoffs. When SEO hands off to Blog, the full context travels with the task.
  4. Efficient resources. One agent process instead of 10. Lower memory, lower cost.

[DIAGRAM PLACEHOLDER: Architecture showing Jarvis (manager) → Qodex Agent → 9 Skills branching out]


The Skill System: What Each Department Does

Here are the 9 skills in our current stack:

product-mgmt (The Brain). The orchestrator skill. Routes tasks, chains workflows, creates follow-up tasks. When Growth identifies an opportunity, product-mgmt decides whether it becomes a blog task, an SEO task, or a social post.

growth (The Analyst). Connects to Google Search Console and Google Analytics. Runs weekly performance analysis. Identifies opportunities: high-impression/low-CTR pages, declining traffic, keyword gaps. Creates micro-tasks in Notion for other skills.

seo (The Researcher and Optimizer). Two phases: research (before content) and optimize (after content). Research phase: keyword analysis, SERP study, content briefs. Optimize phase: meta titles, descriptions, schema markup.

blog (The Writer). Takes SEO research briefs and writes long-form content. 1,500-3,000 word articles optimized for search and conversion. Handles revisions based on human feedback.

social-media (The Amplifier). Creates LinkedIn posts, Twitter threads, Reddit content. Adapts messaging per platform. Includes visual recommendations and posting strategy.

newsletter (The Email Marketer). Writes email campaigns and drip sequences. Manages lead nurturing flows.

dev-outx-landing (The Engineer). Creates pull requests for blog files, SEO meta updates, landing pages. Never commits to main directly. Links PRs back to Notion tasks.

qa-testing (The Verifier). Checks deployments after merge. Validates SEO tags are rendering. Submits to Google Indexing API. Creates follow-up tasks to track results.

github-digest (The Reporter). Summarizes PR activity and commits. Keeps the team informed about code changes.


The Folder Structure: Where Everything Lives

workspace-outx/
├── SOUL.md          # Agent personality, values, communication style
├── AGENTS.md        # Team structure, who reports to whom
├── USER.md          # Human team context
├── TOOLS.md         # API credentials, Notion schema, workflow rules
├── PRODUCT.md       # Full product context, messaging, competitors
├── skills/
│   ├── product-mgmt/SKILL.md
│   ├── growth/SKILL.md
│   ├── seo/SKILL.md
│   ├── blog/SKILL.md
│   ├── social-media/SKILL.md
│   ├── newsletter/SKILL.md
│   ├── dev-outx-landing/SKILL.md
│   ├── qa-testing/SKILL.md
│   └── github-digest/SKILL.md
├── drafts/
│   ├── blog/        # Blog drafts before publishing
│   ├── social/      # Social content drafts
│   ├── newsletter/  # Email drafts
│   └── reports/     # Analysis reports
├── approved/        # Human-approved content
└── memory/          # Persistent context across sessions

The key files:

  • SOUL.md - Defines the agent's personality, values, and communication style. This is why our content has a consistent voice.
  • PRODUCT.md - Full product context. Every skill reads this to understand what we sell, who we sell to, and how we position ourselves.
  • TOOLS.md - API credentials and operational rules. Notion database schema, workflow sequences, human reviewer assignments.
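The shared-context idea behind these files can be sketched as a small loader that concatenates them before any skill runs. The file names come from the folder structure above; the function is a hypothetical illustration:

```python
from pathlib import Path

# Core files every skill shares, per the structure above.
CORE_FILES = ["SOUL.md", "PRODUCT.md", "TOOLS.md"]

def build_context(workspace: Path) -> str:
    """Concatenate the core context files so every skill sees the same
    product knowledge, voice, and operational rules."""
    parts = []
    for name in CORE_FILES:
        f = workspace / name
        if f.exists():  # tolerate a missing file rather than crash a run
            parts.append(f"## {name}\n{f.read_text()}")
    return "\n\n".join(parts)
```

This is the mechanism that makes "one agent, many skills" cheaper than many agents: the context lives in one place and is prepended to every run.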

The Tools: What the Agent Can Access

We gave our agent its own Google identity via a service account: jarvis@jarvis-486506.iam.gserviceaccount.com. This service account has read-only access to:

  • Google Search Console - Query data, indexed pages, performance metrics
  • Google Analytics - Traffic data, user behavior, conversion funnels
  • Google Postmaster - Email deliverability metrics
  • PageSpeed Insights - Performance scores for any URL

We also gave the agent:

  • GitHub PAT - Create branches, commits, and pull requests
  • Notion API token - Read/write tasks, append content, add comments
  • Brave Search API - Research competitors and analyze SERPs

The agent cannot access payment systems, delete production data, or merge code. Those guardrails are non-negotiable.
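One way to sketch those guardrails is an explicit allow/deny check where deny wins and anything unlisted is refused. The action names here are illustrative, not our actual tool registry:

```python
# Hypothetical tool-action guardrails. Deny is checked first, and
# anything not explicitly allowed is refused by default.
ALLOWED = {"github.create_pr", "notion.append", "gsc.query", "search.web"}
DENIED_PREFIXES = ("payments.", "prod.delete", "github.merge")

def is_permitted(action: str) -> bool:
    """Deny beats allow; unknown actions are refused."""
    if action.startswith(DENIED_PREFIXES):
        return False
    return action in ALLOWED
```

Default-deny matters: a new integration should require a deliberate addition to the allowlist, not slip through because nobody thought to block it.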


Notion as the Assembly Line: How Tasks Flow

Every task lives in a Notion database. The schema:

| Property | Purpose |
|----------|---------|
| Projects | Task title |
| Status | Not Started → In Progress → Completed → Deployed → Verified |
| Priority | P1 (urgent) through P5 (whenever) |
| Skill | Which skill should work on this |
| Assigned to | Current owner (human or agent) |
| Due Date | When it needs to be done |

The workflow:

  1. Cron job runs every 5 minutes
  2. Agent queries Notion: Status = "Not Started" AND Skill is set
  3. Picks highest priority task (P1 first, then by due date)
  4. Sets Status → "In Progress"
  5. Reads the page content (including output from previous skills)
  6. Reads the relevant skill file
  7. Executes the work
  8. Appends output to the page wrapped in XML tags:
    <skill-output skill="seo" phase="research" timestamp="2026-02-07T05:00:00Z">
    [Structured output here]
    </skill-output>
  9. Adds comment with timestamp (audit trail)
  10. Either moves to next skill OR marks complete and assigns to human

The magic: When Blog runs after SEO, it reads the <skill-output skill="seo"> block from the page. Context flows through the pipeline via the Notion page itself. No external state management needed.
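The pickup and handoff steps above can be sketched in two small functions: one that selects the next task, one that pulls earlier skill output out of the page body. The task shape is our assumption; the `<skill-output>` convention follows the format shown in step 8:

```python
import re
from datetime import date

def pick_next(tasks: list):
    """Pick the open task with the highest priority (P1 beats P2),
    breaking ties by earliest due date."""
    open_tasks = [
        t for t in tasks
        if t["status"] == "Not Started" and t.get("skill")
    ]
    if not open_tasks:
        return None
    return min(open_tasks, key=lambda t: (t["priority"], t["due"]))

def prior_outputs(page_body: str, skill: str) -> list:
    """Extract earlier <skill-output> blocks for a skill, so context
    flows down the pipeline via the Notion page itself."""
    pattern = rf'<skill-output skill="{skill}"[^>]*>(.*?)</skill-output>'
    return [m.strip() for m in re.findall(pattern, page_body, re.DOTALL)]
```

Sorting by the `"P1"`..`"P5"` strings works because they compare lexicographically; a real implementation might normalize them to integers first.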


[DIAGRAM PLACEHOLDER: Task flow showing Status transitions and skill handoffs]


The Feedback Loop: @outx-pm

Here is something most AI agent systems miss: the ability for humans to course-correct mid-stream.

In Notion, any human can comment @outx-pm [feedback] on any task. The next time the agent runs, it:

  1. Scans completed tasks for unprocessed @outx-pm mentions
  2. Reads the feedback
  3. Processes the revision
  4. Appends updated output
  5. Replies with acknowledgment: "Feedback acknowledged. Here's what I changed..."

This is not "start over from scratch." The agent reads what it already produced, understands the feedback, and makes targeted revisions. Just like a human employee responding to review comments.

Example:

  • Human: "@outx-pm the intro is too long, make it punchier"
  • Agent reads existing draft, rewrites intro to 3 sentences, appends revised version, replies explaining the change

The feedback loop means humans stay in control without micromanaging. Write your feedback, tag the agent, move on. The agent handles it.
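The scan in step 1 can be sketched as a filter over a task's comments. The comment shape (a `text` field plus an `acknowledged` flag) is our assumption about how processed mentions are tracked:

```python
MENTION = "@outx-pm"

def unprocessed_feedback(comments: list) -> list:
    """Return feedback text from mentions the agent has not yet replied to."""
    return [
        c["text"].replace(MENTION, "").strip()
        for c in comments
        if MENTION in c["text"] and not c.get("acknowledged")
    ]
```

Tracking an acknowledged flag (or the agent's own reply) is what prevents the same feedback from being reprocessed on every 5-minute run.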


The 90/10 Split: Agents Work, Humans Verify

Look at our Notion task tracker on any given day. You will see:

  • 90% of task assignments are agent-to-agent (SEO → Blog → Dev → QA)
  • 10% of task assignments are agent-to-human (final review, deployment approval)

Humans are not doing the work. Humans are approving the work.

This is the key insight: AI agents should handle execution. Humans should handle judgment.

When a blog post is written, the agent does not ask permission to write it. It writes it, then asks for review. When SEO meta tags are optimized, the agent does not schedule a meeting. It creates a PR and assigns a reviewer.

The audit trail is complete. Every task has timestamped comments showing exactly what happened, when, and by which skill. If something goes wrong, you can trace exactly where it broke down.


Continuous Improvement: The SEO Loop

SEO is not a one-time fix. It is a continuous loop of improve → measure → iterate.

Here is how Mission Control handles this:

Week 0: Growth Skill identifies a page with declining CTR. Creates task.

Week 0-1: SEO → Blog → Dev → Human Review → Deploy → QA Verify

Week 4: The follow-up task the QA Skill created, dated 4 weeks out, now triggers.

Week 4 check: Growth Skill pulls fresh GSC data for that specific URL. Did CTR improve? Did impressions change? Did position move?

If improved: Task is marked Verified with a win note. We learn what worked.

If not improved: Task is marked Need Fix. Goes back into the SEO queue with learnings attached. We iterate.

This loop runs continuously. We are not just shipping fixes. We are measuring outcomes and feeding them back into the system.
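The measure-and-iterate step can be sketched in two lines of logic: schedule the check 4 weeks after deploy, then compare fresh metrics against the baseline. The function names and the simple greater-than test are illustrative:

```python
from datetime import date, timedelta

def followup_due(deployed_on: date) -> date:
    """Schedule the outcome check 4 weeks after deployment."""
    return deployed_on + timedelta(weeks=4)

def verdict(baseline_ctr: float, current_ctr: float) -> str:
    """'Verified' if CTR improved, otherwise back to the SEO queue."""
    return "Verified" if current_ctr > baseline_ctr else "Need Fix"
```

A production version would probably demand a meaningful improvement threshold rather than any uptick, so noise in GSC data does not get recorded as a win.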


Heartbeats and Daily Standups

The agent does not just process tasks. It has operational rhythms.

Heartbeat (every 4 hours): The agent checks its own health. Are all API connections working? Are there blocked tasks? Any errors in recent runs? If something is wrong, it posts to the team channel.

Daily Standup (8 AM): The agent summarizes what it did in the last 24 hours. Tasks completed, PRs created, content published, issues encountered. Posted to the team channel so humans can glance at progress.

Weekly Growth Report (Monday 6 AM): Deep analysis of SEO performance, traffic trends, top opportunities. Creates the week's priority tasks.

These rhythms keep the agent visible without being noisy.
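The heartbeat's "visible without being noisy" rule can be sketched as a check aggregator that only speaks up on failure. The check names and message format are our own:

```python
def heartbeat(checks: dict):
    """Given {check_name: passed}, return an alert message if anything
    failed, or None to stay quiet."""
    failures = [name for name, ok in checks.items() if not ok]
    if not failures:
        return None
    return "Heartbeat alert: failing checks: " + ", ".join(sorted(failures))
```

Returning None on success is deliberate: an all-clear message every 4 hours trains humans to ignore the channel, which defeats the point of the alert.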


The Team Structure

Jarvis (Manager Agent)
    └── Qodex Agent (Executor)
            ├── product-mgmt
            ├── growth
            ├── seo
            ├── blog
            ├── social-media
            ├── newsletter
            ├── dev-outx-landing
            ├── qa-testing
            └── github-digest

Jarvis is the manager agent. It handles cross-team coordination, escalations, and high-level decisions. When something is ambiguous, it asks a human.

Qodex Agent is the executor. It picks up tasks from Notion, runs the appropriate skill, and delivers output. Most of the daily work happens here.

Skills are the specialized capabilities. Each skill has its own detailed instructions but shares the agent's core context (product knowledge, brand voice, business goals).


Lessons Learned

Start with one skill, expand slowly. We launched with just SEO. Added blog after SEO was working well. Expanded to social after blog was stable. Rushing to "build all the agents" creates more chaos than value.

Notion is underrated. We evaluated LangChain, CrewAI, and custom orchestration. Notion beat them all for our use case. Tasks, comments, handoffs, audit trail, human access. All in one tool everyone already knows.

Audit trails are non-negotiable. Every task should show exactly what happened. When an SEO fix does not work, you need to trace back: what did the agent see? What did it recommend? What actually changed?

Humans should approve, not do. The fastest path to agent ROI is letting agents execute while humans review. If humans are still writing first drafts, you are using AI wrong.

Continuous loops beat one-time fixes. The follow-up task mechanism is the most valuable thing we built. Without it, we ship improvements and forget about them. With it, we learn what actually works.


What Is Next

Mission Control is not finished. It is evolving.

We are adding:

  • Cross-product coordination (agents working across Qodex, Qodex Docs, Qodex Mentions)
  • Smarter prioritization (using past performance data to rank tasks)
  • Proactive opportunity detection (agent suggesting tasks, not just executing them)

The goal is simple: every hour a human spends on repeatable work is an hour stolen from judgment work. Mission Control gives that time back.


[DIAGRAM PLACEHOLDER: Full system architecture with all components]


Want to build your own Mission Control? Start with one skill. One cron job. One Notion database. Prove it works, then expand. The architecture can scale, but the foundation has to be solid.

That is how you build 20 AI employees working 24/7.
