@hamelsmu · Created February 8, 2026 23:16
Competitive Analysis: Aishwarya Reganti's Free AI Evals Course vs Hamel & Shreya's Maven Course

Executive Summary

Overall Assessment: Highly Compatible, No Major Contradictions

The courses are complementary rather than competitive: they serve different market segments with aligned principles but differ in depth and breadth.


Key Findings by Video

Videos 1-3: Foundations (Strong Alignment)

  • Model vs Product Evaluation: Both draw the same distinction, using different terminology
  • Benchmark Illusion: Aishwarya's concept matches your "illusory benefit of generic metrics"
  • Positioning: Both target practitioners building AI products

Videos 4-6: Frameworks & Methods (Perfect Alignment)

  • Input-Expected-Actual framework maps to your reference-based vs. reference-free metric distinction
  • Three evaluation methods (Human, Code, LLM Judge) match your Chapter 5 hierarchy exactly
  • Calibration principle: Both emphasize that LLM judges must be validated against human judgment (see the sketch below)
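
To make the calibration bullet concrete, here is a minimal sketch of validating an LLM judge against human ground truth, assuming binary pass/fail verdicts; the data, labels, and function name are illustrative, not taken from either course.

```python
# Minimal judge-calibration sketch: compare an LLM judge's pass/fail verdicts
# against human labels on the same traces. Illustrative data, not course code.

def judge_calibration(human: list[bool], judge: list[bool]) -> dict[str, float]:
    """TPR: share of human-passed traces the judge also passed.
    TNR: share of human-failed traces the judge also failed."""
    pos = [j for h, j in zip(human, judge) if h]      # judge verdicts where humans said pass
    neg = [j for h, j in zip(human, judge) if not h]  # judge verdicts where humans said fail
    return {
        "tpr": sum(pos) / len(pos),
        "tnr": sum(1 for j in neg if not j) / len(neg),
    }

print(judge_calibration(
    human=[True, True, True, False, False],
    judge=[True, True, False, False, True],
))  # -> tpr ≈ 0.67, tnr = 0.5
```

A judge with low TPR or TNR against even a few dozen human-labeled traces should not be trusted as an automated metric, which is the point both courses make.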

Videos 7-9: Production & Lifecycle (Aligned Concepts)

  • Flywheel concept: Same continuous improvement cycle, identical terminology
  • Guardrails definition: Both define guardrails as synchronous, real-time safety checks (sketched below)
  • Lifecycle frameworks: Video 9's 7-step checklist complements your Analyze-Measure-Improve cycle
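
Since both courses lean on the same guardrail definition, a minimal sketch of what "synchronous, real-time" means in code may help; the pattern list and function names are illustrative assumptions, not from either course.

```python
# Guardrail sketch: a check that runs inline, before the model's output reaches
# the user -- unlike evals, which run after the fact on logged traces.
import re

BLOCKED = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # e.g. SSN-shaped strings

def guarded(raw_output: str) -> str:
    """Return the model output only if every guardrail check passes."""
    if any(p.search(raw_output) for p in BLOCKED):
        return "Sorry, I can't share that."  # block synchronously
    return raw_output
```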

Video 10: Misconceptions (Strong Overlap)

Your course addresses 4 of 5 misconceptions:

  • ✅ Don't rely solely on benchmarks
  • ✅ Evaluation is continuous/cross-functional
  • ✅ Both simple metrics AND LLM judges have roles
  • ✅ Context-specific evaluation required
  • ⚠️ Gap: Offline evals vs A/B testing relationship not explicitly covered in your materials

Videos 11-13: Phoenix Tutorials (Tool-Specific Implementation)

  • No contradictions: Phoenix implements your tool-agnostic principles
  • Complementary: Her tutorials cover the "how to implement" side of what your course teaches as "why it matters"
  • Risk: Students might conflate Phoenix features with fundamental methodology

Notable Differences (Not Contradictions)

1. Dataset Sizing Discrepancy

  • Aishwarya: Start with ~50 examples
  • Your course: Start with ~100 traces
  • Resolution: Both are rough heuristics; your course uses different sizes for different contexts (20-100 traces depending on the task)

2. Depth vs Accessibility

  • Aishwarya: Accessible overview with practical frameworks (10-min videos)
  • Your course: Comprehensive methodology with statistical rigor (150-page reader, 4 weeks)

3. Tool Philosophy

  • Aishwarya: Phoenix-centric hands-on tutorials
  • Your course: Tool-agnostic principles, explicitly states "if new LLM comes out tomorrow, methodology still applies"

4. Error Analysis Methodology

  • Aishwarya: Doesn't cover open or axial coding in any depth
  • Your course: Detailed grounded theory methodology (Chapter 3)

Competitive Positioning Analysis

Your Course's Unique Strengths:

  1. Three Gulfs Model - Unique conceptual framework
  2. Open/Axial Coding - Rigorous error analysis methodology
  3. Statistical Validation - TPR/TNR, bootstrapping, confidence intervals (see the bootstrap sketch after this list)
  4. Collaborative Practices - Inter-annotator agreement (IAA), alignment sessions
  5. Architecture-Specific Coverage - RAG (Chapter 7), multi-turn (Chapter 6), agentic systems (Chapter 8)
  6. Custom Tooling Emphasis - Chapter 10 on review interfaces
  7. Cost Optimization - Dedicated Chapter 11
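
Item 3 is the clearest technical differentiator, so a minimal sketch of the bootstrap piece may help: a percentile confidence interval for a judge's agreement rate with human labels. The sample data and names are illustrative assumptions, not course code.

```python
# Percentile-bootstrap CI for a mean agreement rate. Illustrative sketch only.
import random

def bootstrap_ci(scores: list[float], n: int = 10_000, alpha: float = 0.05):
    """Resample with replacement, take the mean each time, read off percentiles."""
    means = sorted(
        sum(random.choices(scores, k=len(scores))) / len(scores)
        for _ in range(n)
    )
    return means[int(n * alpha / 2)], means[int(n * (1 - alpha / 2))]

agreement = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]  # 1 = judge matched the human label
print(bootstrap_ci(agreement))  # wide interval, roughly (0.5, 1.0), on 10 traces
```

The wide interval on a tiny sample is the point: it shows why the course pairs metrics with confidence intervals rather than reporting a single agreement number.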

Aishwarya's Course Strengths:

  1. Lower price point - Free videos vs your $5,000 (a major accessibility advantage)
  2. Shorter commitment - 13 videos vs 4-week intensive
  3. Practical prioritization - 2x2 matrices for metric selection
  4. Phoenix integration - Hands-on platform tutorials
  5. OpenAI association - Kiriti Badam's credibility
  6. Problem-first branding - Clear positioning toward frustrated buyers

Market Positioning

Aishwarya's Course:

  • Entry-level/introductory - "AI Evals for Everyone"
  • Frustrated buyer segment - "stuck with noisy, expensive, useless evals"
  • Lead generation funnel - Free → $2,500-$3,000 paid courses
  • Phoenix ecosystem - Tool-specific implementation

Your Course:

  • Professional/comprehensive - Deep technical training
  • Serious practitioners - Moving beyond POCs to production
  • Premium positioning - $5,000 signals depth and rigor
  • Tool-agnostic - Methodology over platform features

Relationship: Sequential market segments rather than direct competition. Her free course could actually funnel students who want deeper methodology to yours.


Recommendations

For Your Course Materials:

  1. Add FAQ on dataset sizing - Address 50 vs 100 discrepancy
  2. Consider Phoenix comparison - Students will encounter it; explain how it implements your principles
  3. Add A/B testing section - Only gap where Aishwarya covers something you don't
  4. Reference complementary resources - Could acknowledge her free intro as supplementary

For Marketing:

  1. Emphasize depth differentiators - Your statistical rigor, error analysis methodology, custom tooling
  2. Position as "advanced implementation" - For teams beyond basic tutorials
  3. Highlight 150-page reader - Major material differentiator
  4. Feature enterprise credentials - 2,000+ students, 35+ implementations

No Defensive Actions Needed:

Her course validates demand for evals education and shares your core principles. There is no contradictory advice that would confuse students. The $2,500 price difference and the Phoenix-specific focus create clear market segmentation.


Bottom Line

Zero contradictions, high complementarity, different market segments. Students who take both courses would receive reinforcing messages, with Aishwarya providing the accessible entry point and you providing comprehensive mastery. Your course's depth, statistical rigor, and tool-agnostic methodology remain strong differentiators worth the premium pricing.


Detailed Analysis: Aishwarya Reganti - Background, Lead Magnets & Marketing

Professional Background & What She's Known For

Education & Academic Credentials

  • Carnegie Mellon University - Language Technologies Institute (Class of 2019)
  • Research affiliations with University of Michigan, MIT, and Oxford
  • 1,002+ citations on Google Scholar
  • 35+ research papers published at top-tier conferences (NeurIPS, AAAI, CVPR, EMNLP)

Research Expertise

Primary research areas:

  1. Social media aggression detection - Trolling, cyberbullying, flaming, hate speech
  2. Fake news detection & fact verification - Created Factify5WQA and Factify 2 shared tasks
  3. Meme interpretation - Multimodal ML approaches
  4. Artificial Social Intelligence - How AI and social media merge to create misinformation risks

Career Trajectory

  • Current: Founder & CEO, LevelUp Labs (AI advisory & implementation startup)
  • Previous: Tech Lead & Forward-Deployed AI Engineer at AWS Generative AI Innovation Center
    • Led and implemented 30+ AI solutions for AWS clients
  • Clients: Hitachi Digital, Deloitte, multiple Fortune 500 companies
  • Claim: "50+ enterprise implementations" across OpenAI, Google, Amazon, Databricks

Teaching & Speaking

  • Taught professional courses at MIT and Oxford
  • TEDx Jacksonville speaker (March 2025) - "Social media and the age of AI misinformation"
  • 95,000+ LinkedIn followers (recognized as "one of the most prominent voices in enterprise AI")

Lead Magnets & Free Content Strategy

1. Free 10-Day Email Course: "Agentic AI For Everyone"

Link: https://problem-first-ai.kit.com/84cdfacd2b

What it includes:

  • 10-day email course with daily lessons
  • 4 live sessions included
  • Topics: AI agents, workflows vs agents, RAG, MCP (Model Context Protocol), multi-agent systems
  • Purpose: Primary lead generation funnel for paid Maven courses

2. GitHub Repository: "awesome-generative-ai-guide"

Link: https://github.com/aishwaryanr/awesome-generative-ai-guide

Performance:

  • 24,200+ stars (massive engagement)
  • 5,200+ forks
  • Updated regularly

What it includes:

  • Monthly GenAI paper summaries
  • 60+ GenAI interview questions
  • 30+ curated free GenAI courses
  • Code notebooks for generative AI applications

3. Year-End AI Resources Handbook (Maven)

What it includes:

  • 90+ hand-curated FREE resources
  • 5 structured learning paths (fundamentals to production)
  • 60+ project ideas categorized by difficulty

4. Maven Lightning Lessons (Free Live Sessions)

  • "Don't Build AI Products Like Traditional Software"
  • "Why AI Agents Aren't Enough for Real-World Applications"
  • "Build your AI Moat as Software Developer in 2026"

5. O'Reilly Article: "Evals Are NOT All You Need"

Published: January 2026

Purpose:

  • Thought leadership positioning
  • Establishes the "evals aren't enough" narrative
  • Creates demand for their "improvement flywheel" solution

6. YouTube Content

  • "Machine Learning: Teach by Doing" series - 37 videos
  • 100,000+ views on the playlist
  • Free AI Evals course with 13 videos

LinkedIn Content Strategy

Posting Frequency & Approach

  • Posts #genai content daily on LinkedIn
  • Simplifies complex generative AI research into digestible insights
  • Focuses on providing free, valuable educational resources

Content Types:

  1. Resource sharing - GitHub repository updates, free courses
  2. Course promotions - Maven course launches
  3. Research insights - Latest AI papers
  4. Thought leadership - Industry trends, best practices

Go-to-Market Funnel Analysis

Top of Funnel (Awareness):

  1. Daily LinkedIn posts (95K followers) → Free value
  2. GitHub repository (24.2K stars) → Developer community
  3. O'Reilly article → Thought leadership
  4. TEDx talk → Credibility
  5. YouTube videos → 100K+ views

Middle of Funnel (Consideration):

  1. Free 10-day email course → Email capture, 4 live sessions
  2. Year-End AI Resources Handbook → Gated resource
  3. Free resource bundles → Demonstrates expertise
  4. Lenny's Newsletter feature → Third-party validation

Bottom of Funnel (Conversion):

  1. $3,000 flagship course (Building Agentic AI) → 1,500+ students, 4.9/5 rating
  2. $2,500 evals course (Beyond Evals) → New launch
  3. LevelUp Labs consulting → Enterprise services

Maven Evals Courses Comparison

Complete List of Maven Courses (15 Total)

Pure Evals Courses (6 courses)

  1. AI Evals For Engineers & PMs - Hamel Husain & Shreya Shankar
     • Price: $5,000 | Next: March 16, 2026 | Rating: 4.8/5
  2. AI Evals for PMs Certification - Marily Nika et al.
     • Price: $999 | Next: March 2, 2026 | Rating: 4.6/5
  3. AI Evals and Analytics Playbook - Stella Liu & Amy Chen
     • Price: $2,250 | Next: February 21, 2026 | Rating: 5.0/5
  4. AI Evals for Product Development - Shane Butler
     • Price: $1,500 | Next: April 6, 2026
  5. Beyond Evals: Designing Improvement Flywheels - Aishwarya Reganti & Kiriti Badam
     • Price: $2,500 | Next: March 14, 2026
  6. Building AI Applications for Data Scientists and Software Engineers - Hugo Bowne-Anderson & Stefan Krawczyk
     • Price: $2,100 | Next: March 10, 2026 | Rating: 4.7/5

Key Competitive Threats

High Threat:

  1. Content machine - 95K LinkedIn followers, daily posts
  2. Free email course - Strong lead gen (10 days + 4 live sessions)
  3. GitHub authority - 24K stars demonstrates massive reach
  4. Lower pricing - $2,500 vs your $5,000
  5. OpenAI association - Kiriti's role carries weight

Medium Threat:

  1. Multiple lead magnets - More entry points
  2. O'Reilly thought leadership - Industry voice
  3. Lenny's Newsletter feature - Same platform as your course
  4. Enterprise client logos - F500 social proof

Low Threat:

  1. Less comprehensive - 3 weeks vs your 4 weeks
  2. No course reviews yet - New course vs your 4.8/5
  3. Lighter on methodology - Less error analysis depth
  4. No 150-page reader equivalent
