Hello team! I've set up our internal QA testing program. Please read this carefully and follow the instructions.
- Accept GitHub invitation (check your email)
- Visit TestFlight repo: https://github.com/unclecode/crawl4ai-cloud-testflight
- Read the full README - it has 206+ test cases organized by category
- Test credentials will be shared separately via WhatsApp
Test the dashboard at: https://api.crawl4ai.com/dashboard
- Email/password registration
- Email/password login
- OAuth login (Google, GitHub)
- Password reset / Forgot password
- Magic link login
- Linking/unlinking social accounts
- Account settings management
- Create API keys
- View API keys (masked key hint display)
- Delete API keys
- Copy API keys
- View pricing plans (Free, Plus, Pro)
- Stripe checkout flow
- Subscription management
- Plan upgrades/downgrades
- Cancellation flow
- Reactivation after cancellation
- Test Credit Card: 4242 4242 4242 4242 (expiry: any future date; CVC: any 3 digits)
- Daily crawl quota display
- Storage usage tracking
- LLM token usage (if using managed LLM)
- Current limits display (Free vs Paid)
- Session usage tracking
- Quick Start panel (code examples)
- Sessions page
- Crawl history page
- Storage management
- Settings page
- LLM Providers management (BYOK)
- Responsive design (mobile/tablet)
This is our main priority - test thoroughly!
- All API calls require the X-API-Key header - create your API keys from the dashboard first
1. Sync Crawl (may not be in final release)
POST /v1/crawl
- Single URL
- Synchronous response (waits for result)
- Max timeout based on plan
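To get started quickly, here is a minimal Python sketch of a sync crawl call using the third-party requests library. The X-API-Key header comes from this guide; the JSON body field name ("url") and the response being JSON are assumptions - the README has the authoritative schema.

```python
import requests

API_BASE = "https://api.crawl4ai.com"

def build_sync_request(url: str, api_key: str) -> dict:
    """Assemble kwargs for requests.post(); the body field name is assumed."""
    return {
        "url": f"{API_BASE}/v1/crawl",
        "headers": {"X-API-Key": api_key},
        "json": {"url": url},  # assumed field name -- verify in the README
    }

def sync_crawl(url: str, api_key: str, timeout: int = 60) -> dict:
    """Single-URL crawl; blocks until the server responds (60s cap on Free)."""
    resp = requests.post(timeout=timeout, **build_sync_request(url, api_key))
    resp.raise_for_status()  # surfaces 401/429/quota errors as exceptions
    return resp.json()
```

A good first negative test: call sync_crawl with a bogus key and confirm raise_for_status() surfaces a 401.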
2. Batch Crawl
POST /v1/crawl/batch
- Multiple URLs (max 10)
- Synchronous response
- Parallel processing
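For the batch endpoint, a similar sketch with a client-side guard on the documented 10-URL cap (the "urls" body field name is again an assumption - check the README):

```python
import requests

API_BASE = "https://api.crawl4ai.com"
MAX_BATCH_URLS = 10  # documented cap for the batch endpoint

def build_batch_payload(urls: list[str]) -> dict:
    """Build the request body; fail fast if the batch is over the cap."""
    if not urls or len(urls) > MAX_BATCH_URLS:
        raise ValueError(f"batch crawl takes 1-{MAX_BATCH_URLS} URLs, got {len(urls)}")
    return {"urls": urls}  # assumed field name -- verify in the README

def batch_crawl(urls: list[str], api_key: str) -> dict:
    resp = requests.post(
        f"{API_BASE}/v1/crawl/batch",
        headers={"X-API-Key": api_key},
        json=build_batch_payload(urls),
    )
    resp.raise_for_status()
    return resp.json()
```

Submitting 11 URLs is itself a worthwhile test - the API should reject it cleanly, not time out.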
3. Async Crawl (MOST IMPORTANT)
POST /v1/crawl/async
- Multiple URLs (up to 100)
- Returns job ID
- Poll for status/results
- Production-ready
4. Job Management
GET /v1/crawl/jobs/{job_id} # Get job status
DELETE /v1/crawl/jobs/{job_id} # Cancel job
GET /v1/crawl/jobs # List your jobs
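The async submit-then-poll workflow from points 3 and 4 can be sketched together like this. The job_id response key and the terminal status names are assumptions - confirm them against the README before relying on this loop.

```python
import time
import requests

API_BASE = "https://api.crawl4ai.com"
HEADERS = {"X-API-Key": "YOUR_API_KEY"}  # replace with a key from the dashboard
TERMINAL_STATES = {"completed", "failed", "cancelled"}  # assumed status names

def is_done(status: str) -> bool:
    """True once a job has reached a terminal state."""
    return status.lower() in TERMINAL_STATES

def submit_async(urls: list[str]) -> str:
    """Submit up to 100 URLs; returns the job ID for polling."""
    resp = requests.post(f"{API_BASE}/v1/crawl/async", headers=HEADERS,
                         json={"urls": urls})  # assumed field name
    resp.raise_for_status()
    return resp.json()["job_id"]  # assumed response key

def wait_for_job(job_id: str, interval: float = 2.0, max_wait: float = 300.0) -> dict:
    """Poll job status until it finishes or max_wait elapses."""
    deadline = time.monotonic() + max_wait
    while time.monotonic() < deadline:
        resp = requests.get(f"{API_BASE}/v1/crawl/jobs/{job_id}", headers=HEADERS)
        resp.raise_for_status()
        job = resp.json()
        if is_done(job.get("status", "")):
            return job
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {max_wait}s")
```

Cancellation is the same pattern with requests.delete on the job URL; try cancelling mid-run and confirm the status flips.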
5. Sessions API (CDP Browser Sessions)
POST /v1/sessions # Create session
GET /v1/sessions/{id} # Session status
DELETE /v1/sessions/{id} # Close session
WS /v1/cdp/{session_id} # WebSocket proxy
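A hedged sketch of the session lifecycle; the wss scheme for the CDP proxy and the response keys are assumptions:

```python
import requests

API_BASE = "https://api.crawl4ai.com"
HEADERS = {"X-API-Key": "YOUR_API_KEY"}  # replace with a key from the dashboard

def cdp_ws_url(session_id: str) -> str:
    """WebSocket proxy URL for a live session (wss scheme is an assumption)."""
    return f"wss://api.crawl4ai.com/v1/cdp/{session_id}"

def create_session() -> dict:
    """Open a CDP browser session; response should include the session id."""
    resp = requests.post(f"{API_BASE}/v1/sessions", headers=HEADERS)
    resp.raise_for_status()
    return resp.json()  # session id key name unverified -- check the README

def close_session(session_id: str) -> None:
    """Close the session and free its resources."""
    resp = requests.delete(f"{API_BASE}/v1/sessions/{session_id}", headers=HEADERS)
    resp.raise_for_status()
```

After close_session, re-check the dashboard Sessions page to verify cleanup actually happened.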
6. Storage API
GET /v1/crawl/storage # Storage usage
GET /v1/crawl/jobs/{id}/download # Download results (S3)
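Downloading results can be sketched as a streamed GET; whether the endpoint redirects to a presigned S3 URL is an assumption (requests follows redirects by default):

```python
import requests

API_BASE = "https://api.crawl4ai.com"
HEADERS = {"X-API-Key": "YOUR_API_KEY"}  # replace with a key from the dashboard

def download_path(job_id: str) -> str:
    """Relative download path for a finished job's results."""
    return f"/v1/crawl/jobs/{job_id}/download"

def download_results(job_id: str, dest: str) -> None:
    """Stream the result archive to disk without loading it into memory."""
    with requests.get(API_BASE + download_path(job_id), headers=HEADERS,
                      stream=True) as resp:
        resp.raise_for_status()
        with open(dest, "wb") as fh:
            for chunk in resp.iter_content(chunk_size=65536):
                fh.write(chunk)
```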
Basic Crawling:
- Simple HTML extraction
- Markdown conversion
- Screenshot capture
- Custom headers/cookies
- JavaScript rendering
- Page timeout configuration
- User agent customization
Extraction Strategies:
LLM Extraction (AI-powered)
- Managed service (use our API keys)
- BYOK (bring your own key)
- Different providers:
- OpenAI (GPT-4o, GPT-4o-mini)
- Anthropic (Claude Haiku, Claude Sonnet)
- Groq (Llama models)
- Google (Gemini)
- Deepseek
- Schema-based extraction (JSON schema)
- Custom instructions
- Token usage tracking
JSON CSS Extraction (selector-based, no LLM cost)
- CSS selector patterns
- Multiple field extraction
- Nested selectors
- Array extraction
JSON XPath Extraction
- XPath expressions
- Complex DOM queries
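For the selector-based strategies, a hypothetical schema might look like this. The field names (baseSelector, fields, etc.) follow the open-source crawl4ai library's JSON-CSS schema, but verify them against the cloud README; XPath extraction should be the same shape with XPath expressions in place of CSS selectors.

```python
# Example schema: one extracted item per matching div.product-card
PRODUCT_SCHEMA = {
    "name": "products",
    "baseSelector": "div.product-card",
    "fields": [
        {"name": "title", "selector": "h2.title", "type": "text"},
        {"name": "price", "selector": "span.price", "type": "text"},
        {"name": "link", "selector": "a", "type": "attribute", "attribute": "href"},
    ],
}

def css_extraction_payload(url: str) -> dict:
    """Build a hypothetical JSON-CSS extraction request body (names assumed)."""
    return {
        "url": url,
        "extraction_strategy": {"type": "json_css", "schema": PRODUCT_SCHEMA},
    }
```

These runs cost no LLM tokens, so they are a cheap way to hammer nested selectors and array extraction.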
Rate Limits & Quotas:
Free Plan:
- 1 concurrent request
- 50 daily crawls
- 10 requests/minute
- 60s timeout
- 100MB storage
- No LLM managed service (BYOK only)
Paid Plan (Plus):
- 5 concurrent requests
- 500 daily crawls
- Higher rate limits
- 300s timeout
- 1GB storage
- LLM managed service (100K tokens/month)
Paid Plan (Pro):
- 20 concurrent requests
- 5000 daily crawls
- Highest rate limits
- 600s timeout
- 10GB storage
- LLM managed service (1M tokens/month)
Error Handling:
- Rate limit errors (429)
- Authentication errors (401)
- Timeout errors
- Invalid URL errors
- Quota exceeded errors
- Invalid extraction strategy errors
- Session creation failures
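When deliberately triggering 429s, a generic retry-with-backoff wrapper makes the rate-limit behavior easy to observe. This is a testing aid, not the SDK's built-in behavior; only the Retry-After header is standard HTTP.

```python
import time
import requests

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff: 1, 2, 4, ... seconds, capped at `cap`."""
    return min(cap, base * (2 ** attempt))

def request_with_retry(method: str, url: str, max_attempts: int = 5, **kwargs):
    """Retry only on 429; other errors (401, quota, invalid URL) return as-is."""
    resp = None
    for attempt in range(max_attempts):
        resp = requests.request(method, url, **kwargs)
        if resp.status_code != 429:
            return resp
        # Honor Retry-After when the server sends it; otherwise back off.
        delay = float(resp.headers.get("Retry-After", backoff_delay(attempt)))
        time.sleep(delay)
    return resp  # still 429 after max_attempts -- worth noting in your report
```

Check whether the API actually sends Retry-After on 429s; if it doesn't, that's useful feedback too.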
I'll share these SDKs separately:
- Python SDK - Install and test
- Node.js SDK - Install and test
- cURL examples - In the README
Test all three methods!
When you find a bug (and you will!):
- Use Chrome browser ONLY (for consistency)
- Open DevTools (Press F12)
- Switch to Console tab
- Take screenshot showing:
- The bug on the page
- DevTools Console (to capture errors)
- You can take ONE screenshot with both, or TWO separate screenshots
- Go to Issues tab in TestFlight repo
- Click "New Issue" → "Bug Report"
- Fill in all required fields:
- Test Case ID (if applicable)
- Severity (Critical/High/Medium/Low)
- Account Type (Free/Paid)
- Steps to Reproduce (clear, numbered steps)
- Expected Behavior
- Actual Behavior
- Screenshots (REQUIRED - upload your screenshots)
- Browser & Version (should be Chrome)
- Device / OS
- Additional Context (console errors, network errors)
CRITICAL: After submitting, add the label: needs-triage
- Without this label, your issue won't sync to our main repo!
Your issue will automatically sync to the main repo for review.
Why screenshots with DevTools? Our AI (Claude) reads them to debug faster!
Please be systematic:
- Work through test cases one by one (206 total in README)
- Don't skip categories
- Document findings as you go
- Test BOTH Free and Paid accounts
- Test different scenarios (valid/invalid inputs)
- Test edge cases (empty strings, very long URLs, special characters)
- Test error conditions (rate limits, timeouts)
- Desktop (required)
- Mobile (if possible)
- Tablet (if possible)
- Chrome REQUIRED for all testing
- Keep DevTools open while testing
- Monitor Console for errors
- Even small UI issues matter
- Typos and grammar errors count
- Confusing UX flows are bugs too
- Performance issues (slow loading)
- ALWAYS add the needs-triage label to issues - this is what triggers the sync to the main repo
Focus on these areas in order:
1. Async Crawl API (highest priority - production feature)
- Job creation
- Job polling
- Job cancellation
- Result retrieval
- Error handling
2. Dashboard Features
- Authentication flows
- API key management
- Billing/subscription flows
- Usage tracking accuracy
3. LLM Extraction
- Managed service (our keys)
- BYOK (user keys)
- Different providers
- Token usage tracking
4. Rate Limits & Quotas
- Free vs Paid differences
- Concurrent request limits
- Daily quota enforcement
- Rate limiting behavior
5. Error Handling
- Timeout handling
- Quota exceeded messages
- Invalid input handling
- Network error recovery
6. Sessions API (if time permits)
- Session creation
- CDP WebSocket connection
- Session cleanup
7. Batch/Sync Crawl (lower priority)
- May not ship in final release
Stripe Test Mode Credit Card:
- Card Number: 4242 4242 4242 4242
- Expiry: Any future date (e.g., 12/26)
- CVC: Any 3 digits (e.g., 123)
- ZIP/Postal Code: Any valid ZIP code
Test Different Scenarios:
- Successful payment
- Upgrade from Free to Plus
- Upgrade from Plus to Pro
- Downgrade from Pro to Plus
- Cancel subscription (grace period)
- Reactivate canceled subscription
- Payment failure (use card 4000 0000 0000 0002 for a declined payment)
- Register new account (email/password)
- Verify email (check magic link)
- Login to dashboard
- Create first API key
- Try Quick Start example
- View usage stats
- Upgrade to paid plan
- Test increased limits
- Login with Google/GitHub
- Link additional OAuth provider
- Unlink OAuth provider
- Test login with remaining provider
- Get API key from dashboard
- Test sync crawl (simple URL)
- Test batch crawl (multiple URLs)
- Test async crawl (job workflow)
- Test LLM extraction
- Test CSS extraction
- Monitor rate limits
- Test quota enforcement
- Create browser session
- Connect via WebSocket (if you know how)
- Verify session shows in dashboard
- Delete session
- Verify cleanup
- Test with invalid API key (401)
- Exceed rate limit (429)
- Exceed daily quota
- Invalid URL format
- Timeout scenario (very slow URL)
- Invalid extraction strategy
- Technical questions about features
- Clarification on expected behavior
- Test credential issues
- Access problems
- Bugs and errors
- UI/UX problems
- Feature requests (as separate issue type)
- Accept GitHub invites
- Read this guide and README
- Familiarize yourself with the dashboard
- Run first tests
- Work through test cases systematically
- Report bugs as you find them
- Test different account types
- Test SDKs
- Test error conditions
- Test performance limits
- Mobile testing
- Integration testing
- Re-test fixed bugs
- Final smoke tests
- Documentation review
We're aiming for:
- All 206 test cases executed
- All P0 (Critical) bugs found and fixed
- All P1 (High) bugs found and fixed
- 90%+ of P2 (Medium) bugs found
- Platform stable for beta launch
Your thorough testing will help us launch a rock-solid product. Every bug you find makes Crawl4AI Cloud better for our users.
Remember the essentials:
- Chrome browser only
- DevTools screenshots with every bug
- needs-triage label on all issues
- Test systematically
- Report everything
Let's make this the best web crawling service available!
Questions? Ask in the WhatsApp group.
Ready? Accept your GitHub invite and start testing!