Query your local Screenpipe API for the last 48 hours (all modalities):
# All content (OCR + audio + UI events)
curl -s "http://localhost:3030/search?limit=500&start_time=$(date -v-48H -u +%Y-%m-%dT%H:%M:%SZ)"
# (BSD/macOS date shown; on Linux use: date -u -d '48 hours ago' +%Y-%m-%dT%H:%M:%SZ)
# Or query specific modalities:
# OCR (screen text): &content_type=ocr
# Audio transcripts: &content_type=audio
# UI events: &content_type=ui

Then ask the AI to analyze with this prompt:
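If you'd rather build the query programmatically than fight shell `date` flags, the same URL can be constructed in Python. This is a sketch using only the parameters shown above (`limit`, `start_time`, `content_type`); the helper name `build_search_url` is illustrative, not part of Screenpipe.

```python
# Portable alternative to the shell one-liner: build the same /search URL.
from datetime import datetime, timedelta, timezone
from urllib.parse import urlencode

def build_search_url(hours_back=48, content_type=None, limit=500,
                     base="http://localhost:3030/search"):
    """Build a Screenpipe /search URL covering the last `hours_back` hours."""
    start = datetime.now(timezone.utc) - timedelta(hours=hours_back)
    params = {"limit": limit,
              "start_time": start.strftime("%Y-%m-%dT%H:%M:%SZ")}
    if content_type:  # "ocr", "audio", or "ui"; omit for all modalities
        params["content_type"] = content_type
    return f"{base}?{urlencode(params)}"

print(build_search_url(content_type="audio"))
```

Pipe the result through `curl -s "$(python3 build_url.py)"` or fetch it with any HTTP client.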
Analyze this screenpipe data and extract all todos, reminders, and commitments. Look for:
Explicit signals:
- "[ ]" unchecked markdown checkboxes
- "TODO:", "FIXME:", "need to", "have to", "should", "must"
- "don't forget", "remember to", "by tomorrow", "deadline"
Commitment language (especially in audio transcripts):
- "I'll", "I will", "I promised", "I told [person] I would"
- "Follow up with", "Reply to", "Send [person]"
- Verbal commitments in meetings/calls
Meeting action items:
- Any bullet points from meeting notes
- Names + context suggesting follow-up needed
- Things said out loud in transcripts
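The signal lists above can double as a cheap local prefilter: before sending 48 hours of OCR and transcripts to a model, flag only the lines that match one of the phrases. A minimal sketch (the regex and `flag_candidates` helper are illustrative, not part of Screenpipe):

```python
# Regex prefilter over raw transcript/OCR lines using the signals above.
import re

SIGNALS = re.compile(
    r"\[ \]|TODO:|FIXME:"
    r"|\b(need to|have to|should|must|don't forget|remember to)\b"
    r"|\b(I'll|I will|I promised|follow up with|reply to)\b",
    re.IGNORECASE,
)

def flag_candidates(lines):
    """Return (line_number, text) pairs matching any todo/commitment signal."""
    return [(i, ln) for i, ln in enumerate(lines, 1) if SIGNALS.search(ln)]

sample = [
    "TODO: ship the release notes",
    "we mostly discussed the weather",
    "I promised Dana I'd send the deck by tomorrow",
]
for i, ln in flag_candidates(sample):
    print(i, ln)   # flags lines 1 and 3
```

This trims the payload and cuts token cost; the model still does the real extraction, since verbal commitments often carry no keyword at all.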
Output format:
- [Task] → Source: [app/audio], [timestamp]
- [Task] → Source: [app/audio]
- [Person]: [context]
- [Subject/thread]
Be exhaustive. Better to surface a false positive than miss a real todo.