https://www.youtube.com/watch?v=EH8Mdx948Kw
Reasoning mode enabled
- GPT-5.2
- Claude Opus 4.5
- Gemini 3
- ONLY documents
- Say "Not found"
- Cite sources
- Match method
- Self-check
- Multi-Model
- Base your answer ONLY on the uploaded documents. Nothing else.
- If info isn't found, say "Not found." Don't guess.
- For each claim, cite: [Document, Page/Section, Quote].
- If uncertain, mark as [Unverified].
- [Your question]
- Only respond with information you are 100% confident is from the file.
- AI trained as helpful assistant
- Tries to keep you satisfied
- Fills in the gaps with hallucinations
- A high-reasoning model reduces hallucinations
- Opus 4.5
- Extended reasoning mode - Thinking On
- Gemini 3 Pro
- Strong on complex PDFs
- Reasoning On
- GPT-5.2
- Extended reasoning mode
- Thinking On
You need to "ground" the prompt: give the LLM trusted, external, or specific context that anchors its output to verifiable, real-world data.
Instead of allowing the AI to rely solely on its general training data, which can lead to inaccuracies or "hallucination", a grounded prompt forces the model to use the provided information to generate accurate, relevant, and trustworthy responses.
"Base your answer ONLY on the uploaded documents. Nothing else."
Not on the internet or its training data.
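The grounding rules above can be sketched as a small prompt-composition helper. This is a minimal sketch, not a specific model's API: the function and variable names (`grounded_prompt`, `GROUNDING_RULES`, the `<document>` wrapper) are my own, and the rule wording mirrors the notes.

```python
# Sketch: bundling documents + grounding rules + question into one prompt.
# Names and the <document> wrapper are assumptions, not any vendor's API.

GROUNDING_RULES = """\
Base your answer ONLY on the uploaded documents. Nothing else.
If information isn't found, say "Not found in documents." Don't guess.
For each claim, cite the specific location: document name, page/section, and a relevant quote.
If you find something related but aren't fully confident it answers the question, mark it as [Unverified]."""

def grounded_prompt(documents: dict[str, str], question: str) -> str:
    """Wrap each document, append the grounding rules, then the question."""
    doc_blocks = "\n\n".join(
        f"<document name={name!r}>\n{text}\n</document>"
        for name, text in documents.items()
    )
    return f"{doc_blocks}\n\n{GROUNDING_RULES}\n\n{question}"

prompt = grounded_prompt({"report.pdf": "Q3 revenue was $4.2M."},
                         "What was Q3 revenue?")
```

The point of composing it this way is that the rules travel with every question, so the model never answers from training data alone.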
"If information isn't found, say 'Not found in documents.' Don't guess."
This is Anthropic's #1 recommendation for reducing hallucinations.
"For each claim, cite the specific location: document name, page/section, and a relevant quote."
Reduces hallucinations. Makes it easier for humans to validate the source.
Confidence spectrum: Confident ("I found it") <----> Unknown ("Not found").
In between: found something, but not 100% sure.
"If you find something related but aren't fully confident it answers the question, mark it as [Unverified]."
Add this for contracts, financial analysis, and legal docs. Trade-off: more answers that might be slightly wrong versus fewer answers you can be entirely confident in.
"Only respond with information you are 100% confident is from the file."
Ask AI to verify its own work.
"Rescan the document. For each claim, give me the exact quote that supports it. If you can't find a quote, retract the claim."
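The rescan step has a mechanical analogue you can run yourself: if the model returns claims paired with supporting quotes, keep only claims whose quote actually appears verbatim in the document. This is a sketch under my own assumed data shape (a list of `{"claim", "quote"}` dicts), not part of any model's output format.

```python
# Sketch: keep claims whose supporting quote appears verbatim in the
# document; retract the rest. Data shape is an assumption for illustration.

def verify_claims(document: str, claims: list[dict]) -> tuple[list[dict], list[dict]]:
    supported, retracted = [], []
    for claim in claims:
        quote = claim.get("quote", "")
        (supported if quote and quote in document else retracted).append(claim)
    return supported, retracted

doc = "Q3 revenue was $4.2M, up 12% year over year."
claims = [
    {"claim": "Revenue grew 12%", "quote": "up 12% year over year"},
    {"claim": "Headcount doubled", "quote": "headcount doubled"},
]
ok, bad = verify_claims(doc, claims)
# The revenue claim's quote is found; the headcount claim's is not.
```

Exact substring matching is deliberately strict: a paraphrased "quote" fails the check, which is exactly the behavior you want when hunting hallucinations.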
Use one AI to check the work of another AI.
- Get analysis from Model A.
- Upload docs + analysis to Model B.
- Cross-check for accuracy.
"Review this analysis against the uploaded documents. Flag any claims that aren't directly supported."
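The three-step cross-check above can be sketched as a prompt builder for Model B. The model call itself is omitted; only the composition is shown, the review wording comes from the notes, and the function name and tag wrappers are my own assumptions.

```python
# Sketch: wrap Model A's analysis plus the source documents into a review
# prompt for Model B. Tags and names are illustrative assumptions.

REVIEW_INSTRUCTION = (
    "Review this analysis against the uploaded documents. "
    "Flag any claims that aren't directly supported."
)

def cross_check_prompt(documents: str, analysis: str) -> str:
    return (
        f"<documents>\n{documents}\n</documents>\n\n"
        f"<analysis>\n{analysis}\n</analysis>\n\n"
        f"{REVIEW_INSTRUCTION}"
    )

prompt_b = cross_check_prompt("Q3 revenue was $4.2M.",
                              "Claim: revenue doubled in Q3.")
```

Using a different model family for the review step reduces the chance that both models share the same blind spot.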
Possibly the most thorough option.
- Built for grounded search
- Quotes match citations
- Upload the analysis from ChatGPT or Claude
- Ask: "Which claims are NOT supported by the sources?"
Output
- Claim-by-claim verification
- Clickable source links
- Powered by Gemini 3