Created: February 8, 2026 19:09
PROMPT for AI tools
You are a world-class AI product analyst + technical writer + SEO editor. Create a publish-ready, long-form blog post in Markdown about the AI tool category below. The content must be current, practical, and trustworthy.

CURRENT DATE (set automatically if you can): [YYYY-MM-DD]
AI CATEGORY (replace): [XXXXXXXXXXX] (example: “LLMOps Platforms”, “AI Code Assistants”, “RAG Frameworks”, “AI Governance Tools”)
TARGET READER (optional): [e.g., CTO, AI engineer, IT manager, marketing lead]
REGION (optional): [Global / US / EU / India / APAC]
SEED TOOL LIST (optional): [Tool1, Tool2, ...] (If provided, prioritize these; fill remaining slots with best-known tools.)

CRITICAL RULES (DO NOT BREAK)
1) Output MUST be clean Markdown only (no HTML), ready to copy-paste into a blog CMS.
2) Do NOT include any URLs, external links, or “Sources:” lines anywhere in the final output.
3) Do NOT invent facts. If you can’t confidently verify a detail, write: “Not publicly stated” or “Varies / N/A”.
   - This includes: certifications (SOC 2, ISO 27001, HIPAA), exact pricing, exact ratings, exact feature claims, and customer logos.
4) Keep a natural human tone: clear, helpful, not salesy, not robotic.
5) Minimum length: 2,000+ words.
6) Prioritize 2026+ relevance: AI agents, multimodal workflows, privacy, governance, evaluation, cost/latency, security-by-design.

RESEARCH BEHAVIOR (DO THIS SILENTLY BEFORE WRITING)
- If you have browsing / retrieval access: verify major claims against multiple reputable sources (vendor docs, recent release notes, reputable reviews/analyst commentary).
- If you do NOT have browsing: rely on general knowledge and avoid specifics you can’t verify; use “Not publicly stated / Varies / N/A” liberally.
- Select tools that are widely recognized and actively used today, and include emerging leaders likely to grow over the next 10 years.

H1 (TITLE)
Top 10 [AI CATEGORY]: Features, Pros, Cons & Comparison (2026 Guide)

---

## H2: Introduction (120–200 words)
Explain:
- What [AI CATEGORY] is (plain English).
- Why it matters now (2026+ context).
- 4–6 real-world use cases.
- What to evaluate (8–12 criteria buyers should use).

### Mandatory paragraph
- **Best for:** roles, company sizes, and industries that benefit most.
- **Not ideal for:** who may not need these tools and when alternatives are better.

---

## H2: What’s Changed in [AI CATEGORY] in 2026+
Write 8–12 bullets on trends that are specific to this category (examples you may use where relevant):
- agentic workflows, tool calling, multimodal inputs
- evaluation & testing (hallucinations, reliability)
- guardrails & prompt-injection defense
- enterprise privacy (data residency, retention controls)
- cost/latency optimization, model routing, BYO model
- observability (tracing, token/cost metrics)
- governance & compliance expectations

Keep it practical and buyer-focused.

---

## H2: Quick Buyer Checklist (Scan-Friendly)
Create a checklist (bullets) that helps someone shortlist tools in 5 minutes.
Include AI-specific considerations:
- data privacy & retention
- model choice (hosted vs BYO vs open-source)
- RAG/connectors (if relevant)
- eval/testing
- guardrails
- latency & cost controls
- auditability & admin controls
- vendor lock-in risk

---

## H2: Top 10 [AI CATEGORY] Tools (Updated)
Choose 10 tools:
- If SEED TOOL LIST is provided, use it first (only replace a seed if it’s clearly not in-category).
- Ensure a balanced mix: enterprise-grade, developer-first, and (if relevant) open-source.
- If fewer than 10 credible options exist, list fewer and explain why.

For EACH tool, follow EXACTLY this structure:

### H3: #N — Tool Name
**One-line verdict:** who it’s best for in 12–18 words.
**Short description (2–3 lines):**
What it does + typical users.

#### H4: Standout Capabilities
- 5–8 bullets focusing on what makes it different (not generic fluff).

#### H4: AI-Specific Depth (Must Include)
Write short bullets (use “Varies / N/A” when unknown):
- **Model support:** (proprietary / open-source / BYO model / multi-model routing)
- **RAG / knowledge integration:** (connectors, vector DB compatibility) or “N/A”
- **Evaluation:** (prompt tests, regression, offline eval, human review) or “N/A”
- **Guardrails:** (policy checks, jailbreak/prompt injection defense) or “N/A”
- **Observability:** (traces, token/cost metrics, latency) or “N/A”

#### H4: Pros
- 3 bullets (practical benefits).

#### H4: Cons
- 3 bullets (honest trade-offs).

#### H4: Security & Compliance (Only if confidently known)
Cover: SSO/SAML, RBAC, audit logs, encryption, data retention controls, residency.
Certifications: only if clearly known; otherwise “Not publicly stated”.

#### H4: Deployment & Platforms
State clearly:
- Web/Windows/macOS/Linux/iOS/Android (as applicable)
- Cloud/Self-hosted/Hybrid (as applicable)
If unknown: “Varies / N/A”.

#### H4: Integrations & Ecosystem
1 short paragraph + 4–7 bullets: APIs, SDKs, common integrations, extensibility.

#### H4: Pricing Model (No exact prices unless confident)
Describe typical model: usage-based / seat-based / tiered / open-source + enterprise, etc.
If unknown: “Not publicly stated”.

#### H4: Best-Fit Scenarios
List 3 scenarios (bullets) where this tool is an excellent fit.

---

## H2: Comparison Table (Top 10)
Create ONE readable table with columns:
- Tool Name
- Best For
- Deployment (Cloud/Self-hosted/Hybrid)
- Model Flexibility (Hosted / BYO / Multi-model / Open-source)
- Strength (1 phrase)
- Watch-Out (1 phrase)
- Public Rating (ONLY if confidently known; otherwise “N/A”)

Important: Do NOT guess ratings. Use “N/A” when uncertain.

---

## H2: Scoring & Evaluation (Transparent Rubric)
Explain in 5–7 lines that scoring is comparative, not absolute.
Use a 1–10 score for each criterion and compute Weighted Total (0–10) using:
- Core features – 20%
- AI reliability & evaluation – 15%
- Guardrails & safety – 10%
- Integrations & ecosystem – 15%
- Ease of use – 10%
- Performance & cost controls – 15%
- Security & admin – 10%
- Support & community – 5%

Output a table with:
Tool | Core | Reliability/Eval | Guardrails | Integrations | Ease | Perf/Cost | Security/Admin | Support | Weighted Total

Then add:
- “Top 3 for Enterprise”
- “Top 3 for SMB”
- “Top 3 for Developers”
(Choose based on the rubric.)
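For anyone automating this rubric outside the prompt, the weighted-total arithmetic above can be sketched in a few lines of Python. The criterion keys and the example scores below are illustrative choices of mine, not part of the template; the weights mirror the percentages listed above.

```python
# Illustrative sketch of the rubric's weighted-total arithmetic.
# Weights mirror the percentages above; the example scores are made up.
WEIGHTS = {
    "core": 0.20,             # Core features
    "reliability_eval": 0.15, # AI reliability & evaluation
    "guardrails": 0.10,       # Guardrails & safety
    "integrations": 0.15,     # Integrations & ecosystem
    "ease": 0.10,             # Ease of use
    "perf_cost": 0.15,        # Performance & cost controls
    "security_admin": 0.10,   # Security & admin
    "support": 0.05,          # Support & community
}

def weighted_total(scores):
    """Combine per-criterion 1-10 scores into a 0-10 weighted total."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Hypothetical tool scored against the rubric:
example = {"core": 8, "reliability_eval": 7, "guardrails": 6,
           "integrations": 9, "ease": 8, "perf_cost": 7,
           "security_admin": 8, "support": 6}
```

Because the weights sum to 100% and each score is at most 10, the weighted total stays on the same 0–10 scale as the inputs.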
---

## H2: Which [AI CATEGORY] Tool Is Right for You?
Write a decision guide with H3 sub-sections:

### H3: Solo / Freelancer
### H3: SMB
### H3: Mid-Market
### H3: Enterprise

Then:

### H3: Regulated industries (finance/healthcare/public sector)
### H3: Budget vs premium
### H3: Build vs buy (when to DIY)

Provide clear recommendations by scenario (no single universal winner).

---

## H2: Implementation Playbook (30 / 60 / 90 Days)
Make it tactical:
- 30 days: pilot + success metrics
- 60 days: harden security + eval + rollout
- 90 days: optimize cost/latency + governance + scale

Include AI-specific tasks: eval harness, red teaming, prompt/version control, incident handling.
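An "eval harness" for the playbook can start as small as a table-driven pass/fail check run on every prompt change. This sketch is mine, not from the template: `ask_model` is a hypothetical stand-in for your model or agent call, and the test case is made up.

```python
# Minimal eval-harness sketch for the 30/60/90-day playbook.
def ask_model(prompt):
    """Hypothetical stand-in for your model/agent call."""
    return "Paris is the capital of France."  # placeholder response

# Each case pairs a prompt with substrings a correct answer must contain.
CASES = [
    {"prompt": "What is the capital of France?", "must_include": ["Paris"]},
]

def run_evals():
    """Return the pass rate over all cases; track it per prompt version."""
    passed = 0
    for case in CASES:
        answer = ask_model(case["prompt"]).lower()
        if all(s.lower() in answer for s in case["must_include"]):
            passed += 1
    return passed / len(CASES)
```

Substring checks catch regressions cheaply; richer harnesses layer on rubric grading or human review, which is exactly the 60-day hardening step above.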
---

## H2: Common Mistakes & How to Avoid Them
List 10–14 bullets, AI-specific:
- prompt injection exposure
- no evaluation
- unmanaged data retention
- lack of observability
- cost surprises
- over-automation without human review
- vendor lock-in without abstraction

---

## H2: FAQs (At Least 12)
Use H3 for each question. Answers 2–4 lines each.
Cover: privacy, data usage, BYO model, self-hosting, evaluation, guardrails, costs, switching tools, and alternatives.

---

## H2: Conclusion
Summarize key insights and reinforce that “best” depends on context.
End with clear next steps: shortlist, pilot, verify security and evals, then scale.
FINAL SELF-CHECK (DO SILENTLY)
- No links or URLs anywhere
- No invented certifications/ratings/prices
- 2,000+ words
- Clean Markdown, scannable structure, tables included
- AI-specific sections are present (eval/guardrails/observability/privacy/cost)
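Several of the self-check items are mechanically verifiable before publishing. This is a hedged sketch of such a linter; the regexes and thresholds are my choices, not part of the prompt, and the table check only tests for pipe characters as a rough proxy.

```python
import re

def self_check(markdown):
    """Return a list of self-check violations found in a draft post."""
    problems = []
    if re.search(r"https?://|www\.", markdown):
        problems.append("contains a URL")
    if len(markdown.split()) < 2000:
        problems.append("under 2,000 words")
    if re.search(r"<[a-zA-Z]+[^>]*>", markdown):
        problems.append("contains HTML tags")
    if "|" not in markdown:  # rough proxy for a Markdown table
        problems.append("no Markdown table found")
    return problems
```

Invented certifications, ratings, and prices cannot be caught this way; those items still require the model (or a human editor) to check claims against what is actually verifiable.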