When a user asks ChatGPT "what's the best project management tool for remote teams?" or Perplexity "which CRM should I use for a small agency?" — the answer doesn't come from a ranked list of blue links. The AI reads, interprets, and synthesizes content from across the web, then generates a recommendation.
The sites that get mentioned aren't necessarily the ones ranking #1 on Google. They're the ones that AI systems can find, understand, and trust enough to cite.
We built AIVerdict to measure exactly this. After auditing thousands of websites, we identified 5 distinct dimensions that determine whether AI will recommend a site. Each captures a different aspect of AI readiness — and each requires a different optimization strategy.
Why traditional SEO metrics aren't enough
Google's algorithm evaluates hundreds of ranking signals: backlinks, page speed, keyword relevance, user engagement. These still matter. But AI systems like ChatGPT, Perplexity, and Gemini operate fundamentally differently:
- They don't rank pages — they synthesize answers from multiple sources
- They need structured, extractable content — not just keyword-optimized copy
- They evaluate trust differently — looking at entity recognition, data presence in training sets, and citation reliability
- They can't execute JavaScript in most crawling scenarios — if your content is JS-rendered, it might be invisible
A site can rank #1 for its target keywords and still be completely invisible to AI recommendation engines. That's the gap the 5 dimensions measure.
The 5 Dimensions
Crawlability: Can AI systems access your content?
This is the foundation. Before AI can understand or recommend your site, it needs to be able to reach it. Crawlability measures whether your robots.txt allows AI crawlers, whether your sitemap is discoverable, whether you have proper canonical URLs, structured data, and entity definitions.
Common failures: blocking AI-specific bots (GPTBot, ClaudeBot, PerplexityBot) in robots.txt, missing sitemaps, no Organization schema, missing meta descriptions that AI uses for quick summarization.
This dimension runs 12 automated checks covering indexing, bot directives, structured data, entity definition, social metadata, crawl efficiency, sitemap, canonical URLs, meta descriptions, HTTPS, page speed, and OG images.
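The robots.txt check is the simplest of these to reproduce yourself. A minimal sketch using Python's standard library is below; the bot names are real AI crawler user-agent tokens, but the `blocked_ai_bots` helper and its threshold-free pass/fail logic are our own illustration, not AIVerdict's actual implementation:

```python
import urllib.robotparser

# Real user-agent tokens for the major AI crawlers.
AI_BOTS = ["GPTBot", "ClaudeBot", "PerplexityBot"]

def blocked_ai_bots(robots_txt: str, page_path: str = "/") -> list[str]:
    """Return the AI crawlers that this robots.txt disallows for page_path."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return [bot for bot in AI_BOTS if not rp.can_fetch(bot, page_path)]

robots = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Disallow: /admin/
"""
print(blocked_ai_bots(robots))  # GPTBot is blocked site-wide; the others fall through to *
```

A site like this would pass a Google crawl audit while silently excluding itself from ChatGPT's browsing — exactly the kind of gap this dimension surfaces.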
Content Quality: Does AI understand what your site is about?
This dimension uses AI to analyze AI readiness — we use Claude to read your actual page content and evaluate it against four criteria: content clarity and extractability, citation readiness, competitive positioning, and topical depth.
Content Quality catches problems that technical checks miss: pages full of marketing buzzwords with no specific claims, features pages that list names without explanations, help centers with no crawlable article content, and missing comparison pages that would help AI recommend you over competitors.
This is where agencies get the richest actionable insights, because each criterion comes with specific, page-level recommendations.
Answerability: Can AI answer questions using your content?
When someone asks an AI assistant a question, the AI needs content formatted in ways it can extract and relay. Answerability measures whether your content contains the structures AI systems prefer: clear definitions, structured lists, data tables, FAQ sections, and step-by-step instructions.
A site might have great content that a human can easily understand, but if it's all in dense paragraphs with no structural cues, AI systems will struggle to extract specific answers from it. This dimension measures the gap between having good content and having AI-extractable content.
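Those structural cues are mechanically detectable. Here is a small sketch — our own illustration, not AIVerdict's scoring code — that tallies answer-friendly HTML structures on a page using only the standard library; the `StructureCounter` name and the particular tag set are assumptions:

```python
from collections import Counter
from html.parser import HTMLParser

class StructureCounter(HTMLParser):
    """Tally answer-friendly structures: subheadings, lists, tables, definition lists."""
    ANSWER_TAGS = {"h2", "h3", "ul", "ol", "table", "dl"}

    def __init__(self):
        super().__init__()
        self.counts = Counter()

    def handle_starttag(self, tag, attrs):
        if tag in self.ANSWER_TAGS:
            self.counts[tag] += 1

page = """
<h2>How do I export reports?</h2>
<ol><li>Open the dashboard</li><li>Click Export</li></ol>
<h2>Pricing</h2>
<table><tr><th>Plan</th><th>Price</th></tr></table>
"""
parser = StructureCounter()
parser.feed(page)
print(dict(parser.counts))  # two question headings, one step list, one data table
```

A page that scores zero here may still read fine to a human — which is precisely the gap between good content and AI-extractable content.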
Extractability: Is your content in a format AI can read?
This dimension measures a technical reality that surprises many site owners: most AI crawlers don't execute JavaScript. If your content is rendered client-side by React, Vue, Angular, or any JS framework, AI systems might see an empty page.
We test each crawled page to determine whether the content is available in raw HTML (server-rendered) or requires JavaScript execution. A site built entirely on a JS framework with no SSR can score 0% on Extractability — meaning AI systems literally cannot read any of the content, regardless of how good it is.
This is often the easiest dimension to fix (enable SSR or static generation) and has the highest impact, because it's binary: either AI can read your content or it can't.
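You can approximate this test yourself: fetch the raw HTML without executing JavaScript and check how much visible text it actually contains. The sketch below is a simplified heuristic under our own assumptions (the `looks_js_rendered` helper and the 200-character threshold are illustrative, not AIVerdict's real check):

```python
import re
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collect visible text from raw HTML, skipping <script> and <style> bodies."""
    def __init__(self):
        super().__init__()
        self.parts = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.parts.append(data)

def looks_js_rendered(raw_html: str, min_chars: int = 200) -> bool:
    """Heuristic: a page whose raw HTML carries almost no visible text
    probably needs JavaScript execution before its content appears."""
    p = TextExtractor()
    p.feed(raw_html)
    text = re.sub(r"\s+", " ", "".join(p.parts)).strip()
    return len(text) < min_chars

# A typical SPA shell: an empty mount point plus a script tag.
spa_shell = '<html><body><div id="root"></div><script src="/app.js"></script></body></html>'
print(looks_js_rendered(spa_shell))  # True — nothing readable without JS
```

If your own site's raw HTML fails a check like this, enabling SSR or static generation is usually the single highest-impact fix on the list.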
How the dimensions work together
Each dimension measures a different aspect of AI readiness, but they're not independent. A site needs to pass a minimum threshold on each to be reliably recommended by AI:
- High Crawlability + Low Extractability = AI can find your site but can't read the content (JS-rendered SPA)
- High Content Quality + Low Authority = Great content that AI doesn't trust enough to cite (new brand)
- High everything + Low Answerability = AI knows about you but can't extract specific answers to relay to users
The overall AI Visibility Score blends all 5 dimensions with weights calibrated to their relative impact on AI recommendations. But the individual dimension scores are where the actionable insights live.
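As a sketch of what "blending with weights" means mechanically: the snippet below computes a weighted average over the five dimensions. The weights here are purely illustrative — AIVerdict's calibrated weights aren't published in this post:

```python
# Illustrative weights only; the real calibration is not disclosed here.
EXAMPLE_WEIGHTS = {
    "crawlability": 0.25,
    "content_quality": 0.25,
    "answerability": 0.20,
    "extractability": 0.20,
    "authority": 0.10,
}

def overall_score(dimension_scores: dict[str, float]) -> float:
    """Blend per-dimension scores (each 0-100) into one weighted average."""
    return sum(EXAMPLE_WEIGHTS[d] * dimension_scores[d] for d in EXAMPLE_WEIGHTS)

# A JS-rendered SPA with otherwise solid fundamentals:
scores = {"crawlability": 90, "content_quality": 70, "answerability": 60,
          "extractability": 0, "authority": 50}
print(overall_score(scores))
```

Note how a single zeroed dimension drags down an otherwise strong profile — which is why the per-dimension breakdown, not the blended number, is where the fixes come from.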
What this means for agencies
If you're an SEO agency, AI visibility is a new service line waiting to happen. Your clients are already asking (or will be asking) "why isn't ChatGPT recommending us?"
The 5-dimension framework gives you:
- A concrete audit deliverable — run an AI visibility audit alongside your traditional SEO audit
- Clear optimization roadmap — each dimension has specific, actionable fixes
- Measurable progress — track scores monthly to show improvement over time
- A differentiated service — most agencies aren't offering this yet
The agencies that start measuring and optimizing AI visibility now will have a significant competitive advantage as AI search continues to grow.
Try it yourself
You can run a free AI visibility audit on any website right now. Enter a URL and get your score across all 5 dimensions in under 30 seconds. No account required for your first audit.