AI search is no longer theoretical. ChatGPT, Perplexity, Gemini, and other AI systems are actively recommending software products, citing specific pricing, and comparing features. But they can only recommend what they can understand — and extract.

We ran each of the 50 sites through AIVerdict's AURA audit (AI Understanding & Readiness Assessment), measuring Crawlability, Authority, Content Quality, Answerability, and Extractability. Every audit analyzed the homepage plus up to 8 subpages, checking over 30 signals that determine whether AI systems can discover, understand, and cite your content.

The headline numbers

  • 82 — Average AURA score
  • 96 — Highest score (Stripe)
  • 68 — Weakest dimension average (Extractability)
  • 38 — Score range (58–96)

The average AURA score across all 50 SaaS sites was 82 out of 100. That sounds solid — until you see which dimension is dragging everything down.

The dimension-by-dimension breakdown

Each AURA audit measures five distinct dimensions. Here's how the 50 sites performed on average across each one:

  • Content Quality — 90
  • Crawlability — 87
  • Authority — 84
  • Answerability — 80
  • Extractability — 68

The surprise: Content Quality is the strongest dimension at 90, while Extractability is the weakest at 68. Most SaaS sites produce genuinely good content. But that content isn't structured in ways AI can easily parse — missing Schema.org markup, poor semantic HTML, images without alt text, and a lack of structured data like lists and tables.

In other words: the content is good, but it's not machine-readable. AI systems need structured signals — JSON-LD, heading hierarchies, semantic tags — to confidently extract and cite information. Without them, even great content gets overlooked.

Finding 1: Extractability is the industry's biggest blind spot

The average Extractability score was 68 out of 100 — the lowest of any dimension by a significant margin. Extractability measures how well your content is structured for AI parsing: Schema.org markup (JSON-LD), semantic HTML (article, section, main tags), image alt text, structured data like lists and tables, and proper OG/meta tags.
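To make those checks concrete, here is a toy version of this kind of signal scan. It is a sketch, not AIVerdict's scoring code; the cheerio dependency, the specific selectors, and the example URL are our own illustrative choices:

```typescript
// A toy illustration of the Extractability signals described above.
import * as cheerio from "cheerio";

interface SignalReport {
  jsonLdBlocks: number;      // Schema.org structured data (JSON-LD)
  semanticTags: number;      // <article>, <section>, <main>
  imagesMissingAlt: number;  // images without meaningful alt text
  listsAndTables: number;    // structured content AI can parse directly
  hasOgTags: boolean;        // Open Graph / meta tags
}

function checkExtractabilitySignals(html: string): SignalReport {
  const $ = cheerio.load(html);
  return {
    jsonLdBlocks: $('script[type="application/ld+json"]').length,
    semanticTags: $("article, section, main").length,
    imagesMissingAlt: $("img:not([alt]), img[alt='']").length,
    listsAndTables: $("ul, ol, table").length,
    hasOgTags: $('meta[property^="og:"]').length > 0,
  };
}

// Usage (Node 18+, ESM): fetch the raw HTML the way a non-rendering
// crawler would, with no JavaScript execution.
const res = await fetch("https://example.com/pricing");
console.log(checkExtractabilitySignals(await res.text()));
```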

The problem isn't missing content — it's content that lacks the structured signals AI systems rely on to extract and cite specific information.

Lowest Extractability scores
  • Clearscope — 40/100. An SEO tool whose own site is largely inaccessible to AI crawlers.
  • Asana — 41/100. Despite a perfect 100 Crawlability score, most page content is JavaScript-rendered.
  • ReportGarden — 42/100. Feature pages exist but content can't be extracted from the rendered page.
  • ActiveCampaign — 43/100. JS-heavy frontend hides content from non-browser crawlers.

Asana is the most striking example: it scores 100 on Crawlability (perfect robots.txt, sitemaps, meta tags) but just 41 on Extractability. AI crawlers can discover every page — but the pages lack Schema.org markup, have weak semantic HTML structure, and provide few machine-readable signals. The content reads well to humans, but AI has little structured data to anchor its extraction.

The fix is straightforward: add JSON-LD structured data (Product, FAQ, HowTo, Article), use semantic HTML tags (article, section, main), ensure images have meaningful alt text, and organize content with lists and tables. These aren't complex engineering changes — they're markup decisions that most product teams haven't prioritized for AI systems.
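As an illustration, here is a minimal sketch of the JSON-LD piece of that fix: serialize a Schema.org object into a script tag that ships in the initial HTML, where crawlers can read it without executing JavaScript. The FAQ content below is an invented placeholder, not data from any audited site:

```typescript
// A minimal Schema.org FAQPage object (placeholder content).
const faqJsonLd = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "Does the Pro plan include API access?",
      acceptedAnswer: {
        "@type": "Answer",
        text: "Yes. The Pro plan includes full REST API access.",
      },
    },
  ],
};

// Emit this inside <head> so non-rendering crawlers see it in the raw HTML:
const scriptTag =
  `<script type="application/ld+json">${JSON.stringify(faqJsonLd)}</script>`;
```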

Finding 2: Content Quality is stronger than expected

Content Quality averaged 90 out of 100 — the highest of any dimension. This measures how well AI can extract meaning, structure, and citable facts from your pages when it can access them.

The top scorers share common traits:

  • Stripe (97), HubSpot (97), Webflow (97) — Deep, specific documentation with named features, concrete examples, and structured comparisons.
  • Shopify (96), Ahrefs (96), Moz (96), Supabase (96), Mailchimp (96) — Detailed feature pages, guides, and transparent pricing information.

Even the lowest Content Quality scores in the dataset are respectable:

  • BrightEdge (72) — Enterprise positioning with limited public content depth.
  • ReportGarden (75) — Feature pages that name capabilities without fully explaining them.
  • Zoho (78) — Broad product suite with thin content across many pages.

The takeaway: most SaaS companies already produce AI-quality content. The bottleneck isn't what they write — it's whether AI can actually access and extract it.

Finding 3: Authority doesn't correlate with company size

You might assume a well-funded SaaS company automatically scores high on Authority. The data says otherwise — and the reason is how Authority is measured.

Authority is scored relative to your niche and locale, not in absolute terms. AURA identifies your product category, finds the top brands in that space, and measures how you compare. A company that dominates a small niche can score 100, while a household name in a crowded market scores 60.
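AURA's exact formula isn't public, but the idea can be sketched as normalizing a brand's raw authority signal against the strongest brand in the same niche. Everything below is an illustrative assumption, not the real scoring code:

```typescript
// Hedged sketch of niche-relative scoring: your signal divided by the
// strongest signal in your competitive niche, scaled to 0-100.
function relativeAuthority(brandSignal: number, nicheSignals: number[]): number {
  const nicheLeader = Math.max(brandSignal, ...nicheSignals);
  return Math.round((brandSignal / nicheLeader) * 100);
}

// A niche leader scores 100 even with a modest absolute signal...
relativeAuthority(40, [10, 15, 25]);   // => 100
// ...while a household name in a crowded niche scores lower.
relativeAuthority(60, [100, 100, 90]); // => 60
```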

Authority is niche-relative
  • Plausible.io scores 100 Authority — because it's the defining brand in "privacy-focused web analytics." Small niche, clear leader.
  • Basecamp scores 60 Authority — because in "project management software" it competes with Asana (100), ClickUp (100), and Monday (90). A well-known name, but not the niche leader.
  • Contentful scores 60 Authority — a leading headless CMS, but measured against the broader "digital experience platform" category where larger players dominate.
  • Semrush scores 60 Authority — despite being an industry leader, its niche was identified as "digital marketing analytics" rather than "SEO software," placing it against a broader competitive set.

Authority is built through third-party signals: Wikipedia presence, mentions in AI training data, news coverage, review platforms, and brand recognition by AI models. But the score is always relative to your competitive niche. The takeaway: dominating a specific niche matters more than being broadly well-known.

Finding 4: SEO companies show mixed results

We audited several SEO and marketing tools. The results are telling:

SEO tool AURA rankings
  • Conductor — 91 (perfect Crawlability, strong across all dimensions)
  • AgencyAnalytics — 85 (100 Authority, solid overall)
  • Ahrefs — 85 (96 Content Quality, but just 54 Extractability)
  • Surfer SEO — 84 (100 Authority, 63 Extractability)
  • Mangools — 83 (well-balanced, 87 Extractability)
  • Moz — 83 (96 Content Quality, but 55 Extractability)
  • Semrush — 80 (60 Authority despite market dominance)
  • Screaming Frog — 74 (95 Answerability, but just 44 Extractability)
  • Clearscope — 74 (90 Content Quality behind a 40 Extractability wall)
  • BrightEdge — 58 (weakest overall, 45 Answerability, 51 Extractability)

The pattern repeats: even SEO companies that understand crawlers have Extractability blind spots. Ahrefs produces excellent content (96) but scores just 54 on Extractability — lacking structured markup that AI needs to parse it. Screaming Frog scores 95 on Answerability but just 44 on Extractability. These companies know how to write for search engines, but their own sites lack the Schema.org and semantic HTML signals that AI systems depend on.

Finding 5: The biggest movers

Several sites showed dramatic changes from our initial audit, revealing how quickly AURA scores can shift:

Notable shifts
  • Canva jumped from 72 to 93 — Extractability went from 28 to 82. A dramatic improvement, likely from SSR or pre-rendering changes. Now ranked #2 overall.
  • Conductor rose from 83 to 91 — Answerability improved from 50 to 75, Content Quality from 66 to 92.
  • Webflow climbed from 77 to 86 — Answerability surged from 41 to 95, the single biggest dimension improvement in our dataset.
  • Shopify rose from 81 to 91 — Content Quality jumped from 82 to 96, with Answerability climbing from 78 to 91.

Canva's turnaround is the headline story. Two weeks ago, this $40B company's content was nearly invisible to AI crawlers despite perfect brand recognition. Now it's the #2 site in our rankings. This proves that Extractability issues — while costly — are entirely fixable with the right engineering prioritization.

The full rankings

[Interactive rankings table: all 50 sites with Overall, Crawlability, Authority, Content Quality, Answerability, and Extractability scores, sortable by column, color-coded green (80+), yellow (60–79), orange (40–59), red (<40), with a marker for crawler-blocked sites.]

What this means for your site

These are well-funded, well-staffed SaaS companies with dedicated marketing and engineering teams. If they have Extractability blind spots, your site almost certainly does too.

The good news: every dimension gap we found is fixable — and Canva's jump from 72 to 93 proves it. The specific fixes depend on your site's architecture, content structure, and competitive landscape. A site with low Extractability needs better Schema.org markup, semantic HTML, and structured data. A site with low Authority needs third-party signal building. The priority order matters as much as the fixes themselves.

The SaaS companies that optimize their AURA scores now will have a significant advantage as AI-driven search continues to replace traditional browsing. The ones that don't will wonder why ChatGPT keeps recommending their competitors.

Methodology

All 50 audits were run using AIVerdict's AURA audit on March 27, 2026 (updated from the original March 13–15 run with improved browser rendering and authority scoring). Each audit analyzed the homepage plus up to 8 automatically selected subpages (pricing, features, about, docs, etc.). Scores are based on our 5-dimension AURA framework: Crawlability, Authority, Content Quality, Answerability, and Extractability. For sites with multiple audits, the most recent score was used. Sites that redirect (e.g., searchmetrics.com → conductor.com, freshsales.io → freshworks.com) were consolidated under the destination domain.