We audited Hotjar
Hotjar is one of the most recognizable behavior analytics tools on the web. 1,306,323 websites use it. It's been around since 2007. Recently acquired by Contentsquare, it's still the brand most SEO professionals and product teams reach for when they need heatmaps, session recordings, and surveys.
So when we ran an AURA audit on hotjar.com, we expected a strong score. Here's what we got: an overall 73/100 (link to the full audit report), broken down as follows:
- Authority: 85
- Citation Readiness: 86
- Topical Depth: 89
- Competitive Positioning: 84
- Content Clarity: 76
- Answerability: 74
- Extractability: 46
Pretty solid, especially in Authority and Topical Depth. But there are gaps. The Extractability score (46) reflects missing structured data, low image alt text coverage (17%), and thin content on key product pages. Our audit's own narrative noted: “Product pages are very thin (e.g., /product/heatmaps at 233 words) that offer minimal detail” and “the lack of a user-facing FAQ” on the main domain.
Real, identifiable issues. So far, so normal.
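The "thin content" finding is easy to sanity-check yourself: count the words a plain HTML parser can actually see on a page. A rough sketch using only Python's standard library (this is not AURA's crawler, and the sample page is made up; it just shows how content living in images and scripts contributes zero crawlable words):

```python
from html.parser import HTMLParser

class CrawlableText(HTMLParser):
    """Collect only the text a plain HTML parser can see,
    skipping <script>, <style>, and <noscript> contents."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self._skip_depth = 0
        self.chunks = []

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        # Only keep text that sits outside skipped elements.
        if self._skip_depth == 0 and data.strip():
            self.chunks.append(data.strip())

def crawlable_word_count(html: str) -> int:
    parser = CrawlableText()
    parser.feed(html)
    return len(" ".join(parser.chunks).split())

# A hypothetical page whose real content lives in an image and a script:
page = """
<html><body>
  <h1>Heatmaps</h1>
  <p>See where users click.</p>
  <img src="feature-tour.png" alt="">
  <script>renderInteractiveDemo();</script>
</body></html>
"""
print(crawlable_word_count(page))  # → 5
```

Run this against a real product page and compare the count to what you see in a browser; a large gap is exactly the "visual elements crawlers can't read" problem the audit flags.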
Then we asked ChatGPT
Before publishing this case study, we wanted to verify the audit's findings against reality. So we asked ChatGPT five questions a real user might ask:
- “What is Hotjar and what does it do? Be specific about features and pricing.”
- “How much does Hotjar cost?”
- “What exactly does Hotjar's heatmap tool do?”
- “How is Hotjar's session recording different from FullStory's?”
- “What specific features does Hotjar Surveys offer?”
ChatGPT answered every single question with deep, accurate detail. Pricing tiers (with specific dollar amounts). Heatmap types (click, scroll, move) with use cases. A side-by-side FullStory comparison. A complete breakdown of Hotjar Surveys' question types, targeting options, and AI insights.
If you asked us before this test “is Hotjar's site doing a good job for AI visibility?”, we'd have shown you the 73 audit score and pointed to the gaps. But ChatGPT's answers weren't vague or wrong. They were comprehensive and accurate.
This created an apparent contradiction: AURA says there are real gaps. ChatGPT acts like there are none. How can both be true?
The resolution — what AURA actually measures
The contradiction dissolves once you understand what each tool is measuring.
ChatGPT's answers reflect everything it can pull from:
- Training data — 19 years of articles, blog posts, reviews, tutorials, and case studies that mention Hotjar
- Real-time browsing — when asked specific questions, ChatGPT searched and found pages on help.hotjar.com (the help subdomain we don't crawl), fullsession.com (a third-party comparison site), and HotjarPricing.com (a third-party pricing aggregator)
- The site itself — including hotjar.com/pricing, which AURA correctly identified as “exceptionally detailed and well-structured” with concrete prices and session limits
- Cross-references — countless inbound links, integrations, tutorials, and Stack Overflow answers
AURA measures something more specific: how AI-ready your site is intrinsically — what's actually on hotjar.com, structured in ways AI systems can directly extract, without depending on outside factors.
Both measurements are correct. They're answering different questions:
- ChatGPT's answers: “Can AI find good information about Hotjar today?” → Yes, easily.
- AURA's score: “How much of Hotjar's AI visibility comes from the site itself versus everything else?” → A meaningful chunk depends on factors outside the site.
The 73 already accounts for brand equity
A common misconception about AI visibility audits is that they ignore brand recognition. AURA doesn't.
Our Authority dimension explicitly measures brand equity through:
- Domain age (Hotjar got full credit — 19 years)
- AI brand recognition test (Hotjar scored 85/100 — recognized as a known brand)
- News mentions (5 recent articles)
- Review platform presence (Software Advice, GetApp, Gartner)
- Wikipedia presence (none found, which is interesting given their size)
Hotjar's Authority score of 85 is already factored into the 73 overall. If we ran the same audit on a fictional small SaaS with identical site structure but no brand equity, the Authority dimension would drop to maybe 30–40, and the overall would drop to around 60–65.
This means AURA already differentiates between Hotjar and a no-name competitor. The 73 is honest. It reflects Hotjar's actual brand position. The smaller brand wouldn't get the same score.
So why isn't Hotjar a 90?
If brand equity is already credited and ChatGPT clearly knows Hotjar perfectly, why does AURA give Hotjar a 73 instead of a 90?
Because AURA also identified real, specific gaps:
- Missing structured data (Schema score: 0). Structured data isn't critical for AI agents reading raw text, but it is the most direct way to tell them “this is who we are, this is what we offer.”
- Thin product pages. Six core product pages average 213 words of crawlable content. Most of the actual product description lives in visual elements that crawlers can't read.
- No on-site FAQ. Hotjar's 1,527-URL sitemap has 342 blog posts and 60 case studies, but no user-facing FAQ on the main domain. (Our audit explicitly noted: “this likely exists on a non-crawled subdomain” — which is exactly right; help.hotjar.com is doing a lot of the work.)
- Weak /compare page. Relies on testimonials rather than quantitative side-by-side comparisons. As our audit put it: “less useful for direct citation in a competitive context.”
- Acquisition messaging dominance. The “Hotjar is now part of Contentsquare” message is communicated with overwhelming consistency, creating ambiguity about Hotjar's standalone identity.
These gaps are real. They're identifiable. They have potential consequences. But for Hotjar today, those consequences are being absorbed by everything else: the help subdomain, the training data depth, the third-party content, the overwhelming brand recognition.
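The sitemap finding is also reproducible. One way to spot a missing content category (like a FAQ) is to bucket a sitemap's URLs by their first path segment. A minimal sketch, again stdlib-only and using a made-up example.com sitemap rather than Hotjar's real one:

```python
import xml.etree.ElementTree as ET
from collections import Counter
from urllib.parse import urlparse

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def bucket_sitemap(xml_text: str) -> Counter:
    """Group sitemap URLs by their first path segment."""
    root = ET.fromstring(xml_text)
    buckets = Counter()
    for loc in root.findall(".//sm:loc", NS):
        path = urlparse(loc.text.strip()).path
        first = path.strip("/").split("/")[0] or "(home)"
        buckets[first] += 1
    return buckets

# Hypothetical sitemap for illustration:
sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/blog/post-1</loc></url>
  <url><loc>https://example.com/blog/post-2</loc></url>
  <url><loc>https://example.com/product/heatmaps</loc></url>
</urlset>"""

counts = bucket_sitemap(sitemap)
print(counts.most_common())  # blog pages dominate
print("faq" in counts)       # False: no FAQ section at all
```

On a real 1,500-URL sitemap, the skew (hundreds of blog posts, zero FAQ pages) jumps out in a single line of output.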
The audit's real value isn't measuring today
Here's the uncomfortable truth most AI visibility audits don't admit: for established brands, your current AI visibility is largely outside your direct control. It's a function of training data (frozen in time), real-time browsing (depends on AI provider behavior), and third-party content (written by people who don't work for you).
What IS in your control is your site itself. AURA measures that.
The 73 isn't a measurement of “how visible are you in AI today.” It's a measurement of “how resilient is your AI visibility to changes you can't control.”
For Hotjar, the things outside their control include:
- Training data refresh: When the next round of AI models is trained, the new data will reflect Hotjar's current site (with thin product pages and acquisition messaging) and the Contentsquare integration. The deep historical knowledge ChatGPT has today will fade as new AI models prioritize fresh data.
- Subdomain restructuring: help.hotjar.com is currently providing the deep product documentation ChatGPT relies on. If the Contentsquare integration consolidates this onto contentsquare.com or restructures it, Hotjar's most important AI knowledge source disappears overnight.
- Real-time agentic AI: When AI agents need to extract specific facts from a site live (not from training data), they need clean structured content. Brand equity doesn't help here. Our audit's Extractability findings matter more in this scenario.
- Third-party content drift: HotjarPricing.com and FullSession's comparison content currently fill gaps. There's no guarantee they'll be maintained or remain accurate as Hotjar evolves.
The 73 isn't saying “you have a problem today.” It's saying “you have specific dependencies on factors outside your control. Here are the gaps that would matter if those factors changed.”
What this means for your client
If you're an SEO agency reading this, here's the thing you need to internalize:
The same AURA score means different things for different brands.
Imagine two fictional clients with identical 73/100 audit scores:
Client A: 19-year-old SaaS with 1.3M users, extensive third-party coverage, comprehensive help subdomain, deep AI training data presence. The 73 reflects gaps that don't currently hurt them.
Client B: 3-year-old SaaS with 5,000 users, minimal third-party content, no help subdomain, no significant AI training data presence. The 73 reflects gaps that are actively making them invisible.
Same audit score. Wildly different real-world consequences.
When you deliver an audit report to a client, the score is only half the story. The other half is contextual interpretation:
- High brand equity, lots of supporting infrastructure: The audit identifies long-term risk and missed opportunities. Fix the gaps for resilience, not visibility.
- Low brand equity, few supporting factors: The audit identifies immediate visibility problems. Fix the gaps because they're actively costing you.
- Brands going through change (acquisition, rebrand, migration): The audit identifies what could break when supporting factors shift.
This is a service you can sell. Not “here's your audit score” but “here's your audit score interpreted in the context of your brand's actual position.” The interpretation is where the real value lives.
The honest verdict on Hotjar
For Hotjar specifically, the audit shouldn't be a source of alarm. Their AI visibility is strong. ChatGPT clearly knows them well. Real users asking real questions get real answers.
But the gaps the audit identified are real, and they represent risk. If we were advising Hotjar, we'd say:
- The /compare page is a competitive vulnerability. Right now, AI cites third-party comparison content (FullSession's comparison page) instead of yours. You've ceded the competitive narrative to others. Fix this with quantitative side-by-side data.
- The thin product pages create dependency on help.hotjar.com and training data. When Contentsquare consolidates the help center (which is likely), you'll suddenly need substantive product descriptions on the main domain. Build them now, before you need them.
- The acquisition messaging is creating identity drift. AI is increasingly confused about whether Hotjar is a standalone brand, a product line, or a marketing label. Clarify the relationship explicitly. Add structured data (Organization schema) declaring exactly what Hotjar is.
- Schema.org isn't critical for citation but matters for agentic AI. As more AI agents need to extract specific facts in real-time (booking, comparison, automated research), structured data becomes more valuable. Adding it is cheap insurance against the next phase of AI search.
None of these are urgent. All of them are worth doing.
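The structured-data recommendation can be as small as one JSON-LD block in the page head. A minimal sketch of an Organization declaration — every field value here is illustrative, and modeling the Contentsquare relationship via `parentOrganization` is our assumption about one reasonable way to express it, not Hotjar's actual markup:

```python
import json

# Illustrative values only -- not Hotjar's real structured data.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Hotjar",
    "url": "https://www.hotjar.com",
    "description": "Behavior analytics: heatmaps, session "
                   "recordings, and surveys.",
    # One plausible way to declare the acquisition relationship:
    "parentOrganization": {
        "@type": "Organization",
        "name": "Contentsquare",
        "url": "https://contentsquare.com",
    },
}

# Embed the output in the page head as:
# <script type="application/ld+json"> ... </script>
print(json.dumps(org, indent=2))
```

A block like this answers the standalone-identity question ("Hotjar is an organization, and it belongs to Contentsquare") in a form an agent can extract in one parse, with no prose interpretation needed.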
The lesson
AI visibility audits don't measure your current visibility — they measure your site's intrinsic AI readiness. For brands with extensive supporting factors (training data, third parties, help subdomains, brand recognition), the audit identifies risk and opportunity. For brands without those factors, the audit identifies immediate problems.
The 73 we gave Hotjar is honest. ChatGPT's perfect answers are also honest. They're measuring different things: today's visibility versus the site's own readiness.
If you're auditing your own site or a client's, the question to ask isn't “what's our score?” It's “what's our score, and which of the supporting factors do we currently rely on?” Once you know that, you know which gaps to fix urgently and which to fix for the long term.
The most dangerous mistake is assuming that strong current visibility means a strong site. They're not the same thing. Hotjar's case proves it.