Understanding Your Score
How Your Visibility Score Is Calculated
Your overall visibility score is a number from 0 to 100. It's a weighted average of four metrics, each measuring a different dimension of how AI platforms represent your brand.
Score Formula
- Mention Rate (40% weight) — the percentage of your keywords where AI platforms mention your brand. If you have 20 keywords and your brand appears in responses to 14 of them, your mention rate is 70%.
- Accuracy Rate (30% weight) — of the responses that mention your brand, what percentage are factually correct? This compares AI claims against the brand data you provided (pricing, features, descriptions).
- Sentiment Score (15% weight) — are AI responses positive, neutral, or negative about your brand? Positive sentiment scores higher. This is normalized to a 0-100 scale.
- Citation Rate (15% weight) — of the responses that mention your brand, what percentage include a link back to your domain? Citations are strongest on Perplexity and Google AI Overviews; ChatGPT and Claude rarely cite sources.
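The weighted average described above can be sketched in a few lines. This is a minimal illustration only; the metric names, function name, and example values below are assumptions for the sketch, not the product's actual API.

```python
# Weights from the four metrics listed above (sum to 1.0).
WEIGHTS = {
    "mention_rate": 0.40,
    "accuracy_rate": 0.30,
    "sentiment_score": 0.15,
    "citation_rate": 0.15,
}

def visibility_score(metrics: dict[str, float]) -> float:
    """Each metric is on a 0-100 scale; returns the weighted average, 0-100."""
    return sum(WEIGHTS[name] * metrics[name] for name in WEIGHTS)

# Example values (illustrative only):
score = visibility_score({
    "mention_rate": 70.0,    # brand mentioned for 14 of 20 keywords
    "accuracy_rate": 80.0,
    "sentiment_score": 60.0,
    "citation_rate": 40.0,
})
print(score)  # 0.4*70 + 0.3*80 + 0.15*60 + 0.15*40 = 67.0
```

Because the weights sum to 1, the overall score always stays on the same 0-100 scale as the individual metrics.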
Score Ranges
Scores map to four performance tiers:
- 0-39 (Critical) — Your brand is largely invisible to AI platforms, or AI is actively spreading wrong information about you. Immediate action required. Focus on high-priority recommendations and hallucination fixes.
- 40-59 (Warning) — AI knows you exist but coverage is inconsistent or partially inaccurate. You have a foundation to build on. Prioritize accuracy fixes and content structure improvements.
- 60-79 (Good) — Solid visibility with room to grow. AI mentions your brand on most queries and is mostly accurate. Focus on improving citation rates and closing platform-specific gaps.
- 80-100 (Excellent) — Strong AI visibility across platforms. AI mentions you accurately with good sentiment and citations. Focus on maintaining this position and monitoring for new hallucinations.
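The tier boundaries above translate directly into a lookup. A minimal sketch, with thresholds taken from the ranges listed (the function name is an assumption):

```python
def score_tier(score: float) -> str:
    """Map a 0-100 visibility score to its performance tier."""
    if score < 40:
        return "Critical"    # 0-39
    if score < 60:
        return "Warning"     # 40-59
    if score < 80:
        return "Good"        # 60-79
    return "Excellent"       # 80-100
```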
Per-Platform Scores
Your dashboard shows individual scores for each AI platform: ChatGPT, Perplexity, Google AI Overviews, and Claude. These use the same formula but are calculated only from that platform's responses.
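Per-platform scoring can be sketched as: group responses by platform, then apply the same weighted formula to each group. The record shape and field names below are assumptions for illustration; note that accuracy, sentiment, and citations are measured only on responses that mention the brand.

```python
from collections import defaultdict

WEIGHTS = {"mention_rate": 0.40, "accuracy_rate": 0.30,
           "sentiment_score": 0.15, "citation_rate": 0.15}

def platform_scores(responses: list[dict]) -> dict[str, float]:
    """Compute a 0-100 visibility score per platform (hypothetical record shape)."""
    by_platform = defaultdict(list)
    for r in responses:
        by_platform[r["platform"]].append(r)

    scores = {}
    for platform, group in by_platform.items():
        mentioned = [r for r in group if r["mentioned"]]
        mention_rate = 100 * len(mentioned) / len(group)
        if mentioned:
            # These three metrics only consider responses that mention the brand.
            accuracy = 100 * sum(r["accurate"] for r in mentioned) / len(mentioned)
            citation = 100 * sum(r["cited"] for r in mentioned) / len(mentioned)
            sentiment = sum(r["sentiment"] for r in mentioned) / len(mentioned)
        else:
            accuracy = citation = sentiment = 0.0
        scores[platform] = (WEIGHTS["mention_rate"] * mention_rate
                            + WEIGHTS["accuracy_rate"] * accuracy
                            + WEIGHTS["sentiment_score"] * sentiment
                            + WEIGHTS["citation_rate"] * citation)
    return scores
```

This grouping is also why platform scores can diverge sharply: a platform like ChatGPT with few citations will lose the full citation-rate component even when mentions and accuracy are strong.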
Why Platform Scores Differ
Each AI platform works differently, so scores naturally vary:
- ChatGPT relies mostly on training data (parametric knowledge). It rarely cites sources, so citation rates are typically low. Focus on mention rate and accuracy here.
- Perplexity retrieves information in real time (RAG) and always cites sources. Citation rates are highest here. If Perplexity isn't citing you, your content isn't citation-worthy.
- Google AI Overviews uses Google's search index plus an LLM. If you rank well in traditional search, you'll likely score higher here.
- Claude uses training data only with no web access. It's conservative about claims, so mention rates may be lower but accuracy tends to be higher when it does mention you.
What Each Metric Means for You
- Low mention rate? AI doesn't know you exist for those queries. You need to create content that AI can discover and reference. See How LLMs Choose What to Cite.
- Low accuracy rate? AI is talking about you but getting facts wrong. This is the most dangerous situation — wrong pricing, outdated features, or competitor confusion can cost you customers. See Hallucination Detection.
- Low sentiment score? AI describes you in neutral or negative terms. Improve this by building stronger E-E-A-T signals, earning positive reviews, and creating authoritative content.
- Low citation rate? AI mentions you but doesn't link back to your site. Improve this with structured data, FAQ schema, and creating definitive, quotable content.
How to Improve Your Score
The fastest path to a higher score is your Recommendations page. Start with High-priority, Low-effort items, then run a new audit to measure progress. Most brands see measurable improvement within 2-4 weeks of implementing recommendations.