LLM Brand Visibility Analyzer

This tool measures how often AI-powered platforms mention, recommend, or cite your brand when users ask product-related questions. As consumers increasingly use AI assistants like ChatGPT, Perplexity, and Google's AI Overviews to research purchases, understanding your brand's visibility in these responses is crucial.

What This Tool Answers

  • When someone asks AI "What's the best [product]?", does your brand appear?
  • Which AI platforms mention your brand most frequently?
  • What sentiment surrounds your brand in AI responses?
  • How do you compare to competitors in AI recommendations?

How It Works

  1. Enter Brand: Provide your website URL and optional brand name
  2. Generate Queries: AI suggests realistic purchase-intent questions
  3. Select Platforms: Choose which AI models to test against
  4. Analyze Results: See citation rates, sentiment, and recommendations

Core Concepts

The GEO Framework

This tool is built on the Generative Engine Optimization (GEO) framework, which helps you understand not just whether you're being cited, but why and where in the customer journey.

Key GEO Principles

  • Work backwards from content — Generate queries based on what your content actually covers
  • Classify by intent — Understand what users are trying to accomplish
  • Map to funnel stages — See where you have visibility gaps in the buyer journey
  • Score content alignment — Measure how well your content matches user queries

Brand Visibility vs. Traditional SEO

Traditional SEO measures your ranking in search engine results pages (SERPs). LLM Brand Visibility measures something different: whether AI systems mention your brand in their generated responses.

Traditional SEO

  • Your website appears in search results
  • Users click through to your site
  • Based on crawled web pages
  • You control your ranking factors

LLM Visibility

  • AI mentions your brand in responses
  • Users may not click; they get the answer directly
  • Based on training data + real-time search
  • You have limited direct control

Search Platforms vs. Chat Platforms

AI platforms fall into two categories, each with different implications for your brand:

| Aspect           | Search Platforms                        | Chat Platforms                        |
|------------------|-----------------------------------------|---------------------------------------|
| Examples         | Perplexity, Google AI Overviews, Gemini | ChatGPT, Claude, Meta AI              |
| Data Source      | Real-time web search + training data    | Training data only (knowledge cutoff) |
| Citations        | Provides clickable source URLs          | No source links (usually)             |
| Business Impact  | Drives direct traffic                   | Builds brand awareness                |
| Detection Method | Grounded (URL matching)                 | Text-match (keyword search)           |

Detection Methods

Grounded Detection

Used for search platforms that return actual source URLs.

  • Extracts URLs from citation metadata
  • Matches your domain against cited sources
  • High confidence (actual link to your site)
  • Provides rank position in citations

Text-Match Detection

Used for chat platforms without structured citations.

  • Searches response text for brand mentions
  • Checks multiple name variations
  • Medium confidence (may miss mentions or produce false positives)
  • Extracts surrounding context
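
Both detection paths can be summarized in a few lines. The sketch below is a simplified illustration, assuming hypothetical response shapes (a `citations` array of URLs for grounded platforms, plain `text` for chat platforms); it is not the tool's actual implementation.

```typescript
// Simplified sketch of the two detection paths (hypothetical data shapes).
interface GroundedResponse { text: string; citations: string[] } // search platforms
interface ChatResponse { text: string }                          // chat platforms

// Grounded detection: match your domain against the cited source URLs.
function groundedDetect(resp: GroundedResponse, domain: string) {
  const rank = resp.citations.findIndex((url) => url.includes(domain));
  return { found: rank !== -1, rank: rank === -1 ? null : rank + 1 };
}

// Text-match detection: scan the response text for any brand variation.
function textMatchDetect(resp: ChatResponse, variations: string[]) {
  const lower = resp.text.toLowerCase();
  const hit = variations.find((v) => lower.includes(v.toLowerCase()));
  if (!hit) return { found: false, context: null };
  const i = lower.indexOf(hit.toLowerCase());
  // Grab ~100 characters of surrounding context, as described above.
  const context = resp.text.slice(Math.max(0, i - 50), i + hit.length + 50);
  return { found: true, context };
}
```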

Advanced Analysis (GEO Framework)

Enable advanced features in the "Advanced Analysis" panel to unlock deeper insights into your brand visibility. All features are off by default and can be toggled on as needed.

Query Classification

User Intent

What action is the user trying to take?

Learn · Compare · Buy · Explore · Solve · Evaluate

Funnel Stage

Where is the user in their buying journey?

Awareness · Consideration · Evaluation · Purchase · Post-Purchase

How it works: Click "Classify Queries" to automatically analyze your queries using rule-based grammar matching and ML classification.
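
As a rough illustration of the rule-based side of that classification, the sketch below maps query phrasing to an intent with simple keyword rules. The rules and fallback are assumptions for illustration, not the tool's actual grammar or ML model.

```typescript
// Hypothetical rule-based intent classifier (illustrative only).
type Intent = "learn" | "compare" | "buy" | "explore" | "solve" | "evaluate";

const intentRules: Array<[RegExp, Intent]> = [
  [/\bvs\.?\b|\bcompare\b/i, "compare"],
  [/\bbuy\b|\bprice\b|\bdeal\b|\bdiscount\b/i, "buy"],
  [/\bhow (do|to)\b|\bwhat is\b/i, "learn"],
  [/\bfix\b|\bnot working\b|\bproblem\b/i, "solve"],
  [/\bworth it\b|\breview\b|\bis .* good\b/i, "evaluate"],
];

function classifyIntent(query: string): Intent {
  for (const [pattern, intent] of intentRules) {
    if (pattern.test(query)) return intent;
  }
  return "explore"; // default when no rule matches
}

// classifyIntent("Sony WH-1000XM5 vs Bose QC Ultra") -> "compare"
```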

Content Analysis

Content-Derived Queries

Instead of guessing what queries to test, derive them from your actual content. The tool analyzes your webpage and generates queries that your content should be cited for.

Process: Fetch page → Extract content chunks (headings, paragraphs) → Analyze topics → Generate relevant conversational queries → Classify by intent/funnel
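
A minimal sketch of the first two steps of that pipeline is shown below, assuming the page is fetched over HTTP and chunked with simple regex heuristics; the real pipeline's extraction, topic analysis, and query-generation steps are more involved.

```typescript
// Sketch: fetch a page and extract rough content chunks (headings + paragraphs).
// Regex-based extraction is an assumption for brevity; a real parser would use the DOM.
async function extractChunks(url: string): Promise<string[]> {
  const html = await (await fetch(url)).text();
  const matches = html.match(/<(h[1-3]|p)[^>]*>([\s\S]*?)<\/\1>/gi) ?? [];
  return matches
    .map((m) => m.replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").trim())
    .filter((text) => text.length > 40); // skip trivial fragments
}
```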

Match Rate Scoring

See how well each query aligns with your actual content. Low match scores indicate queries your content may not adequately address.

60%+: High · 30-60%: Medium · <30%: Low
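
One simple way to compute a match rate like this is term overlap between the query and a content chunk. The sketch below uses that overlap as an assumed stand-in for the tool's actual scoring method.

```typescript
// Hypothetical match-rate scorer: share of query terms that appear in the content.
function matchRate(query: string, content: string): number {
  const tokenize = (s: string) =>
    new Set(s.toLowerCase().match(/[a-z0-9]+/g) ?? []);
  const queryTerms = tokenize(query);
  const contentTerms = tokenize(content);
  if (queryTerms.size === 0) return 0;
  let hits = 0;
  for (const term of queryTerms) if (contentTerms.has(term)) hits++;
  return Math.round((hits / queryTerms.size) * 100); // 0-100%
}

// matchRate("best noise cancelling headphones", pageText) -> e.g. 67
```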

Enhanced Competitor Analysis

Competitor URL Extraction

For search platforms (Gemini, Perplexity), extract competitor URLs from citations. See which competitor sites are being cited instead of yours.

Competitor Benchmarking

Compare your visibility score against extracted competitors. The leaderboard shows your position relative to competitors based on citation count, mention count, and sentiment.

Coverage Analysis

When enabled, shows a visual breakdown of your query coverage across funnel stages, user intents, and content types. Identifies gaps where you have no queries or low visibility.

Gap Detection

The tool automatically identifies coverage gaps and provides recommendations:

  • Missing funnel stages: "Add educational content for awareness stage"
  • Low intent coverage: "Create comparison pages for compare intent"
  • Content type gaps: "Publish how-to guides for tutorial content"

Metrics & Scoring

Citation Rate

The percentage of AI platforms that mentioned your brand in their response.

Formula:

Citation Rate = (Models that found brand / Total models tested) × 100

  • 70%+: Strong visibility
  • 30-70%: Moderate visibility
  • <30%: Low visibility
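
As a minimal sketch of the formula above (the results array shape is illustrative, not the tool's actual data model):

```typescript
// Citation Rate = (models that found the brand / total models tested) × 100
interface ModelResult { model: string; brandFound: boolean }

function citationRate(results: ModelResult[]): number {
  if (results.length === 0) return 0;
  const found = results.filter((r) => r.brandFound).length;
  return Math.round((found / results.length) * 100);
}

// citationRate([{ model: "gpt-4o", brandFound: true },
//               { model: "gemini-2.0-flash", brandFound: false }]) -> 50
```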

Sentiment Analysis

Analyzes the tone of text surrounding your brand mention using keyword detection.

Positive Indicators

recommend, best, top, excellent, quality, trusted, reliable, leading, popular, preferred

Neutral

No strong positive or negative signals detected in surrounding context

Negative Indicators

avoid, poor, worst, unreliable, expensive, disappointing, issues, problems, complaints

Scoring Logic:

if (positiveCount > negativeCount + 1) → "positive"

else if (negativeCount > positiveCount + 1) → "negative"

else → "neutral"

Confidence Score

A 0-100% score indicating how certain the detection is. Higher confidence means stronger evidence of an intentional brand mention.

Score Components:

  • +50% Base score when brand is found
  • +10% For each matching identifier (URL, domain, name) up to +30%
  • +10% For each additional mention (up to +30%)
  • +5% For recommendation context words ("recommend", "try", "visit") up to +30%
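
Taken together, those components imply a scoring function roughly like the sketch below. The inputs and caps follow the list above; the function shape itself is an assumption.

```typescript
// Confidence score (0-100) built from the components listed above.
function confidenceScore(opts: {
  brandFound: boolean;
  matchingIdentifiers: number;   // URL, domain, name matches
  additionalMentions: number;    // mentions beyond the first
  recommendationWords: number;   // "recommend", "try", "visit", ...
}): number {
  if (!opts.brandFound) return 0;
  let score = 50;                                       // base score when found
  score += Math.min(opts.matchingIdentifiers * 10, 30); // +10% each, up to +30%
  score += Math.min(opts.additionalMentions * 10, 30);  // +10% each, up to +30%
  score += Math.min(opts.recommendationWords * 5, 30);  // +5% each, up to +30%
  return Math.min(score, 100);
}
```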

Rank Position

For search platforms, this indicates your position in the citation list. For chat platforms, it's extracted from numbered recommendation lists.

  • #1: Top recommendation
  • #3: Mid-tier mention
  • #5+: Lower visibility

Detection Logic

How Brand Detection Works

The tool searches for your brand using multiple name variations to catch different ways AI might reference you.

Brand Variations Generated

For URL https://www.bestbuy.com with name "Best Buy":

bestbuy, best buy, best-buy, bestbuy.com, Best Buy, BestBuy

Detection Process

  1. Extract domain from your URL (e.g., "bestbuy" from bestbuy.com)
  2. Generate variations: spaces, hyphens, camelCase, full URL, etc.
  3. Search the AI response for any variation (case-insensitive)
  4. Extract ~100 characters of context around the mention
  5. Analyze sentiment and calculate confidence score
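
A simplified version of steps 1-2, generating variations from a URL and optional brand name, might look like this sketch; the real generator covers more variants.

```typescript
// Sketch: derive a set of brand-name variations from a URL and optional name.
function brandVariations(url: string, brandName?: string): string[] {
  const domain = new URL(url).hostname.replace(/^www\./, "");     // "bestbuy.com"
  const base = domain.split(".")[0];                              // "bestbuy"
  const variations = new Set<string>([base, domain]);
  if (brandName) {
    variations.add(brandName);                                    // "Best Buy"
    variations.add(brandName.replace(/\s+/g, ""));                // "BestBuy"
    variations.add(brandName.replace(/\s+/g, "-").toLowerCase()); // "best-buy"
    variations.add(brandName.toLowerCase());                      // "best buy"
  }
  return [...variations];
}

// brandVariations("https://www.bestbuy.com", "Best Buy")
// -> ["bestbuy", "bestbuy.com", "Best Buy", "BestBuy", "best-buy", "best buy"]
```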

Competitor Detection

The tool also identifies competitors mentioned alongside your brand, helping you understand the competitive landscape in AI responses.

Known Brands Tracked

40+ brands across categories: Electronics, Fashion, Jewelry, Home, Watches

Examples: Amazon, Best Buy, Walmart, Nordstrom, Tiffany, Wayfair, IKEA, etc.

AI Platforms

The tool tests your brand against 10 AI platforms, representing the major consumer AI services people use for product research.

Search Platforms (Grounded)

| Model                | Tier    | Platform           | Input Cost       |
|----------------------|---------|--------------------|------------------|
| Gemini 2.0 Flash     | budget  | Google AI / Search | $0.075/1M input  |
| Gemini 2.0 Pro       | premium | Gemini Advanced    | $1.25/1M input   |
| Perplexity Sonar     | budget  | Perplexity Free    | $1.00/1M input   |
| Perplexity Sonar Pro | premium | Perplexity Pro     | $3.00/1M input   |

Chat Platforms (Text-Match)

| Model             | Tier    | Platform     | Input Cost      |
|-------------------|---------|--------------|-----------------|
| GPT-4o Mini       | budget  | ChatGPT Free | $0.15/1M input  |
| GPT-4o            | premium | ChatGPT Plus | $2.50/1M input  |
| Claude 3.5 Haiku  | budget  | Claude Free  | $0.80/1M input  |
| Claude 3.5 Sonnet | premium | Claude Pro   | $3.00/1M input  |
| Llama 3.1 70B     | budget  | Meta AI      | $0.52/1M input  |

Model Presets

  • Quick Check: 2 models, ~$0.01/query (Gemini Flash + GPT-4o Mini)
  • Balanced: 4 models, ~$0.02/query (adds Perplexity + Claude Haiku)
  • Comprehensive: 6 models, ~$0.05/query (adds GPT-4o + Llama)

Execution Modes

All Queries × All Models

Complete visibility matrix

Tests every query against every selected model. Most comprehensive but highest cost.

Tests = queries × models

All Queries × One Model

Deep dive on a single platform

Tests all queries against one chosen model. Good for platform-specific analysis.

Tests = queries × 1

One Query × All Models

Quick spot check

Tests one query across all models. Fast way to compare platforms.

Tests = 1 × models

Parallel Execution

Tests run in parallel for faster results:

  • Query-level: Up to 3 queries run simultaneously
  • Model-level: All models for a query run in parallel
  • Result: 6-10x faster than sequential execution
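
As an illustration of that concurrency model, the sketch below runs up to three queries at a time and fans each query out to all selected models in parallel. The function names and shapes are hypothetical, not the tool's internal API.

```typescript
// Sketch of the two-level parallelism: batches of 3 queries, all models per query.
async function runTests(
  queries: string[],
  models: string[],
  runOne: (query: string, model: string) => Promise<unknown>,
) {
  const results: unknown[] = [];
  const QUERY_CONCURRENCY = 3;
  for (let i = 0; i < queries.length; i += QUERY_CONCURRENCY) {
    const batch = queries.slice(i, i + QUERY_CONCURRENCY);
    // Each query in the batch fans out to every model in parallel.
    const batchResults = await Promise.all(
      batch.map((q) => Promise.all(models.map((m) => runOne(q, m)))),
    );
    results.push(...batchResults.flat());
  }
  return results;
}
```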

Budget & Costs

How Costs Are Calculated

Each API call is billed based on the tokens used (roughly, the words processed):

Cost = (input_tokens / 1M × input_rate) + (output_tokens / 1M × output_rate)

Typical test: ~500 input tokens, ~300 output tokens per query per model
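
Plugging the typical numbers above into that formula gives a quick per-test estimate; the sketch below does the arithmetic. Rates are dollars per million tokens, as in the platform tables; the output rate in the example is an assumption, since the tables list input pricing only.

```typescript
// Cost = (input_tokens / 1M × input_rate) + (output_tokens / 1M × output_rate)
function estimateCost(
  inputTokens: number, outputTokens: number,
  inputRate: number, outputRate: number, // $ per 1M tokens
): number {
  return (inputTokens / 1_000_000) * inputRate +
         (outputTokens / 1_000_000) * outputRate;
}

// Typical test on GPT-4o Mini ($0.15/1M input, assuming ~$0.60/1M output):
// estimateCost(500, 300, 0.15, 0.60) -> ~$0.00026 per query per model
```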

Budget Controls

Click the budget display in the header to set spending limits:

  • Default limit: $1.00 per session
  • Warning: Shown at 80% of budget
  • Presets: $0.25, $0.50, $1.00, $5.00

Mock Mode

Enable Mock Mode to test the full workflow without any API costs. Generates realistic fake results for testing and demonstration.

Interpreting Results

What Good Results Look Like

  • Citation rate 70%+: Strong AI visibility
  • Sentiment Positive: AI recommends your brand favorably
  • Rank #1-3: Top recommendation position
  • Confidence 80%+: Strong, clear mentions

Common Issues & Solutions

Low citation rate (<30%)

Your brand may lack presence in AI training data. Focus on creating high-quality content that AI systems can reference.

Negative sentiment

AI may have learned from negative reviews or press. Address underlying issues and encourage positive reviews.

Many competitors mentioned

AI often mentions multiple brands. Differentiate your value proposition clearly.

Viewing Full Results

Click any query result card to expand it and see:

  • Full AI response text with brand highlights
  • Detection details (what terms were searched)
  • Competitors mentioned in that response
  • Source citations (for search platforms)
  • Cost and latency for that specific test

Frequently Asked Questions

Why might my brand not be found even though it's well-known?

AI models have training data cutoffs and may not know recent information. They also might reference your brand differently than expected. Try adding your brand name explicitly (not just URL) and check the detection details to see what variations were searched.

What's the difference between 'found' and 'cited'?

For search platforms, being 'cited' means your actual URL appears in the sources. For chat platforms, being 'found' means your brand name was mentioned in the text. Citations drive traffic; mentions drive awareness.

How accurate is sentiment analysis?

Sentiment analysis uses keyword detection and is approximately 70-80% accurate. It may miss nuanced sentiment or sarcasm. Always review the actual response text for important decisions.

Why do different models give different results?

Each AI model has different training data, knowledge cutoffs, and response styles. This is exactly why testing across multiple platforms is valuable—it shows where your brand has visibility gaps.

What are content-derived queries?

Instead of guessing what queries to test, content-derived queries analyze your actual webpage content and generate queries that your content should be cited for. This follows the GEO framework principle of 'working backwards' from your content to understand what you should rank for.

What do the intent and funnel stage tags mean?

Intent (learn, compare, buy, etc.) describes what action the user wants to take. Funnel stage (awareness, consideration, purchase, etc.) shows where they are in the buying journey. Together, they help you understand which types of queries you're visible for and where you have gaps.

What is match rate scoring?

Match rate measures how well a query aligns with your actual content. A high match rate (60%+) means your content directly addresses that query. A low match rate (<30%) means you might not have content that answers that question, even if you want to rank for it.

How often should I run visibility tests?

Monthly testing is a good baseline. Run additional tests after major marketing campaigns, PR events, or product launches to track impact on AI visibility.

Can I improve my brand's AI visibility?

Yes! Focus on: (1) Creating high-quality, authoritative content that AI systems can learn from, (2) Getting mentioned on well-indexed sites, (3) Building a strong online presence with consistent branding, (4) Encouraging authentic positive reviews, (5) Using content-derived queries to identify gaps in your content coverage.

Need help? Check the FAQ or Core Concepts.
