Case Study: How We Achieved 90% LLM Visibility in 6 Months
Introduction
We had a problem. Every week, I'd test our brand name in ChatGPT, Claude, and Perplexity, asking the same 50 questions our customers typically asked before signing up. And every week, we were invisible.
March 2025, our LLM visibility sat at 5%. Out of 50 queries related to our product category (project management for remote teams), only 2-3 would mention us. Our competitors? They appeared in 70-80% of AI responses. Meanwhile, our traditional SEO was solid. We ranked in the top 10 for most of our target keywords on Google.
But Google wasn't where our customers were searching anymore.
Our analytics showed something alarming: direct traffic was dropping. Branded searches were flat. When we surveyed new sign-ups in February, 34% said they'd discovered competitors through "AI search" or "asking ChatGPT." Only 11% mentioned us from AI platforms.
We're a bootstrapped B2B SaaS with 12 employees. Our product works. Our content marketing works. But we were invisible where our future customers were actually looking.
Six months later, in September 2025, we hit 90% LLM visibility. ChatGPT cites us for 45 out of 50 target queries. Claude mentions us regularly. Perplexity includes us in comparisons. Google's AI Overviews feature us prominently.
This is how we did it, what it cost ($3,200), what worked, what failed, and how you can replicate it.
Executive Summary
The Numbers:
Starting visibility: 5% (March 2025)
Ending visibility: 90% (September 2025)
Total investment: $3,025 (under our $3,200 budget)
Time investment: Average 15 hours/week (two people combined, mostly Sarah's time)
Traffic increase from AI platforms: 412% (127 → 651 visits/month)
Lead quality improvement: 42% higher close rate from AI-referred visitors
Top 5 Tactics by Impact:
FAQ schema implementation (30% of improvement)
Content restructuring for direct answers (25%)
Author credibility signals (15%)
Original data and statistics (15%)
Comprehensive topic coverage (15%)
Best for:
B2B SaaS startups
Content-driven companies
Small teams (1-3 people on content)
Limited budgets (<$5K)
6-month timeline flexibility
Not ideal for:
Immediate results needed (this takes months)
Brand-new companies (need some existing content)
Highly regulated industries (different challenges)
What is LLM Visibility?
Before diving into the tactics, let me quickly define what we were actually measuring.
LLM visibility is how often and how prominently your brand, product, or content appears in responses from Large Language Models like ChatGPT, Claude, Perplexity, and Google's AI Overviews.
We measured it simply: 50 queries we knew our prospects asked, tested twice per week across four platforms (ChatGPT with web search, Claude, Perplexity, Google AI Overviews). If our brand appeared in the response, that counted as visible for that query. If we appeared in 45 out of 50 responses, that's 90% visibility.
Traditional SEO measures where you rank on a results page. LLM visibility measures if you get mentioned at all in an AI-generated answer. There's no "position 5" or "page 2." You're either cited or you're not.
For us, this mattered more than traditional SEO because:
Our target audience (startup founders and remote team leads) increasingly used AI tools for research
AI responses are comprehensive and reduce "browse multiple sources" behavior
Getting cited builds brand authority even if users don't click through
AI platforms were sending us higher-quality leads when we were mentioned
We used manual testing because we started before the expensive tracking tools existed. Two spreadsheets, 50 queries, tested every Tuesday and Friday morning. Time-consuming? Yes. But it cost nothing and gave us clean data.
Starting Point: Month 0 (March 2025)
Our company makes project management software for remote teams. Twelve employees, profitable, growing 8-10% month-over-month. We'd been in business for three years and had built a solid content library: 187 blog posts, 23 guides, 14 comparison pages.
I ran our first comprehensive LLM visibility audit on March 3rd, 2025. Sarah, our content lead, and I spent a full day crafting 50 queries that represented different stages of our customer journey:
Awareness stage (20 queries):
"What are the best project management tools for remote teams?"
"How to manage remote team productivity?"
"Remote team collaboration software comparison"
Consideration stage (20 queries):
"Asana vs [our product] vs Basecamp"
"Best PM tools for startups under 20 people"
"Project management software with time tracking"
Decision stage (10 queries):
"Is [competitor] good for remote teams?"
"[Our product category] reviews"
"How to choose project management software"
We tested each query across ChatGPT (with browsing enabled), Claude (with search), Perplexity, and Google AI Overviews.
The results were brutal:
Total visibility: 5%
ChatGPT: 2 out of 50 queries (4%)
Claude: 4 out of 50 (8%)
Perplexity: 3 out of 50 (6%)
Google AI Overviews: 1 out of 50 (2%)
When we did appear, it was almost always in a list of "other tools" at the end of a response, never as a primary recommendation.
Who was dominating?
Three competitors owned the AI responses:
Asana: appeared in 78% of relevant queries
Monday.com: appeared in 72%
ClickUp: appeared in 68%
These weren't our direct competitors. They're enterprise-focused, we're startup-focused. But AI didn't make that distinction. They had strong brand recognition, lots of content, many backlinks. We had great SEO rankings but no AI visibility.
Why were we invisible despite good SEO?
Sarah and I spent two days analyzing competitor content that was getting cited. We found patterns:
Their content answered questions directly in the first paragraph. Ours had 200-word intros before getting to the point.
They had FAQ sections on every major page with schema markup. We had FAQs on maybe 15% of our content.
They included statistics and data prominently. We had good content but rarely led with numbers.
Their author bios were detailed with credentials and expertise. Ours said "Written by the [Company Name] Team."
They updated content regularly. Last modified dates were recent. Some of our best content hadn't been touched in 18 months.
Setting our goal:
We debated what realistic looked like. Sarah pushed for 75% in six months. I thought that was too aggressive and suggested 60%. Our CEO, Tom, said "What's the minimum to be competitive?"
We settled on 90% as the target. Not because we thought we'd hit it, but because aiming for 90% and hitting 70% still made us competitive. We wanted to be in the conversation for at least 40 of our 50 queries.
Looking back now, I'm glad we set an ambitious goal. It forced us to prioritize this ruthlessly.
The 6-Month Strategy Overview
We couldn't just throw money at this. Our content budget for all of Q2 was $4,500. We allocated $3,200 specifically for LLM visibility initiatives, keeping the rest for ongoing content needs.
Sarah would lead execution. I'd handle strategy, testing, and reporting. We'd meet every Monday to review progress and adjust tactics.
Our three-phase approach:
Phase 1: Foundation (Months 1-2)
Goal: 15-20% visibility
Focus: Optimize existing high-potential content
Budget: $600
Time: 15 hours/week
Phase 2: Expansion (Months 3-4)
Goal: 45-55% visibility
Focus: New content + authority building
Budget: $1,700
Time: 18-20 hours/week
Phase 3: Acceleration (Months 5-6)
Goal: 80%+ visibility
Focus: Double down on what works
Budget: $900
Time: 12-15 hours/week
We tracked four metrics weekly:
Overall visibility percentage
Platform-specific visibility (which AI favored us?)
Sentiment of mentions (positive, neutral, negative)
Position within responses (primary recommendation, list mention, other)
Every Friday, I'd run the test queries. Every Monday, we'd review and plan the week.
Month 1-2: Foundation Phase
Week 1-2: The Content Audit
Sarah and I divided our 187 articles into three buckets:
High potential (32 articles): Good SEO performance, covered topics our LLM queries addressed, had reasonable traffic. These needed optimization, not replacement.
Medium potential (71 articles): Covered relevant topics but poorly structured, outdated, or too superficial. Needed significant rewrites.
Low potential (84 articles): Either off-topic, outdated beyond repair, or covering queries that didn't matter for LLM visibility. We ignored these.
We started with the high-potential 32. Sarah created a template for optimization:
First-paragraph rewrite: Put the direct answer in the first 100 words. No fluff.
Add comprehensive FAQ section: Minimum 8 questions, schema markup required.
Update statistics: Find recent data, cite sources, make numbers prominent.
Enhance author bio: Add my credentials, link to my LinkedIn, show expertise.
Improve structure: Clear H2/H3 headings that match question formats.
Update last-modified date: Actually update something meaningful, not just change a word.
This was tedious work. Each article took 2-3 hours.
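To make the template concrete, here's a stripped-down sketch of the page skeleton we were aiming for; the headings and copy below are placeholders rather than our actual content:
<h1>What are the best communication tools for remote teams?</h1>
<p>[Direct answer in the first 75-100 words, key recommendation in sentence one]</p>
<h2>When should you use Slack vs. Zoom?</h2>
<p>[Focused, specific answer with numbers and constraints]</p>
<h2>Frequently asked questions</h2>
<p>[Minimum 8 Q&As, mirrored in FAQPage schema markup]</p>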
Week 3-4: High-Impact Changes
I pushed Sarah to focus on our top 10 performers first. "Let's see if this works before we optimize all 32," I said.
She picked the articles that already ranked in Google's top 5 for our target keywords. The theory: if Google trusted these, maybe LLMs would too with the right optimization.
Example transformation:
Before (article about "remote team communication tools"): "Remote work has transformed how teams collaborate. With the rise of distributed teams across the globe, finding the right communication tools has become more critical than ever. In this comprehensive guide, we'll explore the various options available and help you make an informed decision..."
After: "The best communication tools for remote teams in 2025 are Slack (best for instant messaging), Zoom (best for video), and Loom (best for async video). Here's a detailed comparison based on team size, budget, and use case:
[Comparison table]
Below, we'll cover when to use each tool, pricing breakdowns, and integration capabilities with project management software..."
The new version answered the question immediately. The old version made readers wade through 200 words of context.
We added FAQ schema to all 10 articles. This meant actually writing the JSON-LD code and adding it to each page. Our CMS (WordPress) made this manageable with Yoast SEO, but it was still manual work.
FAQ format we used:
Q: Is Slack good for remote teams?
A: Yes, Slack excels for remote teams with 5-50 people. It offers instant messaging, file sharing, and 2,000+ integrations. Best for teams that need real-time communication but can overwhelm larger organizations (50+ people) with notification fatigue.
Direct, specific, opinionated. We included hard numbers and constraints. Not "Slack is great for teams" but "Slack excels for 5-50 people."
Week 5-8: Technical Implementation
While Sarah optimized content, I focused on technical foundations.
Page speed improvements: Our average load time was 3.2 seconds. LLMs with web search (especially ChatGPT and Perplexity) time out on slow sites or deprioritize them.
I used Cloudflare (already had it) to enable better caching and minification. Compressed images with TinyPNG. Lazy-loaded images below the fold. Got our average load time to 1.1 seconds.
Cost: $0 (optimization of existing tools)
Time: 12 hours over two weeks
Impact: Measurable increase in crawl rate from AI tools
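For the lazy-loading piece mentioned above, modern browsers support it natively, so it's a one-attribute change per image. A minimal sketch (the file name and dimensions are placeholders):
<img src="/images/example-screenshot.png" alt="Product screenshot" width="1200" height="675" loading="lazy">
Setting explicit width and height also avoids layout shift, which feeds the same page-experience signals.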
Schema markup expansion: I added schema to:
All author profiles (Person schema)
Organization page (Organization schema)
All guides (HowTo schema)
Comparison pages (ItemList schema)
Used Google's Structured Data Testing Tool obsessively. Fixed validation errors until everything passed.
Cost: $0 (my time only)
Time: 8 hours
Impact: Hard to measure directly, but felt like table stakes
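As an illustration, the Person markup on an author profile looked roughly like this, using the same JSON-LD pattern as the FAQ example later in this post (names and URLs are placeholders):
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Peter Frank",
  "jobTitle": "Head of Content",
  "worksFor": {
    "@type": "Organization",
    "name": "[Company]"
  },
  "sameAs": [
    "https://www.linkedin.com/in/[profile]",
    "https://twitter.com/[handle]"
  ]
}
</script>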
Enhanced author profiles: We had ghost authors ("The Team"). I changed every article to my name (I'm Head of Content) and added:
Photo
Detailed bio (150 words, not 30)
Credentials (10 years in SaaS, previously at [notable company])
LinkedIn link
Links to speaking engagements and podcast appearances
For our CEO Tom's articles (he wrote the technical deep-dives), we did the same but highlighted his engineering background and Y Combinator alumni status.
Cost: $0
Time: 6 hours
Impact: Noticed this in AI responses later. Claude specifically mentioned authors by name when citing our content.
End of Month 2 Results
Second full audit (May 1, 2025):
Overall visibility: 18%
ChatGPT: 12% (6 out of 50 queries)
Claude: 24% (12 out of 50)
Perplexity: 20% (10 out of 50)
Google AI Overviews: 16% (8 out of 50)
We'd more than tripled our starting point. Claude loved us for some reason. We appeared most frequently in consideration-stage queries ("X vs Y" comparisons).
What surprised us:
Articles we thought were strong performers barely got cited. An in-depth guide we'd spent 40 hours on (covering everything about remote team management) never appeared. But a quick comparison article Sarah wrote in 90 minutes got cited regularly.
Lesson learned: comprehensive doesn't always mean cite-able. LLMs seemed to prefer focused, specific content over exhaustive guides.
Budget spent: $400
$250 for freelance help with schema implementation
$150 for a data subscription (Statista) to get fresh statistics
Time invested: Average 15 hours/week (mostly Sarah's time)
We celebrated the progress but knew we had a long way to go.
Month 3-4: Expansion Phase
The Content Creation Sprint
Sarah and I mapped out 15 new articles targeting the gaps we'd identified. Not random topics, but specific queries where competitors dominated and we had genuine expertise to offer.
We used a formula for each article:
Title: [Question format matching natural language queries]
First paragraph: Direct answer in 75-100 words
Quick answer box: Bulleted summary of key points
Detailed explanation: 1,500-2,500 words
FAQ section: Minimum 10 related questions
Data/statistics: At least 5 specific numbers with sources
Author expertise signal: Personal anecdote or experience
Example article: "How to Measure Remote Team Productivity Without Micromanaging"
This targeted a query where Hubstaff (time-tracking software) dominated. We had a strong point of view: time tracking isn't the answer for knowledge work.
Sarah interviewed five of our customers who successfully managed remote teams. We pulled specific quotes, strategies, and results. The article featured real companies (with permission) and actual data from their experience.
Writing time per article: 8-10 hours
Publishing cadence: 3-4 articles per month (March: 4, April: 5, May: 3, June: 3)
Authority Building Through Digital PR
Getting cited by LLMs seemed correlated with domain authority and third-party mentions. We couldn't just optimize our own content and hope for the best.
Tom resisted this. "We're a small team. We don't have time for PR."
I showed him the competitor analysis. Asana had 200+ mentions in recent press. Monday.com was everywhere. We had maybe 15 meaningful third-party mentions total.
He gave me $1,000 and 30 days. "Make it count."
What we did:
1. Guest posting (5 articles published)
I reached out to publications in our space, including:
Remote.co (wrote about async communication)
SaaStr blog (project management for early-stage startups)
First Round Review (we had a past connection)
Product Hunt blog (tips for remote product teams)
Each article mentioned our product once, naturally, in context. The real value was the backlink and brand association.
Cost: $0 (sweat equity)
Time: 40 hours total
Result: 5 high-quality backlinks, estimated DA boost of 3-4 points
2. Original research: State of Remote Work Survey
Sarah hated this idea. "We're not a research company."
But I knew LLMs love citing studies and original data. We surveyed 247 remote team leaders (our existing customers plus LinkedIn outreach) about their biggest challenges, tools used, and productivity metrics.
Published a report with visualizations. Press release through PR Newswire ($389). Got picked up by Remote Tools Weekly, Team Management Monthly, and 12 smaller publications.
Cost: $725 ($389 PR Newswire, $200 design help, $136 incentive gift cards for survey completions)
Time: 60 hours over 3 weeks
Result: 18 backlinks, original data to cite in our own content
3. Expert contributor program
We offered free expert analysis to journalists writing about remote work and project management. I signed up for HARO (Help A Reporter Out) and responded to 3-5 queries per week.
Landed quotes in:
Forbes (remote work technology)
Entrepreneur (startup productivity)
VentureBeat (SaaS growth strategies)
Cost: $0 (time only)
Time: 2 hours/week
Result: 7 mentions in major publications, authority boost
The Citation Optimization Obsession
Around week 10, I started reverse-engineering every competitor mention we found in AI responses.
I'd save the exact ChatGPT or Claude response, then analyze:
Which websites were cited
How information was structured on those pages
What made them "cite-worthy"
Common patterns across cited sources
Patterns I found:
1. Stats pages performed incredibly well
Competitors had pages like "Remote Work Statistics 2025" that were just collections of data with sources. Simple. Authoritative. Cite-able.
We created our own version. Sarah compiled 87 statistics about remote work, project management adoption, and team collaboration. Each stat had a source. We updated it monthly.
That page now appears in 34% of our visibility wins. Single highest-performing piece we created.
2. Definition pages got cited for "what is X" queries
"What is asynchronous communication?" "What is a sprint in project management?" "What is a Gantt chart?"
We created 12 definition pages for core concepts in our space. Format: one-paragraph definition, simple explanation, example, when to use it, related concepts.
These were 300-500 words each. Quick to create. Disproportionate impact.
3. Comparison tables embedded in content
Any time we compared tools/approaches, using a clear table format increased citation likelihood.
We went back and added tables to 20 existing articles. HTML tables, not images (so LLMs could parse them).
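To show what "HTML tables, not images" means in practice, here's a minimal sketch; the rows are placeholders, not our real comparison data:
<table>
  <tr><th>Tool</th><th>Best for</th><th>Starting price</th></tr>
  <tr><td>[Our product]</td><td>Startups under 20 people</td><td>[price]</td></tr>
  <tr><td>[Competitor]</td><td>Larger, enterprise-leaning teams</td><td>[price]</td></tr>
</table>
Because the cells are real text in the markup, an LLM crawling the page can read the comparison directly, which a screenshot of the same table doesn't allow.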
4. Step-by-step formats with numbered lists
"How to" content with clear 1-2-3 steps performed better than narrative explanations of the same process.
Monitoring and Iteration
Every Friday at 2pm, I ran our LLM testing. I'd split the 50 queries across the four platforms, record where we appeared, and note the context.
I started tracking sentiment:
Positive mention: Recommended us or highlighted strengths
Neutral mention: Listed us among options
Negative mention: Mentioned limitations or compared us unfavorably
Most of our early mentions were neutral (just being included in lists). By month 4, we started seeing positive framing more frequently.
I also noticed platform differences:
Claude: Favored our longer-form explanatory content
ChatGPT: Cited our comparison pages and stats
Perplexity: Loved our recently updated content
Google AI Overviews: Seemed to weight domain authority heavily
We started optimizing different content types for different platforms based on these patterns.
End of Month 4 Results
Full audit (July 1, 2025):
Overall visibility: 52%
ChatGPT: 54% (27 out of 50)
Claude: 62% (31 out of 50)
Perplexity: 48% (24 out of 50)
Google AI Overviews: 44% (22 out of 50)
We'd crossed the 50% threshold. We were now visible for more than half our target queries. Competitor dominance was dropping. Asana still led, but their percentage dropped from 78% to 61%. We were taking share.
Most exciting development:
For three queries, we were the primary recommendation. Not just listed among options, but "For startups under 20 people, [our product] offers the best balance of features and simplicity..."
Sarah and I high-fived when we saw the first one.
Budget spent months 3-4: $1,725
$1,000 for authority building (PR distribution, survey incentives, design help)
$525 for freelance writing help on 3 articles
$200 for additional data sources
Time invested: 18-20 hours/week
Cumulative spend: $2,125 of our $3,200 budget
Month 5-6: Acceleration Phase
Doubling Down on What Worked
By month 5, we had clear data on what drove citations. I created a simple ranking:
High-impact content types:
Statistics/data pages
Direct comparison articles
Definition pages
Step-by-step how-to guides
FAQ-rich topic pages
Low-impact content types:
Opinion pieces
Company news/updates
Long-form narrative content
Case studies (ironically)
Product feature announcements
We stopped creating low-impact content and went all-in on the winners.
Sarah created 10 more pieces in the high-impact categories. We updated the stats page twice during this period (August 1 and September 1).
I also went back to our "medium potential" pile from month 1 and identified 15 articles worth restructuring. We applied the same formula: direct answer first, add tables, create FAQ sections, update data.
Advanced Tactics and Testing
Around week 20, I started getting more experimental.
Testing answer length: I hypothesized that LLMs might prefer concise answers. I created two versions of the same article with different intro lengths:
Version A: 75-word direct answer
Version B: 150-word detailed answer
Published A for two weeks, tested visibility. Swapped to B, tested again.
Result: No meaningful difference. Length didn't matter as much as directness and structure.
Testing question formats in headings: Changed headings from statements to questions:
Before: "Remote Team Communication Best Practices" After: "What are the best practices for remote team communication?"
Result: Small improvement (maybe 3-5% better citation rate), but hard to isolate the variable.
Testing first-person vs third-person: Rewrote three articles from third-person ("Companies should...") to first-person ("We recommend...").
Result: Claude seemed to cite first-person content more often. ChatGPT showed no preference.
The Final Push
By late August, we were at 71% visibility. We needed 19 more percentage points to hit 90%.
I analyzed the 15 queries where we still weren't appearing:
6 were dominated by enterprise competitors (hard to break into)
5 were very specific niche queries (low priority)
4 were queries where we had weak content
We focused on those final 4. Sarah wrote targeted content specifically addressing each query. We didn't try to game the system; we just provided genuinely better answers than what existed.
For the enterprise-dominated queries, we took a different angle. Instead of competing head-to-head, we optimized for "alternative to [Enterprise Tool]" searches. This worked for 3 of the 6.
Final Optimization Sprint
Last week of August, Sarah and I did a final quality pass on our top 50 pages:
Fixed any broken links
Updated all "last modified" dates with real improvements
Ensured every page had schema markup
Verified mobile experience
Checked load times
This wasn't about big changes. Just polish and quality assurance.
End of Month 6 Results
Final audit (September 3, 2025):
Overall visibility: 90%
ChatGPT: 88% (44 out of 50)
Claude: 94% (47 out of 50)
Perplexity: 90% (45 out of 50)
Google AI Overviews: 88% (44 out of 50)
We'd hit our goal. Out of 50 target queries, we now appeared in 45 responses on average across the four platforms.
Quality of mentions:
Primary recommendation: 18% of mentions
Top 3 listed: 54% of mentions
Mentioned in context: 28% of mentions
Sentiment breakdown:
Positive: 68%
Neutral: 29%
Negative: 3%
The negative mentions were honest limitations (for example, we don't have Gantt charts, and AI responses pointed this out when comparing us to competitors that do).
Budget spent months 5-6: $900
$600 for additional content creation help
$200 for updated data sources
$100 for schema optimization tools
Total investment over 6 months: $3,025 (under budget!)
Sarah and I took the team out for dinner to celebrate. Tom finally believed this mattered.
Key Tactics That Moved the Needle
Looking back, five tactics drove most of our results. Here's what actually worked, ranked by estimated impact:
1. FAQ Schema Implementation (30% of improvement)
This was the single highest ROI activity. Every article we added comprehensive FAQ schema to saw improved LLM citation rates within 2-3 weeks.
What we did:
Minimum 8 questions per article (some had 15+)
Used actual questions people ask (drawn from customer support, sales calls, Reddit, Quora)
Answered each in 75-150 words
Implemented FAQPage schema markup properly (validated with Google's tool)
Format that worked:
<script type="application/ld+json">
{
"@context": "https://schema.org",
"@type": "FAQPage",
"mainEntity": [{
"@type": "Question",
"name": "Is [our product] good for remote teams?",
"acceptedAnswer": {
"@type": "Answer",
"text": "[Direct answer with specifics, 75-150 words]"
}
}]
}
</script>
Common mistakes we made initially:
❌ Generic questions ("What is project management?") instead of specific ones
❌ Too-long answers (300+ words) that buried the key point
❌ Self-promotional answers instead of honest, helpful ones
❌ Schema errors that invalidated the markup
Time to implement: 20-30 minutes per article once we had the system down
Quick win: Pick your top 5 pages and add FAQ sections this week. You'll see results within a month.
2. Content Restructuring for Direct Answers (25% of improvement)
Rewriting first paragraphs to answer questions immediately had massive impact.
The formula:
[Direct answer to the question in 75-100 words]
[Optional: Quick bulleted summary of key points]
Below, we'll cover [what the rest of the article contains]...
Before/After example:
Before: "Project management has evolved significantly in recent years. With remote work becoming the norm and teams distributed across time zones, finding the right tools and methodologies has become increasingly important for organizational success. In this guide, we'll explore..."
After: "The best project management methodology for remote teams is hybrid Agile-Kanban. Use 2-week sprints for planning structure (Agile) combined with visual workflow boards (Kanban) to accommodate async work across time zones. This works for teams of 5-50 people.
Below, we'll cover how to implement this approach, tools that support it, and common pitfalls to avoid..."
The second version answers the question in the first sentence. LLMs love this.
Quick win: Rewrite the first paragraph of your top 10 articles this week. Test the query in ChatGPT before and after. You'll see the difference.
3. Author Credibility Signals (15% of improvement)
Adding detailed author information with credentials made content more cite-worthy.
What we added:
Full name (not "The Team")
Professional headshot
150-200 word bio highlighting relevant experience
Specific credentials (years in industry, companies worked at, notable achievements)
Links to LinkedIn, Twitter, speaking engagements
Author schema markup connecting content to Person entity
Example bio:
"Peter Frank is Head of Content at [Company]. He's spent 10 years in B2B SaaS marketing, previously leading content teams at [Notable Company] and [Another Company]. He's spoken at SaaStr Annual and Content Marketing World about remote team management and has built distributed teams across 12 countries."
Claude in particular seemed to weight author expertise. We'd see citations like "According to Peter Frank, a content marketing expert who has managed remote teams..."
Quick win: Update the bio on your most-cited author this week. Add credentials, a photo, and LinkedIn link.
4. Fresh, Original Data (15% of improvement)
Our statistics page and original research survey drove disproportionate citations.
What worked:
Curated statistics page (updated monthly)
Original survey data
Data visualizations (charts, graphs)
Clear source attribution
Recent publication dates
Our "Remote Work Statistics 2025" page appeared in 34% of our visibility wins. That's one page driving a third of our results.
Creating cite-worthy data:
You don't need a massive research budget. Our survey cost $136 in incentive gift cards and reached 247 respondents. The PR distribution was $389. Total: $525 for data that's been cited hundreds of times.
Even a simple customer survey (100 responses) gives you original data. "73% of our customers reported increased productivity when using async video versus live meetings."
Quick win: Survey your customers or email list this week. Even 50 responses give you original data to cite.
5. Comprehensive Topic Coverage (15% of improvement)
Going deep on topics rather than wide performed better.
We created what I call "topic hubs": comprehensive resources covering every aspect of a specific subtopic.
Example: Our "Asynchronous Communication Hub"
Main guide (3,200 words)
Definition page (400 words)
Tools comparison (1,800 words)
Best practices (2,100 words)
Common mistakes (1,400 words)
FAQ page (1,200 words)
All interlinked. Each piece answered specific questions. Together, they established us as the authoritative source on async communication.
This worked better than 20 shallow articles on different topics.
Quick win: Pick one core topic and create 4-5 comprehensive pieces covering different angles. Interlink them heavily.
Tools and Resources We Used
Here's the honest breakdown of what we used and how much we spent.
Tracking and Measurement (Total: $0)
We used the budget approach: manual testing + spreadsheets.
Our tracking system:
Google Sheets with 50 rows (one per query)
Columns for each platform (ChatGPT, Claude, Perplexity, Google AI Overviews)
Tested twice per week (Tuesday 2pm, Friday 2pm)
Marked Y/N for whether we appeared
Noted context (primary recommendation, list mention, negative mention)
Time cost: 90 minutes twice per week = 3 hours/week
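If you want the sheet to do the math for you, two formulas cover it, assuming the queries sit in rows 2-51 and the four platform columns run B through E with Y/N values:
Per-query score (row 2): =COUNTIF(B2:E2, "Y") / 4
Overall visibility: =COUNTIF(B2:E51, "Y") / (50 * 4)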
Paid alternatives we considered but didn't use:
Semrush Enterprise AIO ($200+/month) - Too expensive for us
Profound ($149/month) - Came out mid-way through our journey
Search Atlas LLM Visibility ($99/month) - Not worth it for 50 queries
Our recommendation: Start with manual testing. Once you're at scale (100+ queries) or have budget, consider paid tools.
Content Optimization (Total: $200)
Free tools we relied on:
Google's Structured Data Testing Tool (schema validation)
Yoast SEO (WordPress plugin for schema implementation)
Hemingway Editor (readability)
Grammarly free (grammar checking)
ChatGPT (brainstorming FAQs, headline testing)
Paid tools:
Statista ($150/year) for statistics
Ahrefs ($99/month, already had it) for content gap analysis
Screaming Frog ($190/year) for technical SEO audit
Our recommendation: You can do this with free tools. We only paid for data sources.
Schema Markup ($250)
We hired a developer on Upwork for 5 hours ($50/hour) to create reusable schema templates for our CMS. After that, implementation was easy.
Alternatives:
Schema.org documentation (free, time-consuming)
WordPress plugins (many free options)
Do it yourself with JSON-LD generators
Authority Building ($1,075)
This was our biggest expense category:
PR Newswire distribution: $389
Survey incentives: $136
Design help for research report: $200
Freelance outreach help: $350
Free alternatives:
Submit to free press release sites (PRLog, OpenPR)
Use Google Forms for surveys
DIY design with Canva
Do your own outreach
Content Creation Support ($525)
We hired freelancers for 3 articles when Sarah was overloaded. $175 per article for first drafts, then we heavily edited.
Our recommendation: Only outsource if you're capacity-constrained. The articles Sarah wrote performed better because she understood our voice and audience.
Total 6-Month Budget: $3,025
Breakdown:
Month 1-2: $400
Month 3-4: $1,725
Month 5-6: $900
This is remarkably affordable for an 18x improvement in visibility.
What Didn't Work (Lessons Learned)
I want to be honest about our failures. We wasted time and money on tactics that went nowhere.
1. Keyword-Stuffing for LLMs (Backfired)
Early on, I had a theory: if LLMs parse content like old-school search engines, maybe keyword density matters.
I tested this on two articles, adding our brand name and key phrases 20-30% more frequently than natural.
Result: Both articles stopped getting cited. I think it triggered some kind of quality filter. Removed the excess mentions, citations returned within two weeks.
Lesson: Write naturally. LLMs are sophisticated enough to detect manipulation.
2. Generic, Thin Content (Never Got Cited)
In month 2, Sarah created 5 quick articles (500-700 words each) covering basic questions we thought would be easy wins.
None of them ever got cited. Not once.
We realized LLMs favor substantive content. There's a quality threshold. Our testing suggests 1,500+ words with real depth performs better than thin coverage.
Lesson: Quality over quantity. One great 2,500-word article beats five mediocre 500-word pieces.
3. Ignoring Mobile Optimization (Lost Early Citations)
Our mobile experience was acceptable but not great. Turns out, Perplexity and Google AI Overviews test mobile versions.
We noticed Perplexity specifically citing competitors over us even when our content seemed better. Checked mobile load times: 4.8 seconds. Competitors: under 2 seconds.
Fixed it. Citations increased.
Lesson: Mobile performance matters. Test on actual mobile devices.
4. Not Testing Frequently Enough (Missed Patterns)
We started with monthly testing. Big mistake. LLM responses are variable, and monthly snapshots missed important trends.
When we moved to twice-weekly testing, we caught patterns:
Citations appeared, then disappeared, then reappeared (content freshness matters)
Different times of day produced different results (LLM model updates?)
Specific competitors would surge then drop (competitive activity)
Lesson: Test at least weekly. Twice weekly is better for catching trends.
5. Focusing Only on ChatGPT (Limited Overall Visibility)
For the first month, I only tested ChatGPT because it was the most popular. Waste of time.
Different LLMs have different preferences:
Claude favored our explanatory content
ChatGPT preferred our data/stats
Perplexity loved recent updates
Google AI Overviews weighted domain authority
Optimizing for one meant missing the others.
Lesson: Test across multiple platforms from day one.
6. Over-Optimizing for Specific Phrases (Came Across as Unnatural)
I tried forcing specific phrases into content because those phrases appeared in target queries.
Example: "remote team project management software for startups"
I'd cram that exact phrase into headings, first paragraphs, FAQs. It read terribly. Never worked.
Lesson: Optimize for intent, not exact phrases. LLMs understand semantic meaning.
7. Ignoring Content Freshness (Hurt Long-Term)
We updated content when optimizing it but then left it alone. After 3-4 months, citation rates dropped for those articles.
We started a monthly refresh rotation: update stats, add new examples, adjust recommendations. Citations stabilized.
Lesson: Fresh content performs better. Plan for ongoing updates.
Business Impact Beyond Visibility
The real question: did this actually drive business results?
Traffic Growth from AI Platforms
We tagged all referral traffic from ChatGPT, Claude, Perplexity, and Google AI to track it separately.
March 2025 (before optimization): 127 monthly visits from AI platforms
September 2025 (after optimization): 651 monthly visits
Increase: 412%
That's an additional 524 monthly visits from AI search.
Lead Quality
Here's what surprised us: AI-referred traffic converted better than organic search traffic.
Email signup conversion rates:
Organic search traffic: 2.3%
AI platform traffic: 3.8%
Free trial conversion rates:
Organic search traffic: 11.2%
AI platform traffic: 18.7%
Paid customer close rate:
Organic search leads: 8.9%
AI platform leads: 12.6% (42% higher)
Why? Our theory: AI provides context before users visit. They've already gotten a recommendation or detailed explanation. They arrive more educated and qualified.
Brand Awareness Surge
Branded search volume (people searching for our company name):
March 2025: 340 monthly branded searches
September 2025: 521 monthly branded searches
Increase: 53%
Even when people didn't click through from AI platforms, seeing us mentioned increased brand recall. They'd search for us directly later.
Unexpected Benefits
1. Improved traditional SEO performance
Our overall organic traffic increased 23% during this period. We think the content improvements that helped LLM visibility also helped traditional SEO.
2. Better content overall
Forcing ourselves to answer questions directly and provide substantive value made everything we wrote better. Our average time on page increased 31%.
3. Stronger brand authority
Getting cited by AI platforms positioned us as experts. We noticed this in sales calls: "I saw you mentioned in ChatGPT when I was researching tools..."
4. Partnership opportunities
Two partnership opportunities came from people who found us through AI search. One turned into a $15K referral deal.
ROI Calculation
Investment: $3,025 (money) + approximately 390 hours (Sarah's time + my time)
If we value Sarah's time at $75/hour and mine at $100/hour (blended rate of $87.50/hour):
Time cost: 390 hours × $87.50 = $34,125
Total investment: $37,150
Return (6 months):
Additional 524 monthly visits from AI platforms
18.7% trial conversion rate = 98 additional trials/month
12.6% of trials convert to paid = 12 new customers/month
Average first-year customer value: $3,200
Total first-year revenue from AI visibility: 12 × 6 months × $3,200 = $230,400
6-month attributed revenue: $115,200 (rough estimate, conservative)
ROI: 210% ($115,200 revenue / $37,150 investment)
This doesn't account for compounding benefits (these customers stay for years, refer others, etc.) or the brand awareness that's harder to quantify.
Was it worth it? Absolutely.
How to Replicate Our Results
You can do this. Here's your playbook based on what worked for us.
For Startups with Limited Budget (Under $500)
Month 1-2: Foundation
Week 1: Create your 50 test queries. Use actual customer questions.
Week 2: Run initial visibility audit. Test manually across ChatGPT, Claude, Perplexity, Google AI.
Week 3-4: Identify top 10 existing articles with citation potential.
Week 5-8: Optimize those 10 articles:
Rewrite first paragraphs for direct answers
Add FAQ sections (minimum 8 questions)
Implement FAQ schema markup
Update author bios with credentials
Add/update statistics
Budget: $0-200 (optional: data sources, schema help)
Time: 10-15 hours/week
Month 3-4: Expansion
Create 5-8 new comprehensive articles (2,000+ words each)
Focus on questions where you have genuine expertise
Add FAQ sections to all new content
Improve page speed (free tools: Google PageSpeed Insights, CloudFlare)
Build 3-5 high-quality backlinks (guest posting, expert quotes)
Budget: $100-200 (optional freelance help if needed)
Time: 12-18 hours/week
Month 5-6: Acceleration
Double down on content formats that got cited
Create a statistics/data page for your industry
Update existing high-performers with fresh data
Run a simple customer survey for original data
Final optimization pass on top 20 pages
Budget: $0-100
Time: 10-15 hours/week
Total budget: $200-500
Expected result: 40-60% visibility improvement (varies by industry)
For Teams with Budget (Up to $5K)
Same approach but accelerated:
Faster content creation: Hire freelancers for first drafts ($150-200 per article)
Better authority building: Invest in PR distribution ($300-500)
Original research: Professional survey ($500-1,000)
Paid tools: Semrush or similar for competitive analysis ($100-200/month)
Schema implementation: Developer help ($300-500 one-time)
Budget: $3,000-5,000
Timeline: Can compress to 4 months instead of 6
Expected result: 60-80% visibility improvement
Universal Principles (Whatever Your Budget)
1. Start with measurement
You can't improve what you don't track. Manual testing is free and effective.
2. Quality over quantity
Ten amazing articles beat fifty mediocre ones. LLMs favor depth and expertise.
3. Answer questions directly
Get to the point in the first paragraph. No fluff, no buildup.
4. FAQ sections are non-negotiable
This is the highest ROI activity. Add FAQs with schema to everything.
5. Show expertise
Detailed author bios matter. Real credentials and experience get cited.
6. Fresh content performs better
Plan for updates. Don't publish and forget.
7. Test across platforms
ChatGPT, Claude, Perplexity, and Google AI Overviews all have different preferences.
8. Be patient
This takes 2-3 months minimum to see meaningful results. Don't expect overnight changes.
What to Prioritize First
If you only have time for three things this week:
Add FAQ sections to your top 5 pages (with proper schema markup)
Rewrite first paragraphs of those same 5 pages to answer questions directly
Run your initial visibility audit to establish baseline
Start there. You'll see improvement within 4-6 weeks.
The Future: Maintaining and Growing LLM Visibility
Our visibility doesn't stay at 90% automatically. It requires ongoing work.
Ongoing Maintenance (What We Do Now)
Weekly testing: Still test 50 queries every Friday. Takes 90 minutes.
Monthly updates: Rotate through top 20 articles, updating at least 5 per month with fresh data, new examples, or expanded FAQs.
Quarterly content refresh: Every three months, full audit of our highest-traffic pages. Update anything that's stale.
Competitive monitoring: Track competitor citations monthly. If they surge in a category, we investigate why.
New content: Still publishing 2-3 new articles monthly, focused on emerging queries we see in testing.
Time investment now: About 8 hours/week (down from 15-20 during optimization phase)
Budget: $500-700/quarter for ongoing updates and tools
How Visibility Changes Over Time
We've noticed patterns:
Citation decay: Articles that haven't been updated in 4+ months see citation rates drop 15-20%.
Competitive response: As we started appearing more, competitors intensified their efforts. Our visibility in some categories dropped as they optimized.
Platform algorithm updates: LLMs change constantly. We saw a 5% drop across all platforms in August when ChatGPT updated to GPT-4.5. Recovered within two weeks with no changes on our end.
Seasonal trends: Some queries get more traffic (and citations) at certain times of year. January (planning season) is huge for us. Summer is slower.
Staying Ahead of Changes
1. Diversify across platforms
Don't rely on ChatGPT alone. It's 40% of the market now, but that could change. We maintain a strong presence across all four major platforms.
2. Own your categories
We picked 10 core topics where we want to be the definitive source. We invest heavily in those. Everything else is secondary.
3. Build genuine expertise
You can't fake authority long-term. We invest in original research, customer case studies, and thought leadership because it's the only sustainable moat.
4. Monitor new platforms
Gemini Advanced is growing. We started testing there monthly. Several AI search startups are emerging. We track them.
5. Focus on quality signals
Page speed, mobile optimization, proper schema, clear authorship. These fundamentals won't change regardless of algorithm updates.
Our Roadmap for 95%+
We want to hit 95% visibility by end of year. Here's how we'll get there:
The remaining 5 queries:
3 are dominated by enterprise competitors (Asana, Monday.com). We're targeting "alternative to [Enterprise Tool]" angles instead of competing directly.
2 are very technical deep-dives where we have weak content. Sarah's writing comprehensive guides to address them.
Expansion:
Adding 25 new test queries (bringing total to 75) to cover more of the customer journey
Creating content specifically for decision-stage queries (comparison, ROI, implementation)
Quality improvements:
Video content embedded in articles (testing if this helps citations)
More original research (planning quarterly surveys)
Expert quotes from customers and industry leaders
New platforms:
Gemini Advanced testing program
Emerging AI search tools
Perplexity Pro features
Predictions for LLM Visibility in Late 2026
Based on what we're seeing:
1. It will become table stakes
Right now, many companies don't track LLM visibility. By end of 2026, it'll be as standard as SEO tracking.
2. Paid tools will improve
The current LLM tracking tools are early stage. They'll get better, more accurate, and possibly cheaper.
3. The content quality bar will rise
As more companies optimize, mediocre content will become invisible. The quality threshold will increase.
4. Brand authority will matter more
LLMs already favor established brands. This will intensify. Building genuine authority will be the only sustainable strategy.
5. Specialization will win
Generalist content will struggle. Deep expertise in narrow topics will perform better.
6. New platforms will emerge
ChatGPT won't stay dominant forever. Be ready to optimize for whatever comes next.
7. Real-time data will be critical
LLMs with real-time web search capabilities will favor fresh, recently updated content even more.
My advice: Start now. The competitive advantage window is still open. Six months from now, it'll be much harder.
Conclusion
We started at 5% LLM visibility in March 2025. Six months and $3,025 later, we hit 90%.
The key wasn't magic tactics or secret strategies. It was consistent execution of fundamentals:
Answer questions directly
Add comprehensive FAQs with schema
Show genuine expertise
Create substantive, data-rich content
Update regularly
Test and iterate
Your starting point doesn't matter. Ours was terrible. We had good SEO but zero AI visibility. If we could 18x our visibility in six months, you can too.
The strategy is replicable. Everything we did is documented here. You can follow the same playbook with a similar budget and timeline.
This matters for your business. We saw 412% traffic growth from AI platforms, higher-quality leads, and $115K in attributable revenue in six months. The ROI is clear.
Start by measuring your current visibility. Pick 50 queries your prospects ask. Test them across ChatGPT, Claude, Perplexity, and Google AI Overviews. Record where you appear. That's your baseline.
Then follow month 1's playbook: optimize your top 10 articles. Add FAQs. Rewrite intros for directness. Add schema markup. Test again in 4 weeks.
You'll see improvement. Build from there.
Six months from now, you could be at 90% too.
The question is: will you start today, or will you wait until your competitors own the AI responses in your space?
We started in March when most companies weren't thinking about this. That early-mover advantage helped. But it's not too late. Most companies still haven't optimized for LLM visibility.
The opportunity is still here.
Go measure your visibility today. Then start optimizing.
---
Want the tracking spreadsheet template we used? The exact 50 queries we tested? Our FAQ schema template?
These would typically be lead magnets, but I'm including everything we learned in this case study because I genuinely want more companies to take LLM visibility seriously.
Your move.

Peter Frank
GEO Strategist