A detailed case study of how a Series B SaaS company went from invisible in AI search to consistently cited by ChatGPT and Perplexity in 90 days, and the exact tactics that drove the shift.
In January 2026, a Series B project intelligence platform came to Arclign with a problem that is becoming increasingly common: they were winning deals through referrals and existing customer networks, but when enterprise buyers searched AI engines for solutions in their space, they simply did not appear. Not ranked lower than competitors — completely absent. Their category existed in AI search. Their competitors were named in it. They were not.
Ninety days later, the same company was appearing in ChatGPT responses for 140 distinct queries relevant to their product. Perplexity was citing them in answers about project intelligence, construction analytics, and capital project management. Three enterprise prospects that quarter mentioned to the sales team that they had "seen Arclign recommend" the platform — except they meant their AI search tool, not us. That attribution confusion is, perversely, a sign the strategy worked.
This is that story. With the client's permission to share methodology and outcomes (though not their name), I want to walk through exactly what we did, in what sequence, and why each element mattered.
The Starting Point: An AI Visibility Audit
Before building any strategy, we ran a baseline AI visibility audit. This means systematically querying ChatGPT, Perplexity, Gemini, and Microsoft Copilot across every query variant we could identify in the client's space, then documenting which brands appeared, how they were described, and which sources were cited.
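For readers who want to reproduce this step, the query loop itself is straightforward to automate. Below is a minimal sketch using the official OpenAI Python SDK; the query list, brand names, and output format are illustrative placeholders, and Perplexity, Gemini, and Copilot each need their own client or a manual pass.

```python
# Minimal sketch of an AI visibility audit loop, assuming the official
# OpenAI Python SDK. Queries and brand names below are illustrative
# placeholders, not the client's real audit set.
import csv
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUERIES = [
    "what are the best platforms for capital project intelligence?",
    "top construction analytics tools for enterprise teams",
    # ... the full audit covered 180 query variants
]
BRANDS = ["ClientCo", "Competitor A", "Competitor B", "Competitor C"]

with open("baseline_audit.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["query", "brand", "mentioned"])
    for query in QUERIES:
        response = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": query}],
        )
        answer = response.choices[0].message.content.lower()
        for brand in BRANDS:
            # Crude substring check; manual review still catches
            # paraphrased or indirect mentions.
            writer.writerow([query, brand, brand.lower() in answer])
```

A script like this only handles presence checks; how each brand is described and which sources are cited still require reading the answers.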
The audit took three days and covered 180 queries. The results were stark. The client appeared in zero of them, not even for category questions that described their own product almost word for word ("what are the best platforms for capital project intelligence?"). Three competitors appeared frequently. Two were older, established brands. One was a newer entrant that had launched a systematic GEO programme earlier that year.
The newer competitor's rapid emergence was instructive. It showed that AI visibility is not simply a function of brand age or market share — it is a function of structured content, authority signals, and citation architecture. All three of which can be built deliberately.
Phase One: Entity Definition and On-Site Content (Days 1–30)
The first month focused entirely on making the client's website a reliable source of extractable, citable information. This was less glamorous than it sounds — it was mostly rewriting.
We audited every page on the site for what I call "extractable claim density" — the proportion of content that contains specific, factual, citable statements about what the company does, who it serves, and what outcomes it produces. For most pages, this proportion was below 20%. The remaining 80% was a mix of benefit statements, feature lists, and aspirational brand language that reads well to humans but tells AI models almost nothing they can confidently cite.
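"Extractable claim density" is a working heuristic rather than a standard metric, but a crude first pass can be scripted. The sketch below is one such assumption-laden pass; the vague-language word list and thresholds are illustrative, and in practice anything like this needs to be paired with manual review of every page.

```python
# Crude heuristic sketch for "extractable claim density": the share of
# sentences carrying specific, verifiable detail. Word list and
# thresholds are illustrative assumptions, not a validated classifier.
import re

VAGUE = {"powerful", "seamless", "innovative", "best-in-class",
         "cutting-edge", "world-class", "transform", "empower"}

def claim_density(page_text: str) -> float:
    sentences = re.split(r"(?<=[.!?])\s+", page_text.strip())
    sentences = [s for s in sentences if s]
    if not sentences:
        return 0.0
    citable = 0
    for s in sentences:
        words = {w.lower().strip(".,") for w in s.split()}
        # "Specific" here means it contains a number or enough detail
        # to stand alone; vague marketing words disqualify a sentence.
        has_specifics = bool(re.search(r"\d", s)) or len(s.split()) > 12
        if has_specifics and not (words & VAGUE):
            citable += 1
    return citable / len(sentences)

print(claim_density(
    "Powerful analytics for complex projects. The platform processes "
    "real-time cost, schedule, and risk data across 2,000 active "
    "construction and infrastructure projects."
))  # prints 0.5: one vague sentence, one citable one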
We rewrote the homepage, all product pages, and the About page with a different primary objective: AI extraction. The language became more direct, more specific, and more encyclopaedic. "Powerful analytics for complex projects" became "a capital project intelligence platform that processes real-time cost, schedule, and risk data across active construction and infrastructure projects." Less exciting to read. Dramatically more citable.
We also added structured FAQ sections to eight pages, each containing 4–6 questions written to match the actual query language buyers were using in AI searches. These questions were identified from sales call recordings, customer success conversations, and competitor analysis of which queries were triggering AI answers in the space. Each answer was written to be self-contained and directly citable.
That trade-off, prose that is less exciting to read in exchange for being dramatically more citable, is the core of GEO content work.
Phase Two: Structured Data and Schema Implementation (Days 15–45)
Parallel to the content work, with a two-week overlap, we implemented a comprehensive schema markup programme. This included:
- Organisation schema on the homepage, with detailed sameAs properties linking to all authoritative third-party profiles
- FAQPage schema on all pages containing FAQ sections
- SoftwareApplication schema on product pages, with detailed feature and capability descriptions
- BreadcrumbList schema across the site for navigational clarity
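As a concrete illustration, the markup followed the standard Schema.org JSON-LD pattern, which is easy to generate programmatically. The sketch below shows placeholder Organisation and FAQPage blocks; every name, URL, and answer in it is invented for illustration, not taken from the client's actual markup. (The Schema.org vocabulary itself uses the US spelling "Organization".)

```python
# Sketch of the Organisation and FAQPage JSON-LD embedded in
# <script type="application/ld+json"> tags. All names, URLs, and
# answer text below are placeholders.
import json

organisation = {
    "@context": "https://schema.org",
    "@type": "Organization",  # Schema.org uses the US spelling
    "name": "ExampleCo",
    "url": "https://www.example.com",
    "description": ("A capital project intelligence platform that "
                    "processes real-time cost, schedule, and risk data "
                    "across active construction and infrastructure "
                    "projects."),
    "sameAs": [
        "https://www.linkedin.com/company/example",
        "https://www.g2.com/products/example",
    ],
}

faq_page = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "What is capital project intelligence software?",
        "acceptedAnswer": {
            "@type": "Answer",
            "text": ("Software that aggregates cost, schedule, and "
                     "risk data from active projects into a single "
                     "source of analysis for project owners."),
        },
    }],
}

for block in (organisation, faq_page):
    print('<script type="application/ld+json">')
    print(json.dumps(block, indent=2))
    print("</script>")
```

Note how the description property reuses the same citable phrasing as the rewritten homepage copy; the consistency between visible content and markup is the point.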
Schema markup is not a silver bullet, and we're careful about overclaiming its direct impact on AI citation. What it does is reinforce entity associations — it signals to AI systems what kind of entity your brand is, what category you belong to, and what you do. When combined with strong on-site content, it creates a more coherent and confident signal for AI extraction systems to work with.
The implementation took a senior developer three weeks of part-time work. We validated everything with Google's Rich Results Test and Bing Webmaster Tools before moving on.
Phase Three: Authority Signals — Analyst Relations and Earned Content (Days 30–90)
The third phase was the highest-leverage and slowest-moving: building the external authority signals that AI models use to validate and amplify on-site content. For this client, we focused on three channels.
Industry analyst coverage. We prepared and delivered briefings to analysts at G2, Capterra, and TrustRadius who cover project management and construction tech. This resulted in updated analyst profiles that described the platform in specific, citable language. G2 category placement matters significantly for AI citations in competitive software queries.
A proprietary research report. We produced a 1,400-word data report on "The State of Capital Project Delivery in 2026," drawing on the client's anonymised platform data. We pitched it to industry press, and it generated editorial coverage in two construction technology trade publications. Both pieces described the client in specific, consistent language aligned with our target citation phrases.
LinkedIn Pulse articles from the CEO. Five CEO-authored articles covering capital project risk, construction technology adoption, and project intelligence best practices. These were published on LinkedIn with full company attribution, providing additional indexed content from an authoritative source.
The Results: 90-Day Outcome Breakdown
By day 90, the picture had changed substantially. We re-ran the same 180-query audit from the baseline, plus an expanded set of 60 additional queries identified during the programme.
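Because the baseline and day-90 audits used the same query set and output format, comparing them takes only a few lines of scripting. The sketch below assumes the CSV layout from the audit loop earlier in this piece; the file names are placeholders.

```python
# Sketch comparing baseline and day-90 audit CSVs produced by the
# query loop above; column names match that sketch.
import csv
from collections import Counter

def citation_counts(path: str) -> Counter:
    counts = Counter()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["mentioned"] == "True":
                counts[row["brand"]] += 1
    return counts

baseline = citation_counts("baseline_audit.csv")
day_90 = citation_counts("day90_audit.csv")
for brand in sorted(set(baseline) | set(day_90)):
    print(f"{brand}: {baseline[brand]} -> {day_90[brand]}")
```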
90-Day Results Summary
- ChatGPT citations: 140 queries (up from 0) — appearing in category recommendation, comparison, and feature queries
- Perplexity citations: 67 queries with source attribution to company blog and research report
- Google AI Overviews: appearing in 12 queries, with a branded result in 3
- Competitor gap: narrowed from 3 competitors ahead of the client in overall citation frequency to 1
- Sales attribution: 3 inbound enterprise prospects mentioned AI search as their discovery channel during the quarter
The results were not uniform across AI platforms. ChatGPT responded most strongly to the on-site content changes, with measurable citation increases within 30 days of the rewrite. Perplexity responded strongly to the research report and trade publication coverage; its real-time retrieval system indexed the coverage quickly and began surfacing it in relevant answers within a week of publication. Google AI Overviews moved slowest, with meaningful presence emerging only in weeks 10–12.
What This Case Study Reveals About AI Citation Mechanics
A few things stand out from this programme that I think generalise beyond this specific client.
First, the content rewrite had a faster and more measurable impact than we expected. The conventional wisdom is that AI citations are primarily driven by external authority signals — backlinks, press coverage, analyst mentions. In our experience, on-site content quality is a more immediate lever than many practitioners assume, particularly for ChatGPT's retrieval-augmented responses.
Second, research reports are disproportionately high-leverage for B2B brands. A proprietary data report gives journalists a reason to cover you and gives AI models a citable source with specific, verifiable data. It is one of the most efficient content investments in the GEO toolkit.
Third, consistency of language matters more than volume of content. The brands that dominate AI citation in competitive categories are typically not the ones producing the most content — they are the ones using the most consistent language across every touchpoint. When your homepage, your analyst profiles, your press coverage, and your CEO's LinkedIn articles all describe you in the same specific terms, AI models develop high-confidence entity associations. Inconsistency creates uncertainty, and uncertain AI models hedge by citing multiple sources or avoiding citation altogether.
For a deeper look at the content frameworks underlying this programme, the GEO Content Framework guide covers the layer-by-layer approach in detail. And for the technical implementation side, the structured data and schema markup guide explains exactly how the schema work in Phase Two functions.
Frequently Asked Questions
How long does it take to get cited by ChatGPT?
Based on Arclign's work with B2B SaaS and enterprise software clients, meaningful ChatGPT citation improvements typically begin appearing within 30 to 60 days of implementing strong on-site content changes and structured data. Broader citation across multiple AI platforms, including Perplexity and Google AI Overviews, generally takes 60 to 90 days from programme launch. External authority signals such as earned media, analyst coverage, and research reports amplify these gains significantly but take longer to build than on-site changes. In the case study above, ChatGPT citations began appearing in meaningful numbers within the first 30 days following the content rewrite phase.
What is the most important factor for AI citations for SaaS companies?
Based on Arclign's analysis of SaaS brands' AI citation performance, the most impactful individual factor is on-site content specificity — how clearly and specifically your website describes what your product does, who it serves, and what outcomes it produces. Generic benefit language is nearly uncitable by AI engines. Specific, factual, encyclopaedic descriptions are highly citable. Beyond on-site content, external authority signals such as G2 and analyst coverage, industry press, and proprietary research reports significantly amplify citation frequency.
Does schema markup directly improve AI citations?
Schema markup does not directly force AI citations, but it plays an important supporting role. Structured data — particularly Organisation, FAQPage, and SoftwareApplication schema — helps AI systems understand what your brand is, what category it belongs to, and what it does. This reinforces entity associations that influence citation decisions. The strongest results come when schema markup is implemented in combination with high-quality on-site content and external authority signals, rather than as a standalone tactic. In the case study above, schema implementation contributed to the overall programme effectiveness but was not the primary driver of citation gains.