PLAYBOOK · GEO & AI SEARCH
THE COMFORTABLE ILLUSION
There is a growing cultural assumption that when ChatGPT, Perplexity, or Google’s AI Overview recommends a product, a service, or a brand, the recommendation is somehow more objective than a traditional search result. The AI weighed the options. It considered the evidence. It gave you the best answer.
The reality is far more complicated, and for anyone building a consumer brand right now, it matters.
TL;DR
- Consumers trust AI recommendations more than traditional search results. That trust is premature.
- AI systems save compute by recycling top-ranking listicle articles. Those listicles are often pay-to-play.
- Scaled, AI-generated “Top [Category]” content is gaming the system at industrial scale, and most of the “winners” are brands nobody has ever heard of.
- Google has corrected spam at this scale before (see the September 2024 core update). AI answers will too. Eventually.
- For legitimate brands—especially in beauty, wellness, and food and beverage—the answer is both strategic placement and genuine expertise content. Not either/or.
AI SYSTEMS TAKE SHORTCUTS
The uncomfortable truth about how AI-generated recommendations work in practice: these systems are engineered to conserve compute.
When a shopper prompts ChatGPT with “what’s the best retinol for sensitive skin?” the ideal process would be for the AI to read hundreds of pages of clinical data, user reviews, dermatologist commentary, and ingredient analyses, then synthesize a genuinely informed answer.
What actually happens is lazier. The system queries for existing “best of” listicles, pulls from the top-ranking articles, and largely recycles whatever those lists recommend. The brands at the top of a consumer magazine roundup or a major publication’s listicle end up at the top of ChatGPT’s answer—not because the AI independently determined they were superior, but because the AI read the same listicle the shopper could have found herself.
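The shortcut described above can be sketched in a few lines. This is a hypothetical simplification, not any vendor's actual pipeline: the listicle URLs and brand names are invented, and the "answer" is nothing more than a frequency count over whatever the top-ranking roundups already recommend.

```python
from collections import Counter

# Invented listicle contents for illustration only -- these URLs
# and brands do not exist.
top_listicles = {
    "glossy-mag.example/best-retinols": ["BrandA", "BrandB", "BrandC"],
    "beauty-site.example/top-retinols": ["BrandA", "BrandC", "BrandD"],
    "wellness-blog.example/retinol-roundup": ["BrandA", "BrandB"],
}

def recycled_answer(listicles, top_n=3):
    """Rank brands by how many top listicles mention them.

    No clinical data, no reviews, no independent evaluation --
    just recycled list membership from whatever already ranks.
    """
    counts = Counter(brand for picks in listicles.values() for brand in picks)
    return [brand for brand, _ in counts.most_common(top_n)]

print(recycled_answer(top_listicles))
# BrandA leads: it appears in every listicle, so it tops the "AI" answer.
```

Note what never enters the function: efficacy data, pricing, reviews. The only signal is presence in content that already ranks, which is exactly why pay-to-play placement propagates straight into the AI's answer.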
THE LISTICLES ARE PAY-TO-PLAY
This is where the economics get uncomfortable.
In the beauty and consumer space specifically, the major publications operate listicle placements as a revenue stream. A brand purchasing a meaningful advertising buy often receives a placement in a “Best of” article as a value-add. Some publications sell listicle spots directly. And the affiliate model means publications earn commission on every sale generated through their recommendation links, creating an incentive to recommend products with the highest affiliate payouts rather than the highest clinical efficacy or consumer satisfaction.
The chain looks like this: a brand pays for placement in a listicle, the listicle ranks well in Google, the AI reads the listicle and surfaces those brands as recommendations, and the consumer receives what she believes is an objective AI-generated answer.
Every link in that chain involves money changing hands. None of that context reaches the consumer.
MANIPULATION IS RAMPANT, AND IT WORKS
Beyond the pay-to-play listicle ecosystem, there is outright manipulation happening at scale.
One notable example: a company published hundreds of AI-generated articles following a simple template—“Top [Category] [Service Providers]”—and placed themselves at #1 across every category. These articles ranked in Google, got picked up by AI systems, and resulted in that company appearing as the top recommendation for dozens of categories they had no demonstrable expertise in.
At industry conferences, speakers have pulled up these AI recommendations and asked rooms full of seasoned professionals whether they had heard of the "winning" brands. Nobody raised a hand. Yet these unknown entities consistently top AI-generated answers.
The manipulation works because of how Reciprocal Rank Fusion operates under the hood. AI systems perform multiple subqueries and synthesize results based on which entities appear most frequently across those subqueries. Flood the search results with enough self-promotional content and you appear across enough subqueries to trigger consistent AI recommendations, regardless of whether you deserve to be there.
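To make the flooding mechanic concrete, here is a minimal sketch of Reciprocal Rank Fusion, the standard scoring formula the paragraph above refers to (score = Σ 1/(k + rank) across result lists). The subquery results and brand names below are invented; the point is only to show why an entity that appears in every subquery list beats entities that appear inconsistently.

```python
from collections import defaultdict

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Fuse multiple best-first ranked lists into one ranking.

    An item's fused score is the sum of 1 / (k + rank) over every
    list it appears in, so entities that surface across many
    subqueries float to the top -- even at mediocre individual ranks.
    """
    scores = defaultdict(float)
    for results in ranked_lists:
        for rank, item in enumerate(results, start=1):
            scores[item] += 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical subquery results: "SpamCo" has flooded the index and
# ranks in every list, while legitimate brands appear sporadically.
subqueries = [
    ["SpamCo", "GlowLab", "DermaCo"],
    ["SpamCo", "DermaCo", "PureSkin"],
    ["SpamCo", "PureSkin", "GlowLab"],
]
print(reciprocal_rank_fusion(subqueries))
# SpamCo wins: three appearances beat any brand with two.
```

The k constant (60 is a common default) damps the advantage of a single #1 ranking, which is precisely what makes breadth of appearance, rather than quality of any one placement, the winning move for a manipulator.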
THE SPAM WILL GET CORRECTED. EVENTUALLY.
History provides some comfort. Google has repeatedly demonstrated its ability to identify and penalize manipulative content patterns. The September 2024 core update crushed several companies that had built their visibility on exactly this kind of scaled, low-quality content production.
“Eventually” is doing a lot of work in that sentence. There is a window during which manipulative strategies work, and legitimate brands that play by the rules face a real competitive disadvantage against those that don’t.
The brands building their AI visibility on substantiated claims (genuine third-party validation, real customer reviews, expert endorsement, authentic content) are building on a foundation that will hold through algorithm updates. The brands gaming the system with scaled content spam are building on borrowed time.
WHAT THIS MEANS FOR LEGITIMATE BRANDS
If your brand is trying to compete in this landscape honestly, here’s what matters right now.
The AI can be influenced. Influence it strategically.
There is currently no penalty in generative search for strategic placements in third-party publications. Unlike traditional SEO—where buying links could get your site penalized—there is no equivalent punishment mechanism in AI systems. Strategic digital PR, earned media, and affiliate partnerships that get your brand into the listicles that AI systems cite are legitimate and effective tactics.
On-site content matters more than ever.
AI systems need to find clear, comprehensive information about your brand on your own website. If you offer a product, a feature, or a differentiator, it needs to exist in crawlable, text-based content on your site. The brands that lose in AI visibility are often not stating their own value proposition clearly enough for AI systems to extract.
Off-site corroboration is the key differentiator.
The gap between brands that show up in AI answers and the brands that don’t often comes down to whether external sources validate their claims. Press coverage, YouTube reviews, creator content, social proof, Reddit threads, industry citations—these are not just PR wins anymore. They are the corroboration layer AI systems rely on to decide who deserves a recommendation.
Invest in content that demonstrates genuine expertise.
When algorithm corrections inevitably punish manipulative content, the content that survives will be content that delivers genuine information gain, such as insights, perspectives, or data that didn’t exist before. Subject matter expert interviews, original research, and case studies with real numbers exemplify the types of content that both AI systems and algorithm updates reward.
A CALL FOR TRANSPARENCY
The AI industry has a responsibility to be more transparent about how recommendations are generated; consumers deserve to know when an AI recommendation is essentially a recycled affiliate listicle rather than an independent evaluation.
Until that transparency exists, brands need to understand the real mechanics of AI-generated recommendations—not the idealized version—and build their strategies accordingly. The landscape rewards those who combine strategic visibility tactics with genuine value creation. It punishes those who rely exclusively on either approach alone.
FREQUENTLY ASKED QUESTIONS
Are AI recommendations objective?
No. AI systems save compute by finding top-ranking listicle articles and recycling their recommendations. Those listicles are often pay-to-play through endemic advertising partnerships and affiliate economics.
Is there a penalty for gaming AI recommendations?
Not currently in the AI-answer layer. No link equity, no spam penalty, no algorithmic punishment for purchased placements in cited sources. Google’s September 2024 core update penalized some scaled-content offenders on the organic SEO side, but the AI-answer layer has no equivalent enforcement yet.
What should legitimate brands do about the manipulation problem?
Invest in strategic digital PR and affiliate placements into cited sources. Make on-site content comprehensive and crawlable. Build off-site corroboration through earned media, creator content, and review platforms. Prioritize genuine-expertise content that will survive the next round of algorithm corrections.
About the Author
John Morabito is SVP of Search & Innovation at Stella Rising, where he leads integrated SEO, GEO, and AI-native marketing programs for consumer, beauty, and wellness brands.