We’re living through one of those rare moments when the rulebook gets tossed out the window.
Right now, while you’re reading this, someone is typing a question into ChatGPT instead of Google. Someone else is asking Claude for software recommendations. Another person is trusting Perplexity to tell them which agency to hire. These aren’t edge cases anymore. According to Adobe Analytics, traffic from AI sources to U.S. retail sites jumped 3,500% between July 2024 and May 2025. Travel sites? A 3,200% increase.
The kicker is that most brands have absolutely no idea how they’re showing up in these conversations. They’re obsessing over Google rankings while an entirely new discovery layer is forming right under their noses. Meanwhile, a handful of scrappy marketers have figured out that influencing AI answers is startlingly easy. Some are doing it ethically. Others? Not so much.
Here’s what we’ve learned about the new game everyone’s playing.
The Thing Nobody Wants to Say Out Loud
Large language models are ridiculously easy to manipulate right now.
I know that sounds dramatic, but the evidence keeps piling up. We noticed it first when we saw marketers flooding the web with “best X software” articles, listing their own products at the top. They weren’t being subtle about it. Low-quality content, posted everywhere from Medium to Substack to their own blogs. A few weeks later, those same brands started appearing prominently when you asked ChatGPT or Claude for recommendations in their category.
Then researchers decided to test how far you could push it. They created articles with fake publication dates, claiming content was published recently when it was actually months or years old. The result? Those pages shot up hundreds of positions in what LLMs would recommend. Freshness, it turns out, is a massive ranking signal for AI tools. Unlike Google, which has spent two decades building sophisticated systems to detect manipulation, these AI platforms are operating with training wheels on.
That’s simultaneously an opportunity and a problem.
The opportunity is obvious if you run a business or manage marketing for one. Understanding how to ethically increase your brand’s visibility in AI-generated answers could be the difference between thriving and disappearing over the next three years. LLM visibility tools now track exactly this metric. Platforms like Otterly.AI, Profound, and Semrush’s AI SEO Toolkit let you monitor how often your brand appears when relevant questions get asked, which competitors are mentioned more frequently, and which content sources AI platforms trust most.
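If you want a feel for what those tools measure before paying for one, you can run a crude version yourself. Here’s a minimal sketch, assuming the OpenAI Python SDK and placeholder brand names; any chat-capable model works the same way. It’s a toy next to the dedicated platforms, but it surfaces the same basic signal.

```python
# Minimal DIY visibility probe: ask an LLM a category question and check
# which brands its answer mentions. Assumes the OpenAI Python SDK (>=1.0)
# with OPENAI_API_KEY set; queries and brand names are placeholders.
from openai import OpenAI

client = OpenAI()

QUERIES = [
    "What are the best project management tools for small agencies?",
    "Which project management software would you recommend for a 10-person team?",
]
BRANDS = ["YourBrand", "CompetitorA", "CompetitorB"]  # hypothetical names

def mentions(answer: str, brands: list[str]) -> list[str]:
    """Return the brands that appear in the answer (simple case-insensitive match)."""
    lowered = answer.lower()
    return [b for b in brands if b.lower() in lowered]

for query in QUERIES:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # swap in whichever model you want to audit
        messages=[{"role": "user", "content": query}],
    )
    answer = resp.choices[0].message.content or ""
    print(query)
    print("  mentioned:", mentions(answer, BRANDS) or "none")
```

Swap in whichever models and questions your customers actually ask. What matters is running the same set repeatedly, so changes over time mean something.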
The problem is what happens when the bad actors figure this out too. Which, unfortunately, they already have.
When Reddit Became a Weapon
Let me tell you about CodeSmith because their story is equal parts infuriating and instructive.
CodeSmith ran a coding bootcamp. By most accounts, they did solid work, helped people learn to code, and built a decent reputation. Then their revenue dropped 80%. Jobs were lost. The business nearly collapsed. The culprit? A sustained campaign of negative Reddit posts that absolutely destroyed their online reputation.
Here’s where it gets interesting. Those posts weren’t authentic user complaints. They were orchestrated by a Reddit moderator who happened to co-found one of their competitors. This person posted negative content about CodeSmith nearly every single day for 500 consecutive days. Five hundred days. Every. Single. Day.
Those posts didn’t just hurt them on Reddit. They showed up prominently on the first page of Google results when anyone searched for CodeSmith. More critically, they influenced what AI platforms said when asked about coding bootcamps. Reddit holds enormous sway with LLMs right now. In June 2025, Reddit accounted for 40.1% of all citations in AI-generated answers, beating Wikipedia, YouTube, and Google itself.
Think about that for a second. A single motivated individual with moderator privileges systematically destroyed a company’s reputation using a platform that AI tools trust implicitly. The campaign worked because LLMs don’t have Google’s two decades of spam-fighting infrastructure. They see frequent mentions, especially from Reddit, and interpret that as signal rather than manipulation.
This isn’t a one-off. Similar patterns are showing up everywhere once you know what to look for. SEO agencies are writing “best marketing agency” listicles, putting themselves at the top, publishing those articles across dozens of sites, and watching their names pop up in ChatGPT recommendations weeks later. It works. Not because it’s sophisticated, but because the systems aren’t built to catch it yet.
The Ethics Get Messy Fast
I had a fascinating exchange about this recently. Someone pointed out that marketers shouldn’t be gaming these signals. That we have a responsibility to let AI tools develop organically without manipulation.
My response was blunt: Why?
The companies building these AI tools haven’t exactly held themselves to high ethical standards. The training data for most large language models was scraped without permission, often violating copyright. Artists, writers, and creators had their work used to build billion-dollar systems without compensation or even acknowledgment. OpenAI, Google, Anthropic, all of them built their products by taking first and asking questions later (or never).
So when someone tells me marketers should play fair on a field these companies never kept level, I struggle to find sympathy. Besides, there’s a spectrum here between ethical optimization and outright manipulation, and finding the line requires nuance.
On one end, you have brands creating genuinely valuable content and making sure it’s easy to find. On the other end, you have the Reddit moderator destroying a competitor through fake grassroots outrage. That’s malicious. It’s also increasingly common because, frankly, it works.
Most brands will operate somewhere in the middle. They’ll create legitimate content, amplify it strategically, and make sure their best messaging shows up in the places AI tools are likely to reference. That’s just smart marketing adapted for a new channel. The question is whether you’re being honest about what you’re doing and whether the content itself provides value.
What Actually Works Right Now
Let’s get practical. Based on experiments from marketers like Ross Hudgens, research from platforms tracking LLM visibility, and patterns we’re seeing emerge, here’s what influences AI answers today.
Volume and distribution matter more than quality.
I hate writing that sentence, but it’s true right now. LLMs have a strong bias toward brands that appear frequently across multiple documents on the web. A single high-authority mention is good. Twenty medium-authority mentions across different platforms seem to work better. The algorithms aren’t sophisticated enough yet to heavily weight source quality the way Google learned to do.
This explains why the “spam everywhere” approach is succeeding. Create a dozen “best [category] software” articles. List your brand prominently. Publish them on your site, Substack, Medium, LinkedIn, and anywhere else that’ll host them. Those mentions accumulate, and LLMs start treating your brand as noteworthy in that space.
Recency is a huge signal.
AI platforms strongly favor recent content, even more aggressively than Google does. Researchers manipulating publication dates saw massive visibility improvements. The lesson isn’t to fake your dates (please don’t), but to consistently publish fresh content. Regular updates to existing high-performing articles also help. If you wrote a great guide two years ago, refresh it this month with current data and examples.
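If you do refresh that older guide, make the update visible in the page’s metadata, not just the copy. Here’s a small sketch, assuming your CMS lets you emit schema.org JSON-LD; whether any particular AI crawler weights dateModified is an assumption, but keeping the field honest costs nothing and records the refresh wherever dates are parsed.

```python
# One way to make a genuine refresh visible to crawlers: keep schema.org
# Article metadata accurate. A sketch, assuming you render JSON-LD into the
# page template; whether a given AI crawler reads dateModified is an assumption.
import json
from datetime import date

def article_jsonld(headline: str, published: str, modified: str, author: str) -> str:
    """Build a schema.org Article JSON-LD block with honest date fields."""
    data = {
        "@context": "https://schema.org",
        "@type": "Article",
        "headline": headline,
        "datePublished": published,   # keep the original publication date
        "dateModified": modified,     # update only when the content actually changes
        "author": {"@type": "Person", "name": author},
    }
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'

print(article_jsonld(
    headline="The Complete Guide to LLM Visibility",
    published="2023-04-12",
    modified=date.today().isoformat(),
    author="Jane Example",
))
```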
Reddit and YouTube remain disproportionately influential.
Both platforms show up constantly in LLM citations. Creating valuable content on these platforms, or getting your brand mentioned positively there through authentic community engagement, drives visibility. The challenge is doing this genuinely. Reddit communities in particular are incredibly good at sniffing out inauthentic participation. But answering real questions, sharing expertise, and becoming a known helpful voice absolutely pays dividends.
Structured data makes life easier for AI.
LLMs love content that’s easy to parse. Clear headings, bulleted lists, FAQ sections, comparison tables. These formats get referenced more frequently because they’re simpler for the models to understand and excerpt. If you’re writing pillar content, structure it with AI consumption in mind.
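One low-effort way to do that is to generate the human-readable FAQ and its machine-readable counterpart from the same source, so they never drift apart. Here’s a sketch, assuming a template setup where you can print HTML plus schema.org FAQPage JSON-LD; the schema type is real, the questions and answers are placeholders.

```python
# Generate an on-page FAQ section and matching schema.org FAQPage JSON-LD
# from one list of Q&A pairs, so the readable and machine-readable versions
# stay in sync. Questions and answers here are placeholders.
import json

FAQS = [
    ("What is LLM visibility?",
     "How often your brand appears when people ask AI assistants questions in your category."),
    ("How do I measure it?",
     "Run recurring category queries against the major AI platforms and log when you're mentioned."),
]

def faq_html(faqs) -> str:
    """Render the FAQ as simple heading + paragraph pairs."""
    return "\n".join(f"<h3>{q}</h3>\n<p>{a}</p>" for q, a in faqs)

def faq_jsonld(faqs) -> str:
    """Render the same pairs as a schema.org FAQPage block."""
    data = {
        "@context": "https://schema.org",
        "@type": "FAQPage",
        "mainEntity": [
            {"@type": "Question", "name": q,
             "acceptedAnswer": {"@type": "Answer", "text": a}}
            for q, a in faqs
        ],
    }
    return f'<script type="application/ld+json">{json.dumps(data)}</script>'

print(faq_html(FAQS))
print(faq_jsonld(FAQS))
```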
Positive sentiment in mentions matters.
Tools tracking LLM visibility now measure not just how often you’re mentioned, but in what context. Neutral mentions help. Negative mentions obviously hurt. Positive mentions where your brand is recommended or praised carry the most weight. This makes reputation management across all platforms, not just your own properties, critically important.
The Measurement Problem Nobody Talks About
Here’s something that keeps me up at night. Most brands experiencing growth from LLM visibility have no idea it’s happening.
Traditional analytics don’t capture this channel well. When someone discovers your brand through a ChatGPT conversation, researches you further, then visits your site directly three days later, that traffic shows up as “direct” or possibly “organic” if they Googled your name. The LLM interaction that started the whole journey is invisible.
Analytics platforms were built for a click-based internet. You’d see a search result, click it, land on a site. Simple cause and effect, easy to track. The new customer journey looks different. Someone asks ChatGPT for recommendations. Sees your brand. Doesn’t click anything. Days later, they remember your name and search for it directly. Or they mention you to a colleague who eventually reaches out.
Backlinko’s team experienced exactly this recently. Their organic clicks dropped 15% while impressions surged 54%. Traditional metrics said they were declining. Reality was that more people were discovering them through AI responses, just not clicking immediately. Sales calls started including comments like “found you through ChatGPT,” which never appeared in any analytics dashboard.
This creates a measurement paradox. Your most effective discovery channel is completely hidden from your reporting. The only way to know if LLM visibility is working is to actively track it with specialized tools, correlate timing with branded search increases, and ask new customers directly how they found you.
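You can also recover a sliver of the hidden channel by tagging the sessions that do arrive with an AI referrer. The sketch below assumes a sessions.csv export with a referrer column and a hand-maintained hostname list; it will undercount badly, because most AI-influenced visits carry no referrer at all, but it at least separates the visible fraction from generic direct traffic.

```python
# Tag sessions whose referrer points at an AI assistant so they stop hiding
# inside "direct" traffic. Assumes a sessions.csv export with a "referrer"
# column; the hostname list is a hand-maintained guess, not an official one.
import csv
from urllib.parse import urlparse

AI_HOSTS = {"chatgpt.com", "chat.openai.com", "perplexity.ai",
            "gemini.google.com", "claude.ai"}  # extend as you spot new ones

def is_ai_referrer(referrer: str) -> bool:
    host = urlparse(referrer).netloc.lower().removeprefix("www.")
    return host in AI_HOSTS

ai_sessions, total = 0, 0
with open("sessions.csv", newline="") as f:
    for row in csv.DictReader(f):
        total += 1
        if row.get("referrer") and is_ai_referrer(row["referrer"]):
            ai_sessions += 1

print(f"{ai_sessions} of {total} sessions carried an AI referrer")
print("Note: AI-influenced visits with no referrer still look like direct traffic.")
```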
Companies serious about this are now using LLM monitoring platforms. Peec AI, Profound, Scrunch AI, and others simulate thousands of relevant queries across multiple AI platforms daily. They track when your brand appears, in what context, which competitors are mentioned alongside you, and how sentiment shifts over time. It’s like rank tracking for the AI era.
The data these tools surface is fascinating. You might discover you’re the fifth most visible brand in your category overall but the top recommendation for a specific use case. Or that Claude mentions you positively while ChatGPT doesn’t mention you at all. That granularity lets you optimize strategically rather than guessing.
The University of Zurich Incident
We need to talk about what happened on Reddit earlier this year because it’s both concerning and revelatory.
Researchers from the University of Zurich ran a covert experiment on r/ChangeMyView, a subreddit dedicated to civil debate. They created AI-powered bot accounts that engaged with real users, attempting to change their opinions on various topics. The bots used sophisticated personalization, analyzing users’ post histories to infer age, gender, political leanings, and other attributes. Then they crafted responses tailored to each individual.
The bots posted over 1,700 comments over four months. Their persuasion rate was three to six times higher than human commenters. Some bots hit an 18% success rate at changing minds, putting them in the 99th percentile of all users. When moderators discovered what was happening, they were furious. Reddit’s Chief Legal Officer called it “deeply wrong on both a moral and legal level” and initiated legal proceedings.
The researchers defended their work, arguing it would help society prepare for malicious uses of AI persuasion. Maybe. But the experiment proved something unsettling. AI can now influence human opinion in real social contexts with disturbing effectiveness. Most people had no idea they were arguing with bots.
This matters for brands because the same techniques could be deployed anywhere. Gaming LLM visibility through content volume is one thing. Using AI to systematically shape public perception through fake grassroots engagement is another entirely. The line between ethical optimization and psychological manipulation exists, even if it’s getting harder to see.
We’re entering an era where trust in online discourse becomes increasingly fragile. If people start assuming that any persuasive comment might be AI-generated, that corrodes the authenticity that makes platforms like Reddit valuable in the first place. Brands that want long-term success need to operate transparently and resist the temptation to fake grassroots support, even if competitors are doing it.
How Smart Brands Should Approach This
Look, I’m not going to tell you to ignore LLM visibility. That would be terrible advice. This channel is growing exponentially and will likely represent a significant portion of customer discovery within two years.
But there’s a right way and a wrong way to approach it.
The wrong way is pure spam. Creating dozens of low-quality articles that exist solely to list your brand, faking publication dates, generating fake Reddit accounts to praise yourself, or any other tactic that relies on deception. These approaches might work temporarily. They’ll also blow up in your face when platforms get smarter about detection (which they will) and when customers realize what you’ve been doing (which they will).
The right way starts with creating genuinely valuable content that deserves to be referenced. That sounds boring and obvious, but it’s true. LLMs are trained to surface helpful, authoritative information. If your content solves real problems and demonstrates expertise, you have a legitimate foundation.
From there, amplify strategically. Make sure your best content appears in multiple formats and on multiple platforms. Turn that comprehensive guide into a YouTube video. Write a LinkedIn article summarizing key insights. Answer related questions on Reddit authentically, linking back when it’s genuinely helpful. Create an FAQ section that addresses common questions in your space.
Structure everything for easy AI consumption. Use clear headings that match how people actually ask questions. Include comparative sections if you’re in a competitive space. Make your value proposition explicit rather than assuming it’s obvious. LLMs excel at extracting well-organized information, so give them that.
Monitor your visibility actively using one of the tracking tools. You can’t optimize what you don’t measure. Check weekly to see which competitors are gaining ground, which queries show your brand positively, and where gaps exist. Use that data to inform content strategy rather than for manipulation.
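If you’re logging those checks somewhere (a spreadsheet, or the output of the probe sketched earlier), turning them into weekly share of voice takes only a few lines. A sketch, assuming a mentions.csv with date, query, and brand columns; the file layout is illustrative, not a standard.

```python
# Turn a log of mention checks into weekly share of voice, so "which
# competitors are gaining ground" becomes a number rather than a feeling.
# Assumes a mentions.csv with columns: date, query, brand (one row per
# brand mentioned in an answer); names and layout are illustrative.
import csv
from collections import Counter, defaultdict
from datetime import date

weekly = defaultdict(Counter)  # ISO week -> brand -> mention count

with open("mentions.csv", newline="") as f:
    for row in csv.DictReader(f):
        year, week, _ = date.fromisoformat(row["date"]).isocalendar()
        weekly[f"{year}-W{week:02d}"][row["brand"]] += 1

for week in sorted(weekly):
    counts = weekly[week]
    total = sum(counts.values())
    shares = ", ".join(f"{brand}: {count / total:.0%}"
                       for brand, count in counts.most_common())
    print(f"{week}  {shares}")
```

A competitor’s share climbing week over week is often the earliest visible sign they’ve started investing in this channel.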
Be consistent with your positioning and language. Repetition creates pattern recognition: if you describe your company the same way across every platform and that description appears in hundreds of places, LLMs learn to associate those terms with your brand. That’s strategic messaging, not spam.
Encourage authentic third-party mentions. The single best thing for LLM visibility is other people talking positively about your brand in places like Reddit, YouTube comments, industry forums, and review sites. You can’t fake that at scale without getting caught. But you can earn it by doing great work, being helpful in communities, and making it easy for happy customers to share their experiences.
Finally, think long-term. These AI platforms will get dramatically smarter about detecting manipulation over the next few years. Google took a decade to build sophisticated anti-spam systems. OpenAI, Anthropic, and Google will move faster. The brands that build visibility through genuine value creation will maintain that visibility as algorithms evolve. The ones relying on gaming the system will get wiped out in updates.
Where This Goes Next
I’ve been in marketing long enough to recognize inflection points when they appear. This is one of them.
Five years from now, we’ll probably look back at 2025 as the moment when AI-driven discovery became mainstream. Traditional search won’t disappear, but it’ll share the stage with conversational AI interfaces that billions of people trust for recommendations, research, and answers.
The brands that thrive in that environment will be the ones that started paying attention now. Not because they spammed their way to visibility, but because they understood the new landscape early and adapted thoughtfully. They created content worth citing. They built authentic community presence. They monitored their positioning and adjusted strategically.
The brands that struggle will be the ones that kept optimizing for 2015’s playbook while the world moved on. Or worse, the ones that took shortcuts and burned their credibility when the tactics stopped working.
Personally, I find this moment fascinating. Frustrating too, especially watching manipulation tactics succeed. But mostly fascinating because we’re witnessing the birth of an entirely new marketing channel in real time. How often does that happen?
The ethical questions are complicated and won’t get resolved quickly. But what’s not complicated is this: LLM visibility matters now. It’ll matter more next year. Brands that ignore it are leaving opportunity on the table for competitors. Brands that embrace it strategically, honestly, and thoughtfully will build advantages that compound.
So here’s my advice. Start tracking how your brand appears in AI-generated answers today. Use one of the monitoring tools. Run queries yourself across ChatGPT, Claude, Gemini, and Perplexity. See where you show up and where you don’t. Look at which competitors are mentioned more frequently and why.
Then build a strategy that earns visibility rather than faking it. Create content that deserves to be referenced. Show up authentically in communities. Make your positioning clear and consistent. Measure progress regularly and adjust as you learn what works.
The rules are still being written. That’s uncomfortable for people who prefer stability. But it’s also the best time to establish leadership, because the brands that figure this out first will be the ones everyone else is trying to catch up to later.
We’re in the wild west of AI search right now. That won’t last forever. Use this window while it’s open.
FAQs
What is LLM visibility, and why does it matter?
LLM visibility measures how often your brand appears when people ask questions to AI platforms like ChatGPT, Claude, Gemini, or Perplexity. It matters because millions of people now use these tools for research and recommendations instead of traditional search engines. Traffic from AI sources to retail sites jumped 3,500% between July 2024 and May 2025, making this a rapidly growing discovery channel.
What factors influence how often a brand appears in AI answers?
Current LLMs heavily favor brands that appear frequently across multiple sources on the web, particularly on platforms like Reddit and YouTube. Recency is also a major factor, with newer content getting prioritized. Positive sentiment in mentions, structured content that’s easy to parse, and consistent positioning all contribute to higher visibility.
Can you track how your brand shows up in AI answers?
Yes, specialized LLM monitoring tools now exist specifically for this purpose. Platforms like Otterly.AI, Profound, Peec AI, Semrush’s AI SEO Toolkit, and Scrunch AI track brand mentions across multiple AI platforms. They show how often you appear, in what context, how sentiment trends over time, and how you compare to competitors.
Is optimizing for LLM visibility ethical?
There’s a spectrum. Creating valuable content and ensuring your positioning is consistent across platforms is ethical strategic marketing. Faking publication dates, generating fake reviews, or creating bot accounts to artificially boost mentions crosses into manipulation. The key question is whether you’re being honest and whether your content provides genuine value.
How do you improve your brand’s LLM visibility?
Start by creating comprehensive, well-structured content that answers common questions in your space. Publish consistently across multiple platforms including your site, YouTube, and LinkedIn. Engage authentically in relevant Reddit communities where your expertise adds value. Use clear headings, FAQ sections, and organized formats that AI can easily parse and reference.
Why does Reddit matter so much for LLM visibility?
Reddit accounted for 40.1% of all citations in AI-generated answers as of June 2025, more than Wikipedia, YouTube, or Google. LLMs treat Reddit content as highly trustworthy because it represents real user discussions and experiences. Authentic participation in relevant subreddits can significantly impact how AI platforms perceive and mention your brand.
How is LLM visibility different from traditional SEO?
Traditional SEO optimizes for ranking in search engine results pages where users click through to your site. LLM visibility is about being mentioned within AI-generated answers where users may never click anything. It’s also harder to measure since these interactions often don’t show up in standard analytics as a referral source.
Are competitors already gaming AI answers?
Yes and no. Some competitors are likely using aggressive tactics to boost their visibility, and those tactics are working in the short term. However, AI platforms will develop better spam detection over time, similar to how Google evolved. Brands building visibility through genuine value will maintain that advantage, while those relying on manipulation will likely get penalized in future updates.
How does content freshness affect LLM visibility?
Recency is a major ranking signal for AI platforms, even more than for traditional search engines. Consistently publishing new content or updating existing articles with current information helps significantly. Researchers found that manipulating publication dates to make content appear newer resulted in dramatic visibility improvements, though that specific tactic is unethical.
Is LLM visibility more important than traditional search?
Not yet, but it’s becoming an essential complement. Traditional search still drives significant traffic, but AI-driven discovery is growing rapidly. Smart brands are investing in both, understanding that the customer journey now often includes AI interactions that don’t show up in standard analytics. Measure both channels and optimize accordingly.
How often should you monitor your LLM visibility?
Weekly monitoring is ideal if you’re serious about this channel. Daily checks show too much noise, since LLM outputs can vary significantly from run to run. Monthly reviews are too infrequent to catch important trends or competitive movements. Use a monitoring tool to automate the tracking rather than doing it manually.
What’s the biggest mistake brands make with LLM visibility?
Assuming it doesn’t matter yet or waiting to see how things develop. The brands building visibility now are establishing advantages that will compound over time. The second biggest mistake is taking shortcuts through manipulation tactics that might work briefly but damage credibility long-term once detection improves.