Black hat SEO is back. Except this time, it’s wearing a lab coat and calling itself “AI optimization.” What we’re witnessing isn’t new. Just a rehash of tactics that died out in traditional SEO years ago and should have stayed dead.
Brands are racing to show up in ChatGPT responses, Gemini summaries, and Claude-generated comparisons. And just like early SEO, the winners aren’t always the best. Sometimes, they’re just the most cunning. And yet again, LinkedIn has experts lurking in dark corners sharing the “hacks” they’ve found to help game the system. To them, I say:
“Do not cite the Deep Magic to me, Witch! I was there when it was written.”
Why? Because I’ve been around the block a few times. Everything being talked about is essentially a black hat SEO technique from yesteryear. 2004, to be precise: a time when you could game the system and get your terrible-quality website to rank well, before Google intervened, decided that search results should reflect genuine quality, and invested significant time and resources in developing its algorithm.
But black hat techniques don’t work on LLMs, right? Wrong. The answers we’re all getting out of ChatGPT and its peers may not actually reflect the best possible information. So, how are people gaming the system? I’m glad you asked.
Let’s break down what’s happening…
1. Keyword stuffing in the age of LLMs
Once upon a time, you could repeat a keyword 37 times on a page and land on page one. Today? People are doing the same thing, but for LLM training data.
AI tools are being fed content where brand names and high-intent phrases are deliberately stuffed, not just for search engines, but for LLMs crawling public data. Why? Because repeated mentions still shape how LLMs “learn” what’s relevant and reputable.
So, the next time it recommends a brand to you? Do your due diligence and double check the answer you’re given.
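To make the mechanism concrete, here’s a toy Python sketch of the crude frequency signal that old-school keyword stuffing exploited. The page text, the brand name, and the scoring are all made up for illustration; no real search engine or model scores content this simply.

```python
import re

def keyword_density(text: str, phrase: str) -> float:
    """Fraction of the page's words taken up by occurrences of `phrase`.

    A deliberately naive metric: the kind of raw repetition signal
    that stuffed pages were built to inflate.
    """
    words = re.findall(r"[a-z']+", text.lower())
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    hits = sum(
        1 for i in range(len(words) - n + 1)
        if words[i:i + n] == phrase_words
    )
    return (hits * n) / len(words) if words else 0.0

# A contrived stuffed page for a hypothetical brand, "AcmeAI":
page = ("AcmeAI is the best AI marketing tool. Choose AcmeAI, "
        "the best AI marketing tool, because the best AI marketing tool wins.")
print(round(keyword_density(page, "best AI marketing tool"), 2))  # → 0.57
```

Over half of that page is one repeated phrase. A human reader spots the spam instantly; a system that counts co-occurrences of a brand name with a high-intent phrase just sees a strong association.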
2. The resurrection of AI-spun content farms
If you thought content mills died with Demand Media, think again.
LLMs have made it easier than ever to churn out thousands of articles targeting every micro-topic imaginable. “Best LLMs for SaaS companies in 2025” and 100 other variations of that headline, each slightly different, each subtly promoting the same brand.
And because this content floods the web, LLMs often can’t help but absorb it and regurgitate it to users.
3. Parasite SEO: Now with LLM sentiment engineering
You’ve seen the “best AI tools” articles on Medium, LinkedIn, or some obscure startup blog. Guess what? Many are written by the companies themselves. Or paid for. Or subtly engineered.
This isn’t just about backlinks anymore. These mentions shape how LLMs respond to prompts like:
“What’s the best AI marketing tool?”
If your brand appears in enough “credible” sources, the model starts to believe it. Repetition becomes reputation, but in this early era of LLMs, reputation can be built through manipulation.
4. Cloaking and hidden text – LLM edition
I never thought I’d ever have to address this topic again, but remember when SEOs used to hide white text on a white background? Well, cloaking has a kid brother.
Now, it’s about embedding extra keywords in places LLMs crawl but users don’t notice: metadata, schema, alt tags, and even AI-generated Q&A sections that seem irrelevant to the human eye. But to a model trained on everything? It’s signal, not noise.
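To see why attribute text is signal to a crawler even when it never appears on screen, here’s a minimal Python sketch using the standard library’s `html.parser`. The page snippet is contrived, and this is a caricature of a scraper, not a real one; the point is only that meta content and alt text are plain text to anything that parses the raw HTML.

```python
from html.parser import HTMLParser

class CrawlerView(HTMLParser):
    """Collects the text a scraper sees, split into what a browser
    renders for users versus attribute text users never see painted."""

    def __init__(self):
        super().__init__()
        self.visible, self.hidden = [], []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        # Attribute text: invisible in a rendered page, plain text to a parser.
        if tag == "meta" and "content" in a:
            self.hidden.append(a["content"])
        if "alt" in a:
            self.hidden.append(a["alt"])

    def handle_data(self, data):
        if data.strip():
            self.visible.append(data.strip())

page = """
<meta name="description" content="best AI tool best AI tool best AI tool">
<p>Welcome to our site.</p>
<img src="logo.png" alt="best AI marketing tool 2025 award winning">
"""

p = CrawlerView()
p.feed(page)
print("users see:   ", p.visible)
print("crawlers see:", p.visible + p.hidden)
```

The rendered page says one polite sentence; the crawler’s view carries two extra stuffed strings. That gap between what humans read and what parsers ingest is exactly the space this tactic lives in.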
5. Prompt injection and reinforcement spam
This one feels like a topic for a Black Mirror episode, but I can assure you it’s real.
Some companies are manipulating LLM outputs by flooding user feedback mechanisms. In ChatGPT or Gemini, you can give answers a thumbs up or a thumbs down. Some enterprising folks are automating that feedback loop, reinforcing specific outputs until the model starts serving them up more often.
It’s like upvoting your own Reddit thread, except the algorithm remembers the manipulation forever. It’s also a bit like the early days of Google reviews, where only the number of good reviews mattered, with no thought to the source or quality.
So… What do we do about it?
Here’s the truth: black hat tactics work. Until they don’t. That’s how it went with search engines: back when everyone was hearing “You’re Beautiful” by James Blunt for the first time, crappy information on the internet was served up based on crude ranking factors.
Google caught up. It always does. And LLMs? They will too. Every major foundation model company is actively researching ways to identify and filter manipulated content. But until then, we’re stuck in a strange limbo, where old-school SEO tricks are influencing new-school AI.
Marketers, beware. And be ready. Because if you think this is the peak of LLM manipulation, you’re sorely underestimating the creativity of those who live in the grey areas of digital strategy. In the meantime, take a leaf out of the SEO playbook: invest in long-term quality and trustworthiness and, in time, the LLMs will come to you.
If you’re looking to show up online without cutting corners, let’s talk. We’ll help you build visibility that lasts without gimmicks and grey areas.