
The Real Impact of AI Slop — What Do the Numbers Show, and What’s Next?

23 empirical measurements, all pointing to one thing: traditional measurement tools are losing their validity. 51% of internet traffic is bot traffic—and this is just the beginning.

VZ editorial frame

Read this piece through one operating lens: AI does not automate first, it amplifies first. If the underlying decision architecture is clear, AI scales clarity. If it is noisy, AI scales noise and cost.

VZ Lens

Through a VZ lens, the value is not information abundance but actionable signal clarity. Strategic value emerges when insight becomes execution protocol.

TL;DR — The Numbers That Don’t Lie

  • 51% of internet traffic is from bots — half of all ads aren’t “seen” by humans
  • 74.2% of new websites contain AI-generated content (Ahrefs, 2025)
  • 69% of Google searches are clickless — traditional search intent measurement is flying blind
  • The appearance of AI Overview reduces clicks by 30–61% (5 independent research firms)
  • 74% of Amazon AI reviews are five-star, 93% are “verified purchases” — but no one actually bought the product
  • Model collapse in the medical domain yields clinically unusable results after 2 generations
  • The supply of human content will run out by 2026 for AI training (Epoch AI)
  • The loss of trust is steep: Gallup’s media trust rating fell from 40% to 28% in 5 years
  • “Slop” causes an annual productivity loss of $9.1 million per organization

“If your metrics are based on old-world assumptions—what are they actually measuring?”


What is “slop,” and why does it matter?

The word “slop” was named Merriam-Webster’s Word of the Year in 2025. It originally meant swill: the food scraps fed to pigs. Today, on the internet, it means much the same thing: mass-produced, low-quality, machine-generated content that nobody asked for, but that is everywhere.

Think about it: you open YouTube, and half of what you see are strange, almost inhuman videos. You search for something on Google, and the top results seem like they weren’t even written by humans. You read a product review on Amazon, and something doesn’t feel right—it’s too polished, too perfect, as if a machine wrote it.

Well, not “as if.” It really was written by a machine.

This document demonstrates—with numbers, metrics, and research—just how real this phenomenon is, how profound its impact is, and where it will lead if we do nothing. It contains 23 empirical metrics, and behind each one lies a common thread that market researchers will immediately recognize: the validity of traditional measurement tools is fading before our eyes.

3–5% of consumer opinions are generated by machines, and this percentage is rising. Clickstream panels are no longer able to distinguish between human and bot behavior in the 51% that constitutes machine traffic. Focus group participants “research” the topic before the session—using automated content. Brand tracking survey systems fail to account for the fact that the respondent’s opinion was partly shaped by an AI summary, not the product experience.

For anyone involved in market research, there is now only one question that matters: if your measurement tools are based on assumptions from the old world, what are they actually measuring?

The zero-click rate, the spread of AI Overviews, review contamination, and the illusion of competence collectively create a methodological vacuum. There is no validated toolkit to distinguish genuine consumer intent from machine noise. Whoever builds this toolkit first—a reliable way to separate organic sentiment from machine contamination—is not just selling a service, but defining the next generation of market research.
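What might one building block of such a toolkit look like? A minimal sketch: a mixture correction that backs the genuine human sentiment out of an observed metric, given an estimated contamination rate. The 74% AI five-star rate comes from the Pangram Labs data cited later in this piece; the 20% contamination rate in the example is a hypothetical input, not a measurement.

```python
def human_five_star_share(observed: float, contamination: float,
                          ai_five_star: float = 0.74) -> float:
    """Back out the five-star share among genuine human reviews.

    observed      -- five-star share measured across all reviews
    contamination -- estimated fraction of reviews that are AI-generated
    ai_five_star  -- five-star rate within AI reviews (0.74 per Pangram Labs)

    Solves: observed = contamination * ai_five_star
                       + (1 - contamination) * human_share
    """
    return (observed - contamination * ai_five_star) / (1.0 - contamination)

# Toy example: a product shows 65% five-star reviews overall; if 20% of its
# reviews are machine-generated, the genuine human five-star share is lower.
print(round(human_five_star_share(0.65, 0.20), 3))  # -> 0.628
```

The hard part, of course, is the contamination estimate itself—which is exactly the measurement gap described above.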


I. How Much AI Content Is There on the Internet?

The Big Picture: There Are Already More Machines Than People

Imagine the internet as a huge marketplace where people chat, write articles, and share opinions. Now imagine that more and more bots are arriving at this marketplace, and they too are chatting, writing, and commenting—but not based on real experiences, rather on patterns they’ve learned from previous texts.

By 2025, the bots had become the majority. 51% of internet traffic is machine-generated—and this is not an exaggeration, not a prediction, but a measurement. If you open a website today, you’re more likely to read machine-generated content than human-generated content.

The proportion of AI-generated content on the internet has grown from ~5% (2020) to 48% (May 2025). That’s nearly a tenfold increase in five years.

New websites: seven out of ten are machine-generated

According to a survey conducted by Ahrefs in April 2025, the content of seven out of ten new websites was written by artificial intelligence. Not AI assisting a human author, but AI doing the writing itself: without people, without oversight, without real experience. The exact figure: 74.2%.

What does this mean in practice? When you search for a question tomorrow and the first three results are new websites, the statistical probability that all three are AI-generated exceeds 40%.
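The arithmetic behind that claim, assuming the three results are independent draws at the 74.2% base rate:

$$P(\text{all three AI-generated}) = 0.742^3 \approx 0.41$$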

Google Search: Less and Less Helpful

When you search for something on Google, nearly one in five of the top 20 results is written by AI (17.31% — peaking at 19.56%). But what’s happening behind the scenes is even more important.

The proportion of zero-click searches is growing year by year—69% of Google searches do not lead to any website. The user gets an answer without visiting any page. The classic measurement of search intent—on which the entire SEO industry is built—is flying blind.

When Google displays an AI Overview on the search results page, the situation is even more drastic. Five independent research firms measured the impact:

| Research Firm | Data Volume | Click-Through Rate Decline |
|---|---|---|
| Seer Interactive | 25.1 million impressions, 3,119 terms | -61% |
| Ahrefs | 300,000 keywords | -58% |
| GrowthRC | 200,000+ keywords | -32% |
| seoClarity | 12 million keywords | -24 percentage points |
| BrightEdge | Fortune 100 companies | -30% |

Google’s AI summary appears in 15–60% of searches (depending on the measurement methodology), but for informational queries—such as “how does it work?” or “what is…?”—this rate reaches 88% (seoClarity). In other words: for precisely those questions where people are seeking information, Google almost always responds with an AI summary, not with original sources.
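A back-of-envelope way to combine these two figures, taking mid-range illustrative values from the cited ranges (a 50% Overview appearance rate and a 45% click-through decline where it appears; both are assumptions, not measurements):

$$\mathbb{E}[\text{organic click loss}] \approx p_{\text{AIO}} \times \Delta_{\text{CTR}} = 0.50 \times 0.45 \approx 22\%$$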

Meanwhile, users are conducting 20% fewer searches per year on average (Datos, Q2 2025). Overall, websites—newspapers, blogs, online stores—are receiving dramatically fewer visitors, while AI takes over the role of providing answers.

Google’s market share also shows the largest decline of the decade: it fell from 91.47% to 89.57% (StatCounter, 2025). This may seem small—but in Google’s market, it represents a loss of billions in revenue.

Why does this matter from an AI perspective?

Google’s AI summaries themselves gather information from the internet. If internet content is increasingly AI-generated (74.2% of new websites), then Google’s AI responses will be based on AI-generated content—and people will accept these answers without checking the original source.

Unverified content thus passes through yet another filter and reaches the reader in an even more credible package. This isn’t simply “fake news”—it’s systemic contamination, where the filter itself is infected.

By platform: what’s the situation?

| Platform | What’s happening? | The number |
|---|---|---|
| YouTube | Every 3rd to 5th recommended video is AI-generated “brainrot” content | 21–33% |
| YouTube | 278 channels produce ONLY AI-generated content — 63 billion views | $117M/year in ad revenue |
| X (Twitter) | Up to two-thirds of all accounts may be bots; three-quarters of traffic is automated during peak hours | 64% bots |
| Instagram | Approximately 95 million fake accounts (one in ten). A quarter of major influencers’ followers are bots | 9.5% fake |
| Facebook | 5.4 billion fake accounts were deleted in a single year — twice the number of real users | 5.4 billion deleted |

What does “brainrot” mean? It refers to content that makes no sense, yet social media algorithms still recommend it because people click on it. Examples include strange AI-generated animations or Reddit stories read aloud by robotic voices over background music. The YouTube CEO named the fight against brainrot and slop as a priority for 2026—which in itself indicates the scale of the problem.


II. The Market for Reviews: How Is Trust Collapsing?

The Amazon Example: The Fake Five-Star

Imagine you want to buy a pair of headphones on Amazon. You look at the reviews: 4.7 stars, 2,000 ratings, mostly five-star. Convincing, right?

But now take a closer look at what the numbers show (Pangram Labs, July 2025):

  • 74% of AI-written reviews are five-star—among human reviews, this figure is only 59%
  • 93% of AI reviews carry the “Verified Purchase” label, which supposedly means the reviewer actually bought the product—but they didn’t, because the review was written by a machine
  • Among beauty products, one in twenty reviews is AI-generated

Why is this important to understand? A genuine review is backed by real experience: someone actually bought the headphones, wore them for a week, tested them while running, and wrote that they press against their ears. There is no experience behind an AI review—the machine generated the text based on patterns from previous reviews. It sounds like someone tried them, but no one actually did. This isn’t simply a “fake” review—it’s an unvalidated claim that appears real but has never been verified against reality.

And there’s an additional, insidious layer: research from Harvard HBS (Zhai and Ching, January 2026) showed that Amazon’s own AI summaries systematically overrepresent fake reviews — meaning Amazon’s own tool accelerates the Akerlof spiral. The platform that is supposed to guarantee trust is itself part of the contamination.

The “market for lemons” — Akerlof in action

There is a Nobel Prize-winning economic theory that precisely describes what happens in such cases. George Akerlof described the “market for lemons” model, and while he was thinking of the used car market, today it applies to the entire internet.

Here’s an analogy: imagine you want to buy a used car. There are good cars (we call these “peaches”) and bad cars (we call these “lemons”). The seller knows what the car is like, but the buyer doesn’t—this is information asymmetry. Since the buyer can’t tell the difference, they’re willing to pay for “average” quality on the market. But at that price, only sellers of bad cars are willing to sell—because the good car is worth more. Slowly, the good cars disappear from the market.

Now replace the cars with online content:

  • Good car = genuine human experience, authentic opinion, in-depth article
  • Bad car = AI-generated opinion, formulaic article, machine-generated content
  • The buyer = you, the reader

If you can’t tell the real from the fake, what Akerlof predicted will happen: good content gets squeezed out. If someone spends hours writing a thorough product review, but there are a thousand AI-generated five-star reviews right next to it, no one will notice theirs. They’ll give up. Only “lemons” remain on the market.

Researchers Easley and Kleinberg calculated the exact threshold:

If the proportion of good products in a market falls below two-thirds, there is only one equilibrium: one in which only bad products remain. Buyers know this in advance, and the market grinds to a halt.
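Where does two-thirds come from? A worked version under the parameterization used in Easley and Kleinberg’s textbook treatment (buyers value a good item at 12 and a lemon at 6; sellers value them at 10 and 4; these specific numbers are their illustrative assumptions). With a fraction $g$ of good items, a buyer who cannot tell them apart will pay at most the expected value:

$$12g + 6(1-g) = 6 + 6g$$

Sellers of good items participate only if the price covers their own valuation:

$$6 + 6g \ge 10 \quad\Longleftrightarrow\quad g \ge \tfrac{2}{3}$$

Below two-thirds, good items exit, $g$ falls further, and the only equilibrium left is the all-lemons market.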

Those who explicitly apply the Akerlof model to AI markets: Zhai and Yang (HBS, 2025) in their study “Platform Design with Lemons,” and Chhillar et al. (arXiv, January 2026) in their paper “When Life Gives You AI, Will You Turn It Into A Market for Lemons?”—the latter being the first experimental framework. Their main finding: users under-correct for lemons, and at high lemon densities, even disclosure mechanisms fail.

Science Is Also Affected

The world of science has been similarly shaken:

  • At ICLR, a major artificial intelligence conference, a share of the peer reviews (which decide which papers are accepted) turned out to be written by AI — reviews that do not test or question, but repeat patterns from previous reviews without understanding the content
  • The doubling time for scientific fraud is 18 months — meaning the number of fake scientific articles published doubles every 18 months
  • AI chatbots incorrectly flagged 18% of valid studies as retracted, while reporting retracted studies as valid — because they did not examine the truth of the content, but rather “guessed” based on text patterns
  • Wiley/Hindawi retracted over 8,000 articles in 2023 and shut down 19 journals (June 2024), resulting in a revenue loss of $35–40 million. Wiley’s own filter flags 10–13% of incoming manuscripts as paper mills (organizations that mass-produce fraudulent articles)
  • The number of retractions exceeded 10,000 in 2023—a new record. AI-related retractions: fewer than 20 in total before 2022, 663 in 2023 alone

III. Can you tell if a machine wrote it? (No.)

Humans: at the coin-flip level

A 2025 large-scale study (iProov) asked people to distinguish between real and machine-generated content. The result:

  • Of all participants, only 0.1%—or one in a thousand—were able to correctly identify all the real and fake content
  • People correctly identified the fake videos (deepfakes) in only a quarter of cases—which is worse than flipping a coin

“One of our main defenses against synthetic media used as a weapon remains the target’s ability to visually recognize machine-generated content. As the realism of synthetic content increases, this defense becomes increasingly ineffective.”

Think about what this means in practice: if a deepfake video is made of you, your friends, colleagues, and family have a 75% chance of believing that it’s really you. Visual recognition as a defense—which we intuitively rely on—practically doesn’t exist.

Machines Can’t Reliably Tell Either

You’d think that if humans can’t tell, machines will. But the situation is more complicated than that:

| Text Type | Detection Accuracy |
|---|---|
| Untouched, raw AI text | 86–98% |
| Rewritten, edited text | 60–70% |
| Simple trick (“write like a teenager”) | <10% |

Source: UCLA, Times Higher Education, multiple audits (2025–2026)

An accuracy of 86–98% sounds impressive—until you realize that this applies only to raw, unedited AI text. In the real world, no one publishes AI output without editing it (or if they do, that’s the least dangerous scenario). As soon as someone rewrites, edits, or mixes in human elements—detection rates drop to 60–70%. A simple instruction to the AI—“write as if you were a teenager”—pushes the detection rate below 10%. This is an arms race in which the attacker always has the upper hand.
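Why is rewriting so effective against detectors? Many detectors lean on shallow statistical features of raw LLM output. A caricature of one such feature, “burstiness” (variation in sentence length), shows how little it takes to move the signal; this is a toy illustration, not any real detector’s implementation:

```python
import statistics

def burstiness(text: str) -> float:
    """Variance of sentence lengths in words: a crude stand-in for the
    'burstiness' feature some detectors combine with perplexity."""
    lengths = [len(s.split()) for s in text.split(".") if s.strip()]
    return statistics.pvariance(lengths) if len(lengths) > 1 else 0.0

raw_ai = "The product is good. The sound is clear. The price is fair."
edited = "Honestly? Great buy. The sound is clear, and for this price I really cannot complain."
print(burstiness(raw_ai))  # 0.0 -- perfectly uniform sentence lengths
print(burstiness(edited))  # much higher -- one light edit shifts the feature
```

Any feature this shallow is trivially gamed, which is one reason the arms race favors the attacker.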

The Injustice: Non-Native Speakers Are Penalized by the System

There is a particularly unpleasant side effect. AI detection tools tend to flag human-written text as AI-generated—especially if the author is a non-native speaker:

  • GPTZero incorrectly flags text by non-native English speakers as machine-generated in one-quarter of cases
  • Turnitin is wrong in 10–15% of such cases, while ZeroGPT is wrong in 15–20%

This means that a Hungarian, Chinese, or Spanish student writing an essay in English is four times more likely to be unfairly labeled a “cheater” than a native-speaking peer. Detectors are therefore not simply inaccurate—they discriminate on a systemic level. The tool developed to protect academic integrity actually generates injustice itself.


IV. The Great Feedback Loop: When the System Devours Itself

This Is What You Really Need to Understand

This is the most important part of this document. This is not a simple problem, but a self-reinforcing spiral—a feedback loop that is very difficult to escape.

The essence is simple, but the consequences are dramatic:

  1. AI systems (such as ChatGPT, Claude, Gemini) are trained on human text. From books, articles, websites, Wikipedia—from what people have written over decades.

  2. These AI systems are now generating massive amounts of text, which is fed back into the internet.

  3. Next-generation AI systems are trained on this internet—which is now full of AI-generated text.

  4. So AI learns from its own output—which is like a photocopier copying its own copies over and over again.
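Step 4 can be reproduced in a few lines. A minimal sketch in the spirit of the Shumailov et al. setup discussed below (a Gaussian refit to its own samples rather than an LLM; the sample size and generation count are arbitrary choices made to keep the effect visible):

```python
import numpy as np

rng = np.random.default_rng(42)
SAMPLES = 20                               # small samples make the effect appear quickly
data = rng.normal(0.0, 1.0, SAMPLES)       # generation 0 trains on real data

for gen in range(1, 101):
    mu, sigma = data.mean(), data.std()    # "training": estimate the distribution
    data = rng.normal(mu, sigma, SAMPLES)  # next generation sees only synthetic output
    if gen % 20 == 0:
        print(f"generation {gen:3d}: std = {sigma:.4f}")

# Typical run: std decays from ~1.0 toward 0. The tails (the rare ideas) are
# undersampled at every step, each refit underestimates the spread, and
# diversity collapses: the photocopy-of-a-photocopy effect in step 4.
```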

Model collapse: when a copy copies a copy

In 2024, a startling study was published in Nature—the world’s most prestigious scientific journal. Shumailov et al. demonstrated what happens when an AI model trains itself using its own previous outputs:

1. Rare ideas disappear first. Imagine a classroom where everyone reads a book and then retells it from memory. The first person still remembers the unique details and the strange twists. By the tenth person, only the main character and the basic plot remain. By the twentieth person, all that’s left is “once upon a time, there was a man who did something.”

The same thing happens with AI: the unusual, original, minority perspectives—that is, precisely the ones that are most valuable—disappear first. What remains is the average, the cliché, the safe middle ground.

2. Every generation gets dumber—because it doesn’t learn from real data. Dohmatob et al. (ICLR, 2025) demonstrated that the performance of AI models deteriorates with every generation if they learn from their own previous texts. This isn’t a gradual decline—at a certain point, it suddenly collapses. This is called “strong model collapse.” And what’s particularly alarming: Dohmatob showed that even 0.1% synthetic data causes a collapse—meaning you don’t have to replace the entire dataset; it’s enough if just one-thousandth of it is machine-generated.

3. The human text needed for training is running out NOW. The Epoch AI research institute calculated in 2023 that if we continue training AI models at the current pace, by 2026 we will run out of publicly available, high-quality text written by humans. That deadline is no longer a forecast: today is March 7, 2026.

Model collapse is not a future event. The depletion of training data is happening right now—and this is not an abstract claim. This text you are reading right now will also become part of the internet’s content. It is likely to end up in the training data of a future AI model. If the model learns from this—from a text that was partly written by AI and partly edited by a human—then the next generation will no longer learn from original human thoughts, but from a human-machine hybrid that reflects on previous machine and human texts. The line between human and machine-generated text is already blurred. The feedback loop is therefore not a future possibility. We are already in it.

Model collapse data — the numbers are alarmingly low

Model collapse is not a theoretical possibility—it has been measured in the lab, and the generation counts are surprisingly small:

| Research | Model/domain | Complete collapse (synthetic data only) | Mixed with real data |
|---|---|---|---|
| Shumailov (2024, Nature) | Language model (OPT-125m) | 9 generations (+56% error) | 9 gen, 10% real data (+9% error) |
| Shumailov (2024) | Image recognition (VAE/MNIST) | 20 generations (digit identity lost) | — |
| Alemohammad (2024, ICLR) | Face generation (StyleGAN) | 3–5 generations (diversity collapses) | Fresh data: stable at every generation |
| Alemohammad (2024) | CIFAR-10 images | 5 generations (noticeable MADness) | Averted with 2,250 real samples |
| Briesch (2023) | LLM (trained from scratch) | 39 generations (zero diversity) | Expanding data: 22% loss at 50 generations |
| Dohmatob (2025, ICLR) | Linear regression | Even 0.1% synthetic data causes collapse | Scaling laws are distorted |
| Liu et al. (2026) | Clinical documentation | 2 generations (clinically unusable) | Can be mitigated with quality filtering |

The most alarming figure: in the medical domain, machine-generated content becomes clinically unusable after just 2 recursive generations (Liu et al., 2026, medRxiv, 800,000+ synthetic data points). The rate of false-positive diagnoses triples, jumping to 40%. Rare pathological findings (pneumothorax, pleural effusion) disappear from the synthetic output. The outputs skew toward a middle-aged male phenotype—meaning the system reproduces the most common pattern, and unusual cases become invisible.

How many “generations” have we actually gone through? The GPT family has so far gone through 5–7 major generations (from GPT-1 to GPT-5). But the real world is not a laboratory “complete replacement” scenario—it is gradual and cumulative contamination. According to an extrapolation of ecosystem-level similarity analysis (arXiv, 2025), output similarity between models will reach 90% saturation by ~2035—the point at which ecosystem-level collapse is expected.

Search Will Also Collapse

There is a recent study (Yu, Kim, and Kim, 2026, ACM Conference) that describes a new type of problem: Retrieval Collapse.

What is this? Google and other search engines work by collecting content from the internet and showing you the best results. But if internet content is increasingly AI-generated, then the search engine ranks AI content based on AI content. It’s like a judge reading only past wrongful verdicts and ruling based on them.

The fundamental problem here is the same: AI-generated web pages aren’t born from real experience, measurement, or expertise. A product description written by AI isn’t based on someone having tried the product. A travel guide written by AI is not based on someone having been there. Medical advice written by AI is not based on someone having examined the patient. Yet Google ranks this content—because it cannot distinguish text based on real-world experience from text that merely repeats statistically probable patterns.

In medicine, this is already life-threatening

He and colleagues (2026) demonstrated that AI is also beginning to fill medical documents with machine-generated content—and the problem here isn’t that the content is “synthetic,” but that it has never been validated against real patient data.

What does this mean in practice? Imagine it like this:

  1. A medical AI system generates text for a patient’s medical record—say, a diagnosis or a treatment plan
  2. This text is not derived from actual test results, but from patterns in previous texts
  3. Statistically, it “appears plausible”—meaning it doesn’t seem obviously false
  4. But its evidential value is low: in other words, its reliability hasn’t been verified
  5. The next AI system is trained on these documents—and it treats as “fact” what the previous system only “inferred”

The key difference: a human doctor writing a diagnosis relies on actual test results—blood tests, X-rays, physical examinations. An AI that generates text relies on previous texts. If these previous texts are also partly machine-generated and have not been validated, then the diagnosis is not based on the patient’s condition, but on the statistical average of text samples—which is an entirely different matter.

Model collapse is no longer an abstract IT problem here, but a concrete life-threatening risk: the doctor relies on the machine system, the machine system relies on previous machine-generated texts, and no one has checked whether there was any real patient data in the background at any point in the chain.


V. What economic impacts are already evident?

In the workplace: “workslop”—the machine-generated drivel that’s everywhere

In 2025, the research firm BetterUp surveyed how AI-generated content affects workplaces. The results are thought-provoking:

  • 40% of AI-generated content in the workplace is “workslop”— machine-generated text that the sender didn’t read, didn’t check, and forwarded to their colleagues as is. The problem is the same as everywhere else: the text isn’t based on the sender’s actual knowledge, experience, or thinking—but on a machine’s pattern recognition that no one has compared to the actual situation
  • This results in an annual productivity loss of $9.1 million per organization
  • When someone receives machine-generated drivel from a colleague, most feel annoyed (54%), frustrated (46%), or confused (38%)
  • Colleagues who send such content are viewed as less competent, creative, and reliable—so workslop damages professional reputations

This isn’t simply a productivity issue. Workslop undermines organizational trust. If you know your colleague didn’t read what they sent—because a machine wrote it and they just hit “send”—you’ll read their next message with suspicion. The loss of trust here isn’t an abstract market phenomenon: it happens within your own team.

The advertising market: money spent on AI-generated content is money down the drain

A crucial statistic for advertisers: ads appearing on high-quality websites lead to actual purchases at a 91% higher rate than those on pages filled with AI-generated content (IAS, 2025).

But due to changes in Google search, 37% fewer clicks are reaching legitimate websites. Some major publishers (such as MailOnline) have experienced a 90% drop in traffic for certain topics.

Meanwhile, more than a thousand news sites operate almost exclusively with bots (NewsGuard, 2025). These websites look like real newspapers, but no human editors or journalists work on them. Advertisers’ money—who cannot distinguish a quality site from a bot-run site—flows into this black hole.

Creative markets: the Akerlof spiral in action

In March 2026, the University of Florida published a study that describes exactly what is happening:

“Beginners are flooding creative markets with AI-generated content. This deters consumers and makes it harder for professional creators to stand out. The transitional era—the one we’re in now—is the destructive phase.”

The Akerlof model is playing out in real time: “lemons” (AI-generated content) are flooding the market, while the creators of “peaches” (high-quality human-generated content) are giving up because they can’t demonstrate that theirs is better.

Three markets that have already crossed the Akerlof threshold

Stock photo: Adobe Stock’s AI content grew from 2.5% (May 2023) to 47.85% (April 2025)—a 19-fold increase in two years. Over 300 million AI images uploaded. Getty Images’ revenue down 4.5% (2024). Shutterstock’s subscribers have dropped by 22% compared to the 2022 peak. The Getty-Shutterstock forced merger ($3.7 billion, January 2025) is a textbook Akerlof response: the quality market defends itself against collapse. Shutterstock now earns $104 million by selling data to AI companies—meaning it has abandoned its own business model. When a photo platform earns more from its data than from its photos, that is proof of Akerlof’s theory.

Academic publishing: Wiley/Hindawi retracted over 8,000 articles in 2023 and shut down 19 journals (June 2024), resulting in a revenue loss of $35–40 million. The number of retractions exceeded 10,000 in 2023 (a new record). AI-related retractions: <20 in total before 2022, 663 in 2023 alone.

Freelance writing/translation: Writing job postings have decreased by 33% since the launch of ChatGPT (Bloomberry, 2023). Demand for replaceable skills has fallen by 20–50% (Demirci et al., JEBO, July 2025). Critical Akerlof signal: it affected the highest-quality, most expensive freelancers the most—exactly the dynamic Akerlof predicted. Fiverr’s active buyers decreased by an additional 12,694. Those performing the most valuable work are the first to be squeezed out, because their work is the most expensive, and the client cannot distinguish AI-generated text from the work of a professional copywriter.


VI. The Great Feedback Spiral — The Entire System Is Interconnected

Now let’s put the whole picture together. These aren’t five separate problems—it’s a single self-reinforcing spiral where every element reinforces the others.

The spiral has two feedback points:

1. Content feedback: AI-generated content floods the internet → people don’t trust the content → less quality content is produced → even more AI-generated content is needed → it floods the internet even more

2. Learning feedback: AI learns from its own output → each generation gets worse → worse content ends up on the internet → the next generation learns from this → it gets even worse

Both loops share the same underlying problem: the lack of validation. Original human content was backed by real experience—someone actually tried a product, actually examined a patient, actually conducted the experiment. AI-generated content lacks this foundation. It is statistically “probable”—but it has never been measured against reality. And as this unvalidated content feeds back into the system, each subsequent layer moves further and further away from what we might call “true.”


VII. What’s Next? — Six Possible Scenarios

Scenario 1: Marketing Collapses (It’s Already Begun)

What’s Happening: Online advertising is reaching fewer and fewer real people.

  • 51% of internet traffic is bot traffic — half of all ads are not “seen” by humans
  • 69% of Google searches result in no clicks
  • Where AI Overview appears, clicks drop by 30–61%
  • Google’s market share shows the largest decline of the decade: from 91.47% to 89.57% (StatCounter, 2025)
  • People are conducting 20% fewer searches per person per year

The chain reaction that can already be measured: Clicks are declining → companies are spending less on advertising → content creators (journalists, bloggers) receive less advertising revenue → less quality content is produced → even more AI-generated content fills the void.

Specific metrics from the cascade:

  • Google’s HCU (Helpful Content Update) wiped out 90%+ of organic traffic for 32% of 671 travel publishers — with a single stroke
  • LinkedIn’s B2B non-brand organic traffic dropped by 60% due to AI search engines
  • Instagram’s organic reach is now just 4% (down 18% year-over-year)

Who does this affect? Marketers, advertising professionals (5–10 million jobs worldwide), online content creators, advertising agencies, and small and medium-sized businesses that depend on online marketing — and SMEs have no Plan B: only 35% of them even have a Google Business Profile (BrightLocal).

Scenario 2: The Trust Crisis in Sales

What’s happening: If shoppers don’t trust product reviews, it will become increasingly difficult to sell anything online. Would you buy a $200 phone if you knew that a quarter of the reviews were written by bots, and you couldn’t tell which ones? Most people wouldn’t buy anything in this situation—or they’d go back to a physical store where they can touch the product.

Chain reaction: Declining online shopping → e-commerce sales drop → delivery companies get less work → warehouses downsize → the local economy suffers.

Real sign: On Amazon, 93% of AI-written reviews are “Verified Purchases”—meaning the strongest sign of trust is already being faked. When the trust signal becomes worthless, the market grinds to a halt. Akerlof describes this with the concept of “adverse selection”: the market selects itself out of existence.

Scenario 3: The crisis of knowledge — you don’t know what to trust

What happens: If some scientific articles are written by AI, one-fifth of peer reviews are conducted by AI, and unverified machine-generated content leaks into medical databases, then the credibility of expert knowledge is called into question.

The power of science has always rested on the fact that someone actually measured, observed, and experimentally verified what they claim. If content enters the system that looks like a scientific result but lacks actual measurement—being merely a statistical pattern of previous texts—then the foundation of science, repeatability and verifiability, ceases to exist.

Chain of consequences: You cannot trust scientific findings → you cannot trust medical advice → you cannot trust the data behind political decisions → distrust of institutions → social disintegration.

We’re already seeing the signs: Obschonka and Levesque (2026) showed that scientific research using AI methodology is considered significantly less credible than traditional research—even if the research itself is sound. In other words, the mere suspicion of AI is enough to undermine credibility.

Scenario 4: The economic cascade — when the ecosystem collapses

Specific example: If a medium-sized online store loses 30% of its traffic (because customers don’t trust the reviews and Google is sending fewer visitors), then:

  • Fewer packages are shipped → the courier service employs fewer drivers
  • It orders fewer products → the supplier hires fewer workers
  • It needs fewer office workers → business at the cafeterias around the office building declines
  • If it closes → the building owner loses revenue → the property management company also loses out

We observed exactly this chain of causality during the 2008 financial crisis, the collapse of the print media, and now the erosion of the online trust system. The mechanism is the same: the stability of a system does not depend on its strongest element, but on its weakest. And the weakest element right now is trust.

Scenario 5: The Collapse of the Competence Market — When Everyone Appears to Be an Expert

What’s Happening: AI tools (GitHub Copilot, ChatGPT, Claude) make it possible for someone with minimal actual knowledge to appear convincingly competent. They write code, design systems, build presentations—and the output looks professional on the surface. It’s the same information asymmetry as with Amazon reviews—only now it’s not about products, but about people’s competence.

The Dunning-Kruger effect on steroids. The Dunning-Kruger effect means that those who know little about a field tend to overestimate their own knowledge—because they lack the knowledge to recognize what they don’t know. AI tools dramatically amplify this effect:

  • The user writes a prompt → the AI provides convincing output → the user says, “I did it.”
  • The output actually works—up to a certain point. A simple website goes live. A piece of code runs. A presentation looks great
  • The user’s sense of “I’m a developer,” “I’m an architect,” “I can handle this” grows stronger
  • AI creates the illusion of omnipotence—and this narcissistic feedback loop grows stronger as the tools get better

The problem begins when the task exceeds the AI’s context window (the amount of text it can process at once), or when the system has to operate under load, in production, or in edge cases. That’s when it turns out that the “expert” doesn’t understand what they built—because they didn’t build it, they just gave instructions.

The delayed detonation. Competence fraud doesn’t explode immediately. Its defining feature: the damage manifests months or years later.

  1. Someone builds a system using AI. It works, the client is satisfied, the project is closed
  2. Six months later, the system needs to scale—but the architecture wasn’t designed for that, because the “developer” didn’t understand load balancing. They just issued the prompt, the AI gave something, and it seemed fine
  3. A year later, a security vulnerability emerges because there was a pattern in the AI-generated code that the developer didn’t recognize as a vulnerability—because they lack the depth of knowledge required for that
  4. By then, the “expert” has long since moved on to another project. Someone else discovers the flaw—if it is discovered at all

It’s the same with the medical analogy: the AI diagnosis seems good today, but no one has verified it using real patient data. The error only becomes apparent when the patient’s condition worsens.

The Collapse of the Knowledge Pyramid. Every profession has a learning pyramid. A senior developer can debug a system because, as a junior, they wrote faulty code themselves and learned why it was faulty. An architect can see three years into the future because, as a senior, they lived through three system crashes.

AI tools now make it possible for someone to skip the junior and, to some extent, the senior levels:

  • Architect level: becoming increasingly scarce, with no new talent coming through
  • Senior level: thinning out, because juniors haven’t gone through the process
  • Junior level: largely empty (“Why should I learn if AI can do it?”)
  • AI user: the bottom of the pyramid — prompt-based “competence”

The consequences in three phases:

  1. Today: Existing senior and architect professionals still compensate for the shortage—they know when AI is lying because they have a frame of reference
  2. Within 5 years: These professionals will retire or burn out, and there will be no one to take their place—because today’s juniors haven’t walked the path
  3. Within 10 years: No one will understand complex systems: neither AI (because its context window is finite) nor humans (because they haven’t learned it). The systems will work—as long as they work. When they break, no one will be able to fix them

The Akerlof model in the labor market: The same mechanism as in the product market. The client cannot distinguish an AI-competent “expert” from a real expert. The real expert is more expensive (because they have 15 years of experience)—but the output looks the same on the surface. The client chooses the cheapest option. The real expert is pushed out. Those who remain are all AI-competent. When the system crashes, there’s no one to call—because the real experts have long since moved on to other fields.

An AI-competent developer builds a system in 2 months for 3 million forints. A real architect would build it in 4 months for 8 million forints—but theirs would run stably for 5 years. The difference only becomes apparent when it breaks—and by then, there might be no one left who can fix it.

The full causal chain: AI tools → illusion of competence → narcissistic validation → “Why should I pay a senior?” → seniors are pushed out → junior pipeline breaks → 5–10 years later: knowledge vacuum. Systems that work, but which no one understands, and which no one can fix when they break.

Scenario 6: A Return to the Past — What Worked for 200,000 Years

The previous five scenarios may seem frightening. But there is a perspective from which what is happening now is not a collapse—but a return to the past, to what is natural for humans.

200,000 years vs. 5,000 years. Humans—Homo sapiens, our species—have lived on Earth for about 200,000 years. Of that time, we spent roughly 195,000 years in small, personally known groups. In communities where everyone knew everyone else. Where if someone lied, the others knew. Where reputation (trustworthiness) could be verified firsthand.

“Large-scale society”—with institutions, abstract reputation systems (brands, diplomas, star ratings), and cooperation with strangers—accounts for barely 2–3% of human history. On an evolutionary scale, this is a very brief experiment.

The Dunbar number: a biological limit, not a cultural choice. Research by Robin Dunbar, an anthropologist at Oxford, has shown that the human brain is capable of maintaining approximately 150 genuine social relationships—relationships in which you know the other person’s history, remember your past interactions, and can judge how much you can trust them. The Dunbar number is not a convention we chose. It follows from the size of our brains—specifically, the capacity of the neocortex, which in primates correlates closely with group size.

Above the 150-person limit, humans are forced to rely on abstract signals: brands, diplomas, star ratings, institutional guarantees. Precisely the signals that AI has now made susceptible to forgery.

Historical wisdom: those who understood this built within small communities. Humanity’s great spiritual traditions all recognized this limit—and it is no coincidence that they were built on small, personal units:

  • The Apostle Paul’s letters are addressed to specific people who know one another. The early Christian ecclesias were home communities of 10–30 people—not institutions, but networks of personal relationships
  • The Buddha made the sangha—the personal community—one of the Three Jewels, alongside the Dharma and the Buddha. The sangha is not an organization. It is a personal presence where everyone knows one another
  • Stoic circles of friends—Seneca’s letters, Marcus Aurelius’s reflections—all regarded direct, personal connection as the arena for the practice of wisdom
  • The Jewish minyan: ten people who are physically present. That is all that is needed for prayer. Not a thousand, not a hundred—ten people you know

Each tradition contains the same insight: human trust is rooted in personal presence, and does not scale indefinitely.

What does this mean in the context of AI? If the reliability of large systems—the internet, search engines, online stores, academic publishers—approaches zero, people retreat to places where trust is naturally high: small communities where they know each other personally.

  • You ask your friend which headphones to buy—not Amazon reviews
  • You shop at the local store where you know the salesperson—not from an unknown online store
  • You go to your doctor, whom you know personally—not to a chatbot for a diagnosis
  • You read a book by an author you know—not an AI-generated summary

From an evolutionary perspective, this is not a collapse. A return to what worked for 200,000 years. The experiment—that strangers can cooperate based on abstract signals—was very short-lived, and it is now becoming clear that the signaling system can be faked.

The Zen paradox. There is a perspective from which all this is not a tragedy. The civilization-scale assumption—that billions can cooperate based on abstract signals without knowing one another—could never be sustained. Only postponed. Institutions, brands, diplomas, and star ratings were all tools of postponement: temporary solutions to the problem that the human brain’s capacity does not allow for genuine trust beyond 150 people.

AI didn’t destroy something that was stable. It accelerated the emergence of what had always been fragile.

Small communities are not a last resort. They are not a second-best solution we are forced to settle for because the internet has broken down. The small community is the original form—the one our brains are wired for, the one we’ve lived in for 200,000 years, and the one where trust doesn’t depend on abstract signals, but on knowing the other person’s face, their story, their flaws.

The question that remains: can the global economy and knowledge production, as we know them today, function in this configuration? The institutional framework built over the past 5,000 years was based precisely on the assumption that abstract reputation signals are reliable. AI has not destroyed this assumption—it has shown that it was always fragile.


VIII. The Speed of Trust Loss—Faster Than You Think

Online trust is not declining gradually. It is in free fall, and the speed is accelerating.

| What was measured? | Period | Change |
|---|---|---|
| Gallup: Trust in U.S. media | 2020–2025 | 40% → 28% (-12 points in 5 years) |
| Pew: Trust in national news organizations | 2016–2025 | 76% → 56% (-20 points) |
| Pew: Trust in national news organizations | Mar.–Sept. 2025 | 67% → 56% (-11 points in half a year) |
| Edelman: Trust in search engines | 2024–2025 | 68% → 63% (-5 points) |
| BrightLocal: Google as a review platform | 2023–2026 | 87% → 71% (-16 points) |
| Deloitte: “the benefits of online are worth the risk” | 2024–2025 | 58% → 48% (all-time low) |
| X/Twitter: advertiser trust | 2022–2024 | 22% → 12% (last place for three years) |
| Reuters Institute: Facebook news traffic | 2023–2025 | -67% over two years |

What wasn’t obvious until now: According to Pew Research data from September 2025, the 18–29 age group trusts social media (50%) almost as much as national news organizations (51%). This is a convergence without historical precedent—for young people, TikTok and NBC News carry equivalent credibility. This is not a rise in the value of social media—it is a freefall in the credibility of traditional media.

The Deloitte Connected Consumer Survey (2025, 3,524 U.S. respondents) most clearly illustrates the AI-specific erosion of trust: the proportion of those who believe generative AI makes it harder to assess the reliability of online content rose from 70% to 74% in one year. Meanwhile, the volume of fake reviews increased by 758% between 2020 and 2024 (FTC), and AI-generated fake reviews are showing an 80% monthly increase (The Transparency Company, December 2024).

What is still missing: there is no longitudinal measurement that tracks the rate of trust loss by platform, broken down by quarter. The existing data are annual snapshots—the actual shape of the decline curve is unknown. Is it linear? Exponential? Is there a point where it suddenly collapses?


IX. What Can Be Done? — Countermeasures and Their Limitations

There is a seemingly inescapable trap: every solution is itself part of the problem. Every new filter, detector, or flagging mechanism increases the system’s complexity—and increasing complexity means increasing the cognitive load. If you had to question the credibility of the source, verify the origin of the content, and evaluate the reliability of the alerts for every online interaction, it would require an unbearable amount of energy. It’s like when a ship is sinking, and you put another heavy pump on it to pump out the water. The pump helps fight the water—but its weight causes the ship to sink deeper.

| Possible Solution | How Does It Work? | How Effective Is It? | What Are Its Weaknesses? |
|---|---|---|---|
| Machine detection of AI-generated content | Algorithms look for signs of machine-generated text | Good on raw text (86–98%), but weak in real-world scenarios (60–70%) | Arms race: every detector can be circumvented. Unfair to non-native authors |
| Digital authentication (C2PA) | Like a digital seal: verifies that content was created by a human | Technologically viable, supported by major companies | Voluntary: the absence of a seal does not imply the content is fake |
| Google quality filtering | Google reduces the visibility of low-quality pages | Reduced traffic to some pages by 45–80% | Good content can also be harmed. Reactive: always one step behind |
| Return to personal networks | Personal recommendations, local communities, trusted sources | The only thing that has always worked in human history | Cannot scale to a billion users — which is precisely the point |
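To make the “digital seal” row concrete: the core of the C2PA idea is a signed claim bound to the content bytes, so any alteration breaks the seal. A minimal sketch using a generic Ed25519 signature (this illustrates the principle only; the real C2PA standard defines a structured manifest format, not this ad-hoc scheme):

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Creator side: sign the hash of the content; the "seal" travels as metadata.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"An article actually written by a human."
seal = private_key.sign(hashlib.sha256(content).digest())

# Verifier side: recompute the hash and check the signature.
def is_authentic(content: bytes, seal: bytes) -> bool:
    try:
        public_key.verify(seal, hashlib.sha256(content).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(content, seal))               # True
print(is_authentic(content + b" edited", seal))  # False: any change breaks the seal
```

Note what the scheme cannot do, which is exactly the weakness in the table: it proves that a signed item is intact and attributable, but the absence of a seal proves nothing.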

Key Takeaways

  1. Most internet traffic is no longer human. The 51% bot rate, the 74.2% AI content on new websites, and the 69% zero-click rate collectively mean that previous measurement methodologies—clickstream, search intent, brand tracking—are losing their validity.

  2. Trust signals have become falsifiable. From Amazon’s “Verified Purchase” label to scientific peer review to Google star ratings—the abstract signals on which modern society relies are manipulable. The Akerlof model predicts: if the proportion of “lemons” exceeds two-thirds, the market will collapse.

  3. Model collapse is not a future event. Collapse values measured in the lab range from 2 to 39 generations, and in the medical domain, 2 generations are enough to produce clinically unusable results. The publicly available human text corpus for AI training will run out by 2026.

  4. The illusion of competence is the most dangerous time bomb. AI tools allow someone to appear competent without actual knowledge—but the error only becomes apparent months or years later, when the system is supposed to be functioning. The knowledge pyramid is crumbling from the bottom up.

  5. Reorganization is not collapse. The Dunbar number (a group of 150 people) is a biological limit, not a cultural choice. Small communities are not a second-best solution—they are the original form for which our brains are designed. AI did not destroy something that was stable—it accelerated the emergence of what was always fragile.


FAQ

What is AI slop?

The word “slop” originally means swill, pig feed—it was named Merriam-Webster’s Word of the Year in 2025. AI slop refers to the massive amounts of low-quality content generated by artificial intelligence that is flooding the internet: automated blog posts, machine-generated reviews, meaningless videos, and articles full of fabricated references. The problem isn’t that it was created by AI—it’s that there’s no real experience, measurement, or verification behind it, yet it looks as if there is.

What is model collapse?

When an AI model is trained on its own previous outputs—rather than on text written by humans—the quality deteriorates with every “generation.” Rare, original ideas disappear, the text increasingly repeats the average and clichés, and at a certain point, it suddenly collapses. Research published in Nature (Shumailov, 2024) measured a 56% increase in errors after 9 generations. In the medical domain, 2 generations are enough to produce clinically dangerous results.

Who is George Akerlof, and what is the “market for lemons”?

George Akerlof is a Nobel Prize-winning economist who described the “market for lemons” model in 1970. The gist of it is this: if buyers in a market cannot distinguish good products from bad ones (information asymmetry), then good products are squeezed out because sellers cannot prove their quality. The bad products—the “lemons”—remain. This applies today to online content, the labor market, and reviews as well.

Will we really run out of human-written text for training AI?

Yes. According to a 2023 estimate by the Epoch AI research institute, by 2026 we will run out of publicly available, human-written quality text that can be used to train AI models. This doesn’t mean there’s “no more text”—it means that an increasing proportion of the remaining text is AI-generated, which accelerates model collapse.

What is the Dunbar number?

A concept derived from the research of Oxford anthropologist Robin Dunbar: the human brain is capable of maintaining approximately 150 genuine social relationships. This is a biological limit, not a cultural choice—it stems from the size of the neocortex. Above the 150-person limit, we rely on abstract indicators (brands, certificates, star ratings). AI has now made these indicators susceptible to forgery—and this is where the Dunbar number intersects with the AI slop problem.

What can I personally do?

Three things: (1) Question the source—who wrote it, and why? Is there real-world experience behind it? (2) Build personal networks—a friend’s recommendation, a known expert, or a local community is more reliable than any algorithmic ranking. (3) Look for primary sources — books, research articles, actual measurements, not AI summaries. Trust doesn’t scale — but real knowledge does, if there’s someone to authenticate it.


Key Takeaways

  • Half of all internet traffic is now generated by bots, and the majority of new websites are AI-generated, which fundamentally changes the kind of content we encounter. As Harari points out in Nexus, it is becoming increasingly difficult to trust that there is genuine human experience behind these texts.
  • Google’s AI summaries reduce clicks by 30–61%, and nearly 70% of searches result in no clicks, calling into question the fundamentals of traditional search intent analysis and web traffic measurement (clickstream).
  • AI reviews (e.g., on Amazon) distort market reality: 93% of AI-generated reviews carry the “Verified Purchase” label without anyone having actually bought the product, undermining consumer decision-making based on reviews.
  • Model collapse poses a threat: in medical fields, clinically unusable results emerge after just two generations of retraining, highlighting the sustainability issue of AI learning.
  • The measurement crisis is deepening: bot traffic, contaminated opinions, and zero-click searches collectively create a methodological vacuum where traditional market research tools (e.g., brand tracking, focus groups) are unable to distinguish genuine human intent from machine noise.

Zoltán Varga - LinkedIn
Neural • Knowledge Systems Architect | Enterprise RAG architect
PKM • AI Ecosystems | Neural Awareness • Consciousness & Leadership
51% of traffic is not human. The lemons have taken over the market. Now what.

Strategic Synthesis

  • Define one owner and one decision checkpoint for the next iteration.
  • Track trust and quality signals weekly to validate whether the change is working.
  • Run a short feedback cycle: measure, refine, and re-prioritize based on evidence.

Next step

If you want your brand to be represented with context quality and citation strength in AI systems, start with a practical baseline and a priority sequence.