Google’s AI Overviews: Just 1% of Searches Lead to a Click on the Original Source, Threatening the Digital Economy

Google is transforming the internet into a storefront of AI-generated summaries, and this shift could spell disaster for the entire digital economy. According to a new study by the Pew Research Center, a mere 1% of search queries that display an AI Overview result in a click on the original source. In other words, the overwhelming majority of users never visit the websites from which the information is derived.

The AI Overviews feature, piloted in 2023 and rolled out broadly in 2024, swiftly rose to dominance in search results, displacing the traditional “10 blue links” model. Instead of engaging with human-crafted journalism or blog content, users are presented with algorithmically generated digests. The problem is not only that these summaries siphon traffic away from content creators, but also that they frequently point users to less reliable sources.

Such was the case with 404 Media’s exposé on AI-generated music tracks falsely attributed to deceased artists. Despite the article’s widespread impact and Spotify’s subsequent intervention, Google’s search results prioritized an AI-generated overview sourced from a secondary blog, dig.watch, over the original piece. In the AI Overview panel, 404 Media’s report was conspicuously absent, replaced by aggregators such as TechRadar, Mixmag, and RouteNote.

Original content creators are losing readers, revenue, and the ability to sustain their work. Even high-quality journalism is being buried beneath repackaged information produced without human insight or effort. Fake AI-driven aggregators have become endemic, drawing traffic while contributing nothing to journalism.

The situation is further exacerbated by how easily AI Overviews can be manipulated. Artist Eduardo Valdes-Hevia ran an experiment, publishing a fictitious theory of parasitic encephalization. Within hours, Google began presenting it as scientific fact. He then coined the term “AI Engorgement,” and once again it was treated as legitimate. Finally, he blended real diseases with invented ones, such as Dracunculus graviditatis, and the AI failed to discern fact from fiction.

Other examples abound: Google once advised adding glue to pizza, mistaking a Reddit joke for genuine culinary advice, and falsely reported that the very-much-alive journalist Dave Barry had died. The algorithm fails to recognize humor, satire, or falsehood, yet delivers its outputs with unwavering confidence.

The true danger lies not only in such errors, but in their scalability. As Valdes-Hevia notes, just a handful of forum posts dressed in “scientific” language can be enough for misinformation to pass as truth. In this way, Google has inadvertently become a vector for the spread of disinformation.

This is a systemic issue. Search traffic, long the lifeblood of media outlets and independent creators, is vanishing. SEO no longer guarantees visibility, and both small businesses and major newsrooms are losing revenue. Rather than fostering competition, we are witnessing a centralization of error and falsehood, legitimized by the brand authority of Google.

Some companies have begun offering alternatives, from ad-free search engines to AI-content filters, but as long as Google remains the default gateway to the web, users are served not what they seek, but what the algorithm deems fit to display.

In an official statement, Google dismissed Pew’s methodology as “unrepresentative” and asserted that it “redirects billions of clicks daily.” Yet the data tells a different story: with AI Overviews, users are increasingly bypassing original sources. The consequence is the slow but steady erosion of the internet’s human knowledge ecosystem.