https://finance.yahoo.com/news/ais-mad-cow-disease-problem-tramples-into-earnings-season-100005953.html

TL;DR: As more and more internet content is produced by AIs, the datasets AIs train on become more and more AI-generated. You end up with a situation where errors (or virus-like replicators) get perpetuated and become hard to eradicate.
One metaphor is how feeding cows the remains of other cows helped mad cow disease spread. Another is how inbreeding multiplies genetic errors.
In practice, this scenario might take years to unfold. The steps would be:
- AI tools capable of mass producing viral social media content become widely available and usable. The AI content is hard to distinguish from real content.
- Would-be influencers use the tech to mass-produce content and clickbait. Creating internet content with AI becomes analogous to Bitcoin mining, except the payoff is changing the world's understanding of reality. AI swarms for hire start churning out massive amounts of content about how sponsored brands are the best, or targeting elections.
- Social media companies and startups begin mass-producing content of their own, using viral content produced by earlier AIs as training material.
- Efforts to identify and exclude AI content generally fail, as AI is uniquely adaptive to defeat such efforts. Meanwhile, AI content quality improves to the point video and audio content become indistinguishable from human-produced content. Neither experts nor other AIs can tell the difference.
- Yottabytes of spammy, clickbaity content are created, overwhelming the volume of all human-produced content and straining the power grid. Search tools cannot identify AI content, so human-produced content becomes less and less of what we consume.
- Eventually, humans stop contributing content to the internet because it is relatively inefficient and such content becomes a needle in an exponentially growing haystack. Earning money from human-created content becomes impossible - like hand-making things that factories can mass-produce.
- As thousands of successive generations of AI content use previous AI content as training data, the internet becomes detached from reality and even more untrustworthy than it currently is as an information source. Sites like Wikipedia are utterly vandalized by swarms of AIs and become useless. Whatever becomes viral (most common) will become truth to the AIs. This may not look like the internet claiming humans have six fingers per hand, but something like it will play out across tens of thousands of domains where viral misinformation gets magnified many-fold.
- A generation of people will be gullible and believe any AI content they consume, just as the first generation to experience any new technology is gullible. It is unclear whether democracy will survive said generation, or whether the tech oligarchs will use their power for self-promotion the way Vladimir Putin used a takeover of all media to establish his dictatorship.
- The backlash starts either when (1) online content becomes laughably false, or (2) people start paying to subscribe to AI "friends" who are convincingly empathetic but, of course, eventually steer them toward products.
This would be a process occurring over several years, maybe a decade, but the endpoint would look like bad data everywhere (Wikipedia, maps, news, reviews, fake photos, fake videos, fake music, etc.) to the point that these services would be unusable for anything other than entertainment. At this point the internet would be bifurcated into an entertainment function and a business function. The internet's usefulness as a source for usable information would be... questionable.
So in this scenario we'd have no choice but to find other ways to obtain useful information. E.g. if you looked on YouTube for a video of how to fix your car, all the results would be AI-generated, showing the AI's idea of what your car looks like under the hood - and you'd find a completely different reality when you actually started working. Your first 10,000 results would be like this. For everything.
To be clear, TikTok-like entertainment would still be around, and businesses would still sell stuff and work online, and there would still be efforts made to identify and stamp out fraud. It's just that the internet would become useless for learning or looking up information because there would be so much self-referential bad info. Stated another way: as human-generated content tied at least slightly to reality becomes relatively rarer on the internet, machine-generated content tied to nothing but itself and its own hallucinations becomes more common.
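The self-referential loop described above can be illustrated with a toy simulation (everything here is a hypothetical stand-in, not a model of any real system): let a Gaussian represent the diversity of "real" content, and let each generation fit a new Gaussian to a small sample drawn from the previous generation's fit. Estimation error compounds, and the spread of the distribution tends to collapse - rare "tail" content disappears first.

```python
import random
import statistics

def collapse_demo(generations=300, sample_size=20, seed=42):
    """Toy stand-in for recursive AI training: generation 0 is 'human'
    data ~ N(0, 1); each later generation fits a Gaussian to a small
    sample drawn from the previous generation's fitted Gaussian."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0               # generation 0: the real-world distribution
    sigmas = [sigma]                   # track spread (diversity) per generation
    for _ in range(generations):
        sample = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(sample)  # the new model only knows the sample,
        sigma = statistics.stdev(sample)  # not the true distribution
        sigmas.append(sigma)
    return sigmas

sigmas = collapse_demo()
print(f"spread: gen 0 = {sigmas[0]:.3f}, gen 300 = {sigmas[-1]:.3g}")
```

With a small sample size per generation, the spread typically shrinks dramatically within a few hundred generations - the toy analogue of an internet that forgets everything that isn't already the most common take.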
Contrary arguments:
1) Just like computer viruses and spam, this challenge will be dealt with via thousands of little tweaks to our methods and software. If the internet didn't collapse for these reasons, it won't collapse due to AI.
2) People will accept a certain level of incorrectness in their information if it is cheap and easy to obtain. We already see this phenomenon today.
3) The expansion of AI-generated content will mostly affect large social media platforms and search engines. It'll have less of an effect on blogs, company websites, and anything else where a human holds the keys and can cultivate a place dedicated to information quality. Such brands will become more important than sheer volume of data. Search engines may go back to the old Yahoo model of manually curated links.
Your thoughts? Will the internet become a (worse) trash pile of bad information?