“Experts estimate that as much as 90 percent of online content may be synthetically generated by 2026,” the report warned, adding that synthetic media “refers to media generated or manipulated using artificial intelligence.”
3 consequences that we can already (fore)see:
1) When AI starts learning from AI
The data used to train large language models comes from human-produced sources – books, articles, photos, and so on. But as more and more people use AI to create content, a question arises: what happens when AI models start training on content that was itself generated by AI?
A group of researchers from the UK and Canada believes the effects are decidedly negative, leading to a feedback loop and what they call model collapse. “Over time, errors in the generated data compound and ultimately force models that learn from this data to perceive reality even more erroneously,” the scientists write. Copies of AI copies would thus degrade the quality of generated responses. Ilia Shumailov, one of the researchers, adds: “We were surprised to see how quickly models collapse: they can quickly forget most of the original data they initially learned from.”
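A toy simulation makes the intuition concrete. This is not the researchers' actual experiment, just an illustrative sketch under a simplifying assumption: each "generation" of a model is a Gaussian fitted to the previous generation's outputs, and the generator prunes rare, low-probability outputs (the way sampling with low temperature or top-k does). The tails of the distribution vanish first, and diversity collapses:

```python
import random
import statistics

# Toy model-collapse sketch (illustrative assumption, not the paper's setup):
# generation N+1 is trained only on samples produced by generation N, and
# each generator discards low-probability "outputs" beyond 2 standard
# deviations, mimicking the tail-pruning of real sampling strategies.
random.seed(42)

def next_generation(data, n=2000, cutoff=2.0):
    """Fit a Gaussian to `data`, then draw n samples from the fit,
    rejecting rare tail values beyond `cutoff` standard deviations."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    samples = []
    while len(samples) < n:
        x = random.gauss(mu, sigma)
        if abs(x - mu) <= cutoff * sigma:  # tail pruning
            samples.append(x)
    return samples

# Generation 0: "human" data from a standard normal distribution.
data = [random.gauss(0.0, 1.0) for _ in range(2000)]
print(f"gen  0: std = {statistics.stdev(data):.3f}")

for gen in range(1, 11):
    data = next_generation(data)
print(f"gen 10: std = {statistics.stdev(data):.3f}")
```

Each round of tail pruning shrinks the standard deviation by a constant factor, so after ten generations the "model" has forgotten most of the spread of the original data – exactly the kind of compounding error the researchers describe.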
2) Advertisements vs content farms
AI chatbots are used to generate content that fills low-quality websites. More than 140 major brands (many of them on the Fortune 500 list) pay for advertisements on such sites, according to a NewsGuard report. Through automated campaigns, hundreds of ads end up on sites stuffed with AI-generated content. “Programmatic is the main source of revenue for these types of websites,” says Lorenzo Arvanitis, an analyst with NewsGuard.
Such content farms have gained momentum with the development of AI tools.
One site flagged by NewsGuard generated over 1,200 articles per day!
Most of the ads that end up on such spammy sites are served by Google. The tech giant, whose ad revenue was $168 billion in 2022, responds that it enforces strict policies and that its ads do not appear on sites with spammy or duplicated content. For now, however, the content farms appear to be winning against those restrictions.
3) Fake users = fake valuation
IRL is an app meant to help people discover local events and make connections. According to its management it had 20 million users, and it raised a considerable $200 million from investors. The problem: it turned out that as many as 95% of those users were fake accounts or bots. After the facts came to light, the company announced it would shut down and return the remaining capital to shareholders. Doubts about the real number of users had first been raised months earlier by IRL's own employees.
Co-Founder @ MDBootstrap.com / Forbes 30 under 30 / EO'er
For years I have worked as an IT consultant in countries such as the Netherlands, Belgium, Poland, and India, developing enterprise-class systems for the biggest companies in their domains.
Since 2016 I have been a co-founder of MDBootstrap.com, a world-class UI framework used by NASA, Amazon, Nike, Airbus, Samsung, Apple, and many other Fortune 500 companies.