The Economics of AI Hallucinations: Information Quality as a Market Good
Nowadays, many people use AI to answer their questions. Easily accessible AI programs like ChatGPT, Google Gemini, and Grok provide plain-language answers to queries that once returned only lists of websites. The style and length of answers, as well as their depth and complexity, can be adjusted by the user through instructions written in everyday language. For example, a user can request that a one-page biography of Winston Churchill be generated in the style of an eighth-grade social studies student, with three photographs embedded. Never before in history has it been so easy for humans to access tremendous amounts of information.
Decline of Information as a Market Good
Years ago, people paid for information: news, encyclopedias, how-to books, guides, and so on. These resources began in print, though people later also paid for documentaries and instructional videos. By the late 1990s, major news publications had created their own websites, allowing Internet users to get information for free rather than pay for newspaper or magazine subscriptions. The launch of YouTube in 2005 began the decline of paid video documentaries. By the early 2010s, free websites like Wikipedia had all but eliminated public demand for printed encyclopedias.
In just a few decades, information went from something people spent hundreds, if not thousands, of dollars on to something available almost entirely for free. The rapid rise of free news and information sources, however, sparked a widespread debate about the reliability of that information. In education, schools debated whether the popular site Wikipedia could be trusted as a resource. Research purists argued that it could not, since any user could edit its articles. Wikipedia's defenders countered that the site's numerous dedicated editors quickly corrected misinformation, making it roughly as accurate as encyclopedias and other expensive information sources.
Asymmetric Information and Market Failure
Unfortunately, the reliability of information is a pertinent economic issue: asymmetric information, where one party to a transaction knows more than the other, is a classic cause of market failure. In the information market, producers know the true quality of their content, while consumers often cannot judge it before (or even after) consuming it. The growing popularity of layperson-driven content in the early 2000s, through websites like Wikipedia and YouTube where any user could write articles or post videos, exposed more viewers to incorrect or incomplete information. Simultaneously, the blogosphere emerged, letting people publish their thoughts and opinions online to countless readers.
Many Internet users struggle to determine which of the information they see and read is accurate. Compounding the problem, many editorials, articles, and videos are presented as unbiased and expert-made when they are not. This can leave people unwittingly misled, sometimes into expensive mistakes. For example, bloggers or YouTube creators may present misleading investment or financial advice while portraying themselves as savvy, experienced investors, perhaps even professionals. When viewers eventually discover the advice was bad, they have little recourse.
Concerns About AI [Mis]Information
A decade ago, people could be misled by content creators who portrayed their work, from print to video, as expert and unbiased. Today, people increasingly get their information and advice from nameless, faceless AI programs. Many accept AI answers with little scrutiny, not considering how these programs work or how common their mistakes are. Although many people are wary of AI's implications, they continue to use it regularly as a tool.
Currently, AI's accuracy is highly variable. For most consumer uses, it functions like a powerful search tool, generating answers from its training data and, in some cases, live web results. If a topic has abundant expert coverage, AI is likely to craft a highly accurate answer. If a topic is thinly researched, however, AI may fill in its answers with less verified material, such as blog posts or opinion pieces, or may "hallucinate," generating plausible-sounding but fabricated details. It often presents this material without caveats or warnings, making it indistinguishable from peer-reviewed, factual information.
AI Slop
Some Internet users feel that AI, in its current form, has become too prevalent and is crowding out high-quality, human-created content with low-quality imitations. "AI slop" is the pejorative term for this mass of computer-generated content, much of which still looks and sounds artificial. Critics argue that companies' reliance on AI slop instead of human-created content risks stifling creativity and innovation: because AI draws on material that already exists, they contend, it creates nothing new and does not advance the public discourse.
Renewed Market for Human-Reviewed Information?
Growing concerns about AI slop, and about the ease with which consumers and producers can be misled by AI, may be generating renewed demand for human content creation and fact-checking. Dissatisfaction with the quality of AI output has slowed what was once expected to be a rapid AI takeover of almost all content creation. The true ratio of AI to human content creation is difficult to measure, however, because many creators use AI for only part of their work; some content assumed to be purely AI-generated is actually only partially so.
Likely Future: Hybrid Content Creation
It turns out that human content creators cannot easily be replaced entirely; fact-checkers, writers, and artists are still needed to fix what AI gets wrong or wonky. What is likely to occur is that entertainment and news companies will pair existing artists and journalists with AI tools that assist with research. AI can speed up content creation, but human workers must sign off on the finished product. What this looks like will almost certainly vary from company to company, but consumers may demand some indication that the products they purchase (physical or digital) are human-approved.
Government Regulation May Stave Off AI Unemployment Apocalypse
Even if companies don't particularly mind churning out AI slop, governments may soon pass regulations requiring them to verify that their output is human-approved. Such a requirement to keep human employees on board is intended to stave off the dreaded AI unemployment apocalypse: the feared mass unemployment of white-collar workers, particularly entry-level workers, in the near future. Companies would have to retain many white-collar workers to verify and approve AI's work, though this work may be low-wage and mean pay cuts for many employees. Still, these workers would be employed rather than unemployed, and they would help maintain the quality of information available to consumers.