Meta’s recent decision to end its professional fact-checking program has ignited widespread criticism across tech and media sectors. Critics caution that abandoning expert oversight might lead to a decrease in trust and reliability within the realm of digital information. This concern is particularly pertinent as profit-driven platforms have primarily been left to self-regulate.

However, an aspect often overlooked in this ongoing debate is the growing role of AI large language models (LLMs) in crafting news summaries, headlines, and engaging content. These AI systems can produce information that attracts attention before traditional content moderation can even react. The conversation should extend beyond mere instances of misinformation; rather, it should focus on how the selection, framing, and emphasis of ostensibly accurate information can significantly influence public perception.

AI’s Role in Information Consumption

Large language models are increasingly shaping opinion formation as they generate the information disseminated through chatbots, virtual assistants, and increasingly, news platforms and social media. Studies indicate that these models do more than relay facts; they subtly accentuate certain perspectives while minimizing others, often without users being aware of such biases.

Communication Bias in AI

In a forthcoming paper published in the journal Communications of the ACM, researchers Stefan Schmid and I explored the communication bias exhibited by large language models. We discovered that these models might favor highlighting specific viewpoints while either downplaying or omitting opposing perspectives. Such bias affects how users process information, regardless of its veracity.

Empirical research over recent years has established benchmark datasets correlating model outputs with political positions during elections, revealing significant variations in how LLMs handle public content. Depending on the user’s context or self-described persona, these models may tilt their outputs towards different perspectives. For instance, the phrasing of a response could vary greatly based on whether a user identifies as an environmental activist or a business owner, despite both receiving factually accurate answers.

The Challenge of Sycophancy

This tendency towards aligning outputs with user expectations can easily be misconstrued as flattery, a phenomenon known as sycophancy. However, communication bias runs deeper and points to fundamental disparities in the design and training of AI systems. A few developers controlling the LLM market may lead to the consistent promotion of certain viewpoints. Consequently, minor behavioral differences in these models can escalate into significant distortions in how information is conveyed to the public.

Regulatory Measures and Limitations

The reliance on LLMs by society as a primary means of accessing information has prompted governments worldwide to initiate policies addressing AI bias. For instance, the European Union’s AI Act and Digital Services Act aim to enforce transparency and accountability, yet they do not adequately address the subtleties of communication bias in AI outputs.

While advocates of AI regulation often strive towards attaining neutral AI, true neutrality remains elusive. AI systems inherently reflect the biases present in their training data and design processes, and efforts to rectify such biases often lead to replacing one type of bias with another.

The Need for Holistic Solutions

Addressing communication bias goes beyond merely correcting biased training data. It encompasses the entire market structure influencing technology design. As few large language models dominate the information access landscape, the risk of communication bias intensifies. To effectively mitigate bias, it is crucial to promote competition, user accountability, and regulatory openness towards diverse construction and deployment methods of LLMs.

Beyond Regulations

While regulating harmful outputs post-deployment or mandating audits before launch is standard practice, our research indicates these approaches may overlook the subtle communication biases that emerge from user interactions with the models.

Expecting regulation to eliminate all biases in AI systems could be an overly optimistic approach. While certain policies are beneficial, they tend to neglect the root issue: the underlying incentives shaping how technologies convey information to the public. A more sustainable solution hinges on promoting competition, transparency, and active consumer participation in designing and refining large language models.

This necessity stems from the fact that AI will not only impact the information we seek but will also critically influence the societal framework we envisage for the future.