In the developing landscape of artificial intelligence, Google and Meta are facing increasing scrutiny over the defamation risks posed by their AI-generated responses. Experts have raised concerns about features on both platforms that draw on user-generated comments and reviews, such as answering restaurant queries and summarising the sentiment of reviews.
The legal backdrop in Australia is that when an allegedly defamatory post or review appears on a platform such as Google or Facebook, the original poster is typically the one held accountable. However, a pivotal 2021 high court ruling established that the operators of pages hosting such comments, including news outlets' Facebook pages, can also bear liability as publishers.
Tech companies such as Google and Meta have already faced defamation actions in Australia. In 2022, Google was ordered to pay more than $700,000 for hosting a defamatory video, and in 2020 it was ordered to pay $40,000 over search results that linked to a defamatory news article about a Melbourne lawyer, although the latter ruling was later overturned by the high court.
Google has recently begun rolling out changes to its Maps service, using its Gemini AI to let users ask about places to visit or things to do, and to summarise user reviews of restaurants and other establishments. At the same time, features providing AI-generated summary responses have been rolled out to Australian users.
Meta, meanwhile, has begun adding AI-generated summaries of public comments on posts, including those on news outlets' pages. Michael Douglas, a defamation expert at Bennett Law, anticipates that cases will emerge as AI becomes more embedded in these platforms: if AI systems "suck up comments and spit them out", the companies could be treated as publishers and face defamation liability.
Companies may seek to rely on the defence of "innocent dissemination" under defamation acts, but experts doubt such defences would succeed in court, particularly where a company ought reasonably to have known it was repeating defamatory content.
Douglas noted that while some states have introduced new provisions for digital intermediaries in their defamation laws, these may not adequately cover AI-generated content.
Prof David Rolph, from the University of Sydney, said the possibility of AI repeating defamatory comments poses a clear risk for the tech companies. He noted, however, that the serious harm requirement introduced in recent defamation reforms may reduce that risk, though those reforms predate large language models, which add complexities the current legislation has yet to address.
Rolph also suggested that because AI generates varied responses depending on user input, any allegedly defamatory material may be disseminated only to small audiences.
In response to these defamation risks, Miriam Daniel, a vice-president and head of product at Google, said the company works to identify and remove fake reviews and content that breaches its policies. She said Gemini looks for "a balanced point of view" by analysing common themes across multiple reviews, capturing both positive and negative sentiment.
A Meta spokesperson acknowledged that its AI systems are new and said their outputs may not always return the intended responses, adding that the company is continually improving its models to reduce inaccuracies.