On June 6, 2025, the High Court of England and Wales issued a stern warning to lawyers about the consequences of relying on artificial intelligence in their work. The intervention followed multiple incidents in which legal professionals put inaccurate AI-generated material before the court, including fabricated quotations and citations to rulings that do not exist.

In a notable ruling, Dame Victoria Sharp, President of the King's Bench Division, sitting with Mr Justice Jeremy Johnson, examined two recent cases in which such AI-generated falsehoods found their way into arguments presented to the court. In one, a claimant and his lawyer admitted that their lawsuit against two banks contained inaccuracies traceable to AI-generated content, and the case was dismissed. In the other, a lawyer representing a client against a local council could not explain where a series of nonexistent case citations had come from.

Sharp invoked the court's power to regulate its own procedures to address these instances of serious misuse, stressing the damage to the administration of justice and to public confidence if AI continues to be misapplied in this way. She warned that lawyers who put false AI-generated material before the court could face consequences ranging from professional discipline, including being struck off, to contempt proceedings and, in the most serious cases, criminal prosecution.

The ruling marks a significant moment in the ongoing debate about ethics and accuracy in legal practice, as the profession grapples with how to integrate the technology into its work. AI may improve efficiency and give lawyers new tools, but its misuse jeopardizes the integrity of legal proceedings and, ultimately, harms the people seeking justice.