AI in Legal Practice: Barrister’s Misstep

Oct 22, 2025 | AI Trends

In a striking incident within the legal profession, an immigration barrister, Chowdhury Rahman, faced serious repercussions for using artificial intelligence inappropriately during tribunal preparations. The judge, Mark Blundell, discovered that Rahman had cited cases that were found to be either entirely fictitious or wholly irrelevant, highlighting the potential pitfalls of relying on AI tools without proper diligence.

The tribunal hearing revealed that Rahman had utilized ChatGPT-like software to conduct his legal research. As the proceedings unfolded, it became evident that Rahman not only failed to verify the accuracy of the information produced by the AI but also attempted to conceal its use from the judge. Such actions wasted the court’s time and raised ethical questions concerning the responsibilities of legal practitioners in utilizing modern technology.

The case involved two Honduran sisters seeking asylum due to threats from a criminal gang in their home country. With Rahman representing the sisters, their appeal escalated to the upper tribunal level. Under Judge Blundell’s scrutiny, however, Rahman’s arguments were dismissed, with the judge noting that “nothing said by Mr. Rahman orally or in writing establishes an error of law on the part of the judge.”

During his examination of the appeal grounds submitted by Rahman, Judge Blundell identified severe discrepancies. Of the 12 legal authorities referenced in the documentation, he noted, some did not exist, and those that did failed to support the legal propositions presented. In a rare and candid postscript, Blundell indicated significant problems with the appeal grounds, stating that it appeared Rahman had drafted them with the assistance of generative artificial intelligence, thereby misleading the tribunal.

During the hearing, Rahman suggested that the inaccuracies stemmed from his “drafting style,” admitting to some confusion in his submissions. Judge Blundell dismissed this explanation, asserting that the issues at hand were fundamentally more serious than mere drafting problems: the authorities cited either did not exist or were irrelevant to the legal arguments advanced. Rahman’s unverified reliance on AI thus stands as a cautionary tale, raising pressing concerns about the integration of such tools into legal work.

Ultimately, Judge Blundell expressed his intention to report Rahman to the Bar Standards Board. Such actions underscore the growing need for strict regulations and guidelines governing the use of AI in legal practice. As technology continues to evolve, it is crucial for barristers and other legal professionals to balance innovative tools with ethical responsibilities, ensuring that such resources enhance, rather than undermine, the integrity of the judicial process.