Michael Cohen's AI Blunder: Fake Cases in Court Documents

by Faruk Imamovic
© Getty Images/Yana Paskova

Michael Cohen, the former lawyer for Donald Trump, has become a surprising case study in the misuse of artificial intelligence in legal proceedings. According to a report by The New York Times, Cohen admitted to using AI-generated content in a legal document presented to a federal judge.

This situation unfolded when Cohen used Google’s Bard, which he mistakenly believed to be a powerful search engine rather than a generative AI chatbot. The fabricated material appeared in a motion seeking an early end to the court supervision imposed after his guilty plea to tax evasion and other charges.

However, the motion included references to non-existent court cases, which caught the attention of US District Judge Jesse Furman. Judge Furman questioned Cohen's lawyer, David Schwartz, about these citations, prompting an investigation into their origin.

Cohen's Explanation and the Risks of AI in Legal Research

In response to the judge's queries, Cohen submitted a statement explaining his actions. He clarified that he had not intended to mislead the court and had used Google Bard for legal research, unknowingly citing fake cases.

Cohen's misunderstanding of Bard’s capabilities led to the error; he had previously used the tool in other contexts and found that it returned accurate information.

The incident is not an isolated case. It echoes a similar situation in June, when two New York lawyers were sanctioned and fined for including bogus, ChatGPT-generated court cases in a legal brief. The use of AI in legal work continues to grow, with some lawyers even employing AI tools to help draft arguments.

A notable example involves rapper Pras Michél, whose trial lawyer used an AI tool to help draft arguments; his new legal team later cited that reliance when seeking a new trial after his guilty verdict. Cohen's case, along with these other incidents, highlights the growing challenge of distinguishing authentic content from AI-generated material in the legal field.

It underscores the need for legal professionals to be aware of the capabilities and limitations of AI tools to prevent similar mishaps in the future.
