The Verdict Is In: ChatGPT on Trial, Banned From the Courtroom

One attorney's decision to rely on AI, specifically ChatGPT, for legal research in a federal filing led to serious adverse consequences.

Attorney Steven Schwartz incorporated the AI language model ChatGPT into his legal research for a federal filing. Popular as AI has become, the six cases and supporting precedents the tool supplied all turned out to be completely fabricated.

The incident prompted Texas federal judge Brantley Starr to take proactive measures and introduce a new requirement for attorneys appearing in his court: they must declare either that no part of their filing was generated by an AI language model or that any AI-generated language was reviewed for accuracy by a human being. The requirement underscores the judge's determination to head off the mishaps and dangers that AI-generated legal work can create.

While AI language models have various applications in the legal field, such as generating standardized legal documents, suggesting document improvements, predicting potential questions during oral arguments, and aiding discovery requests, they are unsuitable for legal briefing. These platforms are prone to providing false information and exhibiting bias. They can fabricate quotes and citations, and they are bound by no sense of duty, honor, or commitment to justice; they owe no allegiance to a client, the rule of law, or the laws and Constitution of the United States.

A certificate is obligatory

From now on, all attorneys appearing in his court must submit a certificate as part of the official record. The certificate must affirm either that no part of the filing was generated by generative artificial intelligence (such as ChatGPT, Harvey.AI, or Google Bard) or, if AI assistance was used, that a human being reviewed any AI-generated language for accuracy.

Although this requirement is currently specific to one judge in one court, it would not come as a shock if other judges adopted a similar rule. AI is a powerful and potentially beneficial technology, but its output must be checked for accuracy. A party that believes an AI platform meets the necessary standards of accuracy and reliability for legal briefing can request permission from the court and explain the basis for that belief.