Attorneys in Mississippi have raised serious questions about whether artificial intelligence was used in drafting a flawed court order issued by a federal judge. U.S. District Judge Henry T. Wingate issued a temporary restraining order on July 20 halting Mississippi's controversial ban on DEI programs, but legal professionals flagged multiple factual errors, including misidentified plaintiffs, misquoted legislative language, and citations to non-existent cases, prompting widespread concern about how the document was produced.
In response to the issues, Judge Wingate withdrew the original order three days later and reissued a corrected version on July 23, backdated to the original filing date. The original version was removed from public access on the docket, raising concerns about transparency and whether the errors reflect deeper procedural irregularities.
Lawyers involved in the litigation questioned whether the order contained AI-generated fabrications, sometimes called "hallucinations." The misquoted text and incorrect plaintiff names have legal professionals asking whether the judge or his staff relied on unverified AI tools to draft portions of the document.
Christina Frohock, a legal scholar at the University of Miami who studies AI risks in the judiciary, cautioned that this sort of confusion is common when AI is involved. She pointed out that fabricated case citations and inaccuracies are hallmarks of AI hallucinations and said she felt like "Alice in Wonderland" when reviewing the record: "I actually don't know how to explain the backstory here."
Critics say that regardless of intent, judges and legal staff have an ethical obligation to ensure accuracy in court filings. Courts have disciplined attorneys for similar lapses—some linked to AI—but there’s generally no equivalent oversight when judicial opinions themselves contain errors, which raises questions about accountability in the judiciary.
Meanwhile, the parties in the DEI litigation are scheduled to reconvene on August 5 to argue over a preliminary injunction. That hearing may provide the first judicial opportunity to address whether human error or AI was responsible for the flawed order, and what safeguards should govern future opinion drafting.