
On July 23, 2025, the U.S. District Court for the Northern District of Alabama issued a consequential sanctions order that has become a touchstone in the rapidly evolving relationship between artificial intelligence and legal ethics.

U.S. District Judge Anna Manasco sanctioned three attorneys from the prominent law firm Butler Snow LLP for submitting court filings that relied on fictitious case law generated by ChatGPT. The incident has sparked widespread concern across the legal profession, not only for the ethical violations it reveals but for what it portends about the broader risks of AI misuse in practice.

The Incident

The underlying litigation involved allegations by plaintiff Frankie Johnson, an Alabama inmate, who claimed he was stabbed multiple times while incarcerated at Donaldson Correctional Facility. His lawsuit, which named former Alabama Department of Corrections Commissioner Jefferson Dunn as a defendant, raised serious claims of constitutional violations stemming from alleged failures to protect him from known threats. Butler Snow represented Dunn in the matter and was responsible for filing dispositive motions in federal court.

What followed was a cautionary tale. In briefing submitted to the court, three attorneys—whose records, likely unblemished until then, are now publicly tarnished—cited purported case law that, upon closer inspection, did not exist. One of the referenced authorities, Kelley v. City of Birmingham, was not merely misquoted or misapplied—it was entirely fabricated. The source of the error, as later revealed, was ChatGPT-generated output that had not been verified before being presented to the court.

Judge Manasco, in a sharply worded opinion, held that the attorneys had committed serious professional misconduct by failing to meet the basic ethical duty of verifying the accuracy and authenticity of the authorities cited. The court rejected the attorneys’ explanations that they “assumed” someone else on the team had checked the citations. Such delegation, the court emphasized, does not absolve an individual lawyer of the professional responsibility to ensure that filings submitted under their name are accurate and honest.

As a result, the court imposed three distinct sanctions: public reprimand, disqualification from continued representation in the case, and referral to the Alabama State Bar for possible disciplinary action. These penalties send a clear signal that the judiciary is no longer willing to tolerate even inadvertent AI-related misconduct in court filings.

Broader Trends in AI Use and Misuse in Litigation

The Butler Snow sanctions are not an isolated event. Rather, they reflect a growing judicial impatience with AI-generated inaccuracies, especially those that make their way into formal court submissions. In 2023, a similar scandal erupted in the Southern District of New York, when lawyers cited nonexistent cases in an affidavit also sourced from ChatGPT. That matter resulted in sanctions as well, though the attorneys’ contrition and remedial steps were taken into account in the final ruling. Since then, other courts—including federal judges in California and Florida—have reportedly begun issuing standing orders requiring that any AI-generated material be disclosed and independently verified.

What distinguishes the Butler Snow matter is that the attorneys involved worked for a large and sophisticated law firm with ample resources to implement guardrails and verification protocols. The fact that such a lapse occurred in that context heightens concerns within the judiciary and the bar. The court noted that even though the use of generative AI is no longer novel, its uncritical use—particularly in high-stakes litigation—remains a serious ethical risk.

Importantly, the court’s reasoning emphasized that ethical responsibility remains personal and non-delegable, even in collaborative or firm-wide workflows. An attorney may not rely on another’s representations, nor on software, however sophisticated, to discharge the duty of due diligence. The invocation of technology, without verification, is no defense to misconduct.

The Ethics of Artificial Intelligence in Legal Practice

This case adds to a growing body of law and commentary regarding the ethical implications of artificial intelligence in the legal profession. The American Bar Association has issued guidance acknowledging that AI can enhance efficiency but must be used under conditions of competence, confidentiality, and diligence as set forth in Model Rules 1.1, 1.6, and 1.3, respectively. State bar ethics committees are likewise beginning to issue opinions requiring attorneys to understand and control the risks associated with algorithmic assistance.

At the same time, some firms are responding to this risk environment by implementing “AI verification protocols” or even banning the use of generative AI tools altogether for litigation filings unless specifically authorized and reviewed. Legal research platforms are also racing to develop integrated verification layers to distinguish real precedent from fabricated citations. This technological arms race is occurring in parallel with a growing movement toward requiring certifications of accuracy and human oversight in every AI-influenced court filing. (Whether such certifications are appropriate or unduly invasive is a potential subject for future discussion and analysis.)

Cultural Shifts in Judicial Expectations

Another dimension of this trend is the cultural shift underway in the judiciary’s relationship with technology. Courts are becoming increasingly tech-literate, and judges are no longer inclined to extend the benefit of the doubt when attorneys plead ignorance about AI-generated content. Rather, there is a growing expectation that attorneys who choose to use AI tools must not only understand their benefits but also their limitations—and bear full responsibility for both.

As Judge Manasco’s opinion makes clear, courts are beginning to treat “AI hallucinations” not merely as errors but as professional breaches when they result from a lack of due diligence. Her order quoted prior precedent emphasizing that litigants who come to court must do so with clean hands and clean files, implicitly extending that obligation to include any digital tool used in the drafting process.

The Future: Compliance, Sanctions, and Systemic Change

What this case reveals, while harsh, is a rapidly maturing view of legal ethics in the age of AI. The profession can no longer treat generative AI as an experimental novelty; it must now be approached as a regulated tool with known risks. Firms must develop internal protocols, require attorney training, and implement layered review processes to ensure that AI-augmented documents meet the same standards of candor and reliability as any traditional submission.

Moreover, this case likely signals a new phase of legal malpractice exposure. As courts document these failures and as clients become more sophisticated, firms that do not implement proper controls may find themselves exposed to claims not only from judges but also from their own clients.

In short, the Butler Snow sanctions order is a watershed moment—not because it introduced new law, but because it confirms that the judiciary is ready to enforce long-standing professional obligations in a new technological context. Attorneys, firms, and regulators alike must take notice.
Mark Osherow

Managing Member at Osherow, PLLC

Jurisdiction: Boca Raton


Phone: +1 561 257 0880

Email: mark@osherowpllc.com