In the span of just a few years, the legal profession has experienced a technological evolution that rivals the advent of email or electronic filing. Generative artificial intelligence, once the realm of speculative tech futurism, is now a mainstream tool in litigation strategy, legal drafting, discovery management, and client communication. For Florida attorneys in 2025, this change is not only cultural—it is procedural, ethical, and potentially disciplinary.
At the same moment that general-purpose AI tools such as ChatGPT, Copilot, Grok, and Claude, along with a growing number of legal-specific platforms, are entering widespread use, the Florida Supreme Court has ushered in sweeping amendments to the Florida Rules of Civil Procedure. These reforms, effective January 1, 2025, signal a harmonization with federal practice, imposing new obligations on proportionality in discovery, specificity in objections, and the handling of electronically stored information. Together, the rise of AI and the shift in procedure create a singular mandate for attorneys: adapt with competence, or face consequences.
The new legal landscape requires an entirely different posture toward the use of artificial intelligence. No longer can it be treated as an experimental novelty or a personal productivity shortcut. Under ABA Formal Opinion 512, issued in 2024, lawyers who use generative AI must understand not just its outputs, but its operation. This includes knowing how the model was trained, its limitations, the risks of hallucination, and the circumstances under which it may inadvertently expose confidential client data. The opinion makes clear that ethical competence under Model Rule 1.1—and by extension, Florida Rule 4-1.1—demands technological literacy at a basic functional level. That obligation extends not only to lawyers, but also to the nonlawyer staff they supervise under Rule 4-5.3.
AI systems can enhance legal work in dramatic ways. They can summarize depositions, identify key issues in a document dump, draft initial versions of interrogatories, and even generate outlines for closing arguments. But they also introduce risk. The now-infamous Mata v. Avianca case, in which a lawyer filed a brief containing fictitious case law generated by ChatGPT, serves as a stark warning. While Mata occurred outside Florida, its lesson is universal: if you submit AI-generated content to a court, you own every word. Any factual error, phantom case, or misstatement becomes your ethical responsibility. New opinions addressing lawyers’ misuse of AI continue to appear with regularity.
The recent procedural reforms enacted by the Florida Supreme Court heighten the consequences of careless AI use. Under Rule 1.280, which now adopts federal-style proportionality standards, lawyers must manage discovery in a way that balances the needs of the case against cost, burden, and scope. This means understanding whether the use of AI tools narrows or broadens the scope of responsive documents, whether they introduce error, and whether their methodology is defensible. If a firm uses AI to sort or tag documents and fails to produce relevant information due to an algorithmic misclassification, it cannot excuse the lapse by pointing to the machine. It may be sanctioned under Rule 1.380, which has also been amended to mirror federal enforcement standards, including mandatory disclosure obligations and penalties for noncompliance.
Under the amended Rule 1.350, objections to discovery must now be stated with specificity and must disclose whether any documents are being withheld. This prevents lawyers from using generic AI outputs to issue broad, unsupported objections. The court expects attorneys to know what responsive materials exist, whether those materials are being withheld, and why. An objection must now not only assert proportionality—it must articulate it.
Meanwhile, Rule 1.200 gives courts expanded authority at case management conferences to address discovery plans, AI-assisted ESI production, and the use of expert witnesses. Judges may inquire whether AI was used to generate discovery responses, and if so, whether the use of such tools was disclosed to opposing counsel. (Whether such an inquiry is itself proper is a question counsel should be prepared to raise.) Judges may require counsel to explain the form in which ESI was preserved or produced and may demand a description of the technical tools involved.
Discovery protocols that rely on AI must now be structured, documented, and ready for challenge. That level of procedural rigor was once reserved for complex litigation; it is now the likely expectation in every civil division across the state.
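What “structured, documented, and ready for challenge” can look like in practice is easier to see with a concrete illustration. The following is a minimal sketch, in Python, of an audit trail a firm might keep for an AI-assisted document-tagging pass. Every tool name, field, and file format here is a hypothetical assumption chosen for illustration, not a requirement of the amended rules or a reference to any particular platform; the point is simply that each machine classification is recorded alongside the human reviewer’s final call.

```python
# Minimal sketch (not any vendor's actual API) of an audit trail for an
# AI-assisted document-tagging pass. Every field name is an illustrative
# assumption; the goal is that each machine classification is recorded
# together with the human decision that actually governs production.
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TaggingRecord:
    document_id: str      # Bates number or internal ID of the document
    model_name: str       # identifier of the review model or platform used
    model_version: str    # version string, so the run can later be described
    review_criteria: str  # the prompt or tagging instruction that was applied
    ai_tag: str           # "responsive", "non-responsive", "privileged", etc.
    ai_confidence: float  # model-reported confidence, if the tool exposes one
    human_reviewer: str   # attorney or supervised staff member who checked it
    final_tag: str        # the human decision that controls what is produced
    reviewed_at: str      # ISO-8601 timestamp of the human review

def log_tagging_decision(record: TaggingRecord,
                         audit_path: str = "tagging_audit.jsonl") -> None:
    """Append one reviewed tagging decision to a JSON Lines audit file."""
    with open(audit_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: the AI flagged a document as non-responsive, the reviewing attorney
# overrode that call, and both the machine output and the override are kept.
log_tagging_decision(TaggingRecord(
    document_id="DEF-000123",
    model_name="example-review-model",   # hypothetical name
    model_version="2025-01",
    review_criteria="Documents concerning the March 2023 supply agreement",
    ai_tag="non-responsive",
    ai_confidence=0.71,
    human_reviewer="A. Attorney",
    final_tag="responsive",
    reviewed_at=datetime.now(timezone.utc).isoformat(),
))
```

A log of this kind does not make an AI-assisted review defensible by itself, but it gives counsel something concrete to point to when asked how the production was supervised.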
In courtrooms, the implications extend even further. Judges are becoming increasingly aware of the role AI plays in case development and trial preparation. Some attorneys use AI to draft pretrial motions, others to develop direct examination outlines or summarize expert testimony. As these uses increase, so does the likelihood that AI-generated content will become part of the record. In cases involving forensic evidence, IP disputes, or commercial fraud, AI tools may themselves become subjects of expert testimony—requiring lawyers to understand and articulate the reliability of the systems their firms employ. If a forensic expert uses a machine learning model to analyze digital evidence, counsel must be able to defend that model’s methodology under Florida’s Daubert standard. They must know what training data the model used, how its accuracy is measured, and whether its output is reproducible.
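To make “measured accuracy” and “reproducible output” more concrete, the following is a minimal sketch, under purely hypothetical assumptions, of how counsel or a consulting expert might re-score a labeled validation set and compare the result to the accuracy figure reported in an expert disclosure. The classify function and the CSV layout are invented stand-ins; nothing here describes any actual forensic tool.

```python
# Minimal sketch of a reproducibility check: re-score a labeled validation set
# with the expert's model and compare the measured accuracy to the figure
# reported in the expert's disclosure. The `classify` callable and the CSV
# layout ("text,label" columns) are hypothetical stand-ins.
import csv
from typing import Callable

def measure_accuracy(classify: Callable[[str], str], validation_csv: str) -> float:
    """Return the fraction of validation examples the model labels correctly."""
    total = 0
    correct = 0
    with open(validation_csv, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):   # expects columns: text, label
            total += 1
            if classify(row["text"]) == row["label"]:
                correct += 1
    return correct / total if total else 0.0

def check_reported_accuracy(classify: Callable[[str], str],
                            validation_csv: str,
                            reported_accuracy: float,
                            tolerance: float = 0.02) -> bool:
    """True if the re-measured accuracy falls within `tolerance` of the reported figure."""
    measured = measure_accuracy(classify, validation_csv)
    print(f"Reported: {reported_accuracy:.3f}  Measured: {measured:.3f}")
    return abs(measured - reported_accuracy) <= tolerance
```

If the expert cannot supply the validation data, the model version, or the means to re-run such a check, that gap itself becomes fair ground for a Daubert challenge.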
These are no longer questions for the future. They are questions that trial lawyers in Florida must be ready to answer now.
The ethical implications of AI reach far beyond discovery and evidence. They also reshape the way attorneys manage client relationships. Engagement agreements must evolve to reflect the use of AI in the representation. If a law firm routinely uses AI tools to assist in drafting, document review, or timeline building, those uses should be disclosed to the client at the outset. Clients should know how their data will be handled, whether it will be shared with third-party platforms, and what safeguards are in place to protect privilege. Under Rule 4-1.6, a lawyer may not disclose confidential information unless authorized by the client. Inputting privileged client content into a public-facing AI tool could result in inadvertent waiver or exposure—and the ethical consequences that follow. However unlikely such a disclosure may seem, it remains possible and must be guarded against.
Furthermore, the use of AI affects how lawyers bill. If AI completes in five minutes what would ordinarily take an hour, billing for the full hour, even where the use of AI has been disclosed to the client, may violate Rule 4-1.5’s requirements of reasonableness and transparency. Clients are increasingly sophisticated. They know AI is being used, and they expect honesty and proportionality in how those efficiencies are passed on. Billing models may also need to be reconsidered in this new era, because accounting for the use of AI on any particular document, research project, or case can be complex. Until clearer rules or procedures emerge, the safest course is to bill only for the actual time expended on the task, including the time spent reviewing and verifying all cited materials and revising the work product based on that review.
In light of these dynamics, the lawyer of 2025 must operate with what the ABA and other commentators have described as an “operator mindset.” This does not mean becoming a software engineer. It means being an informed user—one who verifies, supervises, and, when necessary, discards AI-generated work. It means understanding the limitations of the tools being used and building workflows that include human review at critical junctures. It means keeping prompt histories and citation logs where AI is used in research or drafting. It means treating every AI-assisted task as if it were performed by a junior associate—accountable, supervised, and correctable.
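As an illustration only, the following sketch shows one way the prompt histories and citation logs described above might be kept. The file format and the citation pattern are assumptions for demonstration; a real workflow would rely on a proper citator, and every extracted authority stays marked unverified until a lawyer confirms it against a primary source.

```python
# Minimal sketch of a prompt history and citation log. The regular expression
# is a rough illustration of spotting reporter-style citations (e.g.,
# "123 So. 3d 456"); a production workflow would use a proper citator, and
# every citation remains "unverified" until a human confirms it.
import json
import re
from datetime import datetime, timezone

CITATION_PATTERN = re.compile(
    r"\b\d{1,4}\s+(?:So\.\s?3d|So\.\s?2d|F\.\s?Supp\.\s?3d|F\.[234]d|U\.S\.)\s+\d{1,4}\b"
)

def log_ai_interaction(matter_id: str, prompt: str, response: str,
                       history_path: str = "prompt_history.jsonl") -> list[dict]:
    """Record one prompt/response pair and return its citations, all unverified."""
    citations = [{"citation": c, "verified": False, "verified_by": None}
                 for c in CITATION_PATTERN.findall(response)]
    entry = {
        "matter_id": matter_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "prompt": prompt,
        "response": response,
        "citations": citations,
    }
    with open(history_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return citations

# Usage note: every citation returned here must be pulled and read by a lawyer
# before the draft leaves the firm, just as a junior associate's memo would be checked.
```

Kept consistently, a log like this also answers the case management questions discussed earlier: it shows when AI was used, for what, and how its output was verified.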
Ultimately, the convergence of Florida’s civil procedure reform and the rise of generative AI presents both a challenge and an opportunity. Those who fail to understand the rules—whether ethical or procedural—will find themselves at risk: of sanctions, of evidentiary exclusion, of lost credibility with the court, and of bar discipline. But those who embrace the new landscape with care, clarity, and competence will find themselves on the leading edge of a transformed legal practice. The productivity that AI makes possible is genuinely exciting and should be embraced, albeit only within exacting and well-understood parameters.
Artificial intelligence is not a substitute for legal judgment. It is not a shortcut to ethical decision-making. It is a powerful tool—one that must be wielded with precision, diligence, and respect for the rules that govern our profession. The future of legal practice is here. The rules are in effect. The technology is advancing. It is time for Florida attorneys to meet the moment.
Appendix: Sample AI Use Disclosure Clause for Florida Engagement Agreements
Artificial Intelligence Use Disclosure Clause
As part of our commitment to providing efficient legal services, this firm may utilize artificial intelligence tools to assist in legal research, document review, drafting, and internal workflow management. These tools are operated under the supervision of licensed attorneys, and no AI-generated content will be submitted to any court or third party without verification. We do not input privileged or confidential information into public-facing AI platforms. If you have questions about the use of such tools, we are happy to discuss their function and safeguards with you.

