Introduction: From Innovation to Regulation
As artificial intelligence (AI)—especially generative AI—evolves from theoretical promise to practical ubiquity, the legal system is racing to catch up. Once the domain of speculative ethics panels and academic policy debates, AI regulation in the United States is now a pressing concern for litigators, in-house counsel, compliance professionals, and courts. From consumer protection and employment law to algorithmic bias and biometric privacy, the legal landscape has entered a period of rapid and fragmented development.
This article explores the current framework of AI governance in the United States, with an emphasis on federal regulatory agency initiatives, emerging state legislation, and litigation implications. The analysis also considers how U.S. regulatory efforts compare with international models like the European Union’s AI Act and offers strategic guidance for navigating a still-forming but increasingly high-stakes legal domain.
Federal Regulatory Landscape: A Distributed Model of Oversight
At the federal level, the United States has not yet enacted a comprehensive AI statute. Instead, regulatory oversight is distributed across a patchwork of agencies, each addressing AI within the context of its statutory mission. The Federal Trade Commission (FTC) plays a leading role in scrutinizing AI-related consumer protection issues, particularly regarding deceptive practices, unfair competition, and data misuse. The agency views AI systems as subject to long-standing principles of truth-in-advertising and consumer autonomy and may take enforcement action where companies exaggerate the capabilities of AI models or fail to disclose material risks.
The Equal Employment Opportunity Commission (EEOC) has focused on the use of automated decision systems in employment. It emphasizes that federal anti-discrimination protections, including Title VII, the Americans with Disabilities Act (ADA), and the Age Discrimination in Employment Act (ADEA), apply regardless of whether a human or a machine makes the decision, and it warns that algorithms used in hiring may produce disparate impacts based on race, gender, disability, or age.
In the financial services sector, the Consumer Financial Protection Bureau (CFPB) oversees AI-driven credit scoring and underwriting systems, ensuring compliance with the Equal Credit Opportunity Act and the Fair Credit Reporting Act. The Department of Justice (DOJ) contributes to AI governance by exploring civil rights enforcement and the antitrust implications of algorithmic collusion and market concentration in AI development.
This sector-specific regulatory architecture allows for tailored enforcement but also creates compliance complexity, especially for businesses operating across multiple industries. Recent actions by the Trump Administration, however, have left the future scope of this regulatory oversight unclear.
State-Level Legislation: Emerging Models in Colorado and California
State legislatures have moved more quickly than Congress in crafting AI-specific laws. Colorado’s Artificial Intelligence Act, enacted in 2024, is one of the most detailed and forward-looking state AI laws. It focuses on “high-risk” AI systems—those making consequential decisions affecting employment, housing, education, or healthcare—and requires risk assessments, disclosures, and impact audits.
California has introduced multiple AI-related bills targeting biometric surveillance, algorithmic hiring practices, and facial recognition technologies. These measures build on California’s comprehensive data privacy framework under the California Consumer Privacy Act (CCPA), as amended and expanded by the California Privacy Rights Act (CPRA).
However, these laws diverge significantly in scope, definitions, and obligations, resulting in a fragmented regulatory map. For example, while some jurisdictions define AI broadly to include statistical decision tools, others confine the term to machine learning systems. This disparity heightens compliance risks and opens the door to jurisdiction-specific litigation strategies.
Risks at the Intersection: Biometrics, AR, and Training Data
Biometrics and augmented reality (AR) technologies raise unique legal challenges under AI governance. The use of facial recognition in AR platforms, for instance, may trigger strict consent and data protection requirements under state statutes like Illinois’ Biometric Information Privacy Act (BIPA). BIPA’s private right of action has led to high-profile class actions with multi-million-dollar settlements.
Another critical area involves AI training data. Generative models often rely on massive data sets scraped from the internet, which may include copyrighted works, private images, and sensitive metadata. Plaintiffs—including authors, artists, and website owners—have initiated lawsuits against AI developers, alleging copyright infringement, violation of the right of publicity, and breaches of privacy laws.
Courts are now being asked to decide whether such data use constitutes fair use, whether AI outputs infringe derivative rights, and whether developers can be held accountable under traditional tort or consumer protection theories.
Bias, Decision-Making, and Consequential Harms
AI systems used in consequential decision-making—such as employment screening, insurance underwriting, and healthcare triage—pose particularly acute legal risks. Bias in training data or model outputs can lead to discriminatory effects, even if unintended. This has prompted regulatory and private scrutiny under federal civil rights laws, the ADA, and state anti-discrimination statutes.
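To make the disparate impact concept concrete, the following Python sketch applies the EEOC’s “four-fifths” rule from the Uniform Guidelines on Employee Selection Procedures (29 C.F.R. § 1607.4(D)), under which a group selection rate below 80 percent of the highest group’s rate is conventionally treated as evidence of adverse impact. The group labels and counts are hypothetical; this is an illustration of the arithmetic, not a substitute for a validated audit.

```python
# Minimal sketch: adverse-impact screening using the EEOC "four-fifths" rule
# (29 C.F.R. § 1607.4(D)). Group names and counts below are hypothetical.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of applicants in a group who were selected."""
    return selected / applicants

def adverse_impact_ratios(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    groups maps a label to (selected, applicants). A ratio below 0.8
    is the conventional threshold for presumptive adverse impact.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in groups.items()}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

if __name__ == "__main__":
    # Hypothetical screening outcomes from an automated hiring tool
    outcomes = {"under_40": (90, 200), "over_40": (45, 150)}
    for group, ratio in adverse_impact_ratios(outcomes).items():
        flag = "REVIEW" if ratio < 0.8 else "ok"
        print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```

In this hypothetical, the over-40 group’s selection rate (30 percent) is only two-thirds of the under-40 rate (45 percent), falling below the 0.8 threshold and warranting review.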
Legal theories continue to evolve. Plaintiffs may allege negligent design, failure to warn, product liability, or consumer deception. These claims often require sophisticated discovery into the underlying model architecture, data lineage, and output logic. Florida’s recent civil procedure amendments, particularly the incorporation of federal-style proportionality in Rule 1.280, significantly shape how such discovery unfolds.
Comparative Perspectives: The European Union AI Act
The European Union’s AI Act, which entered into force in 2024 and takes effect in stages through 2027, classifies AI systems into four risk tiers and imposes obligations accordingly. High-risk systems—such as those used in law enforcement, hiring, and credit scoring—require conformity assessments, documentation, audit trails, and human oversight. Unacceptable-risk systems, such as social credit scoring, are banned outright.
This centralized, precautionary model stands in contrast to the United States’ sectoral and reactive framework. Although several federal proposals have been advanced, including the Algorithmic Accountability Act and the SAFE Innovation Framework, none has been enacted. As a result, U.S. companies operating globally may find themselves subject to conflicting requirements.
Strategic Implications for Litigators and Corporate Counsel
Attorneys advising clients on AI governance must now account for substantive regulation, procedural obligations, and reputational risk. AI-related claims may arise not only from a model’s use but also from failures to disclose its capabilities, preserve its documentation, or supervise its integration into decision-making.
Discovery strategies should incorporate detailed preservation protocols for training data, model parameters, decision logs, and user audits. Expert witnesses with technical and legal fluency will become increasingly essential in litigation involving complex algorithms. In Florida, procedural amendments to Rule 1.200 on case management and Rule 1.280 on discovery will directly affect litigation strategy and timeline.
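By way of illustration only, the short Python sketch below shows one way a litigation team might implement such a preservation protocol: computing cryptographic hashes of model artifacts (weights, decision logs, training-data extracts) and recording them in a timestamped manifest so that the integrity of the preserved files can later be demonstrated. The directory and field names are hypothetical, not a prescribed standard.

```python
# Minimal sketch: building a preservation manifest for AI artifacts
# (model weights, decision logs, training-data extracts). Paths and
# metadata fields are hypothetical; hashing supports later integrity checks.

import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 to fingerprint it without loading it whole."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def build_manifest(artifact_dir: Path) -> dict:
    """Record name, size, and hash of every preserved file, with a timestamp."""
    return {
        "preserved_at": datetime.now(timezone.utc).isoformat(),
        "artifacts": [
            {"file": str(p.relative_to(artifact_dir)),
             "bytes": p.stat().st_size,
             "sha256": sha256_of(p)}
            for p in sorted(artifact_dir.rglob("*")) if p.is_file()
        ],
    }

if __name__ == "__main__":
    manifest = build_manifest(Path("preserved_artifacts"))  # hypothetical directory
    Path("manifest.json").write_text(json.dumps(manifest, indent=2))
```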
Moreover, engagement letters, service agreements, and internal governance policies should now reflect obligations related to AI model development, third-party tools, data licensing, and electronic discovery.
Recent Enforcement and Litigation Trends
Several federal and state enforcement actions have begun to shape the emerging AI litigation environment. The FTC has issued warning letters and opened investigations into companies overstating AI capabilities or deploying systems that result in consumer harm. The EEOC recently filed a lawsuit against a firm accused of using an AI screening tool that systematically excluded older job applicants, illustrating how civil rights laws are being brought to bear on algorithmic processes.
Private litigation under BIPA, California’s CPRA, and general tort theories is also rising. Courts are beginning to define standards for “black box” evidence, expert testimony on AI causation, and model explainability.
Global Perspectives Beyond the EU
Several other nations are building their own AI regulatory models. China has enacted sweeping rules requiring registration and state approval for generative AI models, along with data provenance disclosures. Canada has introduced the Artificial Intelligence and Data Act (AIDA), which would impose significant transparency and accountability obligations on high-impact AI systems.
The United Kingdom, post-Brexit, has taken a lighter, innovation-centric approach, focusing on sector-led oversight rather than centralized regulation. These differing models reflect broader geopolitical priorities but also complicate compliance for multinational companies.
Anticipated Developments in 2025 and Beyond
The regulatory future of AI in the U.S. remains uncertain, but multiple initiatives point toward increased harmonization and accountability. The White House has issued Executive Orders directing agencies to develop AI safety standards, transparency rules, and protections against algorithmic discrimination.
Meanwhile, Congress is debating the formation of a national AI commission. The National Institute of Standards and Technology (NIST) continues to develop its AI Risk Management Framework, which could serve as a de facto standard for responsible AI development. States may also continue to legislate where federal action stalls, reinforcing the need for proactive risk assessment and cross-jurisdictional compliance strategies.
Practitioner Insights: Tactical Tips for Counsel
Practitioners should advise clients to conduct periodic audits of high-risk AI tools and maintain documentation of how models were trained, tested, and validated. Engagement agreements should include AI-specific clauses addressing proportionality in discovery, expectations for ESI preservation, and scope-of-use for third-party tools.
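As a hypothetical illustration of the kind of documentation counsel might recommend, the sketch below defines a structured audit record capturing how a model was trained, tested, and validated. All names, fields, and values are invented for illustration; the point is that these facts are preserved in a consistent, producible form.

```python
# Minimal sketch: a structured audit record for a high-risk AI tool.
# Field names and values are hypothetical; the aim is to capture training,
# testing, and validation facts in a form that discovery can later use.

from dataclasses import dataclass, field, asdict
import json

@dataclass
class ModelAuditRecord:
    model_name: str
    version: str
    intended_use: str
    training_data_sources: list[str]
    evaluation_metrics: dict[str, float]
    bias_tests_run: list[str]
    reviewed_by: str
    review_date: str  # ISO 8601 date
    known_limitations: list[str] = field(default_factory=list)

record = ModelAuditRecord(
    model_name="resume-screener",  # hypothetical tool
    version="2.3.1",
    intended_use="Initial resume triage; human review required before rejection.",
    training_data_sources=["2019-2023 internal hiring outcomes (anonymized)"],
    evaluation_metrics={"auc": 0.87, "min_adverse_impact_ratio": 0.91},
    bias_tests_run=["four-fifths selection-rate comparison by age band"],
    reviewed_by="AI governance committee",
    review_date="2025-06-30",
    known_limitations=["Not validated for roles outside the U.S."],
)

print(json.dumps(asdict(record), indent=2))
```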
During litigation, consider early meet-and-confer sessions under Florida Rule 1.200 to clarify the parties’ positions on AI-related discovery. Develop standard procedures for managing algorithmic evidence and consider working with data scientists as part of the litigation team to ensure comprehensibility and admissibility.
Florida Litigation and AI: Unique Considerations
Florida’s recent amendments to its Rules of Civil Procedure place it at the forefront of AI-related litigation preparedness. Rule 1.280 now mirrors the federal proportionality standard, requiring litigants to tailor discovery based on the needs of the case and the burden of production. Rule 1.200 encourages early case management conferences, where courts can address AI issues such as expert disclosures, model transparency, and data preservation.
For attorneys litigating in Florida, the intersection of these rules with AI issues requires early strategy, interdisciplinary coordination, and an understanding of both technical and procedural risks.
Building the Legal Infrastructure for AI
The legal system’s engagement with artificial intelligence is no longer speculative. Regulation, litigation, and compliance responsibilities are expanding rapidly and demand a multidisciplinary approach. From risk assessments and impact disclosures to discovery protocols and cross-border compliance, AI governance is reshaping the role of lawyers in advising clients and managing disputes.
This article has highlighted the growing need for legal professionals to understand AI at both a technical and a strategic level. As courts, regulators, and litigators continue to define the rules of engagement for AI, early and informed action will be essential to navigating this dynamic and high-stakes legal terrain.

