News

EU AI Regulation in Tax Law: New Obligations for Tax Advisory Firms

New EU requirements for the use of AI in tax advisory practices. Learn how to prepare your firm for upcoming obligations and integrate AI systems securely and in compliance with the law.

Regulation (EU) 2024/1689 (“AI Act”) also applies to the use of AI in tax advisory services, with graduated obligations depending on the risk level of the systems applied. Chapters I (General Provisions) and II (Prohibited AI Practices) have been in force since 2 February 2025, while the remaining provisions apply from 2 August 2026. For tax firms, the focus is on training, transparency, and data protection/confidentiality. The use of AI by public authorities and judicial assistance systems remains subject to strict controls under fundamental rights, the GDPR, and requirements for reasoned decisions. Now is the time to establish AI-ready structures, processes, and policies.

Why This Matters for Tax Advisory Firms

The AI Act follows a risk-based approach (minimal/low/high/unacceptable). Tools commonly used in tax practices (e.g. generative AI, chatbots, data-extraction tools) are typically classified as low-risk systems. High-risk classifications mainly affect AI used for judicial decision support. Nevertheless, specific obligations apply to tax firms, and all AI-generated output must always undergo human review.

Key Obligations for Tax Firms

AI Competence Within the Team (Art. 4 AI Act)

Operators of AI systems must ensure that employees possess an adequate level of AI literacy (e.g. awareness of risks such as hallucinations and bias, prompt design, review routines). This obligation has been in effect since 2 February 2025.

Transparency (Art. 50 AI Act)

If clients interact with a chatbot on your website, it must be clearly recognizable as an AI system.
AI-generated content intended for the public (e.g. newsletters) must be labelled as artificially generated, unless a human has performed an editorial review and a natural or legal person assumes editorial responsibility.

Confidentiality & GDPR

Client and personal data may not be entered into open generative AI models. Where necessary, anonymization or pseudonymization must be carried out beforehand. This is required due to professional secrecy, GDPR rules, and the typical “black box” character of many models.
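The pseudonymization step described above can be sketched in code. The following Python snippet is a minimal illustration only, not a vetted compliance tool: the identifier list, the placeholder scheme, and the function names are hypothetical, and a firm would implement this within its own reviewed SOP.

```python
# Illustrative sketch only: mask known client identifiers before any text
# is sent to an external generative AI service, and restore them in the
# (human-reviewed) output afterwards. All names here are hypothetical.

def pseudonymize(text: str, identifiers: list[str]) -> tuple[str, dict[str, str]]:
    """Replace each known identifier with a stable placeholder.

    Returns the masked text and the mapping needed to reverse it.
    The mapping must stay inside the firm and is never sent to the model.
    """
    mapping: dict[str, str] = {}
    for i, ident in enumerate(identifiers, start=1):
        placeholder = f"[CLIENT_{i}]"
        mapping[placeholder] = ident
        text = text.replace(ident, placeholder)
    return text, mapping


def reidentify(text: str, mapping: dict[str, str]) -> str:
    """Restore the original identifiers in the reviewed model output."""
    for placeholder, ident in mapping.items():
        text = text.replace(placeholder, ident)
    return text


# Example: mask a client name and VAT number before prompting a model.
masked, mapping = pseudonymize(
    "Draft a reminder letter for Max Mustermann (VAT ATU12345678).",
    ["Max Mustermann", "ATU12345678"],
)
# masked == "Draft a reminder letter for [CLIENT_1] (VAT [CLIENT_2])."
```

Note that simple string replacement is only a starting point; free-text documents may contain indirect identifiers (addresses, case numbers, dates) that such a sketch does not catch, which is why a documented SOP and human review remain necessary.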

Human-in-the-Loop

AI provides drafts, proposals, and recommendations for decisions, but these become binding only after professional review by a qualified practitioner. Responsibility and liability remain with humans.

Beyond Tax Firms: Public Administration & Courts

Fiscal Authorities

Current AI applications in tax administration are generally not classified as high-risk. Nevertheless, transparency, training, obligations to provide reasoning, and Article 22 GDPR (a human final decision in automated processes) are essential. This is particularly relevant for audits carried out by fiscal authorities. In addition, legal decisions (notices) must remain clearly reasoned.

Fiscal Courts

AI used for investigative or interpretative support is generally considered high-risk (strict requirements for data quality, documentation, supervision, and risk and fundamental-rights impact assessments). In practice, courts currently rely primarily on assistance systems. Judicial decisions remain strictly human.

Implementation in Your Firm: A Practical Checklist

AI Policy & Roles: Purpose definitions, approved tools, approval processes, responsible persons (including dual control principle for publications).
Training Program (Art. 4): Mandatory modules covering hallucinations, bias, prompting, review routines, documentation.
Data Protection & Confidentiality: SOP (standard operating procedure) for anonymization/pseudonymization; no raw client data in open models; tool inventory including data flows.
Transparency Rules (Art. 50): Labelling of chatbots; editorial final review and documented responsibility for newsletters/blog posts.
Documentation: Prompt templates, sources, review steps, version history for internal traceability and external demonstration.
Quality Assurance: Sample audits for accuracy and timeliness; clear no-gos (e.g., no legal advice based solely on AI).
Client Communication: Clearly inform clients about AI-supported services and human review.

Timeline & Conclusion

Since 2 February 2025:
Prohibitions and principles (Chapters I–II) and the training obligation (Art. 4) are in effect. Begin training, adopt your AI policy, and operationalize transparency and data-protection processes.

From 2 August 2026:
Full application of the remaining provisions. Until then, implement your roadmap and establish documentation and controls. The GDPR and EU fundamental rights provide additional safeguards (transparency, reasoning, human oversight).

The EU AI Act introduces clear minimum standards for tax practice: train, review, act transparently. This allows efficiency gains to be exploited safely. For tax advisors, this primarily means building competence, strengthening governance, and ensuring disciplined editorial processes. In administration and judiciary, humans remain responsible, with fundamental rights (good administration, duty to give reasons, effective legal protection) setting clear boundaries.

If you have any questions, our expert will be happy to assist you.

Christoph Schmidl

Partner at Grant Thornton Austria

Jurisdiction: Vienna


Phone: +43 1 505 43 13 2051

Email: Christoph.Schmidl@at.gt.com