2026-05-04 · 9 min read
EU AI Act 2026: What Every Business Must Do Now
The EU AI Act is active law. Deadlines, risk tiers, fines up to 7% turnover, and exact compliance steps for businesses operating in Europe in 2026.
TL;DR: The EU AI Act imposes binding obligations on every business deploying AI that touches EU users - starting with high-risk system rules in August 2026. This article maps the exact deadlines, risk tiers, and compliance steps you need. Start your gap analysis now or face fines up to 7% of global turnover.
European AI regulation is now law, not a proposal. The EU AI Act is the world's first comprehensive binding AI law, and it applies to your business if any user or customer is based in the EU - regardless of where your company is incorporated. As of May 2026, the first wave of obligations is already active, and the high-risk system deadline arrives in August 2026. Businesses that wait will not have enough time to complete conformity assessments.
What the EU AI Act actually requires
The EU AI Act classifies AI systems into four risk tiers: unacceptable risk (banned), high risk, limited risk, and minimal risk. Banned systems include social scoring by governments, real-time biometric surveillance in public spaces, and AI that exploits psychological vulnerabilities. These prohibitions became enforceable in February 2025. Any company still running these systems in the EU is already in violation.
High-risk systems face the heaviest obligations. Companies must conduct conformity assessments, maintain technical documentation, implement human oversight mechanisms, register systems in the EU database, and demonstrate ongoing monitoring. This applies to AI used in recruitment, credit decisions, medical diagnosis, education assessment, and critical infrastructure management. The deadline for these obligations is August 2026 - three months from today.
Limited-risk systems - such as chatbots and deepfake generators - must meet transparency requirements. Users must be informed they are interacting with AI. This rule has been active since August 2025. Many businesses remain non-compliant here because they assume small-scale chatbot deployments fall below regulatory notice. They do not.
The compliance timeline every business must know
According to PwC's 2025 AI Regulation Readiness Report, only 31% of European companies had completed a formal AI inventory by end of 2025. An AI inventory - a list of every AI system your company uses or deploys - is the mandatory first step before any risk classification. Without it, you cannot determine which obligations apply to you.
| Deadline | Obligation | Who It Affects | Penalty for Non-Compliance |
|---|---|---|---|
| February 2025 (active) | Prohibition of unacceptable-risk AI systems | All businesses operating in EU | Up to 35M EUR or 7% global turnover |
| August 2025 (active) | Transparency rules for limited-risk AI (chatbots, deepfakes) | Businesses using customer-facing AI | Up to 15M EUR or 3% global turnover |
| August 2025 (active) | General-purpose AI model obligations (GPAI) | Developers of foundation models | Up to 15M EUR or 3% global turnover |
| August 2026 (upcoming) | High-risk AI system conformity assessments and registration | HR, fintech, healthcare, infrastructure AI users | Up to 15M EUR or 3% global turnover |
| August 2027 | High-risk AI in regulated products (medical devices, machinery) | Product manufacturers using embedded AI | Up to 15M EUR or 3% global turnover |
Gartner projected in its 2025 AI Governance Forecast that 60% of large enterprises will face at least one AI compliance gap audit by 2026. That number is proving accurate. EU national market surveillance authorities began issuing formal inquiries to companies in Q1 2026, particularly targeting HR software vendors and fintech firms using automated decision systems.
High-risk AI: the sector breakdown
If your company uses AI to screen job applications, rank candidates, or evaluate employee performance, you operate a high-risk AI system. The same applies to AI that determines credit eligibility, sets insurance premiums, or scores loan applications. These are not edge cases - they describe the standard software stack of most mid-size European businesses in 2026.
Healthcare AI faces the strictest combined regulatory burden. Medical AI tools must comply with both the EU AI Act high-risk requirements and the EU Medical Device Regulation. A diagnostic AI tool used in Germany, for example, needs a conformity assessment under both frameworks before it can legally operate. McKinsey's 2025 European Healthcare AI Report estimated that dual-compliance costs for medical AI average 400,000 to 800,000 euros per system for mid-size providers.
Biometric AI sits at the intersection of the AI Act and GDPR. Real-time remote biometric identification in public spaces is banned. Post-hoc biometric categorization for law enforcement requires specific authorization. For commercial settings - such as retail analytics using facial recognition - the legal position is highly restricted and under active enforcement scrutiny in France, Germany, and Italy as of April 2026.
General-purpose AI models and the GPAI rules
The AI Act introduces a separate framework for general-purpose AI models - the foundation models powering tools like enterprise versions of GPT-4o, Claude 4.7, and Gemini 2.5. Any provider placing a GPAI model on the EU market must provide technical documentation, cooperate with downstream deployers, and publish a summary of training data. These obligations activated in August 2025.
Models classified as having "systemic risk" - those trained on more than 10^25 floating point operations - face additional requirements including adversarial testing, incident reporting, and cybersecurity obligations. As of May 2026, the European AI Office has identified nine models meeting this threshold. Companies building on top of these models as deployers inherit indirect compliance obligations through contractual arrangements with the model provider.
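The 10^25 FLOP threshold can be roughly sanity-checked against a planned training run. The sketch below uses the common 6 × parameters × training-tokens estimate for dense transformer training compute - a heuristic from the scaling-law literature, not the AI Office's official measurement method - so treat the result as a first approximation only.

```python
# Rough check against the AI Act's 10^25 FLOP systemic-risk threshold.
# The 6 * N * D compute estimate is a common heuristic for dense
# transformers, NOT the AI Office's official methodology.

SYSTEMIC_RISK_THRESHOLD = 1e25  # floating point operations


def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6.0 * n_params * n_tokens


def is_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if the estimated compute meets or exceeds the threshold."""
    return estimated_training_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD


# Example: a hypothetical 70B-parameter model trained on 15T tokens
# lands at roughly 6.3e24 FLOPs - just under the threshold.
print(f"{estimated_training_flops(70e9, 15e12):.2e}")
print(is_systemic_risk(70e9, 15e12))
```

A model provider near this boundary should not rely on the heuristic: the classification decision rests with the European AI Office, and measured training compute is what counts.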
Forbes reported in March 2026 that U.S.-based AI providers are restructuring their EU contracts specifically to clarify GPAI compliance chains. If your business licenses an enterprise AI platform, your contract should now specify which party holds responsibility for model documentation, incident reporting, and user disclosure. If it does not, your legal team needs to address this before August 2026.
What compliance actually costs - and how to structure it
McKinsey's 2026 State of AI in Europe survey found that companies spending less than 50,000 euros on AI compliance in 2025 were 3.4 times more likely to receive a regulatory inquiry in Q1 2026 than companies that invested 150,000 euros or more. This is not an argument for spending without direction - it is an argument for structured investment in the right areas.
The core compliance architecture has three layers. First, an AI system inventory covering every internal and customer-facing AI tool. Second, a risk classification for each system against the Act's four tiers. Third, a gap analysis comparing current practices against the obligations for each tier. This three-step process takes four to eight weeks for a company with fewer than 500 employees and three to six months for enterprise-scale organizations.
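The three layers above can be sketched as a simple data structure: an inventory of systems, each carrying enough metadata to derive a risk tier, with the gap analysis starting from the high-risk subset. The field names and the keyword-based tier mapping below are illustrative assumptions - Annex III of the Act is the authoritative list of high-risk use cases, and a real classification needs legal review, not a lookup table.

```python
# Minimal sketch of an AI system inventory with per-system risk
# classification, mirroring the three-layer structure described above.
# Field names and the use-case-to-tier mapping are illustrative only.
from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned (e.g. social scoring)
    HIGH = "high"                  # Annex III use cases
    LIMITED = "limited"            # transparency duties (chatbots)
    MINIMAL = "minimal"            # no specific obligations

# Simplified subset of high-risk use cases for illustration.
HIGH_RISK_USES = {"recruitment", "credit_scoring", "medical_diagnosis",
                  "education_assessment", "critical_infrastructure"}
LIMITED_RISK_USES = {"chatbot", "deepfake_generation"}


@dataclass
class AISystem:
    name: str
    provider: str
    use_case: str
    data_categories: list = field(default_factory=list)

    @property
    def risk_tier(self) -> RiskTier:
        if self.use_case in HIGH_RISK_USES:
            return RiskTier.HIGH
        if self.use_case in LIMITED_RISK_USES:
            return RiskTier.LIMITED
        return RiskTier.MINIMAL


# Layer 1: the inventory itself (hypothetical example systems).
inventory = [
    AISystem("CVScreen", "VendorX", "recruitment", ["cv_text"]),
    AISystem("SupportBot", "VendorY", "chatbot", ["chat_logs"]),
]

# Layer 2/3: classify, then scope the gap analysis to high-risk systems.
high_risk = [s.name for s in inventory if s.risk_tier is RiskTier.HIGH]
print(high_risk)  # ['CVScreen']
```

Even a spreadsheet with these same columns serves the purpose; what matters is that every system, its provider, and its use case is recorded before classification begins.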
For businesses building their internal AI competency, structured training accelerates the inventory and classification process significantly. At AI Expert Academy, the curriculum covers EU AI Act compliance frameworks alongside practical AI deployment skills - relevant for both compliance officers and business leaders responsible for AI strategy. When Bartosz Cruz was interviewed on Polskie Radio Czwórka's Świat 4.0 program in May 2025, the discussion covered exactly this gap: most organizations lack the internal cognitive fluency to evaluate AI tools critically, which makes regulatory compliance harder to achieve.
Smaller businesses can reduce costs through two mechanisms. First, use the EU's official AI Act compliance checker tool, available through the European AI Office portal since January 2026. Second, leverage sector-specific guidance published by national authorities - Germany's BNetzA, France's CNIL, and Poland's UODO have each published sector templates for SME compliance as of Q1 2026.
Practical steps for May 2026
With the August 2026 high-risk deadline three months away, the immediate priority is completing your AI system inventory. Document every AI tool in use, who the provider is, what decisions it influences, and which data it processes. This document becomes the foundation of your risk classification and your evidence file if regulators request documentation.
Second, audit your vendor contracts. Every AI system your company uses from a third-party provider requires a contract that allocates compliance responsibilities. Under the AI Act, deployers - meaning your business - carry obligations even when using someone else's AI system. If a vendor's contract does not address EU AI Act compliance, that contract needs renegotiation now.
Third, appoint a responsible person for AI governance. This does not need to be a new hire. Many companies are extending the role of their Data Protection Officer or Chief Compliance Officer. What matters is that someone in your organization has explicit accountability for AI system documentation, risk monitoring, and regulatory communication. The Act does not prescribe a job title - it prescribes accountability.
For businesses seeking a practical AI strategy framework that integrates regulatory compliance with business performance, explore the resources at AI strategy for SMEs and the AI governance frameworks overview published on this site.
Frequently asked questions
When does the EU AI Act fully apply to businesses?
The EU AI Act entered into force in August 2024 and applies in phases. Prohibited AI systems were banned from February 2025. High-risk system obligations apply from August 2026. General-purpose AI model rules apply from August 2025.
What are the fines for violating the EU AI Act?
Fines reach up to 35 million euros or 7% of global annual turnover for prohibited AI violations - whichever is higher. Lesser violations carry fines up to 15 million euros or 3% of turnover. Providing incorrect information to regulators carries fines up to 7.5 million euros or 1% of turnover.
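The "whichever is higher" rule means the fine ceiling is the greater of a fixed amount and a share of worldwide annual turnover. A minimal sketch of that calculation - it computes the statutory ceiling only, not the fine a regulator would actually set:

```python
# Fine ceiling under the "whichever is higher" rule: the cap is the
# greater of a fixed euro amount and a percentage of worldwide
# annual turnover. Computes the ceiling only, not an actual fine.
def fine_ceiling(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    return max(fixed_cap_eur, pct * turnover_eur)


# Prohibited-AI violation for a company with 2bn EUR turnover:
# max(35M, 7% of 2bn) = 140M EUR ceiling.
print(fine_ceiling(2e9, 35e6, 0.07))  # 140000000.0
```

For smaller companies the fixed amount dominates: at 100 million euros turnover, 7% is 7 million, so the 35 million euro cap applies instead.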
Does the EU AI Act apply to companies outside Europe?
Yes. Any company deploying AI systems that affect EU residents must comply, regardless of where the company is headquartered. This includes U.S., Asian, and other non-EU firms. The extraterritorial scope mirrors the GDPR model.
What is a high-risk AI system under the EU AI Act?
High-risk AI systems include tools used in hiring and HR decisions, credit scoring, medical devices, biometric identification, critical infrastructure, and law enforcement. These systems require mandatory conformity assessments, human oversight, and detailed technical documentation before deployment.
Last updated: 2026-05-04