The EU AI Act entered into force in August 2024. Its provisions are phasing in through 2027, and the most significant compliance deadlines are landing now.
If you are a business leader and you have not done a structured review of your AI systems against this regulation, you are behind. That is true whether your company is based in Berlin or Boston.
This is not a theoretical concern. The EU AI Act has broad extraterritorial reach, substantial penalties, and enforcement infrastructure that is actively being built. Understanding it — and getting ahead of its requirements — is a compliance obligation, not optional diligence.
What the EU AI Act Actually Is
The EU AI Act (Regulation 2024/1689) is the world's first comprehensive regulatory framework for artificial intelligence. It establishes a risk-based classification system for AI systems and imposes compliance obligations that scale with risk level.
The regulation applies to providers who place AI systems on the EU market, deployers who use AI systems in the EU, and, critically, providers and deployers located outside the EU where the output produced by the AI system is used in the EU. That last clause is what catches many US businesses off guard.
If your AI system is used by EU-based customers, employees, or business partners, you likely have obligations under this regulation even if you have never had a legal entity in Europe.
The Risk Classification System
The Act organizes AI systems into four risk tiers. Understanding where your systems fall is the first step to determining your compliance obligations (a first-pass triage sketch follows the tier descriptions below).
Unacceptable risk. A small category of AI applications is banned outright. This includes real-time remote biometric identification in publicly accessible spaces (with narrow law enforcement exceptions), AI systems that exploit vulnerabilities related to age, disability, or social or economic situation to materially distort behavior, social scoring (by public and private actors alike), and certain predictive policing applications. If any of your systems fall here, they must be discontinued.
High risk. This is where most compliance complexity lives. High-risk AI systems are those used in specific sectors and use cases defined in Annex III of the regulation: recruitment and HR decisions, credit scoring, biometric identification, critical infrastructure management, education and vocational training, access to essential services, law enforcement, migration management, and the administration of justice.
If you use AI to screen job applications, assess creditworthiness, make decisions about access to services, or operate in any of the listed sectors, you are likely operating a high-risk system. High-risk classification comes with substantive obligations.
Limited risk. Systems like chatbots and AI-generated content tools fall here. The primary obligation is transparency: users must be informed they are interacting with AI unless that is already obvious from the context. This is achievable, but it requires intentional implementation. Deploying a customer-facing LLM without an AI disclosure is non-compliant.
Minimal risk. The vast majority of AI applications — spam filters, AI in video games, most recommendation systems — fall here. No specific obligations beyond general principles apply, though voluntary codes of practice are encouraged.
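To make that first classification step concrete, here is a minimal sketch of a first-pass triage an intake process might run. The area labels and routing logic are illustrative assumptions, not a legal mapping of Annex III; a HIGH result should mean "escalate for legal review," not "classified."

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative, non-exhaustive shorthand for Annex III areas.
HIGH_RISK_AREAS = {
    "recruitment", "credit_scoring", "biometric_identification",
    "critical_infrastructure", "education", "essential_services",
    "law_enforcement", "migration", "justice",
}

def triage(use_case_area: str, user_facing: bool) -> RiskTier:
    """First-pass triage only; HIGH means escalate, not a final classification."""
    if use_case_area in HIGH_RISK_AREAS:
        return RiskTier.HIGH
    if user_facing:  # e.g. chatbots: transparency duties apply
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

print(triage("recruitment", user_facing=False))  # RiskTier.HIGH
```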
High-Risk Compliance Requirements
If you are operating a high-risk AI system, the compliance requirements are substantial. The Act specifies:
Risk management systems. You must establish, implement, and maintain a documented risk management process for the AI system throughout its lifecycle. This is ongoing, not a one-time assessment.
Data governance. Training, validation, and testing data must meet quality standards. You need documented practices for data collection, examination, assessment, and management. Bias detection and correction are expected.
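The Act does not prescribe a particular fairness metric, so treat the following as one illustration of the kind of check a data governance process might run: per-group selection rates, a demographic-parity-style measure. The 0.2 gap threshold is an invented internal policy number, not a regulatory one.

```python
from collections import defaultdict

def selection_rates(decisions):
    """decisions: iterable of (group_label, selected: bool) pairs.
    Returns the selection rate per group, a basic input to bias review."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, sel in decisions:
        totals[group] += 1
        selected[group] += int(sel)
    return {g: selected[g] / totals[g] for g in totals}

rates = selection_rates([
    ("group_a", True), ("group_a", False),
    ("group_b", False), ("group_b", False),
])
# Flag large gaps for human review; the threshold is a policy choice.
if max(rates.values()) - min(rates.values()) > 0.2:
    print("selection-rate gap exceeds internal threshold:", rates)
```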
Technical documentation. You must maintain documentation that allows competent authorities to assess compliance. This includes system architecture, training methodologies, performance metrics, and known limitations.
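One practical pattern is to keep the documentation skeleton in version control next to the system itself. The field names below are an assumed outline loosely following the themes of Annex IV, not the annex's actual required structure.

```python
# Illustrative outline only; Annex IV defines the required contents.
TECH_DOC_TEMPLATE = {
    "system_description": "intended purpose, versions, hardware/software context",
    "architecture": "model type, components, third-party models relied on",
    "training": "data sources, methodology, preprocessing, labelling",
    "performance": "metrics, test results, accuracy across relevant groups",
    "limitations": "known failure modes, foreseeable misuse, residual risks",
    "human_oversight": "measures enabling monitoring, override, and shutdown",
}
```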
Transparency and user information. High-risk systems must be designed to be sufficiently transparent that deployers can understand the system's capabilities and limitations. Users affected by AI decisions must be informed.
Human oversight. High-risk systems must be designed to allow human oversight. Operators need to be able to understand, monitor, override, and if necessary halt AI operations.
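What "designed to allow human oversight" looks like in code varies by system, but one common pattern is a review band plus a kill switch: low-confidence decisions route to a human queue, and operators can halt automation entirely. The thresholds and names below are illustrative assumptions, not a prescribed design.

```python
from dataclasses import dataclass

@dataclass
class Routed:
    subject_id: str
    score: float   # model output in [0, 1]
    route: str     # "auto" or "human_review"

REVIEW_BAND = (0.4, 0.6)  # illustrative: uncertain scores go to a person
KILL_SWITCH = False       # lets operators halt all automated decisions

def route_decision(subject_id: str, score: float) -> Routed:
    """Send halted or low-confidence cases to human review instead of acting."""
    if KILL_SWITCH or REVIEW_BAND[0] <= score <= REVIEW_BAND[1]:
        return Routed(subject_id, score, route="human_review")
    return Routed(subject_id, score, route="auto")

print(route_decision("applicant-42", 0.55).route)  # human_review
```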
Accuracy, robustness, and cybersecurity. Systems must meet appropriate levels of accuracy and must be resilient to attempts to manipulate outputs. The cybersecurity requirements here intersect directly with frameworks like NIST AI RMF and ISO 27001.
Conformity assessment. Before placing a high-risk AI system on the market or putting it into service, you must complete a conformity assessment — either self-assessment or third-party, depending on the system type.
Registration. High-risk AI systems must be registered in a public EU database before deployment.
General Purpose AI Models
The EU AI Act introduced a significant new category not present in earlier drafts: general purpose AI (GPAI) models. This applies primarily to frontier model providers — organizations training and releasing large foundation models.
All GPAI providers face baseline obligations, including publishing a summary of the content used to train the model and providing documentation to downstream developers. Models trained with more than 10^25 FLOPs of compute are presumed to pose systemic risk and carry additional duties: model evaluation and adversarial testing, serious-incident reporting, and cybersecurity protections.
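For a rough sense of where that threshold sits, a common back-of-the-envelope for dense transformers is training compute ≈ 6 × parameters × training tokens. That is a community heuristic, not the Act's measurement method, and the numbers below are illustrative rather than any specific model's actuals.

```python
SYSTEMIC_RISK_THRESHOLD = 1e25  # FLOPs, the Act's presumption trigger

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough dense-transformer heuristic: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

# Hypothetical 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(params=70e9, tokens=15e12)
print(f"{flops:.2e} FLOPs -> systemic risk presumed: {flops > SYSTEMIC_RISK_THRESHOLD}")
```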
Most enterprise organizations are not GPAI providers. But if you are building on top of GPAI models — using GPT-4, Claude, Gemini, or similar via API — you have downstream obligations when deploying those models in high-risk use cases. The GPAI provider's compliance does not eliminate your obligations as a deployer.
The Timeline
The Act's provisions are phasing in over several years:
- February 2025: Prohibited AI practices provisions took effect. If you were running prohibited systems, the deadline to stop has already passed.
- August 2025: GPAI model requirements and governance provisions took effect.
- August 2026: High-risk AI system requirements for systems in Annex III take effect. This is the major compliance deadline for most enterprise organizations.
- August 2027: Additional provisions for AI systems embedded in regulated products take effect.
The August 2026 deadline is what most organizations need to be focused on right now. Getting from zero compliance to full high-risk compliance is not a quick process — it requires system audits, documentation creation, risk management implementation, and potentially system redesign.
How This Affects US Businesses
The extraterritorial reach is real. If you operate a US company and you use AI systems that affect EU individuals — whether they are your customers, your employees who are EU residents, or third parties your AI decisions affect — you have obligations under this regulation.
The penalties are significant: up to 35 million euros or 7% of global annual turnover, whichever is higher, for violations involving prohibited practices, and up to 15 million euros or 3% of turnover for most other violations. Enforcement authority sits with national market surveillance authorities in each member state, coordinated at the EU level.
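To make the exposure concrete, each cap works out as the higher of the fixed amount and the turnover percentage:

```python
def max_penalty_eur(global_turnover_eur: float, prohibited: bool) -> float:
    """Upper bound per the Act: the higher of the fixed cap or turnover share."""
    fixed, share = (35e6, 0.07) if prohibited else (15e6, 0.03)
    return max(fixed, share * global_turnover_eur)

# A company with EUR 2B in global annual turnover:
print(max_penalty_eur(2e9, prohibited=True))   # 140,000,000.0
print(max_penalty_eur(2e9, prohibited=False))  # 60,000,000.0
```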
The practical implication: if you have any material EU business, an EU AI Act gap assessment should be on your 2026 compliance roadmap. If you have high-risk systems, you should be in active remediation now.
US businesses should also be aware that the EU AI Act is not happening in isolation. Several US states have enacted or are considering AI-specific legislation, and federal regulatory guidance on AI is evolving. Building a governance infrastructure for the EU AI Act creates a foundation that will serve you across multiple regulatory regimes.
Where to Start
The first step is inventory. You cannot assess compliance for systems you do not know about. Document every AI system your organization uses, builds, or deploys — including AI features embedded in SaaS tools you did not build yourself.
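A spreadsheet works, but a structured record keeps the inventory auditable and makes it easy to filter by risk tier later. The fields below are a suggested minimum, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                     # accountable team or person
    role: str                      # "provider" or "deployer" under the Act
    vendor: str | None             # None if built in-house
    use_case: str
    affects_eu_individuals: bool
    risk_tier: str = "unassessed"  # filled in during classification
    notes: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord("resume-screener", "HR Ops", "deployer",
                   vendor="ExampleVendor", use_case="recruitment",
                   affects_eu_individuals=True),
]
```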
Map each system against the risk classification categories. For anything that might be high-risk, conduct a more detailed analysis against Annex III.
The delta between each system's current state and its required controls is your compliance gap. Prioritize remediation by risk level and deployment timeline.
At Ayliea, our assessments include an EU AI Act readiness review as part of broader AI governance evaluation. If you are not sure where your organization stands, that is a reasonable place to start.
