EU AI Act — 12-month programme
52 weeks · classify → document → test → register
[Interactive timeline, week 0 → 52: risk classification · technical documentation · data governance · conformity testing · CE marking + registration · post-market monitoring]

The EU AI Act situation in April 2026 is unusually unstable. The European Commission missed its 2 February statutory deadline to publish guidance on Article 6 — the provision that determines whether an AI system qualifies as high-risk and therefore triggers the Act’s most demanding obligations. The Commission has also proposed a Digital Omnibus on AI that would push the August 2026 high-risk application date back by up to 16 months, to December 2027 for Annex III systems. The Omnibus is not yet law. Negotiators are targeting political agreement before June 2026 so that any delay can legally take effect before the current August deadline.

For a US SaaS company with meaningful EU customer exposure, this means the compliance project you start today runs through a moving regulatory target. The honest answer is not “wait for clarity.” The honest answer is “prepare as if August 2026 is the deadline, because it legally still is, and the work itself is what your customers and regulators will ask about regardless of which date ultimately applies.”

This article is that plan. Phased, deadline-anchored to the current statutory August 2026 date, with explicit flags on the points where the regulation itself is uncertain and what to do about each.

EU AI Act in 2026: what hits when

The AI Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024. Its obligations phase in across several years. As of April 2026, the following is in effect:

Already in force:

  • 2 February 2025: prohibitions on unacceptable-risk AI systems (social scoring, manipulative subliminal techniques, emotion recognition in workplaces and schools, etc.) and general AI literacy obligations.
  • 2 August 2025: obligations for providers of General-Purpose AI (GPAI) models; governance structures (including the AI Office); member-state penalty regimes.

Currently pending (original timeline):

  • 2 August 2026: most remaining obligations apply, including high-risk system requirements for systems listed in Annex III (stand-alone high-risk use cases — employment, education, law enforcement, migration, credit scoring, critical infrastructure, and similar). Transparency obligations under Article 50 (labelling AI-generated content, disclosing AI interaction) become applicable. Every member state must have at least one AI regulatory sandbox operational.
  • 2 August 2027: Article 6(1) and high-risk requirements for Annex I systems (AI embedded in products already regulated by EU harmonisation law — toys, medical devices, vehicles, machinery, aviation). GPAI models placed on the market before August 2025 must be fully compliant.

What the Digital Omnibus on AI would change:

The Commission’s November 2025 proposal would introduce a conditional “stop the clock” mechanism tied to the availability of harmonised technical standards. If the standards aren’t finalised by the application date, high-risk obligations would be deferred with long-stop dates:

  • Annex III systems: compliance required 6 months after standards are confirmed; long-stop 2 December 2027.
  • Annex I systems: compliance required 12 months after standards are confirmed; long-stop 2 August 2028.

This is not a guaranteed extension. It requires European Parliament and Council adoption, which must happen before the current August 2026 deadline for the delay to take legal effect. If negotiations slip beyond August, the EU enters a period where high-risk obligations technically apply while key standards and guidance remain incomplete — arguably the worst possible outcome for providers and regulators alike.

The editorial position that matters for planning: treat 2 August 2026 as the deadline. If the Omnibus passes and extends it, you benefit from extra time on a programme already in flight. If the Omnibus doesn’t pass, you hit the statutory deadline on schedule. Planning around the delay and then having to accelerate at short notice is the expensive option.

Does your system qualify as high-risk?

The second most consequential question for a US SaaS company — after whether the AI Act applies to you at all (Article 2: yes, if you place an AI system on the EU market or your system’s output is used in the EU, regardless of where you’re established) — is whether any of your systems are high-risk.

High-risk is defined in Article 6, via two routes. Route one (Article 6(1)): AI systems that are a safety component of, or are themselves, a product covered by EU harmonisation legislation listed in Annex I (medical devices, machinery, toys, vehicles, aviation equipment, etc.). Most US SaaS products don’t fall here. Route two (Article 6(2)): AI systems used in the specific areas listed in Annex III — biometrics, critical infrastructure, education and vocational training, employment and worker management, access to essential public and private services (including credit scoring and essential-service pricing), law enforcement, migration, administration of justice, democratic processes.

The Article 6(3) exception lets providers classify an AI system as not high-risk where it doesn’t pose significant risk to health, safety, or fundamental rights — provided the system performs a narrow procedural task, doesn’t materially influence decision outcomes, or similar criteria apply. Using this exception requires a documented risk assessment. The Digital Omnibus, if adopted, would reduce the procedural burden of invoking Article 6(3) but doesn’t change the substantive test.

This is where Article 6 guidance matters. The Commission was supposed to publish practical implementation guidance and a list of examples by 2 February 2026. It missed the deadline. Preliminary drafts circulated through 2025 and 2026, but as of April 2026 the definitive guidance has not yet been adopted. Organisations are therefore making classification decisions without the official Commission clarification the regulation promised.

The honest position: where your use case is clearly inside or clearly outside Annex III, proceed with classification. Where the use case is ambiguous (the most common place for SaaS companies), two reasonable paths exist. Path one — classify conservatively as high-risk and implement the programme; if final guidance later clarifies the system is not high-risk, the investment is not wasted because your customers will ask for most of the same artefacts regardless. Path two — classify as not high-risk under Article 6(3) with a documented risk assessment, and be prepared to revisit if guidance or a customer challenge requires it.

What you should not do is stall the decision indefinitely. The regulation applies regardless of whether the Commission has finished its guidance homework.

The compliance programme at a glance

For a US SaaS company with one or more systems classified as high-risk (or conservatively treated as such), the end-to-end implementation programme runs about seven months at realistic effort. Organisations coming from a mature ISO 27001 / SOC 2 foundation can compress that; organisations starting without an ISMS should budget nine to twelve months.

Phase 1: System classification and scoping         Weeks 1–4
Phase 2: Data governance and documentation         Weeks 4–12
Phase 3: Risk management system and FRIA           Weeks 8–16
Phase 4: Technical documentation and testing       Weeks 12–20
Phase 5: Conformity assessment and CE marking      Weeks 20–28
Phase 6: Post-market monitoring (ongoing)          Week 20+

Phases overlap heavily. The AI Act is not a sequential deliverables project — data governance work in Phase 2 feeds the risk management system in Phase 3, which feeds the technical documentation in Phase 4, which is what the conformity assessment in Phase 5 evaluates. Work the phases in parallel where dependencies allow.

Phase 1: System classification and scoping (weeks 1–4)

Four weeks to decide what’s inside your programme.

System inventory. List every AI system your organisation develops or deploys that is placed on the EU market or whose output is used in the EU. For SaaS companies this usually means a short list — the product itself and perhaps internal AI tooling whose output reaches customers. Document for each: what it does, what data it processes, who uses it, who’s affected by its output, and what decisions or inferences it produces.
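As a concrete sketch (purely illustrative; the regulation prescribes no format), an inventory record might look like this in Python. The field names and the example system are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    """One row of the AI system inventory (fields illustrative, not mandated)."""
    name: str
    purpose: str                  # what the system does
    data_processed: list[str]     # categories of data it processes
    users: str                    # who uses it
    affected_parties: str         # who is affected by its output
    outputs: str                  # decisions or inferences it produces
    eu_exposure: bool             # on the EU market, or output used in the EU

inventory = [
    AISystemRecord(
        name="resume-screening-feature",   # hypothetical example system
        purpose="Ranks inbound applications for customer HR teams",
        data_processed=["CVs", "application form answers"],
        users="Customer HR staff",
        affected_parties="Job applicants, including in the EU",
        outputs="Shortlist ranking per applicant",
        eu_exposure=True,
    ),
]
```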

Role determination. For each system, are you a provider (you develop or materially modify it), a deployer (you use it), an importer, a distributor, or a combination? Most SaaS companies developing their own AI features are providers for those features and potentially deployers for third-party AI they integrate.

Risk classification. Walk each system through the four-tier risk framework: Prohibited (Article 5), High-Risk (Article 6), Limited Risk (Article 50 transparency obligations), Minimal Risk. For anything in doubt, document the classification logic against Annex III’s eight high-risk domains. Where Article 6(3) exceptions are claimed, document the rationale and keep the documentation current.
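A sketch of how the classification outcome can be recorded. The legal judgment itself is human work; the value of code here is only that the decision and its rationale are written down and versioned. Tier labels and fields are illustrative:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "Article 5"
    HIGH_RISK = "Article 6 / Annex III"
    LIMITED_RISK = "Article 50 transparency"
    MINIMAL_RISK = "no specific obligations"

# Hypothetical classification record for the inventory entry sketched above
classification = {
    "system": "resume-screening-feature",
    "tier": RiskTier.HIGH_RISK,
    "annex_iii_domain": "employment and worker management",
    "article_6_3_exception_claimed": False,
    "rationale": "Materially influences hiring decisions (Annex III, employment).",
    "last_reviewed": "2026-04-15",
}
```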

Scope statement. Which of your systems are in the AI Act compliance programme and at what level? This becomes the foundation document for everything that follows.

Phase 2: Data governance and documentation (weeks 4–12)

Eight weeks of work on data foundations. This is where most high-risk AI Act programmes actually do their hardest engineering.

Training, validation, and testing datasets. Article 10 requires high-risk system providers to govern the quality of data used to train, validate, and test AI systems. Data governance practices must cover: relevant design choices, data collection processes, data preparation operations, examination of biases and measures to mitigate them, identification of data gaps, and evaluation of whether the datasets are appropriate for the intended purpose.

This is a heavier lift than most compliance-oriented teams expect. The Article doesn’t say “have a dataset governance policy.” It requires documented processes for every stage of dataset preparation and explicit bias evaluation. For many SaaS companies running AI features, this is the first time bias evaluation becomes a mandatory documented activity rather than an aspirational one.
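What a single documented bias check might look like in practice: a simple selection-rate gap across groups on a held-out test set. This is one disparity metric among many, and the column names here are hypothetical; Article 10 requires that the examination and mitigation be documented, not this particular metric:

```python
import pandas as pd

def selection_rate_gap(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Max minus min positive-outcome rate across groups: one simple disparity metric."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical held-out test set with a protected attribute and a binary outcome
test = pd.DataFrame({
    "age_band":    ["18-30", "18-30", "31-50", "31-50", "51+", "51+"],
    "shortlisted": [1,       0,       1,       1,       0,     0],
})

gap = selection_rate_gap(test, "age_band", "shortlisted")
print(f"Selection-rate gap across age bands: {gap:.2f}")  # goes into the Art. 10 record
```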

Technical documentation. Article 11 requires technical documentation for each high-risk system, created before the system is placed on the market and kept current. Annex IV specifies what this documentation must include — system description, detailed design, training data characteristics, evaluation methods and results, information on the risk management system, and changes over the lifecycle. For a SaaS product with continuous deployment, “kept current” is an operational discipline, not a one-time document.
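One way to make “kept current” operational in a continuous-deployment pipeline is a CI gate that fails when model code changes without a matching documentation update. A sketch only; the paths and base branch are assumptions about a hypothetical repository layout:

```python
#!/usr/bin/env python3
"""CI gate (illustrative): fail if model code changed but Annex IV docs did not."""
import subprocess
import sys

def changed_files(base: str = "origin/main") -> list[str]:
    """Files touched between the base branch and HEAD."""
    result = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

files = changed_files()
model_touched = any(f.startswith("ml/") for f in files)            # hypothetical path
docs_touched = any(f.startswith("docs/annex-iv/") for f in files)  # hypothetical path

if model_touched and not docs_touched:
    sys.exit("Model code changed without an Annex IV documentation update.")
```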

Record-keeping and automatic logs. Article 12 and Article 19 require high-risk systems to automatically log their operations, and providers to retain logs for a defined period (at least six months, longer in some cases). Build this into the system architecture, not as an afterthought.
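A minimal sketch of what “built into the architecture” can mean: one structured, append-only record per inference, carrying references rather than raw payloads, with a retention marker. The schema and the retention figure are illustrative; confirm the period that actually applies to your system:

```python
import datetime as dt
import json
import logging

RETENTION_DAYS = 183  # at least six months under Art. 19; longer where required

logger = logging.getLogger("ai_ops")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler("ai_operations.log"))

def log_inference(system: str, input_ref: str, output_ref: str, model_version: str) -> None:
    """Append one structured record per inference (references, not raw personal data)."""
    record = {
        "ts": dt.datetime.now(dt.timezone.utc).isoformat(),
        "system": system,
        "input_ref": input_ref,    # pointer into your data store, not the payload
        "output_ref": output_ref,
        "model_version": model_version,
        "delete_after": (dt.date.today() + dt.timedelta(days=RETENTION_DAYS)).isoformat(),
    }
    logger.info(json.dumps(record))
```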

Phase 3: Risk management system and FRIA (weeks 8–16)

Two months to stand up the risk management processes the regulation requires.

Risk management system. Article 9 requires high-risk system providers to establish, implement, document, and maintain a risk management system throughout the AI system’s lifecycle. It must include identification and analysis of known and foreseeable risks, estimation and evaluation of risks that may emerge in use, evaluation of risks based on post-market monitoring data, and adoption of risk management measures.

This is broader than the data-focused risk work in Phase 2. It covers risks to health, safety, and fundamental rights — including risks that emerge from the way the system is used by deployers in ways the provider didn’t anticipate. Build the risk management system as a living artefact with documented review cadences.
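A “living artefact with documented review cadences” can be as simple as a register where every entry carries its last review date and cadence, so overdue reviews are machine-detectable. A sketch with illustrative fields:

```python
import datetime as dt
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One line of the Article 9 risk register (fields illustrative)."""
    risk_id: str
    description: str
    affected_interests: list[str]   # health, safety, fundamental rights
    mitigation: str
    last_reviewed: dt.date
    review_every_days: int = 90     # documented cadence; tighten for live systems

    def overdue(self, today: dt.date | None = None) -> bool:
        today = today or dt.date.today()
        return (today - self.last_reviewed).days > self.review_every_days
```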

Fundamental Rights Impact Assessment (FRIA). Article 27 requires deployers of certain high-risk systems (specifically those used by public sector bodies, and a narrower subset used by private entities providing public services or in credit scoring, life insurance pricing, and some other areas) to conduct a FRIA before deployment. The FRIA evaluates the system’s potential impact on fundamental rights and specifies mitigation measures.

US SaaS companies are typically providers, not deployers, and the FRIA obligation falls primarily on the deployer. But you will be asked to support your deployer customers’ FRIA work — providing information about the system’s function, risks, and limitations that deployers need to complete their assessment. Build the FRIA support materials in Phase 3 rather than producing them reactively when a customer first asks.

Human oversight. Article 14 requires high-risk systems to be designed so that they can be effectively overseen by natural persons during use, with oversight proportionate to the risks. “Oversight” isn’t just “a user can click cancel.” It’s documented mechanisms allowing an overseer to understand the system’s capabilities and limitations, detect and address anomalies or dysfunctions, interpret outputs, and intervene or override.
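What an intervene-or-override mechanism can look like at the code level: a wrapper that routes low-confidence or operator-flagged outputs to a human before they take effect. The threshold, the flag, and both callables are hypothetical:

```python
from typing import Callable

CONFIDENCE_FLOOR = 0.80  # illustrative threshold below which a human decides

def with_oversight(
    predict: Callable[[dict], tuple[str, float]],   # returns (decision, confidence)
    escalate: Callable[[dict, str, float], str],    # human review; may override
) -> Callable[[dict], str]:
    """Wrap a model call so a person can intervene before the output takes effect."""
    def overseen(case: dict) -> str:
        decision, confidence = predict(case)
        if confidence < CONFIDENCE_FLOOR or case.get("operator_flagged"):
            decision = escalate(case, decision, confidence)
        return decision
    return overseen
```

The design point is that escalation happens before the output reaches the affected person, and every override is a recordable event that feeds the logs and the risk register.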

Phase 4: Technical documentation and testing (weeks 12–20)

Two months of validation and testing against the regulation’s technical requirements.

Accuracy, robustness, and cybersecurity. Article 15 requires high-risk systems to achieve appropriate levels of accuracy, robustness, and cybersecurity, consistent throughout the lifecycle. Accuracy metrics must be declared in the accompanying instructions for use. Robustness means the system performs as intended under a range of conditions including potential adversarial manipulation. Cybersecurity means resilience against attempts to alter the system’s use, outputs, or performance.

For SaaS teams, this is where security programmes already running (SOC 2, ISO 27001) do useful work. The AI-specific additions are robustness testing (adversarial inputs, edge cases, distribution shift) and the requirement to declare accuracy metrics in the instructions for use.
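A sketch of what robustness testing can look like in a test suite: small, meaning-preserving perturbations of an input should not flip the decision. Everything here is a hypothetical stand-in, including `model_decision`, which stubs your real inference entry point:

```python
# Illustrative pytest-style robustness checks against a stand-in model
import copy

def model_decision(case: dict) -> str:
    """Stand-in for your real inference entry point (assumed name)."""
    return "shortlist" if "experience" in case["free_text"].lower() else "reject"

def perturb(case: dict) -> list[dict]:
    """Edge-case variants that should not change the outcome."""
    variants = []
    v = copy.deepcopy(case); v["name"] = v["name"].upper(); variants.append(v)
    v = copy.deepcopy(case); v["free_text"] += " " * 50; variants.append(v)
    return variants

def test_decision_stable_under_perturbation():
    case = {"name": "Jane Doe", "free_text": "Five years of relevant experience"}
    baseline = model_decision(case)
    for variant in perturb(case):
        assert model_decision(variant) == baseline
```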

Quality management system. Article 17 requires providers of high-risk systems to have a quality management system covering the system’s design, development, quality control, testing, validation, post-market monitoring, and more. If you have an ISO 27001 ISMS, you’ve got most of the bones of a QMS already — the AI-specific additions slot in alongside.

Instructions for use. Article 13 requires high-risk systems to be accompanied by instructions that allow deployers to interpret and use the system’s output appropriately. This is a specific deliverable to write in Phase 4 — documentation clearly aimed at the deployer audience covering system capabilities, intended purpose, limitations, performance metrics, and oversight measures.

Phase 5: Conformity assessment and CE marking (weeks 20–28)

Two months to complete the formal compliance route.

For most Annex III high-risk systems, the conformity assessment is an internal process under Article 43 — the provider self-assesses against the requirements using the quality management system, technical documentation, and the risk management system as evidence. For certain systems (specifically biometric identification systems not covered by listed harmonised standards), third-party assessment by a Notified Body is required.

The process: apply the quality management system assessment (internally or by a Notified Body where required), draw up the EU declaration of conformity, affix the CE marking where applicable, and register the system in the EU database for high-risk AI systems before placing it on the market.

One caveat: the route above assumes harmonised standards are available. If they aren’t — still the case for most AI Act-specific standards as of April 2026 — providers must demonstrate compliance against the regulation’s essential requirements directly, a more demanding exercise than conformity against a harmonised standard. This is precisely the uncertainty the Digital Omnibus attempts to address.

Phase 6: Post-market monitoring (ongoing)

Post-market monitoring is not a project phase. It’s an operational programme that starts the moment the system is on the market and runs for the system’s entire lifecycle. Article 72 requires providers to establish a documented post-market monitoring system that proactively collects and analyses data on the system’s performance in real-world use, to ensure continuous compliance with the regulation’s requirements.

Serious incidents must be reported to market surveillance authorities under Article 73 — within 15 days for most incidents, and faster for specific categories. Build the incident detection and reporting pipeline as part of the operational rollout, not as a post-launch afterthought.
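A minimal sketch of the deadline arithmetic in that pipeline: compute the report-by date the moment an incident is detected, so the clock is visible from day one. The 15-day figure is the Article 73 default; shorter deadlines apply to specific categories, so the parameter must be set per incident type:

```python
import datetime as dt

GENERAL_DEADLINE_DAYS = 15  # Article 73 default; some categories are shorter

def report_by(detected: dt.date, deadline_days: int = GENERAL_DEADLINE_DAYS) -> dt.date:
    """Date by which the market surveillance authority must be notified."""
    return detected + dt.timedelta(days=deadline_days)

# Hypothetical incident entering the pipeline with its clock already running
incident = {"id": "INC-2026-014", "detected": dt.date(2026, 9, 3)}
print("Report by:", report_by(incident["detected"]))  # 2026-09-18
```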

Where this is still uncertain — and what to do about it

Five areas where the regulation or its implementation are genuinely unsettled, and the pragmatic response to each.

Article 6 high-risk classification guidance. The 2 February 2026 deadline was missed; the Commission has indicated final-draft circulation by end of February 2026, with adoption expected in March or April 2026. Response: proceed with classification based on the regulation’s text and the preliminary drafts. Document the rationale. Revisit when the guidance publishes.

Harmonised technical standards. CEN and CENELEC missed their 2025 deadlines for core AI Act standards; targeted end of 2026. Response: don’t wait. Compliance against the regulation’s essential requirements without harmonised standards is harder but feasible. Early adopters will shape how compliance against the essential requirements is interpreted in the standards’ absence.

Digital Omnibus on AI. Proposed delay to high-risk application dates, not yet adopted. Must be agreed by Parliament and Council before August 2026. Response: plan to August 2026 deadline; treat any extension as schedule relief, not as an excuse to pause. The work is the same either way.

Member state implementation variation. Penalty ceilings, enforcement priorities, and sandbox implementations vary across member states. Response: for US SaaS targeting the EU broadly, harmonise to the strictest national regime you operate under. The cost of harmonising up is usually less than the cost of running different compliance postures per member state.

Overlap with sector legislation. AI systems embedded in medical devices, vehicles, and other regulated products must comply with both the AI Act and the sector regulation. Response: coordinate the AI Act programme with any existing sector-regulatory programme from Phase 1. Retrofitting later is expensive.

The underlying reality: the EU AI Act is a genuinely novel regulation and it’s being implemented in conditions where the Commission, the standards bodies, and the member states are all behind schedule. Organisations that treat the uncertainty as a reason to wait tend to end up behind schedule themselves. Organisations that pick reasonable interpretations, document their reasoning, and proceed tend to be well positioned when the guidance eventually arrives.

Overlap with ISO 42001

Organisations implementing the EU AI Act and ISO 42001 in parallel discover substantial overlap — particularly on the quality management system, the risk management system, the AI impact assessment process, and data governance. Running the two as a combined programme is the efficient pattern. Full guidance in our ISO 42001 implementation article, with the combined-programme logic in the ISO 27001 + ISO 42001 dual-track guide.

FAQ

Does the EU AI Act apply to my US SaaS company?

Yes, if you place an AI system on the EU market or your system’s output is used in the EU — regardless of where your company is established. The Act’s territorial scope is broad.

What’s the actual deadline?

Legally, 2 August 2026 for most high-risk obligations (Annex III systems). The Digital Omnibus on AI, if adopted, would delay to long-stop dates of 2 December 2027 (Annex III) or 2 August 2028 (Annex I). The Omnibus is not yet law.

What are the penalties for non-compliance?

The headline figure is up to €35 million or 7% of worldwide annual turnover, whichever is higher, for the most serious violations. For scale: a company with €1 billion in turnover faces a 7% ceiling of €70 million, so the percentage governs. Lower tiers apply to other categories. Member states set their national penalty regimes within these ceilings.

What’s the difference between provider, deployer, importer, and distributor?

Provider: you develop the AI system or place it on the market under your name. Deployer: you use the AI system. Importer: you place an AI system from a non-EU provider on the EU market. Distributor: you make an AI system available on the market without being the provider or importer. Most US SaaS companies developing AI features are providers.

Do I need a FRIA?

FRIA is a deployer obligation, not a provider obligation — but specifically for deployers that are public sector bodies, private entities providing public services, or certain private sector deployers in areas like credit scoring and insurance. US SaaS companies typically don’t conduct FRIAs themselves but need to support their deployer customers’ FRIAs.

What’s the relationship to GDPR?

GDPR governs processing of personal data; the AI Act governs AI systems. For high-risk AI systems that process personal data — which is most of them — both regulations apply. Run them as coordinated programmes, not as alternatives. See our GDPR implementation guide.

Is ISO 42001 certification sufficient for EU AI Act compliance?

No, but it’s substantial evidence. ISO 42001 is a voluntary management system standard; the AI Act is law. ISO 42001 certification supports AI Act compliance but doesn’t substitute for it — you still need the specific artefacts the regulation requires (technical documentation, declaration of conformity, CE marking, post-market monitoring programme).

Does the AI Act apply to General-Purpose AI models?

Yes, with a separate obligation set. GPAI providers have obligations around documentation, copyright compliance, and transparency that applied from 2 August 2025. Additional obligations apply to GPAI models with “systemic risk” (a specific classification for the largest models).

What’s the status of the Digital Omnibus on AI?

Proposed by the Commission in November 2025. Under consideration by the European Parliament and Council. Negotiators are targeting political agreement before June 2026 to ensure any delay can take effect before the current August 2026 deadline. Not yet adopted.

What happens if a customer asks for AI Act compliance documentation before the obligations apply?

This is increasingly common — EU customers asking US SaaS providers for AI Act readiness evidence ahead of the deadline. Treat the question as a commercial signal that matters whether or not the deadline moves. The artefacts your programme produces (technical documentation, QMS evidence, risk management system documentation) are what the customer is asking for, and having them ready early is a commercial advantage.