
EU AI Act enforcement is August 2, 2026. What medical device teams cannot defer.

I've been on calls with four medtech companies in the past month that are genuinely behind on August 2. None of them are behind on documentation. They're behind on the engineering decisions the documentation is supposed to reflect. Automated performance logging. Bias documentation on training datasets assembled three years ago. Human oversight boundaries defined informally in design reviews and never formally specified. These aren't document problems. You can't write your way out of them.

August 2, 2026 marks full enforcement of the EU AI Act's high-risk AI provisions. Not a transition date. Not a grace period for devices already on the market. If your medical device contains an AI or ML component and you sell into the EU, you have roughly four months.

Why August 2 is actually a hard deadline

The AI Act's phased implementation has created confusion about timelines. Prohibition on unacceptable-risk AI systems took effect February 2025. General-purpose AI model requirements apply from August 2025. High-risk AI systems — the category that captures medical devices — reach full enforcement August 2, 2026.

"Full enforcement" means that after August 2, placing a high-risk AI system on the EU market without meeting all AI Act requirements is a regulatory violation. No separate implementation period for existing devices. If your device was CE-marked under EU MDR before the AI Act applied, that CE mark doesn't extend to AI Act compliance. The two frameworks run in parallel. You need both.

Four months sounds like enough time. It's not, if you're starting from zero on the engineering requirements. Notified Body slots are already booking into late summer. And the non-deferrable requirements — automated logging, bias documentation, human oversight specification — require engineering decisions and infrastructure, not just writing.

Is your device in scope?

Three-part check. First: does your product contain an AI or ML component? This includes classical ML models, neural networks, LLM-based clinical decision support, computer vision for imaging analysis, anomaly detection algorithms, any system generating outputs from learned patterns. If yes, continue.

Second: is the product intended for a medical purpose under EU MDR? If it carries a CE mark under MDR or IVDR, or would require one, the answer is yes.

Third: do you place it on the EU market, or intend to?

Three yeses and you're in scope. That captures essentially all SaMD with ML components, AI-assisted diagnostic imaging devices, clinical decision support tools with learned models, patient monitoring systems with anomaly detection, and most AI-driven drug delivery or dose calculation systems. The only realistic exceptions are devices where the AI component is demonstrably irrelevant to clinical function — rare in 2026.

What dual conformity actually means in practice

Dual conformity means satisfying two conformity assessment frameworks simultaneously. EU MDR requires technical documentation per Annex II and Annex III, a QMS meeting Annex IX, and a conformity assessment by a Notified Body for Class IIa and above. The EU AI Act requires additional technical documentation per AI Act Annex IV, a QMS that addresses AI-specific requirements, and conformity assessment for high-risk AI systems.

The assessments aren't entirely separate processes. The AI Act is designed to integrate with sectoral legislation — AI Act conformity assessment is intended to be conducted in conjunction with the MDR assessment, not as a second independent audit. One submission, one NB assessment, covering both frameworks.

The catch: your Notified Body must be designated under both MDR and the AI Act. As of early 2026, only a small number of NBs hold dual designation. If your current NB isn't among them, you need clarity now — not in June. Some NBs are in the process of obtaining AI Act designation; others are not. Some arrangements involve a lead NB for MDR with a collaborating body for AI Act assessment. Those arrangements add coordination overhead and timeline risk.

Practically: your MDR technical file and AI Act Annex IV documentation are developed in parallel, with explicit cross-references where requirements overlap. The NB reviews both together. Your Declaration of Conformity references both regulations. Your QMS covers both frameworks. Ongoing post-market obligations run in parallel under both regimes.

MDR-to-AI Act artifact mapping (reconstructed from the original figure):

Existing EU MDR artifacts (Annex II / III): technical documentation (general), risk management file (ISO 14971), clinical evaluation report, post-market surveillance plan, performance evaluation (IVD), instructions for use.

EU AI Act requirements (Annex IV), with reuse status against those MDR artifacts:
- General AI system description: reuse existing MDR artifact, with additions
- Design specs + training data: partial reuse, extend existing artifact
- Data provenance + bias docs: net-new work
- Automated performance logging: net-new work
- Human oversight documentation: partial reuse, extend existing artifact
- AI-specific post-market monitoring: partial reuse, extend existing artifact

What you can satisfy from existing MDR artifacts

AI Act Annex IV specifies six categories of technical documentation for high-risk AI systems. Three have meaningful overlap with existing MDR work.

General description of the AI system. Annex IV requires documentation of intended purpose, version, and deployment context. Your MDR Annex II documentation already contains intended use, indications for use, device description, and version identification. The AI Act requires more explicit coverage of operational context and deployment environment. In most cases, you're extending existing content, not creating it from scratch.

Design specifications including architecture and performance metrics. This overlaps substantially with your MDR design inputs, design outputs, and IEC 62304 software architecture documentation. The AI Act requires explicit documentation of model architecture, training methodology, and performance metrics on validation datasets. If your DHF is in order and your IEC 62304 documentation is current, the engineering content already exists. The work is mapping it to Annex IV structure and filling ML-specific gaps.

Risk management. The AI Act requires a risk management section in Annex IV. Your ISO 14971 risk file is the source material. The AI Act's risk framing is broader — it explicitly includes AI-specific failure modes like distributional shift and adversarial inputs — but the methodology maps directly to ISO 14971. Your existing risk file covers most of the required ground. You need to add explicit treatment of AI-specific hazards.

Three of six Annex IV categories have meaningful MDR overlap. The other three are largely new — and two of them require engineering decisions, not just documentation effort.
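That split can itself be kept as a small machine-checkable gap-analysis artifact rather than a slide. A sketch in Python; the category keys follow this article's grouping and the statuses and MDR sources are illustrative assumptions, not an official Annex IV taxonomy:

```python
# Gap analysis as data: each Annex IV category mapped to its reuse status.
# Category names and statuses are illustrative, following the article's
# three-overlap / three-new grouping, not the regulation's own structure.
ANNEX_IV_GAP = {
    "general_description":  {"status": "extend", "mdr_source": "Annex II device description"},
    "design_and_training":  {"status": "extend", "mdr_source": "DHF + IEC 62304 docs"},
    "risk_management":      {"status": "extend", "mdr_source": "ISO 14971 risk file"},
    "data_provenance_bias": {"status": "new",    "mdr_source": None},
    "automated_logging":    {"status": "new",    "mdr_source": None},
    "human_oversight":      {"status": "new",    "mdr_source": None},
}

# Net-new items are the ones with no MDR artifact to extend.
net_new = [k for k, v in ANNEX_IV_GAP.items() if v["status"] == "new"]
```

Keeping the mapping as data means the week 3-4 gap analysis described later can be diffed and reviewed like any other engineering artifact.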

The four requirements you cannot defer

These are the requirements companies are actually behind on. All four require engineering decisions, infrastructure, or significant documentation reconstruction. None can be resolved the week before NB submission.

Automated performance logging and monitoring. The AI Act requires that high-risk AI systems be designed to automatically log events relevant to identifying risks and deviations from intended performance across their lifecycle. This is an engineering requirement, not a documentation one. Your device must be capable of generating, retaining, and making available logs that capture performance metrics in real-world deployment. Drift detection must be part of your system architecture. If you're building this capability now, it won't be production-ready by August.
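To make the engineering shape of this concrete: a minimal sketch of automated event logging with a naive rolling-window drift check. The class, field names, and thresholds are illustrative assumptions, not structures prescribed by the AI Act:

```python
# Minimal sketch: log inference events, retain them, and flag drift when
# rolling accuracy falls below a baseline. All names and thresholds here
# are hypothetical examples, not AI Act-mandated structures.
import json
import time
from collections import deque

class PerformanceLogger:
    def __init__(self, baseline_accuracy: float, window: int = 100,
                 drift_tolerance: float = 0.05):
        self.baseline = baseline_accuracy
        self.tolerance = drift_tolerance
        self.recent = deque(maxlen=window)   # rolling window of outcomes
        self.records = []                    # retained log records

    def log_event(self, model_version: str, prediction, ground_truth=None):
        """Record one inference event; flag drift when rolling accuracy
        drops below baseline minus tolerance."""
        record = {
            "ts": time.time(),
            "model_version": model_version,
            "prediction": prediction,
            "ground_truth": ground_truth,
        }
        if ground_truth is not None:
            self.recent.append(int(prediction == ground_truth))
            if len(self.recent) == self.recent.maxlen:
                rolling = sum(self.recent) / len(self.recent)
                record["rolling_accuracy"] = rolling
                record["drift_flag"] = rolling < self.baseline - self.tolerance
        self.records.append(record)
        return record

    def export(self) -> str:
        """Make retained logs available for review (here: JSON lines)."""
        return "\n".join(json.dumps(r) for r in self.records)
```

In a real device the log sink, retention policy, and drift thresholds would be tied to the risk file and post-market monitoring plan; the point is that all of this is system architecture, not a document.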

Bias documentation and training data provenance. Annex IV requires documentation of training and testing data: sources, preprocessing steps, selection criteria, known limitations or biases. For devices already on market, this means going back to your model development history and creating a formal data governance statement. If your training data documentation is informal, reconstruction is possible but time-consuming. If your training pipeline involved third-party datasets, you need formal provenance records and bias assessments for each.
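For illustration, a reconstructed provenance record for a legacy training set might be captured in a structure like this. The field names and the example dataset are hypothetical, not an Annex IV schema:

```python
# Hypothetical per-dataset provenance record for training-data
# documentation; field names are illustrative assumptions.
from dataclasses import dataclass, field, asdict

@dataclass
class DatasetProvenance:
    name: str
    source: str                       # origin: site, vendor, registry
    collection_period: str
    license: str
    preprocessing: list = field(default_factory=list)   # ordered steps
    selection_criteria: str = ""
    known_biases: list = field(default_factory=list)    # documented limitations
    third_party: bool = False

    def to_record(self) -> dict:
        return asdict(self)

# Example: reconstructing provenance for a legacy training set.
legacy = DatasetProvenance(
    name="chest-xray-train-2022",
    source="Hospital A PACS export",
    collection_period="2019-2021",
    license="internal data-use agreement",
    preprocessing=["deidentification", "resize 512x512", "histogram equalization"],
    selection_criteria="adults, PA view only",
    known_biases=["single-site population", "underrepresents pediatric cases"],
)
```

One record per dataset, third-party sets flagged explicitly, gives you a consistent base for the bias assessments each source requires.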

Data governance statements. Beyond training data documentation, the AI Act requires ongoing data governance covering practices that affect AI system behavior throughout the product lifecycle: how new real-world data is collected and used for model updates, what retention practices apply to logged performance data, how data quality is assured in deployment. Most MDR quality systems don't address AI-specific data governance explicitly. This is new QMS content.
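One way to keep a governance statement enforceable rather than purely narrative is to express it as configuration with checks against it. A sketch; every value and field name below is a hypothetical example, not required QMS content:

```python
# Hypothetical data-governance policy as config, plus one enforcement
# check. Values are illustrative placeholders.
from datetime import datetime, timedelta, timezone

GOVERNANCE_POLICY = {
    "log_retention_days": 3650,          # e.g. retain performance logs 10 years
    "update_data_approval": "change advisory board",
    "quality_checks": ["schema", "label_audit", "site_balance"],
}

def is_retained(record_ts: datetime, now: datetime) -> bool:
    """True while a logged record is still inside the retention window."""
    return now - record_ts <= timedelta(days=GOVERNANCE_POLICY["log_retention_days"])
```

The QMS text then describes the policy; the deployed system reads the same values, so the two cannot silently diverge.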

Human oversight decision boundaries. The AI Act requires explicit documentation of what the AI system can and cannot decide autonomously — the boundary between AI-generated output and required human decision-making. This is more specific than general use instructions in your IFU. You need a formal human oversight specification that maps AI outputs to required user actions, identifies conditions where AI output must not be acted on without clinician review, and documents the design features that enforce those boundaries. For devices already designed, this often requires a design review to confirm the product as built actually enforces the boundaries you intend to document.
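A formal oversight specification can be made executable, so the device demonstrably enforces the boundary it documents. A toy sketch, with hypothetical finding names, thresholds, and action labels:

```python
# Toy human-oversight boundary: map an AI output to the user action the
# design must enforce. Findings, thresholds, and action names are
# hypothetical examples, not a regulatory vocabulary.
def required_action(finding: str, confidence: float) -> str:
    """Return the human action required before this AI output is acted on."""
    CRITICAL_FINDINGS = {"pneumothorax", "free air"}
    if finding in CRITICAL_FINDINGS:
        # Critical findings always require clinician review before action.
        return "clinician_review_required"
    if confidence < 0.80:
        # Low-confidence output must not drive decisions autonomously.
        return "clinician_review_required"
    return "display_with_confirmation"   # user confirms before acting
```

When the boundary lives in one function like this, the design review that verifies "the product as built enforces what we document" becomes a test suite rather than an argument.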

The Notified Body bottleneck is real

About 30 MDR-designated Notified Bodies are active in the EU as of early 2026. The subset with AI Act designation is significantly smaller. Demand for dual-designated NB slots will exceed capacity before August 2.

If you're already engaged with an NB for MDR work, your first call this week should be about their AI Act designation status and timeline capacity. If they're not dual-designated, find out whether they expect designation in time for August submissions, and if the fallback is a collaborating-body arrangement, price in the coordination overhead it carries.

If you don't have an existing NB relationship, the situation is more challenging. Engaging an NB now — not after your documentation is complete — is the correct sequence. NBs can give provisional slot commitments before documentation is final. Waiting for completed documentation before initiating NB contact adds months to the timeline.

There's also an internal bottleneck. AI Act conformity assessment requires your regulatory, engineering, and quality teams to be aligned on Annex IV content. In most device companies, the people who understand the AI architecture aren't the same people who understand the regulatory framework. Building that bridge takes time, and it needs to happen before the NB assessment, not during it.

An 8-week sprint to August readiness

Eight weeks of focused work creates a realistic foundation for August compliance.

Weeks 1–2: Scope confirmation and NB engagement. Confirm AI Act applicability for each product in your portfolio. Contact your Notified Body this week — not after documentation progresses. Assign a dual-regulation lead who owns both Annex IV documentation and NB coordination.

Weeks 3–4: Gap analysis against Annex IV. Map your existing MDR documentation to AI Act Annex IV requirements section by section. Document what exists, what requires extension, what's net-new. For each of the four non-deferrable categories — automated logging, bias documentation, data governance, human oversight boundaries — assess current state specifically.

Weeks 5–6: Engineering decisions and infrastructure. Automated logging architecture must be specified; if it requires device changes, the design process starts now. Human oversight decision boundaries must be formally specified and verified against the current implementation. If design changes are required — system constraints, forced confirmation steps, output caveats — the design change process starts here. This is the highest-risk segment: discoveries here can expand scope.

Weeks 7–8: Documentation production. Annex IV drafted and internally reviewed. Training data provenance and bias documentation completed. Data governance statement drafted for QMS integration. Risk management file updated to address AI-specific hazards. Human oversight specification finalized and cross-referenced to device design.

May–June: QMS update, internal audit, NB submission preparation. QMS procedures updated to address AI Act obligations — post-market monitoring, incident reporting, data governance. Internal audit against the updated QMS. Technical documentation finalized and submitted to NB. NB review and assessment in July, targeting conformity determination before August 2.

One note on PCCP: if you've developed a Predetermined Change Control Plan for FDA, it maps conceptually to the AI Act's post-market monitoring requirements. PCCP covers planned modifications to the AI algorithm, validation requirements for each change type, and criteria that trigger a new submission. The AI Act's post-market monitoring plan for AI requires analogous content — planned update procedures, performance monitoring thresholds, incident response protocols. Use your PCCP as a starting point. Not a one-to-one map, but a substantial head start.

Eight-week sprint to August 2:
- Weeks 1–2: NB contact + scope assessment
- Weeks 3–4: Gap analysis against Annex IV
- Weeks 5–6: Logging architecture + bias docs
- Weeks 7–8: Documentation + QMS update
- May–June: Audit readiness
- August 2: deadline

NB engagement in week 1 is not optional — dual-designated body availability is the critical path.

MANKAIND

Every AI Act Annex IV requirement has its origin in engineering decisions you've already made. Your model architecture was designed. Your training data was selected. Your performance targets were specified. Your human-in-the-loop requirements were defined, at least informally. The gap between where most teams stand today and where they need to be isn't a gap in engineering work — it's a gap in the translation of that work into structured regulatory documentation.

MANKAIND captures design decisions as they happen — architecture choices, training data specifications, performance requirements, risk management updates, human oversight boundaries — and generates regulatory documentation from that record in real time. When August 2 approaches, your AI Act Annex IV documentation isn't assembled from memory and meetings. It generates from the same engineering record that already powers your MDR technical file.

Dual conformity under EU MDR and the AI Act is real work. It isn't primarily a documentation problem — it's an engineering problem that documentation reflects. Treat it that way, start the NB conversation this week, and the August deadline is achievable.

See how MANKAIND handles this

30-minute demo. Bring your hardest design controls question.