Use Cases
You changed a design input on your AI diagnostic. Now what?
Your team is building an AI-powered ECG analysis algorithm—a Class II SaMD device that identifies arrhythmia patterns cardiologists might miss on a standard 12-lead. Last week, your ML engineers retrained the model on a larger, more diverse dataset. Sensitivity improved by 3% across all subgroups. Everyone is excited. Then your regulatory engineer asks the question that kills the mood: "What documentation needs updating?"
The answer is almost everything. The model architecture changed—two additional convolutional layers—so the IEC 62304 software detailed design needs revising. The training dataset changed, which means the algorithm validation report is stale. Three SOUP components were updated to support the new architecture, and each one needs a fresh hazard analysis entry. The software safety classification might have shifted; no one can say until the failure modes of the new architecture are evaluated. And the entire V&V suite needs re-running against the updated model, but nobody is sure which tests are still valid and which need redesigning.
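To make the cascade concrete, here is a minimal sketch of that fan-out as data, in Python. The artifact names, change types, and `impacted_artifacts` helper are hypothetical, invented for this illustration rather than taken from MANKAIND or any real register:

```python
# Illustrative sketch only: artifact names, change types, and the dependency
# edges below are invented to mirror the cascade described above. They are
# not MANKAIND's schema or any real project's register.
DEPENDS_ON = {
    "iec62304_detailed_design":    {"model_architecture"},
    "algorithm_validation_report": {"training_dataset", "model_architecture"},
    "soup_register":               {"soup_components"},
    "hazard_analysis":             {"soup_components", "model_architecture"},
    "safety_classification":       {"model_architecture"},
    "vv_test_suite":               {"model_architecture", "training_dataset"},
}

def impacted_artifacts(changes: set[str]) -> list[str]:
    """Every artifact whose inputs intersect the change set is stale."""
    return sorted(a for a, inputs in DEPENDS_ON.items() if inputs & changes)

# One retrain touched all three change types at once -- hence "almost everything".
print(impacted_artifacts({"model_architecture", "training_dataset", "soup_components"}))
```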
The spreadsheet nightmare
Here's where most SaMD teams are right now. The SOUP register lives in a shared spreadsheet that was last updated two sprints ago. The IEC 62304 document set—software development plan, requirements spec, architecture document, detailed design, unit verification, integration testing, system testing—exists as a collection of Word documents on a shared drive, each maintained by a different engineer. The traceability between software requirements and test cases lives in Jira, sort of, if you know which custom fields to query and which labels to filter on. The safety classification rationale is in a PDF that references a version of the architecture that no longer exists.
Your ML engineer made one decision—retrain with better data. The documentation impact assessment takes two weeks. Not because the analysis is hard, but because finding all the documents that need updating, understanding their interdependencies, and confirming which versions are current requires archaeology, not engineering. Meanwhile, the model actually running in your test environment has diverged further from the model described in your documentation, and the gap keeps widening.
The same model update, with MANKAIND
Run it again. Same team, same model improvement. But this time, when the ML engineer commits the retrained model, MANKAIND understands the cascade. The platform maps the model change to the affected IEC 62304 artifacts: the detailed design document flags the architectural delta, the SOUP register highlights which dependencies were updated and which hazard analysis entries need re-evaluation, and the V&V framework identifies exactly which test cases need re-execution versus which remain valid.
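One way to picture that last step, purely as a sketch: if each test case records which model properties it exercises, the change set splits the suite mechanically. The `TestCase` fields and `partition` helper below are assumptions made for the illustration, not the platform's actual V&V model:

```python
# Hypothetical sketch of the re-execution split. Test IDs and the
# "exercises" property names are invented for this example.
from dataclasses import dataclass

@dataclass(frozen=True)
class TestCase:
    test_id: str
    exercises: frozenset[str]  # model properties this test depends on

SUITE = [
    TestCase("TC-101", frozenset({"model_architecture"})),
    TestCase("TC-102", frozenset({"signal_preprocessing"})),
    TestCase("TC-103", frozenset({"training_dataset", "model_architecture"})),
]

def partition(suite: list[TestCase], changes: frozenset[str]):
    """Split the suite into tests invalidated by the change set and tests untouched by it."""
    rerun = [t.test_id for t in suite if t.exercises & changes]
    valid = [t.test_id for t in suite if not (t.exercises & changes)]
    return rerun, valid

rerun, valid = partition(SUITE, frozenset({"model_architecture", "training_dataset"}))
print("re-execute:", rerun)   # ['TC-101', 'TC-103']
print("still valid:", valid)  # ['TC-102']
```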
The safety classification analysis runs against the new architecture's failure modes—not the old one's. If the classification shifts from Class B to Class C, the platform surfaces the additional verification obligations before the team discovers them during an FDA review. The predetermined change control plan—the mechanism FDA's AI/ML framework offers for pre-authorizing updates to adaptive algorithms—is maintained as a living engineering specification, not a static document that drifts further from reality with each model iteration.
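A toy version of that check might look like the following. The harm-to-class mapping tracks IEC 62304's definitions (Class A: no injury possible; Class B: non-serious injury possible; Class C: death or serious injury possible), but the obligation list is a simplified placeholder and every name is invented:

```python
# Toy classification check, for illustration only. The obligations list is
# a simplified placeholder, not regulatory guidance.
HARM_TO_CLASS = {"none": "A", "non_serious_injury": "B", "serious_injury": "C"}
CLASS_RANK = {"A": 0, "B": 1, "C": 2}

EXTRA_OBLIGATIONS = {
    ("B", "C"): [
        "document detailed design for each software unit",
        "verify detailed design against unit requirements",
    ],
}

def reclassify(current: str, failure_mode_harms: list[str]):
    """Escalate the class if any newly evaluated failure mode demands it."""
    worst = max((HARM_TO_CLASS[h] for h in failure_mode_harms), key=CLASS_RANK.get)
    new = worst if CLASS_RANK[worst] > CLASS_RANK[current] else current
    return new, EXTRA_OBLIGATIONS.get((current, new), [])

new_class, todo = reclassify("B", ["non_serious_injury", "serious_injury"])
print(new_class, todo)  # 'C' plus the newly triggered obligations
```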
Cybersecurity threat modeling is updated in parallel. The new SOUP components are evaluated against the existing threat model. The Cybersecurity Bill of Materials reflects the current software stack, not last quarter's. The STRIDE analysis covers the actual attack surface, including the data pipeline changes that the new training dataset required.
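As a rough illustration of that drift check: compare what is actually deployed against what the threat model covers, and flag the gap. Component names and versions here are made up:

```python
# Rough illustration of the CBOM drift check. Components and versions are
# invented for the example; this is not MANKAIND's implementation.
deployed = {"onnxruntime": "1.18.0", "numpy": "2.0.1", "scipy": "1.14.0"}
threat_model_covers = {"onnxruntime": "1.17.0", "numpy": "2.0.1"}

for name, actual in deployed.items():
    modeled = threat_model_covers.get(name)
    if modeled != actual:
        status = f"modeled at {modeled}" if modeled else "no threat-model entry"
        print(f"{name} {actual}: {status} -> re-evaluate")
```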
The outcome
The documentation impact assessment that took two weeks now takes an afternoon—not because the analysis is skipped, but because MANKAIND already maintains the dependency graph between every engineering artifact. The model improvement ships with its documentation, not months ahead of it. Your regulatory engineer spends her time evaluating whether the performance improvement changes the clinical claims, not hunting for stale Word documents.
The IEC 62304 document set isn't reconstructed after the software is built. It emerges from the engineering decisions as they're made—architecture, requirements, design rationale, verification results—all captured in a structured format that produces submission-ready documentation without a separate documentation sprint. For SaMD teams shipping on fast development cycles, that's the difference between a submission that goes in on schedule and one that waits six months for the documentation to catch up.
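To sketch what "emerges from the engineering decisions" could mean mechanically, assume a structured record captured at decision time and rendered into documentation later; the `DesignDecision` schema below is an illustrative assumption, not MANKAIND's actual format:

```python
# Minimal sketch of documentation emerging from decisions: capture the
# decision as structured data when it is made, render the document later.
from dataclasses import dataclass, field

@dataclass
class DesignDecision:
    decision_id: str
    summary: str
    affects: list = field(default_factory=list)      # downstream artifact IDs
    verified_by: list = field(default_factory=list)  # test-case IDs

def render(d: DesignDecision) -> str:
    """Produce a submission-ready fragment from the structured record."""
    return (f"{d.decision_id}: {d.summary}\n"
            f"  Affected artifacts: {', '.join(d.affects)}\n"
            f"  Verification: {', '.join(d.verified_by)}\n")

print(render(DesignDecision(
    "DD-042", "Add two convolutional layers to improve subgroup sensitivity",
    affects=["iec62304_detailed_design"], verified_by=["TC-101", "TC-103"])))
```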
See how MANKAIND handles this
30-minute demo. Bring your hardest design controls question.