Use Cases
Software as a medical device (SaMD) and AI/ML development
Software as a medical device presents engineering challenges that hardware-first development teams rarely encounter and software-first teams are often unprepared for. The FDA's SaMD framework, IEC 62304, the IMDRF SaMD guidance, and—for AI/ML devices—the FDA's AI/ML action plan and its guidance on predetermined change control plans create a regulatory surface area that is genuinely complex. The challenge is not managing that complexity administratively. The challenge is making sound engineering decisions inside it.
MANKAIND is built for engineering teams who build software that makes clinical decisions. The platform understands the structural requirements of IEC 62304, the validation demands of FDA's AI/ML framework, and the cybersecurity expectations of NIST SP 800-30 and FDA's cybersecurity guidance—and it uses that understanding to support the engineering decisions those requirements demand, not just to generate the documents those requirements produce.
Software classification is an engineering decision, not a filing exercise
IEC 62304 software safety classification—A, B, or C—determines the rigor of the software development lifecycle required for each software item. Getting classification wrong has downstream consequences: Class C items require documented detailed design, unit verification, and complete traceability from requirements to code; Class A items do not. Teams that under-classify to reduce burden create verification gaps that surface during FDA review. Teams that over-classify create development overhead that slows the program without improving safety.
MANKAIND supports classification analysis by mapping software items to their failure-mode consequences at the system level. When an engineering team defines a software item's intended function and its potential failure modes, the platform helps evaluate whether a failure of that item could result in serious injury—the determining question for Class C assignment. That analysis is captured in the software development plan and the hazard analysis simultaneously, maintaining traceability between the two documents from the start.
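The determining logic is mechanical once the failure modes are honestly assessed: map the worst credible harm to a class, allowing external risk controls to justify a reduction. This is an illustrative sketch of the IEC 62304 decision, not MANKAIND's API; the function and field names are invented for the example:

```python
from enum import Enum

class SafetyClass(Enum):
    A = "A"  # no injury or damage to health is possible
    B = "B"  # non-serious injury is possible
    C = "C"  # death or serious injury is possible

# Severity labels for a software item's failure modes, worst-first ordering.
_SEVERITY_ORDER = {"none": 0, "non_serious": 1, "serious": 2}
_CLASS_FOR = {"none": SafetyClass.A,
              "non_serious": SafetyClass.B,
              "serious": SafetyClass.C}

def classify_software_item(failure_severities,
                           risk_controlled_to_non_serious=False):
    """Assign an IEC 62304 safety class from the worst credible harm
    a failure of the software item could cause."""
    worst = max(failure_severities,
                key=_SEVERITY_ORDER.__getitem__, default="none")
    if worst == "serious" and risk_controlled_to_non_serious:
        # External (non-software) risk controls can justify a lower class.
        worst = "non_serious"
    return _CLASS_FOR[worst]
```

Under-classification shows up here as an optimistic severity list, which is why the analysis belongs in the hazard analysis rather than in a spreadsheet maintained on the side.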
Algorithm validation: the evidence burden is an engineering problem
For AI/ML-based SaMD, the FDA's performance testing expectations go significantly beyond traditional software verification. A neural network that classifies retinal images for diabetic retinopathy must demonstrate performance across demographic subgroups, imaging equipment types, and clinical sites. The selection of the validation dataset, the statistical approach to confidence interval calculation, and the definition of clinically meaningful performance thresholds are all engineering decisions with direct submission consequences.
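To make the statistical side concrete: one common approach is to report each subgroup's sensitivity with a Wilson score interval and compare the interval's lower bound against the pre-specified clinical threshold. A minimal sketch; the Wilson interval is one defensible choice among several, and the function names and 0.80 threshold are invented for the example:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (centre - half, centre + half)

def subgroup_sensitivity(records, threshold=0.80):
    """Per-subgroup sensitivity with Wilson CIs, flagging any subgroup
    whose CI lower bound falls below the clinical threshold.

    records: list of (subgroup, detected: bool) for disease-positive cases.
    """
    by_group = {}
    for group, detected in records:
        hits, n = by_group.get(group, (0, 0))
        by_group[group] = (hits + int(detected), n + 1)
    report = {}
    for group, (hits, n) in by_group.items():
        lo, hi = wilson_ci(hits, n)
        report[group] = {"sensitivity": hits / n, "ci": (lo, hi),
                         "meets_threshold": lo >= threshold}
    return report
```

The engineering decision hiding in this sketch is the per-subgroup sample size: a subgroup with excellent point performance but few cases will still fail the lower-bound test, which is exactly the signal a reviewer wants to see addressed.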
The FDA's predetermined change control plan (PCCP) framework adds another layer: engineering teams building adaptive algorithms must prospectively define the scope of permissible modifications, the performance monitoring approach, and the retraining protocol before the device is cleared. This is not a regulatory exercise—it is a disciplined engineering specification of how the algorithm will evolve post-market. MANKAIND helps engineering teams develop PCCPs by mapping the algorithm's architecture, its training data dependencies, and its performance sensitivity to distribution shift, then generating the structured PCCP narrative that the FDA expects.
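The three elements a PCCP must pre-specify can be pictured as a typed specification against which any proposed change is checked. A minimal sketch under invented field names; this is neither a regulatory template nor MANKAIND's schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PCCPSpec:
    """Illustrative skeleton of what a PCCP pre-specifies: the scope of
    permissible modifications, the validation protocol, and the
    post-market performance monitoring approach."""
    permitted_modifications: tuple  # e.g. ("retrain_weights", "recalibrate_threshold")
    validation_protocol: str        # pre-specified test set and acceptance criteria
    monitoring_metric: str          # e.g. "sensitivity on a rolling 90-day window"
    monitoring_floor: float         # performance level that triggers review
    retraining_trigger: str         # condition under which retraining occurs

    def permits(self, modification):
        # A change outside the pre-specified scope falls outside the PCCP
        # and would require a new submission.
        return modification in self.permitted_modifications
```

The value of writing it this way is that scope questions become binary: either a proposed modification is inside the pre-specified envelope or it is not.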
SOUP management: the hidden traceability problem
Software of unknown provenance—third-party libraries, open-source components, commercial off-the-shelf software integrated into the device—is one of the most common sources of IEC 62304 findings during FDA review. The standard requires that SOUP items be identified, their functional and performance requirements documented, and the failure modes of each SOUP component evaluated in the hazard analysis.
For modern SaMD teams, the SOUP list is long. A machine learning inference stack alone may include a deep learning framework, a numerical computation library, an image preprocessing pipeline, and several model serving components—each with its own versioning, maintenance status, and known vulnerability history. MANKAIND maintains the SOUP register as a living document, tracks version changes through the development lifecycle, and flags when a SOUP component update requires a re-evaluation of the associated hazard analysis entries. Engineers are not chasing a static list. The list tracks the codebase.
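The flagging behavior described above amounts to linking each SOUP entry to the hazard analysis entries that depend on it, so that a version change cannot be recorded without queuing those entries for re-evaluation. A toy sketch with invented names, not MANKAIND's data model:

```python
from dataclasses import dataclass, field

@dataclass
class SoupItem:
    name: str
    version: str
    intended_use: str          # the documented functional requirement
    hazard_ids: list = field(default_factory=list)  # linked hazard entries

class SoupRegister:
    """Toy SOUP register: a version bump flags every linked hazard
    analysis entry for re-evaluation instead of silently updating."""
    def __init__(self):
        self.items = {}
        self.reevaluation_queue = []

    def add(self, item):
        self.items[item.name] = item

    def update_version(self, name, new_version):
        item = self.items[name]
        if new_version != item.version:
            item.version = new_version
            self.reevaluation_queue.extend(item.hazard_ids)
```

The point of the coupling is that the register cannot drift from the hazard analysis: the only way to record a dependency upgrade is through a path that surfaces the safety question.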
Cybersecurity: threat modeling is engineering work
FDA's cybersecurity guidance for medical devices requires a software bill of materials (SBOM), a threat model, and a software update mechanism. For networked SaMD—which is most SaMD—these requirements extend to the cloud infrastructure, the API surface, and the data pipeline. The engineering team that built the device knows its attack surface better than any documentation consultant. The challenge is capturing that knowledge in a form the FDA recognizes.
MANKAIND supports threat modeling using STRIDE and DREAD frameworks mapped against the device's architecture diagram. When the engineering team defines system components and data flows, the platform generates the threat model structure and helps the team evaluate likelihood and severity for each identified threat. The resulting threat model feeds directly into the security risk assessment that accompanies the 510(k) or PMA submission—and into the post-market monitoring plan that tracks vulnerability disclosures against the SBOM.
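Mechanically, STRIDE enumeration pairs every data flow in the architecture with the six threat categories, leaving likelihood and severity for the team to score during review. A minimal sketch of that enumeration, with names invented for the example:

```python
STRIDE = {
    "S": "Spoofing", "T": "Tampering", "R": "Repudiation",
    "I": "Information disclosure", "D": "Denial of service",
    "E": "Elevation of privilege",
}

def enumerate_threats(data_flows):
    """Emit one candidate threat per (data flow, STRIDE category).

    data_flows: list of (source, destination, description) tuples
    taken from the architecture diagram.
    """
    threats = []
    for src, dst, desc in data_flows:
        for category in STRIDE.values():
            threats.append({
                "flow": f"{src} -> {dst}",
                "category": category,
                "prompt": f"Could {category.lower()} affect '{desc}'?",
                "likelihood": None,  # scored by the team during review
                "severity": None,
            })
    return threats
```

Exhaustive enumeration is the point: the team dispositions every pairing, including the ones it rejects, and the rejections are evidence too.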
The software lifecycle document set—generated from engineering decisions
IEC 62304 conformance requires a specific document set: software development plan, software requirements specification, software architecture document, software detailed design, software unit implementation, software integration testing, software system testing, and software release procedures. For most teams, producing this document set is a retrospective exercise conducted after the software is built.
MANKAIND inverts that sequence. As the engineering team makes architectural decisions, defines requirements, and documents design rationale, the platform generates the IEC 62304 document structure in parallel. The software architecture document is not written after the architecture is finalized—it emerges from the architectural decisions as they are made. The verification records are generated from the test execution data, not reconstructed from engineering memory. The result is a document set that accurately reflects what was built, because it was produced by the engineers who built it, at the time they built it.
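At its core, generating verification records from test execution data is a join between requirements and executed tests, with unmatched requirements surfaced as the traceability gaps a reviewer looks for. A toy sketch of that join, with record shapes invented for the example:

```python
def verification_records(requirements, test_results):
    """Join requirements to executed tests; surface requirements with
    no passing test as traceability gaps.

    requirements: {req_id: requirement text}
    test_results: list of {"req_id", "test_id", "passed"} dicts
    """
    records, gaps = [], []
    for req_id, text in requirements.items():
        hits = [t for t in test_results if t["req_id"] == req_id]
        if any(t["passed"] for t in hits):
            records.append({"req_id": req_id, "requirement": text,
                            "tests": [t["test_id"] for t in hits]})
        else:
            gaps.append(req_id)
    return records, gaps
```

The gap list is the useful output: it is cheap to act on during development and expensive to discover during FDA review.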
For SaMD teams operating under the pressure of fast development cycles, that is not a regulatory luxury. It is the difference between a submission that goes in on schedule and one that requires a six-month documentation sprint before the team can file.
See how MANKAIND handles this
30-minute demo. Bring your hardest design controls question.