
Your 510(k) submission is next quarter. Your DHF has gaps.

Your team has been building a Class II wearable cardiac monitor for 14 months. The engineering is done—the device passed bench testing, the firmware is stable, the biocompatibility study is complete. Your VP of Engineering told the board the 510(k) goes in next quarter. Then your regulatory lead ran the traceability audit.

Thirty-four orphaned requirements. Design inputs that aren't linked to design outputs. Verification tests that reference requirements that were renumbered during the last specification revision. A risk analysis that covers the right failure modes but doesn't trace to the specific design controls that mitigate them. The predicate comparison table is drafted but the performance data columns are empty—the bench test reports exist, they're just not mapped to the predicate device's specifications.

The engineering is solid. The device works. But the distance between "the device works" and "the 510(k) submission is complete" just became a six-month project.

The pre-submission crunch

This is the most common failure mode in 510(k) programs: the engineering team builds the device on one timeline, and the documentation catches up on a different one. By the time you're assembling the submission, the people who made the design decisions are already working on the next project. The rationale for choosing a specific sensor configuration lives in an engineering notebook that nobody digitized. The substantial equivalence argument references performance characteristics that were tested—but the test reports use different terminology than the predicate device's labeling.

Your regulatory lead starts the reconstruction effort:

  • Two weeks mapping design inputs to design outputs, discovering that the specification restructuring in month eight broke half the traceability links
  • Three weeks rewriting verification protocols to explicitly reference the design input requirements they were actually intended to demonstrate—instead of the generic titles they were given during testing
  • One week rebuilding the predicate comparison table, pulling performance data from test reports and reformatting it against the predicate's 510(k) summary
  • Two weeks writing the risk analysis crosswalk—connecting each identified hazard to the specific design control, verification test, and labeling warning that addresses it

That's eight weeks of full-time regulatory work to turn completed engineering into a submittable package. Your VP of Engineering just told the board it's slipping to the following quarter.

The same submission, with MANKAIND

Run the scenario again. Same device, same 14 months of development, same 510(k) target. But from the start, engineering decisions flow into a structured record. When the hardware engineer selects the cardiac sensor, that design input traces to the design output specification, the verification protocol that will test it, and the risk analysis entry that evaluated its failure modes. When the specification gets restructured in month eight, the traceability links update because they're structural—not manual cross-references in a spreadsheet.

The predicate comparison builds progressively. MANKAIND maps your device's performance characteristics against the predicate's 510(k) summary as your bench testing generates data. Gaps surface while there's still time to run additional tests—not when you're assembling the submission under deadline pressure.

The substantial equivalence argument isn't written after the engineering is done. It's structured from the design inputs forward: same intended use, same technological characteristics where it matters, different technological characteristics where you can demonstrate equivalent performance. The testing program is designed to generate the specific evidence the substantial equivalence argument needs—not generic performance data that requires retroactive mapping.

The outcome

The traceability audit finds three gaps instead of 34—genuine engineering gaps that need resolution, not broken links from a specification restructuring that nobody re-indexed. The eight-week submission assembly phase doesn't exist because the submission package built itself from 14 months of structured engineering decisions. Your regulatory lead spends her time on the pre-submission meeting strategy and the clinical data strategy—not on reformatting test reports to match predicate comparison tables.

The 510(k) goes in next quarter. As promised. Because the engineering record was submission-ready from the day the first design input was documented.

See how MANKAIND handles this

30-minute demo. Bring your hardest design controls question.