FDA 510(k) submissions—from engineering to clearance
The FDA 510(k) is a premarket notification submission that demonstrates a new device is substantially equivalent to a legally marketed predicate device. It is the most common pathway to market for Class II medical devices in the United States, and it is fundamentally an engineering argument—supported by design documentation, performance testing, and risk analysis—that the new device performs at least as well as the predicate. Understanding what that argument requires of engineering teams is the starting point for building a submission that clears on the first review cycle.
Substantial equivalence: the engineering argument
Substantial equivalence is established by demonstrating that the new device has the same intended use as the predicate and either the same technological characteristics, or different technological characteristics that do not raise new questions of safety or effectiveness, together with data demonstrating that the device is at least as safe and effective as the predicate.
The intended use comparison is a regulatory determination, but the technological characteristics comparison is an engineering one. Engineering teams must articulate, in specific terms, which features of their device are identical to the predicate and which differ. For each difference, they must demonstrate—through performance data, engineering analysis, or both—that the difference does not raise new safety or effectiveness questions. That demonstration requires engineering rigor: a vague assertion that a new material is "equivalent" without supporting biocompatibility testing, material characterisation, or published literature is not a demonstration.
Predicate selection is a strategic engineering decision made early in development, but its implications run through the entire submission. The predicate defines the performance benchmarks the new device must meet or exceed, the testing standards that apply, and the risk profile against which the new device is compared. Engineering teams that select predicates without understanding the testing record behind them—or that select predicates for commercial rather than technical reasons—often discover late in development that their testing strategy does not support their substantial equivalence argument.
Performance testing strategy
The testing content of a 510(k) submission varies by device type, risk classification, and the nature of the differences from the predicate, but the general structure is consistent: bench performance testing, biocompatibility evaluation if the device contacts patients, software documentation if the device contains software, electrical safety and electromagnetic compatibility testing if applicable, and sterility and shelf life data if the device is distributed sterile.
Bench performance testing is the core of most 510(k) submissions. FDA guidance documents for specific device types define the performance characteristics that must be tested and often the test methods that must be used. Engineering teams should identify the applicable guidance documents before finalising the design—because guidance documents define performance requirements that will need to be met, and the design specification should be written to satisfy them. Discovering after design freeze that a guidance document requirement was never captured in the design inputs is recoverable, but expensive.
Biocompatibility evaluation follows ISO 10993. The biological evaluation plan is driven by the nature of patient contact: contact type (surface contact, externally communicating, implant), contact duration (limited, prolonged, permanent), and the materials the device is made from. Engineering teams are responsible for the material characterisation that feeds biocompatibility—knowing what the device is made of at a sufficient level of detail to select the right endpoints. Material changes made after biocompatibility testing is complete typically require re-evaluation.
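The endpoint selection logic ISO 10993-1 describes—contact type and duration drive which biological endpoints must be evaluated—can be sketched as a simple lookup. This is an illustrative subset only: the category names and endpoint sets below are assumptions for the sketch, and the authoritative endpoint table is in ISO 10993-1 itself, not in this code.

```python
# Illustrative sketch of ISO 10993-1-style endpoint selection.
# Category names and endpoint sets are simplified assumptions;
# consult ISO 10993-1 for the authoritative endpoint table.

# Baseline endpoints that apply to essentially all patient-contacting devices.
BASELINE = {"cytotoxicity", "sensitization", "irritation"}

# Additional endpoints associated with longer or more invasive contact
# (illustrative subset, keyed by (contact_type, contact_duration)).
ADDITIONAL = {
    ("externally_communicating", "prolonged"): {
        "acute_systemic_toxicity", "subacute_toxicity",
    },
    ("implant", "permanent"): {
        "acute_systemic_toxicity", "subacute_toxicity",
        "genotoxicity", "implantation_effects", "chronic_toxicity",
    },
}

def endpoints(contact_type: str, duration: str) -> set:
    """Return the biological endpoints to evaluate for a device category."""
    return BASELINE | ADDITIONAL.get((contact_type, duration), set())

plan = endpoints("implant", "permanent")
assert "genotoxicity" in plan and "cytotoxicity" in plan
```

The point of the sketch is the dependency the paragraph describes: change the material or the contact category after testing, and the required endpoint set changes with it—which is why material changes typically trigger re-evaluation.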
Software documentation
If the device contains software, the 510(k) must include a Software Documentation section that follows FDA's guidance on the content of premarket submissions for device software functions. That guidance defines two documentation levels—Basic and Enhanced—determined by the risk the software presents: Enhanced documentation applies when, among other criteria, a failure or flaw of a device software function could present a probable risk of death or serious injury to a patient, user, or others; Basic documentation applies otherwise.
The Software Documentation section requires a software description, a device hazard analysis covering software, a software requirements specification, an architecture design chart, a software design specification for Enhanced-level devices, traceability analysis linking requirements to tests, software development and maintenance practices, validation and verification testing, unresolved anomalies, and revision history. Engineering teams that structure their software development process against IEC 62304 from the start will find that most of this documentation already exists as a natural output of development, rather than needing to be created for submission.
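The traceability analysis in that list is, mechanically, a cross-reference between requirements and the tests that verify them. A minimal sketch—the record format and identifiers here are hypothetical, not an FDA-specified schema; real teams export these records from their requirements management and test tools:

```python
# Hedged sketch of a requirements-to-tests traceability check.
# Identifiers and record formats are hypothetical examples.

requirements = ["SRS-001", "SRS-002", "SRS-003"]

# Each test case lists the requirements it verifies.
tests = {
    "TC-10": ["SRS-001"],
    "TC-11": ["SRS-001", "SRS-002"],
}

def untraced(requirements, tests):
    """Return requirements with no verifying test -- gaps a reviewer will find."""
    covered = {req for reqs in tests.values() for req in reqs}
    return [r for r in requirements if r not in covered]

print(untraced(requirements, tests))  # -> ['SRS-003']
```

A requirement with no verifying test is exactly the kind of internal inconsistency that generates an additional information request, so the check is worth running long before submission.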
Labeling
Labeling for a 510(k) submission includes the device label, Instructions for Use (IFU), and any other labeling that accompanies the device. FDA reviews labeling for consistency with the intended use statement in the submission, clarity of indications and contraindications, and adequacy of instructions for safe use.
Labeling is an engineering output, not a marketing output. The IFU must accurately reflect how the device was validated—the user population tested in usability validation, the clinical conditions under which the device was evaluated, the procedures and precautions that testing confirmed are necessary for safe use. When labeling is developed independently of the validation program, it often contains claims that testing does not support or fails to reflect limitations that testing identified.
The submission package: what engineering teams actually produce
A 510(k) submission package consists of: the cover sheet and cover letter, device description, substantial equivalence comparison, proposed labeling, biocompatibility evaluation, sterilisation information if applicable, software documentation, performance testing results, and any other technical information requested by applicable guidance. The submission is organised into sections that follow FDA's electronic submission template, and each section references engineering documentation that the team produced during development.
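The pre-submission completeness check reduces to a checklist comparison against that section list. A sketch, with section names paraphrased from the paragraph above—FDA's electronic submission template defines the authoritative structure, not this list:

```python
# Hedged sketch: verify a draft 510(k) package contains every expected
# section. Section names paraphrase the article's list; FDA's electronic
# submission template is the authoritative structure.

REQUIRED_SECTIONS = [
    "cover_letter", "device_description", "substantial_equivalence",
    "proposed_labeling", "biocompatibility", "software_documentation",
    "performance_testing",
]

def missing_sections(package: dict) -> list:
    """Return required sections that are absent or empty in the draft."""
    return [s for s in REQUIRED_SECTIONS if not package.get(s)]

draft = {
    "cover_letter": "...", "device_description": "...",
    "substantial_equivalence": "...", "performance_testing": "...",
}
print(missing_sections(draft))
# -> ['proposed_labeling', 'biocompatibility', 'software_documentation']
```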
The submission does not ask engineering teams to create new documentation—it asks them to compile and present the documentation they generated during the design process. Teams with a well-maintained design history file (DHF), a current risk management file, and a complete testing record find that 510(k) preparation is a compilation exercise. Teams without those records find that it is a reconstruction exercise—slower, more expensive, and more prone to gaps that generate FDA additional information requests.
How MANKAIND compiles engineering outputs into submission-ready packages
MANKAIND structures engineering documentation—design inputs, verification protocols, test results, risk analysis, software lifecycle records—against the 510(k) submission template throughout the development program. When a performance test is completed, MANKAIND links the result to the design input it satisfies and to the substantial equivalence comparison that it supports. When the team is ready to submit, MANKAIND generates the submission structure from the engineering record—not from a separate documentation effort.
For engineering teams targeting 510(k) clearance, the practical value of that approach is in cycle time. An FDA reviewer who receives a 510(k) with complete, internally consistent documentation—where the substantial equivalence argument, the testing record, and the risk file all tell the same story—has what they need to reach a determination. A submission with gaps, inconsistencies, or documentation that does not reflect the design as tested generates an Additional Information request that adds months to the clearance timeline. MANKAIND is built to eliminate that gap between the engineering work and the submission record.
See how MANKAIND handles this
30-minute demo. Bring your hardest design controls question.