Requirements that fail FDA review — the five mistakes I see every quarter
Half the Design History Files I've read this year contain at least one requirement phrased as "the device shall support data export." That isn't a requirement. It's a gesture toward one.
And that's upstream of where most 510(k) AI requests actually originate. Not in V&V. Not in the risk file. In the SRS, where a sentence that can't be verified was approved three years earlier and then propagated through every downstream artefact.
Ask any reviewer where submissions slow down. They won't say verification. They'll say the requirement-to-design-output trace. The trace breaks because the requirements were never tight enough to trace cleanly in the first place.
What reviewers actually apply under 21 CFR 820.30
The regulation doesn't prescribe wording. It prescribes process — design inputs, design outputs, design review, V&V, transfer. What hardened into review practice over the last decade is a set of attributes your requirements have to carry. Miss any one and the AI request arrives.
- Verifiable. Testable by inspection, analysis, demonstration, or test. Pass or fail. No interpretation allowed.
- Unambiguous. One reading. "Appropriate," "sufficient," "user-friendly," "fast," and "shall support" are the words reviewers scan for first. They find them immediately.
- Complete. Nominal behaviour plus failure conditions plus edge cases. A requirement that only describes what happens when things go right is half a requirement.
- Consistent. Doesn't contradict another requirement. Most contradictions come from requirements written by different people at different points in the program without a shared source of truth.
- Traceable. Has a source. User need, risk control, regulatory requirement, performance standard. No source, no justification for the requirement existing.
- Singular. One behaviour, one trigger, one response. The word "and" in the middle of a requirement is almost always a split waiting to happen.
EARS — what it actually does
EARS (Easy Approach to Requirements Syntax) came out of Rolls-Royce work in the late 2000s. Five templates, each forcing you to name a subject, a response, and, wherever the template calls for one, an explicit trigger and condition. It isn't a style guide. It's a constraint on what sentences can leave the SRS.
The five, with medical device examples:
- Ubiquitous (always active): "The device shall deliver insulin at the programmed basal rate with an accuracy of ±5% under normal operating conditions."
- Event-driven (triggered by an event): "When reservoir volume falls below 20 units, the device shall activate the low-reservoir alert within 10 seconds."
- State-driven (active while in a state): "While the device is in suspend mode, the device shall not deliver insulin regardless of the programmed rate."
- Unwanted behaviour (response to a hazardous condition): "If communication with the glucose sensor is lost for more than 60 seconds, the device shall cease closed-loop operation and notify the user."
- Optional feature (conditional on configuration): "Where remote monitoring is enabled, the device shall transmit session data to the connected application within 30 seconds of session end."
The templates make you name the trigger. Ambiguity lives in requirements that leave the trigger unstated. An engineer reading "the device shall notify the user" has to infer when. An engineer reading "when reservoir volume falls below 20 units, the device shall notify the user within 10 seconds" has nothing to infer.
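The trigger-naming discipline is mechanical enough to check in code. Here's a minimal sketch of a template classifier; the keyword-to-template mapping follows the five templates above, but `classify` and `EARS_PATTERNS` are illustrative names, not any published tool.

```python
import re

# Sketch of an EARS template classifier. The leading-keyword mapping is
# the standard EARS convention; everything else here is illustrative.
EARS_PATTERNS = [
    ("event-driven",       re.compile(r"^When\b", re.I)),
    ("state-driven",       re.compile(r"^While\b", re.I)),
    ("unwanted-behaviour", re.compile(r"^If\b", re.I)),
    ("optional-feature",   re.compile(r"^Where\b", re.I)),
]

def classify(requirement: str) -> str:
    """Return the EARS template a requirement sentence matches."""
    for name, pattern in EARS_PATTERNS:
        if pattern.match(requirement.strip()):
            return name
    # No leading keyword: either ubiquitous ("The <subject> shall ...")
    # or not an EARS sentence at all.
    if re.search(r"\bshall\b", requirement):
        return "ubiquitous"
    return "non-EARS"

print(classify("When reservoir volume falls below 20 units, "
               "the device shall activate the low-reservoir alert."))
# event-driven
```

A classifier this crude won't catch a vague response clause, but it does surface every requirement that never states its trigger.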
Teams that adopt EARS seriously don't do it because QA insisted. They do it because they got tired of losing a week per review cycle arguing about what a sentence was supposed to mean.
The five mistakes I see on every requirements audit
1. Implementation disguised as requirements. "The device shall support data export." That's a capability, not a requirement. What format? What data? What triggers the export? How fast must it complete? Rewrite: "When the user selects Export from the session menu, the device shall write session data to the selected storage location in CSV format within 30 seconds."
2. Compound requirements. "The device shall alert the user when battery level is below 10% and shall enter low-power mode when battery level is below 5%." Two triggers, two responses, potentially two safety classes. Split them.
3. Unquantified performance. "The alarm shall sound promptly." Promptly means 500 ms to a firmware engineer and 10 seconds to a busy clinical user. "The alarm shall activate within 2 seconds of the triggering condition, measured from sensor event timestamp to audible output at ≥60 dB(A) at 1 m" is verifiable. Every performance requirement needs a number, a tolerance, and a measurement condition. No exceptions.
4. No failure mode. Teams write the happy path. IEC 62304 Clause 5.2.2 requires that safety-related software requirements address what the system does when it fails. For every safety-critical functional requirement, the next line in the SRS should be the safe-state behaviour. If it isn't there, you'll be writing it into the AI response in nine months.
5. Sourceless requirements. Requirements accumulate. New ones get added. Old ones never get deleted. After two years, nobody can say why half of them exist. If you can't point at a user need, a risk control, or a specific standard clause as the origin, the requirement has no anchor — and it's the first thing that gets modified without justification when someone needs the device to pass a test.
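Most of these failure modes are detectable by a dumb script long before a reviewer sees the SRS. A minimal sketch, assuming each requirement is a plain dict; the field names (`text`, `source`, `safety_critical`, `safe_state_ref`) are hypothetical, not a real tool's schema.

```python
import re

# Illustrative audit for the five failure modes above. Field names are
# assumptions; swap in whatever your requirements tool actually stores.
VAGUE = re.compile(
    r"\b(appropriate|sufficient|user-friendly|fast|promptly|support)\b", re.I
)

def audit(req: dict) -> list:
    findings = []
    text = req["text"]
    if VAGUE.search(text):
        findings.append("capability/vague language")
    if re.search(r"\bshall\b.*\band\b.*\bshall\b", text, re.I):
        findings.append("compound requirement")
    if not re.search(r"\d", text):  # no number anywhere: nothing to verify
        findings.append("unquantified performance")
    if req.get("safety_critical") and not req.get("safe_state_ref"):
        findings.append("missing failure mode")
    if not req.get("source"):
        findings.append("sourceless requirement")
    return findings

print(audit({"text": "The device shall support data export.", "source": None}))
# ['capability/vague language', 'unquantified performance', 'sourceless requirement']
```

The regexes are deliberately blunt. The point isn't precision; it's that three of the five findings on the worst requirement in this post fall out of twenty lines of code.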
Where the requirement-to-design-output trace actually breaks
The trace looks clean on paper. Requirement → design output → verification test. Three columns. Straightforward.
In the wild, it breaks in two places.
First, at the design output itself. Most requirements don't map to a single design element. A software requirement for alarm behaviour spans the alarm management service, the notification queue, the UI component, and the logging service. Four design outputs, one requirement. When the matrix shows one design-output reference per requirement, that isn't completeness. That's oversimplification, and it's where reviewers start pulling threads.
Second, at change. A requirement gets revised in sprint 14. The design element was built in sprint 6. The verification test was written in sprint 10. The revision should trigger a cascade — design review, test re-run, risk file re-evaluation. In most programs the cascade happens partially. Requirement gets the new revision. Design output gets a cursory re-check. Test never gets re-run because nobody flagged it.
That silent decoupling is what kills submission timelines. Not the requirement writing. The drift between requirement and its downstream implementations every time something changes.
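The cascade itself is easy to sketch. Assuming a toy in-memory trace (all identifiers hypothetical), revising a requirement should mechanically flag every downstream artefact as stale rather than relying on someone remembering to do it:

```python
# A toy in-memory trace: one requirement, its design outputs, its tests.
# All identifiers are hypothetical.
trace = {
    "REQ-042": {
        "rev": 3,
        "design_outputs": ["alarm_service", "notification_queue"],
        "tests": ["VT-118", "VT-119"],
    },
}
stale = {"design_outputs": set(), "tests": set()}

def revise(req_id: str) -> None:
    """Bump the revision and mark downstream artefacts for re-review."""
    entry = trace[req_id]
    entry["rev"] += 1
    stale["design_outputs"].update(entry["design_outputs"])
    stale["tests"].update(entry["tests"])

revise("REQ-042")
assert "VT-118" in stale["tests"]                  # test must be re-run
assert "alarm_service" in stale["design_outputs"]  # design review re-opens
```

The partial cascade described above is exactly what happens when this flagging step is a human habit instead of a side effect of the revision.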
Risk connection from line one
ISO 14971 identifies hazards. Risk controls drop out of the analysis. Those risk controls become requirements. The teams that write defensible requirements make this connection at creation, not during submission prep.
What that looks like: when a requirement is written, it carries a reference to the risk control it implements, the hazard that generated that control, and the severity of harm it prevents. If it doesn't implement a risk control, it references a user need or a specific clause in a conformance standard. No requirement floats free.
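That anchoring rule is checkable at creation time. A minimal sketch, with hypothetical field names standing in for whatever your requirements record actually carries:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Requirement:
    req_id: str
    text: str
    risk_control: Optional[str] = None     # e.g. an ID from the ISO 14971 file
    user_need: Optional[str] = None
    standard_clause: Optional[str] = None  # e.g. a conformance-standard clause

def is_anchored(req: Requirement) -> bool:
    """True if the requirement cites at least one legitimate origin."""
    return any([req.risk_control, req.user_need, req.standard_clause])

floating = Requirement("REQ-101", "The device shall support data export.")
print(is_anchored(floating))  # False
```

A gate like this at the point of authoring is what "no requirement floats free" looks like in practice: the sentence can't be saved until one of the three origin fields is filled.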
The payoff shows up in review. When the reviewer walks a thread — hazard → risk control → requirement → design output → verification test — the chain is intact. They don't find missing links. They find coherent engineering. That's the outcome a properly structured requirements exercise produces, and it's also the one most teams don't produce because the linkage was never there to start with.
Traceability as structure, not retrofit
Treating traceability as something you assemble after the fact is the single most expensive habit a medtech team can have. It works exactly once — the first submission, while memory is fresh. By the time a design change hits, the matrix is already stale, and the reconstruction cost compounds from there.
The alternative is structural. Each requirement carries forward references to the design outputs implementing it and backward references to the design inputs justifying it. Tests reference the requirements they verify at the moment they're written. The traceability matrix is a view into a live record, not a document assembled at submission.
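Treating the matrix as a view is a one-function idea. A sketch, assuming the live record is a plain mapping of requirement IDs to their forward references:

```python
# The traceability matrix generated as a view over live references,
# not maintained as a separate document. Structures are illustrative.
requirements = {
    "REQ-042": {"design_outputs": ["alarm_service"], "tests": ["VT-118"]},
    "REQ-043": {"design_outputs": ["ui_component"], "tests": []},
}

def matrix_rows(reqs: dict) -> list:
    """Flatten the live record into (requirement, design output, test) rows."""
    rows = []
    for req_id, links in sorted(reqs.items()):
        outputs = links["design_outputs"] or ["<missing>"]
        tests = links["tests"] or ["<missing>"]
        for out in outputs:
            for test in tests:
                rows.append((req_id, out, test))
    return rows

for row in matrix_rows(requirements):
    print(row)
# ('REQ-042', 'alarm_service', 'VT-118')
# ('REQ-043', 'ui_component', '<missing>')
```

Because the matrix is regenerated from the record, a missing link shows up as `<missing>` the moment it exists, not at submission assembly.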
MANKAIND is built this way because the retrofit approach compounds debt that eventually has to be paid in review. Change a requirement and the platform surfaces the affected design outputs, the tests that need re-running, the risk controls that need re-evaluation. The engineering team still makes every decision. The platform just ensures they see what's affected before the AI request arrives.
The method, condensed
- Source every requirement. User need, risk control, regulatory requirement, or standard clause. No exceptions.
- Write in EARS. One behaviour per requirement. Explicit trigger, explicit response, explicit timing.
- Validate against the six attributes. Verifiable, unambiguous, complete, consistent, traceable, singular.
- Link forward at creation. Design output reference and test reference in the same commit — not a quarter later.
- Audit for the five failure modes. Capability language, compound statements, unquantified performance, missing failure modes, sourceless requirements.
See how MANKAIND handles this
30-minute demo. Bring your hardest design controls question.