
FDA Section 524B — RTA is the new normal, and your SBOM is the first thing that'll get you sent home

Section 524B has been in force for 30 months now, and the thing that still surprises sponsors is how FDA handles cybersecurity gaps. They don't write a deficiency letter. They Refuse to Accept the submission. The review clock doesn't start. You get the package back with a note, and you lose the review cycle.

Talked with a friend at a mid-size device company last week — third RTA on the same 510(k) for cybersecurity content. Not different findings. Same finding each time. Their SBOM met NTIA minimum elements but couldn't answer the question CDRH was actually asking, which was "if someone discloses a CVE in library X tomorrow, can you tell us within 72 hours which of your fielded devices contain the affected version?"

That's the question the 2023 Final Guidance is really about. Everything else is mechanics.

What actually got you RTA'd in the first four quarters

Rough pattern I've seen across ~40 submissions since October 2023, drawn from sponsors and regulatory consultants who work this space: roughly 8% of cyber device submissions got RTA'd on cybersecurity content. The findings concentrate on three issues.

First, SBOM depth that hits the NTIA minimum but can't be operationalised. Flat dependency list. No transitive relationships captured. No version pinning tight enough to CVE-match. Technically compliant, functionally useless for the post-market monitoring the statute requires. Auditors see this and they know the monitoring plan is fiction.

Second, post-market vulnerability management plans written as aspirations. "We will monitor CVEs and patch as appropriate" is not a plan. It's a sentence. Plans need defined responsibilities, monitoring sources, triage thresholds with specific CVSS levels, patch deployment pathways, and customer communication procedures. If CDRH can't read the plan and dry-run a hypothetical vulnerability through it from detection to disclosure, the plan fails.

Third, threat models that enumerate STRIDE generically against the device block diagram. S, T, R, I, D, E — check, check, check — but no device-specific attack path analysis, no adversary capability framing, no mitigations traced to specific security controls. Reviewers read these as compliance exercises rather than engineering analyses, and they reject them as such.

SBOM — what "usable" means in practice

Here's the test: a CVE drops Tuesday afternoon. Library X, version range 2.1.0 through 2.3.4. Your devices shipped with various versions. Can your on-call engineer answer three questions by Thursday?

One: which of our devices contain this library, directly or transitively.

Two: what specific version is deployed on each.

Three: where in the device architecture does that library sit — can the vulnerability be reached by the device's actual attack surface?

If the answer is yes, your SBOM works. If the answer requires someone to grep through build logs and manually reconcile package manifests, it doesn't. Meeting NTIA minimum elements is necessary but not sufficient. You need transitive dependencies captured (CycloneDX 1.5 handles this cleanly; SPDX 2.3 needs explicit Relationship statements), specific version pinning at build time (not semver ranges), and cryptographic hashes of deployed artefacts so you can verify the SBOM matches the binary that shipped.

Without the last piece, your SBOM diverges from reality within weeks of submission.
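The Tuesday-CVE test can be sketched against a CycloneDX-style SBOM. This is a minimal illustration, not a real tool: the component names, versions, and dependency graph are hypothetical, though the field names follow CycloneDX's `components` / `dependencies` layout.

```python
import json

# Hypothetical CycloneDX-style SBOM snippet: one app depending
# transitively on "libx" through "liby".
SBOM = json.loads("""{
  "components": [
    {"bom-ref": "app@1.0.0",  "name": "app",  "version": "1.0.0"},
    {"bom-ref": "libx@2.2.1", "name": "libx", "version": "2.2.1"},
    {"bom-ref": "liby@0.9.0", "name": "liby", "version": "0.9.0"}
  ],
  "dependencies": [
    {"ref": "app@1.0.0",  "dependsOn": ["liby@0.9.0"]},
    {"ref": "liby@0.9.0", "dependsOn": ["libx@2.2.1"]}
  ]
}""")

def version_tuple(v):
    return tuple(int(p) for p in v.split("."))

def affected_components(sbom, name, lo, hi):
    """Question one/two: components matching the vulnerable name
    whose pinned version falls inside the CVE's range [lo, hi]."""
    return [c["bom-ref"] for c in sbom["components"]
            if c["name"] == name
            and version_tuple(lo) <= version_tuple(c["version"]) <= version_tuple(hi)]

def reaches(sbom, root, target):
    """Does `root` depend on `target`, directly or transitively?"""
    edges = {d["ref"]: d.get("dependsOn", []) for d in sbom["dependencies"]}
    stack, seen = [root], set()
    while stack:
        ref = stack.pop()
        if ref == target:
            return True
        if ref in seen:
            continue
        seen.add(ref)
        stack.extend(edges.get(ref, []))
    return False

hits = affected_components(SBOM, "libx", "2.1.0", "2.3.4")
print(hits)                                 # ['libx@2.2.1']
print(reaches(SBOM, "app@1.0.0", hits[0]))  # True: affected transitively
```

Note that question three (can the vulnerability be reached from the device's actual attack surface) needs more than the SBOM; that's what the trust-boundary annotations in the architecture views are for.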

Threat modeling — when STRIDE stops being enough

STRIDE is fine for most Class II devices. FDA accepts it as a threat classification framework. It starts failing in submissions where "we assumed an opportunistic attacker with commodity tooling" isn't a credible adversary model.

Pacemaker with cloud telemetry. Insulin pump with closed-loop control. Surgical robotics with remote assist. These are devices where a targeted threat actor is a realistic concern — not paranoid, but the kind of concern that informs architecture decisions. For devices in that category, CDRH has been asking for adversary capability justification since roughly mid-2024. What attacker profile are you defending against? What's their realistic capability? What motivations do they have against this device?

If you can't answer those questions, your threat model is incomplete. The framework you want is PASTA (Process for Attack Simulation and Threat Analysis) or attack tree analysis per ISO/IEC 15408 Common Criteria methodology. Anchor the threat model in adversary capability, derive attack paths from that capability, and you end up with a threat model that explains why your security controls are sized the way they are. STRIDE alone gives you a list. PASTA gives you a risk-calibrated defence.
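As a rough illustration of the attack-tree-plus-adversary-capability idea, here is a toy tree in Python. The node shapes, capability tiers, and attack steps are invented for the example, not drawn from the guidance or any real device.

```python
from dataclasses import dataclass, field

# Toy capability tiers (an assumption for this sketch):
# 1 = opportunistic, 2 = skilled, 3 = well-resourced targeted actor.
@dataclass
class Node:
    name: str
    gate: str = "LEAF"   # "AND", "OR", or "LEAF"
    capability: int = 0  # for leaves: minimum attacker tier required
    children: list = field(default_factory=list)

def feasible(node, attacker_tier):
    """Can an attacker of this tier realise the node's goal?"""
    if node.gate == "LEAF":
        return attacker_tier >= node.capability
    results = [feasible(c, attacker_tier) for c in node.children]
    return all(results) if node.gate == "AND" else any(results)

# Hypothetical root goal with two attack paths.
tree = Node("alter insulin dose", "OR", children=[
    Node("exploit cloud API", "AND", children=[
        Node("steal clinician token", capability=2),
        Node("bypass dose-bounds check", capability=3),
    ]),
    Node("local BLE replay", capability=2),
])

for tier in (1, 2, 3):
    print(tier, feasible(tree, tier))  # 1 False / 2 True / 3 True
```

The output is the point: it tells you which adversary tier you must design against, and the BLE replay leaf (the cheapest feasible path) is where a control would shrink the attack surface most.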

The security vs safety risk management distinction

ISO 14971 governs safety risk. AAMI TIR57 governs security risk. They share mechanics. They differ in substance.

Safety asks what can go wrong. Security asks who might intentionally make it go wrong.

The consequence nobody's quite ready for: these two risk analyses produce documents that have to reconcile. An individually acceptable security risk (exploit probability low, exploit impact moderate) can combine with other acceptable security risks to produce an unacceptable safety risk when you trace the causal chain through to patient harm. Reviewers look for evidence you ran that reconciliation. If your security risk file and safety risk file don't agree about which vulnerabilities matter and what their ultimate safety impact is, that's a deficiency finding waiting to be written up.

The manufacturers who handle this cleanly maintain one integrated risk register with security and safety views, cross-referenced at every level. The ones who maintain two separate files almost always find they disagree when a reviewer crosses them.
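The cross-referencing can be made mechanically checkable. A minimal sketch, with hypothetical record shapes and IDs: flag any security risk that traces to patient harm but has no linked ISO 14971 hazard, plus any dangling reference.

```python
# Hypothetical integrated-register entries. "harm_path" marks security
# risks whose causal chain reaches patient harm; those must cross-reference
# a safety hazard entry.
security_risks = [
    {"id": "SEC-01", "harm_path": True,  "safety_refs": ["HAZ-07"]},
    {"id": "SEC-02", "harm_path": True,  "safety_refs": []},   # reconciliation gap
    {"id": "SEC-03", "harm_path": False, "safety_refs": []},   # fine: no harm path
]
safety_hazards = [
    {"id": "HAZ-07", "security_refs": ["SEC-01"]},
    {"id": "HAZ-12", "security_refs": []},  # purely safety-driven, fine
]

def reconciliation_gaps(sec, saf):
    """Return human-readable findings a reviewer would otherwise write up."""
    known_haz = {h["id"] for h in saf}
    gaps = []
    for r in sec:
        if r["harm_path"] and not r["safety_refs"]:
            gaps.append(f"{r['id']}: traces to harm but has no safety hazard link")
        for ref in r["safety_refs"]:
            if ref not in known_haz:
                gaps.append(f"{r['id']}: dangling reference {ref}")
    return gaps

print(reconciliation_gaps(security_risks, safety_hazards))
```

Running this on every change to either view is what "one integrated risk register" buys you; two separate files have no place to hang this check.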

Architecture views — the three the guidance actually wants

2023 Final Guidance Section VI specifies three architecture views. Most submissions give you one — the global system view. That's the top of the funnel. The other two are where threats actually get analysed.

Multi-patient context view. How does the device behave across multiple patients? Shared infrastructure? Data segregation? Authentication boundaries? This is the view that exposes privilege escalation and lateral movement risks. It's the one most commonly omitted from submissions.

Data flow view. Where does data cross trust boundaries internally? Trust boundaries are where threats concentrate, and a data flow view that doesn't mark them is structurally incomplete. An auditor reading the architecture without trust boundary annotations can't evaluate your threat model — which means they'll either defer, or reject.

Post-market is the continuing burden

Once you clear the submission, Section 524B keeps going. Monitoring obligation. Disclosure obligation. Patch obligation. Customer communication. This isn't a filing activity — it's an operational capability that has to exist and keep running.

Operational components that an auditable post-market cyber program actually needs: automated CVE monitoring against the SBOM, vulnerability triage with defined severity thresholds (CVSS v3.1 or v4.0 base score plus exploitability assessment), a patch deployment infrastructure that can reach fielded devices, coordinated vulnerability disclosure per ISO/IEC 29147:2018, vulnerability handling per ISO/IEC 30111:2019, and customer notification procedures.

That last one — CVD — is what most manufacturers under-build. Publish a security reporter channel. Acknowledge reports within 7 days. Triage within 30. Remediate within 30 to 180 days depending on severity. Coordinate public disclosure. If you don't publish a channel or don't acknowledge promptly, the security research community stops giving you the coordinated disclosure courtesy and starts dropping 0-days in public. That's not a compliance problem. It's a brand problem.
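Those timelines are only auditable if the deadlines are computed, not remembered. A sketch using the illustrative windows from the paragraph above; the severity-to-days mapping is an assumption for the example, not a regulatory number.

```python
from datetime import date, timedelta

# Assumed remediation windows within the 30-180 day range by severity.
REMEDIATION_DAYS = {"critical": 30, "high": 60, "medium": 120, "low": 180}

def cvd_deadlines(reported: date, severity: str) -> dict:
    """Acknowledge within 7 days, triage within 30, remediate per severity."""
    return {
        "acknowledge_by": reported + timedelta(days=7),
        "triage_by":      reported + timedelta(days=30),
        "remediate_by":   reported + timedelta(days=REMEDIATION_DAYS[severity]),
    }

d = cvd_deadlines(date(2025, 3, 4), "high")
print(d["acknowledge_by"])  # 2025-03-11
print(d["remediate_by"])    # 2025-05-03
```

Wire the output into whatever tracks your quality records, so a missed acknowledgement shows up as an overdue item rather than as a researcher's public write-up.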

Quality system integration — the inspection pattern that's emerging

Section IV of the Final Guidance addresses QSR integration. This is where routine inspections have started picking up cyber findings since mid-2024. Cybersecurity activities that were performed but not documented as design controls are easy to remediate. Cybersecurity activities documented but not connected to the rest of the design record are structurally harder and usually require platform changes.

The direction everything is going is that cybersecurity has to live inside the design control process. Security requirements flow from threat modeling, into software requirements, into design outputs, into verification. The SBOM is a design output. Post-market vulnerability triage feeds into CAPA. These linkages have to be first-class in the engineering record — not separate lanes that touch the record at submission time.
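One way to keep those linkages first-class is to make broken traces mechanically detectable. A sketch with hypothetical record shapes: every security requirement should carry a source threat, at least one design output, and at least one verification record.

```python
# Hypothetical traceability records: threat -> requirement -> output -> verification.
requirements = [
    {"id": "SR-01", "threat": "T-03", "outputs": ["DO-11"], "verified_by": ["VER-21"]},
    {"id": "SR-02", "threat": "T-05", "outputs": ["DO-14"], "verified_by": []},  # gap
]

def broken_traces(reqs):
    """Requirements missing a threat source, a design output, or verification."""
    return [r["id"] for r in reqs
            if not (r["threat"] and r["outputs"] and r["verified_by"])]

print(broken_traces(requirements))  # ['SR-02']
```

Run as a gate in CI against the engineering record and the submission-time scramble to reconstruct linkages disappears.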

How MANKAIND handles the cybersecurity load

The SBOM generates from dependency data captured at build time. Transitive relationships, version pinning, hash attestation — all there, all queryable. Threat models link to the architecture views they analyse and the risk management file entries they produce. Post-market vulnerability monitoring runs against the SBOM continuously; matches trigger triage workflows that inherit context from the original threat model. When a vulnerability is disclosed, the platform tells you which devices are affected, what the existing mitigation posture is, and what customer communication is appropriate.

The submission plan and the operating reality are the same plan. That's the distinction that matters under 524B, and it's the distinction that fragmented documentation environments can't produce.

Frequently asked questions about FDA Section 524B cybersecurity

What is Section 524B of the FD&C Act?

Section 524B was added to the Federal Food, Drug, and Cosmetic Act by Section 3305 of the Consolidated Appropriations Act, 2023. It requires sponsors of cyber devices to include cybersecurity information in premarket submissions — including a plan to monitor and address post-market vulnerabilities, processes to provide updates and patches, and a software bill of materials. The requirements applied to submissions beginning March 29, 2023; FDA began enforcing them through Refuse to Accept decisions on October 1, 2023.

Does every medical device need an SBOM?

Every cyber device under Section 524B requires a software bill of materials covering commercial, open-source, and off-the-shelf software components. FDA accepts SBOMs in standard formats such as SPDX and CycloneDX. Devices that do not meet the statutory definition of a cyber device are not required to provide an SBOM, though FDA increasingly expects one for any software-containing device.

What is a cyber device under Section 524B?

A cyber device is defined in Section 524B(c) as a device that includes software validated, installed, or authorised by the sponsor; has the ability to connect to the internet; and contains technological characteristics validated, installed, or authorised by the sponsor that could be vulnerable to cybersecurity threats. The definition is broad — virtually any network-connected or cloud-connected medical device qualifies.

What cybersecurity standards does FDA recognise?

FDA recognises AAMI TIR57 for security risk management, AAMI/UL 2900 for device security, ANSI/ISA/IEC 62443 for industrial cybersecurity (applicable to some medical devices), and ISO/IEC 27001 for information security management. The FDA 2023 Premarket Cybersecurity Guidance aligns its expectations with these consensus standards while establishing device-specific requirements beyond them.

How does cybersecurity risk management differ from safety risk management?

Safety risk management under ISO 14971 addresses unintentional hazards and failure modes. Cybersecurity risk management addresses threats from adversarial actors — attackers who actively probe for vulnerabilities and exploit them. The two processes share a risk framework, but cybersecurity requires threat modeling, adversary capability analysis, and continuous post-market monitoring in ways safety risk analysis does not.

See how MANKAIND handles this

30-minute demo. Bring your hardest design controls question.