The pre-Series A QMS — what FDA expects and what to defer
Sat with a founding CTO last fall. Four engineers. Class IIb SaMD. Seed round closing. The question on the table: when do we hire a quality manager, and what do we do until then? She'd been told by three separate consultants that she needed to be ISO 13485 certified before building anything. Two of them were selling eQMS implementations. The third was selling an audit.
None of the three were right.
FDA doesn't require certification to begin development. It requires design controls, risk management, and a documented process — all of which can exist at a four-person company without a single SOP in a commercial eQMS. What FDA does care about, very specifically, is whether the record you show up with at submission reflects the engineering that actually happened. Most pre-Series A teams don't fail the technical bar. They fail the evidentiary one.
What 21 CFR 820.30 actually demands
The design controls regulation is ten subsections. Skip the ceremony and they reduce to nine activities you must document evidence for. Not SOPs about the activities. The activities themselves.
- Design and development planning — a documented plan that says who does what, on what schedule, reviewed by whom.
- Design inputs — documented requirements covering physical, performance, safety, and regulatory characteristics, approved before design work begins.
- Design outputs — specifications, drawings, software, and supporting artefacts complete enough to manufacture and test against.
- Design review — planned reviews at defined stages, with at least one independent participant, and a record of who attended and what was decided.
- Design verification — documented evidence that outputs meet inputs.
- Design validation — documented evidence that the device meets intended use and user needs under actual or simulated use conditions.
- Design transfer — procedures confirming the design can be reproduced in production.
- Design changes — documented identification, review, and approval of every change to an approved design before it is implemented.
- Design History File — the compiled record demonstrating all of the above.
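As a concrete sketch, one way a four-person team might organise that evidence on disk. The folder names are illustrative, not a prescribed structure:

```
dhf/
  plan/           design and development plan: roles, schedule, review gates
  inputs/         approved requirements (physical, performance, safety, regulatory)
  outputs/        specifications, drawings, released software versions
  reviews/        minutes per stage: attendees, decisions, independent reviewer
  verification/   protocols and results mapped to inputs
  validation/     use-condition evidence mapped to user needs
  transfer/       production readiness records
  changes/        change records with rationale, review, approval
```

The point is not these particular names. It's that each of the nine activities has a place where its evidence accumulates as the work happens.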
That's it. Not a particular SOP template. Not a specific tool. Not a dedicated quality team. Evidence of those nine activities, traceable, current.
The four-layer minimum viable QMS
The teams that get this right at pre-Series A run four layers of quality infrastructure, in this order. Everything else waits.
Document control, lean. Version history. A way to know which version is current. Review and approval for changes. This does not require a commercial document management system. A monorepo with approved PRs and a CODEOWNERS file can satisfy it. A Google Drive with versioned folders and a signature log can satisfy it. What it cannot be is informal. "Everyone knows which version is current" is not a documented process, and it won't survive the first audit question.
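A sketch of how little tooling "lean but not informal" requires, in Python. The file-naming scheme (`SOP-001_v3.md`) and the log columns are assumptions for illustration, not a prescribed format:

```python
import csv
import io

def unapproved_documents(doc_filenames, approval_log_csv):
    """Given controlled-document filenames like 'SOP-001_v3.md' and an
    approval log CSV with columns document,version,approver,date,
    return every document version with no signed approval entry."""
    approved = {
        (row["document"], row["version"])
        for row in csv.DictReader(io.StringIO(approval_log_csv))
        if row["approver"] and row["date"]  # an unsigned row is not an approval
    }
    missing = []
    for fname in doc_filenames:
        stem = fname.rsplit(".", 1)[0]            # strip extension
        name, _, version = stem.rpartition("_v")  # 'SOP-001_v3' -> ('SOP-001', '3')
        if (name, version) not in approved:
            missing.append(fname)
    return missing
```

Run something like this on every merge and the question "which version is current, and who approved it" always has a documented answer.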
DHF structure. The Design History File isn't a document. It's a compiled set of records demonstrating design controls were followed. At minimum, organised to contain design inputs, design outputs, design review records, V&V protocols and results, and change control records. Structure matters more than format. If you can't walk from a requirement to the design element implementing it to the test verifying it in under a minute, your DHF isn't navigable, and that will surface in review.
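The one-minute walk can be made literal. A minimal sketch, assuming the DHF is kept as structured records; the `implements` and `verifies` link fields are hypothetical names, not a standard schema:

```python
def trace_requirement(req_id, design_outputs, verifications):
    """Walk one requirement to the design outputs that implement it and the
    verification records covering those outputs; report where the chain breaks."""
    outputs = [o for o in design_outputs if req_id in o["implements"]]
    if not outputs:
        return {"requirement": req_id, "gap": "no design output implements it"}
    tests = [v for v in verifications
             if any(o["id"] in v["verifies"] for o in outputs)]
    if not tests:
        return {"requirement": req_id, "gap": "no verification covers its outputs"}
    return {"requirement": req_id,
            "outputs": [o["id"] for o in outputs],
            "verified_by": [v["id"] for v in tests]}
```

Mapping every requirement through a check like this is, in effect, the traceability matrix a reviewer will ask for.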
Risk management file. ISO 14971 requires a risk management plan, a risk analysis, and a risk management report. Plan and report can be lightweight at early stage. Analysis can't. Hazard identification, risk estimation, and risk control decisions have to be documented and connected to design decisions. A risk analysis in a spreadsheet separate from your requirements document is already diverging from reality by sprint 4.
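The divergence is mechanical to detect if each risk control carries a link to the requirement implementing it. A minimal sketch with hypothetical record shapes:

```python
def risk_trace_gaps(hazards, requirement_ids):
    """For each hazard, flag risk controls that point at a requirement
    no longer (or never) present in the current requirements set."""
    gaps = []
    for hz in hazards:
        for ctrl in hz["controls"]:
            if ctrl["requirement"] not in requirement_ids:
                gaps.append((hz["id"], ctrl["id"], ctrl["requirement"]))
    return gaps
```

An empty result is exactly the property the spreadsheet-on-the-consultant's-laptop model loses by sprint 4.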
Change control from day one. Every change to an approved design document needs a rationale, a review, and an approval. The process can be trivial at early stage — a PR description, a reviewer, a date. What it cannot be is absent. Undocumented design changes are one of the top 483 observation patterns in design controls inspections, and they're preventable with a process that takes 90 seconds per change.
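The 90-second process is three fields and a completeness check. A sketch; the field names are illustrative, not a required format:

```python
from dataclasses import dataclass

@dataclass
class ChangeRecord:
    change_id: str
    description: str
    rationale: str    # why the approved design is changing
    reviewer: str     # who reviewed the change
    approved_on: str  # ISO date; empty until approved

def change_record_problems(rec):
    """A change record is complete only with a rationale, a named
    reviewer, and an approval date."""
    problems = []
    if not rec.rationale.strip():
        problems.append("missing rationale")
    if not rec.reviewer.strip():
        problems.append("missing reviewer")
    if not rec.approved_on.strip():
        problems.append("not yet approved")
    return problems
```

Wire a check like this into CI as a merge gate and the undocumented-change 483 pattern becomes structurally impossible rather than a matter of discipline.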
The five patterns that cost startups 12 months
I've watched enough programs lose a full year to post-hoc remediation to call these out specifically. The pattern is consistent.
1. "We'll do design controls properly after the prototype works." The prototype becomes the device. The decisions made during prototyping are the design decisions. Reconstructing design inputs after the design is fixed is not design controls. FDA reviewers can tell. The reconstruction takes three months and is often unusable because nobody can remember why version 3 abandoned the approach in version 2.
2. Risk analysis in a separate Excel file. The risk analysis starts on the regulatory consultant's laptop. Nobody engineering the device ever opens it. By submission, the risk controls in the file don't match the design controls in the code, and the risk-to-requirement trace is a fiction assembled in the last two weeks before submission.
3. Outsourcing the QMS to a consultant who writes SOPs nobody follows. The consultant writes 40 SOPs tailored to "best practice" for a 200-person company. The four engineers don't read them. When the auditor shows up, the gap between written procedure and actual process is the first thing documented. Expensive lesson.
4. Hiring a quality manager too early — or too late. Too early: a quality manager with no record to work on ends up building an elaborate system in advance of the engineering work, which then doesn't match how engineering actually happens. Too late: the quality manager inherits a three-year documentation debt and spends six months in archaeology mode before producing anything forward-looking. The right window is usually around Series A, with a specific submission date within 18 months.
5. Picking a submission pathway before talking to FDA. Teams pick 510(k) because it sounds easier than De Novo and because a competitor cleared one. Then discover their device doesn't have a valid predicate. A pre-submission meeting (Q-Sub) costs a few weeks of preparation and a non-trivial chunk of founder time. It's cheaper than nine months of pathway disputes during review.
What you can legitimately defer
Not everything in ISO 13485 needs to be operational at pre-Series A. These can wait without material risk, as long as the placeholders exist.
- Formal CAPA system. A lightweight issue tracker with root-cause fields works until about 10 people. Build the full CAPA process before your first notified body audit or before post-market launch, whichever comes first.
- Supplier qualification. The full ISO 13485 supplier controls apply to production suppliers. At development stage, a simple supplier record per purchase is sufficient for components and materials.
- Management review. The full ISO 13485 management review cadence matters more at scale. A quarterly written summary of design controls status works at early stage.
- Competency management. Training records should exist from day one. Elaborate competency frameworks can wait.
Deferral means "exists as a simple process now, gets formalised later." It does not mean absent.
The first quality hire — and what they should actually do
Most teams make this hire between Series A and Series B, usually when the team hits 10–15 people or a submission timeline becomes concrete. The candidate matters less than the first 90 days of work.
Their first priority shouldn't be writing SOPs. It should be reading your existing record. Requirements coverage, risk control implementation, verification traceability, gap list. The output of month one is a defensible assessment of where you are, not a binder of new procedures. SOPs follow. The design history has to be defensible first, and if it isn't, that's what the hire is working on.
The worst outcome I've seen: first quality hire arrives, spends four months writing SOPs, discovers during the first design review that the underlying record has a two-year hole in risk control traceability, and has to throw out most of the SOPs because they didn't match the actual process. Hire for assessment first, procedure-writing second.
Quality embedded in engineering, not alongside it
Startups that navigate this stage well aren't the ones with the most elaborate QMS. They're the ones where design controls evidence is a natural output of how engineering happens, not a parallel documentation task triggered by deadlines.
MANKAIND is built for exactly that model. Design inputs connect to design outputs, risk controls connect to requirements, changes propagate structurally through the record. When the first quality hire arrives, they inherit a defensible design history — not three years of reconstruction work. When the submission date arrives, the traceability matrix is a render from the live record, not an assembly project.
The minimum viable QMS — 5 steps
- Start with design controls under 21 CFR 820.30. DHF, requirements, and traceability structure first. The rest of the QMS builds on this foundation.
- Establish lightweight document control. Version-controlled procedures and specifications with an approval workflow. Enforceable, auditable, and cheap to maintain.
- Set up the risk management file under ISO 14971. Hazard analysis, risk estimation, risk controls. A live record throughout development, not a submission deliverable.
- Institute lightweight CAPA and complaint handling. A simple process must exist before you need it: root cause analysis, corrective action, effectiveness checks. Formalise it ahead of your first audit or post-market launch.
- Add supplier and training controls as you scale. Start light. Formalise before the first notified body audit.
See how MANKAIND handles this
30-minute demo. Bring your hardest design controls question.