A surgeon just reported a device malfunction. Where do you start?

Tuesday morning. A complaint lands from a surgical center in Minneapolis: a powered surgical instrument locked up mid-procedure. The surgeon had to switch to a backup device. No patient harm—this time—but the complaint is flagged as a potential reportable event. Your quality team opens an investigation. The clock starts on your MDR reporting timeline. And the first question—what could have caused this?—sends your team into three different systems.

The device's design history file lives in the PLM system. The manufacturing batch records and incoming inspection data live in the MES. The risk analysis—the FMEA that should tell you whether this failure mode was anticipated—lives in a controlled Word document on the quality system drive. The complaint itself was entered in the eQMS. Four systems, four logins, four different data structures. The quality engineer running the investigation has access to two of them.

The disconnected investigation

The quality engineer starts with what she can see: the complaint record and the device history record for the specific unit. The lot number traces to a manufacturing batch from six months ago. Incoming inspection data for the motor assembly shows everything in spec. So far, nothing obvious. To go deeper, she needs the design FMEA—did the engineering team anticipate a lock-up failure mode during sustained use? She emails the design engineering manager. He's on PTO until Thursday. His backup finds the FMEA, but it's revision C, and nobody is sure whether revision D was completed before the last design transfer.

Meanwhile, two more complaints come in from different sites. Same failure mode—instrument lock-up during extended procedures. This is now a trend, and the regulatory team needs to decide on MDR reportability within 30 days. The root cause investigation is still assembling context. The design engineer who specified the motor duty cycle rating left the company eight months ago. His thermal analysis—the document that would explain why the motor was rated for the duty cycle it was rated for—is referenced in the DHF index but the file link is broken. Someone moved the folder during the server migration.

The investigation concludes six weeks later. Root cause: the motor thermal protection circuit was designed for a 60% duty cycle, but the surgical technique for a procedure that gained popularity after launch requires 85% sustained activation. The failure mode wasn't in the FMEA because the use case didn't exist when the risk analysis was written. The corrective action requires a design change, a new verification protocol, a risk analysis update, and a field safety corrective action notification. Each of these is initiated in a different system by a different team.

The same complaint, with MANKAIND

The complaint still arrives Tuesday morning. But when the quality engineer opens the investigation, MANKAIND connects the complaint to the device's full engineering context. The specific device configuration traces to the design outputs, the verification records, and the risk analysis entries that cover the motor subsystem—all visible from the investigation record, without switching systems or waiting for email responses.

The FMEA surfaces immediately, and it's the current revision. The quality engineer can see that motor thermal protection was addressed in the risk analysis, but the duty cycle assumption is documented: 60% maximum, based on the surgical technique profile from the original intended use definition. The gap between that assumption and the field reality is visible within hours, not weeks. When the second and third complaints arrive, MANKAIND flags the trend against the risk analysis baseline—the observed failure rate exceeds the residual risk estimate for this failure mode, triggering a formal trend investigation before the quality team would have manually identified the pattern.
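The trend check described above can be sketched in a few lines. This is an illustrative simplification, not MANKAIND's actual implementation: it assumes the risk analysis stores a residual risk estimate as an expected failure rate per procedure, and compares the observed complaint rate against it.

```python
# Illustrative sketch (hypothetical data model, not MANKAIND's code):
# flag a complaint trend when the observed failure rate exceeds the
# residual risk estimate recorded for that failure mode.

from dataclasses import dataclass

@dataclass
class RiskEntry:
    failure_mode: str
    residual_rate: float  # estimated failures per procedure

def trend_exceeds_baseline(risk: RiskEntry,
                           complaints: int,
                           procedures: int) -> bool:
    """True when the observed rate is above the residual estimate."""
    observed_rate = complaints / procedures
    return observed_rate > risk.residual_rate

# Hypothetical numbers: 3 lock-up complaints across 10,000 procedures,
# against a residual risk estimate of 1 in 100,000.
entry = RiskEntry("motor lock-up during sustained use", 1e-5)
print(trend_exceeds_baseline(entry, complaints=3, procedures=10_000))  # True
```

In practice the comparison would use a statistical test rather than a raw rate threshold, but the principle is the same: the risk analysis baseline, not human pattern recognition, decides when a trend investigation opens.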

The corrective action flows through the same engineering record. The design change to the thermal protection circuit is evaluated against the full traceability matrix: affected design outputs, verification requirements that need updating, labeling changes, and manufacturing process impacts—all identified from the dependency graph, not from manual cross-referencing. The field safety corrective action documentation is generated from the investigation record, the engineering change, and the risk analysis update—maintaining the evidentiary chain from field event to engineering root cause to corrective action.
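The impact analysis above amounts to a graph traversal. A minimal sketch, assuming a hypothetical traceability graph where edges point from each artifact to the artifacts that depend on it (the node names are invented for illustration):

```python
# Illustrative sketch (hypothetical data model): walk a traceability
# graph to list every artifact affected by a design change, instead of
# cross-referencing controlled documents by hand.

from collections import deque

# Edges: artifact -> artifacts that depend on it downstream.
TRACE_GRAPH = {
    "thermal-protection-circuit": ["verification-protocol-VP-104",
                                   "risk-analysis-FMEA-motor",
                                   "assembly-process-OP-230"],
    "risk-analysis-FMEA-motor": ["labeling-IFU"],
    "verification-protocol-VP-104": [],
    "assembly-process-OP-230": [],
    "labeling-IFU": [],
}

def impacted_artifacts(graph: dict, changed: str) -> set:
    """Breadth-first walk of everything downstream of a changed artifact."""
    seen, queue = set(), deque([changed])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(impacted_artifacts(TRACE_GRAPH, "thermal-protection-circuit")))
```

Traversing the graph rather than reading documents is what makes the affected-item list complete by construction: anything linked downstream of the changed design output is surfaced, whether or not anyone remembered the link.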

The outcome

The root cause is identified in days, not weeks. The corrective action addresses the systemic engineering condition—a duty cycle assumption that no longer reflects clinical reality—not just the proximate symptom. The MDR report contains a substantive investigation narrative because the engineering evidence was accessible from the start. The preventive action extends beyond this failure mode: MANKAIND identifies other design assumptions in the risk analysis that were based on the original use profile and flags them for re-evaluation against current field data. The post-market surveillance system isn't just collecting complaints—it's feeding engineering intelligence back into the design record, closing the loop that most quality systems leave open.

See how MANKAIND handles this

30-minute demo. Bring your hardest design controls question.