

Your IVD assay failed validation. The pharma partner is waiting.

Your team is building a companion diagnostic for a PD-L1 inhibitor in non-small cell lung cancer. The pharma partner's Phase III trial is enrolling, and your assay is the patient selection tool. Last Friday, the analytical validation study came back: within-run precision is solid, but between-laboratory reproducibility failed the acceptance criteria by a margin too large to attribute to statistical fluctuation. You have a real engineering problem—and a pharma partner whose clinical trial timeline depends on your assay being submission-ready.
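For orientation, between-lab reproducibility in an EP05-style precision study is typically assessed by partitioning total variance into a within-lab (repeatability) component and a between-lab component via random-effects ANOVA. A minimal sketch of that arithmetic—every number below (replicate values, lab names, the acceptance limit) is hypothetical, not from any real study:

```python
# Illustrative sketch: one-way random-effects ANOVA for a balanced
# multi-lab precision study (in the spirit of CLSI EP05-style designs).
# All measurement values and limits below are hypothetical.
import statistics

# Hypothetical replicates of one QC level measured at three labs
labs = {
    "lab_A": [48.1, 47.9, 48.4, 48.0, 48.2],
    "lab_B": [51.3, 51.0, 51.6, 51.2, 51.4],
    "lab_C": [49.0, 48.7, 49.3, 48.9, 49.1],
}

n = len(next(iter(labs.values())))   # replicates per lab (balanced design)
lab_means = [statistics.mean(v) for v in labs.values()]
grand_mean = statistics.mean(lab_means)

# Within-lab (repeatability) mean square: pooled replicate variance
ms_within = statistics.mean(statistics.variance(v) for v in labs.values())

# Between-lab mean square, and the between-lab variance component
ms_between = n * statistics.variance(lab_means)
var_between = max((ms_between - ms_within) / n, 0.0)

# Reproducibility SD combines both components; express as a CV
sd_repro = (ms_within + var_between) ** 0.5
cv_repro = 100 * sd_repro / grand_mean

print(f"repeatability SD:   {ms_within ** 0.5:.2f}")
print(f"between-lab SD:     {var_between ** 0.5:.2f}")
print(f"reproducibility CV: {cv_repro:.1f}%")
```

A between-lab component that dominates the total, as in this illustration, is exactly the failure signature described above: each lab is internally precise, but the labs disagree with each other.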

Monday morning, the cascade begins.

What breaks when precision fails

The performance evaluation report—the centerpiece of your submission under both FDA and EU IVDR—is invalid. The precision claims are woven through the entire document, connected to the intended use statement, the clinical performance claims, and the benefit-risk analysis. You can't swap in new numbers; the narrative changes because the performance envelope changed. The risk analysis needs revisiting: between-lab variability means the assay might produce different patient selection decisions at different sites—a failure mode with direct clinical consequences for an oncology companion diagnostic. The risk controls you specified assumed the original precision performance; they may no longer be adequate.

The root cause investigation is pulling your assay development team away from the clinical validation study that was supposed to start next month. Is it reagent lot-to-lot variability? Calibrator stability during shipping? Matrix sensitivity you didn't characterize with enough specimen types? Each hypothesis requires different experiments, and each answer triggers a different redesign path. Meanwhile, your pharma partner's regulatory team is asking for an updated timeline, and you can't give them one because you don't yet know whether this is a two-week reagent optimization or a three-month assay redesign.

The same failure, with MANKAIND

The precision data still fails—MANKAIND doesn't change physics. But what happens next is fundamentally different. The platform has maintained the connection between your CLSI EP05 study design, your performance claims, your risk analysis, and your submission documentation from the start of the validation program. When the between-lab reproducibility data comes in below threshold, MANKAIND maps the impact: which sections of the performance evaluation report are affected, which risk analysis entries reference the original precision claims, and which downstream validation studies depend on the precision specification holding.

The root cause investigation starts with context, not archaeology. The platform surfaces reagent lot characterization data, calibrator stability trending, and specimen matrix studies—all connected to the precision study design—so the assay development team can evaluate hypotheses against existing data before designing new experiments. If the root cause is reagent lot variability, MANKAIND traces that finding to the manufacturing process controls and incoming material specifications that need tightening. If it's matrix sensitivity, the platform identifies which specimen types in the clinical validation study design are at highest risk for similar performance issues.

The reference material traceability chain—from NIST or WHO reference materials through your calibrators to the measurement result—is intact and structured. When the FDA asks how the revised quantitative claims were established, the answer is already documented. The IVDR common specifications for your analyte category are mapped to the revised study design, so you know whether the new precision target still meets minimum performance requirements before you run the next study.
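In uncertainty terms, the traceability chain described above is a sequence of value transfers whose standard uncertainties combine in quadrature (the GUM root-sum-of-squares rule). A minimal sketch—the chain stages and uncertainty values are invented for illustration, not taken from any real calibration hierarchy:

```python
# Illustrative sketch: propagating standard uncertainty down a calibration
# hierarchy (reference material -> calibrator -> patient result), combining
# independent components in quadrature per the GUM. Values are hypothetical.
import math

chain = [
    ("higher-order reference material", 0.8),  # standard uncertainty, same units
    ("value transfer to calibrator",    0.5),
    ("calibrator lot assignment",       0.6),
    ("end-user measurement precision",  1.2),
]

u_combined = math.sqrt(sum(u ** 2 for _, u in chain))
U_expanded = 2 * u_combined  # coverage factor k = 2, ~95 % coverage

print(f"combined standard uncertainty: {u_combined:.2f}")
print(f"expanded uncertainty (k=2):    {U_expanded:.2f}")
```

The point of keeping the chain structured is that when any one stage changes—say, a tightened calibrator assignment after the redesign—the combined uncertainty backing the quantitative claims can be recomputed rather than re-derived from scratch.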

The outcome

Your pharma partner gets a timeline on Wednesday instead of in three weeks—because the impact assessment that would normally require cross-referencing six different document systems was generated in hours. The redesigned precision study is built on a study plan that MANKAIND structured from the CLSI EP protocols, the revised performance targets, and the clinical context of the companion diagnostic's intended use. The performance evaluation report doesn't need rewriting from scratch; affected sections are identified, and unchanged sections—clinical performance data, scientific validity assessment—remain intact and consistent with the updated precision narrative.

When the assay passes the second round of analytical validation, the submission documentation reflects the actual development history—including the failure, the investigation, and the redesign—because MANKAIND captured it as it happened. The post-market performance follow-up plan is generated from the revised performance claims, with precision monitoring built in as a specific surveillance objective. The pharma partner's regulatory team gets a companion diagnostic submission that tells a coherent evidence story, not a binder assembled under panic.

See how MANKAIND handles this

30-minute demo. Bring your hardest design controls question.