Mark Twain may not have been thinking of Increasing IVD Regulatory Success when he said “There are lies, damned lies and statistics” but…
When developing In-vitro Diagnostic (IVD) devices, and diagnostic devices in general, a statistical algorithm is almost always required to convert the stream of electronic sensor data into a clinically relevant parameter – displaying volts or ADC counts is not usually sufficient. However, displaying a clinically useful parameter is fraught with technical and regulatory issues.
Below are several considerations for algorithm development based on regulatory strategies and issues. I’ve included three tips and techniques for reviewing and analyzing diagnostic performance data which may be unfamiliar to those without a background in statistics.
Regulatory Considerations
1 – Quantitative Index has less Regulatory Burden than Diagnosis
Obtaining regulatory clearance is generally more difficult for IVDs intended to provide simple yes/no answers for disease diagnosis, as these must rely on large pivotal clinical studies to demonstrate efficacy. A simpler approach, which often gets a device to market more quickly, is to show agreement or correlation with existing devices (demonstrated with correlation and Bland-Altman plots), and then perform post-marketing clinical studies later to support disease diagnosis claims.
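Agreement with a predicate device is usually summarized by the bias and 95% limits of agreement from a Bland-Altman analysis. A minimal sketch in Python (the paired readings below are illustrative, not real study data):

```python
import statistics

def bland_altman(new_device, reference):
    """Bias and 95% limits of agreement between paired measurements."""
    diffs = [n - r for n, r in zip(new_device, reference)]
    bias = statistics.mean(diffs)      # systematic offset of new device vs. predicate
    sd = statistics.stdev(diffs)       # spread of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd  # bias, lower LoA, upper LoA

# Illustrative paired glucose readings (mmol/L): new device vs. predicate
new = [5.1, 6.3, 7.8, 4.9, 9.2, 5.6]
ref = [5.0, 6.0, 7.5, 5.1, 9.0, 5.5]
bias, loa_low, loa_high = bland_altman(new, ref)
```

Plotting each difference against the pair average, with horizontal lines at the bias and the two limits of agreement, gives the familiar Bland-Altman plot a reviewer expects to see.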
Many sensing technologies deliver a continuously varying output depending on the concentration of an analyte of interest. However, clinicians are taught Sensitivity and Specificity in grad school: sensitivity is the percentage of disease-positive cases your device correctly identifies, and specificity the percentage of disease-negative cases, both judged against “the truth” of a reference standard. As a huge generalization, many clinicians also have magic numbers in mind which they consider acceptable, such as 90% sensitivity and 90% specificity.
An example is blood glucose testing for diagnosing diabetes, where the glucose concentration of blood is provided in mmol/L or mg/dL. The meter itself does not diagnose presence or absence of diabetes; the clinician interprets the result after a specific protocol (such as fasting glucose tolerance test) and diagnoses the patient in conjunction with signs, symptoms and clinical knowledge.
The opposite of this approach is a qualitative pregnancy test, which uses antibodies and enzymes to detect the presence of the hormone hCG. The result is simply “Pregnant” or “Not pregnant” (although some report “weak positive”, discussed later), and one can imagine the importance of a correct diagnosis.
2 – Sensitivity and Specificity are not Always the Best Indicators of Diagnostic Performance
Some devices must display a yes/no answer to be meaningful, the qualitative pregnancy test for example. Sensitivity and specificity alone may not tell the whole story for these devices, because their interpretation depends on disease prevalence. For example, suppose a novel device is evaluated on 1,000 patients with a very low disease prevalence of 2 in 1,000, and correctly identifies one of the two diseased cases but misses the other as disease negative.
In this case the sensitivity is only 50%, which quoted as a bare statistic would make most clinicians balk, even though the estimate rests on just two diseased patients – 1 in 500 of the study population – thanks to the large number of disease-negative patients.
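The uncertainty behind that headline 50% figure is easy to quantify: with only two diseased patients, a 95% confidence interval on the sensitivity estimate spans almost the entire range. A quick sketch using the Wilson score interval (my choice of interval method, not prescribed by the text):

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# 1 of 2 diseased patients detected: point estimate is 50% sensitivity,
# but the interval runs from roughly 9% to roughly 91%
sens_low, sens_high = wilson_ci(1, 2)
```

An interval that wide tells a reviewer the sensitivity figure is essentially uninformative at this sample size, which is exactly the point.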
A useful set of statistics which take disease prevalence into consideration are the predictive values, but they can only be used where the study’s disease prevalence is representative of the condition in the field. In the example above, if we now say 188 patients have been incorrectly identified as disease positive, this leaves 810 patients correctly identified as disease negative. Using classical sensitivity and specificity again, the specificity is only 81%, which does not sound that impressive, yet the Negative Predictive Value is close to 100%. However, since 188 patients have been incorrectly classified as disease positive, the Positive Predictive Value is only 0.5%.
Using predictive values shows that this particular test is best suited to ruling out disease: a large number of patients are correctly identified as disease negative, which would be ideal for a primary care screening IVD. A follow-up test, possibly more expensive or invasive, would be required for those this test identifies as disease positive before any serious treatment options are considered.
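The figures in this worked example can be reproduced directly from the 2×2 confusion matrix; a minimal sketch:

```python
# Worked example from the text: 1,000 patients, 2 diseased
tp, fn = 1, 1        # one diseased case caught, one missed
fp, tn = 188, 810    # false alarms and correct negatives

sensitivity = tp / (tp + fn)   # 0.50  - half the diseased cases found
specificity = tn / (tn + fp)   # ~0.81 - unimpressive as a bare number
ppv = tp / (tp + fp)           # ~0.005 - almost every positive is a false alarm
npv = tn / (tn + fn)           # ~0.999 - a negative result is highly reliable
```

The contrast between the near-perfect NPV and the near-zero PPV is what makes this device a rule-out screening test rather than a rule-in diagnostic.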
3 – Don’t waste money on Rushed Clinical Studies
Clinical Studies are expensive. Having a predefined development plan with formal gate review is a way to ensure money is not wasted by rushing to large, expensive clinical studies before the time is right – repeating clinical studies due to unexpected results can be a major headache.
When designing clinical studies, help from a Biostatistician is invaluable in properly powering the studies and in formalizing endpoints and acceptance criteria as part of an overall Clinical Validation Plan. They can also help you formulate a study design for when your natural disease prevalence is low.
For example, a Clinical Validation Plan might look something like:
- Alpha Device used in a 30-patient pilot study to develop a preliminary algorithm. Leave-one-out cross-validation with ROC curves used to assess disease discrimination performance and refine the algorithm.
- Alpha or Beta Device used in a 100-patient feasibility study, with patients selected for a 50:50 disease split and the study powered by the pilot. Data used to validate and fine-tune the quantitative index algorithm.
- Beta or Pre-Production Device used in a 250-patient pivotal study with a 50:50 disease split, supporting a regulatory submission with a quantitative index and no diagnostic claim. Includes a subset analysis matched to disease prevalence in the field.
- Post-market study with 600 unselected patients at the 5% natural disease prevalence, powered by the pivotal study. Data used to extend claims to include diagnosis.
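The “powered by” steps in a plan like this are where the biostatistician earns their fee. As one simple illustration (my own, not from the article), the normal-approximation sample size for estimating sensitivity to a given precision shows how low natural prevalence inflates total enrolment, and hence why enriched 50:50 studies are attractive early on:

```python
import math

def n_for_sensitivity(expected_sens, half_width, prevalence, z=1.96):
    """Diseased subjects needed so the 95% CI on sensitivity has the
    requested half-width, plus total enrolment at the given prevalence."""
    n_diseased = math.ceil(z**2 * expected_sens * (1 - expected_sens) / half_width**2)
    n_total = math.ceil(n_diseased / prevalence)
    return n_diseased, n_total

# Illustrative settings: 90% expected sensitivity, +/-5% precision,
# 5% natural disease prevalence
diseased, total = n_for_sensitivity(0.90, 0.05, 0.05)
```

At these illustrative settings roughly 139 diseased subjects are needed, which at 5% prevalence means enrolling several thousand unselected patients – a real study would also account for dropouts and use exact rather than normal-approximation methods.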
Conclusion
Planning what you are going to do before you do it and having an overall algorithm development strategy are essential for Medical Devices. They tie in with your IP Strategy, Regulatory Strategy, Reimbursement Strategy and Consumable Strategy, amongst others.
Vincent Crabtree, PhD is a Regulatory Advisor & Project Manager at StarFish Medical, and would be delighted to hear comments and constructive criticism on this article.