The failed promise of CAD
In screening mammography, experts typically achieve 80% sensitivity at 90% specificity. In the above figure, ROI = region of interest, i.e., a suspicious region found by an expert radiologist or by CAD. Of 1000 women screened, about 5 have cancer. The expert detects about 4 cancers, i.e., 5 x 0.8, while generating about 100 unnecessary recalls, i.e., 995 x (1 - 0.9). However, not all radiologists are experts. Given the wide variability in their skill levels, a woman's chance of a correct diagnosis depends on the doctor who reads her mammogram, surely an intolerable situation from the woman's point of view.
If CAD were as good as the expert radiologist, it would generate about one non-lesion localization (NL) mark every ten cases, while marking about 4 of the 5 cancers, i.e., 4 lesion localizations (LL) per thousand patients.
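The arithmetic above can be made explicit in a minimal sketch, assuming a 1000-woman cohort with the values quoted in the text (prevalence of 5 per 1000, expert sensitivity 0.8, expert specificity 0.9); the variable names are illustrative, not from any CAD software.

```python
# Screening arithmetic for a hypothetical 1000-woman cohort,
# using the figures quoted in the text.
n_women = 1000
n_cancer = 5                    # women with cancer
n_healthy = n_women - n_cancer  # 995 cancer-free women

sensitivity = 0.8               # expert true-positive fraction
specificity = 0.9               # expert true-negative fraction

# Lesion localizations (LL): cancers correctly marked.
cancers_found = n_cancer * sensitivity            # 5 x 0.8 = 4

# Non-lesion localizations (NL): unnecessary recalls.
unnecessary_recalls = n_healthy * (1 - specificity)  # 995 x 0.1, about 100

# Expert-equivalent NL rate: roughly one mark every ten cases.
nl_per_case = unnecessary_recalls / n_women          # about 0.1

print(cancers_found, unnecessary_recalls, nl_per_case)
```

At the expert-equivalent rate of about 0.1 NL marks per case, CAD's observed rate of about 3 NL marks per case is the factor of 30 cited below.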
In fact, CAD generates about 3 NL marks per patient, i.e., 30 times what it should! Based on CAD, every patient would be recalled, and of course CAD would then trivially "find" all 5 cancers! Despite claims by CAD researchers that CAD works in the laboratory, in the clinic it does not. Any radiologist who recalled every patient would be fired. The CAD researcher's counter to this statement is "but CAD is only intended to be used as a second reader", implying that lower-than-expert performance is acceptable. Unfortunately, the FDA has gone along with this low bar for considering CAD a success*. Some CAD researchers have realized the folly of this approach, but dare not "rock the boat". CAD and perception researchers have even convinced the NIH, in a recently issued Funding Opportunity Announcement (PAR-17-125), to pump money into rewarding this fundamentally flawed approach, by treating the poor clinical performance of CAD as a user-interface issue! According to them, CAD works in the laboratory but radiologists do not know how to use it! This is a serious abdication of scientific integrity and responsibility.
* This is why no existing CAD company dares take the name ExpertCAD™.