What is wrong with the current approach to CAD design and optimization?
- Use of an incorrect figure of merit based on the FROC curve. They should instead be using the area under the AFROC curve.
- The implicit assumption that CAD is perfect at search, when in fact its search performance is near zero. Search performance is defined qualitatively as the ability to find lesions while avoiding finding non-lesions. Follow this link to learn about search performance and how it is measured.
- CAD is trained with comparable numbers of non-diseased and diseased cases, typically about 300 of each. With this many lesions, generating tens of false positives is not too high a price to pay for detecting 80% of the diseased cases. The screening situation is quite different: the ratio of non-diseased to diseased cases is roughly 1000 to 5. With so few lesions, the cost of generating tens of false positives is very high. Training should reflect the low prevalence at which the algorithm is intended to be used.
- Lack of importance assigned to non-lesion localizations: "do not worry about specificity, worry about sensitivity" (an exact quote from a researcher at a CAD conference). Anyone can achieve 100% sensitivity by recalling every patient; this is ROC-101.
- The claim that radiologists can readily dismiss CAD false positives. No! Like the boy who cried "wolf" too often, radiologists will either ignore CAD marks (the experts) or use them inappropriately (the non-experts). CAD is no different, in principle, from a person sitting next to the radiologist who whispers after each interpretation, "Have you looked at this region? And how about this region?" If the person is an expert (and the only way to determine this is to measure stand-alone performance), the radiologist will soon learn that the whispers need to be taken seriously. If the majority of the suggested regions can be readily dismissed, the radiologist will eventually lose confidence in the person.
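To make the first point concrete, the empirical area under the AFROC can be computed as a Wilcoxon-like statistic over lesion ratings and the highest non-lesion-localization rating on each non-diseased case. The sketch below is a minimal illustration under that definition; the function name and input conventions (unmarked lesions and unmarked cases coded as negative infinity) are my own, not from any particular CAD toolkit.

```python
def afroc_auc(lesion_ratings, nondiseased_case_ratings):
    """Empirical AFROC AUC as a Wilcoxon-like statistic.

    lesion_ratings: one rating per lesion across all diseased cases;
        an unmarked lesion is coded as float("-inf").
    nondiseased_case_ratings: the highest NL (non-lesion localization)
        rating on each non-diseased case; an unmarked case is coded
        as float("-inf").
    """
    total = 0.0
    n_pairs = len(lesion_ratings) * len(nondiseased_case_ratings)
    for ll in lesion_ratings:
        for fp in nondiseased_case_ratings:
            if ll > fp:        # lesion rated above the case's worst FP
                total += 1.0
            elif ll == fp:     # ties (including both unmarked) score 0.5
                total += 0.5
    return total / n_pairs
```

A rating-based comparison like this rewards finding lesions and penalizes both missed lesions and highly rated false-positive marks, which is exactly the trade-off the FROC-based figures of merit fail to capture.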
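The prevalence argument can also be put in back-of-envelope numbers. Using the roughly 1000-to-5 ratio quoted above, plus two assumed values that are purely illustrative (an average of 2 false CAD marks per case and 80% sensitivity, neither taken from a specific product), the false marks swamp the true detections:

```python
# Illustrative screening-prevalence arithmetic.
# fp_per_case and sensitivity are assumed numbers, for illustration only.
n_nondiseased = 1000
n_diseased = 5
fp_per_case = 2.0      # assumed average false CAD marks per case
sensitivity = 0.8      # assumed fraction of cancers CAD marks

false_marks = fp_per_case * (n_nondiseased + n_diseased)
true_detections = sensitivity * n_diseased
ratio = false_marks / true_detections

print(f"{false_marks:.0f} false marks per {true_detections:.0f} "
      f"true detections (ratio ~{ratio:.0f}:1)")
```

Under these assumptions a radiologist sees on the order of 500 false marks for every correctly marked cancer, which is why marks that "can be readily dismissed" end up being ignored.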