A recent Funding Opportunity Announcement (FOA), Number PAR-17-125, states in part (their emphasis, not mine!):
"This FOA is not appropriate for the development of new software tools or expert systems, but studies of how human observers interact with such systems would be appropriate."
Translation: there is no problem with CAD, but one does need to work on the user interface. Never in my 30+ years of experience have I seen such pseudo-science make its way into an official NIH document.
Another part of this announcement states:
"While computer-aided detection (CAD) systems perform well at detecting cancer in the laboratory, the technology has not improved cancer detection in the clinic."
The second clause, after 10 years of denial, belatedly agrees with the results of massive clinical trials showing that CAD does not work*. The first clause, however, is not true: CAD does not work even in the laboratory. There is widespread misunderstanding of how to analyze CAD data, and given the confusion, even bad results can be made to look good.
There is much questionable science in the CAD and image perception field, and this FOA will perpetuate past mistakes. No improvement in CAD's clinical performance is expected to result from the money spent under this FOA. There is even the possibility that a Golden Fleece Award may go to a GUI-improvement CAD project it funds.
*The first Fenton et al. study set off a furor in the CAD community, and a flurry of Letters to the Editor was sent to discredit it. Dr. Chakraborty refused to sign any of them.
There are actually two mutually reinforcing CAD studies, from 2007 and 2011, with a total of almost a million patients. When the number of cases is this large, subtle methodological issues become less important.