CAD AND AI

2024-12-27 10:19


Clearly one of the major technological advances of the past 40 years is the development of Artificial Intelligence (AI). The use of computers to help analyze mammograms dates back to the 1980s. In the beginning, computer assistance was called "Computer-Aided Detection" (CAD). Before we were able to obtain digital mammograms directly, we "digitized" screen/film mammograms to provide the pixel values for the computer to use. We then provided developers with "training cases". I remember spending hours reviewing normal mammograms along with those with findings and proven pathology, highlighting the findings on the mammograms and describing what we were seeing. Using the digitized mammograms, the developers then "taught" the computer to look for findings that "looked like" what we had described on the training cases. It turned out that the CAD systems that were developed were fairly good at finding the microcalcifications that are frequently associated with Ductal Carcinoma In Situ (and often with invasive cancers), but these are also the easiest findings for the radiologist to see. Masses and architectural distortion were more difficult for the computer to find.
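To give a flavor of the hand-crafted, rule-based approach of early CAD, here is a toy sketch: flag small, bright pixel spots that stand out sharply from their surroundings, the way microcalcifications do on a digitized mammogram. The function name, the pixel grid, and the thresholds are all invented for illustration; real CAD systems were far more elaborate.

```python
def flag_bright_spots(image, threshold=0.8):
    """Return (row, col) positions whose intensity exceeds `threshold`
    and whose 4 neighbors are noticeably darker (a 'spot', not a region).
    Thresholds are hypothetical values chosen for this example."""
    flagged = []
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            if image[r][c] < threshold:
                continue
            neighbors = [image[r + dr][c + dc]
                         for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
                         if 0 <= r + dr < rows and 0 <= c + dc < cols]
            # a 'spot' must be much brighter than everything around it
            if all(n < image[r][c] - 0.3 for n in neighbors):
                flagged.append((r, c))
    return flagged

# A 5x5 "digitized mammogram" patch with one bright speck at (2, 2).
patch = [[0.1] * 5 for _ in range(5)]
patch[2][2] = 0.9
```

Rules like these follow directly from the descriptions radiologists gave the developers, which is why early CAD did best on the conspicuous, high-contrast findings.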


The initial CAD studies were very optimistic. In several, the CAD systems were able to identify more than 70% of the cancers that had been missed by radiologists (1,2). These were studies of "missed" cancers that were visible in retrospect. There were also some prospective studies in which CAD was used to assist radiologists, and some had promising results (3). The use of CAD was greatly facilitated by the development of Full-Field Digital Mammography (FFDM), since it removed the need to digitize the screen/film images: CAD could be applied directly to the interpretation of FFDM images. Unfortunately, although many radiologists added CAD to their analyses, it is not clear that CAD has actually had much benefit in early detection.


DOUBLE READING

Since every radiologist will fail to see some cancers that are visible in retrospect, and since a cancer missed by the first radiologist is often seen by a second, the Europeans taught us to have two or more radiologists evaluate each screening study. In our own practice at the Massachusetts General Hospital, having a second reader increased our detection of early cancers by 5-7%. In our practice, "double reading" was superior to using a CAD system, but it added expense to screening, when the goal is to keep costs as low as possible while maximizing the early detection of cancers.


ARTIFICIAL INTELLIGENCE

More recently, AI has been developed to help detect breast cancer on mammograms. AI differs from CAD in that we (radiologists) "taught" the CAD systems to look for findings that we had defined as suggesting possible malignancy. AI systems use "neural networks": computers designed based on a simplified understanding of how our brains work. The computer is shown thousands of mammograms and told which are of breasts with cancer and which are of women who do not have breast cancer. The computer then "learns" on its own to distinguish the positive from the negative mammograms.
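The learning step can be sketched in miniature. Below, a single logistic "neuron" (the simplest building block of a neural network) is trained by gradient descent on toy four-"pixel" images labeled positive or negative; no one tells it which pixels matter, and it adjusts its weights from the labels alone. The data and settings here are entirely invented for illustration and have nothing to do with a real mammography system.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(images, labels, epochs=2000, lr=0.5):
    """Fit one logistic unit by gradient descent on labeled examples."""
    n = len(images[0])
    w = [0.0] * n                     # one weight per pixel
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(images, labels):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y               # gradient of the cross-entropy loss
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    """Probability that image x is 'positive'."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Toy training set: bright pixels on the left stand in for a finding.
images = [[0.9, 0.8, 0.1, 0.1], [0.8, 0.9, 0.2, 0.1],   # "positive"
          [0.1, 0.2, 0.1, 0.1], [0.2, 0.1, 0.2, 0.2]]   # "negative"
labels = [1, 1, 0, 0]
w, b = train(images, labels)
```

Real systems stack millions of such units and train on thousands of mammograms, but the principle is the same: the labels, not a radiologist's rules, shape what the network responds to.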


Since computers never "forget" a case, are not distracted, and do not tire, it seems to me that they should be better at finding cancers than humans, but, thus far, this has not been the case. I have been surprised that AI systems are only just beginning to equal radiologists in detecting cancers. Some are claiming that AI is now better at finding cancers, but I am not sure this is the case. I suspect that as computers learn from their own mistakes they will improve, but that remains to be seen.


A major problem is that computers, thus far, cannot explain why they were concerned about an area in a breast. They can highlight an area, but they cannot explain what they are "seeing". The systems are truly "black boxes": the computer is shown an image, analyzes it, and produces an answer, but it cannot tell us why it was or was not concerned. This will become more and more of a problem as we rely more and more on AI.


Another problem is that numerous AI systems are under independent development. The results are only as good as the training studies that the computers are shown, and many of these collections are unique and not shared among the various developers. Ideally, there should be controls on the quality of the mammograms used to train the computers so that they have the best chance of detecting cancers, and training images need to be included from women of all ethnic backgrounds. A huge file should be developed and images shared between developers so that AI systems can accurately interpret mammograms from all groups.


I continue to have high hopes for AI. Screening the hundreds of millions of women who need to be screened is a large task. This is compounded by the fact that the vast majority of annual mammograms will be negative: fewer than 1 woman in 100 will be found to have breast cancer each year. Unless we can determine which women will not develop breast cancer (and we cannot do this as yet and may never be able to), all women are at risk and we need to screen all women. What is being suggested is that the first use for AI will be to identify the mammograms that show no evidence of breast cancer, so that a radiologist does not need to review them. It would be a major advantage if the computer could eliminate the 20-30% of cases that have nothing on them that would concern a human reader. Even this, however, carries a risk. Screening mammography is a very low-yield process since, again, fewer than 1 mammogram in 100 will come from a woman who is found to have cancer. Consequently, if the computer misses even one cancer that could have been detected had the study been reviewed by a human, the intense screening effort will have been compromised.
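The arithmetic behind this trade-off is simple but worth making explicit. The numbers below are hypothetical examples consistent with "fewer than 1 in 100" screens showing cancer and a 20-30% triage fraction; they are not measured rates.

```python
# Illustrative arithmetic only; rates are hypothetical examples.
screens = 100_000
cancer_rate = 0.005        # e.g. 5 cancers per 1,000 screens (< 1 in 100)
triage_fraction = 0.25     # AI sets aside 25% of studies as "negative"

cancers = screens * cancer_rate              # expected cancers in the pool
studies_skipped = screens * triage_fraction  # human reads saved by triage

# The benefit is 25,000 fewer studies for radiologists to read; the risk
# is that any of the ~500 cancers landing in that triaged-out pile goes
# undetected, since no human ever looks at those studies.
```

The workload saving is large in absolute terms, but so is the cost of even a tiny error rate applied to a huge screened population, which is why the "negative" pool must be essentially cancer-free.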


In the U.S. we have an additional problem. If computers are used to reduce the number of studies that radiologists have to review by identifying the "negative" studies, and it turns out that the computer missed a case that a human could have found, who will be responsible for the error? In the U.S. the radiologist is responsible if a cancer is missed. Who will be responsible when a computer misses a cancer?


My hope is that AI will get better and better and become sufficiently superior to human readers that we can rely on it to interpret all screening mammograms, so that radiologists can concentrate on "diagnostic" imaging and procedures. To accomplish this, we will need systems in which the computer can explain how it analyzed a study and reached its conclusion. We have not yet reached these fundamentally important goals.


REFERENCES

---------------------------

1. Warren Burhenne LJ, Wood SA, D'Orsi CJ, Feig SA, Kopans DB, O'Shaughnessy KF, Sickles EA, Tabar L, Vyborny CJ, Castellino RA. Potential contribution of computer-aided detection to the sensitivity of screening mammography. Radiology. 2000;215:554-62.

2. Birdwell RL, Ikeda DM, O'Shaughnessy KF, Sickles EA. Mammographic characteristics of 115 missed cancers later detected with screening mammography and the potential utility of computer-aided detection. Radiology. 2001;219:192-202.

3. Freer TW, Ulissey MJ. Screening mammography with computer-aided detection: prospective study of 12,860 patients in a community breast center. Radiology. 2001;220:781-6.