Improving Breast Cancer Detection with Two-in-One Imaging
Taiwan, Fri Dec 27 2024
Breast cancer is a major health concern for women worldwide, including those in Taiwan. Early detection is key, and imaging techniques like mammograms and ultrasounds play a big role. But these methods alone aren't perfect. To improve diagnosis, scientists are combining the two techniques, an approach called cross-modality fusion.
Researchers used images from public datasets, including the RSNA, PAS, and Kaggle collections. They split these into two groups: those showing cancer (malignant) and those not (benign). They also corrected a class imbalance in the ultrasound images, which contained too few examples of one class.
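The article doesn't say exactly how the imbalance was fixed, but a common baseline is to oversample the minority class (real pipelines usually add augmentations such as flips and rotations on top). A minimal sketch, with hypothetical file names:

```python
import random

def oversample(minority, majority, seed=0):
    """Balance two classes by resampling the minority class with
    replacement until it matches the majority class in size."""
    rng = random.Random(seed)
    extra = [rng.choice(minority) for _ in range(len(majority) - len(minority))]
    return minority + extra

# Hypothetical ultrasound file lists (not from the study)
benign = [f"us_benign_{i}.png" for i in range(8)]
malignant = [f"us_malignant_{i}.png" for i in range(2)]

balanced_malignant = oversample(malignant, benign)
print(len(balanced_malignant))  # 8, matching the majority class
```

In practice duplicated images are transformed (rotated, flipped, brightness-shifted) so the model doesn't just memorize repeats.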
Three different models were created:
1. Pre-trained CNNs used as feature extractors, paired with simple classifiers.
2. Models that learned from other tasks (transfer learning).
3. A brand new, custom-made 17-layer CNN.
The custom model did best, with an accuracy of 96.4% and a Kappa score of 92.7%. The transfer learning models were good but not as strong (84.6% accuracy, 69.4% Kappa). The pre-trained models with classifiers did the worst (78% accuracy, 55.9% Kappa).
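The study reports both accuracy and Cohen's Kappa, which measures agreement beyond chance. As a refresher, Kappa can be computed from a binary confusion matrix like this (the numbers below are illustrative, not from the study):

```python
def cohens_kappa(tp, fn, fp, tn):
    """Cohen's Kappa for a binary confusion matrix:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    total = tp + fn + fp + tn
    p_o = (tp + tn) / total  # observed agreement, i.e. plain accuracy
    # Chance agreement from the row/column marginals
    p_yes = ((tp + fn) / total) * ((tp + fp) / total)
    p_no = ((fn + tn) / total) * ((fp + tn) / total)
    p_e = p_yes + p_no
    return (p_o - p_e) / (1 - p_e)

# 90% accuracy on a perfectly balanced test set gives Kappa = 0.8
print(round(cohens_kappa(tp=45, fn=5, fp=5, tn=45), 3))  # 0.8
```

This is why Kappa is lower than accuracy in the reported results: it discounts the agreement a random guesser would achieve.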
Combining the strengths of the two imaging methods worked well, improving how accurately cancer could be found.
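The article doesn't detail the fusion mechanism, but one common scheme is late fusion: each modality produces its own malignancy probability, and the two are combined with a weighted average. A minimal sketch with illustrative numbers:

```python
def fuse_scores(p_mammo, p_ultra, w_mammo=0.5):
    """Late fusion: weighted average of per-modality malignancy
    probabilities (weights are hypothetical, not from the study)."""
    return w_mammo * p_mammo + (1 - w_mammo) * p_ultra

# A borderline mammogram score combined with a confident ultrasound
# score pushes the fused prediction clearly toward malignant.
fused = fuse_scores(p_mammo=0.55, p_ultra=0.90)
print(fused)  # 0.725
```

Deep-learning systems more often fuse intermediate feature vectors from each branch before a shared classifier, but the averaging above captures the core idea: each modality covers the other's blind spots.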
This study shows that combining imaging methods and creating special computer models can make breast cancer diagnosis much better. It could help find cancers earlier and with fewer mistakes, helping patients get the right treatment faster.
https://localnews.ai/article/improving-breast-cancer-detection-with-two-in-one-imaging-d2c4bd6a
questions
What factors might have contributed to the variation in performance between the pre-trained CNN models and the custom-designed CNN?
How did the data augmentation techniques specifically address the imbalances in the ultrasound dataset?
Could the high accuracy rates mask weaknesses, such as overfitting to the datasets used, that would appear in real clinical settings?