Research Article | Clinical Studies
Open Access

A Multi-label Artificial Intelligence Approach for Improving Breast Cancer Detection With Mammographic Image Analysis

JUN HYEONG PARK, JUNE HYUCK LIM, SEONHWA KIM and JAESUNG HEO
In Vivo November 2024, 38 (6) 2864-2872; DOI: https://doi.org/10.21873/invivo.13767
JUN HYEONG PARK1,2,3, JUNE HYUCK LIM1,2, SEONHWA KIM1,2 and JAESUNG HEO1,2

1Department of Radiation Oncology, Ajou University School of Medicine, Suwon, Republic of Korea; 2Ajou Healthcare AI Research Center, Suwon, Republic of Korea; 3Department of Biomedical Sciences, Graduate School of Ajou University, Suwon, Republic of Korea

Correspondence: nahero@ajou.ac.kr

Abstract

Background/Aim: Breast cancer remains a major global health concern. This study aimed to develop a deep-learning-based artificial intelligence (AI) model that predicts the malignancy of mammographic lesions and reduces unnecessary biopsies in patients with breast cancer. Patients and Methods: In this retrospective study, we used deep-learning-based AI to predict whether lesions in mammographic images are malignant. The AI model learned the malignancy as well as margins and shapes of mass lesions through multi-label training, similar to the diagnostic process of a radiologist. We used the Curated Breast Imaging Subset of Digital Database for Screening Mammography. This dataset includes annotations for mass lesions, and we developed an algorithm to determine the exact location of the lesions for accurate classification. A multi-label classification approach enabled the model to recognize malignancy and lesion attributes. Results: Our multi-label classification model, trained on both lesion shape and margin, demonstrated superior performance compared with models trained solely on malignancy. Gradient-weighted class activation mapping analysis revealed that by considering the margin and shape, the model assigned higher importance to border areas and analyzed pixels more uniformly when classifying malignant lesions. This approach improved diagnostic accuracy, particularly in challenging cases, such as American College of Radiology Breast Imaging-Reporting and Data System categories 3 and 4, where the breast density exceeded 50%. Conclusion: This study highlights the potential of AI in improving the diagnosis of breast cancer. By integrating advanced techniques and modern neural network designs, we developed an AI model with enhanced accuracy for mammographic image analysis.

Key Words:
  • Artificial intelligence
  • breast cancer
  • classification
  • diagnosis
  • mammography

Breast cancer is one of the most common cancers affecting women worldwide (1). A concerning statistic revealed that one in every eight women in the United States will be diagnosed with this condition during their lifetime (2). The importance of early detection cannot be overstated; diagnosis at stage 0 or 1 promises a 5-year survival rate of 99% (3). However, for those detected at stage 3, the rate decreases drastically to 72% (4). Accordingly, an accurate diagnosis of breast cancer has become an important research topic in the medical field (5).

Mammography, the gold standard for early breast cancer detection, primarily identifies tumor location and size. However, this method can miss 10-30% of malignancies (6). Furthermore, among women recommended for biopsy based on mammographic findings (7), 80% have benign conditions (8). Unnecessary breast biopsy causes a short-term decrease in the patient’s quality of life owing to pre-biopsy anxiety and pain (9). When diagnosis is delayed, tumor characteristics are less favorable than at initial diagnosis, resulting in more mastectomies (10). Additionally, mammography sometimes fails to detect malignancies in women with dense breasts, as dense tissue can obscure tumors (11).

The potential of artificial intelligence (AI) to address these shortcomings has become a popular research topic (12, 13). However, it is unclear whether an AI model draws on actual medical knowledge, as a specialist does, when making decisions, and its interpretability remains limited (14). Many of these models have achieved detection accuracies of approximately 80-85% (15, 16). Moreover, they are vulnerable to data imbalance, leading to misclassification rates of up to 15% (17-20).

In this study, we developed an AI model to classify malignant lesions using mammographic images from the widely recognized Curated Breast Imaging Subset of Digital Database for Screening Mammography (CBIS-DDSM). We aimed to design AI methods that mirror the intricate observations made by radiologists and capture the diverse locations and morphologies of lesions. To achieve this objective, we integrated a sophisticated cascade region-based convolutional neural network (R-CNN) design and employed data-augmentation techniques. Our primary goal was to increase the accuracy of lesion detection using mammography. We hypothesized that by imitating the diagnostic methods of actual radiologists, deep-learning-based AI can learn the properties of mass lesions and thereby improve the accuracy of malignancy prediction from mammography.

Patients and Methods

Dataset. This study was approved by the Institutional Review Board (IRB) of the authors’ affiliated institutions (IRB No. AJOUIRB-EX-2023-422). The need for informed consent from participants was waived by the IRB because of the retrospective nature of this study. In this study, we used the CBIS-DDSM in the Digital Imaging and Communications in Medicine format for patients with breast cancer (21). The repository includes 3,568 radiographic images from 1,566 patients. Each patient’s dataset predominantly featured craniocaudal (CC) and mediolateral oblique (MLO) views. To integrate the shape and margin properties of mass lesions into our model training framework, which is the main focus of our study, we excluded images without mass lesions from the learning process. As a result, a total of 892 patients and 1,696 mammography images were used in this study. Among these, the test dataset, as defined by the CBIS-DDSM data curators, consisted of 201 patients and 378 images (Table I). The entire cohort included only malignant and benign tumors. Detailed information on the shapes and margins of the mass lesions is shown in Table II. To ensure a balanced distribution of malignant and benign cases in the training and validation datasets, we employed stratified k-fold cross-validation using the StratifiedKFold module from the Python library scikit-learn. This technique partitioned the data into 5 folds while preserving the relative class frequencies, maintaining an approximate 8:2 ratio of malignant to benign cases in each fold. By using stratification, we mitigated potential bias and promoted a balanced learning process for our model.
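The stratified split described above can be sketched as follows. This is a minimal illustration with synthetic labels at the ~8:2 malignant-to-benign ratio noted in the text; the patient IDs, label array, and random seed are assumptions, not values from the study.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# Synthetic binary labels with an ~8:2 malignant-to-benign ratio (assumption).
rng = np.random.default_rng(0)
labels = rng.choice([1, 0], size=1000, p=[0.8, 0.2])
patients = np.arange(labels.size)  # hypothetical one-entry-per-patient IDs

# Partition into 5 folds while preserving the relative class frequencies.
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
for fold, (train_idx, val_idx) in enumerate(skf.split(patients, labels)):
    # Each fold keeps approximately the same malignant fraction as the whole set.
    print(f"fold {fold}: train malignant fraction={labels[train_idx].mean():.2f}, "
          f"val malignant fraction={labels[val_idx].mean():.2f}")
```

Stratification is what keeps every fold's class balance close to the global 8:2 ratio, which a plain `KFold` split would not guarantee.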

Table I.

Distribution of patients and mass images in the training and testing sets of the Curated Breast Imaging Subset of Digital Database for Screening Mammography dataset.

Table II.

Mass margins and shapes and number of images.

Data preprocessing. Considering the memory limitations of the graphics processing unit, the images were resized using linear interpolation. During this process, axial padding was applied to preserve the original aspect ratio while producing square inputs. Before being fed into the machine learning model, the radiographic data were normalized, constraining the pixel values to the interval [0, 1].

To prepare the input data for the detection model, the mammographic images were resized to a height of 960 pixels while maintaining their aspect ratio. This resizing process ensures that the images are compatible with the input requirements of the detection model while preserving the relative dimensions of the lesions. Once the detection model identified the lesions within the mammographic images, the regions of interest (ROIs) containing the lesions were extracted. These ROIs were then resized to a fixed size of 224 pixels in width and 224 pixels in height. The resized lesion ROIs served as the input for the malignancy classification model. This two-stage approach, consisting of lesion detection followed by malignancy classification, allows for a more focused analysis of the lesions and helps improve the overall accuracy of the breast cancer diagnosis system.
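The aspect-preserving padding and [0, 1] normalization steps above can be sketched in NumPy. This is a minimal sketch: the authors' exact resizing library and interpolation settings are not specified beyond "linear interpolation", so only the padding and normalization are shown, and the helper names are hypothetical.

```python
import numpy as np

def pad_to_square(img: np.ndarray, fill: float = 0.0) -> np.ndarray:
    """Pad a 2-D grayscale image along its shorter axis so it becomes
    square, preserving the aspect ratio of the original content."""
    h, w = img.shape
    size = max(h, w)
    out = np.full((size, size), fill, dtype=img.dtype)
    top, left = (size - h) // 2, (size - w) // 2
    out[top:top + h, left:left + w] = img
    return out

def normalize01(img: np.ndarray) -> np.ndarray:
    """Min-max normalize pixel values to the interval [0, 1]."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

# Example: a tall 6x4 'image' becomes a 6x6 square with values in [0, 1].
x = np.arange(24, dtype=np.float64).reshape(6, 4)
y = normalize01(pad_to_square(x))
print(y.shape)  # (6, 6)
```

Padding before resizing is what keeps lesion shapes from being stretched, which matters because shape is itself a training label.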

Image augmentation. Compared with images from other fields, medical images are typically characterized by scarcity and class imbalance. Due to these characteristics, augmentation is crucial for achieving generalized model performance (22). We employed the following augmentation techniques to develop a model with robust and powerful generalization capabilities.

Brightness adjustment: Image brightness and saturation were modulated using hue/saturation, brightness/contrast, and gamma adjustments.

Elastic transformations: Techniques such as optical distortion, grid distortion, elastic transformation, and shift-scale-rotate were used to introduce various elastic deformations into the images.

Noise injection: Blur, Gaussian noise, and multiplicative noise were applied to the images.

Image blending techniques: The mix-up technique was used to overlap two images to produce a single composite, whereas the Mosaic (23) technique was used to combine four training images into a single one.
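The mix-up blending step can be sketched as follows. This is a minimal sketch of the standard mix-up recipe: drawing the mixing coefficient from a Beta distribution and blending labels alongside images are conventions of that recipe, assumed here rather than reported by the authors.

```python
import numpy as np

def mixup(img_a, img_b, label_a, label_b, alpha=0.4, rng=None):
    """Blend two images (and their labels) into one composite sample."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing coefficient in (0, 1)
    image = lam * img_a + (1.0 - lam) * img_b
    label = lam * label_a + (1.0 - lam) * label_b
    return image, label, lam

# Two toy 'images' (all zeros vs. all ones) with labels 0.0 and 1.0.
a, b = np.zeros((4, 4)), np.ones((4, 4))
img, lab, lam = mixup(a, b, 0.0, 1.0, rng=np.random.default_rng(0))
# Every pixel of the composite equals 1 - lam, as does the blended label.
```

Because the composite image carries a weighted label rather than a hard one, the model sees smoothed decision boundaries, which is the source of mix-up's robustness to adversarial data.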

Object detection in mammographic images. Recognizing that identifying the location of lesions is critical for a more precise diagnosis, we initially implemented an object detection model to identify these lesions, followed by their classification. A cascade R-CNN was applied to further fine-tune the detection of lesion locations using supervised learning. The detection performance was evaluated using mean average precision (mAP), which is a popular metric that provides a comprehensive assessment of object detection accuracy by averaging the precision values across different recall levels.
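Mean average precision is built on the intersection-over-union overlap between predicted and annotated boxes. The core IoU computation can be sketched as follows; this is the standard definition for boxes in (x1, y1, x2, y2) corner format, not code from the authors' pipeline.

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x1 corner: intersection 1, union 7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1/7 ≈ 0.1429
```

A detection counts as correct at, e.g., mAP@0.5 only when its IoU with a ground-truth box reaches 0.5; averaging precision over recall levels and thresholds yields the mAP values reported in the Results.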

Multi-label classification. ViT-B DINO-v2, a vision transformer model pre-trained using the self-supervised learning method DINO (Self-Distillation with No Labels) (24), was used for training during the supervised learning phase. Vision transformers (ViTs) have recently emerged as a powerful alternative to convolutional neural networks (CNNs) for image classification tasks, showing improved performance and generalization capabilities (25). We applied multi-label classification to concurrently train the model on the malignancy and mass characteristics of the lesion. By learning multiple labels associated with the image, the AI model can capture a wide range of image features and their intricate relationships (26). This approach closely mirrors the diagnostic processes employed by radiologists as recommended by the American College of Radiology Breast Imaging Reporting and Data System (ACR BI-RADS) guidelines (27).

The shape and margin of a mass provide critical information about its relationship with the surrounding tissues. A mass with an irregular margin or asymmetric shape, or one that has blurred boundaries with adjacent tissues or appears to infiltrate them, often indicates a malignancy (28). Rather than merely focusing on a binary classification into benign or malignant, we designed our training model to consider these intricate characteristics of the mass (Figure 1). In the final step, the model accepted two image views, namely CC and MLO. The maximum probability from these lesion images was computed to predict the likelihood of the patient’s breast being malignant (Figure 2).
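The final patient-level decision described above can be sketched as follows. The per-view probabilities are hypothetical placeholders; the classifier producing them is assumed.

```python
def patient_malignancy(view_probs: dict[str, float]) -> float:
    """Combine per-view lesion malignancy probabilities (e.g., CC and MLO)
    by taking the maximum: the most suspicious view drives the call."""
    return max(view_probs.values())

# Hypothetical per-view model outputs for one breast.
probs = {"CC": 0.62, "MLO": 0.81}
print(patient_malignancy(probs))  # 0.81
```

Taking the maximum rather than the mean reflects the clinical stance that a lesion appearing suspicious in any single view warrants follow-up, even if it is occluded or ambiguous in the other view.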

Figure 1.

Region of interest (ROI) classification. This figure shows the architecture of a classifier that learns to classify lesion images found by the detector. We used three classifiers, learning the malignancy (malignant or benign), shape, and margin of the mass, respectively. Through this process, the encoder creates a better image representation.

Figure 2.

Training process. The detector identifies lesions in the mammography image, which are then learned using a multi-label classification method with a vision transformer.

To assess the performance of our multi-label classification model, we employed several widely used evaluation metrics, including the area under the receiver operating characteristic curve (AUROC), F1 score, and accuracy. The AUROC measures the model’s ability to discriminate between classes, with a higher value indicating better performance. The F1 score is the harmonic mean of precision and recall, providing a balanced measure of the model’s accuracy. Accuracy represents the proportion of correct predictions made by the model. Moreover, to gain insights into the regions of the images that most influenced the model’s predictions, we utilized Gradient-weighted Class Activation Mapping (GradCAM). GradCAM generates a heatmap that highlights the important regions of the input image for a particular class prediction. Comparing the GradCAM visualizations of our multi-label model with those of a model trained solely on lesion malignancy facilitates a better understanding of the effect of incorporating mass shape and margin information into the model’s decision-making process.
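The three evaluation metrics above can be computed with scikit-learn. This is a minimal sketch on toy labels and scores; the arrays are illustrative placeholders, not the study's predictions.

```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, roc_auc_score

# Toy ground truth (1 = malignant) and hypothetical probability scores.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 1])
y_score = np.array([0.9, 0.6, 0.7, 0.4, 0.2, 0.8, 0.3, 0.55])
y_pred = (y_score >= 0.5).astype(int)  # hard predictions at threshold 0.5

# AUROC is threshold-free (uses scores); F1 and accuracy use hard labels.
print(f"AUROC   : {roc_auc_score(y_true, y_score):.3f}")
print(f"F1 score: {f1_score(y_true, y_pred):.3f}")
print(f"Accuracy: {accuracy_score(y_true, y_pred):.3f}")
```

Note that AUROC is computed from the continuous scores while F1 and accuracy depend on the chosen decision threshold, which is why the three metrics can rank models differently.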

Results

For mass detection on mammograms, our model demonstrated promising results, achieving a mean average precision (mAP) of 0.713, mAP@0.5 of 0.843, and mAP@0.75 of 0.628 (Figure 3). To address the complexity of mass lesions, we employed multi-label classification, training our model simultaneously on two distinct attributes: the shape and the margin of the mass. When trained on mass shape and margin, our model achieved an F1 score of 0.7789 (95% CI=0.7712-0.7867) and an AUROC of 0.8522 (95% CI=0.8480-0.8565).

Figure 3.

Lesion detection model performance and examples. Example predictions of the lesion detection model on mammography images are shown. The red box marks the ground truth (the actual lesion), and the green box marks the lesion location and probability value predicted by the model. The table below evaluates the detection performance of the model using mean average precision (mAP) across various intersection-over-union (IoU) thresholds. Our model achieved a performance of 0.843 at mAP@0.5.

In contrast, when trained solely on the malignancy of the lesion, the model’s performance decreased slightly, with an F1 score of 0.7697 (95% CI=0.7692-0.7703) and an AUROC of 0.8408 (95% CI=0.8362-0.8440). The difference in performance between the two models was statistically significant (p=0.0112 for the F1 score, p=0.0018 for the AUROC) (Table III).

Table III.

Malignancy classification performance on the Curated Breast Imaging Subset of Digital Database for Screening Mammography (the developed model versus other models).

Figure 4 compares GradCAM results when the model was trained only on the malignancy of the lesion and when it was trained on the overall characteristics (margin and shape) of the tumor. GradCAM confirmed that the model that learned the margin and shape types assigned higher weight to the border area and attended to the pixels more uniformly when classifying malignant lesions.

Figure 4.

Comparison of the developed model with other models using gradient-weighted class activation mapping. The image features generated by an image encoder that learns both the margin and shape of the mass delineate the lesion more specifically than those of an encoder that learns only whether the lesion is malignant.

In contrast, the model that learned only whether a malignant lesion was present tended to make decisions based on only a few specific pixels. Table IV presents the performance evaluation results by breast density, divided according to the ACR BI-RADS guidelines. In this case, we obtained the probability of malignancy for lesions detected in four mammographic views: the LEFT and RIGHT breasts in both the CC and MLO projections. The maximum probability was then used to evaluate whether the patient had a malignant lesion. In particular, in categories 3 and 4, where the density was ≥50% (29), our methodology predicted breast cancer better than the model trained only on lesion malignancy.

Table IV.

Performance results according to breast density.

Discussion

In this study, we successfully developed an AI model tailored to classify lesions using the CBIS-DDSM. The pathological diagnosis of malignant lesions poses a unique challenge. Our model is designed to capture the different shapes and margins of the mass. Before this task, we developed a lesion detection model for lesion classification; with a ConvNeXt backbone integrated into a cascade R-CNN detector, it showed notable precision and efficiency. Our model, which employs this architecture, significantly outperformed conventional models (30). We believe that the insights gained from this study can strengthen the role of AI in the medical domain.

Image augmentation expands the effective size of a dataset by applying various transformations to existing images (31). Augmentation techniques are especially essential for medical images, which often involve complex distributions, limited generality, severe class imbalance (32), and a shortage of available data. In our study, we harnessed the capabilities of the “albumentations” and “mmdetection” Python libraries to implement diverse augmentation strategies. One such strategy, distortion, applies elastic transformations to warp the image. Given the inherent variability in lesion shapes on mammographic images among different patients (33), such augmentation strengthens the model’s generalization across a broad spectrum of image variants (34). Furthermore, we used the mix-up technique, wherein two images are superimposed, conferring robustness against adversarial data and curtailing the risk of overfitting (35).

ResNet, a traditional CNN model, has long demonstrated strong performance using a simple idea, the residual block. However, this model does not reflect cross-channel information (36). The overall structure of ConvNeXt is similar to that of ResNet50. Nevertheless, it comprises a stem feature-extraction layer, a middle section in which bottleneck blocks of four different dimensions are stacked in separate stages, and a final high-dimensional feature classification layer (37).

The architecture of standard CNNs was modernized to construct a hierarchical vision transformer, enabling newer models to surpass the performance of existing CNN models, including ResNet (38). ConvNeXt has a multistage design with varying feature map resolutions for each stage. Therefore, it is easy to extract features from radiographic images (39).

The cascade R-CNN introduces a three-stage detector structure to rectify these challenges. The intersection over union (IoU) is the most popular evaluation metric for object detection benchmarks (40). The IoU is the overlap ratio between the predicted and annotated regions, and a high IoU indicates that the model is good at finding objects against the background (41). Each stage was trained with an incrementally increasing IoU threshold. Initially trained with an IoU of 0.5, the detector generated region proposals. These proposals were then fed into the subsequent detector, which was trained with an IoU of 0.6. Finally, the output values were derived from a detector operating at an IoU of 0.7. This methodology substantially mitigates overfitting and ensures consistent performance during the training and inference phases (42).
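The staged training described above can be illustrated with a toy proposal-filtering loop. This is a conceptual sketch of the increasing-IoU idea only, not the actual cascade R-CNN training code; the IoU helper is the standard definition, and the boxes are invented examples.

```python
def iou(a, b):
    """IoU of two axis-aligned boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def cascade_positive_counts(proposals, gt, thresholds=(0.5, 0.6, 0.7)):
    """For each cascade stage, count the proposals still treated as
    positives under that stage's stricter IoU threshold."""
    return [sum(iou(p, gt) >= t for p in proposals) for t in thresholds]

gt = (0, 0, 10, 10)
# Proposals with IoUs of 1.0, 0.64, 0.56, and 0.30 against the ground truth.
proposals = [(0, 0, 10, 10), (0, 0, 8, 8), (0, 0, 7, 8), (0, 0, 5, 6)]
print(cascade_positive_counts(proposals, gt))  # [3, 2, 1]
```

Each stage thus keeps only the better-localized proposals from the previous one, which is why the later stages can be trained at stricter thresholds without collapsing the pool of positive examples.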

The unique characteristics of cascade R-CNNs are essential for detecting lesions on mammographic images. Lesions often require comparison with surrounding tissues for accurate assessment (43), making higher IoU values indispensable for precision detection. The trilevel structure of the cascade R-CNN facilitates this process. Furthermore, the integration of this detector considerably augmented the accuracy of lesion detection in dense breast tissue (Figure 3).

The core of our model for classifying the malignancy of lesions after detection is the multi-label classification approach. In the GradCAM analysis, our methodology focused on pixels that closely resembled the actual ROI mask of the mass lesions (Figure 4). Notably, the performance difference depended on breast density. Our model did not reveal a significant performance increase over the model that learned only the malignancy of lesions when the density was <25%. However, for cases with density >25%, a significant improvement in AUROC was observed (Table IV).

Study limitations. First, the data used for training the model were exclusively derived from the CBIS-DDSM. The retrospective nature of the data may introduce potential biases and limit the generalizability of the findings. Moreover, the model’s performance on other datasets, particularly prospective data, is yet to be evaluated. External validation using independent datasets is necessary to assess the model’s robustness and generalizability. Furthermore, it remains unclear how the AI model employed in this study classifies other types of breast cancer. Second, our model follows a two-stage approach, detecting lesions and then classifying their malignancy, rather than a more streamlined end-to-end structure. Therefore, a method is required to manage cases in which the detector fails to identify a lesion. Third, further improvement in the prediction score could be achieved using techniques such as ensembles with other models. Finally, convolutional networks are prone to information loss during pooling (44). Comparing the results using a Swin transformer (45), a transformer model, as the backbone is therefore warranted.

Conclusion

In conclusion, our findings demonstrate the potential of AI in improving breast cancer diagnosis, particularly regarding dense breast tissue. By integrating cutting-edge techniques with contemporary neural network designs, we present an AI model demonstrating superior accuracy in mammographic image analysis. Future studies should assess its broader clinical significance and potential utility in patients with dense breast tissues.

Acknowledgements

This research was supported by a grant from the Korea Health Technology R&D Project through the Korea Health Industry Development Institute (KHIDI), funded by the Ministry of Health & Welfare, Republic of Korea (Grant numbers: HR21C1003, HR22C1734). This work was also supported by the research fund of Ajou University Medical Center (2023).

Footnotes

  • Authors’ Contributions

Conceived and designed the analysis: J.H.P., J.H.; Collected the data: J.H.P., J.H.L., S.K.; Contributed data or analysis tools: J.H.P., J.H.; Performed the analysis: J.H.P.; Wrote the paper: J.H.P.; Manuscript editing: J.H.

  • Conflicts of Interest

    The Authors have no conflicts of interest to declare relevant to this article.

  • Received July 8, 2024.
  • Revision received July 17, 2024.
  • Accepted July 18, 2024.
  • Copyright © 2024 The Author(s). Published by the International Institute of Anticancer Research.

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY-NC-ND) 4.0 international license (https://creativecommons.org/licenses/by-nc-nd/4.0).

References

  1. ↵
    1. Jokhadze N,
    2. Das A,
    3. Dizon DS
    : Global cancer statistics: A healthy population relies on population health. CA Cancer J Clin 74(3): 224-226, 2024. DOI: 10.3322/caac.21838
    OpenUrlCrossRefPubMed
  2. ↵
    1. DeSantis C,
    2. Ma J,
    3. Bryan L,
    4. Jemal A
    : Breast cancer statistics, 2013. CA Cancer J Clin 64(1): 52-62, 2014. DOI: 10.3322/caac.21203
    OpenUrlCrossRefPubMed
  3. ↵
    1. Tabár L,
    2. Chen TH,
    3. Yen AM,
    4. Dean PB,
    5. Smith RA,
    6. Jonsson H,
    7. Törnberg S,
    8. Chen SL,
    9. Chiu SY,
    10. Fann JC,
    11. Ku MM,
    12. Wu WY,
    13. Hsu CY,
    14. Chen YC,
    15. Svane G,
    16. Azavedo E,
    17. Grundström H,
    18. Sundén P,
    19. Leifland K,
    20. Frodis E,
    21. Ramos J,
    22. Epstein B,
    23. Åkerlund A,
    24. Sundbom A,
    25. Bordás P,
    26. Wallin H,
    27. Starck L,
    28. Björkgren A,
    29. Carlson S,
    30. Fredriksson I,
    31. Ahlgren J,
    32. Öhman D,
    33. Holmberg L,
    34. Duffy SW
    : Early detection of breast cancer rectifies inequality of breast cancer outcomes. J Med Screen 28(1): 34-38, 2021. DOI: 10.1177/0969141320921210
    OpenUrlCrossRefPubMed
  4. ↵
    1. Thomas A,
    2. Rhoads A,
    3. Pinkerton E,
    4. Schroeder MC,
    5. Conway KM,
    6. Hundley WG,
    7. McNally LR,
    8. Oleson J,
    9. Lynch CF,
    10. Romitti PA
    : Incidence and survival among young women with stage I-III breast cancer: SEER 2000-2015. JNCI Cancer Spectr 3(3): pkz040, 2019. DOI: 10.1093/jncics/pkz040
    OpenUrlCrossRef
  5. ↵
    1. Bhushan A,
    2. Gonsalves A,
    3. Menon JU
    : Current state of breast cancer diagnosis, treatment, and theranostics. Pharmaceutics 13(5): 723, 2021. DOI: 10.3390/pharmaceutics13050723
    OpenUrlCrossRefPubMed
  6. ↵
    1. Iranmakani S,
    2. Mortezazadeh T,
    3. Sajadian F,
    4. Ghaziani MF,
    5. Ghafari A,
    6. Khezerloo D,
    7. Musa AE
    : A review of various modalities in breast imaging: technical aspects and clinical outcomes. Egypt J Radiol Nucl Med 51(1): 57, 2020. DOI: 10.1186/s43055-020-00175-5
    OpenUrlCrossRef
  7. ↵
    1. Majid AS,
    2. de Paredes ES,
    3. Doherty RD,
    4. Sharma NR,
    5. Salvador X
    : Missed breast carcinoma: Pitfalls and pearls. Radiographics 23(4): 881-895, 2003. DOI: 10.1148/rg.234025083
    OpenUrlCrossRefPubMed
  8. ↵
    1. Ekpo EU,
    2. Alakhras M,
    3. Brennan P
    : Errors in mammography cannot be solved through technology alone. Asian Pac J Cancer Prev 19(2): 291-301, 2018. DOI: 10.22034/APJCP.2018.19.2.291
    OpenUrlCrossRefPubMed
  9. ↵
    1. Humphrey KL,
    2. Lee JM,
    3. Donelan K,
    4. Kong CY,
    5. Williams O,
    6. Itauma O,
    7. Halpern EF,
    8. Gerade BJ,
    9. Rafferty EA,
    10. Swan JS
    : Percutaneous breast biopsy: effect on short-term quality of life. Radiology 270(2): 362-368, 2014. DOI: 10.1148/radiol.13130865
    OpenUrlCrossRefPubMed
  10. ↵
    1. van der Veer EL,
    2. Lameijer J,
    3. Coolen AM,
    4. Bluekens AM,
    5. Nederend J,
    6. Gielens M,
    7. Voogd A,
    8. Duijm L
    : Causes and consequences of delayed diagnosis in breast cancer screening with a focus on mammographic features and tumour characteristics. Eur J Radiol 167: 111048, 2023. DOI: 10.1016/j.ejrad.2023.111048
    OpenUrlCrossRefPubMed
  11. ↵
    1. Smetana GW,
    2. Elmore JG,
    3. Lee CI,
    4. Burns RB
    : Should this woman with dense breasts receive supplemental breast cancer screening? Ann Intern Med 169(7): 474-484, 2018. DOI: 10.7326/m18-1822
    OpenUrlCrossRefPubMed
  12. ↵
    1. Jones MA,
    2. Islam W,
    3. Faiz R,
    4. Chen X,
    5. Zheng B
    : Applying artificial intelligence technology to assist with breast cancer diagnosis and prognosis prediction. Front Oncol 12: 980793, 2022. DOI: 10.3389/fonc.2022.980793
    OpenUrlCrossRefPubMed
  13. Brunetti N, Calabrese M, Martinoli C, Tagliafico AS: Artificial intelligence in breast ultrasound: from diagnosis to prognosis - a rapid review. Diagnostics (Basel) 13(1): 58, 2022. DOI: 10.3390/diagnostics13010058
  14. Frasca M, La Torre D, Pravettoni G, Cutica I: Explainable and interpretable artificial intelligence in medicine: a systematic bibliometric review. Discov Artif Intell 4(1): 15, 2024. DOI: 10.1007/s44163-024-00114-7
  15. Agarwal R, Diaz O, Lladó X, Yap MH, Martí R: Automatic mass detection in mammograms using deep convolutional neural networks. J Med Imaging (Bellingham) 6(3): 031409, 2019. DOI: 10.1117/1.JMI.6.3.031409
  16. Shen L, Margolies LR, Rothstein JH, Fluder E, McBride R, Sieh W: Deep learning to improve breast cancer detection on screening mammography. Sci Rep 9(1): 12495, 2019. DOI: 10.1038/s41598-019-48995-4
  17. Chen JL, Cheng LH, Wang J, Hsu TW, Chen CY, Tseng LM, Guo SM: A YOLO-based AI system for classifying calcifications on spot magnification mammograms. Biomed Eng Online 22(1): 54, 2023. DOI: 10.1186/s12938-023-01115-w
      Han S, Kang HK, Jeong JY, Park MH, Kim W, Bang WC, Seong YK: A deep learning framework for supporting the classification of breast lesions in ultrasound images. Phys Med Biol 62(19): 7714-7728, 2017. DOI: 10.1088/1361-6560/aa82ec
      Tanaka H, Chiu SW, Watanabe T, Kaoku S, Yamaguchi T: Computer-aided diagnosis system for breast ultrasound images using deep learning. Phys Med Biol 64(23): 235013, 2019. DOI: 10.1088/1361-6560/ab5093
  18. Cao Z, Duan L, Yang G, Yue T, Chen Q: An experimental study on breast lesion detection and classification from ultrasound images using deep learning architectures. BMC Med Imaging 19(1): 51, 2019. DOI: 10.1186/s12880-019-0349-x
  19. Lee RS, Gimenez F, Hoogi A, Miyake KK, Gorovoy M, Rubin DL: A curated mammography data set for use in computer-aided detection and diagnosis research. Sci Data 4: 170177, 2017. DOI: 10.1038/sdata.2017.177
  20. Chlap P, Min H, Vandenberg N, Dowling J, Holloway L, Haworth A: A review of medical image data augmentation techniques for deep learning applications. J Med Imaging Radiat Oncol 65(5): 545-563, 2021. DOI: 10.1111/1754-9485.13261
  21. Shorten C, Khoshgoftaar TM: A survey on Image Data Augmentation for Deep Learning. J Big Data 6(1): 60, 2019. DOI: 10.1186/s40537-019-0197-0
  22. Oquab M, Darcet T, Moutakanni T, Vo HQ, Szafraniec M, Khalidov V, Fernandez P, Haziza D, Massa F, El-Nouby A, Assran M, Ballas N, Galuba W, Howes R, Huang PY, Li SW, Misra I, Rabbat MG, Sharma V, Synnaeve G, Xu H, Jégou H, Mairal J, Labatut P, Joulin A, Bojanowski P: DINOv2: Learning robust visual features without supervision. arXiv: 2304.07193, 2023. DOI: 10.48550/arXiv.2304.07193
  23. Dosovitskiy A, Beyer L, Kolesnikov A, Weissenborn D, Zhai X, Unterthiner T, Dehghani M, Minderer M, Heigold G, Gelly S, Uszkoreit J, Houlsby N: An image is worth 16x16 words: Transformers for image recognition at scale. arXiv: 2010.11929, 2020. DOI: 10.48550/arXiv.2010.11929
  24. Zhang ML, Zhou ZH: A review on multi-label learning algorithms. IEEE Trans Knowl Data Eng 26: 1819-1837, 2014.
  25. Shi J, Sahiner B, Chan HP, Ge J, Hadjiiski L, Helvie MA, Nees A, Wu YT, Wei J, Zhou C, Zhang Y, Cui J: Characterization of mammographic masses based on level set segmentation with new image features and patient information. Med Phys 35(1): 280-290, 2008. DOI: 10.1118/1.2820630
  26. Rawashdeh M, Lewis S, Zaitoun M, Brennan P: Breast lesion shape and margin evaluation: BI-RADS based metrics understate radiologists' actual levels of agreement. Comput Biol Med 96: 294-298, 2018. DOI: 10.1016/j.compbiomed.2018.04.005
  27. Gemici AA, Bayram E, Hocaoglu E, Inci E: Comparison of breast density assessments according to BI-RADS 4th and 5th editions and experience level. Acta Radiol Open 9(7): 2058460120937381, 2020. DOI: 10.1177/2058460120937381
  28. An J, Yu H, Bai R, Li J, Wang Y, Cao R: Detection and segmentation of breast masses based on multi-layer feature fusion. Methods 202: 54-61, 2022. DOI: 10.1016/j.ymeth.2021.04.022
  29. Saini D, Malik R: Image data augmentation techniques for deep learning - a mirror review. In: 2021 9th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), pp. 1-5, 2021.
  30. Yin X, Li Y, Zhang X, Shin BS: Medical image augmentation using image synthesis with contextual function. In: 2019 12th International Congress on Image and Signal Processing, BioMedical Engineering and Informatics (CISP-BMEI), pp. 1-6, 2019.
  31. Berment H, Becette V, Mohallem M, Ferreira F, Chérel P: Masses in mammography: What are the underlying anatomopathological lesions? Diagn Interv Imaging 95(2): 124-133, 2014. DOI: 10.1016/j.diii.2013.12.010
  32. Li P, Li D, Li W, Gong S, Fu Y, Hospedales TM: A simple feature augmentation for domain generalization. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 8866-8875, 2021.
  33. Zhang H, Cissé M, Dauphin YN, Lopez-Paz D: mixup: Beyond empirical risk minimization. In: International Conference on Learning Representations (ICLR), 2018.
  34. Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L: Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. J Big Data 8(1): 53, 2021. DOI: 10.1186/s40537-021-00444-8
  35. Tian G, Wang Z, Wang C, Chen J, Liu G, Xu H, Lu Y, Han Z, Zhao Y, Li Z, Luo X, Peng L: A deep ensemble learning-based automated detection of COVID-19 using lung CT images and Vision Transformer and ConvNeXt. Front Microbiol 13: 1024104, 2022. DOI: 10.3389/fmicb.2022.1024104
  36. Liu Z, Mao H, Wu CY, Feichtenhofer C, Darrell T, Xie S: A ConvNet for the 2020s. In: 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 11966-11976, 2022.
  37. Hassanien MA, Singh VK, Puig D, Abdel-Nasser M: Predicting breast tumor malignancy using deep ConvNeXt radiomics and quality-based score pooling in ultrasound sequences. Diagnostics (Basel) 12(5): 1053, 2022. DOI: 10.3390/diagnostics12051053
  38. Rezatofighi H, Tsoi N, Gwak J, Sadeghian A, Reid I, Savarese S: Generalized intersection over union: A metric and a loss for bounding box regression. In: 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 658-666, 2019.
  39. Yang R, Yu Y: Artificial convolutional neural network in object detection and semantic segmentation for medical imaging analysis. Front Oncol 11: 638182, 2021. DOI: 10.3389/fonc.2021.638182
  40. Cai Z, Vasconcelos N: Cascade R-CNN: Delving into high quality object detection. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pp. 6154-6162, 2018.
  41. Zhou J, Zhang Y, Chang KT, Lee KE, Wang O, Li J, Lin Y, Pan Z, Chang P, Chow D, Wang M, Su MY: Diagnosis of benign and malignant breast lesions on DCE-MRI by using radiomics and deep learning with consideration of peritumor tissue. J Magn Reson Imaging 51(3): 798-809, 2020. DOI: 10.1002/jmri.26981
  42. Özdemir C: Avg-topk: A new pooling method for convolutional neural networks. Expert Syst Appl 223: 119892, 2023. DOI: 10.1016/j.eswa.2023.119892
  43. Liu Z, Lin Y, Cao Y, Hu H, Wei Y, Zhang Z, Lin S, Guo B: Swin Transformer: Hierarchical vision transformer using shifted windows. In: 2021 IEEE/CVF International Conference on Computer Vision (ICCV), pp. 9992-10002, 2021.
Keywords

  • artificial intelligence
  • breast cancer
  • classification
  • diagnosis
  • mammography