Adversarial Attacks & Detection on a Deep Learning-Based Digital Pathology Model

Citation:

Vali E, Alexandridis G, Stafylopatis A. Adversarial Attacks & Detection on a Deep Learning-Based Digital Pathology Model. In: 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW); 2023. pp. 1-5.

Date Presented:

June 2023

Abstract:

Medical imaging modalities, such as magnetic resonance imaging (MRI), have enabled the efficient diagnosis of various conditions, including cancer, lung disease, and brain tumors. With advances in machine learning, AI-based medical image segmentation and classification systems have emerged that could potentially replace human diagnosis. However, the security and robustness of these systems are crucial, as previous studies have demonstrated their vulnerability to adversarial attacks. In this respect, the current work explores the impact of the one-pixel attack on the widely used VGG16 model, the effectiveness of combining the one-pixel attack with the fast gradient sign method (FGSM) attack, the potential of the squeezing color bits detector to counter the one-pixel attack, and the possibility of combining the squeezing color bits and PCA whitening detectors to protect against the aforementioned attacks.
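
For context, a minimal sketch of the FGSM attack named in the abstract, written in PyTorch, might look as follows. The untrained VGG16 weights, the epsilon value, and the random input tensors are illustrative assumptions, not the paper's experimental setup with pathology images.

    import torch
    import torch.nn.functional as F
    from torchvision import models

    def fgsm_attack(model, x, y, epsilon=0.01):
        # Perturb the input one signed-gradient step in the direction
        # that increases the classification loss.
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep pixels in [0, 1]

    model = models.vgg16(weights=None)   # VGG16, the architecture the paper studies
    model.eval()
    x = torch.rand(1, 3, 224, 224)       # stand-in for a pathology image
    y = torch.tensor([0])                # stand-in ground-truth label
    x_adv = fgsm_attack(model, x, y)

Similarly, the squeezing color bits detector referenced in the abstract follows the feature-squeezing idea: reduce the input's color bit depth, re-run the model, and flag inputs whose prediction shifts by more than a threshold. The bit depth and threshold below are illustrative assumptions, and the sketch reuses the model defined above.

    def squeeze_bits(x, bits=4):
        # Quantize pixel values in [0, 1] down to 2**bits discrete levels.
        levels = 2 ** bits - 1
        return torch.round(x * levels) / levels

    def is_adversarial(model, x, threshold=0.5, bits=4):
        # Compare predictions on the original and bit-squeezed inputs;
        # a large L1 gap between the two suggests an adversarial input.
        with torch.no_grad():
            p_orig = torch.softmax(model(x), dim=1)
            p_squeezed = torch.softmax(model(squeeze_bits(x, bits)), dim=1)
        return (p_orig - p_squeezed).abs().sum(dim=1) > threshold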