An Explainable AI Paradigm for Alzheimer’s Diagnosis Using Deep Transfer Learning

Mahmud, Tanjim; Barua, Koushick; Habiba, Sultana Umme; Sharmen, Nahed; Hossain, Mohammad Shahadat; Andersson, Karl (2024) An Explainable AI Paradigm for Alzheimer’s Diagnosis Using Deep Transfer Learning. Diagnostics, 14(3), 345. ISSN 2075-4418

Full text: diagnostics-14-00345.pdf (Published Version, 8 MB)

Abstract

Alzheimer’s disease (AD) is a progressive neurodegenerative disorder that affects millions of individuals worldwide, causing severe cognitive decline and memory impairment. Early and accurate diagnosis of AD is crucial for effective intervention and disease management. In recent years, deep learning techniques have shown promising results in medical image analysis, including AD diagnosis from neuroimaging data. However, the lack of interpretability in deep learning models hinders their adoption in clinical settings, where explainability is essential for gaining the trust and acceptance of healthcare professionals. In this study, we propose an explainable AI (XAI)-based approach for the diagnosis of Alzheimer’s disease that leverages deep transfer learning and ensemble modeling. The proposed framework enhances the interpretability of deep learning models by incorporating XAI techniques, allowing clinicians to follow the decision-making process and gain insight into the diagnosis. Using popular pre-trained convolutional neural networks (CNNs), namely VGG16, VGG19, DenseNet169, and DenseNet201, we conducted extensive experiments to evaluate their individual performance on a comprehensive dataset. The proposed ensembles, Ensemble-1 (VGG16 and VGG19) and Ensemble-2 (DenseNet169 and DenseNet201), achieved higher accuracy, precision, recall, and F1 scores than the individual models, reaching up to 95%. To further enhance interpretability and transparency in Alzheimer’s diagnosis, we introduced a novel model that achieves 96% accuracy and incorporates explainable AI techniques, including saliency maps and Grad-CAM (gradient-weighted class activation mapping). The integration of these techniques not only supports the model’s high accuracy but also provides clinicians and researchers with visual insight into the neural regions influencing the diagnosis. Our findings demonstrate the potential of combining deep transfer learning with explainable AI for Alzheimer’s disease diagnosis, paving the way for more interpretable and clinically relevant AI models in healthcare.
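The abstract describes three ingredients: transfer learning from ImageNet-pre-trained CNNs, ensembling of their predictions, and Grad-CAM visualization. The sketch below shows how such a pipeline could be wired together in Keras; it is not the authors' released code. The number of classes, the pooling-and-dropout classification head, and softmax averaging as the fusion rule are assumptions, since the abstract does not specify them.

```python
# Illustrative sketch only (not the paper's code). Assumes TensorFlow 2.x.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16, VGG19

NUM_CLASSES = 4                 # assumption: dementia-stage classes
INPUT_SHAPE = (224, 224, 3)     # assumption: scans resized to 224x224 RGB

def build_transfer_model(backbone_fn, name):
    """Transfer learning: frozen ImageNet backbone + small trainable head."""
    backbone = backbone_fn(include_top=False, weights="imagenet",
                           input_shape=INPUT_SHAPE)
    backbone.trainable = False                      # reuse ImageNet features
    x = layers.GlobalAveragePooling2D()(backbone.output)
    x = layers.Dropout(0.3)(x)                      # assumed head design
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(backbone.input, out, name=name)
    model.compile(optimizer="adam", loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model

vgg16_ad = build_transfer_model(VGG16, "vgg16_ad")
vgg19_ad = build_transfer_model(VGG19, "vgg19_ad")
# ... train each member on the MRI dataset: model.fit(train_ds, ...) ...
# (real use would also apply the backbone's preprocess_input to images)

def ensemble_predict(members, x):
    """Ensemble-1-style soft voting: average the members' softmax outputs
    (the fusion rule is an assumption; the abstract does not state it)."""
    return np.mean([m.predict(x, verbose=0) for m in members], axis=0)

def grad_cam(model, last_conv_name, image, class_idx):
    """Grad-CAM: weight the last conv layer's feature maps by the mean
    gradient of the target class score, then ReLU and normalize."""
    grad_model = models.Model(
        model.input,
        [model.get_layer(last_conv_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_idx]
    grads = tape.gradient(score, conv_out)          # d(score)/d(feature maps)
    weights = tf.reduce_mean(grads, axis=(1, 2))    # global-average the grads
    cam = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights[0], axis=-1))
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()

# Example: highlight the regions driving a VGG16 prediction for one scan
# ("block5_conv3" is VGG16's last convolutional layer).
# heatmap = grad_cam(vgg16_ad, "block5_conv3", mri_image, class_idx=1)
```

A saliency map, the other XAI technique named in the abstract, is obtained analogously by taking the gradient of the class score with respect to the input pixels rather than an intermediate feature map.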

Item Type: Article
Subjects: Institute Archives > Multidisciplinary
Depositing User: Managing Editor
Date Deposited: 06 Feb 2024 06:43
Last Modified: 06 Feb 2024 06:43
URI: http://eprint.subtopublish.com/id/eprint/4078
