Transformer-Augmented Deep Learning Ensemble for Multi-Modal Neuroimaging-Based Diagnosis of Amyotrophic Lateral Sclerosis

Asuai, Clive and Andrew, Mayor and Arinomor, Ayigbe Prince and Ogheneochuko, Daniel Ezekiel and Joseph-Brown, Aghoghovia Agajere and Merit, Ighere and Collins, Atumah (2025) Transformer-Augmented Deep Learning Ensemble for Multi-Modal Neuroimaging-Based Diagnosis of Amyotrophic Lateral Sclerosis. Journal of Computing Theories and Applications, 3 (2). pp. 190-205. ISSN 3024-9104

14661-Article Text-51782-1-10-20251013.pdf - Published Version (492kB)
Available under License Creative Commons Attribution.

Abstract

Amyotrophic Lateral Sclerosis (ALS) is a progressive neurodegenerative disorder that presents significant diagnostic challenges due to its heterogeneous clinical manifestations and symptom overlap with other neurological conditions. Early and accurate diagnosis is critical for initiating timely interventions and improving patient outcomes. Traditional diagnostic approaches rely heavily on clinical expertise and manual interpretation of neuroimaging data, such as structural MRI, Diffusion Tensor Imaging (DTI), and functional MRI (fMRI), which are inherently time-consuming and prone to interobserver variability. Recent advances in Artificial Intelligence (AI) and Deep Learning (DL) have demonstrated potential for automating neuroimaging analysis, yet existing models often suffer from limited generalizability across modalities and datasets. To address these limitations, we propose a Transformer-augmented deep learning ensemble framework for automated ALS diagnosis using multi-modal neuroimaging data. The proposed architecture integrates Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Vision Transformers (ViTs) to leverage the complementary strengths of spatial, temporal, and global contextual feature representations. An adaptive weighting-based fusion mechanism dynamically integrates modality-specific outputs, enhancing the robustness and reliability of the final diagnosis. Comprehensive preprocessing steps, including intensity normalization, motion correction, and modality-specific data augmentation, are employed to ensure cross-modality consistency. Evaluation using 5-fold cross-validation on a curated multi-modal ALS neuroimaging dataset demonstrates the superior performance of the proposed model, achieving a mean classification accuracy of 94.5% ± 0.7%, precision of 93.9% ± 0.8%, recall of 92.9% ± 0.9%, F1-score of 93.4% ± 0.7%, specificity of 97.4% ± 0.6%, and AUC-ROC of 0.968 ± 0.004. These results significantly outperform baseline CNN models and highlight the potential of transformer-augmented ensembles in complex neurodiagnostic applications. This framework offers a promising tool for clinicians, supporting early and precise ALS detection and enabling more personalized and effective patient management strategies.
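The adaptive weighting-based fusion described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the gate scores here are fixed inputs (in the actual model they would be learned), and the branch names and example numbers are purely hypothetical. The sketch shows the general idea of softmax-normalizing per-modality confidence scores and using them to combine modality-specific class logits into a single prediction.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_fusion(branch_logits, gate_scores):
    """Fuse per-modality class logits with softmax-normalized weights.

    branch_logits: (n_modalities, n_classes) outputs of the modality branches
    gate_scores:   (n_modalities,) raw confidence scores (learned in practice;
                   fixed here for illustration only)
    Returns fused class probabilities and the fusion weights.
    """
    weights = softmax(gate_scores)                       # sums to 1
    fused = (weights[:, None] * branch_logits).sum(axis=0)
    return softmax(fused), weights

# Hypothetical two-class (ALS vs. control) logits from three branches,
# e.g. CNN (sMRI), RNN (fMRI), ViT (DTI)
logits = np.array([[2.0, 0.5],
                   [1.2, 1.0],
                   [2.5, 0.2]])
probs, w = adaptive_fusion(logits, gate_scores=np.array([1.0, 0.3, 1.5]))
```

Because the weights are produced by a softmax, the fusion is a convex combination of the branch outputs, so a low-confidence modality is down-weighted rather than discarded.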

Item Type: Article
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Depositing User: dl fts
Date Deposited: 13 Oct 2025 08:13
Last Modified: 13 Oct 2025 08:17
URI: https://dl.futuretechsci.org/id/eprint/134
