Ibam, Emmanuel Onwako and Oluwagbemi, Johnson Bisi (2026) Multimodal Deep Learning for Pneumonia Detection Using Wearable Sensors: Toward an Edge-Cloud Framework. Journal of Computing Theories and Applications, 3 (3). pp. 314-333. ISSN 3024-9104
Available under License Creative Commons Attribution.
Abstract
Pneumonia remains a leading cause of morbidity and mortality worldwide, particularly in resource-limited settings and among elderly populations, where timely diagnosis and continuous monitoring are often constrained by limited clinical infrastructure. This study presents an edge–cloud–integrated framework for early pneumonia risk monitoring, leveraging multimodal wearable sensors and deep learning to support continuous short-duration monitoring. The proposed system is designed to operate in near real time under simulated deployment conditions, continuously acquiring and analyzing physiological signals (respiratory rate, heart rate, SpO₂, and body temperature) alongside event-driven acoustic biomarkers (cough sounds) within a distributed architecture. A lightweight edge module performs local signal preprocessing and anomaly triage, selectively transmitting salient information to a cloud-based multimodal deep learning model for refined risk estimation and interpretability analysis. The framework was evaluated using a multi-source dataset comprising public repositories (MIMIC-III and Coswara) and a clinically supervised wearable study conducted in two Nigerian hospitals, resulting in 718 hours of quality-controlled multimodal monitoring data. In a pooled multi-source evaluation, the system achieved an AUC of 0.95, while in a clinically realistic local-only evaluation, the AUC was 0.86, reflecting a consistent but preliminary diagnostic signal. These results highlight the importance of local data adaptation for real-world applicability and suggest that multimodal AI can provide meaningful early risk indicators under resource constraints. Beyond predictive performance, this work demonstrates the feasibility of integrating multimodal learning, edge–cloud computation, and explainable analytics into a deployment-aware, privacy-preserving monitoring framework for low-resource healthcare environments.
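The abstract's description of the edge module (local signal preprocessing and anomaly triage, with only salient windows forwarded to the cloud-based risk model) can be illustrated with a minimal sketch. The class, function names, and threshold values below are hypothetical placeholders chosen for illustration and are not taken from the paper; they only show the general shape of an edge-side filter that decides which vitals windows are worth transmitting for refined risk estimation.

```python
# Illustrative sketch only: a minimal edge-side triage step over vitals windows.
# All thresholds and names here are assumptions, not the authors' implementation.
from dataclasses import dataclass


@dataclass
class VitalsWindow:
    respiratory_rate: float  # breaths per minute
    heart_rate: float        # beats per minute
    spo2: float              # peripheral oxygen saturation, percent
    temperature: float       # body temperature, degrees Celsius


def is_salient(w: VitalsWindow) -> bool:
    """Flag windows whose vitals fall outside illustrative normal ranges.

    The cut-offs are placeholders standing in for whatever anomaly-triage
    rule the edge module actually applies; they are not from the study.
    """
    return (
        w.respiratory_rate > 24      # tachypnoea-like reading
        or w.spo2 < 93.0             # low oxygen saturation
        or w.temperature >= 38.0     # fever
        or w.heart_rate > 110        # tachycardia-like reading
    )


def triage(windows: list[VitalsWindow]) -> list[VitalsWindow]:
    """Keep only salient windows for upload to the cloud risk model."""
    return [w for w in windows if is_salient(w)]


if __name__ == "__main__":
    sample = [
        VitalsWindow(16, 72, 98.0, 36.8),   # normal-looking window, kept local
        VitalsWindow(28, 115, 91.0, 38.4),  # abnormal window, would be transmitted
    ]
    for w in triage(sample):
        print("would transmit to cloud model:", w)
```

In a full deployment, the event-driven cough-audio path would presumably follow the same pattern, with the edge device forwarding only detected cough segments to the cloud-side multimodal model rather than streaming raw audio continuously.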
| Item Type: | Article |
|---|---|
| Subjects: | Q Science > QA Mathematics > QA75 Electronic computers. Computer science |
| Depositing User: | dl fts |
| Date Deposited: | 20 Jan 2026 06:00 |
| Last Modified: | 20 Jan 2026 06:00 |
| URI: | https://dl.futuretechsci.org/id/eprint/142 |
