scispace - formally typeset
Author

Samiul Based Shuvo

Bio: Samiul Based Shuvo is an academic researcher from Bangladesh University of Engineering and Technology. The author has contributed to research in the topics of Engineering and Computer Science. The author has an h-index of 3 and has co-authored 6 publications receiving 25 citations.

Papers
Journal ArticleDOI
TL;DR: The obtained results demonstrate that the proposed end-to-end architecture yields outstanding performance in all the evaluation metrics compared to the previous state-of-the-art methods, with up to 99.60% accuracy, 99.56% precision, 99.52% recall and 99.68% F1-score on average, while being computationally comparable.
Abstract: The alarmingly high mortality rate and increasing global prevalence of cardiovascular diseases (CVDs) signify the crucial need for early detection schemes. Phonocardiogram (PCG) signals have historically been applied in this domain owing to their simplicity and cost-effectiveness. In this article, we propose CardioXNet, a novel lightweight end-to-end CRNN architecture for automatic detection of five classes of cardiac auscultation, namely normal, aortic stenosis, mitral stenosis, mitral regurgitation and mitral valve prolapse, using raw PCG signals. The process has been automated by the involvement of two learning phases, namely representation learning and sequence residual learning. Three parallel CNN pathways have been implemented in the representation learning phase to learn the coarse and fine-grained features from the PCG and to explore the salient features from variable receptive fields involving 2D-CNN based squeeze-expansion. Thus, in the representation learning phase, the network extracts efficient time-invariant features and converges with great rapidity. In the sequential residual learning phase, because of the bidirectional-LSTMs and the skip connection, the network can proficiently extract temporal features without performing any feature extraction on the signal. The obtained results demonstrate that the proposed end-to-end architecture yields outstanding performance in all the evaluation metrics compared to the previous state-of-the-art methods, with up to 99.60% accuracy, 99.56% precision, 99.52% recall and 99.68% F1-score on average, while being computationally comparable. This model outperforms any previous work using the same database by a considerable margin. Moreover, the proposed model was tested on the PhysioNet/CinC 2016 challenge dataset, achieving an accuracy of 86.57%. Finally, the model was evaluated on a merged dataset of the GitHub PCG dataset and the PhysioNet dataset, achieving an excellent accuracy of 88.09%.
The high accuracy metrics on both the primary and secondary datasets, combined with a significantly low number of parameters and an end-to-end prediction approach, make the proposed network especially suitable for point-of-care CVD screening in low-resource setups using memory-constrained mobile devices.

60 citations
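The evaluation metrics reported for CardioXNet (accuracy, precision, recall, F1-score, averaged over classes) can be computed from a multi-class confusion matrix. A minimal sketch in pure Python follows; the 5x5 matrix below is made-up illustrative data for the five PCG classes, not the paper's actual results.

```python
# Macro-averaged precision, recall, and F1 from a multi-class confusion
# matrix. The matrix values are hypothetical, for illustration only.

def macro_metrics(cm):
    """cm[i][j] = number of samples of true class i predicted as class j."""
    n = len(cm)
    precisions, recalls = [], []
    for k in range(n):
        tp = cm[k][k]
        fp = sum(cm[i][k] for i in range(n)) - tp   # column sum minus TP
        fn = sum(cm[k]) - tp                        # row sum minus TP
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    p = sum(precisions) / n
    r = sum(recalls) / n
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    accuracy = sum(cm[k][k] for k in range(n)) / sum(map(sum, cm))
    return accuracy, p, r, f1

# Hypothetical confusion matrix: rows = true class, columns = prediction
# (normal, AS, MS, MR, MVP), 100 samples per class.
cm = [
    [98, 1, 1, 0, 0],
    [0, 99, 0, 1, 0],
    [1, 0, 98, 1, 0],
    [0, 0, 1, 99, 0],
    [0, 1, 0, 0, 99],
]
acc, p, r, f1 = macro_metrics(cm)
print(f"accuracy={acc:.4f} precision={p:.4f} recall={r:.4f} F1={f1:.4f}")
```
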

Journal ArticleDOI
TL;DR: In this article, a lightweight convolutional neural network (CNN) architecture was proposed to classify respiratory diseases from individual breath cycles using hybrid scalogram-based features of lung sounds.
Abstract: Listening to lung sounds through auscultation is vital in examining the respiratory system for abnormalities. Automated analysis of lung auscultation sounds can be beneficial to the health systems in low-resource settings where there is a lack of skilled physicians. In this work, we propose a lightweight convolutional neural network (CNN) architecture to classify respiratory diseases from individual breath cycles using hybrid scalogram-based features of lung sounds. The proposed feature set utilizes the empirical mode decomposition (EMD) and the continuous wavelet transform (CWT). The performance of the proposed scheme is studied using a patient-independent train-validation-test split from the publicly available ICBHI 2017 lung sound dataset. Employing the proposed framework, weighted accuracy scores of 98.92% for three-class chronic classification and 98.70% for six-class pathological classification are achieved, which outperform the well-known and much larger VGG16 in terms of accuracy by absolute margins of 1.10% and 1.11%, respectively. The proposed CNN model also outperforms other contemporary lightweight models while being computationally comparable.

49 citations
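The "patient-independent" split mentioned in the abstract means recordings are partitioned by patient, so no patient's breath cycles leak between the train, validation, and test sets. A minimal sketch of such a split follows; the patient IDs, split ratios, and record format are illustrative assumptions, not the ICBHI 2017 protocol.

```python
# Patient-independent split: shuffle patient IDs, then assign each
# patient's records wholly to one partition. Toy data, for illustration.
import random

def patient_independent_split(records, ratios=(0.7, 0.15, 0.15), seed=0):
    """records: list of (patient_id, breath_cycle) tuples."""
    patients = sorted({pid for pid, _ in records})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n = len(patients)
    n_train = int(ratios[0] * n)
    n_val = int(ratios[1] * n)
    groups = (set(patients[:n_train]),
              set(patients[n_train:n_train + n_val]),
              set(patients[n_train + n_val:]))
    return tuple([r for r in records if r[0] in g] for g in groups)

# 20 hypothetical patients with 5 breath cycles each.
records = [(pid, f"cycle_{pid}_{i}") for pid in range(20) for i in range(5)]
train, val, test = patient_independent_split(records)
print(len(train), len(val), len(test))
```

Splitting by recording instead of by patient would let cycles from the same patient appear in both train and test sets, inflating the reported accuracy.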

Posted Content
TL;DR: This work proposes a lightweight convolutional neural network architecture to classify respiratory diseases from individual breath cycles using hybrid scalogram-based features of lung sounds, which outperforms well-known and much larger VGG16 in terms of accuracy by absolute margins.
Abstract: Listening to lung sounds through auscultation is vital in examining the respiratory system for abnormalities. Automated analysis of lung auscultation sounds can be beneficial to the health systems in low-resource settings where there is a lack of skilled physicians. In this work, we propose a lightweight convolutional neural network (CNN) architecture to classify respiratory diseases using hybrid scalogram-based features of lung sounds. The hybrid scalogram features utilize the empirical mode decomposition (EMD) and the continuous wavelet transform (CWT). The proposed scheme's performance is studied using a patient-independent train-validation split from the publicly available ICBHI 2017 lung sound dataset. Employing the proposed framework, weighted accuracy scores of 99.20% for ternary chronic classification and 99.05% for six-class pathological classification are achieved, which outperform the well-known and much larger VGG16 in terms of accuracy by 0.52% and 1.77%, respectively. The proposed CNN model also outperforms other contemporary lightweight models while being computationally comparable.

36 citations

Proceedings ArticleDOI
05 Jun 2020
TL;DR: This work presents a low-cost, low-power, and wireless ECG monitoring system with deep learning-based automatic arrhythmia detection that provides an accuracy of 94.03% in classifying abnormal cardiac rhythm on the MIT-BIH Arrhythmia Database.
Abstract: Continuously monitoring the Electrocardiogram (ECG) is an essential tool for Cardiovascular Disease (CVD) patients. In low-resource countries, the hospitals and health centers do not have adequate ECG systems, and this unavailability exacerbates the patients' health conditions. Lack of skilled physicians, limited availability of continuous ECG monitoring devices, and their high prices all lead to a higher CVD burden in developing countries. To address these challenges, we present a low-cost, low-power, and wireless ECG monitoring system with deep learning-based automatic arrhythmia detection. The flexible fabric-based design and the wearable nature of the device enhance the patient's comfort while facilitating continuous monitoring. An AD8232 chip is used for the ECG Analog Front-End (AFE), with two 450 mAh Li-ion batteries powering the device. The acquired ECG signal can be transmitted to a smart device over Bluetooth and subsequently sent to a cloud server for analysis. A 1-D Convolutional Neural Network (CNN) based deep learning model is developed that provides an accuracy of 94.03% in classifying abnormal cardiac rhythm on the MIT-BIH Arrhythmia Database.

11 citations
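The basic building block of the 1-D CNN classifier described above is a one-dimensional convolution followed by a nonlinearity. A minimal pure-Python sketch of that operation follows; the toy signal and kernel are illustrative, and a real model would stack such layers in a deep learning framework.

```python
# One valid-mode 1-D convolution (cross-correlation, as in CNN layers)
# followed by ReLU. Toy values only, for illustration.

def conv1d_relu(signal, kernel, bias=0.0):
    """Slide the kernel over the signal and apply ReLU to each output."""
    k = len(kernel)
    out = []
    for i in range(len(signal) - k + 1):
        s = sum(signal[i + j] * kernel[j] for j in range(k)) + bias
        out.append(max(0.0, s))   # ReLU activation
    return out

# Toy ECG-like segment and a 3-tap difference kernel (hypothetical values).
x = [0.0, 0.1, 1.0, 0.2, 0.0, -0.1]
y = conv1d_relu(x, [-1.0, 0.0, 1.0])
print(y)
```

In a trained network the kernel weights and bias are learned from data; stacking many such filters with pooling layers yields the feature hierarchy used for rhythm classification.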

Posted ContentDOI
02 Sep 2020-medRxiv
TL;DR: This work presents a low-cost, low-power, and wireless ECG monitoring system with deep learning-based automatic arrhythmia detection that provides an accuracy of 94.03% in classifying abnormal cardiac rhythm on the MIT-BIH Arrhythmia Database.
Abstract: Continuously monitoring the Electrocardiogram (ECG) is an essential tool for Cardiovascular Disease (CVD) patients. In low-resource countries, the hospitals and health centers do not have adequate ECG systems, and this unavailability exacerbates the patients' health conditions. Lack of skilled physicians, limited availability of continuous ECG monitoring devices, and their high prices all lead to a higher CVD burden in developing countries. To address these challenges, we present a low-cost, low-power, and wireless ECG monitoring system with deep learning-based automatic arrhythmia detection. The flexible fabric-based design and the wearable nature of the device enhance the patient's comfort while facilitating continuous monitoring. An AD8232 chip is used for the ECG Analog Front-End (AFE), with two 450 mAh Li-ion batteries powering the device. The acquired ECG signal can be transmitted to a smart device over Bluetooth and subsequently sent to a cloud server for analysis. A 1-D Convolutional Neural Network (CNN) based deep learning model is developed that provides an accuracy of 94.03% in classifying abnormal cardiac rhythm on the MIT-BIH Arrhythmia Database. Index Terms: wearable ECG, deep learning, arrhythmia detection.

10 citations
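The two 450 mAh cells mentioned in the abstract give a rough sense of the device's runtime. The back-of-the-envelope calculation below assumes a parallel battery configuration and a hypothetical 40 mA average draw (AFE plus Bluetooth); neither figure comes from the paper.

```python
# Battery-life estimate for the wearable ECG device. Only the two 450 mAh
# cells are from the paper; the parallel wiring and average current draw
# are assumptions for illustration.

capacity_mah = 2 * 450          # two cells in parallel -> 900 mAh total
avg_draw_ma = 40                # assumed average current draw
runtime_h = capacity_mah / avg_draw_ma
print(f"Estimated runtime: {runtime_h:.1f} h")
```
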


Cited by
Journal ArticleDOI
TL;DR: The obtained results demonstrate that the proposed end-to-end architecture yields outstanding performance in all the evaluation metrics compared to the previous state-of-the-art methods, with up to 99.60% accuracy, 99.56% precision, 99.52% recall and 99.68% F1-score on average, while being computationally comparable.
Abstract: The alarmingly high mortality rate and increasing global prevalence of cardiovascular diseases (CVDs) signify the crucial need for early detection schemes. Phonocardiogram (PCG) signals have historically been applied in this domain owing to their simplicity and cost-effectiveness. In this article, we propose CardioXNet, a novel lightweight end-to-end CRNN architecture for automatic detection of five classes of cardiac auscultation, namely normal, aortic stenosis, mitral stenosis, mitral regurgitation and mitral valve prolapse, using raw PCG signals. The process has been automated by the involvement of two learning phases, namely representation learning and sequence residual learning. Three parallel CNN pathways have been implemented in the representation learning phase to learn the coarse and fine-grained features from the PCG and to explore the salient features from variable receptive fields involving 2D-CNN based squeeze-expansion. Thus, in the representation learning phase, the network extracts efficient time-invariant features and converges with great rapidity. In the sequential residual learning phase, because of the bidirectional-LSTMs and the skip connection, the network can proficiently extract temporal features without performing any feature extraction on the signal. The obtained results demonstrate that the proposed end-to-end architecture yields outstanding performance in all the evaluation metrics compared to the previous state-of-the-art methods, with up to 99.60% accuracy, 99.56% precision, 99.52% recall and 99.68% F1-score on average, while being computationally comparable. This model outperforms any previous work using the same database by a considerable margin. Moreover, the proposed model was tested on the PhysioNet/CinC 2016 challenge dataset, achieving an accuracy of 86.57%. Finally, the model was evaluated on a merged dataset of the GitHub PCG dataset and the PhysioNet dataset, achieving an excellent accuracy of 88.09%.
The high accuracy metrics on both the primary and secondary datasets, combined with a significantly low number of parameters and an end-to-end prediction approach, make the proposed network especially suitable for point-of-care CVD screening in low-resource setups using memory-constrained mobile devices.

60 citations

Journal ArticleDOI
TL;DR: This paper reviews the solutions described in the literature for developing a wireless ECG system, along with commercially available devices and electronic components useful for setting up laboratory prototypes.

37 citations

Journal ArticleDOI
13 Jan 2022-PLOS ONE
TL;DR: The observations found in this study were promising to suggest deep learning and smartphone-based breathing sounds as an effective pre-screening tool for COVID-19 alongside the current reverse-transcription polymerase chain reaction (RT-PCR) assay.
Abstract: This study sought to investigate the feasibility of using smartphone-based breathing sounds within a deep learning framework to discriminate between COVID-19 subjects, including asymptomatic ones, and healthy subjects. A total of 480 breathing sounds (240 shallow and 240 deep) were obtained from a publicly available database named Coswara. These sounds were recorded by 120 COVID-19 and 120 healthy subjects via a smartphone microphone through a website application. A deep learning framework was proposed herein that relies on hand-crafted features extracted from the original recordings and from the mel-frequency cepstral coefficients (MFCC), as well as deep-activated features learned by a combination of convolutional neural network and bi-directional long short-term memory units (CNN-BiLSTM). The statistical analysis of patient profiles showed a significant difference (p-value: 0.041) for ischemic heart disease between COVID-19 and healthy subjects. The analysis of the normal distribution of the combined MFCC values showed that COVID-19 subjects tended to have a distribution that is skewed more towards the right side of the zero mean (shallow: 0.59±1.74, deep: 0.65±4.35, p-value: <0.001). In addition, the proposed deep learning approach had an overall discrimination accuracy of 94.58% and 92.08% using shallow and deep recordings, respectively. Furthermore, it detected COVID-19 subjects successfully with a maximum sensitivity of 94.21%, specificity of 94.96%, and area under the receiver operating characteristic (AUROC) curve of 0.90. Among the 120 COVID-19 participants, asymptomatic subjects (18 subjects) were successfully detected with 100.00% accuracy using shallow recordings and 88.89% using deep recordings. This study paves the way towards utilizing smartphone-based breathing sounds for the purpose of COVID-19 detection.
The observations found in this study are promising enough to suggest deep learning and smartphone-based breathing sounds as an effective pre-screening tool for COVID-19, alongside the current reverse-transcription polymerase chain reaction (RT-PCR) assay. It can be considered an early, rapid, easily distributed, time-efficient, and almost no-cost diagnosis technique complying with social distancing restrictions during the COVID-19 pandemic.

26 citations
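The AUROC reported above can be computed without plotting a ROC curve, via its rank-based (Mann-Whitney U) formulation: the probability that a randomly chosen positive scores higher than a randomly chosen negative. A minimal sketch follows; the labels and scores are toy data, not the study's outputs.

```python
# Rank-based AUROC: fraction of (positive, negative) pairs ranked
# correctly, with ties counting half. Toy data for illustration.

def auroc(labels, scores):
    """labels: 1 = positive class, 0 = negative; scores: classifier outputs."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.1]
print(f"AUROC = {auroc(labels, scores):.4f}")
```

An AUROC of 1.0 means every positive outranks every negative; 0.5 is chance level, so the study's 0.90 indicates strong separation between COVID-19 and healthy recordings.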

Journal ArticleDOI
01 Feb 2022-Sensors
TL;DR: The authors propose a fusion of three optimal convolutional neural network models fed with image feature vectors transformed from audio features, and confirm the superiority of the fusion model compared to state-of-the-art works.
Abstract: Lung or heart sound classification is challenging due to the complex nature of audio data and its dynamic properties in the time and frequency domains. It is also very difficult to detect lung or heart conditions with small amounts of data or with unbalanced and highly noisy data. Furthermore, data quality is a considerable pitfall for improving the performance of deep learning. In this paper, we propose a novel feature-based fusion network called FDC-FS for classifying heart and lung sounds. The FDC-FS framework aims to effectively transfer learning from three different deep neural network models built from audio datasets. The innovation of the proposed transfer learning relies on the transformation from audio data to image vectors and from three specific models to one fused model that would be more suitable for deep learning. We used two publicly available datasets for this study, i.e., lung sound data from the ICBHI 2017 challenge and heart challenge data. We applied data augmentation techniques, such as noise distortion, pitch shift, and time stretching, to deal with some data issues in these datasets. Importantly, we extracted three unique features from the audio samples, i.e., Spectrogram, MFCC, and Chromagram. Finally, we built a fusion of three optimal convolutional neural network models by feeding them the image feature vectors transformed from audio features. We confirmed the superiority of the proposed fusion model compared to the state-of-the-art works. The highest accuracy we achieved with FDC-FS is 99.1% with Spectrogram-based lung sound classification, and 97% for Spectrogram- and Chromagram-based heart sound classification.

23 citations
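Time stretching, one of the augmentation techniques mentioned in the abstract, can be sketched in its simplest form as resampling the signal by linear interpolation over the sample index. Note this naive version also shifts pitch; real pitch-preserving stretching uses phase-vocoder methods in an audio library. The function name and toy signal below are illustrative.

```python
# Naive time stretch via linear-interpolation resampling. rate > 1
# shortens the signal (faster playback); rate < 1 lengthens it.
# For illustration only -- this also changes pitch.

def time_stretch(signal, rate):
    n_out = int(len(signal) / rate)
    out = []
    for i in range(n_out):
        pos = i * rate                       # fractional source index
        lo = int(pos)
        hi = min(lo + 1, len(signal) - 1)
        frac = pos - lo
        out.append(signal[lo] * (1 - frac) + signal[hi] * frac)
    return out

x = [0.0, 1.0, 0.0, -1.0]
slow = time_stretch(x, 0.5)   # twice as long as the input
print(len(slow), slow)
```

Augmenting each training clip with a few stretched (and noise-distorted, pitch-shifted) copies effectively enlarges small, unbalanced audio datasets such as those used here.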

Journal ArticleDOI
TL;DR: In this paper, Wang et al. proposed a novel scalogram-based convolutional neural network (SCNN) to detect obstructive sleep apnea using single-lead electrocardiogram (ECG) signals.

23 citations