Journal ArticleDOI

Classification of cardiac disorders using 1D local ternary patterns based on pulse plethysmograph signals

About: This article was published in Expert Systems on 2021-01-13 and has received 16 citations to date. The article focuses on the topics: Plethysmograph & Pulse (signal processing).
Citations
Journal ArticleDOI
TL;DR: Comparative analysis with existing approaches confirmed the reliability of the proposed method for categorizing CAD in general clinical environments; the method enhances diagnostic performance by providing a second opinion during the medical examination.
Abstract: According to the World Health Organization, Coronary Artery Disease (CAD) is a leading cause of death globally. CAD is categorized into three types, namely Single Vessel Coronary Artery Disease (SVCAD), Double Vessel Coronary Artery Disease (DVCAD), and Triple Vessel Coronary Artery Disease (TVCAD). At present, angiography is the most popular technique to detect CAD, but it is expensive and invasive. The Phonocardiogram (PCG), being economical and non-invasive, is a valuable modality for the detection of cardiac disorders, but only trained medical professionals can interpret heart auscultations in clinical environments. This research aims to detect CAD and its types from PCG signatures through feature fusion and a two-stage classification strategy. A self-developed low-cost stethoscope was used to collect PCG data from a local hospital. The PCG signals were preprocessed through an iterative signal decomposition method known as Empirical Mode Decomposition (EMD). EMD decomposes the raw PCG signal into its constituent components, called Intrinsic Mode Functions (IMFs). The preprocessed PCG signal was then reconstructed by combining only those components with high discriminative power and low redundancy. Next, Mel Frequency Cepstral Coefficients (MFCCs) and spectral and statistical features were extracted. A two-stage classification framework was devised to identify healthy and CAD cases. The first stage relies on the fusion of MFCC and statistical features with a K-nearest neighbor classifier to predict normal and CAD cases. The second stage is activated only when the first stage detects CAD; it employs the fusion of spectral, statistical, and MFCC features with a Support Vector Machine classifier to categorize PCG signatures into DVCAD, SVCAD, and TVCAD classes.
The proposed method yields mean accuracy values of 88.0%, 89.2%, 91.1%, and 85.3% for normal, DVCAD, SVCAD, and TVCAD, respectively, through 10-fold cross-validation. Comparative analysis with existing approaches confirmed the reliability of the proposed method for categorizing CAD in general clinical environments. The proposed model enhances the diagnosis performance by providing a second opinion during the medical examination.
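The two-stage cascade described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code: the feature vectors, labels, and choice of k are hypothetical, and the paper uses an SVM in the second stage — k-NN is used for both stages here only to keep the sketch self-contained.

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Label x by majority vote among its k nearest training vectors."""
    dists = sorted(
        (math.dist(x, v), label) for v, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

def two_stage_predict(x, stage1, stage2, k=3):
    """Stage 1 separates normal vs. CAD; stage 2 runs only on CAD cases
    to assign a subtype (SVCAD / DVCAD / TVCAD)."""
    X1, y1 = stage1
    if knn_predict(X1, y1, x, k) == "normal":
        return "normal"
    X2, y2 = stage2  # subtype classifier (an SVM in the paper)
    return knn_predict(X2, y2, x, k)
```

Stage 2 is reached only for samples that stage 1 labels as CAD, mirroring the cascade described in the abstract.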

35 citations

Journal ArticleDOI
TL;DR: A new heart sound classification model is proposed based on Local Binary Pattern (LBP) and Local Ternary Pattern (LTP) features and deep learning; it surpasses up-to-date methods in classification accuracy.

32 citations

Journal ArticleDOI
TL;DR: This article proposes a computer‐aided diagnosis system to detect Myocardial Infarction, Dilated Cardiomyopathy, and Hypertension from PuPG signals through low‐cost and non‐invasive means.
Abstract: Cardiac disorders are one of the prime reasons for the increasing global death rate. Reliable and efficient diagnosis procedures are imperative to minimize the risk posed by heart disorders. Computer-aided diagnosis, based on machine learning and biomedical signal analysis, has recently been adopted by researchers to accurately predict cardiac ailments. Multi-channel Electrocardiogram signals are most often used in the scientific literature as an indicator to diagnose cardiac disorders. Recently, the pulse plethysmograph (PuPG) signal has gained attention as an emerging biosignal and a promising diagnostic tool for detecting heart disorders, since its sensor is simple, low-cost, non-invasive, reliable, and easy to handle. This article proposes a computer-aided diagnosis system to detect Myocardial Infarction, Dilated Cardiomyopathy, and Hypertension from PuPG signals. The raw PuPG signal is first preprocessed through empirical mode decomposition (EMD) to remove redundant and uninformative content. Then, highly discriminative features are extracted from the preprocessed PuPG signal through novel local spectral ternary patterns (LSTP). The extracted LSTPs are input to a variety of classification methods such as support vector machines (SVM), K-nearest neighbours, and decision trees. SVM with a cubic kernel yielded the best classification performance of 98.4% accuracy, 96.7% sensitivity, and 99.6% specificity with 10-fold cross-validation. The proposed framework was trained and tested on a self-collected PuPG signal database of heart disorders. A comparison with previous studies and other feature descriptors shows the superiority of the proposed system. This research provides better insight into the contribution of PuPG signals towards reliable detection of heart disorders through low-cost and non-invasive means.
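The exact formulation of the paper's local spectral ternary patterns is not given in the abstract; the sketch below shows only the generic 1D ternary quantization such descriptors build on — each interior sample's two neighbours are coded +1, 0, or -1 relative to the centre within a tolerance t. The threshold value and window are hypothetical.

```python
def ternary_codes(signal, t=0.05):
    """Ternary-code each interior sample against its two neighbours:
    +1 if a neighbour exceeds centre + t, -1 if below centre - t, else 0."""
    codes = []
    for i in range(1, len(signal) - 1):
        centre = signal[i]
        pair = []
        for nb in (signal[i - 1], signal[i + 1]):
            if nb >= centre + t:
                pair.append(1)
            elif nb <= centre - t:
                pair.append(-1)
            else:
                pair.append(0)
        codes.append(tuple(pair))
    return codes
```

The tolerance band around the centre is what distinguishes ternary patterns from binary ones: small fluctuations within ±t map to 0 instead of flipping a bit, which makes the codes less sensitive to noise.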

16 citations

Journal ArticleDOI
04 Jun 2021
TL;DR: An intelligent, portable, and low-cost embedded system for the classification of cardiac disorders associated with heart murmurs; a k-nearest neighbors classifier is trained and tested to distinguish between normal and four cardiac disorder classes.
Abstract: Phonocardiogram (PCG) signals hold significant prognostic and diagnostic information about cardiac health. Numerous PCG or heart sound based automated detection algorithms have previously been proposed to assist the disease diagnosis process, but most previous studies focused only on algorithmic development. This study presents an intelligent, portable, and low-cost embedded system for the classification of cardiac disorders associated with heart murmurs. The stages of the developed embedded system are summarized as follows. The first stage consists of the acquisition of PCG signals from both normal subjects and patients at various hospitals with a customized, low-cost stethoscope. The second stage covers preprocessing, localization of the S1 and S2 heart sounds, and extraction of systole and diastole from the heart signal using empirical mode decomposition integrated with a self-developed algorithm. In the third stage, discriminant features are extracted to represent the various cardiac classes of PCG signals in a compact manner. In the final stage, a k-nearest neighbors classifier is trained and tested to distinguish between normal and four cardiac disorders. The proposed algorithm achieved 94% mean accuracy through comprehensive experimentation. The classification algorithm is implemented on an RP-based embedded system, and a software application with an interactive graphical interface is also provided to assist users. The developed system is portable and low-cost, and it enables regular patient monitoring. It has the potential to be employed at remote locations where the availability of doctors remains challenging.
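The S1/S2 localization step relies on EMD plus a self-developed algorithm whose details are not given in the abstract; as a stand-in, the sketch below shows the kind of thresholded peak picking such a step typically involves. The height and minimum-spacing parameters are hypothetical, and keeping the earlier of two close peaks is a simplification.

```python
def find_peaks(x, height, min_dist):
    """Indices of interior local maxima above `height`, keeping peaks
    at least `min_dist` samples apart (earlier peak wins on conflict)."""
    peaks = []
    for i in range(1, len(x) - 1):
        if x[i] >= height and x[i] > x[i - 1] and x[i] >= x[i + 1]:
            if not peaks or i - peaks[-1] >= min_dist:
                peaks.append(i)
    return peaks
```

On a heart-sound envelope, a minimum spacing tuned to the refractory interval between S1 and S2 would prevent a single sound from producing multiple detections.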

15 citations

Journal ArticleDOI
TL;DR: In this article, a computer-aided diagnosis system for the categorization of coronary artery disease and its types based on Phonocardiogram (PCG) signal analysis is presented.

13 citations

References
Journal ArticleDOI
TL;DR: In this paper, a new method for analysing nonlinear and non-stationary data is developed; its key component is empirical mode decomposition, with which any complicated data set can be decomposed.
Abstract: A new method for analysing nonlinear and non-stationary data has been developed. The key part of the method is the empirical mode decomposition method with which any complicated data set can be decomposed.
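Full EMD iterates a "sifting" step — subtract the mean of the upper and lower extrema envelopes — until the result satisfies the IMF conditions, and canonical implementations use cubic-spline envelopes with special end handling. The sketch below performs a single sifting pass with piecewise-linear envelopes; it conveys the mechanism but is a simplification, not the published algorithm.

```python
def local_extrema(x):
    """Indices of interior local maxima and minima of a sequence."""
    maxima, minima = [], []
    for i in range(1, len(x) - 1):
        if x[i] > x[i - 1] and x[i] >= x[i + 1]:
            maxima.append(i)
        if x[i] < x[i - 1] and x[i] <= x[i + 1]:
            minima.append(i)
    return maxima, minima

def linear_envelope(idx, vals, n):
    """Piecewise-linear envelope through (idx, vals); the first and last
    extremum values are held out to the signal ends (a simplification --
    canonical EMD uses cubic splines with mirrored end conditions)."""
    xs = [0] + idx + [n - 1]
    ys = [vals[0]] + vals + [vals[-1]]
    env, j = [], 0
    for i in range(n):
        while j + 1 < len(xs) - 1 and xs[j + 1] <= i:
            j += 1
        x0, x1 = xs[j], xs[j + 1]
        y0, y1 = ys[j], ys[j + 1]
        t = (i - x0) / (x1 - x0) if x1 > x0 else 0.0
        env.append(y0 + t * (y1 - y0))
    return env

def sift_once(x):
    """One sifting pass: subtract the mean of the upper and lower
    envelopes. Assumes x has at least one interior maximum and minimum."""
    n = len(x)
    maxima, minima = local_extrema(x)
    upper = linear_envelope(maxima, [x[i] for i in maxima], n)
    lower = linear_envelope(minima, [x[i] for i in minima], n)
    return [xi - (u + l) / 2.0 for xi, u, l in zip(x, upper, lower)]
```

A signal whose upper and lower envelopes are already symmetric about zero (one IMF condition) passes through the sift unchanged; otherwise the local mean is removed and the pass repeats.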

18,956 citations

Journal ArticleDOI
TL;DR: This work presents a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition, and improves robustness by adding Kernel principal component analysis (PCA) feature extraction and incorporating rich local appearance cues from two complementary sources.
Abstract: Making recognition more reliable under uncontrolled lighting conditions is one of the most important challenges for practical face recognition systems. We tackle this by combining the strengths of robust illumination normalization, local texture-based face representations, distance transform based matching, kernel-based feature extraction and multiple feature fusion. Specifically, we make three main contributions: 1) we present a simple and efficient preprocessing chain that eliminates most of the effects of changing illumination while still preserving the essential appearance details that are needed for recognition; 2) we introduce local ternary patterns (LTP), a generalization of the local binary pattern (LBP) local texture descriptor that is more discriminant and less sensitive to noise in uniform regions, and we show that replacing comparisons based on local spatial histograms with a distance transform based similarity metric further improves the performance of LBP/LTP based face recognition; and 3) we further improve robustness by adding Kernel principal component analysis (PCA) feature extraction and incorporating rich local appearance cues from two complementary sources-Gabor wavelets and LBP-showing that the combination is considerably more accurate than either feature set alone. The resulting method provides state-of-the-art performance on three data sets that are widely used for testing recognition under difficult illumination conditions: Extended Yale-B, CAS-PEAL-R1, and Face Recognition Grand Challenge version 2 experiment 4 (FRGC-204). For example, on the challenging FRGC-204 data set it halves the error rate relative to previously published methods, achieving a face verification rate of 88.1% at 0.1% false accept rate. Further experiments show that our preprocessing method outperforms several existing preprocessors for a range of feature sets, data sets and lighting conditions.
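In common LTP implementations, the three-valued code is split into two binary patterns — an "upper" pattern marking neighbours above centre + t and a "lower" pattern marking those below centre - t — so that ordinary binary-pattern histogram machinery can be reused. A minimal 1D illustration (the window layout and threshold are hypothetical; the paper applies LTP to 2D image neighbourhoods):

```python
def ltp_1d(window, t):
    """LTP code of a 1D window around its centre sample, split into the
    conventional upper and lower binary patterns (packed as integers)."""
    mid = len(window) // 2
    c = window[mid]
    neighbours = window[:mid] + window[mid + 1:]
    upper = lower = 0
    for nb in neighbours:
        upper = (upper << 1) | (1 if nb >= c + t else 0)
        lower = (lower << 1) | (1 if nb <= c - t else 0)
    return upper, lower
```

Histograms of the two integer codes over all windows would then form the texture descriptor, exactly as with LBP histograms.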

2,981 citations

Journal ArticleDOI
16 Dec 2011-Science
TL;DR: A measure of dependence for two-variable relationships: the maximal information coefficient (MIC), which captures a wide range of associations both functional and not, and for functional relationships provides a score that roughly equals the coefficient of determination of the data relative to the regression function.
Abstract: Identifying interesting relationships between pairs of variables in large data sets is increasingly important. Here, we present a measure of dependence for two-variable relationships: the maximal information coefficient (MIC). MIC captures a wide range of associations both functional and not, and for functional relationships provides a score that roughly equals the coefficient of determination (R2) of the data relative to the regression function. MIC belongs to a larger class of maximal information-based nonparametric exploration (MINE) statistics for identifying and classifying relationships. We apply MIC and MINE to data sets in global health, gene expression, major-league baseball, and the human gut microbiota and identify known and novel relationships.
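MIC is computed by maximising a normalised mutual information over many candidate grid resolutions, which is beyond a short sketch; the fragment below computes only the underlying quantity — the mutual information of two variables binned on one fixed equal-width grid — to show what the MIC grid search optimises. The binning scheme here is an assumption for illustration.

```python
import math
from collections import Counter

def grid_mutual_information(xs, ys, bins):
    """Mutual information (bits) of x and y binned on an equal-width grid.
    MIC itself maximises a normalised MI over many grid resolutions;
    this computes MI for a single fixed grid only."""
    def bin_of(v, lo, hi):
        if hi == lo:
            return 0
        return min(int((v - lo) / (hi - lo) * bins), bins - 1)
    lox, hix = min(xs), max(xs)
    loy, hiy = min(ys), max(ys)
    n = len(xs)
    joint = Counter((bin_of(x, lox, hix), bin_of(y, loy, hiy))
                    for x, y in zip(xs, ys))
    px = Counter(bx for bx, _ in joint.elements())
    py = Counter(by for _, by in joint.elements())
    mi = 0.0
    for (bx, by), c in joint.items():
        p = c / n  # joint probability of this cell
        mi += p * math.log2(p * n * n / (px[bx] * py[by]))
    return mi
```

For a deterministic relationship the grid MI approaches the entropy of the binned variable, while for independent variables it is near zero — MIC normalises this so the score is comparable across relationship types.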

2,414 citations

Journal ArticleDOI
TL;DR: MIMIC-II documents a diverse and very large population of intensive care unit patient stays and contains comprehensive and detailed clinical data, including physiological waveforms and minute-by-minute trends for a subset of records.
Abstract: Objective: We sought to develop an intensive care unit research database applying automated techniques to aggregate high-resolution diagnostic and therapeutic data from a large, diverse population of adult intensive care unit patients. This freely available database is intended to support epidemiologic research in critical care medicine and serve as a resource to evaluate new clinical decision support and monitoring algorithms. Design: Data collection and retrospective analysis. Setting: All adult intensive care units (medical intensive care unit, surgical intensive care unit, cardiac care unit, cardiac surgery recovery unit) at a tertiary care hospital. Patients: Adult patients admitted to intensive care units between 2001 and 2007. Interventions: None. Measurements and Main Results: The Multiparameter Intelligent Monitoring in Intensive Care II (MIMIC-II) database consists of 25,328 intensive care unit stays. The investigators collected detailed information about intensive care unit patient stays, including laboratory data, therapeutic intervention profiles such as vasoactive medication drip rates and ventilator settings, nursing progress notes, discharge summaries, radiology reports, provider order entry data, International Classification of Diseases, 9th Revision codes, and, for a subset of patients, high-resolution vital sign trends and waveforms. Data were automatically deidentified to comply with Health Insurance Portability and Accountability Act standards and integrated with relational database software to create electronic intensive care unit records for each patient stay. The data were made freely available in February 2010 through the Internet along with a detailed user’s guide and an assortment of data processing tools. The overall hospital mortality rate was 11.7%, which varied by critical care unit. The median intensive care unit length of stay was 2.2 days (interquartile range, 1.1‐4.4 days). 
According to the primary International Classification of Diseases, 9th Revision codes, the following disease categories each comprised at least 5% of the case records: diseases of the circulatory system (39.1%); trauma (10.2%); diseases of the digestive system (9.7%); pulmonary diseases (9.0%); infectious diseases (7.0%); and neoplasms (6.8%). Conclusions: MIMIC-II documents a diverse and very large population of intensive care unit patient stays and contains comprehensive and detailed clinical data, including physiological waveforms and minute-by-minute trends for a subset of records. It establishes a new public-access resource for critical care research, supporting a diverse range of analytic studies spanning epidemiology, clinical decision-rule development, and electronic tool development. (Crit Care Med 2011; 39:952‐960)

960 citations

Journal ArticleDOI
TL;DR: This overview discusses the different types of artifacts affecting the PPG signal, the characteristic features of the PPG waveform, and existing indexes used for diagnosis.
Abstract: Photoplethysmography (PPG) estimates skin blood flow using infrared light. Researchers from different domains of science have become increasingly interested in PPG because of its advantages as a non-invasive, inexpensive, and convenient diagnostic tool. Traditionally, it has been used to measure oxygen saturation, blood pressure, and cardiac output, and to assess autonomic functions. Moreover, PPG is a promising technique for early screening of various atherosclerotic pathologies and could be helpful for regular GP assessment, but a full understanding of the diagnostic value of its different features is still lacking. Recent studies emphasise the potential information embedded in the PPG waveform and suggest it deserves further attention for possible applications beyond pulse oximetry and heart-rate calculation. Therefore, this overview discusses the different types of artifacts added to the PPG signal, the characteristic features of the PPG waveform, and existing indexes to evaluate for diagnosis.

912 citations