
Showing papers in "IEEE Journal of Biomedical and Health Informatics in 2013"


Journal ArticleDOI
TL;DR: The emergence of "ambient-assisted living" (AAL) tools for older adults based on the ambient intelligence paradigm is reviewed, and the state-of-the-art AAL technologies, tools, and techniques are summarized.
Abstract: In recent years, we have witnessed a rapid surge in assisted living technologies due to a rapidly aging society. The aging population, the increasing cost of formal health care, the caregiver burden, and the importance that individuals place on living independently all motivate the development of innovative assisted living technologies for safe and independent aging. In this survey, we summarize the emergence of "ambient-assisted living" (AAL) tools for older adults based on the ambient intelligence paradigm. We summarize the state-of-the-art AAL technologies, tools, and techniques, and we look at current and future challenges.

1,000 citations


Journal ArticleDOI
TL;DR: Investigating the Parkinson dataset using well-known machine learning tools, sustained vowels are found to carry more PD-discriminative information and representing the samples of a subject with central tendency and dispersion metrics improves generalization of the predictive model.
Abstract: There has been increased interest in speech pattern analysis applications for Parkinsonism, for building predictive telediagnosis and telemonitoring models. For this purpose, we have collected a wide variety of voice samples, including sustained vowels, words, and sentences compiled from a set of speaking exercises for people with Parkinson's disease. There are two main issues in learning from such a dataset, which consists of multiple speech recordings per subject: 1) How predictive are the various types of voice samples, e.g., sustained vowels versus words, in Parkinson's disease (PD) diagnosis? 2) How well do central tendency and dispersion metrics serve as representatives of all sample recordings of a subject? In this paper, investigating our Parkinson dataset using well-known machine learning tools, as reported in the literature, sustained vowels are found to carry more PD-discriminative information. We have also found that, rather than using each voice recording of each subject as an independent data sample, representing the samples of a subject with central tendency and dispersion metrics improves the generalization of the predictive model.

445 citations
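The per-subject aggregation idea above can be sketched in a few lines: instead of treating each recording as an independent sample, all recordings of a subject are collapsed into one vector of central-tendency and dispersion statistics. A minimal numpy sketch with toy values (these are not the paper's actual acoustic features):

```python
import numpy as np

def summarize_subjects(features, subject_ids):
    """Collapse the multiple recordings of each subject into one sample of
    central-tendency and dispersion statistics (mean and standard deviation
    per feature). Data below are toy values, not the paper's features."""
    summaries = {}
    for sid in np.unique(subject_ids):
        rows = features[subject_ids == sid]
        summaries[sid] = np.concatenate([rows.mean(axis=0), rows.std(axis=0)])
    return summaries

# 2 subjects, 3 recordings each, 2 features per recording.
X = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0],
              [10.0, 10.0], [10.0, 10.0], [10.0, 10.0]])
ids = np.array([0, 0, 0, 1, 1, 1])
S = summarize_subjects(X, ids)
```

The resulting dictionary maps each subject to a single feature vector suitable for a per-subject classifier.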


Journal ArticleDOI
TL;DR: Assessment of the use of multichannel surface electromyography (sEMG) to classify individual and combined finger movements for dexterous prosthetic control shows that finger and thumb movements can be decoded with high accuracy at latencies as short as 200 ms.
Abstract: A method for the classification of finger movements for dexterous control of prosthetic hands is proposed. Previous research was mainly devoted to identifying hand movements, as these actions generate strong electromyography (EMG) signals recorded from the forearm. In contrast, in this paper, we assess the use of multichannel surface electromyography (sEMG) to classify individual and combined finger movements for dexterous prosthetic control. sEMG channels were recorded from ten intact-limbed and six below-elbow amputee persons. Offline processing was used to evaluate the classification performance. The results show that high classification accuracies can be achieved with a processing chain consisting of time domain-autoregression feature extraction, orthogonal fuzzy neighborhood discriminant analysis for feature reduction, and linear discriminant analysis for classification. We show that finger and thumb movements can be decoded with high accuracy at latencies as short as 200 ms. Thumb abduction was decoded successfully with high accuracy for six amputee persons for the first time. We also found that subsets of six EMG channels provide accuracy values similar to those computed with the full set of EMG channels (98% accuracy over ten intact-limbed subjects for the classification of 15 classes of different finger movements and 90% accuracy over six amputee persons for the classification of 12 classes of individual finger movements). These accuracy values are higher than those of previous studies, while we typically employed half the number of EMG channels per identified movement.

269 citations
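The time-domain half of such a processing chain can be illustrated with the classic time-domain EMG descriptors (mean absolute value, zero crossings, waveform length) that are commonly combined with autoregression coefficients; the exact feature set and threshold below are illustrative assumptions, not the paper's specification:

```python
import numpy as np

def td_features(window, zc_thresh=0.01):
    """Classic time-domain EMG descriptors over one analysis window:
    mean absolute value (MAV), zero-crossing count (ZC, with a noise
    threshold), and waveform length (WL). Threshold value is illustrative."""
    window = np.asarray(window, dtype=float)
    mav = np.mean(np.abs(window))
    zc = np.sum((window[:-1] * window[1:] < 0) &
                (np.abs(window[:-1] - window[1:]) > zc_thresh))
    wl = np.sum(np.abs(np.diff(window)))
    return np.array([mav, zc, wl])

f = td_features([0.5, -0.5, 0.5, -0.5])
```

Each 200 ms window would yield one such vector per channel before feature reduction and classification.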


Journal ArticleDOI
TL;DR: It is shown that the proposed method can provide, in almost all cases, 100% accuracy, sensitivity, and specificity, especially in discriminating seizure activity from nonseizure activity in patients with epilepsy, while being much faster than time-frequency analysis-based techniques.
Abstract: In this paper, a method using higher order statistical moments of EEG signals calculated in the empirical mode decomposition (EMD) domain is proposed for detecting seizure and epilepsy. The appropriateness of these moments in distinguishing the EEG signals is investigated through an extensive analysis in the EMD domain. An artificial neural network is employed as the classifier of the EEG signals, wherein these moments are used as features. The performance of the proposed method is studied using a publicly available benchmark database for various classification cases that include healthy, interictal (seizure-free interval) and ictal (seizure); healthy and seizure; nonseizure and seizure; and interictal and ictal, and compared with that of several recent methods based on time-frequency analysis and statistical moments. It is shown that the proposed method can provide, in almost all cases, 100% accuracy, sensitivity, and specificity, especially in discriminating seizure activity from nonseizure activity in patients with epilepsy, while being much faster than the time-frequency analysis-based techniques.

227 citations
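The feature-extraction step can be sketched as computing higher-order statistical moments of a signal segment. In the paper these are computed on intrinsic mode functions obtained by EMD; here they are computed on a raw toy segment to keep the sketch dependency-free:

```python
import numpy as np

def hos_moments(segment):
    """Higher-order statistical moments (variance, skewness, kurtosis)
    used as classifier features. In the paper they are computed on EMD
    intrinsic mode functions; here on a raw segment for illustration."""
    x = np.asarray(segment, dtype=float)
    sigma = x.std()
    z = (x - x.mean()) / sigma          # standardized samples
    return np.array([sigma ** 2, np.mean(z ** 3), np.mean(z ** 4)])

m = hos_moments([-1.0, 0.0, 1.0])
```

A symmetric segment, as above, has zero skewness; such moment vectors would feed the neural-network classifier.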


Journal ArticleDOI
TL;DR: An EHR system - cloud health information systems technology architecture (CHISTAR) - is proposed that achieves semantic interoperability through a generic design methodology using a reference model that defines a general-purpose set of data structures and an archetype model that defines the clinical data attributes.
Abstract: We present a cloud-based approach for the design of interoperable electronic health record (EHR) systems. Cloud computing environments provide several benefits to all the stakeholders in the healthcare ecosystem (patients, providers, payers, etc.). The lack of data interoperability standards and solutions has been a major obstacle in the exchange of healthcare data between different stakeholders. We propose an EHR system - cloud health information systems technology architecture (CHISTAR) - that achieves semantic interoperability through the use of a generic design methodology, which uses a reference model that defines a general-purpose set of data structures and an archetype model that defines the clinical data attributes. CHISTAR application components are designed using the cloud component model approach, which comprises loosely coupled components that communicate asynchronously. In this paper, we describe the high-level design of CHISTAR and the approaches for semantic interoperability, data integration, and security.

209 citations


Journal ArticleDOI
TL;DR: This paper presents a novel human activity recognition framework based on recently developed compressed sensing and sparse representation theory using wearable inertial sensors that achieves a maximum recognition rate of 96.1%, which beats conventional methods based on nearest neighbor, naive Bayes, and support vector machine by as much as 6.7%.
Abstract: Human daily activity recognition using mobile personal sensing technology plays a central role in the field of pervasive healthcare. One major challenge lies in the inherent complexity of human body movements and the variety of styles when people perform a certain activity. To tackle this problem, in this paper, we present a novel human activity recognition framework based on recently developed compressed sensing and sparse representation theory using wearable inertial sensors. Our approach represents human activity signals as a sparse linear combination of activity signals from all activity classes in the training set. The class membership of the activity signal is determined by solving an l1-minimization problem. We experimentally validate the effectiveness of our sparse representation-based approach by recognizing the nine most common human daily activities performed by 14 subjects. Our approach achieves a maximum recognition rate of 96.1%, which beats conventional methods based on nearest neighbor, naive Bayes, and support vector machine by as much as 6.7%. Furthermore, we demonstrate that by using random projection, the task of looking for “optimal features” to achieve the best activity recognition performance is less important within our framework.

183 citations
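The sparse-representation classification rule can be sketched as follows: the test signal is coded as a sparse combination of the training signals (an l1-regularized least-squares problem, solved here with a few iterations of ISTA) and assigned to the class whose training columns give the smallest reconstruction residual. The solver choice and all parameters are illustrative, not the paper's:

```python
import numpy as np

def ista_l1(D, y, lam=0.1, n_iter=500):
    """Minimal ISTA solver for min_x 0.5*||D x - y||^2 + lam*||x||_1,
    standing in for the l1-minimization step (illustrative parameters)."""
    L = np.linalg.norm(D, 2) ** 2                   # Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = x - D.T @ (D @ x - y) / L               # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

def src_classify(D, labels, y):
    """Assign y to the class whose training columns reconstruct it best."""
    x = ista_l1(D, y)
    res = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
           for c in np.unique(labels)}
    return min(res, key=res.get)

# Toy dictionary: one training signal per class (columns of D).
D = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.0, 0.0]])
labels = np.array([0, 1])
```

With real data the columns of D would be (randomly projected) inertial-sensor feature vectors rather than unit vectors.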


Journal ArticleDOI
TL;DR: A framework for activity awareness using surface electromyography and accelerometer signals is proposed and a continuous daily activity monitoring and fall detection scheme was performed, demonstrating the excellent fall detection performance and the great feasibility of the proposed method in daily activities awareness.
Abstract: As an essential branch of context awareness, activity awareness, especially daily activity monitoring and fall detection, is important to healthcare for the elderly and patients with chronic diseases. In this paper, a framework for activity awareness using surface electromyography and accelerometer (ACC) signals is proposed. First, histogram negative entropy was employed to determine the start- and end-points of static and dynamic active segments. Then, the angle of each ACC axis was calculated to indicate body postures, which assisted with sorting dynamic activities into two categories, dynamic gait activities and dynamic transition ones, by judging whether the pre- and post-postures are both standing. Next, the dynamic gait activities were identified by double-stream hidden Markov models. The dynamic transition activities were then separated into normal transition activities and falls by the resultant ACC amplitude. Finally, a continuous daily activity monitoring and fall detection scheme was carried out with a recognition accuracy of over 98%, demonstrating the excellent fall detection performance and the feasibility of the proposed method for daily activity awareness.

179 citations


Journal ArticleDOI
TL;DR: A low-complexity algorithm for the extraction of the fiducial points from the electrocardiogram, based on the discrete wavelet transform with the Haar function being the mother wavelet, which achieves an ideal tradeoff between computational complexity and performance, a key requirement in remote cardiovascular disease monitoring systems.
Abstract: This paper introduces a low-complexity algorithm for the extraction of the fiducial points from the electrocardiogram (ECG). The application area we consider is that of remote cardiovascular monitoring, where continuous sensing and processing takes place in low-power, computationally constrained devices; thus, the power consumption and complexity of the processing algorithms should remain at a minimum level. In this context, we choose to employ the discrete wavelet transform (DWT), with the Haar function as the mother wavelet, as our principal analysis method. From the modulus-maxima analysis on the DWT coefficients, an approximation of the ECG fiducial points is extracted. These initial findings are complemented with a refinement stage, based on the time-domain morphological properties of the ECG, which alleviates the decreased temporal resolution of the DWT. The resulting algorithm is a hybrid scheme of time- and frequency-domain signal processing. Feature extraction results from 27 ECG signals from QTDB were tested against manual annotations and used to compare our approach against state-of-the-art ECG delineators. In addition, 450 signals from the 15-lead PTBDB are used to evaluate the obtained performance against the CSE tolerance limits. Our findings indicate that all but one of the CSE limits are satisfied. This level of performance, combined with a complexity analysis, where the upper bound of the proposed algorithm, in terms of arithmetic operations, is calculated as 2.423N+214 additions and 1.093N+12 multiplications for N ≤ 861, or 2.553N+102 additions and 1.093N+10 multiplications for N > 861 (N being the number of input samples), reveals that the proposed method achieves an ideal tradeoff between computational complexity and performance, a key requirement in remote cardiovascular disease monitoring systems.

173 citations
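The first stage of such a delineator can be illustrated with a one-level Haar DWT, whose detail-coefficient modulus maxima give a coarse localization of sharp ECG events; the paper's time-domain refinement stage is omitted in this sketch:

```python
import numpy as np

def haar_dwt(x):
    """One level of the Haar DWT: approximation (a) and detail (d)
    coefficients. Modulus maxima of the detail coefficients coarsely
    localize sharp events such as the QRS complex."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

# Toy "ECG": a single sharp spike at sample 8.
x = np.zeros(16)
x[8] = 1.0
a, d = haar_dwt(x)
```

The index of the largest |d| coefficient maps back to the spike's sample pair; the Haar transform also preserves signal energy, which is easy to verify.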


Journal ArticleDOI
TL;DR: The results clearly indicate that the availability of multivariable data and their effective combination can significantly increase the accuracy of both short-term and long-term predictions.
Abstract: Data-driven techniques have recently drawn significant interest in the predictive modeling of subcutaneous (s.c.) glucose concentration in type 1 diabetes. In this study, the s.c. glucose prediction is treated as a multivariate regression problem, which is addressed using support vector regression (SVR). The proposed method is based on variables concerning: 1) the s.c. glucose profile; 2) the plasma insulin concentration; 3) the appearance of meal-derived glucose in the systemic circulation; and 4) the energy expenditure during physical activities. Six cases corresponding to different combinations of the aforementioned variables are used to investigate the influence of the input on the daily glucose prediction. The proposed method is evaluated using a dataset of 27 patients in free-living conditions. Tenfold cross validation is applied to each dataset individually to both optimize and test the SVR model. In the case, where all the input variables are considered, the average prediction errors are 5.21, 6.03, 7.14, and 7.62 mg/dl for 15-, 30-, 60-, and 120-min prediction horizons, respectively. The results clearly indicate that the availability of multivariable data and their effective combination can significantly increase the accuracy of both short-term and long-term predictions.

163 citations
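The multivariate regression setup can be sketched by building lagged input vectors from the several variable groups with a glucose target at the chosen prediction horizon; the series names and sizes below are illustrative, and the SVR fit itself is omitted:

```python
import numpy as np

def make_lagged(series, lags, horizon):
    """Build a regression matrix from several input time series (keys and
    data are illustrative), using `lags` past samples of each series to
    predict glucose `horizon` steps ahead."""
    glucose = series["glucose"]
    rows, targets = [], []
    for t in range(lags - 1, len(glucose) - horizon):
        rows.append(np.concatenate([s[t - lags + 1:t + 1]
                                    for s in series.values()]))
        targets.append(glucose[t + horizon])
    return np.array(rows), np.array(targets)

# Toy data: a rising glucose trace and a flat insulin trace.
data = {"glucose": np.arange(10.0), "insulin": np.zeros(10)}
X, y = make_lagged(data, lags=2, horizon=1)
```

In the paper, four variable groups (glucose, insulin, meal-derived glucose, energy expenditure) would populate the dictionary, and the (X, y) pairs would train an SVR model per horizon.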


Journal ArticleDOI
TL;DR: The proposed classifier separates lower risk patients from higher risk ones using standard long-term heart rate variability (HRV) measures, and is comprehensible and consistent with the consensus shown by previous studies that depressed HRV is a useful tool for risk assessment in patients suffering from CHF.
Abstract: This study aims to develop an automatic classifier for risk assessment in patients suffering from congestive heart failure (CHF). The proposed classifier separates lower risk patients from higher risk ones, using standard long-term heart rate variability (HRV) measures. Patients are labeled as lower or higher risk according to the New York Heart Association (NYHA) classification. A retrospective analysis on two public Holter databases was performed, analyzing the data of 12 patients suffering from mild CHF (NYHA I and II), labeled as lower risk, and 32 suffering from severe CHF (NYHA III and IV), labeled as higher risk. Only patients with a fraction of total heartbeat intervals (RR) classified as normal-to-normal (NN) intervals (NN/RR) higher than 80% were selected as eligible, in order to have satisfactory signal quality. Classification and regression tree (CART) analysis was employed to develop the classifiers. A total of 30 higher risk and 11 lower risk patients were included in the analysis. The proposed classification trees achieved a sensitivity and a specificity of 93.3% and 63.6%, respectively, in identifying higher risk patients. Finally, the rules obtained by CART are comprehensible and consistent with the consensus shown by previous studies that depressed HRV is a useful tool for risk assessment in patients suffering from CHF.

147 citations


Journal ArticleDOI
TL;DR: From the comprehensive experimental evaluations on datasets for 12 people, it is confirmed that the proposed person-specific fall detection system can achieve excellent fall detection performance with 100% fall detection rate and only 3% false detection rate with the optimally tuned parameters.
Abstract: In this paper, we propose a novel computer vision-based fall detection system for monitoring an elderly person in a home care, assistive living application. Initially, a single camera covering the full view of the room environment is used for the video recording of an elderly person's daily activities for a certain time period. The recorded video is then manually segmented into short video clips containing normal postures, which are used to compose the normal dataset. We use the codebook background subtraction technique to extract the human body silhouettes from the video clips in the normal dataset, and information from ellipse fitting and shape description, together with position information, is used to provide features describing the extracted posture silhouettes. The features are collected, and an online one-class support vector machine (OCSVM) method is applied to find the region in feature space that distinguishes normal daily postures from abnormal postures such as falls. The resultant OCSVM model can also be updated online to adapt to newly emerging normal postures, and certain rules are added to reduce the false alarm rate and thereby improve fall detection performance. From comprehensive experimental evaluations on datasets for 12 people, we confirm that our proposed person-specific fall detection system can achieve excellent fall detection performance, with a 100% fall detection rate and only a 3% false detection rate with optimally tuned parameters. From a system perspective, this is a semi-unsupervised fall detection system: although an unsupervised-type algorithm (OCSVM) is applied, human intervention is needed for segmenting and selecting the video clips containing normal postures. As such, our research represents a step toward a completely unsupervised fall detection system.

Journal ArticleDOI
TL;DR: The experimental results show that the triaxial accelerometers around the chest and waist produce optimal results, and the proposed cascade-AdaBoost-support vector machine (SVM) classifier has the highest accuracy rate and detection rate as well as the lowest false alarm rate.
Abstract: In this paper, we propose a cascade-AdaBoost-support vector machine (SVM) classifier to complete the triaxial accelerometer-based fall detection method. The method uses the acceleration signals of daily activities of volunteers from a database and calculates feature values. By taking the feature values of a sliding window as an input vector, the cascade-AdaBoost-SVM algorithm can self-construct based on training vectors, and the AdaBoost algorithm of each layer can automatically select several optimal weak classifiers to form a strong classifier, which effectively accelerates processing in the testing phase by requiring only the selected features rather than all features. In addition, the algorithm can automatically determine whether to replace the AdaBoost classifier with a support vector machine. We used the UCI database for the experiment, in which the triaxial accelerometers are worn around the left and right ankles, on the chest, and around the waist. The results are compared to those of a neural network, a support vector machine, and the cascade-AdaBoost classifier. The experimental results show that the triaxial accelerometers around the chest and waist produce optimal results, and our proposed method has the highest accuracy rate and detection rate as well as the lowest false alarm rate.
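The AdaBoost stage inside each cascade layer can be illustrated with a tiny decision-stump AdaBoost in numpy; the SVM-replacement rule and the cascade structure itself are omitted, and the data and parameters below are toy assumptions:

```python
import numpy as np

def train_adaboost(X, y, n_rounds=10):
    """Tiny AdaBoost over decision stumps (threshold tests on single
    features); labels must be in {-1, +1}. Illustrative sketch only."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)                 # sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        for j in range(d):                  # exhaustive stump search
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = np.where(sign * (X[:, j] - thr) >= 0, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign, pred)
        err, j, thr, sign, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))
        w = w * np.exp(-alpha * y * pred)   # reweight misclassified samples
        w /= w.sum()
        ensemble.append((alpha, j, thr, sign))
    return ensemble

def predict_adaboost(ensemble, X):
    score = sum(a * np.where(s * (X[:, j] - t) >= 0, 1, -1)
                for a, j, t, s in ensemble)
    return np.where(score >= 0, 1, -1)

# Toy separable data: one "impact magnitude" feature, fall = +1.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([-1, -1, 1, 1])
model = train_adaboost(X, y)
```

In the cascade, each layer's AdaBoost would consume only the features its selected stumps use, which is what speeds up the testing phase.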

Journal ArticleDOI
TL;DR: This work evaluates and ranks seven popular machine learning algorithms by their performance in separating 30-s long BCG epochs into one of three classes: sinus rhythm, AF, and artifact, with random forests emerging as the best algorithm.
Abstract: We present a study on the feasibility of the automatic detection of atrial fibrillation (AF) from cardiac vibration signals (ballistocardiograms/BCGs) recorded by unobtrusive bed-mounted sensors. The proposed system is intended as a screening and monitoring tool in home-healthcare applications and not as a replacement for ECG-based methods used in clinical environments. Based on the BCG data recorded in a study with ten AF patients, we evaluate and rank seven popular machine learning algorithms (naive Bayes, linear and quadratic discriminant analysis, support vector machines, random forests as well as bagged and boosted trees) for their performance in separating 30-s long BCG epochs into one of three classes: sinus rhythm, AF, and artifact. For each algorithm, feature subsets of a set of statistical time-frequency-domain and time-domain features were selected based on the mutual information between features and class labels as well as the first- and second-order interactions among features. The classifiers were evaluated on a set of 856 epochs by means of tenfold cross validation. The best algorithm (random forests) achieved a Matthews correlation coefficient, mean sensitivity, and mean specificity of 0.921, 0.938, and 0.982, respectively.
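The feature-selection criterion, mutual information between a discretized feature and the class labels, can be computed directly from empirical frequencies; this sketch covers only the first-order term, not the paper's first- and second-order feature-interaction analysis:

```python
import numpy as np

def mutual_information(f, c):
    """Empirical mutual information (in bits) between a discretized
    feature f and class labels c, both given as integer codes."""
    f, c = np.asarray(f), np.asarray(c)
    mi = 0.0
    for fv in np.unique(f):
        for cv in np.unique(c):
            p_joint = np.mean((f == fv) & (c == cv))
            if p_joint > 0:
                mi += p_joint * np.log2(
                    p_joint / (np.mean(f == fv) * np.mean(c == cv)))
    return mi
```

A feature identical to the labels carries one full bit for balanced binary classes, while an independent feature carries none.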

Journal ArticleDOI
TL;DR: The proposed model is effective in removing OAs and meets the requirements of portable systems used for patient monitoring as typified by the OPTIMI project.
Abstract: A new model to remove ocular artifacts (OA) from electroencephalograms (EEGs) is presented. The model is based on discrete wavelet transformation (DWT) and adaptive noise cancellation (ANC). Using simulated and measured data, the accuracy of the model is compared with the accuracy of other existing methods based on stationary wavelet transforms and our previous work based on wavelet packet transform and independent component analysis. A particularly novel feature of the new model is the use of DWTs to construct an OA reference signal, using the three lowest frequency wavelet coefficients of the EEGs. The results show that the new model demonstrates improved performance with respect to the recovery of true EEG signals and also has better tracking performance. Because the new model requires only single-channel sources, it is well suited for use in portable environments, where constraints on acceptable wearable sensor attachments usually dictate single-channel devices. The model is also applied and evaluated against data recorded within the EU FP7 project Online Predictive Tools for Intervention in Mental Illness (OPTIMI). The results show that the proposed model is effective in removing OAs and meets the requirements of portable systems used for patient monitoring, as typified by the OPTIMI project.
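The ANC half of such a model can be sketched with a standard LMS adaptive filter that removes from the EEG whatever is linearly predictable from the artifact reference (in the paper, a reference built from the three lowest-frequency DWT coefficients; here an arbitrary sinusoidal reference, with illustrative filter order and step size):

```python
import numpy as np

def lms_cancel(primary, reference, mu=0.05, order=4):
    """LMS adaptive noise cancellation: subtracts from `primary` (the EEG)
    the part linearly predictable from `reference` (the artifact reference).
    Returns the artifact-cleaned signal; order and mu are illustrative."""
    w = np.zeros(order)
    out = np.zeros(len(primary))
    for n in range(order, len(primary)):
        x = reference[n - order:n][::-1]   # most recent reference samples
        e = primary[n] - w @ x             # cancellation error = cleaned sample
        w += 2 * mu * e * x                # LMS weight update
        out[n] = e
    return out

# Toy check: if the "EEG" consists purely of the artifact,
# the filter should learn to cancel almost all of it.
ref = np.sin(0.1 * np.arange(4000))
cleaned = lms_cancel(ref, ref)
```

After adaptation the residual shrinks toward zero, since a sinusoid is perfectly predictable from its recent past.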

Journal ArticleDOI
TL;DR: This paper presents novel methods to segment the nucleus and cytoplasm of white blood cells (WBC) and proposes two different schemes, based on granulometric analysis and on morphological transformations, which have been successfully applied to a large number of images.
Abstract: This paper presents novel methods to segment the nucleus and cytoplasm of white blood cells (WBC). This information is the basis for performing higher level tasks such as automatic differential counting, which plays an important role in the diagnosis of different diseases. We explore the image simplification and contour regularization resulting from the application of the self-dual multiscale morphological toggle (SMMT), an operator with scale-space properties. To segment the nucleus, image preprocessing with SMMT has been shown to be essential to ensure the accuracy of two well-known image segmentation techniques, namely, the watershed transform and level-set methods. To identify the cytoplasm region, we propose two different schemes, based on granulometric analysis and on morphological transformations. The proposed methods have been successfully applied to a large number of images, showing promising segmentation and classification results for varying cell appearance and image quality, encouraging future work.

Journal ArticleDOI
TL;DR: The frequency-dependent absorption coefficients, refractive indices, and Debye relaxation times of whole blood, red blood cells, plasma, and a thrombus are presented.
Abstract: In the continuing development of terahertz technology to enable the determination of tissue pathologies in real-time during surgical procedures, it is important to distinguish the measured terahertz signal from biomaterials and fluids, such as blood, which may mask the signal from tissues of interest. In this paper, we present the frequency-dependent absorption coefficients, refractive indices, and Debye relaxation times of whole blood, red blood cells, plasma, and a thrombus.

Journal ArticleDOI
TL;DR: The results showed that EMG signals and ground reaction forces/moments were more informative than prosthesis kinematics and a protocol was suggested for determining the informative data sources and sensor configurations for future development of volitional control of powered artificial legs.
Abstract: Various types of data sources have been used to recognize user intent for volitional control of powered artificial legs. However, there is still a debate on exactly what data sources are necessary for accurately and responsively recognizing the user's intended tasks. Motivated by this question of wide interest, in this study we aimed to 1) investigate the usefulness of different data sources commonly suggested for user intent recognition and 2) determine an informative set of data sources for volitional control of prosthetic legs. The studied data sources included eight surface electromyography (EMG) signals from the residual thigh muscles of transfemoral (TF) amputees, ground reaction forces/moments from a prosthetic pylon, and kinematic measurements from the residual thigh and prosthetic knee. We then ranked and included data sources based on their usefulness for user intent recognition and selected a reduced number of data sources that ensured accurate recognition of the user's intended task, using three source selection algorithms. The results showed that EMG signals and ground reaction forces/moments were more informative than prosthesis kinematics. Nine to eleven of the initial data sources were sufficient to maintain 95% accuracy for recognizing the studied seven tasks without missing additional task transitions in real time. The selected data sources produced consistent system performance across two experimental days for the four recruited TF amputee subjects, indicating the potential robustness of the selected data sources. Finally, based on the study results, we suggest a protocol for determining the informative data sources and sensor configurations for future development of volitional control of powered artificial legs.

Journal ArticleDOI
TL;DR: Combining RQA-based measures of the original signal and its subbands yields an overall accuracy of 98.67%, indicating the high accuracy of the proposed method.
Abstract: This study presents the application of recurrence quantification analysis (RQA) to EEG recordings and their subbands (delta, theta, alpha, beta, and gamma) for epileptic seizure detection. RQA is adopted since it does not require assumptions about stationarity, signal length, or noise. Decomposing the original EEG into its five constituent subbands aids identification of the dynamics underlying the EEG signal. This leads to better classification of the database into three groups: healthy subjects, epileptic subjects during a seizure-free interval (interictal), and epileptic subjects during a seizure (ictal). The proposed algorithm is applied to an epileptic EEG dataset provided by Dr. R. Andrzejak of the Epilepsy Center, University of Bonn, Bonn, Germany. Combining the RQA-based measures of the original signal and its subbands results in an overall accuracy of 98.67%, which indicates the high accuracy of the proposed method.
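One of the basic RQA measures, the recurrence rate, can be computed from a time-delay embedding in a few lines; the embedding dimension, delay, and distance threshold below are illustrative choices, not the paper's settings:

```python
import numpy as np

def recurrence_rate(x, dim=3, delay=1, eps=0.5):
    """Recurrence rate: the fraction of embedded state pairs closer than
    eps. Embedding parameters are illustrative, not the paper's."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim - 1) * delay
    # Time-delay embedding: rows are delay vectors [x_t, x_{t+d}, ...].
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=2)
    return np.mean(dists <= eps)
```

A constant signal recurs everywhere (rate 1), while a monotone ramp only "recurs" on the diagonal of the recurrence plot.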

Journal ArticleDOI
TL;DR: This paper presents a child activity recognition approach using a single 3-axis accelerometer and a barometric pressure sensor worn at the waist to prevent child accidents such as unintentional injuries at home.
Abstract: This paper presents a child activity recognition approach using a single 3-axis accelerometer and a barometric pressure sensor worn at the waist to prevent child accidents such as unintentional injuries at home. Labeled accelerometer data are collected from children of both sexes aged 16 to 29 months. To recognize daily activities, the time-domain features mean, standard deviation, and slope are calculated over sliding windows. In addition, FFT analysis is adopted to extract frequency-domain features of the aggregated data, from which the energy and correlation of the acceleration data are calculated. Child activities are classified into 11 daily activities: wiggling, rolling, standing still, standing up, sitting down, walking, toddling, crawling, climbing up, climbing down, and stopping. The overall accuracy of activity recognition was 98.43% using only a single wearable triaxial accelerometer sensor and a barometric pressure sensor with a support vector machine.
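The windowed feature extraction described above (mean, standard deviation, slope, and FFT-based energy over sliding windows) can be sketched as follows; the window length and step are assumptions, not the paper's values:

```python
import numpy as np

def window_features(sig, win=32, step=16):
    """Per-window mean, standard deviation, slope (least-squares line fit),
    and FFT energy. Window length and step are illustrative."""
    feats = []
    for s in range(0, len(sig) - win + 1, step):
        w = sig[s:s + win]
        slope = np.polyfit(np.arange(win), w, 1)[0]   # linear trend
        energy = np.sum(np.abs(np.fft.rfft(w)) ** 2) / win
        feats.append([w.mean(), w.std(), slope, energy])
    return np.array(feats)

F = window_features(np.ones(64))
```

Each axis of the accelerometer would yield one such feature matrix, which would then feed the SVM classifier.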

Journal ArticleDOI
TL;DR: The results show that the system is welcomed by the chronic patients, who are especially willing to share healthcare information, and is easy to learn and use, while its features have been regarded overall by the patients as helpful for their disease management and treatment.
Abstract: In this paper, we present the design and development of a pervasive health system enabling self-management of chronic patients during their everyday activities. The proposed system integrates patient health monitoring, status logging for capturing various problems or symptoms met, and social sharing of the recorded information within the patient's community, aiming to facilitate disease management. A prototype is implemented on a mobile device, illustrating the feasibility and applicability of the presented work by adopting unobtrusive vital signs monitoring through a wearable multisensing device, a service-oriented architecture for handling communication issues, and popular microblogging services. Furthermore, a study has been conducted with 16 hypertensive patients, in order to investigate the user acceptance, the usefulness, and the virtue of the proposed system. The results show that the system is welcomed by the chronic patients, who are especially willing to share healthcare information, and is easy to learn and use, while its features have been regarded overall by the patients as helpful for their disease management and treatment.

Journal ArticleDOI
TL;DR: It is demonstrated how ratios between original and recomputed geometric moments can be used as image features in a classifier-based strategy in order to determine the nature of a global image processing operation.
Abstract: In this paper, we present a medical image integrity verification system to detect and approximate local malevolent image alterations (e.g., removal or addition of lesions) as well as to identify the nature of global processing an image may have undergone (e.g., lossy compression, filtering, etc.). The proposed integrity analysis process is based on nonsignificant region watermarking, with signatures extracted from different pixel blocks of interest that are compared with recomputed signatures at the verification stage. A set of three signatures is proposed. The first two, devoted to detecting and localizing modifications, are cryptographic hashes and checksums, while the last one is derived from image moment theory. In this paper, we first show how geometric moments can be used to approximate any local modification by its nearest generalized 2-D Gaussian. We then demonstrate how ratios between original and recomputed geometric moments can be used as image features in a classifier-based strategy in order to determine the nature of a global image processing operation. Experimental results considering both local and global modifications of MRI and retina images illustrate the overall performance of our approach. With a pixel block signature of about 200 bits, it is possible to detect and roughly localize tampered regions and to characterize the nature of the image tampering.
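The geometric moments underlying the third signature are simple weighted pixel sums; a minimal sketch (the block contents and moment orders below are illustrative):

```python
import numpy as np

def geometric_moment(block, p, q):
    """Geometric moment m_pq = sum over (x, y) of x^p * y^q * I(x, y) for a
    pixel block; ratios of original to recomputed moments serve as
    classifier features in this kind of scheme."""
    h, w = block.shape
    y, x = np.mgrid[0:h, 0:w]            # pixel coordinate grids
    return float(np.sum((x ** p) * (y ** q) * block))
```

For a uniform block, the first-order moments divided by m_00 give the block centroid, which is one way such moments capture spatial structure.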

Journal ArticleDOI
TL;DR: A classification scheme is developed based on boosting and random forest classifiers to make the classifier robust to untrained classes, and it is shown that the proposed scheme can reach up to about 92% accuracy in recognizing trained classes and 20% for untrained classes.
Abstract: The high accuracy of conventional pattern-recognition-based surface myoelectric classification in laboratory experiments does not necessarily carry over to practical prostheses. An obvious reason is the effect of signals from untrained classes, caused by the relatively small training dataset. In order to make the classifier robust to untrained classes, a classification scheme based on boosting and random forest classifiers is developed in this paper. In addition, a threshold on the posterior probability of the prediction is introduced to balance accurate classification against rejection of samples belonging to untrained classes. Experiments are conducted for comparison with two other schemes using linear discriminant analysis and support vector machines. Surface electromyogram signals, labeled with seven isometric movements, are collected from the forearms of six healthy subjects. It is shown that the proposed scheme can reach up to about 92% accuracy in recognizing trained classes and 20% in rejecting untrained classes. By adjusting the threshold, the accuracy of rejecting untrained classes reaches around 80%, with a small decrease in recognizing trained classes (down to 80%). In analyzing the experimental results, we also find that the proposed scheme has a better error distribution among the classes.
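The rejection mechanism described above, thresholding the posterior probability of the winning class, can be sketched independently of the underlying boosted random forest. The threshold value and movement labels below are illustrative, not taken from the paper.

```python
def classify_with_rejection(posteriors, threshold=0.6):
    """Return the most probable movement class, or None when the top
    posterior falls below the threshold (i.e., reject the sample as
    belonging to an untrained class).

    posteriors: dict mapping class label -> estimated posterior probability.
    """
    label, p = max(posteriors.items(), key=lambda kv: kv[1])
    return label if p >= threshold else None
```

Raising the threshold rejects more untrained-class samples at the cost of rejecting some trained-class ones, which is the trade-off reported in the abstract.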

Journal ArticleDOI
TL;DR: In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented based on image and volumetric coding, consisting of a wavelet-based lossy coding layer followed by arithmetic coding on the residual.
Abstract: In this paper, lossless and near-lossless compression algorithms for multichannel electroencephalogram (EEG) signals are presented, based on image and volumetric coding. Multichannel EEG signals have significant correlation among spatially adjacent channels; moreover, EEG signals are also correlated across time. Suitable representations are proposed to exploit those correlations effectively. In particular, multichannel EEG is represented either in the form of an image (matrix) or volumetric data (tensor), and a wavelet transform is then applied to those EEG representations. The compression algorithms are designed following the principle of “lossy plus residual coding,” consisting of a wavelet-based lossy coding layer followed by arithmetic coding of the residual. This approach guarantees a specifiable maximum error between the original and reconstructed signals. The compression algorithms are applied to three different EEG datasets, each with a different sampling rate and resolution. The proposed multichannel compression algorithms achieve attractive compression ratios compared to algorithms that compress individual channels separately.
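The "lossy plus residual" layering bounds the reconstruction error because the residual between the original and the lossy layer is quantized with a step of 2E + 1 for a tolerance of E. A stdlib-only sketch of that residual layer for integer samples (the wavelet lossy layer itself is omitted, and the entropy coding of the quantized residual is not shown):

```python
def encode_residual(signal, lossy_approx, max_error):
    """Quantize the integer residual so that, after decoding, no sample
    deviates from the original by more than max_error (E = 0 is lossless)."""
    step = 2 * max_error + 1
    return [round((s - a) / step) for s, a in zip(signal, lossy_approx)]

def decode(lossy_approx, quantized, max_error):
    """Rebuild the signal from the lossy layer plus the dequantized residual."""
    step = 2 * max_error + 1
    return [a + q * step for a, q in zip(lossy_approx, quantized)]
```

With an odd step and integer residuals, rounding to the nearest multiple of the step leaves an error of at most (step - 1) / 2 = E, which is the specifiable maximum error the abstract refers to.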

Journal ArticleDOI
TL;DR: The presented system is composed of a custom CMOS image sensor, a dedicated image compressor, a forward error correction encoder protecting radio transmitted data against random and burst errors, a radio data transmitter, and a controller supervising all operations of the system.
Abstract: This paper presents the design of a hardware-efficient, low-power image processing system for next-generation wireless endoscopy. The presented system is composed of a custom CMOS image sensor, a dedicated image compressor, a forward error correction (FEC) encoder protecting radio transmitted data against random and burst errors, a radio data transmitter, and a controller supervising all operations of the system. The most significant part of the system is the image compressor. It is based on an integer version of a discrete cosine transform and a novel, low complexity yet efficient, entropy encoder making use of an adaptive Golomb-Rice algorithm instead of Huffman tables. The novel hardware-efficient architecture designed for the presented system enables on-the-fly compression of the acquired image. Instant compression, together with elimination of the need to retransmit erroneously received data thanks to their prior FEC encoding, significantly reduces the size of the required memory in comparison to previous systems. The presented system was prototyped in a single, low-power, 65-nm field-programmable gate array (FPGA) chip. Its power consumption is low and comparable to other application-specific integrated circuit (ASIC)-based systems, despite the FPGA-based implementation.
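The adaptive Golomb-Rice entropy coder mentioned above emits, for each nonnegative residual, a unary-coded quotient followed by a k-bit remainder; adapting k to the local statistics replaces stored Huffman tables. A hedged sketch (the adaptation rule used in the actual chip is not specified in the abstract, so the one below is an assumption):

```python
def rice_encode(n, k):
    """Golomb-Rice codeword of a nonnegative integer n with parameter k:
    unary quotient (q ones and a terminating zero), then k remainder bits."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = "1" * q + "0"          # unary-coded quotient
    if k:
        bits += format(r, f"0{k}b")  # k-bit binary remainder
    return bits

def adapt_k(recent_values):
    """Pick k so that 2**k is close to the mean magnitude of recent
    residuals -- one common adaptation heuristic."""
    mean = sum(recent_values) / len(recent_values)
    k = 0
    while (1 << (k + 1)) <= mean:
        k += 1
    return k
```

Because the codeword is computed arithmetically from n and k, this coder suits on-the-fly hardware compression with no table memory.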

Journal ArticleDOI
TL;DR: A system for estimating body postures on a bed using unconstrained measurements of electrocardiogram (ECG) signals using 12 capacitively coupled electrodes and a conductive textile sheet and the performance was better than the results that have been reported to date.
Abstract: We developed and tested a system for estimating body postures on a bed using unconstrained measurement of electrocardiogram (ECG) signals with 12 capacitively coupled electrodes and a conductive textile sheet. Thirteen healthy subjects participated in the experiment. After detecting which of the 12 electrodes were in contact with the body, features were extracted on the basis of the morphology of the QRS complex (the Q, R, and S waves of the ECG) in three main steps. The features were applied to linear discriminant analysis, support vector machines (SVMs) with linear and radial basis function (RBF) kernels, and artificial neural networks (one and two layers), respectively. The SVM with RBF kernel had the highest performance, with an accuracy of 98.4% for estimating four body postures on the bed: supine, right lateral, prone, and left lateral. Overall, although the ECG data were obtained from a few sensors in an unconstrained manner, the performance was better than the results reported to date. The developed system and algorithm can be applied to obstructive sleep apnea detection and the analysis of sleep quality or sleep stages, as well as body posture detection for the management of bedsores.
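The best-performing classifier here is an SVM with a Gaussian RBF kernel, which scores the similarity of two QRS-morphology feature vectors as exp(-γ‖u − v‖²). A minimal stdlib illustration of the kernel itself (the γ value and feature layout are illustrative, not from the paper):

```python
import math

def rbf_kernel(u, v, gamma=0.5):
    """Gaussian radial basis function kernel between two feature vectors:
    1.0 for identical inputs, decaying toward 0 as they diverge."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-gamma * sq_dist)
```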

Journal ArticleDOI
TL;DR: A new parameter computed on FHR time series and based on the phase-rectified signal average curve (PRSA) is introduced and it is suggested this new index might reliably contribute to the quality of early fetal diagnosis.
Abstract: Since the 1980s, cardiotocography (CTG) has been the most widespread technique to monitor fetal well-being during pregnancy. CTG consists of the simultaneous recording of the fetal heart rate (FHR) signal and uterine contractions, and its interpretation is usually performed through visual inspection by trained obstetric personnel. To reduce inter- and intraobserver variability and to improve the efficacy of prenatal diagnosis, new quantitative parameters, extracted from the digitized CTG signals, have been proposed as additional tools in the clinical diagnosis process. In this paper, a new parameter computed on FHR time series and based on the phase-rectified signal average (PRSA) curve is introduced. It is defined as the acceleration phase-rectified slope (APRS) or deceleration phase-rectified slope (DPRS), depending on the sign of the slope of the PRSA curve. The new PRSA parameter was applied to FHR time series of 61 healthy and 61 intrauterine growth restricted (IUGR) fetuses during CTG nonstress tests. The performance of APRS and DPRS was compared with 1) the results provided by other parameters extracted from the PRSA curve itself and already existing in the literature, and 2) other clinical indices provided by computerized cardiotocographic systems. The APRS and DPRS indices performed better than any other parameter in this study in distinguishing between healthy and IUGR fetuses. Our results suggest this new index might reliably contribute to the quality of early fetal diagnosis.
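The PRSA construction behind APRS/DPRS can be sketched briefly: anchor points are samples where the series increases (for acceleration; decreases for deceleration), windows around all anchors are averaged into one curve, and the slope-based index is taken at the curve's center. A stdlib sketch under those assumptions (the exact anchor and slope definitions in the paper may differ):

```python
def prsa_curve(series, L):
    """Phase-rectified signal averaging: average windows of +/-L samples
    centred on acceleration anchors (samples larger than their predecessor)."""
    anchors = [i for i in range(L, len(series) - L) if series[i] > series[i - 1]]
    if not anchors:
        return None
    return [sum(series[i + j] for i in anchors) / len(anchors)
            for j in range(-L, L + 1)]

def central_slope(curve):
    """APRS-style index: central-difference slope of the PRSA curve
    at its centre point."""
    c = len(curve) // 2
    return (curve[c + 1] - curve[c - 1]) / 2
```

Using decreasing anchors instead (series[i] < series[i - 1]) yields the deceleration counterpart corresponding to DPRS.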

Journal ArticleDOI
TL;DR: In this paper, the authors evaluated the performance of ECG, thermistor, chest belt, accelerometer, contact, and audio microphones for cough detection in chronic cough disease using three stages: the mutual information conveyed by the features, the ability to discriminate cough at the frame level from other sources of ambiguity, and the ability to detect cough events.
Abstract: The development of a system for the automatic, objective, and reliable detection of cough events has been a need underlined by the medical literature for years. The benefit of such a tool is clear, as it would allow the assessment of pathology severity in chronic cough diseases. Even though some approaches have recently reported solutions achieving this task with relative success, there is still no standardization of the method to adopt or the sensors to use. The goal of this paper is to study objectively the performance of several sensors for cough detection: ECG, thermistor, chest belt, accelerometer, contact, and audio microphones. Experiments are carried out on a database of 32 healthy subjects producing, in a confined room and in three situations, voluntary coughs at various volumes as well as other event categories that can possibly lead to detection errors: background noise, forced expiration, throat clearing, speech, and laughter. The relevance of each sensor is evaluated at three stages: the mutual information conveyed by the features, the ability to discriminate, at the frame level, cough from the other sources of ambiguity, and the ability to detect cough events. In this last experiment, with both sensitivity and specificity averaging about 94.5%, the proposed approach is shown to clearly outperform the commercial Karmelsonix system, which achieved a specificity of 95.3% and a sensitivity of 64.9%.
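The first evaluation stage ranks sensors by the mutual information their features convey about the cough/non-cough label: for discretized features, the plug-in estimate is I(X;Y) = Σ p(x, y) log2[p(x, y) / (p(x) p(y))]. A stdlib sketch of that estimate (the discretization of continuous sensor features into bins is assumed and not shown):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples,
    e.g. binned feature values xs against event labels ys."""
    n = len(xs)
    pxy = Counter(zip(xs, ys))        # joint counts
    px, py = Counter(xs), Counter(ys)  # marginal counts
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())
```

A feature perfectly aligned with the label attains the label entropy (1 bit for a balanced binary label), while an independent feature scores 0.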

Journal ArticleDOI
TL;DR: The novel approach at developing a thermal signature template using four images taken at various instants of time ensured that unforeseen changes in the vasculature over time did not affect the biometric matching process as the authentication process relied only on consistent thermal features.
Abstract: A new thermal imaging framework with unique feature extraction and similarity measurements for face recognition is presented. The research premise is to design specialized algorithms that would extract vasculature information, create a thermal facial signature, and identify the individual. The proposed algorithm is fully integrated and consolidates the critical steps of feature extraction through the use of morphological operators, registration using the Linear Image Registration Tool, and matching through unique similarity measures designed for this task. The novel approach of developing a thermal signature template using four images taken at various instants of time ensured that unforeseen changes in the vasculature over time did not affect the biometric matching process, as the authentication process relied only on consistent thermal features. Thirteen subjects were used for testing the developed technique on an in-house thermal imaging system. The matching using the similarity measures showed an average accuracy of 88.46% for skeletonized signatures and 90.39% for anisotropically diffused signatures. The highly accurate results obtained in the matching process clearly demonstrate the ability of the thermal infrared system to extend in application to other thermal-imaging-based systems. Empirical results applying this approach to an existing database of thermal images prove this assertion.
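The paper designs its own specialized similarity measures for matching skeletonized vascular signatures; as a generic stand-in, a set-overlap (Jaccard) score between two binary signature images conveys the idea of comparing consistent thermal features. This is an illustrative measure, not the one proposed in the paper:

```python
def jaccard_similarity(sig_a, sig_b):
    """Overlap of foreground pixels between two binary signature images
    (lists of rows): 1.0 for identical signatures, 0.0 for disjoint ones."""
    a = {(y, x) for y, row in enumerate(sig_a) for x, v in enumerate(row) if v}
    b = {(y, x) for y, row in enumerate(sig_b) for x, v in enumerate(row) if v}
    union = a | b
    return len(a & b) / len(union) if union else 1.0
```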

Journal ArticleDOI
TL;DR: The use of fractal analysis (FA) is investigated as the basis of a system for multiclass prediction of the progression of glaucoma and novel FA-based features achieve better performance with fewer features and less computational complexity than WFA and FFA.
Abstract: We investigate the use of fractal analysis (FA) as the basis of a system for multiclass prediction of the progression of glaucoma. FA is applied to pseudo 2-D images converted from 1-D retinal nerve fiber layer data obtained from the eyes of normal subjects, and from subjects with progressive and nonprogressive glaucoma. FA features are obtained using a box-counting method and a multifractional Brownian motion method that incorporates texture and multiresolution analyses. Both features are used for Gaussian kernel-based multiclass classification. Sensitivity, specificity, and area under receiver operating characteristic curve (AUROC) are computed for the FA features and for metrics obtained using wavelet-Fourier analysis (WFA) and fast-Fourier analysis (FFA). The AUROCs that predict progressors from nonprogressors based on classifiers trained using a dataset comprised of nonprogressors and ocular normal subjects are 0.70, 0.71, and 0.82 for WFA, FFA, and FA, respectively. The correct multiclass classification rates among progressors, nonprogressors, and ocular normal subjects are 0.82, 0.86, and 0.88 for WFA, FFA, and FA, respectively. Simultaneous multiclass classification among progressors, nonprogressors, and ocular normal subjects has not been previously described. The novel FA-based features achieve better performance with fewer features and less computational complexity than WFA and FFA.
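The box-counting side of the FA features counts, at each scale s, the number N(s) of s-by-s boxes containing foreground pixels, then fits the slope of log N(s) against log(1/s) to estimate a fractal dimension. A stdlib sketch on a binary grid (square images and power-of-two scales are assumed; the multifractional Brownian motion features are not covered here):

```python
import math

def box_count(grid, s):
    """Number of s-by-s boxes containing at least one foreground pixel."""
    n = len(grid)
    return sum(
        1
        for by in range(0, n, s)
        for bx in range(0, n, s)
        if any(grid[y][x]
               for y in range(by, min(by + s, n))
               for x in range(bx, min(bx + s, n)))
    )

def box_counting_dimension(grid, sizes=(1, 2, 4, 8)):
    """Least-squares slope of log N(s) versus log(1/s)."""
    xs = [math.log(1.0 / s) for s in sizes]
    ys = [math.log(box_count(grid, s)) for s in sizes]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))
```

A fully filled region estimates dimension 2, while sparser, more irregular structure yields fractional values, the variation the classifier exploits.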

Journal ArticleDOI
TL;DR: This paper presents the design, implementation, and evaluation of a secure network admission and transmission subsystem based on a polynomial-based authentication scheme, and proposes to exploit the adversary's uncertainty regarding the PHI transmission to update the individual key dynamically and improve key secrecy.
Abstract: A body sensor network (BSN) is a wireless network of biosensors and a local processing unit, which is commonly referred to as the personal wireless hub (PWH). Personal health information (PHI) is collected by biosensors and delivered to the PWH before it is forwarded to the remote healthcare center for further processing. In a BSN, it is critical to only admit eligible biosensors and PWH into the network. Also, securing the transmission from each biosensor to PWH is essential not only for ensuring safety of PHI delivery, but also for preserving the privacy of PHI. In this paper, we present the design, implementation, and evaluation of a secure network admission and transmission subsystem based on a polynomial-based authentication scheme. The procedures in this subsystem to establish keys for each biosensor are communication efficient and energy efficient. Moreover, based on the observation that an adversary eavesdropping in a BSN faces inevitable channel errors, we propose to exploit the adversary's uncertainty regarding the PHI transmission to update the individual key dynamically and improve key secrecy. In addition to the theoretical analysis that demonstrates the security properties of our system, this paper also reports the experimental results of the proposed protocol on resource-limited sensor platforms, which show the efficiency of our system in practice.
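Polynomial-based key establishment of the kind this subsystem builds on (e.g., Blundo-style schemes) gives each node i the univariate share g_i(y) = f(i, y) of a symmetric bivariate polynomial f over a prime field; nodes i and j then both compute the pairwise key f(i, j) = f(j, i) locally, with no key material on the air. A stdlib sketch under those assumptions (the modulus, degree, and coefficients below are illustrative, not the paper's parameters):

```python
P = 2**31 - 1  # illustrative prime modulus

def node_share(coeffs, node_id, p=P):
    """Share g_i(y) = f(node_id, y) of a symmetric bivariate polynomial
    f(x, y) = sum a[i][j] x^i y^j (coeffs must satisfy a[i][j] == a[j][i]).
    Returns one coefficient per power of y."""
    degree = len(coeffs)
    return [sum(coeffs[i][j] * pow(node_id, i, p) for i in range(degree)) % p
            for j in range(degree)]

def pairwise_key(share, peer_id, p=P):
    """Evaluate the node's univariate share at the peer's identity;
    symmetry of f makes both ends derive the same key f(i, j) = f(j, i)."""
    return sum(c * pow(peer_id, j, p) for j, c in enumerate(share)) % p
```

On top of such a scheme, the paper additionally refreshes keys dynamically, exploiting the channel errors an eavesdropper inevitably suffers to improve key secrecy.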