
Showing papers in "IEEE Transactions on Biomedical Engineering in 2019"


Journal ArticleDOI
TL;DR: This paper proposes a joint classification-and-prediction framework based on convolutional neural networks (CNNs) for automatic sleep staging, and introduces a simple yet efficient CNN architecture to power the framework.
Abstract: Correctly identifying sleep stages is important in diagnosing and treating sleep disorders. This paper proposes a joint classification-and-prediction framework based on convolutional neural networks (CNNs) for automatic sleep staging, and, subsequently, introduces a simple yet efficient CNN architecture to power the framework. Given a single input epoch, the novel framework jointly determines its label (classification) and its neighboring epochs’ labels (prediction) in the contextual output. While the proposed framework is orthogonal to the widely adopted classification schemes, which take one or multiple epochs as contextual inputs and produce a single classification decision on the target epoch, we demonstrate its advantages in several ways. First, it leverages the dependency among consecutive sleep epochs while surpassing the problems experienced with the common classification schemes. Second, even with a single model, the framework has the capacity to produce multiple decisions, which are essential in obtaining a good performance as in ensemble-of-models methods, with very little induced computational overhead. Probabilistic aggregation techniques are then proposed to leverage the availability of multiple decisions. To illustrate the efficacy of the proposed framework, we conducted experiments on two public datasets: Sleep-EDF Expanded (Sleep-EDF), which consists of 20 subjects, and Montreal Archive of Sleep Studies (MASS) dataset, which consists of 200 subjects. The proposed framework yields an overall classification accuracy of 82.3% and 83.6%, respectively. We also show that the proposed framework not only is superior to the baselines based on the common classification schemes but also outperforms existing deep-learning approaches. To our knowledge, this is the first work going beyond the standard single-output classification to consider multitask neural networks for automatic sleep staging. 
This framework provides avenues for further studies of different neural-network architectures for automatic sleep staging.
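The multiple decisions produced by the framework can be fused with simple probabilistic aggregation. Below is a minimal NumPy sketch of one common scheme, multiplicative (geometric-mean) aggregation of softmax outputs; the function name and the probability values are illustrative, not the paper's exact pipeline:

```python
import numpy as np

def aggregate_decisions(probs):
    """Fuse several probability vectors predicted for the same epoch.

    probs: array of shape (n_decisions, n_classes), each row a softmax
    output obtained from a different contextual position of the epoch.
    Returns the multiplicatively aggregated (geometric-mean) posterior.
    """
    log_p = np.log(np.clip(probs, 1e-12, None)).mean(axis=0)
    p = np.exp(log_p)
    return p / p.sum()

# Three noisy decisions that mostly agree on class 1 (hypothetical numbers)
decisions = np.array([
    [0.2, 0.7, 0.1],
    [0.3, 0.6, 0.1],
    [0.1, 0.8, 0.1],
])
fused = aggregate_decisions(decisions)
```

Working in the log domain avoids numerical underflow when many decisions are multiplied; additive (arithmetic-mean) aggregation is the other common choice.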

288 citations


Journal ArticleDOI
TL;DR: Both information propagation patterns and activation differences in the brain were fused to improve emotion recognition performance, supporting the development of effective human–computer interaction systems that adapt to human emotions in real-world applications.
Abstract: Objective: Spectral power analysis plays a predominant role in electroencephalogram-based emotional recognition. It can reflect activity differences among multiple brain regions. In addition to activation difference, different emotions also involve different large-scale network during related information processing. In this paper, both information propagation patterns and activation difference in the brain were fused to improve the performance of emotional recognition. Methods: We constructed emotion-related brain networks with phase locking value and adopted a multiple feature fusion approach to combine the compensative activation and connection information for emotion recognition. Results: Recognition results on three public emotional databases demonstrated that the combined features are superior to either single feature based on power distribution or network character. Furthermore, the conducted feature fusion analysis revealed the common characters between activation and connection patterns involved in the positive, neutral, and negative emotions for information processing. Significance: The proposed feasible combination of both information propagation patterns and activation difference in the brain is meaningful for developing the effective human–computer interaction systems by adapting to human emotions in the real world applications.
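The phase locking value (PLV) used to build the emotion-related networks is typically computed from instantaneous phases obtained via the Hilbert transform. A small NumPy/SciPy sketch (the signal parameters are made up for illustration):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """PLV between two equal-length signals: magnitude of the mean
    instantaneous phase difference on the unit circle
    (0 = no phase locking, 1 = perfect locking)."""
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

t = np.linspace(0, 1, 500, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.5)   # same rhythm, constant phase lag
drift = np.cumsum(np.random.default_rng(0).normal(0, 0.5, t.size))
c = np.sin(2 * np.pi * 10 * t + drift)  # randomly drifting phase
plv_locked = phase_locking_value(a, b)
plv_drifting = phase_locking_value(a, c)
```

In practice the PLV is computed per frequency band after band-pass filtering, and the pairwise values form the adjacency matrix of the brain network.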

208 citations


Journal ArticleDOI
TL;DR: This work identifies the discriminative anatomical landmarks from MR images in a data-driven manner, and proposes a deep multi-task multi-channel convolutional neural network for joint classification and regression, using MRI data and demographic information of subjects.
Abstract: In the field of computer-aided Alzheimer's disease (AD) diagnosis, jointly identifying brain diseases and predicting clinical scores using magnetic resonance imaging (MRI) have attracted increasing attention, since these two tasks are highly correlated. Most existing joint learning approaches require hand-crafted feature representations for MR images. Since hand-crafted features of MRI and classification/regression models may not coordinate well with each other, conventional methods may lead to sub-optimal learning performance. Also, demographic information (e.g., age, gender, and education) of subjects may also be related to brain status, and thus can help improve the diagnostic performance. However, conventional joint learning methods seldom incorporate such demographic information into the learning models. To this end, we propose a deep multi-task multi-channel learning (DM²L) framework for simultaneous brain disease classification and clinical score regression, using MRI data and demographic information of subjects. Specifically, we first identify the discriminative anatomical landmarks from MR images in a data-driven manner, and then extract multiple image patches around these detected landmarks. We then propose a deep multi-task multi-channel convolutional neural network for joint classification and regression. Our DM²L framework can not only automatically learn discriminative features for MR images, but also explicitly incorporate the demographic information of subjects into the learning process. We evaluate the proposed method on four large multi-center cohorts with 1984 subjects, and the experimental results demonstrate that DM²L is superior to several state-of-the-art joint learning methods in both the tasks of disease classification and clinical score regression.
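Joint classification and regression of this kind is typically trained by minimizing a weighted sum of a cross-entropy term and a mean-squared-error term. A hedged single-sample sketch in NumPy (the weight lam and all numbers are hypothetical, not the DM²L paper's settings):

```python
import numpy as np

def multitask_loss(logits, class_label, score_pred, score_true, lam=0.5):
    """Weighted sum of cross-entropy (disease classification) and
    mean squared error (clinical score regression); lam balances
    the two objectives during joint training."""
    shifted = logits - logits.max()                 # stable log-softmax
    log_probs = shifted - np.log(np.exp(shifted).sum())
    ce = -log_probs[class_label]
    mse = np.mean((score_pred - score_true) ** 2)
    return ce + lam * mse

# One hypothetical sample: 2-class logits, one clinical score
loss = multitask_loss(np.array([2.0, 0.1]), 0,
                      np.array([24.0]), np.array([26.0]))
```

Sharing a backbone between the two heads is what lets the correlated tasks regularize each other.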

196 citations


Journal ArticleDOI
TL;DR: This paper presents a novel framework for dermoscopy image recognition via both a deep learning method and a local descriptor encoding strategy that is capable of generating more discriminative features to deal with large variations within melanoma classes, as well as small variations between melanoma and nonmelanoma classes with limited training data.
Abstract: In this paper, we present a novel framework for dermoscopy image recognition via both a deep learning method and a local descriptor encoding strategy. Specifically, deep representations of a rescaled dermoscopy image are first extracted via a very deep residual neural network pretrained on a large natural image dataset. Then these local deep descriptors are aggregated by orderless visual statistic features based on Fisher vector (FV) encoding to build a global image representation. Finally, the FV encoded representations are used to classify melanoma images using a support vector machine with a Chi-squared kernel. Our proposed method is capable of generating more discriminative features to deal with large variations within melanoma classes, as well as small variations between melanoma and nonmelanoma classes with limited training data. Extensive experiments are performed to demonstrate the effectiveness of our proposed method. Comparisons with state-of-the-art methods show the superiority of our method using the publicly available ISBI 2016 Skin lesion challenge dataset.
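The final stage pairs the encoded representations with an SVM using a chi-squared kernel. A simplified NumPy sketch of the exponential chi-squared kernel for nonnegative feature vectors (gamma and the sample vectors are illustrative; Fisher vectors are signed, so practical pipelines adapt the features or the kernel accordingly):

```python
import numpy as np

def chi2_kernel(X, Y, gamma=1.0):
    """Exponential chi-squared kernel
    k(x, y) = exp(-gamma * sum_i (x_i - y_i)^2 / (x_i + y_i))
    for nonnegative, histogram-like feature vectors."""
    K = np.zeros((X.shape[0], Y.shape[0]))
    for i, x in enumerate(X):
        for j, y in enumerate(Y):
            denom = x + y
            safe = np.where(denom > 0, denom, 1.0)   # avoid 0/0 terms
            terms = np.where(denom > 0, (x - y) ** 2 / safe, 0.0)
            K[i, j] = np.exp(-gamma * terms.sum())
    return K

X = np.array([[0.2, 0.8], [0.5, 0.5]])
K = chi2_kernel(X, X)
```

The precomputed kernel matrix can then be passed to any SVM implementation that accepts custom kernels.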

187 citations


Journal ArticleDOI
TL;DR: This tutorial is aimed at providing an introduction to brain functional connectivity from electrophysiological signals, including electroencephalography, magnetoencephalography, electrocorticography, and stereoelectroencephalography.
Abstract: We review the theory and algorithms of electrophysiological brain connectivity analysis. This tutorial is aimed at providing an introduction to brain functional connectivity from electrophysiological signals, including electroencephalography, magnetoencephalography, electrocorticography, and stereoelectroencephalography. Various connectivity estimators are discussed, and algorithms introduced. Important issues for estimating and mapping brain functional connectivity with electrophysiology are discussed.

148 citations


Journal ArticleDOI
TL;DR: This work proposes a new approach, based on a novel deep learning architecture called a Multi-directional Recurrent Neural Network, that both interpolates within data streams and imputes across data streams, providing dramatically improved estimation of missing measurements.
Abstract: Missing data is a ubiquitous problem. It is especially challenging in medical settings because many streams of measurements are collected at different—and often irregular—times. Accurate estimation of the missing measurements is critical for many reasons, including diagnosis, prognosis, and treatment. Existing methods address this estimation problem by interpolating within data streams or imputing across data streams (both of which ignore important information) or ignoring the temporal aspect of the data and imposing strong assumptions about the nature of the data-generating process and/or the pattern of missing data (both of which are especially problematic for medical data). We propose a new approach, based on a novel deep learning architecture that we call a Multi-directional Recurrent Neural Network, which interpolates within data streams and imputes across data streams. We demonstrate the power of our approach by applying it to five real-world medical datasets. We show that it provides dramatically improved estimation of missing measurements in comparison to 11 state-of-the-art benchmarks (including Spline and Cubic Interpolations, MICE, MissForest, matrix completion, and several RNN methods); typical improvements in Root Mean Squared Error are between 35% and 50%. Additional experiments based on the same five datasets demonstrate that the improvements provided by our method are extremely robust.
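One of the simple baselines the abstract contrasts with, interpolation within a single data stream, can be sketched in a few lines of NumPy, together with the RMSE metric used to score imputation quality (the toy series is made up for illustration):

```python
import numpy as np

def linear_impute(series):
    """Fill NaNs in a 1-D measurement stream by linear interpolation
    within the stream (ignores all other streams)."""
    s = series.copy()
    idx = np.arange(s.size)
    known = ~np.isnan(s)
    s[~known] = np.interp(idx[~known], idx[known], s[known])
    return s

truth = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
observed = truth.copy()
observed[2] = np.nan                    # simulate one missing measurement
imputed = linear_impute(observed)
rmse = np.sqrt(np.mean((imputed - truth) ** 2))
```

On irregularly sampled, correlated medical streams this baseline discards cross-stream information, which is exactly the gap the multi-directional architecture targets.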

145 citations


Journal ArticleDOI
TL;DR: A novel multi-View convolutional neural network (CNN) framework is proposed by combining classical sEMG feature sets with a CNN-based deep learning model, taking advantage of both early and late fusion of learned multi-view deep features.
Abstract: Gesture recognition using sparse multichannel surface electromyography (sEMG) is a challenging problem, and the solutions are far from optimal from the point of view of muscle–computer interface. In this paper, we address this problem from the context of multi-view deep learning. A novel multi-view convolutional neural network (CNN) framework is proposed by combining classical sEMG feature sets with a CNN-based deep learning model. The framework consists of two parts. In the first part, multi-view representations of sEMG are modeled in parallel by a multistream CNN, and a performance-based view construction strategy is proposed to choose the most discriminative views from classical feature sets for sEMG-based gesture recognition. In the second part, the learned multi-view deep features are fused through a view aggregation network composed of early and late fusion subnetworks, taking advantage of both early and late fusion of learned multi-view deep features. Evaluations on 11 sparse multichannel sEMG databases as well as five databases with both sEMG and inertial measurement unit data demonstrate that our multi-view framework outperforms single-view methods on both unimodal and multimodal sEMG data streams.

142 citations


Journal ArticleDOI
TL;DR: A simple yet powerful method for matching the statistical distributions of two datasets, thus paving the way to BCI systems capable of reusing data from previous sessions and avoiding the need for a calibration procedure.
Abstract: Objective: This paper presents a Transfer Learning approach for dealing with the statistical variability of electroencephalographic (EEG) signals recorded on different sessions and/or from different subjects. This is a common problem faced by brain–computer interfaces (BCI) and poses a challenge for systems that try to reuse data from previous recordings to avoid a calibration phase for new users or new sessions for the same user. Method: We propose a method based on Procrustes analysis for matching the statistical distributions of two datasets using simple geometrical transformations (translation, scaling, and rotation) over the data points. We use symmetric positive definite (SPD) matrices as statistical features for describing the EEG signals, so the geometrical operations on the data points respect the intrinsic geometry of the SPD manifold. Because of its geometry-aware nature, we call our method the Riemannian Procrustes analysis (RPA). We assess the improvement in transfer learning via RPA by performing classification tasks on simulated data and on eight publicly available BCI datasets covering three experimental paradigms (243 subjects in total). Results: Our results show that the classification accuracy with RPA is superior in comparison to other geometry-aware methods proposed in the literature. We also observe improvements in ensemble classification strategies when the statistics of the datasets are matched via RPA. Conclusion and significance: We present a simple yet powerful method for matching the statistical distributions of two datasets, thus paving the way to BCI systems capable of reusing data from previous sessions and avoiding the need for a calibration procedure.
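The translation step of this kind of geometry-aware alignment moves a reference SPD point to the identity by congruence with the inverse matrix square root. A simplified NumPy sketch (it uses the Euclidean mean as the reference for brevity; RPA uses the Riemannian mean and additionally applies scaling and rotation):

```python
import numpy as np

def inv_sqrtm(M):
    """Inverse matrix square root of an SPD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(M)
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def recenter(covs, ref):
    """Move the reference SPD point to the identity: C -> R^{-1/2} C R^{-1/2}.
    Applying this to source and target sets separately aligns their centers."""
    R = inv_sqrtm(ref)
    return np.array([R @ C @ R for C in covs])

# Two toy 2x2 covariance matrices (hypothetical data)
covs = np.array([[[2.0, 0.0], [0.0, 2.0]],
                 [[4.0, 0.0], [0.0, 4.0]]])
aligned = recenter(covs, covs.mean(axis=0))   # Euclidean mean for brevity
```

Because the congruence transform is an isometry of the SPD manifold, relative distances between the covariance matrices, and hence class structure, are preserved.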

130 citations


Journal ArticleDOI
TL;DR: An overview of the recent developments of LIPUS for therapeutic applications is provided, based on the papers that report positive effects, and the findings on the understanding of its mechanism are presented.
Abstract: Ultrasound therapy has a long history of novel applications in medicine. Compared to high-intensity ultrasound used for tissue heating, low-intensity ultrasound has drawn increasing attention recently due to its ability to induce therapeutic changes without biologically significant temperature increase. Low-intensity pulsed ultrasound (LIPUS) is a specific type of ultrasound that delivers at a low intensity and outputs in the mode of pulsed waves. It has minimal thermal effects while maintaining the transmission of acoustic energy to the target tissue, which is able to provide noninvasive physical stimulation for therapeutic applications. LIPUS has been demonstrated to accelerate the healing of fresh fracture, nonunion and delayed union in both animal and clinical studies. The effectiveness of LIPUS for the applications of soft-tissue regeneration and inhibiting inflammatory responses has also been investigated experimentally. Additionally, research has shown that LIPUS is a promising modality for neuromodulation. The purpose of this review is to provide an overview of the recent developments of LIPUS for therapeutic applications, based on the papers that report positive effects, and to present the findings on the understanding of its mechanism. Current available LIPUS devices are also briefly described in this paper.

127 citations


Journal ArticleDOI
TL;DR: A driver drowsiness detection algorithm based on heart rate variability (HRV) analysis is proposed and validates the proposed method by comparing with electroencephalography (EEG)-based sleep scoring and demonstrates the usefulness of the framework of HRV-based anomaly detection.
Abstract: Objective: Driver drowsiness detection is a key technology that can prevent fatal car accidents caused by drowsy driving. The present work proposes a driver drowsiness detection algorithm based on heart rate variability (HRV) analysis and validates the proposed method by comparing with electroencephalography (EEG)-based sleep scoring. Methods: Changes in sleep condition affect the autonomic nervous system and then HRV, which is defined as an RR interval (RRI) fluctuation on an electrocardiogram trace. Eight HRV features are monitored for detecting changes in HRV by using multivariate statistical process control, which is a well known anomaly detection method. Result: The performance of the proposed algorithm was evaluated through an experiment using a driving simulator. In this experiment, RRI data were measured from 34 participants during driving, and their sleep onsets were determined based on the EEG data by a sleep specialist. The validation result of the experimental data with the EEG data showed that drowsiness was detected in 12 out of 13 pre-N1 episodes prior to the sleep onsets, and the false positive rate was 1.7 times per hour. Conclusion: The present work also demonstrates the usefulness of the framework of HRV-based anomaly detection that was originally proposed for epileptic seizure prediction. Significance: The proposed method can contribute to preventing accidents caused by drowsy driving.
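Two of the standard time-domain HRV features that such a monitoring scheme can track are SDNN and RMSSD, computed directly from the RR-interval series. A short NumPy sketch (the paper monitors eight features; the RR values below are made up):

```python
import numpy as np

def hrv_features(rri_ms):
    """Time-domain HRV features from an RR-interval series (ms):
    SDNN (overall variability) and RMSSD (short-term, beat-to-beat
    variability linked to parasympathetic activity)."""
    rri = np.asarray(rri_ms, dtype=float)
    sdnn = rri.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rri) ** 2))
    return sdnn, rmssd

sdnn, rmssd = hrv_features([800, 810, 790, 805, 795])
```

In the paper's framework, feature vectors like these are monitored over time with multivariate statistical process control, flagging drowsiness when they drift outside the control limits learned from alert-state data.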

125 citations


Journal ArticleDOI
TL;DR: The applicability of radar for gait classification with application to home security, medical diagnosis, rehabilitation, and assisted living is demonstrated and radar micro-Doppler signatures and their Fourier transforms are well suited to capture changes in gait.
Abstract: Objective: In this paper, we demonstrate the applicability of radar for gait classification with application to home security, medical diagnosis, rehabilitation, and assisted living. Aiming at identifying changes in gait patterns based on radar micro-Doppler signatures, this paper is concerned with solving the intra-motion-category classification problem of gait recognition. Methods: New gait classification approaches utilizing physical features, subspace features, and sum-of-harmonics modeling are presented and their performances are evaluated using experimental K-band radar data of four test subjects. Five different gait classes are considered for each person, including normal, pathological, and assisted walks. Results: The proposed approaches are shown to outperform existing methods for radar-based gait recognition, which utilize physical features from the cadence-velocity data representation domain, as in this paper. The analyzed gait classes are correctly identified with an average accuracy of 93.8%, where a classification rate of 98.5% is achieved for a single gait class. When applied to new data of another individual, a classification accuracy on the order of 80% can be expected. Conclusion: Radar micro-Doppler signatures and their Fourier transforms are well suited to capture changes in gait. Five different walking styles are recognized with high accuracy. Significance: Radar-based sensing of gait is an emerging technology with multi-faceted applications in security and health care industries. We show that radar, as a contact-less sensing technology, can supplement existing gait diagnostic tools with respect to long-term monitoring and reproducibility of the examinations.

Journal ArticleDOI
TL;DR: A dominant-current deep learning scheme for EIT imaging, in which dominant parts of the induced contrast current (ICC) are used to generate multi-channel CNN inputs; significant performance improvements are shown in reconstructing targets with sharp corners or edges.
Abstract: Objective: Deep learning has recently been applied to electrical impedance tomography (EIT) imaging. Nevertheless, there are still many challenges that this approach has to face, e.g., targets with sharp corners or edges cannot be well recovered when using circular inclusion training data. This paper proposes an iterative-based inversion method and a convolutional neural network (CNN) based inversion method to recover some challenging inclusions such as triangular, rectangular, or lung shapes, where the CNN-based method uses only random circle or ellipse training data. Methods: First, the iterative method, i.e., bases-expansion subspace optimization method (BE-SOM), is proposed based on a concept of induced contrast current (ICC) with total variation regularization. Second, the theoretical analysis of BE-SOM and the physical concepts introduced there motivate us to propose a dominant-current deep learning scheme for EIT imaging, in which dominant parts of ICC are utilized to generate multi-channel inputs of CNN. Results: The proposed methods are tested with both numerical and experimental data, where several realistic phantoms including simulated pneumothorax and pleural effusion pathologies are also considered. Conclusions and Significance: Significant performance improvements of the proposed methods are shown in reconstructing targets with sharp corners or edges. It is also demonstrated that the proposed methods are capable of fast, stable, and high-quality EIT imaging, which is promising in providing quantitative images for potential clinical applications.

Journal ArticleDOI
TL;DR: A fully convolutional neural network with attentional deep supervision for the automatic and accurate segmentation of the ultrasound images with improvement in overall segmentation accuracy is developed.
Abstract: Objective: Segmentation of anatomical structures in ultrasound images requires vast radiological knowledge and experience. Moreover, manual segmentation often results in subjective variations; therefore, an automatic segmentation is desirable. We aim to develop a fully convolutional neural network (FCNN) with attentional deep supervision for the automatic and accurate segmentation of ultrasound images. Method: FCNNs/CNNs are used to infer high-level context using low-level image features. In this paper, a sub-problem-specific deep supervision of the FCNN is performed. The attention of fine-resolution layers is steered to learn object boundary definitions using auxiliary losses, whereas coarse-resolution layers are trained to discriminate object regions from the background. Furthermore, a customized scheme for downweighting the auxiliary losses and a trainable fusion layer are introduced. This produces an accurate segmentation and helps in dealing with the broken boundaries usually found in ultrasound images. Results: The proposed network is first tested for blood vessel segmentation in liver images. It results in an F1 score, mean intersection over union, and Dice index of 0.83, 0.83, and 0.79, respectively. The best values observed among the existing approaches are produced by U-net as 0.74, 0.81, and 0.75, respectively. The proposed network also results in a Dice index value of 0.91 in the lumen segmentation experiments on the MICCAI 2011 IVUS challenge dataset, which is near the provided reference value of 0.93. Furthermore, improvements similar to the vessel segmentation experiments are also observed in the experiment performed to segment lesions. Conclusion: Deep supervision of the network based on the input-output characteristics of the layers results in improvement in overall segmentation accuracy. Significance: Sub-problem-specific deep supervision for ultrasound image segmentation is the main contribution of this paper.
Currently the network is trained and tested for fixed size inputs. It requires image resizing and limits the performance in small size images.

Journal ArticleDOI
TL;DR: It is concluded that the measurement of arterial impedance via IPG methods is an adequate indicator to estimate BP and the proposed method appears to offer superiority compared to the conventional PTT estimation approaches.
Abstract: Objective: To demonstrate the feasibility of leveraging impedance plethysmography (IPG) for detection of pulse transit time (PTT) and estimation of blood pressure (BP). Methods: We first established the relationship between BP, PTT, and arterial impedance (i.e., the IPG observations). The IPG sensor was placed on the wrist while the photoplethysmography sensor was attached to the index finger to measure the PTT. With a cuff-based BP monitoring system placed on the upper arm as a reference, our proposed methodology was evaluated on 15 young, healthy human subjects leveraging handgrip exercises to manipulate BP/PTT and compared to several conventional PTT models to assess the efficacy of PTT/BP detections. Results: The proposed model correlated with BP fairly well, with group-average correlation coefficients of 0.88 ± 0.07 for systolic BP (SBP) and 0.88 ± 0.06 for diastolic BP (DBP). In comparison with the other PTT methods, PTT-IPG-based BP estimation provided a lower root-mean-squared error of 8.47 ± 0.91 mmHg and 5.02 ± 0.73 mmHg for SBP and DBP, respectively. Conclusion: We conclude that the measurement of arterial impedance via IPG is an adequate indicator to estimate BP. The proposed method appears to offer superiority compared to the conventional PTT estimation approaches. Significance: Using impedance magnitude to estimate PTT offers promise to realize wearable and cuffless BP devices.

Journal ArticleDOI
TL;DR: This work reviews existing technologies currently used for measurement of the four primary vital signs: temperature, heart rate, respiration rate, and blood pressure, along with physical activity, sweat, and emotion.
Abstract: Wearable technologies will play an important role in advancing precision medicine by enabling measurement of clinically-relevant parameters describing an individual's health state. The lifestyle and fitness markets have provided the driving force for the development of a broad range of wearable technologies that can be adapted for use in healthcare. Here we review existing technologies currently used for measurement of the four primary vital signs: temperature, heart rate, respiration rate, and blood pressure, along with physical activity, sweat, and emotion. We review the relevant physiology that defines the measurement needs and evaluate the different methods of signal transduction and measurement modalities for the use of wearables in healthcare.

Journal ArticleDOI
TL;DR: The study demonstrated that dry-contact electrode ear-EEG is a feasible technology for EEG recording, and represents an important technological advancement of the method in terms of user-friendliness, because it eliminates the need for gel in the electrode-skin interface.
Abstract: Objective: Ear-EEG is a recording method in which EEG signals are acquired from electrodes placed on an earpiece inserted into the ear. Thereby, ear-EEG provides a noninvasive and discreet way of recording EEG, and has the potential to be used for long-term brain monitoring in real-life environments. Whereas previously reported ear-EEG recordings have been performed with wet electrodes, the objective of this study was to develop and evaluate dry-contact electrode ear-EEG. Methods: To achieve a well-functioning dry-contact interface, a new ear-EEG platform was developed. The platform comprised actively shielded and nanostructured electrodes embedded in an individualized soft earpiece. The platform was evaluated in a study of 12 subjects and four EEG paradigms: auditory steady-state response, steady-state visual evoked potential, mismatch negativity, and alpha-band modulation. Results: Recordings from the prototyped dry-contact ear-EEG platform were compared to conventional scalp EEG recordings. When all electrodes were referenced to a common scalp electrode (Cz), the performance was on par with scalp EEG measured close to the ear. With both the measuring electrode and the reference electrode located within the ear, statistically significant responses were still observed. Conclusion: The study demonstrated that dry-contact electrode ear-EEG is a feasible technology for EEG recording. Significance: The prototyped dry-contact ear-EEG platform represents an important technological advancement of the method in terms of user-friendliness, because it eliminates the need for gel in the electrode-skin interface.

Journal ArticleDOI
TL;DR: The results have shown that Dense-Unet can reconstruct a three-dimensional T2WI volume in less than 10 s with an under-sampling rate of 8 for the k-space and negligible aliasing artifacts or signal-noise-ratio loss.
Abstract: T1-weighted image (T1WI) and T2-weighted image (T2WI) are the two routinely acquired magnetic resonance (MR) modalities that can provide complementary information for clinical and research usages. However, the relatively long acquisition time makes the acquired image vulnerable to motion artifacts. To speed up the imaging process, various algorithms have been proposed to reconstruct high-quality images from under-sampled k-space data. However, most of the existing algorithms only rely on mono-modality acquisition for the image reconstruction. In this paper, we propose to combine complementary MR acquisitions (i.e., T1WI and under-sampled T2WI particularly) to reconstruct the high-quality image (i.e., corresponding to the fully sampled T2WI). To the best of our knowledge, this is the first work to fuse multi-modal MR acquisitions through deep learning to speed up the reconstruction of a certain target image. Specifically, we present a novel deep learning approach, namely Dense-Unet, to accomplish the reconstruction task. The proposed Dense-Unet requires fewer parameters and less computation, while achieving promising performance. Our results have shown that Dense-Unet can reconstruct a three-dimensional T2WI volume in less than 10 s with an under-sampling rate of 8 for the k-space and negligible aliasing artifacts or signal-to-noise-ratio loss. Experiments also demonstrate the excellent transferring capability of Dense-Unet when applied to datasets acquired by different MR scanners. The above-mentioned results imply great potential of our method in many clinical scenarios.
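Under-sampled k-space acquisition and the naive zero-filled reconstruction that learned models improve on can be simulated in a few lines of NumPy (the phantom, mask pattern, and acceleration handling are illustrative, not the paper's protocol):

```python
import numpy as np

def zero_filled_recon(image, accel=8, seed=0):
    """Simulate accelerated MRI: keep roughly 1/accel of the k-space rows
    (always retaining the lowest-frequency rows) and reconstruct by
    inverse FFT of the zero-filled k-space; this is the naive baseline a
    learned reconstruction would improve on."""
    k = np.fft.fft2(image)
    rng = np.random.default_rng(seed)
    keep = rng.random(image.shape[0]) < 1.0 / accel
    keep[0] = keep[1] = keep[-1] = True        # low-frequency rows
    k_under = np.zeros_like(k)
    k_under[keep, :] = k[keep, :]
    return np.abs(np.fft.ifft2(k_under)), keep.mean()

img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0                          # synthetic square phantom
recon, kept_fraction = zero_filled_recon(img)
```

The visible aliasing in such a zero-filled reconstruction is exactly what the network learns to remove, here with the fully sampled T1WI as an extra input channel.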

Journal ArticleDOI
TL;DR: The proposed Rehab-Net framework was validated on sensor data collected in two situations: semi-naturalistic environment involving an archetypal activity of “making-tea” with four stroke survivors and natural environment where ten stroke survivors were free to perform any desired arm movement for the duration of 120 min.
Abstract: In this paper, we present a deep learning framework “Rehab-Net” for effectively classifying three upper limb movements of the human arm, involving extension, flexion, and rotation of the forearm, which, over time, could provide a measure of rehabilitation progress. The proposed framework, Rehab-Net, is formulated with a personalized, lightweight, low-complexity, customized convolutional neural network (CNN) model, using two layers of CNN, interleaved with pooling layers, followed by a fully connected layer that classifies the three movements from tri-axial acceleration input data collected from the wrist. The proposed Rehab-Net framework was validated on sensor data collected in two situations: 1) a semi-naturalistic environment involving an archetypal activity of “making tea” with four stroke survivors and 2) a natural environment, where ten stroke survivors were free to perform any desired arm movement for the duration of 120 min. We achieved an overall accuracy of 97.89% on semi-naturalistic data and 88.87% on naturalistic data, which exceeded state-of-the-art learning algorithms, namely, linear discriminant analysis, support vector machines, and k-means clustering, with an average accuracy of 48.89%, 44.14%, and 27.64%. Subsequently, a computational complexity analysis of the proposed model has been discussed with an eye toward hardware implementation. The clinical significance of this study is to accurately monitor the clinical progress of the rehabilitated subjects under ambulatory settings.

Journal ArticleDOI
TL;DR: The approach validates the increased temporal synchronization in epileptic EEG and achieves a comparable detection performance to previous studies, which enable a patient-specific approach for real-time seizure detection for personalized diagnosis and treatment.
Abstract: Objective: Synchronization phenomena in epileptic electroencephalography (EEG) have long been studied. In this study, we aim to investigate the spatial-temporal synchronization pattern in epileptic human brains using spectral graph theoretic features extracted from scalp EEG and to develop an efficient multivariate approach for detecting seizure onsets in real time. Methods: A complex network model is used for representing the recurrence pattern of EEG signals, based on which the temporal synchronization patterns are quantified using spectral graph theoretic features. Furthermore, a statistical control chart is applied to the extracted features over time for monitoring the transitions from normal to epileptic states in multivariate EEG systems. Results: Our method is tested on 23 patients from the CHB-MIT Scalp EEG database. The results show that the graph theoretic feature yields a high sensitivity ( $\sim$ 98%) and low latency ( $\sim$ 6 s) on average, and seizure onsets in 18 patients are 100% detected. Conclusion: Our approach validates the increased temporal synchronization in epileptic EEG and achieves detection performance comparable to previous studies. Significance: We characterize the temporal synchronization patterns of epileptic EEG using spectral network metrics. In addition, we found significant changes in temporal synchronization in epileptic EEG, which enable a patient-specific approach to real-time seizure detection for personalized diagnosis and treatment.
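A minimal NumPy sketch of the pipeline the abstract describes (recurrence network, a spectral-graph feature, a control chart) might look as follows; the epsilon threshold, window length, and the choice of algebraic connectivity as the feature are illustrative assumptions rather than the paper's exact settings.

```python
import numpy as np

def recurrence_adjacency(sig, eps):
    """Adjacency of a recurrence network: nodes are time points,
    edges connect points whose amplitudes lie within eps of each other."""
    d = np.abs(sig[:, None] - sig[None, :])
    a = (d < eps).astype(float)
    np.fill_diagonal(a, 0.0)
    return a

def fiedler_value(a):
    """Algebraic connectivity (second-smallest Laplacian eigenvalue),
    one common spectral-graph measure of synchronization."""
    lap = np.diag(a.sum(1)) - a
    return np.sort(np.linalg.eigvalsh(lap))[1]

def shewhart_alarm(feature_history, new_value, k=3.0):
    """Simple control chart: alarm when the new feature exceeds
    mean + k*std of the baseline (normal-state) history."""
    mu, sd = np.mean(feature_history), np.std(feature_history)
    return new_value > mu + k * sd

rng = np.random.default_rng(1)
baseline = [fiedler_value(recurrence_adjacency(rng.standard_normal(50), 0.5))
            for _ in range(20)]                 # features from normal-state windows
sync = fiedler_value(recurrence_adjacency(np.full(50, 0.1), 0.5))
alarm = shewhart_alarm(baseline, sync)          # highly synchronized window
```

A perfectly synchronized (near-constant) window yields a complete recurrence graph, so its algebraic connectivity jumps well above the baseline band and trips the chart.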

Journal ArticleDOI
TL;DR: The proposed depth-resolved MWPPG method can provide accurate measurements of SVR and BP, which are traditionally difficult to measure in a noninvasive or continuous fashion.
Abstract: Objective: To fight the "silent killer" hypertension, continuous blood pressure (BP) monitoring has been one of the most desired functions in wearable electronics. However, current BP measuring principles and protocols either involve a vessel occlusion process with a cuff or require multiple sensing nodes on the body, which makes them difficult to implement in compact wearable electronics such as smartwatches and wristbands with long-term wearability. Methods: In this work, we proposed a highly compact multi-wavelength photoplethysmography (MWPPG) module and a depth-resolved MWPPG approach for continuous monitoring of BP and systemic vascular resistance (SVR). By associating the wavelength-dependent light penetration depth in the skin with skin vasculatures, our method exploited the pulse transit time (PTT) on skin arterioles for tracking SVR ( n = 20). We then developed an arteriolar PTT-based method for beat-to-beat BP measurement. The BP estimation accuracy of the proposed arteriolar PTT method was validated against the Finometer ( n = 20) and the arterial line ( n = 4). Results: The correlation between arteriolar PTT and SVR was theoretically deduced and experimentally validated on 20 human subjects performing various maneuvers. The proposed arteriolar PTT-based method outperformed the traditional arterial PTT-based method, with better BP estimation accuracy and a simpler measurement setup, i.e., a single sensing node. Conclusion: The proposed depth-resolved MWPPG method can provide accurate measurements of SVR and BP, which are traditionally difficult to obtain in a noninvasive or continuous fashion. Significance: This MWPPG work provides compact wearable healthcare electronics with a low-cost, physiology-based solution for continuous measurement of BP and SVR.
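The arteriolar-PTT idea can be illustrated with a toy NumPy sketch: estimate the transit delay between pulses observed at two depths, then map it to BP through an inverse calibration. The Gaussian pulse shapes and the calibration coefficients `a` and `b` are purely hypothetical stand-ins for a per-subject fit, not values from the paper.

```python
import numpy as np

FS = 1000  # Hz, assumed sampling rate

def pulse(t0, t):
    """Toy PPG-like pulse: a Gaussian upstroke centered at time t0."""
    return np.exp(-0.5 * ((t - t0) / 0.02) ** 2)

def transit_time(sig_a, sig_b, fs=FS):
    """Delay (s) of sig_b relative to sig_a via the cross-correlation peak."""
    xc = np.correlate(sig_b, sig_a, mode="full")
    lag = np.argmax(xc) - (len(sig_a) - 1)
    return lag / fs

def bp_from_ptt(ptt, a=8.0, b=40.0):
    """Hypothetical inverse calibration BP = a/PTT + b (mmHg);
    the coefficients would be fitted per subject in practice."""
    return a / ptt + b

t = np.arange(0, 1.0, 1 / FS)
deep = pulse(0.22, t)               # the beat seen at a deeper vessel
shallow = pulse(0.30, t)            # the same beat at a shallower (arteriolar) depth
ptt = transit_time(deep, shallow)   # arteriolar pulse transit time, here 0.08 s
bp = bp_from_ptt(ptt)
```

With these toy numbers the estimated PTT is 80 ms, and the hypothetical calibration maps it to 140 mmHg; a real system would fit the mapping against a reference such as the Finometer.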

Journal ArticleDOI
TL;DR: By simplifying the experimental setup, reducing the hardware encumbrance, and improving signal quality during dynamic contractions, the developed system opens new perspectives in the use of HD-sEMG in applied and clinical settings.
Abstract: Objective: The use of linear or bi-dimensional electrode arrays for surface EMG detection (HD-sEMG) is gaining attention as it increases the amount and reliability of information extracted from the surface EMG. However, the complexity of the setup and the encumbrance of HD-sEMG hardware currently limit its use in dynamic conditions. The aim of this paper was to develop a miniaturized, wireless, and modular HD-sEMG acquisition system for applications requiring high portability and robustness to movement artifacts. Methods: A system with a modular architecture was designed. Its core is a miniaturized 32-channel amplifier (Sensor Unit, SU) sampling at 2048 sps/ch with 16-bit resolution and wirelessly transmitting data to a PC or a mobile device. Each SU is a node of a body sensor network for synchronous signal acquisition from different muscles. Results: A prototype with two SUs was developed and tested. Each SU is small (3.4 cm × 3 cm × 1.5 cm), light (16.7 g), and can be connected directly to the electrodes, thus avoiding the need for the customary wired setup. It detects HD-sEMG signals with an average noise of 1.8 μVRMS and high performance in terms of rejection of power-line interference and motion artifacts. Tests performed on two SUs showed no data loss over a 22 m range and a ±500 μs maximum synchronization delay. Conclusions: Data collected in a wide spectrum of experimental conditions confirmed the functionality of the designed architecture and the quality of the acquired signals. Significance: By simplifying the experimental setup, reducing the hardware encumbrance, and improving signal quality during dynamic contractions, the developed system opens new perspectives for the use of HD-sEMG in applied and clinical settings.
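A baseline-noise figure like the 1.8 μVRMS quoted above is typically obtained as the per-channel RMS of a signal-free recording after mean removal; a short NumPy sketch, with synthetic Gaussian noise standing in for real amplifier data:

```python
import numpy as np

def channel_rms(x):
    """Per-channel RMS of a (channels, samples) array after mean removal."""
    x = x - x.mean(axis=1, keepdims=True)
    return np.sqrt((x ** 2).mean(axis=1))

rng = np.random.default_rng(2)
# 32 channels, 1 s at 2048 sps, synthetic ~1.8 uV-RMS noise floor
uv = rng.standard_normal((32, 2048)) * 1.8
rms = channel_rms(uv)
avg_noise = float(rms.mean())      # average input-referred noise in uVRMS
```

In a real characterization the input channels would be shorted (or terminated) and the band of interest filtered before computing the RMS.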

Journal ArticleDOI
TL;DR: The proposed CNN-based information fusion (CIF) algorithm is generalizable, robust and efficient in detecting heartbeat location from multiple signals, and would accurately estimate heartbeat locations even when only a subset of channels are reliable.
Abstract: Objective: Heartbeat detection remains central to cardiac disease diagnosis and management, and is traditionally performed based on the electrocardiogram (ECG). To improve the robustness and accuracy of detection, especially in certain critical-care scenarios, the use of additional physiological signals such as arterial blood pressure (BP) has recently been suggested. Estimation of heartbeat location therefore requires information fusion from multiple signals. However, reported efforts in this direction often obtain multimodal estimates somewhat indirectly, by voting among separately obtained signal-specific intermediate estimates. In contrast, we propose to directly fuse information from multiple signals without requiring intermediate estimates, and thereby estimate heartbeat location in a robust manner. Method: We propose as a heartbeat detector a convolutional neural network (CNN) that learns fused features from multiple physiological signals. This method eliminates the need for hand-picked signal-specific features and ad hoc fusion schemes. Furthermore, being data-driven, the same algorithm learns suitable features from an arbitrary set of signals. Results: Using ECG and BP signals from the PhysioNet 2014 Challenge database, we obtained a score of 94%. Furthermore, using two ECG channels of the MIT-BIH arrhythmia database, we scored 99.92%. Both scores compare favorably with previously reported database-specific results. Our detector also achieved high accuracy in a variety of clinical conditions. Conclusion: The proposed CNN-based information fusion (CIF) algorithm is generalizable, robust, and efficient in detecting heartbeat location from multiple signals. Significance: In medical signal monitoring systems, our technique would accurately estimate heartbeat locations even when only a subset of channels is reliable.
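Detector scores like those quoted above are commonly computed by matching detected beats to reference annotations within a tolerance window and averaging sensitivity and positive predictive value. A hedged NumPy sketch: the 150 ms tolerance follows the PhysioNet/CinC 2014 convention, while the greedy matcher and toy beat times are illustrative.

```python
def match_beats(ref, det, tol=0.15):
    """Greedy one-to-one matching of detected beat times (s) to reference
    annotations within a tolerance window (150 ms in PhysioNet/CinC 2014).
    Returns sensitivity, positive predictive value, and their average."""
    ref, det = sorted(ref), sorted(det)
    tp, used = 0, set()
    for r in ref:
        for j, d in enumerate(det):
            if j not in used and abs(d - r) <= tol:
                tp += 1
                used.add(j)
                break
    se = tp / len(ref) if ref else 0.0
    ppv = tp / len(det) if det else 0.0
    return se, ppv, (se + ppv) / 2

ref = [0.8, 1.6, 2.4, 3.2, 4.0]       # reference beat annotations (s)
det = [0.82, 1.58, 2.41, 3.9]         # one missed beat, small jitter elsewhere
se, ppv, score = match_beats(ref, det)
```

For the toy data above, four of five reference beats are matched (Se = 0.8) with no false detections (PPV = 1.0), giving a score of 0.9.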

Journal ArticleDOI
TL;DR: It is suggested that it is feasible to apply myoelectric pattern recognition to control the robotic hand in some but not all of the stroke patients.
Abstract: Objective: Myoelectric pattern recognition has been successfully applied as a human-machine interface to control robotic devices such as prostheses and exoskeletons, significantly improving the dexterity of myoelectric control. This study investigates the feasibility of applying myoelectric pattern recognition to control a robotic hand in stroke patients. Methods: Myoelectric pattern recognition of six hand motion patterns was performed using forearm electromyogram signals from the paretic side of eight stroke subjects. Both random cross validation (RCV) and chronological holdout validation (CHV) were applied to assess the offline myoelectric pattern recognition performance. Experiments on real-time myoelectric pattern recognition control of an exoskeleton robotic hand were also performed. Results: An average classification accuracy of 84.1% (the mean value from two different classifiers), with individual subject differences, was observed in the offline myoelectric pattern recognition analysis using the RCV, while the accuracy decreased to 65.7% when the CHV was used. The stroke subjects achieved an average accuracy of 61.3 ± 20.9% for controlling the robotic hand. However, our study did not reveal a clear correlation between the real-time control accuracy and either the offline myoelectric pattern recognition performance or any specific characteristics of the stroke subjects. Conclusion: The findings suggest that it is feasible to apply myoelectric pattern recognition to control the robotic hand in some, but not all, of the stroke patients. Each stroke subject should be individually tested online for the feasibility of applying myoelectric pattern recognition control for robot-assisted rehabilitation.
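Myoelectric pattern recognition pipelines such as the one described typically start from time-domain features of windowed EMG. The sketch below implements the classic Hudgins feature set as an illustration; the threshold value and window length are assumptions, and the features actually used in this study may differ.

```python
import numpy as np

def hudgins_features(x, thresh=0.01):
    """Classic time-domain EMG features (Hudgins et al.) for one window:
    mean absolute value, waveform length, zero crossings, slope sign changes."""
    mav = np.mean(np.abs(x))
    wl = np.sum(np.abs(np.diff(x)))
    zc = np.sum((x[:-1] * x[1:] < 0) &
                (np.abs(x[:-1] - x[1:]) > thresh))
    dx = np.diff(x)
    ssc = np.sum((dx[:-1] * dx[1:] < 0) &
                 ((np.abs(dx[:-1]) > thresh) | (np.abs(dx[1:]) > thresh)))
    return np.array([mav, wl, zc, ssc], dtype=float)

rng = np.random.default_rng(3)
window = rng.standard_normal(200) * 0.1   # one 200-sample EMG analysis window
feats = hudgins_features(window)
```

Feature vectors like `feats`, computed per channel and concatenated, would then feed a classifier (e.g. LDA) over the six hand motion classes.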

Journal ArticleDOI
TL;DR: The dynamic model of cell out-of-plane orientation control is formulated using the T-matrix approach, producing impactful benefits for cell surgery applications such as nucleus transplantation and organelle biopsy in precision medicine.
Abstract: In many cell surgery applications, the cell must be oriented properly such that the microsurgery tool can access the target components with minimum damage to the cell. In this paper, a scheme for out-of-image-plane orientation control of suspended biological cells using robotically controlled optical tweezers is presented for orientation-based cell surgery. Based on our previous work on planar cell rotation using optical tweezers, the dynamic model of cell out-of-plane orientation control is formulated using the T-matrix approach. Vision-based algorithms are developed to extract the cell's out-of-image-plane orientation angles from 2-D image slices obtained under an optical microscope. A robust feedback controller is then proposed to achieve cell out-of-plane rotation. Experiments on automated out-of-image-plane rotational control for cell nucleus extraction surgery are performed to demonstrate the effectiveness of the proposed approach. This approach advances robot-aided single cell manipulation and produces impactful benefits for cell surgery applications such as nucleus transplantation and organelle biopsy in precision medicine.

Journal ArticleDOI
TL;DR: This study is the first to design and fabricate a miniature and lightweight head-mounted ultrasound stimulator for inducing neuromodulation in freely moving mice, and indicates that the proposed method can be used to induce noninvasive neuromodulation in freely moving mice.
Abstract: Neuromodulation is a fundamental method for obtaining basic information about neuronal circuits for use in treatments for neurological and psychiatric disorders. Ultrasound stimulation has become a promising approach for noninvasively inducing neuromodulation in animals and humans. However, previous investigations were subject to substantial limitations, as most of them involved anesthetized and fixed small-animal models. Studies of awake and freely moving animals are needed, but the currently used ultrasound devices are too bulky to be applied to a freely moving animal. This study is the first to design and fabricate a miniature and lightweight head-mounted ultrasound stimulator for inducing neuromodulation in freely moving mice. The main components of the stimulator include a miniature piezoelectric ceramic, a concave epoxy acoustic lens, and housing and connection components. The device was able to induce action potentials recorded in situ and evoke head-turning behaviors by stimulating the primary somatosensory cortex barrel field of the mouse. These findings indicate that the proposed method can be used to induce noninvasive neuromodulation in freely moving mice. This novel method could potentially lead to the application of ultrasonic neuromodulation in more-extensive neuroscience investigations.

Journal ArticleDOI
TL;DR: This paper aims to develop automated cough sound analysis methods to objectively diagnose croup, and proposes the use of mathematical features inspired by the human auditory system, including the cochleagram for feature extraction and mel-frequency cepstral coefficients to capture the relevant aspects of the short-term power spectrum of speech signals.
Abstract: Objective: Croup, a respiratory tract infection common in children, causes an inflammation of the upper airway that restricts normal breathing and produces cough sounds typically described as a seal-like "barking cough." Physicians use the existence of barking cough as the defining characteristic of croup. This paper aims to develop automated cough sound analysis methods to objectively diagnose croup. Methods: In automating croup diagnosis, we propose the use of mathematical features inspired by the human auditory system. In particular, we utilize the cochleagram for feature extraction, a time-frequency representation where the frequency components are based on the frequency selectivity property of the human cochlea. Speech and cough share some similarities in their generation process and the physiological wetware used. As such, we also propose the use of mel-frequency cepstral coefficients, which have been shown to capture the relevant aspects of the short-term power spectrum of speech signals. We also experiment with feature combination and backward sequential feature selection. Experimentation is performed on cough sound recordings from patients presenting various clinically diagnosed respiratory tract infections, divided into croup and non-croup. The dataset is divided into training and test sets of 364 and 115 patients, respectively, with automatically segmented cough sound segments. Results: Croup and non-croup patient classification on the test dataset with the proposed methods achieves a sensitivity and specificity of 92.31% and 85.29%, respectively. Conclusion: Experimental results show a significant improvement in automatic croup diagnosis over earlier methods. Significance: This paper has the potential to automate croup diagnosis based solely on cough sound analysis.
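The mel-frequency cepstral coefficients mentioned above can be computed from first principles in a few lines of NumPy; the frame length, filter count, and number of kept coefficients below are common defaults, not necessarily those used in the paper.

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    """Triangular mel filters over the positive FFT bins."""
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_filters + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)
    return fb

def mfcc_frame(frame, fs, n_filters=26, n_ceps=13):
    """MFCCs of one frame: power spectrum -> mel energies -> log -> DCT-II."""
    n_fft = len(frame)
    spec = np.abs(np.fft.rfft(frame * np.hamming(n_fft))) ** 2
    energies = mel_filterbank(n_filters, n_fft, fs) @ spec
    log_e = np.log(energies + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1)) / (2 * n_filters))
    return dct @ log_e

fs = 16000
t = np.arange(512) / fs
frame = np.sin(2 * np.pi * 440 * t)   # stand-in for one segmented cough frame
ceps = mfcc_frame(frame, fs)
```

In a full pipeline, `mfcc_frame` would be applied to overlapping frames of each automatically segmented cough, and the resulting coefficient matrix summarized into the classifier's feature vector.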

Journal ArticleDOI
TL;DR: In this article, a lumped bio-physical model of human body communication (HBC) is developed, supported by experimental validations that provide insight into some of the key discrepancies found in previous studies.
Abstract: Human body communication (HBC) has emerged as an alternative to radio wave communication for connecting low power, miniaturized wearable, and implantable devices in, on, and around the human body. HBC uses the human body as the communication channel between on-body devices. Previous studies characterizing the human body channel have reported widely varying channel responses, much of which has been attributed to variation in the measurement setup. This calls for the development of a unifying bio-physical model of HBC, supported by in-depth analysis and an understanding of the effect of excitation and termination modality on HBC measurements. This paper characterizes the human body channel up to 1 MHz to evaluate it as a medium for broadband communication. At these frequencies, the communication occurs primarily in the electro-quasistatic (EQS) regime through the subcutaneous tissues. A lumped bio-physical model of HBC is developed, supported by experimental validations, that provides insight into some of the key discrepancies found in previous studies. Voltage loss measurements are carried out both with an oscilloscope and with a miniaturized wearable prototype to capture the effects of a non-common ground. Results show that the channel loss is strongly dependent on the termination impedance at the receiver end, with up to 4 dB variation in average loss for different terminations on an oscilloscope and an additional 9 dB channel loss with the wearable prototype compared to an oscilloscope measurement. The measured channel response with capacitive termination reduces low-frequency loss and allows a flat-band transfer function down to 13 kHz, establishing the human body as a broadband communication channel. Analysis of the measured results and the simulation model shows that instruments with 50 Ω input impedance (vector network analyzers, spectrum analyzers) provide a pessimistic estimate of channel loss at low frequencies.
Instead, high-impedance capacitive termination should be used at the receiver end for accurate voltage-mode loss measurements of the HBC channel at low frequencies. The experimentally validated bio-physical model shows that capacitive voltage-mode termination can improve the low-frequency loss by up to 50 dB, which helps broadband communication significantly.
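The termination-impedance effect described above can be illustrated with a toy series-coupling model: with a 50 Ω resistive load the channel behaves as a high-pass divider with large low-frequency loss, while a capacitive load forms a frequency-flat capacitive divider. The component values are arbitrary illustrative choices, not fitted parameters from the paper's model.

```python
import numpy as np

def channel_gain_db(freqs, c_couple=1e-12, z_load=None, c_load=None):
    """Voltage transfer of a toy series-coupling model: coupling capacitance
    c_couple driving either a resistive load z_load (e.g. a 50-ohm
    instrument) or a capacitive load c_load (high-impedance receiver)."""
    w = 2 * np.pi * freqs
    z_c = 1.0 / (1j * w * c_couple)
    z_l = z_load if z_load is not None else 1.0 / (1j * w * c_load)
    return 20 * np.log10(np.abs(z_l / (z_c + z_l)))

f = np.array([1e3, 1e4, 1e5, 1e6])
loss_50ohm = channel_gain_db(f, z_load=50.0)   # rises 20 dB/decade: severe low-f loss
loss_cap = channel_gain_db(f, c_load=10e-12)   # flat capacitive divider Cc/(Cc+Cl)
```

In this toy model the 50 Ω termination loses an extra 20 dB per decade as frequency drops, while the capacitive termination stays flat, mirroring the pessimistic low-frequency estimates the abstract attributes to 50 Ω instruments.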

Journal ArticleDOI
TL;DR: It is demonstrated that infrared thermography might become a clinically relevant alternative for the currently available RR monitoring modalities in neonatal care.
Abstract: Monitoring of respiratory rate (RR) is very important for patient assessment. In fact, it is considered one of the relevant vital parameters in critical care medicine. Nowadays, standard monitoring relies on obtrusive and invasive techniques, which require adhesive electrodes or sensors to be attached to the patient's body. Unfortunately, these procedures cause stress, pain, and frequently damage the vulnerable skin of preterm infants. This paper presents a "black-box" algorithm for remote monitoring of RR in thermal videos. "Black-box" in this context means that the algorithm does not rely on tracking of specific anatomic landmarks. Instead, it automatically distinguishes regions of interest in the video containing the respiratory signal from those containing only noise. To examine its performance and robustness during physiological (phase A) and pathological scenarios (phase B), a study on 12 healthy volunteers was carried out. After a successful validation on adults, a clinical study on eight newborn infants was conducted. A good agreement between estimated RR and ground truth was achieved. In the study involving adult volunteers, a mean root-mean-square error (RMSE) of ( $0.31 \pm 0.09$ ) breaths/min and ( $3.27 \pm 0.72$ ) breaths/min was obtained for phase A and phase B, respectively. In the study involving infants, the mean RMSE hovered around ( $4.15 \pm 1.44$ ) breaths/min. In brief, this paper demonstrates that infrared thermography might become a clinically relevant alternative for the currently available RR monitoring modalities in neonatal care.
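Once a region of interest containing the respiratory signal has been isolated, the RR estimate and its RMSE against ground truth can be computed along these lines; the FFT-peak estimator, breathing band, and synthetic 0.5 Hz signal are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def estimate_rr(signal, fs, band=(0.2, 1.5)):
    """Dominant-frequency respiratory rate (breaths/min) from a mean
    pixel-intensity time series, searched in a plausible breathing band."""
    sig = signal - signal.mean()
    freqs = np.fft.rfftfreq(len(sig), 1 / fs)
    spec = np.abs(np.fft.rfft(sig))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return 60.0 * freqs[mask][np.argmax(spec[mask])]

fs, dur = 30.0, 60.0                    # 30 fps thermal video, 60 s window
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(4)
roi = np.sin(2 * np.pi * 0.5 * t) + 0.1 * rng.standard_normal(t.size)

est = np.array([estimate_rr(roi, fs)])  # 0.5 Hz breathing -> 30 breaths/min
truth = np.array([30.0])
rmse = float(np.sqrt(np.mean((est - truth) ** 2)))
```

In a study, `est` and `truth` would hold one value per recording window, and the RMSE would be averaged per phase (A/B) or per cohort as in the abstract.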

Journal ArticleDOI
TL;DR: The results show the feasibility of a real-time photoacoustic thermometry system for safe and effective monitoring of HIFU treatment, validated by investigating the relationship between the photoacoustic amplitude and the measured temperature with in vitro phantoms and in vivo tumor-bearing mice.
Abstract: High-intensity focused ultrasound (HIFU) treatment is a promising noninvasive method for killing or destroying diseased tissues by locally delivering thermal and mechanical energy without damaging surrounding normal tissues. In HIFU, measuring the temperature at the site of delivery is important for improving therapeutic efficacy, controlling safety, and appropriately planning a treatment. Several researchers have proposed photoacoustic thermometry for monitoring HIFU treatment, but these approaches had many limitations, including the inability to image while the HIFU is on, to provide two-dimensional monitoring, or to be used clinically. In this paper, we propose a novel integrated real-time photoacoustic thermometry system for HIFU treatment monitoring. The system provides ultrasound B-mode imaging, photoacoustic structural imaging, and photoacoustic thermometry during HIFU treatment in real time, in both in vitro and in vivo environments, without any interference from the strong therapeutic HIFU waves. We have successfully tested the real-time photoacoustic thermometry by investigating the relationship between the photoacoustic amplitude and the measured temperature with in vitro phantoms and in vivo tumor-bearing mice. The results show the feasibility of a real-time photoacoustic thermometry system for safe and effective monitoring of HIFU treatment.
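Photoacoustic thermometry commonly exploits the approximately linear dependence of PA amplitude on temperature (through the Grueneisen parameter); a hedged NumPy sketch of such a calibration, with made-up calibration points rather than the paper's measured data:

```python
import numpy as np

def fit_pa_temperature(pa_amp, temp):
    """Least-squares linear calibration T = a*PA + b, reflecting the
    approximately linear dependence of the Grueneisen parameter (and
    hence PA amplitude) on temperature in tissue-like media."""
    A = np.vstack([pa_amp, np.ones_like(pa_amp)]).T
    (a, b), *_ = np.linalg.lstsq(A, temp, rcond=None)
    return a, b

# toy calibration data: normalized PA amplitudes at known bath temperatures
pa = np.array([1.00, 1.10, 1.21, 1.29, 1.41])
tc = np.array([25.0, 30.0, 35.0, 40.0, 45.0])
a, b = fit_pa_temperature(pa, tc)
t_est = a * 1.35 + b   # temperature estimated from a new PA amplitude reading
```

During treatment, each new PA amplitude reading would be pushed through the fitted line (here giving roughly 42 degrees C for an amplitude of 1.35) to produce the real-time temperature map.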

Journal ArticleDOI
TL;DR: The MEG modality may be particularly effective in distinguishing between subjects with MCI and healthy controls, with a high classification accuracy reported recently, whereas EEG seems to perform well in classifying AD patients and healthy subjects, also reaching around 98% accuracy.
Abstract: This paper reviews state-of-the-art neuromarker development for the prognosis of Alzheimer's disease (AD) and mild cognitive impairment (MCI). The first part of this paper is devoted to reviewing recently emerged machine learning (ML) algorithms based on the electroencephalography (EEG) and magnetoencephalography (MEG) modalities. In particular, the methods are categorized by different types of neuromarkers. The second part of the review is dedicated to a series of investigations that further highlight the differences between these two modalities. First, several source reconstruction methods are reviewed and their source-level performances explored, followed by an objective comparison between EEG and MEG from multiple perspectives. Finally, a number of the most recent reports on the classification of MCI/AD during resting state using EEG/MEG are documented to show the up-to-date performance for this well-recognized data collection scenario. It is noticed that the MEG modality may be particularly effective in distinguishing between subjects with MCI and healthy controls, with a high classification accuracy of more than 98% reported recently, whereas EEG seems to perform well in classifying AD patients and healthy subjects, also reaching around 98% accuracy. A number of influential factors have also been raised and suggested for careful consideration when evaluating ML-based diagnosis systems in real-world scenarios.