
Showing papers in "IEEE Transactions on Biomedical Engineering in 2018"


Journal ArticleDOI
TL;DR: A comparison of BCI performance between the proposed TRCA-based method and an extended canonical correlation analysis (CCA)-based method using a 40-class SSVEP dataset recorded from 12 subjects validated the efficiency of the proposal.
Abstract: Objective: This study proposes and evaluates a novel data-driven spatial filtering approach for enhancing steady-state visual evoked potentials (SSVEPs) detection toward a high-speed brain-computer interface (BCI) speller. Methods: Task-related component analysis (TRCA), which can enhance reproducibility of SSVEPs across multiple trials, was employed to improve the signal-to-noise ratio (SNR) of SSVEP signals by removing background electroencephalographic (EEG) activities. An ensemble method was further developed to integrate TRCA filters corresponding to multiple stimulation frequencies. This study conducted a comparison of BCI performance between the proposed TRCA-based method and an extended canonical correlation analysis (CCA)-based method using a 40-class SSVEP dataset recorded from 12 subjects. An online BCI speller was further implemented using a cue-guided target selection task with 20 subjects and a free-spelling task with 10 of the subjects. Results: The offline comparison results indicate that the proposed TRCA-based approach can significantly improve the classification accuracy compared with the extended CCA-based method. Furthermore, the online BCI speller achieved averaged information transfer rates (ITRs) of 325.33 ± 38.17 bits/min with the cue-guided task and 198.67 ± 50.48 bits/min with the free-spelling task. Conclusion: This study validated the efficiency of the proposed TRCA-based method in implementing a high-speed SSVEP-based BCI. Significance: The high-speed SSVEP-based BCIs using the TRCA method have great potential for various applications in communication and control.
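
To make the TRCA step concrete, the following is a minimal Python sketch of the core computation as it is commonly described: the spatial filter is the leading generalized eigenvector of the inter-trial covariance against the covariance of the concatenated trials, and classification correlates the filtered test trial with class templates using an ensemble of filters. Function names, the simple correlation scoring, and the absence of filter banks are simplifications, not the authors' exact implementation.

```python
import numpy as np
from scipy.linalg import eig

def trca_filter(trials):
    """Compute a TRCA spatial filter from trials of shape
    (n_trials, n_channels, n_samples) recorded for one stimulus."""
    n_trials, n_ch, n_samp = trials.shape
    trials = trials - trials.mean(axis=2, keepdims=True)   # remove per-trial mean
    S = np.zeros((n_ch, n_ch))                              # inter-trial covariance
    for i in range(n_trials):
        for j in range(n_trials):
            if i != j:
                S += trials[i] @ trials[j].T
    Y = np.hstack([trials[i] for i in range(n_trials)])      # concatenated trials
    Q = Y @ Y.T                                              # total covariance
    eigvals, eigvecs = eig(S, Q)                             # generalized eigenproblem
    return np.real(eigvecs[:, np.argmax(np.real(eigvals))])

def classify(test_trial, templates, filters):
    """Ensemble TRCA-style scoring: correlate the spatially filtered test
    trial with each class template using the filters of all classes."""
    scores = []
    for template in templates:              # template: (n_channels, n_samples)
        r = [np.corrcoef(w @ test_trial, w @ template)[0, 1] for w in filters]
        scores.append(np.sum(r))
    return int(np.argmax(scores))
```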

455 citations


Journal ArticleDOI
TL;DR: This paper trains a fully convolutional network (FCN) to generate a target image given a source image, adopts an adversarial learning strategy to better model the FCN, and incorporates an image-gradient-difference-based loss function to avoid generating blurry target images.
Abstract: Medical imaging plays a critical role in various clinical applications. However, due to multiple considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Thus, medical image synthesis can be of great benefit by estimating a desired imaging modality without incurring an actual scan. In this paper, we propose a generative adversarial approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model a nonlinear mapping from source to target and to produce more realistic target images, we propose to use the adversarial learning strategy to better model the FCN. Moreover, the FCN is designed to incorporate an image-gradient-difference-based loss function to avoid generating blurry target images. Long-term residual unit is also explored to help the training of the network. We further apply Auto-Context Model to implement a context-aware deep convolutional adversarial network. Experimental results show that our method is accurate and robust for synthesizing target images from the corresponding source images. In particular, we evaluate our method on three datasets, to address the tasks of generating CT from MRI and generating 7T MRI from 3T MRI images. Our method outperforms the state-of-the-art methods under comparison in all datasets and tasks.
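
The image-gradient-difference loss mentioned above can be sketched in a few lines; the forward-difference formulation and the weighting constants in the commented objective are illustrative assumptions rather than the paper's exact definitions.

```python
import numpy as np

def gradient_difference_loss(pred, target):
    """Image-gradient-difference loss (rough sketch): penalizes mismatch
    between the spatial gradient magnitudes of a synthesized image and the
    ground-truth target, discouraging blurry outputs."""
    def grads(img):
        gx = np.abs(np.diff(img, axis=0))   # forward differences along rows
        gy = np.abs(np.diff(img, axis=1))   # forward differences along columns
        return gx, gy
    pgx, pgy = grads(pred)
    tgx, tgy = grads(target)
    return np.mean((pgx - tgx) ** 2) + np.mean((pgy - tgy) ** 2)

# Illustrative total generator objective (weights are placeholders):
# total = lambda_adv * adv_loss \
#         + lambda_rec * np.mean((pred - target) ** 2) \
#         + lambda_gdl * gradient_difference_loss(pred, target)
```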

417 citations


Journal ArticleDOI
TL;DR: A new segment-level loss is proposed that emphasizes the thickness consistency of thin vessels during training and brings consistent performance improvements for both deep and shallow network architectures.
Abstract: Objective: Deep learning based methods for retinal vessel segmentation are usually trained based on pixel-wise losses, which treat all vessel pixels with equal importance in pixel-to-pixel matching between a predicted probability map and the corresponding manually annotated segmentation. However, due to the highly imbalanced pixel ratio between thick and thin vessels in fundus images, a pixel-wise loss would limit deep learning models to learn features for accurate segmentation of thin vessels, which is an important task for clinical diagnosis of eye-related diseases. Methods: In this paper, we propose a new segment-level loss which emphasizes more on the thickness consistency of thin vessels in the training process. By jointly adopting both the segment-level and the pixel-wise losses, the importance between thick and thin vessels in the loss calculation would be more balanced. As a result, more effective features can be learned for vessel segmentation without increasing the overall model complexity. Results: Experimental results on public data sets demonstrate that the model trained by the joint losses outperforms the current state-of-the-art methods in both separate-training and cross-training evaluations. Conclusion: Compared to the pixel-wise loss, utilizing the proposed joint-loss framework is able to learn more distinguishable features for vessel segmentation. In addition, the segment-level loss can bring consistent performance improvement for both deep and shallow network architectures. Significance: The findings from this study of using joint losses can be applied to other deep learning models for performance improvement without significantly changing the network architectures.

317 citations


Journal ArticleDOI
TL;DR: In this paper, a deep residual learning network is proposed to remove aliasing artifacts from artifact corrupted images, which can work as an iterative k-space interpolation algorithm using framelet representation.
Abstract: Objective: Accelerated magnetic resonance (MR) image acquisition with compressed sensing (CS) and parallel imaging is a powerful method to reduce MR imaging scan time. However, many reconstruction algorithms have high computational costs. To address this, we investigate deep residual learning networks to remove aliasing artifacts from artifact corrupted images. Methods: The deep residual learning networks are composed of magnitude and phase networks that are separately trained. If both phase and magnitude information are available, the proposed algorithm can work as an iterative k-space interpolation algorithm using framelet representation. When only magnitude data are available, the proposed approach works as an image domain postprocessing algorithm. Results: Even with strong coherent aliasing artifacts, the proposed network successfully learned and removed the aliasing artifacts, whereas current parallel and CS reconstruction methods were unable to remove these artifacts. Conclusion: Comparisons using single and multiple coil acquisition show that the proposed residual network provides good reconstruction results with orders of magnitude faster computational time than existing CS methods. Significance: The proposed deep learning framework may have a great potential for accelerated MR reconstruction by generating accurate results immediately.
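
A minimal PyTorch sketch of the residual-learning idea (the network predicts the aliasing artifact, and the de-aliased image is obtained by subtracting it from the input) is given below; the layer counts and the magnitude/phase training details are illustrative placeholders, not the authors' architecture.

```python
import torch
import torch.nn as nn

class ResidualArtifactNet(nn.Module):
    """Toy residual-learning network: instead of predicting the clean image
    directly, the CNN predicts the aliasing-artifact component, which is then
    subtracted from the corrupted input."""
    def __init__(self, channels=64, depth=5):
        super().__init__()
        layers = [nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(channels, channels, 3, padding=1),
                       nn.BatchNorm2d(channels), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(channels, 1, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, corrupted):
        artifact = self.body(corrupted)      # predicted aliasing artifact
        return corrupted - artifact          # de-aliased output

# Training sketch: minimize the MSE between the de-aliased output and the fully
# sampled reference magnitude image; a separate network would be trained on the
# phase channel, as described in the abstract.
```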

275 citations


Journal ArticleDOI
TL;DR: This paper proposes to affine transform the covariance matrices of every session/subject in order to center them with respect to a reference covariance matrix, making data from different sessions/subjects comparable, providing a significant improvement in the BCI transfer learning problem.
Abstract: Objective: This paper tackles the problem of transfer learning in the context of electroencephalogram (EEG)-based brain–computer interface (BCI) classification. In particular, the problems of cross-session and cross-subject classification are considered. These problems concern the ability to use data from previous sessions or from a database of past users to calibrate and initialize the classifier, allowing a calibration-less BCI mode of operation. Methods: Data are represented using spatial covariance matrices of the EEG signals, exploiting the recent successful techniques based on the Riemannian geometry of the manifold of symmetric positive definite (SPD) matrices. Cross-session and cross-subject classification can be difficult, due to the many changes intervening between sessions and between subjects, including physiological, environmental, as well as instrumental changes. Here, we propose to affine transform the covariance matrices of every session/subject in order to center them with respect to a reference covariance matrix, making data from different sessions/subjects comparable. Then, classification is performed both using a standard minimum-distance-to-mean classifier and through a probabilistic classifier recently developed in the literature, based on a density function (mixture of Riemannian Gaussian distributions) defined on the SPD manifold. Results: The improvements in classification performance achieved by introducing the affine transformation are documented with the analysis of two BCI datasets. Conclusion and significance: Through the proposed affine transformation, we make data from different sessions and subjects comparable, providing a significant improvement in the BCI transfer learning problem.
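
The recentering operation itself is compact; the sketch below, assuming SciPy is available, computes a simple Riemannian mean as the reference and applies the affine transformation C → R^{-1/2} C R^{-1/2}. The fixed-point iteration and helper names are illustrative, and the classifiers described in the abstract are not included.

```python
import numpy as np
from scipy.linalg import fractional_matrix_power, logm, expm

def riemannian_mean(covs, n_iter=10):
    """Simple fixed-point iteration for the Riemannian (geometric) mean of a
    set of SPD matrices; a reference implementation, not optimized."""
    M = np.mean(covs, axis=0)                         # Euclidean mean as a start
    for _ in range(n_iter):
        M_sqrt = fractional_matrix_power(M, 0.5)
        M_isqrt = fractional_matrix_power(M, -0.5)
        T = np.real(np.mean([logm(M_isqrt @ C @ M_isqrt) for C in covs], axis=0))
        M = M_sqrt @ expm(T) @ M_sqrt
    return M

def recenter(covs, reference):
    """Affine transformation C -> R^{-1/2} C R^{-1/2} that centers a
    session's/subject's covariance matrices at the identity, making data
    from different sessions/subjects comparable."""
    R_isqrt = fractional_matrix_power(reference, -0.5)
    return np.real(np.array([R_isqrt @ C @ R_isqrt for C in covs]))

# Usage sketch: recenter each session around its own reference before pooling
# data for a minimum-distance-to-mean classifier.
# covs_a = recenter(covs_session_a, riemannian_mean(covs_session_a))
# covs_b = recenter(covs_session_b, riemannian_mean(covs_session_b))
```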

241 citations


Journal ArticleDOI
TL;DR: It is demonstrated that a robust set of features can be learned from scalp EEG that characterize the preictal state of focal seizures, and the results significantly outperform a random predictor and other seizure prediction algorithms.
Abstract: Objective: This paper investigates the hypothesis that focal seizures can be predicted using scalp electroencephalogram (EEG) data. Our first aim is to learn features that distinguish between the interictal and preictal regions. The second aim is to define a prediction horizon in which the prediction is as accurate and as early as possible, clearly two competing objectives. Methods: Convolutional filters on the wavelet transformation of the EEG signal are used to define and learn quantitative signatures for each period: interictal, preictal, and ictal. The optimal seizure prediction horizon is also learned from the data as opposed to making an a priori assumption. Results: Computational solutions to the optimization problem indicate a 10-min seizure prediction horizon. This result is verified by measuring Kullback–Leibler divergence on the distributions of the automatically extracted features. Conclusion: The results on the EEG database of 204 recordings demonstrate that (i) the preictal phase transition occurs approximately ten minutes before seizure onset, and (ii) the prediction results on the test set are promising, with a sensitivity of 87.8% and a low false prediction rate of 0.142 FP/h. Our results significantly outperform a random predictor and other seizure prediction algorithms. Significance: We demonstrate that a robust set of features can be learned from scalp EEG that characterize the preictal state of focal seizures.
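
The abstract's verification step, measuring the Kullback–Leibler divergence between distributions of the extracted features, can be sketched with histogram estimates as below; the binning and the use of a single 1-D feature are simplifying assumptions.

```python
import numpy as np
from scipy.stats import entropy

def kl_divergence(preictal_feat, interictal_feat, bins=50):
    """Estimate the two 1-D feature distributions with histograms on a common
    support and compute KL(preictal || interictal) as a separability measure."""
    lo = min(preictal_feat.min(), interictal_feat.min())
    hi = max(preictal_feat.max(), interictal_feat.max())
    p, _ = np.histogram(preictal_feat, bins=bins, range=(lo, hi), density=True)
    q, _ = np.histogram(interictal_feat, bins=bins, range=(lo, hi), density=True)
    eps = 1e-12                               # avoid empty-bin division by zero
    return entropy(p + eps, q + eps)          # scipy normalizes and returns KL
```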

212 citations


Journal ArticleDOI
TL;DR: This paper defines the current state of the art in microwave breast imaging and identifies system design characteristics suitable for ease of clinical use.
Abstract: Objective: Microwave breast imaging has seen significant academic and commercial development in recent years, with four new operational microwave imaging systems used with patients since 2016. In this paper, a comprehensive review of these recent clinical advances is presented, comparing patient populations and study outcomes. For the first time, the designs of operational microwave imaging systems are compared in detail. Methods: First, the current understanding of the dielectric properties of human breast tissues is reviewed, considering evidence from operational microwave imaging systems and from dielectric properties measurement studies. Second, design features of operational microwave imaging systems are discussed in terms of their advantages and disadvantages during clinical operation. Results: Reported results from patient imaging trials are compared, contrasting the principal results from each trial. Additionally, clinical experience from each trial is highlighted, identifying desirable system design features for clinical use. Conclusions: Increasingly, evidence from patient imaging studies indicates that a contrast in dielectric properties between healthy and cancerous breast tissues exists. However, despite the significant and encouraging results from patient trials, variation still exists in microwave imaging system designs. Significance: This study seeks to define the current state of the art in microwave breast imaging and to identify design characteristics suitable for ease of clinical use.

191 citations


Journal ArticleDOI
Mei Zhou, Kai Jin, Shaoze Wang, Juan Ye, Dahong Qian
TL;DR: The proposed image enhancement method to improve color retinal image luminosity and contrast is shown to achieve superior image enhancement compared to contrast enhancement in other color spaces or by other related methods, while simultaneously preserving image naturalness.
Abstract: Objective: Many common eye diseases and cardiovascular diseases can be diagnosed through retinal imaging. However, due to uneven illumination, image blurring, and low contrast, retinal images with poor quality are not useful for diagnosis, especially in automated image analyzing systems. Here, we propose a new image enhancement method to improve color retinal image luminosity and contrast. Methods: A luminance gain matrix, which is obtained by gamma correction of the value channel in the HSV (hue, saturation, and value) color space, is used to enhance the R, G, and B (red, green and blue) channels, respectively. Contrast is then enhanced in the luminosity channel of L*a*b* color space by CLAHE (contrast-limited adaptive histogram equalization). Image enhancement by the proposed method is compared to other methods by evaluating quality scores of the enhanced images. Results: The performance of the method is mainly validated on a dataset of 961 poor-quality retinal images. Quality assessment (range 0–1) of image enhancement of this poor dataset indicated that our method improved color retinal image quality from an average of 0.0404 (standard deviation 0.0291) up to an average of 0.4565 (standard deviation 0.1000). Conclusion: The proposed method is shown to achieve superior image enhancement compared to contrast enhancement in other color spaces or by other related methods, while simultaneously preserving image naturalness. Significance: This method of color retinal image enhancement may be employed to assist ophthalmologists in more efficient screening of retinal diseases and in development of improved automated image analysis for clinical diagnosis.
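
A rough OpenCV sketch of the described two-stage pipeline (a luminance gain matrix from gamma correction of the HSV value channel, followed by CLAHE on the L*a*b* luminosity channel) is shown below; the gamma, clip limit, and tile size are illustrative values, not the tuned parameters of the paper.

```python
import cv2
import numpy as np

def enhance_retinal_image(bgr, gamma=2.2, clip_limit=2.0, tile=(8, 8)):
    """Two-stage enhancement sketch: (1) scale R, G, B by a per-pixel
    luminance gain derived from gamma correction of the HSV value channel;
    (2) apply CLAHE to the L channel in L*a*b* space."""
    img = bgr.astype(np.float32) / 255.0
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    v = hsv[:, :, 2]
    v_gamma = np.power(v, 1.0 / gamma)                  # gamma-corrected luminance
    gain = v_gamma / (v + 1e-6)                         # per-pixel luminance gain
    enhanced = np.clip(img * gain[:, :, None], 0, 1)    # apply gain to B, G, R

    lab = cv2.cvtColor((enhanced * 255).astype(np.uint8), cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```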

164 citations


Journal ArticleDOI
TL;DR: A new BCI speller based on miniature asymmetric visual evoked potentials (aVEPs), which encodes 32 characters with a space-code division multiple access scheme and decodes EEG features with a discriminative canonical pattern matching algorithm is developed.
Abstract: Goal: Traditional visual brain–computer interfaces (BCIs) have preferred to use large-size stimuli to attract the user's attention and elicit distinct electroencephalography (EEG) features. However, the visual stimuli are of no interest to the users, as they just serve as the hidden codes behind the characters. Furthermore, using stronger visual stimuli could cause visual fatigue and other adverse symptoms in users. Therefore, it is imperative for visual BCIs to use small and inconspicuous visual stimuli to code characters. Methods: This study developed a new BCI speller based on miniature asymmetric visual evoked potentials (aVEPs), which encodes 32 characters with a space-code division multiple access scheme and decodes EEG features with a discriminative canonical pattern matching algorithm. Notably, the visual stimulus used in this study subtended only 0.5° of visual angle and was placed outside the foveal vision on the lateral side, which could only induce a miniature potential of about 0.5 μV in amplitude and about 16.5 dB in signal-to-noise ratio. A total of 12 subjects were recruited to use the miniature aVEP speller in both offline and online tests. Results: Information transfer rates of up to 63.33 bits/min could be achieved in online tests (online demo URL: https://www.youtube.com/edit?o=U&video_id=kC7btB3mvGY ). Conclusion: Experimental results demonstrate the feasibility of using very small and inconspicuous visual stimuli to implement an efficient BCI system, even though the elicited EEG features are very weak. Significance: The proposed innovative technique can broaden the category of BCIs and strengthen brain–computer communication.

154 citations


Journal ArticleDOI
TL;DR: A novel algorithm based on DAS algebra inside the DMAS formula expansion, double-stage DMAS (DS-DMAS), improves image resolution and sidelobe levels and is much less sensitive to high levels of noise compared to DMAS.
Abstract: Photoacoustic imaging (PAI) is an emerging medical imaging modality that combines the high spatial resolution of ultrasound (US) imaging with the high contrast of optical imaging. Delay-and-sum (DAS) is the most common beamforming algorithm in PAI. However, using the DAS beamformer leads to low-resolution images and a considerable contribution of off-axis signals. A new paradigm, delay-multiply-and-sum (DMAS), originally used as a reconstruction algorithm in confocal microwave imaging, was introduced to overcome the challenges of DAS. DMAS has been used in PAI systems, and it was shown that this algorithm yields improved resolution and reduced sidelobes. However, DMAS is still sensitive to high levels of noise, and the resolution improvement is not satisfactory. Here, we propose a novel algorithm based on DAS algebra inside the DMAS formula expansion, double-stage DMAS (DS-DMAS), which improves image resolution and sidelobe levels and is much less sensitive to high levels of noise compared to DMAS. The performance of the DS-DMAS algorithm is evaluated numerically and experimentally. The resulting images are evaluated qualitatively and quantitatively using established quality metrics, including signal-to-noise ratio (SNR), full-width-half-maximum (FWHM), and contrast ratio (CR). It is shown that DS-DMAS outperforms DAS and DMAS at the expense of a higher computational load. DS-DMAS reduces the lateral valley by about 15 dB and improves the SNR and FWHM by more than 13% and 30%, respectively. Moreover, sidelobe levels are reduced by about 10 dB in comparison with those of DMAS.
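
For context, a standard DMAS beamformer can be written very compactly using the signed-square-root trick; the sketch below covers plain DMAS only, since the paper's DS-DMAS expansion is not reproduced here.

```python
import numpy as np

def dmas_beamform(delayed):
    """Delay-multiply-and-sum for one image point.  `delayed` holds the
    time-aligned sample from each of N array elements; all pairwise products
    are signed-square-rooted before summation, which is what gives DMAS its
    sidelobe suppression relative to DAS."""
    s = np.sign(delayed) * np.sqrt(np.abs(delayed))   # signed square root
    total = np.sum(s) ** 2 - np.sum(s ** 2)           # sum over i != j of s_i * s_j
    return 0.5 * total                                # count each pair once

# DAS, for comparison, is simply np.sum(delayed).
```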

152 citations


Journal ArticleDOI
TL;DR: The main parameters (displacements, stress, volume fluctuations, and temperature) affecting the stump–socket interface and reducing the comfort/stability of limb prostheses are described, together with the technological solutions available to address an altered distribution of stresses on the residual limb tissues.
Abstract: In the prosthetics field, one of the most important bottlenecks is still the human–machine interface, namely the socket. Indeed, a large number of amputees still reject prostheses or report a low satisfaction level, due to a sub-optimal interaction between the socket and the residual limb tissues. The aim of this paper is to describe the main parameters (displacements, stress, volume fluctuations, and temperature) affecting the stump–socket interface and reducing the comfort/stability of limb prostheses. In this review, a classification of the different socket types proposed in the literature is reported, together with an analysis of the advantages and disadvantages of the different solutions from multiple viewpoints. The paper then describes the technological solutions available to address an altered distribution of stresses on the residual limb tissues, volume fluctuations affecting the stump over time, and temperature variations affecting the residual tissues within the socket. The open challenges in this research field are highlighted and possible future routes are discussed, towards the ambitious objective of achieving an advanced socket able to self-adapt in real time to the complex interplay of factors affecting the stump, during both static and dynamic tasks.

Journal ArticleDOI
TL;DR: The tested set of wearable sensors was able to successfully capture human stress and quantify stress level and may be useful in designing portable and remote control systems, such as medical devices used to turn on interventions and prevent stress consequences.
Abstract: Objective: The objectives of this paper are to develop and test the ability of a wearable physiological sensors system, based on ECG, EDA, and EEG, to capture human stress and to assess whether the detected changes in physiological signals correlate with changes in salivary cortisol level, which is a reliable, objective biomarker of stress. Methods: 15 healthy participants, eight males and seven females, mean age 40.8 ± 9.5 years, wore a set of three commercial sensors to record physiological signals during the Maastricht Acute Stress Test, an experimental protocol known to elicit robust physical and mental stress in humans. Salivary samples were collected throughout the different phases of the test. Statistical analysis was performed using a support vector machine (SVM) classification algorithm. A correlation analysis between extracted physiological features and salivary cortisol levels was also performed. Results: 15 features extracted from heart rate variability, electrodermal, and electroencephalography signals showed a high degree of significance in disentangling stress from a relaxed state. The classification algorithm, based on significant features, provided satisfactory outcomes with 86% accuracy. Furthermore, correlation analysis showed that the observed changes in physiological features were consistent with the trend of salivary cortisol levels (R2 = 0.714). Conclusion: The tested set of wearable sensors was able to successfully capture human stress and quantify stress level. Significance: The results of this pilot study may be useful in designing portable and remote control systems, such as medical devices used to turn on interventions and prevent stress consequences.
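
A minimal sketch of the analysis pipeline (SVM classification of the extracted features plus a feature-cortisol correlation) is given below with scikit-learn and SciPy; the arrays are random placeholders standing in for the real HRV/EDA/EEG features and cortisol measurements.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from scipy.stats import pearsonr

# X: (n_windows, n_features) matrix of HRV/EDA/EEG features,
# y: binary labels (0 = relaxed, 1 = stressed) -- placeholders for real data.
X = np.random.randn(120, 15)
y = np.random.randint(0, 2, 120)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
acc = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated accuracy: {acc:.2f}")

# Correlation of one physiological feature with salivary cortisol level
cortisol = np.random.rand(120)               # placeholder cortisol measurements
r, p = pearsonr(X[:, 0], cortisol)
print(f"feature-cortisol correlation r={r:.2f}, p={p:.3f}")
```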

Journal ArticleDOI
TL;DR: The developed technique requires less processing time than current methods to construct 128-bit RBSs in real-time health monitoring scenarios, and the generated sequences can potentially be used as keys for encryption or as entity identifiers to secure WBSNs.
Abstract: Heartbeat-based random binary sequences (RBSs) are the backbone of several security mechanisms in wireless body sensor networks (WBSNs). However, current heartbeat-based methods require a lot of processing time (∼25–30 s) to generate 128-bit RBSs in real-time healthcare applications. In order to improve time efficiency, a biometric RBS generation technique using interpulse intervals (IPIs) of heartbeats is developed in this study. The proposed technique incorporates a finite monotonic increasing sequence generation mechanism for IPIs and a cyclic block encoding procedure that extracts a high number of entropic bits from each IPI. To validate the proposed technique, 89 ECG recordings are considered, including 25 from healthy individuals in a laboratory environment, 20 from the MIT-BIH Arrhythmia Database, and 44 from cardiac patients in a clinical environment. By applying the proposed technique to the ECG signals, at most 16 random bits can be extracted from each heartbeat to generate 128-bit RBSs via concatenation of eight consecutive IPIs. The randomness and distinctiveness of the generated 128-bit RBSs are measured using the National Institute of Standards and Technology statistical tests and the Hamming distance, respectively. The experimental results show that the 128-bit RBSs generated from both healthy subjects and patients can potentially be used as keys for encryption or as entity identifiers to secure WBSNs. Moreover, the proposed approach is shown to be up to four times faster than existing heartbeat-based RBS generation schemes. Therefore, the developed technique requires less processing time (0–8 s) than current methods to construct 128-bit RBSs in real-time health monitoring scenarios.
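
The following is only a rough illustration of harvesting key bits from interpulse intervals; it quantizes each IPI and keeps low-order bits until 128 bits are collected, and it deliberately omits the paper's monotonic-sequence construction and cyclic block encoding.

```python
import numpy as np

def ipis_to_key(r_peak_times_s, bits_per_ipi=16, key_len=128):
    """Illustrative IPI key generation: quantize each interpulse interval
    (in ms), keep its least significant bits, and concatenate bits from
    consecutive heartbeats until 128 bits are collected."""
    ipis_ms = np.diff(np.asarray(r_peak_times_s)) * 1000.0
    bits = []
    for ipi in ipis_ms:
        q = int(round(ipi)) & ((1 << bits_per_ipi) - 1)      # low-order bits
        bits.extend(int(b) for b in format(q, f"0{bits_per_ipi}b"))
        if len(bits) >= key_len:
            return np.array(bits[:key_len], dtype=np.uint8)
    raise ValueError("not enough heartbeats to build the key")
```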

Journal ArticleDOI
TL;DR: A novel and automated lesion detection scheme highlights an improved performance over the existing methods, with an average accuracy of 97.71% and robustness in detecting the various types of DR lesions irrespective of their intrinsic properties.
Abstract: Objective: Diabetic retinopathy (DR) is characterized by the progressive deterioration of the retina with the appearance of different types of lesions that include microaneurysms, hemorrhages, exudates, etc. Detection of these lesions plays a significant role in the early diagnosis of DR. Methods: To this aim, this paper proposes a novel and automated lesion detection scheme, which consists of four main steps: vessel extraction and optic disc removal, preprocessing, candidate lesion detection, and postprocessing. The optic disc and the blood vessels are suppressed first to facilitate further processing. Curvelet-based edge enhancement is done to separate the dark lesions from the poorly illuminated retinal background, while the contrast between the bright lesions and the background is enhanced through an optimally designed wideband bandpass filter. The mutual information of the maximum matched filter response and the maximum Laplacian of Gaussian response are then jointly maximized. A differential evolution algorithm is used to determine the optimal values for the parameters of the fuzzy functions that determine the thresholds for segmenting the candidate regions. Morphology-based postprocessing is finally applied to exclude the falsely detected candidate pixels. Results and Conclusions: Extensive simulations on different publicly available databases highlight an improved performance over the existing methods, with an average accuracy of 97.71% and robustness in detecting the various types of DR lesions irrespective of their intrinsic properties.

Journal ArticleDOI
TL;DR: TR-BREATH is a time-reversal (TR)-based contact-free breathing monitoring system capable of breathing detection and multiperson breathing rate estimation within a short period of time using off-the-shelf WiFi devices and is robust against packet loss and motions.
Abstract: In this paper, we introduce TR-BREATH, a time-reversal (TR) based contact-free breathing monitoring system. It is capable of breathing detection and multiperson breathing rate estimation within a short period of time using off-the-shelf WiFi devices. The proposed system exploits the channel state information (CSI) to capture the minute variations in the environment caused by breathing. To magnify the CSI variations, TR-BREATH projects CSIs into the TR resonating strength (TRRS) feature space and analyzes the TRRS with the Root-MUSIC and affinity propagation algorithms. Extensive indoor experiments demonstrate a perfect breathing detection rate. With only 10 s of measurement, a mean accuracy of 99% can be obtained for single-person breathing rate estimation under the non-line-of-sight (NLOS) scenario. Furthermore, it achieves a mean accuracy of 98.65% in breathing rate estimation for a dozen people under the line-of-sight scenario and a mean accuracy of 98.07% in breathing rate estimation for nine people under the NLOS scenario, both with 63 s of measurement. Moreover, TR-BREATH can estimate the number of people with an error of around 1. We also demonstrate that TR-BREATH is robust against packet loss and motion. With the prevalence of WiFi, TR-BREATH can be applied to in-home and real-time breathing monitoring.

Journal ArticleDOI
TL;DR: This paper proposes a segmentation-free radiomics method to classify malignant and benign breast tumors with shear-wave elastography (SWE) data, integrating the advantage of SWE in providing important elasticity and morphology information with that of the convolutional neural network in automatic feature extraction and accurate classification.
Abstract: This paper proposes a segmentation-free radiomics method to classify malignant and benign breast tumors with shear-wave elastography (SWE) data. The method is designed to integrate the advantage of SWE in providing important elasticity and morphology information with that of the convolutional neural network (CNN) in automatic feature extraction and accurate classification. Compared to traditional methods, the proposed method directly extracts features from the dataset without the prerequisite of segmentation and manual operation. This preserves the peri-tumor information, which is lost by segmentation-based methods. With the proposed model trained on 540 images (318 of malignant breast tumors and 222 of benign breast tumors), an accuracy of 95.8%, a sensitivity of 96.2%, and a specificity of 95.7% were obtained on the final test. The superior performance compared to existing state-of-the-art methods and its automatic nature both demonstrate that the proposed method has great potential to be applied to clinical computer-aided diagnosis of breast cancer.

Journal ArticleDOI
TL;DR: A novel deformable registration method based on a cue-aware deep regression network handles multiple databases with minimal parameter tuning and can tackle various registration tasks on different databases without manual parameter tuning.
Abstract: Significance: Analysis of modern large-scale, multicenter or diseased data requires deformable registration algorithms that can cope with data of diverse nature. Objective: We propose a novel deformable registration method, which is based on a cue-aware deep regression network, to deal with multiple databases with minimal parameter tuning. Methods: Our method learns and predicts the deformation field between a reference image and a subject image. Specifically, given a set of training images, our method learns the displacement vector associated with a pair of reference–subject patches. To achieve this, we first introduce a key-point truncated-balanced sampling strategy to facilitate accurate learning from the image database of limited size. Then, we design a cue-aware deep regression network, where we propose to employ the contextual cue, i.e., the scale-adaptive local similarity, to more apparently guide the learning process. The deep regression network is aware of the contextual cue for accurate prediction of local deformation. Results and Conclusion : Our experiments show that the proposed method can tackle various registration tasks on different databases, giving consistent good performance without the need of manual parameter tuning, which could be applicable to various clinical applications.

Journal ArticleDOI
TL;DR: An adaptive MPC algorithm based on R2R shows in silico great potential to capture intra- and interday glucose variability by improving both overnight and postprandial glucose control without increasing hypoglycemia.
Abstract: Objective: Contemporary and future outpatient long-term artificial pancreas (AP) studies need to cope with the well-known large intra- and interday glucose variability occurring in type 1 diabetic (T1D) subjects. Here, we propose an adaptive model predictive control (MPC) strategy to account for it and test it in silico. Methods: A run-to-run (R2R) approach adapts the subcutaneous basal insulin delivery during the night and the carbohydrate-to-insulin ratio (CR) during the day, based on some performance indices calculated from subcutaneous continuous glucose sensor data. In particular, R2R aims, first, to reduce the percentage of time in hypoglycemia and, secondarily, to improve the percentage of time in euglycemia and average glucose. In silico simulations are performed by using the University of Virginia/Padova T1D simulator enriched by incorporating three novel features: intra- and interday variability of insulin sensitivity, different distributions of CR at breakfast, lunch, and dinner, and dawn phenomenon. Results: After about two months, using the R2R approach with a scenario characterized by a random ±30% variation of the nominal insulin sensitivity, the time in range and the time in tight range are increased by 11.39% and 44.87%, respectively, and the time spent above 180 mg/dl is reduced by 48.74%. Conclusions: An adaptive MPC algorithm based on R2R shows in silico great potential to capture intra- and interday glucose variability by improving both overnight and postprandial glucose control without increasing hypoglycemia. Significance: Making an AP adaptive is key for long-term real-life outpatient studies. These good in silico results are very encouraging and worth testing in vivo.
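
A toy sketch of a run-to-run update rule in the spirit described above is shown below; the gains, targets, and the simple multiplicative form are assumptions for illustration, not the adaptation law used in the paper.

```python
def r2r_update_basal(basal, pct_time_hypo, pct_time_eu,
                     k_hypo=0.1, k_eu=0.05, hypo_target=0.0, eu_target=0.9):
    """Toy run-to-run update of overnight basal insulin: first reduce basal if
    hypoglycemia time exceeds its target, otherwise nudge basal to improve time
    in euglycemia.  Gains and targets are illustrative placeholders."""
    if pct_time_hypo > hypo_target:
        return basal * (1.0 - k_hypo * pct_time_hypo)      # prioritize safety
    return basal * (1.0 + k_eu * (eu_target - pct_time_eu))

# A similar daily update would adjust the carbohydrate-to-insulin ratio (CR)
# from postprandial indices computed on CGM data.
```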

Journal ArticleDOI
TL;DR: A new methodology for the segmentation of heart sounds is introduced, proposing an event detection approach with deep recurrent neural networks (DRNNs) using spectral or envelope features and investigating the performance of different DRNN architectures in detecting the state sequence.
Abstract: Objective: In this paper, we accurately detect the state sequence first heart sound (S1)–systole–second heart sound (S2)–diastole, i.e., the positions of S1 and S2, in heart sound recordings. We propose an event detection approach without explicitly incorporating a priori information of the state duration. This renders it also applicable to recordings with cardiac arrhythmia and extendable to the detection of extra heart sounds (third and fourth heart sound), heart murmurs, as well as other acoustic events. Methods: We use data from the 2016 PhysioNet/CinC Challenge, containing heart sound recordings and annotations of the heart sound states. From the recordings, we extract spectral and envelope features and investigate the performance of different deep recurrent neural network (DRNN) architectures to detect the state sequence. We use virtual adversarial training, dropout, and data augmentation for regularization. Results: We compare our results with the state-of-the-art method and achieve an average score for the four events of the state sequence of F1 ≈ 96% on an independent test set. Conclusion: Our approach shows state-of-the-art performance carefully evaluated on the 2016 PhysioNet/CinC Challenge dataset. Significance: In this work, we introduce a new methodology for the segmentation of heart sounds, suggesting an event detection approach with DRNNs using spectral or envelope features.
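
A small PyTorch sketch of the kind of DRNN described above (a bidirectional GRU mapping feature frames to per-frame state probabilities) follows; the layer sizes and feature dimensionality are illustrative.

```python
import torch
import torch.nn as nn

class HeartSoundDRNN(nn.Module):
    """Bidirectional GRU that maps a sequence of spectral/envelope feature
    frames to per-frame logits for the four states (S1, systole, S2, diastole)."""
    def __init__(self, n_features=40, hidden=128, n_states=4):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_states)

    def forward(self, x):                 # x: (batch, frames, n_features)
        h, _ = self.rnn(x)
        return self.head(h)               # (batch, frames, n_states) logits

# Training would minimize frame-wise cross-entropy against the PhysioNet/CinC
# 2016 state annotations, with dropout and data augmentation as regularizers.
```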

Journal ArticleDOI
TL;DR: A method that enables online analysis of neuromusculoskeletal function in vivo in the intact human and incorporates the modeling paradigm into a computationally efficient, generic framework that can be interfaced in real-time with any movement data collection system.
Abstract: Objective: Current clinical biomechanics involves lengthy data acquisition and time-consuming offline analyses with biomechanical models not operating in real-time for man–machine interfacing. We developed a method that enables online analysis of neuromusculoskeletal function in vivo in the intact human. Methods: We used electromyography (EMG)-driven musculoskeletal modeling to simulate all transformations from muscle excitation onset (EMGs) to mechanical moment production around multiple lower-limb degrees of freedom (DOFs). We developed a calibration algorithm that enables adjusting musculoskeletal model parameters specifically to an individual's anthropometry and force-generating capacity. We incorporated the modeling paradigm into a computationally efficient, generic framework that can be interfaced in real-time with any movement data collection system. Results: The framework demonstrated the ability of computing forces in 13 lower-limb muscle-tendon units and resulting moments about three joint DOFs simultaneously in real-time. Remarkably, it was capable of extrapolating beyond calibration conditions, i.e., predicting accurate joint moments during six unseen tasks and one unseen DOF. Conclusion: The proposed framework can dramatically reduce evaluation latency in current clinical biomechanics and open up new avenues for establishing prompt and personalized treatments, as well as for establishing natural interfaces between patients and rehabilitation systems. Significance: The integration of EMG with numerical modeling will enable simulating realistic neuromuscular strategies in conditions including muscular/orthopedic deficit, which could not be robustly simulated via pure modeling formulations. This will enable translation to clinical settings and development of healthcare technologies including real-time bio-feedback of internal mechanical forces and direct patient-machine interfacing.
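
The excitation-to-activation part of an EMG-driven model can be sketched with the standard filtering and recursive activation dynamics shown below; the cutoff frequencies, recursion coefficients, and shape factor are textbook-style defaults, whereas the paper calibrates such parameters per subject and adds the full musculotendon and joint-moment computations.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def emg_to_activation(raw_emg, fs, C1=-0.5, C2=-0.5, A=-1.5, delay=2):
    """Standard EMG-to-activation chain: high-pass, rectify, low-pass and
    normalize the EMG to get excitation, then apply a second-order recursive
    filter and a nonlinear shape factor to get muscle activation."""
    b, a = butter(2, 30 / (fs / 2), btype="high")
    e = np.abs(filtfilt(b, a, raw_emg))                  # rectified EMG
    b, a = butter(2, 6 / (fs / 2), btype="low")
    e = filtfilt(b, a, e)
    e = np.clip(e / (e.max() + 1e-12), 0, 1)             # normalized excitation

    beta1, beta2 = C1 + C2, C1 * C2                      # recursion coefficients
    alpha = 1.0 + beta1 + beta2                          # unit steady-state gain
    u = np.zeros_like(e)                                 # neural activation
    for t in range(2, len(e)):
        u[t] = alpha * e[t - delay] - beta1 * u[t - 1] - beta2 * u[t - 2]
    return (np.exp(A * np.clip(u, 0, 1)) - 1) / (np.exp(A) - 1)   # nonlinearity
```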

Journal ArticleDOI
TL;DR: A fully automated algorithm to segment fluid-associated (fluid-filled) and cyst regions in optical coherence tomography (OCT) retina images of subjects with diabetic macular edema is presented and includes a novel approach in estimating the number of clusters in an automated manner.
Abstract: This paper presents a fully automated algorithm to segment fluid-associated (fluid-filled) and cyst regions in optical coherence tomography (OCT) retina images of subjects with diabetic macular edema. The OCT image is segmented using a novel neutrosophic transformation and a graph-based shortest path method. In the neutrosophic domain, an image g is transformed into three sets: T (true), I (indeterminate), which represents noise, and F (false). This paper makes four key contributions. First, a new method is introduced to compute the indeterminacy set I, and a new λ-correction operation is introduced to compute the set T in the neutrosophic domain. Second, a graph shortest-path method is applied in the neutrosophic domain to segment the inner limiting membrane and the retinal pigment epithelium as regions of interest (ROI), and the outer plexiform layer and inner segment myeloid as middle layers, using a novel definition of the edge weights. Third, a new cost function for cluster-based fluid/cyst segmentation in the ROI is presented, which also includes a novel approach for estimating the number of clusters in an automated manner. Fourth, the final fluid regions are obtained by ignoring very small regions and the regions between the middle layers. The proposed method is evaluated using two publicly available datasets (Duke and Optima) and a third local dataset from the University of Minnesota (UMN) clinic, which is available online. The proposed algorithm outperforms the previously proposed Duke algorithm by 8% with respect to the Dice coefficient and by 5% with respect to precision on the Duke dataset, while achieving about the same sensitivity. Also, the proposed algorithm outperforms a prior method on the Optima dataset by 6%, 22%, and 23% with respect to the Dice coefficient, sensitivity, and precision, respectively. Finally, the proposed algorithm achieves sensitivities of 67.3%, 88.8%, and 76.7% for the Duke, Optima, and UMN datasets, respectively.

Journal ArticleDOI
TL;DR: Results show that the developed multisite BBB chip is expected to be useful for drug screening by more accurately predicting drug permeability through the BBB as well as drug toxicity.
Abstract: Objective: The blood–brain barrier (BBB) poses a unique challenge to the development of therapeutics against neurological disorders due to its impermeability to most chemical compounds. Most in vitro BBB models have limitations in mimicking in vivo conditions and functions. Here, we show a co-culture microfluidic BBB-on-a-chip that provides interactions between neurovascular endothelial cells and neuronal cells across a porous polycarbonate membrane, which better mimics in vivo conditions and allows in vivo levels of shear stress to be applied. Methods: A 4 × 4 intersecting microchannel array forms 16 BBB sites on a chip, with an integrated multielectrode array that measures the transendothelial electrical resistance (TEER) at all 16 sites, allowing label-free real-time analysis of the barrier function. Primary mouse endothelial cells and primary astrocytes were co-cultured in the chip while in vivo level shear stress was applied. The chip allows the barrier function to be analyzed through TEER measurement, dextran permeability, as well as immunostaining. Results: Co-culture of astrocytes and endothelial cells, together with the applied in vivo level shear stress, led to the formation of tighter junctions and significantly lower barrier permeability. Moreover, drug testing with histamine showed increased permeability when using only endothelial cells, compared to almost no change when using co-culture. Conclusion: The results show that the developed BBB chip more closely mimics the in vivo BBB environment. Significance: The developed multisite BBB chip is expected to be used for drug screening by more accurately predicting drug permeability through the BBB as well as drug toxicity.

Journal ArticleDOI
TL;DR: These experiments show that STRAS (v2) provides sufficient DoFs, workspace, and force to perform ESD, that it allows a single surgeon to perform all the surgical tasks, and that performance is improved with respect to manual systems.
Abstract: Objective: Minimally invasive surgical interventions in the gastrointestinal tract, such as endoscopic submucosal dissection (ESD), are very difficult for surgeons when performed with standard flexible endoscopes. Robotic flexible systems have been identified as a solution to improve manipulation. However, only a few such systems have been brought to preclinical trials so far. As a result, novel robotic tools are required. Methods: We developed a telemanipulated robotic device, called STRAS, which aims to assist surgeons during intraluminal surgical endoscopy. This is a modular system, based on a flexible endoscope and flexible instruments, which provides 10 degrees of freedom (DoFs). The modularity allows the user to easily set up the robot and to navigate toward the operating area. The robot can then be teleoperated using master interfaces specifically designed to intuitively control all available DoFs. STRAS capabilities have been tested in laboratory conditions and during preclinical experiments. Results: We report 12 colorectal ESDs performed in pigs, in which large lesions were successfully removed. Dissection speeds are compared with those obtained in similar conditions with the manual Anubiscope platform from Karl Storz. We show significant improvements (p = 0.01). Conclusion: These experiments show that STRAS (v2) provides sufficient DoFs, workspace, and force to perform ESD, that it allows a single surgeon to perform all the surgical tasks, and that performance is improved with respect to manual systems. Significance: The concepts developed for STRAS are validated and could provide surgeons with new tools to improve comfort, ease, and performance in intraluminal surgical endoscopy.

Journal ArticleDOI
TL;DR: This method of prosthesis control has the potential to deliver real-world clinical benefits to amputees: better condition-tolerant performance, reduced training burden in terms of frequency and duration, and increased adoption of myoelectric prostheses.
Abstract: Myoelectric signals can be used to predict the intended movements of an amputee for prosthesis control. However, untrained effects like limb position changes influence myoelectric signal characteristics, hindering the ability of pattern recognition algorithms to discriminate among motion classes. Despite frequent and long training sessions, these deleterious conditional influences may result in poor performance and device abandonment. Goal: We present a robust sparsity-based adaptive classification method that is significantly less sensitive to signal deviations resulting from untrained conditions. Methods: We compare this approach in the offline and online contexts of untrained upper-limb positions for amputee and able-bodied subjects to demonstrate its robustness against other myoelectric classification methods. Results: We report significant performance improvements in untrained limb positions across all subject groups. Significance: The robustness of the suggested approach helps to ensure better untrained-condition performance from fewer training conditions. Conclusions: This method of prosthesis control has the potential to deliver real-world clinical benefits to amputees: better condition-tolerant performance, reduced training burden in terms of frequency and duration, and increased adoption of myoelectric prostheses.

Journal ArticleDOI
TL;DR: A system with wrist and ankle motion sensors can provide accurate measures of tremor, bradykinesia, and dyskinesia as patients complete routine activities and could provide insight into motor fluctuations in the context of daily life to guide clinical management and aid in the development of new therapies.
Abstract: Objective: Fluctuations in response to levodopa in Parkinson's disease (PD) are difficult to treat, as tools to monitor temporal patterns of symptoms are hampered by several challenges. The objective was to use wearable sensors to quantify the dose response of tremor, bradykinesia, and dyskinesia in individuals with PD. Methods: Thirteen individuals with PD and fluctuating motor benefit were instrumented with wrist and ankle motion sensors and recorded on video. Kinematic data were recorded as subjects completed a series of activities in a simulated home environment through the transition from off to on medication. Subjects were evaluated using the Unified Parkinson's Disease Rating Scale motor exam (UPDRS-III) at the start and end of data collection. Algorithms were applied to the kinematic data to score tremor, bradykinesia, and dyskinesia. A blinded clinician rated the severity observed on video. The accuracy of the algorithms was evaluated by comparing scores with clinician ratings using a receiver operating characteristic (ROC) analysis. Results: Algorithm scores for tremor, bradykinesia, and dyskinesia agreed with clinician ratings of the video recordings (ROC area > 0.8). Summary metrics extracted from time intervals before and after taking medication provided quantitative measures of therapeutic response. Conclusion: A system with wrist and ankle motion sensors can provide accurate measures of tremor, bradykinesia, and dyskinesia as patients complete routine activities. Significance: This technology could provide insight into motor fluctuations in the context of daily life to guide clinical management and aid in the development of new therapies.
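
The accuracy evaluation described above reduces to comparing continuous algorithm scores against binarized clinician ratings with an ROC analysis; a scikit-learn sketch with placeholder data is shown below.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# algorithm_scores: continuous symptom-severity scores from the wearable
# kinematic algorithms; clinician_labels: binarized video ratings
# (0 = symptom absent/mild, 1 = present).  Values below are placeholders.
algorithm_scores = np.random.rand(200)
clinician_labels = np.random.randint(0, 2, 200)

auc = roc_auc_score(clinician_labels, algorithm_scores)
fpr, tpr, thresholds = roc_curve(clinician_labels, algorithm_scores)
print(f"ROC area = {auc:.2f}")   # the paper reports ROC area > 0.8 per symptom
```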

Journal ArticleDOI
TL;DR: This study designs, applies, and evaluates a deep 3-D CNN framework for automatic, effective, and accurate classification and recognition of a large number of functional brain networks reconstructed by sparse representation of whole-brain fMRI signals, providing a new deep learning approach for modeling functional connectomes based on fMRI data.
Abstract: Current functional magnetic resonance imaging (fMRI) data modeling techniques, such as independent component analysis and sparse coding methods, can effectively reconstruct dozens or hundreds of concurrent interacting functional brain networks simultaneously from whole-brain fMRI signals. However, such reconstructed networks have no correspondence across different subjects. Thus, automatic, effective, and accurate classification and recognition of these large numbers of fMRI-derived functional brain networks are very important for subsequent steps of functional brain analysis in cognitive and clinical neuroscience applications. However, this task is still a challenging and open problem due to the tremendous variability of the various types of functional brain networks and the presence of various sources of noise. In recognition of the fact that convolutional neural networks (CNNs) have a superior capability for representing spatial patterns with huge variability and dealing with large noise, in this paper we design, apply, and evaluate a deep 3-D CNN framework for automatic, effective, and accurate classification and recognition of a large number of functional brain networks reconstructed by sparse representation of whole-brain fMRI signals. Our extensive experimental results based on the Human Connectome Project fMRI data show that the proposed deep 3-D CNN can effectively and robustly perform functional network classification and recognition tasks, while maintaining a high tolerance for mistakenly labeled training instances. This study provides a new deep learning approach for modeling functional connectomes based on fMRI data.

Journal ArticleDOI
TL;DR: The proposed framework integrates seamlessly with a wide variety of popular MPC variants reported in AP research, customizes the tradeoff between glycemic regulation and efficacy according to prior design specifications, and eliminates the need for judicious prior selection of controller sampling times.
Abstract: Objective: The development of artificial pancreas (AP) technology for deployment in low-energy, embedded devices is contingent upon selecting an efficient control algorithm for regulating glucose in people with type 1 diabetes mellitus. In this paper, we aim to lower the energy consumption of the AP by reducing controller updates, that is, the number of times the decision-making algorithm is invoked to compute an appropriate insulin dose. Methods: Physiological insights into glucose management are leveraged to design an event-triggered model predictive controller (MPC) that operates efficiently without compromising patient safety. The proposed event-triggered MPC is deployed on a wearable platform. Its robustness to latent hypoglycemia, model mismatch, and meal misinformation is tested, with and without meal announcement, on the full version of the US-FDA-accepted UVA/Padova metabolic simulator. Results: The event-based controller remains on for 18 h of 41 h in closed loop with unannounced meals, while maintaining glucose in 70–180 mg/dL for 25 h, compared to 27 h for a standard MPC controller. With meal announcement, the time in 70–180 mg/dL is almost identical, with the controller operating a mere 25.88% of the time in comparison with a standard MPC. Conclusion: A novel control architecture for AP systems enables safe glycemic regulation with reduced processor computations. Significance: Our proposed framework integrates seamlessly with a wide variety of popular MPC variants reported in AP research, customizes the tradeoff between glycemic regulation and efficacy according to prior design specifications, and eliminates the need for judicious prior selection of controller sampling times.
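
A toy event-trigger condition of the kind motivated above is sketched below; the thresholds and the three triggering rules are illustrative assumptions, not the physiologically derived criteria of the paper.

```python
def mpc_should_trigger(glucose, last_update_time, t, rate,
                       low=80.0, high=160.0, max_interval=30.0):
    """Wake the MPC solver only when glucose leaves a safe band, is changing
    quickly, or a maximum time since the last controller update has elapsed;
    otherwise keep the previous basal command and save computation."""
    out_of_band = glucose < low or glucose > high    # mg/dL band
    fast_change = abs(rate) > 2.0                    # mg/dL per min
    stale = (t - last_update_time) > max_interval    # minutes since last update
    return out_of_band or fast_change or stale
```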

Journal ArticleDOI
TL;DR: The proposed EOG-based HMI provides a novel nonmanual approach for severely paralyzed individuals to control a wheelchair and, compared with a newly established EOG-based HMI, can generate more commands with higher accuracy, a lower FPR, and fewer electrodes.
Abstract: Objective: Nonmanual human–machine interfaces (HMIs) have been studied for wheelchair control with the aim of helping severely paralyzed individuals regain some mobility. The challenge is to rapidly, accurately, and sufficiently produce control commands, such as left and right turns, forward and backward motions, acceleration, deceleration, and stopping. In this paper, a novel electrooculogram (EOG) based HMI is proposed for wheelchair control. Methods: A total of 13 flashing buttons, each of which corresponds to a command, are presented in the graphical user interface. These buttons flash one by one in a predefined sequence. The user can select a button by blinking in sync with its flashes. The algorithm detects the eye blinks from a channel of vertical EOG data and determines the user's target button based on the synchronization between the detected blinks and the button's flashes. Results: For healthy subjects/patients with spinal cord injuries, the proposed HMI achieved an average accuracy of 96.7%/91.7% and a response time of 3.53 s/3.67 s with a zero false positive rate (FPR). Conclusion: Using one channel of vertical EOG signals associated with eye blinks, the proposed HMI can accurately provide sufficient commands with a satisfactory response time. Significance: The proposed HMI provides a novel nonmanual approach for severely paralyzed individuals to control a wheelchair. Compared with a newly established EOG-based HMI, the proposed HMI can generate more commands with higher accuracy, a lower FPR, and fewer electrodes.
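
A simplified sketch of the two steps described above (blink detection on the vertical EOG channel and matching detected blinks to a button's flash onsets) is given below with SciPy; the filter band, amplitude threshold, and tolerance window are illustrative and would need per-user calibration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_blinks(veog, fs, height=150e-6, min_separation_s=0.25):
    """Detect eye blinks as prominent peaks in the band-passed vertical EOG.
    The amplitude threshold (in volts) is an illustrative placeholder."""
    b, a = butter(2, [0.5 / (fs / 2), 10 / (fs / 2)], btype="band")
    filt = filtfilt(b, a, veog)
    peaks, _ = find_peaks(filt, height=height, distance=int(min_separation_s * fs))
    return peaks / fs                                  # blink times in seconds

def select_button(blink_times, flash_times_per_button, tolerance_s=0.4):
    """Pick the button whose flash onsets are best matched (within a tolerance)
    by the detected blinks -- a simplified version of the synchronization idea."""
    scores = []
    for flashes in flash_times_per_button:             # list of onset arrays
        hits = sum(np.any(np.abs(blink_times - f) < tolerance_s) for f in flashes)
        scores.append(hits)
    return int(np.argmax(scores))
```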

Journal ArticleDOI
TL;DR: Experimental findings in this study support the feasibility of using biomechanically-assistive garments to reduce low back muscle loading, which may help reduce injury risks or fatigue due to high or repetitive forces.
Abstract: Goal: The purpose of this study was: 1) to design and fabricate a biomechanically-assistive garment which was sufficiently lightweight and low-profile to be worn underneath, or as, clothing, and then 2) to perform human subject testing to assess the ability of the garment to offload the low back muscles during leaning and lifting. Methods: We designed a prototype garment which acts in parallel with the low back extensor muscles to reduce forces borne by the lumbar musculature. We then tested eight healthy subjects while they performed common leaning and lifting tasks with and without the garment. We recorded muscle activity, body kinematics, and assistive forces. Results: The biomechanically-assistive garment offloaded the low back muscles, reducing erector spinae muscle activity by an average of 23–43% during leaning tasks, and 14–16% during lifting tasks. Conclusion: Experimental findings in this study support the feasibility of using biomechanically-assistive garments to reduce low back muscle loading, which may help reduce injury risks or fatigue due to high or repetitive forces. Significance: Biomechanically-assistive garments may have broad societal appeal as a lightweight, unobtrusive, and cost-effective means to mitigate low back loading in daily life.

Journal ArticleDOI
TL;DR: In this article, a personalized real-time risk scoring algorithm that provides timely and granular assessments for the clinical acuity of ward patients based on their (temporal) lab tests and vital signs is proposed.
Abstract: Objective: In this paper, we develop a personalized real-time risk scoring algorithm that provides timely and granular assessments of the clinical acuity of ward patients based on their (temporal) lab tests and vital signs; the proposed risk scoring system ensures timely intensive care unit admissions for clinically deteriorating patients. Methods: The risk scoring system is based on the idea of sequential hypothesis testing under an uncertain time horizon. The system learns a set of latent patient subtypes from offline electronic health record data and trains a mixture of Gaussian process experts, where each expert models the physiological data streams associated with a specific patient subtype. Transfer learning techniques are used to learn the relationship between a patient's latent subtype and her static admission information (e.g., age, gender, transfer status, ICD-9 codes, etc.). Results: Experiments conducted on data from a heterogeneous cohort of 6321 patients admitted to Ronald Reagan UCLA Medical Center show that our score significantly outperforms currently deployed risk scores, such as the Rothman index, MEWS, APACHE, and SOFA scores, in terms of timeliness, true positive rate, and positive predictive value. Conclusion: Our results reflect the importance of adopting the concepts of personalized medicine in critical care settings; significant accuracy and timeliness gains can be achieved by accounting for patients' heterogeneity. Significance: The proposed risk scoring methodology can confer huge clinical and social benefits on a massive number of critically ill inpatients who exhibit adverse outcomes including, but not limited to, cardiac arrests, respiratory arrests, and septic shocks.
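
A heavily simplified scikit-learn sketch of the subtype-plus-Gaussian-process idea is shown below; clustering static admission features stands in for the paper's latent subtype discovery, and the sequential hypothesis testing and transfer learning components are omitted. All arrays are random placeholders.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Placeholder data standing in for real EHR records.
static_info = np.random.randn(300, 6)        # admission features per patient
vitals_time = np.random.rand(300, 1) * 48    # hours since admission
vitals_value = np.random.randn(300, 1)       # vital-sign values
deterioration = np.random.rand(300)          # acuity target to regress

# Discover patient "subtypes" from static data, then fit one GP expert per
# subtype on that subtype's temporal vitals.
subtypes = KMeans(n_clusters=3, n_init=10).fit_predict(static_info)
experts = {}
for k in np.unique(subtypes):
    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel())
    idx = subtypes == k
    gp.fit(np.hstack([vitals_time[idx], vitals_value[idx]]), deterioration[idx])
    experts[k] = gp

# At runtime, a new patient's risk would be scored by the expert(s) matching
# her inferred subtype, updated as new lab tests and vitals stream in.
```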