
Showing papers in "Physiological Measurement in 2017"


Journal ArticleDOI
TL;DR: A comprehensive literature review on how to measure ICP invasively and noninvasively is provided, with a sense of the methods' relative strengths, drawbacks and areas for further improvement.
Abstract: Measurement of intracranial pressure (ICP) can be invaluable in the management of critically ill patients. Cerebrospinal fluid is produced by the choroid plexus in the brain ventricles (a set of communicating chambers), after which it circulates through the different ventricles and exits into the subarachnoid space around the brain, where it is reabsorbed into the venous system. If the fluid does not drain out of the brain or get reabsorbed, the ICP increases, which may lead to brain damage or death. ICP elevation accompanied by dilatation of the cerebral ventricles is termed hydrocephalus, whereas ICP elevation accompanied by normal or small ventricles is termed idiopathic intracranial hypertension. Objective We performed a comprehensive literature review on how to measure ICP invasively and noninvasively. Approach This review discusses the advantages and disadvantages of current invasive and noninvasive approaches. Main results Invasive methods remain the most accurate at measuring ICP, but they are prone to a variety of complications including infection, hemorrhage and neurological deficits. Ventricular catheters remain the gold standard but also carry the highest risk of complications, including difficult or incorrect placement. Direct telemetric intraparenchymal ICP monitoring devices are a good alternative. Noninvasive methods for measuring and evaluating ICP have been developed and classified in five broad categories, but have not been reliable enough to use on a routine basis. These methods include the fluid dynamic, ophthalmic, otic, and electrophysiologic methods, as well as magnetic resonance imaging, transcranial Doppler ultrasonography (TCD), cerebral blood flow velocity, near-infrared spectroscopy, transcranial time-of-flight, spontaneous venous pulsations, venous ophthalmodynamometry, optical coherence tomography of the retina, optic nerve sheath diameter (ONSD) assessment, pupillometry, tympanic membrane displacement sensing, analysis of otoacoustic emissions/acoustic measures, transcranial acoustic signals, visual-evoked potentials, electroencephalography, skull vibrations, brain tissue resonance and the jugular vein. Significance This review provides a current perspective of invasive and noninvasive ICP measurements, along with a sense of their relative strengths, drawbacks and areas for further improvement. At present, none of the noninvasive methods demonstrates sufficient accuracy and ease of use while allowing continuous monitoring in routine clinical use. However, they provide a feasible means of ICP assessment in specific patients, especially when invasive monitoring is contraindicated or unavailable. Among all noninvasive ICP measurement methods, ONSD and TCD are attractive and may be useful in selected settings, though they cannot substitute for invasive ICP measurement. For a sufficiently accurate and universal continuous ICP monitoring method/device, future research and development are needed to further refine the existing methods, combine telemetric sensors and/or technologies, and perform validation in large clinical studies on relevant patient populations.

157 citations


Journal ArticleDOI
TL;DR: The results indicate that a reasonable degree of sleep staging accuracy can be achieved using a wrist-worn device, which may be of utility in longitudinal studies of sleep habits.
Abstract: OBJECTIVE This paper aims to report on the accuracy of estimating sleep stages using a wrist-worn device that measures movement using a 3D accelerometer and an optical pulse photoplethysmograph (PPG). APPROACH Overnight recordings were obtained from 60 adult participants wearing these devices on their left and right wrist, simultaneously with a Type III home sleep testing device (Embletta MPR) which included EEG channels for sleep staging. The 60 participants were self-reported normal sleepers (36 M: 24 F, age = 34 ± 10, BMI = 28 ± 6). The Embletta recordings were scored for sleep stages using AASM guidelines and were used to develop and validate an automated sleep stage estimation algorithm, which labeled sleep stages as one of Wake, Light (N1 or N2), Deep (N3) and REM (REM). Features were extracted from the accelerometer and PPG sensors, which reflected movement, breathing and heart rate variability. MAIN RESULTS Based on leave-one-out validation, the overall per-epoch accuracy of the automated algorithm was 69%, with a Cohen's kappa of 0.52 ± 0.14. There was no observable bias to under- or over-estimate wake, light, or deep sleep durations. REM sleep duration was slightly over-estimated by the system. The most common misclassifications were light/REM and light/wake mislabeling. SIGNIFICANCE The results indicate that a reasonable degree of sleep staging accuracy can be achieved using a wrist-worn device, which may be of utility in longitudinal studies of sleep habits.
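
For readers wanting to reproduce the reported agreement statistics, per-epoch accuracy and Cohen's kappa can be computed directly with scikit-learn. The sketch below assumes two equal-length label sequences; the variable names and label set are assumptions, not the authors' code.

```python
# Per-epoch agreement between reference and estimated sleep stages.
# y_ref / y_est are hypothetical label sequences, e.g. 'Wake', 'Light', 'Deep', 'REM'.
from sklearn.metrics import accuracy_score, cohen_kappa_score

def staging_agreement(y_ref, y_est):
    """Return (per-epoch accuracy, Cohen's kappa) for two stage sequences."""
    return accuracy_score(y_ref, y_est), cohen_kappa_score(y_ref, y_est)
```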

135 citations


Journal ArticleDOI
TL;DR: It is concluded that the 'noisy numbers' in medical measurements, caused by ANS variability, are part and parcel of how the system works.
Abstract: The results of many medical measurements are directly or indirectly influenced by the autonomic nervous system (ANS). For example, pupil size or heart rate may demonstrate striking moment-to-moment variability. This review intends to elucidate the physiology behind this seemingly unpredictable system. The review is split into three parts. Part 1 treats the peripheral ANS: parallel innervation by the sympathetic and parasympathetic branches, and their transmitters and co-transmitters. It treats questions like the supposed sympatho/vagal balance, organization in plexuses, and the 'little brains' that are active, for example, in the enteric system or around the heart. Part 2 treats ANS function in some example organs in more detail: the eye, the heart, blood vessels, lungs, respiration and cardiorespiratory coupling. Part 3 poses the question of who is directing what: is the ANS a strictly top-down directed system, or is its organization bottom-up? Finally, it is concluded that the 'noisy numbers' in medical measurements, caused by ANS variability, are part and parcel of how the system works. This topical review is a one-man undertaking and may possibly give a biased view. The author has explicitly indicated in the text where his views are not (yet) supported by facts, hoping to provoke discussion and instigate new research.

135 citations


Journal ArticleDOI
TL;DR: Recommendations based on the results are provided regarding device designs for BR estimation, and clinical applications.
Abstract: OBJECTIVE Breathing rate (BR) can be estimated by extracting respiratory signals from the electrocardiogram (ECG) or photoplethysmogram (PPG). The extracted respiratory signals may be influenced by several technical and physiological factors. In this study, our aim was to determine how technical and physiological factors influence the quality of respiratory signals. APPROACH Using a variety of techniques, 15 respiratory signals were extracted from the ECG and 11 from the PPG, using signals collected from 57 healthy subjects. The quality of each respiratory signal was assessed by calculating its correlation with a reference oral-nasal pressure respiratory signal using Pearson's correlation coefficient. MAIN RESULTS Relevant results informing device design and clinical application were obtained. The results informing device design were: (i) seven out of 11 respiratory signals were of higher quality when extracted from finger PPG compared to ear PPG; (ii) laboratory equipment did not provide higher quality of respiratory signals than a clinical monitor; (iii) the ECG provided higher quality respiratory signals than the PPG; (iv) during downsampling of the ECG and PPG, significant reductions in quality were first observed at sampling frequencies of <250 Hz and <16 Hz, respectively. The results informing clinical application were: (i) frequency modulation-based respiratory signals were generally of lower quality in elderly subjects compared to young subjects; (ii) the qualities of 23 out of 26 respiratory signals were reduced at elevated BRs; (iii) there were no differences associated with gender. SIGNIFICANCE Recommendations based on the results are provided regarding device designs for BR estimation, and clinical applications. The dataset and code used in this study are publicly available.
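
As an illustration of the quality metric used above, the sketch below correlates an ECG- or PPG-derived respiratory signal with a reference pressure signal after resampling both to a common rate. The signal names and the 5 Hz comparison rate are assumptions, not details from the paper.

```python
import numpy as np
from scipy.signal import resample
from scipy.stats import pearsonr

def respiratory_signal_quality(extracted, fs_ext, reference, fs_ref, fs_common=5.0):
    """Pearson correlation between an extracted respiratory signal and a reference."""
    ext = resample(extracted, int(len(extracted) / fs_ext * fs_common))
    ref = resample(reference, int(len(reference) / fs_ref * fs_common))
    m = min(len(ext), len(ref))            # align lengths after resampling
    r, _ = pearsonr(ext[:m], ref[:m])
    return r
```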

109 citations


Journal ArticleDOI
TL;DR: The algorithm employed for identifying and quantifying steps and bouts from a single wearable accelerometer worn on the lower-back has been demonstrated to be valid and could be used for pragmatic gait analysis in prolonged uncontrolled free-living environments.
Abstract: Research suggests wearables and not instrumented walkways are better suited to quantify gait outcomes in clinic and free-living environments, providing a more comprehensive overview of walking due to continuous monitoring. Numerous validation studies in controlled settings exist, but few have examined the validity of wearables and associated algorithms for identifying and quantifying step counts and walking bouts in uncontrolled (free-living) environments. Studies which have examined free-living step and bout count validity found limited agreement due to variations in walking speed, changing terrain or task. Here we present a gait segmentation algorithm to define free-living step count and walking bouts from an open-source, high-resolution, accelerometer-based wearable (AX3, Axivity). Ten healthy participants (20-33 years) wore two portable gait measurement systems; a wearable accelerometer on the lower-back and a wearable body-mounted camera (GoPro HERO) on the chest, for 1 h on two separate occasions (24 h apart) during free-living activities. Step count and walking bouts were derived for both measurement systems and compared. For all participants during a total of almost 20 h of uncontrolled and unscripted free-living activity data, excellent relative (rho ⩾ 0.941) and absolute (ICC(2,1) ⩾ 0.975) agreement with no presence of bias were identified for step count compared to the camera (gold standard reference). Walking bout identification showed excellent relative (rho ⩾ 0.909) and absolute agreement (ICC(2,1) ⩾ 0.941) but demonstrated significant bias. The algorithm employed for identifying and quantifying steps and bouts from a single wearable accelerometer worn on the lower-back has been demonstrated to be valid and could be used for pragmatic gait analysis in prolonged uncontrolled free-living environments.
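
The paper's segmentation algorithm is not reproduced here, but the basic shape of accelerometer-based step and bout counting can be sketched as follows. The filter cutoff, peak height and bout-break threshold are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def count_steps_and_bouts(acc_vertical, fs, min_step_gap=0.25, bout_break=2.5):
    """Step and walking-bout counts from a lower-back vertical acceleration trace."""
    # low-pass filter to isolate step-frequency content (3 Hz cutoff assumed)
    b, a = butter(4, 3.0 / (fs / 2), btype='low')
    smoothed = filtfilt(b, a, acc_vertical - np.mean(acc_vertical))
    # candidate steps: peaks above an assumed amplitude, >= min_step_gap s apart
    peaks, _ = find_peaks(smoothed, height=0.1, distance=int(min_step_gap * fs))
    step_times = peaks / fs
    # a new bout starts whenever the gap between steps exceeds bout_break seconds
    n_bouts = 1 + int(np.sum(np.diff(step_times) > bout_break)) if len(step_times) else 0
    return len(step_times), n_bouts
```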

98 citations


Journal ArticleDOI
TL;DR: This pilot study suggests that analysis of variability in the time and frequency domain from pulse rate obtained through PPG may be potentially as reliable as that derived from the analysis of the electrocardiogram, provided that a sampling frequency fs ⩾ 25 Hz is used.
Abstract: Pulse rate variability (PRV) analysis appears as the first alternative to heart rate variability analysis for wearable devices; however, there is a constraint on computational load and energy consumption for the limited system resources available to the devices. Considering that adjustment of the sampling frequency is one of the strategies for reducing computational load and power consumption, this study aimed to investigate the influence of sampling frequency (fs) on PRV analysis and to find the minimum sampling frequency while maintaining reliability. We generated 5000, 2500, 1000, 500, 250, 100, 50, 25, 20, 15, 10, 5 Hz down-sampled photoplethysmograms from 10 kHz-sampled PPGs and derived time- and frequency-domain variables of the PRV. These included AVNN, SDNN, SDSD, RMSSD, NN50, pNN50, total power, VLF, LF, HF, LF/HF, nLF and nHF for each down-sampled signal. Derived variables were compared with heart rate variability of the 10 kHz-sampled electrocardiograms, and then statistically investigated using one-way ANOVA test and Bland-Altman analysis. As a result, significant differences (P < 0.05) were found for SDNN, SDSD, RMSSD, NN50, pNN50, TP, HF, LF/HF, nLF and nHF, but not for AVNN, VLF and LF. Based on the post hoc tests, it was found that the NN50 and pNN50, the SDSD and RMSSD, the LF/HF and nHF, and the SDNN, TP and nLF analyses showed significant differences at fs ⩽ 20 Hz, fs ⩽ 15 Hz, fs ⩽ 10 Hz and fs = 5 Hz, respectively. In other words, a significant difference was not seen for any variable if the fs was greater than or equal to 25 Hz. Consequently, our pilot study suggests that analysis of variability in the time and frequency domain from pulse rate obtained through PPG may be potentially as reliable as that derived from the analysis of the electrocardiogram, provided that a sampling frequency fs ⩾ 25 Hz is used.
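
The time-domain PRV variables named above follow directly from peak-to-peak intervals. The sketch below is a generic implementation (peak-detection settings assumed), which could be run on each down-sampled copy of the PPG to replicate the comparison.

```python
import numpy as np
from scipy.signal import find_peaks

def prv_time_domain(ppg, fs):
    """AVNN, SDNN, RMSSD and pNN50 (ms-based) from pulse peaks of a PPG segment."""
    peaks, _ = find_peaks(ppg, distance=int(0.4 * fs))  # >= 0.4 s between beats assumed
    nn = np.diff(peaks) / fs * 1000.0                   # peak-to-peak intervals in ms
    dnn = np.diff(nn)
    return {'AVNN': nn.mean(),
            'SDNN': nn.std(ddof=1),
            'RMSSD': np.sqrt(np.mean(dnn ** 2)),
            'pNN50': 100.0 * np.mean(np.abs(dnn) > 50.0)}

# e.g. naive down-sampling of a 10 kHz PPG to 25 Hz: prv_time_domain(ppg[::400], 25)
```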

93 citations


Journal ArticleDOI
TL;DR: The results suggest that the proposed SB method considerably increases the robustness of heart-rate measurement in challenging fitness applications, and outperforms the state-of-the-art method.
Abstract: Remote photoplethysmography (rPPG) enables contactless heart-rate monitoring using a regular video camera. Objective: This paper aims to improve the rPPG technology targeting continuous heart-rate measurement during fitness exercises. The fundamental limitation of the existing (multi-wavelength) rPPG methods is that they can suppress at most n − 1 independent distortions by linearly combining n wavelength color channels. Their performance is highly restricted when more than n − 1 independent distortions appear in a measurement, as typically occurs in fitness applications with vigorous body motions. Approach: To mitigate this limitation, we propose an effective yet very simple method that algorithmically extends the number of suppressible distortions without using more wavelengths. Our core idea is to increase the degrees-of-freedom of noise reduction by decomposing the n wavelength camera-signals into multiple orthogonal frequency bands and extracting the pulse-signal on a per-band basis. This processing, namely Sub-band rPPG (SB), can suppress different distortion-frequencies using independent combinations of color channels. Main results: A challenging fitness benchmark dataset is created, including 25 videos recorded from 7 healthy adult subjects (ages from 25 to 40 yrs; six male and one female) running on a treadmill in an indoor environment. Various practical challenges are simulated in the recordings, such as different skin-tones, light sources, illumination intensities, and exercising modes. The basic form of SB is benchmarked against a state-of-the-art method (POS) on the fitness dataset. Using non-biased parameter settings, the average signal-to-noise-ratio (SNR) varies in [−4.18, −2.07] dB for POS and in [−1.08, 4.77] dB for SB. The ANOVA test shows that the improvement of SB over POS is statistically significant for almost all settings (p-value <0.05). Significance: The results suggest that the proposed SB method considerably increases the robustness of heart-rate measurement in challenging fitness applications, and outperforms the state-of-the-art method.
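
To make the sub-band idea concrete, the sketch below band-passes the RGB traces, extracts a pulse per band with a POS-style projection, and sums the normalised band pulses. The band edges and filter order are assumptions, and the global normalisation is a simplification of the published method, not the authors' exact SB algorithm.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pos_pulse(rgb):
    """POS-style projection of temporally normalised RGB traces (shape: time x 3)."""
    c = rgb / rgb.mean(axis=0) - 1.0
    s1 = c[:, 1] - c[:, 2]
    s2 = c[:, 1] + c[:, 2] - 2.0 * c[:, 0]
    return s1 + (s1.std() / (s2.std() + 1e-12)) * s2

def subband_pulse(rgb, fs, bands=((0.6, 1.5), (1.5, 2.5), (2.5, 4.0))):
    """Extract one pulse signal per frequency band, then sum the normalised pulses."""
    mean_rgb = rgb.mean(axis=0)
    pulse = np.zeros(len(rgb))
    for lo, hi in bands:
        b, a = butter(3, [lo / (fs / 2), hi / (fs / 2)], btype='band')
        band_rgb = filtfilt(b, a, rgb, axis=0) + mean_rgb  # re-centre for normalisation
        p = pos_pulse(band_rgb)
        pulse += (p - p.mean()) / (p.std() + 1e-12)        # equal weight per band
    return pulse
```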

86 citations


Journal ArticleDOI
TL;DR: The aim of this study was to analyze the blood volume pulse (BVP) signals obtained from a wristband device and develop an algorithm for discriminating AF from normal sinus rhythm (NSR) or from other arrhythmias (ARR).
Abstract: Objective: Undiagnosed atrial fibrillation (AF) patients are at high risk of cardioembolic stroke or other complications. The aim of this study was to analyze the blood volume pulse (BVP) signals obtained from a wristband device and develop an algorithm for discriminating AF from normal sinus rhythm (NSR) or from other arrhythmias (ARR). Approach: Thirty patients with AF, 9 with ARR and 31 in NSR were included in the study. The recordings were obtained at rest from an Empatica E4 wristband device and lasted 10 min. The analysis, on a 2 min segment, included spectral, variability and irregularity analysis performed on the inter-diastolic interval series, and similarity analysis performed on the BVP signal. Main results and Significance: Variability parameters were the highest in AF, the lowest in NSR and intermediate for ARR; for example, pNN50 values differed significantly across the three groups (p < 0.05). The similarity parameters were the highest in NSR, the lowest in AF and intermediate for ARR; for example, with a fixed threshold for assessing similarity, the three groups differed significantly (all p < 0.05). The rhythm classification was preceded by over-sampling (using the synthetic minority over-sampling technique) the class of ARR, as it was the smallest class. Then, feature selection was performed (using the sequential forward floating search algorithm), which identified two variability parameters (pNN70 and pNN40) as the best selection. The classification by the k-nearest neighbor classifier reached an accuracy of about 0.9 for NSR and AF, and 0.8 for ARR. Using pNN70 and pNN40, the specificity for the three rhythms was Spnsr = 0.928, Spaf = 0.963, Sparr = 0.768, while the sensitivity was Sensr = 0.773, Seaf = 0.754, Searr = 0.758.
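
The classification pipeline described above (pNNx features, SMOTE over-sampling of the minority class, then a k-nearest-neighbour classifier) can be sketched as follows. The choice of k and the feature layout are assumptions, and imbalanced-learn is used for SMOTE.

```python
import numpy as np
from imblearn.over_sampling import SMOTE        # third-party: imbalanced-learn
from sklearn.neighbors import KNeighborsClassifier

def pnnx(nn_ms, x_ms):
    """Percentage of successive inter-diastolic interval differences above x_ms."""
    d = np.abs(np.diff(nn_ms))
    return 100.0 * np.mean(d > x_ms)

def train_rhythm_classifier(X, y, k=5):
    """X: rows of [pNN70, pNN40] per recording; y: labels in {'NSR', 'AF', 'ARR'}."""
    X_res, y_res = SMOTE().fit_resample(X, y)  # over-sample the smallest class (ARR)
    return KNeighborsClassifier(n_neighbors=k).fit(X_res, y_res)
```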

86 citations



Journal ArticleDOI
TL;DR: Deep convolutional neural networks and mel-frequency spectral coefficients were used for recognition of normal-abnormal phonocardiographic signals of the human heart in the PhysioNet.org Heart Sound database.
Abstract: Intensive care unit patients are heavily monitored, and several clinically-relevant parameters are routinely extracted from high resolution signals. Objective: The goal of the 2016 PhysioNet/CinC Challenge was to encourage the creation of an intelligent system that fused information from different phonocardiographic signals to create a robust set of normal/abnormal signal detections. Approach: Deep convolutional neural networks and mel-frequency spectral coefficients were used for recognition of normal–abnormal phonocardiographic signals of the human heart. This technique was developed using the PhysioNet.org Heart Sound database and was submitted for scoring on the challenge test set. Main results: The current entry for the proposed approach obtained an overall score of 84.15% in the last phase of the challenge, which provided the sixth official score and differs from the best score of 86.02% by just 1.87%.
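
A minimal sketch of the front end described above: mel-frequency coefficients computed from a PCG recording, arranged as a 2D time-frequency patch suitable as CNN input. The sampling rate, frame sizes and coefficient count are assumptions, and librosa is used for convenience.

```python
import numpy as np
import librosa

def pcg_mfcc_patch(path, sr=2000, n_mfcc=13, duration=5.0):
    """Mel-frequency coefficients of a PCG segment as a (n_mfcc x frames) array."""
    x, _ = librosa.load(path, sr=sr, duration=duration)
    mfcc = librosa.feature.mfcc(y=x, sr=sr, n_mfcc=n_mfcc, n_fft=256, hop_length=128)
    # standardise each coefficient row so the network sees zero-mean inputs
    return (mfcc - mfcc.mean(axis=1, keepdims=True)) / (mfcc.std(axis=1, keepdims=True) + 1e-9)
```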

81 citations


Journal ArticleDOI
TL;DR: The results show that sparse coding is an effective way to define spectral features of the cardiac cycle and its sub-cycles for the purpose of classification and can be combined with additional feature extraction methods to improve classification accuracy.
Abstract: Objective: This paper builds upon work submitted as part of the 2016 PhysioNet/CinC Challenge, which used sparse coding as a feature extraction tool on audio PCG data for heart sound classification. Approach: In sparse coding, preprocessed data is decomposed into a dictionary matrix and a sparse coefficient matrix. The dictionary matrix represents statistically important features of the audio segments. The sparse coefficient matrix is a mapping that represents which features are used by each segment. Working in the sparse domain, we train support vector machines (SVMs) for each audio segment (S1, systole, S2, diastole) and the full cardiac cycle. We train a sixth SVM to combine the results from the preliminary SVMs into a single binary label for the entire PCG recording. In addition to classifying heart sounds using sparse coding, this paper presents two novel modifications. The first uses a matrix norm in the dictionary update step of sparse coding to encourage the dictionary to learn discriminating features from the abnormal heart recordings. The second combines the sparse coding features with time-domain features in the final SVM stage. Main results: The original algorithm submitted to the challenge achieved a cross-validated mean accuracy (MAcc) score of 0.8652 (Se = 0.8669 and Sp = 0.8634). After incorporating the modifications new to this paper, we report an improved cross-validated MAcc of 0.8926 (Se = 0.9007 and Sp = 0.8845). Significance: Our results show that sparse coding is an effective way to define spectral features of the cardiac cycle and its sub-cycles for the purpose of classification. In addition, we demonstrate that sparse coding can be combined with additional feature extraction methods to improve classification accuracy.
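
The dictionary-learning step can be sketched with scikit-learn's sparse coding tools as below. The number of atoms and the sparsity penalty are assumptions, and the paper's discriminative matrix-norm modification is not included.

```python
from sklearn.decomposition import DictionaryLearning, sparse_encode
from sklearn.svm import SVC

def sparse_features(X_train, X_test, n_atoms=64, alpha=1.0):
    """Learn a dictionary on training segments, then sparse-code both sets."""
    dl = DictionaryLearning(n_components=n_atoms, alpha=alpha, max_iter=20)
    dl.fit(X_train)                              # rows of X are audio segments
    code_tr = sparse_encode(X_train, dl.components_, alpha=alpha)
    code_te = sparse_encode(X_test, dl.components_, alpha=alpha)
    return code_tr, code_te

# usage sketch: the codes feed a per-segment SVM, e.g.
# code_tr, code_te = sparse_features(X_train, X_test)
# clf = SVC(kernel='rbf').fit(code_tr, y_train); y_hat = clf.predict(code_te)
```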

Journal ArticleDOI
TL;DR: In this article, the authors used synchrosqueezing transform (SST) to characterize ECG patterns and used the proposed model to enhance heart beat detection and classification between normal and abnormal rhythms.
Abstract: The processing of ECG signal provides a wealth of information on cardiac function and overall cardiovascular health. While multi-lead ECG recordings are often necessary for a proper assessment of cardiac rhythms, they are not always available or practical, for example in fetal ECG applications. Moreover, a wide range of small non-obtrusive single-lead ECG ambulatory monitoring devices are now available, from which heart rate variability (HRV) and other health-related metrics are derived. Proper beat detection and classification of abnormal rhythms is important for reliable HRV assessment and can be challenging in single-lead ECG monitoring devices. In this manuscript, we modelled the heart rate signal as an adaptive non-harmonic model and used the newly developed synchrosqueezing transform (SST) to characterize ECG patterns. We show how the proposed model can be used to enhance heart beat detection and classification between normal and abnormal rhythms. In particular, using the Massachusetts Institute of Technology-Beth Israel Hospital (MIT-BIH) arrhythmia database and the Association for the Advancement of Medical Instrumentation (AAMI) beat classes, we trained and validated a support vector machine (SVM) classifier on a portion of the annotated beat database using the SST-derived instantaneous phase, the R-peak amplitudes and R-peak to R-peak interval durations, based on a single ECG lead. We obtained sensitivities and positive predictive values comparable to other published algorithms using multiple leads and many more features.
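
A rough sketch of the per-beat feature vector described above is given below. For simplicity, the Hilbert-transform phase stands in for the SST-derived instantaneous phase (the SST itself is not reimplemented here), and the peak-detection settings are assumptions.

```python
import numpy as np
from scipy.signal import find_peaks, hilbert

def beat_features(ecg, fs):
    """Per-beat features: R amplitude, adjacent RR intervals, instantaneous phase."""
    r_peaks, _ = find_peaks(ecg, distance=int(0.3 * fs), prominence=np.std(ecg))
    phase = np.angle(hilbert(ecg - np.mean(ecg)))   # stand-in for the SST phase
    rr = np.diff(r_peaks) / fs
    feats = [[ecg[r_peaks[i]],                      # R-peak amplitude
              rr[i - 1], rr[i],                     # preceding / following RR interval
              phase[r_peaks[i]]]                    # phase at the beat
             for i in range(1, len(r_peaks) - 1)]
    return np.asarray(feats), r_peaks[1:-1]

# the feature rows would then train an SVM (sklearn.svm.SVC) on AAMI beat classes
```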

Journal ArticleDOI
TL;DR: This review provides a methodological account of the multi-channel approach for the study of myoelectric manifestations of fatigue and of the experimental conditions to which it applies, as well as examples of its current applications.
Abstract: In a broad view, fatigue is used to indicate a degree of weariness. On a muscular level, fatigue denotes the reduced capacity of muscle fibres to produce force, even in the presence of motor neuron excitation via either spinal mechanisms or electric pulses applied externally. Prior to decreased force, when sustaining physically demanding tasks, alterations in the muscle's electrical properties take place. These alterations, termed myoelectric manifestations of fatigue, can be assessed non-invasively with a pair of surface electrodes positioned appropriately on the target muscle (the traditional approach). A relatively more recent approach consists of the use of multiple electrodes. This multi-channel approach provides access to a set of physiologically relevant variables on the global muscle level or on the level of single motor units, opening new fronts for the study of muscle fatigue; it allows for: (i) a more precise quantification of the propagation velocity, a physiological variable of marked interest to the study of fatigue; (ii) the assessment of regional myoelectric manifestations of fatigue; (iii) the analysis of single motor units, with the possibility to obtain information about motor unit control and fibre membrane changes. This review provides a methodological account of the multi-channel approach for the study of myoelectric manifestations of fatigue and of the experimental conditions to which it applies, as well as examples of its current applications.
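
One of the multi-channel quantities mentioned, propagation (conduction) velocity, follows from the delay between two electrode channels. A cross-correlation sketch is shown below; the 10 mm inter-electrode distance is an assumed value.

```python
import numpy as np

def conduction_velocity(emg_a, emg_b, fs, inter_electrode_m=0.010):
    """Fibre conduction velocity (m/s) from the delay between two sEMG channels."""
    a = emg_a - emg_a.mean()
    b = emg_b - emg_b.mean()
    xc = np.correlate(b, a, mode='full')
    lag = np.argmax(xc) - (len(a) - 1)      # samples by which channel b lags channel a
    delay = lag / fs
    return inter_electrode_m / delay if delay > 0 else np.nan
```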

Journal ArticleDOI
TL;DR: Automatic heart sound analysis has the potential to improve the diagnosis of valvular heart diseases in the primary care phase, as well as in countries where there is neither the expertise nor the equipment to perform echocardiograms.
Abstract: Objective: Automatic heart sound analysis has the potential to improve the diagnosis of valvular heart diseases in the primary care phase, as well as in countries where there is neither the expertise nor the equipment to perform echocardiograms. An algorithm has been trained, on the PhysioNet open-access heart sounds database, to classify heart sounds as normal or abnormal. Approach: The heart sounds are segmented using an open-source algorithm based on a hidden semi-Markov model. Following this, the time-frequency behaviour of a single heartbeat is characterized by using a novel implementation of the continuous wavelet transform, mel-frequency cepstral coefficients, and certain complexity measures. These features help detect the presence of any murmurs. A number of other features are also extracted to characterise the inter-beat behaviour of the heart sounds, which helps to recognize diseases such as arrhythmia. The extracted features are normalized and their dimensionality is reduced using principal component analysis. They are then used as the input to a fully-connected, two-hidden-layer neural network, trained by error backpropagation, and regularized with DropConnect. Main results: This algorithm achieved an accuracy of 85.2% on the test data, which placed third in the PhysioNet/Computing in Cardiology Challenge (first place scored 86.0%). However, this is not representative of real-world performance, as the test data contained a dataset (dataset-e) in which normal and abnormal heart sounds were recorded with different stethoscopes. A 10-fold cross-validation study on the training data (excluding dataset-e) gives a mean score of 74.8%, which is a more realistic estimate of accuracy. With dataset-e excluded from training, the algorithm scored only 58.1% on the test data.
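
The feature-reduction and classification stages map naturally onto a scikit-learn pipeline, sketched below. DropConnect is not available there, so L2 weight regularisation stands in, and the retained-variance fraction and layer sizes are assumptions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier

# X: per-recording features (CWT, MFCC, complexity, inter-beat); y: normal/abnormal
clf = make_pipeline(
    StandardScaler(),                       # normalise features
    PCA(n_components=0.95),                 # keep 95% of the variance (assumed)
    MLPClassifier(hidden_layer_sizes=(50, 50), alpha=1e-3, max_iter=500),
)
# clf.fit(X_train, y_train); clf.predict(X_test)
```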

Journal ArticleDOI
TL;DR: Clinical study results demonstrate technical versatility of DCS/DCT in providing important information for disease diagnosis and intervention monitoring in a variety of organs/tissues including brain, skeletal muscle, and tumor.
Abstract: OBJECTIVE Blood flow is a readily available observable that provides a wealth of physiological insight, both on its own and in combination with other metrics. APPROACH Near-infrared diffuse correlation spectroscopy (DCS) and, to a lesser extent, diffuse correlation tomography (DCT), have increasingly received interest over the past decade as noninvasive methods for tissue blood flow measurements and imaging. DCS/DCT offers several attractive features for tissue blood flow measurements/imaging such as noninvasiveness, portability, high temporal resolution, and relatively large penetration depth (up to several centimeters). MAIN RESULTS This review first introduces the basic principle and instrumentation of DCS/DCT, followed by presenting clinical application examples of DCS/DCT for the diagnosis and therapeutic monitoring of diseases in a variety of organs/tissues including brain, skeletal muscle, and tumor. SIGNIFICANCE Clinical study results demonstrate technical versatility of DCS/DCT in providing important information for disease diagnosis and intervention monitoring.

Journal ArticleDOI
TL;DR: A novel photoplethysmograph probe employing dual photodiodes excited using a single infrared light source was developed for local pulse wave velocity (PWV) measurement and the potential use of the proposed system in cuffless blood pressure techniques was demonstrated.
Abstract: Objective: A novel photoplethysmograph probe employing dual photodiodes excited using a single infrared light source was developed for local pulse wave velocity (PWV) measurement. The potential use of the proposed system in cuffless blood pressure (BP) techniques was demonstrated. Approach: Initial validation measurements were performed on a phantom using a reference method. Further, an in vivo study was carried out in 35 volunteers (age = 28 ± 4.5 years). The carotid local PWV, carotid to finger pulse transit time (PTTR) and pulse arrival time at the carotid artery (PATC) were simultaneously measured. Beat-by-beat variation of the local PWV due to BP changes was studied during post-exercise relaxation. The cuffless BP estimation accuracy of local PWV, PATC, and PTTR was investigated based on inter- and intra-subject models with best-case calibration. Main results: The accuracy of the proposed system, hardware inter-channel delay (<0.1 ms), repeatability (beat-to-beat variation = 4.15%–11.38%) and reproducibility of measurement (r = 0.96) were examined. For the phantom experiment, the measured PWV values did not differ by more than 0.74 m s−1 compared to the reference PWV. Better correlation was observed between brachial BP parameters versus local PWV (r = 0.74–0.78) compared to PTTR (|r| = 0.62–0.67) and PATC (|r| = 0.52–0.68). Cuffless BP estimation using local PWV was better than PTTR and PATC with population-specific models. More accurate estimates of arterial BP levels were achieved using local PWV via subject-specific models (root-mean-square error ≤2.61 mmHg). Significance: A reliable system for cuffless BP measurement and local estimation of arterial wall properties.
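
The core computation, local PWV from the transit delay between the two photodiode signals, can be sketched as below. The inter-photodiode spacing and the use of derivative cross-correlation for the delay estimate are assumptions, not the authors' exact method.

```python
import numpy as np

def local_pwv(ppg_proximal, ppg_distal, fs, spacing_m=0.02):
    """Local pulse wave velocity (m/s) from two closely spaced PPG channels."""
    a = np.diff(ppg_proximal)               # derivatives emphasise the pulse upstroke
    b = np.diff(ppg_distal)
    xc = np.correlate(b - b.mean(), a - a.mean(), mode='full')
    delay = (np.argmax(xc) - (len(a) - 1)) / fs   # transit time in seconds
    return spacing_m / delay if delay > 0 else np.nan
```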

Journal ArticleDOI
TL;DR: A brief snapshot into the fast changing research field of measurement and physiological links to nanoparticle use and its potential in the future is provided.
Abstract: Nanotechnology is of increasing interest in the fields of medicine and physiology over recent years. Its application could considerably improve disease detection and therapy, and although the potential is considerable, there are still many challenges, which need to be addressed before it is accepted in routine clinical use. This review focuses on emerging applications that nanotechnology could enhance or provide new approaches in diagnoses and therapy. The main focus of recent research centres on targeted therapies and enhancing imaging; however, the introduction of nanomaterials into the human body must be controlled, as there are many issues with possible toxicity and long-term effects. Despite these issues, the potential for nanotechnology to provide new methods of combating cancer and other disease conditions is considerable. There are still key challenges for researchers in this field, including the means of delivery and targeting in the body to provide effective treatment for specific disease conditions. Nanoparticles are difficult to measure due to their size and physical properties; hence there is still a great need to improve physiological measurement methods in the field to ascertain how effective their use is in human subjects. This review is a brief snapshot into the fast-changing research field of measurement and physiological links to nanoparticle use and its potential in the future.

Journal ArticleDOI
TL;DR: Novel HRP indices improve the accuracy of assessment due to their more appropriate consideration of complex autonomic processes across the recording technologies (CTG, handheld Doppler, MCG, ECG) and the reported novel developments significantly extend the possibilities for the established CTG methodology.
Abstract: Monitoring fetal behavior has implications not only for acute care but also for identifying developmental disturbances that burden the entire later life. The concept of 'fetal programming', also known as the 'developmental origins of adult disease' hypothesis, applies, for example, to cardiovascular, metabolic, hyperkinetic and cognitive disorders. Since the autonomic nervous system is involved in all of those systems, cardiac autonomic control may provide relevant functional diagnostic and prognostic information. Fetal heart rate patterns (HRP) are one of the few functional signals in the prenatal period that relate to autonomic control and are, therefore, predestined for its evaluation. The development of sensitive markers of fetal maturation and its disturbances requires the consideration of physiological fundamentals, recording technology and HRP parameters of autonomic control. Based on the ESGCO2016 special session on monitoring fetal maturation, we herein report the most recent results on: (i) the functional fetal autonomic brain age score (fABAS); Recurrence Quantitative Analysis and Binary Symbolic Dynamics of complex HRP resolve specific maturation periods; (ii) magnetocardiography (MCG) based fABAS was validated for cardiotocography (CTG); (iii) 30 min recordings are sufficient for obtaining episodes of high variability, important for intrauterine growth restriction (IUGR) detection in handheld Doppler; (iv) novel parameters from PRSA to identify IUGR fetuses; (v) evaluation of fetal electrocardiographic (ECG) recordings; (vi) correlation between maternal and fetal HRV is disturbed in pre-eclampsia. The reported novel developments significantly extend the possibilities for the established CTG methodology. Novel HRP indices improve the accuracy of assessment due to their more appropriate consideration of complex autonomic processes across the recording technologies (CTG, handheld Doppler, MCG, ECG). The ultimate objective is their dissemination into routine practice and studies of fetal developmental disturbances with implications for the programming of adult diseases.

Journal ArticleDOI
TL;DR: For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models, while linear models may be viable alternative modeling techniques for EE prediction with hip- or thigh-worn accelerometers.
Abstract: This study had three purposes, all related to evaluating energy expenditure (EE) prediction accuracy from body-worn accelerometers: (1) compare linear regression to linear mixed models, (2) compare linear models to artificial neural network models, and (3) compare accuracy of accelerometers placed on the hip, thigh, and wrists. Forty individuals performed 13 activities in a 90 min semi-structured, laboratory-based protocol. Participants wore accelerometers on the right hip, right thigh, and both wrists and a portable metabolic analyzer (EE criterion). Four EE prediction models were developed for each accelerometer: linear regression, linear mixed, and two ANN models. EE prediction accuracy was assessed using correlations, root mean square error (RMSE), and bias and was compared across models and accelerometers using repeated-measures analysis of variance. For all accelerometer placements, there were no significant differences for correlations or RMSE between linear regression and linear mixed models (correlations: r = 0.71-0.88, RMSE: 1.11-1.61 METs; p > 0.05). For the thigh-worn accelerometer, there were no differences in correlations or RMSE between linear and ANN models (ANN-correlations: r = 0.89, RMSE: 1.07-1.08 METs. Linear models-correlations: r = 0.88, RMSE: 1.10-1.11 METs; p > 0.05). Conversely, one ANN had higher correlations and lower RMSE than both linear models for the hip (ANN-correlation: r = 0.88, RMSE: 1.12 METs. Linear models-correlations: r = 0.86, RMSE: 1.18-1.19 METs; p < 0.05), and both ANNs had higher correlations and lower RMSE than both linear models for the wrist-worn accelerometers (ANN-correlations: r = 0.82-0.84, RMSE: 1.26-1.32 METs. Linear models-correlations: r = 0.71-0.73, RMSE: 1.55-1.61 METs; p < 0.01). For studies using wrist-worn accelerometers, machine learning models offer a significant improvement in EE prediction accuracy over linear models. Conversely, linear models showed similar EE prediction accuracy to machine learning models for hip- and thigh-worn accelerometers and may be viable alternative modeling techniques for EE prediction for hip- or thigh-worn accelerometers.
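
A compact way to reproduce the linear-versus-ANN comparison is cross-validated RMSE over the same feature matrix, sketched below. The hidden-layer size and fold count are assumptions, not the paper's configuration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_predict

def compare_ee_models(X, y_mets, cv=5):
    """Cross-validated RMSE (METs) of a linear model versus a small ANN."""
    rmse = {}
    for name, model in [('linear', LinearRegression()),
                        ('ann', MLPRegressor(hidden_layer_sizes=(25,), max_iter=2000))]:
        pred = cross_val_predict(model, X, y_mets, cv=cv)
        rmse[name] = float(np.sqrt(np.mean((pred - y_mets) ** 2)))
    return rmse
```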

Journal ArticleDOI
TL;DR: This article reviews the current engineering approaches for the detection and treatment of sleep apnea and provides a current perspective of the classes of tools at hand, along with a sense of their relative strengths and areas for further improvement.
Abstract: While public awareness of sleep related disorders is growing, sleep apnea syndrome (SAS) remains a public health and economic challenge. Over the last two decades, extensive controlled epidemiologic research has clarified the incidence, risk factors including the obesity epidemic, and global prevalence of obstructive sleep apnea (OSA), as well as establishing a growing body of literature linking OSA with cardiovascular morbidity, mortality, metabolic dysregulation, and neurocognitive impairment. The US Institute of Medicine Committee on Sleep Medicine estimates that 50-70 million US adults have sleep or wakefulness disorders. Furthermore, the American Academy of Sleep Medicine (AASM) estimates that more than 29 million US adults suffer from moderate to severe OSA, with an estimated 80% of those individuals living unaware and undiagnosed, contributing to more than $149.6 billion in healthcare and other costs in 2015. Although various devices have been used to measure physiological signals, detect apneic events, and help treat sleep apnea, significant opportunities remain to improve the quality, efficiency, and affordability of sleep apnea care. As our understanding of respiratory and neurophysiological signals and sleep apnea physiological mechanisms continues to grow, and our ability to detect and process biomedical signals improves, novel diagnostic and treatment modalities emerge. Objective This article reviews the current engineering approaches for the detection and treatment of sleep apnea. Approach It discusses signal acquisition and processing, highlights the current nonsurgical and nonpharmacological treatments, and discusses potential new therapeutic approaches. Main results This work has led to an array of validated signal and sensor modalities for acquiring, storing and viewing sleep data; a broad class of computational and signal processing approaches to detect and classify SAS disease patterns; and a set of distinctive therapeutic technologies whose use cases span the continuum of disease severity. Significance This review provides a current perspective of the classes of tools at hand, along with a sense of their relative strengths and areas for further improvement.

Journal ArticleDOI
TL;DR: This paper introduces a novel method for automatic classification of normal and abnormal heart sound recordings using a nested set of ensemble algorithms, which helps reduce overfitting and improves classification performance.
Abstract: Objective: Heart sound classification and analysis play an important role in the early diagnosis and prevention of cardiovascular disease. To this end, this paper introduces a novel method for automatic classification of normal and abnormal heart sound recordings. Approach: Signals are first preprocessed to extract a total of 131 features in the time, frequency, wavelet and statistical domains from the entire signal and from the timings of the states. Outlier signals are then detected and separated from those with a standard range using an interquartile range algorithm. After that, feature extreme values are given special consideration, and finally features are reduced to the most significant ones using a feature reduction technique. In the classification stage, the selected features either for standard or outlier signals are fed separately into an ensemble of 20 two-step classifiers for the classification task. The first step of the classifier is represented by a nested set of ensemble algorithms which was cross-validated on the training dataset provided by PhysioNet Challenge 2016, while the second one uses a voting rule of the class label. Main results: The results show that this method is able to recognize heart sound recordings efficiently, achieving an overall score of 96.30% for standard signals and 90.18% for outlier signals on a cross-validated experiment using the available training data. Significance: The approach of our proposed method helped reduce overfitting and improved classification performance, achieving an overall score on the hidden test set of 80.1% (79.6% sensitivity and 80.6% specificity).
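
The interquartile-range outlier split used in the preprocessing stage can be sketched as below. The 1.5 x IQR fence factor is a conventional choice assumed here, not necessarily the paper's.

```python
import numpy as np

def split_outliers(features):
    """Separate recordings whose feature vectors fall outside the IQR fences."""
    q1, q3 = np.percentile(features, [25, 75], axis=0)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    is_outlier = np.any((features < lo) | (features > hi), axis=1)
    return features[~is_outlier], features[is_outlier]
```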

Journal ArticleDOI
TL;DR: The feasibility of accurate classification without segmentation of the characteristic heart sounds has been demonstrated and classification accuracy is comparable to other algorithms but achieved without the complexity of segmentation.
Abstract: Objective: Most algorithms for automated analysis of phonocardiograms (PCG) require segmentation of the signal into the characteristic heart sounds. The aim was to assess the feasibility for accurate classification of heart sounds on short, unsegmented recordings. Approach: PCG segments of 5 s duration from the PhysioNet/Computing in Cardiology Challenge database were analysed. Initially the 5 s segment at the start of each recording (seg 1) was analysed. Segments were zero-mean but otherwise had no pre-processing or segmentation. Normalised spectral amplitude was determined by fast Fourier transform and wavelet entropy by wavelet analysis. For each of these a simple single feature threshold-based classifier was implemented and the frequency/scale and thresholds for optimum classification accuracy determined. The analysis was then repeated using relatively noise free 5 s segments (seg 2) of each recording. Spectral amplitude and wavelet entropy features were then combined in a classification tree. Main results: There were significant differences between normal and abnormal recordings for both wavelet entropy and spectral amplitude across scales and frequency. In the wavelet domain the differences between groups were greatest at highest frequencies (wavelet scale 1, pseudo frequency 1 kHz) whereas in the frequency domain the differences were greatest at low frequencies (12 Hz). Abnormal recordings had significantly reduced high frequency wavelet entropy: (Median (interquartile range)) 6.63 (2.42) versus 8.36 (1.91), p < 0.0001, suggesting the presence of discrete high frequency components in these recordings. Abnormal recordings exhibited significantly greater low frequency (12 Hz) spectral amplitude: 0.24 (0.22) versus 0.09 (0.15), p < 0.0001. Classification accuracy (mean of specificity and sensitivity) was greatest for wavelet entropy: 76% (specificity 54%, sensitivity 98%) versus 70% (specificity 65%, sensitivity 75%) and was further improved by selecting the lowest noise segment (seg 2): 80% (specificity 65%, sensitivity 94%) versus 71% (specificity 63%, sensitivity 79%). Classification tree with combined features gave accuracy 79% (specificity 80%, sensitivity 77%). Significance: The feasibility of accurate classification without segmentation of the characteristic heart sounds has been demonstrated. Classification accuracy is comparable to other algorithms but achieved without the complexity of segmentation.
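
The two single-feature classifiers rest on quantities that are easy to compute on an unsegmented 5 s record. The sketch below shows a generic wavelet entropy (mother wavelet and decomposition level are assumptions; the paper works per scale) and the normalised spectral amplitude at 12 Hz.

```python
import numpy as np
import pywt

def wavelet_entropy(x, wavelet='db4', level=6):
    """Shannon entropy of relative wavelet energies across decomposition scales."""
    coeffs = pywt.wavedec(x - np.mean(x), wavelet, level=level)
    energy = np.array([np.sum(c ** 2) for c in coeffs])
    p = energy / energy.sum()
    return -np.sum(p * np.log2(p + 1e-12))

def spectral_amplitude_12hz(x, fs):
    """Normalised FFT amplitude at 12 Hz, the paper's most discriminative frequency."""
    spec = np.abs(np.fft.rfft(x - np.mean(x)))
    spec /= spec.max()
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    return spec[np.argmin(np.abs(f - 12.0))]
```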

Journal ArticleDOI
TL;DR: A spectral filtering approach (SFA) is developed, which is a new technique for thermography-based blood flow imaging that eliminates the need to solve differential equations for the determination of the relationship between skin blood flow and skin temperature dynamics.
Abstract: The determination of the relationship between skin blood flow and skin temperature dynamics is the main problem in thermography-based blood flow imaging. Oscillations in skin blood flow are the source of thermal waves propagating from micro-vessels toward the skin's surface, as assumed in this study. This hypothesis allows us to use equations for the attenuation and dispersion of thermal waves for converting the temperature signal into the blood flow signal, and vice versa. We developed a spectral filtering approach (SFA), which is a new technique for thermography-based blood flow imaging. In contrast to other processing techniques, the SFA implies calculations in the spectral domain rather than in the time domain. Therefore, it eliminates the need to solve differential equations. The developed technique was verified within 0.005-0.1 Hz, including the endothelial, neurogenic and myogenic frequency bands of blood flow oscillations. The algorithm for an inverse conversion of the blood flow signal into the skin temperature signal is addressed. Examples of blood flow imaging of the hands during cuff occlusion and of the feet during heating of the back are presented. The processing of infrared (IR) thermograms using the SFA allowed us to restore the blood flow signals and achieve correlations of about 0.8 with a waveform of a photoplethysmographic signal. The prospective applications of the thermography-based blood flow imaging technique include non-contact monitoring of the blood supply during engraftment of skin flaps and burns healing, as well as the use of contact temperature sensors to monitor low-frequency oscillations of peripheral blood flow.
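
The attenuation/dispersion relations for a plane thermal wave suggest a simple spectral-domain restoration: each Fourier component measured at the surface is amplified by e^{z/δ(f)} and phase-advanced by z/δ(f), with penetration depth δ(f) = sqrt(α/(π f)). The sketch below only illustrates that idea; the depth, thermal diffusivity and band limits are assumed values, not the authors' calibration.

```python
import numpy as np

def restore_blood_flow(temp_signal, fs, depth_m=0.5e-3, alpha=1e-7):
    """Undo thermal-wave attenuation and phase lag from depth z, in 0.005-0.1 Hz."""
    spec = np.fft.rfft(temp_signal - np.mean(temp_signal))
    f = np.fft.rfftfreq(len(temp_signal), 1.0 / fs)
    k = np.sqrt(np.pi * f / alpha)                   # 1/delta(f), thermal wavenumber
    h = np.exp(depth_m * k) * np.exp(1j * depth_m * k)
    band = (f >= 0.005) & (f <= 0.1)                 # frequency range verified in the paper
    return np.fft.irfft(np.where(band, spec * h, 0.0), n=len(temp_signal))
```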

Journal ArticleDOI
TL;DR: The results indicate that the proposed method is effective in classifying heart sounds as normal versus abnormal recordings.
Abstract: Heart sound analysis has been a major topic of research over the past few decades. However, the necessity for a large and reliable database has been a major concern in these studies. Objective: Noting that current heart sound classification methods do not work properly for noisy signals, the PhysioNet/CinC Challenge 2016 aims to develop heart sound classification algorithms by providing a global open database for challengers. This paper addresses the problem of heart sound classification within noisy real-world phonocardiogram recordings by implementing an innovative approach. Significance: After locating the fundamental heart sounds and the systolic and diastolic components, a novel method named cycle quality assessment is applied to each recording. The presented method detects those cycles which are less affected by noise and better segmented, using two criteria proposed in this paper. The selected cycles are the inputs of a further feature extraction process. Approach: Due to the variability of the heart sound signal induced by various cardiac arrhythmias, four sets of features from the time, time-frequency and perceptual domains are extracted. Before starting the main classification process, the obtained 90-dimensional feature vector is mapped to a new feature space to pre-detect normal recordings by applying Fisher's discriminant analysis. The main classification procedure is then done based on three feed-forward neural networks and a voting system among classifiers. Main results: The presented method is evaluated using the training and hidden test sets of the PhysioNet/CinC Challenge 2016. Also, the results are compared with the top five ranked submissions. The results indicate that the proposed method is effective in classifying heart sounds as normal versus abnormal recordings.

Journal ArticleDOI
TL;DR: Results show that using an earlobe photoplethysmographic signal is a viable, inexpensive and non-invasive AF detection method that could be invaluable in detecting subclinical AF.
Abstract: Atrial fibrillation (AF) is the most common cardiac arrhythmia in the world, associated with increased risk of thromboembolic events and an increased mortality rate. In addition, a significant portion of AF patients are asymptomatic. Current AF diagnostic methods, often including a body surface electrocardiogram or implantable loop recorder, are both expensive and invasive and offer limited access within the general community. Objective: We tested the feasibility of the detection of AF using a photoplethysmographic signal acquired from an inexpensive, non-invasive earlobe photoplethysmographic sensor. This technology can be implemented into wearable devices and would enable continuous cardiac monitoring capabilities, greatly improving the rate of asymptomatic AF detection. Approach: We conducted a clinical study of patients going through electrical cardioversion for AF treatment. Photoplethysmographic recordings were taken from these AF patients before and after their cardioversion procedure, along with recordings from a healthy control group. Using these recordings, cardiac beats were identified and the inter-systolic interval was calculated. The inter-systolic interval was used to calculate four parameters to quantify the heart rate variability indicative of AF. Receiver operating characteristic curves were used to calculate discriminant thresholds between the AF and non-AF cohorts. Main results: The parameter with the greatest discriminant capability resulted in a sensitivity and specificity of 90.9%. These results are comparable to expensive ECG-based and invasive implantable loop recorder AF detection methods. Significance: These results demonstrate that using a non-invasive earlobe photoplethysmographic signal is a viable and inexpensive alternative to ECG-based AF detection methods, and an alternative that could be invaluable in detecting subclinical AF.
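
The discriminant thresholds mentioned above follow directly from an ROC analysis. A sketch using the Youden index is shown below; it assumes higher parameter values indicate AF, which may need a sign flip for other parameters.

```python
import numpy as np
from sklearn.metrics import roc_curve

def best_threshold(param_values, is_af):
    """Threshold maximising sensitivity + specificity - 1 (Youden index)."""
    fpr, tpr, thr = roc_curve(is_af, param_values)
    k = int(np.argmax(tpr - fpr))
    return thr[k], tpr[k], 1.0 - fpr[k]    # threshold, sensitivity, specificity
```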

Journal ArticleDOI
TL;DR: The proposed two-step processing scheme to estimate heart rate (HR) from wrist-type PPG signals strongly corrupted by motion artifacts is fully automatic, induces an average estimation delay of 0.93 s, and is therefore suitable for real-time monitoring applications.
Abstract: Photoplethysmographic (PPG) signals are easily corrupted by motion artifacts when the subjects perform physical exercise. This paper introduces a two-step processing scheme to estimate heart rate (HR) from wrist-type PPG signals strongly corrupted by motion artifacts. Adaptive noise cancellation, using normalized least-mean-square algorithm, is first performed to attenuate motion artifacts and reconstruct multiple PPG waveforms from different combinations of corrupted PPG waveforms and accelerometer data. An adaptive band-pass filter is then used to track the common instantaneous frequency component (i.e. HR) of the reconstructed PPG waveforms. The proposed HR estimation scheme was evaluated on two datasets, composed of records from running subjects and subjects performing different kinds of arm/forearm movements and resulted in average absolute errors of 1.40 ± 0.60 and 4.28 ± 3.16 beats-per-minute for these two datasets, respectively. Importantly, the proposed method is fully automatic, induces an average estimation delay of 0.93 s, and is therefore suitable for real-time monitoring applications.
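
The first step, normalised least-mean-square (NLMS) adaptive noise cancellation against an accelerometer reference, is compact enough to sketch in full. The filter order and step size below are assumptions, not the paper's settings.

```python
import numpy as np

def nlms_cancel(ppg, acc, order=32, mu=0.5, eps=1e-6):
    """Subtract the motion component of a PPG estimated from one accelerometer axis."""
    w = np.zeros(order)
    clean = np.zeros(len(ppg))
    for n in range(order, len(ppg)):
        x = acc[n - order:n][::-1]          # reference (motion) tap vector
        y = w @ x                           # current estimate of the motion artifact
        e = ppg[n] - y                      # error = motion-reduced PPG sample
        w += mu * e * x / (x @ x + eps)     # normalised LMS weight update
        clean[n] = e
    return clean
```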

Journal ArticleDOI
TL;DR: It is concluded that SYNC individuals featured an impaired cerebral autoregulation visible during TILT and were unable to activate cardiac baroreflex to cope with the postural challenge.
Abstract: Objective A model-based conditional transfer entropy approach was exploited to quantify the information transfer in cerebrovascular (CBV) and cardiovascular (CV) systems in subjects prone to develop postural syncope. Approach Spontaneous beat-to-beat variations of mean cerebral blood flow velocity (MCBFV) derived from a transcranial Doppler device, heart period (HP) derived from surface electrocardiogram, mean arterial pressure (MAP) and systolic arterial pressure (SAP) derived from finger plethysmographic arterial pressure device were monitored at rest in supine position (REST) and during 60° head-up tilt (TILT) in 13 individuals (age mean ± standard deviation: 28 ± 9 years, min-max range: 18-44 years, 5 males) with a history of recurrent episodes of syncope (SYNC) and in 13 age- and gender-matched controls (NonSYNC). Respiration (R) obtained from a thoracic belt was acquired as well and considered as a conditioning signal in transfer entropy assessment. Synchronous sequences of 250 consecutive MCBFV, HP, MAP, SAP and R values were utilized to estimate the information genuinely transferred from MAP to MCBFV (i.e. disambiguated from R influences) and vice versa. Analogous indexes were computed from SAP to HP and vice versa. Traditional time and frequency domain analyses were carried out as well. Main results SYNC subjects showed an increased genuine information transfer from MAP to MCBFV during TILT, while they did not exhibit the expected rise of the genuine information transfer from SAP to HP. Significance We conclude that SYNC individuals featured an impaired cerebral autoregulation visible during TILT and were unable to activate cardiac baroreflex to cope with the postural challenge. Traditional frequency domain markers based on transfer function modulus, phase and coherence functions were less powerful or less specific in typifying the CBV and CV controls of SYNC individuals. Conditional transfer entropy approach can identify the impairment of CBV and CV controls and provide specific clues to identify subjects prone to develop postural syncope.
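
As a rough illustration of the information-transfer quantity involved, the sketch below computes a bivariate transfer entropy under a linear-Gaussian approximation, where it reduces to half the log ratio of prediction-error variances. The paper's method is model-based and conditioned on respiration; that conditioning is omitted here, and the model order is an assumption.

```python
import numpy as np

def linear_transfer_entropy(x, y, p=2):
    """TE x -> y under a linear-Gaussian approximation, with model order p."""
    def resid_var(target, predictors):
        beta, *_ = np.linalg.lstsq(predictors, target, rcond=None)
        return np.var(target - predictors @ beta)
    n = len(y)
    Yp = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])  # past of y
    Xp = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])  # past of x
    tgt = y[p:]
    ones = np.ones((len(tgt), 1))
    v_y = resid_var(tgt, np.hstack([ones, Yp]))        # predict y from its own past
    v_yx = resid_var(tgt, np.hstack([ones, Yp, Xp]))   # ... adding the past of x
    return 0.5 * np.log(v_y / v_yx)
```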

Journal ArticleDOI
TL;DR: The thermal pattern and symmetry of 103 healthy pairs of feet is described and refinements to the definition of hotspots are proposed when considering feet at risk of ulceration.
Abstract: Early identification of areas of inflammation may aid prevention of diabetic foot ulcers. A new bespoke thermal camera system has been developed to thermally image feet at risk. Hotspots (areas at least 2.2 °C hotter than the contralateral site) may indicate areas of inflammation prior to any apparent visual signs. This article describes the thermal pattern and symmetry of 103 healthy pairs of feet. 68% of participants were thermally symmetric at the 33 foot sites measured. 32% of participants had at least one hotspot, but hotspots overall only accounted for 5% of the measurements made. Refinements to the definition of hotspots are proposed when considering feet at risk of ulceration.
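
The hotspot rule is simple enough to state in code: a site is flagged when it is at least 2.2 °C hotter than the contralateral site. The sketch below assumes one temperature per foot site, paired across feet; the variable names are hypothetical.

```python
import numpy as np

def find_hotspots(temps_left, temps_right, delta=2.2):
    """Indices of sites at least delta deg C hotter than the contralateral site."""
    t_l = np.asarray(temps_left, dtype=float)
    t_r = np.asarray(temps_right, dtype=float)
    left_hot = np.where(t_l - t_r >= delta)[0]     # hotter on the left foot
    right_hot = np.where(t_r - t_l >= delta)[0]    # hotter on the right foot
    return left_hot, right_hot
```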

Journal ArticleDOI
TL;DR: The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness and provided essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
Abstract: Objective: Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Approach: Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity and accuracy, as well as the F1 measure, were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. Main results: The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average F1 score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The F1 score was shown to increase with an increase in the tolerance window size, as expected. Significance: The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for evaluators who need to test their algorithms with realistic data and share reproducible results.
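
Scoring segmentation with a tolerance window, as described above, can be sketched as a greedy matching of detected onsets to reference annotations. The matching strategy below is an assumption, not the paper's exact evaluation code.

```python
def f1_with_tolerance(detected, reference, tol=0.1):
    """F1 score where a detection within tol seconds of an unmatched reference counts."""
    unmatched = list(reference)
    tp = 0
    for d in detected:
        hits = [i for i, r in enumerate(unmatched) if abs(d - r) <= tol]
        if hits:
            tp += 1
            unmatched.pop(hits[0])          # each reference annotation matched once
    fp = len(detected) - tp
    fn = len(unmatched)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0
```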

Journal ArticleDOI
TL;DR: Assessing directional cardiovascular interactions among the basic variability signals of RR, SBP and diastolic blood pressure (DBP), using an approach which allows direct comparison between bivariate and multivariate coupling measures finds that bivariate measures better quantify the overall information transferred between variables, while trivariate measures better reflect the existence and delay of directed interactions.
Abstract: The study of short-term cardiovascular interactions is classically performed through the bivariate analysis of the interactions between the beat-to-beat variability of heart period (RR interval from the ECG) and systolic blood pressure (SBP). Recent progress in the development of multivariate time series analysis methods is making it possible to explore how directed interactions between two signals change in the context of networks including other coupled signals. Exploiting these advances, the present study aims at assessing directional cardiovascular interactions among the basic variability signals of RR, SBP and diastolic blood pressure (DBP), using an approach which allows direct comparison between bivariate and multivariate coupling measures. To this end, we compute information-theoretic measures of the strength and delay of causal interactions between RR, SBP and DBP using both bivariate and trivariate (conditioned) formulations in a group of healthy subjects in a resting state and during stress conditions induced by head-up tilt (HUT) and mental arithmetic (MA). We find that bivariate measures better quantify the overall (direct + indirect) information transferred between variables, while trivariate measures better reflect the existence and delay of directed interactions. The main physiological results are: (i) the detection during supine rest of strong interactions along the pathway RR → DBP → SBP, reflecting marked Windkessel and/or Frank-Starling effects; (ii) the finding of relatively weak baroreflex effects SBP → RR at rest; (iii) the invariance of cardiovascular interactions during MA, and the emergence of stronger and faster SBP → RR interactions, as well as of weaker RR → DBP interactions, during HUT. These findings support the importance of investigating cardiovascular interactions from a network perspective, and suggest the usefulness of directed information measures to assess physiological mechanisms and track their changes across different physiological states.