scispace - formally typeset
Author

Lionel Tarassenko

Bio: Lionel Tarassenko is an academic researcher at the University of Oxford. His research focuses on vital signs and artificial neural networks. He has an h-index of 67, has co-authored 395 publications, and has received 16,265 citations. His previous affiliations include the National Institutes of Health and the National Institute for Health Research.


Papers
Journal ArticleDOI
TL;DR: This review aims to provide an updated and structured investigation of novelty detection research papers that have appeared in the machine learning literature during the last decade.

1,425 citations

Journal ArticleDOI
TL;DR: A dynamical model based on three coupled ordinary differential equations is introduced which is capable of generating realistic synthetic electrocardiogram (ECG) signals and may be employed to assess biomedical signal processing techniques which are used to compute clinical statistics from the ECG.
Abstract: A dynamical model based on three coupled ordinary differential equations is introduced which is capable of generating realistic synthetic electrocardiogram (ECG) signals. The operator can specify the mean and standard deviation of the heart rate, the morphology of the PQRST cycle, and the power spectrum of the RR tachogram. In particular, both respiratory sinus arrhythmia at the high frequencies (HFs) and Mayer waves at the low frequencies (LFs) together with the LF/HF ratio are incorporated in the model. Much of the beat-to-beat variation in morphology and timing of the human ECG, including QT dispersion and R-peak amplitude modulation are shown to result. This model may be employed to assess biomedical signal processing techniques which are used to compute clinical statistics from the ECG.
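The structure of the three coupled ODEs can be sketched as below. This is a minimal illustration, not the paper's full implementation: the heart rate is held fixed (the paper drives the angular velocity from a generated RR tachogram with the specified LF/HF spectrum), integration is plain forward Euler, and the PQRST event angles, amplitudes and widths are illustrative values chosen to produce a plausible morphology.

```python
import numpy as np

# PQRST event angles (theta_i), amplitudes (a_i) and widths (b_i).
# Illustrative values only, chosen to give a plausible PQRST shape.
THETA = np.array([-np.pi / 3, -np.pi / 12, 0.0, np.pi / 12, np.pi / 2])
A = np.array([1.2, -5.0, 30.0, -7.5, 0.75])
B = np.array([0.25, 0.1, 0.1, 0.1, 0.4])

def derivs(state, omega, z0=0.0):
    """Right-hand side of the three coupled ODEs (x, y, z)."""
    x, y, z = state
    alpha = 1.0 - np.sqrt(x * x + y * y)          # pulls (x, y) onto the unit circle
    theta = np.arctan2(y, x)                      # phase within the cardiac cycle
    dtheta = np.remainder(theta - THETA + np.pi, 2 * np.pi) - np.pi
    dx = alpha * x - omega * y
    dy = alpha * y + omega * x
    # Sum of Gaussian "kicks" at the PQRST angles, plus a decay toward baseline z0.
    dz = -np.sum(A * dtheta * np.exp(-dtheta**2 / (2 * B**2))) - (z - z0)
    return np.array([dx, dy, dz])

def synth_ecg(duration_s=10.0, hr_bpm=60.0, fs=256.0):
    """Generate a synthetic ECG trace (the z coordinate) by Euler integration."""
    omega = 2 * np.pi * hr_bpm / 60.0             # fixed angular velocity = fixed HR
    dt = 1.0 / fs
    n = int(duration_s * fs)
    state = np.array([1.0, 0.0, 0.04])
    ecg = np.empty(n)
    for i in range(n):
        state = state + dt * derivs(state, omega)
        ecg[i] = state[2]
    return ecg

ecg = synth_ecg()
```

Each circuit of the (x, y) trajectory around the unit circle is one heartbeat; the z component traces the PQRST complex as the phase passes each event angle.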

1,103 citations

Journal ArticleDOI
TL;DR: The authors' evidence-based centile charts for children from birth to 18 years should help clinicians to update clinical and resuscitation guidelines, and show a decline in respiratory rate from birth to early adolescence.

947 citations

Journal ArticleDOI
TL;DR: This work has devised a novel method of cancelling out aliased frequency components caused by artificial light flicker, using auto-regressive (AR) modelling and pole cancellation, and has been able to construct accurate maps of the spatial distribution of heart rate and respiratory rate information from the coefficients of the AR model.
Abstract: Remote sensing of the reflectance photoplethysmogram using a video camera typically positioned 1 m away from the patient's face is a promising method for monitoring the vital signs of patients without attaching any electrodes or sensors to them. Most of the papers in the literature on non-contact vital sign monitoring report results on human volunteers in controlled environments. We have been able to obtain estimates of heart rate and respiratory rate and preliminary results on changes in oxygen saturation from double-monitored patients undergoing haemodialysis in the Oxford Kidney Unit. To achieve this, we have devised a novel method of cancelling out aliased frequency components caused by artificial light flicker, using auto-regressive (AR) modelling and pole cancellation. Secondly, we have been able to construct accurate maps of the spatial distribution of heart rate and respiratory rate information from the coefficients of the AR model. In stable sections with minimal patient motion, the mean absolute error between the camera-derived estimate of heart rate and the reference value from a pulse oximeter is similar to the mean absolute error between two pulse oximeter measurements at different sites (finger and earlobe). The activities of daily living affect the respiratory rate, but the camera-derived estimates of this parameter are at least as accurate as those derived from a thoracic expansion sensor (chest belt). During a period of obstructive sleep apnoea, we tracked changes in oxygen saturation using the ratio of normalized reflectance changes in two colour channels (red and blue), but this required calibration against the reference data from a pulse oximeter.
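The AR pole-cancellation idea can be sketched as follows. This is a simplified illustration with made-up numbers (a hypothetical 25 Hz frame rate, a 1.2 Hz pulse and a 6 Hz flicker component standing in for the real aliased mains flicker), not the paper's pipeline: fit an AR model by Yule-Walker, locate its poles, discard the pair attributable to flicker, and read the heart rate off the strongest surviving pole.

```python
import numpy as np

def ar_coeffs(x, order):
    """Yule-Walker AR fit: solve R a = r using the biased autocorrelation."""
    x = x - x.mean()
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) for k in range(order + 1)]) / n
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])
    return np.concatenate(([1.0], -a))       # polynomial 1 - a1 z^-1 - ... - ap z^-p

fs = 25.0                                    # hypothetical camera frame rate, Hz
t = np.arange(0, 30, 1 / fs)
hr_hz, flicker_hz = 1.2, 6.0                 # 72 bpm pulse + flicker (illustrative)
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * hr_hz * t)
     + 0.8 * np.sin(2 * np.pi * flicker_hz * t)
     + 0.1 * rng.standard_normal(t.size))

poles = np.roots(ar_coeffs(x, order=8))
freqs = np.abs(np.angle(poles)) * fs / (2 * np.pi)   # pole angle -> frequency in Hz
# "Pole cancellation": discard poles within 0.5 Hz of the known flicker frequency,
# then read the heart rate off the strongest (largest-magnitude) remaining pole.
keep = np.abs(freqs - flicker_hz) > 0.5
hr_est = freqs[keep][np.argmax(np.abs(poles[keep]))]
```

Narrowband components sit as conjugate pole pairs close to the unit circle, so removing the flicker pair leaves the cardiac pair as the dominant pole, from whose angle the heart rate follows.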

381 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of the accurate segmentation of the first and second heart sound within noisy real-world PCG recordings using an HSMM, extended with the use of logistic regression for emission probability estimation, and implements a modified Viterbi algorithm for decoding the most likely sequence of states.
Abstract: The identification of the exact positions of the first and second heart sounds within a phonocardiogram (PCG), or heart sound segmentation, is an essential step in the automatic analysis of heart sound recordings, allowing for the classification of pathological events. While threshold-based segmentation methods have shown modest success, probabilistic models, such as hidden Markov models, have recently been shown to surpass the capabilities of previous methods. Segmentation performance is further improved when a priori information about the expected duration of the states is incorporated into the model, such as in a hidden semi-Markov model (HSMM). This paper addresses the problem of the accurate segmentation of the first and second heart sound within noisy real-world PCG recordings using an HSMM, extended with the use of logistic regression for emission probability estimation. In addition, we implement a modified Viterbi algorithm for decoding the most likely sequence of states, and evaluate this method on a large dataset of 10 172 s of PCG recorded from 112 patients (including 12 181 first and 11 627 second heart sounds). The proposed method achieved an average F1 score of 95.63 ± 0.85%, while the current state of the art achieved 86.28 ± 1.55% when evaluated on unseen test recordings. The greater discrimination between states afforded by logistic regression, as opposed to the previous Gaussian-distribution-based emission probability estimation, together with the use of an extended Viterbi algorithm, allows this method to significantly outperform the current state-of-the-art method (two-sided paired t-test).
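The decoding step can be illustrated with a generic duration-explicit Viterbi for an HSMM. This is a simplified sketch, not the authors' algorithm: emissions here are toy sigmoid (logistic) scores of a 1-D feature, the duration prior is uniform, and all numbers are hypothetical.

```python
import numpy as np

def hsmm_viterbi(obs_loglik, log_trans, log_dur, max_dur):
    """Duration-explicit Viterbi: a state emits for d steps, then must switch.
    obs_loglik[t, j] = log P(o_t | state j); log_dur[j, d-1] = log P(duration d | j)."""
    T, S = obs_loglik.shape
    csum = np.vstack([np.zeros(S), np.cumsum(obs_loglik, axis=0)])  # prefix sums
    delta = np.full((T + 1, S), -np.inf)
    back = np.zeros((T + 1, S, 2), dtype=int)        # (previous state, duration)
    delta[0] = 0.0
    for t in range(1, T + 1):
        for j in range(S):
            for d in range(1, min(max_dur, t) + 1):
                emit = csum[t, j] - csum[t - d, j]    # sum of emissions in segment
                if t - d == 0:
                    score, prev = log_dur[j, d - 1] + emit, j   # initial segment
                else:
                    cand = delta[t - d] + log_trans[:, j]
                    prev = int(np.argmax(cand))
                    score = cand[prev] + log_dur[j, d - 1] + emit
                if score > delta[t, j]:
                    delta[t, j] = score
                    back[t, j] = (prev, d)
    # Backtrack segment by segment.
    path, j, t = [], int(np.argmax(delta[T])), T
    while t > 0:
        prev, d = back[t, j]
        path = [j] * d + path
        t -= d
        j = prev
    return path

# Toy usage: two states that must alternate; logistic emission scores of a feature f.
f = np.array([-1.0] * 10 + [1.0] * 10)
obs_loglik = np.column_stack([
    np.log(1 / (1 + np.exp(2 * f))),     # state 0 favoured where f < 0
    np.log(1 / (1 + np.exp(-2 * f))),    # state 1 favoured where f > 0
])
log_trans = np.array([[-np.inf, 0.0], [0.0, -np.inf]])  # no self-transitions in an HSMM
log_dur = np.full((2, 20), np.log(1 / 20))              # uniform duration prior
path = hsmm_viterbi(obs_loglik, log_trans, log_dur, max_dur=20)
```

The key difference from standard Viterbi is that each update chooses a segment duration d as well as a predecessor state, which is how the duration prior enters the decoding.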

366 citations


Cited by
01 Jan 2016
Using Multivariate Statistics.

14,604 citations

Christopher M. Bishop1
01 Jan 2006
TL;DR: This book covers probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, latent variables, sequential data, and the combining of models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: This survey tries to provide a structured and comprehensive overview of the research on anomaly detection by grouping existing techniques into different categories based on the underlying approach adopted by each technique.
Abstract: Anomaly detection is an important problem that has been researched within diverse research areas and application domains. Many anomaly detection techniques have been specifically developed for certain application domains, while others are more generic. This survey tries to provide a structured and comprehensive overview of the research on anomaly detection. We have grouped existing techniques into different categories based on the underlying approach adopted by each technique. For each category we have identified key assumptions, which are used by the techniques to differentiate between normal and anomalous behavior. When applying a given technique to a particular domain, these assumptions can be used as guidelines to assess the effectiveness of the technique in that domain. For each category, we provide a basic anomaly detection technique, and then show how the different existing techniques in that category are variants of the basic technique. This template provides an easier and more succinct understanding of the techniques belonging to each category. Further, for each category, we identify the advantages and disadvantages of the techniques in that category. We also provide a discussion on the computational complexity of the techniques since it is an important issue in real application domains. We hope that this survey will provide a better understanding of the different directions in which research has been done on this topic, and how techniques developed in one area can be applied in domains for which they were not intended to begin with.
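As a concrete instance of the "basic technique" idea the survey uses for each category, here is a minimal distance-based (k-nearest-neighbour) anomaly score on illustrative data; many distance-based detectors in that category are variants of this scheme.

```python
import numpy as np

def knn_anomaly_scores(train, test, k=3):
    """Anomaly score = distance to the k-th nearest training ("normal") point.
    Large scores mark points that lie far from all observed normal behaviour."""
    d = np.linalg.norm(test[:, None, :] - train[None, :, :], axis=2)
    return np.sort(d, axis=1)[:, k - 1]

rng = np.random.default_rng(1)
normal = rng.normal(0.0, 1.0, size=(200, 2))       # cluster of "normal" behaviour
queries = np.array([[0.1, -0.2],                   # inlier, near the cluster
                    [6.0, 6.0]])                   # obvious outlier
scores = knn_anomaly_scores(normal, queries)
```

Turning a score into a label then needs only a threshold, and the choice of k and threshold is exactly the kind of assumption the survey suggests checking against the target domain.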

9,627 citations

Journal ArticleDOI
TL;DR: With adequate recognition and effective engagement of all issues, BCI systems could eventually provide an important new communication and control option for those with motor disabilities and might also give those without disabilities a supplementary control channel or a control channel useful in special circumstances.

6,803 citations

Book ChapterDOI
16 Nov 1992
TL;DR: Optical coherence tomography (OCT) has developed rapidly since its first realisation in medicine and is currently an emerging technology in the diagnosis of skin disease. OCT is an interferometric technique that detects reflected and backscattered light from tissue.
Abstract: Optical coherence tomography (OCT) has developed rapidly since its first realisation in medicine and is currently an emerging technology in the diagnosis of skin disease. OCT is an interferometric technique that detects reflected and backscattered light from tissue and is often described as the optical analogue to ultrasound. The inherent safety of the technology allows for in vivo use of OCT in patients. The main strength of OCT is the depth resolution. In dermatology, most OCT research has centred on non-melanoma skin cancer (NMSC) and non-invasive monitoring of morphological changes in a number of skin diseases based on pattern recognition, and studies have found good agreement between OCT images and histopathological architecture. OCT has shown high accuracy in distinguishing lesions from normal skin, which is of great importance in identifying tumour borders or residual neoplastic tissue after therapy. The OCT images provide an advantageous combination of resolution and penetration depth, but specific studies of diagnostic sensitivity and specificity in dermatology are sparse. In order to improve OCT image quality and expand the potential of OCT, technical developments are necessary. It is suggested that the technology will be of particular interest to the routine follow-up of patients undergoing non-invasive therapy of malignant or premalignant keratinocyte tumours. It is speculated that continued technological development can propel the method to a greater level of dermatological use.

6,095 citations