scispace - formally typeset
Author

Reinhold Orglmeister

Other affiliations: Bosch, Biotronik
Bio: Reinhold Orglmeister is an academic researcher at the Technical University of Berlin. He has contributed to research in the topics of independent component analysis and blind signal separation. He has an h-index of 19 and has co-authored 119 publications receiving 2,561 citations. His previous affiliations include Bosch and Biotronik.


Papers
Journal ArticleDOI
TL;DR: The authors provide an overview of recent developments as well as of formerly proposed algorithms for detecting the QRS complex, which reflects the electrical activity within the heart during ventricular contraction.
Abstract: The QRS complex is the most striking waveform within the electrocardiogram (ECG). Since it reflects the electrical activity within the heart during the ventricular contraction, the time of its occurrence as well as its shape provide much information about the current state of the heart. Due to its characteristic shape, it serves as the basis for the automated determination of the heart rate, as an entry point for classification schemes of the cardiac cycle, and it is often also used in ECG data compression algorithms. In that sense, QRS detection provides the fundamentals for almost all automated ECG analysis algorithms. Software QRS detection has been a research topic for more than 30 years. The evolution of these algorithms clearly reflects the great advances in computer technology. Within the last decade many new approaches to QRS detection have been proposed, for example, algorithms from the fields of artificial neural networks, genetic algorithms, wavelet transforms, and filter banks, as well as heuristic methods mostly based on nonlinear transforms. The authors provide an overview of these recent developments as well as of formerly proposed algorithms.

1,307 citations
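The derivative-based family of detectors surveyed in this paper can be illustrated with a short sketch. The following is a generic Pan-Tompkins-style pipeline (differentiate, square, integrate, threshold), not any specific algorithm from the survey; the function name and parameter values are assumptions for illustration:

```python
# Illustrative QRS-detector sketch in the spirit of the derivative/threshold
# family of algorithms (not the survey authors' implementation).
def detect_qrs(ecg, fs, window_ms=150, thresh_frac=0.5):
    # 1. Differentiate to emphasize the steep slopes of the QRS complex.
    deriv = [ecg[i + 1] - ecg[i] for i in range(len(ecg) - 1)]
    # 2. Square to make all values positive and accentuate large slopes.
    sq = [d * d for d in deriv]
    # 3. Moving-window integration smooths the squared signal.
    w = max(1, int(fs * window_ms / 1000))
    feat = [sum(sq[max(0, i - w + 1):i + 1]) / w for i in range(len(sq))]
    # 4. Fixed threshold as a fraction of the maximum feature value
    #    (real detectors adapt this threshold over time).
    thr = thresh_frac * max(feat)
    # 5. Report one detection per contiguous supra-threshold region.
    beats, inside = [], False
    for i, v in enumerate(feat):
        if v > thr and not inside:
            beats.append(i)
            inside = True
        elif v <= thr:
            inside = False
    return beats
```

On real ECG data a bandpass filter stage would precede the derivative, and adaptive signal/noise thresholds would replace the fixed fraction used here.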

Proceedings ArticleDOI
09 Jun 1997
TL;DR: The FIR polynomial algebra techniques used by Lambert provide an efficient tool for solving true-phase inverse systems, allowing a simple implementation of noncausal filters.
Abstract: We present a method to separate and deconvolve sources which have been recorded in real environments. The use of noncausal FIR filters allows us to deal with nonminimum mixing systems. The learning rules can be derived from different viewpoints such as information maximization, maximum likelihood and negentropy which result in similar rules for the weight update. We transform the learning rule into the frequency domain where the convolution and deconvolution property becomes a multiplication and division operation. In particular the FIR polynomial algebra techniques as used by Lambert present an efficient tool to solve true phase inverse systems allowing a simple implementation of noncausal filters. The significance of the methods is shown by the successful separation of two voices and separating a voice that has been recorded with loud music in the background. The recognition rate of an automatic speech recognition system is increased after separating the speech signals.

174 citations
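The frequency-domain property this paper exploits can be verified directly: circular convolution in time equals element-wise multiplication of DFT spectra, so deconvolution becomes per-bin division. A minimal pure-Python sketch (illustrative only; the paper's algorithm operates on FIR filter matrices, not single channels):

```python
import cmath

def dft(x):
    # Naive discrete Fourier transform (O(n^2), fine for a demonstration).
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    # Inverse DFT with 1/n normalization.
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

def circular_convolve(x, h):
    # Time-domain circular convolution for comparison.
    n = len(x)
    return [sum(x[m] * h[(t - m) % n] for m in range(n)) for t in range(n)]
```

Given y = x ⊛ h, the inverse system is recovered per bin as X_k = Y_k / H_k, provided no bin of H is zero; this is the sense in which noncausal inverse filters become simple to implement in the frequency domain.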

Proceedings ArticleDOI
24 Sep 1997
TL;DR: A new set of learning rules for the nonlinear blind source separation problem, based on the information maximization criterion, is presented; the proposed model focuses on a parametric sigmoidal nonlinearity and higher-order polynomials.
Abstract: We present a new set of learning rules for the nonlinear blind source separation problem based on the information maximization criterion. The mixing model is divided into a linear mixing part and a nonlinear transfer channel. The proposed model focuses on a parametric sigmoidal nonlinearity and higher order polynomials. Our simulation results verify the convergence of the proposed algorithms.

103 citations
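The two-stage mixing model described above can be sketched as follows. The tanh parameterization and all names here are illustrative assumptions, not the paper's exact formulation:

```python
import math

def post_nonlinear_mix(S, A, a):
    """Linear mixing followed by a per-channel parametric sigmoid.

    S: list of time samples, each a list of source values.
    A: linear mixing matrix (list of rows).
    a: per-channel slope of the sigmoid g_a(u) = tanh(a * u).
    """
    X = []
    for s in S:
        # Linear mixing part: u = A @ s.
        linear = [sum(A[i][j] * s[j] for j in range(len(s)))
                  for i in range(len(A))]
        # Nonlinear transfer channel applied element-wise.
        X.append([math.tanh(a[i] * u) for i, u in enumerate(linear)])
    return X
```

Because tanh is invertible, separation can proceed by first inverting the transfer channel (u = atanh(x) / a) and then applying a linear unmixing matrix, which is why the model splits learning into the two parts described in the abstract.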

01 Jan 2003
TL;DR: A novel technique for the detection of QRS complexes in electrocardiographic signals that is based on a feature obtained by counting the number of zero crossings per segment, which provides a computationally efficient solution to the QRS detection problem.
Abstract: We present a novel technique for the detection of QRS complexes in electrocardiographic signals that is based on a feature obtained by counting the number of zero crossings per segment. It is well known that zero-crossing methods are robust against noise and are particularly useful for finite-precision arithmetic. The new detection method inherits this robustness and provides a high degree of detection performance even in cases of very noisy electrocardiographic signals. Furthermore, due to the simplicity of detecting and counting zero crossings, the proposed technique provides a computationally efficient solution to the QRS detection problem. The excellent performance of the algorithm is confirmed by a sensitivity of 99.70% (277 false negatives) and a positive predictivity of 99.57% (390 false positives) against the MIT-BIH arrhythmia database.

99 citations
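The core feature is simple to state. Here is an illustrative sketch (segment length and naming are assumptions) of counting zero crossings per segment:

```python
# Sketch of the zero-crossing feature: count sign changes per fixed-length
# segment. A segment dominated by one large deflection yields few crossings,
# while a high-frequency, noise-like segment yields many.
def zero_crossings_per_segment(x, seg_len):
    counts = []
    for start in range(0, len(x) - seg_len + 1, seg_len):
        seg = x[start:start + seg_len]
        # A crossing is an adjacent pair with strictly opposite signs.
        counts.append(sum(1 for a, b in zip(seg, seg[1:]) if a * b < 0))
    return counts
```

The contrast between a low count (one dominant deflection) and a high count (oscillatory signal) is what makes the feature cheap to compute and robust in finite-precision arithmetic, as the abstract notes.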

Proceedings ArticleDOI
12 May 1998
TL;DR: Methods to separate blindly mixed signals recorded in a room using an infomax approach in a feedforward neural network implemented in the frequency domain using the polynomial filter matrix algebra technique and a fast convergence speed was achieved.
Abstract: We present methods to separate blindly mixed signals recorded in a room. The learning algorithm is based on the information maximization in a single layer neural network. We focus on the implementation of the learning algorithm and on issues that arise when separating speakers in room recordings. We used an infomax approach in a feedforward neural network implemented in the frequency domain using the polynomial filter matrix algebra technique. A fast convergence speed was achieved by using a time-delayed decorrelation method as a preprocessing step. Under minimum-phase mixing conditions this preprocessing step was sufficient for the separation of signals. These methods successfully separated a recorded voice with music in the background (cocktail party problem). Finally, we discuss problems that arise in real world recordings and their potential solutions.

80 citations
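The time-delayed decorrelation preprocessing mentioned above can be sketched with an AMUSE-style procedure (an assumption chosen for illustration; the paper's exact preprocessing may differ): whiten the mixtures, then eigendecompose a symmetrized time-lagged covariance matrix.

```python
import numpy as np

def tdd_unmix(X, lag=1):
    """Time-delayed decorrelation sketch. X: (channels, samples) mixtures."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening matrix from the zero-lag covariance.
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    W_white = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = W_white @ X
    # Symmetrized covariance at the chosen time lag.
    C1 = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    C1 = (C1 + C1.T) / 2
    # Its eigenvectors rotate the whitened data onto the sources.
    _, V = np.linalg.eigh(C1)
    return V.T @ W_white  # estimated unmixing matrix
```

This works when the sources have distinct lagged autocorrelations, which is why it serves well as a fast initialization before the slower infomax stage; under minimum-phase mixing, as the abstract notes, such a decorrelation step can already suffice.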


Cited by
Christopher M. Bishop
01 Jan 2006
TL;DR: A textbook covering probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
01 Jan 2010
TL;DR: A variety of system implementations are compared in an approach to identify the technological shortcomings of the current state-of-the-art in wearable biosensor solutions and evaluate the maturity level of the top current achievements in wearable health-monitoring systems.
Abstract: The design and development of wearable biosensor systems for health monitoring has garnered considerable attention in the scientific community and the industry in recent years. Mainly motivated by increasing healthcare costs and propelled by recent technological advances in miniature biosensing devices, smart textiles, microelectronics, and wireless communications, the continuous advance of wearable sensor-based systems will potentially transform the future of healthcare by enabling proactive personal health management and ubiquitous monitoring of a patient's health condition. These systems can comprise various types of small physiological sensors, transmission modules and processing capabilities, and can thus facilitate low-cost wearable unobtrusive solutions for continuous all-day and any-place health, mental and activity status monitoring. This paper attempts to comprehensively review the current research and development on wearable biosensor systems for health monitoring. A variety of system implementations are compared in an approach to identify the technological shortcomings of the current state of the art in wearable biosensor solutions. An emphasis is given to multiparameter physiological sensing system designs, providing reliable vital signs measurements and incorporating real-time decision support for early detection of symptoms or context awareness. In order to evaluate the maturity level of the top current achievements in wearable health-monitoring systems, a set of significant features that best describe the functionality and the characteristics of the systems has been selected as the basis for a thorough study. The aim of this survey is not to criticize, but to serve as a reference for researchers and developers in this scientific area and to provide direction for future research improvements.

2,051 citations

Journal ArticleDOI
TL;DR: Independent component analysis (ICA), a generalization of PCA, was applied to face images using a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons; both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression.
Abstract: A number of current face recognition algorithms use face representations found by unsupervised statistical methods. Typically these methods find a set of basis images and represent faces as a linear combination of those images. Principal component analysis (PCA) is a popular example of such methods. The basis images found by PCA depend only on pairwise relationships between pixels in the image database. In a task such as face recognition, in which important information may be contained in the high-order relationships among pixels, it seems reasonable to expect that better basis images may be found by methods sensitive to these high-order statistics. Independent component analysis (ICA), a generalization of PCA, is one such method. We used a version of ICA derived from the principle of optimal information transfer through sigmoidal neurons. ICA was performed on face images in the FERET database under two different architectures, one which treated the images as random variables and the pixels as outcomes, and a second which treated the pixels as random variables and the images as outcomes. The first architecture found spatially local basis images for the faces. The second architecture produced a factorial face code. Both ICA representations were superior to representations based on PCA for recognizing faces across days and changes in expression. A classifier that combined the two ICA representations gave the best performance.

2,044 citations

Journal ArticleDOI
TL;DR: An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able blindly to separate mixed signals with sub- and supergaussian source distributions and is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain.
Abstract: An extension of the infomax algorithm of Bell and Sejnowski (1995) is presented that is able blindly to separate mixed signals with sub- and supergaussian source distributions. This was achieved by using a simple type of learning rule first derived by Girolami (1997) by choosing negentropy as a projection pursuit index. Parameterized probability distributions that have sub- and supergaussian regimes were used to derive a general learning rule that preserves the simple architecture proposed by Bell and Sejnowski (1995), is optimized using the natural gradient by Amari (1998), and uses the stability analysis of Cardoso and Laheld (1996) to switch between sub- and supergaussian regimes. We demonstrate that the extended infomax algorithm is able to separate 20 sources with a variety of source distributions easily. Applied to high-dimensional data from electroencephalographic recordings, it is effective at separating artifacts such as eye blinks and line noise from weaker electrical signals that arise from sources in the brain.

1,795 citations
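The regime-switching rule summarized above can be sketched as a single natural-gradient step. This is an illustrative simplification (batch update, empirical-kurtosis switching), not the paper's full algorithm:

```python
import numpy as np

def extended_infomax_step(W, X, lr=0.01):
    """One natural-gradient step of an extended-infomax-style update.

    W: (n, n) current unmixing matrix; X: (n, T) whitened mixtures.
    Returns the updated matrix and the per-component regime signs.
    """
    n, T = X.shape
    U = W @ X  # current source estimates
    # Empirical excess kurtosis decides each component's regime:
    # positive -> supergaussian, negative -> subgaussian.
    kurt = (U ** 4).mean(axis=1) / (U ** 2).mean(axis=1) ** 2 - 3.0
    K = np.diag(np.sign(kurt))
    I = np.eye(n)
    # Natural-gradient update: dW = (I - K tanh(U) U^T - U U^T) W.
    grad = (I - K @ (np.tanh(U) @ U.T) / T - (U @ U.T) / T) @ W
    return W + lr * grad, np.sign(kurt)
```

Iterating this step on whitened mixtures drives W toward an unmixing matrix; the switching diagonal K is what lets one rule handle both sub- and supergaussian sources, the key extension over the original infomax algorithm.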

Journal ArticleDOI
TL;DR: A robust single-lead electrocardiogram (ECG) delineation system based on the wavelet transform (WT), outperforming the results of other well known algorithms, especially in determining the end of T wave.
Abstract: In this paper, we developed and evaluated a robust single-lead electrocardiogram (ECG) delineation system based on the wavelet transform (WT). In a first step, QRS complexes are detected. Then, each QRS is delineated by detecting and identifying the peaks of the individual waves, as well as the complex onset and end. Finally, the determination of P and T wave peaks, onsets and ends is performed. We evaluated the algorithm on several manually annotated databases, such as MIT-BIH Arrhythmia, QT, European ST-T and CSE databases, developed for validation purposes. The QRS detector obtained a sensitivity of Se=99.66% and a positive predictivity of P+=99.56% over the first lead of the validation databases (more than 980,000 beats), while for the well-known MIT-BIH Arrhythmia Database, Se and P+ over 99.8% were attained. As for the delineation of the ECG waves, the mean and standard deviation of the differences between the automatic and manual annotations were computed. The mean error obtained with the WT approach was found not to exceed one sampling interval, while the standard deviations were around the accepted tolerances between expert physicians, outperforming the results of other well known algorithms, especially in determining the end of T wave.

1,490 citations