
Showing papers by "Klaus-Robert Müller" published in 2003


Journal ArticleDOI
28 Jul 2003
TL;DR: Overall, it was agreed that simplicity is generally best and, therefore, the use of linear methods is recommended wherever possible, while nonlinear methods can provide better results in some applications, particularly with complex and/or very large data sets.
Abstract: At the recent Second International Meeting on Brain-Computer Interfaces (BCIs) held in June 2002 in Rensselaerville, NY, a formal debate was held on the pros and cons of linear and nonlinear methods in BCI research. Specific examples applying EEG data sets to linear and nonlinear methods are given, and the pros and cons of each approach are summarized. Overall, it was agreed that simplicity is generally best and, therefore, the use of linear methods is recommended wherever possible. It was also agreed that nonlinear methods can provide better results in some applications, particularly with complex and/or very large data sets.

369 citations


Journal ArticleDOI
28 Jul 2003
TL;DR: The present variant of the Berlin BCI is designed to achieve fast classifications in normally behaving subjects and opens a new perspective for assistance of action control in time-critical behavioral contexts; the potential transfer to paralyzed patients will require further study.
Abstract: Brain-computer interfaces (BCIs) involve two coupled adapting systems: the human subject and the computer. In developing our BCI, our goal was to minimize the need for subject training and to impose the major learning load on the computer. To this end, we use behavioral paradigms that exploit single-trial EEG potentials preceding voluntary finger movements. Here, we report recent results on the basic physiology of such premovement event-related potentials (ERPs). 1) We predict the laterality of imminent left- versus right-hand finger movements in a natural keyboard typing condition and demonstrate that a single-trial classification based on the lateralized Bereitschaftspotential (BP) achieves good accuracies even at a pace as fast as 2 taps/s. Results for four out of eight subjects reached a peak information transfer rate of more than 15 b/min; the four other subjects reached 6-10 b/min. 2) We detect cerebral error potentials from single false-response trials in a forced-choice task, reflecting the subject's recognition of an erroneous response. Based on a specifically tailored classification procedure that limits the rate of false positives to, e.g., 2%, the algorithm manages to detect 85% of error trials in seven out of eight subjects. Thus, concatenating a primary single-trial BP paradigm involving finger classification feedback with such secondary error detection could serve as an efficient online confirmation/correction tool for improving bit rates in a future BCI setting. As the present variant of the Berlin BCI is designed to achieve fast classifications in normally behaving subjects, it opens a new perspective for assistance of action control in time-critical behavioral contexts; the potential transfer to paralyzed patients will require further study.
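The bit rates above combine single-trial accuracy with the tapping pace. As a reading aid, here is a minimal sketch of the standard Wolpaw information-transfer-rate formula commonly used in BCI work; the abstract does not spell out its exact computation, and the 71% accuracy below is purely an illustrative assumption that happens to reproduce a rate of about 15 b/min at 2 binary selections per second.

```python
import math

def wolpaw_itr(accuracy: float, n_classes: int, selections_per_min: float) -> float:
    """Bits per minute under the standard Wolpaw ITR definition."""
    p, n = accuracy, n_classes
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# 2 taps/s = 120 binary selections/min; ~71% accuracy (illustrative)
# already yields roughly the 15 b/min peak rate reported above.
print(wolpaw_itr(accuracy=0.71, n_classes=2, selections_per_min=120))  # ~15.8
```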

238 citations


Journal ArticleDOI
TL;DR: Employing a unified framework in terms of a nonlinearized variant of the Rayleigh coefficient, this work proposes nonlinear generalizations of Fisher's discriminant and oriented PCA using support vector kernel functions.
Abstract: We incorporate prior knowledge to construct nonlinear algorithms for invariant feature extraction and discrimination. Employing a unified framework in terms of a nonlinearized variant of the Rayleigh coefficient, we propose nonlinear generalizations of Fisher's discriminant and oriented PCA using support vector kernel functions. Extensive simulations show the utility of our approach.
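For intuition, here is a minimal sketch of the two-class kernel Fisher discriminant covered by this framework: the Rayleigh coefficient of between-class to within-class scatter is maximized in a kernel feature space, which for a rank-one between-class term reduces to a single linear solve. The Gaussian kernel, the regularization constant reg, and all function names are illustrative assumptions, not code from the paper.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Gram matrix of a Gaussian (RBF) kernel.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kfd_train(X, y, gamma=1.0, reg=1e-3):
    """Two-class kernel Fisher discriminant: maximize the Rayleigh
    coefficient (a' M a) / (a' N a) over expansion coefficients a."""
    K = rbf_kernel(X, X, gamma)
    N = reg * np.eye(len(X))          # regularized within-class scatter
    means = []
    for c in (0, 1):
        Kc = K[:, y == c]             # kernel columns of class c
        lc = Kc.shape[1]
        means.append(Kc.mean(axis=1))
        H = np.eye(lc) - np.full((lc, lc), 1.0 / lc)   # centering matrix
        N += Kc @ H @ Kc.T
    # With rank-one between-class scatter M, the optimum is N^{-1}(m1 - m0).
    return np.linalg.solve(N, means[1] - means[0])

# Projecting a new point x: f(x) = sum_i alpha_i k(X[i], x).
```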

213 citations


Journal ArticleDOI
TL;DR: In this paper, a kernel-based algorithm for nonlinear blind source separation (BSS) that exploits temporal information is proposed. The algorithm first maps the data (implicitly) to a high, possibly infinite-dimensional kernel feature space.
Abstract: We propose kTDSEP, a kernel-based algorithm for nonlinear blind source separation (BSS). It combines complementary research fields: kernel feature spaces and BSS using temporal information. This yields an efficient algorithm for nonlinear BSS with invertible nonlinearity. Key assumptions are that the kernel feature space is chosen rich enough to approximate the nonlinearity and that the signals of interest contain temporal information. Both assumptions are fulfilled for a wide set of real-world applications. The algorithm works as follows: first, the data are (implicitly) mapped to a high (possibly infinite)-dimensional kernel feature space. In practice, however, the data form a smaller submanifold in feature space, of dimension even smaller than the number of training data points, a fact that has already been used by, for example, reduced set techniques for support vector machines. We propose to adapt to this effective dimension as a preprocessing step and to construct an orthonormal basis of this submanifold. The latter dimension-reduction step is essential for making the subsequent application of BSS methods computationally and numerically tractable. In the reduced space, we use a BSS algorithm that is based on second-order temporal decorrelation. Finally, we propose a selection procedure to obtain the original sources from the extracted nonlinear components automatically. Experiments demonstrate the excellent performance and efficiency of our kTDSEP algorithm for several problems of nonlinear BSS and for more than two sources.
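The pipeline in the abstract translates almost step by step into code. Below is a hedged sketch, not the authors' implementation: kernel PCA stands in for the orthonormal-basis construction, and a single eigendecomposition of an averaged, symmetrized time-lagged covariance stands in for TDSEP's joint diagonalization; the kernel width, dimension d, and lags are illustrative.

```python
import numpy as np

def ktdsep_sketch(x, gamma=1.0, d=20, lags=(1, 2, 5, 10)):
    """kTDSEP-style pipeline sketch: implicit kernel map -> orthonormal
    basis of the effective submanifold -> temporal decorrelation."""
    n, T = x.shape                                    # observed mixtures
    # 1) Implicit map via the doubly centered Gram matrix (Gaussian kernel).
    D2 = ((x.T[:, None, :] - x.T[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * D2)
    K = K - K.mean(axis=0, keepdims=True)
    K = K - K.mean(axis=1, keepdims=True)
    # 2) Orthonormal basis of the submanifold: top-d kernel PCA coordinates.
    w, V = np.linalg.eigh(K)
    w, V = w[::-1][:d], V[:, ::-1][:, :d]
    Z = (np.sqrt(np.maximum(w, 1e-12)) * V).T         # shape (d, T)
    # 3) Whiten, then diagonalize symmetrized time-lagged covariances.
    Zw = Z / Z.std(axis=1, keepdims=True)
    C = sum((Zw[:, :-t] @ Zw[:, t:].T + Zw[:, t:] @ Zw[:, :-t].T)
            / (2.0 * (T - t)) for t in lags)
    _, R = np.linalg.eigh(C)
    # 4) d candidate components; kTDSEP's automatic selection of the true
    #    sources among them is omitted in this sketch.
    return R.T @ Zw
```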

156 citations


Journal ArticleDOI
28 Jul 2003
TL;DR: Three datasets were used to conduct an open competition for evaluating the performance of various machine-learning algorithms used in brain-computer interfaces, for tasks that included detecting explicit left/right (L/R) button presses, predicting imagined L/R button presses, and vertical cursor control.
Abstract: We present three datasets that were used to conduct an open competition for evaluating the performance of various machine-learning algorithms used in brain-computer interfaces. The datasets were collected for tasks that included: 1) detecting explicit left/right (L/R) button press; 2) predicting imagined L/R button press; and 3) vertical cursor control. A total of ten entries were submitted to the competition, with winning results reported for two of the three datasets.

126 citations


Proceedings Article
09 Dec 2003
TL;DR: The present paper studies the implications of using more classes, e.g., left hand vs. right hand vs. foot, for operating a BCI and contributes two extensions of the common spatial pattern (CSP) algorithm, one of them based on simultaneous diagonalization, together with controlled EEG experiments that support the theoretical findings and show markedly improved ITRs.
Abstract: Brain-computer interfaces (BCIs) are an emerging technology driven by the motivation to develop an effective communication interface translating human intentions into a control signal for devices like computers or neuroprostheses. If this can be done while bypassing the usual human output pathways, like peripheral nerves and muscles, it can ultimately become a valuable tool for paralyzed patients. Most activity in BCI research is devoted to finding suitable features and algorithms to increase information transfer rates (ITRs). The present paper studies the implications of using more classes, e.g., left hand vs. right hand vs. foot, for operating a BCI. We contribute (1) a theoretical study showing, under mild assumptions, that it is of little practical use to employ more than three or four classes; (2) two extensions of the common spatial pattern (CSP) algorithm, one of them based on simultaneous diagonalization; and (3) controlled EEG experiments that support our theoretical findings and show markedly improved ITRs.
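As background for the CSP extensions mentioned above, here is a minimal sketch of the standard two-class CSP building block (a generalized eigendecomposition of class-averaged covariances); the paper's multi-class variants, including the one based on simultaneous diagonalization of all class covariances, build on this and are not reproduced here. Function names and the trace normalization are conventional choices, not the paper's code.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=6):
    """Two-class CSP. trials_*: lists of (channels, samples) arrays.
    Returns spatial filters whose outputs have maximally different
    variance between the two classes."""
    def avg_cov(trials):
        return np.mean([X @ X.T / np.trace(X @ X.T) for X in trials], axis=0)
    Sa, Sb = avg_cov(trials_a), avg_cov(trials_b)
    # Generalized eigenproblem Sa w = lambda (Sa + Sb) w; the extreme
    # eigenvalues give the most discriminative filters.
    vals, W = eigh(Sa, Sa + Sb)
    idx = np.argsort(vals)
    pick = np.r_[idx[:n_filters // 2], idx[-(n_filters // 2):]]
    return W[:, pick]

def csp_features(X, W):
    # Log variance (band power) of the filtered trial, fed to a classifier.
    Y = W.T @ X
    return np.log(Y.var(axis=1))
```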

117 citations


Journal ArticleDOI
TL;DR: Two methods that reduce the post-nonlinear blind source separation problem (PNL-BSS) to a linear BSS problem are proposed: the first is based on the concept of maximal correlation, and the second on a Gaussianizing transformation, motivated by the fact that linearly mixed signals before the nonlinear transformation are approximately Gaussian distributed.
Abstract: We propose two methods that reduce the post-nonlinear blind source separation problem (PNL-BSS) to a linear BSS problem. The first method is based on the concept of maximal correlation: we apply the alternating conditional expectation (ACE) algorithm, a powerful technique from non-parametric statistics, to approximately invert the componentwise nonlinear functions. The second method is a Gaussianizing transformation, which is motivated by the fact that linearly mixed signals before the nonlinear transformation are approximately Gaussian distributed. This heuristic, but simple and efficient, procedure works as well as the ACE method. Using the framework provided by ACE, convergence can be proven; the optimal transformations obtained by ACE coincide with the sought-after inverse functions of the nonlinearities. After equalizing the nonlinearities, temporal decorrelation separation (TDSEP) allows us to recover the source signals. Numerical simulations testing "ACE-TD" and "Gauss-TD" on realistic examples are performed with excellent results.
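The Gauss-TD first stage admits a particularly compact sketch: each observed channel is pushed through its empirical CDF and then through the standard normal quantile function, after which a linear temporal-decorrelation method such as TDSEP is applied. This is a hedged reconstruction from the abstract, not the authors' code.

```python
import numpy as np
from scipy.stats import norm

def gaussianize(x):
    """Componentwise Gaussianizing transform (first stage of 'Gauss-TD').
    x: (channels, samples). Under the post-nonlinear model this
    approximately inverts each sensor nonlinearity, since the underlying
    linear mixtures are close to Gaussian."""
    T = x.shape[1]
    ranks = x.argsort(axis=1).argsort(axis=1)   # per-channel ranks 0..T-1
    u = (ranks + 0.5) / T                       # empirical CDF values in (0, 1)
    return norm.ppf(u)                          # standard normal quantiles

# A linear BSS step (e.g., second-order temporal decorrelation) is then
# run on gaussianize(x) to recover the sources.
```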

75 citations


Journal ArticleDOI
TL;DR: In this paper, a novel method is presented to reveal the significance and contribution of source types and characteristic formation times for individual aerosol constituents: backward trajectory analyses are used to assign time-resolved information about the residence time of air masses over different types of ground surface.

33 citations


Journal ArticleDOI
TL;DR: The range of applicability of SIC is extended, and it is shown that even if the reproducing kernels centered on the training sample points do not span the whole space H, SIC is an unbiased estimator of an essential part of the generalization error.
Abstract: A central problem in learning is the selection of an appropriate model. This is typically done by estimating the unknown generalization errors of a set of candidate models and then choosing the model with the minimal generalization error estimate. In this article, we discuss the problem of model selection and generalization error estimation in the context of kernel regression models, e.g., kernel ridge regression, kernel subset regression or Gaussian process regression. Previously, a non-asymptotic generalization error estimator called the subspace information criterion (SIC) was proposed that can be successfully applied to finite-dimensional subspace models. SIC is an unbiased estimator of the generalization error for the finite sample case under the conditions that the learning target function belongs to a specified reproducing kernel Hilbert space (RKHS) H and that the reproducing kernels centered on the training sample points span the whole space H. These conditions hold only if dim H < l, where l < ∞ is the number of training examples. Therefore, SIC could previously be applied only to finite-dimensional RKHSs. In this paper, we extend the range of applicability of SIC and show that even if the reproducing kernels centered on the training sample points do not span the whole space H, SIC is an unbiased estimator of an essential part of the generalization error. Our extension allows the use of any RKHS, including infinite-dimensional ones, i.e., the richer function classes commonly used in Gaussian processes, support vector machines or boosting. We further show that when the kernel matrix is invertible, SIC can be expressed in a much simpler form, making its computation highly efficient. In computer simulations on ridge parameter selection with real and artificial data sets, SIC compares favorably with other standard model selection techniques such as leave-one-out cross-validation or an empirical Bayesian method.
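The abstract does not reproduce the exact SIC expression, so it is not sketched here; for a concrete feel of the ridge-parameter-selection experiment, the following sketch implements the leave-one-out cross-validation baseline that SIC is compared against, using the closed form available for kernel ridge regression (a known identity for linear smoothers, not taken from this paper).

```python
import numpy as np

def loocv_error(K, y, lam):
    """Closed-form leave-one-out squared error for kernel ridge
    regression, f_hat = H y with hat matrix H = K (K + lam I)^{-1}."""
    H = K @ np.linalg.inv(K + lam * np.eye(len(y)))
    residuals = (y - H @ y) / (1.0 - np.diag(H))
    return np.mean(residuals ** 2)

# Ridge parameter selection: pick the lambda minimizing the LOO estimate.
# lams = np.logspace(-6, 2, 30)
# best = min(lams, key=lambda l: loocv_error(K, y, l))
```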

19 citations


Journal ArticleDOI
TL;DR: In this article, secondary and primary biogenic hydrocarbons were determined in airborne particles collected on quartz fibre filters in summer 2001 in a Norway spruce forest (Fichtelgebirge, Germany) at two different heights.
Abstract: Atmospheric oxidation of biogenic hydrocarbons such as monoterpenes is believed to be a globally significant source of aerosols. Secondary and primary biogenic hydrocarbons were determined in airborne particles. The particles were collected on quartz fibre filters in summer 2001 in a Norway spruce forest (Fichtelgebirge, Germany) at two different heights. The filters were Soxhlet extracted; the extract was concentrated, separated into five fractions of different polarity by flash chromatography, and then measured with GC-MS. The first, second and third fractions were measured without derivatisation, the fourth fraction was silylated with BSTFA, and the fifth fraction was methylated with BF3-methanol. Many single compounds were detected, and the highest concentrations were found for the polar components, especially the dicarboxylic acids. Further quantified compounds include alkanes, ketones, aldehydes and carboxylic acids. Of special interest were terpene oxidation products such as pinonaldehyde, norpinic acid, pinic acid and pinonic acid. Typical concentrations of single compounds were in the sub-ng m⁻³ range. Concentrations of terpene oxidation products are low in comparison to the terpene concentrations. More primary than secondary biogenic organic compounds were identified.

10 citations


Journal ArticleDOI
TL;DR: Results suggest that mass-related properties at two sites in the same city are not necessarily more similar than at an urban and a rural site outside the city, and stress the limited horizontal homogeneity of urban atmospheric aerosol.
Abstract: We studied the mass-related aerosol properties simultaneously at two sites at the urban roof-top level in the same city. No systematic influence of the wind vector on the difference in the aerosol concentrations between the two locations could be found. These results are compared with results from a second, similar experiment over a larger distance including one urban and one rural site. Surprisingly, we could not detect a tendency which would indicate that sampling air at distances on the order of 1 km would be less affected by heterogeneity than sampling at distances on the order of 10 km apart. On the contrary, the results suggest that mass-related properties at two sites in the same city are not necessarily more similar than at an urban and a rural site outside the city. These results stress the limited horizontal homogeneity of urban atmospheric aerosol. In conclusion, it is suggested that single-site measurements of mass-related aerosol properties should be considered representative for an area smaller than 1 km² in size.

Book
01 Jan 2003
TL;DR: BSS methods, such as those based on independent component analysis (ICA) and temporal decorrelation (TD), have been shown to be an efficient tool for artifact identification and extraction from electroencephalographic and magnetoencephalographic recordings, as well as for the analysis of some evoked and spontaneous brain activity.
Abstract: The advent of new brain mapping techniques, together with better and faster data storage capabilities, is generating a considerable amount of high-dimensional data. Suitable projection or feature extraction mechanisms are required, able to reveal simple structures that may be easier to analyse than the complex brain activity that is often available to the physician or brain researcher. In data analysis we often face the following dilemma: if we impose too strong a model on the data, we might only get the structure that we are imposing; if our model is too weak, we might get no useful result at all. As there is no systematic answer to this fundamental problem for all situations, we discuss the possibilities and limits of the new blind source separation (BSS) technique in the context of specific biomedical applications. Here a fair amount of physiological and physical knowledge is available, and we can use this prior information to bias our solution, while of course carefully avoiding predetermining it. BSS methods, such as those based on independent component analysis (ICA) and temporal decorrelation (TD), have been shown to be an efficient tool for artifact identification and extraction from electroencephalographic (EEG) and magnetoencephalographic (MEG) recordings, as well as for the analysis of some evoked and spontaneous brain activity. This chapter reviews our recent results on the application of blind and not-so-blind source separation techniques to the analysis of evoked brain signals, elicited by sensory stimuli, and to the analysis of single trials of near-DC brain fields.
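The artifact-extraction use of BSS described above follows a generic recipe: unmix the multichannel recording into components, zero the components identified as artifacts (e.g., eye blinks or cardiac activity), and project back into sensor space. Below is a minimal sketch using scikit-learn's FastICA as a readily available stand-in; the chapter itself also emphasizes temporal-decorrelation methods, and identifying which components are artifactual is left to prior knowledge or inspection.

```python
import numpy as np
from sklearn.decomposition import FastICA

def remove_artifact_components(X, drop, n_components=None, seed=0):
    """X: (samples, channels) recording; drop: indices of components
    judged artifactual. Returns the cleaned sensor-space recording."""
    ica = FastICA(n_components=n_components, random_state=seed)
    S = ica.fit_transform(X)         # estimated component activations
    S[:, list(drop)] = 0.0           # discard artifact components
    return ica.inverse_transform(S)  # back-project to sensor space
```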