
Showing papers by "Charles W. Anderson" published in 2003


Journal ArticleDOI
28 Jul 2003
TL;DR: The results of a linear classifier (linear discriminant analysis) and two nonlinear classifiers applied to the classification of spontaneous EEG during five mental tasks are reported, showing that nonlinear classifiers produce only slightly better classification results.
Abstract: The reliable operation of brain-computer interfaces (BCIs) based on spontaneous electroencephalogram (EEG) signals requires accurate classification of multichannel EEG. The design of EEG representations and classifiers for BCI are open research questions whose difficulty stems from the need to extract complex spatial and temporal patterns from noisy multidimensional time series obtained from EEG measurements. The high-dimensional and noisy nature of EEG may limit the advantage of nonlinear classification methods over linear ones. This paper reports the results of a linear classifier (linear discriminant analysis) and two nonlinear classifiers (neural networks and support vector machines) applied to the classification of spontaneous EEG during five mental tasks, showing that nonlinear classifiers produce only slightly better classification results. An approach to feature selection based on genetic algorithms is also presented, with preliminary results of its application to EEG during finger movement.

686 citations
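
A minimal, hypothetical sketch of the comparison described above, using scikit-learn's LDA, a small neural network (MLP), and an RBF-kernel SVM. The random feature matrix merely stands in for the paper's EEG representations; all sizes and hyperparameters are illustrative assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: linear (LDA) vs. nonlinear (MLP, SVM) classifiers
# on EEG-like feature vectors. Random data stands in for real EEG features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_features, n_tasks = 500, 64, 5     # e.g. channels x spectral bands
X = rng.normal(size=(n_trials, n_features))    # stand-in EEG feature vectors
y = rng.integers(0, n_tasks, size=n_trials)    # one label per mental task

classifiers = {
    "LDA (linear)": LinearDiscriminantAnalysis(),
    "MLP (nonlinear)": MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000,
                                     random_state=0),
    "SVM (nonlinear)": SVC(kernel="rbf"),
}
for name, clf in classifiers.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
    print(f"{name}: {scores.mean():.2f} +/- {scores.std():.2f}")
```

On real EEG features the same loop would expose the paper's finding: the nonlinear models gain only a small margin over LDA.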


Journal ArticleDOI
28 Jul 2003
TL;DR: Overall, it was agreed that simplicity is generally best and, therefore, the use of linear methods is recommended wherever possible; nonlinear methods can provide better results in some applications, particularly with complex and/or very large data sets.
Abstract: At the recent Second International Meeting on Brain-Computer Interfaces (BCIs), held in June 2002 in Rensselaerville, NY, a formal debate was held on the pros and cons of linear and nonlinear methods in BCI research. Specific examples applying linear and nonlinear methods to EEG data sets are given, and an overview of the pros and cons of each approach is summarized. Overall, it was agreed that simplicity is generally best and, therefore, the use of linear methods is recommended wherever possible. It was also agreed that nonlinear methods can provide better results in some applications, particularly with complex and/or very large data sets.

369 citations


Proceedings ArticleDOI
16 Jun 2003
TL;DR: Six-channel EEG is recorded from a subject performing two mental tasks, and the signals are transformed via the Karhunen-Loève or maximum noise fraction transformations and classified by quadratic discriminant analysis.
Abstract: Electroencephalogram (EEG) signals recorded from a person's scalp have been used to control binary cursor movements. Multiple-choice paradigms will require more sophisticated protocols involving multiple mental tasks and signal representations that capture discriminatory characteristics of the EEG signals. In this study, six-channel EEG is recorded from a subject performing two mental tasks. The signals are transformed via the Karhunen-Loève or maximum noise fraction transformations and classified by quadratic discriminant analysis. In addition, classification accuracy is tested for all subsets of the six EEG channels. Best results are approximately 90% correct when training and testing data are recorded on the same day and 75% correct when they are recorded on different days.

14 citations
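
A hypothetical sketch of the pipeline above, with PCA playing the role of the Karhunen-Loève transform (scikit-learn offers no maximum noise fraction transform, so that variant is omitted), followed by quadratic discriminant analysis and an exhaustive search over channel subsets. Random data stands in for the recorded EEG windows; all sizes are illustrative assumptions.

```python
# Hypothetical sketch: KL transform (via PCA) + QDA, scored on every
# subset of six EEG channels. Random data stands in for real recordings.
from itertools import combinations
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
n_windows, n_channels, n_lags = 400, 6, 10
X_all = rng.normal(size=(n_windows, n_channels, n_lags))  # windowed EEG
y = rng.integers(0, 2, size=n_windows)                    # two mental tasks

best_score, best_subset = 0.0, None
for k in range(1, n_channels + 1):
    for subset in combinations(range(n_channels), k):     # all 63 subsets
        X = X_all[:, list(subset), :].reshape(n_windows, -1)
        clf = make_pipeline(PCA(n_components=min(8, X.shape[1])),
                            QuadraticDiscriminantAnalysis())
        score = cross_val_score(clf, X, y, cv=5).mean()
        if score > best_score:
            best_score, best_subset = score, subset
print(f"best accuracy {best_score:.2f} with channels {best_subset}")
```

With only six channels the exhaustive subset search (63 candidates) is cheap; for denser montages a greedy or genetic search would be needed instead.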


Book ChapterDOI
01 Jan 2003
TL;DR: This work addresses the important and practical problem of whether two time series are generated by the same process and presents two methods: signal fraction analysis (SFA), which optimizes the amount of signal retained when signals are superposed, and canonical correlation analysis (CCA), which constructs transformations that allow the comparison of two data sets.
Abstract: Subspace methodologies, such as the Karhunen-Loève (KL) transform, are powerful geometric tools for the characterization of high-dimensional data sets. The KL transform, or the related singular value decomposition (SVD), maximizes the mean-square projection of the data ensemble on subspaces of reduced rank. Other interesting subspace approaches solve modified optimization problems and have received comparatively less attention in the literature. Here we present two such methodologies: 1) signal fraction analysis (SFA), a method that optimizes the amount of signal retained when signals are superposed, and 2) canonical correlation analysis (CCA), a method for constructing transformations that allow the comparison of two data sets. We compare these methods to the more widely employed SVD in the context of real data. We address the important and practical problem of whether two time series are generated by the same process. As a specific example, the classification of noisy multivariate electroencephalogram (EEG) time-series data is considered.

14 citations
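
A hypothetical sketch of using CCA to ask whether two multichannel series share a generating process, in the spirit of the chapter above. Two noisy mixtures of a common latent signal stand in for real EEG segments; the sizes, mixing matrices, and noise level are illustrative assumptions, not the chapter's data.

```python
# Hypothetical sketch: CCA between two data sets that share a latent process.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_samples, n_channels = 1000, 6
latent = rng.normal(size=(n_samples, 2))       # shared underlying process
A = latent @ rng.normal(size=(2, n_channels)) \
    + 0.5 * rng.normal(size=(n_samples, n_channels))
B = latent @ rng.normal(size=(2, n_channels)) \
    + 0.5 * rng.normal(size=(n_samples, n_channels))

cca = CCA(n_components=2)
U, V = cca.fit_transform(A, B)                 # paired canonical variates
corrs = [np.corrcoef(U[:, i], V[:, i])[0, 1] for i in range(2)]
print("canonical correlations:", np.round(corrs, 2))
# Correlations near 1 suggest a common generating process; near 0, distinct ones.
```

Unlike the SVD, which characterizes one ensemble in isolation, CCA directly optimizes the correlation between paired projections of two data sets, which is what makes it suited to the same-process question posed above.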