
Showing papers by "Gert Pfurtscheller" published in 2007


Journal ArticleDOI
TL;DR: The proposed method reduced EOG artifacts by 80%. It has been implemented for offline and online analysis and is available through BioSig, an open-source software library for biomedical signal processing.

540 citations


Journal ArticleDOI
TL;DR: The aim of the present study was to demonstrate for the first time that brain waves can be used by a tetraplegic to control movements of his wheelchair in virtual reality (VR) using a single bipolar EEG recording.
Abstract: The aim of the present study was to demonstrate for the first time that brain waves can be used by a tetraplegic to control movements of his wheelchair in virtual reality (VR). In this case study, the spinal cord injured (SCI) subject was able to generate bursts of beta oscillations in the electroencephalogram (EEG) by imagining movements of his paralyzed feet. These beta oscillations were used for self-paced (asynchronous) brain-computer interface (BCI) control based on a single bipolar EEG recording. The subject was placed inside a virtual street populated with avatars. The task was to "go" from avatar to avatar towards the end of the street, but to stop at each avatar and talk to them. On average, the participant was able to successfully perform this asynchronous experiment with a performance of 90%, with single runs reaching 100%.
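The control scheme described here, detecting self-induced bursts of beta oscillations in a single bipolar channel, essentially amounts to band power estimation followed by a threshold with a dwell time. The following is a minimal Python sketch of that general idea only; the band limits, threshold, dwell time, and data are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def beta_band_power(eeg, fs, band=(16.0, 24.0), win_s=1.0):
    """Band-pass filter a single bipolar EEG channel and return a
    smoothed band power envelope (squared samples, moving average)."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, eeg)
    power = filtered ** 2
    win = int(win_s * fs)
    kernel = np.ones(win) / win
    return np.convolve(power, kernel, mode="same")

def detect_intentional_control(power, threshold, min_dwell):
    """Flag samples where band power stays above the threshold for at
    least `min_dwell` consecutive samples (a simple dwell-time rule)."""
    above = power > threshold
    control = np.zeros_like(above, dtype=bool)
    run = 0
    for i, a in enumerate(above):
        run = run + 1 if a else 0
        if run >= min_dwell:
            control[i] = True
    return control

# Example with synthetic data (placeholder values, not from the paper):
fs = 250                                  # sampling rate in Hz
eeg = np.random.randn(60 * fs)            # one minute of fake bipolar EEG
env = beta_band_power(eeg, fs)
commands = detect_intentional_control(env, threshold=3.0 * env.mean(),
                                      min_dwell=fs // 2)
```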

488 citations


Journal ArticleDOI
12 Dec 2007
TL;DR: This work shows that ten naive subjects can be trained in a synchronous paradigm within three sessions to navigate freely through a virtual apartment, whereby at every junction the subjects could decide on their own how they wanted to explore the virtual environment (VE).
Abstract: The step away from a synchronized or cue-based brain-computer interface (BCI) and from laboratory conditions towards real-world applications is an important and crucial one in BCI research. This work shows that ten naive subjects can be trained in a synchronous paradigm within three sessions to navigate freely through a virtual apartment, whereby at every junction the subjects could decide on their own how they wanted to explore the virtual environment (VE). This virtual apartment was designed to resemble a real-world application, with a goal-oriented task, a high mental workload, and a variable decision period for the subject. All subjects were able to perform long and stable motor imagery over a minimum time of 2 s. Using only three electroencephalogram (EEG) channels to analyze these imaginations, we were able to convert them into navigation commands. Additionally, it could be demonstrated that motivation is a crucial factor in BCI research: motivated subjects perform much better than unmotivated ones.

412 citations


Journal ArticleDOI
TL;DR: Three independent component analysis (ICA) algorithms (Infomax, FastICA, and SOBI) were compared with other preprocessing methods in order to find out whether, and to what extent, spatial filtering of EEG data can improve single-trial classification accuracy.

178 citations


Journal ArticleDOI
TL;DR: This work compares ERD/ERS patterns in paraplegic patients (suffering from a complete spinal cord injury) and healthy subjects during attempted (active) and passive foot movements, and shows midcentrally focused beta ERD/ERS patterns during passive, active, and imagined foot movements in healthy subjects.

172 citations


Journal ArticleDOI
TL;DR: Results show that all systems are stable and that the concatenation of both feature types with the continuously adaptive linear discriminant analysis classifier is the best choice of all.
Abstract: A study of different on-line adaptive classifiers using various feature types is presented. Motor imagery brain-computer interface (BCI) experiments were carried out with 18 naive able-bodied subjects. Experiments were done with three two-class, cue-based, electroencephalogram (EEG)-based systems. Two continuously adaptive classifiers were tested: adaptive quadratic and linear discriminant analysis. Three feature types were analyzed: adaptive autoregressive parameters, logarithmic band power estimates, and the concatenation of both. Results show that all systems are stable and that the concatenation of features with the continuously adaptive linear discriminant analysis classifier is the best choice of all. Also, a comparison of the latter with a discontinuously updated linear discriminant analysis, carried out in on-line experiments with six subjects, showed that on-line adaptation performed significantly better than a discontinuous update. Finally, a static subject-specific baseline was also provided and used to compare performance measurements of both types of adaptation.
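The "continuously adaptive linear discriminant analysis" mentioned above can be illustrated by an LDA whose class means and pooled covariance are updated sample by sample with an exponential forgetting factor. The sketch below shows only this general idea; the update rule and rate are assumptions, not the exact adaptation scheme of the paper.

```python
import numpy as np

class AdaptiveLDA:
    """Two-class LDA whose class means and pooled covariance are updated
    with an exponential forgetting factor (a sketch of a 'continuously
    adaptive' LDA; the update rule and rate are illustrative assumptions)."""

    def __init__(self, n_features, update_rate=0.05):
        self.uc = update_rate
        self.means = np.zeros((2, n_features))   # ideally seeded from calibration data
        self.cov = np.eye(n_features)

    def update(self, x, label):
        # Move the mean of the labelled class towards the new feature vector
        self.means[label] = (1 - self.uc) * self.means[label] + self.uc * x
        # Update the pooled covariance with the centred sample
        d = (x - self.means[label])[:, None]
        self.cov = (1 - self.uc) * self.cov + self.uc * (d @ d.T)

    def decision(self, x):
        # Classic LDA discriminant w^T x + b; positive values favour class 1
        w = np.linalg.solve(self.cov, self.means[1] - self.means[0])
        b = -0.5 * w @ (self.means[0] + self.means[1])
        return float(w @ x + b)
```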

136 citations


Journal ArticleDOI
TL;DR: The self-paced 3-class Graz brain-computer interface (BCI), which is based on the detection of sensorimotor electroencephalogram (EEG) rhythms induced by motor imagery, is presented.
Abstract: We present the self-paced 3-class Graz brain-computer interface (BCI), which is based on the detection of sensorimotor electroencephalogram (EEG) rhythms induced by motor imagery. Self-paced operation means that the BCI is able to determine whether the ongoing brain activity is intended as a control signal (intentional control) or not (non-control state). The presented system automatically reduces electrooculogram (EOG) artifacts, detects electromyographic (EMG) activity, and uses only three bipolar EEG channels. Two applications are presented: the freeSpace virtual environment (VE) and the Brainloop interface. The freeSpace is a computer-game-like application in which subjects have to navigate through the environment and collect coins by autonomously selecting navigation commands. Three subjects participated in these feedback experiments and each learned to navigate through the VE and collect coins. Two out of the three succeeded in collecting all three coins. The Brainloop interface provides an interface between the Graz-BCI and Google Earth.

117 citations


01 Jan 2007
TL;DR: In this paper, the authors investigate the relation between the number of trials and classification accuracy and provide answers to the question: does my classifier perform better than random?
Abstract: Brain-Computer Interface (BCI) research has become a growing field of interest in recent years. The work presented ranges from machine learning approaches on offline results to the application of a BCI in patients. However, reliable classification of brain activity is a crucial issue in BCI research. In contrast to most articles, which present methods to enhance classification accuracies, in this work we investigate the opposite side and provide answers to the question: does my classifier perform better than random? Keywords: Brain-Computer Interface, statistical analyses, classification accuracy, pattern recognition methods
1. Introduction: Brain-Computer Interface (BCI) research is a growing field [Pfurtscheller et al., 2006]. As a consequence, numerous papers have appeared during the last years [e.g., Dornhege et al., 2007]. Most articles introduce new feature extraction, optimization or classification methods. However, to be able to estimate the reliability of a new method and compare the achieved results with results obtained by other algorithms, some standard signal processing stages are necessary. One of these standards, often recommended by reviewers, is the use of a cross-validation statistic when presenting offline classification results. This procedure prevents the classifier from overfitting the data (curse of dimensionality) [Duda and Hart, 1973]. Related to this, it is meaningful to present not only classification accuracies but also the number of trials on which the computations are based. For example, the chance level in a simple 2-class paradigm is not exactly 50%; more precisely, it is 50% with a confidence interval at a certain level α depending on the number of trials. The aim of this paper is to provide more general knowledge about the relation between the number of trials and the classification accuracy.
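The point about the chance level and the number of trials can be made concrete with a binomial confidence bound: with few trials a random classifier can easily exceed 50%. A minimal sketch under the assumption of independent trials, not the authors' exact statistic:

```python
import numpy as np
from scipy.stats import binom

def chance_upper_bound(n_trials, n_classes=2, alpha=0.05):
    """Smallest accuracy that a purely random classifier reaches or exceeds
    with probability at most alpha; observed accuracies at or above this
    bound are unlikely to be explained by chance alone."""
    p0 = 1.0 / n_classes
    # Smallest k such that P(X >= k) <= alpha for X ~ Binomial(n_trials, p0)
    k = binom.ppf(1 - alpha, n_trials, p0) + 1
    return k / n_trials

for n in (20, 40, 100, 400):
    print(n, round(chance_upper_bound(n), 3))
# With few trials the bound lies far above 50%; with many trials it
# approaches 50%, which is the relation the paper quantifies.
```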

113 citations


Journal ArticleDOI
TL;DR: A genetic algorithm has been used to find the best combination of the features with the aforementioned classifiers, leading to a dramatic reduction of the classification error and the best results in four of the subjects.
Abstract: In this paper, a comparative evaluation of state-of-the-art feature extraction and classification methods is presented for five subjects in order to increase the performance of a cue-based brain-computer interface (BCI) system for imagery tasks (left and right hand movements). To select an informative feature together with a reliable classifier, features comprising standard band power, AAR coefficients, and fractal dimension, along with support vector machine (SVM), Adaboost and Fisher linear discriminant analysis (FLDA) classifiers, have been assessed. Among the single feature-classifier combinations, band power with FLDA gave the best results for three subjects, while fractal dimension with the FLDA and SVM classifiers led to the best results for the two other subjects. A genetic algorithm has been used to find the best combination of the features with the aforementioned classifiers, which led to a dramatic reduction of the classification error and the best results in four of the subjects. The genetic feature combination results have been compared with simple feature combination to show the performance of the genetic algorithm.
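As a rough illustration of genetic feature-classifier combination, the sketch below runs a small genetic algorithm over binary feature masks, scoring each mask by the cross-validated accuracy of an LDA classifier. The feature matrix X is assumed to already contain band power, AAR, and fractal-dimension columns; the GA operators and parameters are generic choices rather than those of the paper, and FLDA is approximated here by scikit-learn's LinearDiscriminantAnalysis.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def fitness(mask, X, y):
    """Cross-validated accuracy of an LDA classifier restricted to the
    feature columns selected by the binary mask (all-zero masks score 0)."""
    if not mask.any():
        return 0.0
    return cross_val_score(LinearDiscriminantAnalysis(),
                           X[:, mask], y, cv=5).mean()

def genetic_feature_selection(X, y, pop_size=20, generations=30, p_mut=0.05):
    n = X.shape[1]
    pop = rng.integers(0, 2, size=(pop_size, n)).astype(bool)
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        # Keep the top half of the population (truncation selection)
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        children = []
        while len(children) < pop_size:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                     # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n) < p_mut               # bit-flip mutation
            children.append(child)
        pop = np.array(children)
    scores = np.array([fitness(ind, X, y) for ind in pop])
    return pop[scores.argmax()], scores.max()
```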

82 citations


Journal ArticleDOI
TL;DR: A brain-computer interface is set up to be used as an input device to a highly immersive virtual reality CAVE-like system and the interrelations between BCI and presence are studied.
Abstract: We have set up a brain-computer interface (BCI) to be used as an input device to a highly immersive virtual reality CAVE-like system. We have carried out two navigation experiments: three subjects were required to rotate in a virtual bar room by imagining left or right hand movement, and to walk along a single axis in a virtual street by imagining foot or hand movement. In this paper we focus on the subjective experience of navigating virtual reality "by thought," and on the interrelations between BCI and presence.

79 citations


Journal ArticleDOI
TL;DR: This work analyzes whether the respiratory heart rate response, induced by brisk inspiration, can be used as an additional communication channel for self-initiation in brain-computer interface users.
Abstract: Self-initiation, that is, the ability of a brain-computer interface (BCI) user to switch the system on and off autonomously, is a very important issue. In this work we analyze whether the respiratory heart rate response, induced by brisk inspiration, can be used as an additional communication channel. After only 20 min of feedback training, ten healthy subjects were able to self-initiate and operate a 4-class steady-state visual evoked potential (SSVEP)-based BCI using only one bipolar ECG and one bipolar EEG channel. Threshold detection was used to measure a beat-to-beat heart rate increase. Despite this simple method, on average only 2.9 non-intentional switches (heart rate changes) were detected during a 30 min evaluation period.
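A beat-to-beat heart rate increase detected by simple thresholding could look roughly like the sketch below: R peaks are picked from the ECG, converted to an instantaneous heart rate, and a switch is triggered when the rate rises above a running baseline. The peak detector, baseline length, and threshold values are illustrative assumptions, not the paper's actual parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def beat_to_beat_hr(ecg, fs):
    """Detect R peaks in a single bipolar ECG channel and convert the
    R-R intervals into an instantaneous heart rate series (beats/min)."""
    # Simple peak picking; dedicated R-peak detectors are more robust.
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs),
                          height=np.percentile(ecg, 95))
    rr = np.diff(peaks) / fs           # R-R intervals in seconds
    return 60.0 / rr

def detect_hr_switch(hr, rel_increase=0.15, n_beats=3):
    """Return True if the heart rate rises by `rel_increase` relative to a
    short running baseline over `n_beats` consecutive beats (a simple
    threshold rule; the paper's exact criterion is not given here)."""
    for i in range(n_beats, len(hr)):
        baseline = hr[max(0, i - 10):i - n_beats + 1].mean()
        if np.all(hr[i - n_beats + 1:i + 1] > baseline * (1 + rel_increase)):
            return True
    return False
```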

Journal ArticleDOI
TL;DR: Findings provide the first evidence of the sensitivity of the theta and alpha ERS/ERD measure to lexical-semantic processes involved in language translation.

01 Jan 2007
TL;DR: The results of a self-paced brain-computer interface (BCI) based on the detection of sensorimotor electroencephalogram rhythms during motor imagery are presented.
Abstract: The results of a self-paced Brain-Computer Interface (BCI), which is based on the detection of sensorimotor electroencephalogram rhythms during motor imagery, are presented. The participants were given the task of moving through a virtual model of the Austrian National Library by performing motor imagery. This work shows that five participants who were trained in a synchronous BCI could successfully perform the asynchronous experiment. Keywords: Brain-Computer Interface, asynchronous, self-paced, motor imagery, navigation, virtual environment

Journal ArticleDOI
TL;DR: Viewing a moving hand results in a stronger desynchronization of the central beta rhythm than viewing a moving cube, which provides further evidence for some degree of motor processing related to the visual presentation of objects and implies a greater involvement of motor areas of the brain in the observation of body-part actions.
Abstract: We studied the impact of different visual objects, such as a moving hand and a moving cube, on the bioelectrical brain activity (i.e., the electroencephalogram; EEG). The moving objects were presented in a virtual reality (VR) system via a head-mounted display (HMD). Nine healthy volunteers were confronted with 3D visual stimulus presentations in four experimental conditions: (i) static hand, (ii) dynamic hand, (iii) static cube, and (iv) dynamic cube. The results reveal that the processing of moving visual stimuli depends on the type of object: viewing a moving hand results in a stronger desynchronization of the central beta rhythm than viewing a moving cube. This provides further evidence for some degree of motor processing related to the visual presentation of objects and implies a greater involvement of motor areas of the brain in the observation of body-part actions than in the observation of non-body-part movements.
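The "stronger desynchronization of the central beta rhythm" is typically quantified with the classic band power ERD/ERS measure, the percentage change of band power relative to a reference interval. A minimal sketch with assumed band limits and window positions; the intervals actually used in the study are not given here.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def erd_percent(trials, fs, band=(13.0, 30.0),
                ref_window=(0.0, 1.0), test_window=(2.0, 3.0)):
    """Classic band power ERD/ERS measure: percentage change of band power
    in a test window relative to a reference window, averaged over trials.
    Negative values indicate desynchronization (ERD).
    `trials` has shape (n_trials, n_samples) for one channel."""
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    power = sosfiltfilt(sos, trials, axis=1) ** 2
    r0, r1 = (int(t * fs) for t in ref_window)
    t0, t1 = (int(t * fs) for t in test_window)
    ref = power[:, r0:r1].mean()
    act = power[:, t0:t1].mean()
    return 100.0 * (act - ref) / ref
```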

Journal ArticleDOI
TL;DR: The extent to which mixed reality (MR) and VR participants realistically respond to virtually generated sensory data is considered, and the similarity of their response with what the authors might observe or predict if the sensory data (the situation, place, or events) were real, rather than virtual.
Abstract: People who experience an immersive VR system usually report feeling as if they were really in the displayed virtual situation, and can often be observed behaving in accordance with that feeling, even though they know that they're not actually there. Researchers refer to this feeling as "presence" in virtual environments, yet the term has come to have many uses and meanings, all of which evolved from the notion of telepresence in teleoperator systems. In Presenccia, we take an operational approach to the presence concept. Our approach lets us assess the extent of presence using tools beyond traditional questionnaires, and therefore we avoid many of the problems involved with sole reliance on these. Instead, we consider the extent to which mixed reality (MR) and VR participants realistically respond to virtually generated sensory data. Specifically, we measure the similarity of their response with what we might observe or predict if the sensory data (the situation, place, or events) were real, rather than virtual. We consider this response on several levels.

Journal ArticleDOI
TL;DR: If a small amount of data was available, the best classifier was linear discriminant analysis (LDA), and if enough data were available all three classifiers performed very similarly, which suggests that the effort needed to find regularizing parameters for RDA can be avoided.
Abstract: We present a study of linear, quadratic and regularized discriminant analysis (RDA) applied to motor imagery data of three subjects. The aim of the work was to find out which classifier can better separate these two-class motor imagery data: linear, quadratic, or some function in between the linear and quadratic solutions. The discriminant analysis methods were tested with two different feature extraction techniques, adaptive autoregressive parameters and logarithmic band power estimates, which are commonly used in brain-computer interface research. Differences in classification accuracy of the classifiers were found when using different amounts of data; if a small amount was available, the best classifier was linear discriminant analysis (LDA), and if enough data were available all three classifiers performed very similarly. This suggests that the effort needed to find regularizing parameters for RDA can be avoided by using LDA.
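The comparison described above can be reproduced in outline with scikit-learn, using shrinkage-regularized LDA as a stand-in for RDA (the paper's RDA blends the linear and quadratic solutions, which is not identical to shrinkage). A hedged sketch:

```python
import numpy as np
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)
from sklearn.model_selection import cross_val_score

def compare_discriminants(X, y, cv=10):
    """Cross-validated accuracy of LDA, QDA, and a shrinkage-regularized LDA
    (used here as a rough stand-in for RDA) on the same feature matrix."""
    classifiers = {
        "LDA": LinearDiscriminantAnalysis(),
        "QDA": QuadraticDiscriminantAnalysis(),
        "regularized": LinearDiscriminantAnalysis(solver="lsqr",
                                                  shrinkage="auto"),
    }
    return {name: cross_val_score(clf, X, y, cv=cv).mean()
            for name, clf in classifiers.items()}

# Shrinking the training set (e.g. passing only X[:40], y[:40]) typically
# widens the gap in favour of the linear model, matching the trend the
# abstract describes for small amounts of data.
```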

01 Jan 2007
TL;DR: The electroencephalogram (EEG) of hemiparetic stroke patients during left hand and right hand motor imagery is analyzed to determine whether time-frequency maps of Event-Related Desynchronization (ERD) and Event-Related Synchronization (ERS) and single-trial classification by means of the Distinction Sensitive Learning Vector Quantization (DSLVQ) method are suited to keep record of the changing brain activity.
Abstract: Motor imagery as a rehabilitation method after stroke is becoming an important tool and is currently also heavily researched. One issue, however, is to quantify and monitor changes in the ongoing brain activity and to document brain plasticity. Here, we analyze the electroencephalogram (EEG) of hemiparetic stroke patients during left hand and right hand motor imagery in order to determine whether time-frequency maps of Event-Related Desynchronization (ERD) and Event-Related Synchronization (ERS), and single-trial classification by means of the Distinction Sensitive Learning Vector Quantization (DSLVQ) method, are suited to keep record of the changing brain activity. Keywords: Motor imagery, Stroke, Electroencephalogram (EEG), Event-Related Desynchronization (ERD)

Proceedings ArticleDOI
05 Sep 2007
TL;DR: In this paper, feature extraction based on self-organizing maps (SOM) using the auto-regressive (AR) spectrum is introduced to discriminate the EEG signals recorded during right hand, left hand, and foot motor imagery.
Abstract: Electroencephalogram (EEG) recordings during right and left hand motor imagery can be used to move a cursor to a target on a computer screen. Such an EEG-based brain-computer interface (BCI) can provide a new communication channel to replace an impaired motor function. It can be used, e.g., by handicapped users with amyotrophic lateral sclerosis (ALS). The conventional method addresses the recognition of right hand and left hand motor imagery. In this paper, feature extraction based on self-organizing maps (SOM) using the auto-regressive (AR) spectrum is introduced to discriminate the EEG signals recorded during right hand, left hand, and foot motor imagery. The features for pattern recognition are discussed through the experimental studies.
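The AR-spectrum features feeding such a SOM can be illustrated with a Yule-Walker AR fit and the spectrum implied by the model. The estimator, model order, and frequency grid below are generic assumptions rather than the settings used in the paper.

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def ar_coefficients(x, order=6):
    """Yule-Walker estimate of AR coefficients for one EEG segment
    (a common way to obtain an AR spectrum; the paper's exact estimator
    and model order are not specified here)."""
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    # Solve the Toeplitz system R a = r of the Yule-Walker equations
    return solve_toeplitz(acf[:order], acf[1:order + 1])

def ar_spectrum(a, fs, n_freqs=64):
    """Power spectrum implied by the AR model (up to the innovation
    variance), evaluated on a frequency grid; these spectral values can
    then be used as input features for a SOM or another classifier."""
    freqs = np.linspace(0, fs / 2, n_freqs)
    z = np.exp(-2j * np.pi * freqs / fs)
    denom = 1 - sum(ak * z ** (k + 1) for k, ak in enumerate(a))
    return freqs, 1.0 / np.abs(denom) ** 2
```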

Proceedings Article
12 Jun 2007
TL;DR: A training procedure is described that allows subjects to produce one brain pattern (elicited with motor imagery) of two different durations (e.g., 1 s and 3 s), and results show that it is possible to elicit one brain pattern over two different durations.
Abstract: Brain-Computer Interfaces (BCIs) are systems that establish a direct connection between the human brain and a computer, thus providing an additional communication channel. In patients suffering from a high spinal cord injury (SCI), BCIs can be used to control neuroprostheses such as functional electrical stimulation for grasp restoration. In this paper, we describe a training procedure that allows subjects to produce one brain pattern (elicited with motor imagery) of two different durations (e.g., 1s and 3s). For this purpose a “Jump and Run” game was implemented. Results of 5 able-bodied subjects show that it is possible to elicit one brain pattern over two different durations.

Book Chapter
01 Jan 2007
TL;DR: In this chapter, an overview is given of BCI-based control of virtual reality (VR), and four examples of brain-computer interface control of VR are reported.
Abstract: A brain-computer interface (BCI) is a closed-loop system with feedback as one important component. Depending on the BCI application (to establish communication in patients with severe motor paralysis, to control neuroprostheses, or to perform neurofeedback), information is visually fed back to the user about the success or failure of the intended act. One way to realize feedback is the use of virtual reality (VR). In this chapter, an overview is given of BCI-based control of VR. In addition, four examples are reported in this chapter.

Proceedings Article
01 Jan 2007
TL;DR: The first ever study in which participants control their own avatar using only their thoughts is described. Natural mapping was reported to feel more natural and easier than reversed mapping; however, the results do not indicate that BCI accuracy was better with natural mapping than with reversed mapping.
Abstract: A brain-computer interface (BCI) can arguably be considered the ultimate user interface, where humans operate a computer using thought alone. We have integrated the Graz-BCI into a highly immersive Cave-like system. In this paper we report a case study where three participants were able to control their avatar using only their thoughts. We have analyzed the participants' subjective experience using an in-depth qualitative methodology. We also discuss some limitations of BCI in controlling a virtual environment, and interaction design decisions that needed to be made. The brain-computer interface (BCI) has been studied extensively as a tool for paralyzed patients, which may augment their communication with the external world and allow them better control of their limbs. However, once it has been developed for these critical applications, we expect it will have profound implications for many other types of user interfaces and applications. BCI could be one of the most significant steps following "direct manipulation interfaces" (Shneiderman, 1983), where intention is mapped directly into interaction, rather than being conveyed through motor movements. Furthermore, if used in an immersive virtual environment (IVE) this could be a completely novel experience and, in the future, lead to unprecedented levels of the sense of presence (for recent reviews of the concept of presence see (Vives and Slater, 2005) and (Riva et al., 2003)). A key requirement for a successful experience in an immersive virtual environment (IVE) is the representation of the participant, or its avatar (Pandzic et al., 1997; Slater et al., 1994; Slater et al., 1998). This paper describes the first ever study where participants control their own avatar using only their thoughts. Three subjects were able to use the Graz-BCI to control an avatar, and their subjective experience was assessed using questionnaires and a semi-structured interview. Naturally, a third-person avatar, such as used in this experiment, is only one possible interface to an IVE. Using a BCI to control an IVE by thought raises several major human-computer interaction (HCI) issues: whether classification of thought patterns is continuous (asynchronous BCI) or only takes place in specific moments (synchronous BCI), the number of input classes recognized, the importance of feedback, and the nature of the mapping between thoughts and resulting action in the IVE. In this paper we refer to these issues, and present a case study that specifically addresses the issues of feedback and mapping. A critical initial hypothesis is that natural mapping between thought processes and IVE functionality would improve the experience. A one-to-one mapping seemingly makes intuitive sense, but having this mapping is constraining because we are limited in the scope of thought patterns that we can detect based on contemporary brain recording techniques. In addition, it precludes other more complex or more fanciful body image mappings; what if we want to experiment with lobster avatars? (See Jaron Lanier's "everyone can be a lobster" statement in http://www.edge.org/q2006/q06 7.html#lanier). In the case study reported here we found that natural mapping was reported to feel more natural and easier than when the mapping was reversed. However, the results do not indicate that BCI accuracy was better with natural mapping than with reversed mapping.
The main implication of our case study is that this new type of interface, whereby IVE participants control their avatars by thought, is possible, and should be further pursued. In addition, we reveal new insights about the HCI issues that are involved in such an interface, and provide a first glance into what the experience of using such an interface may be like.


Journal ArticleDOI
TL;DR: It is shown that anticipatory HR deceleration and HR changes induced by motor preparation and by the activity of typing the translation do not depend on task difficulty, providing the first evidence of a link between task difficulty in language translation and event-related HR changes.
Abstract: The heart rate (HR) can be modulated by diverse mental activities ranging from stimulus anticipation to higher-order cognitive information processing. In the present study we report on HR changes during word translation and examine how the HR is influenced by the difficulty of the translation task. Twelve students of translation and interpreting were presented English high- and low-frequency words as well as familiar and unfamiliar technical terms that had to be translated into German. Analyses revealed that words of higher translation difficulty were accompanied by a more pronounced HR deceleration than words that were easier to translate. We additionally show that anticipatory HR deceleration and HR changes induced by motor preparation and by the activity of typing the translation do not depend on task difficulty. These results provide the first evidence of a link between task difficulty in language translation and event-related HR changes.

Proceedings Article
12 Jun 2007
TL;DR: It could be demonstrated that brain waves can be used by a tetraplegic to control movements of his wheelchair in virtual reality (VR) and he reported that he had the sense of being in the street and going to the people, similar to a task in a real street.
Abstract: In this study it could be demonstrated that brain waves can be used by a tetraplegic to control movements of his wheelchair in virtual reality (VR). In this case study, a spinal cord injured (SCI) subject was able to induce centrally localized beta oscillations in the electroencephalogram (EEG) by imagining movements of his paralyzed feet. These oscillations were used for self-paced (asynchronous) Brain-Computer Interface (BCI) control based on a single bipolar EEG recording. The subject was placed inside a virtual street populated with avatars and was able to move the wheelchair from one position in the virtual street to another at free will. The task was to "go" from avatar to avatar towards the end of the street, but to stop at each avatar and talk to them. On average, the participant was able to successfully perform the experiment with a performance of 90%, with single runs reaching 100%. After the experiment he reported that he had the sense of being in the street and going to the people, similar to a task in a real street.


Proceedings ArticleDOI
01 Sep 2007
TL;DR: In this study, feature extraction based on Directed Information analysis is introduced to discriminate the EEG signals recorded during right hand, left hand, and right foot motor imagery.
Abstract: Electroencephalogram (EEG) recordings during right and left hand motor imagery can be used to move a cursor to a target on a computer screen. Such an EEG-based brain-computer interface (BCI) can provide a new communication channel to replace an impaired motor function. It can be used, e.g., by handicapped users with amyotrophic lateral sclerosis (ALS). The conventional method addresses the recognition of right hand and left hand motor imagery. In this study, feature extraction based on Directed Information analysis is introduced to discriminate the EEG signals recorded during right hand, left hand, and right foot motor imagery. The effectiveness of our method is confirmed through the experimental studies.