
Showing papers by "Klaus-Robert Müller" published in 2009


Journal ArticleDOI
TL;DR: The results show that, although its rate-distortion (R-D) performance is worse, platelet-based depth coding outperforms H.264 due to improved sharp edge preservation.
Abstract: This article investigates the interaction between different techniques for depth compression and view synthesis rendering with multiview video plus scene depth data. Two different approaches for depth coding are compared, namely H.264/MVC, using temporal and inter-view reference images for efficient prediction, and the novel platelet-based coding algorithm, characterized by being adapted to the special characteristics of depth images. Since depth images are a 2D representation of the 3D scene geometry, depth-image errors lead to geometry distortions. Therefore, the influence of geometry distortions resulting from coding artifacts is evaluated for both coding approaches in two different ways. First, the variation of 3D surface meshes is analyzed using the Hausdorff distance, and second, the distortion is evaluated for 2D view synthesis rendering, where color and depth information are used together to render virtual intermediate camera views of the scene. The results show that, although its rate-distortion (R-D) performance is worse, platelet-based depth coding outperforms H.264 due to improved sharp edge preservation. Therefore, depth coding needs to be evaluated with respect to geometry distortions.
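For illustration only, the mesh-based evaluation mentioned above can be sketched with SciPy's directed Hausdorff distance applied to two vertex clouds (one from the original depth map, one from the coded depth map). The arrays below are random placeholders, not data from the paper.

```python
# Minimal sketch: symmetric Hausdorff distance between two vertex clouds,
# e.g. surface meshes reconstructed from an original and a coded depth map.
# The point sets are synthetic placeholders.
import numpy as np
from scipy.spatial.distance import directed_hausdorff

rng = np.random.default_rng(0)
verts_original = rng.normal(size=(5000, 3))                              # reference mesh vertices
verts_coded = verts_original + rng.normal(scale=0.01, size=(5000, 3))    # distorted mesh vertices

d_fwd, _, _ = directed_hausdorff(verts_original, verts_coded)
d_bwd, _, _ = directed_hausdorff(verts_coded, verts_original)
hausdorff = max(d_fwd, d_bwd)   # symmetric Hausdorff distance
print(f"Hausdorff distance: {hausdorff:.4f}")
```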

287 citations


Journal ArticleDOI
TL;DR: A new unique public Ames mutagenicity data set comprising about 6500 nonconfidential compounds together with their biological activity is described, and three commercial tools (including an off-the-shelf Bayesian machine learner in Pipeline Pilot) are compared with four noncommercial machine learning implementations on the new benchmark.
Abstract: Up to now, publicly available data sets to build and evaluate Ames mutagenicity prediction tools have been very limited in terms of size and chemical space covered. In this report we describe a new unique public Ames mutagenicity data set comprising about 6500 nonconfidential compounds (available as SMILES strings and SDF) together with their biological activity. Three commercial tools (DEREK, MultiCASE, and an off-the-shelf Bayesian machine learner in Pipeline Pilot) are compared with four noncommercial machine learning implementations (Support Vector Machines, Random Forests, k-Nearest Neighbors, and Gaussian Processes) on the new benchmark data set.
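A hedged sketch of how the four noncommercial learners named in the abstract could be compared with scikit-learn. It assumes a precomputed descriptor/fingerprint matrix derived from the SMILES strings (the file names and feature choice are hypothetical); it is not the authors' evaluation protocol.

```python
# Cross-validated comparison of SVM, Random Forest, k-NN and Gaussian Process
# classifiers on a precomputed descriptor matrix X with Ames labels y (0/1).
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.model_selection import cross_val_score

X = np.load("ames_descriptors.npy")   # hypothetical file names
y = np.load("ames_labels.npy")

models = {
    "SVM": SVC(kernel="rbf", C=1.0, gamma="scale"),
    "Random Forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "k-NN": KNeighborsClassifier(n_neighbors=5),
    "Gaussian Process": GaussianProcessClassifier(),
}
for name, model in models.items():
    auc = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```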

261 citations


Proceedings Article
07 Dec 2009
TL;DR: This work provides new insights into the connection between several existing MKL formulations, develops two efficient interleaved optimization strategies for arbitrary p > 1, and applies lp-norm MKL to real-world problems from computational biology, showing that non-sparse MKL achieves accuracies that go beyond the state of the art.
Abstract: Learning linear combinations of multiple kernels is an appealing strategy when the right choice of features is unknown. Previous approaches to multiple kernel learning (MKL) promote sparse kernel combinations to support interpretability. Unfortunately, l1-norm MKL is hardly observed to outperform trivial baselines in practical applications. To allow for robust kernel mixtures, we generalize MKL to arbitrary lp-norms. We provide new insights into the connection between several existing MKL formulations and develop two efficient interleaved optimization strategies for arbitrary p > 1. Empirically, we demonstrate that the interleaved optimization strategies are much faster compared to the traditionally used wrapper approaches. Finally, we apply lp-norm MKL to real-world problems from computational biology, showing that non-sparse MKL achieves accuracies that go beyond the state-of-the-art.
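As a hedged illustration of the lp-norm idea, the sketch below uses the simple wrapper-style alternating scheme (not the interleaved solver the paper develops): fit an SVM on the weighted kernel sum, then update the kernel weights with the closed-form lp-norm rule beta_m proportional to ||w_m||^(2/(p+1)), renormalized to unit lp-norm. Kernel matrices and labels are assumed given.

```python
# Minimal wrapper-style sketch of lp-norm MKL (p > 1); a simplified illustration,
# not the interleaved optimization from the paper.
# K_list: list of precomputed (n x n) kernel matrices; y: labels in {-1, +1}.
import numpy as np
from sklearn.svm import SVC

def lp_mkl(K_list, y, p=2.0, C=1.0, n_iter=20):
    M = len(K_list)
    beta = np.full(M, M ** (-1.0 / p))            # uniform start with ||beta||_p = 1
    for _ in range(n_iter):
        K = sum(b * Km for b, Km in zip(beta, K_list))
        clf = SVC(C=C, kernel="precomputed").fit(K, y)
        a = clf.dual_coef_.ravel()                 # signed dual coefficients y_i * alpha_i
        sv = clf.support_
        # squared block norms ||w_m||^2 = beta_m^2 * a^T K_m a, restricted to support vectors
        w2 = np.array([b ** 2 * a @ Km[np.ix_(sv, sv)] @ a for b, Km in zip(beta, K_list)])
        # closed-form lp-norm weight update, renormalized so that ||beta||_p = 1
        beta = w2 ** (1.0 / (p + 1))
        beta /= (beta ** p).sum() ** (1.0 / p)
    K = sum(b * Km for b, Km in zip(beta, K_list))
    return beta, SVC(C=C, kernel="precomputed").fit(K, y)
```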

257 citations


Journal ArticleDOI
TL;DR: This work uses a large database of EEG recordings from 45 subjects, who took part in movement imagination task experiments, to construct an ensemble of classifiers derived from subject-specific temporal and spatial filters.

250 citations


Journal ArticleDOI
TL;DR: This Letter proposes a novel technique, stationary subspace analysis (SSA), that decomposes a multivariate time series into its stationary and nonstationary part and succeeds in finding stationary components that lead to a significantly improved prediction accuracy and meaningful topographic maps which contribute to a better understanding of the underlying nonstationary brain processes.
Abstract: Identifying temporally invariant components in complex multivariate time series is key to understanding the underlying dynamical system and predicting its future behavior. In this Letter, we propose a novel technique, stationary subspace analysis (SSA), that decomposes a multivariate time series into its stationary and nonstationary part. The method is based on two assumptions: (a) the observed signals are linear superpositions of stationary and nonstationary sources; and (b) the nonstationarity is measurable in the first two moments. We characterize theoretical and practical properties of SSA and study it in simulations and on cortical signals measured by electroencephalography. Here, SSA succeeds in finding stationary components that lead to a significantly improved prediction accuracy and meaningful topographic maps which contribute to a better understanding of the underlying nonstationary brain processes.
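The following is a deliberately simplified sketch of the SSA idea under the two assumptions above: after whitening, search for a low-dimensional projection whose epoch-wise means and covariances stay close to N(0, I), scoring each epoch by its KL divergence to the standard normal. It uses a generic SciPy optimizer rather than the authors' optimization scheme; epoch data and the stationary dimension are assumed given.

```python
# Simplified Stationary Subspace Analysis sketch (not the original algorithm).
import numpy as np
from scipy.optimize import minimize

def ssa_fit(epochs, d_s):
    """epochs: list of (samples x channels) arrays; d_s: number of stationary dimensions."""
    X = np.vstack(epochs)
    mean, cov = X.mean(0), np.cov(X, rowvar=False)
    W = np.linalg.inv(np.linalg.cholesky(cov))           # whitening matrix
    wh = [(e - mean) @ W.T for e in epochs]
    mus = [e.mean(0) for e in wh]
    covs = [np.cov(e, rowvar=False) for e in wh]
    D = X.shape[1]

    def loss(flat):
        B = np.linalg.qr(flat.reshape(D, d_s))[0].T       # orthonormal (d_s x D) projection
        total = 0.0
        for mu, C in zip(mus, covs):
            m, Cp = B @ mu, B @ C @ B.T
            # KL divergence of the projected epoch Gaussian to N(0, I)
            total += 0.5 * (np.trace(Cp) + m @ m - d_s - np.linalg.slogdet(Cp)[1])
        return total

    x0 = np.random.default_rng(0).normal(size=D * d_s)
    res = minimize(loss, x0, method="L-BFGS-B")
    B = np.linalg.qr(res.x.reshape(D, d_s))[0].T
    return B @ W, res.fun                                  # projection onto stationary sources
```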

247 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a machine learning-based approach to train a classifier for the first IDA-recognition task in the field of Biological Psychology, Clinical Psychology and Psychotherapy.
Abstract: (a) Berlin Institute of Technology, Machine Learning Laboratory, Berlin, Germany; (b) Fraunhofer FIRST IDA, Berlin, Germany; (c) Dept. of Neurology, Campus Benjamin Franklin, Charité University Medicine Berlin, Germany; (d) Institute of Medical Psychology and Behavioral Neurobiology, Universität Tübingen, Germany; (e) Department of Biological Psychology, Clinical Psychology and Psychotherapy, University of Würzburg, Germany

102 citations


Journal ArticleDOI
TL;DR: This paper examines how interactions can be designed to explicitly take into account the uncertainty and dynamics of control inputs, and highlights the asymmetry of feedback and control channels in current non-invasive brain-computer interfaces.
Abstract: Designing user interfaces which can cope with unconventional control properties is challenging, and conventional interface design techniques are of little help. This paper examines how interactions can be designed to explicitly take into account the uncertainty and dynamics of control inputs. In particular, the asymmetry of feedback and control channels is highlighted as a key design constraint, which is especially obvious in current non-invasive brain-computer interfaces (BCIs). Brain-computer interfaces are systems capable of decoding neural activity in real time, thereby allowing a computer application to be directly controlled by thought. BCIs, however, have totally different signal properties than most conventional interaction devices. Bandwidth is very limited and there are comparatively long and unpredictable delays. Such interfaces cannot simply be treated as unwieldy mice. In this respect they are an example of a growing field of sensor-based interfaces which have unorthodox control properties. As a concrete example, we present the text entry application "Hex-O-Spell", controlled via motor-imagery based electroencephalography (EEG). The system utilizes the high visual display bandwidth to help compensate for the limited control signals, where the timing of the state changes encodes most of the information. We present results showing the comparatively high performance of this interface, with entry rates exceeding seven characters per minute.

88 citations


Proceedings Article
07 Dec 2009
TL;DR: This paper harvests a large database of EEG BCI motor imagination recordings for constructing a library of subject-specific spatio-temporal filters and derives a subject-independent BCI classifier; offline results indicate that BCI-naive users could start real-time BCI use with no prior calibration at only a very moderate performance loss.
Abstract: In the quest to make Brain Computer Interfacing (BCI) more usable, dry electrodes have emerged that get rid of the initial 30 minutes required for placing an electrode cap. Another time-consuming step is the required individualized adaptation to the BCI user, which involves another 30 minutes of calibration for assessing a subject's brain signature. In this paper we aim to remove this calibration procedure from BCI setup time as well, by means of machine learning. In particular, we harvest a large database of EEG BCI motor imagination recordings (83 subjects) for constructing a library of subject-specific spatio-temporal filters and derive a subject-independent BCI classifier. Our offline results indicate that BCI-naive users could start real-time BCI use with no prior calibration at only a very moderate performance loss.
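A much simplified stand-in for the zero-calibration idea, under the assumption that per-subject feature arrays (e.g. log band-power after spatial filtering) are already computed: pool all "library" subjects, train one regularized classifier, and apply it unchanged to a held-out subject. The paper's library of subject-specific filters and their combination is not reproduced here.

```python
# Leave-one-subject-out evaluation of a pooled, subject-independent classifier.
# features/labels: dicts mapping subject id -> (trials x dims) array / label vector
# (hypothetical, precomputed elsewhere).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

def leave_one_subject_out(features, labels):
    scores = {}
    for test_subj in features:
        X_train = np.vstack([features[s] for s in features if s != test_subj])
        y_train = np.concatenate([labels[s] for s in features if s != test_subj])
        clf = LogisticRegression(C=0.1, max_iter=1000).fit(X_train, y_train)
        # the held-out subject is decoded with no calibration data of its own
        scores[test_subj] = accuracy_score(labels[test_subj],
                                           clf.predict(features[test_subj]))
    return scores
```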

63 citations


Proceedings ArticleDOI
04 May 2009
TL;DR: Video plus depth is an interesting alternative to conventional stereo video for mobile 3D services, and the depth component can be compressed at significantly lower bitrates than a secondary video, albeit at the expense of increased complexity for rendering the second view at the decoder.
Abstract: This paper presents a study on video plus depth compression using available MPEG standards and its optimization for mobile 3D services. Video plus depth enables 3D television, but as mobile services are subject to various limitations, including bandwidth, memory, and processing power, efficient compression as well as low-complexity view synthesis is required. Two MPEG coding standards are applicable for video plus depth coding, namely MPEG-C Part 3 and H.264 Auxiliary Picture Syntax. These methods are evaluated with respect to the limitations of mobile services and the achievable quality for rendering the second stereo view from compressed video plus depth. In conclusion, video plus depth is an interesting alternative to conventional stereo video for mobile 3D services. The results indicate that depth can be compressed at significantly lower bitrates than a secondary video, albeit at the expense of increased complexity for rendering the second view at the decoder.
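To make the decoder-side complexity concrete, here is a toy depth-image-based rendering step: each pixel of the base view is shifted horizontally by a disparity derived from its depth. Camera parameters and the depth mapping are made-up placeholders, and real renderers additionally handle occlusions and hole filling, which this sketch omits.

```python
# Toy depth-image-based rendering (DIBR): synthesize a second view by shifting
# pixels of the base view according to depth-dependent disparity.
import numpy as np

def render_second_view(color, depth8, focal_px=1000.0, baseline=0.05,
                       z_near=0.5, z_far=50.0):
    """color: (H, W, 3) image; depth8: (H, W) 8-bit depth map (255 = nearest)."""
    h, w = depth8.shape
    # map 8-bit depth to metric depth Z, then disparity = f * B / Z
    z = 1.0 / (depth8 / 255.0 * (1.0 / z_near - 1.0 / z_far) + 1.0 / z_far)
    disparity = np.round(focal_px * baseline / z).astype(int)
    out = np.zeros_like(color)
    cols = np.arange(w)
    for row in range(h):
        target = cols - disparity[row]             # shift towards the virtual camera
        valid = (target >= 0) & (target < w)
        out[row, target[valid]] = color[row, cols[valid]]
    return out   # zeros remain where no source pixel was mapped (holes)
```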

58 citations


Book ChapterDOI
01 Jan 2009
TL;DR: This chapter tackles a difficult challenge: presenting signal processing material to non-experts, and includes some simple methods to demonstrate the basics of adaptive data processing, then some advanced methods that are fundamental in adaptive signal processing, and are likely to be useful in a variety of applications.
Abstract: This chapter tackles a difficult challenge: presenting signal processing material to non-experts. This chapter is meant to be comprehensible to people who have some math background, including a course in linear algebra and basic statistics, but do not specialize in mathematics, engineering, or related fields. Some formulas assume the reader is familiar with matrices and basic matrix operations, but not more advanced material. Furthermore, we tried to make the chapter readable even if you skip the formulas. Nevertheless, we include some simple methods to demonstrate the basics of adaptive data processing, then we proceed with some advanced methods that are fundamental in adaptive signal processing and are likely to be useful in a variety of applications. The advanced algorithms are also available online [30]. In the second part, these techniques are applied to some real-world BCI data.
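The chapter's own worked examples are not reproduced in this listing; purely as an illustration of what "adaptive data processing" means in this context, below is a minimal least-mean-squares (LMS) adaptive filter, one of the simplest adaptive algorithms. Signals and parameters are placeholders, not taken from the chapter.

```python
# Minimal LMS adaptive filter: adapt FIR weights so the filtered reference x tracks d.
import numpy as np

def lms_filter(x, d, n_taps=8, mu=0.01):
    w = np.zeros(n_taps)
    y, e = np.zeros_like(d, dtype=float), np.zeros_like(d, dtype=float)
    for n in range(n_taps, len(x)):
        u = x[n - n_taps:n][::-1]       # most recent samples first
        y[n] = w @ u                     # filter output
        e[n] = d[n] - y[n]               # error drives the weight update
        w += 2 * mu * e[n] * u
    return y, e, w
```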

44 citations


Journal ArticleDOI
TL;DR: The generalized ERD represents a powerful novel analysis tool for extending the understanding of inter-trial variability of evoked responses and therefore the robust processing of environmental stimuli in the presence of dynamical cortical states.
Abstract: Brains were built by evolution to react swiftly to environmental challenges. Thus, sensory stimuli must be processed ad hoc, i.e., independent—to a large extent—from the momentary brain state incidentally prevailing during stimulus occurrence. Accordingly, computational neuroscience strives to model the robust processing of stimuli in the presence of dynamical cortical states. A pivotal feature of ongoing brain activity is the regional predominance of EEG eigenrhythms, such as the occipital alpha or the pericentral mu rhythm, both peaking spectrally at 10 Hz. Here, we establish a novel generalized concept to measure event-related desynchronization (ERD), which allows one to model neural oscillatory dynamics also in the presence of dynamical cortical states. Specifically, we demonstrate that a somatosensory stimulus causes a stereotypic sequence of first an ERD and then an ensuing amplitude overshoot (event-related synchronization), which at a dynamical cortical state becomes evident only if the natural relaxation dynamics of unperturbed EEG rhythms is utilized as reference dynamics. Moreover, this computational approach also encompasses the more general notion of a “conditional ERD,” through which candidate explanatory variables can be scrutinized with regard to their possible impact on a particular oscillatory dynamics under study. Thus, the generalized ERD represents a powerful novel analysis tool for extending our understanding of inter-trial variability of evoked responses and therefore the robust processing of environmental stimuli.
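For orientation, the sketch below computes only the classical band-power ERD that the paper generalizes: relative power change with respect to a fixed pre-stimulus baseline. The generalized ERD replaces this static baseline with the relaxation dynamics of the unperturbed rhythm; that extension is not reproduced here. Data and parameters are placeholders.

```python
# Classical event-related desynchronization (ERD/ERS) time course in percent.
import numpy as np
from scipy.signal import butter, filtfilt

def classical_erd(trials, fs=1000, band=(8.0, 12.0), baseline=(0.0, 0.5)):
    """trials: (n_trials, n_samples) single-channel epochs starting with the
    pre-stimulus interval; band in Hz; baseline window in seconds."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    power = filtfilt(b, a, trials, axis=1) ** 2       # instantaneous band power
    avg = power.mean(axis=0)                          # average over trials
    i0, i1 = int(baseline[0] * fs), int(baseline[1] * fs)
    ref = avg[i0:i1].mean()                           # pre-stimulus reference power
    return (avg - ref) / ref * 100.0
```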

01 Jan 2009
TL;DR: Two showcase application scenarios are studied: Lateralized Readiness Potential (LRP) analysis, where it is shown that a robust treatment of the EEG makes it possible to reduce the necessary number of trials for averaging and the detrimental influence of ocular artifacts, and single-trial classification in the context of Brain Computer Interfacing.
Abstract: Biomedical signals such as EEG are typically contaminated by measurement artifacts, outliers and non-standard noise sources. We propose to use techniques from robust statistics and machine learning to reduce the influence of such distortions. Two showcase application scenarios are studied: (a) Lateralized Readiness Potential (LRP) analysis, where we show that a robust treatment of the EEG makes it possible to reduce the necessary number of trials for averaging and the detrimental influence of, e.g., ocular artifacts, and (b) single-trial classification in the context of Brain Computer Interfacing, where outlier removal procedures can strongly enhance the classification performance.
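As a generic illustration of scenario (a), a robust alternative to plain trial averaging: a trimmed mean or median down-weights outlier trials such as those contaminated by eye blinks. This is a textbook robust-statistics example, not the authors' exact procedure.

```python
# Compare the ordinary trial average with two robust alternatives.
import numpy as np
from scipy.stats import trim_mean

def robust_average(trials, trim=0.1):
    """trials: (n_trials, n_samples) array of single-channel epochs (placeholder data)."""
    plain = trials.mean(axis=0)                                  # ordinary average
    trimmed = trim_mean(trials, proportiontocut=trim, axis=0)    # drop extreme 10% per side
    med = np.median(trials, axis=0)                              # fully robust alternative
    return plain, trimmed, med
```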

Proceedings ArticleDOI
04 May 2009
TL;DR: Although both techniques require more complex processing at the encoder side, their coding efficiency offers the chance to realize 3D stereo at the bitrate of conventional video for mobile services.
Abstract: This paper presents a study on different techniques for stereo video compression and its optimization for mobile 3D services. Stereo video enables 3D television, but as mobile services are subject to various limitations, including bandwidth, memory, and processing power, efficient compression is required. Three of the currently available MPEG coding standards are applicable for stereo video coding, namely H.264/AVC with and without stereo SEI message and H.264/MVC. These methods are evaluated with respect to the limitations of mobile services. The results clearly indicate that for a certain bitrate inter-view prediction as well as temporal prediction with hierarchical B pictures lead to a significantly increased subjective and objective quality. Although both techniques require more complex processing at the encoder side, their coding efficiency offers the chance to realize 3D stereo at the bitrate of conventional video for mobile services.

Journal ArticleDOI
TL;DR: This special issue publishes multidisciplinary studies on BMI/BCI research, incorporating papers from a broad range of disciplines: invasive and noninvasive BMIs/BCIs, techniques for decoding brain-derived signals, neuroethics, and applications of neurotechnology.

Journal ArticleDOI
TL;DR: A novel method is presented which aims to detect defective trials, taking into account the intended task, by use of Relevant Dimensionality Estimation (RDE), a new machine learning method for denoising in feature space; it effectively "cleans" the training data and thus allows better BCI classification.

01 Jan 2009
TL;DR: It is shown that the non-sparse MKL outperforms both the standard MKL and SVMs with average kernel mixtures on the PASCAL VOC data sets.
Abstract: Combining information from various image descriptors has become a standard technique for image classification tasks. Multiple kernel learning (MKL) approaches allow one to determine the optimal combination of such similarity matrices and the optimal classifier simultaneously. Most MKL approaches employ an l1-regularization on the mixing coefficients to promote sparse solutions, an assumption that is often violated in image applications where descriptors hardly encode orthogonal pieces of information. In this paper, we compare l1-MKL with a recently developed non-sparse MKL in object classification tasks. We show that the non-sparse MKL outperforms both the standard MKL and SVMs with average kernel mixtures on the PASCAL VOC data sets.

Book ChapterDOI
01 Jan 2009
TL;DR: There is a wide range of possible applications in which BCI technology is used to monitor other mental states, often even covert ones (see also [6] in the fMRI realm).
Abstract: The Berlin Brain-Computer Interface (BBCI) uses a machine learning approach to extract user-specific patterns from high-dimensional EEG features, optimized for revealing the user's mental state. Classical BCI applications are brain-actuated tools for patients, such as prostheses (see Section 4.1) or mental text entry systems ([1] and see [2–5] for an overview on BCI). In these applications, the BBCI uses natural motor skills of the users and specifically tailored pattern recognition algorithms for detecting the user's intent. But beyond rehabilitation, there is a wide range of possible applications in which BCI technology is used to monitor other mental states, often even covert ones (see also [6] in the fMRI realm). While this field is still largely unexplored, two examples from our studies are exemplified in Sections 4.3 and 4.4.

Book ChapterDOI
16 Mar 2009
TL;DR: SSA decomposes a multi-variate time-series into a stationary and a non-stationary subspace and can robustify other methods by restricting them to the stationary subspace.
Abstract: Non-stationarities are a ubiquitous phenomenon in time-series data, yet they pose a challenge to standard methodology: classification models and ICA components, for example, cannot be estimated reliably under distribution changes because the classic assumption of a stationary data generating process is violated. Conversely, understanding the nature of observed non-stationary behaviour often lies at the heart of a scientific question. To this end, we propose a novel unsupervised technique: Stationary Subspace Analysis (SSA). SSA decomposes a multi-variate time-series into a stationary and a non-stationary subspace. This factorization is a universal tool for furthering the understanding of non-stationary data. Moreover, we can robustify other methods by restricting them to the stationary subspace. We demonstrate the performance of our novel concept in simulations and present a real world application from Brain Computer Interfacing.

Proceedings ArticleDOI
04 May 2009
TL;DR: Different methods for coding of stereo video content for mobile 3DTV are examined and compared and results are provided in average subjective scoring, PSNR and VSSIM (Video Structure Similarity).
Abstract: Different methods for coding of stereo video content for mobile 3DTV are examined and compared. These methods are H.264/MPEG-4 AVC simulcast transmission, H.264/MPEG-4 AVC Stereo SEI message, mixed resolution coding, and video plus depth coding using MPEG-C Part 3. The first two methods are based on a full left and right video (V+V) representation, the third method uses a full and a subsampled view and the fourth method is based on a one video plus associated depth (V+D) representation. Each method was optimized and tested using professional 3D video content. Subjective tests were carried out on a small size autostereoscopic display that is used in mobile devices. A comparison of the four methods at two different bitrates is presented. Results are provided in average subjective scoring, PSNR and VSSIM (Video Structure Similarity).
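Of the objective metrics reported above, PSNR is simple enough to spell out; VSSIM is more involved and not reproduced. A minimal computation for 8-bit frames:

```python
# Peak signal-to-noise ratio for 8-bit frames (reference vs. decoded frame).
import numpy as np

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```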

Journal ArticleDOI
TL;DR: The results show that not only regression but also component-based methods are vulnerable to over- or under-compensation and can cause significant distortion of EEG.
Abstract: The aim is to compare various fully automated methods for reducing ocular artifacts from EEG recordings. Seven automated methods for reducing ocular artifacts (the regression approach and six component-based methods) have been applied to 36 data sets from two different labs. The influence of various noise sources is analyzed, and the ratio between corrected and uncorrected EEG spectra has been used to quantify the distortion. Results: The results show that not only regression but also component-based methods are vulnerable to over- or under-compensation and can cause significant distortion of EEG. Despite common belief, component-based methods did not demonstrate an advantage over the simple regression method. Conclusion: The newly proposed evaluation criterion proved to be an effective approach to evaluate the 252 results from 36 data sets and 7 different methods. Significance: Currently, the regression method provides the most robust and stable results and is therefore the state-of-the-art method for fully automated reduction of ocular artifacts.
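For context, the regression approach singled out above can be sketched in a few lines: estimate how the EOG channels propagate into each EEG channel by least squares and subtract that contribution. Array names are placeholders, signals are assumed mean-free, and in practice the coefficients are usually estimated on calibration data containing deliberate eye movements and blinks.

```python
# Regression-based ocular artifact reduction (simple sketch, not the paper's code).
# eeg: (samples x eeg_channels), eog: (samples x eog_channels), both mean-free.
import numpy as np

def regress_out_eog(eeg, eog):
    coeffs, *_ = np.linalg.lstsq(eog, eeg, rcond=None)   # (eog_ch x eeg_ch) propagation
    return eeg - eog @ coeffs                             # corrected EEG
```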

01 Jan 2009
TL;DR: In this article, a comparison of measurement techniques for graphitic, light-absorbing, and elemental carbon, and non-volatile particle volume under field conditions is presented, showing that the non-volatile particle residues in the sub-µm range are closely associated with light-absorbing carbon (LAC) and graphitic carbon (GC).
Abstract: Part 2: Comparison of measurement techniques for graphitic, light-absorbing, and elemental carbon, and non-volatile particle volume under field conditions. Since the end of 2008, four experimental methods have been applied in the German Ultrafine Aerosol Network (GUAN) to characterize the light-absorbing and low-volatile components in the atmospheric aerosol known as "soot". These methods include: a) Multi-Angle Absorption Photometry (MAAP), b) Raman spectroscopy, c) thermographic analysis of samples from Berner impactors, and d) determination of the particle volume of the non-volatile fraction (at 300 °C) from number size distributions. The mass concentration of Graphitic Carbon (GC) measured by Raman spectroscopy correlated well with the light absorption coefficient measured by MAAP, and yielded GC absorption efficiencies in the range of 4.7 to 4.9 m² g⁻¹. The comparison between elemental carbon (EC) mass concentrations from the thermographic method applied on Berner impactor samples and the light absorption measurement yielded a median EC absorption efficiency of 7.5 m² g⁻¹, but showed considerable scattering within the data set. The particle volume of the non-volatile fraction compared relatively well with the light absorption measurement when assuming an effective particle density in the range of 0.8 to 1.1 g cm⁻³. The results suggested that the non-volatile particle residues in the sub-µm range are closely associated with light-absorbing carbon (LAC) and GC.

Book ChapterDOI
05 Jun 2009
TL;DR: The proposed maxmin CSP method significantly improves the classical CSP approach in multiple BCI scenarios and can transform the respective complex mathematical program into a simple generalized eigenvalue problem and thus obtain robust spatial filters very efficiently.
Abstract: Electroencephalographic single-trial analysis requires methods that are robust with respect to noise, artifacts and non-stationarity among other problems. This work contributes by developing a maxmin approach to robustify the common spatial patterns (CSP) algorithm. By optimizing the worst-case objective function within a prefixed set of the covariance matrices, we can transform the respective complex mathematical program into a simple generalized eigenvalue problem and thus obtain robust spatial filters very efficiently. We test our maxmin CSP method with real world brain-computer interface (BCI) data sets in which we expect substantial fluctuations caused by day-to-day or paradigm-to-paradigm variability or different forms of stimuli. The results clearly show that the proposed method significantly improves the classical CSP approach in multiple BCI scenarios.
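The computational core the abstract refers to, standard CSP solved as a generalized eigenvalue problem, can be sketched as follows; in the maxmin variant the two class covariances would be replaced by worst-case covariances from the prefixed uncertainty set, which is not reproduced here.

```python
# Standard CSP spatial filters via a generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eigh

def csp_filters(C1, C2, n_pairs=3):
    """C1, C2: class-wise spatial covariance matrices (channels x channels)."""
    evals, evecs = eigh(C1, C1 + C2)          # generalized eigendecomposition, ascending
    order = np.argsort(evals)                  # small eigenvalues favour class 2, large class 1
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return evecs[:, picks].T                   # spatial filters, one per row
```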

Journal IssueDOI
TL;DR: This work proposes an architecture for an autonomous and self-sufficient monitoring and protection system for devices and infrastructure inspired by network intrusion detection techniques, and proposes a signature-less detection of abnormal events and zero-day attacks.
Abstract: Fixed mobile convergence (FMC) based on the 3GPP IP Multimedia Subsystem (IMS) is considered one of the most important communication technologies of this decade. Yet this all-IP-based network technology brings about the growing danger of security vulnerabilities in communication and data services. Protecting IMS infrastructure servers against malicious exploits poses a major challenge due to the huge number of systems that may be affected. We approach this problem by proposing an architecture for an autonomous and self-sufficient monitoring and protection system for devices and infrastructure inspired by network intrusion detection techniques. The crucial feature of our system is a signature-less detection of abnormal events and zero-day attacks. These attacks may be hidden in a single message or spread across a sequence of messages. Anomalies identified at any of the network domain's ingresses can be further analyzed for discriminative patterns that can be immediately distributed to all edge nodes in the network domain.
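A generic sketch of what signature-less anomaly detection on protocol messages can look like: map messages (e.g. SIP requests) to character n-gram features, fit a one-class model on normal traffic, and flag low-scoring messages. This is an illustration with placeholder traffic, not the deployed system described above.

```python
# Signature-less anomaly detection sketch: character n-gram features + one-class SVM.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import OneClassSVM

normal_messages = ["INVITE sip:alice@example.com SIP/2.0 ...", "..."]   # placeholder traffic
vec = TfidfVectorizer(analyzer="char", ngram_range=(3, 4))
X_train = vec.fit_transform(normal_messages)

detector = OneClassSVM(nu=0.01, kernel="rbf", gamma="scale").fit(X_train)

def is_anomalous(message):
    # -1 means the message falls outside the learned model of normal traffic
    return detector.predict(vec.transform([message]))[0] == -1
```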

Book ChapterDOI
05 Jun 2009
TL;DR: The effectiveness of introducing an intermediary state between state probabilities and interface command, driven by a dynamic control law, is investigated, and the strategies used by two subjects to achieve idle-state BCI control are outlined.
Abstract: The use of electroencephalography (EEG) for Brain Computer Interface (BCI) provides a cost-efficient, safe, portable and easy-to-use BCI for both healthy users and the disabled. This paper will first briefly review some of the current challenges in BCI research and then discuss two of them in more detail, namely modeling the "no command" (rest) state and the use of control paradigms in BCI. For effective prosthetic control of a BCI system, or when employing BCI as an additional control channel for gaming or other generic man-machine interfacing, a user should not be required to be continuously in an active state, as is current practice. In our approach, the signals are first transduced by computing Gaussian probability distributions of signal features for each mental state; then a prior distribution of the idle state is inferred and subsequently adapted during use of the BCI. We furthermore investigate the effectiveness of introducing an intermediary state between state probabilities and interface command, driven by a dynamic control law, and outline the strategies used by two subjects to achieve idle-state BCI control.
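A hedged sketch of the transduction step described above: Gaussian class-conditional models for the control states plus an explicit idle-state prior, with a command emitted only when one control class clearly dominates. All parameters are hypothetical, and the dynamic control law that the paper places between these probabilities and the interface is not reproduced.

```python
# Idle-aware Gaussian decoder sketch (illustration only).
import numpy as np
from scipy.stats import multivariate_normal

class IdleAwareDecoder:
    def __init__(self, class_means, class_covs, idle_mean, idle_cov,
                 idle_prior=0.5, threshold=0.7):
        self.states = [("idle", idle_mean, idle_cov)] + [
            (f"class_{i}", m, c) for i, (m, c) in enumerate(zip(class_means, class_covs))]
        self.priors = np.r_[idle_prior,
                            np.full(len(class_means), (1 - idle_prior) / len(class_means))]
        self.threshold = threshold

    def decode(self, x):
        lik = np.array([multivariate_normal.pdf(x, mean=m, cov=c)
                        for _, m, c in self.states])
        post = lik * self.priors
        post /= post.sum()
        best = int(np.argmax(post))
        if best == 0 or post[best] < self.threshold:
            return "no command"          # stay idle unless a control class clearly dominates
        return self.states[best][0]
```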



Patent
02 Nov 2009
TL;DR: In this paper, a non-linear model is elaborated for training objects based on a machine learning method, especially a kernel-based learning method, in such a manner that it allows a statement regarding at least one property for at least one object; at least one measure is then automatically determined by means of an analytical element using the representer theorem, said measure indicating which of the training objects that have become part of the non-linear model have the strongest influence on the predictions of the model.
Abstract: The invention relates to a method and a device for the automatic analysis of a non-linear model for predicting the properties of an object which is a priori not characterized. According to the method, a) the non-linear model is elaborated for training objects based on a machine learning method, especially a kernel-based learning method, in such a manner that it allows a statement regarding at least one property for at least one object, b) at least one measure is automatically determined by means of an analytical element using the representer theorem, said measure indicating which of the training objects that have become part of the non-linear model have the strongest influence on the predictions of the non-linear model, and c) a prioritized data set is automatically produced in which the measures of the influencing factors are put in the order of a predetermined condition.
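One plausible reading of step (b), sketched with kernel ridge regression: by the representer theorem the prediction is f(x) = sum_i alpha_i k(x_i, x), so |alpha_i * k(x_i, x)| can be used to rank training objects by their influence on a given prediction. This is an illustration of the idea, not the patented procedure, and the model choice and parameters are assumptions.

```python
# Rank training objects by their contribution to a single prediction.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.metrics.pairwise import rbf_kernel

def rank_influential_training_objects(X_train, y_train, x_query, gamma=0.1, alpha=1.0):
    model = KernelRidge(alpha=alpha, kernel="rbf", gamma=gamma).fit(X_train, y_train)
    k_query = rbf_kernel(X_train, x_query.reshape(1, -1), gamma=gamma).ravel()
    influence = np.abs(model.dual_coef_.ravel() * k_query)   # |alpha_i * k(x_i, x)|
    order = np.argsort(influence)[::-1]                       # prioritized data set
    return order, influence[order]
```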

Journal ArticleDOI
TL;DR: Despite its young age, functional Magnetic Resonance Imaging (fMRI) has become one of the most popular brain imaging techniques; however, the relationship between brain activity and the blood oxygen level dependent (BOLD) contrast as measured with fMRI is not yet fully understood.
Abstract: Poster presentation. Despite its young age, functional Magnetic Resonance Imaging (fMRI) has become one of the most popular brain imaging techniques. However, the relationship between brain activity and the blood oxygen level dependent (BOLD) contrast as measured with fMRI, the so-called neurovascular coupling, is not yet fully understood. One possibility of experimentally manipulating the neurovascular coupling mechanisms is the administration of vaso-active and neuro-active substances, such as acetylcholine (ACh). Combining such pharmacological interventions with simultaneous measurements of the electrophysiological and BOLD response allows for deeper insights into the dependencies between the neural and hemodynamic response to sensory stimulation.

Journal ArticleDOI
TL;DR: A combination of machine learning techniques, employing a graph kernel, Gaussian process regression and clustered cross-validation, is introduced to find ligands of peroxisome-proliferator activated receptor gamma (PPAR-γ) in a virtual screening study.
Abstract: For a virtual screening study, we introduce a combination of machine learning techniques, employing a graph kernel, Gaussian process regression and clustered cross-validation. The aim was to find ligands of peroxisome-proliferator activated receptor gamma (PPAR-γ). The receptors in the PPAR family belong to the steroid-thyroid-retinoid superfamily of nuclear receptors and act as transcription factors. They play a role in the regulation of lipid and glucose metabolism in vertebrates and are linked to various human processes and diseases. For this study, we used a dataset of 176 PPAR-γ agonists published by Ruecker et al. ...
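A hedged sketch of the clustered cross-validation idea: compounds are split so that structurally related molecules never end up in both training and test folds. The paper's graph kernel on molecular structures is replaced here by a generic kernel on precomputed descriptors, and the descriptor, activity, and cluster files are hypothetical placeholders.

```python
# Clustered cross-validation with Gaussian process regression (illustration only).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import GroupKFold, cross_val_score

X = np.load("ppar_descriptors.npy")      # hypothetical descriptor matrix
y = np.load("ppar_activities.npy")       # hypothetical activity values
clusters = np.load("ppar_clusters.npy")  # cluster id per compound (e.g. structural clusters)

gpr = GaussianProcessRegressor(kernel=RBF(length_scale=1.0) + WhiteKernel(),
                               normalize_y=True)
scores = cross_val_score(gpr, X, y, groups=clusters,
                         cv=GroupKFold(n_splits=5), scoring="r2")
print(f"clustered-CV R^2: {scores.mean():.3f} +/- {scores.std():.3f}")
```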

Journal ArticleDOI
TL;DR: Multi-channel EEG was acquired from four subjects while they were interacting with computer applications that have been specifically designed to provoke, in alternating phases, neutral, positive, or negative emotions.
Abstract: Introduction and method. Previous neurophysiological studies of emotions have focused on the affective response to the emotional valence of a situation, where the reaction is rooted in perception or memories [1]. Furthermore, emotions have been investigated with regard to the trait of a subject, e.g. anger-out vs. anger control [2], and regarding motivational direction, e.g. approach vs. withdrawal [3]. Aiming at an enhancement of human-computer interaction by incorporating the emotional state of the user, a novel type of investigation is required. Neuronal correlates of emotional reactions related to interaction (e.g. annoyance due to one's own failure or an error of the machine; joy of success) have to be analyzed, and methods for their detection in real time need to be developed. In the present study we acquired multi-channel EEG from four subjects while they were interacting with computer applications that have been specifically designed to provoke, in alternating phases, neutral, positive, or negative (stress, annoyance) emotions. In particular, a two-player variant of a two-alternative forced-choice task had to be performed while, in alternating periods, either one or the other player was given "unfair" preferential treatment by providing the task stimulus slightly in advance. This bias could not be noticed by the players.