
Showing papers by "Klaus-Robert Müller" published in 2007


Journal ArticleDOI
TL;DR: It is proposed that the key to the BBCI system's rapid efficiency is its flexibility, owed to complex but physiologically meaningful features, and its adaptivity, which respects the enormous inter-subject variability.

865 citations


Journal Article
TL;DR: This paper proposes a new method called importance weighted cross validation (IWCV) and proves its unbiasedness even under covariate shift; IWCV is the only procedure applicable to unbiased classification under covariate shift, whereas alternatives exist for regression.
Abstract: A common assumption in supervised learning is that the input points in the training set follow the same probability distribution as the input points that will be given in the future test phase. However, this assumption is not satisfied, for example, when predictions must be extrapolated outside the training region. The situation where the training input points and test input points follow different distributions, while the conditional distribution of output values given input points is unchanged, is called covariate shift. Under covariate shift, standard model selection techniques such as cross validation do not work as desired, since their unbiasedness is no longer maintained. In this paper, we propose a new method called importance weighted cross validation (IWCV), for which we prove unbiasedness even under covariate shift. The IWCV procedure is the only one that can be applied for unbiased classification under covariate shift, whereas alternatives to IWCV exist for regression. The usefulness of our proposed method is illustrated by simulations, and furthermore demonstrated in the brain-computer interface, where strong non-stationarity effects can be seen between training and test sessions.
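In code terms, IWCV differs from ordinary cross validation only in that each held-out loss is multiplied by the density ratio w(x) = p_test(x)/p_train(x). A minimal sketch, assuming these ratios are known or already estimated; `model_factory` is a hypothetical callable returning a fresh scikit-learn-style classifier:

```python
import numpy as np

def iwcv_risk(model_factory, X, y, weights, n_folds=5, seed=0):
    """Importance weighted cross validation (IWCV), minimal sketch.

    Each held-out 0/1 loss is weighted by w(x) = p_test(x) / p_train(x),
    which restores the unbiasedness of the risk estimate under covariate
    shift. `weights` holds these (known or estimated) density ratios.
    """
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), n_folds)
    risks = []
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        model = model_factory()
        model.fit(X[train_idx], y[train_idx])
        losses = (model.predict(X[test_idx]) != y[test_idx]).astype(float)
        risks.append(np.mean(weights[test_idx] * losses))  # weighted fold risk
    return float(np.mean(risks))
```

Model selection then proceeds as usual: evaluate `iwcv_risk` for each candidate model or hyper-parameter setting and keep the minimizer.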

807 citations


Journal ArticleDOI
TL;DR: An experimental analysis of multiview video coding (MVC) for various temporal and inter-view prediction structures is presented, showing that prediction with temporal reference pictures is highly efficient, but that, for 20% of a picture's blocks on average, prediction with reference pictures from adjacent views is more efficient.
Abstract: An experimental analysis of multiview video coding (MVC) for various temporal and inter-view prediction structures is presented. The compression method is based on the multiple reference picture technique of the H.264/AVC video coding standard. The idea is to exploit the statistical dependencies from both temporal and inter-view reference pictures for motion-compensated prediction. The effectiveness of this approach is demonstrated by an experimental analysis of temporal versus inter-view prediction in terms of the Lagrange cost function. The results show that prediction with temporal reference pictures is highly efficient, but that for 20% of a picture's blocks, on average, prediction with reference pictures from adjacent views is more efficient. Hierarchical B pictures are used as the basic structure for temporal prediction. Their advantages are combined with inter-view prediction at different temporal hierarchy levels, ranging from simulcast coding with no inter-view prediction up to full-level inter-view prediction. When using inter-view prediction at key picture temporal levels, average gains of 1.4-dB peak signal-to-noise ratio (PSNR) are reported; when additionally using inter-view prediction at nonkey picture temporal levels, average gains of 1.6-dB PSNR are reported. For some cases, gains of more than 3 dB, corresponding to bit-rate savings of up to 50%, are obtained.
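The Lagrange cost function mentioned above trades distortion against rate, J = D + λR; for each block the encoder keeps whichever reference picture, temporal or inter-view, minimizes J. A schematic sketch with hypothetical per-block candidate values:

```python
def best_reference(candidates, lam):
    """Return the candidate minimizing the Lagrange cost J = D + lam * R.

    `candidates` is a list of (label, distortion, rate_bits) triples mixing
    temporal and inter-view reference pictures for one block.
    """
    return min(candidates, key=lambda c: c[1] + lam * c[2])

# hypothetical candidates for one block: (label, SSD distortion, bits)
block = [("temporal_ref0", 1200.0, 96), ("interview_left", 1100.0, 180)]
print(best_reference(block, lam=2.0))  # the temporal reference wins here
```

Averaged over many blocks, such per-block decisions yield the kind of statistic quoted above, with inter-view references winning for roughly 20% of blocks.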

645 citations


Proceedings ArticleDOI
12 Nov 2007
TL;DR: The impact on image quality of rendered arbitrary intermediate views is investigated and analyzed in a second part, comparing compressed multi-view video plus depth data at different bit rates with the uncompressed original.
Abstract: A study on the video plus depth representation for multi-view video sequences is presented. Such a 3D representation enables functionalities like 3D television and free viewpoint video. Compression is based on algorithms for multi-view video coding, which exploit statistical dependencies from both temporal and inter-view reference pictures for the prediction of both color and depth data. The coding efficiency of prediction structures with and without inter-view reference pictures is analyzed for multi-view video plus depth data, with reported gains in luma PSNR of up to 0.5 dB for depth and 0.3 dB for color. The main benefit of using a multi-view video plus depth representation is that intermediate views can be easily rendered. Therefore, the impact on the image quality of arbitrary rendered intermediate views is investigated and analyzed in a second part, comparing compressed multi-view video plus depth data at different bit rates with the uncompressed original.

485 citations


Proceedings Article
03 Dec 2007
TL;DR: This work defines features based on a variant of the common spatial patterns (CSP) algorithm that are constructed to be invariant with respect to such nonstationarities, e.g. by incorporating disturbance covariance matrices that capture fluctuations in visual processing.
Abstract: Brain-Computer Interfaces can suffer from a large variance of the subject conditions within and across sessions. For example, vigilance fluctuations in the individual, variable task involvement, workload, etc. alter the characteristics of EEG signals and thus challenge stable BCI operation. In the present work we aim to define features, based on a variant of the common spatial patterns (CSP) algorithm, that are constructed to be invariant with respect to such nonstationarities. We enforce invariance properties by adding terms, such as disturbance covariance matrices estimated from fluctuations in visual processing, to the denominator of a Rayleigh coefficient representation of CSP. In this manner, physiological prior knowledge can be used to shape the classification engine for BCI. As a proof of concept, we present a BCI classifier that is robust to changes in the level of parietal α-activity; in other words, the EEG decoding still works when there are lapses in vigilance.
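Concretely, CSP filters solve a generalized eigenvalue problem, and the invariance term enters as an additive penalty in the denominator covariance. A minimal sketch, assuming the class covariances Sigma1, Sigma2 and a disturbance covariance Delta (e.g. of parietal alpha fluctuations) are given:

```python
import numpy as np
from scipy.linalg import eigh

def invariant_csp(Sigma1, Sigma2, Delta, xi=0.5, n_filters=3):
    """CSP spatial filters regularized to ignore a known disturbance.

    Maximizes the Rayleigh quotient
        w' Sigma1 w / (w' (Sigma1 + Sigma2 + xi * Delta) w),
    so filters that pick up the nonstationary activity in Delta are
    penalized. xi controls the strength of the invariance constraint.
    """
    evals, evecs = eigh(Sigma1, Sigma1 + Sigma2 + xi * Delta)  # A w = lam B w
    order = np.argsort(evals)[::-1]        # largest quotient first
    return evecs[:, order[:n_filters]]     # one spatial filter per column
```

Setting xi = 0 recovers ordinary CSP; increasing xi trades discriminativity for robustness to the modeled disturbance.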

233 citations


Journal ArticleDOI
TL;DR: This contribution introduces the Berlin Brain–Computer Interface (BBCI) and presents setups where the user is provided with intuitive control strategies in plausible gaming applications that use biofeedback.
Abstract: The investigation of innovative Human-Computer Interfaces (HCI) provides a challenge for future multimedia research and development. Brain-Computer Interfaces (BCI) exploit the human ability to communicate and control while bypassing the classical neuromuscular communication channels. In general, BCIs offer a possibility of communication for people with severe neuromuscular disorders, such as Amyotrophic Lateral Sclerosis (ALS) or spinal cord injury. Beyond medical applications, a BCI in conjunction with exciting multimedia applications, e.g., a dexterity game, could define a new level of control possibilities also for healthy users by decoding information directly from the user's brain, as reflected in electroencephalographic (EEG) signals recorded non-invasively from the user's scalp. This contribution introduces the Berlin Brain-Computer Interface (BBCI) and presents setups where the user is provided with intuitive control strategies in plausible gaming applications that use biofeedback. Though still at its beginning, the BBCI thus adds a new dimension to multimedia research by offering the user an additional and independent communication channel based on brain activity only. First successful experiments have already yielded inspiring proofs-of-concept. A diversity of multimedia application models, say computer games, and their specific intuitive control strategies, as well as various Virtual Reality (VR) scenarios, are now open for BCI research aiming at a further speed-up of user adaptation and an increase of learning success and transfer bit rates.

223 citations


Journal ArticleDOI
TL;DR: It is argued that the situation can be significantly improved by increasing incentives for researchers to publish their software under an open source model, and a resource of peer reviewed software accompanied by short articles would be highly valuable to both the machine learning and the general scientific community.
Abstract: Open source tools have recently reached a level of maturity which makes them suitable for building large-scale real-world systems. At the same time, the field of machine learning has developed a large body of powerful learning algorithms for diverse applications. However, the true potential of these methods is not realized, since existing implementations are not openly shared, resulting in software with low usability and weak interoperability. We argue that this situation can be significantly improved by increasing incentives for researchers to publish their software under an open source model. Additionally, we outline the problems authors are faced with when trying to publish algorithmic implementations of machine learning methods. We believe that a resource of peer reviewed software accompanied by short articles would be highly valuable to both the machine learning and the general scientific community.

211 citations


01 Jan 2007
TL;DR: This chapter presents an EEG-based system that is able to detect high mental workload in drivers operating under real traffic conditions, and shows that in these cases workload mitigation induced during a reaction-time experiment leads to an increase in the driver's overall task performance.
Abstract: The ability to directly detect mental over- and under-load in human operators is an essential feature of complex monitoring and control processes. Such processes can be found, for example, in industrial production lines, in aviation, as well as in common everyday tasks such as driving. In this chapter, we present an EEG-based system that is able to detect high mental workload in drivers operating under real traffic conditions. This information is used immediately to mitigate the workload typically induced by the influx of information generated by the car's electronic systems. Two experimental paradigms were tested: an auditory workload scheme and a mental calculation task. The result is twofold: the system's performance is strongly subject-dependent; however, the results are good to excellent for the majority of subjects. We show that in these cases an induced mitigation during a reaction-time experiment leads to an increase in the driver's overall task performance.

177 citations


01 Jan 2007
TL;DR: The chapter briefly assembles information on recording methods, introduces the physiological signals that are used in BCI paradigms, and expands on clinical and human-machine interface (HMI) applications for BCI.
Abstract: We provide a compact overview of invasive and noninvasive brain-computer interfaces (BCI). This serves as a high-level introduction to an exciting and active field and sets the scene for the following sections of this book. In particular, the chapter briefly assembles information on recording methods and introduces the physiological signals that are being used in BCI paradigms. Furthermore, we review the spectrum from subject training to machine learning approaches. We expand on clinical and human-machine interface (HMI) applications for BCI and discuss future directions and open challenges in the BCI field.

166 citations



01 Jan 2007
TL;DR: Methods and techniques are presented that take advantage of modern graphics hardware for real-time tracking and recognition of feature points, with a focus on the generation of feature vectors from input images in the various processing stages.
Abstract: With the addition of freely programmable components to modern graphics hardware, graphics processing units (GPUs) become increasingly interesting for general-purpose computations, especially due to their parallel buffer processing. In this paper we present methods and techniques that take advantage of modern graphics hardware for real-time tracking and recognition of feature points. The focus lies on the generation of feature vectors from input images in the various stages. For the generation of feature vectors, the Scale Invariant Feature Transform (SIFT) method [Low04a] is used due to its high stability against rotation, scale, and lighting-condition changes of the processed images. We present results of the various stages of feature vector generation in our GPU implementation and compare it to the CPU version of the SIFT algorithm. The approach works well on GeForce 6 series graphics boards and above, and takes advantage of new hardware features, e.g., dynamic branching and multiple render targets (MRT) in the fragment processor [KF05]. With the presented methods, feature tracking at real-time frame rates can be achieved on the GPU while the CPU remains free for other tasks.
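For orientation, the first of those stages, the difference-of-Gaussians scale space in which SIFT keypoint candidates are detected, looks roughly as follows; this is a CPU-side NumPy/SciPy sketch for illustration only, not the paper's GPU shader implementation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dog_octave(image, n_scales=4, sigma0=1.6):
    """Difference-of-Gaussians stack for one octave (SIFT's first stage).

    Keypoint candidates are the local extrema of `dogs` across space and
    scale. The paper maps these filtering passes onto GPU fragment
    programs to reach real-time frame rates.
    """
    k = 2.0 ** (1.0 / n_scales)
    blurred = [gaussian_filter(image.astype(float), sigma0 * k ** i)
               for i in range(n_scales + 1)]
    dogs = [blurred[i + 1] - blurred[i] for i in range(n_scales)]
    return np.stack(dogs)
```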

01 Jan 2007
TL;DR: This chapter provides an overview of evaluation criteria used in BCI research and discusses different methods for calculating the information transfer rate, including a criterion for taking into account the response time.
Abstract: To analyze the performance of BCI systems, some evaluation criteria must be applied. The most popular is accuracy or error rate. Because of some strict prerequisites, accuracy is not always a suitable criterion, and other evaluation criteria have been proposed. This chapter provides an overview of evaluation criteria used in BCI research. An example from the BCI Competition 2005 is used to display results using different criteria. Within this chapter, evaluation criteria for BCI systems with more than two classes are presented, criteria for evaluating discrete and continuous output are included, and the problem of evaluating self-paced BCI operation is addressed. Special emphasis is put on discussing different methods for calculating the information transfer rate. Finally, a criterion for taking into account the response time is suggested.
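The most widely used of those transfer-rate definitions is Wolpaw's formula, which assumes N equiprobable classes and symmetric errors. A small sketch:

```python
import math

def wolpaw_itr(p, n_classes, trials_per_minute):
    """Information transfer rate in bits/min (Wolpaw formula).

    Bits per trial: log2(N) + p*log2(p) + (1 - p)*log2((1 - p)/(N - 1)),
    assuming N equiprobable classes and accuracy p with symmetric errors.
    """
    if p >= 1.0:
        bits = math.log2(n_classes)
    elif p <= 0.0:
        bits = 0.0                      # degenerate case, sketch only
    else:
        bits = (math.log2(n_classes) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n_classes - 1)))
    return bits * trials_per_minute

print(wolpaw_itr(0.90, n_classes=2, trials_per_minute=20))  # ~10.6 bits/min
```

As the chapter stresses, this criterion presumes strict conditions (stable accuracy, fixed trial timing); the alternative criteria it surveys relax some of them.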

Journal Article
TL;DR: Results of a recent feedback study with six healthy subjects with no or very little experience with BCI control are encouraging for an EEG-based BCI system in untrained subjects that is independent of peripheral nervous system activity and does not rely on evoked potentials.
Abstract: The Berlin Brain-Computer Interface (BBCI) project develops an EEG-based BCI system that uses machine learning techniques to adapt to the specific brain signatures of each user. This concept allows high-quality feedback to be achieved already in the very first session, without subject training. Here we present the broad range of investigations and experiments that have been performed within the BBCI project. The first kind of experiment analyzes the predictability of the performing limb from premovement (readiness) potentials, including successful feedback experiments. The limits with respect to the spatial resolution of the somatotopy are explored by contrasting brain patterns of movements of (1) left vs. right foot, (2) index vs. little finger within one hand, and (3) finger vs. wrist vs. elbow vs. shoulder within one arm. A study of phantom movements of patients with traumatic amputations shows the potential applicability of this BCI approach. In a complementary approach, voluntary modulations of sensorimotor rhythms caused by motor imagery (left hand vs. right hand vs. foot) are translated into a proportional feedback signal. We report results of a recent feedback study with six healthy subjects with no or very little experience with BCI control: half of the subjects achieved an information transfer rate above 35 bits per minute (bpm). Furthermore, one subject used the BBCI to operate a mental typewriter in free spelling mode. The overall spelling speed was 4.5 letters per minute, including the time needed for the correction of errors. These results are encouraging for an EEG-based BCI system in untrained subjects that is independent of peripheral nervous system activity and does not rely on evoked potentials.

Book ChapterDOI
22 Jul 2007
TL;DR: This work presents the mental text entry application 'Hex-o-Spell' which incorporates principles of Human-Computer Interaction research into BCI feedback design and utilises the high visual display bandwidth to help compensate for the extremely limited control bandwidth.
Abstract: Brain-Computer Interfaces (BCIs) are systems capable of decoding neural activity in real time, thereby allowing a computer application to be directly controlled by the brain. Since the characteristics of such direct brain-to-computer interaction are limited in several respects, one major challenge in BCI research is intelligent front-end design. Here we present the mental text entry application 'Hex-o-Spell', which incorporates principles of Human-Computer Interaction research into BCI feedback design. The system utilises the high visual display bandwidth to help compensate for the extremely limited control bandwidth: it operates with only two mental states, where the timing of the state changes encodes most of the information. The display is visually appealing, and control is robust. The effectiveness and robustness of the interface was demonstrated at CeBIT 2006 (the world's largest IT fair), where two subjects operated the mental text entry system at a speed of up to 7.6 char/min.

Journal ArticleDOI
TL;DR: It can be argued that 3-D scene and texture representation techniques are mature enough to serve and fulfill the requirements of the 3-D extraction, transmission and display sides in a 3DTV scenario.
Abstract: 3-D scene representation is utilized during the scene extraction, modeling, transmission and display stages of a 3DTV framework. To this end, different representation technologies have been proposed to fulfill the requirements of the 3DTV paradigm. Dense point-based methods are appropriate for free-view 3DTV applications, since they can generate novel views easily. As surface representations, polygonal meshes are quite popular due to their generality and current hardware support. Unfortunately, there is no inherent smoothness in their description, and the resulting renderings may contain unrealistic artifacts. NURBS surfaces have embedded smoothness and efficient tools for editing and animation, but they are more suitable for synthetic content. Smooth subdivision surfaces, which offer a good compromise between polygonal meshes and NURBS surfaces, require sophisticated geometry modeling tools and are usually difficult to obtain. One recent trend in surface representation is point-based modeling, which can meet most of the requirements of 3DTV; however, the relevant state of the art is not yet mature. On the other hand, volumetric representations encapsulate neighborhood information that is useful for the reconstruction of surfaces and lend themselves to parallel implementations of multiview stereo algorithms. Apart from the representation of 3-D structure by different primitives, the texturing of scenes is also essential for realistic scene rendering. Image-based rendering techniques directly render novel views of a scene from the acquired images, without requiring any explicit geometry or texture representation. 3-D human face and body modeling facilitates the realistic animation and rendering of human figures, which is crucial for a 3DTV system that might demand real-time animation of human bodies. Physically based modeling and animation techniques produce impressive results and thus have potential for use in a 3DTV framework for modeling and animating dynamic scenes. As a concluding remark, it can be argued that 3-D scene and texture representation techniques are mature enough to serve and fulfill the requirements of the 3-D extraction, transmission and display sides in a 3DTV scenario.

Journal ArticleDOI
TL;DR: This work investigates the use of different Machine Learning methods to construct models for aqueous solubility, evaluating all approaches in terms of their prediction accuracy and the extent to which the individual error bars can faithfully represent the actual prediction error.
Abstract: We investigate the use of different Machine Learning methods to construct models for aqueous solubility. Models are based on about 4000 compounds, including an in-house set of 632 drug discovery molecules of Bayer Schering Pharma. For each method, we also consider an appropriate method to obtain error bars, in order to estimate the domain of applicability (DOA) for each model. Here, we investigate error bars from a Bayesian model (Gaussian Process (GP)), an ensemble based approach (Random Forest), and approaches based on the Mahalanobis distance to training data (for Support Vector Machine and Ridge Regression models). We evaluate all approaches in terms of their prediction accuracy (in cross-validation, and on an external validation set of 536 molecules) and the extent to which the individual error bars can faithfully represent the actual prediction error.
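For the distance-based error bars, the underlying quantity is simply each query compound's Mahalanobis distance to the training data; large distances flag predictions outside the domain of applicability. A minimal sketch, with the descriptor matrices assumed preprocessed:

```python
import numpy as np

def mahalanobis_doa(X_train, X_query, eps=1e-6):
    """Mahalanobis distance of query points to the training distribution.

    Distances far above those typical of the training set indicate that a
    prediction lies outside the model's domain of applicability (DOA) and
    should carry a wide error bar.
    """
    mu = X_train.mean(axis=0)
    cov = np.cov(X_train, rowvar=False) + eps * np.eye(X_train.shape[1])
    prec = np.linalg.inv(cov)                # regularized inverse covariance
    diff = X_query - mu
    return np.sqrt(np.einsum("ij,jk,ik->i", diff, prec, diff))
```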

Journal ArticleDOI
TL;DR: This work presents a statistical modeling of aqueous solubility based on measured data, using a Gaussian Process nonlinear regression model (GPsol), and shows that the developed model achieves much higher accuracy than available commercial tools for the prediction ofsolubility of electrolytes.
Abstract: Accurate in silico models for predicting aqueous solubility are needed in drug design and discovery and many other areas of chemical research. We present a statistical modeling of aqueous solubility based on measured data, using a Gaussian Process nonlinear regression model (GPsol). We compare our results with those of 14 scientific studies and 6 commercial tools. This shows that the developed model achieves much higher accuracy than available commercial tools for the prediction of solubility of electrolytes. On top of the high accuracy, the proposed machine learning model also provides error bars for each individual prediction.
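A Gaussian Process regressor yields such per-prediction error bars directly as the predictive standard deviation. A toy scikit-learn sketch (synthetic placeholder data, not the paper's GPsol descriptors or kernel):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# placeholder descriptors X and log-solubility targets y
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=200)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# predictive mean and standard deviation: the std is the error bar
mean, std = gp.predict(X[:3], return_std=True)
for m, s in zip(mean, std):
    print(f"predicted log S = {m:+.2f} +/- {s:.2f}")
```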

Journal ArticleDOI
TL;DR: The Berlin Brain-Computer Interface (BBCI) as mentioned in this paper is a BCI-based system for interactive bio-feedback applications, where the user is provided with intuitive control strategies.
Abstract: The investigation of innovative Human-Computer Interfaces (HCI) provides a challenge for future interaction research and development. Brain-Computer Interfaces (BCIs) exploit the human ability to communicate and control while bypassing the classical neuromuscular communication channels. In general, BCIs offer a possibility of communication for people with severe neuromuscular disorders, such as amyotrophic lateral sclerosis (ALS) or complete paralysis of all extremities due to high spinal cord injury. Beyond medical applications, a BCI in conjunction with exciting multimedia applications, e.g., a dexterity discovery, could define a new level of control possibilities also for healthy users by decoding information directly from the user's brain, as reflected in EEG signals recorded non-invasively from the scalp. This contribution introduces the Berlin Brain-Computer Interface (BBCI) and presents set-ups where the user is provided with intuitive control strategies in plausible interactive bio-feedback applications. Though still at its beginning, the BBCI thus adds a new dimension to HCI research by offering the user an additional and independent communication channel based on brain activity only. Successful experiments have already yielded inspiring proofs-of-concept. A diversity of interactive application models, say computer games, and their specific intuitive control strategies are now open for BCI research aiming at a further speed-up of user adaptation and an increase of learning success and transfer bit rates.

The BBCI is a complex distributed software system that can be run on several communicating computers responsible for (i) the signal acquisition, (ii) the data processing and (iii) the feedback application. When developing a BCI system, special attention must be paid to the design of the feedback application that serves as the HCI unit. It should provide the user with information about her/his brain activity in a way that is intuitively intelligible. Exciting discovery applications qualify perfectly for this role. However, most of these applications incorporate control strategies that were developed especially for control with haptic devices, e.g., joystick, keyboard or mouse. Therefore, novel control strategies should be developed for this purpose that (i) allow the user to incorporate additional information for the control of animated objects and (ii) do not frustrate the user in the case of a misclassification of the decoded brain signal.

BCIs are able to decode different information types from the user's brain activity, such as sensory perception or motor intentions and imaginations, movement preparations, levels of stress, workload or task-related idling. All of these diverse brain signals can be incorporated into an exciting discovery scenario. Modern HCI research and development technologies can provide BCI researchers with the know-how about interactive feedback applications and corresponding control strategies.

Journal ArticleDOI
TL;DR: A new algorithm is introduced that builds an optimal dyadic decision tree (ODT), combining guaranteed performance in the learning-theoretical sense with optimal search from the algorithmic point of view, and improving performance over classical approaches such as CART/C4.5.
Abstract: We introduce a new algorithm building an optimal dyadic decision tree (ODT). The method combines guaranteed performance in the learning-theoretical sense and optimal search from the algorithmic point of view. Furthermore, it inherits the explanatory power of tree approaches while improving performance over classical approaches such as CART/C4.5, as shown in experiments on artificial and benchmark data.
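What makes the optimal search tractable is the dyadic restriction: cells may only be split at their midpoints, so the penalized empirical risk of the best tree can be computed exactly by recursion over the dyadic partition lattice. A 1-D illustration with binary labels (the paper's algorithm handles multivariate partitions and returns the tree itself):

```python
import numpy as np

def odt_risk(x, y, lo, hi, depth, penalty):
    """Exact penalized risk of the best dyadic tree on [lo, hi), 1-D sketch.

    Each cell is either a leaf (majority vote: misclassification count plus
    a per-leaf complexity penalty) or split at its midpoint; the recursion
    explores every dyadic partition up to `depth` and keeps the best.
    Labels y are assumed to be 0/1 integers.
    """
    mask = (x >= lo) & (x < hi)
    n = int(mask.sum())
    leaf = (n - np.bincount(y[mask], minlength=2).max() + penalty) if n else penalty
    if depth == 0 or n <= 1:
        return leaf
    mid = 0.5 * (lo + hi)
    split = (odt_risk(x, y, lo, mid, depth - 1, penalty)
             + odt_risk(x, y, mid, hi, depth - 1, penalty))
    return min(leaf, split)

# e.g. risk = odt_risk(x, y, 0.0, 1.0, depth=4, penalty=0.5)
```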

Proceedings ArticleDOI
07 May 2007
TL;DR: Results show that additionally applying inter-view prediction to temporal prediction with hierarchical B pictures improves coding efficiency for depth as well as color, reporting average gains in PSNR-Y of 0.5 dB for depth and 0.3 dB for color.
Abstract: This paper presents a method for efficient compression of multi-view depth data based on our multi-view video coding approach for color data. The idea is to exploit statistical dependencies from both temporal and inter-view reference pictures for prediction. For this purpose a multi-view video data set including color and depth information is analyzed in terms of coding efficiency. Coding experiments using prediction structures with and without inter-view reference pictures are performed with multi-view depth data and compared to multi-view video coding. The results show that additionally applying inter-view prediction to temporal prediction with hierarchical B pictures improves coding efficiency for depth as well as color, reporting average gains in PSNR-Y of 0.5 dB for depth and 0.3 dB for color.

01 Jan 2007
TL;DR: This chapter discusses signal processing and machine learning techniques and their application to brain-computer interfacing; validation techniques and robustification are discussed briefly.
Abstract: This chapter discusses signal processing and machine learning techniques and their application to brain-computer interfacing. A broader overview of the general signal processing and classification methods as used in single-trial EEG analysis is given. For more specialized algorithms, the reader is referred to the original publications. Furthermore, validation techniques and robustification are discussed briefly.

01 Jan 2007
TL;DR: The EEG-based brain-computer interface (BCI) could provide a new augmentative communication technology for those who are totally paralyzed or have other severe motor impairments as mentioned in this paper.
Abstract: Studies at the Wadsworth Center over the past 14 years have shown that people with or without motor disabilities can learn to control the amplitude of mu or beta rhythms in electroencephalographic (EEG) activity recorded from the scalp over sensorimotor cortex and can use that control to move a cursor on a computer screen in one or two dimensions. This EEG-based brain-computer interface (BCI) could provide a new augmentative communication technology for those who are totally paralyzed or have other severe motor impairments. Present research focuses on improving the speed and accuracy of BCI communication.

Proceedings ArticleDOI
20 Jun 2007
TL;DR: This paper formally analyze the asymptotic Bayesian generalization error and establishes its upper bound under a very general setting, and proposes a novel variant of stochastic complexity which can be used for choosing an appropriate model and hyper-parameters under a particular distribution change.
Abstract: In supervised learning, we commonly assume that training and test data are sampled from the same distribution. However, this assumption can be violated in practice, and then standard machine learning techniques perform poorly. This paper focuses on revealing and improving the performance of Bayesian estimation when the training and test distributions are different. We formally analyze the asymptotic Bayesian generalization error and establish its upper bound under a very general setting. Our important finding is that lower-order terms, which can be ignored in the absence of the distribution change, play an important role under the distribution change. We also propose a novel variant of stochastic complexity which can be used for choosing an appropriate model and hyper-parameters under a particular distribution change.
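Schematically, and with notation assumed here rather than taken from the paper: under covariate shift the training inputs follow p_tr(x) and the test inputs p_te(x) while p(y|x) is shared, and the quantity of interest is the expected test-domain log loss of the Bayesian posterior predictive distribution:

```latex
G_n \;=\; \mathbb{E}_{x \sim p_{\mathrm{te}}}\,
          \mathbb{E}_{y \sim p(y \mid x)}
          \bigl[\, -\log \hat{p}_n(y \mid x) \,\bigr],
\qquad
\hat{p}_n(y \mid x) \;=\; \int p(y \mid x, \theta)\, p(\theta \mid D_n)\, d\theta,
```

where D_n is the training sample of size n; it is in the asymptotic expansion of such an error in n that the lower-order terms discussed above appear.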

Journal ArticleDOI
TL;DR: This article investigates compression approaches for 3D scene representations, where image and geometry are combined, and focuses on work in which the authors have participated.
Abstract: This article investigates compression approaches for 3D scene representations, where image and geometry are combined. The approaches exemplified in this article mostly focus on work in which the authors have participated.

Book ChapterDOI
22 Jul 2007
TL;DR: A particular focus is placed on linear classification methods which can be applied in the BCI context, and an overview of the Berlin Brain-Computer Interface (BBCI) is provided.

Abstract: This paper discusses machine learning methods and their application to Brain-Computer Interfacing. A particular focus is placed on linear classification methods which can be applied in the BCI context. Finally, we provide an overview of the Berlin Brain-Computer Interface (BBCI).

Journal ArticleDOI
TL;DR: An alternative algorithm called iterative metric adaptation for radial kernel functions (IMAK) is developed, which is theoretically better justifiable within the NGCA framework and tends to outperform existing methods through numerical examples.
Abstract: We consider high-dimensional data which contains a linear low-dimensional non-Gaussian structure contaminated with Gaussian noise, and discuss a method to identify this non-Gaussian subspace. For this problem, we provided in our previous work a very general semi-parametric framework called non-Gaussian component analysis (NGCA). NGCA has a uniform probabilistic bound on the error of finding the non-Gaussian components and within this framework, we presented an efficient NGCA algorithm called Multi-index Projection Pursuit. The algorithm is justified as an extension of the ordinary projection pursuit (PP) methods and is shown to outperform PP particularly when the data has complicated non-Gaussian structure. However, it turns out that multi-index PP is not optimal in the context of NGCA. In this article, we therefore develop an alternative algorithm called iterative metric adaptation for radial kernel functions (IMAK), which is theoretically better justifiable within the NGCA framework. We demonstrate that the new algorithm tends to outperform existing methods through numerical examples.

Journal ArticleDOI
TL;DR: This study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma, considers error bars for each method, and investigates how well they quantify the domain of applicability of each model.
Abstract: Unfavorable lipophilicity and water solubility cause many drug failures; therefore these properties have to be taken into account early on in lead discovery. Commercial tools for predicting lipophilicity have usually been trained on small and neutral molecules and are thus often unable to accurately predict in-house data. Using a modern Bayesian machine learning algorithm, a Gaussian process model, this study constructs a log D7 model based on 14,556 drug discovery compounds of Bayer Schering Pharma. Performance is compared with support vector machines, decision trees, ridge regression, and four commercial tools. In a blind test on 7013 new measurements from the preceding months (including compounds from new projects), 81% were predicted correctly within 1 log unit, compared to only 44% achieved by commercial software. Additional evaluations using public data are presented. We consider error bars for each method (model-based, ensemble-based, and distance-based approaches) and investigate how well they quantify the domain of applicability of each model.
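The blind-test criterion quoted above (81% within 1 log unit) is straightforward to compute; a tiny sketch:

```python
import numpy as np

def frac_within(y_true, y_pred, tol=1.0):
    """Fraction of predictions within `tol` log units of the measurement."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)) <= tol))
```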

01 Jan 2007
TL;DR: This chapter investigates the effect of adaptation on the performance of the classifier embedded in three different BCI systems, all of them based on non-invasive electroencephalogram (EEG) signals.
Abstract: One major challenge in Brain-Computer Interface (BCI) research is to cope with the inherent nonstationarity of the recorded brain signals, caused by changes in the subject's brain processes during an experiment. Online adaptation of the classifier embedded into the BCI is a possible way of tackling this issue. In this chapter we investigate the effect of adaptation on the performance of the classifier embedded in three different BCI systems, all of them based on non-invasive electroencephalogram (EEG) signals. Through this adaptation we aim to keep the classifier constantly tuned to the EEG signals it is receiving in the current session. Although the experimental results reported here show the benefits of online adaptation, some questions still need to be addressed; the chapter ends by discussing some of these open issues.
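One simple realization of such online adaptation, offered here as an illustrative sketch rather than any of the chapter's specific update rules, is to let a fixed linear classifier's operating point track the running statistics of the incoming (unlabeled) features with exponential forgetting:

```python
import numpy as np

class AdaptiveBiasClassifier:
    """Linear classifier whose decision threshold tracks nonstationary features.

    The weight vector w is trained offline; online, the threshold follows
    an exponentially weighted mean of the projected features, compensating
    slow session-to-session shifts without requiring labels.
    """
    def __init__(self, w, bias, eta=0.05):
        self.w = np.asarray(w, dtype=float)
        self.threshold = float(bias)   # adapted online
        self.eta = eta                 # forgetting factor

    def classify(self, x):
        out = float(self.w @ np.asarray(x, dtype=float))
        # unsupervised re-centering of the classifier output
        self.threshold = (1 - self.eta) * self.threshold + self.eta * out
        return 1 if out >= self.threshold else -1
```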

Journal ArticleDOI
TL;DR: This study constructs different log P and log D7 models that exhibit excellent predictions which compare favorably to state-of-the-art tools on both benchmark and in-house data sets.
Abstract: Many drug failures are due to an unfavorable ADMET profile (Absorption, Distribution, Metabolism, Excretion & Toxicity). Lipophilicity is intimately connected with ADMET, and in today's drug discovery process the octanol-water partition coefficient log P and its pH-dependent counterpart log D have to be taken into account early on in lead discovery. Commercial tools available for 'in silico' prediction of ADMET or lipophilicity parameters have usually been trained on relatively small sets of mostly neutral molecules; therefore their accuracy on industrial in-house data leaves room for considerable improvement (see Bruneau et al. and references therein). Using modern kernel-based machine learning algorithms, so-called Gaussian Processes (GP), this study constructs different log P and log D7 models that exhibit excellent predictions, comparing favorably to state-of-the-art tools on both benchmark and in-house data sets.

01 Jan 2007
TL;DR: By substantially reducing labor and cost, BCI2000 facilitates the implementation of different BCI systems and other psychophysiological experiments and is currently being used in a variety of studies by many research groups.
Abstract: BCI2000 is a flexible general-purpose platform for brain-computer interface (BCI) research and development that is aimed mainly at reducing the complexity and cost of implementing BCI systems. Since 2000, we have been developing this system in a collaboration between the Wadsworth Center of the New York State Department of Health in Albany, New York, and the Institute of Medical Psychology and Behavioral Neurobiology at the University of Tübingen, Germany. The system is currently used for a variety of studies in more than 110 laboratories around the world. BCI2000 supports a variety of data acquisition systems, brain signals, and feedback modalities and can thus be configured to implement many commonly used BCI systems without any programming. We provide the source code and corresponding documentation with the system to facilitate the implementation of BCI methods that are not supported by the current system. This process, and thus the evaluation of different BCI methods, is further encouraged by the modular design of BCI2000, in which a change in one module or component requires little or no change in other modules or components. In summary, by substantially reducing labor and cost, BCI2000 facilitates the implementation of different BCI systems and other psychophysiological experiments. It is available with full documentation and free of charge for research or educational purposes and is currently being used in a variety of studies by many research groups (http://www.bci2000.org).