Showing papers by "Klaus-Robert Müller published in 2010"


Journal Article
TL;DR: This paper proposes a procedure which (based on a set of assumptions) allows one to explain the decisions of any classification method.
Abstract: After building a classifier with modern tools of machine learning, we typically have a black box at hand that is able to predict well for unseen data. Thus, we get an answer to the question of what the most likely label of a given unseen data point is. However, most methods will provide no answer as to why the model predicted a particular label for a single instance and which features were most influential for that particular instance. The only methods currently able to provide such explanations are decision trees. This paper proposes a procedure which (based on a set of assumptions) allows one to explain the decisions of any classification method.
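One way to make this concrete is sketched below (a minimal illustration of the general idea rather than the paper's exact procedure; the classifier, the iris data set, and the finite-difference step size are arbitrary choices): the local gradient of the predicted class probability around a single instance indicates which features were locally influential for that prediction, and it can be computed for any black-box classifier that outputs probabilities.

import numpy as np
from sklearn.datasets import load_iris
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
clf = SVC(probability=True, gamma="scale").fit(X, y)   # any black-box classifier works

def explanation_vector(clf, x, eps=1e-3):
    """Finite-difference gradient of the winning class probability at x."""
    x = np.asarray(x, dtype=float)
    c = int(clf.predict([x])[0])              # label predicted for this instance
    grad = np.zeros_like(x)
    for i in range(x.size):
        x_plus, x_minus = x.copy(), x.copy()
        x_plus[i] += eps
        x_minus[i] -= eps
        grad[i] = (clf.predict_proba([x_plus])[0, c]
                   - clf.predict_proba([x_minus])[0, c]) / (2 * eps)
    return grad                               # large |grad[i]|: feature i is locally influential

print(explanation_vector(clf, X[0]))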

888 citations


Journal ArticleDOI
TL;DR: This paper focuses on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT) and identifies four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely, “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”.
Abstract: In recent years, new research has brought the field of electroencephalogram (EEG)-based brain–computer interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely, “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user–machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles in human–computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices.

792 citations


Journal ArticleDOI
TL;DR: A neurophysiological predictor of BCI performance is proposed which can be determined from a two-minute recording of a 'relax with eyes open' condition using two Laplacian EEG channels.

606 citations


Journal ArticleDOI
TL;DR: Examples of novel BCI applications which provide evidence for the promising potential of BCI technology for non-medical uses are presented, and distinct methodological improvements required to bring non-medical applications of BCI technology to a diversity of layperson target groups are discussed.
Abstract: Brain–computer interfacing (BCI) is a steadily growing area of research. While initially BCI research was focused on applications for paralyzed patients, increasingly more alternative applications in healthy human subjects are proposed and investigated. In particular, monitoring of mental states and decoding of covert user states have seen a strong rise of interest. Here, we present some examples of such novel applications which provide evidence for the promising potential of BCI technology for non-medical uses. Furthermore, we discuss distinct methodological improvements required to bring non-medical applications of BCI technology to a diversity of layperson target groups, e.g., ease of use, minimal training, general usability, short control latencies.

353 citations


Journal ArticleDOI
TL;DR: This work demonstrates that the DMs based on an ensemble (consensus) model provide systematically better performance than other DMs and can be used to halve the cost of experimental measurements by providing a similar prediction accuracy.
Abstract: The estimation of accuracy and applicability of QSAR and QSPR models for biological and physicochemical properties represents a critical problem. The developed parameter of “distance to model” (DM) is defined as a metric of similarity between the training and test set compounds that have been subjected to QSAR/QSPR modeling. In our previous work, we demonstrated the utility and optimal performance of DM metrics that have been based on the standard deviation within an ensemble of QSAR models. The current study applies such analysis to 30 QSAR models for the Ames mutagenicity data set that were previously reported within the 2009 QSAR challenge. We demonstrate that the DMs based on an ensemble (consensus) model provide systematically better performance than other DMs. The presented approach identifies 30-60% of compounds having an accuracy of prediction similar to the interlaboratory accuracy of the Ames test, which is estimated to be 90%. Thus, the in silico predictions can be used to halve the cost of experimental measurements by providing a similar prediction accuracy.
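As a rough illustration of the ensemble-based distance-to-model idea (a sketch under stated assumptions: a generic bagging ensemble on synthetic regression data, standing in for the paper's QSAR models and compounds), the DM of a compound can be taken as the standard deviation of the individual members' predictions and then thresholded to flag reliable predictions:

import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=300, n_features=20, noise=0.5, random_state=0)
ens = BaggingRegressor(DecisionTreeRegressor(), n_estimators=50,
                       random_state=0).fit(X, y)

# per-compound DM = standard deviation of the individual ensemble predictions
member_preds = np.stack([est.predict(X) for est in ens.estimators_])
dm = member_preds.std(axis=0)

# keep only compounds whose DM falls below a chosen threshold (here: the median)
reliable = dm < np.median(dm)
print(f"{reliable.mean():.0%} of compounds flagged as reliably predicted")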

205 citations


Proceedings ArticleDOI
19 Jul 2010
TL;DR: A depth-image-based rendering (DIBR) approach with advanced inpainting methods is presented, showing significant objective and subjective gains over state-of-the-art methods.
Abstract: In free viewpoint television or 3D video, depth image based rendering (DIBR) is used to generate virtual views based on a textured image and its associated depth information. In doing so, image regions which are occluded in the original view may become visible in the virtual image. One of the main challenges in DIBR is to extrapolate known textures into the disoccluded area without inserting subjective annoyance. In this paper, a new hole filling approach for DIBR using texture synthesis is presented. Initially, the depth map in the virtual view is filled at disoccluded locations. Then, in the textured image, holes of limited spatial extent are closed by solving Laplace equations. Larger disoccluded regions are initialized via median filtering and subsequently refined by patch-based texture synthesis. Experimental results show that the proposed approach provides improved rendering results in comparison to the latest MPEG view synthesis reference software (VSRS) version 3.6 [1].
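The step of closing small holes by solving Laplace equations can be sketched as follows (an illustrative toy implementation on a grayscale array with a boolean hole mask; the paper's full pipeline additionally fills the depth map, initializes larger regions by median filtering, and refines them with patch-based texture synthesis):

import numpy as np

def fill_holes_laplace(img, hole_mask, n_iter=500):
    """Jacobi relaxation: hole pixels converge to the mean of their 4 neighbours."""
    out = img.copy()
    out[hole_mask] = out[~hole_mask].mean()      # crude initialization
    for _ in range(n_iter):
        up    = np.roll(out,  1, axis=0)
        down  = np.roll(out, -1, axis=0)
        left  = np.roll(out,  1, axis=1)
        right = np.roll(out, -1, axis=1)
        avg = (up + down + left + right) / 4.0
        out[hole_mask] = avg[hole_mask]          # only hole pixels are updated
    return out

img = np.random.rand(64, 64)
mask = np.zeros_like(img, dtype=bool)
mask[30:34, 30:34] = True                        # small synthetic disocclusion
filled = fill_holes_laplace(img, mask)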

186 citations


Journal ArticleDOI
TL;DR: A framework for signal analysis of electroencephalography (EEG) is proposed that unifies tasks such as feature extraction, feature selection, feature combination, and classification, which are conventionally tackled independently, under a regularized empirical risk minimization problem.
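In generic notation (mine, not necessarily the paper's), such a unified treatment amounts to one regularized empirical risk minimization problem over labelled EEG epochs \((X_i, y_i)\):

\[
\min_{w}\ \sum_{i=1}^{n} \ell\bigl(y_i,\ \langle w,\ \Phi(X_i)\rangle\bigr) \;+\; \lambda\,\Omega(w),
\]

where the feature map \(\Phi\) covers feature extraction, the loss \(\ell\) the classification step, and the regularizer \(\Omega\) (e.g. a sparsity- or low-rank-inducing norm) implicitly performs feature selection and combination.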

168 citations


Journal ArticleDOI
TL;DR: This work proposes a novel technique, called sparsely connected sources analysis (SCSA), that can overcome the problem of volume conduction by modeling neural data innovatively with the following ingredients: the EEG/MEG is assumed to be a linear mixture of correlated sources following a multivariate autoregressive (MVAR) model.
Abstract: We propose a novel technique to assess functional brain connectivity in electroencephalographic (EEG)/magnetoencephalographic (MEG) signals. Our method, called sparsely connected sources analysis (SCSA), can overcome the problem of volume conduction by modeling neural data innovatively with the following ingredients: 1) the EEG/MEG is assumed to be a linear mixture of correlated sources following a multivariate autoregressive (MVAR) model; 2) the demixing is estimated jointly with the source MVAR parameters; and 3) overfitting is avoided by using the group lasso penalty. This approach allows us to extract the appropriate level of crosstalk between the extracted sources and, in this manner, we obtain a sparse data-driven model of functional connectivity. We demonstrate the usefulness of SCSA with simulated data and compare it to a number of existing algorithms with excellent results.
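Schematically (my reading of the abstract, with notation that may differ from the paper), the model and penalty are:

\[
x(t) = A\,s(t), \qquad s(t) = \sum_{p=1}^{P} B^{(p)}\, s(t-p) + \varepsilon(t),
\]

where the demixing \(W \approx A^{-1}\) is estimated jointly with the MVAR coefficients \(B^{(p)}\), and a group lasso term of the form \(\lambda \sum_{i\neq j} \|(B^{(1)}_{ij},\dots,B^{(P)}_{ij})\|_2\) drives the coefficients of entire source pairs to zero, which is what yields the sparse, data-driven connectivity graph.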

112 citations


Journal ArticleDOI
TL;DR: A simple algorithm based on Kernel Canonical Correlation Analysis (kCCA) that computes a multivariate temporal filter which links one data modality to another one, confirming recent models of the hemodynamic response to neural activity and allowing for a more detailed analysis of neurovascular coupling dynamics.
Abstract: Data recorded from multiple sources sometimes exhibit non-instantaneous couplings. For simple data sets, cross-correlograms may reveal the coupling dynamics. But when dealing with high-dimensional multivariate data there is no such measure as the cross-correlogram. We propose a simple algorithm based on Kernel Canonical Correlation Analysis (kCCA) that computes a multivariate temporal filter which links one data modality to another one. The filters can be used to compute a multivariate extension of the cross-correlogram, the canonical correlogram, between data sources that have different dimensionalities and temporal resolutions. The canonical correlogram reflects the coupling dynamics between the two sources. The temporal filter reveals which features in the data give rise to these couplings and when they do so. We present results from simulations and neuroscientific experiments showing that tkCCA yields easily interpretable temporal filters and correlograms. In the experiments, we simultaneously performed electrode recordings and functional magnetic resonance imaging (fMRI) in primary visual cortex of the non-human primate. While electrode recordings reflect brain activity directly, fMRI provides only an indirect view of neural activity via the Blood Oxygen Level Dependent (BOLD) response. Thus it is crucial for our understanding and the interpretation of fMRI signals in general to relate them to direct measures of neural activity acquired with electrodes. The results computed by tkCCA confirm recent models of the hemodynamic response to neural activity and allow for a more detailed analysis of neurovascular coupling dynamics.
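The core construction can be sketched with ordinary linear CCA (a hedged toy example; the paper's tkCCA is a kernelized variant applied to real neural and fMRI data): time-embedding one modality lets CCA return a temporal filter whose weights over lags form the canonical correlogram.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
T, d1, d2, max_lag = 500, 8, 5, 10

neural = rng.standard_normal((T, d1))              # e.g. band-limited power
bold = np.roll(neural[:, :d2], 4, axis=0)          # coupled signal, delayed by 4 samples
bold = bold + 0.5 * rng.standard_normal((T, d2))

# time-embedding: stack lagged copies of the neural signal
lagged = np.hstack([np.roll(neural, lag, axis=0) for lag in range(max_lag)])
lagged, bold = lagged[max_lag:], bold[max_lag:]    # drop wrap-around rows

cca = CCA(n_components=1).fit(lagged, bold)
w = cca.x_weights_.reshape(max_lag, d1)            # temporal filter: lags x channels
print("lag with the largest weight norm:", np.linalg.norm(w, axis=1).argmax())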

104 citations


Proceedings ArticleDOI
03 Dec 2010
TL;DR: A new temporally and spatially consistent hole filling method for DIBR is presented, highlighting that gains in objective and visual quality can be achieved in comparison to the latest MPEG view synthesis reference software (VSRS).
Abstract: Depth-image-based rendering (DIBR) is used to generate additional views of a real-world scene from images or videos and associated per-pixel depth information. An inherent problem of the view synthesis concept is the fact that image information which is occluded in the original view may become visible in the “virtual” image. The resulting question is: how can these disocclusions be covered in a visually plausible manner? In this paper, a new temporally and spatially consistent hole filling method for DIBR is presented. In a first step, disocclusions in the depth map are filled. Then, a background sprite is generated and updated with every frame using the original and synthesized information from previous frames to achieve temporally consistent results. Next, small holes resulting from depth estimation inaccuracies are closed in the textured image, using methods that are based on solving Laplace equations. The residual disoccluded areas are coarsely initialized and subsequently refined by patch-based texture synthesis. Experimental results are presented, highlighting that gains in objective and visual quality can be achieved in comparison to the latest MPEG view synthesis reference software (VSRS).

77 citations


Journal ArticleDOI
TL;DR: The neurovascular relationship during periods of spontaneous activity is explored using temporal kernel canonical correlation analysis (tkCCA), a multivariate method that can take into account features in the signals that univariate analysis cannot, representing the first multivariate analysis of intracranial electrophysiology and high-resolution fMRI.

Journal ArticleDOI
TL;DR: This work investigates the selection of EEG channels in a BCI that uses the popular CSP algorithm in order to classify voluntary modulations of sensorimotor rhythms (SMR), and finds a setting with 22 channels centered over the motor areas to be the best.
Abstract: One crucial question in the design of electroencephalogram (EEG)-based brain-computer interface (BCI) experiments is the selection of EEG channels. While a setup with few channels is more convenient and requires less preparation time, a dense placement of electrodes provides more detailed information and hence could lead to better classification performance. Here, we investigate this question for a specific setting: a BCI that uses the popular CSP algorithm in order to classify voluntary modulations of sensorimotor rhythms (SMR). In a first approach, 13 different fixed channel configurations are compared to the full one consisting of 119 channels. The configuration with 48 channels turns out to be the best one, while configurations with fewer channels, from 32 down to 8, did not perform significantly worse than the best configuration in cases where only few training trials are available. In a second approach, an optimal channel configuration is obtained by an iterative procedure in the spirit of stepwise variable selection with nonparametric multiple comparisons. As a surprising result, in the second approach a setting with 22 channels centered over the motor areas was selected. Thanks to the acquisition of a large data set recorded from 80 novice participants using 119 EEG channels, the results of this study can be expected to have a high degree of generalizability.
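For reference, the CSP computation underlying all of these channel configurations reduces to a generalized eigenvalue problem of the two class-conditional covariance matrices (a minimal sketch on synthetic data, not the study's actual pipeline):

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
n_trials, n_channels, n_samples = 40, 22, 250

# synthetic band-pass-filtered epochs; class 1 has extra variance on channel 0
X1 = rng.standard_normal((n_trials, n_channels, n_samples))
X2 = rng.standard_normal((n_trials, n_channels, n_samples))
X1[:, 0, :] *= 3.0

def class_cov(X):
    return np.mean([np.cov(trial) for trial in X], axis=0)

C1, C2 = class_cov(X1), class_cov(X2)
eigvals, W = eigh(C1, C1 + C2)                 # generalized eigendecomposition
csp_filters = np.hstack([W[:, :3], W[:, -3:]]) # filters from both ends of the spectrum

# classifier features: log-variance of the spatially filtered trials
feat = np.log(np.var(np.einsum("cf,tcs->tfs", csp_filters, X1), axis=2))
print(feat.shape)                              # (n_trials, 6)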

Proceedings ArticleDOI
11 Nov 2010
TL;DR: It is shown that Stationary Subspace Analysis (SSA), a time series analysis method, can be used to identify the underlying stationary and non-stationary brain sources from high-dimensional EEG measurements, and that restricting the BCI to the stationary sources found by SSA can significantly increase the performance.
Abstract: Neurophysiological measurements obtained from, e.g., EEG or fMRI are inherently non-stationary because the properties of the underlying brain processes vary over time. For example, in Brain-Computer Interfacing (BCI), deteriorating performance (bitrate) is a common phenomenon, since the parameters determined during the calibration phase can be suboptimal under the application regime, where the brain state is different, e.g., due to increased tiredness or changes in the experimental paradigm. We show that Stationary Subspace Analysis (SSA), a time series analysis method, can be used to identify the underlying stationary and non-stationary brain sources from high-dimensional EEG measurements. Restricting the BCI to the stationary sources found by SSA can significantly increase the performance. Moreover, SSA yields topographic maps corresponding to stationary and non-stationary brain sources which reveal their spatial characteristics.
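For orientation, the stationarity criterion behind SSA can be written roughly as follows (a sketch from the SSA literature as I recall it, not a quotation of this paper): after whitening, one searches for a projection \(P\) onto the putative stationary subspace such that the projected distributions of all epochs \(k = 1,\dots,K\) are as similar as possible,

\[
\min_{P}\ \sum_{k=1}^{K} D_{\mathrm{KL}}\Bigl(\mathcal{N}\bigl(P\mu_k,\ P\Sigma_k P^{\top}\bigr)\ \Big\Vert\ \mathcal{N}(0,\ I)\Bigr),
\]

where \(\mu_k\) and \(\Sigma_k\) are the epoch-wise mean and covariance; the orthogonal complement of the optimal \(P\) then spans the non-stationary sources whose topographic maps are reported here.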

Journal ArticleDOI
TL;DR: Pyff provides a platform-independent framework that allows users to develop and run neuroscientific experiments in the programming language Python and is suitable to be used with any system that may be adapted to send its data in the specified format.
Abstract: This paper introduces Pyff, the Pythonic Feedback Framework for feedback applications and stimulus presentation. Pyff provides a platform-independent framework that allows users to develop and run neuroscientific experiments in the programming language Python. Existing solutions have mostly been implemented in C++, which makes for a rather tedious programming task for non-computer-scientists, or in Matlab, which is not well suited for more advanced visual or auditory applications. Pyff was designed to make experimental paradigms (i.e. feedback and stimulus applications) easily programmable. It includes base classes for various types of common feedbacks and stimuli as well as useful libraries for external hardware such as eyetrackers. Pyff is also equipped with a steadily growing set of ready-to-use feedbacks and stimuli. It can be used as a standalone application, for instance providing stimulus presentation in psychophysics experiments, or within a closed loop such as in biofeedback or brain-computer interfacing experiments. Pyff communicates with other systems via a standardized communication protocol and is therefore suitable for use with any system that may be adapted to send its data in the specified format. Having such a general, open source framework will help foster a fruitful exchange of experimental paradigms between research groups. In particular, it will decrease the need for reprogramming standard paradigms, ease the reproducibility of published results, and naturally entail some standardization of stimulus presentation.

Journal ArticleDOI
TL;DR: The results of this study suggest that pharmacophoric patterns of synthetic bioactive compounds can be traced back to natural products, and this will be useful for “de-orphanizing” the natural bioactive agent.
Abstract: Peroxisome proliferator-activated receptors (PPARs) are nuclear proteins that act as transcription factors. They represent a validated drug target class involved in lipid and glucose metabolism as well as inflammatory response regulation. We combined state-of-the-art machine learning methods including Gaussian process (GP) regression, multiple kernel learning, the ISOAK molecular graph kernel, and a novel loss function to virtually screen a large compound collection for potential PPAR activators; 15 compounds were tested in a cellular reporter gene assay. The most potent PPARγ-selective hit (EC50 = 10 ± 0.2 μM) is a derivative of the natural product truxillic acid. Truxillic acid derivatives are known to be anti-inflammatory agents, potentially due to PPARγ activation. Our study underscores the usefulness of modern machine learning algorithms for finding potent bioactive compounds and presents an example of scaffold-hopping from synthetic compounds to natural products. We thus motivate virtual screening of natural product collections as a source of novel lead compounds. The results of our study suggest that pharmacophoric patterns of synthetic bioactive compounds can be traced back to natural products, and this will be useful for “de-orphanizing” the natural bioactive agent. PPARs are present in three known isoforms: PPARα, PPARβ(δ), and PPARγ, with different expression patterns according to their function. PPAR activation leads to an increased expression of key enzymes and proteins involved in the uptake and metabolism of lipids and glucose. Unsaturated fatty acids and eicosanoids such as linoleic acid and arachidonic acid are physiological PPAR activators. Owing to their central role in glucose and lipid homeostasis, PPARs represent attractive drug targets for the treatment of diabetes and dyslipidemia. Glitazones (thiazolidinediones) such as pioglitazone and rosiglitazone act as selective activators of PPARγ and are used as therapeutics for diabetes mellitus type 2. In addition to synthetic activators, herbs are traditionally used for treatment of metabolic disorders, and some herbal ingredients have been identified as PPARγ activators, for example, carnosol and carnosic acid, as well as several terpenoids and flavonoids. We used several machine learning methods, with synthetic PPAR agonists as input, to find common pharmacophoric patterns for virtual screening in both synthetic and natural-product-derived substances. We focused on GP models, which originate from Bayesian statistics. Their original applications in cheminformatics were aimed at predicting aqueous solubility, blood–brain barrier penetration, hERG (human ether-à-go-go-related gene) inhibition, and metabolic stability. A particular advantage of GPs is that they provide error estimates with their predictions. In GP modeling of molecular properties, one defines a positive definite kernel function to model molecular similarity. Compound information enters GP models only via this function, so relevant (context-dependent) physicochemical properties must be captured. This is done by computing molecular descriptors (physicochemical property vectors), or by graph kernels that are defined directly on the molecular graph. From a family of functions that are potentially able to model the underlying structure–activity relationship (“prior”), only functions that agree with the data are retained (Figure 1). The weighted average of the retained functions (“posterior”) acts as the predictor, and its variance as an estimate of the confidence in the prediction.
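A compact sketch of the GP workflow described above (with a generic RBF kernel and synthetic descriptors standing in for the ISOAK graph kernel and the real compound library):

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 16))                   # molecular descriptor vectors
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(100)   # surrogate activity

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(),
                              normalize_y=True).fit(X, y)

X_new = rng.standard_normal((5, 16))                 # virtual screening candidates
mean, std = gp.predict(X_new, return_std=True)       # posterior mean and uncertainty
for m, s in zip(mean, std):
    print(f"predicted activity {m:+.2f}  (+/- {s:.2f})")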

Proceedings Article
18 Feb 2010
TL;DR: It is found that Granger causality, in contrast to PSI, also fails for non-mixed noise if the memory time of the sender of information is long compared to the transmission time of the information, and that PSI may miss nonlinear interactions but is unlikely to give false positive results.
Abstract: We recently proposed a new measure, termed Phase Slope Index (PSI), which estimates the causal direction of interactions robustly with respect to instantaneous mixtures of independent sources with arbitrary spectral content. We compared this method to Granger causality for linear systems containing spatially and temporally mixed noise and found that, in contrast to PSI, the latter was not able to properly distinguish truly interacting systems from mixed noise. Here, we extend this analysis with respect to two aspects: (a) we analyze Granger causality and PSI also for non-mixed noise, and (b) we analyze PSI for nonlinear interactions. We found (a) that Granger causality, in contrast to PSI, also fails for non-mixed noise if the memory time of the sender of information is long compared to the transmission time of the information, and (b) that PSI, being a linear method, may miss nonlinear interactions but is unlikely to give false positive results.
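For context, the Phase Slope Index is computed from the complex coherency between two channels (as I recall its original formulation; notation may differ slightly from the paper):

\[
C_{ij}(f) = \frac{S_{ij}(f)}{\sqrt{S_{ii}(f)\,S_{jj}(f)}}, \qquad
\tilde{\Psi}_{ij} = \Im\Bigl(\sum_{f \in F} C^{*}_{ij}(f)\, C_{ij}(f+\delta f)\Bigr),
\]

where \(S\) is the cross-spectral matrix, \(F\) the frequency band of interest and \(\delta f\) the frequency resolution; \(\tilde{\Psi}_{ij}\) is typically normalized by an estimate of its standard deviation, and its sign indicates the estimated direction of information flow.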

Journal ArticleDOI
TL;DR: A number of methods that allow for addressing brain connectivity and especially causality between different brain regions from EEG or MEG are reviewed, all based on the insight that the imaginary part of the cross-spectra cannot be explained as a mixing artifact.
Abstract: Estimating brain connectivity, and especially causality between different brain regions, from EEG or MEG is limited by the fact that the data are a largely unknown superposition of the actual brain activities. Any method which is not robust to mixing artifacts is prone to yield false positive results. We here review a number of methods that allow us to address this problem. They are all based on the insight that the imaginary part of the cross-spectra cannot be explained as a mixing artifact. First, a joint decomposition of these imaginary parts into pairwise activities allows us to separate subsystems containing different rhythmic activities. Second, assuming that the respective source estimates are least overlapping allows a separation of the rhythmic interacting subsystem into the source topographies themselves. Finally, a causal relation between these sources can be estimated using the newly proposed measure Phase Slope Index (PSI). This work, for the first time, presents the above methods in combination, all applied to a single data set.

Proceedings Article
18 Feb 2010
TL;DR: In this article, the authors propose to enforce sparsity for the subgroups of coefficients that belong to each pair of time series, as the absence of a causal relation requires the coefficients for all time-lags to become jointly zero.
Abstract: Our goal is to estimate causal interactions in multivariate time series. Using vector autoregressive (VAR) models, these can be defined based on non-vanishing coefficients belonging to respective time-lagged instances. As in most cases a parsimonious causality structure is assumed, a promising approach to causal discovery consists in fitting VAR models with an additional sparsity-promoting regularization. Along this line we here propose that sparsity should be enforced for the subgroups of coefficients that belong to each pair of time series, as the absence of a causal relation requires the coefficients for all time-lags to become jointly zero. Such behavior can be achieved by means of l1,2-norm regularized regression, for which an efficient active set solver has been proposed recently. Our method is shown to outperform standard methods in recovering simulated causality graphs. The results are on par with a second novel approach which uses multiple statistical testing.
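In formulas (schematic, with my own notation), the proposed estimator fits the VAR coefficient matrices \(B^{(1)},\dots,B^{(P)}\) by

\[
\min_{B^{(1)},\dots,B^{(P)}}\ \sum_{t} \Bigl\| x(t) - \sum_{p=1}^{P} B^{(p)} x(t-p) \Bigr\|_2^2 \;+\; \lambda \sum_{i \neq j} \bigl\| \bigl(B^{(1)}_{ij},\dots,B^{(P)}_{ij}\bigr) \bigr\|_2,
\]

so that the \(\ell_{1,2}\) penalty treats all time-lags of an ordered channel pair \((i,j)\) as one group; a group estimated as exactly zero corresponds to the absence of a causal link from series \(j\) to series \(i\).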

Journal Article
TL;DR: This article proposes an effective approximation technique for parse tree kernels, which attain run-time improvements up to three orders of magnitude while preserving the predictive accuracy of regular tree kernels.
Abstract: Convolution kernels for trees provide simple means for learning with tree-structured data. The computation time of tree kernels is quadratic in the size of the trees, since all pairs of nodes need to be compared. Thus, large parse trees, obtained from HTML documents or structured network data, render convolution kernels inapplicable. In this article, we propose an effective approximation technique for parse tree kernels. The approximate tree kernels (ATKs) limit kernel computation to a sparse subset of relevant subtrees and discard redundant structures, such that training and testing of kernel-based learning methods are significantly accelerated. We devise linear programming approaches for identifying such subsets for supervised and unsupervised learning tasks, respectively. Empirically, the approximate tree kernels attain run-time improvements up to three orders of magnitude while preserving the predictive accuracy of regular tree kernels. For unsupervised tasks, the approximate tree kernels even lead to more accurate predictions by identifying relevant dimensions in feature space.

Proceedings ArticleDOI
11 Nov 2010
TL;DR: The use of an ensemble of local CSP patches (CSPP) is proposed, which can be considered as a compromise between Laplacians and CSP: CSPP needs less data and channels than CSP, while being superior to LaPLacian filtering.
Abstract: Laplacian filters are commonly used in Brain Computer Interfacing (BCI). When only data from a few channels are available, or when, as at the beginning of an experiment, no previous data from the same user are available, complex features cannot be used. In this case, band power features calculated from Laplacian-filtered channels represent an easy, robust, and general feature for controlling a BCI, since their calculation does not involve any class information. For the same reason, the performance obtained with Laplacian features is poor in comparison to subject-specific optimized spatial filters, such as Common Spatial Patterns (CSP) analysis, which, on the other hand, can only be used in a later phase of the experiment, since they require a considerable amount of training data in order to reach stable and good performance. This drawback is particularly evident in the case of poorly performing BCI users, whose data are highly non-stationary and contain little class-relevant information. Therefore, Laplacian filtering is preferred to CSP, e.g., in the initial period of co-adaptive calibration, a novel BCI paradigm designed to alleviate the problem of BCI illiteracy. In fact, in the co-adaptive calibration design the experiment starts with a subject-independent classifier, and simple features are needed in order to obtain a fast adaptation of the classifier to the newly acquired user data. Here, the use of an ensemble of local CSP patches (CSPP) is proposed, which can be considered a compromise between Laplacians and CSP: CSPP needs less data and fewer channels than CSP, while being superior to Laplacian filtering. This property is shown to be particularly useful for the co-adaptive calibration design and is demonstrated on off-line data from a previous co-adaptive BCI study.

Proceedings Article
06 Dec 2010
TL;DR: This analysis uses Gaussian kernels to show empirically that deep networks build progressively better representations of the learning problem and that the best representations are obtained when the deep network discriminates only in the last layers.
Abstract: Deep networks can potentially express a learning problem more efficiently than local learning machines. While deep networks outperform local learning machines on some problems, it is still unclear how their nice representation emerges from their complex structure. We present an analysis based on Gaussian kernels that measures how the representation of the learning problem evolves layer after layer as the deep network builds higher-level abstract representations of the input. We use this analysis to show empirically that deep networks build progressively better representations of the learning problem and that the best representations are obtained when the deep network discriminates only in the last layers.
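A hedged sketch of the layer-wise kernel analysis idea (toy data, my own crude quality measure, and a small sklearn network rather than the deep architectures studied in the paper): build an RBF kernel representation of each layer's activations and check how well a few kernel-PCA components support a simple classifier.

import numpy as np
from sklearn.datasets import load_digits
from sklearn.neural_network import MLPClassifier
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_digits(return_X_y=True)
net = MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=400,
                    random_state=0).fit(X, y)

def layer_activations(net, X):
    """Manual forward pass through the fitted MLP, returning each hidden layer."""
    acts, h = [], X
    for W, b in zip(net.coefs_[:-1], net.intercepts_[:-1]):
        h = np.maximum(h @ W + b, 0.0)        # ReLU hidden layers
        acts.append(h)
    return acts

for depth, h in enumerate(layer_activations(net, X), start=1):
    z = KernelPCA(n_components=16, kernel="rbf").fit_transform(h)
    score = cross_val_score(LogisticRegression(max_iter=1000), z, y, cv=3).mean()
    print(f"layer {depth}: accuracy from 16 kernel-PCA components = {score:.3f}")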

Book ChapterDOI
23 Jun 2010
TL;DR: This work investigates to what extent co-adaptive learning enables substantial BCI control for completely novice users and those who suffered from BCI illiteracy before.
Abstract: “BCI illiteracy” is one of the biggest problems and challenges in BCI research. It means that BCI control cannot be achieved by a non-negligible number of subjects (estimated at 20% to 25%). There are two main causes for BCI illiteracy in BCI users: either no SMR idle rhythm is observed over motor areas, or this idle rhythm is not attenuated during motor imagery, resulting in a classification performance lower than 70% (criterion level) already for offline calibration data. In a previous work by the same authors, the concept of machine-learning-based co-adaptive calibration was introduced. This new type of calibration provided substantially improved performance for a variety of users. Here, we use a similar approach and investigate to what extent co-adaptive learning enables substantial BCI control for completely novice users and those who suffered from BCI illiteracy before.

Journal ArticleDOI
TL;DR: The structure-activity relationships of 16 truxillic acid derivatives, investigated with a cell-based reporter gene assay and guided by molecular docking analysis, are presented.

01 Jan 2010
TL;DR: The effectiveness of introducing an intermediary state between state probabilities and interface command, driven by a dynamic control law, is investigated, and the strategies used by two subjects to achieve idle-state BCI control are outlined.
Abstract: The use of electroencephalography (EEG) for Brain Computer Interfacing (BCI) provides a cost-efficient, safe, portable, and easy-to-use BCI for both healthy users and the disabled. This paper will first briefly review some of the current challenges in BCI research and then discuss two of them in more detail, namely modeling the "no command" (rest) state and the use of control paradigms in BCI. For effective prosthetic control of a BCI system, or when employing BCI as an additional control channel for gaming or other generic man-machine interfacing, a user should not be required to be continuously in an active state, as is current practice. In our approach, the signals are first transduced by computing Gaussian probability distributions of signal features for each mental state; then a prior distribution of the idle state is inferred and subsequently adapted during use of the BCI. We furthermore investigate the effectiveness of introducing an intermediary state between state probabilities and interface command, driven by a dynamic control law, and outline the strategies used by two subjects to achieve idle-state BCI control.
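The transduction step can be illustrated as follows (a toy instantiation under my own assumptions: made-up band-power features, a hand-set idle prior and decision threshold): each mental state is modelled by a Gaussian over the features, an explicit idle class carries its own prior, and no command is issued unless an active state clearly wins.

import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
d = 4                                              # feature dimension

# class-conditional Gaussians, in practice fitted on calibration data
states = {
    "left":  multivariate_normal(mean=rng.normal(size=d), cov=np.eye(d)),
    "right": multivariate_normal(mean=rng.normal(size=d), cov=np.eye(d)),
    "idle":  multivariate_normal(mean=np.zeros(d), cov=2.0 * np.eye(d)),
}
prior = {"left": 0.3, "right": 0.3, "idle": 0.4}   # idle prior, adapted online

def decode(x, threshold=0.6):
    post = {k: prior[k] * states[k].pdf(x) for k in states}
    norm = sum(post.values())
    post = {k: v / norm for k, v in post.items()}
    best = max(post, key=post.get)
    # only issue a command if an *active* state is sufficiently probable
    return best if best != "idle" and post[best] > threshold else "no command"

print(decode(rng.normal(size=d)))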

Proceedings ArticleDOI
07 Jun 2010
TL;DR: Results show that optimized synthesized views achieve lower absolute distortion values than the best result of the approach that uses a fixed QP for the whole sequence.
Abstract: We present a preliminary study on the rate-distortion (RD) gain that can be achieved by applying RD optimization techniques in a multiview-plus-depth encoder. We consider the use of Multiview Video Coding (MVC) for both color and depth sequences, and evaluate the improvement that can be obtained by allowing a quantization parameter (QP) assignment on a macroblock basis compared to the use of a fixed QP for the whole sequence. The optimization criterion is the minimization of the distortion of the synthesized views generated at the receiver. Our motivation for this criterion is to capture the impact of depth coding according to its final purpose: the generation of virtual views. Since no standard objective quality metric for evaluating view synthesis artifacts has been established yet, the performance of several algorithms for quality evaluation of the target synthesized view has been compared. Beyond obtaining a better RD performance, as could be expected, results also show that optimized synthesized views achieve lower absolute distortion values than the best result of the approach that uses a fixed QP for the whole sequence.
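The per-macroblock decision amounts to the usual Lagrangian rate-distortion cost, evaluated on the synthesized view rather than on the coded one (schematic formulation, notation mine):

\[
\mathrm{QP}^{*}_{\mathrm{mb}} \;=\; \arg\min_{\mathrm{QP}} \; D_{\mathrm{synth}}(\mathrm{QP}) \;+\; \lambda\, R(\mathrm{QP}),
\]

where \(D_{\mathrm{synth}}\) measures the distortion induced in the virtual view generated at the receiver and \(R\) the bits spent on the color and depth data of that macroblock.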

Proceedings ArticleDOI
11 Nov 2010
TL;DR: The approach is based on localization of single-trial Fourier coefficients using sparse basis field expansions (S-FLEX) and reveals focal sources in the sensorimotor cortices, a finding which can be regarded as proof of the expected neurophysiological origin of the BCI control signal.
Abstract: We localize the sources of class-dependent event-related desynchronisation (ERD) of the mu-rhythm related to different types of motor imagery in Brain-Computer Interfacing (BCI) sessions. Our approach is based on localization of single-trial Fourier coefficients using sparse basis field expansions (S-FLEX). The analysis reveals focal sources in the sensorimotor cortices, a finding which can be regarded as proof of the expected neurophysiological origin of the BCI control signal. As a technical contribution, we extend S-FLEX to the multiple measurement case such that the activity of different frequency bins within the mu-band is coherently localized.

Journal ArticleDOI
TL;DR: Two separate online studies with healthy subjects investigate the usability and the speed of novel Brain-Computer Interface paradigms that exclusively use spatial-auditory stimuli to drive an ERP speller, and find that they qualify for future studies with patients who suffer from a loss of gaze control.
Abstract: Two separate online studies with healthy subjects investigate the usability and the speed of novel Brain-Computer Interface paradigms that exclusively use spatial-auditory stimuli to drive an ERP speller. It was found that participants could use both paradigms (named AMUSE and PASS2D) for a spelling task with an average accuracy of over 85% and high speed (~0.9 char/min). Based on these results, the paradigms qualify for future studies with patients who suffer from a loss of gaze control.

Journal ArticleDOI
TL;DR: It is demonstrated how the interplay of several modern kernel-based machine learning approaches can successfully improve ligand-based virtual screening results.
Abstract: We demonstrate the theoretical and practical application of modern kernel-based machine learning methods to ligand-based virtual screening by successful prospective screening for novel agonists of the peroxisome proliferator-activated receptor γ (PPARγ) [1]. PPARγ is a nuclear receptor involved in lipid and glucose metabolism, and related to type-2 diabetes and dyslipidemia. Applied methods included a graph kernel designed for molecular similarity analysis [2], kernel principal component analysis [3], multiple kernel learning [4], and Gaussian process regression [5]. In the machine learning approach to ligand-based virtual screening, one uses the similarity principle [6] to identify potentially active compounds based on their similarity to known reference ligands. Kernel-based machine learning [7] uses the "kernel trick", a systematic approach to the derivation of non-linear versions of linear algorithms like separating hyperplanes and regression. Prerequisites for kernel learning are similarity measures with the mathematical property of positive semidefiniteness (kernels). The iterative similarity optimal assignment graph kernel (ISOAK) [2] is defined directly on the annotated structure graph, and was designed specifically for the comparison of small molecules. In our virtual screening study, its use improved results, e.g., in principal component analysis-based visualization and Gaussian process regression. Following a thorough retrospective validation using a data set of 176 published PPARγ agonists [8], we screened a vendor library for novel agonists. Subsequent testing of 15 compounds in a cell-based transactivation assay [9] yielded four active compounds. The most interesting hit, a natural product derivative with a cyclobutane scaffold, is a full selective PPARγ agonist (EC50 = 10 ± 0.2 μM, inactive on PPARα and PPARβ/δ at 10 μM). We demonstrate how the interplay of several modern kernel-based machine learning approaches can successfully improve ligand-based virtual screening results.

Patent
31 Mar 2010
TL;DR: In this article, the authors present a method and system for analyzing messages transmitted in a communication network, which comprises the steps of determining an information-related citation index indicating how often information comprised in a message has been forwarded in consecutive messages to other users of the communication network.
Abstract: The invention relates to a method and system for analyzing messages transmitted in a communication network. An embodiment of the method comprises the steps of: determining an information-related citation index indicating how often information comprised in a message has been forwarded in consecutive messages to other users of the communication network, and providing an analysis result based on the information-related citation index.