Author

Klaus-Robert Müller

Other affiliations: Korea University, University of Tokyo, Fraunhofer Society
Bio: Klaus-Robert Müller is an academic researcher at the Technical University of Berlin. He has contributed to research on topics including artificial neural networks and support vector machines, has an h-index of 129, and has co-authored 764 publications that have received 79,391 citations. His previous affiliations include Korea University and the University of Tokyo.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors show that post- and pre-stimulus connectivity in the calibration recording is significantly correlated with online feedback performance in the μ and feedback frequency bands, and that this correlation between connectivity and BCI feedback accuracy is not explained by the signal-to-noise ratio of the oscillations in the corresponding post- and pre-stimulus intervals.
Abstract: Brain-Computer Interfaces (BCIs) are systems that allow users to control devices using brain activity alone. However, the ability of participants to command BCIs varies from subject to subject: about 20% of potential users of sensorimotor BCIs do not gain reliable control of the system. This inability to decode users' intentions calls for the identification of the neurophysiological factors that distinguish "good" from "poor" BCI performers. One important neurophysiological aspect in BCI research is that the neuronal oscillations used to control these systems show a rich repertoire of spatial sensorimotor interactions. We therefore hypothesized that neuronal connectivity in sensorimotor areas would determine BCI performance. Analyses for this study were performed on a large dataset of 80 inexperienced participants, who took part in a calibration and an online feedback session recorded on the same day. Undirected functional connectivity was computed over sensorimotor areas by means of the imaginary part of coherency. The results show that both post- and pre-stimulus connectivity in the calibration recording are significantly correlated with online feedback performance in the μ and feedback frequency bands. Importantly, the correlation between connectivity and BCI feedback accuracy was not explained by the signal-to-noise ratio of the oscillations in the corresponding post- and pre-stimulus intervals. Thus, this study demonstrates that BCI performance depends not only on the amplitude of sensorimotor oscillations, as shown previously, but also on the sensorimotor connectivity measured during the preceding training session. The presence of such connectivity between the motor and somatosensory systems is likely to facilitate motor imagery, which in turn is associated with a more pronounced modulation of sensorimotor oscillations (manifested in ERD/ERS) required for adequate BCI performance. We also discuss strategies for up-regulating such connectivity in order to enhance BCI performance.
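The connectivity measure named above, the imaginary part of coherency, can be illustrated with a short sketch; the sampling rate, toy signals, frequency band, and windowing parameters below are illustrative assumptions, not those used in the study.

```python
# Minimal sketch: imaginary part of coherency between two channels.
# Illustrative only -- sampling rate, band, and signals are made up,
# not the EEG montage or parameters used in the paper.
import numpy as np
from scipy.signal import csd, welch

fs = 250.0                      # assumed sampling rate in Hz
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)

# Two toy "sensorimotor" channels sharing a lagged 11 Hz component.
common = np.sin(2 * np.pi * 11 * t)
x = common + 0.5 * rng.standard_normal(t.size)
y = np.roll(common, 10) + 0.5 * rng.standard_normal(t.size)

# Cross- and auto-spectra via Welch's method.
f, sxy = csd(x, y, fs=fs, nperseg=256)
_, sxx = welch(x, fs=fs, nperseg=256)
_, syy = welch(y, fs=fs, nperseg=256)

coherency = sxy / np.sqrt(sxx * syy)
imcoh = np.imag(coherency)      # imaginary part of coherency

mu_band = (f >= 8) & (f <= 13)  # rough mu band
print("mean |ImCoh| in 8-13 Hz:", np.abs(imcoh[mu_band]).mean())
```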

15 citations

Book ChapterDOI
16 Mar 2009
TL;DR: SSA decomposes a multi-variate time-series into a stationary and a non-stationary subspace and can robustify other methods by restricting them to the stationary subspace.
Abstract: Non-stationarities are a ubiquitous phenomenon in time-series data, yet they pose a challenge to standard methodology: classification models and ICA components, for example, cannot be estimated reliably under distribution changes because the classic assumption of a stationary data-generating process is violated. Conversely, understanding the nature of observed non-stationary behaviour often lies at the heart of a scientific question. To this end, we propose a novel unsupervised technique: Stationary Subspace Analysis (SSA). SSA decomposes a multivariate time series into a stationary and a non-stationary subspace. This factorization is a universal tool for furthering the understanding of non-stationary data. Moreover, we can robustify other methods by restricting them to the stationary subspace. We demonstrate the performance of our novel concept in simulations and present a real-world application from Brain-Computer Interfacing.
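The actual SSA objective is optimized over rotations of the whitened data with a divergence-based criterion; the sketch below is only a crude, axis-aligned approximation of the underlying idea: whiten a toy recording, score each whitened axis by how much its per-epoch means and variances fluctuate, and keep the most stable axes. The function name, scoring rule, and toy data are assumptions for illustration, not the authors' algorithm.

```python
# Crude, axis-aligned approximation of the idea behind Stationary
# Subspace Analysis (SSA): after whitening, keep the directions whose
# per-epoch means and variances fluctuate least across epochs.
# The actual SSA optimizes over rotations with a KL-based objective;
# this sketch is only illustrative.
import numpy as np

def stationary_axes(data, n_epochs, d_stationary):
    """data: (n_samples, n_channels); returns a projection (n_channels, d)."""
    # Global whitening.
    data = data - data.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov(data.T))
    whitener = evecs / np.sqrt(evals)        # columns scaled to unit variance
    white = data @ whitener

    # Non-stationarity score per whitened axis: variability of epoch
    # means plus variability of epoch variances.
    epochs = np.array_split(white, n_epochs)
    means = np.array([e.mean(axis=0) for e in epochs])
    variances = np.array([e.var(axis=0) for e in epochs])
    score = means.var(axis=0) + variances.var(axis=0)

    keep = np.argsort(score)[:d_stationary]  # most stable axes
    return whitener[:, keep]

# Toy usage: 4 channels, 2 of which drift in variance over time.
rng = np.random.default_rng(1)
n = 4000
stationary_part = rng.standard_normal((n, 2))
drifting_part = rng.standard_normal((n, 2)) * np.linspace(1, 3, n)[:, None]
X = np.hstack([stationary_part, drifting_part])
P = stationary_axes(X, n_epochs=10, d_stationary=2)
print("projection shape:", P.shape)
```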

15 citations

Journal ArticleDOI
TL;DR: It is proved that a necessary and sufficient condition for uniqueness is that the non-Gaussian signal subspace is of minimal dimension, and this result guarantees that projection algorithms uniquely recover the underlying lower dimensional data signals.
Abstract: Dimension reduction is a key step in preprocessing large-scale data sets. A recently proposed method named non-Gaussian component analysis searches for a projection onto the non-Gaussian part of a given multivariate recording, which is a generalization of the deflationary projection pursuit model. In this contribution, we discuss the uniqueness of the subspaces of such a projection. We prove that a necessary and sufficient condition for uniqueness is that the non-Gaussian signal subspace is of minimal dimension. Furthermore, we propose a measure for estimating this minimal dimension and illustrate it by numerical simulations. Our result guarantees that projection algorithms uniquely recover the underlying lower dimensional data signals.
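The paper's measure of the minimal non-Gaussian dimension is not reproduced here; purely as an illustrative proxy, the sketch below whitens axis-aligned toy data and counts coordinates whose excess kurtosis departs from the Gaussian value of zero. The threshold, toy sources, and axis-wise test are assumptions that only make sense because the toy sources are axis-aligned.

```python
# Illustrative proxy (not the paper's estimator): after whitening,
# count axes whose excess kurtosis deviates clearly from 0 (the
# Gaussian value) as a rough guess of the non-Gaussian dimension.
# Only meaningful here because the toy sources are axis-aligned.
import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(2)
n = 20000
gaussian_part = rng.standard_normal((n, 5))          # 5 Gaussian directions
non_gaussian_part = rng.uniform(-1, 1, size=(n, 2))  # 2 non-Gaussian directions
X = np.hstack([gaussian_part, non_gaussian_part])

# Whiten.
X = X - X.mean(axis=0)
evals, evecs = np.linalg.eigh(np.cov(X.T))
white = X @ (evecs / np.sqrt(evals))

# Excess kurtosis per whitened axis (0 for a Gaussian).
k = kurtosis(white, axis=0, fisher=True)
threshold = 0.1                                      # arbitrary illustrative cutoff
print("estimated non-Gaussian dimension:", int(np.sum(np.abs(k) > threshold)))
```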

15 citations

Proceedings Article
04 Dec 2006
TL;DR: It is shown that the relevant information about a classification problem in feature space is contained up to negligible error in a finite number of leading kernel PCA components if the kernel matches the underlying learning problem.
Abstract: We show that the relevant information about a classification problem in feature space is contained, up to negligible error, in a finite number of leading kernel PCA components if the kernel matches the underlying learning problem. Thus, kernels not only transform data sets such that good generalization can be achieved even by linear discriminant functions, but this transformation is also performed in a manner which makes economic use of feature space dimensions. In the best case, kernels provide efficient implicit representations of the data to perform classification. Practically, we propose an algorithm which enables us to recover the subspace and dimensionality relevant for good classification. Our algorithm can therefore be applied (1) to analyze the interplay of data set and kernel in a geometric fashion, (2) to help in model selection, and (3) to de-noise in feature space in order to yield better classification results.
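A rough sketch of the general idea (not the proposed algorithm): project data onto a growing number of leading kernel PCA components and observe where a linear classifier's accuracy saturates. The dataset, RBF kernel, gamma value, and component counts below are illustrative assumptions.

```python
# Sketch of the general idea (not the paper's algorithm): classify on a
# growing number of leading kernel PCA components and watch the accuracy
# of a linear model saturate once the relevant dimensions are included.
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_moons(n_samples=600, noise=0.15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Kernel PCA with an illustrative RBF kernel.
kpca = KernelPCA(n_components=20, kernel="rbf", gamma=2.0).fit(X_tr)
Z_tr, Z_te = kpca.transform(X_tr), kpca.transform(X_te)

for k in (1, 2, 5, 10, 20):
    clf = LogisticRegression(max_iter=1000).fit(Z_tr[:, :k], y_tr)
    print(f"leading {k:2d} kPCA components -> test accuracy "
          f"{clf.score(Z_te[:, :k], y_te):.3f}")
```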

15 citations

Book ChapterDOI
01 Jan 2020
TL;DR: It is found that nuclear and stromal morphology and lymphocyte infiltration play an important role in the classification of the ER status, demonstrating that interpretable machine learning can be a vital tool for validating and generating hypotheses about morphological biomarkers.
Abstract: The eligibility for hormone therapy to treat breast cancer largely depends on the tumor's estrogen receptor (ER) status. Recent studies show that the ER status correlates with morphological features found in Haematoxylin-Eosin (HE) slides. Thus, HE analysis might be sufficient for patients for whom the classifier confidently predicts the ER status, and thereby obviate the need for additional examination such as immunohistochemical (IHC) staining. Several prior works are limited by either the use of engineered features, multi-stage models that use features unspecific to HE images, or a lack of explainability. To address these limitations, this work proposes an end-to-end neural network ensemble that shows state-of-the-art performance. We demonstrate that the approach also translates to the prediction of the cancer grade. Moreover, subsets can be selected from the test data for which the model can detect a positive ER status with a precision of 94% while classifying 13% of the patients. To compensate for the reduced interpretability that comes with end-to-end training, this work applies Layer-wise Relevance Propagation (LRP) to determine the relevant parts of the images a posteriori, commonly visualized as a heatmap overlaid on the input image. We found that nuclear and stromal morphology and lymphocyte infiltration play an important role in the classification of the ER status. This demonstrates that interpretable machine learning can be a vital tool for validating and generating hypotheses about morphological biomarkers.
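The ensemble itself is not reproduced here; the sketch below shows the LRP-ε redistribution rule on a toy two-layer ReLU network in plain numpy, i.e. how an output score is propagated back to the input features from which such heatmaps are built. Weights, shapes, and the epsilon value are illustrative assumptions.

```python
# Minimal numpy sketch of Layer-wise Relevance Propagation (LRP-epsilon)
# on a toy two-layer ReLU network -- not the paper's ensemble, just the
# redistribution rule behind the heatmaps. Weights are random.
import numpy as np

rng = np.random.default_rng(3)
W1, b1 = rng.standard_normal((8, 16)), np.zeros(16)   # input(8) -> hidden(16)
W2, b2 = rng.standard_normal((16, 1)), np.zeros(1)    # hidden(16) -> output(1)

def lrp_dense(a, w, b, relevance_out, eps=1e-6):
    """Redistribute relevance from a dense layer's output to its input."""
    z = a @ w + b                                      # pre-activations
    z = z + eps * np.sign(z)                           # epsilon stabilizer
    s = relevance_out / z                              # relevance per unit of output
    return a * (s @ w.T)                               # relevance per input

x = rng.standard_normal(8)
h = np.maximum(0, x @ W1 + b1)                         # forward pass, hidden
out = h @ W2 + b2                                      # forward pass, output

R_out = out                                            # start from the output score
R_hidden = lrp_dense(h, W2, b2, R_out)                 # ReLU passes relevance through
R_input = lrp_dense(x, W1, b1, R_hidden)

print("relevance per input feature:", np.round(R_input, 3))
print("conservation check:", R_input.sum(), "~", out.item())
```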

14 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won 1st place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
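The phrase "learning residual functions with reference to the layer inputs" can be illustrated by a minimal residual block; the PyTorch sketch below keeps the channel count fixed and uses an identity shortcut, a simplification in the spirit of the paper's basic block rather than its exact ImageNet configuration.

```python
# Minimal residual block in the spirit of the paper's basic block:
# the stacked layers learn F(x) and the block outputs F(x) + x.
# Fixed channel count and identity shortcut are simplifying assumptions.
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.bn2(self.conv2(self.relu(self.bn1(self.conv1(x)))))
        return self.relu(residual + x)   # F(x) + x: the shortcut connection

block = BasicResidualBlock(channels=64)
print(block(torch.randn(1, 64, 32, 32)).shape)   # torch.Size([1, 64, 32, 32])
```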

123,388 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning, as described in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings Article
Sergey Ioffe1, Christian Szegedy1
06 Jul 2015
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.
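The batch-normalizing transform at training time can be sketched in a few lines: normalize each feature over the mini-batch, then scale and shift with learnable parameters, while keeping running statistics for inference. Epsilon, momentum, and the toy mini-batch below are standard illustrative choices, not values taken from the paper.

```python
# Minimal numpy sketch of the batch-normalizing transform at training
# time: normalize each feature over the mini-batch, then scale and shift
# with learnable gamma and beta. Epsilon and momentum are usual defaults.
import numpy as np

def batch_norm_train(x, gamma, beta, running_mean, running_var,
                     momentum=0.1, eps=1e-5):
    """x: (batch, features). Returns the normalized batch and updated stats."""
    mean = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mean) / np.sqrt(var + eps)     # zero-mean, unit-variance per feature
    out = gamma * x_hat + beta                  # learnable scale and shift

    # Running statistics used later at inference time.
    running_mean = (1 - momentum) * running_mean + momentum * mean
    running_var = (1 - momentum) * running_var + momentum * var
    return out, running_mean, running_var

rng = np.random.default_rng(4)
x = rng.standard_normal((32, 4)) * 5 + 2        # a mini-batch with shifted, scaled inputs
gamma, beta = np.ones(4), np.zeros(4)
out, rm, rv = batch_norm_train(x, gamma, beta, np.zeros(4), np.ones(4))
print("per-feature mean after BN:", np.round(out.mean(axis=0), 6))
print("per-feature std  after BN:", np.round(out.std(axis=0), 6))
```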

30,843 citations