Author

Klaus-Robert Müller

Other affiliations: Korea University, University of Tokyo, Fraunhofer Society
Bio: Klaus-Robert Müller is an academic researcher from Technical University of Berlin. The author has contributed to research in topics: Artificial neural network & Support vector machine. The author has an h-index of 129 and has co-authored 764 publications receiving 79,391 citations. Previous affiliations of Klaus-Robert Müller include Korea University & University of Tokyo.


Papers
Proceedings ArticleDOI
11 Nov 2010
TL;DR: It is shown that Stationary Subspace Analysis (SSA), a time series analysis method, can be used to identify the underlying stationary and non-stationary brain sources from high-dimensional EEG measurements, and that restricting the BCI to the stationary sources found by SSA can significantly increase performance.
Abstract: Neurophysiological measurements obtained from e.g. EEG or fMRI are inherently non-stationary because the properties of the underlying brain processes vary over time. For example, in Brain-Computer-Interfacing (BCI), deteriorating performance (bitrate) is a common phenomenon since the parameters determined during the calibration phase can be suboptimal under the application regime, where the brain state is different, e.g. due to increased tiredness or changes in the experimental paradigm. We show that Stationary Subspace Analysis (SSA), a time series analysis method, can be used to identify the underlying stationary and non-stationary brain sources from high-dimensional EEG measurements. Restricting the BCI to the stationary sources found by SSA can significantly increase the performance. Moreover, SSA yields topographic maps corresponding to stationary and non-stationary brain sources which reveal their spatial characteristics.
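As a rough sketch of the underlying idea (not the actual SSA algorithm, which optimizes a projection of the whitened data so that the projected means and covariances stay constant across epochs), the toy code below simply scores candidate sources by how much their per-epoch mean and variance fluctuate and keeps the most stationary ones; the function names and the scoring heuristic are illustrative assumptions.

```python
import numpy as np

def stationarity_score(component, n_epochs):
    """Heuristic non-stationarity score: variability of per-epoch mean and variance.
    `component` is a 1-D signal; a lower score means a more stationary source."""
    epochs = np.array_split(component, n_epochs)
    means = np.array([e.mean() for e in epochs])
    variances = np.array([e.var() for e in epochs])
    return means.var() + variances.var()

def keep_stationary_sources(sources, n_keep, n_epochs=10):
    """Keep the `n_keep` sources whose per-epoch statistics fluctuate least.
    `sources` has shape (n_sources, n_samples), e.g. components obtained by
    ICA/PCA from multichannel EEG. This is only a toy stand-in for SSA."""
    scores = np.array([stationarity_score(s, n_epochs) for s in sources])
    order = np.argsort(scores)              # most stationary first
    return sources[order[:n_keep]], order

# Toy example: three stationary noise sources plus one source with a drifting mean.
rng = np.random.default_rng(0)
stationary = rng.standard_normal((3, 2000))
drifting = rng.standard_normal(2000) + np.linspace(0, 5, 2000)
sources = np.vstack([stationary, drifting[None, :]])
kept, order = keep_stationary_sources(sources, n_keep=3)
print("ranked by stationarity (last = most non-stationary):", order)
```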

69 citations

Proceedings Article
04 Dec 2006
TL;DR: A new paradigm is proposed that completely omits such calibration and instead transfers knowledge from prior sessions; a classifier constructed from individualized prototypes is shown to transfer successfully to a new session for a number of subjects.
Abstract: Up to now, even subjects that are experts in the use of machine learning based BCI systems still have to undergo a calibration session of about 20-30 min, from which their (movement) intentions are inferred. We now propose a new paradigm that allows us to completely omit such calibration and instead transfer knowledge from prior sessions. To achieve this goal we first define normalized CSP features and distances between them. Second, we derive prototypical features across sessions: (a) by clustering or (b) by feature concatenation methods. Finally, we construct a classifier based on these individualized prototypes and show that, indeed, classifiers can be successfully transferred to a new session for a number of subjects.
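The CSP features referred to above can be sketched with the textbook CSP computation, a generalized eigendecomposition of the two class covariance matrices followed by normalized log-variance features; the prototype at the end is only a schematic stand-in (a mean over one session's features), and all names are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=3):
    """Common Spatial Patterns via the generalized eigenvalue problem
    C1 w = lambda (C1 + C2) w.  X1, X2: (trials, channels, samples)."""
    C1 = np.mean([np.cov(x) for x in X1], axis=0)
    C2 = np.mean([np.cov(x) for x in X2], axis=0)
    evals, evecs = eigh(C1, C1 + C2)
    # Filters with the largest and smallest eigenvalues are most discriminative.
    idx = np.concatenate([np.arange(n_pairs),
                          np.arange(len(evals) - n_pairs, len(evals))])
    return evecs[:, idx].T                     # (2*n_pairs, channels)

def csp_features(trial, W):
    """Normalized log-variance features of the spatially filtered trial."""
    var = np.var(W @ trial, axis=1)
    return np.log(var / var.sum())

# Toy data: 20 trials per class, 22 channels, 250 samples.
rng = np.random.default_rng(1)
X1 = rng.standard_normal((20, 22, 250))
X2 = 1.5 * rng.standard_normal((20, 22, 250))
W = csp_filters(X1, X2)
# A session "prototype" could then be the mean feature vector, or a cluster
# centre over features pooled from several prior sessions.
prototype = np.mean([csp_features(t, W) for t in X1], axis=0)
print(prototype.shape)   # (6,)
```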

68 citations

Journal ArticleDOI
10 Feb 2015
TL;DR: In this paper, a co-adaptive closed-loop learning strategy was proposed for regression-based myoelectric control of a prosthetic hand with more than one degree of freedom (DoF).
Abstract: Myoelectric control of a prosthetic hand with more than one degree of freedom (DoF) is challenging, and clinically available techniques require a sequential actuation of the DoFs. Simultaneous and proportional control of multiple DoFs is possible with regression-based approaches allowing for fluent and natural movements. Conventionally, the regressor is calibrated in an open-loop with training based on recorded data and the performance is evaluated subsequently. For individuals with amputation or congenital limb-deficiency who need to (re)learn how to generate suitable muscle contractions, this open-loop process may not be effective. We present a closed-loop real-time learning scheme in which both the user and the machine learn simultaneously to follow a common target. Experiments with ten able-bodied individuals show that this co-adaptive closed-loop learning strategy leads to significant performance improvements compared to a conventional open-loop training paradigm. Importantly, co-adaptive learning allowed two individuals with congenital deficiencies to perform simultaneous 2-D proportional control at levels comparable to the able-bodied individuals, despite having to learn a completely new and unfamiliar mapping from muscle activity to movement trajectories. To our knowledge, this is the first study which investigates man-machine co-adaptation for regression-based myoelectric control. The proposed training strategy has the potential to improve myographic prosthetic control in clinically relevant settings.
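As a minimal sketch of the machine side of such a closed-loop scheme, the code below adapts a linear decoder from EMG-like features to a 2-D control signal with a least-mean-squares update on every sample while a simulated user keeps responding to a common target; the update rule, the simulated user, and all names are illustrative assumptions, not the estimator used in the study.

```python
import numpy as np

class OnlineLinearDecoder:
    """Maps an EMG feature vector to a 2-D control signal and is updated
    after every sample with a least-mean-squares step (illustrative only)."""

    def __init__(self, n_features, n_outputs=2, lr=0.02):
        self.W = np.zeros((n_outputs, n_features))
        self.lr = lr

    def predict(self, x):
        return self.W @ x

    def update(self, x, target):
        error = target - self.predict(x)
        self.W += self.lr * np.outer(error, x)
        return error

# Closed-loop toy run: the decoder adapts online while targets are shown to a
# simulated "user" whose muscle activations follow a fixed unknown mapping.
rng = np.random.default_rng(2)
true_map = rng.standard_normal((2, 8))            # stands in for the user's muscles
decoder = OnlineLinearDecoder(n_features=8)
for step in range(5000):
    target = rng.uniform(-1, 1, size=2)           # common target for user and machine
    emg = true_map.T @ target + 0.1 * rng.standard_normal(8)   # user's attempt
    decoder.update(emg, target)
# After adaptation the decoded output roughly recovers the intended target [1, 0].
print(np.round(decoder.predict(true_map.T @ np.array([1.0, 0.0])), 2))
```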

68 citations

Posted Content
TL;DR: An improved method is proposed that may serve as an extension of existing back-projection and decomposition techniques, and a quality criterion for explanation methods is formulated.
Abstract: Deep learning has significantly advanced the state of the art in machine learning. However, neural networks are often considered black boxes. There is significant effort to develop techniques that explain a classifier's decisions. Although some of these approaches have resulted in compelling visualisations, there is a lack of theory of what is actually explained. Here we present an analysis of these methods and formulate a quality criterion for explanation methods. On this ground, we propose an improved method that may serve as an extension for existing back-projection and decomposition techniques.
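The abstract does not spell out the proposed method, so the sketch below only illustrates the kind of back-projection explanation the paper analyzes: a plain gradient-times-input relevance map for a tiny two-layer ReLU network. This is a standard baseline, not the improved method of the paper, and all names are illustrative.

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Tiny two-layer ReLU network with a single output score."""
    h_pre = W1 @ x + b1
    h = np.maximum(h_pre, 0.0)
    score = W2 @ h + b2
    return score, h_pre

def gradient_x_input(x, W1, b1, W2, b2):
    """Gradient-times-input relevance, a common back-projection baseline:
    d(score)/dx = W1^T (relu'(h_pre) * W2), relevance = x * gradient."""
    _, h_pre = forward(x, W1, b1, W2, b2)
    relu_grad = (h_pre > 0).astype(float)
    grad_x = W1.T @ (relu_grad * W2)
    return x * grad_x

rng = np.random.default_rng(3)
W1, b1 = rng.standard_normal((16, 10)), np.zeros(16)
W2, b2 = rng.standard_normal(16), 0.0
x = rng.standard_normal(10)
relevance = gradient_x_input(x, W1, b1, W2, b2)
print(relevance.round(2))       # one relevance value per input feature
```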

68 citations

Proceedings Article
01 Jan 2002
TL;DR: An alternative embedding to multi-dimensional scaling (MDS) is presented that allows a variety of classical machine learning and signal processing algorithms to be applied, and the class of pairwise grouping algorithms sharing the shift-invariance property is shown to be statistically invariant under this embedding procedure.
Abstract: Pairwise data in empirical sciences typically violate metricity, either due to noise or due to fallible estimates, and therefore are hard to analyze by conventional machine learning technology. In this paper we therefore study ways to work around this problem. First, we present an alternative embedding to multi-dimensional scaling (MDS) that allows us to apply a variety of classical machine learning and signal processing algorithms. The class of pairwise grouping algorithms which share the shift-invariance property is statistically invariant under this embedding procedure, leading to identical assignments of objects to clusters. Based on this new vectorial representation, denoising methods are applied in a second step. Both steps provide a theoretically well controlled setup to translate from pairwise data to the respective denoised metric representation. We demonstrate the practical usefulness of our theoretical reasoning by discovering structure in protein sequence databases, visibly improving performance upon existing automatic methods.
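The embedding step can be sketched along the lines of classical MDS with a constant shift of the spectrum: double-center the pairwise dissimilarity matrix, lift negative eigenvalues so a valid Euclidean embedding exists, and read off vectorial coordinates. This is a simplified rendering under those assumptions, not necessarily the paper's exact construction, and all names are illustrative.

```python
import numpy as np

def shift_embed(D, n_dims=2):
    """Embed a symmetric pairwise dissimilarity matrix D (zero diagonal) into
    vectors: classical-MDS-style double centering plus a constant shift of the
    spectrum so that non-metric (noisy) data still yields a valid embedding."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    S = -0.5 * J @ D @ J                           # Gram-like matrix
    evals, evecs = np.linalg.eigh(S)
    shift = max(0.0, -evals.min())                 # lift negative eigenvalues to >= 0
    evals_shifted = evals + shift
    idx = np.argsort(evals_shifted)[::-1][:n_dims] # keep the largest components
    return evecs[:, idx] * np.sqrt(evals_shifted[idx])

# Toy pairwise data: squared Euclidean distances corrupted by symmetric noise.
rng = np.random.default_rng(4)
points = rng.standard_normal((30, 2))
D = np.linalg.norm(points[:, None] - points[None, :], axis=-1) ** 2
noise = 0.1 * np.abs(rng.standard_normal(D.shape))
D = D + (noise + noise.T) * (1 - np.eye(len(D)))   # keep the diagonal at zero
X = shift_embed(D)
print(X.shape)    # (30, 2) vectorial representation ready for denoising/clustering
```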

68 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won first place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
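The central reformulation, a block that learns a residual F(x) and adds it back onto its input, fits in a few lines; the toy sketch below (sizes, initialization, and names are illustrative assumptions) shows only y = relu(x + F(x)) with an identity shortcut, not the full convolutional architecture.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x, W1, W2):
    """y = relu(x + F(x)): the block learns the residual F rather than a full
    mapping; the identity shortcut requires F(x) to match x's dimensionality."""
    f = W2 @ relu(W1 @ x)     # F(x): two linear layers with a ReLU in between
    return relu(x + f)        # add the shortcut, then the usual nonlinearity

rng = np.random.default_rng(5)
x = rng.standard_normal(64)
W1 = 0.05 * rng.standard_normal((64, 64))
W2 = 0.05 * rng.standard_normal((64, 64))
y = residual_block(x, W1, W2)
# With small weights F(x) is close to zero, so the block starts near the identity;
# this is what makes very deep stacks of such blocks easy to optimize.
print(np.linalg.norm(y - relu(x)) / np.linalg.norm(x))   # small relative change
```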

123,388 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at the time: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings Article
Sergey Ioffe, Christian Szegedy
06 Jul 2015
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.
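The normalization itself is easy to write out; this sketch shows the training-time computation, per-feature standardization over the mini-batch followed by a learned scale and shift (the running statistics used at inference are omitted, and all names are illustrative assumptions).

```python
import numpy as np

def batch_norm(X, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift.
    X: (batch, features); gamma, beta: learnable per-feature parameters."""
    mean = X.mean(axis=0)
    var = X.var(axis=0)
    X_hat = (X - mean) / np.sqrt(var + eps)   # zero mean, unit variance per feature
    return gamma * X_hat + beta

rng = np.random.default_rng(6)
X = 3.0 + 2.0 * rng.standard_normal((32, 4))      # a mini-batch of layer inputs
gamma, beta = np.ones(4), np.zeros(4)
Y = batch_norm(X, gamma, beta)
print(Y.mean(axis=0).round(3), Y.std(axis=0).round(3))   # ~0 and ~1 per feature
```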

30,843 citations