Author

Klaus-Robert Müller

Other affiliations: Korea University, University of Tokyo, Fraunhofer Society
Bio: Klaus-Robert Müller is an academic researcher from Technical University of Berlin. The author has contributed to research in topics: Artificial neural network & Support vector machine. The author has an h-index of 129 and has co-authored 764 publications receiving 79,391 citations. Previous affiliations of Klaus-Robert Müller include Korea University & University of Tokyo.


Papers
Proceedings ArticleDOI
20 Jun 2007
TL;DR: This paper formally analyzes the asymptotic Bayesian generalization error and establishes its upper bound under a very general setting, and proposes a novel variant of stochastic complexity which can be used for choosing an appropriate model and hyper-parameters under a particular distribution change.
Abstract: In supervised learning, we commonly assume that training and test data are sampled from the same distribution. However, this assumption can be violated in practice and then standard machine learning techniques perform poorly. This paper focuses on revealing and improving the performance of Bayesian estimation when the training and test distributions are different. We formally analyze the asymptotic Bayesian generalization error and establish its upper bound under a very general setting. Our important finding is that lower order terms---which can be ignored in the absence of the distribution change---play an important role under the distribution change. We also propose a novel variant of stochastic complexity which can be used for choosing an appropriate model and hyper-parameters under a particular distribution change.
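
The setting can be made concrete with a small covariate-shift illustration (a synthetic sketch of the problem the paper addresses, not of its Bayesian analysis or the proposed stochastic-complexity criterion): training and test inputs are drawn from different distributions, and a model fitted on the training sample alone generalizes poorly where the test data lie.

```python
# Minimal covariate-shift illustration: training and test inputs follow
# different distributions, so a model chosen on training data alone can
# generalize poorly on the test region. All numbers are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    return np.sinc(x)                          # illustrative target function

x_tr = rng.normal(1.0, 0.5, 200)               # training inputs around x = 1
x_te = rng.normal(2.0, 0.3, 200)               # test inputs shifted towards x = 2
y_tr = true_f(x_tr) + 0.1 * rng.normal(size=x_tr.size)
y_te = true_f(x_te) + 0.1 * rng.normal(size=x_te.size)

# Ordinary least-squares fit of a straight line on the training sample
coeffs = np.polyfit(x_tr, y_tr, deg=1)

train_mse = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
test_mse = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
print(f"training MSE: {train_mse:.3f}, test MSE under shift: {test_mse:.3f}")
```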

37 citations

Journal ArticleDOI
TL;DR: The generalized ERD represents a powerful novel analysis tool for extending the understanding of inter-trial variability of evoked responses and therefore the robust processing of environmental stimuli in the presence of dynamical cortical states.
Abstract: Brains were built by evolution to react swiftly to environmental challenges. Thus, sensory stimuli must be processed ad hoc, i.e., independent—to a large extent—from the momentary brain state incidentally prevailing during stimulus occurrence. Accordingly, computational neuroscience strives to model the robust processing of stimuli in the presence of dynamical cortical states. A pivotal feature of ongoing brain activity is the regional predominance of EEG eigenrhythms, such as the occipital alpha or the pericentral mu rhythm, both peaking spectrally at 10 Hz. Here, we establish a novel generalized concept to measure event-related desynchronization (ERD), which allows one to model neural oscillatory dynamics also in the presence of dynamical cortical states. Specifically, we demonstrate that a somatosensory stimulus causes a stereotypic sequence of first an ERD and then an ensuing amplitude overshoot (event-related synchronization), which at a dynamical cortical state becomes evident only if the natural relaxation dynamics of unperturbed EEG rhythms is utilized as reference dynamics. Moreover, this computational approach also encompasses the more general notion of a “conditional ERD,” through which candidate explanatory variables can be scrutinized with regard to their possible impact on a particular oscillatory dynamics under study. Thus, the generalized ERD represents a powerful novel analysis tool for extending our understanding of inter-trial variability of evoked responses and therefore the robust processing of environmental stimuli.
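
For reference, the classical ERD that this work generalizes is a relative band-power change between a pre-stimulus baseline and a post-stimulus interval. A minimal sketch with synthetic data follows; the sampling rate, alpha-band limits, and epoch layout are illustrative assumptions, and the generalized and conditional ERD of the paper go beyond this simple ratio.

```python
# Classical ERD as a relative band-power change against a pre-stimulus baseline.
# Synthetic data; all parameters are illustrative.
import numpy as np

fs = 250                                       # assumed sampling rate in Hz
t = np.arange(0, 2.0, 1.0 / fs)                # 2 s epoch: 1 s baseline, 1 s post-stimulus
rng = np.random.default_rng(1)

# 10 Hz rhythm whose amplitude drops after the stimulus at t = 1 s
amp = np.where(t < 1.0, 1.0, 0.4)
eeg = amp * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=t.size)

def band_power(x, fs, f_lo=8.0, f_hi=12.0):
    """Mean periodogram power in the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].mean()

R = band_power(eeg[t < 1.0], fs)               # reference (baseline) power
A = band_power(eeg[t >= 1.0], fs)              # activity (post-stimulus) power
erd_percent = (A - R) / R * 100                # negative values indicate desynchronization
print(f"ERD: {erd_percent:.1f} %")
```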

37 citations

Proceedings ArticleDOI
12 Nov 2012
TL;DR: This pilot study shows that robust control is also possible with conventional linear regression if EMG power measures are available for a large number of electrodes, and that it is possible to linearize the problem with simple nonlinear transformations of band-pass power.
Abstract: Previous approaches for extracting real-time proportional control information simultaneously for multiple degrees of freedom (DoF) from the electromyogram (EMG) often used non-linear methods such as the multilayer perceptron (MLP). In this pilot study we show that robust control is also possible with conventional linear regression if EMG power measures are available for a large number of electrodes. In particular, we show that it is possible to linearize the problem with simple nonlinear transformations of band-pass power. Because of its simplicity the method scales well to high dimensions, is easily regularized when insufficient training data is available, and is particularly well suited for real-time control as well as on-line optimization.
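
A minimal sketch of the kind of pipeline described follows; the electrode count, the log transform, and the ridge regularization are illustrative assumptions rather than the authors' exact settings. Band-pass power per electrode is passed through a simple nonlinear transform, and a regularized linear regression maps the resulting features to the DoF targets.

```python
# Linear regression on nonlinearly transformed EMG band power (sketch).
# Electrode count, window count, and the log transform are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n_windows, n_electrodes, n_dof = 500, 64, 2

# Stand-in for band-pass power per electrode and analysis window
power = rng.gamma(shape=2.0, scale=1.0, size=(n_windows, n_electrodes))
X = np.log(power + 1e-6)                       # simple nonlinear transform of band power
X = np.hstack([X, np.ones((n_windows, 1))])    # bias column

# Synthetic proportional control targets for two DoF
W_true = rng.normal(size=(X.shape[1], n_dof))
Y = X @ W_true + 0.1 * rng.normal(size=(n_windows, n_dof))

# Ridge-regularized least squares: easy to regularize when training data are scarce
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ Y)

Y_hat = X @ W                                  # real-time prediction is one matrix product
print("residual RMS:", np.sqrt(np.mean((Y_hat - Y) ** 2)))
```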

37 citations

Journal ArticleDOI
TL;DR: This simulation study demonstrates how the quality of cerebral DOT reconstructions is altered with the choice of the regularization parameter for different methods and proposes a cross-validation procedure which achieves a reconstruction quality close to the optimum.
Abstract: Functional near-infrared spectroscopy (fNIRS) is an optical method for noninvasively determining brain activation by estimating changes in the absorption of near-infrared light. Diffuse optical tomography (DOT) extends fNIRS by applying overlapping “high density” measurements, and thus provides three-dimensional imaging with an improved spatial resolution. Reconstructing brain activation images with DOT requires solving an underdetermined inverse problem with far more unknowns in the volume than in the surface measurements. All methods of solving this type of inverse problem rely on regularization and the choice of corresponding regularization or convergence criteria. While several regularization methods are available, it is unclear how well suited they are for cerebral functional DOT in a semi-infinite geometry. Furthermore, the regularization parameter is often chosen without an independent evaluation, and it may be tempting to choose the solution that matches a hypothesis and to reject the others. In this simulation study, we start out by demonstrating how the quality of cerebral DOT reconstructions is altered with the choice of the regularization parameter for different methods. To independently select the regularization parameter, we propose a cross-validation procedure which achieves a reconstruction quality close to the optimum. Additionally, we compare the outcome of seven different image reconstruction methods for cerebral functional DOT. The methods selected include reconstruction procedures that are already widely used for cerebral DOT [minimum l2-norm estimate (l2MNE) and truncated singular value decomposition], recently proposed sparse reconstruction algorithms [minimum l1- and a smooth minimum l0-norm estimate (l1MNE, l0MNE, respectively)] and a depth- and noise-weighted minimum norm (wMNE). Furthermore, we expand the range of algorithms for DOT by adapting two EEG-source localization algorithms [sparse basis field expansions and linearly constrained minimum variance (LCMV) beamforming]. Independent of the applied noise level, we find that the LCMV beamformer is best for single spot activations with perfect location and focality of the results, whereas the minimum l1-norm estimate succeeds with multiple targets.
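
The role of the regularization parameter can be sketched with a generic minimum-norm (Tikhonov-type) reconstruction whose parameter is chosen by cross-validation over the measurement channels; the random forward matrix and noise level below are synthetic stand-ins, not a DOT light-propagation model, and the procedure is only loosely analogous to the one proposed in the paper.

```python
# Underdetermined inverse problem y = A x + noise, solved with a regularized
# minimum l2-norm estimate; lambda is selected by k-fold cross-validation over
# measurement channels. The forward matrix is a random stand-in.
import numpy as np

rng = np.random.default_rng(3)
n_meas, n_vox = 40, 400                        # far fewer measurements than unknowns

A = rng.normal(size=(n_meas, n_vox))
x_true = np.zeros(n_vox)
x_true[rng.choice(n_vox, size=3, replace=False)] = 1.0   # a few activation spots
y = A @ x_true + 0.05 * rng.normal(size=n_meas)

def reconstruct(A, y, lam):
    """Regularized minimum-norm estimate x = A^T (A A^T + lam I)^{-1} y."""
    return A.T @ np.linalg.solve(A @ A.T + lam * np.eye(A.shape[0]), y)

def cv_error(A, y, lam, k=5):
    """Average held-out data misfit over k folds of the measurement channels."""
    folds = np.array_split(rng.permutation(len(y)), k)
    err = 0.0
    for test in folds:
        train = np.setdiff1d(np.arange(len(y)), test)
        x_hat = reconstruct(A[train], y[train], lam)
        err += np.mean((A[test] @ x_hat - y[test]) ** 2)
    return err / k

lams = np.logspace(-3, 2, 12)
best_lam = min(lams, key=lambda lam: cv_error(A, y, lam))
print("selected lambda:", best_lam)
x_hat = reconstruct(A, y, best_lam)            # reconstruction with the selected parameter
```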

37 citations

Journal Article
TL;DR: The segmentation achieved by this unsupervised method is very precise and includes transients, which makes the approach promising for future applications.
Abstract: We present a framework for the unsupervised segmentation of time series. It applies to non-stationary signals originating from different dynamical systems which alternate in time, a phenomenon which appears in many natural systems. In our approach, predictors compete for data points of a given time series. We combine competition and evolutionary inertia into a learning rule. Under this learning rule the system evolves such that the predictors which finally survive unambiguously identify the underlying processes. The segmentation achieved by this method is very precise and transients are included, which makes our approach promising for future applications. Key words: neural networks, non-linear dynamics, chaos, time series analysis, prediction, competing neural networks
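
The competition idea can be illustrated with a toy example in which two simple AR(1) predictors compete for the samples of a switching time series; the softmax-style responsibilities and the learning rates below are illustrative stand-ins for the competition-and-inertia rule of the paper.

```python
# Toy competition of predictors for segmenting a switching time series.
# Two AR(1) experts receive responsibility-weighted updates; the expert with
# the highest responsibility labels each time step.
import numpy as np

rng = np.random.default_rng(4)

# Time series alternating between two AR(1) regimes every 150 samples
a_regimes, T = [0.9, -0.8], 600
x = np.zeros(T)
for t in range(1, T):
    a = a_regimes[(t // 150) % 2]
    x[t] = a * x[t - 1] + 0.1 * rng.normal()

coeffs = np.array([0.1, -0.1])                 # two competing AR(1) predictors
eta, beta = 0.05, 50.0                         # learning rate and competition sharpness
labels = np.zeros(T, dtype=int)

for t in range(1, T):
    preds = coeffs * x[t - 1]
    errs = (x[t] - preds) ** 2
    resp = np.exp(-beta * errs)
    resp /= resp.sum()                         # soft competition for this data point
    coeffs += eta * resp * (x[t] - preds) * x[t - 1]   # responsibility-weighted LMS step
    labels[t] = int(np.argmax(resp))           # segmentation label for this sample

print("learned coefficients:", coeffs)         # ideally close to the regime parameters
```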

37 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously; the resulting residual nets won 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
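
The central idea is that a small stack of layers learns a residual function F(x) which is added back to its input through an identity shortcut, so that the block outputs F(x) + x. A minimal NumPy forward-pass sketch follows; dense layers and the chosen feature size stand in for the convolutional blocks of the actual architecture.

```python
# Forward pass of a simplified residual block: the residual branch F(x) is
# added to the input via an identity shortcut. Dense layers stand in for the
# paper's convolutional layers; sizes are illustrative.
import numpy as np

rng = np.random.default_rng(5)
d = 64                                         # feature dimension (illustrative)

W1, b1 = 0.1 * rng.normal(size=(d, d)), np.zeros(d)
W2, b2 = 0.1 * rng.normal(size=(d, d)), np.zeros(d)

def relu(z):
    return np.maximum(z, 0.0)

def residual_block(x):
    """Return relu(x + F(x)) with F a small two-layer transformation."""
    f = relu(x @ W1 + b1) @ W2 + b2            # residual branch F(x)
    return relu(x + f)                         # identity shortcut keeps shapes equal

x = rng.normal(size=(8, d))                    # a batch of 8 feature vectors
y = residual_block(x)
print(y.shape)                                 # (8, 64): blocks of this form can be stacked
```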

123,388 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI

08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at the time, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings Article
Sergey Ioffe, Christian Szegedy
06 Jul 2015
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.
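
The normalization step itself is easy to write down: per mini-batch and per feature, subtract the batch mean, divide by the batch standard deviation, then apply a learned scale (gamma) and shift (beta). The sketch below covers only the training-time computation and omits the running statistics used at inference.

```python
# Training-time batch normalization of one layer's inputs: normalize each
# feature over the mini-batch, then scale by gamma and shift by beta.
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """x: (batch, features). Returns the normalized, scaled and shifted batch."""
    mu = x.mean(axis=0)                        # per-feature mini-batch mean
    var = x.var(axis=0)                        # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)      # zero mean, unit variance per feature
    return gamma * x_hat + beta

rng = np.random.default_rng(6)
x = 3.0 + 2.0 * rng.normal(size=(32, 4))       # mini-batch with shifted, scaled inputs
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))   # approximately 0 and 1 per feature
```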

30,843 citations