Author

Klaus-Robert Müller

Other affiliations: Korea University, University of Tokyo, Fraunhofer Society
Bio: Klaus-Robert Müller is an academic researcher at the Technical University of Berlin. His research focuses on topics including artificial neural networks and support vector machines. He has an h-index of 129 and has co-authored 764 publications receiving 79,391 citations. Previous affiliations of Klaus-Robert Müller include Korea University and the University of Tokyo.


Papers
Book (DOI)
01 Jan 2015
TL;DR: This book discusses the development of non-invasive brain-machine interfacing technology, current trends in memory implantation and rehabilitation, and the benefits and limits of multimodal neuroimaging for brain-computer interfaces.
Abstract (contents):
Part I. Non-invasive Brain-Computer Interface
Chapter 1. Future directions for brain-machine interfacing technology
Chapter 2. Brain-Computer Interface for Smart Vehicle: Detection of Braking Intention during Simulated Driving
Chapter 3. Benefits and limits of multimodal neuroimaging for Brain Computer Interfaces
Chapter 4. Multifrequency Analysis of Brain-Computer Interfaces
Part II. Cognitive- and Neural-rehabilitation Engineering
Chapter 5. Current Trends in Memory Implantation and Rehabilitation
Chapter 6. Moving Brain Controlled Devices Outside the Lab: Principles and Applications
Part III. Big Data Neurocomputing
Chapter 7. Across cultures: a Cognitive and Computational Analysis of Emotional and Conversational Facial Expressions in Germany and Korea
Chapter 8. Bottom-Up Processing in Complex Scenes: a unifying perspective on segmentation, fixation saliency, candidate regions, base-detail decomposition, and image enhancement
Chapter 9. Perception-based motion cueing: a Cybernetics approach to motion simulation
Chapter 10. The other-race effect revisited: no effect for faces varying in race only
Part IV. Early Diagnosis and Prediction of Neural Diseases
Chapter 11. Functional neuromonitoring in acquired head injury
Chapter 12. Diagnostic Optical Imaging Technology and its Principles
Chapter 13. Detection of Brain Metastases using Magnetic Resonance Imaging
Chapter 14. Deep Learning in Diagnosis of Brain Disorders

8 citations

Patent
11 Sep 1998
TL;DR: In this patent, a time series of at least one system variable x(t) is first subjected to modeling, for example switch segmentation, so that in each time segment of a predetermined minimum length a predetermined prediction model for a system mode is detected for each system variable. Modeling of the time series is then followed by drift segmentation, in which, for each transition of the system from a first system mode to a second system mode, a series of mixed prediction models is detected, produced by linear, paired superimposition of the prediction models of the two system modes.
Abstract: In a method for detecting the modes of a dynamic system with a large number of modes, each having a set α(t) of characteristic system parameters, a time series of at least one system variable x(t) is subjected to modeling, for example switch segmentation, so that in each time segment of a predetermined minimum length a predetermined prediction model, for example a neural network, for a system mode is detected for each system variable x(t). Modeling of the time series is followed by drift segmentation, in which, in each time segment in which the system transitions from a first system mode to a second system mode, a series of mixed prediction models is detected, produced by linear, paired superimposition of the prediction models of the two system modes.
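As an illustration of the drift-segmentation idea above, the following sketch blends two per-mode predictors by linear, paired superimposition with a mixing weight that ramps across a transition segment. The toy linear models, the linear ramp, and the segment length are assumptions made for the example, not details taken from the patent.

```python
# Illustrative sketch only: a linear, paired superimposition of two fitted
# per-mode prediction models over a drift segment. The model forms, the
# linear ramp of the mixing weight, and the toy data are assumptions for
# demonstration, not the patented procedure itself.
import numpy as np

def mixed_prediction(model_a, model_b, x, lam):
    """Blend two per-mode predictors with mixing weight lam in [0, 1]."""
    return (1.0 - lam) * model_a(x) + lam * model_b(x)

# Two toy "system mode" predictors for a scalar time series x(t).
model_a = lambda x: 0.9 * x          # mode 1: slowly decaying dynamics
model_b = lambda x: -0.5 * x + 2.0   # mode 2: a different linear regime

# During a drift segment of T steps, ramp the mixing weight linearly,
# producing a series of mixed prediction models between the two modes.
T = 10
x_t = 1.0
for t in range(T):
    lam = t / (T - 1)                # 0 -> pure mode 1, 1 -> pure mode 2
    x_t = mixed_prediction(model_a, model_b, x_t, lam)
    print(f"t={t}, lambda={lam:.2f}, prediction={x_t:.3f}")
```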

7 citations

Posted Content
TL;DR: In this article, the authors extend the shrinkage concept by allowing simultaneous shrinkage to a set of targets and derive conditions under which the estimation of the shrinkage intensities yields the optimal expected squared error in the limit.
Abstract: Stein showed that the multivariate sample mean is outperformed by "shrinking" to a constant target vector. Ledoit and Wolf extended this approach to the sample covariance matrix and proposed a multiple of the identity as shrinkage target. In a general framework, independent of a specific estimator, we extend the shrinkage concept by allowing simultaneous shrinkage to a set of targets. Application scenarios include settings with (A) additional data sets from potentially similar distributions, (B) non-stationarity, (C) a natural grouping of the data or (D) multiple alternative estimators which could serve as targets. We show that this Multi-Target Shrinkage (MTS) can be translated into a quadratic program and derive conditions under which the estimation of the shrinkage intensities yields the optimal expected squared error in the limit. For the sample mean and the sample covariance as specific instances, we derive conditions under which the optimality of MTS holds. We consider two asymptotic settings: the large dimensional limit (LDL), where the dimensionality and the number of observations go to infinity at the same rate, and the finite observations large dimensional limit (FOLDL), where only the dimensionality goes to infinity while the number of observations remains constant. We then show the effectiveness in extensive simulations and on real world data.
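To make the multi-target idea concrete, here is a hedged sketch that shrinks a sample covariance toward two targets, with the intensities chosen by a small constrained quadratic program fitted against a held-out covariance. The hold-out proxy, the specific targets, and the SLSQP solver are assumptions for illustration; the paper instead derives analytic conditions and intensity estimates.

```python
# Hedged sketch: shrink a sample covariance toward several targets at once,
# choosing the shrinkage intensities by a small quadratic program. The
# intensities are fit against a held-out covariance as a simple plug-in
# proxy for the unknown truth; this is not the paper's analytic estimator.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
d, n = 20, 40
X_train = rng.standard_normal((n, d))
X_val = rng.standard_normal((4 * n, d))

S = np.cov(X_train, rowvar=False)                # base estimator
targets = [
    np.trace(S) / d * np.eye(d),                 # target A: scaled identity
    np.diag(np.diag(S)),                         # target B: diagonal of S
]
C_val = np.cov(X_val, rowvar=False)              # held-out proxy covariance

def mts(lmbda):
    """Multi-target shrinkage estimate for intensities lmbda."""
    est = (1.0 - lmbda.sum()) * S
    for l_i, T_i in zip(lmbda, targets):
        est = est + l_i * T_i
    return est

def objective(lmbda):
    return np.linalg.norm(mts(lmbda) - C_val, "fro") ** 2

k = len(targets)
res = minimize(
    objective,
    x0=np.full(k, 1.0 / (k + 1)),
    bounds=[(0.0, 1.0)] * k,
    constraints=[{"type": "ineq", "fun": lambda l: 1.0 - l.sum()}],
    method="SLSQP",
)
print("shrinkage intensities:", res.x)
```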

7 citations

Journal Article (DOI)
TL;DR: The autocorrelation matrix of the kernelized input vector is well approximated by the squared Gram matrix (scaled down by the dictionary size), leading to a couple of fundamental insights into online learning with kernels.
Abstract: The autocorrelation matrix of the kernelized input vector is well approximated by the squared Gram matrix (scaled down by the dictionary size). This holds true under the condition that the input covariance matrix in the feature space is approximated by its sample estimate based on the dictionary elements, leading to a couple of fundamental insights into online learning with kernels. First, the eigenvalue spread of the autocorrelation matrix relevant to the hyperplane projection along affine subspace algorithm is approximately a square root of that for the kernel normalized least mean square algorithm. This clarifies the mechanism behind fast convergence due to the use of a Hilbertian metric. Second, for efficient function estimation, the dictionary needs to be constructed in general by taking into account the distribution of the input vector, so as to satisfy the condition. The theoretical results are justified by computer experiments.
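The approximation can be checked numerically. The sketch below compares the empirical autocorrelation of the kernelized input vector against the squared Gram matrix scaled down by the dictionary size, for a dictionary drawn from the same distribution as the inputs (so that the stated condition roughly holds). The Gaussian kernel, the dimensions, and the sampling scheme are assumptions chosen only for the demonstration.

```python
# Sketch of the stated approximation: for the kernelized input vector
# kappa(x) = [k(x, d_1), ..., k(x, d_r)], the autocorrelation matrix
# E[kappa(x) kappa(x)^T] is close to K @ K / r, where K is the Gram matrix
# of the dictionary. Kernel, sizes, and distributions are illustrative.
import numpy as np

rng = np.random.default_rng(1)
dim, r, n = 2, 50, 20000
dictionary = rng.standard_normal((r, dim))   # dictionary drawn from the input distribution
X = rng.standard_normal((n, dim))            # inputs from the same distribution

def gaussian_kernel(A, B, gamma=0.5):
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

K = gaussian_kernel(dictionary, dictionary)  # r x r Gram matrix
Kappa = gaussian_kernel(X, dictionary)       # n x r kernelized input vectors

R_emp = Kappa.T @ Kappa / n                  # empirical autocorrelation matrix
R_approx = K @ K / r                         # squared Gram matrix, scaled by dictionary size

rel_err = np.linalg.norm(R_emp - R_approx) / np.linalg.norm(R_emp)
print(f"relative Frobenius error: {rel_err:.3f}")
```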

7 citations

Proceedings Article
28 May 2022
TL;DR: This work proposes a modified attention mechanism adapted to the underlying physics, which makes it possible to recover the relevant non-local quantum mechanical effects over arbitrary length scales, and introduces spherical harmonic coordinates (SPHCs) to reflect higher-order geometric information for each atom in a molecule, enabling a non-local formulation of attention in SPHC space.
Abstract: The application of machine learning methods in quantum chemistry has enabled the study of numerous chemical phenomena that are computationally intractable with traditional ab-initio methods. However, some quantum mechanical properties of molecules and materials depend on non-local electronic effects, which are often neglected due to the difficulty of modeling them efficiently. This work proposes a modified attention mechanism adapted to the underlying physics, which allows the relevant non-local effects to be recovered. Namely, we introduce spherical harmonic coordinates (SPHCs) to reflect higher-order geometric information for each atom in a molecule, enabling a non-local formulation of attention in SPHC space. Our proposed model So3krates - a self-attention based message passing neural network - uncouples geometric information from atomic features, making them independently amenable to attention mechanisms. Thereby we construct spherical filters, which extend the concept of continuous filters in Euclidean space to SPHC space and serve as the foundation for a spherical self-attention mechanism. We show that, in contrast to other published methods, So3krates is able to describe non-local quantum mechanical effects over arbitrary length scales. Further, we find evidence that the inclusion of higher-order geometric correlations increases data efficiency and improves generalization. So3krates matches or exceeds state-of-the-art performance on popular benchmarks, notably requiring a significantly lower number of parameters (0.25 - 0.4x) while at the same time giving a substantial speedup (6 - 14x for training and 2 - 11x for inference) compared to other models.
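As a rough illustration of the SPHC idea, the sketch below sums real spherical harmonics (only up to degree one here) of the unit vectors from each atom to all others, and combines the resulting geometric features with invariant atomic features in a toy all-pairs attention weight. This is a conceptual sketch under strong simplifications, not the published So3krates architecture; the feature sizes and the attention form are assumptions.

```python
# Heavily simplified sketch of per-atom spherical harmonic coordinates
# (SPHCs): sum real spherical harmonics (up to degree l = 1 only) of the
# unit vectors pointing to other atoms, and use them alongside, but
# separately from, invariant atomic features in a toy attention weight.
# This illustrates the concept; it is not the So3krates model.
import numpy as np

def real_sph_harm_l1(unit_vecs):
    """Real spherical harmonics Y_00, Y_1-1, Y_10, Y_11 of unit vectors."""
    x, y, z = unit_vecs[:, 0], unit_vecs[:, 1], unit_vecs[:, 2]
    c0 = 0.5 / np.sqrt(np.pi)
    c1 = np.sqrt(3.0 / (4.0 * np.pi))
    return np.stack([np.full_like(x, c0), c1 * y, c1 * z, c1 * x], axis=1)

def sphc(positions):
    """Per-atom SPHC: sum of Y_lm over directions to all other atoms."""
    n = positions.shape[0]
    chi = np.zeros((n, 4))
    for i in range(n):
        diffs = np.delete(positions, i, axis=0) - positions[i]
        units = diffs / np.linalg.norm(diffs, axis=1, keepdims=True)
        chi[i] = real_sph_harm_l1(units).sum(axis=0)
    return chi

rng = np.random.default_rng(2)
pos = rng.standard_normal((5, 3))        # toy molecule: 5 atoms
feats = rng.standard_normal((5, 8))      # invariant per-atom features

chi = sphc(pos)                          # geometric (SPHC) features
# Toy attention logits: invariant-feature similarity plus SPHC similarity,
# kept as two separate terms so geometry and features stay decoupled.
logits = feats @ feats.T + chi @ chi.T
logits = logits - logits.max(axis=1, keepdims=True)   # stable softmax
attn = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(attn.round(2))
```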

7 citations


Cited by
Proceedings Article (DOI)
27 Jun 2016
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; an ensemble of these residual nets won first place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
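The core construction is a block that learns a residual function F(x) and outputs F(x) + x through a skip connection. The following minimal PyTorch sketch shows such a block for the same-channel case; the layer sizes and the test input are illustrative choices, not the exact configuration used in the paper.

```python
# Minimal sketch of residual learning: the block learns a residual F(x)
# and returns F(x) + x, so identity mappings are easy to represent.
# Channel count and input shape are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        residual = self.relu(self.bn1(self.conv1(x)))
        residual = self.bn2(self.conv2(residual))
        return self.relu(residual + x)   # skip connection: output = F(x) + x

block = ResidualBlock(channels=16)
out = block(torch.randn(1, 16, 32, 32))
print(out.shape)   # torch.Size([1, 16, 32, 32])
```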

123,388 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning, as presented in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal Article (DOI)

08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one, which seemed an odd beast at the time: an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings Article
Sergey Ioffe, Christian Szegedy
06 Jul 2015
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.
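A minimal sketch of the training-time transform described above: each feature is normalized over the mini-batch and then scaled and shifted by learned parameters gamma and beta. The running averages used at inference and the parameter updates are omitted; the shapes and toy inputs are illustrative assumptions.

```python
# Sketch of the batch normalization forward pass (training time only):
# normalize each feature over the mini-batch, then apply a learned scale
# (gamma) and shift (beta). Inference-time running statistics are omitted.
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """x: (batch, features). Returns the normalized, scaled, shifted batch."""
    mu = x.mean(axis=0)                      # per-feature mini-batch mean
    var = x.var(axis=0)                      # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalize to zero mean, unit variance
    return gamma * x_hat + beta

rng = np.random.default_rng(3)
x = 5.0 + 2.0 * rng.standard_normal((64, 10))    # mini-batch of layer inputs
y = batch_norm_forward(x, gamma=np.ones(10), beta=np.zeros(10))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))   # ~0 mean, ~1 std
```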

30,843 citations