Author

Klaus-Robert Müller

Other affiliations: Korea University, University of Tokyo, Fraunhofer Society
Bio: Klaus-Robert Müller is an academic researcher from the Technical University of Berlin. The author has contributed to research in the topics of artificial neural networks and support vector machines, has an h-index of 129, and has co-authored 764 publications receiving 79,391 citations. Previous affiliations of Klaus-Robert Müller include Korea University and the University of Tokyo.


Papers
Journal ArticleDOI
TL;DR: In this article, the authors proposed new efficient representations for matrices with low-entropy statistics, which exploit the statistical properties of the data in order to reduce the size and execution complexity.
Abstract: At the core of any inference procedure in deep neural networks are dot product operations, which are the component that requires the most computational resources. For instance, deep neural networks such as VGG-16 require up to 15 giga-operations to perform the dot products in a single forward pass, which results in significant energy consumption and thus limits their use in resource-limited environments, e.g., on embedded devices or smartphones. One common approach to reducing the complexity of inference is to prune and quantize the weight matrices of the neural network. Usually, this results in matrices whose entropy values are low, as measured relative to the empirical probability mass distribution of their elements. In order to efficiently exploit such matrices, one usually relies on, inter alia, sparse matrix representations. However, most common matrix storage formats make strong statistical assumptions about the distribution of the elements and therefore cannot efficiently represent the entire set of matrices that exhibit low-entropy statistics (and thus the entire set of compressed neural network weight matrices). In this paper, we address this issue and present new efficient representations for matrices with low-entropy statistics. Like sparse matrix data structures, these formats exploit the statistical properties of the data in order to reduce size and execution complexity. Moreover, we show that the proposed data structures can not only be regarded as a generalization of sparse formats but are also more energy and time efficient under practically relevant assumptions. Finally, we test the storage requirements and execution performance of the proposed formats on compressed neural networks and compare them to dense and sparse representations. We experimentally show that we attain up to 42× compression ratios, 5× speedups, and 90× energy savings when we losslessly convert state-of-the-art networks such as AlexNet, VGG-16, ResNet152, and DenseNet into the new data structures and benchmark their respective dot products.
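To make the general principle concrete, here is a minimal Python sketch of a shared-value row format and its dot product: columns are grouped by their (quantized) weight value, the matching inputs are summed first, and each distinct value is multiplied only once per row. This is an illustration of the idea, not the paper's actual data structures or benchmarking code; all names are hypothetical.

```python
import numpy as np

def to_shared_value_rows(W):
    """Encode each row of a quantized matrix W as {weight value: column indices}.
    Unlike CSR, which stores one (value, column) pair per nonzero, columns are
    grouped by their shared value, so each distinct value appears once per row."""
    rows = []
    for row in W:
        groups = {}
        for j, w in enumerate(row):
            if w != 0.0:
                groups.setdefault(float(w), []).append(j)
        rows.append(groups)
    return rows

def shared_value_dot(rows, x):
    """Dot product needing one multiplication per distinct value and row:
    inputs that share a weight are summed first, then scaled once."""
    y = np.zeros(len(rows))
    for i, groups in enumerate(rows):
        y[i] = sum(w * x[cols].sum() for w, cols in groups.items())
    return y

# Toy check against the dense product for a coarsely quantized matrix.
W = np.round(np.random.randn(4, 8) * 2) / 2.0   # only a few distinct values
x = np.random.randn(8)
assert np.allclose(shared_value_dot(to_shared_value_rows(W), x), W @ x)
```

The fewer distinct values a row contains (i.e., the lower its entropy), the fewer multiplications this scheme needs, which is the effect the paper's formats exploit systematically.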

58 citations

Journal ArticleDOI
TL;DR: A method for the interpretation of kernel‐based prediction models that helps to assess the domain of applicability of a model, to judge the reliability of a prediction, and to determine relevant molecular features is developed and validated.
Abstract: Statistical models are frequently used to estimate molecular properties, e.g., to establish quantitative structure-activity and structure-property relationships. For such models, interpretability, knowledge of the domain of applicability, and an estimate of confidence in the predictions are essential. We develop and validate a method for the interpretation of kernel-based prediction models. As a consequence of interpretability, the method helps to assess the domain of applicability of a model, to judge the reliability of a prediction, and to determine relevant molecular features. Increased interpretability also facilitates the acceptance of such models. Our method is based on visualization: For each prediction, the most contributing training samples are computed and visualized. We quantitatively show the effectiveness of our approach by conducting a questionnaire study with 71 participants, resulting in significant improvements of the participants' ability to distinguish between correct and incorrect predictions of a Gaussian process model for Ames mutagenicity.
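A minimal sketch of the decomposition that makes such sample-wise explanations possible, assuming a kernel expansion f(x) = sum_i alpha_i k(x_i, x): the contribution of training sample i to a prediction is alpha_i k(x_i, x), and the most contributing samples are those with the largest absolute contributions. The RBF kernel and function names below are illustrative, not the paper's implementation.

```python
import numpy as np

def rbf_kernel(X, Z, gamma=0.1):
    """Gaussian (RBF) kernel matrix between rows of X and rows of Z."""
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def top_contributors(alpha, X_train, x_test, k=5, gamma=0.1):
    """Rank training samples by their signed contribution alpha_i * k(x_i, x)
    to the kernel prediction f(x) = sum_i alpha_i k(x_i, x) for one test point."""
    contrib = alpha * rbf_kernel(X_train, x_test[None, :], gamma).ravel()
    order = np.argsort(-np.abs(contrib))[:k]
    return order, contrib[order]
```

Visualizing the molecules behind the top-ranked indices is then what lets a user judge whether a prediction rests on chemically plausible neighbors.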

58 citations

Journal ArticleDOI
TL;DR: A novel framework for the analysis of time series from dynamical systems that alternate between different operating modes, based on predictive models; the method has a high temporal resolution and reveals previously unclassified details of the transitions.
Abstract: We present a novel framework for the analysis of time series from dynamical systems that alternate between different operating modes. The method simultaneously segments and identifies the dynamical modes by using predictive models. Extending previous approaches, it allows the identification of smooth transitions between successive modes. The method can be used for analysis, diagnosis, prediction, and control. In an application to EEG and respiratory data recorded from humans during afternoon naps, the obtained segmentations of the data agree to a large extent with the sleep-stage segmentation of a medical expert. However, in contrast to the manual segmentation, our method does not require a priori knowledge about physiology. Moreover, it has a high temporal resolution and reveals previously unclassified details of the transitions. In particular, a parameter is found that is potentially helpful for vigilance monitoring. We expect that the method will generally be useful for the analysis of nonstationary dynamical systems, which are abundant in medicine, chemistry, biology, and engineering.
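A toy sketch of the competition idea, assuming a set of already fitted one-step predictors (one per operating mode): each time step is labeled with the predictor whose smoothed prediction error is currently lowest. The published method jointly estimates the predictors and the segmentation and models smooth transitions, which this sketch omits; all names are hypothetical.

```python
import numpy as np

def segment_by_competing_predictors(y, predictors, smooth=20):
    """Label each time step with the index of the predictive model that
    currently explains the signal best: compute squared one-step prediction
    errors per model, smooth them with a moving average, take the argmin."""
    errors = np.stack([(y - p(y)) ** 2 for p in predictors])          # (n_modes, T)
    window = np.ones(smooth) / smooth
    smoothed = np.stack([np.convolve(e, window, mode="same") for e in errors])
    return smoothed.argmin(axis=0)                                    # mode label per step
```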

57 citations

Posted Content
TL;DR: A new framework is proposed that can, for the first time, explain cluster assignments in terms of input features in a comprehensive manner, based on the novel theoretical insight that clustering models can be rewritten as neural networks, or 'neuralized'.
Abstract: A wealth of algorithms have been developed to extract natural cluster structure in data. Identifying this structure is desirable but not always sufficient: We may also want to understand why the data points have been assigned to a given cluster. Clustering algorithms do not offer a systematic answer to this simple question. Hence we propose a new framework that can, for the first time, explain cluster assignments in terms of input features in a comprehensive manner. It is based on the novel theoretical insight that clustering models can be rewritten as neural networks, or 'neuralized'. Predictions of the obtained networks can then be quickly and accurately attributed to the input features. Several showcases demonstrate the ability of our method to assess the quality of learned clusters and to extract novel insights from the analyzed data and representations.
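As a rough illustration of "neuralization", the k-means assignment to a cluster c can be rewritten as a linear layer followed by a min-pooling, after which standard attribution techniques apply. The sketch below uses a simple gradient-times-input attribution rather than the LRP rules used in the paper; function names are hypothetical.

```python
import numpy as np

def neuralized_kmeans_logit(x, centroids, c):
    """Evidence for assigning x to cluster c, rewritten as a two-layer network:
    f_c(x) = min_{k != c} [ ||x - mu_k||^2 - ||x - mu_c||^2 ],
    i.e., a min-pooling over linear units (the ||x||^2 terms cancel).
    A positive output means x is assigned to cluster c."""
    mu_c = centroids[c]
    others = np.delete(centroids, c, axis=0)
    w = 2.0 * (mu_c - others)                       # one linear unit per competing cluster
    b = (others ** 2).sum(axis=1) - (mu_c ** 2).sum()
    return (w @ x + b).min()

def gradient_x_input_attribution(x, centroids, c):
    """Crude per-feature attribution via the winning competitor unit
    (gradient times input); the paper propagates LRP rules instead."""
    mu_c = centroids[c]
    others = np.delete(centroids, c, axis=0)
    w = 2.0 * (mu_c - others)
    b = (others ** 2).sum(axis=1) - (mu_c ** 2).sum()
    k = np.argmin(w @ x + b)                        # hard min-pooling winner
    return w[k] * x
```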

57 citations

Journal ArticleDOI
TL;DR: Chmiela et al. used symmetrized gradient-domain machine learning (sGDML) to reconstruct complex high-dimensional potential-energy surfaces from a few hundred molecular conformations extracted from ab initio molecular dynamics trajectories.
Abstract: We present the construction of molecular force fields for small molecules (fewer than 25 atoms) using the recently developed symmetrized gradient-domain machine learning (sGDML) approach [Chmiela et al., Nat. Commun. 9, 3887 (2018); Sci. Adv. 3, e1603015 (2017)]. This approach is able to accurately reconstruct complex high-dimensional potential-energy surfaces from just a few hundred molecular conformations extracted from ab initio molecular dynamics trajectories. The data efficiency of the sGDML approach implies that atomic forces for these conformations can be computed with high-level wavefunction-based approaches, such as the "gold standard" CCSD(T) method. We demonstrate that the flexible nature of the sGDML model recovers local and non-local electronic interactions (e.g., H-bonding, proton transfer, lone pairs, changes in hybridization states, steric repulsion, and n→π* interactions) without imposing any restriction on the nature of interatomic potentials. The analysis of sGDML molecular dynamics trajectories yields new qualitative insights into the dynamics and spectroscopy of small molecules at close to spectroscopic accuracy.
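A one-dimensional toy sketch of the gradient-domain idea: the energy is modeled as a kernel expansion whose coefficients are fitted directly to forces (the negative energy gradient). The symmetrization, multidimensional descriptors, and kernel choices of sGDML are omitted, and all names and parameter values are hypothetical.

```python
import numpy as np

def rbf(x, xi, length=0.5):
    return np.exp(-(x - xi) ** 2 / (2.0 * length ** 2))

def fit_force_matching(x_train, f_train, length=0.5, lam=1e-8):
    """Fit coefficients a of an energy model E(x) = sum_i a_i k(x, x_i) so that
    the predicted force -dE/dx matches the training forces (ridge-regularized
    least squares). For the RBF kernel, -dk/dx = (x - x_i) / length^2 * k."""
    X, Xi = np.meshgrid(x_train, x_train, indexing="ij")
    G = (X - Xi) / length ** 2 * rbf(X, Xi, length)   # G[j, i] maps a_i to the force at x_j
    a = np.linalg.solve(G.T @ G + lam * np.eye(len(x_train)), G.T @ f_train)
    return a

def predict_force(x, x_train, a, length=0.5):
    return float(np.sum(a * (x - x_train) / length ** 2 * rbf(x, x_train, length)))

# Toy use: learn the force of a harmonic well E(x) = x^2 from force samples F = -2x.
x_train = np.linspace(-1.0, 1.0, 20)
a = fit_force_matching(x_train, -2.0 * x_train)
print(predict_force(0.3, x_train, a))   # roughly -0.6
```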

57 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
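A minimal NumPy sketch of a residual block, assuming plain dense layers and a shape-preserving identity shortcut; the paper's blocks use convolutions, batch normalization, and projection shortcuts where dimensions change, so this only illustrates the y = F(x) + x formulation.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Two-layer residual block: the weight layers learn the residual F(x),
    and the input is added back through the identity shortcut, y = F(x) + x."""
    f = relu(x @ W1)      # first layer of the residual branch
    f = f @ W2            # second layer (no nonlinearity before the addition)
    return relu(f + x)    # identity shortcut, then activation

# Shapes must match for the identity shortcut: W1 is (d, h), W2 is (h, d).
x = np.random.randn(16)
W1, W2 = np.random.randn(16, 32) * 0.1, np.random.randn(32, 16) * 0.1
print(residual_block(x, W1, W2).shape)   # (16,)
```

Because the shortcut carries the input unchanged, a block only has to learn a correction to the identity mapping, which is what makes very deep stacks easier to optimize.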

123,388 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI
08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings Article
Sergey Ioffe, Christian Szegedy
06 Jul 2015
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.
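A minimal NumPy sketch of the training-time Batch Normalization forward pass, assuming activations of shape (batch, features); the learned scale and shift parameters follow the paper's formulation, but the backward pass and the running statistics used at inference are omitted.

```python
import numpy as np

def batch_norm_forward(x, gamma, beta, eps=1e-5):
    """Training-time Batch Normalization over a mini-batch x of shape
    (batch, features): normalize each feature by the mini-batch mean and
    variance, then apply the learned scale (gamma) and shift (beta)."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta

# At inference, running averages of mu and var collected during training
# replace the per-batch statistics, so single examples can be normalized.
```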

30,843 citations