Author

Klaus-Robert Müller

Other affiliations: Korea University, University of Tokyo, Fraunhofer Society
Bio: Klaus-Robert Müller is an academic researcher at Technical University of Berlin. His research focuses on artificial neural networks and support vector machines. He has an h-index of 129 and has co-authored 764 publications receiving 79,391 citations. Previous affiliations include Korea University and University of Tokyo.


Papers
Journal ArticleDOI
27 Mar 2017-PLOS ONE
TL;DR: gPOIM, a general measure of feature importance for arbitrary learning machines and feature sets, is proposed together with a sampling strategy for its efficient computation; in addition, a convex formulation of motifPOIM is derived that leads to more reliable motif extraction from gPOIMs.
Abstract: High prediction accuracies are not the only objective to consider when solving problems using machine learning. Instead, particular scientific applications require some explanation of the learned prediction function. For computational biology, positional oligomer importance matrices (POIMs) have been successfully applied to explain the decision of support vector machines (SVMs) using weighted-degree (WD) kernels. To extract relevant biological motifs from POIMs, the motifPOIM method has been devised and showed promising results on real-world data. Our contribution in this paper is twofold: as an extension to POIMs, we propose gPOIM, a general measure of feature importance for arbitrary learning machines and feature sets (including, but not limited to, SVMs and CNNs) and devise a sampling strategy for efficient computation. As a second contribution, we derive a convex formulation of motifPOIMs that leads to more reliable motif extraction from gPOIMs. Empirical evaluations confirm the usefulness of our approach on artificially generated data as well as on real-world datasets.
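
The sampling idea can be illustrated with a small, hypothetical sketch (not the authors' implementation): treat the trained model as a black-box scoring function and estimate, for each symbol z at each position p, how planting z at p shifts the expected score over random background sequences. The names here (`positional_importance`, the toy scorer, `score_fn`) are illustrative assumptions.

```python
# Minimal sketch, in the spirit of gPOIM, of a sampling-based positional
# importance score: imp[z, p] ~ E[s(x) | x_p = z] - E[s(x)] over random
# background sequences, for an arbitrary black-box scorer s.
import numpy as np

ALPHABET = np.array(list("ACGT"))

def positional_importance(score_fn, seq_len, n_samples=2000, rng=None):
    """Estimate a (4 x seq_len) importance matrix for a black-box scorer.

    score_fn: maps a string sequence to a real-valued score (any model).
    """
    rng = np.random.default_rng(rng)
    # Draw random background sequences once and reuse them for every cell.
    base = rng.integers(0, 4, size=(n_samples, seq_len))
    base_scores = np.array([score_fn("".join(ALPHABET[row])) for row in base])
    grand_mean = base_scores.mean()

    imp = np.zeros((4, seq_len))
    for p in range(seq_len):
        for z in range(4):
            perturbed = base.copy()
            perturbed[:, p] = z  # plant symbol z at position p
            s = np.array([score_fn("".join(ALPHABET[row])) for row in perturbed])
            imp[z, p] = s.mean() - grand_mean
    return imp

# Toy usage: a "model" that rewards the motif "GAT" starting at position 2.
toy = lambda s: float(s[2:5] == "GAT")
I = positional_importance(toy, seq_len=8, n_samples=500, rng=0)
print(I.round(2))  # rows A,C,G,T; columns light up at positions 2-4
```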

9 citations

Journal ArticleDOI
TL;DR: For primarily historical reasons, chemistry has remained in a somewhat backward state of informatics development compared to its two close scientific relatives, physics and biology; recently, however, data banks containing millions of small molecules have become freely available and large repositories of chemical reactions have been developed.
Abstract: In spite of its central role and position between physics and biology, chemistry has remained in a somewhat backward state of informatics development compared to its two close scientific relatives, primarily for historical reasons. Computers, open public databases, and large collaborative projects have become the pervasive hallmark of research in physics and biology, but are still at a comparably early stage of development in chemistry. Recently, however, data banks containing millions of small molecules have become freely available, and large repositories of chemical reactions have been developed. These data create a wealth of fascinating informatics and machine learning challenges to efficiently store, search, and predict the physical, chemical, and biological properties of small molecules and reactions and chart “chemical space”. Profound understanding of its structure and appropriate computational models will have significant scientific and technological impact.

9 citations

Journal ArticleDOI
TL;DR: A transductive conditional random field regression model infers latent states by combining limited, high-precision labeled data with unlabeled data subject to measurement uncertainty, thereby propagating accurate information and greatly reducing uncertainty.
Abstract: Analyzing data with latent spatial and/or temporal structure is a challenge for machine learning. In this paper, we propose a novel nonlinear model for studying data with latent dependence structure. It successfully combines the concepts of Markov random fields, transductive learning, and regression, making heavy use of the notion of joint feature maps. Our transductive conditional random field regression model is able to infer the latent states by combining limited labeled data of high precision with unlabeled data containing measurement uncertainty. In this manner, we can propagate accurate information and greatly reduce uncertainty. We demonstrate the usefulness of our novel framework on generated time series data with the known temporal structure and successfully validate it on synthetic as well as real-world offshore data with the spatial structure from the oil industry to predict rock porosities from acoustic impedance data.
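
The paper's model is more elaborate, but the transductive ingredient can be sketched with a simpler, hypothetical stand-in: a graph-Laplacian-regularized regression in which a few precise labels anchor the fit and a similarity graph propagates information to unlabeled points. Everything below (the Gaussian similarity graph, squared loss, the `transductive_regression` helper) is an assumption for illustration, not the paper's method.

```python
# Minimal sketch of transductive regression: labeled points anchor the
# solution, and the graph Laplacian spreads information to unlabeled
# points that share latent spatial structure.
import numpy as np

def transductive_regression(X, y, labeled_mask, sigma=1.0, lam=1.0):
    n = len(X)
    # Gaussian similarity between all points (latent spatial structure).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma**2))
    L = np.diag(W.sum(1)) - W                  # graph Laplacian
    C = np.diag(labeled_mask.astype(float))    # fit labeled points only
    # Solve (C + lam * L) f = C y  ->  smooth f that agrees with the labels.
    y_full = np.where(labeled_mask, y, 0.0)
    return np.linalg.solve(C + lam * L + 1e-9 * np.eye(n), C @ y_full)

# Toy usage: a 1-D "depth profile" with only a few precise labels.
X = np.linspace(0, 1, 50)[:, None]
truth = np.sin(4 * np.pi * X[:, 0])
mask = np.zeros(50, dtype=bool); mask[::10] = True
f = transductive_regression(X, truth, mask, sigma=0.05, lam=0.1)
print(np.abs(f - truth).mean())  # small error despite 5 labels out of 50
```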

9 citations

Proceedings ArticleDOI
28 Dec 2015
TL;DR: It is shown that these divergence-based methods can be used for robust spatial filtering, increasing a system's reliability when confronted with, e.g., environmental noise, user motion, or electrode artifacts; the framework is further extended to heavy-tailed distributions.
Abstract: Although the field of Brain-Computer Interfacing (BCI) has made incredible advances in the last decade, current BCIs are still scarcely used outside laboratories. One reason is the lack of robustness to noise, artifacts and nonstationarity which are intrinsic parts of the recorded brain signal. Furthermore out-of-lab environments imply the presence of external variables that are largely beyond the control of the user, but can severely corrupt signal quality. This paper presents a new generation of robust EEG signal processing approaches based on the information geometric notion of divergence. We show that these divergence-based methods can be used for robust spatial filtering and thus increase the systems' reliability when confronted to, e.g., environmental noise, users' motions or electrode artifacts. Furthermore we extend the divergence-based framework to heavy-tail distributions and investigate the advantages of a joint optimization for robustness and stationarity.
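
To make the divergence objective concrete, here is a minimal, hypothetical sketch: score a candidate set of spatial filters V by the symmetrized KL (Jeffreys) divergence between the two classes' projected covariance matrices, a quantity of the kind divergence-based CSP variants maximize. Robust variants would swap in, e.g., a beta divergence; that substitution is not shown here, and the helper names are illustrative.

```python
# Minimal sketch of the information-geometric view: CSP-like spatial
# filters V are scored by a divergence between the two classes'
# projected covariance matrices.
import numpy as np

def sym_kl_gaussian(S1, S2):
    """Symmetrized KL (Jeffreys) divergence between N(0, S1) and N(0, S2)."""
    d = S1.shape[0]
    S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)
    return 0.5 * (np.trace(S1i @ S2) + np.trace(S2i @ S1) - 2 * d)

def filter_score(V, C1, C2):
    """Divergence objective for spatial filters V (channels x components)."""
    return sym_kl_gaussian(V.T @ C1 @ V, V.T @ C2 @ V)

# Toy usage: two class covariances that differ only on channel 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)); C1 = A @ A.T + 4 * np.eye(4)
C2 = C1.copy(); C2[0, 0] += 10.0        # class difference on channel 0
v_good = np.eye(4)[:, :1]               # filter picking channel 0
v_bad = np.eye(4)[:, 3:4]               # filter picking an irrelevant channel
print(filter_score(v_good, C1, C2), filter_score(v_bad, C1, C2))  # large vs ~0
```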

9 citations

Proceedings ArticleDOI
01 Jan 2012
TL;DR: This study shows how healthy subjects are able to use a non-invasive Motor Imagery-based Brain Computer Interface (BCI) to achieve linear control of an upper-limb neuromuscular electrical stimulation (NMES) controlled neuroprosthesis in a simple binary target selection task.
Abstract: In this study we show how healthy subjects are able to use a non-invasive Motor Imagery (MI)-based Brain Computer Interface (BCI) to achieve linear control of an upper-limb neuromuscular electrical stimulation (NMES) controlled neuroprosthesis in a simple binary target selection task. Linear BCI control can be achieved if two motor imagery classes can be discriminated with a reliability of over 80% in single trials. The results presented in this work show that there was no significant loss of performance when using the neuroprosthesis compared to MI without stimulation. However, it is remarkable how differently users experienced the same experiment: the stimulation either provided positive reinforcement feedback or prevented the user from concentrating on the task.
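
The 80% single-trial criterion can be illustrated with a small, hypothetical sketch: fit a Fisher linear discriminant on simulated log band-power features for the two imagery classes and check whether held-out accuracy clears the threshold. The simulated features and the `fisher_lda` helper are assumptions for illustration, not the study's pipeline.

```python
# Minimal sketch of the linear-control feasibility check: train a linear
# discriminant on two motor-imagery classes and test single-trial accuracy
# against the 80% threshold.
import numpy as np

def fisher_lda(X, y):
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    Sw = np.cov(X[y == 0].T) + np.cov(X[y == 1].T)   # within-class scatter
    w = np.linalg.solve(Sw, m1 - m0)                 # discriminant direction
    b = -w @ (m0 + m1) / 2                           # threshold at the midpoint
    return w, b

rng = np.random.default_rng(0)
# Simulated log band-power features for "left hand" vs "right hand" imagery.
X0 = rng.normal([0.0, 1.0], 0.6, (100, 2))
X1 = rng.normal([1.0, 0.0], 0.6, (100, 2))
X = np.vstack([X0, X1]); y = np.repeat([0, 1], 100)

# Simple holdout: train on even-numbered trials, test on odd-numbered ones.
tr, te = np.arange(200) % 2 == 0, np.arange(200) % 2 == 1
w, b = fisher_lda(X[tr], y[tr])
acc = ((X[te] @ w + b > 0).astype(int) == y[te]).mean()
print(f"single-trial accuracy {acc:.0%} -> linear control {'feasible' if acc > 0.8 else 'not yet'}")
```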

9 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks substantially deeper than those used previously; their result won 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
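
The core reformulation is compact enough to sketch. Below is a minimal PyTorch residual block in the spirit of the paper's basic block (identity shortcut, two 3x3 convolutions), assuming equal input and output channels; projection shortcuts and bottleneck variants are omitted.

```python
# Minimal sketch of residual learning: the block fits the residual F(x)
# and the identity shortcut adds x back, so layers learn F(x) = H(x) - x
# instead of the unreferenced mapping H(x).
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))   # F(x), first half
        out = self.bn2(self.conv2(out))            # F(x), second half
        return self.relu(out + x)                  # identity shortcut: F(x) + x

x = torch.randn(1, 64, 32, 32)
print(ResidualBlock(64)(x).shape)   # torch.Size([1, 64, 32, 32])
```

Because the shortcut is the identity, gradients flow through the addition unchanged, which is why stacking many such blocks stays optimizable at depths where plain networks degrade.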

123,388 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; this book introduces its mathematical foundations, the techniques used by practitioners, and applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI

08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast at first, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings Article
Sergey Ioffe1, Christian Szegedy1
06 Jul 2015
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.
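
The training-time transform itself is only a few lines. Below is a minimal NumPy sketch of normalizing each feature over the mini-batch and then applying the learnable scale gamma and shift beta; at inference the method uses population statistics instead of mini-batch statistics, and that path is omitted here.

```python
# Minimal sketch of the batch normalization training-time transform:
# normalize each feature over the mini-batch, then scale and shift with
# learnable parameters gamma and beta.
import numpy as np

def batch_norm_train(x, gamma, beta, eps=1e-5):
    """x: (batch, features). Returns the normalized, scaled activations."""
    mu = x.mean(axis=0)                   # per-feature mini-batch mean
    var = x.var(axis=0)                   # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps) # zero mean, unit variance per feature
    return gamma * x_hat + beta           # restore representational power

rng = np.random.default_rng(0)
x = rng.normal(5.0, 3.0, size=(32, 4))    # shifted, scaled inputs
y = batch_norm_train(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))  # ~0 and ~1
```

Keeping gamma and beta learnable means the network can undo the normalization where that is optimal, so the transform constrains the input distribution of each layer without limiting what the layer can represent.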

30,843 citations