Author

Klaus-Robert Müller

Other affiliations: Korea University, University of Tokyo, Fraunhofer Society
Bio: Klaus-Robert Müller is an academic researcher at the Technical University of Berlin. He has contributed to research on artificial neural networks and support vector machines, has an h-index of 129, and has co-authored 764 publications receiving 79,391 citations. His previous affiliations include Korea University and the University of Tokyo.


Papers
Journal Article
TL;DR: In this paper, the authors discuss the relation between instantaneous frequency, peak frequency, and local frequency (the latter also known as the spectral centroid), and propose and validate three methods to extract source signals from multichannel data whose instantaneous, local, or peak frequency estimate is maximally correlated with an experimental variable of interest.
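The three frequency measures related in this TL;DR can be estimated with standard signal-processing tools. Below is a minimal Python sketch on a toy chirp with an assumed sampling rate; it illustrates the quantities themselves, not the authors' source-extraction method.

```python
# Illustrative estimators for the three frequency measures:
# instantaneous frequency via the analytic signal, local frequency as
# the spectral centroid, and peak frequency as the spectral maximum.
import numpy as np
from scipy.signal import hilbert, welch

fs = 250.0                                   # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * (10 + 2 * t) * t)     # toy chirp sweeping ~10-18 Hz

# Instantaneous frequency: derivative of the unwrapped analytic phase.
phase = np.unwrap(np.angle(hilbert(x)))
inst_freq = np.diff(phase) * fs / (2 * np.pi)

# Local frequency (spectral centroid): power-weighted mean frequency.
freqs, pxx = welch(x, fs=fs, nperseg=256)
spectral_centroid = np.sum(freqs * pxx) / np.sum(pxx)

# Peak frequency: location of the spectral maximum.
peak_freq = freqs[np.argmax(pxx)]

print(inst_freq.mean(), spectral_centroid, peak_freq)
```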
Journal Article
06 May 1999 - Nature
TL;DR: As editor-in-chief of Nutrition, an international medical journal, and director of a research laboratory, the author found the Briefing on science and fraud most interesting, because he is both a producer and a consumer of science.
Abstract: Editors’ responsibility in defeating fraud. Sir — As editor-in-chief of Nutrition, an international medical journal, and as director of a research laboratory, I found your Briefing on science and fraud most interesting, because I am both a producer and a consumer of science (Nature 398, 13–17; 1999). My editorial colleagues and I have a high state of awareness of ‘fabrication, falsification and plagiarism (FFP)’. As reviewers of manuscripts, we have a difficult time detecting the two Fs, but allegations of the P have come to our attention several times. I believe that editors have an obligation to the scientific community to pass such concerns to the authors and to their institutes’ research dean or administrative supervisor in a confidential manner for investigation according to the guidelines of the US Office of Research Integrity. In doing so, we do not act as “secret police”, as the editor of the Journal of the Norwegian Medical Association maintains. Instead, we align ourselves with the UK Committee on Publication Ethics and the World Association of Medical Editors, whose recommendations are in my view appropriate. It does untold harm to the scientific community to be betrayed, deceived and defrauded. Such harm ranges from the squandering of limited research resources to the undermining of confidence and trust in the reporting of scientific findings. A journal should not be used to validate misconduct by publishing fraudulent data submitted knowingly by the author. If this occurs, editors bear an obligation to retract the paper. Our journal asks authors to sign a declaration of scientific integrity in their letter of transmittal. To avoid scientific misconduct in my laboratory, each new research fellow’s attention is drawn to this potential problem via policy and procedure material given to them on arrival, and the consequences of such temptations are clearly spelled out. Each new fellow also repeats a portion of their predecessor’s work to confirm the results, as an internal control standard. This has not dampened the lust for data among the ‘young and hungry’. But, ultimately, solid, reliable laboratory habits and supervision and mentoring are critical components to prevent misconduct. Michael M. Meguid, Nutrition, Department of Surgery, 750 E. Adams St., Syracuse, New York 13210, USA.
Journal Article
TL;DR: The authors present a new machine learning technique for quantifying the structure of responses to single-pulse intracranial electrical brain stimulation; it dramatically simplifies the study of CCEP shapes and may also be applied in a wide range of other settings involving event-triggered data.
Abstract: Single-pulse electrical stimulation in the nervous system, often called cortico-cortical evoked potential (CCEP) measurement, is an important technique to understand how brain regions interact with one another. Voltages are measured from implanted electrodes in one brain area while stimulating another with brief current impulses separated by several seconds. Historically, researchers have tried to understand the significance of evoked voltage polyphasic deflections by visual inspection, but no general-purpose tool has emerged to understand their shapes or describe them mathematically. We describe and illustrate a new technique to parameterize brain stimulation data, where voltage response traces are projected into one another using a semi-normalized dot product. The length of timepoints from stimulation included in the dot product is varied to obtain a temporal profile of structural significance, and the peak of the profile uniquely identifies the duration of the response. Using linear kernel PCA, a canonical response shape is obtained over this duration, and then single-trial traces are parameterized as a projection of this canonical shape with a residual term. Such parameterization allows for dissimilar trace shapes from different brain areas to be directly compared by quantifying cross-projection magnitudes, response duration, canonical shape projection amplitudes, signal-to-noise ratios, explained variance, and statistical significance. Artifactual trials are automatically identified by outliers in sub-distributions of cross-projection magnitude, and rejected. This technique, which we call “Canonical Response Parameterization” (CRP), dramatically simplifies the study of CCEP shapes and may also be applied in a wide range of other settings involving event-triggered data.

Author summary: We introduce a new machine learning technique for quantifying the structure of responses to single-pulse intracranial electrical brain stimulation. This approach allows voltage response traces of very different shape to be compared with one another. A tool like this has been needed to replace the status quo, where researchers may understand their data in terms of discovered structure rather than in terms of a pre-assigned, hand-picked, feature. The method compares single-trial responses pairwise to understand if there is a reproducible shape and how long it lasts. When significant structure is identified, the shape underlying it is isolated and each trial is parameterized in terms of this shape. This simple parameterization enables quantification of statistical significance, signal-to-noise ratio, explained variance, and average voltage of the response. Differently-shaped voltage traces from any setting can be compared with any other in a succinct mathematical framework. This versatile tool to quantify single-pulse stimulation data should facilitate a blossoming in the study of brain connectivity using implanted electrodes.
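To make the pipeline concrete, here is a loose numpy sketch of the CRP idea under simplifying assumptions: plain dot products stand in for the paper's semi-normalized dot product, ordinary PCA via SVD stands in for linear kernel PCA, and the synthetic traces and crude profile normalization are purely illustrative, not the published implementation.

```python
# Loose sketch of the CRP flavor: pairwise trial projections give a
# significance profile over candidate durations, the profile's peak
# fixes the response duration, and the first principal component over
# that window serves as the canonical response shape.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_t = 40, 500
shape = np.exp(-np.arange(n_t) / 80.0) * np.sin(np.arange(n_t) / 15.0)
V = shape + 0.5 * rng.standard_normal((n_trials, n_t))  # trials x time

def profile_at(tau):
    """Mean pairwise cross-projection over the first tau timepoints."""
    X = V[:, :tau]
    G = X @ X.T / tau                        # trial-by-trial projections
    off_diag = G[~np.eye(n_trials, dtype=bool)]
    return off_diag.mean() / X.std()         # crude normalization

taus = np.arange(50, n_t, 25)
profile = np.array([profile_at(tau) for tau in taus])
duration = taus[np.argmax(profile)]          # peak = response duration

# Canonical shape: first principal component over the chosen window.
X = V[:, :duration]
_, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
canonical = Vt[0]

# Parameterize each trial as a projection coefficient plus a residual.
alpha = X @ canonical                        # per-trial amplitudes
residual = X - np.outer(alpha, canonical)
```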
Proceedings Article
18 Sep 2007
TL;DR: This study presents log D7 models built with Gaussian Process regression, Support Vector Machines, decision trees, and ridge regression, trained on 14,556 drug discovery compounds from Bayer Schering Pharma, and discusses the quality of the error bars these models can compute.
Abstract: Unfavorable physicochemical properties often cause drug failures. It is therefore important to take lipophilicity and water solubility into account early in lead discovery. This study presents log D7 models built using Gaussian Process regression, Support Vector Machines, decision trees, and ridge regression algorithms, based on 14,556 drug discovery compounds of Bayer Schering Pharma. A blind test was conducted using 7,013 new measurements from subsequent months. We also present independent evaluations using public data. Apart from accuracy, we discuss the quality of error bars that can be computed by Gaussian Process models, and ensemble and distance-based techniques for the other modelling approaches.
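As a concrete illustration of the error-bar discussion, the sketch below fits a Gaussian Process regressor whose predictive standard deviation supplies per-compound uncertainty. The descriptors and log D7 targets are synthetic stand-ins (the Bayer Schering Pharma data are proprietary), and the kernel choice is an assumption.

```python
# Minimal scikit-learn sketch: GP regression with predictive error bars.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))             # toy molecular descriptors
y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)  # toy log D7

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

X_new = rng.standard_normal((5, 8))
mean, std = gp.predict(X_new, return_std=True)  # std = per-compound error bar
for m, s in zip(mean, std):
    print(f"predicted log D7 = {m:+.2f} +/- {2 * s:.2f}")
```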

Cited by
Proceedings Article
27 Jun 2016
TL;DR: In this article, the authors propose a residual learning framework to ease the training of networks that are substantially deeper than those used previously; this approach won first place in the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

123,388 citations
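The residual reformulation is compact enough to state in code. The PyTorch sketch below shows a basic identity-shortcut block in the spirit of the paper, y = F(x) + x; channel widths and layer choices are illustrative rather than the exact published architecture.

```python
# Minimal residual block: the layers learn F(x), the output is F(x) + x,
# so the block fits a residual with reference to its input rather than
# an unreferenced mapping.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        residual = self.relu(self.bn1(self.conv1(x)))
        residual = self.bn2(self.conv2(residual))
        return self.relu(residual + x)       # identity shortcut: F(x) + x

block = ResidualBlock(64)
out = block(torch.randn(1, 64, 32, 32))      # spatial shape is preserved
```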

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence that these residual networks are easier to optimize and can gain accuracy from considerably increased depth.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning, as described in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications, such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations
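The book's central picture of a hierarchy of concepts, with each layer building more complicated features out of simpler ones, can be illustrated by a minimal deep feedforward network; the sizes below are arbitrary placeholders, not an example from the book.

```python
# A deep feedforward network: each layer composes the concepts learned
# by the layer below it, forming a hierarchy many layers deep.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # simple features from raw input
    nn.Linear(256, 128), nn.ReLU(),   # intermediate concepts
    nn.Linear(128, 10),               # task-level concepts (e.g. classes)
)
logits = model(torch.randn(32, 784))  # batch of 32 flattened inputs
```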

Journal Article

08 Dec 2001 - BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one: it seemed an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings Article
Sergey Ioffe, Christian Szegedy
06 Jul 2015
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.

30,843 citations
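The batch-normalizing transform the abstract describes is short enough to sketch directly. The numpy function below standardizes each mini-batch of layer inputs and then applies learned scale (gamma) and shift (beta) parameters; it shows training-time statistics only and omits the running averages used at inference.

```python
# Batch normalization forward pass over a mini-batch of layer inputs.
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    mu = x.mean(axis=0)                   # per-feature mini-batch mean
    var = x.var(axis=0)                   # per-feature mini-batch variance
    x_hat = (x - mu) / np.sqrt(var + eps) # standardize each feature
    return gamma * x_hat + beta           # learned scale and shift

x = np.random.randn(64, 100) * 3.0 + 5.0      # mini-batch of activations
y = batch_norm(x, gamma=np.ones(100), beta=np.zeros(100))
print(y.mean(axis=0)[:3], y.std(axis=0)[:3])  # approx. 0 and 1 per feature
```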