Author

Klaus-Robert Müller

Other affiliations: Korea University, University of Tokyo, Fraunhofer Society
Bio: Klaus-Robert Müller is an academic researcher from Technical University of Berlin. The author has contributed to research in topics including artificial neural networks and support vector machines. The author has an h-index of 129 and has co-authored 764 publications receiving 79,391 citations. Previous affiliations of Klaus-Robert Müller include Korea University and University of Tokyo.


Papers
01 Jan 2017
TL;DR: An overview of recent EEG-electrode and fNIRS-optode approaches that aim to improve usability is provided, including a new multi-function clip-on design that allows the use of conventional gel-based ring electrodes with water.
Abstract: Brain-computer interfaces are now entering real-life environments. In particular, hybrid systems using more than one input signal, e.g. electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS), offer a broad spectrum of applications in basic research and clinical neuroscience. Here, we provide an overview of recent EEG-electrode and fNIRS-optode approaches that aim to improve usability. We include our new multi-function clip-on design that allows the use of conventional gel-based ring electrodes with water. For the EEG electrode approaches (conventional gel, solid gel, new custom water-based) we compared impedances and frequency response over multi-hour recordings. While the water-based solutions showed comparable performance in terms of signal quality, applicability and comfort, solid-gel electrodes on hairy skin required additional contact pressure. Overall, however, all tested EEG electrode types were fully compatible with concurrent fNIRS recordings using a novel hybrid fNIRS/EEG headgear, paving the way for cognitive workload experiments under real-life conditions.

3 citations

Proceedings ArticleDOI
21 Apr 2016
TL;DR: A novel, reliable method for estimating the Hurst exponent is introduced; the quantity has recently become popular for describing network properties and is being used for diagnostic purposes and for the analysis of neuroimaging data in general.
Abstract: This abstract will talk about machine learning and BCI with a focus on analysing cognition, a topic that has been extensively covered by the author and co-workers in numerous papers and conference abstracts. Due to the review character of the presentation, a high overlap with the above-mentioned contributions is unavoidable. When analysing cognition, it is often useful to combine information from various modalities (see e.g. Biessmann et al., 2011, Sui et al., 2012). In BCI, multimodal fusion concepts have recently received great attention under the label hybrid BCI (Pfurtscheller et al., 2010, Muller-Putz et al. 2015, Dahne et al. 2015, Fazli et al. 2015) or as data analysis techniques for extracting (non-)linear relations between data (see e.g. Biessmann et al., 2010, Biessmann et al., 2011, Fazli et al., 2009, 2011, 2012, Dahne et al., 2013, 2014a,b, 2015, Winkler et al. 2015). They are rooted in the modern machine learning and signal processing techniques that are now available for analysing EEG, for decoding mental states, etc. (see Muller et al. 2008, Bunau et al. 2009, Tomioka and Muller, 2010, Blankertz et al., 2008, 2011, Lemm et al., 2011, Porbadnigk et al. 2015 for recent reviews and contributions to machine learning for BCI; see Samek et al. 2014 for a review on robust methods). Note that fusing information has also been a very common practice in the sciences and engineering (Waltz and Llinas, 1990). The talk will discuss a number of recent contributions from the BBCI group that have helped to broaden the spectrum of applicability of brain-computer interfaces and mental state monitoring in particular, and of neuroimaging data analysis in general. I will introduce a novel, reliable method for estimating the Hurst exponent, a quantity that has recently become popular for describing network properties and is being used for diagnostic purposes (cf. Blythe et al. 2014).
It is applied to estimate and analyse cognitive properties in neurophysiological data from BCI experiments (Samek et al. 2016). Furthermore, if time permits, I will discuss a recent attractive application of BCI in the context of video coding (Scholler et al. 2012 and Acqualagna et al. 2015).
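For readers unfamiliar with the quantity, the Hurst exponent can be estimated with classical rescaled-range (R/S) analysis. The sketch below is a minimal NumPy illustration of that textbook method, not the improved estimator of Blythe et al. 2014; window sizes and the test series are invented for the example.

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent of a 1-D series via rescaled-range (R/S) analysis."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    sizes, rs_vals = [], []
    size = min_chunk
    while size <= n // 2:
        rs = []
        for start in range(0, n - size + 1, size):
            chunk = x[start:start + size]
            dev = np.cumsum(chunk - chunk.mean())  # cumulative deviation from the chunk mean
            r = dev.max() - dev.min()              # range of the cumulative deviations
            s = chunk.std()
            if s > 0:
                rs.append(r / s)
        sizes.append(size)
        rs_vals.append(np.mean(rs))
        size *= 2
    # slope of log(R/S) against log(window size) gives the Hurst exponent
    h, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return h

rng = np.random.default_rng(0)
h = hurst_rs(rng.normal(size=4096))  # white noise: H should be near 0.5
```

For uncorrelated noise the estimate sits near 0.5, while long-range dependent signals push it toward 1; the small-sample upward bias of plain R/S is one reason more robust estimators were proposed.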

3 citations

01 Jan 2007
TL;DR: Describes work using linear discrimination of multichannel electroencephalography for single-trial detection of neural signatures of visual recognition events, and shows how “cortical triaging” improves image search over a strictly behavioral response.
Abstract: We describe our work using linear discrimination of multichannel electroencephalography for single-trial detection of neural signatures of visual recognition events. We demonstrate the approach as a methodology for relating neural variability to response variability, describing studies for response accuracy and response latency during visual target detection. We then show how the approach can be used to construct a novel type of brain-computer interface, which we term “cortically coupled computer vision.” In this application, a large database of images is triaged using the detected neural signatures. We show how “cortical triaging” improves image search over a strictly behavioral response.
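The linear-discrimination step can be illustrated with a toy two-class LDA on synthetic multichannel trial features. This is a minimal NumPy sketch; the channel count, trial count, effect size, and the hand-rolled `fit_lda` helper are all invented for the example and are not the paper's actual pipeline.

```python
import numpy as np

def fit_lda(X, y, reg=1e-3):
    """Two-class LDA: w = Sigma^-1 (mu1 - mu0), with a small ridge term for stability."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    cov = np.cov(np.vstack([X0 - mu0, X1 - mu1]).T) + reg * np.eye(X.shape[1])
    w = np.linalg.solve(cov, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2  # threshold halfway between class means
    return w, b

rng = np.random.default_rng(0)
n_trials, n_channels = 400, 16
y = rng.integers(0, 2, size=n_trials)
pattern = rng.normal(size=n_channels)           # spatial signature of the recognition event
X = rng.normal(size=(n_trials, n_channels)) + 0.7 * np.outer(y, pattern)
w, b = fit_lda(X, y)
acc = ((X @ w + b > 0).astype(int) == y).mean()  # single-trial detection accuracy
```

The learned weight vector `w` plays the role of a spatial filter: projecting each trial onto it yields a one-dimensional score that can be thresholded for single-trial detection or used to rank images for triaging.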

3 citations

01 Jan 2007
TL;DR: This chapter introduces and summarizes recent work on probabilistic models of motor cortical activity and methods for inferring, or decoding, hand movements from this activity and considers approximations of the encoding problem that allow efficient inference of hand movement using a Kalman filter.
Abstract: This chapter introduces and summarizes recent work on probabilistic models of motor cortical activity and methods for inferring, or decoding, hand movements from this activity. A simple generalization of previous encoding models is presented in which neural firing rates are represented as a linear function of hand movements. A Bayesian approach is taken to exploit this generative model of firing rates for the purpose of inferring hand kinematics. In particular, we consider approximations of the encoding problem that allow efficient inference of hand movement using a Kalman filter. Decoding results are presented and the use of these methods for neural prosthetic cursor control is discussed.
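The decoding scheme described above (firing rates linear in the hand state, linear hand dynamics, Kalman filter for inference) can be sketched as follows. Dimensions, noise covariances, and the random observation matrix are illustrative assumptions, not the chapter's fitted parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
dt = 0.05
A = np.array([[1, dt], [0, 1]])   # hand state: position and velocity (1-D for brevity)
H = rng.normal(size=(10, 2))      # 10 neurons whose firing rates are linear in the state
Q = 1e-3 * np.eye(2)              # state (movement) noise covariance
R = 0.5 * np.eye(10)              # observation (firing-rate) noise covariance

# simulate a hand trajectory and the resulting noisy firing rates
T = 200
x_true, z = np.zeros((T, 2)), np.zeros((T, 10))
x = np.array([0.0, 1.0])
for t in range(T):
    x = A @ x + rng.multivariate_normal(np.zeros(2), Q)
    x_true[t] = x
    z[t] = H @ x + rng.multivariate_normal(np.zeros(10), R)

# Kalman filter: predict with the movement model, correct with the observed rates
x_est, P = np.zeros(2), np.eye(2)
decoded = np.zeros((T, 2))
for t in range(T):
    x_est = A @ x_est                      # predict
    P = A @ P @ A.T + Q
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_est = x_est + K @ (z[t] - H @ x_est) # correct
    P = (np.eye(2) - K @ H) @ P
    decoded[t] = x_est
```

Because prediction and correction are both linear-Gaussian updates, each time step costs only a few small matrix operations, which is what makes this approximation attractive for real-time prosthetic cursor control.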

3 citations

Posted Content
TL;DR: Zennit is a post-hoc attribution framework implementing LRP in PyTorch, CoRelAy is a framework for dataset-wide quantitative analyses of explanations, and ViRelAy is a web-application to interactively explore data, attributions, and analysis results.
Abstract: Deep Neural Networks (DNNs) are known to be strong predictors, but their prediction strategies can rarely be understood. With recent advances in Explainable Artificial Intelligence, approaches are available to explore the reasoning behind those complex models' predictions. One class of approaches are post-hoc attribution methods, among which Layer-wise Relevance Propagation (LRP) shows high performance. However, the attempt at understanding a DNN's reasoning often stops at the attributions obtained for individual samples in input space, leaving the potential for deeper quantitative analyses untouched. As a manual analysis without the right tools is often unnecessarily labor intensive, we introduce three software packages targeted at scientists to explore model reasoning using attribution approaches and beyond: (1) Zennit - a highly customizable and intuitive attribution framework implementing LRP and related approaches in PyTorch, (2) CoRelAy - a framework to easily and quickly construct quantitative analysis pipelines for dataset-wide analyses of explanations, and (3) ViRelAy - a web-application to interactively explore data, attributions, and analysis results.
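The LRP idea underlying Zennit can be conveyed with a toy LRP-ε pass through a two-layer ReLU network: each neuron's relevance is redistributed to its inputs in proportion to their contributions to its pre-activation. This is a hand-rolled NumPy sketch for intuition only, not Zennit's actual API; the network shape and weights are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
# tiny two-layer ReLU network with random weights and zero biases (illustration only)
W1, b1 = rng.normal(size=(4, 6)), np.zeros(6)
W2, b2 = rng.normal(size=(6, 3)), np.zeros(3)

def lrp_epsilon(x, eps=1e-6):
    # forward pass, keeping the intermediate activations
    a1 = np.maximum(x @ W1 + b1, 0)
    out = a1 @ W2 + b2
    # seed relevance: the score of the predicted class
    R2 = np.zeros_like(out)
    R2[out.argmax()] = out.max()
    # backward pass, layer 2: distribute relevance proportionally to contributions
    z = a1 @ W2 + b2
    s = R2 / (z + eps * np.sign(z))   # eps stabilizes near-zero denominators
    R1 = a1 * (s @ W2.T)
    # backward pass, layer 1: same rule down to the input features
    z = x @ W1 + b1
    s = R1 / (z + eps * np.sign(z))
    R0 = x * (s @ W1.T)
    return out, R0

x = rng.normal(size=4)
out, R0 = lrp_epsilon(x)   # R0: relevance of each input feature
```

With zero biases and small ε, the rule is conservative: the input relevances sum (approximately) to the explained output score, which is the property the quantitative pipelines mentioned above build on.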

3 citations


Cited by
Proceedings ArticleDOI
27 Jun 2016
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers—8× deeper than VGG nets [40] but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions1, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.
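The central reformulation is small: a block outputs relu(F(x) + x), learning a residual F(x) with reference to its input rather than an unreferenced mapping. A minimal NumPy sketch (widths and weights are illustrative, and the two-matrix residual branch is a simplification of the paper's convolutional blocks):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0)

def residual_block(x, W1, W2):
    """y = relu(F(x) + x), where F(x) = W2 @ relu(W1 @ x) is the learned residual
    and x passes through unchanged on the identity shortcut."""
    return relu(W2 @ relu(W1 @ x) + x)

d = 8
x = np.random.default_rng(0).normal(size=d)
# with zero weights the residual branch vanishes and the block reduces to relu(x)
y = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
```

This is why depth becomes easier: if the residual branch learns nothing, the block degrades gracefully to (nearly) the identity instead of corrupting the signal, so stacking many blocks does not make optimization harder in the way stacking unreferenced layers does.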

123,388 citations

Posted Content
TL;DR: This work presents a residual learning framework to ease the training of networks that are substantially deeper than those used previously, and provides comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth.
Abstract: Deeper neural networks are more difficult to train. We present a residual learning framework to ease the training of networks that are substantially deeper than those used previously. We explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence showing that these residual networks are easier to optimize, and can gain accuracy from considerably increased depth. On the ImageNet dataset we evaluate residual nets with a depth of up to 152 layers---8x deeper than VGG nets but still having lower complexity. An ensemble of these residual nets achieves 3.57% error on the ImageNet test set. This result won the 1st place on the ILSVRC 2015 classification task. We also present analysis on CIFAR-10 with 100 and 1000 layers. The depth of representations is of central importance for many visual recognition tasks. Solely due to our extremely deep representations, we obtain a 28% relative improvement on the COCO object detection dataset. Deep residual nets are foundations of our submissions to ILSVRC & COCO 2015 competitions, where we also won the 1st places on the tasks of ImageNet detection, ImageNet localization, COCO detection, and COCO segmentation.

44,703 citations

Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

Journal ArticleDOI


08 Dec 2001-BMJ
TL;DR: There is, I think, something ethereal about i, the square root of minus one; at first encounter it seemed an odd beast, an intruder hovering on the edge of reality.
Abstract: There is, I think, something ethereal about i —the square root of minus one. I remember first hearing about it at school. It seemed an odd beast at that time—an intruder hovering on the edge of reality. Usually familiarity dulls this sense of the bizarre, but in the case of i it was the reverse: over the years the sense of its surreal nature intensified. It seemed that it was impossible to write mathematics that described the real world in …

33,785 citations

Proceedings Article
Sergey Ioffe1, Christian Szegedy1
06 Jul 2015
TL;DR: Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin.
Abstract: Training Deep Neural Networks is complicated by the fact that the distribution of each layer's inputs changes during training, as the parameters of the previous layers change. This slows down the training by requiring lower learning rates and careful parameter initialization, and makes it notoriously hard to train models with saturating nonlinearities. We refer to this phenomenon as internal covariate shift, and address the problem by normalizing layer inputs. Our method draws its strength from making normalization a part of the model architecture and performing the normalization for each training mini-batch. Batch Normalization allows us to use much higher learning rates and be less careful about initialization, and in some cases eliminates the need for Dropout. Applied to a state-of-the-art image classification model, Batch Normalization achieves the same accuracy with 14 times fewer training steps, and beats the original model by a significant margin. Using an ensemble of batch-normalized networks, we improve upon the best published result on ImageNet classification: reaching 4.82% top-5 test error, exceeding the accuracy of human raters.
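The per-mini-batch normalization described above reduces to standardizing each feature over the batch and then applying a learned scale and shift. A minimal NumPy sketch of the training-mode computation (batch size, feature count, and input distribution are illustrative; inference mode, which uses running statistics, is omitted):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Normalize each feature over the mini-batch, then scale and shift (training mode)."""
    mu = x.mean(axis=0)                    # per-feature batch mean
    var = x.var(axis=0)                    # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)  # standardized activations
    return gamma * x_hat + beta            # learned scale (gamma) and shift (beta)

rng = np.random.default_rng(0)
x = rng.normal(loc=5.0, scale=3.0, size=(64, 10))  # mini-batch of 64, 10 features
y = batch_norm(x, gamma=np.ones(10), beta=np.zeros(10))
```

Whatever the input distribution, each feature of `y` has (approximately) zero mean and unit variance over the batch; because `gamma` and `beta` are learned, the layer can still recover the identity transform if that is optimal.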

30,843 citations