Author

Konrad P. Kording

Bio: Konrad P. Kording is an academic researcher at the University of Pennsylvania. The author has contributed to research in the topics of artificial neural networks and deep learning, has an h-index of 61, and has co-authored 339 publications receiving 15,943 citations. Previous affiliations of Konrad P. Kording include Columbia University & the Rehabilitation Institute of Chicago.


Papers
Journal ArticleDOI
15 Jan 2004-Nature
TL;DR: This work shows that subjects internally represent both the statistical distribution of the task and their sensory uncertainty, combining them in a manner consistent with a performance-optimizing Bayesian process.
Abstract: When we learn a new motor skill, such as playing an approaching tennis ball, both our sensors and the task possess variability. Our sensors provide imperfect information about the ball's velocity, so we can only estimate it. Combining information from multiple modalities can reduce the error in this estimate. On a longer time scale, not all velocities are a priori equally probable, and over the course of a match there will be a probability distribution of velocities. According to Bayesian theory, an optimal estimate results from combining information about the distribution of velocities (the prior) with evidence from sensory feedback. As uncertainty increases, when playing in fog or at dusk, the system should increasingly rely on prior knowledge. To use a Bayesian strategy, the brain would need to represent the prior distribution and the level of uncertainty in the sensory feedback. Here we control the statistical variations of a new sensorimotor task and manipulate the uncertainty of the sensory feedback. We show that subjects internally represent both the statistical distribution of the task and their sensory uncertainty, combining them in a manner consistent with a performance-optimizing Bayesian process. The central nervous system therefore employs probabilistic models during sensorimotor learning.
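
To make the combination rule concrete, here is a minimal sketch of the Gaussian case described in the abstract: with a Gaussian prior over the task variable and Gaussian sensory noise, the Bayes-optimal estimate is a precision-weighted average of the prior mean and the observation. All numbers (prior, noise level, true shift) are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

# Gaussian prior over the true lateral shift (cm), as imposed by the task.
prior_mean, prior_sd = 1.0, 0.5
# Noisy sensory observation of the shift; sigma_sense models feedback blur.
true_shift = 2.0
sigma_sense = 1.0
rng = np.random.default_rng(0)
observation = true_shift + rng.normal(0.0, sigma_sense)

# Bayes-optimal estimate for a Gaussian prior times a Gaussian likelihood:
# a precision-weighted average of the prior mean and the observation.
w_prior = (1 / prior_sd**2) / (1 / prior_sd**2 + 1 / sigma_sense**2)
estimate = w_prior * prior_mean + (1 - w_prior) * observation

print(f"observation={observation:.2f}, posterior estimate={estimate:.2f}")
# As sigma_sense grows (fog, dusk), w_prior approaches 1 and the estimate
# is pulled toward the prior mean, matching the abstract's prediction.
```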

1,811 citations

Journal ArticleDOI
26 Sep 2007-PLOS ONE
TL;DR: An ideal-observer model is formulated that infers whether two sensory cues originate from the same location and that also estimates their location(s) and accurately predicts the nonlinear integration of cues by human subjects in two auditory-visual localization tasks.
Abstract: Perceptual events derive their significance to an animal from their meaning about the world, that is from the information they carry about their causes. The brain should thus be able to efficiently infer the causes underlying our sensory events. Here we use multisensory cue combination to study causal inference in perception. We formulate an ideal-observer model that infers whether two sensory cues originate from the same location and that also estimates their location(s). This model accurately predicts the nonlinear integration of cues by human subjects in two auditory-visual localization tasks. The results show that indeed humans can efficiently infer the causal structure as well as the location of causes. By combining insights from the study of causal inference with the ideal-observer approach to sensory cue combination, we show that the capacity to infer causal structure is not limited to conscious, high-level cognition; it is also performed continually and effortlessly in perception.
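
The following is a simplified sketch of this kind of ideal-observer computation for two cues: compare the likelihood that both cues share one cause against the likelihood of independent causes, then average the location estimates weighted by the posterior over causal structures. The Gaussian closed forms follow standard derivations; all parameter values and the zero-centered spatial prior are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

# Simplified causal-inference observer for one visual and one auditory cue.
# sigma_v, sigma_a: cue noise; sigma_p: prior over source location at 0.
sigma_v, sigma_a, sigma_p = 1.0, 3.0, 10.0
p_common = 0.5          # prior probability the cues share one cause
x_v, x_a = 2.0, 8.0     # observed cue positions (degrees)

def likelihood_common(xv, xa):
    # p(xv, xa | C=1): integrate over one shared source location s.
    # For Gaussians this integral has a closed form.
    var = sigma_v**2 * sigma_a**2 + sigma_v**2 * sigma_p**2 + sigma_a**2 * sigma_p**2
    num = (xv - xa)**2 * sigma_p**2 + xv**2 * sigma_a**2 + xa**2 * sigma_v**2
    return np.exp(-0.5 * num / var) / (2 * np.pi * np.sqrt(var))

def likelihood_indep(xv, xa):
    # p(xv, xa | C=2): two independent sources, each drawn from the prior.
    lv = norm.pdf(xv, 0.0, np.sqrt(sigma_v**2 + sigma_p**2))
    la = norm.pdf(xa, 0.0, np.sqrt(sigma_a**2 + sigma_p**2))
    return lv * la

lc, li = likelihood_common(x_v, x_a), likelihood_indep(x_v, x_a)
post_common = lc * p_common / (lc * p_common + li * (1 - p_common))

# Location estimates under each causal structure (precision-weighted means).
w = np.array([1 / sigma_v**2, 1 / sigma_a**2, 1 / sigma_p**2])
s_common = (w[0] * x_v + w[1] * x_a) / w.sum()
s_vis_indep = (w[0] * x_v) / (w[0] + w[2])

# Model averaging: weight each estimate by the posterior over structures.
s_visual = post_common * s_common + (1 - post_common) * s_vis_indep
print(f"p(common cause)={post_common:.2f}, visual estimate={s_visual:.2f}")
```

The nonlinearity the abstract mentions falls out of this scheme: when the cues are close together, the common-cause posterior is high and the cues fuse; when they are far apart, the cues are processed nearly independently.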

873 citations

Journal ArticleDOI
TL;DR: This paper reviews recent studies that have investigated the mechanisms used by the nervous system to solve estimation and decision problems; these studies show that human behaviour is close to that predicted by Bayesian decision theory.
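
As a reminder of what Bayesian decision theory prescribes, here is a minimal sketch (illustrative, not from the paper): an observer holds a posterior over world states and picks the action that minimizes expected loss; under squared-error loss this is the posterior mean.

```python
import numpy as np

# Toy Bayesian decision: choose the action minimizing expected loss
# under the posterior. All numbers are illustrative assumptions.
states = np.array([0.0, 1.0, 2.0])          # possible world states
posterior = np.array([0.2, 0.5, 0.3])       # belief after seeing data

def loss(action, state):
    return (action - state) ** 2             # squared-error loss

actions = np.linspace(0.0, 2.0, 201)
expected_loss = [np.sum(posterior * loss(a, states)) for a in actions]
best = actions[int(np.argmin(expected_loss))]
print(f"optimal action={best:.2f}")  # the posterior mean, 1.10, for this loss
```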

764 citations

Journal ArticleDOI
TL;DR: It is argued that a deep network is best understood in terms of components used to design it—objective functions, architecture and learning rules—rather than unit-by-unit computation.
Abstract: Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In artificial neural networks, the three components specified by design are the objective functions, the learning rules and the architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
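
A toy sketch (using PyTorch as one possible framework; the framework choice is ours, not the paper's) that makes the three designed components explicit:

```python
import torch
from torch import nn, optim

# Architecture: how units are wired together.
net = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
# Objective function: what the network is optimized for.
objective = nn.MSELoss()
# Learning rule: how parameters are updated from the objective's gradients.
learning_rule = optim.SGD(net.parameters(), lr=1e-2)

x, y = torch.randn(64, 10), torch.randn(64, 1)  # toy data
for _ in range(100):
    learning_rule.zero_grad()
    loss = objective(net(x), y)
    loss.backward()           # credit assignment via backpropagation
    learning_rule.step()      # apply the parameter update
print(f"final loss: {loss.item():.3f}")
```

The paper's point is that these three lines of design, not the unit-by-unit computation that results, are the right level at which to describe such a system.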

633 citations

Journal ArticleDOI
TL;DR: Emerging data analysis techniques should consider both the computational costs and the potential for more accurate models associated with this exponential growth of the number of recorded neurons.
Abstract: Over the last five decades, progress in neural recording techniques has allowed the number of simultaneously recorded neurons to double approximately every 7 years, mimicking Moore's law. Such exponential growth motivates us to ask how data analysis techniques are affected by progressively larger numbers of recorded neurons. Traditionally, neurons are analyzed independently on the basis of their tuning to stimuli or movement. Although tuning curve approaches are unaffected by growing numbers of simultaneously recorded neurons, newly developed techniques that analyze interactions between neurons become more accurate and more complex as the number of recorded neurons increases. Emerging data analysis techniques should consider both the computational costs and the potential for more accurate models associated with this exponential growth of the number of recorded neurons.
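
The growth claim is easy to state quantitatively: if the count doubles every 7 years, then N(t) = N0 * 2^((t - t0) / 7). A small projection (the starting count and year are assumptions for illustration, not the paper's data):

```python
# Projecting simultaneously recorded neurons under the observed
# doubling time of roughly 7 years.
n0, year0, doubling_time = 100, 2010, 7.0
for year in (2017, 2024, 2031):
    n = n0 * 2 ** ((year - year0) / doubling_time)
    print(f"{year}: ~{n:.0f} neurons")
# 2017: ~200, 2024: ~400, 2031: ~800 -- each 7-year step doubles the count.
```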

567 citations


Cited by
Journal ArticleDOI
TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Abstract: The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.
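
One of the representation-learning families the review covers, the autoencoder, fits in a few lines: learn a low-dimensional code by reconstructing the input. A minimal sketch with toy dimensions (PyTorch; the framework and all sizes are our assumptions):

```python
import torch
from torch import nn, optim

# Minimal autoencoder: compress 784-dimensional inputs to a 32-dimensional
# code, then reconstruct. Random data stands in for real inputs.
encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
decoder = nn.Sequential(nn.Linear(32, 784))
params = list(encoder.parameters()) + list(decoder.parameters())
opt = optim.Adam(params, lr=1e-3)

x = torch.rand(256, 784)  # stand-in for image data
for _ in range(200):
    opt.zero_grad()
    code = encoder(x)                          # learned representation
    loss = ((decoder(code) - x) ** 2).mean()   # reconstruction objective
    loss.backward()
    opt.step()
print(f"reconstruction error: {loss.item():.4f}")
```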

11,201 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This textbook surveys machine learning, covering probability distributions, linear models for regression and classification, neural networks, kernel methods, graphical models, mixture models and EM, approximate inference, sampling methods, continuous latent variables, sequential data, and combining models.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
06 Jun 1986-JAMA
TL;DR: The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.
Abstract: I have developed "tennis elbow" from lugging this book around the past four weeks, but it is worth the pain, the effort, and the aspirin. It is also worth the (relatively speaking) bargain price. Including appendixes, this book contains 894 pages of text. The entire panorama of the neural sciences is surveyed and examined, and it is comprehensive in its scope, from genomes to social behaviors. The editors explicitly state that the book is designed as "an introductory text for students of biology, behavior, and medicine," but it is hard to imagine any audience, interested in any fragment of neuroscience at any level of sophistication, that would not enjoy this book. The editors have done a masterful job of weaving together the biologic, the behavioral, and the clinical sciences into a single tapestry in which everyone from the molecular biologist to the practicing psychiatrist can find and appreciate his or her own research.

7,563 citations

Journal ArticleDOI
TL;DR: Recent studies examining spontaneous fluctuations in the blood oxygen level dependent (BOLD) signal of functional magnetic resonance imaging as a potentially important and revealing manifestation of spontaneous neuronal activity are reviewed.
Abstract: The majority of functional neuroscience studies have focused on the brain's response to a task or stimulus. However, the brain is very active even in the absence of explicit input or output. In this Article we review recent studies examining spontaneous fluctuations in the blood oxygen level dependent (BOLD) signal of functional magnetic resonance imaging as a potentially important and revealing manifestation of spontaneous neuronal activity. Although several challenges remain, these studies have provided insight into the intrinsic functional architecture of the brain, variability in behaviour and potential physiological correlates of neurological and psychiatric disease.
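
A core analysis in this literature is seed-based functional connectivity: correlate one region's spontaneous BOLD time course with every other region's. A toy sketch on synthetic signals, not real fMRI data; the TR, fluctuation frequency, and noise levels are illustrative assumptions.

```python
import numpy as np

# Synthetic "BOLD" time courses: a slow shared fluctuation (< 0.1 Hz)
# drives the seed and one coupled region; a second region is uncoupled.
rng = np.random.default_rng(0)
t = np.arange(300) * 2.0                     # 300 volumes, TR = 2 s
shared = np.sin(2 * np.pi * 0.015 * t)       # slow spontaneous fluctuation
seed = shared + 0.5 * rng.standard_normal(t.size)
regions = np.stack([
    shared + 0.5 * rng.standard_normal(t.size),  # coupled region
    rng.standard_normal(t.size),                 # uncoupled region
])

# Functional connectivity: correlation of each region with the seed.
fc = [np.corrcoef(seed, r)[0, 1] for r in regions]
print([f"{c:.2f}" for c in fc])  # high correlation only for the coupled region
```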

6,135 citations