Author

Hideki Hayakawa

Bio: Hideki Hayakawa is an academic researcher. The author has contributed to research in topics: Normal & Nonlinear system. The author has an h-index of 2 and has co-authored 3 publications receiving 141 citations.

Papers
Journal ArticleDOI
TL;DR: By mathematical analysis and computer simulation, it is shown that a small number of relaxation computations circulating this forward-inverse optics hierarchy achieves fast and reliable integration of vision modules, and might therefore resolve two open problems: how parallel visual modules are integrated into a coherent scene percept, and how the brain solves ill-posed vision problems within several hundred milliseconds.
Abstract: We propose that the feedforward connection from the lower visual cortical area to the higher visual cortical area provides an approximated inverse model of the imaging process (optics), while the backprojection connection from the higher area to the lower area provides a forward model of the optics. By mathematical analysis and computer simulation, we show that a small number of relaxation computations circulating this forward-inverse optics hierarchy achieves fast and reliable integration of vision modules, and therefore might resolve the following problems. (i) How are parallel visual modules (multiple visual cortical areas) integrated to allow a coherent scene perception? (ii) How can ill-posed vision problems be solved by the brain within several hundred milliseconds?
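For intuition, here is a minimal sketch of the relaxation scheme described in the abstract above, assuming a toy linear imaging model: a feedforward pass applies an approximated inverse of the optics, the backprojection re-renders the image through the forward model, and the brightness residual drives a few correction cycles. The operator A, its deliberately degraded inverse, and all sizes are illustrative placeholders, not the authors' model.

```python
import numpy as np

# Toy linear "optics" (all names, sizes and the 0.7 degradation factor are
# illustrative assumptions): a 3-D scene vector maps to a 2-D brightness image.
rng = np.random.default_rng(0)
A = rng.normal(size=(64, 32))              # forward model of the optics
A_inv = 0.7 * np.linalg.pinv(A)            # deliberately imperfect "approximated inverse optics"

true_scene = rng.normal(size=32)
image = A @ true_scene                     # observed brightness pattern

scene = A_inv @ image                      # initial feedforward estimate
for _ in range(5):                         # a small number of relaxation cycles
    predicted = A @ scene                  # backprojection: forward optics model
    residual = image - predicted           # brightness error at the lower area
    scene = scene + A_inv @ residual       # feedforward correction via the inverse model

print("relative error:", np.linalg.norm(scene - true_scene) / np.linalg.norm(true_scene))
```

In this toy setting each cycle shrinks the residual geometrically, which is the sense in which a handful of forward-inverse passes can suffice.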

128 citations

Journal ArticleDOI
TL;DR: A computational model for shape estimation by integration of shading and edge information is proposed; a rather weak interaction among the three modules can unravel the shape-from-shading problem, which resembles a chicken-and-egg problem.

21 citations

Journal ArticleDOI
TL;DR: Computer simulations demonstrate that the proposed algorithm can estimate the shape of an object in fewer iterations than the Ikeuchi-Horn algorithm.
Abstract: A solution of the nonlinear vision problem based on forward and approximated inverse optics models is proposed. In this solution, the modification of the three-dimensional (3-D) geometrical structure required by the 2-D brightness error is separated from the modification required by the smoothness constraint, so a simple and fast algorithm can be used for each. First, the optimality of the converged solution and the convergence of the algorithm are proved; the proof uses an optimal solution of the linear optimization problem for the approximated inverse optics. Next, the proposed algorithm is applied to the shape-from-shading problem, where a Newton-like method quickly gives an approximate optimal solution of the linear optimization problem. Computer simulations demonstrate that the proposed algorithm estimates the shape of an object in fewer iterations than the Ikeuchi-Horn algorithm.
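A rough sketch of the two-step structure described above, under simplifying assumptions (a Lambertian reflectance map with an oblique light source, a plain gradient step in place of the paper's Newton-like method, synthetic data): the brightness-error modification and the smoothness-constraint modification are applied as separate, simple updates.

```python
import numpy as np

# Illustrative sketch, not the paper's algorithm: separate brightness-error and
# smoothness-constraint steps for a shape-from-shading relaxation.

PS, QS = 0.3, 0.3                                   # assumed light-source direction

def reflectance(p, q):
    """Lambertian reflectance map R(p, q) for light direction (PS, QS)."""
    return (1.0 + p * PS + q * QS) / (
        np.sqrt(1.0 + p**2 + q**2) * np.sqrt(1.0 + PS**2 + QS**2))

H, W = 32, 32
xs = np.linspace(-1.0, 1.0, W)
ys = np.linspace(-1.0, 1.0, H)
X, Y = np.meshgrid(xs, ys)
z_true = np.exp(-(X**2 + Y**2))                     # synthetic smooth surface (a dome)
p_true = np.gradient(z_true, xs, axis=1)            # true surface gradient dz/dx
q_true = np.gradient(z_true, ys, axis=0)            # true surface gradient dz/dy
image = reflectance(p_true, q_true)                 # synthetic shaded image

p = np.zeros((H, W))                                # estimated gradient dz/dx
q = np.zeros((H, W))                                # estimated gradient dz/dy
eps, lr = 1e-4, 0.5

for _ in range(100):
    # Step 1: brightness-error modification (gradient step on the image error).
    err = image - reflectance(p, q)
    dRdp = (reflectance(p + eps, q) - reflectance(p - eps, q)) / (2 * eps)
    dRdq = (reflectance(p, q + eps) - reflectance(p, q - eps)) / (2 * eps)
    p += lr * err * dRdp
    q += lr * err * dRdq
    # Step 2: smoothness-constraint modification (local averaging of the
    # gradient fields, in the spirit of Ikeuchi-Horn relaxation).
    for g in (p, q):
        g[:] = 0.25 * (np.roll(g, 1, 0) + np.roll(g, -1, 0)
                       + np.roll(g, 1, 1) + np.roll(g, -1, 1))

print("mean brightness error:", float(np.abs(image - reflectance(p, q)).mean()))
```

Keeping the two corrections separate is what lets each step stay simple, as the abstract emphasizes; the paper's Newton-like brightness step would replace the plain gradient update used here.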

1 citation


Cited by
Journal ArticleDOI
TL;DR: This Review looks at some key brain theories in the biological and physical sciences from the free-energy perspective, suggesting that several global brain theories might be unified within a free- energy framework.
Abstract: A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories — optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework.

4,866 citations

Journal ArticleDOI
TL;DR: This target article critically examines this "hierarchical prediction machine" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action.
Abstract: Brains, it has recently been argued, are essentially prediction machines. They are bundles of cells that support perception and action by constantly attempting to match incoming sensory inputs with top-down expectations or predictions. This is achieved using a hierarchical generative model that aims to minimize prediction error within a bidirectional cascade of cortical processing. Such accounts offer a unifying model of perception and action, illuminate the functional role of attention, and may neatly capture the special contribution of cortical processing to adaptive success. This target article critically examines this "hierarchical prediction machine" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action. Sections 1 and 2 lay out the key elements and implications of the approach. Section 3 explores a variety of pitfalls and challenges, spanning the evidential, the methodological, and the more properly conceptual. The paper ends (sections 4 and 5) by asking how such approaches might impact our more general vision of mind, experience, and agency.
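As a point of orientation, the "hierarchical prediction machine" idea can be caricatured as a small predictive-coding network: each level sends a top-down prediction of the level below and receives the bottom-up prediction error, and inference runs by descending the summed squared prediction error. This is a minimal sketch under strong assumptions (two linear levels, fixed weights, toy data), not the article's model.

```python
import numpy as np

# Two-level linear predictive-coding sketch (illustrative assumptions only).
rng = np.random.default_rng(0)
W1 = rng.normal(size=(16, 8)) * 0.3   # generative weights: level-1 causes -> sensory data
W2 = rng.normal(size=(8, 4)) * 0.3    # generative weights: level-2 causes -> level-1 causes

y = rng.normal(size=16)               # sensory input (placeholder data)
x1 = np.zeros(8)                      # level-1 representation (inferred causes)
x2 = np.zeros(4)                      # level-2 representation (inferred causes)
lr = 0.1

for _ in range(200):
    e0 = y - W1 @ x1                  # bottom-up error: data vs. top-down prediction
    e1 = x1 - W2 @ x2                 # error between level 1 and level 2's prediction of it
    x1 += lr * (W1.T @ e0 - e1)       # level 1 balances explaining data and matching its prior
    x2 += lr * (W2.T @ e1)            # level 2 updates to better predict level 1

print("residual sensory prediction error:", float(np.linalg.norm(e0)))
```

The two update rules are exactly gradient descent on the summed squared errors, which is the bidirectional cascade the abstract describes: predictions flow down, errors flow up.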

3,640 citations

Journal ArticleDOI
TL;DR: The aims of this article are to encompass many apparently unrelated anatomical, physiological and psychophysical attributes of the brain within a single theoretical perspective and to provide a principled way to understand many aspects of cortical organization and responses.
Abstract: This article concerns the nature of evoked brain responses and the principles underlying their generation. We start with the premise that the sensory brain has evolved to represent or infer the causes of changes in its sensory inputs. The problem of inference is well formulated in statistical terms. The statistical fundaments of inference may therefore afford important constraints on neuronal implementation. By formulating the original ideas of Helmholtz on perception, in terms of modern-day statistical theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. It turns out that the problems of inferring the causes of sensory input (perceptual inference) and learning the relationship between input and cause (perceptual learning) can be resolved using exactly the same principle. Specifically, both inference and learning rest on minimizing the brain’s free energy, as defined in statistical physics. Furthermore, inference and learning can proceed in a biologically plausible fashion. Cortical responses can be seen as the brain’s attempt to minimize the free energy induced by a stimulus and thereby encode the most likely cause of that stimulus. Similarly, learning emerges from changes in synaptic efficacy that minimize the free energy, averaged over all stimuli encountered. The underlying scheme rests on empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organization and responses. The aim of this article is to encompass many apparently unrelated anatomical, physiological and psychophysical attributes of the brain within a single theoretical perspective. In terms of cortical architectures, the theoretical treatment predicts that sensory cortex should be arranged hierarchically, that connections should be reciprocal and that forward and backward connections should show a functional asymmetry (forward connections are driving, whereas backward connections are both driving and modulatory). In terms of synaptic physiology, it predicts associative plasticity and, for dynamic models, spike-timing-dependent plasticity. In terms of electrophysiology, it accounts for classical and extra classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena such as repetition suppression, mismatch negativity (MMN) and the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioural correlates of these physiological phenomena, for example, priming and global precedence. The final focus of this article is on perceptual learning as measured with the MMN and the implications for empirical studies of coupling among cortical areas using evoked sensory responses.
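For reference, the free energy invoked here has a standard variational form (a textbook identity, not quoted from the article), where q(ϑ) is the brain's approximate posterior over the causes ϑ of sensory input y and p(y, ϑ) is the hierarchical generative model:

```latex
F \;=\; \int q(\vartheta)\,\ln\frac{q(\vartheta)}{p(y,\vartheta)}\,d\vartheta
  \;=\; \underbrace{-\ln p(y)}_{\text{surprise}}
  \;+\; \underbrace{D_{\mathrm{KL}}\!\left[q(\vartheta)\,\middle\|\,p(\vartheta\mid y)\right]}_{\ge 0}
```

So F upper-bounds surprise; minimizing it with respect to q drives q toward the posterior (perceptual inference), and minimizing its average over stimuli with respect to synaptic parameters corresponds to perceptual learning, matching the division made in the abstract.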

3,569 citations

Book
01 Jan 2001
TL;DR: This text introduces the basic mathematical and computational methods of theoretical neuroscience and presents applications in a variety of areas including vision, sensory-motor integration, development, learning, and memory.
Abstract: Theoretical neuroscience provides a quantitative basis for describing what nervous systems do, determining how they function, and uncovering the general principles by which they operate. This text introduces the basic mathematical and computational methods of theoretical neuroscience and presents applications in a variety of areas including vision, sensory-motor integration, development, learning, and memory. The book is divided into three parts. Part I discusses the relationship between sensory stimuli and neural responses, focusing on the representation of information by the spiking activity of neurons. Part II discusses the modeling of neurons and neural circuits on the basis of cellular and synaptic biophysics. Part III analyzes the role of plasticity in development and learning. An appendix covers the mathematical methods used, and exercises are available on the book's Web site.

3,441 citations

Journal ArticleDOI
TL;DR: Cells in area TE of the inferotemporal cortex of the monkey brain selectively respond to various moderately complex object features, and those that cluster in a columnar region that runs perpendicular to the cortical surface respond to similar features.
Abstract: Cells in area TE of the inferotemporal cortex of the monkey brain selectively respond to various moderately complex object features, and those that cluster in a columnar region running perpendicular to the cortical surface respond to similar features. Although cells within a column respond to similar features, their selectivity is not necessarily identical. Optical imaging data in TE suggest that the borders between neighboring columns are not discrete; rather, a larger region containing several partially overlapping columns provides a continuous mapping of the complex feature space. This continuous mapping may be used for various computations, such as producing the image of an object at different viewing angles, illumination conditions, and articulation poses.

1,371 citations