Author

K. Hirotsu

Bio: K. Hirotsu is an academic researcher from Sumitomo Electric Industries. The author has contributed to research in topics: Controller (computing) & Chip. The author has an h-index of 1, and has co-authored 1 publication receiving 34 citations.

Papers
Journal ArticleDOI
TL;DR: A mixed-signal CMOS feedforward neural-network chip with on-chip error-reduction hardware for real-time adaptation; learning uses a genetic random search algorithm, the random weight change (RWC) algorithm, which is suitable for direct feedback control.
Abstract: The paper presents a mixed-signal CMOS feedforward neural-network chip with on-chip error-reduction hardware for real-time adaptation. The chip has compact on-chip weights capable of high-speed parallel learning; the implemented learning algorithm is a genetic random search algorithm: the random weight change (RWC) algorithm. The algorithm does not require a known desired neural-network output for error calculation and is therefore suitable for direct feedback control. With hardware experiments, we demonstrate that the RWC chip, as a direct feedback controller, successfully suppresses unstable oscillations modeling combustion-engine instability in real time.
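The abstract names the RWC rule without spelling it out, so here is a minimal Python sketch of one common formulation of random weight change (the function names, step size, and toy error function are illustrative choices, not taken from the paper): all weights are perturbed simultaneously by ±delta; if the error decreased, the same perturbation is repeated, otherwise a fresh random one is drawn. Only the sign of the error change is needed, which is what makes the rule attractive for simple on-chip learning hardware.

    import numpy as np

    def rwc_train(w0, error_fn, delta=0.01, iters=10_000, seed=0):
        """Random weight change (RWC) sketch: perturb all weights by
        +/-delta at once; keep the same perturbation while the error
        decreases, otherwise draw a new random one."""
        rng = np.random.default_rng(seed)
        w = np.asarray(w0, dtype=float).copy()
        dw = delta * rng.choice([-1.0, 1.0], size=w.shape)
        prev_err = error_fn(w)
        for _ in range(iters):
            w += dw
            err = error_fn(w)
            if err >= prev_err:  # no improvement: new random direction
                dw = delta * rng.choice([-1.0, 1.0], size=w.shape)
            prev_err = err
        return w

    # Toy usage: drive a quadratic "error" toward zero.
    w = rwc_train(np.ones(8), lambda w: float(np.sum(w ** 2)), delta=0.005)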

36 citations


Cited by
01 Nov 1981
TL;DR: The use of local derivatives for detecting intensity edges in images is studied, where a local difference of intensities is computed at each pixel of the image.
Abstract: Most of the signal processing that we will study in this course involves local operations on a signal, namely transforming the signal by applying linear combinations of values in the neighborhood of each sample point. You are familiar with such operations from calculus, namely taking derivatives, and you are also familiar with this from optics, namely blurring a signal. We will be looking at sampled signals only. Let's start with a few basic examples.

Local difference. Suppose we have a 1D image and we take the local difference of intensities,

    DI(x) = (1/2) * (I(x + 1) - I(x - 1)),

which gives a discrete approximation to a partial derivative. (We compute this for each x in the image.) What is the effect of such a transformation? One key idea is that such a derivative would be useful for marking positions where the intensity changes. Such a change is called an edge. It is important to detect edges in images because they often mark locations at which object properties change. These can include changes in illumination along a surface due to a shadow boundary, a material (pigment) change, or a change in depth, as when one object ends and another begins.

The computational problem of finding intensity edges in images is called edge detection. We could look for positions at which DI(x) has a large negative or positive value. Large positive values indicate an edge that goes from low to high intensity, and large negative values indicate an edge that goes from high to low intensity.

Example. Suppose the image consists of a single (slightly sloped) edge:
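As a concrete companion to the local-difference operator above (the function names, threshold, and toy step image are illustrative, not from the notes), a short Python sketch of derivative-based edge marking:

    import numpy as np

    def central_difference(I):
        """DI(x) = 0.5 * (I(x+1) - I(x-1)), interior samples only."""
        I = np.asarray(I, dtype=float)
        return 0.5 * (I[2:] - I[:-2])

    def detect_edges(I, threshold):
        """Mark x where |DI(x)| is large; positive DI means a
        low-to-high edge, negative DI a high-to-low edge."""
        DI = central_difference(I)
        return np.nonzero(np.abs(DI) > threshold)[0] + 1  # +1 restores x

    # A single step edge: DI peaks at the transition.
    I = np.array([0, 0, 0, 0, 10, 10, 10, 10], dtype=float)
    print(detect_edges(I, threshold=2.0))  # -> [3 4]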

1,829 citations

Journal ArticleDOI
TL;DR: This article presents a comprehensive overview of the hardware realizations of artificial neural network models, known as hardware neural networks (HNN), appearing in academic studies as prototypes as well as in commercial use.

638 citations

Posted Content
TL;DR: An exhaustive review of the research conducted in neuromorphic computing since the inception of the term is provided to motivate further work by illuminating gaps in the field where new research is needed.
Abstract: Neuromorphic computing has come to refer to a variety of brain-inspired computers, devices, and models that contrast the pervasive von Neumann computer architecture. This biologically inspired approach has created highly connected synthetic neurons and synapses that can be used to model neuroscience theories as well as solve challenging machine learning problems. The promise of the technology is to create a brain-like ability to learn and adapt, but the technical challenges are significant, starting with an accurate neuroscience model of how the brain works, to finding materials and engineering breakthroughs to build devices to support these models, to creating a programming framework so the systems can learn, to creating applications with brain-like capabilities. In this work, we provide a comprehensive survey of the research and motivations for neuromorphic computing over its history. We begin with a 35-year review of the motivations and drivers of neuromorphic computing, then look at the major research areas of the field, which we define as neuro-inspired models, algorithms and learning approaches, hardware and devices, supporting systems, and finally applications. We conclude with a broad discussion on the major research topics that need to be addressed in the coming years to see the promise of neuromorphic computing fulfilled. The goals of this work are to provide an exhaustive review of the research conducted in neuromorphic computing since the inception of the term, and to motivate further work by illuminating gaps in the field where new research is needed.

570 citations

Journal ArticleDOI
TL;DR: For a general class of memristor-based recurrent neural networks with time-varying delays, conditions for nondivergence and global attractivity are established using local inhibition, together with exponential convergence results.
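For orientation, a typical state equation for memristor-based recurrent networks with time-varying delays (the standard form used in this literature, given here as an assumption rather than quoted from the paper) reads, in LaTeX:

    \dot{x}_i(t) = -d_i x_i(t)
        + \sum_{j=1}^{n} a_{ij}(x_i(t))\, f_j(x_j(t))
        + \sum_{j=1}^{n} b_{ij}(x_i(t))\, f_j\bigl(x_j(t - \tau_{ij}(t))\bigr)
        + u_i, \qquad i = 1, \dots, n,

where x_i is the neuron state, f_j the activation, u_i the external input, \tau_{ij}(t) a bounded time-varying delay, and the memristive weights a_{ij}(\cdot), b_{ij}(\cdot) switch with the state; the nondivergence and attractivity conditions constrain these switched weights.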

177 citations

Journal ArticleDOI
TL;DR: It is shown that circuit-based learning using RWC is two orders of magnitude faster than its software counterpart; the work is the first of its kind demonstrating successful circuit-based learning for a multilayer neural network built with memristors.
Abstract: A memristor-based circuit architecture for multilayer neural networks is proposed. It is the first of its kind to demonstrate successful circuit-based learning for a multilayer neural network built with memristors. Though the back-propagation algorithm is a powerful learning scheme for multilayer neural networks, its hardware implementation is very difficult due to the complexity of the neural synapses and of the operations involved in the learning algorithm. In this paper, the circuit of a multilayer neural network is designed with memristor bridge synapses, and learning is realized with a simple learning algorithm called random weight change (RWC). Though the RWC algorithm requires more iterations than back-propagation, we show that circuit-based learning using RWC is two orders of magnitude faster than its software counterpart. A method to build a multilayer neural network using memristor bridge synapses and a circuit-based learning architecture for the RWC algorithm are proposed. Comparisons between software-based and memristor circuit-based learning are presented via simulations.
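The abstract names the memristor bridge synapse without giving its weight relation; in the commonly published form, four memristors act as two voltage dividers whose difference yields a signed weight. A Python sketch with illustrative names and values (an assumption about the standard bridge formula, not quoted from this paper):

    def bridge_weight(m1, m2, m3, m4):
        """Effective weight of a four-memristor bridge synapse:
        V_out = w * V_in, with w the difference of two voltage
        dividers, so w lies in (-1, 1) and can be positive,
        zero, or negative."""
        return m2 / (m1 + m2) - m4 / (m3 + m4)

    # Balanced bridge -> zero weight; imbalance -> signed weight.
    print(bridge_weight(10e3, 10e3, 10e3, 10e3))  # 0.0
    print(bridge_weight(5e3, 15e3, 15e3, 5e3))    # 0.5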

140 citations