Author

Arun Rao

Bio: Arun Rao is an academic researcher from Arizona State University. The author has contributed to research in topics: Artificial neural network & Adaptive resonance theory. The author has an h-index of 3, and has co-authored 3 publications receiving 27 citations.

Papers
Journal ArticleDOI
TL;DR: It is shown how the ART1 paradigm can be functionally emulated by the limited resolution pipelined architecture, in the absence of full parallelism.
Abstract: The embedding of neural networks in real-time systems performing classification and clustering tasks requires that models be implemented in hardware. A flexible, pipelined associative memory capable of operating in real-time is proposed as a hardware substrate for the emulation of neural fixed-radius clustering and binary classification schemes. This paper points out several important considerations in the development of hardware implementations. As a specific example, it is shown how the ART1 paradigm can be functionally emulated by the limited resolution pipelined architecture, in the absence of full parallelism.

15 citations

Proceedings Article
16 Oct 1989
TL;DR: A pipelined associative memory appears to offer an attractive interim solution to the variety of problems that ART1 addresses, and has the advantage of using the best of conventional technology while being capable of all the functions of a nonparallel hardware implementation of ART1.
Abstract: Adaptive resonance theory (ART) is a neural-network based clustering method developed by G.A. Carpenter and S. Grossberg (1987). Its inspiration is neurobiological and its component parts are intended to model a variety of hierarchical inference levels in the human brain. Neural networks based upon ART are capable of recognizing patterns close to previously stored patterns according to some criterion, and storing patterns which are not close to already stored patterns. There are two varieties of ART networks; ART1 recognizes binary inputs and ART2 can deal with general analog inputs. The theory of the networks is outlined, and then hardware implementations are discussed. A pipelined associative memory appears to offer an attractive interim solution to the variety of problems that ART1 addresses. It has the advantage of using the best of conventional technology while being capable of all the functions of a nonparallel hardware implementation of ART1.
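The recognize-or-store behavior the abstract describes can be sketched as a minimal ART1-style clustering loop. This is a generic software illustration, not the paper's hardware design; the parameter names rho (vigilance) and beta, and the exact choice function, are assumptions chosen to mirror standard ART1 descriptions:

```python
def art1_cluster(patterns, rho=0.7, beta=1.0):
    """Minimal ART1-style clustering of binary patterns.

    rho: vigilance threshold; higher values force finer clusters.
    beta: small constant in the category-choice function.
    """
    prototypes = []  # stored binary weight vectors, one per category
    labels = []
    for I in patterns:
        norm_I = sum(I)
        # Rank existing categories by the choice function |I AND w| / (beta + |w|).
        ranked = sorted(
            range(len(prototypes)),
            key=lambda j: -sum(a & b for a, b in zip(I, prototypes[j]))
                          / (beta + sum(prototypes[j])))
        chosen = None
        for j in ranked:
            match = sum(a & b for a, b in zip(I, prototypes[j]))
            if norm_I and match / norm_I >= rho:  # vigilance test: close enough?
                # Resonance: store by intersecting the prototype with the input.
                prototypes[j] = [a & b for a, b in zip(I, prototypes[j])]
                chosen = j
                break
        if chosen is None:  # no stored pattern is close: store a new one
            prototypes.append(list(I))
            chosen = len(prototypes) - 1
        labels.append(chosen)
    return labels, prototypes
```

With vigilance 0.7, two identical patterns share a category while a disjoint pattern opens a new one, matching the "recognize close patterns, store distant ones" behavior described above.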

8 citations

Proceedings ArticleDOI
22 Nov 1989
TL;DR: Two varieties of ART networks have been proposed; since the emphasis of this work is on conventional hardware implementation, ART1 is mainly discussed.
Abstract: Adaptive resonance theory (ART) is a neural-network based clustering method developed by G.A. Carpenter and S. Grossberg (1987). Its inspiration is neurobiological and its component parts are intended to model a variety of hierarchical inference levels in the human brain. Neural networks based upon ART are capable of recognizing patterns close to previously stored patterns according to some criterion, and storing patterns which are not close to already stored patterns. Two varieties of ART networks have been proposed: ART1 recognizes binary inputs, and ART2 can deal with general analog inputs as well. Since the emphasis of this work is on conventional hardware implementation, ART1 is mainly discussed.

4 citations


Cited by
Journal ArticleDOI
TL;DR: This paper describes elements necessary for a general-purpose low-cost very large scale integration (VLSI) neural network and a 64-synapse, 8-neuron proof-of-concept chip, which solves four-bit parity in an average of 680 ms and is successful in about 96% of the trials.
Abstract: This paper describes elements necessary for a general-purpose low-cost very large scale integration (VLSI) neural network. By choosing a learning algorithm that is tolerant of analog nonidealities, the promise of high-density analog VLSI is realized. A 64-synapse, 8-neuron proof-of-concept chip is described. The synapse, which occupies only 4900 μm² in a 2-μm technology, includes a hybrid of nonvolatile and dynamic weight storage that provides fast and accurate learning as well as reliable long-term storage with no refreshing. The architecture is user-configurable in any one-hidden-layer topology. The user interface is fully microprocessor compatible. Learning is accomplished with minimal external support; the user need only present inputs, targets, and a clock. Learning is fast and reliable. The chip solves four-bit parity in an average of 680 ms and is successful in about 96% of the trials.

77 citations

Journal ArticleDOI
TL;DR: An analog very large scale integration (VLSI) neural network intended for cost-sensitive, battery-powered, high-volume applications is described, with on-chip controlled perturbation-based gradient descent allowing fast learning with very little external support.
Abstract: An analog very large scale integration (VLSI) neural network intended for cost-sensitive, battery-powered, high-volume applications is described. Weights are stored in the analog domain using a combination of dynamic and nonvolatile memory that allows both fast learning and reliable long-term storage. The synapse occupies 4900 μm² in a 2-μm technology. On-chip controlled perturbation-based gradient descent allows fast learning with very little external support. Other distinguishing features include a reconfigurable topology and a temperature-independent feedforward path. An eight-neuron, 64-synapse proof-of-concept chip reliably solves the exclusive-or problem in tens of milliseconds and 4-b parity in hundreds of milliseconds.
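The perturbation-based gradient descent mentioned in this abstract can be illustrated in software as finite-difference weight perturbation: nudge each weight, measure the change in error, and step against the estimated gradient. This is a generic sketch of the technique, not the chip's on-chip circuitry; the function name and constants are illustrative:

```python
def perturbation_descent_step(weights, loss, lr=0.1, delta=1e-3):
    """One step of weight-perturbation learning.

    Instead of computing exact gradients, estimate dE/dw_i by perturbing
    each weight by delta and observing the resulting loss change -- a
    scheme that tolerates analog nonidealities because it only requires
    measuring the network's actual output error.
    """
    base = loss(weights)
    updated = list(weights)
    for i in range(len(weights)):
        probe = list(weights)
        probe[i] += delta
        grad_i = (loss(probe) - base) / delta  # finite-difference estimate
        updated[i] = weights[i] - lr * grad_i  # descend the estimated gradient
    return updated
```

Iterating this step on a simple quadratic loss drives the weight toward the minimum, which is the essential behavior the hardware exploits.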

39 citations

Journal ArticleDOI
TL;DR: An overview of several competitive learning algorithms in artificial neural networks, including self-organizing feature maps, focusing on properties of these algorithms important to hardware implementations, and a reconfigurable parallel neurocomputer architecture designed using digital signal processing chips and field-programmable gate array devices.
Abstract: This paper begins with an overview of several competitive learning algorithms in artificial neural networks, including self-organizing feature maps, focusing on properties of these algorithms important to hardware implementations. We then discuss previously reported digital implementations of these networks. Finally, we report a reconfigurable parallel neurocomputer architecture we have designed using digital signal processing chips and field-programmable gate array devices. Communications are based upon a broadcast network with FPGA-based message preprocessing and postprocessing. A small prototype of this system has been constructed and applied to competitive learning in self-organizing maps. This machine is able to model slowly-varying nonstationary data in real time.
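The competitive learning with self-organizing feature maps surveyed in this paper can be sketched as a winner-take-all update with a shrinking neighborhood. Everything below (the 1-D map, learning rate, decay schedule) is an illustrative toy, not the paper's neurocomputer implementation:

```python
import math
import random

def som_train(data, n_units=4, lr=0.3, epochs=20, seed=0):
    """Toy 1-D self-organizing map via competitive learning.

    Each sample excites the best-matching unit (the competitive step);
    that unit and its map neighbors move toward the sample, with the
    neighborhood width shrinking over training so units specialize.
    """
    rng = random.Random(seed)
    dim = len(data[0])
    units = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for epoch in range(epochs):
        # Neighborhood width decays from 1.0 toward 0.1 over the run.
        sigma = 0.1 ** (epoch / max(1, epochs - 1))
        for x in data:
            # Competitive step: find the best-matching unit (BMU).
            bmu = min(range(n_units),
                      key=lambda j: sum((a - w) ** 2
                                        for a, w in zip(x, units[j])))
            for j in range(n_units):
                h = math.exp(-abs(j - bmu) / sigma)  # neighborhood gain
                units[j] = [w + lr * h * (a - w)
                            for w, a in zip(units[j], x)]
    return units
```

The broadcast-network architecture the paper reports parallelizes exactly the inner distance computations of the competitive step, which dominate the cost as the map grows.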

32 citations

Journal ArticleDOI
TL;DR: In this article, an implementation of the adaptive resonance theory (ART) of neural networks using an optical correlator is presented, allowing the large body of correlator research to be leveraged in implementing ART.
Abstract: A solution to the problem of implementing the adaptive resonance theory (ART) of neural networks is presented; it uses an optical correlator, which allows the large body of correlator research to be leveraged in the implementation of ART. The implementation takes advantage of the fact that one ART-based architecture, known as ART1, can be broken into several parts, some of which are better implemented in parallel. The control structure of ART, often regarded as its most complex part, is actually not very time consuming and can be realized in electronics. The bottom-up and top-down gated pathways, however, are very time consuming to simulate and are difficult to implement directly in electronics due to the high number of interconnections. In addition to the design, the authors present experiments with a laboratory prototype to illustrate its feasibility and to discuss implementation details that arise in practice. This device can potentially outperform alternative implementations of ART1 by as much as two to three orders of magnitude in problems requiring especially large input fields.
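The gated pathways this abstract singles out as the expensive part reduce to correlations (inner products) of the input pattern with every stored weight vector, which is exactly the operation an optical correlator computes in parallel. A scalar sketch of that bottom-up step, with illustrative names:

```python
def bottom_up(I, W):
    """Bottom-up gated pathway of ART1 expressed as plain correlations.

    Each category node's activation is the inner product of the input
    pattern I with that node's weight vector (one row of W). This
    O(len(I) * len(W)) loop is the step worth parallelizing in hardware
    (e.g., optically); the surrounding control logic is comparatively cheap.
    """
    return [sum(i * w for i, w in zip(I, row)) for row in W]
```

For binary patterns, each activation counts the overlap between the input and a stored template, which is what drives category choice in ART1.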

32 citations
