Proceedings ArticleDOI

ART1 network implementation issues

22 Nov 1989-pp 462-466
TL;DR: Two varieties of ART networks have been proposed; since the emphasis of this work is on conventional hardware implementation, ART1 is mainly discussed.
Abstract: Adaptive resonance theory (ART) is a neural-network based clustering method developed by G.A. Carpenter and S. Grossberg (1987). Its inspiration is neurobiological and its component parts are intended to model a variety of hierarchical inference levels in the human brain. Neural networks based upon ART are capable of 'recognizing' patterns close to previously stored patterns according to some criterion, and storing patterns which are not close to already stored patterns. Two varieties of ART networks have been proposed. ART1 recognizes binary inputs and ART2 can deal with general analog inputs as well. Since the emphasis of this work is on conventional hardware implementation, ART1 is mainly discussed.
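The abstract describes the ART1 matching-and-storage behaviour only in words. The following is a minimal sketch of one ART1-style presentation step, assuming the standard fast-learning formulation with a vigilance test and a Weber-law choice function; the parameter names rho and L and the helper art1_present are illustrative, not taken from this paper.

```python
import numpy as np

def art1_present(pattern, templates, rho=0.75, L=2.0):
    """One simplified ART1 presentation step (fast learning).

    pattern   : 1-D binary NumPy array (the input I); must not be all zeros
    templates : list of 1-D boolean NumPy arrays (stored category prototypes)
    rho       : vigilance parameter in (0, 1]; higher values give finer categories
    L         : Weber-law constant (> 1) used in the choice function
    Returns the index of the winning (possibly newly created) category.
    """
    I = pattern.astype(bool)
    norm_I = I.sum()
    assert norm_I > 0, "input pattern must contain at least one active bit"

    # Rank committed categories by the Weber-law choice function T_j.
    order = sorted(
        range(len(templates)),
        key=lambda j: -(I & templates[j]).sum() / (L - 1.0 + templates[j].sum()),
    )

    for j in order:
        # Vigilance test: fraction of the input matched by the template.
        if (I & templates[j]).sum() / norm_I >= rho:
            # Resonance: fast learning shrinks the template toward the intersection.
            templates[j] = I & templates[j]
            return j
        # Otherwise this category is reset and the search continues.

    # No stored pattern is close enough: commit a new category storing the input.
    templates.append(I.copy())
    return len(templates) - 1
```

Presenting a stream of binary vectors to this routine reproduces the behaviour described above: inputs sufficiently close to a stored template refine that template, while inputs that fail the vigilance test for every stored category create a new one.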
Citations
Book ChapterDOI
29 Oct 2002
TL;DR: This method can perform an IP lookup in 4.5 nanoseconds, which implies support for a 60 Gbps link rate; pipelining and parallel processing can be used to increase the link rate up to 400 Gbps and decrease the learning time.
Abstract: IP routers need lookup tables to forward packets. They also classify packets to determine which flow they belong to and to decide what quality of service they should receive. The increasing rate of communication links is at odds with the practical processing power of routers and switches. We propose a few neural network algorithms to solve the IP lookup problem. Some of these algorithms give promising results; however, they have problems with training time. Parallel processing of neural networks provides huge processing power for IP lookup. The algorithm can be implemented in hardware on a single chip. Our method can perform an IP lookup in 4.5 nanoseconds, which implies support for a 60 Gbps link rate. Pipelining and parallel processing can be used to increase the link rate up to 400 Gbps and decrease the learning time.
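For orientation, "IP lookup" here means longest-prefix matching of a destination address against a routing table. The sketch below shows the conventional (non-neural) formulation of the problem that the authors' neural algorithms aim to accelerate; the table contents and function name are made up for illustration.

```python
def longest_prefix_match(ip_bits, table):
    """Conventional longest-prefix-match lookup over a dict of prefixes.

    ip_bits : IP address as a bit string, e.g. '11000000101010000000000100000001'
    table   : dict mapping prefix bit strings to next-hop identifiers
    Returns the next hop for the longest matching prefix, or None.
    """
    best = None
    for length in range(len(ip_bits), -1, -1):   # try the longest prefixes first
        prefix = ip_bits[:length]
        if prefix in table:
            best = table[prefix]
            break
    return best

# Example routing table: the neural approach replaces this search,
# not the semantics of the lookup itself.
table = {'1100000010101000': 'port A', '110000001010100000000001': 'port B'}
print(longest_prefix_match('11000000101010000000000100000001', table))  # -> 'port B'
```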

5 citations


Cites background from "ART1 network implementation issues"

  • ...For more information on the algorithm and the structure, an interested reader can refer to [17, 30]....


Proceedings ArticleDOI
30 May 1994
TL;DR: A new learning law, the Direct Coding Rule, is proposed for bottom-up long-term memory learning in Adaptive Resonance Theory (ART) networks; it requires less computational precision than the traditional Weber Law Rule and modifies the search dynamics of the network to accelerate convergence.
Abstract: A new learning law, the Direct Coding Rule, is proposed for bottom-up long term memory learning in Adaptive Resonance Theory (ART) networks. This law requires less computational precision than the traditional Weber Law Rule and modifies the search dynamics of the network to accelerate convergence. Following a brief mathematical analysis of the new learning law, an ART1 network based on this law is applied to a passive radar detection problem. The simulation results allow comparison of the new law to the Weber Law Rule, with and without weight quantization, from the speed and cost viewpoints.
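The Direct Coding Rule itself is not reproduced in this summary. For context, the Weber Law Rule it is compared against is usually stated (in the fast-learning ART1 literature; this is the textbook form, not quoted from this paper) as setting the bottom-up weights of the winning category J to

```latex
% Assumed textbook fast-learning Weber Law Rule for ART1 bottom-up weights:
% L > 1 is a constant, I is the binary input, z_J the winning top-down template,
% and |I \wedge z_J| the number of bits in their intersection.
z^{\mathrm{bu}}_{iJ} \;=\; \frac{L\,(I \wedge z_J)_i}{L - 1 + |I \wedge z_J|}
```

The division by the pattern size yields small fractional weights for large inputs, which is presumably the precision burden the Direct Coding Rule is designed to reduce.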

4 citations

01 Jan 2005
TL;DR: In this article, the authors propose a neural network algorithm to solve the IP lookup problem that can perform an IP lookup in 4.5 nanoseconds, implying support for a 60 Gbps link rate.
Abstract: Routers use lookup tables to forward packets. They also classify packets to determine which flow they belong to and what quality of service (QoS) they should receive. The increasing rate of communication links is at odds with the practical processing power of routers and switches. We propose some neural network algorithms to solve the IP lookup problem. One of these algorithms, back propagation, gives promising results; however, it has problems with training time. Another algorithm, a 12-layer neural network, achieves acceptable results in error rate and training time. Parallel processing of neural networks provides huge processing power for IP lookup. The algorithm can be implemented in hardware on a single chip. Our method can perform an IP lookup in 4.5 nanoseconds, which implies support for a 60 Gbps link rate. Pipelining and parallel processing can be used to increase the link rate up to 400 Gbps and also decrease the learning time. Keywords: IP lookup, packet classification, neural network, ART1, back propagation

3 citations

Proceedings ArticleDOI
21 Sep 2003
TL;DR: This work proposes a neural network scheme for the IP lookup problem: a 12-layer neural network that can be implemented in hardware on a single chip and can perform an IP lookup in only 4.5 nanoseconds, implying it can support a 60 Gbps link rate.
Abstract: IP routers use lookup tables to forward packets. They also classify packets to determine which flow they belong to in order to decide the type of quality of service they should receive. The increasing rate of communication links and the expansion of the global network are at odds with the practical processing power of switching devices. We propose a neural network scheme for the IP lookup problem. Our algorithm, a 12-layer neural network, achieves acceptable results in error rate and training time. Fortunately, parallel processing of neural networks provides huge processing power to process packets. Our algorithm can be implemented in hardware on a single chip and can perform an IP lookup in only 4.5 nanoseconds, implying it can support a 60 Gbps link rate. Pipelining and parallel processing can be used to increase the link rate up to 400 Gbps and decrease the learning time.

2 citations

References
Journal ArticleDOI
TL;DR: A neural network architecture for the learning of recognition categories is derived which circumvents the noise, saturation, capacity, orthogonality, and linear predictability constraints that limit the codes which can be stably learned by alternative recognition models.
Abstract: A neural network architecture for the learning of recognition categories is derived. Real-time network dynamics are completely characterized through mathematical analysis and computer simulations. The architecture self-organizes and self-stabilizes its recognition codes in response to arbitrary orderings of arbitrarily many and arbitrarily complex binary input patterns. Top-down attentional and matching mechanisms are critical in self-stabilizing the code learning process. The architecture embodies a parallel search scheme which updates itself adaptively as the learning process unfolds. After learning self-stabilizes, the search process is automatically disengaged. Thereafter input patterns directly access their recognition codes without any search. Thus recognition time does not grow as a function of code complexity. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. These invariant properties emerge in the form of learned critical feature patterns, or prototypes. The architecture possesses a context-sensitive self-scaling property which enables its emergent critical feature patterns to form. They detect and remember statistically predictive configurations of featural elements which are derived from the set of all input patterns that are ever experienced. Four types of attentional process: priming, gain control, vigilance, and intermodal competition, are mechanistically characterized. Top-down priming and gain control are needed for code matching and self-stabilization. Attentional vigilance determines how fine the learned categories will be. If vigilance increases due to an environmental disconfirmation, then the system automatically searches for and learns finer recognition categories. A new nonlinear matching law (the 2/3 Rule) and new nonlinear associative laws (the Weber Law Rule, the Associative Decay Rule, and the Template Learning Rule) are needed to achieve these properties. All the rules describe emergent properties of parallel network interactions. The architecture circumvents the noise, saturation, capacity, orthogonality, and linear predictability constraints that limit the codes which can be stably learned by alternative recognition models.
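As a concrete statement of the vigilance mechanism mentioned above (the usual ART1 formulation, given here for reference rather than quoted from the abstract): a chosen category J resonates with binary input I only if the matched portion of the input is large enough,

```latex
% Standard ART1 vigilance (matching) criterion: resonance occurs for the
% chosen category J when the matched fraction of the input reaches rho;
% otherwise the category is reset and the search continues.
\frac{|I \wedge z_J|}{|I|} \;\ge\; \rho
```

Raising the vigilance ρ therefore forces finer categories, which is exactly the behaviour described in the abstract when an environmental disconfirmation increases vigilance.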

2,462 citations

PatentDOI
TL;DR: ART 2, a class of adaptive resonance architectures which rapidly self-organize pattern recognition categories in response to arbitrary sequences of either analog or binary input patterns, is introduced.
Abstract: A neural network includes a feature representation field which receives input patterns. Signals from the feature representation field select a category from a category representation field through a first adaptive filter. Based on the selected category, a template pattern is applied to the feature representation field, and a match between the template and the input is determined. If the angle between the template vector and a vector within the representation field is too great, the selected category is reset. Otherwise the category selection and template pattern are adapted to the input pattern as well as the previously stored template. A complex representation field includes signals normalized relative to signals across the field and feedback for pattern contrast enhancement.
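The reset criterion in this patent is stated in terms of the angle between the template and a feature-field vector. A minimal sketch of such an angle test follows, with the threshold name rho and the normalization chosen for illustration; the patent's actual mechanism is an analog circuit, not this code.

```python
import numpy as np

def art2_match_ok(template, field_vector, rho=0.9):
    """Angle-based match test in the spirit of the ART2 patent abstract (sketch).

    The category is accepted when the cosine of the angle between the
    normalized template and the feature-field vector reaches the vigilance
    level rho; otherwise the selected category would be reset.
    """
    t = template / np.linalg.norm(template)
    v = field_vector / np.linalg.norm(field_vector)
    return float(np.dot(t, v)) >= rho
```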

1,865 citations

Proceedings ArticleDOI
19 Feb 1988
TL;DR: ART 2, a class of adaptive resonance architectures which rapidly self-organize pattern recognition categories in response to arbitrary sequences of either analog or binary input patterns, is introduced.
Abstract: Adaptive resonance architectures are neural networks that self-organize stable pattern recognition codes in real-time in response to arbitrary sequences of input patterns. This article introduces ART 2, a class of adaptive resonance architectures which rapidly self-organize pattern recognition categories in response to arbitrary sequences of either analog or binary input patterns. In order to cope with arbitrary sequences of analog input patterns, ART 2 architectures embody solutions to a number of design principles, such as the stability-plasticity tradeoff, the search-direct access tradeoff, and the match-reset tradeoff. In these architectures, top-down learned expectation and matching mechanisms are critical in self-stabilizing the code learning process. A parallel search scheme updates itself adaptively as the learning process unfolds, and realizes a form of real-time hypothesis discovery, testing, learning, and recognition. After learning self-stabilizes, the search process is automatically disengaged. Thereafter input patterns directly access their recognition codes without any search. Thus recognition time for familiar inputs does not increase with the complexity of the learned code. A novel input pattern can directly access a category if it shares invariant properties with the set of familiar exemplars of that category. An attentional vigilance parameter determines how fine the categories will be. If vigilance increases (decreases) due to environmental feedback, then the system automatically searches for and learns finer (coarser) recognition categories. Gain control parameters enable the architecture to suppress noise up to a prescribed level. The architecture's global design enables it to learn effectively despite the high degree of nonlinearity of such mechanisms.

859 citations

Proceedings ArticleDOI
24 Jul 1988
TL;DR: In this paper, a discussion of some of the implementation constraints imposed on VLSI architectures for emulations of very large connectionist/neural networks (VLCNs) is presented.
Abstract: A discussion is presented of some of the implementation constraints imposed on VLSI architectures for emulations of very large connectionist/neural networks (VLCNs). Specifically, the authors show that multiplexing of interconnections is necessary for networks exhibiting poor locality. They show that it is more feasible to build a VLCN system with sharing or multiplexing of interconnections than to build one with dedicated wires for each connection. This is true unless the network exhibits extreme locality and all CNs are connected to others within some small-radius region. Unfortunately, association requires some global connectivity.
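The feasibility argument for multiplexing can be made concrete with a back-of-envelope count. The unit counts and the linear bus estimate below are illustrative assumptions, not figures from the paper.

```python
# Illustrative comparison (assumed numbers): dedicated point-to-point wires grow
# quadratically with the number of fully connected units, while a time-multiplexed
# shared bus grows roughly linearly.
for n in (1_000, 10_000, 100_000):
    dedicated = n * (n - 1)   # one directed wire per connection, full connectivity
    multiplexed = 2 * n       # each unit taps a shared bus (order-of-magnitude)
    print(f"{n:>7} units: {dedicated:>16,} dedicated wires vs ~{multiplexed:,} bus taps")
```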

67 citations