
Showing papers on "Bidirectional associative memory published in 1997"


Proceedings Article
01 Dec 1997
TL;DR: If an object has a continuous family of instantiations, it should be represented by a continuous attractor, and this idea is illustrated with a network that learns to complete patterns.
Abstract: One approach to invariant object recognition employs a recurrent neural network as an associative memory. In the standard depiction of the network's state space, memories of objects are stored as attractive fixed points of the dynamics. I argue for a modification of this picture: if an object has a continuous family of instantiations, it should be represented by a continuous attractor. This idea is illustrated with a network that learns to complete patterns. To perform the task of filling in missing information, the network develops a continuous attractor that models the manifold from which the patterns are drawn. From a statistical viewpoint, the pattern completion task allows a formulation of unsupervised learning in terms of regression rather than density estimation.
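
The following is a minimal sketch of this statistical viewpoint, not the paper's network: patterns are drawn from a one-dimensional manifold (a half-circle, an assumed toy manifold), and completion of a missing coordinate is posed as regression on the observed one.

```python
# Toy pattern-completion-as-regression sketch (assumed setup, not the
# paper's network). Patterns lie on a 1-D manifold: the upper half-circle.
import numpy as np

rng = np.random.default_rng(0)
theta = rng.uniform(0, np.pi, 500)
patterns = np.stack([np.cos(theta), np.sin(theta)], axis=1)

def features(x):
    # small polynomial basis for the regression
    return np.stack([np.ones_like(x), x, x**2, x**3], axis=1)

# Regress the second coordinate (to be completed) on the first (observed).
w, *_ = np.linalg.lstsq(features(patterns[:, 0]), patterns[:, 1], rcond=None)

# Completion: x = 0.5 observed, y missing; true value is sqrt(1 - 0.25) ~ 0.866.
print("completed y:", features(np.array([0.5])) @ w)
```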

107 citations


Journal ArticleDOI
TL;DR: A design method is illustrated for associative memories using a new model of discrete-time high-order neural networks that includes local interconnections among neurons, making an implementation of such networks feasible.
Abstract: In this brief a design method for associative memories using a new model of discrete-time high-order neural networks which includes local interconnections among neurons is illustrated. The synthesis approach, which exploits the properties of pseudoinverse matrices, is flexible as it enables one to choose the complexity of the associative memory to be designed; that is, it can generate networks for associative memories with first-order and/or higher order interactions among neurons. The suggested technique preserves local interconnections among neurons, making feasible an implementation of such networks. Simulation results and comparisons among different neural architectures are reported to show the applicability of the proposed method.
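
A minimal sketch of the pseudoinverse idea the synthesis exploits, under simplifying assumptions (first-order interactions only, no locality constraint): the projection rule W = P P⁺ makes every stored pattern a fixed point of the threshold dynamics.

```python
# Sketch of the pseudoinverse (projection) design rule, with simplifying
# assumptions: first-order interactions only, no locality constraint.
import numpy as np

P = np.array([[1, -1, 1, -1, 1],
              [1, 1, -1, -1, 1],
              [-1, 1, 1, 1, -1]]).T          # columns are stored patterns
W = P @ np.linalg.pinv(P)                    # projector onto pattern space

def recall(x, steps=5):
    for _ in range(steps):
        x = np.where(W @ x >= 0, 1, -1)      # threshold dynamics
    return x

probe = np.array([1, -1, 1, -1, -1])         # pattern 0 with one bit flipped
print(recall(probe))                         # recovers [ 1 -1  1 -1  1]
```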

37 citations


Journal ArticleDOI
TL;DR: The proposed dynamical Boolean systems analysis is able to formulate necessary and sufficient conditions for network stability which are more general than the well-known but restrictive conditions for the class of single layer networks: symmetric weight matrix with positive diagonal and asynchronous update.
Abstract: Discrete-time/discrete-state recurrent neural networks are analyzed from a dynamical Boolean systems point of view in order to devise new analytic and design methods for the class of both single and multilayer recurrent artificial neural networks. With the proposed dynamical Boolean systems analysis, we are able to formulate necessary and sufficient conditions for network stability which are more general than the well-known but restrictive conditions for the class of single layer networks: (1) symmetric weight matrix with (2) positive diagonal and (3) asynchronous update. In terms of design, we use a dynamical Boolean systems analysis to construct a high performance associative memory. With this Boolean memory, we can guarantee that all fundamental memories are stored, and also guarantee the size of the basin of attraction for each fundamental memory.
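
For reference, a small sketch of the restrictive single-layer conditions the abstract mentions: with (1) a symmetric weight matrix, (2) a positive diagonal, and (3) asynchronous updates, the usual energy function never increases, so the state settles at a fixed point. This is standard Hopfield-style analysis, not the authors' dynamical Boolean-systems method.

```python
# Standard single-layer stability check (not the authors' Boolean-systems
# method): symmetric W, positive diagonal, asynchronous updates imply the
# energy E(s) = -0.5 * s.W.s never increases, so the state settles.
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = rng.standard_normal((n, n))
W = (A + A.T) / 2                                # (1) symmetric weights
np.fill_diagonal(W, np.abs(np.diag(W)) + 0.1)    # (2) positive diagonal

def energy(s):
    return -0.5 * s @ W @ s

s = rng.choice([-1, 1], size=n)
for _ in range(200):                             # (3) asynchronous updates
    i = rng.integers(n)
    s_next = s.copy()
    s_next[i] = 1 if W[i] @ s >= 0 else -1
    assert energy(s_next) <= energy(s) + 1e-12   # energy is non-increasing
    s = s_next
print("settled state:", s)
```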

33 citations


Journal ArticleDOI
TL;DR: This paper develops the statistical dynamics of the SOBAM, a bidirectional associative memory model with second-order connections, and uses the dynamics to estimate the memory capacity, the attraction basin, and the number of errors in the retrieved items.
Abstract: In this paper, a bidirectional associative memory (BAM) model with second-order connections, namely the second-order bidirectional associative memory (SOBAM), is first reviewed. The stability and statistical properties of the SOBAM are then examined. We use an example to illustrate that the stability of the SOBAM is not guaranteed. Because of this, the conventional energy approach cannot be used to estimate its memory capacity. Thus, we develop the statistical dynamics of the SOBAM. Given that a small number of errors appear in the initial input, the dynamics shows how the number of errors varies during recall. We use the dynamics to estimate the memory capacity, the attraction basin, and the number of errors in the retrieved items. Extension of the results to higher-order bidirectional associative memories is also discussed.
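
A sketch of second-order recall consistent with the usual SOBAM correlation encoding (bipolar patterns assumed): the second-order connections T[j,i,i'] = Σ_k y_k[j] x_k[i] x_k[i'] contract against x ⊗ x, which collapses to squared pattern overlaps.

```python
# Second-order BAM recall sketch (standard correlation encoding assumed).
# sum_{i,i'} T[j,i,i'] x[i] x[i'] = sum_k y_k[j] * (x_k . x)**2
import numpy as np

X = np.array([[1, -1, 1, -1, 1, 1],
              [-1, -1, 1, 1, -1, 1],
              [1, 1, -1, 1, -1, -1]])       # stored x-patterns (rows)
Y = np.array([[1, -1, 1, -1],
              [-1, 1, 1, -1],
              [1, 1, -1, 1]])               # paired y-patterns (rows)

def recall_y(x):
    overlaps = X @ x                        # per-pattern overlaps
    s = (Y * (overlaps ** 2)[:, None]).sum(axis=0)
    return np.where(s >= 0, 1, -1)

probe = X[0].copy(); probe[1] *= -1         # one-bit error in pattern 0
print(recall_y(probe), "target:", Y[0])
```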

28 citations


Journal ArticleDOI
TL;DR: A single-instruction-stream, multiple-data-stream (SIMD) parallel processing architecture is developed for the adaptive BAM neural network, taking advantage of the inherent parallelism in BAM.
Abstract: In this paper, emerging parallel/distributed architectures are explored for the digital VLSI implementation of the adaptive bidirectional associative memory (BAM) neural network. A single-instruction-stream, multiple-data-stream (SIMD) parallel processing architecture is developed for the adaptive BAM neural network, taking advantage of the inherent parallelism in BAM. This novel neural processor architecture is named the sliding-feeder BAM array processor (SLiFBAM). The SLiFBAM processor can be viewed as a two-stroke neural processing engine. It has four operating modes: learn pattern, evaluate pattern, read weight, and write weight. The design of a SLiFBAM VLSI processor chip is also described. Using 2-µm scalable CMOS technology, a SLiFBAM processor chip with 4+4 neurons and eight modules of 256×5-bit local weight-storage SRAM was integrated on a 6.9×7.4 mm² prototype die. The system architecture is highly flexible and modular, enabling the construction of larger BAM networks of up to 252 neurons using multiple SLiFBAM chips.

23 citations


Journal ArticleDOI
TL;DR: A design methodology is presented for mapping neurally inspired algorithms for vector quantization into VLSI hardware, using basic building blocks to design an associative processor for bit-pattern classification: a high-density memory-based neuromorphic processor.
Abstract: We present a design methodology for mapping neurally inspired algorithms for vector quantization into VLSI hardware. We describe the building blocks used: memory cells, current conveyors, and translinear circuits. We use these basic building blocks to design an associative processor for bit-pattern classification: a high-density memory-based neuromorphic processor. Operating in parallel, the single-chip system determines the closest match, based on Hamming distance, between an input bit pattern and multiple stored bit templates; ties are broken arbitrarily. Energy-efficient processing is achieved through a precision-on-demand architecture. Scalable storage and processing is achieved through a compact six-transistor static RAM cell/ALU circuit. The single-chip system is programmable for template sets of up to 124 bits per template and can store up to 116 templates (total storage capacity of 14 kbits). An additional 604 bits of auxiliary storage is used for pipelining and fault-tolerance reconfiguration capability. A fully functional 6.8 mm by 6.9 mm chip has been fabricated in a standard single-poly, double-metal 2.0 µm n-well CMOS process.
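
In software, the chip's matching function reduces to the following sketch (template count and width taken from the abstract; the precision-on-demand circuitry is not modeled, and ties are broken by lowest index).

```python
# Software model of the chip's core function: nearest stored template to an
# input bit pattern under Hamming distance. Sizes follow the abstract.
import numpy as np

rng = np.random.default_rng(2)
templates = rng.integers(0, 2, size=(116, 124))   # stored bit templates
query = templates[42].copy()
query[:5] ^= 1                                     # flip five bits

distances = (templates ^ query).sum(axis=1)        # Hamming distances
print("best match:", distances.argmin(), "distance:", distances.min())
```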

21 citations


Journal ArticleDOI
TL;DR: In the recall process of the improved eBAM (IeBAM), the continuity assumption is avoided, and the stability of the system in synchronous and asynchronous modes is proven by defining an energy function that decreases as neuron states change.
Abstract: Based on Jeng's exponential bidirectional associative memory (eBAM), an improved updating rule for eBAMs is presented. In the recall process of the improved eBAM (IeBAM), the continuity assumption of the eBAM is avoided, and the stability of the system in synchronous and asynchronous modes is proven by defining an energy function that decreases as neuron states change. The proposed model greatly improves the performance of the eBAM. Computer simulations demonstrate that the IeBAM has a much higher storage capacity and better error-correcting capability than the eBAM.
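
For context, a sketch of the exponential recall rule that eBAM-type models use, in its commonly stated form (the IeBAM's improved updating rule itself is not reproduced here): each stored pair votes with weight b**(x_k·x), so the best-matching pair dominates exponentially.

```python
# Exponential BAM recall sketch (commonly stated eBAM form; base b > 1 is
# a design parameter). Not the IeBAM's improved rule.
import numpy as np

X = np.array([[1, -1, 1, -1, 1, 1],
              [-1, -1, 1, 1, -1, 1],
              [1, 1, -1, 1, -1, -1]])       # stored x-patterns (rows)
Y = np.array([[1, -1, 1, -1],
              [-1, 1, 1, -1],
              [1, 1, -1, 1]])               # paired y-patterns (rows)
b = 2.0

def recall_y(x):
    votes = b ** (X @ x)                    # exponential evidence per pair
    return np.where(votes @ Y >= 0, 1, -1)

probe = X[0].copy(); probe[0] *= -1         # one-bit error in pattern 0
print(recall_y(probe), "target:", Y[0])
```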

20 citations


Journal ArticleDOI
TL;DR: It is shown here that the XOR function can be implemented in a Hopfield style network using only two hidden neurons.
Abstract: It is well known that a perceptron cannot be used to implement the XOR function but that a feedforward network with some hidden neurons can. The purpose of this work is to show that a Hopfield-style network can also be used to implement the XOR function, using only two hidden neurons.

14 citations


Proceedings Article
01 Dec 1997
TL;DR: A new bidirectional iterative retrieval method for the Willshaw model, called crosswise bidirectional (CB) retrieval, provides enhanced performance; experiments also show the segmentation ability of CB retrieval with addresses containing a superposition of patterns, preserved even at high memory load.
Abstract: Similarity-based fault-tolerant retrieval in neural associative memories (NAM) has not led to widespread applications. A drawback of the efficient Willshaw model for sparse patterns [Ste61, WBLH69] is that its high asymptotic information capacity is of little practical use because of the high crosstalk noise arising in retrieval at finite sizes. Here a new bidirectional iterative retrieval method for the Willshaw model is presented, called crosswise bidirectional (CB) retrieval, providing enhanced performance. We discuss its asymptotic capacity limit, analyze the first step, and compare it in experiments with the Willshaw model. Applying the very efficient CB memory model either in information retrieval systems or as a functional model for reciprocal cortico-cortical pathways requires more than robustness against random noise in the input: our experiments also show the segmentation ability of CB retrieval with addresses containing a superposition of patterns, preserved even at high memory load.
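
A simplified stand-in for bidirectional retrieval in a Willshaw memory (not the authors' exact CB algorithm): clipped Hebbian storage of sparse binary pairs, then alternating thresholded passes through W and its transpose. Sizes and sparsity below are assumptions.

```python
# Simplified bidirectional Willshaw retrieval (stand-in for CB retrieval).
import numpy as np

rng = np.random.default_rng(3)
m, n, pairs, k = 64, 64, 20, 4                   # layer sizes, pairs, sparsity
X = np.zeros((pairs, m), dtype=int)
Y = np.zeros((pairs, n), dtype=int)
for p in range(pairs):                           # k active units per pattern
    X[p, rng.choice(m, k, replace=False)] = 1
    Y[p, rng.choice(n, k, replace=False)] = 1

W = np.clip(Y.T @ X, 0, 1)                       # clipped Hebbian storage

def retrieve(x, iters=3):
    for _ in range(iters):                       # alternate the two directions
        y = (W @ x >= x.sum()).astype(int)       # forward, full threshold
        x = (W.T @ y >= y.sum()).astype(int)     # backward
    return x, y                                  # threshold control is where
                                                 # CB retrieval refines things

probe = X[0].copy()
probe[np.flatnonzero(probe)[0]] = 0              # partial address: drop a bit
x_r, y_r = retrieve(probe)
print("y recovered:", bool(np.array_equal(y_r, Y[0])))
```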

13 citations


Journal ArticleDOI
TL;DR: Simulations show that an approximate weight matrix can be effectively learnt by applying a genetic algorithm, which possesses extensive searching ability; higher storage capacity and satisfactory approximate recall can be realized despite the restriction imposed on the weight matrix.
Abstract: A genetic design approach to learning the weight matrix of a bidirectional associative memory (BAM) is proposed in this paper. This approach removes inadequacies in conventional iterative learning algorithms. A restriction is placed on the weight matrix representation in order to reduce the solution search space. Design procedures are provided in detail. Simulations show that an approximate weight matrix can be effectively learnt by applying a genetic algorithm, which possesses extensive searching ability. With this approach, higher storage capacity and satisfactory approximate recall can be realized despite the restriction imposed on the weight matrix. For the case where all training pattern pairs are storable, the genetic approach is capable of automatically making the attraction basin of each stored pattern pair as large as possible. This is realized by an appropriately defined discrete evaluation index. A larger attraction basin implies higher noise-correction ability of the BAM. However, automatic ad...
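
A toy sketch of the genetic approach (the paper's encoding, operators, and evaluation index differ): evolve an integer-restricted BAM weight matrix so that the stored pairs become fixed points of the bidirectional update.

```python
# Toy GA for BAM weights (illustrative; not the paper's exact design).
import numpy as np

rng = np.random.default_rng(4)
X = rng.choice([-1, 1], size=(4, 8))             # x-patterns (rows)
Y = rng.choice([-1, 1], size=(4, 6))             # paired y-patterns

def fitness(W):
    y_hat = np.where(X @ W >= 0, 1, -1)          # forward recall x -> y
    x_hat = np.where(Y @ W.T >= 0, 1, -1)        # backward recall y -> x
    return int((y_hat == Y).sum() + (x_hat == X).sum())

pop = [rng.integers(-3, 4, size=(8, 6)) for _ in range(30)]
for gen in range(300):
    pop.sort(key=fitness, reverse=True)
    pop = pop[:10]                               # elitist selection
    for _ in range(20):                          # mutated offspring
        child = pop[rng.integers(10)].copy()
        child[rng.integers(8), rng.integers(6)] = rng.integers(-3, 4)
        pop.append(child)

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "/", X.size + Y.size)
```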

Proceedings ArticleDOI
09 Jun 1997
TL;DR: Higher-order neural networks (HONNs) have been proposed as new systems; theoretical results on their associative ability are shown, including a memory capacity estimate.
Abstract: Higher-order neural networks (HONNs) have been proposed as new systems. In this paper, we show some theoretical results on the associative ability of HONNs. The memory capacity of HONNs is much larger than that of conventional neural networks: the capacity of the auto-correlation associative memory is (m choose K)/(2 log m), where m is the number of neurons and K is the order of the connections.
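
A quick numerical reading of the stated capacity estimate (natural logarithm assumed, as in the classical first-order result m/(2 ln m)).

```python
# Evaluate the capacity estimate (m choose K) / (2 ln m) for a few orders.
from math import comb, log

m = 100
for K in (1, 2, 3):
    print(f"m={m}, K={K}: capacity ~ {comb(m, K) / (2 * log(m)):.0f}")
```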

Proceedings ArticleDOI
13 Apr 1997
TL;DR: This paper applies evolutionary computation to Hopfield's neural network model of associative memory, which can be thought of as a test suite of multi-modal and/or multi-objective function optimizations.
Abstract: We apply evolutionary computations to Hopfield's neural network model of associative memory. In the Hopfield model, an almost infinite number of combinations of synaptic weights gives a network an associative memory function. Furthermore, there is a trade-off between the storage capacity and the size of the basin of attraction. Therefore, the model can be thought of as a test suite of multi-modal and/or multi-objective function optimizations. As a preliminary stage, we investigate the basic behavior of an associative memory under simple evolutionary processes. In this paper, we present some experiments using an evolution strategy.

Proceedings ArticleDOI
12 Oct 1997
TL;DR: This paper proposes a method to avoid uninvited memory patterns in cellular neural network associative memories by using associative mapping.
Abstract: The cellular neural network (CNN) proposed by Chua and Yang (1988) is a kind of interconnecting network consisting of regularly arranged units. Various applications of CNN have been reported, such as feature extraction from patterns, extraction of the edges or corners of a figure, noise exclusion, maze searching, and so forth. CNN is also effective as an associative memory when using a noncloning template. The Hopfield network is widely known as a neural network with an associative memory function, but not many images can be registered on account of its restrictions, while in CNN it is possible to embed many images: a 9×9 Hopfield network can store at most 6 to 9 images, whereas a CNN of the same size can store over 30. Although CNN is able to embed many images, some uninvited images are included among the memories. This paper proposes a method to avoid these uninvited memory patterns by using associative mapping.


Proceedings ArticleDOI
09 Jun 1997
TL;DR: Two strategies to improve the performance of the bidirectional associative memory (BAM) are presented and a number of experiments suggest that the new methods present better performance than PRLAB when dealing with noisy input patterns.
Abstract: This paper presents two strategies to improve the performance of the bidirectional associative memory (BAM). The unlearning of spurious attractors (USA-BAM) consists of dissociating any stimulus from an incorrect response. The bidirectional delta rule (BDR-BAM) extends the use of the delta rule to bidirectional BAM operation. These paradigms are based on cognitive assumptions, do not demand pre-processed inputs, train the network quickly, have stable behavior, and present high noise tolerance and abstraction ability. The models are compared with the original BAM and the pseudo-relaxation learning algorithm (PRLAB). A number of experiments suggest that the new methods perform better than PRLAB when dealing with noisy input patterns. The three methods are combined pairwise, and the resulting model USA-BDR-BAM presents the best overall performance.
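
A sketch of a bidirectional delta rule in the spirit of BDR-BAM (the exact update form is an assumption, not necessarily the authors'): delta-rule corrections are applied to both the forward and backward passes of a shared weight matrix.

```python
# Bidirectional delta-rule sketch (assumed form, in the spirit of BDR-BAM).
import numpy as np

rng = np.random.default_rng(5)
X = rng.choice([-1, 1], size=(5, 10))            # stimuli
Y = rng.choice([-1, 1], size=(5, 7))             # responses
W = np.zeros((10, 7))
eta = 0.05

for epoch in range(200):
    for x, y in zip(X, Y):
        y_hat = np.where(x @ W >= 0, 1, -1)
        W += eta * np.outer(x, y - y_hat)        # forward (x -> y) correction
        x_hat = np.where(W @ y >= 0, 1, -1)
        W += eta * np.outer(x - x_hat, y)        # backward (y -> x) correction

fwd = (np.where(X @ W >= 0, 1, -1) == Y).mean()
bwd = (np.where(Y @ W.T >= 0, 1, -1) == X).mean()
print(f"forward bit accuracy: {fwd:.2f}, backward: {bwd:.2f}")
```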

Proceedings ArticleDOI
04 Apr 1997
TL;DR: A fuzzy max-product associative memory network and its learning algorithm are presented; the network possesses strong error tolerance and performs well in computer simulations.
Abstract: A fuzzy max-product associative memory network and its learning algorithm are presented in this paper. The connection weight matrix for the fuzzy max-product auto-associative memory is determined by the generalized fuzzy solution. Each initial state pattern converges to another state of the network via the connection weight matrix in one iteration. The fuzzy max-product associative memory network possesses strong error tolerance. Computer simulations show the good performance of the fuzzy max-product associative memory network and its learning algorithm.
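
A hedged sketch of a fuzzy max-product memory: the "generalized fuzzy solution" is assumed here to be the classical greatest solution given by the Goguen implication, and recall is one max-product composition. For auto-association this reproduces each stored key exactly.

```python
# Fuzzy max-product associative memory sketch (Goguen-implication weights
# assumed as the "generalized fuzzy solution").
import numpy as np

X = np.array([[0.9, 0.3, 0.6],
              [0.2, 0.8, 0.5]])                  # stored fuzzy keys (rows)
Y = X.copy()                                     # auto-association

def goguen(a, b):
    # a -> b under product implication: 1 if a <= b, else b / a
    return np.where(a <= b, 1.0, b / np.maximum(a, 1e-12))

# w[i, j] = min over samples k of (x_k[i] -> y_k[j]): greatest solution
W = goguen(X[:, :, None], Y[:, None, :]).min(axis=0)

def recall(x):
    return (x[:, None] * W).max(axis=0)          # max-product composition

print(recall(X[0]), "target:", Y[0])
```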

Proceedings ArticleDOI
12 Oct 1997
TL;DR: An algorithm called the max-min encoding learning algorithm is presented, together with a gradient-descent method that identifies the connection weights for FMM hetero-associative memory networks; simulation shows the effectiveness of the method.
Abstract: This paper proposes an algorithm, called the max-min encoding learning algorithm, for fuzzy max-multiplication (FMM) associative memory networks. The new method can store all auto-associative memory samples. Based on the max-min encoding, a gradient-descent learning method is presented to identify the connection weights for FMM hetero-associative memory networks. Simulation shows the effectiveness of the method.

Book ChapterDOI
04 Jun 1997
TL;DR: A genetic algorithm using real-encoded chromosomes which successfully evolves over-loaded Hebbian synaptic weights to function as an associative memory is described.
Abstract: We apply evolutionary algorithms to the Hopfield model of associative memory. Previously we reported that a genetic algorithm using ternary chromosomes evolves the Hebb-rule associative memory to enhance its storage capacity by pruning some connections. This paper describes a genetic algorithm using real-encoded chromosomes which successfully evolves over-loaded Hebbian synaptic weights to function as an associative memory. The goal of this study is to shed new light on the analysis of the Hopfield model, which also enables us to use the model as a more challenging test suite for evolutionary computation.

Journal ArticleDOI
01 Jun 1997
TL;DR: Associative-memory neural networks with adaptive weighted outer-product learning are proposed, using a gradient-descent approach to adaptively find the optimal learning weights with respect to a global- or local-error measure.
Abstract: Associative-memory neural networks with adaptive weighted outer-product learning are proposed in this paper. For the correct recall of a fundamental memory (FM), a corresponding learning weight is attached, and a parameter called the signal-to-noise-ratio gain (SNRG) is devised. Sufficient conditions for the learning weights and the SNRGs are derived. It is found both empirically and theoretically that the SNRGs have their own threshold values for correct recall of the corresponding FMs. Based on the gradient-descent approach, several algorithms are constructed to adaptively find the optimal learning weights with respect to a global- or local-error measure.
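
A toy of the weighted outer-product idea (the SNRG thresholds and the paper's exact gradient are not reproduced; the update below is a heuristic stand-in): W = Σ_k λ_k p_k p_kᵀ, with λ_k raised whenever memory k is recalled with insufficient margin.

```python
# Adaptive weighted outer-product learning, heuristic stand-in version.
import numpy as np

rng = np.random.default_rng(6)
P = rng.choice([-1, 1], size=(6, 32))            # fundamental memories (rows)

def build_W(lam):
    W = (P.T * lam) @ P / P.shape[1]             # weighted outer products
    np.fill_diagonal(W, 0.0)
    return W

def unstable_bits(W):
    return int((((P @ W) * P) <= 0).sum())       # bits with h_i * p_i <= 0

lam, eta, margin = np.ones(6), 0.01, 1.0
print("unstable bits before:", unstable_bits(build_W(lam)))
for step in range(500):
    align = (P @ build_W(lam)) * P               # per-bit recall margins
    lam += eta * (align < margin).mean(axis=1)   # boost under-margin memories
print("unstable bits after: ", unstable_bits(build_W(lam)))
```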

01 Jan 1997
TL;DR: This paper proposes a deterministic algorithm for choosing good target states for the hidden layer of a Hebbian pattern associator, and argues that it is critical to examine both increased stability and increased basin size of the attractor around each stored pattern to improve the network’s capacity to recall noisy patterns.
Abstract: Our brains have an extraordinarily large capacity to store and recognize complex patterns after only one or a very few exposures to each item. Existing computational learning algorithms fall short of accounting for these properties of human memory; they either require a great many learning iterations, or they can do one-shot learning but suffer from very poor capacity. In this paper, we explore one approach to improving the capacity of simple Hebbian pattern associators: adding hidden units. We propose a deterministic algorithm for choosing good target states for the hidden layer. In assessing the performance of the model, we argue that it is critical to examine both increased stability and increased basin size of the attractor around each stored pattern. Our algorithm achieves both, thereby improving the network's capacity to recall noisy patterns. Further, the hidden layer helps to cushion the network from interference effects as the memory is overloaded. Another technique, almost as effective, is to "soft-clamp" the input layer during retrieval. Finally, we discuss other approaches to improving memory capacity, as well as the relation between our model and extant models of the hippocampal system.
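
A toy version of the hidden-layer idea. The paper proposes a deterministic algorithm for choosing hidden target states; here random bipolar codes are a stand-in, with both layers trained by one-shot Hebbian outer products.

```python
# Hebbian pattern associator with a hidden layer (random hidden targets
# stand in for the paper's deterministic target-selection algorithm).
import numpy as np

rng = np.random.default_rng(8)
patterns = rng.choice([-1, 1], size=(10, 40))    # items to store
hidden = rng.choice([-1, 1], size=(10, 60))      # stand-in hidden targets

W_vh = patterns.T @ hidden                       # visible -> hidden, one-shot
W_hv = hidden.T @ patterns                       # hidden -> visible, one-shot

def recall(x):
    h = np.where(x @ W_vh >= 0, 1, -1)           # code the probe
    return np.where(h @ W_hv >= 0, 1, -1)        # decode back to a pattern

noisy = patterns[0] * np.where(rng.random(40) < 0.1, -1, 1)
print("bit accuracy:", (recall(noisy) == patterns[0]).mean())
```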

Book ChapterDOI
TL;DR: An evolution of the Hopfield model of associative memory is presented using evolutionary programming as a real-valued parameter optimization, and it is shown that a network with random synaptic weights eventually evolves to store some number of patterns as fixed points.
Abstract: We apply evolutionary computation to the Hopfield model of associative memory. Although there has been much research applying evolutionary techniques to layered neural networks, applications to Hopfield neural networks remain few. Previously we reported that a genetic algorithm using discrete-encoding chromosomes evolves the Hebb-rule associative memory to enhance its storage capacity. We also reported that the genetic algorithm evolves a network with random synaptic weights to eventually store some number of patterns as fixed points. In this paper we present an evolution of the Hopfield model of associative memory using evolutionary programming as a real-valued parameter optimization.

Journal ArticleDOI
TL;DR: A mathematical model of the simplex memory neural network is constructed that can memorize any binary pattern with a content-addressable memory function and that has some important properties in accord with the learning and memory behavior of the brain.

Journal ArticleDOI
01 Jun 1997
TL;DR: This paper investigates some important properties of bidirectional associative memories (BAM) and proposes an improved capacity estimate and an implementation approach to improve the storage capacity.
Abstract: Two issues are addressed in this paper. First, it investigates some important properties of bidirectional associative memories (BAM) and proposes an improved capacity estimate. Those properties are the encoding form of the input pattern pairs as well as their decoding, the orthogonality of the pattern pairs, the similarity of associated patterns, and the density of the pattern pairs. Second, it proposes an implementation approach to improve the storage capacity. The approach embraces three proposed methods, i.e., bipolar-orthogonal augmentation, set partition, and a combined method, along with the construction of the set of bipolar orthogonal patterns.
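
One plausible reading of the bipolar-orthogonal augmentation method, sketched with Hadamard rows (an assumption; the paper's construction may differ): mutually orthogonal appendages contribute zero cross-talk between augmented patterns under the correlation encoding.

```python
# Bipolar-orthogonal augmentation sketch (Hadamard rows as the orthogonal
# bipolar codes; assumed reading of the method).
import numpy as np

H = np.array([[1]])
for _ in range(3):                               # Sylvester construction, 8x8
    H = np.block([[H, H], [H, -H]])

X = np.array([[1, -1, 1, -1],
              [1, 1, -1, -1],
              [-1, 1, 1, -1]])
Y = np.array([[1, 1, -1],
              [-1, 1, 1],
              [1, -1, 1]])
Xa = np.hstack([X, H[:3]])                       # augmented x-patterns
W = Xa.T @ Y                                     # BAM correlation encoding

def recall_y(x_aug):
    return np.where(x_aug @ W >= 0, 1, -1)

for k in range(3):                               # each pair recalls cleanly
    print(recall_y(Xa[k]), "target:", Y[k])
```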

Journal ArticleDOI
TL;DR: Conditions for the existence of equilibrium points and for global stability are discussed for bidirectional associative memory (BAM) models with axonal signal transmission delays, using methods that are more general than previous ones.
Abstract: In this paper, conditions for the existence of equilibrium points and for global stability are discussed in depth for bidirectional associative memory (BAM) models with axonal signal transmission delays; the methods used are more general than previous ones. The correctness of the conclusions is verified through examples. The results are of basic significance for the design and application of BAMs.
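
A numerical sketch (simple Euler integration, illustrative parameters) of a delayed BAM of the type analyzed above. With small enough connection gains the state settles to an equilibrium regardless of the delay, as the global-stability conditions suggest; the specific model form and constants here are assumptions.

```python
# Delayed BAM simulation sketch:
#   dx/dt = -x + A tanh(y(t - tau)) + I
#   dy/dt = -y + B tanh(x(t - tau)) + J
import numpy as np

rng = np.random.default_rng(9)
n, m, tau, dt, steps = 4, 3, 0.5, 0.01, 5000
A = 0.2 * rng.standard_normal((n, m))            # small gains -> contraction
B = 0.2 * rng.standard_normal((m, n))
I, J = rng.standard_normal(n), rng.standard_normal(m)

lag = int(tau / dt)                              # delay in integration steps
xs = np.zeros((steps + lag + 1, n))
ys = np.zeros((steps + lag + 1, m))
for t in range(lag, steps + lag):
    xs[t + 1] = xs[t] + dt * (-xs[t] + A @ np.tanh(ys[t - lag]) + I)
    ys[t + 1] = ys[t] + dt * (-ys[t] + B @ np.tanh(xs[t - lag]) + J)

print("x equilibrium:", xs[-1].round(3))
print("drift over last step:", np.abs(xs[-1] - xs[-2]).max())
```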

Proceedings ArticleDOI
09 Jun 1997
TL;DR: It is shown that if the desired memory matrix accepts a suitable overlapping decomposition, then the problem can be solved by synthesizing a number of smaller networks independently.
Abstract: This paper is concerned with the design of neural networks to be used as associative memories. The idea of overlapping decompositions, which is extensively used in the solution of large-scale problems as a method of reducing the computational work, is applied to discrete-time neural networks with binary neurons. It is shown that if the desired memory matrix accepts a suitable overlapping decomposition, then the problem can be solved by synthesizing a number of smaller networks independently. The concept is illustrated with two examples.

Proceedings ArticleDOI
07 Jul 1997
TL;DR: This work considers the Boolean model of associative memory using neural nets to obtain exchange relation between the model parameters and the size of stored information (the memory capacity).
Abstract: We consider the Boolean model of associative memory using neural nets. Previous results from superimposed code theory are applied to obtain an exchange relation between the model parameters and the size of the stored information (the memory capacity).

Journal ArticleDOI
TL;DR: This article analyzes the storage behavior of bidirectional associative memory (BAM) under forgetting learning, asking: can the most recent k learning items be stored as fixed points?
Abstract: Forgetting learning is an incremental learning rule for associative memories. With it, recent learning items can be encoded, while old learning items are forgotten. In this article, we analyze the storage behavior of bidirectional associative memory (BAM) under forgetting learning; that is, can the most recent k learning items be stored as fixed points? We also discuss how to choose the forgetting constant so that the BAM correctly stores as many of the most recent learning items as possible. Simulation is provided to verify the theoretical analysis.
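
A sketch of forgetting learning in one common form (the decay-then-encode update is assumed; the paper's analysis concerns this kind of rule): the correlation matrix is attenuated by a forgetting constant before each new pair is encoded, so old items fade while recent ones stay retrievable.

```python
# Forgetting learning sketch: W <- alpha * W + outer(x, y) per new pair.
import numpy as np

rng = np.random.default_rng(7)
n_x, n_y, alpha = 16, 12, 0.8                    # alpha: forgetting constant
W = np.zeros((n_x, n_y))
history = []

for t in range(30):                              # a stream of learning items
    x = rng.choice([-1, 1], size=n_x)
    y = rng.choice([-1, 1], size=n_y)
    W = alpha * W + np.outer(x, y)               # forget, then encode
    history.append((x, y))

# Check which of the most recent items survive as forward fixed points.
for age, (x, y) in enumerate(reversed(history[-8:])):
    ok = np.array_equal(np.where(x @ W >= 0, 1, -1), y)
    print(f"item {age} steps old: stored = {ok}")
```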

Book ChapterDOI
01 Jan 1997
TL;DR: This chapter presents some novel neural algorithms based on fuzzy δ rules for max-min operator networks and demonstrates that these algorithms can also be extended to max-times operator networks.
Abstract: When using neural networks to solve fuzzy relation equations in a given system, the best learning rate sometimes cannot be decided easily, and strict theoretical analyses of the convergence of the algorithms have not been given. To overcome these problems, we present in this chapter some novel neural algorithms based on fuzzy δ rules. We first describe such algorithms for max-min operator networks, then demonstrate that they can also be extended to max-times operator networks. Important results include some improved fuzzy δ rules, a convergence theorem, and an equivalence theorem which reflects that fuzzy theory and neural networks can reach the same goal by different routes. We also discuss the fuzzy bidirectional associative memory network and its training algorithms, and prove all important theorems, with additional simulation and comparison results.
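
The chapter's δ-rule algorithms train max-min networks to solve fuzzy relation equations X ∘ W = Y. As a reference point, this sketch computes the classical greatest solution via the Gödel implication instead of iterative learning (the δ-rule updates themselves are not reproduced here).

```python
# Max-min fuzzy relation equation: greatest solution via Goedel implication.
import numpy as np

X = np.array([[0.8, 0.4, 0.7],
              [0.3, 0.9, 0.5]])                  # input fuzzy sets (rows)
Y = np.array([[0.7, 0.4],
              [0.5, 0.9]])                       # target fuzzy sets (rows)

def goedel(a, b):
    return np.where(a <= b, 1.0, b)              # a -> b under min implication

# w[i, j] = min over samples k of (x_k[i] -> y_k[j]): greatest solution
W = goedel(X[:, :, None], Y[:, None, :]).min(axis=0)

def maxmin(x, W):
    return np.minimum(x[:, None], W).max(axis=0) # max-min composition

for x, y in zip(X, Y):
    print(maxmin(x, W), "target:", y)
```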