
Showing papers on "Bidirectional associative memory" published in 1994


Journal ArticleDOI
TL;DR: It is shown that if the neuronal gains are small compared with the synaptic connection weights, then a bidirectional associative memory network with axonal signal transmission delays converges to the equilibria associated with exogenous inputs to the network.
Abstract: It is shown that if the neuronal gains are small compared with the synaptic connection weights, then a bidirectional associative memory network with axonal signal transmission delays converges to the equilibria associated with exogenous inputs to the network. Both discrete and continuously distributed delays are considered; the asymptotic stability is global in the state space of neuronal activations and is independent of the delays.

554 citations
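For orientation, the delayed BAM analyzed in this line of work is commonly written as the following system (the notation below is illustrative rather than copied from the paper): x_i and y_j are the activations of the two layers, a_ij and b_ji the synaptic connection weights, f a sigmoidal activation whose maximal slope is the neuronal gain, I_i and J_j the exogenous inputs, and tau, sigma the axonal transmission delays.

```latex
\dot{x}_i(t) = -x_i(t) + \sum_{j=1}^{m} a_{ij}\, f\big(y_j(t-\tau_{ij})\big) + I_i,
\qquad
\dot{y}_j(t) = -y_j(t) + \sum_{i=1}^{n} b_{ji}\, f\big(x_i(t-\sigma_{ji})\big) + J_j.
```

The "small gain" hypothesis makes the delayed feedback a contraction (roughly, the gain times the weight magnitudes stays below the decay rate), which is why the resulting global asymptotic stability can hold independently of the delay values.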


Journal ArticleDOI
TL;DR: An iterative learning algorithm called PRLAB is described for the discrete bidirectional associative memory (BAM); it is a novel adaptation of the well-known relaxation method for solving a system of linear inequalities and offers high scalability for large applications.
Abstract: An iterative learning algorithm called PRLAB is described for the discrete bidirectional associative memory (BAM). PRLAB guarantees recall of all training pairs. The proposed algorithm is significant in many ways. Unlike many existing iterative learning algorithms, PRLAB is not based on the gradient-descent technique; it is a novel adaptation of the well-known relaxation method for solving a system of linear inequalities. The algorithm is very fast: learning 200 random patterns in a 200-200 BAM takes only 20 epochs on average. PRLAB is highly insensitive to learning parameters and to the initial configuration of a BAM. It also offers high scalability for large applications, providing the same high performance when the number of training patterns is increased in proportion to the size of the BAM. An extensive performance analysis of the new learning algorithm is included.

85 citations
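For readers who want the flavor of the method, below is a minimal sketch of a pseudo-relaxation update for a bipolar BAM, in the spirit of PRLAB; the margin xi, relaxation factor lam, and random initialization are assumptions of this sketch, not the paper's published parameters.

```python
import numpy as np

def prlab_train(X, Y, xi=1.0, lam=1.5, max_epochs=100, seed=0):
    """Pseudo-relaxation training for a bipolar BAM (illustrative).

    X: (p, n) and Y: (p, m) arrays of +/-1 pattern pairs.  Whenever a
    recall inequality  y_j * (W x)_j >= xi  (or its backward
    counterpart) is violated, the offending weight row or column is
    relaxed toward the violated halfspace, as in the classical
    relaxation method for systems of linear inequalities.
    """
    p, n = X.shape
    m = Y.shape[1]
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(m, n))   # PRLAB is insensitive to this choice
    for _ in range(max_epochs):
        satisfied = True
        for x, y in zip(X, Y):
            net = W @ x                       # forward recall of y
            viol = y * net < xi
            if viol.any():
                satisfied = False
                c = lam * (xi - y[viol] * net[viol]) / (x @ x)
                W[viol] += (c * y[viol])[:, None] * x[None, :]
            net = W.T @ y                     # backward recall of x
            viol = x * net < xi
            if viol.any():
                satisfied = False
                c = lam * (xi - x[viol] * net[viol]) / (y @ y)
                W[:, viol] += y[:, None] * (c * x[viol])[None, :]
        if satisfied:                         # every pair meets the margin
            break
    return W
```

As in the classical relaxation method, a factor lam in (0, 2) moves each violated inequality partway (lam < 1), exactly (lam = 1), or past (lam > 1) its bounding hyperplane.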


Journal ArticleDOI
TL;DR: A new modification of the BAM is made and a new model named asymmetric bidirectional associative memory (ABAM) is proposed, which not only caters for the logical asymmetry of interconnections but also accommodates a larger number of non-orthogonal patterns.
Abstract: Bidirectional associative memory (BAM) is a potentially promising model for heteroassociative memories. However, its applications are severely restricted to networks with logical symmetry of interconnections and either pattern orthogonality or small pattern size. Although the restrictions on pattern orthogonality and pattern size can be relaxed to a certain extent, all previous efforts come at the cost of increased connection complexity. In this paper, a new modification of the BAM is made and a new model named asymmetric bidirectional associative memory (ABAM) is proposed. This model not only caters for the logical asymmetry of interconnections but also accommodates a larger number of non-orthogonal patterns. Furthermore, all these properties of the ABAM are achieved without increasing the connection complexity of the network. Theoretical analysis and simulation results demonstrate that the ABAM outperforms the BAM and its existing variants in storage capacity, error-correcting capability, and convergence.

69 citations
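The ABAM's exact asymmetric rule is in the paper itself; as a baseline for comparison, here is a sketch of the standard Kosko BAM it modifies, with the "logical symmetry of interconnections" visible in the use of W forward and W^T backward.

```python
import numpy as np

def kosko_bam(X, Y):
    """Kosko's correlation (outer-product) encoding: W = sum_k y_k x_k^T.

    X: (p, n) and Y: (p, m) bipolar (+/-1) pattern pairs.
    """
    return Y.T @ X                          # shape (m, n)

def bam_recall(W, x, steps=20):
    """Bidirectional iteration with W forward and W^T backward --
    the symmetric scheme that ABAM relaxes."""
    sgn = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(steps):
        y = sgn(W @ x)
        x_new = sgn(W.T @ y)
        if np.array_equal(x_new, x):        # stable pair reached
            break
        x = x_new
    return x, y
```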




Journal ArticleDOI
TL;DR: Theoretical and experimental results show that the performance of the proposed learning scheme depends on the way the graphs are represented in the training set, and the representations developed for the pointers appear robust to recurrent decoding along a cycle.
Abstract: In this paper, we propose an extension to the recursive auto-associative memory (RAAM) by Pollack. This extension, the labelling RAAM (LRAAM), can encode labelled graphs with cycles by representing pointers explicitly. Some technical problems encountered in the RAAM, such as the termination problem in the learning and decoding processes, are solved more naturally in the LRAAM framework. The representations developed for the pointers appear robust to recurrent decoding along a cycle. Theoretical and experimental results show that the performance of the proposed learning scheme depends on the way the graphs are represented in the training set. Critical features for the representation are cycles and confluent pointers. Data encoded in an LRAAM can be accessed by a pointer as well as by content. Direct access by content can be achieved by transforming the encoder network of the LRAAM into a particular bidirectional associative memory (BAM). Statistics performed on different instances of LRAAM show...

42 citations


Journal ArticleDOI
TL;DR: The robust capacity conditions of this multilayer associative neural network that lead to forming the local minima of the energy function at the exact training pairs are derived; the chosen strategy not only maximizes the total number of stored images but also completely relaxes any code-dependent conditions on the learning pairs.
Abstract: The objective of this paper is to resolve important issues in artificial neural nets: exact recall and capacity in multilayer associative memories. These problems have imposed restrictions on coding strategies. We propose the following triple-layered hybrid neural network: the first synapse is a one-shot associative memory using the modified Kohonen adaptive learning algorithm with arbitrary input patterns; the second is Kosko's bidirectional associative memory consisting of orthogonal input/output basis vectors, such as Walsh series satisfying the strict continuity condition; and the third is a simple one-shot associative memory with arbitrary output images. A mathematical framework based on the relationship between energy local minima (capacity of the neural net) and noise-free recall is established. The robust capacity conditions of this multilayer associative neural network that lead to forming the local minima of the energy function at the exact training pairs are derived. The chosen strategy not only maximizes the total number of stored images but also completely relaxes any code-dependent conditions on the learning pairs.

39 citations


Proceedings ArticleDOI
27 Jun 1994
TL;DR: Under a certain condition, the proposed rule can efficiently encode multiple fuzzy pattern pairs in a single FAM and perfect association of these pairs can be achieved.
Abstract: In this paper, a learning rule for multiple pattern pairs in fuzzy associative memories (FAMs) with max-min composition units is presented. Under a certain condition, the proposed rule can efficiently encode multiple fuzzy pattern pairs in a single FAM, and perfect association of these pairs can be achieved. The correctness of the proposed rule is proved and illustrative examples are given.

36 citations
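The paper's specific rule is not reproduced here; the sketch below shows the classical Goedel-implication encoding for max-min fuzzy relational equations, which likewise stores multiple fuzzy pairs in a single matrix whenever the system of max-min equations is jointly solvable (a condition plausibly related in spirit to the paper's "certain condition").

```python
import numpy as np

def goedel_encode(X, Y):
    """Encode fuzzy pattern pairs into one relation matrix W.

    X: (p, n) and Y: (p, m) with entries in [0, 1].
    W[i, j] = min over pairs k of (X[k, i] -> Y[k, j]), where
    a -> b is the Goedel implication: 1 if a <= b, else b.
    This W is the largest solution of the max-min relational
    equations when any solution exists (Sanchez's classical result).
    """
    imp = np.where(X[:, :, None] <= Y[:, None, :], 1.0, Y[:, None, :])
    return imp.min(axis=0)                          # shape (n, m)

def maxmin_recall(x, W):
    """Max-min composition: y_j = max_i min(x_i, W[i, j])."""
    return np.minimum(x[:, None], W).max(axis=0)
```

When the pairs are jointly realizable, maxmin_recall(X[k], W) reproduces Y[k] exactly for every stored pair k.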


Journal ArticleDOI
TL;DR: By modifying the proof of convergence of the perceptron, the author has proved that BL yields one of the solution connection matrices within a finite number of iterations (if the solutions exist).
Abstract: Borrowing the idea of the perceptron, bidirectional learning (BL) is proposed to enhance the recall performance of bidirectional associative memory (BAM). By modifying the proof of convergence of the perceptron, the author proves that BL yields one of the solution connection matrices within a finite number of iterations (if solutions exist). Given this convergence, the capacity of BAM with BL is larger than or equal to that with any other learning rule; hence BL can be considered an optimal learning rule for BAM in the sense of capacity. Simulations show that BL greatly improves the capacity and the error-correction capability of BAM.

34 citations
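A hedged sketch of the perceptron-borrowed idea (a reconstruction, not the paper's exact update): any bit recalled incorrectly in either direction triggers a Hebbian correction on the corresponding rows or columns, and a perceptron-style convergence argument then bounds the number of corrections whenever a solution matrix exists.

```python
import numpy as np

def bidirectional_learning(X, Y, max_epochs=1000):
    """Perceptron-style bidirectional learning for a bipolar BAM.

    X: (p, n) and Y: (p, m) arrays of +/-1 pattern pairs.  Any bit
    wrong in the forward pass (sign(Wx) != y) triggers a perceptron
    correction W_j += y_j * x on that row; the backward pass is
    handled symmetrically on the columns.
    """
    p, n = X.shape
    m = Y.shape[1]
    W = np.zeros((m, n))
    for _ in range(max_epochs):
        converged = True
        for x, y in zip(X, Y):
            wrong = y * (W @ x) <= 0               # forward errors
            if wrong.any():
                converged = False
                W[wrong] += y[wrong][:, None] * x[None, :]
            wrong = x * (W.T @ y) <= 0             # backward errors
            if wrong.any():
                converged = False
                W[:, wrong] += y[:, None] * x[wrong][None, :]
        if converged:
            return W                               # every pair is a fixed point
    return W
```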


Journal ArticleDOI
TL;DR: A learning algorithm is proposed and it is shown that it enables perfect learning provided the training set forms a consistent conceptual structure and the set of all stable points forms a complete lattice.

29 citations


Journal ArticleDOI
TL;DR: An artificial neural network, Dystal (dynamically stable associative learning), is constructed that utilizes non-Hebbian learning rules and displays a number of useful properties, including self-organization; monotonic convergence; large storage capacity without saturation; computational complexity of O(N); and the ability to learn, store, and recall associations among arbitrary, noisy patterns after four to eight training epochs.

Journal ArticleDOI
TL;DR: A novel artificial neural network, an optimal associative memory (OAM) that is an enhanced bidirectional associative memory (BAM), has been devised and is evaluated for the background correction of single-scan infrared (IR) spectra.
Abstract: A novel artificial neural network has been devised and is evaluated for the background correction of single-scan infrared (IR) spectra. An optimal associative memory (OAM) is an enhanced bidirectional associative memory (BAM). Factoring the weight matrix allows OAMs to be used with high-resolution data on a desktop computer. IR spectroscopy provides a rigorous and practical challenge for evaluating background correction. IR single-scan background spectra are stored in the associative memory

Proceedings ArticleDOI
27 Jun 1994
TL;DR: The proposed episodic associative memory can memorize and recall episodic associations; it can store plural episodes; it has high memory capacity.
Abstract: Episodic associative memory (EAM) is introduced and simulated. It uses quick learning for bidirectional associative memory (QLBAM) and pseudo-noise (PN) sequences. The features of the proposed EAM are: it can memorize and recall episodic associations; it can store plural episodes; and it has high memory capacity.

Proceedings ArticleDOI
02 Oct 1994
TL;DR: A neural network-based associative memory for storing complex patterns is proposed in discrete and continuous variants, where the continuous model approaches the discrete one as a limit, and a crude capacity estimate for the discrete model is made.
Abstract: A neural network-based associative memory for storing complex patterns is proposed. Two variations of the model are proposed: 1) a discrete model, and 2) a continuous model. The latter approaches the former as a limit. A crude capacity estimate for the discrete model is made. Network weights can be calculated in one step using a complex outer-product rule or can be adjusted adaptively using a Hebbian learning rule. Possible biological significance of the complex neuron state is briefly discussed.
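A sketch of complex outer-product storage under stated assumptions: patterns are phasor vectors with entries on the K-th roots of unity, weights use the Hermitian outer product, and recall quantizes each state to the nearest allowed phase (the paper's exact quantization and normalization may differ).

```python
import numpy as np

def csign(u, K):
    """Quantize each complex value to the nearest K-th root of unity."""
    k = np.round(np.angle(u) * K / (2 * np.pi)) % K
    return np.exp(2j * np.pi * k / K)

def store(patterns):
    """Complex outer-product rule: W = (1/n) sum_k z_k z_k^H (zero diagonal)."""
    Z = np.asarray(patterns, dtype=complex)     # (p, n) phasor patterns
    W = Z.T @ Z.conj() / Z.shape[1]
    np.fill_diagonal(W, 0)                      # no self-coupling (a common choice)
    return W

def recall(W, s, K, steps=20):
    """Iterate the quantized update until a fixed point (or step limit)."""
    for _ in range(steps):
        s_new = csign(W @ s, K)
        if np.array_equal(s_new, s):
            break
        s = s_new
    return s
```

With K = 2 this reduces to the ordinary bipolar Hopfield/BAM outer-product rule, which is one way to see the discrete model as a special case.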

Book ChapterDOI
01 Jan 1994
TL;DR: This paper describes current research work in the application of neural networks to conditional spatial simulation, and an example simulation using a feedforward error back-propagation network is shown.
Abstract: This paper describes current research work in the application of neural networks to conditional spatial simulation. A number of possible approaches are suggested and an example simulation using a feedforward error back-propagation network is shown. The advantages and disadvantages of the neural network approach are discussed and some possible directions for further work are suggested.

Proceedings ArticleDOI
03 Aug 1994
TL;DR: In this work, a new class of space-varying cellular neural networks with a nonsymmetric interconnection structure is considered and a learning algorithm, based on the relaxation method, is used to compute the feedback parameters of the considered network.
Abstract: In this work, the design of a space-varying cellular neural network (CNN) that behaves as an associative memory is presented. To this purpose, a new class of space-varying cellular neural networks with a nonsymmetric interconnection structure is considered. A stability analysis is first carried out. Then, a learning algorithm based on the relaxation method is used to compute the feedback parameters of the network. Simulation tests are reported to confirm the validity of the suggested approach.

Proceedings ArticleDOI
01 Dec 1994
TL;DR: The QLBAM is insensitive to correlation of training pairs; it is robust to noisy inputs; the minimum absolute value of the net inputs indexes a noise margin; and the memory capacity is greatly improved: the maximum capacity in the simulation is about 2.2N.
Abstract: Several important characteristics of Quick Learning for Bidirectional Associative Memory (QLBAM) are introduced. QLBAM uses two-stage learning: in the first stage, the BAM is trained by Hebbian learning, and then by the Pseudo-Relaxation Learning Algorithm for BAM (PRLAB). The following features of the QLBAM are made clear: it is insensitive to correlation of training pairs; it is robust to noisy inputs; the minimum absolute value of the net inputs indexes a noise margin; and the memory capacity is greatly improved: the maximum capacity in our simulation is about 2.2N.

Proceedings ArticleDOI
27 Jun 1994
TL;DR: The capacity of the high-order Hopfield model proposed by Psaltis, Park and Hong (1988) is rigorously determined, and an s-order polynomial approximation of the projection rule is introduced whose storage capacity is proved to be higher than that of the Hopfield associative memory of the same implementation complexity.
Abstract: It is well known that the second-order Hopfield associative memory has storage capacity of order O(n/log n). This result is proved under the assumption that the stored vectors and the probe vector are subject to uniform distributions. Unfortunately, this is not always the case in practice. We prove that the capacity drops to order zero when the stored vectors and probe vector have nonuniform distributions; it is therefore necessary to explore the influence of these distributions on the capacity. To improve the capacity of associative memory, the high-order Hopfield model was proposed by Psaltis, Park and Hong (1988), and its capacity is rigorously determined in this paper. As an alternative to the Hopfield associative memory, we introduce an s-order polynomial approximation of the projection rule and prove that its storage capacity is higher than that of the Hopfield associative memory with the same implementation complexity.
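For orientation, the classical scaling results from the wider literature (not this paper's own derivations) read roughly as follows, where n is the number of neurons and s the interaction order:

```latex
C_2 \sim \frac{n}{4\ln n}
\quad \text{(second-order Hopfield, all stored vectors exact fixed points w.h.p.; McEliece et al., 1987)},
\qquad
C_s = \Theta\!\left(\frac{n^{s}}{\ln n}\right)
\quad \text{(order-$s$ polynomial interactions; Baldi and Venkatesh, 1987)}.
```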

Journal ArticleDOI
TL;DR: A stochastic analogue of the bidirectional associative memories introduced by B. Kosko is investigated by statistical-mechanical methods, and its storage capacity is derived as a function of the total number of synapses and of the asymmetry of the network.
Abstract: We investigate by statistical-mechanical methods a stochastic analogue of the bidirectional associative memories introduced by B. Kosko. We derive its storage capacity as a function of the total number of synapses and of the asymmetry of the network.

Posted Content
TL;DR: This lecture presents some models of neural networks developed in recent years that work as associative memories, together with a comparison against some of the experiments done on real mammalian brains.
Abstract: In this lecture I will present some models of neural networks that have been developed in recent years. The aim is to construct neural networks which work as associative memories. Different attractors of the network will be identified as different internal representations of different objects. At the end of the lecture I will present a comparison between the theoretical results and some of the experiments done on real mammalian brains.

Journal ArticleDOI
TL;DR: This work extends conventional neural network architectures by introducing additional dynamical variables, assigning a so-called phase to each formal neuron, and presents an associative memory that is actually capable of forming hypotheses of classification.
Abstract: Nonlinear associative memories as realized, e.g., by Hopfield nets are characterized by attractor-type dynamics. When fed with a starting pattern, they converge to exactly one of the stored patterns, which is supposed to be most similar. These systems cannot render hypotheses of classification, i.e., render several possible answers to a given classification problem. Inspired by von der Malsburg's correlation theory of brain function, we extend conventional neural network architectures by introducing additional dynamical variables. Assuming an oscillatory time structure of neural firing, i.e., the existence of neural clocks, we assign a so-called phase to each formal neuron. The phases explicitly describe detailed correlations of neural activities neglected in conventional neural network architectures. Implementing this extension in a simple self-organizing network based on a feature map, we present an associative memory that is actually capable of forming hypotheses of classification.

Journal ArticleDOI
TL;DR: In this article, the associative memory in recurrent neural networks is investigated based on the model of evolving neural networks proposed by Nolfi, Miglino and Parisi; the experimentally developed network has highly asymmetric synaptic weights and dilute connections, quite different from those of the Hopfield model.
Abstract: In this paper, we investigate the associative memory in recurrent neural networks, based on the model of evolving neural networks proposed by Nolfi, Miglino and Parisi. The experimentally developed network has highly asymmetric synaptic weights and dilute connections, quite different from those of the Hopfield model. Some results on the effect of learning efficiency on the evolution are also presented.

Journal ArticleDOI
TL;DR: The UBBAM model, the SASLM1 smart pixels, and a proposed optical system are described in this paper; a matrix of smart pixels forms powerful parallel processing hardware, significant for representing neuron fields in real time with minimal off-chip control and processing.
Abstract: A unipolar binary bidirectional associative memory (UBBAM) has been designed to associate input and output vectors in the manner of a content-addressable memory. The neural network thus formed does not require inhibitory neurons and is well suited for optoelectronic implementation. The performance of the UBBAM has been shown to match that of the original BAM. The key device in the implementation of the UBBAM is a Smart Advanced Spatial Light Modulator (SASLM1), a ferroelectric liquid crystal (FELC) SLM driven by a silicon CMOS backplane. The smart pixel of the SASLM serves as a local processing unit which sets the state of its FELC modulator according to some function of the optical signals incident upon the pixel's photodetector. The action of each smart pixel is hence akin to a neuron's soma, and a matrix of such smart pixels therefore forms powerful parallel processing hardware, significant in representing neuron fields, in real time, with minimal off-chip control and processing. The UBBAM model, th...

Book ChapterDOI
26 May 1994
TL;DR: Bidirectional Associative OrthoGonal Memory (BAOGM) goes beyond the BAM capacity and uses proper filters and orthogonality to increase the storage capacity and reduce the noise effect arising from linear dependences between patterns.
Abstract: Hopfield (Hopfield, 1984a and 1984b) introduced a first model of one-layer autoassociative memory. The Bidirectional Associative Memory (BAM) was proposed by Kosko (Kosko, 1988a) and generalizes the model to be bidirectional and heteroassociative. The BAMs have storage capacity limitations (Wang, 1990a). Several improvements have been proposed: Adaptive Bidirectional Associative Memories (Kosko, 1988b), multiple training (Wang, 1990a and Wang, 1990b), guaranteed recall (Wang, 1991), and many others. One-step models without iteration have been developed too: Orthonormalized Associative Memories (MAON) (Garcia, 1992) and Hao's associative memory (Hao, 1992), which uses a hidden layer. In this paper, we propose a new model of associative memory which can be used either bidirectionally or in one-step mode. This model uses a hidden layer, proper filters, and orthogonality to increase the storage capacity and reduce the noise effect arising from linear dependences between patterns. Our model, which we call Bidirectional Associative OrthoGonal Memory (BAOGM), goes beyond the BAM capacity. The BAM and MAON models are particular cases of it.


Proceedings ArticleDOI
27 Jun 1994
TL;DR: A pattern pair training algorithm is proposed to ensure every trained pair stored in the eBAM will be recalled when an input pair is located in its basin with radius r.
Abstract: Due to the limited dynamic range of VLSI circuits in the implementation of the exponential bidirectional associative memory (eBAM), the optimal radix must be found in order to obtain the maximal dimension for the eBAM. It is also necessary to update the radix when new pattern pairs are to be stored; an optimal updating algorithm for the radix is also presented. In addition, a pattern-pair training algorithm is proposed to ensure that every trained pair stored in the eBAM will be recalled when an input pair is located in its basin with radius r.
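For context, eBAM recall weights each stored pair exponentially in its correlation with the probe, so the radix b directly trades selectivity against the circuit's dynamic range. A minimal sketch (the default b and the overflow normalization are choices of this sketch, not the paper's):

```python
import numpy as np

def ebam_recall(x, X, Y, b=2.0, steps=10):
    """Exponential BAM recall (sketch; b is the radix).

    X: (p, n) and Y: (p, m) stored bipolar pairs, x: (n,) probe.
    Each stored pair is weighted by b**(correlation with the probe),
    so a larger radix sharpens selection of the nearest pair -- and a
    larger radix is exactly what the circuit's dynamic range limits.
    """
    sgn = lambda v: np.where(v >= 0, 1.0, -1.0)
    x = x.astype(float)
    for _ in range(steps):
        e = X @ x
        y = sgn(Y.T @ b ** (e - e.max()))    # subtract max to avoid overflow
        e = Y @ y
        x_new = sgn(X.T @ b ** (e - e.max()))
        if np.array_equal(x_new, x):         # stable pair reached
            break
        x = x_new
    return x, y
```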

Proceedings ArticleDOI
M.H. Erdem1, Y. Ozturk
30 May 1994
TL;DR: A binary associative memory inspired by Newton's theory of mass attraction is proposed and some related analysis is given; the proposed model has been observed to be superior to the Hamming net, the Hopfield network, and Harmony theory in various aspects.
Abstract: In this study, a binary associative memory inspired by Newton's theory of mass attraction is proposed and some related analysis is given. In the model, memory items are considered as masses in the interior or at the corners of a hypercube. In recall, "attraction forces" are computed, and the memory item whose "force" is the greatest becomes the output pattern. Since the operation of the model is highly parallel, the network is extremely fast: retrieving a memory item takes only two steps. The proposed model has been observed to be superior to the Hamming net, the Hopfield network, and Harmony theory in various aspects.
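A hedged illustration of the idea (the paper's exact force law is not reproduced; the inverse-square law, unit masses, and Euclidean distance below are assumptions of this sketch): each stored item is a point mass on the hypercube, and the item exerting the greatest Newton-style force on the probe wins in one parallel comparison.

```python
import numpy as np

def gravity_recall(probe, memories, masses=None):
    """Recall by 'mass attraction' (illustrative sketch).

    Each stored item exerts a force F = m / d**2 on the probe,
    with d the Euclidean distance; the strongest attractor is
    returned.  Retrieval is a single parallel compare-and-select,
    matching the two-step operation noted in the abstract.
    """
    M = np.asarray(memories, dtype=float)       # (p, n) stored items
    if masses is None:
        masses = np.ones(len(M))
    d2 = ((M - probe) ** 2).sum(axis=1)         # squared distances
    d2 = np.where(d2 == 0, 1e-12, d2)           # probe equals a stored item
    return M[np.argmax(masses / d2)]
```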

Journal ArticleDOI
TL;DR: This paper describes how n-tuple techniques can be used in the hardware implementation of a general auto-associative network.
Abstract: The use of n-tuple or weightless neural networks as pattern recognition devices has been well documented. They have a significant advantage over more common network paradigms, such as the multilayer perceptron, in that they can be easily implemented in digital hardware using standard random-access memories. To date, n-tuple networks have predominantly been used as fast pattern classification devices. The paper describes how n-tuple techniques can be used in the hardware implementation of a general auto-associative network.
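The well-documented classification use that the paper builds on can be sketched as follows (a WISARD-style discriminator; the paper's auto-associative hardware mapping is not reproduced here): each class owns one RAM per randomly sampled n-tuple, training writes a 1 at the addressed location, and recall counts hits.

```python
import numpy as np

def make_tuples(n_bits, n, seed=0):
    """Randomly partition the input bit positions into n-tuples."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_bits)
    return idx[: n_bits - n_bits % n].reshape(-1, n)

class Discriminator:
    """One class's bank of RAMs, one RAM per n-tuple."""
    def __init__(self, tuples):
        self.tuples = tuples
        self.rams = [set() for _ in tuples]     # a set models a 1-bit RAM

    def train(self, x):
        for ram, t in zip(self.rams, self.tuples):
            ram.add(tuple(x[t]))                # write 1 at this address

    def score(self, x):
        return sum(tuple(x[t]) in ram           # count addressed 1s
                   for ram, t in zip(self.rams, self.tuples))
```

Training one Discriminator per class on binary (0/1) vectors and classifying by the highest score gives the classical n-tuple recognizer; the paper's contribution is mapping this RAM-based structure onto a general auto-associative network.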

Proceedings ArticleDOI
27 Jun 1994
TL;DR: The practical capacity of exponential bidirectional associative memory (eBAM) is discussed, considering fault tolerance and the fixed dynamic range of VLSI circuits, and a maximal length of patterns under a fixed dynamic range is derived.
Abstract: The practical capacity of exponential bidirectional associative memory (eBAM) is discussed, considering fault tolerance and the fixed dynamic range of VLSI circuits. Several factors are taken into consideration in the implementation of an eBAM VLSI circuit. First, the fault-tolerance requirement leads to the discovery of the attraction radius of the basin for each stored pattern pair. Second, the bit-error probability of the eBAM has to be optimally small when a large number of pattern pairs are encoded in the eBAM. Third, the fixed dynamic range of a transistor or a diode operating in the subthreshold region results in a limited length of each stored pattern. Hence, a signal-to-noise ratio (SNR) analysis is adopted to find the attraction radius and the practical capacity. A maximal bit-error probability (P/sub e/) is estimated, and a maximal length of patterns under a fixed dynamic range is derived.

Journal ArticleDOI
01 Jul 1994-Robotica
TL;DR: A new neural-network-based method to solve the motion planning problem, i.e. to construct a collision-free path for a moving object among fixed obstacles, using a modified feed-forward neural network and a modified bidirectional associative memory.
Abstract: This paper presents a new neural-network-based method to solve the motion planning problem, i.e. to construct a collision-free path for a moving object among fixed obstacles. Our ‘navigator’ basically consists of two neural networks: the first is a modified feed-forward neural network, which is used to determine the configuration space; the moving object is modelled as a configuration point in the configuration space. The second is a modified bidirectional associative memory, which is used to find a path for the configuration point through the configuration space while avoiding the configuration obstacles. The basic processing unit of the neural networks may be constructed using logic gates, including AND gates, OR gates, NOT gates, and flip-flops. Examples of efficient solutions to difficult motion planning problems using the proposed techniques are presented.