
Showing papers on "Bidirectional associative memory published in 1991"


Journal ArticleDOI
TL;DR: In this article, a review of neural networks in chemistry is presented, focusing on the back-propagation algorithm and its applications in spectroscopy, protein structure, process control and chemical reactivity.

547 citations


Journal ArticleDOI
TL;DR: The asymptotic storage capacity of the ECAM scales exponentially with the length of the memory patterns, meeting the ultimate upper bound for the capacity of associative memories; with limited dynamic range in the exponentiation nodes, the capacity is instead proportional to that dynamic range.
Abstract: A model for a class of high-capacity associative memories is presented. Since they are based on two-layer recurrent neural networks and their operations depend on the correlation measure, these associative memories are called recurrent correlation associative memories (RCAMs). The RCAMs are shown to be asymptotically stable in both synchronous and asynchronous (sequential) update modes as long as their weighting functions are continuous and monotone nondecreasing. In particular, a high-capacity RCAM named the exponential correlation associative memory (ECAM) is proposed. The asymptotic storage capacity of the ECAM scales exponentially with the length of memory patterns, and it meets the ultimate upper bound for the capacity of associative memories. The asymptotic storage capacity of the ECAM with limited dynamic range in its exponentiation nodes is found to be proportional to that dynamic range. Design and fabrication of a 3-mm CMOS ECAM chip is reported. The prototype chip can store 32 24-bit memory patterns, and its speed is higher than one associative recall operation every 3 µs. An application of the ECAM chip to vector quantization is also described.

121 citations
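The recall rule behind the RCAM family is compact enough to sketch. Below is a minimal NumPy illustration of ECAM recall as the abstract describes it: each stored pattern is weighted by an exponential of its correlation with the current state, and the weighted sum of patterns is thresholded. The bipolar encoding, the base of 2, and the max-subtraction trick for bounding the dynamic range are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def ecam_recall(x, patterns, base=2.0, max_iters=50):
    """Sketch of ECAM recall: weight each stored pattern by an
    exponential of its correlation with the current state, then
    threshold the weighted sum of patterns.

    patterns: (m, n) array of stored bipolar (+/-1) memories
    x:        length-n bipolar probe vector
    """
    x = np.asarray(x, dtype=float)
    for _ in range(max_iters):
        corr = patterns @ x                     # correlations <u_k, x>
        weights = base ** (corr - corr.max())   # exponentiation, range-limited
        x_next = np.sign(weights @ patterns)    # thresholded weighted sum
        x_next[x_next == 0] = 1.0               # break ties toward +1
        if np.array_equal(x_next, x):
            break                               # converged to a fixed point
        x = x_next
    return x
```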


Journal ArticleDOI
TL;DR: A linear programming/multiple training (LP/MT) method is presented that determines weights satisfying the recall conditions whenever a solution is feasible, and the sequential multiple training (SMT) method is shown to yield integer weights that are multiplicities of the training pairs.
Abstract: Necessary and sufficient conditions are derived for the weights of a generalized correlation matrix of a bidirectional associative memory (BAM) which guarantee the recall of all training pairs. A linear programming/multiple training (LP/MT) method is presented that determines weights satisfying the conditions whenever a solution is feasible. The sequential multiple training (SMT) method is shown to yield integers for the weights, which are multiplicities of the training pairs. Computer simulation results, including capacity comparisons of BAM, LP/MT BAM, and SMT BAM, are presented.

106 citations
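To make the linear-programming idea concrete, here is a hedged sketch of finding weights that guarantee recall, one output row at a time, via a feasibility LP: for each training pair the post-synaptic sum must agree in sign with the target bit. The margin of 1 and the use of scipy's linprog are illustrative choices, only the forward (A-to-B) direction is shown, and the paper's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import linprog

def lp_bam_weights(X, Y, margin=1.0):
    """Feasibility LP for BAM weights, one output row at a time.

    X: (p, n) bipolar inputs; Y: (p, m) bipolar outputs. For every pair
    we require y_pi * (W x_p)_i >= margin, a sufficient condition for
    recalling every training pair in the A -> B direction.
    """
    p, n = X.shape
    W = np.zeros((Y.shape[1], n))
    for i in range(Y.shape[1]):
        A_ub = -(Y[:, i:i + 1] * X)    # encodes -y_pi * (x_p . w) <= -margin
        res = linprog(c=np.zeros(n), A_ub=A_ub, b_ub=-margin * np.ones(p),
                      bounds=[(None, None)] * n, method="highs")
        if not res.success:
            raise ValueError(f"no feasible weights for output unit {i}")
        W[i] = res.x
    return W
```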


01 Jan 1991
TL;DR: In this article, the authors provide a discussion of the intellectual trends that caused nineteenth-century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentieth-century scientific movements.
Abstract: An historical discussion is provided of the intellectual trends that caused nineteenth century interdisciplinary studies of physics and psychobiology by leading scientists such as Helmholtz, Maxwell, and Mach to splinter into separate twentieth-century scientific movements. The nonlinear, nonstationary, and nonlocal nature of behavioral and brain data are emphasized. Three sources of contemporary neural network research—the binary, linear, and continuous-nonlinear models—are noted. The remainder of the article describes results about continuous-nonlinear models: Many models of content-addressable memory are shown to be special cases of the Cohen-Grossberg model and global Liapunov function, including the additive, brain-state-in-a-box, McCulloch-Pitts, Boltzmann machine, Hartline-Ratliff-Miller, shunting, masking field, bidirectional associative memory, Volterra-Lotka, Gilpin-Ayala, and Eigen-Schuster models. A Liapunov functional method is described for proving global limit or oscillation theorems for nonlinear competitive systems when their decision schemes are globally consistent or inconsistent, respectively. The former case is illustrated by a model of a globally stable economic market, and the latter case is illustrated by a model of the voting paradox. Key properties of shunting competitive feedback networks are summarized, including the role of sigmoid signalling, automatic gain control, competitive choice and quantization, tunable filtering, total activity normalization, and noise suppression in pattern transformation and memory storage applications. Connections to models of competitive learning, vector quantization, and categorical perception are noted. Adaptive resonance theory (ART) models for self-stabilizing adaptive pattern recognition in response to complex real-time nonstationary input environments are compared with off-line models such as autoassociators, the Boltzmann machine, and back propagation. Special attention is paid to the stability and capacity of these models, and to the role of top-down expectations and attentional processing in the active regulation of both learning and fast information processing. Models whose performance and learning are regulated by internal gating and matching signals, or by external environmentally generated error signals, are contrasted with models whose learning is regulated by external teacher signals that have no analog in natural real-time environments. Examples from sensory-motor control of adaptive vector encoders, adaptive coordinate transformations, adaptive gain control by visual error signals, and automatic generation of synchronous multijoint movement trajectories illustrate the former model types. Internal matching processes are shown capable of discovering several different types of invariant environmental properties. These include ART mechanisms which discover recognition invariants, adaptive vector encoder mechanisms which discover movement invariants, and autoreceptive associative mechanisms which discover invariants of self-regulating target position maps.

98 citations
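For reference, the Cohen-Grossberg system that unifies the content-addressable memory models listed above, and its global Liapunov function, are usually stated as follows (standard notation; the lower limit of integration is a conventional choice):

$$\dot{x}_i = a_i(x_i)\Big[b_i(x_i) - \sum_{j=1}^{n} c_{ij}\, d_j(x_j)\Big], \qquad c_{ij} = c_{ji},$$

$$V(x) = -\sum_{i=1}^{n} \int_{0}^{x_i} b_i(s)\, d_i'(s)\, ds + \frac{1}{2}\sum_{j,k=1}^{n} c_{jk}\, d_j(x_j)\, d_k(x_k),$$

with $\dot{V} \le 0$ along trajectories whenever the amplification functions $a_i$ are nonnegative and the signal functions $d_j$ are monotone nondecreasing, so each of the special cases named in the abstract inherits global stability from this one function.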


Journal ArticleDOI
TL;DR: Structural stability is proved for a large class of unsupervised nonlinear feedback neural networks, the adaptive bidirectional associative memory (ABAM) models, and the much larger family of random ABAM (RABAM) models is proved to be globally stable.
Abstract: Structural stability is proved for a large class of unsupervised nonlinear feedback neural networks, adaptive bidirectional associative memory (ABAM) models. The approach extends the ABAM models to the random-process domain as systems of stochastic differential equations and appends scaled Brownian diffusions. It is also proved that this much larger family of models, random ABAM (RABAM) models, is globally stable. Intuitively, RABAM equilibria equal ABAM equilibria that vibrate randomly. The ABAM family includes many unsupervised feedback and feedforward neural models. All RABAM models permit Brownian annealing. The RABAM noise suppression theorem characterizes RABAM system vibration. The mean-squared activation and synaptic velocities decrease exponentially to their lower bounds, the respective temperature-scaled noise variances. The many neuronal and synaptic parameters missing from such neural network models are included, but as net random unmodeled effects. They do not affect the structure of real-time global computations.

38 citations
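A RABAM system of the kind described can be simulated directly as a set of stochastic differential equations. The Euler-Maruyama sketch below appends scaled Brownian increments to an additive ABAM with signal-Hebbian learning; the tanh signal function, unit decay rates, and the noise scale sigma are assumptions made for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def rabam_step(x, y, M, Ix, Iy, dt=0.01, sigma=0.05):
    """One Euler-Maruyama step of a RABAM sketch: additive ABAM dynamics
    plus scaled Brownian diffusions appended to every activation and
    synapse. S, the decay rates, and sigma are assumptions."""
    S = np.tanh                                    # bounded, monotone signal function
    dW = lambda shape: rng.normal(0.0, np.sqrt(dt), shape)
    dx = (-x + M @ S(y) + Ix) * dt + sigma * dW(x.shape)         # field A activations
    dy = (-y + M.T @ S(x) + Iy) * dt + sigma * dW(y.shape)       # field B activations
    dM = (-M + np.outer(S(x), S(y))) * dt + sigma * dW(M.shape)  # signal-Hebbian LTM
    return x + dx, y + dy, M + dM
```

Per the noise suppression theorem quoted above, trajectories of such a system should settle into random vibration about the corresponding deterministic ABAM equilibria.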


Proceedings ArticleDOI
18 Nov 1991
TL;DR: A novel encoding algorithm for discrete bidirectional associative memory (BAM), referred to as the Householder encoding algorithm (HCA), is proposed; the capacity of a BAM with the HCA is greatly improved over Kosko's method, particularly when the input dimensions are large.
Abstract: A novel encoding algorithm, referred to as the Householder encoding algorithm (HCA), for discrete bidirectional associative memory (BAM) is proposed. The traditional encoding algorithm suggested by B. Kosko (1988) is based on the Hebbian-type correlation method. Thus, not all training pattern pairs can be fixed points, even when the number of training pairs is small. Using the HCA, the capacity of a BAM tends to the bound of min(L_A, L_B), where L_A and L_B are the dimensions of the BAM. Simulation results show that the capacity of BAM with HCA is greatly improved compared with Kosko's method, particularly when the input dimensions are large. Distorted inputs recall the stored pair with the best approximation when the HCA is used.

20 citations
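The Householder algorithm itself is not spelled out in the abstract, but the baseline it improves on is: Kosko's correlation encoding, under which some training pairs fail to be fixed points even at small loads. The sketch below builds the correlation matrix and checks which pairs survive one forward-backward pass unchanged; the bipolar encoding and the tie-breaking rule are assumptions.

```python
import numpy as np

def kosko_bam(X, Y):
    """Kosko's correlation (Hebbian) encoding: W = sum_p y_p x_p^T."""
    return Y.T @ X

def fixed_point_pairs(W, X, Y):
    """Indices of training pairs that survive one forward-backward pass
    unchanged, i.e. are fixed points of the BAM (ties broken toward +1)."""
    sgn = lambda v: np.where(v >= 0, 1, -1)
    ok = []
    for p in range(X.shape[0]):
        y = sgn(W @ X[p])           # forward pass,  field A -> field B
        x = sgn(W.T @ y)            # backward pass, field B -> field A
        if np.array_equal(x, X[p]) and np.array_equal(y, Y[p]):
            ok.append(p)
    return ok
```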


Journal ArticleDOI
TL;DR: A modified model of the intraconnected bidirectional associative memory (IBAM) is introduced in which intrafield connections are added within each neuron field alongside the interfield connections; this removes the complement encoding problem and relaxes the continuity assumption required for reliable recall in the BAM.
Abstract: A modified model for the intraconnected bidirectional associative memory (IBAM) is introduced in which there are not only interfield connections but also intrafield connections added in each neuron field. In the modified IBAM recall process, the intralayer feedback processes run in parallel with the interlayer processes, instead of sequentially as in the IBAM. This results in both removal of the complement encoding problem and relaxation of the continuity assumption for reliable recall in the BAM.

19 citations
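Under the structure the abstract describes, the interfield matrix and the two intrafield matrices contribute to the same synchronous update. The sketch below is one reading of that recall rule; the intrafield matrices P and Q and the sign threshold are assumptions introduced for illustration, not the paper's construction.

```python
import numpy as np

def ibam_recall(x, y, W, P, Q, iters=20):
    """One reading of the modified IBAM recall rule: interfield (W) and
    intrafield (P for field A, Q for field B) contributions enter the
    same synchronous update, so intralayer feedback runs in parallel
    with interlayer feedback. P and Q are assumed intrafield matrices."""
    sgn = lambda v: np.where(v >= 0, 1, -1)
    for _ in range(iters):
        y_next = sgn(W @ x + Q @ y)     # field B: interfield + intrafield input
        x_next = sgn(W.T @ y + P @ x)   # field A: interfield + intrafield input
        if np.array_equal(x_next, x) and np.array_equal(y_next, y):
            break
        x, y = x_next, y_next
    return x, y
```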


Journal ArticleDOI
TL;DR: It is shown that a wide range of neural network models, such as the bidirectional associative memory network and layered networks, are special cases of the framework, and that under threshold noise there exists an optimal sparsity of connections which yields the maximum capacity for a fixed noise level.
Abstract: The method of statistical neurodynamics is used to analyse retrieval dynamics of an auto-associative neural network model. The model has arbitrarily specified connectivity, and static noise is added to the threshold values. The method is based only on probability and approximation calculations. Connections between neurons are determined by a version of the Hebb rule (correlation-type rule), and some of them are removed at random. It is shown that the capacity of the network per connection is a monotone decreasing function of connectivity if there is no noise in the threshold. When there is noise in the threshold, there exists an optimal value of the sparsity of connections which yields the maximum capacity for a fixed noise level. In addition, effects of systematic removal of connections in contrast with random removal, i.e. structured models, are discussed. It is shown that a wide range of neural network models, such as a bidirectional associative memory network or a layered network, are special cases of...

17 citations
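The basic experiment is easy to reproduce numerically: store patterns with the Hebb rule, delete each connection independently so that a fraction c survives, add static Gaussian threshold noise, and measure the one-step recall overlap. The network size, load, and the Gaussian noise model below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def diluted_recall_overlap(n=500, m=25, connectivity=0.5, noise_std=0.1):
    """One-step recall overlap for a Hebbian auto-associative net with
    randomly removed connections and static threshold noise."""
    patterns = rng.choice([-1, 1], size=(m, n))
    W = (patterns.T @ patterns).astype(float) / n    # Hebb (correlation) rule
    np.fill_diagonal(W, 0.0)
    W *= rng.random((n, n)) < connectivity           # random dilution of connections
    theta = rng.normal(0.0, noise_std, n)            # static threshold noise
    x_new = np.where(W @ patterns[0] - theta >= 0, 1, -1)
    return (x_new @ patterns[0]) / n                 # overlap with the stored pattern
```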



Proceedings ArticleDOI
08 Jul 1991
TL;DR: A new neural network architecture and learning algorithm, referred to as the hierarchically self-organizing learning (HSOL) network, is presented for skill learning and applied to machine acquisition of game-playing skills.
Abstract: Presents a theory and an architecture on machine skill acquisition and its implementation in neural networks. Particular emphasis is given to skill acquisition in man/machine systems, where the neural network observes the control behavior of a human expert and learns the rules behind his expertise. The paradigm of machine acquisition of skills implies the machine exploitation of its own skills through the exploration of experience based on the transferred skills. A new neural network architecture and learning algorithm, referred to as the hierarchically self-organizing learning (HSOL) network, is presented especially for skill learning. The HSOL network is a dynamically competitive or cooperative network with the ability to self-organize hidden units, and it functions as a universal approximator of arbitrary input-output mappings. This network is applied to machine acquisition of game playing skills, and its performance is compared with that of other networks, including backpropagation, a fully connected network, bidirectional associative memory, and a recurrent network.

12 citations



Book ChapterDOI
01 Jan 1991
TL;DR: In this paper, the effect of stuck-at faults on the connection weights of an associative memory is investigated, both theoretically and by computer simulation.
Abstract: The focus of the paper is on the effect of stuck-at faults affecting the connection weights of an associative memory. The associative memory under consideration is based on distributed storage of information and is of the type analyzed by Palm (1980). The effect of stuck-at-1 as well as stuck-at-0 faults on system performance has been estimated theoretically and examined by computer simulation. It turns out that the system performance degrades only gradually with an increasing number of faulty connections, and that stuck-at-0 faults have the greater impact on the performance of this special type of associative memory.
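A fault-injection experiment of this kind is straightforward to sketch for a Palm-style binary memory: weights are the clipped (OR-ed) Hebbian sums, recall thresholds the dendritic sums, and stuck-at faults pin random weights to 0 or 1. The binary {0,1} coding, the fault fractions, and the threshold rule below are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def palm_store(X, Y):
    """Clipped Hebbian storage of binary {0,1} pairs: w_ij = OR_p (y_pi AND x_pj)."""
    return ((Y.T @ X) > 0).astype(np.uint8)

def inject_stuck_at(W, frac_sa0=0.0, frac_sa1=0.0):
    """Pin random fractions of the weights to 0 (stuck-at-0) or 1 (stuck-at-1)."""
    W = W.copy()
    r = rng.random(W.shape)
    W[r < frac_sa0] = 0
    W[(r >= frac_sa0) & (r < frac_sa0 + frac_sa1)] = 1
    return W

def palm_recall(W, x, k):
    """Threshold recall: fire every unit whose dendritic sum reaches k,
    the number of ones in the input cue (a common threshold choice)."""
    return ((W.astype(int) @ x) >= k).astype(np.uint8)

# usage sketch:
# y = palm_recall(inject_stuck_at(palm_store(X, Y), 0.05, 0.01), x, k=int(x.sum()))
```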

Journal ArticleDOI
TL;DR: Simulations indicate that the storage capacity of the network for uncorrelated memories, and the radius of attraction of each memory, are significantly better than those of the standard Hopfield network.
Abstract: This paper describes a process for adding hidden neurons to a fully recurrent Hopfield neural network in such a way as to optimize the orthogonality of the memory space. The process uses the network itself, operating with a "reverse update rule" to assign optimal values to the hidden neurons for each memory. The outer product rule is used to modify synaptic strengths as each new memory is added. As in a standard Hopfield network this is a fast process because it is noniterative. Tri-state hidden neurons, initially set to zero, are used in the recovery of memories. Simulations indicate that the storage capacity of the network for uncorrelated memories, and the radius of attraction of each memory, are significantly better than those of the standard Hopfield network. The use of hidden neurons permits flexibility in the network capacity for memories of a given length. The network is able to solve second-order hetero-associative problems, as illustrated with solutions to the XOR set of associations.

Patent
Kazuo Kyuma, Shuichi Tai, Jun Ohta, Masaya Oita, Nagaaki Ohyama, Masahiro Yamaguchi
14 Aug 1991
TL;DR: In this article, an associative and restrictive condition is repeatedly added to the energy function of the neural network constituting the associative memory, thereby converging the associative output to a stable state of the energy.
Abstract: An intelligence information processing system is composed of an associative memory and a serial processing-type computer. Input pattern information is associated with the associative memory, and pattern recognition based on the computer evaluates an associative output. In accordance with this evaluation, an associative and restrictive condition is repeatedly added to the energy function of a neural network constituting the associative memory, thereby converging the associative output on a stable state of the energy. The converged associative output is verified with intelligence information stored in a computer memory. The associative and restrictive condition is again repeatedly added to the energy function in accordance with the verification so as to produce an output from the system.

Proceedings ArticleDOI
18 Nov 1991
TL;DR: A novel neural-network architecture, the neural network loop (NNL), and its learning rules are described; the NNL can operate as a Hopfield network, a BAM, and other kinds of neural networks, and can perform multiple-category associative memory.
Abstract: Describes a novel neural-network architecture, the neural network loop (NNL), and its learning rules. It can operate as a Hopfield network, a BAM (bidirectional associative memory), and other kinds of neural networks. In particular, it can perform multiple-category associative memory. This capability is very similar to that of the human brain. It can be applied to pattern recognition and associative memory. Computer simulation was carried out, and the results prove that the NNL is an effective network.


Proceedings ArticleDOI
18 Nov 1991
TL;DR: Experimental verification demonstrates the accuracy of the analytically derived formula for storage capacity of a discrete bidirectional associative memory based on the correct learning condition.
Abstract: The authors derive the storage capacity of a discrete bidirectional associative memory (BAM) based on the correct learning condition. The correct learning condition is a relaxed sufficient condition for a BAM matrix to possess the perfect recall property. In fact, the perfect recall property depends only on the way the equilibrium points are formed, since a discrete BAM is always asymptotically stable. The storage capacity provides a measure of probability that a given input to the BAM is perfectly recalled under the given situation. A comparison study between analytical and experimental results is reported. This experimental verification demonstrates the accuracy of the analytically derived formula for storage capacity.

Journal ArticleDOI
TL;DR: This paper compares the performance of a recurrent associative memory to that of a feed-forward neural network trained with the same data and finds that the neural network's performance is much less promising than that of the associative memory.
Abstract: Many optimization procedures presume the availability of an initial approximation in the neighborhood of a local or global optimum. Unfortunately, finding a set of good starting conditions is itself a nontrivial proposition. Our previous papers [1,2] describe procedures that use simple and recurrent associative memories to identify approximate solutions to closely related linear programs. In this paper, we compare the performance of a recurrent associative memory to that of a feed-forward neural network trained with the same data. The neural network's performance is much less promising than that of the associative memory. Modest infeasibilities exist in the estimated solutions provided by the associative memory, but the basic variables defining the optimal solutions to the linear programs are readily apparent.

Book ChapterDOI
17 Sep 1991
TL;DR: It is demonstrated that dynamical associative memory can be realized with the chaotic neural network, showing that chaotic dynamics can be applied to associative memory.
Abstract: We review our model of chaotic neural networks with chaotic dynamics and apply it to associative memory. We demonstrate that dynamical associative memory can be realized with the chaotic neural network.

Proceedings ArticleDOI
18 Nov 1991
TL;DR: An encoding strategy for improving the noise tolerance and capacity of Kosko's bidirectional associative memory is proposed; it increases the network's storage capacity and its tolerance to noise to a level not achievable by multiple training alone.
Abstract: An encoding strategy for improving the noise tolerance and capacity of Kosko's bidirectional associative memory is proposed. Energy minima corresponding to pattern pairs that are to be stored are enhanced, and, simultaneously, unwanted or spurious states are eliminated. The method is an extension of the multiple training procedure that has been described in the literature for inducing local minima at desired locations. An additional unlearning term in the energy expression is included to eliminate spurious states. Spurious states and parameter values for constructing the network were determined experimentally. Computer simulations showed that unlearning increases the network's storage capacity and its tolerance to noise to a level not achievable by multiple training alone.
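The combination of multiple training and unlearning described above can be sketched directly: each training pair's outer product enters the correlation matrix with an integer multiplicity, and outer products of experimentally identified spurious pairs are subtracted. The unlearning weight lam below is an assumed parameter; the paper determined the spurious states and parameter values experimentally.

```python
import numpy as np

def mt_unlearn_bam(X, Y, q, spurious, lam=0.1):
    """BAM matrix from multiple training plus unlearning.

    X, Y     : (p, n) and (p, m) bipolar training pairs
    q        : length-p integer multiplicities (multiple training)
    spurious : list of (x_s, y_s) spurious pairs found experimentally
    lam      : unlearning weight, an assumed parameter value
    """
    W = sum(q[p] * np.outer(Y[p], X[p]) for p in range(X.shape[0]))
    for x_s, y_s in spurious:
        W = W - lam * np.outer(y_s, x_s)   # erode the spurious energy minimum
    return W
```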

Book ChapterDOI
TL;DR: Four associative memory schemes that are suitable for optical processor implementation are described, and numerical results are presented that compare the capabilities of these four associative memories.
Abstract: Four associative memory schemes (Hopfield associative memory, generalized inverse associative memory, spectral associative memory and Ho-Kashyap associative memory) that are suitable for optical processor implementation are described. Numerical results are presented that compare the capabilities of these four associative memories.
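Of the four schemes, the generalized inverse associative memory has the most compact statement: the weight matrix is W = Y X^+, with X^+ the Moore-Penrose pseudoinverse of the matrix whose columns are the stored input patterns. A minimal sketch follows; the column-wise layout is a convention chosen here.

```python
import numpy as np

def generalized_inverse_memory(X, Y):
    """Generalized inverse associative memory: W = Y X^+, where X^+ is
    the Moore-Penrose pseudoinverse, so that W x_p ~ y_p for the stored
    pairs. Columns of X and Y hold the input and output patterns."""
    return Y @ np.linalg.pinv(X)

# usage sketch: W = generalized_inverse_memory(X, Y); y_hat = np.sign(W @ x_probe)
```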

Proceedings ArticleDOI
08 Jan 1991
TL;DR: The authors present the application of a binary associative memory to classification tasks, together with a memory-efficient programming algorithm which is fast even in simulations on a sequential computer.
Abstract: The authors present the application of a binary associative memory to classification tasks together with a memory-efficient programming algorithm which is fast, even in simulations on a sequential computer. The associative memory is a neural-network-like structure with binary synaptic weights. Due to its simplicity it is rather easy to analyse compared with other, more sophisticated neural networks. Furthermore, the memory is easily realisable by means of dedicated VLSI chips up to a size of several thousand neurons and a storage capacity of several megabytes. The application pursued is the control of an autonomous vehicle.


Proceedings ArticleDOI
11 Jun 1991
TL;DR: The authors present a complete VLSI continuous-time bidirectional associative memory (BAM) using small-transconductance four-quadrant multipliers, and capacitors for the integrators.
Abstract: The authors present a complete VLSI continuous-time bidirectional associative memory (BAM). The short term memory (STM) section is implemented using small-transconductance four-quadrant multipliers, and capacitors for the integrators. The long term memory (LTM) is built using an additional multiplier that uses locally available signals to perform Hebbian learning. The value of the learned weight is present at a capacitor for each synapse. After learning has been accomplished, the value of the stored weight voltage can be refreshed using a simple analog/digital (A/D)-digital/analog (D/A) conversion, which, if done fast enough, will maintain the weight value within a discrete interval of the complete weight range. Such a discretization still allows good performance of the STM section after learning is finished.

Proceedings ArticleDOI
18 Nov 1991
TL;DR: From the analysis, it was found that stability and attractivity in the modified BAM model are much better than in the original BAM if all the conditions are the same.
Abstract: The authors study the bidirectional associative memory (BAM) model from the matched-filtering viewpoint, obtaining an intuitive understanding of its information processing mechanism. They analyze the problem of stability and attractivity in BAM and propose some sufficient conditions. The shortcomings of BAM, that is, low memory capacity and weak attractivity, are pointed out. A revised BAM model is proposed by taking an exponential function operating on the related correlations between a probing vector and its neighbor library pattern vectors. From the analysis, it was found that stability and attractivity in the modified model are much better than in the original BAM if all the conditions are the same.

Proceedings ArticleDOI
18 Nov 1991
TL;DR: A novel connectionist architecture aimed at modeling the interaction between long and short term memory is described; it combines two network architectures, bidirectional associative memory (BAM) and mean field theory (MFT), which serve as short term and long term store, respectively.
Abstract: The authors describe a novel connectionist architecture aimed at modeling the interaction between long and short term memory. The model is capable of incrementally storing several items from long term memory in a short term memory. The model combines two different network architectures, bidirectional associative memory (BAM) and mean field theory (MFT), which serve as short term and long term store, respectively. The properties of a BAM system match all the major design criteria of an incremental short term store. When augmented with randomized internal representations (RIR), a BAM system can serve as an autoassociative memory which supports hidden representations and has an enlarged capacity and ability to store correlated patterns. MFT systems can be powerful autoassociators capable of storing very large numbers of correlated patterns, because they can utilize hidden units. Interaction between the two systems is established by means of a common hidden representation.

Journal ArticleDOI
TL;DR: An extension of the Boolean matrix dynamics characterization technique to other, more complex DAMs is presented and spurious memories are shown to occur only if the number of stored patterns exceeds two in an even-dimensionality Hopfield memory.
Abstract: The exact dynamics of shallow loaded associative neural memories are generated and characterized. The Boolean matrix analysis approach is employed for the efficient generation of all possible state transition trajectories for parallel updated binary-state dynamic associative memories (DAMs). General expressions for the size of the basin of attraction of fundamental and oscillatory memories and the number of oscillatory and stable states are derived for discrete synchronous Hopfield DAMs loaded with one, two, or three even-dimensionality bipolar memory vectors having the same mutual Hamming distances between them. Spurious memories are shown to occur only if the number of stored patterns exceeds two in an even-dimensionality Hopfield memory. The effects of odd- versus even-dimensionality memory vectors on DAM dynamics and the effects of memory pattern encoding on DAM performance are tested. An extension of the Boolean matrix dynamics characterization technique to other, more complex DAMs is presented.
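For small networks, the exhaustive characterization the abstract describes can be reproduced by brute force rather than Boolean matrix algebra: enumerate all 2^n bipolar states, apply one parallel update to each, and read fixed points and cycles off the transition map. For symmetric weights under parallel updates, cycles are known to have length at most two, which is why the sketch only checks 2-cycles; the tie-breaking toward +1 is an assumption.

```python
import numpy as np
from itertools import product

def transition_map(W):
    """Successor of every bipolar state under one parallel (synchronous)
    update; feasible only for small n, since there are 2**n states."""
    n = W.shape[0]
    step = lambda s: tuple(np.where(W @ np.array(s) >= 0, 1, -1))  # ties -> +1
    return {s: step(s) for s in product((-1, 1), repeat=n)}

def fixed_points_and_two_cycles(W):
    """Read fixed points and 2-cycles off the exhaustive transition map."""
    T = transition_map(W)
    fixed = [s for s, t in T.items() if s == t]
    cycles = [s for s, t in T.items() if s != t and T[t] == s]
    return fixed, cycles
```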

Proceedings ArticleDOI
18 Nov 1991
TL;DR: The authors examine the selection of connection weights of a Hopfield neural network model so that the network functions as a content addressable memory (CAM), and show that the choice of connection weights depends upon two matrices whose selection affects whether the network functions properly as a CAM.
Abstract: The authors examine the selection of connection weights of a Hopfield neural network model so that the network functions as a content addressable memory (CAM). They consider the discrete-time version with a synchronous update rule and sigmoid-type nonlinear functions in the neuron outputs. The general characterization of connection weights for fixed-point programming and a condition for the asymptotic stability of these fixed points are presented. An example is also included for the analysis. It is shown that the choice of connection weights depends upon two matrices, whose selection affects whether the network functions properly as a CAM.

Proceedings ArticleDOI
18 Nov 1991
TL;DR: Two improved MAM architectures are proposed: one uses high-order neurons to improve the network capacity, and the other is based on a counter-propagation network; the noise performance of both is superior to that of a MAM model based on Kosko's learning algorithm.
Abstract: It is pointed out that multidirectional associative memories (MAMs) based on Kosko's learning algorithm have some limitations such as low network capacity, difficulty in recalling from all the layers, and the O(N^2) weight matrices required for N-way patterns. The authors propose two improved MAM architectures. In the first architecture, high-order neurons are used to improve the network capacity. In the second architecture, which is based on a counter-propagation network, full network capacity is achieved with less training time and with the number of weight matrices growing with order O(N). The noise performance of these networks is also superior to that of a MAM model based on Kosko's learning algorithm.

Book ChapterDOI
01 Jan 1991
TL;DR: The pattern classification capability of an associative memory network is shown through the recognition of a set of multi-font Chinese characters, by an appropriate selection of the associated class vectors for each stored pattern prototype.
Abstract: This paper shows the pattern classification capability of an associative memory network through the recognition of a set of multi-font Chinese characters. An associative memory can become a suitable pattern classifier through an appropriate selection of the inner codes, specifically, the selection of the associated class vectors for each stored pattern prototype. In our experiment, Hadamard vectors are selected in view of the features of the original character data, and their effectiveness is displayed by the network's recognition behaviour.
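The classifier design the abstract outlines can be sketched as a correlation memory whose stored outputs are rows of a Hadamard matrix, which are mutually orthogonal and therefore well separated as class codes. The code dimension, the bipolar prototype encoding, and the nearest-code decision rule below are assumptions made for illustration.

```python
import numpy as np
from scipy.linalg import hadamard

def build_classifier(prototypes, class_ids, code_dim=64):
    """Correlation memory associating each stored prototype with a
    Hadamard row as its class vector (code_dim must be a power of two;
    Hadamard rows are mutually orthogonal, keeping class codes apart)."""
    H = hadamard(code_dim)
    codes = H[np.asarray(class_ids)]   # one Hadamard row per prototype's class
    W = codes.T @ prototypes           # correlation (outer-product) encoding
    return W, H

def classify(W, H, x):
    """Recall a code for input x and return the best-matching Hadamard row."""
    return int(np.argmax(H @ (W @ x)))
```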