Journal ArticleDOI

A connectionist model for category perception: theory and implementation

01 Mar 1993-IEEE Transactions on Neural Networks (IEEE)-Vol. 4, Iss: 2, pp 257-269

TL;DR: A connectionist model for learning and recognizing objects (or object classes) is presented and the theory of learning is developed based on some probabilistic measures.

Abstract: A connectionist model for learning and recognizing objects (or object classes) is presented. The learning and recognition system uses confidence values for the presence of a feature. The network can recognize multiple objects simultaneously when the corresponding overlapped feature train is presented at the input. An error function is defined and minimized to obtain the optimal set of object classes. The model is capable of learning each individual object in the supervised mode. The theory of learning is developed based on probabilistic measures. Experimental results are presented. The model can be applied to the detection of multiple objects occluding each other.
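A minimal sketch of the recognition setting described above, assuming learned per-class feature weights and a simple normalized dot-product score. The class names, weights, and scoring rule are illustrative assumptions, not the paper's formulation, and the paper's error-function-based learning is not reproduced here:

```python
import numpy as np

# Illustrative prototypes: per-class feature weights (hypothetical classes).
class_prototypes = {
    "class_A": np.array([1.0, 0.9, 0.0, 0.1]),
    "class_B": np.array([0.0, 0.2, 1.0, 0.8]),
}

def match_scores(confidences):
    """Score each learned class against observed feature-confidence values."""
    return {name: float(confidences @ proto / proto.sum())
            for name, proto in class_prototypes.items()}

# Overlapped feature train: features of two objects are present at once,
# so both classes should score highly (simultaneous recognition).
overlapped = np.clip(class_prototypes["class_A"] + class_prototypes["class_B"], 0, 1)
print(match_scores(overlapped))
```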



Citations
Journal ArticleDOI
TL;DR: It is observed that the GAL network classifies patients' heart sounds more successfully than the LVQ network.
Abstract: A novel method is presented for the classification of heart sounds (HSs). The wavelet transform is applied to a window spanning two periods of HSs. Two analyses are carried out on the signals in the window: segmentation of the first and second HSs, and extraction of the features. After the segmentation, feature vectors are formed using the wavelet detail coefficients at the sixth decomposition level. The best feature elements are selected by dynamic programming. A grow-and-learn (GAL) network and a learning vector quantization (LVQ) network are used for the classification of seven different HSs. It is observed that the GAL network classifies patients' HSs more successfully than the LVQ network.
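A brief sketch of the feature-extraction step described above, using PyWavelets. The mother wavelet ('db4'), the sampling rate, and the use of coefficient magnitudes are assumptions not stated in the abstract:

```python
import numpy as np
import pywt  # PyWavelets

fs = 2000                          # assumed sampling rate (Hz)
window = np.random.randn(2 * fs)   # stand-in for a window of two HS periods

# Six-level discrete wavelet decomposition of the window; wavedec returns
# [cA6, cD6, cD5, ..., cD1], so the level-6 detail coefficients are coeffs[1].
coeffs = pywt.wavedec(window, "db4", level=6)
detail_level6 = coeffs[1]
feature_vector = np.abs(detail_level6)  # magnitudes as raw feature elements
print(feature_vector.shape)
```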

147 citations

Journal ArticleDOI
01 Mar 1994
TL;DR: Two connectionist models for mid-level vision problems, namely edge and line linking, are presented, together with experimental results and a proof of convergence of the network models.
Abstract: In this paper, two connectionist models for mid-level vision problems, namely edge and line linking, are presented. In both models the processing elements (PEs) are arranged in a two-dimensional lattice. The models take the strengths and the corresponding directions of the fragmented edges (or lines) as input. The state of each processing element is updated by the activations received from neighboring processing elements. In one model each neuron interacts with its eight neighbors, while in the other each neuron interacts over a larger neighborhood. After convergence, the outputs of the neurons represent the linked edge (or line) segments in the image. The first model directly produces the linked line segments, while the second produces a diffused edge cover; the linked edge segments are then found by extracting the spine of the diffused edge cover. Experimental results and a proof of convergence of the network models are also provided.
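A toy sketch of the neighbor-support idea in the first model: each lattice site repeatedly mixes its own strength with the mean activation of its eight neighbors, so a gap along a fragmented line gains support over iterations. The update rule and parameters are illustrative only; the paper's models also use edge directions, which are omitted here:

```python
import numpy as np

def link_edges(strength, iters=10, alpha=0.5):
    """Iteratively mix each site's strength with its 8-neighbor mean."""
    s = strength.copy()
    for _ in range(iters):
        p = np.pad(s, 1)  # zero padding at the lattice border
        nbr = sum(np.roll(np.roll(p, dy, 0), dx, 1)
                  for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                  if (dy, dx) != (0, 0))[1:-1, 1:-1]
        s = np.clip((1 - alpha) * s + alpha * nbr / 8.0, 0.0, 1.0)
    return s

frag = np.zeros((32, 32))
frag[16, 5:12] = frag[16, 18:27] = 1.0        # a broken horizontal line
print(link_edges(frag)[16, 12:18].round(3))   # the gap gains support
```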

62 citations

Journal ArticleDOI
TL;DR: A neural network with a multilayer perceptron (MLP) structure is used as the base learning model, and simulation results show the effectiveness of the method on various video-stream data sets.
Abstract: This paper proposes an incremental multiple-object recognition and localization (IMORL) method. The objective of IMORL is to adaptively learn multiple interesting objects in an image. Unlike conventional multiple-object learning algorithms, the proposed method can automatically and adaptively learn from continuous video streams over the entire learning life. This incremental learning capability enables the approach to accumulate experience and use such knowledge to benefit future learning and decision making. Furthermore, IMORL can effectively handle variations in the number of instances in each data chunk over the learning life. Another important aspect analyzed in this paper is the concept-drift issue: in multiple-object learning scenarios, it is common for new interesting objects to be introduced during the learning life. To handle this situation, IMORL uses an adaptive learning principle to autonomously adjust to such new information. The proposed approach is independent of the base learning model, such as a decision tree, neural network, support vector machine, or others, which provides the flexibility to use this method as a general learning methodology in multiple-object learning scenarios. In this paper, we use a neural network with a multilayer perceptron (MLP) structure as the base learning model and test the performance of the method on various video-stream data sets. Simulation results show the effectiveness of this method.
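A minimal sketch of the chunk-by-chunk learning loop with an MLP base learner, using scikit-learn's partial_fit on synthetic data. IMORL's feature extraction, localization, and drift handling are not reproduced here, only the incremental principle:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
all_classes = np.arange(3)                    # object classes in the stream
mlp = MLPClassifier(hidden_layer_sizes=(32,))

for chunk in range(10):                       # each chunk = one batch of frames
    X = rng.normal(size=(64, 8))              # synthetic feature vectors
    y = rng.integers(0, 3, size=64)
    # partial_fit accumulates experience instead of retraining from scratch
    mlp.partial_fit(X, y, classes=all_classes)

print(mlp.predict(rng.normal(size=(5, 8))))
```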

48 citations


Cites background from "A connectionist model for category ..."

  • ...In [5], a connectionist model for recognizing multiple objects was presented....


Journal ArticleDOI
TL;DR: The relevance of integrating the merits of fuzzy set theory and neural network models for designing an efficient decision-making system is explained, and the feasibility of such systems and different ways of integration are described.
Abstract: The relevance of integrating the merits of fuzzy set theory and neural network models for designing an efficient decision-making system is explained. The feasibility of such systems and the different ways of integration attempted so far, in the context of image processing and pattern recognition, are described. The scope for further research and development is outlined. An extensive bibliography is also provided.

25 citations

Journal ArticleDOI
TL;DR: A connectionist system has been designed for learning and simultaneous recognition of flat industrial objects by integrating psychological hypotheses with the generalized Hough transform technique; the system uses a mechanism of selective attention for initial hypothesis generation.
Abstract: A connectionist system has been designed for learning and simultaneous recognition of flat industrial objects (based on the concepts of conventional and structured connectionist computing) by integrating psychological hypotheses with the generalized Hough transform technique. The psychological facts include the evidence for the separation of two regions for identification ("what it is") and pose estimation ("where it is"). The system uses a mechanism of selective attention for initial hypothesis generation. A special two-stage training paradigm has been developed for learning the structural relationships between the features and objects and the importance values of the features with respect to the objects. The performance of the system has been demonstrated on real-life data for both single and mixed (overlapped) instances of object categories. The robustness of the system with respect to noise and false alarms has been theoretically investigated.
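A compact sketch of generalized Hough transform voting, the technique this system builds on (illustrative only; the paper's two-stage training and attention mechanism are not shown). An R-table maps a model feature's local orientation to displacements toward a reference point, and observed features then vote for candidate object locations:

```python
import numpy as np
from collections import defaultdict

def build_r_table(points, orientations, ref):
    """Map each feature's orientation to a displacement toward the reference."""
    table = defaultdict(list)
    for (x, y), theta in zip(points, orientations):
        table[round(theta, 1)].append((ref[0] - x, ref[1] - y))
    return table

def vote(table, points, orientations, shape=(64, 64)):
    """Each observed feature casts votes; accumulator peaks hypothesize poses."""
    acc = np.zeros(shape)
    for (x, y), theta in zip(points, orientations):
        for dx, dy in table.get(round(theta, 1), []):
            u, v = x + dx, y + dy
            if 0 <= u < shape[0] and 0 <= v < shape[1]:
                acc[u, v] += 1
    return acc

model_pts = [(10, 10), (10, 20), (20, 10)]
model_ori = [0.0, 1.6, 3.1]
table = build_r_table(model_pts, model_ori, ref=(15, 15))
scene_pts = [(40, 40), (40, 50), (50, 40)]            # same shape, translated
acc = vote(table, scene_pts, model_ori)
print(np.unravel_index(acc.argmax(), acc.shape))      # peak at (45, 45)
```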

24 citations


References
Journal ArticleDOI
TL;DR: A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
Abstract: Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.
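A minimal sketch of the model described above: Hebbian (outer-product) storage of bipolar patterns and asynchronous threshold updates that recover a stored memory from a corrupted subpart. Pattern sizes and the number of update sweeps are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(1)
patterns = rng.choice([-1, 1], size=(3, 64))          # stored memories
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)                                # no self-connections

def recall(probe, sweeps=5):
    """Asynchronously update units until the state settles on a memory."""
    s = probe.copy()
    for _ in range(sweeps):
        for i in rng.permutation(len(s)):             # asynchronous updates
            s[i] = 1 if W[i] @ s >= 0 else -1
    return s

corrupted = patterns[0].copy()
corrupted[:20] = rng.choice([-1, 1], size=20)         # damage a subpart
print(np.array_equal(recall(corrupted), patterns[0])) # usually True
```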

15,722 citations

Book
01 Jan 1988

8,883 citations

Book
01 Jan 1984
TL;DR: The purpose and nature of biological memory, as well as various other aspects of memory, are explained.
Abstract (book table of contents):
1. Various Aspects of Memory: 1.1 On the Purpose and Nature of Biological Memory; 1.1.1 Some Fundamental Concepts; 1.1.2 The Classical Laws of Association; 1.1.3 On Different Levels of Modelling; 1.2 Questions Concerning the Fundamental Mechanisms of Memory; 1.2.1 Where Do the Signals Relating to Memory Act Upon?; 1.2.2 What Kind of Encoding is Used for Neural Signals?; 1.2.3 What are the Variable Memory Elements?; 1.2.4 How are Neural Signals Addressed in Memory?; 1.3 Elementary Operations Implemented by Associative Memory; 1.3.1 Associative Recall; 1.3.2 Production of Sequences from the Associative Memory; 1.3.3 On the Meaning of Background and Context; 1.4 More Abstract Aspects of Memory; 1.4.1 The Problem of Infinite-State Memory; 1.4.2 Invariant Representations; 1.4.3 Symbolic Representations; 1.4.4 Virtual Images; 1.4.5 The Logic of Stored Knowledge.
2. Pattern Mathematics: 2.1 Mathematical Notations and Methods; 2.1.1 Vector Space Concepts; 2.1.2 Matrix Notations; 2.1.3 Further Properties of Matrices; 2.1.4 Matrix Equations; 2.1.5 Projection Operators; 2.1.6 On Matrix Differential Calculus; 2.2 Distance Measures for Patterns; 2.2.1 Measures of Similarity and Distance in Vector Spaces; 2.2.2 Measures of Similarity and Distance Between Symbol Strings; 2.2.3 More Accurate Distance Measures for Text.
3. Classical Learning Systems: 3.1 The Adaptive Linear Element (Adaline); 3.1.1 Description of Adaptation by the Stochastic Approximation; 3.2 The Perceptron; 3.3 The Learning Matrix; 3.4 Physical Realization of Adaptive Weights; 3.4.1 Perceptron and Adaline; 3.4.2 Classical Conditioning; 3.4.3 Conjunction Learning Switches; 3.4.4 Digital Representation of Adaptive Circuits; 3.4.5 Biological Components.
4. A New Approach to Adaptive Filters: 4.1 Survey of Some Necessary Functions; 4.2 On the "Transfer Function" of the Neuron; 4.3 Models for Basic Adaptive Units; 4.3.1 On the Linearization of the Basic Unit; 4.3.2 Various Cases of Adaptation Laws; 4.3.3 Two Limit Theorems; 4.3.4 The Novelty Detector; 4.4 Adaptive Feedback Networks; 4.4.1 The Autocorrelation Matrix Memory; 4.4.2 The Novelty Filter.
5. Self-Organizing Feature Maps: 5.1 On the Feature Maps of the Brain; 5.2 Formation of Localized Responses by Lateral Feedback; 5.3 Computational Simplification of the Process; 5.3.1 Definition of the Topology-Preserving Mapping; 5.3.2 A Simple Two-Dimensional Self-Organizing System; 5.4 Demonstrations of Simple Topology-Preserving Mappings; 5.4.1 Images of Various Distributions of Input Vectors; 5.4.2 "The Magic TV"; 5.4.3 Mapping by a Feeler Mechanism; 5.5 Tonotopic Map; 5.6 Formation of Hierarchical Representations; 5.6.1 Taxonomy Example; 5.6.2 Phoneme Map; 5.7 Mathematical Treatment of Self-Organization; 5.7.1 Ordering of Weights; 5.7.2 Convergence Phase; 5.8 Automatic Selection of Feature Dimensions.
6. Optimal Associative Mappings: 6.1 Transfer Function of an Associative Network; 6.2 Autoassociative Recall as an Orthogonal Projection; 6.2.1 Orthogonal Projections; 6.2.2 Error-Correcting Properties of Projections; 6.3 The Novelty Filter; 6.3.1 Two Examples of Novelty Filter; 6.3.2 Novelty Filter as an Autoassociative Memory; 6.4 Autoassociative Encoding; 6.4.1 An Example of Autoassociative Encoding; 6.5 Optimal Associative Mappings; 6.5.1 The Optimal Linear Associative Mapping; 6.5.2 Optimal Nonlinear Associative Mappings; 6.6 Relationship Between Associative Mapping, Linear Regression, and Linear Estimation; 6.6.1 Relationship of the Associative Mapping to Linear Regression; 6.6.2 Relationship of the Regression Solution to the Linear Estimator; 6.7 Recursive Computation of the Optimal Associative Mapping; 6.7.1 Linear Corrective Algorithms; 6.7.2 Best Exact Solution (Gradient Projection); 6.7.3 Best Approximate Solution (Regression); 6.7.4 Recursive Solution in the General Case; 6.8 Special Cases; 6.8.1 The Correlation Matrix Memory; 6.8.2 Relationship Between Conditional Averages and Optimal Estimator.
7. Pattern Recognition: 7.1 Discriminant Functions; 7.2 Statistical Formulation of Pattern Classification; 7.3 Comparison Methods; 7.4 The Subspace Methods of Classification; 7.4.1 The Basic Subspace Method; 7.4.2 The Learning Subspace Method (LSM); 7.5 Learning Vector Quantization; 7.6 Feature Extraction; 7.7 Clustering; 7.7.1 Simple Clustering (Optimization Approach); 7.7.2 Hierarchical Clustering (Taxonomy Approach); 7.8 Structural Pattern Recognition Methods.
8. More About Biological Memory: 8.1 Physiological Foundations of Memory; 8.1.1 On the Mechanisms of Memory in Biological Systems; 8.1.2 Structural Features of Some Neural Networks; 8.1.3 Functional Features of Neurons; 8.1.4 Modelling of the Synaptic Plasticity; 8.1.5 Can the Memory Capacity Ensue from Synaptic Changes?; 8.2 The Unified Cortical Memory Model; 8.2.1 The Laminar Network Organization; 8.2.2 On the Roles of Interneurons; 8.2.3 Representation of Knowledge Over Memory Fields; 8.2.4 Self-Controlled Operation of Memory; 8.3 Collateral Reading; 8.3.1 Physiological Results Relevant to Modelling; 8.3.2 Related Modelling.
9. Notes on Neural Computing: 9.1 First Theoretical Views of Neural Networks; 9.2 Motives for the Neural Computing Research; 9.3 What Could the Purpose of the Neural Networks be?; 9.4 Definitions of Artificial "Neural Computing" and General Notes on Neural Modelling; 9.5 Are the Biological Neural Functions Localized or Distributed?; 9.6 Is Nonlinearity Essential to Neural Computing?; 9.7 Characteristic Differences Between Neural and Digital Computers; 9.7.1 The Degree of Parallelism of the Neural Networks is Still Higher than that of any "Massively Parallel" Digital Computer; 9.7.2 Why the Neural Signals Cannot be Approximated by Boolean Variables; 9.7.3 The Neural Circuits do not Implement Finite Automata; 9.7.4 Undue Views of the Logic Equivalence of the Brain and Computers on a High Level; 9.8 "Connectionist Models"; 9.9 How can the Neural Computers be Programmed?
10. Optical Associative Memories: 10.1 Nonholographic Methods; 10.2 General Aspects of Holographic Memories; 10.3 A Simple Principle of Holographic Associative Memory; 10.4 Addressing in Holographic Memories; 10.5 Recent Advances of Optical Associative Memories.
Bibliography on Pattern Recognition. References.

8,132 citations

Journal ArticleDOI
TL;DR: This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification and exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components.
Abstract: Artificial neural net models have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition. These models are composed of many nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural nets. Computational elements or nodes are connected via weights that are typically adapted during use to improve performance. There has been a recent resurgence in the field of artificial neural nets caused by new net topologies and algorithms, analog VLSI implementation techniques, and the belief that massive parallelism is essential for high performance speech and image recognition. This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification. These nets are highly parallel building blocks that illustrate neural net components and design principles and can be used to construct more complex systems. In addition to describing these nets, a major emphasis is placed on exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components. Single-layer nets can implement algorithms required by Gaussian maximum-likelihood classifiers and optimum minimum-error classifiers for binary patterns corrupted by noise. More generally, the decision regions required by any classification algorithm can be generated in a straightforward manner by three-layer feed-forward nets.
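A small illustration of one claim above: for Gaussian classes with identity covariance and equal priors, the maximum-likelihood classifier reduces to a single layer of linear units followed by picking the largest output. The class means here are assumed known:

```python
import numpy as np

# Single-layer net computing w_k.x + b_k per class, then winner-take-all.
means = np.array([[0.0, 0.0], [3.0, 3.0]])   # class means (assumed known)
W = means                                     # weights w_k = mu_k
b = -0.5 * (means ** 2).sum(axis=1)           # biases b_k = -||mu_k||^2 / 2

def classify(x):
    """Gaussian ML decision via one linear layer plus argmax."""
    return int(np.argmax(W @ x + b))

print(classify(np.array([0.2, -0.1])))        # -> 0
print(classify(np.array([2.8, 3.3])))         # -> 1
```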

7,595 citations


"A connectionist model for category ..." refers background in this paper

  • ...top-down link will be the winner-take-all [17] and remains enabled; the other hidden nodes will be disabled....


  • ...The basic concepts of neural networks are presented in various surveys [17], [3], [4], [7]....


Journal ArticleDOI
TL;DR: This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory.
Abstract: The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus...

7,401 citations