Journal ArticleDOI

A connectionist model for category perception: theory and implementation

TL;DR: A connectionist model for learning and recognizing objects (or object classes) is presented, and its theory of learning is developed on the basis of probabilistic measures.
Abstract: A connectionist model for learning and recognizing objects (or object classes) is presented. The learning and recognition system uses confidence values for the presence of a feature. The network can recognize multiple objects simultaneously when the corresponding overlapped feature train is presented at the input. An error function is defined, and it is minimized to obtain the optimal set of object classes. The model is capable of learning each individual object in the supervised mode. The theory of learning is developed on the basis of probabilistic measures. Experimental results are presented. The model can be applied to the detection of multiple objects occluding each other.
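To make the recipe above concrete, here is a minimal sketch (our reading, not the authors' implementation) of supervised prototype learning over feature confidences and greedy error-function minimization for multi-object recognition. The class name `CategoryPerceptron`, the running-average learning rule, and the clipped-superposition error are all illustrative assumptions.

```python
# Minimal sketch: objects are feature-confidence prototypes learned in
# supervised mode; recognition greedily picks the subset of classes whose
# superimposed prototypes best explain a (possibly overlapped) input.
import numpy as np

class CategoryPerceptron:  # hypothetical name, not from the paper
    def __init__(self, n_features):
        self.n_features = n_features
        self.prototypes = {}  # class label -> mean feature-confidence vector
        self.counts = {}

    def learn(self, label, confidences):
        """Supervised mode: running average of feature confidences per class."""
        x = np.asarray(confidences, dtype=float)
        n = self.counts.get(label, 0)
        p = self.prototypes.get(label, np.zeros(self.n_features))
        self.prototypes[label] = (p * n + x) / (n + 1)
        self.counts[label] = n + 1

    def recognize(self, confidences, max_objects=3):
        """Greedily add classes while the error function decreases.

        The 'error function' here is the squared distance between the input
        confidence vector and the clipped sum of the selected prototypes."""
        x = np.asarray(confidences, dtype=float)

        def error(classes):
            if not classes:
                return np.sum(x ** 2)
            recon = np.clip(sum(self.prototypes[c] for c in classes), 0, 1)
            return np.sum((x - recon) ** 2)

        selected, best = [], error([])
        while len(selected) < max_objects:
            candidates = [c for c in self.prototypes if c not in selected]
            if not candidates:
                break
            c_best = min(candidates, key=lambda c: error(selected + [c]))
            e = error(selected + [c_best])
            if e >= best:
                break
            selected.append(c_best)
            best = e
        return selected
```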
Citations
01 Jan 1997
TL;DR: This dissertation focuses on the representation of uncertainty in cortical neurons and the role such representations play in the function of the central nervous system.
Abstract: (table of contents)
1 Introduction
1.1 Background
1.1.1 Neuroscience Background
1.1.2 Neural Network Background
1.2 Dissertation Overview
2 A Framework for Cortical Modeling
2.1 The Representation of Uncertainty
2.1.1 Chaotic Instability
2.1.2 Previous Methods of Representing Uncertainty
2.2 Theory Formulation
2.2.1 Chaotic Base Condition
2.2.2 Stability Bubbles
2.2.3 Confined Activation
2.3 Cortical Architecture
2.3.1 Formulation of the Interconnection Structure
2.3.2 Local Cortical Circuits
2.4 The Function of the Cerebral Cortex
2.4.1 Selective Attention
2.4.2 Dynamic Binding
2.4.3 Hierarchical Memory
2.5 Summary
3 Chaotic Neurodynamic Networks
3.1 Cortical Behavior Modeling
3.1.1 Characterization of Behavior
3.1.2 The Computational Role of Chaos
3.2 Simulated Laminar Structure
3.2.1 Lateral Interaction Function
3.2.2 Simulation of Chaotic Base Condition
3.2.3 Response to Stimuli
3.3 Summary
4 Self Partitioning Cortical Structures
4.1 Cortical Cluster Formation
4.1.1 Threshold Variation Study
4.1.2 Function Variation Study
4.2 Disordered Boolean Circuits
4.3 A Developmental Model of the Striate Cortex
4.4 Summary
5 Confined Activation
5.1 Traditional Pattern Completion
5.2 Information Driven Dynamics
5.2.1 The Preservation of Uncertainty
5.2.2 Quiet Neurons
5.2.3 Energy as an Estimate of Entropy
5.2.4 Collective and Individual Uncertainties
5.3 The EDD Model
5.3.1 Small Network Examples
5.3.2 Larger Simulations
5.4 Feature Based Retrieval
5.5 Summary
6 Conclusions
6.1 Discussion
6.2 Future Research
Bibliography
A Simulations of the Laminar Structure Model
B Function Variation: Complete
C Function Variation: Reduced
D Self Partitioning on Walsh Patterns
E Feature Based Retrieval Results
Vita

1 citation


Cites background from "A connectionist model for category ..."

  • ...Periodic oscillatory neurodynamics appear in many of the new artificial neural network models that seek to be more physiologically realistic [1 , 5, 15, 37, 47, 48]....

    [...]

  • ...Indeed neuropsychological studies have long identified correlations of the activity of specific cortical regions with certain motor and cognitive tasks or sensory input [1, 12, 26, 27, 35, 38, 40, 41, 50]....

    [...]

  • ...A recent trend in neurocomputing is toward physiologically realistic models of brain function [1, 5, 15, 37, 47, 48]....

    [...]

Proceedings ArticleDOI
27 Feb 1996
TL;DR: A novel cellular connectionist neural network model for implementing clustering-based Bayesian image segmentation with Gibbs random field spatial constraints is described, and it is proved that this cellular neural network converges to the desired steady state under a properly designed update scheme.
Abstract: We describe in this paper a novel cellular connectionist neural network model for the implementation of clustering-based Bayesian image segmentation with Gibbs random field spatial constraints. The success of such an algorithm is largely due to the neighborhood constraints modeled by the Gibbs random field. However, the iterative enforcement of the neighborhood constraints involved in the Bayesian estimation would generally need tremendous computational power. Such computational requirement hinders the real-time application of the Bayesian image segmentation algorithms. The cellular connectionist model proposed in this paper aims at implementing the Bayesian image segmentation with real-time processing potentials. With a cellular neural network architecture mapped onto the image spatial domain, the powerful Gibbs spatial constraints are realized through the interactions among neurons connected through their spatial cellular layout. This network model is structurally similar to the conventional cellular network. However, in this new cellular model, the processing elements designed within the connectionist network are functionally more versatile in order to meet the challenging needs of Bayesian image segmentation based on Gibbs random field. We prove that this cellular neural network does converge to the desired steady state with a properly designed update scheme. An example of CT volumetric medical image segmentation is presented to demonstrate the potential of this cellular neural network for a specific image segmentation application.
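For intuition, the sketch below implements the underlying Gibbs-random-field idea with plain iterated conditional modes (ICM) on a pixel lattice rather than the paper's cellular network. The `beta` smoothness weight and the 4-neighbour Potts prior are our illustrative assumptions.

```python
# ICM sketch: each pixel's label trades off closeness to its cluster mean
# (data term) against agreement with its 4-connected neighbours (a Potts
# smoothness prior). Deterministic raster-scan updates settle to a local
# minimum, echoing the convergence-to-steady-state property claimed above.
import numpy as np

def icm_segment(image, means, beta=1.0, n_iter=10):
    """image: 2-D float array; means: per-class intensity means."""
    means = np.asarray(means, dtype=float)
    labels = np.abs(image[..., None] - means).argmin(-1)  # initial clustering
    H, W = image.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                # data term: squared distance to each class mean
                data = (image[i, j] - means) ** 2
                # prior term: count of disagreeing neighbours per class
                prior = np.zeros_like(means)
                for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < H and 0 <= nj < W:
                        prior += (labels[ni, nj] != np.arange(len(means)))
                labels[i, j] = (data + beta * prior).argmin()
    return labels
```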

1 citation

Book ChapterDOI
24 Aug 2013
TL;DR: SIFT and SURF are compared in terms of speed and recognition accuracy; the results show that SURF achieves better accuracy on the cluttered dataset and is faster.
Abstract: This paper presents a method for classifying new objects using SURF descriptors and the shape skeletons of objects in a dataset. The objective of the research is to classify all objects present in each image. The method consists of three main stages: image segmentation, object recognition, and object class recognition. Regions of interest are obtained by saliency-based region selection. SIFT and SURF are also compared in terms of speed and recognition accuracy. The results show that SURF is faster and achieves better accuracy on the cluttered dataset. For object class recognition, shape skeletons help to classify objects of the same category. Finally, the outputs are trained with fuzzy logic to make more accurate decisions. Results show the accuracy improved up to 94%.
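A rough OpenCV sketch of the descriptor-matching stage is given below; it covers only SIFT/SURF feature matching, not the paper's saliency-based segmentation, shape skeletons, or fuzzy decision stage, and the ratio-test threshold is an assumption. SIFT ships with opencv-python; SURF requires an opencv-contrib build with the nonfree modules enabled.

```python
# Hypothetical helper: count distinctive descriptor matches between two
# images using SIFT (or SURF, where available) and Lowe's ratio test.
import cv2

def count_good_matches(query_path, train_path, ratio=0.75):
    img1 = cv2.imread(query_path, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(train_path, cv2.IMREAD_GRAYSCALE)
    detector = cv2.SIFT_create()  # SURF: cv2.xfeatures2d.SURF_create() in nonfree builds
    _, des1 = detector.detectAndCompute(img1, None)
    _, des2 = detector.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = []
    for pair in matcher.knnMatch(des1, des2, k=2):
        # Lowe's ratio test keeps only distinctive correspondences
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good)
```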

1 citation


Additional excerpts

  • ...a connectionist model was presented by [9] to detect and learn multiple objects in images and [10] presented a learning method considering multiple views of multiple objects in images....

    [...]

Proceedings ArticleDOI
23 Apr 2008
TL;DR: Instead of classifying a single object as in traditional image classification problems, the proposed method automatically learns multiple objects simultaneously, and the learned associations are adaptively accumulated for object recognition in the testing stage.
Abstract: This paper proposes an innovative approach to multi-object recognition in intelligent sensor networks for homeland security and defense. Unlike conventional information analysis, data mining in such networks is typically characterized by high information ambiguity/uncertainty, data redundancy, high dimensionality, and real-time constraints. Furthermore, since a typical military network normally includes multiple mobile sensor platforms, ground forces, fortified tanks, combat aircraft, and other resources, it is critical to develop intelligent data mining approaches that fuse different information sources to understand dynamic environments, support decision-making processes, and ultimately achieve mission goals. This paper aims to address these issues with a focus on multi-object recognition. Instead of classifying a single object, as in traditional image classification problems, the proposed method can automatically learn multiple objects simultaneously. Image segmentation techniques are used to identify the interesting regions in the field, which correspond to multiple objects such as soldiers or tanks. Since different objects come with different feature sizes, we propose a feature scaling method to represent each object in the same number of dimensions. This is achieved by linear/nonlinear scaling and sampling techniques. Finally, support vector machine (SVM) based learning algorithms are developed to learn and build the associations for different objects, and this knowledge is adaptively accumulated for object recognition in the testing stage. We test the effectiveness of the proposed method in different simulated military environments.
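The feature-scaling step can be pictured as follows: variable-length feature vectors are linearly resampled to a common dimensionality and passed to an SVM. This is our sketch of the described approach, with illustrative names, sizes, and random stand-in data rather than the paper's sensor features.

```python
# Sketch: resample per-object feature vectors of different lengths to a
# fixed dimensionality (linear interpolation), then train an SVM.
import numpy as np
from sklearn.svm import SVC

def rescale_features(features, target_dim=128):
    """Linearly resample a 1-D feature vector to target_dim values."""
    f = np.asarray(features, dtype=float)
    old_x = np.linspace(0.0, 1.0, num=len(f))
    new_x = np.linspace(0.0, 1.0, num=target_dim)
    return np.interp(new_x, old_x, f)

# illustrative stand-in data: objects with differently sized feature vectors
rng = np.random.default_rng(0)
raw = [rng.random(rng.integers(50, 300)) for _ in range(40)]
X = np.stack([rescale_features(f) for f in raw])
y = rng.integers(0, 2, size=len(X))  # e.g. soldier vs. tank labels
clf = SVC(kernel="rbf").fit(X, y)
```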

1 citation


Additional excerpts

  • ...The application of connectionist model for multiple objects recognition was investigated in [2]....

    [...]

Book ChapterDOI
01 Jan 2010
TL;DR: The stages of visual pattern recognition, namely extraction of textural features, recognition of textures, extraction of regions of uniform texture from the image, and recognition of the shape of those regions, can all be realized effectively on neurocomputers.
Abstract: Neural networks are widely used for solving pattern recognition problems [1–5]. Let us examine the following stages in the recognition of visual patterns: extraction of textural features; recognition of textures; extraction of regions with uniform texture from the image; and recognition of the shape of these regions. All these stages can be realized effectively on neurocomputers. Textural features are usually understood as statistical characteristics of local sections of an image [6, 7]. We understand texture as a property of a local section of an image that holds over a certain extended section of the image. The foliage of trees, grass, asphalt pavement, and so on can serve as examples of image sections having identical texture.
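As a minimal illustration of "textural features as local statistics", the sketch below computes mean and variance maps over image patches; the window size and the choice of statistics are our assumptions, not the chapter's exact features.

```python
# Sketch: the simplest statistical texture features, computed over
# non-overlapping win x win patches of a grayscale image.
import numpy as np

def local_texture_stats(image, win=8):
    """Return (mean, variance) maps over non-overlapping win x win patches."""
    img = np.asarray(image, dtype=float)
    H, W = img.shape
    H, W = H - H % win, W - W % win          # crop to a multiple of win
    patches = img[:H, :W].reshape(H // win, win, W // win, win)
    patches = patches.transpose(0, 2, 1, 3).reshape(H // win, W // win, -1)
    return patches.mean(axis=-1), patches.var(axis=-1)
```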

1 citation

References
Journal ArticleDOI
TL;DR: A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
Abstract: Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.
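The model described here is the Hopfield network. A compact sketch of its two standard ingredients (Hebbian outer-product storage and asynchronous threshold updates) follows; the sweep count and random ordering are illustrative choices.

```python
# Hopfield-style content-addressable memory: store +/-1 patterns with the
# Hebbian outer-product rule, then recover a stored memory from a
# sufficiently large subpart via asynchronous updates.
import numpy as np

def train_hopfield(patterns):
    """patterns: array of +/-1 vectors, shape (n_patterns, n_units)."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / P.shape[1]
    np.fill_diagonal(W, 0.0)  # no self-connections
    return W

def recall(W, probe, n_sweeps=10, seed=0):
    """Asynchronous updates in random order until the state settles."""
    rng = np.random.default_rng(seed)
    s = np.asarray(probe, dtype=float).copy()
    for _ in range(n_sweeps):
        for i in rng.permutation(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s
```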

16,652 citations

Book
01 Jan 1988

8,937 citations

Journal ArticleDOI
TL;DR: This article is concerned primarily with the second and third questions, which are still subject to a vast amount of speculation and for which the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory.
Abstract: The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus

8,434 citations

Book
01 Jan 1984
TL;DR: The purpose and nature of biological memory are explained, together with various other aspects of memory.
Abstract: (table of contents)
1. Various Aspects of Memory.- 1.1 On the Purpose and Nature of Biological Memory.- 1.1.1 Some Fundamental Concepts.- 1.1.2 The Classical Laws of Association.- 1.1.3 On Different Levels of Modelling.- 1.2 Questions Concerning the Fundamental Mechanisms of Memory.- 1.2.1 Where Do the Signals Relating to Memory Act Upon?.- 1.2.2 What Kind of Encoding is Used for Neural Signals?.- 1.2.3 What are the Variable Memory Elements?.- 1.2.4 How are Neural Signals Addressed in Memory?.- 1.3 Elementary Operations Implemented by Associative Memory.- 1.3.1 Associative Recall.- 1.3.2 Production of Sequences from the Associative Memory.- 1.3.3 On the Meaning of Background and Context.- 1.4 More Abstract Aspects of Memory.- 1.4.1 The Problem of Infinite-State Memory.- 1.4.2 Invariant Representations.- 1.4.3 Symbolic Representations.- 1.4.4 Virtual Images.- 1.4.5 The Logic of Stored Knowledge.
2. Pattern Mathematics.- 2.1 Mathematical Notations and Methods.- 2.1.1 Vector Space Concepts.- 2.1.2 Matrix Notations.- 2.1.3 Further Properties of Matrices.- 2.1.4 Matrix Equations.- 2.1.5 Projection Operators.- 2.1.6 On Matrix Differential Calculus.- 2.2 Distance Measures for Patterns.- 2.2.1 Measures of Similarity and Distance in Vector Spaces.- 2.2.2 Measures of Similarity and Distance Between Symbol Strings.- 2.2.3 More Accurate Distance Measures for Text.
3. Classical Learning Systems.- 3.1 The Adaptive Linear Element (Adaline).- 3.1.1 Description of Adaptation by the Stochastic Approximation.- 3.2 The Perceptron.- 3.3 The Learning Matrix.- 3.4 Physical Realization of Adaptive Weights.- 3.4.1 Perceptron and Adaline.- 3.4.2 Classical Conditioning.- 3.4.3 Conjunction Learning Switches.- 3.4.4 Digital Representation of Adaptive Circuits.- 3.4.5 Biological Components.
4. A New Approach to Adaptive Filters.- 4.1 Survey of Some Necessary Functions.- 4.2 On the "Transfer Function" of the Neuron.- 4.3 Models for Basic Adaptive Units.- 4.3.1 On the Linearization of the Basic Unit.- 4.3.2 Various Cases of Adaptation Laws.- 4.3.3 Two Limit Theorems.- 4.3.4 The Novelty Detector.- 4.4 Adaptive Feedback Networks.- 4.4.1 The Autocorrelation Matrix Memory.- 4.4.2 The Novelty Filter.
5. Self-Organizing Feature Maps.- 5.1 On the Feature Maps of the Brain.- 5.2 Formation of Localized Responses by Lateral Feedback.- 5.3 Computational Simplification of the Process.- 5.3.1 Definition of the Topology-Preserving Mapping.- 5.3.2 A Simple Two-Dimensional Self-Organizing System.- 5.4 Demonstrations of Simple Topology-Preserving Mappings.- 5.4.1 Images of Various Distributions of Input Vectors.- 5.4.2 "The Magic TV".- 5.4.3 Mapping by a Feeler Mechanism.- 5.5 Tonotopic Map.- 5.6 Formation of Hierarchical Representations.- 5.6.1 Taxonomy Example.- 5.6.2 Phoneme Map.- 5.7 Mathematical Treatment of Self-Organization.- 5.7.1 Ordering of Weights.- 5.7.2 Convergence Phase.- 5.8 Automatic Selection of Feature Dimensions.
6. Optimal Associative Mappings.- 6.1 Transfer Function of an Associative Network.- 6.2 Autoassociative Recall as an Orthogonal Projection.- 6.2.1 Orthogonal Projections.- 6.2.2 Error-Correcting Properties of Projections.- 6.3 The Novelty Filter.- 6.3.1 Two Examples of Novelty Filter.- 6.3.2 Novelty Filter as an Autoassociative Memory.- 6.4 Autoassociative Encoding.- 6.4.1 An Example of Autoassociative Encoding.- 6.5 Optimal Associative Mappings.- 6.5.1 The Optimal Linear Associative Mapping.- 6.5.2 Optimal Nonlinear Associative Mappings.- 6.6 Relationship Between Associative Mapping, Linear Regression, and Linear Estimation.- 6.6.1 Relationship of the Associative Mapping to Linear Regression.- 6.6.2 Relationship of the Regression Solution to the Linear Estimator.- 6.7 Recursive Computation of the Optimal Associative Mapping.- 6.7.1 Linear Corrective Algorithms.- 6.7.2 Best Exact Solution (Gradient Projection).- 6.7.3 Best Approximate Solution (Regression).- 6.7.4 Recursive Solution in the General Case.- 6.8 Special Cases.- 6.8.1 The Correlation Matrix Memory.- 6.8.2 Relationship Between Conditional Averages and Optimal Estimator.
7. Pattern Recognition.- 7.1 Discriminant Functions.- 7.2 Statistical Formulation of Pattern Classification.- 7.3 Comparison Methods.- 7.4 The Subspace Methods of Classification.- 7.4.1 The Basic Subspace Method.- 7.4.2 The Learning Subspace Method (LSM).- 7.5 Learning Vector Quantization.- 7.6 Feature Extraction.- 7.7 Clustering.- 7.7.1 Simple Clustering (Optimization Approach).- 7.7.2 Hierarchical Clustering (Taxonomy Approach).- 7.8 Structural Pattern Recognition Methods.
8. More About Biological Memory.- 8.1 Physiological Foundations of Memory.- 8.1.1 On the Mechanisms of Memory in Biological Systems.- 8.1.2 Structural Features of Some Neural Networks.- 8.1.3 Functional Features of Neurons.- 8.1.4 Modelling of the Synaptic Plasticity.- 8.1.5 Can the Memory Capacity Ensue from Synaptic Changes?.- 8.2 The Unified Cortical Memory Model.- 8.2.1 The Laminar Network Organization.- 8.2.2 On the Roles of Interneurons.- 8.2.3 Representation of Knowledge Over Memory Fields.- 8.2.4 Self-Controlled Operation of Memory.- 8.3 Collateral Reading.- 8.3.1 Physiological Results Relevant to Modelling.- 8.3.2 Related Modelling.
9. Notes on Neural Computing.- 9.1 First Theoretical Views of Neural Networks.- 9.2 Motives for the Neural Computing Research.- 9.3 What Could the Purpose of the Neural Networks be?.- 9.4 Definitions of Artificial "Neural Computing" and General Notes on Neural Modelling.- 9.5 Are the Biological Neural Functions Localized or Distributed?.- 9.6 Is Nonlinearity Essential to Neural Computing?.- 9.7 Characteristic Differences Between Neural and Digital Computers.- 9.7.1 The Degree of Parallelism of the Neural Networks is Still Higher than that of any "Massively Parallel" Digital Computer.- 9.7.2 Why the Neural Signals Cannot be Approximated by Boolean Variables.- 9.7.3 The Neural Circuits do not Implement Finite Automata.- 9.7.4 Undue Views of the Logic Equivalence of the Brain and Computers on a High Level.- 9.8 "Connectionist Models".- 9.9 How can the Neural Computers be Programmed?
10. Optical Associative Memories.- 10.1 Nonholographic Methods.- 10.2 General Aspects of Holographic Memories.- 10.3 A Simple Principle of Holographic Associative Memory.- 10.4 Addressing in Holographic Memories.- 10.5 Recent Advances of Optical Associative Memories.
Bibliography on Pattern Recognition.- References.
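As a taste of Chapter 5's self-organizing feature maps, here is a compact 1-D SOM sketch; the unit count, learning-rate schedule, and Gaussian neighbourhood are illustrative choices, not Kohonen's exact settings.

```python
# A 1-D self-organizing feature map: a chain of units whose weights order
# themselves topologically under winner-take-all updates with a shrinking
# neighbourhood.
import numpy as np

def train_som(data, n_units=20, n_steps=2000, seed=0):
    """data: (n_samples, dim) array of input vectors."""
    rng = np.random.default_rng(seed)
    W = rng.random((n_units, data.shape[1]))
    for t in range(n_steps):
        x = data[rng.integers(len(data))]
        winner = np.argmin(((W - x) ** 2).sum(axis=1))      # best-matching unit
        lr = 0.5 * (1 - t / n_steps)                        # decaying learning rate
        radius = max(1.0, n_units / 2 * (1 - t / n_steps))  # shrinking neighbourhood
        dist = np.abs(np.arange(n_units) - winner)
        h = np.exp(-(dist ** 2) / (2 * radius ** 2))        # neighbourhood function
        W += lr * h[:, None] * (x - W)
    return W
```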

8,197 citations

Book
01 Jan 1988
TL;DR: The second and third questions are still subject to a vast amount of speculation, and the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory.
Abstract: The first of these questions is in the province of sensory physiology, and is the only one for which appreciable understanding has been achieved. This article will be concerned primarily with the second and third questions, which are still subject to a vast amount of speculation, and where the few relevant facts currently supplied by neurophysiology have not yet been integrated into an acceptable theory. With regard to the second question, two alternative positions have been maintained. The first suggests that storage of sensory information is in the form of coded representations or images, with some sort of one-to-one mapping between the sensory stimulus

8,134 citations