Journal ArticleDOI

On edge and line linking with connectionist models

01 Mar 1994 · Vol. 24, Iss. 3, pp. 413–428
TL;DR: Two connectionist models for mid-level vision problems, namely edge and line linking, are presented, together with experimental results and proofs of convergence of the network models.
Abstract: In this paper two connectionist models for mid-level vision problems, namely edge and line linking, are presented. The processing elements (PEs) are arranged in the form of a two-dimensional lattice in both models. The models take the strengths and the corresponding directions of the fragmented edges (or lines) as input. The state of each processing element is updated by the activations received from the neighboring processing elements. In one model, each neuron interacts with its eight neighbors, while in the other, each neuron interacts over a larger neighborhood. After convergence, the outputs of the neurons represent the linked edge (or line) segments in the image. The first model directly produces the linked line segments, while the second produces a diffused edge cover; the linked edge segments are then obtained by extracting the spine of the diffused edge cover. Experimental results and proofs of convergence of the network models are also provided.
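The 8-neighbour relaxation described in the abstract can be sketched roughly as follows. This is a minimal illustration with an assumed update rule and assumed parameters (`gain`, the direction tolerance, the clamping), not the authors' exact model: each processing element accumulates support from neighbours whose edge directions roughly agree with its own, so gaps between aligned fragments fill in.

```python
import math

def link_edges(strength, direction, iters=10, gain=0.1):
    """Hypothetical 8-neighbour linking lattice (illustrative sketch only)."""
    h, w = len(strength), len(strength[0])
    s = [list(row) for row in strength]
    for _ in range(iters):
        nxt = [list(row) for row in s]
        for y in range(h):
            for x in range(w):
                acc = 0.0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        if dy == 0 and dx == 0:
                            continue
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            # support only from neighbours with similar direction
                            diff = abs(direction[y][x] - direction[ny][nx])
                            if min(diff, math.pi - diff) < math.pi / 8:
                                acc += s[ny][nx]
                nxt[y][x] = min(1.0, s[y][x] + gain * acc)
        s = nxt
    return s

# A fragmented horizontal line: the gaps at columns 1 and 3 get linked.
strength = [[0, 0, 0, 0, 0],
            [1, 0, 1, 0, 1],
            [0, 0, 0, 0, 0]]
direction = [[0.0] * 5 for _ in range(3)]
linked = link_edges(strength, direction)
```

A real model of this kind would balance the excitation with decay or inhibition so that activity does not spread away from the edge; the sketch only shows the linking mechanism itself.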
Citations
Journal ArticleDOI
TL;DR: The various applications of neural networks in image processing are categorised into a novel two-dimensional taxonomy of image processing algorithms, and the specific conditions under which they apply are discussed in detail.

1,100 citations

Journal ArticleDOI
TL;DR: An automatic seeded region growing algorithm for color image segmentation that produces good results, comparing favorably with some existing algorithms.

406 citations

Proceedings ArticleDOI
18 Jun 1996
TL;DR: A new (to computer vision) experimental framework that allows quantitative comparisons using subjective ratings made by people, avoiding the issue of pixel-level ground truth.
Abstract: The purpose of this paper is to describe a new (to computer vision) experimental framework which allows us to make quantitative comparisons using subjective ratings made by people. This approach avoids the issue of pixel-level ground truth. As a result, it does not allow us to make statements about the frequency of false positive and false negative errors at the pixel level. Instead, using experimental design and statistical techniques borrowed from psychology, we make statements about whether the outputs of one edge detector are rated statistically significantly higher than the outputs of another. This approach offers itself as a nice complement to signal-based quantitative measures. Also, the evaluation paradigm in this paper is goal oriented; in particular, we consider edge detection in the context of object recognition. The human judges rate the edge detectors based on how well they capture the salient features of real objects. So far, edge detection modules have been designed and evaluated in isolation, except for the recent work by Ramesh and Haralick (1992). The only prior work (that we are aware of) which also uses humans to rate image algorithms is that of Reeves and Higdon (1995), who use human ratings to decide on regularization parameters for image restoration. Fram and Deutsch (1975) also used human subjects; however, the focus was on human versus machine performance rather than using human ratings to compare different edge detectors. The use of human judges to rate image outputs must be approached systematically. Experiments must be designed and conducted carefully, and results interpreted with appropriate statistical tools. The use of statistical analysis in vision system performance characterization has been rare. The only prior work in the area that we are aware of is that of Nair et al. (1995), who used statistical ranking procedures to compare neural network based object recognition systems.

321 citations
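The statistical machinery this paper borrows from psychology can be illustrated with a paired t test on per-image ratings. The detector ratings, the sample size, and the critical value below are made-up illustrative assumptions, not data from the paper:

```python
import math

def paired_t(a, b):
    """Paired t statistic for two sets of ratings of the same images."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((v - mean) ** 2 for v in d) / (n - 1)   # sample variance
    return mean / math.sqrt(var / n)

detector_a = [6, 7, 5, 8, 7, 6, 7, 8]   # hypothetical ratings of detector A's outputs
detector_b = [4, 5, 5, 6, 5, 4, 6, 5]   # ratings of detector B on the same images
t = paired_t(detector_a, detector_b)
# With n - 1 = 7 degrees of freedom, |t| > 2.365 is significant at the 0.05 level.
```

Pairing by image is the key design choice: it removes image-to-image variation in difficulty, so only the between-detector difference is tested.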


Additional excerpts

  • TABLE 1: Edge Detection Algorithms in PAMI (Jan. 93–June 95), SMC (Jan. 93–Aug. 95), R&A (April 94–June 95), CVGIP (Jan. 90–July 95), IJCV (Jan. 90–Dec. 94), PR (Jan. 93–July 95)

    | Source | Nature of the algorithm | Real images presented | Performance on ground truth | Algorithms compared |
    |---|---|---|---|---|
    | [3] (PAMI, 1995) | Covariance models | 3 real | 0 | None |
    | [4] (PAMI, 1994) | Expansion matching | 1 real | 0 | Canny |
    | [5] (PAMI, 1993) | Dispersion of gradient direction | 1 real | 0 | Sobel |
    | [6] (PAMI, 1993) | Regularization | 2 real | 0 | LoG, Canny |
    | [7] (SMC, 1995) | Surface fitting | 2 synth | 0 | Sobel, Haralick |
    | [8] (SMC, 1994) | Neural networks | 2 real | 0 | None |
    | [9] (CVGIP, 1994) | Voting based | 3 real, 3 range, 2 synth | 0 | Canny |
    | [10] (CVGIP, 1994) | Linear filtering | 1 real, 1 synth | 0 | LoG |
    | [11] (CVGIP, 1992) | Maximum likelihood | 1 synth | 0 | Rosenfeld & Thurston |
    | [12] (CVGIP, 1991) | Linear filtering | 3 real, 1 synth | 0 | LoG, Canny |
    | [13] (CVGIP, 1991) | Linear filtering | 2 real | 0 | Deriche |
    | [14] (CVGIP, 1991) | Derivative based | 1 real | 0 | None |
    | [15] (IJCV, 1994) | Linear filter | 1 synth, 2 real | 0 | None |
    | [16] (IJCV, 1994) | Linear filter | 1 synth | 0 | None |
    | [17] (IJCV, 1993) | Analog network | 2 real | Reconstructed image as ground truth | LoG |
    | [18] (PR, 1995) | Statistical | 4 real | 0 | Canny, LoG |
    | [19] (PR, 1995) | Search | 1 synth, 3 real | 0 | Canny, LoG, Ashkar & Modestino |
    | [20] (PR, 1995) | Filtering | 4 real | 0 | None |
    | [21] (PR, 1994) | Neural nets | 1 synth, 1 real | 0 | Canny |
    | [22] (PR, 1994) | Genetic opt. | … | … | … |


Journal ArticleDOI
TL;DR: This work presents a paradigm based on experimental psychology and statistics, in which humans rate the output of low level vision algorithms, and investigates whether there is a statistically significant difference in edge detector outputs as perceived by humans when considering an object recognition task.

319 citations

Journal ArticleDOI
TL;DR: A novel edge segment detection algorithm that runs in real time and produces high-quality edge segments, each of which is a linear pixel chain, hence the name Edge Drawing (ED).

163 citations

References
Journal ArticleDOI
TL;DR: A model of a system having a large number of simple equivalent components, based on aspects of neurobiology but readily adapted to integrated circuits, produces a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size.
Abstract: Computational properties of use to biological organisms or to the construction of computers can emerge as collective properties of systems having a large number of simple equivalent components (or neurons). The physical meaning of content-addressable memory is described by an appropriate phase space flow of the state of a system. A model of such a system is given, based on aspects of neurobiology but readily adapted to integrated circuits. The collective properties of this model produce a content-addressable memory which correctly yields an entire memory from any subpart of sufficient size. The algorithm for the time evolution of the state of the system is based on asynchronous parallel processing. Additional emergent collective properties include some capacity for generalization, familiarity recognition, categorization, error correction, and time sequence retention. The collective properties are only weakly sensitive to details of the modeling or the failure of individual devices.

16,652 citations
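The content-addressable recall described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch of the idea (Hebbian outer-product storage plus asynchronous threshold updates), not Hopfield's full formulation; the pattern and the number of corrupted units are made up:

```python
def train(patterns):
    """Store bipolar (+1/-1) patterns via the Hebbian outer-product rule."""
    n = len(patterns[0])
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:                 # no self-connections
                    w[i][j] += p[i] * p[j]
    return w

def recall(w, probe, sweeps=5):
    """Asynchronous updates: one unit at a time, threshold at zero."""
    s = list(probe)
    for _ in range(sweeps):
        for i in range(len(s)):
            h = sum(w[i][j] * s[j] for j in range(len(s)))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
w = train([pattern])
probe = list(pattern)
probe[0], probe[3] = -probe[0], -probe[3]  # corrupt two of eight units
restored = recall(w, probe)
```

Recovering the whole pattern from a partially correct probe is exactly the "entire memory from any subpart of sufficient size" behaviour the abstract describes.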

Book
01 Jan 1984
TL;DR: The purpose and nature of biological memory are explained, along with several other aspects of memory and of associative, self-organizing models of it.
Abstract: 1. Various Aspects of Memory.- 1.1 On the Purpose and Nature of Biological Memory.- 1.1.1 Some Fundamental Concepts.- 1.1.2 The Classical Laws of Association.- 1.1.3 On Different Levels of Modelling.- 1.2 Questions Concerning the Fundamental Mechanisms of Memory.- 1.2.1 Where Do the Signals Relating to Memory Act Upon?.- 1.2.2 What Kind of Encoding is Used for Neural Signals?.- 1.2.3 What are the Variable Memory Elements?.- 1.2.4 How are Neural Signals Addressed in Memory?.- 1.3 Elementary Operations Implemented by Associative Memory.- 1.3.1 Associative Recall.- 1.3.2 Production of Sequences from the Associative Memory.- 1.3.3 On the Meaning of Background and Context.- 1.4 More Abstract Aspects of Memory.- 1.4.1 The Problem of Infinite-State Memory.- 1.4.2 Invariant Representations.- 1.4.3 Symbolic Representations.- 1.4.4 Virtual Images.- 1.4.5 The Logic of Stored Knowledge.- 2. Pattern Mathematics.- 2.1 Mathematical Notations and Methods.- 2.1.1 Vector Space Concepts.- 2.1.2 Matrix Notations.- 2.1.3 Further Properties of Matrices.- 2.1.4 Matrix Equations.- 2.1.5 Projection Operators.- 2.1.6 On Matrix Differential Calculus.- 2.2 Distance Measures for Patterns.- 2.2.1 Measures of Similarity and Distance in Vector Spaces.- 2.2.2 Measures of Similarity and Distance Between Symbol Strings.- 2.2.3 More Accurate Distance Measures for Text.- 3. Classical Learning Systems.- 3.1 The Adaptive Linear Element (Adaline).- 3.1.1 Description of Adaptation by the Stochastic Approximation.- 3.2 The Perceptron.- 3.3 The Learning Matrix.- 3.4 Physical Realization of Adaptive Weights.- 3.4.1 Perceptron and Adaline.- 3.4.2 Classical Conditioning.- 3.4.3 Conjunction Learning Switches.- 3.4.4 Digital Representation of Adaptive Circuits.- 3.4.5 Biological Components.- 4. 
A New Approach to Adaptive Filters.- 4.1 Survey of Some Necessary Functions.- 4.2 On the "Transfer Function" of the Neuron.- 4.3 Models for Basic Adaptive Units.- 4.3.1 On the Linearization of the Basic Unit.- 4.3.2 Various Cases of Adaptation Laws.- 4.3.3 Two Limit Theorems.- 4.3.4 The Novelty Detector.- 4.4 Adaptive Feedback Networks.- 4.4.1 The Autocorrelation Matrix Memory.- 4.4.2 The Novelty Filter.- 5. Self-Organizing Feature Maps.- 5.1 On the Feature Maps of the Brain.- 5.2 Formation of Localized Responses by Lateral Feedback.- 5.3 Computational Simplification of the Process.- 5.3.1 Definition of the Topology-Preserving Mapping.- 5.3.2 A Simple Two-Dimensional Self-Organizing System.- 5.4 Demonstrations of Simple Topology-Preserving Mappings.- 5.4.1 Images of Various Distributions of Input Vectors.- 5.4.2 "The Magic TV".- 5.4.3 Mapping by a Feeler Mechanism.- 5.5 Tonotopic Map.- 5.6 Formation of Hierarchical Representations.- 5.6.1 Taxonomy Example.- 5.6.2 Phoneme Map.- 5.7 Mathematical Treatment of Self-Organization.- 5.7.1 Ordering of Weights.- 5.7.2 Convergence Phase.- 5.8 Automatic Selection of Feature Dimensions.- 6. 
Optimal Associative Mappings.- 6.1 Transfer Function of an Associative Network.- 6.2 Autoassociative Recall as an Orthogonal Projection.- 6.2.1 Orthogonal Projections.- 6.2.2 Error-Correcting Properties of Projections.- 6.3 The Novelty Filter.- 6.3.1 Two Examples of Novelty Filter.- 6.3.2 Novelty Filter as an Autoassociative Memory.- 6.4 Autoassociative Encoding.- 6.4.1 An Example of Autoassociative Encoding.- 6.5 Optimal Associative Mappings.- 6.5.1 The Optimal Linear Associative Mapping.- 6.5.2 Optimal Nonlinear Associative Mappings.- 6.6 Relationship Between Associative Mapping, Linear Regression, and Linear Estimation.- 6.6.1 Relationship of the Associative Mapping to Linear Regression.- 6.6.2 Relationship of the Regression Solution to the Linear Estimator.- 6.7 Recursive Computation of the Optimal Associative Mapping.- 6.7.1 Linear Corrective Algorithms.- 6.7.2 Best Exact Solution (Gradient Projection).- 6.7.3 Best Approximate Solution (Regression).- 6.7.4 Recursive Solution in the General Case.- 6.8 Special Cases.- 6.8.1 The Correlation Matrix Memory.- 6.8.2 Relationship Between Conditional Averages and Optimal Estimator.- 7. Pattern Recognition.- 7.1 Discriminant Functions.- 7.2 Statistical Formulation of Pattern Classification.- 7.3 Comparison Methods.- 7.4 The Subspace Methods of Classification.- 7.4.1 The Basic Subspace Method.- 7.4.2 The Learning Subspace Method (LSM).- 7.5 Learning Vector Quantization.- 7.6 Feature Extraction.- 7.7 Clustering.- 7.7.1 Simple Clustering (Optimization Approach).- 7.7.2 Hierarchical Clustering (Taxonomy Approach).- 7.8 Structural Pattern Recognition Methods.- 8. 
More About Biological Memory.- 8.1 Physiological Foundations of Memory.- 8.1.1 On the Mechanisms of Memory in Biological Systems.- 8.1.2 Structural Features of Some Neural Networks.- 8.1.3 Functional Features of Neurons.- 8.1.4 Modelling of the Synaptic Plasticity.- 8.1.5 Can the Memory Capacity Ensue from Synaptic Changes?.- 8.2 The Unified Cortical Memory Model.- 8.2.1 The Laminar Network Organization.- 8.2.2 On the Roles of Interneurons.- 8.2.3 Representation of Knowledge Over Memory Fields.- 8.2.4 Self-Controlled Operation of Memory.- 8.3 Collateral Reading.- 8.3.1 Physiological Results Relevant to Modelling.- 8.3.2 Related Modelling.- 9. Notes on Neural Computing.- 9.1 First Theoretical Views of Neural Networks.- 9.2 Motives for the Neural Computing Research.- 9.3 What Could the Purpose of the Neural Networks be?.- 9.4 Definitions of Artificial "Neural Computing" and General Notes on Neural Modelling.- 9.5 Are the Biological Neural Functions Localized or Distributed?.- 9.6 Is Nonlinearity Essential to Neural Computing?.- 9.7 Characteristic Differences Between Neural and Digital Computers.- 9.7.1 The Degree of Parallelism of the Neural Networks is Still Higher than that of any "Massively Parallel" Digital Computer.- 9.7.2 Why the Neural Signals Cannot be Approximated by Boolean Variables.- 9.7.3 The Neural Circuits do not Implement Finite Automata.- 9.7.4 Undue Views of the Logic Equivalence of the Brain and Computers on a High Level.- 9.8 "Connectionist Models".- 9.9 How can the Neural Computers be Programmed?.- 10. Optical Associative Memories.- 10.1 Nonholographic Methods.- 10.2 General Aspects of Holographic Memories.- 10.3 A Simple Principle of Holographic Associative Memory.- 10.4 Addressing in Holographic Memories.- 10.5 Recent Advances of Optical Associative Memories.- Bibliography on Pattern Recognition.- References.

8,197 citations
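The topology-preserving mapping of Chapter 5 can be sketched with a tiny one-dimensional self-organizing map. This is a simplified illustration with assumed parameters (unit count, schedules, neighbourhood shape), not Kohonen's full algorithm:

```python
import random

def train_som(n_units=8, steps=2000, seed=0):
    """1-D SOM on scalar inputs in [0, 1]: winner search plus neighbourhood update."""
    rng = random.Random(seed)
    w = [rng.random() for _ in range(n_units)]      # one weight per unit
    for t in range(steps):
        x = rng.random()                            # input sample
        lr = 0.5 * (1 - t / steps)                  # decaying learning rate
        radius = max(1, round(n_units / 2 * (1 - t / steps)))
        win = min(range(n_units), key=lambda i: abs(x - w[i]))
        for i in range(n_units):
            if abs(i - win) <= radius:              # update the winner's neighbourhood
                w[i] += lr * (x - w[i])
    return w

weights = train_som()
```

With the shrinking neighbourhood, the chain of weights typically ends up monotonically ordered along the input range, which is the one-dimensional analogue of a topology-preserving feature map.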

Journal ArticleDOI
TL;DR: This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification and exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components.
Abstract: Artificial neural net models have been studied for many years in the hope of achieving human-like performance in the fields of speech and image recognition. These models are composed of many nonlinear computational elements operating in parallel and arranged in patterns reminiscent of biological neural nets. Computational elements or nodes are connected via weights that are typically adapted during use to improve performance. There has been a recent resurgence in the field of artificial neural nets caused by new net topologies and algorithms, analog VLSI implementation techniques, and the belief that massive parallelism is essential for high performance speech and image recognition. This paper provides an introduction to the field of artificial neural nets by reviewing six important neural net models that can be used for pattern classification. These nets are highly parallel building blocks that illustrate neural net components and design principles and can be used to construct more complex systems. In addition to describing these nets, a major emphasis is placed on exploring how some existing classification and clustering algorithms can be performed using simple neuron-like components. Single-layer nets can implement algorithms required by Gaussian maximum-likelihood classifiers and optimum minimum-error classifiers for binary patterns corrupted by noise. More generally, the decision regions required by any classification algorithm can be generated in a straightforward manner by three-layer feed-forward nets.

7,798 citations
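The single-layer case the review discusses can be illustrated with a perceptron learning a linearly separable function. The data (logical AND), learning rate, and epoch count below are made-up toy assumptions:

```python
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Classic perceptron rule: adjust weights only on misclassified samples."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred                  # 0 when correct, +/-1 otherwise
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]                       # logical AND, linearly separable
w, b = train_perceptron(samples, labels)
```

As the review notes, a single layer like this can only carve out a half-plane decision region; non-separable problems need the multi-layer nets it goes on to describe.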

Journal ArticleDOI
TL;DR: The theory of edge detection explains several basic psychophysical findings, and the operation of forming oriented zero-crossing segments from the output of centre-surround ∇²G filters acting on the image forms the basis for a physiological model of simple cells.
Abstract: A theory of edge detection is presented. The analysis proceeds in two parts. (1) Intensity changes, which occur in a natural image over a wide range of scales, are detected separately at different scales. An appropriate filter for this purpose at a given scale is found to be the second derivative of a Gaussian, and it is shown that, provided some simple conditions are satisfied, these primary filters need not be orientation-dependent. Thus, intensity changes at a given scale are best detected by finding the zero values of ∇²G(x,y) ∗ I(x,y) for image I, where G(x,y) is a two-dimensional Gaussian distribution and ∇² is the Laplacian. The intensity changes thus discovered in each of the channels are then represented by oriented primitives called zero-crossing segments, and evidence is given that this representation is complete. (2) Intensity changes in images arise from surface discontinuities or from reflectance or illumination boundaries, and these all have the property that they are spatially localized. Because of this, the zero-crossing segments from the different channels are not independent, and rules are deduced for combining them into a description of the image. This description is called the raw primal sketch. The theory explains several basic psychophysical findings, and the operation of forming oriented zero-crossing segments from the output of centre-surround ∇²G filters acting on the image forms the basis for a physiological model of simple cells (see Marr & Ullman 1979).

6,893 citations
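The ∇²G zero-crossing operation reduces neatly to one dimension, which is enough to see the mechanism: smooth a step edge with a Gaussian, take the second difference, and mark where it changes sign. The 1-D setting, σ, kernel radius, and border handling below are assumed simplifications for illustration:

```python
import math

def gaussian_kernel(sigma=1.0, radius=3):
    k = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-radius, radius + 1)]
    z = sum(k)
    return [v / z for v in k]

def smooth(signal, kernel):
    r = len(kernel) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - r, 0), n - 1)   # clamp at the borders
            acc += w * signal[idx]
        out.append(acc)
    return out

def zero_crossings(signal):
    s = smooth(signal, gaussian_kernel())
    # second difference approximates the second derivative;
    # d2[k] corresponds to position k + 1 in the signal
    d2 = [s[i - 1] - 2 * s[i] + s[i + 1] for i in range(1, len(s) - 1)]
    return [k + 1 for k in range(len(d2) - 1) if d2[k] * d2[k + 1] < 0]

step = [0] * 10 + [1] * 10        # a single step edge between indices 9 and 10
crossings = zero_crossings(step)
```

The sign change lands exactly at the step, which is the sense in which zero crossings localize intensity changes.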

Journal ArticleDOI
TL;DR: It is pointed out that the use of angle-radius rather than slope-intercept parameters simplifies the computation further, and it is shown how the method can be used for more general curve fitting.
Abstract: Hough has proposed an interesting and computationally efficient procedure for detecting lines in pictures. This paper points out that the use of angle-radius rather than slope-intercept parameters simplifies the computation further. It also shows how the method can be used for more general curve fitting, and gives alternative interpretations that explain the source of its efficiency.

6,693 citations
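The angle-radius parameterisation the paper advocates, ρ = x·cos θ + y·sin θ, can be sketched with a deliberately coarse accumulator. The discretisation below (whole degrees for θ, integer-rounded ρ) and the sample points are illustrative assumptions:

```python
import math

def hough_lines(points, thetas=range(180)):
    """Vote in (theta, rho) space; collinear points pile up in one cell."""
    acc = {}
    for deg in thetas:
        t = math.radians(deg)
        for x, y in points:
            rho = round(x * math.cos(t) + y * math.sin(t))
            acc[(deg, rho)] = acc.get((deg, rho), 0) + 1
    return acc

points = [(2, 0), (2, 10), (2, 20), (2, 30)]     # collinear: the vertical line x = 2
acc = hough_lines(points)
(best_theta, best_rho), votes = max(acc.items(), key=lambda kv: kv[1])
# The peak at (theta = 0, rho = 2) recovers the line x = 2.
```

Note that a vertical line like x = 2 has infinite slope and so has no finite slope-intercept representation at all, which is one concrete reason the paper prefers the bounded (θ, ρ) parameters.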