Journal ArticleDOI

An MIMD algorithm for constant curvature feature extraction using curvature based data partitioning

01 Jun 1999-Pattern Recognition Letters (North-Holland)-Vol. 20, Iss: 6, pp 573-583
TL;DR: An MIMD algorithm for detecting constant curvature features in images of man-made objects: the edge points belonging to the object contour are intelligently partitioned into logical divisions so that the geometric token extraction algorithm can work on each partition independently.
About: This article was published in Pattern Recognition Letters on 1999-06-01 and has received 4 citations to date. The article focuses on the topics: Constant curvature & Curvature.
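The partitioning idea in the abstract can be illustrated compactly. Below is a minimal Python sketch, assuming an ordered array of contour edge points; the discrete curvature estimate, the split threshold, and the worker-pool step are illustrative choices, not details taken from the paper.

    import numpy as np

    def tangent_angles(points):
        # Tangent angle theta at each contour point (a "theta vs t" plot,
        # where t is the sequence number of the pixel).
        d = np.diff(points, axis=0)
        return np.unwrap(np.arctan2(d[:, 1], d[:, 0]))

    def partition_by_curvature(points, thresh=0.3):
        # Split the contour wherever the tangent angle changes sharply, so
        # each partition is approximately a constant-curvature segment.
        theta = tangent_angles(points)
        kappa = np.abs(np.diff(theta))        # discrete curvature estimate
        breaks = np.where(kappa > thresh)[0] + 1
        return np.split(points, breaks)

    # Each partition can now be handed to an independent worker (the MIMD
    # step), e.g. multiprocessing.Pool().map(extract_token, partitions).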
Citations
Journal ArticleDOI
TL;DR: A bibliography of nearly 1,700 references related to computer vision and image analysis, arranged by subject matter, is presented, covering computational techniques; feature detection and segmentation; image and scene analysis; and motion.

39 citations

Proceedings ArticleDOI
05 Mar 2007
TL;DR: An integrated scheme for document image compression is presented which preserves the layout structure while still allowing the display of textual portions to adapt to user preferences and screen area; an SVG representation of the complete document image is derived.
Abstract: We present an integrated scheme for document image compression which preserves the layout structure, and still allows the display of textual portions to adapt to the user preferences and screen area. We encode the layout structure of the document images in an XML representation. The textual components and picture components are compressed separately into different representations. We derive an SVG (scalable vector graphics) representation of the complete document image. Compression is achieved since the word-images are encoded using specifications for geometric primitives that compose a word. A document rendered from its SVG representation can be adapted for display and interactive access through common browsers on desktop as well as mobile devices. We demonstrate the effectiveness of the proposed scheme for document access.

5 citations
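The word-image-to-SVG idea in the abstract above can be sketched briefly. This is a minimal illustration, assuming each word image has already been reduced to a set of skeleton strokes (polylines) plus a layout position; the data structures are hypothetical, and only standard SVG elements are used.

    def word_to_svg(strokes, x, y):
        # strokes: list of point lists (the geometric primitives composing
        # a word), placed at layout position (x, y).
        paths = []
        for stroke in strokes:
            d = "M " + " L ".join(f"{x + px},{y + py}" for px, py in stroke)
            paths.append(f'<path d="{d}" fill="none" stroke="black"/>')
        return "\n".join(paths)

    def page_to_svg(words, width, height):
        # words: list of (strokes, x, y) tuples taken from the XML layout.
        body = "\n".join(word_to_svg(s, x, y) for s, x, y in words)
        return (f'<svg xmlns="http://www.w3.org/2000/svg" '
                f'width="{width}" height="{height}">\n{body}\n</svg>')

    # One "word" consisting of a single horizontal stroke at position (10, 20):
    print(page_to_svg([([[(0, 0), (15, 0)]], 10, 20)], 200, 100))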

Book ChapterDOI
01 Jan 2009
TL;DR: An interactive access scheme for Indian language document collections is presented, using techniques for word-image-based search and retrieval; the compression and retrieval paradigm is applicable even to those Indian scripts for which reliable OCR technology is not available.
Abstract: Indexing and retrieval of Indian language documents is an important problem. We present an interactive access scheme for Indian language document collection using techniques for word-image-based search. The compression and retrieval paradigm we propose is applicable even for those Indian scripts for which reliable OCR technology is not available. Our technique for word spotting is based on exploiting the geometrical features of the word image. The word image features are represented in the form of a graph called geometric feature graph (GFG). The GFG is encoded as a string which serves as a compressed representation of the word image skeleton. We have also augmented the GFG-based word image spotting with latent semantic analysis for more effective retrieval. The query is specified as a set of word images and the documents that best match with the query representation in the latent semantic space are retrieved. The retrieval paradigm is further enhanced to the conceptual level with the use of document image content-domain knowledge specified in the form of an ontology.

2 citations


Additional excerpts

  • ...We make a tangent angle plot [7], θ vs t, where t is the curve parameter (sequence number of the pixel)....

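The geometric feature graph (GFG) encoding described in the abstract above can be sketched as follows. This is a minimal illustration, assuming the word skeleton is an adjacency map whose edges carry single-character primitive labels; the depth-first traversal and the backtrack marker are illustrative, not the encoding from the chapter.

    def encode_gfg(adj, start):
        # adj: {node: [(neighbour, edge_label), ...]}; a depth-first
        # traversal linearizes the graph into a string of edge labels.
        seen, out = {start}, []
        def dfs(u):
            for v, label in adj[u]:
                if v not in seen:
                    seen.add(v)
                    out.append(label)
                    dfs(v)
                    out.append("^")       # backtrack marker
        dfs(start)
        return "".join(out)

    # Word images can then be compared by string distance on their encodings:
    print(encode_gfg({0: [(1, "h"), (2, "v")], 1: [(0, "h")], 2: [(0, "v")]}, 0))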

Proceedings ArticleDOI
29 Nov 2007
TL;DR: Analysis shows that the odd-order moments of tiles in the raw image are more sensitive to tiles containing cracks than to tiles containing only noise, that the algorithm's complexity is sensitive to the encoding approach, and that the convergence characteristics of the evolution are sensitive to the sharing function and to the parameters of the fitness function.
Abstract: A Niche Genetic Algorithm (NGA) is proposed to recognize a disconnected nonparametric curve in a noisy binary image. The fitness function used in the NGA is derived from a hypothesis called the Human Visual Tradition Model (HVTM). A sharing-function-based niche technique and an elite-preserving strategy are used to preserve population variety so that the search converges to the global optimum. Unlike parametric methods such as the Hough Transform (HT), the approach extracts disconnected curves from the noisy binary image nonparametrically. The extracted curve is verified by comparing the best strings along rows and along columns of the permutation-based encoding space. The curve length is derived automatically from the image by accumulating the distances between neighboring tiles of the extracted curve. The paper analyzes how the odd-order moments of tiles in the raw image are more sensitive to tiles containing cracks than to tiles containing only noise, how the algorithm's complexity depends on the encoding approach, and how the convergence characteristics of the evolution depend on the sharing function and the fitness-function parameters. Experimental results show that the approach was successfully applied to pavement crack detection.
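The sharing-function mechanism the abstract relies on can be illustrated in a few lines. This is a minimal sketch of fitness sharing in a niche GA, assuming a genotype distance function; sigma_share and alpha are illustrative parameter choices, not values from the paper.

    import numpy as np

    def shared_fitness(raw, population, distance, sigma_share=2.0, alpha=1.0):
        # raw[i]: raw fitness of individual i. Similar individuals divide
        # their fitness by a niche count, so the population stays spread
        # over several optima instead of crowding a single one.
        shared = np.empty_like(raw, dtype=float)
        for i, a in enumerate(population):
            niche = sum(max(0.0, 1.0 - (distance(a, b) / sigma_share) ** alpha)
                        for b in population)   # self term keeps niche >= 1
            shared[i] = raw[i] / niche
        return shared

    # The two crowded individuals (0.0 and 0.1) share fitness; the isolated
    # one at 5.0 keeps its full value:
    print(shared_fitness(np.ones(3), [0.0, 0.1, 5.0], lambda a, b: abs(a - b)))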
References
Book
13 Dec 1977

1,119 citations

Journal ArticleDOI
TL;DR: A parallel algorithm for detecting dominant points on a digital closed curve is presented, which leads to the observation that the performance of dominant points detection depends not only on the accuracy of the measure of significance, but also on the precise determination of the region of support.
Abstract: A parallel algorithm is presented for detecting dominant points on a digital closed curve. The procedure requires no input parameter and remains reliable even when features of multiple sizes are present on the digital curve. The procedure first determines the region of support for each point based on its local properties, then computes measures of relative significance (e.g. curvature) of each point, and finally detects dominant points by a process of nonmaximum suppression. This procedure leads to the observation that the performance of dominant points detection depends not only on the accuracy of the measure of significance, but also on the precise determination of the region of support. This solves the fundamental problem of scale factor selection encountered in various dominant point detection algorithms. The inherent nature of scale-space filtering in the procedure is addressed, and the performance of the procedure is compared to those of several other dominant point detection algorithms, using a number of examples.

772 citations
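A much-simplified sketch of the pipeline this abstract describes: compute a significance measure per point, then keep only local maxima. Here a fixed-window k-cosine stands in for the paper's adaptive region of support, and the window size k is an illustrative choice.

    import numpy as np

    def k_cosine(points, k=5):
        # Cosine of the angle between the arms p[i-k]->p[i] and p[i+k]->p[i]
        # on a closed curve: near 1 at sharp corners, near -1 on straight runs.
        p = np.asarray(points, dtype=float)
        n = len(p)
        c = np.empty(n)
        for i in range(n):
            a = p[(i - k) % n] - p[i]
            b = p[(i + k) % n] - p[i]
            c[i] = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return c

    def dominant_points(points, k=5):
        # Nonmaximum suppression: keep indices whose significance beats
        # both neighbours on the closed curve.
        c = k_cosine(points, k)
        n = len(c)
        return [i for i in range(n)
                if c[i] > c[(i - 1) % n] and c[i] > c[(i + 1) % n]]

    # The four corners of a digital square should dominate:
    square = ([(x, 0) for x in range(10)] + [(9, y) for y in range(1, 10)] +
              [(x, 9) for x in range(8, -1, -1)] + [(0, y) for y in range(8, 0, -1)])
    print(dominant_points(square, k=3))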

Journal ArticleDOI
Perkins
TL;DR: A vision system has been developed which can determine the position and orientation of complex curved objects in noisy gray-level scenes; it organizes and reduces the image data from a digitized picture to a compact representation having the appearance of a line drawing.
Abstract: A vision system has been developed which can determine the position and orientation of complex curved objects in gray-level noisy scenes. The system organizes and reduces the image data from a digitized picture to a compact representation having the appearance of a line drawing. This compact image representation can be used for forming a model under favorable viewing conditions or for locating a part under poor viewing conditions by a matching process that uses a previously formed model. Thus, models are formed automatically by having the program view the part under favorable lighting and background conditions. The compact image representation describes the boundaries of the part.

326 citations

Journal ArticleDOI
TL;DR: A new automatic peak detection algorithm is developed and applied to histogram-based image data reduction (quantization); the results of using the proposed algorithm for data reduction are presented for various images.
Abstract: A new automatic peak detection algorithm is developed and applied to histogram-based image data reduction (quantization). The algorithm uses a peak detection signal derived either from the image histogram or the cumulative distribution function to locate the peaks in the image histogram. Specifically, the gray levels at which the peaks start, end, and attain their maxima are estimated. To implement data reduction, gray-level thresholds are set between the peaks, and the gray levels at which the peaks attain their maxima are chosen as the quantization levels. The results of using the proposed algorithm for data reduction purposes are presented in the case of various images.

222 citations
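The quantization flow in the abstract above can be sketched compactly: smooth the histogram, take its local maxima as peaks, cut at the valley between consecutive peaks, and map each pixel to the peak of its interval. This is a minimal illustration; the smoothing width is an arbitrary choice, and the paper's estimation of peak start and end points is omitted.

    import numpy as np

    def quantize_by_peaks(image, smooth=5):
        hist, _ = np.histogram(image, bins=256, range=(0, 256))
        h = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
        peaks = [g for g in range(1, 255)
                 if h[g] > h[g - 1] and h[g] >= h[g + 1]]
        # Gray-level thresholds sit at the valley between consecutive peaks.
        cuts = [a + int(np.argmin(h[a:b])) for a, b in zip(peaks, peaks[1:])]
        levels = np.digitize(image, cuts)      # interval index per pixel
        return np.array(peaks)[levels]         # quantize to the peak maximum

    # A bimodal gray-level distribution collapses to (roughly) two levels:
    rng = np.random.default_rng(0)
    img = np.clip(np.r_[rng.normal(60, 8, 2000), rng.normal(180, 8, 2000)], 0, 255)
    print(np.unique(quantize_by_peaks(img.astype(int))))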

Journal ArticleDOI
TL;DR: It is shown that extracting a single geometric primitive is equivalent to finding the optimum of a cost function with potentially many local minima; besides providing a unifying way of understanding different primitive extraction algorithms, this model shows that efficient extraction requires finding the true global minimum with as few evaluations of the cost function as possible.
Abstract: Extracting geometric primitives is an important task in model-based computer vision. The Hough transform is the most common method of extracting geometric primitives. Recently, methods derived from the field of robust statistics have been used for this purpose. We show that extracting a single geometric primitive is equivalent to finding the optimum value of a cost function which has potentially many local minima. Besides providing a unifying way of understanding different primitive extraction algorithms, this model also shows that for efficient extraction the true global minimum must be found with as few evaluations of the cost function as possible. In order to extract a single geometric primitive we choose a number of minimal subsets randomly from the geometric data. The cost function is evaluated for each of these, and the primitive defined by the subset with the best value of the cost function is extracted from the geometric data. To extract multiple primitives, this process is repeated on the geometric data that do not belong to the primitive. The resulting extraction algorithm can be used with a wide variety of geometric primitives and geometric data. It is easily parallelized, and we describe some possible implementations on a variety of parallel architectures. We make a detailed comparison with the Hough transform and show that it has a number of advantages over this classic technique.

185 citations
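The minimal-subset strategy compared against the Hough transform above can be sketched with a circle as the primitive: three points define a candidate, the cost function is the (negated) count of points within a distance band, and the best of many random subsets is extracted. The trial count and band width below are illustrative, not values from the paper.

    import numpy as np

    def circle_from_3(p):
        # Centre from the perpendicular-bisector equations, then the radius.
        (x1, y1), (x2, y2), (x3, y3) = p
        a = np.array([[x2 - x1, y2 - y1], [x3 - x1, y3 - y1]], dtype=float)
        b = 0.5 * np.array([x2**2 - x1**2 + y2**2 - y1**2,
                            x3**2 - x1**2 + y3**2 - y1**2])
        centre = np.linalg.solve(a, b)
        return centre, np.hypot(*(np.asarray(p[0], dtype=float) - centre))

    def extract_circle(points, trials=500, band=1.0, seed=0):
        rng = np.random.default_rng(seed)
        best, best_inliers = None, -1
        for _ in range(trials):
            sample = points[rng.choice(len(points), 3, replace=False)]
            try:
                c, r = circle_from_3(sample)
            except np.linalg.LinAlgError:      # collinear minimal subset
                continue
            inliers = int(np.sum(np.abs(np.hypot(*(points - c).T) - r) < band))
            if inliers > best_inliers:         # best cost = most inliers
                best, best_inliers = (c, r), inliers
        return best

    # Noisy circle of radius 20 centred at (50, 40):
    rng = np.random.default_rng(1)
    t = np.linspace(0, 2 * np.pi, 80)
    pts = np.c_[50 + 20 * np.cos(t), 40 + 20 * np.sin(t)] + rng.normal(0, 0.3, (80, 2))
    print(extract_circle(pts))   # centre ~ (50, 40), radius ~ 20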