
Showing papers on "Scale-invariant feature transform published in 1991"


Journal ArticleDOI
TL;DR: It is shown that if only a small subset of the edge points in the image, selected at random, is used as input to the Hough transform, performance is often only slightly impaired, so execution time can be shortened considerably.

640 citations


Journal ArticleDOI
TL;DR: An efficient probabilistic algorithm for a Monte-Carlo approximation to the Hough transform that requires substantially less computation and storage than the standard Hough transform when applied to patterns that are easily recognized by humans.

80 citations
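Both results above rest on the same observation: the Hough accumulator does not need every edge point to produce a usable peak. The sketch below illustrates line detection with random subsampling; the sample size, accumulator resolution, and test data are invented for illustration and are not taken from either paper.

```python
import numpy as np

def randomized_hough_lines(edge_points, n_samples, n_theta=180, n_rho=200):
    """Line Hough transform that votes with only a random subset of the
    edge points, trading a little accuracy for much less computation."""
    rng = np.random.default_rng(0)
    pts = np.asarray(edge_points, dtype=float)
    diag = np.hypot(pts[:, 0].max(), pts[:, 1].max())  # bound on |rho|
    idx = rng.choice(len(pts), size=min(n_samples, len(pts)), replace=False)

    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in pts[idx]:
        # normal form: rho = x cos(theta) + y sin(theta), one vote per theta
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[bins, np.arange(n_theta)] += 1
    return acc, thetas

# 100 points on the line y = x plus two stray points
pts = [(i, i) for i in range(100)] + [(3, 77), (50, 9)]
acc, thetas = randomized_hough_lines(pts, n_samples=40)
r, t = np.unravel_index(acc.argmax(), acc.shape)
# the strongest peak sits at theta = 3*pi/4, the normal direction of x - y = 0
```

Even though only 40 of the 102 points vote, the peak is unambiguous, which is the effect both papers quantify.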


Proceedings Article
01 Jan 1991
TL;DR: Two parallel algorithms to compute the Hough transform on a reconfigurable mesh with buses (RMESH) multiprocessor are developed.
Abstract: We develop parallel algorithms to compute the Hough transform on a reconfigurable mesh with buses (RMESH) multiprocessor. The p-angle Hough transform of an N × N image can be computed in O(p log(N/p)) time by an N × N RMESH, in O((p/N) log N) time by an N × N^2 RMESH with N copies of the image pretiled, in O((p/[formula]) log N) time by an N^1.5 × N^1.5 RMESH, and in O((p/N) log N) time by an N^2 × N^2 RMESH.

48 citations


Journal ArticleDOI
TL;DR: From the results of experiments performed using several test patterns, it can be seen that the proposed rotation transform method is useful for extracting line segments from an edge image.

26 citations



Proceedings ArticleDOI
01 Nov 1991
TL;DR: A new approach for the detection of motion of three-dimensional rigid bodies from two-dimensional images is presented, based on the Hough transform method; its effectiveness is confirmed through computer experiments.
Abstract: A new approach for the detection of motion of three-dimensional rigid bodies from two-dimensional images is presented. The approach is based on two main stages. In the first stage, the positions and velocities of feature points are detected from two-dimensional images. In the second stage, the rotation and translation velocity of each body is detected from the positions and velocities of the set of feature points. We employ the Hough transform method in both stages. We describe the details of the second stage and the method of computation reduction in the Hough transform. The effectiveness of our method is confirmed through computer experiments.

4 citations


Journal ArticleDOI
TL;DR: A ‘neural-like’ parallel system is described for the real-time implementation of the Hough transform and is applied to simple object recognition in computer vision.
Abstract: A ‘neural-like’ parallel system is described for the real-time implementation of the Hough transform. It is applied to simple object recognition in computer vision.

3 citations


Book ChapterDOI
09 Oct 1991
TL;DR: By subdividing the image, the Hough transform and cluster detection can be applied in parallel to proportionally smaller accumulators; an algorithm to combine the lines identified in the subimages is also described.
Abstract: To cope with the memory and time requirements of the Hough transform when identifying lines, a line parametrisation is investigated that allows a relatively small and compact accumulator; a fast algorithm to fill the accumulator, together with its implementation in parallel hardware, is developed; and finally, an algorithm to combine the lines identified in subimages is described. By this image subdivision, the Hough transform and cluster detection can be applied in parallel to proportionally smaller accumulators.

1 citation
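The subdivision idea can be illustrated in miniature: run an independent Hough transform per image tile, then merge tile peaks that describe the same global line. This is a hedged sketch, not the chapter's parametrisation; the tile size, accumulator resolution, and merge tolerances (`tile`, `n_theta`, `n_rho`) are hypothetical values chosen only for the example.

```python
import numpy as np

def hough_peak(points, n_theta, n_rho, diag):
    """Fill a (rho, theta) accumulator for one subimage and return its
    strongest line, expressed in global image coordinates."""
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in points:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        bins = np.round((rho + diag) / (2 * diag) * (n_rho - 1)).astype(int)
        acc[bins, np.arange(n_theta)] += 1
    r, t = np.unravel_index(acc.argmax(), acc.shape)
    return r / (n_rho - 1) * 2 * diag - diag, thetas[t]

def tiled_hough(points, img_size, tile=64, n_theta=180, n_rho=256):
    """Per-tile Hough transforms followed by a merge of tile peaks that
    describe (approximately) the same global line."""
    diag = np.hypot(img_size, img_size)
    tiles = {}
    for x, y in points:
        tiles.setdefault((x // tile, y // tile), []).append((x, y))
    peaks = [hough_peak(p, n_theta, n_rho, diag) for p in tiles.values()]
    merged = []
    for rho, th in peaks:
        if not any(abs(m[0] - rho) < 15 and abs(m[1] - th) < 0.1 for m in merged):
            merged.append((rho, th))
    return merged

# one horizontal line y = 100 crossing four tiles
pts = [(i, 100) for i in range(256)]
lines = tiled_hough(pts, img_size=256)
# every merged line should lie close to (rho = 100, theta = pi/2)
```

In a real implementation each tile's accumulator could be made much smaller than the global one, since a subimage only spans a narrow rho range; that saving is the point of the chapter.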


01 Jan 1991
TL;DR: A method based on local affine models is proposed that deals with local motion and resolves ambiguities; for most images it yields a considerably larger set of matches and a lower error count than classical matching approaches.
Abstract: Reliable and well-distributed correspondences between sparsely sampled photographic images of dynamic scenes are needed in many computer vision applications, including image-based rendering and 3D reconstruction. We propose a method based on local affine models that deals with local motion and resolves ambiguities. For most images this results in a considerably larger set of matches and a lower error count than classical matching approaches. Fairly reliable matches can be obtained by applying a key point detector and descriptor such as SIFT and assigning to each key point the nearest neighbor in a multidimensional descriptor space. To limit false positives due to ambiguities and non-existent correspondences, the distance ratio between the two nearest neighbors is thresholded and the match has to be found in both directions. While this discards many potentially correct matches, a considerable number of wrong matches remains (figure 1b). This leads to inconsistencies in multi-image key point tracks, suggesting that multiple points in one image represent the same object point. Since global measures to eliminate the wrong matches on a track cannot easily be derived from the descriptor distances, common solutions are to drop all of the inconsistent tracks, to work only on image pairs, or to use underlying global models such as a fundamental matrix (figure 1c) or a 3D reconstruction to reject outliers. However, those models cannot handle local motion, considerably reduce the matching density, and cannot always be used to resolve ambiguities (figure 1d).
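The baseline matching pipeline this abstract builds on (nearest neighbor in descriptor space, ratio test against the second-nearest neighbor, and a two-way consistency check) can be sketched in plain NumPy. The toy 2-D descriptors below stand in for real 128-D SIFT descriptors, and the 0.8 ratio threshold is illustrative, not a value from the paper.

```python
import numpy as np

def match_descriptors(desc_a, desc_b, ratio=0.8):
    """Nearest-neighbor matching with a ratio test and a mutual
    (two-way) consistency check."""
    # pairwise Euclidean distances between the two descriptor sets
    d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
    nn_of_b = d.argmin(axis=0)  # best match in desc_a for every b
    matches = []
    for i in range(len(desc_a)):
        order = np.argsort(d[i])
        best, second = order[0], order[1]
        # ratio test: reject if the second-best match is nearly as close
        if d[i, best] >= ratio * d[i, second]:
            continue
        # two-way check: the match must also hold in the other direction
        if nn_of_b[best] != i:
            continue
        matches.append((i, int(best)))
    return matches

# toy 2-D "descriptors"; a3 is deliberately ambiguous between b3 and b4
desc_a = np.array([[0.1, 0.0], [10.1, 0.0], [0.0, 10.1], [5.0, 5.05]])
desc_b = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [5.0, 5.0], [5.0, 5.1]])
matches = match_descriptors(desc_a, desc_b)
# a0..a2 match b0..b2; the ambiguous a3 is rejected by the ratio test
```

This reproduces the behavior the abstract criticizes: ambiguous key points are simply dropped, which is exactly the match-density loss the proposed local affine models aim to recover.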