
Showing papers by "Mongi A. Abidi published in 1996"


Journal ArticleDOI
TL;DR: It is demonstrated by extensive experimentation, using synthetic and real range image data, that each of these three processes contributes to yielding rugged and consistent segmentation results.

41 citations


Journal ArticleDOI
TL;DR: This paper focuses on the problem of extracting features such as image discontinuities from both synthetic and real images, and proposes Tikhonov's regularization paradigm as the basic tool for solving this inversion problem and restoring well-posedness.
Abstract: Data fusion provides tools for solving problems that are characterized by distributed and diverse information sources. In this paper, the authors focus on the problem of extracting features such as image discontinuities from both synthetic and real images. Since edge detection and surface reconstruction are ill-posed problems in the sense of Hadamard, Tikhonov's regularization paradigm is proposed as the basic tool for solving this inversion problem and restoring well-posedness. The proposed framework includes: (1) a systematic view of one- and two-dimensional regularization; (2) an extension of the standard Tikhonov regularization method that allows space-variant regularization parameters; and (3) a further extension of the regularization paradigm that adds multiple data sources to allow for data fusion. The theoretical approach is complemented by a series of algorithms that solve the early vision problems of color edge detection and surface reconstruction. An evaluation of these methods shows that the output of this new analytical data fusion technique is consistently better than each of the individual RGB edge maps, and that noisy, corrupted images are reconstructed into smooth, noiseless surfaces.
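
To make the fused formulation concrete, here is a minimal 1-D sketch under stated assumptions: a first-order (membrane) stabilizer, per-source weights w_k, and a possibly space-variant regularization parameter lam. The function name tikhonov_fuse_1d, the weighting scheme, and the direct least-squares solver are illustrative choices, not the authors' implementation.

```python
import numpy as np

def tikhonov_fuse_1d(sources, weights, lam):
    """Fuse K noisy 1-D signals d_k into one smooth estimate f by minimizing
        sum_k w_k ||f - d_k||^2  +  sum_i lam_i (f_{i+1} - f_i)^2,
    i.e. Tikhonov regularization with multiple data terms (data fusion) and a
    space-variant regularization parameter lam (scalar or per-sample array).
    """
    sources = np.atleast_2d(np.asarray(sources, dtype=float))   # (K, N)
    weights = np.asarray(weights, dtype=float)                   # (K,)
    K, N = sources.shape

    D = np.diff(np.eye(N), axis=0)                               # first differences, (N-1, N)
    lam = np.broadcast_to(np.asarray(lam, dtype=float), (N - 1,))

    # Normal equations: (sum_k w_k I + D^T diag(lam) D) f = sum_k w_k d_k
    A = weights.sum() * np.eye(N) + D.T @ (lam[:, None] * D)
    b = weights @ sources
    return np.linalg.solve(A, b)

# Example: fuse three noisy observations of a step edge into one smooth profile.
x = np.linspace(0.0, 1.0, 200)
truth = (x > 0.5).astype(float)
noisy = truth + 0.2 * np.random.default_rng(0).standard_normal((3, x.size))
fused = tikhonov_fuse_1d(noisy, weights=[1.0, 1.0, 1.0], lam=5.0)
```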

30 citations


Proceedings ArticleDOI
05 Aug 1996
TL;DR: In this paper, the authors describe a system that automatically determines an optimized next range sensor position and orientation during the reconstruction of a 3D model, based on a model consisting of surfaces which have been viewed and volumes occluded from the camera's view.
Abstract: In this paper, the authors describe the system they have implemented, which automatically determines an optimized next range sensor position and orientation during the reconstruction of a three-dimensional model. The system reconstructs a model consisting of surfaces which have been viewed and volumes occluded from the camera's view. Ideally, a sensor pose determined by a "best-next-view" system will reveal the greatest quantity of previously unknown scene information. Results are presented from the most intelligent of the algorithms the authors have developed; it clusters the occluded data and orients the sensor toward the centroid of the largest cluster. A system that solves the best-next-view problem may find application in robot navigation, manufacturing, and hazardous materials handling. The methods implemented use no a priori scene information in finding the best-next-view position.
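
The clustering step lends itself to a small illustration. The sketch below assumes the occluded volume has been sampled as 3-D points and uses plain k-means (Lloyd's algorithm) as a stand-in for the paper's clustering; best_next_view_target, k, and the iteration count are hypothetical choices, not the authors' code.

```python
import numpy as np

def best_next_view_target(occluded_points, k=5, iters=50, seed=0):
    """Return a 3-D point the next sensor pose should be oriented toward.

    Partition the sampled occluded volume into k clusters with plain k-means
    and return the centroid of the most populated cluster.
    """
    pts = np.asarray(occluded_points, dtype=float)               # (M, 3)
    rng = np.random.default_rng(seed)
    centers = pts[rng.choice(len(pts), size=k, replace=False)]   # initial seeds

    for _ in range(iters):
        # Assign each occluded sample to its nearest cluster center.
        dists = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Move each center to the mean of its assigned samples.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pts[labels == j].mean(axis=0)

    largest = np.bincount(labels, minlength=k).argmax()
    return centers[largest]

# Example: aim the next view at the densest blob of occluded samples.
rng = np.random.default_rng(1)
occluded = np.vstack([rng.normal(c, 0.1, size=(100, 3))
                      for c in ([0, 0, 0], [2, 0, 1])])
target = best_next_view_target(occluded, k=3)
```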

24 citations