
Showing papers on "Perspective (geometry)" published in 2001


Book
05 Mar 2001
TL;DR: Describes the state of knowledge in one subarea of vision, the geometric laws that relate different views of a scene, analyzing these relations from the perspective of various types of geometries, with projective geometry providing a unified framework for thinking about many geometric problems relevant to vision.
Abstract: From the Publisher (with contributions from Theo Papadopoulo): Over the last forty years, researchers have made great strides in elucidating the laws of image formation, processing, and understanding by animals, humans, and machines. This book describes the state of knowledge in one subarea of vision, the geometric laws that relate different views of a scene. Geometry, one of the oldest branches of mathematics, is the natural language for describing three-dimensional shapes and spatial relations. Projective geometry, the geometry that best models image formation, provides a unified framework for thinking about many geometric problems relevant to vision. The book formalizes and analyzes the relations between multiple views of a scene from the perspective of various types of geometries. A key feature is that it considers Euclidean and affine geometries as special cases of projective geometry. Images play a prominent role in computer communications. Producers and users of images, in particular three-dimensional images, require a framework for stating and solving problems. The book offers a number of conceptual tools and theoretical results useful for the design of machine vision algorithms. It also illustrates these tools and results with many examples of real applications.
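The hierarchy the book builds on, with Euclidean and affine geometry as special cases of projective geometry, is easy to see in homogeneous coordinates. The following minimal NumPy sketch (not from the book; the matrices are illustrative) applies a general 2D homography to a point and shows how constraining the matrix recovers the affine and Euclidean cases:

```python
import numpy as np

def apply_homography(H, p):
    """Map a 2D point through a 3x3 homography using homogeneous coordinates."""
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]  # divide out the homogeneous coordinate

# General projective transform: the last row is unconstrained.
H_proj = np.array([[1.0,   0.2,   3.0],
                   [0.1,   0.9,   1.0],
                   [0.001, 0.002, 1.0]])

# Affine special case: last row fixed to [0, 0, 1] (parallelism preserved).
H_aff = H_proj.copy()
H_aff[2] = [0.0, 0.0, 1.0]

# Euclidean special case: rotation plus translation (lengths and angles preserved).
theta = np.deg2rad(30)
H_euc = np.array([[np.cos(theta), -np.sin(theta), 3.0],
                  [np.sin(theta),  np.cos(theta), 1.0],
                  [0.0,            0.0,           1.0]])

p = (2.0, 5.0)
for name, H in [("projective", H_proj), ("affine", H_aff), ("euclidean", H_euc)]:
    print(name, apply_homography(H, p))
```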

458 citations


Journal ArticleDOI
01 Feb 2001
TL;DR: This paper shows that frontal face models can be faithfully reconstructed from two photographs taken by consumer digital cameras in a totally non-invasive setup, and achieves a Euclidean reconstruction with the help of a novel factorization method for perspective cameras.
Abstract: This paper presents a working system for building 3-D human face models from two photographs. Rather than using expensive 3-D scanners, we show that frontal face models can be faithfully reconstructed from two photographs taken by consumer digital cameras in a totally non-invasive setup. We first rectify the image pair so that corresponding epipolar lines become coincident, by computing a dual point transformation. We then address the correspondence problem by converting it into a maximal surface extraction problem, which is then solved efficiently. The method effectively removes local extrema. Finally, a Euclidean reconstruction is achieved with the help of a novel factorization method for perspective cameras. Most of the computational steps are conducted in projective space; Euclidean information is introduced only at the last stage. This sets our system apart from traditional ones, which begin with metric information obtained from carefully calibrated cameras. We have collected a bank of face pairs to test our system, and are satisfied with its performance. Results from this image database are demonstrated.
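The abstract does not spell out the factorization step, but the standard idea behind projective factorization for perspective cameras (in the spirit of Sturm and Triggs; the paper's novel variant may differ) is to scale each image point by a projective depth and factor the resulting rank-4 measurement matrix. A minimal sketch, assuming the depths are already known:

```python
import numpy as np

def projective_factorization(x, depths):
    """Factor scaled image measurements into cameras and 3D points.

    x      : (m, n, 2) array of image points, m views, n points
    depths : (m, n) projective depths lambda_ij (assumed known here;
             in practice they are estimated iteratively)
    Returns (m, 3, 4) camera matrices P and (4, n) homogeneous points X
    with x_ij ~ P_i X_j up to scale.
    """
    m, n, _ = x.shape
    # Build the 3m x n measurement matrix of depth-scaled homogeneous points.
    W = np.empty((3 * m, n))
    for i in range(m):
        xh = np.column_stack([x[i, :, 0], x[i, :, 1], np.ones(n)])  # (n, 3)
        W[3 * i:3 * i + 3] = depths[i] * xh.T
    # With consistent depths W has rank 4; take the best rank-4 factorization.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    P = (U[:, :4] * s[:4]).reshape(m, 3, 4)  # stacked camera matrices
    X = Vt[:4]                               # homogeneous 3D points
    return P, X
```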

34 citations


Journal ArticleDOI
TL;DR: In this article, the authors generalize the Desargues theorem in the direction of dynamical systems and show that the result comprises an infinite family of configurations of unbounded complexity.
Abstract: The Desargues theorem is a basic theorem in classical projective geometry. In this paper we generalize the Desargues theorem in the direction of dynamical systems. Our result comprises an infinite family of configurations of unbounded complexity. The proof involves constructing special kinds of hyperplane arrangements and then projecting subsets of them into the plane.
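For reference, the classical theorem being generalized states: if two triangles are perspective from a point (the lines through corresponding vertices are concurrent), then they are perspective from a line (the intersections of corresponding sides are collinear). A small NumPy check in homogeneous coordinates, where both the join of two points and the meet of two lines are cross products; the configuration below is an arbitrary illustrative one:

```python
import numpy as np

def join(p, q):   # line through two homogeneous points
    return np.cross(p, q)

def meet(l, m):   # intersection point of two homogeneous lines
    return np.cross(l, m)

# Two triangles perspective from the point O: each B_i lies on the line O A_i.
O = np.array([0.0, 0.0, 1.0])
A = [np.array([1.0, 2.0, 1.0]),
     np.array([3.0, -1.0, 1.0]),
     np.array([-2.0, 1.0, 1.0])]
B = [O + t * (a - O) for a, t in zip(A, [2.0, 3.0, 1.5])]  # points along each ray

# Intersections of corresponding sides, e.g. line A1A2 with line B1B2.
pts = [meet(join(A[i], A[j]), join(B[i], B[j]))
       for i, j in [(0, 1), (1, 2), (2, 0)]]

# Desargues: the three intersection points are collinear, i.e. the
# determinant of their stacked homogeneous coordinates vanishes.
print(np.linalg.det(np.array(pts)))  # ~0 up to floating-point error
```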

8 citations


Journal ArticleDOI
TL;DR: By modifying morphological operations with a dynamically varying structuring element, distorted targets with varying contrast values, shape deformations, and different locations/orientations can be detected successfully on the perspective plane, even when they appear against a cluttered background.
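The abstract gives no implementation details, but one plausible reading of a dynamically varying structuring element is that its size tracks perspective scale: targets farther along the perspective plane appear smaller, so the element shrinks with image row. A hedged sketch of that idea (the linear size law and row-by-row erosion are illustrative choices, not the paper's formulation):

```python
import numpy as np
from scipy.ndimage import binary_erosion

def perspective_erosion(img, max_size=9, min_size=3):
    """Erode a binary image with a structuring element whose size grows
    linearly from the top rows (far, small) to the bottom rows (near, large).
    The linear size law is an illustrative stand-in for a dynamically
    varying structuring element on the perspective plane."""
    h, _ = img.shape
    out = np.zeros_like(img)
    for r in range(h):
        # Interpolate element size with image row (a crude depth proxy).
        k = max(int(round(min_size + (max_size - min_size) * r / (h - 1))), 1)
        se = np.ones((k, k), dtype=bool)
        eroded = binary_erosion(img, structure=se)
        out[r] = eroded[r]  # keep only this row's result
    return out
```

Running a full erosion per row is wasteful; a practical version would compute one erosion per distinct element size and assemble the rows, but the sketch keeps the idea visible.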

5 citations


Proceedings ArticleDOI
07 Oct 2001
TL;DR: The proposed adaptive stereo matching algorithm has been tested on both disparity maps and 3D model views, and the results show that a remarkable improvement is obtained in projective distortion regions.
Abstract: In this paper, we propose an adaptive stereo matching algorithm to address stereo matching problems in projective distortion regions. Since projective distortion regions cannot be handled by a fixed-size block matching algorithm, we use an adaptive window warping method in a hierarchical matching process to compensate for the perspective distortions. In addition, probability theory is adopted to model the uncertainty of the disparity of points over the window. The proposed stereo matching algorithm has been tested on both disparity maps and 3D model views. The experimental results show that a remarkable improvement is obtained in projective distortion regions.
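The core move, warping the matching window rather than keeping it fronto-parallel, can be sketched with a horizontally sheared window: on a slanted surface, disparity varies roughly linearly across the window. The shear model and SAD cost below are illustrative choices; the paper's hierarchical and probabilistic machinery is not reproduced:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warped_window_cost(left, right, x, y, d, shear, half=4):
    """SAD cost between a fixed window in the left image and an affinely
    sheared window in the right image, approximating a slanted surface.
    'shear' encodes how disparity changes across the window and is an
    illustrative stand-in for adaptive window warping."""
    ys, xs = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    patch_l = map_coordinates(left, [y + ys, x + xs], order=1)
    # In the right image the window is shifted by d and sheared horizontally,
    # so pixels at column offset xs sample at x - d + (1 + shear) * xs.
    patch_r = map_coordinates(right, [y + ys, x - d + (1.0 + shear) * xs], order=1)
    return np.abs(patch_l - patch_r).sum()
```

Sweeping d (and a small set of shear values) at each pixel and keeping the minimum cost yields a disparity map that degrades far less on slanted, projectively distorted surfaces than a rigid fixed-size window.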

3 citations


Book ChapterDOI
01 Jan 2001
TL;DR: Introduces the concept of a "convex embedding" for generalized polygons, a notion that emerges from the study of convex subcomplexes of buildings.
Abstract: We introduce the concept of a "convex embedding" for generalized polygons. This concept emerges from a study of convex subcomplexes of buildings. We review some results on embeddings of generalized polygons in this perspective. We also relate it to some (known) characterization theorems.

1 citation


Book ChapterDOI
TL;DR: A transformation applied during recognition projects the image information into the truncated configuration space used for training; this gives full flexibility in the position of the camera, since perspective effects are treated exactly.
Abstract: We report on a method for achieving a significant truncation of the training space necessary for recognizing rigid 3D objects from perspective images. Considering objects lying on a table, the configuration space of continuous coordinates is three-dimensional. In addition, the objects have a few distinct support modes. We show that recognition using a stationary camera can be carried out by training each object class and support mode in a two-dimensional configuration space. We have developed a transformation, used during recognition, for projecting the image information into the truncated configuration space of the training. The new concept gives full flexibility concerning the position of the camera, since perspective effects are treated exactly. The concept has been tested using 2D object silhouettes as the image property and central moments as the image descriptors. High recognition speed and robust performance are obtained.
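As a concrete illustration of the descriptors mentioned, the central moments of a binary silhouette can be computed directly from pixel coordinates. The sketch below is plain NumPy, not the authors' code:

```python
import numpy as np

def central_moments(mask, max_order=3):
    """Central moments mu_pq of a binary silhouette.

    mask : 2D boolean array, True inside the object silhouette
    Returns a dict {(p, q): mu_pq} for p + q <= max_order; the moments are
    translation-invariant because coordinates are taken about the centroid.
    """
    ys, xs = np.nonzero(mask)
    xbar, ybar = xs.mean(), ys.mean()  # silhouette centroid
    dx, dy = xs - xbar, ys - ybar
    return {(p, q): np.sum(dx ** p * dy ** q)
            for p in range(max_order + 1)
            for q in range(max_order + 1)
            if p + q <= max_order}
```

Here mu_00 is the silhouette area and mu_10, mu_01 vanish by construction; the higher-order moments capture the shape information used for matching.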

1 citation