Topic

Perspective (geometry)

About: Perspective (geometry) is a research topic. Over its lifetime, 277 publications have been published within this topic, receiving 5,795 citations.


Papers
Proceedings ArticleDOI
15 Jun 1992
TL;DR: The problem of computing the placement of points in 3-D space, given two uncalibrated perspective views, is considered; it is shown that projective invariants of 3-D geometric configurations can be determined from two perspective views.
Abstract: The problem of computing the placement of points in 3-D space, given two uncalibrated perspective views, is considered. The main theorem shows that the placement of the points is determined only up to an arbitrary projective transformation of 3-space. Given additional ground-control points, however, the location of the points and the camera parameters may be determined. The method is linear and noniterative, whereas previously known methods for solving the camera-calibration and placement problem that take proper account of both ground-control points and image correspondences are unsatisfactory, requiring either iterative methods or model restrictions. As a result of the main theorem, it is possible to determine projective invariants of 3-D geometric configurations from two perspective views.
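Two-view methods in this line hinge on estimating the epipolar geometry (the fundamental matrix) linearly from image correspondences alone. As a rough, self-contained illustration of that step (not the paper's own algorithm), the following NumPy sketch implements the normalized eight-point algorithm; the function names are made up for this example.

```python
import numpy as np

def eight_point(x1, x2):
    """Estimate the fundamental matrix F (x2^T F x1 = 0) from N >= 8
    point correspondences, given as (N, 2) arrays, via the linear
    eight-point algorithm with Hartley normalization."""
    def normalize(pts):
        # Translate the centroid to the origin and scale so the mean
        # distance from the origin is sqrt(2) (improves conditioning).
        c = pts.mean(axis=0)
        s = np.sqrt(2) / np.linalg.norm(pts - c, axis=1).mean()
        T = np.array([[s, 0, -s * c[0]],
                      [0, s, -s * c[1]],
                      [0, 0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    p1, T1 = normalize(x1)
    p2, T2 = normalize(x2)
    # Each correspondence contributes one row of the system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: a valid fundamental matrix is singular.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    return T2.T @ F @ T1  # undo the normalization
```

With noise-free synthetic data the recovered F satisfies the epipolar constraint to machine precision; with real correspondences the result is typically used as a starting point for nonlinear refinement.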

505 citations

Book
05 Mar 2001
TL;DR: The state of knowledge in one subarea of vision is described, the geometric laws that relate different views of a scene from the perspective of various types of geometries, which is a unified framework for thinking about many geometric problems relevant to vision.
Abstract (from the publisher; with contributions from Theo Papadopoulo): Over the last forty years, researchers have made great strides in elucidating the laws of image formation, processing, and understanding by animals, humans, and machines. This book describes the state of knowledge in one subarea of vision: the geometric laws that relate different views of a scene. Geometry, one of the oldest branches of mathematics, is the natural language for describing three-dimensional shapes and spatial relations. Projective geometry, the geometry that best models image formation, provides a unified framework for thinking about many geometric problems relevant to vision. The book formalizes and analyzes the relations between multiple views of a scene from the perspective of various types of geometries. A key feature is that it considers Euclidean and affine geometries as special cases of projective geometry. Images play a prominent role in computer communications. Producers and users of images, in particular three-dimensional images, require a framework for stating and solving problems. The book offers a number of conceptual tools and theoretical results useful for the design of machine-vision algorithms. It also illustrates these tools and results with many examples of real applications.

458 citations

Proceedings ArticleDOI
23 Jun 1998
TL;DR: The novel contribution is that, in a stratified context, the various forms of providing metric information can all be represented as circular constraints on the parameters of an affine transformation of the plane, giving a simple and uniform framework for integrating constraints.
Abstract: We describe the geometric constraints and algorithmic implementation for metric rectification of planes. The rectification allows metric properties, such as angles and length ratios, to be measured on the world plane from a perspective image. The novel contributions are: first, that in a stratified context the various forms of providing metric information (a known angle, two equal though unknown angles, or a known length ratio) can all be represented as circular constraints on the parameters of an affine transformation of the plane, which provides a simple and uniform framework for integrating constraints; second, direct rectification from right angles in the plane; third, it is shown that metric rectification enables calibration of the internal camera parameters; fourth, vanishing points are estimated using a maximum-likelihood estimator; fifth, an algorithm for automatic rectification. Examples are given for a number of images, and applications are demonstrated for texture-map acquisition and metric measurements.
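The paper recovers the rectification stratum by stratum from scene constraints alone. As a much simpler baseline for measuring metric properties on a world plane, one can assume four reference points with known world-plane coordinates, estimate the plane-to-image homography by the direct linear transform (DLT), and invert it. The sketch below does only that; it is not the paper's stratified method, and the function names are illustrative.

```python
import numpy as np

def homography(src, dst):
    """DLT estimate of the 3x3 homography H mapping src -> dst, where
    src and dst are (N, 2) arrays of corresponding points (N >= 4),
    so that dst ~ H @ [src_x, src_y, 1]."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # u*(h3 . X) - h1 . X = 0  and  v*(h3 . X) - h2 . X = 0
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)  # null vector = stacked rows of H

def to_plane(H, pt):
    """Back-project an image point onto the metric world plane."""
    p = np.linalg.inv(H) @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]
```

Once points are mapped back to the world plane, angles and length ratios can be measured there directly, which is exactly the kind of metric measurement the abstract describes (just obtained from stronger assumptions).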

414 citations

Journal ArticleDOI
TL;DR: Parametric and non-parametric approaches to warping, and matching criteria, are reviewed.
Abstract: Image warping is a transformation that maps all positions in one image plane to positions in a second plane. It arises in many image-analysis problems, whether to remove optical distortions introduced by a camera or a particular viewing perspective, to register an image with a map or template, or to align two or more images. The choice of warp is a compromise between a smooth distortion and one which achieves a good match. Smoothness can be ensured by assuming a parametric form for the warp or by constraining it using differential equations. Matching can be specified by points to be brought into alignment, by local measures of correlation between images, or by the coincidence of edges. Parametric and non-parametric approaches to warping, and matching criteria, are reviewed.
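A minimal example of a parametric warp of the kind reviewed: a backward-mapped affine warp with nearest-neighbour sampling, written in plain NumPy. The function name and the sampling choice are illustrative, not from the paper; real implementations typically interpolate rather than round.

```python
import numpy as np

def affine_warp(img, A, b):
    """Backward-mapping affine warp: output(y, x) = img(A @ [y, x] + b),
    with nearest-neighbour sampling and zero fill outside the source."""
    H, W = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([ys.ravel(), xs.ravel()])   # (2, H*W) output grid
    src = A @ coords + b[:, None]                 # source position per pixel
    sy, sx = np.rint(src).astype(int)             # nearest-neighbour lookup
    ok = (sy >= 0) & (sy < H) & (sx >= 0) & (sx < W)
    out.ravel()[ok] = img[sy[ok], sx[ok]]
    return out
```

Backward mapping (iterating over the output grid and sampling the source) is the usual choice because it leaves no holes in the output, unlike forward mapping.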

337 citations

Proceedings ArticleDOI
18 Jun 1996
TL;DR: This paper describes a family of factorization-based algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines that can be viewed as generalizations of the Tomasi-Kanade algorithm from affine to fully perspective cameras, and from points to lines.
Abstract: This paper describes a family of factorization-based algorithms that recover 3D projective structure and motion from multiple uncalibrated perspective images of 3D points and lines. They can be viewed as generalizations of the Tomasi-Kanade algorithm from affine to fully perspective cameras, and from points to lines. They make no restrictive assumptions about scene or camera geometry, and unlike most existing reconstruction methods they do not rely on 'privileged' points or images. All of the available image data is used, and each feature in each image is treated uniformly. The key to projective factorization is the recovery of a consistent set of projective depths (scale factors) for the image points: this is done using fundamental matrices and epipoles estimated from the image data. We compare the performance of the new techniques with several existing ones, and also describe an approximate factorization method that gives similar results to SVD-based factorization, but runs much more quickly for large problems.
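The affine baseline these methods generalize, Tomasi-Kanade factorization, can be sketched in a few lines: subtract the per-view centroids from the measurement matrix, then take a rank-3 truncated SVD. The sketch below covers that affine case only, not the projective-depth recovery described in the abstract; names are illustrative.

```python
import numpy as np

def affine_factorization(W):
    """Tomasi-Kanade style affine factorization. W is a 2F x N
    measurement matrix stacking the x- and y-coordinates of N points
    tracked through F views. Returns motion M (2F x 3), shape S (3 x N),
    and translations t (2F x 1) with W ~ M @ S + t, defined only up to
    a 3x3 affine ambiguity (resolved later by metric constraints)."""
    t = W.mean(axis=1, keepdims=True)          # per-view translations
    U, s, Vt = np.linalg.svd(W - t, full_matrices=False)
    M = U[:, :3] * np.sqrt(s[:3])              # split the rank-3 part
    S = np.sqrt(s[:3])[:, None] * Vt[:3]
    return M, S, t
```

The projective generalization in the paper rescales each image point by a projective depth before factorizing, which raises the rank of the (rescaled) measurement matrix from 3 to 4.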

283 citations

Network Information
Related Topics (5)
- Object detection: 46.1K papers, 1.3M citations (76% related)
- Feature (computer vision): 128.2K papers, 1.7M citations (74% related)
- Convolutional neural network: 74.7K papers, 2M citations (73% related)
- Image segmentation: 79.6K papers, 1.8M citations (72% related)
- Feature extraction: 111.8K papers, 2.1M citations (72% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2021    10
2020    4
2019    10
2018    13
2017    12
2016    7