Topic

Homography (computer vision)

About: Homography (computer vision) is a research topic. Over its lifetime, 2,247 publications have been published within this topic, receiving 51,916 citations.


Papers
Journal ArticleDOI
TL;DR: The two key components that are necessary for successful SR restoration are described: the accurate alignment or registration of the LR images and the formulation of an SR estimator that uses a generative image model together with a prior model of the super-resolved image itself.
Abstract: Super-resolution (SR) restoration aims to solve the following problem: given a set of observed images, estimate an image at a higher resolution than is present in any of the individual images. Where the application of this technique differs in computer vision from other fields is in the variety and severity of the registration transformation between the images. In particular this transformation is generally unknown, and a significant component of solving the SR problem in computer vision is the estimation of the transformation. The transformation may have a simple parametric form, or it may be scene dependent and have to be estimated for every point. In either case the transformation is estimated directly and automatically from the images. We describe the two key components that are necessary for successful SR restoration: the accurate alignment or registration of the LR images and the formulation of an SR estimator that uses a generative image model together with a prior model of the super-resolved image itself. As with many other problems in computer vision, these different aspects are tackled in a robust, statistical framework.

296 citations
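
The registration component described in the abstract above is commonly implemented by estimating one homography per low-resolution frame against a chosen reference frame. The sketch below is a minimal illustration of that step assuming OpenCV, ORB features, and RANSAC fitting; none of these choices come from the paper, which only requires that the alignment be accurate and robustly estimated.

import cv2
import numpy as np

def register_to_reference(reference, frames):
    """Estimate one homography per frame mapping it onto the reference image."""
    orb = cv2.ORB_create(2000)
    kp_ref, des_ref = orb.detectAndCompute(reference, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    homographies = []
    for frame in frames:
        kp, des = orb.detectAndCompute(frame, None)
        matches = matcher.match(des, des_ref)
        src = np.float32([kp[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([kp_ref[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        # RANSAC provides the robust, statistical estimate the abstract calls for.
        H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
        homographies.append(H)
    return homographies

The recovered homographies would then feed the second component, the SR estimator built on a generative image model and a prior over the super-resolved image.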

Proceedings ArticleDOI
07 Jul 2001
TL;DR: The camera-projector system infers models for the projector-to-camera and projector-to-screen mappings in order to provide two major benefits.
Abstract: Standard presentation systems consisting of a laptop connected to a projector suffer from two problems: (1) the projected image appears distorted (keystoned) unless the projector is precisely aligned to the projection screen; (2) the speaker is forced to interact with the computer rather than the audience. This paper shows how the addition of an uncalibrated camera, aimed at the screen, solves both problems. Although the locations, orientations and optical parameters of the camera and projector are unknown, the projector-camera system calibrates itself by exploiting the homography between the projected slide and the camera image. Significant improvements are possible over passively calibrating systems since the projector actively manipulates the environment by placing feature points into the scene. For instance, using a low-resolution (160×120) camera, we can achieve an accuracy of ±3 pixels in a 1024×768 presentation slide. The camera-projector system infers models for the projector-to-camera and projector-to-screen mappings in order to provide two major benefits.

283 citations
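
The self-calibration described in the abstract above reduces to fitting a single homography between slide (projector) coordinates and the positions where the camera observes the projected feature points. A minimal sketch assuming OpenCV, with made-up point coordinates standing in for the correspondences the system gathers by projecting its own features:

import cv2
import numpy as np

# Pixel positions of four projected calibration points in the 1024x768 slide.
slide_pts = np.float32([[0, 0], [1023, 0], [1023, 767], [0, 767]])
# Where the low-resolution (160x120) camera observed those points (illustrative values).
camera_pts = np.float32([[12, 9], [148, 14], [143, 108], [17, 103]])

# Homography from slide (projector) coordinates to camera coordinates.
H_slide_to_cam, _ = cv2.findHomography(slide_pts, camera_pts)

# Any slide pixel can now be located in the camera image, e.g. the slide centre.
centre = cv2.perspectiveTransform(np.float32([[[512, 384]]]), H_slide_to_cam)
print(centre)

Pre-warping the slide with the inverse of the inferred projector-to-screen mapping is the natural way to undo the keystone distortion the abstract mentions, though the paper's exact procedure is not reproduced here.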

Proceedings ArticleDOI
20 Jun 2011
TL;DR: This paper describes a method to construct seamless image mosaics of a panoramic scene containing two predominant planes: a distant back plane and a ground plane that sweeps out from the camera's location.
Abstract: This paper describes a method to construct seamless image mosaics of a panoramic scene containing two predominant planes: a distant back plane and a ground plane that sweeps out from the camera's location. While this type of panorama can be stitched when the camera is carefully rotated about its optical center, such ideal scene capture is hard to perform correctly. Existing techniques use a single homography per image to perform alignment followed by seam cutting or image blending to hide inevitable alignment artifacts. In this paper, we demonstrate how to use two homographies per image to produce a more seamless image. Specifically, our approach blends the homographies in the alignment procedure to perform a nonlinear warping. Once the images are geometrically stitched, they are further processed to blend seams and reduce curvilinear visual artifacts due to the nonlinear warping. As demonstrated in our paper, our procedure is able to produce results for this type of scene where current state-of-the-art techniques fail.

270 citations
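
The central idea above, blending two homographies across the image to obtain a nonlinear warp, can be sketched as follows. H_back and H_ground are assumed to map target (mosaic) pixel coordinates back into the source image so that cv2.remap can sample it, and the simple vertical ramp is an illustrative stand-in for the spatially varying blending weights the paper estimates per scene.

import cv2
import numpy as np

def dual_homography_warp(image, H_back, H_ground, out_size):
    """Warp an image with a per-pixel blend of a back-plane and a ground-plane homography."""
    w, h = out_size
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N

    back = H_back @ pts
    ground = H_ground @ pts
    back = back[:2] / back[2]        # source coordinates under the back-plane homography
    ground = ground[:2] / ground[2]  # source coordinates under the ground-plane homography

    # Top rows follow the distant back plane, bottom rows follow the ground plane.
    alpha = (ys / (h - 1)).reshape(1, -1)
    blended = (1 - alpha) * back + alpha * ground

    map_x = blended[0].reshape(h, w).astype(np.float32)
    map_y = blended[1].reshape(h, w).astype(np.float32)
    return cv2.remap(image, map_x, map_y, cv2.INTER_LINEAR)

The seam blending and the correction of curvilinear artifacts mentioned in the abstract would follow as separate post-processing steps.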

Proceedings ArticleDOI
07 Jun 2015
TL;DR: A novel stitching method that uses a smooth stitching field over the entire target image while accounting for all the local transformation variations; it is more robust to parameter selection, and hence more automated, than state-of-the-art methods.
Abstract: The goal of image stitching is to create natural-looking mosaics free of artifacts that may occur due to relative camera motion, illumination changes, and optical aberrations. In this paper, we propose a novel stitching method that uses a smooth stitching field over the entire target image, while accounting for all the local transformation variations. Computing the warp is fully automated and uses a combination of local homography and global similarity transformations, both of which are estimated with respect to the target. We mitigate the perspective distortion in the non-overlapping regions by linearizing the homography and slowly changing it to the global similarity. The proposed method is easily generalized to multiple images, and allows one to automatically obtain the best perspective in the panorama. It is also more robust to parameter selection, and hence more automated compared with state-of-the-art methods. The benefits of the proposed approach are demonstrated using a variety of challenging cases.

250 citations
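
The key mechanism in the abstract, gradually replacing the estimated homography with a global similarity away from the overlap, can be illustrated with a simple position-dependent blend of the two warps. H, S and the linear ramp used for the mixing weight are illustrative assumptions; the paper derives a smoother transition by linearizing the homography.

import numpy as np

def blended_warp_points(points, H, S, x_overlap_end, x_image_end):
    """Warp Nx2 points with a position-dependent mix of homography H and similarity S."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])  # N x 3 homogeneous points
    warped_H = (H @ pts_h.T).T
    warped_S = (S @ pts_h.T).T
    warped_H = warped_H[:, :2] / warped_H[:, 2:3]
    warped_S = warped_S[:, :2] / warped_S[:, 2:3]

    # mu = 1 inside the overlap (pure homography), fading to 0 at the far edge
    # of the image (pure similarity), which tames perspective stretching there.
    mu = np.clip((x_image_end - points[:, 0]) /
                 (x_image_end - x_overlap_end), 0.0, 1.0)[:, None]
    return mu * warped_H + (1.0 - mu) * warped_S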

Proceedings ArticleDOI
17 Jun 1997
TL;DR: The authors present a method to solve the main problem for building a mosaic without human interaction for any rotation around the optical axis and fairly large zooming factors.
Abstract: The main problem for building a mosaic is the computation of the warping functions (homographies). In fact, two cases are to be distinguished. The first is when the homography is mainly a translation (i.e., the rotation around the optical axis and the zooming factor are small). The second is the general case (when the rotation around the optical axis and zooming are arbitrary). Some efficient methods have been developed to solve the first case. But the second case is more difficult, in particular when the rotation around the optical axis is very large (90 degrees or more). Often in this case human interaction is needed to provide a first approximation of the transformation that will bring one back to the first case. The authors present a method to solve this problem without human interaction for any rotation around the optical axis and fairly large zooming factors.

230 citations
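
For context on the geometry in the abstract above: when the camera only rotates about its optical centre (and possibly zooms), two views are related by the homography H = K2 R K1^{-1}, and a rotation of 90 degrees or more about the optical axis is representable in this model even though, as the paper notes, it is hard to estimate without a good initialization. The focal lengths, principal point, and rotation angle below are made-up illustrative values.

import numpy as np

def rotation_zoom_homography(f1, f2, theta_deg, cx=512.0, cy=384.0):
    """Homography induced by a rotation about the optical axis plus a zoom change."""
    K1 = np.array([[f1, 0.0, cx], [0.0, f1, cy], [0.0, 0.0, 1.0]])
    K2 = np.array([[f2, 0.0, cx], [0.0, f2, cy], [0.0, 0.0, 1.0]])
    t = np.deg2rad(theta_deg)
    # Rotation about the optical (z) axis; 90 degrees or more is the hard case.
    R = np.array([[np.cos(t), -np.sin(t), 0.0],
                  [np.sin(t),  np.cos(t), 0.0],
                  [0.0, 0.0, 1.0]])
    return K2 @ R @ np.linalg.inv(K1)

H = rotation_zoom_homography(f1=800.0, f2=1200.0, theta_deg=90.0)
print(H / H[2, 2])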


Network Information
Related Topics (5)
Feature extraction: 111.8K papers, 2.1M citations, 83% related
Image segmentation: 79.6K papers, 1.8M citations, 81% related
Feature (computer vision): 128.2K papers, 1.7M citations, 81% related
Image processing: 229.9K papers, 3.5M citations, 79% related
Convolutional neural network: 74.7K papers, 2M citations, 78% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    3
2021    108
2020    110
2019    145
2018    131
2017    127