Topic
View synthesis
About: View synthesis is a research topic. Over its lifetime, 1,701 publications have appeared on this topic, receiving 42,333 citations.
Papers published on a yearly basis
Papers
TL;DR: The Matching On Demand with view Synthesis algorithm (MODS), as discussed by the authors, uses progressively more synthesized images and more detectors until reliable estimation of geometry is possible; it remains comparable in speed to standard wide-baseline matchers on simpler problems.
Abstract: We consider wide-baseline matching, focusing on problems with extreme viewpoint change. We introduce the use of view synthesis with affine-covariant detectors to solve such problems and show that matching with the Hessian-Affine or MSER detectors outperforms the state-of-the-art ASIFT.
To minimise the loss of speed caused by view synthesis, we propose the Matching On Demand with view Synthesis algorithm (MODS) that uses progressively more synthesized images and more (time-consuming) detectors until reliable estimation of geometry is possible. We show experimentally that the MODS algorithm solves problems beyond the state-of-the-art and yet is comparable in speed to standard wide-baseline matchers on simpler problems.
Minor contributions include an improved method for tentative correspondence selection, applicable both with and without view synthesis, and a view-synthesis setup that greatly improves MSER robustness to blur and scale change while increasing its running time by only 10%.
4 citations
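The on-demand idea in the MODS abstract can be sketched as a control loop (a minimal sketch with a hypothetical interface: `matchers` stands in for the paper's cheap-to-expensive tiers of synthesized views and detectors, and `estimate_geometry` for RANSAC-based two-view geometry estimation; none of these names come from the paper):

```python
def mods_match(matchers, estimate_geometry, min_inliers=15):
    """Sketch of the MODS control loop: try progressively more expensive
    matching tiers and stop as soon as geometry estimation is reliable.

    matchers          -- callables ordered cheap-to-expensive; each returns
                         new tentative correspondences for its tier
    estimate_geometry -- callable returning (geometry, inliers) from all
                         correspondences accumulated so far
    """
    correspondences = []
    for tier, matcher in enumerate(matchers):
        correspondences += matcher()          # add this tier's tentative matches
        geometry, inliers = estimate_geometry(correspondences)
        if len(inliers) >= min_inliers:       # reliable geometry -> stop early
            return geometry, inliers, tier
    return None, [], len(matchers)            # all tiers exhausted
```

Because cheap tiers run first, easy image pairs terminate early at the speed of a standard wide-baseline matcher, while hard pairs escalate to more synthesized views and slower detectors.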
01 Sep 2018
TL;DR: An algorithm for omnidirectional virtual view synthesis based on the Omnidirectional Video plus Depth (OVD) format is introduced, and the lack of benchmark data, which prevents objective quality assessment, is addressed.
Abstract: In this paper we introduce an algorithm for omnidirectional virtual view synthesis based on the Omnidirectional Video plus Depth (OVD) format. The implementation is done on the basis of the View Synthesis Reference Software (VSRS) developed by MPEG of ISO/IEC. We also address the lack of benchmark data, which prevents objective quality assessment: we present a method for generating test images in the OVD representation, along with example omnidirectional images and omnidirectional depths, called “Poznan Hall 360”.
4 citations
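In the OVD representation, each equirectangular pixel carries a depth value; synthesizing a virtual view starts by mapping pixels to viewing rays on the unit sphere, from which depth recovers a 3-D point to reproject into the target view. A minimal sketch of the pixel-to-ray step (function and parameter names are illustrative, not taken from VSRS):

```python
import math

def equirect_to_ray(x, y, width, height):
    """Map an equirectangular pixel (x, y) to a unit viewing ray.

    Longitude spans [-pi, pi) across the image width, latitude spans
    [pi/2, -pi/2] down the height; pixel centers are offset by 0.5.
    Multiplying this ray by the pixel's depth yields the 3-D point
    used for reprojection into the virtual view.
    """
    lon = (x + 0.5) / width * 2.0 * math.pi - math.pi
    lat = math.pi / 2.0 - (y + 0.5) / height * math.pi
    return (math.cos(lat) * math.sin(lon),   # right
            math.sin(lat),                   # up
            math.cos(lat) * math.cos(lon))   # forward
```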
15 Jul 2013
TL;DR: This paper presents a novel compression scheme that improves depth coding through a joint depth/texture coding scheme, an extension of the LAR (Locally Adaptive Resolution) codec, initially designed for 2D images.
Abstract: New 3D applications such as 3DTV and FVV require not only a large amount of data, but also high-quality visual rendering. Based on one or several depth maps, intermediate views can be synthesized using a depth image-based rendering technique. Many compression schemes have been proposed for texture-plus-depth data, but exploiting the correlation between the two representations to enhance compression performance is still an open research issue. In this paper, we present a novel compression scheme that improves depth coding through a joint depth/texture coding scheme. This method is an extension of the LAR (Locally Adaptive Resolution) codec, initially designed for 2D images. The LAR coding framework provides many features, such as lossy/lossless compression, low complexity, resolution and quality scalability, and quality control. Experimental results address both lossless and lossy compression, comparing against state-of-the-art techniques in the two domains (JPEG-LS, JPEG XR). Subjective results on intermediate view synthesis after depth map coding show that the proposed method significantly improves visual quality.
4 citations
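The depth image-based rendering step mentioned in the abstract can be illustrated with a one-dimensional forward warp (a minimal sketch under simplified assumptions: rectified cameras related by a pure horizontal shift, integer-pixel warping, one scanline; parameter names are illustrative, not from the paper):

```python
def synthesize_view(texture, depth, baseline, focal, z_min=1e-6):
    """1-D sketch of depth image-based rendering (DIBR): forward-warp one
    scanline of a reference view to a virtual camera shifted by `baseline`.

    Disparity = focal * baseline / depth, so nearer pixels move further.
    Pixels are warped far-to-near so that near pixels overwrite far ones,
    handling occlusion; `None` entries mark disocclusion holes that a real
    codec would inpaint or fill from a second reference view.
    """
    width = len(texture)
    target = [None] * width
    # process pixels from far to near so near pixels correctly occlude
    for x in sorted(range(width), key=lambda i: -depth[i]):
        disparity = focal * baseline / max(depth[x], z_min)
        xt = x + round(disparity)            # integer target column
        if 0 <= xt < width:
            target[xt] = texture[x]
    return target
```

The sketch makes the coupling visible: errors in the coded depth map shift `disparity` and therefore displace texture in the synthesized view, which is why joint depth/texture coding can improve the rendered quality beyond what independent coding achieves.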