Topic
View synthesis
About: View synthesis is a research topic. Over its lifetime, 1,701 publications on this topic have received 42,333 citations.
[Chart: papers published on a yearly basis]
Papers
15 Mar 2012. TL;DR: The experimental results show that the proposed algorithm can find appropriate textures in the temporal domain and fill the hole regions even in complex scenes.
Abstract: Depth image-based virtual view synthesis inevitably produces holes, and these holes degrade the viewing quality of the synthesized image. In this paper, we propose a temporal hole-filling algorithm that refers to neighboring frames in the temporal domain. To find corresponding textures for a current hole, we synthesize both color and depth videos by the 3D warping technique and linearly interpolate the holes in the depth video. With the interpolated depth values, we search the neighboring key frames for the corresponding color textures and use them to fill the holes. The experimental results show that the proposed algorithm can find appropriate textures in the temporal domain and fill the hole regions even in complex scenes. Keywords: hole filling; virtual view synthesis; temporal domain; stereoscopic video
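The hole problem the abstract describes arises in the 3D warping step itself: each pixel is shifted by a disparity derived from its depth, and target pixels that no source pixel maps to are left as holes. A minimal sketch of that horizontal warping, assuming a simple pinhole model with illustrative focal length and baseline (not the authors' code):

```python
import numpy as np

def warp_view(color, depth, f=1000.0, baseline=0.05):
    """Warp a color image to a horizontally shifted virtual view.

    depth is assumed strictly positive; f (focal length in pixels) and
    baseline (camera separation) are illustrative parameters.
    Returns the warped image and a boolean mask of the unfilled holes.
    """
    h, w = depth.shape
    warped = np.zeros_like(color)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # nearer pixels (small depth) get larger disparity shifts
            disparity = int(round(f * baseline / depth[y, x]))
            xt = x + disparity
            if 0 <= xt < w:
                warped[y, xt] = color[y, x]
                filled[y, xt] = True
    holes = ~filled  # these regions are what a hole-filling step must cover
    return warped, holes
```

The paper's contribution then fills the `holes` mask by searching temporally neighboring key frames rather than inpainting from the current frame alone.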
9 citations
01 Jan 2005. TL;DR: An algorithm for the layered segmentation of video data in multiple views based on computing the parameters of a layered representation of the scene in which each layer is modelled by its motion, appearance and occupancy is presented.
Abstract: We present an algorithm for the layered segmentation of video data in multiple views. The approach is based on computing the parameters of a layered representation of the scene in which each layer is modelled by its motion, appearance and occupancy, where occupancy describes, probabilistically, the layer's spatial extent and not simply its segmentation in a particular view. The problem is formulated as the MAP estimation of all layer parameters conditioned on those at the previous time step; i.e., a sequential estimation problem that is equivalent to tracking multiple objects in a given number of views. Expectation-Maximisation is used to establish layer posterior probabilities for both occupancy and visibility, which are represented distinctly. Evidence from areas in each view which are described poorly under the model is used to propose new layers automatically. Since these potential new layers often occur at the fringes of images, the algorithm is able to segment and track these in a single view until a suitable candidate match is discovered in the other views. The algorithm is shown to be very effective at segmenting and tracking non-rigid objects and can cope with extreme occlusion. We demonstrate an application of this representation to dynamic novel view synthesis.
8 citations
TL;DR: A no-reference image quality index for depth maps is presented by modeling the statistics of edge profiles (SEP) in a multi-scale framework and demonstrates that the proposed metric outperforms the relevant state-of-the-art quality metrics by a large margin and has better generalization ability.
8 citations
21 Sep 2020. TL;DR: In this article, a modification to the original BPA-DWT is explored, replacing the traditional constant extrapolation strategy with a newly proposed affine extrapolation for reconstructing depth data in the vicinity of discontinuities.
Abstract: A highly scalable and compact representation of depth data is required in many applications, and it is especially critical for plenoptic multiview image compression frameworks that use depth information for novel view synthesis and inter-view prediction. Efficiently coding depth data can be difficult as it contains sharp discontinuities. Breakpoint-adaptive discrete wavelet transforms (BPA-DWT), currently being standardized as part of the JPEG 2000 Part 17 extensions, have been found suitable for coding spatial media with hard discontinuities. In this paper, we explore a modification to the original BPA-DWT by replacing the traditional constant extrapolation strategy with the newly proposed affine extrapolation for reconstructing depth data in the vicinity of discontinuities. We also present a depth reconstruction scheme that can directly decode the BPA-DWT coefficients and breakpoints onto a compact and scalable mesh-based representation, which has many potential benefits over the sample-based description. For depth-compensated view prediction, our proposed triangular mesh representation of the depth data is a natural fit for modern graphics architectures.
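The constant-versus-affine distinction the abstract draws can be illustrated in isolation: when reconstructing depth samples up to a breakpoint, constant extrapolation repeats the last known value, while affine extrapolation continues the local slope. A toy comparison, assuming 1D samples and a simple two-point slope estimate (the function names and ramp are illustrative, not the standardized transform):

```python
import numpy as np

def extrapolate_constant(samples, n):
    """Repeat the last known sample across the n missing positions."""
    return np.full(n, samples[-1])

def extrapolate_affine(samples, n):
    """Continue the slope of the last two known samples for n positions."""
    slope = samples[-1] - samples[-2]
    return samples[-1] + slope * np.arange(1, n + 1)
```

On a sloped depth surface (e.g. a ground plane) cut by a discontinuity, the affine variant tracks the slope exactly where the constant variant leaves a growing error, which is the intuition behind the proposed modification.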
8 citations
08 Nov 2010. TL;DR: In this paper, a method for detecting discontinuities in a depth map comprising depth values corresponding to the viewpoint of a reference camera (C1) is described; the detection comprises calculating shifts for neighbouring pixels of the depth map, the shifts being associated with a change of viewpoint from the reference camera to a virtual camera (CV).
Abstract: Control of view synthesis of a 3D scene is described. A method comprises detecting discontinuities in a depth map that comprises depth values corresponding to the viewpoint of a reference camera (C1). The detection comprises calculating shifts for neighbouring pixels of the depth map, the shifts being associated with a change of viewpoint from the reference camera to a virtual camera (CV). The detected discontinuities are then analyzed, which comprises identifying increases in depth associated with the change of viewpoint from the reference camera to the virtual camera. Areas of disocclusion (108) associated with the viewpoint of the virtual camera are then identified, these areas being delimited by the positions of the identified increases in depth. The identified areas of disocclusion are then provided to a view synthesis process.
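The shift calculation the abstract describes can be sketched on a single depth-map row: each pixel's shift is inversely proportional to its depth, and a sharp drop in shift between neighbouring pixels (i.e., an increase in depth) marks where the virtual viewpoint exposes previously occluded background. A hedged sketch, assuming a simple disparity model, a fixed camera-move direction, and an arbitrary threshold (none of which are taken from the patent):

```python
import numpy as np

def find_disocclusions(depth_row, f=1.0, baseline=1.0, threshold=1.0):
    """Return indices where a disocclusion boundary opens along a row.

    depth_row is assumed strictly positive; f, baseline and threshold
    are illustrative. The sign convention assumes the virtual camera is
    displaced so that nearer pixels shift further than farther ones.
    """
    shifts = f * baseline / depth_row   # larger shift for nearer pixels
    gaps = np.diff(shifts)
    # a sharp drop in shift (depth increase) between neighbours exposes
    # background behind the foreground edge: flag it as a disocclusion
    return np.flatnonzero(gaps < -threshold)
```

The indices returned delimit the disocclusion areas that would then be handed to the view synthesis process for filling.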
8 citations