Topic
View synthesis
About: View synthesis is a research topic. Over its lifetime, 1701 publications have been published within this topic, receiving 42333 citations.
Papers published on a yearly basis
Papers
•
01 Nov 2012
TL;DR: This work presents a novel framework for this task which can reconstruct the disoccluded regions by taking temporally neighboring frames into account, and can fill disocclusions with their true color values, yielding high-quality view synthesis results.
Abstract: We present a novel method to fill disoccluded regions occurring in Depth Image Based Rendering (DIBR) in a faithful way. Given a video stream and a corresponding depth map, DIBR can render arbitrary new views of a scene. Areas that are not visible in the reference view need to be filled after warping. We present a novel framework for this task which can reconstruct the disoccluded regions by taking temporally neighboring frames into account. An efficient optimization scheme is employed to find faithful filling regions. This way, in contrast to common methods, we can fill disocclusions with their true color values, yielding high-quality view synthesis results.
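The DIBR pipeline described above can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: forward-warp a reference view using its depth map with a z-buffer, mark the disoccluded pixels, then fill those pixels from warped temporally neighboring frames instead of spatial inpainting. The baseline parameter and grayscale 1D-disparity setup are simplifying assumptions.

```python
import numpy as np

def warp_view(img, depth, baseline=4.0):
    """Forward-warp a grayscale image by disparity = baseline / depth.
    Returns the warped image and a mask of disoccluded (unfilled) pixels."""
    h, w = img.shape
    out = np.zeros_like(img)
    filled = np.zeros((h, w), dtype=bool)
    z = np.full((h, w), np.inf)          # z-buffer: keep the nearest surface
    for y in range(h):
        for x in range(w):
            d = baseline / depth[y, x]   # horizontal disparity in pixels
            xt = int(round(x + d))
            if 0 <= xt < w and depth[y, x] < z[y, xt]:
                out[y, xt] = img[y, x]
                z[y, xt] = depth[y, x]
                filled[y, xt] = True
    return out, ~filled

def fill_from_neighbors(warped, holes, warped_neighbors):
    """Fill disoccluded pixels with values from warped neighboring frames,
    in place of the usual spatial inpainting."""
    result = warped.copy()
    for nb, nb_holes in warped_neighbors:
        take = holes & ~nb_holes         # pixels visible in the neighbor
        result[take] = nb[take]
        holes = holes & nb_holes
    return result, holes
```

A foreground object with larger disparity than its background leaves a hole behind it after warping; a neighboring frame in which the object has moved exposes the true background colors for exactly those pixels.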
5 citations
••
TL;DR: A new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device is presented. The GPU implementation reaches 20 fps for generating quarter-pel-accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications.
Abstract: Generating depth maps along with video streams is valuable for cinema and television production. Thanks to the improvements of depth acquisition systems, the challenge of fusion between depth sensing and disparity estimation is widely investigated in computer vision. This paper presents a new framework for generating depth maps from a rig made of a professional camera with two satellite cameras and a Kinect device. A new disparity-based calibration method is proposed so that registered Kinect depth samples become perfectly consistent with disparities estimated between rectified views. Also, a new hierarchical fusion approach is proposed for combining on-the-fly depth sensing and disparity estimation in order to circumvent their respective weaknesses. Depth is determined by minimizing a global energy criterion that takes into account the matching reliability and the consistency with the Kinect input. The depth maps thus generated are relevant both in uniform and textured areas, without holes due to occlusions or structured-light shadows. Our GPU implementation reaches 20 fps for generating quarter-pel-accurate HD720p depth maps along with the main view, which is close to real-time performance for video applications. The estimated depth is of high quality and suitable for 3D reconstruction or virtual view synthesis.
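The energy criterion described in the abstract, combining matching reliability with Kinect consistency, can be illustrated by a simplified per-pixel winner-take-all version. The weight `lam` and the exact form of both terms are assumptions for illustration; the paper's actual method minimizes a global energy, not an independent per-pixel one.

```python
import numpy as np

def fuse_depth(match_cost, kinect_disp, reliability, lam=1.0):
    """Pick, per pixel, the disparity minimizing a combined energy:
        E(d) = reliability * C_match(d) + lam * |d - d_kinect|
    match_cost:  (D, H, W) stereo matching cost volume
    kinect_disp: (H, W) disparity converted from registered Kinect depth
    reliability: (H, W) matching-reliability weight in [0, 1]
    """
    D = match_cost.shape[0]
    disps = np.arange(D).reshape(D, 1, 1)
    consistency = np.abs(disps - kinect_disp[None])      # Kinect prior term
    energy = reliability[None] * match_cost + lam * consistency
    return energy.argmin(axis=0)                          # per-pixel WTA
```

When matching reliability is low (e.g. in a uniform, textureless area) the Kinect term dominates and the fused disparity snaps to the sensed depth; where matching is reliable, the stereo cost wins. This is the intuition behind circumventing the respective weaknesses of the two sources.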
5 citations
••
01 Sep 2017
TL;DR: The demonstration is aimed at presenting a simple free-viewpoint television system that is under development at Poznań University of Technology, Poznań, Poland, which is developed for broadcasting sport and cultural events, as well as for interactive courses and manuals.
Abstract: The demonstration is aimed at presenting a simple free-viewpoint television system that is under development at Poznan University of Technology, Poznan, Poland. Video acquisition using sparsely distributed pairs of video cameras, depth estimation from video, and virtual video rendering on a server are the features of the presented FTV system. The original depth estimation and view synthesis algorithms ensure good quality of the virtual views generated during virtual walks around a scene. The system is developed for broadcasting sport (e.g., judo, karate, volleyball) and cultural (e.g., amateur or professional theater performances) events, as well as for interactive courses and manuals (medical, cosmetics, dancing, technical, etc.).
5 citations
••
TL;DR: Experiments show that the proposed real-time virtual view synthesis method from light field can get high one-time imaging quality in real time.
Abstract: Virtual view synthesis techniques render a virtual view image from several pre-collected viewpoint images. The mainstream approach in virtual view synthesis is depth image-based rendering (DIBR), which has low one-time imaging quality: to achieve high imaging quality, hole artifacts must be inpainted after image warping, which implies high computational complexity. This paper proposes a real-time virtual view synthesis method based on the light field. The light field is parameterized and reconstructed from an image array, then transformed into the frequency domain. The virtual view is rendered by resampling the light field in the frequency domain: after resampling via a Fourier slice, the virtual view image is obtained by an inverse Fourier transform. Experiments show that our method achieves high one-time imaging quality in real time.
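The Fourier-slice idea behind this approach can be illustrated on a reduced 2D light field L(u, x), where u is the angular axis and x the spatial axis. This is my own toy construction, not the paper's method: shearing the light field by an integer slope s and integrating over u in the spatial domain is equivalent to extracting a 1D slice of the 2D spectrum along the line k_u = -s·k_x (the discrete version below assumes equal angular and spatial resolution).

```python
import numpy as np

def shear_and_add(L, s):
    """Spatial-domain rendering: integrate the light field L[u, x]
    along the angular axis after shearing by integer slope s."""
    U, N = L.shape
    return sum(np.roll(L[u], -s * u) for u in range(U))

def fourier_slice(L, s):
    """Frequency-domain rendering: the 1D spectrum of the rendered
    image is the slice of the 2D light-field spectrum along
    k_u = -s * k_x (requires U == N in this discrete toy version)."""
    U, N = L.shape
    Lhat = np.fft.fft2(L)                  # axes: (u, x)
    k = np.arange(N)
    slice_hat = Lhat[(-k * s) % U, k]      # pick the slice of the spectrum
    return np.fft.ifft(slice_hat).real
```

The payoff of working in the frequency domain is that, once the 2D (or, in the paper's setting, 4D) FFT of the light field is precomputed, rendering a new view costs only a slice extraction plus a lower-dimensional inverse FFT, instead of warping and inpainting every frame.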
5 citations
•
28 Oct 2015
TL;DR: In this paper, a view synthesis method based on mean shift stereo matching was proposed, where a mean shift image segmentation method is used to cut a known reference image into segments, an initial disparity value is optimized to obtain a more accurate disparity map of a stereo image pair, and the disparity map is smoothed.
Abstract: The present invention discloses a view synthesis method based on mean shift stereo matching. The method comprises: firstly, considering the two commonly used image types, gray images and color images, a mean shift image segmentation method is used to cut a known reference image into segments; secondly, based on a weighted multi-window matching method using color similarity between pixels and an introduced matching cost function, an initial disparity value is optimized to obtain a more accurate disparity map of the stereo image pair, and the disparity map is smoothed; and finally, forward view interpolation and hole/noise processing are conducted, allowing an image to be rendered at any position between the left view and the right view. The method achieves good stereo effects and is suitable for gray images and color images.
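The final interpolation step, rendering a virtual view at a position alpha between the left (alpha = 0) and right (alpha = 1) cameras, can be sketched as below. This is a minimal forward-warping version with hole filling from the other view, assuming a known left-to-right disparity map; the patent's segment-based matching and smoothing stages are not reproduced here.

```python
import numpy as np

def interpolate_view(left, right, disp_l, alpha):
    """Render a virtual view at position alpha in [0, 1] between the
    left and right cameras: forward-warp the left view with scaled
    disparity, then fill holes from the right view.
    disp_l: per-pixel left-to-right disparity (pixels, positive)."""
    h, w = left.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            xt = int(round(x - alpha * disp_l[y, x]))
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
                filled[y, xt] = True
    # Hole filling: the same scene point sits at x - (1 - alpha) * d
    # in the right view (using disp_l as a proxy for its disparity).
    for y in range(h):
        for x in range(w):
            if not filled[y, x]:
                xs = int(round(x - (1 - alpha) * disp_l[y, x]))
                if 0 <= xs < w:
                    out[y, x] = right[y, xs]
    return out
```

For a rectified pair with constant disparity d, the view at alpha = 0.5 is simply the left view shifted by d/2, which makes the geometry easy to check by hand.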
5 citations