Topic

View synthesis

About: View synthesis is a research topic. Over its lifetime, 1,701 publications have been published within this topic, receiving 42,333 citations.


Papers
Proceedings ArticleDOI
01 Sep 2016
TL;DR: The results show a correlation between the number of occlusions in the scene and the gain from using camera pairs instead of uniformly distributed cameras.
Abstract: In this article we deal with the problem of camera positioning in sparse multiview systems, with applications to free navigation. The limited number of cameras, while it keeps the system relatively practical, causes problems with proper depth estimation and virtual view synthesis due to the increased amount of occluded areas. We present experimental results on the optimal positioning of the cameras depending on two factors: the characteristics of the acquired scene and the multi-camera setup (linear or circular). The results show a correlation between the number of occlusions in the scene and the gain from using camera pairs instead of uniformly distributed cameras.

6 citations

Book ChapterDOI
01 Nov 2013
TL;DR: This chapter describes such “Stereo-In to Multiple-Viewpoint-Out” functionality on a general FPGA-based system demonstrating a real-time high-quality depth extraction and viewpoint synthesizer, as a prototype toward a future chipset for 3D-HDTV.
Abstract: With the advent of 3D-TV, the increasing interest in free-viewpoint TV in MPEG, and the inevitable evolution toward high-quality, higher-resolution TV (from SDTV to HDTV and even UDTV) with a comfortable viewing experience, there is a need for low-cost solutions addressing the 3D-TV market. Moreover, it is believed that in the not too distant future 2D-UDTV display technology will support a reasonable-quality autostereoscopic 3D-TV display mode (no need for 3D glasses), in which up to a dozen intermediate views are rendered between the extreme left and right stereo video input views. These intermediate views can be synthesized with viewpoint-synthesis techniques from the left and/or right image and an associated depth map. With left-plus-right stereo becoming the straightforward 3D-TV broadcasting method, extracting a high-quality depth map from the stereo input images becomes mandatory for synthesizing the other intermediate views. This chapter describes such “Stereo-In to Multiple-Viewpoint-Out” functionality on a general FPGA-based system, demonstrating real-time high-quality depth extraction and viewpoint synthesis as a prototype toward a future chipset for 3D-HDTV.
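For context, the core viewpoint-synthesis step this chapter relies on (rendering an intermediate view from one image plus its depth map) can be sketched as depth-image-based rendering with forward warping. The snippet below is a minimal, illustrative sketch, not the chapter's FPGA pipeline; the function name, the linear disparity-shift model, and the simple z-buffer are assumptions.

```python
import numpy as np

def warp_view(image, disparity, alpha):
    """Forward-warp `image` toward a virtual viewpoint on the baseline.

    image:     (H, W, 3) source view.
    disparity: (H, W) per-pixel horizontal disparity in pixels
               (proportional to inverse depth).
    alpha:     virtual-camera position, 0.0 = source, 1.0 = other view.
    """
    h, w = disparity.shape
    out = np.zeros_like(image)
    # z-buffer: when several source pixels land on the same target,
    # keep the one with the largest disparity (the nearest surface).
    zbuf = np.full((h, w), -np.inf)
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            xt = int(round(x - alpha * d))  # shift along the baseline
            if 0 <= xt < w and d > zbuf[y, xt]:
                zbuf[y, xt] = d
                out[y, xt] = image[y, x]
    return out  # disoccluded pixels stay black and need hole filling
```

In a full multi-view chain, both the left and right views are typically warped to each intermediate position and blended, which fills most of the disocclusions that a single warped view leaves behind.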

6 citations

Patent
28 Aug 2009
TL;DR: Boundary splatting is used to reduce pinholes around depth boundaries and to mitigate the loss of high-frequency detail in non-boundary locations of a warped reference view.
Abstract: Various implementations are described. Several implementations relate to view synthesis with boundary splatting for 3D Video (3DV) applications. According to one aspect, pixels in a warped reference view are splatted based on whether they lie within a specified distance of one or more depth boundaries. Such splatting may reduce pinholes around those boundaries and/or mitigate the loss of high-frequency detail in non-boundary locations.
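As a rough illustration of the idea (not the patent's claimed method), the sketch below warps a view and splats pixels that lie near a depth boundary onto a small neighborhood of target pixels, which closes pinholes there while leaving non-boundary regions untouched. The gradient-threshold boundary detector and the 3-pixel splat width are illustrative assumptions.

```python
import numpy as np

def splat_warped_view(image, disparity, alpha, boundary_dist=2, grad_thresh=4.0):
    """Warp `image` by `alpha * disparity`; pixels within
    `boundary_dist` pixels of a depth boundary are splatted onto
    three target columns instead of one."""
    h, w = disparity.shape
    # Crude depth-boundary detector: threshold the horizontal gradient.
    grad = np.abs(np.diff(disparity, axis=1, prepend=disparity[:, :1]))
    near = grad > grad_thresh
    # Dilate the boundary mask by `boundary_dist` pixels along x.
    for _ in range(boundary_dist):
        grown = near.copy()
        grown[:, 1:] |= near[:, :-1]
        grown[:, :-1] |= near[:, 1:]
        near = grown

    out = np.zeros_like(image)
    zbuf = np.full((h, w), -np.inf)  # keep the nearest surface per pixel
    for y in range(h):
        for x in range(w):
            d = disparity[y, x]
            xt = x - alpha * d
            # Splat to three target columns near boundaries, one elsewhere.
            targets = ((int(xt) - 1, int(xt), int(xt) + 1) if near[y, x]
                       else (int(round(xt)),))
            for t in targets:
                if 0 <= t < w and d > zbuf[y, t]:
                    zbuf[y, t] = d
                    out[y, t] = image[y, x]
    return out
```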

6 citations

Journal ArticleDOI
TL;DR: A fast, physically correct refocusing algorithm addresses the complexity-quality tradeoff in a twofold way; its 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast.
Abstract: Digital refocusing involves a tradeoff between complexity and quality when sparsely sampled light fields are used for low-storage applications. In this paper, we propose a fast, physically correct refocusing algorithm that addresses this issue in a twofold way. First, view interpolation is adopted to provide photorealistic quality at in-focus/defocus hybrid boundaries. To counter its conventionally high complexity, we devised a fast line-scan method specifically for refocusing, whose 1D kernel can be 30× faster than the benchmark View Synthesis Reference Software (VSRS)-1D-Fast. Second, we propose a block-based multi-rate processing flow that accelerates purely in-focus or defocused regions, achieving a further 3–34× speedup for high-resolution images. Candidate blocks of variable sizes can interpolate different numbers of rendered views and perform refocusing in different subsampled layers. To avoid visible aliasing and block artifacts, we determine these parameters and the simulated aperture filter through a localized filter-response analysis using defocus-blur statistics. The final quadtree block partitions are then optimized for computation time. Extensive experimental results demonstrate superior refocusing quality and fast computation; in particular, the run time is comparable with that of conventional single-image blurring, which suffers from serious boundary artifacts.
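As background for the tradeoff the paper targets, the baseline way to refocus from a sampled light field is synthetic-aperture shift-and-sum: shift each view in proportion to its camera offset and average. The sketch below shows only that baseline (with wrap-around shifts for brevity), not the paper's line-scan kernel or block-based multi-rate flow; the names and the linear shift model are assumptions.

```python
import numpy as np

def refocus(views, positions, slope):
    """Shift-and-sum refocusing over a sampled light field.

    views:     list of (H, W, 3) images from a camera grid.
    positions: list of (u, v) camera offsets, centered on (0, 0).
    slope:     disparity per unit camera offset; selects the focal
               plane (0 keeps the original focus).
    """
    acc = np.zeros_like(views[0], dtype=np.float64)
    for img, (u, v) in zip(views, positions):
        # Shift each view so points on the chosen focal plane align
        # across all views, then accumulate. np.roll wraps at the
        # borders, a simplification a real implementation would pad.
        dy, dx = int(round(slope * v)), int(round(slope * u))
        acc += np.roll(img, shift=(dy, dx), axis=(0, 1))
    return (acc / len(views)).astype(views[0].dtype)
```

With sparse view sets, this averaging produces visibly banded (aliased) defocus blur, which is why interpolating additional views before summing, as the paper does, improves quality.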

6 citations

Journal ArticleDOI
TL;DR: An optical flow-assisted adaptive patch-based view synthesis algorithm that reduces the size and number of holes during reconstruction and achieves an improvement of 2.14 dB on average.
Abstract: Due to the rapid growth of technology and the dropping cost of cameras, multiview imaging applications have attracted many researchers in recent years. Free-viewpoint and 3D television are among these applications, and one of the problems that must be solved to realize them is rendering. In this paper, we propose an optical flow-assisted, adaptive patch-based view synthesis algorithm. The patch-based scheme reduces the size and number of holes during reconstruction, and the patch size is adapted to edge information for better reconstruction, especially near object boundaries. In the first stage of the algorithm, disparity is obtained using optical flow estimation. Then, a reconstructed version of the left and right views is generated using our adaptive patch-based algorithm. The mismatches between each view and its reconstructed version are obtained in the mismatch-detection step, which outputs two masks that help refine the disparities and select the best patches for the final synthesis. Finally, the remaining holes are filled using our simple hole-filling scheme and the refined disparities. The objective and subjective performance of the proposed algorithm is compared with recent methods; the results show that the proposed algorithm achieves an improvement of 2.14 dB on average.
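A minimal sketch of the first stages of such a pipeline, under stated assumptions: disparity estimated with OpenCV's Farneback optical flow on a rectified stereo pair, a backward warp of the left view halfway toward the right, and a trivial hole fill from the right view. The paper's adaptive patch selection, mismatch masks, and disparity refinement are not reproduced here.

```python
import numpy as np
import cv2  # OpenCV, used for Farneback optical flow and remapping

def synthesize_middle_view(left, right):
    """Synthesize a rough intermediate view between rectified stereo images."""
    gl = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
    gr = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)
    # Dense left-to-right flow; on a rectified pair its x component
    # approximates (negative) disparity.
    flow = cv2.calcOpticalFlowFarneback(gl, gr, None,
                                        0.5, 3, 21, 3, 5, 1.2, 0)
    h, w = gl.shape
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    # First-order backward warp: sample the left view half a flow
    # vector back to approximate the midpoint viewpoint.
    map_x = xs - 0.5 * flow[..., 0]
    mid = cv2.remap(left, map_x, ys, cv2.INTER_LINEAR,
                    borderMode=cv2.BORDER_CONSTANT, borderValue=0)
    # Crude hole fill: pixels sampled from outside the image come
    # back black; take them from the right view instead.
    holes = mid.sum(axis=2) == 0
    mid[holes] = right[holes]
    return mid
```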

6 citations


Network Information
Related Topics (5)
Topic | Papers | Citations | Related
Image segmentation | 79.6K | 1.8M | 86%
Feature (computer vision) | 128.2K | 1.7M | 86%
Object detection | 46.1K | 1.3M | 85%
Convolutional neural network | 74.7K | 2M | 85%
Feature extraction | 111.8K | 2.1M | 84%
Performance Metrics
No. of papers in the topic in previous years
Year | Papers
2023 | 54
2022 | 117
2021 | 189
2020 | 158
2019 | 114
2018 | 102