Topic

View synthesis

About: View synthesis is a research topic. Over the lifetime, 1701 publications have been published within this topic receiving 42333 citations.


Papers
Proceedings ArticleDOI
Huijin Lv, Yongbing Zhang, Kai Li, Xingzheng Wang, Huiming Xuan, Qionghai Dai
01 Dec 2014
TL;DR: A synthesis-guided depth super resolution (SGDSR) algorithm is proposed, employing the synthesis error between the virtual view and the corresponding original one as the criterion, to fully exploit the varying properties within different regions of an image.
Abstract: Depth maps, as important auxiliary information in 3D processing, are used to synthesize virtual views rather than for direct display. Inspired by this, a synthesis-guided depth super resolution (SGDSR) algorithm is proposed. Employing the synthesis error between the virtual view and the corresponding original one as the criterion, the best super-resolved result is selected among numerous candidate super resolution (SR) results. To fully exploit the varying properties within different regions of an image, a patch-based SGDSR is further devised in this paper. Experimental results demonstrate the effectiveness of our method, both subjectively and objectively, on single-view and two-view platforms based on depth-image-based rendering (DIBR).
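The SGDSR selection step can be illustrated with a toy sketch: synthesize a virtual view from each candidate depth map with a simplified DIBR warp and keep the candidate whose synthesized view best matches the real one. This is not the paper's implementation; `dibr_warp`, the disparity model, and the MSE criterion are simplifying assumptions (real DIBR also handles occlusions and hole filling).

```python
import numpy as np

def dibr_warp(view, depth, baseline=8.0):
    """Toy DIBR: shift each pixel horizontally by a depth-derived
    disparity.  Occlusion handling and hole filling are omitted."""
    h, w = view.shape
    out = np.zeros_like(view)
    disp = (depth / 255.0 * baseline).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = view[y, x]
    return out

def select_best_depth(candidates, ref_view, target_view, baseline=8.0):
    """SGDSR-style selection: pick the candidate depth map whose
    synthesized view has the lowest error against the original view."""
    errors = [np.mean((dibr_warp(ref_view, d, baseline).astype(float)
                       - target_view.astype(float)) ** 2)
              for d in candidates]
    return int(np.argmin(errors))
```

The patch-based variant in the paper applies the same criterion per image patch rather than per frame.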

4 citations

Proceedings ArticleDOI
01 May 2019
TL;DR: This work presents a hardware system of phase-based view synthesis that is able to convert stereoscopic videos to multi-view content with low-resolution depth maps and proposes a hardware-friendly wavelet re-projection engine to reduce the hardware complexity.
Abstract: View synthesis is one of the important techniques utilized in 3D TV devices. Traditional methods such as depth image-based rendering usually rely on accurate depth maps which require computation-intensive stereo matching. In this work, we present a hardware system of phase-based view synthesis that is able to convert stereoscopic videos to multi-view content with low-resolution depth maps. When compared to the view synthesis reference software, the phase-based method does not suffer from severe artifacts on object boundaries and provides higher quality views in our experimental results. There are two major contributions in our implementation. First, we propose a cross-band disparity correction scheme that not only enables the usage of low-resolution disparity maps but also improves the quality of novel views. Second, we propose a hardware-friendly wavelet re-projection engine to reduce the hardware complexity. We implemented a VLSI circuit for 8-view 4K Ultra-HD (UHD) 3DTV in TSMC 40nm technology. It delivers 30 frames per second (fps) for UHD display when operating at 200MHz. It uses 228-KB SRAM and 2M-gate logic. We also implemented the system on FPGA, and it can provide 4K UHD multi-view content at 12 fps.
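The core idea behind phase-based synthesis is that translating an image is equivalent to shifting the phase of its frequency components. A minimal sketch of that principle, using a global Fourier phase shift on one row (the paper instead applies per-band phase shifts in a wavelet decomposition, which this sketch does not reproduce):

```python
import numpy as np

def phase_shift_row(row, shift):
    """Translate a 1D signal by `shift` samples by modifying its
    Fourier phase (shift theorem): multiplying spectrum bin k by
    exp(-2*pi*i*k*shift/N) circularly shifts the signal by `shift`."""
    n = len(row)
    freqs = np.fft.fftfreq(n)          # k/N for each bin
    spectrum = np.fft.fft(row)
    shifted = spectrum * np.exp(-2j * np.pi * freqs * shift)
    return np.real(np.fft.ifft(shifted))
```

For view synthesis, the shift applied per region is derived from disparity; the phase formulation allows sub-pixel shifts without explicit resampling.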

4 citations

Journal ArticleDOI
TL;DR: A fast depth intra coding approach based on a convolutional neural network (CNN) is proposed to reduce 3D-HEVC complexity; it reduces intra coding time by 72.5% on average with negligible degradation of coding performance.
Abstract: As the extension of the high efficiency video coding (HEVC) standard, three-dimensional HEVC (3D-HEVC) is the latest 3D video coding standard. 3D-HEVC adopts many complicated coding algorithms to generate additional intermediate views for 3D video representation, which result in extremely high coding complexity. Therefore, this paper proposes a fast depth intra coding approach to reduce the 3D-HEVC complexity, based on a convolutional neural network (CNN). First, we established a database based on the independent view of the depth map, which includes coding unit (CU) partition data of the depth map. Second, we constructed a depth edge classification CNN (DEC-CNN) framework to classify the edges of the depth map and embedded the network into a 3D-HEVC test platform. Finally, we utilized the pixel values of the binarized depth image to correct the above classification results. The experimental results demonstrate that our approach can reduce the intra coding time by 72.5% on average with negligible degradation of coding performance, outperforming other state-of-the-art methods for reducing the coding complexity of 3D-HEVC.
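The binarization-based correction step can be sketched without the trained network: binarize a CU's depth patch and flag a split only when both classes appear, i.e. a depth edge crosses the CU. This heuristic stand-in (`cu_should_split` and the mean threshold are assumptions for illustration) captures only the correction idea, not the DEC-CNN classifier itself:

```python
import numpy as np

def cu_should_split(depth_patch, threshold=None):
    """Flag a CU for further partitioning when its binarized depth
    patch contains both classes, i.e. a depth edge crosses it.
    Flat (edge-free) CUs can skip the expensive partition search."""
    t = depth_patch.mean() if threshold is None else threshold
    binary = depth_patch > t
    return bool(binary.any() and (~binary).any())
```

Skipping the recursive rate-distortion search for flat depth CUs is where the bulk of the reported encoding-time saving comes from.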

4 citations

Posted Content
TL;DR: NeuS represents a surface as the zero-level set of a signed distance function (SDF) and develops a new volume rendering method to train a neural SDF representation.
Abstract: We present a novel neural surface reconstruction method, called NeuS, for reconstructing objects and scenes with high fidelity from 2D image inputs. Existing neural surface reconstruction approaches, such as DVR and IDR, require foreground masks as supervision, easily get trapped in local minima, and therefore struggle with the reconstruction of objects with severe self-occlusion or thin structures. Meanwhile, recent neural methods for novel view synthesis, such as NeRF and its variants, use volume rendering to produce a neural scene representation with robust optimization, even for highly complex objects. However, extracting high-quality surfaces from this learned implicit representation is difficult because there are not sufficient surface constraints in the representation. In NeuS, we propose to represent a surface as the zero-level set of a signed distance function (SDF) and develop a new volume rendering method to train a neural SDF representation. We observe that the conventional volume rendering method causes inherent geometric errors (i.e. bias) for surface reconstruction, and therefore propose a new formulation that is free of bias to first order of approximation, leading to more accurate surface reconstruction even without mask supervision. Experiments on the DTU dataset and the BlendedMVS dataset show that NeuS outperforms the state of the art in high-quality surface reconstruction, especially for objects and scenes with complex structures and self-occlusion.
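The NeuS rendering weights along a ray can be sketched in a few lines: map the SDF samples through a logistic CDF, derive per-interval opacities from its drop, and composite front to back. This is a simplified reading of the paper's discrete alpha formulation, with a fixed sharpness `s` (in NeuS, `s` is learned) and no sample-point geometry:

```python
import numpy as np

def sigmoid(x, s=10.0):
    """Logistic CDF with sharpness s, applied to SDF values."""
    return 1.0 / (1.0 + np.exp(-s * x))

def neus_weights(sdf_samples):
    """NeuS-style discrete rendering weights along a ray: opacity of
    each interval is the relative drop of the logistic CDF of the SDF
    (clipped at 0), composited with front-to-back transmittance.
    Weights concentrate near the zero crossing of the SDF."""
    phi = sigmoid(np.asarray(sdf_samples, dtype=float))
    alpha = np.clip((phi[:-1] - phi[1:]) / np.maximum(phi[:-1], 1e-8),
                    0.0, 1.0)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alpha)])[:-1]
    return trans * alpha
```

Because the opacity is tied to the SDF's CDF rather than a generic density, the weight maximum aligns with the surface (the SDF zero crossing), which is the bias correction the abstract refers to.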

3 citations

Proceedings ArticleDOI
TL;DR: This work proposes a sparse camera image capture system and an image-based virtual image generation method for 3D imaging applications, and shows a virtual image produced by the proposed algorithm that generates an in-between view from two real images captured with the authors' multi-camera image capture system.
Abstract: The multi-view three-dimensional (3D) visualization by means of a 3D display requires reproduction of scene light fields. The complete light field of a scene can be reproduced from images of the scene ideally taken from infinite viewpoints. However, capturing images of a scene from infinite viewpoints is not feasible for practical applications. Therefore, in this work, we propose a sparse camera image capture system and an image-based virtual image generation method for 3D imaging applications. We show a resulting virtual image produced by the proposed algorithm for generating an in-between view from two real images captured with our multi-camera image capture system.
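In-between view generation from two captured images can be sketched as disparity-guided forward warping of both views toward an intermediate position, blended where both contribute. Everything here is an illustrative assumption (the paper does not specify this algorithm): a single disparity map shared by both views, nearest-pixel splatting, and no hole filling.

```python
import numpy as np

def intermediate_view(left, right, disparity, t=0.5):
    """Toy in-between view at fraction t of the baseline: splat the
    left image forward by t*disparity and the right image back by
    (1-t)*disparity, then average where both land.  Occlusion holes
    are left at zero rather than inpainted."""
    h, w = left.shape
    out = np.zeros((h, w), float)
    cnt = np.zeros((h, w), float)
    for y in range(h):
        for x in range(w):
            xl = x + int(round(t * disparity[y, x]))
            if 0 <= xl < w:
                out[y, xl] += left[y, x]
                cnt[y, xl] += 1
            xr = x - int(round((1 - t) * disparity[y, x]))
            if 0 <= xr < w:
                out[y, xr] += right[y, x]
                cnt[y, xr] += 1
    return out / np.maximum(cnt, 1)
```

Varying `t` from 0 to 1 sweeps the virtual camera between the two real viewpoints, which is how a sparse rig can feed a multi-view display.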

3 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (86% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Object detection: 46.1K papers, 1.3M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Performance Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    54
2022    117
2021    189
2020    158
2019    114
2018    102