View synthesis

About: View synthesis is a research topic. Over its lifetime, 1,701 publications have been published within this topic, receiving 42,333 citations.


Papers
Proceedings ArticleDOI
14 Jul 2014
TL;DR: It is demonstrated that inherent edge preserving encoding of inter-coded block residuals, uniformly quantized at pixel domain using motion data from associated texture components, is more efficient than explicitly edge preserving intra-coding techniques.
Abstract: In emerging 3D video coding, depth plays a significant role in view synthesis, scene analysis, and 3D object reconstruction. Depth images are characterized by sharp edges and large smooth regions. Most existing depth coding techniques use intra-coding mode and try to preserve edges explicitly with approximate edge modelling. However, edges can be preserved implicitly as long as the transform is avoided. In this paper, we demonstrate that inherently edge-preserving encoding of inter-coded block residuals, uniformly quantized in the pixel domain using motion data from the associated texture components, is more efficient than explicitly edge-preserving intra-coding techniques. Experimental results show that the proposed technique achieves superior image quality of synthesized views against the new 3D-HEVC standard. Lossless applications of the proposed technique achieve on average 66% and 23% bit-rate savings against 3D-HEVC, with negligible quantization and perceptually unnoticeable view-synthesis distortion, respectively.
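
As a rough illustration of the core idea, the sketch below applies uniform pixel-domain quantization to an inter-prediction residual of a depth block. The step size q, the block values, and the helper name are illustrative assumptions, not details from the paper.

```python
import numpy as np

def quantize_residual(current_block, predicted_block, q=4):
    """Uniform pixel-domain quantization of an inter-prediction residual.

    Skipping the transform preserves sharp depth edges implicitly: each
    residual pixel is quantized independently, so edges cannot ring or
    blur the way they can after a block transform. `q` is a hypothetical
    step size, not a value from the paper.
    """
    residual = current_block.astype(np.int32) - predicted_block.astype(np.int32)
    levels = np.round(residual / q).astype(np.int32)       # encoder side
    recon = predicted_block.astype(np.int32) + levels * q  # decoder side
    return levels, np.clip(recon, 0, 255).astype(np.uint8)

# A depth block with a sharp edge, predicted via motion data borrowed
# from the associated texture component (values are illustrative).
current = np.array([[30, 30, 200, 200]] * 4, dtype=np.uint8)
predicted = np.array([[26, 26, 196, 196]] * 4, dtype=np.uint8)
levels, recon = quantize_residual(current, predicted, q=4)
print(recon[0])  # [ 30  30 200 200] -> the depth edge survives intact
```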

6 citations

Journal ArticleDOI
TL;DR: A confidence-based depth recovery method and high-quality 3D view synthesis are proposed; experimental results show that the method yields higher quality recovered depth maps and synthesized views than previous methods.
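
The listing gives no abstract for this paper, so the following is only a generic, hypothetical sketch of what confidence-based depth recovery can look like: fusing several candidate depth maps with per-pixel confidence weights. The function name, shapes, and fusion rule are all assumptions, not the paper's method.

```python
import numpy as np

def fuse_depth(candidates, confidences, eps=1e-6):
    """Confidence-weighted per-pixel fusion of candidate depth maps.

    candidates, confidences: (K, H, W) stacks of K depth hypotheses and
    their per-pixel confidences. Pixels where one hypothesis is trusted
    dominate the fused result; the paper's actual confidence measure and
    recovery pipeline are not given in this listing.
    """
    weights = confidences / np.maximum(confidences.sum(axis=0), eps)
    return (weights * candidates).sum(axis=0)
```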

6 citations

Journal ArticleDOI
TL;DR: A deep residual architecture that synthesizes both high quality angular views in light fields and temporal frames in classical videos, and that also performs video frame interpolation, generating higher quality frames than SOTA interpolation methods.
Abstract: In this paper, we propose a deep residual architecture that can be used both for synthesizing high quality angular views in light fields and temporal frames in classical videos. The proposed framework consists of an optical flow estimator optimized for view synthesis, a trainable feature extractor and a residual convolutional network for pixel and feature-based view reconstruction. Among these modules, the fine-tuning of the optical flow estimator specifically for the view synthesis task yields scene depth or motion information that is well optimized for the targeted problem. In cooperation with the end-to-end trainable encoder, the synthesis block employs both pixel-based and feature-based synthesis with residual connection blocks, and the two synthesized views are fused with the help of a learned soft mask to obtain the final reconstructed view. Experimental results with various datasets show that our method performs favorably against other state-of-the-art (SOTA) methods with a large gain for light field view synthesis. Furthermore, with a little modification, our method can also be used for video frame interpolation, generating high quality frames compared with SOTA interpolation methods.
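
The fusion step described above, blending a pixel-based and a feature-based synthesized view with a learned soft mask, can be sketched as follows. Module names, tensor shapes, and the sigmoid mask head are illustrative assumptions rather than the authors' code.

```python
import torch

def fuse_views(pixel_view, feature_view, soft_mask):
    """Blend pixel-based and feature-based synthesized views with a
    learned soft mask. All views are (N, C, H, W) tensors; the mask is
    (N, 1, H, W) in [0, 1], broadcast across channels.
    """
    return soft_mask * pixel_view + (1.0 - soft_mask) * feature_view

# Illustrative shapes only.
n, c, h, w = 1, 3, 64, 64
pixel_view = torch.rand(n, c, h, w)    # e.g. warped directly by optical flow
feature_view = torch.rand(n, c, h, w)  # e.g. decoded from warped deep features
mask_logits = torch.rand(n, 1, h, w)   # output of a hypothetical conv head
fused = fuse_views(pixel_view, feature_view, torch.sigmoid(mask_logits))
```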

6 citations

Proceedings ArticleDOI
27 Jul 2022
TL;DR: A novel approach is proposed to decompose the multi-person scene into layers and reconstruct neural representations for each layer in a weakly-supervised manner, yielding both high-quality novel view rendering and accurate instance masks.
Abstract: This paper presents a novel system for generating free-viewpoint videos of multiple human performers from very sparse RGB cameras. The system reconstructs a layered neural representation of the dynamic multi-person scene from multi-view videos, with each layer representing a moving instance or the static background. Unlike previous work that requires instance segmentation as input, a novel approach is proposed to decompose the multi-person scene into layers and reconstruct neural representations for each layer in a weakly-supervised manner, yielding both high-quality novel view rendering and accurate instance masks. Camera synchronization error is also addressed in the proposed approach. Experiments demonstrate that the proposed system achieves better view synthesis quality than previous systems, and that it can produce an editable free-viewpoint video of a real soccer game using several asynchronous GoPro cameras. The dataset and code are available at https://github.com/zju3dv/EasyMocap.
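
A minimal sketch of the compositing step implied by such a layered representation, assuming back-to-front "over" blending of per-layer RGB and alpha renders; the system's actual rendering of its neural layers is not reproduced here.

```python
import numpy as np

def composite_layers(rgb_layers, alpha_layers):
    """Back-to-front "over" compositing of per-layer renderings.

    rgb_layers: list of (H, W, 3) float arrays, background first;
    alpha_layers: matching (H, W, 1) opacities in [0, 1]. In a layered
    scene each RGB/alpha pair would be rendered from one instance's
    neural representation, and the per-layer alphas double as instance
    masks; this sketch covers only the compositing.
    """
    out = np.zeros_like(rgb_layers[0])
    for rgb, alpha in zip(rgb_layers, alpha_layers):
        out = alpha * rgb + (1.0 - alpha) * out
    return out
```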

6 citations

Proceedings ArticleDOI
K. Kamei, M. Maruyama, K. Seo
26 Oct 1997
TL;DR: A practical image-based rendering method that utilizes the given source images as much as possible and a style of arranging viewpoints of source images and a method of data management in the memory are described, both of which give rise to the efficient usage of resources in practice.
Abstract: We propose a practical image-based rendering method that utilizes the given source images as much as possible. In our method, a new view is synthesized by splitting it into stripes and pasting to each of them the most suitable stripe from the pre-acquired source images. A well-approximated view is synthesized without a precisely captured large set of source images, even in a complex environment. We also describe a style of arranging the viewpoints of source images and a method of data management in memory, both of which give rise to efficient usage of resources in practice. We can obtain fine results at any viewpoint in real time, so a virtual environment with powerful walkthrough operations can be easily constructed.
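
A toy sketch of the stripe-based synthesis described above: the target view is split into vertical stripes, and each stripe is filled from the source image whose viewpoint best matches that stripe's viewing direction. The linear angle model, field of view, and function signature are invented for illustration and stand in for the paper's geometric stripe selection.

```python
import numpy as np

def synthesize_view(target_angle, sources, n_stripes=8, fov=60.0):
    """Assemble a novel view stripe by stripe from pre-acquired images.

    `sources` is a list of (viewpoint_angle, image) pairs with images of
    equal shape. For each vertical stripe of the target view we pick the
    source whose viewpoint best matches that stripe's viewing direction
    and paste its corresponding stripe.
    """
    h, w, _ = sources[0][1].shape
    out = np.zeros((h, w, 3), dtype=sources[0][1].dtype)
    stripe_w = w // n_stripes
    for i in range(n_stripes):
        # Hypothetical viewing direction of stripe i from the target pose.
        stripe_dir = target_angle + (i + 0.5 - n_stripes / 2) * (fov / n_stripes)
        _, best = min(sources, key=lambda s: abs(s[0] - stripe_dir))
        x0 = i * stripe_w
        x1 = w if i == n_stripes - 1 else x0 + stripe_w
        out[:, x0:x1] = best[:, x0:x1]
    return out
```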

6 citations


Network Information
Related Topics (5)

Topic                          Papers    Citations  Related
Image segmentation             79.6K     1.8M       86%
Feature (computer vision)      128.2K    1.7M       86%
Object detection               46.1K     1.3M       85%
Convolutional neural network   74.7K     2M         85%
Feature extraction             111.8K    2.1M       84%
Performance Metrics
No. of papers in the topic in previous years:

Year   Papers
2023   54
2022   117
2021   189
2020   158
2019   114
2018   102