Topic: View synthesis

About: View synthesis is a research topic. Over its lifetime, 1,701 publications have been published on this topic, receiving 42,333 citations.


Papers
Posted Content
TL;DR: Experimental results show that the proposed method delivers high coding performance and outperforms state-of-the-art LF image compression solutions.
Abstract: Light field (LF) technology represents a viable path for providing high-quality VR content. However, such an imaging system generates a large amount of data, leading to an urgent need for LF image compression solutions. In this paper, we propose an efficient LF image coding scheme based on view synthesis. Instead of transmitting all the LF views, only some of them are coded and transmitted, while the remaining views are dropped. The transmitted views are coded using Versatile Video Coding (VVC) and used as reference views to synthesize the missing views at the decoder side. The dropped views are generated using an efficient dual-discriminator GAN model. The selection of reference/dropped views is performed using a rate-distortion optimization based on VVC temporal scalability. Experimental results show that the proposed method delivers high coding performance and outperforms state-of-the-art LF image compression solutions.
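The entry above describes the coding pipeline only in prose. The Python sketch below illustrates the reference/dropped-view selection idea on a toy 8x8 light field; the grid size, cost model, and all function names are illustrative assumptions. The paper's actual selection is a rate-distortion optimization tied to VVC temporal scalability, and its synthesis uses a dual-discriminator GAN rather than the distance proxy used here.

```python
# Toy sketch of reference/dropped view selection for an 8x8 light field.
# All names and the cost model are illustrative, not the paper's method.
import itertools

GRID = 8  # angular resolution of the light field (8x8 views)

def synthesis_distortion(view, references):
    """Toy proxy for synthesis distortion: distance to the nearest reference.
    A real implementation would synthesize the view (e.g., with a GAN) and
    measure its error against the original."""
    return min(abs(view[0] - r[0]) + abs(view[1] - r[1]) for r in references)

def rd_cost(references, lam=1.0):
    """Toy RD cost: rate grows with the number of transmitted views,
    distortion is summed over the dropped (synthesized) views."""
    views = list(itertools.product(range(GRID), repeat=2))
    dropped = [v for v in views if v not in references]
    rate = len(references)  # stand-in for the VVC bitrate
    distortion = sum(synthesis_distortion(v, references) for v in dropped)
    return rate + lam * distortion

# Compare two simple sampling patterns for the transmitted reference views.
sparse = [(r, c) for r in range(0, GRID, 4) for c in range(0, GRID, 4)]
dense = [(r, c) for r in range(0, GRID, 2) for c in range(0, GRID, 2)]
for name, refs in [("sparse 2x2", sparse), ("dense 4x4", dense)]:
    print(f"{name}: {len(refs)} references, RD cost = {rd_cost(refs):.1f}")
```

The trade-off the paper optimizes is visible even in this toy: transmitting more references raises the rate but lowers the synthesis distortion of the dropped views.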

4 citations

Journal Article
TL;DR: Experimental results demonstrate that the studied DIBR (depth-image-based rendering) schemes achieve better view-synthesis performance than the other three methods, both in subjective visual perception and in objective assessments in terms of peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM).
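As a concrete illustration of the objective assessment mentioned above, here is a minimal PSNR computation between a ground-truth view and a synthesized view; SSIM would be applied analogously (e.g., via skimage.metrics.structural_similarity). This is a generic sketch with synthetic image data, not code from the paper.

```python
# Minimal PSNR evaluation sketch for comparing a synthesized view
# against its ground-truth view (8-bit images assumed).
import numpy as np

def psnr(reference, synthesized, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    diff = reference.astype(np.float64) - synthesized.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
gt = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in view
noisy = np.clip(gt + rng.normal(0, 5, gt.shape), 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(gt, noisy):.2f} dB")
```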

4 citations

Journal Article
TL;DR: Yi et al. present the first parametric 3D morphable dental model covering both teeth and gum, based on a component-wise representation for each tooth and the gum together with a learnable latent code for each such component.
Abstract: 3D morphable models of the human body capture variations among subjects and are useful in reconstruction and editing applications. Current dental models use an explicit mesh scene representation and model only the teeth, ignoring the gum. In this work, we present the first parametric 3D morphable dental model for both teeth and gum. Our model uses an implicit scene representation and is learned from rigidly aligned scans. It is based on a component-wise representation for each tooth and the gum, together with a learnable latent code for each such component. It also learns a template shape, thus enabling several applications such as segmentation, interpolation, and tooth replacement. Our reconstruction quality is on par with the most advanced global implicit representations while enabling novel applications. The code will be available at https://github.com/cong-yi/DMM
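The abstract's component-wise implicit representation can be sketched as follows: one learnable latent code per component (each tooth and the gum) and a shared network mapping a latent code plus a 3D query point to a signed distance, with the full shape composed as the union (minimum) of the component fields. The layer sizes, SDF parameterization, and component count below are assumptions for illustration, not the paper's exact architecture.

```python
# Sketch of a component-wise implicit shape model: per-component latent codes,
# a shared MLP producing a signed distance, and a min-union over components.
import torch
import torch.nn as nn

class ComponentwiseImplicitModel(nn.Module):
    def __init__(self, n_components=33, latent_dim=64):  # e.g., 32 teeth + gum
        super().__init__()
        self.codes = nn.Embedding(n_components, latent_dim)  # learnable latents
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),  # signed distance of one component
        )

    def forward(self, points):
        """points: (N, 3) query points; returns (N,) SDF of the composed shape."""
        n, c = points.shape[0], self.codes.num_embeddings
        codes = self.codes.weight.unsqueeze(0).expand(n, -1, -1)    # (N, C, D)
        pts = points.unsqueeze(1).expand(-1, c, -1)                 # (N, C, 3)
        sdf = self.mlp(torch.cat([codes, pts], dim=-1)).squeeze(-1) # (N, C)
        return sdf.min(dim=-1).values  # union of all components

model = ComponentwiseImplicitModel()
print(model(torch.randn(4, 3)).shape)  # torch.Size([4])
```

Keeping one latent code per component is what enables the applications the abstract lists: swapping a single tooth's code performs tooth replacement, and interpolating codes interpolates shapes.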

4 citations

Proceedings Article
01 Jan 2022
TL;DR: In this article, a volumetric representation is proposed for view synthesis from sparse source observations of a scene composed of 3D objects; it is trained in a category-agnostic manner and does not require scene-specific optimization.
Abstract: We study the problem of novel view synthesis from sparse source observations of a scene composed of 3D objects. We propose a simple yet effective approach that is neither continuous nor implicit, challenging recent trends in view synthesis. Our approach explicitly encodes observations into a volumetric representation that enables amortized rendering. We demonstrate that although continuous radiance field representations have gained a lot of attention due to their expressive power, our simple approach obtains comparable or even better novel view reconstruction quality compared with state-of-the-art baselines [49], while increasing rendering speed by over 400x. Our model is trained in a category-agnostic manner and does not require scene-specific optimization. Therefore, it is able to generalize novel view synthesis to object categories not seen during training. In addition, we show that with our simple formulation, we can use view synthesis as a self-supervision signal for efficient learning of 3D geometry without explicit 3D supervision.
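To make the "explicit volumetric representation with amortized rendering" idea concrete, the sketch below renders a density-plus-feature voxel grid by sampling it along camera rays with trilinear interpolation and alpha compositing; no per-scene optimization happens at render time. The grid contents, channel layout, and compositing details are illustrative assumptions, not the paper's implementation.

```python
# Sketch of rendering an explicit voxel grid along camera rays.
# grid: (1, C, D, H, W) with channel 0 = density, channels 1..3 = color-like
# features; rays are given in the grid's normalized [-1, 1]^3 frame.
import torch
import torch.nn.functional as F

def render_rays(grid, origins, directions, n_samples=64, near=0.5, far=2.5):
    t = torch.linspace(near, far, n_samples)                                # (S,)
    pts = origins[:, None, :] + directions[:, None, :] * t[None, :, None]   # (R, S, 3)
    # Trilinear lookup; grid_sample expects coords ordered (x, y, z) in [-1, 1].
    coords = pts.view(1, -1, 1, 1, 3)
    feats = F.grid_sample(grid, coords, align_corners=True)  # (1, C, R*S, 1, 1)
    feats = feats.view(grid.shape[1], pts.shape[0], n_samples)  # (C, R, S)
    density, rgb = feats[0].clamp(min=0), feats[1:4].sigmoid()
    delta = (far - near) / n_samples
    alpha = 1 - torch.exp(-density * delta)                   # (R, S)
    trans = torch.cumprod(torch.cat([torch.ones_like(alpha[:, :1]),
                                     1 - alpha[:, :-1]], dim=1), dim=1)
    weights = alpha * trans                                   # (R, S)
    return (weights[None] * rgb).sum(dim=-1).T                # (R, 3)

grid = torch.rand(1, 4, 32, 32, 32)  # density + 3 feature channels
origins = torch.zeros(8, 3); origins[:, 2] = -1.0
directions = torch.tensor([[0.0, 0.0, 1.0]]).repeat(8, 1)
print(render_rays(grid, origins, directions).shape)  # torch.Size([8, 3])
```

Because the grid is explicit, rendering reduces to cheap interpolation and compositing, which is consistent with the large speedup over continuous radiance-field baselines that the abstract reports.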

4 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (86% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Object detection: 46.1K papers, 1.3M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Performance Metrics
Number of papers in this topic in previous years:

Year    Papers
2023    54
2022    117
2021    189
2020    158
2019    114
2018    102