Topic

View synthesis

About: View synthesis is a research topic. Over its lifetime, 1,701 publications have been published within this topic, receiving 42,333 citations.


Papers
Journal ArticleDOI
TL;DR: This paper presents a method for synthesizing a novel view from two sets of differently focused images, taken by an aperture camera array, of a scene consisting of two approximately constant depths.
Abstract: This paper presents a method for synthesizing a novel view from two sets of differently focused images taken by an aperture camera array for a scene consisting of two approximately constant depths. The proposed method consists of two steps. The first step is a view interpolation that reconstructs an all-in-focus dense light field of the scene. The second step synthesizes a novel view from the reconstructed dense light field using a light-field rendering technique. The view interpolation in the first step can be achieved simply by linear filters that are designed to shift different object regions separately, without region segmentation. The proposed method effectively creates a dense array of pin-hole cameras (i.e., all-in-focus images), so that the novel view can be synthesized with better quality.

92 citations
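The light-field rendering step above lends itself to a compact illustration. Below is a minimal sketch, assuming the dense all-in-focus light field from step one is already available as an (S, T, H, W, 3) array of images from an S x T grid of virtual pin-hole cameras; the function name and array layout are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def render_novel_view(light_field, s, t):
    """Synthesize a view at fractional camera-plane position (s, t).

    light_field : array of shape (S, T, H, W, 3) -- images from an S x T
                  grid of virtual pin-hole cameras (hypothetical layout).
    s, t        : desired viewpoint in camera-grid coordinates, assumed
                  to lie inside the grid.
    """
    S, T = light_field.shape[:2]
    s0, t0 = int(np.floor(s)), int(np.floor(t))
    s1, t1 = min(s0 + 1, S - 1), min(t0 + 1, T - 1)
    ds, dt = s - s0, t - t0

    # Bilinear blend of the four nearest camera images.
    return ((1 - ds) * (1 - dt) * light_field[s0, t0] +
            (1 - ds) * dt       * light_field[s0, t1] +
            ds       * (1 - dt) * light_field[s1, t0] +
            ds       * dt       * light_field[s1, t1])
```

A full light-field renderer would resample individual rays per output pixel; blending whole camera images is the degenerate case where the novel viewpoint lies on the camera plane, which keeps the sketch short.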

Journal ArticleDOI
TL;DR: The heart of the method is to use programmable Pixel Shader technology to square intensity differences between reference image pixels, and then to choose final colors that correspond to the minimum difference, i.e. the most consistent color.
Abstract: We present a novel use of commodity graphics hardware that effectively combines a plane-sweeping algorithm with view synthesis for real-time, online 3D scene acquisition and view synthesis. Using real-time imagery from a few calibrated cameras, our method can generate new images from nearby viewpoints, estimate a dense depth map from the current viewpoint, or create a textured triangular mesh. We can do each of these without any prior geometric information or requiring any user interaction, in real time and online. The heart of our method is to use programmable Pixel Shader technology to square intensity differences between reference image pixels, and then to choose final colors (or depths) that correspond to the minimum difference, i.e., the most consistent color. In this paper we describe the method, place it in the context of related work in computer graphics and computer vision, and present some results. ACM CCS: I.3.3 Computer Graphics—Bitmap and framebuffer operations; I.4.8 Image Processing and Computer Vision—Depth cues, Stereo.

92 citations
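The core scoring loop described above (square the intensity differences, keep the minimum) is easy to sketch on the CPU. The version below is a simplified illustration: it assumes a rectified image pair, so each candidate depth plane reduces to a horizontal shift, whereas the paper warps arbitrary calibrated views in pixel shaders; all names are hypothetical.

```python
import numpy as np

def plane_sweep(ref_a, ref_b, disparities):
    """CPU sketch of the plane-sweep scoring the paper runs in shaders.

    ref_a, ref_b : rectified grayscale reference images (H, W) as floats.
    disparities  : iterable of integer disparity hypotheses to sweep.
    Returns the per-pixel disparity minimizing the squared difference.
    """
    H, W = ref_a.shape
    best_cost = np.full((H, W), np.inf)
    best_disp = np.zeros((H, W), dtype=np.int32)
    for d in disparities:
        # "Warp" onto plane d; wrap-around at the border is ignored here,
        # a real implementation would mask the wrapped columns.
        shifted = np.roll(ref_b, d, axis=1)
        cost = (ref_a - shifted) ** 2        # squared intensity difference
        better = cost < best_cost            # most consistent color so far?
        best_cost[better] = cost[better]
        best_disp[better] = d
    return best_disp
```

Choosing the per-pixel minimum over all swept planes is exactly the winner-take-all selection the TL;DR describes; the GPU version gains its speed by evaluating each plane for all pixels in a single shader pass.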

Proceedings ArticleDOI
18 Jun 1996
TL;DR: The method does not explicitly model scene geometry yet enables fast and exact generation of synthetic views; the experiments demonstrate that realistic new views can be synthesized efficiently even from inaccurate and incomplete depth information.
Abstract: We propose a new method for view synthesis from real images using stereo vision. The method does not explicitly model scene geometry, and enables fast and exact generation of synthetic views. We also reevaluate the requirements on stereo algorithms for the application of view synthesis and discuss ways of dealing with partially occluded regions of unknown depth and with completely occluded regions of unknown texture. Our experiments demonstrate that it is possible to efficiently synthesize realistic new views even from inaccurate and incomplete depth information.

91 citations
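As a generic illustration of stereo-based view synthesis, the sketch below forward-warps a left image to an in-between viewpoint given its disparity map. It is a baseline under stated assumptions, not the paper's exact algorithm: occluded regions simply remain as holes, which is precisely the case the paper discusses handling.

```python
import numpy as np

def synthesize_view(left, disp, alpha):
    """Forward-warp the left image to an intermediate viewpoint.

    left  : (H, W, 3) image; disp : (H, W) disparity of each left pixel
            (convention: x_right = x_left - disp); alpha in [0, 1] moves
            from the left view (0.0) toward the right view (1.0).
    Nearer surfaces (larger disparity) win when pixels collide, and
    unfilled pixels are left as black holes.
    """
    H, W, _ = left.shape
    out = np.zeros_like(left)
    zbuf = np.full((H, W), -np.inf)
    ys, xs = np.mgrid[0:H, 0:W]
    xt = np.clip(np.round(xs - alpha * disp).astype(int), 0, W - 1)
    for y, x, x_new in zip(ys.ravel(), xs.ravel(), xt.ravel()):
        if disp[y, x] > zbuf[y, x_new]:   # nearer surface occludes farther
            zbuf[y, x_new] = disp[y, x]
            out[y, x_new] = left[y, x]
    return out
```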

Proceedings Article
01 Jan 2020
TL;DR: NeRD, as presented in this paper, decomposes a scene into explicit representations by introducing physically-based rendering to neural radiance fields; the resulting decomposition can be leveraged to generate novel views under any illumination in real time.
Abstract: Decomposing a scene into its shape, reflectance, and illumination is a challenging but essential problem in computer vision and graphics. This problem is inherently more challenging when the illumination is not a single light source under laboratory conditions but is instead an unconstrained environmental illumination. Though recent work has shown that implicit representations can be used to model the radiance field of an object, these techniques only enable view synthesis and not relighting. Additionally, evaluating these radiance fields is resource- and time-intensive. By decomposing a scene into explicit representations, any rendering framework can be leveraged to generate novel views under any illumination in real time. NeRD is a method that achieves this decomposition by introducing physically-based rendering to neural radiance fields. Even challenging non-Lambertian reflectances, complex geometry, and unknown illumination can be decomposed into high-quality models. The datasets and code are available on the project page: this https URL

90 citations
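The key claim above, that an explicit decomposition lets any renderer relight the scene, can be illustrated with a deliberately simple shading model. The sketch below assumes a Lambertian BRDF and a set of directional lights; NeRD itself fits a full physically-based BRDF under environmental illumination, so this is an instance of the idea, not the method, and all names are hypothetical.

```python
import numpy as np

def relight(albedo, normals, light_dirs, light_rgb):
    """Relight an explicit decomposition under novel illumination.

    albedo     : (N, 3) per-point diffuse albedo
    normals    : (N, 3) unit surface normals
    light_dirs : (L, 3) unit directions toward each light
    light_rgb  : (L, 3) light intensities
    """
    # Lambertian rendering: clamp N.L at zero, sum the contribution of
    # each light, and scale by the diffuse BRDF (albedo / pi).
    n_dot_l = np.clip(normals @ light_dirs.T, 0.0, None)   # (N, L)
    irradiance = n_dot_l @ light_rgb                        # (N, 3)
    return albedo * irradiance / np.pi
```

Because the shape, reflectance, and lights are explicit arrays rather than an implicit field that must be queried per ray, swapping in new `light_dirs` and `light_rgb` relights the scene at essentially no extra cost, which is the real-time advantage the abstract emphasizes.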

Proceedings ArticleDOI
09 Jun 2003
TL;DR: This paper presents multi-state statistical decision models with Kalman-filter-based tracking for head-pose detection and face-orientation estimation, which allow simultaneous capture of the driver's head pose, driving view, and surroundings of the vehicle.
Abstract: Our research is focused on the development of novel machine-vision-based telematic systems, which provide non-intrusive probing of the state of the driver and driving conditions. In this paper we present a system which allows simultaneous capture of the driver's head pose, driving view, and surroundings of the vehicle. The integrated machine vision system utilizes a video stream with a full 360-degree panoramic field of view. The processing modules include perspective transformation, feature extraction, head detection, head pose estimation, driving view synthesis, and motion segmentation. The paper presents multi-state statistical decision models with Kalman-filtering-based tracking for head pose detection and face orientation estimation. The basic feasibility and robustness of the approach are demonstrated with a series of systematic experimental studies.

89 citations
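The Kalman-filtering component of the tracking pipeline can be illustrated generically. The sketch below runs one predict/update cycle of a constant-velocity filter on a single head-pose angle; the paper's multi-state decision models are not reproduced here, and the noise parameters (q, r) are illustrative assumptions.

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1e-3, r=1e-2):
    """One predict/update cycle of a constant-velocity Kalman tracker
    for one head-pose angle (e.g., yaw); state x = [angle, angular_rate].

    x : (2,) state mean;  P : (2, 2) state covariance
    z : scalar angle measurement from the pose estimator
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity motion model
    H = np.array([[1.0, 0.0]])              # we observe the angle only
    Q = q * np.eye(2)                       # process noise covariance
    R = np.array([[r]])                     # measurement noise covariance

    # Predict: propagate the state and its uncertainty through the model.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update: correct the prediction with the new measurement.
    y = z - H @ x                           # innovation
    S = H @ P @ H.T + R                     # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)          # Kalman gain
    x = x + (K @ y).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```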


Network Information

Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations (86% related)
Feature (computer vision): 128.2K papers, 1.7M citations (86% related)
Object detection: 46.1K papers, 1.3M citations (85% related)
Convolutional neural network: 74.7K papers, 2M citations (85% related)
Feature extraction: 111.8K papers, 2.1M citations (84% related)
Performance
Metrics
No. of papers in the topic in previous years:

Year    Papers
2023    54
2022    117
2021    189
2020    158
2019    114
2018    102