Topic

View synthesis

About: View synthesis is a research topic. Over its lifetime, 1,701 publications have been published within this topic, receiving 42,333 citations.


Papers
Proceedings ArticleDOI
Bo Zhu, Gangyi Jiang, Yun Zhang, Zongju Peng, Mei Yu
18 Jul 2009
TL;DR: Experimental results show that the proposed method not only improves the encoding speed significantly, but also saves bitrate of depth map sequence while maintaining high quality of synthesized virtual view images.
Abstract: A view synthesis-oriented depth map coding algorithm is proposed in this paper. The depth map is classified into inner motion regions, edge regions, and edge-subtracted background, using edge detection on the current depth map and the frame difference between the corresponding color images. Macroblocks in the different regions are then encoded with different rate-distortion optimization (RDO) schemes, which speeds up encoding while preserving the quality of the reconstructed depth map. Experimental results show that the proposed method not only improves the encoding speed significantly, but also saves bitrate of the depth map sequence while maintaining high quality of the synthesized virtual view images.
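A minimal sketch of the kind of region classification the abstract describes, assuming an 8-bit depth map, a 16x16 macroblock grid, and illustrative thresholds (none of these values come from the paper):

```python
import cv2
import numpy as np

MB = 16            # macroblock size (assumed)
DIFF_THRESH = 10   # colour frame-difference threshold (assumed)

def classify_macroblocks(depth, color_cur, color_prev):
    """Label each macroblock: 0 = background, 1 = inner motion, 2 = edge."""
    edges = cv2.Canny(depth, 50, 150)                      # edges of the depth map
    gray_cur = cv2.cvtColor(color_cur, cv2.COLOR_BGR2GRAY)
    gray_prev = cv2.cvtColor(color_prev, cv2.COLOR_BGR2GRAY)
    motion = (cv2.absdiff(gray_cur, gray_prev) > DIFF_THRESH).astype(np.uint8)

    h, w = depth.shape[:2]
    labels = np.zeros((h // MB, w // MB), dtype=np.uint8)
    for by in range(h // MB):
        for bx in range(w // MB):
            ys, xs = by * MB, bx * MB
            if edges[ys:ys + MB, xs:xs + MB].any():
                labels[by, bx] = 2      # edge region: spend full RDO effort here
            elif motion[ys:ys + MB, xs:xs + MB].mean() > 0.1:
                labels[by, bx] = 1      # inner motion region
            # else: background, can use the cheapest coding mode
    return labels
```
An encoder would then pick a different (faster or more exhaustive) RDO mode set per label; that per-region mode selection is the part this sketch leaves out.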

25 citations

Proceedings ArticleDOI
B. Johansson
01 Jan 1999
TL;DR: In this paper, a linear algorithm for synthesizing new views of piecewise planar objects without making an explicit 3D reconstruction is proposed, together with a simple algorithm for 3D reconstruction of the scene.
Abstract: A linear algorithm for synthesizing new views of piecewise planar objects without making an explicit 3D reconstruction is proposed, together with a simple algorithm for 3D reconstruction of the scene. It is shown how this can be done using only one image and information about the projection of the intersection lines between the object planes. These can be estimated either manually or, more automatically, using at least one more image. No calibration information is needed. A main idea in the paper is to work with textured planes. A patch in one image, corresponding to a planar surface in the scene, is transformed to a patch in another image by a homography. The generalized eigenvectors and eigenvalues of the homographies have geometrical interpretations that are used in the algorithms. To generate a new image, the homography from each plane to the new image is calculated, and the textures can then easily be mapped to the new image. The reconstruction is obtained as the equation of each plane from its homography to a certain image.
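A minimal sketch of the per-plane texture transfer step, assuming the homography H mapping the source image onto each plane's position in the novel view has already been estimated (the paper's eigenvector-based estimation of these homographies is not reproduced here); plane masks, shapes, and function names are illustrative:

```python
import cv2
import numpy as np

def warp_plane(src_img, plane_mask, H, out_hw):
    """Transfer one textured planar patch into the novel view via its homography."""
    h, w = out_hw
    tex = cv2.warpPerspective(src_img, H, (w, h))
    mask = cv2.warpPerspective(plane_mask, H, (w, h))
    return tex, mask

def compose_new_view(src_img, plane_masks, homographies, out_hw):
    """Paint every plane's warped texture into the novel view."""
    out = np.zeros((*out_hw, 3), dtype=src_img.dtype)
    for plane_mask, H in zip(plane_masks, homographies):
        tex, mask = warp_plane(src_img, plane_mask, H, out_hw)
        out[mask > 0] = tex[mask > 0]   # later planes simply overwrite earlier ones
    return out
```
Note that no depth reasoning is done here: visibility ordering between overlapping planes would have to be handled separately.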

25 citations

Proceedings ArticleDOI
TL;DR: A new method of DIBR using multi-view images acquired in a linear camera arrangement is proposed; it improves virtual viewpoint images by predicting the residual errors, and in the experiments PSNR was improved by a few decibels compared with the conventional method.
Abstract: The availability of multi-view images of a scene makes possible new and exciting applications, including Free-viewpoint TV (FTV). FTV allows us to change viewpoint freely in a 3D world, where the virtual viewpoint images are synthesized by Depth-Image-Based Rendering (DIBR). In this paper, we propose a new method of DIBR using multi-view images acquired in a linear camera arrangement. The proposed method improves virtual viewpoint images by predicting the residual errors. For virtual viewpoint image synthesis, it is necessary to estimate depth maps from the multi-view images. Several depth estimation algorithms have been proposed, but accurate depth maps remain difficult to obtain, so rendered virtual viewpoint images contain errors caused by the depth errors. Our proposed method takes these depth errors into account and improves the quality of the rendered virtual viewpoint images. In the proposed method, a virtual image at each camera position is generated using the real images from the other cameras. The residual errors are then calculated between these generated images and the real images acquired by the actual cameras. These residuals are processed and fed back to predict the errors that would occur in virtual viewpoint images generated by the conventional method. In the experiments, PSNR was improved by a few decibels compared with the conventional method.
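A deliberately simplified sketch of the idea, assuming a rectified linear camera array where disparity is proportional to baseline over depth; the residual processing here (scaling the left-to-right synthesis error by the interpolation factor) is an assumption, not the paper's actual feedback scheme, and occlusions/holes are ignored:

```python
import numpy as np

def dibr_shift(image, depth, baseline, focal=1000.0):
    """Naive forward-warping DIBR for a rectified linear setup (holes ignored)."""
    h, w = depth.shape
    disparity = (focal * baseline / np.maximum(depth, 1e-3)).astype(int)
    out = np.zeros_like(image)
    xs = np.arange(w)
    for y in range(h):
        tx = np.clip(xs + disparity[y], 0, w - 1)
        out[y, tx] = image[y, xs]
    return out

def synthesize_with_residual(img_left, depth_left, img_right, baseline_lr, alpha):
    """Virtual view at fraction alpha between the left and right cameras."""
    # 1) plain DIBR from the left camera to the virtual position
    virtual = dibr_shift(img_left, depth_left, alpha * baseline_lr)
    # 2) synthesize the *right* camera from the left and measure the synthesis error
    pred_right = dibr_shift(img_left, depth_left, baseline_lr)
    residual = img_right.astype(np.float32) - pred_right.astype(np.float32)
    # 3) feed a scaled residual back as a correction at the virtual viewpoint
    corrected = virtual.astype(np.float32) + alpha * residual
    return np.clip(corrected, 0, 255).astype(img_left.dtype)
```
The point of the sketch is only the structure: synthesize a real camera's view from the others, measure the residual against the actual capture, and use that residual to correct the virtual view.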

25 citations

Journal ArticleDOI
TL;DR: A framework for FTV based on IBR is proposed that relieves the need for an accurate depth map by introducing a hybrid virtual view synthesis method and includes an intuitive method for virtual view specification in uncalibrated views.
Abstract: Free viewpoint television (FTV) is a new concept that aims at giving viewers the flexibility to select a novel viewpoint by employing multiple video streams as the input. Currently proposed solutions for FTV include those based on ray-space resampling, which demand at least dozens of cameras and large storage and transmission resources for the video streams. Image-based rendering (IBR) methods that rely on dense depth map estimation also face practical difficulties, since accurate depth map estimation remains a challenging problem. This paper proposes a framework for FTV based on IBR that relieves the need for an accurate depth map by introducing a hybrid virtual view synthesis method. The framework also includes an intuitive method for virtual view specification in uncalibrated views. We present both simulation and real-data experiments to validate the proposed framework and the component algorithms.

25 citations

Proceedings ArticleDOI
01 Sep 2012
TL;DR: An exemplar-based depth-guided inpainting algorithm is proposed to fill disocclusions due to uncovered areas after projection, and an improved priority function which uses the depth information to impose a desirable inPainting order is developed.
Abstract: Free-Viewpoint Video (FVV) is a novel technique which creates virtual images from multiple directions by view synthesis. In this paper, an exemplar-based depth-guided inpainting algorithm is proposed to fill disocclusions caused by areas uncovered after projection. We develop an improved priority function which uses the depth information to impose a desirable inpainting order. We also propose an efficient background-foreground separation technique to enhance the accuracy of hole filling. Furthermore, a gradient-based searching approach is developed to reduce the computational cost, and the location distance is incorporated into the patch matching criteria to improve the accuracy. The experimental results show that the gradient-based search in our algorithm requires a much lower computational cost (a factor of 6 lower than global search) while producing significantly improved visual results.
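A minimal sketch of what a depth-guided priority function can look like in the exemplar-based (Criminisi-style) setting; the confidence term, data term, and the assumption that larger depth values mean farther (background) pixels are all illustrative, and the exact weighting used in the paper is not reproduced:

```python
import numpy as np

def patch_priority(confidence, data_term, depth, boundary_pts, patch=9):
    """Priority for each hole-boundary pixel; higher values are filled first."""
    half = patch // 2
    d_max = depth.max() + 1e-6
    priorities = {}
    for (y, x) in boundary_pts:
        ys, ye = max(0, y - half), y + half + 1
        xs, xe = max(0, x - half), x + half + 1
        C = confidence[ys:ye, xs:xe].mean()     # fraction of the patch already known
        D = data_term[y, x]                     # isophote strength at the boundary pixel
        Z = depth[ys:ye, xs:xe].mean() / d_max  # larger = farther, i.e. background
        priorities[(y, x)] = C * D * Z          # bias the fill order toward background
    return priorities
```
Filling background-first is what keeps disoccluded regions (which are revealed background by construction) from being filled with foreground texture.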

24 citations


Network Information
Related Topics (5)
Image segmentation: 79.6K papers, 1.8M citations, 86% related
Feature (computer vision): 128.2K papers, 1.7M citations, 86% related
Object detection: 46.1K papers, 1.3M citations, 85% related
Convolutional neural network: 74.7K papers, 2M citations, 85% related
Feature extraction: 111.8K papers, 2.1M citations, 84% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2023    54
2022    117
2021    189
2020    158
2019    114
2018    102