Proceedings ArticleDOI

Parameterized variety for multi-view multi-exposure image synthesis and high dynamic range stereo reconstruction

TL;DR: A novel parameterized-variety-based model is presented that integrates multi-view stereo, novel view synthesis and HDR imaging into one common framework, accommodating multi-view stereo with multiple-exposure input views and rendering photo-realistic HDR images from arbitrary virtual viewpoints for high-quality 3D reconstruction.
Abstract: Multi-view stereo, novel view synthesis and high dynamic range (HDR) imaging are three pertinent areas of concern for high-quality 3D view generation. This paper presents a novel parameterized-variety-based model that integrates these different domains into one common framework, with the goal of accommodating multi-view stereo for multiple-exposure input views and rendering photo-realistic HDR images from arbitrary virtual viewpoints for high-quality 3D reconstruction. We extend the parameterized variety approach for rendering presented earlier by Genc and Ponce [1] to handle full perspective cameras. An efficient algebraic framework is proposed to construct an explicit parameterization of the space of all multi-view, multi-exposed images. This characterization of differently exposed views allows simultaneous recovery of artifact-free HDR images and reliable depth maps from arbitrary camera viewpoints. A high-quality, HDR-textured 3D model of the scene is obtained using these images and the recovered geometry.
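The paper's algebraic parameterization is not reproduced here, but the sketch below illustrates the kind of HDR recovery such a framework builds on: a standard weighted merge of pixel-aligned, differently exposed images into a radiance map, assuming known exposure times and a pre-calibrated inverse camera response. The function and variable names are illustrative only and are not taken from the paper.

import numpy as np

def merge_exposures(images, exposure_times, inv_log_response):
    """Weighted merge of aligned 8-bit exposures into a radiance map.

    images:           list of HxW (or HxWx3) uint8 arrays, pixel-aligned
    exposure_times:   list of exposure times in seconds
    inv_log_response: length-256 array mapping pixel value -> log(E * t),
                      an assumed, pre-calibrated inverse camera response
    """
    log_sum = np.zeros(images[0].shape, dtype=np.float64)
    weight_sum = np.zeros_like(log_sum)
    for img, t in zip(images, exposure_times):
        z = img.astype(np.int64)
        w = 1.0 - np.abs((z - 127.5) / 127.5)        # hat weight: trust mid-range pixels
        log_sum += w * (inv_log_response[z] - np.log(t))
        weight_sum += w
    return np.exp(log_sum / np.maximum(weight_sum, 1e-6))

Saturated and under-exposed pixels receive near-zero weight, which is what suppresses the exposure artifacts the abstract refers to; the paper additionally couples this recovery with multi-view geometry, which this sketch does not attempt.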
Citations
Journal ArticleDOI


TL;DR: Results confirm that the proposed algorithm, which generates a low-bit-depth HDR image from two differently exposed images, produces more natural HDR images than other state-of-the-art algorithms regardless of image properties.
Abstract: Recently, high dynamic range (HDR) imaging has taken centre stage because of the drawbacks of low dynamic range imaging, namely detail loss in under- and over-exposed areas. In this study, the authors propose an algorithm that generates a low-bit-depth HDR image from two differently exposed images. For compatibility with conventional devices, generation of a high-bit-depth HDR image and the subsequent bit-depth compression are skipped. Using posterior-probability-based labelling, luminance adjustment and adaptive blending, the authors directly blend the two input images into one while preserving the global intensity order and enhancing the dynamic range. Experiments on various test images confirm that the proposed method generates more natural HDR images than other state-of-the-art algorithms regardless of image properties.
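The posterior-probability labelling itself is not detailed in the abstract; the following minimal sketch (with assumed names) shows only the luminance-adjusted adaptive blending idea: two aligned exposures are combined directly, each weighted per pixel by how well it is exposed.

import numpy as np

def blend_two_exposures(under, over, sigma=0.2):
    """Directly blend a short and a long exposure into one low-bit-depth image.

    under, over: HxWx3 float arrays in [0, 1], pixel-aligned
    sigma:       width of the well-exposedness weight around mid-grey
    This is a simplified stand-in for the paper's posterior-probability
    labelling: each image is weighted by the closeness of its luminance to 0.5.
    """
    def well_exposedness(img):
        lum = img.mean(axis=2)
        return np.exp(-((lum - 0.5) ** 2) / (2.0 * sigma ** 2))

    w_u, w_o = well_exposedness(under), well_exposedness(over)
    total = w_u + w_o + 1e-6
    return (w_u / total)[..., None] * under + (w_o / total)[..., None] * over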

7 citations

Book ChapterDOI


05 Mar 2021
TL;DR: In this chapter, an energy-based fusion method for multi-exposure image reconstruction is proposed, which fuses detail from the input images using a weighted middle-function average and an 8-neighborhood model.
Abstract: This chapter proposes a method for multi-exposure image reconstruction by energy-based fusion. The input images are captured of the same static scene with multiple, dissimilar exposure times and are combined into a single image of superior quality. Earlier techniques tended to produce unclear textures and edges. To overcome these shortcomings, the proposed technique fuses detail from the input images using a weighted middle-function average and an 8-neighborhood model, with energy as the fusion parameter. The fused images show less uncertainty and redundancy while preserving object focus, quality and clarity at the same resolution. Numerical analysis indicates that the proposed method has certain merits over earlier methods, such as the ability to preserve all relevant data and to avoid after-effects like sensitivity to error and reduction of contrast.
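The abstract does not pin down the weighted middle-function average, so the sketch below is only a rough, assumed reading of "energy as the parameter": each input image is weighted at every pixel by its local gradient energy over a 3x3 (8-neighborhood) window. All names are illustrative.

import numpy as np
from scipy.ndimage import uniform_filter

def energy_fusion(images, win=3):
    """Fuse aligned grayscale exposures using local energy over an 8-neighborhood.

    images: list of HxW float arrays, pixel-aligned
    win:    3 -> each pixel plus its 8 neighbours
    """
    stack = np.stack(images, axis=0)
    gy, gx = np.gradient(stack, axis=(1, 2))                  # per-image gradients
    energy = uniform_filter(gx ** 2 + gy ** 2, size=(1, win, win))
    weights = energy / (energy.sum(axis=0, keepdims=True) + 1e-12)
    return (weights * stack).sum(axis=0)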
Proceedings ArticleDOI


01 Dec 2019
TL;DR: This work proposes to make publicly available to the research community a diversified database of stereoscopic 3D HDR images and videos, captured within the campus of the Indian Institute of Technology Madras, which is blessed with rich flora and fauna and is home to several rare wildlife species.
Abstract: The consumer market for high dynamic range (HDR) displays and cameras is growing rapidly with the advent of 3D video and display technologies. Standards bodies such as the Moving Picture Experts Group and the International Telecommunication Union are calling for standardization of the latest display advancements. A lack of sufficient experimental data is a major bottleneck for preliminary research efforts in 3D HDR video technology. We propose to make publicly available to the research community a diversified database of stereoscopic 3D HDR images and videos, captured within the campus of the Indian Institute of Technology Madras, which is blessed with rich flora and fauna and is home to several rare wildlife species. Further, we describe the procedure for capturing, aligning, calibrating and post-processing the 3D images and videos, and discuss research opportunities and challenges, potential use cases of stereo 3D HDR applications, and depth-from-HDR aspects.

Cites background from "Parameterized variety for multi-vie..."


References
Proceedings ArticleDOI


18 Jun 2003
TL;DR: This work combines the constraints from the theoretical space with the data from the DoRF database to create a low-parameter Empirical Model of Response (EMoR), which allows us to accurately interpolate the complete response function of a camera from a small number of measurements obtained using a standard chart.
Abstract: Many vision applications require precise measurement of scene radiance. The function relating scene radiance to image brightness is called the camera response. We analyze the properties that all camera responses share. This allows us to find the constraints that any response function must satisfy. These constraints determine the theoretical space of all possible camera responses. We have collected a diverse database of real-world camera response functions (DoRF). Using this database we show that real-world responses occupy a small part of the theoretical space of all possible responses. We combine the constraints from our theoretical space with the data from DoRF to create a low-parameter Empirical Model of Response (EMoR). This response model allows us to accurately interpolate the complete response function of a camera from a small number of measurements obtained using a standard chart. We also show that the model can be used to accurately estimate the camera response from images of an arbitrary scene taken using different exposures. The DoRF database and the EMoR model can be downloaded at http://www.cs.columbia.edu/CAVE.
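As a concrete illustration of how a low-parameter response model is used, the sketch below fits EMoR-style coefficients by linear least squares from chart measurements. The mean curve f0 and the basis curves h_k are assumed to be available from the downloadable model; all names here are illustrative.

import numpy as np

def fit_emor_coefficients(irradiance, brightness, f0, basis):
    """Fit response-model coefficients c such that f(E) ~ f0(E) + sum_k c_k * h_k(E).

    irradiance: (N,) normalized scene irradiance values from chart patches
    brightness: (N,) measured image brightness at those patches
    f0:         callable returning the mean response curve at given irradiances
    basis:      list of callables, the principal-component response curves
    """
    H = np.stack([h(irradiance) for h in basis], axis=1)   # (N, K) design matrix
    residual = brightness - f0(irradiance)                  # part the basis must explain
    coeffs, *_ = np.linalg.lstsq(H, residual, rcond=None)
    return coeffs

With only a handful of coefficients, the full response curve can then be evaluated everywhere, which is what allows interpolation of a camera's response from a small number of chart measurements.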

183 citations


"Parameterized variety for multi-vie..." refers methods in this paper


Book ChapterDOI


06 Jun 1998
TL;DR: An approach is described that reduces the amount of calibration and avoids restrictions on camera motion by combining state-of-the-art algorithms for uncalibrated projective reconstruction, self-calibration and dense correspondence matching.
Abstract: Modeling of 3D objects from image sequences is one of the challenging problems in computer vision and has been a research topic for many years. Important theoretical and algorithmic results have been achieved that allow the extraction of even complex 3D scene models from images. One recent effort has been to reduce the amount of calibration and to avoid restrictions on the camera motion. In this contribution, an approach is described which achieves this goal by combining state-of-the-art algorithms for uncalibrated projective reconstruction, self-calibration and dense correspondence matching.
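For orientation, the sketch below (using OpenCV, which is not mentioned in the chapter) covers only the first stage of such an uncalibrated pipeline: matching features between two views and estimating the fundamental matrix with RANSAC. Self-calibration and dense correspondence matching are not shown.

import cv2
import numpy as np

def two_view_geometry(img1, img2):
    """Match SIFT features between two uncalibrated views and estimate F with RANSAC."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(img1, None)
    k2, d2 = sift.detectAndCompute(img2, None)
    matches = cv2.BFMatcher().knnMatch(d1, d2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]   # Lowe ratio test
    pts1 = np.float32([k1[m.queryIdx].pt for m in good])
    pts2 = np.float32([k2[m.trainIdx].pt for m in good])
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    inliers = mask.ravel() == 1
    return F, pts1[inliers], pts2[inliers]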

118 citations


"Parameterized variety for multi-vie..." refers methods in this paper


Journal ArticleDOI


TL;DR: In this paper, a variant of the Cayley–Dixon–Kapur–Saxena–Yang resultant was used to solve the six-line problem arising in computer vision and in the automated analysis of images.
Abstract: The "Six-line Problem" arises in computer vision and in the automated analysis of images. Given a three-dimensional (3D) object, one extracts geometric features (for example six lines) and then, via techniques from algebraic geometry and geometric invariant theory, produces a set of 3D invariants that represents that feature set. Suppose that later an object is encountered in an image (for example, a photograph taken by a camera modeled by standard perspective projection, i.e. a "pinhole" camera), and suppose further that six lines are extracted from the object appearing in the image. The problem is to decide if the object in the image is the original 3D object. To answer this question, two-dimensional (2D) invariants are computed from the lines in the image. One can show that conditions for geometric consistency between the 3D object features and the 2D image features can be expressed as a set of polynomial equations in the combined set of two- and three-dimensional invariants. The object in the image is geometrically consistent with the original object if the set of equations has a solution. One well-known method to attack such sets of equations is with resultants. Unfortunately, the size and complexity of this problem made it appear overwhelming until recently. This paper describes a solution obtained using our own variant of the Cayley–Dixon–Kapur–Saxena–Yang resultant. There is reason to believe that the resultant technique we employ here may solve other complex polynomial systems.
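The six-line system itself is far too large to reproduce, but the resultant idea can be shown on a toy pair of polynomials using SymPy: eliminating one variable yields a single polynomial whose roots indicate whether the original system is consistent. The polynomials below are made up purely for illustration.

from sympy import symbols, resultant

x, y = symbols('x y')
p = x**2 + y**2 - 1          # toy "consistency constraints", not the real invariants
q = x - y
r = resultant(p, q, y)       # eliminate y: a polynomial in x alone
print(r)                     # 2*x**2 - 1; its real roots show the toy system is solvable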

40 citations

Journal ArticleDOI


TL;DR: This paper addresses the problem of characterizing the set of all images of a rigid set of m points and n lines observed by a weak perspective or paraperspective camera by showing that the corresponding image space can be represented by a six-dimensional variety embedded in R^{2(m+n)} and parameterized by the image positions of three reference points.
Abstract: This paper addresses the problem of characterizing the set of all images of a rigid set of m points and n lines observed by a weak perspective or paraperspective camera. By taking explicitly into account the Euclidean constraints associated with calibrated cameras, we show that the corresponding image space can be represented by a six-dimensional variety embedded in R^{2(m+n)} and parameterized by the image positions of three reference points. The coefficients defining this parameterized image variety (or PIV for short) can be estimated from a sample of images of a scene via linear and non-linear least squares. The PIV provides an integrated framework for using both point and line features to synthesize new images from a set of pre-recorded pictures (image-based rendering). The proposed technique does not perform any explicit three-dimensional scene reconstruction but it supports hidden-surface elimination, texture mapping and interactive image synthesis at frame rate on ordinary PCs. It has been implemented and extensively tested on real data sets.
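A heavily simplified stand-in for the PIV idea, under weak perspective, is that the image position of any point is a fixed affine combination of the image positions of a few reference points; the coefficients can be fitted from sample images by least squares and reused to synthesize the point in a new view. The sketch below uses four reference points as an affine basis rather than the paper's three-point parameterization with Euclidean constraints; all names are illustrative.

import numpy as np

def fit_affine_coords(ref_tracks, point_track):
    """Fit fixed affine-combination coefficients from tracked sample images.

    ref_tracks:  (F, 4, 2) image positions of four reference points over F frames
    point_track: (F, 2)    image positions of the target point in the same frames
    Under a weak-perspective camera the target is the same affine combination
    (coefficients summing to 1) of the references in every frame.
    """
    F = ref_tracks.shape[0]
    A = ref_tracks.transpose(0, 2, 1).reshape(2 * F, 4)    # x-row and y-row per frame
    b = point_track.reshape(2 * F)
    A = np.vstack([A, np.ones((1, 4))])                    # soft constraint: sum(alpha) = 1
    b = np.append(b, 1.0)
    alpha, *_ = np.linalg.lstsq(A, b, rcond=None)
    return alpha

def synthesize(ref_new, alpha):
    """Predict the point's position in a new (4, 2) set of reference positions."""
    return alpha @ ref_new

Fitting once and reusing the coefficients for every new view is what lets this style of image-based rendering skip explicit 3D reconstruction.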

13 citations


"Parameterized variety for multi-vie..." refers methods in this paper
