Proceedings ArticleDOI

Parameterized variety for multi-view multi-exposure image synthesis and high dynamic range stereo reconstruction

TL;DR: A novel parameterized variety based model is presented that integrates these different domains into one common framework to accommodate multi-view stereo for multiple exposure input views and to render photo-realistic HDR images from arbitrary virtual viewpoints for high quality 3D reconstruction.
Abstract: Multi-view stereo, novel view synthesis and high dynamic range (HDR) imaging are three pertinent areas of concern for high-quality 3D view generation. This paper presents a novel parameterized variety based model that integrates these different domains into one common framework with an envisioned goal: to accommodate multi-view stereo for multiple-exposure input views and to render photo-realistic HDR images from arbitrary virtual viewpoints for high-quality 3D reconstruction. We extend the parameterized variety approach for rendering presented earlier by Genc and Ponce [1] to handle full perspective cameras. An efficient algebraic framework is proposed to construct an explicit parameterization of the space of all multi-view, multi-exposed images. This characterization of differently exposed views allows simultaneous recovery of artifact-free HDR images and reliable depth maps from arbitrary camera viewpoints. A high-quality, HDR-textured 3D model of the scene is obtained using these images and the recovered geometry information.
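The multi-exposure recovery that such a pipeline builds on can be illustrated with a standard weighted radiance merge (a Debevec-style sketch, not the paper's variety-based formulation; `merge_exposures` and the hat-shaped weighting are illustrative choices):

```python
import numpy as np

def merge_exposures(images, exposure_times, inv_response):
    """Merge aligned LDR views of one scene into a relative HDR radiance map.

    images:         list of uint8 arrays (same shape, same viewpoint)
    exposure_times: exposure time of each image, in seconds
    inv_response:   256-entry lookup table mapping pixel value -> irradiance
                    (e.g. a curve recovered via the DoRF/EMoR model [6])
    """
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, dt in zip(images, exposure_times):
        # Hat-shaped weight: trust mid-tones, distrust clipped extremes.
        w = 1.0 - 2.0 * np.abs(img / 255.0 - 0.5)
        num += w * inv_response[img] / dt   # per-image radiance estimate
        den += w
    return num / np.maximum(den, 1e-8)
```

With a linear response, two exposures of the same scene point (e.g. pixel 128 at 1.0 s and pixel 64 at 0.5 s) agree on one radiance value, which is exactly the consistency the merge exploits.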
Citations
Journal ArticleDOI
TL;DR: Results confirm that the proposed algorithm, which generates a low-bit-depth HDR image from two differently exposed images, produces more natural HDR images than other state-of-the-art algorithms regardless of image properties.
Abstract: Recently, high dynamic range (HDR) imaging has taken centre stage because of the drawbacks of low dynamic range imaging, namely detail losses in under- and over-exposed areas. In this study, the authors propose an algorithm for generating a low-bit-depth HDR image from two differently exposed images. For compatibility with conventional devices, the usual steps of assembling a large-bit-depth HDR image and compressing its bit depth are skipped. Using posterior-probability-based labelling, luminance adjustment and adaptive blending, the authors directly blend the two input images into one while preserving the global intensity order as well as enhancing the dynamic range. Experiments on various test images confirm that the proposed method generates more natural HDR images than other state-of-the-art algorithms regardless of image properties.
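The direct two-exposure blending idea can be sketched as follows; the Gaussian well-exposedness weights are a stand-in for the paper's posterior-probability labelling, and the luminance-adjustment step is omitted:

```python
import numpy as np

def blend_two_exposures(under, over, sigma=0.2):
    """Directly blend an under- and an over-exposed image (floats in [0, 1])
    into one low-bit-depth result, skipping HDR assembly and tone mapping.
    Pixels near mid-grey (0.5) are considered well exposed and get high
    weight; clipped pixels get low weight. Illustrative simplification."""
    w_u = np.exp(-((under - 0.5) ** 2) / (2.0 * sigma ** 2))
    w_o = np.exp(-((over - 0.5) ** 2) / (2.0 * sigma ** 2))
    return (w_u * under + w_o * over) / (w_u + w_o + 1e-8)
```

Because the weights are convex, each output pixel stays between the two input values, which helps preserve the global intensity order the paper emphasises.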

7 citations

Proceedings ArticleDOI
01 Jun 2022
TL;DR: In this paper, High Dynamic Range Neural Radiance Fields (HDR-NeRF) are proposed to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures.
Abstract: We present High Dynamic Range Neural Radiance Fields (HDR-NeRF) to recover an HDR radiance field from a set of low dynamic range (LDR) views with different exposures. Using the HDR-NeRF, we are able to generate both novel HDR views and novel LDR views under different exposures. The key to our method is to model the simplified physical imaging process, which dictates that the radiance of a scene point transforms to a pixel value in the LDR image with two implicit functions: a radiance field and a tone mapper. The radiance field encodes the scene radiance (values vary from 0 to $+\infty$), outputting the density and radiance of a ray given its origin and direction. The tone mapper models how a ray hitting the camera sensor becomes a pixel value: the color of the ray is predicted by feeding the radiance and the corresponding exposure time into the tone mapper. We use the classic volume rendering technique to project the output radiance, colors and densities into HDR and LDR images, while only the input LDR images are used as supervision. We collect a new forward-facing HDR dataset to evaluate the proposed method. Experimental results on synthetic and real-world scenes validate that our method can not only accurately control the exposures of synthesized views but also render views with a high dynamic range.
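The imaging model at the heart of HDR-NeRF, pixel = tonemapper(log radiance + log exposure time), can be sketched in miniature; here a fixed sigmoid stands in for the learned MLP tone mapper:

```python
import numpy as np

def tone_map(radiance, exposure_time):
    """Sketch of HDR-NeRF's imaging model: scene radiance r in (0, +inf)
    and exposure time dt map to an LDR pixel value through a monotone
    function of log(r) + log(dt). A fixed sigmoid replaces the learned
    MLP tone mapper (an illustrative assumption)."""
    log_exposure = np.log(radiance) + np.log(exposure_time)
    return 1.0 / (1.0 + np.exp(-log_exposure))
```

Working in log space makes exposure a simple additive shift, which is why a single tone mapper can explain the same scene point across all input exposures.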

5 citations

Proceedings ArticleDOI
01 Dec 2019
TL;DR: This work proposes to make publicly available to the research community, a diversified database of Stereoscopic 3D HDR images and videos, captured within the beautiful campus of Indian Institute of Technology, Madras, which is blessed with rich flora and fauna, and is home to several rare wildlife species.
Abstract: The consumer market of High Dynamic Range (HDR) displays and cameras is growing rapidly with the advent of 3D video and display technologies. Standards bodies such as the Moving Picture Experts Group and the International Telecommunication Union are calling for standardization of the latest display advancements. Lack of sufficient experimental data is a major bottleneck for preliminary research efforts in 3D HDR video technology. We propose to make publicly available to the research community a diversified database of stereoscopic 3D HDR images and videos, captured within the beautiful campus of the Indian Institute of Technology, Madras, which is blessed with rich flora and fauna and is home to several rare wildlife species. Further, we describe the procedure of capturing, aligning, calibrating and post-processing the 3D images and videos. We discuss research opportunities and challenges, the potential use cases of HDR stereo 3D applications, and depth-from-HDR aspects.

4 citations


Cites background from "Parameterized variety for multi-vie..."

  • ...The extensive research in high dynamic range imaging, Ultra HD, 4K, 8K, HDR & 3D display technology is directed towards providing close to natural, high quality 3D experience to the end users [1, 2, 3, 4]....


Book ChapterDOI
05 Mar 2021
TL;DR: In this paper, an energy-based fusion method for multi-exposure image reconstruction was proposed, which fuses the input image details by a weighted middle-function average and an 8-neighbourhood model.
Abstract: This chapter proposes a method for multi-exposure image reconstruction by energy-based fusion. The input images are captured from an unchanged scene with multiple, dissimilar exposure times, which together yield a superimposed, higher-quality result. Earlier techniques often produced unclear textures and edges. To overcome these demerits, the proposed technique fuses the input image details using a weighted middle-function average and an 8-neighbourhood model. This yields fused images with less uncertainty and redundancy, and concentrates on object focus, eminence and clarity at the same resolution, with energy as the parameter. By numerical analysis, the proposed method has certain merits over earlier methods, such as the capability to conserve all relevant data and the exclusion of after-effects like sensitivity to error and reduction of contrast.
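An energy-based fusion of this flavour might look as follows; the 8-neighbourhood squared-difference energy and the energy-weighted average are illustrative choices, not the chapter's exact formulation:

```python
import numpy as np

def local_energy(img):
    """Per-pixel energy over the 8-neighbourhood: sum of squared
    differences to the eight neighbours (edges replicated)."""
    padded = np.pad(img, 1, mode='edge')
    e = np.zeros_like(img, dtype=np.float64)
    h, w = img.shape
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            shifted = padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
            e += (img - shifted) ** 2
    return e

def energy_fuse(images):
    """Fuse multi-exposure images with weights proportional to local
    energy, so detailed (textured) regions dominate the result."""
    energies = [local_energy(im) + 1e-8 for im in images]
    total = sum(energies)
    return sum(w * im for w, im in zip(energies, images)) / total
```

A flat, featureless input contributes almost nothing wherever another input carries texture, which is how this kind of fusion keeps edges sharp.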
References
Proceedings ArticleDOI
18 Jun 2003
TL;DR: This work combines the constraints from the theoretical space with the data from the DoRF database to create a low-parameter Empirical Model of Response (EMoR), which allows us to accurately interpolate the complete response function of a camera from a small number of measurements obtained using a standard chart.
Abstract: Many vision applications require precise measurement of scene radiance. The function relating scene radiance to image brightness is called the camera response. We analyze the properties that all camera responses share. This allows us to find the constraints that any response function must satisfy. These constraints determine the theoretical space of all possible camera responses. We have collected a diverse database of real-world camera response functions (DoRF). Using this database we show that real-world responses occupy a small part of the theoretical space of all possible responses. We combine the constraints from our theoretical space with the data from DoRF to create a low-parameter Empirical Model of Response (EMoR). This response model allows us to accurately interpolate the complete response function of a camera from a small number of measurements obtained using a standard chart. We also show that the model can be used to accurately estimate the camera response from images of an arbitrary scene taken using different exposures. The DoRF database and the EMoR model can be downloaded at http://www.cs.columbia.edu/CAVE.
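The EMoR idea, response ≈ mean curve plus a few basis curves fitted by least squares, can be sketched as below; the polynomial basis is a stand-in for the PCA basis learned from the DoRF database:

```python
import numpy as np

def fit_response(sample_idx, sample_vals, g0, basis):
    """EMoR-style interpolation: response ≈ g0 + sum_k c_k * h_k, where g0
    is the mean response and h_k are basis curves, all tabulated on a
    dense grid of pixel values. The few coefficients c_k are fitted by
    least squares from a handful of chart measurements (grid indices in
    sample_idx, measured responses in sample_vals), then the full curve
    is reconstructed. The basis here is illustrative, not the real
    DoRF-derived PCA basis."""
    A = basis[sample_idx, :]                  # (n_samples, n_basis)
    b = sample_vals - g0[sample_idx]
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    return g0 + basis @ c                     # full interpolated response
```

Because only a few coefficients are free, a handful of chart measurements suffices to pin down the whole curve, which is exactly the low-parameter property the paper exploits.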

201 citations


"Parameterized variety for multi-vie..." refers methods in this paper

  • ...Camera views C0 to C10 are taken and exposures are added by applying one of the response functions obtained from DoRF/EMoR database [6]....


  • ...Recovered inverse response curves (brightness vs irradiance) for each color channel plotted against ground truth EMoR Model [6]....


Book ChapterDOI
06 Jun 1998
TL;DR: An approach is described which achieves this goal by combining state-of-the-art algorithms for uncalibrated projective reconstruction, self-calibration and dense correspondence matching.
Abstract: Modeling of 3D objects from image sequences is one of the challenging problems in computer vision and has been a research topic for many years. Important theoretical and algorithmic results have been achieved that allow the extraction of even complex 3D scene models from images. One recent effort has been to reduce the amount of calibration and to avoid restrictions on the camera motion. In this contribution, an approach is described which achieves this goal by combining state-of-the-art algorithms for uncalibrated projective reconstruction, self-calibration and dense correspondence matching.

118 citations


"Parameterized variety for multi-vie..." refers methods in this paper

  • ...The second stage merges this variety model with the MVS algorithm [5], to robustly fuse information from actual and synthesized artifact-free multi-exposed images....


  • ...We adopt the method proposed in [5] to reconstruct the HDR texture 3D model of the scene....


  • ...Calibration is obtained by upgrading the projective reconstruction up to Euclidean by imposing constraints on the absolute conic and the internal camera parameters [5]....


Journal ArticleDOI
TL;DR: In this paper, the Cayley-Dixon-Kapur-Saxena-Yang result was used to solve the six-line problem in computer vision and in the automated analysis of images.

41 citations

Journal ArticleDOI
TL;DR: This paper addresses the problem of characterizing the set of all images of a rigid set of m points and n lines observed by a weak perspective or paraperspective camera by showing that the corresponding image space can be represented by a six-dimensional variety embedded in R^{2(m+n)} and parameterized by the image positions of three reference points.
Abstract: This paper addresses the problem of characterizing the set of all images of a rigid set of m points and n lines observed by a weak perspective or paraperspective camera. By taking explicitly into account the Euclidean constraints associated with calibrated cameras, we show that the corresponding image space can be represented by a six-dimensional variety embedded in R^{2(m+n)} and parameterized by the image positions of three reference points. The coefficients defining this parameterized image variety (or PIV for short) can be estimated from a sample of images of a scene via linear and non-linear least squares. The PIV provides an integrated framework for using both point and line features to synthesize new images from a set of pre-recorded pictures (image-based rendering). The proposed technique does not perform any explicit three-dimensional scene reconstruction but it supports hidden-surface elimination, texture mapping and interactive image synthesis at frame rate on ordinary PCs. It has been implemented and extensively tested on real data sets.
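The flavour of such a parameterization can be shown with a simpler affine-camera sketch: with four reference points, the affine coordinates of every other point transfer unchanged to any image, so new views are determined by the reference points' image positions alone. (PIV proper needs only three reference points because it adds the Euclidean constraints of calibrated weak-perspective cameras; this sketch omits those constraints.)

```python
import numpy as np

def affine_coords(points3d):
    """Affine coordinates of each 3D point in the basis of the first four
    points. Under an affine (e.g. weak-perspective) camera these
    coordinates are view-invariant."""
    p0, p1, p2, p3 = points3d[:4]
    M = np.stack([p1 - p0, p2 - p0, p3 - p0], axis=1)   # 3x3 basis matrix
    return np.linalg.solve(M, (points3d - p0).T).T      # (n, 3)

def synthesize(ref_img_pts, coords):
    """Predict the image positions of all points in a (possibly new) view
    from the image positions of the four reference points alone."""
    q0, q1, q2, q3 = ref_img_pts
    return q0 + coords @ np.stack([q1 - q0, q2 - q0, q3 - q0])
```

For a truly affine camera the prediction is exact, which is why the image positions of a few reference points fully parameterize the space of views.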

13 citations


"Parameterized variety for multi-vie..." refers methods in this paper

  • ...It constructs a complete parameterization of 3D space, which is not the case for the weak and paraperspective cameras, as explained in [1]....


  • ...We extend the parameterized variety approach for rendering presented earlier by Genc and Ponce [1] to handle full perspective cameras....


  • ...Parameterized Image Variety (or PIV) was proposed earlier by Genc and Ponce [1] for image based rendering....


  • ...[1] Y. Genc, J. Ponce, Image-based rendering using parameterized image varieties, IJCV, vol. 41, no. 3, pp. 143-170, 2001....
