
Inverse rendering for computer graphics

TL;DR: This work proposes and demonstrates a linear least-squares system for inverse rendering problems, which work backward from photographs to scene attributes such as geometry, lighting, and camera positions.
Abstract: Creating realistic images has been a major focus in the study of computer graphics for much of its history. This effort has led to mathematical models and algorithms that can compute predictive, or physically realistic, images from known camera positions and scene descriptions that include the geometry of objects, the reflectance of surfaces, and the lighting used to illuminate the scene. These images accurately describe the physical quantities that would be measured from a real scene. Because these algorithms can predict real images, they can also be used in inverse problems to work backward from photographs to attributes of the scene. Work on three such inverse rendering problems is described. The first, inverse lighting, assumes knowledge of geometry, reflectance, and the recorded photograph and solves for the lighting in the scene. A technique using a linear least-squares system is proposed and demonstrated. Also demonstrated is an application of inverse lighting, called re-lighting, which modifies lighting in photographs. The remaining two inverse rendering problems solve for unknown reflectance, given images with known geometry, lighting, and camera positions. Photographic texture measurement concentrates on capturing the spatial variation in an object's reflectance. The resulting system begins with scanned 3D models of real objects and uses photographs to construct accurate, high-resolution textures suitable for physically realistic rendering. The system is demonstrated on two complex natural objects with detailed surface textures. Image-based BRDF measurement takes the opposite approach to reflectance measurement, capturing the directional characteristics of a surface's reflectance by measuring the bidirectional reflectance distribution function, or BRDF. Using photographs of an object with spatially uniform reflectance, the BRDFs of paints and papers are measured with completeness and accuracy that rival those of measurements obtained using specialized devices. The image-based approach and novel light source positioning technique require only general-purpose equipment, so the cost of the apparatus is low compared to conventional approaches. In addition, densely sampled data can be measured quickly when the wavelength spectrum of the BRDF does not need to be measured in detail.
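
The inverse-lighting formulation lends itself to a compact illustration. Because light transport is linear in source intensity, a photograph can be modeled as a weighted sum of basis renderings, one per candidate light, and the weights recovered by least squares. The sketch below is illustrative only; the function and variable names are hypothetical, and the thesis's actual system may differ in detail.

```python
import numpy as np

def solve_inverse_lighting(basis_images, photo):
    """Recover light-source intensities by linear least squares.

    basis_images: list of H x W arrays, one rendering per unit-intensity
                  candidate light (geometry and reflectance assumed known).
    photo:        H x W array, the recorded photograph.
    """
    # Light transport is linear in source intensity, so the photograph is
    # modeled as A @ weights, where each column of A is one basis rendering.
    A = np.stack([img.ravel() for img in basis_images], axis=1)
    b = photo.ravel()
    weights, _, _, _ = np.linalg.lstsq(A, b, rcond=None)
    # For physically meaningful (nonnegative) intensities, a constrained
    # solver such as scipy.optimize.nnls could be substituted here.
    return weights
```
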
Citations
Proceedings ArticleDOI
01 Jul 2003
TL;DR: This work presents a generative model for isotropic bidirectional reflectance distribution functions (BRDFs) based on acquired reflectance data, letting users define perceptually meaningful parametrization directions for navigating the reduced-dimension BRDF space.
Abstract: We present a generative model for isotropic bidirectional reflectance distribution functions (BRDFs) based on acquired reflectance data. Instead of using analytical reflectance models, we represent each BRDF as a dense set of measurements. This allows us to interpolate and extrapolate in the space of acquired BRDFs to create new BRDFs. We treat each acquired BRDF as a single high-dimensional vector taken from a space of all possible BRDFs. We apply both linear (subspace) and non-linear (manifold) dimensionality reduction tools in an effort to discover a lower-dimensional representation that characterizes our measurements. We let users define perceptually meaningful parametrization directions to navigate in the reduced-dimension BRDF space. On the low-dimensional manifold, movement along these directions produces novel but valid BRDFs.
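
Here is a minimal sketch of the linear (subspace) half of this approach, assuming each acquired BRDF has already been tabulated into an equal-length vector of reflectance samples; all names are illustrative, not the authors' code:

```python
import numpy as np

def brdf_subspace(brdf_matrix, n_components):
    """Linear dimensionality reduction over a set of measured BRDFs.

    brdf_matrix: N x D array; each row is one acquired BRDF flattened
                 into a D-dimensional vector of reflectance samples.
    """
    mean = brdf_matrix.mean(axis=0)
    centered = brdf_matrix - mean
    # SVD of the centered data yields the principal subspace.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:n_components]       # n_components x D basis vectors
    coords = centered @ components.T     # N x n_components embedding
    return mean, components, coords

def synthesize_brdf(mean, components, coords):
    """Map a point in the reduced space back to a full BRDF vector,
    allowing interpolation and extrapolation between measurements."""
    return mean + coords @ components
```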

818 citations


Cites background from "Inverse rendering for computer graphics"

  • ...The descriptions follow the concepts defined in Nicodemus et al. [37], Jensen [18], and Marschner [32]....

Proceedings ArticleDOI
01 Aug 2001
TL;DR: This work introduces a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting.
Abstract: Realism in computer-generated images requires accurate input models for lighting, textures and BRDFs. One of the best ways of obtaining high-quality data is through measurements of scene attributes from real photographs by inverse rendering. However, inverse rendering methods have been largely limited to settings with highly controlled lighting. One of the reasons for this is the lack of a coherent mathematical framework for inverse rendering under general illumination conditions. Our main contribution is the introduction of a signal-processing framework which describes the reflected light field as a convolution of the lighting and BRDF, and expresses it mathematically as a product of spherical harmonic coefficients of the BRDF and the lighting. Inverse rendering can then be viewed as deconvolution. We apply this theory to a variety of problems in inverse rendering, explaining a number of previous empirical results. We show why certain problems are ill-posed or numerically ill-conditioned, and why other problems are more amenable to solution. The theory developed here also leads to new practical representations and algorithms. For instance, we present a method to factor the lighting and BRDF from a small number of views, i.e., to estimate both simultaneously when neither is known.
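
The core identity of this framework, that reflected-light-field coefficients are per-order products of lighting and BRDF coefficients, can be sketched directly. The snippet below assumes a radially symmetric BRDF whose transfer reduces to one coefficient per spherical-harmonic order l, and it shows why deconvolution is ill-conditioned where those coefficients vanish; names and data layout are hypothetical:

```python
def convolve(L, rho, lmax):
    """Forward model: reflected-light-field SH coefficients are per-order
    products of lighting coefficients L[(l, m)] and BRDF transfer
    coefficients rho[l] (radially symmetric BRDF assumed)."""
    return {(l, m): rho[l] * L[(l, m)]
            for l in range(lmax + 1) for m in range(-l, l + 1)}

def deconvolve(B, rho, lmax, eps=1e-6):
    """Inverse lighting as deconvolution: divide out the BRDF coefficients.

    Where rho[l] is near zero the BRDF has filtered that order away, so
    the corresponding lighting coefficients are unrecoverable; this is
    the ill-conditioning the framework predicts. They are zeroed here.
    """
    return {(l, m): B[(l, m)] / rho[l] if abs(rho[l]) > eps else 0.0
            for l in range(lmax + 1) for m in range(-l, l + 1)}
```
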

627 citations


Cites methods from "Inverse rendering for computer graphics"

  • ...Details are given by Marschner [34] and Levoy et al. [16]....

Journal ArticleDOI
Fausto Bernardini, Holly Rushmeier
TL;DR: Increasingly affordable 3D acquisition systems and consumer graphics hardware create an opportunity for new virtual reality applications, as diverse as cultural heritage and retail sales, that let people view realistic 3D objects on home computers.
Abstract: Three-dimensional (3D) image acquisition systems are rapidly becoming more affordable, especially systems based on commodity electronic cameras. At the same time, personal computers with graphics hardware capable of displaying complex 3D models are also becoming inexpensive enough to be available to a large population. As a result, there is potentially an opportunity to consider new virtual reality applications as diverse as cultural heritage and retail sales that will allow people to view realistic 3D objects on home computers. Although there are many physical techniques for acquiring 3D data—including laser scanners, structured light, and time-of-flight—there is a basic pipeline of operations for taking the acquired data and producing a usable numerical model. We look at the fundamental problems of range image registration, line-of-sight errors, mesh integration, surface detail and color, and texture mapping. In the area of registration we consider both the problems of finding an initial global alignment using manual and automatic means, and refining this alignment with variations of the Iterative Closest Point methods. To account for scanner line-of-sight errors we compare several averaging approaches. In the area of mesh integration, that is, finding a single mesh joining the data from all scans, we compare various methods for computing interpolating and approximating surfaces. We then look at various ways in which surface properties such as color (more properly, spectral reflectance) can be extracted from acquired imagery. Finally, we examine techniques for producing a final model representation that can be efficiently rendered using graphics hardware.
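
To make the registration-refinement stage concrete, here is a minimal sketch of one point-to-point Iterative Closest Point step, assuming two roughly prealigned scans given as point arrays; it illustrates the standard method, not the authors' implementation:

```python
import numpy as np

def icp_step(src, dst):
    """One point-to-point Iterative Closest Point refinement step.

    src: N x 3 array, points from the scan being moved.
    dst: M x 3 array, points from the fixed scan.
    Assumes a rough initial alignment already exists; returns a rotation
    R and translation t moving src toward dst.
    """
    # 1. Correspondences: pair each source point with its nearest target
    #    (brute force here; a k-d tree would be used in practice).
    d2 = ((src[:, None, :] - dst[None, :, :]) ** 2).sum(axis=2)
    matched = dst[d2.argmin(axis=1)]

    # 2. Best rigid transform for these pairs (Kabsch / Procrustes).
    src_c, dst_c = src.mean(axis=0), matched.mean(axis=0)
    H = (src - src_c).T @ (matched - dst_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```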

492 citations


Cites background from "Inverse rendering for computer graphics"

  • ...Marschner [79] describes an example of applying a relative distance preserving parameterization in a scanning application....

Proceedings Article
Fausto Bernardini, Holly Rushmeier
01 Jan 2000
TL;DR: In this paper, the fundamental problems of range image registration, line-of-sight errors, mesh integration, surface detail and color, and texture mapping are discussed, including finding an initial global alignment by manual and automatic means and refining that alignment with variations of the Iterative Closest Point (ICP) method.
Abstract: Identical to the journal version above.

477 citations

Proceedings ArticleDOI
29 Jun 2005
TL;DR: This work evaluates several well-known analytical models in terms of their ability to fit measured BRDFs, and shows that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.
Abstract: The Bidirectional Reflectance Distribution Function (BRDF) describes the appearance of a material by its interaction with light at a surface point. A variety of analytical models have been proposed to represent BRDFs. However, analysis of these models has been scarce due to the lack of high-resolution measured data. In this work we evaluate several well-known analytical models in terms of their ability to fit measured BRDFs. We use an existing high-resolution data set of a hundred isotropic materials and compute the best approximation for each analytical model. Furthermore, we have built a new setup for efficient acquisition of anisotropic BRDFs, which allows us to acquire anisotropic materials at high resolution. We have measured four samples of anisotropic materials (brushed aluminum, velvet, and two satins). Based on the numerical errors, function plots, and rendered images we provide insights into the performance of the various models. We conclude that for most isotropic materials, physically-based analytic reflectance models can represent their appearance quite well. We illustrate the important difference between the two common ways of defining the specular lobe: around the mirror direction and with respect to the half-vector. Our evaluation shows that the latter gives a more accurate shape for the reflection lobe. Our analysis of anisotropic materials indicates that current parametric reflectance models cannot represent their appearance faithfully in many cases. We show that using a sampled microfacet distribution computed from measurements improves the fit and qualitatively reproduces the measurements.
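
The half-vector parameterization the evaluation favors can be illustrated with a small fitting sketch: a diffuse term plus a Blinn-Phong-style specular lobe defined around the half-vector, fit to measured samples by nonlinear least squares. The data layout and starting values below are assumptions, not details from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

def half_vector_lobe(params, wi, wo, n):
    """Diffuse term plus a specular lobe defined around the half-vector,
    the parameterization the paper's evaluation finds more accurate."""
    kd, ks, shininess = params
    h = wi + wo
    h = h / (np.linalg.norm(h, axis=1, keepdims=True) + 1e-12)
    cos_nh = np.clip((h * n).sum(axis=1), 0.0, 1.0)
    return kd / np.pi + ks * cos_nh ** shininess

def fit_brdf(measured, wi, wo, n):
    """Fit the lobe to measured BRDF samples by nonlinear least squares.

    measured: length-N array of reflectance values; wi, wo, n: N x 3 unit
    vectors (incident, outgoing, and normal directions per sample).
    """
    residuals = lambda p: half_vector_lobe(p, wi, wo, n) - measured
    fit = least_squares(residuals, x0=[0.2, 0.5, 50.0],
                        bounds=([0, 0, 1], [1, 10, 5000]))
    return fit.x   # fitted kd, ks, shininess
```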

472 citations