Dissertation

Image based surface reflectance remapping for consistent and tool independent material appearance

01 Jan 2018
TL;DR: Automatic solutions to material appearance consistency are suggested in this work, accounting for the constraints of real-world scenarios, where the only available information is a reference rendering and the renderer used to obtain it, with no access to the implementation of the shaders.
Abstract: Physically-based rendering in Computer Graphics requires knowledge of material properties beyond 3D shapes, textures and colors in order to solve the rendering equation. A number of material models have been developed, since no single model is currently able to reproduce the full range of available materials. Although only a few material models have been widely adopted in current rendering systems, the lack of standardisation causes several issues in the 3D modelling workflow, leading to a heavy tool dependency of material appearance. In industry, final decisions about products are often based on a virtual prototype, a crucial step in the production pipeline, usually developed through a collaboration among several departments, which exchange data. Unfortunately, exchanged data often differs from the original when imported into a different application. As a result, delivering consistent visual results requires time, labour and computational cost. This thesis begins with an examination of the current state of the art in material appearance representation and capture, in order to identify a suitable strategy for tackling material appearance consistency. Automatic solutions to this problem are suggested in this work, accounting for the constraints of real-world scenarios, where the only available information is a reference rendering and the renderer used to obtain it, with no access to the implementation of the shaders. In particular, two image-based frameworks are proposed that work under these constraints. The first, validated by means of perceptual studies, is aimed at the remapping of BRDF parameters and is useful when the parameters used for the reference rendering are available. The second provides consistent material appearance across different renderers, even when the parameters used for the reference are unknown. It allows the selection of an arbitrary reference rendering tool and manipulates the output of other renderers so that it is consistent with the reference.
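The parameter-remapping idea described in the abstract can be sketched as a black-box fit: given a reference image and a second renderer whose shader internals are opaque, search for the parameter value whose output best matches the reference. Both "renderers" below are toy analytic stand-ins (a Gaussian-like lobe and a Phong-style cosine lobe), and all function names and values are illustrative, not the thesis's actual implementation.

```python
import numpy as np

def renderer_a(roughness, angles):
    # Toy "reference" renderer: Gaussian-like glossy lobe.
    return np.exp(-(angles / max(roughness, 1e-6)) ** 2)

def renderer_b(shininess, angles):
    # Toy "target" renderer: Phong-style cosine lobe.
    return np.cos(np.clip(angles, 0, np.pi / 2)) ** shininess

def remap(reference, angles, candidates):
    # Black-box remapping: pick the target parameter whose rendering
    # is closest (in mean squared error) to the reference image.
    errors = [np.mean((renderer_b(s, angles) - reference) ** 2)
              for s in candidates]
    return candidates[int(np.argmin(errors))]

angles = np.linspace(0, np.pi / 2, 64)      # highlight angles sampled in "image"
reference = renderer_a(0.3, angles)         # reference rendering, parameter known
best = remap(reference, angles, np.linspace(1, 200, 400))
```

A real pipeline would compare full renderings under controlled illumination rather than 1D lobes, and would use a proper optimiser instead of a grid search, but the structure (render, compare, pick) is the same.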
Citations
Book
01 Dec 1988
TL;DR: In this paper, the spectral energy distribution of the reflected light from an object made of a specific real material is obtained and a procedure for accurately reproducing the color associated with the spectrum is discussed.
Abstract: This paper presents a new reflectance model for rendering computer synthesized images. The model accounts for the relative brightness of different materials and light sources in the same scene. It describes the directional distribution of the reflected light and a color shift that occurs as the reflectance changes with incidence angle. The paper presents a method for obtaining the spectral energy distribution of the light reflected from an object made of a specific real material and discusses a procedure for accurately reproducing the color associated with the spectral energy distribution. The model is applied to the simulation of a metal and a plastic.
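The specular term of a microfacet model of this kind can be sketched in a few lines. The Beckmann distribution and the classic geometric attenuation below follow the paper's description; the Schlick Fresnel approximation is a later, simpler stand-in for the full Fresnel equations the paper uses, and the vectors and parameter values are illustrative.

```python
import numpy as np

def cook_torrance_specular(n, l, v, roughness=0.3, f0=0.04):
    # Microfacet specular term D * F * G / (4 (n.l)(n.v)).
    h = l + v
    h = h / np.linalg.norm(h)                  # halfway vector
    nl, nv = n @ l, n @ v
    nh, vh = n @ h, v @ h
    m2 = roughness ** 2
    # Beckmann microfacet distribution
    d = np.exp((nh ** 2 - 1) / (m2 * nh ** 2)) / (np.pi * m2 * nh ** 4)
    # Cook-Torrance geometric attenuation (masking/shadowing)
    g = min(1.0, 2 * nh * nv / vh, 2 * nh * nl / vh)
    # Schlick approximation to Fresnel reflectance (stand-in)
    f = f0 + (1 - f0) * (1 - vh) ** 5
    return d * f * g / (4 * nl * nv)

n = np.array([0.0, 0.0, 1.0])                  # surface normal
l = np.array([0.0, 0.6, 0.8])                  # unit light direction
v = np.array([0.0, -0.6, 0.8])                 # unit view direction
spec = cook_torrance_specular(n, l, v)
```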

1,401 citations

Proceedings Article
01 Jan 2013
TL;DR: In this article, the authors proposed two new lightweight parametric BRDF models for accurate modeling of glossy surface reflectance, one inspired by Rayleigh-Rice theory for optically smooth surfaces and one inspired by microfacet theory.
Abstract: Glossy surface reflectance is hard to model accurately using traditional parametric BRDF models. An alternative is provided by data-driven reflectance models; however, these models offer less user control and generally result in lower efficiency. In our work we propose two new lightweight parametric BRDF models for accurate modeling of glossy surface reflectance, one inspired by Rayleigh-Rice theory for optically smooth surfaces and one inspired by microfacet theory. We base our models on a thorough study of the scattering behaviour of measured reflectance data from the MERL database. The study focuses on two key aspects of BRDF models: parametrization and scatter distribution. We propose a new scattering distribution for glossy BRDFs inspired by the ABC model for surface statistics of optically smooth surfaces. Based on the survey we consider two parameterizations, one based on microfacet theory using the halfway vector and one inspired by the parametrization of the Rayleigh-Rice BRDF model, which considers the projected deviation vector. To enable efficient rendering we also show how the new models can be approximately sampled for importance sampling of the scattering integral.
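The ABC (K-correlation) family referenced here describes a curve that is flat for small frequencies and rolls off as an inverse power law for large ones. The paper's actual lobe parameterization differs in detail; the snippet below only sketches the generic shape a / (1 + b f^2)^c with illustrative parameter values.

```python
import numpy as np

def abc_lobe(f, a=1.0, b=300.0, c=1.5):
    # Generic ABC-style curve: plateau of height `a` for small f,
    # inverse-power-law falloff controlled by `b` and `c` for large f.
    # Parameter names and the exact exponent convention vary between
    # papers; this is only the qualitative shape.
    return a / (1.0 + b * f ** 2) ** c

f = np.linspace(0.0, 1.0, 256)
lobe = abc_lobe(f)
```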

13 citations

Journal ArticleDOI
TL;DR: The aim of this research is to provide artists with a general solution, applicable regardless of the file format and software used, allowing them to make the output of their renderer uniform with a reference application, arbitrarily selected within an industry, to which all renderings obtained with other software will be made visually uniform.
Abstract: Material appearance is currently largely tool dependent, and delivering consistent visual results requires time, labour and computational cost. Within industry, the development of a project is often based on a virtual model, usually produced through a collaboration among several departments, which exchange data. Unfortunately, a virtual material in most cases does not appear the same as the original once imported into a different renderer, due to different algorithms and settings. The aim of this research is to provide artists with a general solution, applicable regardless of the file format and software used, allowing them to make the output of their renderer uniform with a reference application, arbitrarily selected within an industry, to which all renderings obtained with other software will be made visually uniform. We propose to characterize the appearance of several classes of materials rendered with the arbitrary reference software by extracting relevant visual characteristics. By repeating the same process for any other renderer, we are able to derive ad-hoc mapping functions between the two renderers. Our approach allows us to hallucinate the appearance of a scene depicting mainly the selected classes of materials, as if produced under the reference software.
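The "ad-hoc mapping functions" idea can be sketched as a per-feature curve fit: render the same calibration materials in both applications, extract a scalar appearance feature from each image, and fit a function from one feature axis to the other. The feature (mean luminance), the data values, and the polynomial degree below are all illustrative assumptions, not the paper's actual characterization.

```python
import numpy as np

# Hypothetical calibration data: the same six materials rendered in the
# reference renderer and in another renderer, with one scalar feature
# (say, mean luminance) extracted from each rendering.
ref_feature   = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
other_feature = np.array([0.05, 0.18, 0.33, 0.49, 0.66, 0.84])

# Fit a low-degree polynomial mapping: other renderer -> reference.
coeffs = np.polyfit(other_feature, ref_feature, deg=2)
to_reference = np.poly1d(coeffs)

# Remap the feature of a new rendering into the reference's range.
remapped = to_reference(0.40)
```

A full system would fit one such mapping per material class and per feature, then use the mappings to adjust new renderings toward the reference application's look.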

1 citation

References
Proceedings ArticleDOI
07 Jun 2015
TL;DR: Inception as mentioned in this paper is a deep convolutional neural network architecture that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
Abstract: We propose a deep convolutional neural network architecture codenamed Inception that achieves the new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14). The main hallmark of this architecture is the improved utilization of the computing resources inside the network. By a carefully crafted design, we increased the depth and width of the network while keeping the computational budget constant. To optimize quality, the architectural decisions were based on the Hebbian principle and the intuition of multi-scale processing. One particular incarnation used in our submission for ILSVRC14 is called GoogLeNet, a 22-layer deep network, whose quality is assessed in the context of classification and detection.
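The "constant computational budget" in this architecture rests on 1x1 convolutions, which are per-pixel linear maps over channels and therefore cheap ways to shrink the channel depth before expensive branches, whose outputs are then concatenated. The sketch below shows only that channel bookkeeping; the channel counts are illustrative, not GoogLeNet's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

def conv1x1(x, w):
    # A 1x1 convolution is a per-pixel linear map over channels:
    # (H, W, C_in) @ (C_in, C_out) -> (H, W, C_out).
    return x @ w

# Inception-style branch bookkeeping on a feature map with 192 channels.
x = rng.standard_normal((28, 28, 192))
branch_1x1 = conv1x1(x, rng.standard_normal((192, 64)))   # plain 1x1 branch
reduce_3x3 = conv1x1(x, rng.standard_normal((192, 96)))   # bottleneck before a 3x3
reduce_5x5 = conv1x1(x, rng.standard_normal((192, 16)))   # bottleneck before a 5x5

# Branch outputs are concatenated along the channel axis.
out = np.concatenate([branch_1x1, reduce_3x3, reduce_5x5], axis=-1)
```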

40,257 citations

Journal ArticleDOI
TL;DR: The Psychophysics Toolbox is a software package that supports visual psychophysics and its routines provide an interface between a high-level interpreted language and the video display hardware.
Abstract: The Psychophysics Toolbox is a software package that supports visual psychophysics. Its routines provide an interface between a high-level interpreted language (MATLAB on the Macintosh) and the video display hardware. A set of example programs is included with the Toolbox distribution.

16,594 citations

Journal ArticleDOI
TL;DR: The VideoToolbox is a free collection of two hundred C subroutines for Macintosh computers that calibrates and controls the computer-display interface to create accurately specified visual stimuli.
Abstract: The VideoToolbox is a free collection of two hundred C subroutines for Macintosh computers that calibrates and controls the computer-display interface to create accurately specified visual stimuli. High-level platform-independent languages like MATLAB are best for creating the numbers that describe the desired images. Low-level, computer-specific VideoToolbox routines control the hardware that transforms those numbers into a movie. Transcending the particular computer and language, we discuss the nature of the computer-display interface, and how to calibrate and control it.

10,084 citations

Journal ArticleDOI

3,917 citations

Journal ArticleDOI
TL;DR: The model for three-mode factor analysis is discussed in terms of newer applications of mathematical processes including a type of matrix process termed the Kronecker product and the definition of combination variables.
Abstract: The model for three-mode factor analysis is discussed in terms of newer applications of mathematical processes, including a type of matrix process termed the Kronecker product and the definition of combination variables. Three methods of analysis, each a type of extension of principal components analysis, are discussed. Methods II and III are applicable to the analysis of data collected for a large sample of individuals. An extension of the model is described in which allowance is made for unique variance for each combination variable when the data are collected for a large sample of individuals.
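The role of the Kronecker product in the three-mode model can be checked numerically: a Tucker-form tensor X = G x1 A x2 B x3 C satisfies the mode-1 unfolding identity X_(1) = A G_(1) (C kron B)^T. The sizes below are arbitrary, and the reshapes implement the unfolding convention in which the middle index varies fastest.

```python
import numpy as np

rng = np.random.default_rng(0)

G = rng.standard_normal((2, 3, 4))   # core tensor
A = rng.standard_normal((5, 2))      # mode-1 factors
B = rng.standard_normal((6, 3))      # mode-2 factors
C = rng.standard_normal((7, 4))      # mode-3 factors

# Build X[i, j, k] = sum_{p,q,r} A[i,p] B[j,q] C[k,r] G[p,q,r].
X = np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)

# Mode-1 unfoldings with the middle index varying fastest,
# matching the column ordering produced by np.kron(C, B).
lhs = X.transpose(0, 2, 1).reshape(5, -1)               # X_(1)
rhs = A @ G.transpose(0, 2, 1).reshape(2, -1) @ np.kron(C, B).T
```

This identity is what lets three-mode analysis be carried out with ordinary matrix algebra on unfolded data.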

3,810 citations


"Image based surface reflectance rem..." refers methods in this paper

  • ...[97] proposed to represent four-dimensional measured BRDF data as a function of tensor products, factorised using the Tucker decomposition [98], a generalisation of higher-order principal component analysis....

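A Tucker decomposition of a 4D BRDF-like table can be sketched with the higher-order SVD: take the leading left singular vectors of each mode's unfolding as factor matrices, then project the tensor onto them to get a small core. The synthetic "BRDF" below is built to have low multilinear rank so the reconstruction is exact; real measured data would instead be approximated at a chosen rank.

```python
import numpy as np

rng = np.random.default_rng(1)

def unfold(t, mode):
    # Mode-n unfolding: move axis `mode` to the front, flatten the rest.
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def mode_multiply(t, mat, mode):
    # Multiply tensor t by matrix mat along the given mode.
    return np.moveaxis(np.tensordot(mat, np.moveaxis(t, mode, 0), axes=1), 0, mode)

def tucker_hosvd(t, ranks):
    # Higher-order SVD: one factor matrix per mode from the leading left
    # singular vectors of each unfolding, then project onto those bases.
    factors = [np.linalg.svd(unfold(t, m), full_matrices=False)[0][:, :r]
               for m, r in enumerate(ranks)]
    core = t
    for m, u in enumerate(factors):
        core = mode_multiply(core, u.T, m)
    return core, factors

# Synthetic stand-in for a 4D measured BRDF table (four angular axes),
# constructed with multilinear rank (4, 4, 4, 4).
core_true = rng.standard_normal((4, 4, 4, 4))
mats = [rng.standard_normal((8, 4)) for _ in range(4)]
brdf = core_true
for m, u in enumerate(mats):
    brdf = mode_multiply(brdf, u, m)

core, factors = tucker_hosvd(brdf, ranks=(4, 4, 4, 4))

# Reconstruct from the compressed (core + factors) representation.
approx = core
for m, u in enumerate(factors):
    approx = mode_multiply(approx, u, m)
```

The compressed form stores a 4^4 core plus four 8x4 factor matrices instead of the full 8^4 table, which is the kind of saving that makes factorised BRDF representations attractive.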