Proceedings ArticleDOI

Environment matting and compositing

TL;DR: This paper introduces a new process, environment matting, which captures not just a foreground object and its traditional opacity matte from a real-world scene, but also a description of how that object refracts and reflects light, which is called an environment matte.
Abstract: This paper introduces a new process, environment matting, which captures not just a foreground object and its traditional opacity matte from a real-world scene, but also a description of how that object refracts and reflects light, which we call an environment matte. The foreground object can then be placed in a new environment, using environment compositing, where it will refract and reflect light from that scene. Objects captured in this way exhibit not only specular but also glossy and translucent effects, as well as selective attenuation and scattering of light according to wavelength. Moreover, the environment compositing process, which can be performed largely with texture mapping operations, is fast enough to run at interactive speeds on a desktop PC. We compare our results to photos of the same objects in real scenes. Applications of this work include the relighting of objects for virtual and augmented reality, more realistic 3D clip art, and interactive lighting design. CR Categories: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding – modeling and recovery of physical attributes; I.3.3 [Computer Graphics]: Picture/Image Generation – display algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism – color, shading, shadowing, and texture
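
The compositing model described above augments ordinary alpha compositing with per-pixel contributions gathered from environment textures of the new scene. As a rough illustration (a minimal NumPy sketch under assumed data layouts, not the paper's implementation), the code below composites the foreground over a new background and then adds, for each pixel, reflectance-weighted area averages taken over axis-aligned regions of the environment maps; the `mattes` structure, the texture list, and the box regions are hypothetical names introduced for illustration.

```python
import numpy as np

def box_average(texture, box):
    """Average of an environment texture over an axis-aligned box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return texture[y0:y1, x0:x1].reshape(-1, texture.shape[-1]).mean(axis=0)

def environment_composite(F, alpha, B, mattes, env_textures):
    """Hypothetical environment compositing:
        C = F + (1 - alpha) * B + sum_i R_i * mean(T_i over A_i)
    F, B         : (H, W, 3) foreground colour and new background image
    alpha        : (H, W) conventional coverage matte
    mattes       : {(y, x): [(texture_index, R, box), ...]} environment matte terms
    env_textures : list of (h, w, 3) environment maps for the new scene
    """
    C = F + (1.0 - alpha[..., None]) * B
    for (y, x), terms in mattes.items():
        for tex_idx, R, box in terms:
            C[y, x] += np.asarray(R) * box_average(env_textures[tex_idx], box)
    return np.clip(C, 0.0, 1.0)
```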


Citations
Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations


Cites background from "Environment matting and compositing..."

  • ...For the more sophisticated mixture of Gaussians... If we relax the assumption that the environment is distant, the monitor can be placed at several depths to estimate a depth-dependent mapping function (Zongker et al. 1999)....


  • ...2000); (e) environment matte in front of a novel background (Zongker et al. 1999); (f) real-time video environment matte (Chuang et al....


  • ...10: Environment mattes: (a–b) a refractive object can be placed in front of a series of novel backgrounds, and their light patterns will be correctly refracted (Zongker et al. 1999); (c) multiple refractions can be handled using a mixture of Gaussians model, and (d) real-time mattes can be pulled using a single graded colored background (Chuang et al....


Proceedings ArticleDOI
01 Jul 2000
TL;DR: A method is presented to acquire the reflectance field of a human face and use these measurements to render the face under arbitrary changes in lighting and viewpoint; the technique is demonstrated with synthetic renderings of a person's face under novel illumination and viewpoints.
Abstract: We present a method to acquire the reflectance field of a human face and use these measurements to render the face under arbitrary changes in lighting and viewpoint. We first acquire images of the face from a small set of viewpoints under a dense sampling of incident illumination directions using a light stage. We then construct a reflectance function image for each observed image pixel from its values over the space of illumination directions. From the reflectance functions, we can directly generate images of the face from the original viewpoints in any form of sampled or computed illumination. To change the viewpoint, we use a model of skin reflectance to estimate the appearance of the reflectance functions for novel viewpoints. We demonstrate the technique with synthetic renderings of a person's face under novel illumination and viewpoints.
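
Because light transport is linear, the relighting step described here reduces to a weighted sum of the light-stage basis images. The sketch below assumes the data is stored as one basis image per incident lighting direction and that the novel illumination has already been sampled into per-direction RGB weights; the array layout and function name are illustrative, not the authors' code.

```python
import numpy as np

def relight(basis_images, light_weights):
    """Relight a face from light-stage data.
    basis_images : (N, H, W, 3) images, one per incident lighting direction
    light_weights: (N, 3) RGB intensity of the novel illumination sampled at
                   each of the N light-stage directions (assumed given)
    """
    # Linearity of light transport: the relit image is a weighted sum of bases.
    return np.einsum('nc,nhwc->hwc', light_weights, basis_images)
```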

1,102 citations


Cites background or methods from "Environment matting and compositing..."

  • ...By recording the light stage images in high dynamic range [9] and using the process of environment matting [42], we can apply this technique to translucent and refractive objects and reproduce the appearance of the environment in the background; this process is described in the Appendix....


  • ...[42] showed that by illuminating a shiny or refractive object with a set of coded lighting patterns, it could be correctly composited over an arbitrary background by determining the direction and spread of the reflected and refracted rays....


Journal ArticleDOI
01 Jul 2006
TL;DR: Fast methods are presented for separating the direct and global illumination components of a scene measured by a camera and illuminated by a light source, with results for scenes that include complex interreflections, subsurface scattering and volumetric scattering.
Abstract: We present fast methods for separating the direct and global illumination components of a scene measured by a camera and illuminated by a light source. In theory, the separation can be done with just two images taken with a high frequency binary illumination pattern and its complement. In practice, a larger number of images are used to overcome the optical and resolution limitations of the camera and the source. The approach does not require the material properties of objects and media in the scene to be known. However, we require that the illumination frequency is high enough to adequately sample the global components received by scene points. We present separation results for scenes that include complex interreflections, subsurface scattering and volumetric scattering. Several variants of the separation approach are also described. When a sinusoidal illumination pattern is used with different phase shifts, the separation can be done using just three images. When the computed images are of lower resolution than the source and the camera, smoothness constraints are used to perform the separation using a single image. Finally, in the case of a static scene that is lit by a simple point source, such as the sun, a moving occluder and a video camera can be used to do the separation. We also show several simple examples of how novel images of a scene can be computed from the separation results.
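
The two-image idea can be sketched as follows, assuming a stack of images captured under shifted high-frequency binary patterns that each light roughly half of the scene, so that per pixel the maximum observes the direct component plus about half the global component and the minimum observes about half the global component. This ignores the optical and resolution limitations the abstract mentions; names are illustrative.

```python
import numpy as np

def separate_direct_global(images):
    """images: (K, H, W) stack under K shifted half-on binary patterns.
    Per pixel: Lmax ~ Ld + Lg / 2 and Lmin ~ Lg / 2.
    """
    Lmax = images.max(axis=0)
    Lmin = images.min(axis=0)
    direct = Lmax - Lmin        # Ld
    global_ = 2.0 * Lmin        # Lg
    return direct, global_
```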

484 citations


Additional excerpts

  • ...Several techniques have been proposed to reduce the number of images by using coded illumination fields [Zongker et al. 1999; Debevec et al. 2000; Chuang et al. 2000; Lin et al. 2002; Peers and Dutré 2003; Zhu and Yang 2004; Shim and Chen 2005; Sen et al. 2005]....


Proceedings ArticleDOI
25 Jun 2007
TL;DR: A real-time shading model is presented that uses independently estimated normal maps for the specular and diffuse color channels to reproduce some of the perceptually important effects of subsurface scattering.
Abstract: We estimate surface normal maps of an object from either its diffuse or specular reflectance using four spherical gradient illumination patterns. In contrast to traditional photometric stereo, the spherical patterns allow normals to be estimated simultaneously from any number of viewpoints. We present two polarized lighting techniques that allow the diffuse and specular normal maps of an object to be measured independently. For scattering materials, we show that the specular normal maps yield the best record of detailed surface shape while the diffuse normals deviate from the true surface normal due to subsurface scattering, and that this effect is dependent on wavelength. We show several applications of this acquisition technique. First, we capture normal maps of a facial performance simultaneously from several viewing positions using time-multiplexed illumination. Second, we show that high-resolution normal maps based on the specular component can be used with structured light 3D scanning to quickly acquire high-resolution facial surface geometry using off-the-shelf digital still cameras. Finally, we present a real-time shading model that uses independently estimated normal maps for the specular and diffuse color channels to reproduce some of the perceptually important effects of subsurface scattering.
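
A hedged sketch of the ratio-based normal estimate, assuming gradient patterns of the form P_i(w) = (w_i + 1) / 2 together with a constant (full-on) pattern and a single known view direction. The specular branch treats the lobe as mirror-like and takes the half-vector between the recovered reflection direction and the viewer; the diffuse branch simply normalizes the ratio vector. The exact scaling and array names are assumptions, not the authors' code.

```python
import numpy as np

def gradient_normals(I_c, I_x, I_y, I_z, view_dir=(0.0, 0.0, 1.0), specular=True):
    """I_c          : (H, W) image under constant full-sphere illumination
    I_x, I_y, I_z: (H, W) images under the X, Y, Z spherical gradient patterns
    """
    eps = 1e-8
    s = np.stack([2.0 * I_x / (I_c + eps) - 1.0,
                  2.0 * I_y / (I_c + eps) - 1.0,
                  2.0 * I_z / (I_c + eps) - 1.0], axis=-1)
    if specular:
        s = s + np.asarray(view_dir)   # half-vector between reflection and view
    return s / (np.linalg.norm(s, axis=-1, keepdims=True) + eps)
```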

378 citations

Journal ArticleDOI
01 Jul 2005
TL;DR: The approach is to illuminate the subject with a sequence of time-multiplexed basis lighting conditions, and to record these conditions with a high-speed video camera so that many conditions are recorded in the span of the desired output frame interval.
Abstract: We present a technique for capturing an actor's live-action performance in such a way that the lighting and reflectance of the actor can be designed and modified in postproduction. Our approach is to illuminate the subject with a sequence of time-multiplexed basis lighting conditions, and to record these conditions with a high-speed video camera so that many conditions are recorded in the span of the desired output frame interval. We investigate several lighting bases for representing the sphere of incident illumination using a set of discrete LED light sources, and we estimate and compensate for subject motion using optical flow and image warping based on a set of tracking frames inserted into the lighting basis. To composite the illuminated performance into a new background, we include a time-multiplexed matte within the basis. We also show that the acquired data enables time-varying surface normals, albedo, and ambient occlusion to be estimated, which can be used to transform the actor's reflectance to produce both subtle and stylistic effects.
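
A minimal sketch of how one relit, composited output frame might be formed from the basis frames of a single output interval, assuming motion compensation has already been applied and that an alpha matte has been extracted from the time-multiplexed matte condition; the names and array layout are hypothetical.

```python
import numpy as np

def relight_and_composite(basis_frames, weights, matte, background):
    """basis_frames: (N, H, W, 3) motion-compensated frames for one output
                     interval, one per basis lighting condition
    weights     : (N,) intensity assigned to each basis light in the new design
    matte       : (H, W) alpha from the time-multiplexed matte condition
    background  : (H, W, 3) new background plate
    """
    relit = np.einsum('n,nhwc->hwc', weights, basis_frames)
    a = matte[..., None]
    return a * relit + (1.0 - a) * background
```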

300 citations

References
Book
11 Feb 1984
TL;DR: This invaluable reference helps readers assess and simplify problems and their essential requirements and complexities, giving them all the necessary data and methodology to master current theoretical developments and applications, as well as create new ones.
Abstract: Image Processing and Mathematical Morphology (Frank Y. Shih, 2009): In the development of digital multimedia, the importance and impact of image processing and mathematical morphology are well documented in areas ranging from automated vision detection and inspection to object recognition, image analysis and pattern recognition. Those working in these ever-evolving fields require a solid grasp of basic fundamentals, theory, and related applications; few books can provide the unique tools for learning contained in this text. Image Processing and Mathematical Morphology: Fundamentals and Applications is a comprehensive, wide-ranging overview of morphological mechanisms and techniques and their relation to image processing. More than merely a tutorial on vital technical information, the book places this knowledge into a theoretical framework. This helps readers analyze key principles and architectures and then use the author's novel ideas on implementation of advanced algorithms to formulate a practical and detailed plan to develop and foster their own ideas. The book: presents the history and state-of-the-art techniques related to image morphological processing, with numerous practical examples; gives readers a clear tutorial on complex technology and other tools that rely on their intuition for a clear understanding of the subject; includes an updated bibliography and useful graphs and illustrations; and examines several new algorithms in great detail so that readers can adapt them to derive their own solution approaches. This invaluable reference helps readers assess and simplify problems and their essential requirements and complexities, giving them all the necessary data and methodology to master current theoretical developments and applications, as well as create new ones.

9,566 citations


"Environment matting and compositing..." refers methods in this paper

  • ...Next, we use morphological operations [24] (an open followed by a close, both with a 5 × 5 box as the structuring element) to clean up the resulting alpha channel, removing any stray isolated covered or uncovered pixels....


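The morphological cleanup step quoted above (an open followed by a close with a 5 × 5 box) can be reproduced with off-the-shelf routines. The following is a sketch using SciPy rather than the paper's own code, and it assumes the alpha estimate is first thresholded into a binary coverage mask.

```python
import numpy as np
from scipy.ndimage import binary_opening, binary_closing

def clean_coverage_mask(alpha, threshold=0.5):
    """Remove stray isolated covered or uncovered pixels from a coverage estimate."""
    box = np.ones((5, 5), dtype=bool)           # 5 x 5 box structuring element
    mask = alpha > threshold
    mask = binary_opening(mask, structure=box)  # drop isolated covered pixels
    mask = binary_closing(mask, structure=box)  # fill isolated uncovered pixels
    return mask
```
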
Proceedings ArticleDOI
03 Aug 1997
TL;DR: This work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing; a few applications of having high dynamic range radiance maps are demonstrated.
Abstract: We present a method of recovering high dynamic range radiance maps from photographs taken with conventional imaging equipment. In our method, multiple photographs of the scene are taken with different amounts of exposure. Our algorithm uses these differently exposed photographs to recover the response function of the imaging process, up to factor of scale, using the assumption of reciprocity. With the known response function, the algorithm can fuse the multiple photographs into a single, high dynamic range radiance map whose pixel values are proportional to the true radiance values in the scene. We demonstrate our method on images acquired with both photochemical and digital imaging processes. We discuss how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing. Lastly, we demonstrate a few applications of having high dynamic range radiance maps, such as synthesizing realistic motion blur and simulating the response of the human visual system.
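
Once the response curve has been recovered, the fusion step can be sketched as a weighted average in the log domain; the hat weighting and array shapes below are illustrative assumptions, not the authors' code.

```python
import numpy as np

def merge_hdr(images, exposure_times, g):
    """images        : (K, H, W) pixel values in [0, 255] from K exposures
    exposure_times: (K,) shutter times
    g             : (256,) recovered response curve with g(Z) = ln(E * dt)
    Returns a radiance map proportional to true scene radiance.
    """
    Z = images.astype(np.int64)
    w = np.minimum(Z, 255 - Z).astype(np.float64)   # hat weighting favours mid-range pixels
    log_E = (w * (g[Z] - np.log(exposure_times)[:, None, None])).sum(axis=0)
    return np.exp(log_E / (w.sum(axis=0) + 1e-8))
```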

2,967 citations

Proceedings ArticleDOI
Franklin C. Crow
01 Jan 1984
TL;DR: Texture-map computations can be made tractable through use of precalculated tables which allow computational costs independent of the texture density, and the cost and performance of the new technique are compared to previous techniques.
Abstract: Texture-map computations can be made tractable through use of precalculated tables which allow computational costs independent of the texture density. The first example of this technique, the “mip” map, uses a set of tables containing successively lower-resolution representations filtered down from the discrete texture function. An alternative method using a single table of values representing the integral over the texture function rather than the function itself may yield superior results at similar cost. The necessary algorithms to support the new technique are explained. Finally, the cost and performance of the new technique is compared to previous techniques.
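
A minimal single-channel sketch of the table idea: build the summed-area table once, after which the average over any axis-aligned box costs four lookups regardless of the box size (the function names are illustrative).

```python
import numpy as np

def summed_area_table(texture):
    """S[y, x] = sum of texture over rows < y and columns < x (zero-padded)."""
    return np.pad(texture.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))

def box_mean(S, x0, y0, x1, y1):
    """Average over the box [x0, x1) x [y0, y1) using four table lookups."""
    total = S[y1, x1] - S[y0, x1] - S[y1, x0] + S[y0, x0]
    return total / ((x1 - x0) * (y1 - y0))
```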

1,455 citations


"Environment matting and compositing..." refers methods in this paper

  • ...We use summed area tables [8], which allow us to compute the average value of an axis-aligned rectangle quickly....


  • ...With summed area tables [8], we can achieve texture antialiasing as well as substantial gloss or translucency at interactive rates....


Book
01 Dec 1988
TL;DR: The case for four-channel pictures is presented, demonstrating that a matte component can be computed similarly to the color channels, and guidelines for the generation of elements and the arithmetic for their arbitrary compositing are discussed.
Abstract: Most computer graphics pictures have been computed all at once, so that the rendering program takes care of all computations relating to the overlap of objects. There are several applications, however, where elements must be rendered separately, relying on compositing techniques for the anti-aliased accumulation of the full image. This paper presents the case for four-channel pictures, demonstrating that a matte component can be computed similarly to the color channels. The paper discusses guidelines for the generation of elements and the arithmetic for their arbitrary compositing.

1,287 citations


"Environment matting and compositing..." refers background in this paper

  • ...[20] Thomas Porter and Tom Duff....


  • ...Porter and Duff's alpha at each pixel allows for images to be acquired or rendered in layers and then combined [20]....


  • ...In 1984, Porter and Duff [20] introduced the digital analog of the matte — the alpha channel — and showed how synthetic images with alpha could be useful in creating complex digital images....

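The "over" operation these excerpts refer to can be sketched as follows, assuming colour is stored premultiplied by alpha as in Porter and Duff's formulation; names are illustrative.

```python
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb, bg_alpha):
    """Porter-Duff 'over' with premultiplied colour: the foreground contributes
    fully and the background shows through where the foreground is transparent.
    """
    a = fg_alpha[..., None]
    out_rgb = fg_rgb + (1.0 - a) * bg_rgb
    out_alpha = fg_alpha + (1.0 - fg_alpha) * bg_alpha
    return out_rgb, out_alpha
```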