
Showing papers by "Paul Debevec published in 2008"


Journal ArticleDOI
01 Dec 2008
TL;DR: This work presents a novel method for acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps that can be used to automatically generate high-frequency wrinkle and pore details on top of many existing facial animation systems.
Abstract: We present a novel method for acquisition, modeling, compression, and synthesis of realistic facial deformations using polynomial displacement maps. Our method consists of an analysis phase where the relationship between motion capture markers and detailed facial geometry is inferred, and a synthesis phase where novel detailed animated facial geometry is driven solely by a sparse set of motion capture markers. For analysis, we record the actor wearing facial markers while performing a set of training expression clips. We capture real-time high-resolution facial deformations, including dynamic wrinkle and pore detail, using interleaved structured light 3D scanning and photometric stereo. Next, we compute displacements between a neutral mesh driven by the motion capture markers and the high-resolution captured expressions. These geometric displacements are stored in a polynomial displacement map which is parameterized according to the local deformations of the motion capture dots. For synthesis, we drive the polynomial displacement map with new motion capture data. This allows the recreation of large-scale muscle deformation, medium and fine wrinkles, and dynamic skin pore detail. Applications include the compression of existing performance data and the synthesis of new performances. Our technique is independent of the underlying geometry capture system and can be used to automatically generate high-frequency wrinkle and pore details on top of many existing facial animation systems.
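The per-texel fit behind a polynomial displacement map can be sketched as follows. This is a hypothetical illustration, not the authors' code: the function names, the biquadratic basis, and the reduction of the local mocap-dot deformation to two parameters are all assumptions.

```python
import numpy as np

# Hypothetical sketch: fit a per-texel polynomial displacement map (PDM).
# Displacement d is modeled as a biquadratic polynomial in two local
# deformation parameters (p1, p2) derived from the motion-capture markers;
# coefficients are fit by least squares over the training frames.

def poly_basis(p1, p2):
    """Biquadratic basis in the two local deformation parameters."""
    return np.stack([np.ones_like(p1), p1, p2, p1 * p1, p1 * p2, p2 * p2], axis=-1)

def fit_pdm(params, displacements):
    """params: (frames, 2) deformation parameters per training frame.
    displacements: (frames,) observed displacement for one texel.
    Returns the 6 polynomial coefficients for this texel."""
    A = poly_basis(params[:, 0], params[:, 1])  # (frames, 6) design matrix
    coeffs, *_ = np.linalg.lstsq(A, displacements, rcond=None)
    return coeffs

def evaluate_pdm(coeffs, p1, p2):
    """Synthesis: drive the PDM with new deformation parameters."""
    return poly_basis(np.asarray(p1), np.asarray(p2)) @ coeffs

# Toy check: recover a known polynomial from noiseless samples.
rng = np.random.default_rng(0)
params = rng.uniform(-1, 1, size=(50, 2))
true = np.array([0.1, 0.5, -0.3, 0.2, 0.0, 0.4])
d = poly_basis(params[:, 0], params[:, 1]) @ true
coeffs = fit_pdm(params, d)
```

At synthesis time, `evaluate_pdm` would be run per texel with parameters computed from the new mocap data, exactly because the fit stores only six coefficients per texel — which is also what makes the representation useful for compression.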

168 citations


Journal ArticleDOI
01 Dec 2008
TL;DR: In this article, a practical method for modeling layered facial reflectance consisting of specular reflectance, single scattering, and shallow and deep subsurface scattering is presented, where the authors estimate parameters of appropriate reflectance models for each of these layers from just 20 photographs recorded in a few seconds from a single viewpoint.
Abstract: We present a practical method for modeling layered facial reflectance consisting of specular reflectance, single scattering, and shallow and deep subsurface scattering. We estimate parameters of appropriate reflectance models for each of these layers from just 20 photographs recorded in a few seconds from a single viewpoint. We extract spatially-varying specular reflectance and single-scattering parameters from polarization-difference images under spherical and point source illumination. Next, we employ direct-indirect separation to decompose the remaining multiple scattering observed under cross-polarization into shallow and deep scattering components to model the light transport through multiple layers of skin. Finally, we match appropriate diffusion models to the extracted shallow and deep scattering components for different regions on the face. We validate our technique by comparing renderings of subjects to reference photographs recorded from novel viewpoints and under novel illumination conditions.
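The polarization-difference step the abstract relies on can be illustrated with a minimal sketch. The separation rule below is the standard one (subsurface scattering depolarizes light, so crossed polarizers see only half of the diffuse component); the image names and values are synthetic, not the paper's data.

```python
import numpy as np

# Illustrative sketch (not the paper's full pipeline): separating specular
# and diffuse reflectance with polarization-difference imaging under
# polarized illumination.

def polarization_separation(parallel, cross):
    """parallel, cross: images with the camera polarizer parallel /
    perpendicular to the polarized illumination.
    Returns (specular, diffuse) estimates."""
    specular = np.clip(parallel - cross, 0.0, None)  # difference isolates specular
    diffuse = 2.0 * cross                            # depolarized light splits evenly
    return specular, diffuse

# Synthetic test images built from known ground truth.
diffuse_true = np.array([[0.4, 0.2], [0.1, 0.3]])
specular_true = np.array([[0.5, 0.0], [0.0, 0.1]])
cross = diffuse_true / 2.0
parallel = specular_true + diffuse_true / 2.0
spec, diff = polarization_separation(parallel, cross)
```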

132 citations


Patent
17 Apr 2008
TL;DR: In this paper, an interactive, autostereoscopic system for displaying an object in 3D includes a mirror configured to spin around a vertical axis when actuated by a motor, a high speed video projector, and a processing system including a graphics card interfaced to the video projector.
Abstract: An interactive, autostereoscopic system for displaying an object in 3D includes a mirror configured to spin around a vertical axis when actuated by a motor, a high speed video projector, and a processing system including a graphics card interfaced to the video projector. An anisotropic reflector is bonded onto an inclined surface of the mirror. The video projector projects video signals of the object from the projector onto the inclined surface of the mirror while the mirror is spinning, so that light rays representing the video signals are redirected toward a field of view of a 360 degree range. The processing system renders the redirected light rays so as to interactively generate a horizontal-parallax 3D display of the object. Vertical parallax can be included in the display by adjusting vertically the displayed views of the object, in response to tracking of viewer motion by a tracking system.
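The projector bandwidth such a design implies follows from simple arithmetic: one rendered view per angular position, repeated every revolution. The numbers below are hypothetical, not taken from the patent.

```python
# Back-of-the-envelope sketch (hypothetical numbers): a spinning anisotropic
# mirror shows a different rendered view at each angular position, so the
# projector must deliver views_per_revolution * revolutions_per_second
# frames each second.

def required_projector_fps(views_per_revolution, revolutions_per_second):
    return views_per_revolution * revolutions_per_second

# e.g. one view per 1.25 degrees (288 views) at 15 rotations per second
fps = required_projector_fps(288, 15)
```

This is why the patent specifies a high-speed projector: ordinary 60 Hz projection cannot supply hundreds of distinct views per revolution.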

62 citations


Patent
17 Apr 2008
TL;DR: In this paper, a plurality of light sources with controllable intensities generates one or more gradient illumination patterns; the light sources are configured and arranged to illuminate the surface of an object with the gradient illumination patterns.
Abstract: An apparatus for generating a surface normal map of an object may include a plurality of light sources having intensities that are controllable so as to generate one or more gradient illumination patterns. The light sources are configured and arranged to illuminate the surface of the object with the gradient illumination patterns. A camera may receive light reflected from the illuminated surface of the object, and generate data representative of the reflected light. A processing system may process the data so as to estimate the surface normal map of the surface of the object. A specular normal map and a diffuse normal map of the surface of the object may be generated separately, by placing polarizers on the light sources and in front of the camera so as to illuminate the surface of the object with polarized spherical gradient illumination patterns.
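The normal-map estimation step can be sketched concretely. The encoding below follows the published spherical-gradient result (for a Lambertian surface the gradient-to-full image ratio is 1/2 + n_i/3, and the uniform 2/3 scale drops out in normalization); the function name and the single-pixel test data are assumptions for illustration.

```python
import numpy as np

# Sketch of estimating normals from spherical gradient illumination: under a
# linear gradient pattern along axis i, the ratio of the gradient-lit image
# to the fully-lit image encodes that component of the surface normal.

def normals_from_gradients(ix, iy, iz, ifull, eps=1e-8):
    """ix, iy, iz: images under X/Y/Z gradient illumination.
    ifull: image under full (constant) spherical illumination.
    Returns a unit normal map of shape ix.shape + (3,)."""
    n = np.stack([2.0 * ix / (ifull + eps) - 1.0,
                  2.0 * iy / (ifull + eps) - 1.0,
                  2.0 * iz / (ifull + eps) - 1.0], axis=-1)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)

# Synthetic single-pixel check: a Lambertian point with a known normal,
# using the Lambertian ratio ifull * (1/2 + n_i / 3).
n_true = np.array([0.6, 0.0, 0.8])
albedo = 0.7
ifull = np.array([albedo])
ix = ifull * (0.5 + n_true[0] / 3.0)
iy = ifull * (0.5 + n_true[1] / 3.0)
iz = ifull * (0.5 + n_true[2] / 3.0)
n_est = normals_from_gradients(ix, iy, iz, ifull)
```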

19 citations


Proceedings ArticleDOI
11 Aug 2008
TL;DR: This class outlines recent advances in high dynamic range imaging (HDRI) - from capture to image-based lighting to display, and the trade-offs at each step are assessed allowing attendees to make informed choices about data capture techniques, file formats and tone reproduction operators.
Abstract: This class outlines recent advances in high dynamic range imaging (HDRI) - from capture to image-based lighting to display. In a hands-on approach, we show how HDR images and video can be captured, the file formats available to store them, and the algorithms required to prepare them for display on low dynamic range displays. The trade-offs at each step are assessed allowing attendees to make informed choices about data capture techniques, file formats and tone reproduction operators. In addition, the latest developments in image-based lighting will be presented.
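One capture step the course covers, merging an exposure bracket into an HDR radiance map, can be sketched as follows, assuming a linear camera response (a simplification; the class also treats response-curve recovery and tone reproduction).

```python
import numpy as np

# Minimal sketch: merge an exposure bracket into an HDR radiance map.
# Each pixel's radiance estimate is a weighted average of (pixel / exposure
# time), with a hat weight that discounts under- and over-exposed samples.

def merge_hdr(images, exposure_times):
    """images: list of linear images with values in [0, 1];
    exposure_times: shutter times in seconds."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # hat weight, peaked at mid-gray
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-8)

# Synthetic scene: true radiance 4.0, captured at three shutter speeds.
radiance = 4.0
times = [1 / 30, 1 / 60, 1 / 120]
images = [np.clip(np.full((2, 2), radiance * t), 0.0, 1.0) for t in times]
hdr = merge_hdr(images, times)
```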

10 citations


01 Jan 2008
TL;DR: A real-time geometry capture approach to digital face replacement for a dynamic performance that goes beyond the traditional scope of face replacement techniques, which are either completely image-based and hence view-dependent, or typically capture a performance under a fixed lighting condition and hence cannot be relit for use in other performances.
Abstract: In this work, we develop a real-time geometry capture approach to digital face replacement for a dynamic performance. Digital face replacement has major applications in visual effects for motion pictures as well as in interactive applications such as video games and simulation and training environments. Our approach extends the 3D face scanning technology developed at the ICT Graphics Lab [Ma 2008] to support seamless face replacement along with separated diffuse and specular albedo textures and surface normals for high-quality post-production relighting of the captured performance (see Figure 1). Such an approach goes beyond the traditional scope of face replacement techniques, which are either completely image-based and hence view-dependent, or typically capture a performance under a fixed lighting condition and hence cannot be relit for use in other performances [Zhang and Yau 2006].

6 citations


Proceedings ArticleDOI
11 Aug 2008
TL;DR: A high-resolution, real-time facial performance capture system based on a spherical gradient photometric stereo technique and multi-view stereo that allows details such as dynamic wrinkles and fine-scale stretching and compression of skin pores to be captured in real- time.
Abstract: We developed a high-resolution, real-time facial performance capture system based on a spherical gradient photometric stereo technique [Ma et al. 2007] and multi-view stereo. We use four spherical gradient illumination patterns to estimate normal maps of subjects. A structured-light-assisted two-view stereo system is employed to acquire 3D positions of the subject. The captured stereo geometry is then enhanced using the gradient normals. This allows details such as dynamic wrinkles and fine-scale stretching and compression of skin pores to be captured in real-time.
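The enhancement step, fusing smooth stereo geometry with detailed photometric normals, can be illustrated in 1D. This is a hypothetical sketch of the general idea (a screened least-squares fusion), not the authors' implementation; the function name, weights, and test signal are assumptions.

```python
import numpy as np

# Hypothetical 1D sketch: fuse a coarse stereo depth profile with detailed
# surface slopes derived from photometric normals, by solving the screened
# least-squares problem
#   minimize |D z - g|^2 + lam * |z - z0|^2
# where D is the finite-difference operator, g the normal-derived slopes,
# and z0 the (smooth) stereo depth.

def fuse_depth_with_slopes(z0, g, lam=1e-5):
    n = len(z0)
    D = np.zeros((n - 1, n))
    for i in range(n - 1):
        D[i, i], D[i, i + 1] = -1.0, 1.0
    A = D.T @ D + lam * np.eye(n)       # normal equations of the objective
    b = D.T @ g + lam * z0
    return np.linalg.solve(A, b)

# Ground truth: a wrinkle that the coarse stereo pass smoothed away.
x = np.linspace(0, 1, 64)
z_true = 0.05 * np.sin(12 * np.pi * x)
z0 = np.zeros_like(x)                   # coarse stereo depth: flat
g = np.diff(z_true)                     # slopes recovered from the normals
z = fuse_depth_with_slopes(z0, g)
```

The low spatial frequencies come from the stereo anchor `z0` while the fine wrinkle detail is reconstructed from the slope constraints, which is the division of labor between the two-view stereo and the gradient normals described in the abstract.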

4 citations


Proceedings ArticleDOI
11 Aug 2008
TL;DR: This course offers a diverse but practical guide to topics in image capture and manipulation methods for generating compelling pictures for computer graphics and for extracting scene properties for computer vision, with several examples.
Abstract: Computational photography combines plentiful computing, digital sensors, modern optics, many varieties of actuators, probes and smart lights to escape the limitations of traditional film cameras and enables novel imaging applications. Unbounded dynamic range, variable focus, resolution, and depth of field, hints about shape, reflectance, and lighting, and new interactive forms of photos that are partly snapshots and partly videos, performance capture and interchangeably relighting real and virtual characters are just some of the new applications emerging in Computational Photography. The computational techniques encompass methods from modification of imaging parameters during capture to sophisticated reconstructions from indirect measurements.We will bypass basic and introductory material presented in earlier versions of this course (Computational Photography 2005, 6, 7) and expand coverage of more recent topics. Emphasizing more recent work in computational photography and related fields (2006 or later) this course will give more attention to advanced topics only briefly touched before, including tomography, heterodyning and Fourier Slice applications, inverse problems, gradient illumination, novel optics, emerging sensors and social impact of computational photography. With this deeper coverage, the course offers a diverse but practical guide to topics in image capture and manipulation methods for generating compelling pictures for computer graphics and for extracting scene properties for computer vision, with several examples.

2 citations


Proceedings ArticleDOI
11 Aug 2008
TL;DR: The approach is to employ the recently introduced direct-indirect separation technique of Nayar et al. to decompose the diffuse scattering of light in skin into shallow and deep scattering components, from which parameters of a simplified two-layer scattering model are estimated.
Abstract: Accurate simulation of the diffuse scattering of light in skin is important for achieving the characteristic “softness” in skin appearance and subtle effects such as light bleeding at shadow boundaries. While the dipole diffusion model has been widely used in computer graphics to efficiently simulate these effects for translucent materials, it tends to over-smooth the details near the skin surface, resulting in an unnatural “waxy” appearance for faces. Donner & Jensen [2005] recently introduced a multi-layer subsurface scattering model for rendering human skin more realistically. They relate the multi-layer model to the various epidermal and dermal layers of skin and provide scattering parameters for the layers from the tissue optics literature. While providing more convincing results for human skin than the dipole model, the greater complexity of the multi-layer model also makes it more challenging to fit the scattering parameters from measured data. Unlike the dipole model, it is unclear how to fit the various parameters of the multi-layer model from a typically observed scattering profile from a live subject. We seek to address this problem in this work. Our approach is to employ the recently introduced direct-indirect separation technique of Nayar et al. [2006] in order to decompose the diffuse scattering of light in skin into a shallow and a deep scattering component respectively. Given the separated components and an additionally observed scattering profile, we then estimate parameters of a simplified two-layer scattering model by applying the Kubelka-Munk theory to the total diffusely reflected radiance.
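The Kubelka-Munk relation the abstract invokes is standard two-flux theory and can be stated concretely; the sketch below shows only the textbook single-layer form, not the paper's two-layer fit.

```python
import math

# Kubelka-Munk two-flux theory: for an optically thick layer with absorption
# coefficient K and scattering coefficient S, the diffuse reflectance is
#   R_inf = 1 + K/S - sqrt((K/S)^2 + 2*K/S),
# and inverting it gives the familiar K/S = (1 - R)^2 / (2R).

def km_reflectance(k_over_s):
    return 1.0 + k_over_s - math.sqrt(k_over_s ** 2 + 2.0 * k_over_s)

def km_k_over_s(r):
    return (1.0 - r) ** 2 / (2.0 * r)

r = km_reflectance(0.25)  # more absorption relative to scattering -> darker
```

Fitting a model to measured data amounts to running the inverse mapping: an observed diffuse reflectance constrains the ratio K/S of the layer.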

2 citations


01 Jan 2008
TL;DR: This work proposes a novel post-production facial performance relighting system for human actors that uses just a dataset of view-dependent facial appearances with a neutral expression, captured for a static subject using a Light Stage apparatus, enabling image-based relighting of the entire sequence.
Abstract: We propose a novel post-production facial performance relighting system for human actors. Our system uses just a dataset of view-dependent facial appearances with a neutral expression, captured for a static subject using a Light Stage apparatus. For the actual performance, however, a potentially different actor is captured under known, but static, illumination. During post-production, the reflectance field of the reference dataset actor is transferred onto the dynamic performance, enabling image-based relighting of the entire sequence. Our approach makes post-production relighting more practical and could easily be incorporated in a traditional production pipeline since it does not require additional hardware during principal photography. Additionally, we show that our system is suitable for real-time post-production illumination editing.
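The image-based relighting at the core of such a system rests on the linearity of light transport: a frame is relit under a novel environment as a weighted sum of one-light-at-a-time basis images. The sketch below is a minimal illustration with synthetic data, not the paper's transfer pipeline.

```python
import numpy as np

# Minimal image-based relighting sketch: with a reflectance field captured
# as one basis image per light-stage direction, relighting under a novel
# environment is a weighted sum of the basis images.

def relight(basis_images, env_weights):
    """basis_images: (num_lights, H, W) one-light-at-a-time captures.
    env_weights: (num_lights,) novel-environment intensity sampled at each
    light direction. Returns the relit (H, W) image."""
    return np.tensordot(env_weights, basis_images, axes=1)

rng = np.random.default_rng(1)
basis = rng.uniform(size=(4, 2, 2))           # 4 synthetic basis images
weights = np.array([0.5, 0.0, 1.0, 0.25])     # synthetic environment samples
relit = relight(basis, weights)
```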

1 citation