Author

Douglas E. Zongker

Other affiliations: Microsoft
Bio: Douglas E. Zongker is an academic researcher from the University of Washington. The author has contributed to research in the topics Graphics and Font. The author has an h-index of 7 and has co-authored 8 publications receiving 626 citations. Previous affiliations of Douglas E. Zongker include Microsoft.

Papers
Proceedings ArticleDOI
01 Jul 1999
TL;DR: This paper introduces a new process, environment matting, which captures not just a foreground object and its traditional opacity matte from a real-world scene, but also a description of how that object refracts and reflects light, called an environment matte.
Abstract: This paper introduces a new process, environment matting, which captures not just a foreground object and its traditional opacity matte from a real-world scene, but also a description of how that object refracts and reflects light, which we call an environment matte. The foreground object can then be placed in a new environment, using environment compositing, where it will refract and reflect light from that scene. Objects captured in this way exhibit not only specular but glossy and translucent effects, as well as selective attenuation and scattering of light according to wavelength. Moreover, the environment compositing process, which can be performed largely with texture mapping operations, is fast enough to run at interactive speeds on a desktop PC. We compare our results to photos of the same objects in real scenes. Applications of this work include the relighting of objects for virtual and augmented reality, more realistic 3D clip art, and interactive lighting design. CR Categories: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding – modeling and recovery of physical attributes; I.3.3 [Computer Graphics]: Picture/Image Generation – display algorithms; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism – color, shading, shadowing, and texture
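The compositing model described above lends itself to a short sketch. Below is a minimal per-pixel version in NumPy, assuming each pixel's matte stores a per-channel reflectance coefficient and one axis-aligned region of a single environment texture; the function names and the single-texture simplification are ours, not the paper's.

```python
import numpy as np

def box_average(texture, box):
    """Average an environment texture over an axis-aligned box (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return texture[y0:y1, x0:x1].reshape(-1, texture.shape[-1]).mean(axis=0)

def composite_pixel(foreground, alpha, background, mattes, environment):
    """Environment-composite a single pixel.

    foreground  : (3,) foreground color from the matting stage
    alpha       : scalar coverage from the traditional opacity matte
    background  : (3,) color of the new scene directly behind the pixel
    mattes      : list of (reflectance, box) pairs -- a (3,) per-channel
                  attenuation and the axis-aligned region of `environment`
                  this pixel gathers light from
    environment : (H, W, 3) texture map of the new scene
    """
    color = foreground + (1.0 - alpha) * background
    for reflectance, box in mattes:
        color += reflectance * box_average(environment, box)
    return np.clip(color, 0.0, 1.0)
```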

254 citations

Proceedings ArticleDOI
01 Jul 2000
TL;DR: This work extends environment matting in two opposite directions: recovering a more accurate model at the expense of using additional structured light backdrops, and obtaining a simplified matte using just a single backdrop.
Abstract: Environment matting is a generalization of traditional bluescreen matting. By photographing an object in front of a sequence of structured light backdrops, a set of approximate light-transport paths through the object can be computed. The original environment matting research chose a middle ground—using a moderate number of photographs to produce results that were reasonably accurate for many objects. In this work, we extend the technique in two opposite directions: recovering a more accurate model at the expense of using additional structured light backdrops, and obtaining a simplified matte using just a single backdrop. The first extension allows for the capture of complex and subtle interactions of light with objects, while the second allows for video capture of colorless objects in motion.
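To make the structured light backdrops concrete, here is a hedged sketch of one common way to build such a sequence: black-and-white stripe patterns whose per-column Gray codes identify which backdrop column each camera pixel imaged. This scheme is an assumption for illustration; the exact patterns photographed in the paper may differ.

```python
import numpy as np

def gray_code_backdrops(width, height):
    """Generate a sequence of binary stripe backdrops whose per-column
    binary-reflected Gray codes let each camera pixel be traced back to
    the backdrop column it saw."""
    n_bits = int(np.ceil(np.log2(width)))
    columns = np.arange(width)
    gray = columns ^ (columns >> 1)                      # Gray-code each column
    backdrops = []
    for bit in reversed(range(n_bits)):
        stripe = ((gray >> bit) & 1).astype(np.float64)  # 0/1 per column
        backdrops.append(np.tile(stripe, (height, 1)))   # replicate down rows
    return backdrops                                     # most significant first
```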

170 citations

Proceedings ArticleDOI
24 Jul 1998
TL;DR: This paper introduces the idea of "adaptive clip art," which encapsulates the rules for creating a specific ornamental pattern and can be used to generate patterns tailored to fit a particularly shaped region of the plane.
Abstract: This paper describes some of the principles of traditional floral ornamental design, and explores ways in which these designs can be created algorithmically. It introduces the idea of “adaptive clip art,” which encapsulates the rules for creating a specific ornamental pattern. Adaptive clip art can be used to generate patterns that are tailored to fit a particularly shaped region of the plane. If the region is resized or reshaped, the ornament can be automatically regenerated to fill this new area in an appropriate way. Our ornamental patterns are created in two steps: first, the geometry of the pattern is generated as a set of two-dimensional curves and filled boundaries; second, this geometry is rendered in any number of styles. We demonstrate our approach with a variety of floral ornamental designs. CR Categories: I.3.3 [Computer Graphics]: Picture/Image Generation; I.3.4 [Computer Graphics]: Graphics Utilities— Picture description languages.

98 citations

Proceedings ArticleDOI
26 Jul 2003
TL;DR: This paper describes a set of common authoring paradigms that the authors believe a system for building animated presentations should support, and presents the latest version of their script-based system for creating animated presentations, called SLITHY.
Abstract: Computers are used to display visuals for millions of live presentations each day, and yet only the tiniest fraction of these make any real use of the powerful graphics hardware available on virtually all of today's machines. In this paper, we describe our efforts toward harnessing this power to create better types of presentations: presentations that include meaningful animation as well as at least a limited degree of interactivity. Our approach has been iterative, alternating between creating animated talks using available tools, then improving the tools to better support the kinds of talk we wanted to make. Through this cyclic design process, we have identified a set of common authoring paradigms that we believe a system for building animated presentations should support. We describe these paradigms and present the latest version of our script-based system for creating animated presentations, called SLITHY. We show several examples of actual animated talks that were created and given with versions of SLITHY, including one talk presented at SIGGRAPH 2000 and four talks presented at SIGGRAPH 2002. Finally, we describe a set of design principles that we have found useful for making good use of animation in presentation.
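The central idea, a presentation authored as a script rather than as static slides, can be suggested with a small sketch. The code below is hypothetical and does not reproduce SLITHY's actual interface: a slide is a procedure of a time parameter, and animation falls out of re-evaluating it each frame.

```python
# Hypothetical sketch of the script-based idea; SLITHY's real API is not
# reproduced here. The "drawing" simply returns element positions and
# opacities as plain data that a renderer could consume.

def title_slide(t):
    """Describe one frame of an animated title slide at time t in [0, 1]."""
    return {
        # Slide the title in from the left during the first half.
        "title": {"text": "Animated Presentations",
                  "x": -300 + 300 * min(2 * t, 1.0), "y": 50},
        # Fade the subtitle in during the second half.
        "subtitle": {"text": "A script-based approach",
                     "x": 0, "y": 120, "alpha": max(0.0, 2 * t - 1.0)},
    }

# Rendering thirty frames is just evaluating the procedure at thirty times.
frames = [title_slide(i / 29) for i in range(30)]
```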

53 citations

Patent
20 Dec 2004
TL;DR: This patent describes methods and systems for automatically hinting fonts, particularly TrueType fonts, by transferring hints from one font to another through modification of values in a control value table (CVT).
Abstract: Methods and systems for automatically hinting fonts, particularly TrueType fonts, by transferring hints from one font to another are described. In one embodiment, a character or glyph (i.e. a source character) from a first font is selected and provides hints that are to be transferred to a character or glyph of a second font (i.e. a target character). The hints comprise statements defined in terms of control points or knots that define the shape or appearance of a character. A match is found between individual control points on the different characters and then used as the basis for transferring the hints. In one embodiment, hints are transferred by modifying values in a control value table (CVT) that contains entries that are used to constrain the control points of the source character. The CVT values are modified so that they now constrain corresponding control points in the target character.
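As a rough illustration of the transfer step, here is a hedged sketch: once control points have been matched, each control value table entry can be re-measured on the target glyph. The data structures below are invented for illustration; real TrueType hinting involves considerably more machinery.

```python
# Invented data structures, for illustration only -- not the TrueType format.

def transfer_cvt(source_cvt, point_match, target_points):
    """Re-measure CVT distances on a target glyph.

    source_cvt    : {entry_id: (point_a, point_b)} -- the pair of source
                    control points whose distance each entry constrains
    point_match   : {source_point_id: target_point_id} correspondence
    target_points : {point_id: (x, y)} outline coordinates of the target
    Returns {entry_id: distance} measured on the target glyph.
    """
    new_cvt = {}
    for entry_id, (a, b) in source_cvt.items():
        ta, tb = point_match[a], point_match[b]
        (xa, ya), (xb, yb) = target_points[ta], target_points[tb]
        new_cvt[entry_id] = ((xa - xb) ** 2 + (ya - yb) ** 2) ** 0.5
    return new_cvt
```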

36 citations


Cited by
Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations

Proceedings ArticleDOI
18 Jun 2003
TL;DR: This paper describes a method for acquiring high-complexity stereo image pairs with pixel-accurate correspondence information using structured light; unlike traditional range-sensing approaches, the method does not require calibration of the light sources and yields registered disparity maps between all pairs of cameras and illumination projectors.
Abstract: Progress in stereo algorithm performance is quickly outpacing the ability of existing stereo data sets to discriminate among the best-performing algorithms, motivating the need for more challenging scenes with accurate ground truth information. This paper describes a method for acquiring high-complexity stereo image pairs with pixel-accurate correspondence information using structured light. Unlike traditional range-sensing approaches, our method does not require the calibration of the light sources and yields registered disparity maps between all pairs of cameras and illumination projectors. We present new stereo data sets acquired with our method and demonstrate their suitability for stereo algorithm evaluation. Our results are available at http://www.middlebury.edu/stereo/.
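A hedged sketch of the decoding side of such an acquisition, assuming binary-reflected Gray-code stripe patterns: threshold each photograph, accumulate the bits into a per-pixel code, and convert back to a projector column index. The paper's full pipeline, including sub-pixel interpolation and the handling of unreliable pixels, is omitted.

```python
import numpy as np

def decode_gray_codes(images, thresholds):
    """Decode per-pixel Gray codes from a stack of structured-light photos.

    images     : (n_bits, H, W) photos, most significant pattern first
    thresholds : (H, W) per-pixel black/white decision thresholds
    Returns an (H, W) integer map of the projector column seen by each pixel.
    """
    bits = (images > thresholds[None]).astype(np.int64)
    gray = np.zeros(images.shape[1:], dtype=np.int64)
    for b in bits:                       # accumulate one bit-plane per photo
        gray = (gray << 1) | b
    # Convert Gray code to ordinary binary by cumulative XOR of shifts.
    binary, shift = gray.copy(), 1
    while shift < images.shape[0]:
        binary ^= binary >> shift
        shift <<= 1
    return binary
```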

1,840 citations

Proceedings ArticleDOI
01 Jul 2000
TL;DR: This paper presents a method to acquire the reflectance field of a human face and use these measurements to render the face under arbitrary changes in lighting and viewpoint, demonstrating the technique with synthetic renderings of a person's face under novel illumination and viewpoints.
Abstract: We present a method to acquire the reflectance field of a human face and use these measurements to render the face under arbitrary changes in lighting and viewpoint. We first acquire images of the face from a small set of viewpoints under a dense sampling of incident illumination directions using a light stage. We then construct a reflectance function image for each observed image pixel from its values over the space of illumination directions. From the reflectance functions, we can directly generate images of the face from the original viewpoints in any form of sampled or computed illumination. To change the viewpoint, we use a model of skin reflectance to estimate the appearance of the reflectance functions for novel viewpoints. We demonstrate the technique with synthetic renderings of a person's face under novel illumination and viewpoints.
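The fixed-viewpoint half of this rests on the linearity of light transport: an image under any novel illumination is a weighted sum of the photographs taken under the individual light directions. Below is a minimal NumPy sketch of that weighted sum; the skin reflectance model used for novel viewpoints is not attempted here.

```python
import numpy as np

def relight(basis_images, light_intensities):
    """Relight from light-stage data by superposition.

    basis_images      : (n_lights, H, W, 3) photos, one per light direction
    light_intensities : (n_lights, 3) RGB intensity of the novel lighting
                        sampled at each of those same directions
    Returns an (H, W, 3) image of the face under the novel lighting.
    """
    relit = np.einsum('nhwc,nc->hwc', basis_images, light_intensities)
    return np.clip(relit, 0.0, 1.0)
```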

1,102 citations

Journal ArticleDOI
TL;DR: This paper investigates the effectiveness of animated transitions between common statistical data graphics such as bar charts, pie charts, and scatter plots, proposes design principles for creating effective transitions, and illustrates their application in DynaVis, a visualization system featuring animated data graphics.
Abstract: In this paper we investigate the effectiveness of animated transitions between common statistical data graphics such as bar charts, pie charts, and scatter plots. We extend theoretical models of data graphics to include such transitions, introducing a taxonomy of transition types. We then propose design principles for creating effective transitions and illustrate the application of these principles in DynaVis, a visualization system featuring animated data graphics. Two controlled experiments were conducted to assess the efficacy of various transition types, finding that animated transitions can significantly improve graphical perception.
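Two of the ingredients studied here, slow-in/slow-out timing and staging, can be sketched compactly. The chart representation below is just a list of bar heights, invented for illustration; DynaVis itself is not reproduced.

```python
def ease(t):
    """Slow-in/slow-out easing on t in [0, 1] (smoothstep)."""
    return t * t * (3.0 - 2.0 * t)

def staged_transition(start, mid, end, t):
    """Interpolate bar heights through an intermediate staged state,
    morphing start -> mid during the first half and mid -> end during
    the second, each stage with its own slow-in/slow-out timing."""
    if t < 0.5:
        a, b, u = start, mid, ease(2.0 * t)
    else:
        a, b, u = mid, end, ease(2.0 * t - 1.0)
    return [(1.0 - u) * x + u * y for x, y in zip(a, b)]
```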

495 citations