Author

Paul S. Heckbert

Bio: Paul S. Heckbert is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics including Radiosity (computer graphics) and Texture mapping. The author has an h-index of 29 and has co-authored 47 publications receiving 10,549 citations. Previous affiliations of Paul S. Heckbert include Rafael Advanced Defense Systems & University of California, Berkeley.

Papers
Proceedings ArticleDOI
03 Aug 1997
TL;DR: This work has developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models, and which also supports non-manifold surface models.
Abstract: Many applications in computer graphics require complex, highly detailed models. However, the level of detail actually necessary may vary considerably. To control processing time, it is often desirable to use approximations in place of excessively detailed models. We have developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models. The algorithm uses iterative contractions of vertex pairs to simplify models and maintains surface error approximations using quadric matrices. By contracting arbitrary vertex pairs (not just edges), our algorithm is able to join unconnected regions of models. This can facilitate much better approximations, both visually and with respect to geometric error. In order to allow topological joining, our system also supports non-manifold surface models. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—surface and object representations
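
The core of the method described here is the quadric error metric: each plane incident to a vertex contributes a 4x4 quadric, quadrics accumulate by simple addition when vertex pairs are contracted, and the cost of placing a vertex at a position is a quadratic form in that position. The sketch below illustrates this idea in Python with NumPy; it is an illustrative reading of the abstract, not the paper's reference implementation, and the helper names are made up for this example.

```python
# Illustrative sketch (not the paper's reference code) of the quadric
# error idea: a plane a*x + b*y + c*z + d = 0 contributes a 4x4 quadric
# K = p p^T, a vertex accumulates the quadrics of its incident planes,
# and the error of placing the vertex at position v is v^T Q v.
import numpy as np

def plane_quadric(a, b, c, d):
    """Fundamental error quadric for the plane [a, b, c, d] (unit normal assumed)."""
    p = np.array([a, b, c, d], dtype=float).reshape(4, 1)
    return p @ p.T                      # 4x4 symmetric matrix

def vertex_error(Q, v):
    """Sum of squared distances to the accumulated planes for point v = (x, y, z)."""
    vh = np.append(np.asarray(v, dtype=float), 1.0)   # homogeneous coordinates
    return float(vh @ Q @ vh)

# When a pair (v1, v2) is contracted, their quadrics simply add, and the
# candidate target position with the lowest resulting error is preferred.
Q1 = plane_quadric(0, 0, 1, 0)          # plane z = 0
Q2 = plane_quadric(1, 0, 0, -1)         # plane x = 1
Q = Q1 + Q2
print(vertex_error(Q, (1, 0, 0)))       # 0.0: lies on both planes
print(vertex_error(Q, (0, 0, 1)))       # 2.0: unit distance from each plane
```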

3,564 citations

Proceedings ArticleDOI
01 Jul 1982
TL;DR: It is demonstrated that many color images which would normally require a frame buffer having 15 bits per pixel can be quantized to 8 or fewer bits per pixel with little subjective degradation.
Abstract: Algorithms for adaptive, tapered quantization of color images are described. The research is motivated by the desire to display high-quality reproductions of color images with small frame buffers. It is demonstrated that many color images which would normally require a frame buffer having 15 bits per pixel can be quantized to 8 or fewer bits per pixel with little subjective degradation. In most cases, the resulting images look significantly better than those made with uniform quantization. The color image quantization task is broken into four phases:
1) sampling the original image for color statistics;
2) choosing a colormap based on the color statistics;
3) mapping original colors to their nearest neighbors in the colormap;
4) quantizing and redrawing the original image (with optional dither).
Several algorithms for each of phases 2-4 are described, and images created by each are given.
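
One well-known way to carry out phase 2 (choosing a colormap) is median-cut style splitting of boxes in color space. The sketch below is a minimal illustration of that idea in Python with NumPy; the function name and details are hypothetical and it is not presented as the paper's exact algorithm.

```python
# Minimal median-cut style sketch for phase 2 (choosing a colormap).
# Illustrative approximation only, not the paper's exact algorithm.
import numpy as np

def median_cut(pixels, n_colors):
    """pixels: (N, 3) array of sampled RGB values; returns n_colors representatives."""
    boxes = [pixels]
    while len(boxes) < n_colors:
        # Only boxes with more than one pixel can be split further.
        splittable = [i for i, b in enumerate(boxes) if len(b) > 1]
        if not splittable:
            break
        # Split the box whose longest color axis has the greatest range.
        idx = max(splittable, key=lambda i: np.ptp(boxes[i], axis=0).max())
        box = boxes.pop(idx)
        axis = int(np.argmax(np.ptp(box, axis=0)))    # longest axis of this box
        box = box[box[:, axis].argsort()]
        mid = len(box) // 2                           # split at the median
        boxes.extend([box[:mid], box[mid:]])
    return np.array([b.mean(axis=0) for b in boxes])  # one representative per box

# Phase 3 (mapping originals to the colormap) can then be a nearest-neighbor
# search from each original pixel to the returned representative colors.
```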

750 citations

Journal ArticleDOI
01 Aug 1986
TL;DR: The fundamentals of texture mapping are surveyed, which can be split into two topics: the geometric mapping that warps a texture onto a surface, and the filtering necessary to avoid aliasing.
Abstract: Texture mapping is one of the most successful new techniques in high-quality image synthesis. It can enhance the visual richness of raster-scan images immensely while entailing only a relatively small increase in computation. The technique has been applied to a number of surface attributes: surface color, surface normal, specularity, transparency, illumination, and surface displacement, to name a few. Although the list is potentially endless, the techniques of texture mapping are essentially the same in all cases. This article surveys the fundamentals of texture mapping, which can be split into two topics: the geometric mapping that warps a texture onto a surface, and the filtering necessary to avoid aliasing. An extensive bibliography is included.
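
As a concrete example of the filtering side of this topic, the sketch below shows a bilinearly filtered texture lookup, one of the simplest reconstruction filters in this family; the array layout and function name are assumptions made for illustration. For heavily minified textures a wider filter (mipmapping, for example) is needed to suppress aliasing.

```python
# Minimal sketch of a bilinearly filtered texture lookup. Assumes 'tex'
# is an (H, W, 3) array indexed as (row, col) and (u, v) are in [0, 1]^2.
import numpy as np

def sample_bilinear(tex, u, v):
    """Sample the texture at continuous coordinates (u, v)."""
    h, w = tex.shape[:2]
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Weighted average of the four surrounding texels.
    top = (1 - fx) * tex[y0, x0] + fx * tex[y0, x1]
    bot = (1 - fx) * tex[y1, x0] + fx * tex[y1, x1]
    return (1 - fy) * top + fy * bot
```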

704 citations

01 May 1997
TL;DR: Methods for simplifying and approximating polygonal surfaces from computer graphics, computer vision, cartography, computational geometry, and other fields are classified, summarized, and compared both practically and theoretically.
Abstract: This paper surveys methods for simplifying and approximating polygonal surfaces. A polygonal surface is a piecewise-linear surface in 3-D defined by a set of polygons, typically a set of triangles. Methods from computer graphics, computer vision, cartography, computational geometry, and other fields are classified, summarized, and compared both practically and theoretically. The surface types range from height fields (bivariate functions), to manifolds, to non-manifold self-intersecting surfaces. Piecewise-linear curve simplification is also briefly surveyed.

594 citations

17 Jun 1989
TL;DR: This work develops a new theory describing the ideal, space-variant antialiasing filter for signals warped and resampled according to an arbitrary mapping, and discusses efficient implementations of the mapping and filtering techniques.
Abstract: The applications of texture mapping in computer graphics and image distortion (warping) in image processing share a core of fundamental techniques. We explore two of these techniques, the two-dimensional geometric mappings that arise in the parameterization and projection of textures onto surfaces, and the filters necessary to eliminate aliasing when an image is resampled during texture mapping or warping. With respect to mappings, this work presents a tutorial on three common classes of mapping: the affine, bilinear, and projective. For resampling, this work develops a new theory describing the ideal, space-variant antialiasing filter for signals warped and resampled according to an arbitrary mapping. Efficient implementations of the mapping and filtering techniques are discussed and demonstrated.
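
To make the mapping classes concrete, the sketch below applies a projective mapping (a 3x3 homography) to 2D points in homogeneous coordinates; an affine mapping is the special case whose bottom row is [0, 0, 1], while a bilinear mapping interpolates four corner values and cannot be written as a single 3x3 matrix. This is an illustrative sketch in Python with NumPy, not code drawn from the work itself.

```python
# Sketch of applying a projective mapping (homography) to 2D points in
# homogeneous coordinates; the projective case ends with a divide by w.
import numpy as np

def apply_projective(H, pts):
    """Apply a 3x3 projective mapping H to an (N, 2) array of points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # lift to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]              # perspective divide by w

# An affine mapping has bottom row [0, 0, 1], so the divide by w is a no-op.
H_affine = np.array([[2.0, 0.0, 1.0],
                     [0.0, 1.0, 3.0],
                     [0.0, 0.0, 1.0]])
H_proj = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.1, 0.0, 1.0]])                   # foreshortening along x
square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(apply_projective(H_affine, square))
print(apply_projective(H_proj, square))
```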

538 citations


Cited by
Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations

Book ChapterDOI
TL;DR: This chapter discusses Raster3D, a suite of programs for molecular graphics, motivated by the observation that workstation graphics hardware must compromise the quality of rendered images to achieve rendering speeds high enough for useful interactive manipulation of three-dimensional objects.
Abstract: Publisher Summary This chapter discusses Raster3D, which is a suite of programs for molecular graphics. Crystallographers were among the first and most avid consumers of graphics workstations. Rapid advances in computer hardware, and particularly in the power of specialized computer graphics boards, have led to successive generations of personal workstations with ever more impressive capabilities for interactive molecular graphics. For many years, it was standard practice in crystallography laboratories to prepare figures by photographing directly from the workstation screen. No matter how beautiful the image on the screen, however, this approach suffers from several intrinsic limitations. Among these is the inherent limitation imposed by the effective resolution of the screen. Use of the graphics hardware in a workstation to generate images for later presentation can also impose other limitations. Designers of workstation hardware must compromise the quality of rendered images to achieve rendering speeds high enough for useful interactive manipulation of three-dimensional objects.

3,735 citations

Proceedings ArticleDOI
03 Aug 1997
TL;DR: This work has developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models, and which also supports non-manifold surface models.
Abstract: Many applications in computer graphics require complex, highly detailed models. However, the level of detail actually necessary may vary considerably. To control processing time, it is often desirable to use approximations in place of excessively detailed models. We have developed a surface simplification algorithm which can rapidly produce high quality approximations of polygonal models. The algorithm uses iterative contractions of vertex pairs to simplify models and maintains surface error approximations using quadric matrices. By contracting arbitrary vertex pairs (not just edges), our algorithm is able to join unconnected regions of models. This can facilitate much better approximations, both visually and with respect to geometric error. In order to allow topological joining, our system also supports non-manifold surface models. CR Categories: I.3.5 [Computer Graphics]: Computational Geometry and Object Modeling—surface and object representations

3,564 citations

Proceedings ArticleDOI
01 Aug 1996
TL;DR: This work presents a new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs, which combines both geometry-based and image-based techniques, and presents view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models.
Abstract: We present a new approach for modeling and rendering existing architectural scenes from a sparse set of still photographs. Our modeling approach, which combines both geometry-based and image-based techniques, has two components. The first component is a photogrammetric modeling method which facilitates the recovery of the basic geometry of the photographed scene. Our photogrammetric modeling approach is effective, convenient, and robust because it exploits the constraints that are characteristic of architectural scenes. The second component is a model-based stereo algorithm, which recovers how the real scene deviates from the basic model. By making use of the model, our stereo technique robustly recovers accurate depth from widely-spaced image pairs. Consequently, our approach can model large architectural environments with far fewer photographs than current image-based modeling approaches. For producing renderings, we present view-dependent texture mapping, a method of compositing multiple views of a scene that better simulates geometric detail on basic models. Our approach can be used to recover models for use in either geometry-based or image-based rendering systems. We present results that demonstrate our approach’s ability to create realistic renderings of architectural scenes from viewpoints far from the original photographs. CR Descriptors: I.2.10 [Artificial Intelligence]: Vision and Scene Understanding Modeling and recovery of physical attributes; I.3.7 [Computer Graphics]: Three-Dimensional Graphics and Realism Color, shading, shadowing, and texture; I.4.8 [Image Processing]: Scene Analysis Stereo; J.6 [Computer-Aided Engineering]: Computer-aided design (CAD).
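
The compositing step of view-dependent texture mapping weights each source photograph by how closely its viewing direction agrees with the novel view. The sketch below illustrates one simple weighting of that kind in Python with NumPy; the inverse-angle scheme and function name are assumptions made for illustration and are not the paper's exact formulation.

```python
# Simplified sketch of the weighting idea behind view-dependent texture
# mapping: cameras whose viewing directions are closer to the novel view
# direction contribute more to the composited color. The inverse-angle
# weighting here is illustrative only.
import numpy as np

def view_weights(novel_dir, camera_dirs, eps=1e-6):
    """Blend weights for each source camera, given viewing direction vectors."""
    novel_dir = novel_dir / np.linalg.norm(novel_dir)
    weights = []
    for d in camera_dirs:
        d = d / np.linalg.norm(d)
        angle = np.arccos(np.clip(np.dot(novel_dir, d), -1.0, 1.0))
        weights.append(1.0 / (angle + eps))            # closer view -> larger weight
    weights = np.array(weights)
    return weights / weights.sum()                     # normalize to sum to 1

# The composited texel is then the weighted sum of the reprojected texels
# taken from each source photograph.
```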

2,159 citations

Journal ArticleDOI
TL;DR: Some applications of centroidal Voronoi tessellations to problems in image compression, quadrature, finite difference methods, distribution of resources, cellular biology, statistics, and the territorial behavior of animals are given.
Abstract: A centroidal Voronoi tessellation is a Voronoi tessellation whose generating points are the centroids (centers of mass) of the corresponding Voronoi regions. We give some applications of such tessellations to problems in image compression, quadrature, finite difference methods, distribution of resources, cellular biology, statistics, and the territorial behavior of animals. We discuss methods for computing these tessellations, provide some analyses concerning both the tessellations and the methods for their determination, and, finally, present the results of some numerical experiments.
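
A standard method for computing a centroidal Voronoi tessellation is Lloyd's algorithm, which alternates between assigning sample points to their nearest generator and moving each generator to the centroid of its assigned samples. The discrete sketch below, in Python with NumPy, assumes a uniform density on the unit square and is an illustration of that idea rather than the authors' implementation.

```python
# Discrete sketch of Lloyd's algorithm for approximating a centroidal
# Voronoi tessellation: alternate between nearest-generator assignment
# and moving each generator to the centroid of its assigned samples.
import numpy as np

def lloyd(samples, generators, iterations=50):
    """samples: (N, d) points in the domain; generators: (k, d) initial seeds."""
    generators = generators.copy()
    for _ in range(iterations):
        # Assign each sample to its nearest generator (discrete Voronoi regions).
        d2 = ((samples[:, None, :] - generators[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # Move each generator to the centroid of its region.
        for k in range(len(generators)):
            region = samples[labels == k]
            if len(region) > 0:
                generators[k] = region.mean(axis=0)
    return generators

rng = np.random.default_rng(0)
samples = rng.random((5000, 2))                 # uniform density on the unit square
seeds = rng.random((16, 2))
print(lloyd(samples, seeds))                    # approximate CVT generators
```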

2,151 citations