Author

Gregory J. Ward

Bio: Gregory J. Ward is an academic researcher from Lawrence Berkeley National Laboratory. The author has contributed to research in topics including Rendering (computer graphics) and Global illumination. The author has an h-index of 11 and has co-authored 18 publications receiving 3,186 citations.

Papers
Proceedings ArticleDOI
01 Jul 1992
TL;DR: A new device for measuring the spatial reflectance distributions of surfaces is introduced, along with a new mathematical model of anisotropic reflectance, which is both simple and accurate, permitting efficient reflectance data reduction and reproduction.
Abstract: A new device for measuring the spatial reflectance distributions of surfaces is introduced, along with a new mathematical model of anisotropic reflectance. The reflectance model presented is both simple and accurate, permitting efficient reflectance data reduction and reproduction. The validity of the model is substantiated with comparisons to complete measurements of surface reflectance functions gathered with the novel reflectometry device. This new device uses imaging technology to capture the entire hemisphere of reflected directions simultaneously, which greatly accelerates the reflectance data gathering process, making it possible to measure dozens of surfaces in the time that it used to take to do one. Example measurements and simulations are shown, and a table of fitted parameters for several surfaces is presented. General Terms: algorithms, measurement, theory, verification. CR Categories and Descriptors: I.3.7 Three-Dimensional Graphics and Realism; I.6.4 Model Validation and Analysis.
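For readers who want the model in concrete form: the anisotropic reflectance model associated with this paper is usually written as a diffuse term plus an elliptical-Gaussian specular lobe. The Python sketch below evaluates that commonly cited form; the parameter names (rho_d, rho_s, ax, ay) and the frame conventions are illustrative choices, not the paper's notation.

```python
import numpy as np

def ward_brdf(wi, wo, n, t, rho_d=0.2, rho_s=0.05, ax=0.1, ay=0.3):
    """Evaluate the commonly cited anisotropic Ward BRDF (illustrative parameter names).

    wi, wo : unit vectors from the surface point toward the light and the viewer
    n, t   : unit surface normal and tangent defining the anisotropy frame
    rho_d  : diffuse reflectance
    rho_s  : specular reflectance
    ax, ay : lobe widths along the tangent and bitangent directions
    """
    wi, wo, n, t = (np.asarray(v, dtype=float) for v in (wi, wo, n, t))
    b = np.cross(n, t)                      # bitangent completes the local frame
    h = wi + wo
    h /= np.linalg.norm(h)                  # half vector between light and view directions

    cos_i = max(np.dot(n, wi), 1e-6)
    cos_o = max(np.dot(n, wo), 1e-6)
    cos_h = max(np.dot(n, h), 1e-6)

    # tan^2(theta_h), split along the two anisotropy axes of the specular lobe
    exponent = -((np.dot(h, t) / ax) ** 2 + (np.dot(h, b) / ay) ** 2) / cos_h ** 2

    specular = rho_s * np.exp(exponent) / (4.0 * np.pi * ax * ay * np.sqrt(cos_i * cos_o))
    return rho_d / np.pi + specular
```

The table of fitted parameters mentioned in the abstract corresponds to per-material values of this kind of diffuse, specular, and lobe-width set.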

1,259 citations

Proceedings ArticleDOI
24 Jul 1994
TL;DR: A physically-based rendering system tailored to the demands of lighting design and architecture using a light-backwards ray-tracing method with extensions to efficiently solve the rendering equation under most conditions.
Abstract: This paper describes a physically-based rendering system tailored to the demands of lighting design and architecture. The simulation uses a light-backwards ray-tracing method with extensions to efficiently solve the rendering equation under most conditions. This includes specular, diffuse and directional-diffuse reflection and transmission in any combination to any level in any environment, including complicated, curved geometries. The simulation blends deterministic and stochastic ray-tracing techniques to achieve the best balance between speed and accuracy in its local and global illumination methods. Some of the more interesting techniques are outlined, with references to more detailed descriptions elsewhere. Finally, examples are given of successful applications of this free software by others.

1,037 citations

Proceedings ArticleDOI
01 Jun 1988
TL;DR: A Monte Carlo technique computes the indirect contributions to illuminance at locations chosen by the rendering process, which speeds the process and provides a natural limit to recursion.
Abstract: An efficient ray tracing method is presented for calculating interreflections between surfaces with both diffuse and specular components. A Monte Carlo technique computes the indirect contributions to illuminance at locations chosen by the rendering process. The indirect illuminance values are averaged over surfaces and used in place of a constant "ambient" term. Illuminance calculations are made only for those areas participating in the selected view, and the results are stored so that subsequent views can reuse common values. The density of the calculation is adjusted to maintain a constant accuracy, permitting less populated portions of the scene to be computed quickly. Successive reflections use proportionally fewer samples, which speeds the process and provides a natural limit to recursion. The technique can also model diffuse transmission and illumination from large area sources, such as the sky.
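The caching and reuse of indirect illuminance described above is often illustrated with a weight that penalizes both the distance between sample points and the divergence of their normals. The sketch below is a minimal, self-contained rendition of that idea, assuming the commonly cited weight form; the record structure, error threshold, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

class IrradianceCache:
    """Minimal sketch of reusing stored indirect-illuminance samples.

    Each record holds: position, surface normal, illuminance E, and a
    harmonic-mean distance R to the surfaces seen when E was computed.
    """
    def __init__(self, max_error=0.2):
        self.records = []            # list of (position, normal, E, R)
        self.inv_err = 1.0 / max_error

    def add(self, pos, normal, E, R):
        self.records.append((np.asarray(pos, float), np.asarray(normal, float), E, R))

    def lookup(self, pos, normal):
        """Return an interpolated illuminance, or None if no stored record is usable."""
        pos, normal = np.asarray(pos, float), np.asarray(normal, float)
        num = den = 0.0
        for p_i, n_i, E_i, R_i in self.records:
            # Weight falls off with distance and with divergence of the normals.
            d = np.linalg.norm(pos - p_i) / R_i + np.sqrt(max(0.0, 1.0 - np.dot(normal, n_i)))
            if d <= 0.0:
                return E_i                   # exact reuse of an identical sample
            w = 1.0 / d
            if w > self.inv_err:             # record is close enough to trust
                num += w * E_i
                den += w
        return num / den if den > 0.0 else None
```

In a renderer built on this idea, lookup(x, n) would be tried first; only on a miss would a new Monte Carlo hemisphere estimate be computed and stored with add(...).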

402 citations

Book ChapterDOI
12 Jun 1995
TL;DR: Numerical techniques for comparing real and synthetic luminance images are explored and components of a perceptually based metric using ideas from the image compression literature are introduced.
Abstract: This paper explores numerical techniques for comparing real and synthetic luminance images. We introduce components of a perceptually based metric using ideas from the image compression literature. We apply a series of metrics to a set of real and synthetic images, and discuss their performance. Finally, we conclude with suggestions for future work in formulating image metrics and incorporating them into new image synthesis methods.
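As a rough illustration of the flavor of such metrics (and emphatically not the metric proposed in the paper), the sketch below compares two luminance images after a logarithmic brightness response, mean removal, and a coarse spatial smoothing that de-emphasizes the finest details; all parameters are illustrative.

```python
import numpy as np

def perceptual_distance(lum_a, lum_b, eps=1e-4):
    """Illustrative luminance-image distance (NOT the paper's metric).

    1. Compress luminance with a log response (crude brightness nonlinearity).
    2. Remove the mean so overall exposure differences are discounted.
    3. Compare at full resolution and after a small box blur, so the finest
       spatial frequencies, which the eye resolves poorly, count for less.
    """
    a = np.log(np.asarray(lum_a, float) + eps)
    b = np.log(np.asarray(lum_b, float) + eps)
    a -= a.mean()
    b -= b.mean()

    def blur(img, k=3):
        # Simple box blur with edge padding; no external dependencies.
        pad = k // 2
        padded = np.pad(img, pad, mode="edge")
        out = np.zeros_like(img)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    full = np.sqrt(np.mean((a - b) ** 2))
    coarse = np.sqrt(np.mean((blur(a) - blur(b)) ** 2))
    return 0.5 * full + 0.5 * coarse
```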

154 citations

Book ChapterDOI
01 Jan 1994
TL;DR: This work presents a simple technique for improving the efficiency of ray tracing in scenes with many light sources; it requires very little storage and produces no visible artifacts.
Abstract: We present a simple technique for improving the efficiency of ray tracing in scenes with many light sources. The sources are sorted according to their potential contribution, and only those sources whose potential contribution is above a specified threshold have their shadows tested. The remainder are added into the result in proportion to a statistical estimate of their visibility. The algorithm requires very little storage and produces no visible artifacts.
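The source sorting and the statistical treatment of the untested remainder can be sketched in a few lines. The following Python is an illustrative rendition under assumed conventions: potential holds each source's contribution if it were unoccluded, is_visible stands in for the expensive shadow-ray test, and the stopping threshold is a made-up parameter.

```python
def shade_with_many_lights(potential, is_visible, threshold=0.05):
    """Sketch of budgeting shadow tests over many light sources.

    potential  : list of each source's contribution assuming it is unoccluded
    is_visible : callable taking a source index, returning True if a shadow
                 ray finds that source unoccluded (the expensive test)
    threshold  : stop testing once the untested remainder falls below this
                 fraction of the light accumulated so far
    """
    order = sorted(range(len(potential)), key=lambda i: potential[i], reverse=True)
    remaining = sum(potential)        # total potential of sources not yet tested
    total = 0.0
    tested = visible = 0
    for i in order:
        if tested > 0 and remaining <= threshold * total:
            break                     # the rest is handled statistically below
        remaining -= potential[i]
        tested += 1
        if is_visible(i):
            visible += 1
            total += potential[i]
    # Untested sources contribute in proportion to the observed visibility rate.
    if tested:
        total += remaining * (visible / tested)
    return total
```

A caller might supply, for example, is_visible=lambda i: trace_shadow_ray(point, lights[i]), where trace_shadow_ray is whatever occlusion test the renderer provides (a hypothetical name here).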

115 citations


Cited by
Book
30 Sep 2010
TL;DR: Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images and takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene.
Abstract: Humans perceive the three-dimensional structure of the world with apparent ease. However, despite all of the recent advances in computer vision research, the dream of having a computer interpret an image at the same level as a two-year-old remains elusive. Why is computer vision such a challenging problem and what is the current state of the art? Computer Vision: Algorithms and Applications explores the variety of techniques commonly used to analyze and interpret images. It also describes challenging real-world applications where vision is being successfully used, both for specialized applications such as medical imaging, and for fun, consumer-level tasks such as image editing and stitching, which students can apply to their own personal photos and videos. More than just a source of recipes, this exceptionally authoritative and comprehensive textbook/reference also takes a scientific approach to basic vision problems, formulating physical models of the imaging process before inverting them to produce descriptions of a scene. These problems are also analyzed using statistical models and solved using rigorous engineering techniques. Topics and features: structured to support active curricula and project-oriented courses, with tips in the Introduction for using the book in a variety of customized courses; presents exercises at the end of each chapter with a heavy emphasis on testing algorithms and containing numerous suggestions for small mid-term projects; provides additional material and more detailed mathematical topics in the Appendices, which cover linear algebra, numerical techniques, and Bayesian estimation theory; suggests additional reading at the end of each chapter, including the latest research in each sub-field, in addition to a full Bibliography at the end of the book; supplies supplementary course material for students at the associated website, http://szeliski.org/Book/. Suitable for an upper-level undergraduate or graduate-level course in computer science or engineering, this textbook focuses on basic techniques that work under real-world conditions and encourages students to push their creative boundaries. Its design and exposition also make it eminently suitable as a unique reference to the fundamental techniques and current research literature in computer vision.

4,146 citations

Proceedings ArticleDOI
01 Jul 1992
TL;DR: A general method for automatic reconstruction of accurate, concise, piecewise smooth surfaces from unorganized 3D points that is able to automatically infer the topological type of the surface, its geometry, and the presence and location of features such as boundaries, creases, and corners.
Abstract: This thesis describes a general method for automatic reconstruction of accurate, concise, piecewise smooth surfaces from unorganized 3D points. Instances of surface reconstruction arise in numerous scientific and engineering applications, including reverse-engineering--the automatic generation of CAD models from physical objects. Previous surface reconstruction methods have typically required additional knowledge, such as structure in the data, known surface genus, or orientation information. In contrast, the method outlined in this thesis requires only the 3D coordinates of the data points. From the data, the method is able to automatically infer the topological type of the surface, its geometry, and the presence and location of features such as boundaries, creases, and corners. The reconstruction method has three major phases: (1) initial surface estimation, (2) mesh optimization, and (3) piecewise smooth surface optimization. A key ingredient in phase 3, and another principal contribution of this thesis, is the introduction of a new class of piecewise smooth representations based on subdivision. The effectiveness of the three-phase reconstruction method is demonstrated on a number of examples using both simulated and real data. Phases 2 and 3 of the surface reconstruction method can also be used to approximate existing surface models. By casting surface approximation as a global optimization problem with an energy function that directly measures deviation of the approximation from the original surface, models are obtained that exhibit excellent accuracy to conciseness trade-offs. Examples of piecewise linear and piecewise smooth approximations are generated for various surfaces, including meshes, NURBS surfaces, CSG models, and implicit surfaces.
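Phase 1 of the pipeline hinges on fitting a local tangent plane to each point's neighbourhood; a common way to do this is principal component analysis of the k nearest neighbours. The sketch below shows only that step, with a brute-force neighbour search and without the consistent normal orientation or the later mesh and subdivision optimization phases; k and all names are illustrative.

```python
import numpy as np

def estimate_tangent_planes(points, k=8):
    """Sketch of the local plane fitting used in initial surface estimation.

    For each point, take its k nearest neighbours and fit a plane through their
    centroid whose normal is the eigenvector of the local covariance matrix with
    the smallest eigenvalue. Consistently orienting the normals (solved in the
    thesis with a propagation step over a neighbourhood graph) is omitted here.
    """
    points = np.asarray(points, float)
    centers, normals = [], []
    for p in points:
        d = np.linalg.norm(points - p, axis=1)
        nbrs = points[np.argsort(d)[:k]]           # brute-force k-NN for clarity
        c = nbrs.mean(axis=0)
        cov = (nbrs - c).T @ (nbrs - c)
        eigvals, eigvecs = np.linalg.eigh(cov)     # eigenvalues in ascending order
        centers.append(c)
        normals.append(eigvecs[:, 0])              # smallest-variance direction
    return np.array(centers), np.array(normals)
```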

3,119 citations

Proceedings ArticleDOI
03 Aug 1997
TL;DR: The authors discuss how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing, and demonstrate a few applications of having high dynamic range radiance maps.
Abstract: We present a method of recovering high dynamic range radiance maps from photographs taken with conventional imaging equipment. In our method, multiple photographs of the scene are taken with different amounts of exposure. Our algorithm uses these differently exposed photographs to recover the response function of the imaging process, up to a factor of scale, using the assumption of reciprocity. With the known response function, the algorithm can fuse the multiple photographs into a single, high dynamic range radiance map whose pixel values are proportional to the true radiance values in the scene. We demonstrate our method on images acquired with both photochemical and digital imaging processes. We discuss how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing. Lastly, we demonstrate a few applications of having high dynamic range radiance maps, such as synthesizing realistic motion blur and simulating the response of the human visual system.
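Once the response curve is known, fusing the exposures reduces to a per-pixel weighted average in the log domain. The sketch below assumes the response has already been recovered (or that the sensor is linear) and uses a simple hat weighting that trusts mid-range pixel values; it omits the least-squares recovery of the response function itself, and all names are illustrative.

```python
import numpy as np

def fuse_exposures(images, exposure_times, g=None):
    """Sketch of fusing multiple exposures into a relative radiance map.

    images         : list of 2-D uint8 arrays of the same scene, different exposures
    exposure_times : list of exposure durations in seconds
    g              : optional array of length 256 mapping pixel value -> log exposure
                     (the recovered response); if None, a linear sensor is assumed
    """
    if g is None:
        g = np.log(np.arange(256) / 255.0 + 1e-6)        # linear-response stand-in
    # Hat weighting: trust mid-range pixels more than under/over-exposed ones.
    w = np.minimum(np.arange(256), 255 - np.arange(256)).astype(float)

    num = np.zeros(np.asarray(images[0]).shape, float)
    den = np.zeros_like(num)
    for img, dt in zip(images, exposure_times):
        img = np.asarray(img)
        num += w[img] * (g[img] - np.log(dt))            # per-pixel log radiance estimate
        den += w[img]
    return np.exp(num / np.maximum(den, 1e-6))           # radiance map, relative scale
```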

2,967 citations

Journal ArticleDOI
TL;DR: A new technique for the display of high-dynamic-range images, which reduces the contrast while preserving detail, is presented, based on a two-scale decomposition of the image into a base layer and a detail layer.
Abstract: We present a new technique for the display of high-dynamic-range images, which reduces the contrast while preserving detail. It is based on a two-scale decomposition of the image into a base layer,...

1,715 citations

Proceedings ArticleDOI
01 Jul 2002
TL;DR: A new technique for the display of high-dynamic-range images, which reduces the contrast while preserving detail, is presented, based on a two-scale decomposition of the image into a base layer, encoding large-scale variations, and a detail layer.
Abstract: We present a new technique for the display of high-dynamic-range images, which reduces the contrast while preserving detail. It is based on a two-scale decomposition of the image into a base layer, encoding large-scale variations, and a detail layer. Only the base layer has its contrast reduced, thereby preserving detail. The base layer is obtained using an edge-preserving filter called the bilateral filter. This is a non-linear filter, where the weight of each pixel is computed using a Gaussian in the spatial domain multiplied by an influence function in the intensity domain that decreases the weight of pixels with large intensity differences. We express bilateral filtering in the framework of robust statistics and show how it relates to anisotropic diffusion. We then accelerate bilateral filtering by using a piecewise-linear approximation in the intensity domain and appropriate subsampling. This results in a speed-up of two orders of magnitude. The method is fast and requires no parameter setting.
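The decomposition described above can be sketched directly from the abstract: filter the log luminance with a bilateral filter to obtain the base layer, compress only that layer, and add the detail back. The brute-force Python below illustrates the idea without the paper's piecewise-linear acceleration or subsampling; the sigma values and compression factor are illustrative.

```python
import numpy as np

def bilateral_filter(img, sigma_s=4.0, sigma_r=0.4, radius=8):
    """Brute-force bilateral filter on a 2-D float array.

    Weight = spatial Gaussian * intensity Gaussian, so pixels across strong
    edges (large intensity difference) contribute little to the average.
    """
    h, w = img.shape
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_s ** 2))
    padded = np.pad(img, radius, mode="edge")
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            rng = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng
            out[y, x] = (wgt * patch).sum() / wgt.sum()
    return out

def tone_map(luminance, compression=0.4):
    """Two-scale tone mapping: compress the base layer, keep the detail layer."""
    log_lum = np.log10(np.maximum(np.asarray(luminance, float), 1e-6))
    base = bilateral_filter(log_lum)          # large-scale variations
    detail = log_lum - base                   # fine detail, preserved as-is
    log_out = compression * base + detail
    log_out -= log_out.max()                  # map the brightest region toward 1.0
    return 10.0 ** log_out
```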

1,612 citations