
Showing papers by Brian A. Wandell published in 2000


Journal ArticleDOI
TL;DR: Methods for viewing large portions of the brain's surface in a single flattened representation are described; the flattened representation preserves several key spatial relationships between regions on the cortical surface.
Abstract: Much of the human cortical surface is obscured from view by the complex pattern of folds, making the spatial relationship between different surface locations hard to interpret. Methods for viewing large portions of the brain's surface in a single flattened representation are described. The flattened representation preserves several key spatial relationships between regions on the cortical surface. The principles used in the implementations are provided, along with evaluations of these implementations using artificial test surfaces. Results of applying the methods to structural magnetic resonance measurements of the human brain are also shown. The implementation details are available in the source code, which is freely available on the Internet.
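The abstract does not specify the flattening algorithm. As a rough illustration of the general idea only (mapping a folded 3-D surface to 2-D while approximately preserving distances along the surface), here is a minimal sketch using classical multidimensional scaling of graph-based geodesic distances; the mesh format, function name, and MDS approach are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only: flatten a connected triangle-mesh surface by
# classical MDS of approximate geodesic (graph shortest-path) distances.
# NOT the authors' algorithm; it only illustrates the general idea.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path

def flatten_mesh(vertices, edges):
    """vertices: (N, 3) array of 3-D positions; edges: iterable of (i, j) index pairs."""
    n = len(vertices)
    w = lil_matrix((n, n))
    for i, j in edges:
        d = np.linalg.norm(vertices[i] - vertices[j])
        w[i, j] = w[j, i] = d                        # symmetric edge lengths
    geo = shortest_path(w.tocsr(), directed=False)   # approximate geodesic distances
    # Classical MDS: double-center the squared distances, keep the top two eigenvectors.
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (geo ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:2]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))  # (N, 2) flat coordinates
```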

276 citations


Proceedings ArticleDOI
TL;DR: In this paper, a review of algorithms for reducing the dynamic range of an image is presented, organized into tone reproduction curves (TRCs), which operate pointwise on the image data and are therefore simple and efficient, and tone reproduction operators (TROs), which use the spatial structure of the image to preserve local contrast.
Abstract: In this paper, we review several algorithms that have been proposed to transform a high dynamic range image into a reduced dynamic range image that matches the general appearance of the original. We organize these algorithms into two categories: tone reproduction curves (TRCs) and tone reproduction operators (TROs). TRCs operate pointwise on the image data, making the algorithms simple and efficient. TROs use the spatial structure of the image data and attempt to preserve local image contrast.
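As a rough illustration of the TRC/TRO distinction, the sketch below implements one pointwise curve (a global log mapping) and one simple spatially varying operator (compressing a blurred base layer while keeping local detail). The specific mappings and parameters are arbitrary choices for illustration, not algorithms from the paper.

```python
# Illustrative sketch only: a pointwise tone reproduction curve (TRC) versus a
# simple spatial tone reproduction operator (TRO). Parameters are arbitrary.
import numpy as np
from scipy.ndimage import gaussian_filter

def trc_log(hdr, eps=1e-6):
    """Pointwise TRC: global log compression, normalized to [0, 1]."""
    out = np.log(hdr + eps)
    return (out - out.min()) / (out.max() - out.min())

def tro_local(hdr, sigma=8.0, compression=0.3, eps=1e-6):
    """Simple spatial TRO: compress a blurred 'base' layer, keep the 'detail' layer."""
    log_l = np.log(hdr + eps)
    base = gaussian_filter(log_l, sigma)   # coarse spatial structure
    detail = log_l - base                  # local contrast
    out = np.exp(compression * base + detail)
    return (out - out.min()) / (out.max() - out.min())
```

The key difference: trc_log applies the same mapping to every pixel, while tro_local compresses only the low-frequency component, so local image contrast is preserved.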

185 citations


Proceedings ArticleDOI
TL;DR: This paper describes a methodology, using a camera simulator and image quality metrics, for determining the optimal pixel size, and it is shown that the optimal pixel size scales with technology, but at a slower rate than the technology itself.
Abstract: Pixel design is a key part of image sensor design. After deciding on pixel architecture, a fundamental tradeoff is made to select pixel size. A small pixel size is desirable because it results in a smaller die size and/or higher spatial resolution; a large pixel size is desirable because it results in higher dynamic range and signal-to-noise ratio. Given these two ways to improve image quality, and given a set of process and imaging constraints, an optimal pixel size exists. It is difficult, however, to analytically determine the optimal pixel size, because the choice depends on many factors, including the sensor parameters, imaging optics and the human perception of image quality. This paper describes a methodology, using a camera simulator and image quality metrics, for determining the optimal pixel size. The methodology is demonstrated for APS implemented in CMOS processes down to 0.18 μm technology. For a typical 0.35 μm CMOS technology the optimal pixel size is found to be approximately 6.5 micrometers at a fill factor of 30%. It is shown that the optimal pixel size scales with technology, but at a slower rate than the technology itself.
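The paper's camera simulator and image quality metrics are not reproduced here, but a toy calculation can illustrate why an interior optimum exists: SNR grows with pixel area while spatial resolution falls, so a combined score peaks at an intermediate size. All constants and the scoring function below are made-up assumptions, not the paper's model.

```python
# Illustrative sketch only: a toy pixel-size tradeoff. The constants and the
# combined "quality" score are invented; they are not the paper's simulator.
import numpy as np

def toy_quality(pixel_um, die_um=3500.0, photon_flux=2.5e4, read_noise_e=30.0):
    """Toy image-quality score for a square pixel with side length pixel_um (micrometers)."""
    fill_factor = 0.3
    signal_e = photon_flux * fill_factor * pixel_um ** 2   # collected electrons
    noise_e = np.sqrt(signal_e + read_noise_e ** 2)        # shot noise + read noise
    snr_db = 20 * np.log10(signal_e / noise_e)
    n_pixels = (die_um / pixel_um) ** 2                    # pixels on a fixed die
    return snr_db * np.log10(n_pixels)                     # arbitrary combined score

sizes = np.arange(2.0, 15.0, 0.25)
best = sizes[np.argmax([toy_quality(p) for p in sizes])]
print(f"toy optimum: {best:.2f} micrometers")
```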

114 citations


Proceedings Article
01 Jan 2000
TL;DR: In this paper, the authors describe spectral estimation principles that are useful for color balancing, color conversion, and sensor design, and apply these principles to typical daylight illuminants that they measured over the course of twenty days in Stanford, California.
Abstract: We describe spectral estimation principles that are useful for color balancing, color conversion, and sensor design. The principles extend conventional estimation methods, which rely on linear models of the input data, by characterizing the distribution or structure of the linear model coefficients. When the linear model coefficients of the input data are highly structured, it is possible to improve the quality of a simple linear model by estimating coefficients that are invisible to the sensors. We illustrate these principles using the synthetic example of estimating blackbody radiator spectral power distributions. Then, we apply the principles to typical daylight illuminants that we measured over the course of twenty days in Stanford, California. We show that the distribution of the daylight linear model coefficients that approximate the daylight spectral power distributions is highly structured. We further show that, from knowledge of the coefficient structure, nonlinear algorithms using N sensors estimate the data as well as linear algorithms using N+1 sensors.
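For context, the conventional linear-model estimate that the paper extends can be sketched as follows: sensor responses are modeled as a linear function of a small number of basis coefficients, which are recovered by least squares. The variable names and formulation below are assumptions for illustration; the paper's nonlinear use of coefficient structure is not implemented here.

```python
# Illustrative sketch only: baseline linear-model spectral estimation.
# The paper's contribution (exploiting structure in the coefficient
# distribution to recover "invisible" coefficients) is NOT implemented.
import numpy as np

def estimate_spectrum(responses, S, B):
    """
    responses: (n_sensors,) measured sensor values
    S:         (n_wavelengths, n_sensors) sensor spectral sensitivities
    B:         (n_wavelengths, n_basis) linear-model basis for the spectra
    Returns the reconstructed spectral power distribution, shape (n_wavelengths,).
    """
    # responses ≈ S.T @ B @ w  ->  solve for the basis weights w by least squares.
    A = S.T @ B
    w, *_ = np.linalg.lstsq(A, responses, rcond=None)
    return B @ w
```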

15 citations


Proceedings ArticleDOI
01 Sep 2000
TL;DR: A new image capture technology, based on a digital pixel fabricated on a CMOS process, is well suited to exploring a novel image pipeline architecture that is being developed to serve features of human vision not yet incorporated in the conventional pipeline.
Abstract: An effective image reproduction pipeline, spanning image capture, processing and display, must be designed to account for the properties of the human observer. In designing an image pipeline, three principles of human vision are particularly important: trichromacy, color adaptation, and pattern-color sensitivity. These properties also play an important role in metrics used to evaluate image quality reproduction. The main portion of this review comprises a description of these properties of the visual system and how these principles are incorporated into the image reproduction pipeline. The last part of this review describes a new image capture technology, based on a digital pixel fabricated on a CMOS process. This sensor is well-designed for exploring a novel image pipeline architecture that we call multiple-capture, single-image. This architecture is being developed to serve features of human vision that are not yet incorporated in the conventional pipeline.
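The multiple-capture, single-image architecture is not described in detail in the abstract. As a generic illustration of combining several captures into one image with extended dynamic range, the sketch below merges exposures of increasing duration, taking each pixel from the longest unsaturated capture; the merging rule and parameters are assumptions, not the paper's algorithm.

```python
# Illustrative sketch only: a generic merge of multiple exposures into a
# single extended-dynamic-range image. Not the specific algorithm of the
# digital-pixel sensor described above.
import numpy as np

def merge_captures(captures, exposure_times, full_scale=1023):
    """
    captures:       list of (H, W) arrays, shortest exposure first
    exposure_times: matching list of exposure durations
    Each pixel takes its value from the longest capture that is not saturated,
    normalized by exposure time to a common radiometric scale.
    """
    merged = captures[0].astype(float) / exposure_times[0]
    for img, t in zip(captures[1:], exposure_times[1:]):
        unsaturated = img < 0.95 * full_scale          # usable pixels in this capture
        merged[unsaturated] = img[unsaturated].astype(float) / t
    return merged
```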

8 citations