Digital camera simulation

Joyce Farrell,1,* Peter B. Catrysse,1,2 and Brian Wandell1,3

1Department of Electrical Engineering, Stanford University, Stanford, California 94305, USA
2Edward L. Ginzton Laboratory, Stanford University, Stanford, California 94305, USA
3Department of Psychology, Stanford University, Stanford, California 94305, USA

*Corresponding author: Joyce_Farrell@stanford.edu

Received 5 October 2011; revised 22 December 2011; accepted 23 December 2011; posted 10 January 2012 (Doc. ID 156026); published 1 February 2012
We describe a simulation of the complete image processing pipeline of a digital camera, beginning with a
radiometric description of the scene captured by the camera and ending with a radiometric description of
the image rendered on a display. We show that there is a good correspondence between measured and
simulated sensor performance. Through the use of simulation, we can quantify the effects of individual
digital camera components on system performance and image quality. This computational approach can
be helpful for both camera design and image quality assessment. © 2012 Optical Society of America
OCIS codes: 100.3000, 100.0100.
1. Introduction
Digital cameras are designed by a team of engineers
and scientists who have different expertise and who
use different analytical tools and language to charac-
terize the imaging component they work on. Typi-
cally, these engineers work at different companies
that specialize in the design and manufacturing of
one imaging component, such as optical lenses, fil-
ters, sensors, processors, or displays.
Digital cameras are purchased by consumers who
judge the image quality of the digital camera by
viewing the final rendered output. Achieving a high
quality output depends on multiple system compo-
nents, including the optical system, imaging sensor,
image processor, and display device. Consequently,
analyzing components singly, without reference to
the characteristics of the other components, provides
only a limited view of the system performance. In
multidevice systems, a controlled simulation envir-
onment can provide the engineer with useful gui-
dance that improves the understanding of the
system and guides design considerations for indivi-
dual parts and algorithms.
There has been much progress in modeling the in-
dividual components of a digital camera, including
camera optics [1–3] and sensor noise [4–9]. Progress has also been made in modeling the spectral reflectances and illuminants in a scene [10–17]. There is
also a very large literature on image processing algo-
rithms that are part of any digital camera [18]. This
is an opportune time to develop a system simulator
that incorporates models for each of the system com-
ponents. In this paper, we describe how to model and
simulate the complete imaging pipeline of a digital
camera, beginning with a radiometric description
of the scene captured by the camera and ending with
a radiometric description of the final image as it
appears on an LCD display.
Computer simulations have played an important
role in evaluating remote imaging systems that
are used to classify agricultural plants and materials and to detect and identify buildings, vehicles, and other targets [19–22]. Modeling the complete system
for remote imaging systems includes (1) characteriz-
ing spectral properties of possible targets, (2) model-
ing atmospheric conditions, (3) characterizing the
spectral transmissivity of filters, the sensitivity,
noise, and spatial resolution of imaging sensors
(typically one pixel sensors), and (4) implementing
image processing operations, including exposure
control, quantization, and detection algorithms. The
veridicality of the simulations is judged by how well
the models predict target detection as quantified by
receiver operating curves. The visual abilities of the
human observer are also modeled when the final
detector is a human [22,23].
We describe a simulation for consumer imaging
that parallels this methodology. We (1) characterize
the radiometric properties of scenes and illuminants,
(2) model the optical properties of lenses, (3) charac-
terize the sensitivity and noise of sensors, including
spatial and spectral sampling of color filter arrays,
(4) implement image processing algorithms, and
(5) generate a radiometric representation of displayed
images. We also model spatial and chromatic sensitiv-
ity of human observers for purposes of predicting the
visibility of noise and sampling artifacts [24,25].
The digital camera simulations comprise an integrated suite of MATLAB software tools referred to as the Image Systems Evaluation Toolbox (ISET) [26]. ISET incorporates and extends the work
that we and our colleagues have been doing over the
past 15 years on modeling and evaluating the quality
of imaging systems [24–35]. We first describe the
computational modules in ISET and then validate
the system by comparing simulated and measured
sensor data obtained from a calibrated device in
our laboratory.
2. Digital Camera Simulation
The digital camera imaging pipeline can be sepa-
rated into a sequence of computational modules cor-
responding to the scene, optics, sensor, processor, and
display. The scene module creates a radiometric
description of the scene. The optics module converts
scene radiance data into an irradiance image at the
sensor surface. The sensor module converts the irra-
diance image into electrons. The processor module
converts the digital values in the two-dimensional
sensor array into a three-dimensional (RGB) image
that can be rendered on a specified display. Finally,
the display module generates a radiometric descrip-
tion of the final image as it appears on an LCD dis-
play. We describe these five modules in the sections
below.
A. Scene
Digital camera simulation requires a physically
accurate description of the light incident on the ima-
ging sensor. We represent a scene as a multidimen-
sional array describing the spectral radiance
(photons/s/nm/sr/m^2) at each pixel in the sampled
scene.
There are several different sources of scene data.
The simplest are synthetic scenes, such as the Mac-
beth ColorChecker, spatial frequency sweep pat-
terns, intensity ramps, and uniform fields. These
are the spectral radiance image data that arise from
a single image plane at a specified distance from the
optics. When used in combination with image quality
metrics, these synthetic target scenes are useful for
evaluating specific features of the system, such as
color accuracy, spatial resolution, intensity quantiza-
tion, and noise.
We can also create synthetic scenes using three-
dimensional multispectral data generated by other
software [36]. These three-dimensional rendering
methods provide data in which both the spectral
radiance and the depth are specified. Such data can
be used to simulate the effect of optical depth of
focus [37], synthetic apertures [38], and light-field cameras [39].
Another important source of scene data is measurements of natural scenes using multispectral imaging methods [40–44]. These data provide in-
sights about the typical dynamic range and spectral
characteristics of the likely scenes.
The ISET spectral radiance scene data can be stored in a compact wavelength format using a linear model for the spectral functions. Hence, a relatively small number (four to six) of chromatic samples, along with the modest overhead of the basis functions, can represent the full spectral radiance information. In addition, the illuminant spectral power distribution is typically stored in the scene representation; this enables simulation of illumination changes.
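For illustration, the following Python sketch shows the idea of such a compact linear-model encoding; the stand-in scene data, basis dimension, and SVD-based fit are assumptions made for this example, not ISET code.

```python
# Illustrative sketch (not ISET code): storing scene spectra as a few basis
# coefficients. The stand-in scene is built from three smooth components, so a
# five-dimensional basis recovers it essentially exactly.
import numpy as np

wave = np.arange(400, 701, 10)                            # wavelength samples (nm)
rng = np.random.default_rng(0)

# Stand-in scene: smooth spectra formed from a few Gaussian components.
components = np.stack([np.exp(-0.5 * ((wave - c) / 40.0) ** 2) for c in (450, 550, 650)])
spectra = rng.random((64 * 64, 3)) @ components           # one spectrum per pixel

# Fit a small linear basis (here via SVD) and keep only the per-pixel coefficients.
n_basis = 5                                               # four to six usually suffice
_, _, Vt = np.linalg.svd(spectra, full_matrices=False)
basis = Vt[:n_basis]                                      # basis functions over wavelength
coeffs = spectra @ basis.T                                # compact per-pixel representation

rel_err = np.linalg.norm(coeffs @ basis - spectra) / np.linalg.norm(spectra)
print(f"relative reconstruction error: {rel_err:.2e}")
```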
B. Optics
The optics module converts scene radiance data into
an irradiance image (photons/s/nm/m^2) at the sen-
sor surface. The conversion from radiance to irradi-
ance is determined by the properties of the optics,
which gather the diverging rays from a point in the
scene and focus them onto the image sensor [45].
We call the irradiance image at the sensor surface,
just prior to capture, the optical irradiance. To com-
pute the optical irradiance image, we must account
for a number of factors. First, we account for the lens
f -number and magnification. Second, we account for
lens shading (relative illumination), the fall-off in
intensity with lens field height. Third, we blur the
optical irradiance image. The blurring can be per-
formed with one of three models: a wavelength-
dependent shift-invariant diffraction-limited model,
a wavelength-dependent general shift-invariant
model (arbitrary point spread), and a general ray-
trace calculation, which further incorporates geo-
metric distortions and a wavelength-dependent blur
that is not shift invariant.
1. Converting Radiance to Irradiance
The camera equation [46,47] defines a simple model for converting the scene radiance function, L_scene, to the optical irradiance field at the sensor, I_image. The camera equation is

\[ I_{\mathrm{image}}(x, y; \lambda) = \frac{\pi T(\lambda)}{4 (f\#)^{2}} \, L_{\mathrm{scene}}\!\left(\frac{x}{m}, \frac{y}{m}; \lambda\right). \quad (1) \]

The term f# is the effective f-number of the lens (focal length divided by effective aperture), m is the lens magnification, and T(λ) is the lens transmissivity. The camera equation holds with good
precision for the center of the image (i.e., on the
optical axis). For all other image locations, we apply
an off-axis (relative illumination) correction.
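As a concrete illustration, the Python sketch below applies Eq. (1) to a radiance cube; the array sizes, lens transmissivity, and f-number are assumed values, and the code is not drawn from ISET.

```python
# Illustrative sketch of Eq. (1). Magnification m only rescales the spatial
# coordinates, so for a uniformly sampled cube it amounts to resampling the
# image grid and is omitted here.
import numpy as np

def radiance_to_irradiance(L_scene, T_lens, f_number):
    """On-axis irradiance: I = pi * T(lambda) / (4 f#^2) * L_scene."""
    return (np.pi * T_lens / (4.0 * f_number ** 2)) * L_scene

wave = np.arange(400, 701, 10)                 # wavelength samples (nm)
L_scene = np.ones((32, 32, wave.size))         # photons/s/nm/sr/m^2 (stand-in values)
T_lens = np.full(wave.size, 0.9)               # assumed flat lens transmissivity
I_image = radiance_to_irradiance(L_scene, T_lens, f_number=4.0)
```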
2. Relative Illumination
The fall-off in illumination from the principal axis is called the relative illumination or lens shading, R(x, y; λ). There is a simple formula to describe the shading in the case of a thin lens without vignetting or (geometric) distortion. In that case, the fall-off is proportional to cos^4(θ), where θ is the off-axis angle [48]:

\[ R(x, y; \lambda) = \cos^{4}\theta = \left(\frac{d}{S}\right)^{4}. \quad (2) \]
The term S is the image field height (distance from
on-axis) and d is the distance from the lens to the im-
age plane. The x; y coordinates specify the position
with respect to the center of the image axis. This for-
mula is often called the cosine-fourth law. In real
lenses, or lens collections, the actual off-axis correc-
tion may differ from this function. It is often used,
however, as a good guess for the irradiance decline
as we measure off-axis.
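A minimal Python sketch of this off-axis correction, with an assumed sensor geometry, is shown below; it is a didactic stand-in rather than the ISET implementation.

```python
# Illustrative cosine-fourth shading map: cos(theta) is computed from the field
# height and the lens-to-image distance. All parameter values are assumptions.
import numpy as np

def cos4_falloff(n_rows, n_cols, pixel_pitch, lens_to_image_dist):
    x = (np.arange(n_cols) - n_cols / 2.0) * pixel_pitch
    y = (np.arange(n_rows) - n_rows / 2.0) * pixel_pitch
    X, Y = np.meshgrid(x, y)
    field_height = np.hypot(X, Y)                       # distance from the optical axis
    cos_theta = lens_to_image_dist / np.hypot(lens_to_image_dist, field_height)
    return cos_theta ** 4

R = cos4_falloff(480, 640, pixel_pitch=2.8e-6, lens_to_image_dist=4e-3)
# I_shaded = I_image * R[..., np.newaxis]   # apply the correction per wavelength plane
```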
3. Irradiance Image Blurring
The irradiance image, I_image(x, y; λ), cannot be a perfect replica of the scene radiance, L_scene(x, y; λ). Im-
perfections in the lens material or shape, as well
as fundamental physical limitations (diffraction),
limit the precision of the reproduction. The imperfec-
tions caused by these factors can be modeled by sev-
eral types of blurring calculations of increasing
complexity.
Diffraction-limited optics. A diffraction-limited
system can be modeled as having a wavelength-
dependent, shift-invariant point spread function (PSF) [49,50]. Diffraction-limited modeling uses a wave-optics approach to compute the blurring caused
by a perfect lens with a finite aperture. The PSF of a
diffraction-limited lens is quite simple, depending
only on the f -number of the lens and the wavelength
of the light. It is particularly simple to express the
formula in terms of the Fourier transform of the
PSF, which is also called the optical transfer function
(OTF).
The formula for the diffraction-limited OTF is

\[ \mathrm{OTF}(\rho) = \begin{cases} \dfrac{2}{\pi}\left[\cos^{-1}(\rho) - \rho\sqrt{1 - \rho^{2}}\,\right], & \rho < 1 \\ 0, & \rho \geq 1 \end{cases} \quad (3) \]

where ρ = fλd/A (normalized frequency), in which f = frequency in cycles/meter, A = aperture diameter in meters, λ = wavelength, and d = distance between the lens aperture and detector.
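The following Python sketch evaluates this OTF as a function of spatial frequency; the frequency range and lens parameters are illustrative assumptions.

```python
# Illustrative evaluation of the diffraction-limited OTF of Eq. (3); not ISET code.
import numpy as np

def diffraction_limited_otf(freq, wavelength, aperture_diameter, aperture_to_detector):
    """OTF(rho) = (2/pi) * (arccos(rho) - rho * sqrt(1 - rho^2)) for rho < 1, else 0."""
    rho = freq * wavelength * aperture_to_detector / aperture_diameter
    otf = np.zeros_like(rho, dtype=float)
    inside = rho < 1.0
    r = rho[inside]
    otf[inside] = (2.0 / np.pi) * (np.arccos(r) - r * np.sqrt(1.0 - r ** 2))
    return otf

freq = np.linspace(0.0, 4e5, 256)                               # cycles/meter
otf_550 = diffraction_limited_otf(freq, 550e-9, 1e-3, 4e-3)     # assumed lens values
```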
Shift-invariant image formation. The
diffraction-limited PSF is a specific instance of a
shift-invariant linear model. In optics, the term
isoplanatic is used to define conditions when a
shift-invariant model is appropriate. Specifically,
an isoplanatic patch in an optical system is a region
in which the aberrations are constant; experimen-
tally, a patch is isoplanatic if translation of a point
in the object plane causes no change in the irradiance
distribution of the PSF except its location in the
image plane. This is precisely the idea behind a
shift-invariant linear system.
The lens transformation from a shift-invariant
system can be computed much more efficiently than
a shift-variant (anisoplanatic) system. The computa-
tional efficiency arises because the computation can
take advantage of the fast Fourier transform to cal-
culate the spatial distribution of the irradiance
image.
Specifically, we can convert the image formation
and PSF into the spatial frequency domain. The
shift-invariant convolution in the space domain is
a pointwise product in the spatial frequency domain.
Hence, we have

\[ \mathrm{FT}\{I_{\mathrm{image}}(x, y; \lambda)\} = \mathrm{FT}\{\mathrm{PSF}(x, y; \lambda)\} \cdot \mathrm{FT}\{I_{\mathrm{ideal}}(x, y; \lambda)\}, \quad (4) \]
\[ \mathrm{FT}\{I_{\mathrm{image}}(x, y; \lambda)\} = \mathrm{OTF}(f_x, f_y; \lambda) \cdot \mathrm{FT}\{I_{\mathrm{ideal}}(x, y; \lambda)\}, \quad (5) \]
\[ I_{\mathrm{image}}(x, y; \lambda) = \mathrm{FT}^{-1}\{\mathrm{OTF} \cdot \mathrm{FT}\{I_{\mathrm{ideal}}(x, y; \lambda)\}\}, \quad (6) \]

where FT{·} is the Fourier transform operator and the OTF is the Fourier transform of the PSF. Because no photons are lost or added, the area under the PSF is 1, or equivalently, OTF(0, 0; λ) = 1. In this shift-
invariant model, we assume that the point spread
is shift-invariant for each wavelength, but the PSF
may differ across wavelengths. Such differences
are common because of factors such as longitudinal
chromatic aberrations.
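A short Python sketch of this frequency-domain computation is given below; the per-wavelength loop and array conventions are assumptions made for clarity and are not ISET's implementation.

```python
# Illustrative shift-invariant blurring per Eqs. (4)-(6): pointwise product of
# the OTF and the ideal image spectrum, one wavelength at a time.
import numpy as np

def blur_shift_invariant(ideal_image, psf_stack):
    """ideal_image, psf_stack: arrays of shape (rows, cols, n_wave); PSFs centered."""
    ideal_image = np.asarray(ideal_image, dtype=float)
    blurred = np.empty_like(ideal_image)
    for k in range(ideal_image.shape[2]):
        psf = psf_stack[:, :, k] / psf_stack[:, :, k].sum()   # unit area, so OTF(0,0) = 1
        otf = np.fft.fft2(np.fft.ifftshift(psf))              # centered PSF -> OTF
        spectrum = np.fft.fft2(ideal_image[:, :, k]) * otf
        blurred[:, :, k] = np.real(np.fft.ifft2(spectrum))
    return blurred
```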
Shift-variant image formation: ray-tracing.
When measuring over large fields in real systems, the PSFs vary considerably. The ray-trace method replaces the single, shift-invariant PSF with
a series of wavelength-dependent PSFs that vary as
a function of field height and angle. In addition, ray
tracing must account for the geometric distortion.
This can be specified as a displacement field that var-
ies as a function of input image position [d(x, y)]. In
the ray-trace calculation, the displacement field is
first applied and the result is blurred by the local
PSF, wavelength by wavelength.
When using the ray-trace method, it is necessary
to specify the geometric distortion and how the point
spread varies with field height and wavelength. For
real systems, these functions can be specified by the
user or calculated using lens design software [51].
C. Sensor
The sensor module transforms the optical irradiance
image into a two-dimensional array of voltage sam-
ples, one sample from each pixel. Each sample is
A82 APPLIED OPTICS / Vol. 51, No. 4 / 1 February 2012

associated with a position in the image space. Most
commonly, the pixel positions are arranged to form a
regular, two-dimensional sampling array. This array
matches the spatial sampling grids of common out-
put devices, including displays and printers.
In most digital image sensors, the transduction of
photons to electrons is linear: specifically, the photo-
detector response (either CCD or CMOS) increases
linearly with the number of incident photons. De-
pending on the material properties of the silicon sub-
strate, such as its thickness, the photodetector
wavelength sensitivity will vary. But even so, the re-
sponse is linear in that the detector sums the re-
sponses across wavelengths. Hence, ignoring device
imperfections and noise, the mean response of the
photodetector to an irradiance image (I(λ, x), photons/s/nm/m^2) is determined by the sensor spectral quantum efficiency (S(λ), the e⁻/photon), the aperture function across space A_i(x), and the exposure time T (s). For the ith photodetector, the number of electrons will be summed across the aperture and wavelength range:

\[ e_i = T \iint_{\lambda, x} S_i(\lambda)\, A_i(x)\, I(\lambda, x)\, d\lambda\, dx. \quad (7) \]
A complete sensor simulation must account for the
device imperfections and noise sources. Hence, the
full simulation is more complex than the linear
expression in Eq. (7). Here, we outline the factors
and computational steps that are incorporated in
the simulation.
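For orientation, the Python sketch below evaluates Eq. (7) by discrete summation over an assumed wavelength grid and pixel aperture; the quantum efficiency, irradiance level, and pixel size are stand-in values, not a calibrated sensor model.

```python
# Illustrative discrete version of Eq. (7); all values are assumptions.
import numpy as np

wave_nm = np.arange(400, 701, 10)                  # wavelength samples (nm)
n_sub = 15                                         # spatial samples across the pixel
pixel_pitch = 2.8e-6                               # pixel width (m), assumed

S = 0.5 * np.ones(wave_nm.size)                    # quantum efficiency, e-/photon (assumed)
A = np.ones((n_sub, n_sub))                        # aperture function (fully open)
I = 1e16 * np.ones((wave_nm.size, n_sub, n_sub))   # irradiance, photons/s/nm/m^2 (stand-in)

T = 0.01                                           # exposure time (s)
d_lambda = 10.0                                    # wavelength step (nm)
d_area = (pixel_pitch / n_sub) ** 2                # area of one spatial sample (m^2)

electrons = T * np.sum(S[:, None, None] * A[None, :, :] * I) * d_lambda * d_area
print(f"mean electron count: {electrons:.3g}")
```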
1. Computing the Signal Current Density Image
The irradiance image already includes the effects of
the imaging optics. To compute the signal current
density, we must further specify the effect of several
additional optical factors within the sensor and pixel.
For example, most consumer cameras include an in-
frared filter that covers the entire sensor. This filter
is present because the human eye is not sensitive to
infrared wavelengths, while the detector is. For con-
sumer imaging, the sensor is designed to capture the
spectral components of the image that the eye sees
and to exclude those parts that the eye fails to see.
The infrared filter helps to accomplish this goal,
and thus it covers all of the pixels.
We must also account for the color filters placed in
front of the individual pixels. While the pixels in the
sensor array are typically the same, each is covered
by a color filter that permits certain wavelengths of
light to pass more efficiently than others.
The geometric structure of a pixel, which is some-
thing like a tunnel, also has a significant impact on
the signal current density image. The position and
width of the opening to the tunnel determine the pix-
el aperture [52]. Ordinarily the photodetector is at
the bottom of the tunnel in the silicon substrate.
In modern CMOS imagers, usually built using multi-
ple metal layers, the pixel depth can be as large as
the pixel aperture. Imagine a photon that must enter
through the aperture and arrive safely at the photo-
detector at the bottom. If the pixel is at the edge of
the sensor array, the photon's direction as it travels
from the center of the imaging lens must be signifi-
cantly altered. This redirection is accomplished by a
microlens, positioned near the aperture. The position
of each microlens with respect to the pixel center var-
ies across the array because the optimal placement
of the microlens depends on the pixel position with
respect to the imaging lens.
As the photon travels from the aperture to the
photodetector, the photon must pass through a series
of materials. Each of these has its own refractive
index and thus can scatter the light or change its di-
rection. The optical efficiency of each pixel depends
on these materials [29].
2. Space–Time Integration
After accounting for the photodetector spectral quan-
tum efficiency, the various filters, the microlens
array, and pixel vignetting, we can compute the ex-
pected current per unit area at the sensor. This sig-
nal current density image is represented at the same
spatial sampling density as the optical irradiance
image.
The next logical step is to account for the size, po-
sition, and exposure duration of each of the photode-
tectors by integrating the current across space and
time. In this stage, we must coordinate the spatial
representation of the optical image sample points
and the pixel positions. Once these two images are
represented in the same spatial coordinate frame,
we can integrate the signal across the spatial dimen-
sions of each pixel. We also integrate across the
exposure duration to calculate the electrons accumu-
lated at each pixel.
3. Incorporating Sensor Noise
At this stage of the process, we have a spatial array of
pixel electrons. The values are noise free. In the third
step, we account for various sources of noise, includ-
ing the photon shot noise, electrical noise at the pixel,
and inhomogeneities across the sensor.
Photon shot noise refers to the random (Poisson)
fluctuation in the number of electrons captured with-
in the pixel even in response to a nominally identical
light stimulus. This noise is an inescapable property
of all imaging systems. Poisson noise is characterized
by a single rate parameter that is equal to both the
mean level and the variance of the distribution.
There are a variety of electrical imperfections in
the pixels and the sensor. Dark voltage refers to
the accumulation of charge (electrons) even in the ab-
sence of light. Dark voltage is often referred to as
thermally generated noise because the noise in-
creases with ambient temperature. The process of
reading the electrons accumulated within the pixel is
noisy, and this is called read noise. Resetting the
pixel by emptying its electrons is an imperfect pro-
cess, and this noise is called reset noise. Finally,
the detector captures only a fraction of the incident
photons, in part because of the material properties
and in part because the photodetector occupies only
a portion of the surface at the bottom of the pixel.
The spectral quantum efficiency is a wavelength-
dependent function that describes the expected
fraction of photons that produce an electron. The fill
factor is the percentage of the pixel that is occupied
by the photodetector.
Finally, there is the inevitable variance in the elec-
trical linear response function of the pixels. One
variation is in the slope of the response to increasing
light intensity; this differs across the array and is
called photoresponse nonuniformity (PRNU). Second,
the offset of the linear function differs across the ar-
ray, and this variance in the offset is called dark signal
nonuniformity (DSNU). PRNU and DSNU are types
of fixed pattern noise (FPN); another source of FPN is due to variation in column amplifiers.
Over the years, circuit design has improved greatly
and very low noise levels can be achieved. Also, var-
ious simple acquisition algorithms can reduce sensor
noise. An important example is correlated double
sampling. In this method, the read process includes
two measurements: a reference measurement and
a data measurement. The reference measurement in-
cludes certain types of noise (reset noise, FPN). By
subtracting the two measurements, one can eliminate
or reduce these noises. Correlated double sampling
does not remove, and may even increase, other types
of noise (e.g., shot noise or PRNU variations) [53,54].
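The Python sketch below adds the noise sources just described to a noise-free electron image; the dark current, read noise, DSNU, and PRNU values are assumed for illustration and are not calibrated parameters.

```python
# Illustrative noise model: shot noise, dark current, read noise, DSNU, PRNU.
# Parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def add_sensor_noise(mean_electrons, exposure,
                     dark_current=25.0,     # e-/pixel/s (assumed)
                     read_noise=3.0,        # e- rms (assumed)
                     dsnu_sigma=2.0,        # e- offset nonuniformity (assumed)
                     prnu_sigma=0.01):      # ~1% gain nonuniformity (assumed)
    shape = mean_electrons.shape
    prnu = 1.0 + prnu_sigma * rng.standard_normal(shape)   # per-pixel gain variation
    dsnu = dsnu_sigma * rng.standard_normal(shape)         # per-pixel offset variation
    signal = rng.poisson(mean_electrons * prnu)            # photon shot noise
    dark = rng.poisson(dark_current * exposure, shape)     # dark-current electrons
    read = read_noise * rng.standard_normal(shape)         # read noise
    return signal + dark + read + dsnu

noisy_electrons = add_sensor_noise(np.full((480, 640), 5000.0), exposure=0.01)
```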
4. Analog-to-Digital Conversion
In the fourth step, we convert the current to a voltage
at each pixel. In this process, we use the conversion
gain and we also account for the upper limit imposed
by the voltage swing. The maximum deviation from
the baseline voltage is called the voltage swing. The
maximum number of electrons that can be stored in a
pixel is called the well capacity. The relationship be-
tween the number of electrons and the voltage is called the conversion gain (volts/e⁻).
In many cases, the output voltage is further scaled
by an analog gain factor; this too can be specified in
the simulation. Finally, the voltages are quantized
into digital values.
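A minimal Python sketch of this conversion is shown below; the conversion gain, voltage swing, and bit depth are assumed values.

```python
# Illustrative electron-to-digital-value conversion: conversion gain, clipping at
# the voltage swing, analog gain, and quantization. Values are assumptions.
import numpy as np

def electrons_to_digital(electrons, conversion_gain=110e-6, voltage_swing=1.8,
                         analog_gain=1.0, bit_depth=10):
    volts = electrons * conversion_gain * analog_gain      # conversion gain in volts/e-
    volts = np.clip(volts, 0.0, voltage_swing)             # saturation at the voltage swing
    levels = 2 ** bit_depth - 1
    return np.round(volts / voltage_swing * levels).astype(np.int32)

dn = electrons_to_digital(np.array([0.0, 5.0e3, 2.0e4]))   # -> array([  0, 313, 1023])
```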
D. Processor
The processor module converts the digital values in
the two-dimensional sensor array into an RGB image
that can be rendered on a specified display. This is
accomplished by controlling exposure duration, in-
terpolating missing RGB sensor values (demosaick-
ing), and transforming sensor RGB values into an
internal color space for encoding and display (color
balancing and display rendering). There are many
different approaches to autoexposure, demosaicking,
and color balancing, and describing these methods is
beyond the scope of this paper. ISET implements sev-
eral algorithms that are in the public domain.
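As one example of such public-domain processing, the Python sketch below performs bilinear demosaicking of an RGGB Bayer mosaic; it is a generic baseline, not the specific algorithm used in ISET.

```python
# Illustrative bilinear demosaicking of an RGGB Bayer mosaic (a common baseline).
import numpy as np
from scipy.ndimage import convolve

def bilinear_demosaic_rggb(mosaic):
    h, w = mosaic.shape
    r_mask = np.zeros((h, w))
    r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((h, w))
    b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0   # red/blue interpolation
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0    # green interpolation

    r = convolve(mosaic * r_mask, k_rb, mode='mirror')
    g = convolve(mosaic * g_mask, k_g, mode='mirror')
    b = convolve(mosaic * b_mask, k_rb, mode='mirror')
    return np.stack([r, g, b], axis=-1)

rgb = bilinear_demosaic_rggb(np.random.rand(8, 8))
```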
E. Display
The display module generates a radiometric descrip-
tion of the final image as it appears on an LCD dis-
play. It is important to calculate the spatial–spectral
radiance emitted from the displayed image because,
unlike the digital image values generated by the pro-
cessor, this is the stimulus that actually reaches the
eye. Simplifying the process of modeling the radiance
distribution makes it possible to use the radiance
field as the input to objective image quality metrics
based on models of the human visual system.
ISET uses three functions to predict the spatial–spectral radiance emitted by a display. First, the dis-
play gamma is used as a lookup table to convert
digital values into a measure of the linear intensity.
Second, pixel PSFs for each color component (sub-
pixel PSF) are used to generate a spatial map of lin-
ear intensity for each of the display color primaries.
Third, the spectral power distributions of the color
primaries are used to calculate the spectral composi-
tion of the displayed image. These three functions (the display gamma, the subpixel PSFs, and the spectral power distributions) are sufficient to char-
acterize the performance of linear displays with
independent pixels.
More formally, the spatial–chromatic image from a pixel, given a digital input (R, G, B), is

\[ p(x, y; \lambda) = \sum_{i} g_i(v)\, s_i(x, y)\, w_i(\lambda), \quad (8) \]

where g_i(v) represents the display gamma for each color primary, s_i(x, y) is the spatial spread of the light for each color subpixel, and w_i(λ) is the spectral power distribution of the color primary.
These equations apply to the light emitted from a
single pixel. The full display image is created by re-
peating this process across the array of display pix-
els. This calculation assumes that the light emitted
from a pixel is independent of the values at adjacent
pixels. These assumptions are a practical starting
point for display simulation, although they may
not be sufficient for some displays [55].
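The Python sketch below evaluates Eq. (8) for a single pixel; the gamma exponent, primary spectra, and subpixel spreads are illustrative stand-ins rather than measured display functions.

```python
# Illustrative evaluation of Eq. (8) for one display pixel; all display functions
# here are assumed stand-ins.
import numpy as np

wave = np.arange(400, 701, 10)                          # wavelength samples (nm)
n_sub = 9                                               # spatial samples across the pixel

def gamma_lut(v, exponent=2.2):
    """Assumed display gamma g_i(v) for an 8-bit input."""
    return (v / 255.0) ** exponent

# Gaussian stand-ins for the R, G, B primary spectra w_i(lambda).
w = np.stack([np.exp(-0.5 * ((wave - c) / 20.0) ** 2) for c in (610, 540, 470)])

# Subpixel spreads s_i(x, y): each primary occupies one third of the pixel.
s = np.zeros((3, n_sub, n_sub))
s[0, :, 0:3] = 1.0
s[1, :, 3:6] = 1.0
s[2, :, 6:9] = 1.0

def pixel_radiance(rgb):
    """p(x, y; lambda) = sum_i g_i(v_i) * s_i(x, y) * w_i(lambda)."""
    out = np.zeros((n_sub, n_sub, wave.size))
    for i in range(3):
        out += gamma_lut(rgb[i]) * s[i][:, :, None] * w[i][None, None, :]
    return out

p = pixel_radiance((200, 128, 64))                      # radiance map for one pixel
```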
3. Validation
We created software models for the scene, optics, sen-
sor, processor, and display in an integrated suite of
MATLAB software tools, the ISET [26]. The ISET si-
mulation begins with scene data; these are trans-
formed by the imaging optics into the optical
image, an irradiance distribution at the image sensor
array; the irradiance is transformed into an image
sensor array response; finally, the image sensor
array data are processed to generate a display image.
In the next section, we use ISET to model the scene,
optics, and sensor of a calibrated 5 megapixel CMOS
digital camera and compare the simulated and mea-
sured sensor performance.