
Proceedings ArticleDOI

An Evaluation Framework for the Accuracy of Camera Transfer Functions Estimated from Differently Exposed Images

26 Mar 2006, pp. 168-172

TL;DR: This paper describes a radiometry-based experimental setup to directly measure the CTF and shows how to obtain image pairs of known exposure ratios from the same experiment, i.e., under identical environmental circumstances (light, temperature, camera settings).

Abstract: Intensity values read from CCD- or CMOS-cameras are usually not proportional to the irradiance acquired by the sensor, but are mapped by a mostly nonlinear camera transfer function (CTF). This CTF can be measured using a test chart. This, however, is afflicted with the difficulty of ensuring uniform illumination of the chart. An alternative to chart-based measurements of the CTF is to use differently exposed images of the same scene. In this paper, we describe a radiometry-based experimental setup to directly measure the CTF. We furthermore show how to obtain image pairs of known exposure ratios from the same experiment, i.e., under identical environmental circumstances (light, temperature, camera settings). We use these images to estimate the CTF on differently exposed images, thus enabling a quantitative comparison of estimated and measured CTF.

Summary (2 min read)

1. Estimating and Measuring the Camera Transfer Function

  • Many algorithms in computer vision assume that the radiance of the scene is known.
  • Changes in scene radiance between images can be used to determine scene structure and illumination [12].
  • Alternatively a polynomial [11] or a constrained piecewise linear model has been used [1, 2].
  • The authors use these images to estimate the CTF by one of the above methods, thus enabling a quantitative comparison of estimated and measured CTF.

2. Experimental Setup

  • The authors' CTF measurements are based essentially on a homogeneous light source realized by an integrating sphere.
  • The authors illuminate the camera sensor directly by this light source, with all optics removed to ensure homogeneous illumination over the entire sensor area, and to avoid distortions introduced by the optics.
  • The image acquisition process integrates this irradiance E (measured in W/m²) over exposure time t and area A of a sensor element, yielding the energy

Q = AEt

  • Additionally, in the case of a color camera, the irradiance incident on the sensor will be filtered by color filters (e.g., a Bayer color-filter array) beforehand.
  • The quantity of light q undergoes further mostly nonlinear transformations described by the CTF f such as, e.g., dynamic range compression and quantization.
  • To measure the CTF in their experimental setup, the authors record the camera response f(q) for different known distances x.
  • Samples of the CTF are then straightforwardly obtained from (2) and (3).
  • The image pairs of exposure ratio k = 2 recorded this way are used to estimate the CTF as in [10, 8].

3. Estimating the Camera Transfer Function

  • Based on such an image pair the joint histogram between these two images [10, 8] can be computed.
  • Afterwards the function g(f), defined by g(f(q)) = f(kq) has to be estimated from this joint histogram.
  • The line has been fitted to the data points between the toe and shoulder region using linear regression (χ²-fitting).

4. Results

  • Range limits are not accounted for in this model.
  • Therefore α corresponds to the dark current for positive values only.
  • The mean absolute difference between these data points and the fitted model of the measured CTF is μ_red = 0.4973, μ_green = 0.1870, and μ_blue = 0.4705 intensity values for the red, green, and blue channel respectively, which proves that the model assumption is valid for this camera.
  • As can be seen in figure 3, the camera model (1) does not fit as well as for the 3-chip camera.
  • Table 2 shows the measured and estimated parameters for each color channel of the AVT Dolphin F145C single-chip Bayer color-filter-array camera.

5. Discussion

  • The authors have shown that it is possible to directly measure the CTF and, under the same environmental circumstances, to acquire image pairs with known exposure ratio.
  • The directly measured CTF can be used to verify the accuracy of the assumed camera model or to find a camera model for a specific camera.
  • From the simultaneously acquired image pairs of known exposure ratio one can estimate the CTF using the same algorithms which are available for nonlaboratory conditions.
  • Additionally this method can give insight into the limitations of an estimation algorithm for the CTF, which can be used to improve the algorithm.


Lehrstuhl für Bildverarbeitung
Institute of Imaging & Computer Vision
An Evaluation Framework for the Accuracy
of Camera Transfer Functions Estimated
from Differently Exposed Images
André A. Bell, Jens N. Kaftan, Dietrich Meyer-Ebrecht, and Til Aach
Institute of Imaging and Computer Vision
RWTH Aachen University, 52056 Aachen, Germany
tel: +49 241 80 27860, fax: +49 241 80 22200
web: www.lfb.rwth-aachen.de
in: 7th IEEE Southwest Symposium on Image Analysis and Interpretation. SSIAI 2006. See
also BibTeX entry below.
BibTeX:
@inproceedings{BEL06a,
author = {Andr\’{e} A. Bell and Jens N. Kaftan and Dietrich Meyer-Ebrecht and Til Aach},
title = {{A}n {E}valuation {F}ramework for the {A}ccuracy of {C}amera
{T}ransfer {F}unctions {E}stimated from {D}ifferently {E}xposed {I}mages},
booktitle = {7th IEEE Southwest Symposium on Image Analysis and Interpretation. SSIAI 2006},
publisher = {IEEE},
year = {2006},
pages = {168--172},
}
© 2006 IEEE. Personal use of this material is permitted. However, permission to reprint/republish
this material for advertising or promotional purposes or for creating new collective works for
resale or redistribution to servers or lists, or to reuse any copyrighted component of this work in
other works must be obtained from the IEEE.
document created on: May 19, 2006
created from file: hdrprecission.tex
cover page automatically created with CoverPage.sty
(available at your favourite CTAN mirror)

An Evaluation Framework for the Accuracy of Camera Transfer Functions
Estimated from Differently Exposed Images
André A. Bell, Jens N. Kaftan, Dietrich Meyer-Ebrecht, Til Aach
Institute of Imaging and Computer Vision, RWTH Aachen University, Germany
{bell,kaftan,meyer-ebrecht,aach}@lfb.rwth-aachen.de
Abstract

Intensity values read from CCD- or CMOS-cameras are usually not proportional to the irradiance acquired by the sensor, but are mapped by a mostly nonlinear camera transfer function (CTF). This CTF can be measured using a test chart. This, however, is afflicted with the difficulty of ensuring uniform illumination of the chart. An alternative to chart-based measurements of the CTF is to use differently exposed images of the same scene. In this paper, we describe a radiometry-based experimental setup to directly measure the CTF. We furthermore show how to obtain image pairs of known exposure ratios from the same experiment, i.e., under identical environmental circumstances (light, temperature, camera settings). We use these images to estimate the CTF on differently exposed images, thus enabling a quantitative comparison of estimated and measured CTF.
1. Estimating and Measuring the Camera Transfer Function

Many algorithms in computer vision assume that the radiance of the scene is known. For instance, changes in scene radiance between images can be used to determine scene structure and illumination [12]. Also, the orientation of visible surfaces of the scene can be obtained from the radiance by shape from shading algorithms [13]. Radiance maps were moreover used to render synthetic objects more realistically into the scene [3].

In our application, we seek to tonally register cytopathological images taken at different exposures to generate a high dynamic range image. Image acquisition is based on a microscope equipped with a three-chip RGB color camera. Towards this end, it is crucial to determine the irradiance values in the image plane from the recorded intensity values. Unfortunately, the intensity values read from CCD- or CMOS-cameras are usually not proportional to the irradiance acquired by the sensor, but are mapped by the mostly nonlinear camera transfer function (CTF) f. In other words, we seek to apply the inverse f⁻¹ of the CTF f.
This CTF can be measured using a test chart like the Macbeth- or the CamAlign-CGH-chart, which consist of patches of known reflectance. Measuring the CTF using charts, however, is afflicted with the difficulty of ensuring uniform illumination of the chart. Furthermore, it might not always be practicable, since the CTF depends on parameter settings of the camera and the environment (e.g. temperature).

An alternative to chart-based measurements of the CTF is to use differently exposed images of the same scene [10, 8, 9, 4, 11, 5, 6, 1, 2]. Assuming that the exposure ratios of image pairs are known and the CTF can be modeled by, e.g., a γ-function

I := f(q) = α + βq^γ,    (1)

the parameters α and γ can be estimated by comparing the intensity values f(q) of corresponding pixels in differently exposed images [10, 8, 9]. The scaling factor β cannot be recovered from these exposure sets, and the parameter q, named the photoquantity in [8], denotes the amount of light received by a sensor element. Such a parametric model can also be replaced by a smoothness constraint [4]. Alternatively, a polynomial [11] or a constrained piecewise linear model has been used [1, 2]. It has been shown that one needs to either know the exposure ratios or make an assumption about the CTF [5, 6].
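To make the parametric model (1) concrete, here is a minimal sketch of the γ-function CTF and its inverse; the numeric parameter values are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

# Camera model (1): I = f(q) = alpha + beta * q**gamma.
# Parameter values used below are illustrative assumptions only.
def ctf(q, alpha, beta, gamma):
    """Map the photoquantity q to an intensity value I."""
    return alpha + beta * np.power(q, gamma)

def inverse_ctf(intensity, alpha, beta, gamma):
    """Recover q from an intensity; valid between toe and shoulder region."""
    return np.power((intensity - alpha) / beta, 1.0 / gamma)

q = np.linspace(0.0, 1.0, 5)
I = ctf(q, alpha=15.0, beta=230.0, gamma=0.93)
assert np.allclose(inverse_ctf(I, alpha=15.0, beta=230.0, gamma=0.93), q)
```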
These methods offer appealing ways to recover the CTF, even in a non-laboratory environment. So far, the accuracy of the estimated CTFs has been evaluated qualitatively with CamAlign-CGH measurements plotted into the curve of the recovered CTF [9] or using a Macbeth chart instead [5]. The influence of noisy measurements has been shown to be less than 2.7% in synthetic data [11].

In this paper, we describe a radiometry-based experimental setup to directly measure the CTF. We furthermore show how to obtain image pairs of known exposure ratios from the same experiment, viz. under identical environmental circumstances (light, temperature, camera settings). We use these images to estimate the CTF by one of the above methods, thus enabling a quantitative comparison of estimated and measured CTF.
2. Experimental Setup

Our CTF measurements are based essentially on a homogeneous light source realized by an integrating sphere (Ulbricht sphere). An integrating sphere provides an isotropic and homogeneous light output in terms of radiance L (measured in W/(sr·m²)) at its opening of diameter r [7]. We illuminate the camera sensor directly by this light source, with all optics removed to ensure homogeneous illumination over the entire sensor area, and to avoid distortions introduced by the optics. The irradiance impinging on the sensor is then given by [7]

E = π · r² / (r² + x²) · L    (2)

where x is the distance between sensor and exit pupil of the light source. The image acquisition process integrates this irradiance E (measured in W/m²) over exposure time t and area A of a sensor element, yielding the energy

Q = AEt

detected by that sensor element. Q corresponds to

N_p = AEt · λ/(hc)

detected photons, where h is Planck's constant and c is the speed of light. N_p maps to the sensor signal q via the quantum efficiency η of a sensor element according to

q = ηAEt · λ/(hc)    (3)

In non-monochromatic light, η and E may depend on the wavelength λ; the total response q is then given by integrating (3) over λ [7]. Additionally, in the case of a color camera, the irradiance incident on the sensor will be filtered by color filters (e.g., a Bayer color-filter array) beforehand. The quantity of light q undergoes further, mostly nonlinear transformations described by the CTF f, such as, e.g., dynamic range compression and quantization.

To measure the CTF in our experimental setup, we record the camera response f(q) for different known distances x. Samples of the CTF are then straightforwardly obtained from (2) and (3).
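As a hedged illustration of how CTF samples follow from (2) and (3), the sketch below computes the irradiance at a given distance and the resulting photoquantity for monochromatic light. The geometry, radiance, pixel area, quantum efficiency, and wavelength are assumed values, not those of the paper's setup.

```python
import numpy as np

# Sketch of equations (2) and (3). All numeric values are illustrative assumptions.
H_PLANCK = 6.626e-34  # Planck's constant [J*s]
C_LIGHT = 2.998e8     # speed of light [m/s]

def irradiance(x, r, L):
    """Equation (2): E = pi * r^2 / (r^2 + x^2) * L, in W/m^2."""
    return np.pi * r**2 / (r**2 + x**2) * L

def photoquantity(E, eta, A, t, wavelength):
    """Equation (3): q = eta * A * E * t * lambda / (h*c), monochromatic case."""
    return eta * A * E * t * wavelength / (H_PLANCK * C_LIGHT)

# One sample of the CTF: pair the known q with the recorded response f(q).
E = irradiance(x=0.5, r=0.05, L=10.0)  # assumed opening size, distance [m], radiance [W/(sr*m^2)]
q = photoquantity(E, eta=0.4, A=(6.45e-6)**2, t=0.01, wavelength=550e-9)
```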
Additionally, for every image taken at a distance x₁, we acquire a second image at position

x₂ = √(r² + 2x₁²)    (4)

which, according to (2), receives half of the irradiance in the image plane. The image pairs of exposure ratio k = 2 recorded this way are used to estimate the CTF as in [10, 8].
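A quick numerical check of (4), under assumed geometry, confirms that moving the sensor from x₁ to x₂ halves the irradiance predicted by (2):

```python
import numpy as np

# Verify equation (4) against equation (2); r and x1 are assumed values [m].
def irradiance(x, r, L=1.0):
    return np.pi * r**2 / (r**2 + x**2) * L  # equation (2)

r, x1 = 0.05, 0.5
x2 = np.sqrt(r**2 + 2.0 * x1**2)             # equation (4)
assert np.isclose(irradiance(x1, r) / irradiance(x2, r), 2.0)  # exposure ratio k = 2
```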
Figure 1. Joint histogram example of the JAI CV-M90 red channel. (Axes: f(q) versus f(kq).)
3. Estimating the Camera Transfer Function

Let us consider two images f₁ and f₂ with exposure ratio k = 2, such that f₂ receives half of the irradiance of f₁. This is equivalent to observing the quantities of light q and kq through the same CTF f. Based on such an image pair, the joint histogram between these two images [10, 8] can be computed. In case more image pairs of equal exposure ratio k are available, these can be added to the joint histogram. Afterwards, the function g(f), defined by

g(f(q)) = f(kq),

has to be estimated from this joint histogram. Choosing the camera model (1) gives

g(f) = f(kq)
     = α + β(kq)^γ
     = α − k^γ·α + k^γ·α + k^γ·β·q^γ
     = α(1 − k^γ) + k^γ·(α + β·q^γ)
     = α(1 − k^γ) + k^γ·f

which is a straight line in the joint histogram between the toe and shoulder region [8]. The slope m and intercept b are therefore given by m = k^γ and b = α(1 − k^γ). After fitting a line to the joint histogram, it is possible to determine the parameters α and γ of the camera model by
γ = log m / log k    (5)

α = b / (1 − m)    (6)

Figure 2. Measured data points and fitted CTF for a JAI CV-M90 3-chip-CCD (8 bit) camera. (Axes: q [arb. u.] versus f(q).)
Figure 1 shows a plot of a joint histogram generated from 60 image pairs of exposure ratio k = 2, obtained as described in section 2. The line has been fitted to the data points between the toe and shoulder region using linear regression (χ²-fitting). From the estimated values m = 1.9109 and b = −14.207, and by using (5) and (6), one obtains γ = 0.9343 and α = 15.5967 as parameters for the camera model.
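The following sketch mirrors this estimation step under stated assumptions: it accumulates the joint histogram of a registered image pair, fits a line with χ²-style weights to the part between toe and shoulder, and applies (5) and (6). The 8-bit bin range and the toe/shoulder cut-offs lo and hi are illustrative choices, not values from the paper.

```python
import numpy as np

def estimate_gamma_alpha(img_dark, img_bright, k=2.0, lo=30, hi=220):
    """Estimate gamma and alpha of model (1) from a registered image pair.

    img_dark records f(q) (half exposure), img_bright records f(kq);
    lo and hi are assumed cut-offs excluding the toe and shoulder region.
    """
    # Joint histogram of intensity co-occurrences (8-bit range assumed).
    hist, _, _ = np.histogram2d(img_dark.ravel(), img_bright.ravel(),
                                bins=256, range=((0, 256), (0, 256)))
    f_q, f_kq = np.nonzero(hist)
    counts = hist[f_q, f_kq]
    keep = (f_q > lo) & (f_q < hi) & (f_kq > lo) & (f_kq < hi)
    # Weighted line fit g(f) = m*f + b; sqrt(count) weights give a chi^2 fit.
    m, b = np.polyfit(f_q[keep], f_kq[keep], deg=1, w=np.sqrt(counts[keep]))
    gamma = np.log(m) / np.log(k)   # equation (5)
    alpha = b / (1.0 - m)           # equation (6)
    return gamma, alpha
```

Further image pairs of the same exposure ratio k could simply be accumulated into the histogram before the fit, as the paper describes.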
4. Results

Based on (1) as a model for the 3-chip RGB camera JAI CV-M90, we estimated the parameters α and γ. Range limits are not accounted for in this model. Therefore, α corresponds to the dark current for positive values only. For negative α, the CTF is truncated to zero. Towards higher values, the CTF exhibits the expected saturation due to the limited maximum ("full-well") charge generation capacity of the sensor. The dynamic compression of the camera is captured by the parameter γ. For the measurements, the parameter β is determined by the radiance L of the light source. Since L cannot be recovered by estimation from sets of differently exposed images, we performed a least-squares fit of the estimated CTF to the measured CTF with respect to this parameter for comparison. Results are shown in figure 2. Since γ ≠ 1, the CTFs are not linear functions. Evidently, as can be seen from figure 2, the fitted CTF represents the measured data of a JAI CV-M90 3-chip-CCD camera very precisely. The mean absolute difference between these data points and the fitted model of the measured CTF is μ_red = 0.4973, μ_green = 0.1870, and μ_blue = 0.4705 intensity values for the red, green, and blue channel respectively, which proves that the model assumption is valid for this camera. Table 1 shows the measured and estimated parameters for each color channel of the JAI CV-M90 3-chip-CCD camera. The mean absolute error between measured and estimated CTF is μ_red = 0.4641, μ_green = 0.4969, and μ_blue = 0.9856 for red, green, and blue channel respectively.
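Since β is not identifiable from exposure pairs, this comparison requires fitting it; a minimal sketch of that least-squares step, assuming measured samples (q_i, I_i) and the estimated α and γ as inputs, is:

```python
import numpy as np

def fit_beta(q, I, alpha_est, gamma_est):
    """Least-squares fit of beta in I ~ alpha_est + beta * q**gamma_est.

    q, I: measured CTF samples; alpha_est, gamma_est: joint-histogram estimates.
    """
    qg = np.power(q, gamma_est)
    # Closed-form minimizer of sum((alpha_est + beta*qg - I)**2) over beta.
    return np.sum(qg * (I - alpha_est)) / np.sum(qg * qg)
```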
Table 1. Comparison of measured and estimated parameters α and γ for each color channel of the JAI CV-M90 3-chip-CCD camera.

            Measured CTF   Estimated CTF
α_red          14.5919        15.5967
γ_red           0.9210         0.9343
α_green        25.9173        24.6797
γ_green         0.9723         0.9706
α_blue         27.9872        23.3515
γ_blue          0.8300         0.8817

Figure 3. Measured data points and fitted CTF for an AVT Dolphin F145C single-chip Bayer color-filter-array (10 bit) camera. (Axes: q [arb. u.] versus f(q).)
Alternatively, we have chosen the same camera model (1) for the AVT Dolphin F145C single-chip Bayer color-filter-array camera. We estimated α and γ and measured the CTF accordingly. As can be seen in figure 3, the camera model (1) does not fit as well as for the 3-chip camera. The displayed curves in figure 3 represent the color channels corresponding to the rasters of the Bayer pattern; therefore, there are two green channels. The mean absolute difference between the measured data points and the fitted model of the CTF is μ_red = 0.6591, μ_green1 = 1.5541, μ_green2 = 0.6728, and μ_blue = 5.4489 intensity values for the red, green1, green2, and blue channel respectively. It should be noticed that the two green channels diverge as soon as red approaches saturation, and that blue is dented as soon as the second green channel reaches the saturation point (see figure 3). This effect may be due to blooming from one color channel into the other, an effect that cannot occur for a 3-chip camera. Table 2 shows the measured and estimated parameters for each color channel of the AVT Dolphin F145C single-chip Bayer color-filter-array camera. For comparison, we have carried out a least-squares fit of the estimated CTF to the measured CTF with respect to the parameter β, as in the case of the other camera. The mean absolute error between measured and estimated CTF is μ_red = 0.6555, μ_green1 = 1.8828, μ_green2 = 0.7635, and μ_blue = 6.509 intensity values in mean for red, green1, green2, and blue respectively.
Table 2. Comparison of measured and estimated parameters α and γ for each color channel of the AVT Dolphin F145C single-chip Bayer color-filter-array camera.

            Measured CTF   Estimated CTF
α_red          34.6273        34.1294
γ_red           0.9654         0.9648
α_green1        7.9983         0.7144
γ_green1        0.8618         0.8822
α_green2       29.5660        29.8274
γ_green2        0.9628         0.9615
α_blue         46.6789        27.7228
γ_blue          1.0037         0.9130
5. Discussion

We have shown that it is possible to directly measure the CTF and, under the same environmental circumstances, to acquire image pairs with known exposure ratio. The directly measured CTF can be used to verify the accuracy of the assumed camera model or to find a camera model for a specific camera. From the simultaneously acquired image pairs of known exposure ratio, one can estimate the CTF using the same algorithms which are available for non-laboratory conditions. Both measured and estimated CTF,

Citations

Journal ArticleDOI
TL;DR: A mathematical model of the distortions of the optical path is derived and it is shown that the color fringes vanish completely after application of two different algorithms for compensation.
Abstract: Multispectral image acquisition considerably improves color accuracy in comparison to RGB technology. A common multispectral camera design concept features a filter-wheel consisting of six or more optical bandpass filters. By shifting the filters sequentially into the optical path, the electromagnetic spectrum is acquired through the channels, thus making an approximate reconstruction of the spectrum feasible. However, since the optical filters exhibit different thicknesses, refraction indices and may not be aligned in a perfectly coplanar manner, geometric distortions occur in each spectral channel: The reconstructed RGB images thus show rainbow-like color fringes. To compensate for these, we analyze the optical path and derive a mathematical model of the distortions. Based on this model we present two different algorithms for compensation and show that the color fringes vanish completely after application of our algorithms. We also evaluate our compensation algorithms in terms of accuracy and execution time.

87 citations


Cites background from "An Evaluation Framework for the Acc..."

  • ...Also the camera may have a nonlinear camera transfer function [35] relating the incident radiation to gray levels....



Proceedings ArticleDOI
06 May 2013
TL;DR: This work presents a high resolution imaging system for inspection of LBM systems which can be easily integrated into existing machines and shows that the system can detect topological flaws and is able to inspect the surface quality of built layers.
Abstract: Laser Beam Melting (LBM) allows the fabrication of three-dimensional parts from metallic powder with almost unlimited geometrical complexity and very good mechanical properties. LBM works iteratively: a thin powder layer is deposited onto the build platform which is then melted by a laser according to the desired part geometry. Today, the potential of LBM in application areas such as aerospace or medicine has not yet been exploited due to the lack of process stability and quality management. For that reason, we present a high resolution imaging system for inspection of LBM systems which can be easily integrated into existing machines. A container file stores calibration images and all layer images of one build process (powder and melt result) with corresponding metadata (acquisition and process parameters) for documentation and further analysis. We evaluate the resolving power of our imaging system and show that it is able to inspect the process result on a microscopic scale. Sample images of a part built with varied process parameters are provided, which show that our system can detect topological flaws and is able to inspect the surface quality of built layers. The results can be used for flaw detection and parameter optimization, for example in material qualification.

63 citations


Proceedings ArticleDOI
27 Jan 2008
TL;DR: This work presents a promising combination of both technologies, a high dynamic range multispectral camera featuring a higher color accuracy, an improved signal to noise ratio and greater dynamic range compared to a similar low dynamic range camera.
Abstract: Capturing natural scenes with high dynamic range content using conventional RGB cameras generally results in saturated and underexposed and therefore compromising image areas. Furthermore, the image lacks color accuracy due to a systematic color error of the RGB color filters. The problem of the limited dynamic range of the camera has been addressed by high dynamic range imaging (HDRI): Several RGB images of different exposures are combined into one image with greater dynamic range. Color accuracy, on the other hand, can be greatly improved using multispectral cameras, which more accurately sample the electromagnetic spectrum. We present a promising combination of both technologies, a high dynamic range multispectral camera featuring a higher color accuracy, an improved signal to noise ratio and greater dynamic range compared to a similar low dynamic range camera.

29 citations


Journal ArticleDOI
TL;DR: It is shown how the dynamic range can be increased by acquiring a set of differently exposed cell images, which allow to measure cellular features that are otherwise difficult to capture, if at all, in high dynamic range (HDR) images.
Abstract: Cancer is one of the most common causes of death. Cytopathological, i.e., cell-based, diagnosis of cancer can be applied in screening scenarios and allows an early and highly sensitive detection of cancer, thus increasing the chance for cure. The detection of cancer on cells addressed in this paper is based on bright field light microscopy. The cells are imaged with a camera mounted on a microscope, allowing to measure cell properties. However, these cameras exhibit only a limited dynamic range, which often makes the quantification of properties difficult or even impossible. Consequently, to allow a computer-assisted analysis of microscopy images, the imaging has to be improved. To this end, we show how the dynamic range can be increased by acquiring a set of differently exposed cell images. These high dynamic range (HDR) images allow to measure cellular features that are otherwise difficult to capture, if at all. We show that HDR microscopy not only increases the dynamic range, but furthermore reduces noise and improves the acquisition of colors. We develop HDR microscopy-based algorithms, which are essential for cytopathological oncology and early cancer detection and only possible with HDR microscopy imaging. We show the detection of certain subcellular features, so-called AgNORs, in silver (Ag) stained specimens. Furthermore, we give examples of two further applications, namely: 1) the detection of stained cells in immunocytochemical preparations and 2) color separation for nuclear segmentation of specimens stained with low contrast.

22 citations


Cites background from "An Evaluation Framework for the Acc..."

  • ...The exposure times are set as EV (exposure value)....



Proceedings ArticleDOI
03 Sep 2007
TL;DR: A mathematical model is derived for the effects of chromatic aberration that robustly estimates the parameters of an appropriate affine coordinate transformation of multispectral cameras for high-fidelity colour reproduction.
Abstract: High-fidelity colour reproduction requires multispectral cameras for image acquisition, which, unlike RGB cameras, divide the visible electromagnetic spectrum into more than 3 channels. This can be achieved by successively placing narrow-band optical filters with different passbands between object lens and sensor of a standard b/w camera. The filters are arranged on a filter wheel, the rotation of which moves the filters sequentially into the optical path. In practice, these filters exhibit different thicknesses and refraction indices, and are also not perfectly coplanar, resulting in geometric distortions between the recorded spectral components. We derive a mathematical model for these distortions. We additionally measure the effects of chromatic aberration, and incorporate these into our model. Based on this model, we then develop a registration algorithm which robustly estimates the parameters of an appropriate affine coordinate transformation. Experimental results using a seven-channel multispectral camera confirm both the validity of our model as well as the accuracy of the registration algorithm.

18 citations


References

Proceedings ArticleDOI
03 Aug 1997
TL;DR: This work discusses how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing, and demonstrates a few applications of having high dynamic range radiance maps.
Abstract: We present a method of recovering high dynamic range radiance maps from photographs taken with conventional imaging equipment. In our method, multiple photographs of the scene are taken with different amounts of exposure. Our algorithm uses these differently exposed photographs to recover the response function of the imaging process, up to factor of scale, using the assumption of reciprocity. With the known response function, the algorithm can fuse the multiple photographs into a single, high dynamic range radiance map whose pixel values are proportional to the true radiance values in the scene. We demonstrate our method on images acquired with both photochemical and digital imaging processes. We discuss how this work is applicable in many areas of computer graphics involving digitized photographs, including image-based modeling, image compositing, and image processing. Lastly, we demonstrate a few applications of having high dynamic range radiance maps, such as synthesizing realistic motion blur and simulating the response of the human visual system.

2,775 citations


Journal ArticleDOI
TL;DR: Six well-known SFS algorithms are implemented and compared, and the performance of the algorithms was analyzed on synthetic images using mean and standard deviation of depth error, mean of surface gradient error, and CPU timing.
Abstract: Since the first shape-from-shading (SFS) technique was developed by Horn in the early 1970s, many different approaches have emerged. In this paper, six well-known SFS algorithms are implemented and compared. The performance of the algorithms was analyzed on synthetic images using mean and standard deviation of depth (Z) error, mean of surface gradient (p, q) error, and CPU timing. Each algorithm works well for certain images, but performs poorly for others. In general, minimization approaches are more robust, while the other approaches are faster.

1,775 citations


"An Evaluation Framework for the Acc..." refers methods in this paper

  • ...Also, the orientation of visible surfaces of the scene can be obtained from the radiance by shape from shading algorithms [2]....



Proceedings ArticleDOI
24 Jul 1998
TL;DR: A method that uses measured scene radiance and global illumination in order to add new objects to light-based models with correct lighting and the relevance of the technique to recovering surface reflectance properties in uncontrolled lighting situations is discussed.
Abstract: We present a method that uses measured scene radiance and global illumination in order to add new objects to light-based models with correct lighting. The method uses a high dynamic range image-based model of the scene, rather than synthetic light sources, to illuminate the new objects. To compute the illumination, the scene is considered as three components: the distant scene, the local scene, and the synthetic objects. The distant scene is assumed to be photometrically unaffected by the objects, obviating the need for reflectance model information. The local scene is endowed with estimated reflectance model information so that it can catch shadows and receive reflected light from the new objects. Renderings are created with a standard global illumination method by simulating the interaction of light amongst the three components. A differential rendering technique allows for good results to be obtained when only an estimate of the local scene reflectance properties is known.We apply the general method to the problem of rendering synthetic objects into real scenes. The light-based model is constructed from an approximate geometric model of the scene and by using a light probe to measure the incident illumination at the location of the synthetic objects. The global illumination solution is then composited into a photograph of the scene using the differential rendering technique. We conclude by discussing the relevance of the technique to recovering surface reflectance properties in uncontrolled lighting situations. Applications of the method include visual effects, interior design, and architectural visualization.

1,136 citations


"An Evaluation Framework for the Acc..." refers methods in this paper

  • ...Radiance maps were moreover used to render synthetic objects more realistically into the scene [3]....



Proceedings ArticleDOI
23 Jun 1999
TL;DR: A simple algorithm is described that computes the radiometric response function of an imaging system, from images of an arbitrary scene taken using different exposures, to fuse the multiple images into a single high dynamic range radiance image.
Abstract: A simple algorithm is described that computes the radiometric response function of an imaging system, from images of an arbitrary scene taken using different exposures. The exposure is varied by changing either the aperture setting or the shutter speed. The algorithm does not require precise estimates of the exposures used. Rough estimates of the ratios of the exposures (e.g. F-number settings on an inexpensive lens) are sufficient for accurate recovery of the response function as well as the actual exposure ratios. The computed response function is used to fuse the multiple images into a single high dynamic range radiance image. Robustness is tested using a variety of scenes and cameras as well as noisy synthetic images generated using 100 randomly selected response curves. Automatic rejection of image areas that have large vignetting effects or temporal scene variations make the algorithm applicable to not just photographic but also video cameras.

807 citations


"An Evaluation Framework for the Acc..." refers methods in this paper

  • ...An alternative to chart-based measurements of the CTF is to use differently exposed images of the same scene [4, 5, 6, 7, 8, 9, 10, 11, 12]....


  • ...Alternatively a polynomial [8] or a constrained piecewise linear model has been used [11, 12]....



Journal ArticleDOI
TL;DR: This paper completely determine the ambiguities associated with the recovery of the response and the ratios of the exposures, and shows that the intensity mapping between images is determined solely by the intensity histograms of the images.
Abstract: An image acquired by a camera consists of measured intensity values which are related to scene radiance by a function called the camera response function. Knowledge of this response is necessary for computer vision algorithms which depend on scene radiance. One way the response has been determined is by establishing a mapping of intensity values between images taken with different exposures. We call this mapping the intensity mapping function. In this paper, we address two basic questions. What information from a pair of images taken at different exposures is needed to determine the intensity mapping function? Given this function, can the response of the camera and the exposures of the images be determined? We completely determine the ambiguities associated with the recovery of the response and the ratios of the exposures. We show all methods that have been used to recover the response break these ambiguities by making assumptions on the exposures or on the form of the response. We also show when the ratio of exposures can be recovered directly from the intensity mapping, without recovering the response. We show that the intensity mapping between images is determined solely by the intensity histograms of the images. We describe how this allows determination of the intensity mapping between images without registration. This makes it possible to determine the intensity mapping in sequences with some motion of both the camera and objects in the scene.

276 citations


"An Evaluation Framework for the Acc..." refers background or methods in this paper

  • ...It has been shown that one needs to either know the exposure ratios or make an assumption about the CTF [9, 10]....


  • ...An alternative to chart-based measurements of the CTF is to use differently exposed images of the same scene [4, 5, 6, 7, 8, 9, 10, 11, 12]....



Frequently Asked Questions (2)
Q1. What are the contributions mentioned in the paper "An evaluation framework for the accuracy of camera transfer functions estimated from differently exposed images"?

In this paper, the authors describe a radiometry-based experimental setup to directly measure the CTF. They furthermore show how to obtain image pairs of known exposure ratios from the same experiment, i.e., under identical environmental circumstances (light, temperature, camera settings). The authors illuminate the camera sensor directly by a homogeneous light source, with all optics removed to ensure homogeneous illumination over the entire sensor area and to avoid distortions introduced by the optics. Chart-based measurement of the CTF, by contrast, might not always be practicable, since the CTF depends on parameter settings of the camera and the environment (e.g. temperature).

Q2. What future work do the authors mention?

The authors intend to evaluate further estimation algorithms and a larger variety of cameras in the near future.