Resolution limits in practical digital holographic
systems
Damien P. Kelly
Bryan M. Hennelly,
MEMBER SPIE
Nitesh Pandey, MEMBER SPIE
National University of Ireland, Maynooth
Department of Computer Science
Maynooth, Kildare County
Ireland
E-mail: damienpk@cs.nuim.ie
Thomas J. Naughton,
MEMBER SPIE
National University of Ireland, Maynooth
Department of Computer Science
Maynooth, Kildare County
Ireland
and
University of Oulu
RFMedia Laboratory
Oulu Southern Institute
Vierimaantie 5
84100 Ylivieska
Finland
William T. Rhodes,
FELLOW SPIE
Florida Atlantic University
Imaging Technology Center
777 Glades Road
Building 43, Room 486
Boca Raton, Florida 33431
Abstract. We examine some fundamental theoretical limits on the ability of practical digital holography (DH) systems to resolve detail in an image. Unlike conventional diffraction-limited imaging systems, where a projected image of the limiting aperture is used to define the system performance, there are at least three major effects that determine the performance of a DH system: (i) the spacing between adjacent pixels on the CCD, (ii) an averaging effect introduced by the finite size of these pixels, and (iii) the finite extent of the camera face itself. Using a theoretical model, we define a single expression that accounts for all these physical effects. With this model, we explore several different DH recording techniques: off-axis and inline, considering both the dc terms, as well as the real and twin images that are features of the holographic recording process. Our analysis shows that the imaging operation is shift variant, and we demonstrate this using a simple example. We examine how our theoretical model can be used to optimize CCD design for lensless DH capture. We present a series of experimental results to confirm the validity of our theoretical model, demonstrating recovery of super-Nyquist frequencies for the first time. © 2009 Society of Photo-Optical Instrumentation Engineers. DOI: 10.1117/1.3212678
Subject terms: Holography; digital imaging; interference.
Paper 080958RR received Dec. 10, 2008; revised manuscript received Jun. 27,
2009; accepted for publication Jul. 7, 2009; published online Sep. 4, 2009.
1 Introduction
Holography is a technique that enables the magnitude and phase of an optical field to be recorded [1,2]. Because recording media, in general, are sensitive only to the intensity of the light field incident upon them, a more complex optical setup is required to record the phase of the incident field. To record the phase information of the object field, it must be interfered with a so-called reference beam at the recording plane, thereby encoding the phase information as intensity variations [1,3-6]. In traditional holography, these intensity variations are recorded on a photosensitive material. Digital holography (DH) is an extension of this technique where the photosensitive material is replaced with a digital camera [7-9]. Recording the hologram electronically means that this information can be stored and manipulated in real time. The flexibility this allows the user in the recording, processing, and remote replaying of holograms is one of the primary advantages of a digital holographic approach. DH has growing applications in 3D recording and display devices [10-12], as well as industrial metrology systems [13], with potentially far-reaching implications for cellular and microelectromechanical systems imaging [14,15]. One significant drawback of this technique, however, is the limited space-bandwidth product [16] of the recorded digital holograms, due in large part to the limited number of pixels in a typical CCD camera. These cameras are discrete devices, and therefore, sampling theory plays an important role in modeling and optimizing DH setups. Other factors, such as the finite size of the camera and the finite pixel size of the individual detectors, both act to significantly limit the performance of DH systems. This is not unexpected and has been discussed by other authors (see, for example, Refs. 17-21). What perhaps is not as well appreciated in the DH community is that it is not necessary for the field in the recording plane to be sampled at the Nyquist limit [17-21]. In the following analysis, we examine lensless DH exclusively.
In Fig. 1, a typical in-line optical setup for performing DH is illustrated. At the camera face, a reference wave and a field that has been scattered from the object interfere, and the resulting intensity pattern is recorded by the camera. This pattern contains the dc terms and the real and virtual images. Typically, the user is interested in one of the image terms, and the other elements only act to degrade the quality of the reconstructed hologram. Removal of the dc terms and the virtual (or real) image is thus an important practical consideration in all holographic systems, and accordingly, many different techniques have been proposed to remove these unwanted terms [9,22-24]. Let us assume for the moment

that the complex field (the real image term), u_z(x), is available to us, i.e., that we have somehow removed the dc and virtual terms. We note then that once the complex field, u_z(x), at the camera plane has been obtained, the field at the object plane, u(X) (the reconstructed image), can be calculated digitally using numerical techniques by a computer. Suppose that this field u(X) has a maximum spatial frequency given by f_max. u(X) now propagates (described under paraxial conditions with the Fresnel transform) to the camera face, where its complex amplitude, u_z(x), is recorded. It is a property of the Fresnel transform that the magnitude of the field's spectral distribution remains invariant under propagation, and so u_z(x) must also have a maximum spatial frequency f_max (see Refs. 25 and 26). A camera's spatial frequency bandwidth, B_c = 2f_c, and thus its ability to resolve a spatial frequency at the camera plane, is determined by the distance between the centers of adjacent pixels on the camera face. Therefore, on initial consideration, one might conclude that the camera must be able to resolve f_max (i.e., f_c ≥ f_max) if the object field is to be reconstructed properly. However, we note that several authors [27-32] have shown that this sampling criterion may be too strict and that it is possible to recover the object signal when its Fresnel transform is sampled in the camera plane (over an infinite extent) at a rate lower than the Nyquist limit. These results are clearly of interest in DH and may have practical implications: the camera could be placed closer to the object with a resultant increase in numerical aperture and, thus, an improvement in 3D perspective. In Ref. 30, the authors describe recovery of these super-Nyquist frequencies in terms of a generalized sampling theory (GST). In Ref. 29, the authors apply the GST to DH systems; however, they do not examine the effect of the finite pixel size. In a later paper [33], these authors do consider the effect of pixel size, concluding that the maximum recoverable frequency is f_max = 1/(2γ), where 2γ is the width of the pixel (see Eq. 7 in Ref. 33). As we shall see, however, by suitably designing a camera, this limitation can be overcome. In Refs. 20 and 21, resolution limits in DH systems are also examined, however only for signals that are considered well sampled in the Nyquist sense. In this paper, we develop a theoretical model that describes the limitations on resolution that are imposed by (i) the finite extent of the camera, (ii) the sampling rate, and (iii) the finite extent of the pixels. We note that if the camera pixels can be considered point detectors (described by a comb of Dirac delta functions), then the analysis we present for the real image term reduces to that discussed in Ref. 34. Also, in Refs. 20 and 21, the authors derive a similar expression to that discussed in Ref. 34. In our model, we treat the averaging effect introduced by finite-size pixels in a different manner to that discussed in Refs. 20, 21, and 34, providing an alternative means of understanding this effect.
The paper is organized as follows: In Sec. 2, we develop a theoretical model describing the imaging process that incorporates (i) the finite extent of the CCD, (ii) the reduction of power in higher spatial frequencies due to averaging introduced by rectangular pixels of finite size, and (iii) the effective sampling rate imposed by the spacing between adjacent pixels on the CCD face. We specifically examine the dc terms and the twin image. Using this theoretical framework, in Sec. 3 we examine the predictions of our model for several different recording architectures, off-axis and inline, and show that recovery of super-Nyquist frequencies is not possible using an off-axis configuration. We then examine how to optimally design CCD sensors to resolve frequencies much higher than the Nyquist limit and the limit defined in Ref. 33, by balancing these three different effects [i.e., (i), (ii), and (iii)]. In Sec. 4, we demonstrate that the imaging operation performed by a DH system is shift variant, using a simple example. In Sec. 5, we provide a series of experimental results that clearly demonstrate the limitations on resolution in a DH system imposed by the three different factors identified in our theoretical model. We provide what we believe is the first experimental evidence demonstrating that we can recover frequencies above the Nyquist limit of the CCD camera. A fast reconstruction algorithm based on the fast Fourier transform (FFT) is discussed. Finally, we close the paper with a brief conclusion.
2 Theoretical Analysis
In Fig. 1, an object is illuminated with a temporally and spatially coherent monochromatic plane wave. We describe the resulting scattered field at plane X (see Fig. 1) by the function u(X). This field then propagates to the camera plane located in the plane z = z_c, where it interferes with a reference wavefield u_R(x), and the resulting intensity is recorded by the CCD. Through a numerical reconstruction process, where we simulate free-space propagation back to the object plane (plane X, see Fig. 1), we can approximately recover u(X). There are several features of the recording process, however, that limit the accuracy of our recovered signal: (i) the finite extent of the camera, 2D, (ii) the spacing between the centers of adjacent pixels, T, and (iii) the finite extent, 2γ, of the pixels themselves. In this section, we investigate each of these effects. We begin by writing the continuous and instantaneous field intensity, I_c(x;t), at the camera face as [17,18,35]
I_c(x;t) = |u_z(x) + u_R(x)|^2 = I_z(x) + I_R(x) + u_z^*(x)\,u_R(x) + u_z(x)\,u_R^*(x),   (1)
where I_z(x) and I_R(x) are the intensities of the object and reference fields, respectively, and are referred to as the dc terms. The two latter terms in Eq. (1) contain the virtual and real images, respectively, and the superscripted asterisk denotes the complex conjugate operation.

Fig. 1 Schematic depicting a typical inline DH setup: M, mirror; P, polarizer; BS, beamsplitter; Ph, pinhole; L, lens; and MO, microscope objective.

Under paraxial conditions, we may relate the field u_z(x) to u(X) using a Fresnel transform [3], which we define as
u_z(x) = FST_z\{u(X)\}(x),

u_z(x) = \frac{1}{\sqrt{j\lambda z}} \int_{-\infty}^{\infty} u(X) \exp\left[\frac{j\pi}{\lambda z}(x - X)^2\right] \mathrm{d}X,   (2)

u(X) = \frac{1}{\sqrt{j\lambda z}} \int_{-\infty}^{\infty} u_z(x) \exp\left[-\frac{j\pi}{\lambda z}(X - x)^2\right] \mathrm{d}x,   (3)
where FST_z denotes the Fresnel transform operator; the inverse operation, which we write as FST_{-z}, is defined in Eq. (3). In what follows, we drop the constant term 1/√(jλz) from Eqs. (2) and (3).
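For readers who wish to experiment numerically, the following is a minimal sketch of a discrete version of Eq. (2) using the common single-FFT (direct) method; the centring convention, grid spacing, and the dropped 1/√(jλz) constant are our own illustrative choices and are not prescribed by the paper.

```python
import numpy as np

def fresnel_transform(u0, dx, wavelength, z):
    """Discrete form of Eq. (2) via the single-FFT (direct) method.

    u0 : complex field u(X) sampled on a uniform grid of spacing dx.
    Returns the camera-plane coordinates and u_z on the scaled grid
    x = wavelength*z*f, where f are the FFT frequencies of the input grid.
    The 1/sqrt(j*wavelength*z) constant is dropped, as in the text.
    """
    N = u0.size
    X = (np.arange(N) - N // 2) * dx                      # object-plane grid
    chirp_in = np.exp(1j * np.pi * X**2 / (wavelength * z))
    # the FFT evaluates the remaining Fourier-type integral over X
    spectrum = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(u0 * chirp_in))) * dx
    f = np.fft.fftshift(np.fft.fftfreq(N, d=dx))
    x = wavelength * z * f                                # camera-plane grid
    chirp_out = np.exp(1j * np.pi * x**2 / (wavelength * z))
    return x, chirp_out * spectrum
```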
The CCD has a finite number of pixels, with an assumed uniform structure, that average the light energy incident upon them (see Chap. 9 of Ref. 36). The measured quantity is referred to as the integrated intensity and has units of energy. A camera consists of an array of N × N pixels separated from each other by a distance T. We can represent in one dimension the integrated intensity array using the vector
W_n = [W_1, W_2, \ldots, W_N] = W(\bar{x};t)\, \delta_T(\bar{x})\, p_D(\bar{x}),   (4)
where p_D(x̄) = 1 when |x̄| ≤ D and is 0 otherwise. Thus, p_D(x̄) is an aperture function that defines the extent of the camera face. The function δ_T(x̄) in Eq. (4) is a train of Dirac delta functions [37] and is defined as δ_T(x) = Σ_{n=−∞}^{∞} δ(x − nT). Because the function δ_T(x) is periodic, it may also be expressed mathematically using a Fourier series representation [38,39],
\delta_T(x) = \frac{1}{T} \sum_{n=-\infty}^{\infty} \exp\left(\frac{j2\pi n x}{T}\right).   (5)
The function W(x̄;t) is continuous and is related to the temporally and spatially varying intensity I_c(x;t) at the camera face by
W(\bar{x};t) = \frac{1}{2\gamma} \int_{t}^{t+\Delta t} \int_{\bar{x}-\gamma}^{\bar{x}+\gamma} I_c(x;t)\, \mathrm{d}x\, \mathrm{d}t,   (6)
where Δt is the integration time of the camera and 2γ is the width (area) of the pixel that is sensitive to light, and is related to the fill factor of the camera; γ ≤ T/2 [17,18,35,40]. If we assume a stationary object and note that the illumination source is coherent and monochromatic, then the intensity of the light field will not vary over the integration time of the camera. Thus, we need only consider the spatial variation of intensity over each pixel area, and thus, we rewrite Eq. (6) as
W(\bar{x}) = \frac{C}{2\gamma} \int_{\bar{x}-\gamma}^{\bar{x}+\gamma} I_c(x)\, \mathrm{d}x,   (7)
where C is an unimportant constant. We now reinterpret Eq. (7) as a convolution relation:
W(\bar{x}) = \frac{C}{2\gamma} \int_{-\infty}^{\infty} p_\gamma(x - \bar{x})\, I_c(x)\, \mathrm{d}x = \frac{1}{2\gamma}\, [I_c(x) * p_\gamma(x)](\bar{x}),   (8)
where p_γ(x) = 1 when |x| ≤ γ and is 0 otherwise, and where * indicates a convolution operation. Substituting Eqs. (1) and (8) into Eq. (4) and dropping the unimportant scaling constant C, we arrive at the following result [20,21,34]:
W_n = \frac{1}{2\gamma}\, p_D(x)\, \delta_T(x)\, \big\{ p_\gamma(x) * \big[ I_z(x) + I_R(x) + u_z^*(x)\,u_R(x) + u_z(x)\,u_R^*(x) \big] \big\},   (9)
where for notational simplicity we have set x̄ → x. Thus, we see that the discrete vector of values returned by the camera arises due to contributions from four separate sources: the two dc terms and both the real and twin images. If we simply apply an inverse Fresnel transform to Eq. (9), then we find that we will indeed arrive at an approximation to u(X), which arises due to the contribution of the real image [i.e., the fourth term in Eq. (9)]; however, this result will, in general, be affected by the contributions of both the dc and twin image terms. We note that the Fresnel operation is linear, and thus, in the following subsections, we consider the terms in Eq. (9) separately.
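For readers who want to reproduce the model numerically, the following is a minimal one-dimensional sketch of Eq. (9): the continuous intensity is blurred by the pixel aperture p_γ, restricted to the camera face p_D, and sampled at the pixel pitch T. The fine simulation grid and the discretisation choices are our own illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def camera_samples(intensity, x, T, gamma, D):
    """Integrated-intensity samples W_n of Eq. (9) from a finely sampled I_c(x).

    intensity : I_c evaluated on the fine grid x (uniform spacing dx),
                assumed to cover the whole camera face |x| <= D
    T : pixel pitch, gamma : pixel half-width, D : camera half-width
    """
    dx = x[1] - x[0]
    # pixel aperture p_gamma on the fine grid, with the 1/(2*gamma)
    # normalisation of Eq. (8)
    half = int(round(gamma / dx))
    p_gamma = np.ones(2 * half + 1) / (2 * half + 1)
    blurred = np.convolve(intensity, p_gamma, mode="same")   # p_gamma * I_c
    # keeping only pixel centres nT with |nT| <= D implements p_D(x)
    centres = np.arange(-np.floor(D / T), np.floor(D / T) + 1) * T
    idx = np.round((centres - x[0]) / dx).astype(int)        # delta_T sampling
    return centres, blurred[idx]
```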
2.1 DC Terms
The dc terms in Eq. (9) can be removed by recording the intensities of the reference and object fields separately and then subtracting them digitally from the captured hologram. We note that other numerical approaches can also be used to suppress these terms [8,41]. Phase-shifting interferometric techniques can also be used to remove both the twin image and the dc terms. Although these dc terms do affect the quality of the reconstructed hologram, they are relatively unimportant compared to the behavior of the twin and real-image terms.
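A minimal sketch of the subtraction approach described above follows; the second function shows a common purely numerical alternative (mean removal), included only as an illustration and not necessarily the specific method of Refs. 8 and 41.

```python
import numpy as np

def suppress_dc(hologram, ref_intensity, obj_intensity):
    """Remove the dc terms of Eq. (1) by subtracting the separately
    recorded reference and object intensities, I_R(x) and I_z(x)."""
    return hologram - ref_intensity - obj_intensity

def suppress_dc_mean(hologram):
    """Illustrative numerical alternative: subtract the mean value, which
    suppresses the slowly varying dc background of the hologram."""
    return hologram - np.mean(hologram)
```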
2.2 Real Image Term
We now turn our attention to numerically reconstructing an approximation to the continuous field u(X). First, however, we simplify the fourth term in Eq. (9) further by assuming an ideal unit-amplitude in-line reference beam, u_R(x) = exp[j2π(z − z_c)/λ]. Setting z = z_c and applying an inverse Fresnel transform to the real-image term, we get the following result for the reconstructed image:
u_s(X) = \frac{1}{2\gamma} \int_{-\infty}^{\infty} \big\{ [u_z(x) * p_\gamma(x)]\, \delta_T(x)\, p_D(x) \big\} \exp\left[-\frac{j\pi}{\lambda z}(X - x)^2\right] \mathrm{d}x.   (10)
Using the results from Appendix A, we can rewrite Eq. (10) as

u_s(X) = \exp\left(-\frac{j\pi}{\lambda z} X^2\right) \int_{-\infty}^{\infty} u(X_1) \exp\left(\frac{j\pi}{\lambda z} X_1^2\right) \Gamma(X, X_1)\, \mathrm{d}X_1,   (11)
where

\Gamma(X, X_1) = \int_{-\infty}^{\infty} B(x, X_1)\, \delta_T(x)\, p_D(x) \exp\left[\frac{j2\pi}{\lambda z} x(X - X_1)\right] \mathrm{d}x = \int_{-D}^{D} B(x, X_1)\, \delta_T(x) \exp\left[\frac{j2\pi}{\lambda z} x(X - X_1)\right] \mathrm{d}x,   (12)
and where
B(x, X_1) = \frac{1}{2\gamma} \int_{-\gamma}^{\gamma} \exp\left(\frac{j\pi}{\lambda z} u^2\right) \exp\left[\frac{j2\pi}{\lambda z}(x - X_1) u\right] \mathrm{d}u.   (13)
From Eqs. (10)-(13), we can see that there is a complex interaction between the finite camera extent p_D(x), the sampling rate δ_T(x), and the averaging due to the finite pixel extent p_γ(x). In order to gain some insight into how these different factors affect the numerically calculated reconstruction, u_s(X), we apply a series of limiting operations to Eq. (10). Initially, we will allow the camera extent to approach infinity (i.e., D → ∞) and the finite extent of the pixels to approach Dirac delta functions (i.e., γ → 0). Following this, we examine, separately, how the finite camera aperture and the finite pixel size change this initial result. Finally, we will discuss the interaction of all three factors.
2.2.1 Infinitely large camera face with infinitely narrow pixels: D → ∞, γ → 0
Making use of Eq. (5), letting D → ∞ and γ → 0 reduces Eq. (10) to the following:
u_s(X) = \frac{1}{T\sqrt{j\lambda z}} \int_{-\infty}^{\infty} u_z(x)\, \delta_T(x) \exp\left[-\frac{j\pi}{\lambda z}(X - x)^2\right] \mathrm{d}x = \frac{1}{T\sqrt{j\lambda z}} \sum_{n=-\infty}^{\infty} \int_{-\infty}^{\infty} u_z(x) \exp\left(\frac{j2\pi n x}{T}\right) \exp\left[-\frac{j\pi}{\lambda z}(X - x)^2\right] \mathrm{d}x.   (14)
We also note the shifting property [27,42] of the Fresnel transform for an arbitrary linear phase ξ, for some analytical signal f(X),
FST_z\{f(X) \exp(j2\pi\xi X)\}(x) = \exp(-j\pi\xi^2\lambda z)\, \exp(j2\pi\xi x)\, FST_z\{f(X)\}(x - \xi\lambda z).   (15)
Combining the results from Eqs. (14) and (15), we arrive at
u_s(X) = \frac{1}{T\sqrt{j\lambda z}} \sum_{n=-\infty}^{\infty} \int_{-\infty}^{\infty} u_z(x) \exp(j2\pi n x/T) \exp\left[-\frac{j\pi}{\lambda z}(X - x)^2\right] \mathrm{d}x,

u_s(X) = \frac{1}{T} \sum_{n=-\infty}^{\infty} FST_{-z}\left\{u_z(x) \exp\left(\frac{j2\pi n}{T} x\right)\right\}(X),

u_s(X) = \frac{1}{T} \sum_{n=-\infty}^{\infty} \exp\left[j\pi\left(\frac{n}{T}\right)^2 \lambda z\right] \exp\left(\frac{j2\pi X n}{T}\right) FST_{-z}\{u_z(x)\}\left(X - \frac{n\lambda z}{T}\right).   (16)
Thus, from Eq. (16), we can relate u_s(X) to the actual field u(X). The sampling process, however, causes differences between the actual signal and our approximation to it. We note several points in relation to this: (i) the sampling process creates an infinite number of replicas in the object plane, (ii) the centers of adjacent replicas are separated by a distance λz/T, and (iii) each of the replicas is also multiplied by a different linear phase as well as some unimportant constant phase factor.
If we impose the constraint that our object field has a finite support Δ in the object plane, then this field can be imaged without overlapping replicas provided that T ≤ λz/Δ. This important result is known [27,28,30,32-34,43] and means that, under certain conditions, it is possible to sample the diffracted field at rates below the Nyquist limit and to recover, through a generalized interpolation formula, super-Nyquist frequencies. In Ref. 26, some implications of this result are explored in more detail using a simple analytical example. As we will see, however, the effects of the finite pixel size and camera extent impose resolution limits in addition to this constraint.
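The replica-separation condition T ≤ λz/Δ is easy to evaluate for a concrete geometry. The short check below uses illustrative values chosen by us; they are not the parameters of the experiments reported later in the paper.

```python
# Illustrative values only; not the parameters used in the paper's experiments.
wavelength = 633e-9   # He-Ne wavelength (m)
z = 0.25              # object-to-camera distance (m)
delta = 4e-3          # object support, Delta (m)

T_max = wavelength * z / delta            # largest pitch avoiding replica overlap
replica_spacing = wavelength * z / 10e-6  # replica separation for a 10-um pitch

print(f"replicas do not overlap provided T <= {T_max * 1e6:.1f} um")
print(f"for T = 10 um the replica centres are {replica_spacing * 1e3:.1f} mm apart")
```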
2.2.2 Infinitely large camera face and averaging due to finite pixel extent: D → ∞
In this section, we look at the effect of averaging due to the finite extent of the pixels in the camera plane and examine how this impacts on our reconstructed approximation u_s(X). In particular, we note the effect on the resolution obtainable in DH systems. If we now include the effect of pixel averaging in our analysis, then we replace u_z(x) with u_z(x) * p_γ(x) in Eq. (16), to give
u_s(X) = \frac{1}{T} \sum_{n=-\infty}^{\infty} \exp\left[j\pi\left(\frac{n}{T}\right)^2 \lambda z\right] \exp\left(\frac{j2\pi X n}{T}\right) FST_{-z}\{u_z(x) * p_\gamma(x)\}\left(X - \frac{n\lambda z}{T}\right).   (17)
We now consider the inverse Fresnel operation

FST_{-z}\{u_z(x) * p_\gamma(x)\}(X) = [u_z(x) * p_\gamma(x)] * \exp\left(-\frac{j\pi}{\lambda z} x^2\right) = \left[u_z(x) * \exp\left(-\frac{j\pi}{\lambda z} x^2\right)\right] * p_\gamma(x) = u(X) * p_\gamma(X).   (18)
The conclusion we draw from this result is that the averaging introduced by the finite-size pixels acts to degrade the quality of the reconstructed hologram by convolving it with a narrow rectangular function that is the same size as the pixel. As γ → 0, this rectangular function narrows, reducing the distorting effect that it has on the reconstruction. Provided that the distribution u(X) is approximately constant over the width of the function p_γ(X), there will be little distortion of u(X). However, if we are attempting to recover spatial frequencies higher than the Nyquist limit, then u(X) will, by definition, vary significantly over the width of the pixel, and thus the averaging will act to make these distortions increasingly pronounced. We also note that the effect of convolving p_γ(X) with u(X) is to increase the spatial extent of the resultant signal from Δ to Δ + 2γ. Thus, this new signal may be recovered provided that T ≤ λz/(Δ + 2γ).
We may also examine the effect of the convolution of p_γ(X) with u(X) in the spatial frequency domain. From Ref. 37 (p. 128), we find that the Fourier transform of p_γ(X) is given by
F\{p_\gamma(x)\}(f_x) = \int_{-\infty}^{\infty} p_\gamma(x) \exp(-j2\pi x f_x)\, \mathrm{d}x,

F\{p_\gamma(x)\}(f_x) = 2\gamma\, \mathrm{sinc}(2\pi\gamma f_x),   (19)
where sinc(x) = sin(x)/x. Therefore, the effect of the convolution operation is to multiply the spatial frequency content of the signal, F{u(X)}(f_x), by a sinc function. We note that for values of f_x such that f_x = n/(2γ), Eq. (19) is zero, and therefore these spatial frequencies will be entirely removed from the signal. A more detailed interpretation of this result is discussed in Ref. 26.
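A minimal sketch of this pixel-averaging filter follows; it simply evaluates the sinc response of Eq. (19) and confirms the nulls at f_x = n/(2γ). The particular value of γ is an illustrative assumption.

```python
import numpy as np

def pixel_response(f_x, gamma):
    """Spatial-frequency response of a pixel of active width 2*gamma, Eq. (19),
    with the sinc(x) = sin(x)/x convention used in the text."""
    # np.sinc(u) = sin(pi*u)/(pi*u), so passing u = 2*gamma*f_x gives
    # sin(2*pi*gamma*f_x)/(2*pi*gamma*f_x), i.e. sinc(2*pi*gamma*f_x).
    return np.sinc(2.0 * gamma * np.asarray(f_x))

gamma = 2.0e-6                               # pixel half-width (illustrative)
nulls = np.arange(1, 4) / (2.0 * gamma)      # f_x = n/(2*gamma)
print(pixel_response(nulls, gamma))          # effectively zero at each null
```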
2.2.3 Finite camera extent, neglecting averaging due to pixels: γ → 0
We now examine the third factor that impacts on the quality of our reconstructed hologram u_s(X): the finite extent of the camera. For simplicity, we assume that we are sampling with point detectors (i.e., we allow γ → 0). In this instance, Eq. (12) can be expressed as [34]
\Gamma(X, X_1) = \int_{-D}^{D} \delta_T(x) \exp\left[\frac{j2\pi}{\lambda z} x(X - X_1)\right] \mathrm{d}x.   (20)
Using Eq. (5) in conjunction with the Fourier shift theorem (see Ref. 37, p. 104), we find that
\Gamma(X, X_1) = \frac{1}{T} \sum_{n=-\infty}^{\infty} \mathrm{sinc}\left[\frac{2\pi D}{\lambda z}\left(X - X_1 - \frac{\lambda z n}{T}\right)\right].   (21)
Substituting Eq. (21) into Eq. (11) gives
u_s(X) = \exp\left(-\frac{j\pi}{\lambda z} X^2\right) \frac{1}{T} \sum_{n=-\infty}^{\infty} \int_{-\infty}^{\infty} u(X_1) \exp\left(\frac{j\pi}{\lambda z} X_1^2\right) \mathrm{sinc}\left[\frac{2\pi D}{\lambda z}\left(X - X_1 - \frac{\lambda z n}{T}\right)\right] \mathrm{d}X_1 = \exp\left(-\frac{j\pi}{\lambda z} X^2\right) \frac{1}{T} \sum_{n=-\infty}^{\infty} R(X, n),   (22)
where
R(X, n) = \left[u(X) \exp\left(\frac{j\pi}{\lambda z} X^2\right)\right] * \mathrm{sinc}\left[\frac{2\pi D}{\lambda z}\left(X - \frac{\lambda z n}{T}\right)\right].   (23)
Thus, we can see from Eqs. (22) and (23) that the effect of the finite camera extent is to reduce the resolving ability of the DH system by convolving the product of the initial input and a quadratic phase term with a sinc function, whose width is determined by the wavelength of the light, the size of the aperture, and the distance the camera is placed from the object plane. However, as we shall shortly demonstrate in Sec. 4, it is important to note that this "convolution" relationship is not a shift-invariant operation, due to the presence of the quadratic phase factor. Nevertheless, as a useful "rule-of-thumb" approximation that we use later when examining experimental results, it is convenient to reinterpret Eq. (23) in the spatial frequency domain as a low-pass filtering operation,
R(X, n) = F^{-1}\left\{ F\left\{u(X) \exp\left(\frac{j\pi}{\lambda z} X^2\right)\right\}(v)\, P_{D/\lambda z}(v)\, \exp\left(-j2\pi v \frac{n\lambda z}{T}\right) \right\}(X),   (24)
where P_{D/λz}(v) = 1 when |v| ≤ D/(λz) and is 0 otherwise, and where F and F^{-1} indicate the Fourier and inverse Fourier transform operations.
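As a numerical illustration of this rule-of-thumb filtering picture, the sketch below applies the n = 0 term of Eq. (24) with an FFT: the chirped object field is low-pass filtered by the rect function P_{D/λz}. The grid handling and parameter choices are our own assumptions, and the sketch deliberately ignores the shift variance discussed in Sec. 4.

```python
import numpy as np

def aperture_lowpass(u, dX, wavelength, z, D):
    """Rule-of-thumb model of Eq. (24) for the n = 0 replica: band-limit the
    chirped object field to |v| <= D/(wavelength*z)."""
    N = u.size
    X = (np.arange(N) - N // 2) * dX
    chirped = u * np.exp(1j * np.pi * X**2 / (wavelength * z))
    U = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(chirped)))
    v = np.fft.fftshift(np.fft.fftfreq(N, d=dX))
    P = (np.abs(v) <= D / (wavelength * z)).astype(float)    # P_{D/lambda z}(v)
    return np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(U * P)))  # ~ R(X, 0)
```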
Again assuming that our input field u(X) has a finite spatial extent Δ, we can see from Eq. (23) that this input extent will be increased due to the presence of the sinc function. Strictly speaking, a sinc function spans an infinite spatial extent, implying that our recovered signal will inevitably suffer from aliasing. Practically, however, we may assume that the sinc function can be effectively limited in space. We therefore define the effective extent of the sinc function, Δ_S, as being twice the distance from its maximum value to its first null, in keeping with the analysis presented in Ref. 34, i.e., Δ_S = λz/D. Therefore, in order to ensure that the successive replicas do not overlap, we require that T ≤ λz/(Δ + Δ_S).
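The condition just derived is again easy to evaluate numerically. In the sketch below we also add the pixel broadening 2γ from Sec. 2.2.2 to the object support; combining the two broadenings in this simple additive way is our own rough assumption for illustration, since the joint treatment belongs to Sec. 2.2.4. All parameter values are illustrative.

```python
# Illustrative values only; not the paper's experimental parameters.
wavelength = 633e-9   # m
z = 0.25              # object-to-camera distance (m)
D = 4.4e-3            # camera half-width, so the camera face spans 2D (m)
delta = 4e-3          # object support, Delta (m)
gamma = 2.0e-6        # pixel half-width (m)

delta_S = wavelength * z / D                  # effective extent of the sinc
T_limit = wavelength * z / (delta + delta_S)  # condition of Sec. 2.2.3
T_rough = wavelength * z / (delta + delta_S + 2 * gamma)  # rough combined estimate

print(f"Delta_S = {delta_S * 1e6:.1f} um")
print(f"T <= {T_limit * 1e6:.2f} um (finite aperture only)")
print(f"T <= {T_rough * 1e6:.2f} um (also adding the 2*gamma pixel broadening)")
```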
2.2.4 Finite camera extent and averaging due to pixels
In this section, we look at how all three factors interact with each other to limit the resolution of a practical DH system. We begin by examining Eq. (13) and discuss how we may simplify the expression considerably. Substituting this simplified expression into Eqs. (11) and (12), we then investigate how the finite pixel size and the finite extent of the camera limit the resolution of the imaging system, identifying the
