
3D mouse shape reconstruction based on phase-shifting algorithm for fluorescence molecular tomography imaging system.

10 Nov 2015-Applied Optics (Optical Society of America)-Vol. 54, Iss: 32, pp 9573-9582
TL;DR: This work introduces a fast, low-cost, robust method based on fringe patterns and phase shifting to obtain three-dimensional mouse surface geometry for fluorescence molecular tomography (FMT) imaging, with an accuracy the authors believe is sufficient to obtain a finite element mesh for FMT imaging.
Abstract: This work introduces a fast, low-cost, robust method based on fringe pattern and phase shifting to obtain three-dimensional (3D) mouse surface geometry for fluorescence molecular tomography (FMT) imaging. We used two pico projector/webcam pairs to project and capture fringe patterns from different views. We first calibrated the pico projectors and the webcams to obtain their system parameters. Each pico projector/webcam pair had its own coordinate system. We used a cylindrical calibration bar to calculate the transformation matrix between these two coordinate systems. After that, the pico projectors projected nine fringe patterns with a phase-shifting step of 2π/9 onto the surface of a mouse-shaped phantom. The deformed fringe patterns were captured by the corresponding webcam respectively, and then were used to construct two phase maps, which were further converted to two 3D surfaces composed of scattered points. The two 3D point clouds were further merged into one with the transformation matrix. The surface extraction process took less than 30 seconds. Finally, we applied the Digiwarp method to warp a standard Digimouse into the measured surface. The proposed method can reconstruct the surface of a mouse-sized object with an accuracy of 0.5 mm, which we believe is sufficient to obtain a finite element mesh for FMT imaging. We performed an FMT experiment using a mouse-shaped phantom with one embedded fluorescence capillary target. With the warped finite element mesh, we successfully reconstructed the target, which validated our surface extraction approach.

Summary

1. INTRODUCTION

  • Fluorescent Molecular Tomography (FMT) emerged almost two decades ago and has been used widely in biomedical research labs because of its unique features such as non-ionizing radiation, low cost, and the wide availability of molecular probes [1] [2].
  • In FMT, the fluorophores are injected inside a mouse body intravenously and then excited with lasers to emit fluorescence photons, some of which will propagate to the mouse surface and get measured [7].
  • In one study, the mouse was hung and rotated to be viewed at different angles by a camera, thus the 3D geometry can be reconstructed [9] [10].
  • In section 2, the steps of the 3D surface reconstruction method are introduced, including the basic principles of phase shifting, selection of the phase shifting step number, calibration of the pico-projector and webcam pairs, phase to coordinate conversion, merging of the two point clouds, an introduction of the Digiwarp method, and a brief description of their FMT imaging system.
  • Section 4 concludes the paper with discussions.

2. METHODOLOGY

  • There are two pico-projectors (AAXA p4x, AAXA Technologies Inc., Tustin, CA) and two webcams (C615, Logitech, Apples, Switzerland).
  • The pico-projectors project fringe patterns onto the surface of the object.
  • The minimum focal distance of the webcam is 200 mm.
  • The pico-projectors and the webcams have small sizes and are at low cost.
  • These components with small size can be easily mounted inside the FMT imaging system as described in [7].

A. Phase Shifting Algorithm

  • In phase shifting method, N fringe patterns with a phase shifting step of 2π/N are generated by a computer and delivered to the pico-projector for sequential projection onto the object surface.
  • The points with best quality are unwrapped first, and then the points with lower quality, until all points are unwrapped.
  • After that an additional centerline image is used to obtain the absolute phase at each pixel [25].
  • It is worth noting that the spatial frequency of the projected fringe pattern should be chosen carefully.

B. Selection of Phase Shifting Step Number

  • The average phase errors for each step number from 3 to 15 are shown in Fig.
  • From this result the authors can see how the nonlinearity of projectors affects the accuracy with increasing step number.
  • Generally, the phase error decreases as the number of steps increases.
  • But the authors can still observe that the phase errors caused by the non-linearity of projectors are relatively small when the fringe pattern number is larger than 7.

C. Pico-projector and Webcam Pairs Calibration

  • For one pair of pico-projector and webcam, there are 3 coordinate systems: the webcam coordinate system, the pico-projector coordinate system and the world coordinate system [25].
  • System calibration is required to obtain the intrinsic parameters of the webcam and the pico-projector and to create relationships among the three coordinate systems.
  • The calibration process is similar to the method described in [25].
  • The camera calibration is finished by the Matlab Camera Calibration Toolbox [29].
  • The projector checker board images are generated by pixel to pixel mapping from camera images, where the 9-step phase shifting algorithm is utilized again.

D. Phase to Coordinates Conversion

  • After all the calibration parameter matrices are obtained, the phase map generated in section 2.A can be converted to the 3D coordinates in the world coordinate system.
  • R means rotation matrix, t is translation matrix, and m means the elements of matrix A[R t].
  • The above parameters are known and Xw, Yw, Zw are the 3D coordinates in the world coordinate system to be determined.

E. Alignment of Two Point Clouds

  • As the two pairs of pico-projector and webcam are calibrated independently, they have different world coordinate systems, as shown in Fig. 2a.
  • So the authors need to perform point clouds alignment between the two point clouds in two different coordinate systems to merge them inside one point cloud.
  • The authors use a calibration bar as shown in Fig. 2b to transform both coordinate systems to the conical mirror coordinate system {Ocon, xcon, ycon, zcon}.
  • During their experiments the authors find that the two 3D point clouds of a mouse shaped phantom cannot be merged precisely after the alignment.

G. FMT Reconstruction

  • To validate FMT reconstruction with the mesh generated from the proposed surface extraction method, the authors perform an FMT experiment with a mouse shaped phantom embedded with a capillary tube that is 20 mm long and 1 mm in diameter.
  • Briefly, the FMT imaging system consists of a conical mirror, a line pattern laser mounted on a rotary stage and a CCD camera, as shown in Fig.
  • A 643 nm line laser (Stocker Yale Canada Inc.) is used to excite the fluorescence photons.
  • The authors use 30 line laser source positions and 14,723 detectors.
  • The propagation of excited and emitted lights are modeled by the diffusion equation that is solved by the finite element method [32].

A. System Calibration

  • Fig. 4a and Fig. 4b show one example of their camera checker board image and its corresponding projector checker board image generated from 9-step phase shifting method.
  • The checker board images are used for calibrating the webcam and the pico-projector.

C. Alignment of Two Point Clouds

  • The authors use the calibrated 3D shape extracting system to measure the calibration bar as shown in Fig. 2b.
  • The authors have calculated the optimal rotation and translation matrices for 2 point clouds and merged them as shown in Fig. 6a.
  • The authors plot a cross section as shown in Fig. 6b, and compare it with the ground truth.
  • The authors see that the measured calibration bar surface overlaps with the ground truth pretty well.

D. Mouse Shaped Phantom Surface Extraction

  • Fig. 7b and 7c show the fringe patterns captured by two webcams from two different views.
  • There are in total nine such fringe patterns with a phase shifting step of 2π/9 for each webcam, and an additional centerline picture, which is used to determine the absolute phase.
  • Fig. 8a, 8b and Fig. 8c, Fig. 8d plot the wrapped phase map and the unwrapped phase map of webcam 1 and webcam 2, respectively.
  • Fig. 9 shows the 3D reconstructed results of the mouse geometry after the alignment of two point clouds and 3D registration, from which the authors can see that the reconstructed size is quite close to the true size.

E. Accuracy Evaluation

  • In order to evaluate their system’s accuracy, the authors have fabricated a step object for which the step height between the two planes is 8.13 mm.
  • Its photo is shown in Fig. 10a and the reconstructed step is shown in Fig. 10b.
  • From the results the authors can see the standard deviations are less than 0.2 mm for both planes, and their system can retrieve the result with errors within 0.5 mm, which is good enough for FMT imaging.
  • All the data sets are shown in Fig. 11a, from which the authors can see they overlap very well.
  • The cross section also proves that the surface data obtained from their method is very close to the CT data.
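The accuracy figures above can be reproduced as plane-fit statistics. Below is a hypothetical numpy sketch (synthetic data with an 8.13 mm step and an assumed 0.05 mm measurement noise, for illustration only, not the authors' actual evaluation code): each plane is fit by least squares, the residual standard deviation measures surface noise, and the difference of the fitted offsets recovers the step height.

```python
import numpy as np

def fit_plane(pts):
    """Least-squares plane z = a*x + b*y + c; returns the coefficients and
    the standard deviation of the fit residuals."""
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return coef, (pts[:, 2] - A @ coef).std()

# Synthetic step object: two parallel planes 8.13 mm apart, with an
# assumed 0.05 mm measurement noise on the reconstructed z values.
rng = np.random.default_rng(1)
xy = rng.uniform(0.0, 20.0, (500, 2))
low = np.c_[xy, rng.normal(0.0, 0.05, 500)]
high = np.c_[xy, rng.normal(8.13, 0.05, 500)]
(c_lo, s_lo), (c_hi, s_hi) = fit_plane(low), fit_plane(high)
step = c_hi[2] - c_lo[2]   # difference of the fitted plane offsets
```

Here the residual standard deviations play the role of the sub-0.2 mm per-plane figures reported above, and the offset difference plays the role of the recovered step height.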

F. Digiwarp Results

  • After the mouse surface point cloud is obtained, the authors perform Digiwarp to the point cloud and generate the finite element mesh.
  • Among these 932 corresponding points, 8 points are chosen manually from the nose, arms and legs, and the other 924 points are chosen automatically slice by slice from the trunk along the x axis.
  • To map the 924 points on the trunk, the authors divide trunk section of the point cloud and the digimouse into 30 slices evenly.
  • Fig. 13b shows the corrected posture of Digimouse, in which the limbs and the head match the position of those of the subject mouse point cloud and Fig. 13c plots the first volume warping result.
  • Fig. 13d shows the surface fitting result while Fig. 13e shows the final volume warping result.

G. FMT reconstruction Results

  • Fig. 14 plots the transverse, coronal and sagittal views of the overlaid FMT and gray-scale CT images.
  • The red color line plots the mouse phantom boundary from the warped mesh.
  • From these results the authors can observe that, with the finite element mesh generated by the proposed 3D shape extraction method, the FMT reconstruction result is consistent with the CT reconstruction.

4. DISCUSSIONS AND CONCLUSION

  • Compared with the approach described in [34], their paper differs in the following aspects.
  • Secondly, the authors have two pairs of pico-projector and webcam to cover the surface from two views.
  • Fourthly, the authors warp a Digimouse mesh into their extracted point cloud to generate the finite element mesh easily and robustly.
  • Experimental results indicate that the accuracy of the proposed surface extraction method is within 0.5 mm, which is sufficient for FMT reconstruction as validated with the FMT images.
  • The authors thank Dr. Simon Cherry in UC Davis for lending us the CRI camera and the line laser, thank Michael Lun for the proof reading, and thank Y.


UC Merced Previously Published Works

Title: 3D mouse shape reconstruction based on phase-shifting algorithm for fluorescence molecular tomography imaging system.
Permalink: https://escholarship.org/uc/item/6t0194q4
Journal: Applied Optics, 54(32)
ISSN: 1559-128X
Authors: Zhao, Yue; Zhu, Dianwen; Baikejiang, Reheman; et al.
Publication Date: 2015-11-01
DOI: 10.1364/ao.54.009573
Peer reviewed

eScholarship.org, powered by the California Digital Library, University of California

Research Article, Applied Optics

3D Mouse Shape Reconstruction based on Phase Shifting Algorithm for Fluorescence Molecular Tomography Imaging System

Yue Zhao¹, Dianwen Zhu¹, Reheman Baikejiang¹, and Changqing Li¹,*

¹School of Engineering, University of California Merced, 5200 N. Lake Road, Merced, CA 95343
*Corresponding author: cli32@ucmerced.edu
Compiled October 19, 2015
We introduce a fast, low cost, and robust method, based on fringe pattern and phase shifting, to obtain
three-dimensional (3D) mouse surface geometry for fluorescence molecular tomography (FMT) imaging.
We use two pairs of pico-projector and webcam to project and capture fringe patterns from different views.
At first, we calibrate the pico-projectors and the webcams to obtain their system parameters. Each pair of
pico-projector and webcam has its own coordinate system. We use a cylindrical calibration bar to calculate
the transformation matrix between these two coordinate systems. After that, the pico-projectors project
nine fringe patterns with a phase shifting step of 2π/9 onto the surface of a mouse shaped phantom.
The deformed fringe patterns are captured by the corresponding webcam respectively, and then are used
to construct two phase maps that are converted to two 3D surfaces composed of scattered points. The
two 3D point clouds are further merged into one with the transformation matrix. The surface extraction
process takes less than 30 seconds. Finally, we apply the Digiwarp method to warp a standard Digimouse
into the measured surface. The proposed method can reconstruct the surface of a mouse size object with
an accuracy of 0.5 mm, which is sufficient to obtain a finite element mesh for FMT imaging. An FMT
experiment of a mouse shaped phantom with one embedded fluorescence capillary target is performed.
With the warped finite element mesh, we reconstruct the target successfully, which validates our surface
extraction approach. © 2015 Optical Society of America
OCIS codes: (120.3890) Medical optics instrumentation; (120.2650) Fringe analysis; (120.6650) Surface measurements, figure.
http://dx.doi.org/10.1364/ao.XX.XXXXXX
1. INTRODUCTION

Fluorescent Molecular Tomography (FMT) emerged almost two decades ago and has been used widely in biomedical research labs because of its unique features such as non-ionizing radiation, low cost, and the wide availability of molecular probes [1] [2]. The typical applications of FMT include protease activity detection [3], cancer detection [4], bone regeneration imaging [5], and drug delivery [6], etc. In FMT, the fluorophores are injected into a mouse body intravenously and then excited with lasers to emit fluorescence photons, some of which propagate to the mouse surface and are measured [7]. Then the three-dimensional (3D) distribution of fluorophores inside the mouse body is reconstructed iteratively from the surface measurements [8].
Most FMT imaging systems use a charge-coupled device (CCD) camera to measure fluorescence photon intensity on the mouse surface in a non-contact mode, because the CCD camera can provide more measurements compared with fiber-based detectors [2]. The forward modeling and the reconstruction of FMT are based on a finite element mesh, which is used to discretize the mouse body. For an FMT imaging system, we have to obtain the geometry of the mouse surface before we can construct the finite element mesh. So the mouse surface acquisition is critical in FMT imaging.

Different methods have been applied for extracting the mouse surface. In one study, the mouse was hung and rotated to be viewed at different angles by a camera, so that the 3D geometry could be reconstructed [9] [10]. In another study, a photogrammetric camera was employed to acquire the 3D mouse shape [1] [11]. Line lasers have also been used to extract the mouse surface by scanning the surface sequentially. For example, Li et al. utilized a three-line laser method [7] to extract the mouse profile, while Gaind et al. employed a single line laser scanner [12]. Recently, Lee et al. utilized line lasers and David Laser scanner software [13] in their FMT system. All these techniques can extract reasonable 3D mouse geometry. However, they are either expensive, complicated, or time consuming. Aside from the above optical methods, CT scans [14] [15] are also known to be able to reconstruct the 3D mouse surface accurately, but they introduce ionizing radiation and are very expensive.
In this paper, we present a phase shifting method to extract the mouse surface. This method is based on fringe pattern projection [16], which has been developed in recent years because of its high resolution, high accuracy, and simple system configuration. Various reconstruction algorithms have been developed, such as the 3-step phase shifting algorithm [17], Fourier transform profilometry [18], and wavelet transform profilometry [19], etc. In this paper we propose to build a system using pico-projector and webcam pairs with a 9-step phase shifting algorithm because of its high accuracy and simplicity. In particular, our mouse surface extraction system has extremely low cost due to the affordable prices of pico-projectors and webcams. Only 10 pictures are needed to reconstruct the mouse surface for each pair of pico-projector and webcam. The picture acquisition and the surface reconstruction take less than 30 seconds in total.

It is nontrivial to generate a finite element mesh from the reconstructed mouse surface point cloud. In former studies, we generated a watertight surface mesh first and then used Tetgen [20] to make a 3D mesh from the surface mesh [7]. It is very challenging to create a watertight surface mesh considering the complicated mouse geometry. To simplify the finite element mesh generation, a Digiwarp algorithm has been proposed to warp an established mesh onto the point cloud, and it has been proved to be effective [21]. Furthermore, the internal organs of the mouse can be warped to reasonable positions too. In this paper, we apply the Digiwarp algorithm to successfully generate a finite element mesh from the point cloud we obtained from a mouse shaped phantom. With the warped finite element mesh, we reconstruct a fluorescence target successfully using the measurements in an FMT experiment.
The rest of this paper is organized as follows. In section 2, the steps of the 3D surface reconstruction method are introduced, including the basic principles of phase shifting, selection of the phase shifting step number, pico-projector and webcam pair calibration, phase to coordinate conversion, merging of two point clouds, an introduction of the Digiwarp method, and a brief description of our FMT imaging system. Section 3 describes the mouse shape extraction results and FMT reconstruction results. Section 4 concludes the paper with discussions.
2. METHODOLOGY

The 3D surface measurement system is shown in Fig. 1. There are two pico-projectors (AAXA p4x, AAXA Technologies Inc., Tustin, CA) and two webcams (C615, Logitech, Apples, Switzerland). Each pair of pico-projector and webcam projects and captures fringe patterns from a different view of the object, in order to extract the object geometry from the top and two side views. The pico-projectors project fringe patterns onto the surface of the object. The patterns are deformed due to the modulation of the object surface. The webcams capture the deformed fringe patterns. The minimum focal distance of the webcam is 200 mm. The projected patterns cover an area of 120 mm by 70 mm, which is sufficient for a mouse. The pico-projectors and the webcams have small sizes and low cost, and can be easily mounted inside the FMT imaging system as described in [7].

Fig. 1. Schematic of the surface extraction system.
A. Phase Shifting Algorithm

In the phase shifting method, N fringe patterns with a phase shifting step of 2π/N are generated by a computer and delivered to the pico-projector for sequential projection onto the object surface. The 1-D cosinusoid fringe patterns are described as:

\[ F_n(u_p, v_p) = \cos\!\left( 2\pi f v_p + \frac{(n-1) \cdot 2\pi}{N} \right), \quad n = 1, 2, \cdots, N \tag{1} \]

where (u_p, v_p) are the image coordinates of the projector, and the patterns along the v_p direction are the same for each u_p. The phase of each point on the surface is calculated as [22]:

\[ \phi(u_c, v_c) = \arctan \frac{ \sum_{n=1}^{N} I_n(u_c, v_c) \cdot \sin\!\left( \frac{2\pi n}{N} \right) }{ \sum_{n=1}^{N} I_n(u_c, v_c) \cdot \cos\!\left( \frac{2\pi n}{N} \right) } \tag{2} \]

where (u_c, v_c) are the coordinates in the webcam image, and I_n(u_c, v_c) is the n-th deformed fringe pattern captured by the webcam. The fringe pattern measurement should be performed in a dark chamber in order to reduce the effects of ambient light. It should be noted that the phase obtained by Eq. 2 comes from an arctan function, which means φ(u_c, v_c) is wrapped between [−π, π]. A phase unwrapping method is utilized after the calculation of Eq. 2 to obtain continuous phase information. Several phase unwrapping methods have been studied and reported [19] [23] [24]. In order to warrant the reliability of the unwrapping results, we use the multilevel quality guided phase unwrapping method as described in [23]. For this method, the phase map points are divided into several levels according to a quality map, which is generated from the gradient of the phase map. The points with the best quality are unwrapped first, then the points with lower quality, until all points are unwrapped. Points with very low quality are discarded. A fast scan-line algorithm [23] is utilized within each level in order to speed up the phase unwrapping process. Thus this method can generate a good unwrapped phase map quickly. After that, an additional centerline image is used to obtain the absolute phase at each pixel [25].
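As a minimal sketch (assuming numpy; function names and image sizes are illustrative), the fringe generation of Eq. 1 and the wrapped-phase recovery of Eq. 2 can be written as:

```python
import numpy as np

def fringe_patterns(height, width, N=9, f=1.0 / 50, phi0=0.0):
    """Generate the N shifted 1-D cosinusoid fringe patterns of Eq. 1.
    f is the spatial frequency in cycles per pixel (here 50 pixels per
    fringe cycle); phi0 models an extra phase, e.g. surface deformation."""
    v = np.arange(width)
    return [np.tile(np.cos(2 * np.pi * f * v + phi0 + (n - 1) * 2 * np.pi / N),
                    (height, 1)) for n in range(1, N + 1)]

def wrapped_phase(images):
    """Recover the wrapped phase map from N captured fringe images (Eq. 2)."""
    N = len(images)
    n = np.arange(1, N + 1).reshape(-1, 1, 1)
    I = np.stack(images)
    num = (I * np.sin(2 * np.pi * n / N)).sum(axis=0)
    den = (I * np.cos(2 * np.pi * n / N)).sum(axis=0)
    # atan2 is used in place of the plain arctan of Eq. 2, the usual way
    # to resolve the quadrant over the full [-pi, pi] range in one step.
    return np.arctan2(num, den)
```

The wrapped phase returned here still needs the unwrapping and centerline steps described above before it becomes an absolute phase map.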
It is worth noting that the spatial frequency of the projected fringe pattern should be chosen carefully. If the spatial frequency is too high, phase errors appear in patches with complex geometry during phase unwrapping. If the spatial frequency is too low, only a small range of phase values can be used, which results in measurement errors. In our experiment, the spatial frequency of the projected fringe patterns is chosen empirically. There are 50 pixels per fringe cycle and the related spatial frequency is about 138 cycles per meter. These values can be adjusted slightly according to different distances between the object surface and the pico-projector.
B. Selection of Phase Shifting Step Number

Theoretically, three fringe patterns are enough to calculate the surface geometry of an object [17]. In reality, however, due to the non-linearity of commercial projectors, obvious fluctuation shows on the extracted 3D surface if we only use three fringe patterns [26]. Phase error compensation methods have been developed to solve this problem [26] [27] for real-time measurement systems. It has also been reported that the phase error caused by the projectors' non-linearity can be reduced by increasing the phase shifting step number [28], while the drawback is that the measurement time increases accordingly. In order to simplify phase error correction algorithms and meanwhile maintain the surface accuracy, we increase the fringe pattern number. We use different fringe pattern numbers from 3 to 15 to measure a white plane and analyze the average phase error for each step number. Finally, after analysis, we choose the fringe pattern number N in Eqs. 1 and 2 to be 9.
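The effect of the step number can be reproduced with a small simulation. The sketch below is illustrative only: a gamma of 2.2 is an assumed model of projector non-linearity, not a measured value from this system. It recovers the phase of a synthetic fringe set with and without the distortion and reports the RMS phase error:

```python
import numpy as np

def phase_error_rms(N, gamma=2.2, samples=3600):
    """RMS wrapped-phase error of an N-step algorithm when the projected
    cosinusoid passes through a gamma non-linearity (assumed model)."""
    phi = np.linspace(-np.pi, np.pi, samples, endpoint=False)
    n = np.arange(1, N + 1).reshape(-1, 1)
    ideal = 0.5 + 0.5 * np.cos(phi + (n - 1) * 2 * np.pi / N)  # in [0, 1]
    distorted = ideal ** gamma                                 # non-linearity

    def recover(I):
        num = (I * np.sin(2 * np.pi * n / N)).sum(axis=0)
        den = (I * np.cos(2 * np.pi * n / N)).sum(axis=0)
        return np.arctan2(num, den)

    err = np.angle(np.exp(1j * (recover(distorted) - recover(ideal))))
    return np.sqrt(np.mean(err ** 2))

for N in (3, 5, 7, 9):
    print(N, phase_error_rms(N))
```

Consistent with the measurement above, the error for N = 9 falls far below that for N = 3, because an N-step algorithm rejects the low-order harmonics that gamma distortion introduces.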
C. Pico-projector and Webcam Pairs Calibration

For one pair of pico-projector and webcam, there are three coordinate systems: the webcam coordinate system, the pico-projector coordinate system, and the world coordinate system [25]. System calibration is required to obtain the intrinsic parameters of the webcam and the pico-projector and to create relationships among the three coordinate systems. For our FMT imaging system, there are two pairs of pico-projector and webcam, so the calibration has to be performed for each pair.

Both the webcam and the pico-projector have intrinsic parameter matrices of the following form:

\[ A = \begin{bmatrix} \alpha & \gamma & u_0 \\ 0 & \beta & v_0 \\ 0 & 0 & 1 \end{bmatrix} \tag{3} \]

where u_0, v_0 are the coordinates of the principal point; α, β are the focal lengths along the u and v axes of the image plane, and γ is the skewness of the u and v axes.

The calibration process is similar to the method described in [25]. The camera calibration is finished with the Matlab Camera Calibration Toolbox [29]. The projector checkerboard images are generated by pixel-to-pixel mapping from camera images, where the 9-step phase shifting algorithm is utilized again.

After all the intrinsic parameters are obtained, the extrinsic parameters are calibrated to create relationships among the webcam, the pico-projector, and the world coordinate systems. The extrinsic 3 × 4 parameter matrix is:

\[ M = \begin{bmatrix} R & t \end{bmatrix} \tag{4} \]

where R is a 3 × 3 rotation matrix and t is a 3 × 1 translation vector. The extrinsic calibration is finished with the Matlab Toolbox as well.
D. Phase to Coordinates Conversion

After all the calibration parameter matrices are obtained, the phase map generated in section 2.A can be converted to 3D coordinates in the world coordinate system. We have the following equations to describe the relations among the different coordinate systems:

\[ S_c \begin{bmatrix} u_c \\ v_c \\ 1 \end{bmatrix} = A_c \begin{bmatrix} R_c & t_c \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m^c_{11} & m^c_{12} & m^c_{13} & m^c_{14} \\ m^c_{21} & m^c_{22} & m^c_{23} & m^c_{24} \\ m^c_{31} & m^c_{32} & m^c_{33} & m^c_{34} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{5} \]

\[ S_p \begin{bmatrix} u_p \\ v_p \\ 1 \end{bmatrix} = A_p \begin{bmatrix} R_p & t_p \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} = \begin{bmatrix} m^p_{11} & m^p_{12} & m^p_{13} & m^p_{14} \\ m^p_{21} & m^p_{22} & m^p_{23} & m^p_{24} \\ m^p_{31} & m^p_{32} & m^p_{33} & m^p_{34} \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \tag{6} \]

where the subscripts c and p denote the camera and projector, respectively, S is a scale factor, and (u_c, v_c) and (u_p, v_p) are image coordinates of the webcam and pico-projector which have the same phase value. A is the intrinsic parameter matrix, R is the rotation matrix, t is the translation vector, and m denotes the elements of the matrix A[R t]. The above parameters are known, and X_w, Y_w, Z_w are the 3D coordinates in the world coordinate system to be determined. In Eqs. 5 and 6 there are 6 equations but only 5 unknowns (S_c, S_p, X_w, Y_w and Z_w), so we can ignore the information about v_p to remove one equation. S_c and S_p can be solved from Eqs. 5 and 6 as:

\[ \begin{aligned} S_c &= m^c_{31} X_w + m^c_{32} Y_w + m^c_{33} Z_w + m^c_{34} \\ S_p &= m^p_{31} X_w + m^p_{32} Y_w + m^p_{33} Z_w + m^p_{34} \end{aligned} \tag{7} \]

We can cancel S_c and S_p by plugging Eq. 7 into Eqs. 5 and 6, and obtain the equations below:

\[ \begin{aligned} (u_c m^c_{31} - m^c_{11}) X_w + (u_c m^c_{32} - m^c_{12}) Y_w + (u_c m^c_{33} - m^c_{13}) Z_w &= m^c_{14} - u_c m^c_{34} \\ (v_c m^c_{31} - m^c_{21}) X_w + (v_c m^c_{32} - m^c_{22}) Y_w + (v_c m^c_{33} - m^c_{23}) Z_w &= m^c_{24} - v_c m^c_{34} \\ (u_p m^p_{31} - m^p_{11}) X_w + (u_p m^p_{32} - m^p_{12}) Y_w + (u_p m^p_{33} - m^p_{13}) Z_w &= m^p_{14} - u_p m^p_{34} \end{aligned} \tag{8} \]

From Eq. 8 the 3D world coordinates (X_w, Y_w, Z_w) can be calculated.
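Equation 8 is a 3 × 3 linear system per pixel, so the conversion from phase to coordinates is a direct solve. The numpy sketch below uses synthetic intrinsic and extrinsic values chosen only to exercise the algebra; they are not the calibration results of this system:

```python
import numpy as np

def projection_matrix(A, R, t):
    """M = A [R t], the 3x4 matrix whose entries are the m_ij of Eqs. 5-6."""
    return A @ np.hstack([R, t.reshape(3, 1)])

def triangulate(Mc, Mp, uc, vc, up):
    """Solve the 3x3 linear system of Eq. 8 for (Xw, Yw, Zw)."""
    rows, rhs = [], []
    for M, coord, i in ((Mc, uc, 0), (Mc, vc, 1), (Mp, up, 0)):
        rows.append(coord * M[2, :3] - M[i, :3])
        rhs.append(M[i, 3] - coord * M[2, 3])
    return np.linalg.solve(np.array(rows), np.array(rhs))

# Synthetic check: project a known world point, then recover it from the
# camera pixel (uc, vc) and the phase-matched projector column up.
A = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
ang = np.deg2rad(15)                       # projector rotated about y
Rp = np.array([[np.cos(ang), 0., np.sin(ang)],
               [0., 1., 0.],
               [-np.sin(ang), 0., np.cos(ang)]])
Mc = projection_matrix(A, np.eye(3), np.array([0., 0., 500.]))
Mp = projection_matrix(A, Rp, np.array([-50., 0., 500.]))
Xw = np.array([10., -20., 30.])
hc, hp = Mc @ np.append(Xw, 1.0), Mp @ np.append(Xw, 1.0)
uc, vc, up = hc[0] / hc[2], hc[1] / hc[2], hp[0] / hp[2]
print(np.allclose(triangulate(Mc, Mp, uc, vc, up), Xw))  # True
```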
E. Alignment of Two Point Clouds

As the two pairs of pico-projector and webcam are calibrated independently, they have different world coordinate systems, as shown in Fig. 2a. {O_1, x_1, y_1, z_1} is the world coordinate system for the first pair, while {O_2, x_2, y_2, z_2} is the world coordinate system for the second pair. So we need to perform point cloud alignment between the two point clouds in the two different coordinate systems to merge them into one point cloud. We use a calibration bar as shown in Fig. 2b to transform both coordinate systems to the conical mirror coordinate system {O_con, x_con, y_con, z_con}. Because the position of the calibration bar in the conical mirror coordinate system is known, we can obtain the rotation and translation matrices for each coordinate system and transform them into the conical mirror system, so that the two 3D point clouds can be merged. During our experiments we find that the two 3D point clouds of a mouse shaped phantom cannot be merged precisely after the alignment. There is a slight misalignment of about 3° of rotation around the x_con axis and 1 mm of translation along the y_con axis. This misalignment may come from slight position errors of the calibration bar in the conical mirror system. The rotation and translation matrices that work well for the calibration bar result in errors for the mouse shaped phantom. So we have an additional step of 3D registration as described in [30] to make the two point clouds align with each other. In this step, we select several points from point cloud 1, and try to find the rotation and translation matrices that minimize the distance from these points to point cloud 2.
Fig. 2. (a) Three coordinate systems in the FMT imaging system and (b) photograph of the calibration bar.
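The registration step searches for the rotation and translation that minimize the point-to-cloud distance [30]. For corresponding point pairs, the closed-form least-squares rigid transform (the SVD-based Kabsch solution, shown here as an illustration rather than the exact algorithm of [30]) can be sketched as:

```python
import numpy as np

def rigid_align(P, Q):
    """Closed-form least-squares rigid transform (R, t) with Q ~ R P + t,
    for (n, 3) arrays of corresponding points (SVD/Kabsch solution)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    return R, cq - R @ cp

# Recover a synthetic misalignment of 3 degrees about x plus 1 mm along y,
# the same magnitude observed for the calibration-bar alignment above.
rng = np.random.default_rng(0)
P = rng.uniform(-30.0, 30.0, (40, 3))
a = np.deg2rad(3.0)
R_true = np.array([[1., 0., 0.],
                   [0., np.cos(a), -np.sin(a)],
                   [0., np.sin(a), np.cos(a)]])
t_true = np.array([0.0, 1.0, 0.0])
Q = P @ R_true.T + t_true
R_est, t_est = rigid_align(P, Q)
print(np.allclose(R_est, R_true), np.allclose(t_est, t_true))  # True True
```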
F. Digiwarp Method

The Digiwarp algorithm was developed by Joshi et al. [21] and is used to warp an established finite element mesh onto a measured surface given as a scattered point cloud, so that the tedious steps from point cloud to finite element mesh are avoided. We apply the Digiwarp algorithm to obtain a finite element mesh from our surface measurements. The Digiwarping process has three steps: posture correction, surface fitting, and elastic volume warping. Posture correction means repositioning the limbs and the head of the Digimouse in order to match those of the mouse surface point cloud obtained from our method. In this step, 14 corresponding points from the limbs and the head are picked from both the Digimouse and the mouse point cloud. After posture correction, volume warping is implemented to warp the internal anatomy of the Digimouse. Then the Digimouse surface is adjusted to fit the subject mouse point cloud by surface fitting. After that, the volume warping is implemented again to warp the internal anatomy to fit the subject mouse point cloud.
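As noted in the summary, corresponding points on the trunk can be picked automatically slice by slice along the x axis. A hypothetical sketch of such slicing follows; the bin widths, the 30-slice count, and the per-slice angular sampling are illustrative assumptions, not the paper's exact scheme:

```python
import numpy as np

def slice_correspondences(points, n_slices=30, per_slice=8):
    """Pick trunk correspondence points slice by slice along the x axis:
    bin the (n, 3) cloud into n_slices equal-width x slices, then within
    each slice keep the points nearest to evenly spaced polar angles
    around the slice centroid in the y-z plane."""
    x = points[:, 0]
    edges = np.linspace(x.min(), x.max() + 1e-9, n_slices + 1)
    picked = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        sl = points[(x >= lo) & (x < hi)]
        if len(sl) == 0:
            continue
        cy, cz = sl[:, 1].mean(), sl[:, 2].mean()
        ang = np.arctan2(sl[:, 2] - cz, sl[:, 1] - cy)
        for target in np.linspace(-np.pi, np.pi, per_slice, endpoint=False):
            gap = np.abs(np.angle(np.exp(1j * (ang - target))))
            picked.append(sl[np.argmin(gap)])
    return np.array(picked)
```

Applying the same slicing to both the measured point cloud and the Digimouse surface yields point pairs in correspondence, which is what the warping steps above consume.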
G. FMT Reconstruction

To validate FMT reconstruction with the mesh generated from the proposed surface extraction method, we perform an FMT experiment with a mouse shaped phantom embedded with a capillary tube that is 20 mm long and 1 mm in diameter. The target is filled with 10 µM DiD fluorescence dye solution. The FMT imaging system has been described in detail elsewhere [31]. Briefly, the FMT imaging system consists of a conical mirror, a line pattern laser mounted on a rotary stage, and a CCD camera, as shown in Fig. 3. After the surface scan, the object is transported into the conical mirror by a linear stage for the FMT scan. The conical mirror is used to collect the fluorescence photon information over the whole object surface, as described in [7]. A 643 nm line laser (Stocker Yale Canada Inc.) is used to excite the fluorescence photons. The laser beam scans across the object surface sequentially. A Cambridge (CRI) Nuance camera (Advanced Molecular Vision, Inc.) is used to perform photon measurements, and a motorized filter wheel (Lambda 10-3, Sutter Instrument, Novato, CA) is positioned in front of the camera lens to select fluorescence emission wavelengths. We use 30 line laser source positions and 14,723 detectors. The optical parameters at the excitation and emission wavelengths are both µ_a = 0.002 mm⁻¹ and µ_s = 1.1 mm⁻¹. The propagation of the excitation and emission light is modeled by the diffusion equation, which is solved by the finite element method [32]. We follow the reconstruction methods proposed by Zhu et al. [8] [33] for the reconstruction of the distribution of the fluorescence dye.
Fig. 3. Photograph of the FMT imaging system with the two pairs of pico-projector and webcam.
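The reconstruction itself follows Zhu et al. [8] [33]. As a generic illustration of the inverse step only, not the authors' algorithm, recovering the fluorophore distribution amounts to solving an ill-posed linear system, for example with Tikhonov regularization:

```python
import numpy as np

def tikhonov_reconstruct(W, b, lam=1e-2):
    """Minimize ||W x - b||^2 + lam ||x||^2: a generic regularized
    least-squares solve for the fluorophore distribution x, given a
    FEM-based sensitivity matrix W and surface measurements b."""
    n = W.shape[1]
    return np.linalg.solve(W.T @ W + lam * np.eye(n), W.T @ b)
```

In practice W couples the 30 source positions and 14,723 detectors to the mesh nodes, and the regularization weight trades data fit against noise amplification.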
3. EXPERIMENT RESULTS

A. System Calibration

In our system the webcams' resolution is 640 × 480 and the pico-projectors' resolution is 854 × 480. Fig. 4a and Fig. 4b show one example of our camera checkerboard image and its corresponding projector checkerboard image generated from the 9-step phase shifting method. The checkerboard images are used for calibrating the webcam and the pico-projector. The squares on the checkerboard have a size of 6.8 mm × 6.8 mm. For each pico-projector and webcam pair, 16 camera pictures are taken and 16 projector images are generated for calibration. The projectors are calibrated with the Matlab Calibration Toolbox [29] after all projector checkerboard images are generated. Our system calibration results are listed below:

Camera intrinsic parameter matrices:
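The projector checkerboard images above are themselves derived from phase maps of the projected fringes. A minimal sketch of the underlying N-step phase-shifting computation (N = 9 in our system), assuming frame k carries a fringe phase shift of 2πk/N:

```python
import numpy as np

def phase_map(images):
    """Wrapped phase from N phase-shifted fringe images.

    images: array of shape (N, H, W); frame k is assumed to be captured
    with a fringe shift of 2*pi*k/N (N = 9 in our setup).
    Returns the wrapped phase in (-pi, pi].
    """
    images = np.asarray(images, dtype=float)
    n = images.shape[0]
    deltas = 2.0 * np.pi * np.arange(n) / n
    # Least-squares sinusoid fit at every pixel:
    num = np.tensordot(np.sin(deltas), images, axes=1)  # sum_k I_k sin(d_k)
    den = np.tensordot(np.cos(deltas), images, axes=1)  # sum_k I_k cos(d_k)
    return np.arctan2(-num, den)
```

The returned phase is wrapped to (−π, π] and still needs unwrapping and a phase-to-coordinate mapping before it yields 3D surface points.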
Citations
Journal ArticleDOI
TL;DR: Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image.
Abstract: Fluorescence molecular tomography (FMT) is an important in vivo imaging modality to visualize physiological and pathological processes in small animals. However, FMT reconstruction is ill-posed and ill-conditioned due to strong optical scattering in deep tissues, which results in poor spatial resolution. It is well known that FMT image quality can be improved substantially by applying the structural guidance in the FMT reconstruction. An approach to introducing anatomical information into the FMT reconstruction is presented using the kernel method. In contrast to conventional methods that incorporate anatomical information with a Laplacian-type regularization matrix, the proposed method introduces the anatomical guidance into the projection model of FMT. The primary advantage of the proposed method is that it does not require segmentation of targets in the anatomical images. Numerical simulations and phantom experiments have been performed to demonstrate the proposed approach’s feasibility. Numerical simulation results indicate that the proposed kernel method can separate two FMT targets with an edge-to-edge distance of 1 mm and is robust to false-positive guidance and inhomogeneity in the anatomical image. For the phantom experiments with two FMT targets, the kernel method has reconstructed both targets successfully, which further validates the proposed kernel method.

21 citations
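The kernel approach summarized above can be illustrated with a generic sketch: each voxel receives a feature vector from the anatomical image, a similarity kernel K is built over voxels, the forward model is reparameterized as y = AKα, and the image is recovered as x = Kα. The feature construction, Gaussian kernel, and dense solver below are illustrative assumptions, not the cited paper's implementation:

```python
import numpy as np

def kernel_matrix(features, sigma=1.0):
    """Row-normalized Gaussian kernel over per-voxel anatomical features.

    features: (n_voxels, n_features) taken from the anatomical image.
    Real implementations keep K sparse (k-nearest neighbours); a dense
    matrix is fine for a small sketch.
    """
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K / K.sum(axis=1, keepdims=True)

def kernel_reconstruct(A, y, K, lam=1e-3):
    """Solve min_a ||A K a - y||^2 + lam ||a||^2 and return x = K a."""
    AK = A @ K
    a = np.linalg.solve(AK.T @ AK + lam * np.eye(K.shape[0]), AK.T @ y)
    return K @ a
```

Because the anatomy enters through the projection model AK rather than through a regularization matrix, no segmentation of the targets is required, which is the advantage the abstract highlights.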

Journal ArticleDOI
TL;DR: A new fringe projection method for surface-shape measurement that uses background- and amplitude-encoded high-frequency fringe patterns, and is able to perform 3D shape measurement with only four projected patterns and captured images, using a single camera and projector.

19 citations

Journal ArticleDOI
TL;DR: A low-cost but effective GPS-aided method based on the target images is proposed to measure the platform attitude angles and it is demonstrated that the measurement accuracy was better than 0.05° (RMSE).
Abstract: Attitude measurement error is one of the main factors that deteriorate the imaging accuracy of laser scanning. Since a high-accuracy inertial navigation system (INS) is very costly, a low-cost but effective GPS-aided method based on target images is proposed in this paper to measure the platform attitude angles. Based on the relationship between the attitude change of the platform and the displacement of two adjacent images, the attitude change can be derived by the proposed method. To quantitatively evaluate the accuracy of the platform attitude angles measured by the proposed method, an outdoor experiment was carried out in comparison with the GPS/INS method. The preliminary results demonstrated that the measurement accuracy using the proposed method was better than 0.05° (RMSE).

5 citations

Proceedings ArticleDOI
TL;DR: This work performed numerical simulations and phantom experiments with a conical mirror based fluorescence molecular tomography (FMT) imaging system to optimize its performance and found that the line laser excitation had slightly better FMT reconstruction results than the point Laser excitation.
Abstract: We performed numerical simulations and phantom experiments with a conical mirror based fluorescence molecular tomography (FMT) imaging system to optimize its performance. With phantom experiments, we have compared three measurement modes in FMT: the whole surface measurement mode, the transmission mode, and the reflection mode. Our results indicated that the whole surface measurement mode performed the best. Then, we applied two different neutral density (ND) filters to improve the measurement's dynamic range. The benefits from ND filters are not as much as predicted. Finally, with numerical simulations, we have compared two laser excitation patterns: line and point. With the same excitation position number, we found that the line laser excitation had slightly better FMT reconstruction results than the point laser excitation. In the future, we will implement Monte Carlo ray tracing simulations to calculate multiple reflection photons, and create a look-up table accordingly for calibration.

3 citations


Cites methods from "3D mouse shape reconstruction based..."

  • ...Two pairs of pico-projector and webcam were used to measure the object surface from two different views....


References
Journal ArticleDOI
TL;DR: Local frequency and phase extraction errors by the WFR and WFF algorithms are analyzed and an unbiased estimation with very low standard deviation is achievable for local frequencies and phase distributions through windowed Fourier transforms.
Abstract: A windowed Fourier ridges (WFR) algorithm and a windowed Fourier filtering (WFF) algorithm have been proposed for fringe pattern analysis and have been demonstrated to be versatile and effective. Theoretical analyses of their performances are of interest. Local frequency and phase extraction errors by the WFR and WFF algorithms are analyzed in this paper. Effectiveness of the WFR and WFF algorithms will thus be theoretically proven. Consider four phase-shifted fringe patterns with local quadric phase (c20 = c02 = 0.005 rad/pixel²), and assume that the noise in these fringe patterns has mean values of zero and standard deviations the same as the fringe amplitude. If the phase is directly obtained using the four-step phase-shifting algorithm, the phase error has a mean of zero and a standard deviation of 0.7 rad. However, when using the WFR algorithm with a window size of σx = σy = 10 pixels, the local frequency extraction error has a mean of zero and a standard deviation of less than 0.01 rad/pixel, and the phase extraction error in the WFR algorithm has a mean of zero and a standard deviation of about 0.02 rad. When using the WFF algorithm with the same window size, the phase extraction error has a mean of zero and a standard deviation of less than 0.04 rad, and the local frequency extraction error also has a mean of zero and a standard deviation of less than 0.01 rad/pixel. Thus, an unbiased estimation with very low standard deviation is achievable for local frequencies and phase distributions through windowed Fourier transforms. Algorithms applied to different fringe patterns, different noise models, and different dimensions are discussed. The theoretical analyses are verified by numerical simulations.

158 citations

Journal ArticleDOI
TL;DR: The filtered amplitude is used as a real-valued quality map, rather than a binary mask, which makes the phase-unwrapping algorithm more tolerant to low-quality regions in a wrapped-phase map, and the process is more automatic.
Abstract: We propose a windowed Fourier-filtered and quality-guided phase-unwrapping algorithm that is an extension and improvement of our previous phase-unwrapping algorithm based on windowed Fourier transform [Opt. Laser Technol. 37, 458 (2005); Key Eng. Mater. 326-328, 67 (2006)]. First, the filtered amplitude is used as a real-valued quality map, rather than a binary mask, which makes the phase-unwrapping algorithm more tolerant to low-quality regions in a wrapped-phase map, and the process is more automatic. Second, the window size selection is considered, which enables the algorithm to be adapted to tackle different phase-unwrapping problems. A large window size is useful for removing noise, building long barriers along phase discontinuities, and identifying invalid regions, while a small window size is useful for preserving local features, such as small regions and high-quality narrow channels. Eight typical examples in Ghiglia and Pritt's excellent book Two-Dimensional Phase Unwrapping: Theory, Algorithm and Software (Wiley, 1998) are used to evaluate the proposed algorithm. The proposed algorithm is able to unwrap all these examples successfully. The windowed Fourier ridges algorithm, another algorithm based on windowed Fourier transform, is also tested and found to be useful in building barriers along phase discontinuities.

135 citations


"3D mouse shape reconstruction based..." refers methods in this paper

  • ...Some phase unwrapping methods have been studied and reported [19] [23] [24]....

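Both unwrapping papers cited here build on the same elementary correction step; a minimal 1-D sketch (Itoh's method, equivalent to `numpy.unwrap` — the quality-guided algorithm above additionally orders the 2-D corrections by a windowed-Fourier quality map):

```python
import numpy as np

def unwrap_1d(wrapped):
    """Remove 2*pi jumps from a 1-D wrapped-phase signal.

    Each sample is shifted by the multiple of 2*pi that keeps its step
    from the previously unwrapped sample within (-pi, pi].
    """
    out = np.asarray(wrapped, dtype=float).copy()
    for i in range(1, out.size):
        step = out[i] - out[i - 1]          # out[i-1] is already unwrapped
        out[i] -= 2 * np.pi * np.round(step / (2 * np.pi))
    return out
```

This succeeds only when the true phase changes by less than π between neighbouring samples, which is why noisy or undersampled regions need the quality-guided ordering described in the abstract.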

Journal ArticleDOI
TL;DR: A novel phase error compensation method for reducing the measurement error caused by nonsinusoidal waveforms in phase-shifting methods and a similar method is also proposed to correct the nonsinusoidality of the fringe patterns for the purpose of generating a more accurate flat image of the object for texture mapping.
Abstract: This paper describes a novel phase error compensation method for reducing the measurement error caused by nonsinusoidal waveforms in phase-shifting methods. For 3-D shape measurement systems using commercial video projectors, the nonsinusoidal waveform of the projected fringe patterns as a result of the nonlinear gamma of projectors causes significant phase measurement error and therefore shape measurement error. The proposed phase error compensation method is based on our finding that the phase error due to the nonsinusoidal waveform depends only on the nonlinearity of the projector's gamma. Therefore, if the projector's gamma is calibrated and the phase error due to the nonlinearity of the gamma is calculated, a lookup table that stores the phase error can be constructed for error compensation. Our experimental results demonstrate that by using the proposed method, the measurement error can be reduced by 10 times. In addition to phase error compensation, a similar method is also proposed to correct the nonsinusoidality of the fringe patterns for the purpose of generating a more accurate flat image of the object for texture mapping. While not relevant to applications in metrology, texture mapping is important for applications in computer vision and computer graphics.

132 citations


"3D mouse shape reconstruction based..." refers background in this paper

  • ...Phase error compensation methods have been developed to solve this problem[26] [27] for real-time measurement systems....

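A simpler, closely related idea to the lookup-table compensation described above is to pre-distort the fringe so that a projector applying a power-law gamma outputs a clean sinusoid. This is an illustrative sketch; the I_out = I_in^γ projector model and the γ value are assumptions, not the cited paper's method:

```python
import numpy as np

def precompensated_fringe(width, n_periods, phase_shift, gamma=2.2):
    """One row of a fringe pattern pre-distorted so that a projector
    applying I_out = I_in ** gamma displays an ideal sinusoid.
    Intensities are normalized to [0, 1]."""
    x = np.arange(width)
    ideal = 0.5 + 0.5 * np.cos(2 * np.pi * n_periods * x / width + phase_shift)
    return ideal ** (1.0 / gamma)   # the projector's gamma cancels this
```

After projection the displayed pattern equals the ideal sinusoid, so the phase-shifting formulas see no harmonic distortion; the lookup-table approach instead corrects the resulting phase error after the fact.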

Journal ArticleDOI
TL;DR: The restored vibration of the drumhead is presented in an animation and the membrane vibration of Chinese drum has been measured with a high speed sampling rate of 1,000 frames/sec and a standard deviation of 0.075 mm.
Abstract: A high-speed optical measurement for the vibrating drumhead is presented and verified by experiment. A projected sinusoidal fringe pattern on the measured drumhead is dynamically deformed with the membrane vibration and grabbed by a high-speed camera. The shape deformation of the drumhead at each sampling instant can be recovered from this sequence of obtained fringe patterns. The membrane vibration of a Chinese drum has been measured with a high-speed sampling rate of 1,000 frames/s and a standard deviation of 0.075 mm. The restored vibration of the drumhead is also presented in an animation.

131 citations


"3D mouse shape reconstruction based..." refers methods in this paper

  • ...Various reconstruction algorithms have been developed, such as 3-step phase shifting algorithm [17], Fourier transform profilometry [18], and wavelet transform profilometry [19] etc....


Journal ArticleDOI
TL;DR: An effective reconstruction algorithm is developed to reconstruct fluorescent yield and lifetime using finite element techniques for three-dimensional fluorescence molecular tomography (FMT), and helps overcome the ill-posedness with FMT.
Abstract: In this paper, we propose a dual-excitation-mode methodology for three-dimensional (3D) fluorescence molecular tomography (FMT). For this modality, an effective reconstruction algorithm is developed to reconstruct fluorescent yield and lifetime using finite element techniques. In the steady state mode, a direct linear relationship is established between measured optical data on the body surface of a small animal and the unknown fluorescent yield inside the animal, and the reconstruction of fluorescent yield is formulated as a linear least square minimization problem. In the frequency domain mode, based on localization results of the fluorescent probe obtained using the first mode, the reconstruction of fluorescent lifetime is transformed into a relatively simple optimization problem. This algorithm helps overcome the ill-posedness with FMT. The effectiveness of the proposed method is numerically demonstrated using a heterogeneous mouse chest phantom, showing good accuracy, stability, noise characteristics and computational efficiency.

128 citations


"3D mouse shape reconstruction based..." refers methods in this paper

  • ...The propagation of excited and emitted lights are modeled by the diffusion equation that is solved by the finite element method [32]....


Frequently Asked Questions (1)
Q1. What are the contributions in "3d mouse shape reconstruction based on phase shifting algorithm for fluorescence molecular tomography imaging system" ?

In this paper, the authors introduce a fast, low-cost, robust method based on fringe projection and phase shifting to reconstruct the 3D surface of a mouse for fluorescence molecular tomography (FMT) imaging.