Proceedings ArticleDOI

Performing Weak Calibration at the Microscale, Application to Micromanipulation

TL;DR: Usual weak calibration techniques are improved and adjusted to the case of stereo video microscopes: a Harris detector using a simplex optimization method for feature point detection, and a "cha" window based ZNSSD correlation for point matching.
Abstract: We improve and adjust usual weak calibration techniques to the case of stereo video microscopes: a Harris detector using a simplex optimization method for feature point detection, and a "cha" window based ZNSSD correlation for point matching. Images of a pattern made with a water drop covered with nickel filings are used. The result is validated by constructing a 3D view of a micromanipulation work field.

Summary (2 min read)

Introduction

  • In addition to biomicroparts like cells and pollen seeds, artificial microparts are chemically or mechanically synthesized, or micromachined.
  • The images and their processing and analysis enable surveillance, system control and micropart recognition.
  • For many years, computer vision has dealt with the problem of using multiple-view imaging systems.
  • Recently, photon video microscopes have been equipped with two optical paths.

II. GEOMETRY OF TWO VIEWS

  • Fig. 1 shows the projective model of a two-view imaging system (stereo vision system).
  • The points O and O′ are respectively the optical centers of the left and right image sources, and the line [OO′] is the baseline of the stereo vision system.
  • The point P is also projected, along the segment [PO′], onto the point p′ in the image plane ψ′.
  • F corresponds to a projective morphism between ψ and ψ′; it depends on the epipole v′ and the homography A between the two views.
  • The computation of F is known as the weak calibration of the corresponding stereovision system, i.e. the recovery of the relative geometry of the system, since it allows the determination of the epipoles.

A. Feature points detection by a simplex Harris detector

  • The first corner detector algorithm was published by Moravec [8].
  • Today there are several corner detectors in the literature, but two are most popular: Susan [9] and Harris [10].
  • In the Nelder-Mead method, the simplex can vary in shape from iteration to iteration through reflection, expansion, contraction and shrinking.
  • The matching between left and right features is performed with a zero-mean normalized sum of squared differences (ZNSSD) correlation, since it is more robust than the plain SSD correlation.
  • A window of 21×21 pixels leads to an error of 6.86 pixels for the rectangular window and 0.67 pixel for the cha one, a decrease of 90.2% of the former error.

C. Estimation of Fundamental Matrix

  • The fundamental matrix is computed by the normalized eight-point algorithm introduced by Longuet-Higgins [17] and finalized by Hartley [18].
  • To estimate the matrix F, at least eight point correspondences are required.
  • The authors used the RAndom SAmple Consensus (RANSAC) algorithm exposed in [19].
  • This algorithm removes the outliers from the model F.

A. Pattern Calibration

  • Weak calibration is achieved from stereo views of a calibration pattern.
  • At the macroscale, two perpendicular chessboards are used.
  • Fig. 8 shows different water drops, with a diameter of 1.5 mm for the big one and 500 µm for the small one.
  • That pattern leads to well-textured images with feature points at different depths.

D. Surface Reconstruction

  • The fundamental matrix F describes the relative geometry between the two image sources.
  • One interest of F is that it allows fast dense correspondence along the epipolar lines between the two views [16].
  • The latter allows the estimation of the disparity δ(p, p′), i.e. the displacement (in pixels) between every couple of matched feature points (p, p′): δ(p, p′) = dist_E(p, p′) (22), with dist_E the Euclidean distance (a sketch of this computation follows this list).
  • After calibrating the system with the approach exposed above, the authors recorded a pair of images from the LEICA video microscope and reconstructed a 3D view of the work field using the method from [21].
  • It represents microgripper tips manipulating microparts (Fig. 11).
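As a minimal illustration of Eq. (22), the following Python sketch computes the per-pair disparity from two arrays of matched points; the function name and the sample coordinates are hypothetical, not values from the paper.

```python
import numpy as np

def disparity(pts_left: np.ndarray, pts_right: np.ndarray) -> np.ndarray:
    """Disparity of Eq. (22): Euclidean distance, in pixels, between
    each matched pair (p, p') of feature points. Both inputs are
    (N, 2) arrays of (x, y) coordinates, matched row by row."""
    return np.linalg.norm(pts_right - pts_left, axis=1)

# Two hypothetical matches: (120, 85)->(132, 86) and (200, 40)->(214, 41).
d = disparity(np.array([[120.0, 85.0], [200.0, 40.0]]),
              np.array([[132.0, 86.0], [214.0, 41.0]]))
print(d)  # per-pair displacement in pixels
```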

V. CONCLUSION

  • The authors have succinctly recalled the geometry of two views and presented an approach to perform weak calibration of that kind of imaging system at the microscale.
  • That process consists of feature point detection using a Harris detector, ZNSSD matching of the feature points, and fundamental matrix estimation.
  • The authors improve the above techniques to adapt them to the views from stereo video microscopes.
  • It is more robust and accurate than the usual rectangular window based approach.
  • The authors also define a calibration pattern made with a water drop covered with nickel filings.


HAL Id: hal-00162281
https://hal.archives-ouvertes.fr/hal-00162281
Submitted on 13 Jul 2007
Performing weak calibration at the microscale. Application to micromanipulation.
Julien Bert, Sounkalo Dembélé, Nadine Lefort-Piat
To cite this version:
Julien Bert, Sounkalo Dembélé, Nadine Lefort-Piat. Performing weak calibration at the microscale. Application to micromanipulation. IEEE International Conference on Robotics and Automation, ICRA'2007, Apr 2007, Rome, Italy. pp.4937-4992. hal-00162281

Performing Weak Calibration at the Microscale, Application to
Micromanipulation
Julien Bert, Sounkalo Dembélé and Nadine Lefort-Piat
Laboratoire d'Automatique de Besançon
UMR CNRS 6596 - ENSMM - UFC
25000 Besançon, France
{jbert, sdembele, npiat}@ens2m.fr
Abstract— We improve and adjust usual weak calibration techniques to the case of stereo video microscopes: a Harris detector using a simplex optimization method for feature point detection, and a "cha" window based ZNSSD correlation for point matching. Images of a pattern made with a water drop covered with nickel filings are used. The result is validated by constructing a 3D view of a micromanipulation work field.
I. Introduction
Micromanipulation is the manipulation of parts at the microscale, i.e. in the range from 1 µm to 1 mm, for assembly, sorting or testing. In addition to biomicroparts like cells and pollen seeds, artificial microparts are chemically or mechanically synthesized, or micromachined. Classical examples of the first and second types are respectively grains of powder like drugs or cosmetics, and optomechatronic components like balls, pegs, pins, threads, membranes, lenses, shutters and fibres. In some cases these microparts are final products (MEMS); otherwise they must be assembled to yield the final products. For that purpose some automated microassembly systems have been developed by [1], [2], [3] and [4]. From those results it can be noticed that a microimaging system is always required, and the most used is the photon microscope connected to a camera. The images and their processing and analysis enable surveillance, system control and micropart recognition. The drawback of this imager is that the depth-of-field is very short and the field of view very narrow.

For many years, computer vision has dealt with the problem of using multiple-view imaging systems. Those systems increase the robustness of the information about the work field. Recently, photon video microscopes have been equipped with two optical paths. Such a stereo video microscope perceives the work field from two different angles of view, left and right, like human vision. That microimaging system opens new perspectives for micromanipulation and its main application, microassembly. But the following usual algorithms must be adapted to the drawbacks of that kind of image source pointed out above: epipolar rectification, dense stereo correspondence, 3D reconstruction, 3D visual servoing, depth estimation... Each method requires at least a weak calibration, which corresponds to the estimation of the relative geometry between the two views. Usually at the macroscale it is easy to perform weak calibration: two images of a calibration pattern (like a chessboard) are required. Once the stereo views are obtained, features are detected in every view and matched, then the calibration parameters are estimated [5]. But at the microscale it is difficult to find a calibration pattern with the right characteristics: it must carry a random pattern over a 3D surface, so that the corresponding images exhibit speckle and depth information.

In this paper we propose a solution to the problem of weak calibration of a two-view microimaging system. We recall the geometry of two views in section 2. We develop in section 3 the stages of our approach: feature detection with a modified Harris detector, improved feature matching, and calibration parameter estimation. We apply our algorithm to a commercial stereo video microscope (LEICA MZ16 A). For that purpose an intelligent pattern is made with a water drop covered with nickel filings.

Fig. 1. Epipolar geometry.
II. GEOMETRY OF TWO VIEWS
Fig. 1 shows the projective model of a two-view imaging system (stereo vision system). The points O and O′ are respectively the optical centers of the left and right image sources; the line [OO′] is the baseline of the stereo vision system. The projection of O′ in the view ψ defines the epipole v, and the projection of O in the view ψ′ defines the other epipole v′. Both views of this stereovision system are intrinsically linked by the epipolar geometry. If a point P of space belongs to a plane π, it is projected along the segment [PO], in the image plane ψ, at the point p. The point P is also projected along the segment [PO′], in the image plane ψ′, onto the point p′.

Fig. 2. Definition of a corner, an edge or a flat according to the detector response (R) and the eigenvalues (λ1, λ2).

It has been shown [6], [7] that the point p of ψ and its correspondent p′ of ψ′ are linked by the epipolar constraint:

p′ᵀ F p = 0    (1)

where F is called the fundamental matrix and is of dimensions 3×3 and rank 2. F corresponds to a projective morphism between ψ and ψ′; it depends on the epipole v′ and the homography A between the two views. The computation of F is known as the weak calibration of the corresponding stereovision system, i.e. the recovery of the relative geometry of the system, since it allows the determination of the epipoles. The estimation of F is an important step in rendering techniques: 2D or 3D view synthesis. The weak calibration stages developed in the paper are: feature point detection in the two views (of a calibration pattern) with a modified Harris detector, feature point matching with an improved windowed correlation, and F estimation.
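The constraint of Eq. (1) is easy to exercise numerically. The following Python/NumPy sketch (function names ours, not from the paper) evaluates the epipolar residual for a candidate match and the epipolar line l′ = F p on which the match must lie, assuming a 3×3 fundamental matrix F is already available.

```python
import numpy as np

def epipolar_residual(F: np.ndarray, p: np.ndarray, p_prime: np.ndarray) -> float:
    """Residual of the epipolar constraint p'^T F p = 0 (Eq. 1).
    F is the 3x3 rank-2 fundamental matrix; p and p_prime are
    homogeneous points [x, y, 1] in the left and right views.
    A perfect correspondence yields a residual of zero."""
    return float(p_prime @ F @ p)

def epipolar_line(F: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Epipolar line l' = F p in the right view: the coefficients
    (a, b, c) of a x' + b y' + c = 0 on which the match p' must lie."""
    return F @ p
```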
Fig. 3. The Tsukuba stereo images (384 × 288 pixels) are used as a benchmark.

Fig. 4. The scheme of the Harris simplex, where Nc is the number of corners desired and Ncd the number of corners detected.
III. The Calibration Stages
A. Feature points detection by a simplex Harris detector
The first corner detector algorithm was published by Moravec [8]. Today there are several corner detectors in the literature, but two are most popular: Susan [9] and Harris [10]. Ref. [11] shows that the Harris detector is the most robust to illumination changes; this is why it is often used for feature point detection. It is based on an auto-correlation function, since the latter highlights the intensity changes:

E(u, v) = Σ_x Σ_y W(x, y) [I(x + u, y + v) − I(x, y)]²    (2)

where [u, v] is the displacement of W(x, y), the auto-correlation window (rectangular, i.e. constant, or Gaussian), and I the intensity of the image. By considering a small shift, the bilinear approximation M of E can be written:

M = Σ_x Σ_y W(x, y) [ I_x²  I_xI_y ; I_xI_y  I_y² ]    (3)

where I_x and I_y are the derivatives defined by:

I_x = ∂I(x, y)/∂x,  I_y = ∂I(x, y)/∂y    (4)

For every pixel of the image the detector response is:

R = det M − k (trace M)²    (5)

with det M = λ1 λ2 and trace M = λ1 + λ2, where λ1 and λ2 are the eigenvalues of M. The value of k is constant and was empirically set to 0.04 × 10⁻⁶ in our experiments. According to the detector response and the eigenvalues, it is possible to determine whether the region of the window is a corner (R > 0), an edge (R < 0) or a flat (R ≈ 0), Fig. 2.

Usually the number of corners detected under the condition R > 0, Ncd, is too large. In order to adjust that number, a threshold t is empirically defined and the corners Ncd are determined by the condition R > t:

Ncd = f(R(t))    (6)

Let us consider the stereo images from the Tsukuba database (Fig. 3). For t = 0, Ncd is respectively 672 points and 689 points for the left and right images; for t = 0.1 it becomes 59 points and 61 points. But in F estimation (and other parameter estimations, like the collineation matrix) the same number of corners is required in both images. We propose a modification of the Harris detector in order to define a priori the number of corners Nc to detect. The problem is to find the value t* of t that gives the desired number of corners Nc, i.e. to solve:

Nc − f(R(t)) = 0    (7)

This is an optimization problem that can be solved using a Nelder-Mead simplex method [12]. That method compares the values of the objective function with zero and does not require the use of any derivatives. A simplex in Rⁿ is a set of n + 1 points that do not lie in a hyperplane; for example, a triangle is a simplex of dimension 2. In the Nelder-Mead method, the simplex can vary in shape from iteration to iteration through reflection, expansion, contraction and shrinking. The corner response for the image is calculated once; while Nc − Ncd differs from zero, the simplex modifies the threshold t (Fig. 4).

Fig. 5. Left, the classical rectangular window correlation. Right, the cha window correlation with k = 2.
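A minimal sketch of this simplex-Harris idea, using OpenCV's cornerHarris for the response map (computed once) and SciPy's Nelder-Mead implementation for the threshold search. The parameter values (blockSize, ksize, k = 0.04, the starting threshold) are illustrative defaults, not the paper's settings.

```python
import cv2
import numpy as np
from scipy.optimize import minimize

def detect_n_corners(gray: np.ndarray, n_wanted: int) -> np.ndarray:
    """Harris detector returning roughly a requested number of corners,
    by solving Nc - f(R(t)) = 0 (Eq. 7) for the threshold t with a
    Nelder-Mead simplex, following the scheme of Fig. 4."""
    # Corner response R (Eq. 5), computed once per image.
    R = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)

    def objective(t: np.ndarray) -> float:
        # |Nc - Ncd(t)| with Ncd(t) the number of responses above t.
        return abs(n_wanted - int(np.count_nonzero(R > t[0])))

    # The simplex only moves the threshold; the response map is fixed.
    res = minimize(objective, x0=[0.01 * R.max()], method="Nelder-Mead")
    ys, xs = np.nonzero(R > res.x[0])
    return np.column_stack([xs, ys])  # corner coordinates (x, y)
```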
B. Feature points matching by a Ш-ZNSSD
The matching between left and right features is performed with a zero-mean normalized sum of squared differences (ZNSSD) correlation, since the latter is more robust than the plain SSD correlation. It consists of the definition of the correlation window, the computation of the correlation function (also called criterion) around the features, and the selection of the feature pairs that optimize that criterion (maximum likelihood). Depending on the window, the SSD algorithm is more or less robust. Multiple windows (multiple asymmetric windows from [13]) or multiple recursive windows (recursive adaptive size multi-windowing from [14]) are used to improve the robustness; refer to [15] for a comparative study.

We use a non-recursive multiple window that gives the best results. It can be expressed as follows:

W_Ш = W_k^odd · Ш_{kΔx, kΔy}(x, y)    (8)

where k is an integer, and Δx and Δy are respectively the sampling intervals along x and y. W_k is a window based on W, defined as follows:

W = f_w(x, y) · Ш_{Δx, Δy}(x, y)    (9)

where f_w(x, y) is a continuous function (rectangular window) and Ш ("cha") the 2D Dirac comb function. The latter corresponds to the product of two Dirac combs:

Ш_{Δx, Δy}(x, y) = Ш_{Δx}(x) · Ш_{Δy}(y)
                 = Σ_{m=−∞}^{+∞} δ(x − mΔx) · Σ_{n=−∞}^{+∞} δ(y − nΔy)    (10)

The odd size and the scale factor k are defined as follows:

|W_k^odd|_x = k × |W|_x − mod_3(k × |W|_x)
|W_k^odd|_y = k × |W|_y − mod_3(k × |W|_y)    (11)

with |·|_x and |·|_y the cardinality along x and y respectively, and mod_n() the modulo. We represent in Fig. 5 the rectangular window and the cha window. The two windows perform the correlation in the same time whatever the value of k, since they have the same number of pixels (|W| = |W_Ш|).

Fig. 6. Process of the closer-neighbor method.
The Zero-mean Normalized Sum of Squared Differences (ZNSSD) criterion is defined by:

c_{x,y} = Σ_{i,j} [(I(x+i, y+j) − Ī) − (I′(x′+i, y′+j) − Ī′)]²
          / ( √(Σ_{i,j} [I(x+i, y+j) − Ī]²) · √(Σ_{i,j} [I′(x′+i, y′+j) − Ī′]²) )    (12)

where Ī and Ī′ are the means of images I and I′ respectively. In the usual approach each left feature is compared with all the right features; the right feature for which the minimum value of the criterion is obtained corresponds to the maximum likelihood between the left one and the right one. But that approach is slow. In order to increase the speed of the process we use a method based on a relaxation technique [16] with only the neighbor constraint. Each feature p of the left image I is projected without any transformation into the right image I′, and the Euclidean distance between p and every p′ of I′ is computed (Fig. 6):

dist_E(p, p′) = √((x′ − x)² + (y′ − y)²)    (13)

The criterion is computed only for a predefined neighborhood: the points p′ within a given radius around p. A point p′ that matches a point p is removed from I′, so a unique correspondence is guaranteed for each point p. Finally we obtain two sets of matched points that allow the estimation of the fundamental matrix.

Fig. 7. Mean matching error on the Tsukuba images: mean error of matching (2500 features) according to the size of the window, for the rectangular and cha windows.
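A sketch of this matching stage under simplifying assumptions: Eq. (12) is computed over plain rectangular windows (the cha mask above could be substituted), window means stand in for the image means, image borders are not handled, and the function names are ours.

```python
import numpy as np

def znssd(I, J, p, q, half=10):
    """ZNSSD criterion (Eq. 12) between (2*half+1)-sided windows
    centred on p = (x, y) in image I and q = (x', y') in image J;
    integer pixel coordinates, lower value = better match."""
    (x, y), (xp, yp) = p, q
    a = I[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    b = J[yp - half:yp + half + 1, xp - half:xp + half + 1].astype(float)
    a, b = a - a.mean(), b - b.mean()          # zero-mean windows
    den = np.sqrt((a**2).sum()) * np.sqrt((b**2).sum()) + 1e-12
    return float(((a - b) ** 2).sum() / den)

def match_features(I, J, pts_left, pts_right, radius=30.0):
    """Match left to right features, scoring ZNSSD only for right
    points inside the search radius (Eq. 13); every matched right
    point is removed so each correspondence is unique."""
    pts_right = [tuple(q) for q in pts_right]
    matches = []
    for p in map(tuple, pts_left):
        near = [q for q in pts_right
                if np.hypot(q[0] - p[0], q[1] - p[1]) < radius]
        if near:
            best = min(near, key=lambda q: znssd(I, J, p, q))
            matches.append((p, best))
            pts_right.remove(best)
    return matches
```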
In order to compare the performance of rectangular window based matching and cha window based matching, we overlay a regular grid of points on both Tsukuba images and perform a dual matching: a forward matching that gives the correspondence of every left point in the right view, and a backward matching that computes the correspondence of that point back in the left view. Then the Euclidean distance between the original and the estimated point is calculated. Fig. 7 shows the evolution of the mean matching error according to the size of the correlation window. The cha window based approach is the best whatever the window size: for example, a window of 21×21 pixels leads to an error of 6.86 pixels for the rectangular window and 0.67 pixel for the cha one, a decrease of 90.2% of the former error. The mean rate of best matching is 56.7%.
C. Estimation of Fundamental Matrix
The fundamental matrix is computed by the normalized eight-point algorithm introduced by Longuet-Higgins [17] and finalized by Hartley [18]. To estimate the matrix F, at least eight point correspondences are required. The first stage is a normalization of every point of both images: a transformation with a translation and an isotropic scaling, so that the centroid of the reference points is at the origin of the coordinates and the mean Euclidean distance of the points from the origin is equal to √2. We get two sets of normalized points, and the fundamental matrix is defined by the equation:

p̂′ᵀ F̂ p̂ = 0    (14)

where p̂′ = [x̂′, ŷ′, 1]ᵀ and p̂ = [x̂, ŷ, 1]ᵀ are the normalized points. It can be written as:

[x̂′ ŷ′ 1] · [ f̂11 f̂12 f̂13 ; f̂21 f̂22 f̂23 ; f̂31 f̂32 f̂33 ] · [x̂ ŷ 1]ᵀ = 0    (15)
If that equation is expanded it becomes:

x̂′x̂ f̂11 + x̂′ŷ f̂12 + x̂′ f̂13 + ŷ′x̂ f̂21 + ŷ′ŷ f̂22 + ŷ′ f̂23 + x̂ f̂31 + ŷ f̂32 + f̂33 = 0    (16)
That result can be written as:

A f̂ = 0    (17)

with f̂ the vector made up of the entries of F̂, and A the matrix of linear equations built from the set of n matched points, of the form:

A = [ x̂′1x̂1  x̂′1ŷ1  x̂′1  ŷ′1x̂1  ŷ′1ŷ1  ŷ′1  x̂1  ŷ1  1 ;
      ⋮ ;
      x̂′nx̂n  x̂′nŷn  x̂′n  ŷ′nx̂n  ŷ′nŷn  ŷ′n  x̂n  ŷn  1 ]    (18)
A linear solution can be computed by the Singular Value Decomposition (SVD) of the matrix A:

A = U_A Σ_A V_Aᵀ    (19)

where the column of V_A associated with the smallest singular value corresponds to the vector f̂, i.e. the entries of the fundamental matrix F̂. A property of F is that it is singular, so the rank of F̂ should be two. F̂ can be decomposed as:

F̂ = U_F̂ Σ_F̂ V_F̂ᵀ    (20)

where Σ_F̂ = diag(σ1, σ2, σ3). To constrain F to a rank of two, it is recomposed from the elements of SVD(F̂) with the constraint σ3 = 0:

F = U_F̂ diag(σ1, σ2, 0) V_F̂ᵀ    (21)

Finally the fundamental matrix F is denormalized by the inverse transformation of the points (translation and isotropic scaling), which brings it back to the original matches p ↔ p′.
We used the RAndom SAmple Consensus (RANSAC) algorithm exposed in [19]. This algorithm removes the outliers from the model F.
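In practice the eight-point model can be wrapped in a RANSAC loop directly; OpenCV bundles both, as in the following usage sketch (the one-pixel threshold and 0.99 confidence are illustrative choices, not the paper's settings).

```python
import cv2
import numpy as np

def robust_fundamental(pts_l: np.ndarray, pts_r: np.ndarray):
    """Estimate F with the eight-point model inside a RANSAC loop;
    returns F and a boolean mask flagging the inlier matches."""
    F, mask = cv2.findFundamentalMat(
        np.float32(pts_l), np.float32(pts_r),
        method=cv2.FM_RANSAC,
        ransacReprojThreshold=1.0,   # max distance to epipolar line, px
        confidence=0.99)
    if F is None:
        raise ValueError("RANSAC could not fit a fundamental matrix")
    return F, mask.ravel().astype(bool)
```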
IV. Application
We apply our algorithms to a commercial stereo video microscope (LEICA MZ16 A) with a magnification from 0.1x to 2x. Two different optical paths of the light lead to two views of the scene, recorded by two cameras. The system is dedicated to the surveillance and control of a microassembly station: parts of 400 µm × 400 µm × 4 µm etched in a silicon wafer have to be assembled to form 3D products.

Citations
Journal ArticleDOI
TL;DR: The accuracy and flexibility of the proposed automatic virtual calibration method, based on parallel single-plane properties, are outlined, and a 3-D virtual calibration pattern is constructed using the micromanipulator tip with subpixel-order localization in the image frame.
Abstract: In the context of virtualized-reality-based telemicromanipulation, this paper presents a visual calibration technique for an optical microscope coupled to a charge-coupled device (CCD) camera. The accuracy and flexibility of the proposed automatic virtual calibration method, based on parallel single-plane properties, are outlined. In contrast to standard approaches, a 3-D virtual calibration pattern is constructed using the micromanipulator tip with subpixel-order localization in the image frame. The proposed procedure leads to a linear system whose solution provides directly both the intrinsic and extrinsic parameters of the geometrical model. Computer simulations and real data have been used to test the proposed technique, and promising results have been obtained. Based on the proposed calibration techniques, a 3-D virtual microenvironment of the workspace is reconstructed through the real-time imaging of two perpendicular optical microscopes. Our method provides a flexible, easy-to-use technical alternative to the classical techniques used in micromanipulation systems.

46 citations

Proceedings ArticleDOI
01 Oct 2016
TL;DR: This work proposes an assistive robotic system that facilitates micromanipulation under microscopy in the form of intelligent robotic vision and guided manipulation using user-selected patch similarity and provides online coordinated depth compensation.
Abstract: Micromanipulation during live microscopic imaging relies heavily on good manual controls, dexterity, and hand-eye coordination. However, unassisted manual operations in these procedures greatly limit the speed, repeatability, and ease of operation. This is especially challenging in the case of microinjection where the insertion path needs to be in precise alignment with the imaging plane to avoid damage to cells. In this paper, we proposed an assistive robotic system that facilitates micromanipulation under microscopy. This comes in the form of intelligent robotic vision and guided manipulation. Using user-selected patch similarity, the system registers target templates and provides online coordinated depth compensation that ensures in-plane microinjection without the need for any prior calibration. This vision-based auto-registration approach readily integrates to any existing microscope system uncalibrated. It can also work as a standalone imaging solution with any general digital microscope camera. Experiments show that the similarity-score based depth compensation performed better than the uncompensated method. The method was shown to self-recover from an unfocused position. By robotizing conventional microscopy and micromanipulation procedures, we hope to address traditional latent needs and open up new possibilities in the ways experimental biology is performed.

18 citations

Journal ArticleDOI
TL;DR: An automatic vision-guided micromanipulation approach to facilitate versatile deployment and portable setup and overcomes the constraints of traditional practices that confine automated cell manipulation to a laboratory setting by extending the application beyond the laboratory environment.
Abstract: In this paper, an automatic vision-guided micromanipulation approach to facilitate versatile deployment and portable setup is proposed. This paper is motivated by the importance of micromanipulation and the limitations in existing automation technology in micromanipulation. Despite significant advancements in micromanipulation techniques, there remain bottlenecks in integrating and adopting automation for this application. An underlying reason for the gaps is the difficulty in deploying and setting up such systems. To address this, we identified two important design requirements, namely, portability and versatility of the micromanipulation platform. A self-contained vision-guided approach requiring no complicated preparation or setup is proposed. This is achieved through an uncalibrated self-initializing workflow algorithm also capable of assisted targeting. The feasibility of the solution is demonstrated on a low-cost portable microscope camera and compact actuated microstages. Results suggest subpixel accuracy in localizing the tool tip during initialization steps. The self-focus mechanism could recover intentional blurring of the tip by autonomously manipulating it 95.3% closer to the focal plane. The average error in visual servo is less than a pixel with our depth compensation mechanism showing better maintaining of similarity score in tracking. Cell detection rate in a 1637-frame video stream is 97.7% with subpixels localization uncertainty. Our work addresses the gaps in existing automation technology in the application of robotic vision-guided micromanipulation and potentially contributes to the way cell manipulation is performed. Note to Practitioners —This paper introduces an automatic method for micromanipulation using visual information from microscopy. We design an automatic workflow, which consists of: 1) self-initialization; 2) vision-guided manipulation; and 3) assisted targeting, and demonstrate versatile deployment of the micromanipulator on a portable microscope camera setup. Unlike existing systems, our proposed method does not require any tedious calibration or expensive setup making it mobile and low cost. This overcomes the constraints of traditional practices that confine automated cell manipulation to a laboratory setting. By extending the application beyond the laboratory environment, automated micromanipulation technology can be made more ubiquitous and expands readily to facilitate field study.

15 citations


Cites methods from "Performing Weak Calibration at the ..."

  • ...nate systems is typically obtained using 3-D calibration patterns [20], [21], [49], [50] or the known kinematics of...

    [...]

Journal ArticleDOI
TL;DR: This article proposes a confidence-based approach for combining two visual tracking techniques to minimize the influence of unforeseen visual tracking failures to achieve uninterrupted vision-based control and demonstrates the robustness in the developed low-cost micromanipulation platform.
Abstract: This article proposes a confidence-based approach for combining two visual tracking techniques to minimize the influence of unforeseen visual tracking failures to achieve uninterrupted vision-based control. Despite research efforts in vision-guided micromanipulation, existing systems are not designed to overcome visual tracking failures, such as inconsistent illumination condition, regional occlusion, unknown structures, and nonhomogenous background scene. There remains a gap in expanding current procedures beyond the laboratory environment for practical deployment of vision-guided micromanipulation system. A hybrid tracking method, which combines motion-cue feature detection and score-based template matching, is incorporated in an uncalibrated vision-guided workflow capable of self-initializing and recovery during the micromanipulation. Weighted average, based on the respective confidence indices of the motion-cue feature localization and template-based trackers, is inferred from the statistical accuracy of feature locations and the similarity score-based template matches. Results suggest improvement of the tracking performance using hybrid tracking under the conditions. The mean errors of hybrid tracking are maintained at subpixel level under adverse experimental conditions while the original template matching approach has mean errors of 1.53, 1.73, and 2.08 pixels. The method is also demonstrated to be robust in the nonhomogeneous scene with an array of plant cells. By proposing a self-contained fusion method that overcomes unforeseen visual tracking failures using pure vision approach, we demonstrated the robustness in our developed low-cost micromanipulation platform. Note to Practitioners —Cell manipulation is traditionally done in highly specialized facilities and controlled environment. Existing vision-based methods do not readily fulfill the need for the unique requirements in cell manipulation including prospective plant cell-related applications. There is a need for robust visual tracking to overcome visual tracking failure during the automated vision-guided micromanipulation. To address the gap in maintaining continuous tracking for vision-guided micromanipulation under unforeseen visual tracking failures, we proposed a purely visual data-driven hybrid tracking approach. Our proposed confidence-based approach combines two tracking techniques to minimize the influence of scene uncertainties, hence, achieving uninterrupted vision-based control. Because of its readily deployable design, the method can be generalized for a wide range of vision-guided micromanipulation applications. This method has the potential to significantly expand the capability of cell manipulation technology to even include prospective applications associated with plant cells, which are yet to be explored.

11 citations

01 Oct 2018
TL;DR: In this paper, a self-contained vision-guided approach to facilitate versatile deployment and portable setup is proposed, which is achieved through an uncalibrated self-initializing workflow algorithm also capable of assisted targeting.
Abstract: In this paper, an automatic vision-guided micromanipulation approach to facilitate versatile deployment and portable setup is proposed. This paper is motivated by the importance of micromanipulation and the limitations in existing automation technology in micromanipulation. Despite significant advancements in micromanipulation techniques, there remain bottlenecks in integrating and adopting automation for this application. An underlying reason for the gaps is the difficulty in deploying and setting up such systems. To address this, we identified two important design requirements, namely, portability and versatility of the micromanipulation platform. A self-contained vision-guided approach requiring no complicated preparation or setup is proposed. This is achieved through an uncalibrated self-initializing workflow algorithm also capable of assisted targeting. The feasibility of the solution is demonstrated on a low-cost portable microscope camera and compact actuated microstages. Results suggest subpixel accuracy in localizing the tool tip during initialization steps. The self-focus mechanism could recover intentional blurring of the tip by autonomously manipulating it 95.3% closer to the focal plane. The average error in visual servo is less than a pixel with our depth compensation mechanism showing better maintaining of similarity score in tracking. Cell detection rate in a 1637-frame video stream is 97.7% with subpixels localization uncertainty. Our work addresses the gaps in existing automation technology in the application of robotic vision-guided micromanipulation and potentially contributes to the way cell manipulation is performed. Note to Practitioners —This paper introduces an automatic method for micromanipulation using visual information from microscopy. We design an automatic workflow, which consists of: 1) self-initialization; 2) vision-guided manipulation; and 3) assisted targeting, and demonstrate versatile deployment of the micromanipulator on a portable microscope camera setup. Unlike existing systems, our proposed method does not require any tedious calibration or expensive setup making it mobile and low cost. This overcomes the constraints of traditional practices that confine automated cell manipulation to a laboratory setting. By extending the application beyond the laboratory environment, automated micromanipulation technology can be made more ubiquitous and expands readily to facilitate field study.

10 citations

References
Journal ArticleDOI
TL;DR: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n 41) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point.
Abstract: A method is described for the minimization of a function of n variables, which depends on the comparison of function values at the (n + 1) vertices of a general simplex, followed by the replacement of the vertex with the highest value by another point. The simplex adapts itself to the local landscape, and contracts on to the final minimum. The method is shown to be effective and computationally compact. A procedure is given for the estimation of the Hessian matrix in the neighbourhood of the minimum, needed in statistical estimation problems.

27,271 citations


"Performing Weak Calibration at the ..." refers methods in this paper

  • ...This is an optimization problem that can be solved using a Nelder-Mead simplex method [12]....

    [...]

Journal ArticleDOI
TL;DR: New results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form that provide the basis for an automatic system that can solve the Location Determination Problem under difficult viewing.
Abstract: A new paradigm, Random Sample Consensus (RANSAC), for fitting a model to experimental data is introduced. RANSAC is capable of interpreting/smoothing data containing a significant percentage of gross errors, and is thus ideally suited for applications in automated image analysis where interpretation is based on the data provided by error-prone feature detectors. A major portion of this paper describes the application of RANSAC to the Location Determination Problem (LDP): Given an image depicting a set of landmarks with known locations, determine that point in space from which the image was obtained. In response to a RANSAC requirement, new results are derived on the minimum number of landmarks needed to obtain a solution, and algorithms are presented for computing these minimum-landmark solutions in closed form. These results provide the basis for an automatic system that can solve the LDP under difficult viewing

23,396 citations

Proceedings ArticleDOI
01 Jan 1988
TL;DR: The problem the authors are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work.
Abstract: The problem we are addressing in Alvey Project MMI149 is that of using computer vision to understand the unconstrained 3D world, in which the viewed scenes will in general contain too wide a diversity of objects for topdown recognition techniques to work. For example, we desire to obtain an understanding of natural scenes, containing roads, buildings, trees, bushes, etc., as typified by the two frames from a sequence illustrated in Figure 1. The solution to this problem that we are pursuing is to use a computer vision system based upon motion analysis of a monocular image sequence from a mobile camera. By extraction and tracking of image features, representations of the 3D analogues of these features can be constructed.

13,993 citations


"Performing Weak Calibration at the ..." refers background in this paper

  • ...The weak calibration stages developed in the paper are: feature points detection in the two views (of a calibration pattern) with a modified Harris detector, feature points matching with an improved windowed correlation and F estimation....

    [...]

  • ...[11] shows that Harris detector is the most robust according to illumination changes....

    [...]

  • ...In this case Harris detector gives features only in the area of the image which is in focus....

    [...]

  • ...This is why, Harris detector is often used for feature point detection....

    [...]

  • ...That process consists of a feature points detection using a Harris detector, a ZNSSD matching of feature points and the fundamental matrix estimation....

    [...]

Journal ArticleDOI
TL;DR: This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction and the resulting methods are accurate, noise resistant and fast.
Abstract: This paper describes a new approach to low level image processing; in particular, edge and corner detection and structure preserving noise reduction. Non-linear filtering is used to define which parts of the image are closely related to each individual pixel; each pixel has associated with it a local image region which is of similar brightness to that pixel. The new feature detectors are based on the minimization of this local image region, and the noise reduction method uses this region as the smoothing neighbourhood. The resulting methods are accurate, noise resistant and fast. Details of the new feature detectors and of the new noise reduction method are described, along with test results.

3,669 citations


"Performing Weak Calibration at the ..." refers background in this paper

  • ...Today, there are several corners detectors in the literature, but only two are more popular, Susan [9] and Harris [10]....

    [...]

Journal ArticleDOI
01 Jan 1987-Nature
TL;DR: A simple algorithm for computing the three-dimensional structure of a scene from a correlated pair of perspective projections is described here, when the spatial relationship between the two projections is unknown.
Abstract: A simple algorithm for computing the three-dimensional structure of a scene from a correlated pair of perspective projections is described here, when the spatial relationship between the two projections is unknown. This problem is relevant not only to photographic surveying1 but also to binocular vision2, where the non-visual information available to the observer about the orientation and focal length of each eye is much less accurate than the optical information supplied by the retinal images themselves. The problem also arises in monocular perception of motion3, where the two projections represent views which are separated in time as well as space. As Marr and Poggio4 have noted, the fusing of two images to produce a three-dimensional percept involves two distinct processes: the establishment of a 1:1 correspondence between image points in the two views—the ‘correspondence problem’—and the use of the associated disparities for determining the distances of visible elements in the scene. I shall assume that the correspondence problem has been solved; the problem of reconstructing the scene then reduces to that of finding the relative orientation of the two viewpoints.

2,671 citations


"Performing Weak Calibration at the ..." refers methods in this paper

  • ...The fundamental matrix is computing by the normalized eight-point algorithm introduced by LonguetHiggins [17] and finalized by Hartley [18]....

    [...]

Frequently Asked Questions (2)
Q1. What are the contributions mentioned in the paper "Performing weak calibration at the microscale. application to micromanipulation" ?

The authors improve and adjust usual weak calibration techniques to the case of stereo video microscopes: a Harris detector using a simplex optimization method for feature point detection, and a "cha" window based ZNSSD correlation for point matching.

Future work will deal with the fabrication of a stable calibration pattern: the water drop works well, but it evaporates after a short time and its size cannot be controlled.