Book ChapterDOI

3D invariants with high robustness to local deformations for automated pollen recognition

12 Sep 2007-pp 425-435
TL;DR: A new technique for the extraction of features from 3D volumetric data sets based on group integration is presented, which is robust to local arbitrary deformations and nonlinear gray value changes, but is still sensitive to fine structures.
Abstract: We present a new technique for the extraction of features from 3D volumetric data sets based on group integration. The features are invariant to translation, rotation and global radial deformations. They are robust to local arbitrary deformations and nonlinear gray value changes, but are still sensitive to fine structures. On a data set of 389 confocally scanned pollen from 26 species we get a precision/recall of 99.2% with a simple 1NN classifier. On volumetric transmitted light data sets of about 180,000 airborne particles, containing about 22,700 pollen grains from 33 species, recorded with a low-cost optic in a fully automated online pollen monitor the mean precision for allergenic pollen is 98.5% (recall: 86.5%) and for the other pollen 97.5% (recall: 83.4%).



3D Invariants with High Robustness to Local
Deformations for Automated Pollen Recognition
Olaf Ronneberger, Qing Wang, and Hans Burkhardt
Albert-Ludwigs-Universität Freiburg, Institut für Informatik, Lehrstuhl für
Mustererkennung und Bildverarbeitung, Georges-Köhler-Allee Geb. 052,
79110 Freiburg, Deutschland
{ronneber,qwang,burkhardt}@informatik.uni-freiburg.de
Abstract. We present a new technique for the extraction of features from 3D volumetric data sets based on group integration. The features are invariant to translation, rotation and global radial deformations. They are robust to local arbitrary deformations and nonlinear gray value changes, but are still sensitive to fine structures. On a data set of 389 confocally scanned pollen from 26 species we get a precision/recall of 99.2% with a simple 1NN classifier. On volumetric transmitted light data sets of about 180,000 airborne particles, containing about 22,700 pollen grains from 33 species, recorded with a low-cost optic in a fully automated online pollen monitor, the mean precision for allergenic pollen is 98.5% (recall: 86.5%) and for the other pollen 97.5% (recall: 83.4%).
1 Introduction
Nearly all worldwide pollen forecasts are still based on manual counting of pollen in air samples under the microscope. Within the BMBF-funded project "OMNIBUSS", a first demonstrator of a fully automated online pollen monitor was developed that integrates the collection, preparation and microscopic analysis of air samples. Due to commercial interests, no details of the developed pattern recognition algorithms were published within the last three years. This is the first time that we show how this machine works behind the scenes.
Challenges in pollen recognition. Due to the great intra-class variability and only very subtle inter-class differences, automated pollen recognition is a very challenging but still largely unsolved problem. As most pollen grains are nearly spherical and the subtle differences are mainly found near the surface, a pollen expert needs the full 3D information (usually by "focussing through" the transparent pollen grain). An additional difficulty is that pollen grains are often agglomerated and that the air samples contain lots of other airborne particles. For a reliable measurement of highly allergenic pollen (e.g. Artemisia; a few such pollen grains per m³ of air can already cause allergic reactions), the avoidance of false positives is one of the most important requirements for a fully automated system.
State of the art. Almost all published articles concerning pollen recognition deal with very low numbers of pollen grains from only a few species and use manually prepared pure pollen samples, e.g. [1]. Only [4] used a data set from real air samples containing a reasonable number of pollen grains (3686) from 27 species. But even on a reduced data set containing only 8 species and dust particles, the recall was only 64.9% with a precision of 30%.

F.A. Hamprecht, C. Schnörr, and B. Jähne (Eds.): DAGM 2007, LNCS 4713, pp. 425–435, 2007.
© Springer-Verlag Berlin Heidelberg 2007
Main Contribution. In this paper we describe the extension of the Haar-integration framework [9,6,7,8] (further denoted as "HI framework") to global and local deformations. This is achieved by creating synthetic channels containing the segmentation borders and employing special parameterized kernel functions. Due to the sparsity of non-zero values in the synthetic channels, the resulting integral features are highly localized in real space, while the framework automatically guarantees the desired invariance properties.

For efficient computation of these integrals we make use of the sparsity of the data in the synthetic channels and use a Fourier or spherical harmonics ("SH") series expansion (for the desired rotation invariance) to compute multiple features at the same time.
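The Fourier-expansion idea behind planar rotation invariance can be illustrated in a few lines (a simplified sketch, not the authors' implementation): for gray values sampled on a circle around the object center, a planar rotation is a circular shift, and the magnitudes of the Fourier coefficients are therefore rotation invariant; each coefficient magnitude yields one invariant feature.

```python
import numpy as np

def rotation_invariant_features(ring_samples):
    """Magnitudes of the Fourier coefficients of gray values sampled on a
    circle; a planar rotation is a circular shift and leaves them unchanged."""
    return np.abs(np.fft.fft(ring_samples))

rng = np.random.default_rng(0)
ring = rng.random(64)                    # gray values at 64 equally spaced angles
rotated = np.roll(ring, 17)              # the same object, rotated
print(np.allclose(rotation_invariant_features(ring),
                  rotation_invariant_features(rotated)))   # True
```

The shift theorem guarantees this: a circular shift multiplies each Fourier coefficient by a unit-magnitude phase factor, so taking absolute values removes the rotation dependence.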
Fig. 1. 3D recordings of Betula pollen grains: a) volume rendering of confocal data set; b) horizontal and vertical cuts of confocal data set; c) horizontal and vertical cuts of transmitted light data set. In transmitted light microscopy the recording properties in z-direction (the direction of the optical axis) are significantly different from those in the xy-direction, because the effects of diffraction, refraction and absorption depend on the direction of the transmitted light. Furthermore there is a significant loss of information in z-direction due to the low-pass property of the optical transfer function.
2 Material and Methods
Data Sets. To demonstrate the generality of the proposed invariants and compare them to earlier results, we use two different pollen data sets in this article. Both contain 3D volumetric recordings of pollen grains.

The "confocal data set" contains 389 pollen grains from 26 German pollen taxa, recorded with a confocal laser scanning microscope (fig. 1a,b). For further details on this data set refer to [6].

The "pollen monitor data set" contains about 180,000 airborne particles including about 22,700 pollen grains from air samples that were collected, prepared and recorded with transmitted light microscopy from the online pollen monitor from March to September 2006 in Freiburg and Zürich (fig. 1c). All 180,000 particles were manually labeled by pollen experts.
Segmentation. To find the 3D surface of the pollen grains in the confocal data set, we use the graph cut algorithm described in [2]. The original data were first scaled down. The edge costs to source and sink were modeled by a Gaussian distribution relative to the mean and minimum gray value. We added voxel-to-voxel edges to the 124-neighborhood, where the weight was a Gaussian of the gray value differences. The resulting binary mask was then smoothly scaled up to the original size.
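The Gaussian edge-weight scheme described above can be sketched as follows (an illustration with made-up σ parameters; the actual min-cut solver of [2] is omitted):

```python
import numpy as np

def n_link_weight(g1, g2, sigma=10.0):
    """Voxel-to-voxel edge weight: a Gaussian of the gray value difference,
    so similar neighbors are expensive to separate by the cut."""
    return np.exp(-((g1 - g2) ** 2) / (2.0 * sigma ** 2))

def t_link_weights(gray, mean_gray, min_gray, sigma=20.0):
    """Edge costs to source (object, modeled around the mean gray value)
    and sink (background, modeled around the minimum gray value)."""
    w_source = np.exp(-((gray - mean_gray) ** 2) / (2.0 * sigma ** 2))
    w_sink = np.exp(-((gray - min_gray) ** 2) / (2.0 * sigma ** 2))
    return w_source, w_sink

# similar neighbors get a high weight, dissimilar ones a low weight
print(n_link_weight(100.0, 102.0) > n_link_weight(100.0, 150.0))  # True
```

These weights would then populate the graph handed to a min-cut/max-flow solver; the σ values control how sharply gray value differences are penalized and would be tuned per data set.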
The first step in processing the pollen monitor data set is the detection of circular objects with voxel-wise vector based gray-scale invariants, similar to those in [8]. For each detected circular object the precise border in the sharpest layer is searched: as parts of the object border are often missing or not clear, we use snakes to find a smooth and complete border. To avoid the common problem of snakes being attracted to undesired edges (if the plain gradient magnitude is used as force field), we take the steps depicted in fig. 2.
a) sharpest layer; b) found edges; c) weighted edges; d) final snake

1. Applying modified Canny edge detection. As pollen grains have a nearly round shape, the edges that are approximately perpendicular to the radial direction are more relevant. We replace the gradient with its radial component in the original Canny edge detection algorithm.

2. Model-based weighting of the edges. The curvatures and relative locations of the edges are analyzed and each edge is given a different weight. Some edges are even eliminated. As a result, a much clearer weighted edge image is obtained.

3. Employing snakes to find the final border. The initial contour is chosen to be the circle found in the detection step. The external force field is the so-called "gradient vector flow" [10] computed from the weighted edge image.

Fig. 2. Segmentation of transmitted light microscopic images
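Step 1, replacing the gradient by its radial component, can be sketched as below (an illustrative 2D re-implementation, not the authors' code; the non-maximum suppression and hysteresis stages of Canny are omitted):

```python
import numpy as np

def radial_gradient(img, center):
    """Project the image gradient onto the radial direction from `center`;
    only edges roughly perpendicular to the radius give a strong response."""
    gy, gx = np.gradient(img.astype(float))
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    ry, rx = yy - center[0], xx - center[1]
    norm = np.hypot(ry, rx)
    norm[norm == 0] = 1.0                      # avoid division by zero at the center
    return (gx * rx + gy * ry) / norm          # signed radial gradient component

# a bright disc: the radial gradient responds at the rim, not inside
yy, xx = np.mgrid[0:64, 0:64]
disc = (np.hypot(yy - 32, xx - 32) <= 10).astype(float)
rg = radial_gradient(disc, (32, 32))
print(abs(rg[32, 42]) > abs(rg[32, 36]))       # True: rim beats interior
```

Edges tangent to the radius (i.e. with a radial gradient) survive this projection, while edges running radially, which a round pollen outline should not produce, are suppressed.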
2.1 Construction of Invariants

For the construction of invariants we use the combination of a normalization and Haar-integration [9,6,7,8] (see eq. (1)) over a transformation group containing rotations and deformations (Haar-integration has nothing to do with Haar wavelets). In contrast to the very general approach in [6], we now use the object center and the outer border found in the segmentation step to extract more distinctive features describing certain regions of the object.
T[f](X) := \int_G f(gX) \, dg    (1)

G : transformation group
g : one element of the transformation group
dg : Haar measure
f : nonlinear kernel function
X : n-dim, multi-channel data set
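Equation (1) can be made concrete with a finite transformation group, where the Haar integral reduces to a plain average (a toy sketch, not the paper's continuous rotation group): averaging a nonlinear kernel over the four 90° rotations of an image yields a feature that is exactly invariant under those rotations.

```python
import numpy as np

def haar_integral(X, f):
    """T[f](X) = (1/|G|) * sum over g in G of f(gX), for the finite group
    G of 0/90/180/270 degree rotations; invariant because applying any
    group element to X merely permutes the summands."""
    return np.mean([f(np.rot90(X, k)) for k in range(4)])

f = lambda X: X[0, 0] * X[1, 2]          # a simple nonlinear two-point kernel

rng = np.random.default_rng(1)
X = rng.random((5, 5))
print(np.isclose(haar_integral(X, f), haar_integral(np.rot90(X), f)))  # True
```

The same averaging argument carries over to continuous groups, where the sum becomes an integral weighted by the Haar measure dg.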
Invariance to translations. Invariance to translations is achieved by moving
the center of mass of the segmentation mask to the origin. The final features are
quite insensitive to errors in this normalization step, because they are computed
“far” away from this center and only the direction to it is used.
Invariance to rotation. Invariance to rotation around the object center is achieved by integration over the rotation group. In the confocal data set we can model a 3D rotation of a real-world object by a 3D rotation of the recorded volumetric data set (see fig. 1b). In contrast to this, the transmitted light microscopic image stacks from the pollen monitor data set show very different characteristics in xy- and z-direction (see fig. 1c). A rotation around the x- or y-axis of the real-world object results in such different gray value distributions that it is more reasonable to model only the rotation around the z-axis, resulting in a planar rotation invariance.
Invariance to global Deformations and Robustness to local Deformations. The deformation model consists of two parts. The global deformations are modeled by a simple shift in radial direction e_r, which depends only on the angular coordinates (see figure 3a). For full 3D rotations described in spherical coordinates x = (x_r, x_ϕ, x_ϑ) this model is

x′ = x + γ(x)  with  γ(x) = γ(x_ϕ, x_ϑ) · e_r(x_ϕ, x_ϑ) .   (2)
For rotations around the z-axis described in cylindrical coordinates x = (x_r, x_ϕ, x_z) we get

x′ = x + γ(x)  with  γ(x) = γ(x_ϕ) · e_r(x_ϕ) .   (3)
Please note that this deformation is well defined only for r > γ(ϕ), which is no problem in the present application, because the features are computed "far" away from the center.
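A 2D version of the radial deformation model of eq. (3) can be sketched as follows (illustrative only; the sinusoidal γ is a made-up example deformation):

```python
import numpy as np

def radially_deform(points, gamma):
    """x' = x + gamma(phi) * e_r(phi): shift each point along its radial
    unit vector by an angle-dependent amount (the planar, z-slice case)."""
    phi = np.arctan2(points[:, 1], points[:, 0])
    e_r = np.stack([np.cos(phi), np.sin(phi)], axis=1)
    return points + gamma(phi)[:, None] * e_r

# deform a circle of radius 5 with a sinusoidal radial bump
phi = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
circle = 5.0 * np.stack([np.cos(phi), np.sin(phi)], axis=1)
deformed = radially_deform(circle, lambda p: 0.5 * np.sin(3.0 * p))
radii = np.hypot(deformed[:, 0], deformed[:, 1])
print(np.allclose(radii, 5.0 + 0.5 * np.sin(3.0 * phi)))  # True
```

Because the shift acts purely along e_r, the angular coordinate of every point is preserved and only its radius changes, which is exactly what the "global radial deformation" invariance targets.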
The smaller local deformations are described by an arbitrary displacement field D(x) such that

x′ = x + D(x)   (4)

(see fig. 3b). For the later partial Haar-integration [3] over all possible realizations of this displacement field, it is sufficient to know only the probability for the occurrence of a certain relative displacement r within this field as

p( D(x + d) − D(x) = r ) = p_d(r; d)   ∀ x, d ∈ IR³ ,   (5)

Fig. 3. Possible realizations of the deformation models: a) global deformation model (radial); b) local deformation model (arbitrary)
where we select p_d(r; d) to be a rotationally symmetric Gaussian distribution with a standard deviation σ = ‖d‖ · σ_d.
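The displacement probability p_d(r; d) can be written out explicitly (a sketch under the stated model; σ_d = 0.1 is an arbitrary example value):

```python
import numpy as np

def rel_displacement_pdf(r, d, sigma_d=0.1):
    """Isotropic Gaussian density of a relative displacement r between two
    points a vector d apart; the spread grows with the separation |d|."""
    r, d = np.asarray(r, float), np.asarray(d, float)
    sigma = np.linalg.norm(d) * sigma_d
    n = r.size
    norm_const = (2.0 * np.pi) ** (n / 2.0) * sigma ** n
    return np.exp(-np.dot(r, r) / (2.0 * sigma ** 2)) / norm_const

# small relative displacements are more likely than large ones
d = [10.0, 0.0, 0.0]                               # here sigma = 1.0
print(rel_displacement_pdf([0, 0, 0], d) > rel_displacement_pdf([2, 0, 0], d))  # True
```

Scaling σ with ‖d‖ encodes the intuition that nearby points move almost rigidly together, while widely separated points may be displaced rather independently.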
While we achieve full invariance to radial deformations by full Haar-integration, we can only reach robustness to local deformations by partial Haar-integration. But this non-invariance in the second case is exactly the desired behavior: in combination with appropriate kernel functions, this results in a continuous mapping of objects (with weak or strong local deformations) into the feature space.
The kernel functions. Instead of selecting a certain fixed number of kernel functions, we introduce parameterized kernel functions here. Embedded into the HI framework, each new combination of kernel parameters results in a new invariant feature. For multiple kernel parameters, we now have a multidimensional invariant feature array describing the object.
Robustness to gray value transformations. To become robust to gray value transformations, the information is split into gradient direction (which is very robust even under nonlinear gray value transformations) and gradient magnitude. This was already successfully applied to the HI framework in [8] and to confocal pollen data sets in [5].
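The direction/magnitude split can be sketched in 2D (an illustration, not the authors' implementation): a monotone nonlinear gray value remapping rescales the gradient magnitude but leaves the gradient direction unchanged.

```python
import numpy as np

def split_gradient(img):
    """Split the image gradient into unit-direction channels (robust to
    monotone nonlinear gray value changes) and a magnitude channel."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    safe = np.where(mag > 0, mag, 1.0)          # avoid division by zero
    return np.stack([gx / safe, gy / safe]), mag

img = np.add.outer(np.arange(8.0), np.arange(8.0))   # a diagonal ramp
d1, m1 = split_gradient(img)
d2, m2 = split_gradient(img ** 2 + 3.0)              # nonlinear monotone remap
# interior directions agree although the magnitudes differ
print(np.allclose(d1[:, 1:-1, 1:-1], d2[:, 1:-1, 1:-1]))  # True
```

Only the interior is compared here because `np.gradient` falls back to one-sided differences at the image border, where the two gradient estimates differ slightly.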
Synthetic channels with segmentation results. To feed the segmentation information into the HI framework, we simply render the surface (confocal data set) or the contour of the sharpest layer (transmitted light data set) as delta-peaks into a new channel S and extend the kernel function with two additional points that sense the gray value in this channel. The only condition for this technique is that the computation of the synthetic channel and the action of the transformation group can be exchanged without the result being changed (i.e., we must get the same result if we first extract the surface and then rotate and deform the volume, and vice versa).
Resulting kernel function. To achieve the requested properties we construct 4-point kernels, where two points of the kernel, a_1 and a_2, sense the segmentation

Citations
Book ChapterDOI
01 Jan 2012

1 citation

Proceedings ArticleDOI
01 Nov 2013
TL;DR: The proposed descriptor efficiently captures discriminative information by encoding the local inner and outer structure of the transparent pollens in a focus-tolerant manner, achieving approximately 74.5% classification accuracy, demonstrating that local scale invariant features can be robust even under challenging conditions.
Abstract: In this work we present a new approach to the extraction of features robust to focal mismatches, for the classification of biological particles characterized by 3 dimensional structures. We use SIFT descriptors in order to encode local gradient, fused with features derived from an introduced adaptive filterbank of Gabor filters. We have evaluated the proposed technique using a dataset consisting of 174 images of pollen grains from 29 species, acquired with a low-cost optical microscope in arbitrary focal planes. The proposed descriptor efficiently captures discriminative information by encoding the local inner and outer structure of the transparent pollens in a focus-tolerant manner, achieving approximately 74.5% classification accuracy, demonstrating that local scale invariant features can be robust even under challenging conditions.

1 citation

Proceedings ArticleDOI
29 Nov 2010
TL;DR: A new method for 3D pollen particle recognition based on spatial geometric constraints histogram descriptors (SGCHD) for reducing high dimensionality and noise disturbance, the surface curvature voxels are extracted as the primitive features instead of the original3D pollen particles.
Abstract: This paper presents a new method for 3D pollen particle recognition based on spatial geometric constraints histogram descriptors (SGCHD). For reducing high dimensionality and noise disturbance, the surface curvature voxels are extracted as the primitive features instead of the original 3D pollen particles. The geometric constraints vectors are computed to describe the spatial correlations among the curvature voxels on the 3D pollen particle surface. The histogram algorithm is applied on the geometric constraints vectors to obtain the statistical histogram descriptors with fixed dimension. Experimental results verified the good invariance and robustness of our proposed descriptors on two pollen image databases.

1 citation


Cites background from "3D invariants with high robustness ..."

  • ...The integral invariant descriptors, such as GSGI descriptors ([5]), the MiSP descriptors ([6]), BGPK descriptors ([7]), effectively describe the statistical voxels distribution of the 3D pollen particles, which are proved to be invariant to the rotation and translation transformation of pollen images....


Proceedings ArticleDOI
01 Oct 2012
TL;DR: Experimental results validate that the presented descriptors are invariant to different pollen particles geometric transformations, such as pose change and spatial rotation, and high recognition precision and speed can be obtained simultaneously.
Abstract: This paper presents a new kind of descriptors using spatial geometric constraints histogram descriptors (SGCHD) based on curvature mesh graph for automatic 3D pollen recognition. In order to reduce high dimensionality and noise disturbance arisen from the abnormal record approach under microscopy, the separated surface curvature voxels are extracted as the primitive features to represent the original 3D pollen particles. Due to the good invariance to pollen rotation and scaling transformation, the spatial geometric constraints vectors are calculated to describe the spatial position correlations of the curvature voxels on the 3D curvature mesh graph. For exact similarity evaluation purpose, the bidirectional histogram algorithm is applied to the spatial geometric constraints vectors to obtain the statistical histogram descriptors with fixed dimensionality, which is invariant to the number and the starting position of the voxels. Experimental results validate that the presented descriptors are invariant to different pollen particles geometric transformations, such as pose change and spatial rotation, and high recognition precision and speed can be obtained simultaneously.

1 citation


Cites background from "3D invariants with high robustness ..."

  • ...The integral invariant descriptors, such as the MISP descriptors[8], the GSGI descriptors[12] and the BGPK descriptors[13], can effectively describe the statistical voxels distribution of the 3D pollen particles and have been proved to be invariant to the rotation and translation of pollen images....


Book ChapterDOI
17 Sep 2018
TL;DR: The paper describes and investigates the application of the algorithm for the detection and extraction of pollen contour shapes in digital microscopic images based on the Modified Histogram Thresholding, previously employed in the extraction of red blood cells for the automatic diagnosis of certain diseasesbased on the erythrocyte shapes.
Abstract: The paper describes and investigates the application of the algorithm for the detection and extraction of pollen contour shapes in digital microscopic images. This is the first step in the process of identification of pollen grains in order to obtain a method for automatic or semi-automatic analysis of air samples. The final approach is supposed to support this process by recognizing pollen types in digital microscopic images. The applied segmentation approach is based on the Modified Histogram Thresholding, previously employed in the extraction of red blood cells for the automatic diagnosis of certain diseases based on the erythrocyte shapes.
References
Journal ArticleDOI
TL;DR: This paper compares the running times of several standard algorithms, as well as a new algorithm that is recently developed that works several times faster than any of the other methods, making near real-time performance possible.
Abstract: Minimum cut/maximum flow algorithms on graphs have emerged as an increasingly useful tool for exact or approximate energy minimization in low-level vision. The combinatorial optimization literature provides many min-cut/max-flow algorithms with different polynomial time complexity. Their practical efficiency, however, has to date been studied mainly outside the scope of computer vision. The goal of this paper is to provide an experimental comparison of the efficiency of min-cut/max-flow algorithms for applications in vision. We compare the running times of several standard algorithms, as well as a new algorithm that we have recently developed. The algorithms we study include both Goldberg-Tarjan style "push-relabel" methods and algorithms based on Ford-Fulkerson style "augmenting paths." We benchmark these algorithms on a number of typical graphs in the contexts of image restoration, stereo, and segmentation. In many cases, our new algorithm works several times faster than any of the other methods, making near real-time performance possible. An implementation of our max-flow/min-cut algorithm is available upon request for research purposes.

4,463 citations

Journal ArticleDOI
TL;DR: This paper presents a new external force for active contours, which is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image, and has a large capture range and is able to move snakes into boundary concavities.
Abstract: Snakes, or active contours, are used extensively in computer vision and image processing applications, particularly to locate object boundaries. Problems associated with initialization and poor convergence to boundary concavities, however, have limited their utility. This paper presents a new external force for active contours, largely solving both problems. This external force, which we call gradient vector flow (GVF), is computed as a diffusion of the gradient vectors of a gray-level or binary edge map derived from the image. It differs fundamentally from traditional snake external forces in that it cannot be written as the negative gradient of a potential function, and the corresponding snake is formulated directly from a force balance condition rather than a variational formulation. Using several two-dimensional (2-D) examples and one three-dimensional (3-D) example, we show that GVF has a large capture range and is able to move snakes into boundary concavities.

4,071 citations

Book ChapterDOI
03 Sep 2001
TL;DR: The goal of this paper is to provide an experimental comparison of the efficiency of min-cut/max flow algorithms for applications in vision, comparing the running times of several standard algorithms, as well as a new algorithm that is recently developed.
Abstract: After [10, 15, 12, 2, 4] minimum cut/maximum flow algorithms on graphs emerged as an increasingly useful tool for exact or approximate energy minimization in low-level vision. The combinatorial optimization literature provides many min-cut/max-flow algorithms with different polynomial time complexity. Their practical efficiency, however, has to date been studied mainly outside the scope of computer vision. The goal of this paper is to provide an experimental comparison of the efficiency of min-cut/max flow algorithms for energy minimization in vision. We compare the running times of several standard algorithms, as well as a new algorithm that we have recently developed. The algorithms we study include both Goldberg-style "push-relabel" methods and algorithms based on Ford-Fulkerson style augmenting paths. We benchmark these algorithms on a number of typical graphs in the contexts of image restoration, stereo, and interactive segmentation. In many cases our new algorithm works several times faster than any of the other methods making near real-time performance possible.

3,099 citations

Journal ArticleDOI
TL;DR: This system was developed to classify 12 categories of particles found in human urine; it achieves a 93.2% correct classification rate in this application and this performance is considered good.

117 citations

Book ChapterDOI
13 Sep 1995
TL;DR: This paper considers image rotations and translations and presents algorithms for constructing invariant features and develops algorithms for recognizing several objects in a single scene without the necessity to segment the image beforehand.
Abstract: Invariant features are image characteristics which remain unchanged under the action of a transformation group. We consider in this paper image rotations and translations and present algorithms for constructing invariant features. After briefly sketching the theoretical background we develop algorithms for recognizing several objects in a single scene without the necessity to segment the image beforehand. The objects can be rotated and translated independently. Moderate occlusions are tolerable. Furthermore we show how to use these techniques for the recognition of articulated objects. The methods work directly with the gray values and do not rely on the extraction of geometric primitives like edges or corners in a preprocessing step. All algorithms have been implemented and tested both on synthetic and real image data. We present some illustrative experimental results.

85 citations

Frequently Asked Questions (14)
Q1. What are the contributions in "3d invariants with high robustness to local deformations for automated pollen recognition" ?

The authors present a new technique for the extraction of features from 3D volumetric data sets based on group integration. 

Due to the sparsity of non-zero-values in the synthetic channels the resulting integral features are highly localized in the real space, while the framework automatically guarantees the desired invariance properties. 

For a reliable measurement of highly allergenic pollen (a few such pollen grains per m³ of air can already cause allergic reactions), the avoidance of false positives is one of the most important requirements for a fully automated system. 

While the authors achieve full invariance to radial deformations by full Haar-integration the authors can only reach robustness to local deformations by partial Haar-integration. 

The first step in processing the pollen monitor data set is the detection of circular objects with voxel-wise vector based gray-scale invariants, similar to those in [8]. 

From the training set only the “clean” (not agglomerated, not contaminated) pollen and the “non-pollen” particles from a few samples were used to train the support vector machine (SVM) using the RBF-kernel (radial basis function) and the one-vs-rest multi-class approach. 

For the application on the pollen monitor data set (rotational invariance only around the z-axis), q is split into a radial distance q_r to the segmentation border and the z-distance to the central plane q_z. 

The best sampling of the parameter space of the kernel functions (corresponding to the inner-class deformations of the objects) was found by cross validation on the training data set, resulting in N_qr × N_qz × N_c × n = 31 × 11 × 16 × 16 = 87296 "structural" features (using kernel function k1) and 8 "shape" features (using kernel function k2). 

Within the BMBF-funded project "OMNIBUSS", a first demonstrator of a fully automated online pollen monitor was developed that integrates the collection, preparation and microscopic analysis of air samples. 

This is achieved by creating synthetic channels containing the segmentation borders and employing special parameterized kernel functions. 

The "pollen monitor data set" contains about 180,000 airborne particles including about 22,700 pollen grains from air samples that were collected, prepared and recorded with transmitted light microscopy from the online pollen monitor from March to September 2006 in Freiburg and Zürich (fig. 1c). 

For the group of arbitrary deformations G_D, the group of global radial deformations G_γ, and the group of rotations G_R, the final Haar integral becomes

T = \int_{G_R} \int_{G_γ} \int_{G_D} f( g_R g_γ g_D S, g_R g_γ g_D X ) p(D) \, dg_D \, dg_γ \, dg_R ,   (8)

where p(D) is the probability for the occurrence of the local displacement field D. The transformation of the data set is described by (gX)(x) =: X(x′), where x′ = Rx (rotation) + γ(Rx) (global deformation) + …

For 3D rotations this framework uses a spherical-harmonics series expansion, and for planar rotations around the z-axis it is simplified to a Fourier series expansion. 

As most pollen grains are nearly spherical and the subtle differences are mainly found near the surface, a pollen expert needs the full 3D information (usually by “focussing through” the transparent pollen grain).