
HAL Id: hal-01985663
https://hal.archives-ouvertes.fr/hal-01985663
Submitted on 18 Jan 2019
HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.
Optimal Multivariate Gaussian Fitting with Applications
to PSF Modeling in Two-Photon Microscopy Imaging
Emilie Chouzenoux, Tim Tsz-Kit Lau, Claire Lefort, Jean-Christophe Pesquet
To cite this version:
Emilie Chouzenoux, Tim Tsz-Kit Lau, Claire Lefort, Jean-Christophe Pesquet. Optimal Multivariate Gaussian Fitting with Applications to PSF Modeling in Two-Photon Microscopy Imaging. Journal of Mathematical Imaging and Vision, Springer Verlag, 2019, 61 (7), pp. 1037-1050. DOI: 10.1007/s10851-019-00884-1. hal-01985663.

Optimal Multivariate Gaussian Fitting with Applications to PSF
Modeling in Two-Photon Microscopy Imaging
Emilie Chouzenoux 1,2 · Tim Tsz-Kit Lau 3 · Claire Lefort 4 · Jean-Christophe Pesquet 1
Abstract Fitting Gaussian functions to empirical data is a crucial task in a variety of scientific applications, especially in image processing. However, most of the existing approaches for performing such fitting are restricted to two dimensions and they cannot be easily extended to higher dimensions. Moreover, they are usually based on alternating minimization schemes which come with few theoretical guarantees in the underlying nonconvex setting. In this paper, we provide a novel variational formulation of the multivariate Gaussian fitting problem, which is applicable to any dimension and accounts for possible non-zero background and noise in the input data. The block multiconvexity of our objective function leads us to propose a proximal alternating method to minimize it in order to estimate the Gaussian shape parameters. The resulting FIGARO algorithm is shown to converge to a critical point under mild assumptions. The algorithm shows good robustness when tested on synthetic datasets. To demonstrate the versatility of FIGARO, we also illustrate its excellent performance in the fitting of the Point Spread Functions of experimental raw data from a two-photon fluorescence microscope.
Keywords Gaussian fitting · Kullback-Leibler divergence · Alternating minimization · Proximal methods · PSF identification · Two-photon fluorescence microscopy
Emilie Chouzenoux · emilie.chouzenoux@univ-mlv.fr
1 Center for Visual Computing, CentraleSupélec, INRIA Saclay, Université Paris-Saclay, 91190 Gif-sur-Yvette, France
2 Laboratoire d'Informatique Gaspard Monge, UMR CNRS 8049, Université Paris-Est Marne-la-Vallée, 77454 Marne-la-Vallée Cedex 2, France
3 Department of Statistics, Northwestern University, Evanston, IL 60208, United States of America
4 XLIM Research Institute, UMR CNRS 7252, Université de Limoges, 87032 Limoges, France
1 Introduction
Fitting Gaussian shapes from noisy observed data points is an essential task in various science and engineering applications. In the one-dimensional (1D) case, it lies for instance at the core of spectroscopy signal analysis techniques in physical science [21,31]. In the two-dimensional (2D) case, where Gaussian profile parameters are estimated from images, applications worth mentioning include Gaussian beam characterization, particle tracking, and sensor calibration [28,37,15]. In the domain of image recovery, a particularly important application of Gaussian shape fitting is the modeling of Point Spread Functions (PSF) from raw data of optical systems (e.g., microscopes, telescopes). The success of image restoration strategies strongly depends on the accuracy of the PSF estimation [13]. This estimation is often performed through a preliminary step of image acquisition of normalized and calibrated objects, associated with a model fitting strategy. The PSF model is chosen as a trade-off between accuracy and simplicity. Gaussian models often lead to both tractable and good quality approximations [35,32,1,42,41].
Let $L^1(\mathbb{R}^Q)$ denote the space of real-valued summable functions defined on $\mathbb{R}^Q$. In this paper, we address the problem of fitting a Gaussian model to an observed function $y \in L^1(\mathbb{R}^Q)$. We assume that the observed function $y$ can be modeled as
$$(\forall \mathbf{u} \in \mathbb{R}^Q) \quad y(\mathbf{u}) = a + b\,p(\mathbf{u}) + v(\mathbf{u}), \qquad (1.1)$$
where $a \in \mathbb{R}$ is a background term, $b \in (0,+\infty)$ is a scaling parameter, $p \in L^1(\mathbb{R}^Q)$ represents a noiseless version of the observed field, and $v$ is a function accounting for acquisition errors. The main assumption is that $p$ is close, in a sense to be made precise, to the probability density function $\mathbf{u} \mapsto g(\mathbf{u},\boldsymbol{\mu},\mathbf{C})$ of a $Q$-dimensional normal distribution with mean $\boldsymbol{\mu} \in \mathbb{R}^Q$ and precision (i.e., inverse covariance) matrix $\mathbf{C} \in \mathcal{S}_Q^{++}$. (Throughout the paper, $\mathcal{S}_Q^{++}$ will denote the set of symmetric positive definite matrices of $\mathbb{R}^{Q\times Q}$, $\mathcal{S}_Q^{+}$ the set of symmetric positive semidefinite matrices of $\mathbb{R}^{Q\times Q}$, and $\mathcal{S}_Q$ the set of symmetric matrices of $\mathbb{R}^{Q\times Q}$.) This distribution is expressed as
$$(\forall \mathbf{u} \in \mathbb{R}^Q)(\forall \boldsymbol{\mu} \in \mathbb{R}^Q)(\forall \mathbf{C} \in \mathcal{S}_Q^{++}) \quad g(\mathbf{u},\boldsymbol{\mu},\mathbf{C}) = \sqrt{\frac{|\mathbf{C}|}{(2\pi)^Q}}\,\exp\!\left(-\frac{1}{2}(\mathbf{u}-\boldsymbol{\mu})^\top \mathbf{C}\,(\mathbf{u}-\boldsymbol{\mu})\right), \qquad (1.2)$$
where $|\mathbf{C}|$ denotes the determinant of matrix $\mathbf{C}$. The fitting problem thus consists of finding an estimate $(\widehat{a},\widehat{b},\widehat{p},\widehat{\boldsymbol{\mu}},\widehat{\mathbf{C}})$ of $(a,b,p,\boldsymbol{\mu},\mathbf{C})$ in accordance with model (1.1).
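For concreteness, the density (1.2) can be evaluated directly from the precision matrix, for instance as in the short NumPy sketch below (our illustration; the function name and the example values are not from the paper):

```python
import numpy as np

def gaussian_density(u, mu, C):
    """Evaluate g(u, mu, C) of (1.2), with C the precision (inverse covariance) matrix."""
    u, mu = np.asarray(u, float), np.asarray(mu, float)
    Q = mu.size
    sign, logdet = np.linalg.slogdet(C)          # log|C| in a numerically stable way
    assert sign > 0, "C must be positive definite"
    quad = (u - mu) @ C @ (u - mu)               # (u - mu)^T C (u - mu)
    return np.exp(0.5 * (logdet - Q * np.log(2 * np.pi)) - 0.5 * quad)

# Example: a 3D Gaussian with anisotropic precision
mu = np.array([0.0, 0.0, 0.0])
C = np.diag([4.0, 4.0, 1.0])                     # precision = inverse covariance
print(gaussian_density(np.array([0.1, -0.2, 0.3]), mu, C))
```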
Because of its prominent importance in applications, there has been a significant amount of work on this subject [12,25,24,23,34,42]. To the best of our knowledge, all existing works consider that $p = g(\cdot,\boldsymbol{\mu},\mathbf{C})$ and they focus on fitting the parameters $(\widehat{a},\widehat{b},\widehat{\boldsymbol{\mu}},\widehat{\mathbf{C}})$ from $y$. Two main classes of methods can be distinguished. The first set of approaches [25,24,34] is based on the search for the best fitting parameters minimizing a least-squares cost between the observations and the sought model. The minimization process is based on the well-known Levenberg-Marquardt alternating minimization strategy. However, it is worth mentioning that few established convergence guarantees are available for this method, which may be detrimental to its reliable use in practice. The second class of methods uses the so-called Caruana's formulation [12]. The idea here is to assume that the background term $a$ is zero and to search for $(\widehat{b},\widehat{\boldsymbol{\mu}},\widehat{\mathbf{C}})$ which minimize the difference of logarithms between the data and the model [23,1]. The advantage of such a strategy is that it gives rise to a convex formulation, for which efficient and reliable optimization techniques can be applied. It is however worth emphasizing that all the aforementioned works focus on the resolution of the fitting problem in low dimensions, that is when $Q = 1$ [12,25,23,34] or $Q = 2$ [24,1,42]. Moreover, except in [34] where a polynomial background is accounted for, the background term $a$ is considered to be zero. These assumptions, however, usually do not correspond to constraints inherent to an experimental setup or environment.
The aim of this paper is to propose a new multivariate Gaussian fitting strategy which avoids the aforementioned limitations. Our method relies on the minimization of a hybrid cost function combining a least-squares data fidelity term, a Kullback-Leibler divergence regularizer for improved robustness, and range constraints on the parameters. This original variational formulation results in a nonconvex minimization problem for which we propose a theoretically sound and efficient proximal alternating iterative resolution scheme. When applied to the analysis of 3D raw data acquired with a two-photon fluorescence microscope, our new computational strategy shows an unprecedented accuracy and reliability.

In Section 2, the data fitting problem is formulated in a variational manner. A proximal alternating optimization method called FIGARO is then proposed in Section 3 for finding a minimizer of the proposed nonconvex cost function. The implementation of the algorithm steps is discussed. The convergence of the sequence of iterates resulting from FIGARO is established in Section 4. Section 5 illustrates the high robustness of our approach to a model mismatch, when compared to a standard nonlinear least squares fitting strategy on 3D synthetic data. In Section 6, the scope of our approach is demonstrated through the analysis of the Point Spread Function of a 3D two-photon fluorescence microscope. Finally, Section 7 concludes the paper.
2 Proposed Variational Formulation
The key ingredient of our method relies on measuring the closeness of $p$ to the Gaussian probability density functions by using the Kullback-Leibler (KL) divergence [5]. Let us first recall the definition of the KL divergence. Let $\mathcal{P}$ denote the set of probability density functions supported on $\mathbb{R}^Q$:
$$\mathcal{P} = \left\{ q \in L^1(\mathbb{R}^Q) \ \middle|\ (\forall \mathbf{u} \in \mathbb{R}^Q)\ q(\mathbf{u}) \geq 0,\ \int_{\mathbb{R}^Q} q(\mathbf{u})\,\mathrm{d}\mathbf{u} = 1 \right\}. \qquad (2.1)$$
Suppose that $(p,q) \in \mathcal{P}^2$ and that $q$ takes (strictly) positive values. The KL divergence from $q$ to $p$ then reads
$$\mathrm{KL}(p \,\|\, q) = \int_{\mathbb{R}^Q} p(\mathbf{u}) \log\frac{p(\mathbf{u})}{q(\mathbf{u})}\,\mathrm{d}\mathbf{u}, \qquad (2.2)$$
with the convention $0\log 0 = 0$.
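As a quick numerical illustration (ours, not part of the paper), the divergence (2.2) can be approximated by a Riemann sum on a fine grid, which is also the form it takes after the discretization carried out later in this section:

```python
import numpy as np

def kl_divergence(p, q, delta):
    """Riemann-sum approximation of KL(p || q) from samples of two densities
    on a regular grid with cell volume `delta` (convention 0 log 0 = 0)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0                                   # terms with p = 0 contribute 0
    return delta * np.sum(p[mask] * np.log(p[mask] / q[mask]))

# 1D example: KL between two Gaussian densities sampled on [-10, 10]
u = np.linspace(-10.0, 10.0, 20001)
delta = u[1] - u[0]
p = np.exp(-0.5 * u**2) / np.sqrt(2 * np.pi)                 # N(0, 1)
q = np.exp(-0.5 * (u - 1.0)**2 / 4.0) / np.sqrt(8 * np.pi)   # N(1, 4)
print(kl_divergence(p, q, delta))   # close to the closed-form value 0.4431...
```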
In order to avoid singularity issues, we will assume that the Gaussian variances in each direction are bounded above by some maximal values. The spectrum of the precision matrix $\mathbf{C}$ is thus bounded from below, in the sense that there exists some $\varepsilon > 0$ such that $\mathbf{C} = \mathbf{D} + \varepsilon \mathbf{I}_Q$, where $\mathbf{D}$ belongs to $\mathcal{S}_Q^{+}$ and $\mathbf{I}_Q \in \mathbb{R}^{Q\times Q}$ denotes the identity matrix of $\mathbb{R}^Q$.
We then propose to define $(\widehat{a},\widehat{b},\widehat{p},\widehat{\boldsymbol{\mu}},\widehat{\mathbf{D}})$ as a minimizer of a hybrid cost function, gathering information regarding the observation model (1.1) and the Gaussian shape prior (1.2). The minimization problem reads
$$\underset{\substack{a \in A,\ b \in B,\\ \boldsymbol{\mu} \in \mathbb{R}^Q,\ p \in \mathcal{P},\ \mathbf{D} \in \mathcal{S}_Q^{+}}}{\operatorname{minimize}} \ \frac{1}{2}\int_{\mathbb{R}^Q} \big(y(\mathbf{u}) - a - b\,p(\mathbf{u})\big)^2\,\mathrm{d}\mathbf{u} + \lambda\,\mathrm{KL}\big(p \,\|\, g(\cdot,\boldsymbol{\mu},\mathbf{D} + \varepsilon\mathbf{I}_Q)\big). \qquad (2.3)$$
Hereabove, $A$ and $B$ are some nonempty closed bounded real intervals corresponding to known bounds on $a$ and $b$, respectively, and $\lambda > 0$ is a regularization parameter weighting the KL penalty term favoring the proximity between $p$ and the Gaussian model (1.2) parametrized by $(\boldsymbol{\mu},\mathbf{D})$.
In practice, however, one generally has access only to a sampling of $y$, which is performed on a bounded Borel set $\Omega$ of $\mathbb{R}^Q$. The set $\Omega$ is supposed here to be chosen large enough so that it captures most of the probability mass of the sought Gaussian distribution. More precisely, we will assume that $\Omega$ is paved into $N \in \mathbb{N}$ voxels of volume $\Delta \in (0,+\infty)$ and mass centers $(\mathbf{x}_n)_{1\leq n\leq N}$. The available vector of observations is then $\mathbf{y} = (y_n)_{1\leq n\leq N}$ where, for every $n \in \{1,\ldots,N\}$, $y_n = y(\mathbf{x}_n)$. After this discretization, by assuming that $y$ and $p$ are continuous functions in (2.3) and that $\Delta$ is small enough, the following more tractable optimization problem is substituted for the original variational formulation:
$$\underset{\substack{a \in A,\ b \in B,\\ \boldsymbol{\mu} \in \mathbb{R}^Q,\ \mathbf{p} \in \mathcal{P}_d,\ \mathbf{D} \in \mathcal{S}_Q^{+}}}{\operatorname{minimize}} \ \frac{1}{2}\,\|\mathbf{y} - a\mathbf{1}_N - b\mathbf{p}\|^2 + \lambda \sum_{n=1}^{N} p_n \log\frac{p_n}{g(\mathbf{x}_n,\boldsymbol{\mu},\mathbf{D} + \varepsilon\mathbf{I}_Q)}, \qquad (2.4)$$
where $\|\cdot\|$ denotes the standard Euclidean norm. The probability density function $p$ has been replaced by the vector $\mathbf{p} = (p_n)_{1\leq n\leq N}$, which belongs to $\mathcal{P}_d = [0,+\infty)^N \cap \mathcal{C}$, where $\mathcal{C}$ is the affine hyperplane
$$\mathcal{C} = \left\{ \mathbf{p} \in \mathbb{R}^N \ \middle|\ \sum_{n=1}^{N} p_n = 1 \right\}. \qquad (2.5)$$
The discrete KL term in (2.4) can be rewritten as
$$\sum_{n=1}^{N} p_n \log\frac{p_n}{g(\mathbf{x}_n,\boldsymbol{\mu},\mathbf{D} + \varepsilon\mathbf{I}_Q)} = \sum_{n=1}^{N} \left[ \mathrm{ent}(p_n) + p_n\Big( \frac{Q}{2}\log(2\pi) - \frac{1}{2}\log(|\mathbf{D} + \varepsilon\mathbf{I}_Q|) + \frac{1}{2}(\mathbf{x}_n - \boldsymbol{\mu})^\top (\mathbf{D} + \varepsilon\mathbf{I}_Q)(\mathbf{x}_n - \boldsymbol{\mu}) \Big) \right], \qquad (2.6)$$
where
$$(\forall \upsilon \in \mathbb{R}) \quad \mathrm{ent}(\upsilon) = \begin{cases} \upsilon \log \upsilon, & \upsilon > 0,\\ 0, & \upsilon = 0,\\ +\infty, & \text{otherwise.} \end{cases} \qquad (2.7)$$
Note that the above definition of the function $\mathrm{ent}$ allows us to impose directly the nonnegativity of the components of $\mathbf{p}$.
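To fix ideas, the right-hand side of (2.6) together with the entropy kernel (2.7) can be transcribed as follows (a sketch under our own naming conventions; `x` holds the voxel centres row-wise):

```python
import numpy as np

def ent(v):
    """Entropy kernel (2.7): v*log(v) for v > 0, 0 at v = 0, +inf for v < 0."""
    v = np.asarray(v, float)
    out = np.where(v > 0, v * np.log(np.where(v > 0, v, 1.0)), 0.0)
    return np.where(v < 0, np.inf, out)

def discrete_kl(p, x, mu, D, eps):
    """Right-hand side of (2.6): discrete KL between the weights p and the
    Gaussian model evaluated at the voxel centres x (shape (N, Q))."""
    p, x, mu, D = (np.asarray(t, float) for t in (p, x, mu, D))
    Q = mu.size
    C = D + eps * np.eye(Q)
    _, logdet = np.linalg.slogdet(C)
    diff = x - mu                                   # (N, Q)
    quad = np.einsum('nq,qr,nr->n', diff, C, diff)  # (x_n - mu)^T C (x_n - mu)
    return np.sum(ent(p) + p * (0.5 * Q * np.log(2 * np.pi) - 0.5 * logdet + 0.5 * quad))
```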
For technical reasons which will appear later, we will also need to perform a twice continuously differentiable extension of the function $\mathbf{D} \mapsto -\log(|\mathbf{D} + \varepsilon\mathbf{I}_Q|)$ on the whole domain $\mathcal{S}_Q$. This extension $\varphi$ is defined as follows. For every $\mathbf{D} \in \mathcal{S}_Q$ decomposed as $\mathbf{U}\,\mathrm{Diag}(\boldsymbol{\sigma})\,\mathbf{U}^\top$, with $\mathbf{U} \in \mathbb{R}^{Q\times Q}$ an orthogonal matrix and $\boldsymbol{\sigma} = (\sigma_q)_{1\leq q\leq Q}$ the associated vector of eigenvalues of $\mathbf{D}$,
$$\varphi(\mathbf{D}) = \widetilde{\varphi}(\boldsymbol{\sigma}) = \begin{cases} -\log(|\mathbf{D} + \varepsilon\mathbf{I}_Q|) = -\displaystyle\sum_{q=1}^{Q} \log(\sigma_q + \varepsilon), & \text{if } \mathbf{D} \in \mathcal{S}_Q^{+},\\[2mm] \widetilde{\varphi}(\mathbf{0}_Q) + \boldsymbol{\sigma}^\top \nabla\widetilde{\varphi}(\mathbf{0}_Q) + \dfrac{1}{2}\,\boldsymbol{\sigma}^\top \nabla^2\widetilde{\varphi}(\mathbf{0}_Q)\,\boldsymbol{\sigma}, & \text{otherwise,} \end{cases} \qquad (2.8)$$
where $\mathbf{0}_Q$ is the $Q$-dimensional null vector, $\mathbf{1}_Q$ the $Q$-dimensional vector of all ones, and
$$\widetilde{\varphi}(\mathbf{0}_Q) = -Q\log\varepsilon, \qquad \nabla\widetilde{\varphi}(\mathbf{0}_Q) = -\varepsilon^{-1}\mathbf{1}_Q, \qquad \nabla^2\widetilde{\varphi}(\mathbf{0}_Q) = \varepsilon^{-2}\mathbf{I}_Q. \qquad (2.9)$$
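A direct transcription of (2.8)-(2.9) (again a sketch of ours, not the authors' code) diagonalizes $\mathbf{D}$ and switches between the exact log-determinant branch and its second-order expansion around the zero matrix:

```python
import numpy as np

def phi(D, eps):
    """Smooth extension (2.8) of D -> -log|D + eps*I_Q| to all symmetric matrices D."""
    D = np.asarray(D, float)
    Q = D.shape[0]
    sigma, _ = np.linalg.eigh(D)                  # eigenvalues of the symmetric matrix D
    if np.all(sigma >= 0):                        # D in S_Q^+ : exact branch
        return -np.sum(np.log(sigma + eps))
    # otherwise: second-order Taylor expansion of -sum(log(sigma_q + eps)) at 0, cf. (2.9)
    return (-Q * np.log(eps)
            - np.sum(sigma) / eps
            + 0.5 * np.sum(sigma**2) / eps**2)
```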
Let us denote by $\iota_S$ the indicator function of a set $S$, which is equal to $0$ on this set and $+\infty$ otherwise. We are now ready to define the cost function which is minimized in our Gaussian fitting approach:
$$(\forall a \in \mathbb{R})(\forall b \in \mathbb{R})(\forall \mathbf{p} \in \mathbb{R}^N)(\forall \boldsymbol{\mu} \in \mathbb{R}^Q)(\forall \mathbf{D} \in \mathcal{S}_Q)$$
$$F(a,b,\mathbf{p},\boldsymbol{\mu},\mathbf{D}) = \frac{1}{2}\,\|\mathbf{y} - a\mathbf{1}_N - b\mathbf{p}\|^2 + \iota_A(a) + \iota_B(b) + \lambda\,\Psi(\mathbf{p},\boldsymbol{\mu},\mathbf{D}), \qquad (2.10)$$
where
$$(\forall \mathbf{p} \in \mathbb{R}^N)(\forall \boldsymbol{\mu} \in \mathbb{R}^Q)(\forall \mathbf{D} \in \mathcal{S}_Q)$$
$$\Psi(\mathbf{p},\boldsymbol{\mu},\mathbf{D}) = \sum_{n=1}^{N} \left[ \mathrm{ent}(p_n) + \frac{p_n}{2}\Big( Q\log(2\pi) + \varphi(\mathbf{D}) + (\mathbf{x}_n - \boldsymbol{\mu})^\top (\mathbf{D} + \varepsilon\mathbf{I}_Q)(\mathbf{x}_n - \boldsymbol{\mu}) \Big) \right] + \iota_{\mathcal{C}}(\mathbf{p}) + \iota_{\mathcal{S}_Q^{+}}(\mathbf{D}). \qquad (2.11)$$
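Gathering the pieces, the criterion (2.10)-(2.11) can be evaluated as below. This is only a reference sketch for checking implementations, with our own argument names; it exploits the fact that, on the feasible set, $\Psi$ reduces to the discrete KL term (2.6):

```python
import numpy as np

def objective_F(a, b, p, mu, D, y, x, eps, lam, A, B):
    """Cost (2.10): 0.5*||y - a*1 - b*p||^2 + indicator terms + lambda * Psi of (2.11).
    A and B are (lower, upper) bounds on a and b; x holds the voxel centres, shape (N, Q)."""
    p, y, x, mu, D = (np.asarray(t, float) for t in (p, y, x, mu, D))
    Q = mu.size
    # indicator terms: +inf outside the constraint sets
    if not (A[0] <= a <= A[1] and B[0] <= b <= B[1]):
        return np.inf
    if np.any(p < 0) or not np.isclose(p.sum(), 1.0):
        return np.inf
    if np.any(np.linalg.eigvalsh(D) < 0):
        return np.inf
    fidelity = 0.5 * np.sum((y - a - b * p) ** 2)
    # on the feasible set, Psi coincides with the discrete KL term (2.6)
    C = D + eps * np.eye(Q)
    _, logdet = np.linalg.slogdet(C)
    diff = x - mu
    quad = np.einsum('nq,qr,nr->n', diff, C, diff)
    ent = np.where(p > 0, p * np.log(np.where(p > 0, p, 1.0)), 0.0)
    psi = np.sum(ent + 0.5 * p * (Q * np.log(2 * np.pi) - logdet + quad))
    return fidelity + lam * psi
```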
Remark 1 The proposed formulation deals with a regular grid, but it can be easily extended to the case of irregular sampling by changing the definition of $\mathcal{C}$ into
$$\mathcal{C} = \left\{ \mathbf{p} \in \mathbb{R}^N \ \middle|\ \sum_{n=1}^{N} \Delta_n\,p_n = 1 \right\}, \qquad (2.12)$$
where, for every $n \in \{1,\ldots,N\}$, $\Delta_n \in (0,+\infty)$ is the volume of the $n$-th voxel.
3 FIGARO Minimization Algorithm
3.1 Proposed Algorithm
The objective function (2.10) is nonconvex, yet convex with respect to each variable. A standard resolution approach is thus to adopt an alternating minimization strategy where, at each iteration, $F$ is minimized with respect to one variable while the others remain fixed. This approach, sometimes referred to as Block Coordinate Descent or the nonlinear Gauss-Seidel method, has been widely used in the context of PSF model fitting [42,30,32]. However, its convergence is only guaranteed under restrictive assumptions [38]. In order to get sounder convergence results, we propose to use an alternative strategy based on proximal tools, which consists of replacing, at each iteration, the direct minimization step by a proximal one ([33, Def. 1.22], [6, Def. 12.23], [18, Def. 10.1], [11]).
Definition 1 (Domain) Let $f$ be a function from $\mathbb{R}^n$ to $(-\infty,+\infty]$. The domain of $f$ is defined by
$$\operatorname{dom} f := \{x \in \mathbb{R}^n : f(x) < +\infty\}.$$
The function $f$ is proper if and only if $\operatorname{dom} f$ is nonempty.

Definition 2 (Proximity operator) Let $f \colon \mathbb{R}^n \to (-\infty,+\infty]$ be a convex, proper, lower semi-continuous function. The proximity operator of $f$ at $x \in \mathbb{R}^n$ is defined as
$$\operatorname{prox}_f(x) = \underset{y \in \mathbb{R}^n}{\operatorname{argmin}}\ f(y) + \frac{1}{2}\,\|y - x\|^2.$$
Let $S$ be a nonempty closed convex subset of $\mathbb{R}^n$. Then $\operatorname{prox}_{\iota_S}$ is equal to the projection $P_S$ onto $S$.
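Two elementary special cases, which are all that Section 3.2 needs, may help the reader: the prox of the indicator of a box is a componentwise clipping, and the prox of a scaled squared distance is a convex combination (our minimal illustration, not from the paper):

```python
import numpy as np

# For the indicator of a box S = [l, u]^n, Definition 2 gives
# prox_{iota_S} = P_S, i.e. componentwise clipping:
def project_box(x, lower, upper):
    return np.clip(x, lower, upper)

# For the quadratic f(x) = (c/2) * ||x - z||^2, minimizing f(y) + 0.5*||y - x||^2
# yields prox_f(x) = (x + c*z) / (1 + c), the prototype of formulas (3.1)-(3.2) below.
def prox_scaled_square(x, z, c):
    return (x + c * z) / (1.0 + c)
```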
The application of the proximal alternating method [4,2,8] to the minimization of (2.10) yields Algorithm 1, called FIGARO (Fitting Gaussians with Proximal Optimization).

Algorithm 1 FIGARO method
Initialization: $a^{(0)} \in A$, $b^{(0)} \in B$, $\mathbf{p}^{(0)} \in \mathcal{C}$, $\boldsymbol{\mu}^{(0)} \in \mathbb{R}^Q$, $\mathbf{D}^{(0)} \in \mathcal{S}_Q^{+}$, $(\gamma_a,\gamma_b,\gamma_p,\gamma_\mu,\gamma_D) \in (0,+\infty)^5$.
for $i = 0,1,2,\ldots$ do
    $a^{(i+1)} = \operatorname{prox}_{\gamma_a F(\cdot,\,b^{(i)},\,\mathbf{p}^{(i)},\,\boldsymbol{\mu}^{(i)},\,\mathbf{D}^{(i)})}\big(a^{(i)}\big)$
    $b^{(i+1)} = \operatorname{prox}_{\gamma_b F(a^{(i+1)},\,\cdot,\,\mathbf{p}^{(i)},\,\boldsymbol{\mu}^{(i)},\,\mathbf{D}^{(i)})}\big(b^{(i)}\big)$
    $\mathbf{p}^{(i+1)} = \operatorname{prox}_{\gamma_p F(a^{(i+1)},\,b^{(i+1)},\,\cdot,\,\boldsymbol{\mu}^{(i)},\,\mathbf{D}^{(i)})}\big(\mathbf{p}^{(i)}\big)$
    $\boldsymbol{\mu}^{(i+1)} = \operatorname{prox}_{\gamma_\mu F(a^{(i+1)},\,b^{(i+1)},\,\mathbf{p}^{(i+1)},\,\cdot,\,\mathbf{D}^{(i)})}\big(\boldsymbol{\mu}^{(i)}\big)$
    $\mathbf{D}^{(i+1)} = \operatorname{prox}_{\gamma_D F(a^{(i+1)},\,b^{(i+1)},\,\mathbf{p}^{(i+1)},\,\boldsymbol{\mu}^{(i+1)},\,\cdot)}\big(\mathbf{D}^{(i)}\big)$
end for
Remark that other methods such as those proposed in [40,17,10] are also applicable to our problem, but the considered alternating proximal point algorithm may appear preferable because of its simplicity.
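Structurally, one FIGARO run is nothing more than a cyclic application of five proximity operators. The sketch below (ours) makes this explicit; the per-block update rules are supplied as callables implementing the closed-form expressions of Section 3.2:

```python
def figaro(init, prox_step, n_iter=200):
    """Skeleton of Algorithm 1 (FIGARO): at each iteration, every block of variables
    is updated by a proximal step on F, the other blocks being kept fixed.
    `prox_step` maps a block name to a callable implementing its update (Section 3.2)."""
    state = dict(init)          # e.g. {'a': a0, 'b': b0, 'p': p0, 'mu': mu0, 'D': D0}
    for _ in range(n_iter):
        for block in ('a', 'b', 'p', 'mu', 'D'):
            state[block] = prox_step[block](state)   # proximal update of one block
    return state
```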
3.2 Expressions of the Proximity Operators
In this part, we show that the proximity operators required
in Algorithm 1 have closed form expressions.
Proposition 1 Let $(a,b,\mathbf{p},\boldsymbol{\mu},\mathbf{D}) \in \mathbb{R}\times\mathbb{R}\times\mathbb{R}^N\times\mathbb{R}^Q\times\mathcal{S}_Q$ and $(\gamma_a,\gamma_b) \in (0,+\infty)^2$. The proximity operator of $\gamma_a F(\cdot,b,\mathbf{p},\boldsymbol{\mu},\mathbf{D})$ at $a$ is given by
$$\operatorname{prox}_{\gamma_a F(\cdot,b,\mathbf{p},\boldsymbol{\mu},\mathbf{D})}(a) = P_A\!\left( \frac{a + \gamma_a\,\mathbf{1}_N^\top(\mathbf{y} - b\mathbf{p})}{1 + \gamma_a N} \right) \qquad (3.1)$$
and the proximity operator of $\gamma_b F(a,\cdot,\mathbf{p},\boldsymbol{\mu},\mathbf{D})$ at $b$ is given by
$$\operatorname{prox}_{\gamma_b F(a,\cdot,\mathbf{p},\boldsymbol{\mu},\mathbf{D})}(b) = P_B\!\left( \frac{b + \gamma_b\,(\mathbf{y} - a\mathbf{1}_N)^\top\mathbf{p}}{1 + \gamma_b\,\|\mathbf{p}\|^2} \right). \qquad (3.2)$$

Proof Calculating the proximity operator of $\gamma_a F(\cdot,b,\mathbf{p},\boldsymbol{\mu},\mathbf{D})$ is equivalent to calculating the proximity operator of the one-variable function $\vartheta + \iota_A$, where
$$(\forall a \in \mathbb{R}) \quad \vartheta(a) = \frac{\gamma_a}{2} \sum_{n=1}^{N} (y_n - a - b\,p_n)^2. \qquad (3.3)$$
It follows from [14] that
$$\operatorname{prox}_{\gamma_a F(\cdot,b,\mathbf{p},\boldsymbol{\mu},\mathbf{D})} = P_A \circ \operatorname{prox}_\vartheta. \qquad (3.4)$$
On the other hand, it follows from [18] that
$$\operatorname{prox}_\vartheta(a) = \frac{a + \gamma_a\,\mathbf{1}_N^\top(\mathbf{y} - b\mathbf{p})}{1 + \gamma_a N}. \qquad (3.5)$$
Expression (3.2) is obtained by similar arguments.
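In code, the two updates of Proposition 1 amount to a scalar averaging step followed by a clipping onto the admissible interval; the following transcription of (3.1)-(3.2) is ours, with A and B given as (lower, upper) pairs:

```python
import numpy as np

def prox_a(a, b, p, y, gamma_a, A):
    """Formula (3.1): proximal update of the background a, projected onto A = [A[0], A[1]]."""
    a_new = (a + gamma_a * np.sum(y - b * p)) / (1.0 + gamma_a * y.size)
    return np.clip(a_new, A[0], A[1])

def prox_b(a, b, p, y, gamma_b, B):
    """Formula (3.2): proximal update of the scale b, projected onto B = [B[0], B[1]]."""
    b_new = (b + gamma_b * np.dot(y - a, p)) / (1.0 + gamma_b * np.dot(p, p))
    return np.clip(b_new, B[0], B[1])
```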
Proposition 2 Let $(a,b,\mathbf{p},\boldsymbol{\mu},\mathbf{D}) \in \mathbb{R}\times\mathbb{R}\times\mathbb{R}^N\times\mathbb{R}^Q\times\mathcal{S}_Q$ and $\gamma_p > 0$. The proximity operator of $\gamma_p F(a,b,\cdot,\boldsymbol{\mu},\mathbf{D})$ at $\mathbf{p}$ is given by
$$\operatorname{prox}_{\gamma_p F(a,b,\cdot,\boldsymbol{\mu},\mathbf{D})}(\mathbf{p}) = \Big( \rho^{-1}\,W\!\big(\rho \exp(w_n(\widehat{\nu}))\big) \Big)_{1\leq n\leq N}, \qquad (3.6)$$
where $W$ denotes the Lambert-W function [19],
$$\rho = \frac{\gamma_p b^2 + 1}{\gamma_p \lambda}, \qquad (3.7)$$
and, for every $n \in \{1,\ldots,N\}$, $w_n$ is the function defined as
$$(\forall \nu \in \mathbb{R}) \quad w_n(\nu) = -1 - c_n + (\gamma_p \lambda)^{-1}\big(p_n + \gamma_p b (y_n - a) - \nu\big), \qquad (3.8)$$
with
$$c_n = \frac{Q}{2}\log(2\pi) + \frac{1}{2}\,\varphi(\mathbf{D}) + \frac{1}{2}\,(\mathbf{x}_n - \boldsymbol{\mu})^\top (\mathbf{D} + \varepsilon\mathbf{I}_Q)(\mathbf{x}_n - \boldsymbol{\mu}). \qquad (3.9)$$
Moreover, $\widehat{\nu} \in \mathbb{R}$ is the unique zero of the function
$$(\forall \nu \in \mathbb{R}) \quad \Phi(\nu) = \rho^{-1} \sum_{n=1}^{N} W\!\big(\rho \exp(w_n(\nu))\big) - 1. \qquad (3.10)$$
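Formulas (3.6)-(3.10) translate into a one-dimensional root search on $\nu$ combined with the Lambert-W function. The sketch below is our reading of the proposition (the bracketing of the root and the argument layout are ad hoc); it relies on scipy.special.lambertw and scipy.optimize.brentq:

```python
import numpy as np
from scipy.special import lambertw
from scipy.optimize import brentq

def prox_p(p, a, b, y, c, gamma_p, lam):
    """Update of Proposition 2: prox of gamma_p * F(a, b, ., mu, D) at p.
    `c` is the vector (c_n) of (3.9), precomputed from (mu, D, eps)."""
    p, y, c = (np.asarray(t, float) for t in (p, y, c))
    rho = (gamma_p * b**2 + 1.0) / (gamma_p * lam)

    def q_of(nu):
        # componentwise solution (3.6) for a given multiplier nu, via (3.8)
        w = -1.0 - c + (p + gamma_p * b * (y - a) - nu) / (gamma_p * lam)
        w = np.clip(w, None, 700.0)                  # guard against overflow in exp
        return lambertw(rho * np.exp(w)).real / rho

    def Phi(nu):                                     # (3.10): decreasing in nu, unique zero
        return np.sum(q_of(nu)) - 1.0

    lo, hi = -1.0, 1.0                               # bracket the zero of Phi
    while Phi(lo) < 0:
        lo *= 2.0
    while Phi(hi) > 0:
        hi *= 2.0
    nu_hat = brentq(Phi, lo, hi)
    return q_of(nu_hat)
```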