
Quantitative evaluation of software packages for single-molecule localization microscopy

Daniel Sage (1), Hagai Kirshner (1), Thomas Pengo (2), Nico Stuurman (3,4), Junhong Min (5), Suliana Manley (6) & Michael Unser (1)

(1) Biomedical Imaging Group, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland. (2) Center for Genomic Regulation, Barcelona, Spain. (3) Howard Hughes Medical Institute, University of California, San Francisco (UCSF), San Francisco, California, USA. (4) Department of Cellular and Molecular Pharmacology, UCSF, San Francisco, California, USA. (5) Department of Bio and Brain Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea. (6) Laboratory of Experimental Biophysics, EPFL, Lausanne, Switzerland. Correspondence should be addressed to D.S. (daniel.sage@epfl.ch).

Nature Methods, vol. 12, no. 8, August 2015. Received 22 August 2014; accepted 17 April 2015; published online 15 June 2015; doi:10.1038/nmeth.3442.

TLDR
This work focuses on the computational aspects of super-resolution microscopy and presents a comprehensive evaluation of localization software packages, reflecting the various tradeoffs of SMLM software packages and helping users to choose the software that fits their needs.
Abstract
The quality of super-resolution images obtained by single-molecule localization microscopy (SMLM) depends largely on the software used to detect and accurately localize point sources. In this work, we focus on the computational aspects of super-resolution microscopy and present a comprehensive evaluation of localization software packages. Our philosophy is to evaluate each package as a whole, thus maintaining the integrity of the software. We prepared synthetic data that represent three-dimensional structures modeled after biological components, taking excitation parameters, noise sources, point-spread functions and pixelation into account. We then asked developers to run their software on our data; most responded favorably, allowing us to present a broad picture of the methods available. We evaluated their results using quantitative and user-interpretable criteria: detection rate, accuracy, quality of image reconstruction, resolution, software usability and computational resources. These metrics reflect the various tradeoffs of SMLM software packages and help users to choose the software that fits their needs.


We have conducted a large-scale comparative study of software
packages developed in the context of SMLM, including recently
developed algorithms. We designed realistic data that are generic
and cover a broad range of experimental conditions and compared
the software packages using a multiple-criterion quantitative
assessment that is based on a known ground truth.
Our study is based on the active participation of developers of
SMLM software. More than 30 groups have participated so far,
and the study is still under way. We provide participants access to
our benchmark data as an ongoing public challenge. Participants
run their own software on our data and report their list of
localized particles for evaluation. The results of the challenge are
accessible online and updated regularly.
SMLM was demonstrated in 2006, independently by three
research groups [1–3], and has enabled subsequent breakthroughs
in diverse fields [4,5]. SMLM can resolve biological structures at
the nanometer scale (typically 20 nm lateral resolution),
circumventing Abbe's diffraction limit. At the cost of a relatively
simple setup [6,7], it has opened exciting new opportunities in
life science research [8,9].
The underlying principle of SMLM is the sequential imaging
of sparse subsets of fluorophores distributed over thousands of
frames, to populate a high-density map of fluorophore positions.
Such large data sets require automated image-analysis algorithms
to detect and precisely infer the position of individual fluorophores,
taking advantage of their separation in space and time.
The acquired data cannot be visualized directly; further com-
puterized image-reconstruction methods are required. These
typically comprise four steps: preprocessing, detection, locali-
zation and rendering. Preprocessing reduces the effects of the
background and noise; detection identifies potential molecule
candidates in each frame; localization performs a subpixel
refinement of the initial position estimates, usually by fitting
a point-spread function (PSF) model; and rendering turns the
detected molecule positions into a high-resolution map of mole-
cule densities. The performance of the overall processing pipeline
contributes to the quality of the super-resolved image [10].
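To make these four steps concrete, here is a minimal sketch of such a pipeline in Python using only numpy and scipy; the band-pass filter, the detection threshold, the window radius and the fixed PSF width are illustrative assumptions, not settings taken from any of the evaluated packages.

```python
# Minimal SMLM pipeline sketch: preprocess -> detect -> localize -> render.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter
from scipy.optimize import least_squares

def localize_frame(frame, psf_sigma=1.3, threshold=20.0, r=3):
    """Return subpixel (x, y) molecule positions found in one frame."""
    frame = np.asarray(frame, dtype=float)
    # 1. Preprocessing: difference-of-Gaussians band-pass suppresses
    #    smooth background and high-frequency noise.
    bandpass = gaussian_filter(frame, psf_sigma) - gaussian_filter(frame, 3 * psf_sigma)
    # 2. Detection: local maxima of the filtered image above a threshold.
    peaks = (bandpass == maximum_filter(bandpass, size=2 * r + 1)) & (bandpass > threshold)
    positions = []
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    for y0, x0 in np.argwhere(peaks):
        win = frame[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1]
        if win.shape != (2 * r + 1, 2 * r + 1):
            continue  # candidate too close to the border
        # 3. Localization: least-squares fit of a 2D Gaussian PSF model.
        def residuals(p, win=win):
            amp, dx, dy, bg = p
            model = bg + amp * np.exp(-((xx - dx) ** 2 + (yy - dy) ** 2)
                                      / (2 * psf_sigma ** 2))
            return (model - win).ravel()
        fit = least_squares(residuals, x0=[win.max() - win.min(), 0.0, 0.0, win.min()])
        positions.append((x0 + fit.x[1], y0 + fit.x[2]))
    return positions

def render(positions, shape, zoom=10):
    """4. Rendering: 2D histogram of localizations on a finer grid."""
    img = np.zeros((shape[0] * zoom, shape[1] * zoom))
    for x, y in positions:
        iy = int(np.clip(y * zoom, 0, img.shape[0] - 1))
        ix = int(np.clip(x * zoom, 0, img.shape[1] - 1))
        img[iy, ix] += 1
    return img
```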
The current literature describes more than 25 image-analysis
software packages that process SMLM data. Each has its own char-
acteristics, set of parameters, accessibility and terminology [10,11].
Moreover, these packages are often validated using different data.
In the absence of guidance, end users face a difficult choice in
deciding which software is most suitable for them. The lack of a
standardized methodology for conducting performance analysis
and the need for reference benchmark data constitute the gap that
we address in this work.
Our synthetic data imitate microtubule structures. The data
consist of thousands of images with labeling densities that
span well over an order of magnitude. The model of image
formation accounts for the stochastic nature of the emission
rate of the fluorophores, the characteristics of the optical setup,
and various sources of noise. As in real data, it also includes
inhomogeneous excitation, autofluorescence, and readout and
electron-multiplying noise from the detector, typically an
electron-multiplying charge-coupled device (EMCCD).
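As an illustration of this image-formation chain (expected photons to photoelectrons to amplified electrons to digital numbers), the sketch below simulates one EMCCD frame; the quantum efficiency, EM gain, readout noise and conversion factors are placeholder assumptions, not the exact settings behind our datasets.

```python
# EMCCD forward-model sketch; all camera parameters are assumptions.
import numpy as np

rng = np.random.default_rng(seed=0)

def emccd_frame(expected_photons, qe=0.9, em_gain=300.0,
                readout_sigma=74.0, e_per_dn=45.0, baseline=100):
    """Simulate one camera frame from an expected-photon image."""
    # Quantum efficiency + shot noise: Poisson photoelectrons.
    electrons = rng.poisson(qe * np.asarray(expected_photons, dtype=float))
    # Stochastic electron multiplication: a gamma distribution with
    # shape = electron count approximates the EM register.
    amplified = np.where(electrons > 0,
                         rng.gamma(np.maximum(electrons, 1), em_gain), 0.0)
    # Gaussian readout noise (in electrons), then A/D conversion:
    # electrons-per-DN scaling, baseline offset and quantization.
    signal = amplified + rng.normal(0.0, readout_sigma, electrons.shape)
    return np.clip(signal / e_per_dn + baseline, 0, 65535).astype(np.uint16)
```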
Our benchmark criteria were designed to objectively measure
computational performance in terms of time and quality. Our
evaluation effort is more comprehensive than previous work [12]
in benchmarking a large number of software packages, in
synthesizing data closer to biological reality, and in including
a rich set of evaluation criteria such as detection rate, accuracy,
image quality, resolution and software usability.

A byproduct of our work is an extensive and annotated list
of software packages (http://bigwww.epfl.ch/smlm/software/),
which should prove a resource not only to practitioners but also
to developers, because it helps identify which aspects of existing
software may be in need of further development.
RESULTS
Bio-inspired data
We designed our synthetic data to be as similar as possible to
images derived from real cellular structures. A key element is their
continuous-domain description, as opposed to a spatially discrete
model. For instance, we simulate microtubules by means of three-
dimensional (3D) paths that are defined on the continuum (Fig. 1a),
making it possible to render digital images at any scale. We typically
choose a scale of 5 nm per pixel. Our synthetic model takes many
parameters into account, among them sample thickness, random
activation, laser power, variability of the excitation laser, the
lifetime of the fluorophores, autofluorescence, several sources
of noise, pixelation, analog-to-digital conversion and the PSF of
the microscope (Fig. 1b). Our PSF model is made up either of
classical Gaussian-based 3D functions or of the more realistic
Gibson-Lanni formulation, which benefits from a fast and accurate
implementation [13]. Because multiple-frame events are rare in
the data of interest, we tuned the lifetime model to favor single-
frame events. We rely primarily on these ground-truth data for
our objective evaluation of algorithms.

To accommodate the intended uses of the available software, we
chose to image the same synthetic sample using different imaging
modes: long sequence (LS) and high density (HD). The LS data are
low-density sequences of about 10,000 frames each, and the HD
data are high-density sequences of about 500 frames that include
overlapping PSFs (Fig. 1c). Independently of the imaging mode,
we changed the degree of difficulty of the data by modifying the
contribution of autofluorescence, the amount of acquisition noise
and the thickness of the sample (see Online Methods) to create
datasets LS1-3 and HD1-3 (in order of increasing difficulty).

Figure 1 | Construction of the bio-inspired data. (a) Top, 3D
structure simulating biological microtubules (FOV 6,400 × 6,400 nm).
Every single fluorophore event is uniquely identified and stored;
collectively, they constitute the ground-truth localizations, which
can be rendered at any temporal and spatial scale (lower panels:
renderings at 10 nm/pixel and 0.25 nm/pixel). (b) Each fluorophore
is considered a point source and convolved with a 3D PSF (e.g.,
5 nm/voxel). Combined with background and autofluorescence of the
structure, the convolved image determines the number of photons at
each pixel. These photons are then transformed into a number of
electrons based on quantum efficiency (QE), shot noise and the
EMCCD parameters. The image is reduced to the desired camera
resolution, for example, 100 nm/pixel. Finally, these values are
fed to an electron-to-DN converter (digital number), taking into
account the readout noise and the quantization level. (c) These
operations are repeated to obtain a long sequence (LS) of
low-density frames (e.g., 4 active fluorophores per frame) or a
short sequence of high-density (HD) frames (e.g., 450 active
fluorophores per frame).
We generated training data and disclosed them together with
the true locations of the fluorophores, allowing participants to
tune their software. We also generated contest data and delivered
them without ground-truth information. We assessed every
algorithm on the basis of the contest data. We make these data
available at http://bigwww.epfl.ch/smlm/datasets/; the collection
is already used by developers [14–19].
Quantitative assessment metrics
The core task faced by participants in our study is the 2D
localization of single molecules. To rate the performance of their
software, we defined multiple criteria (Online Methods) that
highlight different aspects of SMLM algorithms: detection rate,
accuracy, image quality, resolution, usability (USA) and execution
runtime (TIME). Other preprocessing or postprocessing steps, such
as drift correction and rendering, were excluded from our analysis
to provide an unbiased comparison based primarily on localization
performance.
Detection rate and localization accuracy
The detection rate and localization accuracy are based on the
pairing between the molecules localized by the participants and
the molecules from the ground truth. These criteria do not depend
on any rendering mechanism.

The detection rate quantifies the framewise fidelity and the
completeness of the set of localizations with respect to the
ground truth, measured in our case by a Jaccard index. We found
that the detection rate (JAC) correlates with the level of
difficulty.

The localization accuracy (ACC) is measured by the root-mean-
square error (RMSE) of matched localizations. We found that this
averaged 21.05 nm and 32.13 nm for LS1 and LS2, respectively.
This is consistent with the Cramér-Rao lower bound predicted
according to the definition of uncertainty given by Rieger et al.
[20]. The detection rate and localization accuracy of each
software package are documented in Figure 2 and in Supplementary
Figures 1 and 2.
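These two criteria can be reproduced with a few lines of code. The sketch below uses a greedy nearest-neighbor pairing within a fixed tolerance radius; it is a simplification of the matching rules specified in the Online Methods, and the 50-nm tolerance is an assumption.

```python
# Simplified JAC/ACC scoring sketch (greedy pairing, assumed tolerance).
import numpy as np
from scipy.spatial import cKDTree

def score_frame(found, truth, tol=50.0):
    """found, truth: (N, 2) xy positions in nm. Returns (jaccard, rmse)."""
    tree = cKDTree(truth)
    matched, errors = set(), []
    for f in np.asarray(found):
        dist, idx = tree.query(f, distance_upper_bound=tol)
        if np.isfinite(dist) and idx not in matched:
            matched.add(idx)        # true positive: pair found -> truth
            errors.append(dist)
    tp = len(matched)
    fp = len(found) - tp            # localizations with no ground-truth partner
    fn = len(truth) - tp            # ground-truth molecules never detected
    jaccard = tp / (tp + fp + fn)   # detection rate, JAC
    rmse = float(np.sqrt(np.mean(np.square(errors)))) if errors else float("inf")
    return jaccard, rmse            # rmse is the accuracy, ACC
```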
Image quality and image resolution
Ultimately, the data representation favored by SMLM practitioners
is not a list of localizations but a rendered image [10]
(Supplementary Data 1 and Supplementary Videos 1–6).

We used two image-based criteria in our assessment: image quality
(signal-to-noise ratio, SNR) and image resolution (Fourier ring
correlation, FRC [21]). Methods afflicted by issues such as
sampling artifacts or a low detection capacity at the image border
are characterized by a low SNR. Conversely, a high SNR is often
indicative of a successful tradeoff between detection rate and
accuracy.
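As a sketch, the FRC can be computed by splitting the localizations into two independent halves, rendering each half, and correlating the two renderings ring by ring in Fourier space; the conventional 1/7 threshold then yields a resolution estimate. The rendering pixel size below is an assumed parameter.

```python
# Fourier ring correlation sketch for two same-size, square renderings.
import numpy as np

def frc_resolution(img1, img2, pixel_nm=10.0):
    """Return an FRC resolution estimate in nm (1/7 criterion)."""
    n = img1.shape[0]
    f1 = np.fft.fftshift(np.fft.fft2(img1))
    f2 = np.fft.fftshift(np.fft.fft2(img2))
    y, x = np.indices(img1.shape)
    rings = np.hypot(x - n // 2, y - n // 2).astype(int).ravel()
    # Ring-wise correlation: sum(F1 * conj(F2)) / sqrt(sum|F1|^2 * sum|F2|^2).
    num = np.bincount(rings, (f1 * np.conj(f2)).real.ravel())
    d1 = np.bincount(rings, (np.abs(f1) ** 2).ravel())
    d2 = np.bincount(rings, (np.abs(f2) ** 2).ravel())
    curve = num / (np.sqrt(d1 * d2) + 1e-12)
    below = np.nonzero(curve[1:n // 2] < 1 / 7)[0]   # skip the DC ring
    if below.size == 0:
        return 2 * pixel_nm          # never crosses 1/7: Nyquist-limited
    k = below[0] + 1                 # spatial-frequency ring of the crossing
    return n * pixel_nm / k          # resolution = 1 / (k / (n * pixel_nm))
```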
Software efficiency
In a retrospective analysis, we identified the five best methods
in terms of the tradeoff between accuracy and detection rate for
each dataset. We defined a linear regression that fits these best
methods in a plot of ACC versus JAC, and call it an efficiency
line (Fig. 2). The distance from each software's (JAC, ACC)
coordinate to this line indicates the performance of the software.
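A sketch of this construction: fit a line through the five frontier points in the (JAC, ACC) plane and measure every package's perpendicular distance to it. The frontier-selection rule below (sorting by ACC minus JAC) is only a stand-in for our retrospective choice of the five best methods.

```python
# Efficiency-line sketch; the frontier-selection proxy is an assumption.
import numpy as np

def efficiency_line(jac, acc, n_best=5):
    """jac in %, acc in nm (lower is better). Returns (slope, intercept, dist)."""
    frontier = np.argsort(acc - jac)[:n_best]     # crude proxy for "best five"
    a, b = np.polyfit(jac[frontier], acc[frontier], deg=1)
    # Perpendicular distance of each (JAC, ACC) point to the line y = a*x + b.
    dist = np.abs(a * jac - acc + b) / np.hypot(a, 1.0)
    return a, b, dist
```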
The level of difficulty increases from LS1 to LS3, as evidenced
by the average performance (JAC, ACC), which was (79.58%,
29.98 nm) for LS1, (55.64%, 41.91 nm) for LS2 and (35.64%, 55.82
nm) for LS3. These findings are consistent with our engineering of
the data to have increasing levels of noise, as the theory predicts
that the presence of noise leads to an increase in the uncertainty
of the location of a particle. Likewise, the detection rate is also
affected by noise; single molecules with lower emission rates and
deeper axial positions are more difficult to detect.
Algorithms
Our study includes more than 30 packages (Table 1), covering
a large proportion of the SMLM software currently available.
Aside from a few that do not fit our validation framework because
their SMLM reconstruction is based on deconvolution without
explicit localization [22], most packages have a similar architecture.
However, a detailed analysis reveals fundamental differences.
Within the detection step, methods as diverse as low-pass
filtering, band-pass filtering, watershed, and wavelet transform, to
name a few, are deployed. The parameters of these preprocessing
operations need to be determined in an ad hoc fashion. In some
cases, we found that they cannot be set by the user; even when
they can be, often there is no calibration procedure provided.
Most algorithms isolate candidate pixels by applying a threshold
to identify potential local extrema, but each software uses differ-
ent methods for determining the threshold value: level of noise,
spot brightness, PSF size and/or particle density.
Over two-thirds of the participating packages carry out the
localization step by means of a fitting with a Gaussian function.
Other algorithms use an arbitrary PSF instead; DAOSTORM
and SimpleSTORM use a measured PSF. Distinctively, the two
packages MrSE and RadialSymmetry exploit the radial symmetry
of the PSF.
We have identified three groups of localization methods
and indicated their performance in Table 2. In Generation 1,
the basic methods perform localization by means of center
of mass (QuickPALM), triangularization (fluoroBancroft) or
linear regression (Gauss2DCirc). Although very fast, these methods
often fail to reconstruct HD data. Generation 2 is the largest
group of methods, including about two-thirds of all software
packages submitted thus far. They are characterized by the use of
iterative localization algorithms such as maximum-likelihood esti-
mators (MLE) or least-squares minimizers (LS). Previous works
compare the LS or MLE algorithm in detail [10,23]. Generation 3
comprises advanced methods, often unpublished. They improve
the detection rate while keeping a high localization accuracy.
This group includes minimum mean squared error (MMSE)/maximum a
posteriori probability (MAP) approaches (B-recs), a method with
high-quality interpolation (simpleSTORM), a template-matching
technique (WTM), a mean-shift approach (simplePALM) and packages
that exploit the radial symmetry (RadialSymmetry and MrSE).
Detailed information on the software packages is given in
Supplementary Notes 1 and 2.

[Figure 2 panels: scatter plots of localization accuracy (nm)
versus Jaccard index (%) for the HD1–HD3 and LS1–LS3 datasets with
their efficiency lines, and per-package cumulative-grade bars for
the HD and LS challenges; graphic not reproduced.]

Figure 2 | Accuracy versus detection rate for each tested software.
Scatter plots show high-density (HD) data above and long-sequence
(LS) data below. Efficiency lines (Eff. lines) are computed from
the five results at the boundary of the field with high JAC and/or
low ACC. The length of the bars is proportional to the grade, from
0 (poor) to 5 (good). Grades above 3.5 are written in the
corresponding bar. The grades of the three data sets are given
here for the detection rate, JAC1–JAC3; for the localization
accuracy, ACC1–ACC3; for the image quality assessment, SNR1–SNR3;
and for the image resolution, FRC1–FRC3. The grades of the
computational time (TIME) and usability (USA) are reported in
light gray bars.
Usability and computation time
End users require that software packages be accessible, easy to use
and fast. Although these aspects are subjective, they are important
enough to justify their inclusion in our study. To score them, we
prepared a questionnaire for the participants. We combined the
accessibility score with a usability score that covers quality of
documentation and user-friendliness. The open-source software
ImageJ/Fiji and the versatile platform Matlab are the most highly
represented frameworks hosting SMLM packages.
Finding the accurate position of millions of fluorophores is a
heavy computational task. We observed that the four packages
that use specialized hardware accelerators (a graphics processing
unit, GPU, or a field-programmable gate array, FPGA) reduce their
runtimes by an order of magnitude, sometimes reconstructing a
super-resolution image in less than a minute.
Benchmarking reporting and ranking
We returned to every participant a benchmark report that includes
renderings at different scales (Fig. 3) and quantitative measures
(Fig. 4). In particular, the bottom left curve of Figure 4 illustrates
how the proximity of fluorophores, d_NN, influences the
performance of the software. In this specific case, the rate of
detection improves from about one half, when d_NN is below the
FWHM of the PSF, to near perfection when d_NN is sufficiently high.
To coalesce our six criteria into a single ranking, we computed
the final score as the weighted sum of relative grades from
0 to 5, as presented in Table 2. We gave a greater weight to
the objective criteria JAC, ACC, SNR and FRC than to the sub-
jective criteria USA and TIME. With our particular choice of
weights, the ranking for the LS data is as follows, starting from
the best results: ThunderSTORM, SimpleSTORM and PeakFit.
For the HD data, it is B-recs, WTM and DAOSTORM.
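The aggregation itself is straightforward, as the sketch below shows; the weight values are placeholders that merely reflect the heavier weighting of the objective criteria, not the actual weights we used.

```python
# Weighted aggregation of per-criterion grades (0-5); weights are placeholders.
WEIGHTS = {"JAC": 1.0, "ACC": 1.0, "SNR": 1.0, "FRC": 1.0,  # objective
           "TIME": 0.5, "USA": 0.5}                          # subjective

def final_score(grades):
    """grades: dict criterion -> grade in [0, 5]; returns the weighted mean."""
    total = sum(WEIGHTS[k] for k in grades)
    return sum(WEIGHTS[k] * g for k, g in grades.items()) / total

# Hypothetical example for one package:
print(final_score({"JAC": 4.2, "ACC": 4.0, "SNR": 3.8,
                   "FRC": 4.1, "TIME": 5.0, "USA": 4.5}))
```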
DISCUSSION
The accuracy of single-molecule localization has a direct impact
on the resolving power of the reconstructed image. We confirmed
in this study of SMLM software packages that the experimental
accuracy is one order of magnitude better than the classical
diffraction limit, which supports theoretical findings [24,25]. This is
the best one can hope for; indeed, a few software packages nearly
achieve the Cramér-Rao lower bound.
Notwithstanding its popularity, the accuracy measure may
still misrepresent performance. For instance, it does not capture
issues related to the spread of the localizations—too few accurate
ones, for example, or too many false positives. To avoid reliance
on accuracy alone, we therefore considered additional criteria
such as the detection rate, which describes the overlap between
the set of detected molecules and the set of true molecules, along
with a measure of the quality of the rendered image and a measure
of its resolution.
Accuracy and detection rate tend to be in opposition: the
average accuracy of localization can often be made to artificially
increase just by excluding those unreliable molecules that emit a
low number of photons. It is therefore enlightening to quantify the
tradeoff between accuracy and detection. This idea has led us to
propose the efficiency lines or curves (Fig. 2), which should aid
microscopy practitioners in selecting software by allowing them
to judge whether a particular package will help them meet their
own preferred tradeoff.
We proposed a combination of six simple metrics to help users
choose an SMLM software package. While no single measure
of performance can capture the complexity of this choice, our
goal with the combined criterion is to provide guidance to
practitioners that is balanced and fair.
[Figure 3 panels: rows Contest 1–3, Training and Real data;
columns LS FOV/zoom1/zoom2 and HD FOV/zoom1/zoom2, with scale bars
from 12,500 nm down to 100 nm; graphic not reproduced.]
Figure 3 | Rendering of software results
versus ground truth at various scales. Every
participant in the challenge received a detailed
report on the performance of their software,
including renderings as shown; the particular
instance here corresponds to the PeakFit
software. Long-sequence data (LS), columns
1–3: full field of view (FOV), medium (zoom1),
and high (zoom2) magnification. High-density
data (HD), columns 4–6. The white frames in
FOV indicate the regions displayed in zoom1,
while the frames in zoom1 are themselves
expanded in zoom2. Rows 1–4: simulated data.
The red channel represents the rendering of
the ground truth and the green channel the
localizations of the tested software.
Row 5: real data with unknown ground truth.
Although the correlation (CORR) between the number of
photons of the ground truth and the number estimated by the
tested software is a parameter of interest, only a few participants
provided us with relevant output to obtain these correlations.
We therefore decided to exclude CORR from the final score
but have encouraged developers to focus their efforts on
improving accessibility and usability and to provide an estimate
of the number of photons or the uncertainty of measurements
for future releases. Also, we did not assess the grouping of
multiple-frame emission from a single molecule, as this is
often carried out at the postprocessing stage.
All packages we studied require parameters from the user.
Unfortunately, choosing appropriate values is by no means easy
or straightforward. More often than not, the tuning of parameters
requires a deep knowledge of the algorithmic pipeline; inexperi-
enced users may find that they need to invest a lot of time before
they can obtain satisfactory results. For this study, to ensure
that each software was properly tuned to our simulated database,
Table 1 | Description of SMLM software

Software | Molecule detection | PSF | Method | Platform | Acc. | Affiliation
3D-DAOSTORM [28] | Adaptive threshold, update on residual images | Gauss | LS | Python | + | Harvard Univ., USA
a-livePALM [29] | Denoising, SNR threshold, adaptive histogram equalization | Gauss | MLE | Matlab | + | Karlsruhe IT, Germany
Auto-Bayes | Generalized minimum-error threshold (GMET), local maximum | Gauss, Weibull | LS | Stand-alone | + | NCNST, Beijing, China
B-recs | Detection: n/a; fit: Bayesian inference framework | Arbitrary | MMSE, MAP | Stand-alone | | Janelia Farm, HHMI, USA
CSSTORM [30] | No explicit localization; convex optimization problem (HD) | Gauss | Compressed sensing | Matlab | + | UCSF, USA
DAOSTORM [31] | Gaussian filtering, local maximum (HD) | Measured | LS | Python | + | Univ. Oxford, UK
FacePALM [32] | No explicit localization; background estimation | Arbitrary | | Python | | Univ. Amsterdam, the Netherlands
FALCON [33] | Deconvolution with sparsity prior, local maximum (HD) | Taylor approx. | ADMM | Matlab | + | KAIST, Daejeon, Republic of Korea
Fast-ML-HD [34] | Sparsity constraint, concave-convex procedure (HD) | Gauss | MLE | Matlab | | KAIST, Daejeon, Republic of Korea
FPGA [35] | Adaptive threshold | Gauss | MLE, CoMass | Stand-alone | | Univ. Heidelberg, Germany
Gauss2DCirc [36] | Fixed SNR threshold | Gauss | REG | Matlab | + | Univ. Illinois, USA
GPUgaussMLE [37] | Simple (unspecified) methods to select subregions | Gauss | MLE | Matlab | + | TU Delft, Delft, the Netherlands
GraspJ [38] | Peak finding: fixed threshold value | Gauss | MLE | ImageJ | + | ICFO, Barcelona, Spain
Insight3 | Low-pass filtering, local maximum | Arbitrary | LS | Stand-alone | | UCSF, USA
L1H [39] | No explicit localization; L1 homotopy, FIST deconvolution | Gauss, arbitrary | Compressed sensing | Python | + | Harvard Univ., USA
M2LE [40] | Adaptive threshold | Gauss | MLE | ImageJ | + | Cal Poly Pomona, USA
Maliang [41] | Annular averaging filters, denoising by convolution | Gauss | MLE | ImageJ | + | WUST, Wuhan, China
Micro-Manager LM | Adaptive threshold | Gauss | LS | ImageJ | + | UCSF, USA
MrSE [42] | Band-pass filtering, local maximum | Radial | CoSym | Stand-alone | | WUST, Wuhan, China
Octane [43] | Watershed maximum | Gauss | LS | ImageJ | + | Univ. Connecticut, USA
PeakFit | Band-pass filtering, local maximum | Gauss | LS | ImageJ | + | Univ. Sussex, UK
PeakSelector [44] | Time-domain filtering, adaptive threshold | Gauss | LS | IDL, Matlab | | HHMI, USA
PYME [27] | Wiener filtering, adaptive threshold | Arbitrary | LS | Python | + | Univ. Auckland, New Zealand
QuickPALM [45] | Band-pass filtering, fixed SNR threshold | Gauss | CoMass | ImageJ | + | Institut Pasteur, France
RadialSymmetry [46] | Filtering, local max., minimal distance to gradient | Radial | CoSym | Matlab | + | Univ. Oregon, Eugene, USA
rapidSTORM [12] | Low-pass filtering, local maximum | Gauss | LS, MLE | Stand-alone | + | Univ. Würzburg, Germany
SimplePALM [47] | Variance stabilization denoising, DoG, probabilistic threshold | n/a | Mean-shift | Stand-alone | | Molecular Genetics Center, Gif-sur-Yvette, France
simpleSTORM [14] | Self-calibration, noise normalization, background subtraction, P value | Gauss, measured | Interpolation | Stand-alone | + | Univ. Heidelberg, Germany
SNSMIL | Gaussian filtering, fixed contrast threshold | Gauss | LS | Stand-alone | + | NCNST, Beijing, China
SOSplugin | Wavelet transform, local maximum, Gaussian mixture | Gauss | LS | ImageJ | + | Erasmus MC, Rotterdam, the Netherlands
ThunderSTORM [15] | Extensive collection of methods, preview, filtering, local maximum | Gauss | LS, MLE | ImageJ | + | Charles Univ., Prague, Czech Republic
W-fluoroBancroft [48] | Wavelet, adaptive threshold | Gauss | fB | Matlab | + | Boston Univ., USA
WaveTracer [49] | Wavelet, watershed maximum | Gauss | LS | Metamorph | | Univ. Bordeaux, France
WTM [50] | Wedge template matching (HD) | Wedge | Match. | Stand-alone | | Hamamatsu Photonics, Japan

The software packages whose manufacturers participated in our study are listed. The study is ongoing, and this list will be updated at http://bigwww.epfl.ch/smlm/software/. Software marked 'ImageJ' runs on the compatible products ImageJ, Fiji, Icy and ImageJ2. Abbreviations for PSF: Gauss, Gaussian, elliptical Gaussian or averaged Gaussian. Abbreviations for methods: ADMM, alternating direction method of multipliers; CoMass, center of mass; CoSym, center of symmetry; fB, fluoroBancroft; LS, least-squares; MAP, maximum a posteriori; MLE, maximum-likelihood estimator; MMSE, minimum mean-square error; REG, regression. Abbreviations regarding open access: +, available online (sometimes upon request); −, not available or included in a commercial package.

Citations

Super-resolution microscopy with DNA-PAINT

TL;DR: A protocol is presented for the creation of DNA origami test samples, in situ sample preparation, multiplexed data acquisition, data simulation, super-resolution image reconstruction and post-processing such as drift correction, molecule counting (qPAINT) and particle averaging, designed to be modular.

Deep learning enables cross-modality super-resolution in fluorescence microscopy

TL;DR: This data-driven approach does not require numerical modeling of the imaging process or the estimation of a point-spread-function, and is based on training a generative adversarial network to transform diffraction-limited input images into super-resolved ones, and could serve to democratize super-resolution imaging.

Fast live-cell conventional fluorophore nanoscopy with ImageJ through super-resolution radial fluctuations.

TL;DR: The broad applicability of SRRF and its performance at low signal-to-noise ratios allows super-resolution using modern widefield, confocal or TIRF microscopes with illumination orders of magnitude lower than methods such as PALM, STORM or STED.

Visualizing and discovering cellular structures with super-resolution microscopy

TL;DR: An overview of super-resolution methods, their state-of-the-art capabilities, and their constantly expanding applications to biology are provided, with a focus on the latter.

Deep learning massively accelerates super-resolution localization microscopy

TL;DR: Simulations and experimental imaging of microtubules, nuclear pores, and mitochondria show that high-quality, super-resolution images can be reconstructed from up to two orders of magnitude fewer frames than usually needed, without compromising spatial resolution.
References

Imaging intracellular fluorescent proteins at nanometer resolution.

TL;DR: This work introduced a method for optically imaging intracellular proteins at nanometer spatial resolution and used this method to image specific target proteins in thin sections of lysosomes and mitochondria and in fixed whole cells to image retroviral protein Gag at the plasma membrane.

Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM).

TL;DR: A high-resolution fluorescence microscopy method based on high-accuracy localization of photoswitchable fluorophores that can, in principle, reach molecular-scale resolution is developed.

Ultra-High Resolution Imaging by Fluorescence Photoactivation Localization Microscopy

TL;DR: A new method for fluorescence imaging has been developed that can obtain spatial distributions of large numbers of fluorescent molecules on length scales shorter than the classical diffraction limit, and suggests a means to address a significant number of biological questions that had previously been limited by microscope resolution.

Three-Dimensional Super-Resolution Imaging by Stochastic Optical Reconstruction Microscopy

TL;DR: 3D stochastic optical reconstruction microscopy (STORM) is demonstrated by using optical astigmatism to determine both axial and lateral positions of individual fluorophores with nanometer accuracy, allowing the 3D morphology of nanoscopic cellular structures to be resolved.

Precise nanometer localization analysis for individual fluorescent probes

TL;DR: A localization algorithm motivated from least-squares fitting theory is constructed and tested both on image stacks of 30-nm fluorescent beads and on computer-generated images (Monte Carlo simulations), and results show good agreement with the derived precision equation.