
Showing papers on "Point spread function published in 2019"


Journal ArticleDOI
TL;DR: It is demonstrated how neural networks can exploit the chromatic dependence of the point-spread function to classify the colors of single emitters imaged on a grayscale camera and how deep learning can be used to design new phase-modulating elements that result in further improved color differentiation between species.
Abstract: Deep learning has become an extremely effective tool for image classification and image restoration problems. Here, we apply deep learning to microscopy and demonstrate how neural networks can exploit the chromatic dependence of the point-spread function to classify the colors of single emitters imaged on a grayscale camera. While existing localization microscopy methods for spectral classification require additional optical elements in the emission path, e.g., spectral filters, prisms, or phase masks, our neural net correctly identifies static and mobile emitters with high efficiency using a standard, unmodified single-channel configuration. Furthermore, we show how deep learning can be used to design new phase-modulating elements that, when implemented into the imaging path, result in further improved color differentiation between species, including simultaneously differentiating four species in a single image.

91 citations


Journal ArticleDOI
TL;DR: This work presents a compact, diffraction-based snapshot hyperspectral imaging method, using only a novel diffractive optical element (DOE) in front of a conventional, bare image sensor, and introduces a novel DOE design that generates an anisotropic shape of the spectrally-varying PSF.
Abstract: Traditional snapshot hyperspectral imaging systems include various optical elements: a dispersive optical element (prism), a coded aperture, several relay lenses, and an imaging lens, resulting in an impractically large form factor. We seek an alternative, minimal form factor of snapshot spectral imaging based on recent advances in diffractive optical technology. We thereupon present a compact, diffraction-based snapshot hyperspectral imaging method, using only a novel diffractive optical element (DOE) in front of a conventional, bare image sensor. Our diffractive imaging method replaces the common optical elements in hyperspectral imaging with a single optical element. To this end, we tackle two main challenges: First, the traditional diffractive lenses are not suitable for color imaging under incoherent illumination due to severe chromatic aberration because the size of the point spread function (PSF) changes depending on the wavelength. By leveraging this wavelength-dependent property alternatively for hyperspectral imaging, we introduce a novel DOE design that generates an anisotropic shape of the spectrally-varying PSF. The PSF size remains virtually unchanged, but instead the PSF shape rotates as the wavelength of light changes. Second, since there is no dispersive element and no coded aperture mask, the ill-posedness of spectral reconstruction increases significantly. Thus, we propose an end-to-end network solution based on the unrolled architecture of an optimization procedure with a spatial-spectral prior, specifically designed for deconvolution-based spectral reconstruction. Finally, we demonstrate hyperspectral imaging with a fabricated DOE attached to a conventional DSLR sensor. 
Results show that our method compares well with other state-of-the-art hyperspectral imaging methods in terms of spectral accuracy and spatial resolution, while our compact, diffraction-based spectral imaging method uses only a single optical element on a bare image sensor.

78 citations


Journal ArticleDOI
TL;DR: In this article, a fast calibration for random-phase-modulation OPA LiDAR is proposed; it simultaneously retrieves the point spread function of the imaging system, which can be used to remove image blurring caused by the sidelobes and further improve image quality.
Abstract: The optical phased array (OPA) imaging technique, which uses electro-optic modulation rather than mechanical scanning to achieve beam steering, is a raster-scanning imaging method with great potential owing to its inertia-free operation and high speed. However, fabrication imperfections in an OPA cause pre-designed phase modulations to deviate from the desired steering angles, so a time-consuming calibration is usually required before practical use. Alternatively, it is possible to obtain images with a random phase modulation OPA. In this paper, we propose a fast calibration for the random phase modulation OPA LiDAR. Experimental results demonstrate that, to obtain images of the same quality, the proposed calibration is three times faster than the calibration used in the raster scanning scheme. In the meantime, the proposed calibration simultaneously retrieves the point spread function of the imaging system, which can be used to remove the image blurring caused by the sidelobes and further improve the image quality.

70 citations


Journal ArticleDOI
TL;DR: This work trains a neural network to receive an image containing densely overlapping PSFs of multiple emitters over a large axial range and output a list of their 3D positions, then uses the network to design the optimal PSF for the multi-emitter case.
Abstract: Localization microscopy is an imaging technique in which the positions of individual nanoscale point emitters (e.g. fluorescent molecules) are determined at high precision from their images. This is the key ingredient in single/multiple-particle-tracking and several super-resolution microscopy approaches. Localization in three-dimensions (3D) can be performed by modifying the image that a point-source creates on the camera, namely, the point-spread function (PSF). The PSF is engineered using additional optical elements to vary distinctively with the depth of the point-source. However, localizing multiple adjacent emitters in 3D poses a significant algorithmic challenge, due to the lateral overlap of their PSFs. Here, we train a neural network to receive an image containing densely overlapping PSFs of multiple emitters over a large axial range and output a list of their 3D positions. Furthermore, we then use the network to design the optimal PSF for the multi-emitter case. We demonstrate our approach numerically as well as experimentally by 3D STORM imaging of mitochondria, and volumetric imaging of dozens of fluorescently-labeled telomeres occupying a mammalian nucleus in a single snapshot.

64 citations


Journal ArticleDOI
TL;DR: In this paper, the origin of super-resolution through contact microspherical lenses was investigated, and the concept of the point spread function was applied to quantify the resolution in wide-field microspherical nanoscopy.
Abstract: Super-resolution imaging through contact microspherical lenses is often linked to the ability of dielectric microspheres to form photonic nanojets, and to the reciprocity of focusing and imaging. By rigorously solving Maxwell's equations, the authors show that this common understanding of the origin of super-resolution is not valid. Furthermore, they apply the concept of the point-spread function in combination with magnification of the virtual image to provide a basis for quantifying the resolution in wide-field microspherical nanoscopy. These results are expected to strongly influence near-field imaging beyond the diffraction limit.

60 citations


Journal ArticleDOI
TL;DR: Using quantum metrology techniques, it is shown that a simultaneous estimation of the two separations is achievable by a single quantum measurement, with a precision saturating the ultimate limit stemming from the quantum Cramér-Rao bound.
Abstract: We investigate the localization of two incoherent point sources with arbitrary angular and axial separations in the paraxial approximation. By using quantum metrology techniques, we show that a simultaneous estimation of the two separations is achievable by a single quantum measurement, with a precision saturating the ultimate limit stemming from the quantum Cramér-Rao bound. Such a precision is not degraded in the subwavelength regime, thus overcoming the traditional limitations of classical direct imaging derived from Rayleigh's criterion. Our results are qualitatively independent of the point spread function of the imaging system, and quantitatively illustrated in detail for the Gaussian instance. This analysis may have relevant applications in three-dimensional surface measurements.

59 citations
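The Rayleigh limitation of classical direct imaging mentioned in the abstract above can be illustrated numerically. The sketch below (illustrative values, assuming a 1D Gaussian PSF of unit width; the function name is mine) computes the classical Fisher information for estimating the separation of two equal-brightness incoherent sources, which collapses as the separation shrinks below the PSF width ("Rayleigh's curse"), whereas the quantum bound does not.

```python
import numpy as np

def direct_imaging_fisher_info(sep, sigma=1.0):
    """Classical (direct-imaging) Fisher information, per photon, for the
    separation of two equal-brightness incoherent sources with a Gaussian PSF."""
    x = np.linspace(-10, 10, 20001)
    dx = x[1] - x[0]
    g = lambda s: np.exp(-(x - s) ** 2 / (2 * sigma**2)) / np.sqrt(2 * np.pi * sigma**2)
    # Image intensity: equal mixture of the two shifted PSFs.
    intensity = lambda d: 0.5 * (g(-d / 2) + g(d / 2))
    eps = 1e-4  # finite-difference step in the separation
    dI = (intensity(sep + eps) - intensity(sep - eps)) / (2 * eps)
    return np.sum(dI**2 / intensity(sep)) * dx

fi_sub = direct_imaging_fisher_info(0.05)  # deep sub-Rayleigh separation
fi_res = direct_imaging_fisher_info(2.0)   # well-resolved separation
# Information collapses at small separations, while the quantum Fisher
# information stays at 1/(4 * sigma**2) for any separation.
print(fi_sub, fi_res)
```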


Journal ArticleDOI
TL;DR: This technique packages the quantitative, real-time sub-cellular imaging capabilities of QPI into a flexible configuration, opening the door for truly non-invasive, label-free, tomographic quantitative phase imaging of unaltered thick, scattering specimens.
Abstract: Quantitative phase imaging (QPI) is an important tool in biomedicine that allows for the microscopic investigation of live cells and other thin, transparent samples. Importantly, this technology yields access to the cellular and sub-cellular structure and activity at nanometer scales without labels or dyes. Despite this unparalleled ability, QPI’s restriction to relatively thin samples severely hinders its versatility and overall utility in biomedicine. Here we overcome this significant limitation of QPI to enable the same rich level of quantitative detail in thick scattering samples. We achieve this by first illuminating the sample in an epi-mode configuration and using multiple scattering within the sample—a hindrance to conventional transmission imaging used in QPI—as a source of transmissive illumination from within. Second, we quantify phase via deconvolution by modeling the transfer function of the system based on the ensemble average angular distribution of light illuminating the sample at the focal plane. This technique packages the quantitative, real-time sub-cellular imaging capabilities of QPI into a flexible configuration, opening the door for truly non-invasive, label-free, tomographic quantitative phase imaging of unaltered thick, scattering specimens. Images of controlled scattering phantoms, blood in collection bags, cerebral organoids and freshly excised whole mouse brains are presented to validate the approach.

56 citations
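The second step described above, quantifying phase via deconvolution with a modeled transfer function, can be illustrated with a generic Wiener deconvolution. This is a standard sketch under illustrative parameters, not the authors' exact transfer-function model:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, reg=1e-3):
    """Deconvolve an image given the system PSF via a regularized Wiener filter."""
    H = np.fft.fft2(np.fft.ifftshift(psf))           # optical transfer function
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) * G / (np.abs(H) ** 2 + reg)  # regularized inverse filter
    return np.real(np.fft.ifft2(F_hat))

# Synthetic check: blur a point object with a Gaussian PSF, then deconvolve.
n = 64
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 2.0**2))
psf /= psf.sum()
obj = np.zeros((n, n)); obj[20, 40] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(np.fft.ifftshift(psf))))
restored = wiener_deconvolve(blurred, psf)
print(np.unravel_index(restored.argmax(), restored.shape))  # peak back at (20, 40)
```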


Journal ArticleDOI
TL;DR: PynPoint is a data-reduction pipeline for processing and analysis of high-contrast imaging data obtained with pupil-stabilized observations; it is particularly suitable for the 3–5 μm wavelength range, where typically thousands of frames have to be processed and an accurate subtraction of the thermal background emission is critical.
Abstract: Context. The direct detection and characterization of planetary and substellar companions at small angular separations is a rapidly advancing field. Dedicated high-contrast imaging instruments deliver unprecedented sensitivity, enabling detailed insights into the atmospheres of young low-mass companions. In addition, improvements in data reduction and point spread function (PSF)-subtraction algorithms are equally relevant for maximizing the scientific yield, both from new and archival data sets. Aims. We aim at developing a generic and modular data-reduction pipeline for processing and analysis of high-contrast imaging data obtained with pupil-stabilized observations. The package should be scalable and robust for future implementations and particularly suitable for the 3–5 μm wavelength range where typically thousands of frames have to be processed and an accurate subtraction of the thermal background emission is critical. Methods. PynPoint is written in Python 2.7 and applies various image-processing techniques, as well as statistical tools for analyzing the data, building on open-source Python packages. The current version of PynPoint has evolved from an earlier version that was developed as a PSF-subtraction tool based on principal component analysis (PCA). Results. The architecture of PynPoint has been redesigned with the core functionalities decoupled from the pipeline modules. Modules have been implemented for dedicated processing and analysis steps, including background subtraction, frame registration, PSF subtraction, photometric and astrometric measurements, and estimation of detection limits. The pipeline package enables end-to-end data reduction of pupil-stabilized data and supports classical dithering and coronagraphic data sets.
As an example, we processed archival VLT/NACO L′ and M′ data of β Pic b and reassessed the brightness and position of the planet with a Markov chain Monte Carlo analysis; we also provide a derivation of the photometric error budget.

55 citations
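The PCA-based PSF subtraction that PynPoint grew out of can be sketched in a few lines of numpy. This is a minimal illustration on toy data (names and the synthetic stack are mine); real pipelines additionally derotate pupil-stabilized frames before collapsing them:

```python
import numpy as np

def pca_psf_subtract(frames, n_components=5):
    """Subtract a PCA model of the quasi-static stellar PSF from each frame.

    frames: (n_frames, ny, nx) stack of pupil-stabilized images.
    """
    n_frames = frames.shape[0]
    flat = frames.reshape(n_frames, -1)
    centered = flat - flat.mean(axis=0)
    # Principal components of the stack via SVD.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:n_components]                  # (k, n_pixels) PSF modes
    model = centered @ basis.T @ basis         # projection onto the PSF modes
    return (centered - model).reshape(frames.shape)

# Toy stack: one quasi-static PSF pattern with small random gain variations.
rng = np.random.default_rng(0)
profile = np.exp(-((np.arange(32) - 16) ** 2) / 20.0)
pattern = np.outer(profile, profile)
stack = np.array([(1 + 0.1 * rng.normal()) * pattern for _ in range(50)])
res = pca_psf_subtract(stack, n_components=3)
print(np.abs(res).max() / np.abs(stack).max())  # PSF strongly suppressed
```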


Journal ArticleDOI
TL;DR: Fast acoustic wave sparsely activated localization microscopy (fast-AWSALM) was developed to achieve super-resolved frames with subsecond temporal resolution, by using low-boiling-point octafluoropropane nanodroplets and high frame rate plane waves for activation, destruction, as well as imaging.
Abstract: Localization-based ultrasound super-resolution imaging using microbubble contrast agents and phase-change nanodroplets has been developed to visualize microvascular structures beyond the diffraction limit. However, the long data acquisition time makes the clinical translation more challenging. In this study, fast acoustic wave sparsely activated localization microscopy (fast-AWSALM) was developed to achieve super-resolved frames with subsecond temporal resolution, by using low-boiling-point octafluoropropane nanodroplets and high frame rate plane waves for activation, destruction, and imaging. Fast-AWSALM was demonstrated on an in vitro microvascular phantom to super-resolve structures that could not be resolved by conventional B-mode imaging. The effects of the temperature and mechanical index on fast-AWSALM were investigated. The experimental results show that subwavelength microstructures as small as 190 μm were resolvable in 200 ms with plane-wave transmission at a center frequency of 3.5 MHz and a pulse repetition frequency of 5000 Hz. This is about a 3.5-fold reduction in point spread function full-width-half-maximum compared to that measured in conventional B-mode, and two orders of magnitude faster than the recently reported AWSALM under non-flow or very slow flow conditions, as well as other localization-based methods. Just as in AWSALM, fast-AWSALM does not require flow, unlike current microbubble-based ultrasound super-resolution techniques. In conclusion, this study shows the promise of fast-AWSALM, a super-resolution ultrasound technique using nanodroplets, which can generate super-resolution images in milliseconds and does not require flow.

55 citations


Journal ArticleDOI
TL;DR: A user-friendly software program with a graphical user interface (GUI) has been developed by the author to automate the proposed system for producing terahertz images with enhanced resolution.

52 citations


Journal ArticleDOI
TL;DR: Using adjoint optimization-based inverse electromagnetic design, a cylindrical metasurface lens operating at ~625 nm with a depth of focus exceeding that of an ordinary lens was designed in this paper.
Abstract: Extended depth of focus (EDOF) lenses are important for various applications in computational imaging and microscopy. In addition to enabling novel functionalities, EDOF lenses can alleviate the need for stringent alignment requirements for imaging systems. Existing EDOF lenses, however, are often inefficient or produce an asymmetric point spread function (PSF) that blurs images. Inverse design of nanophotonics, including metasurfaces, has generated strong interest in recent years owing to its potential for generating exotic and innovative optical elements, which are generally difficult to model intuitively. Using adjoint optimization-based inverse electromagnetic design, in this paper, we designed a cylindrical metasurface lens operating at ~625 nm with a depth of focus exceeding that of an ordinary lens. We validated our design by nanofabrication and optical characterization of silicon nitride metasurface lenses (with a lateral dimension of 66.66 μm) with three different focal lengths (66.66 μm, 100 μm, and 133.33 μm). The focusing efficiencies of the fabricated extended depth of focus metasurface lenses are similar to those of traditional metalenses.

Journal ArticleDOI
20 Apr 2019
TL;DR: It is demonstrated that ghost imaging can be performed without ever knowing the patterns that illuminate the object, by instead using patterns correlated with them, no matter how weakly.
Abstract: Ghost imaging is an unconventional optical imaging technique that reconstructs the shape of an object by combining the measurement of two signals: one that interacted with the object, but without any spatial information; the other containing spatial information, but that never interacted with the object. Here we demonstrate that ghost imaging can be performed without ever knowing the patterns that illuminate the object, by instead using patterns correlated with them, no matter how weakly. As an experimental proof, we reconstruct the image of an object hidden behind a scattering layer using only the reflected light, which never interacts with the object.
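The correlation-based reconstruction underlying ghost imaging can be sketched in a few lines. This toy simulation (illustrative object and pattern count) uses known random patterns and bucket signals; the paper's point is that it also works with patterns merely correlated with the true illumination:

```python
import numpy as np

rng = np.random.default_rng(1)
ny, nx = 8, 8
obj = np.zeros((ny, nx)); obj[2:6, 3] = 1.0; obj[2, 3:6] = 1.0  # simple binary object

n_patterns = 20000
# Random illumination patterns and the corresponding bucket (total) signals.
patterns = rng.random((n_patterns, ny, nx))
buckets = np.tensordot(patterns, obj, axes=([1, 2], [0, 1]))
# Ghost image: covariance of the bucket signal with each pattern pixel.
ghost = np.tensordot(buckets - buckets.mean(),
                     patterns - patterns.mean(axis=0), axes=(0, 0)) / n_patterns
corr = np.corrcoef(ghost.ravel(), obj.ravel())[0, 1]
print(corr)  # reconstruction correlates strongly with the object
```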

Journal ArticleDOI
TL;DR: By investigating the relative contribution of the out-of-phase lock-in signal, information based on changes in the rate of heat transport can be extracted, and inhomogeneities in the thermal diffusion properties across the sample plane can be mapped with high sensitivity and sub-diffraction limited resolution.
Abstract: Imaging of the phase output of a lock-in amplifier in mid-infrared photothermal vibrational microscopy is demonstrated for the first time in combination with nonlinear demodulation. In general, thermal blurring and heat transport phenomena contribute to the resolution and sensitivity of mid-infrared photothermal imaging. For heterogeneous samples with multiple absorbing features, if imaged in a spectral regime of comparable absorption with their embedding medium, it is demonstrated that differentiation with high contrast is achieved in complementary imaging of the phase signal obtained from a lock-in amplifier compared to standard imaging of the photothermal amplitude signal. Specifically, by investigating the relative contribution of the out-of-phase lock-in signal, information based on changes in the rate of heat transport can be extracted, and inhomogeneities in the thermal diffusion properties across the sample plane can be mapped with high sensitivity and sub-diffraction limited resolution. Under these imaging conditions, wavenumber regimes can be identified in which the thermal diffusion contributions are minimized and an enhancement of the spatial resolution beyond the diffraction limited spot size of the probe beam in the corresponding phase images is achieved. By combining relative diffusive phase imaging with nonlinear demodulation at the second harmonic, it is demonstrated that 1-μm-size melamine beads embedded in a thin layer of 4-octyl-4'-cyanobiphenyl (8CB) liquid crystal can be detected with a 1.3-μm spatial full-width at half-maximum (FWHM) resolution. Thus, imaging with a resolving power that exceeds the probe diffraction limited spot size by a factor of 2.5 is presented, which paves the route towards super-resolution, label-free imaging in the mid-infrared.
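The lock-in demodulation at the heart of this approach, extracting amplitude and phase at a chosen harmonic of the modulation, can be sketched generically. The synthetic signal, frequencies, and phase lag below are illustrative, not the paper's data:

```python
import numpy as np

def lockin_demodulate(signal, t, f_ref, harmonic=1):
    """Extract amplitude and phase of `signal` at harmonic * f_ref."""
    w = 2 * np.pi * harmonic * f_ref
    x = 2 * np.mean(signal * np.sin(w * t))   # in-phase component
    y = 2 * np.mean(signal * np.cos(w * t))   # quadrature (out-of-phase) component
    return np.hypot(x, y), np.arctan2(y, x)

# Synthetic photothermal signal: a 2f component carrying a phase lag,
# mimicking delayed heat transport.
f = 5.0                        # modulation frequency (arbitrary units)
t = np.arange(4000) / 4000.0   # exactly 5 cycles of f in [0, 1)
phase_lag = 0.7                # rad
signal = (0.3 * np.sin(2 * np.pi * f * t)
          + 0.1 * np.sin(2 * np.pi * 2 * f * t + phase_lag))
amp2, ph2 = lockin_demodulate(signal, t, f, harmonic=2)
print(amp2, ph2)  # ~0.1 and ~0.7: second-harmonic amplitude and phase recovered
```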

Journal ArticleDOI
TL;DR: This work considers a middle ground between the camera responses of a single point and of a continuous pattern over the entire camera area, yielding an image with a maximum product of the signal-to-noise ratio and the image visibility and a maximum value of structural similarity.
Abstract: Interferenceless coded aperture correlation holography (I-COACH) is an incoherent opto-digital technique for imaging 3D objects. In I-COACH, the light scattered from an object is modulated by a coded phase mask (CPM) and then recorded by a digital camera as an object digital hologram. To reconstruct the image, the object hologram is cross-correlated with the point spread function (PSF), i.e., the intensity response to a point at the same object's axial location, recorded with the same CPM. So far in I-COACH systems, the light from each object point has scattered over the whole camera area. Hence, the signal-to-noise ratio per camera pixel is lower in comparison to direct imaging, in which each point is imaged to a single image point. In this work, we consider a middle ground between the camera responses of a single point and of a continuous pattern over the entire camera area. The light in this study is focused onto a set of dots randomly distributed over the camera plane. With this technique, we show that there is an optimal number of dots, yielding an image with a maximum product of the signal-to-noise ratio and the image visibility and a maximum value of structural similarity.

Journal ArticleDOI
TL;DR: A phase-space deconvolution method for light field microscopy is proposed, which fully exploits the smoothness prior in the phase-space domain and converts the spatially-nonuniform point spread function (PSF) into a spatially-uniform one with a much smaller size.
Abstract: Light field microscopy, featuring snapshot large-scale three-dimensional (3D) fluorescence imaging, has aroused great interest in various biological applications, especially high-speed 3D calcium imaging. Traditional 3D deconvolution algorithms based on the beam propagation model facilitate high-resolution 3D reconstructions. However, such a high-precision model is not robust enough for experimental data with different system errors, such as optical aberrations and background fluorescence, which introduce strong periodic artifacts and reduce the image contrast. To solve this problem, here we propose a phase-space deconvolution method for light field microscopy, which fully exploits the smoothness prior in the phase-space domain. By modeling the imaging process in the phase-space domain, we convert the spatially-nonuniform point spread function (PSF) into a spatially-uniform one with a much smaller size. Experiments on various biological samples and resolution charts verify the contrast enhancement of our method, with much fewer artifacts and 10-times less computational cost, without any hardware modifications required.
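A spatially-uniform PSF is what makes fast deconvolution possible, since a single kernel applies everywhere. As a generic illustration of iterative PSF-based deconvolution (a standard Richardson-Lucy sketch with an illustrative Gaussian PSF, not the authors' phase-space algorithm):

```python
import numpy as np

def fft_conv(img, otf):
    """Circular convolution via the precomputed optical transfer function."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * otf))

def richardson_lucy(blurred, psf, n_iter=50):
    """Iterative deconvolution with a known, spatially-uniform PSF."""
    otf = np.fft.fft2(np.fft.ifftshift(psf))
    est = np.full_like(blurred, blurred.mean())
    for _ in range(n_iter):
        ratio = blurred / (fft_conv(est, otf) + 1e-12)
        # Multiplicative update: correlate the ratio with the PSF.
        est *= np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(otf)))
    return est

n = 64
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 2.0**2))
psf /= psf.sum()
truth = np.zeros((n, n)); truth[20, 20] = 1.0; truth[40, 30] = 2.0
blurred = fft_conv(truth, np.fft.fft2(np.fft.ifftshift(psf)))
est = richardson_lucy(blurred, psf)
# The deconvolved estimate is much closer to the truth than the blurred input.
print(np.abs(est - truth).sum(), np.abs(blurred - truth).sum())
```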

Journal ArticleDOI
24 Jun 2019
TL;DR: In this article, photon avalanche (PA) anti-Stokes emission nanoparticles are used as luminescent labels to narrow the point spread function below 50 nm when the nonlinearity exceeds 50.
Abstract: Confocal fluorescence microscopy is a powerful tool for visualizing biological processes, but conventional laser scanning confocal microscopy cannot resolve structures below the diffraction limit of light. Although numerous sub-diffraction imaging techniques have been developed over the last decade, they are still limited by the photobleaching of fluorescent probes and by their complex instrumentation and alignment procedures. To address these issues, we propose a novel concept that relies on using photon avalanche (PA) anti-Stokes emission nanoparticles as luminescent labels. This technique leverages the highly non-linear relationship between photoluminescence intensity and excitation intensity observed with PA, which narrows the point spread function below 50 nm when the non-linearity exceeds 50. Using theoretical modelling, we evaluate the feasibility of obtaining PA in Nd3+ doped nanoparticles under non-resonant 1064 nm photoexcitation and study the impact of phenomenological parameters, such as photoexcitation intensity, concentration of dopants or features of the host matrix, on the theoretical PA behavior. Using these optimized parameters, our simulations resolved 20-nm features using non-linear orders of 80. These predictions require experimental proof of concept, which must be preceded by development of appropriate PA nanomaterials and experimental conditions to observe PA in nanoparticles at room temperature. When successful, the PA phenomenon in bio-functionalized nanoparticles shall radically simplify the technical means for super-resolution imaging.
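The PSF narrowing from a nonlinear emitter response can be checked numerically: if emission scales as excitation intensity to the power n, the effective PSF is the excitation PSF raised to n, and for a Gaussian its width shrinks by a factor of sqrt(n). A small sketch with illustrative values:

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a finely sampled 1D profile."""
    half = profile.max() / 2
    above = x[profile >= half]
    return above[-1] - above[0]

x = np.linspace(-3, 3, 60001)
sigma = 1.0
excitation_psf = np.exp(-x**2 / (2 * sigma**2))
n = 50                                 # photon-avalanche nonlinearity order
effective_psf = excitation_psf**n      # emission proportional to intensity**n
narrowing = fwhm(x, excitation_psf) / fwhm(x, effective_psf)
print(narrowing)  # close to sqrt(50), i.e. about 7.07
```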

Journal ArticleDOI
01 Feb 2019
TL;DR: In this paper, a vortex fiber is used to generate a Gaussian-shaped excitation beam and a donut-shaped depletion beam whose spin (polarization) and orbital angular momentum (OAM) signs are aligned.
Abstract: Super-resolution imaging using the principles of stimulated emission depletion (STED) microscopy requires collinear excitation of a sample with a Gaussian-shaped excitation beam and a donut-shaped depletion beam whose spin (polarization) and orbital angular momentum (OAM) signs are aligned. We leverage recent advances in stable OAM mode propagation in optical fibers for telecom applications to design, fabricate, and validate the utility of a vortex fiber as the beam shaping device at visible and near-IR wavelengths for STED microscopy. Specifically, using compact UV-written fiber-gratings yielding high purity mode conversion (98.7%), we demonstrate the simultaneous generation of Gaussian and OAM beams at user-defined wavelengths. Point spread function measurements reveal a depletion beam with >17.5-dB extinction ratio, a naturally co-aligned Gaussian beam, and a setup in which these characteristics are maintained even as the fiber is bent down to 6-mm radii. The proof-of-concept of all-fiber STED microscopy realized using this fiber device is used to image fluorescent bead samples, yielding a sub-diffraction-limited resolution of 103 nm in the lateral plane. This opens the door to performing fiber-based STED microscopy with a setup that is not only resistant to environmental perturbations but also facilitates the development of endoscopic STED imaging.

Journal ArticleDOI
TL;DR: This work analyzes the sampling patterns of the LFM, and introduces a flexible light field point spread function model (LFPSF) to cope with arbitrary LFM designs, and proposes a novel aliasing-aware deconvolution scheme to address the sampling artifacts.
Abstract: The sampling patterns of the light field microscope (LFM) are highly depth-dependent, which implies non-uniform recoverable lateral resolution across depth. Moreover, reconstructions using state-of-the-art approaches suffer from strong artifacts at axial ranges, where the LFM samples the light field at a coarse rate. In this work, we analyze the sampling patterns of the LFM, and introduce a flexible light field point spread function model (LFPSF) to cope with arbitrary LFM designs. We then propose a novel aliasing-aware deconvolution scheme to address the sampling artifacts. We demonstrate the high potential of the proposed method on real experimental data.

Journal ArticleDOI
15 Oct 2019
TL;DR: In this article, the authors designed and fabricated a flat multi-level diffractive lens (MDL) that is achromatic in the SWIR band (875 nm to 1675 nm).
Abstract: We designed and fabricated a flat multi-level diffractive lens (MDL) that is achromatic in the SWIR band (875 nm to 1675 nm). The MDL had a focal length of 25 mm, aperture diameter of 8.93 mm, and thickness of only 2.6 µm. By pairing the MDL with a SWIR image sensor, we also characterized its imaging performance in terms of the point-spread functions, modulation-transfer functions, and still and video imaging.

Posted Content
TL;DR: This work fabricates an optimized optical element and attaches it as a hardware add-on to a conventional camera during inference and demonstrates that this end-to-end deep optical imaging approach to single-shot HDR imaging outperforms both purely CNN-based approaches and other PSF engineering approaches.
Abstract: High-dynamic-range (HDR) imaging is crucial for many computer graphics and vision applications. Yet, acquiring HDR images with a single shot remains a challenging problem. Whereas modern deep learning approaches are successful at hallucinating plausible HDR content from a single low-dynamic-range (LDR) image, saturated scene details often cannot be faithfully recovered. Inspired by recent deep optical imaging approaches, we interpret this problem as jointly training an optical encoder and electronic decoder where the encoder is parameterized by the point spread function (PSF) of the lens, the bottleneck is the sensor with a limited dynamic range, and the decoder is a convolutional neural network (CNN). The lens surface is then jointly optimized with the CNN in a training phase; we fabricate this optimized optical element and attach it as a hardware add-on to a conventional camera during inference. In extensive simulations and with a physical prototype, we demonstrate that this end-to-end deep optical imaging approach to single-shot HDR imaging outperforms both purely CNN-based approaches and other PSF engineering approaches.

Journal ArticleDOI
Xiaoyu Wang, Xin Jin, Junqi Li, Xiaocong Lian, Xiangyang Ji, Qionghai Dai
TL;DR: A prior-information-free single-shot scattering imaging method that exceeds the ME range via Fourier spectrum guessing and iterative energy-constrained compensation, so that dual targets can be reconstructed from a single shot.
Abstract: Imaging beyond the memory effect (ME) is critical for seeing through scattering media. Previously proposed methods have relied on invasive point spread function measurements or on prior information about the imaging targets. In this Letter, we propose a prior-information-free single-shot scattering imaging method to exceed the ME range. The autocorrelation of each imaging target is separated blindly from the autocorrelation of the recorded dual-target speckle via Fourier spectrum guessing and iterative energy-constrained compensation. Working together with phase retrieval, dual targets exceeding the ME range can be reconstructed from a single shot. The effectiveness of the algorithm is verified by simulated experiments and a real imaging system.
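The building block of such speckle-correlation imaging, forming the autocorrelation of a recorded frame, is typically computed via the Wiener-Khinchin theorem. A generic FFT-based sketch on an illustrative image (the blind separation and phase-retrieval steps of the paper are not shown):

```python
import numpy as np

def autocorrelation(img):
    """Autocorrelation via the Wiener-Khinchin theorem (FFT-based)."""
    f = np.fft.fft2(img - img.mean())
    ac = np.real(np.fft.ifft2(np.abs(f) ** 2))
    return np.fft.fftshift(ac)   # put the zero-shift peak at the center

rng = np.random.default_rng(2)
img = rng.random((32, 32))
ac = autocorrelation(img)
# The zero-shift peak lands at the center and equals the sum of squared
# mean-subtracted pixels (pixel count times the image variance).
peak = np.unravel_index(ac.argmax(), ac.shape)
print(peak, np.isclose(ac.max(), ((img - img.mean()) ** 2).sum()))
```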

Journal ArticleDOI
TL;DR: A recently introduced tensor-factorization-based approach offers a fast solution for single-image resolution enhancement, using an offline estimate of the system point spread function, without the use of known image pairs or strict prior assumptions.
Abstract: Available super-resolution techniques for 3-D images are either computationally inefficient prior-knowledge-based iterative techniques or deep learning methods that require a large database of known low-resolution and high-resolution image pairs. A recently introduced tensor-factorization-based approach offers a fast solution without the use of known image pairs or strict prior assumptions. In this paper, this factorization framework is investigated for single-image resolution enhancement with an offline estimate of the system point spread function. The technique is applied to 3-D cone beam computed tomography for dental image resolution enhancement. To demonstrate the efficiency of our method, it is compared to a recent state-of-the-art iterative technique using low-rank and total variation regularizations. In contrast to this comparative technique, the proposed reconstruction technique gives a two-order-of-magnitude improvement in running time: 2 min compared to 2 h for a dental volume of 282 × 266 × 392 voxels. Furthermore, it also offers slightly improved quantitative results (peak signal-to-noise ratio and segmentation quality). Another advantage of the presented technique is its low number of hyperparameters. As demonstrated in this paper, the framework is not sensitive to small changes in its parameters, making it easy to use.

Journal ArticleDOI
TL;DR: An open-source software tool and a simple experimental calibration procedure are presented that allow retrieving accurate z-positions in any PSF engineering approach or fitting modality, even at large imaging depths.
Abstract: Three-dimensional single molecule localization microscopy relies on the fitting of the individual molecules with a point spread function (PSF) model. The reconstructed images often show local squeezing or expansion in z. A common cause is depth-induced aberrations in conjunction with an imperfect PSF model calibrated from beads on a coverslip, resulting in a mismatch between measured PSF and real PSF. Here, we developed a strategy for accurate z-localization in which we use the imperfect PSF model for fitting, determine the fitting errors and correct for them in a post-processing step. We present an open-source software tool and a simple experimental calibration procedure that allow retrieving accurate z-positions in any PSF engineering approach or fitting modality, even at large imaging depths.

Journal ArticleDOI
TL;DR: A model of the AO long-exposure PSF is developed, adapted to various seeing conditions and any AO system; it accurately matches both the core of the PSF and its turbulent halo, with a relative error smaller than 1% for simulated and experimental data.
Abstract: Context. Adaptive optics (AO) systems greatly increase the resolution of large telescopes, but produce complex point spread function (PSF) shapes, varying in time and across the field of view. The PSF must be accurately known since it provides crucial information about optical systems for design, characterization, diagnostics, and image post-processing. Aims. We develop here a model of the AO long-exposure PSF, adapted to various seeing conditions and any AO system. This model is made to match accurately both the core of the PSF and its turbulent halo. Methods. The PSF model we develop is based on a parsimonious parameterization of the phase power spectral density, with only five parameters to describe circularly symmetric PSFs and seven parameters for asymmetrical ones. Moreover, one of the parameters is the Fried parameter $r_0$ of the turbulence's strength. This physical parameter is an asset in the PSF model since it can be correlated with external measurements of $r_0$, such as phase slopes from the AO real time computer (RTC) or site seeing monitoring. Results. We fit our model against end-to-end simulated PSFs using the OOMAO tool, and against on-sky PSFs from the SPHERE/ZIMPOL imager and the MUSE integral field spectrometer working in AO narrow-field mode. Our model matches the shape of the AO PSF both in the core and the halo, with a relative error smaller than 1% for simulated and experimental data. We also show that we retrieve the $r_0$ parameter with sub-centimeter precision on simulated data. For ZIMPOL data, we show a correlation of 97% between our $r_0$ estimation and the RTC estimation. Finally, MUSE allows us to test the spectral dependency of the fitted $r_0$ parameter. It follows the theoretical $\lambda^{6/5}$ evolution with a standard deviation of 0.3 cm. Evolution of other PSF parameters, such as residual phase variance or aliasing, is also discussed.
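The chain from a parametric phase PSD to a long-exposure PSF can be sketched as follows. The split PSD here (flat residual power inside the AO-corrected band, Kolmogorov $0.023\,r_0^{-5/3} f^{-11/3}$ outside) is a simplified stand-in for the paper's five-parameter model; the grid and parameter values are invented, and the telescope pupil OTF is omitted:

```python
import numpy as np

N, dx = 256, 0.02                 # pupil-plane grid and sampling (m), assumed
r0, f_ao, sigma2 = 0.15, 5.0, 0.5 # Fried parameter (m), AO cutoff (1/m),
                                  # residual variance in the corrected band (rad^2)

fx = np.fft.fftfreq(N, d=dx)
FX, FY = np.meshgrid(fx, fx)
f = np.hypot(FX, FY)

# Split phase PSD: flat inside the AO band, Kolmogorov tail outside.
kolmo = 0.023 * r0 ** (-5.0 / 3.0) * np.maximum(f, 1e-9) ** (-11.0 / 3.0)
psd = np.where(f <= f_ao, sigma2 / (np.pi * f_ao ** 2), kolmo)

# Phase covariance B(r) = inverse FT of the PSD, then structure function,
# then the long-exposure atmospheric OTF exp(-D_phi/2), then the PSF.
df = 1.0 / (N * dx)
B = np.fft.ifft2(psd).real * (N * df) ** 2
D = 2.0 * (B[0, 0] - B)
otf = np.exp(-0.5 * D)
psf = np.fft.fftshift(np.fft.fft2(otf).real)
psf /= psf.sum()
```

With these values the PSF shows the expected AO morphology: a sharp core on top of a wide residual halo whose strength scales with $r_0$.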

Journal ArticleDOI
TL;DR: A dual-view detection scheme combining supercritical angle fluorescence and astigmatic imaging to obtain precise and unbiased 3D super resolution images is presented.
Abstract: Here, we present a 3D localization-based super-resolution technique providing a slowly varying localization precision over a 1 μm range with precisions down to 15 nm. The axial localization is performed through a combination of point spread function (PSF) shaping and supercritical angle fluorescence (SAF), which yields absolute axial information. Using a dual-view scheme, the axial detection is decoupled from the lateral detection and optimized independently to provide a weakly anisotropic 3D resolution over the imaging range. This method can be readily implemented on most homemade PSF shaping setups and provides drift-free, tilt-insensitive and achromatic results. Its insensitivity to these unavoidable experimental biases is especially adapted for multicolor 3D super-resolution microscopy, as we demonstrate by imaging cell cytoskeleton, living bacteria membranes and axon periodic submembrane scaffolds. We further illustrate the interest of the technique for biological multicolor imaging over a several-μm range by direct merging of multiple acquisitions at different depths.
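The astigmatic half of such a scheme can be sketched in a few lines: the fitted x/y spot widths are matched against a bead calibration curve to read out z. The width model and numbers below are illustrative, and the paper's SAF channel (which supplies the absolute z reference) is not modeled:

```python
import numpy as np

# Illustrative astigmatic calibration: with a cylindrical lens the x and y
# PSF widths (nm) focus at different depths, so the pair (wx, wy) encodes z.
# All parameters are invented for the demo.
z_cal = np.linspace(-500.0, 500.0, 201)          # calibration z range (nm)
w0, dz, zr = 150.0, 400.0, 300.0                 # waist, focal split, depth scale
wx_cal = w0 * np.sqrt(1.0 + ((z_cal - dz / 2) / zr) ** 2)
wy_cal = w0 * np.sqrt(1.0 + ((z_cal + dz / 2) / zr) ** 2)

def z_from_widths(wx, wy):
    """Return the calibration z whose (wx, wy) pair best matches the fit."""
    cost = (wx_cal - wx) ** 2 + (wy_cal - wy) ** 2
    return z_cal[np.argmin(cost)]

# A molecule whose fitted widths equal the calibration values at z = +100 nm
z_est = z_from_widths(wx_cal[120], wy_cal[120])   # z_cal[120] is 100.0 nm
```

In the dual-view scheme of the paper, this relative astigmatic readout is anchored by the SAF intensity ratio, which is what removes drift and tilt sensitivity.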

Journal ArticleDOI
TL;DR: To develop a method for fast distortion- and blurring-free imaging, an acquisition/reconstruction technique termed "tilted-CAIPI" is proposed that achieves >20× acceleration for PSF-EPI.
Abstract: Purpose To develop a method for fast distortion- and blurring-free imaging. Theory EPI with point-spread-function (PSF) mapping can achieve distortion- and blurring-free imaging at the cost of long acquisition time. In this study, an acquisition/reconstruction technique, termed "tilted-CAIPI," is proposed to achieve >20× acceleration for PSF-EPI. The proposed method systematically optimized the k-space sampling trajectory with B0-inhomogeneity-informed reconstruction, to exploit the inherent signal correlation in PSF-EPI and take full advantage of coil sensitivity. Susceptibility-induced phase accumulation is regarded as an additional encoding that is estimated from calibration data and integrated into reconstruction. Self-navigated phase correction was developed to correct shot-to-shot phase variation in diffusion imaging. Methods Tilted-CAIPI was implemented at 3T, with incorporation of partial Fourier and simultaneous multislice to achieve further acceleration. T2-weighted, T2*-weighted, and diffusion-weighted imaging experiments were conducted to evaluate the proposed method. Results The ability of tilted-CAIPI to provide highly accelerated imaging without distortion and blurring was demonstrated through in vivo brain experiments, where only 8 shots per simultaneous slice group were required to provide high-quality, high-SNR imaging at 0.8-1 mm resolution. Conclusion Tilted-CAIPI achieved fast distortion- and blurring-free imaging with high SNR. Whole-brain T2-weighted, T2*-weighted, and diffusion imaging can be obtained in just 15-60 s.
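The idea of treating susceptibility-induced phase as extra encoding can be shown with a toy 1-D forward model. Nothing here reproduces the actual tilted-CAIPI trajectory, coil array, or simultaneous multislice; the field map, timing, and sizes are invented. The off-resonance phase is built into the encoding matrix and inverted jointly with the Fourier terms, while a naive FFT reconstruction that ignores it stays distorted:

```python
import numpy as np

n = 64
x = np.zeros(n); x[20:40] = 1.0                      # 1-D "object"
b0 = 40.0 * np.hanning(n)                            # assumed off-resonance map (Hz)
t = np.linspace(0.0, 5e-3, n)                        # acquisition time of each k-sample (s)

r = np.arange(n)
F = np.exp(-2j * np.pi * np.outer(r, r) / n)         # plain DFT encoding
P = np.exp(-2j * np.pi * np.outer(t, b0))            # B0 phase accrued per sample
E = F * P                                            # B0-informed encoding matrix
y = E @ x                                            # simulated noise-free data

x_informed = np.linalg.lstsq(E, y, rcond=None)[0]    # joint inversion
x_naive = np.fft.ifft(y)                             # ignores the B0 phase
```

The informed inversion recovers the object to machine precision, whereas the naive transform leaves a residual error from the unmodeled phase.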

Journal ArticleDOI
TL;DR: This paper explores the use of single-shot digital holography data and a novel algorithm, referred to as multiplane iterative reconstruction (MIR), for imaging through distributed-volume aberrations and shows that the MIR algorithm outperforms the leading multiplane image-sharpening algorithm over a wide range of anisoplanatic conditions.
Abstract: This paper explores the use of single-shot digital holography data and a novel algorithm, referred to as multiplane iterative reconstruction (MIR), for imaging through distributed-volume aberrations. Such aberrations result in a linear, shift-varying or “anisoplanatic” physical process, where multiple look angles give rise to different point spread functions within the field of view of the imaging system. The MIR algorithm jointly computes the maximum a posteriori estimates of the anisoplanatic phase errors and the speckle-free object reflectance from the single-shot digital holography data. Using both simulations and experiments, we show that the MIR algorithm outperforms the leading multiplane image-sharpening algorithm over a wide range of anisoplanatic conditions.
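A drastically reduced sketch of the image-sharpening principle that such holographic methods build on (a single plane, a single defocus mode, and a brute-force search instead of a MAP optimization; all sizes and coefficients are invented): apply candidate phase corrections in the pupil and keep the one that maximizes a sharpness metric.

```python
import numpy as np

N = 64
yy, xx = np.mgrid[-1:1:N * 1j, -1:1:N * 1j]
rho2 = xx ** 2 + yy ** 2
pupil = (rho2 <= 1.0).astype(float)
defocus = 2.0 * rho2 - 1.0                 # Zernike defocus shape (unnormalized)

obj = np.zeros((N, N)); obj[28:36, 30:34] = 1.0      # simple extended object
true_coeff = 1.5                                     # rad of defocus aberration

G = np.fft.fftshift(np.fft.fft2(obj))                # centered object spectrum
data = G * pupil * np.exp(1j * true_coeff * defocus) # aberrated pupil-plane field

def sharpness(c):
    """Image sharpness after removing `c` rad of defocus from the pupil data."""
    img = np.fft.ifft2(np.fft.ifftshift(data * np.exp(-1j * c * defocus)))
    inten = np.abs(img) ** 2
    return np.sum(inten ** 2)

cands = np.linspace(0.0, 3.0, 61)
best = cands[int(np.argmax([sharpness(c) for c in cands]))]
```

Because digital holography records the complex pupil field, the correction can be applied after detection; MIR generalizes this to multiple phase planes and a joint MAP estimate of phase and reflectance.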

Journal ArticleDOI
TL;DR: Deconvolution software for light sheet microscopy that uses a theoretical point spread function, derived from a model of image formation in a light sheet microscope, provides excellent blur reduction and enhancement of fine image details for image stacks recorded with low-magnification objectives.
Abstract: We developed a deconvolution software for light sheet microscopy that uses a theoretical point spread function, which we derived from a model of image formation in a light sheet microscope. We show that this approach provides excellent blur reduction and enhancement of fine image details for image stacks recorded with low-magnification objectives of relatively high NA and high field number, such as 2× NA 0.14 FN 22 or 4× NA 0.28 FN 22. For these objectives, which are widely used in light sheet microscopy, point spread functions resolved well enough for deconvolution are difficult to measure, and the results obtained by common deconvolution software developed for confocal microscopy are usually poor. We demonstrate that the deconvolutions computed using our point spread function model are equivalent to those obtained using a measured point spread function for a 10× objective with NA 0.3 and for a 20× objective with NA 0.45.
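The approach of deconvolving with a modeled rather than measured PSF can be illustrated with a textbook Richardson-Lucy loop. This is a 2-D sketch with a Gaussian stand-in PSF and invented sizes, not the authors' light-sheet PSF model:

```python
import numpy as np

def psf_to_otf(psf, shape):
    """Zero-pad a small PSF to `shape`, centred at the origin, and FFT it."""
    pad = np.zeros(shape)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def richardson_lucy(img, psf, iters=50, eps=1e-12):
    """Plain Richardson-Lucy deconvolution using a modeled PSF."""
    H = psf_to_otf(psf / psf.sum(), img.shape)
    conv = lambda a, T: np.real(np.fft.ifft2(np.fft.fft2(a) * T))
    est = np.full_like(img, img.mean())
    for _ in range(iters):
        ratio = img / (conv(est, H) + eps)
        est = est * conv(ratio, np.conj(H))   # conj(H) acts as the flipped PSF
    return est

# Demo: a blurred point source gets visibly re-sharpened.
truth = np.zeros((32, 32)); truth[16, 16] = 1.0
y, x = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x ** 2 + y ** 2) / 4.0); psf /= psf.sum()
img = np.real(np.fft.ifft2(np.fft.fft2(truth) * psf_to_otf(psf, truth.shape)))
est = richardson_lucy(img, psf)
```

The quality of the result hinges entirely on how well the model PSF matches the real one, which is the point of deriving it from the light sheet image-formation model.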

Journal ArticleDOI
TL;DR: A data-driven approach is introduced in which artificial neural networks are trained to make a direct link between an experimental point spread function image and its underlying, multidimensional parameters; the results are compared with alternative approaches based on maximum likelihood estimation.
Abstract: Recent years have witnessed the development of single-molecule localization microscopy as a generic tool for sampling diverse biologically relevant information at the super-resolution level. While current approaches often rely on the target-specific alteration of the point spread function to encode the multidimensional contents of single fluorophores, the details of the point spread function in an unmodified microscope already contain rich information. Here we introduce a data-driven approach in which artificial neural networks are trained to make a direct link between an experimental point spread function image and its underlying, multidimensional parameters, and we compare the results with alternative approaches based on maximum likelihood estimation. To demonstrate this concept in real systems, we decipher in fixed cells both the colors and the axial positions of single molecules in regular localization microscopy data.
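As a toy, numpy-only illustration of the regression idea (a two-layer network mapping raw PSF pixels to one underlying parameter): the synthetic data, the width-encodes-z model, and the architecture below are all invented and far simpler than the networks used in practice.

```python
import numpy as np

rng = np.random.default_rng(0)
n_img, side = 512, 11
z = rng.uniform(-1.0, 1.0, n_img)               # normalized axial positions
yy, xx = np.mgrid[-5:6, -5:6]
# Synthetic PSFs whose width grows with z: a crude stand-in for the
# z-dependent PSF shape a real microscope produces.
X = np.stack([np.exp(-(xx ** 2 + yy ** 2) / (2.0 * (1.5 + zi) ** 2)).ravel()
              for zi in z])

# One-hidden-layer regression network trained by full-batch gradient descent.
W1 = rng.normal(0.0, 0.1, (side * side, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.1, (16, 1)); b2 = np.zeros(1)
lr, losses = 0.05, []
for _ in range(500):
    h = np.tanh(X @ W1 + b1)                    # hidden activations
    pred = (h @ W2 + b2).ravel()
    err = pred - z
    losses.append(0.5 * np.mean(err ** 2))
    # Exact gradients of the loss 0.5 * mean(err^2)
    gW2 = h.T @ err[:, None] / n_img
    gb2 = err.mean()
    gh = (err[:, None] @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ gh / n_img
    gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1
```

The network learns the image-to-parameter mapping directly from examples, the same supervised framing the paper applies to color and axial position.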
