
Showing papers on "Point spread function published in 2018"


Journal ArticleDOI
TL;DR: In this article, a wideband wide-field spectral deconvolution framework (ddfacet) based on image-plane faceting is presented that takes generic direction-dependent effects into account.
Abstract: The new generation of radio interferometers is characterized by high sensitivity, wide fields of view and large fractional bandwidth. To synthesize the deepest images enabled by the high dynamic range of these instruments requires us to take into account the direction-dependent Jones matrices, while estimating the spectral properties of the sky in the imaging and deconvolution algorithms. In this paper we discuss and implement a wideband wide-field spectral deconvolution framework (ddfacet) based on image plane faceting, that takes into account generic direction-dependent effects. Specifically, we present a wide-field co-planar faceting scheme, and discuss the various effects that need to be taken into account to solve for the deconvolution problem (image plane normalization, position-dependent Point Spread Function, etc). We discuss two wideband spectral deconvolution algorithms based on hybrid matching pursuit and sub-space optimisation respectively. A few interesting technical features incorporated in our imager are discussed, including baseline dependent averaging, which has the effect of improving computing efficiency. The version of ddfacet presented here can account for any externally defined Jones matrices and/or beam patterns.

179 citations
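The hybrid matching-pursuit deconvolution mentioned in the abstract builds on the classic Högbom CLEAN update: repeatedly find the brightest residual pixel and subtract a scaled, shifted PSF. A minimal numpy sketch under illustrative assumptions (Gaussian PSF, toy two-source sky); this is not DDFacet's faceted, direction-dependent implementation:

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, n_iter=200):
    """Minimal Hogbom CLEAN: iteratively subtract a scaled, shifted PSF
    at the brightest residual pixel (a matching-pursuit update)."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    pc = np.array(psf.shape) // 2          # PSF center
    for _ in range(n_iter):
        y, x = np.unravel_index(np.argmax(residual), residual.shape)
        amp = gain * residual[y, x]
        model[y, x] += amp
        # subtract the shifted PSF, clipped to the image bounds
        y0, x0 = y - pc[0], x - pc[1]
        ys = slice(max(0, y0), min(residual.shape[0], y0 + psf.shape[0]))
        xs = slice(max(0, x0), min(residual.shape[1], x0 + psf.shape[1]))
        pys = slice(ys.start - y0, ys.stop - y0)
        pxs = slice(xs.start - x0, xs.stop - x0)
        residual[ys, xs] -= amp * psf[pys, pxs]
    return model, residual

# toy example: two point sources blurred by a Gaussian PSF
n = 64
yy, xx = np.mgrid[:n, :n]
psf = np.exp(-((yy - n // 2)**2 + (xx - n // 2)**2) / (2 * 2.0**2))
sky = np.zeros((n, n)); sky[20, 20] = 1.0; sky[40, 45] = 0.6
dirty = np.real(np.fft.ifft2(np.fft.fft2(sky) * np.fft.fft2(np.fft.ifftshift(psf))))
model, residual = hogbom_clean(dirty, psf, n_iter=500)
```

With a correct PSF the residual decays geometrically and the model components land on the true source pixels; the position-dependent PSF handling discussed in the paper replaces the single fixed `psf` used here.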


Journal ArticleDOI
TL;DR: In this article, the combination of label-free white-light quantitative phase imaging with fluorescence was proposed to provide high-speed imaging and spatial super-resolution of cellular and subcellular structures.
Abstract: Super-resolution fluorescence microscopy provides unprecedented insight into cellular and subcellular structures. However, going ‘beyond the diffraction barrier’ comes at a price, since most far-field super-resolution imaging techniques trade temporal for spatial super-resolution. We propose the combination of a novel label-free white light quantitative phase imaging with fluorescence to provide high-speed imaging and spatial super-resolution. The non-iterative phase retrieval relies on the acquisition of single images at each z-location and thus enables straightforward 3D phase imaging using a classical microscope. We realized multi-plane imaging using a customized prism for the simultaneous acquisition of eight planes. This allowed us to not only image live cells in 3D at up to 200 Hz, but also to integrate fluorescence super-resolution optical fluctuation imaging within the same optical instrument. By combining the sensitivity and high temporal resolution of phase imaging with the specificity and high spatial resolution of fluorescence microscopy, a 4D microscope is demonstrated that visualizes in three dimensions the fast cellular processes in living cells at up to 200 Hz.

101 citations
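The super-resolution optical fluctuation imaging (SOFI) integrated into the platform gains resolution from temporal statistics of blinking emitters: the second-order cumulant (pixel-wise variance) effectively squares a Gaussian PSF, narrowing it by a factor of sqrt(2). A 1D toy sketch; the blinking statistics and PSF width are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n, frames, sigma = 65, 400, 6.0
x = np.arange(n) - n // 2
psf = np.exp(-x**2 / (2 * sigma**2))       # 1D PSF of a single emitter

# independent on/off blinking over time (the fluctuation SOFI exploits)
blink = rng.random(frames) < 0.5
movie = blink[:, None] * psf[None, :]

mean_img = movie.mean(axis=0)              # diffraction-limited image
sofi_img = movie.var(axis=0)               # 2nd-order cumulant ~ PSF**2

def fwhm(profile):
    """Crude FWHM: count samples above half the maximum."""
    return np.count_nonzero(profile > profile.max() / 2)
```

Since the variance of the on/off signal is spatially constant, the variance image is proportional to the squared PSF, which for a Gaussian is a Gaussian of width sigma/sqrt(2).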


Journal ArticleDOI
TL;DR: In this article, the Fourier ptychographic forward imaging process was modeled with a convolutional neural network (CNN), and the complex object information was recovered through the network training process.
Abstract: Fourier ptychography is a recently developed imaging approach for large field-of-view and high-resolution microscopy. Here we model the Fourier ptychographic forward imaging process using a convolutional neural network (CNN) and recover the complex object information in a network training process. In this approach, the input of the network is the point spread function in the spatial domain or the coherent transfer function in the Fourier domain. The object is treated as 2D learnable weights of a convolutional or a multiplication layer. The output of the network is modeled as the loss function we aim to minimize. The batch size of the network corresponds to the number of captured low-resolution images in one forward/backward pass. We use a popular open-source machine learning library, TensorFlow, for setting up the network and conducting the optimization process. We analyze the performance of different learning rates, different solvers, and different batch sizes. It is shown that a large batch size with the Adam optimizer achieves the best performance in general. To accelerate the phase retrieval process, we also discuss a strategy to implement Fourier-magnitude projection using a multiplication neural network model. Since convolution and multiplication are the two most-common operations in imaging modeling, the reported approach may provide a new perspective to examine many coherent and incoherent systems. As a demonstration, we discuss the extensions of the reported networks for modeling single-pixel imaging and structured illumination microscopy (SIM). 4-frame resolution doubling is demonstrated using a neural network for SIM. The link between imaging systems and neural network modeling may enable the use of machine-learning hardware such as neural engine and tensor processing unit for accelerating the image reconstruction process. We have made our implementation code open-source for researchers.

79 citations
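The forward model that the paper maps onto a network can be sketched directly in numpy: each captured low-resolution image is the intensity of the object spectrum, shifted by the illumination angle and filtered by the coherent transfer function. The pupil size and illumination shifts below are illustrative assumptions, not the paper's experimental parameters:

```python
import numpy as np

n = 64
rng = np.random.default_rng(1)
obj = rng.random((n, n)) * np.exp(1j * rng.random((n, n)))  # complex object

fx = np.fft.fftfreq(n)
FX, FY = np.meshgrid(fx, fx)
na = 0.15
ctf = (FX**2 + FY**2 <= na**2).astype(float)   # circular pupil (CTF)

def lowres_image(obj, kx, ky):
    """One forward pass: shift the object spectrum (illumination angle),
    apply the coherent transfer function, return the captured intensity."""
    spec = np.roll(np.fft.fft2(obj), (ky, kx), axis=(0, 1))
    return np.abs(np.fft.ifft2(spec * ctf))**2

# a "batch" of captured images, one per LED illumination angle
batch = np.stack([lowres_image(obj, kx, ky)
                  for kx in (-8, 0, 8) for ky in (-8, 0, 8)])

# intensity spectra are band-limited to twice the pupil radius (2*na = 0.3)
spec4 = np.abs(np.fft.fft2(batch[4]))
leak = spec4[FX**2 + FY**2 > 0.32**2].max() / spec4.max()
```

In the network view, each element of `batch` is one sample of a training batch, and gradient descent on the mismatch between predicted and measured intensities updates the object "weights".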


Journal ArticleDOI
TL;DR: This work proposes to measure intensity transmission matrices or the point spread function of diffusers via spatial correlation, with no scanning or interferometric detection required, and substitutes time-consuming iterative algorithms with a fast cross-correlation deconvolution method to greatly reduce the time needed for image reconstruction.
Abstract: We propose to measure intensity transmission matrices or point-spread-function (PSF) of diffusers via spatial-correlation, with no scanning or interferometric detection required. With the measured PSF, we report optical imaging based on the memory effect that allows tracking of moving objects through a scattering medium. Our technique enlarges the limited effective range of traditional imaging techniques based on the memory effect, and substitutes time-consuming iterative algorithms by a fast cross-correlation deconvolution method to greatly reduce time consumption for image reconstruction.

68 citations
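The cross-correlation deconvolution described above can be sketched as follows: within the memory-effect range the camera image is the object convolved with the speckle PSF, so correlating the image with the measured, mean-subtracted PSF produces sharp peaks at the source positions (the speckle autocorrelation is sharply peaked). Sizes and source positions are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 128
psf = rng.random((n, n))                # speckle-like intensity PSF of the diffuser
obj = np.zeros((n, n))
obj[40, 50] = 1.0                       # two point-like sources behind the medium
obj[80, 90] = 0.7

# camera image: object convolved with the speckle PSF (memory-effect regime)
img = np.real(np.fft.ifft2(np.fft.fft2(obj) * np.fft.fft2(psf)))

# cross-correlation deconvolution: subtracting the PSF mean suppresses the
# DC pedestal, leaving the sharp speckle-autocorrelation peaks at the sources
psf_z = psf - psf.mean()
xcorr = np.real(np.fft.ifft2(np.fft.fft2(img) * np.conj(np.fft.fft2(psf_z))))
peak = np.unravel_index(xcorr.argmax(), xcorr.shape)
```

The brighter source appears as the global maximum and the dimmer one as a secondary peak scaled by its relative intensity, which is what allows tracking moving objects through the medium.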


Journal ArticleDOI
TL;DR: A Tri-spot point spread function (PSF) is designed and implemented that simultaneously measures the three-dimensional orientation and the rotational mobility of dipole-like emitters across a large field of view and detects rotational dynamics of single molecules within a polymer thin film that are not observable by conventional SMLM.
Abstract: Fluorescence photons emitted by single molecules contain rich information regarding their rotational motions, but adapting single-molecule localization microscopy (SMLM) to measure their orientations and rotational mobilities with high precision remains a challenge. Inspired by dipole radiation patterns, we design and implement a Tri-spot point spread function (PSF) that simultaneously measures the three-dimensional orientation and the rotational mobility of dipole-like emitters across a large field of view. We show that the orientation measurements done using the Tri-spot PSF are sufficiently accurate to correct the anisotropy-based localization bias, from 30 nm to 7 nm, in SMLM. We further characterize the emission anisotropy of fluorescent beads, revealing that both 20-nm and 100-nm diameter beads emit light significantly differently from isotropic point sources. Exciting 100-nm beads with linearly polarized light, we observe significant depolarization of the emitted fluorescence using the Tri-spot PSF that is difficult to detect using other methods. Finally, we demonstrate that the Tri-spot PSF detects rotational dynamics of single molecules within a polymer thin film that are not observable by conventional SMLM.

64 citations


Journal ArticleDOI
20 Jul 2018
TL;DR: By continuous monitoring of freely swimming zebrafish larvae in a 3D region, it is demonstrated that the new approach enables significantly increasing the volumetric imaging rate by using a fraction of the tomographic projections without compromising the reconstructed image quality.
Abstract: State-of-the-art optoacoustic tomographic imaging systems have been shown to attain three-dimensional (3D) frame rates of the order of 100 Hz. While such a high volumetric imaging speed is beyond reach for other bio-imaging modalities, it may still be insufficient to accurately monitor some faster events occurring on a millisecond scale. Increasing the 3D imaging rate is usually hampered by the limited throughput capacity of the data acquisition electronics and memory used to capture vast amounts of the generated optoacoustic (OA) data in real time. Herein, we developed a sparse signal acquisition scheme and a total-variation-based reconstruction approach in a combined space–time domain in order to achieve 3D OA imaging at kilohertz rates. By continuous monitoring of freely swimming zebrafish larvae in a 3D region, we demonstrate that the new approach enables significantly increasing the volumetric imaging rate by using a fraction of the tomographic projections without compromising the reconstructed image quality. The suggested method may benefit studies looking at ultrafast biological phenomena in 3D, such as large-scale neuronal activity, cardiac motion, or freely behaving organisms.

63 citations
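The total-variation regularization used in the reconstruction can be illustrated with a minimal denoising sketch: gradient descent on a data-fidelity term plus an anisotropic TV penalty. This is not the paper's combined space-time optoacoustic model; the scene, weight, and step size are illustrative assumptions:

```python
import numpy as np

def tv(x):
    """Anisotropic total variation."""
    return np.abs(np.diff(x, axis=1)).sum() + np.abs(np.diff(x, axis=0)).sum()

def tv_grad(x):
    """Subgradient of anisotropic TV: D^T sign(D x)."""
    gx = np.sign(np.diff(x, axis=1))
    gy = np.sign(np.diff(x, axis=0))
    g = np.zeros_like(x)
    g[:, :-1] -= gx; g[:, 1:] += gx
    g[:-1, :] -= gy; g[1:, :] += gy
    return g

rng = np.random.default_rng(3)
clean = np.zeros((32, 32)); clean[8:24, 8:24] = 1.0   # piecewise-constant scene
b = clean + 0.2 * rng.standard_normal((32, 32))       # noisy measurement
x = b.copy()
lam, step = 0.15, 0.1
for _ in range(200):
    # gradient step on 1/2 ||x - b||^2 + lam * TV(x)
    x = x - step * ((x - b) + lam * tv_grad(x))
```

In the paper the fidelity term instead couples the image sequence to the sparsely sampled tomographic projections, which is what lets a fraction of the projections suffice.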


Journal ArticleDOI
TL;DR: In this article, a semianalytic framework for calculating the postcoronagraph contrast in a closed-loop adaptive optics system under frozen-flow atmospheric turbulence is presented, including the contrast gains achievable with predictive control.
Abstract: The discovery of the exoplanet Proxima b highlights the potential for the coming generation of giant segmented mirror telescopes (GSMTs) to characterize terrestrial—potentially habitable—planets orbiting nearby stars with direct imaging. This will require continued development and implementation of optimized adaptive optics systems feeding coronagraphs on the GSMTs. Such development should proceed with an understanding of the fundamental limits imposed by atmospheric turbulence. Here, we seek to address this question with a semianalytic framework for calculating the postcoronagraph contrast in a closed-loop adaptive optics system. We do this starting with the temporal power spectra of the Fourier basis calculated assuming frozen flow turbulence, and then apply closed-loop transfer functions. We include the benefits of a simple predictive controller, which we show could provide over a factor of 1400 gain in raw point spread function contrast at 1 λ/D on bright stars, and more than a factor of 30 gain on an I=7.5 mag star such as Proxima. More sophisticated predictive control can be expected to improve this even further. Assuming a photon-noise limited observing technique such as high-dispersion coronagraphy, these gains in raw contrast will decrease integration times by the same large factors. Predictive control of atmospheric turbulence should therefore be seen as one of the key technologies that will enable ground-based telescopes to characterize terrestrial planets.

60 citations
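The benefit of predictive control can be seen in a toy closed-loop comparison: with a pure loop delay, applying the last available measurement leaves a lag error, while even a simple linear extrapolator cancels most of it, and the residual variance ratio translates directly into raw contrast gain. The turbulence-like signal and two-frame delay are illustrative assumptions, not the paper's Fourier-basis transfer-function analysis:

```python
import numpy as np

# turbulence-like phase signal, measured with a two-frame loop delay
t = np.arange(3000)
phase = np.sin(2 * np.pi * 0.01 * t) + 0.5 * np.sin(2 * np.pi * 0.003 * t + 1.0)

# integrator-like correction: apply the last available measurement
resid_hold = phase[2:] - phase[:-2]

# simple predictor: linear extrapolation from the two last measurements
pred = 2 * phase[1:-1] - phase[:-2]
resid_pred = phase[2:] - pred

rms_hold = np.sqrt(np.mean(resid_hold**2))
rms_pred = np.sqrt(np.mean(resid_pred**2))
# residual phase variance sets the raw PSF halo, so the variance ratio
# is (roughly) the achievable raw contrast gain
contrast_gain = (rms_hold / rms_pred)**2
```

For smooth (frozen-flow-like) signals the hold residual scales with the first difference and the predictive residual with the second difference, which is why the gain grows so quickly for slowly evolving turbulence.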


Journal ArticleDOI
TL;DR: This work shows that the resolution of quantum ghost imaging systems can be further degraded by reducing the strength of the spatial correlations inherent in the downconversion process.
Abstract: Quantum ghost imaging uses photon pairs produced from parametric downconversion to enable an alternative method of image acquisition. Information from either one of the photons does not yield an image, but an image can be obtained by harnessing the correlations between them. Here we present an examination of the resolution limits of such ghost imaging systems. In both conventional imaging and quantum ghost imaging the resolution of the image is limited by the point-spread function of the optics associated with the spatially resolving detector. However, whereas in conventional imaging systems the resolution is limited only by this point spread function, in ghost imaging we show that the resolution can be further degraded by reducing the strength of the spatial correlations inherent in the downconversion process.

55 citations
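The correlation-based image formation described above can be sketched with a classical computational surrogate: known random reference patterns stand in for one photon arm, a single "bucket" sum for the other, and correlating the two recovers the object even though neither signal alone is an image. Pattern count and object are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 8, 4000                               # image size, number of patterns
obj = np.zeros((n, n)); obj[2:5, 3:6] = 1.0  # transmissive object

patterns = rng.random((m, n, n))             # known reference-arm patterns
bucket = (patterns * obj).sum(axis=(1, 2))   # bucket detector: no spatial info

# neither arm alone yields an image; correlating the fluctuations does
ghost = ((bucket - bucket.mean())[:, None, None] * patterns).mean(axis=0)
corr = np.corrcoef(ghost.ravel(), obj.ravel())[0, 1]
```

In the quantum case the pattern correlations come from downconverted photon pairs, and the paper's point is that weakening those spatial correlations blurs `ghost` beyond the detector's own point spread function.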


Journal ArticleDOI
TL;DR: Rapid focusing with sensorless aberration correction, based on machine learning, is demonstrated in this paper and offers great potential for real-time in vivo imaging in the biological sciences.
Abstract: Non-invasive, real-time imaging and deep focus into tissue are in high demand in biomedical research. However, the aberration that is introduced by the refractive index inhomogeneity of biological tissue hinders the way forward. A rapid focusing with sensor-less aberration corrections, based on machine learning, is demonstrated in this paper. The proposed method applies the Convolutional Neural Network (CNN), which can rapidly calculate the low-order aberrations from the point spread function images with Zernike modes after training. The results show that approximately 90 percent correction accuracy can be achieved. The average mean square error of each Zernike coefficient in 200 repetitions is 0.06. Furthermore, the aberration induced by 1-mm-thick phantom samples and 300-µm-thick mouse brain slices can be efficiently compensated through loading a compensation phase on an adaptive element placed at the back-pupil plane. The phase reconstruction requires less than 0.2 s. Therefore, this method offers great potential for in vivo real-time imaging in biological science.

54 citations
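Training data for such a network pairs Zernike phase maps with the resulting PSF images. A minimal sketch of that forward step, using Noll-normalized defocus on a circular pupil; the pupil sampling and aberration amplitude are illustrative assumptions:

```python
import numpy as np

n = 128
x = np.linspace(-1, 1, n)
X, Y = np.meshgrid(x, x)
R2 = X**2 + Y**2
pupil = (R2 <= 1.0).astype(float)

def psf_from_defocus(a4):
    """PSF for Noll-normalized Zernike defocus Z4 = sqrt(3)*(2r^2 - 1),
    with coefficient a4 in radians RMS over the pupil."""
    phase = a4 * np.sqrt(3) * (2 * R2 - 1)
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fftshift(np.fft.fft2(field)))**2
    return psf / psf.sum()

psf_ideal = psf_from_defocus(0.0)
psf_aber = psf_from_defocus(2.0)           # 2 rad RMS of defocus
strehl = psf_aber.max() / psf_ideal.max()  # aberration lowers the peak
```

A CNN trained on many such (PSF image, coefficient) pairs can then invert this map, and the negated coefficients become the compensation phase loaded onto the adaptive element.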


Journal ArticleDOI
TL;DR: The use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning is explored, and it is shown that information from the transform domains can improve the performance and robustness of the autofocusing process.
Abstract: A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and system throughput of WSI is largely determined by the autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate the autofocusing performance with three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features of the captured images. However, solely relying on the spatial information leads to relatively poor performance of the autofocusing process. It is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance. Similarly, autocorrelation peaks are directly related to the defocus distance for two-plane-wave illumination. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as the inputs for the CNNs. We show that the information from the transform domains can improve the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems and the images can be captured on the fly without focus map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks. We have made our training and testing data set (~12 GB) open-source for the broad research community.

54 citations
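The transform-domain inputs described above are cheap to compute: the Fourier magnitude spectrum, whose cutoff tracks defocus under incoherent illumination, and the spatial autocorrelation, whose side peaks track defocus under two-plane-wave illumination, obtained here via the Wiener-Khinchin relation. A sketch with a random stand-in image:

```python
import numpy as np

def autofocus_features(img):
    """Transform-domain CNN inputs: Fourier magnitude spectrum and the
    spatial autocorrelation (inverse FFT of the power spectrum)."""
    F = np.fft.fft2(img)
    spec = np.fft.fftshift(np.abs(F))
    acorr = np.fft.fftshift(np.real(np.fft.ifft2(np.abs(F)**2)))
    return spec, acorr

rng = np.random.default_rng(5)
img = rng.random((64, 64))                 # stand-in for a captured tile
spec, acorr = autofocus_features(img)
center = np.unravel_index(acorr.argmax(), acorr.shape)
```

Stacking `img`, `spec`, and `acorr` as input channels is one way to realize the multi-domain input combinations the paper evaluates.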


Journal ArticleDOI
Takeshi Shimano1, Yusuke Nakamura1, Kazuyuki Tajima1, Mayu Sao1, Taku Hoshizawa1 
TL;DR: A new type of lensless camera enabling light-field imaging for refocusing after image capture is proposed, and its imaging principle is found to be closely analogous to a coherent hologram.
Abstract: We propose a new type of lensless camera enabling light-field imaging for refocusing after image capture and demonstrate its feasibility with prototypes. The camera basically consists only of an image sensor and a Fresnel zone aperture (FZA). Point sources making up the subjects to be captured cast overlapping shadows of the FZA on the sensor, which result in overlapping straight moiré fringes upon multiplication by a virtual FZA in the computer. An image is then reconstructed from the fringes by a two-dimensional fast Fourier transform. Refocusing is possible by adjusting the size of the virtual FZA. We found this imaging principle to be closely analogous to a coherent hologram. Both still-camera and video-camera operation are confirmed experimentally using the prototyped cameras.
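The imaging principle can be simulated in a few lines: a point source casts a shifted FZA shadow, multiplying by a virtual FZA in software yields straight moiré fringes, and a 2D FFT maps the fringe frequency (hence the source position) to a spectral peak. The chirp rate and shift below are illustrative assumptions chosen to avoid aliasing on the toy sensor:

```python
import numpy as np

n, beta = 256, 0.01                     # sensor size (px), FZA chirp rate (rad/px^2)
y, x = np.mgrid[:n, :n] - n // 2

def fza(dx=0.0):
    """Fresnel zone aperture transmission, optionally shifted along x."""
    return 0.5 * (1 + np.cos(beta * ((x - dx)**2 + y**2)))

d = 20                                  # shadow shift cast by an off-axis point source
sensor = fza(dx=d)                      # FZA shadow recorded on the sensor
moire = sensor * fza()                  # multiply by the virtual FZA in software

spec = np.abs(np.fft.fft2(moire))
spec[:3, :3] = 0; spec[:3, -2:] = 0     # mask the DC neighborhood
spec[-2:, :3] = 0; spec[-2:, -2:] = 0
ky, kx = np.unravel_index(spec.argmax(), spec.shape)
# the product contains a linear fringe cos(beta*(d^2 - 2*d*x)), i.e. a
# spatial frequency of 2*beta*d/(2*pi) cycles/px along x
expected_bin = 2 * beta * d * n / (2 * np.pi)   # ~16.3 for these parameters
```

The chirp cross-terms spread their energy across the whole spectrum, so the concentrated linear-fringe peak dominates; rescaling the virtual FZA before the multiplication is what implements the refocusing.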

Journal ArticleDOI
TL;DR: This work presents a fast and model-free 2D and 3D single-molecule localization algorithm that allows more than 3 × 10^6 localizations per second to be calculated on a standard multi-core central processing unit with localization accuracies in line with the most accurate algorithms currently available.
Abstract: We present a fast and model-free 2D and 3D single-molecule localization algorithm that allows more than 3 × 10^6 localizations per second to be calculated on a standard multi-core central processing unit with localization accuracies in line with the most accurate algorithms currently available. Our algorithm converts the region of interest around a point spread function to two phase vectors (phasors) by calculating the first Fourier coefficients in both the x- and y-direction. The angles of these phasors are used to localize the center of the single fluorescent emitter, and the ratio of the magnitudes of the two phasors is a measure for astigmatism, which can be used to obtain depth information (z-direction). Our approach can be used both as a stand-alone algorithm for maximizing localization speed and as a first estimator for more time-consuming iterative algorithms.
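The phasor localization described above is compact enough to sketch in full: compute the first Fourier coefficients of the ROI along x and y; their phase angles encode the emitter center. The ROI size and spot parameters below are illustrative assumptions:

```python
import numpy as np

def phasor_localize(roi):
    """Phasor-based localization: the phase angles of the first Fourier
    coefficients along x and y give the emitter position in the ROI."""
    n = roi.shape[0]
    fx = (roi * np.exp(-2j * np.pi * np.arange(n)[None, :] / n)).sum()
    fy = (roi * np.exp(-2j * np.pi * np.arange(n)[:, None] / n)).sum()
    # for a symmetric spot centered at c, the first-coefficient phase
    # is -2*pi*c/n, so invert it (modulo n)
    x0 = (-np.angle(fx) * n / (2 * np.pi)) % n
    y0 = (-np.angle(fy) * n / (2 * np.pi)) % n
    return x0, y0

# Gaussian spot at a known sub-pixel position
n, cx, cy, s = 11, 5.3, 4.6, 1.2
yy, xx = np.mgrid[:n, :n]
roi = np.exp(-((xx - cx)**2 + (yy - cy)**2) / (2 * s**2))
x0, y0 = phasor_localize(roi)
```

Everything reduces to two complex sums per ROI, which is why the method reaches millions of localizations per second; the magnitude ratio |fx|/|fy| additionally reports astigmatism for the z-estimate.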

Journal ArticleDOI
TL;DR: The results show an improvement in the imaging performance of the QT ultrasound system in comparison to earlier ultrasound tomography systems, an improvement that is relevant to clinical applications of the system, such as breast imaging.
Abstract: PURPOSE Quantitative Transmission (QT) ultrasound has shown promise as a breast imaging modality. This study characterizes the performance of the latest generation of QT ultrasound scanners: the QT Scanner 2000. METHODS The scanner consists of a 2048-element ultrasound receiver array for transmission imaging and three transceivers for reflection imaging. Custom fabricated phantoms were used to quantify the imaging performance parameters. The specific performance parameters that have been characterized are spatial resolution (as point spread function), linear measurement accuracy, contrast to noise ratio, and image uniformity, in both transmission and reflection imaging modalities. RESULTS The intrinsic in-plane resolution was measured to be better than 1.5 mm and 1.0 mm for the transmission and reflection modalities, respectively. The linear measurement accuracy was measured to be, on average, approximately 1% for both modalities. Speed of sound image uniformity and measurement accuracy were calculated to be 99.5% and <0.2%, respectively. Contrast to noise ratio (CNR) measurements vary as a function of object size. CONCLUSIONS The results show an improvement in the imaging performance of the system in comparison to earlier ultrasound tomography systems, an improvement that is relevant to clinical applications of the system, such as breast imaging.

Journal ArticleDOI
TL;DR: This work presents a calibration and alignment protocol for fluorescence microscopes equipped with a spatial light modulator (SLM) with the goal of establishing a wavefront error well below the diffraction limit for optimum application of complex engineered PSFs.
Abstract: Point spread function (PSF) engineering is used in single emitter localization to measure the emitter position in 3D and possibly other parameters such as the emission color or dipole orientation as well. Advanced PSF models such as spline fits to experimental PSFs or the vectorial PSF model can be used in the corresponding localization algorithms in order to model the intricate spot shape and deformations correctly. The complexity of the optical architecture and fit model makes PSF engineering approaches particularly sensitive to optical aberrations. Here, we present a calibration and alignment protocol for fluorescence microscopes equipped with a spatial light modulator (SLM) with the goal of establishing a wavefront error well below the diffraction limit for optimum application of complex engineered PSFs. We achieve high-precision wavefront control, to a level below 20 mλ wavefront aberration over a 30 minute time window after the calibration procedure, using a separate light path for calibrating the pixel-to-pixel variations of the SLM, and alignment of the SLM with respect to the optical axis and Fourier plane within 3 μm (x/y) and 100 μm (z) error. Aberrations are retrieved from a fit of the vectorial PSF model to a bead z-stack and compensated with a residual wavefront error comparable to the error of the SLM calibration step. This well-calibrated and corrected setup makes it possible to create complex '3D+λ' PSFs that fit very well to the vectorial PSF model. Proof-of-principle bead experiments show precisions below 10 nm in x, y, and λ, and below 20 nm in z over an axial range of 1 μm with 2000 signal photons and 12 background photons.

Journal ArticleDOI
TL;DR: This paper systematically quantifies the 3D spatial resolution of ODT by exploiting the spatial bandwidth of the reconstructed scattering potential and provides the spatial resolution along arbitrarily sliced angles.
Abstract: Optical diffraction tomography (ODT) is a three-dimensional (3D) quantitative phase imaging technique, which enables the reconstruction of the 3D refractive index (RI) distribution of a transparent sample. Due to its fast, non-invasive, and quantitative imaging capability, ODT has emerged as a powerful tool for various applications. However, the spatial resolution of ODT has only been quantified along the lateral and axial directions for limited conditions; it has not been investigated for arbitrary-oblique directions. In this paper, we systematically quantify the 3D spatial resolution of ODT by exploiting the spatial bandwidth of the reconstructed scattering potential. The 3D spatial resolution is calculated for various types of systems, including the illumination-scanning, sample-rotation, and hybrid scanning-rotation methods. In particular, using the calculated 3D spatial resolution, we provide the spatial resolution along arbitrarily sliced angles. Furthermore, to validate the present method, the point spread function of an ODT system is experimentally obtained using the deconvolution of a 3D RI distribution of a microsphere and is compared with the calculated resolution.

Journal ArticleDOI
TL;DR: The compact and lightweight probe design makes it suitable for minimally-invasive in-vivo imaging as a potential alternative to surgical biopsies and allows us to increase the imaging depth.
Abstract: We present the design, implementation and performance analysis of a compact multi-photon endoscope based on a piezo electric scanning tube. A miniature objective lens with a long working distance and a high numerical aperture (≈ 0.5) is designed to provide a diffraction limited spot size. Furthermore, a 1700 nm wavelength femtosecond fiber laser is used as an excitation source to overcome the scattering of biological tissues and reduce water absorption. Therefore, the novel optical system along with the unique wavelength allows us to increase the imaging depth. We demonstrate that the endoscope is capable of performing third and second harmonic generation (THG/SHG) and three-photon excitation fluorescence (3PEF) imaging over a large field of view (> 400 μm) with high lateral resolution (2.2 μm). The compact and lightweight probe design makes it suitable for minimally-invasive in-vivo imaging as a potential alternative to surgical biopsies.

Journal ArticleDOI
TL;DR: This work proposes a method to break the dichotomy between high-detail and low-detail imaging systems by carefully mixing corrected low-frequency and high-frequency data in a way that eliminates the edge effect, and shows that this technique can quantify cell growth in large populations without the need for thresholds and system variant calibration.
Abstract: As a label-free, nondestructive method, phase contrast is by far the most popular microscopy technique for routine inspection of cell cultures. However, features of interest such as extensions near cell bodies are often obscured by a glow, which came to be known as the halo. Advances in modeling image formation have shown that this artifact is due to the limited spatial coherence of the illumination. Nevertheless, the same incoherent illumination is responsible for superior sensitivity to fine details in the phase contrast geometry. Thus, there exists a trade-off between high-detail (incoherent) and low-detail (coherent) imaging systems. In this work, we propose a method to break this dichotomy, by carefully mixing corrected low-frequency and high-frequency data in a way that eliminates the edge effect. Specifically, our technique is able to remove halo artifacts at video rates, requiring no manual interaction or a priori point spread function measurements. To validate our approach, we imaged standard spherical beads, sperm cells, tissue slices, and red blood cells. We demonstrate real-time operation with a time evolution study of adherent neuron cultures whose neurites are revealed by our halo correction. We show that with our novel technique, we can quantify cell growth in large populations, without the need for thresholds and system variant calibration.

Journal ArticleDOI
20 Oct 2018
TL;DR: In this paper, the authors identify a class of point spread functions with a linear information decrease and show that any well-behaved symmetric PSF can be converted into such a form with a simple nonabsorbing signum filter.
Abstract: It has been argued that, for a spatially invariant imaging system, the information one can gain about the separation of two incoherent point sources decays quadratically to zero with decreasing separation. The effect is termed Rayleigh's curse. Contrary to this belief, we identify a class of point-spread functions (PSFs) with a linear information decrease. Moreover, we show that any well-behaved symmetric PSF can be converted into such a form with a simple nonabsorbing signum filter. We experimentally demonstrate significant superresolution capabilities based on this idea.

Journal ArticleDOI
TL;DR: Results from a series of phantom materials suggest that H-scan imaging with spatial angular compounding more accurately reflects the true scatterer size, owing to reductions in the system point spread function and an improved signal-to-noise ratio.
Abstract: H-Scan is a new ultrasound imaging technique that relies on matching a model of pulse-echo formation to the mathematics of a class of Gaussian-weighted Hermite polynomials. This technique may be beneficial in the measurement of relative scatterer sizes and in cancer therapy, particularly for early response to drug treatment. Because current H-scan techniques use focused ultrasound data acquisitions, spatial resolution degrades away from the focal region and inherently affects relative scatterer size estimation. Although the resolution of ultrasound plane wave imaging can be inferior to that of traditional focused ultrasound approaches, the former exhibits a homogeneous spatial resolution throughout the image plane. The purpose of this study was to implement H-scan using plane wave imaging and investigate the impact of spatial angular compounding on H-scan image quality. Parallel convolution filters using two different Gaussian-weighted Hermite polynomials that describe ultrasound scattering events are applied to the radiofrequency data. The H-scan processing is done on each radiofrequency image plane before averaging to get the angular compounded image. The relative strength from each convolution is color-coded to represent relative scatterer size. Given results from a series of phantom materials, H-scan imaging with spatial angular compounding more accurately reflects the true scatterer size, owing to reductions in the system point spread function and improved signal-to-noise ratio. Preliminary in vivo H-scan imaging of tumor-bearing animals suggests this modality may be useful for monitoring early response to chemotherapeutic treatment. Overall, H-scan imaging using ultrasound plane waves and spatial angular compounding is a promising approach for visualizing the relative size and distribution of acoustic scattering sources.
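The parallel convolution filters at the heart of H-scan can be sketched as two Gaussian-weighted Hermite kernels of different order, with the relative channel strength serving as the color code: lower-frequency echoes (larger scatterers) weight toward the low-order channel. The orders, sampling rate, and pulse frequencies below are illustrative assumptions, not the paper's transducer parameters:

```python
import numpy as np
from numpy.polynomial.hermite import Hermite

t = np.linspace(-4, 4, 81)
gh2 = Hermite.basis(2)(t) * np.exp(-t**2 / 2)   # low-order, low-frequency kernel
gh8 = Hermite.basis(8)(t) * np.exp(-t**2 / 2)   # high-order, high-frequency kernel
gh2 /= np.linalg.norm(gh2)
gh8 /= np.linalg.norm(gh8)

def hscan_ratio(rf):
    """Relative strength of the low-order channel; maps to the color code
    (toward 1: larger scatterers / 'red'; toward 0: smaller / 'blue')."""
    e2 = np.linalg.norm(np.convolve(rf, gh2, mode="same"))
    e8 = np.linalg.norm(np.convolve(rf, gh8, mode="same"))
    return e2 / (e2 + e8)

fs = 20.0                                  # assumed samples per microsecond
tt = np.arange(-100, 100) / fs
echo_lo = np.cos(2 * np.pi * 1.0 * tt) * np.exp(-tt**2)  # lower-frequency echo
echo_hi = np.cos(2 * np.pi * 3.0 * tt) * np.exp(-tt**2)  # higher-frequency echo
r_lo = hscan_ratio(echo_lo)
r_hi = hscan_ratio(echo_hi)
```

Because the spectral ratio of the two kernels falls monotonically with frequency, the ratio orders echoes by their frequency content regardless of overall amplitude; compounding then averages this map over steering angles.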

Journal ArticleDOI
TL;DR: The first demonstration of three-photon excitation light-sheet fluorescence microscopy is presented, using a conventional femtosecond pulsed laser at 1000 nm wavelength for the imaging of 450 μm diameter cellular spheroids.
Abstract: We present the first demonstration of three-photon excitation light-sheet fluorescence microscopy. Light-sheet fluorescence microscopy in single- and two-photon modes has emerged as a powerful wide-field, low-photodamage technique for fast volumetric imaging of biological samples. We extend this imaging modality to the three-photon regime, enhancing its penetration depth. Our present study uses a conventional femtosecond pulsed laser at 1000 nm wavelength for the imaging of 450 μm diameter cellular spheroids. In addition, we show, experimentally and through numerical simulations, the potential advantages in three-photon light-sheet microscopy of using propagation-invariant Bessel beams in preference to Gaussian beams.

Journal ArticleDOI
Jörg Herbel1, Tomasz Kacprzak1, Adam Amara1, Alexandre Refregier1, Aurelien Lucchi1 
TL;DR: In this paper, a convolutional neural network (CNN) is used to estimate the free parameters of the point spread function (PSF) model from noisy images of the PSF.
Abstract: Modeling the Point Spread Function (PSF) of wide-field surveys is vital for many astrophysical applications and cosmological probes including weak gravitational lensing. The PSF smears the image of any recorded object and therefore needs to be taken into account when inferring properties of galaxies from astronomical images. In the case of cosmic shear, the PSF is one of the dominant sources of systematic errors and must be treated carefully to avoid biases in cosmological parameters. Recently, forward modeling approaches to calibrate shear measurements within the Monte-Carlo Control Loops (MCCL) framework have been developed. These methods typically require simulating a large amount of wide-field images, thus, the simulations need to be very fast yet have realistic properties in key features such as the PSF pattern. Hence, such forward modeling approaches require a very flexible PSF model, which is quick to evaluate and whose parameters can be estimated reliably from survey data. We present a PSF model that meets these requirements based on a fast deep-learning method to estimate its free parameters. We demonstrate our approach on publicly available SDSS data. We extract the most important features of the SDSS sample via principal component analysis. Next, we construct our model based on perturbations of a fixed base profile, ensuring that it captures these features. We then train a Convolutional Neural Network to estimate the free parameters of the model from noisy images of the PSF. This allows us to render a model image of each star, which we compare to the SDSS stars to evaluate the performance of our method. We find that our approach is able to accurately reproduce the SDSS PSF at the pixel level, which, due to the speed of both the model evaluation and the parameter estimation, offers good prospects for incorporating our method into the MCCL framework.
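The PCA step used to extract the dominant PSF features can be sketched with an SVD over star cutouts; the leading singular vectors capture the main shape variations that the perturbed base profile must reproduce. The synthetic Gaussian "stars" below are an illustrative stand-in for SDSS cutouts:

```python
import numpy as np

rng = np.random.default_rng(6)
n_stars, npix = 500, 15
yy, xx = np.mgrid[:npix, :npix] - npix // 2

# synthetic stars: Gaussian profiles with randomly varying width/ellipticity
stars = np.empty((n_stars, npix * npix))
for i in range(n_stars):
    sx, sy = 2.0 + 0.3 * rng.standard_normal(2)
    stars[i] = np.exp(-(xx**2 / (2 * sx**2) + yy**2 / (2 * sy**2))).ravel()

# PCA via SVD of the mean-subtracted cutout matrix
mean = stars.mean(axis=0)
u, s, vt = np.linalg.svd(stars - mean, full_matrices=False)
explained = s**2 / np.sum(s**2)      # variance fraction per component
```

A handful of components dominates when the variations are smooth perturbations of one base shape, which is the situation the paper's fixed-base-profile model exploits.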

Journal ArticleDOI
TL;DR: Here it is shown that reference objects that are both spatially and spectrally separated from the object of interest can be used to obtain an approximation of the point spread function.
Abstract: Incoherently illuminated or luminescent objects give rise to a low-contrast speckle-like pattern when observed through a thin diffusive medium, as such a medium effectively convolves their shape with a speckle-like point spread function (PSF). This point spread function can be extracted in the presence of a reference object of known shape. Here it is shown that reference objects that are both spatially and spectrally separated from the object of interest can be used to obtain an approximation of the point spread function. The crucial observation, corroborated by analytical calculations, is that the spectrally shifted point spread function is strongly correlated to a spatially scaled one. With the approximate point spread function thus obtained, the speckle-like pattern is deconvolved to produce a clear and sharp image of the object on a speckle-like background of low intensity.
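The deconvolution step can be sketched with a standard Wiener filter. The paper's insight is that a spectrally shifted reference PSF, once spatially scaled, serves as the PSF estimate; the sketch accepts any same-shape estimate, and nsr is an assumed regularisation constant.

```python
import numpy as np

def wiener_deconvolve(speckle_img, psf_estimate, nsr=1e-2):
    """Deconvolve a speckle-like camera image with an (approximate) PSF.

    nsr is an assumed noise-to-signal regulariser; the PSF is expected
    centred in its frame, hence the ifftshift before transforming."""
    H = np.fft.fft2(np.fft.ifftshift(psf_estimate))
    G = np.fft.fft2(speckle_img)
    # Wiener filter: F = G H* / (|H|^2 + nsr)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(F))
```

With an approximate PSF, the reconstruction shows the object on a low-intensity speckle-like background, as the abstract describes.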

Journal ArticleDOI
TL;DR: This study proposes an SNR-efficient spatial-spectral sampling scheme using concentric circle echo planar trajectories (CONCEPT), adapted to intrinsically acquire a Hamming-weighted k-space and thus termed density-weighted-CONCEPT, which minimizes voxel bleeding while preserving an optimal SNR.
Abstract: PURPOSE Full-slice magnetic resonance spectroscopic imaging at ≥7 T is especially vulnerable to lipid contamination arising from regions close to the skull. This contamination can be mitigated by improving the point spread function via higher spatial resolution sampling and k-space filtering, but this prolongs scan times and reduces the signal-to-noise ratio (SNR) efficiency. Currently applied parallel imaging methods accelerate magnetic resonance spectroscopic imaging scans at 7 T, but increase lipid artifacts and lower SNR efficiency further. In this study, we propose an SNR-efficient spatial-spectral sampling scheme using concentric circle echo planar trajectories (CONCEPT), which was adapted to intrinsically acquire a Hamming-weighted k-space, thus termed density-weighted-CONCEPT. This minimizes voxel bleeding while preserving an optimal SNR. THEORY AND METHODS Trajectories were theoretically derived and verified in phantoms as well as in the human brain via measurements of five volunteers (single-slice, field of view 220 × 220 mm², matrix 64 × 64, scan time 6 min) with free induction decay magnetic resonance spectroscopic imaging. Density-weighted-CONCEPT was compared to (a) the originally proposed CONCEPT with equidistant circles (here termed e-CONCEPT), (b) elliptical phase-encoding, and (c) 5-fold CAIPIRINHA (Controlled Aliasing In Parallel Imaging Results IN Higher Acceleration) accelerated elliptical phase-encoding. RESULTS By intrinsically sampling a Hamming-weighted k-space, density-weighted-CONCEPT removed Gibbs-ringing artifacts and had in vivo +9.5%, +24.4%, and +39.7% higher SNR than e-CONCEPT, elliptical phase-encoding, and the CAIPIRINHA-accelerated elliptical phase-encoding (all P < 0.05), respectively, which led to improved metabolic maps.
CONCLUSION Density-weighted-CONCEPT provides clinically attractive full-slice high-resolution magnetic resonance spectroscopic imaging with optimal SNR at 7T. Magn Reson Med 79:2874-2885, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
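A simplified sketch of the density-weighting idea, assuming a 1-D radial placement of circle radii (the actual trajectory design also fixes the samples per circle): circles crowd toward k = 0 so that the acquired k-space is intrinsically Hamming-weighted, rather than uniformly sampled and filtered afterwards at an SNR cost.

```python
import numpy as np

def hamming_circle_radii(n_circles, k_max=1.0):
    """Place concentric-circle radii so the radial sampling density
    follows a Hamming taper (density-weighted-CONCEPT idea, simplified).

    Equidistant circles (e-CONCEPT) sample k-space uniformly; here the
    radii are found by inverting the cumulative Hamming density."""
    k = np.linspace(0.0, k_max, 4096)
    density = 0.54 + 0.46 * np.cos(np.pi * k / k_max)  # Hamming window
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    # one radius per equal-probability bin of the target density
    targets = (np.arange(n_circles) + 0.5) / n_circles
    return np.interp(targets, cdf, k)
```

The resulting radii are densely spaced near the k-space centre and sparse near the edge, which is what suppresses Gibbs ringing without a post-hoc filter.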

Journal ArticleDOI
TL;DR: In this article, the atomic density distribution was resolved with a point spread function FWHM of 32(4) nm and a localization precision below 1 nm using a short optical pumping pulse.
Abstract: Super-resolution microscopy has revolutionized the fields of chemistry and biology by resolving features at the molecular level. Such techniques can be either "stochastic," gaining resolution through precise localization of point source emitters, or "deterministic," leveraging the nonlinear optical response of a sample to improve resolution. In atomic physics, deterministic methods can be applied to reveal the atomic wavefunction and to perform quantum control. Here we demonstrate super-resolution imaging based on nonlinear response of atoms to an optical pumping pulse. With this technique the atomic density distribution can be resolved with a point spread function FWHM of 32(4) nm and a localization precision below 1 nm. The short optical pumping pulse of 1.4 μs enables us to resolve fast atomic dynamics within a single lattice site. A byproduct of our scheme is the emergence of moiré patterns on the atomic cloud, which we show to be immensely magnified images of the atomic density in the lattice. Our work represents a general approach to accessing the physics of cold atoms at the nanometer scale, and can be extended to higher dimensional lattices and bulk systems for a variety of atomic and molecular species.
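The localization step can be sketched as a photon-weighted centroid with the shot-noise-limited precision σ/√N; the centroid estimator and the pixel size are assumptions for illustration, not the paper's exact analysis.

```python
import numpy as np

def localize(img, px_nm=50.0):
    """Centroid localization of a point-like emitter plus the
    shot-noise-limited precision sigma/sqrt(N) (Gaussian approximation).

    px_nm is an assumed pixel size; sub-nm precision follows from a
    narrow PSF and a large collected photon number N."""
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]
    n = img.sum()                      # total photon count N
    cx = (x * img).sum() / n
    cy = (y * img).sum() / n
    # per-axis width from the isotropic second moment
    var = ((x - cx) ** 2 + (y - cy) ** 2) * img
    sigma_px = np.sqrt(var.sum() / (2 * n))
    return (cx * px_nm, cy * px_nm), sigma_px * px_nm / np.sqrt(n)
```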

Journal ArticleDOI
TL;DR: This work simultaneously optimises spot size and its intensity relative to the sidebands for various fields of view, giving a set of best compromises for use in different imaging scenarios, and introduces a less computationally demanding approach suitable for real-time use in the laboratory.
Abstract: Optical superoscillatory imaging, allowing unlabelled far-field super-resolution, has in recent years become reality. Instruments have been built and their super-resolution imaging capabilities demonstrated. The question is no longer whether this can be done, but how well: what resolution is practically achievable? Numerous works have optimised various particular features of superoscillatory spots, but in order to probe the limits of superoscillatory imaging we need to simultaneously optimise all the important spot features: those that define the resolution of the system. We simultaneously optimise spot size and its intensity relative to the sidebands for various fields of view, giving a set of best compromises for use in different imaging scenarios. Our technique uses the circular prolate spheroidal wave functions as a basis set on the field of view, and the optimal combination of these, representing the optimal spot, is found using a multi-objective genetic algorithm. We then introduce a less computationally demanding approach suitable for real-time use in the laboratory which, crucially, allows independent control of spot size and field of view. Imaging simulations demonstrate the resolution achievable with these spots. We show a three-order-of-magnitude improvement in the efficiency of focusing to achieve the same resolution as previously reported results, or a 26% increase in resolution for the same efficiency of focusing.
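A candidate spot in such a multi-objective search is scored on the competing objectives. The sketch below evaluates two of them, central-spot size (first-minimum radius) and peak-to-sideband intensity ratio within the field of view, for a 1-D radial intensity profile; function and argument names are illustrative, not the paper's.

```python
import numpy as np

def spot_metrics(intensity, fov_radius_idx):
    """Score a candidate focal-spot intensity profile I(r) on the two
    objectives a multi-objective search trades off.

    intensity: 1-D radial profile sampled from r = 0 outward.
    fov_radius_idx: index marking the edge of the field of view."""
    # the first local minimum away from r = 0 bounds the central spot
    d = np.diff(intensity)
    first_min = next(i for i in range(1, len(d)) if d[i - 1] < 0 <= d[i])
    # strongest sideband inside the field of view
    sideband = intensity[first_min:fov_radius_idx].max()
    return first_min, intensity[0] / sideband
```

A genetic algorithm would evaluate these metrics for each candidate combination of basis functions and keep the Pareto-optimal compromises.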

Journal ArticleDOI
TL;DR: The results will allow future research efforts to assess the number of pixels, pixel size, pixel-well depth, and read-noise standard deviation needed from a focal-plane array when using digital-holographic detection in the off-axis pupil plane recording geometry to estimate the complex optical field in the presence of deep turbulence and detection noise.
Abstract: This paper uses wave-optics and signal-to-noise models to explore the estimation accuracy of digital-holographic detection in the off-axis pupil plane recording geometry for deep-turbulence wavefront sensing. In turn, the analysis examines three important parameters: the number of pixels across the width of the focal-plane array, the window radius in the Fourier plane, and the signal-to-noise ratio. By varying these parameters, the wave-optics and signal-to-noise models quantify performance via a metric referred to as the field-estimated Strehl ratio, and the analysis leads to a method for optimal windowing of the turbulence-limited point spread function. Altogether, the results will allow future research efforts to assess the number of pixels, pixel size, pixel-well depth, and read-noise standard deviation needed from a focal-plane array when using digital-holographic detection in the off-axis pupil plane recording geometry for estimating the complex optical field in the presence of deep turbulence and detection noise.
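The field-estimated Strehl ratio and the Fourier-plane windowing can be sketched as follows. The window centre and radius are free parameters, matching the paper's point that the window radius must be chosen carefully; the hologram model in the test (an off-axis plane-wave reference) is an assumption for illustration.

```python
import numpy as np

def field_strehl(E_est, E_true):
    """Field-estimated Strehl ratio: normalized inner product between the
    estimated and true complex fields (1.0 = perfect estimate)."""
    num = np.abs(np.vdot(E_true, E_est)) ** 2
    return num / (np.vdot(E_est, E_est).real * np.vdot(E_true, E_true).real)

def window_fourier(holo, center, radius):
    """Isolate one off-axis cross term: apply a circular window of given
    radius around `center` in the Fourier plane, then shift it to DC."""
    F = np.fft.fftshift(np.fft.fft2(holo))
    ky, kx = np.mgrid[:F.shape[0], :F.shape[1]]
    mask = (ky - center[0]) ** 2 + (kx - center[1]) ** 2 <= radius ** 2
    cut = np.roll(F * mask, (F.shape[0] // 2 - center[0],
                             F.shape[1] // 2 - center[1]), axis=(0, 1))
    return np.fft.ifft2(np.fft.ifftshift(cut))
```

Too small a window truncates the turbulence-limited PSF; too large a window admits the autocorrelation terms and noise, which is the trade-off the paper quantifies.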

Journal ArticleDOI
TL;DR: NR-EBE motion-compensated image reconstruction appears to be a promising tool for lesion detection and quantification when imaging thoracic and abdominal regions using PET.
Abstract: Respiratory motion during positron emission tomography (PET)/computed tomography (CT) imaging can cause significant image blurring and underestimation of tracer concentration for both static and dynamic studies. In this paper, with the aim to eliminate both intra-cycle and inter-cycle motions, and apply to dynamic imaging, we developed a non-rigid event-by-event (NR-EBE) respiratory motion-compensated list-mode reconstruction algorithm. The proposed method consists of two components: the first component estimates a continuous non-rigid motion field of the internal organs using the internal–external motion correlation. This continuous motion field is then incorporated into the second component, the non-rigid MOLAR (NR-MOLAR) reconstruction algorithm, to deform the system matrix to the reference location where the attenuation CT is acquired. The point spread function (PSF) and time-of-flight (TOF) kernels in NR-MOLAR are incorporated in the system matrix calculation, and therefore are also deformed according to motion. We first validated NR-MOLAR using a XCAT phantom with a simulated respiratory motion. NR-EBE motion-compensated image reconstruction using both components was then validated on three human studies injected with 18F-FPDTBZ and one with 18F-fluorodeoxyglucose (FDG) tracers. The human results were compared with conventional non-rigid motion correction using a discrete motion field (NR-discrete, one motion field per gate) and a previously proposed rigid EBE motion-compensated image reconstruction (R-EBE) that was designed to correct for rigid motion on a target lesion/organ. The XCAT results demonstrated that NR-MOLAR incorporating both PSF and TOF kernels effectively corrected for non-rigid motion. The 18F-FPDTBZ studies showed that NR-EBE outperformed NR-discrete, and yielded results comparable to R-EBE on target organs while yielding superior image quality in other regions.
The FDG study showed that NR-EBE clearly improved the visibility of multiple moving lesions in the liver where some of them could not be discerned in other reconstructions, in addition to improving quantification. These results show that NR-EBE motion-compensated image reconstruction appears to be a promising tool for lesion detection and quantification when imaging thoracic and abdominal regions using PET.
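The event-by-event idea can be sketched, for the rigid (R-EBE) case, as interpolating a per-gate displacement to each event's timestamp; the non-rigid method instead deforms the system matrix, so this shows only the shared time-interpolation step, with assumed data structures.

```python
import numpy as np

def interpolate_motion(t, gate_times, gate_displacements):
    """Linearly interpolate per-gate displacements to an arbitrary event
    timestamp, giving the continuous motion field EBE correction needs.

    gate_displacements: (n_gates, 3) array of x/y/z shifts (assumed)."""
    return np.array([np.interp(t, gate_times, gate_displacements[:, k])
                     for k in range(gate_displacements.shape[1])])

def correct_event(event_xyz, t, gate_times, gate_displacements):
    """Rigid EBE sketch: move one list-mode event's coordinate back to
    the reference frame where the attenuation CT was acquired."""
    return np.asarray(event_xyz) - interpolate_motion(
        t, gate_times, gate_displacements)
```

Working per event rather than per gate is what removes intra-cycle as well as inter-cycle blurring.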

Journal ArticleDOI
TL;DR: A ring-fitting method was developed to extract geometrical features from the dual rings and to connect these features with several experimental parameters; it was found that the radius of the ring equals the wavevector of the SPPs.
Abstract: Objective-based surface plasmon resonance microscopy (SPRM) is a novel optical imaging technique that can map the spatial distribution of a local refractive index based on propagating surface plasmon polaritons (SPPs). Unlike some other optical microscopy techniques, which show a dot-like point spread function (PSF), in SPRM a nanosized object appears as a wave-like pattern containing parabolic tails. The geometrical complexity of the wave-like pattern has hampered the quantitative interpretation of the PSF of SPRM. Previous studies have shown that two adjacent rings are obtained in the frequency domain by applying a two-dimensional Fourier transform to such patterns. In the present work, a ring-fitting method was developed to extract geometrical features from the dual rings and to connect these features with several experimental parameters. It was found that the radius of the ring equaled the wavevector of the SPPs. Its orientation revealed the propagation direction of the SPPs. The coordinate distance of the center o...
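The first step of such a ring analysis, estimating the ring radius (equal to the SPP wavevector, in inverse-pixel units) from the 2-D Fourier magnitude, can be sketched as a radially averaged spectrum peak search; this is a simplification of the paper's ring-fitting method.

```python
import numpy as np

def ring_radius(pattern):
    """Estimate the dominant ring radius in the 2-D Fourier magnitude of
    an SPRM pattern (simplified stand-in for full ring fitting)."""
    F = np.abs(np.fft.fftshift(np.fft.fft2(pattern)))
    F[F.shape[0] // 2, F.shape[1] // 2] = 0.0   # suppress the DC term
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    y, x = np.mgrid[:F.shape[0], :F.shape[1]]
    r = np.hypot(y - cy, x - cx).astype(int)
    # radial average of the spectrum; its peak sits on the ring
    radial = np.bincount(r.ravel(), weights=F.ravel())
    counts = np.bincount(r.ravel())
    return int(np.argmax(radial / np.maximum(counts, 1)))
```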

Journal ArticleDOI
TL;DR: It is found that laser-scanning TSFG provides vibrationally sensitive imaging capabilities of lipid droplets and structures in sectioned tissue samples, offering a nonlinear infrared alternative to coherent Raman methods.
Abstract: We studied the use of vibrationally resonant, third-order sum-frequency generation (TSFG) for imaging of biological samples. We found that laser-scanning TSFG provides vibrationally sensitive imaging capabilities of lipid droplets and structures in sectioned tissue samples. Although the contrast is based on the infrared-activity of molecular modes, TSFG images exhibit a high lateral resolution of 0.5 µm or better. We observed that the imaging properties of TSFG resemble the imaging properties of coherent anti-Stokes Raman scattering (CARS) microscopy, offering a nonlinear infrared alternative to coherent Raman methods. TSFG microscopy holds promise as a high-resolution imaging technique in the fingerprint region where coherent Raman techniques often provide insufficient sensitivity.