
Showing papers on "Point spread function published in 2020"


Journal ArticleDOI
TL;DR: In this paper, the signal of interest can be modeled as a linear superposition of translated or modulated versions of some template [e.g., a point spread function (PSF) or a Green's function] and the fundamental problem is to estimate the translation or modulation parameters (e.g., delays, locations, or Dopplers) from noisy measurements.
Abstract: At the core of many sensing and imaging applications, the signal of interest can be modeled as a linear superposition of translated or modulated versions of some template [e.g., a point spread function (PSF) or a Green's function] and the fundamental problem is to estimate the translation or modulation parameters (e.g., delays, locations, or Dopplers) from noisy measurements. This problem is centrally important to not only target localization in radar and sonar, channel estimation in wireless communications, and direction-of-arrival estimation in array signal processing, but also modern imaging modalities such as superresolution single-molecule fluorescence microscopy, nuclear magnetic resonance imaging, and spike localization in neural recordings, among others.
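Schematically, this measurement model is a sparse sum of shifted copies of a known template. The sketch below (all values, the Gaussian template, and the crude matched-filter estimator are ours, purely for illustration) simulates such a signal and recovers the delays:

```python
import numpy as np
from scipy.signal import find_peaks

# Minimal illustration (notation and values ours): observe
#   y(t) = sum_k a_k * g(t - tau_k) + noise,
# with a known template g (here a Gaussian PSF), and estimate the delays tau_k.
t = np.linspace(0.0, 1.0, 1000)
g = lambda s: np.exp(-0.5 * (s / 0.01) ** 2)      # assumed Gaussian template

taus, amps = np.array([0.2, 0.5, 0.8]), np.array([1.0, 0.8, 0.6])
y = sum(a * g(t - tau) for a, tau in zip(amps, taus))
y += 0.01 * np.random.randn(t.size)               # measurement noise

# Crude matched-filter estimate: correlate with shifted templates, pick peaks.
grid = np.linspace(0.0, 1.0, 500)
corr = np.array([np.dot(y, g(t - tau0)) for tau0 in grid])
peaks, _ = find_peaks(corr, height=0.5 * corr.max())
print("estimated delays:", grid[peaks])
```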

112 citations


Proceedings ArticleDOI
14 Jun 2020
TL;DR: The authors jointly train an optical encoder and electronic decoder, where the encoder is parameterized by the point spread function (PSF) of the lens, the bottleneck is the sensor with a limited dynamic range, and the decoder is a CNN.
Abstract: High-dynamic-range (HDR) imaging is crucial for many applications. Yet, acquiring HDR images with a single shot remains a challenging problem. Whereas modern deep learning approaches are successful at hallucinating plausible HDR content from a single low-dynamic-range (LDR) image, saturated scene details often cannot be faithfully recovered. Inspired by recent deep optical imaging approaches, we interpret this problem as jointly training an optical encoder and electronic decoder where the encoder is parameterized by the point spread function (PSF) of the lens, the bottleneck is the sensor with a limited dynamic range, and the decoder is a convolutional neural network (CNN). The lens surface is then jointly optimized with the CNN in a training phase; we fabricate this optimized optical element and attach it as a hardware add-on to a conventional camera during inference. In extensive simulations and with a physical prototype, we demonstrate that this end-to-end deep optical imaging approach to single-shot HDR imaging outperforms both purely CNN-based approaches and other PSF engineering approaches.
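To make the encoder/bottleneck/decoder framing concrete, here is a minimal simulation of the measurement side (the PSF shape, scene statistics, and clipping range are our illustrative stand-ins, not the optimized optic from the paper):

```python
import numpy as np
from scipy.signal import fftconvolve

# Schematic forward model (values ours): the lens PSF optically encodes the
# HDR scene, then the sensor clips it to a limited dynamic range.
rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 100.0, size=(64, 64))   # HDR radiance, far beyond sensor range

yy, xx = np.mgrid[-7:8, -7:8]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2))    # stand-in engineered PSF
psf /= psf.sum()

encoded = fftconvolve(scene, psf, mode="same")   # optical encoding
sensor = np.clip(encoded, 0.0, 1.0)              # dynamic-range bottleneck
print("scene max:", scene.max(), "sensor max:", sensor.max())
# A CNN decoder, trained jointly with the PSF, would recover the saturated
# details that the PSF has spread into unsaturated neighboring pixels.
```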

61 citations


Journal ArticleDOI
TL;DR: HyCoNet, as discussed by the authors, is an unsupervised deep learning-based fusion method that solves the HSI-MSI fusion problem without prior PSF and SRF information.
Abstract: Due to the limitations of hyperspectral imaging systems, hyperspectral imagery (HSI) often suffers from poor spatial resolution, thus hampering many applications of the imagery. Hyperspectral super-resolution refers to fusing an HSI with multispectral imagery (MSI) to generate an image with both high spatial and high spectral resolutions. Recently, several new methods have been proposed to solve this fusion problem, and most of these methods assume that prior information about the Point Spread Function (PSF) and Spectral Response Function (SRF) is known. However, in practice, this information is often limited or unavailable. In this work, we propose HyCoNet, an unsupervised deep learning-based fusion method that solves the HSI-MSI fusion problem without prior PSF and SRF information. HyCoNet consists of three coupled autoencoder nets in which the HSI and MSI are unmixed into endmembers and abundances based on the linear unmixing model. Two special convolutional layers are designed to act as a bridge that coordinates the three autoencoder nets, and the PSF and SRF parameters are learned adaptively in the two convolutional layers during the training process. Furthermore, driven by the joint loss function, the proposed method is straightforward and easily implemented in an end-to-end training manner. The experiments performed in this study demonstrate that the proposed method performs well and produces robust results for different datasets and arbitrary PSFs and SRFs.
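The two degradation operators named here admit a compact sketch. Below is a minimal simulation (shapes, band counts, and filter values are ours, purely illustrative) of how a PSF and an SRF turn one latent high-resolution cube into the observed HSI/MSI pair that HyCoNet must invert:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Illustrative degradation model (all shapes and values ours): the observed HSI
# is a blurred (PSF) and downsampled copy of the latent high-resolution cube X,
# while the observed MSI is a spectral projection of X through the SRF.
H, W, B = 64, 64, 100
X = np.random.rand(H, W, B)                       # latent high-res hyperspectral cube

psf_sigma, scale = 2.0, 4                         # assumed PSF width, downsampling ratio
hsi = gaussian_filter(X, sigma=(psf_sigma, psf_sigma, 0))[::scale, ::scale, :]

srf = np.random.rand(B, 4)                        # stand-in SRF: 100 bands -> 4 channels
srf /= srf.sum(axis=0, keepdims=True)
msi = X @ srf                                     # spectral degradation

print(hsi.shape, msi.shape)                       # (16, 16, 100) (64, 64, 4)
# HyCoNet does the reverse: it learns the PSF and SRF as convolutional layers
# while reconstructing X from the (hsi, msi) pair.
```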

56 citations


Journal ArticleDOI
TL;DR: In this article, an automated masking algorithm that operates within CLEAN, called AUTO-MULTITHRESH, is described, replacing the traditional hand-drawn masks, which are not feasible for today's large data volumes and automated imaging pipelines.
Abstract: Producing images from interferometer data requires accurate modeling of the sources in the field of view, which is typically done using the CLEAN algorithm. Given the large number of degrees of freedom in interferometric images, one constrains the possible model solutions for CLEAN by masking regions that contain emission. Traditionally this process has largely been done by hand. This approach is not possible with today's large data volumes, which require automated imaging pipelines. This paper describes an automated masking algorithm that operates within CLEAN called AUTO-MULTITHRESH. This algorithm was developed and validated using a set of ~1000 ALMA images chosen to span a range of intrinsic morphology and data characteristics. It takes a top-down approach to producing masks: it uses the residual images to identify significant peaks and then expands the mask to include emission associated with these peaks down to lower signal-to-noise levels. The AUTO-MULTITHRESH algorithm has been implemented in CASA and has been used in production as part of the ALMA Imaging Pipeline starting with Cycle 5. It has been shown to be able to mask a wide range of emission ranging from simple point sources to complex extended emission with minimal tuning of the parameters based on the point spread function of the data. Although the algorithm was developed for ALMA, it is general enough to have been used successfully with data from other interferometers with appropriate parameter tuning. Integrating the algorithm more deeply within the minor cycle could lead to future performance improvements.
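The "identify peaks, then grow the mask" logic is essentially hysteresis thresholding. The sketch below (thresholds and noise model ours, not the CASA implementation) illustrates the idea:

```python
import numpy as np
from scipy import ndimage

def grow_mask(residual, sigma, high=5.0, low=2.0):
    """Sketch of the top-down masking idea (thresholds ours, not CASA's):
    seed the mask at strong peaks, then expand it into connected emission
    down to a lower signal-to-noise cut."""
    seeds = residual > high * sigma            # significant peaks
    support = residual > low * sigma           # lower-S/N envelope
    labels, _ = ndimage.label(support)
    keep = np.unique(labels[seeds])            # components that contain a seed
    keep = keep[keep != 0]
    return np.isin(labels, keep)

rng = np.random.default_rng(1)
img = rng.normal(0.0, 1.0, (128, 128))
img[60:68, 60:68] += 8.0                       # a fake source
mask = grow_mask(img, sigma=1.0)
print("masked pixels:", mask.sum())
```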

52 citations


Journal ArticleDOI
TL;DR: An on-chip, widefield fluorescence microscope is presented, which consists of a diffuser placed a few millimeters away from a traditional image sensor, enabling refocusability in post-processing and three-dimensional imaging of sparse samples from a single acquisition.
Abstract: We present an on-chip, widefield fluorescence microscope, which consists of a diffuser placed a few millimeters away from a traditional image sensor. The diffuser replaces the optics of a microscope, resulting in a compact and easy-to-assemble system with a practical working distance of over 1.5 mm. Furthermore, the diffuser encodes volumetric information, enabling refocusability in post-processing and three-dimensional (3D) imaging of sparse samples from a single acquisition. Reconstruction of images from the raw data requires a precise model of the system, so we introduce a practical calibration scheme and a physics-based forward model to efficiently account for the spatially-varying point spread function (PSF). To improve performance in low-light, we propose a random microlens diffuser, which consists of many small lenslets randomly placed on the mask surface and yields PSFs that are robust to noise. We build an experimental prototype and demonstrate our system on both planar and 3D samples.
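One standard way to make a spatially varying PSF tractable (our sketch of the general technique; the paper's calibrated forward model may differ in detail) is to write the imaging operator as a weighted sum of a few shift-invariant convolutions:

```python
import numpy as np
from scipy.signal import fftconvolve

# Sketch of an efficient spatially varying PSF forward model (ours, not
# necessarily the paper's exact parameterization):
#   y = sum_k w_k * (h_k conv x),
# where the weight maps w_k interpolate between locally calibrated PSFs h_k.
def forward(x, psfs, weights):
    return sum(w * fftconvolve(x, h, mode="same") for h, w in zip(psfs, weights))

x = np.zeros((64, 64)); x[20, 20] = x[50, 40] = 1.0   # two point sources
yy, xx = np.mgrid[-5:6, -5:6]
psfs = [np.exp(-(xx**2 + yy**2) / (2 * s**2)) for s in (1.0, 3.0)]
psfs = [h / h.sum() for h in psfs]
gx = np.linspace(0, 1, 64)                       # blend left (sharp) to right (blurry)
weights = [np.tile(1 - gx, (64, 1)), np.tile(gx, (64, 1))]
y = forward(x, psfs, weights)
```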

47 citations


Journal ArticleDOI
TL;DR: It is shown that the lateral shape of the iPSF can be used to achieve nanometric three-dimensional localization over an extended axial range on the order of 10 µm either by means of a fit to an analytical model or calibration-free unsupervised machine learning.
Abstract: Interferometric scattering (iSCAT) microscopy is an emerging label-free technique optimized for the sensitive detection of nano-matter. Previous iSCAT studies have approximated the point spread function by a Gaussian intensity distribution. However, recent efforts to track the mobility of nanoparticles in challenging speckle environments and over extended axial ranges have necessitated a quantitative description of the interferometric point spread function (iPSF). We present a robust vectorial diffraction model for the iPSF in tandem with experimental measurements and rigorous FDTD simulations. We examine the iPSF under various imaging scenarios to understand how aberrations due to the experimental configuration encode information about the nanoparticle. We show that the lateral shape of the iPSF can be used to achieve nanometric three-dimensional localization over an extended axial range on the order of 10 µm either by means of a fit to an analytical model or calibration-free unsupervised machine learning. Our results have immediate implications for three-dimensional single particle tracking in complex scattering media.
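For orientation, the interferometric detection underlying the iPSF takes the standard iSCAT form (general background, not an equation reproduced from the paper):

```latex
I_{\mathrm{det}} = |E_r + E_s|^2
                 = |E_r|^2 + |E_s|^2 + 2\,|E_r|\,|E_s|\cos\phi ,
```

For weak scatterers the cross term dominates over |E_s|^2, and its phase φ depends on the particle's axial position, which is why the lateral shape of the iPSF can encode three-dimensional information.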

41 citations


Journal ArticleDOI
TL;DR: This work investigates a simple, low-cost, and compact optical coding camera design that supports high-resolution image reconstructions from raw measurements with low pixel counts, and uses an end-to-end framework to simultaneously optimize the optical design and a reconstruction network for obtaining super-resolved images from raw measurements.
Abstract: Single Photon Avalanche Photodiodes (SPADs) have recently received a lot of attention in imaging and vision applications due to their excellent performance in low-light conditions, as well as their ultra-high temporal resolution. Unfortunately, like many evolving sensor technologies, image sensors built around SPAD technology currently suffer from a low pixel count. In this work, we investigate a simple, low-cost, and compact optical coding camera design that supports high-resolution image reconstructions from raw measurements with low pixel counts. We demonstrate this approach for regular intensity imaging, depth imaging, as well as transient imaging. Our method uses an end-to-end framework to simultaneously optimize the optical design and a reconstruction network for obtaining super-resolved images from raw measurements. The optical design space is that of an engineered point spread function (implemented with diffractive optics), which can be considered an optimized anti-aliasing filter to preserve as much high-resolution information as possible despite imaging with a low pixel count, low fill-factor SPAD array. We further investigate a deep network for reconstruction. The effectiveness of this joint design and reconstruction approach is demonstrated for a range of different applications, including high-speed imaging, time-of-flight depth imaging, and transient imaging. While our work specifically focuses on low-resolution SPAD sensors, similar approaches should prove effective for other emerging image sensor technologies with low pixel counts and low fill-factors.

40 citations


Journal ArticleDOI
Matt J. Jarvis, Gary Bernstein, Alexandra Amon, C. Davis, P. F. Léget, Keith Bechtol, Ian Harrison, M. Gatti, A. Roodman, Chihway Chang, R. Chen, A. Choi, S. Desai, Alex Drlica-Wagner, Daniel Gruen, Robert A. Gruendl, A. Hernandez, Niall MacCrann, J. Meyers, A. Navarro-Alsina, S. Pandey, A. A. Plazas, L. F. Secco, Erin Sheldon, Michael Troxel, S. Vorperian, K. Wei, Joe Zuntz, T. M. C. Abbott, Michel Aguena, S. Allam, Santiago Avila, Sunayana Bhargava, Sarah Bridle, David J. Brooks, A. Carnero Rosell, M. Carrasco Kind, J. Carretero, M. Costanzi, L. N. da Costa, J. De Vicente, H. T. Diehl, Peter Doel, S. Everett, B. Flaugher, Pablo Fosalba, Josh Frieman, Juan Garcia-Bellido, Enrique Gaztanaga, D. W. Gerdes, G. Gutierrez, Samuel Hinton, D. L. Hollowood, K. Honscheid, David J. James, S. Kent, Kyler Kuehn, N. Kuropatkin, Ofer Lahav, M. A. G. Maia, M. March, Jennifer L. Marshall, Peter Melchior, Felipe Menanteau, Ramon Miquel, R. L. C. Ogando, F. Paz-Chinchón, Eli S. Rykoff, E. J. Sanchez, V. Scarpine, Michael Schubnell, S. Serrano, I. Sevilla-Noarbe, M. Smith, E. Suchyta, M. E. C. Swanson, G. Tarle, T. N. Varga, A. R. Walker, W. C. Wester, R. D. Wilkinson
TL;DR: A new software package for modeling the point-spread function (PSF) of astronomical images, called PIFF (PSFs In the Full FOV), is introduced, which is applied to the first three years of the Dark Energy Survey data.
Abstract: We introduce a new software package for modelling the point spread function (PSF) of astronomical images, called PIFF (PSFs In the Full FOV), which we apply to the first three years (known as Y3) of the Dark Energy Survey (DES) data. We describe the relevant details about the algorithms used by PIFF to model the PSF, including how the PSF model varies across the field of view (FOV). Diagnostic results show that the systematic errors from the PSF modelling are very small over the range of scales that are important for the DES Y3 weak lensing analysis. In particular, the systematic errors from the PSF modelling are significantly smaller than the corresponding results from the DES year one (Y1) analysis. We also briefly describe some planned improvements to PIFF that we expect to further reduce the modelling errors in future analyses.

39 citations


Journal ArticleDOI
20 Mar 2020
TL;DR: In this paper, the authors used quadriwave lateral shearing interferometry to measure the optical properties of nanoparticles, namely the complex polarizability and the extinction, scattering, and absorption cross sections.
Abstract: This paper introduces a procedure aimed at quantitatively measuring the optical properties of nanoparticles, namely the complex polarizability and the extinction, scattering, and absorption cross sections, simultaneously. The method is based on the processing of intensity and wavefront images of a light beam illuminating the nanoparticle of interest. Intensity and wavefront measurements are carried out using quadriwave lateral shearing interferometry, a quantitative phase imaging technique with high spatial resolution and sensitivity. The method does not require any prior knowledge of the particle and involves a single interferogram image acquisition. The full determination of the actual optical properties of nanoparticles is of particular interest in plasmonics and nanophotonics for the active search and characterization of new materials, e.g., aimed at replacing noble metals in future applications of nanoplasmonics with less-lossy or refractory materials.

33 citations


Journal ArticleDOI
TL;DR: An integrated registration and fusion approach is proposed and validated on the Pavia University, Salton Sea, and Mississippi Gulfport datasets; it has the best performance on images with registration errors as well as on simulations that do not consider registration effects.
Abstract: Combining a hyperspectral (HS) image and a multispectral (MS) image—an example of image fusion—can result in a spatially and spectrally high-resolution image. Despite the plethora of fusion algorithms in remote sensing, a necessary prerequisite, namely registration, is mostly ignored. This limits their application to well-registered images from the same source. In this article, we propose and validate an integrated registration and fusion approach (code available at https://github.com/zhouyuanzxcv/Hyperspectral ). The registration algorithm minimizes a least-squares (LSQ) objective function with the point spread function (PSF) incorporated together with a nonrigid freeform transformation applied to the HS image and a rigid transformation applied to the MS image. It can handle images with significant scale differences and spatial distortion. The fusion algorithm takes the full high-resolution HS image as an unknown in the objective function. Assuming that the pixels lie on a low-dimensional manifold invariant to local linear transformations from spectral degradation, the fusion optimization problem leads to a closed-form solution. The method was validated on the Pavia University, Salton Sea, and the Mississippi Gulfport datasets. When the proposed registration algorithm is compared to its rigid variant and two mutual information-based methods, it has the best accuracy for both the nonrigid simulated dataset and the real dataset, with an average error less than 0.15 pixels for nonrigid distortion of maximum 1 HS pixel. When the fusion algorithm is compared with current state-of-the-art algorithms, it has the best performance on images with registration errors as well as on simulations that do not consider registration effects.
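As a schematic (the notation below is ours, condensed from the description rather than copied from the paper), the registration stage couples the PSF with the two transformations in a single least-squares objective:

```latex
\min_{T_{nr},\,T_r}\;
  \big\| \, D\,B\,(I_{MS} \circ T_r) \;-\; (I_{HS} \circ T_{nr})\,R \,\big\|_F^2 ,
```

where B is the PSF blur, D the spatial downsampling to the HS grid, R the spectral response mapping HS bands to MS bands, T_nr the nonrigid freeform transform applied to the HS image, and T_r the rigid transform applied to the MS image.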

31 citations


Journal ArticleDOI
TL;DR: In this paper, the causes of the wind-driven halo are reviewed and a method to analyze its contribution directly from the scientific images is presented, and its effect on the raw contrast and on the final contrast after postprocessing is demonstrated.
Abstract: Context. The wind-driven halo is a feature observed within the images delivered by the latest generation of ground-based instruments equipped with an extreme adaptive optics system and a coronagraphic device, such as SPHERE at the VLT. This signature appears when the atmospheric turbulence conditions vary faster than the adaptive optics loop can correct. The wind-driven halo appears as a radial extension of the point spread function along a distinct direction (sometimes referred to as the butterfly pattern). When present, it significantly limits the contrast capabilities of the instrument and prevents the extraction of signals at close separation or extended signals such as circumstellar disks. This limitation is consequential because it contaminates the data a substantial fraction of the time: about 30% of the data produced by the VLT/SPHERE instrument are affected by the wind-driven halo. Aims. This paper reviews the causes of the wind-driven halo and presents a method to analyze its contribution directly from the scientific images. Its effect on the raw contrast and on the final contrast after post-processing is demonstrated. Methods. We used simulations and on-sky SPHERE data to verify that the parameters extracted with our method are capable of describing the wind-driven halo present in the images. We studied the temporal, spatial and spectral variation of these parameters to point out its deleterious effect on the final contrast. Results. The data-driven analysis we propose does provide information to accurately describe the wind-driven halo contribution in the images. This analysis justifies why this is a fundamental limitation to the final contrast performance reached. Conclusions. With the established procedure, we will analyze a large sample of data delivered by SPHERE in order to propose, in the future, post-processing techniques tailored to remove the wind-driven halo.

Journal ArticleDOI
TL;DR: In this paper, a unified method for atmospheric turbulence mitigation in both static and dynamic sequences is presented, which utilizes a novel space-time non-local averaging method to construct a reliable reference frame, a geometric consistency and a sharpness metric to generate the lucky frame, and a physics-constrained prior model of the point spread function for blind deconvolution.
Abstract: Ground-based long-range passive imaging systems often suffer from degraded image quality due to a turbulent atmosphere. While methods exist for removing such turbulent distortions, many are limited to static sequences and cannot be extended to dynamic scenes. In addition, the physics of the turbulence is often not integrated into the image reconstruction algorithms, leaving the methods on weak physical foundations. In this article, we present a unified method for atmospheric turbulence mitigation in both static and dynamic sequences. We are able to achieve better results compared to existing methods by utilizing (i) a novel space-time non-local averaging method to construct a reliable reference frame, (ii) a geometric consistency and a sharpness metric to generate the lucky frame, and (iii) a physics-constrained prior model of the point spread function for blind deconvolution. Experimental results based on synthetic and real long-range turbulence sequences validate the performance of the proposed method.

Journal ArticleDOI
TL;DR: This paper proposes a fractional-order total variation regularization to remove the blur and Poisson noise simultaneously and develops two efficient algorithms based on the alternating direction method of multipliers, while an expectation-maximization algorithm is adopted only in the blind case.
Abstract: In a wide range of applications such as astronomy, biology, and medical imaging, acquired data are usually corrupted by Poisson noise and blurring artifacts. Poisson noise often occurs when photon counting is involved in such imaging modalities as X-ray, positron emission tomography, and fluorescence microscopy. Meanwhile, blurring is also inevitable due to the physical mechanism of an imaging system, which can be modeled as a convolution of the image with a point spread function. In this paper, we consider both non-blind and blind image deblurring models that deal with Poisson noise. In the pursuit of high-order smoothness of a restored image, we propose a fractional-order total variation regularization to remove the blur and Poisson noise simultaneously. We develop two efficient algorithms based on the alternating direction method of multipliers, while an expectation-maximization algorithm is adopted only in the blind case. A variety of numerical experiments have demonstrated that the proposed algorithms can efficiently reconstruct piecewise smooth images degraded by Poisson noise and various types of blurring, including Gaussian and motion blurs. Specifically for blind image deblurring, we obtain significant improvements over the state of the art.
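In schematic form (our notation, assembled from the description above rather than copied from the paper), the non-blind model combines a Poisson likelihood term with the fractional-order TV prior:

```latex
% f is the Poisson-corrupted observation, A is convolution with the PSF.
\min_{u \ge 0}\;
  \langle Au,\,\mathbf{1}\rangle - \langle f,\,\log(Au)\rangle
  \;+\; \lambda\,\|\nabla^{\alpha} u\|_{1},
\qquad 1 < \alpha < 2 ,
```

The first two terms are the Poisson (Kullback-Leibler) data fit; ADMM splits them from the fractional-order TV prior, and the blind variant additionally estimates the kernel inside A via an expectation-maximization step.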

Journal ArticleDOI
TL;DR: A new accuracy index is proposed that considers SPM performance in classification and restoration of spatial structure simultaneously; the results show that by considering the PSF effect, more accurate SPM results were produced and small-sized patches and elongated features were restored more satisfactorily.

Journal ArticleDOI
20 May 2020
TL;DR: In this article, a coded aperture realized using a dynamic metasurface was proposed to estimate the spectral source distribution from a series of single-port spectral magnitude measurements and complex characterization of the modulation patterns.
Abstract: Passive microwave imaging of incoherent sources is often approached in a lensless configuration through array-based interferometric processing. We present an alternative route in the form of a coded aperture realized using a dynamic metasurface. We demonstrate that this device can achieve an estimate of the spectral source distribution from a series of single-port spectral magnitude measurements and complex characterization of the modulation patterns. The image estimation problem is formulated in this case as compressive inversion of a set of standard linear matrix equations. In addition, we demonstrate that a dispersive metasurface design can achieve spectral encoding directly, offering the potential for spectral imaging from frequency-integrated, multiplexed measurements. The microwave dynamic metasurface aperture as an encoding structure is shown to offer a substantially simpler hardware architecture than that employed in common passive microwave imaging systems. Our proposed technique can facilitate large-scale microwave imaging applications that exploit pervasive ambient sources, while similar principles can readily be applied at terahertz, infrared, and optical frequencies.
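The "compressive inversion of a set of standard linear matrix equations" can be made concrete with a toy solver. In the sketch below, everything (dimensions, noise level, the plain ISTA solver, the regularization weight) is our illustrative choice, not the paper's:

```python
import numpy as np

# Toy compressive inversion: each modulation pattern yields one measurement
# g_i = <h_i, f> of the source distribution f, giving g = H f; with more
# unknowns than measurements, a sparsity prior recovers f (here via ISTA).
rng = np.random.default_rng(0)
n_meas, n_pix = 64, 256
H = rng.normal(size=(n_meas, n_pix))             # characterized patterns (stand-in)
f_true = np.zeros(n_pix); f_true[rng.choice(n_pix, 5, replace=False)] = 1.0
g = H @ f_true + 0.01 * rng.normal(size=n_meas)

f = np.zeros(n_pix)
step = 1.0 / np.linalg.norm(H, 2) ** 2           # 1 / Lipschitz constant
for _ in range(500):
    f = f - step * H.T @ (H @ f - g)             # gradient step on the data fit
    f = np.sign(f) * np.maximum(np.abs(f) - step * 0.05, 0.0)  # soft threshold
print("recovered support:", np.sort(np.argsort(np.abs(f))[-5:]))
```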

Proceedings ArticleDOI
14 Jun 2020
TL;DR: In this article, a method is proposed for 3D human pose estimation from transient images acquired by an optical non-line-of-sight (NLOS) imaging system, estimating the pose by 'looking around corners' through the use of light indirectly reflected by the environment.
Abstract: We describe a method for 3D human pose estimation from transient images (i.e., a 3D spatio-temporal histogram of photons) acquired by an optical non-line-of-sight (NLOS) imaging system. Our method can perceive 3D human pose by 'looking around corners' through the use of light indirectly reflected by the environment. We bring together a diverse set of technologies from NLOS imaging, human pose estimation and deep reinforcement learning to construct an end-to-end data processing pipeline that converts a raw stream of photon measurements into a full 3D human pose sequence estimate. Our contributions are the design of the data representation process, which includes (1) a learnable inverse point spread function (PSF) to convert raw transient images into a deep feature vector; (2) a neural humanoid control policy conditioned on the transient image feature and learned from interactions with a physics simulator; and (3) a data synthesis and augmentation strategy based on depth data that can be transferred to a real-world NLOS imaging system. Our preliminary experiments suggest that our method is able to generalize to real-world NLOS measurements to estimate physically-valid 3D human poses.

Journal ArticleDOI
TL;DR: A novel design of image deblurring in the form of one-shot convolution filtering that can directly convolve with naturally blurred images for restoration, boosting the frequency fall-off of the point spread function (PSF) associated with the optical blur.
Abstract: In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can directly convolve with naturally blurred images for restoration. The problem of optical blurring is a common disadvantage to many imaging applications that suffer from optical imperfections. Numerous deconvolution methods blindly estimate blurring in either inclusive or exclusive forms, but they are practically challenging due to high computational cost and low image reconstruction quality. Both conditions of high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving. In such platforms, deblurring is required after image acquisition before being stored, previewed, or processed for high-level interpretation. Therefore, on-the-fly correction of such images is important to avoid possible time delays, mitigate computational expenses, and increase image perception quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of finite impulse response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the point spread function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for two Gaussian and Laplacian models that are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods.
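A linear combination of even-derivative filters is easy to sketch. The coefficients below are our arbitrary picks for illustration; in the paper they would be derived from the estimated PSF statistics:

```python
import numpy as np
from scipy.signal import fftconvolve

# Sketch of the one-shot deblurring idea (coefficients ours, not fitted values):
# build a single FIR kernel as a linear combination of even derivatives of a
# delta, k = delta - a*Laplacian + b*biharmonic, which boosts the attenuated
# high frequencies in one convolution pass.
delta = np.zeros((5, 5)); delta[2, 2] = 1.0
lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)
lap5 = np.pad(lap, 1)                               # Laplacian on the 5x5 support
bih = fftconvolve(lap, lap)                         # biharmonic = Laplacian twice

a, b = 0.6, 0.1                                     # illustrative weights
kernel = delta - a * lap5 + b * bih

blurry = np.random.rand(128, 128)                   # stand-in blurred image
restored = fftconvolve(blurry, kernel, mode="same") # one-shot deblurring
```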

Journal ArticleDOI
TL;DR: This method uses deep convolutional neural networks to reconstruct high-density multicolor super-resolution images from low-density, contaminated multicolor images rendered using sSMLM datasets with far fewer frames, without compromising spatial resolution.
Abstract: Spectroscopic single-molecule localization microscopy (sSMLM) simultaneously provides spatial localization and spectral information of individual single-molecule emissions, offering multicolor super-resolution imaging of multiple molecules in a single sample with nanoscopic resolution. However, this technique is limited by the requirement of acquiring a large number of frames to reconstruct a super-resolution image. In addition, multicolor sSMLM imaging suffers from spectral cross-talk while using multiple dyes with relatively broad spectral bands that produce cross-color contamination. Here, we present a computational strategy to accelerate multicolor sSMLM imaging. Our method uses deep convolutional neural networks to reconstruct high-density multicolor super-resolution images from low-density, contaminated multicolor images rendered using sSMLM datasets with far fewer frames, without compromising spatial resolution. High-quality super-resolution images are reconstructed using up to 8-fold fewer frames than usually needed. Thus, our technique generates multicolor super-resolution images within a much shorter time, without any changes to the existing sSMLM hardware system. Two-color and three-color sSMLM experimental results demonstrate superior reconstructions of tubulin/mitochondria, peroxisome/mitochondria, and tubulin/mitochondria/peroxisome in fixed COS-7 and U2-OS cells with a significant reduction in acquisition time.

Journal ArticleDOI
TL;DR: This paper establishes a sufficient separation criterion between the sources, depending only on the PSF, above which the Beurling-LASSO estimator is guaranteed to return a stable estimate of the point sources, with the same number of estimated elements as that of the ground truth.
Abstract: The stability of spike deconvolution, which aims at recovering point sources from their convolution with a point spread function (PSF), is known to be related to the separation between those sources. When the observations are noisy, it is critical to ensure support stability, where the deconvolution does not lead to spurious, or oppositely, missing estimates of the point sources. In this paper, we study the resolution limit of stably recovering the support of two closely located point sources using the Beurling-LASSO estimator, which is a convex optimization approach based on total variation regularization. We establish a sufficient separation criterion between the sources, depending only on the PSF, above which the Beurling-LASSO estimator is guaranteed to return a stable estimate of the point sources, with the same number of estimated elements as that of the ground truth. Our result highlights the impact of the PSF on the resolution limit in the noisy setting, which was not evident in previous studies of the noiseless setting. Finally, we show that the same resolution limit applies to resolving two closely located sources in conjunction with other well-separated sources.
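For reference, the Beurling-LASSO (BLASSO) estimator studied here has the standard form (generic formulation in our notation; the paper's precise setup may differ):

```latex
\hat{\mu} \;\in\; \arg\min_{\mu}\;
  \tfrac{1}{2}\,\| y - \Phi\mu \|_2^2 \;+\; \lambda\,|\mu|(\mathcal{X}),
```

where μ = Σ_k a_k δ_{t_k} is a discrete measure, Φ maps it through the PSF, and |μ|(X) is the total-variation norm of the measure, the continuous analog of the ℓ1 norm that promotes sparse (spiky) solutions.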

Journal ArticleDOI
TL;DR: The peak intensity of the effective point spread function (PSF) can be further increased by 4% by a new choice of the pixel reassignment factor, and image scanning microscopy exhibits axial resolution superior to a confocal microscope with a pinhole the same size as the detector array.
Abstract: Image scanning microscopy is a technique based on confocal microscopy, in which the confocal pinhole is replaced by a detector array, and the resulting image is reconstructed, usually by the process of pixel reassignment. The detector array collects most of the fluorescent light, so the signal-to-noise ratio is much improved compared with confocal microscopy with a small pinhole, while the resolution is improved compared with conventional (wide-field) microscopy. In previous studies, it has usually been assumed that pixels should be reassigned by a constant factor, to a point midway between the illumination and detection spots. Here it is shown that the peak intensity of the effective point spread function (PSF) can be further increased by 4% by a new choice of the pixel reassignment factor. For an array of two Airy units, the peak of the effective PSF is 1.90 times that of a conventional microscope, and the transverse resolution is 1.53 times better. It is confirmed that image scanning microscopy gives optical sectioning strength identical to that of a confocal microscope with a pinhole equal to the size of the detector array. However, it is shown that image scanning microscopy exhibits axial resolution superior to a confocal microscope with a pinhole the same size as the detector array. For a two-Airy-unit array, the axial resolution is 1.34 times better than in a conventional microscope for the standard reassignment factor, and 1.28 times better for the new reassignment factor. The axial resolution of a confocal microscope with a two-Airy-unit pinhole is only 1.04 times better than conventional microscopy. We also examine the signal-to-noise ratio of a point object in a uniform background (called the detectability), and show that it is 1.6 times higher than in a confocal microscope.
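Pixel reassignment itself is a small operation: each detector element's scanned image is shifted toward the illumination spot by a fraction of the element's offset, and the results are summed. Below is a toy version (integer-pixel shifts, offsets, and the α value are ours; the paper's point is that the optimal α can differ slightly from the standard 0.5):

```python
import numpy as np

def pixel_reassignment(stack, offsets, alpha=0.5):
    """Toy image-scanning-microscopy reconstruction (ours, simplified to integer
    shifts): stack[d] is the image recorded by detector element d while scanning;
    each is shifted by -alpha * offset_d before summation, placing the signal
    midway between illumination and detection spots for alpha = 0.5."""
    out = np.zeros_like(stack[0])
    for img, (dy, dx) in zip(stack, offsets):
        out += np.roll(img, (-int(round(alpha * dy)), -int(round(alpha * dx))),
                       axis=(0, 1))
    return out

# toy example: 5 detector elements in a row
stack = [np.random.rand(64, 64) for _ in range(5)]
offsets = [(0, d) for d in (-2, -1, 0, 1, 2)]
ism = pixel_reassignment(stack, offsets)
```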

Journal ArticleDOI
TL;DR: In this article, the authors employed a U-net deep neural network architecture to learn parameters that were adapted for galaxy image processing in a supervised setting and studied two deconvolution strategies.
Abstract: The deconvolution of large survey images with millions of galaxies requires developing a new generation of methods that can take a space-variant point spread function into account. These methods also have to be accurate and fast. We investigate how deep learning might be used to perform this task. We employed a U-net deep neural network architecture to learn parameters that were adapted for galaxy image processing in a supervised setting and studied two deconvolution strategies. The first approach is a post-processing of a mere Tikhonov deconvolution with closed-form solution, and the second approach is an iterative deconvolution framework based on the alternating direction method of multipliers (ADMM). Our numerical results based on GREAT3 simulations with realistic galaxy images and point spread functions show that our two approaches outperform standard techniques that are based on convex optimization, whether assessed in galaxy image reconstruction or shape recovery. The approach based on a Tikhonov deconvolution leads to the most accurate results, except for ellipticity errors at high signal-to-noise ratio. The ADMM approach performs slightly better in this case. Considering that the Tikhonov approach is also more computation-time efficient in processing a large number of galaxies, we recommend this approach in this scenario.
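The first stage of the Tikhonov strategy has a closed form worth spelling out. The sketch below implements plain Fourier-domain Tikhonov deconvolution (the regularization weight and the toy image are ours; in the paper a U-net then post-processes this output):

```python
import numpy as np

def psf_to_otf(psf, shape):
    # embed the PSF in a full-size array and circularly shift its center to (0, 0)
    p = np.zeros(shape)
    p[:psf.shape[0], :psf.shape[1]] = psf
    p = np.roll(p, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(p)

def tikhonov_deconvolve(y, psf, lam=1e-2):
    """Closed-form Tikhonov deconvolution (lambda value ours):
    X = conj(H) Y / (|H|^2 + lam)."""
    H = psf_to_otf(psf, y.shape)
    X = np.conj(H) * np.fft.fft2(y) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

yy, xx = np.mgrid[-15:16, -15:16]
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
y = np.real(np.fft.ifft2(np.fft.fft2(np.random.rand(64, 64)) * psf_to_otf(psf, (64, 64))))
x_hat = tikhonov_deconvolve(y, psf)
```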

Journal ArticleDOI
TL;DR: An iterative calibration method based on phase measuring is proposed for fringe projection profilometry, exploiting the fact that the fringe phases on a plane board theoretically follow a rational-function distribution, to solve the problem of calibrating projector parameters.
Abstract: In fringe projection profilometry, system calibration is crucial for guaranteeing the measurement accuracies. Its difficulty lies in calibrating projector parameters, especially when the projector lens has distortions, since the projector, unlike a camera, cannot capture images, leading to an obstacle to knowing the correspondences between its pixels and object points. For solving this issue, this paper, exploiting the fact that the fringe phases on a plane board theoretically follow a rational function of the pixel coordinates, proposes an iterative calibration method based on phase measuring. Projecting fringes onto the calibration board and fitting the measured phases with a rational function allow us to determine the projector pixels corresponding to the featured points on the calibration board. Using these correspondences, the projector parameters are easy to estimate. Noting that the projector lens distortions may deform the fitted phase map, thus inducing errors in the estimates of the projector parameters, this paper suggests an iterative strategy to overcome this problem. By implementing the phase fitting and the parameter estimating alternately, the intrinsic and extrinsic parameters of the projector, as well as its lens distortion coefficients, are determined accurately. For compensating for the effects of the lens distortions on measurement, this paper gives two solutions. The pre-compensation actively curves the fringes in the computer when generating them, whereas with the post-compensation, the lens distortion correction is performed in the data processing stage. Both methods are experimentally verified to be effective in improving the measurement accuracies.
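The key fitting step can be sketched compactly: because the model is a ratio of low-order polynomials, multiplying through by the denominator makes it linear in the unknown coefficients (the specific rational form and values below are our illustration of the idea, not the paper's exact parameterization):

```python
import numpy as np

# Fit phi(x, y) = (a0 + a1*x + a2*y) / (1 + c1*x + c2*y) by linearizing
# phi * (1 + c1*x + c2*y) = a0 + a1*x + a2*y and solving least squares.
def fit_rational_phase(x, y, phi):
    A = np.column_stack([np.ones_like(x), x, y, -x * phi, -y * phi])
    coef, *_ = np.linalg.lstsq(A, phi, rcond=None)
    return coef                                      # a0, a1, a2, c1, c2

x, y = np.meshgrid(np.linspace(0, 1, 50), np.linspace(0, 1, 50))
x, y = x.ravel(), y.ravel()
true = (2.0 + 30.0 * x + 1.0 * y) / (1.0 + 0.1 * x + 0.05 * y)
phi = true + 0.001 * np.random.randn(x.size)         # measured (unwrapped) phase
print(fit_rational_phase(x, y, phi))                 # recovers ~ (2, 30, 1, 0.1, 0.05)
```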

Journal ArticleDOI
TL;DR: In this article, the Fast and Furious (F&F) algorithm was used to measure and correct the low-wind effect (LWE), which severely distorts the point spread function (PSF) and degrades the contrast.
Abstract: High-contrast imaging (HCI) observations of exoplanets can be limited by the island effect (IE). On the current generation of telescopes, the IE becomes a severe problem when the ground wind speed is below a few meters per second. This is referred to as the low-wind effect (LWE). The LWE severely distorts the point spread function (PSF), significantly lowering the Strehl ratio and degrading the contrast. In this article, we aim to show that the focal-plane wavefront sensing (FPWFS) algorithm, Fast and Furious (F&F), can be used to measure and correct the IE/LWE. We deployed the algorithm on the SCExAO HCI instrument at the Subaru Telescope using the internal near-infrared camera in H-band. We tested F&F with the internal source, and it was deployed on-sky to test its performance with the full end-to-end system and atmospheric turbulence. The performance of the algorithm was evaluated by two metrics based on the PSF quality: 1) the Strehl ratio approximation (SRA), and 2) the variance of the normalized first Airy ring (VAR). Random LWE phase screens with a peak-to-valley wavefront error between 0.4 μm and 2 μm were all corrected to an SRA > 90% and a VAR ≲ 0.05. Furthermore, the on-sky results show that F&F is able to improve the PSF quality during very challenging atmospheric conditions (1.3–1.4″ seeing at 500 nm). Closed-loop tests show that F&F is able to improve the VAR from 0.27 to 0.03 and therefore significantly improve the symmetry of the PSF. Simultaneous observations of the PSF in the optical (λ = 750 nm, Δλ = 50 nm) show that during these tests we were correcting aberrations common to the optical and NIR paths within SCExAO. Going forward, the algorithm is suitable for incorporation into observing modes, which will enable PSFs of higher quality and stability during science observations.

Journal ArticleDOI
TL;DR: A simple, efficient, and most importantly fully operational point-spread-function (PSF)-reconstruction approach for laser-assisted ground layer adaptive optics (GLAO) in the frame of the Multi Unit Spectroscopic Explorer (MUSE) wide field mode is described.
Abstract: Context. Here we describe a simple, efficient, and most importantly fully operational point-spread-function (PSF)-reconstruction approach for laser-assisted ground layer adaptive optics (GLAO) in the frame of the Multi Unit Spectroscopic Explorer (MUSE) wide field mode. Aims. Based on clear astrophysical requirements derived by the MUSE team and using the functionality of the current ESO Adaptive Optics Facility we aim to develop an operational PSF-reconstruction (PSFR) algorithm and test it both in simulations and using on-sky data. Methods. The PSFR approach is based on a Fourier description of the GLAO correction to which the specific instrumental effects of MUSE wide field mode (pixel size, internal aberrations, etc.) have been added. It was first thoroughly validated with full end-to-end simulations. Sensitivity to the main atmospheric and AO system parameters was analysed and the code was re-optimised to account for the sensitivity found. Finally, the optimised algorithm was tested and commissioned using more than one year of on-sky MUSE data. Results. We demonstrate with an on-sky data analysis that our algorithm meets all the requirements imposed by the MUSE scientists, namely an accuracy better than a few percent on the critical PSF parameters including full width at half maximum and global PSF shape through the kurtosis parameter of a Moffat function. Conclusions. The PSFR algorithm is publicly available and is used routinely to assess the MUSE image quality for each observation. It can be included in any post-processing activity which requires knowledge of the PSF.

Journal ArticleDOI
TL;DR: Fast and Furious (F&F) as mentioned in this paper is a sequential phase diversity algorithm and a software-only solution to FPWFS that only requires access to images of non-coronagraphic PSFs and control of the deformable mirror.
Abstract: Context. High-contrast imaging (HCI) observations of exoplanets can be limited by the island effect (IE). The IE occurs when the main wavefront sensor (WFS) cannot measure sharp phase discontinuities across the telescope's secondary mirror support structures (also known as spiders). On the current generation of telescopes, the IE becomes a severe problem when the ground wind speed is below a few meters per second. During these conditions, the air that is in close contact with the spiders cools down and is not blown away. This can create a sharp optical path length difference between light passing on opposite sides of the spiders. Such an IE aberration is not measured by the WFS and is therefore left uncorrected. This is referred to as the low-wind effect (LWE). The LWE severely distorts the point spread function (PSF), significantly lowering the Strehl ratio and degrading the contrast. Aims. In this article, we aim to show that the focal-plane wavefront sensing (FPWFS) algorithm, Fast and Furious (F&F), can be used to measure and correct the IE/LWE. The F&F algorithm is a sequential phase diversity algorithm and a software-only solution to FPWFS that only requires access to images of non-coronagraphic PSFs and control of the deformable mirror. Methods. We deployed the algorithm on the SCExAO HCI instrument at the Subaru Telescope using the internal near-infrared camera in H-band. We tested with the internal source to verify that F&F can correct a wide variety of LWE phase screens. Subsequently, F&F was deployed on-sky to test its performance with the full end-to-end system and atmospheric turbulence. The performance of the algorithm was evaluated by two metrics based on the PSF quality: (1) the Strehl ratio approximation (SRA), and (2) the variance of the normalized first Airy ring (VAR). The VAR measures the distortion of the first Airy ring, and is used to quantify PSF improvements that do not or barely affect the PSF core (e.g., during challenging atmospheric conditions). Results. The internal source results show that F&F can correct a wide range of LWE phase screens. Random LWE phase screens with a peak-to-valley wavefront error between 0.4 μm and 2 μm were all corrected to an SRA > 90% and a VAR ⪅ 0.05. Furthermore, the on-sky results show that F&F is able to improve the PSF quality during very challenging atmospheric conditions (1.3–1.4″ seeing at 500 nm). Closed-loop tests show that F&F is able to improve the VAR from 0.27 to 0.03 and therefore significantly improve the symmetry of the PSF. Simultaneous observations of the PSF in the optical (λ = 750 nm, Δλ = 50 nm) show that during these tests we were correcting aberrations common to the optical and NIR paths within SCExAO. We could not conclusively determine if we were correcting the LWE and/or (quasi-)static aberrations upstream of SCExAO. Conclusions. The F&F algorithm is a promising focal-plane wavefront sensing technique that has now been successfully tested on-sky. Going forward, the algorithm is suitable for incorporation into observing modes, which will enable PSFs of higher quality and stability during science observations.

Journal ArticleDOI
TL;DR: In this paper, the authors estimate the parameters of a spatially variant point spread function (PSF) model using a CNN, which does not require instrument- or object-specific calibration.
Abstract: Optical microscopy is an essential tool in biology and medicine. Imaging thin, yet non-flat objects in a single shot (without relying on more sophisticated sectioning setups) remains challenging as the shallow depth of field that comes with high-resolution microscopes leads to unsharp image regions and makes depth localization and quantitative image interpretation difficult. Here, we present a method that improves the resolution of light microscopy images of such objects by locally estimating image distortion while jointly estimating object distance to the focal plane. Specifically, we estimate the parameters of a spatially variant point spread function (PSF) model using a Convolutional Neural Network (CNN), which does not require instrument- or object-specific calibration. Our method recovers PSF parameters from the image itself with up to a squared Pearson correlation coefficient of 0.99 in ideal conditions, while remaining robust to object rotation, illumination variations, or photon noise. When the recovered PSFs are used with a spatially variant and regularized Richardson-Lucy deconvolution algorithm, we observed up to 2.1 dB better signal-to-noise ratio compared to other blind deconvolution techniques. Following microscope-specific calibration, we further demonstrate that the recovered PSF model parameters permit estimating surface depth with a precision of 2 micrometers and over an extended range when using engineered PSFs. Our method opens up multiple possibilities for enhancing images of non-flat objects with minimal need for a priori knowledge about the optical setup.

Journal ArticleDOI
TL;DR: This paper proposes an optimized Galilean-mode light field miniscope (Gali-MiniLFM), which achieves a more consistent resolution and a significantly shorter imaging path than its conventional counterparts, and provides a novel framework that incorporates the anticipated aberrations of the proposed Gali- MiniLFM into the point spread function (PSF) modeling.
Abstract: Integrating light field microscopy techniques with existing miniscope architectures has allowed for volumetric imaging of targeted brain regions in freely moving animals. However, the current design of light field miniscopes is limited by non-uniform resolution and long imaging path length. In an effort to overcome these limitations, this paper proposes an optimized Galilean-mode light field miniscope (Gali-MiniLFM), which achieves a more consistent resolution and a significantly shorter imaging path than its conventional counterparts. In addition, this paper provides a novel framework that incorporates the anticipated aberrations of the proposed Gali-MiniLFM into the point spread function (PSF) modeling. This more accurate PSF model can then be used in 3D reconstruction algorithms to further improve the resolution of the platform. Volumetric imaging in the brain necessitates the consideration of the effects of scattering. We conduct Monte Carlo simulations to demonstrate the robustness of the proposed Gali-MiniLFM for volumetric imaging in scattering tissue.

Journal ArticleDOI
TL;DR: The degradation of imaging quality with depth in deep brain multi-photon microscopy is characterized using a recently developed numerical model that computes wave propagation in scattering media and the signal-to-background ratio (SBR) and the resolution determined by the width of the point spread function are obtained as functions of depth.
Abstract: We have systematically characterized the degradation of imaging quality with depth in deep brain multi-photon microscopy, utilizing our recently developed numerical model that computes wave propagation in scattering media. The signal-to-background ratio (SBR) and the resolution determined by the width of the point spread function are obtained as functions of depth. We compare the imaging quality of two-photon (2PM), three-photon (3PM), and non-degenerate two-photon microscopy (ND-2PM) for mouse brain imaging. We show that the imaging depth of 2PM and ND-2PM are fundamentally limited by the SBR, while the SBR remains approximately invariant with imaging depth for 3PM. Instead, the imaging depth of 3PM is limited by the degradation of the resolution, if there is sufficient laser power to maintain the signal level at large depth. The roles of the concentration of dye molecules, the numerical aperture of the input light, the anisotropy factor g, noise level, input laser power, and the effect of temporal broadening are also discussed.

Journal ArticleDOI
TL;DR: In this article, a linear propagation forward model is used to reconstruct the structure of a microfluidic channel from a single image with only 8 elements of a 128-elements linear array.
Abstract: It has previously been demonstrated that model-based reconstruction methods relying on a priori knowledge of the imaging point spread function (PSF) coupled to sparsity priors on the object to image can provide super-resolution in photoacoustic (PA) or in ultrasound (US) imaging. Here, we experimentally show that such reconstruction also leads to super-resolution in both PA and US imaging with arrays having far fewer elements than used conventionally (sparse arrays). As a proof of concept, we obtained super-resolution PA and US cross-sectional images of microfluidic channels with only 8 elements of a 128-element linear array using a reconstruction approach based on a linear propagation forward model and assuming sparsity of the imaged structure. Although the microchannels appear indistinguishable in the conventional delay-and-sum images obtained with all 128 transducer elements, the applied sparsity-constrained model-based reconstruction provides super-resolution with down to only 8 elements. We also report simulation results showing that the minimal number of transducer elements required to obtain a correct reconstruction is fundamentally limited by the signal-to-noise ratio. The proposed method can be straightforwardly applied to any transducer geometry, including 2D sparse arrays for 3D super-resolution PA and US imaging.
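The core inversion idea can be sketched in a few lines: stack the per-element time samples into a linear system and regularize with a positivity/sparsity constraint. Everything below (the random stand-in propagation matrix, noise level, and element counts) is our illustration, not the paper's calibrated model:

```python
import numpy as np
from scipy.optimize import nnls

# Toy sparse-array reconstruction: rows of A hold the time-domain response of
# each (element, pixel) pair; with few elements A is wide, and a nonnegativity
# constraint (sources are positive absorbers) regularizes the inversion.
rng = np.random.default_rng(2)
n_pix = 200
x_true = np.zeros(n_pix); x_true[[40, 90, 95, 160]] = 1.0   # sparse absorbers

for n_elem in (32, 8, 2):                     # mimic using fewer transducer elements
    A = rng.normal(size=(n_elem * 16, n_pix))  # stand-in linear propagation model
    y = A @ x_true + 0.01 * rng.normal(size=A.shape[0])
    x_hat, _ = nnls(A, y)
    err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
    print(f"{n_elem:2d} elements -> relative error {err:.2f}")
```

As the element count drops, the reconstruction degrades gracefully until the noise level dominates, mirroring the paper's observation that the minimal number of elements is set by the signal-to-noise ratio.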