
Showing papers on "Point spread function published in 2011"


Proceedings ArticleDOI
TL;DR: The Tiny Tim PSF simulation software package has been the standard HST modeling software since its release in early 1992 as mentioned in this paper, and has been used extensively for HST data analysis.
Abstract: Point spread function (PSF) models are critical to Hubble Space Telescope (HST) data analysis. Astronomers unfamiliar with optical simulation techniques need access to PSF models that properly match the conditions of their observations, so any HST modeling software needs to be easy to use and to have detailed information on the telescope and instruments. The Tiny Tim PSF simulation software package has been the standard HST modeling software since its release in early 1992. We discuss the evolution of Tiny Tim over the years as new instruments and optical properties have been incorporated. We also demonstrate how Tiny Tim PSF models have been used for HST data analysis. Tiny Tim is freely available from tinytim.stsci.edu. Keywords: Hubble Space Telescope, point spread function. 1. INTRODUCTION. The point spread function (PSF) is the fundamental unit of image formation for an optical system such as a telescope. It encompasses the diffraction from obscurations, which is modified by aberrations, and the scattering from mid-to-high spatial frequency optical errors. Imaging performance is often described in terms of PSF properties, such as resolution and encircled energy. Optical engineering software, including ray tracing and physical optics propagation packages, is employed during the design phase of the system to predict the PSF and ensure that the imaging requirements are met. But once the system is complete and operational, the software is usually packed away and the point spread function considered static, to be described in documentation for reference by the scientist. In this context, an optical engineer runs software to compute PSFs while the user of the optical system simply needs to know its basic characteristics. For the Hubble Space Telescope (HST), that is definitely not the case. To extract the maximum information from an observation, even the smallest details of the PSF are important. Some examples include: deconvolving the PSF from an observed image to remove the blurring caused by diffraction and reveal fine structure; convolving a model image by the PSF to compare it to an observed one; subtracting the PSF of an unresolved source (star or compact galactic nucleus) to reveal extended structure (a circumstellar disk or host galaxy) that would otherwise be unseen within the halo of diffracted and scattered light; and fitting a PSF to a star image to obtain accurate photometry and astrometry, especially if it is a binary star with blended PSFs. Compared to ground-based telescopes, HST is extremely stable, so the structure in its PSF is largely time-invariant. This allows the use of PSF models for data analysis. On the ground, the variable PSF structure due to the atmosphere and thermally- and gravitationally-induced optical perturbations makes it more difficult to produce a model that accurately matches the data. The effective HST PSF, though, depends on many parameters, including obscurations, aberrations, pointing errors, system wavelength response, object color, and detector pixel effects. An accurate PSF model must account for all of these, some of which may depend on time (focus, obscuration positions) or on field position within the camera (aberrations, CCD detector charge diffusion, obscuration patterns, geometric distortion). 1.1 Early HST PSF modeling: TIM. Before the launch of HST in 1990, a variety of commercial and proprietary software packages were used to compute PSFs.
These provided predictions of HST's imaging performance and guided the design, but they were not used by future HST observers. These programs were too complicated for general HST users, and either were not publicly available or were too expensive. They also did not provide PSF models in forms that scientists would find useful, such as including the effects of detector pixelization and broadband system responses.
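One of the uses listed above, convolving a model image with a PSF and comparing it to an observation, can be sketched in a few lines of Python. This is only an illustration: the FITS file names, the noise model, and the use of astropy/scipy are assumptions, not part of the Tiny Tim package itself.

```python
# Minimal sketch: convolve a model image with a PSF written by Tiny Tim and
# compare it to an observed frame. File names are placeholders.
import numpy as np
from astropy.io import fits
from scipy.signal import fftconvolve

psf = fits.getdata("tinytim_psf.fits")           # PSF model produced by Tiny Tim
psf = psf / psf.sum()                            # normalize to unit total flux

model = fits.getdata("galaxy_model.fits")        # noise-free model image
observed = fits.getdata("observed_frame.fits")   # calibrated HST frame

blurred_model = fftconvolve(model, psf, mode="same")

# Simple figure of merit: chi-square against the observation,
# assuming a (hypothetical) per-pixel noise estimate.
sigma = np.sqrt(np.abs(observed) + 25.0)         # shot noise + read-noise guess
chi2 = np.sum(((observed - blurred_model) / sigma) ** 2)
print(f"chi^2 = {chi2:.1f} over {observed.size} pixels")
```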

412 citations


Journal ArticleDOI
TL;DR: It is demonstrated both theoretically and experimentally that multidimensional MPI is a linear shift-invariant imaging system with an analytic point spread function and a fast image reconstruction method that obtains the intrinsic MPI image with high signal-to-noise ratio via a simple gridding operation in x-space.
Abstract: Magnetic particle imaging (MPI) is a promising new medical imaging tracer modality with potential applications in human angiography, cancer imaging, in vivo cell tracking, and inflammation imaging. Here we demonstrate both theoretically and experimentally that multidimensional MPI is a linear shift-invariant imaging system with an analytic point spread function. We also introduce a fast image reconstruction method that obtains the intrinsic MPI image with high signal-to-noise ratio via a simple gridding operation in x-space. We also demonstrate a method to reconstruct large field-of-view (FOV) images using partial FOV scanning, despite the loss of first harmonic image information due to direct feedthrough contamination. We conclude with the first experimental test of multidimensional x-space MPI.
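The gridding reconstruction described above can be illustrated with a toy 1D sketch: the received signal is divided by the signed field-free-point (FFP) velocity and binned onto the FFP trajectory. The trajectory, sampling rate, and signal below are invented values, not the authors' implementation.

```python
# Rough 1D sketch of x-space style gridding (illustrative only).
import numpy as np

def xspace_grid_1d(signal, ffp_pos, ffp_vel, x_grid):
    """Velocity-compensate the signal and average it into spatial bins."""
    safe_vel = np.where(np.abs(ffp_vel) < 1e-9, 1e-9, ffp_vel)
    compensated = signal / safe_vel
    idx = np.clip(np.searchsorted(x_grid, ffp_pos), 0, len(x_grid) - 1)
    img = np.zeros_like(x_grid)
    hits = np.zeros_like(x_grid)
    np.add.at(img, idx, compensated)
    np.add.at(hits, idx, 1.0)
    return img / np.maximum(hits, 1.0)

# Toy 1D scan: sinusoidal FFP trajectory over a Gaussian particle distribution.
t = np.linspace(0.0, 1e-3, 20000)
ffp_pos = 0.01 * np.sin(2 * np.pi * 25e3 * t)                  # FFP position [m]
ffp_vel = np.gradient(ffp_pos, t)                              # FFP velocity [m/s]
signal = ffp_vel * np.exp(-((ffp_pos - 0.002) / 0.001) ** 2)   # fake detector signal
x_grid = np.linspace(-0.01, 0.01, 200)
image = xspace_grid_1d(signal, ffp_pos, ffp_vel, x_grid)
```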

264 citations


Journal ArticleDOI
TL;DR: Flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
Abstract: The range of scene depths that appear focused in an image is known as the depth of field (DOF). Conventional cameras are limited by a fundamental trade-off between depth of field and signal-to-noise ratio (SNR). For a dark scene, the aperture of the lens must be opened up to maintain SNR, which causes the DOF to reduce. Also, today's cameras have DOFs that correspond to a single slab that is perpendicular to the optical axis. In this paper, we present an imaging system that enables one to control the DOF in new and powerful ways. Our approach is to vary the position and/or orientation of the image detector during the integration time of a single photograph. Even when the detector motion is very small (tens of microns), a large range of scene depths (several meters) is captured, both in and out of focus. Our prototype camera uses a micro-actuator to translate the detector along the optical axis during image integration. Using this device, we demonstrate four applications of flexible DOF. First, we describe extended DOF where a large depth range is captured with a very wide aperture (low noise) but with nearly depth-independent defocus blur. Deconvolving a captured image with a single blur kernel gives an image with extended DOF and high SNR. Next, we show the capture of images with discontinuous DOFs. For instance, near and far objects can be imaged with sharpness, while objects in between are severely blurred. Third, we show that our camera can capture images with tilted DOFs (Scheimpflug imaging) without tilting the image detector. Finally, we demonstrate how our camera can be used to realize nonplanar DOFs. We believe flexible DOF imaging can open a new creative dimension in photography and lead to new capabilities in scientific imaging, vision, and graphics.
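The extended-DOF step, deconvolving the captured image with a single blur kernel, might look roughly like the following Wiener-filter sketch; the Gaussian kernel and the noise-to-signal ratio are placeholders rather than the prototype's measured integrated PSF.

```python
import numpy as np

def wiener_deconvolve(image, kernel, nsr=1e-2):
    """Frequency-domain Wiener deconvolution with a fixed noise-to-signal ratio."""
    pad = np.zeros_like(image)
    kh, kw = kernel.shape
    pad[:kh, :kw] = kernel
    pad = np.roll(pad, (-(kh // 2), -(kw // 2)), axis=(0, 1))  # center kernel at (0, 0)
    K = np.fft.fft2(pad)
    W = np.conj(K) / (np.abs(K) ** 2 + nsr)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * W))

# Placeholder depth-invariant kernel (Gaussian) and a synthetic captured frame.
y, x = np.mgrid[-15:16, -15:16]
kernel = np.exp(-(x ** 2 + y ** 2) / (2 * 3.0 ** 2))
kernel /= kernel.sum()
captured = np.random.rand(256, 256)      # stand-in for the integrated-detector image
sharp = wiener_deconvolve(captured, kernel)
```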

208 citations


Journal ArticleDOI
TL;DR: It is shown that such a condenser can be replaced by a programmable LED array to achieve greater imaging flexibility and functionality and can be used for dark-field imaging, bright-field imaging, microscopy sectioning, and digital refocusing.
Abstract: The condenser is one of the main components in most transmitted light compound microscopes. In this Letter, we show that such a condenser can be replaced by a programmable LED array to achieve greater imaging flexibility and functionality. Without mechanically scanning the sample or changing the microscope setup, the proposed approach can be used for dark-field imaging, bright-field imaging, microscopy sectioning, and digital refocusing. Images of a starfish embryo were acquired by using such an approach for demonstration.

163 citations


Journal ArticleDOI
TL;DR: A confocal fluorescence microscope with adaptive optics, which can correct aberrations based on direct wavefront measurements using a Shack-Hartmann wavefront sensor with a fluorescent bead used as a point source reference beacon, is introduced.
Abstract: Optical aberrations due to the inhomogeneous refractive index of tissue degrade the resolution and brightness of images in deep-tissue imaging. We introduce a confocal fluorescence microscope with adaptive optics, which can correct aberrations based on direct wavefront measurements using a Shack-Hartmann wavefront sensor with a fluorescent bead used as a point source reference beacon. The results show a 4.3× improvement in the Strehl ratio and a 240% improvement in the signal intensity for fixed mouse tissues at depths of up to 100 μm.

154 citations


Journal ArticleDOI
TL;DR: An algorithm designed to achieve high contrast on both sides of the image plane while minimizing the stroke necessary from each deformable mirror (DM) is reviewed.
Abstract: The past decade has seen a significant growth in research targeted at space based observatories for imaging exo-solar planets. The challenge is in designing an imaging system for high-contrast. Even with a perfect coronagraph that modifies the point spread function to achieve high-contrast, wavefront sensing and control is needed to correct the errors in the optics and generate a "dark hole". The high-contrast imaging laboratory at Princeton University is equipped with two Boston Micromachines Kilo-DMs. We review here an algorithm designed to achieve high-contrast on both sides of the image plane while minimizing the stroke necessary from each deformable mirror (DM). This algorithm uses the first DM to correct for amplitude aberrations and the second DM to create a flat wavefront in the pupil plane. We then show the first results obtained at Princeton with this correction algorithm, and we demonstrate a symmetric dark hole in monochromatic light.

144 citations


Journal ArticleDOI
TL;DR: Three-dimensional super-resolution microscopy is demonstrated with the corkscrew point spread function (PSF), which can localize objects in three dimensions throughout a 3.2 μm depth of field with nanometer precision with limited numbers of photons.
Abstract: We describe the corkscrew point spread function (PSF), which can localize objects in three dimensions throughout a 3.2 μm depth of field with nanometer precision. The corkscrew PSF rotates as a function of the axial (z) position of an emitter. Fisher information calculations show that the corkscrew PSF can achieve nanometer localization precision with limited numbers of photons. We demonstrate three-dimensional super-resolution microscopy with the corkscrew PSF by imaging beads on the surface of a triangular polydimethylsiloxane (PDMS) grating. With 99,000 photons detected, the corkscrew PSF achieves a localization precision of 2.7 nm in x, 2.1 nm in y, and 5.7 nm in z.

127 citations


Journal ArticleDOI
TL;DR: A thorough theoretical model of the gamma-distorted fringe image is derived from an optical perspective, and a highly accurate and easy to implement gamma correction method is presented to reduce the obstinate phase error.
Abstract: In fast phase-measuring profilometry, phase error caused by gamma distortion is the dominant error source. Previous phase-error compensation or gamma correction methods require the projector to be focused for best performance. However, in practice, as digital projectors are built with large apertures, they cannot project ideal focused fringe images. In this Letter, a thorough theoretical model of the gamma-distorted fringe image is derived from an optical perspective, and a highly accurate and easy to implement gamma correction method is presented to reduce the obstinate phase error. With the proposed method, high measuring accuracy can be achieved with the conventional three-step phase-shifting algorithm. The validity of the technique is verified by experiments.
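For context, the conventional three-step phase-shifting retrieval mentioned above is sketched below, together with a crude power-law gamma pre-correction; the latter is a common approximation, not the optically derived correction proposed in the Letter, and all parameters are hypothetical.

```python
import numpy as np

def three_step_phase(I1, I2, I3):
    """Wrapped phase from three fringe images with -120/0/+120 degree phase shifts."""
    return np.arctan2(np.sqrt(3.0) * (I1 - I3), 2.0 * I2 - I1 - I3)

def naive_gamma_correct(I, gamma=2.2):
    """Crude power-law linearization of projector/camera gamma (placeholder value)."""
    return np.clip(I, 0.0, 1.0) ** (1.0 / gamma)

# Toy fringe images (hypothetical parameters) to exercise the two functions.
x = np.linspace(0.0, 4 * np.pi, 512)
true_phase = np.tile(x, (64, 1))
shifts = (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)
I1, I2, I3 = (0.5 + 0.5 * np.cos(true_phase + s) for s in shifts)
phase = three_step_phase(*(naive_gamma_correct(I) for I in (I1, I2, I3)))
```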

117 citations


Journal ArticleDOI
TL;DR: A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain (FDTD) to simulate propagation of a diagnostic ultrasound pulse through a measured representation of the human abdomen with heterogeneities in speed of sound, attenuation, density, and nonlinearity.
Abstract: A full-wave equation that describes nonlinear propagation in a heterogeneous attenuating medium is solved numerically with finite differences in the time domain (FDTD). This numerical method is used to simulate propagation of a diagnostic ultrasound pulse through a measured representation of the human abdomen with heterogeneities in speed of sound, attenuation, density, and nonlinearity. Conventional delay-and-sum beamforming is used to generate point spread functions (PSFs) that display the effects of these heterogeneities. For the particular imaging configuration that is modeled, these PSFs reveal that the primary source of degradation in fundamental imaging is reverberation from near-field structures. Reverberation clutter in the harmonic PSF is 26 dB higher than in the fundamental PSF. An artificial medium with uniform velocity but unchanged impedance characteristics indicates that for the fundamental PSF, the primary source of degradation is phase aberration. An ultrasound image is created in silico using the same physical and algorithmic process used in an ultrasound scanner: a series of pulses are transmitted through heterogeneous scattering tissue and the received echoes are used in a delay-and-sum beamforming algorithm to generate images. These beamformed images are compared with images obtained from convolution of the PSF with a scatterer field to demonstrate that a very large portion of the PSF must be used to accurately represent the clutter observed in conventional imaging.
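A bare-bones version of the delay-and-sum receive beamforming used to form these PSFs is sketched below; the array geometry, sampling rate, transmit model, and channel data are illustrative stand-ins, not the simulation setup of the paper.

```python
import numpy as np

def delay_and_sum(rf, element_x, fs, c, focus_x, focus_z):
    """Sum channel data rf[channel, sample] with focusing delays for one image point."""
    # A plane-wave transmit straight down is assumed: tx path ~ focus_z, rx path per element.
    t_focus = (focus_z + np.hypot(focus_x - element_x, focus_z)) / c
    samples = np.clip(np.round(t_focus * fs).astype(int), 0, rf.shape[1] - 1)
    return rf[np.arange(rf.shape[0]), samples].sum()

# Hypothetical 64-element linear array, 40 MHz sampling, c = 1540 m/s.
fs, c = 40e6, 1540.0
element_x = (np.arange(64) - 31.5) * 0.3e-3          # element positions [m]
rf = np.random.randn(64, 4096)                       # stand-in received channel data
pixel = delay_and_sum(rf, element_x, fs, c, focus_x=0.0, focus_z=30e-3)
```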

115 citations


Journal ArticleDOI
TL;DR: An extension of the SPA concept for short-range applications on the basis of an optimized array design and an optimized beamforming algorithm is presented in this paper.
Abstract: This paper presents a multiple-input-multiple-output imaging system on the basis of a hybrid concept with synthetic aperture radar and digital beam forming. By moving a multistatic linear array perpendicularly to the array dimension, a 2-D aperture is sampled. The scope of application is concealed weapon detection in conjunction with the imaging of humans and, alternatively, nondestructive testing (NDT). The frequency range of 75-90 GHz was chosen because of the inherent high lateral resolution. For NDT, it seems to be a good compromise between lateral resolution and penetration depth as well. A moderate number of transmit and receive channels are achieved by a sparse periodic array (SPA) design. Since this is a far-field approach, ambiguities are not well suppressed in the near-field point spread function of the sparse array. An extension of the SPA concept for short-range applications on the basis of an optimized array design and an optimized beamforming algorithm is presented in this paper.

109 citations


Journal ArticleDOI
TL;DR: In this paper, a detection system combining a beam hodoscope and a double scattering Compton camera was proposed for real-time detection of the Bragg peak in the case of a photon point source in air.
Abstract: In hadrontherapy, in order to fully take advantage of the assets of ion irradiation, the position of the Bragg peak has to be monitored accurately. Here, we investigate a monitoring method relying on the detection in real time of the prompt γ emitted quasi-instantaneously during the nuclear fragmentation processes. Our detection system combines a beam hodoscope and a double scattering Compton camera. The prompt-γ emission points are reconstructed by intersecting the ion trajectories given by the hodoscope and the Compton cones reconstructed with the camera. We propose here to study, in terms of point spread function and efficiency, the theoretical feasibility of the emission point reconstruction with our set-up in the case of a photon point source in air. First we analyze the nature of all the interactions which are likely to produce an energy deposit in the three detectors of the camera. It is underlined that upper energy thresholds in both scatter detectors are required in order to select mainly Compton events (one Compton interaction in each scatter detector and one interaction in the absorber detector). Then, we study the influence of various parameters such as the photon energy and the inter-detector distances on the Compton camera response. These studies are carried out by means of Geant4 simulations. We use a source with a spectrum corresponding to the prompt-γ spectrum emitted during the carbon ion irradiation of a water phantom. In the current configuration, the spatial resolution of the Compton camera is about 6 mm (Full Width at Half Maximum) and the detection efficiency about 10⁻⁵. Finally, provided the detection efficiency is increased, the clinical applicability of our system is considered.

Journal ArticleDOI
TL;DR: This paper proposes a novel method for recognizing faces degraded by blur using deblurring of facial images and shows and explains how combining the proposed facial deblur inference with the local phase quantization (LPQ) method can further enhance the performance.
Abstract: This paper proposes a novel method for recognizing faces degraded by blur using deblurring of facial images. The main issue is how to infer a Point Spread Function (PSF) representing the process of blur on faces. Inferring a PSF from a single facial image is an ill-posed problem. Our method uses learned prior information derived from a training set of blurred faces to make the problem more tractable. We construct a feature space such that blurred faces degraded by the same PSF are similar to one another. We learn statistical models that represent prior knowledge of predefined PSF sets in this feature space. A query image of unknown blur is compared with each model and the closest one is selected for PSF inference. The query image is deblurred using the PSF corresponding to that model and is thus ready for recognition. Experiments on a large face database (FERET) artificially degraded by focus or motion blur show that our method substantially improves the recognition performance compared to existing methods. We also demonstrate improved performance on real blurred images on the FRGC 1.0 face database. Furthermore, we show and explain how combining the proposed facial deblur inference with the local phase quantization (LPQ) method can further enhance the performance.
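The inference step, comparing a blurred query against per-PSF models and deblurring with the closest match, could be prototyped along the following lines; the feature extraction, the learned models, and the Gaussian PSF set are all placeholders, not the authors' FERET-trained models.

```python
import numpy as np

def nearest_psf_index(query_feat, model_means):
    """Pick the PSF model whose mean feature vector is closest to the query."""
    dists = np.linalg.norm(model_means - query_feat, axis=1)
    return int(np.argmin(dists))

def wiener_deblur(image, psf, nsr=1e-2):
    """Deblur with the selected PSF via a simple Wiener filter."""
    pad = np.zeros_like(image)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    K = np.fft.fft2(pad)
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(K) /
                                (np.abs(K) ** 2 + nsr)))

# Hypothetical predefined PSF set (Gaussian blurs of increasing width).
y, x = np.mgrid[-7:8, -7:8]
psf_set = [np.exp(-(x ** 2 + y ** 2) / (2 * s ** 2)) for s in (1.0, 2.0, 3.0)]
psf_set = [p / p.sum() for p in psf_set]

query = np.random.rand(128, 128)                        # stand-in for a blurred face
query_feat = np.abs(np.fft.fft2(query)).ravel()[:256]   # toy feature vector
model_means = np.random.rand(len(psf_set), 256)         # learned models (placeholder)
best = nearest_psf_index(query_feat, model_means)
restored = wiener_deblur(query, psf_set[best])
```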

Journal ArticleDOI
TL;DR: In this article, an aberration compensation method using superposition imaging and inexpensive postprocessing is proposed, where the focusing distance and optical axis position of an imaging system with aberrations are varied over certain ranges, and the resulting images are superposed to equalize the point spread function within a three-dimensional region and remove space variance.
Abstract: In this paper, we propose an aberration compensation method using superposition imaging and inexpensive postprocessing. In the method, the focusing distance and optical axis position of an imaging system with aberrations are varied over certain ranges, and the resulting images are superposed to equalize the point spread function (PSF) within a three-dimensional region and remove space variance. A sharp image of an object with a large depth-of-field and field-of-view is then reconstructed by deconvolution of the superposed image using the effective three-dimensionally space-invariant PSF. The effectiveness of the proposed method was verified by simulations assuming defocus, the five Seidel aberrations, and vignetting.

Patent
20 Jun 2011
TL;DR: In this article, a system for output of virtual output images includes an array of image capturing devices for providing image data, which is processed by convolving the image data with a function, e.g., the path, and thereafter deconvolving them, either after or before summation, with an inverse point spread function or a filter equivalent thereto to produce all-focus image data.
Abstract: A system for output of virtual output images includes an array of image capturing devices for providing image data. This image data is processed by convolving the image data with a function, e.g., the path, and thereafter deconvolving them, either after or before summation, with an inverse point spread function or a filter equivalent thereto to produce all-focus image data.

Proceedings ArticleDOI
06 Nov 2011
TL;DR: An approach to alleviate image degradations caused by imperfect optics is presented, relying on a calibration step to encode the optical aberrations in a space-variant point spread function and obtain a corrected image by non-stationary deconvolution.
Abstract: Taking a sharp photo at several megapixel resolution traditionally relies on high grade lenses. In this paper, we present an approach to alleviate image degradations caused by imperfect optics. We rely on a calibration step to encode the optical aberrations in a space-variant point spread function and obtain a corrected image by non-stationary deconvolution. By including the Bayer array in our image formation model, we can perform demosaicing as part of the deconvolution.

Journal ArticleDOI
TL;DR: Deconvolving the volumetric reconstruction with an optimal kernel derived from the Rayleigh-Sommerfeld propagator itself emphasizes the objects responsible for the scattering pattern while suppressing both the propagating light and also such artifacts as the twin image.
Abstract: Rayleigh-Sommerfeld back-propagation can be used to reconstruct the three-dimensional light field responsible for the recorded intensity in an in-line hologram. Deconvolving the volumetric reconstruction with an optimal kernel derived from the Rayleigh-Sommerfeld propagator itself emphasizes the objects responsible for the scattering pattern while suppressing both the propagating light and also such artifacts as the twin image. Bright features in the deconvolved volume may be identified with such objects as colloidal spheres and nanorods. Tracking their thermally-driven Brownian motion through multiple holographic video images provides estimates of the tracking resolution, which approaches 1 nm in all three dimensions.
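Rayleigh-Sommerfeld back-propagation of a normalized in-line hologram can be sketched with the angular-spectrum propagator, as below; the deconvolution with the optimal kernel described in the paper is omitted, and the wavelength, pixel pitch, and depth are example values only.

```python
import numpy as np

def rs_backpropagate(hologram, wavelength, dx, z):
    """Reconstruct the field a distance z behind the hologram plane (angular spectrum)."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, dx)
    fy = np.fft.fftfreq(ny, dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = (1.0 / wavelength) ** 2 - FX ** 2 - FY ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components dropped
    H = np.exp(-1j * kz * z)                         # back-propagation kernel (sign convention assumed)
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

holo = np.random.rand(512, 512)                      # stand-in normalized hologram
field = rs_backpropagate(holo, wavelength=0.532e-6, dx=0.135e-6, z=10e-6)
intensity = np.abs(field) ** 2
```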

Journal ArticleDOI
TL;DR: This work proposes extensions to EM-TV, based on Bregman iterations and primal and dual inverse scale space methods, in order to obtain improved imaging results by simultaneous contrast enhancement and provides error estimates and convergence rates for exact and noisy data.
Abstract: Measurements in nanoscopic imaging suffer from blurring effects modeled with different point spread functions (PSF). Some apparatus even have PSFs that are locally dependent on phase shifts. Additionally, raw data are affected by Poisson noise resulting from laser sampling and "photon counts" in fluorescence microscopy. In these applications standard reconstruction methods (EM, filtered backprojection) deliver unsatisfactory and noisy results. Starting from a statistical modeling in terms of a MAP likelihood estimation we combine the iterative EM algorithm with total variation (TV) regularization techniques to make an efficient use of a-priori information. Typically, TV-based methods deliver reconstructed cartoon images suffering from contrast reduction. We propose extensions to EM-TV, based on Bregman iterations and primal and dual inverse scale space methods, in order to obtain improved imaging results by simultaneous contrast enhancement. Besides further generalizations of the primal and dual scale space methods in terms of general, convex variational regularization methods, we provide error estimates and convergence rates for exact and noisy data. We illustrate the performance of our techniques on synthetic and experimental biological data.
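A heavily simplified stand-in for EM-TV, alternating a Richardson-Lucy (EM) update for Poisson data with a TV denoising step, is sketched below; it does not reproduce the Bregman iterations or inverse scale space extensions, and it assumes scikit-image for the TV step.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import denoise_tv_chambolle

def em_tv(data, psf, n_iter=30, tv_weight=0.02):
    """Alternate Richardson-Lucy (EM) updates with a TV denoising step (simplified)."""
    psf_flip = psf[::-1, ::-1]
    u = np.full_like(data, data.mean())
    for _ in range(n_iter):
        blurred = fftconvolve(u, psf, mode="same") + 1e-12
        u *= fftconvolve(data / blurred, psf_flip, mode="same")   # EM step
        u = denoise_tv_chambolle(u, weight=tv_weight)             # TV step
    return u

# Toy Poisson-count data blurred by a small Gaussian PSF.
data = np.random.poisson(50, size=(128, 128)).astype(float)
y, x = np.mgrid[-5:6, -5:6]
psf = np.exp(-(x ** 2 + y ** 2) / 4.0)
psf /= psf.sum()
recon = em_tv(data, psf)
```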

Journal ArticleDOI
TL;DR: In this paper, a deconvolution of the moments of the apparent brightness distribution of galaxies from the telescope's point spread function (PSF) is used for weak-lensing measurements.
Abstract: We introduce a novel method for weak-lensing measurements, which is based on a mathematically exact deconvolution of the moments of the apparent brightness distribution of galaxies from the telescope's point spread function (PSF). No assumptions on the shape of the galaxy or the PSF are made. The (de)convolution equations are exact for unweighted moments only, while in practice a compact weight function needs to be applied to the noisy images to ensure that the moment measurement yields significant results. We employ a Gaussian weight function, whose centroid and ellipticity are iteratively adjusted to match the corresponding quantities of the source. The change of the moments caused by the application of the weight function can then be corrected by considering higher order weighted moments of the same source. Because of the form of the deconvolution equations, even an incomplete weighting correction leads to an excellent shear estimation if galaxies and PSF are measured with a weight function of identical size. We demonstrate the accuracy and capabilities of this new method in the context of weak gravitational lensing measurements with a set of specialized tests and show its competitive performance on the GREAT08 Challenge data. A complete C++ implementation of the method can be requested from the authors.
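Measuring weighted second-order moments and ellipticities with a Gaussian weight, the raw ingredient of the method, might look like the sketch below; the iterative ellipticity matching and the exact moment deconvolution of the paper are not included, and the weight size is a placeholder.

```python
import numpy as np

def weighted_moments(img, sigma_w, n_iter=3):
    """Weighted quadrupole moments and ellipticity with a circular Gaussian weight."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    cx, cy = nx / 2.0, ny / 2.0
    for _ in range(n_iter):                       # refine the weight centroid
        w = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma_w ** 2))
        f = img * w
        cx, cy = (f * x).sum() / f.sum(), (f * y).sum() / f.sum()
    w = np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma_w ** 2))
    f = img * w
    norm = f.sum()
    qxx = (f * (x - cx) ** 2).sum() / norm
    qyy = (f * (y - cy) ** 2).sum() / norm
    qxy = (f * (x - cx) * (y - cy)).sum() / norm
    e1, e2 = (qxx - qyy) / (qxx + qyy), 2 * qxy / (qxx + qyy)
    return (cx, cy), (qxx, qyy, qxy), (e1, e2)

galaxy = np.random.rand(64, 64)                   # stand-in postage stamp
(cx, cy), Q, (e1, e2) = weighted_moments(galaxy, sigma_w=5.0)
```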

Patent
03 Feb 2011
TL;DR: In this article, a method and system for obtaining a point spread function for deblurring image data captured by an imaging device comprising a motion sensor is presented, where motion path values indicating the motion of the imaging device during the exposure time are acquired.
Abstract: The present invention relates to a method and system for obtaining a point spread function for deblurring image data captured by an imaging device comprising a motion sensor. First, motion path values indicating the motion of the imaging device during the exposure time are acquired. The motion path values of the imaging device are then projected onto the sensor plane and for each sensor pixel the projected motion path values are integrated over time. Said integrated value represents for each sensor pixel an initial estimate of the point spread function. Optionally, the size of the point spread function can also be estimated based on the distance of the focused object and taken into account during the projecting step.
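The idea in this abstract, projecting the sensed motion path onto the sensor plane and integrating dwell time per pixel, can be illustrated as follows; the pinhole-style scaling and the toy gyro path are assumptions made only for the example.

```python
import numpy as np

def psf_from_motion(path_xy, dwell_dt, psf_size=31, scale=1.0):
    """Histogram the projected motion path (in pixels), weighted by dwell time."""
    psf = np.zeros((psf_size, psf_size))
    c = psf_size // 2
    px = np.round(scale * path_xy[:, 0]).astype(int) + c
    py = np.round(scale * path_xy[:, 1]).astype(int) + c
    keep = (px >= 0) & (px < psf_size) & (py >= 0) & (py < psf_size)
    np.add.at(psf, (py[keep], px[keep]), dwell_dt[keep])
    return psf / psf.sum()

# Toy motion path sampled by a motion sensor during a 50 ms exposure (hypothetical).
t = np.linspace(0.0, 0.05, 500)
path = np.column_stack([6 * np.sin(40 * t), 3 * t / t[-1]])   # projected path [pixels]
psf = psf_from_motion(path, np.full(t.size, t[1] - t[0]))
```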

Journal ArticleDOI
TL;DR: A wave optics model of FINCH is presented, which allows analytical calculation of the point spread function (PSF) for both the optical and digital parts of imaging and takes into account a Gaussian aperture for spatial bounding of the light waves.
Abstract: Fresnel Incoherent Correlation Holography (FINCH) allows digital reconstruction of incoherently illuminated objects from intensity records acquired by a Spatial Light Modulator (SLM). The article presents a wave optics model of FINCH, which allows analytical calculation of the point spread function (PSF) for both the optical and digital parts of imaging and takes into account a Gaussian aperture for spatial bounding of the light waves. The 3D PSF is used to determine diffraction limits of the lateral and longitudinal size of a point image created in the FINCH set-up. Lateral and longitudinal resolution is investigated both theoretically and experimentally using quantitative measures introduced for two-point imaging. Dependence of the resolving power on the system parameters is studied, and the optimal geometry of the set-up is designed with regard to the best lateral and longitudinal resolution. Theoretical results are confirmed by experiments in which a light emitting diode (LED) is used as a spatially incoherent source to create object holograms using the SLM.

Journal ArticleDOI
TL;DR: This work develops a band-suppression filter to mitigate edge artifacts in PSF-based reconstruction, applies the filter to simulation and patient data, and compares its performance with other mitigation methods.
Abstract: PSF (point spread function) based image reconstruction causes an overshoot at sharp intensity transitions (edges) of the object. This edge artifact, or ringing, has not been fully studied. In this work, we analyze the properties of edge artifacts in PSF-based reconstruction in an effort to develop mitigation methods. Our study is based on 1D and 2D simulation experiments. Two approaches are adopted to analyze the artifacts. In the system theory approach, we relate the presence of edge artifacts to the null space and conditioning of the imaging operator. We show that edges cannot be accurately recovered with a practical number of image updates when the imaging matrices are poorly conditioned. In the frequency-domain analysis approach, we calculate the object-specific modulation transfer function (OMTF) of the system, defined as spectrum of the reconstruction divided by spectrum of the object. We observe an amplified frequency band in the OMTF of PSF-based reconstruction and that the band is directly related to the presence of ringing. Further analysis shows the amplified band is linearly related to kernel frequency support (the reciprocal of the reconstruction kernel FWHM), and the relation holds for different objects. Based on these properties, we develop a band-suppression filter to mitigate edge artifacts. We apply the filter to simulation and patient data, and compare its performance with other mitigation methods. Analysis shows the band-suppression filter provides better tradeoff of resolution and ringing suppression than a low-pass filter.
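The two quantities used in the analysis, the object-specific MTF and a band-suppression filter, can be sketched in 1D as below; the band edges and gain are placeholders, not values from the paper.

```python
import numpy as np

def omtf(recon, obj):
    """Object-specific modulation transfer function (1D profiles)."""
    R, O = np.fft.rfft(recon), np.fft.rfft(obj)
    return np.abs(R) / np.maximum(np.abs(O), 1e-12)

def band_suppress(profile, f_lo, f_hi, gain=0.5, dx=1.0):
    """Attenuate spectral content between f_lo and f_hi (cycles per sample)."""
    F = np.fft.rfft(profile)
    f = np.fft.rfftfreq(profile.size, dx)
    F[(f >= f_lo) & (f <= f_hi)] *= gain
    return np.fft.irfft(F, n=profile.size)

obj = np.zeros(256)
obj[100:160] = 1.0                                 # step object (toy)
recon = np.convolve(obj, np.hanning(15) / np.hanning(15).sum(), mode="same")
m = omtf(recon, obj)
filtered = band_suppress(recon, f_lo=0.05, f_hi=0.12)
```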

Journal ArticleDOI
TL;DR: The GREAT10 challenge as mentioned in this paper is the second in a series of challenges set to the astronomy, computer science and statistics communities, providing a structured environment in which methods can be improved and tested in preparation for planned astronomical surveys.
Abstract: GRavitational lEnsing Accuracy Testing 2010 (GREAT10) is a public image analysis challenge aimed at the development of algorithms to analyze astronomical images. Specifically, the challenge is to measure varying image distortions in the presence of a variable convolution kernel, pixelization and noise. This is the second in a series of challenges set to the astronomy, computer science and statistics communities, providing a structured environment in which methods can be improved and tested in preparation for planned astronomical surveys. GREAT10 extends upon previous work by introducing variable fields into the challenge. The “Galaxy Challenge” involves the precise measurement of galaxy shape distortions, quantified locally by two parameters called shear, in the presence of a known convolution kernel. Crucially, the convolution kernel and the simulated gravitational lensing shape distortion both now vary as a function of position within the images, as is the case for real data. In addition, we introduce the “Star Challenge” that concerns the reconstruction of a variable convolution kernel, similar to that in a typical astronomical observation. This document details the GREAT10 Challenge for potential participants. Continually updated information is also available from www.greatchallenges.info.

Journal ArticleDOI
TL;DR: A set of invariants derived from Zernike moments which is simultaneously invariant to similarity transformation and to convolution with circularly symmetric point spread function (PSF).
Abstract: The derivation of moment invariants has been extensively investigated in the past decades. In this paper, we construct a set of invariants derived from Zernike moments which is simultaneously invariant to similarity transformation and to convolution with circularly symmetric point spread function (PSF). Two main contributions are provided: the theoretical framework for deriving the Zernike moments of a blurred image and the way to construct the combined geometric-blur invariants. The performance of the proposed descriptors is evaluated with various PSFs and similarity transformations. The comparison of the proposed method with the existing ones is also provided in terms of pattern recognition accuracy, template matching and robustness to noise. Experimental results show that the proposed descriptors perform better overall.

Journal ArticleDOI
TL;DR: An accurate centroid displacement estimation algorithm that is applicable to a new mission concept of performing micro-arcsecond level relative astrometry using a 1 m telescope for detecting terrestrial exoplanets and high-precision photometry missions.
Abstract: Conventional centroid estimation fits a template point spread function (PSF) to image data. Because the PSF is typically not known to high accuracy, systematic errors exist. Here, we present an accurate centroid displacement estimation algorithm by reconstructing the PSF from Nyquist-sampled images. In absence of inter-pixel response variations, this method can estimate centroid displacement between two 32×32 images to sub-micropixel accuracy. Inter-pixel response variations can be calibrated in Fourier space by using laser metrology. The inter-pixel variations of Fourier transforms of the pixel response functions can be conveniently expressed in terms of powers of spatial wavenumbers. Calibrating up to the third-order terms in the expansion, the displacement estimation is accurate to a few micro-pixels. This algorithm is applicable to a new mission concept of performing micro-arcsecond level relative astrometry using a 1 m telescope for detecting terrestrial exoplanets and high-precision photometry missions.
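For orientation, a generic sub-pixel displacement estimator (FFT cross-correlation with parabolic peak refinement) is sketched below; it is not the paper's PSF-reconstruction and laser-metrology calibration pipeline and does not reach micro-pixel accuracy.

```python
import numpy as np

def subpixel_shift(a, b):
    """Shift (dy, dx) such that rolling b by that amount best aligns it with a."""
    xc = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    py, px = np.unravel_index(np.argmax(xc), xc.shape)

    def refine(m1, m0, p1):                        # 3-point parabolic refinement
        denom = m1 - 2.0 * m0 + p1
        return 0.0 if denom == 0 else 0.5 * (m1 - p1) / denom

    ny, nx = xc.shape
    dy = py + refine(xc[py - 1, px], xc[py, px], xc[(py + 1) % ny, px])
    dx = px + refine(xc[py, px - 1], xc[py, px], xc[py, (px + 1) % nx])
    dy = dy - ny if dy > ny / 2 else dy            # map wrapped peaks to negative shifts
    dx = dx - nx if dx > nx / 2 else dx
    return dy, dx

a = np.random.rand(128, 128)
b = np.roll(a, (3, -5), axis=(0, 1))               # integer-shifted copy for a quick check
print(subpixel_shift(a, b))                        # approximately (-3.0, 5.0)
```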

Journal ArticleDOI
TL;DR: A laser scanning two-photon microscope with remote and motionless control of the focus position bypasses the limitations of microscopes based on moving objectives, enabling high-resolution inertia-free 3D imaging.
Abstract: The acquisition of high-resolution images in three dimensions is of utmost importance for the morphological and functional investigation of biological tissues. Here, we present a laser scanning two-photon microscope with remote and motionless control of the focus position. The movement of the excitation spot along the propagation direction is achieved by shaping the laser wavefront with a spatial light modulator. Depending on the optical properties of the objective in use, this approach allows z movements in a range of tens to hundreds of micrometers with small changes of the point spread function. We applied this technique for the three-dimensional (3D) imaging of fluorescent cells in the mouse neocortex in vivo. The presented system bypasses the limitations of microscopes based on moving objectives, enabling high-resolution inertia-free 3D imaging.
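The wavefront-shaping principle, displaying a quadratic (Fresnel-lens) phase on the SLM to displace the focus axially, can be illustrated as follows; the SLM format and the mapping from programmed focal length to z displacement are assumptions, since the latter depends on the objective and relay optics in use.

```python
import numpy as np

def fresnel_lens_phase(n_pix, pixel_pitch, wavelength, focal_length):
    """Wrapped quadratic phase that acts as a lens of the given focal length."""
    half = n_pix * pixel_pitch / 2.0
    x = np.linspace(-half, half, n_pix)
    X, Y = np.meshgrid(x, x)
    phase = -np.pi * (X ** 2 + Y ** 2) / (wavelength * focal_length)
    return np.mod(phase, 2 * np.pi)

# Hypothetical SLM: 512x512 pixels, 15 um pitch, 800 nm excitation wavelength.
pattern = fresnel_lens_phase(512, 15e-6, 0.8e-6, focal_length=2.0)
```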

Patent
11 Apr 2011
TL;DR: In this paper, a point spread function (PSF) with a predefined 3D shape is implemented to obtain high Fisher information in 3D. The PSF may be generated via a phase mask, an amplitude mask, a hologram, or a diffractive optical element.
Abstract: Embodiments include methods, systems, and/or devices that may be used to image, obtain three-dimensional information from a scene, and/or locate multiple small particles and/or objects in three dimensions. A point spread function (PSF) with a predefined three dimensional shape may be implemented to obtain high Fisher information in 3D. The PSF may be generated via a phase mask, an amplitude mask, a hologram, or a diffractive optical element. The small particles may be imaged using the 3D PSF. The images may be used to find the precise location of the object using an estimation algorithm such as maximum likelihood estimation (MLE), expectation maximization, or Bayesian methods, for example. Calibration measurements can be used to improve the theoretical model of the optical system. Fiduciary particles/targets can also be used to compensate for drift and other type of movement of the sample relative to the detector.

Journal ArticleDOI
TL;DR: In this article, the authors studied the contribution of stray light to the two channels of the POlarimetric LIttrow Spectrograph (POLIS) at 396 nm and 630 nm as an example of a slit-spectrograph instrument.
Abstract: Context. Stray light caused by scattering on optical surfaces and in the Earth’s atmosphere degrades the spatial resolution of observations. Whereas post-facto reconstruction techniques are common for 2D imaging and spectroscopy, similar options for slit-spectrograph data are rarely applied. Aims. We study the contribution of stray light to the two channels of the POlarimetric LIttrow Spectrograph (POLIS) at 396 nm and 630 nm as an example of a slit-spectrograph instrument. We test the performance of different methods of stray-light correction and spatial deconvolution to improve the spatial resolution post-facto. Methods. We model the stray light as having two components: a spectrally dispersed component and a “parasitic” component of spectrally undispersed light caused by scattering inside the spectrograph. We used several measurements to estimate the two contributions: a) observations with a (partly) blocked field of view (FOV); b) a convolution of the FTS spectral atlas; c) imaging of the spider mounting in the pupil plane; d) umbral profiles; and e) spurious polarization signal in telluric spectral lines. The measurements with a partly blocked FOV in the focal plane allowed us to estimate the spatial point spread function (PSF) of POLIS and the main spectrograph of the German Vacuum Tower Telescope (VTT). We then used the obtained PSF for a deconvolution of both spectroscopic and spectropolarimetric data and investigated the effect on the spectra. Results. The parasitic contribution can be directly and accurately determined for POLIS, amounting to about 5% (0.3%) of the (continuum) intensity at 396 nm (630 nm). The spectrally dispersed stray light is less accessible because of its many contributing sources. We estimate a lower limit of about 10% across the full FOV for the dispersed stray light from umbral profiles. In quiet Sun regions, the stray-light level from the close surroundings (d < 2″) of a given spatial point is about 20%. The stray light reduces to below 2% at a distance of 20″ from a lit area for both POLIS and the main spectrograph. The spatial deconvolution using the PSF obtained improves the spatial resolution and increases the contrast, with a minor amplification of noise. Conclusions. A two-component model of the stray-light contributions seems to be sufficient for a basic correction of observed spectra. The instrumental PSF obtained can be used to model the off-limb stray light, to determine the stray-light contamination accurately for observation targets with large spatial intensity gradients such as sunspots, and also to improve the spatial resolution of observations post-facto.

Patent
07 Jan 2011
TL;DR: In this article, a double field-of-view imaging system is presented, having a wide field of view with moderate magnification, and a narrow field of views with substantially higher magnification, axially superimposed thereon.
Abstract: An in-vivo imaging device incorporating a double field of view imaging system, having a wide field of view with moderate magnification, and a narrow field of view with substantially higher magnification, axially superimposed thereon. A single imaging array is used for both fields of view. At least some of the optical elements are shared between both of the two different field of view imaging systems. The imaging elements for the high magnification system, being of substantially smaller diameters than those of the low magnification system, are disposed coaxially with the imaging elements of the low magnification system, and can thus use the same imaging array without the need for deflection mirrors, beam combiners or motion systems. Their location on the axis of the low magnification system means that a small part of the imaging plane, around its central axis, is blocked out by the high magnification components.

Journal ArticleDOI
TL;DR: The integrated holographic optical tweezers system with double-helix point spread function (DH-PSF) imaging for high precision three-dimensional multi-particle tracking and quantitative estimation of the lateral and axial forces in an optical trap by measuring the fluid drag force exerted on the particles.
Abstract: We demonstrate an integrated holographic optical tweezers system with double-helix point spread function (DH-PSF) imaging for high precision three-dimensional multi-particle tracking. The tweezers system allows for the creation and control of multiple optical traps in three-dimensions, while the DH-PSF allows for high precision, 3D, multiple-particle tracking in a wide field. The integrated system is suitable for particles emitting/scattering either coherent or incoherent light and is easily adaptable to existing holographic tweezers systems. We demonstrate simultaneous tracking of multiple micro-manipulated particles and perform quantitative estimation of the lateral and axial forces in an optical trap by measuring the fluid drag force exerted on the particles. The system is thus capable of unveiling complex 3D force landscapes that make it suitable for quantitative studies of interactions in colloidal systems, biological materials, and a variety of soft matter systems.
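The DH-PSF readout, taking the midpoint of the two lobes as (x, y) and their orientation angle as a proxy for z, might be prototyped as below; the thresholding, lobe segmentation, and the linear angle-to-z calibration slope are assumptions, not the calibration used in the paper.

```python
import numpy as np
from scipy.ndimage import center_of_mass, label

def dh_psf_xyz(roi, angle_to_z=50.0, threshold=0.5):
    """Return (x, y, z) from a background-subtracted DH-PSF region of interest."""
    mask = roi > threshold * roi.max()
    labels, n = label(mask)
    if n < 2:
        raise ValueError("expected two lobes in the ROI")
    # Keep the two largest connected components (the two lobes).
    sizes = np.bincount(labels.ravel())[1:]
    keep = np.argsort(sizes)[-2:] + 1
    (y1, x1), (y2, x2) = [center_of_mass(roi, labels, k) for k in keep]
    x, y = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
    return x, y, angle * angle_to_z        # nm per degree: placeholder calibration
```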

Journal ArticleDOI
TL;DR: A phase mask DH-PSF microscope is demonstrated for 3D photo-activation localization microscopy (PM-DH-PALM) over an extended axial range and more than doubles the efficiency of current liquid crystal spatial light modulator implementations.
Abstract: We present a double-helix point spread function (DH-PSF) based three-dimensional (3D) microscope with efficient photon collection using a phase mask fabricated by gray-level lithography. The system using the phase mask more than doubles the efficiency of current liquid crystal spatial light modulator implementations. We demonstrate the phase mask DH-PSF microscope for 3D photo-activation localization microscopy (PM-DH-PALM) over an extended axial range.