About: Encircled energy is a research topic. Over its lifetime, 366 publications have been published within this topic, receiving 4,554 citations.
TL;DR: In this paper, the photometric calibration of the HST Advanced Camera for Surveys (ACS) is presented, along with an overview of the performance and calibration of its two CCD cameras, the Wide Field Channel (WFC) and the High Resolution Channel (HRC), and a description of the best techniques for reducing ACS CCD data.
Abstract: We present the photometric calibration of the HST Advanced Camera for Surveys (ACS). We give here an overview of the performance and calibration of the two CCD cameras, the Wide Field Channel (WFC) and the High Resolution Channel (HRC), and a description of the best techniques for reducing ACS CCD data. On-orbit observations of spectrophotometric standard stars have been used to revise the pre-launch estimate of the instrument response curves to best match predicted and observed count rates. Synthetic photometry has been used to determine zeropoints for all filters in three magnitude systems and to derive interstellar extinction values for the ACS photometric systems. Due to the CCD internal scattering of long-wavelength photons, the width of the PSF increases significantly in the near-IR, and the aperture correction for photometry with near-IR filters depends on the spectral energy distribution of the source. We provide encircled energy curves and a detailed recipe to correct for the latter effect. Transformations between the ACS photometric systems and the UBVRI and WFPC2 systems are presented. In general, two sets of transformations are available: one based on the observation of two star clusters, the other on synthetic photometry. We discuss the accuracy of these transformations and their sensitivity to details of the spectra being transformed. Initial signs of detector degradation due to the HST radiative environment are already visible. We discuss the impact on the data in terms of dark rate increase, charge transfer inefficiency, and hot pixel population.
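The encircled-energy curves underlying the aperture correction above can be illustrated with a short numerical sketch: sum a PSF image's flux inside circular apertures of growing radius and normalize by the total. This is an illustrative stand-in (a circular Gaussian PSF and hypothetical function names), not the ACS calibration pipeline itself:

```python
import numpy as np

def encircled_energy(psf, r_max, n_r=50):
    """Cumulative fraction of PSF flux inside circular apertures of
    increasing radius, centred on the peak pixel."""
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)
    y, x = np.indices(psf.shape)
    r = np.hypot(y - cy, x - cx)
    total = psf.sum()
    radii = np.linspace(1.0, r_max, n_r)
    ee = np.array([psf[r <= ri].sum() / total for ri in radii])
    return radii, ee

# Illustrative stand-in PSF: a circular Gaussian with sigma = 2 pixels
y, x = np.indices((101, 101))
psf = np.exp(-((y - 50)**2 + (x - 50)**2) / (2 * 2.0**2))
radii, ee = encircled_energy(psf, r_max=20)
# The aperture correction from a finite radius to "infinity" is 1 / EE(r)
print(f"EE near r = 4 px: {ee[np.searchsorted(radii, 4)]:.3f}")
```

In practice the correction also depends on the source's spectral energy distribution in the near-IR, which is exactly the effect the paper's recipe addresses.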
TL;DR: The Tiny Tim PSF simulation software package has been the standard HST PSF modeling software since its release in early 1992 and has been used extensively for HST data analysis.
Abstract: Point spread function (PSF) models are critical to Hubble Space Telescope (HST) data analysis. Astronomers unfamiliar with optical simulation techniques need access to PSF models that properly match the conditions of their observations, so any HST modeling software needs to be both easy to use and have detailed information on the telescope and instruments. The Tiny Tim PSF simulation software package has been the standard HST modeling software since its release in early 1992. We discuss the evolution of Tiny Tim over the years as new instruments and optical properties have been incorporated. We also demonstrate how Tiny Tim PSF models have been used for HST data analysis. Tiny Tim is freely available from tinytim.stsci.edu.
Keywords: Hubble Space Telescope, point spread function
1. INTRODUCTION
The point spread function (PSF) is the fundamental unit of image formation for an optical system such as a telescope. It encompasses the diffraction from obscurations, which is modified by aberrations, and the scattering from mid-to-high spatial frequency optical errors. Imaging performance is often described in terms of PSF properties, such as resolution and encircled energy. Optical engineering software, including ray tracing and physical optics propagation packages, is employed during the design phase of the system to predict the PSF and ensure that the imaging requirements are met. But once the system is complete and operational, the software is usually packed away and the point spread function considered static, to be described in documentation for reference by the scientist. In this context, an optical engineer runs software to compute PSFs while the user of the optical system simply needs to know its basic characteristics. For the Hubble Space Telescope (HST), that is definitely not the case. To extract the maximum information out of an observation, even the smallest details of the PSF are important.
Some examples include: deconvolving the PSF from an observed image to remove the blurring caused by diffraction and reveal fine structure; convolving a model image with the PSF to compare to an observed one; subtracting the PSF of an unresolved source (star or compact galactic nucleus) to reveal extended structure (a circumstellar disk or host galaxy) that would otherwise be unseen within the halo of diffracted and scattered light; and fitting a PSF to a star image to obtain accurate photometry and astrometry, especially if it is a binary star with blended PSFs. Compared to ground-based telescopes, HST is extremely stable, so the structure in its PSF is largely time-invariant. This allows the use of PSF models for data analysis. On the ground, the variable PSF structure due to the atmosphere and thermally and gravitationally induced optical perturbations makes it more difficult to produce a model that accurately matches the data. The effective HST PSF, though, depends on many parameters, including obscurations, aberrations, pointing errors, system wavelength response, object color, and detector pixel effects. An accurate PSF model must account for all of these, some of which may depend on time (focus, obscuration positions) or on field position within the camera (aberrations, CCD detector charge diffusion, obscuration patterns, geometric distortion).
1.1 Early HST PSF modeling: TIM
Before the launch of HST in 1990, a variety of commercial and proprietary software packages were used to compute PSFs. These provided predictions of HST's imaging performance and guided the design, but they were not used by future HST observers. These programs were too complicated for general HST users, and either were not publicly available or were too expensive. They also did not provide PSF models in forms that scientists would find useful, such as including the effects of detector pixelization and broadband system responses.
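One of the analysis steps listed above, convolving a model image with the PSF, can be sketched numerically via the FFT. This is an illustrative example with a Gaussian stand-in PSF, not Tiny Tim or any HST pipeline; all names are hypothetical:

```python
import numpy as np

def convolve_with_psf(image, psf):
    """Convolve a model image with a same-shaped, centred PSF via the FFT,
    as done when comparing a model to an observed frame.  The PSF is
    normalised so that total flux is conserved."""
    # Roll the PSF so its peak sits at pixel (0, 0), the FFT origin
    peak = np.unravel_index(np.argmax(psf), psf.shape)
    kernel = np.roll(psf / psf.sum(), (-peak[0], -peak[1]), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))

# A point source on an empty field, blurred by the stand-in PSF
y, x = np.indices((64, 64))
psf = np.exp(-((y - 32)**2 + (x - 32)**2) / (2 * 1.5**2))
model = np.zeros((64, 64))
model[20, 40] = 100.0
blurred = convolve_with_psf(model, psf)
print(blurred.sum(), model.sum())  # a normalised kernel conserves flux
```

The FFT approach assumes periodic boundaries; real analysis code typically pads the arrays to suppress wrap-around.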
Affiliations: Johns Hopkins University; Space Telescope Science Institute; University of California, Santa Cruz; Goddard Space Flight Center; Alion Science and Technology; The Racah Institute of Physics; Leiden University; University of Arizona; National Radio Astronomy Observatory; European Southern Observatory
TL;DR: In this article, the authors present an overview of the ACS on-orbit performance based on the calibration observations taken during the first three months of ACS operations, showing that ACS meets or exceeds all of its important performance specifications.
Abstract: We present an overview of the ACS on-orbit performance based on the calibration observations taken during the first three months of ACS operations. The ACS meets or exceeds all of its important performance specifications. At 555 nm, the WFC FWHM and 50% encircled energy diameters are 0.088" and 0.14"; for the HRC they are 0.050" and 0.10". The average rms WFC and HRC read noises are 5.0 e- and 4.7 e-. The WFC and HRC average dark currents are ~7.5 and ~9.1 e-/pixel/hour at their operating temperatures of −76°C and −80°C. The SBC + HST throughput is 0.0476 and 0.0292 through the F125LP and F150LP filters. The lower than expected SBC operating temperature of 15 to 27°C gives a dark current of 0.038 e-/pix/hour. The SBC just misses its image specification with an observed 50% encircled energy diameter of 0.24" at 121.6 nm. The ACS HRC coronagraph provides a direct reduction of a stellar PSF by a factor of 6 to 16, and a PSF-subtracted reduction of ~1000 to ~9000, depending on the size of the coronagraphic spot and the wavelength. The ACS grism has a position-dependent dispersion with an average value of 3.95 nm/pixel. The average resolution λ/Δλ for stellar sources is 65, 87, and 78 at wavelengths of 594 nm, 802 nm, and 978 nm.
Affiliations: UK Astronomy Technology Centre; University of Edinburgh; Spanish National Research Council; University of Padua; INAF; Ames Research Center; California Institute of Technology; Ghent University; University of Nottingham; University of Oxford; Open University; Max Planck Society; Cardiff University; International School for Advanced Studies; University of California, Irvine; Aix-Marseille University; University of Hertfordshire
TL;DR: In this article, the authors describe the reduction of data taken with the PACS instrument on board the Herschel Space Observatory in the Science Demonstration Phase of the H-ATLAS survey, specifically data obtained for a 4 × 4 deg2 region using Herschel's fast-scan (60 arcsec s−1) parallel mode.
Abstract: We describe the reduction of data taken with the PACS instrument on board the Herschel Space Observatory in the Science Demonstration Phase of the Herschel-ATLAS (H-ATLAS) survey, specifically data obtained for a 4 × 4 deg2 region using Herschel's fast-scan (60 arcsec s−1) parallel mode. We describe in detail a pipeline for data reduction using customized procedures within HIPE, from data retrieval to the production of science-quality images. We found that the standard procedure for removing cosmic ray glitches also removed parts of bright sources, and so implemented an effective two-stage process to minimize these problems. The pronounced 1/f noise is removed from the timelines using 3.4- and 2.5-arcmin boxcar high-pass filters at 100 and 160 μm. Empirical measurements of the point spread function (PSF) are used to determine the encircled energy fraction as a function of aperture size. For the 100- and 160-μm bands, the effective PSFs are ∼9 and ∼13 arcsec (FWHM), and the 90-per cent encircled energy radii are 13 and 18 arcsec. Astrometric accuracy is good to ≲2 arcsec. The noise in the final maps is correlated between neighbouring pixels and rather higher than advertised prior to launch. For a pair of cross-scans, the 5σ point-source sensitivities are 125–165 mJy for 9–13 arcsec radius apertures at 100 μm and 150–240 mJy for 13–18 arcsec radius apertures at 160 μm.
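The boxcar high-pass filtering mentioned above removes slow 1/f drifts from a detector timeline by subtracting a running moving-average baseline. The following is a minimal one-dimensional sketch of that idea (not the actual HIPE procedures); the drift model, filter width, and variable names are illustrative assumptions:

```python
import numpy as np

def boxcar_highpass(timeline, width):
    """Subtract a running boxcar (moving-average) baseline from a
    detector timeline, suppressing slow 1/f-like drifts while largely
    preserving features much narrower than the filter width."""
    kernel = np.ones(width) / width
    baseline = np.convolve(timeline, kernel, mode='same')
    return timeline - baseline

rng = np.random.default_rng(0)
t = np.arange(2000)
drift = 0.01 * t                       # slow, 1/f-like baseline drift
signal = np.zeros(t.size)
signal[1000] = 5.0                     # a point source crossing one sample
timeline = drift + signal + 0.1 * rng.standard_normal(t.size)
clean = boxcar_highpass(timeline, width=101)
```

Because the boxcar also absorbs a small fraction (1/width) of any point source, pipelines calibrate this loss, and the paper's two-stage deglitching exists for the same reason: bright sources look like outliers to naive filters.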
TL;DR: In this paper, the authors used the Rayleigh-Sommerfeld theory of diffraction to obtain an exact expression for the axial irradiance of a focused annular laser beam valid for all axial points.
Abstract: Using the Rayleigh-Sommerfeld theory of diffraction we obtain an exact expression for the axial irradiance of a focused annular laser beam valid for all axial points. Conditions for the validity of the Fresnel theory are obtained. We discuss why and how the depth of focus and asymmetry of focused fields about the focal plane depend on the Fresnel number of the beam aperture as observed from the geometric focus. When a beam is focused on a distant target so that the Fresnel number is small (≲5), the principal maximum of axial irradiance occurs at a point which is significantly away from the geometric focus in the direction of the aperture. We discuss how to optimally focus a beam to illuminate a moving distant target in terms of the encircled energy on it. We show that, to obtain the maximum possible concentration of energy on a target, the beam must be focused on it, thus requiring active focusing for a moving target. However, if energy concentration is adequate for a beam focused on a target at a certain distance, it is more than adequate for a considerable range of the distance of a moving target without active focusing. In a shared-aperture optical system the aperture used for focusing a beam on the target is also used for imaging the target. Hence, in such a system the optical transfer function is also more than adequate over a wide range of the target distance without active focusing.
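The focal-shift behaviour described above can be reproduced numerically. The sketch below uses the simpler Fresnel (not Rayleigh-Sommerfeld) on-axis integral, which has a closed form for an annular aperture; the wavelength, focal distance, and aperture radius are made-up values chosen to give a small Fresnel number:

```python
import numpy as np

def axial_irradiance(z, lam, f, a_out, a_in=0.0):
    """On-axis irradiance (arbitrary units) of a beam focused at distance f
    by an annular aperture (inner radius a_in, outer radius a_out), in the
    Fresnel approximation.  The on-axis Fresnel integral over the annulus
    has a closed form; writing it with sinc keeps the focal point, where
    the quadratic phase coefficient vanishes, numerically well behaved."""
    k = 2.0 * np.pi / lam
    alpha = 0.5 * k * (1.0 / z - 1.0 / f)    # quadratic phase coefficient
    A, B = a_out**2, a_in**2
    amp = 0.5 * (A - B) * np.sinc(alpha * (A - B) / (2.0 * np.pi))
    return (amp / z) ** 2                    # includes the 1/z spherical falloff

lam, f, a = 1.0e-6, 1000.0, 0.05             # 1 um beam, 1 km focus, 5 cm aperture
N_fresnel = a**2 / (lam * f)                 # Fresnel number, here ~2.5 (small)
z = np.linspace(0.5 * f, 1.5 * f, 20001)
I = axial_irradiance(z, lam, f, a)
z_peak = z[np.argmax(I)]
print(f"Fresnel number {N_fresnel:.1f}: peak at z = {z_peak:.0f} m, focus at {f:.0f} m")
```

Consistent with the abstract, for this small Fresnel number the principal maximum lands noticeably short of the geometric focus, on the aperture side; as N grows the peak moves back toward z = f.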