Showing papers in "Proceedings of SPIE in 2010"
••
TL;DR: The Multi Unit Spectroscopic Explorer (MUSE) as mentioned in this paper is a second-generation VLT panoramic integral-field spectrograph currently in manufacturing, assembly and integration phase.
Abstract: Summary: The Multi Unit Spectroscopic Explorer (MUSE) is a second-generation VLT panoramic integral-field
spectrograph currently in manufacturing, assembly and integration phase. MUSE has a field of 1x1 arcmin2 sampled at
0.2x0.2 arcsec2 and is assisted by the VLT ground layer adaptive optics ESO facility using four laser guide stars. The
instrument is a large assembly of 24 identical high performance integral field units, each one composed of an advanced
image slicer, a spectrograph and a 4kx4k detector. In this paper we review the progress of the manufacturing and report
the performance achieved with the first integral field unit.
634 citations
••
TL;DR: Pan-STARRS as mentioned in this paper is a wide-field optical/NIR imaging system that has been deployed on Haleakala on Maui, and has been collecting science-quality survey data for approximately six months.
Abstract: Pan-STARRS is a highly cost-effective, modular and scalable approach to wide-field optical/NIR imaging. It uses 1.8m
telescopes with very large (7 square degree) field of view and revolutionary 1.4 billion pixel CCD cameras with low
noise and rapid read-out to provide broad-band imaging from 400-1000nm wavelength. The first single telescope system,
PS1, has been deployed on Haleakala on Maui, and has been collecting science quality survey data for approximately six
months. PS1 will be joined by a second telescope PS2 in approximately 18 months. A four aperture system is planned to
become operational following the end of the PS1 mission. This will be able to scan the entire visible sky to
approximately 24th magnitude in less than a week, thereby meeting the goals set out by the NAS 2000 decadal review for
a "Large Synoptic Sky Telescope". Here we review the technical design, and give an update on the progress that has
been made with the PS1 system.
527 citations
••
TL;DR: A simple yet effective technique to detect median filtering in digital images-a widely used denoising and smoothing operator and backed with experimental evidence on a large image database is presented.
Abstract: In digital image forensics, it is generally accepted that intentional manipulations of the image content are
most critical and hence numerous forensic methods focus on the detection of such 'malicious' post-processing.
However, it is also beneficial to know as much as possible about the general processing history of an image,
including content-preserving operations, since they can affect the reliability of forensic methods in various ways.
In this paper, we present a simple yet effective technique to detect median filtering in digital images-a widely
used denoising and smoothing operator. As a great variety of forensic methods relies on some kind of a linearity
assumption, a detection of non-linear median filtering is of particular interest. The effectiveness of our method
is backed with experimental evidence on a large image database.
243 citations
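Though the paper's detector is more elaborate, the core "streaking" artifact it exploits is easy to demonstrate: median filtering creates constant runs of pixels, so the fraction of zero-valued first-order differences rises sharply. A minimal sketch, assuming a 3×3 median and a synthetic noise image (this is illustrative, not the authors' code):

```python
import numpy as np
from scipy.ndimage import median_filter

def zero_diff_ratio(img):
    """Fraction of zero horizontal first-order differences.

    Median filtering produces constant runs ("streaking"), so this
    ratio tends to be markedly higher for median-filtered images.
    """
    d = np.diff(img.astype(np.int32), axis=1)
    return np.mean(d == 0)

rng = np.random.default_rng(0)
original = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
filtered = median_filter(original, size=3)

r_orig = zero_diff_ratio(original)
r_filt = zero_diff_ratio(filtered)
print(r_orig, r_filt)  # the filtered image shows far more zero differences
```

A practical detector would threshold such a statistic (or feed richer difference-domain features to a classifier), which is the spirit of the paper's large-database evaluation.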
••
TL;DR: An approach for superresolving plenoptic images that takes place at the time of demosaicing the raw color image data is presented and the theoretical possibility for rendering final images at full sensor resolution is shown.
Abstract: Methods and apparatus for super-resolution in focused plenoptic cameras. By examining the geometry of data capture for super-resolution with the focused plenoptic camera, configurations for which super-resolution is realizable at different modes in the focused plenoptic camera are generated. A focused plenoptic camera is described in which infinity is super resolved directly, with registration provided by the camera geometry and the microlens pitch. In an algorithm that may be used to render super-resolved images from flats captured with a focused plenoptic camera, a high-resolution observed image is generated from a flat by interleaving pixels from adjacent microlens images. A deconvolution method may then be applied to the high-resolution observed image to deblur the image.
240 citations
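The interleaving step described above can be sketched in a few lines. This toy assumes four low-resolution views with ideal half-pixel offsets; in the actual focused plenoptic camera the registration comes from the camera geometry and microlens pitch, and a deconvolution step would follow:

```python
import numpy as np

def interleave(microimages):
    """Interleave a 2x2 set of low-res views, each offset by half a
    pixel, into one image with twice the resolution in each axis.
    `microimages[dy][dx]` holds the view shifted by (dy/2, dx/2) pixels.
    """
    h, w = microimages[0][0].shape
    hi = np.empty((2 * h, 2 * w), dtype=microimages[0][0].dtype)
    for dy in range(2):
        for dx in range(2):
            hi[dy::2, dx::2] = microimages[dy][dx]
    return hi

# toy demo: four 4x4 views -> one 8x8 image
views = [[np.full((4, 4), 10 * dy + dx) for dx in range(2)] for dy in range(2)]
hi = interleave(views)
print(hi.shape)  # (8, 8)
```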
••
TL;DR: In this article, the authors discuss the depth cues in the human visual perception for both image quality and visual comfort of direct-view 3D displays and evaluate potential depth limitations of 3D display from a physiological point of view.
Abstract: Over the last decade, various technologies for visualizing
three-dimensional (3D) scenes on displays have been
technologically demonstrated and refined, among them stereoscopic, multi-view, integral-imaging, volumetric,
and holographic types. Most of the current approaches utilize the conventional stereoscopic principle.
But they all suffer from an inherent conflict between vergence and accommodation, since scene depth cannot be
physically realized but only feigned by displaying two views of different perspective on a flat screen and delivering
them to the corresponding left and right eye. This mismatch requires the viewer to override the physiologically
coupled oculomotor processes of vergence and eye focus, which may cause visual discomfort and fatigue.
This paper discusses the depth cues in the human visual perception for both image quality and visual comfort
of direct-view 3D displays. We concentrate our analysis especially on near-range depth cues, compare visual
performance and depth-range capabilities of stereoscopic and holographic displays, and evaluate potential depth
limitations of 3D displays from a physiological point of view.
226 citations
••
TL;DR: The recent MOtion-based Video Integrity Evaluation (MOVIE) index emerges as the leading objective VQA algorithm in this study, while the performance of the Video Quality Metric (VQM) and the Multi-Scale Structural SIMilarity (MS-SSIM) index is noteworthy.
Abstract: Automatic methods to evaluate the perceptual quality of a digital video sequence have widespread applications
wherever the end-user is a human. Several objective video quality assessment (VQA) algorithms exist, whose
performance is typically evaluated using the results of a subjective study performed by the video quality experts
group (VQEG) in 2000. There is a great need for a free, publicly available subjective study of video quality that
embodies state-of-the-art in video processing technology and that is effective in challenging and benchmarking
objective VQA algorithms. In this paper, we present a study and a resulting database, known as the LIVE
Video Quality Database, in which 150 distorted video sequences obtained from 10 different source videos
were subjectively evaluated by 38 human observers. Our study includes videos that have been compressed by
MPEG-2 and H.264, as well as videos obtained by simulated transmission of H.264 compressed streams through
error prone IP and wireless networks. The subjective evaluation was performed using a single stimulus paradigm
with hidden reference removal, where the observers were asked to provide their opinion of video quality on
a continuous scale. We also present the performance of several freely available objective, full reference (FR)
VQA algorithms on the LIVE Video Quality Database. The recent MOtion-based Video Integrity Evaluation
(MOVIE) index emerges as the leading objective VQA algorithm in our study, while the performance of the
Video Quality Metric (VQM) and the Multi-Scale Structural SIMilarity (MS-SSIM) index is noteworthy. The
LIVE Video Quality Database is freely available for download and we hope that our study provides researchers
with a valuable tool to benchmark and improve the performance of objective VQA algorithms.
215 citations
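For orientation, the structural-similarity family of metrics mentioned above (MS-SSIM is a multi-scale, windowed variant) compares luminance, contrast, and structure statistics between a reference and a distorted image. A single-window, whole-image simplification (illustrative only, not MS-SSIM itself):

```python
import numpy as np

def global_ssim(x, y, L=255):
    """Single-window SSIM over the whole image (a simplification of
    the local, windowed SSIM that MS-SSIM applies at multiple scales)."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + C1) * (2 * cov + C2)) / (
        (mx ** 2 + my ** 2 + C1) * (vx + vy + C2))

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(ref + rng.normal(0, 20, ref.shape), 0, 255)
print(global_ssim(ref, ref))   # 1.0 for identical images
print(global_ssim(ref, noisy)) # < 1.0
```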
••
National Institute of Standards and Technology1, Cardiff University2, University of Pennsylvania3, Pontifical Catholic University of Chile4, University of Toronto5, University of California, Berkeley6, University of Oxford7, Princeton University8, University of British Columbia9, University of KwaZulu-Natal10, Rutgers University11, University of Pittsburgh12, University of Michigan13, Haverford College14, West Chester University of Pennsylvania15, Goddard Space Flight Center16
TL;DR: The Atacama Cosmology Telescope (ACT) in Chile was built to measure the cosmic microwave background (CMB) at arcminute angular scales as discussed by the authors, and a new polarization sensitive receiver for ACT was proposed to characterize the gravitational lensing of the CMB and constrain the sum of the neutrino masses with ~ 0.05 eV precision.
Abstract: The six-meter Atacama Cosmology Telescope (ACT) in Chile was built to measure the cosmic microwave background
(CMB) at arcminute angular scales. We are building a new polarization sensitive receiver for ACT
(ACTPol). ACTPol will characterize the gravitational lensing of the CMB and aims to constrain the sum of the
neutrino masses with ~ 0.05 eV precision, the running of the spectral index of inflation-induced fluctuations,
and the primordial helium abundance to better than 1 %. Our observing fields will overlap with the SDSS BOSS
survey at optical wavelengths, enabling a variety of cross-correlation science, including studies of the growth of
cosmic structure from Sunyaev-Zel'dovich observations of clusters of galaxies as well as independent constraints
on the sum of the neutrino masses. We describe the science objectives and the initial receiver design.
200 citations
••
TL;DR: ESPRESSO, the Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations, will combine the efficiency of modern echelle spectrograph design with extreme radial-velocity precision.
Abstract: ESPRESSO, the Echelle SPectrograph for Rocky Exoplanets and Stable Spectroscopic Observations, will combine the
efficiency of modern echelle spectrograph design with extreme radial-velocity precision. It will be installed on ESO's
VLT in order to achieve a gain of two magnitudes with respect to its predecessor HARPS, and the instrumental radial-velocity
precision will be improved to reach the cm/s level. Thanks to its characteristics and the ability to combine
incoherently the light of 4 large telescopes, ESPRESSO will offer new possibilities in various fields of astronomy. The
main scientific objectives will be the search and characterization of rocky exoplanets in the habitable zone of quiet, nearby
G to M-dwarfs, and the analysis of the variability of fundamental physical constants. We will present the ambitious
scientific objectives, the capabilities of ESPRESSO, and the technical solutions of this challenging project.
192 citations
••
TL;DR: In this paper, MATLAB code for synthetic aperture radar image reconstruction using the matched filter and backprojection algorithms is provided, and a manipulation of the backprojection imaging equations shows how common MATLAB functions, ifft and interp1, may be used for straightforward SAR image formation.
Abstract: While many synthetic aperture radar (SAR) image formation techniques exist, two of the most intuitive methods
for implementation by SAR novices are the matched filter and backprojection algorithms. The matched filter and
(non-optimized) backprojection algorithms are undeniably computationally complex. However, the backprojection
algorithm may be successfully employed for many SAR research endeavors not involving considerably large
data sets and not requiring time-critical image formation. Execution of both image reconstruction algorithms
in MATLAB is explicitly addressed. In particular, a manipulation of the backprojection imaging equations is
supplied to show how common MATLAB functions, ifft and interp1, may be used for straightforward SAR
image formation. In addition, limits for scene size and pixel spacing are derived to aid in the selection of an
appropriate imaging grid to avoid aliasing. Example SAR images generated through use of the backprojection
algorithm are provided given four publicly available SAR datasets. Finally, MATLAB code for SAR image
reconstruction using the matched filter and backprojection algorithms is provided.
191 citations
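The backprojection recipe above — interpolate each pulse's range-compressed profile at every pixel's range, remove the two-way carrier phase, and sum coherently — can be illustrated with a toy Python analogue of the paper's MATLAB approach (np.interp standing in for interp1). The geometry, pulse model, and all parameters here are invented for the demo, not taken from the paper's datasets:

```python
import numpy as np

# toy geometry (all values assumed for illustration)
c_light = 3e8
fc = 10e9                             # carrier frequency (Hz)
lam = c_light / fc
apert_x = np.linspace(-50, 50, 201)   # platform positions along track (m)
target = np.array([5.0, 1000.0])      # point scatterer at (x, y) in m

# simulated range-compressed data: per pulse, an idealized sinc pulse
# centered at the target range, carrying the two-way carrier phase
rgrid = np.linspace(990.0, 1010.0, 400)
data = np.empty((apert_x.size, rgrid.size), dtype=complex)
for i, xa in enumerate(apert_x):
    r = np.hypot(target[0] - xa, target[1])
    data[i] = np.sinc((rgrid - r) / 0.5) * np.exp(-4j * np.pi * r / lam)

# backprojection: interpolate each pulse at the pixel range and remove
# the two-way phase before coherently summing over the aperture
xs = np.linspace(0.0, 10.0, 41)
ys = np.linspace(995.0, 1005.0, 41)
img = np.zeros((ys.size, xs.size), dtype=complex)
for i, xa in enumerate(apert_x):
    for iy, y in enumerate(ys):
        r_pix = np.hypot(xs - xa, y)
        samp = np.interp(r_pix, rgrid, data[i].real) + 1j * np.interp(
            r_pix, rgrid, data[i].imag)
        img[iy] += samp * np.exp(4j * np.pi * r_pix / lam)

iy, ix = np.unravel_index(np.argmax(np.abs(img)), img.shape)
print(xs[ix], ys[iy])  # the peak should fall at the scatterer position
```

This double loop is exactly the "undeniably computationally complex" form the abstract mentions; practical implementations vectorize it and derive scene-size/pixel-spacing limits to avoid aliasing.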
••
TL;DR: In this article, the detection algorithm first identifies and removes strong oscillations and then applies an adaptive, wavelet-based matched filter; the authors discuss how super-resolution detection statistics are obtained and the effectiveness of the algorithm on Kepler flight data.
Abstract: The Kepler Mission simultaneously measures the brightness of more than 160,000 stars every 29.4 minutes over a 3.5-year mission to search for transiting planets. Detecting transits is a signal-detection problem where the signal of interest is a periodic pulse train and the predominant noise source is non-white, non-stationary (1/f) type process of stellar variability. Many stars also exhibit coherent or quasi-coherent oscillations. The detection algorithm first identifies and removes strong oscillations followed by an adaptive, wavelet-based matched filter. We discuss how we obtain super-resolution detection statistics and the effectiveness of the algorithm for Kepler flight data.
187 citations
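As a much-simplified stand-in for the mission's adaptive wavelet detector, the matched-filter idea for a periodic pulse train can be sketched by folding the light curve at a trial period and sliding a box kernel (the matched filter for a box-shaped transit) over the phase-binned mean. All numbers here are synthetic:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
flux = rng.normal(0, 1.0, n)            # white-noise stand-in for stellar flux
period, depth, width = 250, 1.5, 5      # inject a transit every 250 samples
for t0 in range(100, n, period):
    flux[t0:t0 + width] -= depth

def fold_snr(flux, trial_period, width):
    """Detection statistic: fold at a trial period and slide a box
    (matched filter for a box pulse) over the phase-binned mean."""
    nfold = len(flux) // trial_period * trial_period
    phase = flux[:nfold].reshape(-1, trial_period).mean(axis=0)
    kernel = np.ones(width) / width
    resp = -np.convolve(phase, kernel, mode="same")
    return resp.max() / flux.std() * np.sqrt(len(flux) // trial_period * width)

snr_true = fold_snr(flux, 250, 5)       # fold at the true period
snr_wrong = fold_snr(flux, 173, 5)      # fold at an unrelated period
print(snr_true, snr_wrong)
```

The real pipeline whitens the non-stationary 1/f stellar variability with wavelets before this correlation step, which is what makes the filter "adaptive".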
••
[...]
Tadayuki Takahashi, Kazuhisa Mitsuda, Richard L. Kelley, Felix Aharonian, and 173 more (44 institutions)
TL;DR: The ASTRO-H mission is the sixth in a series of highly successful X-ray missions initiated by the Institute of Space and Astronautical Science (ISAS) as discussed by the authors.
Abstract: The joint JAXA/NASA ASTRO-H mission is the sixth in a series of highly successful X-ray missions initiated
by the Institute of Space and Astronautical Science (ISAS). ASTRO-H will investigate the physics of the
high-energy universe by performing high-resolution, high-throughput spectroscopy with moderate angular
resolution. ASTRO-H covers a very wide energy range from 0.3 keV to 600 keV. ASTRO-H allows a combination
of wide band X-ray spectroscopy (5-80 keV) provided by multilayer coating, focusing hard X-ray
mirrors and hard X-ray imaging detectors, and high energy-resolution soft X-ray spectroscopy (0.3-12 keV)
provided by thin-foil X-ray optics and a micro-calorimeter array. The mission will also carry an X-ray CCD
camera as a focal plane detector for a soft X-ray telescope (0.4-12 keV) and a non-focusing soft gamma-ray
detector (40-600 keV). The micro-calorimeter system is developed by an international collaboration led
by ISAS/JAXA and NASA. The simultaneous broad bandpass, coupled with high spectral resolution of
ΔE ~7 eV provided by the micro-calorimeter will enable a wide variety of important science themes to be
pursued.
••
INAF
TL;DR: In this article, the authors present the laboratory characterization and performance evaluation of the First Light Adaptive Optics (FLAO), the Natural Guide Star adaptive optics system for the Large Binocular Telescope (LBT), which uses an adaptive secondary mirror with 672 actuators and a pyramid wavefront sensor with adjustable sampling of the telescope pupil from 30×30 down to 4×4 subapertures.
Abstract: In this paper we present the laboratory characterization and performance evaluation of the First Light Adaptive
Optics (FLAO), the Natural Guide Star adaptive optics system for the Large Binocular Telescope (LBT). The
system uses an adaptive secondary mirror with 672 actuators and a pyramid wavefront sensor with adjustable
sampling of the telescope pupil from 30×30 down to 4×4 subapertures. The system was fully assembled in the
Arcetri Observatory laboratories, passing the acceptance test in December 2009. The performance measured
during the test was close to the goal specifications for all star magnitudes. In particular, FLAO obtained 83%
Strehl Ratio (SR) at the bright end (an 8.5-magnitude star in R band) using an H-band filter and correcting 495
modes with 30×30 subaperture sampling. At the faint end (16.4 magnitude), a 5.0% SR was measured correcting 36
modes with 7×7 subapertures. The seeing conditions for these tests were 0.8" (r0 = 0.14 m @ 550 nm)
with an average wind speed of 15 m/s. Results at other seeing conditions up to 1.5" are also presented. The
system has been shipped to the LBT site, and commissioning is taking place from March to December 2010.
A few on-sky results are presented.
••
TL;DR: It is argued that the security strength of template transformation techniques must also consider the computational complexity of obtaining a complete pre-image of the transformed template in addition to the complexity of recovering the original biometric template.
Abstract: One of the critical steps in designing a secure biometric system is protecting the templates of the users that
are stored either in a central database or on smart cards. If a biometric template is compromised, it leads to
serious security and privacy threats because unlike passwords, it is not possible for a legitimate user to revoke
his biometric identifiers and switch to another set of uncompromised identifiers. One methodology for biometric
template protection is the template transformation approach, where the template, consisting of the features
extracted from the biometric trait, is transformed using parameters derived from a user specific password or
key. Only the transformed template is stored and matching is performed directly in the transformed domain.
In this paper, we formally investigate the security strength of template transformation techniques and define
six metrics that facilitate a holistic security evaluation. Furthermore, we analyze the security of two well-known
template transformation techniques, namely, Biohashing and cancelable fingerprint templates, based on
the proposed metrics. Our analysis indicates that both these schemes are vulnerable to intrusion and linkage
attacks because it is relatively easy to obtain either a close approximation of the original template (Biohashing)
or a pre-image of the transformed template (cancelable fingerprints). We argue that the security strength
of template transformation techniques must also consider the computational complexity of obtaining a
complete pre-image of the transformed template in addition to the complexity of recovering the original biometric
template.
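For context, Biohashing — one of the two schemes analyzed — transforms a real-valued template by projecting it onto key-derived random directions and binarizing. A minimal sketch; the dimensions, parameters, and feature vectors below are chosen for illustration and are not taken from the paper:

```python
import numpy as np

def biohash(template, key, n_bits=32):
    """Illustrative BioHashing: project the real-valued template onto
    key-derived random orthonormal directions and binarize by sign.
    Only this binary string is stored; matching uses Hamming distance."""
    rng = np.random.default_rng(key)            # key/password seeds the projection
    proj = rng.normal(size=(n_bits, template.size))
    q, _ = np.linalg.qr(proj.T)                 # orthonormalize the directions
    bits = (template @ q[:, :n_bits]) > 0
    return bits.astype(np.uint8)

rng = np.random.default_rng(0)
template = rng.normal(size=64)                      # e.g. a fingerprint feature vector
noisy = template + rng.normal(scale=0.05, size=64)  # same finger, fresh capture

h1 = biohash(template, key=1234)
h2 = biohash(noisy, key=1234)                   # same user, same key
h3 = biohash(template, key=9999)                # same user, different key
print((h1 != h2).mean(), (h1 != h3).mean())     # small vs ~0.5 Hamming distance
```

The paper's intrusion/linkage analysis hinges on how easily an attacker who knows the key can invert such a projection back to a close approximation of the template.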
••
University of Virginia, University of Arizona, Princeton University, Johns Hopkins University, New Mexico State University, Ames Research Center, Ohio State University, Carnegie Institution for Science, University of Florida, University of Texas at Austin, Spanish National Research Council, Texas Christian University
TL;DR: The Apache Point Observatory Galactic Evolution Experiment (APOGEE) as mentioned in this paper uses a dedicated 300-fiber, narrow-band (1.5-1.7 micron) near-infrared spectrograph to survey approximately 100,000 giant stars across the Milky Way.
Abstract: The Apache Point Observatory Galactic Evolution Experiment (APOGEE) will use a dedicated 300-fiber, narrow-band
(1.5-1.7 micron), high resolution (R~30,000), near-infrared spectrograph to survey approximately 100,000 giant stars
across the Milky Way. This survey, conducted as part of the Sloan Digital Sky Survey III (SDSS III), will revolutionize
our understanding of kinematical and chemical enrichment histories of all Galactic stellar populations. The instrument,
currently in fabrication, will be housed in a separate building adjacent to the 2.5 m SDSS telescope and fed light via
approximately 45-meter fiber runs from the telescope. The instrument design includes numerous technological
challenges and innovations including a gang connector that allows simultaneous connection of all fibers with a single
plug to a telescope cartridge that positions the fibers on the sky, numerous places in the fiber train in which focal ratio
degradation must be minimized, a large (290 mm x 475 mm elliptically-shaped recorded area) mosaic-VPH, an f/1.4 six-element
refractive camera featuring silicon and fused silica elements with diameters as large as 393 mm, and three near-infrared detector arrays within a custom, LN2-cooled, stainless steel vacuum cryostat with dimensions 1.4 m x 2.3 m x 1.3 m.
••
TL;DR: A fast and very versatile solution to minimizing embedding impact in steganography based on syndrome coding using linear convolutional codes with the optimal binary quantizer implemented using the Viterbi algorithm run in the dual domain is proposed.
Abstract: In this paper, we propose a practical approach to minimizing embedding impact in steganography based on syndrome
coding and trellis-coded quantization, and contrast its performance with appropriate
rate-distortion bounds. We assume that each cover element can be assigned a positive scalar expressing the impact
of making an embedding change at that element (single-letter distortion). The problem is to embed a given
payload with minimal possible average embedding impact. This task, which can be viewed as a generalization of
matrix embedding or writing on wet paper, has been approached using heuristic and suboptimal tools in the past.
Here, we propose a fast and very versatile solution to this problem that can theoretically achieve performance
arbitrarily close to the bound. It is based on syndrome coding using linear convolutional codes with the optimal
binary quantizer implemented using the Viterbi algorithm run in the dual domain. The complexity and memory
requirements of the embedding algorithm are linear w.r.t. the number of cover elements. For practitioners,
we include detailed algorithms for finding good codes and their implementation. Finally, we report extensive
experimental results for a large set of relative payloads and for different distortion profiles, including the wet
paper channel.
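The paper's syndrome-trellis codes generalize classical matrix embedding, whose simplest instance uses the [7,4] Hamming code: 3 message bits are embedded into 7 cover bits while changing at most one of them. A sketch of that baseline (the STC construction replaces this block code with a convolutional code quantized by the Viterbi algorithm, which lets it weight per-element distortions):

```python
import numpy as np

# parity-check matrix of the [7,4] Hamming code: column j is j+1 in binary
H = np.array([[int(b) for b in f"{i:03b}"] for i in range(1, 8)]).T  # 3x7

def embed(cover, msg):
    """Classical Hamming matrix embedding: force H @ stego = msg (mod 2)
    by flipping at most one of the 7 cover bits."""
    s = (H @ cover + msg) % 2                    # needed syndrome change
    stego = cover.copy()
    if s.any():
        pos = int("".join(map(str, s)), 2) - 1   # column index equal to s
        stego[pos] ^= 1
    return stego

def extract(stego):
    return (H @ stego) % 2                       # receiver computes the syndrome

cover = np.array([1, 0, 1, 1, 0, 0, 1])
msg = np.array([1, 0, 1])
stego = embed(cover, msg)
print(extract(stego), (stego != cover).sum())    # message recovered, <=1 change
```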
••
TL;DR: This paper reports on recent developments in video coding standardization, particularly focusing on the Call for Proposals on video coding technology made jointly in January 2010 by ITU-T VCEG and ISO/IEC MPEG and the April 2010 responses to that Call.
Abstract: This paper reports on recent developments in video coding standardization, particularly focusing on the Call for Proposals (CfP) on video coding technology made jointly in January 2010 by ITU-T VCEG and ISO/IEC MPEG and the April 2010 responses to that Call. The new standardization initiative is referred to as High Efficiency Video Coding (HEVC) and its development has been undertaken by a new Joint Collaborative Team on Video Coding (JCT-VC) formed by the two organizations. The HEVC standard is intended to provide significantly better compression capability than the existing AVC (ITU-T H.264 | ISO/IEC MPEG-4 Part 10) standard. The results of the CfP are summarized, and the first steps towards the definition of the HEVC standard are described.
••
TL;DR: This paper outlines the structure of the LSST application framework and explores its usefulness for constructing pipelines outside of the LSST context, two examples of which are discussed.
Abstract: The LSST Data Management System is built on an open source software framework that has middleware and
application layers. The middleware layer provides capabilities to construct, configure, and manage pipelines on
clusters of processing nodes, and to manage the data the pipelines consume and produce. It is not in any way specific
to astronomical applications. The complementary application layer provides the building blocks for constructing
pipelines that process astronomical data, both in image and catalog forms. The application layer does not directly
depend upon the LSST middleware, and can readily be used with other middleware implementations. Both layers
have object oriented designs that make the creation of more specialized capabilities relatively easy through class
inheritance.
This paper outlines the structure of the LSST application framework and explores its usefulness for constructing
pipelines outside of the LSST context, two examples of which are discussed. The classes that the framework provides
are related within a domain model that is applicable to any astronomical pipeline that processes imaging data.
Specifically modeled are mosaic imaging sensors; the images from these sensors and the transformations that result
as they are processed from raw sensor readouts to final calibrated science products; and the wide variety of catalogs
that are produced by detecting and measuring astronomical objects in a stream of such images. The classes are
implemented in C++ with Python bindings provided so that pipelines can be constructed in any desired mixture of
C++ and Python.
••
TL;DR: A tool called DTIPrep is developed which pipelines the QC steps with a detailed protocoling and reporting facility and has been successfully applied to several DTI studies with several hundred DWIs in the lab as well as collaborating labs in Utah and Iowa.
Abstract: Diffusion Tensor Imaging (DTI) has become an important MRI procedure to investigate the integrity of white matter in the brain in vivo. DTI is estimated from a series of acquired Diffusion Weighted Imaging (DWI) volumes. DWI data suffer from inherently low SNR and the overall long scanning time of multiple directional encodings, with a correspondingly large risk of encountering several kinds of artifacts. These artifacts can be too severe for a correct and stable estimation of the diffusion tensor. Thus, a quality control (QC) procedure is absolutely necessary for DTI studies. Currently, routine DTI QC procedures are conducted manually by visually checking the DWI data set gradient by gradient and slice by slice. The results often suffer from low consistency across different data sets, lack of agreement between different experts, and the difficulty of judging motion artifacts by qualitative inspection. Additionally, considerable manpower is needed for this step due to the large number of images to QC, which is common for group-comparison and longitudinal studies, especially with an increasing number of diffusion gradient directions. We present a framework for automatic DWI QC. We developed a tool called DTIPrep, which pipelines the QC steps with a detailed protocoling and reporting facility, and it is fully open source. This framework/tool has been successfully applied to several DTI studies with several hundred DWIs in our lab as well as collaborating labs in Utah and Iowa. In our studies, the tool provides a crucial piece for robust DTI analysis in brain white matter studies.
••
TL;DR: This work is on CULA, a GPU accelerated implementation of linear algebra routines, and presents results from factorizations such as LU decomposition, singular value decomposition and QR decomposition along with applications like system solution and least squares.
Abstract: The modern graphics processing unit (GPU) found in many standard personal computers is a highly parallel math
processor capable of nearly 1 TFLOPS peak throughput at a cost similar to a high-end CPU and an excellent
FLOPS/watt ratio. High-level linear algebra operations are computationally intense, often requiring O(N3) operations
and would seem a natural fit for the processing power of the GPU. Our work is on CULA, a GPU accelerated
implementation of linear algebra routines. We present results from factorizations such as LU decomposition, singular
value decomposition and QR decomposition along with applications like system solution and least squares. The GPU
execution model featured by NVIDIA GPUs based on CUDA demands very strong parallelism, requiring between hundreds and thousands of simultaneous operations to achieve high performance. Some constructs from linear algebra map extremely well to the GPU and others map poorly. CPUs, on the other hand, do well at smaller-order parallelism and perform acceptably during low-parallelism code segments. Our work addresses this via a hybrid processing model, in which the CPU and GPU work simultaneously to produce results. In many cases, this is accomplished by allowing each platform to do the work it performs most naturally.
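The dense factorizations named above are standard LAPACK-style operations. As a CPU reference for the computations such a GPU library accelerates (this is SciPy calling LAPACK, illustrative only, not CULA's API):

```python
import numpy as np
from scipy import linalg

rng = np.random.default_rng(0)
A = rng.normal(size=(200, 200))
b = rng.normal(size=200)

P, L, U = linalg.lu(A)             # LU decomposition, A = P @ L @ U
x = linalg.solve(A, b)             # system solution (LU-based internally)
Uvecs, s, Vt = linalg.svd(A)       # singular value decomposition
Q, R = linalg.qr(A)                # QR decomposition

M = rng.normal(size=(300, 50))     # overdetermined least-squares problem
coef, *_ = linalg.lstsq(M, M @ np.ones(50))

print(np.allclose(P @ L @ U, A), np.allclose(A @ x, b))
```

These are the O(N^3)-flop kernels where a GPU's throughput pays off; the hybrid model keeps the small, low-parallelism panel factorizations on the CPU.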
••
TL;DR: The steps of data reduction necessary to fully reduce science observations in the different modes are described with examples on typical data calibrations and observations sequences.
Abstract: The X-shooter data reduction pipeline, as part of the ESO-VLT Data Flow System, provides recipes for Paranal
Science Operations, and for Data Product and Quality Control Operations at Garching headquarters. At Paranal,
it is used for the quick-look data evaluation. The pipeline recipes can be executed either with EsoRex at the
command line level or through the Gasgano graphical user interface. The recipes are implemented with the ESO
Common Pipeline Library (CPL).
X-shooter is the first of the second generation of VLT instruments. It makes it possible to collect in one shot
the full spectrum of the target from 300 to 2500 nm, subdivided into three arms optimised for the UVB, VIS and NIR
ranges, with an efficiency between 15% and 35% including the telescope and the atmosphere, and a spectral
resolution varying between 3000 and 17,000. It allows observations in stare and offset modes, using the slit or an
IFU, and observing sequences nodding the target along the slit.
Data reduction can be performed either with a classical approach, by determining the spectral format via
2D-polynomial transformations, or with the help of a dedicated instrument physical model to gain insight on the
instrument and allowing a constrained solution that depends on a few parameters with a physical meaning.
In the present paper we describe the steps of data reduction necessary to fully reduce science observations in
the different modes with examples on typical data calibrations and observations sequences.
••
TL;DR: MOSFIRE as discussed by the authors is a unique multi-object spectrometer and imager for the Cassegrain focus of the 10 m Keck 1 telescope, which provides near-IR (0.97 to 2.45 μm) multi-object spectroscopy over a 6.14' x 6.14' field of view with a resolving power of R~3,270 for a 0.7" slit width (2.9 pixels in the dispersion direction).
Abstract: MOSFIRE is a unique multi-object spectrometer and imager for the Cassegrain focus of the 10 m Keck 1 telescope. A refractive optical design provides near-IR (0.97 to 2.45 μm) multi-object spectroscopy over a 6.14' x 6.14' field of view with a resolving power of R~3,270 for a 0.7" slit width (2.9 pixels in the dispersion direction), or imaging over a field of view of 6.8' diameter with 0.18" per pixel sampling. A single diffraction grating can be set at two fixed angles, and order-sorting filters provide spectra that cover the K, H, J or Y bands by selecting 3rd, 4th, 5th or 6th order respectively. A folding flat following the field lens is equipped with piezo transducers to provide tip/tilt control for flexure compensation at the 0.1 pixel level. A special feature of MOSFIRE is that its multiplex advantage of up to 46 slits is achieved using a cryogenic Configurable Slit Unit or CSU developed in collaboration with the Swiss Centre for Electronics and Micro Technology (CSEM). The CSU is reconfigurable under remote control in less than 5 minutes without any thermal cycling of the instrument. Slits are formed by moving opposable bars from both sides of the focal plane. An individual slit has a length of 7.1" but bar positions can be aligned to make longer slits. When masking bars are removed to their full extent and the grating is changed to a mirror, MOSFIRE becomes a wide-field imager. Using a single, ASIC-driven, 2K x 2K H2-RG HgCdTe array from Teledyne Imaging Sensors with exceptionally low dark current and low noise, MOSFIRE will be extremely sensitive and ideal for a wide range of science applications. This paper describes the design and testing of the instrument prior to delivery later in 2010.
••
TL;DR: This work reviews and categorizes algorithms for content-aware image retargeting, i.e., resizing an image while taking its content into consideration to preserve important regions and minimize distortions; this is a challenging problem, as it requires preserving the relevant information while maintaining an aesthetically pleasing image for the user.
Abstract: Advances in imaging technology have made the capture and display of digital images ubiquitous. A variety
of displays are used to view them, ranging from high-resolution computer monitors to low-resolution mobile
devices, and images often have to undergo changes in size and aspect ratio to adapt to different screens. Also,
displaying and printing documents with embedded images frequently entail resizing of the images to comply with
the overall layout. Straightforward image resizing operators, such as scaling, often do not produce satisfactory
results, since they are oblivious to image content. In this work, we review and categorize algorithms for content-aware image retargeting, i.e., resizing an image while taking its content into consideration to preserve important
regions and minimize distortions. This is a challenging problem, as it requires preserving the relevant information
while maintaining an aesthetically pleasing image for the user. The techniques typically start by computing an
importance map which represents the relevance of every pixel, and then apply an operator that resizes the image
while taking into account the importance map and additional constraints. We intend this review to be useful to
researchers and practitioners interested in image retargeting.
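The importance-map-plus-operator pipeline described above can be illustrated with seam carving, one widely used retargeting operator (this minimal NumPy sketch is our own illustration, not an algorithm from this review): gradient magnitude serves as the importance map, and the operator removes the lowest-importance vertical seam found by dynamic programming.

```python
import numpy as np

def energy_map(gray):
    """Importance map: gradient-magnitude energy (one common choice)."""
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)

def remove_vertical_seam(gray):
    """Remove the minimum-energy vertical seam via dynamic programming."""
    e = energy_map(gray)
    h, w = e.shape
    cost = e.copy()
    # Accumulate minimal seam cost from the top row down.
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack the cheapest seam and delete one pixel per row.
    out = np.empty((h, w - 1), dtype=gray.dtype)
    j = int(np.argmin(cost[-1]))
    for i in range(h - 1, -1, -1):
        out[i] = np.delete(gray[i], j)
        if i > 0:
            lo, hi = max(j - 1, 0), min(j + 2, w)
            j = lo + int(np.argmin(cost[i - 1, lo:hi]))
    return out

img = np.random.rand(20, 30)
print(remove_vertical_seam(img).shape)  # one column narrower: (20, 29)
```

Repeating the removal (or inserting duplicated seams) retargets the image to an arbitrary width while leaving high-importance regions largely untouched.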
••
TL;DR: In this paper, the authors present several biases that were uncovered while analyzing data on the HR8799 planetary system and describe how they have modified their analysis pipeline to calibrate or remove these effects so that high accuracy astrometry and photometry are achievable.
Abstract: The Angular, Simultaneous Spectral and Reference Star Differential Imaging techniques (ADI, SSDI and RSDI) are currently the main observing approaches being used to pursue large-scale direct exoplanet imaging surveys and will be a key component of next-generation high-contrast imaging instrument science. To allow detection of faint planets, images from these observing techniques are combined in a way that retains the planet flux while subtracting as much of the residual speckle noise as possible. The LOCI algorithm is a very efficient way of combining a set of reference images to subtract the noise of a given image. Although high contrast performance has been achieved with ADI/SSDI/RSDI & LOCI, achieving high accuracy photometry and astrometry can be a challenge, due to various biases coming mainly from the inevitable partial point source self-subtraction for ADI/SSDI and from how LOCI is designed to suppress the noise. We present here several biases that we have uncovered while analyzing data on the HR8799 planetary system and how we have modified our analysis pipeline to calibrate or remove these effects so that high accuracy astrometry and photometry are achievable. In addition, several new upgrades are presented in a new archive-based (i.e. performing ADI, SSDI and RSDI with LOCI as a single PSF subtraction step) multi-instrument reduction and analysis pipeline called SOSIE.
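At its core, LOCI finds, within each local optimization region, the linear combination of reference images that best matches the target in a least-squares sense, then subtracts it. A minimal sketch of that single step (array shapes and function names are our assumptions; this is not the SOSIE pipeline):

```python
import numpy as np

def loci_coefficients(target, references):
    """Least-squares coefficients c minimizing ||target - references @ c||.

    target:     1-D array of pixels in one optimization region.
    references: 2-D array, one column per reference image (same region).
    """
    c, *_ = np.linalg.lstsq(references, target, rcond=None)
    return c

def loci_subtract(target, references):
    """Subtract the optimized reference combination from the target region."""
    c = loci_coefficients(target, references)
    return target - references @ c

rng = np.random.default_rng(0)
refs = rng.normal(size=(500, 5))            # 5 reference images, 500 pixels
speckles = refs @ np.array([0.5, -1.0, 0.2, 0.0, 0.7])
residual = loci_subtract(speckles, refs)    # pure-speckle target -> ~zero
print(np.abs(residual).max())
```

This also makes the self-subtraction bias discussed in the abstract concrete: any planet flux present in the target partially projects onto the reference combination, so part of the planet is subtracted along with the speckles, biasing photometry and astrometry.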
••
TL;DR: The Nuclear Spectroscopic Telescope Array (NuSTAR) is a NASA Small Explorer mission that will carry the first focusing hard X-ray (5 - 80 keV) telescope to orbit as discussed by the authors.
Abstract: The Nuclear Spectroscopic Telescope Array (NuSTAR) is a NASA Small Explorer mission that will carry the first focusing hard X-ray (5 - 80 keV) telescope to orbit. NuSTAR will offer a factor 50 - 100 sensitivity improvement compared to previous collimated or coded mask imagers that have operated in this energy band. In addition, NuSTAR provides sub-arcminute imaging with good spectral resolution over a 12-arcminute field of view. After launch, NuSTAR will carry out a two-year primary science mission that focuses on four key programs: studying the evolution of massive black holes through surveys carried out in fields with excellent multiwavelength coverage, understanding the population of compact objects and the nature of the massive black hole in the center of the Milky Way, constraining explosion dynamics and nucleosynthesis in supernovae, and probing the nature of particle acceleration in relativistic jets in active galactic nuclei. A number of additional observations will be included in the primary mission, and a. guest observer program will be proposed for an extended mission to expand the range of scientific targets. The payload consists of two co-aligned depth-graded multilayer coated grazing incidence optics focused onto solid state CdZnTe pixel detectors. To be launched in early 2012 on a Pegasus rocket into a low-inclination Earth orbit. NuSTAR largely avoids SAA passages, and will therefore have low and stable detector backgrounds. The telescope achieves a 10.15-meter focal length through on-orbit deployment of all mast. An aspect and alignment metrology system enable reconstruction of the absolute aspect and variations in the telescope alignment resulting from mast flexure during ground data processing. Data will be publicly available at GSFC's High Energy Astrophysics Science Archive Research Center (HEASARC) following validation at the science operations center located at Caltech.
••
TL;DR: In this paper, a new theory of exact particle flow for nonlinear filters is proposed, which generalizes the authors' theory of particle flow that is already many orders of magnitude faster than standard particle filters and several orders of magnitude more accurate than the extended Kalman filter for difficult nonlinear problems.
Abstract: We have invented a new theory of exact particle flow for nonlinear filters. This
generalizes our theory of particle flow that is already many orders of magnitude faster than
standard particle filters and which is several orders of magnitude more accurate than the
extended Kalman filter for difficult nonlinear problems. The new theory generalizes our recent
log-homotopy particle flow filters in three ways: (1) the particle flow corresponds to the exact flow
of the conditional probability density; (2) roughly speaking, the old theory was based on
incompressible flow (like subsonic flight in air), whereas the new theory allows compressible flow
(like supersonic flight in air); (3) the old theory suffers from obstruction of particle flow as well as
singularities in the equations for flow, whereas the new theory has no obstructions and no
singularities. Moreover, our basic filter theory is a radical departure from all other particle filters in
three ways: (a) we do not use any proposal density; (b) we never resample; and (c) we compute
Bayes' rule by particle flow rather than as a pointwise multiplication.
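The measurement update by flow can be sketched in the linear-Gaussian special case, where the Daum-Huang "exact flow" has the closed form dx/dλ = A(λ)x + b(λ) with A(λ) = -½PHᵀ(λHPHᵀ + R)⁻¹H and b(λ) = (I + 2λA)[(I + λA)PHᵀR⁻¹z + Ax̄]. The sketch below (our own illustration; variable names and the Euler integration are assumptions, not the authors' code) moves prior particles to posterior particles with no weights and no resampling:

```python
import numpy as np

def exact_flow_update(particles, P, H, R, z, n_steps=100):
    """Exact-flow measurement update for a linear-Gaussian model.

    Integrates dx/dlambda = A(lambda) x + b(lambda) from lambda = 0 to 1,
    transporting prior samples to posterior samples instead of reweighting
    or resampling as a standard particle filter would.
    """
    x = particles.copy()
    xbar = x.mean(axis=0)        # prior mean estimated from the particles
    d_lam = 1.0 / n_steps
    eye = np.eye(len(P))
    for k in range(n_steps):
        lam = (k + 0.5) * d_lam  # midpoint evaluation of the flow
        S = lam * H @ P @ H.T + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        b = (eye + 2 * lam * A) @ (
            (eye + lam * A) @ P @ H.T @ np.linalg.solve(R, z) + A @ xbar
        )
        x += d_lam * (x @ A.T + b)   # Euler step for all particles at once
    return x

# 1-D check: prior N(0, 1), measurement z = x + v with R = 1.
rng = np.random.default_rng(1)
P, H, R = np.array([[1.0]]), np.array([[1.0]]), np.array([[1.0]])
prior = rng.normal(0.0, 1.0, size=(20000, 1))
post = exact_flow_update(prior, P, H, R, z=np.array([2.0]))
print(post.mean(), post.var())  # Kalman posterior is mean 1.0, var 0.5
```

In the linear-Gaussian case the flowed particle cloud should match the Kalman posterior, which gives a convenient sanity check; the paper's contribution is extending such flows, without obstructions or singularities, to nonlinear problems.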
••
Columbia University1, University of Minnesota2, Cardiff University3, McGill University4, Lawrence Berkeley National Laboratory5, National Institute of Standards and Technology6, Imperial College London7, University of California, Berkeley8, Brown University9, Weizmann Institute of Science10, California Institute of Technology11, Paris Diderot University12, Centre national de la recherche scientifique13
TL;DR: EBEX, as described in this paper, is a NASA-funded balloon-borne experiment designed to measure the polarization of the cosmic microwave background (CMB) using 1432 transition edge sensor (TES) bolometric detectors read out with frequency-multiplexed SQUIDs.
Abstract: EBEX is a NASA-funded balloon-borne experiment designed to measure the polarization of the cosmic microwave background (CMB). Observations will be made using 1432 transition edge sensor (TES) bolometric detectors read out with frequency-multiplexed SQUIDs. EBEX will observe in three frequency bands centered at 150, 250, and 410 GHz, with 768, 384, and 280 detectors in each band, respectively. This broad frequency coverage is designed to provide valuable information about polarized foreground signals from dust. The polarized sky signals will be modulated with an achromatic half-wave plate (AHWP) rotating on a superconducting magnetic bearing (SMB) and analyzed with a fixed wire grid polarizer. EBEX will observe a patch covering ~1% of the sky with 8' resolution, allowing for observation of the angular power spectrum from l = 20 to 1000. This will allow EBEX to search for both the primordial B-mode signal predicted by inflation and the anticipated lensing B-mode signal. Calculations to predict EBEX constraints on r using expected noise levels show that, for a likelihood centered around zero and with negligible foregrounds, 99% of the area falls below r = 0.035. This value increases by a factor of 1.6 after a process of foreground subtraction. This estimate does not include systematic uncertainties. An engineering flight was launched in June, 2009, from Ft. Sumner, NM, and the long duration science flight in Antarctica is planned for 2011. These proceedings describe the EBEX instrument and the North American engineering flight.
••
TL;DR: In this paper, the current status of commissioning and recent performance results of the Subaru laser guide star adaptive optics system are presented; the system has consistently achieved Strehl ratios of around 0.6 to 0.7 in the K band using a bright guide star of 9th to 10th magnitude in the R band.
Abstract: We present the current commissioning status and recent performance results of the Subaru laser guide star adaptive optics system. After first light in October 2006, using natural guide stars with a limited configuration of the system, we concentrated on completing the final natural-guide-star configuration so that AO188 could be offered for open-use observations. On-sky tests with the full natural-guide-star configuration started in August 2008, and the system was opened to the public one month later. We have consistently achieved Strehl ratios of around 0.6 to 0.7 in the K band using bright guide stars of 9th to 10th magnitude in the R band. We found an unexpectedly large wavefront error in our laser launching telescope; after a modification to fix it, we resumed characterization of the laser guide star in February 2009 and finally obtained a round laser guide star with an image size of about 1.2 to 1.6 arcsec under typical seeing conditions. We are now in the final phase of commissioning. A diffraction-limited image with our AO system using a laser guide star is expected by the end of 2010, and open-use observations with the laser guide star system will start in mid-2011.
••
TL;DR: The Carnegie Planet Finder Spectrograph (PFS) as mentioned in this paper uses an R4 echelle grating and a prism cross-disperser in a Littrow arrangement to provide complete wavelength coverage between 388 and 668 nm distributed across 64 orders.
Abstract: The Carnegie Planet Finder Spectrograph (PFS) has been commissioned for use with the 6.5 meter Magellan
Clay telescope at Las Campanas Observatory in Chile. PFS is optimized for high precision measurements of
stellar radial velocities to support an ongoing search for extrasolar planets. PFS uses an R4 echelle grating
and a prism cross-disperser in a Littrow arrangement to provide complete wavelength coverage between 388 and
668 nm distributed across 64 orders. The spectral resolution is 38,000 with a 1 arcsecond wide slit. An iodine
absorption cell is used to superimpose well-defined absorption features on the stellar spectra, providing a fiducial
wavelength reference. Several uncommon features have been implemented in the pursuit of increased velocity
stability. These include enclosing the echelle grating in a vacuum tank, actively controlling the temperature of
the instrument, providing a time delayed integration mode to improve flatfielding, and actively controlling the
telescope guiding and focus using an image of the target star on the slit. Data collected in the first five months
of scientific operation indicate that velocity precision better than 1 m s⁻¹ RMS is being achieved.
••
TL;DR: A depth map coding method based on a new distortion measurement by deriving relationships between distortions in coded depth map and rendered view is proposed, with coding gains up to 1.6 dB in interpolated frame quality.
Abstract: New data formats that include both video and the corresponding depth maps, such as multiview plus depth
(MVD), enable new video applications in which intermediate video views (virtual views) can be generated using
the transmitted/stored video views (reference views) and the corresponding depth maps as inputs. We propose a
depth map coding method based on a new distortion measurement by deriving relationships between distortions
in coded depth map and rendered view. In our experiments we use a codec based on H.264/AVC tools, where the
rate-distortion (RD) optimization for depth encoding makes use of the new distortion metric. Our experimental
results show the efficiency of the proposed method, with coding gains of up to 1.6 dB in interpolated frame
quality as compared to encoding the depth maps using the same coding tools but applying RD optimization
based on conventional distortion metrics.
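The key idea, per the abstract, is that the Lagrangian mode decision uses an estimate of the distortion in the rendered (synthesized) view rather than the raw depth-map error. A hedged sketch of that decision step (the distortion model below is our own toy simplification, not the paper's derived relationship, and all names and constants are illustrative):

```python
def rd_select_mode(modes, lam):
    """Pick the coding mode minimizing the Lagrangian cost J = D + lambda * R.

    modes: list of (name, distortion, rate) tuples, where distortion is the
    estimated rendered-view distortion rather than the raw depth-map error.
    """
    return min(modes, key=lambda m: m[1] + lam * m[2])

def rendered_view_distortion(depth_error_sq, texture_gradient_sq, k):
    """Toy model: a depth error shifts the warped pixel, so rendered-view
    distortion grows with both the depth error and the local texture
    gradient (k folds in camera geometry; purely illustrative)."""
    return k * depth_error_sq * texture_gradient_sq

modes = [
    ("skip",  rendered_view_distortion(9.0, 4.0, 0.5), 1.0),   # cheap, lossy
    ("intra", rendered_view_distortion(1.0, 4.0, 0.5), 8.0),   # costly, accurate
]
print(rd_select_mode(modes, lam=2.0)[0])
```

Because the distortion term reflects the interpolated view, a depth error over flat texture (small gradient) costs little, so the optimizer spends bits only where depth errors would visibly damage the rendered view.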
••
TL;DR: PySALT as mentioned in this paper is a Python/PyRAF-based data reduction and analysis pipeline for the Southern African Large Telescope (SALT), a modern 10m class telescope with a large user community consisting of 13 partner institutions.
Abstract: PySALT is the python/PyRAF-based data reduction and analysis pipeline for the Southern African Large Telescope
(SALT), a modern 10m class telescope with a large user community consisting of 13 partner institutions. The two first-generation instruments on SALT are SALTICAM, a wide-field imager, and the Robert Stobie Spectrograph (RSS). Along
with traditional imaging and spectroscopy modes, these instruments provide a wide range of observing modes, including
Fabry-Perot imaging, polarimetric observations, and high-speed observations. Due to the large user community, resources
available, and unique observational modes of SALT, the development of reduction and analysis software is key to
maximizing the scientific return of the telescope. PySALT is developed in the Python/PyRAF environment and takes
advantage of a large library of open-source astronomical software. The goals in the development of PySALT are: (1)
Provide science quality reductions for the major operational modes of SALT, (2) Create analysis tools for the unique
modes of SALT, and (3) Create a framework for the archiving and distribution of SALT data. The data reduction software
currently provides support for the reduction and analysis of regular imaging, high-speed imaging, and long slit
spectroscopy with planned support for multi-object spectroscopy, high-speed spectroscopy, Fabry-Perot imaging, and
polarimetric data sets. We will describe the development and current status of PySALT and highlight its benefits through
early scientific results from SALT.