
Showing papers in "Proceedings of SPIE in 2012"


Proceedings ArticleDOI
TL;DR: Drishti provides an intuitive and powerful interface for choreographing animations and is a cross-platform open-source volume rendering system that delivers high quality, state of the art renderings.
Abstract: Among several rendering techniques for volumetric data, direct volume rendering is a powerful visualization tool for a wide variety of applications. This paper describes the major features of a hardware-based volume exploration and presentation tool, Drishti. The word Drishti stands for vision or insight in Sanskrit, an ancient Indian language. Drishti is a cross-platform open-source volume rendering system that delivers high quality, state of the art renderings. The features in Drishti include, but are not limited to, production quality rendering, volume sculpting, multi-resolution zooming, transfer function blending, profile generation, measurement tools, mesh generation, and stereo/anaglyph/cross-eye renderings. Ultimately, Drishti provides an intuitive and powerful interface for choreographing animations.

453 citations


Proceedings ArticleDOI
TL;DR: A new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution is introduced.
Abstract: Placing a micro lens array in front of an image sensor transforms a normal camera into a single lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras and introduces a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.

412 citations


Proceedings ArticleDOI
TL;DR: Hyper Suprime-Cam (HSC) as mentioned in this paper is an 870 Mega pixel prime focus camera for the 8.2 m Subaru telescope that can produce a sharp image of 0.25 arc-sec FWHM in r-band over the entire 1.5 degree (in diameter) field of view.
Abstract: Hyper Suprime-Cam (HSC) is an 870 Mega pixel prime focus camera for the 8.2 m Subaru telescope. The wide field corrector delivers a sharp image of 0.25 arc-sec FWHM in r-band over the entire 1.5 degree (in diameter) field of view. The collimation of the camera with respect to the optical axis of the primary mirror is realized by hexapod actuators whose mechanical accuracy is a few microns. As a result, we expect to have seeing-limited images most of the time. The expected median seeing is 0.67 arc-sec FWHM in i-band. The sensor is a p-ch fully depleted CCD of 200 micron thickness (2048 x 4096 15 μm square pixels), and we employ 116 of them to pave the 50 cm focal plane. The minimum interval between exposures is roughly 30 seconds, including reading out the arrays, transferring data to the control computer and saving them to the hard drive. HSC uniquely features the combination of a large primary mirror, wide field of view, sharp images and high sensitivity, especially in the red. This enables accurate shape measurement of faint galaxies, which is critical for the planned weak lensing survey to probe the nature of dark energy. The system is being assembled now and will see first light in August 2012.

399 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors describe the as-built performance of MOSFIRE, the multi-object spectrometer and imager for the Cassegrain focus of the 10m Keck 1 telescope.
Abstract: This paper describes the as-built performance of MOSFIRE, the multi-object spectrometer and imager for the Cassegrain focus of the 10-m Keck 1 telescope. MOSFIRE provides near-infrared (0.97 to 2.41 μm) multi-object spectroscopy over a 6.1' x 6.1' field of view with a resolving power of R~3,500 for a 0.7" (0.508 mm) slit (2.9 pixels in the dispersion direction), or imaging over a field of view of ~6.9' diameter with ~0.18" per pixel sampling. A single diffraction grating can be set at two fixed angles, and order-sorting filters provide spectra that cover the K, H, J or Y bands by selecting 3rd, 4th, 5th or 6th order respectively. A folding flat following the field lens is equipped with piezo transducers to provide tip/tilt control for flexure compensation at the <0.1 pixel level. Instead of fabricated focal plane masks requiring frequent cryo-cycling of the instrument, MOSFIRE is equipped with a cryogenic Configurable Slit Unit (CSU) developed in collaboration with the Swiss Center for Electronics and Microtechnology (CSEM). Under remote control the CSU can form masks containing up to 46 slits with ~0.007"-0.014" precision. Reconfiguration time is < 6 minutes. Slits are formed by moving opposable bars from both sides of the focal plane. An individual slit has a length of 7.0", but bar positions can be aligned to make longer slits in increments of 7.5". When the masking bars are retracted from the field of view and the grating is changed to a mirror, MOSFIRE becomes a wide-field imager. The detector is a 2K x 2K H2-RG HgCdTe array from Teledyne Imaging Sensors with low dark current and low noise. Results from integration and commissioning are presented.

380 citations


Proceedings ArticleDOI
TL;DR: The visible spectrograph HARPS-N as discussed by the authors was used at the Telescopio Nazionale Galileo (TNG) to perform radial velocity measurements of extrasolar planetary systems.
Abstract: The Telescopio Nazionale Galileo (TNG)[9] hosts, starting in April 2012, the visible spectrograph HARPS-N. It is based on the design of its predecessor working at ESO's 3.6m telescope, which has achieved unprecedented results in radial velocity measurements of extrasolar planetary systems. The spectrograph's ultra-stable environment, in a temperature-controlled vacuum chamber, will allow measurements below 1 m/s, which will enable the characterization of rocky, Earth-like planets. Enhancements over the original HARPS include better scrambling using shorter octagonal-section fibers, a native tip-tilt system to increase image sharpness, and an integrated pipeline providing a complete set of parameters. Observations in the Kepler field will be the main goal of HARPS-N, and a substantial fraction of TNG observing time will be devoted to this follow-up. The operation process of the observatory has been updated, from scheduling constraints to the telescope control system. Here we describe the entire instrument, along with the results from the first technical commissioning.

315 citations


Proceedings ArticleDOI
TL;DR: The Neutron star Interior Composition ExploreR (NICER) is a proposed NASA Explorer Mission of Opportunity dedicated to the study of the extraordinary gravitational, electromagnetic, and nuclear-physics environments embodied by neutron stars as discussed by the authors.
Abstract: The Neutron star Interior Composition ExploreR (NICER) is a proposed NASA Explorer Mission of Opportunity dedicated to the study of the extraordinary gravitational, electromagnetic, and nuclear-physics environments embodied by neutron stars. NICER will explore the exotic states of matter within neutron stars, where density and pressure are higher than in atomic nuclei, confronting theory with unique observational constraints. NICER will enable rotation-resolved spectroscopy of the thermal and non-thermal emissions of neutron stars in the soft (0.2–12 keV) X-ray band with unprecedented sensitivity, probing interior structure, the origins of dynamic phenomena, and the mechanisms that underlie the most powerful cosmic particle accelerators known. NICER will achieve these goals by deploying, following launch in December 2016, an X-ray timing and spectroscopy instrument as an attached payload aboard the International Space Station (ISS). A robust design compatible with the ISS visibility, vibration, and contamination environments allows NICER to exploit established infrastructure with low risk. Grazing-incidence optics coupled with silicon drift detectors, actively pointed for a full hemisphere of sky coverage, will provide photon-counting spectroscopy and timing registered to GPS time and position, with high throughput and relatively low background. In addition to advancing a vital multi-wavelength approach to neutron star studies through coordination with radio and γ-ray observations, NICER will provide a rapid-response capability for targeting of transients, continuity in X-ray timing astrophysics investigations post-RXTE through a proposed Guest Observer program, and new discovery space in soft X-ray timing science.

314 citations


Proceedings ArticleDOI
TL;DR: This paper proposes efficient algorithms for group sparse optimization with mixed l2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning.
Abstract: This paper proposes efficient algorithms for group sparse optimization with mixed l2,1-regularization, which arises from the reconstruction of group sparse signals in compressive sensing, and the group Lasso problem in statistics and machine learning. It is known that encoding the group information in addition to sparsity can often lead to better signal recovery/feature selection. The l2,1-regularization promotes group sparsity, but the resulting problem, due to the mixed-norm structure and possible grouping irregularity, is considered more difficult to solve than the conventional l1-regularized problem. Our approach is based on a variable splitting strategy and the classic alternating direction method (ADM). Two algorithms are presented, one derived from the primal and the other from the dual of the l2,1-regularized problem. The convergence of the proposed algorithms is guaranteed by the existing ADM theory. General group configurations such as overlapping groups and incomplete covers can be easily handled by our approach. Computational results show that on random problems the proposed ADM algorithms exhibit good efficiency, and strong stability and robustness.
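The computational kernel in ADM schemes of this kind is the proximal operator of the mixed l2,1 norm, which for non-overlapping groups reduces to block soft-thresholding. A minimal sketch of that step (not the authors' implementation; the function name and index-list group encoding are illustrative):

```python
import numpy as np

def prox_group_l21(x, groups, lam):
    """Block soft-thresholding: proximal operator of lam * sum_g ||x_g||_2.

    Each group is shrunk toward zero by lam in Euclidean norm; groups whose
    norm falls below lam are set to zero, producing group sparsity.
    """
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        ng = np.linalg.norm(x[g])
        if ng > lam:
            out[g] = (1.0 - lam / ng) * x[g]
    return out

# Example: the first group survives shrinkage, the second is zeroed out.
x = np.array([3.0, 4.0, 0.5])
print(prox_group_l21(x, [[0, 1], [2]], 1.0))
```

Within a primal ADM iteration, this operator is applied to a splitting variable after each least-squares update; overlapping groups and incomplete covers, as the paper notes, can be handled by duplicating variables in the splitting.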

288 citations


Proceedings ArticleDOI
TL;DR: A rich model of DCT coefficients in a JPEG file for the purpose of detecting steganographic embedding changes delivers superior performance across all tested algorithms and payloads.
Abstract: In this paper, we propose a rich model of DCT coefficients in a JPEG file for the purpose of detecting steganographic embedding changes. The model is built systematically as a union of smaller submodels formed as joint distributions of DCT coefficients from their frequency and spatial neighborhoods covering a wide range of statistical dependencies. Due to its high dimensionality, we combine the rich model with ensemble classifiers and construct detectors for six modern JPEG domain steganographic schemes: nsF5, model-based steganography, YASS, and schemes that use side information at the embedder in the form of the uncompressed image: MME, BCH, and BCHopt. The resulting performance is contrasted with previously proposed feature sets of both low and high dimensionality. We also investigate the performance of individual submodels when grouped by their type as well as the effect of Cartesian calibration. The proposed rich model delivers superior performance across all tested algorithms and payloads.
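The submodels described here are joint distributions of neighboring DCT coefficients. As a rough illustration of one such submodel (not the paper's exact feature set; the truncation threshold and horizontal neighborhood are assumptions), a normalized co-occurrence histogram of adjacent coefficients could be formed as:

```python
import numpy as np

def cooccurrence(dct, T=3):
    """Normalized joint histogram of horizontally adjacent DCT coefficients,
    truncated to [-T, T] to keep the submodel dimensionality small."""
    a = np.clip(dct[:, :-1], -T, T)
    b = np.clip(dct[:, 1:], -T, T)
    hist = np.zeros((2 * T + 1, 2 * T + 1))
    for x, y in zip(a.ravel(), b.ravel()):
        hist[int(x) + T, int(y) + T] += 1
    return hist / hist.sum()
```

In the rich-model approach, many such histograms over different neighborhoods (spatial and frequency, intra- and inter-block) are concatenated into the high-dimensional feature vector that is then fed to the ensemble classifier.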

270 citations


Proceedings ArticleDOI
TL;DR: An efficient hole filling strategy that improves the quality of the depth maps obtained with the Microsoft Kinect device based on a joint-bilateral filtering framework that includes spatial and temporal information.
Abstract: In this paper we present an efficient hole filling strategy that improves the quality of the depth maps obtained with the Microsoft Kinect device. The proposed approach is based on a joint-bilateral filtering framework that includes spatial and temporal information. The missing depth values are obtained by iteratively applying a joint-bilateral filter to their neighbor pixels. The filter weights are selected considering three different factors: visual data, depth information and a temporal-consistency map. Video and depth data are combined to improve depth map quality in the presence of edges and homogeneous regions. Finally, the temporal-consistency map is generated in order to track the reliability of the depth measurements near the hole regions. The obtained depth values are included iteratively in the filtering process of the successive frames, and the accuracy of the hole regions' depth values increases as new samples are acquired and filtered.
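A single-pixel sketch of the joint-bilateral idea, using only the spatial and color-similarity terms (the paper's temporal-consistency weight is omitted; the parameter names, window radius, and zero-as-hole convention are assumptions):

```python
import numpy as np

def fill_hole_pixel(depth, color, y, x, radius=3, sigma_s=2.0, sigma_c=10.0):
    """Estimate a missing depth value as a joint-bilateral weighted average of
    valid neighbors: spatial closeness times color similarity from the
    registered video frame. A depth of 0 marks a hole (Kinect convention)."""
    num = den = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            ny, nx = y + dy, x + dx
            if not (0 <= ny < depth.shape[0] and 0 <= nx < depth.shape[1]):
                continue
            d = depth[ny, nx]
            if d <= 0:  # skip other holes
                continue
            w = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2)) \
                * np.exp(-((color[ny, nx] - color[y, x]) ** 2) / (2 * sigma_c ** 2))
            num += w * d
            den += w
    return num / den if den > 0 else 0.0
```

Iterating this over all hole pixels, and folding previously filled values back in on subsequent frames, mirrors the iterative scheme described in the abstract.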

200 citations


Proceedings ArticleDOI
TL;DR: SPTpol as discussed by the authors is a dual-frequency polarization-sensitive camera that was deployed on the 10-meter South Pole Telescope in January 2012 to measure the polarization anisotropy of the cosmic microwave background (CMB) on angular scales spanning an arcminute to several degrees.
Abstract: SPTpol is a dual-frequency polarization-sensitive camera that was deployed on the 10-meter South Pole Telescope in January 2012. SPTpol will measure the polarization anisotropy of the cosmic microwave background (CMB) on angular scales spanning an arcminute to several degrees. The polarization sensitivity of SPTpol will enable a detection of the CMB “B-mode” polarization from the detection of the gravitational lensing of the CMB by large scale structure, and a detection or improved upper limit on a primordial signal due to inflationary gravity waves. The two measurements can be used to constrain the sum of the neutrino masses and the energy scale of inflation. These science goals can be achieved through the polarization sensitivity of the SPTpol camera and careful control of systematics. The SPTpol camera consists of 768 pixels, each containing two transition-edge sensor (TES) bolometers coupled to orthogonal polarizations, and a total of 1536 bolometers. The pixels are sensitive to light in one of two frequency bands centered at 90 and 150 GHz, with 180 pixels at 90 GHz and 588 pixels at 150 GHz. The SPTpol design has several features designed to control polarization systematics, including: single-moded feedhorns with low cross-polarization, bolometer pairs well-matched to difference atmospheric signals, an improved ground shield design based on far-sidelobe measurements of the SPT, and a small beam to reduce temperature to polarization leakage. We present an overview of the SPTpol instrument design, project status, and science projections.

178 citations


Proceedings ArticleDOI
TL;DR: In this article, the authors present the preliminary design of the WEAVE next generation spectroscopy facility for the William Herschel Telescope (WHT), which is a multi-object and multi-IFU facility utilizing a new 2 degree prime focus field of view.
Abstract: We present the preliminary design of the WEAVE next generation spectroscopy facility for the William Herschel Telescope (WHT), principally targeting optical ground-based follow-up of upcoming ground-based (LOFAR) and space-based (Gaia) surveys. WEAVE is a multi-object and multi-IFU facility utilizing a new 2 degree prime focus field of view at the WHT, with a buffered pick and place positioner system hosting 1000 multi-object (MOS) fibres or up to 30 integral field units for each observation. The fibres are fed to a single spectrograph, with a pair of 8k (spectral) x 6k (spatial) pixel cameras, located within the WHT GHRIL enclosure on the telescope Nasmyth platform, supporting observations at R~5000 over the full 370-1000nm wavelength range in a single exposure, or a high resolution mode with limited coverage in each arm at R~20000.

Proceedings ArticleDOI
TL;DR: The MARX ray-trace program was originally developed to simulate event data from the transmission grating spectrometers on-board the Chandra X-ray Observatory (CXO) as mentioned in this paper.
Abstract: MARX is a portable ray-trace program that was originally developed to simulate event data from the transmission grating spectrometers on-board the Chandra X-ray Observatory (CXO). MARX has since evolved to include detailed models of all CXO science instruments and has been further modified to serve as an event simulator for future X-ray observatory design concepts. We first review a number of CXO applications of MARX to demonstrate the roles such a program could play throughout the life of a mission, including its design and calibration, the production of input data products for the development of the various software pipelines, and for observer proposal planning. We describe how MARX was utilized in the design of a proposed future X-ray spectroscopy mission called AEGIS (Astrophysics Experiment for Grating and Imaging Spectroscopy), a mission concept optimized for the 0.2 to 1 keV soft X-ray band. AEGIS consists of six independent Critical Angle Transmission Grating Spectrometers (CATGS) arranged to provide a resolving power of 3000 and an effective area exceeding 1000 cm2 across its passband. Such high spectral resolution and effective area will permit AEGIS to address many astrophysics questions, including those that pertain to the evolution of Large Scale Structure of the universe, and the behavior of matter at very high densities. The MARX ray-trace of the AEGIS spectrometer yields quantitative estimates of how the spectrometer's performance is affected by misalignments between the various system elements, and by deviations of those elements from their idealized geometry. From this information, we are able to make the appropriate design tradeoffs to maximize the performance of the system.

Proceedings ArticleDOI
TL;DR: This paper describes an approach for finding the rumor source and assessing the likelihood that a piece of information is in fact a rumor, in the absence of data provenance information, and shows that with a sufficient number of monitor nodes, it is possible to recognize most rumors and their sources with high accuracy.
Abstract: Information that propagates through social networks can carry a lot of false claims. For example, rumors on certain topics can propagate rapidly leading to a large number of nodes reporting the same (incorrect) observations. In this paper, we describe an approach for finding the rumor source and assessing the likelihood that a piece of information is in fact a rumor, in the absence of data provenance information. We model the social network as a directed graph, where vertices represent individuals and directed edges represent information flow (e.g., who follows whom on Twitter). A number of monitor nodes are injected into the network whose job is to report data they receive. Our algorithm identifies rumors and their sources by observing which of the monitors received the given piece of information and which did not. We show that, with a sufficient number of monitor nodes, it is possible to recognize most rumors and their sources with high accuracy.
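A simplified consistency score in this spirit (not the paper's algorithm; graph encoding and function names are illustrative) would rank each candidate source by how well its forward reachability explains which monitors did and did not report the rumor:

```python
from collections import deque

def reachable(adj, src):
    """BFS over a directed graph given as {node: [successors]}."""
    seen = {src}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                q.append(v)
    return seen

def score_sources(adj, nodes, hit, miss):
    """Score each candidate source: +1 for every monitor in `hit` it can
    reach, +1 for every monitor in `miss` it cannot reach. The highest
    scoring node is the most consistent rumor origin."""
    scores = {}
    for s in nodes:
        r = reachable(adj, s)
        scores[s] = sum(m in r for m in hit) + sum(m not in r for m in miss)
    return scores
```

On a toy graph where monitor 'd' reported the rumor but monitor 'c' did not, a source like 'b' that reaches 'd' but not 'c' scores higher than 'a', which reaches both.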

Proceedings ArticleDOI
TL;DR: The Habitable Zone Planet Finder (HPF) as mentioned in this paper is a stabilized fiber-fed near-infrared (NIR) spectrograph for the 10 meter class Hobby-Eberly Telescope (HET) that will be capable of discovering low mass planets around M dwarfs.
Abstract: We present the scientific motivation and conceptual design for the recently funded Habitable-zone Planet Finder (HPF), a stabilized fiber-fed near-infrared (NIR) spectrograph for the 10 meter class Hobby-Eberly Telescope (HET) that will be capable of discovering low mass planets around M dwarfs. The HPF will cover the NIR Y & J bands to enable precise radial velocities to be obtained on mid M dwarfs, and enable the detection of low mass planets around these stars. The conceptual design is comprised of a cryostat cooled to 200K, a dual fiber-feed with a science and calibration fiber, a gold coated mosaic echelle grating, and a Teledyne Hawaii-2RG (H2RG) NIR detector with a 1.7 μm cutoff. A uranium-neon hollow-cathode lamp is the baseline wavelength calibration source, and we are actively testing laser frequency combs to enable even higher radial velocity precision. We will present the overall instrument system design and integration with the HET, and discuss major system challenges, key choices, and ongoing research and development projects to mitigate risk. We also discuss the ongoing process of target selection for the HPF survey.

Proceedings ArticleDOI
TL;DR: In this article, a MEMS tunable VCSEL is demonstrated for OCT imaging, achieving axial scan rates from 100 kHz up to 1.2 MHz with unidirectional and bidirectional high-duty-cycle sweeps and an extremely long imaging range.
Abstract: This paper demonstrates new wavelength swept light source technology, MEMS tunable VCSELs, for OCT imaging. The VCSEL achieves a combination of ultrahigh sweep speeds, wide spectral tuning range, flexibility in sweep trajectory, and extremely long coherence length, which cannot be simultaneously achieved with other technologies. A second generation prototype VCSEL is optically pumped at 980nm and a low mass electrostatically tunable mirror enables high speed wavelength tuning centered at ~1310nm with ~110nm of tunable bandwidth. Record coherence length >100mm enables extremely long imaging range. By changing the drive waveform, a single 1310nm VCSEL was driven to sweep at speeds from 100kHz to 1.2MHz axial scan rate with unidirectional and bidirectional high duty cycle sweeps. We demonstrate long range and high resolution 1310nm OCT imaging of the human anterior eye at 100kHz axial scan rate and imaging of biological samples at speeds of 60kHz - 1MHz. A first generation 1050nm device is shown to sweep over 100nm. The results of this study suggest that MEMS based VCSEL swept light source technology has unique performance characteristics and will be a critical technology for future ultrahigh speed and long depth range OCT imaging.

Proceedings ArticleDOI
TL;DR: In this article, the four posts of high contrast imaging and their intricate interactions at very small angles (within the first 4 resolution elements from the star) are reviewed, along with the current state of the art.
Abstract: Small-angle coronagraphy is technically and scientifically appealing because it enables the use of smaller telescopes, allows covering wider wavelength ranges, and potentially increases the yield and completeness of circumstellar environment – exoplanets and disks – detection and characterization campaigns. However, opening up this new parameter space is challenging. Here we will review the four posts of high contrast imaging and their intricate interactions at very small angles (within the first 4 resolution elements from the star). The four posts are: choice of coronagraph, optimized wavefront control, observing strategy, and post-processing methods. After detailing each of the four foundations, we will present the lessons learned from the 10+ years of operations of zeroth and first-generation adaptive optics systems. We will then tentatively show how informative the current integration of second-generation adaptive optics system is, and which lessons can already be drawn from this fresh experience. Then, we will review the current state of the art, by presenting world record contrasts obtained in the framework of technological demonstrations for space-based exoplanet imaging and characterization mission concepts. Finally, we will conclude by emphasizing the importance of the cross-breeding between techniques developed for both ground-based and space-based projects, which is relevant for future high contrast imaging instruments and facilities in space or on the ground.

Proceedings ArticleDOI
TL;DR: A comparative assessment of the strengths and weaknesses of both measurement principles from the current perspective will show that deflectometry is now heading to become a serious competitor to interferometry.
Abstract: Deflectometric methods that are capable of providing full-field topography data for specular freeform surfaces have been around for more than a decade. They have proven successful in various fields of application, such as the measurement of progressive power eyeglasses, painted car body panels, or windshields. However, up to now deflectometry has not been considered as a viable competitor to interferometry, especially for the qualification of optical components. The reason is that, despite the unparalleled local sensitivity provided by deflectometric methods, the global height accuracy attainable with this measurement technique used to be limited to several microns over a field of 100 mm. Moreover, spurious reflections at the rear surface of transparent objects could easily mess up the measured signal completely. Due to new calibration and evaluation procedures, this situation has changed lately. We will give a comparative assessment of the strengths and – now partly revised – weaknesses of both measurement principles from the current perspective. By presenting recent developments and measurement examples from different applications, we will show that deflectometry is now heading to become a serious competitor to interferometry.

Proceedings ArticleDOI
Tadayuki Takahashi, Kazuhisa Mitsuda, Richard L. Kelley, H. Aarts, and 220 more authors (49 institutions)
TL;DR: The ASTRO-H mission is the sixth in a series of highly successful X-ray missions initiated by the Institute of Space and Astronautical Science (ISAS) as discussed by the authors.
Abstract: The joint JAXA/NASA ASTRO-H mission is the sixth in a series of highly successful X-ray missions initiated by the Institute of Space and Astronautical Science (ISAS). ASTRO-H will investigate the physics of the high-energy universe via a suite of four instruments, covering a very wide energy range, from 0.3 keV to 600 keV. These instruments include a high-resolution, high-throughput spectrometer sensitive over 0.3–12 keV with high spectral resolution of ΔE ≤ 7 eV, enabled by a micro-calorimeter array located in the focal plane of thin-foil X-ray optics; hard X-ray imaging spectrometers covering 5–80 keV, located in the focal plane of multilayer-coated, focusing hard X-ray mirrors; a wide-field imaging spectrometer sensitive over 0.4–12 keV, with an X-ray CCD camera in the focal plane of a soft X-ray telescope; and a non-focusing Compton-camera type soft gamma-ray detector, sensitive in the 40–600 keV band. The simultaneous broad bandpass, coupled with high spectral resolution, will enable the pursuit of a wide variety of important science themes.

Proceedings ArticleDOI
TL;DR: The Giant Magellan Telescope (GMT) as discussed by the authors is a 25-meter optical/infrared extremely large telescope that is being built by an international consortium of universities and research institutions, which will be located at the Las Campanas Observatory, Chile.
Abstract: The Giant Magellan Telescope (GMT) is a 25-meter optical/infrared extremely large telescope that is being built by an international consortium of universities and research institutions. It will be located at the Las Campanas Observatory, Chile. The GMT primary mirror consists of seven 8.4-m borosilicate honeycomb mirror segments made at the Steward Observatory Mirror Lab (SOML). Six identical off-axis segments and one on-axis segment are arranged on a single nearly-paraboloidal parent surface having an overall focal ratio of f/0.7. The fabrication, testing and verification procedures required to produce the closely-matched off-axis mirror segments were developed during the production of the first mirror. Production of the second and third off-axis segments is underway. GMT incorporates a seven-segment Gregorian adaptive secondary to implement three modes of adaptive-optics operation: natural-guide star AO, laser-tomography AO, and ground-layer AO. A wide-field corrector/ADC is available for use in seeing-limited mode over a 20-arcmin diameter field of view. Up to seven instruments can be mounted simultaneously on the telescope in a large Gregorian Instrument Rotator. Conceptual design studies were completed for six AO and seeing-limited instruments, plus a multi-object fiber feed, and a roadmap for phased deployment of the GMT instrument suite is being developed. The partner institutions have made firm commitments for approximately 45% of the funds required to build the telescope. Project Office efforts are currently focused on advancing the telescope and enclosure design in preparation for subsystem- and system-level preliminary design reviews which are scheduled to be completed in the first half of 2013.

Proceedings ArticleDOI
TL;DR: Deformable mirrors have been widely used in astronomy, from very large voice coil deformable mirrors to very small and compact ones embedded in Multi Object Adaptive Optics systems as mentioned in this paper.
Abstract: From the ardent bucklers used during the Syracuse battle to set fire to Romans' ships to more contemporary piezoelectric deformable mirrors widely used in astronomy, from very large voice coil deformable mirrors considered for future Extremely Large Telescopes to very small and compact ones embedded in Multi Object Adaptive Optics systems, this paper aims at giving an overview of Deformable Mirror technology for Adaptive Optics and Astronomy. First the main drivers for the design of Deformable Mirrors are recalled, related not only to atmospheric aberration compensation but also to environmental conditions and mechanical constraints. Then the different technologies available today for the manufacturing of Deformable Mirrors are described, with pros and cons analyzed. A review of the companies and institutes with capabilities in delivering Deformable Mirrors to astronomers is presented, as well as lessons learned from the past 25 years of technological development and operation on sky. In conclusion, perspectives are tentatively drawn regarding the future of Deformable Mirror technology for Astronomy.

Proceedings ArticleDOI
TL;DR: The Cornell-SLAC Pixel Array Detector (CSPAD) as mentioned in this paper is a camera system at the Linear Coherent Light Source (LCLS) that can image scattered x-rays on a per-shot basis.
Abstract: The Linear Coherent Light Source (LCLS), a free electron laser operating from 250 eV to 10 keV at 120 Hz, is opening windows on new science in biology, chemistry, and solid state, atomic, and plasma physics1,2. The FEL provides coherent x-rays in femtosecond pulses of unprecedented intensity. This allows the study of materials on up to 3 orders of magnitude shorter time scales than previously possible. Many experiments at the LCLS require a detector that can image scattered x-rays on a per-shot basis with high efficiency and excellent spatial resolution over a large solid angle, with both good S/N (for single-photon counting) and large dynamic range (required for the new coherent x-ray diffractive imaging technique3). The Cornell-SLAC Pixel Array Detector (CSPAD) has been developed to meet these requirements. SLAC has built, characterized, and installed three full camera systems at the CXI and XPP hutches at LCLS. This paper describes the camera system and its characterization and performance.

Proceedings ArticleDOI
TL;DR: The focus of this effort is to develop an approach to measure, identify and threshold these differences in order to establish an effective land mapping and feature extraction process germane to WorldView-2 imagery.
Abstract: Multispectral imagery (MSI) provides information to support decision making across a growing number of private and industrial applications. Among them, land mapping, terrain classification and feature extraction rank highly in the interest of those who analyze the data to produce information, reports, and intelligence products. The 8 nominal band centers of WorldView-2 allow us to use non-traditional means of measuring the differences which exist in the features, artifacts, and surface materials in the data, and we can determine the most effective method for processing this information by exploiting the unique response values within those wavelength channels. The difference in responses across select bands can be sought using normalized difference index ratios to measure moisture content, indicate vegetation health, and distinguish natural features from man-made objects. The focus of this effort is to develop an approach to measure, identify and threshold these differences in order to establish an effective land mapping and feature extraction process germane to WorldView-2 imagery.
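A normalized difference index of the kind described is simply the difference of two band responses over their sum; NDVI, for instance, contrasts a near-infrared band with a red band. A minimal sketch (band choice and the zero-denominator handling are illustrative, not the paper's specific method):

```python
import numpy as np

def normalized_difference(band_a, band_b):
    """Generic normalized-difference index, e.g. NDVI = (NIR - Red)/(NIR + Red).
    Values range from -1 to 1; pixels with a zero denominator are set to 0."""
    a = band_a.astype(float)
    b = band_b.astype(float)
    denom = a + b
    with np.errstate(divide='ignore', invalid='ignore'):
        return np.where(denom != 0, (a - b) / denom, 0.0)

# Healthy vegetation reflects strongly in NIR relative to red,
# so its index value is high; bare soil or water scores much lower.
nir = np.array([0.50, 0.20])
red = np.array([0.10, 0.18])
print(normalized_difference(nir, red))
```

Thresholding such an index image is then the "measure, identify and threshold" step the abstract describes: pixels above a chosen cutoff are labeled as one surface class, the rest as another.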

Proceedings ArticleDOI
TL;DR: The functionality of the pipeline is described, details of new and unorthodox processing steps are highlighted, algorithms and code reusable from other projects are discussed, and performance is demonstrated on both laboratory data and simulated scientific data.
Abstract: MUSE, the Multi Unit Spectroscopic Explorer, is an integral-field spectrograph under construction for the ESO VLT to see first light in 2013. It can record spectra of a 1′x1′ field on the sky at a sampling of 0″.2x0″.2, over a wavelength range from 4650 to 9300 Å. The data reduction for this instrument is the process which converts raw data from the 24 CCDs into a combined datacube (with two spatial and one wavelength axis) which is corrected for instrumental and atmospheric effects. Since the instrument consists of many subunits (24 integral-field units, each slicing the light into 48 parts, i.e. 1152 regions with a total of almost 90000 spectra per exposure), this task requires many steps and is computationally expensive, in terms of processing speed, memory usage, and disk input/output. The data reduction software is designed to be mostly run as an automated pipeline and to fit into the open source environment of the ESO data flow as well as into a data management system based on AstroWISE. We describe the functionality of the pipeline, highlight details of new and unorthodox processing steps, and discuss which algorithms and code could be used from other projects. Finally, we show the performance on both laboratory data as well as simulated scientific data.
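The subunit and spectra counts quoted above follow from simple arithmetic; a quick check using only the figures given in the abstract (24 IFUs, 48 slices each, 1′x1′ field at 0″.2 sampling):

```python
ifus = 24
slices_per_ifu = 48
regions = ifus * slices_per_ifu               # 1152 slit-like regions

field_arcsec = 60                             # 1' x 1' field of view
sampling_arcsec = 0.2                         # spatial sampling per spaxel
spaxels_per_side = round(field_arcsec / sampling_arcsec)
spectra = spaxels_per_side ** 2               # 300 x 300 = 90000 spectra/exposure
```

This confirms the "almost 90000 spectra per exposure" figure and gives a sense of why memory and I/O dominate the pipeline's cost.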

Proceedings ArticleDOI
TL;DR: In this paper, the authors provide subjective evaluation results to assess the performance of the current HEVC codec, the latest attempt by ISO/MPEG and ITU-T/VCEG to define the next-generation compression standard, for resolutions beyond HDTV.
Abstract: High Efficiency Video Coding (HEVC) is the latest attempt by ISO/MPEG and ITU-T/VCEG to define the next generation compression standard beyond H.264/MPEG-4 Part 10 AVC. One of the major goals of HEVC is to provide efficient compression for resolutions beyond HDTV. However, the subjective evaluations that led to the selection of technologies were bound to HDTV resolution. Moreover, performance evaluation metrics to report efficiency results of this standard are mainly based on PSNR, especially for resolutions beyond HDTV. This paper provides subjective evaluation results to assess the performance of the current HEVC codec for resolutions beyond HDTV.
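Since the efficiency results the paper criticizes are mainly PSNR-based, a minimal reference implementation of PSNR may be useful context. This is the standard textbook definition over co-located samples, not code from the HEVC evaluation itself:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB for 8-bit samples by default."""
    # mean squared error over co-located samples
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    return math.inf if mse == 0 else 10.0 * math.log10(peak ** 2 / mse)

# e.g. a uniform error of 16 levels on 8-bit samples
value = psnr([0] * 100, [16] * 100)   # ~24.05 dB
```

PSNR's weakness, and the motivation for the subjective tests reported here, is precisely that a single MSE-derived number correlates poorly with perceived quality at very high resolutions.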

Proceedings ArticleDOI
TL;DR: In this paper, an adaptive centroiding and repositioning method ("Peak-Up") is proposed that uses the Spitzer Pointing Control Reference Sensor (PCRS) to repeatedly position a target to within 0.1 IRAC pixels of an area of minimal gain variation.
Abstract: The Infrared Array Camera (IRAC) on the Spitzer Space Telescope has been used to measure < 10^(-4) temporal variations in point sources (such as transiting extrasolar planets) at 3.6 and 4.5 μm. Due to the under-sampled nature of the PSF, the warm IRAC arrays show variations of as much as 8% in sensitivity as the center of the PSF moves across a pixel due to normal spacecraft pointing wobble and drift. These intra-pixel gain variations are the largest source of correlated noise in IRAC photometry. Usually this effect is removed by fitting a model to the science data themselves (self-calibration), which could result in the removal of astrophysically interesting signals. We describe a new technique for significantly reducing the gain variations and improving photometric precision in a given observation, without using the data to be corrected. This comprises: (1) an adaptive centroiding and repositioning method ("Peak-Up") that uses the Spitzer Pointing Control Reference Sensor (PCRS) to repeatedly position a target to within 0.1 IRAC pixels of an area of minimal gain variation; and (2) the high-precision, high-resolution measurement of the pixel gain structure using non-variable stars. We show that the technique currently allows the reduction of correlated noise by almost an order of magnitude over raw data, which is comparable to the improvement due to self-calibration. We discuss other possible sources of correlated noise, and proposals for reducing their impact on photometric precision.
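The intra-pixel gain correction described above can be illustrated with a toy model: measure the centroid's sub-pixel position, look up the gain there, and divide it out of the measured flux. The sinusoidal gain shape and 4% amplitude below are hypothetical stand-ins for the measured IRAC pixel gain map, not the actual calibration:

```python
import math

def pixel_phase_gain(dx, dy, amp=0.04):
    """Toy separable intra-pixel gain model; dx, dy are pixel-phase offsets.

    Gain peaks at the pixel center (dx = dy = 0) and falls toward the
    corners, mimicking the under-sampled-PSF sensitivity variation.
    """
    return 1.0 + amp * (math.cos(2 * math.pi * dx) + math.cos(2 * math.pi * dy)) / 2.0

def correct_flux(flux, dx, dy):
    # divide the measured flux by the gain at the measured centroid position
    return flux / pixel_phase_gain(dx, dy)
```

The Peak-Up strategy amounts to keeping (dx, dy) pinned near a flat region of this surface so that the residual correction, and hence the correlated noise, is small.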

Proceedings ArticleDOI
M. Cirasuolo, Jose Afonso1, Marcella Carollo2, Hector Flores3, Roberto Maiolino4, Ernesto Oliva5, S. Paltani6, Leonardo Vanzi, Chris Evans, Manuel Abreu, David Atkinson, C. Babusiaux3, Steven Beard, Franz E. Bauer, Michele Bellazzini, Ralf Bender7, Philip Best8, Naidu Bezawada, Piercarlo Bonifacio3, Angela Bragaglia5, I. Bryson, D. Busher4, Alexandre Cabral, Karina Caputi, Mauro Centrone5, Fanny Chemla3, A. Cimatti, M. R. L. Cioni9, G. Clementini5, João Coelho, Denija Crnojević8, E. Daddi, James Dunlop8, Stephen Anthony Eales10, Sofia Feltzing11, Annette M. N. Ferguson8, Malcolm E. Fisher4, Adriano Fontana5, J. P. U. Fynbo, B. Garilli5, Gerard Gilmore4, Adrian M. Glauser2, Isabelle Guinouard3, Francois Hammer3, P. Hastings, A. Hess, Rob Ivison, P. Jagourel3, Matt J. Jarvis12, Lex Kaper, G. Kauffman7, A. T. Kitching13, Andy Lawrence8, D. Lee, B. Lemasle, G. Licausi5, Simon J. Lilly2, Dario Lorenzetti5, David Lunney, F. Mannucci5, Ross J. McLure8, Dante Minniti, David Montgomery, B. Muschielok, Kirpal Nandra7, Ramón Navarro14, Peder Norberg15, S. J. Oliver16, Livia Origlia5, Nelson D. Padilla, John A. Peacock8, Fernando Pedichini5, J. Peng4, Laura Pentericci5, J. Pragt14, Mathieu Puech3, Sofia Randich5, Phil Rees, Alvio Renzini5, Nils Ryde11, Mark Rodrigues17, Isaac Roseboom8, F. Royer3, R. P. Saglia, Ariel G. Sánchez7, Ricardo P. Schiavon18, H. Schnetler, David Sobral8, Roberto Speziali5, David Sun4, Remko Stuik14, Andy Taylor8, William Taylor, Stephen Todd, Eline Tolstoy, Miguel Torres, Monica Tosi5, Eros Vanzella5, Lars Venema19, Fabrizio Vitali5, Michael Wegner, Martyn Wells, Vivienne Wild20, G. Wright, G. Zamorani5, Manuela Zoccali 
TL;DR: MOONS as mentioned in this paper is a new multi-object optical and near-infrared spectrograph selected by ESO as a third-generation instrument for the VLT. It will provide the European astronomical community with a powerful, unique instrument able to pioneer a wide range of Galactic, extragalactic, and cosmological studies and to provide crucial follow-up for major facilities such as Gaia, VISTA, Euclid, and LSST.
Abstract: MOONS is a new Multi-Object Optical and Near-infrared Spectrograph selected by ESO as a third-generation instrument for the Very Large Telescope (VLT). The large collecting area offered by the VLT (8.2 m diameter), combined with the large multiplex and wavelength coverage (optical to near-IR: 0.8 μm - 1.8 μm) of MOONS, will provide the European astronomical community with a powerful, unique instrument able to pioneer a wide range of Galactic, extragalactic, and cosmological studies and provide crucial follow-up for major facilities such as Gaia, VISTA, Euclid, and LSST. MOONS has the observational power needed to unveil galaxy formation and evolution over the entire history of the Universe, from stars in our Milky Way, through the redshift desert, and up to the epoch of the very first galaxies and re-ionization of the Universe at redshift z > 8-9, just a few hundred million years after the Big Bang. On a timescale of 5 years of observations, MOONS will provide high-quality spectra for >3M stars in our Galaxy and the Local Group, and for 1-2M galaxies at z > 1 (an SDSS-like survey), promising to revolutionise our understanding of the Universe. The baseline design consists of ~1000 fibers deployable over a field of view of ~500 square arcmin, the largest patrol field offered by the Nasmyth focus at the VLT. The total wavelength coverage is 0.8 μm - 1.8 μm, with two resolution modes: medium resolution and high resolution. In the medium-resolution mode (R ~ 4,000-6,000) the entire wavelength range 0.8 μm - 1.8 μm is observed simultaneously, while the high-resolution mode covers three selected spectral regions simultaneously: one around the CaII triplet (at R ~ 8,000) to measure radial velocities, and two regions at R ~ 20,000, one in the J-band and one in the H-band, for detailed measurements of chemical abundances.
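The claim that z > 8-9 corresponds to a few hundred million years after the Big Bang can be checked with the standard flat-ΛCDM age integral. The Hubble constant and density parameters below are assumed fiducial values, not numbers from the MOONS paper:

```python
import math

def age_at_z(z, hubble0=70.0, omega_m=0.3, omega_l=0.7, steps=100000):
    """Age of a flat LCDM universe at redshift z, in Gyr (midpoint rule).

    t(z) = integral over scale factor a, from 0 to 1/(1+z), of
           da / (H0 * sqrt(omega_m/a + omega_l * a**2))
    """
    a_max = 1.0 / (1.0 + z)
    h = a_max / steps
    total = 0.0
    for i in range(steps):
        a = (i + 0.5) * h
        total += h / math.sqrt(omega_m / a + omega_l * a * a)
    return total * 977.8 / hubble0   # 1/H0 in Gyr for H0 in km/s/Mpc
```

With these parameters, age_at_z(9.0) comes out near 0.5 Gyr, i.e. a few hundred million years, while age_at_z(0.0) recovers the familiar ~13.5 Gyr present-day age.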

Proceedings ArticleDOI
TL;DR: In this paper, the authors compare various resonators in terms of sensor metrics for label-free bio-sensing in a micro-fluidic environment, and identify that, while evanescent-field sensors all operate on the principle that the analyte's refractive index shifts the resonant frequency, there are important differences between implementations that lie in the relationship between the optical field overlap with the analyte and the relative contributions of the various loss mechanisms.
Abstract: Silicon photonic resonators, implemented using silicon-on-insulator substrates, are promising for numerous applications. The most commonly studied resonators are ring/racetrack resonators. We have fabricated these and other resonators including disk resonators, waveguide-grating resonators, ring resonator reflectors, contra-directional grating-coupler ring resonators, and racetrack-based multiplexer/demultiplexers. While numerous resonators have been demonstrated for sensing purposes, it remains unclear which structures provide the highest sensitivity and best limit of detection; for example, disk resonators and slot-waveguide-based ring resonators have been conjectured to provide an improved limit of detection. Here, we compare various resonators in terms of sensor metrics for label-free bio-sensing in a micro-fluidic environment. We have integrated resonator arrays with PDMS micro-fluidics for real-time detection of biomolecules in experiments such as antigen-antibody binding reaction experiments using Human Factor IX proteins. Numerous resonators are fabricated on the same wafer and experimentally compared. We identify that, while evanescent-field sensors all operate on the principle that the analyte's refractive index shifts the resonant frequency, there are important differences between implementations that lie in the relationship between the optical field overlap with the analyte and the relative contributions of the various loss mechanisms. The chips were fabricated in the context of the CMC-UBC Silicon Nanophotonics Fabrication course and workshop. This yearlong, design-based, graduate training program is offered to students from across Canada and, over the last four years, has attracted participants from nearly every Canadian university involved in photonics research. The course takes students through a full design cycle of a photonic circuit, including theory, modelling, design, and experimentation.
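Two metrics commonly used to compare such resonant sensors are the bulk sensitivity S = Δλ/Δn and the intrinsic limit of detection iLOD = λ/(Q·S), which assumes the smallest resolvable shift is one resonance linewidth. A minimal sketch; the numbers in the usage example are illustrative, not measurements from this work:

```python
def bulk_sensitivity(delta_lambda_nm, delta_n):
    """Wavelength shift per refractive-index unit (nm/RIU)."""
    return delta_lambda_nm / delta_n

def intrinsic_lod(wavelength_nm, q_factor, sensitivity_nm_per_riu):
    """iLOD = lambda / (Q * S), in RIU.

    Treats one resonance linewidth (lambda/Q) as the smallest
    detectable shift, so higher Q or higher S both lower the LOD.
    """
    return wavelength_nm / (q_factor * sensitivity_nm_per_riu)

# hypothetical resonator: 0.7 nm shift for dn = 0.01, Q = 50,000 at 1550 nm
s = bulk_sensitivity(0.7, 0.01)            # 70 nm/RIU
lod = intrinsic_lod(1550.0, 50000, s)      # ~4.4e-4 RIU
```

This framing makes the paper's trade-off explicit: structures with stronger field overlap with the analyte gain S but often lose Q to the extra loss mechanisms, so neither quantity alone decides which resonator senses best.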

Proceedings ArticleDOI
TL;DR: A novel hyperspectral sensor is presented that integrates a wedge filter on top of a standard CMOS sensor; a design that compensates for process variability enables the low-cost processing of the microscopic wedge filter.
Abstract: Although the potential of hyperspectral imaging has been demonstrated for several applications, using laboratory setups in research environments, its adoption by industry has so far been limited due to the lack of high speed, low cost and compact hyperspectral cameras. To bridge the gap between research and industry, we present a novel hyperspectral sensor that integrates a wedge filter on top of a standard CMOS sensor. To enable the low-cost processing of a microscopic wedge filter, we have introduced a design that is able to compensate for process variability. The result is a compact and fast hyperspectral camera made with low-cost CMOS process technology. The current prototype camera acquires 100 spectral bands over a spectral range from 560 nm to 1000 nm, with a spectral resolution better than 10 nm and a spatial resolution of 2048 pixels per line. The speed is 180 frames per second at illumination levels as typically used in machine vision. The prototype is a hyperspectral line scanner that acquires 16 lines per spectral band in parallel on a 4 MPixel sensor. The theoretical line rate for this implementation is thus 2880 lines per second.
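The quoted line rate follows directly from the frame rate and the 16-line parallel readout, and the band spacing from the spectral range; a quick arithmetic check using only values taken from the abstract:

```python
frames_per_second = 180
lines_per_frame = 16       # 16 image lines per spectral band read in parallel
line_rate = frames_per_second * lines_per_frame   # 2880 lines per second

bands = 100
range_nm = 1000 - 560
band_spacing_nm = range_nm / bands   # 4.4 nm, consistent with <10 nm resolution
```

The 4.4 nm average band spacing also shows why the stated spectral resolution of "better than 10 nm" is a separate, optical figure: the sampling is finer than the resolution.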

Proceedings ArticleDOI
TL;DR: WebbPSF as mentioned in this paper is a software package for point-spread-function (PSF) simulations of the James Webb Space Telescope (JWST) in all imaging modes, including direct imaging, coronagraphy, and non-redundant aperture masking.
Abstract: Experience with the Hubble Space Telescope has shown that accurate models of optical performance are extremely desirable to astronomers, both for assessing feasibility and planning scientific observations, and for data analyses such as point-spread-function (PSF)-fitting photometry and astrometry, deconvolution, and PSF subtraction. Compared to previous space observatories, the temporal variability and active control of the James Webb Space Telescope (JWST) pose a significantly greater challenge for accurate modeling. We describe here some initial steps toward meeting the community's need for such PSF simulations. A software package called WebbPSF now provides the capability for simulating PSFs for JWST's instruments in all imaging modes, including direct imaging, coronagraphy, and non-redundant aperture masking. WebbPSF is intended to provide model PSFs suitable for planning observations and creating mock science data, via a straightforward interface accessible to any astronomer; as such it is complementary to the sophisticated but complex-to-use modeling tools used primarily by optical designers. WebbPSF is implemented using a new flexible and extensible optical propagation library in the Python programming language. While the initial version uses static precomputed wavefront simulations, over time this system is evolving to include both spatial and temporal variation in PSFs, building on existing modeling efforts within the JWST program. Our long-term goal is to provide a general-purpose PSF modeling capability akin to Hubble's Tiny Tim software, and of sufficient accuracy to be useful to the community.
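The Fraunhofer-propagation core of such PSF simulators can be illustrated with a toy 1-D analogue: the far-field PSF of a slit aperture is the squared magnitude of the aperture's Fourier transform (a sinc²-like pattern). This stdlib sketch is a conceptual illustration only, not WebbPSF's actual API, which propagates 2-D pupils with measured wavefront maps:

```python
import cmath

def slit_psf(n=128, half_width=8):
    """1-D Fraunhofer sketch: far-field PSF = |DFT of aperture|^2."""
    # top-hat aperture centered on index 0 (wrap-around keeps it symmetric)
    aperture = [1.0 if min(i, n - i) <= half_width else 0.0 for i in range(n)]
    psf = []
    for k in range(n):
        field = sum(aperture[j] * cmath.exp(-2j * cmath.pi * j * k / n)
                    for j in range(n))
        psf.append(abs(field) ** 2)
    norm = sum(psf)
    return [p / norm for p in psf]   # normalized to unit total energy
```

The peak sits at zero spatial frequency and falls off through diffraction sidelobes; WebbPSF performs the analogous 2-D computation per instrument, wavelength, and observing mode.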

Proceedings ArticleDOI
TL;DR: In this paper, a hybrid prototype scanner built to explore benefits of the quantum-counting technique in the context of clinical CT is presented.
Abstract: We introduce a novel hybrid prototype scanner built to explore benefits of the quantum-counting technique in the context of clinical CT. The scanner is equipped with two measurement systems. One is a CdTe-based counting detector with a 22 cm field of view. Its revised ASIC architecture allows configuration of the counter thresholds of the 225 μm small sub-pixels in chess patterns, enabling data acquisition in four energy bins or studying high-flux scenarios with a pile-up trigger. The other is a conventional GOS-based energy-integrating detector from a clinical CT scanner. The integration of both detection technologies in one CT scanner provides two major advantages. It allows direct comparison of image quality and contrast reproduction as well as instantaneous quantification of the relative dose usage and material separation performance achievable with counting techniques. In addition, data from the conventional detector can be used as complementary information during reconstruction of the images from the counting device. In this paper we present CT images acquired with the hybrid prototype scanner, illustrate its underlying conceptual methods, and provide first experimental results quantifying clinical benefits of quantum-counting CT.
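The four-energy-bin acquisition can be illustrated with a toy model of threshold counters: each photon increments every counter whose threshold its energy exceeds, and per-bin counts are recovered by differencing adjacent counters. The threshold values below are hypothetical, not the scanner's actual settings:

```python
def bin_counts(photon_energies_kev, thresholds_kev=(20, 35, 50, 65)):
    """Toy photon-counting readout with one counter per threshold.

    Returns the counts between consecutive thresholds, with the last
    bin collecting everything above the highest threshold.
    """
    counters = [0] * len(thresholds_kev)
    for e in photon_energies_kev:
        for i, t in enumerate(thresholds_kev):
            if e >= t:
                counters[i] += 1
    # difference adjacent counters to get per-bin spectra
    return ([counters[i] - counters[i + 1] for i in range(len(counters) - 1)]
            + [counters[-1]])

# five photons spread across the spectrum land one per low bin, two on top
spectrum = bin_counts([25, 40, 55, 70, 80])   # [1, 1, 1, 2]
```

Material separation then exploits how attenuation in these bins differs between, say, iodine and bone, which is exactly the performance the hybrid scanner can quantify against its energy-integrating detector.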