
Showing papers in "Proceedings of SPIE in 2011"


Proceedings ArticleDOI
TL;DR: The Tiny Tim PSF simulation software package has been the standard HST modeling software since its release in early 1992 as mentioned in this paper, and has been used extensively for HST data analysis.
Abstract: Point spread function (PSF) models are critical to Hubble Space Telescope (HST) data analysis. Astronomers unfamiliar with optical simulation techniques need access to PSF models that properly match the conditions of their observations, so any HST modeling software needs to be both easy to use and backed by detailed information on the telescope and instruments. The Tiny Tim PSF simulation software package has been the standard HST modeling software since its release in early 1992. We discuss the evolution of Tiny Tim over the years as new instruments and optical properties have been incorporated. We also demonstrate how Tiny Tim PSF models have been used for HST data analysis. Tiny Tim is freely available from tinytim.stsci.edu.

Keywords: Hubble Space Telescope, point spread function

1. INTRODUCTION
The point spread function (PSF) is the fundamental unit of image formation for an optical system such as a telescope. It encompasses the diffraction from obscurations, which is modified by aberrations, and the scattering from mid-to-high spatial frequency optical errors. Imaging performance is often described in terms of PSF properties, such as resolution and encircled energy. Optical engineering software, including ray tracing and physical optics propagation packages, is employed during the design phase of the system to predict the PSF and ensure that the imaging requirements are met. But once the system is complete and operational, the software is usually packed away and the point spread function considered static, to be described in documentation for reference by the scientist. In this context, an optical engineer runs software to compute PSFs while the user of the optical system simply needs to know its basic characteristics. For the Hubble Space Telescope (HST), that is definitely not the case. To extract the maximum information out of an observation, even the smallest details of the PSF are important. Some examples include: deconvolving the PSF from an observed image to remove the blurring caused by diffraction and reveal fine structure; convolving a model image by the PSF to compare to an observed one; subtracting the PSF of an unresolved source (star or compact galactic nucleus) to reveal extended structure (a circumstellar disk or host galaxy) that would otherwise be unseen within the halo of diffracted and scattered light; and fitting a PSF to a star image to obtain accurate photometry and astrometry, especially if it is a binary star with blended PSFs. Compared to ground-based telescopes, HST is extremely stable, so the structure in its PSF is largely time-invariant. This allows the use of PSF models for data analysis. On the ground, the variable PSF structure due to the atmosphere and thermally- and gravitationally-induced optical perturbations makes it more difficult to produce a model that accurately matches the data. The effective HST PSF, though, depends on many parameters, including obscurations, aberrations, pointing errors, system wavelength response, object color, and detector pixel effects. An accurate PSF model must account for all of these, some of which may depend on time (focus, obscuration positions) or on field position within the camera (aberrations, CCD detector charge diffusion, obscuration patterns, geometric distortion).

1.1 Early HST PSF modeling: TIM
Before the launch of HST in 1990, a variety of commercial and proprietary software packages were used to compute PSFs. These provided predictions of HST's imaging performance and guided the design, but they were not used by future HST observers. These programs were too complicated for general HST users, and either were not publicly available or were too expensive. They also did not provide PSF models in forms that scientists would find useful, such as including the effects of detector pixelization and broadband system responses.
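
As a rough illustration of one use listed above (not taken from the paper), the sketch below convolves a toy model image with a PSF that Tiny Tim might have written to a FITS file; the file name and the model image are placeholders.

```python
# Minimal sketch (not from the paper): convolving a model image with a PSF
# saved to a FITS file, e.g. one produced by Tiny Tim. The file name
# "tiny_tim_psf.fits" and the toy model image are placeholders.
import numpy as np
from astropy.io import fits
from scipy.signal import fftconvolve

psf = fits.getdata("tiny_tim_psf.fits")      # PSF model (hypothetical file)
psf = psf / psf.sum()                        # normalize to unit total flux

model = np.zeros((256, 256))
model[128, 128] = 1.0                        # toy model image: a single point source

blurred = fftconvolve(model, psf, mode="same")   # model as it would appear through HST
```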

412 citations


Proceedings ArticleDOI
TL;DR: An approach for the automatic detection of vehicles based on multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery, together with a related approach for people detection based on a similar cascaded classification technique combined with multivariate Gaussian shape matching.
Abstract: A generic and robust approach for the real-time detection of people and vehicles from an Unmanned Aerial Vehicle (UAV) is an important goal within the framework of fully autonomous UAV deployment for aerial reconnaissance and surveillance. Here we present an approach for the automatic detection of vehicles based on using multiple trained cascaded Haar classifiers with secondary confirmation in thermal imagery. Additionally we present a related approach for people detection in thermal imagery based on a similar cascaded classification technique combining additional multivariate Gaussian shape matching. The results presented show the successful detection of vehicles and people under varying conditions in both isolated rural and cluttered urban environments with minimal false positive detection. Performance of the detector is optimized to reduce the overall false positive rate by aiming at the detection of each object of interest (vehicle/person) at least once in the environment (i.e. per search-pattern flight path) rather than every object in each image frame. Currently the detection rate for people is ~70% and for cars ~80%, although the overall episodic object detection rate for each flight pattern exceeds 90%.
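
For readers unfamiliar with cascaded Haar classifiers, the sketch below shows how a trained cascade is applied with OpenCV; the cascade file and image are placeholders and are not the authors' trained classifiers.

```python
# Illustrative sketch only: running a trained cascaded Haar classifier on a
# thermal frame with OpenCV. "vehicle_cascade.xml" and "thermal_frame.png"
# are placeholders, not the authors' classifiers or data.
import cv2

cascade = cv2.CascadeClassifier("vehicle_cascade.xml")
frame = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)

# Multi-scale detection; the parameters trade detection rate against false positives.
detections = cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=4,
                                      minSize=(24, 24))
for (x, y, w, h) in detections:
    cv2.rectangle(frame, (x, y), (x + w, y + h), 255, 2)
```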

206 citations


Proceedings ArticleDOI
TL;DR: In this paper, the authors show that dielectric elastomers offer the promise of energy harvesting with few moving parts and demonstrate that power can be produced simply by stretching and contracting a relatively low-cost rubbery material.
Abstract: Dielectric elastomers offer the promise of energy harvesting with few moving parts. Power can be produced simply by stretching and contracting a relatively low-cost rubbery material. This simplicity, combined with demonstrated high energy density and high efficiency, suggests that dielectric elastomers are promising for a wide range of energy harvesting applications. Indeed, dielectric elastomers have been demonstrated to harvest energy from human walking, ocean waves, flowing water, blowing wind, and pushing buttons. While the technology is promising, there are challenges that must be addressed if dielectric elastomers are to be a successful and economically viable energy harvesting technology. These challenges include developing materials and packaging that sustains long lifetime over a range of environmental conditions, design of the devices that stretch the elastomer material, as well as system issues such as practical and efficient energy harvesting circuits. Progress has been made in many of these areas. We have demonstrated energy harvesting transducers that have operated over 5 million cycles. We have also shown the ability of dielectric elastomer material to survive for months underwater while undergoing voltage cycling. We have shown circuits capable of 78% energy harvesting efficiency. While the possibility of long lifetime has been demonstrated at the watt level, reliably scaling up to the power levels required for providing renewable energy to the power grid or for local use will likely require further development from the material through to the systems level.
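
As a back-of-the-envelope illustration of why stretching and contracting produces electrical energy (a textbook constant-charge harvesting cycle, not an analysis from the paper; all numbers are hypothetical):

```python
# Sketch: energy gained per cycle by a dielectric elastomer generator operated at
# constant charge. Charge is placed on the stretched (high-capacitance) film and
# recovered after it contracts (low capacitance). All values are illustrative.
C_stretched = 1.0e-7   # F, capacitance in the stretched state
C_contracted = 2.5e-8  # F, capacitance in the contracted state
V_prime = 2000.0       # V, priming voltage applied in the stretched state

Q = C_stretched * V_prime            # charge deposited on the film
E_in = 0.5 * Q**2 / C_stretched      # electrical energy invested
E_out = 0.5 * Q**2 / C_contracted    # energy available after contraction
print(f"energy gained per cycle: {E_out - E_in:.3f} J")
```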

172 citations


Proceedings ArticleDOI
TL;DR: XOP v2.4 as mentioned in this paper is a collection of computer programs for calculation of radiation characteristics of X-ray sources and their interaction with matter, including undulators and wigglers.
Abstract: XOP v2.4 consists of a collection of computer programs for calculation of radiation characteristics of X-ray sources and their interaction with matter. Many of the programs calculate radiation from undulators and wigglers, but others, such as X-ray tube codes, are also available. The computation of the index of refraction and attenuation coefficients of optical elements using user-selectable databases containing optical constants is an important part of the package for calculation of beam propagation. Coupled computations are thus feasible where the output from one program serves as the input to another program. Recent developments including enhancements to existing programs are described.
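
As a minimal illustration of how a tabulated attenuation coefficient of the kind XOP provides feeds into a beam-propagation estimate (Beer-Lambert law; the coefficient below is a placeholder, not a value from XOP's databases):

```python
# Sketch: transmission of an X-ray beam through a filter using a linear
# attenuation coefficient. The value of mu is illustrative only.
import numpy as np

mu = 14.3          # 1/cm, linear attenuation coefficient at some photon energy (illustrative)
thickness = 0.05   # cm, filter thickness
I0 = 1.0           # incident intensity (arbitrary units)

I_transmitted = I0 * np.exp(-mu * thickness)
print(f"transmission: {I_transmitted / I0:.3f}")
```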

165 citations


PatentDOI
TL;DR: A fluidic optical device in which an actuator deforms a first, deformable optical surface and displaces an optical fluid held against a second, rigid optical surface, thereby changing an optical property of the device.
Abstract: A fluidic optical device may include a first optical surface that includes a deformable material and a second optical surface that includes a rigid material. An optical fluid is disposed between the first and second optical surfaces, and an actuator is disposed in communication with the first optical surface. Activation of the actuator results in a deformation of the first optical surface and displacement of the optical fluid. The deformation and displacement result in a change in an optical property of the device. It is emphasized that this abstract is provided to comply with the rules requiring an abstract that will allow a searcher or other reader to quickly ascertain the subject matter of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.

156 citations


Proceedings ArticleDOI
TL;DR: This work proposes ensemble classifiers as an alternative to the much more complex support vector machines for steganalysis, with the advantages of universality, low complexity, simplicity, and improved performance compared to classifiers trained on the entire prefeature set.
Abstract: By working with high-dimensional representations of covers, modern steganographic methods are capable of preserving a large number of complex dependencies among individual cover elements and thus avoid detection using current best steganalyzers. Inevitably, steganalysis needs to start using high-dimensional feature sets as well. This brings two key problems - construction of good high-dimensional features and machine learning that scales well with respect to dimensionality. Depending on the classifier, high dimensionality may lead to problems with the lack of training data, infeasibly high complexity of training, degradation of generalization abilities, lack of robustness to cover source, and saturation of performance below its potential. To address these problems collectively known as the curse of dimensionality, we propose ensemble classifiers as an alternative to the much more complex support vector machines. Based on the character of the media being analyzed, the steganalyst first puts together a high-dimensional set of diverse "prefeatures" selected to capture dependencies among individual cover elements. Then, a family of weak classifiers is built on random subspaces of the prefeature space. The final classifier is constructed by fusing the decisions of individual classifiers. The advantage of this approach is its universality, low complexity, simplicity, and improved performance when compared to classifiers trained on the entire prefeature set. Experiments with the steganographic algorithms nsF5 and HUGO demonstrate the usefulness of this approach over current state of the art.
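
A hedged sketch of the random-subspace ensemble idea described above, approximated here with scikit-learn: many weak Fisher-type linear classifiers, each trained on a random subset of the high-dimensional prefeature space, fused by majority vote. The synthetic data and parameter choices are stand-ins, not the paper's configuration.

```python
# Random-subspace ensemble sketch: weak linear discriminants on random feature
# subsets, fused by voting. Synthetic data stands in for cover/stego features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import BaggingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 5000))          # 5000-dimensional prefeatures (synthetic)
y = rng.integers(0, 2, size=2000)          # 0 = cover, 1 = stego (synthetic labels)

ensemble = BaggingClassifier(
    estimator=LinearDiscriminantAnalysis(),  # weak FLD-like base learner
                                             # (named base_estimator in older scikit-learn)
    n_estimators=51,                         # odd number of voters for majority fusion
    max_features=200,                        # random subspace dimensionality
    bootstrap=False,                         # keep all training examples
    bootstrap_features=False,                # sample feature subsets without replacement
)
ensemble.fit(X, y)
print(ensemble.predict(X[:5]))
```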

131 citations


Proceedings ArticleDOI
TL;DR: An algorithm applied to both the Mesa Imaging SR-4000 and Canesta Inc. XZ-422 Demonstrator unmodified off-the-shelf range cameras to separate the individual component returns in each pixel and correct multi-path distortions.
Abstract: Time-of-flight range cameras acquire a three-dimensional image of a scene simultaneously for all pixels from a single viewing location. Attempts to use range cameras for metrology applications have been hampered by the multi-path problem, which causes range distortions when stray light interferes with the range measurement in a given pixel. Correcting multi-path distortions by post-processing the three-dimensional measurement data has been investigated, but enjoys limited success because the interference is highly scene dependent. An alternative approach based on separating the strongest and weaker sources of light returned to each pixel, prior to range decoding, is more successful, but has only been demonstrated on custom built range cameras, and has not been suitable for general metrology applications. In this paper we demonstrate an algorithm applied to both the Mesa Imaging SR-4000 and Canesta Inc. XZ-422 Demonstrator unmodified off-the-shelf range cameras. Additional raw images are acquired and processed using an optimization approach, rather than relying on the processing provided by the manufacturer, to determine the individual component returns in each pixel. Substantial improvements in accuracy are observed, especially in the darker regions of the scene.
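
To make the per-pixel optimization concrete, here is a hedged sketch of one common formulation (not necessarily the authors' exact algorithm): the complex measurement at several modulation frequencies is modeled as the sum of two returns, and their amplitudes and distances are fitted by least squares. The frequencies and values are illustrative.

```python
# Two-return multi-path model for a single time-of-flight pixel, fitted by
# nonlinear least squares. Illustrative formulation and values only.
import numpy as np
from scipy.optimize import least_squares

C = 3.0e8                                   # speed of light, m/s
freqs = np.array([20e6, 30e6, 40e6])        # modulation frequencies (illustrative)

def model(params, f):
    a1, d1, a2, d2 = params
    return (a1 * np.exp(1j * 4 * np.pi * f * d1 / C) +
            a2 * np.exp(1j * 4 * np.pi * f * d2 / C))   # round-trip phase: 4*pi*f*d/c

def residuals(params, f, measured):
    diff = model(params, f) - measured
    return np.concatenate([diff.real, diff.imag])

# Synthetic "measured" pixel: a strong return at 2.0 m plus a weaker one at 3.5 m.
truth = np.array([1.0, 2.0, 0.4, 3.5])
measured = model(truth, freqs)

fit = least_squares(residuals, x0=[0.8, 1.5, 0.3, 4.0], args=(freqs, measured))
print(fit.x)   # recovered amplitudes and distances of the two returns
```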

123 citations


Proceedings ArticleDOI
TL;DR: In this paper, a 300mm baseline process of record using a 12nm half-pitch PS-b-PMMA lamellae block copolymer was created to establish an initial measurement of the defect density due to inherent polymer phase-separation defects such as dislocations and disclinations.
Abstract: Directed self-assembly is an emerging technology that to date has been primarily driven by research efforts in university and corporate laboratory environments. Through these environments, we have seen many promising demonstrations of forming self-assembled structures with small half pitch (<15 nm), registration control, and various device-oriented shapes. Now, the attention turns to integrating these capabilities into a 300mm pilot fab, which can study directed self-assembly in the context of a semiconductor fabrication environment and equipment set. The primary aim of this study is to create a 300mm baseline process of record using a 12nm half-pitch PS-b-PMMA lamellae block copolymer in order to establish an initial measurement of the defect density due to inherent polymer phase separation defects such as dislocations and disclinations.

122 citations


Proceedings ArticleDOI
TL;DR: In this paper, the authors used diffraction of coherent soft x-ray pulses for very high resolution of thermally-induced surface distortion, as well as femtosecond time resolution of dynamics.
Abstract: Heat dissipation from a nanoscale hot-spot is expected to be non-diffusive when a hot-spot is smaller than the phonon mean free path of the substrate. Our technique of observing diffraction of coherent soft x-ray pulses allows for very high resolution (~pm) of thermally-induced surface distortion, as well as femtosecond time resolution of dynamics. We successfully model our experimental results with a diffusive transport model that is modified to include an additional boundary resistance. These results confirm the importance of considering ballistic transport away from a nanoscale heat source, and identify a means of correctly accounting for this ballistic transport.

117 citations


Proceedings ArticleDOI
TL;DR: In this article, a directional coupler, fabricated by femtosecond laser waveguide writing, acting as an integrated beam splitter is presented, which is able to support polarization encoded qubits.
Abstract: The emerging strategy to overcome the limitations of bulk quantum optics consists of taking advantage of the robustness and compactness achievable by the integrated waveguide technology. Here we report the realization of a directional coupler, fabricated by femtosecond laser waveguide writing, acting as an integrated beam splitter able to support polarization encoded qubits. This maskless, single-step technique makes it possible to realize circular transverse waveguide profiles able to support the propagation of Gaussian modes with any polarization state. Using this device, we demonstrate quantum interference with polarization-entangled states.

113 citations


Proceedings ArticleDOI
TL;DR: In this article, an integrated intravascular photoacoustics (IVPA) and ultrasound (IVUS) catheter with an outer diameter of 1.25 mm was developed, which comprises an angle-polished optical fiber adjacent to a 30 MHz single-element transducer.
Abstract: We demonstrate intravascular photoacoustic imaging of human coronary atherosclerotic plaque. We specifically imaged lipid content, a key factor in vulnerable plaques that may lead to myocardial infarction. An integrated intravascular photoacoustics (IVPA) and ultrasound (IVUS) catheter with an outer diameter of 1.25 mm was developed. The catheter comprises an angle-polished optical fiber adjacent to a 30 MHz single-element transducer. The ultrasonic transducer was optically isolated to eliminate artifacts in the PA image. We performed measurements on a cylindrical vessel phantom and isolated point targets to demonstrate its imaging performance. Axial and lateral point spread function widths were 110 μm and 550 μm, respectively, for PA and 89 μm and 420 μm for US. We imaged two fresh human coronary arteries, showing different stages of disease, ex vivo. Specific photoacoustic imaging of lipid content is achieved by spectroscopic imaging at different wavelengths between 1180 and 1230 nm.

Proceedings ArticleDOI
TL;DR: This paper presents a practical framework for optimizing the parameters of additive distortion functions to minimize statistical detectability and shows that the size of the margin between support vectors in soft-margin SVMs leads to a fast detection metric and that methods minimizing the margin tend to be more secure w.r.t. blind steganalysis.
Abstract: Most steganographic schemes for real digital media embed messages by minimizing a suitably defined distortion function. In practice, this is often realized by syndrome codes which offer near-optimal rate-distortion performance. However, the distortion functions are designed heuristically and the resulting steganographic algorithms are thus suboptimal. In this paper, we present a practical framework for optimizing the parameters of additive distortion functions to minimize statistical detectability. We apply the framework to digital images in both spatial and DCT domain by first defining a rich parametric model which assigns a cost of making a change at every cover element based on its neighborhood. Then, we present a practical method for optimizing the parameters with respect to a chosen detection metric and feature space. We show that the size of the margin between support vectors in soft-margin SVMs leads to a fast detection metric and that methods minimizing the margin tend to be more secure w.r.t. blind steganalysis. The parameters obtained by the Nelder-Mead simplex-reflection algorithm for spatial and DCT-domain images are presented and the new embedding methods are tested by blind steganalyzers utilizing various feature sets. Experimental results show that as few as 80 images are sufficient for obtaining good candidates for parameters of the cost model, which allows us to speed up the parameter search.
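
A hedged sketch of the parameter-search loop described above: the Nelder-Mead simplex method minimizing an empirical detectability metric over the parameters of an additive cost model. The objective function here is a cheap stand-in for the margin-based metric computed from a blind steganalyzer in the paper.

```python
# Nelder-Mead search over cost-model parameters. The "detectability" objective
# is a placeholder; in the real framework it would embed messages with costs
# parameterized by theta, extract features from ~80 images, train a classifier,
# and return a margin-based detectability score.
import numpy as np
from scipy.optimize import minimize

def detectability(theta):
    theta = np.asarray(theta)
    # Placeholder surrogate with a minimum at an arbitrary parameter vector.
    return np.sum((theta - np.array([1.0, 0.2, 3.0]))**2) + 0.01 * np.abs(theta).sum()

result = minimize(detectability, x0=[0.5, 0.5, 0.5], method="Nelder-Mead",
                  options={"xatol": 1e-3, "fatol": 1e-3, "maxiter": 500})
print(result.x)   # cost-model parameters judged least detectable by the surrogate
```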

Proceedings ArticleDOI
TL;DR: The comparison of the performances offered by a Natural Guide Star (NGS) system upgraded with state-of-the-art technology and those delivered by existing Laser Guide Star (LGS) systems suggests rethinking the current role ascribed to NGS and LGS in the next generation of AO systems.
Abstract: The Large Binocular Telescope (LBT) is a unique telescope featuring two co-mounted optical trains with 8.4m primary mirrors. The telescope Adaptive Optics (AO) system uses two innovative key components, namely an adaptive secondary mirror with 672 actuators and a high-order pyramid wave-front sensor. During the on-sky commissioning such a system reached performances never achieved before on large ground-based optical telescopes. Images with 40mas resolution and Strehl Ratios higher than 80% have been acquired in H band (1.6 μm). Such images showed a contrast as high as 10⁻⁴. Based on these results, we compare the performances offered by a Natural Guide Star (NGS) system upgraded with the state-of-the-art technology and those delivered by existing Laser Guide Star (LGS) systems. The comparison, in terms of sky coverage and performances, suggests rethinking the current role ascribed to NGS and LGS in the next generation of AO systems for the 8-10 meter class telescopes and Extremely Large Telescopes (ELTs).

Proceedings ArticleDOI
TL;DR: The basic concepts involved and the issues related to monitoring of civil structures are presented, the problem of non-linearity of the cost-to-utility mapping is addressed, and an approximate Monte Carlo approach suitable for the implementation of time-consuming predictive models is introduced.
Abstract: In the field of Structural Health Monitoring, tests and sensing systems are intended as tools providing diagnoses, which allow the operator of the facility to develop an efficient maintenance plan or to require extraordinary measures on a structure. The effectiveness of these systems depends directly on their capability to guide towards the optimal decision for the prevailing circumstances, avoiding mistakes and wasted resources. Though this is well known, most studies only address the accuracy of the information gained from sensors without discussing economic criteria. Other studies evaluate these criteria separately, with only marginal or heuristic connection with the outcomes of the monitoring system. The concept of “Value of Information” (VoI) provides a rational basis to rank measuring systems according to a utility-based metric, which fully includes the decision-making process affected by the monitoring campaign. This framework allows, for example, an explicit assessment of the economical justifiability of adopting a sensor depending on its precision. In this paper we outline the framework for assessing the VoI, as applicable to the ranking of competitive measuring systems. We present the basic concepts involved, highlight issues related to monitoring of civil structures, address the problem of non-linearity of the cost-to-utility mapping, and introduce an approximate Monte Carlo approach suitable for the implementation of time-consuming predictive models.
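
To make the Value-of-Information idea concrete, here is a minimal Monte Carlo sketch for a single binary damage state and a binary monitoring outcome. The prior, costs, and sensor error rates are illustrative assumptions, not values from the paper.

```python
# Minimal VoI estimate: expected cost of the best decision without monitoring,
# minus the Monte Carlo estimate of the expected cost when the Bayes-optimal
# decision is made after observing the (imperfect) monitoring outcome.
import numpy as np

rng = np.random.default_rng(1)
p_damage = 0.1          # prior probability the structure is damaged (illustrative)
C_repair = 1.0          # cost of a repair
C_failure = 20.0        # cost incurred if damage is left unrepaired
p_detect = 0.95         # probability of an alarm when damaged
p_false_alarm = 0.05    # probability of an alarm when healthy

# Best decision without monitoring: repair always, or accept the failure risk.
cost_without = min(C_repair, p_damage * C_failure)

# Monte Carlo preposterior analysis.
n = 200_000
damaged = rng.random(n) < p_damage
alarm = np.where(damaged, rng.random(n) < p_detect, rng.random(n) < p_false_alarm)

def posterior(alarm_value):
    like_d = p_detect if alarm_value else 1 - p_detect
    like_h = p_false_alarm if alarm_value else 1 - p_false_alarm
    return like_d * p_damage / (like_d * p_damage + like_h * (1 - p_damage))

costs = np.empty(n)
for outcome in (True, False):
    idx = alarm == outcome
    repair = C_repair < posterior(outcome) * C_failure   # Bayes-optimal action
    costs[idx] = C_repair if repair else damaged[idx] * C_failure

voi = cost_without - costs.mean()
print(f"estimated Value of Information: {voi:.3f}")
```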

Proceedings ArticleDOI
TL;DR: In this article, a photoacoustic probe for endoscopic applications was developed, which consists of a single delivery optical fiber with a transparent Fabry Perot (FP) ultrasound sensor at its distal end.
Abstract: A miniature (250 μm outer diameter) photoacoustic probe for endoscopic applications has been developed. It comprises a single delivery optical fibre with a transparent Fabry Perot (FP) ultrasound sensor at its distal end. The fabrication of the sensor was achieved by depositing a thin film multilayer structure comprising a polymer spacer sandwiched between a pair of dichroic dielectric mirrors on to the tip of a single mode fiber. The probe was evaluated in terms of its acoustic bandwidth and sensitivity. Ultra high acoustic sensitivity has been achieved with a concave FP interferometer cavity design, which effectively suppresses the phase dispersion of the multiple reflected beams within the cavity to achieve high finesse. The noise equivalent pressure (NEP) achieved is 8 Pa over a 20 MHz bandwidth. Backward mode operation of the probe is demonstrated by detecting photoacoustic signals in a variety of phantoms designed to simulate endoscopic applications. A side-viewing probe is also demonstrated, illustrating an all-optical design for intravascular imaging applications.

Proceedings ArticleDOI
TL;DR: The method for measuring stereo camera depth accuracy was validated with a stereo camera built of two SLRs (single-lens reflex cameras) and showed that normal stereo acuity was achieved only using a tele lens.
Abstract: We present a method to evaluate stereo camera depth accuracy in human centered applications. It enables the comparison between stereo camera depth resolution and human depth resolution. Our method uses a multilevel test target which can be easily assembled and used in various studies. Binocular disparity enables humans to perceive relative depths accurately, making a multilevel test target applicable for evaluating the stereo camera depth accuracy when the accuracy requirements come from stereoscopic vision. The method for measuring stereo camera depth accuracy was validated with a stereo camera built of two SLRs (single-lens reflex cameras). The depth resolution of the SLRs was better than normal stereo acuity at all measured distances ranging from 0.7 m to 5.8 m. The method was used to evaluate the accuracy of a lower quality stereo camera. Two parameters, focal length and baseline, were varied. Focal length had a larger effect on the stereo camera's depth accuracy than baseline. The tests showed that normal stereo acuity was achieved only using a tele lens. However, a user's depth resolution in a video see-through system differs from direct naked eye viewing. The same test target was used to evaluate this by mixing the levels of the test target randomly and asking users to sort the levels according to their depth. The comparison between stereo camera depth resolution and perceived depth resolution was done by calculating maximum erroneous classification of levels.
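
The dependence on focal length and baseline follows from the standard pinhole-stereo relation Δz ≈ z²·Δd / (f·b), where Δd is the disparity resolution, f the focal length in pixels, and b the baseline. A small sketch with illustrative numbers (not the paper's cameras):

```python
# Depth resolution of a pinhole stereo pair: dz = z^2 * dd / (f * b).
def depth_resolution(z_m, focal_px, baseline_m, disparity_step_px=1.0):
    """Smallest resolvable depth difference at distance z_m (metres)."""
    return (z_m ** 2) * disparity_step_px / (focal_px * baseline_m)

# Example: 3000-pixel focal length, 10 cm baseline, one-pixel disparity resolution.
for z in (0.7, 2.0, 5.8):
    print(f"z = {z:.1f} m  ->  dz = {depth_resolution(z, 3000, 0.10) * 1000:.1f} mm")
```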

Proceedings ArticleDOI
TL;DR: A novel volumetric display was used to examine how viewing distance and the sign of the vergence-accommodation conflict affect discomfort and fatigue and help define comfortable viewing conditions for stereo displays.
Abstract: Prolonged use of conventional stereo displays causes viewer discomfort and fatigue because of the vergence-accommodation conflict. We used a novel volumetric display to examine how viewing distance and the sign of the vergence-accommodation conflict affect discomfort and fatigue. In the first experiment, we presented a fixed conflict at short, medium, and long viewing distances. We compared subjects' symptoms in that condition and one in which there was no conflict. We observed more discomfort and fatigue with a given vergence-accommodation conflict at the longer distances. The second experiment compared symptoms when the conflict had one sign compared to when it had the opposite sign at short, medium, and long distances. We observed greater symptoms with uncrossed disparities at long distances and with crossed disparities at short distances. These findings help define comfortable viewing conditions for stereo displays.

Proceedings ArticleDOI
TL;DR: In this paper, a deformable mirror (DM) surface is modified with pairs of complementary shapes to create diversity in the image plane of the science camera where the intensity of the light is measured.
Abstract: In this paper we describe the complex electric field reconstruction from image plane intensity measurements for high contrast coronagraphic imaging. A deformable mirror (DM) surface is modified with pairs of complementary shapes to create diversity in the image plane of the science camera where the intensity of the light is measured. Along with the Electric Field Conjugation correction algorithm, this estimation method has been used in various high contrast imaging testbeds to achieve the best contrasts to date both in narrow and in broad band light. We present the basic methodology of estimation in an easy-to-follow list of steps, present results from HCIT, and raise several open questions we are confronted with using this method.
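
A hedged sketch of how such pair-wise probing yields a linear estimate per pixel: for each probe applied with + and − sign, the intensity difference is linear in the unknown field, ΔI = 4·(Re p·Re E + Im p·Im E), so stacking several probes gives a small least-squares problem. The probe fields below are synthetic and assumed known from an optical model.

```python
# Pair-wise probe estimation for a single focal-plane pixel (illustrative).
import numpy as np

E_true = 0.3 + 0.2j                                          # unknown field (synthetic)
probes = np.array([0.1 + 0.0j, 0.0 + 0.1j, 0.07 + 0.07j])    # modeled probe fields

# Simulated camera measurements for the +probe and -probe DM shapes.
I_plus = np.abs(E_true + probes) ** 2
I_minus = np.abs(E_true - probes) ** 2
dI = I_plus - I_minus                                        # 4 * Re(E * conj(p))

A = 4 * np.column_stack([probes.real, probes.imag])          # observation matrix
x, *_ = np.linalg.lstsq(A, dI, rcond=None)
E_est = x[0] + 1j * x[1]
print(E_est)   # should recover E_true
```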

Proceedings ArticleDOI
TL;DR: The NuSTAR flight optics modules are glass-graphite-epoxy composite structures to be employed for the first time in space-based X-ray optics by NuSTAR, a NASA Small Explorer scheduled for launch in February 2012.
Abstract: We describe the fabrication of the two NuSTAR flight optics modules. The NuSTAR optics modules are glass-graphite-epoxy composite structures to be employed for the first time in space-based X-ray optics by NuSTAR, a NASA Small Explorer scheduled for launch in February 2012. We discuss the optics manufacturing process, the qualification and environmental testing performed, and briefly discuss the results of X-ray performance testing of the two modules. The integration and alignment of the completed flight optics modules into the NuSTAR instrument is described, as are the optics module thermal shields.

Proceedings ArticleDOI
TL;DR: This system can potentially serve as a basis for a flexible toolbox for X-ray image analysis and simulation that can efficiently utilize modern multi-processor hardware for advanced scientific computations.
Abstract: A software system has been developed for high-performance Computed Tomography (CT) reconstruction, simulation and other X-ray image processing tasks utilizing remote computer clusters optionally equipped with multiple Graphics Processing Units (GPUs). The system has a streamlined Graphical User Interface for interaction with the cluster. Apart from extensive functionality related to X-ray CT in plane-wave and cone-beam forms, the software includes multiple functions for X-ray phase retrieval and simulation of phase-contrast imaging (propagation-based, analyzer crystal based and Talbot interferometry). Other features include several methods for image deconvolution, simulation of various phase-contrast microscopy modes (Zernike, Schlieren, Nomarski, dark-field, interferometry, etc.) and a large number of conventional image processing operations (such as FFT, algebraic and geometrical transformations, pixel value manipulations, simulated image noise, various filters, etc.). The architectural design of the system is described, as well as the two-level parallelization of the most computationally-intensive modules utilizing both the multiple CPU cores and multiple GPUs available in a local PC or a remote computer cluster. Finally, some results about the current system performance are presented. This system can potentially serve as a basis for a flexible toolbox for X-ray image analysis and simulation that can efficiently utilize modern multi-processor hardware for advanced scientific computations.
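
For readers unfamiliar with the underlying task, a tiny parallel-beam CT example using scikit-image is shown below; it only illustrates the kind of reconstruction such a system distributes over CPU cores and GPUs and is unrelated to the software described above.

```python
# Minimal parallel-beam CT round trip: simulate projections of a phantom and
# reconstruct with filtered back-projection (scikit-image).
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

image = shepp_logan_phantom()
angles = np.linspace(0.0, 180.0, max(image.shape), endpoint=False)

sinogram = radon(image, theta=angles)            # simulate projections
reconstruction = iradon(sinogram, theta=angles)  # filtered back-projection
print(np.abs(reconstruction - image).mean())     # rough reconstruction error
```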

Proceedings ArticleDOI
TL;DR: A powerful video denoising algorithm that exploits temporal and spatial redundancy characterizing natural video sequences and outperforms the state of the art is proposed.
Abstract: We propose a powerful video denoising algorithm that exploits temporal and spatial redundancy characterizing natural video sequences. The algorithm implements the paradigm of nonlocal grouping and collaborative filtering, where a higher-dimensional transform-domain representation is leveraged to enforce sparsity and thus regularize the data. The proposed algorithm exploits the mutual similarity between 3-D spatiotemporal volumes constructed by tracking blocks along trajectories defined by the motion vectors. Mutually similar volumes are grouped together by stacking them along an additional fourth dimension, thus producing a 4-D structure, termed group, where different types of data correlation exist along the different dimensions: local correlation along the two dimensions of the blocks, temporal correlation along the motion trajectories, and nonlocal spatial correlation (i.e. self-similarity) along the fourth dimension. Collaborative filtering is realized by transforming each group through a decorrelating 4-D separable transform and then by shrinkage and inverse transformation. In this way, collaborative filtering provides estimates for each volume stacked in the group, which are then returned and adaptively aggregated to their original position in the video. Experimental results demonstrate the effectiveness of the proposed procedure which outperforms the state of the art.
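
A minimal sketch of the collaborative-filtering core on one group: decorrelate with a separable 4-D transform, shrink, and invert. The 4-D DCT and the threshold value are illustrative choices, not the exact transforms used by the algorithm.

```python
# Transform-domain shrinkage of one group of mutually similar spatiotemporal
# volumes (n_volumes, n_frames, block, block), using a separable 4-D DCT.
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
group = rng.normal(scale=0.1, size=(8, 4, 8, 8)) + 1.0   # noisy, highly correlated data

spectrum = dctn(group, norm="ortho")          # separable 4-D decorrelating transform
threshold = 0.3
spectrum[np.abs(spectrum) < threshold] = 0.0  # hard thresholding (shrinkage)
filtered_group = idctn(spectrum, norm="ortho")

# Each filtered volume would then be returned to its original position in the
# video and aggregated with overlapping estimates.
```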

Proceedings ArticleDOI
TL;DR: Magnetorheological finishing (MRF) is a deterministic method for producing complex optics with figure accuracy <50 nm and surface roughness <1 nm as discussed by the authors, which was invented at the Luikov Institute of Heat and Mass Transfer in Minsk, Belarus in the late 1980s by a team led by William Kordonski.
Abstract: Magnetorheological finishing (MRF) is a deterministic method for producing complex optics with figure accuracy <50 nm and surface roughness <1 nm. MRF was invented at the Luikov Institute of Heat and Mass Transfer in Minsk, Belarus in the late 1980s by a team led by William Kordonski. When the Soviet Union opened up, New York businessman Lowell Mintz was invited to Minsk in 1990 to explore possibilities for technology transfer. Mintz was told of the potential for MRF, but did not understand whether it had value. Mintz was referred to Harvey Pollicove at the Center for Optics Manufacturing of the University of Rochester. As a result of their conversation, they sent Prof. Steve Jacobs to visit Minsk and evaluate MRF. From Jacobs' positive findings, and with support from Lowell Mintz, Kordonski and his colleagues were invited in 1993 to work at the Center for Optics Manufacturing with Jacobs and Don Golini to refine MRF technology. A "preprototype" finishing machine was operating by 1994. Prof. Greg Forbes and doctoral student Paul Dumas developed algorithms for deterministic control of MRF. In 1996, Golini recognized the commercial potential of MRF, secured investment capital from Lowell Mintz, and founded QED Technologies. The first commercial MRF machine was unveiled in 1998. It was followed by more advanced models and by groundbreaking subaperture stitching interferometers for metrology. In 2006, QED was acquired by and became a division of Cabot Microelectronics. This paper recounts the history of the development of MRF and the founding of QED Technologies.

Proceedings ArticleDOI
TL;DR: The tunable Q-factor wavelet transform (TQWT) is a fully-discrete wavelet Transform for which the Q-Factor, Q, of the underlying wavelet and the asymptotic redundancy, r, ofThe transform are easily and independently specified, and the specified parameters Q and r can be real-valued.
Abstract: The tunable Q-factor wavelet transform (TQWT) is a fully-discrete wavelet transform for which the Q-factor, Q, of the underlying wavelet and the asymptotic redundancy (over-sampling rate), r, of the transform are easily and independently specified. In particular, the specified parameters Q and r can be real-valued. Therefore, by tuning Q, the oscillatory behavior of the wavelet can be chosen to match the oscillatory behavior of the signal of interest, so as to enhance the sparsity of a sparse signal representation. The TQWT is well suited to fast algorithms for sparsity-based inverse problems because it is a Parseval frame, easily invertible, and can be efficiently implemented using radix-2 FFTs. The TQWT can also be used as an easily-invertible discrete approximation of the continuous wavelet transform.
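
As a small illustration of how the user-facing parameters determine the transform's internal scaling factors, the sketch below uses the relations Q = (2 − β)/β and r = β/(1 − α) reported in the TQWT literature (stated here from memory and hedged accordingly):

```python
# Compute the TQWT low-pass (alpha) and high-pass (beta) scaling factors from the
# Q-factor and redundancy r, assuming the relations Q = (2 - beta)/beta and
# r = beta/(1 - alpha).
def tqwt_scaling_factors(Q, r):
    """Return (alpha, beta) for a given Q-factor and redundancy r > 1."""
    beta = 2.0 / (Q + 1.0)
    alpha = 1.0 - beta / r
    if not (0.0 < alpha < 1.0 and 0.0 < beta <= 1.0):
        raise ValueError("Q and r must give 0 < alpha < 1 and 0 < beta <= 1")
    return alpha, beta

print(tqwt_scaling_factors(Q=4.0, r=3.0))   # e.g. Q = 4 with threefold redundancy
```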

Proceedings ArticleDOI
TL;DR: This paper compares and evaluates several state-of-the-art online object tracking algorithms and identifies the components of each tracking method and their key roles in dealing with specific challenges, thereby shedding light on how to choose and design algorithms for different situations.
Abstract: This paper reviews and evaluates several state-of-the-art online object tracking algorithms. Notwithstanding decades of efforts, object tracking remains a challenging problem due to factors such as illumination, pose, scale, deformation, motion blur, noise, and occlusion. To account for appearance change, most recent tracking algorithms focus on robust object representations and effective state prediction. In this paper, we analyze the components of each tracking method and identify their key roles in dealing with specific challenges, thereby shedding light on how to choose and design algorithms for different situations. We compare state-of-the-art online tracking methods including the IVT, VRT, FragT, BoostT, SemiT, BeSemiT, L1T, MILT, VTD and TLD algorithms on numerous challenging sequences, and evaluate them with different performance metrics. The qualitative and quantitative comparative results demonstrate the strength and weakness of these algorithms.

Proceedings ArticleDOI
TL;DR: This paper reviews how the terms crosstalk, ghosting and associated terms are defined and used in the stereoscopic literature and both descriptive definitions and mathematical definitions are considered.
Abstract: Crosstalk is a critical factor determining the image quality of stereoscopic displays. Also known as ghosting or leakage, high levels of crosstalk can make stereoscopic images hard to fuse and lack fidelity; hence it is important to achieve low levels of crosstalk in the development of high-quality stereoscopic displays. In the wider academic literature, the terms crosstalk, ghosting and leakage are often used interchangeably and unfortunately very few publications actually provide a descriptive or mathematical definition of these terms. Additionally the definitions that are available are sometimes contradictory. This paper reviews how the terms crosstalk, ghosting and associated terms (system crosstalk, viewer crosstalk, gray-to-gray crosstalk, leakage, extinction and extinction ratio, and 3D contrast) are defined and used in the stereoscopic literature. Both descriptive definitions and mathematical definitions are considered.
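
For orientation, one commonly used mathematical form of the kind of definition the paper surveys can be written as follows; this is a generic statement for illustration, not a quotation of the paper's preferred definition.

```latex
% Generic crosstalk definition (illustrative form):
\[
  \mathrm{crosstalk}\ (\%) \;=\; \frac{\text{leakage}}{\text{signal}} \times 100
  \;=\; \frac{L_{\mathrm{unintended}}}{L_{\mathrm{intended}}} \times 100 ,
\]
% where $L_{\mathrm{unintended}}$ is the luminance reaching an eye from the
% unintended image channel and $L_{\mathrm{intended}}$ is the luminance from
% the intended channel.
```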

Proceedings ArticleDOI
TL;DR: In this article, a metal oxide patternable hardmask was designed for EUV lithography, which is highly absorbing (16 μm⁻¹) and etch resistant (>100:1 for silicon).
Abstract: This paper describes a metal oxide patternable hardmask designed for EUV lithography. The material has imaged 15-nm half-pitch by projection EUV exposure on the SEMATECH Berkeley MET, and 12-nm half-pitch by electron beam exposure. The platform is highly absorbing (16 μm⁻¹) and etch resistant (>100:1 for silicon). These properties enable resist film thickness to be reduced to 20nm, thereby reducing aspect ratio and susceptibility to pattern collapse. New materials and processes show a path to improved photospeed. This paper also presents data on coating uniformity, metal-impurity content, outgassing, pattern transfer, and resist strip.

Proceedings ArticleDOI
TL;DR: The major issue for the 22-nm half-pitch node remains simultaneously meeting resolution, line-edge roughness (LER), and sensitivity requirements as discussed by the authors, although several materials have met the resolution requirements, LER and sensitivity remain a challenge.
Abstract: Although Extreme ultraviolet lithography (EUVL) is now well into the commercialization phase, critical challenges remain in the development of EUV resist materials. The major issue for the 22-nm half-pitch node remains simultaneously meeting resolution, line-edge roughness (LER), and sensitivity requirements. Although several materials have met the resolution requirements, LER and sensitivity remain a challenge. As we move beyond the 22-nm node, however, even resolution remains a significant challenge. Chemically amplified resists have yet to demonstrate the required resolution at any speed or LER for 16-nm half pitch and below. Going to non-chemically amplified resists, however, 16-nm resolution has been achieved with a LER of 2 nm but a sensitivity of only 70 mJ/cm².

Proceedings ArticleDOI
TL;DR: The Hybrid Metrology approach is defined to be the use of any two or more metrology toolsets in combination to measure the same dataset to optimize metrology recipe and improve measurement performance.
Abstract: Shrinking design rules and reduced process tolerances require tight control of CD linewidth, feature shape, and profile of the printed geometry. The Holistic Metrology approach consists of utilizing all available information from different sources like data from other toolsets, multiple optical channels, multiple targets, etc. to optimize metrology recipe and improve measurement performance. Various in-line critical dimension (CD) metrology toolsets like Scatterometry OCD (Optical CD), CD-SEM (CD Scanning Electron Microscope) and CD-AFM (CD Atomic Force Microscope) are typically utilized individually in fabs. Each of these toolsets has its own set of limitations that are intrinsic to specific measurement technique and algorithm. Here we define "Hybrid Metrology" to be the use of any two or more metrology toolsets in combination to measure the same dataset. We demonstrate the benefits of the Hybrid Metrology on two test structures: 22nm node Gate Develop Inspect (DI) & 32nm node FinFET Gate Final Inspect (FI). We will cover measurement results obtained using typical BKM as well as those obtained by utilizing the Hybrid Metrology approach. Measurement performance will be compared using standard metrology metrics, for example accuracy and precision.
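
As a generic illustration of combining two toolsets' measurements of the same CD, the sketch below uses textbook inverse-variance weighting; it conveys the idea of hybridization only and is not the specific hybrid-metrology algorithm evaluated in the paper. All numbers are hypothetical.

```python
# Inverse-variance fusion of two independent CD measurements of the same feature.
def fuse(cd_a, sigma_a, cd_b, sigma_b):
    """Weighted estimate and its standard deviation from two independent tools."""
    w_a, w_b = 1.0 / sigma_a**2, 1.0 / sigma_b**2
    cd = (w_a * cd_a + w_b * cd_b) / (w_a + w_b)
    sigma = (w_a + w_b) ** -0.5
    return cd, sigma

# Example: OCD reports 22.4 nm (sigma 0.30 nm), CD-SEM reports 22.9 nm (sigma 0.45 nm).
print(fuse(22.4, 0.30, 22.9, 0.45))
```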

Proceedings ArticleDOI
TL;DR: In this article, the authors describe several aspects of holographic recording into Bayfol® HX which are beneficial for its effective use and discuss them within a more elaborate reaction-diffusion model.
Abstract: We have been developing a new class of recording materials for volume holography, offering the advantages of full color recording and depth tuning without any chemical or thermal processing, combined with low shrinkage and detuning. These photopolymers are based on the two-chemistry concept in which the writing chemistry is dissolved in a preformed polymeric network. This network gives the necessary mechanical stability to the material prior to recording. In this paper we describe several aspects of holographic recording into Bayfol® HX which are beneficial for its effective use and discuss them within a more elaborate reaction-diffusion model. Inhibition phenomena and the influence of precure are studied within this model and are investigated experimentally for single hologram recording and angular multiplexed hologram recordings. Also the dark reaction after the exposure period and the minimum allowable waiting time for full hologram formation are addressed. The proper understanding of these phenomena is important for the optimal usage of these new materials, in for example step-and-repeat mass production of holograms.

Proceedings ArticleDOI
TL;DR: An effective and versatile Matlab toolbox written in C++ has been developed to assist in developing new beam formation strategies and is a general 3D implementation capable of handling a multitude of focusing methods, interpolation schemes, and parametric and dynamic apodization.
Abstract: Focusing and apodization are an essential part of signal processing in ultrasound imaging. Although the fundamental principles are simple, the dramatic increase in computational power of CPUs, GPUs, and FPGAs motivates the development of software-based beamformers, which further improves image quality (and the accuracy of velocity estimation). For developing new imaging methods, it is important to establish proof-of-concept before using resources on real-time implementations. With this in mind, an effective and versatile Matlab toolbox written in C++ has been developed to assist in developing new beam formation strategies. It is a general 3D implementation capable of handling a multitude of focusing methods, interpolation schemes, and parametric and dynamic apodization. Despite being flexible, it is capable of exploiting parallelization on a single computer, on a cluster, or on both. On a single computer, it mimics the parallelization in a scanner containing multiple beamformers. The focusing is determined using the positions of the transducer elements, presence of virtual sources, and the focus points. For interpolation, a number of interpolation schemes can be chosen, e.g. linear, polynomial, or cubic splines. Apodization can be specified by a number of window functions of fixed size applied on the individual elements as a function of distance to a reference point, or it can be dynamic with an expanding or contracting aperture to obtain a constant F-number, or both. On a standard PC with an Intel Quad-Core Xeon E5520 processor running at 2.26 GHz, the toolbox can beamform 300,000 points using 700,000 data samples in 3 seconds using a transducer with 192 elements, dynamic apodization in transmit and receive, and cubic splines for interpolation. This is 19 times faster than our previous toolbox.
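
A minimal delay-and-sum sketch of the focusing described above: per focal point, compute element delays from geometry, linearly interpolate the channel data, apodize, and sum. It illustrates the principle only (receive delays, fixed apodization) and is not the toolbox's API; geometry and data below are synthetic.

```python
# Delay-and-sum beamforming of one focal point from multichannel RF data.
import numpy as np

def beamform_point(rf, element_pos, point, fs, c=1540.0, apod=None):
    """rf: (n_samples, n_elements) channel data; element_pos (n_elements, 3), point (3,) in metres."""
    n_samples, n_elements = rf.shape
    if apod is None:
        apod = np.hanning(n_elements)                 # fixed apodization window
    # Receive-only delays: propagation time from the focal point back to each element.
    dist = np.linalg.norm(element_pos - point, axis=1)
    delays = dist / c * fs                            # in samples
    idx = np.clip(np.floor(delays).astype(int), 0, n_samples - 2)
    frac = delays - np.floor(delays)
    chans = np.arange(n_elements)
    samples = (1 - frac) * rf[idx, chans] + frac * rf[idx + 1, chans]   # linear interpolation
    return np.sum(apod * samples)

# Toy usage: 192 elements on a line, random RF data, one focal point at 30 mm depth.
n_el, fs = 192, 100e6
element_pos = np.column_stack([np.linspace(-0.02, 0.02, n_el),
                               np.zeros(n_el), np.zeros(n_el)])
rf = np.random.randn(4000, n_el)
print(beamform_point(rf, element_pos, np.array([0.0, 0.0, 0.03]), fs))
```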