Showing papers in "Proceedings of SPIE in 2013"
TL;DR: In this paper, the authors proposed an external field-free photonic topological insulator with scatter-free edge transport, which is composed of an array of evanescently coupled helical waveguides arranged in a graphene-like honeycomb lattice.
Abstract: Topological insulators are a new phase of matter, with the striking property that conduction of electrons occurs only on the surface. In two dimensions, surface electrons in topological insulators do not scatter despite defects and disorder, providing robustness akin to superconductors. Topological insulators are predicted to have wide-ranging applications in fault-tolerant quantum computing and spintronics. Recently, large theoretical efforts were directed towards achieving topological insulation for electromagnetic waves. One-dimensional systems with topological edge states have been demonstrated, but these states are zero-dimensional, and therefore exhibit no transport properties. Topological protection of microwaves has been observed using a mechanism similar to the quantum Hall effect, by placing a gyromagnetic photonic crystal in an external magnetic field. However, since magnetic effects are very weak at optical frequencies, realizing photonic topological insulators with scatter-free edge states requires a fundamentally different mechanism - one that is free of magnetic fields. Recently, a number of proposals for photonic topological transport have been put forward. Specifically, one suggested temporally modulating a photonic crystal, thus breaking time-reversal symmetry and inducing one-way edge states. This is in the spirit of the proposed Floquet topological insulators, where temporal variations in solid-state systems induce topological edge states. Here, we propose and experimentally demonstrate the first external field-free photonic topological insulator with scatter-free edge transport: a photonic lattice exhibiting topologically protected transport of visible light on the lattice edges. Our system is composed of an array of evanescently coupled helical waveguides arranged in a graphene-like honeycomb lattice. Paraxial diffraction of light is described by a Schrödinger equation where the propagation coordinate acts as 'time'.
Thus the waveguides' helicity breaks z-reversal symmetry in a sense akin to Floquet topological insulators. This structure supports one-way edge states that are topologically protected from scattering.
TL;DR: The various architectures that have been proposed in industry to implement see-through head-mounted display (HMD) especially for the consumer electronics market are reviewed.
Abstract: We review in this paper the various architectures that have been proposed in industry to implement see-through head-mounted display (HMD), especially for the consumer electronics market.
TL;DR: This paper discusses all key component technologies required to realize optical cellular communication systems referred to here as optical attocell networks.
Abstract: Motivated by the looming radio frequency (RF) spectrum crisis, this paper aims to demonstrate that optical wireless communication (OWC) has now reached a state where it is a viable and mature solution to this fundamental problem. In particular, for indoor communications, where most mobile data traffic is consumed, light fidelity (Li-Fi), which is related to visible light communication (VLC), offers many key advantages and effective solutions to the issues that have been posed in the last decade. This paper discusses all key component technologies required to realize optical cellular communication systems, referred to here as optical attocell networks. Optical attocells are the next step in the progression towards ever smaller cells, a progression which is known to be the most significant contributor to the improvements in network spectral efficiencies in RF wireless networks.
TL;DR: The ability to treat immersive work spaces in this Hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data.
Abstract: Hybrid Reality Environments represent a new kind of visualization spaces that blur the line between virtual environments and high resolution tiled display walls. This paper outlines the design and implementation of the CAVE2™ Hybrid Reality Environment. CAVE2 is the world's first near-seamless flat-panel-based, surround-screen immersive system. Unique to CAVE2 is that it will enable users to simultaneously view both 2D and 3D information, providing more flexibility for mixed media applications. CAVE2 is a cylindrical system of 24 feet in diameter and 8 feet tall, and consists of 72 near-seamless, off-axis-optimized passive stereo LCD panels, creating an approximately 320 degree panoramic environment for displaying information at 37 Megapixels (in stereoscopic 3D) or 74 Megapixels in 2D and at a horizontal visual acuity of 20/20. Custom LCD panels with shifted polarizers were built so the images in the top and bottom rows of LCDs are optimized for vertical off-center viewing, allowing viewers to come closer to the displays while minimizing ghosting. CAVE2 is designed to support multiple operating modes. In the Fully Immersive mode, the entire room can be dedicated to one virtual simulation. In 2D mode, the room can operate like a traditional tiled display wall enabling users to work with large numbers of documents at the same time. In the Hybrid mode, a mixture of both 2D and 3D applications can be simultaneously supported. The ability to treat immersive work spaces in this Hybrid way has never been achieved before, and leverages the special abilities of CAVE2 to enable researchers to seamlessly interact with large collections of 2D and 3D data. To realize this hybrid ability, we merged the Scalable Adaptive Graphics Environment (SAGE) - a system for supporting 2D tiled displays, with Omegalib - a virtual reality middleware supporting OpenGL, OpenSceneGraph and Vtk applications.
TL;DR: A unique triple-modality magnetic resonance imaging - photoacoustic imaging - Raman imaging nanoparticle (termed here MPR nanoparticles) can accurately help delineate the margins of brain tumors in living mice both preoperatively and intraoperatively.
Abstract: The difficulty in delineating brain tumor margins is a major obstacle in the path toward better outcomes for patients with brain tumors. Current imaging methods are often limited by inadequate sensitivity, specificity and spatial resolution. Here we show that a unique triple-modality magnetic resonance imaging - photoacoustic imaging - Raman imaging nanoparticle (termed here MPR nanoparticles) can accurately help delineate the margins of brain tumors in living mice both preoperatively and intraoperatively. The MPRs were detected by all three modalities with at least a picomolar sensitivity both in vitro and in living mice. Intravenous injection of MPRs into glioblastoma-bearing mice led to MPR accumulation and retention by the tumors, with no MPR accumulation in the surrounding healthy tissue, allowing for a noninvasive tumor delineation using all three modalities through the intact skull. Raman imaging allowed for guidance of intraoperative tumor resection, and a histological correlation validated that Raman imaging was accurately delineating the brain tumor margins. This new triple-modality nanoparticle approach has promise for enabling more accurate brain tumor imaging and resection.
TL;DR: The Lytro camera is considered a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography, and the interpretation of Lytro image data saved by the camera is used.
Abstract: The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.
TL;DR: TEMPO was selected in 2012 by NASA as the first Earth Venture Instrument, for launch circa 2018, and it will measure atmospheric pollution for greater North America from space using ultraviolet and visible spectroscopy.
Abstract: TEMPO was selected in 2012 by NASA as the first Earth Venture Instrument, for launch circa 2018. It will measure atmospheric pollution for greater North America from space using ultraviolet and visible spectroscopy. TEMPO measures from Mexico City to the Canadian tar sands, and from the Atlantic to the Pacific, hourly and at high spatial resolution (~2 km N/S×4.5 km E/W at 36.5°N, 100°W). TEMPO provides a tropospheric measurement suite that includes the key elements of tropospheric air pollution chemistry. Measurements are from geostationary (GEO) orbit, to capture the inherent high variability in the diurnal cycle of emissions and chemistry. The small product spatial footprint resolves pollution sources at sub-urban scale. Together, this temporal and spatial resolution improves emission inventories, monitors population exposure, and enables effective emission-control strategies. TEMPO takes advantage of a commercial GEO host spacecraft to provide a modest cost mission that measures the spectra required to retrieve O3, NO2, SO2, H2CO, C2H2O2, H2O, aerosols, cloud parameters, and UVB radiation. TEMPO thus measures the major elements, directly or by proxy, in the tropospheric O3 chemistry cycle. Multi-spectral observations provide sensitivity to O3 in the lowermost troposphere, substantially reducing uncertainty in air quality predictions. TEMPO quantifies and tracks the evolution of aerosol loading. It provides near-real-time air quality products that will be made widely, publicly available. TEMPO will launch at a prime time to be the North American component of the global geostationary constellation of pollution monitoring together with European Sentinel-4 and Korean GEMS.
TL;DR: A multi-modal (hyperspectral, multispectral and LIDAR) imaging data collection campaign was conducted just south of Rochester, New York, in Avon, NY on September 20, 2012 by the Rochester Institute of Technology (RIT) in conjunction with SpecTIR, LLC, the Air Force Research Lab (AFRL), the Naval Research Lab (NRL), United Technologies Aerospace Systems (UTAS) and MITRE as discussed by the authors.
Abstract: A multi-modal (hyperspectral, multispectral, and LIDAR) imaging data collection campaign was conducted just south of Rochester, New York, in Avon, NY on September 20, 2012 by the Rochester Institute of Technology (RIT) in conjunction with SpecTIR, LLC, the Air Force Research Lab (AFRL), the Naval Research Lab (NRL), United Technologies Aerospace Systems (UTAS) and MITRE. The campaign was a follow-on from the SpecTIR Hyperspectral Airborne Rochester Experiment (SHARE) from 2010. Data were collected in support of the eleven simultaneous experiments described here. The airborne imagery was collected over four different sites with hyperspectral, multispectral, and LIDAR sensors. The sites for data collection included Avon, NY, Conesus Lake, Hemlock Lake and forest, and a nearby quarry. Experiments included topics such as target unmixing, subpixel detection, material identification, impacts of illumination on materials, forest health, and in-water target detection. An extensive ground truthing effort was conducted in addition to collection of the airborne imagery. The ultimate goal of the data collection campaign is to provide the remote sensing community with a shareable resource to support future research. This paper details the experiments conducted and the data collected during this campaign.
TL;DR: In this paper, the authors considered stray light as the likely cause, and studied calibration data from the space view, and concluded that stray light is likely entering the VIIRS scan cavity directly or indirectly through the nadir door and solar diffuser openings.
Abstract: The Day/Night Band (DNB) on the VIIRS sensor is a panchromatic band that can detect very dim nighttime scenes. After launch of Suomi NPP in October 2011 we observed a gray haze in radiance images with offsets up to 5×10⁻⁹ W cm⁻² sr⁻¹. Overall this impacts about 25% of nighttime scenes. We considered stray light as the likely cause, and studied calibration data from the space view. We concluded that stray light is likely entering the VIIRS scan cavity directly or indirectly through the nadir door and solar diffuser openings. We also studied the darkest earth scenes without any solar, lunar or artificial illumination, and found that the offset is a function of cross-track pixel, solar angle of incidence (AOI) and detector number. We observed a strong detector dependence causing striping. Dividing scenes into 3 or 4 zones in each hemisphere based on solar AOI, we devised an algorithmic correction based on polynomial fits of the medians of the data for each zone. The correction removed almost all the haze and striping and improved dynamic range by two orders of magnitude, to as low as 1×10⁻¹⁰ W cm⁻² sr⁻¹, but some striping remains in the twilight region due to extrapolation.
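The zoned correction described above can be sketched as follows. This is an illustrative simplification, not the operational VIIRS DNB algorithm: the function names are assumed, it fits the offset against solar AOI only, and it omits the cross-track-pixel and detector dependencies the abstract mentions.

```python
import numpy as np

def fit_zone_corrections(aoi, offsets, zone_edges, deg=3):
    """Fit one polynomial of stray-light offset vs. solar AOI per zone.

    `aoi` and `offsets` would come from the medians of the darkest
    (unilluminated) scenes; `zone_edges` are the AOI boundaries that
    divide each hemisphere into zones.
    """
    polys = []
    for lo, hi in zip(zone_edges[:-1], zone_edges[1:]):
        mask = (aoi >= lo) & (aoi < hi)
        polys.append(np.polynomial.Polynomial.fit(aoi[mask], offsets[mask], deg))
    return polys

def correct(aoi_value, radiance, polys, zone_edges):
    """Subtract the fitted stray-light offset for the zone containing this AOI."""
    zone = np.searchsorted(zone_edges, aoi_value, side="right") - 1
    zone = min(max(zone, 0), len(polys) - 1)
    return radiance - polys[zone](aoi_value)
```

A dark scene corrected this way should sit near zero radiance, which is how the haze removal in the abstract operates in principle.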
TL;DR: In this article, the authors describe a solution to one of the challenges to continuously imaging inside of the chamber during the EBM process by using a continuously moving Mylar film canister.
Abstract: Oak Ridge National Laboratory (ORNL) has been utilizing the ARCAM electron beam melting technology to additively manufacture complex geometric structures directly from powder. Although the technology has demonstrated the ability to decrease costs, decrease manufacturing lead-time and fabricate complex structures that are impossible to fabricate through conventional processing techniques, certification of the component quality can be challenging. Because the process involves the continuous deposition of successive layers of material, each layer can be examined without destructively testing the component. However, in-situ process monitoring is difficult due to metallization on inside surfaces caused by evaporation and condensation of metal from the melt pool. This work describes a solution to one of the challenges to continuously imaging inside of the chamber during the EBM process. Here, the utilization of a continuously moving Mylar film canister is described. Results will be presented related to in-situ process monitoring and how this technique results in improved mechanical properties and reliability of the process.
TL;DR: An adaptive denoising algorithm based on Block-Matching 3D that reduces image noise and has the potential for improving assessment of left ventricular function from low-dose coronary CTA is optimized and validated.
Abstract: Our aim in this study was to optimize and validate an adaptive denoising algorithm based on Block-Matching 3D, for reducing image noise and improving assessment of left ventricular function from low-radiation dose coronary CTA. In this paper, we describe the denoising algorithm and its validation, with low-radiation dose coronary CTA datasets from 7 consecutive patients. We validated the algorithm using a novel method, with the myocardial mass from the low-noise cardiac phase as a reference standard, and objective measurement of image noise. After denoising, the myocardial masses were not statistically different when comparing individual data points with Student's t-test (130.9±31.3g in the low-noise 70% phase vs 142.1±48.8g in the denoised 40% phase, p = 0.23). Image noise improved significantly between the 40% phase and the denoised 40% phase by Student's t-test, both in the blood pool (p <0.0001) and myocardium (p <0.0001). In conclusion, we optimized and validated an adaptive BM3D denoising algorithm for coronary CTA. This new method reduces image noise and has the potential for improving assessment of left ventricular function from low-dose coronary CTA.
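The statistical comparison above can be reproduced in outline with a paired Student's t-test. The per-patient masses below are hypothetical placeholders chosen only to illustrate the mechanics, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-patient myocardial masses (grams) for 7 patients:
# low-noise 70% phase vs. denoised 40% phase. Illustrative values only.
mass_70 = np.array([118.0, 102.5, 161.2, 95.4, 170.3, 129.9, 139.0])
mass_40_denoised = np.array([125.1, 98.7, 175.0, 90.2, 160.4, 140.8, 131.5])

# Paired Student's t-test, as in the validation above: a high p-value
# indicates the denoised-phase masses are not statistically different
# from the low-noise reference standard.
t_stat, p_value = stats.ttest_rel(mass_70, mass_40_denoised)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")
```

The same call on the blood-pool and myocardial noise measurements would yield the significance tests quoted in the abstract.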
TL;DR: This research focuses on applying common, low-cost, low-overhead cyber-attacks on a robot featuring ROS and documents the effectiveness of those attacks.
Abstract: Over the course of the last few years, the Robot Operating System (ROS) has become a highly popular software framework for robotics research. ROS has a very active developer community and is widely used for robotics research in both academia and government labs. The prevalence and modularity of ROS cause many people to ask the question: "What prevents ROS from being used in commercial or government applications?" One of the main problems that is preventing this increased use of ROS in these applications is the question of characterizing its security (or lack thereof). In the summer of 2012, a crowd-sourced cyber-physical security contest was launched at the cyber security conference DEF CON 20 to begin the process of characterizing the security of ROS. A small-scale, car-like robot was configured as a cyber-physical security "honeypot" running ROS. DEF CON 20 attendees were invited to find exploits and vulnerabilities in the robot while network traffic was collected. The results of this experiment provided some interesting insights and opened up many security questions pertaining to deployed robotic systems. The Federal Aviation Administration is tasked with opening up the civil airspace to commercial drones by September 2015 and driverless cars are already legal for research purposes in a number of states. Given the integration of these robotic devices into our daily lives, the authors pose the following question: "What security exploits can a motivated person with little-to-no experience in cyber security execute, given the wide availability of free cyber security penetration testing tools such as Metasploit?" This research focuses on applying common, low-cost, low-overhead cyber-attacks on a robot featuring ROS. This work documents the effectiveness of those attacks.
TL;DR: This work proposes and demonstrates on an operational SEM a fast method to sparsely sample and reconstruct smooth images, and reports image fidelity as a function of acquisition speed by comparing traditional raster to sparse imaging modes.
Abstract: Scanning electron microscopes (SEMs) are used in neuroscience and materials science to image centimeters of sample area at nanometer scales. Since imaging rates are in large part SNR-limited, large collections can lead to weeks of around-the-clock imaging time. To increase data collection speed, we propose and demonstrate on an operational SEM a fast method to sparsely sample and reconstruct smooth images. To accurately localize the electron probe position at fast scan rates, we model the dynamics of the scan coils, and use the model to rapidly and accurately visit a randomly selected subset of pixel locations. Images are reconstructed from the undersampled data by compressed sensing inversion using image smoothness as a prior. We report image fidelity as a function of acquisition speed by comparing traditional raster to sparse imaging modes. Our approach is equally applicable to other domains of nanometer microscopy in which the time to position a probe is a limiting factor (e.g., atomic force microscopy), or in which excessive electron doses might otherwise alter the sample being observed (e.g., scanning transmission electron microscopy).
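The compressed-sensing inversion with a smoothness prior can be illustrated with a minimal toy solver: reconstruct an image from a random subset of pixel measurements by gradient descent on a data-fidelity term plus a gradient-smoothness penalty. The function name, the periodic boundary handling, and the simple first-order solver are assumptions for this sketch; the paper's actual reconstruction pipeline is more sophisticated.

```python
import numpy as np

def reconstruct_smooth(shape, rows, cols, values, lam=0.2, iters=3000, step=0.5):
    """Reconstruct an image from sparsely sampled pixels with a smoothness prior.

    Minimizes 0.5*||x[samples] - values||^2 + 0.5*lam*||grad x||^2 by
    gradient descent: unsampled pixels are filled in smoothly from their
    sampled neighbors (periodic boundaries via np.roll).
    """
    x = np.zeros(shape)
    x[rows, cols] = values  # initialize with the measurements
    for _ in range(iters):
        # Discrete Laplacian; -lam*lap is the gradient of the smoothness term.
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)
        grad = -lam * lap
        # Data-fidelity gradient acts only at the sampled locations.
        grad[rows, cols] += x[rows, cols] - values
        x -= step * grad
    return x
```

With 30-40% of pixels visited, a smooth test image is recovered closely, which is the regime the abstract describes for SNR-limited SEM collections.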
TL;DR: In this article, the authors examine the future for critical dimension (CD) metrology and present the extensive list of applications for which CD metrology solutions are needed, showing commonalities and differences among the various applications.
Abstract: This paper will examine the future for critical dimension (CD) metrology. First, we will present the extensive list of applications for which CD metrology solutions are needed, showing commonalities and differences among the various applications. We will then report on the expected technical limits of the metrology solutions currently being investigated by SEMATECH and others in the industry to address the metrology challenges of future nodes, including conventional CD scanning electron microscopy (CD-SEM) and optical critical dimension (OCD) metrology and new potential solutions such as He-ion microscopy (HeIM, sometimes elsewhere referred to as HIM), CD atomic force microscopy (CD-AFM), CD small-angle x-ray scattering (CD-SAXS), high-voltage scanning electron microscopy (HV-SEM), and other types. A technical gap analysis matrix will then be demonstrated, showing the current state of understanding of the future of the CD metrology space.
TL;DR: The key remaining challenge is productivity, which translates to a cost-effective introduction of EUVL in high-volume manufacturing (HVM).
Abstract: All six NXE:3100, 0.25 NA EUV exposure systems are in use at customer sites enabling device development and cycles of learning for early production work in all lithographic segments; Logic, DRAM, MPU, and FLASH memory. NXE EUV lithography has demonstrated imaging and overlay performance both at ASML and end-users that supports sub- 27nm device work. Dedicated chuck overlay performance of <2nm has been shown on all six NXE:3100 systems. The key remaining challenge is productivity, which translates to a cost-effective introduction of EUVL in high-volume manufacturing (HVM). High volume manufacturing of the devices and processes in development is expected to be done with the third generation EUV scanners - the NXE:3300B. The NXE:3300B utilizes an NA of 0.33 and is positioned at a resolution of 22nm which can be extended to 18nm with off-axis illumination. The subsystem performance is improved to support these imaging resolutions and overall productivity enhancements are integrated into the NXE platform consistent with 125 wph. Since EUV reticles currently do not use a pellicle, special attention is given to reticle-added-defect performance in terms of system design and machine build including maintenance procedures. In this paper we will summarize key lithographic performance of the NXE:3100 and the NXE:3300B, the NXE platform improvements made from learning on NXE:3100 and the Alpha Demo Tool, current status of EUV sources and development for the high-power sources needed for HVM. Finally, the possibilities for EUV roadmap extension will be reviewed.
TL;DR: A cooperative game concept to derive the power production of individual wind turbines so that the total wind-farm power efficiency is optimized and numerical simulations show that the cooperative control strategy can increase the power production in a wind farm.
Abstract: The objective of this study is to improve the cost-effectiveness and production efficiency of wind farms using cooperative control. The key factors in determining the power production and the loading for a wind turbine are the nacelle yaw and blade pitch angles. However, the nacelle and blade angles may adjust the wake direction and intensity in a way that may adversely affect the performance of other wind turbines in the wind farm. Conventional wind-turbine control methods maximize the power production of a single turbine, but can lower the overall wind-farm power efficiency due to wake interference. This paper introduces a cooperative game concept to derive the power production of individual wind turbines so that the total wind-farm power efficiency is optimized. Based on a wake interaction model relating the yaw offset angles and the induction factors of wind turbines to the wind speeds experienced by the wind turbines, an optimization problem is formulated with the objective of maximizing the sum of the power production of a wind farm. A steepest descent algorithm is applied to find the optimal combination of yaw offset angles and the induction factors that increases the total wind farm power production. Numerical simulations show that the cooperative control strategy can increase the power production in a wind farm.
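The steepest-descent idea above can be illustrated on a toy two-turbine surrogate: yawing the upstream turbine sacrifices some of its own power but deflects its wake, raising the downstream turbine's output. The wake deficit model below is an assumed illustration (not the paper's wake interaction model), and the function names are hypothetical.

```python
import numpy as np

def farm_power(yaw):
    """Toy normalized total power of two aligned turbines.

    Upstream power follows the standard cos^3 yaw loss; the downstream
    wake deficit is an assumed Gaussian surrogate that shrinks as the
    wake is steered away by the upstream yaw (radians).
    """
    p_upstream = np.cos(yaw) ** 3
    deficit = 0.5 * np.exp(-(yaw / 0.3) ** 2)   # wake deficit shrinks with yaw
    p_downstream = (1.0 - deficit) ** 3
    return p_upstream + p_downstream

def optimize_yaw(step=0.02, iters=500, h=1e-5):
    """Steepest ascent on total farm power with a central-difference gradient."""
    yaw = 0.05  # small initial misalignment to move off the symmetric stationary point
    for _ in range(iters):
        grad = (farm_power(yaw + h) - farm_power(yaw - h)) / (2 * h)
        yaw += step * grad
    return yaw
```

Even in this crude surrogate, the optimizer settles on a substantial upstream yaw offset whose farm total exceeds the greedy zero-yaw operating point, which is the qualitative result the abstract reports.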
TL;DR: A highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization that combines very attractive runtimes with state-of-the-art accuracy in a unique way.
Abstract: Lung registration in thoracic CT scans has received much attention in the medical imaging community. Possible applications range from follow-up analysis, motion correction for radiation therapy, monitoring of air flow and pulmonary function to lung elasticity analysis. In a clinical environment, runtime is always a critical issue, ruling out quite a few excellent registration approaches. In this paper, a highly efficient variational lung registration method based on minimizing the normalized gradient fields distance measure with curvature regularization is presented. The method ensures diffeomorphic deformations by an additional volume regularization. Supplemental user knowledge, like a segmentation of the lungs, may be incorporated as well. The accuracy of our method was evaluated on 40 test cases from clinical routine. In the EMPIRE10 lung registration challenge, our scheme ranks third, with respect to various validation criteria, out of 28 algorithms with an average landmark distance of 0.72 mm. The average runtime is about 1:50 min on a standard PC, making it by far the fastest approach of the top-ranking algorithms. Additionally, the ten publicly available DIR-Lab inhale-exhale scan pairs were registered to subvoxel accuracy at computation times of only 20 seconds. Our method thus combines very attractive runtimes with state-of-the-art accuracy in a unique way.
TL;DR: The results of an extensive imaging study that explored the advantages of using extremely wide beamwidth and bandwidth are presented, primarily for the 10-40 GHz frequency band.
Abstract: Active millimeter-wave imaging is currently being used for personnel screening at airports and other high-security facilities. The cylindrical imaging techniques used in the deployed systems are based on licensed technology developed at the Pacific Northwest National Laboratory. The cylindrical and a related planar imaging technique form three-dimensional images by scanning a diverging beam swept frequency transceiver over a two-dimensional aperture and mathematically focusing or reconstructing the data into three-dimensional images of the person being screened. The resolution, clothing penetration, and image illumination quality obtained with these techniques can be significantly enhanced through the selection of the aperture size, antenna beamwidth, center frequency, and bandwidth. The lateral resolution can be improved by increasing the center frequency, or it can be increased with a larger antenna beamwidth. The wide beamwidth approach can significantly improve illumination quality relative to a higher frequency system. Additionally, a wide antenna beamwidth allows for operation at a lower center frequency resulting in less scattering and attenuation from the clothing. The depth resolution of the system can be improved by increasing the bandwidth. Utilization of extremely wide bandwidths of up to 30 GHz can result in depth resolution as fine as 5 mm. This wider bandwidth operation may allow for improved detection techniques based on high range resolution. In this paper, the results of an extensive imaging study that explored the advantages of using extremely wide beamwidth and bandwidth are presented, primarily for the 10-40 GHz frequency band.
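The bandwidth-to-depth-resolution figure quoted above follows from the standard swept-frequency range resolution formula, dr = c / (2B):

```python
# Range (depth) resolution of a swept-frequency imaging system: dr = c / (2 * B).
C = 299_792_458.0  # speed of light, m/s

def range_resolution(bandwidth_hz):
    return C / (2.0 * bandwidth_hz)

# The 30 GHz bandwidth cited above gives roughly 5 mm depth resolution.
print(range_resolution(30e9) * 1e3, "mm")
```

Doubling the bandwidth halves the depth resolution, which is why the study pushes toward the extremely wide 10-40 GHz sweep.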
TL;DR: An overview of different VLC based indoor positioning techniques and their performance is provided, including a detailed comparison on their performances in terms of accuracy, space dimension and complexity.
Abstract: Indoor positioning based on visible light communication (VLC) technology has become a very popular research topic recently. This paper provides an overview of different VLC based indoor positioning techniques and their performance. Three schemes of location estimation together with their mechanisms are introduced. We then present a detailed comparison on their performances in terms of accuracy, space dimension and complexity. We further discuss a couple of factors that would affect the performance of each of these positioning systems, including multipath reflections and synchronization. Finally, conclusions and future prospects of research are addressed.
TL;DR: In this article, the performance of chemically-amplified resists and inorganic resists using EUV-IL has been evaluated with the aim of resolving patterns with CARs for 16 nm half pitch (HP) and 11 nm HP.
Abstract: The performance of EUV resists is one of the main challenges for the cost-effectiveness and the introduction of EUV lithography into high-volume manufacturing. EUV interference lithography (EUV-IL) is a simple and powerful technique to print periodic nanostructures with a resolution beyond the capabilities of other tools. In addition, the well-defined and pitch-independent aerial image of EUV-IL provides further advantages for the analysis of resist performance. In this paper, we present an evaluation of chemically-amplified resists (CARs) and inorganic resists using EUV-IL. We illustrate the performance of the tool through a reproducibility study of a baseline resist over the course of 16 months. A comparative study of the performance of different resists is presented with the aim of resolving patterns with CARs at 16 nm half-pitch (HP) and 11 nm HP. Critical dimension (CD) and line-edge roughness (LER) are evaluated as functions of dose for different process conditions. With a CAR of about 10 mJ/cm2 sensitivity, 18 nm L/S patterns are obtained with low LER and well-resolved patterns are achieved down to 16 nm HP. With another CAR of about 35 mJ/cm2 sensitivity, L/S patterns with low LER are demonstrated down to 14 nm HP. Resolved patterns are achieved down to 12 nm HP, demonstrating its potential towards 11 nm HP if pattern collapse mitigation can be successfully applied. With EUV-sensitive inorganic resists, patterning down to 8 nm has been realized. In summary, we show that resist platforms with reasonable sensitivities are already available for patterning at 16 nm HP, 11 nm HP, and beyond, although significant progress is still needed. We also show that with decreasing HP, pattern collapse becomes a crucial issue limiting the resolution and LER.
Therefore, resist stability, pattern-collapse mitigation, and etch resistance are among the significant problems to be addressed in the development of resist platforms for future technology nodes.
TL;DR: SoftVue’s imaging performance was consistent across all breast density categories and had much better resolution and contrast, and may eliminate the current trade-off between the cost effectiveness of mammography and the imaging performance of more expensive systems such as magnetic resonance imaging.
Abstract: For women with dense breast tissue, who are at much higher risk for developing breast cancer, the performance of mammography is at its worst. Consequently, many early cancers go undetected when they are the most treatable. Improved cancer detection for women with dense breasts would decrease the proportion of breast cancers diagnosed at later stages, which would significantly lower the mortality rate. The emergence of whole breast ultrasound provides good performance for women with dense breast tissue, and may eliminate the current trade-off between the cost effectiveness of mammography and the imaging performance of more expensive systems such as magnetic resonance imaging. We report on the performance of SoftVue, a whole breast ultrasound imaging system, based on the principles of ultrasound tomography. SoftVue was developed by Delphinus Medical Technologies and builds on an early prototype developed at the Karmanos Cancer Institute. We present results from preliminary testing of the SoftVue system, performed both in the lab and in the clinic. These tests aimed to validate the expected improvements in image performance. Initial qualitative analyses showed major improvements in image quality, thereby validating the new imaging system design. Specifically, SoftVue’s imaging performance was consistent across all breast density categories and had much better resolution and contrast. The implications of these results for clinical breast imaging are discussed and future work is described.
TL;DR: This work proposes a fast noise variance estimation algorithm based on principal component analysis of image blocks that, in experiments involving seven state-of-the-art methods, was faster than the methods with similar or higher accuracy.
Abstract: Noise variance estimation is required in many image denoising, compression, and segmentation applications. In this work, we propose a fast noise variance estimation algorithm based on principal component analysis of image blocks. First, we rearrange image blocks into vectors and compute the covariance matrix of these vectors. Then, we use Bartlett's test in order to select the covariance matrix eigenvalues, which correspond only to noise. This allows estimating the noise variance as the average of these eigenvalues. Since the maximum possible number of eigenvalues corresponding to noise is utilized, it is enough to process only a small number of image blocks, which allows reduction of the execution time. The blocks to process are selected from image regions with the smallest variance. During our experiments involving seven state-of-the-art methods, the proposed approach was significantly faster than the methods with similar or higher accuracy. Meanwhile, the relative error of our estimator was always less than 15%. We also show that the proposed method can process images without homogeneous areas.
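The pipeline the abstract describes (blocks to vectors, covariance matrix, noise eigenvalues, averaged estimate) can be sketched in a few lines. This is an illustrative simplification, not the authors' implementation: it uses all non-overlapping blocks rather than selecting low-variance regions, and a fixed fraction of the smallest eigenvalues stands in for Bartlett's test; the function name and parameters are assumptions.

```python
import numpy as np

def estimate_noise_variance(img, block=8, keep_frac=0.5):
    """Estimate additive-noise variance via PCA of image blocks.

    Simplified sketch: non-overlapping blocks, and the smallest
    `keep_frac` of eigenvalues stands in for Bartlett's test.
    """
    h, w = img.shape
    # Rearrange image blocks into vectors
    vecs = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            vecs.append(img[i:i + block, j:j + block].ravel())
    X = np.asarray(vecs, dtype=float)
    X -= X.mean(axis=0)
    # Covariance matrix of the block vectors
    cov = X.T @ X / (len(X) - 1)
    eig = np.linalg.eigvalsh(cov)            # ascending order
    k = max(1, int(len(eig) * keep_frac))
    # Noise variance = average of the eigenvalues attributed to noise
    return float(eig[:k].mean())
```

On an image whose content occupies a low-dimensional block subspace (e.g. a smooth ramp), the smallest eigenvalues cluster near the true noise variance, which is why only a modest number of blocks is needed.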
TL;DR: In this article, the authors derived several new algorithms for particle flow with non-zero diffusion corresponding to Bayes' rule and applied them to arbitrary multimodal densities with smooth nowhere vanishing non-Gaussian densities.
Abstract: We derive several new algorithms for particle flow with non-zero diffusion corresponding to Bayes’ rule. This is unlike all of our previous particle flows, which assumed zero diffusion for the flow corresponding to Bayes’ rule. We emphasize, however, that all of our particle flows have always assumed non-zero diffusion for the dynamical model of the evolution of the state vector in time. Our new algorithm is simple and fast, and it has an especially nice intuitive formula, which is the same as Newton’s method to solve the maximum likelihood estimation (MLE) problem (but for each particle rather than only the MLE), and it is also the same as the extended Kalman filter for the special case of Gaussian densities (but for each particle rather than just the point estimate). All of these new flows apply to arbitrary smooth, nowhere-vanishing multimodal non-Gaussian densities.
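The Gaussian special case mentioned above (the extended-Kalman-filter update applied to each particle rather than just the point estimate) can be illustrated with a minimal linear-Gaussian sketch. The diffusion term of the full flow is omitted, and the function and parameter names are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def kalman_flow_update(particles, z, H, P, R):
    """Move every particle by the Kalman measurement update.

    Sketch of the Gaussian special case only: the same gain an EKF
    would apply to the point estimate is applied to each particle
    (the diffusion term of the full flow is omitted).

    particles: (N, n) array; z: (m,) measurement; H: (m, n) model;
    P: (n, n) prior covariance; R: (m, m) measurement-noise covariance.
    """
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    innov = z - particles @ H.T          # (N, m) per-particle innovation
    return particles + innov @ K.T
```

For a scalar example with prior variance 1, measurement noise 1, and z = 2, the gain is 0.5, so a particle at 0 moves to 1 and a particle already at 2 stays put.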
TL;DR: In this paper, the authors investigate a set of pellicle requirements and potential EUV pellicle materials, and present experimental pellicle performance and imaging results.
Abstract: EUV defectivity has been an important topic of investigation in past years. Today, the absence of a pellicle raises concerns about particle adders on the reticle front side. Improving reticle front-side defectivity via implementation of a pellicle could greatly assist in propelling EUV into high-volume manufacturing. In this paper, we investigate a set of pellicle requirements and potential EUV pellicle materials. Further, we present experimental pellicle performance and imaging results.
TL;DR: In this article, the infusion, flow and cure of RTM6 resin in a carbon fiber reinforced composite preform have been monitored using a variety of multiplexed fiber optic sensors.
Abstract: The infusion, flow and cure of RTM6 resin in a carbon fibre reinforced composite preform have been monitored using a variety of multiplexed fibre optic sensors. Optical fibre Fresnel sensors and tilted fibre Bragg grating (TFBG) sensors were configured to monitor resin infusion/flow in-plane of the component. The results obtained from the different sensors were in good agreement with visual observations. The degree of cure was monitored by Fresnel sensors via a measurement of the refractive index of the resin, which was converted to degree of cure using a calibration determined from Differential Scanning Calorimetry. Fibre Bragg grating sensors fabricated in highly linearly birefringent fibre were used to monitor the development of transverse strain during the cure process, revealing through-thickness material shrinkage of about 712 με and residual strain of 223 με. An alternative approach to infusion monitoring, based on an array of multiplexed tapered optical fibre sensors interrogated using optical frequency domain reflectometry, was also investigated in a separate carbon fibre preform that was infused with RTM6 resin.
TL;DR: In this paper, the authors review technology advances in liquid crystal on silicon (LCOS), the resulting display applications, and related system implementations, including image and illumination waveguides and laser illuminators, for HMD, HUD, projector, and image analysis/surveillance direct-view monitor applications.
Abstract: LCOS (Liquid Crystal on Silicon) is a reflective microdisplay technology based on a single crystal silicon pixel controller backplane which drives a liquid crystal layer. Using standard CMOS processes, microdisplays with extremely small pixels, high fill factor (pixel aperture ratio) and low fabrication costs are created. Recent advances in integrated circuit design and liquid crystal materials have increased the application of LCOS to displays and other optical functions. Pixel pitch below 3 µm, resolution of 8K x 4K, and sequential contrast ratios of 100K:1 have been achieved. These devices can modulate light spatially in amplitude or phase, so they act as an active dynamic optical element. Liquid crystal materials can be chosen to modulate illumination sources from the UV through far IR. The new LCOS designs have reduced power consumption to make portable displays and viewing elements more viable. Also innovative optical system elements including image and illumination waveguides and laser illuminators have been combined into LCOS based display systems for HMD, HUD, projector, and image analysis/surveillance direct view monitor applications. Dynamic displays utilizing the fine pixel pitch and phase mode operation of LCOS are advancing the development of true holographic displays. The paper will review these technology advances of LCOS and the display applications and related system implementation.
TL;DR: In this article, the degree of mutual correlation of the Mueller matrix of biological tissue is proposed as an experimental parameter for differentiating the orientation-phase components of the architectonic and physiological state of biological tissues, based on a statistical and correlation approach.
Abstract: On the basis of the approach proposed by Ellis and Dogariu for analyzing the degree of consistency between the polarization states at two points of a coherent laser field, a new parameter describing the polarization-correlation similarity between the manifestations of optically anisotropic components at different points of biological tissue, the degree of mutual correlation of the Mueller matrix, has been obtained. This parameter is proposed for differentiating the orientation-phase components of the architectonic and physiological state of biological tissue, based on a statistical and correlation approach.
TL;DR: The hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering, and it is shown that the hyperfan’s performance scales with aperture count.
Abstract: Imaging in low light is problematic as sensor noise can dominate imagery, and increasing illumination or aperture size is not always effective or practical. Computational photography offers a promising solution in the form of the light field camera, which by capturing redundant information offers an opportunity for elegant noise rejection. We show that the light field of a Lambertian scene has a 4D hyperfan-shaped frequency-domain region of support at the intersection of a dual-fan and a hypercone. By designing and implementing a filter with appropriately shaped passband we accomplish denoising with a single all-in-focus linear filter. Drawing examples from the Stanford Light Field Archive and images captured using a commercially available lenselet-based plenoptic camera, we demonstrate that the hyperfan outperforms competing methods including synthetic focus, fan-shaped antialiasing filters, and a range of modern nonlinear image and video denoising techniques. We show the hyperfan preserves depth of field, making it a single-step all-in-focus denoising filter suitable for general-purpose light field rendering. We include results for different noise types and levels, over a variety of metrics, and in real-world scenarios. Finally, we show that the hyperfan’s performance scales with aperture count.
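Although the paper's filter is 4D, the idea can be illustrated with a toy 2D analogue on a single epipolar-plane image (EPI), where the dual-fan reduces to a fan of slopes: frequencies pass only if their slthat, the EPI slope implied by scene depth, lies in a chosen range. The function names, the slope range standing in for the scene's depth range, and the 2D reduction itself are illustrative assumptions, not the authors' 4D hyperfan.

```python
import numpy as np

def fan_mask(n_u, n_s, slope_min=-1.0, slope_max=1.0):
    """Fan-shaped frequency passband for a 2D epipolar-plane image.

    Toy 2D analogue of the paper's 4D hyperfan: pass frequencies
    whose EPI slope -w_u/w_s (set by scene depth) lies within
    [slope_min, slope_max].
    """
    wu = np.fft.fftfreq(n_u)[:, None]
    ws = np.fft.fftfreq(n_s)[None, :]
    with np.errstate(divide="ignore", invalid="ignore"):
        slope = -wu / ws                     # inf/nan off the ws axis handled below
    mask = (slope >= slope_min) & (slope <= slope_max)
    mask |= (wu == 0) & (ws == 0)            # always keep the DC component
    return mask

def fan_denoise(epi, slope_min=-1.0, slope_max=1.0):
    """Zero out frequency content outside the fan and transform back."""
    mask = fan_mask(*epi.shape, slope_min, slope_max)
    return np.real(np.fft.ifft2(np.fft.fft2(epi) * mask))
```

Because noise is spread across the whole frequency plane while Lambertian scene content is confined to the fan, masking the stop-band rejects noise without selecting a single focal depth.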
TL;DR: A rigorous testing framework is developed that examines the performance of both systems as they scale system resources and shows that a single instance of Suricata is able to deliver substantially higher performance than a corresponding single instance of Snort.
Abstract: Given competing claims, an objective head-to-head comparison of the performance of both the Snort and Suricata Intrusion Detection Systems is needed. In this paper, we present a comprehensive quantitative comparison of the two systems. We have developed a rigorous testing framework that examines the performance of both systems as we scale system resources. Our results show that a single instance of Suricata is able to deliver substantially higher performance than a corresponding single instance of Snort. This paper describes in detail the testing framework’s capabilities, the tests performed, and the results found.
TL;DR: In this paper, a method to protect entanglement from amplitude damping decoherence via weak measurement and quantum measurement reversal was proposed, and it was shown that even entanglement sudden death can be circumvented.
Abstract: Decoherence, which causes degradation of entanglement and in some cases entanglement sudden death, is a critical issue faced in quantum information. Protecting entanglement from decoherence, therefore, is essential in practical realization of quantum computing and quantum communication protocols. In this paper, we demonstrate a novel method to protect entanglement from amplitude damping decoherence via weak measurement and quantum measurement reversal. It is shown that even entanglement sudden death can be circumvented.
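The mechanism can be illustrated numerically for a single qubit, since the entangled-pair experiment applies the same operations to each qubit. This sketch uses the textbook weak-measurement and reversal operators with the standard matching reversal strength p_r = 1 - (1 - p)(1 - gamma), post-selecting on null outcomes; the parameter values and names are illustrative assumptions, not the authors' exact optical setup.

```python
import numpy as np

def kraus_apply(rho, ops):
    """Apply a set of Kraus operators to a density matrix."""
    return sum(E @ rho @ E.conj().T for E in ops)

def protocol_fidelities(gamma, p):
    """Return (unprotected, protected) fidelities of |+> under
    amplitude damping of strength gamma, with a weak measurement of
    strength p before damping and the matching reversal after it,
    post-selected on the null outcomes of both measurements."""
    psi = np.array([1.0, 1.0]) / np.sqrt(2)      # |+> test state
    rho0 = np.outer(psi, psi)

    damp = [np.array([[1, 0], [0, np.sqrt(1 - gamma)]]),   # no decay
            np.array([[0, np.sqrt(gamma)], [0, 0]])]       # decay to |0>
    weak = np.diag([1.0, np.sqrt(1 - p)])        # null-result weak measurement
    p_r = 1 - (1 - p) * (1 - gamma)              # matching reversal strength
    rev = np.diag([np.sqrt(1 - p_r), 1.0])       # null-result reversal

    rho_damp = kraus_apply(rho0, damp)           # damping alone (unprotected)

    rho = weak @ rho0 @ weak                     # real diagonal, so M = M^dagger
    rho = kraus_apply(rho, damp)
    rho = rev @ rho @ rev
    rho_prot = rho / np.trace(rho)               # renormalize after post-selection

    fid = lambda r: float(np.real(psi @ r @ psi))
    return fid(rho_damp), fid(rho_prot)
```

For gamma = 0.6 and p = 0.8 the protected fidelity exceeds the unprotected one by a wide margin, at the cost of a reduced success probability (the trace before renormalization), which is the essential trade-off of the weak-measurement protection scheme.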