
Showing papers in "Proceedings of SPIE in 2017"


Proceedings ArticleDOI
TL;DR: To assess the Microsoft HoloLens' potential for delivering AR assembly instructions, the cross-platform Unity 3D game engine was used to build a proof-of-concept application; the results showed that while the HoloLens is a promising system, there are still areas that require improvement, such as tracking accuracy, before the device is ready for deployment in a factory assembly setting.
Abstract: Industry and academia have repeatedly demonstrated the transformative potential of Augmented Reality (AR) guided assembly instructions. In the past, however, computational and hardware limitations often dictated that these systems were deployed on tablets or other cumbersome devices. Often, tablets impede worker progress by diverting a user's hands and attention, forcing them to alternate between the instructions and the assembly process. Head Mounted Displays (HMDs) overcome those diversions by allowing users to view the instructions in a hands-free manner while simultaneously performing an assembly operation. Thanks to rapid technological advances, wireless commodity AR HMDs are becoming commercially available. Specifically, the pioneering Microsoft HoloLens provides an opportunity to explore a hands-free HMD's ability to deliver AR assembly instructions and what a user interface looks like for such an application. Such an exploration is necessary because it is not certain how previous research on user interfaces will transfer to the HoloLens or other new commodity HMDs. In addition, while new HMD technology is promising, its ability to deliver a robust AR assembly experience is still unknown. To assess the HoloLens' potential for delivering AR assembly instructions, the cross-platform Unity 3D game engine was used to build a proof-of-concept application. Features focused upon when building the prototype were: user interfaces, dynamic 3D assembly instructions, and spatially registered content placement. The research showed that while the HoloLens is a promising system, there are still areas that require improvement, such as tracking accuracy, before the device is ready for deployment in a factory assembly setting.

154 citations


Proceedings ArticleDOI
TL;DR: This research demonstrates that a more general method (i.e. deep learning) can outperform specialized methods that require image dilation and ring-forming subregions on tumors.
Abstract: Recent research has shown that deep learning methods have performed well on supervised machine learning, image classification tasks. The purpose of this study is to apply deep learning methods to classify brain images with different tumor types: meningioma, glioma, and pituitary. A dataset was publicly released containing 3,064 T1-weighted contrast enhanced MRI (CE-MRI) brain images from 233 patients with either meningioma, glioma, or pituitary tumors split across axial, coronal, or sagittal planes. This research focuses on the 989 axial images from 191 patients in order to avoid confusing the neural networks with three different planes containing the same diagnosis. Two types of neural networks were used in classification: fully connected and convolutional neural networks. Within these two categories, further tests were computed via augmentation of the original 512×512 axial images. Training neural networks on the axial data proved accurate, with an average five-fold cross-validation accuracy of 91.43% for the best-trained network. This result demonstrates that a more general method (i.e. deep learning) can outperform specialized methods that require image dilation and ring-forming subregions on tumors.
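As a hedged illustration of the kind of classifier the abstract describes, here is a minimal PyTorch sketch of a three-class CNN over 512×512 axial slices; the layer sizes and structure are assumptions for illustration, not the paper's architecture.

```python
# Minimal sketch (assumed architecture, not the paper's) of a 3-class
# classifier for T1-CE axial slices: meningioma / glioma / pituitary.
import torch
import torch.nn as nn

class TumorCNN(nn.Module):
    def __init__(self, n_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):                      # x: (batch, 1, 512, 512)
        return self.classifier(self.features(x).flatten(1))

model = TumorCNN()
logits = model(torch.randn(2, 1, 512, 512))    # two dummy axial slices
print(logits.shape)                            # torch.Size([2, 3])
```

The reported five-fold cross-validation would wrap training of such a model in a splitter like sklearn.model_selection.KFold.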

151 citations


Proceedings ArticleDOI
TL;DR: This work converts three established networks into a fully convolutional architecture and fine-tunes their learned representations for the polyp segmentation task, achieving high segmentation accuracy and a detection precision and recall of 73.61% and 86.31%, respectively.
Abstract: Colorectal cancer (CRC) is one of the most common and deadliest forms of cancer, accounting for nearly 10% of all forms of cancer in the world. Even though colonoscopy is considered the most effective method for screening and diagnosis, the success of the procedure is highly dependent on the operator skills and level of hand-eye coordination. In this work, we propose to adapt fully convolutional networks (FCNs) to identify and segment polyps in colonoscopy images. We converted three established networks into a fully convolutional architecture and fine-tuned their learned representations to the polyp segmentation task. We validate our framework on the 2015 MICCAI polyp detection challenge dataset, surpassing the state-of-the-art in automated polyp detection. Our method obtained high segmentation accuracy and a detection precision and recall of 73.61% and 86.31%, respectively.
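A minimal sketch of the core conversion step described above: replacing a classifier's dense head with a 1×1 convolution (plus upsampling) so the network emits a per-pixel polyp score map for any input size. The backbone and sizes here are placeholders, not the paper's three networks.

```python
# Sketch: turning a fixed-input classifier into a fully convolutional
# segmenter by replacing the dense head with a 1x1 convolution.
import torch
import torch.nn as nn

backbone = nn.Sequential(                      # stand-in for a pretrained encoder
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)
head = nn.Conv2d(64, 2, kernel_size=1)         # 2 classes: polyp / background
upsample = nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False)

x = torch.randn(1, 3, 384, 288)                # any input size now works
seg = upsample(head(backbone(x)))              # per-pixel class scores
print(seg.shape)                               # torch.Size([1, 2, 384, 288])
```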

122 citations


Proceedings ArticleDOI
TL;DR: This work trains a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset, and converts it to a fully convolutional network (FCN) which leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case.
Abstract: Deep convolutional neural networks (CNNs) form the backbone of many state-of-the-art computer vision systems for classification and segmentation of 2D images. The same principles and architectures can be extended to three dimensions to obtain 3D CNNs that are suitable for volumetric data such as CT scans. In this work, we train a 3D CNN for automatic detection of pulmonary nodules in chest CT images using volumes of interest extracted from the LIDC dataset. We then convert the 3D CNN which has a fixed field of view to a 3D fully convolutional network (FCN) which can generate the score map for the entire volume efficiently in a single pass. Compared to the sliding window approach for applying a CNN across the entire input volume, the FCN leads to a nearly 800-fold speed-up, and thereby fast generation of output scores for a single case. This screening FCN is used to generate difficult negative examples that are used to train a new discriminant CNN. The overall system consists of the screening FCN for fast generation of candidate regions of interest, followed by the discrimination CNN.
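The CNN-to-FCN conversion that yields the reported speed-up can be sketched as a weight copy from the dense classifier head into an equivalent 3D convolution; the feature and kernel sizes below are assumptions for illustration.

```python
# Sketch: converting a fixed-field-of-view 3D CNN head into a 3D FCN head by
# copying the dense-layer weights into a convolution spanning the classifier's
# original 4x4x4 feature extent, so one pass scores an entire volume.
import torch
import torch.nn as nn

fc = nn.Linear(64 * 4 * 4 * 4, 2)          # head trained on 4x4x4 feature maps
conv = nn.Conv3d(64, 2, kernel_size=4)     # equivalent convolutional head
with torch.no_grad():
    conv.weight.copy_(fc.weight.view(2, 64, 4, 4, 4))
    conv.bias.copy_(fc.bias)

features = torch.randn(1, 64, 32, 32, 32)  # backbone features of a larger volume
score_map = conv(features)                 # dense nodule scores in a single pass
print(score_map.shape)                     # torch.Size([1, 2, 29, 29, 29])
```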

111 citations


Proceedings ArticleDOI
TL;DR: In this paper, the authors investigated the microscopic origin of the homogeneous linewidth and coherence lifetime of excitonic resonances in monolayer molybdenum disulfide, taking exciton-phonon scattering and radiative recombination into account.
Abstract: Monolayers of transition metal dichalcogenides are direct gap semiconductors which have attracted much attention in the recent past. Due to a strong Coulomb interaction, they possess strongly bound electron-hole pairs, with binding energies of hundreds of meV, an order of magnitude larger than in conventional materials. Here, we investigate the microscopic origin of the homogeneous linewidth and coherence lifetime of excitonic resonances in monolayer molybdenum disulfide, taking exciton-phonon scattering and radiative recombination into account. We find a superlinearly increasing homogeneous linewidth, from 2 meV at 5 K to 14 meV at room temperature, corresponding to coherence lifetimes of 160 fs and 25 fs, respectively.

107 citations


Proceedings ArticleDOI
TL;DR: This paper introduces a semi-supervised technique for detecting brain lesions in MRI using Generative Adversarial Networks (GANs), which comprise a Generator network and a Discriminator network trained simultaneously, each with the objective of bettering the other.
Abstract: Manual segmentation of brain lesions from Magnetic Resonance Images (MRI) is cumbersome and introduces errors due to inter-rater variability. This paper introduces a semi-supervised technique for detection of brain lesions from MRI using Generative Adversarial Networks (GANs). A GAN comprises a Generator network and a Discriminator network that are trained simultaneously, each with the objective of bettering the other. The networks were trained using non-lesion patches (n=13,000) from 4 different MR sequences; the patches were extracted from the BraTS dataset, from regions excluding the tumor. The Generator network generates data by modeling the underlying probability distribution of the training data, P_Data. The Discriminator learns the posterior probability P(Label | Data) by classifying training data and generated data as "Real" or "Fake", respectively. The Generator, upon learning the joint distribution, produces images/patches such that the performance of the Discriminator on them is random, i.e. P(Label | Data = generated data) = 0.5. During testing, the Discriminator assigns posterior probability values close to 0.5 for patches from non-lesion regions, while patches centered on lesions arise from a different distribution (P_Lesion) and hence are assigned lower posterior probability values by the Discriminator. On the test set (n=14), the proposed technique achieves a whole-tumor dice score of 0.69, sensitivity of 91% and specificity of 59%. Additionally, the Generator network was capable of generating non-lesion patches from various MR sequences.
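A minimal, hedged sketch of the training-and-scoring idea (a toy GAN on flattened patches, not the paper's networks): the discriminator is trained only on non-lesion data, so unusually low posterior scores at test time flag candidate lesions.

```python
# Sketch of the semi-supervised idea: train a GAN on non-lesion patches only,
# then flag test patches whose discriminator output falls well below the
# ~0.5 score that in-distribution (non-lesion) patches receive.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 256), nn.ReLU(), nn.Linear(256, 32 * 32), nn.Tanh())
D = nn.Sequential(nn.Linear(32 * 32, 256), nn.LeakyReLU(0.2), nn.Linear(256, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

non_lesion = torch.randn(512, 32 * 32)          # stand-in for real MR patches
for step in range(200):
    real = non_lesion[torch.randint(0, 512, (64,))]
    fake = G(torch.randn(64, 64))
    # discriminator: real patches -> 1, generated patches -> 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # generator: fool the discriminator
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

score = D(torch.randn(1, 32 * 32))              # test-patch posterior
print('lesion suspected' if score.item() < 0.3 else 'looks like background')
```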

79 citations


Proceedings ArticleDOI
TL;DR: An evaluative protocol for the HoloLens™ is developed using an optical measurement device to digitize the perceived pose of the rendered hologram, showing promise of this device's potential for intraoperative clinical use.
Abstract: Augmented reality (AR) has an increasing presence in the world of image-guided interventions, which is amplified by the availability of consumer-grade head-mounted display (HMD) technology. The Microsoft® HoloLens™ optical passthrough device is at the forefront of consumer technology, as it is the first un-tethered head-mounted computer (HMC). It shows promise of effectiveness in guiding clinical interventions; however, its accuracy and stability must still be evaluated for the clinical environment. We have developed an evaluative protocol for the HoloLens™ using an optical measurement device to digitize the perceived pose of the rendered hologram. This evaluates the ability of the HoloLens™ to maintain the hologram in its intended pose. The stability is measured when actions are performed that may cause a shift in the hologram's pose due to errors in the device's simultaneous localization and mapping. An emphasis is placed on actions that are more likely to be performed in a clinical setting. This will be used to determine the most applicable use cases for this technology in the future and how to minimize errors when in use. Our results show promise of this device's potential for intraoperative clinical use. Further analysis must be performed to evaluate other potential sources of hologram disruption.

77 citations


Proceedings ArticleDOI
TL;DR: A convolutional neural network (CNN) is integrated into the computed tomography image reconstruction process to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule.
Abstract: The rapidly-rising field of machine learning, including deep learning, has inspired applications across many disciplines. In medical imaging, deep learning has been primarily used for image processing and analysis. In this paper, we integrate a convolutional neural network (CNN) into the computed tomography (CT) image reconstruction process. Our first task is to monitor the quality of CT images during iterative reconstruction and decide when to stop the process according to an intelligent numerical observer instead of using a traditional stopping rule, such as a fixed error threshold or a maximum number of iterations. After training on ground truth images, the CNN was successful in guiding an iterative reconstruction process to yield high-quality images. Our second task is to improve a sinogram to correct for artifacts caused by metal objects. A large number of interpolation and normalization-based schemes were introduced for metal artifact reduction (MAR) over the past four decades. The NMAR algorithm is considered a state-of-the-art method, although residual errors often remain in the reconstructed images, especially in cases of multiple metal objects. Here we merge NMAR with deep learning in the projection domain to achieve additional correction in critical image regions. Our results indicate that deep learning can be a viable tool to address CT reconstruction challenges.
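The stopping-rule idea can be sketched as follows; the observer network here is an untrained stand-in and the update step is a placeholder, since the paper's trained observer and reconstruction algorithm are not specified in the abstract.

```python
# Sketch of a CNN "numerical observer" used as a stopping rule: after each
# iteration the network scores the current image, and reconstruction stops
# once the predicted quality no longer improves. The observer here is an
# untrained stand-in; in the paper it is trained against ground-truth images.
import torch
import torch.nn as nn

observer = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

def iterate(img):                     # placeholder for one iterative-recon update
    return img + 0.1 * torch.randn_like(img)

img, best = torch.zeros(1, 1, 64, 64), float('-inf')
for k in range(50):
    img = iterate(img)
    q = observer(img).item()          # learned image-quality score
    if q <= best:                     # quality stopped improving -> stop
        print(f'stopping at iteration {k}')
        break
    best = q
```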

72 citations


Proceedings ArticleDOI
TL;DR: In this article, a novel deep learning architecture (XmasNet) based on convolutional neural networks was developed for the classification of prostate cancer lesions, using the 3D multiparametric MRI data provided by the PROSTATEx challenge.
Abstract: A novel deep learning architecture (XmasNet) based on convolutional neural networks was developed for the classification of prostate cancer lesions, using the 3D multiparametric MRI data provided by the PROSTATEx challenge. End-to-end training was performed for XmasNet, with data augmentation done through 3D rotation and slicing in order to incorporate the 3D information of the lesion. XmasNet outperformed traditional machine learning models based on engineered features, for both training and test data. For the test data, XmasNet outperformed 69 methods from 33 participating groups and achieved the second highest AUC (0.84) in the PROSTATEx challenge. This study shows the great potential of deep learning for cancer imaging.
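A small sketch of the described 3D augmentation (rotation about the axial axis followed by re-slicing), using scipy; the volume shape and angle are arbitrary examples, not the paper's settings.

```python
# Sketch of the kind of 3D augmentation described: rotate the lesion volume
# about the axial axis, then re-slice it into 2D planes for training.
import numpy as np
from scipy.ndimage import rotate

volume = np.random.rand(32, 64, 64)           # stand-in mpMRI lesion volume (z, y, x)

def augment(vol, angle):
    rot = rotate(vol, angle, axes=(1, 2), reshape=False, order=1)
    return [rot[z] for z in range(rot.shape[0])]   # axial slices of the rotated lesion

slices = augment(volume, angle=15.0)
print(len(slices), slices[0].shape)           # 32 (64, 64)
```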

70 citations


Proceedings ArticleDOI
TL;DR: A convolutional neural network (CNN) model is developed to support computer-aided diagnosis (CADx) of PCa based on the appearance of prostate tissue in mpMRI, conducted as part of the SPIE-AAPM-NCI PROSTATEx challenge.
Abstract: Prostate cancer (PCa) remains a leading cause of cancer mortality among American men. Multi-parametric magnetic resonance imaging (mpMRI) is widely used to assist with detection of PCa and characterization of its aggressiveness. Computer-aided diagnosis (CADx) of PCa in MRI can be used as a clinical decision support system to aid radiologists in the interpretation and reporting of mpMRI. We report on the development of a convolutional neural network (CNN) model to support CADx of PCa based on the appearance of prostate tissue in mpMRI, conducted as part of the SPIE-AAPM-NCI PROSTATEx challenge. The performance of different combinations of mpMRI inputs to the CNN was assessed, and the best result was achieved using the DWI and DCE-MRI modalities together with the zonal information of the finding. On the test set, the model achieved an area under the receiver operating characteristic curve of 0.80.
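One plausible way to wire up the winning input combination, sketched in PyTorch; the fusion of zonal information and all layer sizes are assumptions for illustration, not the paper's model.

```python
# Sketch of the input combination that worked best: DWI and DCE-MRI patches
# stacked as channels, with the finding's zonal information appended as an
# extra feature before the final classification layer (assumed design).
import torch
import torch.nn as nn

class MpMriNet(nn.Module):
    def __init__(self, n_zones=3):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(64 + n_zones, 1)   # image features + one-hot zone

    def forward(self, patches, zone):
        return torch.sigmoid(self.fc(torch.cat([self.conv(patches), zone], dim=1)))

net = MpMriNet()
patches = torch.randn(4, 2, 64, 64)              # channel 0: DWI, channel 1: DCE
zone = torch.eye(3)[torch.tensor([0, 1, 2, 0])]  # hypothetical zone encoding
print(net(patches, zone).shape)                  # torch.Size([4, 1])
```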

66 citations


Proceedings ArticleDOI
TL;DR: An interpolation method using a convolutional neural network (CNN), one of the most widely used deep-learning methods, is developed to estimate missing projection data, and its performance is compared with other interpolation techniques.
Abstract: Sparse-view sampling and its associated iterative image reconstruction in computed tomography have been actively investigated. The sparse-view CT technique is a viable option for low-dose CT, particularly in cone-beam CT (CBCT) applications, when paired with advanced iterative image reconstruction, albeit with varying degrees of image artifacts. One artifact that may occur in sparse-view CT is the streak artifact in the reconstructed images. An alternative approach for sparse-view CT imaging uses interpolation methods to fill in the missing view data and reconstructs the image with an analytic reconstruction algorithm. In this study, we developed an interpolation method using a convolutional neural network (CNN), one of the most widely used deep-learning methods, to estimate missing projection data, and compared its performance with other interpolation techniques.
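A hedged sketch of the setup: a sparse-view sinogram with zero-filled missing views, a classical linear-interpolation baseline along the view axis, and a small stand-in CNN that would be trained to regress the full sinogram. Sizes and the network are assumptions, not the paper's configuration.

```python
# Sketch: a small CNN that maps a sparse-view sinogram (every 4th view kept,
# missing rows zero-filled) to an estimate of the full sinogram; linear
# interpolation along the view axis is the classical baseline.
import numpy as np
import torch
import torch.nn as nn

full = np.random.rand(360, 256).astype(np.float32)   # views x detector bins
sparse = np.zeros_like(full)
sparse[::4] = full[::4]                              # keep every 4th view

# baseline: linear interpolation across the missing views
views = np.arange(360)
baseline = np.stack([np.interp(views, views[::4], full[::4, d]) for d in range(256)], axis=1)

net = nn.Sequential(                                  # stand-in interpolation CNN
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
x = torch.from_numpy(sparse)[None, None]             # (1, 1, 360, 256)
estimate = net(x)                                    # would be trained with MSE vs. full
print(estimate.shape, baseline.shape)
```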

Proceedings ArticleDOI
TL;DR: In this paper, a fully automatic approach that detects prostate histopathology slides with high-grade Gleason score is proposed, achieving an accuracy of 78% in a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high and low Gleason grade.
Abstract: The Gleason grading system was developed for assessing prostate histopathology slides. It is correlated to the outcome and incidence of relapse in prostate cancer. Although this grading is part of a standard protocol performed by pathologists, visual inspection of whole slide images (WSIs) has an inherent subjectivity when evaluated by different pathologists. Computer-aided pathology has been proposed to generate an objective and reproducible assessment that can help pathologists in their evaluation of new tissue samples. Deep convolutional neural networks are a promising approach for the automatic classification of histopathology images and can hierarchically learn subtle visual features from the data. However, a large number of manual annotations from pathologists are commonly required to obtain sufficient statistical generalization when training new models that can evaluate the daily generated large amounts of pathology data. A fully automatic approach that detects prostatectomy WSIs with high-grade Gleason score is proposed. We evaluate the performance of various deep learning architectures, training them with patches extracted from automatically generated regions-of-interest rather than from manually segmented ones. Relevant parameters for training the deep learning model, such as the size and number of patches as well as the inclusion or not of data augmentation, are compared between the tested deep learning architectures. 235 prostate tissue WSIs with their pathology reports from the publicly available TCGA data set were used. An accuracy of 78% was obtained on a balanced set of 46 unseen test images with different Gleason grades in a 2-class decision: high vs. low Gleason grade. Grades 7-8, which represent the boundary decision of the proposed task, were particularly well classified. The method is scalable to larger data sets, with straightforward re-training of the model to include data from multiple sources, scanners and acquisition techniques. Automatically generated heatmaps for the WSIs could be useful for improving the selection of patches when training networks for big data sets and to guide the visual inspection of these images.
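A minimal sketch of patch extraction from an automatically generated region of interest; crude intensity thresholding stands in for whatever ROI generator the authors used, and sizes and counts are placeholders.

```python
# Sketch: sample training patches inside an automatically generated ROI;
# thresholding stands in for the paper's unspecified ROI generation step.
import numpy as np

wsi = np.random.rand(1024, 1024)            # stand-in for a downsampled WSI
roi = wsi < 0.8                             # crude "tissue" mask, no manual labels

def sample_patches(image, mask, size=128, n=16, rng=np.random.default_rng(0)):
    ys, xs = np.nonzero(mask[:-size, :-size])          # candidate top-left corners
    idx = rng.choice(len(ys), size=n, replace=False)
    return np.stack([image[ys[i]:ys[i] + size, xs[i]:xs[i] + size] for i in idx])

patches = sample_patches(wsi, roi)
print(patches.shape)                        # (16, 128, 128)
```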

Proceedings ArticleDOI
TL;DR: A convolutional neural network was trained to identify the location of individual point targets from pre-beamformed data simulated with k-Wave to contain various medium sound speeds, target locations, and absorber sizes, demonstrating strong promise to identify point targets without requiring traditional geometry-based beamforming.
Abstract: Interventional applications of photoacoustic imaging often require visualization of point-like targets, including the circular cross-sectional tips of needles and catheters or the circular cross-sectional views of small cylindrical implants such as brachytherapy seeds. When these point-like targets are imaged in the presence of highly echogenic structures, the resulting photoacoustic wave creates a reflection artifact that may appear as a true signal. We propose to use machine learning principles to identify these types of noise artifacts for removal. A convolutional neural network was trained to identify the location of individual point targets from pre-beamformed data simulated with k-Wave to contain various medium sound speeds (1440-1640 m/s), target locations (5-25 mm), and absorber sizes (1-5 mm). Based on 2,412 randomly selected test images, the mean axial and lateral point location errors were 0.28 mm and 0.37 mm, respectively, which can be regarded as the average imaging system resolution for our trained network. This trained network successfully identified the location of two point targets in a single image with mean axial and lateral errors of 2.6 mm and 2.1 mm, respectively. A true signal and a corresponding reflection artifact were then simulated. The same trained network identified the location of the artifact with mean axial and lateral errors of 2.1 mm and 3.0 mm, respectively. Identified artifacts may be rejected based on wavefront shape differences. These results demonstrate strong promise to identify point targets without requiring traditional geometry-based beamforming, leading to the eventual elimination of reflection artifacts from interventional images.

Proceedings ArticleDOI
Hansang Lee, Minseok Park, Junmo Kim
TL;DR: This work proposes an end-to-end deep learning system for cephalometric landmark detection in dental X-ray images, using convolutional neural networks (CNN) and develops a detection system using CNN-based coordinate-wise regression systems.
Abstract: In dental X-ray images, an accurate detection of cephalometric landmarks plays an important role in clinical diagnosis, treatment and surgical decisions for dental problems. In this work, we propose an end-to-end deep learning system for cephalometric landmark detection in dental X-ray images, using convolutional neural networks (CNN). For detecting 19 cephalometric landmarks in dental X-ray images, we develop a detection system using CNN-based coordinate-wise regression systems. By viewing the x- and y-coordinates of all landmarks as 38 independent variables, multiple CNN-based regression systems are constructed to predict the coordinate variables from input X-ray images. First, each coordinate variable is normalized by the length of either the height or width of the image. For each normalized coordinate variable, a CNN-based regression system is trained on the training images and the corresponding coordinate variable to be regressed. We train 38 regression systems with the same CNN structure, one per coordinate variable. Finally, we compute the 38 coordinate variables with these trained systems on unseen images and extract the 19 landmarks by pairing the regressed coordinates. In experiments, the public database from the Grand Challenges in Dental X-ray Image Analysis in ISBI 2015 was used, and the proposed system showed promising performance by successfully locating the cephalometric landmarks within considerable margins from the ground truths.
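The coordinate-wise regression scheme can be sketched directly: one small CNN per normalized coordinate, 38 in total, with predictions rescaled back to pixel coordinates. The architecture below is an assumption for illustration, not the paper's network.

```python
# Sketch of coordinate-wise regression: one small CNN regressor per
# normalized coordinate (38 in total for 19 landmarks), trained with MSE.
import torch
import torch.nn as nn

def make_regressor():
    return nn.Sequential(
        nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
        nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, 1), nn.Sigmoid(),      # outputs a coordinate in [0, 1]
    )

regressors = [make_regressor() for _ in range(38)]   # 19 x- plus 19 y-coordinates
xray = torch.randn(1, 1, 256, 256)
coords = torch.cat([r(xray) for r in regressors], dim=1)       # (1, 38), normalized
landmarks = coords.view(19, 2) * torch.tensor([256.0, 256.0])  # back to pixels
print(landmarks.shape)                                # torch.Size([19, 2])
```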

Proceedings ArticleDOI
TL;DR: A deep convolutional neural network (DCNN) architecture is investigated to find an improved solution for PCa detection on mpMRI, with results comparable to an existing prostate CAD, showing potential for further development.
Abstract: Prostate cancer (PCa) is the second most common cause of cancer related deaths in men. Multiparametric MRI (mpMRI) is the most accurate imaging method for PCa detection; however, it requires the expertise of experienced radiologists, leading to inconsistency across readers of varying experience. To increase inter-reader agreement and sensitivity, we developed a computer-aided detection (CAD) system that can automatically detect lesions on mpMRI that readers can use as a reference. We investigated a deep convolutional neural network (DCNN) architecture to find an improved solution for PCa detection on mpMRI. We adopted a network architecture from a state-of-the-art edge detector that takes an image as input and produces an image probability map. Two-fold cross validation along with receiver operating characteristic (ROC) analysis and free-response ROC (FROC) were used to determine our deep-learning based prostate CAD's (CAD-DL) performance. The efficacy was compared to an existing prostate CAD system based on hand-crafted features, which was evaluated on the same test set. CAD-DL had an 86% detection rate at a 20% false-positive rate, while the top-down learning CAD had an 80% detection rate at the same false-positive rate, which translated to 94% and 85% detection rates at 10 false positives per patient on the FROC. A CNN-based CAD is able to detect cancerous lesions on mpMRI of the prostate with results comparable to an existing prostate CAD, showing potential for further development.

Proceedings ArticleDOI
TL;DR: The NXE:3400B features lower aberration levels and a revolutionary new illumination system, offering an improved pupil-fill ratio and a larger sigma range; overlay and focus are further improved by the implementation of a new wafer clamp and improved scanner controls.
Abstract: With the introduction of its fifth-generation EUV scanner, the NXE:3400B, ASML has brought EUV to High-Volume Manufacturing for sub-10nm node lithography. This paper presents lithographic performance results obtained with the NXE:3400B, characterized by an NA of 0.33, a Pupil Fill Ratio (PFR) of 0.2 and a throughput capability of 125 wafers per hour (wph). Advances in source power have enabled a further increase in tool productivity, requiring an associated increase in stage scan speeds. To maximize the number of yielding die per day, stringent Overlay, Focus, and Critical Dimension (CD) control is required. Tight CD control at improved resolution is obtained through a number of innovations: the NXE:3400B features lower aberration levels and a revolutionary new illumination system, offering an improved pupil-fill ratio and a larger sigma range. Overlay and Focus are further improved by the implementation of a new wafer clamp and improved scanner controls. The NXE:3400B also offers full support for reticle pellicles.

Proceedings ArticleDOI
TL;DR: An automated method for detecting spine compression fractures in Computed Tomography (CT) scans is composed of three processes: the spinal column is segmented and sagittal patches are extracted, the patches are binary-classified by a Convolutional Neural Network, and a Recurrent Neural Network is utilized to predict whether a vertebral fracture is present in the series of patches.
Abstract: The presence of a vertebral compression fracture is highly indicative of osteoporosis and represents the single most robust predictor for development of a second osteoporotic fracture in the spine or elsewhere. Less than one third of vertebral compression fractures are diagnosed clinically. We present an automated method for detecting spine compression fractures in Computed Tomography (CT) scans. The algorithm is composed of three processes. First, the spinal column is segmented and sagittal patches are extracted. The patches are then binary classified using a Convolutional Neural Network (CNN). Finally a Recurrent Neural Network (RNN) is utilized to predict whether a vertebral fracture is present in the series of patches.
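A compact sketch of the three-stage pipeline; spine segmentation is assumed already done, and the CNN and LSTM sizes are placeholders rather than the paper's architecture.

```python
# Sketch of the three-stage pipeline: per-patch CNN features along the
# segmented spinal column, then an RNN over the patch sequence to decide
# whether a compression fracture is present anywhere in the scan.
import torch
import torch.nn as nn

cnn = nn.Sequential(                       # stage 2: patch classifier backbone
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
rnn = nn.LSTM(input_size=32, hidden_size=64, batch_first=True)
head = nn.Linear(64, 1)                    # stage 3: scan-level fracture score

patches = torch.randn(1, 20, 1, 64, 64)    # 20 sagittal patches along the spine
feats = cnn(patches.view(20, 1, 64, 64)).view(1, 20, 32)
_, (h, _) = rnn(feats)
print(torch.sigmoid(head(h[-1])))          # probability of a fracture in the scan
```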

Proceedings ArticleDOI
TL;DR: Project Loon uses free-space optical communications (FSOC) for its inter-balloon crosslinks; FSOC, offering high data rates and long communication ranges, is well suited for communication between high-altitude platforms.
Abstract: Internet connectivity is limited and in some cases non-existent for a significant part of the world's population. Project Loon aims to address this with a network of high-altitude balloons traveling in the stratosphere, at an altitude of approximately 20 km. The balloons navigate by using the stratified wind layers at different altitudes, adjusting the balloon's altitude to catch winds in a desired direction. Data transfer is achieved by 1) uplinking a signal from an Internet-connected ground station to a balloon terminal, 2) crosslinking the signal through the balloon network to reach the geographic area of the users, and 3) downlinking the signal directly to the end-users' phones or other LTE-enabled devices. We describe Loon's progress on utilizing free-space optical communications (FSOC) for the inter-balloon crosslinks. FSOC, offering high data rates and long communication ranges, is well-suited for communication between high-altitude platforms. A stratospheric link is sufficiently high to be above weather events (clouds, fog, rain, etc.), and the impact of atmospheric turbulence is significantly weaker than at ground level. In addition, being in the stratosphere as opposed to space helps avoid the typical challenges faced by space-based systems, namely operation in a vacuum environment with significant radiation. Finally, the angular pointing disturbances introduced by a floating balloon-based platform are notably less than any propelled platform, which simplifies the disturbance rejection requirements on the FSOC system. We summarize results from Project Loon's early-phase experimental inter-balloon links at 20 km altitude, demonstrating full duplex 130 Mbps throughput at distances in excess of 100 km over the course of several-day flights. The terminals utilize a monostatic design, with dual wavelengths for communication and a dedicated wide-angle beacon for pointing, acquisition, and tracking. We summarize the constraints on the terminal design, and the key design trades that led to our initial system. We illustrate measured performance during flight tests: received signal power variations with range, pointing system performance, and data throughput.

Proceedings ArticleDOI
TL;DR: An efficient and robust algorithm is presented for detecting approaching drones with static VIS and SWIR cameras, using a background estimation and structurally adaptive change detection process.
Abstract: Recent progress in the development of unmanned aerial vehicles (UAVs) has led to more and more situations in which drones like quadrocopters or octocopters pose a potentially serious threat or could be used as a powerful tool for illegal activities. Therefore, counter-UAV systems are required in many applications to detect approaching drones as early as possible. In this paper, an efficient and robust algorithm is presented for UAV detection using static VIS and SWIR cameras. Whereas high-resolution VIS cameras enable the detection of UAVs in the daytime at greater distances, surveillance at night can be performed with a SWIR camera. First, a background estimation and structurally adaptive change detection process detects movements and other changes in the observed scene. Afterwards, the local density of changes is computed and used both for background density learning and to build up the foreground model; the two models are compared to obtain the final UAV alarm result. On the one hand, the density model is used to filter out noise effects; on the other hand, moving scene parts like leaves in the wind or cars driving on a street can easily be learned, so that such areas are masked out and false alarms there are suppressed. This scene learning is done automatically, simply by processing without UAVs present, in order to capture the normal situation. The given results document the performance of the presented approach in VIS and SWIR in different situations.
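A hedged numpy sketch of the detection chain: running-average background estimation, per-pixel change detection, and a local change-density map compared against a density model learned during UAV-free operation. Thresholds and window sizes are arbitrary examples, not the paper's parameters.

```python
# Sketch: background estimation, change detection, and a learned local
# change-density map that masks out perennially busy regions (foliage,
# roads) before raising a UAV alarm.
import numpy as np

H, W, k = 240, 320, 15
background = np.zeros((H, W))                 # running-average background model
density_bg = np.zeros((H, W))                 # change-density map learned UAV-free

def local_density(changes, k):
    """Count of changed pixels in a k x k window around each pixel."""
    pad = np.pad(changes, k // 2)
    S = np.zeros((pad.shape[0] + 1, pad.shape[1] + 1))
    S[1:, 1:] = pad.cumsum(0).cumsum(1)       # integral image
    return S[k:, k:] - S[:-k, k:] - S[k:, :-k] + S[:-k, :-k]

frame = np.random.rand(H, W) * 255            # stand-in VIS/SWIR frame
changes = (np.abs(frame - background) > 30).astype(float)
background = 0.95 * background + 0.05 * frame # adaptive background update
density = local_density(changes, k)
alarm = (density > 3) & (density_bg < 1)      # movement where the scene is normally quiet
print(alarm.sum())
```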

Proceedings ArticleDOI
TL;DR: In this article, a local CD uniformity (LCDU) model is introduced and validated with experimental contact hole (CH) data, and a dynamic gas lock (DGL) membrane is introduced between projection optics box (POB) and wafer stage.
Abstract: Extreme ultraviolet (EUV) lithography with 13.5 nm wavelength is the main option for sub-10nm patterning in the semiconductor industry. We report improvements in resist performance towards EUV high volume manufacturing. A local CD uniformity (LCDU) model is introduced and validated with experimental contact hole (CH) data. Resist performance is analyzed in terms of ultimate printing resolution (R), line width roughness (LWR), sensitivity (S), exposure latitude (EL) and depth of focus (DOF). Resist performance for dense lines at 13 nm half-pitch and beyond is shown for chemically amplified resist (CAR) and non-CAR (Inpria YA Series) on an NXE scanner. Resolution down to 10nm half pitch (hp) is shown for Inpria YA Series resist exposed with interference lithography at the Paul Scherrer Institute. Contact-hole contrast, and consequently LCDU, is improved on an NXE:3400 scanner by decreasing the pupil fill ratio. State-of-the-art imaging meets 5nm node requirements for CHs. A dynamic gas lock (DGL) membrane is introduced between the projection optics box (POB) and the wafer stage. The DGL membrane will suppress the negative impact of resist outgassing on the projection optics by 100%, enabling a wider range of resist materials to be used. The validated LCDU model indicates that the imaging requirements of the 3nm node can be met with single exposure using a high-NA EUV scanner. The current status, trends, and potential roadblocks for EUV resists are discussed. Our results mark the progress and the improvement points in EUV resist materials to support the EUV ecosystem.

Proceedings ArticleDOI
TL;DR: Simulation with 3D printed phantoms shows potential to inform clinical interventional procedures in addition to CTA diagnostic imaging and to avoid periprocedural complications and improve training.
Abstract: Following new trends in precision medicine, Juxtarenal Abdominal Aortic Aneurysm (JAAA) treatment has been enabled by using patient-specific fenestrated endovascular grafts. The X-ray guided procedure requires precise orientation of multiple modular endografts within the arteries, confirmed via radiopaque markers. Patient-specific 3D printed phantoms could familiarize physicians with complex procedures and new devices in a risk-free simulation environment, to avoid periprocedural complications and improve training. Using the Vascular Modeling Toolkit (VMTK), 3D data from CTA imaging of a patient scheduled for Fenestrated EndoVascular Aortic Repair (FEVAR) was segmented to isolate the aortic lumen, thrombus, and calcifications. A stereolithographic mesh (STL) was generated and then modified in Autodesk MeshMixer for fabrication via a Stratasys Eden 260 printer in a flexible photopolymer to simulate arterial compliance. Fluoroscopy-guided simulation of the patient-specific FEVAR procedure was performed by interventionists using all demonstration endografts and accessory devices. Analysis compared treatment strategy between the planned procedure, the simulation procedure, and the patient procedure using a derived scoring scheme. Results: With training on the patient-specific 3D printed AAA phantom, the clinical team optimized their procedural strategy. Anatomical landmarks and all devices were visible under x-ray during the simulation, mimicking the clinical environment. The actual patient procedure went without complications. Conclusions: With advances in 3D printing, fabrication of patient-specific AAA phantoms is possible. Simulation with 3D printed phantoms shows potential to inform clinical interventional procedures in addition to CTA diagnostic imaging.

Proceedings ArticleDOI
TL;DR: In this article, the authors reviewed current applications of laser-induced periodic surface structures (LIPSS), including the colorization of technical surfaces, the control of surface wetting, the tailoring of surface colonization by bacterial biofilms, and the improvement of the tribological performance of nanostructured metal surfaces.
Abstract: Laser-induced periodic surface structures (LIPSS, ripples) are a universal phenomenon that can be observed on almost any material after the irradiation by linearly polarized laser beams, particularly when using ultrashort laser pulses with durations in the picosecond to femtosecond range. During the past few years significantly increasing research activities have been reported in the field of LIPSS, since their generation in a single-step process provides a simple way of nanostructuring and surface functionalization towards the control of optical, mechanical or chemical properties. In this contribution current applications of LIPSS are reviewed, including the colorization of technical surfaces, the control of surface wetting, the tailoring of surface colonization by bacterial biofilms, and the improvement of the tribological performance of nanostructured metal surfaces.

Proceedings ArticleDOI
TL;DR: In this paper, the authors describe how single-shot proof-of-principle experiments have demonstrated new high-intensity laser-matter interactions and the subsequent secondary particle and photon sources they drive.
Abstract: Large laser systems that deliver optical pulses with peak powers exceeding one Petawatt (PW) have been constructed at dozens of research facilities worldwide and have fostered research in High-Energy-Density (HED) Science, High-Field and nonlinear physics [1]. Furthermore, the high intensities exceeding 10¹⁸ W/cm² allow for efficiently driving secondary sources that inherit some of the properties of the laser pulse, e.g. pulse duration, spatial and/or divergence characteristics. In the intervening decades since that first PW laser, single-shot proof-of-principle experiments have been successful in demonstrating new high-intensity laser-matter interactions and subsequent secondary particle and photon sources. These secondary sources include generation and acceleration of charged-particle (electron, proton, ion) and neutron beams, and x-ray and gamma-ray sources, generation of radioisotopes for positron emission tomography (PET), targeted cancer therapy, medical imaging, and the transmutation of radioactive waste [2, 3]. Each of these promising applications requires lasers with peak power of hundreds of terawatt (TW) to petawatt (PW) and with average power of tens to hundreds of kW to achieve the required secondary source flux.

Proceedings ArticleDOI
TL;DR: The Direct Laser Interference Patterning (DLIP) method has been continuously developed over the last 20 years, as discussed by the authors, and has achieved impressive processing speeds approaching 1 m²/min.
Abstract: Starting from a simple concept, transferring the shape of an interference pattern directly to the surface of a material, the method of Direct Laser Interference Patterning (DLIP) has been continuously developed over the last 20 years. From lamp-pumped to high-power diode-pumped lasers, DLIP today achieves impressive processing speeds approaching 1 m²/min. The objective: to improve the performance of surfaces by the use of periodically ordered micro- and nanostructures. This study describes 20 years of evolution of the DLIP method in Germany, from the structuring of thin metallic films to bulk materials using nano- and picosecond laser systems, and through the different optical setups and industrial systems which have recently been developed. Several technological applications are discussed and summarized in this article, including surface micro-metallurgy, tribology, electrical connectors, biological interfaces, thin-film organic solar cells and electrodes, as well as decorative elements and safety features. In all cases, DLIP has been shown to provide not only outstanding surface properties but also outstanding economic advantages compared to traditional methods.

Proceedings ArticleDOI
TL;DR: This work investigated the use of deep features extracted from pre-trained convolutional neural networks (CNNs) in predicting survival time; a CNN initially trained on a large natural image recognition dataset was fine-tuned and its learned feature representations were transferred to the survival time prediction task.
Abstract: Prediction of survival time from brain tumor magnetic resonance images (MRI) is not commonly performed and would ordinarily be a time-consuming process. However, current cross-sectional imaging techniques, particularly MRI, can be used to generate many features that may provide information on the patient's prognosis, including survival. This information can potentially be used to identify individuals who would benefit from more aggressive therapy. Rather than using pre-defined and hand-engineered features as with current radiomics methods, we investigated the use of deep features extracted from pre-trained convolutional neural networks (CNNs) in predicting survival time. We also provide evidence for the power of domain-specific fine-tuning in improving the performance of a pre-trained CNN, even though our data set is small. We fine-tuned a CNN initially trained on a large natural image recognition dataset (ImageNet ILSVRC) and transferred the learned feature representations to the survival time prediction task, obtaining over 81% accuracy in a leave-one-out cross-validation.
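A minimal sketch of the transfer-learning recipe, using torchvision's ResNet-18 as a stand-in for the paper's pre-trained CNN; the frozen-layer choice and the two-class head are assumptions. (Assumes torchvision ≥ 0.13; the weights call downloads ImageNet parameters.)

```python
# Sketch of the transfer-learning setup: start from an ImageNet-pretrained
# CNN, replace the classification head, and fine-tune on the small MRI set.
import torch
import torch.nn as nn
from torchvision import models

net = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
net.fc = nn.Linear(net.fc.in_features, 2)     # e.g. short vs. long survival

# freeze early layers; fine-tune only the last block and the new head
for name, p in net.named_parameters():
    p.requires_grad = name.startswith(('layer4', 'fc'))

opt = torch.optim.Adam([p for p in net.parameters() if p.requires_grad], lr=1e-4)
x, y = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 0, 1])  # dummy batch
loss = nn.CrossEntropyLoss()(net(x), y)
opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```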

Proceedings ArticleDOI
TL;DR: Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.
Abstract: In this paper we discuss the edge placement error (EPE) for multi-patterning semiconductor manufacturing. In a multi-patterning scheme the creation of the final pattern is the result of a sequence of lithography and etching steps, and consequently the contour of the final pattern contains error sources of the different process steps. We describe the fidelity of the final pattern in terms of EPE, which is defined as the relative displacement of the edges of two features from their intended target position. We discuss our holistic patterning optimization approach to understand and minimize the EPE of the final pattern. As an experimental test vehicle we use the 7-nm logic device patterning process flow as developed by IMEC. This patterning process is based on Self-Aligned-Quadruple-Patterning (SAQP) using ArF lithography, combined with line cut exposures using EUV lithography. The computational metrology method to determine EPE is explained. It will be shown that ArF to EUV overlay, CDU from the individual process steps, and local CD and placement of the individual pattern features, are the important contributors. Based on the error budget, we developed an optimization strategy for each individual step and for the final pattern. Solutions include overlay and CD metrology based on angle resolved scatterometry, scanner actuator control to enable high order overlay corrections and computational lithography optimization to minimize imaging induced pattern placement errors of devices and metrology targets.
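For illustration only, a toy EPE budget that combines the named contributors in quadrature; the numbers and the combination rule are assumptions for illustration, not IMEC's or ASML's actual budget arithmetic.

```python
# Illustrative only: combining the contributor terms named above (ArF-to-EUV
# overlay, per-step CDU, local CD and placement) into a single EPE estimate.
# The quadrature model and all values below are assumptions.
import math

overlay_arf_euv = 2.5         # nm, hypothetical 3-sigma value
cdu_steps = [1.2, 1.0, 1.5]   # hypothetical per-process-step CDU contributions
local_cd = 1.8
local_placement = 1.4

epe = math.sqrt(
    overlay_arf_euv ** 2
    + sum((c / 2) ** 2 for c in cdu_steps)   # a CD error moves each edge by half
    + (local_cd / 2) ** 2
    + local_placement ** 2
)
print(f"estimated edge placement error: {epe:.2f} nm")
```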

Proceedings ArticleDOI
TL;DR: In preparation for the 2020 Astrophysics Decadal Survey, NASA has commissioned the study of four large mission concepts, including the Large Ultraviolet / Optical / Infrared (LUVOIR) Surveyor, a 15-meter-diameter segmented-aperture telescope with a suite of serviceable instruments operating at wavelengths from 100 nm to 2.5 μm.
Abstract: In preparation for the 2020 Astrophysics Decadal Survey, NASA has commissioned the study of four large mission concepts, including the Large Ultraviolet / Optical / Infrared (LUVOIR) Surveyor. The LUVOIR Science and Technology Definition Team (STDT) has identified a broad range of science objectives including the direct imaging and spectral characterization of habitable exoplanets around sun-like stars, the study of galaxy formation and evolution, the epoch of reionization, star and planet formation, and the remote sensing of Solar System bodies. NASA's Goddard Space Flight Center (GSFC) is providing the design and engineering support to develop executable and feasible mission concepts that are capable of the identified science objectives. We present an update on the first of two architectures being studied: a 15-meter-diameter segmented-aperture telescope with a suite of serviceable instruments operating over a range of wavelengths from 100 nm to 2.5 μm. Four instruments are being developed for this architecture: an optical / near-infrared coronagraph capable of 10⁻¹⁰ contrast at inner working angles as small as 2 λ/D; the LUVOIR UV Multi-object Spectrograph (LUMOS), which will provide low- and medium-resolution UV (100-400 nm) multi-object imaging spectroscopy in addition to far-UV imaging; the High Definition Imager (HDI), a high-resolution wide-field-of-view NUV-Optical-IR imager; and a UV spectro-polarimeter being contributed by Centre National d'Etudes Spatiales (CNES). A fifth instrument, a multi-resolution optical-NIR spectrograph, is planned as part of a second architecture to be studied in late 2017.

Proceedings ArticleDOI
TL;DR: An overview of the key technology innovations and infrastructure requirements for the next generation EUV systems is presented and a novel, anamorphic lens design is developed to provide the required Numerical Aperture.
Abstract: While EUV systems equipped with 0.33 Numerical Aperture lenses are readying to start volume manufacturing, ASML and Zeiss are ramping up their development activities on an EUV exposure tool with a Numerical Aperture greater than 0.5. The purpose of this scanner, targeting a resolution of 8nm, is to extend Moore's law throughout the next decade. A novel anamorphic lens design has been developed to provide the required Numerical Aperture; this lens will be paired with new, faster stages and more accurate sensors, enabling Moore's law economical requirements, as well as the tight focus and overlay control needed for future process nodes. The tighter focus and overlay control budgets, as well as the anamorphic optics, will drive innovations in imaging and OPC modelling, and possibly in metrology concepts. Furthermore, advances in resist and mask technology will be required to image lithography features with less than 10nm resolution. This paper presents an overview of the key technology innovations and infrastructure requirements for the next generation of EUV systems.

Proceedings ArticleDOI
TL;DR: In this paper, an environmentally stable Yb ultrafast ring oscillator utilizing a new method of passive mode-locking is presented; its all-fiber architecture makes it insensitive to environmental factors like temperature, humidity, vibrations, and shocks.
Abstract: We present an environmentally stable Yb ultrafast ring oscillator utilizing a new method of passive mode-locking. The laser uses an all-fiber architecture, which makes it insensitive to environmental factors like temperature, humidity, vibrations, and shocks. The new method of mode-locking utilizes crossed bandpass transmittance filters in a ring architecture to discriminate against CW lasing. A broadband pulse evolves from cavity noise under amplification; passing each filter causes strong spectral broadening. The laser is self-starting. It generates transform-limited, spectrally flat pulses of 1-50 nm width at a 6-15 MHz repetition rate and 0.2-15 nJ pulse energy at 1010-1080 nm CWL.

Proceedings ArticleDOI
TL;DR: An automatic segmentation method combining deep learning and multi-atlas refinement uses convolutional neural networks to learn deep features for distinguishing prostate pixels from non-prostate pixels and obtain a preliminary segmentation.
Abstract: Automatic segmentation of the prostate on CT images has many applications in prostate cancer diagnosis and therapy. However, prostate CT image segmentation is challenging because of the low contrast of soft tissue on CT images. In this paper, we propose an automatic segmentation method that combines a deep learning method with multi-atlas refinement. First, instead of segmenting the whole image, we extract a region of interest (ROI) to exclude irrelevant regions. Then, we use convolutional neural networks (CNN) to learn the deep features for distinguishing prostate pixels from non-prostate pixels in order to obtain a preliminary segmentation. CNNs can automatically learn deep features adapted to the data, in contrast to handcrafted features. Finally, we select several similar atlases to refine the initial segmentation results. The proposed method has been evaluated on a dataset of 92 prostate CT images. Experimental results show that our method achieved a Dice similarity coefficient of 86.80% as compared to the manual segmentation. The deep learning based method can provide a useful tool for automatic segmentation of the prostate on CT images and thus can have a variety of clinical applications.
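The reported evaluation metric is easy to make concrete; a short sketch of the Dice similarity coefficient between an automatic and a manual mask (the masks below are arbitrary examples).

```python
# Sketch of the evaluation metric: the Dice similarity coefficient between
# an automatic segmentation and the manual reference (the paper reports 86.80%).
import numpy as np

def dice(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    intersection = np.logical_and(auto_mask, manual_mask).sum()
    return 2.0 * intersection / (auto_mask.sum() + manual_mask.sum())

auto = np.zeros((128, 128), bool); auto[30:90, 40:100] = True
manual = np.zeros((128, 128), bool); manual[35:95, 45:105] = True
print(f"Dice = {dice(auto, manual):.4f}")
```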