
Showing papers in "Proceedings of SPIE in 2015"


Proceedings ArticleDOI
TL;DR: This is a first-of-its-kind experiment that shows that deep learning with large scale non-medical image databases may be sufficient for general medical image recognition tasks.
Abstract: In this work, we examine the strength of deep learning approaches for pathology detection in chest radiograph data. Convolutional neural network (CNN) deep architecture classification approaches have gained popularity due to their ability to learn mid- and high-level image representations. We explore the ability of a CNN to identify different types of pathologies in chest x-ray images. Moreover, since very large training sets are generally not available in the medical domain, we explore the feasibility of using a deep learning approach based on non-medical learning. We tested our algorithm on a dataset of 93 images. We use a CNN that was trained with ImageNet, a well-known large-scale non-medical image database. The best performance was achieved using a combination of features extracted from the CNN and a set of low-level features. We obtained an area under the curve (AUC) of 0.93 for right pleural effusion detection, 0.89 for enlarged heart detection, and 0.79 for classification between healthy and abnormal chest x-rays, where all pathologies are combined into one large class. This is a first-of-its-kind experiment that shows that deep learning with large-scale non-medical image databases may be sufficient for general medical image recognition tasks.
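
The pipeline described above, a CNN pretrained on ImageNet used as a fixed feature extractor, fused with low-level features and fed to a shallow classifier scored by AUC, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the backbone (ResNet-18), the histogram features, and the logistic-regression classifier are all stand-in assumptions.

```python
import numpy as np
import torch
import torchvision
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Pretrained ImageNet CNN as a fixed feature extractor (the paper used an
# earlier ImageNet-trained network; ResNet-18 is a stand-in).
weights = torchvision.models.ResNet18_Weights.DEFAULT
cnn = torchvision.models.resnet18(weights=weights)
cnn.fc = torch.nn.Identity()          # drop the classification head
cnn.eval()
preprocess = weights.transforms()     # ImageNet resizing/normalization

def deep_features(pil_images):
    # Assumes RGB PIL images (grayscale x-rays replicated to 3 channels).
    batch = torch.stack([preprocess(im) for im in pil_images])
    with torch.no_grad():
        return cnn(batch).numpy()     # one 512-d vector per image

def low_level_features(pil_images):
    # Simple intensity histograms stand in for the paper's low-level set.
    return np.stack([np.histogram(np.asarray(im.convert("L")),
                                  bins=32, range=(0, 255), density=True)[0]
                     for im in pil_images])

# With hypothetical PIL images `imgs` and binary pathology labels `y`:
# fused = np.hstack([deep_features(imgs), low_level_features(imgs)])
# clf = LogisticRegression(max_iter=1000).fit(fused[train_idx], y[train_idx])
# auc = roc_auc_score(y[test_idx], clf.predict_proba(fused[test_idx])[:, 1])
```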

299 citations


Proceedings ArticleDOI
David Morgan
TL;DR: This work considers the specific application of Automatic Target Recognition using Synthetic Aperture Radar data from the MSTAR public release data set and shows how this property can be exploited to efficiently adapt an existing classifier to recognise a previously unseen target.
Abstract: Deep architectures for classification and representation learning have recently attracted significant attention within academia and industry, with many impressive results across a diverse collection of problem sets. In this work we consider the specific application of Automatic Target Recognition (ATR) using Synthetic Aperture Radar (SAR) data from the MSTAR public release data set. The classification performance achieved using a Deep Convolutional Neural Network (CNN) on this data set was found to be competitive with existing methods considered to be state-of-the-art. Unlike most existing algorithms, this approach can learn discriminative feature sets directly from training data instead of requiring pre-specification or pre-selection by a human designer. We show how this property can be exploited to efficiently adapt an existing classifier to recognise a previously unseen target and discuss potential practical applications.
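
The adaptation property mentioned at the end, reusing learned features to recognize a previously unseen target, is commonly realized by freezing the trained network and refitting only a new output layer on a few labeled samples of the new class. A minimal sketch under that assumption (architecture and names are illustrative, not the authors' setup):

```python
import torch
import torch.nn as nn

def adapt_to_new_target(backbone: nn.Module, feat_dim: int, n_classes: int):
    """Reuse SAR features learned on the original MSTAR classes; retrain only
    a new classification head that now includes the previously unseen target."""
    for p in backbone.parameters():
        p.requires_grad = False               # freeze the learned feature set
    head = nn.Linear(feat_dim, n_classes)     # n_classes = old targets + 1
    model = nn.Sequential(backbone, head)
    opt = torch.optim.Adam(head.parameters(), lr=1e-3)  # only head is updated
    return model, opt

# Hypothetical usage with a small loader containing the new target's chips:
# model, opt = adapt_to_new_target(backbone, feat_dim=256, n_classes=11)
# for x, y in loader:
#     opt.zero_grad()
#     nn.functional.cross_entropy(model(x), y).backward()
#     opt.step()
```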

187 citations


Proceedings ArticleDOI
TL;DR: In this paper, a method was developed to retrieve the shape of an object hidden behind a diffusing screen, which can be used to image through thick clouds or deep into biological tissues.
Abstract: Light scattering is known for blurring images to the point of making them appear as a white halo. For this reason, imaging through thick clouds or deep into biological tissues is difficult. Here we discuss in detail a method we developed recently to retrieve the shape of an object hidden behind a diffusing screen.

161 citations


Proceedings ArticleDOI
TL;DR: This work presents a fully-automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen based on hierarchical coarse-to-fine classification of local image regions (superpixels).
Abstract: Automatic organ segmentation is an important prerequisite for many computer-aided diagnosis systems. The high anatomical variability of organs in the abdomen, such as the pancreas, prevents many segmentation methods from achieving high accuracies when compared to state-of-the-art segmentation of organs like the liver, heart or kidneys. Recently, the availability of large annotated training sets and the accessibility of affordable parallel computing resources via GPUs have made it feasible for "deep learning" methods such as convolutional networks (ConvNets) to succeed in image classification tasks. These methods have the advantage that the classification features used are trained directly from the imaging data. We present a fully-automated bottom-up method for pancreas segmentation in computed tomography (CT) images of the abdomen. The method is based on hierarchical coarse-to-fine classification of local image regions (superpixels). Superpixels are extracted from the abdominal region using Simple Linear Iterative Clustering (SLIC). An initial probability response map is generated, using patch-level confidences and a two-level cascade of random forest classifiers, from which superpixel regions with probabilities larger than 0.5 are retained. These retained superpixels serve as a highly sensitive initial input, covering the pancreas and its surroundings, to a ConvNet that samples a bounding box around each superpixel at different scales (and random non-rigid deformations at training time) in order to assign a more distinct probability of each superpixel region being pancreas or not. We evaluate our method on CT images of 82 patients (60 for training, 2 for validation, and 20 for testing). Using ConvNets we achieve maximum Dice scores averaging 68% ± 10% (range, 43-80%) in testing. This shows promise for accurate pancreas segmentation using a deep learning approach and compares favorably to state-of-the-art methods.
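
The coarse stage, SLIC superpixels scored by a random forest and thresholded at probability 0.5, might look like the sketch below; the per-superpixel features shown are toy stand-ins for the paper's patch-level confidence maps and two-level cascade.

```python
import numpy as np
from skimage.segmentation import slic
from sklearn.ensemble import RandomForestClassifier

def candidate_superpixels(ct_slice, forest: RandomForestClassifier, thresh=0.5):
    """Keep superpixels whose pancreas probability exceeds 0.5 (coarse stage)."""
    # SLIC over the abdominal region; n_segments/compactness are illustrative.
    labels = slic(ct_slice, n_segments=1000, compactness=0.05, channel_axis=None)
    keep = np.zeros_like(labels, dtype=bool)
    for sp in np.unique(labels):
        mask = labels == sp
        # Toy features; the paper instead aggregates patch-level confidences.
        feats = [[ct_slice[mask].mean(), ct_slice[mask].std(), mask.sum()]]
        if forest.predict_proba(feats)[0, 1] > thresh:
            keep |= mask          # highly sensitive initial pancreas candidates
    return keep                   # handed to the ConvNet refinement stage
```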

141 citations


Proceedings ArticleDOI
TL;DR: GeoMesa is a distributed spatio-temporal database built on top of Hadoop and column-family databases such as Accumulo and HBase that includes a suite of tools for indexing, managing and analyzing both vector and raster data.
Abstract: Recent advances in distributed databases and computing have transformed the landscape of spatio-temporal machine learning. This paper presents GeoMesa, a distributed spatio-temporal database built on top of Hadoop and column-family databases such as Accumulo and HBase, that includes a suite of tools for indexing, managing and analyzing both vector and raster data. The indexing techniques use space filling curves to map multi-dimensional data to the single lexicographic list managed by the underlying distributed database. In contrast to traditional non-distributed RDBMS, GeoMesa is capable of scaling horizontally by adding more resources at runtime; the index rebalances across the additional resources. In the raster domain, GeoMesa leverages Accumulo's server-side iterators and aggregators to perform raster interpolation and associative map algebra operations in parallel at query time. The paper concludes with two geo-time data fusion examples: using GeoMesa to aggregate Twitter data by keywords; and georegistration to drape full-motion video (FMV) over terrain.
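
The space-filling-curve idea, multi-dimensional keys flattened into the single lexicographically sorted key space of Accumulo/HBase, can be illustrated with a Z-order (Morton) encoding. GeoMesa's production curves (e.g. its space-time indexes) are more elaborate, so the resolution and scaling below are illustrative only.

```python
def interleave_bits(x: int, y: int, bits: int = 16) -> int:
    """Z-order (Morton) code: interleave the bits of two grid coordinates."""
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)
        z |= ((y >> i) & 1) << (2 * i + 1)
    return z

def zorder_key(lon: float, lat: float, bits: int = 16) -> bytes:
    # Scale lon/lat onto an unsigned integer grid.
    xi = int((lon + 180.0) / 360.0 * ((1 << bits) - 1))
    yi = int((lat + 90.0) / 180.0 * ((1 << bits) - 1))
    # Big-endian bytes sort lexicographically in numeric order, which is what
    # a column-family store needs for contiguous range scans.
    return interleave_bits(xi, yi, bits).to_bytes(2 * bits // 8, "big")

# Nearby points share key prefixes, so a bounding-box query decomposes into a
# small set of contiguous key ranges:
print(zorder_key(-77.01, 38.89).hex())
```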

120 citations


Proceedings ArticleDOI
TL;DR: In this paper, a mid-infrared oscillator FEL has been constructed at the Fritz Haber Institute, which provides a final electron energy adjustable from 15 to 50 MeV, low longitudinal (< 50 keV ps) and transverse emittance (< 20 π mm mrad).
Abstract: A mid-infrared oscillator FEL has been commissioned at the Fritz Haber Institute. The accelerator consists of a thermionic gridded gun, a subharmonic buncher, and two S-band standing-wave copper structures. It provides a final electron energy adjustable from 15 to 50 MeV, low longitudinal (< 50 keV ps) and transverse emittance (< 20 π mm mrad), at more than 200 pC bunch charge with a micro-pulse repetition rate of 1 GHz and a macro-pulse length of up to 15 µs. Pulsed radiation with up to 100 mJ macro-pulse energy at about 0.5% FWHM bandwidth is routinely produced in the wavelength range from 4 to 48 µm. A characterization of the FEL performance in terms of pulse energy, bandwidth, and micro-pulse shape of the IR radiation is given. In addition, selected user results are presented. These include, for instance, spectroscopy of bio-molecules (peptides and small proteins) either conformer-selected by ion mobility spectrometry or embedded in superfluid helium nano-droplets at 0.4 K, as well as vibrational spectroscopy of mass-selected metal-oxide clusters and protonated water clusters in the gas phase.

105 citations


Proceedings ArticleDOI
TL;DR: Focused ultrasound (FUS) is a noninvasive method to locally and transiently disrupt the blood-brain barrier at discrete targets and presents new opportunities for the use of drugs and for the study of the brain.
Abstract: The physiology of the vasculature in the central nervous system (CNS), which includes the blood-brain barrier (BBB) and other factors, complicates the delivery of most drugs to the brain. Different methods have been used to bypass the BBB, but they have limitations such as being invasive, non-targeted or requiring the formulation of new drugs. Focused ultrasound (FUS), when combined with circulating microbubbles, is a noninvasive method to locally and transiently disrupt the BBB at discrete targets. The method presents new opportunities for the use of drugs and for the study of the brain.

88 citations


Proceedings ArticleDOI
TL;DR: In this paper, the authors review how both the magnetic and electric response of structured matter can be controlled by engineering the Mie resonances of high-index dielectric nanoparticles.
Abstract: We review a new, rapidly developing field of all-dielectric nanophotonics, which allows control of both the magnetic and electric response of structured matter by engineering the Mie resonances in high-index dielectric nanoparticles. We discuss optical properties of such dielectric nanoparticles, methods of their fabrication, and also recent advances in all-dielectric metadevices including coupled-resonator dielectric waveguides, nanoantennas, and metasurfaces.

84 citations


Proceedings ArticleDOI
TL;DR: After almost a full year in orbit, the XCO2 product is beginning to reveal some of the most robust features of the atmospheric carbon cycle, including the northern hemisphere spring drawdown, and enhanced values co-located with intense fossil fuel and biomass burning emissions.
Abstract: The Orbiting Carbon Observatory-2 (OCO-2) is the first NASA satellite designed to measure atmospheric carbon dioxide (CO2) with the accuracy, resolution, and coverage needed to detect CO2 sources and sinks on regional scales over the globe. OCO-2 was launched from Vandenberg Air Force Base on 2 July 2014, and joined the 705 km Afternoon Constellation a month later. Its primary instrument, a 3-channel imaging grating spectrometer, was then cooled to its operating temperatures and began collecting about one million soundings over the sunlit hemisphere each day. As expected, about 13% of these measurements are sufficiently cloud free to yield full-column estimates of the column-averaged atmospheric CO2 dry air mole fraction, XCO2. After almost a full year in orbit, the XCO2 product is beginning to reveal some of the most robust features of the atmospheric carbon cycle, including the northern hemisphere spring drawdown, and enhanced values co-located with intense fossil fuel and biomass burning emissions. As the carbon cycle science community continues to analyze these OCO-2 data, information on regional-scale sources (emitters) and sinks (absorbers) as well as far more subtle features are expected to emerge from this high resolution, global data set.

78 citations


Proceedings ArticleDOI
TL;DR: A highly accurate and low-false-alarm hotspot detection framework that outperforms other works in the 2012 ICCAD contest in terms of both accuracy and false alarm.
Abstract: Under the low-k1 lithography process, lithography hotspot detection and elimination in the physical verification phase have become much more important for reducing the process optimization cost and improving manufacturing yield. This paper proposes a highly accurate and low-false-alarm hotspot detection framework. To define an appropriate and simplified layout feature for classification model training, we propose a novel feature space evaluation index. Furthermore, by applying a robust classifier based on the probability distribution function of layout features, our framework can achieve very high accuracy and almost zero false alarm. The experimental results demonstrate the effectiveness of the proposed method in that our detector outperforms other works in the 2012 ICCAD contest in terms of both accuracy and false alarm.
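
The classifier is described as being "based on the probability distribution function of layout features"; one simple realization of that idea is a per-class density estimate compared by likelihood ratio, sketched below with kernel density estimation. The paper's actual feature definition and classifier are its own; this is only an assumption-labeled illustration.

```python
import numpy as np
from sklearn.neighbors import KernelDensity

class PDFHotspotDetector:
    """Score layout-feature vectors against per-class density estimates."""

    def fit(self, X, y, bandwidth=0.5):
        # One KDE per class: 0 = non-hotspot, 1 = hotspot.
        self.kdes = {c: KernelDensity(bandwidth=bandwidth).fit(X[y == c])
                     for c in np.unique(y)}
        return self

    def predict(self, X, bias=0.0):
        # Compare log-likelihoods; `bias` trades false alarms vs. misses.
        ll0 = self.kdes[0].score_samples(X)
        ll1 = self.kdes[1].score_samples(X)
        return (ll1 + bias > ll0).astype(int)

# X_train: n x d layout descriptors, y_train: 0/1 hotspot labels (hypothetical)
# det = PDFHotspotDetector().fit(X_train, y_train)
# y_hat = det.predict(X_test)
```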

77 citations


Proceedings ArticleDOI
TL;DR: The X-ray Surveyor (X-S) as discussed by the authors is a large-scale mission with a high-resolution mirror assembly and an instrument set, which may include an X-ray microcalorimeter, a high-definition imager, and a dispersive grating spectrometer and its readout.
Abstract: NASA's Chandra X-ray Observatory continues to provide an unparalleled means for exploring the high-energy universe. With its half-arcsecond angular resolution, Chandra studies have deepened our understanding of galaxy clusters, active galactic nuclei, galaxies, supernova remnants, neutron stars, black holes, and solar system objects. As we look beyond Chandra, it is clear that comparable or even better angular resolution with greatly increased photon throughput is essential to address ever more demanding science questions—such as the formation and growth of black hole seeds at very high redshifts; the emergence of the first galaxy groups; and details of feedback over a large range of scales from galaxies to galaxy clusters. Recently, we initiated a concept study for such a mission, dubbed X-ray Surveyor. The X-ray Surveyor strawman payload is comprised of a high-resolution mirror assembly and an instrument set, which may include an X-ray microcalorimeter, a high-definition imager, and a dispersive grating spectrometer and its readout. The mirror assembly will consist of highly nested, thin, grazing-incidence mirrors, for which a number of technical approaches are currently under development—including adjustable X-ray optics, differential deposition, and new polishing techniques applied to a variety of substrates. This study benefits from previous studies of large missions carried out over the past two decades and, in most areas, points to mission requirements no more stringent than those of Chandra.

Proceedings ArticleDOI
TL;DR: This talk and accompanying paper attempts to provide a review and summary of the deep learning techniques used in the state-of-the-art, and highlights the need for both larger and more challenging public datasets to benchmark these systems.
Abstract: Deep Neural Networks (DNNs) have established themselves as a dominant technique in machine learning. DNNs have been top performers on a wide variety of tasks, including image classification, speech recognition, and face recognition [1-3]. Convolutional neural networks (CNNs) have been used in nearly all of the top performing methods on the Labeled Faces in the Wild (LFW) dataset [3-6]. In this talk and accompanying paper, I attempt to provide a review and summary of the deep learning techniques used in the state-of-the-art. In addition, I highlight the need for both larger and more challenging public datasets to benchmark these systems. Despite the ability of DNNs and autoencoders to perform unsupervised feature learning, modern facial recognition pipelines still require domain specific engineering in the form of re-alignment. For example, in Facebook's recent DeepFace paper, a 3D "frontalization" step lies at the beginning of the pipeline. This step creates a 3D face model for the incoming image and then uses a series of affine transformations of the fiducial points to "frontalize" the image. This step enables the DeepFace system to use a neural network architecture with locally connected layers without weight sharing as opposed to standard convolutional layers [6]. Deep learning techniques combined with large datasets have allowed research groups to surpass human level performance on the LFW dataset [3, 5]. The high accuracy (99.63% for FaceNet at the time of publishing) and utilization of outside data (hundreds of millions of images in the case of Google's FaceNet) suggest that current face verification benchmarks such as LFW may not be challenging enough, nor provide enough data, for current techniques [3, 5]. There exist a variety of organizations with mobile photo sharing applications that would be capable of releasing a very large scale and highly diverse dataset of facial images captured on mobile devices. Such an "ImageNet for Face Recognition" would likely receive a warm welcome from researchers and practitioners alike.
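
The re-alignment step described above warps detected fiducial points onto a canonical layout; in 2D this reduces to a least-squares affine fit, sketched below with hypothetical landmark coordinates (DeepFace's actual frontalization is 3D and model-based).

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine map sending src landmarks onto dst landmarks."""
    A = np.hstack([src, np.ones((len(src), 1))])   # rows of [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # solve A @ M ~= dst
    return M.T                                     # 2x3 affine matrix

# Hypothetical fiducials (eye corners, nose tip, mouth corners) and template:
detected = np.array([[31., 40.], [69., 42.], [50., 60.], [38., 78.], [63., 79.]])
template = np.array([[30., 40.], [70., 40.], [50., 60.], [37., 78.], [63., 78.]])
M = fit_affine(detected, template)
aligned = (M @ np.hstack([detected, np.ones((5, 1))]).T).T  # re-aligned points
```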

Proceedings ArticleDOI
TL;DR: In this paper, the authors describe the design and experimental analysis of novel artificial muscles, made of twisted and coiled nylon fibers, for powering a biomimetic robotic hand, which is based on circulating hot and cold water to actuate the artificial muscles and obtain fast finger movements.
Abstract: This paper describes the design and experimental analysis of novel artificial muscles, made of twisted and coiled nylon fibers, for powering a biomimetic robotic hand. The design is based on circulating hot and cold water to actuate the artificial muscles and obtain fast finger movements. The actuation system consists of a spring and a coiled muscle within a compliant silicone tube. The silicone tube provides a watertight, expansible compartment within which the coiled muscle contracts when heated and expands when cooled. The fabrication and characterization of the actuating system are discussed in detail. The performance of the coiled muscle fiber in embedded conditions and the related characteristics of the actuated robotic finger are described.

Proceedings ArticleDOI
TL;DR: The Cloud-Aerosol Transport System (CATS) is a multi-wavelength lidar instrument developed to enhance Earth Science remote sensing capabilities from the International Space Station.
Abstract: The Cloud-Aerosol Transport System (CATS) is a multi-wavelength lidar instrument developed to enhance Earth Science remote sensing capabilities from the International Space Station. The CATS project was chartered to be an experiment in all senses: science, technology, and management. As a low-cost project following a strict build-to-cost/build-to-schedule philosophy, CATS is following a new management approach while also serving as a technology demonstration for future NASA missions. This presentation will highlight the CATS instrument and science objectives with emphasis on how the ISS platform enables the specific objectives of the payload. The development process used for CATS and a look at data being produced by the instrument will also be presented.

Proceedings ArticleDOI
TL;DR: Exo-S as discussed by the authors is a direct imaging space-based mission to discover and characterize exoplanets, which can reach down to Earth-size planets in the habitable zones of nearly two dozen nearby stars.
Abstract: Exo-S is a direct imaging space-based mission to discover and characterize exoplanets. With its modest size, Exo-S bridges the gap between census missions like Kepler and a future space-based flagship direct imaging exoplanet mission. With the ability to reach down to Earth-size planets in the habitable zones of nearly two dozen nearby stars, Exo-S is a powerful first step in the search for and identification of Earth-like planets. Compelling science can be returned at the same time as the technological and scientific framework is developed for a larger flagship mission. The Exo-S Science and Technology Definition Team studied two viable starshade-telescope missions for exoplanet direct imaging, targeted to the $1B cost guideline. The first Exo-S mission concept is a starshade and telescope system dedicated to each other for the sole purpose of direct imaging for exoplanets (The "Starshade Dedicated Mission"). The starshade and commercial, 1.1-m diameter telescope co-launch, sharing the same low-cost launch vehicle, conserving cost. The Dedicated mission orbits in a heliocentric, Earth leading, Earth-drift away orbit. The telescope has a conventional instrument package that includes the planet camera, a basic spectrometer, and a guide camera. The second Exo-S mission concept is a starshade that launches separately to rendezvous with an existing on-orbit space telescope (the "Starshade Rendezvous Mission"). The existing telescope adopted for the study is the WFIRST-AFTA (Wide-Field Infrared Survey Telescope Astrophysics Focused Telescope Asset). The WFIRST-AFTA 2.4-m telescope is assumed to have previously launched to a Halo orbit about the Earth-Sun L2 point, away from the gravity gradient of Earth orbit which is unsuitable for formation flying of the starshade and telescope. The impact on WFIRST-AFTA for starshade readiness is minimized; the existing coronagraph instrument performs as the starshade science instrument, while formation guidance is handled by the existing coronagraph focal planes with minimal modification and an added transceiver.

Proceedings ArticleDOI
TL;DR: A classification method to automatically distinguish AMD patients from healthy subjects with high accuracy is proposed, based on an unsupervised feature learning approach, and allows for a quick and reliable assessment of the presence of AMD pathology in OCT volume scans without the need for accurate layer segmentation algorithms.
Abstract: Age-related Macular Degeneration (AMD) is a common eye disorder with high prevalence in elderly people. The disease mainly affects the central part of the retina, and could ultimately lead to permanent vision loss. Optical Coherence Tomography (OCT) is becoming the standard imaging modality in diagnosis of AMD and the assessment of its progression. However, the evaluation of the obtained volumetric scan is time consuming, expensive and the signs of early AMD are easy to miss. In this paper we propose a classification method to automatically distinguish AMD patients from healthy subjects with high accuracy. The method is based on an unsupervised feature learning approach, and processes the complete image without the need for an accurate pre-segmentation of the retina. The method can be divided into two steps: an unsupervised clustering stage that extracts a set of small descriptive image patches from the training data, and a supervised training stage that uses these patches to create a patch occurrence histogram for every image on which a random forest classifier is trained. Experiments using 384 volume scans show that the proposed method is capable of identifying AMD patients with high accuracy, obtaining an area under the Receiver Operating Curve of 0.984. Our method allows for a quick and reliable assessment of the presence of AMD pathology in OCT volume scans without the need for accurate layer segmentation algorithms.
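
The two stages, unsupervised patch clustering followed by supervised training on patch-occurrence histograms, could be sketched as below; patch size, dictionary size, and sampling counts are illustrative assumptions, not the paper's values.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

PATCH, K = 9, 64   # patch side length and dictionary size (illustrative)

def sample_patches(img, n, seed=0):
    rng = np.random.default_rng(seed)
    ys = rng.integers(0, img.shape[0] - PATCH, n)
    xs = rng.integers(0, img.shape[1] - PATCH, n)
    return np.stack([img[y:y + PATCH, x:x + PATCH].ravel()
                     for y, x in zip(ys, xs)])

def occurrence_histogram(img, dictionary):
    """Supervised-stage feature: frequency of each dictionary patch in an image."""
    words = dictionary.predict(sample_patches(img, n=2000))
    return np.bincount(words, minlength=K) / len(words)

# Stage 1 (unsupervised): cluster patches pooled from the training scans.
# dictionary = KMeans(n_clusters=K, n_init=10).fit(
#     np.vstack([sample_patches(im, n=500) for im in train_imgs]))
# Stage 2 (supervised): random forest on per-image histograms (AMD vs. normal).
# X = np.stack([occurrence_histogram(im, dictionary) for im in train_imgs])
# clf = RandomForestClassifier(n_estimators=300).fit(X, labels)
```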

Proceedings ArticleDOI
TL;DR: A novel enhancement filter based on ratio of multiscale Hessian eigenvalues, which yields a close-to-uniform response in all vascular structures and accurately enhances the border between the vascular structures and the background is proposed.
Abstract: Vascular diseases are among the top three causes of death in the developed countries. Effective diagnosis of vascular pathologies from angiographic images is therefore very important and usually relies on segmentation and visualization of vascular structures. To enhance the vascular structures prior to their segmentation and visualization, and to suppress non-vascular structures and image noise, filters enhancing vascular structures are used extensively. Even though several enhancement filters are widely used, the responses of these filters are typically not uniform between vessels of different radii and, compared to the response in the central part of vessels, their response is lower at vessels' edges and bifurcations, and at vascular pathologies like aneurysms. In this paper, we propose a novel enhancement filter based on the ratio of multiscale Hessian eigenvalues, which yields a close-to-uniform response in all vascular structures and accurately enhances the border between the vascular structures and the background. The proposed and four state-of-the-art enhancement filters were evaluated and compared on a 3D synthetic image containing tubular structures and a clinical dataset of 15 cerebral 3D digitally subtracted angiograms with manual expert segmentations. The evaluation was based on quantitative metrics of segmentation performance, computed as area under the precision-recall curve, signal-to-noise ratio of the vessel enhancement and the response uniformity within vascular structures. The proposed filter achieved the best scores in all three metrics and thus has a high potential to further improve the performance of existing methods or to encourage the development of more advanced methods for segmentation and visualization of vascular structures.
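
A 2D sketch of the general idea, a response built from ratios of Hessian eigenvalues and taken as a maximum over scales, appears below; the paper's exact ratio, its 3D formulation and its handling of bifurcations and aneurysms differ, so treat this as an illustrative stand-in.

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals

def ratio_vesselness(image, sigmas=(1, 2, 3, 4)):
    """Max-over-scales response from a ratio of multiscale Hessian eigenvalues."""
    response = np.zeros_like(image, dtype=float)
    for s in sigmas:
        H = hessian_matrix(image, sigma=s, order="rc")
        e1, e2 = hessian_matrix_eigvals(H)
        # Sort per pixel by magnitude so |l1| <= |l2|.
        swap = np.abs(e1) > np.abs(e2)
        l1 = np.where(swap, e2, e1)
        l2 = np.where(swap, e1, e2)
        # Bright tubes on a dark background give a strongly negative l2; the
        # eigenvalue ratio keeps the response flat across vessel radii.
        r = np.where(l2 < 0, np.abs(l2) / (np.abs(l1) + np.abs(l2) + 1e-9), 0.0)
        response = np.maximum(response, r)
    return response
```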

Proceedings ArticleDOI
TL;DR: imec’s silicon photonics active platform accessible through multi-project wafer runs is presented.
Abstract: Silicon photonics has in the past years become an important technology adopted by a growing number of industrial players to develop their next generation optical transceivers. However, most of the technology platforms established in CMOS fabrication lines are kept captive or open to only a restricted number of customers. In order to make silicon photonics accessible to a large number of players, several initiatives exist around the world to develop open platforms. In this paper we present imec's silicon photonics active platform, accessible through multi-project wafer runs.

Proceedings ArticleDOI
TL;DR: A regression model for OPC using a Hierarchical Bayes Model (HBM) to reduce the number of iterations in model-based OPC and achieve a better solution than other conventional models.
Abstract: Optical Proximity Correction (OPC) is one of the most important techniques in today's optical lithography based manufacturing process. Although the most widely used model-based OPC is expected to achieve highly accurate correction, it is also known to be extremely time-consuming. This paper proposes a regression model for OPC using a Hierarchical Bayes Model (HBM). The goal of the regression model is to reduce the number of iterations in model-based OPC. Our approach utilizes Bayesian inference to learn the optimal parameters from given data. All parameters are estimated by the Markov Chain Monte Carlo method. Experimental results show that utilizing HBM can achieve a better solution than other conventional models, e.g., a linear regression based model or a non-linear regression based model. In addition, our regression results can be fed in as the starting point of conventional model-based OPC, through which we are able to overcome the runtime bottleneck.
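
As a concrete, minimal stand-in for the MCMC estimation step, the sketch below runs random-walk Metropolis over a plain Bayesian linear regression; the paper's hierarchical priors and OPC-specific features are richer than this.

```python
import numpy as np

def metropolis_linear_regression(X, y, n_iter=20000, step=0.05, sigma=0.1):
    """Sample w from p(w | X, y) with y ~ N(Xw, sigma^2 I), prior w ~ N(0, I)."""
    rng = np.random.default_rng(1)

    def log_post(w):
        r = y - X @ w
        return -0.5 * (r @ r) / sigma**2 - 0.5 * (w @ w)   # likelihood + prior

    w = np.zeros(X.shape[1])
    lp = log_post(w)
    samples = []
    for _ in range(n_iter):
        w_prop = w + step * rng.standard_normal(len(w))
        lp_prop = log_post(w_prop)
        if np.log(rng.random()) < lp_prop - lp:            # accept/reject
            w, lp = w_prop, lp_prop
        samples.append(w.copy())
    return np.array(samples[n_iter // 2:])                 # drop burn-in

# The posterior mean over samples predicts the correction, which can then
# warm-start (reduce iterations of) conventional model-based OPC.
```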

Proceedings ArticleDOI
TL;DR: In this article, the authors used a high-speed terahertz spectrometer with laser pulse duration of 90 fs and repetition rate of 250 MHz with spectral range up to 3 THz to distinguish between authentic and counterfeit parts.
Abstract: THz radiation is capable of penetrating most nonmetallic materials, which allows THz spectroscopy to be used to image the interior structures and constituent materials of a wide variety of objects, including integrated circuits (ICs). The fact that many materials have unique spectral fingerprints in the THz spectral region provides an authentication platform to distinguish between authentic and counterfeit electronic components. Counterfeit and authentic ICs are investigated using a high-speed terahertz spectrometer with a laser pulse duration of 90 fs, a repetition rate of 250 MHz, and a spectral range up to 3 THz. Time delays, refractive indices and absorption characteristics are extracted to distinguish between authentic and counterfeit parts. Spot measurements are used to develop THz imaging techniques. In this work it was observed that the packaging of counterfeit ICs, compared to their authentic counterparts, is not made from homogeneous materials. Moreover, THz techniques were used to observe different layers of the electronic components to inspect die and lead geometries. Considerable differences between the geometries of the dies/leads of the counterfeit ICs and their authentic counterparts were observed. Observing the different layers made it possible to distinguish blacktopped counterfeit ICs precisely. To the best of the authors' knowledge, the THz inspection techniques in this paper are reported for the first time for the authentication of electronic components. A wide variety of techniques such as X-ray tomography, scanning electron microscopy (SEM), Energy Dispersive X-ray Spectroscopy (EDS) and optical inspection using a high resolution microscope have also been employed for detection of counterfeit ICs. In this paper, the data obtained from THz material inspections/THz imaging are compared to the results obtained from other techniques and show excellent correlation. Compared to other techniques, THz inspection has the advantage of being nondestructive, nonhazardous, less operator-dependent and fast.
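
The extraction of a refractive index from a measured time delay follows directly from the extra optical path through the sample: for thickness d and delay Δt against an air reference, n ≈ 1 + cΔt/d. A small worked example with hypothetical numbers:

```python
C = 299_792_458.0   # speed of light in vacuum, m/s

def refractive_index(delay_s: float, thickness_m: float) -> float:
    """Effective index from the THz pulse delay relative to an air path."""
    return 1.0 + C * delay_s / thickness_m

# Hypothetical spot measurement: 1 mm thick IC package, pulse 2.2 ps late.
print(refractive_index(2.2e-12, 1.0e-3))   # ~1.66; inhomogeneous counterfeit
# packaging shows spot-to-spot variation in this value across the package.
```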

Proceedings ArticleDOI
TL;DR: It is shown how the KLA paradigm is inherently immune to double counting of data and how consensus can effectively be adopted in order to perform in a scalable way the KLA fusion of multitarget densities over a peer-to-peer sensor network.
Abstract: The paper presents a theoretical approach to the multiagent fusion of multitarget densities based on the information-theoretic concept of Kullback-Leibler Average (KLA). In particular, it is shown how the KLA paradigm is inherently immune to double counting of data. Further, it is shown how consensus can effectively be adopted in order to perform in a scalable way the KLA fusion of multitarget densities over a peer-to-peer (i.e. without coordination center) sensor network. When the multitarget information available in each node can be expressed as a (possibly Cardinalized) Probability Hypothesis Density (PHD), application of the proposed KLA fusion rule leads to a consensus (C)PHD filter which can be successfully exploited for distributed multitarget tracking over a peer-to-peer sensor network.
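
For the special case of Gaussian densities, the weighted KLA reduces to fusion in information (inverse-covariance) form, which also illustrates why double counting is harmless: re-averaging an already fused density only re-weights it rather than artificially sharpening it. A minimal sketch of this single-Gaussian case (the paper fuses full multitarget (C)PHDs, of which this is only the simplest instance):

```python
import numpy as np

def kla_gaussian(means, covs, weights):
    """Weighted Kullback-Leibler average of Gaussian densities.

    Information matrices and vectors are averaged, then inverted back.
    """
    info = sum(w * np.linalg.inv(P) for w, P in zip(weights, covs))
    ivec = sum(w * np.linalg.inv(P) @ m for w, P, m in zip(weights, covs, means))
    P_f = np.linalg.inv(info)
    return P_f @ ivec, P_f

# One consensus step: each node averages (equal weights) with its neighbors;
# iterating converges to the network-wide KLA without a coordination center.
m_f, P_f = kla_gaussian([np.array([0.0, 0.0]), np.array([1.0, 0.0])],
                        [np.eye(2), 2 * np.eye(2)], [0.5, 0.5])
```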

Proceedings ArticleDOI
TL;DR: This paper discusses edge placement error (EPE) for multi-patterning application and compares the EPE budget with the one for EUV single expose application case and results are compared to the more straightforward alternative of using single expose patterning with EUV for all critical layers.
Abstract: In this paper we discuss edge placement error (EPE) for multi-patterning applications and compare the EPE budget with the one for the EUV single-expose application case. These two patterning methods are candidates for the manufacturing of 10-nm and 7-nm logic semiconductor devices. EUV will enable 2D random pattern layouts, while in the multi-patterning case a more restricted 1D design layout is needed. For the 1D design approach we discuss the patterning control of spacer pitch division, which results in complex multi-layer alignment and EPE optimization strategies. Solutions include overlay and CD metrology based on angle-resolved scatterometry, scanner actuator control to enable high order overlay corrections, and computational lithography optimization to minimize imaging-induced pattern placement errors of devices and metrology targets. We use 10-nm node experimental data and extrapolate the error budgets towards the 7-nm technology node. The experimental data are based on NXE:3300B and NXT:1960Bi/NXT:1970Ci exposure systems. The results are compared to the more straightforward alternative of using single-expose patterning with EUV for all critical layers.
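
A toy version of how EPE contributors combine, assuming independent Gaussian terms summed in quadrature, with overlay entering at full weight and a symmetric CD error at half weight; the numbers are hypothetical and not the paper's measured 10-nm/7-nm budgets:

```python
import math

def epe_total(overlay_nm: float, cdu_nm: float, placement_nm: float) -> float:
    """RSS of independent edge placement error contributors (illustrative).

    A feature edge shifts by the full overlay error but by only half of a
    symmetric CD error, hence cdu/2.
    """
    return math.sqrt(overlay_nm**2 + (cdu_nm / 2.0)**2 + placement_nm**2)

# Hypothetical 3-sigma contributors:
print(epe_total(overlay_nm=2.0, cdu_nm=1.5, placement_nm=1.0))  # ~2.36 nm
```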

Proceedings ArticleDOI
TL;DR: The first use of smartphone spectrophotometry for readout of fluorescence-based biological assays is demonstrated, offering a route toward portable biomolecular assays for viral/bacterial pathogens, disease biomarkers, and toxins.
Abstract: We demonstrate the first use of smartphone spectrophotometry for readout of fluorescence-based biological assays. We evaluated the smartphone fluorimeter in the context of a fluorescent molecular beacon (MB) assay for detection of specific nucleic acid sequences in a liquid test sample and compared performance against a conventional laboratory fluorimeter. The capability of distinguishing a one-point mismatch is also demonstrated by detecting single-base mutation in target nucleic acids. Our approach offers a route toward portable biomolecular assays for viral/bacterial pathogens, disease biomarkers, and toxins.

Proceedings ArticleDOI
TL;DR: Inpria, as discussed in this paper, is developing directly patternable metal oxide hardmasks as robust, high-resolution photoresists for EUV lithography, with targeted formulations achieving 13 nm half-pitch at 35 mJ/cm2 on ASML's NXE:3300B scanner.
Abstract: Inpria is developing directly patternable, metal oxide hardmasks as robust, high-resolution photoresists for EUV lithography. Targeted formulations have achieved 13 nm half-pitch at 35 mJ/cm2 on ASML's NXE:3300B scanner. Inpria's second-generation materials have an absorbance of 20/μm, thereby enabling an equivalent photon shot noise compared to conventional resists at a dose lower by a factor of 4X. These photoresists have ~40:1 etch selectivity into a typical carbon underlayer, so ultrathin 20 nm films are possible, mitigating pattern collapse. In addition to lithographic performance, we review progress in parallel advances required to enable the transition from lab to fab for such a metal oxide photoresist. This includes considerations and data related to: solvent compatibility, metals cross-contamination, coat uniformity, stability, outgassing, and rework.
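
The dose claim follows from photon counting: in an optically thin film the number of absorbed EUV photons scales with dose times absorbance, so a 4X higher absorbance reproduces the shot-noise statistics of a conventional resist at one quarter of the dose. A tiny worked check (the 5/μm conventional absorbance and the doses are assumptions for illustration):

```python
def absorbed_photons(dose_mj_cm2, absorbance_per_um, thickness_um=0.02):
    # Thin-film approximation: N_abs ~ dose * absorbance * thickness (arb. units)
    return dose_mj_cm2 * absorbance_per_um * thickness_um

conventional = absorbed_photons(40.0, 5.0)    # assumed CAR at ~5/um
metal_oxide = absorbed_photons(10.0, 20.0)    # paper's ~20/um at 1/4 the dose
print(conventional, metal_oxide)  # equal counts -> equal photon shot noise
```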

Proceedings ArticleDOI
TL;DR: In this article, high-power UV-B LEDs grown by MOVPE on sapphire substrates have been realized; lifetime measurements over 3,500 hours of operation indicate an L50 lifetime beyond 10,000 hours.
Abstract: UV light emitters in the UV-B spectral range between 280 nm and 320 nm are of great interest for applications such as phototherapy, gas sensing, plant growth lighting, and UV curing. In this paper we present high-power UV-B LEDs grown by MOVPE on sapphire substrates. By optimizing the heterostructure design, growth parameters and processing technologies, significant progress was achieved with respect to internal efficiency, injection efficiency and light extraction. LED chips emitting at 310 nm with maximum output powers of up to 18 mW have been realized. Lifetime measurements show an approximately 20% decrease in emission power after 1,000 operating hours at 100 mA and 5 mW output power, and less than 30% after 3,500 hours of operation, thus indicating an L50 lifetime beyond 10,000 hours.

Proceedings ArticleDOI
TL;DR: In this paper, the burst mode for ps and fs pulses on steel and copper is investigated; it is found that the reduction of the energy in a single pulse (in the burst) is the main factor behind the often-reported gain in the removal rate using the burst mode, e.g. for steel no investigated burst sequence led to a higher removal rate compared to single pulses at a higher repetition rate.
Abstract: The burst mode for ps and fs pulses on steel and copper is investigated. It is found that the reduction of the energy in a single pulse (in the burst) represents the main factor for the often reported gain in the removal rate using the burst mode; e.g. for steel, no investigated burst sequence led to a higher removal rate compared to single pulses at a higher repetition rate. For copper, however, a situation was found where the burst mode leads to a real increase of the removal rate in the range of 20%. Further, the burst mode offers the possibility to generate slightly melted, flat surfaces with good optical properties in the case of steel. Temperature simulations indicate that the surface state during the burst mode could be responsible for the melting effect or the formation of cavities in clusters, which reduces the surface quality.

Proceedings ArticleDOI
TL;DR: The optimal workflow to obtain phantoms from 3D data for interventionists to practice on prior to an actual procedure was investigated; it should allow adjustments to treatment plans to be made before the patient is actually in the procedure room, reducing the risk of peri-operative complications or delays.
Abstract: Minimally invasive endovascular image-guided interventions (EIGIs) are the preferred procedures for treatment of a wide range of vascular disorders. Despite benefits including reduced trauma and recovery time, EIGIs have their own challenges. Remote catheter actuation and challenging anatomical morphology may lead to erroneous endovascular device selections, delays or even complications such as vessel injury. EIGI planning using 3D phantoms would allow interventionists to become familiarized with the patient vessel anatomy by first performing the planned treatment on a phantom under standard operating protocols. In this study the optimal workflow to obtain such phantoms from 3D data for interventionists to practice on prior to an actual procedure was investigated. Patient-specific phantoms and phantoms presenting a wide range of challenging geometries were created. Computed Tomographic Angiography (CTA) data was uploaded into a Vitrea 3D station which allows segmentation and resulting stereo-lithographic files to be exported. The files were uploaded using processing software where preloaded vessel structures were included to create a closed-flow vasculature having structural support. The final file was printed, cleaned, connected to a flow loop and placed in an angiographic room for EIGI practice. Various Circle of Willis and cardiac arterial geometries were used. The phantoms were tested for ischemic stroke treatment, distal catheter navigation, aneurysm stenting and cardiac imaging under angiographic guidance. This method should allow adjustments to treatment plans to be made before the patient is actually in the procedure room, enabling reduced risk of peri-operative complications or delays.

Proceedings ArticleDOI
John V. Monaco
TL;DR: A novel methodology for identifying and verifying Bitcoin users based on the observation of Bitcoin transactions over time, which shows an inherent lack of anonymity by exploiting patterns in long-term transactional behavior.
Abstract: Digital currencies, such as Bitcoin, offer convenience and security to criminals operating in the black marketplace. Some Bitcoin marketplaces, such as Silk Road, even claim anonymity. This claim contradicts the findings in this work, where long term transactional behavior is used to identify and verify account holders. Transaction timestamps and network properties observed over time contribute to this finding. The timestamp of each transaction is the result of many factors: the desire to purchase an item, daily schedule and activities, as well as hardware and network latency. Dynamic network properties of the transaction, such as coin flow and the number of edge outputs and inputs, contribute further to reveal account identity. In this paper, we propose a novel methodology for identifying and verifying Bitcoin users based on the observation of Bitcoin transactions over time. The behavior we attempt to quantify roughly occurs in the social band of Newell's time scale. A subset of the Blockchain 230686 is taken, selecting users that initiated between 100 and 1000 unique transactions per month for at least 6 different months. This dataset shows evidence of being nonrandom and nonlinear, thus a dynamical systems approach is taken. Classification and authentication accuracies are obtained under various representations of the monthly Bitcoin samples: outgoing transactions, as well as both outgoing and incoming transactions, are considered, along with the timing and dynamic network properties of transaction sequences. The most appropriate representations of monthly Bitcoin samples are proposed. Results show an inherent lack of anonymity by exploiting patterns in long-term transactional behavior.
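
A minimal sketch of the kind of monthly timing representation described, built from a user's transaction timestamps (the specific features below are illustrative assumptions, not the paper's exact representation):

```python
import numpy as np
import pandas as pd

def monthly_timing_features(timestamps):
    """Per-month behavioral features from one user's transaction timestamps."""
    ts = pd.Series(pd.to_datetime(timestamps)).sort_values()
    feats = {}
    for month, grp in ts.groupby(ts.dt.to_period("M")):
        secs = grp.astype("int64").to_numpy() / 1e9
        iet = np.diff(secs)                        # inter-event times, seconds
        feats[str(month)] = {
            "n_tx": len(grp),
            "iet_mean_s": iet.mean() if iet.size else np.nan,
            "iet_cv": iet.std() / iet.mean() if iet.size > 1 else np.nan,
            # Hour-of-day histogram reflects daily schedule and activities.
            "hour_hist": np.bincount(grp.dt.hour, minlength=24) / len(grp),
        }
    return feats

# Stack one vector per (user, month) and train a classifier (identification)
# or a verifier scoring genuine vs. impostor months (authentication).
```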

Proceedings ArticleDOI
Bert Geelen, Carolina Blanch, Pilar Gonzalez, Nicolaas Tack, Andy Lambrechts
TL;DR: This paper describes the integration of the per-pixel mosaic snapshot spectral sensors inside a tiny, portable and extremely user-friendly camera that can be operated from a laptop-based USB3 connection, making them easily deployable in very diverse environments.
Abstract: Spectral imaging can reveal a lot of hidden details about the world around us, but is currently confined to laboratory environments due to the need for complex, costly and bulky cameras. Imec has developed a unique spectral sensor concept in which the spectral unit is monolithically integrated on top of a standard CMOS image sensor at wafer level, hence enabling the design of compact, low cost and high acquisition speed spectral cameras with a high design flexibility. This flexibility has previously been demonstrated by imec in the form of three spectral camera architectures: firstly a high spatial and spectral resolution scanning camera, secondly a multichannel snapshot multispectral camera and thirdly a per-pixel mosaic snapshot spectral camera. These snapshot spectral cameras sense an entire multispectral data cube at one discrete point in time, extending the domain of spectral imaging towards dynamic, video-rate applications. This paper describes the integration of our per-pixel mosaic snapshot spectral sensors inside a tiny, portable and extremely user-friendly camera. Our prototype demonstrator cameras can acquire multispectral image cubes, either of 272x512 pixels over 16 bands in the VIS (470-620nm) or of 217x409 pixels over 25 bands in the VNIR (600-900nm) at 170 cubes per second for normal machine vision illumination levels. The cameras themselves are extremely compact based on Ximea xiQ cameras, measuring only 26x26x30mm, and can be operated from a laptop-based USB3 connection, making them easily deployable in very diverse environments.
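
Reading a per-pixel mosaic frame back into a spectral cube is a strided rearrangement, sketched below for the 25-band VNIR layout quoted above (a 5x5 filter pattern tiled over the sensor yields a 217x409-pixel, 25-band cube from a 1085x2045-pixel area; the exact band ordering is an assumption):

```python
import numpy as np

def mosaic_to_cube(raw: np.ndarray, pattern: int = 5) -> np.ndarray:
    """Rearrange a per-pixel mosaic frame into a (bands, rows, cols) cube.

    Band (i, j) of the tiled pattern lives at pixels raw[i::pattern, j::pattern].
    """
    rows, cols = raw.shape[0] // pattern, raw.shape[1] // pattern
    cube = np.empty((pattern * pattern, rows, cols), dtype=raw.dtype)
    for i in range(pattern):
        for j in range(pattern):
            cube[i * pattern + j] = raw[i::pattern, j::pattern][:rows, :cols]
    return cube

frame = np.zeros((1085, 2045), dtype=np.uint16)   # VNIR mosaic sensor area
cube = mosaic_to_cube(frame)                      # -> shape (25, 217, 409)
```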

Proceedings ArticleDOI
Jianle Chen, Ying Chen, Marta Karczewicz, Xiang Li, Hongbin Liu, Li Zhang, Xin Zhao
TL;DR: Simulation results show that significant performance improvement over HEVC standard can be achieved, especially for the high resolution video materials.
Abstract: The new state-of-the-art video coding standard, H.265/HEVC, was finalized in 2013 and achieves roughly 50% bit rate saving compared to its predecessor, H.264/MPEG-4 AVC. This paper provides evidence that there is still potential for further coding efficiency improvements. A brief overview of HEVC is firstly given in the paper. Then, our improvements on each main module of HEVC are presented. For instance, the recursive quadtree block structure is extended to support larger coding units and transform units. The motion information prediction scheme is improved by advanced temporal motion vector prediction, which inherits the motion information of each small block within a large block from a temporal reference picture. Cross component prediction with a linear prediction model improves intra prediction, and overlapped block motion compensation improves the efficiency of inter prediction. Furthermore, coding of both intra and inter prediction residual is improved by an adaptive multiple transform technique. Finally, in addition to the deblocking filter and SAO, an adaptive loop filter is applied to further enhance the reconstructed picture quality. This paper describes the above-mentioned techniques in detail and evaluates their coding performance benefits based on the common test conditions used during HEVC development. The simulation results show that significant performance improvement over the HEVC standard can be achieved, especially for high resolution video materials.