
Showing papers on "Aerial image" published in 2012


Proceedings Article
26 Jun 2012
TL;DR: This work proposes two robust loss functions for dealing with incomplete and poorly registered labels and uses them to train a deep neural network on two challenging aerial image datasets.
Abstract: When training a system to label images, the amount of labeled training data tends to be a limiting factor. We consider the task of learning to label aerial images from existing maps. These provide abundant labels, but the labels are often incomplete and sometimes poorly registered. We propose two robust loss functions for dealing with these kinds of label noise and use the loss functions to train a deep neural network on two challenging aerial image datasets. The robust loss functions lead to big improvements in performance and our best system substantially outperforms the best published results on the task we consider.
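As an illustration of the general idea (not the paper's exact formulation), a noise-aware negative log-likelihood for pixel-wise binary labels can be sketched as follows; the symmetric label-flip rate theta is an assumed parameter.

```python
import numpy as np

def noisy_label_nll(p_pred, y_obs, theta=0.1):
    """Negative log-likelihood of observed (possibly corrupted) binary labels.

    Illustrative noise model (assumed, not the paper's exact loss): the
    observed map label disagrees with the true label with probability theta,
    so the probability of observing label 1 mixes the network output p_pred
    with the flip rate.
    """
    p_pred = np.clip(np.asarray(p_pred, dtype=float), 1e-7, 1 - 1e-7)
    y_obs = np.asarray(y_obs, dtype=float)
    p_obs = (1.0 - theta) * p_pred + theta * (1.0 - p_pred)
    return -np.mean(y_obs * np.log(p_obs) + (1.0 - y_obs) * np.log(1.0 - p_obs))

# Toy usage: a confident prediction that contradicts its (possibly wrong)
# label is penalised less when noise is acknowledged than under plain
# cross-entropy (theta = 0).
print(noisy_label_nll([0.95], [0], theta=0.0))
print(noisy_label_nll([0.95], [0], theta=0.1))
```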

416 citations


Journal ArticleDOI
04 Jan 2012-Sensors
TL;DR: Recommendations for properties of imaging sensor, data collection and processing of UAV image data to ensure accurate point cloud generation are given.
Abstract: The objective of this investigation was to develop and investigate methods for point cloud generation by image matching using aerial image data collected by quadrocopter type micro unmanned aerial vehicle (UAV) imaging systems. Automatic generation of high-quality, dense point clouds from digital images by image matching is a recent, cutting-edge step forward in digital photogrammetric technology. The major components of the system for point cloud generation are a UAV imaging system, an image data collection process using high image overlaps, and post-processing with image orientation and point cloud generation. Two post-processing approaches were developed: one method is based on BAE Systems’ SOCET SET classical commercial photogrammetric software and the other is built using Microsoft®’s Photosynth™ service available on the Internet. Empirical testing was carried out in two test areas. Photosynth processing showed that it is possible to orient the images and generate point clouds fully automatically without any a priori orientation information or interactive work. The photogrammetric processing line provided dense and accurate point clouds that followed the theoretical principles of photogrammetry, but some artifacts were also detected. The point clouds from the Photosynth processing were sparser and noisier, which is to a large extent due to the fact that the method is not optimized for dense point cloud generation. Careful photogrammetric processing with self-calibration is required to achieve the highest accuracy. Our results demonstrate the high performance potential of the approach and show that with rigorous processing it is possible to reach results that are consistent with theory. We also point out several further research topics. Based on theoretical and empirical results, we give recommendations for the properties of the imaging sensor and for the collection and processing of UAV image data to ensure accurate point cloud generation.

252 citations


Journal ArticleDOI
TL;DR: Results indicate that the proposed automated system can be used in detecting the road network on very high resolution satellite and aerial images in a reliable and fast manner.
Abstract: Road network detection from very high resolution satellite and aerial images has diverse and important usage areas such as map generation and updating. Although an expert can label road pixels in a given image, this operation is prone to errors and quite time consuming. Therefore, an automated system is needed to detect the road network in a given satellite or aerial image in a robust manner. In this paper, we propose such a novel system. Our system has three main modules: probabilistic road center detection, road shape extraction, and graph-theory-based road network formation. These modules may be used sequentially or interchangeably depending on the application at hand. To show the strengths and weaknesses of our system, we tested it on several very high resolution satellite (Geoeye, Ikonos, and QuickBird) and aerial image sets. We compared our system with the ones existing in the literature. We also tested the sensitivity of our system to different parameter values. Obtained results indicate that our system can be used in detecting the road network on such images in a reliable and fast manner.
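A minimal sketch of the kind of graph-based network-formation step described above, assuming candidate road-centre points have already been detected; the distance threshold and the use of networkx are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
import networkx as nx

def link_road_centres(centres, max_gap=20.0):
    """Join candidate road-centre points (x, y) that lie within max_gap
    pixels of each other and return the connected components as tentative
    road segments (illustrative graph-formation step, not the authors' code)."""
    pts = np.asarray(centres, dtype=float)
    G = nx.Graph()
    G.add_nodes_from(range(len(pts)))
    for i in range(len(pts)):
        d = np.hypot(pts[:, 0] - pts[i, 0], pts[:, 1] - pts[i, 1])
        for j in np.nonzero((d < max_gap) & (np.arange(len(pts)) > i))[0]:
            G.add_edge(i, int(j), weight=float(d[j]))
    return [sorted(c) for c in nx.connected_components(G)]

# Toy usage: two clusters of centre points yield two tentative road segments.
print(link_road_centres([(0, 0), (10, 0), (22, 0), (200, 200), (210, 205)]))
```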

141 citations


Journal ArticleDOI
TL;DR: A simple and robust feature point matching algorithm, called Restricted Spatial Order Constraints (RSOC), is proposed to remove outliers for registering aerial images with monotonous backgrounds, similar patterns, low overlapping areas, and large affine transformation.
Abstract: Accurate point matching is a critical and challenging process in feature-based image registration. In this paper, a simple and robust feature point matching algorithm, called Restricted Spatial Order Constraints (RSOC), is proposed to remove outliers for registering aerial images with monotonous backgrounds, similar patterns, low overlapping areas, and large affine transformation. In RSOC, both local structure and global information are considered. Based on adjacent spatial order, an affine invariant descriptor is defined, and point matching is formulated as an optimization problem. A graph matching method is used to solve it and yields two matched graphs with a minimum global transformation error. In order to eliminate dubious matches, a filtering strategy is designed. The strategy integrates two-way spatial order constraints and two decision criteria restrictions, i.e., the stability and accuracy of transformation error. Twenty-nine pairs of optical and Synthetic Aperture Radar (SAR) aerial images are utilized to evaluate the performance. Compared with RANdom SAmple Consensus (RANSAC), Graph Transformation Matching (GTM), and Spatial Order Constraints (SOC), RSOC obtained the highest precision and stability.

119 citations


Journal ArticleDOI
TL;DR: The novel memory-efficient SGM (eSGM) method is an advancement in which the amount of temporary memory depends only on the number of pixels and not on the disparity range; it permits matching of huge images in one piece and reduces the memory-bandwidth requirements for real-time mobile robotics.
Abstract: Semi-Global Matching (SGM) is a robust stereo method that has proven its usefulness in various applications ranging from aerial image matching to driver assistance systems. It supports pixelwise matching for maintaining sharp object boundaries and fine structures and can be implemented efficiently on different computation hardware. Furthermore, the method is not sensitive to the choice of parameters. The structure of the matching algorithm is well suited to be processed by highly parallel hardware, e.g., FPGAs and GPUs. The drawback of SGM is the temporary memory requirement that depends on the number of pixels and the disparity range. On the one hand this results in long idle times due to the bandwidth limitations of the external memory, and on the other hand the capacity bounds are quickly reached. A full HD image with a size of 1920 × 1080 pixels and a disparity range of 512 pixels already requires about 1 billion elements, which amounts to at least several GB of RAM, depending on the element size, and is not available on standard FPGA and GPU boards. The novel memory-efficient (eSGM) method is an advancement in which the amount of temporary memory only depends on the number of pixels and not on the disparity range. This permits matching of huge images in one piece and reduces the requirements on the memory bandwidth for real-time mobile robotics. The feature comes at the cost of 50% more compute operations as compared to SGM. This overhead is compensated by the previously idle compute logic within the FPGA and the GPU and therefore results in an overall performance increase. We show that eSGM produces the same high-quality disparity images as SGM and demonstrate its performance both on an aerial image pair with 142 MPixel and within a real-time mobile robotic application. We have implemented the new method on the CPU, GPU and FPGA. We conclude that eSGM is advantageous for a GPU implementation and essential for an implementation on our FPGA.
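The memory figure quoted above can be checked with simple arithmetic; the element sizes of 1 and 2 bytes per cost entry are assumptions for illustration.

```python
# Back-of-the-envelope check of the SGM cost-volume size quoted above.
width, height, disparity_range = 1920, 1080, 512
elements = width * height * disparity_range
print(f"{elements:,} cost entries")                      # ~1.06 billion
for bytes_per_element in (1, 2):                         # assumed element sizes
    gib = elements * bytes_per_element / 2**30
    print(f"{bytes_per_element} B/entry -> {gib:.1f} GiB of temporary memory")
```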

80 citations


Patent
15 Feb 2012
TL;DR: In this paper, a system and method for construction estimation using aerial images is presented, which allows users to generate two-dimensional and three-dimensional models of the roof by automatically delineating various roof features, and generates a report including information about the roof of the building.
Abstract: A system and method for construction estimation using aerial images is provided. The system receives at least one aerial image of a building. An estimation engine processes the aerial image at a plurality of angles to automatically identify a plurality of lines (e.g., perimeter and interior lines) in the image corresponding to a plurality of features of a roof of the building. The estimation engine allows users to generate two-dimensional and three-dimensional models of the roof by automatically delineating various roof features, and generates a report including information about the roof of the building.

75 citations


Journal ArticleDOI
TL;DR: In this paper, the authors summarize the capacity of high-resolution (<1 m) digital aerial imagery to support forest health monitoring and posit that such imagery can play at least two critical roles.
Abstract: We summarize the capacity of high spatial resolution (<1 m) digital aerial imagery to support forest health monitoring. We review the current use of digital aerial imagery in the context of the recent mountain pine beetle epidemic in western Canada. Supported by this review, we posit that high spatial resolution digital aerial imagery can play at least two critical roles in forest health monitoring. First, the capacity to characterize damage at the individual tree level directly supports a broad range of forest health information needs (e.g., tree-level attributes for estimating the population at risk and for inputs to models, estimates of mortality, rates of population growth). Second, the level of detail afforded by the digital high spatial resolution aerial imagery provides critical calibration and validation data for lower spatial resolution remotely sensed imagery (e.g., QuickBird, Landsat) for large-area detection and mapping of forest damage and can be used in a double sampling scheme as a bridge between detailed field measures and landscape-level estimates of mortality. In an era with increasing numbers of commercially deployed sensors capable of acquiring high spatial resolution satellite imagery, the flexibility and cost-effectiveness of aerial image options should not be disregarded. Moreover, experiences with airborne imagery can continue to inform applications using high spatial resolution satellite imagery for forest health information needs. © 2012 Society of Photo-Optical Instrumentation Engineers (SPIE). (DOI: 10.1117/1.JRS.6.062527)

32 citations


Proceedings ArticleDOI
22 Jul 2012
TL;DR: A novel and robust method for automatic vehicle detection from highway aerial images is presented, in which a GIS road vector map is used to constrain the vehicle detection system to the highway network.
Abstract: Vehicle detection from high resolution aerial images has been studied for many years. However, a robust and efficient vehicle detection method is still challenging. In this paper, a novel and robust method for automatic vehicle detection from highway aerial images is presented. In this method, a GIS road vector map is used to constrain the vehicle detection system to the highway network. After the structuring element is identified, a morphological preprocessing method is used to identify candidate vehicles. Experiments were conducted with 0.15 m resolution aerial imagery, and the results demonstrated that the method has an excellent detection performance; the method is therefore very promising.
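A hedged sketch of a road-masked morphological preprocessing step of the kind described above, using OpenCV; the top-hat transform, Otsu threshold and 7×7 kernel are illustrative choices, not the authors' exact pipeline.

```python
import cv2
import numpy as np

def candidate_vehicles(aerial_gray, road_mask, kernel_size=7):
    """Restrict the search to GIS road areas and use a morphological top-hat
    to highlight bright, roughly car-sized blobs (illustrative sketch only).

    aerial_gray: 8-bit grayscale aerial image
    road_mask:   8-bit mask (255 on highway pixels) rasterised from the GIS
                 road vector map
    """
    masked = cv2.bitwise_and(aerial_gray, aerial_gray, mask=road_mask)
    se = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    tophat = cv2.morphologyEx(masked, cv2.MORPH_TOPHAT, se)
    _, blobs = cv2.threshold(tophat, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(blobs)
    return centroids[1:], stats[1:]          # drop the background component
```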

30 citations


Journal ArticleDOI
TL;DR: A linear combination of two quadratic line-contour objectives to approximate the resist image effect for fast convergence is presented and a 100x speedup with comparable image fidelity and a slightly improved process window is shown.
Abstract: In Abbe’s formulation, source optimization (SO) is often formulated into a linear or quadratic problem, depending on the choice of objective functions. However, the conventional approach for the resist image, involving a sigmoid transformation of the aerial image, results in an objective with a functional form. The applicability of the resist-image objective to SO or simultaneous source and mask optimization (SMO) is therefore limited. In this paper, we present a linear combination of two quadratic line-contour objectives to approximate the resist image effect for fast convergence. The line-contour objectives are based on the aerial image on drawn edges using a constant threshold resist model and that of pixels associated with an intensity minimum for side-lobe suppression. A conjugate gradient method is employed to assure the convergence to the global minimum within a number of iterations smaller than the number of source variables. We further compare the optimized illumination with the proposed line-contour objectives to that with a sigmoid resist image using a steepest descent method. The results show a 100x speedup with comparable image fidelity and a slightly improved process window for the two cases studied.
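For reference, the sigmoid resist-image transformation mentioned above is commonly written as follows (notation assumed here, not taken from the paper):

```latex
% Common sigmoid resist-image approximation: aerial image intensity I(x,y),
% resist threshold t_r, steepness a.
R(x,y) = \frac{1}{1 + \exp\!\left[-a\left(I(x,y) - t_r\right)\right]}
```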

29 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the use of a lightweight rotary-wing UAV for mapping a simulation model, with the accuracy of the photogrammetric products assessed as one of the objectives of the study.
Abstract: Many applications can be addressed using unmanned aerial vehicle (UAV) technology, such as farming, surveillance, monitoring, fire disaster and flood monitoring, and aerial terrain mapping. This study was carried out to investigate the use of a lightweight rotary-wing UAV for mapping a simulation model. Assessing the accuracy of the photogrammetric products is one of the objectives of this study. There are two types of UAV, known as rotary-wing and fixed-wing. Based on a few studies, rotary-wing units are more stable and are able to capture images easily. This allows remotely controlled UAVs to be used for environmental and urban mapping. In the simulation model, ground control points (GCPs) and check points (CPs) were established using a total station. The GCPs are used in the photogrammetric processing to produce the photogrammetric output, while the CPs are used for accuracy assessment. This study also used a low-cost digital camera to capture aerial images of the simulated model. Two methods were implemented in this study. In the first method, the camera was mounted vertically at a fixed height on the simulated model. In the second method, the camera was mounted vertically; it was then attached to the bottom of the rotary-wing UAV and the images were captured at altitude. The digital orthophoto and digital elevation model of the simulated model were produced after the acquired images were processed using photogrammetric software. Based on the findings, the root mean square errors (RMSEs) for the fixed platform are ±0.002, ±0.001 and ±0.214 for the x, y and z coordinates, respectively, while the RMSEs for the UAV platform are ±0.002, ±0.002 and ±0.223, respectively. It can be concluded that the differences between the mobile and fixed platforms are small and that a UAV system can be used for large-scale aerial terrain mapping. Key words: Mobile mapping, non-metric, digital elevation models (DEM), photogrammetry, simulation.
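For clarity, the RMSE used for the accuracy assessment at the check points is the standard statistic (symbols assumed):

```latex
% Check-point RMSE for one coordinate: n check points with photogrammetric
% coordinate X_i and reference (total station) coordinate X_i^{ref}.
\mathrm{RMSE}_X = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(X_i - X_i^{\mathrm{ref}}\right)^{2}}
```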

29 citations


Proceedings ArticleDOI
22 Jul 2012
TL;DR: This paper introduces a formulation that relates image registration with the poses of a moving camera observing a 3D plane and combines this formulation with Newton's algorithm in order to estimate camera poses in a given aerial video.
Abstract: Aerial image change detection is highly dependent on the accuracy of camera pose and may be subject to false alarms caused by misregistrations. In this paper, we present a novel pose estimation approach based on Visual Servoing that combines aerial videos with 3D models. Firstly, we introduce a formulation that relates image registration with the poses of a moving camera observing a 3D plane. Then, we combine this formulation with Newton's algorithm in order to estimate camera poses in a given aerial video. Finally, we present and discuss experimental results which demonstrate the robustness and the accuracy of our method.
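The registration-to-pose relation referred to above is commonly expressed through the plane-induced homography (standard result; notation assumed, not taken from the paper):

```latex
% Two views, with relative rotation R and translation t, of a 3D plane with
% unit normal n at distance d, seen through a camera with intrinsics K,
% are related by the homography
H = K\left(R + \frac{t\,n^{\top}}{d}\right)K^{-1},
% so refining H by image registration constrains the camera pose (R, t),
% which a Newton-type iteration can then estimate.
```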

Patent
11 Apr 2012
TL;DR: In this article, the authors propose an automatic registration method for an airborne laser point cloud and an aerial image. The method has the following advantages: the building outline is directly extracted from the laser point cloud, without interpolating the point cloud, to obtain the building angular characteristics as the registration element, which prevents interpolation errors and improves the registration accuracy; and the exterior orientation elements of the image are solved by bundle block adjustment.
Abstract: The invention provides an automatic registration method for an airborne laser point cloud and an aerial image. The automatic registration method comprises the following steps: extracting a building outline from the laser point cloud without interpolation; obtaining building angular characteristics as a registration element through outline regularization; automatically matching the point cloud with the image according to the homonymic angular characteristics with the aid of an approximate exterior orientation element of the aerial image; and utilizing bundle block adjustment and a cyclic iterative policy to realize overall optimal registration of the aerial image and point cloud data. The registration method provided by the invention has the following advantages: the building outline is directly extracted from the laser point cloud without interpolation, so as to obtain the building angular characteristics as the registration element, which prevents interpolation errors and improves the registration accuracy; and the exterior orientation element of the image is solved by virtue of bundle block adjustment, while overall optimal registration between the aerial image and the airborne laser point cloud is realized by adopting the cyclic iterative registration policy.

Journal ArticleDOI
Wenxuan Shi, Jie Li
TL;DR: A new algorithm by which shadows are extracted from a single color aerial image is presented, which presents better shadow detection accuracy in the images that contain some dark green lawn, river, or low brightness shadow regions.
Abstract: Many problems in image processing and computer vision arise from shadows in a single color aerial image. This article presents a new algorithm by which shadows are extracted from a single color aerial image. Apart from using the ratio of the hue over the intensity, as in some state-of-the-art algorithms, this article introduces another ratio map, which is obtained by applying the saturation over the intensity. Candidate shadow and nonshadow regions are separated by applying Otsu's thresholding method. The color attenuation relationship, which describes how the attenuation of the color channels relate to one another, is derived from Planck's blackbody irradiance law. For each region, the color attenuation relationship and other determination conditions are applied iteratively to segment it into smaller sub-regions and to identify whether each sub-region is a true shadow region. Compared with previous methods, the proposed algorithm presents better shadow detection accuracy in images that contain dark green lawn, river, or low-brightness shadow regions. The experimental results demonstrate the advantage of the proposed algorithm.
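A minimal sketch of the saturation-over-intensity ratio map and Otsu thresholding step described above; HSV is used here in place of HSI, and the subsequent color-attenuation refinement from the paper is not reproduced.

```python
import numpy as np
from skimage.color import rgb2hsv
from skimage.filters import threshold_otsu

def candidate_shadow_mask(rgb):
    """Return a first candidate shadow mask from the saturation/intensity
    ratio map (illustrative sketch; HSV stands in for HSI)."""
    hsv = rgb2hsv(rgb)                       # channels scaled to [0, 1]
    saturation, value = hsv[..., 1], hsv[..., 2]
    ratio = saturation / (value + 1e-6)      # shadows: high S relative to V
    return ratio > threshold_otsu(ratio)
```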

Proceedings ArticleDOI
06 Nov 2012
TL;DR: This paper proposes self-supervised computer vision algorithms that analyze a publicly available cartographic resource to identify road image-regions and detect overpasses, and applies an MRF to smooth potentially inconsistent binary classification outputs.
Abstract: Cartographic databases can be kept up to date through aerial image analysis. Such analysis is optimized when one knows which parts of an aerial image are roads and where complex road structures, such as overpasses and intersections, are located. This paper proposes self-supervised computer vision algorithms that analyze a publicly available cartographic resource (i.e., screenshots of road vectors) to, without human intervention, identify road image-regions and detect overpasses. Our algorithm segments a given input image into two parts: road and non-road image regions. It does so not by learning a global appearance model of roads from hand-labeled data, but rather by approximating a locally consistent model of the roads' appearance from self-obtained data. In particular, the learned local model is used to execute a binary classification. We then apply an MRF to smooth potentially inconsistent binary classification outputs. To detect overpasses, our method scrutinizes screenshots of road vector images to approximate the geometry of the underlying road vector and uses the estimated geometry to localize overpasses. Our methods, based on experiments using inter-city highway ortho-images, show promising results. Segmentation results showed on average over 90% recall; overpass detection results showed 94% accuracy.

Journal ArticleDOI
TL;DR: This letter proposes a novel approach using vector roads to aid in generating seams for aerial image mosaicking that is typically more efficient than the existing raster-based approaches, particularly when vector road networks are available.
Abstract: This letter proposes a novel approach using vector roads to aid in generating seams for aerial image mosaicking. A representative seam of two adjacent images is extracted as follows. First, the vector roads in the overlapping area of adjacent images are overlaid with the straight skeleton of the overlapping area to build a weighted graph G(V, E). Dijkstra's algorithm is then applied to find the lowest-cost path in G(V, E) that connects two intersections of the boundary polygons of adjacent images. The lowest-cost path is considered as a seam candidate. Second, the seam candidate is refined by considering its surrounding pixels. The refined seam is employed as the final seam. Experiments demonstrate that this vector-based approach is typically more efficient than the existing raster-based approaches, particularly when vector road networks are available.
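A minimal sketch of the seam-candidate step, assuming the weighted graph G(V, E) has already been built from the vector roads and the straight skeleton; networkx is an illustrative choice, not the authors' implementation.

```python
import networkx as nx

def seam_candidate(G, entry_node, exit_node):
    """Lowest-cost path between the two boundary intersections of the
    overlap polygon, taken as the seam candidate."""
    return nx.dijkstra_path(G, entry_node, exit_node, weight="weight")

# Toy usage: the road-following route is cheaper than crossing open terrain.
G = nx.Graph()
G.add_edge("A", "B", weight=1.0)   # along a road centreline
G.add_edge("A", "C", weight=5.0)   # across open terrain
G.add_edge("C", "B", weight=5.0)
print(seam_candidate(G, "A", "B")) # ['A', 'B']
```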

Patent
Yu-Po Tang, Chih-Shiang Chou, Yan-Ting He, Ya-Ting Chang, Fu-Sheng Chu
01 Nov 2012
TL;DR: In this paper, a method of performing initial optical proximity correction (OPC) with a calibrated lithography simulation model is proposed, which includes providing a photomask having an integrated circuit (IC) pattern formed thereon, acquiring an aerial image of the IC pattern formed on the photomask using an optical microscope, and calibrating an optical component of the lithography model based on the aerial image.
Abstract: A method of performing initial optical proximity correction (OPC) with a calibrated lithography simulation model. The method includes providing a photomask having an integrated circuit (IC) pattern formed thereon, acquiring an aerial image of the IC pattern formed on the photomask using an optical microscope, and calibrating an optical component of the lithography simulation model based on the aerial image. The method also includes exposing and developing a photoresist layer on a semiconductor wafer using the photomask to form a post-development pattern on the photoresist layer, acquiring a post-development image of the post-development pattern on the photoresist layer, and calibrating the photoresist component of the lithography simulation model based on the post-development image. Further, the method includes performing initial optical proximity correction (OPC) on an IC design layout based on a simulation of the IC design layout by the lithography simulation model including the calibrated optical and photoresist components.

Journal ArticleDOI
Z. Lin, G. Su, F. Xie
TL;DR: More than ten aerial photogrammetry missions have been accomplished, and the accuracies of the aerial triangulation, digital orthophotos (DOM) and digital line graphs (DLG) meet the standard requirements of 1:2000, 1:1000 and 1:500 mapping.
Abstract: In this paper, the aforementioned three major aspects related to the Unmanned Aerial Vehicle (UAV) system for low altitude aerial photogrammetry, i.e., the flying platform, the imaging sensor system and the data processing software, are discussed. First of all, according to the technical requirements about the lowest cruising speed, the shortest taxiing distance, the level of the flight control and the performance of turbulence flying, the performance and suitability of the available UAV platforms (e.g., fixed-wing UAVs, unmanned helicopters and unmanned airships) are compared and analyzed. Secondly, considering the restrictions on the load weight of a platform and the resolution pertaining to a sensor, together with the exposure equation and the theory of optical information, emphasis is placed on the principles of designing self-calibrating and self-stabilizing combined wide-angle digital cameras (e.g., double-combined and four-combined cameras). Finally, a software package named MAP-AT, which considers the specifics of UAV platforms and sensors, is developed and introduced. Apart from the common functions of aerial image processing, MAP-AT puts more effort into automatic extraction, automatic checking and manually aided adding of tie points for images with big tilt angles. Based on the process for low altitude photogrammetry with UAVs recommended in this paper, more than ten aerial photogrammetry missions have been accomplished, and the accuracies of the aerial triangulation, digital orthophotos (DOM) and digital line graphs (DLG) meet the standard requirements of 1:2000, 1:1000 and 1:500 mapping.

Proceedings ArticleDOI
11 Jul 2012
TL;DR: This paper presents a complete mosaicing system named EsiReg and gives a brief comparative insight into results of stitching satellite and aerial images using well-known performance metrics.
Abstract: Image registration or image stitching is a central operation in many useful and important tasks in image processing, such as map construction, scanning of large documents and panoramic photo creation. In particular, image mosaicing is used to assemble several overlapping images in order to constitute the global frame. We focus on a feature-point matching method to perform the mosaicing. The SIFT algorithm is used to extract the feature points in both images. The mosaicing result is obtained after transforming the sensed or target image to align it to the reference image. Performing a mosaicing operation is not sufficient to claim reaching the goal; objective metrics must be used to evaluate the resulting mosaic. In this paper we present a complete mosaicing system named EsiReg and give a brief comparative insight into results of stitching satellite and aerial images using well-known performance metrics.
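A hedged sketch of the SIFT-based stitching pipeline described above (keypoints, ratio-test matching, RANSAC homography, perspective warp), using OpenCV; the 0.75 ratio, 5-pixel RANSAC tolerance and doubled canvas width are illustrative assumptions, and EsiReg's evaluation metrics are not reproduced.

```python
import cv2
import numpy as np

def stitch_pair(ref_gray, target_gray):
    """Warp target_gray into the frame of ref_gray (both 8-bit grayscale)."""
    sift = cv2.SIFT_create()
    k1, d1 = sift.detectAndCompute(ref_gray, None)
    k2, d2 = sift.detectAndCompute(target_gray, None)
    matches = cv2.BFMatcher().knnMatch(d2, d1, k=2)      # query=target, train=ref
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    src = np.float32([k2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = ref_gray.shape[:2]
    return cv2.warpPerspective(target_gray, H, (2 * w, h))  # crude canvas size
```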

Proceedings Article
01 Jan 2012
TL;DR: A manifold embedding algorithm is developed to transfer different-sized graphlets into equal-length feature vectors and further integrate these feature vectors into a kernel, which is used to train an SVM classifier for aerial image category recognition.
Abstract: This paper presents a method for recognizing aerial image categories based on matching graphlets (i.e., small connected subgraphs) extracted from aerial images. By constructing a Region Adjacency Graph (RAG) to encode the geometric property and the color distribution of each aerial image, we cast aerial image category recognition as RAG-to-RAG matching. Based on graph theory, RAG-to-RAG matching is conducted by matching all their respective graphlets. Towards an effective graphlet matching process, we develop a manifold embedding algorithm to transfer different-sized graphlets into equal-length feature vectors and further integrate these feature vectors into a kernel. This kernel is used to train an SVM [8] classifier for aerial image category recognition. Experimental results demonstrate that our method outperforms several state-of-the-art object/scene recognition models.

Proceedings ArticleDOI
06 Nov 2012
TL;DR: New aerial image analysis algorithms that, from highway ortho-images, produce lane-level detailed maps and some of the refined image cues about the underlying road network are used to guide a linking process of road-lane hypotheses.
Abstract: This paper presents new aerial image analysis algorithms that, from highway ortho-images, produce lane-level detailed maps. We analyze screenshots of road vectors to obtain the relevant spatial and photometric cues of road image-regions. We then refine the obtained patterns to generate hypotheses about the true road-lanes. A road-lane hypothesis, since it explains only a part of the true road-lane, is then linked to other hypotheses to completely delineate boundaries of the true road-lanes. Finally, some of the refined image cues about the underlying road network are used to guide a linking process of road-lane hypotheses. We tested the accuracy and robustness of our algorithms with high-resolution, inter-city highway ortho-images. Experimental results show promise in producing lane-level detailed highway maps from ortho-image analysis -- 89% of the true road-lane boundary pixels were successfully detected and 337 out of 417 true road-lanes were correctly recovered.

Patent
03 Oct 2012
TL;DR: In this article, a spliced large area array digital aerial camera, which comprises a full color camera module and a multispectral camera module, was presented, where the full-color camera objective lens, a reflective pyramid beam splitting prism and four full color charge coupled devices (CCDs) were arranged symmetrically at the periphery of the camera.
Abstract: The invention discloses a spliced large area array digital aerial camera, which comprises a full color camera module and a multispectral camera module, wherein the full color camera module comprises a full color camera objective lens, a reflective pyramid beam splitting prism and four full color charge coupled devices (CCDs); and the multispectral camera module comprises four multispectral objective lenses and a CCD; the full color camera objective lens is positioned in the middle, and the four multispectral objective lenses and the CCD are symmetrically arranged at the periphery of the full color camera objective lens; the rear of the full color camera objective lens is provided with the reflective pyramid beam splitting prism; and the periphery of the reflective pyramid beam splitting prism is provided with the four full color CCDs. The camera solves the problem that digital aerial cameras have a small breadth and provides a 20k × 20k large-breadth, high-resolution aerial image. The camera comprises wide-angle, middle-angle and normal-angle full color camera objective lenses and multispectral objective lenses; during operation, the full color camera objective lenses at different angles can be flexibly selected according to different mapping applications, and the angles of the multispectral objective lenses are matched with those of the full color camera objective lenses, which solves the problem that the conventional area array digital aerial camera has a small field of view and a single objective lens type. Meanwhile, due to the mode of splicing a single full color objective lens with multiple full color CCDs, the image splicing differences caused by the exposure differences of various full color cameras in the mode of splicing the multispectral objective lenses and the multiple CCDs are avoided.

Journal ArticleDOI
TL;DR: The authors give a short introduction to the lithography roadmap driving the development cycles of the AIMS systems, focusing primarily on the complexity of the structures to be reviewed.
Abstract: Photomasks carry the structured information of the chip designs printed with lithography scanners onto wafers. These structures, for the most modern technologies, are enlarged by a factor of 4 with respect to the final circuit design, and 20 - 60 of these photomasks are needed for the production of a single completed chip used, for example, in computers or cell phones. Lately, designs have been reported to be on the drawing board with close to 100 of these layers. Each of these photomasks will be reproduced onto the wafer several hundred times and typically 5000 - 50 000 wafers will be produced with each of them. Hence, the photomasks need to be absolutely defect-free to avoid any fatal electrical shortcut in the design or drastic performance degradation. One well-known method in the semiconductor industry is to analyze the aerial image of the photomask in a dedicated tool referred to as Aerial Imaging Measurement System, which emulates the behavior of the respective lithography scanner used for the imaging of the mask. High-end lithography scanners use light with a wavelength of 193 nm and high numerical apertures (NAs) of 1.35 utilizing a water film between the last lens and the resist to be illuminated (immersion scanners). Complex illumination shapes enable the imaging of structures well below the wavelength used. Future lithography scanners will work at a wavelength of 13.5 nm (extreme ultraviolet (EUV)) and require the optical system to work with mirrors in vacuum instead of the classical lenses used in current systems. The exact behavior of these systems is emulated by the Aerial Image Measurement System (AIMS™; a trademark of Carl Zeiss). With these systems, any position of the photomask can be imaged under the same illumination condition used by the scanners, and hence, a prediction of the printing behavior of any structure can be derived. This system is used by mask manufacturers in their process flow to review critical defects or verify defect repair success. In this paper, we give a short introduction into the lithography roadmap driving the development cycles of the AIMS systems focusing primarily on the complexity of the structures to be reviewed. Second, we describe the basic principle of the AIMS technology and how it is used. The last section is dedicated to the development of the latest generation of the AIMS for EUV, which is cofinanced by several semiconductor companies in order to close a major gap in the mask manufacturing infrastructure, and the challenges to be met.

Patent
01 Mar 2012
TL;DR: In this paper, integrated circuit (IC) methods for optical proximity correction (OPC) modeling and mask repair are described, which include use of an optical model that generates a simulated aerial image from an actual aerial image obtained in an optical microscope system.
Abstract: Integrated circuit (IC) methods for optical proximity correction (OPC) modeling and mask repair are described. The methods include use of an optical model that generates a simulated aerial image from an actual aerial image obtained in an optical microscope system. In the OPC modeling methods, OPC according to stage modeling is simulated, and OPC features may be added to a design layout according to the simulating OPC. In the mask repair methods, inverse image rendering is performed on the actual aerial image and diffraction image by applying an optical model that divides an incoherent exposure source into a plurality of coherent sources.

Proceedings Article
11 Apr 2012
TL;DR: This paper proposes using cross-correlations between Gabor wavelet coefficients at different orientations as a descriptor for aerial image classification and extends it to color images using a new quaternion representation based on symplectic decomposition.
Abstract: Texture plays a fundamental role in remote sensing image analysis and texture descriptors based on Gabor filter banks provide a reasonable baseline for aerial image classification. It is known that Gabor wavelet coefficients in different sub-bands are correlated, and cross-correlations between Gabor wavelet coefficients at different scales have already been used for image classification. In this paper we propose using cross-correlations between Gabor wavelet coefficients at different orientations as a descriptor for aerial image classification. We extend our descriptor to color images using a new quaternion representation based on symplectic decomposition. On the currently largest publicly available dataset of aerial images we show that the proposed descriptors obtain an 85% correct classification rate, thus improving the state of the art considerably.
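A minimal sketch of the idea of correlating Gabor responses across orientations; the grayscale-only single-frequency bank and the number of orientations are illustrative assumptions, and the paper's quaternion color extension is not reproduced.

```python
import numpy as np
from skimage.filters import gabor

def orientation_correlation_descriptor(gray, frequency=0.2, n_orient=6):
    """Describe a grayscale aerial patch by the correlation coefficients
    between Gabor magnitude responses at different orientations
    (illustrative sketch only)."""
    magnitudes = []
    for k in range(n_orient):
        real, imag = gabor(gray, frequency=frequency, theta=k * np.pi / n_orient)
        magnitudes.append(np.hypot(real, imag).ravel())
    corr = np.corrcoef(np.stack(magnitudes))     # n_orient x n_orient matrix
    iu = np.triu_indices(n_orient, k=1)
    return corr[iu]                              # upper-triangle correlations
```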

Journal ArticleDOI
TL;DR: In this paper, a fast aerial image simulation method is proposed for partially coherent systems by decomposing the transmission cross coefficient (TCC) into analytical kernels, which can be flexibly set to any desired value in aerial image simulations.
Abstract: Aerial image simulation is one of the most critical components in the model-based optical proximity correction (OPC), which has become a necessary part of resolution enhancement techniques used to improve the performance of subwavelength optical lithography. In this paper, a fast aerial image simulation method is proposed for partially coherent systems by decomposing the transmission cross coefficient (TCC) into analytical kernels. The TCC matrix is projected onto a function space whose basis is analytical circle-sampling functions (CSFs) and converted into a much smaller projected matrix. By performing singular value decomposition (SVD) to the projected matrix, its eigenvectors together with the CSFs are used to generate a set of analytical TCC kernels. The proposed method avoids directly performing SVD to the large TCC matrix, making it much more runtime efficient than the conventional SVD method. Furthermore, the grid size of the kernels can be flexibly set to any desired value in aerial image simulations, which is not realizable with the conventional SVD method. The comparison of aerial image intensity errors and edge placement errors calculated by the proposed method and the conventional SVD method has confirmed the validity of the proposed method. An OPC example is also provided to further demonstrate its efficiency.
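For context, the kernel decomposition above targets the standard sum-of-coherent-systems form of the partially coherent aerial image (notation assumed):

```latex
% TCC eigenvalues \sigma_k, kernels \phi_k, mask transmission function M,
% with \otimes denoting convolution.
I(x,y) = \sum_{k} \sigma_k \left| \left(\phi_k \otimes M\right)(x,y) \right|^{2}
% Truncating the sum to the leading kernels is what makes fast aerial image
% simulation for model-based OPC tractable.
```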

Journal Article
Shen Xiang
TL;DR: The main advantage of the proposed algorithm is that the three-dimensional building outlines are extracted directly from LiDAR points without transforming the LiDAR points into a range image or intensity image, and therefore there are no interpolation errors.
Abstract: This paper presents a new algorithm for the automatic registration of airborne LiDAR data with aerial images using building corner features as registration primitives. First, three-dimensional building outlines are directly extracted from LiDAR points, and building corner features which consist of two orthogonal straight lines are obtained by the regularization of the three-dimensional building outlines. Straight lines are also extracted from every aerial image. Second, the building corner features are projected onto the aerial images and corresponding image corner features are determined using similarity measures. Lastly, the exterior orientation parameters are refined by bundle adjustment using the corner points of the corner features as control points. An iteration strategy is adopted to obtain optimal results. The main advantage of the proposed algorithm is that the three-dimensional building outlines are extracted directly from LiDAR points without transforming the LiDAR points into a range image or intensity image, and therefore there are no interpolation errors. The experimental results show that the proposed algorithm can obtain more accurate results in comparison with the registration method based on the LiDAR intensity image.

Proceedings ArticleDOI
29 Jun 2012
TL;DR: In this paper, the impact of all components in the photomask and optical models, including resist, optical, and etch processes, was systematically explored, and the authors provide a pathway to the sub-nanometer accuracy required for 20 nm technology.
Abstract: OPC models describe the entire patterning process, including photomask, optics, resist, and etch as a set of separately characterized modules. It is difficult, however, to definitively calibrate the optics model since the aerial image is not easily measurable. It is therefore common to calibrate relevant optical parameters using a constant threshold applied to the aerial image in order to best fit the experimentally measured photoresist CDs. With the optical model thus fixed, a subsequent tuning of resist model parameters is enabled. Alternatively, both optical and resist parameters can be tuned in a single lumped calibration to the measurement data. The parameters associated with the calibration of the photomask, optical, resist and etch processes can be sorted into three classes. There are parameters that are directly measurable or known as designed-in values, such as wavelength, numerical aperture (NA), illumination source profile, and film stack optical constants. A second class of parameters is associated with physical phenomena, where direct measurement is not done, but rather the model contains mathematical proxies for the parameters. A final class of calibration options includes software options for altering the approximations used in the model, such as number of optical kernels, optical diameter, and the functional form of the resist or etch model. The degree to which the model can faithfully decouple photomask from optical exposure from resist processing is related not only to the details of the resist model, but also to the nature of the approximations "upstream" in the representation of the mask and optical system. This paper will systematically explore the impact of all components in the photomask and optical models, and will provide a pathway to sub-nanometer accuracy required for 20 nm technology.

Patent
05 Jul 2012
TL;DR: In this article, a system and method for detecting a smooth/rough boundary from an aerial image to solve the problem of isolating image features without access to the subject of the image is presented.
Abstract: System and method for detecting a smooth/rough boundary from an aerial image to solve the problem of isolating image features without access to the subject of the image. The system and method convert the image to gray scale, edge pad the converted image, calculate an image entropy based on a distribution of local entropy across the padded, converted image, threshold the image entropy to binarize the padded, converted image, clean noise, and close defects and voids by mathematical morphologically opening and closing the binarized image, and detect the smooth/rough boundary of the opened/closed binarized image as a gradient across the pixels of the opened/closed binarized image resulting in a single pixel width edge. The single pixel width edge can be, for example, provided to numerical prediction models and computer games.
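A hedged sketch of the entropy-threshold-morphology pipeline described above, using scikit-image; the edge-padding step is omitted here and the disk radius is an illustrative assumption, not the patent's parameters.

```python
import numpy as np
from skimage.filters import threshold_otsu
from skimage.filters.rank import entropy
from skimage.morphology import binary_closing, binary_opening, disk

def smooth_rough_boundary(gray_uint8, radius=5):
    """Local entropy separates textured ('rough') from homogeneous ('smooth')
    regions; Otsu binarises the entropy map, opening/closing clean noise and
    small defects, and boundary pixels are taken where the mask changes."""
    ent = entropy(gray_uint8, disk(radius))
    mask = ent > threshold_otsu(ent)
    mask = binary_closing(binary_opening(mask, disk(radius)), disk(radius))
    gy, gx = np.gradient(mask.astype(float))
    return np.hypot(gx, gy) > 0                  # boundary pixels of the mask
```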

Journal ArticleDOI
TL;DR: In this paper, a vector-based approach is proposed to generate seamline networks for aerial image mosaicking, where the straight skeleton of the overlapping area of adjacent images is extracted and the seamline candidate is refined by considering its surrounding pixels.
Abstract: This article proposes a novel approach that uses vector roads for the generation of seamline networks for aerial image mosaicking. A representative seamline is extracted as follows. First, the straight skeleton of the overlapping area of adjacent images is extracted. Second, the vector roads in the overlapping regions are overlaid with the extracted skeleton to build a weighted graph G, and the lowest-cost path in G is considered as a seamline candidate. Finally, the seamline candidate is refined by considering its surrounding pixels. The refined seamline is employed as the final seamline. The resultant seamlines commonly follow the centerlines of roads as much as possible and the mosaic image appears more seamless. Moreover, the vector-based approach is typically more efficient than the existing raster-based approaches: the vector-based approach is nearly ten times faster than a representative raster-based approach.

Patent
Dirk Seidel, Michael Arnz
06 Jul 2012
TL;DR: In this paper, the position of a structure on a mask for microlithography is determined by comparing an aerial image, measured by a recording device, of a portion of the mask with an aerial images determined by simulation.
Abstract: A method and an apparatus for determining the position of a structure on a mask for microlithography, in which the position is determined by comparing an aerial image, measured by a recording device, of a portion of the mask with an aerial image determined by simulation. The position determination includes carrying out a plurality of such comparisons which differ from one another with regard to the input parameters of the simulation.