
Showing papers on "Aerial image published in 2007"


Book
01 Jan 2007
TL;DR: In this book, the author presents the fundamentals of semiconductor lithography, from aerial image formation and imaging in resist through exposure, bake and development chemistry, to lithographic control, gradient-based optimization using the normalized image log-slope, and resolution enhancement technologies.
Abstract: Preface. 1. Introduction to Semiconductor Lithography. 1.1 Basics of IC Fabrication. 1.2 Moore's Law and the Semiconductor Industry. 1.3 Lithography Processing. Problems. 2. Aerial Image Formation - The Basics. 2.1 Mathematical Description of Light. 2.2 Basic Imaging Theory. 2.3 Partial Coherence. 2.4 Some Imaging Examples. Problems. 3. Aerial Image Formation - The Details. 3.1 Aberrations. 3.2 Pupil Filters and Lens Apodization. 3.3 Flare. 3.4 Defocus. 3.5 Imaging with Scanners Versus Steppers. 3.6 Vector Nature of Light. 3.7 Immersion Lithography. 3.8 Image Quality. Problems. 4. Imaging in Resist: Standing Waves and Swing Curves. 4.1 Standing Waves. 4.2 Swing Curves. 4.3 Bottom Antireflection Coatings. 4.4 Top Antireflection Coatings. 4.5 Contrast Enhancement Layer. 4.6 Impact of the Phase of the Substrate Reflectance. 4.7 Imaging in Resist. 4.8 Defining Intensity. Problems. 5. Conventional Resists: Exposure and Bake Chemistry. 5.1 Exposure. 5.2 Post-Apply Bake. 5.3 Post-exposure Bake Diffusion. 5.4 Detailed Bake Temperature Behavior. 5.5 Measuring the ABC Parameters. Problems. 6. Chemically Amplified Resists: Exposure and Bake Chemistry. 6.1 Exposure Reaction. 6.2 Chemical Amplification. 6.3 Measuring Chemically Amplified Resist Parameters. 6.4 Stochastic Modeling of Resist Chemistry. Problems. 7. Photoresist Development. 7.1 Kinetics of Development. 7.2 The Development Contrast. 7.3 The Development Path. 7.4 Measuring Development Rates. Problems. 8. Lithographic Control in Semiconductor Manufacturing. 8.1 Defining Lithographic Quality. 8.2 Critical Dimension Control. 8.3 How to Characterize Critical Dimension Variations. 8.4 Overlay Control. 8.5 The Process Window. 8.6 H-V Bias. 8.7 Mask Error Enhancement Factor (MEEF). 8.8 Line-End Shortening. 8.9 Critical Shape and Edge Placement Errors. 8.10 Pattern Collapse. Problems. 9. Gradient-Based Lithographic Optimization: Using the Normalized Image Log-Slope. 9.1 Lithography as Information Transfer. 9.2 Aerial Image. 9.3 Image in Resist. 9.4 Exposure. 9.5 Post-exposure Bake. 9.6 Develop. 9.7 Resist Profile Formation. 9.8 Line Edge Roughness. 9.9 Summary. Problems. 10. Resolution Enhancement Technologies. 10.1 Resolution. 10.2 Optical Proximity Correction (OPC). 10.3 Off-Axis Illumination (OAI). 10.4 Phase-Shifting Masks (PSM). 10.5 Natural Resolutions. Problems. Appendix A. Glossary of Microlithographic Terms. Appendix B. Curl, Divergence, Gradient, Laplacian. Appendix C. The Dirac Delta Function. Index.

514 citations


Journal ArticleDOI
Jiuxiang Hu1, Anshuman Razdan1, John Femiani1, Ming Cui1, Peter Wonka1 
TL;DR: An automatic road seeding method based on rectangular approximations to road footprints and a toe-finding algorithm to classify footprints for growing a road tree are proposed, and a lognormal distribution is introduced to characterize the conditional probability of the A/P ratios of the footprints in the road tree.
Abstract: In this paper, a new two-step approach (detecting and pruning) for automatic extraction of road networks from aerial images is presented. The road detection step is based on shape classification of a local homogeneous region around a pixel. The local homogeneous region is enclosed by a polygon, called the footprint of the pixel. This step involves detecting road footprints, tracking roads, and growing a road tree. We use a spoke wheel operator to obtain the road footprint. We propose an automatic road seeding method based on rectangular approximations to road footprints and a toe-finding algorithm to classify footprints for growing a road tree. The road tree pruning step makes use of a Bayes decision model based on the area-to-perimeter ratio (the A/P ratio) of the footprint to prune the paths that leak into the surroundings. We introduce a lognormal distribution to characterize the conditional probability of A/P ratios of the footprints in the road tree and present an automatic method to estimate the parameters that are related to the Bayes decision model. Results are presented for various aerial images. Evaluation of the extracted road networks using representative aerial images shows that the completeness of our road tracker ranges from 84% to 94%, correctness is above 81%, and quality is from 82% to 92%.
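As a rough illustration of the pruning rule described above, the sketch below computes the area-to-perimeter ratio of a footprint polygon and applies a Bayes decision with lognormal class-conditional densities. The distribution parameters and the prior are illustrative placeholders, not values from the paper.

```python
# Sketch of the A/P-ratio Bayes pruning idea (all parameters are illustrative, not from the paper).
import numpy as np
from scipy.stats import lognorm

def area_perimeter_ratio(xs, ys):
    """A/P ratio of a simple polygon given vertex coordinates (shoelace formula)."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    x2, y2 = np.roll(xs, -1), np.roll(ys, -1)
    area = 0.5 * abs(np.sum(xs * y2 - x2 * ys))
    perimeter = np.sum(np.hypot(x2 - xs, y2 - ys))
    return area / perimeter

# Hypothetical lognormal class-conditional densities for road and non-road footprints.
road = lognorm(s=0.3, scale=4.0)        # assumed shape/scale
background = lognorm(s=0.6, scale=10.0)
prior_road = 0.7                        # assumed prior

def keep_footprint(ap):
    """Bayes decision: keep the footprint in the road tree if P(road | A/P) > P(background | A/P)."""
    p_road = road.pdf(ap) * prior_road
    p_bg = background.pdf(ap) * (1.0 - prior_road)
    return p_road > p_bg

ap = area_perimeter_ratio([0, 40, 40, 0], [0, 0, 6, 6])  # elongated, road-like rectangle
print(round(ap, 2), keep_footprint(ap))
```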

344 citations


Journal ArticleDOI
TL;DR: This letter proposes a two-step method for tree detection consisting of segmentation followed by classification using weighted features from aerial image and lidar, such as height, texture map, height variation, and normal vector estimates.
Abstract: In this letter, we present an approach to detecting trees in registered aerial image and range data obtained via lidar. The motivation for this problem comes from automated 3-D city modeling, in which such data are used to generate the models. Representing the trees in these models is problematic because the data are usually too sparsely sampled in tree regions to create an accurate 3-D model of the trees. Furthermore, including the tree data points interferes with the polygonization step of the building roof top models. Therefore, it is advantageous to detect and remove points that represent trees in both lidar and aerial imagery. In this letter, we propose a two-step method for tree detection consisting of segmentation followed by classification. The segmentation is done using a simple region-growing algorithm with weighted features from the aerial image and lidar, such as height, texture map, height variation, and normal vector estimates. The weights for the features are determined using a learning method based on random walks. The classification is done using weighted support vector machines, allowing us to control the misclassification rate. The overall problem is formulated as a binary detection problem, and the results, presented as receiver operating characteristic curves, validate our approach.
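A minimal sketch of the classification stage follows, using scikit-learn's class-weighted SVM on synthetic per-segment features (height, texture, height variation, normal component); the feature values and class weights are assumptions for illustration only.

```python
# Class-weighted SVM over per-segment features; data and weights are synthetic placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Columns: height, texture measure, height variation, |normal . z| (assumed feature set).
trees = rng.normal([8.0, 0.7, 2.0, 0.4], 0.3, size=(50, 4))
other = rng.normal([6.0, 0.2, 0.3, 0.95], 0.3, size=(50, 4))
X = np.vstack([trees, other])
y = np.array([1] * 50 + [0] * 50)  # 1 = tree, 0 = building/ground

# A larger weight on the non-tree class trades detections for fewer false alarms,
# mimicking the paper's control over the misclassification rate.
clf = SVC(kernel="rbf", class_weight={0: 3.0, 1: 1.0}).fit(X, y)
print(clf.predict([[7.5, 0.65, 1.8, 0.45]]))
```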

158 citations


Journal ArticleDOI
TL;DR: An algorithm to automatically extract power lines from aerial images acquired by an aerial digital camera onboard a helicopter is presented; it has been successfully applied in the China National 863 project for power line surveillance, 3-D reconstruction, and modeling.
Abstract: There has been little investigation of the automatic extraction of power lines from aerial images in past decades, owing to the low resolution of aerial images. With advances in aerial photogrammetric and sensor technology, it is now possible for photogrammetrists to monitor the status of power lines. This letter analyzes the properties of imaged power lines and presents an algorithm to automatically extract power lines from aerial images acquired by an aerial digital camera onboard a helicopter. The algorithm first uses a Radon transform to extract line segments of the power line, then uses a grouping method to link the segments, and finally applies Kalman filtering to connect the segments into an entire line. We compared our algorithm with the line mask detector method and the ratio line detector, and evaluated their performances. The experimental results demonstrate that our algorithm can successfully extract power lines from aerial images regardless of background complexity. The presented method has been successfully applied in the China National 863 project for power line surveillance, 3-D reconstruction, and modeling.
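The sketch below illustrates only the first step (Radon-based line detection) on a synthetic image; the grouping and Kalman-filter linking stages described in the abstract are omitted.

```python
# Radon transform to find the dominant line in a synthetic "power line" image.
import numpy as np
from skimage.transform import radon

img = np.zeros((128, 128))
rows = np.arange(128)
img[rows, (0.3 * rows + 20).astype(int)] = 1.0          # synthetic bright line

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(img, theta=theta, circle=False)        # rows: radial offset, cols: angle
offset_idx, angle_idx = np.unravel_index(np.argmax(sinogram), sinogram.shape)
print("dominant line angle (deg):", theta[angle_idx])
```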

150 citations


Proceedings ArticleDOI
05 Oct 2007
TL;DR: A newly developed sub-resolution assist feature (SRAF) placement technique with two-dimensional transmission cross coefficient (2D-TCC) is described in this paper and can be automatically optimized to the given optical condition to generate the optimized reticle.
Abstract: A newly developed sub-resolution assist feature (SRAF) placement technique with two-dimensional transmission cross coefficient (2D-TCC) is described in this paper. In SRAF placement with 2D-TCC, Hopkins' aerial image equation with four-dimensional TCC is decomposed into the sum of Fourier transforms of diffracted light weighted by 2D-TCC, introducing an approximated aerial image so as to place SRAFs into a given reticle layout. SRAFs are placed at peak positions of the approximated aerial image for enhanced resolution. Since the approximated aerial image can handle the full optical model, SRAFs can be automatically optimized to the given optical condition to generate the optimized reticle. The validity of this technique was confirmed by experiment using a Canon FPA6000-ES6a, 248 nm with a numerical aperture (NA) of 0.86. A binary reticle optimized by this technique with mild off-axis illumination was used in the experiment. Both isolated and dense 100 nm contacts (k1 = 0.35) were simultaneously resolved with the aid of this technique.
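As a loose illustration of "place SRAFs at peaks of an approximated aerial image", the sketch below convolves a contact-hole mask with a sinc-like kernel that stands in for the 2D-TCC decomposition and marks off-feature local maxima as SRAF candidates; the kernel shape and thresholds are arbitrary assumptions.

```python
# Toy version of peak-based SRAF seeding on an approximated aerial image.
import numpy as np
from scipy.signal import fftconvolve
from scipy.ndimage import maximum_filter

mask = np.zeros((256, 256))
mask[120:136, 60:76] = 1.0      # isolated contact
mask[120:136, 180:196] = 1.0    # second contact

yy, xx = np.mgrid[-32:33, -32:33]
kernel = np.sinc(np.hypot(xx, yy) / 10.0)            # sinc-like point spread with sidelobes (illustrative)
field = fftconvolve(mask, kernel, mode="same")       # coherent-limit image amplitude
intensity = field ** 2

# SRAF candidates: local maxima of the approximated image that do not overlap main features.
local_max = (intensity == maximum_filter(intensity, size=15)) & (intensity > 0.01 * intensity.max())
sraf_sites = local_max & (mask == 0)
print("candidate SRAF pixels:", int(sraf_sites.sum()))
```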

104 citations


Journal ArticleDOI
TL;DR: A new multicomponent image segmentation method is developed using a nonparametric unsupervised artificial neural network called Kohonen's self-organizing map (SOM) and hybrid genetic algorithm (HGA) that is used to detect the main features that are present in the image.
Abstract: Image segmentation is an essential process for image analysis. Several methods have been developed to segment multicomponent images, and the success of these methods depends on several factors, including (1) the characteristics of the acquired image and (2) the percentage of imperfections in the process of image acquisition. The majority of these methods require a priori knowledge, which is difficult to obtain. Furthermore, they assume the existence of models that can estimate their parameters and fit the given data. However, such a parametric approach is not robust, and its performance is severely affected by the correctness of the utilized parametric model. In this letter, a new multicomponent image segmentation method is developed using a nonparametric unsupervised artificial neural network called Kohonen's self-organizing map (SOM) and a hybrid genetic algorithm (HGA). The SOM is used to detect the main features that are present in the image; the HGA is then used to cluster the image into homogeneous regions without any a priori knowledge. Experiments performed on different satellite images confirm the efficiency and robustness of the SOM-HGA method compared to the Iterative Self-Organizing Data Analysis Technique (ISODATA).
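A minimal sketch of the two-stage idea follows: a tiny numpy SOM learns prototype feature vectors, and the prototypes are then grouped into regions, with k-means standing in for the paper's hybrid genetic algorithm. The one-band "image" and all parameters are synthetic.

```python
# Small SOM for prototype learning, then clustering of the prototypes (k-means as a stand-in for HGA).
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(0.2, 0.05, 2000), rng.normal(0.7, 0.05, 2000)])  # synthetic 1-band "image"
X = img.reshape(-1, 1)

# --- tiny 1-D SOM ---
n_nodes, iters = 16, 3000
W = rng.uniform(0, 1, (n_nodes, 1))                              # prototype vectors
for t in range(iters):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(np.linalg.norm(W - x, axis=1))               # best-matching unit
    lr = 0.5 * (1 - t / iters)                                   # decaying learning rate
    sigma = max(1.0, n_nodes / 2 * (1 - t / iters))              # decaying neighborhood width
    h = np.exp(-((np.arange(n_nodes) - bmu) ** 2) / (2 * sigma ** 2))
    W += lr * h[:, None] * (x - W)

# --- group the prototypes into 2 regions (stand-in for the HGA step) ---
_, node_label = kmeans2(W, 2, minit="points", seed=0)
pixel_label = node_label[np.argmin(np.abs(X - W.T), axis=1)]
print(np.bincount(pixel_label))
```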

86 citations


Patent
04 Jun 2007
TL;DR: In this paper, a model-based sub-resolution assist feature (MB-SRAF) is proposed, where each design target edge location votes for a given field point on whether a single-pixel SRAF placed on this field point would improve or degrade the aerial image over the process window.
Abstract: Methods are disclosed to create efficient model-based Sub-Resolution Assist Features (MB-SRAF). An SRAF guidance map is created, where each design target edge location votes for a given field point on whether a single-pixel SRAF placed on this field point would improve or degrade the aerial image over the process window. In one embodiment, the SRAF guidance map is used to determine SRAF placement rules and/or to fine tune already-placed SRAFs. In another embodiment the SRAF guidance map is used directly to place SRAFs in a mask layout.
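The sketch below mimics the guidance-map idea in a toy form: each target edge pixel adds a positive vote over one distance band and a negative vote over another, and strongly positive off-target locations become SRAF seeds. The bands and weights are placeholders, not values from the patent.

```python
# Toy vote accumulation over a field grid from design target edge pixels.
import numpy as np

layout = np.zeros((200, 200), bool)
layout[90:110, 90:110] = True                                    # one target feature
edges = (layout ^ np.roll(layout, 1, 0)) | (layout ^ np.roll(layout, 1, 1))

yy, xx = np.mgrid[0:200, 0:200]
guidance = np.zeros((200, 200))
for y, x in zip(*np.nonzero(edges)):
    d = np.hypot(yy - y, xx - x)
    guidance += ((d > 15) & (d < 25)) * 1.0                      # "improves the image" band (assumed)
    guidance -= ((d > 5) & (d < 15)) * 0.5                       # "degrades the image" band (assumed)

# SRAFs would be seeded where the accumulated vote is strongly positive and off the target.
seeds = (guidance > 0.8 * guidance.max()) & ~layout
print(int(seeds.sum()), "candidate SRAF pixels")
```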

82 citations


Patent
Yuichi Shibazaki1
21 Feb 2007
TL;DR: In this article, the position of the wafer stage in a direction parallel to the optical axis of the projection optical system is adjusted with high accuracy based on the measurement result of the best focus position.
Abstract: A partial section of an aerial image measuring unit is arranged at a wafer stage and part of the remaining section is arranged at a measurement stage, and the aerial image measuring unit measures an aerial image of a mark formed by a projection optical system. Therefore, for example, when the aerial image measuring unit measures a best focus position of the projection optical system, the measurement can be performed using the position of the wafer stage, at which a partial section of the aerial image measuring unit is arranged, in a direction parallel to an optical axis of the projection optical system as a datum for the best focus position. Accordingly, when exposing an object with illumination light, the position of the wafer stage in the direction parallel to the optical axis is adjusted with high accuracy based on the measurement result of the best focus position.

76 citations


01 Jan 2007
TL;DR: The classification tree method appeared to be a feasible and highly automatic approach for distinguishing buildings from trees and the results suggest that satisfactory building detection results can be obtained with different combinations of input data sources.
Abstract: A classification tree based approach for building detection was tested. A digital surface model (DSM) derived from last pulse laser scanner data was first segmented, and the segments were classified into the classes 'ground' and 'building or tree' on the basis of preclassified laser points. 'Building or tree' segments were further classified into buildings and trees by using the classification tree method. Four classification tests were carried out using different combinations of 44 input attributes. The attributes were derived from the last pulse DSM, the first pulse DSM and an aerial colour ortho image. In addition, shape attributes calculated for the segments were used. The attributes of the training segments were presented as input data to the classification tree method, which automatically constructed a classification tree for each test. The trees were then applied to the classification of a separate test area. Compared with a building map, a mean accuracy of almost 90% was achieved for buildings in each test. The classification tree method appeared to be a feasible and highly automatic approach for distinguishing buildings from trees. If new data sources become available in the future, they can easily be included in the classification process. The results also suggest that satisfactory building detection results can be obtained with different combinations of input data sources. By using a statistical method, it is possible to find useful attributes and classification rules in different cases. The use of an aerial image, or of both first pulse and last pulse laser scanner data, does not necessarily improve the results significantly compared with a classification that uses only last pulse laser scanner data.
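A compact sketch of the classification-tree step using scikit-learn on synthetic segment attributes; the three attributes below stand in for the paper's 44 laser, image and shape features.

```python
# Decision tree over per-segment attributes; the data are synthetic stand-ins.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(2)
# Columns: height above ground, first-minus-last-pulse height difference, segment compactness.
buildings = rng.normal([6.0, 0.2, 0.8], [2.0, 0.1, 0.1], size=(60, 3))
trees     = rng.normal([8.0, 3.0, 0.4], [3.0, 1.0, 0.1], size=(60, 3))
X = np.vstack([buildings, trees])
y = np.array(["building"] * 60 + ["tree"] * 60)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["height", "fp_lp_diff", "compactness"]))
```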

62 citations


Proceedings ArticleDOI
26 Dec 2007
TL;DR: An on-line boosting algorithm is used to incrementally improve the detection results of an efficient car detector for aerial images with minimal hand labeling effort, and it is shown that results similar to hand labeling are obtained by iteratively applying this strategy.
Abstract: This paper demonstrates how to reduce the hand labeling effort considerably by using 3D information in an object detection task. In particular, we demonstrate how an efficient car detector for aerial images can be built with minimal hand labeling effort. We use an on-line boosting algorithm to incrementally improve the detection results. Initially, we train the classifier with a single positive (car) example, randomly drawn from a fixed number of given samples. When applying this detector to an image we obtain many false positive detections. We use information from a stereo matcher to detect some of these false positives (e.g. detected cars on a facade) and feed this information back to the classifier as negative updates. This improves the detector considerably, thus reducing the number of false positives. We show that we obtain results similar to hand labeling by iteratively applying this strategy. The performance of our algorithm is demonstrated on digital aerial images of urban environments.
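The loop below is a schematic of the feedback idea, with a linear SGD classifier standing in for on-line boosting and a fake stereo-height check standing in for the stereo matcher; all features and thresholds are invented for illustration.

```python
# Start from one positive example, then feed stereo-rejected false positives back as negatives.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
cars = rng.normal(1.0, 0.2, (5, 4))              # hypothetical appearance features of car windows
facade = rng.normal(0.6, 0.6, (50, 4))           # hypothetical facade windows (some look car-like)
scene = np.vstack([cars, facade])
height = np.array([0.0] * 5 + [8.0] * 50)        # stereo height above ground (cars sit near 0 m)

clf = SGDClassifier(random_state=0)
clf.partial_fit(cars[:1], [1], classes=[0, 1])   # train with a single positive example

for _ in range(10):                              # detect, reject by 3D, feed back negatives
    hits = clf.decision_function(scene) > 0
    false_pos = hits & (height > 2.0)            # detections on elevated structures cannot be cars
    if not false_pos.any():
        break
    clf.partial_fit(scene[false_pos], np.zeros(int(false_pos.sum()), dtype=int))

print("detections after feedback:", int((clf.decision_function(scene) > 0).sum()))
```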

60 citations


Patent
24 Oct 2007
TL;DR: In this article, the authors proposed a method of defining a game zone for a video game system consisting of a remotely-controlled vehicle and an electronic entity for remotely controlling the vehicle.
Abstract: The invention relates to a method of defining a game zone for a video game system. The system comprises a remotely-controlled vehicle (1) and an electronic entity (3) for remotely controlling the vehicle (1), the method comprising the following steps: acquiring the terrestrial position of the vehicle (1) via a position sensor (37) arranged on the vehicle (1); transmitting the terrestrial position of the vehicle (1) to the electronic entity (3); establishing a connection between the electronic entity (3) and a database (17) containing aerial images of the Earth; in the database (17), selecting an aerial image corresponding to the terrestrial position transmitted to the electronic entity (3); downloading the selected aerial image from the database (17) to the electronic entity (3); and incorporating the downloaded aerial image in a video game being executed on the electronic entity (3).

Patent
11 Jan 2007
TL;DR: A model of an exposure lithography system for chip fabrication is adapted to accept the band-limited mask pattern as input, from which an aerial image of the mask pattern is obtained and processed with a photoresist model to yield a resist-modeled image.
Abstract: A method for identifying lithographically significant defects. A photomask is illuminated to produce images that experience different parameters of the reticle as imaged by an inspection tool. Example parameters include a transmission intensity image and a reflection intensity image. The images are processed together to recover a band limited mask pattern associated with the photomask. A model of an exposure lithography system for chip fabrication is adapted to accommodate the band limited mask pattern as an input which is input into the model to obtain an aerial image of the mask pattern that is processed with a photoresist model yielding a resist-modeled image. The resist-modeled image is used to determine if the photomask has lithographically significant defects.

Patent
22 Aug 2007
TL;DR: In this article, the authors present an aerial projection system based on a set of rules that eliminate boundary transgressions and maximizes the illusion of a 3D aerial image, which can be used for special effects or for providing the appearance of linear motion towards or away from the observer.
Abstract: An aerial projection system and method having a housing for positioning low cost optical elements capable of generating three dimensional aerial images at video rates without reflected artifacts or visible display of the display screen. A method for generating the display images is based on a set of rules that eliminates boundary transgressions and maximizes the illusion of a three dimensional aerial image. An optional second display is a transparent imaging panel that acts selectively as a light valve, as a display platform for special effects, or for providing the appearance of linear motion towards or away from the observer. The aerial projection system includes a plastic spherical mirror with at least the following characteristics: a mirror surface of sufficient sphericity supported by wall structures, a plastic material formulation with an excellent optical grade finish, a reflective metal coating, and a protective overcoat.

Patent
Ashutosh Garg1, Mayur Datar1
30 Mar 2007
TL;DR: In this article, a system receives a request from a client and provides an aerial image to the client in response to the request, which includes an advertisement superimposed on the aerial image.
Abstract: A system receives a request from a client and provides an aerial image to the client in response to the request. The aerial image includes an advertisement superimposed on the aerial image.

Journal ArticleDOI
TL;DR: Comparisons to a hybrid condition (aerial-with-turns) differentiated the behavioral and brain consequences attributable to changes in orientation from those attributable to other characteristics of ground-level and aerial perspectives, providing leverage on how orientation information is processed in everyday spatial learning.
Abstract: Ground-level and aerial perspectives in virtual space provide simplified conditions for investigating differences between exploratory navigation and map reading in large-scale environmental learning. General similarities and differences in ground-level and aerial encoding have been identified, but little is known about the specific characteristics that differentiate them. One such characteristic is the need to process orientation; ground-level encoding (and navigation) typically requires dynamic orientations, whereas aerial encoding (and map reading) is typically conducted in a fixed orientation. The present study investigated how this factor affected spatial processing by comparing ground-level and aerial encoding to a hybrid condition: aerial-with-turns. Experiment 1 demonstrated that scene recognition was sensitive to both perspective (ground-level or aerial) and orientation (dynamic or fixed). Experiment 2 investigated brain activation during encoding, revealing regions that were preferentially activated by perspective as in previous studies (Shelton and Gabrieli in J Neurosci 22:2711–2717, 2002), but also identifying regions that were preferentially activated as a function of the presence or absence of turns. Together, these results differentiated the behavioral and brain consequences attributable to changes in orientation from those attributable to other characteristics of ground-level and aerial perspectives, providing leverage on how orientation information is processed in everyday spatial learning.

Patent
Haim Feldman1
28 Dec 2007
TL;DR: A coherent decomposition of the optical system is computed based on the coherence characteristic of the optical system; it comprises a series of expansion functions having angular and radial components that are expressed as explicit functions.
Abstract: A method for generating a simulated aerial image of a mask projected by an optical system includes determining a coherence characteristic of the optical system. A coherent decomposition of the optical system is computed based on the coherence characteristic. The decomposition includes a series of expansion functions having angular and radial components that are expressed as explicit functions. The expansion functions are convolved with a transmission function of the mask in order to generate the simulated aerial image.
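A sum-of-coherent-systems style sketch of the idea: the aerial image is approximated as a weighted sum of squared magnitudes of the mask transmission convolved with decomposition kernels. The two kernels and weights below are arbitrary illustrative functions, not the patent's angular/radial expansion functions.

```python
# Weighted sum of |mask (*) kernel|^2 terms as a stand-in for the coherent decomposition.
import numpy as np
from scipy.signal import fftconvolve

yy, xx = np.mgrid[-24:25, -24:25]
r = np.hypot(xx, yy)
kernels = [np.sinc(r / 8.0), np.sinc(r / 8.0) * np.cos(np.arctan2(yy, xx))]  # illustrative kernels
weights = [1.0, 0.3]                                                          # illustrative weights

mask = np.zeros((128, 128))
mask[56:72, 40:56] = 1.0                         # a single transparent opening

aerial = sum(w * np.abs(fftconvolve(mask, k, mode="same")) ** 2
             for w, k in zip(weights, kernels))
print("peak intensity (arbitrary units):", float(aerial.max()))
```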

Journal ArticleDOI
TL;DR: This article extends the earlier framework for image prewarping to solve the mask design problem for coherent, incoherent, and partially coherent imaging systems and discusses the synthesis of three variants of phase shift masks (PSM); namely, attenuated (or weak) PSM, 100% transmission PSM, and strong PSM with chrome.

Proceedings ArticleDOI
27 Nov 2007
TL;DR: This study describes the role of image segmentation via clustering, which is capable of both simplifying computation and accelerating convergence, and has potential applications in national defense and resource exploitation.
Abstract: For underwater and aerial images, dispersion in the atmosphere and fluctuation in current flow are essential factors to consider. It is inevitable that these types of images will be affected by uncertainties. As a result, image segmentation is especially useful for the processing of underwater and aerial images. Segmentation acts as a basic approach to clarify both feature ambiguity and information noise. It categorizes an image into separate parts which correlate with the objects or areas involved. Image segmentation by clustering refers to grouping similar data points into different clusters. K-means clustering requires that the number of partitioning clusters be specified and that its distance metric be defined to quantify the relative orientation of objects. As a competitive learning method, the winner-take-all (WTA) methodology has been selected to update one particular cluster centroid each time, which is an effective and optimal approach. K-means clustering is capable of both simplifying computation and accelerating convergence. To evaluate the role of image segmentation in the image processing process, quantitative measures should be defined. The discrete entropy of a grayscale image is a statistical measure of randomness which can be used to characterize original and segmented images. The measure of the proximity between the probability density functions of the clustered and original images is described as relative entropy. Both measures are proposed to further study the influence of image segmentation via clustering. This study has potential applications in national defense and resource exploitation.
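The two quality measures described above can be computed from gray-level histograms as sketched below: k-means (with winner-take-all assignment) segments a synthetic one-band image, and the discrete entropy and relative entropy are evaluated on the original and clustered histograms.

```python
# Discrete entropy and relative entropy (KL divergence) of original vs. clustered gray levels.
import numpy as np
from scipy.cluster.vq import kmeans2

def histogram_probs(img, bins=256):
    counts, _ = np.histogram(img, bins=bins, range=(0, 255))
    return counts / counts.sum()

def entropy(p):
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

rng = np.random.default_rng(4)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 5000)]).clip(0, 255)

# K-means with k = 2; each pixel takes the gray value of its winning cluster centroid.
centroids, labels = kmeans2(img.reshape(-1, 1), 2, minit="points", seed=0)
segmented = centroids[labels].ravel()

p, q = histogram_probs(img), histogram_probs(segmented)
print("discrete entropy, original :", entropy(p))
print("discrete entropy, clustered:", entropy(q))

q_s = (q + 1e-9) / (q + 1e-9).sum()              # smooth q to avoid empty bins in the ratio
nz = p > 0
print("relative entropy D(p||q):", np.sum(p[nz] * np.log2(p[nz] / q_s[nz])))
```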

Proceedings ArticleDOI
Robert L. Bristol1
TL;DR: In this article, a simple analytical model for line-edge roughness in chemically amplified resists is derived from an account of stochastic fluctuations of photon ("shot noise") and acid number densities.
Abstract: A simple analytical model for line-edge roughness in chemically amplified resists is derived from an accounting of stochastic fluctuations of photon ("shot noise") and acid number densities. Statistics from this counting exercise are applied to a region defined by the effective acid diffusion length; these statistics are then modulated by the slope of the image intensity to produce a value for LER. The model produces the familiar dependence of LER on aerial image (more specifically on latent image) and dose also seen in many other models and data. The model is then applied to the special case of interference imaging, for which the aerial image is a simple, known analytic function. The resulting expression is compared to experimental data at both relatively large half-pitches, printed at 257 nm, and sub-50 nm half-pitches printed at 13.5 nm and hyper-NA 193 nm. The model captures the primary scaling trends seen at the larger length scales; however, at sub-50 nm half-pitches problems arise. It appears that additional effects not covered by counting photons and acids become increasingly important as length scales drop below about 50 nm. These additional effects will require increased attention in order to improve LER in lockstep with diminishing CD and pitch.

Patent
14 Dec 2007
TL;DR: A system and methods for accessing and displaying three-dimensional data through a panoramic image, in which point or area selections in a panoramic viewer are transformed into three-dimensional coordinates and used to update the panoramic image and the corresponding map or aerial image on the client.
Abstract: The proposed invention defines a system and methods for accessing and displaying three dimensional data through a panoramic image. Three dimensional data comprised of points and polygons is stored in a spatial database on a server and is delivered to the client on demand through point or area selections defined in a panoramic image viewer in the 2D spherical coordinate system of the panoramic image. A number of different processes are defined for transforming these two dimensional spherical coordinates into a three dimensional coordinate, the result of which is returned to the client and used to update the panoramic image and corresponding map or aerial image in the client side application.

Proceedings ArticleDOI
04 Jun 2007
TL;DR: The objective of this algorithm is to register aerial images having only partial overlap that are also geometrically distorted due to different sensing conditions and may additionally be contaminated with noise, blurred, etc.
Abstract: In this paper an algorithm for aerial image registration is proposed. The objective of this algorithm is to register aerial images having only partial overlap that are also geometrically distorted due to different sensing conditions and may additionally be contaminated with noise, blurred, etc. The geometric distortions considered in the registration process are rotation, translation and scaling. The proposed algorithm consists of three main steps: feature point extraction using a feature point extractor based on scale-interaction of Mexican-hat wavelets; obtaining the correspondence between the feature points of the first (reference) and the second image based on Zernike moments of neighborhoods centered on the feature points; and estimating the transformation parameters between the first and the second images using an iterative weighted least squares algorithm. Experimental results illustrate the accuracy of image registration for images with partial overlap in the presence of additional image distortions, such as noise contamination and image blurring.
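The sketch below reproduces only the last step: estimating rotation, scale and translation from putative point correspondences with an iteratively re-weighted least-squares fit. The Mexican-hat feature extraction and Zernike-moment matching are not reproduced, and the synthetic correspondences include a few outliers.

```python
# Iteratively re-weighted least squares for a similarity transform (rotation, scale, translation).
import numpy as np

rng = np.random.default_rng(5)
src = rng.uniform(0, 100, (30, 2))
theta_true = np.deg2rad(15.0)
R = np.array([[np.cos(theta_true), -np.sin(theta_true)],
              [np.sin(theta_true),  np.cos(theta_true)]])
dst = 1.2 * src @ R.T + np.array([10.0, -5.0]) + rng.normal(0, 0.3, (30, 2))
dst[:3] += 25.0                                   # a few wrong correspondences

# Linear model: x' = a*x - b*y + tx,  y' = b*x + a*y + ty, with a = s*cos(theta), b = s*sin(theta).
n = len(src)
A = np.zeros((2 * n, 4))
A[0::2] = np.c_[src[:, 0], -src[:, 1], np.ones(n), np.zeros(n)]
A[1::2] = np.c_[src[:, 1],  src[:, 0], np.zeros(n), np.ones(n)]
rhs = dst.ravel()

w = np.ones(n)
for _ in range(5):
    sw = np.repeat(np.sqrt(w), 2)                 # weighted least squares via row scaling
    p, *_ = np.linalg.lstsq(A * sw[:, None], rhs * sw, rcond=None)
    res = np.linalg.norm((A @ p - rhs).reshape(-1, 2), axis=1)
    w = 1.0 / (1.0 + res ** 2)                    # down-weight outlying correspondences

a, b, tx, ty = p
print("scale:", np.hypot(a, b), "rotation (deg):", np.degrees(np.arctan2(b, a)), "translation:", (tx, ty))
```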

Journal Article
TL;DR: An algorithm for the automatic extraction of 550 kV power lines from complex backgrounds in aerial images is presented; linear feature operators are designed to resist strong noise, and potential power line pixels are acquired by a ratio operator.
Abstract: With the development of aerial photogrammetric technology and improvements in digital camera spatial resolution, it has become possible to use photogrammetric technology for the inspection of power lines. However, little has been published on methods for the automatic extraction of power lines from aerial images. In this paper, an algorithm for the automatic extraction of 550 kV power lines from complex backgrounds in aerial images is presented. Linear feature operators are designed to resist strong noise, and potential power line pixels are acquired by a ratio operator. A partial Radon transform is used to acquire and link the segments. Gaps between the power line segments are filled using a Kalman-filter-like tracing method. The proposed method is validated on the experimental images.
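As a toy version of the gap-filling step, the sketch below tracks a line's column position row by row with a constant-velocity Kalman filter and predicts across rows where no line pixel was detected; the noise levels and measurements are placeholders.

```python
# Constant-velocity Kalman filter bridging gaps between detected line segments.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])    # state: [column, column drift per row]
H = np.array([[1.0, 0.0]])
Q = np.diag([0.05, 0.01])                 # assumed process noise
Rm = np.array([[1.0]])                    # assumed measurement noise

x = np.array([50.0, 0.4])
P = np.eye(2)
measurements = [50.8, 51.1, None, None, None, 52.9, 53.2]   # None = gap in the detected segments

track = []
for z in measurements:
    x = F @ x
    P = F @ P @ F.T + Q                                      # predict to the next row
    if z is not None:                                        # update only where a segment exists
        S = H @ P @ H.T + Rm
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ (np.array([z]) - H @ x)).ravel()
        P = (np.eye(2) - K @ H) @ P
    track.append(x[0])
print(np.round(track, 2))
```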

Patent
Stanley E. Stokowski1
15 Mar 2007
TL;DR: In this article, the authors present a method for finding lithographically significant defects on a reticle using a pair of related intensity images of the reticle in question using an inspection apparatus.
Abstract: Disclosed are apparatus and methods for finding lithographically significant defects on a reticle. In general, at least a pair of related intensity images of the reticle in question are obtained using an inspection apparatus. The intensity images are obtained such that each of the images experience different focus settings for the reticle so that there is a constant focus offset between the two focus values of the images. These images are then analyzed to obtain a transmission function of the reticle. This transmission function is then input into a model of the lithography system (e.g., a stepper, scanner, or other related photolithography system) to then produce an aerial image of the reticle pattern. The aerial image produced can then be input to a photoresist model to yield a “resist-modeled image” that corresponds to an image pattern to be printed onto the substrate using the reticle. This resist-modeled image can then be compared with a reference image to obtain defect information. In particular, due to the introduction of the lithography tool and photoresist model, this defect information pertains to lithographically significant defects.
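A highly simplified sketch of the back end of this flow: a Gaussian blur of the mask transmission stands in for the lithography-model aerial image, a constant-threshold resist model produces the resist-modeled image, and a difference against the reference flags printable defects. The sizes and thresholds are illustrative.

```python
# Threshold resist model applied to a stand-in aerial image, then a defect comparison.
import numpy as np
from scipy.ndimage import gaussian_filter

def resist_image(mask, sigma=4.0, threshold=0.35):
    aerial = gaussian_filter(mask, sigma)        # stand-in for the lithography-model aerial image
    return aerial > threshold                    # constant-threshold resist model

reference = np.zeros((128, 128))
reference[40:88, 40:88] = 1.0
defective = reference.copy()
defective[60:64, 88:92] = 1.0                    # small edge protrusion on the mask

diff = resist_image(reference) ^ resist_image(defective)
print("printable defect pixels:", int(diff.sum()))   # zero would mean "not lithographically significant"
```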

Patent
10 Apr 2007
TL;DR: A method and a program are proposed that can produce data of an original for forming a highly accurate pattern with a smaller amount of calculation and in a shorter time.
Abstract: PROBLEM TO BE SOLVED: To provide a method and a program that can produce data of an original for forming a highly accurate pattern with a smaller amount of calculation and in a shorter time. SOLUTION: The method includes: a step of obtaining a two-dimensional transmission cross coefficient based on a function representing the light intensity distribution formed by an illumination apparatus on the pupil plane of the projection optical system and the pupil function of the projection optical system; an aerial image calculation step (S62) of obtaining an approximated aerial image by using one of a plurality of components of the aerial image on the image plane of the projection optical system, or by adding two or more of these components, based on the two-dimensional transmission cross coefficient and information about a target pattern; and an original data producing step (S63) of producing data of an original pattern based on the approximated aerial image.

Patent
29 Jan 2007
TL;DR: In this article, a 3D measurement apparatus was proposed to generate stereo pair images from aerial images from a small UAV, where the body bank of the UAV was removed by projecting the aerial images onto a world coordinate plane.
Abstract: PROBLEM TO BE SOLVED: To implement three-dimensional measurement by acquiring high-resolution stereo pair images inexpensively. SOLUTION: A three-dimensional measurement apparatus 300 implements three-dimensional measurement by generating stereo pair images from aerial images taken by a small unmanned airplane. The small unmanned airplane has the advantages of low cost and low flight altitude, meaning that high-resolution aerial images can be acquired at low cost, but it has the disadvantage of a large body bank when turning. An image projection part 330 removes the effects of the body bank from the aerial images. Specifically, the image projection part 330 projects the aerial images onto a world coordinate plane, converting them from central projection images into vertical images, thus producing aerial images as if they had been taken from directly above in level flight. A longitudinal parallax removal part 340 removes longitudinal parallax from the pair of aerial images produced by the image projection part 330 to generate stereo pair images, and a three-dimensional measurement part 350 implements stereoscopic three-dimensional measurement.

Proceedings ArticleDOI
26 Dec 2007
TL;DR: A semiautomatic approach of detecting feature correspondences between ground-level images and the building footprint in an orthorectified aerial image for complete, photo-realistic and large-scale urban models.
Abstract: Aerial imagery and ground-level imagery are two complementary data sources for architectural modeling. How to integrate them is a critical issue in creating complete, photo-realistic and large-scale urban models. We describe a semiautomatic approach of detecting feature correspondences between ground-level images and the building footprint in an orthorectified aerial image. The ground-level images are stitched into panoramas in order to obtain a wide camera field of view. Line segments are extracted from ground-level images. Their corresponding segments on the building footprints are automatically detected through a voting process. Meanwhile the camera pose of the ground-level images is also obtained. Wrong correspondences are corrected through user interaction. Later, the height values of the building roof corners are computed and a piece-wise planar 3D model with photo-realistic facade and roof texture is then created.

Proceedings ArticleDOI
11 Apr 2007
TL;DR: A fusion of high-resolution InSAR data and one aerial image is discussed for the example of a scene containing bridges that are core elements of infrastructure, improving the 3D visualization of the scene and the extraction of the main parameters of the bridges' geometry.
Abstract: Modern airborne SAR sensor systems provide geometric resolution well below half a meter. By SAR interferometry from pairs of such images, DEMs of the same grid size can be obtained. In data of this kind, many features of urban objects become visible that were beyond the scope of radar remote sensing only a few years ago. However, because of the side-looking SAR sensor principle, layover and occlusion issues inevitably arise in undulating terrain and urban areas. Therefore, SAR data are difficult to interpret, even for senior human interpreters. Furthermore, the quality of the InSAR DEM may vary significantly depending on the local topography. In order to support interpretation, SAR data are often analyzed using additional complementary information provided by maps or other remote sensing imagery. In this paper, a fusion of high-resolution InSAR data and one aerial image is discussed for the example of a scene containing bridges, which are core elements of infrastructure. The aims are to improve the 3D visualization of the scene and to extract the main parameters of the bridges' geometry.

Patent
Kenji Yamazoe1
10 Jul 2007
TL;DR: In this paper, a two-dimensional transmission cross coefficient is obtained based on a function representing a light intensity distribution formed by an illumination apparatus on a pupil plane of the projection optical system.
Abstract: A two-dimensional transmission cross coefficient is obtained based on a function representing a light intensity distribution formed by an illumination apparatus on a pupil plane of the projection optical system and a pupil function of the projection optical system. Based on the two-dimensional transmission cross coefficient and data of a pattern on an object plane of the projection optical system, an approximate aerial image is calculated by using at least one of plural components of an aerial image on an image plane of the projection optical system. Data of a pattern of an original is produced based on the approximate aerial image.

Book ChapterDOI
Mehdi Rezaeian1, Armin Gruen1
01 Jan 2007
TL;DR: The results of the analysis show that using multiple features can be useful for classifying damage automatically and with a high success rate, and can give first, very valuable hints to rescue teams.
Abstract: We present a method based on two kinds of image-extracted features for comparing stereo pairs of aerial images taken before and after an earthquake. The study area is a part of the city of Bam, Iran, which was hit strongly by an earthquake on December 26, 2003. In order to classify damage caused by earthquakes, we have explored the use of two kinds of extracted features: volumes (defined in object space) and edges (defined in image space). For this purpose, digital surface models (DSM) were created automatically from pre- and post-earthquake aerial images. Then the volumes of the buildings were calculated. In addition, a criterion for edge existence in the post-event images, in accordance with the pre-event building polygon lines, is proposed. A simple clustering algorithm based on the nearest neighbor rule was implemented using these two features simultaneously. Based on visual inspection of the stereo images, three levels of damage (total collapse, partial collapse, no damage) were considered. The results were evaluated by comparing pre- and post-earthquake data. The overall success rate (the total number of correctly classified samples divided by the total number of samples) was found to be 71.4%. With respect to the totally collapsed buildings, we obtained success rates of 86.5% and 90.4% in terms of producer's and user's accuracies, respectively, which is quite encouraging. The results of the analysis show that using multiple features can be useful for classifying damage automatically and with a high success rate. This can give first, very valuable hints to rescue teams.

Proceedings ArticleDOI
TL;DR: This paper presents an approach to automatically detect patterns that are found in real designs and have considerable aerial image parameter differences from the nearest test pattern structure, and to repair the test patterns to include these structures.
Abstract: Process models are responsible for predicting the latent image in the resist in a lithographic process. In order for the process model to calculate the latent image, information about the aerial image at each layout fragment is evaluated first and then some aerial image characteristics are extracted. These parameters are passed to the process model to calculate the wafer latent image. The process model returns a threshold value that indicates the position of the latent image inside the resist; the accuracy of this value depends on the calibration data that were used to build the process model in the first place. The calibration structures used in building the models are usually gathered in a single layout file called the test pattern. Real raw data from the lithographic process are measured and attached to their corresponding structures in the test pattern, and these data are then applied in the calibration flow of the models. In this paper we present an approach to automatically detect patterns that are found in real designs and have considerable aerial image parameter differences from the nearest test pattern structure, and to repair the test patterns to include these structures. This detect-and-repair approach will guarantee accurate prediction of different layout fragments and therefore correct OPC behavior.
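A minimal sketch of the detection idea: describe each fragment by a few aerial image parameters (Imax, Imin and slope are used as placeholders), find the nearest test-pattern structure in that parameter space with a k-d tree, and flag fragments that lie too far from any calibration structure.

```python
# Nearest-neighbor coverage check in an assumed aerial-image parameter space.
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(6)
test_params = rng.uniform([0.6, 0.05, 1.0], [1.0, 0.3, 4.0], size=(200, 3))    # calibration structures
design_params = rng.uniform([0.5, 0.0, 0.5], [1.1, 0.4, 5.0], size=(1000, 3))  # fragments in a real design

tree = cKDTree(test_params)
dist, _ = tree.query(design_params)              # distance to the nearest test-pattern structure
uncovered = dist > np.percentile(dist, 95)       # the threshold is an arbitrary illustrative choice
print("fragments to add to the test pattern:", int(uncovered.sum()))
```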