
Showing papers on "Aerial image published in 2008"


Proceedings ArticleDOI
01 Mar 2008
TL;DR: A vision-based navigation system which combines inertial sensors, a visual odometer and registration of UAV on-board video to a given geo-referenced aerial image has been developed and tested on real flight-test data; the results show that it is possible to extract useful position information from aerial imagery even when the UAV is flying at low altitude.
Abstract: The aim of this paper is to explore the possibility of using geo-referenced satellite or aerial images to augment an Unmanned Aerial Vehicle (UAV) navigation system in case of GPS failure. A vision-based navigation system which combines inertial sensors, a visual odometer and registration of UAV on-board video to a given geo-referenced aerial image has been developed and tested on real flight-test data. The experimental results show that it is possible to extract useful position information from aerial imagery even when the UAV is flying at low altitude. It is shown that such information can be used in an automated way to compensate for the drift of the UAV state estimation which occurs when only inertial sensors and the visual odometer are used.

196 citations
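The drift compensation described in the abstract amounts to fusing a drifting dead-reckoning estimate with an occasional absolute position fix from image registration. A minimal scalar Kalman-style measurement update illustrates the principle (an illustrative sketch, not the paper's filter; the variance values below are hypothetical):

```python
def kalman_fuse(x, var, z, z_var):
    """Scalar Kalman measurement update: blend a drifting dead-reckoning
    position estimate x (with variance var) against an absolute fix z
    (with variance z_var), e.g. from registering on-board video to a
    geo-referenced aerial image."""
    k = var / (var + z_var)                  # Kalman gain
    return x + k * (z - x), (1.0 - k) * var  # fused estimate, reduced variance


# usage: odometry says 10.0 (var 4.0), image registration says 12.0 (var 4.0)
est, var = kalman_fuse(10.0, 4.0, 12.0, 4.0)  # -> (11.0, 2.0)
```

With equal variances the update simply splits the difference; as odometry drift grows (larger `var`), the absolute fix is weighted more heavily.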


Journal ArticleDOI
TL;DR: A novel and robust framework for automatic car detection from aerial images that does not rely on any a priori knowledge of the image, such as a site model or contextual information, although this information can be incorporated if necessary.
Abstract: Car detection from aerial images has been studied for years. However, given a large-scale aerial image with typical car and background appearance variations, robust and efficient car detection is still a challenging problem. In this paper, we present a novel and robust framework for automatic car detection from aerial images. The main contribution is a new on-line boosting algorithm for efficient car detection from large-scale aerial images. Boosting with interactive on-line training allows the car detector to be trained and improved efficiently. After training, detection is performed by exhaustive search. For post processing, a mean shift clustering method is employed, improving the detection rate significantly. In contrast to related work, our framework does not rely on any a priori knowledge of the image, such as a site model or contextual information, but if necessary this information can be incorporated. An extensive set of experiments on high resolution aerial images using the new UltraCamD shows the superiority of our approach.

159 citations
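The mean shift post-processing step mentioned in the abstract merges overlapping detections by moving each detection toward the local mean of its neighbours until the responses collapse into modes. A minimal flat-kernel sketch on 2-D detection centres (illustrative only; the bandwidth and merge threshold are hypothetical parameters, not taken from the paper):

```python
import math

def mean_shift(points, bandwidth, iters=50, tol=1e-6):
    """Flat-kernel mean shift: repeatedly move each point to the mean of
    all input points within `bandwidth`, then merge converged modes."""
    modes = [list(p) for p in points]
    for _ in range(iters):
        moved = 0.0
        for i, m in enumerate(modes):
            nbrs = [p for p in points if math.dist(p, m) <= bandwidth]
            new = [sum(c) / len(nbrs) for c in zip(*nbrs)]
            moved = max(moved, math.dist(new, m))
            modes[i] = new
        if moved < tol:
            break
    # merge modes that converged to (nearly) the same location
    clusters = []
    for m in modes:
        if not any(math.dist(m, c) < bandwidth / 2 for c in clusters):
            clusters.append(m)
    return clusters


# five raw detections collapse into two clustered hypotheses
hits = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
centres = mean_shift(hits, bandwidth=3.0)  # two surviving modes
```

Each surviving mode stands in for one car hypothesis, which is why clustering raises the detection rate relative to counting raw sliding-window responses.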


Book ChapterDOI
12 Oct 2008
TL;DR: An algorithm for fully automatic building reconstruction from aerial images. Sparse line features delineating height discontinuities and dense depth data providing the roof surface are combined in an innovative manner with a global optimization algorithm based on Graph Cuts.
Abstract: Accurate and realistic building models of urban environments are increasingly important for applications like virtual tourism or city planning. Initiatives like Virtual Earth or Google Earth are aiming at offering virtual models of all major cities worldwide. The prohibitively high costs of manual generation of such models explain the need for an automatic workflow. This paper proposes an algorithm for fully automatic building reconstruction from aerial images. Sparse line features delineating height discontinuities and dense depth data providing the roof surface are combined in an innovative manner with a global optimization algorithm based on Graph Cuts. The fusion process exploits the advantages of both information sources and thus yields superior reconstruction results compared to the individual sources. The nature of the algorithm also makes it possible to elegantly generate image-driven levels of detail of the geometry. The algorithm is applied to a number of real world data sets encompassing thousands of buildings. The results are analyzed in detail and extensively evaluated using ground truth data.

147 citations
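The fusion step in the abstract can be read as a discrete labeling problem of the standard graph-cut form; a generic sketch of such an energy (illustrative notation, not the paper's exact formulation):

```latex
E(l) = \sum_{p \in \mathcal{P}} D_p(l_p)
     \; + \; \lambda \sum_{(p,q) \in \mathcal{N}} V_{pq}(l_p, l_q)
```

Here the data term \(D_p\) would be driven by the dense depth estimates for pixel \(p\), while the pairwise term \(V_{pq}\) favors smooth roof surfaces between neighbors, discounted across the sparse line features that mark height discontinuities, so that the global minimum respects both information sources.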


Proceedings ArticleDOI
16 Dec 2008
TL;DR: This study presents a novel approach for building detection using multiple cues, which benefits from segmentation of aerial images using invariant color features and determines the shape of the building by a novel method.
Abstract: Robust detection of buildings is an important part of the automated aerial image interpretation problem. Automatic detection of buildings enables creation of maps, detecting changes, and monitoring urbanization. Due to the complexity and uncontrolled appearance of the scene, an intelligent fusion of different methods gives better results. In this study, we present a novel approach for building detection using multiple cues. We benefit from segmentation of aerial images using invariant color features. In addition, we use edge and shadow information for building detection. We also determine the shape of the building by a novel method.

144 citations


Journal ArticleDOI
Thomas Esch, Michael Thiel1, M. Bock, Achim Roth, Stefan Dech 
TL;DR: The quantitative assessment of segmentation accuracy, based on reference objects derived from an aerial image and a high-resolution synthetic aperture radar scene, shows an improvement of 20%-40% in object accuracy when the proposed procedure is applied.
Abstract: This letter proposes an optimization approach that enhances the quality of image segmentation using the software Definiens Developer. The procedure aims at the minimization of over- and undersegmentation in order to attain more accurate segmentation results. The optimization iteratively combines a sequence of multiscale segmentation, feature-based classification, and classification-based object refinement. The developed method has been applied to various remotely sensed data and is compared to the results achieved with the established segmentation procedures provided by the Definiens Developer software. The quantitative assessment of segmentation accuracy, based on reference objects derived from an aerial image and a high-resolution synthetic aperture radar scene, shows an improvement of 20%-40% in object accuracy when the proposed procedure is applied.

116 citations


Proceedings ArticleDOI
TL;DR: A new resolution enhancement technique named 2D-TCC technique is proposed, which can enhance resolution of line patterns as well as that of contact hole patterns by the use of an approximate aerial image.
Abstract: In this paper, a new resolution enhancement technique named 2D-TCC technique is proposed. This method can enhance resolution of line patterns as well as that of contact hole patterns by the use of an approximate aerial image. The aerial image, which is obtained by 2D-TCC calculation, expresses the degree of coherence at the image plane of a projection optic considering mask transmission at the object plane. OPC of desired patterns and placement of assist patterns can be simultaneously performed according to an approximate aerial image called a 2D-TCC map. Fast calculation due to truncation of a series in calculating an aerial image is another advantage. Results of mask optimization for various line patterns and the validity of the 2D-TCC technique by simulations and experiments are reported.

98 citations
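The TCC-based aerial image in the abstract follows the Hopkins model of partially coherent imaging; schematically (symbols are the standard ones, not taken from the paper):

```latex
I(x) \;=\; \sum_{f}\sum_{f'} \mathrm{TCC}(f, f')\, M(f)\, M^{*}(f')\, e^{\,2\pi i (f - f')x}
```

where \(M\) is the mask spectrum and the transmission cross coefficients \(\mathrm{TCC}(f, f')\) encode the illumination source and projection pupil. Truncating this double sum (or its eigen-expansion) is what enables the fast approximate aerial image calculation the abstract refers to.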


Patent
06 Oct 2008
TL;DR: In this article, the authors present a system for remotely determining the measurements of a roof, including a sizing tool for determining the size, geometry and pitch of the roof sections of a building being displayed.
Abstract: The invention provides consumers, private enterprises, government agencies, contractors and third party vendors with tools and resources for gathering site specific information related to purchase and installation of energy systems. A system according to one embodiment of the invention remotely determines the measurements of a roof. An exemplary system comprises a computer including an input means, a display means and a working memory. An aerial image file database contains a plurality of aerial images of roofs of buildings in a selected region. A roof estimating software program receives location information of a building in the selected region and then presents the aerial image files showing roof sections of the building located at that location. Some embodiments of the system include a sizing tool for determining the size, geometry, and pitch of the roof sections of a building being displayed.

94 citations


Proceedings ArticleDOI
23 Jun 2008
TL;DR: A novel method for parsing aerial images with a hierarchical and contextual model learned in a statistical framework that allows the model to rule out inconsistent detections and verify low probability detections based on their local context.
Abstract: In this paper we present a novel method for parsing aerial images with a hierarchical and contextual model learned in a statistical framework. We learn hierarchies at the scene and object levels to handle the difficult task of representing scene elements at different scales and add contextual constraints to resolve ambiguities in the scene interpretation. This allows the model to rule out inconsistent detections, like cars on trees, and to verify low probability detections based on their local context, such as small cars in parking lots. We also present a two-step algorithm for parsing aerial images that first detects object-level elements like trees and parking lots using color histograms and bag-of-words models, and objects like roofs and roads using compositional boosting, a powerful method for finding image structures. We then activate the top-down scene model to prune false positives from the first stage. We learn this scene model in a minimax entropy framework and show unique samples from our prior model, which capture the layout of scene objects. We present experiments showing that hierarchical and contextual information greatly reduces the number of false positives in our results.

78 citations


Patent
05 May 2008
TL;DR: In this article, a unified triangulation method is provided for an overlapping area between an aerial image and a satellite image that are captured by a frame camera and a line camera equipped with different types of sensors.
Abstract: Disclosed is a digital photogrammetric method and apparatus using the integrated modeling of different types of sensors. A unified triangulation method is provided for an overlapping area between an aerial image and a satellite image that are captured by a frame camera and a line camera equipped with different types of sensors. Ground control lines or ground control surfaces are used as ground control features for the triangulation. A few ground control points may be used together with the ground control surface in order to further improve the three-dimensional position. The ground control line and the ground control surface may be extracted from LiDAR data. In addition, triangulation may be performed by bundle adjustment in units of blocks, each containing several aerial images and satellite images. When an orthophoto is needed, it is possible to generate the orthophoto by appropriately using elevation models with various accuracies that are created by a LiDAR system, according to desired accuracy.

76 citations


Journal ArticleDOI
TL;DR: In this paper, an automated image segmentation was used to delineate image objects representing vegetation patches of similar physiognomy and structure, and data sources extracted from individual species distribution models, Landsat spectral data, and life form cover estimates derived from aerial image-based texture variables.
Abstract: Objective: The objective of this study was to map vegetation composition across a 24 000 ha watershed. Location: The study was conducted on the western slope of the Sierra Nevada mountain range of California, USA. Methods: Automated image segmentation was used to delineate image objects representing vegetation patches of similar physiognomy and structure. Image objects were classified using a decision tree and data sources extracted from individual species distribution models, Landsat spectral data, and life form cover estimates derived from aerial image-based texture variables. Results: A total of 12 plant communities were mapped with an overall accuracy of 75% and a κ-value of 0.69. Species distribution model inputs improved map accuracy by approximately 15% over maps derived solely from image data. Automated mapping of existing vegetation distributions, based solely on predictive distribution model results, proved to be more accurate than mapping based on Landsat data, and equivalent in accuracy to mapping based on all image data sources. Conclusions: Results highlight the importance of terrain, edaphic, and bioclimatic variables when mapping vegetation communities in complex terrain. Mapping errors stemmed from the lack of spectral discernability between vegetation classes, and the inability to account for the confounding effects of land use history and disturbance within a static distribution modeling framework.

63 citations


Proceedings ArticleDOI
19 May 2008
TL;DR: A monocular vision based particle filter localization system for urban settings that uses aerial orthoimagery as the reference map and image processing techniques are employed to create a feature map from an aerial image.
Abstract: This paper presents the design of a monocular vision based particle filter localization system for urban settings that uses aerial orthoimagery as the reference map. One of the design objectives is to provide a low cost method for outdoor localization using a single camera. This relaxes the need for global positioning system (GPS) which may experience degraded reliability in urban settings. The second objective is to study the achievable localization performance with the aforementioned resources. Image processing techniques are employed to create a feature map from an aerial image, and also to extract features from camera images to provide observations that are used by a particle filter for localization.
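The localization loop in the abstract follows the standard predict/update/resample cycle of a particle filter. A minimal 1-D sketch (illustrative, not the paper's system; the noise levels and the map function `expected_obs` are hypothetical stand-ins for the aerial-image feature map):

```python
import math
import random

def particle_filter_step(particles, control, measurement,
                         motion_noise, meas_noise, expected_obs):
    """One predict/update/resample cycle of a 1-D particle filter.
    `expected_obs(p)` returns the observation predicted at position p
    (in the paper's setting, features from the aerial reference map)."""
    # predict: propagate each particle with the control input plus noise
    moved = [p + control + random.gauss(0.0, motion_noise)
             for p in particles]
    # update: weight each particle by the Gaussian likelihood of the
    # camera measurement given the particle's predicted observation
    weights = [math.exp(-(measurement - expected_obs(p)) ** 2
                        / (2.0 * meas_noise ** 2)) for p in moved]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # resample with replacement, proportional to weight
    return random.choices(moved, weights=weights, k=len(moved))
```

Starting all particles at the origin and moving one unit per step, the weighted-resampled cloud tracks the true position without any GPS input, which is the low-cost behavior the paper is after.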

Patent
28 Aug 2008
TL;DR: In this article, a coarse camera pose estimation is determined that is then refined into a fine camera pose estimator, which is used to map texture from the aerial image onto the 3D model.
Abstract: A camera pose may be determined automatically and is used to map texture onto a 3D model based on an aerial image. In one embodiment, an aerial image of an area is first determined. A 3D model of the area is also determined, but does not have texture mapped on it. To map texture from the aerial image onto the 3D model, a camera pose is determined automatically. Features of the aerial image and 3D model may be analyzed to find corresponding features in the aerial image and the 3D model. In one example, a coarse camera pose estimation is determined that is then refined into a fine camera pose estimation. The fine camera pose estimation may be determined based on the analysis of the features. When the fine camera pose is determined, it is used to map texture onto the 3D model based on the aerial image.

Journal ArticleDOI
TL;DR: An automatic and fast algorithm for registering aerial image sequences to vector map data using linear features as control information is proposed, based on the extraction of linear features using active contour models followed by the construction of a polygonal template upon which a matching process is applied.
Abstract: This paper proposes an automatic and fast algorithm for registering aerial image sequences to vector map data using linear features as control information. Our method is based on the extraction of linear features using active contour models (also known as snakes), followed by the construction of a polygonal template upon which a matching process is applied. To accommodate more robust matching, this work presents both exact and inexact matching schemes for linear features. Additionally, in order to overcome the influence of the snakes-based extraction process on the matching results, a matching refinement process is suggested. Using the information derived from the matching process, we then determine the transformation parameters between overlapping images and generate a mosaic image sequence, which can then be registered to a map. The performance of the proposed scheme was tested on sequences of aerial imagery of 1 m resolution that were subjected to shifts and rotations using both the exact and inexact matching scheme, and was shown to produce angular accuracy of less than 0.7 degrees and positional accuracy of less than two pixels.
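Once features are matched, the shift-and-rotation parameters between overlapping images can be recovered in closed form from the correspondences. A least-squares 2-D rigid fit (a Kabsch-style sketch under the assumption of rotation plus translation only; not the paper's exact estimation scheme):

```python
import math

def rigid_transform_2d(src, dst):
    """Least-squares rotation + translation mapping matched points
    src[i] -> dst[i] (2-D Kabsch): returns (theta, tx, ty)."""
    n = len(src)
    csx = sum(x for x, _ in src) / n; csy = sum(y for _, y in src) / n
    cdx = sum(x for x, _ in dst) / n; cdy = sum(y for _, y in dst) / n
    # accumulate the cross-covariance terms of the centred point sets
    a = b = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        sx -= csx; sy -= csy; dx -= cdx; dy -= cdy
        a += sx * dx + sy * dy       # cosine component
        b += sx * dy - sy * dx       # sine component
    theta = math.atan2(b, a)
    # translation maps the rotated source centroid onto the target centroid
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty
```

With clean correspondences the fit is exact; with noisy snake-extracted features it returns the least-squares transform, which is what the mosaicking step needs.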

Journal ArticleDOI
TL;DR: A dense matching process based on the minimization of a multi-view pixelwise similarity criterion combined with a discretized L1-norm or total variation (TV) regularization term is proposed for dense height map reconstruction from aerial oblique image sequences.

Proceedings ArticleDOI
19 May 2008
TL;DR: A method to incorporate sensor measurement errors into target position estimates and a calibration methodology to measure the error distributions is presented and a preliminary experiment with real flight data is presented.
Abstract: Sensor-based control is an emerging challenge in UAV applications. It is essential in a sensing task to account for sensor measurement errors when computing a target position estimate. Sources of measurement error include those in vehicle position and orientation measurements as well as algorithm failures such as missed detections or false detections. Incorporating such errors in aerial sensors is non-trivial because of the camera's perspective geometry. This paper presents a method to incorporate such errors into target position estimates and a calibration methodology to measure the error distributions. A preliminary experiment with real flight data is presented.

Journal ArticleDOI
TL;DR: The SEMATECH Berkeley actinic inspection tool (AIT) as mentioned in this paper uses an off-axis Fresnel zoneplate lens to project a high-magnification EUV image directly onto a charge coupled device camera.
Abstract: The SEMATECH Berkeley actinic inspection tool (AIT) is an extreme ultraviolet (EUV)-wavelength mask inspection microscope designed for direct aerial image measurements and precommercial EUV mask research. Operating on a synchrotron bending magnet beamline, the AIT uses an off-axis Fresnel zoneplate lens to project a high-magnification EUV image directly onto a charge coupled device camera. The authors present the results of recent system upgrades that have improved the imaging resolution, illumination uniformity, and partial coherence. Benchmarking tests show image contrast above 75% for 100 nm mask features and significant improvements across the full range of measured sizes. The zoneplate lens has been replaced by an array of user-selectable zoneplates with higher magnification and numerical aperture (NA) values up to 0.0875, emulating the spatial resolution of a 0.35 NA 4× EUV stepper. Illumination uniformity is above 90% for mask areas 2 μm wide and smaller. An angle-scanning mirror reduces the high coherence of the synchrotron beamline light source, giving measured σ values of approximately 0.125 at 0.0875 NA.

Journal Article
TL;DR: In this paper, two data fusion methods, namely Bayesian and Dempster-Shafer, are evaluated for the detection of buildings in aerial image and laser range data, and their performances are compared.
Abstract: Automated approaches to building detection are of great importance in a number of different applications including map updating and monitoring of informal settlements. With the availability of multi-source aerial data in recent years, data fusion approaches to automated building detection have become more popular. In this paper, two data fusion methods, namely Bayesian and Dempster-Shafer, are evaluated for the detection of buildings in aerial image and laser range data, and their performances are compared. The results indicate that the Bayesian maximum likelihood method yields a higher detection rate, while the Dempster-Shafer method results in a lower false-positive rate. A comparison of the results in pixel level and object level reveals that both methods perform slightly better in object level.
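Of the two fusion methods compared in the abstract, the Dempster-Shafer one rests on Dempster's rule of combination, which multiplies and renormalizes belief masses from the two sensors. A minimal sketch over frozenset focal elements (illustrative; the mass values in the usage example are hypothetical, not the paper's data):

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions whose keys
    are frozenset focal elements (e.g. {'building'}, {'building','other'})."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to the empty set
    norm = 1.0 - conflict            # renormalize away the conflict
    return {k: v / norm for k, v in combined.items()}


# image evidence vs. laser-range evidence for 'building' (toy masses)
m_img = {frozenset({"b"}): 0.8, frozenset({"b", "nb"}): 0.2}
m_rng = {frozenset({"b"}): 0.6, frozenset({"b", "nb"}): 0.4}
fused = dempster_combine(m_img, m_rng)  # mass on {'b'} rises to 0.92
```

Because mass on the full frame {"b", "nb"} represents ignorance rather than disbelief, combining two weakly committed sensors yields a more committed fused result, which is the behavior behind the lower false-positive rate reported for Dempster-Shafer.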

Book ChapterDOI
01 Jan 2008
TL;DR: This work proposes a new conceptual model which it is argued more accurately represents how the HVS performs aerial image interpretation and extracts a novel complementary set of intensity and texture gradients which offer increased discrimination strength over existing competing gradient sets.
Abstract: Object Based Image Analysis (OBIA) is a form of remote sensing which attempts to model the ability of the human visual system (HVS) to interpret aerial imagery. We argue that in many of its current implementations, OBIA is not an accurate model of this system. Drawing from current theories in cognitive psychology, we propose a new conceptual model which we believe more accurately represents how the HVS performs aerial image interpretation. The first step in this conceptual model is the generation of image segmentation where each area of uniform visual properties is represented correctly. The goal of this work is to implement this first step. To achieve this we extract a novel complementary set of intensity and texture gradients which offer increased discrimination strength over existing competing gradient sets. These gradients are then fused using a strategy which accounts for spatial uncertainty in boundary localization. Finally segmentation is performed using the watershed segmentation algorithm. Results achieved are very accurate and outperform the popular Canny gradient operator.

Journal ArticleDOI
TL;DR: An optical aerial image partitioning method using level set evolution for an arbitrary number of regions is proposed in this paper, based on the concept of using one level set function for each region.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the accuracy of IKONOS and QuickBird satellite stereo image pairs with aerial images acquired over a region at Tampa Bay, Florida and showed that the accuracy is related to a few factors of imaging geometry.
Abstract: This paper investigates the geopositioning accuracy achievable from integrating IKONOS and QuickBird satellite stereo image pairs with aerial images acquired over a region at Tampa Bay, Florida. The results showed that the accuracy is related to a few factors of imaging geometry. For example, the geopositioning accuracy of a stereo pair of IKONOS or QuickBird images can be improved by integrating a set of aerial images, even just a single aerial image or a stereo pair of aerial images. Shorelines derived from the IKONOS and QuickBird stereo images, particularly the vertical positions, were compared with the corresponding observations of water-penetrating LiDAR and water gauge stations, showing that the differences are within the limit of the geopositioning uncertainty of the satellite images.

Journal ArticleDOI
TL;DR: In this paper, a methodology was developed to use image processing techniques for automated detection of tile drains from multiple dates of aerial photography at the Agronomy Center for Research and Education (ACRE), West Lafayette, Indiana.
Abstract: Although subsurface drainage provides many agronomic and environmental benefits, extensive subsurface drainage systems have important implications for surface water quality and hydrology. Due to limited information on subsurface drainage extent, it is difficult to understand the hydrology of intensively tile-drained watersheds. In order to address this problem, a methodology was developed to use image processing techniques for automated detection of tile drains from multiple dates of aerial photography at the Agronomy Center for Research and Education (ACRE), West Lafayette, Indiana. A stepwise approach was adopted to first identify potential tile-drained fields from the GIS-based analysis of land use class, soil drainage class, and surface slope using decision tree classification. Based on preliminary classification of potential tile-drained area from the decision tree classifier, a combination of image processing techniques such as directional edge enhancement filtering, density slice classification, Hough transformation, and automatic vectorization were used to identify individual tile lines from images of 1976, 1998, and 2002. Accuracy assessment of the predicted tile line maps (Hough transformed and untransformed) was accomplished by comparing the locations of predicted tile lines with the known tile lines mapped through manual digitization from historic design diagrams using both a confusion matrix approach and drainage density. Forty-eight percent of tile lines were correctly predicted for the Hough transformed map and 58% for the untransformed map based on the producer accuracy. Similarly, 73% of non-tile area was correctly predicted for Hough transformed and 68% for untransformed lines. 
Based on drainage density calculation, 60% of tile lines were predicted from the aerial image of 1976 and 50% from the aerial image of 2002 for both techniques, while 72% of tile lines were predicted from the aerial image of 1998 for untransformed and 50% for Hough transformed lines. The Hough transformation provided the best results in producing a map without discontinuity between lines. The overall performance of the image processing techniques used in this study shows that these techniques can be successfully applied to identify tile lines from aerial photographs over a large area.
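The Hough transformation used above accumulates votes for all lines rho = x·cos(theta) + y·sin(theta) passing through each edge pixel; peaks in the accumulator correspond to tile lines. A minimal voting sketch (illustrative only; bin sizes and the angular resolution are hypothetical, not the study's parameters):

```python
import math

def hough_lines(points, n_theta=180):
    """Vote in (theta-index, rho) space: each edge pixel (x, y) votes for
    every discretized line rho = x*cos(theta) + y*sin(theta) through it.
    Returns the accumulator dict mapping (theta_index, rho) -> votes."""
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.pi * t / n_theta
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(t, rho)] = acc.get((t, rho), 0) + 1
    return acc


# ten collinear edge pixels on the horizontal line y = 5
edges = [(x, 5) for x in range(10)]
acc = hough_lines(edges)
# the bin for theta = 90 degrees, rho = 5 collects all ten votes
```

Thresholding the accumulator and back-projecting the peak bins yields continuous line segments, which is why the Hough-transformed map avoided discontinuities between detected tile lines.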

Patent
17 Jun 2008
TL;DR: In this paper, an on-board computing and projection unit with an image projector coupled to the carriage of a light aircraft suspended below a parachute-wing was used for aerial image projection.
Abstract: An aerial image projection system (20) includes a light aircraft (22) having a parachute-wing (24) attached to a carriage (26). The system (20) further includes an on-board computing and projection unit (30) with an image projector (32) coupled to the carriage (26) and suspended below the parachute-wing (24). An image display process (98) utilizing the system (20) entails generating (100) still and/or moving images (34) in a digital format for management by the computing and projection unit (30) and projecting (102) those images (34) from the projector (32) onto the parachute-wing (24). The projection of particular images (34) may be governed by time of day, particular events, current location of the aerial image projection system (20) as detected by a navigational system receiver (66) of the unit, and so forth.

Proceedings ArticleDOI
TL;DR: The SEMATECH Berkeley Actinic Inspection Tool (AIT) as discussed by the authors uses an off-axis Fresnel zoneplate lens to project a high-magnification EUV image directly onto a CCD camera.
Abstract: The SEMATECH Berkeley Actinic Inspection Tool (AIT) is an EUV-wavelength mask inspection microscope designed for direct aerial image measurements and pre-commercial EUV mask research. Operating on a synchrotron bending magnet beamline, the AIT uses an off-axis Fresnel zoneplate lens to project a high-magnification EUV image directly onto a CCD camera. We present the results of recent system upgrades that have improved the imaging resolution, illumination uniformity, and partial coherence. Benchmarking tests show image contrast above 75% for 100-nm mask features, and significant improvements across the full range of measured sizes. The zoneplate lens has been replaced by an array of user-selectable zoneplates with higher magnification and NA values up to 0.0875, emulating the spatial resolution of a 0.35-NA 4× EUV stepper. Illumination uniformity is above 90% for mask areas 2-μm-wide and smaller. An angle-scanning mirror reduces the high coherence of the synchrotron beamline light source, giving measured σ values of approximately 0.125 at 0.0875 NA.

Patent
07 Jul 2008
TL;DR: In this paper, a method for determining vibration-related information by projecting an aerial image at an image position in a projection plane, mapping an intensity of the aerial image into an image map, the image map arranged for comprising values of coordinates of sampling locations and of the intensity sampled at each sampling location, and measuring intensity of aerial image received through a slot pattern.
Abstract: The invention provides a method for determining vibration-related information by projecting an aerial image at an image position in a projection plane, mapping an intensity of the aerial image into an image map, the image map arranged for comprising values of coordinates of sampling locations and of the intensity sampled at each sampling location, and measuring intensity of the aerial image received through a slot pattern. The method further includes determining from the image map a detection position of a slope portion of the image map, at the detection position of the slope portion, measuring of a temporal intensity of the aerial image and measuring of relative positions of the slot pattern and the image position, the relative positions of the slot being measured as position-related data of the slot pattern and determining from the temporal intensity of the aerial image vibration-related information for said aerial image.

Patent
Azat Latypov1
22 May 2008
TL;DR: In this article, a method of calculating an aerial image of a spatial light modulator array is proposed, where the pair-wise interference is represented by a matrix of functions and the effective graytones are approximated using sinc functions, or using polynomial functions.
Abstract: A method of calculating an aerial image of a spatial light modulator array includes calculating pair-wise interference between pixels of the spatial light modulator array; calculating effective graytones corresponding to modulation states of the pixels; and calculating the aerial image based on the pair-wise interference and the effective graytones. The graytones depend only on the modulation states of the pixels. The pair-wise interference depends only on position variables. The position variables are position in an image plane and position in a plane of a source of electromagnetic radiation. The pair-wise interference can be represented by a matrix of functions. The pair-wise interference can be represented by a four dimensional matrix. The effective graytones are approximated using sinc functions, or using polynomial functions.

Patent
19 Feb 2008
TL;DR: In this article, a cell-level process compensation technique (PCT) processing is performed on a number of levels of one or more cells to generate a PCT processed version of the one more cells in the layout.
Abstract: A layout of cells is generated to satisfy a netlist of an integrated circuit. Cell-level process compensation technique (PCT) processing is performed on a number of levels of one or more cells in the layout to generate a PCT processed version of the one or more cells in the layout. An as-fabricated aerial image of each PCT processed cell level is generated to facilitate evaluation of PCT processing adequacy. Cell-level circuit extraction is performed on the PCT processed version of each cell using the generated as-fabricated aerial images. The cell-level PCT processing and cell-level circuit extraction are performed before placing and routing of the layout on a chip. The PCT processed version of the one or more cells and corresponding as-fabricated aerial images are stored in a cell library.

Patent
Miyoko Kawashima1
15 Sep 2008
TL;DR: In this article, the authors proposed a method of generating data of a mask, comprising a calculation step of calculating an aerial image formed on an image plane of a projection optical system, an extraction step of extracting a two-dimensional image from the aerial image, a determination step of determining a main pattern of the mask based on the 2D image, and a peak portion at which a light intensity takes a peak value in a region other than a region in which the main pattern is projected.
Abstract: The invention provides a generation method of generating data of a mask, comprising a calculation step of calculating an aerial image formed on an image plane of a projection optical system, an extraction step of extracting a two-dimensional image from the aerial image, a determination step of determining a main pattern of the mask based on the two-dimensional image, an extraction step of extracting, from the aerial image, a peak portion at which a light intensity takes a peak value in a region other than a region in which the main pattern is projected, a determination step of determining an assist pattern based on the light intensity of the peak portion, and a generation step of inserting the assist pattern into a portion of the mask, which corresponds to the peak portion, thereby generating, as the data of the mask, pattern data including the assist pattern and the main pattern.

Journal ArticleDOI
TL;DR: In this paper, a ground level semantic map, which shows open ground and indicates the probability of cells being occupied by walls of buildings, is obtained by a mobile robot equipped with an omni-directional camera, GPS and a laser range finder.

Patent
04 Nov 2008
TL;DR: In this paper, a method of recognizing an event depicted in an image from the image and a location information associated with the image is disclosed, which includes acquiring the image, its associated location information, using the location information to acquire an aerial image(s) correlated to the location, and storing the event in association with an image for subsequent use.
Abstract: A method of recognizing an event depicted in an image from the image and a location information associated with the image is disclosed. The method includes acquiring the image and its associated location information; using the location information to acquire an aerial image(s) correlated to the location information; identifying the event using both the image and the acquired aerial image(s); and storing the event in association with the image for subsequent use.

Proceedings ArticleDOI
07 Jul 2008
TL;DR: This paper proposes the use of phase correlation for the automatic registration of light detection and ranging (LiDAR) data and aerial imagery, producing both a range image and a building binary mask.
Abstract: This paper proposes the use of phase correlation for the automatic registration of light detection and ranging (LiDAR) data and aerial imagery. First, buildings existent in the LiDAR data and aerial imagery are detected. Then the LiDAR data is interpolated to fixed point spacings, producing both a range image and a building binary mask. In the range image the pixel intensities correspond to the terrain's elevation and in the building mask the bright pixels correspond to buildings and dark pixels to everything else. A building binary mask is also produced from buildings detected in a corresponding aerial image. The Fourier transforms and the log polar Fourier transforms of both building binary masks are computed. Phase components are correlated and their peaks reveal the translation, rotation and scaling geometric transformation parameters. Results with real data are presented.
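The phase correlation step above recovers the offset between the two building masks from the peak of the inverse transform of the normalised cross-power spectrum. A 1-D sketch with a naive DFT (illustrative only; a real implementation would use an FFT on 2-D masks and add the log-polar stage for rotation and scale):

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * f * k / n)
                for k in range(n)) for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * k / n)
                for f in range(n)) / n for k in range(n)]

def phase_correlation_shift(a, b):
    """Estimate the circular shift d such that b[k] = a[k - d]:
    the inverse transform of the normalised cross-power spectrum
    B(f)·A*(f)/|B(f)·A*(f)| peaks at index d."""
    A, B = dft(a), dft(b)
    cross = []
    for af, bf in zip(A, B):
        c = bf * af.conjugate()
        cross.append(c / (abs(c) or 1.0))   # keep phase, drop magnitude
    corr = [v.real for v in idft(cross)]
    return max(range(len(corr)), key=corr.__getitem__)


# a signal and a copy circularly shifted by 3 samples
a = [1.0, 0.0, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0]
b = [a[(k - 3) % 8] for k in range(8)]
shift = phase_correlation_shift(a, b)  # peak at index 3
```

Discarding the spectral magnitude makes the correlation peak sharp and insensitive to intensity differences between the LiDAR-derived and image-derived masks, which is what makes the registration robust.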