Journal ArticleDOI

A comprehensive review of earthquake-induced building damage detection with remote sensing techniques

TL;DR: This paper provides a comprehensive review of remote sensing methods in two categories: multi-temporal techniques that evaluate the changes between the pre- and post-event data, and mono-temporal techniques that interpret only the post-event data.
Abstract: Earthquakes are among the most catastrophic natural disasters to affect mankind. One of the critical problems after an earthquake is building damage assessment. The area, amount, rate, and type of the damage are essential information for rescue, humanitarian and reconstruction operations in the disaster area. Remote sensing techniques play an important role in obtaining building damage information because of their non-contact, low-cost, wide field of view, and fast response capacities. As more diverse types of remote sensing data become available, various methods have been designed and reported for building damage assessment. This paper provides a comprehensive review of these methods in two categories: multi-temporal techniques that evaluate the changes between the pre- and post-event data, and mono-temporal techniques that interpret only the post-event data. Both categories of methods are discussed and evaluated in detail in terms of the type of remote sensing data utilized, including optical, LiDAR and SAR data. The performance of the methods and directions for future efforts are drawn from this extensive evaluation.
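The bi-temporal idea behind the multi-temporal category can be sketched in a few lines: difference a co-registered pre- and post-event intensity image and flag pixels whose change exceeds a threshold. The toy images and the 2-sigma threshold rule below are illustrative assumptions, not the paper's specific method.

```python
import numpy as np

def change_map(pre, post, k=2.0):
    """Bi-temporal differencing: flag pixels whose absolute intensity
    change exceeds k standard deviations of the difference image."""
    diff = post.astype(float) - pre.astype(float)
    return np.abs(diff) > k * diff.std()

# Toy 4x4 "images": a 2x2 block of pixels darkens after the event.
pre = np.full((4, 4), 100.0)
post = pre.copy()
post[1:3, 1:3] = 30.0               # hypothetical collapsed-roof region
print(change_map(pre, post).sum())  # -> 4 changed pixels
```

Real damage mapping adds radiometric normalization and co-registration before differencing; without them the difference image mostly reflects illumination and alignment errors rather than damage.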
Citations
Dissertation
01 Jan 2002

570 citations

Journal ArticleDOI
TL;DR: This paper performs a critical review on RS tasks that involve UAV data and their derived products as their main sources including raw perspective images, digital surface models, and orthophotos, and focuses on solutions that address the “new” aspects of the UAV data including ultra-high resolution; availability of coherent geometric and spectral data; and capability of simultaneously using multi-sensor data for fusion.
Abstract: The unmanned aerial vehicle (UAV) sensors and platforms nowadays are being used in almost every application (e.g., agriculture, forestry, and mining) that needs observed information from the top or oblique views. While they intend to be a general remote sensing (RS) tool, the relevant RS data processing and analysis methods are still largely ad-hoc to applications. Although the obvious advantages of UAV data are their high spatial resolution and flexibility in acquisition and sensor integration, there is in general a lack of systematic analysis on how these characteristics alter solutions for typical RS tasks such as land-cover classification, change detection, and thematic mapping. For instance, the ultra-high-resolution data (less than 10 cm of Ground Sampling Distance (GSD)) bring more unwanted classes of objects (e.g., pedestrian and cars) in land-cover classification; the often available 3D data generated from photogrammetric images call for more advanced techniques for geometric and spectral analysis. In this paper, we perform a critical review on RS tasks that involve UAV data and their derived products as their main sources including raw perspective images, digital surface models, and orthophotos. In particular, we focus on solutions that address the “new” aspects of the UAV data including (1) ultra-high resolution; (2) availability of coherent geometric and spectral data; and (3) capability of simultaneously using multi-sensor data for fusion. Based on these solutions, we provide a brief summary of existing examples of UAV-based RS in agricultural, environmental, urban, and hazards assessment applications, etc., and by discussing their practical potentials, we share our views in their future research directions and draw conclusive remarks.

301 citations

Journal ArticleDOI
TL;DR: A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification and the results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3%.
Abstract: Oblique aerial images offer views of both building roofs and facades, and thus have been recognized as a potential source to detect severe building damages caused by destructive disaster events such as earthquakes. Therefore, they represent an important source of information for first responders or other stakeholders involved in the post-disaster response process. Several automated methods based on supervised learning have already been demonstrated for damage detection using oblique airborne images. However, they often do not generalize well when data from new unseen sites need to be processed, hampering their practical use. Reasons for this limitation include image and scene characteristics, though the most prominent one relates to the image features being used for training the classifier. Recently features based on deep learning approaches, such as convolutional neural networks (CNNs), have been shown to be more effective than conventional hand-crafted features, and have become the state-of-the-art in many domains, including remote sensing. Moreover, often oblique images are captured with high block overlap, facilitating the generation of dense 3D point clouds – an ideal source to derive geometric characteristics. We hypothesized that the use of CNN features, either independently or in combination with 3D point cloud features, would yield improved performance in damage detection. To this end we used CNN and 3D features, both independently and in combination, using images from manned and unmanned aerial platforms over several geographic locations that vary significantly in terms of image and scene characteristics. A multiple-kernel-learning framework, an effective way for integrating features from different modalities, was used for combining the two sets of features for classification. 
The results are encouraging: while CNN features produced an average classification accuracy of about 91%, the integration of 3D point cloud features led to an additional improvement of about 3% (i.e. an average classification accuracy of 94%). The significance of 3D point cloud features becomes more evident in the model transferability scenario (i.e., training and testing samples from different sites that vary slightly in the aforementioned characteristics), where the integration of CNN and 3D point cloud features significantly improved the model transferability accuracy up to a maximum of 7% compared with the accuracy achieved by CNN features alone. Overall, an average accuracy of 85% was achieved for the model transferability scenario across all experiments. Our main conclusion is that such an approach qualifies for practical use.
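The multiple-kernel idea — one base kernel per feature modality, combined before a kernel classifier — can be sketched with scikit-learn's precomputed-kernel SVM. A full MKL solver learns the combination weights from the data; the fixed 0.7/0.3 weighting, the feature dimensions, and the synthetic intact/damaged labels below are illustrative assumptions.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Hypothetical per-patch features: CNN descriptors and 3D point cloud stats
X_cnn = rng.normal(size=(60, 32))
X_3d = rng.normal(size=(60, 10))
y = np.repeat([0, 1], 30)            # 0 = intact, 1 = damaged (toy labels)
X_cnn[y == 1] += 0.8                 # make the classes separable

# One base kernel per modality, combined with fixed weights; a proper
# MKL solver would learn these weights jointly with the classifier.
K = 0.7 * rbf_kernel(X_cnn) + 0.3 * rbf_kernel(X_3d)
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))               # training accuracy on the toy data
```

Any convex combination of valid kernels is itself a valid kernel, which is what makes this late-fusion scheme well defined.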

234 citations

Journal ArticleDOI
TL;DR: This article gives a comprehensive review of techniques focusing on multi-temporal SAR procedures for rapid damage assessment: interferometric coherence and intensity correlation, and reports possible solutions to the limitations of current SAR satellite missions.
Abstract: Fast crisis response after natural disasters, such as earthquakes and tropical storms, is necessary to support, for instance, rescue, humanitarian, and reconstruction operations in the crisis area. Therefore, rapid damage mapping after a disaster is crucial, i.e., to detect the affected area, including grade and type of damage. Thereby, satellite remote sensing plays a key role due to its fast response, wide field of view, and low cost. With the increasing availability of remote sensing data, numerous methods have been developed for damage assessment. This article gives a comprehensive review of these techniques focusing on multi-temporal SAR procedures for rapid damage assessment: interferometric coherence and intensity correlation. The review is divided into six parts: First, methods based on coherence; second, the ones using intensity correlation; and third, techniques using both methodologies combined to increase the accuracy of the damage assessment are reviewed. Next, studies using additional data (e.g., GIS and optical imagery) to support the damage assessment and increase its accuracy are reported. Moreover, selected studies on post-event SAR damage assessment techniques and examples of other applications of the interferometric coherence are presented. Then, the preconditions for a successful worldwide application of multi-temporal SAR methods for damage assessment and the limitations of current SAR satellite missions are reported. Finally, an outlook to the Sentinel-1 SAR mission shows possible solutions of these limitations, enabling a worldwide applicability of the presented damage assessment methods.
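As an illustration of the coherence measure this review centres on, the sample interferometric coherence of two co-registered complex (SLC) images can be estimated with a boxcar window. The window size and the synthetic test images below are assumptions for the sketch; real coherence-based damage mapping compares a pre-event coherence pair against a co-event pair.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def coherence(s1, s2, win=5):
    """Sample coherence |<s1 s2*>| / sqrt(<|s1|^2> <|s2|^2>) over a
    win x win boxcar window, for co-registered complex SLC images."""
    def smooth(a):                      # boxcar mean, per component
        if np.iscomplexobj(a):
            return uniform_filter(a.real, win) + 1j * uniform_filter(a.imag, win)
        return uniform_filter(a, win)
    num = smooth(s1 * np.conj(s2))
    den = np.sqrt(smooth(np.abs(s1) ** 2) * smooth(np.abs(s2) ** 2))
    return np.abs(num) / np.maximum(den, 1e-12)

# Identical scenes -> coherence ~1; uncorrelated speckle -> low coherence,
# which is the signature used to flag damaged (decorrelated) areas.
rng = np.random.default_rng(1)
s = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
n = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
print(coherence(s, s).mean())   # ~1.0
print(coherence(s, n).mean())   # well below 1
```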

195 citations


Cites background from "A comprehensive review of earthquak..."

  • ...The areas affected by the destruction have to be identified and it is also crucial to detect which roads, railroads, airports and ports are still intact to be used for the crisis support [2,3]....


  • ...However, damage detection using only post-disaster imagery is much more complicated and its accuracy is much lower compared to accuracy achieved by multi-temporal data approaches [3]....


Journal ArticleDOI
Lei Ma, Liang Cheng, Manchun Li, Yongxue Liu, Xiaoxue Ma
TL;DR: A strategy for the semi-automatic optimization of object-based classification is developed, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size and suggests that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation.
Abstract: Unmanned Aerial Vehicle (UAV) has been used increasingly for natural resource applications in recent years due to their greater availability and the miniaturization of sensors. In addition, Geographic Object-Based Image Analysis (GEOBIA) has received more attention as a novel paradigm for remote sensing earth observation data. However, GEOBIA generates some new problems compared with pixel-based methods. In this study, we developed a strategy for the semi-automatic optimization of object-based classification, which involves an area-based accuracy assessment that analyzes the relationship between scale and the training set size. We found that the Overall Accuracy (OA) increased as the training set ratio (proportion of the segmented objects used for training) increased when the Segmentation Scale Parameter (SSP) was fixed. The OA increased more slowly as the training set ratio became larger and a similar rule was obtained according to the pixel-based image analysis. The OA decreased as the SSP increased when the training set ratio was fixed. Consequently, the SSP should not be too large during classification using a small training set ratio. By contrast, a large training set ratio is required if classification is performed using a high SSP. In addition, we suggest that the optimal SSP for each class has a high positive correlation with the mean area obtained by manual interpretation, which can be summarized by a linear correlation equation. We expect that these results will be applicable to UAV imagery classification to determine the optimal SSP for each class.

174 citations


Cites background from "A comprehensive review of earthquak..."

  • ...…range of UAV has expanded to include forest resource management and monitoring (Dunford et al., 2009), vegetation and river monitoring (Sugiura et al., 2005), disaster management, especially earthquake monitoring (Dong and Shan, 2013), and precision agriculture (Zhang and Kovacs, 2012)....


References
Journal ArticleDOI
01 Nov 1973
TL;DR: These results indicate that the easily computable textural features based on gray-tone spatial dependencies probably have a general applicability for a wide variety of image-classification applications.
Abstract: Texture is one of the important characteristics used in identifying objects or regions of interest in an image, whether the image be a photomicrograph, an aerial photograph, or a satellite image. This paper describes some easily computable textural features based on gray-tone spatial dependencies, and illustrates their application in category-identification tasks of three different kinds of image data: photomicrographs of five kinds of sandstones, 1:20 000 panchromatic aerial photographs of eight land-use categories, and Earth Resources Technology Satellite (ERTS) multispectral imagery containing seven land-use categories. We use two kinds of decision rules: one for which the decision regions are convex polyhedra (a piecewise linear decision rule), and one for which the decision regions are rectangular parallelepipeds (a min-max decision rule). In each experiment the data set was divided into two parts, a training set and a test set. Test set identification accuracy is 89 percent for the photomicrographs, 82 percent for the aerial photographic imagery, and 83 percent for the satellite imagery. These results indicate that the easily computable textural features probably have a general applicability for a wide variety of image-classification applications.
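The gray-tone spatial-dependence (co-occurrence) matrix and one of the Haralick features, contrast, can be reproduced in a few lines of NumPy. The 4-level toy image and the single horizontal offset are simplifying assumptions: Haralick's features are usually averaged over four directions, often on a symmetrized matrix.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=4):
    """Gray-level co-occurrence matrix for offset (dy, dx), normalized
    so that entries are joint probabilities of gray-tone pairs."""
    h, w = img.shape
    i = img[0:h - dy, 0:w - dx]          # reference pixels
    j = img[dy:h, dx:w]                  # neighbours at the offset
    m = np.zeros((levels, levels))
    np.add.at(m, (i.ravel(), j.ravel()), 1)
    return m / m.sum()

def contrast(p):
    """Haralick contrast: sum over (i, j) of p(i, j) * (i - j)^2."""
    idx = np.arange(p.shape[0])
    return float(((idx[:, None] - idx[None, :]) ** 2 * p).sum())

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
print(round(contrast(glcm(img)), 3))     # -> 0.583 (i.e. 7/12)
```

Low contrast means neighbouring gray tones are similar (smooth texture); high contrast indicates frequent large local gray-tone transitions.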

20,442 citations

01 Jan 2005

2,654 citations


"A comprehensive review of earthquak..." refers background or methods in this paper

  • ...Using post-event panchromatic aerial imagery, Sumer and Turker (2006) proposed a building damage detection method based on gray-value and gradient orientation of the buildings and developed a corresponding building damage detection system....


  • ...…data Rehor et al. (2008) Optical and LiDAR data Aerial Yu et al. (2010) Rehor and Voegtle (2008) Satellite Hussain et al. (2011) Optical and ancillary data Aerial Guler and Turker (2004), Turker and San (2004), Turker and Sumer (2008), and Sumer and Turker (2006) Satellite Trianni and Gamba (2008)...


Journal ArticleDOI
TL;DR: This review paper, which summarizes the methods and the results of digital change detection in the optical/infrared domain, has as its primary objective a synthesis of the state of the art today.
Abstract: Techniques based on multi-temporal, multi-spectral, satellite-sensor-acquired data have demonstrated potential as a means to detect, identify, map and monitor ecosystem changes, irrespective of their causal agents. This review paper, which summarizes the methods and the results of digital change detection in the optical/infrared domain, has as its primary objective a synthesis of the state of the art today. It approaches digital change detection from three angles. First, the different perspectives from which the variability in ecosystems and the change events have been dealt with are summarized. Change detection between pairs of images (bi-temporal) as well as between time profiles of imagery derived indicators (temporal trajectories), and, where relevant, the appropriate choices for digital imagery acquisition timing and change interval length definition, are discussed. Second, pre-processing routines either to establish a more direct linkage between remote sensing data and biophysical phenomena, or to temporally mosaic imagery and extract time profiles, are reviewed. Third, the actual change detection methods themselves are categorized in an analytical framework and critically evaluated. Ultimately, the paper highlights how some of these methodological aspects are being fine-tuned as this review is being written, and we summarize the new developments that can be expected in the near future. The review highlights the high complementarity between different change detection methods.
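One of the reviewed categories, post-classification comparison, classifies each date independently and then compares the label maps; a "from-to" transition matrix summarizes what changed into what. The toy label maps and three-class legend below are illustrative assumptions.

```python
import numpy as np

labels_t1 = np.array([[0, 0, 1],
                      [0, 1, 1],
                      [2, 2, 2]])      # toy land-cover map, date 1
labels_t2 = np.array([[0, 1, 1],
                      [0, 1, 1],
                      [2, 2, 1]])      # toy land-cover map, date 2

changed = labels_t1 != labels_t2       # per-pixel change mask
# From-to change matrix: rows = class at t1, columns = class at t2
trans = np.zeros((3, 3), dtype=int)
np.add.at(trans, (labels_t1.ravel(), labels_t2.ravel()), 1)
print(changed.sum())                   # -> 2 changed pixels
print(trans)                           # off-diagonal entries are changes
```

The method's accuracy is bounded by the product of the two per-date classification accuracies, which is why the review weighs it against direct bi-temporal techniques.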

2,043 citations


"A comprehensive review of earthquak..." refers background in this paper

  • ...Post-classification comparison examines different temporal images after independent classification (Coppin et al., 2004)....


Journal ArticleDOI
08 Jul 1993-Nature
TL;DR: In this article, the authors used Synthetic Aperture Radar (SAR) interferometry to capture the movements produced by the 1992 earthquake in Landers, California, by combining topographic information with SAR images obtained by the ERS-1 satellite before and after the earthquake.
Abstract: Geodetic data, obtained by ground- or space-based techniques, can be used to infer the distribution of slip on a fault that has ruptured in an earthquake. Although most geodetic techniques require a surveyed network to be in place before the earthquake, satellite images, when collected at regular intervals, can capture co-seismic displacements without advance knowledge of the earthquake's location. Synthetic aperture radar (SAR) interferometry, first introduced in 1974 for topographic mapping, can also be used to detect changes in the ground surface, by removing the signal from the topography. Here we use SAR interferometry to capture the movements produced by the 1992 earthquake in Landers, California. We construct an interferogram by combining topographic information with SAR images obtained by the ERS-1 satellite before and after the earthquake. The observed changes in range from the ground surface to the satellite agree well with the slip measured in the field, with the displacements measured by surveying, and with the results of an elastic dislocation model. As a geodetic tool, the SAR interferogram provides a denser spatial sampling (100 m per pixel) than surveying methods and a better precision (∼3 cm) than previous space imaging techniques.
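The range-change measurement behind the interferogram follows from the two-way path geometry: an unwrapped phase change Δφ maps to a line-of-sight displacement d = (λ / 4π) · Δφ, so one fringe (2π) corresponds to λ/2 — about 2.83 cm for the ERS-1 C-band wavelength, consistent with the ∼3 cm precision quoted above. A minimal numeric check (the exact wavelength value and the sign convention are assumptions):

```python
import numpy as np

# Line-of-sight range change from unwrapped interferometric phase:
# d = (wavelength / (4 * pi)) * dphi    (two-way path, repeat-pass)
wavelength = 0.0566                        # ERS-1 C-band wavelength, metres
dphi = np.array([0.0, np.pi, 2 * np.pi])   # unwrapped phase samples (rad)
d_cm = (wavelength / (4 * np.pi)) * dphi * 100
print(d_cm)   # one full fringe (2*pi) corresponds to wavelength/2 in range
```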

1,970 citations


"A comprehensive review of earthquak..." refers methods in this paper

  • ..., 2009); InSAR (Interferometric Synthetic Aperture Radar) for measuring Earth’s surface deformation (Gabriel et al., 1989; Massonnet et al., 1993); optical, SAR, LiDAR (Light Detection And Ranging) data can also be used for building damage assessment (Ehrlich et al....


Journal ArticleDOI
TL;DR: In this article, a technique based on synthetic aperture radar (SAR) interferometry is described, which uses SAR images for measuring very small (1 cm or less) surface motions with good resolution (10 m) over swaths of up to 50 km.
Abstract: A technique is described, based on synthetic aperture radar (SAR) interferometry, which uses SAR images for measuring very small (1 cm or less) surface motions with good resolution (10 m) over swaths of up to 50 km. The method was applied to a Seasat data set of an imaging site in Imperial Valley, California, where motion effects were observed that were identified with movements due to the expansion of water-absorbing clays. The technique can be used for accurate measurements of many geophysical phenomena, including swelling and buckling in fault zones, residual displacements from seismic events, and prevolcanic swelling.

1,325 citations


"A comprehensive review of earthquak..." refers methods in this paper

  • ..., 2009); InSAR (Interferometric Synthetic Aperture Radar) for measuring Earth’s surface deformation (Gabriel et al., 1989; Massonnet et al., 1993); optical, SAR, LiDAR (Light Detection And Ranging) data can also be used for building damage assessment (Ehrlich et al....
