
Showing papers in "Isprs Journal of Photogrammetry and Remote Sensing in 2013"


Journal ArticleDOI
Masroor Hussain1, Dongmei Chen1, Angela Cheng1, Hui Wei, David Stanley 
TL;DR: This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques, which focus mainly on the spectral values and largely ignore the spatial context, followed by a review of object-based change detection techniques.
Abstract: The appetite for up-to-date information about the Earth’s surface is ever increasing, as such information provides a base for a large number of applications, including local, regional and global resources monitoring, land-cover and land-use change monitoring, and environmental studies. The data from remote sensing satellites provide opportunities to acquire information about land at varying resolutions and have been widely used for change detection studies. A large number of change detection methodologies and techniques, utilizing remotely sensed data, have been developed, and newer techniques are still emerging. This paper begins with a discussion of the traditionally pixel-based and (mostly) statistics-oriented change detection techniques, which focus mainly on the spectral values and largely ignore the spatial context. This is succeeded by a review of object-based change detection techniques. Finally, there is a brief discussion of spatial data mining techniques in image processing and change detection from remote sensing data. The merits and issues of different techniques are compared. The importance of the exponential increase in the image data volume and multiple sensors and associated challenges on the development of change detection techniques are highlighted. With the wide use of very-high-resolution (VHR) remotely sensed images, object-based methods and data mining techniques may have more potential in change detection.
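The classical pixel-based differencing that the review opens with can be sketched in a few lines. This is an illustrative example, not any specific method from the survey; the mean-plus-k-sigma threshold on the difference magnitude is an assumed heuristic.

```python
import numpy as np

def image_difference_change(img_t1, img_t2, k=2.0):
    """Pixel-based change detection by band differencing.

    A pixel is flagged as changed when its absolute difference exceeds
    mean + k * std of the difference image (an assumed heuristic)."""
    diff = img_t2.astype(float) - img_t1.astype(float)
    mag = np.abs(diff)
    threshold = mag.mean() + k * mag.std()
    return mag > threshold

# Synthetic example: a flat scene with one changed patch.
t1 = np.full((50, 50), 100.0)
t2 = t1.copy()
t2[10:20, 10:20] += 80.0   # simulated land-cover change
mask = image_difference_change(t1, t2, k=2.0)
```

Object-based methods, by contrast, would first segment the image and compare statistics per segment rather than per pixel.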

1,159 citations


Journal ArticleDOI
TL;DR: In this article, a 3D point cloud comparison method is proposed to measure surface changes via surface normal estimation and orientation in 3D at a scale consistent with the local surface roughness.
Abstract: Surveying techniques such as terrestrial laser scanning have recently been used to measure surface changes via 3D point cloud (PC) comparison. Two types of approaches have been pursued: 3D tracking of homologous parts of the surface to compute a displacement field, and distance calculation between two point clouds when homologous parts cannot be defined. This study deals with the second approach, typical of natural surfaces altered by erosion, sedimentation or vegetation between surveys. Current comparison methods are based on a closest point distance or require at least one of the PCs to be meshed, with severe limitations when surfaces present roughness elements at all scales. To solve these issues, we introduce a new algorithm performing a direct comparison of point clouds in 3D. The method has two steps: (1) surface normal estimation and orientation in 3D at a scale consistent with the local surface roughness; (2) measurement of the mean surface change along the normal direction with explicit calculation of a local confidence interval. Comparison with existing methods demonstrates the higher accuracy of our approach, as well as an easier workflow due to the absence of surface meshing or Digital Elevation Model (DEM) generation. Application of the method in a rapidly eroding, meandering bedrock river (Rangitikei River canyon) illustrates its ability to handle 3D differences in complex situations (flat and vertical surfaces on the same scene), to reduce uncertainty related to point cloud roughness by local averaging and to generate 3D maps of uncertainty levels. We also demonstrate that for high precision survey scanners, the total error budget on change detection is dominated by the point cloud registration error and the surface roughness. Combined with mm-range local georeferencing of the point clouds, levels of detection down to 6 mm (defined at 95% confidence) can be routinely attained in situ over ranges of 50 m.
We provide evidence for the self-affine behaviour of different surfaces. We show how this impacts the calculation of normal vectors and demonstrate the scaling behaviour of the level of change detection. The algorithm has been implemented in a freely available open source software package. It operates in complex 3D cases and can also be used as a simpler and more robust alternative to DEM differencing for the 2D cases.
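The two-step idea above (PCA-based normal estimation, then averaged change along the normal) can be illustrated with a simplified sketch. This is not the authors' algorithm: the fixed spherical neighbourhood radius is an assumption, the normal sign is left unoriented, and the confidence interval calculation is omitted.

```python
import numpy as np

def plane_normal(points):
    """Normal of the best-fit plane: eigenvector of the local
    covariance with the smallest eigenvalue (PCA)."""
    centered = points - points.mean(axis=0)
    eigvals, eigvecs = np.linalg.eigh(centered.T @ centered)
    return eigvecs[:, 0]   # eigh sorts eigenvalues ascending

def normal_distance(core, cloud1, cloud2, radius=1.0):
    """Mean change between the two clouds along the local normal at
    `core`; neighbourhoods come from fixed spherical regions (an
    assumption for this sketch). Sign depends on normal orientation."""
    n1 = cloud1[np.linalg.norm(cloud1 - core, axis=1) < radius]
    n2 = cloud2[np.linalg.norm(cloud2 - core, axis=1) < radius]
    normal = plane_normal(n1)
    return ((n2 - core) @ normal).mean() - ((n1 - core) @ normal).mean()

# A flat surface raised uniformly by 0.5 m between the two surveys.
xs, ys = np.meshgrid(np.linspace(-1, 1, 11), np.linspace(-1, 1, 11))
c1 = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
c2 = c1 + np.array([0.0, 0.0, 0.5])
core = np.array([0.0, 0.0, 0.0])
d = normal_distance(core, c1, c2, radius=0.8)
```

Averaging over the neighbourhood is what reduces the roughness-related uncertainty relative to a closest-point distance.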

881 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluated the potential of the future ESA Sentinel-2 (S-2) satellite for estimation of canopy chlorophyll content, leaf area index (LAI) and leaf chlorophyll concentration (LCC) using data from multiple field campaigns.
Abstract: The red edge position (REP) in the vegetation spectral reflectance is a surrogate measure of vegetation chlorophyll content, and hence can be used to monitor the health and function of vegetation. The Multi-Spectral Instrument (MSI) aboard the future ESA Sentinel-2 (S-2) satellite will provide the opportunity for estimation of the REP at much higher spatial resolution (20 m) than has been previously possible with spaceborne sensors such as Medium Resolution Imaging Spectrometer (MERIS) aboard ENVISAT. This study aims to evaluate the potential of S-2 MSI sensor for estimation of canopy chlorophyll content, leaf area index (LAI) and leaf chlorophyll concentration (LCC) using data from multiple field campaigns. Included in the assessed field campaigns are results from SEN3Exp in Barrax, Spain composed of 35 elementary sampling units (ESUs) of LCC and LAI which have been assessed for correlation with simulated MSI data using a CASI airborne imaging spectrometer. Analysis also presents results from SicilyS2EVAL, a campaign consisting of 25 ESUs in Sicily, Italy supported by a simultaneous Specim Aisa-Eagle data acquisition. In addition, these results were compared to outputs from the PROSAIL model for similar values of biophysical variables in the ESUs. The paper in turn assessed the scope of S-2 for retrieval of biophysical variables using these combined datasets through investigating the performance of the relevant Vegetation Indices (VIs) as well as presenting the novel Inverted Red-Edge Chlorophyll Index (IRECI) and Sentinel-2 Red-Edge Position (S2REP). Results indicated significant relationships between both canopy chlorophyll content and LAI for simulated MSI data using IRECI or the Normalised Difference Vegetation Index (NDVI) while S2REP and the MERIS Terrestrial Chlorophyll Index (MTCI) were found to have the strongest correlation for retrieval of LCC.
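Of the vegetation indices assessed, the NDVI has a standard closed form; a minimal implementation follows (the band reflectance values in the example are illustrative, not from the campaigns).

```python
import numpy as np

def ndvi(nir, red):
    """Normalised Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Healthy vegetation reflects strongly in the NIR and absorbs red light.
value = ndvi(0.45, 0.05)   # (0.45 - 0.05) / (0.45 + 0.05) = 0.8
```

Red-edge indices such as IRECI and S2REP additionally exploit the MSI bands between the red and NIR regions.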

463 citations


Journal ArticleDOI
TL;DR: This paper provides a comprehensive review of remote sensing methods in two categories: multi-temporal techniques that evaluate the changes between the pre- and post-event data and mono-temporal techniques that interpret only the post-event data.
Abstract: Earthquakes are among the most catastrophic natural disasters to affect mankind. One of the critical problems after an earthquake is building damage assessment. The area, amount, rate, and type of the damage are essential information for rescue, humanitarian and reconstruction operations in the disaster area. Remote sensing techniques play an important role in obtaining building damage information because of their non-contact, low cost, wide field of view, and fast response capacities. Now that more and diverse types of remote sensing data become available, various methods are designed and reported for building damage assessment. This paper provides a comprehensive review of these methods in two categories: multi-temporal techniques that evaluate the changes between the pre- and post-event data and mono-temporal techniques that interpret only the post-event data. Both categories of methods are discussed and evaluated in detail in terms of the type of remote sensing data utilized, including optical, LiDAR and SAR data. Performances of the methods and future efforts are drawn from this extensive evaluation.

339 citations


Journal ArticleDOI
TL;DR: Historic Building Information Modelling can automatically create cut sections, details and schedules in addition to the orthographic projections and 3D models for both the analysis and conservation of historic objects, structures and environments.
Abstract: Historic Building Information Modelling (HBIM) is a novel prototype library of parametric objects, based on historic architectural data and a system of cross platform programmes for mapping parametric objects onto point cloud and image survey data. The HBIM process begins with remote collection of survey data using a terrestrial laser scanner combined with digital photo modelling. The next stage involves the design and construction of a parametric library of objects, which are based on the manuscripts ranging from Vitruvius to 18th century architectural pattern books. In building parametric objects, the problem of file format and exchange of data has been overcome within the BIM ArchiCAD software platform by using geometric descriptive language (GDL). The plotting of parametric objects onto the laser scan surveys as building components to create or form the entire building is the final stage in the reverse engineering process. The final HBIM product is the creation of full 3D models including detail behind the object’s surface concerning its methods of construction and material make-up. The resultant HBIM can automatically create cut sections, details and schedules in addition to the orthographic projections and 3D models (wire frame or textured) for both the analysis and conservation of historic objects, structures and environments.

327 citations


Journal ArticleDOI
TL;DR: The Simple Morphological Filter is intended to serve as a stable base from which more advanced progressive filters can be designed and is particularly effective at minimizing Type I error rates, while maintaining acceptable Type II error rates.
Abstract: Terrain classification of LIDAR point clouds is a fundamental problem in the production of Digital Elevation Models (DEMs). The Simple Morphological Filter (SMRF) addresses this problem by applying image processing techniques to the data. This implementation uses a linearly increasing window and simple slope thresholding, along with a novel application of image inpainting techniques. When tested against the ISPRS LIDAR reference dataset, SMRF achieved a mean 85.4% Kappa score when using a single parameter set and 90.02% when optimized. SMRF is intended to serve as a stable base from which more advanced progressive filters can be designed. This approach is particularly effective at minimizing Type I error rates, while maintaining acceptable Type II error rates. As a result, the final surface preserves subtle surface variation in the form of tracks and trails that make this approach ideally suited for the production of DEMs used as ground surfaces in immersive virtual environments.
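The combination of a linearly increasing window and simple slope thresholding can be illustrated with a simplified progressive morphological filter. This is a sketch in the spirit of SMRF, not its published implementation; the window sizes, slope parameter, and the omission of the inpainting step are all assumptions.

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_ground_mask(dem, cell_size=1.0, max_window=5, slope=0.15):
    """Ground/non-ground mask via a linearly increasing morphological
    opening window with a simple slope-based elevation threshold."""
    ground = np.ones(dem.shape, dtype=bool)
    last = dem.astype(float)
    for w in range(1, max_window + 1):
        opened = grey_opening(last, size=2 * w + 1)
        # Allowed elevation difference grows with window radius and slope;
        # cells rising above it are classified as non-ground (objects).
        ground &= (last - opened) <= slope * w * cell_size
        last = opened
    return ground

# Flat terrain with a 3 m high, 3x3-cell "building".
dem = np.zeros((20, 20))
dem[8:11, 8:11] = 3.0
ground = progressive_ground_mask(dem)
```

Small windows leave the building intact, but once the window exceeds its footprint the opening removes it and the elevation residual flags it as non-ground.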

227 citations


Journal ArticleDOI
TL;DR: Support Vector Machines were shown to be affected by feature space size and could benefit from RF-based feature selection and Uncertainty measures from SVM are an informative source of information on the spatial distribution of error in the crop maps.
Abstract: Crop mapping is one major component of agricultural resource monitoring using remote sensing. Yield and water demand modeling require knowledge of both the total cultivated surface and the accurate spatial distribution of crops. Map quality is crucial and influences the model outputs. Although the use of multi-spectral time series data in crop mapping has been acknowledged, the potentially high dimensionality of the input data remains an issue. In this study Support Vector Machines (SVM) are used for crop classification in irrigated landscapes at the object-level. Input to the classifications consists of 71 multi-seasonal spectral and geostatistical features computed from RapidEye time series. The random forest (RF) feature importance score was used to select a subset of features that achieved optimal accuracies. The relationship between the hard result accuracy and the soft output from the SVM is investigated by employing two measures of uncertainty, the maximum a posteriori probability and the alpha quadratic entropy. Specifically, the effect of feature selection on map uncertainty is investigated by looking at the soft outputs of the SVM, in addition to classical accuracy metrics. Overall, the SVMs applied to the reduced feature subspaces that were composed of the most informative multi-seasonal features led to a clear increase in classification accuracy of up to 4.3%, and to a significant decline in thematic uncertainty. SVM was shown to be affected by feature space size and could benefit from RF-based feature selection. Uncertainty measures from SVM are an informative source of information on the spatial distribution of error in the crop maps.
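The RF-based feature selection feeding an SVM can be sketched with scikit-learn on synthetic data. The feature counts, estimator settings and top-k cut-off below are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 300
# Two informative "multi-seasonal" features plus eight noise features.
informative = rng.normal(size=(n, 2))
noise = rng.normal(size=(n, 8))
X = np.hstack([informative, noise])
y = (informative[:, 0] + informative[:, 1] > 0).astype(int)

# Rank features with the random forest importance score ...
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:2]

# ... then train the SVM on the reduced feature subspace; the soft
# class probabilities are the basis for uncertainty measures.
svm = SVC(probability=True, random_state=0).fit(X[:, top], y)
acc = svm.score(X[:, top], y)
```

On the reduced subspace the SVM trains faster and, as the study reports for real data, a leaner feature set can both raise accuracy and lower thematic uncertainty.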

226 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed the use of stem curve and crown size geometric measurements from TLS data as a basis for allometric biomass models rather than statistical three-dimensional point metrics, since TLS statistical metrics are dependent on various scanning parameters and tree neighbourhood characteristics.
Abstract: Determination of stem and crown biomass requires accurate measurements of individual tree stem, bark, branch and needles. These measurements are time-consuming especially for mature trees. Accurate field measurements can be done only in a destructive manner. Terrestrial laser scanning (TLS) measurements are a viable option for measuring the reference information needed. TLS measurements provide dense point clouds in which features describing biomass can be extracted for stem form and canopy dimensions. Existing biomass models do not utilise canopy size information and therefore TLS-based estimation methods should improve the accuracy of biomass estimation. The main objective of this study was to estimate single-tree-level aboveground biomass (AGB), based on models developed using TLS data. The modelling dataset included 64 laboratory-measured trees. Models were developed for total AGB, tree stem-, living branch- and dead branch biomass. Modelling results were also compared with existing individual tree-level biomass models and showed that AGB estimation accuracies were improved, compared with those of existing models. However, current biomass models based on diameter-at-breast height (DBH), tree height and species worked rather well for stem- and total biomass. TLS-based models improved estimation accuracies, especially estimation of branch biomass. We suggest the use of stem curve and crown size geometric measurements from TLS data as a basis for allometric biomass models rather than statistical three-dimensional point metrics, since TLS statistical metrics are dependent on various scanning parameters and tree neighbourhood characteristics.
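A classical DBH-based allometric model of the kind the TLS models are compared against can be fitted by linear regression in log-log space. The coefficients and tree data below are synthetic, purely for illustration; they are not the paper's models.

```python
import numpy as np

def fit_allometric(dbh, agb):
    """Fit the classical allometric form AGB = a * DBH**b by linear
    regression of ln(AGB) on ln(DBH)."""
    b, ln_a = np.polyfit(np.log(dbh), np.log(agb), 1)
    return np.exp(ln_a), b

# Synthetic trees generated from AGB = 0.1 * DBH**2.4 exactly.
dbh = np.linspace(5.0, 40.0, 20)
agb = 0.1 * dbh ** 2.4
a, b = fit_allometric(dbh, agb)
```

The study's point is that adding TLS-derived stem curve and crown size predictors to such DBH-and-height models particularly improves branch biomass estimates.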

225 citations


Journal ArticleDOI
TL;DR: The proposed method partitions MLS point clouds into a set of consecutive “scanning lines”, which each consists of a road cross section, and a moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns.
Abstract: Accurate 3D road information is important for applications such as road maintenance and virtual 3D modeling. Mobile laser scanning (MLS) is an efficient technique for capturing dense point clouds that can be used to construct detailed road models for large areas. This paper presents a method for extracting and delineating roads from large-scale MLS point clouds. The proposed method partitions MLS point clouds into a set of consecutive “scanning lines”, each of which corresponds to a road cross section. A moving window operator is used to filter out non-ground points line by line, and curb points are detected based on curb patterns. The detected curb points are tracked and refined so that they are both globally consistent and locally similar. To evaluate the validity of the proposed method, experiments were conducted using two types of street-scene point clouds captured by Optech’s Lynx Mobile Mapper System. The completeness, correctness, and quality of the extracted roads are over 94.42%, 91.13%, and 91.3%, respectively, demonstrating that the proposed method is a promising solution for extracting 3D roads from MLS point clouds.
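The scan-line idea, detecting curbs as short, bounded elevation jumps within a road cross section, can be sketched on a 1D elevation profile. The height bounds and the synthetic profile below are assumptions for illustration, not the paper's curb patterns or parameters.

```python
import numpy as np

def find_curb_indices(cross_section_z, min_height=0.08, max_height=0.30):
    """Detect candidate curb points on one 'scanning line' (a road
    cross section) as locations where elevation jumps by a typical
    curb height between adjacent points."""
    z = np.asarray(cross_section_z, dtype=float)
    jumps = np.abs(np.diff(z))
    return np.where((jumps >= min_height) & (jumps <= max_height))[0]

# Synthetic cross section: a flat road between two 0.15 m curbs.
profile = np.concatenate([np.full(20, 0.15),   # left sidewalk
                          np.full(60, 0.0),    # road surface
                          np.full(20, 0.15)])  # right sidewalk
curbs = find_curb_indices(profile)
```

The tracking and refinement stage of the paper would then enforce global consistency of these per-line detections along the road.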

219 citations


Journal ArticleDOI
Ali Ozgun Ok1
TL;DR: A distinctive specialty of the proposed approach is its applicability to buildings with diverse characteristics as well as to VHR images with significantly different illumination properties.
Abstract: In this study, we propose a novel methodology for automated detection of buildings from single very-high-resolution (VHR) multispectral images. The methodology uses the principal evidence of buildings: the shadows that they cast. We model the directional spatial relationship between buildings and their shadows using a recently proposed probabilistic landscape approach. An effective shadow post-processing step is developed to focus on landscapes that belong to building regions. The building regions are detected using an original two-level graph theory approach. In the first level, each shadow region is addressed separately, and building regions are identified via iterative graph cuts designed in two-label partitioning. The final building regions are characterised in a second level, in which the previously labelled building regions are subjected to a single-step multi-label graph optimisation performed over the entire image domain. Numerical assessments performed on 16 VHR GeoEye-1 images demonstrate that the proposed approach is highly robust and reliable. A distinctive specialty of the proposed approach is its applicability to buildings with diverse characteristics as well as to VHR images with significantly different illumination properties.

208 citations


Journal ArticleDOI
TL;DR: An efficient octree to store and compress 3D data without loss of precision is proposed and its usage is demonstrated for an exchange file format, fast point cloud visualization, sped-up 3D scan matching, and shape detection algorithms.
Abstract: Automated 3-dimensional modeling pipelines include 3D scanning, registration, data abstraction, and visualization. All steps in such a pipeline require the processing of a massive amount of 3D data, due to the ability of current 3D scanners to sample environments with a high density. The increasing sampling rates make it easy to acquire billions of spatial data points. This paper presents algorithms and data structures for handling these data. We propose an efficient octree to store and compress 3D data without loss of precision. We demonstrate its usage for an exchange file format, fast point cloud visualization, sped-up 3D scan matching, and shape detection algorithms. We evaluate our approach using typical terrestrial laser scans.
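A minimal lossless point octree along these lines might look as follows. The leaf capacity and recursive octant split are illustrative choices; the published structure additionally compresses coordinates and supports serialization, which this sketch omits.

```python
import numpy as np

class Octree:
    """Minimal point octree: leaves store the raw coordinates, so the
    structure is lossless; internal nodes split space into 8 octants."""

    def __init__(self, center, half_size, capacity=8):
        self.center = np.asarray(center, dtype=float)
        self.half = half_size
        self.capacity = capacity
        self.points = []        # populated while this node is a leaf
        self.children = None    # dict: octant index -> Octree, after split

    def insert(self, p):
        if self.children is None:
            self.points.append(np.asarray(p, dtype=float))
            if len(self.points) > self.capacity:
                self._split()
        else:
            self._child_for(p).insert(p)

    def _split(self):
        pts, self.points, self.children = self.points, [], {}
        for p in pts:
            self._child_for(p).insert(p)

    def _child_for(self, p):
        octant = tuple(int(p[i] > self.center[i]) for i in range(3))
        if octant not in self.children:
            offset = (np.array(octant) * 2 - 1) * (self.half / 2)
            self.children[octant] = Octree(self.center + offset,
                                           self.half / 2, self.capacity)
        return self.children[octant]

    def count(self):
        if self.children is None:
            return len(self.points)
        return sum(child.count() for child in self.children.values())

rng = np.random.default_rng(1)
pts = rng.uniform(-1.0, 1.0, size=(100, 3))
tree = Octree(center=[0.0, 0.0, 0.0], half_size=1.0)
for p in pts:
    tree.insert(p)
```

Spatial queries (visualization culling, scan matching) then only descend into octants that intersect the query region.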

Journal ArticleDOI
TL;DR: The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computationally effective time cost, and that it segments pole-like objects particularly well.
Abstract: Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to geometric features using support vector machines (SVMs). Second, a set of rules are defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and computationally effective time cost, and that it segments pole-like objects particularly well.

Journal ArticleDOI
TL;DR: In this article, state-of-the-art methods on shadow detection were surveyed and categorized into six classes: histogram thresholding, invariant color models, object segmentation, geometrical methods, physics-based methods, unsupervised and supervised machine learning methods.
Abstract: Automatic shadow detection is a very important pre-processing step for many remote sensing applications, particularly for images acquired with high spatial resolution. In complex urban environments, shadows may occupy a significant portion of the image. Ignoring these regions would lead to errors in various applications, such as atmospheric correction and classification. To better understand the radiative impact of shadows, a physical study was conducted through the simulation of a synthetic urban canyon scene. Its results helped to explain the most common assumptions made on shadows from a physical point of view in the literature. With this understanding, state-of-the-art methods on shadow detection were surveyed and categorized into six classes: histogram thresholding, invariant color models, object segmentation, geometrical methods, physics-based methods, unsupervised and supervised machine learning methods. Among them, some methods were selected and tested on a large dataset of multispectral and hyperspectral airborne images with high spatial resolution. The dataset chosen contains a large variety of typical occidental urban scenes. The results were compared based on accurate reference shadow masks. In these experiments, histogram thresholding on RGB and NIR channels performed the best with an average accuracy of 92.5%, followed by physics-based methods, such as Richter’s method with 90.0%. Finally, this paper analyzes and discusses the limits of these algorithms, concluding with some recommendations for shadow detection.
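The best-performing class in the comparison, histogram thresholding on visible and NIR channels, can be sketched as follows. The percentile-based threshold is an assumed simplification; the surveyed methods derive thresholds from the actual histogram shape.

```python
import numpy as np

def shadow_mask(rgb, nir, percentile=20):
    """Histogram-thresholding shadow detector: pixels dark in both
    the visible brightness and the NIR channel are flagged as shadow.
    The percentile cut-off is an illustrative assumption."""
    brightness = rgb.mean(axis=-1)
    t_vis = np.percentile(brightness, percentile)
    t_nir = np.percentile(nir, percentile)
    return (brightness <= t_vis) & (nir <= t_nir)

# Synthetic scene: a dark lower-left block simulating a cast shadow.
rgb = np.full((10, 10, 3), 200.0)
nir = np.full((10, 10), 180.0)
rgb[6:, :5] = 30.0
nir[6:, :5] = 20.0
mask = shadow_mask(rgb, nir)
```

Requiring darkness in NIR as well as in the visible bands helps separate shadows from intrinsically dark materials such as asphalt.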

Journal ArticleDOI
TL;DR: In this paper, the authors argue that GEOBIA would benefit from another technical enhancement involving knowledge representation techniques such as ontologies, and summarize the main applications of ontologies in GEOBIA, especially for data discovery, automatic image interpretation, data interoperability, workflow management and data publication.
Abstract: Geographic Object-Based Image Analysis (GEOBIA) represents the most innovative new trend for processing remote sensing images that has appeared during the last decade. However, its application is mainly based on expert knowledge, which consequently highlights important scientific issues with respect to the robustness of the methods applied in GEOBIA. In this paper, we argue that GEOBIA would benefit from another technical enhancement involving knowledge representation techniques such as ontologies. Although the role of ontologies in Geographical Information Sciences (GISciences) is not a new topic, few works have discussed how ontologies, considered from the perspective of a remote sensing specialist, can contribute to advancing remote sensing science. We summarize the main applications of ontologies in GEOBIA, especially for data discovery, automatic image interpretation, data interoperability, workflow management and data publication. Finally, we discuss the major issues related to the construction of ontologies suitable for remote sensing applications and outline long-term future advances that can be expected for the remote sensing community.

Journal ArticleDOI
TL;DR: This paper demonstrates how the knowledge of the shape that best fits the local geometry of each 3D point neighborhood can improve the speed and the accuracy of each of these steps of the Iterative Closest Point algorithm.
Abstract: Automatic 3D point cloud registration is a main issue in computer vision and remote sensing. One of the most commonly adopted solutions is the well-known Iterative Closest Point (ICP) algorithm. This standard approach performs a fine registration of two overlapping point clouds by iteratively estimating the transformation parameters, assuming a good a priori alignment is provided. A large body of literature has proposed many variations in order to improve each step of the process (namely selecting, matching, rejecting, weighting and minimizing). The aim of this paper is to demonstrate how the knowledge of the shape that best fits the local geometry of each 3D point neighborhood can improve the speed and the accuracy of each of these steps. First we present the geometrical features that form the basis of this work. These low-level attributes describe the neighborhood shape around each 3D point. They allow retrieval of the optimal size to analyze the neighborhoods at various scales as well as the privileged local dimension (linear, planar, or volumetric). Several variations of each step of the ICP process are then proposed and analyzed by introducing these features. Such variants are compared on real datasets with the original algorithm in order to retrieve the most efficient algorithm for the whole process. The method is successfully applied to various 3D lidar point clouds from airborne, terrestrial, and mobile mapping systems. Improvement for two ICP steps has been noted, and we conclude that our features may not be relevant for very dissimilar object samplings.
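The standard point-to-point ICP loop that these variants build on can be sketched with a k-d tree for the matching step and the Kabsch/SVD solution for the rigid transform. This is the baseline algorithm only, not the shape-feature variants proposed in the paper; the lattice test data are synthetic.

```python
import numpy as np
from scipy.spatial import cKDTree

def best_rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/SVD) mapping src onto dst."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, mu_d - R @ mu_s

def icp(source, target, iterations=10):
    """Plain point-to-point ICP: match each point to its closest target
    point, estimate a rigid transform, apply it, repeat. Assumes a
    reasonable initial alignment, as standard ICP does."""
    tree = cKDTree(target)
    current = source.copy()
    for _ in range(iterations):
        _, idx = tree.query(current)            # matching step
        R, t = best_rigid_transform(current, target[idx])
        current = current @ R.T + t             # minimizing step
    return current

# Synthetic test: a small lattice displaced by a known translation.
grid = np.array(np.meshgrid(np.arange(5.0), np.arange(4.0),
                            np.arange(3.0))).reshape(3, -1).T
shifted = grid + np.array([0.1, 0.05, 0.0])
aligned = icp(shifted, grid)
```

The paper's contribution is to inject local-shape knowledge into the selecting, matching, rejecting and weighting steps of this loop.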

Journal ArticleDOI
TL;DR: In this article, a mapping method based on the Landsat data and a decision tree classification algorithm is described, which is used to automatically derive a spatially and temporally explicit time-series of surface water body extent on the Swan Coastal Plain.
Abstract: Detailed information on the spatiotemporal dynamic in surface water bodies is important for quantifying the effects of a drying climate, increased water abstraction and rapid urbanization on wetlands. The Swan Coastal Plain (SCP) with over 1500 wetlands is a global biodiversity hotspot located in the southwest of Western Australia, where more than 70% of the wetlands have been lost since European settlement. SCP is located in an area affected by recent climate change that also experiences rapid urban development and ground water abstraction. Landsat TM and ETM+ imagery from 1999 to 2011 has been used to automatically derive a spatially and temporally explicit time-series of surface water body extent on the SCP. A mapping method based on the Landsat data and a decision tree classification algorithm is described. Two generic classifiers were derived for the Landsat 5 and Landsat 7 data. Several landscape metrics were computed to summarize the intra and interannual patterns of surface water dynamic. Top of the atmosphere (TOA) reflectance of band 5 followed by TOA reflectance of bands 4 and 3 were the explanatory variables most important for mapping surface water bodies. Accuracy assessment yielded an overall classification accuracy of 96%, with 89% producer’s accuracy and 93% user’s accuracy of surface water bodies. The number, mean size, and total area of water bodies showed high seasonal variability with highest numbers in winter and lowest numbers in summer. The number of water bodies in winter increased until 2005 after which a decline can be noted. The lowest numbers occurred in 2010 which coincided with one of the years with the lowest rainfall in the area. Understanding the spatiotemporal dynamic of surface water bodies on the SCP constitutes the basis for understanding the effect of rainfall, water abstraction and urban development on water bodies in a spatially explicit way.
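A rule-of-thumb classifier exploiting the finding that band 5 (SWIR) and band 4 (NIR) reflectance are the most important explanatory variables might look as follows. The thresholds are illustrative assumptions, not the trained decision tree from the study.

```python
import numpy as np

def water_mask(nir, swir, t_nir=0.10, t_swir=0.06):
    """Flag pixels as surface water when TOA reflectance is low in both
    the NIR (Landsat band 4) and SWIR (band 5) channels, where water
    absorbs strongly. Thresholds are illustrative assumptions."""
    return (np.asarray(swir) < t_swir) & (np.asarray(nir) < t_nir)

# Synthetic TOA reflectance: a small wet patch inside dry land.
b4 = np.full((8, 8), 0.25)   # NIR, dry land
b5 = np.full((8, 8), 0.20)   # SWIR, dry land
b4[2:5, 2:5] = 0.04          # water patch
b5[2:5, 2:5] = 0.02
mask = water_mask(b4, b5)
```

Applying such per-scene masks across the 1999-2011 time series is what yields the seasonal counts and areas of water bodies reported above.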

Journal ArticleDOI
TL;DR: In this paper, a contribution index was proposed to explore the effect of urbanization on land surface temperature (LST) using Moderate-Resolution Imaging Spectroradiometer (MODIS)-derived data with high temporal resolution.
Abstract: Beijing has experienced rapid urbanization and associated urban heat island effects and air pollution. In this study, a contribution index was proposed to explore the effect of urbanization on land surface temperature (LST) using Moderate-Resolution Imaging Spectroradiometer (MODIS)-derived data with high temporal resolution. The analysis indicated that different zones and landscapes make diurnally and seasonally different contributions to the regional thermal environment. The differences in contributions by the three main functional zones resulted from differences in their landscape compositions. The roles of landscapes in this process varied diurnally and seasonally. Urban land was the most important contributor to increases in regional LSTs. The contributions of cropland and forest varied distinctly between daytime and nighttime owing to differences in their thermal inertias. Vegetation had a notable cooling effect as the normalized difference vegetation index (NDVI) increased during summer. However, when the NDVI reached a certain value, the nighttime LST shifted markedly in other seasons. The results suggest that urban design based on vegetation partitions would be effective for regulating the thermal environment.

Journal ArticleDOI
TL;DR: In this article, the authors developed an object detection framework using a discriminatively trained mixture model, which is mainly composed of two stages: model training and object detection, where multi-scale histogram of oriented gradients (HOG) feature pyramids of all training samples are constructed.
Abstract: Automatically detecting objects with complex appearance and arbitrary orientations in remote sensing imagery (RSI) is a big challenge. To explore a possible solution to the problem, this paper develops an object detection framework using a discriminatively trained mixture model. It is mainly composed of two stages: model training and object detection. In the model training stage, multi-scale histogram of oriented gradients (HOG) feature pyramids of all training samples are constructed. A mixture of multi-scale deformable part-based models is then trained for each object category by training a latent Support Vector Machine (SVM), where each part-based model is composed of a coarse root filter, a set of higher resolution part filters, and a set of deformation models. In the object detection stage, given a test imagery, its multi-scale HOG feature pyramid is firstly constructed. Then, object detection is performed by computing and thresholding the response of the mixture model. The quantitative comparisons with state-of-the-art approaches on two datasets demonstrate the effectiveness of the developed framework.
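The HOG features underlying the mixture model can be illustrated for a single cell: gradient orientations vote into unsigned bins, weighted by gradient magnitude. This is a minimal sketch, not the multi-scale feature pyramid or block normalization used in the paper.

```python
import numpy as np

def hog_cell_histogram(patch, n_bins=9):
    """Histogram of oriented gradients for a single cell: each pixel
    votes into an unsigned-orientation bin, weighted by its gradient
    magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bins = (angle / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist

# A vertical step edge yields purely horizontal gradients, so all of
# the histogram mass falls into the first (0 degree) orientation bin.
patch = np.zeros((8, 8))
patch[:, 4:] = 1.0
h = hog_cell_histogram(patch)
```

A detector then concatenates such cell histograms over a window and scores them with the learned root and part filters.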

Journal ArticleDOI
TL;DR: In this article, a new method for automatic 3D roof extraction through an effective integration of LIDAR (Light Detection And Ranging) data and multispectral orthoimagery is proposed.
Abstract: Automatic 3D extraction of building roofs from remotely sensed data is important for many applications, including city modelling. This paper proposes a new method for automatic 3D roof extraction through an effective integration of LIDAR (Light Detection And Ranging) data and multispectral orthoimagery. Using the ground height from a DEM (Digital Elevation Model), the raw LIDAR points are separated into two groups. The first group contains the ground points that are exploited to constitute a ‘ground mask’. The second group contains the non-ground points, which are segmented using an innovative image line guided segmentation technique to extract the roof planes. The image lines are extracted from the grey-scale version of the orthoimage and then classified into several classes such as ‘ground’, ‘tree’, ‘roof edge’ and ‘roof ridge’ using the ground mask and colour and texture information from the orthoimagery. During segmentation of the non-ground LIDAR points, the lines from the latter two classes are used as baselines to locate the nearby LIDAR points of the neighbouring planes. For each plane a robust seed region is thereby defined using the nearby non-ground LIDAR points of a baseline, and this region is iteratively grown to extract the complete roof plane. Finally, a newly proposed rule-based procedure is applied to remove planes constructed on trees. Experimental results show that the proposed method can successfully remove vegetation and so offers high extraction rates.

Journal ArticleDOI
TL;DR: This work proposes an automatic process with emphasis on top-down approaches to automatic 3D building roof reconstruction from airborne laser scanning point clouds, and conducts a generative modeling to reconstruct roof models that fit the data.
Abstract: This paper presents a generative statistical approach to automatic 3D building roof reconstruction from airborne laser scanning point clouds. Previous works widely use bottom-up methods, e.g., point clustering, plane detection, and contour extraction. Due to the data artefacts caused by tree clutter, reflection from windows, water features, etc., bottom-up reconstruction in urban areas may suffer from a number of incomplete or irregular roof parts, and manually given geometric constraints are usually needed to ensure plausible results. In this work we propose an automatic process with emphasis on top-down approaches. The input point cloud is first pre-segmented into subzones containing a limited number of buildings to reduce the computational complexity for large urban scenes. For building extraction and reconstruction in the subzones we propose a purely top-down statistical scheme, in which bottom-up efforts or additional data such as building footprints are no longer required. Based on a predefined primitive library we conduct generative modeling to reconstruct roof models that fit the data. Primitives are assembled into an entire roof with given rules of combination and merging; overlaps of primitives are allowed in the assembly. The selection of roof primitives, as well as the sampling of their parameters, is driven by a variant of the Markov Chain Monte Carlo technique with a specified jump mechanism. Experiments are performed on datasets of different building types (from simple houses and high-rise buildings to combined building groups) and resolutions. The results show robustness to the data artefacts mentioned above and plausible reconstructions.
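The Markov Chain Monte Carlo parameter sampling behind such generative fitting can be illustrated with a toy Metropolis-Hastings sampler for a single flat-roof height (a hypothetical one-parameter case with an assumed Gaussian likelihood; the paper samples over a whole primitive library with jump moves):

```python
import numpy as np

def mh_roof_height(z, n_iter=3000, step=0.1, sigma=0.3, seed=0):
    """Metropolis-Hastings sampling of one flat-roof height h given noisy
    LiDAR heights z. Proposals are Gaussian random walks; a proposal is
    accepted with probability exp(-(E_new - E_old)), where E is the
    negative log-likelihood of the data under height h.
    """
    rng = np.random.default_rng(seed)
    z = np.asarray(z, float)
    def energy(h):
        return ((z - h) ** 2).sum() / (2 * sigma**2)
    h = float(z[0])                      # start at an arbitrary data point
    e = energy(h)
    samples = []
    for _ in range(n_iter):
        h_new = h + step * rng.standard_normal()
        e_new = energy(h_new)
        if np.log(rng.random()) < e - e_new:   # accept/reject
            h, e = h_new, e_new
        samples.append(h)
    return np.array(samples)
```

After burn-in, the chain concentrates around the data mean, which is the maximum-likelihood roof height for this toy model.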

Journal ArticleDOI
TL;DR: In this study, a multi-scale approach was used for classifying land cover in a high resolution image of an urban area using pixels and image segments assigned the spectral, texture, size, and shape information of their super-objects from coarser segmentations of the same scene.
Abstract: In this study, a multi-scale approach was used for classifying land cover in a high resolution image of an urban area. Pixels and image segments were assigned the spectral, texture, size, and shape information of their super-objects (i.e. the segments that they are located within) from coarser segmentations of the same scene, and this set of super-object information was used as additional input data for image classification. The accuracies of classifications that included super-object variables were compared with the accuracies of classifications that did not. The highest overall accuracy and kappa coefficient achieved without super-object information were 78.11% and 0.727, respectively. When single pixels or fine-scale image segments were assigned the statistics of their super-objects prior to classification, overall accuracy increased to 84.42% and the kappa coefficient increased to 0.804.
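The two accuracy measures quoted above can be computed from a confusion matrix as follows (a standard formulation, independent of the segmentation workflow):

```python
import numpy as np

def overall_accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix.

    cm[i, j] = number of samples with reference class i assigned class j.
    Kappa corrects the observed agreement for the agreement expected by
    chance, which is why it is reported as a unitless value, not a percent.
    """
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                   # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n**2     # chance agreement
    return po, (po - pe) / (1.0 - pe)
```

For a balanced two-class matrix with 90 correct out of 100, for instance, the observed agreement is 0.9 while chance agreement is 0.5, giving kappa 0.8.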

Journal ArticleDOI
TL;DR: Among the strategies for dense 3D reconstruction, using the presented method for solving the scale problem and PMVS on the images captured with two DSLR cameras resulted in a dense point cloud as accurate as the Nikon laser scanner dataset.
Abstract: Photogrammetric methods for dense 3D surface reconstruction are increasingly available to both professional and amateur users whose requirements span a wide variety of applications. One of the key concerns in choosing an appropriate method is understanding the achievable accuracy and how choices made within the workflow can alter that outcome. In this paper we consider accuracy in two components: the ability to generate a correctly scaled 3D model, and the ability to automatically deliver a high-quality data set that agrees well with a reference surface. The determination of scale information is particularly important, since a network of images usually provides only angle measurements and thus leads to unscaled geometry. One solution is the introduction of known distances in object space, such as baselines between camera stations or distances between control points. In order to avoid using known object distances, the method presented in this paper exploits a calibrated stereo camera, utilizing the calibrated baseline of the camera pair as an observation-based geometric constraint. The method provides distance information throughout the object volume by orbiting the object. In order to test the performance of this approach, four current surface matching methods have been investigated to determine their ability to produce accurate, dense point clouds. The methods include two versions of Semi-Global Matching as well as MicMac and Patch-based Multi-View Stereo (PMVS). These methods are applied to sets of stereo images captured from four carefully selected objects using (1) an off-the-shelf low-cost 3D camera and (2) a pair of Nikon D700 DSLR cameras rigidly mounted in close proximity to each other. Inter-comparisons demonstrate the subtle differences between each of these permutations. The point clouds are also compared to a dataset obtained with a Nikon MMD laser scanner.
Finally, the established process of achieving accurate point clouds from images and known object-space distances is compared with the presented strategies. Results from the matching demonstrate that, if a good imaging network is provided, using a stereo camera and bundle adjustment with geometric constraints can effectively resolve the scale. Among the strategies for dense 3D reconstruction, using the presented method for solving the scale problem and PMVS on the images captured with two DSLR cameras resulted in a dense point cloud as accurate as the Nikon laser scanner dataset.
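The scale-resolution idea — using the known calibrated baseline of a stereo pair to give a metric scale to an otherwise unscaled reconstruction — reduces in its simplest form to a single ratio (a schematic sketch; in the paper the baseline enters the bundle adjustment as a geometric constraint rather than as a post-hoc rescaling):

```python
import numpy as np

def scale_from_baseline(cam_left, cam_right, baseline_m):
    """Scale factor that maps an unscaled reconstruction into metres,
    given the reconstructed camera centres of one calibrated stereo pair
    and their known physical baseline (in metres).
    """
    d = np.linalg.norm(np.asarray(cam_right, float) - np.asarray(cam_left, float))
    return baseline_m / d   # multiply every reconstructed point by this
```

Applying the returned factor to all object points and camera centres yields a metrically scaled model; orbiting the object distributes this distance information throughout the volume.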

Journal ArticleDOI
TL;DR: An algorithm which has been developed for extracting left and right road edges from terrestrial mobile LiDAR data is based on a novel combination of two modified versions of the parametric active contour or snake model, with a novel initialisation based on the navigation information obtained from the mobile mapping vehicle.
Abstract: Terrestrial mobile laser scanning systems provide rapid and cost-effective 3D point cloud data which can be used for extracting features such as the road edge along a route corridor. This information can assist road authorities in carrying out safety risk assessment studies along road networks. The knowledge of the road edge is also a prerequisite for the automatic estimation of most other road features. In this paper, we present an algorithm which has been developed for extracting left and right road edges from terrestrial mobile LiDAR data. The algorithm is based on a novel combination of two modified versions of the parametric active contour or snake model. The parameters involved in the algorithm are selected empirically and are fixed for all the road sections. We have developed a novel way of initialising the snake model based on the navigation information obtained from the mobile mapping vehicle. We tested our algorithm on different types of road sections representing rural, urban and national primary road sections. The successful extraction of road edges from these multiple road section environments validates our algorithm. These findings provide valuable insights, as well as a prototype road edge extraction tool-set, for both national road authorities and survey companies.
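The navigation-based snake initialisation can be sketched by offsetting the vehicle trajectory along its local perpendicular (the offset distance is an assumed parameter here, and the real method then refines these initial curves with the two modified snake models):

```python
import numpy as np

def init_snakes_from_trajectory(traj, offset):
    """Initialise left/right road-edge snakes by offsetting the 2D vehicle
    trajectory along its local normal direction.

    traj   : (n, 2) array of trajectory points from the navigation data.
    offset : assumed lateral distance (metres) to the candidate road edge.
    Returns the (left, right) initial snake contours.
    """
    traj = np.asarray(traj, float)
    t = np.gradient(traj, axis=0)                     # local tangent
    t /= np.linalg.norm(t, axis=1, keepdims=True)
    normal = np.stack([-t[:, 1], t[:, 0]], axis=1)    # tangent rotated 90 deg
    return traj + offset * normal, traj - offset * normal
```

For a vehicle driving along a straight east-west road, the two initial contours are simply parallel lines on either side of the trajectory.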

Journal ArticleDOI
TL;DR: A novel, fully automatic method for the reconstruction of three-dimensional building models with prototypical roofs (CityGML LoD2) from LIDAR data and building footprints is presented, able to incorporate additional features enabling a significant improvement in model selection accuracy.
Abstract: This article presents a novel, fully automatic method for the reconstruction of three-dimensional building models with prototypical roofs (CityGML LoD2) from LIDAR data and building footprints. The proposed method derives accurate results from sparse point data sets and is suitable for large area reconstruction. Sparse LIDAR data are widely available nowadays. Robust estimation methods, such as RANSAC/MSAC, are applied to derive best-fitting roof models in a model-driven way. For the identification of the most probable roof model, supervised machine learning methods (Support Vector Machines) are used. In contrast to standard approaches (where the best model is selected via MDL or AIC), supervised classification is able to incorporate additional features, enabling a significant improvement in model selection accuracy.
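The RANSAC-style robust plane estimation at the core of such model-driven fitting can be sketched as follows (illustrative iteration count and inlier threshold; the MSAC variant and the SVM-based model selection are omitted):

```python
import numpy as np

def ransac_plane(points, n_iter=200, tol=0.05, rng=None):
    """Find the dominant plane in a 3D point set with RANSAC.

    Repeatedly fits a plane through 3 random points and keeps the
    hypothesis with the most points within `tol` of the plane.
    Returns a boolean inlier mask.
    """
    rng = np.random.default_rng(rng)
    pts = np.asarray(points, float)
    best_inliers = np.zeros(len(pts), bool)
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                      # degenerate (collinear) sample
        n /= norm
        d = np.abs((pts - sample[0]) @ n)  # point-to-plane distances
        inliers = d < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers
```

Because each hypothesis needs only three points, the fit stays correct even when a substantial fraction of the points (vegetation, antennas) do not belong to the roof plane.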

Journal ArticleDOI
TL;DR: Experimental results suggest that the proposed method is capable of preserving discontinuities of landscapes and reducing the omission errors and total errors by approximately 10% and 6% respectively, which would significantly decrease the cost of the manual operation required for correcting the result in post-processing.
Abstract: Progressive TIN densification (PTD) is one of the classic methods for filtering airborne LiDAR point clouds. However, it may fail to preserve ground measurements in areas with steep terrain. A method is proposed to improve the PTD using a point cloud segmentation method, namely segmentation using smoothness constraint (SUSC). The classic PTD has two core steps. The first is selecting seed points and constructing the initial TIN. The second is an iterative densification of the TIN. Our main improvement is embedding the SUSC between these two steps. Specifically, after selecting the lowest points in each grid cell as initial ground seed points, SUSC is employed to expand the set of ground seed points as far as possible, as this identifies more ground seed points for the subsequent densification of the TIN-based terrain model. Seven datasets of ISPRS Working Group III/3 are utilized to test our proposed algorithm and the classic PTD. Experimental results suggest that, compared with the PTD, the proposed method is capable of preserving discontinuities of landscapes and reducing the omission errors and total errors by approximately 10% and 6% respectively, which would significantly decrease the cost of the manual operation required for correcting the result in post-processing.
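The first core step — selecting the lowest point in each grid cell as an initial ground seed — is straightforward to sketch (the cell size is an assumed parameter, typically chosen larger than the biggest building footprint):

```python
import numpy as np

def lowest_point_per_cell(points, cell_size):
    """Select the lowest LiDAR point in each XY grid cell as an initial
    ground seed point (the seed-selection step of progressive TIN
    densification). points is an (n, 3) array of x, y, z coordinates.
    """
    pts = np.asarray(points, float)
    keys = np.floor(pts[:, :2] / cell_size).astype(int)  # cell indices
    seeds = {}
    for p, k in zip(pts, map(tuple, keys)):
        if k not in seeds or p[2] < seeds[k][2]:
            seeds[k] = p
    return np.array(list(seeds.values()))
```

The proposed improvement then runs the smoothness-constrained segmentation from these seeds before the TIN is densified, so that steep but smooth ground is already in the seed set.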

Journal ArticleDOI
TL;DR: In this article, a new approach to estimate soil moisture (SM) based on evaporative fraction (EF) retrieved from optical/thermal infrared MODIS data is presented for Canadian Prairies in parts of Saskatchewan and Alberta.
Abstract: A new approach to estimate soil moisture (SM) based on evaporative fraction (EF) retrieved from optical/thermal infrared MODIS data is presented for the Canadian Prairies in parts of Saskatchewan and Alberta. An EF model using the remotely sensed land surface temperature (Ts)/vegetation index concept was modified by incorporating North American Regional Reanalysis (NARR) Ta data and used for SM estimation. Two different combinations of temperature and vegetation fraction using the difference between Ts from MODIS Aqua and Terra images and Ta from NARR data (Ts−Ta Aqua-day and Ts−Ta Terra-day, respectively) were proposed and the results were compared with those obtained from a previously improved model (ΔTs Aqua-DayNight) as a reference. For the estimation of SM from EF, two empirical models were tested and discussed to find the most appropriate model for converting MODIS-derived EF data to SM values. Estimated SM values were then correlated with in situ SM measurements and their relationships were statistically analyzed. Results indicated statistically significant correlations between SM estimated from all three EF estimation approaches and field-measured SM values (R2 = 0.42–0.77, p values
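Per pixel, the EF retrieval from the Ts−Ta/vegetation-fraction feature space reduces to a linear interpolation between the scene's dry and wet edges (a schematic of the triangle/trapezoid concept; deriving the edge parameters from the image itself, and the EF-to-SM conversion models, are not shown):

```python
def evaporative_fraction(ts_minus_ta, dry_edge, wet_edge):
    """EF for one pixel from the (Ts - Ta) / vegetation-fraction space.

    dry_edge : Ts - Ta value at which evaporation is assumed zero (EF = 0)
               for this pixel's vegetation fraction.
    wet_edge : Ts - Ta value at which evaporation is energy-limited (EF = 1).
    The result is clamped to the physically meaningful range [0, 1].
    """
    ef = (dry_edge - ts_minus_ta) / (dry_edge - wet_edge)
    return min(1.0, max(0.0, ef))
```

A pixel halfway between the two edges thus gets EF = 0.5, and pixels beyond either edge are clamped.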

Journal ArticleDOI
TL;DR: In this paper, the utility of the partial least squares discriminant analysis (PLS-DA) technique in accurately classifying six exotic commercial forest species using airborne AISA Eagle hyperspectral imagery (393-900nm) was examined.
Abstract: Discriminating commercial tree species using hyperspectral remote sensing techniques is critical in monitoring the spatial distributions and compositions of commercial forests. However, issues related to data dimensionality and multicollinearity limit the successful application of the technology. The aim of this study was to examine the utility of the partial least squares discriminant analysis (PLS-DA) technique in accurately classifying six exotic commercial forest species (Eucalyptus grandis, Eucalyptus nitens, Eucalyptus smithii, Pinus patula, Pinus elliottii and Acacia mearnsii) using airborne AISA Eagle hyperspectral imagery (393–900 nm). Additionally, the variable importance in the projection (VIP) method was used to identify subsets of bands that could successfully discriminate the forest species. Results indicated that the PLS-DA model that used all the AISA Eagle bands (n = 230) produced an overall accuracy of 80.61% and a kappa value of 0.77, with user’s and producer’s accuracies ranging from 50% to 100%. In comparison, incorporating the optimal subset of VIP-selected wavebands (n = 78) in the PLS-DA model resulted in an improved overall accuracy of 88.78% and a kappa value of 0.87, with user’s and producer’s accuracies ranging from 70% to 100%. Bands located predominantly within the visible region of the electromagnetic spectrum (393–723 nm) showed the most capability in terms of discriminating between the six commercial forest species. Overall, the research has demonstrated the potential of using PLS-DA for reducing the dimensionality of hyperspectral datasets as well as determining the optimal subset of bands to produce the highest classification accuracies.
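The VIP scores used for band selection can be computed from a fitted PLS model as follows (a standard formulation; the weight matrix `W` and the per-component explained variance of the response would come from the fitted PLS-DA model, and bands with VIP > 1 are the usual candidates for the subset):

```python
import numpy as np

def vip_scores(W, ssy):
    """Variable Importance in Projection (VIP) from PLS weights.

    W   : (p, A) matrix of PLS weight vectors, one column per component.
    ssy : (A,) explained sum of squares of the response per component.
    Returns one VIP score per band; the mean of the squared scores is 1
    by construction, so VIP > 1 flags above-average importance.
    """
    W = np.asarray(W, float)
    ssy = np.asarray(ssy, float)
    p = W.shape[0]
    Wn = W / np.linalg.norm(W, axis=0)        # normalise each component
    return np.sqrt(p * (Wn**2 @ ssy) / ssy.sum())
```

In the degenerate one-component case where a single band carries all the weight, that band receives a VIP of sqrt(p) and every other band scores zero.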

Journal ArticleDOI
TL;DR: A new emerging technique in the field of Bayesian nonparametric modeling is exploited: Gaussian process regression (GPR) for retrieval, which is an accurate method that also provides uncertainty intervals along with the mean estimates.
Abstract: ESA’s upcoming Sentinel-2 (S2) Multispectral Instrument (MSI) is designed to provide continuity to land monitoring services, relying on an optical payload with visible, near-infrared and shortwave-infrared sensors of high spectral, spatial and temporal resolution. This unprecedented data availability leads to an urgent need for developing robust and accurate retrieval methods, which ideally should provide uncertainty intervals for the predictions. Statistical learning regression algorithms are powerful candidates for the estimation of biophysical parameters from satellite reflectance measurements because of their ability to perform adaptive, nonlinear data fitting. In this paper, we focus on a new emerging technique in the field of Bayesian nonparametric modeling: Gaussian process regression (GPR), an accurate retrieval method that also provides uncertainty intervals along with the mean estimates. This distinct feature is not shared by other machine learning approaches. In view of implementing the regressor in operational monitoring applications, the portability of locally trained GPR models was evaluated here. Experimental data came from the ESA-led field campaign SPARC (Barrax, Spain). For various simulated S2 configurations (S2-10m, S2-20m and S2-60m) two important biophysical parameters were estimated: leaf chlorophyll content (LCC) and leaf area index (LAI). Local evaluation of an extended training dataset with more variation over bare soil sites led to improved LCC and LAI mapping with reduced uncertainties. GPR reached the 10% precision required by end users, with an NRMSE of 3.5–9.2% (r2: 0.95–0.99) for LCC and an NRMSE of 6.5–7.3% (r2: 0.95–0.96) for LAI. The developed GPR models were subsequently applied to simulated Sentinel images over various sites. The associated uncertainty maps proved to be a good indicator for evaluating the robustness of the retrieval performance.
The generally low uncertainty intervals over vegetated surfaces suggest that the locally trained GPR models are portable to other sites and conditions.
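The distinguishing feature of GPR — a predictive variance alongside the mean, which yields the uncertainty maps discussed above — falls directly out of the posterior equations, sketched here for an RBF kernel with fixed, illustrative hyperparameters (in practice they are optimised on the training data):

```python
import numpy as np

def gpr_predict(X, y, Xs, length=1.0, sig_f=1.0, sig_n=0.1):
    """Gaussian process regression with an RBF kernel.

    Returns the posterior mean and standard deviation at test inputs Xs,
    given training inputs X (n, d) and targets y (n,). The standard
    deviation is the per-sample uncertainty that other regressors lack.
    """
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sig_f**2 * np.exp(-0.5 * d2 / length**2)
    K = k(X, X) + sig_n**2 * np.eye(len(X))       # noisy train covariance
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    Ks = k(Xs, X)
    mean = Ks @ alpha                              # posterior mean
    v = np.linalg.solve(L, Ks.T)
    var = sig_f**2 - (v**2).sum(0) + sig_n**2      # posterior variance
    return mean, np.sqrt(var)
```

Far from any training sample the prediction reverts to the prior (mean zero, full prior variance), which is exactly why high-uncertainty pixels flag conditions the locally trained model has not seen.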

Journal ArticleDOI
TL;DR: In this paper, an automated tree stem detection algorithm based on the range images of single scans was developed and applied to the data, which was achieved by slicing the point cloud and fitting circles to the slices using three different algorithms (Lemen, Pratt and Taubin), resulting in diameter profiles for each detected tree.
Abstract: Terrestrial laser scanning (TLS) has been used to estimate a number of biophysical and structural vegetation parameters. Of these, stem diameter is a primary input to traditional forest inventory. While many experimental studies have confirmed the potential for TLS to successfully extract stem diameter, the estimation accuracies differ strongly for these studies – due to differences in experimental design, data processing and test plot characteristics. In order to provide consistency and maximize estimation accuracy, a systematic study into the impact of these variables is required. To contribute to such an approach, 12 scans were acquired with a FARO Photon 120 at two test plots (Beech, Douglas fir) to assess the effects of scan mode and circle fitting on the extraction of stem diameter and volume. An automated tree stem detection algorithm based on the range images of single scans was developed and applied to the data. Extraction of stem diameter was achieved by slicing the point cloud and fitting circles to the slices using three different algorithms (Lemen, Pratt and Taubin), resulting in diameter profiles for each detected tree. Diameter at breast height (DBH) was determined using both the single value for the diameter fitted at the nominal breast height and a linear fit of the vertical stem diameter profile. The latter is intended to reduce the influence of outliers and errors in the ground level determination. TLS-extracted DBH was compared to tape-measured DBH. Results show that tree stems with an unobstructed view to the scanner can be successfully extracted automatically from range images of the TLS data with detection rates of 94% for Beech and 96% for Douglas fir. If occlusion of trees is accounted for, stem detection rates decrease to 85% (Beech) and 84% (Douglas fir).
As far as DBH estimation is concerned, both extraction methods yield estimates which agree with reference measurements; however, the linear-fit-based approach proved to be more robust for single-scan DBH extraction (RMSE range 1.39–1.74 cm compared to 1.47–2.43 cm). With regard to the different circle-fit algorithms applied, the algorithm by Lemen showed the best overall performance (RMSE range 1.39–1.65 cm compared to 1.49–2.43 cm) and was also found to be more robust in the case of noisy data. Compared to the single scans, DBH extraction from the merged scan data proved to be superior, with significantly lower RMSEs (0.66–1.21 cm). The influence of scan mode and circle fitting is reflected in the stem volume estimates, too. Stem volumes extracted from the single scans exhibit a large variability, with deviations from the reference volumes ranging from −34% to 44%. By contrast, volumes extracted from the merged scans vary only weakly (−2% to 6%) and show a marginal influence of circle fitting.
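The slice-wise circle fitting can be illustrated with the simple algebraic least-squares (Kåsa) fit — a stand-in for the Lemen, Pratt and Taubin fits compared in the paper, which differ mainly in how they weight the algebraic residuals:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit (Kasa method) to a 2D slice of
    stem points. Solves x^2 + y^2 = c0*x + c1*y + c2 linearly, then
    recovers centre (cx, cy) and radius r; DBH is then 2 * r.
    """
    x = np.asarray(x, float)
    y = np.asarray(y, float)
    A = np.column_stack([x, y, np.ones_like(x)])
    b = x**2 + y**2
    c, *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = c[0] / 2, c[1] / 2
    r = np.sqrt(c[2] + cx**2 + cy**2)
    return cx, cy, r
```

Fitting one circle per height slice gives the diameter profile from which either the breast-height value or the linear profile fit is taken.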

Journal ArticleDOI
TL;DR: In this paper, the influence of morphophysiological variation at different growth stages on the performance of vegetation indices for estimating plant N status has been confirmed, but the underlying mechanisms explaining how this variation impacts hyperspectral measures and canopy N status are poorly understood.
Abstract: The influence of morphophysiological variation at different growth stages on the performance of vegetation indices for estimating plant N status has been confirmed. However, the underlying mechanisms explaining how this variation impacts hyperspectral measures and canopy N status are poorly understood. In this study, four field experiments involving different N rates were conducted to optimize the selection of sensitive bands and evaluate their performance for modeling canopy N status of rice at various growth stages in 2007 and 2008. The results indicate that growth stages negatively affect hyperspectral indices in different ways in modeling leaf N concentration (LNC), plant N concentration (PNC) and plant N uptake (PNU). Published hyperspectral indices showed serious limitations in estimating LNC, PNC and PNU. The newly proposed best 2-band indices significantly improved the accuracy for modeling PNU (R2 = 0.75–0.85) by using the lambda-by-lambda band-optimized algorithm. However, the newly proposed 2-band indices still have limitations in modeling LNC and PNC because the use of only 2-band indices is not fully adequate to provide the maximum N-related information. The optimum multiple narrow band reflectance (OMNBR) models significantly increase the accuracy for estimating the LNC (R2 = 0.67–0.71) and PNC (R2 = 0.57–0.78) with six bands. Results suggest the combinations of the center of the red edge (735 nm) with longer red-edge bands (730–760 nm) are very efficient for estimating PNC after heading, whereas the combinations of blue with green bands are more efficient for modeling PNC across all stages. The center of the red edge (730–735 nm) paired with early NIR bands (775–808 nm) is predominant in estimating PNU before heading, whereas the longer red edge (750 nm) paired with the center of the “NIR shoulder” (840–850 nm) is dominant in estimating PNU after heading and across all stages.
The OMNBR models have the advantage of modeling canopy N status for the entire growth period; however, the best 2-band indices are much easier to use. Alternatively, it is also possible to use the best 2-band indices to monitor PNU before heading and PNC after heading. This study systematically explains the influence of the N dilution effect on hyperspectral band combinations in relation to the different N variables and further recommends the best band combinations, which may provide insight for developing new hyperspectral vegetation indices.
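The lambda-by-lambda band-optimised search for the best 2-band index can be sketched as an exhaustive loop over band pairs (the normalised-difference index form and the use of R2 against the N variable are assumptions for illustration):

```python
import numpy as np

def best_two_band_index(R, y):
    """Lambda-by-lambda search over all band pairs.

    R : (n_samples, n_bands) canopy reflectance matrix.
    y : (n_samples,) N variable (e.g. PNU).
    For every pair (i, j), the normalised difference index
    (Ri - Rj) / (Ri + Rj) is correlated with y; returns the highest
    R^2 and the corresponding band pair.
    """
    n_bands = R.shape[1]
    best = (0.0, (0, 1))
    for i in range(n_bands):
        for j in range(i + 1, n_bands):
            idx = (R[:, i] - R[:, j]) / (R[:, i] + R[:, j])
            r2 = np.corrcoef(idx, y)[0, 1] ** 2
            if r2 > best[0]:
                best = (r2, (i, j))
    return best
```

Plotting the R^2 of every pair as a 2D matrix gives the familiar band-combination contour maps from which the sensitive red-edge/NIR regions are read off.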