
Showing papers in "Isprs Journal of Photogrammetry and Remote Sensing in 2016"


Journal ArticleDOI
TL;DR: This review has revealed that the RF classifier can successfully handle high data dimensionality and multicollinearity, being both fast and insensitive to overfitting.
Abstract: A random forest (RF) classifier is an ensemble classifier that produces multiple decision trees, using a randomly selected subset of training samples and variables. This classifier has become popular within the remote sensing community due to the accuracy of its classifications. The overall objective of this work was to review the utilization of the RF classifier in remote sensing. This review has revealed that the RF classifier can successfully handle high data dimensionality and multicollinearity, being both fast and insensitive to overfitting. It is, however, sensitive to the sampling design. The variable importance (VI) measurement provided by the RF classifier has been extensively exploited in different scenarios, for example to reduce the number of dimensions of hyperspectral data, to identify the most relevant multisource remote sensing and geographic data, and to select the most suitable season to classify particular target classes. Further investigations are required into less commonly exploited uses of this classifier, such as sample proximity analysis to detect and remove outliers in the training samples.

3,244 citations
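The RF workflow the review describes (train an ensemble of trees, then use variable importance to rank input bands) can be sketched with scikit-learn. The data below is synthetic and the band setup is invented for illustration; only bands 0 and 3 actually drive the labels, so the VI ranking should surface them.

```python
# Sketch of RF classification + variable-importance band ranking.
# Synthetic "pixel spectra"; band indices and sizes are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels, n_bands = 500, 10
X = rng.normal(size=(n_pixels, n_bands))
# Make the class depend mostly on bands 0 and 3 so importance is interpretable.
y = (X[:, 0] + 2.0 * X[:, 3] > 0).astype(int)

rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X, y)

# Variable importance (VI): higher scores mark the bands the trees used most,
# the basis for the dimensionality-reduction uses the review mentions.
ranking = np.argsort(rf.feature_importances_)[::-1]
print("bands ranked by importance:", ranking[:3])
```

In a real hyperspectral setting the same `feature_importances_` vector would be used to discard low-ranked bands before re-training.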


Journal ArticleDOI
TL;DR: This survey focuses on more generic object categories including, but not limited to, road, building, tree, vehicle, ship, airport and urban-area, and proposes two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection.
Abstract: Object detection in optical remote sensing images, a fundamental but challenging problem in the field of aerial and satellite image analysis, plays an important role in a wide range of applications and has received significant attention in recent years. While numerous methods exist, a deep review of the literature concerning generic object detection is still lacking. This paper aims to provide a review of the recent progress in this field. Different from several previously published surveys that focus on a specific object class such as building or road, we concentrate on more generic object categories including, but not limited to, road, building, tree, vehicle, ship, airport and urban-area. Covering about 270 publications, we survey (1) template matching-based object detection methods, (2) knowledge-based object detection methods, (3) object-based image analysis (OBIA)-based object detection methods, (4) machine learning-based object detection methods, and (5) five publicly available datasets and three standard evaluation metrics. We also discuss the challenges of current studies and propose two promising research directions, namely deep learning-based feature representation and weakly supervised learning-based geospatial object detection. We hope that this survey will help researchers gain a better understanding of this research field.

994 citations


Journal ArticleDOI
TL;DR: In this article, the authors present the issues and opportunities associated with generating and validating time-series informed annual, large-area, land cover products, and identify methods suited to incorporating time series information and other novel inputs for land cover characterization.
Abstract: Accurate land cover information is required for science, monitoring, and reporting. Land cover changes naturally over time, as well as a result of anthropogenic activities. Monitoring and mapping of land cover and land cover change in a consistent and robust manner over large areas is made possible with Earth Observation (EO) data. Land cover products satisfying a range of science and policy information needs are currently produced periodically at different spatial and temporal scales. The increased availability of EO data—particularly from the Landsat archive (and soon to be augmented with Sentinel-2 data)—coupled with improved computing and storage capacity and novel image compositing approaches, has resulted in the availability of annual, large-area, gap-free, surface reflectance data products. In turn, these data products support the development of annual land cover products that can be both informed and constrained by change detection outputs. The inclusion of time series change in the land cover mapping process provides information on class stability and indicates logical class transitions (both temporally and categorically). In this review, we present the issues and opportunities associated with generating and validating time-series informed annual, large-area, land cover products, and identify methods suited to incorporating time series information and other novel inputs for land cover characterization.

784 citations


Journal ArticleDOI
TL;DR: This paper provides a review of the main PSI algorithms proposed in the literature, describing the main approaches and the most important works devoted to single aspects of PSI, and discusses the main open PSI problems and the associated future research lines.
Abstract: Persistent Scatterer Interferometry (PSI) is a powerful remote sensing technique able to measure and monitor displacements of the Earth’s surface over time. Specifically, PSI is a radar-based technique that belongs to the group of differential interferometric Synthetic Aperture Radar (SAR) methods. This paper provides a review of the PSI technique. It first recalls the basic principles of SAR interferometry, differential SAR interferometry and PSI. Then, a review of the main PSI algorithms proposed in the literature is provided, describing the main approaches and the most important works devoted to single aspects of PSI. A central part of this paper is devoted to the discussion of different characteristics and technical aspects of PSI, e.g. SAR data availability, maximum deformation rates, deformation time series, the thermal expansion component of PSI observations, etc. The paper then goes through the most important PSI validation activities, which have provided valuable inputs for PSI development and its acceptability at the scientific, technical and commercial level. This is followed by a description of the main PSI applications developed in the last fifteen years. The paper concludes with a discussion of the main open PSI problems and the associated future research lines.

661 citations


Journal ArticleDOI
TL;DR: This paper reviews the advances in applying terrestrial laser scanning (TLS) to forest inventories, discusses its properties with reference to other related techniques, and discusses the future prospects of this technique.
Abstract: Decision making on forest resources relies on precise information that is collected through inventories. There are many different kinds of forest inventory techniques that can be applied depending on the goal, scale, resources and the required accuracy. Most forest inventories are based on field samples. Therefore, the accuracy of a forest inventory depends on the quality and quantity of the field samples. Conventionally, field samples have been measured using simple tools. When a map is required, remote sensing materials are needed. Terrestrial laser scanning (TLS) provides a measurement technique that can acquire millimeter-level detail from the surrounding area, which allows rapid, automatic and periodical estimates of many important forest inventory attributes. It is expected that TLS will be used operationally in forest inventories as soon as the appropriate software becomes available, best practices become known and general knowledge of these findings becomes more widespread. Meanwhile, mobile laser scanning, personal laser scanning, and image-based point clouds have become capable of capturing terrestrial point cloud data similar to TLS. This paper reviews the advances in applying TLS to forest inventories, discusses its properties with reference to other related techniques and discusses the future prospects of this technique.

502 citations


Journal ArticleDOI
TL;DR: Modifications to the automated, open source NASA Ames Stereo Pipeline to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, and point cloud co-registration.
Abstract: We adapted the automated, open source NASA Ames Stereo Pipeline (ASP) to generate digital elevation models (DEMs) and orthoimages from very-high-resolution (VHR) commercial imagery of the Earth. These modifications include support for rigorous and rational polynomial coefficient (RPC) sensor models, sensor geometry correction, bundle adjustment, point cloud co-registration, and significant improvements to the ASP code base. We outline a processing workflow for ∼0.5 m ground sample distance (GSD) DigitalGlobe WorldView-1 and WorldView-2 along-track stereo image data, with an overview of ASP capabilities, an evaluation of ASP correlator options, benchmark test results, and two case studies of DEM accuracy. Output DEM products are posted at ∼2 m with direct geolocation accuracy of

470 citations


Journal ArticleDOI
TL;DR: In this article, the authors reviewed the state-of-the-art remote sensing technologies, including platforms and sensors, the topics representing the primary research interest in the ISPRS Technical Commission I activities.
Abstract: The objective of this article is to review state-of-the-art remote sensing technologies, including platforms and sensors, the topics representing the primary research interest in the ISPRS Technical Commission I activities. Due to ever-advancing technologies, the remote sensing field has recently experienced unprecedented developments, fueled by sensor advancements and a continuously growing information infrastructure. The scope and performance potential of sensors in terms of spatial, spectral and temporal sensing abilities have expanded far beyond the traditional boundaries of remote sensing, resulting in significantly better observation capabilities. First, platform developments are reviewed with the main focus on emerging remote sensing satellite constellations and UAS (Unmanned Aerial System) platforms. Next, sensor georeferencing and the supporting navigation infrastructure, an enabling technology for remote sensing, are discussed. Finally, we group sensors based on their spatial, spectral and temporal characteristics, and classify them by their platform deployment competencies. In addition, we identify current trends, including the convergence between the remote sensing and navigation fields, the emergence of cooperative sensing, and the potential of crowdsensing.

447 citations


Journal ArticleDOI
TL;DR: The review shows that most previous studies have concentrated on the mapping and analysis of network components, and more attention should be given to an integrated use of various data sources to benefit from the various techniques in an optimal way.
Abstract: To secure uninterrupted distribution of electricity, effective monitoring and maintenance of power lines are needed. This literature review article aims to give a wide overview of the possibilities provided by modern remote sensing sensors in power line corridor surveys and to discuss the potential and limitations of different approaches. Monitoring of both power line components and the vegetation around them is included. Remotely sensed data sources discussed in the review include synthetic aperture radar (SAR) images, optical satellite and aerial images, thermal images, airborne laser scanner (ALS) data, land-based mobile mapping data, and unmanned aerial vehicle (UAV) data. The review shows that most previous studies have concentrated on the mapping and analysis of network components. In particular, automated extraction of power line conductors has received much attention, and promising results have been reported. For example, accuracy levels above 90% have been presented for the extraction of conductors from ALS data or aerial images. However, in many studies the datasets have been small and numerical quality analyses have been omitted. Mapping of vegetation near power lines has been a less common research topic than mapping of the components, but several studies have been carried out in this field, especially using optical aerial and satellite images. Based on the review, we conclude that future research should give more attention to an integrated use of various data sources to benefit from the various techniques in an optimal way. Knowledge in related fields, such as vegetation monitoring from ALS, SAR and optical image data, should be better exploited to develop useful monitoring approaches. Special attention should be given to rapidly developing remote sensing techniques such as UAVs and laser scanning from airborne and land-based platforms. To demonstrate and verify the capabilities of automated monitoring approaches, large tests in various environments and practical monitoring conditions are needed. These should include careful quality analyses and comparisons between different data sources, methods and individual algorithms.

350 citations


Journal ArticleDOI
TL;DR: This position paper of the International Society for Photogrammetry and Remote Sensing (ISPRS) Technical Commission II (TC II) revisits existing geospatial data handling methods and theories to determine whether they can still handle emerging geospatial big data.
Abstract: Big data has now become a strong focus of global interest that is increasingly attracting the attention of academia, industry, government and other organizations. Big data can be situated in the disciplinary area of traditional geospatial data handling theory and methods. The increasing volume and varying format of collected geospatial big data presents challenges in storing, managing, processing, analyzing, visualizing and verifying the quality of data. This has implications for the quality of decisions made with big data. Consequently, this position paper of the International Society for Photogrammetry and Remote Sensing (ISPRS) Technical Commission II (TC II) revisits the existing geospatial data handling methods and theories to determine if they are still capable of handling emerging geospatial big data. Further, the paper synthesises problems, major issues and challenges with current developments as well as recommending what needs to be developed further in the near future.

336 citations


Journal ArticleDOI
TL;DR: The multiscale convolutional neural network has two merits: high-level spatial features can be effectively learned by using the hierarchical learning structure, and the multiscale learning scheme can capture contextual information at different scales.
Abstract: It is widely agreed that spatial features can be combined with spectral properties to improve interpretation performance on very-high-resolution (VHR) images in urban areas. However, many existing methods for extracting spatial features can only generate low-level features and consider limited scales, leading to unsatisfactory classification results. In this study, a multiscale convolutional neural network (MCNN) algorithm is presented to learn spatially related deep features for hyperspectral remote sensing imagery classification. Unlike traditional methods for extracting spatial features, the MCNN first transforms the original data sets into a pyramid structure containing spatial information at multiple scales, and then automatically extracts high-level spatial features using multiscale training data sets. Specifically, the MCNN has two merits: (1) high-level spatial features can be effectively learned by using the hierarchical learning structure and (2) the multiscale learning scheme can capture contextual information at different scales. To evaluate the effectiveness of the proposed approach, the MCNN was applied to classify well-known hyperspectral data sets and compared with traditional methods. The experimental results showed a significant increase in classification accuracies, especially for urban areas.

323 citations
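The multiscale idea behind the MCNN (not the authors' actual network) can be sketched in plain numpy: build an image pyramid, apply the same small filter at every scale, and keep per-scale responses so a classifier sees context at several resolutions. Image size, number of scales and the toy gradient kernel are all illustrative.

```python
# Sketch of pyramid-based multiscale feature extraction with numpy only.
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def conv2d_valid(img, kernel):
    """Plain 'valid' 2-D correlation, loop version for clarity."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
edge_kernel = np.array([[1.0, -1.0]])   # toy horizontal-gradient filter

pyramid = [img]
for _ in range(2):                      # three scales: 64, 32, 16
    pyramid.append(downsample(pyramid[-1]))

features = [conv2d_valid(level, edge_kernel) for level in pyramid]
print([f.shape for f in features])      # one response map per scale
```

In the paper, learned convolutional filters replace the fixed kernel and the per-scale responses feed a trained network rather than being inspected directly.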


Journal ArticleDOI
TL;DR: Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface as mentioned in this paper, which can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels.
Abstract: Time series analysis of InSAR data has emerged as an important tool for monitoring and measuring the displacement of the Earth's surface. Changes in the Earth's surface can result from a wide range of phenomena such as earthquakes, volcanoes, landslides, variations in ground water levels, and changes in wetland water levels. Time series analysis is applied to interferometric phase measurements, which wrap around when the observed motion is larger than one-half of the radar wavelength. Thus, the spatio-temporal "unwrapping" of phase observations is necessary to obtain physically meaningful results. Several different algorithms have been developed for time series analysis of InSAR data to resolve this ambiguity. These algorithms may employ different models for time series analysis, but they all generate first-order deformation rates that can be compared with each other. However, no single algorithm can provide optimal results in all cases. Since time series analyses of InSAR data are used in a variety of applications with different characteristics, each algorithm possesses inherently unique strengths and weaknesses. In this review article, following a brief overview of InSAR technology, we discuss several algorithms developed for time series analysis of InSAR data, using an example set of results for measuring subsidence rates in Mexico City.
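The wrapping problem described above can be illustrated in one dimension: simulate a smooth deformation ramp, wrap the resulting interferometric phase to (-pi, pi], then recover it. Real InSAR unwrapping is 2-D/3-D and far harder; the wavelength and deformation values here are synthetic, and the 1-D `np.unwrap` only works because the sampling is dense.

```python
# Toy illustration of interferometric phase wrapping and 1-D unwrapping.
import numpy as np

wavelength = 0.056                           # C-band wavelength in metres (assumed)
deformation = np.linspace(0.0, 0.10, 200)    # 10 cm of line-of-sight motion
true_phase = 4.0 * np.pi * deformation / wavelength

# Interferograms only observe the wrapped phase in (-pi, pi].
wrapped = np.angle(np.exp(1j * true_phase))

# Unwrapping restores the continuous signal (up to a constant offset),
# then phase converts back to displacement.
unwrapped = np.unwrap(wrapped)
recovered = unwrapped * wavelength / (4.0 * np.pi)

print(float(np.max(np.abs(recovered - deformation))))   # ~0
```

When neighbouring samples differ by more than pi (fast motion or sparse sampling), this simple approach fails, which is exactly why the specialised algorithms the review surveys exist.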

Journal ArticleDOI
Puzhao Zhang1, Maoguo Gong1, Linzhi Su1, Jia Liu1, Li Zhizhou1 
TL;DR: This paper presents a novel multi-spatial-resolution change detection framework, which incorporates deep-architecture-based unsupervised feature learning and mapping-based feature change analysis, and tries to explore the inner relationships between them by building a mapping neural network.
Abstract: Multi-spatial-resolution change detection is a newly proposed issue that is of great significance in remote sensing, environmental monitoring, land use monitoring, etc. Though a multi-spatial-resolution image-pair comprises two representations of the same reality, the two images are often superficially incommensurable due to their different modalities and properties. In this paper, we present a novel multi-spatial-resolution change detection framework, which incorporates deep-architecture-based unsupervised feature learning and mapping-based feature change analysis. Firstly, we transform the multi-resolution image-pair to the same pixel resolution through co-registration, followed by detail recovery, which is designed to remedy the spatial details lost in registration. Secondly, denoising autoencoders are stacked to learn local, high-level representations/features from the local neighborhood of each pixel in an unsupervised fashion. Thirdly, motivated by the fact that a multi-resolution image-pair shares the same reality in the unchanged regions, we explore the inner relationships between the two images by building a mapping neural network. It learns a mapping function based on the most-unlikely-changed feature-pairs, which are selected from all the feature-pairs via a coarse initial change map generated in advance. The learned mapping function can bridge the different representations and highlight changes. Finally, we build a robust and contractive change map through feature similarity analysis, and the change detection result is obtained by segmenting the final change map. Experiments carried out on four real datasets confirmed the effectiveness and superiority of the proposed method.
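The mapping idea in the third step can be shown in a heavily simplified form: learn a mapping from one image's features to the other's using pixels assumed unchanged, then flag changes where the mapping residual is large. The paper uses a neural network on stacked-autoencoder features; a linear least-squares fit on synthetic features stands in here, and all sizes and the injected change are invented.

```python
# Simplified "mapping + residual" change detection on synthetic features.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
feat_a = rng.normal(size=(n, 4))        # features from image 1
W_true = rng.normal(size=(4, 4))
feat_b = feat_a @ W_true                # unchanged pixels follow the mapping
changed = np.zeros(n, dtype=bool)
changed[:50] = True
feat_b[changed] += 5.0                  # inject a change signal

# Fit the mapping only on pixels believed unchanged (here: the known ones;
# the paper selects them via a coarse initial change map).
W, *_ = np.linalg.lstsq(feat_a[~changed], feat_b[~changed], rcond=None)

# Residual magnitude serves as the change map; threshold it to get changes.
residual = np.linalg.norm(feat_a @ W - feat_b, axis=1)
change_map = residual > residual.mean() + 2.0 * residual.std()
print(float(change_map[:50].mean()))    # fraction of injected changes flagged
```

Replacing the linear fit with a small regression network and the synthetic features with learned autoencoder features recovers the structure of the paper's pipeline.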

Journal ArticleDOI
TL;DR: This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academia and industry researchers who seek for solutions in detecting and analyzing 3D dynamics of various objects of interest.
Abstract: Due to the unprecedented development of sensors, platforms and algorithms for 3D data acquisition and generation, 3D spaceborne, airborne and close-range data, in the form of image-based and Light Detection and Ranging (LiDAR)-based point clouds, Digital Elevation Models (DEMs) and 3D city models, have become more accessible than ever before. Change detection (CD), or time-series data analysis, in 3D has gained great attention due to its capability of providing volumetric dynamics that facilitate more applications and more accurate results. State-of-the-art CD reviews aim to provide a comprehensive synthesis and to simplify the taxonomy of traditional remote sensing CD techniques, which mainly sit within the boundary of 2D image/spectrum analysis, largely ignoring the particularities of the 3D aspects of the data. The inclusion of 3D data for change detection (termed 3D CD) not only provides a source with a different modality for analysis, but also transcends the border of traditional top-view 2D pixel/object-based analysis to highly detailed, oblique-view or voxel-based geometric analysis. This paper reviews the recent developments and applications of 3D CD using remote sensing and close-range data, in support of both academic and industry researchers who seek solutions for detecting and analyzing the 3D dynamics of various objects of interest. We first describe the general considerations of 3D CD problems at different processing stages and identify CD types based on the information used, namely geometric comparison and geometric-spectral analysis. We then summarize relevant works and practices in urban, environmental, ecological and civil applications, among others. Given the broad spectrum of applications and different types of 3D data, we discuss important issues in 3D CD methods. Finally, we present concluding remarks on the algorithmic aspects of 3D CD.
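The simplest instance of the geometric-comparison CD type the review identifies is DEM differencing: subtract two co-registered DEMs, threshold by a vertical accuracy margin, and sum the remaining height differences into a volumetric change estimate. Cell size, threshold and the synthetic "building" below are all illustrative.

```python
# Minimal DEM-differencing sketch of geometric 3D change detection.
import numpy as np

cell_area = 1.0        # m^2 per DEM cell (assumed)
threshold = 0.5        # m; ignore differences within assumed vertical noise

dem_t1 = np.zeros((100, 100))
dem_t2 = dem_t1.copy()
dem_t2[10:20, 10:20] += 3.0    # a new 10 x 10 x 3 m "building"

dh = dem_t2 - dem_t1
changed = np.abs(dh) > threshold
volume_gain = float(dh[changed & (dh > 0)].sum() * cell_area)
print(volume_gain)    # 300.0 m^3
```

The point-cloud and voxel-based methods the review covers generalise this idea to data without a common 2.5D grid.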

Journal ArticleDOI
TL;DR: An improved progressive triangulated irregular network (TIN) densification (IPTD) filtering algorithm is proposed that can cope with a variety of forested landscapes, particularly topographically and environmentally complex regions.
Abstract: Filtering of light detection and ranging (LiDAR) data into ground and non-ground points is a fundamental step in processing raw airborne LiDAR data. This paper proposes an improved progressive triangulated irregular network (TIN) densification (IPTD) filtering algorithm that can cope with a variety of forested landscapes, particularly topographically and environmentally complex regions. The IPTD filtering algorithm consists of three steps: (1) acquiring potential ground seed points using the morphological method; (2) obtaining accurate ground seed points; and (3) building a TIN-based model and iteratively densifying the TIN. The IPTD filtering algorithm was tested in 15 forested sites with various terrains (i.e., elevation and slope) and vegetation conditions (i.e., canopy cover and tree height), and was compared with seven other commonly used filtering algorithms (including morphology-based, slope-based, and interpolation-based filtering algorithms). Results show that the IPTD achieves the highest filtering accuracy for nine of the 15 sites. In general, it outperforms the other filtering algorithms, yielding the lowest average total error of 3.15% and the highest average kappa coefficient of 89.53%.

Journal ArticleDOI
TL;DR: In this paper, a review of sensor modelling and photogrammetric calibration is presented, with an initial overview of the history and state of the art of self-calibration.
Abstract: Metric calibration is a critical prerequisite to the application of modern, mostly consumer-grade digital cameras for close-range photogrammetric measurement. This paper reviews aspects of sensor modelling and photogrammetric calibration, with attention being focussed on techniques of automated self-calibration. Following an initial overview of the history and the state of the art, selected topics of current interest within calibration for close-range photogrammetry are addressed. These include sensor modelling, with standard, extended and generic calibration models being summarised, along with non-traditional camera systems. Self-calibration via both targeted planar arrays and targetless scenes amenable to SfM-based exterior orientation are then discussed, after which aspects of calibration and measurement accuracy are covered. Whereas camera self-calibration is largely a mature technology, there is always scope for additional research to enhance the models and processes employed with the many camera systems nowadays utilised in close-range photogrammetry.
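The "standard" calibration models the review summarises include the familiar radial distortion terms (the Brown model): a point's ideal image radius r is perturbed to r + k1*r^3 + k2*r^5 + ..., i.e. coordinates are scaled by (1 + k1*r^2 + k2*r^4). The sketch below applies two radial terms; the coefficient values are invented for illustration.

```python
# Radial distortion terms of a standard camera calibration model.
import numpy as np

def apply_radial(xy, k1, k2):
    """Map ideal image coordinates (relative to the principal point)
    to distorted coordinates using two radial terms."""
    r2 = np.sum(xy**2, axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2**2)

pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0]])
distorted = apply_radial(pts, k1=-1e-2, k2=1e-5)
print(distorted)
```

Self-calibration estimates k1, k2 (along with principal distance, principal point and further terms) as additional unknowns in the bundle adjustment rather than assuming them known as here.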

Journal ArticleDOI
TL;DR: In this paper, the authors reviewed existing paddy rice mapping methods from the literature ranging from the 1980s to 2015, illustrated the evolution of these paddy rice mapping efforts, and looked specifically at the future trajectory of paddy rice mapping methodologies.
Abstract: Paddy rice agriculture plays an important role in various environmental issues including food security, water use, climate change, and disease transmission. However, regional and global paddy rice maps are surprisingly scarce and sporadic despite numerous efforts in paddy rice mapping algorithms and applications. With the increasing need for regional to global paddy rice maps, this paper reviewed existing paddy rice mapping methods from the literature ranging from the 1980s to 2015. In particular, we illustrated the evolution of these paddy rice mapping efforts, looking specifically at the future trajectory of paddy rice mapping methodologies. The biophysical features and growth phases of paddy rice were analyzed first, and feature selections for paddy rice mapping were analyzed from spectral, polarimetric, temporal, spatial, and textural aspects. We sorted paddy rice mapping algorithms into four categories: (1) reflectance data and image statistic-based approaches, (2) vegetation index (VI) data and enhanced image statistic-based approaches, (3) VI or RADAR backscatter-based temporal analysis approaches, and (4) phenology-based approaches through remote sensing recognition of key growth phases. The phenology-based approaches, which use unique features of paddy rice (e.g., transplanting) for mapping, have been increasingly adopted. Current applications of these phenology-based approaches generally use coarse-resolution MODIS data, which involves mixed pixel issues in Asia, where smallholders comprise the majority of paddy rice agriculture. The free release of the Landsat archive and the launch of Landsat 8 and Sentinel-2 are providing unprecedented opportunities to map paddy rice in fragmented landscapes at higher spatial resolution. Based on the literature review, we discussed a series of issues for large scale operational paddy rice mapping.
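A well-known rule of the phenology-based kind grouped into category (4) flags the flooding signal at transplanting: a water-sensitive index (LSWI) temporarily rises to meet or exceed the vegetation index (EVI) over flooded paddies. The sketch below applies that rule to synthetic index time series; the window definition and index values are illustrative, not from the review.

```python
# Sketch of a phenology-based flooding rule for paddy rice detection.
import numpy as np

def is_paddy(evi, lswi):
    """Flag a pixel if LSWI >= EVI at any composite early in the season,
    a proxy for the transplanting-stage flooding signal."""
    early = slice(0, len(evi) // 2)       # transplanting window (assumed)
    return bool(np.any(lswi[early] >= evi[early]))

# 12 composites over one season: a rice pixel floods early, a forest
# pixel stays green and comparatively dry all season.
evi_rice    = np.array([0.15, 0.18, 0.20, 0.35, 0.55, 0.70,
                        0.75, 0.70, 0.55, 0.40, 0.30, 0.25])
lswi_rice   = np.array([0.10, 0.25, 0.30, 0.30, 0.35, 0.40,
                        0.40, 0.35, 0.25, 0.15, 0.10, 0.05])
evi_forest  = np.full(12, 0.6)
lswi_forest = np.full(12, 0.3)

print(is_paddy(evi_rice, lswi_rice), is_paddy(evi_forest, lswi_forest))
```

Operational versions anchor the window to a crop calendar or detected transplanting date rather than a fixed half-season slice.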

Journal ArticleDOI
TL;DR: It is proposed that big data research should closely follow good scientific practice to provide reliable and scientific "stories", as well as explore and develop techniques and methods to mitigate or rectify the "big errors" brought by big data.
Abstract: The recent explosion of big data studies has well documented the rise of big data and its ongoing prevalence. Different types of "big data" have emerged and have greatly enriched spatial information sciences and related fields in terms of breadth and granularity. Studies that were difficult to conduct in the past due to data availability can now be carried out. However, big data brings many "big errors" in data quality and data usage, and it cannot substitute for sound research design and solid theories. We identify and summarize the problems faced by current big data studies with regard to data collection, processing and analysis: inauthentic data collection, information incompleteness and noise, unrepresentativeness, consistency and reliability, and ethical issues. Cases of empirical studies are provided as evidence for each problem. We propose that big data research should closely follow good scientific practice to provide reliable and scientific "stories", as well as explore and develop techniques and methods to mitigate or rectify the "big errors" brought by big data.

Journal ArticleDOI
TL;DR: In this paper, the authors developed an automated approach to map soybean and corn in the state of Parana, Brazil, for crop years 2010–2015, and showed that the mapped areas agreed with official statistics at the municipal level.
Abstract: For two of the most important agricultural commodities, soybean and corn, remote sensing plays a substantial role in delivering timely information on crop area for economic, environmental and policy studies. Traditional long-term mapping of soybean and corn is challenging as a result of the high cost of repeated training data collection, inconsistency in image processing and interpretation, and the difficulty of handling the inter-annual variability of weather and crop progress. In this study, we developed an automated approach to map soybean and corn in the state of Parana, Brazil, for crop years 2010–2015. The core of the approach is a decision tree classifier with rules manually built through expert interaction for repeated use. The automated approach is advantageous for its capacity for multi-year mapping without the need to re-train or re-calibrate the classifier. A time series of the MODerate-resolution Imaging Spectroradiometer (MODIS) reflectance product (MCD43A4) was employed to derive vegetation phenology and identify soybean and corn based on the crop calendar. To deal with the phenological similarity between soybean and corn, the surface reflectance of the shortwave infrared band scaled to a phenological stage was used to fully separate the two crops. Results suggested that the mapped areas of soybean and corn agreed with official statistics at the municipal level. The resultant map for the crop year 2012 was evaluated using an independent reference data set, and the overall accuracy and Kappa coefficient were 87.2% and 0.804 respectively. As a result of the mixed pixel effect at the 500 m resolution, classification results were biased depending on topography. In flat, broad and highly cropped areas, uncultivated lands were likely to be identified as soybean or corn, causing over-estimation of cropland area. By contrast, scattered crop fields in mountainous regions with dense natural vegetation tended to be overlooked. For future mapping efforts, the automated mapping algorithm has great potential to be applied to other image series at various scales, especially high-resolution images.
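The paper's core design choice, a hand-built rule set applied to per-pixel phenology metrics instead of a classifier retrained each year, can be caricatured in a few lines. Every threshold below (NDVI peak, season length, the SWIR cut between soybean and corn) is invented for illustration; the real rules come from expert interaction with MODIS phenology.

```python
# Toy hand-built decision rules on per-pixel phenology metrics.

def classify(peak_ndvi, season_length, swir_at_peak):
    """Rules sketching crop/non-crop, then soybean vs corn by SWIR."""
    if peak_ndvi < 0.5:
        return "non-crop"              # never greens up enough
    if season_length > 200:
        return "non-crop"              # e.g. forest: green far too long
    # Soybean canopies tend to differ from corn in the shortwave infrared;
    # the cut-off below is purely illustrative.
    return "soybean" if swir_at_peak < 0.2 else "corn"

pixels = [
    dict(peak_ndvi=0.85, season_length=120, swir_at_peak=0.15),
    dict(peak_ndvi=0.80, season_length=130, swir_at_peak=0.28),
    dict(peak_ndvi=0.30, season_length=60,  swir_at_peak=0.10),
]
print([classify(**p) for p in pixels])
```

Because the rules are fixed, applying the classifier to a new crop year only requires recomputing the phenology metrics, which is exactly the re-use property the abstract emphasises.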

Journal ArticleDOI
TL;DR: This paper develops an efficient method to extract annual impervious surfaces from temporal spectral differences between impervious and pervious surfaces in dense time series Landsat imagery, and applies it to the Pearl River Delta, southern China, from 1988 to 2013.
Abstract: Information on impervious surface distribution and dynamics is useful for understanding urbanization and its impacts on hydrological cycle, water management, surface energy balances, urban heat island, and biodiversity. Numerous methods have been developed and successfully applied to estimate impervious surfaces. Previous methods of impervious surface estimation mainly focused on the spectral differences between impervious surfaces and other land covers. Moreover, the accuracy of estimation from single or multi-temporal images was often limited by the mixed pixel problem in coarse- or medium-resolution imagery or by the intra-class spectral variability problem in high resolution imagery. Time series satellite imagery provides potential to resolve the above problems as well as the spectral confusion with similar surface characteristics due to phenological change, inter-annual climatic variability, and long-term changes of vegetation. Since Landsat time series has a long record with an effective spatial resolution, this study aimed at estimating and mapping impervious surfaces by analyzing temporal spectral differences between impervious and pervious surfaces that were extracted from dense time series Landsat imagery. Specifically, this study developed an efficient method to extract annual impervious surfaces from time series Landsat data and applied it to the Pearl River Delta, southern China, from 1988 to 2013. The annual classification accuracy yielded from 71% to 91% for all classes, while the mapping accuracy of impervious surfaces ranged from 80.5% to 94.5%. Furthermore, it is found that the use of more than 50% of Scan Line Corrector (SLC)-off images after 2003 did not substantially reduced annual classification accuracy, which ranged from 78% to 91%. It is also worthy to note that more than 80% of classification accuracies were achieved in both 2002 and 2010 despite of more than 40% of cloud cover detected in these two years. 
These results suggested that the proposed method was effective and efficient in mapping impervious surfaces and detecting impervious surface changes using temporal spectral differences from dense time series Landsat imagery. The results also revealed the value of full sampling for enhancing temporal resolution and identifying temporal differences between impervious and pervious surfaces in time series analysis.
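The core idea, separating impervious from pervious pixels by their temporal spectral behaviour, can be illustrated with a minimal sketch. The NDVI statistics and the 0.3 threshold below are illustrative assumptions, not the paper's actual features or decision rule:

```python
import numpy as np

def temporal_greenness_features(red, nir):
    """Per-pixel temporal statistics from a dense time series.

    red, nir: arrays of shape (T, H, W) holding surface reflectance over T
    clear observations. Pervious (vegetated) surfaces show high, seasonally
    varying NDVI; impervious surfaces stay persistently low.
    """
    ndvi = (nir - red) / (nir + red + 1e-9)
    return ndvi.max(axis=0), ndvi.mean(axis=0), ndvi.std(axis=0)

def impervious_mask(red, nir, max_ndvi_thresh=0.3):
    # illustrative rule: a pixel whose NDVI never rises above the threshold
    # across the observation period is labelled impervious
    ndvi_max, _, _ = temporal_greenness_features(red, nir)
    return ndvi_max < max_ndvi_thresh
```

In practice the paper classifies annual stacks with more than this single rule, but the sketch shows why dense time series help: a single dry-season image could not distinguish senescent vegetation from pavement, while the temporal NDVI maximum can.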

Journal ArticleDOI
TL;DR: An efficient spectral–structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery, which shows that the proposed SSBFC performs better than the other classification methods for V HSR image scenes.
Abstract: Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. However, land-use classification is hard to address with land-cover classification techniques, due to the complexity of land-use scenes. Scene classification is considered one of the most promising ways to address the land-use classification issue. The commonly used scene classification methods for VHSR imagery are all derived from the computer vision community and mainly deal with terrestrial image recognition. Unlike terrestrial images, VHSR images are taken looking down from airborne and spaceborne sensors, which leads to distinct lighting conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral–structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes the first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. From the experimental results, the spectral information works better than the structural information, while the combination of the two is better than either type of information alone. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification.
The experimental results show that using the whole image as the scope of the pooling operator performs better than using the scope generated by SP. In addition, SSBFC codes and pools the spectral and structural features separately to avoid mutual interference between them. The coding vectors of spectral and structural features are then concatenated into a final coding vector. Finally, SSBFC classifies the final coding vector with a support vector machine (SVM) using a histogram intersection kernel (HIK). Compared with the latest scene classification methods, the experimental results on three VHSR datasets demonstrate that the proposed SSBFC outperforms the other classification methods for VHSR image scenes.
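Two of the ingredients named above, the MeanStd spectral descriptor and the histogram intersection kernel, are simple enough to sketch directly; array shapes and function names here are illustrative, not the paper's implementation:

```python
import numpy as np

def mean_std_descriptor(patch):
    """MeanStd statistical spectral descriptor: per-band first- and
    second-order statistics (mean and standard deviation) of an image
    patch of shape (H, W, bands), concatenated into one vector."""
    return np.concatenate([patch.mean(axis=(0, 1)), patch.std(axis=(0, 1))])

def hik(a, b):
    """Histogram intersection kernel between two non-negative coding
    vectors: K(a, b) = sum_i min(a_i, b_i)."""
    return np.minimum(a, b).sum()
```

In SSBFC the spectral and structural coding vectors are pooled separately and concatenated before the HIK-SVM, which keeps the two feature types from interfering during coding.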

Journal ArticleDOI
TL;DR: The proposed registration method performs well in various urban environments and indoor scenes with centimeter-level accuracy, and improves the efficiency, robustness, and accuracy of registration in comparison with feature plane-based methods.
Abstract: Point clouds collected by terrestrial laser scanning (TLS) from large-scale urban scenes contain a wide variety of objects (buildings, cars, pole-like objects, and others) with symmetric and incomplete structures and relatively low-textured surfaces, all of which pose great challenges for automatic registration between scans. To address these challenges, this paper proposes a marker-free, multi-view registration method based on extracted semantic feature points. First, the method detects semantic feature points through a detection scheme that includes point cloud segmentation, vertical feature line extraction, and semantic information calculation, and finally takes the intersections of these lines with the ground as the semantic feature points. Second, the method matches the semantic feature points using geometrical constraints (a 3-point scheme) as well as semantic information (category and direction), resulting in exhaustive pairwise registration between scans. Finally, the method implements multi-view registration by constructing a minimum spanning tree of the fully connected graph derived from the exhaustive pairwise registration. Experiments have demonstrated that the proposed method performs well in various urban environments and indoor scenes with centimeter-level accuracy, and improves the efficiency, robustness, and accuracy of registration in comparison with feature plane-based methods.
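The final multi-view step reduces to a standard minimum-spanning-tree computation over the fully connected pairwise-registration graph. A generic Kruskal sketch, where edge weights stand in for pairwise registration residuals (how the paper actually weights the graph is an assumption here):

```python
def minimum_spanning_tree(n_scans, edges):
    """Kruskal's algorithm with union-find over the pairwise graph.

    edges: list of (error, i, j), where `error` is a pairwise registration
    residual between scans i and j (lower is better). Returns the MST edges,
    i.e. the cheapest set of pairwise registrations linking all scans.
    """
    parent = list(range(n_scans))

    def find(x):
        # path-halving union-find lookup
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    tree = []
    for err, i, j in sorted(edges):
        ri, rj = find(i), find(j)
        if ri != rj:          # adding this edge does not create a cycle
            parent[ri] = rj
            tree.append((i, j))
    return tree
```

Chaining the n−1 selected pairwise transforms along the tree then aligns all scans into one frame while avoiding the least reliable pairwise registrations.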

Journal ArticleDOI
TL;DR: It is found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene.
Abstract: The U.S. Geological Survey’s Land Change Monitoring, Assessment, and Projection (LCMAP) initiative is a new end-to-end capability to continuously track and characterize changes in land cover, use, and condition to better support research and applications relevant to resource management and environmental change. Among the LCMAP product suite are annual land cover maps that will be available to the public. This paper describes an approach to optimize the selection of training and auxiliary data for deriving the thematic land cover maps based on all available clear observations from Landsats 4–8. Training data were selected from map products of the U.S. Geological Survey’s Land Cover Trends project. The Random Forest classifier was applied for different classification scenarios based on the Continuous Change Detection and Classification (CCDC) algorithm. We found that extracting training data proportionally to the occurrence of land cover classes was superior to an equal distribution of training data per class, and suggest using a total of 20,000 training pixels to classify an area about the size of a Landsat scene. The problem of unbalanced training data was alleviated by extracting a minimum of 600 training pixels and a maximum of 8000 training pixels per class. We additionally explored removing outliers from the training data based on spectral and spatial criteria, but observed no significant improvement in classification results. We also tested the importance of different types of auxiliary data that were available for the conterminous United States, including: (a) five variables used by the National Land Cover Database, (b) three variables from the cloud screening “Function of mask” (Fmask) statistics, and (c) two variables from the change detection results of CCDC.
We found that auxiliary variables such as a Digital Elevation Model and its derivatives (aspect, position index, and slope), potential wetland index, water probability, snow probability, and cloud probability improved the accuracy of land cover classification. Compared to the original strategy of the CCDC algorithm (500 pixels per class), the use of the optimal strategy improved the classification accuracies substantially (15-percentage point increase in overall accuracy and 4-percentage point increase in minimum accuracy).
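The sampling strategy described above, proportional allocation out of a 20,000-pixel budget with per-class bounds of 600 and 8000 pixels, can be sketched directly. The function name is ours, and because of rounding and clipping the capped totals need not sum exactly to the budget:

```python
def allocate_training(class_counts, total=20000,
                      min_per_class=600, max_per_class=8000):
    """Allocate training pixels per class proportionally to class
    occurrence, then clip each allocation to [min_per_class, max_per_class]
    to alleviate the unbalanced-training-data problem."""
    n = sum(class_counts.values())
    alloc = {c: int(round(total * cnt / n)) for c, cnt in class_counts.items()}
    return {c: min(max(a, min_per_class), max_per_class)
            for c, a in alloc.items()}
```

For example, a scene dominated by one class no longer drowns out rare classes: the dominant class is capped at 8000 pixels while a rare class is raised to the 600-pixel floor.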

Journal ArticleDOI
TL;DR: A novel semisupervised classification based on multi-decision labeling and deep feature learning is presented to exploit and utilize as much information as possible to realize the classification task.
Abstract: Semisupervised learning is widely used in hyperspectral image classification to deal with limited training samples; however, more of the information in a hyperspectral image should be further explored. In this paper, a novel semisupervised classification method based on multi-decision labeling and deep feature learning is presented to exploit and utilize as much information as possible to realize the classification task. First, the proposed method takes two decisions to pre-label each unlabeled sample: a local decision based on weighted neighborhood information is made from the surrounding samples, and a global decision based on deep learning is made from the most similar training samples. Then, unlabeled samples with high confidence are selected to extend the training set. Finally, a self-decision, which depends on the features the network itself has learned, is employed on the updated training set to extract spectral-spatial features and produce the classification map. Experimental results on real data indicate that this is an effective and promising semisupervised classification method for hyperspectral images.
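A much-simplified sketch of the two pre-labeling decisions: here the deep-feature similarity is replaced by raw spectral distance and the neighborhood weighting by a plain majority vote, so the names and logic are illustrative only:

```python
import numpy as np

def pseudo_label(x, neighbor_labels, train_X, train_y):
    """Pre-label one unlabeled sample by two decisions.

    local decision : majority label among the sample's spatial neighbors
    global decision: label of the most similar training sample
    The pseudo-label is accepted (high confidence) only when both agree;
    otherwise None is returned and the sample stays unlabeled.
    """
    vals, counts = np.unique(neighbor_labels, return_counts=True)
    local = vals[np.argmax(counts)]
    glob = train_y[np.argmin(np.linalg.norm(train_X - x, axis=1))]
    return local if local == glob else None
```

Requiring agreement between independent decisions is what keeps the extended training set from being polluted by wrong pseudo-labels.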

Journal ArticleDOI
TL;DR: A new automated method for delineating building shadows is proposed that combines spectral and spatial features of the satellite image with an optimized active contour model where the contours are biased to delineate shadow regions.
Abstract: Satellite images can provide valuable information about urban landscape scenes to remote sensing and telecommunication applications. Obtaining information from satellite images is difficult because objects and their surroundings exhibit complex features. The shadows cast by buildings in urban scenes can be processed and used for estimating building heights. Thus, a robust and accurate building shadow detection process is important. Region-based active contour models can be used for satellite image segmentation. However, the spectral heterogeneity that usually exists in satellite images, and the feature similarity between shadow and several non-shadow regions, make building shadow detection challenging. In this work, a new automated method for delineating building shadows is proposed. Initially, spectral and spatial features of the satellite image are utilized for designing a custom filter to enhance shadows and reduce intensity heterogeneity. An effective iterative procedure using intensity differences is developed for tuning and subsequently selecting the most appropriate filter settings, able to highlight the building shadows. The response of the filter is then used for automatically estimating the radiometric property of the shadows. The customized filter and the radiometric feature are utilized to form an optimized active contour model where the contours are biased to delineate shadow regions. Post-processing morphological operations are also developed and applied for removing misleading artefacts. Finally, building heights are approximated using shadow length and the predefined or estimated solar elevation angle. Qualitative and quantitative measures are used for evaluating the performance of the proposed method for both shadow detection and building height estimation.
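The final height-approximation step reduces to simple trigonometry: h = L · tan(α), where L is the shadow length and α the solar elevation angle. A sketch under a flat-terrain assumption (the paper's geometric model may include further corrections):

```python
import math

def building_height(shadow_length_m, solar_elevation_deg):
    """Approximate building height from shadow length and solar elevation,
    assuming flat terrain: h = L * tan(alpha)."""
    return shadow_length_m * math.tan(math.radians(solar_elevation_deg))
```

At a solar elevation of 45°, shadow length equals building height; lower sun angles stretch shadows and make the estimate more sensitive to the measured shadow length.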

Journal ArticleDOI
TL;DR: The significant challenges currently facing ISPRS and its communities are examined, such as providing high-quality information, enabling advanced geospatial computing, and supporting collaborative problem solving.
Abstract: With the increased availability of very high-resolution satellite imagery, terrain based imaging and participatory sensing, inexpensive platforms, and advanced information and communication technologies, the application of imagery is now ubiquitous, playing an important role in many aspects of life and work today. As a leading organisation in this field, the International Society for Photogrammetry and Remote Sensing (ISPRS) has been devoted to effectively and efficiently obtaining and utilising information from imagery since its foundation in the year 1910. This paper examines the significant challenges currently facing ISPRS and its communities, such as providing high-quality information, enabling advanced geospatial computing, and supporting collaborative problem solving. The state-of-the-art in ISPRS related research and development is reviewed and the trends and topics for future work are identified. By providing an overarching scientific vision and research agenda, we hope to call on and mobilise all ISPRS scientists, practitioners and other stakeholders to continue improving our understanding and capacity on information from imagery and to deliver advanced geospatial knowledge that enables humankind to better deal with the challenges ahead, posed for example by global change, ubiquitous sensing, and a demand for real-time information generation.

Journal ArticleDOI
TL;DR: Comparative studies with the existing traffic sign detection and recognition methods demonstrate that the proposed algorithm obtains promising, reliable, and high performance in both detecting traffic signs in 3-D point clouds and recognizing traffic signs on 2-D images.
Abstract: This paper presents a novel algorithm for detection and recognition of traffic signs in mobile laser scanning (MLS) data for intelligent transportation-related applications. The traffic sign detection task is accomplished based on 3-D point clouds by using bag-of-visual-phrases representations; whereas the recognition task is achieved based on 2-D images by using a Gaussian-Bernoulli deep Boltzmann machine-based hierarchical classifier. To exploit high-order feature encodings of feature regions, a deep Boltzmann machine-based feature encoder is constructed. For detecting traffic signs in 3-D point clouds, the proposed algorithm achieves an average recall, precision, quality, and F-score of 0.956, 0.946, 0.907, and 0.951, respectively, on the four selected MLS datasets. For on-image traffic sign recognition, a recognition accuracy of 97.54% is achieved by using the proposed hierarchical classifier. Comparative studies with the existing traffic sign detection and recognition methods demonstrate that our algorithm obtains promising, reliable, and high performance in both detecting traffic signs in 3-D point clouds and recognizing traffic signs on 2-D images.
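The four reported detection measures follow the usual definitions built from true positives (TP), false positives (FP), and false negatives (FN); a small helper makes the relationships explicit (the helper itself is ours, not part of the paper):

```python
def detection_metrics(tp, fp, fn):
    """Recall, precision, quality, and F-score as commonly defined for
    object detection evaluation."""
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    quality = tp / (tp + fp + fn)                       # a.k.a. Jaccard
    f_score = 2 * precision * recall / (precision + recall)
    return recall, precision, quality, f_score
```

Note that quality penalizes both error types at once, which is why the paper's quality figure (0.907) sits below both its recall (0.956) and precision (0.946).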

Journal ArticleDOI
TL;DR: A novel segmentation algorithm based on a Markov random field model is proposed, an extensive data analysis for determining relevant features for the classification problem is given, and whether a good classification rate is achievable is evaluated using the Random Forest method.
Abstract: We present in this article a new method for unsupervised semantic parsing and structure recognition in peri-urban areas using satellite images. The automatic “building” and “road” detection is based on regions extracted by an unsupervised segmentation method. We propose a novel segmentation algorithm based on a Markov random field model, and we give an extensive data analysis for determining relevant features for the classification problem. The novelty of the segmentation algorithm lies in the class-driven vector data quantization and clustering and in the estimation of the likelihoods given the resulting clusters. We evaluated whether a good classification rate is achievable using the Random Forest method, and found that, with a limited number of features, among them some newly defined in this article, we can obtain good classification performance. Our main contribution again lies in the data analysis and the estimation of likelihoods. Finally, we propose a new method for completing the road network that exploits its connectivity and the local and global properties of the network.

Journal ArticleDOI
TL;DR: This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprised of laser scanning and RGB cameras, and its main geometric parameters can be automatically extracted, therefore aiding the inventory process.
Abstract: Nowadays, mobile laser scanning has become a valid technology for infrastructure inspection. This technology permits collecting accurate 3D point clouds of urban and road environments, and the geometric and semantic analysis of these data has become an active research topic in recent years. This paper focuses on the detection of vertical traffic signs in 3D point clouds acquired by a LYNX Mobile Mapper system, comprised of laser scanning and RGB cameras. Each traffic sign is automatically detected in the LiDAR point cloud, and its main geometric parameters can be automatically extracted, thereby aiding the inventory process. Furthermore, the 3D positions of traffic signs are reprojected onto the 2D images, which are spatially and temporally synced with the point cloud. Image analysis allows for recognizing the traffic sign semantics using machine learning approaches. The presented method was tested in road and urban scenarios in Galicia (Spain). The recall results for traffic sign detection are close to 98%, and existing false positives can be easily filtered after point cloud projection. Finally, the lack of a large, publicly available Spanish traffic sign database is pointed out.
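The reprojection of 3D sign positions into the synced images follows the standard pinhole model, x ~ K(RX + t). A minimal sketch; the calibration values and names below are illustrative, not those of the LYNX system:

```python
import numpy as np

def project_to_image(X_world, R, t, K):
    """Project a 3-D point (world frame) to 2-D pixel coordinates.

    R, t : rotation matrix and translation mapping world to camera frame
    K    : 3x3 intrinsic calibration matrix
    Returns (u, v) after perspective division by depth.
    """
    Xc = R @ X_world + t      # world -> camera frame
    uvw = K @ Xc              # camera frame -> homogeneous pixels
    return uvw[:2] / uvw[2]   # perspective division
```

Once a detected sign's 3D centroid lands at pixel (u, v), an image patch around it can be cropped for the machine-learning recognition stage, and detections whose projection falls outside any image can be discarded as false positives.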

Journal ArticleDOI
Bisheng Yang1, Ronggang Huang1, Zhen Dong1, Yufu Zang1, Jianping Li1 
TL;DR: In this article, the ground points and breaklines are extracted from airborne LiDAR point clouds using segment-based filtering and multi-scale morphological filtering, and the proposed method removes amorphous objects from the set of individual points to decrease the effect of the maximum scale on the filtering result.
Abstract: The extraction of ground points and breaklines is a crucial step during generation of high quality digital elevation models (DEMs) from airborne LiDAR point clouds. In this study, we propose a novel automated method for this task. To overcome the disadvantages of applying a single filtering method in areas with various types of terrain, the proposed method first classifies the points into a set of segments and one set of individual points, which are filtered by segment-based filtering and multi-scale morphological filtering, respectively. In the process of multi-scale morphological filtering, the proposed method removes amorphous objects from the set of individual points to decrease the effect of the maximum scale on the filtering result. The proposed method then extracts the breaklines from the ground points, which provide a good foundation for generation of a high quality DEM. Finally, the experimental results demonstrate that the proposed method extracts ground points in a robust manner while preserving the breaklines.
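A generic, much-simplified illustration of multi-scale morphological filtering on a 1-D elevation profile (a stand-in for the paper's point-cloud method; window sizes and thresholds are illustrative): a point survives as ground only if, at every scale, it does not rise above the morphologically opened surface by more than that scale's threshold.

```python
import numpy as np

def _erode(a, w):
    pad = w // 2
    ap = np.pad(a, pad, mode='edge')
    return np.array([ap[i:i + w].min() for i in range(len(a))])

def _dilate(a, w):
    pad = w // 2
    ap = np.pad(a, pad, mode='edge')
    return np.array([ap[i:i + w].max() for i in range(len(a))])

def grey_opening(a, w):
    # morphological opening = erosion followed by dilation
    return _dilate(_erode(a, w), w)

def ground_mask(z, windows=(3, 9), dh=(0.5, 2.0)):
    """Progressive multi-scale morphological filter on a 1-D elevation
    profile: small windows remove small objects (cars), larger windows
    remove larger ones (buildings), with a height threshold per scale."""
    ground = np.ones(len(z), dtype=bool)
    surf = np.asarray(z, dtype=float)
    for w, t in zip(windows, dh):
        opened = grey_opening(surf, w)
        ground &= (surf - opened) <= t
        surf = opened
    return ground
```

The paper's refinement, removing amorphous objects from the individual points before filtering, addresses exactly the weakness this sketch exposes: the largest window size otherwise dictates which big objects can be removed.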

Journal ArticleDOI
TL;DR: In this article, regression models were established for chlorophyll- and nitrogen-aimed indices with their corresponding crop growth variables by exploiting ground-based measurements, and vegetation index models were developed for mapping these parameters from Hyperion imagery.
Abstract: Chlorophyll and nitrogen are the most essential parameters for paddy crop growth. Spectroradiometric measurements were collected at the canopy level during the critical growth period of rice. Chemical analysis was performed to quantify the total leaf content. By exploiting the ground-based measurements, regression models were established for chlorophyll- and nitrogen-aimed indices with their corresponding crop growth variables. Vegetation index models were developed for mapping these parameters from Hyperion imagery in an agriculture system. It was inferred that the present Simple Ratio (SR) and Leaf Nitrogen Concentration (LNC) indices, which followed linear and nonlinear relationships respectively, were completely different from those published by Tian et al. (2011). The nitrogen content varied widely from 1% to 4% using the present modified index models, but only from 2% to 3% using those of Tian et al. (2011). The modified LNC index model performed better than the established Tian et al. (2011) model in estimating nitrogen content from Hyperion imagery. Furthermore, within the chlorophyll range observed for the studied rice varieties grown in the rice agriculture system, the index models (LNC, OASVI, Gitelson, mSR and MTCI) performed well in mapping the spatial distribution of rice chlorophyll content from Hyperion imagery. The spatial distribution of total chlorophyll content varied widely: 1.77 to 5.81 mg/g (LNC), 3.0 to 13 mg/g (OASVI), 0.5 to 10.43 mg/g (Gitelson), 2.18 to 10.61 mg/g (mSR), and 2.90 to 5.40 mg/g (MTCI). The spatial information on these parameters will help in proper nutrient management and yield forecasting, and will serve as input to crop growth and forecasting models for a precision rice agriculture system.
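For illustration, an SR-style index and the kind of least-squares regression fitted between an index and a ground-measured crop variable can be sketched as follows. The band choice and the linear form are assumptions for the sketch, not the study's exact (partly nonlinear) formulations:

```python
import numpy as np

def simple_ratio(nir, red):
    """SR vegetation index: near-infrared over red reflectance.

    Which Hyperion bands (e.g. ~860 nm and ~660 nm) feed the ratio is an
    assumption here, not the study's exact band selection."""
    return nir / red

def fit_linear_index_model(index_vals, measured):
    """Least-squares linear model relating index values to a measured
    crop growth variable (e.g. chlorophyll or nitrogen content)."""
    slope, intercept = np.polyfit(index_vals, measured, 1)
    return slope, intercept
```

The fitted slope and intercept can then be applied pixel-by-pixel to an index image to map the variable spatially, which is essentially how index models transfer ground calibration to Hyperion imagery.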