
Showing papers in "Remote Sensing in 2017"


Journal ArticleDOI
TL;DR: A 3D convolutional neural network framework is proposed for accurate HSI classification, which is lighter, less likely to over-fit, and easier to train, and requires fewer parameters than other deep learning-based methods.
Abstract: Recent research has shown that using spectral–spatial information can considerably improve the performance of hyperspectral image (HSI) classification. HSI data is typically presented in the format of 3D cubes. Thus, 3D spatial filtering naturally offers a simple and effective method for simultaneously extracting the spectral–spatial features within such images. In this paper, a 3D convolutional neural network (3D-CNN) framework is proposed for accurate HSI classification. The proposed method views the HSI cube data altogether without relying on any preprocessing or post-processing, extracting the deep spectral–spatial-combined features effectively. In addition, it requires fewer parameters than other deep learning-based methods. Thus, the model is lighter, less likely to over-fit, and easier to train. For comparison and validation, we test the proposed method along with three other deep learning-based HSI classification methods—namely, stacked autoencoder (SAE), deep belief network (DBN), and 2D-CNN-based methods—on three real-world HSI datasets captured by different sensors. Experimental results demonstrate that our 3D-CNN-based method outperforms these state-of-the-art methods and sets a new record.
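
As a rough illustration of the idea, the sketch below builds a tiny 3D-CNN for hyperspectral patch classification; the layer sizes, patch size, and class count are assumptions for illustration, not the paper's exact architecture.

```python
# Minimal 3D-CNN sketch for hyperspectral patch classification (PyTorch).
# The 7x7 spatial patch, 103 bands, and kernel sizes are illustrative.
import torch
import torch.nn as nn

class HSI3DCNN(nn.Module):
    def __init__(self, n_bands=103, n_classes=9):
        super().__init__()
        self.features = nn.Sequential(
            # input: (batch, 1, bands, H, W); kernels span spectral + spatial dims
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3)), nn.ReLU(),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3)), nn.ReLU(),
        )
        with torch.no_grad():  # infer flattened size from a dummy patch
            n_flat = self.features(torch.zeros(1, 1, n_bands, 7, 7)).numel()
        self.classifier = nn.Linear(n_flat, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = HSI3DCNN()
logits = model(torch.randn(4, 1, 103, 7, 7))  # four 7x7 patches, 103 bands
print(logits.shape)  # torch.Size([4, 9])
```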

835 citations


Journal ArticleDOI
TL;DR: A survey including hyperspectral sensors, inherent data processing and applications focusing both on agriculture and forestry—wherein the combination of UAV and hyperspectral sensors plays a central role—is presented in this paper.
Abstract: Traditional imagery—provided, for example, by RGB and/or NIR sensors—has proven to be useful in many agroforestry applications. However, it lacks the spectral range and precision to profile materials and organisms that only hyperspectral sensors can provide. This kind of high-resolution spectroscopy was first used in satellites and later in manned aircraft, which are significantly expensive platforms and extremely restrictive due to availability limitations and/or complex logistics. More recently, unmanned aerial systems (UAS) have emerged as a very popular and cost-effective remote sensing technology, composed of aerial platforms capable of carrying small-sized and lightweight sensors. Meanwhile, hyperspectral technology developments have been consistently resulting in smaller and lighter sensors that can currently be integrated in UAS for either scientific or commercial purposes. The hyperspectral sensors’ ability for measuring hundreds of bands raises complexity when considering the sheer quantity of acquired data, whose usefulness depends on both calibration and corrective tasks occurring in pre- and post-flight stages. Further steps regarding hyperspectral data processing must be performed towards the retrieval of relevant information, which provides the true benefits for assertive interventions in agricultural crops and forested areas. Considering the aforementioned topics and the goal of providing a global view focused on hyperspectral-based remote sensing supported by UAV platforms, a survey including hyperspectral sensors, inherent data processing and applications focusing both on agriculture and forestry—wherein the combination of UAV and hyperspectral sensors plays a central role—is presented in this paper. Firstly, the advantages of hyperspectral data over RGB imagery and multispectral data are highlighted. Then, hyperspectral acquisition devices are addressed, including sensor types, acquisition modes and UAV-compatible sensors that can be used for both research and commercial purposes. Pre-flight operations and post-flight pre-processing are pointed out as necessary to ensure the usefulness of hyperspectral data for further processing towards the retrieval of conclusive information. With the goal of simplifying hyperspectral data processing—by isolating the common user from the processes’ mathematical complexity—several available toolboxes that allow a direct access to level-one hyperspectral data are presented. Moreover, research works focusing on the symbiosis between UAVs and hyperspectral sensors for agriculture and forestry applications are reviewed, just before the paper’s conclusions.

736 citations


Journal ArticleDOI
TL;DR: The results reveal the significant accuracy of IBI-WAV forecasts and prove that a combined observational and modeling approach can provide a comprehensive characterization of severe wave conditions in coastal areas and show the benefits of the complementary nature of both systems.
Abstract: The Galician coast (NW Spain) is a region that is strongly influenced by the presence of low pressure systems in the mid-Atlantic Ocean and the periodic passage of storms that give rise to severe sea states. Since its wave climate is one of the most energetic in Europe, the objectives of this paper were twofold. The first objective was to characterize the most extreme wave height events in Galicia over the wintertime of a two-year period (2015–2016) by using reliable high-frequency radar wave parameters in concert with predictions from a regional wave (WAV) forecasting system running operationally in the Iberia-Biscay-Ireland (IBI) area, denominated IBI-WAV. The second objective was to showcase the application of satellite wave altimetry (in particular, remote-sensed three-hourly wave height estimations) for the daily skill assessment of the IBI-WAV model product. Special attention was focused on monitoring Ophelia—one of the major hurricanes on record in the easternmost Atlantic—during its 3-day track over Ireland and the UK (15–17 October 2017). Overall, the results reveal the significant accuracy of IBI-WAV forecasts and prove that a combined observational and modeling approach can provide a comprehensive characterization of severe wave conditions in coastal areas and show the benefits of the complementary nature of both systems.
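
The daily skill assessment described above reduces to standard error statistics between collocated model and altimeter wave heights; a minimal sketch, with invented input arrays:

```python
# Sketch of the daily skill metrics used to validate a wave model against
# altimeter significant-wave-height estimates. Inputs are hypothetical
# collocated arrays (model vs. altimeter Hs, in metres).
import numpy as np

def wave_skill(hs_model, hs_altimeter):
    hs_model, hs_altimeter = np.asarray(hs_model), np.asarray(hs_altimeter)
    bias = np.mean(hs_model - hs_altimeter)
    rmse = np.sqrt(np.mean((hs_model - hs_altimeter) ** 2))
    corr = np.corrcoef(hs_model, hs_altimeter)[0, 1]
    return {"bias": bias, "rmse": rmse, "corr": corr}

print(wave_skill([2.1, 3.4, 5.0, 6.2], [2.0, 3.6, 4.7, 6.5]))
```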

424 citations


Journal ArticleDOI
TL;DR: A global analysis of Landsat-8, Sentinel-2A and Sentinel-2B metadata obtained from the Committee on Earth Observation Satellites (CEOS) Visualization Environment (COVE) tool for 2016 is presented and the temporal observation frequency improvements afforded by sensor combination are shown to be significant.
Abstract: Combination of different satellite data will provide increased opportunities for more frequent cloud-free surface observations due to variable cloud cover at the different satellite overpass times and dates. Satellite data from the polar-orbiting Landsat-8 (launched 2013), Sentinel-2A (launched 2015) and Sentinel-2B (launched 2017) sensors offer 10 m to 30 m multi-spectral global coverage. Together, they advance the virtual constellation paradigm for mid-resolution land imaging. In this study, a global analysis of Landsat-8, Sentinel-2A and Sentinel-2B metadata obtained from the Committee on Earth Observation Satellites (CEOS) Visualization Environment (COVE) tool for 2016 is presented. A global equal-area projection grid defined every 0.05° is used, considering each sensor separately and all sensors combined. Histograms, maps and global summary statistics of the temporal revisit intervals (minimum, mean, and maximum) and the number of observations are reported. The temporal observation frequency improvements afforded by sensor combination are shown to be significant. In particular, considering Landsat-8, Sentinel-2A, and Sentinel-2B together will provide a global median average revisit interval of 2.9 days, and, over a year, a global median minimum revisit interval of 14 min (±1 min) and maximum revisit interval of 7.0 days.
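
Per grid cell, the reported statistics reduce to the gaps between successive overpass times; a minimal sketch with invented timestamps:

```python
# Sketch: revisit statistics for one grid cell from combined sensor overpass
# times, as in the paper's per-cell analysis. The timestamps are invented.
import numpy as np

def revisit_stats(overpass_times_days):
    t = np.sort(np.asarray(overpass_times_days, dtype=float))
    gaps = np.diff(t)  # revisit intervals between successive observations
    return gaps.min(), gaps.mean(), gaps.max()

# Landsat-8 + Sentinel-2A/B overpasses of one cell (fractional day of year);
# the near-coincident pair yields a minutes-scale minimum revisit interval.
times = [1.40, 4.35, 4.36, 9.41, 11.35, 14.40, 16.37]
print(revisit_stats(times))  # (min, mean, max) revisit interval in days
```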

410 citations


Journal ArticleDOI
TL;DR: An exploratory investigation of UAV regulations on the global scale, including a thorough literature review and a comparative analysis of national regulatory frameworks, reveals distinct variations in all the compared variables.
Abstract: UAVs—unmanned aerial vehicles—facilitate data acquisition at temporal and spatial scales that still remain unachievable for traditional remote sensing platforms. However, current legal frameworks that regulate UAVs present significant barriers to research and development. To highlight the importance, impact, and diversity of UAV regulations, this paper provides an exploratory investigation of UAV regulations on the global scale. For this, the methodological approach consists of a research synthesis of UAV regulations, including a thorough literature review and a comparative analysis of national regulatory frameworks. Similarities and contrasting elements in the various national UAV regulations are explored including their statuses from the perspectives of past, present, and future trends. Since the early 2000s, countries have gradually established national legal frameworks. Although all UAV regulations have one common goal—minimizing the risks to other airspace users and to both people and property on the ground—the results reveal distinct variations in all the compared variables. Furthermore, besides the clear presence of legal frameworks, market forces such as industry design standards and reliable information about UAVs as public goods are expected to shape future developments.

377 citations


Journal ArticleDOI
TL;DR: The presented results show the good quality of the Sentinel-2 mission products both in terms of radiometry and geometry and provide an overview of the next mission steps related to data quality aspects.
Abstract: As part of the Copernicus programme of the European Commission (EC), the European Space Agency (ESA) has developed and is currently operating the Sentinel-2 mission that is acquiring high spatial resolution optical imagery. This article provides a description of the calibration activities and the status of the mission products validation activities after one year in orbit. Measured performances, from the validation activities, cover both Top-Of-Atmosphere (TOA) and Bottom-Of-Atmosphere (BOA) products. The presented results show the good quality of the mission products both in terms of radiometry and geometry and provide an overview of the next mission steps related to data quality aspects.

339 citations


Journal ArticleDOI
TL;DR: This work proposes a transfer learning based method, making knowledge learned from sufficient unlabeled SAR scene images transferrable to labeled SAR target data, and designs an assembled CNN architecture consisting of a classification pathway and a reconstruction pathway, together with a feedback bypass additionally.
Abstract: Tremendous progress has been made in object recognition with deep convolutional neural networks (CNNs), thanks to the availability of large-scale annotated datasets. With the ability of learning highly hierarchical image feature extractors, deep CNNs are also expected to solve the Synthetic Aperture Radar (SAR) target classification problems. However, the limited labeled SAR target data becomes a handicap to train a deep CNN. To solve this problem, we propose a transfer learning based method, making knowledge learned from sufficient unlabeled SAR scene images transferrable to labeled SAR target data. We design an assembled CNN architecture consisting of a classification pathway and a reconstruction pathway, together with an additional feedback bypass. Instead of training a deep network from scratch with a limited dataset, a large number of unlabeled SAR scene images are first used to train the reconstruction pathway with stacked convolutional auto-encoders (SCAE). Then, these pre-trained convolutional layers are reused to transfer knowledge to SAR target classification tasks, with the feedback bypass introducing the reconstruction loss simultaneously. The experimental results demonstrate that transfer learning leads to a better performance in the case of scarce labeled training data and that the additional feedback bypass with reconstruction loss helps to boost the capability of the classification pathway.
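
A minimal sketch of the two-stage idea (unsupervised convolutional auto-encoder pre-training, then encoder reuse for classification); all shapes and layer counts are assumptions, not the paper's assembled network:

```python
# Sketch of the transfer idea: pre-train convolutional autoencoder layers on
# unlabeled SAR scenes, then reuse the encoder in a classifier.
import torch
import torch.nn as nn

encoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
)
decoder = nn.Sequential(
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 2, stride=2),
)

# Stage 1: train encoder+decoder on unlabeled scenes with reconstruction loss
scene = torch.randn(8, 1, 64, 64)
recon_loss = nn.functional.mse_loss(decoder(encoder(scene)), scene)

# Stage 2: reuse the pre-trained encoder for target classification
classifier = nn.Sequential(encoder, nn.Flatten(), nn.Linear(32 * 16 * 16, 10))
logits = classifier(torch.randn(8, 1, 64, 64))  # 10 target classes assumed
```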

335 citations


Journal ArticleDOI
TL;DR: This paper presents a methodology for the fully automatic production of land cover maps at country scale using high resolution optical image time series, which is based on supervised classification and uses existing databases as reference data for training and validation.
Abstract: A detailed and accurate knowledge of land cover is crucial for many scientific and operational applications, and as such, it has been identified as an Essential Climate Variable. This accurate knowledge needs frequent updates. This paper presents a methodology for the fully automatic production of land cover maps at country scale using high resolution optical image time series, which is based on supervised classification and uses existing databases as reference data for training and validation. The originality of the approach resides in the use of all available image data, a simple pre-processing step leading to a homogeneous set of acquisition dates over the whole area, and the use of a supervised classifier that is robust to errors in the reference data. The produced maps have a kappa coefficient of 0.86 with 17 land cover classes. The processing is efficient, allowing a fast delivery of the maps after the acquisition of the image data; it does not need expensive field surveys for model calibration and validation, nor human operators for decision making, and it uses open and freely available imagery. The land cover maps are provided with a confidence map which gives information at the pixel level about the expected quality of the result.
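
For reference, the reported kappa coefficient follows directly from a confusion matrix; the matrix values below are invented:

```python
# Sketch: Cohen's kappa from a confusion matrix, the agreement measure the
# paper reports (kappa = 0.86 over 17 classes). Matrix values are invented.
import numpy as np

def kappa(cm):
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    po = np.trace(cm) / n                      # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2  # chance agreement
    return (po - pe) / (1 - pe)

print(kappa([[50, 2, 3], [4, 40, 1], [2, 3, 45]]))  # ~0.85
```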

321 citations


Journal ArticleDOI
TL;DR: Results were promising, indicating that hyperspectral 3D remote sensing was operational from a UAV platform even in very difficult conditions; these novel methods are expected to provide a powerful tool for automating various environmental close-range remote sensing tasks in the very near future.
Abstract: Small unmanned aerial vehicle (UAV) based remote sensing is a rapidly evolving technology. Novel sensors and methods are entering the market, offering completely new possibilities to carry out remote sensing tasks. Three-dimensional (3D) hyperspectral remote sensing is a novel and powerful technology that has recently become available to small UAVs. This study investigated the performance of UAV-based photogrammetry and hyperspectral imaging in individual tree detection and tree species classification in boreal forests. Eleven test sites with 4151 reference trees representing various tree species and developmental stages were collected in June 2014 using a UAV remote sensing system equipped with a frame format hyperspectral camera and an RGB camera in highly variable weather conditions. Dense point clouds were measured photogrammetrically by automatic image matching using high resolution RGB images with a 5 cm point interval. Spectral features were obtained from the hyperspectral image blocks, the large radiometric variation of which was compensated for by using a novel approach based on radiometric block adjustment with the support of in-flight irradiance observations. Spectral and 3D point cloud features were used in the classification experiment with various classifiers. The best results were obtained with Random Forest and Multilayer Perceptron (MLP) which both gave 95% overall accuracies and an F-score of 0.93. Accuracy of individual tree identification from the photogrammetric point clouds varied between 40% and 95%, depending on the characteristics of the area. Challenges in reference measurements might also have reduced these numbers. Results were promising, indicating that hyperspectral 3D remote sensing was operational from a UAV platform even in very difficult conditions. These novel methods are expected to provide a powerful tool for automating various environmental close-range remote sensing tasks in the very near future.
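
A minimal sketch of the classification step, fusing spectral and point-cloud features in a Random Forest; the synthetic features stand in for the paper's 4151 reference trees:

```python
# Sketch: concatenate hyperspectral features with point-cloud (structure)
# features per tree and classify with Random Forest. Data are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
spectral = rng.normal(size=(400, 30))   # e.g., 30 band means per tree crown
structure = rng.normal(size=(400, 5))   # e.g., height percentiles per crown
X = np.hstack([spectral, structure])
y = rng.integers(0, 3, size=400)        # three species classes assumed

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xtr, ytr)
print(f1_score(yte, clf.predict(Xte), average="macro"))
```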

306 citations


Journal ArticleDOI
TL;DR: It is suggested that the development of land cover classification methods grew alongside the launches of a new series of Landsat sensors and advancements in computer science, and many advancements in specific classifiers and algorithms have occurred in the last decade.
Abstract: Land cover classification of Landsat images is one of the most important applications developed from Earth observation satellites. The last four decades were marked by different developments in land cover classification methods of Landsat images. This paper reviews the developments in land cover classification methods for Landsat images from the 1970s to date and highlights key ways to optimize analysis of Landsat images in order to attain the desired results. This review suggests that the development of land cover classification methods grew alongside the launches of a new series of Landsat sensors and advancements in computer science. Most classification methods were initially developed in the 1970s and 1980s; however, many advancements in specific classifiers and algorithms have occurred in the last decade. The first methods of land cover classification to be applied to Landsat images were visual analyses in the early 1970s, followed by unsupervised and supervised pixel-based classification methods using maximum likelihood, K-means and Iterative Self-Organizing Data Analysis Technique (ISODAT) classifiers. After 1980, other methods such as sub-pixel, knowledge-based, contextual-based, object-based image analysis (OBIA) and hybrid approaches became common in land cover classification. Attaining the best classification results with Landsat images demands particular attention to the specifications of each classification method, such as selecting the right training samples, choosing the appropriate segmentation scale for OBIA, pre-processing calibration, choosing the right classifier and using suitable Landsat images. All these classification methods applied on Landsat images have strengths and limitations. Most studies have reported the superior performance of OBIA on different landscapes such as agricultural areas, forests, urban settlements and wetlands; however, OBIA has challenges such as selecting the optimal segmentation scale, which can result in over- or under-segmentation, and the low spatial resolution of Landsat images. Other classification methods have the potential to produce accurate classification results when appropriate procedures are followed. More research is needed on the application of hybrid classifiers as they are considered more complex methods for land cover classification.

282 citations


Journal ArticleDOI
TL;DR: An elaborately designed deep hierarchical network, namely a contextual region-based convolutional neural network with multilayer fusion, for SAR ship detection, which is composed of a region proposal network (RPN) with high network resolution and an object detection network with contextual features.
Abstract: Synthetic aperture radar (SAR) ship detection has been playing an increasingly essential role in marine monitoring in recent years. The lack of detailed information about ships in wide swath SAR imagery poses difficulty for traditional methods in exploring effective features for ship discrimination. Being capable of feature representation, deep neural networks have achieved dramatic progress in object detection recently. However, most of them suffer from missed detection of small-sized targets, which means that few of them are able to be employed directly in SAR ship detection tasks. This paper discloses an elaborately designed deep hierarchical network, namely a contextual region-based convolutional neural network with multilayer fusion, for SAR ship detection, which is composed of a region proposal network (RPN) with high network resolution and an object detection network with contextual features. Instead of using low-resolution feature maps from a single layer for proposal generation in an RPN, the proposed method employs an intermediate layer combined with a downscaled shallow layer and an up-sampled deep layer to produce region proposals. In the object detection network, the region proposals are projected onto multiple layers with region of interest (ROI) pooling to extract the corresponding ROI features and contextual features around the ROI. After normalization and rescaling, they are subsequently concatenated into an integrated feature vector for final outputs. The proposed framework fuses the deep semantic and shallow high-resolution features, improving the detection performance for small-sized ships. The additional contextual features provide complementary information for classification and help to rule out false alarms. Experiments based on the Sentinel-1 dataset, which contains twenty-seven SAR images with 7986 labeled ships, verify that the proposed method achieves an excellent performance in SAR ship detection.
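
A minimal sketch of the multilayer-fusion step (pool the shallow map, upsample the deep map, concatenate with the intermediate map); channel sizes are assumptions:

```python
# Sketch of multilayer fusion: bring a shallow high-resolution map and a
# deep low-resolution map to the intermediate resolution, then concatenate.
import torch
import torch.nn.functional as F

shallow = torch.randn(1, 64, 128, 128)   # high-resolution, semantically weak
middle  = torch.randn(1, 128, 64, 64)    # intermediate layer
deep    = torch.randn(1, 256, 32, 32)    # low-resolution, semantically strong

fused = torch.cat([
    F.max_pool2d(shallow, kernel_size=2),                 # 128 -> 64
    middle,
    F.interpolate(deep, scale_factor=2, mode="nearest"),  # 32 -> 64
], dim=1)
print(fused.shape)  # torch.Size([1, 448, 64, 64]); input to the RPN head
```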

Journal ArticleDOI
TL;DR: An accurate classification approach for high resolution remote sensing imagery based on the improved FCN model is proposed, which improves the density of output class maps by introducing Atrous convolution, and designs a multi-scale network architecture by adding a skip-layer structure to make it capable for multi-resolution image classification.
Abstract: As a variant of Convolutional Neural Networks (CNNs) in Deep Learning, the Fully Convolutional Network (FCN) model achieved state-of-the-art performance for natural image semantic segmentation. In this paper, an accurate classification approach for high resolution remote sensing imagery based on the improved FCN model is proposed. Firstly, we improve the density of output class maps by introducing Atrous convolution, and secondly, we design a multi-scale network architecture by adding a skip-layer structure to make it capable of multi-resolution image classification. Finally, we further refine the output class map using Conditional Random Fields (CRFs) post-processing. Our classification model is trained on 70 GF-2 true color images, and tested on the other 4 GF-2 images and 3 IKONOS true color images. We also employ object-oriented classification, patch-based CNN classification, and the FCN-8s approach on the same images for comparison. The experiments show that compared with the existing approaches, our approach has an obvious improvement in accuracy. The average precision, recall, and Kappa coefficient of our approach are 0.81, 0.78, and 0.83, respectively. The experiments also prove that our approach has strong applicability for multi-resolution image classification.
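
A minimal sketch of the atrous (dilated) convolution named above, which enlarges the receptive field while keeping the output map dense; sizes are illustrative:

```python
# Sketch: a dilated 3x3 convolution keeps the spatial resolution of the
# output class map while seeing a wider context than a standard 3x3.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 256, 256)                   # a GF-2-like RGB tile
atrous = nn.Conv2d(3, 32, kernel_size=3, dilation=2, padding=2)
y = atrous(x)     # same 256x256 resolution, enlarged receptive field
print(y.shape)    # torch.Size([1, 32, 256, 256])
```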

Journal ArticleDOI
TL;DR: This research presents an approach for cropland extent mapping at high spatial resolution (30-m or better) using the 10-day, 10 to 20-m, Sentinel-2 data in combination with 16- day, 30- m, Landsat-8 data on Google Earth Engine (GEE) to improve segmentation accuracy.
Abstract: A satellite-derived cropland extent map at high spatial resolution (30-m or better) is a must for food and water security analysis. Precise and accurate global cropland extent maps, indicating cropland and non-cropland areas, are starting points to develop higher-level products such as crop watering methods (irrigated or rainfed), cropping intensities (e.g., single, double, or continuous cropping), crop types, cropland fallows, as well as for assessment of cropland productivity (productivity per unit of land), and crop water productivity (productivity per unit of water). Uncertainties associated with the cropland extent map have cascading effects on all higher-level cropland products. However, precise and accurate cropland extent maps at high spatial resolution over large areas (e.g., continents or the globe) are challenging to produce due to the small-holder dominant agricultural systems like those found in most of Africa and Asia. Cloud-based geospatial computing platforms and multi-date, multi-sensor satellite image inventories on Google Earth Engine offer opportunities for mapping croplands with precision and accuracy over large areas that satisfy the requirements of a broad range of applications. Such maps are expected to provide highly significant improvements compared to existing products, which tend to be coarser in resolution and often fail to capture fragmented small-holder farms, especially in regions with high dynamic change within and across years. To overcome these limitations, in this research we present an approach for cropland extent mapping at high spatial resolution (30-m or better) using the 10-day, 10 to 20-m, Sentinel-2 data in combination with 16-day, 30-m, Landsat-8 data on Google Earth Engine (GEE). First, nominal 30-m resolution satellite imagery composites were created from 36,924 scenes of Sentinel-2 and Landsat-8 images for the entire African continent in 2015–2016. These composites were generated using a median-mosaic of five bands (blue, green, red, near-infrared, NDVI) during each of the two periods (period 1: January–June 2016 and period 2: July–December 2015) plus a 30-m slope layer derived from the Shuttle Radar Topographic Mission (SRTM) elevation dataset. Second, we selected Cropland/Non-cropland training samples (sample size = 9791) from various sources in GEE to create pixel-based classifications. Random Forest (RF) was used as the primary supervised classification algorithm because of its efficiency, and when over-fitting of RF occurred due to noise in the input training data, a Support Vector Machine (SVM) was applied to compensate for such defects in specific areas. Third, the Recursive Hierarchical Segmentation (RHSeg) algorithm was employed to generate an object-oriented segmentation layer based on spectral and spatial properties from the same input data. This layer was merged with the pixel-based classification to improve segmentation accuracy. Accuracies of the merged 30-m crop extent product were computed using an error matrix approach in which 1754 independent validation samples were used. In addition, a comparison was performed with other available cropland maps as well as with LULC maps to show spatial similarity. Finally, the cropland area results derived from the map were compared with UN FAO statistics.
The independent accuracy assessment showed a weighted overall accuracy of 94%, with a producer’s accuracy of 85.9% (or omission error of 14.1%), and user’s accuracy of 68.5% (commission error of 31.5%) for the cropland class. The total net cropland area (TNCA) of Africa was estimated as 313 Mha for the nominal year 2015. The online product, referred to as the Global Food Security-support Analysis Data @ 30-m for the African Continent, Cropland Extent product (GFSAD30AFCE), is distributed through NASA’s Land Processes Distributed Active Archive Center (LP DAAC) (available for download by 10 November 2017 or earlier) at https://doi.org/10.5067/MEaSUREs/GFSAD/GFSAD30AFCE.001 and can be viewed at https://croplands.org/app/map. Causes of uncertainty and limitations within the crop extent product are discussed in detail.
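
A minimal sketch of the period-1 median composite in the Google Earth Engine Python API; the area of interest and band choices are assumptions, and the paper's full recipe also folds in Landsat-8 and an SRTM slope layer:

```python
# Sketch of a Sentinel-2 median composite in the Earth Engine Python API.
# Assumes an authenticated Earth Engine session; the AOI is hypothetical.
import ee
ee.Initialize()

aoi = ee.Geometry.Rectangle([30.0, -2.0, 31.0, -1.0])  # hypothetical AOI
s2 = (ee.ImageCollection('COPERNICUS/S2')
        .filterDate('2016-01-01', '2016-07-01')        # period 1
        .filterBounds(aoi)
        .select(['B2', 'B3', 'B4', 'B8']))             # blue/green/red/NIR

def add_ndvi(img):
    return img.addBands(img.normalizedDifference(['B8', 'B4']).rename('NDVI'))

composite = s2.map(add_ndvi).median().clip(aoi)        # 5-band median mosaic
```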

Journal ArticleDOI
TL;DR: An improved pre-trained AlexNet architecture named pre-trained AlexNet-SPP-SS has been proposed, which incorporates scale pooling (spatial pyramid pooling, SPP) and side supervision (SS) to address these two issues.
Abstract: The rapid development of high spatial resolution (HSR) remote sensing imagery techniques not only provides a considerable number of datasets for scene classification tasks but also demands an appropriate scene classification choice when faced with finite labeled samples. AlexNet, as a relatively simple convolutional neural network (CNN) architecture, has achieved great success in scene classification tasks and has been proven to be an excellent foundational hierarchical and automatic scene classification technique. However, current HSR remote sensing imagery scene classification datasets always have the characteristics of small quantities and simple categories, where the limited annotated samples easily cause non-convergence. For HSR remote sensing imagery, multi-scale information of the same scenes can represent the scene semantics to a certain extent but lacks an efficient fusion mechanism. Meanwhile, the current pre-trained AlexNet architecture lacks appropriate supervision for enhancing the performance of the model, which easily causes overfitting. In this paper, an improved pre-trained AlexNet architecture named pre-trained AlexNet-SPP-SS is proposed, which incorporates scale pooling (spatial pyramid pooling, SPP) and side supervision (SS) to address the above two situations. Extensive experimental results conducted on the UC Merced dataset and the Google Image dataset of SIRI-WHU have demonstrated that the proposed pre-trained AlexNet-SPP-SS model is superior to the original AlexNet architecture as well as the traditional scene classification methods.
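
A minimal sketch of spatial pyramid pooling, which produces a fixed-length feature vector regardless of input size; the 1/2/4 pyramid is a common choice assumed here:

```python
# Sketch of spatial pyramid pooling (SPP): pool a feature map at several
# grid sizes and concatenate into one fixed-length vector.
import torch
import torch.nn.functional as F

def spp(feat, levels=(1, 2, 4)):
    # feat: (batch, channels, H, W) -> (batch, channels * sum(l*l))
    pooled = [F.adaptive_max_pool2d(feat, l).flatten(1) for l in levels]
    return torch.cat(pooled, dim=1)

feat = torch.randn(2, 256, 13, 13)   # AlexNet-like conv5 output
print(spp(feat).shape)               # torch.Size([2, 5376]) = 256 * 21
```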

Journal ArticleDOI
TL;DR: This work presents a deep-learning based segment-before-detect method for segmentation and subsequent detection and classification of several varieties of wheeled vehicles in high resolution remote sensing images to show that deep learning is also suitable for object-oriented analysis of Earth Observation data as effective object detection can be obtained as a byproduct of accurate semantic segmentation.
Abstract: Like computer vision before it, remote sensing has been radically changed by the introduction of deep learning and, more notably, Convolutional Neural Networks. Land cover classification, object detection and scene understanding in aerial images rely more and more on deep networks to achieve new state-of-the-art results. Recent architectures such as Fully Convolutional Networks can even produce pixel level annotations for semantic mapping. In this work, we present a deep-learning based segment-before-detect method for segmentation and subsequent detection and classification of several varieties of wheeled vehicles in high resolution remote sensing images. This allows us to investigate object detection and classification on a complex dataset made up of visually similar classes, and to demonstrate the relevance of such a subclass modeling approach. In particular, we want to show that deep learning is also suitable for object-oriented analysis of Earth Observation data, as effective object detection can be obtained as a byproduct of accurate semantic segmentation. First, we train a deep fully convolutional network on the ISPRS Potsdam and the NZAM/ONERA Christchurch datasets and show how the learnt semantic maps can be used to extract precise segmentation of vehicles. Then, we show that those maps are accurate enough to perform vehicle detection by simple connected component extraction. This allows us to study the distribution of vehicles in the city. Finally, we train a Convolutional Neural Network to perform vehicle classification on the VEDAI dataset, and transfer its knowledge to classify the individual vehicle instances that we detected.
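
A minimal sketch of detection as a byproduct of segmentation: label connected components of the vehicle class and keep each blob's bounding box as one detection; the mask below is synthetic:

```python
# Sketch: connected-component extraction turns a binary semantic map into
# per-object detections. Uses scipy; the "vehicle" mask is synthetic.
import numpy as np
from scipy import ndimage

mask = np.zeros((100, 100), dtype=bool)   # binary "vehicle" semantic map
mask[10:14, 20:28] = True                 # two fake car-sized blobs
mask[50:54, 60:68] = True

labels, n = ndimage.label(mask)
boxes = ndimage.find_objects(labels)      # one slice-pair per component
for i, (rows, cols) in enumerate(boxes, 1):
    print(f"vehicle {i}: rows {rows.start}-{rows.stop}, cols {cols.start}-{cols.stop}")
```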

Journal ArticleDOI
TL;DR: Sentinel-2 bands at 10 m spatial resolution are suitable for estimating LAI, LCC, and CCC, avoiding the need for red-edge bands that are only available at 20 m, and are an important finding for applying Sentinel-2 data in precision agriculture.
Abstract: Leaf area index (LAI) and chlorophyll content, at leaf and canopy level, are important variables for agricultural applications because of their crucial role in photosynthesis and in plant functioning. The goal of this study was to test the hypothesis that LAI, leaf chlorophyll content (LCC), and canopy chlorophyll content (CCC) of a potato crop can be estimated by vegetation indices for the first time using Sentinel-2 satellite images. In 2016, ten plots of 30 × 30 m were designed in a potato field with different fertilization levels. During the growing season, approximately 10 daily radiometric field measurements were used to determine LAI, LCC, and CCC. These radiometric determinations were extensively calibrated against LAI2000 and chlorophyll meter (SPAD, soil plant analysis development) measurements for potato crops grown in the years 2010–2014. Results for Sentinel-2 showed that the weighted difference vegetation index (WDVI) using bands at 10 m spatial resolution can be used for estimating the LAI (R2 of 0.809; root mean square error of prediction (RMSEP) of 0.36). The ratio of the transformed chlorophyll in reflectance index and the optimized soil-adjusted vegetation index (TCARI/OSAVI) proved to be a good linear estimator of LCC at 20 m (R2 of 0.696; RMSEP of 0.062 g·m−2). The performance of the chlorophyll vegetation index (CVI) at 10 m spatial resolution was slightly worse (R2 of 0.656; RMSEP of 0.066 g·m−2) compared to TCARI/OSAVI. Finally, results showed that the green chlorophyll index (CIgreen) was an accurate and linear estimator of CCC at 10 m (R2 of 0.818; RMSEP of 0.29 g·m−2). Results for CIgreen were better than for the red-edge chlorophyll index (CIred-edge, R2 of 0.576, RMSE of 0.43 g·m−2). Our results show that Sentinel-2 bands at 10 m spatial resolution are suitable for estimating LAI, LCC, and CCC, avoiding the need for red-edge bands that are only available at 20 m. This is an important finding for applying Sentinel-2 data in precision agriculture.
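
For reference, two of the simpler indices follow directly from their standard definitions; the soil-line slope and reflectance values below are illustrative:

```python
# Sketch of two indices used in the study, from their standard definitions
# (on Sentinel-2: B3 green, B4 red, B8 NIR, all at 10 m).
import numpy as np

def wdvi(nir, red, soil_slope=1.5):
    return nir - soil_slope * red    # weighted difference vegetation index

def ci_green(nir, green):
    return nir / green - 1.0         # green chlorophyll index

nir, red, green = np.array([0.45]), np.array([0.08]), np.array([0.10])
print(wdvi(nir, red), ci_green(nir, green))
```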

Journal ArticleDOI
TL;DR: The main objective of the present paper is to develop an operational approach for soil moisture mapping in agricultural areas at a high spatial resolution over bare soils, as well as soils with vegetation cover, based on the synergic use of radar and optical data.
Abstract: Soil moisture mapping at a high spatial resolution is very important for several applications in hydrology, agriculture and risk assessment. With the arrival of the free Sentinel data at high spatial and temporal resolutions, the development of soil moisture products that can better meet the needs of users is now possible. In this context, the main objective of the present paper is to develop an operational approach for soil moisture mapping in agricultural areas at a high spatial resolution over bare soils, as well as soils with vegetation cover. The developed approach is based on the synergic use of radar and optical data. A neural network technique was used to develop an operational method for soil moisture estimates. Three inversion SAR (Synthetic Aperture Radar) configurations were tested: (1) VV polarization; (2) VH polarization; and (3) both VV and VH polarization, all in addition to the NDVI information extracted from optical images. Neural networks were developed and validated using synthetic and real databases. The results showed that the use of a priori information on the soil moisture condition increases the precision of the soil moisture estimates. They also showed that VV alone provides better accuracy on the soil moisture estimates than VH alone, and that the use of both VV and VH provides results similar to VV alone. In conclusion, the soil moisture could be estimated in agricultural areas with an accuracy of approximately 5 vol % (volumetric unit expressed in percent). Better results were obtained for soil with a moderate surface roughness (for root mean square surface heights between 1 and 3 cm). The developed approach could be applied for agricultural plots with an NDVI lower than 0.75.
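
A minimal sketch of inversion configuration (3) (VV + VH + NDVI as inputs to a small neural network); the synthetic data only mimic the broad trend of backscatter increasing with moisture and are not the paper's database:

```python
# Sketch: small neural network inverting (VV, VH, NDVI) -> soil moisture.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(4)
sm = rng.uniform(5, 35, 300)                     # vol. % soil moisture
vv = -15 + 0.25 * sm + rng.normal(0, 1.0, 300)   # dB, wetter -> brighter
vh = -22 + 0.20 * sm + rng.normal(0, 1.2, 300)
ndvi = rng.uniform(0.1, 0.75, 300)               # within the method's range
X = np.column_stack([vv, vh, ndvi])

nn = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0).fit(X, sm)
err = np.sqrt(np.mean((nn.predict(X) - sm) ** 2))
print(f"fit RMSE: {err:.1f} vol %")
```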

Journal ArticleDOI
TL;DR: An automatic solution to the problem of detecting and counting cars in unmanned aerial vehicle (UAV) images that outperforms the state-of-the-art methods, both in terms of accuracy and computational time.
Abstract: This paper presents an automatic solution to the problem of detecting and counting cars in unmanned aerial vehicle (UAV) images. This is a challenging task given the very high spatial resolution of UAV images (on the order of a few centimetres) and the extremely high level of detail, which require suitable automatic analysis methods. Our proposed method begins by segmenting the input image into small homogeneous regions, which can be used as candidate locations for car detection. Next, a window is extracted around each region, and deep learning is used to mine highly descriptive features from these windows. We use a deep convolutional neural network (CNN) system that is already pre-trained on huge auxiliary data as a feature extraction tool, combined with a linear support vector machine (SVM) classifier to classify regions into “car” and “no-car” classes. The final step is devoted to a fine-tuning procedure which performs morphological dilation to smooth the detected regions and fill any holes. In addition, small isolated regions are analysed further using a few sliding rectangular windows to locate cars more accurately and remove false positives. To evaluate our method, experiments were conducted on a challenging set of real UAV images acquired over an urban area. The experimental results have proven that the proposed method outperforms the state-of-the-art methods, both in terms of accuracy and computational time.
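
A minimal sketch of the pre-trained-CNN-plus-linear-SVM pattern described above; ResNet-18 (recent torchvision API) stands in for the paper's network, and the windows and labels are placeholders:

```python
# Sketch: a CNN as a fixed feature extractor feeding a linear SVM.
# weights=None keeps this runnable offline; pre-trained weights assumed
# in the actual pipeline.
import torch
from torchvision import models
from sklearn.svm import LinearSVC

backbone = models.resnet18(weights=None)
backbone.fc = torch.nn.Identity()        # keep the 512-d feature vector
backbone.eval()

windows = torch.randn(20, 3, 224, 224)   # candidate regions as windows
with torch.no_grad():
    feats = backbone(windows).numpy()    # (20, 512) descriptors

labels = [1] * 10 + [0] * 10             # "car" / "no-car" placeholders
svm = LinearSVC().fit(feats, labels)
print(svm.predict(feats[:3]))
```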

Journal ArticleDOI
TL;DR: The results suggest that crop height determined from the new UAV-based snapshot hyperspectral sensor can improve AGB estimation and is advantageous for mapping applications.
Abstract: Correct estimation of above-ground biomass (AGB) is necessary for accurate crop growth monitoring and yield prediction. We estimated AGB based on images obtained with a snapshot hyperspectral sensor (UHD 185 firefly, Cubert GmbH, Ulm, Baden-Wurttemberg, Germany) mounted on an unmanned aerial vehicle (UAV). The UHD 185 images were used to calculate the crop height and hyperspectral reflectance of winter wheat canopies from hyperspectral and panchromatic images. We constructed several single-parameter models for AGB estimation based on spectral parameters, such as specific bands, spectral indices (e.g., Ratio Vegetation Index (RVI), NDVI, Greenness Index (GI) and Wide Dynamic Range VI (WDRVI)) and crop height and several models combined with spectral parameters and crop height. Comparison with experimental results indicated that incorporating crop height into the models improved the accuracy of AGB estimations (the average AGB is 6.45 t/ha). The estimation accuracy of single-parameter models was low (crop height only: R2 = 0.50, RMSE = 1.62 t/ha, MAE = 1.24 t/ha; R670 only: R2 = 0.54, RMSE = 1.55 t/ha, MAE = 1.23 t/ha; NDVI only: R2 = 0.37, RMSE = 1.81 t/ha, MAE = 1.47 t/ha; partial least squares regression R2 = 0.53, RMSE = 1.69, MAE = 1.20), but accuracy increased when crop height and spectral parameters were combined (partial least squares regression modeling: R2 = 0.78, RMSE = 1.08 t/ha, MAE = 0.83 t/ha; verification: R2 = 0.74, RMSE = 1.20 t/ha, MAE = 0.96 t/ha). Our results suggest that crop height determined from the new UAV-based snapshot hyperspectral sensor can improve AGB estimation and is advantageous for mapping applications. This new method can be used to guide agricultural management.
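
A minimal sketch of the combined model (spectral features plus crop height in a partial least-squares regression) on synthetic data:

```python
# Sketch: PLS regression on spectral features + crop height to predict
# above-ground biomass (t/ha). Data are synthetic stand-ins.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
spectral = rng.uniform(0, 1, size=(60, 10))     # e.g., selected band ratios
height = rng.uniform(0.3, 1.0, size=(60, 1))    # photogrammetric crop height
X = np.hstack([spectral, height])
agb = 6.0 * height[:, 0] + spectral[:, 0] + rng.normal(0, 0.3, 60)

pls = PLSRegression(n_components=3).fit(X, agb)
rmse = mean_squared_error(agb, pls.predict(X).ravel()) ** 0.5
print(f"fit RMSE: {rmse:.2f} t/ha")
```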

Journal ArticleDOI
TL;DR: 3D convolution is used to exploit both the spatial context of neighboring pixels and spectral correlation of neighboring bands, such that spectral distortion when directly applying traditional CNN based SR algorithms to hyperspectral images in band-wise manners is alleviated.
Abstract: Hyperspectral images are well-known for their fine spectral resolution to discriminate different materials. However, their spatial resolution is relatively low due to the trade-off in imaging sensor technologies, resulting in limitations in their applications. Inspired by recent achievements in convolutional neural network (CNN) based super-resolution (SR) for natural images, a novel three-dimensional full CNN (3D-FCNN) is constructed for spatial SR of hyperspectral images in this paper. Specifically, 3D convolution is used to exploit both the spatial context of neighboring pixels and spectral correlation of neighboring bands, such that spectral distortion when directly applying traditional CNN based SR algorithms to hyperspectral images in band-wise manners is alleviated. Furthermore, a sensor-specific mode is designed for the proposed 3D-FCNN such that none of the samples from the target scene are required for training. Fine-tuning by a small number of training samples from the target scene can further improve the performance of such a sensor-specific method. Extensive experimental results on four benchmark datasets from two well-known hyperspectral sensors, namely hyperspectral digital imagery collection experiment (HYDICE) and reflective optics system imaging spectrometer (ROSIS) sensors, demonstrate that our proposed 3D-FCNN outperforms several existing SR methods by ensuring higher quality both in reconstruction and spectral fidelity.

Journal ArticleDOI
TL;DR: Results indicate that systematic and open access Synthetic Aperture Radar (SAR) can help scale information required by food security initiatives and Monitoring, Reporting, and Verification programs.
Abstract: Assessment and monitoring of rice agriculture over large areas has been limited by cloud cover, optical sensor spatial and temporal resolutions, and lack of systematic or open access radar. Dense time series of open access Sentinel-1 C-band data at moderate spatial resolution offers new opportunities for monitoring agriculture. This is especially pertinent in South and Southeast Asia where rice is critical to food security and mostly grown during the rainy seasons when high cloud cover is present. In this research application, 632 time series Sentinel-1A Interferometric Wide images were utilized to map rice extent, crop calendar, inundation, and cropping intensity across Myanmar. An updated (2015) land use land cover map fusing Sentinel-1, Landsat-8 OLI, and PALSAR-2 was integrated and classified using a random forest algorithm. Time series phenological analyses of the dense Sentinel-1 data were then executed to assess rice information across all of Myanmar. The broad land use land cover map identified 186,701 km2 of cropland across Myanmar with a mean out-of-sample kappa of over 90%. A phenological time series analysis refined the cropland class to create a rice mask by extrapolating unique indicators tied to the rice life cycle (dynamic range, inundation, growth stages) from the dense time series Sentinel-1 to map rice paddy characteristics in an automated approach. Analyses show that the harvested rice area was 6,652,111 ha with general (R2 = 0.78) agreement with government census statistics. The outcomes show strong ability to assess and monitor rice production at moderate scales over a large cloud-prone region. In countries such as Myanmar with large populations and governments dependent upon rice production, more robust and transparent monitoring and assessment tools can help support better decision making. These results indicate that systematic and open access Synthetic Aperture Radar (SAR) can help scale information required by food security initiatives and Monitoring, Reporting, and Verification programs.
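
Two of the per-pixel time-series indicators named above can be sketched in a few lines; the VH series and the -20 dB inundation threshold are assumptions:

```python
# Sketch of per-pixel rice indicators: dynamic range of the backscatter
# time series and an inundation flag for very low backscatter (smooth
# open water over flooded paddies). Values and threshold are invented.
import numpy as np

vh_series = np.array([-22.5, -21.0, -14.0, -12.5, -13.0, -16.0, -19.5])  # dB

dynamic_range = vh_series.max() - vh_series.min()   # growth-cycle signal
inundated = vh_series < -20.0                       # flooded-paddy flag
print(dynamic_range, inundated.any())
```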

Journal ArticleDOI
TL;DR: Experimental results show that the proposed convolutional recurrent neural network (CRNN) method provides better classification performance compared to traditional methods and other state-of-the-art deep learning methods for hyperspectral data classification.
Abstract: Deep neural networks, such as convolutional neural networks (CNN) and stacked autoencoders, have recently been successfully used to extract deep features for hyperspectral data classification. Recurrent neural networks (RNN) are another type of neural networks, which are widely used for sequence analysis because they are constructed to extract contextual information from sequences by modeling the dependencies between different time steps. In this paper, we study the ability of RNN for hyperspectral data classification by extracting the contextual information from the data. Specifically, hyperspectral data are treated as spectral sequences, and an RNN is used to model the dependencies between different spectral bands. In addition, we propose to use a convolutional recurrent neural network (CRNN) to learn more discriminative features for hyperspectral data classification. In CRNN, a few convolutional layers are first learned to extract middle-level and locally-invariant features from the input data, and the following recurrent layers are then employed to further extract spectrally-contextual information from the features generated by the convolutional layers. Experimental results on real hyperspectral datasets show that our method provides better classification performance compared to traditional methods and other state-of-the-art deep learning methods for hyperspectral data classification.
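
A minimal sketch of the CRNN idea (1D convolutions over the spectral axis, then a recurrent layer for longer-range spectral context); layer sizes are illustrative:

```python
# Sketch: conv layers extract local band features; a GRU then models the
# spectral sequence. Sizes are illustrative, not the paper's network.
import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_classes=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)
        self.fc = nn.Linear(64, n_classes)

    def forward(self, x):                    # x: (batch, bands) spectra
        h = self.conv(x.unsqueeze(1))        # (batch, 32, bands/2)
        _, hn = self.rnn(h.transpose(1, 2))  # sequence over spectral steps
        return self.fc(hn[-1])

print(CRNN()(torch.randn(4, 200)).shape)     # torch.Size([4, 16])
```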

Journal ArticleDOI
TL;DR: An alternative SMOS product that was developed by INRA and CESBIO is presented, which is much simpler and does not account for corrections associated with the antenna pattern and the complex SMOS viewing angle geometry and considers pixels as homogeneous to avoid uncertainties and errors linked to inconsistent auxiliary datasets.
Abstract: The main goal of the Soil Moisture and Ocean Salinity (SMOS) mission over land surfaces is the production of global maps of soil moisture (SM) and vegetation optical depth (τ) based on multi-angular brightness temperature (TB) measurements at L-band. The operational SMOS Level 2 and Level 3 soil moisture algorithms account for different surface effects, such as vegetation opacity and soil roughness at 4 km resolution, in order to produce global retrievals of SM and τ. In this study, we present an alternative SMOS product that was developed by INRA (Institut National de la Recherche Agronomique) and CESBIO (Centre d’Etudes Spatiales de la BIOsphere). One of the main goals of this SMOS-INRA-CESBIO (SMOS-IC) product is to be as independent as possible from auxiliary data. The SMOS-IC product provides daily SM and τ at the global scale and differs from the operational SMOS Level 3 (SMOSL3) product in the treatment of retrievals over heterogeneous pixels. Specifically, SMOS-IC is much simpler and does not account for corrections associated with the antenna pattern and the complex SMOS viewing angle geometry. It considers pixels as homogeneous to avoid uncertainties and errors linked to inconsistent auxiliary datasets which are used to characterize the pixel heterogeneity in the SMOS L3 algorithm. SMOS-IC also differs from the current SMOSL3 product (Version 300, V300) in the values of the effective vegetation scattering albedo (ω) and soil roughness parameters. An inter-comparison is presented in this study based on the use of ECMWF (European Centre for Medium-Range Weather Forecasts) SM outputs and NDVI (Normalized Difference Vegetation Index) from MODIS (Moderate-Resolution Imaging Spectroradiometer). A six-year (2010–2015) inter-comparison of the SMOS products SMOS-IC and SMOSL3 SM (V300) with ECMWF SM yielded higher correlations and lower ubRMSD (unbiased root mean square difference) for SMOS-IC over most of the pixels. In terms of τ, SMOS-IC τ was found to be better correlated with MODIS NDVI in most regions of the globe, with the exception of the Amazonian basin and the northern mid-latitudes.
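
For reference, the ubRMSD used in the inter-comparison is the RMSD after removing each series' mean; the soil moisture series below are invented:

```python
# Sketch of the unbiased RMSD (ubRMSD): RMSD between mean-removed series,
# i.e., insensitive to a constant bias between the two products.
import numpy as np

def ubrmsd(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    return np.sqrt(np.mean(((x - x.mean()) - (y - y.mean())) ** 2))

sm_smos  = [0.12, 0.18, 0.25, 0.20, 0.15]   # invented SM series (m3/m3)
sm_ecmwf = [0.15, 0.20, 0.28, 0.22, 0.19]
print(f"ubRMSD: {ubrmsd(sm_smos, sm_ecmwf):.3f} m3/m3")
```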

Journal ArticleDOI
TL;DR: An exploratory evaluation of the performance of the newly available Sentinel-2A Multispectral Instrument imagery for mapping water bodies using the image sharpening approach shows that the proposed NDWI-based MNDWI image exhibits higher separability and is more effective for both classification-level and boundary-level final water maps than traditional approaches.
Abstract: This study conducts an exploratory evaluation of the performance of the newly available Sentinel-2A Multispectral Instrument (MSI) imagery for mapping water bodies using the image sharpening approach. Sentinel-2 MSI provides spectral bands with different resolutions, including RGB and Near-Infra-Red (NIR) bands in 10 m and Short-Wavelength InfraRed (SWIR) bands in 20 m, which are closely related to surface water information. It is necessary to define a pan-like band for the Sentinel-2 image sharpening process because the panchromatic band is replaced by four high-resolution (10 m) multi-spectral bands. This study, which aimed at urban surface water extraction, utilised the Normalised Difference Water Index (NDWI) at 10 m resolution as a high-resolution image to sharpen the 20 m SWIR bands. Then, object-level Modified NDWI (MNDWI) mapping and a minimum valley bottom adjustment threshold were applied to extract water maps. The proposed method was compared with conventional sharpening based on the most related band (between the visible spectrum/NIR and SWIR bands) and on the first component of a principal component analysis. Results show that the proposed NDWI-based MNDWI image exhibits higher separability and is more effective for both classification-level and boundary-level final water maps than traditional approaches.
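
For reference, both indices follow directly from their standard definitions (green/NIR for NDWI, green/SWIR for MNDWI); the reflectance values below are illustrative:

```python
# Sketch of the two water indices at the core of the method, from their
# standard definitions. Both are positive over open water.
import numpy as np

def ndwi(green, nir):
    return (green - nir) / (green + nir)

def mndwi(green, swir):
    return (green - swir) / (green + swir)

green, nir, swir = np.array([0.09]), np.array([0.04]), np.array([0.02])
print(ndwi(green, nir), mndwi(green, swir))
```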

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a bidirectional-convolutional long short term memory (Bi-CLSTM) network to automatically learn the spectral-spatial features from hyperspectral images (HSIs).
Abstract: This paper proposes a novel deep learning framework named bidirectional-convolutional long short term memory (Bi-CLSTM) network to automatically learn the spectral-spatial features from hyperspectral images (HSIs). In the network, the issue of spectral feature extraction is considered as a sequence learning problem, and a recurrent connection operator across the spectral domain is used to address it. Meanwhile, inspired by the widely used convolutional neural network (CNN), a convolution operator across the spatial domain is incorporated into the network to extract the spatial feature. In addition, to sufficiently capture the spectral information, a bidirectional recurrent connection is proposed. In the classification phase, the learned features are concatenated into a vector and fed to a Softmax classifier via a fully-connected operator. To validate the effectiveness of the proposed Bi-CLSTM framework, we compare it with six state-of-the-art methods, including the popular 3D-CNN model, on three widely used HSIs (i.e., Indian Pines, Pavia University, and Kennedy Space Center). The obtained results show that Bi-CLSTM can improve the classification performance by almost 1.5% compared to 3D-CNN.

Journal ArticleDOI
TL;DR: This paper proposes assisting avalanche search and rescue operations with UAVs fitted with vision cameras and introduces a pre- processing method to increase the detection rate and a post-processing method based on a Hidden Markov Model to improve the prediction performance of the classifier.
Abstract: Following an avalanche, one of the factors that affect victims’ chance of survival is the speed with which they are located and dug out. Rescue teams use techniques like trained rescue dogs and electronic transceivers to locate victims. However, the resources and time required to deploy rescue teams are major bottlenecks that decrease a victim’s chance of survival. Advances in the field of Unmanned Aerial Vehicles (UAVs) have enabled the use of flying robots equipped with sensors like optical cameras to assess the damage caused by natural or manmade disasters and locate victims in the debris. In this paper, we propose assisting avalanche search and rescue (SAR) operations with UAVs fitted with vision cameras. The sequence of images of the avalanche debris captured by the UAV is processed with a pre-trained Convolutional Neural Network (CNN) to extract discriminative features. A trained linear Support Vector Machine (SVM) is integrated at the top of the CNN to detect objects of interest. Moreover, we introduce a pre-processing method to increase the detection rate and a post-processing method based on a Hidden Markov Model to improve the prediction performance of the classifier. Experimental results conducted on two different datasets at different levels of resolution show that the detection performance increases with resolution, at the cost of increased computation time. They also suggest that a significant decrease in processing time can be achieved thanks to the pre-processing step.

Journal ArticleDOI
Lin Li, Fan Yang, Haihong Zhu, Dalin Li, You Li, Lei Tang 
TL;DR: An improved RANSAC method based on Normal Distribution Transformation (NDT) cells is proposed in this study to avoid spurious planes for 3D point-cloud plane segmentation and is verified on three indoor scenes to validate the suitability of the method.
Abstract: Plane segmentation is a basic task in the automatic reconstruction of indoor and urban environments from unorganized point clouds acquired by laser scanners. As one of the most common plane-segmentation methods, standard Random Sample Consensus (RANSAC) is often used to continually detect planes one after another. However, it suffers from the spurious-plane problem when noise and outliers exist due to the uncertainty of randomly sampling the minimum subset with 3 points. An improved RANSAC method based on Normal Distribution Transformation (NDT) cells is proposed in this study to avoid spurious planes for 3D point-cloud plane segmentation. A planar NDT cell is selected as a minimal sample in each iteration to ensure the correctness of sampling on the same plane surface. The 3D NDT represents the point cloud with a set of NDT cells and models the observed points with a normal distribution within each cell. The geometric appearances of NDT cells are used to classify the NDT cells into planar and non-planar cells. The proposed method is verified on three indoor scenes. The experimental results show that the correctness exceeds 88.5% and the completeness exceeds 85.0%, which indicates that the proposed method identifies more reliable and accurate planes than standard RANSAC. It also executes faster. These results validate the suitability of the method.
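
A minimal sketch of the NDT-cell planarity test (fit a covariance to the cell's points and compare eigenvalues); the 0.05 ratio threshold is an assumption:

```python
# Sketch: classify an NDT cell as planar when the smallest eigenvalue of
# its point covariance is much smaller than the other two, i.e., the
# normal distribution is thin along one axis (the plane normal).
import numpy as np

def is_planar_cell(points, ratio=0.05):
    cov = np.cov(np.asarray(points).T)      # 3x3 covariance of the cell
    w = np.sort(np.linalg.eigvalsh(cov))    # ascending eigenvalues
    return w[0] < ratio * w[1]              # thin along one normal axis

rng = np.random.default_rng(2)
plane_pts = rng.uniform(size=(100, 3)) * [1, 1, 0.01]  # nearly flat cell
print(is_planar_cell(plane_pts))            # True
```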

Journal ArticleDOI
TL;DR: A methodology to process the SAR and optical data in a synergistic fashion and automatically calibrate, mosaic, and integrate these data sets together into seamless, ice-sheet-wide, products is described.
Abstract: Satellite remote sensing data including Landsat-8 (optical), Sentinel-1, and RADARSAT-2 (synthetic aperture radar (SAR) missions) have recently become routinely available for large scale ice velocity mapping of ice sheets in Greenland and Antarctica. These datasets are too large in size to be processed and calibrated manually as done in the past. Here, we describe a methodology to process the SAR and optical data in a synergistic fashion and automatically calibrate, mosaic, and integrate these data sets together into seamless, ice-sheet-wide, products. We employ this approach to produce annual mosaics of ice motion in Antarctica and Greenland with all available data acquired on a particular year. We find that the precision of a Landsat-8 pair is lower than that of its SAR counterpart, but due to the large number of Landsat-8 acquisitions, combined with the high persistency of optical surface features in the Landsat-8 data, we obtain accurate velocity products from Landsat that integrate well with the SAR-derived velocity products. The resulting pool of remote sensing products is a significant advance for observing changes in ice dynamics over the entire ice sheets and their contribution to sea level. In preparation for the next generation sensors, we discuss the implications of the results for the upcoming NASA-ISRO SAR mission (NISAR).

Journal ArticleDOI
TL;DR: This work developed a workflow for predicting the probability of wetland occurrence using a boosted regression tree machine-learning framework applied to digital topographic and EO data, and demonstrates the central role of high-quality topographic variables for modeling wetland distribution at regional scales.
Abstract: Modern advances in cloud computing and machine-learning algorithms are shifting the manner in which Earth-observation (EO) data are used for environmental monitoring, particularly as we settle into the era of free, open-access satellite data streams. Wetland delineation represents a particularly worthy application of this emerging research trend, since wetlands are an ecologically important yet chronically under-represented component of contemporary mapping and monitoring programs, particularly at the regional and national levels. Exploiting Google Earth Engine and R Statistical software, we developed a workflow for predicting the probability of wetland occurrence using a boosted regression tree machine-learning framework applied to digital topographic and EO data. Working in a 13,700 km2 study area in northern Alberta, our best models produced excellent results, with AUC (area under the receiver-operator characteristic curve) values of 0.898 and explained-deviance values of 0.708. Our results demonstrate the central role of high-quality topographic variables for modeling wetland distribution at regional scales. Including optical and/or radar variables into the workflow substantially improved model performance, though optical data performed slightly better. Converting our wetland probability-of-occurrence model into a binary Wet-Dry classification yielded an overall accuracy of 85%, which is virtually identical to that derived from the Alberta Merged Wetland Inventory (AMWI): the contemporary inventory used by the Government of Alberta. However, our workflow contains several key advantages over that used to produce the AMWI, and provides a scalable foundation for province-wide monitoring initiatives.
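
A minimal sketch of the modeling step with gradient-boosted trees and AUC scoring; sklearn stands in here for the authors' R-based boosted-regression-tree workflow, and the data are synthetic:

```python
# Sketch: boosted trees on topographic/EO predictors, scored with AUC.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 6))    # e.g., wetness index, slope, SAR, optical
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 0).astype(int)

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
brt = GradientBoostingClassifier(n_estimators=300, learning_rate=0.05).fit(Xtr, ytr)
print("AUC:", roc_auc_score(yte, brt.predict_proba(Xte)[:, 1]))
```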

Journal ArticleDOI
TL;DR: The precision, accuracy, and stability of the RF, ANN, and SVM models were improved by inclusion of STR sampling, and the RF model is suitable for estimating LAI when sample plots and variation are relatively large and more than one growth period.
Abstract: Leaf area index (LAI) is an important indicator of plant growth and yield that can be monitored by remote sensing. Several models were constructed using datasets derived from SRS and STR sampling methods to determine the optimal model for soybean (multiple strains) LAI inversion for the whole crop growth period and a single growth period. Random forest (RF), artificial neural network (ANN), and support vector machine (SVM) regression models were compared with a partial least-squares regression (PLS) model. The RF model yielded the highest precision, accuracy, and stability with V-R2, SDR2, V-RMSE, and SDRMSE values of 0.741, 0.031, 0.106, and 0.005, respectively, over the whole growth period based on STR sampling. The ANN model had the highest precision, accuracy, and stability (0.452, 0.132, 0.086, and 0.009, respectively) over a single growth phase based on STR sampling. The precision, accuracy, and stability of the RF, ANN, and SVM models were improved by inclusion of STR sampling. The RF model is suitable for estimating LAI when sample plots and variation are relatively large (i.e., the whole growth period or more than one growth period). The ANN model is more appropriate for estimating LAI when sample plots and variation are relatively low (i.e., a single growth period).