
Showing papers in "Remote Sensing in 2018"


Journal ArticleDOI
TL;DR: An exponentially increasing trend of SUHI research since 2005, with clear preferences for geographic areas, time of day, seasons, research foci, and platforms/sensors is found, and key potential directions and opportunities for future efforts are proposed.
Abstract: The surface urban heat island (SUHI), which represents the difference in land surface temperature (LST) between urban and neighboring non-urban surfaces, is usually measured using satellite LST data. Over the last few decades, advancements of remote sensing along with spatial science have considerably increased the number and quality of SUHI studies that form the major body of the urban heat island (UHI) literature. This paper provides a systematic review of satellite-based SUHI studies, from their origin in 1972 to the present. We find an exponentially increasing trend of SUHI research since 2005, with clear preferences for geographic areas, time of day, seasons, research foci, and platforms/sensors. The most frequently studied region and time period of research are China and summer daytime, respectively. Nearly two-thirds of the studies focus on the SUHI/LST variability at a local scale. The Landsat Thematic Mapper (TM)/Enhanced Thematic Mapper (ETM+)/Thermal Infrared Sensor (TIRS) and Terra/Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) are the two most commonly-used satellite sensors and account for about 78% of the total publications. We systematically reviewed the main satellites/sensors, methods, key findings, and challenges of SUHI research. Previous studies confirm that the large spatial (local to global scales) and temporal (diurnal, seasonal, and inter-annual) variations of SUHI are driven by a variety of factors such as impervious surface area, vegetation cover, landscape structure, albedo, and climate. However, applications of SUHI research are largely impeded by a series of data and methodological limitations. Lastly, we propose key potential directions and opportunities for future efforts. Besides improving the quality and quantity of LST data, more attention should be focused on understudied regions/cities, methods to examine SUHI intensity, inter-annual variability and long-term trends of SUHI, scaling issues of SUHI, the relationship between surface and subsurface UHIs, and the integration of remote sensing with field observations and numerical modeling.
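By way of illustration, the SUHI intensity defined above reduces to a difference of mean LSTs under an urban mask; the sketch below uses placeholder arrays rather than any study's data.

```python
import numpy as np

# SUHI intensity = mean urban LST minus mean non-urban LST.
# `lst` is a 2-D LST raster in Kelvin; `urban_mask` marks urban pixels.
lst = np.random.normal(300.0, 2.0, (500, 500))   # placeholder LST scene
urban_mask = np.zeros((500, 500), dtype=bool)
urban_mask[200:300, 200:300] = True              # placeholder urban core

suhi_intensity = lst[urban_mask].mean() - lst[~urban_mask].mean()
print(f"SUHI intensity: {suhi_intensity:.2f} K")
```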

443 citations


Journal ArticleDOI
TL;DR: An overview of the existing research and applications of UAS in natural and agricultural ecosystem monitoring is provided in order to identify future directions, applications, developments, and challenges.
Abstract: Environmental monitoring plays a central role in diagnosing climate and management impacts on natural and agricultural systems; enhancing the understanding of hydrological processes; optimizing the allocation and distribution of water resources; and assessing, forecasting, and even preventing natural disasters. Nowadays, most monitoring and data collection systems are based upon a combination of ground-based measurements, manned airborne sensors, and satellite observations. These data are utilized in describing both small- and large-scale processes, but have spatiotemporal constraints inherent to each respective collection system. Bridging the unique spatial and temporal divides that limit current monitoring platforms is key to improving our understanding of environmental systems. In this context, Unmanned Aerial Systems (UAS) have considerable potential to radically improve environmental monitoring. UAS-mounted sensors offer an extraordinary opportunity to bridge the existing gap between field observations and traditional air- and space-borne remote sensing, by providing high spatial detail over relatively large areas in a cost-effective way and an entirely new capacity for enhanced temporal retrieval. In addition to showcasing recent advances in the field, it is also necessary to identify and understand the potential limitations of UAS technology. For these platforms to reach their monitoring potential, a wide spectrum of unresolved issues and application-specific challenges require focused community attention. Indeed, to leverage the full potential of UAS-based approaches, sensing technologies, measurement protocols, postprocessing techniques, retrieval algorithms, and evaluation techniques need to be harmonized. The aim of this paper is to provide an overview of the existing research and applications of UAS in natural and agricultural ecosystem monitoring in order to identify future directions, applications, developments, and challenges.

442 citations


Journal ArticleDOI
TL;DR: This study presents a new global baseline of mangrove extent for 2010, released as the first output of the Global Mangrove Watch (GMW) initiative; it is the first study to apply a globally consistent and automated method for mapping mangroves.
Abstract: This study presents a new global baseline of mangrove extent for 2010 and has been released as the first output of the Global Mangrove Watch (GMW) initiative. This is the first study to apply a globally consistent and automated method for mapping mangroves, identifying a global extent of 137,600 km². The overall accuracy for mangrove extent was 94.0%, with a 99% likelihood that the true value is between 93.6% and 94.5%, using 53,878 accuracy points across 20 sites distributed globally. Using the geographic regions of the Ramsar Convention on Wetlands, Asia has the highest proportion of mangroves with 38.7% of the global total, while Latin America and the Caribbean have 20.3%, Africa has 20.0%, Oceania has 11.9%, North America has 8.4% and the European Overseas Territories have 0.7%. The methodology developed is primarily based on the classification of ALOS PALSAR and Landsat sensor data, where a habitat mask was first generated, within which the classification of mangrove was undertaken using the Extremely Randomized Trees classifier. This new globally consistent baseline will also form the basis of a mangrove monitoring system using JAXA JERS-1 SAR, ALOS PALSAR and ALOS-2 PALSAR-2 radar data to assess mangrove change from 1996 to the present. However, when using the product, users should note that a minimum mapping unit of 1 ha is recommended and that the error increases in regions of disturbance and where narrow strips or smaller fragmented areas of mangroves are present. Artefacts due to cloud cover and the Landsat-7 SLC-off error are also present in some areas, particularly in regions of West Africa, due to the lack of Landsat-5 data and persistent cloud cover. In the future, consideration will be given to the production of a new global baseline based on 10 m Sentinel-2 composites.
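For readers unfamiliar with the Extremely Randomized Trees classifier mentioned above, a minimal scikit-learn sketch follows; the feature table and labels are placeholders, not the GMW training data.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier

# Hypothetical feature table: one row per pixel, columns standing in for
# PALSAR HH/HV backscatter and Landsat surface-reflectance bands.
X = np.random.rand(1000, 8)
y = np.random.randint(0, 2, 1000)   # 1 = mangrove, 0 = other (within habitat mask)

clf = ExtraTreesClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```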

372 citations


Journal ArticleDOI
TL;DR: A framework called Rotation Dense Feature Pyramid Networks (R-DFPN) is proposed that can effectively detect ships in different scenes, including ocean and port.
Abstract: Ship detection has been playing a significant role in the field of remote sensing for a long time, but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection, and the redundancy of the detection region. To solve these problems, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN) which can effectively detect ships in different scenes including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving problems resulting from the narrow width of the ship. Compared with previous multiscale detectors such as Feature Pyramid Network (FPN), DFPN builds high-level semantic feature-maps for all scales by means of dense connections, through which feature propagation is enhanced and feature reuse is encouraged. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multiscale region of interest (ROI) Align for the purpose of maintaining the completeness of the semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on R-DFPN representation has state-of-the-art performance.
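The "minimum circumscribed rectangle" that the rotation anchors regress corresponds to the minimum-area rotated bounding box; a small OpenCV sketch with made-up object points illustrates the geometry (this is not the paper's code).

```python
import numpy as np
import cv2

# Minimum-area (rotated) rectangle around a set of object pixel coordinates.
pts = np.array([[10, 10], [60, 25], [55, 40], [5, 25]], dtype=np.float32)
(cx, cy), (w, h), angle = cv2.minAreaRect(pts)
box = cv2.boxPoints(((cx, cy), (w, h), angle))   # the 4 corner coordinates
print(box)
```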

372 citations


Journal ArticleDOI
TL;DR: This review evaluates the state-of-the-art methods in UAV spectral remote sensing and discusses sensor technology, measurement procedures, geometric processing, and radiometric calibration based on the literature and more than a decade of experimentation.
Abstract: In the last 10 years, development in robotics, computer vision, and sensor technology has provided new spectral remote sensing tools to capture unprecedented ultra-high spatial and high spectral resolution with unmanned aerial vehicles (UAVs). This development has led to a revolution in geospatial data collection, in which remotely sensed data are no longer collected and delivered only by a few specialist providers; instead, a whole diverse community is potentially able to gather geospatial data that fit its needs. However, the diversification of sensing systems and user applications challenges the common application of good practice procedures that ensure the quality of the data. This challenge can only be met by establishing and communicating common procedures that have had demonstrated success in scientific experiments and operational demonstrations. In this review, we evaluate the state-of-the-art methods in UAV spectral remote sensing and discuss sensor technology, measurement procedures, geometric processing, and radiometric calibration based on the literature and more than a decade of experimentation. We follow the 'journey' of the reflected energy from the particle in the environment to its representation as a pixel in a 2D or 2.5D map, or 3D spectral point cloud. Additionally, we reflect on the current revolution in remote sensing, and identify trends, potential opportunities, and limitations.

370 citations


Journal ArticleDOI
TL;DR: Analysis of published literature showed that a total of 300 journal papers were published between 2011 and June 2017 that used GEE in their research, spread across 158 journals; Landsat was the most widely used dataset; it is the biggest component of the GEE data portal.
Abstract: The Google Earth Engine (GEE) portal provides enhanced opportunities for undertaking earth observation studies. Established towards the end of 2010, it provides access to satellite and other ancillary data, cloud computing, and algorithms for processing large amounts of data with relative ease. However, the uptake and usage of the opportunity remains varied and unclear. This study was undertaken to investigate the usage patterns of the Google Earth Engine platform and whether researchers in developing countries were making use of the opportunity. Analysis of published literature showed that a total of 300 journal papers were published between 2011 and June 2017 that used GEE in their research, spread across 158 journals. The highest number of papers were in the journal Remote Sensing, followed by Remote Sensing of Environment. There were also a number of papers in premium journals such as Nature and Science. The application areas were quite varied, ranging from forest and vegetation studies to medical fields such as malaria. Landsat was the most widely used dataset; it is the biggest component of the GEE data portal, with data from the first to the current Landsat series available for use and download. Examination of data also showed that the usage was dominated by institutions based in developed nations, with study sites mainly in developed nations. There were very few studies originating from institutions based in less developed nations and those that targeted less developed nations, particularly in the African continent.
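As a hedged sketch of the kind of GEE usage the study surveys, the snippet below counts Landsat 8 scenes over an arbitrary point with the Earth Engine Python API; the coordinates are illustrative only.

```python
import ee

ee.Initialize()

# Count Landsat 8 Collection 2 Level-2 scenes over a point for one year.
point = ee.Geometry.Point([30.0, -1.95])   # illustrative location
collection = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
              .filterBounds(point)
              .filterDate('2016-01-01', '2016-12-31'))
print('Scenes found:', collection.size().getInfo())
```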

350 citations


Journal ArticleDOI
TL;DR: A segmentation model that combines an image segmentation neural network based on deep residual networks with a guided filter to extract buildings from remote sensing imagery, showing outstanding performance in building extraction from diversified objects in the urban district.
Abstract: Very high resolution (VHR) remote sensing imagery has been used for land cover classification, and the trend is a transition from land-use classification to pixel-level semantic segmentation. Inspired by the recent success of deep learning and the filter method in computer vision, this work provides a segmentation model, which designs an image segmentation neural network based on the deep residual networks and uses a guided filter to extract buildings in remote sensing imagery. Our method includes the following steps: first, the VHR remote sensing imagery is preprocessed and some hand-crafted features are calculated. Second, a designed deep network architecture is trained with the urban district remote sensing image to extract buildings at the pixel level. Third, a guided filter is employed to optimize the classification map produced by deep learning; at the same time, some salt-and-pepper noise is removed. Experimental results based on the Vaihingen and Potsdam datasets demonstrate that our method, which benefits from neural networks and guided filtering, achieves a higher overall accuracy when compared with other machine learning and deep learning methods. The method proposed shows outstanding performance in terms of the building extraction from diversified objects in the urban district.

338 citations


Journal ArticleDOI
TL;DR: A detailed investigation of state-of-the-art deep learning tools for classification of complex wetland classes using multispectral RapidEye optical imagery for wetland mapping in Canada finds InceptionResNetV2 to be consistently superior to all other convnets, suggesting that the integration of Inception and ResNet modules is an efficient architecture for classifying complex remote sensing scenes such as wetlands.
Abstract: Despite recent advances of deep Convolutional Neural Networks (CNNs) in various computer vision tasks, their potential for classification of multispectral remote sensing images has not been thoroughly explored. In particular, the applications of deep CNNs using optical remote sensing data have focused on the classification of very high-resolution aerial and satellite data, owing to the similarity of these data to the large datasets in computer vision. Accordingly, this study presents a detailed investigation of state-of-the-art deep learning tools for classification of complex wetland classes using multispectral RapidEye optical imagery. Specifically, we examine the capacity of seven well-known deep convnets, namely DenseNet121, InceptionV3, VGG16, VGG19, Xception, ResNet50, and InceptionResNetV2, for wetland mapping in Canada. In addition, the classification results obtained from deep CNNs are compared with those based on conventional machine learning tools, including Random Forest and Support Vector Machine, to further evaluate the efficiency of the former to classify wetlands. The results illustrate that the full-training of convnets using five spectral bands outperforms the other strategies for all convnets. InceptionResNetV2, ResNet50, and Xception are distinguished as the top three convnets, providing state-of-the-art classification accuracies of 96.17%, 94.81%, and 93.57%, respectively. The classification accuracies obtained using Support Vector Machine (SVM) and Random Forest (RF) are 74.89% and 76.08%, respectively, considerably inferior relative to CNNs. Importantly, InceptionResNetV2 is consistently found to be superior compared to all other convnets, suggesting the integration of Inception and ResNet modules is an efficient architecture for classifying complex remote sensing scenes such as wetlands.
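A minimal sketch of the best-performing strategy reported (full training on five spectral bands) might look as follows in Keras; the patch size, class count, and training setup are assumptions, not the paper's configuration.

```python
import tensorflow as tf

# Full training (no ImageNet weights) of InceptionResNetV2 on 5-band patches;
# 128x128 patches and 5 wetland classes are assumed values.
model = tf.keras.applications.InceptionResNetV2(
    weights=None, input_shape=(128, 128, 5), classes=5)
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_patches, train_labels, epochs=50)  # data pipeline not shown
```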

296 citations


Journal ArticleDOI
TL;DR: The LT-GEE algorithm represents a faithful translation of the LT code into a platform easily accessible by the broader user community, and is compared against the heritage code (LT-IDL).
Abstract: The LandTrendr (LT) algorithm has been used widely for analysis of change in Landsat spectral time series data, but requires significant pre-processing, data management, and computational resources, and is only accessible to the community in a proprietary programming language (IDL). Here, we introduce LT for the Google Earth Engine (GEE) platform. The GEE platform simplifies pre-processing steps, allowing focus on the translation of the core temporal segmentation algorithm. Temporal segmentation involved a series of repeated random access calls to each pixel’s time series, resulting in a set of breakpoints (“vertices”) that bound straight-line segments. The translation of the algorithm into GEE included both transliteration and code analysis, resulting in improvement and logic error fixes. At six study areas representing diverse land cover types across the U.S., we conducted a direct comparison of the new LT-GEE code against the heritage code (LT-IDL). The algorithms agreed in most cases, and where disagreements occurred, they were largely attributable to logic error fixes in the code translation process. The practical impact of these changes is minimal, as shown by an example of forest disturbance mapping. We conclude that the LT-GEE algorithm represents a faithful translation of the LT code into a platform easily accessible by the broader user community.
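For orientation, LandTrendr is exposed on GEE as a temporal-segmentation algorithm; the hedged sketch below shows one call, assuming `annual_stack` is an ImageCollection with one image per year whose first band is the index to segment (the asset path is a placeholder, and parameter values are illustrative).

```python
import ee

ee.Initialize()

# Placeholder: an annual spectral-index composite stack built elsewhere.
annual_stack = ee.ImageCollection('users/example/annual_ndvi_composites')

lt = ee.Algorithms.TemporalSegmentation.LandTrendr(
    timeSeries=annual_stack,
    maxSegments=6,
    spikeThreshold=0.9,
    recoveryThreshold=0.25,
    pvalThreshold=0.05)
print(lt.bandNames().getInfo())   # vertex array band plus fit statistics
```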

278 citations


Journal ArticleDOI
TL;DR: This review paper investigates literature on current spatiotemporal data fusion methods, categorizes existing methods, discusses the principal laws underlying these methods, summarizes their potential applications, and proposes possible directions for future studies in this field.
Abstract: Satellite time series with high spatial resolution is critical for monitoring land surface dynamics in heterogeneous landscapes. Although remote sensing technologies have experienced rapid development in recent years, data acquired from a single satellite sensor are often unable to satisfy our demand. As a result, integrated use of data from different sensors has become increasingly popular in the past decade. Many spatiotemporal data fusion methods have been developed to produce synthesized images with both high spatial and temporal resolutions from two types of satellite images: frequent coarse-resolution images and sparse fine-resolution images. These methods were designed based on different principles and strategies, and therefore show different strengths and limitations. This diversity makes it difficult for users to choose an appropriate method for their specific applications and data sets. To this end, this review paper investigates literature on current spatiotemporal data fusion methods, categorizes existing methods, discusses the principal laws underlying these methods, summarizes their potential applications, and proposes possible directions for future studies in this field.
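To make the fusion idea concrete, here is a deliberately simplified temporal-difference sketch (not any specific published method): the coarse-scale change between dates is added to the fine image observed at the first date.

```python
import numpy as np

# Arrays are placeholders, assumed co-registered and resampled to one grid.
fine_t0 = np.random.rand(400, 400)     # e.g., Landsat-like image at t0
coarse_t0 = np.random.rand(400, 400)   # e.g., MODIS-like image at t0, upsampled
coarse_t1 = np.random.rand(400, 400)   # e.g., MODIS-like image at t1, upsampled

# Predict the fine-resolution image at t1 from the coarse temporal change.
fine_t1_pred = fine_t0 + (coarse_t1 - coarse_t0)
```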

274 citations


Journal ArticleDOI
TL;DR: A novel three-dimensional (3D) convolutional neural networks (CNN) based method that automatically classifies crops from spatio-temporal remote sensing images that outperformed the other mainstream methods.
Abstract: This study describes a novel three-dimensional (3D) convolutional neural network (CNN) based method that automatically classifies crops from spatio-temporal remote sensing images. First, a 3D kernel is designed according to the structure of multi-spectral multi-temporal remote sensing data. Second, the 3D CNN framework with fine-tuned parameters is designed for training 3D crop samples and learning spatio-temporal discriminative representations, with the full crop growth cycles being preserved. In addition, we introduce an active learning strategy to the CNN model to improve labelling accuracy up to a required threshold with maximum efficiency. Finally, experiments are carried out to test the advantage of the 3D CNN, in comparison to the two-dimensional (2D) CNN and other conventional methods. Our experiments show that the 3D CNN is especially suitable for characterizing the dynamics of crop growth and outperformed the other mainstream methods.
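A compact PyTorch sketch of a 3D CNN over (bands, time, height, width) inputs follows; the layer sizes and class count are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Minimal 3-D CNN: input shape (batch, bands, time, height, width).
model = nn.Sequential(
    nn.Conv3d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool3d(1), nn.Flatten(),
    nn.Linear(32, 6))                    # 6 hypothetical crop classes

x = torch.randn(8, 4, 10, 32, 32)        # 10 dates, 4 bands, 32x32 patches
print(model(x).shape)                    # -> torch.Size([8, 6])
```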

Journal ArticleDOI
TL;DR: Good correlation between annotated and detected symptomatic surface per plant was obtained, meaning slightly symptomatic plants can be efficiently separated from severely attacked plants, and efficiency of simple transfer learning approaches without the need to design an ad-hoc specific feature extractor is demonstrated.
Abstract: Grapevine wood fungal diseases such as esca are among the biggest threats in vineyards nowadays. The lack of very efficient preventive (best results using commercial products report 20% efficiency) and curative means induces huge economic losses. The study presented in this paper is centered around the in-field detection of foliar esca symptoms during summer, exhibiting a typical “striped” pattern. Indeed, in-field disease detection has shown great potential for commercial applications and has been successfully used for other agricultural needs such as yield estimation. Differentiation with foliar symptoms caused by other diseases or abiotic stresses was also considered. Two vineyards from the Bordeaux region (France, Aquitaine) were chosen as the basis for the experiment. Pictures of diseased and healthy vine plants were acquired during summer 2017 and labeled at the leaf scale, resulting in a patch database of around 6000 images (224 × 224 pixels) divided into red cultivar and white cultivar samples. Then, we tackled the classification part of the problem comparing state-of-the-art SIFT encoding and pre-trained deep learning feature extractors for the classification of database patches. In the best case, 91% overall accuracy was obtained using deep features extracted from MobileNet network trained on ImageNet database, demonstrating the efficiency of simple transfer learning approaches without the need to design an ad-hoc specific feature extractor. The third part aimed at disease detection (using bounding boxes) within full plant images. For this purpose, we integrated the deep learning base network within a “one-step” detection network (RetinaNet), allowing us to perform detection queries in real time (approximately six frames per second on GPU). Recall/Precision (RP) and Average Precision (AP) metrics then allowed us to evaluate the performance of the network on a 91-image (plants) validation database. Overall, 90% precision for a 40% recall was obtained while best esca AP was about 70%. Good correlation between annotated and detected symptomatic surface per plant was also obtained, meaning slightly symptomatic plants can be efficiently separated from severely attacked plants.
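The transfer-learning step above (deep features from an ImageNet-pretrained MobileNet feeding a shallow classifier) can be sketched as below; the patch batch and downstream classifier are placeholders, not the study's pipeline.

```python
import numpy as np
import tensorflow as tf

# Frozen ImageNet-pretrained MobileNet as a feature extractor for 224x224 patches.
base = tf.keras.applications.MobileNet(weights='imagenet',
                                       include_top=False, pooling='avg')
patches = np.random.rand(16, 224, 224, 3).astype('float32')   # placeholder patches
feats = base.predict(tf.keras.applications.mobilenet.preprocess_input(patches))
print(feats.shape)   # (16, 1024) feature vectors for a shallow classifier (e.g., SVM)
```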

Journal ArticleDOI
TL;DR: The new spatial technologies, and particularly the Sentinel constellation, are expected to improve the monitoring of cropping practices in the challenging context of food security and better management of agro-environmental issues.
Abstract: For agronomic, environmental, and economic reasons, the need for spatialized information about agricultural practices is expected to rapidly increase. In this context, we reviewed the literature on remote sensing for mapping cropping practices. The reviewed studies were grouped into three categories of practices: crop succession (crop rotation and fallowing), cropping pattern (single tree crop planting pattern, sequential cropping, and intercropping/agroforestry), and cropping techniques (irrigation, soil tillage, harvest and post-harvest practices, crop varieties, and agro-ecological infrastructures). We observed that the majority of the studies were exploratory investigations, tested on a local scale with a high dependence on ground data, and used only one type of remote sensing sensor. Furthermore, to be correctly implemented, most of the methods relied heavily on local knowledge on the management practices, the environment, and the biological material. These limitations point to future research directions, such as the use of land stratification, multi-sensor data combination, and expert knowledge-driven methods. Finally, the new spatial technologies, and particularly the Sentinel constellation, are expected to improve the monitoring of cropping practices in the challenging context of food security and better management of agro-environmental issues.

Journal ArticleDOI
TL;DR: The proposed end-to-end fast dense spectral–spatial convolution (FDSSC) framework for HSI classification achieved state-of-the-art performance compared with existing deep-learning-based methods while significantly reducing the training time.
Abstract: Recent research shows that deep-learning-derived methods based on a deep convolutional neural network have high accuracy when applied to hyperspectral image (HSI) classification, but long training times. To reduce the training time and improve accuracy, in this paper we propose an end-to-end fast dense spectral–spatial convolution (FDSSC) framework for HSI classification. The FDSSC framework uses different convolutional kernel sizes to extract spectral and spatial features separately, and the “valid” convolution method to reduce the high dimensions. Densely-connected structures, in which the input of each convolution consists of the output of all previous convolution layers, were used for deep learning of features, leading to extremely accurate classification. To increase speed and prevent overfitting, the FDSSC framework uses a dynamic learning rate, parametric rectified linear units, batch normalization, and dropout layers. These attributes enable the FDSSC framework to achieve high accuracy within as few as 80 epochs. The experimental results show that with the Indian Pines, Kennedy Space Center, and University of Pavia datasets, the proposed FDSSC framework achieved state-of-the-art performance compared with existing deep-learning-based methods while significantly reducing the training time.
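The densely connected spectral-convolution idea can be sketched in PyTorch as follows; the kernel sizes and channel counts are assumptions rather than the exact FDSSC configuration.

```python
import torch
import torch.nn as nn

# Each layer consumes the concatenation of all previous outputs; (7,1,1)
# kernels convolve along the spectral axis only.
class DenseSpectralBlock(nn.Module):
    def __init__(self, in_ch, growth=12, layers=3):
        super().__init__()
        self.convs = nn.ModuleList(
            nn.Sequential(
                nn.BatchNorm3d(in_ch + i * growth), nn.PReLU(),
                nn.Conv3d(in_ch + i * growth, growth, (7, 1, 1), padding=(3, 0, 0)))
            for i in range(layers))

    def forward(self, x):
        feats = [x]
        for conv in self.convs:
            feats.append(conv(torch.cat(feats, dim=1)))
        return torch.cat(feats, dim=1)

x = torch.randn(2, 1, 100, 9, 9)           # (batch, 1, bands, height, width)
print(DenseSpectralBlock(1)(x).shape)      # -> torch.Size([2, 37, 100, 9, 9])
```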

Journal ArticleDOI
TL;DR: This paper demonstrates how much the accuracy improves as the number of GCP points increases, as well as the importance of an even distribution, and how the ground sample distance of a project relates to the maximum accuracy that can be achieved.
Abstract: The geometrical accuracy of georeferenced digital surface models (DSMs) obtained from images captured by micro-UAVs and processed by using structure from motion (SfM) photogrammetry depends on several factors, including flight design, camera quality, camera calibration, SfM algorithms and georeferencing strategy. This paper focusses on the critical role of the number and location of ground control points (GCPs) used during the georeferencing stage. A challenging case study involving an area of 1200+ ha, 100+ GCPs and 2500+ photos was used. Three thousand four hundred and sixty-five different combinations of control points were introduced in the bundle adjustment, whilst the accuracy of the model was evaluated using both control points and independent check points. The analysis demonstrates how much the accuracy improves as the number of GCPs increases, as well as the importance of an even distribution; how much the accuracy is overestimated when it is quantified only using control points rather than independent check points; and how the ground sample distance (GSD) of a project relates to the maximum accuracy that can be achieved.
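The point about independent check points boils down to computing error statistics on points withheld from the bundle adjustment; a minimal sketch with synthetic elevations follows.

```python
import numpy as np

# RMSE on independent check points (not used as GCPs); values are placeholders.
z_measured = np.random.rand(50)                          # surveyed checkpoint elevations
z_model = z_measured + np.random.normal(0, 0.05, 50)     # DSM-derived elevations

rmse = np.sqrt(np.mean((z_model - z_measured) ** 2))
print(f"Checkpoint RMSE: {rmse:.3f} m")
```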

Journal ArticleDOI
TL;DR: An overview of the techniques developed in the past decade for hyperspectral image noise reduction is provided, and the performance of these techniques is discussed by applying them as a preprocessing step to improve a hyperspectral image analysis task, i.e., classification.
Abstract: Hyperspectral remote sensing is based on measuring the scattered and reflected electromagnetic signals from the Earth’s surface emitted by the Sun. The received radiance at the sensor is usually degraded by atmospheric effects and instrumental (sensor) noises, which include thermal (Johnson) noise, quantization noise, and shot (photon) noise. Noise reduction is often considered as a preprocessing step for hyperspectral imagery. In the past decade, hyperspectral noise reduction techniques have evolved substantially from two-dimensional bandwise techniques to three-dimensional ones, and a variety of low-rank methods have been put forward to improve the signal-to-noise ratio of the observed data. Despite all the developments and advances, there is a lack of a comprehensive overview of these techniques and their impact on hyperspectral imagery applications. In this paper, we address the following two main issues: (1) providing an overview of the techniques developed in the past decade for hyperspectral image noise reduction; (2) discussing the performance of these techniques by applying them as a preprocessing step to improve a hyperspectral image analysis task, i.e., classification. Additionally, this paper discusses the hyperspectral image modeling and denoising challenges. Furthermore, different noise types that exist in hyperspectral images have been described. The denoising experiments have confirmed the advantages of low-rank denoising techniques compared to the other denoising techniques in terms of signal-to-noise ratio and spectral angle distance. In the classification experiments, classification accuracies have improved when denoising techniques have been applied as a preprocessing step.
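As a baseline illustration of the low-rank idea, the sketch below unfolds an HSI cube to a pixels-by-bands matrix and truncates its SVD; the cube and rank are placeholders, and published low-rank methods are considerably more elaborate.

```python
import numpy as np

# Unfold the cube to (pixels, bands), keep the top-r singular components, fold back.
cube = np.random.rand(100, 100, 50)            # rows x cols x bands
X = cube.reshape(-1, cube.shape[2])            # unfolded (Casorati) matrix

U, s, Vt = np.linalg.svd(X, full_matrices=False)
r = 5                                          # assumed signal rank
X_denoised = (U[:, :r] * s[:r]) @ Vt[:r, :]
cube_denoised = X_denoised.reshape(cube.shape)
```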

Journal ArticleDOI
TL;DR: The potential of Sentinel-1 VV and VH backscatter and their ratio VH/VV, the cross ratio (CR), to monitor crop conditions is assessed and demonstrates the large potential of microwave indices for vegetation monitoring of VWC and phenology.
Abstract: Crop monitoring is of great importance for, e.g., yield prediction and increasing water use efficiency. The Copernicus Sentinel-1 mission operated by the European Space Agency provides the opportunity to monitor Earth’s surface using radar at high spatial and temporal resolution. Sentinel-1’s Synthetic Aperture Radar provides co- and cross-polarized backscatter, enabling the calculation of microwave indices. In this study, we assess the potential of Sentinel-1 VV and VH backscatter and their ratio VH/VV, the cross ratio (CR), to monitor crop conditions. A quantitative assessment is provided based on in situ reference data of vegetation variables for different crops under varying meteorological conditions. Vegetation Water Content (VWC), biomass, Leaf Area Index (LAI) and height were measured in situ for oilseed-rape, corn and winter cereals at different fields during two growing seasons. To quantify the sensitivity of backscatter and microwave indices to vegetation dynamics, linear and exponential models and machine learning methods have been applied to the Sentinel-1 data and in situ measurements. Using an exponential model, the CR can account for 87% and 63% of the variability in VWC for corn and winter cereals, respectively. In oilseed-rape, the coefficient of determination (R²) is lower (R² = 0.34) due to the large difference in VWC between the two growing seasons and changes in vegetation structure that affect backscatter. Findings from the Random Forest analysis, which uses backscatter, microwave indices and soil moisture as input variables, show that CR is by and large the most important variable for estimating VWC. This study demonstrates, based on a quantitative analysis, the large potential of microwave indices for vegetation monitoring of VWC and phenology.
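Since backscatter is usually handled in decibels, the cross ratio reduces to a band subtraction; a tiny sketch with placeholder values follows.

```python
import numpy as np

# CR = VH/VV in linear units, i.e., a simple subtraction in dB.
vv_db = np.random.normal(-11.0, 1.5, 1000)   # VV backscatter time series [dB]
vh_db = np.random.normal(-17.0, 1.5, 1000)   # VH backscatter time series [dB]

cr_db = vh_db - vv_db                        # cross ratio expressed in dB
```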

Journal ArticleDOI
TL;DR: A robust and innovative automatic object-based image analysis (OBIA) algorithm was developed on Unmanned Aerial Vehicle images to design early post-emergence prescription maps, which could help farmers in decision-making to optimize crop management by rationalization of the herbicide application.
Abstract: Accurate and timely detection of weeds between and within crop rows in the early growth stage is considered one of the main challenges in site-specific weed management (SSWM). In this context, a robust and innovative automatic object-based image analysis (OBIA) algorithm was developed on Unmanned Aerial Vehicle (UAV) images to design early post-emergence prescription maps. This novel algorithm is the first major research contribution. The OBIA algorithm combined Digital Surface Models (DSMs), orthomosaics and machine learning techniques (Random Forest, RF). OBIA-based plant heights were accurately estimated and used as a feature in the automatic sample selection by the RF classifier; this was the second research contribution. RF randomly selected a class-balanced training set, obtained the optimum feature values and classified the image, requiring no manual training, making this procedure time-efficient and more accurate, since it removes errors due to a subjective manual task. The ability to discriminate weeds was significantly affected by the imagery spatial resolution and weed density, making the use of higher spatial resolution images more suitable. Finally, prescription maps for in-season post-emergence SSWM were created based on the weed maps, the third research contribution, which could help farmers in decision-making to optimize crop management by rationalization of the herbicide application. The short time involved in the process (image capture and analysis) would allow timely weed control during critical periods, crucial for preventing yield loss.

Journal ArticleDOI
TL;DR: Data that have been processed to allow analysis with a minimum of additional user effort are referred to as Analysis Ready Data (ARD); the USGS has processed its Landsat archive over the United States into ARD to significantly reduce the pre-processing burden on users.
Abstract: Data that have been processed to allow analysis with a minimum of additional user effort are often referred to as Analysis Ready Data (ARD). The ability to perform large scale Landsat analysis relies on the ability to access observations that are geometrically and radiometrically consistent, and have had non-target features (clouds) and poor quality observations flagged so that they can be excluded. The United States Geological Survey (USGS) has processed all of the Landsat 4 and 5 Thematic Mapper (TM), Landsat 7 Enhanced Thematic Mapper Plus (ETM+), Landsat 8 Operational Land Imager (OLI) and Thermal Infrared Sensor (TIRS) archive over the conterminous United States (CONUS), Alaska, and Hawaii, into Landsat ARD. The ARD are available to significantly reduce the burden of pre-processing on users of Landsat data. Provision of pre-prepared ARD is intended to make it easier for users to produce Landsat-based maps of land cover and land-cover change and other derived geophysical and biophysical products. The ARD are provided as tiled, georegistered, top of atmosphere and atmospherically corrected products defined in a common equal area projection, accompanied by spatially explicit quality assessment information, and appropriate metadata to enable further processing while retaining traceability of data provenance.

Journal ArticleDOI
TL;DR: This study compares past and future hyperspectral sensors’ applications with those of Sentinel-2 systems and puts forward resolution enhancement techniques in order to broaden the range of hyperspectral uses.
Abstract: In the last few decades, researchers have developed a plethora of hyperspectral Earth Observation (EO) remote sensing techniques, analyses and applications. While hyperspectral exploratory sensors are demonstrating their potential, Sentinel-2 multispectral satellite remote sensing is now providing free, open, global and systematic high-resolution visible and infrared imagery at a short revisit time. Its recent launch suggests potential synergies between multi- and hyper-spectral data. This study, therefore, reviews 20 years of research and applications in satellite hyperspectral remote sensing through the analysis of Earth observation hyperspectral sensors’ publications that cover the Sentinel-2 spectrum range: Hyperion, TianGong-1, PRISMA, HISUI, EnMAP, Shalom, HyspIRI and HypXIM. More specifically, this study (i) compares past and future hyperspectral sensors’ applications with Sentinel-2’s and (ii) analyzes the applications’ requirements in terms of spatial and temporal resolutions. Eight main application topics were analyzed, including vegetation, agriculture, soil, geology, urban, land use, water resources and disaster. Medium spatial resolution, long revisit time and low signal-to-noise ratio in the short-wave infrared of some hyperspectral sensors were highlighted as major limitations for some applications compared to the Sentinel-2 system. However, these constraints mainly concern past hyperspectral sensors, while they will probably be overcome by forthcoming instruments. Therefore, this study puts forward the compatibility of hyperspectral sensors and Sentinel-2 systems for resolution enhancement techniques in order to broaden the range of hyperspectral uses.

Journal ArticleDOI
TL;DR: A better understanding of the capabilities of Sentinel-1 radar images for agricultural land cover mapping through the use of deep learning techniques is provided and it is found that in the near future these RNN-based techniques will play an important role in the analysis of remote sensing time series.
Abstract: The development and improvement of methods to map agricultural land cover are currently major challenges, especially for radar images. This is due to the speckle noise nature of radar, which has led to less intensive use of radar than of optical images. The European Space Agency Sentinel-1 constellation, which recently became operational, is a satellite system providing global coverage of Synthetic Aperture Radar (SAR) with a 6-day revisit period at a high spatial resolution of about 20 m. These data are valuable, as they provide spatial information on agricultural crops. The aim of this paper is to provide a better understanding of the capabilities of Sentinel-1 radar images for agricultural land cover mapping through the use of deep learning techniques. The analysis is carried out on multitemporal Sentinel-1 data over an area in Camargue, France. The data set was processed in order to produce an intensity radar data stack from May 2017 to September 2017. We improved this radar time series dataset by exploiting temporal filtering to reduce noise, while retaining as much as possible the fine structures present in the images. We revealed that even with classical machine learning approaches (K nearest neighbors, random forest, and support vector machines), good classification performance could be achieved, with F-measure/Accuracy greater than 86% and a Kappa coefficient better than 0.82. We found that the results of the two deep recurrent neural network (RNN)-based classifiers clearly outperformed the classical approaches. Finally, our analyses of the Camargue area results show that the same performance was obtained with the two different RNN-based classifiers on the Rice class, which is the most dominant crop of this region, with an F-measure of 96%. These results thus highlight that in the near future these RNN-based techniques will play an important role in the analysis of remote sensing time series.
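A compact PyTorch sketch of an RNN-based per-pixel classifier over a Sentinel-1 time series follows; the sequence length, input features, and class count are assumptions, and this is not the paper's exact network.

```python
import torch
import torch.nn as nn

# GRU over a (batch, time, features) backscatter sequence, one label per pixel.
class GRUClassifier(nn.Module):
    def __init__(self, n_feat=2, hidden=64, n_classes=9):
        super().__init__()
        self.gru = nn.GRU(n_feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):            # x: (batch, time, features)
        _, h = self.gru(x)           # h: (1, batch, hidden), last time step
        return self.head(h.squeeze(0))

x = torch.randn(32, 25, 2)            # 25 dates, VV + VH intensities
print(GRUClassifier()(x).shape)       # -> torch.Size([32, 9])
```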

Journal ArticleDOI
TL;DR: This study introduces the first detailed, provincial-scale wetland inventory map of one of the richest Canadian provinces in terms of wetland extent and suggests a paradigm-shift from standard static products and approaches toward generating more dynamic, on-demand, large- scale wetland coverage maps through advanced cloud computing resources that simplify access to and processing of the “Geo Big Data.”
Abstract: Wetlands are one of the most important ecosystems that provide a desirable habitat for a great variety of flora and fauna. Wetland mapping and modeling using Earth Observation (EO) data are essential for natural resource management at both regional and national levels. However, accurate wetland mapping is challenging, especially on a large scale, given their heterogeneous and fragmented landscape, as well as the spectral similarity of differing wetland classes. Currently, precise, consistent, and comprehensive wetland inventories on a national- or provincial-scale are lacking globally, with most studies focused on the generation of local-scale maps from limited remote sensing data. Leveraging the Google Earth Engine (GEE) computational power and the availability of high spatial resolution remote sensing data collected by Copernicus Sentinels, this study introduces the first detailed, provincial-scale wetland inventory map of one of the richest Canadian provinces in terms of wetland extent. In particular, multi-year summer Synthetic Aperture Radar (SAR) Sentinel-1 and optical Sentinel-2 data composites were used to identify the spatial distribution of five wetland and three non-wetland classes on the Island of Newfoundland, covering an approximate area of 106,000 km². The classification results were evaluated using both pixel-based and object-based random forest (RF) classifications implemented on the GEE platform. The results revealed the superiority of the object-based approach relative to the pixel-based classification for wetland mapping. Although the classification using multi-year optical data was more accurate compared to that of SAR, the inclusion of both types of data significantly improved the classification accuracies of wetland classes. In particular, an overall accuracy of 88.37% and a Kappa coefficient of 0.85 were achieved with the multi-year summer SAR/optical composite using an object-based RF classification, wherein all wetland and non-wetland classes were correctly identified with accuracies beyond 70% and 90%, respectively. The results suggest a paradigm-shift from standard static products and approaches toward generating more dynamic, on-demand, large-scale wetland coverage maps through advanced cloud computing resources that simplify access to and processing of the “Geo Big Data.” In addition, the resulting, much-needed inventory map of Newfoundland is of great interest to, and can be used by, many stakeholders, including federal and provincial governments, municipalities, NGOs, and environmental consultants to name a few.
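On the GEE platform, the pixel-based variant of such a random forest classification can be sketched as follows; the composite image and labeled samples are placeholder assets, not the study's data.

```python
import ee

ee.Initialize()

# Placeholder assets: a multi-band SAR/optical composite and labeled samples.
composite = ee.Image('users/example/s1_s2_summer_composite')
samples = ee.FeatureCollection('users/example/wetland_samples')

training = composite.sampleRegions(collection=samples,
                                   properties=['class'], scale=10)
rf = ee.Classifier.smileRandomForest(numberOfTrees=500).train(
    features=training, classProperty='class',
    inputProperties=composite.bandNames())
classified = composite.classify(rf)
```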

Journal ArticleDOI
TL;DR: It is found that a combination of radar and optical imagery always outperformed a classification based on single-sensor inputs, and that classification performance increased throughout the season until July, when differences between crop types are largest.
Abstract: A timely inventory of agricultural areas and crop types is an essential requirement for ensuring global food security and allowing early crop monitoring practices. Satellite remote sensing has proven to be an increasingly reliable tool for identifying crop types. With the Copernicus program and its Sentinel satellites, a growing source of satellite remote sensing data is publicly available at no charge. Here, we used joint Sentinel-1 radar and Sentinel-2 optical imagery to create a crop map for Belgium. To ensure homogenous radar and optical inputs across the country, Sentinel-1 12-day backscatter mosaics were created after incidence angle normalization, and Sentinel-2 normalized difference vegetation index (NDVI) images were smoothed to yield 10-daily cloud-free mosaics. An optimized random forest classifier predicted the eight crop types with a maximum accuracy of 82% and a kappa coefficient of 0.77. We found that a combination of radar and optical imagery always outperformed a classification based on single-sensor inputs, and that classification performance increased throughout the season until July, when differences between crop types were largest. Furthermore, we showed that the concept of classification confidence derived from the random forest classifier provided insight into the reliability of the predicted class for each pixel, clearly showing that parcel borders have a lower classification confidence. We concluded that the synergistic use of radar and optical data for crop classification led to richer information increasing classification accuracies compared to optical-only classification. Further work should focus on object-level classification and crop monitoring to exploit the rich potential of combined radar and optical observations.
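The classification-confidence idea maps naturally onto the class-membership probabilities a random forest exposes; a scikit-learn sketch with placeholder features follows.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder per-pixel features (e.g., S1 backscatter + NDVI mosaics) and labels.
X = np.random.rand(2000, 20)
y = np.random.randint(0, 8, 2000)        # eight hypothetical crop types

rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
proba = rf.predict_proba(X[:10])
confidence = proba.max(axis=1)           # low values flag unreliable pixels
```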

Journal ArticleDOI
TL;DR: The analysis clearly shows that the PROSAIL model is well suited for the analysis of imaging spectrometer data from future satellite missions and that the model should be integrated in appropriate software tools that are being developed in this context for agricultural applications.
Abstract: Upcoming satellite hyperspectral sensors require powerful and robust methodologies for making optimum use of the rich spectral data. This paper reviews the widely applied coupled PROSPECT and SAIL radiative transfer models (PROSAIL), regarding their suitability for the retrieval of biophysical and biochemical variables in the context of agricultural crop monitoring. Evaluation was carried out using a systematic literature review of 281 scientific publications with regard to their (i) spectral exploitation, (ii) vegetation type analyzed, (iii) variables retrieved, and (iv) choice of retrieval methods. From the analysis, current trends were derived, and problems identified and discussed. Our analysis clearly shows that the PROSAIL model is well suited for the analysis of imaging spectrometer data from future satellite missions and that the model should be integrated in appropriate software tools that are being developed in this context for agricultural applications. The review supports the decision of potential users to employ PROSAIL for their specific data analysis and provides guidelines for choosing between the diverse retrieval techniques.
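One common retrieval technique such reviews cover is look-up-table (LUT) inversion; the sketch below uses a purely hypothetical `forward_model` standing in for a PROSAIL run, with arbitrary variable grids.

```python
import numpy as np

def forward_model(lai, cab):
    """Placeholder for a PROSAIL simulation returning a 10-band spectrum."""
    rng = np.random.default_rng(int(lai * 100 + cab))
    return rng.random(10)

# Simulate spectra over a grid of (LAI, Cab) values, then pick the LUT entry
# closest to the observed spectrum.
lut_vars = [(lai, cab) for lai in np.linspace(0.5, 6, 12)
                       for cab in np.linspace(10, 80, 8)]
lut_spectra = np.array([forward_model(lai, cab) for lai, cab in lut_vars])

observed = np.random.rand(10)                        # placeholder measurement
best = np.argmin(((lut_spectra - observed) ** 2).sum(axis=1))
print("Retrieved (LAI, Cab):", lut_vars[best])
```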

Journal ArticleDOI
TL;DR: To improve the detection ability of infrared small targets in complex backgrounds, a novel method based on non-convex rank approximation minimization joint l2,1 norm (NRAM) was proposed, which shows superiority in background suppression and target enhancement, but also reduces the computational complexity compared with other baselines.
Abstract: To improve the detection ability of infrared small targets in complex backgrounds, a novel method based on non-convex rank approximation minimization joint l2,1 norm (NRAM) was proposed. Due to the defects of the nuclear norm and l1 norm, the state-of-the-art infrared image-patch (IPI) model usually leaves background residuals in the target image. To fix this problem, a non-convex, tighter rank surrogate and weighted l1 norm are instead utilized, which can suppress the background better while preserving the target efficiently. Considering that many state-of-the-art methods are still unable to fully suppress sparse strong edges, the structured l2,1 norm was introduced to wipe out the strong residuals. Furthermore, with the help of exploiting the structured norm and tighter rank surrogate, the proposed model was more robust when facing various complex or blurry scenes. To solve this non-convex model, an efficient optimization algorithm based on alternating direction method of multipliers (ADMM) plus difference of convex (DC) programming was designed. Extensive experimental results illustrate that the proposed method not only shows superiority in background suppression and target enhancement, but also reduces the computational complexity compared with other baselines.
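The low-rank update at the heart of such ADMM solvers is singular-value thresholding; the sketch below shows the convex (nuclear-norm) version, which NRAM replaces with a tighter non-convex surrogate.

```python
import numpy as np

def svt(M, tau):
    """Soft-threshold the singular values of M by tau."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Stacked image patches (background + sparse targets); sizes are arbitrary.
patch_matrix = np.random.rand(50, 50)
low_rank_background = svt(patch_matrix, tau=1.0)
```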

Journal ArticleDOI
TL;DR: A novel fully automatic learning method using convolutional neural networks (CNNs) with unsupervised training dataset collection is proposed for weed detection from UAV images, with results comparable to those of traditional supervised training data labeling.
Abstract: In recent years, weeds have been responsible for most agricultural yield losses. To deal with this threat, farmers resort to spraying the fields uniformly with herbicides. This method not only requires huge quantities of herbicides but also impacts the environment and human health. One way to reduce the cost and environmental impact is to allocate the right doses of herbicide to the right place and at the right time (precision agriculture). Nowadays, unmanned aerial vehicles (UAVs) are becoming an interesting acquisition system for weed localization and management due to their ability to obtain images of the entire agricultural field with a very high spatial resolution and at a low cost. However, despite significant advances in UAV acquisition systems, the automatic detection of weeds remains a challenging problem because of their strong similarity to the crops. Recently, a deep learning approach has shown impressive results in different complex classification problems. However, this approach needs a certain amount of training data, and creating large agricultural datasets with pixel-level annotations by an expert is an extremely time-consuming task. In this paper, we propose a novel fully automatic learning method using convolutional neural networks (CNNs) with an unsupervised training dataset collection for weed detection from UAV images. The proposed method comprises three main phases. First, we automatically detect the crop rows and use them to identify the inter-row weeds. In the second phase, inter-row weeds are used to constitute the training dataset. Finally, we perform CNNs on this dataset to build a model able to detect the crop and the weeds in the images. The results obtained are comparable to those of traditional supervised training data labeling, with differences in accuracy of 1.5% in the spinach field and 6% in the bean field.

Journal ArticleDOI
TL;DR: An experimental investigation on the repeatability of DSM generation from several blocks acquired with a RTK-enabled drone, where differential corrections were sent from a local master station or a network of Continuously Operating Reference Stations (CORS).
Abstract: High-resolution Digital Surface Models (DSMs) from unmanned aerial vehicle (UAV) imagery with accuracy better than 10 cm open new possibilities in geosciences and engineering. The accuracy of such DSMs depends on the number and distribution of ground control points (GCPs). Placing and measuring GCPs are often the most time-consuming on-site tasks in a UAV project. Safety or accessibility concerns may impede their proper placement, so either costlier techniques must be used, or a less accurate DSM is obtained. Photogrammetric blocks flown by drones with on-board receivers capable of RTK (real-time kinematic) positioning do not need GCPs, as camera stations at exposure time can be determined with cm-level accuracy, and used to georeference the block and control its deformations. This paper presents an experimental investigation on the repeatability of DSM generation from several blocks acquired with an RTK-enabled drone, where differential corrections were sent from a local master station or a network of Continuously Operating Reference Stations (CORS). Four different flights for each RTK mode were executed over a test field, according to the same flight plan. DSM generation was performed with three block control configurations: GCP only, camera stations only, and with camera stations and one GCP. The results show that irrespective of the RTK mode, the first and third configurations provide the best DSM inner consistency. The average range of the elevation discrepancies among the DSMs in such cases is about 6 cm (2.5 GSD, ground sampling distance) for a 10-cm resolution DSM. Using camera stations only, the average range is almost twice as large (4.7 GSD). The average DSM accuracy, which was verified on checkpoints, turned out to be about 2.1 GSD with the first and third configurations, and 3.7 GSD with camera stations only.

Journal ArticleDOI
TL;DR: A novel crop/weed segmentation and mapping framework that processes multispectral images obtained from an unmanned aerial vehicle (UAV) using a deep neural network (DNN) and releases a large sugar beet/weed aerial dataset with expertly guided annotations for further research in the fields of remote sensing, precision agriculture, and agricultural robotics.
Abstract: The ability to automatically monitor agricultural fields is an important capability in precision farming, enabling steps towards more sustainable agriculture. Precise, high-resolution monitoring is a key prerequisite for targeted intervention and the selective application of agro-chemicals. The main goal of this paper is developing a novel crop/weed segmentation and mapping framework that processes multispectral images obtained from an unmanned aerial vehicle (UAV) using a deep neural network (DNN). Most studies on crop/weed semantic segmentation only consider single images for processing and classification. Images taken by UAVs often cover only a few hundred square meters with either color only or color and near-infrared (NIR) channels. Although a map can be generated by processing single segmented images incrementally, this requires additional complex information fusion techniques which struggle to handle high fidelity maps due to their computational costs and problems in ensuring global consistency. Moreover, computing a single large and accurate vegetation map (e.g., crop/weed) using a DNN is non-trivial due to difficulties arising from: (1) limited ground sample distances (GSDs) in high-altitude datasets, (2) sacrificed resolution resulting from downsampling high-fidelity images, and (3) multispectral image alignment. To address these issues, we adopt a sliding window approach that operates on only small portions of multispectral orthomosaic maps (tiles), which are channel-wise aligned and calibrated radiometrically across the entire map. We define the tile size to be the same as that of the DNN input to avoid resolution loss. Compared to our baseline model (i.e., SegNet with 3 channel RGB (red, green, and blue) inputs) yielding an area under the curve (AUC) of [background=0.607, crop=0.681, weed=0.576], our proposed model with 9 input channels achieves [0.839, 0.863, 0.782]. Additionally, we provide an extensive analysis of 20 trained models, both qualitatively and quantitatively, in order to evaluate the effects of varying input channels and tunable network hyperparameters. Furthermore, we release a large sugar beet/weed aerial dataset with expertly guided annotations for further research in the fields of remote sensing, precision agriculture, and agricultural robotics.
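The tiling strategy (windows matched to the DNN input so no resolution is lost) can be sketched as follows; the array sizes and tile size are assumptions, not the paper's values.

```python
import numpy as np

# Tile a multispectral orthomosaic into network-input-sized windows without
# any resampling, preserving the native ground resolution.
ortho = np.random.rand(4096, 4096, 9)     # placeholder 9-channel orthomosaic
tile = 480                                # assumed DNN input size

tiles = [ortho[r:r + tile, c:c + tile]
         for r in range(0, ortho.shape[0] - tile + 1, tile)
         for c in range(0, ortho.shape[1] - tile + 1, tile)]
print(len(tiles), tiles[0].shape)         # 64 tiles of (480, 480, 9)
```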

Journal ArticleDOI
TL;DR: This work provides a segmentation model that was designed based on densely connected convolutional networks (DenseNet) and introduces the local and global attention units and achieves a harmonic mean of precision and recall higher than other machine learning and deep learning methods.
Abstract: The road network plays an important role in the modern traffic system; as development occurs, the road structure changes frequently. Owing to the advancements in the field of high-resolution remote sensing and the success of semantic segmentation using deep learning in computer vision, extracting the road network from high-resolution remote sensing imagery is becoming increasingly popular, and has become a new tool to update the geospatial database. Considering that the training dataset of the deep convolutional neural network will be clipped to a fixed size, which leads to roads running through each sample, and that different kinds of road types have different widths, this work provides a segmentation model that was designed based on densely connected convolutional networks (DenseNet) and introduces the local and global attention units. The aim of this work is to propose a novel road extraction method that can efficiently extract the road network from remote sensing imagery with local and global information. A dataset from Google Earth was used to validate the method, and experiments showed that the proposed deep convolutional neural network can extract the road network accurately and effectively. This method also achieves a harmonic mean of precision and recall higher than other machine learning and deep learning methods.
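For reference, the harmonic mean of precision and recall is the F1 score; a tiny worked example with placeholder counts follows.

```python
# Placeholder true-positive, false-positive, and false-negative counts.
tp, fp, fn = 900, 120, 150
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean
print(f"precision={precision:.3f} recall={recall:.3f} F1={f1:.3f}")
```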

Journal ArticleDOI
TL;DR: The results showed that the AGB models derived from the combination of the Sentinel-2A and the ALOS-2 PALSAR-2 data had the highest accuracy, followed by models using the Sentinel-2A dataset and the ALOS-2 PALSAR-2 dataset.
Abstract: The main objective of this research is to investigate the potential combination of Sentinel-2A and ALOS-2 PALSAR-2 (Advanced Land Observing Satellite-2 Phased Array type L-band Synthetic Aperture Radar-2) imagery for improving the accuracy of Aboveground Biomass (AGB) measurement. According to the current literature, this kind of investigation has rarely been conducted. The Hyrcanian forest area (Iran) is selected as the case study. For this purpose, a total of 149 sample plots for the study area were documented through fieldwork. Using the imagery, three datasets were generated: the Sentinel-2A dataset, the ALOS-2 PALSAR-2 dataset, and the combination of the Sentinel-2A dataset and the ALOS-2 PALSAR-2 dataset (Sentinel-ALOS). Because the accuracy of the AGB estimation is dependent on the method used, in this research, four machine learning techniques were selected and compared, namely Random Forests (RF), Support Vector Regression (SVR), Multi-Layer Perceptron Neural Networks (MLP Neural Nets), and Gaussian Processes (GP). The performance of these AGB models was assessed using the coefficient of determination (R²), the root-mean-square error (RMSE), and the mean absolute error (MAE). The results showed that the AGB models derived from the combination of the Sentinel-2A and the ALOS-2 PALSAR-2 data had the highest accuracy, followed by models using the Sentinel-2A dataset and the ALOS-2 PALSAR-2 dataset. Among the four machine learning models, the SVR model (R² = 0.73, RMSE = 38.68, and MAE = 32.28) had the highest prediction accuracy, followed by the GP model (R² = 0.69, RMSE = 40.11, and MAE = 33.69), the RF model (R² = 0.62, RMSE = 43.13, and MAE = 35.83), and the MLP Neural Nets model (R² = 0.44, RMSE = 64.33, and MAE = 53.74). Overall, the Sentinel-2A imagery provides a reasonable result, while the ALOS-2 PALSAR-2 imagery alone provides a poor result, for forest AGB estimation. The combination of the Sentinel-2A imagery and the ALOS-2 PALSAR-2 imagery improved the estimation accuracy of AGB compared to that of the Sentinel-2A imagery only.
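A hedged scikit-learn sketch of an SVR model with the three reported metrics follows; the features and AGB values are placeholders, not the 149 field plots.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

# Placeholder predictors standing in for combined Sentinel-2A and
# ALOS-2 PALSAR-2 features, and placeholder AGB targets.
X = np.random.rand(149, 12)
y = np.random.uniform(50, 400, 149)

model = SVR(kernel='rbf', C=100).fit(X, y)
pred = model.predict(X)
print("R2:", r2_score(y, pred),
      "RMSE:", np.sqrt(mean_squared_error(y, pred)),
      "MAE:", mean_absolute_error(y, pred))
```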