Journal Article

A new fully convolutional neural network for semantic segmentation of polarimetric SAR imagery in complex land cover ecosystem

TL;DR
A new Fully Convolutional Network (FCN) architecture, trainable in an end-to-end scheme and designed specifically for classifying wetland complexes from polarimetric SAR (PolSAR) imagery, is proposed; experiments demonstrate that it outperforms the conventional random forest classifier and state-of-the-art FCNs, both visually and numerically, for wetland mapping.
Abstract
Despite the application of state-of-the-art fully Convolutional Neural Networks (CNNs) for semantic segmentation of very high-resolution optical imagery, their capacity has not yet been thoroughly examined for the classification of Synthetic Aperture Radar (SAR) images. The presence of speckle noise, the absence of efficient feature expression, and the limited availability of labelled SAR samples have hindered the application of the state-of-the-art CNNs for the classification of SAR imagery. This is of great concern for mapping complex land cover ecosystems, such as wetlands, where backscattering/spectrally similar signatures of land cover units further complicate the matter. Accordingly, we propose a new Fully Convolutional Network (FCN) architecture that can be trained in an end-to-end scheme and is specifically designed for the classification of wetland complexes using polarimetric SAR (PolSAR) imagery. The proposed architecture follows an encoder-decoder paradigm, wherein the input data are fed into a stack of convolutional filters (encoder) to extract high-level abstract features and a stack of transposed convolutional filters (decoder) to gradually up-sample the low resolution output to the spatial resolution of the original input image. The proposed network also benefits from recent advances in CNN designs, namely the addition of inception modules and skip connections with residual units. The former component improves multi-scale inference and enriches contextual information, while the latter contributes to the recovery of more detailed information and simplifies optimization. Moreover, an in-depth investigation of the learned features via opening the black box demonstrates that convolutional filters extract discriminative polarimetric features, thus mitigating the limitation of the feature engineering design in PolSAR image processing. Experimental results from full polarimetric RADARSAT-2 imagery illustrate that the proposed network outperforms the conventional random forest classifier and the state-of-the-art FCNs, such as FCN-32s, FCN-16s, FCN-8s, and SegNet, both visually and numerically for wetland mapping.
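As a rough illustration of the encoder-decoder design sketched in the abstract, the following PyTorch snippet wires together convolutional encoder blocks, an inception-style multi-scale block, transposed-convolution upsampling, and additive (residual-style) skip connections. The channel widths, depth, 9-channel PolSAR input, and 10-class output are illustrative assumptions, not the published configuration.

```python
# Minimal sketch of an encoder-decoder FCN with an inception-style block and
# additive skip connections, in the spirit of the architecture described above.
# Channel widths, depths, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """Parallel 1x1 / 3x3 / 5x5 convolutions, concatenated and fused back to
    `channels` feature maps: multi-scale context at a single resolution."""
    def __init__(self, channels):
        super().__init__()
        self.b1 = nn.Conv2d(channels, channels, kernel_size=1)
        self.b3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.b5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.fuse = nn.Sequential(
            nn.Conv2d(3 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.fuse(torch.cat([self.b1(x), self.b3(x), self.b5(x)], dim=1))


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))


class EncoderDecoderFCN(nn.Module):
    """Encoder downsamples by pooling; decoder upsamples with transposed
    convolutions; encoder features are added back through skip connections."""
    def __init__(self, in_channels=9, num_classes=10):
        super().__init__()
        self.enc1 = conv_block(in_channels, 32)
        self.enc2 = conv_block(32, 64)
        self.enc3 = nn.Sequential(conv_block(64, 128), InceptionBlock(128))
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 32)
        self.head = nn.Conv2d(32, num_classes, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)                  # full resolution
        e2 = self.enc2(self.pool(e1))      # 1/2 resolution
        e3 = self.enc3(self.pool(e2))      # 1/4 resolution, multi-scale context
        d2 = self.dec2(self.up2(e3) + e2)  # skip connection from encoder
        d1 = self.dec1(self.up1(d2) + e1)  # skip connection from encoder
        return self.head(d1)               # per-pixel class scores


if __name__ == "__main__":
    # e.g. a 9-channel PolSAR feature stack (coherency-matrix elements) on a
    # 128x128 patch; the channel count is an assumption for illustration.
    x = torch.randn(1, 9, 128, 128)
    print(EncoderDecoderFCN()(x).shape)  # -> torch.Size([1, 10, 128, 128])
```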


Citations
Journal Article

Review on Convolutional Neural Networks (CNN) in vegetation remote sensing

TL;DR: This review introduces the principles of CNNs and distils why they are particularly suitable for vegetation remote sensing, including considerations about spectral resolution, spatial grain, different sensor types, modes of reference data generation, sources of existing reference data, as well as CNN approaches and architectures.
Journal Article

Google Earth Engine for geo-big data applications: A meta-analysis and systematic review

TL;DR: A meta-analysis of recent peer-reviewed GEE articles, focusing on several features including data, sensor type, study area, spatial resolution, application, strategy, and analytical methods, confirmed that GEE has made, and continues to make, substantive progress on global challenges involving the processing of geo-big data.
Journal Article

Self-attention for raw optical Satellite Time Series Classification

TL;DR: This work compares recent deep learning models for crop-type classification on raw and preprocessed Sentinel-2 data and qualitatively shows how self-attention scores focus selectively on a few classification-relevant observations.
Journal Article

An efficient Harris hawks-inspired image segmentation method

TL;DR: An efficient methodology for multilevel segmentation is proposed, using the Harris Hawks Optimization (HHO) algorithm with minimum cross-entropy as the fitness function; it presents an improvement over other segmentation approaches currently used in the literature.
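As a sketch of the fitness side of this approach, the snippet below evaluates a standard minimum cross-entropy objective (Li's criterion, with the threshold-independent term dropped) over candidate threshold sets on a synthetic histogram. Plain random search stands in for the HHO optimizer here, and the data and search budget are made up for illustration.

```python
# Illustrative minimum cross-entropy fitness for multilevel thresholding.
# Random search is a placeholder for the HHO optimizer used in the paper.
import numpy as np

def min_cross_entropy(hist, thresholds, eps=1e-12):
    """Li's minimum cross-entropy objective (lower is better) for a gray-level
    histogram `hist` partitioned by the sorted integer `thresholds`."""
    levels = np.arange(1, hist.size + 1, dtype=float)  # shifted to avoid log(0)
    edges = [0, *sorted(int(t) for t in thresholds), hist.size]
    cost = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        h, g = hist[lo:hi], levels[lo:hi]
        mass, moment = h.sum() + eps, (g * h).sum() + eps
        cost -= moment * np.log(moment / mass)  # -(sum g*h) * log(region mean)
    return cost

rng = np.random.default_rng(0)
# Synthetic trimodal image: three gray-level clusters around 60, 130, 200.
pixels = np.concatenate([rng.normal(m, 12, 10_000) for m in (60, 130, 200)])
hist, _ = np.histogram(pixels.clip(0, 255), bins=256, range=(0, 256))

candidates = rng.integers(1, 256, size=(5000, 2))   # random 2-threshold guesses
best_cost, best_t = min((min_cross_entropy(hist, t), tuple(int(v) for v in t))
                        for t in candidates)
print(best_t, round(best_cost, 2))
```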
Journal Article

Comparison between convolutional neural networks and random forest for local climate zone classification in mega urban areas using Landsat images

TL;DR: This study revealed that the CNN classifier performed particularly well on the specific LCZ classes in which buildings were mixed with trees, or in which buildings or plants were sparsely distributed, providing guidance for future LCZ classification using deep learning.
References
Proceedings Article

Deep Residual Learning for Image Recognition

TL;DR: In this article, the authors propose a residual learning framework that eases the training of networks substantially deeper than those used previously; the approach won 1st place in the ILSVRC 2015 classification task.
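A residual unit of the kind referenced here (and reused in the proposed network's skip connections) can be sketched in a few lines of PyTorch; the two-convolution body and identity shortcut below are a generic pattern, not the exact ResNet block.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """Generic residual unit: the block learns F(x) and outputs F(x) + x,
    so deeper stacks only need to learn a correction to the identity."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels))
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.body(x) + x)  # identity shortcut

# e.g.: ResidualUnit(64)(torch.randn(1, 64, 32, 32)) keeps shape (1, 64, 32, 32)
```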
Journal Article

Random Forests

TL;DR: Internal estimates monitor error, strength, and correlation; these are used to show the response to increasing the number of features used in the forest, and are also applicable to regression.
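The internal (out-of-bag) estimates mentioned in this summary are exposed directly by common implementations; the scikit-learn sketch below, on synthetic data with arbitrary hyperparameters, shows how the OOB error responds as the number of features tried per split is varied.

```python
# Sketch of the out-of-bag error estimate, using scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
for max_features in (2, 5, 10, 20):  # features considered at each split
    rf = RandomForestClassifier(n_estimators=200, max_features=max_features,
                                oob_score=True, random_state=0).fit(X, y)
    print(max_features, round(1.0 - rf.oob_score_, 3))  # internal OOB error estimate
```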
Proceedings Article

Going deeper with convolutions

TL;DR: Inception, the deep convolutional neural network architecture proposed in this paper, achieved a new state of the art for classification and detection in the ImageNet Large-Scale Visual Recognition Challenge 2014 (ILSVRC14).
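For reference, a GoogLeNet-style inception module runs 1x1, 3x3, 5x5, and pooling branches in parallel and concatenates the results; the PyTorch sketch below uses illustrative branch widths rather than the paper's exact configuration.

```python
import torch
import torch.nn as nn

class InceptionModule(nn.Module):
    """GoogLeNet-style module: 1x1, 1x1->3x3, 1x1->5x5, and pool->1x1 branches
    run in parallel and are concatenated. Branch widths here are illustrative."""
    def __init__(self, in_ch, c1, c3, c5, cp):
        super().__init__()
        self.b1 = nn.Conv2d(in_ch, c1, 1)
        self.b3 = nn.Sequential(nn.Conv2d(in_ch, c3 // 2, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c3 // 2, c3, 3, padding=1))
        self.b5 = nn.Sequential(nn.Conv2d(in_ch, c5 // 2, 1), nn.ReLU(inplace=True),
                                nn.Conv2d(c5 // 2, c5, 5, padding=2))
        self.bp = nn.Sequential(nn.MaxPool2d(3, stride=1, padding=1),
                                nn.Conv2d(in_ch, cp, 1))

    def forward(self, x):
        return torch.cat([self.b1(x), self.b3(x), self.b5(x), self.bp(x)], dim=1)

# e.g. InceptionModule(64, 32, 64, 16, 16) maps 64 -> 32+64+16+16 = 128 channels.
```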
Book

Deep Learning

TL;DR: Deep learning, as described in this book, is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.
Proceedings Article

Fully convolutional networks for semantic segmentation

TL;DR: The key insight is to build “fully convolutional” networks that take input of arbitrary size and produce correspondingly sized output with efficient inference and learning.
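The arbitrary-input-size property follows from using only convolutions, pooling, and upsampling, with 1x1 convolutions in place of fully connected layers; the toy PyTorch sketch below (placeholder layer sizes and class count) makes this concrete.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Only convolutions, pooling, and a transposed convolution: the network
    accepts any input size and returns a per-pixel score map at the input
    resolution (exactly so, in this stride-4 toy, for sizes divisible by 4).
    Layer sizes and the class count are placeholders."""
    def __init__(self, in_ch=3, num_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                        # 1/2 resolution
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2))                        # 1/4 resolution
        self.score = nn.Conv2d(32, num_classes, 1)  # 1x1 conv instead of FC layers
        self.upsample = nn.ConvTranspose2d(num_classes, num_classes,
                                           kernel_size=4, stride=4)

    def forward(self, x):
        return self.upsample(self.score(self.features(x)))

for size in (64, 96, 256):  # arbitrary input sizes, correspondingly sized outputs
    out = TinyFCN()(torch.randn(1, 3, size, size))
    print(size, tuple(out.shape))
```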