Bio: Ruben Fernandez-Beltran is an academic researcher at James I University. The author has contributed to research on topics including computer science and probabilistic latent semantic analysis, has an h-index of 14, and has co-authored 48 publications receiving 760 citations.
TL;DR: A CNN model extension is developed that redefines the concept of capsule units to become spectral–spatial units specialized in classifying remotely sensed HSI data and is able to provide competitive advantages in terms of both classification accuracy and computational time.
Abstract: Convolutional neural networks (CNNs) have recently exhibited excellent performance in hyperspectral image classification tasks. However, straightforward CNN-based network architectures still face obstacles when effectively exploiting the relationships between hyperspectral imaging (HSI) features in the spectral–spatial domain, which is a key factor when dealing with the high level of complexity present in remotely sensed HSI data. Although deeper architectures try to mitigate these limitations, they also face challenges with the convergence of the network parameters, which eventually limits the classification performance under highly demanding scenarios. In this paper, we propose a new CNN architecture based on spectral–spatial capsule networks in order to achieve highly accurate classification of HSIs while significantly reducing the network design complexity. Specifically, based on Hinton’s capsule networks, we develop a CNN model extension that redefines the concept of capsule units to become spectral–spatial units specialized in classifying remotely sensed HSI data. The proposed model is composed of several building blocks, called spectral–spatial capsules, which are able to learn HSI spectral–spatial features considering their corresponding spatial positions in the scene, their associated spectral signatures, and also their possible transformations. Our experiments, conducted using five well-known HSI data sets and several state-of-the-art classification methods, reveal that our HSI classification approach based on spectral–spatial capsules is able to provide competitive advantages in terms of both classification accuracy and computational time.
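The capsule units the paper builds on represent features as vectors whose length encodes activation strength; the key non-linearity is the "squash" function from Hinton's capsule networks. A minimal, dependency-free sketch of that function (not the authors' implementation; the spectral–spatial capsule specifics are omitted):

```python
import math

def squash(vec):
    """Capsule 'squash' non-linearity (Sabour, Frosst & Hinton, 2017):
    shrinks short vectors toward zero and pushes long vectors toward
    unit length, so a capsule's output length can be read as an
    activation probability while its direction encodes the feature."""
    norm_sq = sum(x * x for x in vec)
    norm = math.sqrt(norm_sq)
    if norm == 0.0:
        return [0.0] * len(vec)
    scale = norm_sq / (1.0 + norm_sq) / norm
    return [scale * x for x in vec]

weak = squash([0.1, 0.0])    # stays close to zero
strong = squash([10.0, 0.0]) # length approaches (but never reaches) 1
```

In a spectral–spatial capsule, such a vector would summarize a feature together with its pose (position, spectral signature, transformation), rather than a single scalar activation.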
TL;DR: A new deep CNN architecture specially designed for the HSI data is presented to improve the spectral–spatial features uncovered by the convolutional filters of the network and is able to provide competitive advantages over the state-of-the-art HSI classification methods.
Abstract: Convolutional neural networks (CNNs) exhibit good performance in image processing tasks, establishing themselves as the current state of the art among deep learning methods. However, the intrinsic complexity of remotely sensed hyperspectral images (HSIs) still limits the performance of many CNN models. The high dimensionality of the HSI data, together with the underlying redundancy and noise, often makes standard CNN approaches unable to generalize discriminative spectral–spatial features. Moreover, deeper CNN architectures also face challenges when additional layers are added, which hampers the network convergence and produces low classification accuracies. In order to mitigate these issues, this paper presents a new deep CNN architecture specially designed for HSI data. Our new model aims to improve the spectral–spatial features uncovered by the convolutional filters of the network. Specifically, the proposed residual-based approach gradually increases the feature map dimension at all convolutional layers, grouped in pyramidal bottleneck residual blocks, in order to involve more locations as the network depth increases while balancing the workload among all units, preserving the time complexity per layer. It can be seen as a pyramid, where the deeper the blocks, the more feature maps can be extracted. Therefore, the diversity of high-level spectral–spatial attributes can be gradually increased across layers to enhance the performance of the proposed network with the HSI data. Our experiments, conducted using four well-known HSI data sets and 10 different classification techniques, reveal that our newly developed HSI pyramidal residual model is able to provide competitive advantages (in terms of both classification accuracy and computational time) over the state-of-the-art HSI classification methods.
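The pyramidal widening idea, where each residual block adds a few feature maps rather than doubling the width at a handful of stages, can be sketched as a simple channel schedule. The `base` and `alpha` values below are hypothetical; the paper's exact widening rule may differ:

```python
def pyramidal_channels(base, alpha, num_blocks):
    """Linearly widen feature maps across residual blocks, in the
    spirit of pyramidal residual networks: block k outputs roughly
    base + alpha * k / num_blocks channels, so capacity grows
    gradually with depth instead of jumping at a few stages."""
    widths = [base]
    for k in range(1, num_blocks + 1):
        widths.append(round(base + alpha * k / num_blocks))
    return widths

print(pyramidal_channels(16, 48, 4))  # [16, 28, 40, 52, 64]
```

The gradual growth is what balances the workload per layer: no single block absorbs a large jump in feature map count.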
TL;DR: A new convolutional generator model is proposed to super-resolve low-resolution (LR) remote sensing data from an unsupervised perspective and is able to initially learn relationships between the LR and HR domains through several convolutional, downsampling, batch normalization, and activation layers.
Abstract: Super-resolution (SR) brings an excellent opportunity to improve a wide range of different remote sensing applications. SR techniques are concerned with increasing the image resolution while providing finer spatial details than those captured by the original acquisition instrument. Therefore, SR techniques are particularly useful for coping with the increasing demand from remote sensing applications requiring fine spatial resolution. Even though different machine learning paradigms have been successfully applied in SR, more research is required to improve the SR process without the need for external high-resolution (HR) training examples. This paper proposes a new convolutional generator model to super-resolve low-resolution (LR) remote sensing data from an unsupervised perspective. That is, the proposed generative network is able to initially learn relationships between the LR and HR domains through several convolutional, downsampling, batch normalization, and activation layers. Then, the data are symmetrically projected to the target resolution while guaranteeing a reconstruction constraint over the LR input image. An experimental comparison is conducted using 12 different unsupervised SR methods over different test images. Our experiments reveal the potential of the proposed approach to improve the resolution of remote sensing imagery.
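The reconstruction constraint mentioned above (the super-resolved output, brought back down to the low resolution, must match the LR input) can be illustrated with a crude sketch. Average pooling stands in for the true degradation model here, which is an assumption; real SR pipelines model the sensor's point-spread function more carefully:

```python
def downsample(img, factor):
    """Average-pool a 2-D image (list of rows) by `factor` -- a crude
    stand-in for the sensor's degradation model."""
    h, w = len(img) // factor, len(img[0]) // factor
    out = []
    for i in range(h):
        row = []
        for j in range(w):
            block = [img[i * factor + di][j * factor + dj]
                     for di in range(factor) for dj in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

def reconstruction_error(sr_img, lr_img, factor):
    """Mean absolute difference between the re-downsampled SR output
    and the original LR input: the constraint that keeps an
    unsupervised SR model faithful to its observation."""
    ds = downsample(sr_img, factor)
    n = len(lr_img) * len(lr_img[0])
    return sum(abs(a - b) for ra, rb in zip(ds, lr_img)
               for a, b in zip(ra, rb)) / n
```

An unsupervised generator would be trained to keep this error near zero while hallucinating plausible fine detail at the HR scale.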
TL;DR: A new unsupervised deep metric learning model, called spatially augmented momentum contrast (SauMoCo), which has been specially designed to characterize unlabeled RS scenes is presented, able to substantially enhance the discrimination ability among complex land cover categories.
Abstract: Convolutional neural networks (CNNs) have achieved great success when characterizing remote sensing (RS) images. However, the lack of sufficient annotated data (together with the high complexity of the RS image domain) often makes supervised and transfer learning schemes limited from an operational perspective. Despite the fact that unsupervised methods can potentially relieve these limitations, they are frequently unable to effectively exploit relevant prior knowledge about the RS domain, which may eventually constrain their final performance. In order to address these challenges, this article presents a new unsupervised deep metric learning model, called spatially augmented momentum contrast (SauMoCo), which has been specially designed to characterize unlabeled RS scenes. Based on the first law of geography, the proposed approach defines spatial augmentation criteria to uncover semantic relationships among land cover tiles. Then, a queue of deep embeddings is constructed to enhance the semantic variety of RS tiles within the considered contrastive learning process, where an auxiliary CNN model serves as an updating mechanism. Our experimental comparison, including different state-of-the-art techniques and benchmark RS image archives, reveals that the proposed approach obtains remarkable performance gains when characterizing unlabeled scenes since it is able to substantially enhance the discrimination ability among complex land cover categories. The source codes of this article will be made available to the RS community for reproducible research.
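The "auxiliary CNN model serving as an updating mechanism" follows the momentum-contrast recipe: the key encoder's weights track an exponential moving average of the online (query) encoder's. A one-line sketch over flat parameter lists (real encoders would update tensors layer by layer; the momentum value is a typical choice, not necessarily the paper's):

```python
def momentum_update(key_params, query_params, m=0.999):
    """Momentum-contrast update: the auxiliary (key) encoder is an
    exponential moving average of the query encoder, so the queue of
    deep embeddings it produces stays consistent across training steps."""
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]
```

With m close to 1, the key encoder drifts slowly, which is what allows a long queue of embeddings (here, of spatially augmented land cover tiles) to remain comparable within one contrastive loss.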
TL;DR: This is the first time, to the best of our knowledge, that such a review and analysis of single-frame SR methods has been made in the framework of remote sensing, and the new single-frame SR taxonomy is aimed at shedding light on the classification of the various types of single-frame SR methods.
Abstract: Image acquisition technology is improving very fast from a performance point of view. However, there are physical restrictions that can only be solved using software processing strategies. This is ...
TL;DR: A hybrid spectral CNN (HybridSN) for HSI classification is proposed that reduces model complexity compared to using a 3-D CNN alone; it is compared with state-of-the-art hand-crafted as well as end-to-end deep learning-based methods.
Abstract: Hyperspectral image (HSI) classification is widely used for the analysis of remotely sensed images. Hyperspectral imagery comprises a large number of spectral bands. The convolutional neural network (CNN) is one of the most frequently used deep learning-based methods for visual data processing, and the use of CNNs for HSI classification is also common in recent works. These approaches are mostly based on 2-D CNNs. On the other hand, HSI classification performance is highly dependent on both spatial and spectral information. Very few methods have used the 3-D CNN because of its increased computational complexity. This letter proposes a hybrid spectral CNN (HybridSN) for HSI classification. In general, the HybridSN is a spectral–spatial 3-D CNN followed by a spatial 2-D CNN. The 3-D CNN facilitates the joint spatial–spectral feature representation from a stack of spectral bands. The 2-D CNN on top of the 3-D CNN further learns more abstract spatial representations. Moreover, the use of hybrid CNNs reduces the complexity of the model compared to the use of a 3-D CNN alone. To test the performance of this hybrid approach, very rigorous HSI classification experiments are performed over the Indian Pines, University of Pavia, and Salinas Scene remote sensing data sets. The results are compared with state-of-the-art hand-crafted as well as end-to-end deep learning-based methods. A very satisfactory performance is obtained using the proposed HybridSN for HSI classification. The source code can be found at https://github.com/gokriznastic/HybridSN .
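The 3-D-then-2-D design can be followed purely through shape arithmetic. The patch size (25×25 window, 30 PCA bands), spectral kernel depths (7, 5, 3), and 32 output channels below match the configuration reported in the HybridSN paper, but this is only a shape sketch, not the network itself:

```python
def conv3d_out(spec, h, w, k_spec, k_sp):
    """'Valid' 3-D convolution output size (no padding, stride 1)."""
    return spec - k_spec + 1, h - k_sp + 1, w - k_sp + 1

# Three 3-D conv layers progressively shrink the spectral depth
# (30 -> 24 -> 20 -> 18) and the spatial window (25 -> 19).
shape = (30, 25, 25)
for k_spec in (7, 5, 3):
    shape = conv3d_out(*shape, k_spec, 3)

# The remaining spectral depth is then folded into the channel axis
# (last 3-D layer has 32 filters), so a 2-D conv can take over.
spec, h, w = shape
channels_2d = spec * 32
```

This folding step is the crux of the hybrid design: the 2-D CNN sees one wide feature map per spatial location instead of a spectral cube, which is where the complexity saving over a fully 3-D network comes from.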
TL;DR: In this paper, the authors present a systematic review of the deep learning-based hyperspectral image classification literature and compare several strategies, providing guidelines for future studies on the topic.
Abstract: Hyperspectral image (HSI) classification has become a hot topic in the field of remote sensing. In general, the complex characteristics of hyperspectral data make the accurate classification of such data challenging for traditional machine learning methods. In addition, hyperspectral imaging often deals with an inherently nonlinear relation between the captured spectral information and the corresponding materials. In recent years, deep learning has been recognized as a powerful feature-extraction tool that can effectively address nonlinear problems, and it has been widely used in a number of image processing tasks. Motivated by those successful applications, deep learning has also been introduced to classify HSIs and has demonstrated good performance. This survey paper presents a systematic review of the deep learning-based HSI classification literature and compares several strategies for this topic. Specifically, we first summarize the main challenges of HSI classification that cannot be effectively overcome by traditional machine learning methods, and we introduce the advantages of deep learning in handling these problems. Then, we build a framework that divides the corresponding works into spectral-feature networks, spatial-feature networks, and spectral-spatial-feature networks to systematically review the recent achievements in deep learning-based HSI classification. In addition, considering that available training samples in the remote sensing field are usually very limited while training deep networks requires a large number of samples, we include some strategies to improve classification performance, which can provide guidelines for future studies on this topic. Finally, several representative deep learning-based classification methods are evaluated on real HSIs in our experiments.
TL;DR: A comprehensive review of the current state of the art in DL for HSI classification is presented, analyzing the strengths and weaknesses of the most widely used classifiers in the literature and offering an exhaustive comparison of the discussed techniques.
Abstract: Advances in computing technology have fostered the development of new and powerful deep learning (DL) techniques, which have demonstrated promising results in a wide range of applications. In particular, DL methods have been successfully used to classify remotely sensed data collected by Earth Observation (EO) instruments. Hyperspectral imaging (HSI) is a hot topic in remote sensing data analysis due to the vast amount of information comprised by this kind of image, which allows for a better characterization and exploitation of the Earth surface by combining rich spectral and spatial information. However, HSI poses major challenges for supervised classification methods due to the high dimensionality of the data and the limited availability of training samples. These issues, together with the high intraclass variability (and interclass similarity) often present in HSI data, may hamper the effectiveness of classifiers. In order to address these limitations, several DL-based architectures have been recently developed, exhibiting great potential in HSI data interpretation. This paper provides a comprehensive review of the current state of the art in DL for HSI classification, analyzing the strengths and weaknesses of the most widely used classifiers in the literature. For each discussed method, we provide quantitative results using several well-known and widely used HSI scenes, thus providing an exhaustive comparison of the discussed techniques. The paper concludes with some remarks and hints about future challenges in the application of DL techniques to HSI classification. The source codes of the methods discussed in this paper are available from: https://github.com/mhaut/hyperspectral_deeplearning_review .
TL;DR: A formal classification method for the existing work in deep active learning is provided, along with a comprehensive and systematic overview, to investigate whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capabilities of DL.
Abstract: Active learning (AL) attempts to maximize a model's performance gain while labeling the fewest possible samples. Deep learning (DL), by contrast, is data-hungry: it requires a large supply of data to optimize its massive number of parameters and learn to extract high-quality features. In recent years, the rapid development of internet technology has placed us in an era of information abundance, and the resulting massive volumes of data have driven intense interest in DL and its rapid development. Interest in AL has been comparatively low, mainly because, before the rise of DL, traditional machine learning required relatively few labeled samples, so early AL struggled to demonstrate its value. Although DL has achieved breakthroughs in various fields, much of this success rests on the availability of large existing annotated datasets. Acquiring a large number of high-quality annotations, however, consumes considerable manpower, which is infeasible in fields that require high expertise, such as speech recognition, information extraction, and medical imaging. AL has therefore gradually received the attention it deserves. A natural question is whether AL can be used to reduce the cost of sample annotation while retaining the powerful learning capabilities of DL; this question has given rise to deep active learning (DAL). Although related research has become quite abundant, it has lacked a comprehensive survey of DAL. This article fills that gap: we provide a formal classification of the existing work and a comprehensive, systematic overview. In addition, we analyze and summarize the development of DAL from the perspective of its applications. Finally, we discuss the open problems in DAL and suggest some possible directions for its future development.
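The core DAL acquisition step, picking the unlabeled samples the deep model is least certain about, can be sketched with entropy-based uncertainty sampling, one of the classic AL criteria such surveys cover (the criterion chosen here is illustrative, not the survey's recommendation):

```python
import math

def entropy(probs):
    """Predictive entropy of a class-probability vector:
    highest for a uniform distribution, zero for a confident one."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def select_for_labeling(unlabeled_probs, budget):
    """Rank unlabeled samples by the model's predictive entropy and
    return the indices of the `budget` most uncertain ones -- the
    samples whose labels are expected to help the model most."""
    ranked = sorted(range(len(unlabeled_probs)),
                    key=lambda i: entropy(unlabeled_probs[i]),
                    reverse=True)
    return ranked[:budget]
```

In a DAL loop, the probabilities would come from a deep model's softmax over the unlabeled pool; the selected samples are sent to an annotator, added to the training set, and the model is retrained.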