scispace - formally typeset
Author

Anastasia Pentari

Bio: Anastasia Pentari is an academic researcher from the University of Crete. The author has contributed to research in topics: Computer science & Medicine. The author has an h-index of 2 and has co-authored 7 publications receiving 59 citations. Previous affiliations of Anastasia Pentari include the Foundation for Research & Technology – Hellas.

Papers
Journal ArticleDOI
12 Sep 2019-Sensors
TL;DR: This paper provides a comprehensive review of deep-learning methods for the enhancement of remote sensing observations, focusing on critical tasks including single and multi-band super-resolution, denoising, restoration, pan-sharpening, and fusion, among others.
Abstract: Deep Learning, and Deep Neural Networks in particular, have established themselves as the new norm in signal and data processing, achieving state-of-the-art performance in image, audio, and natural language understanding. In remote sensing, a large body of research has been devoted to the application of deep learning for typical supervised learning tasks such as classification. Less, yet equally important, effort has also been allocated to addressing the challenges associated with the enhancement of low-quality observations from remote sensing platforms. Addressing such challenges is of paramount importance, both in itself, since high-altitude imaging, environmental conditions, and imaging-system trade-offs lead to low-quality observations, as well as to facilitate subsequent analysis, such as classification and detection. In this paper, we provide a comprehensive review of deep-learning methods for the enhancement of remote sensing observations, focusing on critical tasks including single and multi-band super-resolution, denoising, restoration, pan-sharpening, and fusion, among others. In addition to the detailed analysis and comparison of recently presented approaches, different research avenues which could be explored in the future are also discussed.

95 citations

Journal ArticleDOI
TL;DR: This work considers the encoding of multispectral observations into high-order tensor structures which can naturally capture multi-dimensional dependencies and correlations, and proposes a resource-efficient compression scheme based on quantized low-rank tensor completion.
Abstract: Multispectral sensors constitute a core Earth observation image technology generating massive high-dimensional observations. To address the communication and storage constraints of remote sensing platforms, lossy data compression becomes necessary, but it unavoidably introduces unwanted artifacts. In this work, we consider the encoding of multispectral observations into high-order tensor structures which can naturally capture multi-dimensional dependencies and correlations, and we propose a resource-efficient compression scheme based on quantized low-rank tensor completion. The proposed method is also applicable to the case of missing observations due to environmental conditions, such as cloud cover. To quantify the performance of compression, we consider both typical image quality metrics as well as the impact on state-of-the-art deep learning-based land-cover classification schemes. Experimental analysis on observations from the ESA Sentinel-2 satellite reveals that even minimal compression can have negative effects on classification performance which can be efficiently addressed by our proposed recovery scheme.

6 citations
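The quantized low-rank tensor completion idea above can be illustrated at matrix level: unfold the multispectral cube into a pixels-by-bands matrix, mask out entries lost to compression or cloud cover, and recover them by iterative truncated-SVD projection. This is a toy hard-impute sketch on synthetic data, not the authors' algorithm:

```python
import numpy as np

def lowrank_complete(X, mask, rank=2, n_iter=300):
    """Recover missing entries of a matricized multispectral cube by
    iterative truncated-SVD projection (a simple hard-impute scheme).
    X: 2-D array (e.g. pixels x bands); mask: True where observed."""
    Y = np.where(mask, X, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # best rank-r approximation
        Y = np.where(mask, X, low)                  # keep observed entries fixed
    return Y

# toy example: an exactly rank-2 "cube" unfolded to 60 pixels x 12 bands,
# with roughly 20% of the entries masked as missing
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 12))
mask = rng.random(A.shape) > 0.2
rec = lowrank_complete(A, mask, rank=2)
print(float(np.abs(rec - A).max()))  # max abs error; near zero for exact-rank data
```

The real method additionally quantizes the factors and works on higher-order tensor unfoldings; this sketch only shows the completion step that fills in missing observations.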

Proceedings ArticleDOI
01 Nov 2019
TL;DR: This paper employs a tensor-based structuring of multi-spectral image data and proposes a low-rank tensor completion scheme for efficient image-content compression and recovery, followed by a state-of-the-art convolutional neural network architecture serving the classification task of the employed images.
Abstract: As the field of remote sensing for Earth Observation is rapidly evolving, there is an increasing demand for developing suitable methods to store and transmit the massive amounts of generated data. At the same time, as multiple sensors acquire observations with different dimensions, super-resolution methods come into play to unify the framework for upcoming statistical inference tasks. In this paper, we employ a tensor-based structuring of multi-spectral image data and we propose a low-rank tensor completion scheme for efficient image-content compression and recovery. To address the problem of low-resolution imagery, we further provide a robust algorithmic scheme for super-resolving satellite images, followed by a state-of-the-art convolutional neural network architecture serving the classification task of the employed images. Experimental analysis on real-world observations demonstrates the detrimental effects of image compression on classification, an issue successfully addressed by the proposed recovery and super-resolution schemes.

5 citations

Journal ArticleDOI
TL;DR: In this article, the relative sensitivity of cross recurrence quantification analysis (CRQA) in identifying aberrant functional brain connectivity in patients with neuropsychiatric systemic lupus erythematosus (NPSLE) was assessed, in comparison with conventional static and dynamic bivariate functional connectivity (FC) measures as well as univariate (nodal) RQA.

4 citations

Proceedings ArticleDOI
24 Jan 2021
TL;DR: In this paper, the spatio-temporal interdependence between the electrodes is first modelled by means of graph representations, and then the family of alpha-stable models is employed to fit the distribution of the noisy graph signals and design an appropriate adjacency matrix.
Abstract: As the fields of brain-computer interaction and digital monitoring of mental health are rapidly evolving, there is an increasing demand to improve the signal processing module of such systems. Specifically, the employment of electroencephalogram (EEG) signals is among the best non-invasive modalities for collecting brain signals. However, in practice, the quality of the recorded EEG signals is often deteriorated by impulsive noise, which hinders the accuracy of any decision-making process. Previous methods for denoising EEG signals primarily rely on second-order statistics for the additive noise, which is not a valid assumption when operating in impulsive environments. To alleviate this issue, this work proposes a new method for suppressing the effects of heavy-tailed noise in EEG recordings. To this end, the spatio-temporal interdependence between the electrodes is first modelled by means of graph representations. Then, the family of alpha-stable models is employed to fit the distribution of the noisy graph signals and design an appropriate adjacency matrix. The denoised signals are obtained by iteratively solving a regularized optimization problem based on fractional lower-order moments. Experimental evaluation with real data reveals the improved denoising performance of our algorithm against well-established techniques.

4 citations
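The denoising scheme described above can be sketched as a graph-regularized robust estimator: model electrode interdependence with an adjacency matrix, use a fractional lower order p < 2 in the data-fit term so impulsive samples are downweighted, and solve by iteratively reweighted least squares. A minimal sketch on a hypothetical ring graph, not the paper's exact algorithm:

```python
import numpy as np

def graph_flom_denoise(y, W, p=1.2, lam=0.5, n_iter=50, eps=1e-6):
    """Robust graph-signal denoising: minimize
        sum_i |x_i - y_i|^p + lam * x^T L x
    by iteratively reweighted least squares (IRLS). p < 2 downweights
    impulsive residuals; the Laplacian term enforces smoothness over
    the electrode graph. W: symmetric adjacency matrix."""
    n = len(y)
    L = np.diag(W.sum(axis=1)) - W                   # combinatorial Laplacian
    x = np.linalg.solve(np.eye(n) + 2 * lam * L, y)  # p = 2 (ridge) warm start
    for _ in range(n_iter):
        w = p * np.maximum(np.abs(x - y), eps) ** (p - 2)  # IRLS data weights
        x = np.linalg.solve(np.diag(w) + 2 * lam * L, w * y)
    return x

# hypothetical 8-node ring graph standing in for an electrode layout
n = 8
W = np.zeros((n, n))
for i in range(n):
    W[i, (i + 1) % n] = W[(i + 1) % n, i] = 1.0
y = np.ones(n)
y[3] += 10.0                 # one impulsive outlier on node 3
x = graph_flom_denoise(y, W)
print(x.round(2))            # outlier pulled back toward the smooth signal
```

The paper additionally fits alpha-stable models to design the adjacency matrix itself; here the graph is fixed by hand to keep the sketch self-contained.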


Cited by
Journal ArticleDOI
TL;DR: An overview of the evolution of DL with a focus on image segmentation and object detection in convolutional neural networks (CNN) starts in 2012, when a CNN set new standards in image recognition, and lasts until late 2019.
Abstract: Deep learning (DL) has had a great influence on large parts of science and has increasingly established itself as an adaptive method for new challenges in the field of Earth observation (EO). Nevertheless, the entry barriers for EO researchers are high due to the dense and rapidly developing field mainly driven by advances in computer vision (CV). To lower the barriers for researchers in EO, this review gives an overview of the evolution of DL with a focus on image segmentation and object detection in convolutional neural networks (CNN). The survey starts in 2012, when a CNN set new standards in image recognition, and lasts until late 2019. Thereby, we highlight the connections between the most important CNN architectures and cornerstones coming from CV in order to facilitate the evaluation of modern DL models. Furthermore, we briefly outline the evolution of the most popular DL frameworks and provide a summary of datasets in EO. By discussing well-performing DL architectures on these datasets as well as reflecting on advances made in CV and their impact on future research in EO, we narrow the gap between the reviewed, theoretical concepts from CV and practical application in EO.

191 citations

Journal ArticleDOI
TL;DR: The main finding is that CNNs are in an advanced transition phase from computer vision to EO, and it is argued that in the near future, investigations which analyze object dynamics with CNNs will have a significant impact on EO research.
Abstract: In Earth observation (EO), large-scale land-surface dynamics are traditionally analyzed by investigating aggregated classes. The increase in data with a very high spatial resolution enables investigations on a fine-grained feature level which can help us to better understand the dynamics of land surfaces by taking object dynamics into account. To extract fine-grained features and objects, the most popular deep-learning model for image analysis is commonly used: the convolutional neural network (CNN). In this review, we provide a comprehensive overview of the impact of deep learning on EO applications by reviewing 429 studies on image segmentation and object detection with CNNs. We extensively examine the spatial distribution of study sites, employed sensors, used datasets and CNN architectures, and give a thorough overview of applications in EO which used CNNs. Our main finding is that CNNs are in an advanced transition phase from computer vision to EO. Upon this, we argue that in the near future, investigations which analyze object dynamics with CNNs will have a significant impact on EO research. With a focus on EO applications in this Part II, we complete the methodological review provided in Part I.

99 citations

Proceedings ArticleDOI
14 Jun 2020
TL;DR: The proposed FusAtNet framework achieves the state-of-the-art classification performance, including on the largest HSI-LiDAR dataset available, University of Houston (Data Fusion Contest - 2013), opening new avenues in multimodal feature fusion for classification.
Abstract: With recent advances in sensing, multimodal data is becoming easily available for various applications, especially in remote sensing (RS), where many data types like multispectral imagery (MSI), hyperspectral imagery (HSI), LiDAR etc. are available. Effective fusion of these multisource datasets is becoming important, since these multimodality features have been shown to generate highly accurate land-cover maps. However, fusion in the context of RS is non-trivial considering the redundancy involved in the data and the large domain differences among multiple modalities. In addition, the feature extraction modules for different modalities hardly interact among themselves, which further limits their semantic relatedness. As a remedy, we propose a feature fusion and extraction framework, namely FusAtNet, for collective land-cover classification of HSIs and LiDAR data in this paper. The proposed framework effectively utilizes the HSI modality to generate an attention map using a "self-attention" mechanism that highlights its own spectral features. Similarly, a "cross-attention" approach is simultaneously used to harness the LiDAR-derived attention map that accentuates the spatial features of HSI. These attentive spectral and spatial representations are then explored further along with the original data to obtain modality-specific feature embeddings. The modality-oriented joint spectro-spatial information thus obtained is subsequently utilized to carry out the land-cover classification task. Experimental evaluations on three HSI-LiDAR datasets show that the proposed method achieves the state-of-the-art classification performance, including on the largest HSI-LiDAR dataset available, University of Houston (Data Fusion Contest - 2013), opening new avenues in multimodal feature fusion for classification.

93 citations
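The fusion pattern the abstract describes, a self-attention mask computed from the HSI stream and a cross-attention mask derived from LiDAR, both applied to the HSI features, can be sketched at shape level. All weights and feature maps below are random, hypothetical stand-ins for learned layers, not the FusAtNet implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_mask(feat, Wm):
    """1x1-conv-style attention: project features to one channel and
    squash to (0, 1). feat: (H, W, C), Wm: (C, 1) -> mask (H, W, 1)."""
    return sigmoid(feat @ Wm)

def fuse(hsi_feat, lidar_feat, W_self, W_cross):
    """FusAtNet-style fusion pattern: a self-attention mask from the HSI
    stream highlights spectral features, a cross-attention mask from the
    LiDAR stream accentuates spatial structure, and the two attended maps
    are stacked for a downstream land-cover classifier."""
    m_self = attention_mask(hsi_feat, W_self)      # from HSI itself
    m_cross = attention_mask(lidar_feat, W_cross)  # from the LiDAR stream
    return np.concatenate([hsi_feat * m_self, hsi_feat * m_cross], axis=-1)

rng = np.random.default_rng(1)
hsi = rng.standard_normal((8, 8, 16))    # hypothetical HSI feature map
lidar = rng.standard_normal((8, 8, 4))   # hypothetical LiDAR feature map
out = fuse(hsi, lidar, rng.standard_normal((16, 1)), rng.standard_normal((4, 1)))
print(out.shape)  # (8, 8, 32)
```

Because the masks lie in (0, 1), each attended channel is a soft reweighting of the original HSI features; the actual network learns these projections end to end together with the classifier.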

Journal ArticleDOI
TL;DR: This review focuses on deep learning techniques, such as supervised, unsupervised, and semi-supervised, for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images, and highlights their advantages and disadvantages.
Abstract: Images gathered from different satellites are vastly available these days due to the fast development of remote sensing (RS) technology. These images significantly enhance the data sources of change detection (CD). CD is a technique of recognizing the dissimilarities in images acquired at distinct intervals and is used for numerous applications, such as urban area development, disaster management, land cover object identification, etc. In recent years, deep learning (DL) techniques have been used extensively in change detection processes, where they have achieved great success in practical applications. Some researchers have even claimed that DL approaches outperform traditional approaches and enhance change detection accuracy. Therefore, this review focuses on deep learning techniques, such as supervised, unsupervised, and semi-supervised, for different change detection datasets, such as SAR, multispectral, hyperspectral, VHR, and heterogeneous images; their advantages and disadvantages will be highlighted. In the end, some significant challenges are discussed to understand the context of improvements in change detection datasets and deep learning models. Overall, this review will be beneficial for the future development of CD methods.

72 citations

Journal ArticleDOI
TL;DR: This article highlights several strategies and practical considerations for neural network development that have not yet received much attention in the meteorological community, such as the concept of receptive fields, underutilized meteorological performance measures, and methods for neural network interpretation, such as synthetic experiments and layer-wise relevance propagation.
Abstract: The method of neural networks (aka deep learning) has opened up many new opportunities to utilize remotely sensed images in meteorology. Common applications include image classification, e.g., to determine whether an image contains a tropical cyclone, and image-to-image translation, e.g., to emulate radar imagery for satellites that only have passive channels. However, there are yet many open questions regarding the use of neural networks for working with meteorological images, such as best practices for evaluation, tuning and interpretation. This article highlights several strategies and practical considerations for neural network development that have not yet received much attention in the meteorological community, such as the concept of receptive fields, underutilized meteorological performance measures, and methods for neural network interpretation, such as synthetic experiments and layer-wise relevance propagation. We also consider the process of neural network interpretation as a whole, recognizing it as an iterative meteorologist-driven discovery process that builds on experimental design and hypothesis generation and testing. Finally, while most work on neural network interpretation in meteorology has so far focused on networks for image classification tasks, we expand the focus to also include networks for image-to-image translation.

64 citations
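The receptive-field concept this article highlights has a simple closed-form recurrence for a stack of convolution/pooling layers, which is easy to check in code. A small sketch of the standard formula (not code from the article):

```python
def receptive_field(layers):
    """Theoretical receptive field of a stack of conv/pool layers.
    Each layer is (kernel_size, stride). Standard recurrence:
        jump_l = jump_{l-1} * stride_l
        rf_l   = rf_{l-1} + (kernel_l - 1) * jump_{l-1}
    where 'jump' is the input-pixel distance between adjacent outputs."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf

# three 3x3 convs (stride 1), a 2x2 max-pool (stride 2), then another 3x3 conv
print(receptive_field([(3, 1), (3, 1), (3, 1), (2, 2), (3, 1)]))  # -> 12
```

This makes concrete why stride and depth matter: layers placed after a downsampling step grow the receptive field twice as fast per kernel row, which is one of the practical considerations the article raises for designing networks for meteorological imagery.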