Author

Eric W. Gill

Other affiliations: St. John's University
Bio: Eric W. Gill is an academic researcher from Memorial University of Newfoundland. The author has contributed to research in topics: Radar & Radar imaging. The author has an h-index of 22 and has co-authored 178 publications receiving 2,103 citations. Previous affiliations of Eric W. Gill include St. John's University.


Papers
Journal ArticleDOI
TL;DR: A two-fold benchmarking scheme for evaluating existing SAR-ATR systems and motivating new system designs is proposed, and a taxonomization methodology for surveying the numerous methods published in the open literature is proposed.
Abstract: The purpose of this paper is to survey and assess the state-of-the-art in automatic target recognition for synthetic aperture radar imagery (SAR-ATR). The aim is not to develop an exhaustive survey of the voluminous literature, but rather to capture in one place the various approaches for implementing the SAR-ATR system. This paper is meant to be as self-contained as possible, and it approaches the SAR-ATR problem from a holistic end-to-end perspective. A brief overview for the breadth of the SAR-ATR challenges is conducted. This is couched in terms of a single-channel SAR, and it is extendable to multi-channel SAR systems. Stages pertinent to the basic SAR-ATR system structure are defined, and the motivations of the requirements and constraints on the system constituents are addressed. For each stage in the SAR-ATR processing chain, a taxonomization methodology for surveying the numerous methods published in the open literature is proposed. Carefully selected works from the literature are presented under the taxa proposed. Novel comparisons, discussions, and comments are pinpointed throughout this paper. A two-fold benchmarking scheme for evaluating existing SAR-ATR systems and motivating new system designs is proposed. The scheme is applied to the works surveyed in this paper. Finally, a discussion is presented in which various interrelated issues, such as standard operating conditions, extended operating conditions, and target-model design, are addressed. This paper is a contribution toward fulfilling an objective of end-to-end SAR-ATR system design.

269 citations

Journal ArticleDOI
TL;DR: This study introduces the first detailed, provincial-scale wetland inventory map of one of the richest Canadian provinces in terms of wetland extent and suggests a paradigm shift from standard static products and approaches toward generating more dynamic, on-demand, large-scale wetland coverage maps through advanced cloud computing resources that simplify access to and processing of the “Geo Big Data.”
Abstract: Wetlands are one of the most important ecosystems that provide a desirable habitat for a great variety of flora and fauna. Wetland mapping and modeling using Earth Observation (EO) data are essential for natural resource management at both regional and national levels. However, accurate wetland mapping is challenging, especially on a large scale, given their heterogeneous and fragmented landscape, as well as the spectral similarity of differing wetland classes. Currently, precise, consistent, and comprehensive wetland inventories on a national- or provincial-scale are lacking globally, with most studies focused on the generation of local-scale maps from limited remote sensing data. Leveraging the Google Earth Engine (GEE) computational power and the availability of high spatial resolution remote sensing data collected by Copernicus Sentinels, this study introduces the first detailed, provincial-scale wetland inventory map of one of the richest Canadian provinces in terms of wetland extent. In particular, multi-year summer Synthetic Aperture Radar (SAR) Sentinel-1 and optical Sentinel-2 data composites were used to identify the spatial distribution of five wetland and three non-wetland classes on the Island of Newfoundland, covering an approximate area of 106,000 km2. The classification results were evaluated using both pixel-based and object-based random forest (RF) classifications implemented on the GEE platform. The results revealed the superiority of the object-based approach relative to the pixel-based classification for wetland mapping. Although the classification using multi-year optical data was more accurate compared to that of SAR, the inclusion of both types of data significantly improved the classification accuracies of wetland classes. 
In particular, an overall accuracy of 88.37% and a Kappa coefficient of 0.85 were achieved with the multi-year summer SAR/optical composite using an object-based RF classification, wherein all wetland and non-wetland classes were correctly identified with accuracies beyond 70% and 90%, respectively. The results suggest a paradigm shift from standard static products and approaches toward generating more dynamic, on-demand, large-scale wetland coverage maps through advanced cloud computing resources that simplify access to and processing of the “Geo Big Data.” In addition, the resulting, much-needed inventory map of Newfoundland is of great interest to, and can be used by, many stakeholders, including federal and provincial governments, municipalities, NGOs, and environmental consultants, to name a few.
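The accuracy metrics reported above (overall accuracy and the Kappa coefficient) are both derived from a confusion matrix. A minimal sketch of how they are computed; the two-class matrix below is illustrative only, not taken from the study:

```python
import numpy as np

# Hypothetical 2-class confusion matrix (rows: reference, cols: predicted);
# the counts are illustrative, not the study's results.
cm = np.array([[86, 14],
               [9, 91]])

def overall_accuracy(cm):
    """Fraction of correctly classified samples (trace over total)."""
    return np.trace(cm) / cm.sum()

def kappa(cm):
    """Cohen's Kappa: agreement corrected for chance."""
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # expected chance agreement
    return (po - pe) / (1 - pe)

print(overall_accuracy(cm))  # 0.885
print(kappa(cm))             # 0.77
```

High overall accuracy with a much lower Kappa would indicate class-imbalance inflation, which is why both metrics are typically reported together.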

188 citations

Journal ArticleDOI
TL;DR: A new Fully Convolutional Network (FCN) architecture that can be trained in an end-to-end scheme and is specifically designed for the classification of wetland complexes using polarimetric SAR (PolSAR) imagery, demonstrating that the proposed network outperforms the conventional random forest classifier and the state-of-the-art FCNs, both visually and numerically for wetland mapping.
Abstract: Despite the application of state-of-the-art fully Convolutional Neural Networks (CNNs) for semantic segmentation of very high-resolution optical imagery, their capacity has not yet been thoroughly examined for the classification of Synthetic Aperture Radar (SAR) images. The presence of speckle noise, the absence of efficient feature expression, and the limited availability of labelled SAR samples have hindered the application of the state-of-the-art CNNs for the classification of SAR imagery. This is of great concern for mapping complex land cover ecosystems, such as wetlands, where backscattering/spectrally similar signatures of land cover units further complicate the matter. Accordingly, we propose a new Fully Convolutional Network (FCN) architecture that can be trained in an end-to-end scheme and is specifically designed for the classification of wetland complexes using polarimetric SAR (PolSAR) imagery. The proposed architecture follows an encoder-decoder paradigm, wherein the input data are fed into a stack of convolutional filters (encoder) to extract high-level abstract features and a stack of transposed convolutional filters (decoder) to gradually up-sample the low resolution output to the spatial resolution of the original input image. The proposed network also benefits from recent advances in CNN designs, namely the addition of inception modules and skip connections with residual units. The former component improves multi-scale inference and enriches contextual information, while the latter contributes to the recovery of more detailed information and simplifies optimization. Moreover, an in-depth investigation of the learned features via opening the black box demonstrates that convolutional filters extract discriminative polarimetric features, thus mitigating the limitation of the feature engineering design in PolSAR image processing. 
Experimental results from full polarimetric RADARSAT-2 imagery illustrate that the proposed network outperforms the conventional random forest classifier and the state-of-the-art FCNs, such as FCN-32s, FCN-16s, FCN-8s, and SegNet, both visually and numerically for wetland mapping.
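The encoder-decoder resolution bookkeeping described above can be sketched with standard convolution arithmetic. The kernel size, stride, padding, and three-stage depth below are assumptions for illustration, not the paper's exact architecture:

```python
# Stride-2 convolutions in the encoder halve the spatial size; transposed
# convolutions with stride 2 in the decoder double it back to the input size.

def conv_out(size, kernel=3, stride=2, pad=1):
    """Output spatial size of a standard convolution."""
    return (size + 2 * pad - kernel) // stride + 1

def tconv_out(size, kernel=3, stride=2, pad=1, out_pad=1):
    """Output spatial size of a transposed convolution."""
    return (size - 1) * stride - 2 * pad + kernel + out_pad

size = 256                  # e.g. a 256x256 PolSAR patch (assumed)
for _ in range(3):          # three encoder stages
    size = conv_out(size)   # 256 -> 128 -> 64 -> 32
for _ in range(3):          # three decoder stages
    size = tconv_out(size)  # 32 -> 64 -> 128 -> 256
print(size)                 # 256: restored to the input resolution
```

The matching sizes at each stage are what make skip connections possible: the encoder feature map at a given resolution can be added to or concatenated with the decoder feature map of the same resolution.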

151 citations

Journal ArticleDOI
TL;DR: In this article, a pulsed dipole source is introduced into the previously derived electric field expressions for the bistatic reception of vertically polarized radiation scattered from rough surfaces that do not vary with time.
Abstract: An analysis leading to the first- and second-order bistatic cross sections of the ocean surface in the context of high-frequency ground wave radar operation is presented. Initially, a pulsed dipole source is introduced into the previously derived electric field expressions for the bistatic reception of vertically polarized radiation scattered from rough surfaces that do not vary with time. To extend the analysis to the ocean, a time-varying surface is introduced via a three-dimensional Fourier series with two spatial variables and one temporal variable. The surface randomness is accounted for by allowing the Fourier coefficients to be zero-mean Gaussian random variables. Fourier transformation of the autocorrelations of the resulting fields gives the appropriate power spectral densities. The latter are used in the bistatic radar range equation to produce the cross sections. The features of the bistatic case are seen to reduce to the well-known monostatic results when the appropriate geometry is introduced. Illustrative comparisons of monostatic and bistatic reception are presented.
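The surface model described above can be sketched as follows; the notation is assumed for illustration and is not taken verbatim from the paper. The time-varying rough surface is written as a Fourier series with random coefficients,

```latex
\xi(\vec{\rho},t)
  = \sum_{\vec{K}} \sum_{\omega} P_{\vec{K},\omega}\,
    e^{\,i\left(\vec{K}\cdot\vec{\rho} + \omega t\right)},
\qquad
\langle P_{\vec{K},\omega} \rangle = 0,
```

where the coefficients \(P_{\vec{K},\omega}\) are zero-mean Gaussian random variables. The power spectral density of the received field then follows from the Wiener–Khinchin theorem as the Fourier transform of the field autocorrelation, \(\mathcal{S}(\omega_d) = \mathcal{F}_{\tau}\{\langle E(t)\,E^{*}(t+\tau)\rangle\}\), which is what feeds the bistatic radar range equation to yield the cross sections.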

105 citations

Journal ArticleDOI
TL;DR: In this paper, the authors used the Hamburg Shelf Ocean Model (HAMSOM) to estimate the tsunami-induced current velocity at 1 km spatial resolution and 1 s time step.
Abstract: High-frequency (HF) surface wave radars provide the unique capability to continuously monitor the coastal environment far beyond the range of conventional microwave radars. Bragg-resonant backscattering by ocean waves with half the electromagnetic radar wavelength allows ocean surface currents to be measured at distances up to 200 km. When a tsunami propagates from the deep ocean to shallow water, a specific ocean current signature is generated throughout the water column. Due to the long range of an HF radar, it is possible to detect this current signature at the shelf edge. When the shelf edge is about 100 km in front of the coastline, the radar can detect the tsunami about 45 min before it hits the coast, leaving enough time to issue an early warning. As no HF radar measurements of an approaching tsunami exist to date, a simulation study was performed to fix parameters such as the required spatial resolution or the maximum coherent integration time allowed. The simulation involves several steps, starting with the Hamburg Shelf Ocean Model (HAMSOM), which is used to estimate the tsunami-induced current velocity at 1 km spatial resolution and a 1 s time step. This ocean current signal is then superimposed on modelled and measured HF radar backscatter signals using a new modulation technique. After applying conventional HF radar signal processing techniques, the surface current maps contain the rapidly changing tsunami-induced current features, which can be compared to the HAMSOM data. The specific radial tsunami current signatures can clearly be observed in these maps if appropriate spatial and temporal resolutions are used. Based on the entropy of the ocean current maps, a tsunami detection algorithm is described which can be used to issue an automated tsunami warning message.
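The numbers above follow from two textbook relations: the Bragg resonance condition (ocean wavelength equal to half the radar wavelength, with a Doppler offset set by the deep-water dispersion relation) and the shallow-water tsunami phase speed. A minimal sketch; the 13.5 MHz operating frequency and the 140 m shelf depth are assumed example values, not figures from the paper:

```python
import math

C = 3.0e8   # speed of light, m/s
G = 9.81    # gravitational acceleration, m/s^2

def bragg_wavelength(f_radar_hz):
    """Resonant ocean wavelength: half the electromagnetic wavelength."""
    return C / f_radar_hz / 2.0

def bragg_frequency(f_radar_hz):
    """Doppler offset of the first-order Bragg lines in deep water:
    f_B = sqrt(g / (pi * lambda_radar))."""
    lam_radar = C / f_radar_hz
    return math.sqrt(G / (math.pi * lam_radar))

def tsunami_speed(depth_m):
    """Shallow-water phase speed, c = sqrt(g * h)."""
    return math.sqrt(G * depth_m)

f = 13.5e6                                      # assumed HF frequency
print(round(bragg_wavelength(f), 1))            # 11.1 m resonant ocean waves
print(round(bragg_frequency(f), 3))             # 0.375 Hz Bragg Doppler shift
print(round(100e3 / tsunami_speed(140) / 60))   # 45 min over a 140 m deep shelf
```

The last line shows how a ~45 min warning is consistent with a shelf edge 100 km offshore at a plausible shelf depth.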

76 citations


Cited by
Journal ArticleDOI
TL;DR: This article presents face recognition as an application of mathematical modelling: a problem that scarcely clamoured for attention before the computer age but, having surfaced, has involved a wide range of techniques and attracted the attention of some fine minds.
Abstract: Face recognition is a problem that scarcely clamoured for attention before the computer age but, having surfaced, has involved a wide range of techniques and has attracted the attention of some fine minds (David Mumford was a Fields Medallist in 1974). This singular application of mathematical modelling to a messy applied problem of obvious utility and importance, but with no unique solution, is a pretty one to share with students: perhaps, returning to the source of our opening quotation, we may invert Duncan's earlier observation, 'There is an art to find the mind's construction in the face!'.

3,015 citations

01 Jan 2010
TL;DR: A 23-year database of calibrated and validated satellite altimeter measurements is used to investigate global changes in oceanic wind speed and wave height over this period and finds a general global trend of increasing values of wind speed and, to a lesser degree, wave height.
Abstract: Wind speeds over the world’s oceans have increased over the past two decades, as have wave heights. Studies of climate change typically consider measurements or predictions of temperature over extended periods of time. Climate, however, is much more than temperature. Over the oceans, changes in wind speed and the surface gravity waves generated by such winds play an important role. We used a 23-year database of calibrated and validated satellite altimeter measurements to investigate global changes in oceanic wind speed and wave height over this period. We find a general global trend of increasing values of wind speed and, to a lesser degree, wave height, over this period. The rate of increase is greater for extreme events as compared to the mean condition.
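The trend estimate described above amounts to a least-squares linear fit to a multi-decadal time series. A minimal sketch on synthetic data; the series below is fabricated for illustration and is not the altimeter record:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 23-year series of annual-mean wind speed (m/s) with an assumed
# upward trend of 0.02 m/s per year plus noise -- illustrative only.
years = np.arange(1985, 2008)
wind = 7.0 + 0.02 * (years - years[0]) + rng.normal(0.0, 0.1, years.size)

# Least-squares linear trend: polyfit returns [slope, intercept] for degree 1.
slope, intercept = np.polyfit(years, wind, 1)
print(round(slope, 3))   # recovers a positive trend close to the assumed 0.02
```

Comparing such a fit on the full distribution versus on upper percentiles is one simple way to see the paper's finding that extremes increase faster than the mean.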

737 citations

Journal ArticleDOI
TL;DR: This review introduces the principles of CNNs and distils why they are particularly suitable for vegetation remote sensing, including considerations about spectral resolution, spatial grain, different sensor types, modes of reference data generation, sources of existing reference data, as well as CNN approaches and architectures.
Abstract: Identifying and characterizing vascular plants in time and space is required in various disciplines, e.g. in forestry, conservation, and agriculture. Remote sensing has emerged as a key technology revealing both spatial and temporal vegetation patterns. Harnessing the ever-growing streams of remote sensing data for the increasing demands of vegetation assessment and monitoring requires efficient, accurate, and flexible methods for data analysis. In this respect, the use of deep learning methods is trend-setting, enabling high predictive accuracy while learning the relevant data features independently in an end-to-end fashion. Very recently, a series of studies have demonstrated that the deep learning method of Convolutional Neural Networks (CNNs) is very effective at representing spatial patterns, enabling the extraction of a wide array of vegetation properties from remote sensing imagery. This review introduces the principles of CNNs and distils why they are particularly suitable for vegetation remote sensing. The main part synthesizes current trends and developments, including considerations about spectral resolution, spatial grain, different sensor types, modes of reference data generation, sources of existing reference data, as well as CNN approaches and architectures. The literature review showed that CNNs can be applied to various problems, including the detection of individual plants or the pixel-wise segmentation of vegetation classes, while numerous studies have shown that CNNs outperform shallow machine learning methods. Several studies suggest that the ability of CNNs to exploit spatial patterns particularly enhances the value of very high spatial resolution data. The modularity of common deep learning frameworks allows high flexibility in the adaptation of architectures, from which especially multi-modal or multi-temporal applications can benefit.
The increasing availability of techniques for visualizing features learned by CNNs will contribute not only to interpreting such models but also to learning from them, improving our understanding of remotely sensed signals of vegetation. Although CNNs have not been around for long, it seems clear that they will usher in a new era of vegetation remote sensing.

473 citations

01 Jan 2016
Spotlight Synthetic Aperture Radar: Signal Processing Algorithms (abstract not available).

455 citations

Journal ArticleDOI
TL;DR: A meta-analysis of recent peer-reviewed GEE articles, focusing on several features including data, sensor type, study area, spatial resolution, application, strategy, and analytical methods, confirmed that GEE has made, and continues to make, substantive progress on global challenges involving the processing of geo big data.
Abstract: Google Earth Engine (GEE) is a cloud-based geospatial processing platform for large-scale environmental monitoring and analysis. The free-to-use GEE platform provides access to (1) petabytes of publicly available remote sensing imagery and other ready-to-use products with an explorer web app; (2) high-speed parallel processing and machine learning algorithms using Google’s computational infrastructure; and (3) a library of Application Programming Interfaces (APIs) with development environments that support popular coding languages, such as JavaScript and Python. Together, these core features enable users to discover, analyze, and visualize geospatial big data in powerful ways without needing access to supercomputers or specialized coding expertise. The development of GEE has created much enthusiasm and engagement in the remote sensing and geospatial data science fields. Yet, a decade after GEE was launched, its impact on remote sensing and geospatial science has not been carefully explored. Thus, a systematic review of GEE that can provide readers with the “big picture” of its current status and general trends is needed. To this end, a meta-analysis of recent peer-reviewed GEE articles was performed, focusing on several features, including data, sensor type, study area, spatial resolution, application, strategy, and analytical methods. A total of 349 peer-reviewed articles published in 146 different journals between 2010 and October 2019 were reviewed. Publication and geographical distribution trends showed a broad spectrum of applications in environmental analyses at both regional and global scales. Remote sensing datasets were used in 90% of the studies, while 10% of the articles utilized ready-to-use products for analyses. Optical satellite imagery with medium spatial resolution, particularly Landsat data with an archive exceeding 40 years, has been used extensively.
Linear regression and random forest were the most frequently used algorithms for satellite imagery processing. Among ready-to-use products, the normalized difference vegetation index (NDVI) was used in 27% of studies for vegetation, crop, and land cover mapping and drought monitoring. The results of this study confirm that GEE has made, and continues to make, substantive progress on global challenges involving the processing of geo big data.
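The NDVI mentioned above is a simple band ratio, NDVI = (NIR − Red) / (NIR + Red), ranging from −1 to 1, with dense vegetation near the high end. A minimal sketch with illustrative reflectance values (not data from any study in the review):

```python
import numpy as np

# Illustrative red and near-infrared surface reflectances for three pixels:
# dense vegetation, moderate vegetation, and bare soil (assumed values).
red = np.array([0.10, 0.08, 0.30])
nir = np.array([0.50, 0.45, 0.32])

# NDVI = (NIR - Red) / (NIR + Red)
ndvi = (nir - red) / (nir + red)
print(ndvi.round(3))   # [0.667 0.698 0.032]: vegetation high, bare soil near 0
```

The same one-line expression is what GEE's `normalizedDifference` convenience method computes band-wise on imagery.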

438 citations