SciSpace (formerly Typeset)
Author

W.J. Rhea

Bio: W.J. Rhea is an academic researcher at the United States Naval Research Laboratory. The author has contributed to research in the topics of hyperspectral imaging and ocean color, has an h-index of 9, and has co-authored 19 publications receiving 442 citations. Previous affiliations of W.J. Rhea include the California Institute of Technology.

Papers
Journal ArticleDOI
TL;DR: A study of optical scattering and backscattering of particulates at three coastal sites, representing a wide range of optical properties found in U.S. near-shore waters, shows that the measured spectra can be well approximated by a power-law function of wavelength.
Abstract: We present the results of a study of optical scattering and backscattering of particulates for three coastal sites that represent a wide range of optical properties that are found in U.S. near-shore waters. The 6000 scattering and backscattering spectra collected for this study can be well approximated by a power-law function of wavelength. The power-law exponent for particulate scattering changes dramatically from site to site (and within each site) compared with particulate backscattering where all the spectra, except possibly the very clearest waters, cluster around a single wavelength power-law exponent of -0.94. The particulate backscattering-to-scattering ratio (the backscattering ratio) displays a wide range in wavelength dependence. This result is not consistent with scattering models that describe the bulk composition of water as a uniform mix of homogeneous spherical particles with a Junge-like power-law distribution over all particle sizes. Simultaneous particulate organic matter (POM) and particulate inorganic matter (PIM) measurements are available for some of our optical measurements, and site-averaged POM and PIM mass-specific cross sections for scattering and backscattering can be derived. Cross sections for organic and inorganic material differ at each site, and the relative contribution of organic and inorganic material to scattering and backscattering depends differently at each site on the relative amount of material that is present.

126 citations
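The power-law fit at the heart of this study can be sketched in a few lines: fit log backscattering against log wavelength and read the exponent off the slope. This is an illustrative reconstruction, not the authors' code — the wavelengths, the 555 nm reference, and the synthetic spectrum are assumptions; only the -0.94 exponent comes from the abstract.

```python
import numpy as np

def power_law_exponent(wavelengths, bbp, ref_wavelength=555.0):
    """Fit bbp(lam) = bbp(lam0) * (lam / lam0)**gamma by least squares
    in log-log space and return the exponent gamma."""
    x = np.log(np.asarray(wavelengths) / ref_wavelength)
    y = np.log(np.asarray(bbp))
    gamma, _log_b0 = np.polyfit(x, y, 1)  # slope of the log-log line
    return gamma

# Synthetic backscattering spectrum built with the clustered
# exponent of -0.94 reported in the abstract
wl = np.array([440.0, 488.0, 510.0, 555.0, 650.0])   # nm (illustrative bands)
bbp = 0.01 * (wl / 555.0) ** -0.94
print(round(power_law_exponent(wl, bbp), 2))  # -0.94
```

The same fit applied per spectrum is what makes the site-to-site comparison of exponents in the paper possible.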

Journal ArticleDOI
TL;DR: In this paper, the data on chlorophyll content and bathymetry of Lake Tahoe obtained by the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) were compared to concurrent in situ surface and in-water measurements.

85 citations

Journal ArticleDOI
TL;DR: Automatic land cover classification maps were developed from Airborne Hyperspectral Scanner imagery acquired May 8, 2000 over Smith Island, VA, a barrier island in the Virginia Coast Reserve, yielding models useful to natural resource managers at higher spatial resolution than was previously available.
Abstract: Automatic land cover classification maps were developed from Airborne Hyperspectral Scanner (HyMAP) imagery acquired May 8, 2000 over Smith Island, VA, a barrier island in the Virginia Coast Reserve. Both unsupervised and supervised classification approaches were used to create these products to evaluate relative merits and to develop models that would be useful to natural resource managers at higher spatial resolution than has been available previously. Ground surveys made by us in late October and early December 2000 and again in May, August, and October 2001 and May 2002 provided ground truth data for 20 land cover types. Locations of pure land cover types recorded with global positioning system (GPS) data from these surveys were used to extract spectral end-members for training and testing supervised land cover classification models. Unsupervised exploratory models were also developed using spatial-spectral windows and projection pursuit (PP), a class of algorithms suitable for extracting multimodal views of the data. PP projections were clustered by ISODATA to produce an unsupervised classification. Supervised models, which relied on the GPS data, used only spectral inputs because for some categories in particular areas, labeled data consisted of isolated single-pixel waypoints. Both approaches to the classification problem produced consistent results for some categories such as Spartina alterniflora, although there were differences for other categories. Initial models for supervised classification based on 112 HyMAP spectra, labeled in ground surveys, obtained reasonably consistent results for many of the dominant categories, with a few exceptions.

83 citations
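The unsupervised branch of the pipeline above — project each pixel spectrum to a few components, then cluster the projections — can be sketched as follows. This is a hedged stand-in: PCA and k-means substitute here for the paper's projection pursuit and ISODATA, and the default of 20 clusters only echoes the 20 land cover types surveyed; everything else is illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def unsupervised_landcover(cube, n_components=3, n_clusters=20, seed=0):
    """Cluster a hyperspectral cube (height x width x bands) into a
    land-cover map. PCA projects spectra to a few dimensions (a simple
    stand-in for projection pursuit); k-means stands in for ISODATA."""
    h, w, bands = cube.shape
    X = cube.reshape(-1, bands)
    Z = PCA(n_components=n_components).fit_transform(X)
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=seed).fit_predict(Z)
    return labels.reshape(h, w)
```

On a real scene the cluster map would then be labeled against ground-truth waypoints, as the survey data were used in the paper.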

Journal ArticleDOI
TL;DR: A credit assignment approach to decision-based classifier fusion is developed and applied to the problem of land-cover classification from multiseason airborne hyperspectral imagery, using a smoothed estimated reliability measure (SERM) in the output domain of the classifiers.
Abstract: A credit assignment approach to decision-based classifier fusion is developed and applied to the problem of land-cover classification from multiseason airborne hyperspectral imagery. For each input sample, the new method uses a smoothed estimated reliability measure (SERM) in the output domain of the classifiers. SERM requires no additional training beyond that needed to optimize the constituent classifiers in the pool, and its generalization (test) accuracy exceeds that of a number of other extant methods for classifier fusion. Hyperspectral imagery from HyMAP and PROBE2 acquired at three points in the growing season over Smith Island, VA, a barrier island in the Nature Conservancy's Virginia Coast Reserve, serves as the basis for comparing SERM with other approaches.

35 citations

Proceedings ArticleDOI
01 Jun 2005
TL;DR: Manifold learning techniques are investigated for separating the various depth curves in hyperspectral ocean data, thus partitioning the scene into homogeneous areas, and ways in which these techniques may derive scene characteristics such as bathymetry are discussed.
Abstract: A useful technique in hyperspectral data analysis is dimensionality reduction, which replaces the original high dimensional data with low dimensional representations. Usually this is done with linear techniques such as linear mixing or principal components (PCA). While often useful, there is no a priori reason for believing that the data is actually linear. Lately there has been renewed interest in modeling high dimensional data using nonlinear techniques such as manifold learning (ML). In ML, the data is assumed to lie on a low dimensional, possibly curved surface (or manifold). The goal is to discover this manifold and therefore find the best low dimensional representation of the data. Recently, researchers at the Naval Research Lab have begun to model hyperspectral data using ML. We continue this work by applying ML techniques to hyperspectral ocean water data. We focus on water since there are underlying physical reasons for believing that the data lies on a certain type of nonlinear manifold. In particular, ocean data is influenced by three factors: the water parameters, the bottom type, and the depth. For fixed water and bottom types, the spectra that arise by varying the depth will lie on a nonlinear, one dimensional manifold (i.e. a curve). Generally, water scenes will contain a number of different water and bottom types, each combination of which leads to a distinct curve. In this way, the scene may be modeled as a union of one dimensional curves. In this paper, we investigate the use of manifold learning techniques to separate the various curves, thus partitioning the scene into homogeneous areas. We also discuss ways in which these techniques may be able to derive various scene characteristics such as bathymetry.

27 citations
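The core claim above — that for fixed water and bottom types, varying depth traces a one-dimensional curve in spectral space that a manifold learner can recover — can be checked on toy data. The two-band reflectance model, its constants, and the choice of Isomap (one ML technique among several; the paper does not specify one) are all assumptions for illustration.

```python
import numpy as np
from sklearn.manifold import Isomap

# Toy two-band water spectra: for fixed water and bottom optical
# properties, reflectance varies only with depth z, tracing a curve.
z = np.linspace(0.5, 10.0, 60)                        # depths in meters
k = np.array([0.10, 0.35])                            # attenuation per band
r_inf = np.array([0.02, 0.01])                        # deep-water reflectance
r_bottom = np.array([0.25, 0.30])                     # bottom reflectance
spectra = r_inf + (r_bottom - r_inf) * np.exp(-2.0 * np.outer(z, k))

# A 1-D Isomap embedding of the spectra should recover the ordering
# along the curve, i.e. correlate strongly with depth.
emb = Isomap(n_neighbors=5, n_components=1).fit_transform(spectra).ravel()
corr = np.corrcoef(emb, z)[0, 1]
```

A scene with several water/bottom combinations would produce a union of such curves, which is the separation problem the paper addresses.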


Cited by
Journal ArticleDOI
TL;DR: This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial to the present, covering signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms.
Abstract: Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.

2,373 citations
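The linear mixing model this survey is built around — a pixel spectrum as a nonnegative, sum-to-one combination of endmember spectra — can be inverted with nonnegative least squares. A minimal sketch assuming known endmember signatures; the heavily weighted sum-to-one row is a standard trick for approximating the full constraint, and all numbers are illustrative, not from the paper.

```python
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, delta=1e3):
    """Estimate nonnegative, sum-to-one abundances for one pixel under
    the linear mixing model pixel = abundances @ endmembers.
    NNLS enforces nonnegativity; an appended, heavily weighted
    row of ones softly enforces the sum-to-one constraint."""
    m, _bands = endmembers.shape
    E = np.vstack([endmembers.T, delta * np.ones(m)])
    y = np.append(pixel, delta)
    abundances, _residual = nnls(E, y)
    return abundances

ems = np.array([[0.9, 0.1, 0.2],   # endmember 1 spectrum (3 bands)
                [0.2, 0.8, 0.3]])  # endmember 2 spectrum
pixel = 0.3 * ems[0] + 0.7 * ems[1]
print(unmix(pixel, ems).round(3))  # [0.3 0.7]
```

Geometrical, statistical, and sparsity-based methods surveyed in the paper replace or precede this inversion step — e.g. by estimating the endmembers themselves rather than assuming them known.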

Posted Content
TL;DR: An overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial to the present.
Abstract: Imaging spectrometers measure electromagnetic energy scattered in their instantaneous field of view in hundreds or thousands of spectral channels with higher spectral resolution than multispectral cameras. Imaging spectrometers are therefore often referred to as hyperspectral cameras (HSCs). Higher spectral resolution enables material identification via spectroscopic analysis, which facilitates countless applications that require identifying materials in scenarios unsuitable for classical spectroscopic analysis. Due to low spatial resolution of HSCs, microscopic material mixing, and multiple scattering, spectra measured by HSCs are mixtures of spectra of materials in a scene. Thus, accurate estimation requires unmixing. Pixels are assumed to be mixtures of a few materials, called endmembers. Unmixing involves estimating all or some of: the number of endmembers, their spectral signatures, and their abundances at each pixel. Unmixing is a challenging, ill-posed inverse problem because of model inaccuracies, observation noise, environmental conditions, endmember variability, and data set size. Researchers have devised and investigated many models searching for robust, stable, tractable, and accurate unmixing algorithms. This paper presents an overview of unmixing methods from the time of Keshava and Mustard's unmixing tutorial [1] to the present. Mixing models are first discussed. Signal-subspace, geometrical, statistical, sparsity-based, and spatial-contextual unmixing algorithms are described. Mathematical problems and potential solutions are described. Algorithm characteristics are illustrated experimentally.

1,808 citations

Journal ArticleDOI
TL;DR: This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery, which is eigen decomposition based, unsupervised, and fully automatic.
Abstract: Signal subspace identification is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery. The method, which is termed hyperspectral signal identification by minimum error, is eigen decomposition based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated by using simulated and real hyperspectral images.

1,154 citations
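The idea behind this kind of subspace identification — eigendecompose the data correlation matrix and keep the eigenvalues carried by signal rather than noise — can be illustrated on synthetic data. This is a much-simplified sketch, not the HySime estimator itself: it assumes the noise variance is known and uses a fixed threshold, whereas the paper estimates the noise and selects eigenvalues by minimum mean squared error.

```python
import numpy as np

def subspace_dim(X, noise_var):
    """Count eigenvalues of the sample correlation matrix that sit
    clearly above the noise floor -- a simplified stand-in for
    HySime's minimum-error eigenvalue selection (assumes known
    noise variance; real HySime estimates it from the data)."""
    R = X.T @ X / X.shape[0]          # bands x bands sample correlation
    eigvals = np.linalg.eigvalsh(R)
    return int(np.sum(eigvals > 2.0 * noise_var))

rng = np.random.default_rng(0)
n, bands, k = 2000, 50, 3
A = rng.random((n, k))                # per-pixel abundances
S = rng.random((k, bands)) * 5.0      # k endmember signatures
X = A @ S + rng.normal(0.0, 0.01, (n, bands))   # rank-k signal + noise
print(subspace_dim(X, 0.01 ** 2))     # 3
```

Projecting the data onto the retained eigenvectors gives the dimensionality reduction that downstream detection, classification, and unmixing algorithms benefit from.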

Journal ArticleDOI
TL;DR: This framework of composite kernels demonstrates enhanced classification accuracy as compared to traditional approaches that take into account the spectral information only, flexibility to balance between the spatial and spectral information in the classifier, and computational efficiency.
Abstract: This letter presents a framework of composite kernel machines for enhanced classification of hyperspectral images. This novel method exploits the properties of Mercer's kernels to construct a family of composite kernels that easily combine spatial and spectral information. This framework of composite kernels demonstrates: 1) enhanced classification accuracy as compared to traditional approaches that take into account the spectral information only; 2) flexibility to balance between the spatial and spectral information in the classifier; and 3) computational efficiency. In addition, the proposed family of kernel classifiers opens a wide field for future developments in which spatial and spectral information can be easily integrated.

1,069 citations
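The simplest member of the composite-kernel family is a weighted sum of a spectral kernel and a spatial kernel, which stays a valid Mercer kernel because each term is one. A minimal sketch with a precomputed-kernel SVM evaluated on its own training set; the toy features, the RBF choice, and the weight mu=0.6 are illustrative assumptions, not values from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def rbf(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def composite(Xspec, Xspat, mu=0.5, gamma=1.0):
    """Weighted summation of a spectral and a spatial RBF kernel.
    Each term is a Mercer kernel, so the convex combination is too;
    mu balances spectral vs. spatial information."""
    return mu * rbf(Xspec, Xspec, gamma) + (1 - mu) * rbf(Xspat, Xspat, gamma)

rng = np.random.default_rng(1)
n = 40
y = np.repeat([0, 1], n // 2)
Xspec = rng.normal(y[:, None], 0.3, (n, 5))   # per-pixel spectral features
Xspat = rng.normal(y[:, None], 0.3, (n, 3))   # e.g. local spatial means
K = composite(Xspec, Xspat, mu=0.6)
clf = SVC(kernel="precomputed", C=10.0).fit(K, y)
acc = (clf.predict(K) == y).mean()
```

Other members of the family (stacked, cross-information, and weighted kernels) vary how the two feature blocks enter the kernel, but the fit/predict machinery is the same.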

Journal ArticleDOI
TL;DR: The Environmental Stratification of Europe (EnS) is a statistical stratification of the European environment, suitable for stratified random sampling of ecological resources, the selection of sites for representative studies across the continent, and the provision of strata for modelling exercises and reporting.
Abstract: Aim: To produce a statistical stratification of the European environment, suitable for stratified random sampling of ecological resources, the selection of sites for representative studies across the continent, and to provide strata for modelling exercises and reporting. Location: A ‘Greater European Window’ with the following boundaries: 11° W, 32° E, 34° N, 72° N. Methods: Twenty of the most relevant available environmental variables were selected, based on experience from previous studies. Principal components analysis (PCA) was used to explain 88% of the variation in three dimensions, which were subsequently clustered using an ISODATA clustering routine. The mean first principal component values of the classification variables were used to aggregate the strata into Environmental Zones and to provide a basis for consistent nomenclature. Results: The Environmental Stratification of Europe (EnS) consists of 84 strata, which have been aggregated into 13 Environmental Zones. The stratification has a 1 km² resolution. Aggregations of the strata have been compared to other European classifications using the Kappa statistic, and show ‘good’ comparisons. The individual strata have been described using data from available environmental databases. The EnS is available for noncommercial use by applying to the corresponding author. Main conclusions: The Environmental Stratification of Europe has been constructed using tried and tested statistical procedures. It forms an appropriate stratification for stratified random sampling of ecological resources, the selection of sites for representative studies across the continent, and for the provision of strata for modelling exercises and reporting at the European scale.

655 citations