
Showing papers in "IEEE Transactions on Geoscience and Remote Sensing in 2005"


Journal ArticleDOI
TL;DR: A new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA), which competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
Abstract: Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: (1) the endmembers are the vertices of a simplex and (2) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
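The simplex geometry that VCA exploits can be illustrated with a toy sketch: repeatedly project the data onto a random direction orthogonal to the endmembers found so far and take the pixel with the largest absolute projection as a vertex. This is a hypothetical simplification for illustration, not the published algorithm; the function name and the (bands, pixels) matrix layout are assumptions.

```python
import numpy as np

def vca_sketch(X, p, seed=0):
    """Toy VCA-style endmember extraction (illustrative simplification).

    X : (bands, pixels) matrix of mixed spectra; p : number of endmembers.
    Each iteration projects the data onto a random direction orthogonal to
    the endmembers found so far; the pixel with the largest |projection|
    is a vertex of the data simplex.
    """
    rng = np.random.default_rng(seed)
    bands, _ = X.shape
    A = np.zeros((bands, p))          # columns: endmembers found so far
    idx = []
    for k in range(p):
        P = np.eye(bands) - A @ np.linalg.pinv(A)   # orthogonal projector
        v = P @ rng.standard_normal(bands)
        v /= np.linalg.norm(v) + 1e-12
        j = int(np.argmax(np.abs(v @ X)))           # extreme pixel = vertex
        idx.append(j)
        A[:, k] = X[:, j]
    return X[:, idx], idx
```

Because interior mixtures are convex combinations of the vertices, their projections can never exceed the largest vertex projection, which is why the argmax lands on an endmember.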

2,422 citations


Journal ArticleDOI
TL;DR: This paper assesses performance of regularized radial basis function neural networks (Reg-RBFNN), standard support vector machines (SVMs), kernel Fisher discriminant (KFD) analysis, and regularized AdaBoost (reg-AB) in the context of hyperspectral image classification.
Abstract: This paper presents the framework of kernel-based methods in the context of hyperspectral image classification, illustrating from a general viewpoint the main characteristics of different kernel-based approaches and analyzing their properties in the hyperspectral domain. In particular, we assess performance of regularized radial basis function neural networks (Reg-RBFNN), standard support vector machines (SVMs), kernel Fisher discriminant (KFD) analysis, and regularized AdaBoost (Reg-AB). The novelty of this work consists in: 1) introducing Reg-RBFNN and Reg-AB for hyperspectral image classification; 2) comparing kernel-based methods by taking into account the peculiarities of hyperspectral images; and 3) clarifying their theoretical relationships. For these purposes, we focus on the accuracy of methods when working in noisy environments, high input dimension, and limited training sets. In addition, some other important issues are discussed, such as the sparsity of the solutions, the computational burden, and the capability of the methods to provide outputs that can be directly interpreted as probabilities.

1,428 citations


Journal ArticleDOI
TL;DR: A method based on mathematical morphology for preprocessing of the hyperspectral data is proposed, using opening and closing morphological transforms to isolate bright and dark structures in images, where bright/dark means brighter/darker than the surrounding features in the images.
Abstract: Classification of hyperspectral data with high spatial resolution from urban areas is investigated. A method based on mathematical morphology for preprocessing of the hyperspectral data is proposed. In this approach, opening and closing morphological transforms are used in order to isolate bright (opening) and dark (closing) structures in images, where bright/dark means brighter/darker than the surrounding features in the images. A morphological profile is constructed based on the repeated use of openings and closings with a structuring element of increasing size, starting with one original image. In order to apply the morphological approach to hyperspectral data, principal components of the hyperspectral imagery are computed. The most significant principal components are used as base images for an extended morphological profile, i.e., a profile based on more than one original image. In experiments, two hyperspectral urban datasets are classified. The proposed method is used as a preprocessing method for a neural network classifier and compared to more conventional classification methods with different types of statistical computations and feature extraction.
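The profile construction can be sketched with plain NumPy; a hand-rolled square structuring element keeps the example self-contained, and the function names and radii are illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def _dilate(img, r):
    # Grayscale dilation with a (2r+1)x(2r+1) square structuring element.
    p = np.pad(img, r, mode='edge')
    out = np.full_like(img, -np.inf, dtype=float)
    h, w = img.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            out = np.maximum(out, p[r + dy:r + dy + h, r + dx:r + dx + w])
    return out

def _erode(img, r):
    # Erosion is dilation of the negated image.
    return -_dilate(-img, r)

def morphological_profile(img, radii=(1, 2, 3)):
    """Opening/closing profile with a structuring element of increasing size.

    Returns the stack [closings in decreasing size, original, openings in
    increasing size], the usual layout of a morphological profile.
    """
    openings = [_dilate(_erode(img, r), r) for r in radii]   # opening: erode, then dilate
    closings = [_erode(_dilate(img, r), r) for r in radii]   # closing: dilate, then erode
    return np.stack(closings[::-1] + [img.astype(float)] + openings)
```

Openings flatten bright structures smaller than the structuring element, closings flatten dark ones; running the profile on the first principal components gives the paper's extended profile.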

1,308 citations


Journal ArticleDOI
TL;DR: This work investigates two approaches based on the concept of random forests of classifiers implemented within a binary hierarchical multiclassifier system, with the goal of achieving improved generalization of the classifier in analysis of hyperspectral data, particularly when the quantity of training data is limited.
Abstract: Statistical classification of hyperspectral data is challenging because the inputs are high in dimension and represent multiple classes that are sometimes quite mixed, while the amount and quality of ground truth in the form of labeled data are typically limited. The resulting classifiers are often unstable and have poor generalization. This work investigates two approaches based on the concept of random forests of classifiers implemented within a binary hierarchical multiclassifier system, with the goal of achieving improved generalization of the classifier in analysis of hyperspectral data, particularly when the quantity of training data is limited. A new classifier is proposed that incorporates bagging of training samples and adaptive random subspace feature selection within a binary hierarchical classifier (BHC), such that the number of features that is selected at each node of the tree is dependent on the quantity of associated training data. Results are compared to a random forest implementation based on the framework of classification and regression trees. For both methods, classification results obtained from experiments on data acquired by the National Aeronautics and Space Administration (NASA) Airborne Visible/Infrared Imaging Spectrometer instrument over the Kennedy Space Center, Florida, and by Hyperion on the NASA Earth Observing 1 satellite over the Okavango Delta of Botswana are superior to those from the original best basis BHC algorithm and a random subspace extension of the BHC.

984 citations


Journal ArticleDOI
TL;DR: A four-component scattering model is proposed to decompose polarimetric synthetic aperture radar (SAR) images and the covariance matrix approach is used to deal with the nonreflection symmetric scattering case.
Abstract: A four-component scattering model is proposed to decompose polarimetric synthetic aperture radar (SAR) images. The covariance matrix approach is used to deal with the nonreflection symmetric scattering case. This scheme includes and extends the three-component decomposition method introduced by Freeman and Durden dealing with the reflection symmetry condition that the co-pol and the cross-pol correlations are close to zero. Helix scattering power is added as the fourth component to the three-component scattering model which describes surface, double bounce, and volume scattering. This helix scattering term is added to take account of the co-pol and the cross-pol correlations which generally appear in complex urban area scattering and disappear for a natural distributed scatterer. This term is relevant for describing man-made targets in urban area scattering. In addition, asymmetric volume scattering covariance matrices are introduced depending on the relative backscattering magnitude between HH and VV. A modification of the probability density function for a cloud of dipole scatterers yields asymmetric covariance matrices. An appropriate choice among the symmetric or asymmetric volume scattering covariance matrices allows us to obtain a best fit to the measured data. A four-component decomposition algorithm is developed to deal with a general scattering case. The result of this decomposition is demonstrated with L-band Pi-SAR images taken over the city of Niigata, Japan.
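As a hedged illustration of the fourth component: the helix power has the form Pc = 2·|Im⟨S_HV·conj(S_HH − S_VV)⟩|, which vanishes for a reflection-symmetric (natural) scatterer. The sketch below assumes single-look complex samples over a local averaging window; the function name and array conventions are assumptions.

```python
import numpy as np

def helix_power(shh, svv, shv):
    """Helix scattering power, the fourth component added to the
    three-component (surface / double-bounce / volume) model:
        Pc = 2 * | Im( < S_HV * conj(S_HH - S_VV) > ) |
    For a reflection-symmetric distributed scatterer the co-pol/cross-pol
    correlation vanishes, so Pc -> 0; in urban areas it is generally nonzero.
    """
    c = np.mean(shv * np.conj(shh - svv))   # windowed correlation estimate
    return 2.0 * abs(c.imag)
```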

933 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods.
Abstract: There are many image fusion methods that can be used to produce high-resolution multispectral images from a high-resolution panchromatic image and low-resolution multispectral images. Starting from the physical principle of image formation, this paper presents a comprehensive framework, the general image fusion (GIF) method, which makes it possible to categorize, compare, and evaluate the existing image fusion methods. Using the GIF method, it is shown that the pixel values of the high-resolution multispectral images are determined by the corresponding pixel values of the low-resolution panchromatic image, the approximation of the high-resolution panchromatic image at the low-resolution level. Many of the existing image fusion methods, including, but not limited to, intensity-hue-saturation, Brovey transform, principal component analysis, high-pass filtering, high-pass modulation, the à trous algorithm-based wavelet transform, and multiresolution analysis-based intensity modulation (MRAIM), are evaluated and found to be particular cases of the GIF method. The performance of each image fusion method is theoretically analyzed based on how the corresponding low-resolution panchromatic image is computed and how the modulation coefficients are set. An experiment based on IKONOS images shows that there is consistency between the theoretical analysis and the experimental results and that the MRAIM method synthesizes the images closest to those the corresponding multisensors would observe at the high-resolution level.
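The unifying relation behind such frameworks can be written MS_high,k = MS_up,k + g_k·(Pan − Pan_L), where Pan_L is the low-resolution approximation of the panchromatic image and g_k are per-band modulation coefficients; different choices of Pan_L and g_k yield IHS-, PCA-, HPF-, or wavelet-like methods. A minimal sketch, with array shapes and the function name as assumptions:

```python
import numpy as np

def gif_fuse(ms_up, pan, pan_low, gains):
    """Generic injection step in the spirit of a general fusion framework:
        MS_high[k] = MS_up[k] + g[k] * (Pan - Pan_low)

    ms_up   : (K, H, W) upsampled low-resolution multispectral bands
    pan     : (H, W) high-resolution panchromatic image
    pan_low : (H, W) low-resolution approximation of the panchromatic image
    gains   : length-K modulation coefficients
    """
    detail = pan - pan_low                    # high-frequency spatial detail
    g = np.asarray(gains)[:, None, None]      # broadcast gains over pixels
    return ms_up + g * detail[None, :, :]
```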

793 citations


Journal ArticleDOI
TL;DR: The authors present a technique which takes into account the physical electromagnetic spectrum responses of sensors during the fusion process, which produces images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods.
Abstract: Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band trying to preserve spectral signatures and improve spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. But in these methods, features from electromagnetic spectrum regions not covered by multispectral sensors are injected into them, and physical spectral responses of the sensors are not considered during this process. This produces some undesirable effects, such as overinjection of spatial detail and slightly modified spectral signatures in some features. The authors present a technique which takes into account the physical electromagnetic spectrum responses of sensors during the fusion process, which produces images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.

702 citations


Journal ArticleDOI
TL;DR: Experiments carried out on two sets of multitemporal images acquired by the European Remote Sensing 2 satellite SAR sensor confirm the effectiveness of the proposed unsupervised approach, which results in change-detection accuracies very similar to those that can be achieved by a manual supervised thresholding.
Abstract: We present a novel automatic and unsupervised change-detection approach specifically oriented to the analysis of multitemporal single-channel single-polarization synthetic aperture radar (SAR) images. This approach is based on a closed-loop process made up of three main steps: (1) a novel preprocessing based on a controlled adaptive iterative filtering; (2) a comparison between multitemporal images carried out according to a standard log-ratio operator; and (3) a novel approach to the automatic analysis of the log-ratio image for generating the change-detection map. The first step aims at reducing the speckle noise in a controlled way in order to maximize the discrimination capability between changed and unchanged classes. In the second step, the two filtered multitemporal images are compared to generate a log-ratio image that contains explicit information on changed areas. The third step produces the change-detection map according to a thresholding procedure based on a reformulation of the Kittler-Illingworth (KI) threshold selection criterion. In particular, the modified KI criterion is derived under the generalized Gaussian assumption for modeling the distributions of changed and unchanged classes. This parametric model was chosen because it is capable of better fitting the conditional densities of classes in the log-ratio image. In order to control the filtering step and, accordingly, the effects of the filtering process on change-detection accuracy, we propose to identify automatically the optimal number of despeckling filter iterations [Step 1] by analyzing the behavior of the modified KI criterion. This results in a completely automatic and self-consistent change-detection approach that avoids the use of empirical methods for the selection of the best number of filtering iterations. 
Experiments carried out on two sets of multitemporal images (characterized by different levels of speckle noise) acquired by the European Remote Sensing 2 satellite SAR sensor confirm the effectiveness of the proposed unsupervised approach, which results in change-detection accuracies very similar to those that can be achieved by a manual supervised thresholding.
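The log-ratio comparison and threshold-selection steps described above can be sketched as below. The paper derives the criterion under a generalized Gaussian model; this illustrative version keeps the classic Gaussian Kittler-Illingworth form, and the histogram-based implementation and function names are assumptions.

```python
import numpy as np

def kittler_threshold(x, nbins=64):
    """Minimum-error (Kittler-Illingworth) threshold for a two-class
    Gaussian mixture, estimated from a histogram of the values."""
    hist, edges = np.histogram(x, bins=nbins)
    p = hist / hist.sum()
    mids = 0.5 * (edges[:-1] + edges[1:])
    best, best_t = np.inf, mids[0]
    for t in range(1, nbins - 1):
        P1, P2 = p[:t].sum(), p[t:].sum()
        if P1 <= 0 or P2 <= 0:
            continue
        m1 = (p[:t] * mids[:t]).sum() / P1
        m2 = (p[t:] * mids[t:]).sum() / P2
        v1 = (p[:t] * (mids[:t] - m1) ** 2).sum() / P1
        v2 = (p[t:] * (mids[t:] - m2) ** 2).sum() / P2
        if v1 <= 0 or v2 <= 0:
            continue
        # Kittler-Illingworth criterion under the Gaussian assumption.
        J = 1 + 2 * (P1 * np.log(np.sqrt(v1)) + P2 * np.log(np.sqrt(v2))) \
              - 2 * (P1 * np.log(P1) + P2 * np.log(P2))
        if J < best:
            best, best_t = J, mids[t]
    return best_t

def change_map(img1, img2, eps=1e-6):
    """Log-ratio comparison followed by automatic thresholding."""
    lr = np.abs(np.log((img2 + eps) / (img1 + eps)))
    return lr > kittler_threshold(lr.ravel())
```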

688 citations


Journal ArticleDOI
TL;DR: It is shown that the kernel RX-algorithm can easily be implemented by kernelizing the RX-algorithm in terms of kernels that implicitly compute dot products in the feature space.
Abstract: We present a nonlinear version of the well-known anomaly detection method referred to as the RX-algorithm. Extending this algorithm to a feature space associated with the original input space via a certain nonlinear mapping function can provide a nonlinear version of the RX-algorithm. This nonlinear RX-algorithm, referred to as the kernel RX-algorithm, is basically intractable mainly due to the high dimensionality of the feature space produced by the nonlinear mapping function. However, in this paper it is shown that the kernel RX-algorithm can easily be implemented by kernelizing the RX-algorithm in the feature space in terms of kernels that implicitly compute dot products in the feature space. Improved performance of the kernel RX-algorithm over the conventional RX-algorithm is shown by tests on several hyperspectral images for military target and mine detection.
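For reference, the conventional RX statistic is the Mahalanobis distance of each pixel spectrum from the background; the kernel variant replaces the dot products below with kernel evaluations in feature space. A minimal linear-RX sketch, with the data layout and function name as assumptions:

```python
import numpy as np

def rx_scores(X):
    """Classic (linear) RX anomaly detector.

    X : (pixels, bands) array of spectra.
    Returns the Mahalanobis distance of each spectrum from the
    background mean, estimated from the data itself.
    """
    mu = X.mean(axis=0)
    Xc = X - mu
    cov = Xc.T @ Xc / (len(X) - 1)            # background covariance
    cov_inv = np.linalg.pinv(cov)             # pinv guards against singularity
    return np.einsum('ij,jk,ik->i', Xc, cov_inv, Xc)
```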

653 citations


Journal ArticleDOI
TL;DR: A new algorithm for exploiting the nonlinear structure of hyperspectral imagery is developed and compared against the de facto standard of linear mixing, and it is demonstrated that the new manifold representation provides better separation of spectrally similar classes than one of the standard linear mixing models.
Abstract: A new algorithm for exploiting the nonlinear structure of hyperspectral imagery is developed and compared against the de facto standard of linear mixing. This new approach seeks a manifold coordinate system that preserves geodesic distances in the high-dimensional hyperspectral data space. Algorithms for deriving manifold coordinates, such as isometric mapping (ISOMAP), have been developed for other applications. ISOMAP guarantees a globally optimal solution, but is practical only for small datasets because of its computational and memory requirements. Here, we develop a hybrid technique to circumvent ISOMAP's computational cost. We divide the scene into a set of smaller tiles. The manifolds derived from the individual tiles are then aligned and stitched together to reassemble the complete scene. Several alignment methods are discussed. This hybrid approach exploits the fact that ISOMAP guarantees a globally optimal solution for each tile and the presumed similarity of the manifold structures derived from different tiles. Using land-cover classification of hyperspectral imagery in the Virginia Coast Reserve as a test case, we show that the new manifold representation provides better separation of spectrally similar classes than one of the standard linear mixing models. Additionally, we demonstrate that this technique provides a natural data compression scheme, which dramatically reduces the number of components needed to model hyperspectral data when compared with traditional methods such as the minimum noise fraction transform.

443 citations


Journal ArticleDOI
TL;DR: Experimental results reveal that, by designing morphological filtering methods that take into account the complementary nature of spatial and spectral information in a simultaneous manner, it is possible to alleviate the problems related to each of them when taken separately.
Abstract: This work describes sequences of extended morphological transformations for filtering and classification of high-dimensional remotely sensed hyperspectral datasets. The proposed approaches are based on the generalization of concepts from mathematical morphology theory to multichannel imagery. A new vector organization scheme is described, and fundamental morphological vector operations are defined by extension. Extended morphological transformations, characterized by simultaneously considering the spatial and spectral information contained in hyperspectral datasets, are applied to agricultural and urban classification problems where efficacy in discriminating between subtly different ground covers is required. The methods are tested using real hyperspectral imagery collected by the National Aeronautics and Space Administration Jet Propulsion Laboratory Airborne Visible-Infrared Imaging Spectrometer and the German Aerospace Agency Digital Airborne Imaging Spectrometer (DAIS 7915). Experimental results reveal that, by designing morphological filtering methods that take into account the complementary nature of spatial and spectral information in a simultaneous manner, it is possible to alleviate the problems related to each of them when taken separately.

Journal ArticleDOI
TL;DR: Experimental results obtained on multitemporal SAR images acquired by the ERS-1 satellite confirm the effectiveness of the proposed approach to change detection in multitemporal synthetic aperture radar images.
Abstract: This paper presents a novel approach to change detection in multitemporal synthetic aperture radar (SAR) images. The proposed approach exploits a wavelet-based multiscale decomposition of the log-ratio image (obtained by a comparison of the original multitemporal data) aimed at achieving different scales (levels) of representation of the change signal. Each scale is characterized by a different tradeoff between speckle reduction and preservation of geometrical details. For each pixel, a subset of reliable scales is identified on the basis of a local statistic measure applied to scale-dependent log-ratio images. The final change-detection result is obtained according to an adaptive scale-driven fusion algorithm. Experimental results obtained on multitemporal SAR images acquired by the ERS-1 satellite confirm the effectiveness of the proposed approach.

Journal ArticleDOI
TL;DR: The accuracy of these methods tends to improve with the increase of the signature variability, of the number of endmembers, and of the signal-to-noise ratio, and it is concluded that there are always endmembers incorrectly unmixed.
Abstract: Independent component analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: 1) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the correspondent abundance fractions (sources); 2) sources are statistically independent. Independent factor analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible, and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, sources cannot be statistically independent, thus compromising the performance of ICA/IFA algorithms in hyperspectral unmixing. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performances. We conclude that the accuracy of these methods tends to improve with the increase of the signature variability, of the number of endmembers, and of the signal-to-noise ratio. In any case, there are always endmembers incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort ICA and IFA estimates in terms of the likelihood of being correctly unmixed is proposed.
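The violated independence assumption is easy to demonstrate numerically: abundance vectors constrained to sum to one are necessarily negatively correlated (for a symmetric Dirichlet draw over p endmembers the pairwise correlation is exactly −1/(p−1)). The simulation below is illustrative only and is not from the paper.

```python
import numpy as np

# Abundance fractions lie on the simplex (they sum to one), so they
# cannot be statistically independent: the components of any such draw
# are negatively correlated.
rng = np.random.default_rng(1)
abund = rng.dirichlet(np.ones(3), size=20000)   # 3 "endmember" fractions per pixel
pair_corr = np.corrcoef(abund.T)[0, 1]          # theory: -1/(3-1) = -0.5
```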

Journal ArticleDOI
TL;DR: It is shown that ocean-reflected signals from the global positioning system (GPS) navigation satellite constellation can be detected from a low-earth orbiting satellite and that these signals show rough correlation with independent measurements of the sea winds.
Abstract: We will show that ocean-reflected signals from the global positioning system (GPS) navigation satellite constellation can be detected from a low-earth orbiting satellite and that these signals show rough correlation with independent measurements of the sea winds. We will present waveforms of ocean-reflected GPS signals that have been detected using the experiment onboard the United Kingdom's Disaster Monitoring Constellation satellite and describe the processing methods used to obtain their delay and Doppler power distributions. The GPS bistatic radar experiment has made several raw data collections, and reflected GPS signals have been found on all attempts. The downlinked data from the experiment have undergone extensive processing, and ocean-scattered signals have been mapped across a wide range of delay and Doppler space revealing characteristics which are known to be related to geophysical parameters such as surface roughness and wind speed. Here we will discuss the effects of integration time and reflection incidence angle, and examine several delay-Doppler signal maps. The signals detected have been found to be in general agreement with an existing model (based on geometric optics) and with limited independent measurements of sea winds; a brief comparison is presented here. These results demonstrate that the concept of using bistatically reflected global navigation satellite system signals from low earth orbit is a viable means of ocean remote sensing.

Journal ArticleDOI
TL;DR: The polarization uniqueness in transmission of this mixed basis mode, hereafter referred to as the π/4 mode, maintains the standard lower pulse repetition frequency operation and hence maximizes the coverage of the sensor.
Abstract: We assess the performance of synthetic aperture radar (SAR) compact polarimetry architectures based on mixed basis measurements, where the transmitter polarization is either circular or oriented at 45° (π/4), and the receivers are at horizontal and vertical polarizations with respect to the radar line of sight. An original algorithm is proposed to reconstruct the full polarimetric (FP) information from this architecture. The performance assessment is twofold: it first concerns the level of information preserved in comparison with FP, both for point target analysis and crop fields classification, using L-band SIR-C/X-SAR images acquired over Landes forest and Jet Propulsion Laboratory AIRSAR images acquired over Flevoland. Then, it addresses the space implementation complexity, in terms of processed swath, downloading features, power budget, calibration, and ionospheric effects. The polarization uniqueness in transmission of this mixed basis mode, hereafter referred to as the π/4 mode, maintains the standard lower pulse repetition frequency operation and hence maximizes the coverage of the sensor. Because of the mismatch between transmitter and receiver basis, the power budget is deteriorated by a factor of 3 dB, but it can partly be compensated.

Journal ArticleDOI
TL;DR: An investigation of RFI in the 6.9- and 10.7-GHz AMSR-E channels over the global land domain and for a one-year observation period is extended and the spatial and temporal characteristics of the RFI are examined by the use of spectral indices.
Abstract: Radio-frequency interference (RFI) is an increasingly serious problem for passive and active microwave sensing of the Earth. To satisfy their measurement objectives, many spaceborne passive sensors must operate in unprotected bands, and future sensors may also need to operate in unprotected bands. Data from these sensors are likely to be increasingly contaminated by RFI as the spectrum becomes more crowded. In a previous paper we reported on a preliminary investigation of RFI observed over the United States in the 6.9-GHz channels of the Advanced Microwave Scanning Radiometer (AMSR-E) on the Earth Observing System Aqua satellite. Here, we extend the analysis to an investigation of RFI in the 6.9- and 10.7-GHz AMSR-E channels over the global land domain and for a one-year observation period. The spatial and temporal characteristics of the RFI are examined by the use of spectral indices. The observed RFI at 6.9 GHz is most densely concentrated in the United States, Japan, and the Middle East, and is sparser in Europe, while at 10.7 GHz the RFI is concentrated mostly in England, Italy, and Japan. Classification of RFI using means and standard deviations of the spectral indices is effective in identifying strong RFI. In many cases, however, it is difficult, using these indices, to distinguish weak RFI from natural geophysical variability. Geophysical retrievals using RFI-filtered data may therefore contain residual errors due to weak RFI. More robust radiometer designs and continued efforts to protect spectrum allocations will be needed in the future to ensure the viability of spaceborne passive microwave sensing.
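A toy version of an index-based RFI screen is sketched below. The actual AMSR-E indices and thresholds differ; the index definition, the 3-sigma rule, and the function name here are all assumptions for illustration.

```python
import numpy as np

def rfi_flag(tb_6, tb_10, k=3.0):
    """Toy spectral-index RFI screen.

    Natural emission varies smoothly with frequency, so an anomalously
    large 6.9-minus-10.7 GHz brightness-temperature difference suggests
    RFI in the 6.9-GHz channel. Pixels whose index exceeds
    mean + k * std of the scene are flagged.
    """
    ri = tb_6 - tb_10                       # simple spectral index (K)
    return ri > ri.mean() + k * ri.std()
```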

Journal ArticleDOI
TL;DR: It is shown that, using multifrequency data, high-quality image reconstruction can be achieved with a limited array view.
Abstract: A two-dimensional nonlinear inverse scattering technique is developed for imaging objects in a multilayered medium that simulates the effects of building walls in the context of through-wall imaging (TWI). The effectiveness and capacity of the inversion algorithm and the feasibility of through-wall imaging is demonstrated via a number of numerical examples. It is shown that, using multifrequency data, high-quality image reconstruction can be achieved even with a limited array view.

Journal ArticleDOI
TL;DR: Results for a full range of slopes, aspects, and crown closures showed SCS+C provided improved corrections compared to the SCS and four other photometric approaches (cosine, C, Minnaert, statistical-empirical) for a Rocky Mountain forest setting in western Canada.
Abstract: Topographic correction based on sun-canopy-sensor (SCS) geometry is more appropriate than terrain-based corrections in forested areas since SCS preserves the geotropic nature of trees (vertical growth) regardless of terrain, view, and illumination angles. However, in some terrain orientations, SCS experiences an overcorrection problem similar to other simple photometric functions. To address this problem, we propose a new SCS+C correction that accounts for diffuse atmospheric irradiance based on the C-correction. A rigorous, comprehensive, and flexible method for independent validation based on canopy geometric optical reflectance models is also introduced as an improvement over previous validation approaches, and forms a secondary contribution of this paper. Results for a full range of slopes, aspects, and crown closures showed SCS+C provided improved corrections compared to the SCS and four other photometric approaches (cosine, C, Minnaert, statistical-empirical) for a Rocky Mountain forest setting in western Canada. It was concluded that SCS+C should be considered for topographic correction of remote sensing imagery in forested terrain.
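The correction has the closed form ρ_corr = ρ·(cos θ_s·cos α + C)/(cos i + C), where α is the slope, θ_s the solar zenith angle, i the local illumination angle, and C = b/m the empirical diffuse-irradiance term from regressing reflectance on the illumination cosine. A minimal sketch under assumed array inputs and naming:

```python
import numpy as np

def scs_c_correct(refl, cos_i, slope, solar_zenith):
    """SCS+C topographic correction:
        refl_corr = refl * (cos(theta_s) * cos(slope) + C) / (cos_i + C)
    with C = b / m from the linear regression refl = m * cos_i + b
    (the C-correction's diffuse-irradiance term). Angles in radians;
    cos_i is the local illumination cosine.
    """
    m, b = np.polyfit(np.ravel(cos_i), np.ravel(refl), 1)
    C = b / m
    num = np.cos(solar_zenith) * np.cos(slope) + C
    return refl * num / (cos_i + C)
```

On synthetic data whose reflectance is a pure linear function of cos i, the correction removes the illumination dependence entirely, which is the behavior the validation exercises.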

Journal ArticleDOI
TL;DR: First results are presented that confirm the capability of ERS multipass tomography to resolve multiple targets within the same azimuth-range cell and to map the 3-D scattering properties of the illuminated scene.
Abstract: Synthetic aperture radar (SAR) interferometry is a modern efficient technique that allows reconstructing the height profile of the observed scene. However, apart from the presence of critical nonlinear inversion steps, particularly crucial in abrupt topography scenarios, it does not allow one to separate different scattering mechanisms in the elevation (height) direction within the ground pixel. Overlay of scattering at different elevations in the same azimuth-range resolution cell can be due either to the penetration of the radiation below the surface or to perspective ambiguities caused by the side-looking geometry. Multibaseline three-dimensional (3-D) SAR focusing allows overcoming such a limitation and has thus raised great interest in the recent research. First results with real data have been only obtained in the laboratory and with airborne systems, or with limited time-span and spatial-coverage spaceborne data. This work presents a novel approach for the tomographic processing of European Remote Sensing satellite (ERS) real data for extended scenes and long time span. Besides facing problems common to the airborne case, such as the nonuniformly spaced passes, this processing requires tackling additional difficulties specific to the spaceborne case, in particular a space-varying phase calibration of the data due to atmospheric variations and possible scene deformations occurring for years-long temporal spans. First results are presented that confirm the capability of ERS multipass tomography to resolve multiple targets within the same azimuth-range cell and to map the 3-D scattering properties of the illuminated scene.

Journal ArticleDOI
TL;DR: This work has developed direct linear relationships between the MODIS-measured R_fre and smoke aerosol emission rates R_sa and derived a FRE-based smoke emission coefficient, C_e (in kilograms per megajoule), which is an excellent remote sensing parameter expressing the emission strength of different ecosystems and regions.
Abstract: Present methods of emissions estimation from satellite data often use fire pixel counts, even though fire strengths and smoke emission rates can differ by some orders of magnitude between pixels. Moderate Resolution Imaging Spectroradiometer (MODIS) measurements of fire radiative energy (FRE) release rates R_fre range from less than 10 to more than 1700 MW per pixel at 1-km resolution. To account for the effect of such a wide range of fire strengths/sizes on smoke emission rates, we have developed direct linear relationships between the MODIS-measured R_fre and smoke aerosol emission rates R_sa (in kilograms per second), derived by analyzing MODIS measurements of aerosol spatial distribution around the fires with National Center for Environmental Prediction/National Center for Atmospheric Research wind fields. We applied the technique to several regions around the world and derived a FRE-based smoke emission coefficient, C_e (in kilograms per megajoule), which can be simply multiplied by R_fre to calculate R_sa. This new coefficient C_e is an excellent remote sensing parameter expressing the emission strength of different ecosystems and regions. Analysis of all 2002 MODIS data from Terra and Aqua satellites yielded C_e values of 0.02-0.06 kg/MJ for boreal regions, 0.04-0.08 kg/MJ for both tropical forests and savanna regions, and 0.08-0.1 kg/MJ for Western Russian regions. These results are probably overestimated by about 50% because of uncertainties in some of the data, parameters, and assumptions involved in the computations. This 50% overestimation is comparable to uncertainties in traditional emission factors. However, our satellite method shows great promise for accuracy improvement, as better knowledge is gained about the sources of the uncertainties.
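The emission relation itself is a one-liner, R_sa = C_e · R_fre: with R_fre in MW (equivalently MJ/s) and C_e in kg/MJ, the units cancel to give R_sa in kg/s. A trivial sketch (the function name is an assumption):

```python
def smoke_emission_rate(r_fre_mw, c_e):
    """Smoke aerosol emission rate R_sa (kg/s) from the fire radiative
    energy release rate R_fre (MW = MJ/s) and the FRE-based emission
    coefficient C_e (kg/MJ):  R_sa = C_e * R_fre."""
    return c_e * r_fre_mw
```

For example, a strong 1000-MW fire pixel in a savanna region with C_e = 0.05 kg/MJ emits roughly 50 kg of smoke aerosol per second.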

Journal ArticleDOI
TL;DR: The proposed algorithm has been successfully applied to register multitemporal SPOT and synthetic aperture radar images of urban and agricultural areas; the experimental results demonstrate its robustness, efficiency, and accuracy.
Abstract: This paper addresses a major problem in remote sensing: the registration of multitemporal and/or multisensor images. In general, such images have different gray-level characteristics, and simple techniques such as those based on correlation cannot be applied directly. In this work, a new automatic satellite image registration approach is proposed. This technique exploits the invariant relations between regions of a reference image and a sensed image. It involves an edge-based selection of the most distinctive control points (CPs) in the reference image. The search for the corresponding CPs in the sensed image is based on local similarity detection by means of template matching according to a combined invariants-based similarity measure. The final warping of the images according to the selected CPs is performed using thin-plate spline interpolation. The procedure is fully automatic and computationally efficient. The proposed algorithm has been successfully applied to register multitemporal SPOT and synthetic aperture radar images from urban and agricultural areas. The experimental results demonstrate the robustness, efficiency, and accuracy of the algorithm.
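The final warping step the abstract names, thin-plate spline interpolation over matched control points, can be sketched with SciPy's `RBFInterpolator` (whose default kernel is in fact the thin-plate spline); the control-point coordinates below are hypothetical, not from the paper:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical matched control points: (row, col) coordinates in the
# sensed image and their counterparts in the reference image.
cp_sensed = np.array([[10.0, 10.0], [10.0, 90.0], [90.0, 10.0],
                      [90.0, 90.0], [50.0, 50.0]])
cp_reference = np.array([[12.0, 11.0], [11.0, 92.0], [93.0, 12.0],
                         [92.0, 93.0], [52.0, 52.0]])

# Thin-plate spline mapping from sensed to reference coordinates.
tps = RBFInterpolator(cp_sensed, cp_reference, kernel="thin_plate_spline")

# With zero smoothing the spline interpolates: it reproduces the control
# points exactly, and smoothly warps any other coordinate.
mapped = tps(cp_sensed)
warped = tps(np.array([[30.0, 70.0]]))
```

In practice the mapping would be evaluated on the full pixel grid of the sensed image and the intensities resampled accordingly; this sketch only shows the coordinate transform itself.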

Journal ArticleDOI
TL;DR: Simulated results are reported for different baseline-time acquisition patterns and two motion conditions of layover scatterers, showing that this new challenging interferometric technique is promising.
Abstract: A new interferometric mode combining the differential synthetic aperture radar (SAR) interferometry and multibaseline SAR tomography concepts, which can be termed differential SAR tomography, is proposed. Its potential, arising from the joint elevation-velocity resolution of multiple scatterers, is discussed. Processing is cast in a bidimensional baseline-time spectral analysis framework with sparse sampling, and the use of a modern data-dependent bidimensional spectral estimator is proposed for joint baseline-time processing. Simulated results are reported for different baseline-time acquisition patterns and two motion conditions of layover scatterers, showing that this new and challenging interferometric technique is promising.

Journal ArticleDOI
TL;DR: This work conducts a comprehensive study and analysis on the orthogonal subspace projection from several signal processing perspectives and discusses in depth how to effectively operate the OSP using different levels of a priori target knowledge for target detection and classification.
Abstract: The orthogonal subspace projection (OSP) approach has recently received considerable interest in hyperspectral data exploitation. It has been shown to be a versatile technique for a wide range of applications. Unfortunately, insights into its design rationale have yet to be fully explored. This work conducts a comprehensive study and analysis of the OSP from several signal processing perspectives and further discusses in depth how to effectively operate the OSP using different levels of a priori target knowledge for target detection and classification. Additionally, it examines various assumptions made in the OSP and analyzes filters with different forms, some of which turn out to be well-known and popular target detectors and classifiers. It also shows how the OSP is related to the well-known least-squares-based linear spectral mixture analysis and how the OSP takes advantage of Gaussian noise to arrive at the Gaussian maximum-likelihood detector/estimator and likelihood ratio test. Extensive experiments are included to simulate various scenarios that illustrate the utility of the OSP operating under various assumptions and different degrees of target knowledge.
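At the core of the OSP approach is the projector onto the subspace orthogonal to the undesired signatures, P = I - U(U^T U)^{-1} U^T, followed by correlation with the desired target signature d. A minimal numpy sketch (band count and signatures below are synthetic, not from the paper's experiments):

```python
import numpy as np

# Minimal sketch of the OSP detector: project each pixel onto the
# subspace orthogonal to the undesired signatures U, then correlate
# with the desired target signature d. All values here are synthetic.

rng = np.random.default_rng(0)
bands = 50
d = rng.random(bands)        # desired target signature
U = rng.random((bands, 3))   # undesired/background signatures (columns)

# Orthogonal-subspace projector: P = I - U (U^T U)^{-1} U^T.
P = np.eye(bands) - U @ np.linalg.inv(U.T @ U) @ U.T

def osp_detector(x: np.ndarray) -> float:
    """OSP filter output d^T P x for one pixel spectral vector x."""
    return float(d @ P @ x)

# P annihilates anything in the span of U, so a pixel modeled as
# x = a*d + U@b + noise responds mainly through its target component.
assert np.allclose(P @ U, 0.0, atol=1e-10)
```

The abstract's point is that different assumptions (e.g., Gaussian noise) turn this basic filter into familiar detectors such as the maximum-likelihood detector; the sketch above shows only the common projection step.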

Journal ArticleDOI
TL;DR: An ecosystem-dependent temporal interpolation technique is described that has been developed to fill missing or seasonally snow-covered data in the official MOD43B3 albedo product, and phenological curves are derived from statistics based on the MODIS MOD12Q1 IGBP land cover classification product geolocated with the MOD 43B3 data.
Abstract: Recent production of land surface anisotropy, diffuse bihemispherical (white-sky) albedo, and direct-beam directional hemispherical (black-sky) albedo from observations acquired by the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard the National Aeronautics and Space Administration's Terra and Aqua satellite platforms has provided researchers with unprecedented spatial, spectral, and temporal information on the land surface's radiative characteristics. Cloud cover, which curtails retrievals, and the presence of ephemeral and seasonal snow limit the snow-free data to approximately half the global land surfaces on an annual equal-angle basis. This precludes the MOD43B3 albedo products from being used in some remote sensing and ground-based applications, climate models, and global change research projects. An ecosystem-dependent temporal interpolation technique is described that has been developed to fill missing or seasonally snow-covered data in the official MOD43B3 albedo product. The method imposes pixel-level and local regional ecosystem-dependent phenological behavior onto retrieved pixel temporal data in such a way as to maintain pixel-level spatial and spectral detail and integrity. The phenological curves are derived from statistics based on the MODIS MOD12Q1 IGBP land cover classification product geolocated with the MOD43B3 data. The resulting snow-free value-added products provide the scientific community with spatially and temporally complete global white- and black-sky surface albedo maps and statistics. These products are stored on 1-min and coarser resolution equal-angle grids and are computed for the first seven MODIS wavelengths, ranging from 0.47 to 2.1 /spl mu/m, and for three broadbands: 0.3-0.7, 0.3-5.0, and 0.7-5.0 /spl mu/m.

Journal ArticleDOI
TL;DR: A new unified approach to object and change detection is presented that involves clustering and analyzing the distribution of pixel values within clusters over one or more images, and preliminary results show cluster-basedchange detection is less sensitive to image misregistration errors than global change detection.
Abstract: A new unified approach to object and change detection is presented that involves clustering and analyzing the distribution of pixel values within clusters over one or more images. Cluster-based anomaly detection (CBAD) can detect man-made objects that (1) are present in a single multiband image; (2) appear or disappear between two images acquired at different times; or (3) manifest themselves as spectral differences between two sets of bands acquired at the same time. Based on a Gaussian mixture model, CBAD offers an alternative to compute-intensive, sliding-window algorithms like Reed and Yu's RX algorithm for single-image object detection. It assumes that background pixel values within clusters can be modeled as Gaussian distributions about mean values that vary from cluster to cluster and that anomalies (man-made objects) have values that deviate significantly from the distribution of the cluster. This model is valid in situations where the frequency of occurrence of man-made objects is low compared to the background, so that they do not form distinct clusters but are instead split up among multiple background clusters. CBAD estimates background statistics over clusters, not sliding windows, and so can detect objects of any size or shape. This provides the flexibility of filtering detections at the object level. Examples show the ability to detect small compact objects such as vehicles as well as large, spatially extended features (e.g., built-up and bomb-damaged areas). Unlike previous approaches to change detection, which compare pixels, vectors, features, or objects, cluster-based change detection involves no direct comparison of images. In fact, it is identical to the object detection algorithm, differing only in the way it is applied. Preliminary results show cluster-based change detection is less sensitive to image misregistration errors than global change detection. The same cluster-based algorithm can also be used for cross-spectral anomaly detection. An example showing the detection of thermal anomalies in Landsat Thematic Mapper imagery is provided.
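The scoring step of the CBAD idea can be sketched as follows: fit a Gaussian (mean, covariance) per background cluster and flag pixels far, in Mahalanobis distance, from their own cluster. The clustering itself (the paper's Gaussian mixture fit) is assumed to have already produced the labels; all data below are synthetic:

```python
import numpy as np

# Sketch of the CBAD scoring step: per-cluster Gaussian background
# statistics, then a squared Mahalanobis distance per pixel. The cluster
# labels are assumed given (e.g., from a Gaussian mixture fit).

def cbad_scores(pixels: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """pixels: (n, bands) spectral vectors; labels: (n,) cluster indices."""
    scores = np.empty(len(pixels))
    for k in np.unique(labels):
        idx = labels == k
        mu = pixels[idx].mean(axis=0)
        cov = np.cov(pixels[idx], rowvar=False)
        inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))  # regularized
        diff = pixels[idx] - mu
        # Squared Mahalanobis distance of each pixel to its cluster center.
        scores[idx] = np.einsum("ij,jk,ik->i", diff, inv, diff)
    return scores

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(500, 4))  # one background cluster
anomaly = np.full((1, 4), 8.0)                    # a far-out "object" pixel
pixels = np.vstack([background, anomaly])
labels = np.zeros(len(pixels), dtype=int)         # single cluster for brevity
scores = cbad_scores(pixels, labels)              # anomaly gets the top score
```

Because the statistics are pooled per cluster rather than per sliding window, the same scores support detecting objects of any size or shape, which is the flexibility the abstract highlights.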

Journal ArticleDOI
TL;DR: A classification strategy is described that allows the identification of samples drawn from unknown classes through the application of a suitable Bayesian decision rule, based on support vector machines (SVMs) for the estimation of probability density functions and on a recursive procedure to generate prior probability estimates for known and unknown classes.
Abstract: A general problem of supervised remotely sensed image classification is that prior knowledge is assumed to be available for all the thematic classes present in the considered dataset. However, the ground-truth map representing that prior knowledge usually does not describe all the land-cover typologies in the image, and the generation of a complete training set often represents a time-consuming, difficult, and expensive task. This problem affects the performance of supervised classifiers, which erroneously assign each sample drawn from an unknown class to one of the known classes. In this paper, a classification strategy is described that allows the identification of samples drawn from unknown classes through the application of a suitable Bayesian decision rule. The proposed approach is based on support vector machines (SVMs) for the estimation of probability density functions and on a recursive procedure to generate prior probability estimates for known and unknown classes. In the experiments, both a synthetic dataset and two real datasets were used.

Journal ArticleDOI
TL;DR: There has been no demonstrable improvement in classification performance over the 15-year period, and expected relationships between classification accuracy and resolution and between accuracy and number of classes were also not observed in the data.
Abstract: A study has been carried out of 15 years of published peer-reviewed experiments on satellite image classification. The aim of the study was to assess the degree of progress being made in thematic mapping through developments in classification algorithms and also in systems approaches such as postclassification analysis, multiclassifier integration, and data fusion. The results of over 500 reported classification experiments were quantitatively analyzed. This involved examination of relationships between classification accuracy and date of publication, as well as between accuracy and various experimental parameters such as number of classes, size of feature vector, resolution of satellite data, and test area. Comparisons were also made between different types of methodology such as neural network and nonneural approaches. Overall, the results show that there has been no demonstrable improvement in classification performance over the 15-year period. The mean value of the Kappa coefficient across all experiments was found to be 0.6557 with a standard deviation of 0.1980. Expected relationships between classification accuracy and resolution and between accuracy and number of classes were also not observed in the data. Some of the implications of these findings for the future research agenda are considered.
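The Kappa coefficient the survey aggregates (mean 0.6557 across experiments) is the standard chance-corrected agreement measure computed from a classification confusion matrix. A minimal sketch with an illustrative matrix (not one of the surveyed experiments):

```python
import numpy as np

# Cohen's Kappa from a confusion matrix: observed agreement corrected
# for the agreement expected by chance. The matrix below is illustrative.

def cohens_kappa(cm: np.ndarray) -> float:
    """cm[i, j] = number of samples of reference class i labeled as class j."""
    n = cm.sum()
    po = np.trace(cm) / n                                    # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n**2      # chance agreement
    return float((po - pe) / (1.0 - pe))

cm = np.array([[45, 5],
               [10, 40]])
kappa = cohens_kappa(cm)   # about 0.70 for this matrix
```

Kappa can be well below raw accuracy (here 85% accuracy but Kappa of about 0.70), which is why the survey reports Kappa rather than percent correct when comparing experiments with different numbers of classes.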

Journal ArticleDOI
TL;DR: The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the National Aeronautics and Space Administration's Terra spacecraft has an along-track stereoscopic capability using its near-infrared spectral band to acquire the stereo data.
Abstract: The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the National Aeronautics and Space Administration's Terra spacecraft has an along-track stereoscopic capability using its near-infrared spectral band to acquire the stereo data. ASTER has two telescopes, one for nadir viewing and another for backward viewing, with a base-to-height ratio of 0.6. The spatial resolution is 15 m in the horizontal plane. Parameters such as the line-of-sight vectors and the pointing axis were adjusted during the initial operation period to generate Level-1 data products with high-quality stereo system performance. The evaluation of the digital elevation model (DEM) data was carried out by Japanese and U.S. science teams separately, using different DEM generation software and reference databases. The vertical accuracy of the DEM data generated from the Level-1A data is 20 m with 95% confidence, without ground control point (GCP) correction, for individual scenes. Geolocation accuracy, which is important for the DEM datasets, is better than 50 m; this appears to be limited by the spacecraft position accuracy. In addition, a slight increase in accuracy is observed when GCPs are used to generate the stereo data.

Journal ArticleDOI
TL;DR: Five clustering techniques are compared by classifying a polarimetric synthetic aperture radar image, and the results support the conclusion that the pixel model is more important than the clustering mechanism.
Abstract: Five clustering techniques are compared by classifying a polarimetric synthetic aperture radar image. The pixels are complex covariance matrices, which are known to have the complex Wishart distribution. Two techniques are fuzzy clustering algorithms based on the standard /spl lscr//sub 1/ and /spl lscr//sub 2/ metrics. Two others are new, combining a robust fuzzy C-means clustering technique with a distance measure based on the Wishart distribution. The fifth clustering technique is an application of the expectation-maximization algorithm assuming the data are Wishart. The clustering algorithms that are based on the Wishart are demonstrably more effective than the clustering algorithms that appeal only to the /spl lscr//sub p/ norms. The results support the conclusion that the pixel model is more important than the clustering mechanism.
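The distance measure that makes the Wishart-based variants more effective is commonly written d(Z, V) = ln|V| + tr(V^{-1} Z) for a pixel covariance matrix Z and a cluster-center covariance V. A minimal sketch (the 2x2 real matrices below are illustrative; actual PolSAR covariances are 3x3 complex Hermitian):

```python
import numpy as np

# Sketch of the Wishart distance measure used by the more effective
# clustering variants: d(Z, V) = ln|V| + tr(V^{-1} Z), where Z is a
# pixel covariance matrix and V a cluster-center covariance matrix.
# Illustrative 2x2 real matrices; real PolSAR data are 3x3 complex.

def wishart_distance(Z: np.ndarray, V: np.ndarray) -> float:
    sign, logdet = np.linalg.slogdet(V)   # V assumed positive definite
    return float(logdet + np.trace(np.linalg.solve(V, Z)).real)

def assign_cluster(Z: np.ndarray, centers: list[np.ndarray]) -> int:
    """Assign Z to the center with the smallest Wishart distance."""
    return int(np.argmin([wishart_distance(Z, V) for V in centers]))

V1 = np.array([[2.0, 0.0], [0.0, 1.0]])
V2 = np.array([[10.0, 0.0], [0.0, 10.0]])
Z = np.array([[2.1, 0.1], [0.1, 0.9]])   # statistically close to V1
cluster = assign_cluster(Z, [V1, V2])    # assigned to the V1 cluster
```

Unlike the /spl lscr//sub 1/ and /spl lscr//sub 2/ metrics, this measure follows from the Wishart likelihood of the covariance data, which is consistent with the paper's conclusion that the pixel model matters more than the clustering mechanism.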

Journal ArticleDOI
TL;DR: Comparison to monthly averaged sunphotometer data confirms that either the Terra or Aqua estimate of global AOT is a valid representation of the daily average, though in the vicinity of aerosol sources such as fires, the authors do not expect this to be true.
Abstract: Observations of the aerosol optical thickness (AOT) by the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard the Terra and Aqua satellites are being used extensively for applications to climate and air quality studies. Data quality is essential for these studies. Here we investigate the effects of unresolved clouds on the MODIS measurements of the AOT. The main cloud effect is from residual cirrus, which increases the AOT by 0.015/spl plusmn/0.003 at 0.55 /spl mu/m. In addition, lower level clouds can add contamination. We examine the effect of lower clouds using the difference between simultaneously measured MODIS and AERONET AOT. The difference is positively correlated with the cloud fraction. However, interpretation of this difference is sensitive to the definition of cloud contamination versus aerosol growth. If we consider this consistent difference between MODIS and AERONET to be entirely due to cloud contamination, we get a total cloud contamination of 0.025/spl plusmn/0.005, though a more likely estimate is closer to 0.020 after accounting for aerosol growth. This reduces the difference between the MODIS-observed global aerosol optical thickness over the oceans and model simulations by half, from 0.04 to 0.02. However, it is insignificant for studies of aerosol-cloud interaction. We also examined how representative the MODIS data are of the diurnal average aerosol. Comparison to monthly averaged sunphotometer data confirms that either the Terra or Aqua estimate of global AOT is a valid representation of the daily average, though in the vicinity of aerosol sources such as fires we do not expect this to be true.