
Showing papers in "IEEE Transactions on Geoscience and Remote Sensing in 2011"


Journal ArticleDOI
TL;DR: This paper introduces a new approach, SqueeSAR, to jointly process PS and DS, taking into account their different statistical behavior, and results on real SAR data, acquired over an Alpine area, confirm the effectiveness of this new approach.
Abstract: Permanent Scatterer SAR Interferometry (PSInSAR) aims to identify coherent radar targets exhibiting high phase stability over the entire observation time period. These targets often correspond to point-wise, man-made objects widely available over a city, but less present in non-urban areas. To overcome the limits of PSInSAR, analysis of interferometric data-stacks should aim at extracting geophysical parameters not only from point-wise deterministic objects (i.e., PS), but also from distributed scatterers (DS). Rather than developing hybrid processing chains where two or more algorithms are applied to the same data-stack, and results are then combined, in this paper we introduce a new approach, SqueeSAR, to jointly process PS and DS, taking into account their different statistical behavior. As it will be shown, PS and DS can be jointly processed without the need for significant changes to the traditional PSInSAR processing chain and without the need to unwrap hundreds of interferograms, provided that the coherence matrix associated with each DS is properly “squeezed” to provide a vector of optimum (wrapped) phase values. Results on real SAR data, acquired over an Alpine area, challenging for any InSAR analysis, confirm the effectiveness of this new approach.

1,324 citations


Journal ArticleDOI
TL;DR: Experimental results show that the proposed sparsity-based algorithm for the classification of hyperspectral imagery outperforms the classical supervised classifier support vector machines in most cases.
Abstract: A new sparsity-based algorithm for the classification of hyperspectral imagery is proposed in this paper. The proposed algorithm relies on the observation that a hyperspectral pixel can be sparsely represented by a linear combination of a few training samples from a structured dictionary. The sparse representation of an unknown pixel is expressed as a sparse vector whose nonzero entries correspond to the weights of the selected training samples. The sparse vector is recovered by solving a sparsity-constrained optimization problem, and it can directly determine the class label of the test sample. Two different approaches are proposed to incorporate the contextual information into the sparse recovery optimization problem in order to improve the classification performance. In the first approach, an explicit smoothing constraint is imposed on the problem formulation by forcing the vector Laplacian of the reconstructed image to become zero. In this approach, the reconstructed pixel of interest has similar spectral characteristics to its four nearest neighbors. The second approach is via a joint sparsity model where hyperspectral pixels in a small neighborhood around the test pixel are simultaneously represented by linear combinations of a few common training samples, which are weighted with a different set of coefficients for each pixel. The proposed sparsity-based algorithm is applied to several real hyperspectral images for classification. Experimental results show that our algorithm outperforms the classical supervised classifier support vector machines in most cases.
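As an illustration of the sparse-representation classification idea described above, the following sketch sparse-codes a test pixel against a labeled training dictionary with a generic orthogonal matching pursuit solver and assigns the class whose atoms give the smallest reconstruction residual. The solver, sparsity level, and function names are illustrative and are not the authors' exact optimization or joint/Laplacian variants.

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Greedy orthogonal matching pursuit: find a sparse x with y ~ D @ x."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

def classify_pixel(D, labels, y, n_nonzero=5):
    """Assign the class whose training atoms best reconstruct the pixel y.

    D : (bands, n_train) dictionary with l2-normalized columns
    labels : (n_train,) class id of each column
    """
    x = omp(D, y, n_nonzero)
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)
```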

1,099 citations


Journal ArticleDOI
TL;DR: The experimental results, conducted using both simulated and real hyperspectral data sets collected by the NASA Jet Propulsion Laboratory's Airborne Visible Infrared Imaging Spectrometer and spectral libraries publicly available from the U.S. Geological Survey, indicate the potential of SR techniques in the task of accurately characterizing the mixed pixels using the library spectra.
Abstract: Linear spectral unmixing is a popular tool in remotely sensed hyperspectral data interpretation. It aims at estimating the fractional abundances of pure spectral signatures (also called endmembers) in each mixed pixel collected by an imaging spectrometer. In many situations, the identification of the endmember signatures in the original data set may be challenging due to insufficient spatial resolution, mixtures happening at different scales, and unavailability of completely pure spectral signatures in the scene. However, the unmixing problem can also be approached in a semisupervised fashion, i.e., by assuming that the observed image signatures can be expressed in the form of linear combinations of a number of pure spectral signatures known in advance (e.g., spectra collected on the ground by a field spectroradiometer). Unmixing then amounts to finding the optimal subset of signatures in a (potentially very large) spectral library that can best model each mixed pixel in the scene. In practice, this is a combinatorial problem which calls for efficient linear sparse regression (SR) techniques based on sparsity-inducing regularizers, since the number of endmembers participating in a mixed pixel is usually very small compared with the (ever-growing) dimensionality (and availability) of spectral libraries. Linear SR is an area of very active research, with strong links to compressed sensing, basis pursuit (BP), BP denoising, and matching pursuit. In this paper, we study the linear spectral unmixing problem in light of recent theoretical results published in those areas. Furthermore, we provide a comparison of several available and new linear SR algorithms, with the ultimate goal of analyzing their potential in solving the spectral unmixing problem by resorting to available spectral libraries. Our experimental results, conducted using both simulated and real hyperspectral data sets collected by the NASA Jet Propulsion Laboratory's Airborne Visible Infrared Imaging Spectrometer and spectral libraries publicly available from the U.S. Geological Survey, indicate the potential of SR techniques in the task of accurately characterizing the mixed pixels using the library spectra. This opens new perspectives for spectral unmixing, since the abundance estimation process no longer depends on the availability of pure spectral signatures in the input data nor on the capacity of a certain endmember extraction algorithm to identify such pure signatures.
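A minimal sketch of library-based sparse unmixing of one pixel, using scikit-learn's Lasso with a nonnegativity constraint as a stand-in for the SR solvers compared in the paper; the regularization weight, normalization, and function name are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

def sparse_unmix(library, pixel, lam=1e-3):
    """Estimate sparse, nonnegative abundances of library spectra in one pixel.

    library : (bands, n_signatures) spectral library A
    pixel   : (bands,) observed mixed spectrum y
    Solves  min_x  (1/(2*bands)) * ||y - A x||^2 + lam * ||x||_1,  x >= 0.
    """
    model = Lasso(alpha=lam, positive=True, fit_intercept=False, max_iter=5000)
    model.fit(library, pixel)
    x = model.coef_
    return x / max(x.sum(), 1e-12)   # optional renormalization toward sum-to-one

# toy check: y built from two library members should yield two dominant abundances
# A = np.random.rand(100, 200); y = 0.6 * A[:, 3] + 0.4 * A[:, 17]
# print(np.nonzero(sparse_unmix(A, y) > 1e-3))
```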

956 citations


Journal ArticleDOI
TL;DR: An improvement to a decomposition scheme for the accurate classification of polarimetric synthetic aperture radar (POLSAR) images: applying a rotation of the coherency matrix first and the four-component decomposition subsequently yields considerably more accurate results, in which oriented urban areas are recognized as double-bounce objects rather than as volume scattering.
Abstract: This paper presents an improvement to a decomposition scheme for the accurate classification of polarimetric synthetic aperture radar (POLSAR) images. Using a rotation of the coherency matrix to minimize the cross-polarized component, the four-component scattering power decomposition is applied to fully polarimetric SAR images. It is known that oriented urban area and vegetation signatures are decomposed into the same volume scattering mechanism in the previous decompositions and that it is difficult to distinguish vegetation from oblique urban areas with respect to the radar direction of illumination within the volume scattering mechanism. It is desirable to distinguish these two scattering mechanisms for accurate classification although they exhibit similar polarimetric responses. The new decomposition scheme, which applies a rotation of the coherency matrix first and the four-component decomposition subsequently, yields considerably more accurate results, in which oriented urban areas are recognized as double-bounce objects rather than as volume scattering.
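The deorientation step described above can be sketched as follows, using the rotation angle commonly given in the literature for minimizing the cross-polarized coherency term; the subsequent four-component power decomposition is omitted, and the function name is illustrative.

```python
import numpy as np

def deorient_coherency(T):
    """Rotate a 3x3 coherency matrix to minimize the cross-pol term T33.

    Uses the standard deorientation angle
    theta = 0.25 * atan2(2*Re(T23), T22 - T33);
    the four-component decomposition would then be applied to the rotated matrix.
    """
    theta = 0.25 * np.arctan2(2.0 * T[1, 2].real, T[1, 1].real - T[2, 2].real)
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    R = np.array([[1, 0, 0],
                  [0, c, s],
                  [0, -s, c]], dtype=complex)
    return R @ T @ R.conj().T, theta
```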

473 citations


Journal ArticleDOI
TL;DR: Experiments showed that this approach can resolve spectral distortion problems and successfully conserve the spatial information of a PAN image and give higher fusion quality than the images from some other methods.
Abstract: Preservation of spectral information and enhancement of spatial resolution are regarded as important issues in remote sensing satellite image fusion. In previous research, various algorithms have been proposed. Although they have been successful, there is still room for improvement in both spatial and spectral quality. In addition, a new method that can be used for various types of sensors is required. In this paper, a new adaptive fusion method based on component substitution is proposed to merge a high-spatial-resolution panchromatic (PAN) image with a multispectral image. This method generates high-/low-resolution synthetic component images by partial replacement and uses statistical ratio-based high-frequency injection. Various remote sensing satellite images, such as IKONOS-2, QuickBird, LANDSAT ETM+, and SPOT-5, were employed in the evaluation. Experiments showed that this approach can resolve spectral distortion problems and successfully conserve the spatial information of a PAN image. Thus, the fused image obtained from the proposed method gave higher fusion quality than the images from some other methods. In addition, the proposed method worked efficiently with the different sensors considered in the evaluation.

442 citations


Journal ArticleDOI
TL;DR: This paper documents the CERES Edition-2 cloud property retrieval system used to analyze data from the Tropical Rainfall Measuring Mission Visible and Infrared Scanner and the MODerate-resolution Imaging Spectroradiometer instruments on board the Terra and Aqua satellites covering the period 1998 through 2007.
Abstract: The National Aeronautics and Space Administration's Clouds and the Earth's Radiant Energy System (CERES) Project was designed to improve our understanding of the relationship between clouds and solar and longwave radiation. This is achieved using satellite broad-band instruments to map the top-of-atmosphere radiation fields with coincident data from satellite narrow-band imagers employed to retrieve the properties of clouds associated with those fields. This paper documents the CERES Edition-2 cloud property retrieval system used to analyze data from the Tropical Rainfall Measuring Mission Visible and Infrared Scanner and the MODerate-resolution Imaging Spectroradiometer instruments on board the Terra and Aqua satellites covering the period 1998 through 2007. Two daytime retrieval methods are explained: the Visible Infrared Shortwave-infrared Split-window Technique for snow-free surfaces and the Shortwave-infrared Infrared Near-infrared Technique for snow or ice-covered surfaces. The Shortwave-infrared Infrared Split-window Technique is used for all surfaces at night. These methods, along with the ancillary data and empirical parameterizations of cloud thickness, are used to derive cloud boundaries, phase, optical depth, effective particle size, and condensed/frozen water path at both pixel and CERES footprint levels. Additional information is presented, detailing the potential effects of satellite calibration differences, highlighting methods to compensate for spectral differences and correct for atmospheric absorption and emissivity, and discussing known errors in the code. Because a consistent set of algorithms, auxiliary input, and calibrations across platforms is used, instrument- and algorithm-induced changes in the data record are minimized. This facilitates the use of the CERES data products for studying climate-scale trends.

430 citations


Journal ArticleDOI
TL;DR: A new supervised Bayesian approach to hyperspectral image segmentation with active learning, which consists of a multinomial logistic regression model to learn the class posterior probability distributions and a new active sampling approach, called modified breaking ties, which is able to provide an unbiased sampling.
Abstract: This paper introduces a new supervised Bayesian approach to hyperspectral image segmentation with active learning, which consists of two main steps. First, we use a multinomial logistic regression (MLR) model to learn the class posterior probability distributions. This is done by using a recently introduced logistic regression via splitting and augmented Lagrangian algorithm. Second, we use the information acquired in the previous step to segment the hyperspectral image using a multilevel logistic prior that encodes the spatial information. In order to reduce the cost of acquiring large training sets, active learning is performed based on the MLR posterior probabilities. Another contribution of this paper is the introduction of a new active sampling approach, called modified breaking ties, which is able to provide an unbiased sampling. Furthermore, we have implemented our proposed method in an efficient way. For instance, in order to obtain the time-consuming maximum a posteriori segmentation, we use the α-expansion min-cut-based integer optimization algorithm. The state-of-the-art performance of the proposed approach is illustrated using both simulated and real hyperspectral data sets in a number of experimental comparisons with recently introduced hyperspectral image analysis methods.
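The uncertainty-driven sampling step can be illustrated with a compact sketch of breaking-ties selection from the MLR posteriors: samples with the smallest gap between the two largest class probabilities are queried first. The paper's modified breaking ties additionally aims at unbiased sampling across classes, which this simplified version does not reproduce; batch size and names are illustrative.

```python
import numpy as np

def breaking_ties(posteriors, batch_size):
    """Rank unlabeled samples by the gap between the two largest class posteriors.

    posteriors : (n_samples, n_classes) class probabilities (e.g., from MLR)
    Returns indices of the `batch_size` most uncertain samples (smallest gap).
    """
    top2 = np.partition(posteriors, -2, axis=1)[:, -2:]   # two largest per row
    gap = top2[:, 1] - top2[:, 0]                          # p_max - p_second
    return np.argsort(gap)[:batch_size]
```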

414 citations


Journal ArticleDOI
TL;DR: First results on the simultaneous superposition of SatAIS and high-resolution radar images are presented and the velocity of a moving ship is estimated using complex TS-X data.
Abstract: Ship detection is an important application of global monitoring of environment and security. In order to overcome the limitations of other systems, surveillance with satellite synthetic aperture radar (SAR) is used because of its ability to provide ship detection at high resolution over wide swaths and in all weather conditions. A new X-band radar onboard the TerraSAR-X (TS-X) satellite gives access to spatial resolution as fine as 1 m. In this paper, first results on the combined use of TS-X ship detection, automatic identification system (AIS), and satellite AIS (SatAIS) are presented. The AIS system is an effective terrestrial method for tracking vessels in real time typically up to 40 km off the coast. SatAIS, as a space-based system, allows almost global coverage for monitoring of ships. Since not all ships operate their AIS and smaller ships are not equipped with AIS, the system is considered to be of a cooperative nature. In this paper, the quality of TS-X images with respect to ship detection is evaluated, and a first assessment of its performance for ship detection is given. The velocity of a moving ship is estimated using complex TS-X data. As test cases, images were acquired over the North Sea, Baltic Sea, Atlantic Ocean, and Pacific Ocean in Stripmap mode with a resolution of 3 m at a coverage of 30 km × 100 km. Simultaneous information on ship positions was available from TS-X and terrestrial as well as SatAIS. First results on the simultaneous superposition of SatAIS and high-resolution radar images are presented.

405 citations


Journal ArticleDOI
TL;DR: In this paper, the L1/2 sparsity constraint is added to nonnegative matrix factorization (NMF) for hyperspectral unmixing, which is known as L1/2-NMF.
Abstract: Hyperspectral unmixing is a crucial preprocessing step for material classification and recognition. In the last decade, nonnegative matrix factorization (NMF) and its extensions have been intensively studied to unmix hyperspectral imagery and recover the material end-members. As an important constraint for NMF, sparsity has been modeled making use of the L1 regularizer. Unfortunately, the L1 regularizer cannot enforce further sparsity when the full additivity constraint of material abundances is used, hence limiting the practical efficacy of NMF methods in hyperspectral unmixing. In this paper, we extend the NMF method by incorporating the L1/2 sparsity constraint, which we name L1/2-NMF. The L1/2 regularizer not only induces sparsity but is also a better choice among Lq (0 < q < 1) regularizers. We propose an iterative estimation algorithm for L1/2-NMF, which provides sparser and more accurate results than those delivered using the L1 norm. We illustrate the utility of our method on synthetic and real hyperspectral data and compare our results to those yielded by other state-of-the-art methods.
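A compact sketch of multiplicative NMF updates with an L1/2 penalty on the abundance matrix, following the general form of sparsity-constrained NMF; the exact update rule, constants, initialization, and stopping criterion used in the paper may differ, so this is only an illustration of the idea.

```python
import numpy as np

def l_half_nmf(X, p, lam=0.1, n_iter=500, eps=1e-9, seed=0):
    """Unmix X (bands x pixels) into endmembers A (bands x p) and abundances
    S (p x pixels) with an L1/2 sparsity penalty on S, via multiplicative updates."""
    rng = np.random.default_rng(seed)
    A = rng.random((X.shape[0], p))
    S = rng.random((p, X.shape[1]))
    for _ in range(n_iter):
        A *= (X @ S.T) / (A @ S @ S.T + eps)
        # the L1/2 term contributes 0.5*lam*S^(-1/2) to the denominator
        S *= (A.T @ X) / (A.T @ A @ S + 0.5 * lam / np.sqrt(np.maximum(S, eps)) + eps)
    return A, S
```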

405 citations


Journal ArticleDOI
Shutao Li1, Bin Yang1
TL;DR: The experimental results show that the proposed method can well preserve spectral and spatial details of the source images and is competitive or even superior to those images fused by other well-known methods.
Abstract: This paper addresses the remote sensing image pan-sharpening problem from the perspective of compressed sensing (CS) theory, which ensures that, with the sparsity regularization, a compressible signal can be correctly recovered from the globally sampled linear data. First, the degradation model from a high- to low-resolution multispectral (MS) image and high-resolution panchromatic (PAN) image is constructed as a linear sampling process which is formulated as a matrix. Then, the model matrix is considered as the measurement matrix in CS, so pan-sharpening is converted into a signal restoration problem with sparsity regularization. Finally, the basis pursuit (BP) algorithm is used to solve the restoration problem, which can recover the high-resolution MS image effectively. The QuickBird and IKONOS satellite images are used to test the proposed method. The experimental results show that the proposed method can well preserve spectral and spatial details of the source images. The pan-sharpened high-resolution MS image by the proposed method is competitive or even superior to those images fused by other well-known methods.

390 citations


Journal ArticleDOI
TL;DR: A new denoising method is proposed for hyperspectral data cubes that already have a reasonably good signal-to-noise ratio (SNR) (such as 600 : 1), using principal component analysis (PCA) and removing the noise in the low-energy PCA output channels.
Abstract: In this paper, a new denoising method is proposed for hyperspectral data cubes that already have a reasonably good signal-to-noise ratio (SNR) (such as 600 : 1). Given this level of the SNR, the noise level of the data cubes is relatively low. The conventional image denoising methods are likely to remove the fine features of the data cubes during the denoising process. We propose to decorrelate the image information of hyperspectral data cubes from the noise by using principal component analysis (PCA) and removing the noise in the low-energy PCA output channels. The first PCA output channels contain a majority of the total energy of a data cube, and the remaining PCA output channels contain a small amount of energy. It is believed that the low-energy channels also contain a large amount of noise. Removing noise in the low-energy PCA output channels will not harm the fine features of the data cubes. A 2-D bivariate wavelet thresholding method is used to remove the noise for low-energy PCA channels, and a 1-D dual-tree complex wavelet transform denoising method is used to remove the noise of the spectrum of each pixel of the data cube. Experimental results demonstrated that the proposed denoising method produces better denoising results than other denoising methods published in the literature.
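A simplified sketch of the PCA-then-denoise idea: the cube is decorrelated by PCA, the high-energy components are left untouched, and only the low-energy components are filtered. Plain 2-D wavelet soft thresholding stands in for the paper's bivariate shrinkage and dual-tree complex wavelet steps; the number of preserved components, wavelet, and threshold are illustrative.

```python
import numpy as np
import pywt

def pca_wavelet_denoise(cube, n_keep=3, wavelet="db4", thr=0.02):
    """Denoise a hyperspectral cube of shape (rows, cols, bands)."""
    r, c, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA via eigendecomposition of the band covariance, sorted by energy
    w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    V = V[:, np.argsort(w)[::-1]]
    pcs = (Xc @ V).reshape(r, c, b)
    for k in range(n_keep, b):                 # denoise only low-energy channels
        coeffs = pywt.wavedec2(pcs[:, :, k], wavelet, level=3)
        coeffs = [coeffs[0]] + [tuple(pywt.threshold(d, thr, mode="soft")
                                      for d in lvl) for lvl in coeffs[1:]]
        pcs[:, :, k] = pywt.waverec2(coeffs, wavelet)[:r, :c]
    return (pcs.reshape(-1, b) @ V.T + mean).reshape(r, c, b)
```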

Journal ArticleDOI
TL;DR: A generalized bilinear model and a hierarchical Bayesian algorithm for unmixing hyperspectral images and a Metropolis-within-Gibbs algorithm is proposed, which allows samples distributed according to this posterior to be generated and to estimate the unknown model parameters.
Abstract: Nonlinear models have recently shown interesting properties for spectral unmixing. This paper studies a generalized bilinear model and a hierarchical Bayesian algorithm for unmixing hyperspectral images. The proposed model is a generalization not only of the accepted linear mixing model but also of a bilinear model that has been recently introduced in the literature. Appropriate priors are chosen for its parameters to satisfy the positivity and sum-to-one constraints for the abundances. The joint posterior distribution of the unknown parameter vector is then derived. Unfortunately, this posterior is too complex to obtain analytical expressions of the standard Bayesian estimators. As a consequence, a Metropolis-within-Gibbs algorithm is proposed, which allows samples distributed according to this posterior to be generated and to estimate the unknown model parameters. The performance of the resulting unmixing strategy is evaluated via simulations conducted on synthetic and real data.

Journal ArticleDOI
TL;DR: Following the SMOS launch, a downscaling strategy for the estimation of soil moisture at high resolution from SMOS using MODIS VIS/IR data has been developed and is validated against in situ soil moisture data from the OZnet soil moisture monitoring network, in South-Eastern Australia.
Abstract: A downscaling approach to improve the spatial resolution of Soil Moisture and Ocean Salinity (SMOS) soil moisture estimates with the use of higher resolution visible/infrared (VIS/IR) satellite data is presented. The algorithm is based on the so-called “universal triangle” concept that relates VIS/IR parameters, such as the Normalized Difference Vegetation Index (NDVI), and Land Surface Temperature (Ts), to the soil moisture status. It combines the accuracy of SMOS observations with the high spatial resolution of VIS/IR satellite data into accurate soil moisture estimates at high spatial resolution. In preparation for the SMOS launch, the algorithm was tested using observations of the UPC Airborne RadIomEter at L-band (ARIEL) over the Soil Moisture Measurement Network of the University of Salamanca (REMEDHUS) in Zamora (Spain), and LANDSAT imagery. Results showed fairly good agreement with ground-based soil moisture measurements and illustrated the strength of the link between VIS/IR satellite data and soil moisture status. Following the SMOS launch, a downscaling strategy for the estimation of soil moisture at high resolution from SMOS using MODIS VIS/IR data has been developed. The method has been applied to some of the first SMOS images acquired during the commissioning phase and is validated against in situ soil moisture data from the OZnet soil moisture monitoring network, in South-Eastern Australia. Results show that the soil moisture variability is effectively captured at 10 and 1 km spatial scales without a significant degradation of the root mean square error.
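The "universal triangle" link between VIS/IR observables and soil moisture can be sketched, under strong simplifying assumptions, as a polynomial regression fitted at the radiometer scale and evaluated on the fine-scale NDVI and LST fields. The normalization, polynomial order, and function name below are illustrative and do not reproduce the paper's calibration or the SMOS/MODIS processing chain.

```python
import numpy as np

def downscale_soil_moisture(sm_coarse, ndvi_coarse, lst_coarse,
                            ndvi_fine, lst_fine, degree=2):
    """Fit SM = f(NDVI, LST) at the coarse scale, evaluate at the fine scale.

    All inputs are 1-D arrays of valid pixels (coarse-scale arrays co-located
    with the radiometer footprints, fine-scale arrays at the target grid).
    """
    n0, n1 = ndvi_coarse.min(), ndvi_coarse.max()
    t0, t1 = lst_coarse.min(), lst_coarse.max()

    def design(ndvi, lst):
        n = (ndvi - n0) / (n1 - n0 + 1e-12)   # normalized NDVI
        t = (lst - t0) / (t1 - t0 + 1e-12)    # normalized LST
        return np.column_stack([n**i * t**j
                                for i in range(degree + 1)
                                for j in range(degree + 1)])

    coef, *_ = np.linalg.lstsq(design(ndvi_coarse, lst_coarse), sm_coarse, rcond=None)
    return design(ndvi_fine, lst_fine) @ coef
```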

Journal ArticleDOI
TL;DR: A high-resolution imaging system based on the combination of ultrawideband transmission, multiple-input-multiple-output (MIMO) array, and synthetic aperture radar (SAR) is suggested and studied, showing a strong potential of the MIMO-SAR-based UWB system for security applications.
Abstract: A high-resolution imaging system based on the combination of ultrawideband (UWB) transmission, multiple-input-multiple-output (MIMO) array, and synthetic aperture radar (SAR) is suggested and studied. Starting from the resolution requirements, spatial sampling criteria for nonmonochromatic waves are investigated. Exploring the decisive influence of the system's fractional bandwidth (instead of previously claimed aperture sparsity) on the imaging capabilities of sparse aperture arrays, a MIMO linear array is designed based on the principle of effective aperture. For the antenna array, an optimized UWB antenna is designed allowing for distortionless impulse radiation with more than 150% fractional bandwidth. By combining the digital beamforming in the MIMO array with the SAR in the orthogonal direction, a high-resolution 3-D volumetric imaging system with a significantly reduced number of antenna elements is proposed. The proposed imaging system is experimentally verified against the conventional 2-D SAR under different conditions, including a typical concealed-weapon-detection scenario. The imaging results confirm the correctness of the proposed system design and show a strong potential of the MIMO-SAR-based UWB system for security applications.

Journal ArticleDOI
David Small1
TL;DR: The ASAR & PALSAR sensors provide state vectors and timing with higher absolute accuracy than was previously available, allowing them to directly support accurate tie-point-free geolocation and radiometric normalization of their imagery.
Abstract: Enabling intercomparison of synthetic aperture radar (SAR) imagery acquired from different sensors or acquisition modes requires accurate modeling of not only the geometry of each scene, but also of systematic influences on the radiometry of individual scenes. Terrain variations affect not only the position of a given point on the Earth's surface but also the brightness of the radar return as expressed in radar geometry. Without treatment, the hill-slope modulations of the radiometry threaten to overwhelm weaker thematic land cover induced backscatter differences, and comparison of backscatter from multiple satellites, modes, or tracks loses meaning. The ASAR & PALSAR sensors provide state vectors and timing with higher absolute accuracy than was previously available, allowing them to directly support accurate tie-point-free geolocation and radiometric normalization of their imagery. Given accurate knowledge of the acquisition geometry of a SAR image together with a digital height model (DHM) of the area imaged, radiometric image simulation is applied to estimate the local illuminated area for each point in the image. Ellipsoid-based or sigma naught (σ0) based incident angle approximations that fail to reproduce the effect of topographic variation in their sensor model are contrasted with a new method that integrates terrain variations with the concept of gamma naught (γ0) backscatter, converting directly from beta naught (β0) to a newly introduced terrain-flattened γ0 normalization convention. The interpretability of imagery treated in this manner is improved in comparison to processing based on conventional ellipsoid or local incident angle based σ0 normalization.

Journal ArticleDOI
TL;DR: The influence of the algorithm used to enforce independence and of the number of ICs retained for the classification of hyperspectral images is studied, proposing an effective method to estimate the most suitable number.
Abstract: In this paper, the use of Independent Component (IC) Discriminant Analysis (ICDA) for remote sensing classification is proposed. ICDA is a nonparametric method for discriminant analysis based on the application of a Bayesian classification rule on a signal composed of ICs. The method uses IC Analysis (ICA) to choose a transform matrix so that the transformed components are as independent as possible. When the data are projected in an independent space, the estimates of their multivariate density function can be computed in a much easier way as the product of univariate densities. A nonparametric kernel density estimator is used to compute the density function of each IC. Finally, the Bayes rule is applied for the classification assignment. In this paper, we investigate the possibility of using ICDA for the classification of hyperspectral images. We study the influence of the algorithm used to enforce independence and of the number of ICs retained for the classification, proposing an effective method to estimate the most suitable number. The proposed method is applied to several hyperspectral images, in order to test different data set conditions (urban/agricultural area, size of the training set, and type of sensor). The obtained results are compared with one of the most commonly used classifiers for hyperspectral images (support vector machines) and show the comparative effectiveness of the proposed method in terms of accuracy.
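The ICDA pipeline described above (ICA projection, per-component kernel densities, Bayes rule) can be outlined as follows; the number of ICs, the kernel bandwidth, and the class/field names are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neighbors import KernelDensity

class ICDAClassifier:
    """Sketch of IC Discriminant Analysis: project pixels onto ICs, model each
    class as a product of univariate kernel densities over the ICs, and
    classify with the Bayes rule."""

    def __init__(self, n_components=10, bandwidth=0.2):
        self.ica = FastICA(n_components=n_components, random_state=0)
        self.bandwidth = bandwidth

    def fit(self, X, y):
        S = self.ica.fit_transform(X)
        self.classes_ = np.unique(y)
        self.priors_, self.kdes_ = {}, {}
        for c in self.classes_:
            Sc = S[y == c]
            self.priors_[c] = Sc.shape[0] / X.shape[0]
            self.kdes_[c] = [KernelDensity(bandwidth=self.bandwidth).fit(Sc[:, [j]])
                             for j in range(S.shape[1])]
        return self

    def predict(self, X):
        S = self.ica.transform(X)
        # log posterior (up to a constant) = log prior + sum of per-IC log densities
        scores = np.column_stack([
            np.log(self.priors_[c]) + sum(kde.score_samples(S[:, [j]])
                                          for j, kde in enumerate(self.kdes_[c]))
            for c in self.classes_])
        return self.classes_[np.argmax(scores, axis=1)]
```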

Journal ArticleDOI
TL;DR: This paper proposes a simple modification that ensures that all covariance matrices in the decomposition will have nonnegative eigenvalues, and combines their nonnegative eigenvalue decomposition with eigenvector decomposition to remove additional assumptions.
Abstract: Model-based decomposition of polarimetric radar covariance matrices holds the promise that specific scattering mechanisms can be isolated for further quantitative analysis. In this paper, we show that current algorithms suffer from a fatal flaw in that some of the scattering components result in negative powers. We propose a simple modification that ensures that all covariance matrices in the decomposition will have nonnegative eigenvalues. We further combine our nonnegative eigenvalue decomposition with eigenvector decomposition to remove additional assumptions that have to be made before the current algorithms can be used to estimate all the scattering components. Our results are illustrated using Airborne Synthetic Aperture Radar data and show that current algorithms typically overestimate the canopy scattering contribution by 10%-20%.
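The core constraint described above, that the remainder matrix must keep nonnegative eigenvalues after the volume component is subtracted, can be illustrated numerically by finding the largest admissible volume power by bisection. This is only a stand-in for the paper's closed-form nonnegative eigenvalue solution; the function name, model matrix, and tolerance are illustrative.

```python
import numpy as np

def max_volume_power(T, T_vol):
    """Largest f such that T - f * T_vol stays positive semidefinite.

    T     : measured 3x3 coherency/covariance matrix
    T_vol : unit-power volume-scattering model matrix (positive semidefinite)
    """
    def is_psd(M):
        return np.all(np.linalg.eigvalsh((M + M.conj().T) / 2) >= -1e-12)

    lo = 0.0
    hi = np.real(np.trace(T)) / max(np.real(np.trace(T_vol)), 1e-12)  # trace bound
    for _ in range(60):                       # bisection on the scale factor
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if is_psd(T - mid * T_vol) else (lo, mid)
    return lo
```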

Journal ArticleDOI
TL;DR: A novel 3-D SAR imaging approach based on Compressive Sampling theory is presented that allows super-resolution imaging, overcoming the limitation imposed by the overall baseline span.
Abstract: Three-dimensional synthetic aperture radar (SAR) image formation provides the scene reflectivity estimation along azimuth, range, and elevation coordinates. It is based on multipass SAR data usually obtained from nonuniformly spaced acquisition orbits. A common 3-D SAR focusing approach is Fourier-based SAR tomography, but this technique brings about image quality problems because of the low number of acquisitions and their irregular spacing. Moreover, the attained resolution in elevation is limited by the overall baseline extent of the acquisitions. In this paper, a novel 3-D SAR imaging approach based on Compressive Sampling theory is presented. It is shown that since the image to be focused usually has a sparse representation along the elevation direction (i.e., only a few scatterers with different elevation are present in the same range-azimuth resolution cell), it suffices to have a small number of measurements to construct the 3-D image. Furthermore, the method allows super-resolution imaging, overcoming the limitation imposed by the overall baseline span. Tomographic imaging is performed by solving an optimization problem which enforces sparsity through l1-norm minimization. Numerical results on simulated and real data validate the method and have been compared with the truncated singular value decomposition technique.
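A minimal sketch of the l1-regularized elevation-profile reconstruction for a single range-azimuth cell, using the standard multibaseline steering model and a plain iterative soft-thresholding (ISTA) solver instead of the paper's optimization; all parameters and names are illustrative.

```python
import numpy as np

def tomo_focus_l1(y, baselines, s_grid, wavelength, r0, lam=0.05, n_iter=300):
    """Sparse elevation profile from multibaseline SAR samples of one cell.

    y         : complex vector of the cell across the N acquisitions
    baselines : orthogonal baselines b_n (meters)
    s_grid    : candidate elevations s_k (meters)
    Uses the steering model exp(j*4*pi*b*s/(wavelength*r0)) and ISTA for
    min_x 0.5*||y - A x||^2 + lam*||x||_1.
    """
    A = np.exp(1j * 4 * np.pi / (wavelength * r0) * np.outer(baselines, s_grid))
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # ISTA step size (1 / sigma_max^2)
    x = np.zeros(len(s_grid), dtype=complex)
    for _ in range(n_iter):
        g = x + step * A.conj().T @ (y - A @ x)     # gradient step
        mag = np.abs(g)
        # complex soft thresholding: shrink magnitudes, keep phases
        x = np.where(mag > 0, g / np.maximum(mag, 1e-15), 0) * np.maximum(mag - step * lam, 0)
    return x                                        # sparse reflectivity along elevation
```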

Journal ArticleDOI
TL;DR: This paper investigates different batch-mode active-learning techniques for the classification of remote sensing images with support vector machines and proposes a novel query function that is based on a kernel-clustering technique for assessing the diversity of samples and a new strategy for selecting the most informative representative sample from each cluster.
Abstract: This paper investigates different batch-mode active-learning (AL) techniques for the classification of remote sensing (RS) images with support vector machines. This is done by generalizing to multiclass problems techniques originally defined for binary classifiers. The investigated techniques exploit different query functions, which are based on the evaluation of two criteria: uncertainty and diversity. The uncertainty criterion is associated with the confidence of the supervised algorithm in correctly classifying the considered sample, while the diversity criterion aims at selecting a set of unlabeled samples that are as diverse (i.e., distant from one another) as possible, thus reducing the redundancy among the selected samples. The combination of the two criteria results in the selection of the potentially most informative set of samples at each iteration of the AL process. Moreover, we propose a novel query function that is based on a kernel-clustering technique for assessing the diversity of samples and a new strategy for selecting the most informative representative sample from each cluster. The investigated and proposed techniques are theoretically and experimentally compared with state-of-the-art methods adopted for RS applications. This is accomplished by considering very high resolution multispectral and hyperspectral images. By this comparison, we observed that the proposed method resulted in better accuracy with respect to other investigated and state-of-the-art methods on both the considered data sets. Furthermore, we derived some guidelines on the design of AL systems for the classification of different types of RS images.

Journal ArticleDOI
TL;DR: Results indicate that the algorithm has the potential to obtain better soil-moisture accuracy at a high resolution and show an improvement in root-mean-square error of 0.015-0.02-cm3/cm3 volumetric soil moisture over the minimum performance taken to be retrievals based on radiometer measurements resampled to a finer scale.
Abstract: A robust and simple algorithm is developed to merge L-band radiometer retrievals and L-band radar observations to obtain high-resolution (9-km) soil-moisture estimates from data of the NASA Soil Moisture Active and Passive (SMAP) mission. The algorithm exploits the established accuracy of coarse-scale radiometer soil-moisture retrievals and blends this with the fine-scale spatial heterogeneity detectable by radar observations to produce a high-resolution optimal soil-moisture estimate at 9 km. The capability of the algorithm is demonstrated by implementing the approach using the airborne Passive and Active L-band System (PALS) instrument data set from Soil Moisture Experiments 2002 (SMEX02) and a four-month synthetic data set in an Observation System Simulation Experiment (OSSE) framework. The results indicate that the algorithm has the potential to obtain better soil-moisture accuracy at a high resolution and show an improvement in root-mean-square error of 0.015-0.02-cm3/cm3 volumetric soil moisture over the minimum performance taken to be retrievals based on radiometer measurements resampled to a finer scale. These results are based on PALS data from SMEX02 and a four-month OSSE data set and need to be further confirmed for different hydroclimatic regions using airborne data sets from prelaunch calibration/validation field campaigns of the SMAP mission.

Journal ArticleDOI
Bo Du1, Liangpei Zhang1
TL;DR: This paper proposes an anomaly detection method based on the random selection of background pixels, the random-selection-based anomaly detector (RSAD), which shows a better performance than the current hyperspectral anomaly detection algorithms and also outperforms its real-time counterparts.
Abstract: Anomaly detection in hyperspectral images is of great interest in the target detection domain since it requires no prior information and makes full use of the spectral differences revealed in hyperspectral images. The current anomaly detection methods are susceptible to anomalies in the processing window range or the image scope. In addition, for the local anomaly detection methods themselves, it is difficult to determine the window size suitable for processing background statistics. This paper proposes an anomaly detection method based on the random selection of background pixels, the random-selection-based anomaly detector (RSAD). Pixels are randomly selected from the image scene to represent the background statistics; the random selections are performed a sufficient number of times; blocked adaptive computationally efficient outlier nominators are used to detect anomalies each time after a proper subset of background pixels is selected; finally, a fusion procedure is employed to avoid contamination of the background statistics by anomaly pixels. In addition, the real-time implementation of the RSAD is also developed by random selection from updating data and QR decomposition. Several hyperspectral data sets are used in the experiments, and the RSAD shows a better performance than the current hyperspectral anomaly detection algorithms. The real-time version also outperforms its real-time counterparts.
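A simplified sketch of the random-selection idea: background statistics are repeatedly estimated from random pixel subsets, each pixel is scored by its Mahalanobis distance to each background estimate, and the per-trial scores are fused. The blocked adaptive outlier nominator and the real-time QR-based update of the paper are not reproduced; subset size, number of trials, and the median fusion are illustrative choices.

```python
import numpy as np

def rsad_scores(cube, n_trials=20, frac=0.01, rng=None):
    """Random-selection-based anomaly scores for a cube of shape (rows, cols, bands)."""
    rng = rng or np.random.default_rng(0)
    X = cube.reshape(-1, cube.shape[-1]).astype(float)
    n = X.shape[0]
    scores = np.empty((n_trials, n))
    for t in range(n_trials):
        # random background subset (at least bands+1 pixels for a usable covariance)
        idx = rng.choice(n, size=max(int(frac * n), cube.shape[-1] + 1), replace=False)
        mu = X[idx].mean(axis=0)
        cov_inv = np.linalg.pinv(np.cov(X[idx], rowvar=False))
        d = X - mu
        scores[t] = np.einsum("ij,jk,ik->i", d, cov_inv, d)   # Mahalanobis distance
    return np.median(scores, axis=0).reshape(cube.shape[:2])  # fuse over trials
```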

Journal ArticleDOI
TL;DR: A statistically grounded patch-similarity criterion suitable to SLC images is derived and a weighted maximum likelihood estimation of the SAR interferogram is then computed with weights derived in a data-driven way.
Abstract: Interferometric synthetic aperture radar (SAR) data provide reflectivity, interferometric phase, and coherence images, which are paramount to scene interpretation or low-level processing tasks such as segmentation and 3-D reconstruction. These images are estimated in practice from a Hermitian product on local windows. These windows lead to biases and resolution losses due to the local heterogeneity caused by edges and textures. This paper proposes a nonlocal approach for the joint estimation of the reflectivity, the interferometric phase, and the coherence images from an interferometric pair of coregistered single-look complex (SLC) SAR images. Nonlocal techniques are known to efficiently reduce noise while preserving structures by performing the weighted averaging of similar pixels. Two pixels are considered similar if the surrounding image patches are “resembling.” Patch similarity is usually defined as the Euclidean distance between the vectors of graylevels. In this paper, a statistically grounded patch-similarity criterion suitable to SLC images is derived. A weighted maximum likelihood estimation of the SAR interferogram is then computed with weights derived in a data-driven way. Weights are defined from the intensity and interferometric phase and are iteratively refined based both on the similarity between noisy patches and on the similarity of patches from the previous estimate. The efficiency of this new interferogram construction technique is illustrated both qualitatively and quantitatively on synthetic and true data.

Journal ArticleDOI
TL;DR: This paper presents an approach to select objectively parameters for a region growing segmentation technique to outline landslides as individual segments and also addresses the scale dependence of landslides and false positives occurring in a natural landscape.
Abstract: To detect landslides by object-based image analysis using criteria based on shape, color, texture, and, in particular, contextual information and process knowledge, candidate segments must be delineated properly. This has proved challenging in the past, since segments are mainly created using spectral and size criteria that are not consistent for landslides. This paper presents an approach to objectively select parameters for a region growing segmentation technique to outline landslides as individual segments and also addresses the scale dependence of landslides and false positives occurring in a natural landscape. Multiple scale parameters were determined using a plateau objective function derived from the spatial autocorrelation and intrasegment variance analysis, allowing for differently sized features to be identified. While a high-resolution Resourcesat-1 Linear Imaging and Self Scanning Sensor IV (5.8 m) multispectral image was used to create segments for landslide recognition, terrain curvature derived from a digital terrain model based on Cartosat-1 (2.5 m) data was used to create segments for subsequent landslide classification. Here, optimal segments were used in a knowledge-based classification approach with the thresholds of diagnostic parameters derived from K-means cluster analysis, to detect landslides of five different types, with an overall recognition accuracy of 76.9%. The approach, when tested in a geomorphologically dissimilar area, recognized landslides with an overall accuracy of 77.7%, without modification to the methodology. The multiscale classification-based segment optimization procedure was also able to reduce the error of commission significantly in comparison to a single-optimal-scale approach.

Journal ArticleDOI
TL;DR: Comprehensive evaluation of efficiency, distribution quality, and positional accuracy of the extracted point pairs proves the capabilities of the proposed matching algorithm on a variety of optical remote sensing images.
Abstract: Extracting well-distributed, reliable, and precisely aligned point pairs for accurate image registration is a difficult task, particularly for multisource remote sensing images that have significant illumination, rotation, and scene differences. The scale-invariant feature transform (SIFT) approach, as a well-known feature-based image matching algorithm, has been successfully applied in a number of automatic registration tasks for remote sensing images. Despite its distinctiveness and robustness, the SIFT algorithm suffers from some problems in the quality, quantity, and distribution of extracted features, particularly in multisource remote sensing imagery. In this paper, an improved SIFT algorithm is introduced that is fully automated and applicable to various kinds of optical remote sensing images, even those that differ in scale by a factor of five. The key to the proposed approach is a selection strategy for SIFT features over the full distribution of location and scale, where the feature qualities are guaranteed based on the stability and distinctiveness constraints. Then, the extracted features are introduced to an initial cross-matching process followed by a consistency check in the projective transformation model. Comprehensive evaluation of efficiency, distribution quality, and positional accuracy of the extracted point pairs proves the capabilities of the proposed matching algorithm on a variety of optical remote sensing images.
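For orientation, the cross-matching and projective consistency check mentioned above correspond to the standard SIFT + ratio-test + RANSAC homography pipeline sketched below; this baseline does not include the paper's improved feature-selection strategy, and the ratio and reprojection threshold are illustrative.

```python
import cv2
import numpy as np

def register_sift_ransac(img_ref, img_sens, ratio=0.75):
    """Baseline SIFT matching between two 8-bit grayscale images with a
    RANSAC-based projective (homography) consistency check."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_ref, None)
    kp2, des2 = sift.detectAndCompute(img_sens, None)
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]  # ratio test
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    n_inliers = int(np.sum(inliers)) if inliers is not None else 0
    return H, n_inliers
```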

Journal ArticleDOI
TL;DR: Six techniques for the calculation of surface roughness were selected for an assessment of the parameter's behavior at different spatial scales and data-set resolutions, and standard deviation of slope offered good performance at a variety of scales.
Abstract: Surface roughness is an important geomorphological variable which has been used in the Earth and planetary sciences to infer material properties, current/past processes, and the time elapsed since formation. No single definition exists; however, within the context of geomorphometry, we use surface roughness as an expression of the variability of a topographic surface at a given scale, where the scale of analysis is determined by the size of the landforms or geomorphic features of interest. Six techniques for the calculation of surface roughness were selected for an assessment of the parameter's behavior at different spatial scales and data-set resolutions. Area ratio operated independently of scale, providing consistent results across spatial resolutions. Vector dispersion produced results with increasing roughness and homogenization of terrain at coarser resolutions and larger window sizes. Standard deviation of residual topography highlighted local features and did not detect regional relief. Standard deviation of elevation correctly identified breaks of slope and was good at detecting regional relief. Standard deviation of slope (SDslope) also correctly identified smooth sloping areas and breaks of slope, providing the best results for geomorphological analysis. Standard deviation of profile curvature identified the breaks of slope, although not as strongly as SDslope, and it is sensitive to noise and spurious data. In general, SDslope offered good performance at a variety of scales, while the simplicity of calculation is perhaps its single greatest benefit.
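The best-performing measure in the study, standard deviation of slope over a moving window, is straightforward to compute; a minimal sketch follows, where the slope definition (finite-difference gradients) and window size are illustrative choices that set the scale of analysis.

```python
import numpy as np
from scipy import ndimage

def sd_slope_roughness(dem, cellsize, window=5):
    """Surface roughness as the local standard deviation of slope (SDslope).

    dem      : 2-D elevation array
    cellsize : grid spacing in the same units as elevation
    window   : side length (pixels) of the moving window
    """
    gy, gx = np.gradient(dem.astype(float), cellsize)
    slope = np.degrees(np.arctan(np.hypot(gx, gy)))
    mean = ndimage.uniform_filter(slope, size=window)
    mean_sq = ndimage.uniform_filter(slope ** 2, size=window)
    return np.sqrt(np.maximum(mean_sq - mean ** 2, 0.0))   # local std of slope
```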

Journal ArticleDOI
TL;DR: A new satellite image resolution enhancement technique based on the interpolation of the high-frequency subbands obtained by discrete wavelet transform and the input image to achieve a sharper image is proposed.
Abstract: Satellite images are being used in many fields of research. One of the major issues of these types of images is their resolution. In this paper, we propose a new satellite image resolution enhancement technique based on the interpolation of the high-frequency subbands obtained by discrete wavelet transform (DWT) and the input image. The proposed resolution enhancement technique uses DWT to decompose the input image into different subbands. Then, the high-frequency subband images and the input low-resolution image are interpolated, and all these images are combined to generate a new resolution-enhanced image by using the inverse DWT. In order to achieve a sharper image, an intermediate stage for estimating the high-frequency subbands is proposed. The proposed technique has been tested on satellite benchmark images. The quantitative (peak signal-to-noise ratio and root mean square error) and visual results show the superiority of the proposed technique over the conventional and state-of-the-art image resolution enhancement techniques.
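A simplified sketch of the scheme described in the abstract: the input is decomposed with the DWT, the detail subbands are interpolated to the input size, the input image itself is substituted for the low-frequency subband, and the inverse DWT roughly doubles the resolution. The intermediate high-frequency estimation stage is omitted, and the wavelet and interpolation order are illustrative.

```python
import numpy as np
import pywt
from scipy.ndimage import zoom

def _resize(a, shape):
    """Spline-interpolate array `a` to (approximately, then cropped to) `shape`."""
    out = zoom(a, (shape[0] / a.shape[0], shape[1] / a.shape[1]), order=3)
    return out[:shape[0], :shape[1]]

def dwt_resolution_enhance(img, wavelet="db2"):
    """Resolution enhancement by a factor of about two via DWT subband interpolation."""
    img = img.astype(float)
    _, (lh, hl, hh) = pywt.dwt2(img, wavelet)
    details = tuple(_resize(d, img.shape) for d in (lh, hl, hh))
    # the original image replaces the interpolated low-frequency subband
    return pywt.idwt2((img, details), wavelet)
```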

Journal ArticleDOI
TL;DR: This paper proves that, fundamentally, the existence of pure pixels is not only sufficient for the Winter problem to perfectly identify the ground-truth endmembers but also necessary, and proposes a robust worst case generalization of the Winter Problem for accounting for perturbed pixel effects in the noisy scenario.
Abstract: In the late 1990s, Winter proposed an endmember extraction belief that has much impact on endmember extraction techniques in hyperspectral remote sensing. The idea is to find a maximum-volume simplex whose vertices are drawn from the pixel vectors. Winter's belief has stimulated much interest, resulting in many different variations of pixel search algorithms, widely known as N-FINDR, being proposed. In this paper, we take a continuous optimization perspective to revisit Winter's belief, where the aim is to provide an alternative framework of formulating and understanding Winter's belief in a systematic manner. We first prove that, fundamentally, the existence of pure pixels is not only sufficient for the Winter problem to perfectly identify the ground-truth endmembers but also necessary. Then, under the umbrella of the Winter problem, we derive two methods using two different optimization strategies. One is by alternating optimization. The resulting algorithm turns out to be an N-FINDR variant, but, with the proposed formulation, we can pin down some of its convergence characteristics. Another is by successive optimization; interestingly, the resulting algorithm is found to exhibit some similarity to vertex component analysis. Hence, the framework provides linkage and alternative interpretations to these existing algorithms. Furthermore, we propose a robust worst case generalization of the Winter problem for accounting for perturbed pixel effects in the noisy scenario. An algorithm combining alternating optimization and projected subgradients is devised to deal with the problem. We use both simulations and real data experiments to demonstrate the viability and merits of the proposed algorithms.
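The maximum-volume simplex search at the heart of Winter's belief, and of the alternating-optimization variant discussed above, can be sketched as a cyclic sweep over the simplex vertices; the exhaustive per-vertex search below is a slow but direct illustration, and the initialization and sweep count are illustrative.

```python
import numpy as np

def nfindr(Y, p, n_sweeps=3, seed=0):
    """N-FINDR-style alternating maximization of the simplex volume.

    Y : (p-1, n_pixels) pixel matrix already reduced to p-1 dimensions
    Returns indices of p pixels whose simplex volume is (locally) maximal.
    """
    n = Y.shape[1]
    idx = list(np.random.default_rng(seed).choice(n, p, replace=False))

    def volume(indices):
        # simplex volume is proportional to |det([1; Y_e])| for the p chosen pixels
        E = np.vstack([np.ones(p), Y[:, indices]])
        return abs(np.linalg.det(E))

    for _ in range(n_sweeps):                  # cyclic (alternating) optimization
        for j in range(p):
            idx[j] = max(range(n),
                         key=lambda k: volume(idx[:j] + [k] + idx[j + 1:]))
    return idx
```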

Journal ArticleDOI
TL;DR: In this paper, the effects of orientation compensation on the coherency matrix and the scattering-model-based decompositions by Freeman-Durden and Yamaguchi et al. were investigated.
Abstract: The polarization orientation angle (OA) of the scattering media affects the polarimetric radar signatures. This paper investigates the effects of orientation compensation on the coherency matrix and the scattering-model-based decompositions by Freeman-Durden and Yamaguchi et al. The Cloude and Pottier decomposition is excluded, because entropy, anisotropy, and alpha angle are roll invariant. We will show that, after orientation compensation, the volume scattering power is consistently decreased, while the double-bounce power has increased. The surface scattering power is relatively unchanged, and the helicity power is roll invariant. All of these characteristics can be explained by the compensation effect on the nine elements of the coherency matrix. In particular, after compensation, the real part of the (HH - VV) · HV* correlation reduces to zero, the intensity of cross-pol |HV| always reduces, and |HH - VV| always increases. This analysis also reveals that the common perception that OA compensation would make a reflection asymmetrical medium completely reflection symmetric is incorrect and that, contrary to the general perception, the four-component decomposition does not use the complete information of the coherency matrix. Only six quantities are included - one more than the Freeman-Durden decomposition, which explicitly assumes reflection symmetry.

Journal ArticleDOI
TL;DR: A synthetic aperture radar (SAR) automatic target recognition approach based on a global scattering center model that is much easier to implement and less sensitive to nonideal factors such as noise and pose estimation error than point-to-point matching is proposed.
Abstract: This paper proposes a synthetic aperture radar (SAR) automatic target recognition approach based on a global scattering center model. The scattering center model is established offline using range profiles at multiple viewing angles, so the original data amount is much less than that required for establishing SAR image templates. Scattering center features at different target poses can be conveniently predicted by this model. Moreover, the model can be modified to predict features for various target configurations. For the SAR image to be classified, regional features in different levels are extracted by thresholding and morphological operations. The regional features will be matched to the predicted scattering center features of different targets to arrive at a decision. This region-to-point matching is much easier to implement and is less sensitive to nonideal factors such as noise and pose estimation error than point-to-point matching. A matching scheme going through from coarse to fine regional features in the inner cycle and going through different pose hypotheses in the outer cycle is designed to improve the efficiency and robustness of the classifier. Experiments using both data predicted by a high-frequency electromagnetic (EM) code and data measured in the MSTAR program verify the validity of the method.

Journal ArticleDOI
TL;DR: In this article, the authors present a methodology based entirely on satellite remote sensing data to set up and calibrate a hydrologic model, simulate the spatial extent of flooding, and evaluate the probability of detecting inundated areas.
Abstract: Floods are among the most catastrophic natural disasters around the globe impacting human lives and infrastructure. Implementation of a flood prediction system can potentially help mitigate flood-induced hazards. Such a system typically requires implementation and calibration of a hydrologic model using in situ observations (i.e., rain and stream gauges). Recently, satellite remote sensing data have emerged as a viable alternative or supplement to in situ observations due to their availability over vast ungauged regions. The focus of this study is to integrate the best available satellite products within a distributed hydrologic model to characterize the spatial extent of flooding and associated hazards over sparsely gauged or ungauged basins. We present a methodology based entirely on satellite remote sensing data to set up and calibrate a hydrologic model, simulate the spatial extent of flooding, and evaluate the probability of detecting inundated areas. A raster-based distributed hydrologic model, Coupled Routing and Excess STorage (CREST), was implemented for the Nzoia basin, a subbasin of Lake Victoria in Africa. Moderate Resolution Imaging Spectroradiometer Terra-based and Advanced Spaceborne Thermal Emission and Reflection Radiometer-based flood inundation maps were produced over the region and used to benchmark the distributed hydrologic model simulations of inundation areas. The analysis showed the value of integrating satellite data such as precipitation, land cover type, topography, and other products along with space-based flood inundation extents as inputs to the distributed hydrologic model. We conclude that the quantification of flooding spatial extent through optical sensors can help to calibrate and evaluate hydrologic models and, hence, potentially improve hydrologic prediction and flood management strategies in ungauged catchments.