
Showing papers in "IEEE Transactions on Geoscience and Remote Sensing in 2013"


Journal ArticleDOI
TL;DR: A new family of generalized composite kernels is constructed that exhibits great flexibility when combining the spectral and the spatial information contained in hyperspectral data, without any weight parameters.
Abstract: This paper presents a new framework for the development of generalized composite kernel machines for hyperspectral image classification. We construct a new family of generalized composite kernels which exhibit great flexibility when combining the spectral and the spatial information contained in the hyperspectral data, without any weight parameters. The classifier adopted in this work is the multinomial logistic regression, and the spatial information is modeled from extended multiattribute profiles. In order to illustrate the good performance of the proposed framework, support vector machines are also used for evaluation purposes. Our experimental results with real hyperspectral images collected by the National Aeronautics and Space Administration Jet Propulsion Laboratory's Airborne Visible/Infrared Imaging Spectrometer and the Reflective Optics Spectrographic Imaging System indicate that the proposed framework leads to state-of-the-art classification performance in complex analysis scenarios.

459 citations
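The "unweighted sum" member of this composite-kernel family is simple to sketch. Below is a minimal NumPy illustration, assuming RBF base kernels and random stand-ins for the spectral vectors and the spatial (attribute-profile) features — the paper's actual classifier is multinomial logistic regression, which is not shown:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF (Gaussian) kernel matrix between row-sample matrices A and B."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def composite_kernel(X_spec, X_spat, Y_spec, Y_spat, gamma=1.0):
    """Unweighted summation of a spectral and a spatial kernel --
    one member of the generalized composite kernel family."""
    return rbf_kernel(X_spec, Y_spec, gamma) + rbf_kernel(X_spat, Y_spat, gamma)

rng = np.random.default_rng(0)
Xs, Xw = rng.normal(size=(5, 10)), rng.normal(size=(5, 4))  # toy spectral/spatial features
K = composite_kernel(Xs, Xw, Xs, Xw)
print(K.shape)              # (5, 5)
print(np.allclose(K, K.T))  # True: the sum of two valid kernels is a valid kernel
```

A sum (or stacking) of valid kernels is itself a valid kernel, which is why no cross-information weight parameter is needed in this construction.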


Journal ArticleDOI
TL;DR: Experimental results on several HSIs show that the proposed technique outperforms the linear sparsity-based classification technique, as well as the classical support vector machines and sparse kernel logistic regression classifiers.
Abstract: In this paper, a novel nonlinear technique for hyperspectral image (HSI) classification is proposed. Our approach relies on sparsely representing a test sample in terms of all of the training samples in a feature space induced by a kernel function. For each test pixel in the feature space, a sparse representation vector is obtained by decomposing the test pixel over a training dictionary, also in the same feature space, by using a kernel-based greedy pursuit algorithm. The recovered sparse representation vector is then used directly to determine the class label of the test pixel. Projecting the samples into a high-dimensional feature space and kernelizing the sparse representation improve the data separability between different classes, providing a higher classification accuracy compared to the more conventional linear sparsity-based classification algorithms. Moreover, the spatial coherency across neighboring pixels is also incorporated through a kernelized joint sparsity model, where all of the pixels within a small neighborhood are jointly represented in the feature space by selecting a few common training samples. Kernel greedy optimization algorithms are suggested in this paper to solve the kernel versions of the single-pixel and multi-pixel joint sparsity-based recovery problems. Experimental results on several HSIs show that the proposed technique outperforms the linear sparsity-based classification technique, as well as the classical support vector machines and sparse kernel logistic regression classifiers.

456 citations
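The kernel-based greedy pursuit can be sketched entirely in terms of kernel evaluations, since atoms are never needed explicitly in the feature space. A toy NumPy version, assuming a linear kernel and a test sample that coincides with one training atom (the paper's pixel-wise recovery step, heavily simplified):

```python
import numpy as np

def kernel_omp(K_AA, k_Ay, n_atoms):
    """Greedy kernel pursuit: sparsely decompose a test sample over the
    training dictionary directly in the kernel-induced feature space.
    K_AA: train-train kernel matrix; k_Ay: train-test kernel vector."""
    support, coef = [], None
    corr = k_Ay.copy()                       # <phi(a_i), residual> for every atom
    for _ in range(n_atoms):
        i = int(np.argmax(np.abs(corr)))
        if i not in support:
            support.append(i)
        K_SS = K_AA[np.ix_(support, support)]
        coef = np.linalg.solve(K_SS + 1e-10 * np.eye(len(support)), k_Ay[support])
        corr = k_Ay - K_AA[:, support] @ coef  # residual correlations in feature space
    return support, coef

rng = np.random.default_rng(1)
A = rng.normal(size=(20, 60))                # 20 training atoms
y = A[7]                                     # test sample equal to atom 7
K_AA = A @ A.T                               # linear kernel, for illustration only
k_Ay = A @ y
S, x = kernel_omp(K_AA, k_Ay, n_atoms=3)
print(S[0])  # 7: the matching atom is selected first, with coefficient ~1
```

Classification would then compare class-wise feature-space residuals, which are likewise computable from kernel values alone.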


Journal ArticleDOI
TL;DR: The cloud effective particle radius of liquid water clouds is significantly larger over ocean than land, consistent with the variation in hygroscopic aerosol concentrations that provide cloud condensation nuclei necessary for cloud formation.
Abstract: Cloud properties have been retrieved from the Moderate Resolution Imaging Spectroradiometer (MODIS) over 12 years of continuous observations from Terra and over nine years from Aqua. Results include the spatial and temporal distribution of cloud fraction, the cloud top pressure and cloud top temperature, and the cloud optical thickness and effective radius of both liquid water and ice clouds. Globally, the cloud fraction derived by the MODIS cloud mask is ~ 67%, with somewhat more clouds over land during the afternoon and fewer clouds over ocean in the afternoon, with very little difference in global cloud cover between Terra and Aqua. Overall, the cloud fraction over land is ~ 55%, with a distinctive seasonal cycle, whereas the ocean cloudiness is much higher, around 72%, with much reduced seasonal variation. Aqua and Terra have comparable zonal cloud top pressures, with Aqua having somewhat higher clouds (cloud top pressures lower by 100 hPa) over land due to afternoon deep convection. The coldest cloud tops (colder than 230 K) generally occur over Antarctica and the high clouds in the tropics. The cloud effective particle radius of liquid water clouds is significantly larger over ocean (mode 12-13 μm) than land (mode 10-11 μm), consistent with the variation in hygroscopic aerosol concentrations that provide cloud condensation nuclei necessary for cloud formation. We also find the effective radius to be 2-3 μm larger in the southern hemisphere than in the northern hemisphere, likely reflecting differences in sources of cloud condensation nuclei.

431 citations


Journal ArticleDOI
TL;DR: A new multifeature model, aiming to construct a support vector machine (SVM) ensemble combining multiple spectral and spatial features at both pixel and object levels is proposed, which provides more accurate classification results compared to the voting and probabilistic models.
Abstract: In recent years, the resolution of remotely sensed imagery has become increasingly high in both the spectral and spatial domains, which simultaneously provides more plentiful spectral and spatial information. Accordingly, the accurate interpretation of high-resolution imagery depends on effective integration of the spectral, structural, and semantic features contained in the images. In this paper, we propose a new multifeature model, aiming to construct a support vector machine (SVM) ensemble combining multiple spectral and spatial features at both pixel and object levels. The features employed in this study include a gray-level co-occurrence matrix, differential morphological profiles, and an urban complexity index. Subsequently, three algorithms are proposed to integrate the multifeature SVMs: certainty voting, probabilistic fusion, and an object-based semantic approach. The proposed algorithms are compared with other multifeature SVM methods including vector stacking, feature selection, and composite kernels. Experiments are conducted on the hyperspectral digital imagery collection experiment DC Mall data set and two WorldView-2 data sets. It is found that the multifeature model with semantic-based postprocessing provides more accurate classification results (an accuracy improvement of 1-4% for the three experimental data sets) compared to the voting and probabilistic models.

408 citations
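Two of the fusion rules described above — voting and probabilistic fusion — reduce to a few lines of array code. A hedged sketch, assuming each single-feature SVM has already produced per-pixel labels or class posteriors (the toy label maps below are invented):

```python
import numpy as np

def majority_vote(predictions):
    """Fuse label maps from several single-feature classifiers by plurality vote.
    predictions: (n_classifiers, n_pixels) integer labels."""
    preds = np.asarray(predictions)
    n_classes = int(preds.max()) + 1
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)

def probabilistic_fusion(probas):
    """Average per-classifier class posteriors, then take the argmax.
    probas: (n_classifiers, n_pixels, n_classes)."""
    return np.asarray(probas).mean(axis=0).argmax(axis=-1)

preds = [[0, 1, 2, 2],     # e.g. spectral-feature SVM
         [0, 1, 1, 2],     # texture-feature SVM
         [0, 2, 2, 2]]     # morphology-feature SVM
print(majority_vote(preds))  # [0 1 2 2]
```

The paper's third, object-based semantic rule additionally post-processes these fused labels within segmented objects, which is not shown here.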


Journal ArticleDOI
TL;DR: This paper proposes a new pan-sharpening method named SparseFI, based on the compressive sensing theory and explores the sparse representation of HR/LR multispectral image patches in the dictionary pairs cotrained from the panchromatic image and its downsampled LR version.
Abstract: Data provided by most optical Earth observation satellites such as IKONOS, QuickBird, and GeoEye are composed of a panchromatic channel of high spatial resolution (HR) and several multispectral channels at a lower spatial resolution (LR). The fusion of an HR panchromatic and the corresponding LR spectral channels is called “pan-sharpening.” It aims at obtaining an HR multispectral image. In this paper, we propose a new pan-sharpening method named Sparse Fusion of Images (SparseFI, pronounced as “sparsify”). SparseFI is based on the compressive sensing theory and explores the sparse representation of HR/LR multispectral image patches in the dictionary pairs cotrained from the panchromatic image and its downsampled LR version. Unlike conventional methods, it “learns” from, i.e., adapts itself to, the data. Due to the fact that the SparseFI method does not assume any spectral composition model of the panchromatic image and due to the super-resolution capability and robustness of sparse signal reconstruction algorithms, it gives higher spatial resolution and, in most cases, less spectral distortion compared with the conventional methods.

390 citations
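The coupled-dictionary idea — HR atoms from the panchromatic image, LR atoms from its downsampled version at corresponding positions — can be illustrated with a crude patch extractor. This is a sketch only: real SparseFI applies a proper low-pass filter before decimation and normalizes the atoms, neither of which is done here:

```python
import numpy as np

def extract_patch_pairs(pan, scale=4, patch=8):
    """Build coupled HR/LR patch sets: HR atoms from the PAN image, LR atoms
    from its `scale`-times downsampled version, at corresponding positions."""
    lr = pan[::scale, ::scale]               # crude decimation stand-in
    lp = patch // scale
    hr_atoms, lr_atoms = [], []
    for r in range(lr.shape[0] - lp + 1):
        for c in range(lr.shape[1] - lp + 1):
            lr_atoms.append(lr[r:r + lp, c:c + lp].ravel())
            hr_atoms.append(pan[r * scale:r * scale + patch,
                                c * scale:c * scale + patch].ravel())
    return np.array(hr_atoms).T, np.array(lr_atoms).T   # columns are atoms

pan = np.random.default_rng(0).random((32, 32))
D_hr, D_lr = extract_patch_pairs(pan)
print(D_hr.shape, D_lr.shape)  # (64, 49) (4, 49)
```

Each LR multispectral patch is then sparsely coded over `D_lr`, and the same coefficients applied to `D_hr` reconstruct its HR counterpart.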


Journal ArticleDOI
TL;DR: Manifold regularization is incorporated into sparsity-constrained NMF for unmixing, which keeps the close link between the original image and the material abundance maps and leads to more desirable unmixing performance.
Abstract: Hyperspectral unmixing is one of the most important techniques in analyzing hyperspectral images, which decomposes a mixed pixel into a collection of constituent materials weighted by their proportions. Recently, many sparse nonnegative matrix factorization (NMF) algorithms have achieved advanced performance for hyperspectral unmixing because they overcome the difficulty of absence of pure pixels and sufficiently utilize the sparse characteristic of the data. However, most existing sparse NMF algorithms for hyperspectral unmixing only consider the Euclidean structure of the hyperspectral data space. In fact, hyperspectral data are more likely to lie on a low-dimensional submanifold embedded in the high-dimensional ambient space. Thus, it is necessary to consider the intrinsic manifold structure for hyperspectral unmixing. In order to exploit the latent manifold structure of the data during the decomposition, manifold regularization is incorporated into sparsity-constrained NMF for unmixing in this paper. Since the additional manifold regularization term can keep the close link between the original image and the material abundance maps, the proposed approach leads to a more desired unmixing performance. The experimental results on synthetic and real hyperspectral data both illustrate the superiority of the proposed method compared with other state-of-the-art approaches.

346 citations
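The manifold-regularized factorization admits simple multiplicative updates. A hedged NumPy sketch of the graph-regularized NMF core (the method's sparsity term is omitted for brevity, and the toy data and chain graph below are invented):

```python
import numpy as np

def graph_nmf(X, S, k, lam=0.01, n_iter=200, seed=0):
    """Manifold-regularized NMF sketch: min ||X - WH||_F^2 + lam*Tr(H L H^T),
    with L = D - S the graph Laplacian over pixels; multiplicative updates
    keep W (endmembers) and H (abundances) nonnegative."""
    rng = np.random.default_rng(seed)
    b, n = X.shape
    W, H = rng.random((b, k)), rng.random((k, n))
    Dg = np.diag(S.sum(axis=1))
    eps = 1e-9
    for _ in range(n_iter):
        W *= (X @ H.T) / (W @ H @ H.T + eps)
        H *= (W.T @ X + lam * H @ S) / (W.T @ W @ H + lam * H @ Dg + eps)
    return W, H

rng = np.random.default_rng(1)
X = rng.random((6, 3)) @ rng.random((3, 20))             # exactly rank-3, nonnegative
S = np.diag(np.ones(19), 1) + np.diag(np.ones(19), -1)   # chain graph over 20 pixels
W, H = graph_nmf(X, S, k=3)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
print(err < 0.5, bool(np.all(H >= 0)))  # True True: good nonnegative fit
```

The `lam * H @ S` / `lam * H @ Dg` terms are exactly where the graph structure pulls neighboring pixels toward similar abundance vectors.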


Journal ArticleDOI
TL;DR: An extensive evaluation of local invariant features for image retrieval of land-use/land-cover classes in high-resolution aerial imagery using a bag-of-visual-words (BOVW) representation and describes interesting findings such as the performance-efficiency tradeoffs that are possible through the appropriate pairings of different-sized codebooks and dissimilarity measures.
Abstract: This paper investigates local invariant features for geographic (overhead) image retrieval. Local features are particularly well suited for the newer generations of aerial and satellite imagery whose increased spatial resolution, often just tens of centimeters per pixel, allows a greater range of objects and spatial patterns to be recognized than ever before. Local invariant features have been successfully applied to a broad range of computer vision problems and, as such, are receiving increased attention from the remote sensing community particularly for challenging tasks such as detection and classification. We perform an extensive evaluation of local invariant features for image retrieval of land-use/land-cover (LULC) classes in high-resolution aerial imagery. We report on the effects of a number of design parameters on a bag-of-visual-words (BOVW) representation including saliency- versus grid-based local feature extraction, the size of the visual codebook, the clustering algorithm used to create the codebook, and the dissimilarity measure used to compare the BOVW representations. We also perform comparisons with standard features such as color and texture. The performance is quantitatively evaluated using a first-of-its-kind LULC ground truth data set which will be made publicly available to other researchers. In addition to reporting on the effects of the core design parameters, we also describe interesting findings such as the performance-efficiency tradeoffs that are possible through the appropriate pairings of different-sized codebooks and dissimilarity measures. While the focus is on image retrieval, we expect our insights to be informative for other applications such as detection and classification.

338 citations
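The full BOVW pipeline evaluated above — codebook clustering, quantization to histograms, and a histogram dissimilarity — fits in a short sketch. A toy NumPy version, assuming 2-D descriptors and a tiny Lloyd k-means in place of the paper's clustering variants:

```python
import numpy as np

def build_codebook(desc, k, n_iter=20, seed=0):
    """Tiny Lloyd k-means: cluster pooled local descriptors into k visual words."""
    rng = np.random.default_rng(seed)
    centers = desc[rng.choice(len(desc), k, replace=False)]
    for _ in range(n_iter):
        labels = ((desc[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = desc[labels == j].mean(0)
    return centers

def bovw_histogram(desc, centers):
    """Quantize one image's descriptors to their nearest word; L1-normalize."""
    labels = ((desc[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
    h = np.bincount(labels, minlength=len(centers)).astype(float)
    return h / h.sum()

def chi2_dissimilarity(h1, h2, eps=1e-10):
    """Chi-squared distance, one common BOVW histogram dissimilarity."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

rng = np.random.default_rng(2)
a1, a2 = rng.normal(0, 0.5, (50, 2)), rng.normal(0, 0.5, (50, 2))  # same class
b = rng.normal(5, 0.5, (50, 2))                                    # different class
codebook = build_codebook(np.vstack([a1, a2, b]), k=4)
ha1, ha2, hb = (bovw_histogram(x, codebook) for x in (a1, a2, b))
print(chi2_dissimilarity(ha1, ha2) < chi2_dissimilarity(ha1, hb))  # True
```

The paper's codebook-size/dissimilarity tradeoffs correspond to varying `k` and swapping `chi2_dissimilarity` for other measures.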


Journal ArticleDOI
TL;DR: A hybrid methodology combining backscatter thresholding, region growing, and change detection (CD) is introduced as an approach enabling the automated, objective, and reliable flood extent extraction from very high resolution urban SAR images.
Abstract: Very high resolution synthetic aperture radar (SAR) sensors represent an alternative to aerial photography for delineating floods in built-up environments where flood risk is highest. However, even with currently available SAR image resolutions of 3 m and higher, signal returns from man-made structures hamper the accurate mapping of flooded areas. Enhanced image processing algorithms and a better exploitation of image archives are required to facilitate the use of microwave remote-sensing data for monitoring flood dynamics in urban areas. In this paper, a hybrid methodology combining backscatter thresholding, region growing, and change detection (CD) is introduced as an approach enabling the automated, objective, and reliable flood extent extraction from very high resolution urban SAR images. The method is based on the calibration of a statistical distribution of “open water” backscatter values from images of floods. Images acquired during dry conditions enable the identification of areas that are not “visible” to the sensor (i.e., regions affected by “shadow”) and that systematically behave as specular reflectors (e.g., smooth tarmac, permanent water bodies). CD with respect to a reference image thereby reduces overdetection of inundated areas. A case study of the July 2007 Severn River flood (UK) observed by airborne photography and the very high resolution SAR sensor on board TerraSAR-X highlights advantages and limitations of the method. Even though the proposed fully automated SAR-based flood-mapping technique overcomes some limitations of previous methods, further technological and methodological improvements are necessary for SAR-based flood detection in urban areas to match the mapping capability of high-quality aerial photography.

328 citations
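The thresholding-plus-region-growing core of the hybrid method can be illustrated on a synthetic backscatter image. In the paper both thresholds come from a calibrated open-water backscatter distribution and a change-detection step against a dry reference suppresses shadow overdetection; here the thresholds and the toy dB values are assumptions:

```python
import numpy as np
from collections import deque

def grow_flood(img_db, seed_thr, grow_thr):
    """Backscatter thresholding + region growing: seed on very dark 'open
    water' pixels, then grow into 4-neighbors below a laxer threshold."""
    flood = img_db < seed_thr
    q = deque(zip(*np.nonzero(flood)))
    h, w = img_db.shape
    while q:
        r, c = q.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not flood[rr, cc] \
                    and img_db[rr, cc] < grow_thr:
                flood[rr, cc] = True
                q.append((rr, cc))
    return flood

img = np.full((10, 10), -8.0)      # dry-land backscatter (dB)
img[3:7, 3:7] = -14.0              # flooded area
img[4:6, 4:6] = -18.0              # darkest open-water core -> seeds
mask = grow_flood(img, seed_thr=-16.0, grow_thr=-12.0)
print(mask.sum())  # 16: the full 4x4 flooded block is recovered
```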


Journal ArticleDOI
TL;DR: The proposed framework serves as an engine in the context of which active learning algorithms can exploit both spatial and spectral information simultaneously and exploits the marginal probability distribution which uses the whole information in the hyperspectral data.
Abstract: In this paper, we propose a new framework for spectral-spatial classification of hyperspectral image data. The proposed approach serves as an engine in the context of which active learning algorithms can exploit both spatial and spectral information simultaneously. An important contribution of our paper is the fact that we exploit the marginal probability distribution which uses the whole information in the hyperspectral data. We learn such distributions from both the spectral and spatial information contained in the original hyperspectral data using loopy belief propagation. The adopted probabilistic model is a discriminative random field in which the association potential is a multinomial logistic regression classifier and the interaction potential is a Markov random field multilevel logistic prior. Our experimental results with hyperspectral data sets collected using the National Aeronautics and Space Administration's Airborne Visible Infrared Imaging Spectrometer and the Reflective Optics System Imaging Spectrometer system indicate that the proposed framework provides state-of-the-art performance when compared to other similar developments.

325 citations
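To give a feel for how an MLL-style spatial prior reshapes per-pixel class posteriors, here is a deliberately crude stand-in: iterated conditional modes instead of the paper's loopy belief propagation, on invented posteriors (an illustration of the spectral-spatial idea, not the authors' algorithm):

```python
import numpy as np

def icm_smooth(post, beta=1.0, n_sweeps=3):
    """Spatially regularize per-pixel class posteriors post (H, W, K) with a
    neighborhood prior, via iterated conditional modes: each pixel picks the
    class maximizing log-posterior + beta * (number of agreeing 4-neighbors)."""
    labels = post.argmax(-1)
    H, W, K = post.shape
    for _ in range(n_sweeps):
        for r in range(H):
            for c in range(W):
                nb = [labels[rr, cc]
                      for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1))
                      if 0 <= rr < H and 0 <= cc < W]
                score = np.log(post[r, c] + 1e-12) + beta * np.bincount(nb, minlength=K)
                labels[r, c] = score.argmax()
    return labels

post = np.zeros((5, 5, 2)); post[..., 0] = 0.8; post[..., 1] = 0.2
post[2, 2] = [0.4, 0.6]                  # one spectrally ambiguous pixel
print(icm_smooth(post)[2, 2])            # 0: neighbors vote the outlier back
```

In the paper the per-pixel posteriors come from multinomial logistic regression and the marginals are computed by loopy belief propagation rather than this greedy pass.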


Journal ArticleDOI
TL;DR: This paper proposes a hyperspectral feature extraction and pixel classification method based on structured sparse logistic regression and 3-D discrete wavelet transform (3D-DWT) texture features, and extended the linear sparse model to nonlinear classification by partitioning the feature space into subspaces of linearly separable samples.
Abstract: Hyperspectral remote sensing imagery contains rich information on spectral and spatial distributions of distinct surface materials. Owing to its numerous and continuous spectral bands, hyperspectral data enable more accurate and reliable material classification than using panchromatic or multispectral imagery. However, high-dimensional spectral features and limited number of available training samples have caused some difficulties in the classification, such as overfitting in learning, noise sensitiveness, overloaded computation, and lack of meaningful physical interpretability. In this paper, we propose a hyperspectral feature extraction and pixel classification method based on structured sparse logistic regression and 3-D discrete wavelet transform (3D-DWT) texture features. The 3D-DWT decomposes a hyperspectral data cube at different scales, frequencies, and orientations, during which the hyperspectral data cube is considered as a whole tensor instead of adapting the data to a vector or matrix. This allows the capture of geometrical and statistical spectral-spatial structures. After the feature extraction step, sparse representation/modeling is applied for data analysis and processing via sparse regularized optimization, which selects a small subset of the original feature variables to model the data for regression and classification purposes. A linear structured sparse logistic regression model is proposed to simultaneously select the discriminant features from the pool of 3D-DWT texture features and learn the coefficients of the linear classifier, in which the prior knowledge about feature structure can be mapped into the various sparsity-inducing norms such as lasso, group, and sparse group lasso. Furthermore, to overcome the limitation of linear models, we extend the linear sparse model to nonlinear classification by partitioning the feature space into subspaces of linearly separable samples. The advantages of our methods are validated on real hyperspectral remote sensing data sets.

318 citations
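The group-lasso penalty that drives the structured feature selection has a closed-form proximal operator: block soft-thresholding, which zeroes entire feature groups at once (e.g., all 3D-DWT coefficients of one subband). A minimal sketch with invented values:

```python
import numpy as np

def group_soft_threshold(v, groups, lam):
    """Proximal operator of the group-lasso penalty lam * sum_g ||v_g||_2:
    each group is shrunk toward zero and dropped entirely if its norm <= lam."""
    out = np.zeros_like(v, dtype=float)
    for g in groups:
        ng = np.linalg.norm(v[g])
        if ng > lam:
            out[g] = (1 - lam / ng) * v[g]
    return out

v = np.array([3.0, 4.0, 0.1, -0.1])
groups = [[0, 1], [2, 3]]
print(group_soft_threshold(v, groups, lam=1.0))
# group [0,1] shrinks (norm 5 > 1); group [2,3] is zeroed (norm ~0.14 < 1)
```

Iterating this operator inside a proximal-gradient loop on the logistic loss yields a basic structured sparse logistic regression solver.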


Journal ArticleDOI
TL;DR: An evaluation of two high spectral and spatial resolution hyperspectral sensors, operating at different wavelengths, showed that the HySpex VNIR 1600 sensor is effective in boreal tree species classification, with kappa accuracies over 0.8.
Abstract: Tree species mapping in forest areas is an important topic in forest inventory. In recent years, several studies have been carried out using different types of hyperspectral sensors under various forest conditions. The aim of this work was to evaluate the potential of two high spectral and spatial resolution hyperspectral sensors (HySpex-VNIR 1600 and HySpex-SWIR 320i), operating at different wavelengths, for tree species classification of boreal forests. To address this objective, many experiments were carried out, taking into consideration: 1) three classifiers (support vector machines (SVM), random forest (RF), and Gaussian maximum likelihood); 2) two spatial resolutions (1.5 m and 0.4 m pixel sizes); 3) two subsets of spectral bands (all and a selection); and 4) two spatial levels (pixel and tree levels). The study area is characterized by the presence of four classes 1) Norway spruce, 2) Scots pine, together with 3) scattered Birch and 4) other broadleaves. Our results showed that: 1) the HySpex VNIR 1600 sensor is effective in boreal tree species classification with kappa accuracies over 0.8 (with Pine and Spruce reaching producer's accuracies higher than 95%); 2) the role of the HySpex-SWIR 320i is limited, and its bands alone are able to properly separate only Pine and Spruce species; 3) the spatial resolution has a strong effect on the classification accuracy (an overall decrease of more than 20% between 0.4 m and 1.5 m spatial resolution); and 4) there is no significant difference between SVM or RF classifiers.
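The kappa coefficient quoted for these species maps measures agreement beyond chance and is computed from the confusion matrix. A small self-contained sketch (the toy labels are invented):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Kappa coefficient: (observed agreement - chance agreement) / (1 - chance)."""
    classes = np.unique(np.concatenate([y_true, y_pred]))
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)))
    for t, p in zip(y_true, y_pred):
        cm[idx[t], idx[p]] += 1
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement
    pe = (cm.sum(0) @ cm.sum(1)) / n**2         # agreement expected by chance
    return (po - pe) / (1 - pe)

y_true = np.array([0, 0, 0, 1, 1, 1, 2, 2])    # e.g. spruce / pine / birch plots
y_pred = np.array([0, 0, 1, 1, 1, 1, 2, 2])
print(round(cohens_kappa(y_true, y_pred), 3))  # 0.81
```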

Journal ArticleDOI
TL;DR: A novel graph-regularized low-rank representation (LRR) destriping algorithm is proposed by incorporating the LRR technique and can both remove striping noise and achieve cleaner and higher contrast reconstructed results.
Abstract: Hyperspectral image destriping is a challenging and promising theme in remote sensing. Striping noise is a ubiquitous phenomenon in hyperspectral imagery, which may severely degrade the visual quality. A variety of methods have been proposed to effectively alleviate the effects of the striping noise. However, most of them fail to take full advantage of the high spectral correlation between the observation subimages in distinct bands and consider the local manifold structure of the hyperspectral data space. In order to remedy this drawback, in this paper, a novel graph-regularized low-rank representation (LRR) destriping algorithm is proposed by incorporating the LRR technique. To obtain desired destriping performance, two sides of performing destriping are included: 1) To exploit the high spectral correlation between the observation subimages in distinct bands, the technique of LRR is first utilized for destriping, and 2) to preserve the intrinsic local structure of the original hyperspectral data, the graph regularizer is incorporated in the objective function. The experimental results and quantitative analysis demonstrate that the proposed method can both remove striping noise and achieve cleaner and higher contrast reconstructed results.
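The workhorse operation inside LRR-type low-rank programs is singular value thresholding — the proximal operator of the nuclear norm. A hedged sketch on synthetic data (a rank-2 matrix stands in for the stack of spectrally correlated subimages):

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: shrink singular values by tau and drop
    those at or below it -- the prox of the nuclear norm used in ALM/ADMM
    solvers for low-rank representation problems."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
clean = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))  # rank-2 'signal'
noisy = clean + 0.01 * rng.normal(size=clean.shape)          # perturbation stand-in
recovered = svt(noisy, tau=1.0)
print(np.linalg.matrix_rank(recovered, tol=1e-6))  # 2: small singular values removed
```

The paper's method augments such a low-rank objective with a graph-regularization term that this sketch does not include.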

Journal ArticleDOI
TL;DR: A tensor organization scheme for representing a pixel's spectral-spatial feature and develop tensor discriminative locality alignment (TDLA) for removing redundant information for subsequent classification are defined.
Abstract: In this paper, we propose a method for the dimensionality reduction (DR) of spectral-spatial features in hyperspectral images (HSIs), under the umbrella of multilinear algebra, i.e., the algebra of tensors. The proposed approach is a tensor extension of conventional supervised manifold-learning-based DR. In particular, we define a tensor organization scheme for representing a pixel's spectral-spatial feature and develop tensor discriminative locality alignment (TDLA) for removing redundant information for subsequent classification. The optimal solution of TDLA is obtained by alternately optimizing each mode of the input tensors. The methods are tested on three public real HSI data sets collected by hyperspectral digital imagery collection experiment, reflective optics system imaging spectrometer, and airborne visible/infrared imaging spectrometer. The classification results show significant improvements in classification accuracies while using a small number of features.
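The tensor algebra this method alternates over boils down to mode-n unfoldings and mode-n products. A minimal NumPy sketch, assuming a random 7x7 spatial patch with 100 bands and a random 10x100 projection in place of the learned TDLA factor:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: mode-`mode` fibers become the columns."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_n_product(T, M, mode):
    """Multiply tensor T by matrix M along `mode`; equivalently,
    unfold(result, mode) == M @ unfold(T, mode)."""
    return np.moveaxis(np.tensordot(M, T, axes=(1, mode)), 0, mode)

patch = np.random.default_rng(0).normal(size=(7, 7, 100))  # rows x cols x bands
U = np.random.default_rng(1).normal(size=(10, 100))        # stand-in spectral projection
reduced = mode_n_product(patch, U, mode=2)
print(reduced.shape)  # (7, 7, 10): spectral mode reduced, spatial modes intact
```

TDLA alternately optimizes one such projection matrix per mode while holding the others fixed.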

Journal ArticleDOI
TL;DR: By comparing with six well-known methods in terms of several universal quality evaluation indexes with or without references, the simulated and real experimental results on QuickBird and IKONOS images demonstrate the superiority of the proposed remote sensing image fusion method.
Abstract: Remote sensing image fusion can integrate the spatial detail of panchromatic (PAN) image and the spectral information of a low-resolution multispectral (MS) image to produce a fused MS image with high spatial resolution. In this paper, a remote sensing image fusion method is proposed with sparse representations over learned dictionaries. The dictionaries for PAN image and low-resolution MS image are learned from the source images adaptively. Furthermore, a novel strategy is designed to construct the dictionary for unknown high-resolution MS images without training set, which can make our proposed method more practical. The sparse coefficients of the PAN image and low-resolution MS image are sought by the orthogonal matching pursuit algorithm. Then, the fused high-resolution MS image is calculated by combining the obtained sparse coefficients and the dictionary for the high-resolution MS image. By comparing with six well-known methods in terms of several universal quality evaluation indexes with or without references, the simulated and real experimental results on QuickBird and IKONOS images demonstrate the superiority of our method.
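The sparse-coding step named above — orthogonal matching pursuit over a learned dictionary — is compact enough to sketch. A toy NumPy version with a random unit-norm dictionary standing in for the learned one:

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily pick the atom most correlated
    with the residual, then re-fit least squares on the growing support."""
    residual, support = y.copy(), []
    x = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        support.append(int(np.argmax(np.abs(D.T @ residual))))
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)               # unit-norm atoms
x_true = np.zeros(128); x_true[[10, 50]] = [3.0, -2.5]
y = D @ x_true                               # an exactly 2-sparse 'patch'
x_hat = omp(D, y, n_nonzero=2)
print(np.allclose(x_hat, x_true, atol=1e-6))  # True on this easy 2-sparse case
```

In the fusion method, coefficients recovered this way from the PAN/LR dictionaries are recombined with the constructed high-resolution MS dictionary to synthesize the fused image.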

Journal ArticleDOI
TL;DR: The experimental results show that the proposed approach can process large clouds in a heterogeneous landscape, which is difficult for cloud removal approaches.
Abstract: A cloud removal approach based on information cloning is introduced. The approach removes cloud-contaminated portions of a satellite image and then reconstructs the information of missing data utilizing temporal correlation of multitemporal images. The basic idea is to clone information from cloud-free patches to their corresponding cloud-contaminated patches under the assumption that land covers change insignificantly over a short period of time. The patch-based information reconstruction is mathematically formulated as a Poisson equation and solved using a global optimization process. Thus, the proposed approach can potentially yield better results in terms of radiometric accuracy and consistency compared with related approaches. Some experimental analyses on sequences of images acquired by the Landsat-7 Enhanced Thematic Mapper Plus sensor are conducted. The experimental results show that the proposed approach can process large clouds in a heterogeneous landscape, which is difficult for existing cloud removal approaches. In addition, quantitative and qualitative analyses on simulated data with different cloud contamination conditions are conducted using quality index and visual inspection, respectively, to evaluate the performance of the proposed approach.
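The Poisson formulation is the same one used in seamless image cloning: inside the cloud mask, keep the gradients of the cloud-free acquisition while taking boundary values from the target image. A hedged sketch using plain Jacobi iteration instead of the paper's global solver (the toy ramp images are invented):

```python
import numpy as np

def poisson_clone(target, source, mask, n_iter=500):
    """Reconstruct the masked (cloud) region of `target` by solving a Poisson
    equation: match `source`'s Laplacian inside the mask, with Dirichlet
    boundary values from `target`; plain Jacobi iteration for the sketch."""
    out = target.astype(float).copy()
    out[mask] = source[mask]                                  # initial guess
    src_lap = (np.roll(source, 1, 0) + np.roll(source, -1, 0) +
               np.roll(source, 1, 1) + np.roll(source, -1, 1) - 4 * source)
    for _ in range(n_iter):
        nb = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
              np.roll(out, 1, 1) + np.roll(out, -1, 1))
        out[mask] = (nb[mask] - src_lap[mask]) / 4.0          # Jacobi update
    return out

target = np.tile(np.arange(10.0), (10, 1))     # cloud-free background (smooth ramp)
source = target + 5.0                          # second acquisition, radiometric offset
mask = np.zeros((10, 10), bool); mask[3:7, 3:7] = True
out = poisson_clone(target, source, mask)
print(np.abs(out - target).max() < 1e-3)  # True: gradients kept, offset removed
```

Because only gradients are cloned, radiometric offsets between acquisitions are absorbed at the patch boundary, which is the source of the approach's radiometric consistency.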

Journal ArticleDOI
TL;DR: It is found that the new method enhances the double-bounce scattering contributions over the urban areas compared with those of the existing four-component decomposition, resulting from the full utilization of polarimetric information, which requires highly improved acquisitions of the cross-polarized HV component above the noise floor.
Abstract: This paper presents a new general four-component scattering power decomposition method by implementing a set of unitary transformations for the polarimetric coherency matrix. There exist nine real independent observation parameters in the 3 × 3 coherency matrix with respect to the second-order statistics of polarimetric information. The proposed method accounts for all observation parameters in the new scheme. It is known that the existing four-component decomposition method reduces the number of observation parameters from nine to eight by rotation of the coherency matrix and that it accounts for six parameters out of eight, leaving two parameters (i.e., the real and imaginary parts of the T13 component) unaccounted for. By additional special unitary transformation to this rotated coherency matrix, it became possible to reduce the number of independent parameters from eight to seven. After the unitary transformation, the new four-component decomposition is carried out that accounts for all parameters in the coherency matrix, including the remaining T13 component. Therefore, the proposed method makes full use of the polarimetric information in the decomposition. The decomposition also employs an extended volume scattering model, which discriminates volume scattering between dipole and dihedral scattering structures caused by the cross-polarized HV component. It is found that the new method enhances the double-bounce scattering contributions over the urban areas compared with those of the existing four-component decomposition, resulting from the full utilization of polarimetric information, which requires highly improved acquisitions of the cross-polarized HV component above the noise floor.
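The first of these transformations — the line-of-sight rotation that removes one observation parameter by nulling Re(T23) — can be sketched directly. A hedged NumPy illustration on an invented Hermitian coherency matrix (the subsequent special unitary transform and the four-component split itself are not shown):

```python
import numpy as np

def rotate_coherency(T):
    """Rotate the 3x3 coherency matrix about the radar line of sight by the
    angle that nulls Re(T23): theta = (1/4) * atan2(2*Re(T23), T22 - T33)."""
    theta = 0.25 * np.arctan2(2 * T[1, 2].real, T[1, 1].real - T[2, 2].real)
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    R = np.array([[1, 0, 0], [0, c, s], [0, -s, c]], dtype=complex)
    return R @ T @ R.conj().T            # unitary congruence: trace is preserved

# synthetic Hermitian coherency matrix (values invented for illustration)
A = np.array([[2.0, 0.3 + 0.1j, 0.4 - 0.2j],
              [0.3 - 0.1j, 1.0, 0.5 + 0.3j],
              [0.4 + 0.2j, 0.5 - 0.3j, 0.7]])
Tr = rotate_coherency(A)
print(abs(Tr[1, 2].real) < 1e-10)  # True: Re(T23) is rotated away
```

Because the transform is unitary, total power (the trace) is unchanged; only the parameterization is simplified before the scattering powers are extracted.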

Journal ArticleDOI
TL;DR: The expected GMTI performance of RADARSat-2 after EDPCA processing is compared to results achieved with measured RADARSAT-2 data recorded during several trials in order to validate the developed theory.
Abstract: This paper generalizes the well-known displaced-phase-center antenna (DPCA) method for efficient ground moving target indication (GMTI) with a two-channel synthetic aperture radar (SAR) to any multichannel SAR/GMTI radars independent of the number of receive channels. This processing method called extended DPCA (EDPCA) is derived in this paper and is applied to data acquired with the Canadian RADARSAT-2 satellite. The expected GMTI performance of RADARSAT-2 after EDPCA processing is compared to results achieved with measured RADARSAT-2 data recorded during several trials in order to validate the developed theory.
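The two-channel DPCA principle that EDPCA generalizes is easy to demonstrate on a synthetic slow-time signal: the trailing channel sees the same stationary clutter a few pulses later, so the aligned channel difference cancels clutter while a mover, which carries an extra interferometric phase from its radial velocity, survives. All signal parameters below are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k, phi = 512, 4, 0.8                      # pulses, DPCA pulse lag, mover phase
clutter = rng.normal(size=n) + 1j * rng.normal(size=n)
target = 0.2 * np.exp(2j * np.pi * 0.13 * np.arange(n))    # weak moving target
ch1 = clutter + target
ch2 = np.zeros(n, complex)
ch2[k:] = clutter[:-k] + np.exp(1j * phi) * target[:-k]    # delayed replica + mover phase
dpca = ch2[k:] - ch1[:-k]                                  # align ch2[m+k] with ch1[m]
# clutter cancels exactly; what remains is (e^{i*phi} - 1) * target
print(np.abs(dpca).max() < 0.2)  # True: residue is only the mover's signature
```

EDPCA extends this pairwise cancellation to an arbitrary number of receive channels via a clutter-suppressing linear combination.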

Journal ArticleDOI
TL;DR: It is confirmed that a combination of spectral and spatial information increases accuracy of species classification and that species mapping is tractable in tropical forests when using high-fidelity imaging spectroscopy.
Abstract: We identify canopy species in a Hawaiian tropical forest using supervised classification applied to airborne hyperspectral imagery acquired with the Carnegie Airborne Observatory-Alpha system. Nonparametric methods (linear and radial basis function support vector machine, artificial neural network, and k-nearest neighbor) and parametric methods (linear, quadratic, and regularized discriminant analysis) are compared for a range of species richness values and training sample sizes. We find a clear advantage in using regularized discriminant analysis, linear discriminant analysis, and support vector machines. No unique optimal classifier was found for all conditions tested, but we highlight the possibility of improving support vector machine classification with a better optimization of its free parameters. We also confirm that a combination of spectral and spatial information increases accuracy of species classification: we combine segmentation and species classification from regularized discriminant analysis to produce a map of the 17 discriminated species. Finally, we compare different methods to assess spectral separability and find a better ability of Bhattacharyya distance to assess separability within and among species. The results indicate that species mapping is tractable in tropical forests when using high-fidelity imaging spectroscopy.
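The Bhattacharyya distance singled out above has a closed form for Gaussian class models, combining a mean-separation term and a covariance-mismatch term. A small sketch with invented class statistics:

```python
import numpy as np

def bhattacharyya(mu1, cov1, mu2, cov2):
    """Bhattacharyya distance between two Gaussian class models:
    1/8 * dm' * inv((C1+C2)/2) * dm + 1/2 * ln(det((C1+C2)/2)/sqrt(det C1 * det C2))."""
    cov = 0.5 * (cov1 + cov2)
    dm = mu1 - mu2
    term1 = 0.125 * dm @ np.linalg.solve(cov, dm)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov1) * np.linalg.det(cov2)))
    return term1 + term2

mu1, mu2, I = np.zeros(3), np.ones(3), np.eye(3)
print(bhattacharyya(mu1, I, mu2, I))  # 0.375: mean term only, covariances equal
```

With equal covariances only the mean term survives; with equal means the measure still separates classes through the covariance term, which is one reason it assesses spectral separability well.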

Journal ArticleDOI
TL;DR: This method forms a unified framework for blending remote sensing images with temporal reflectance changes, whether phenology change or land-cover-type change, based on a two-layer spatiotemporal fusion strategy due to the large spatial resolution difference between HSLT and LSHT data.
Abstract: This paper proposes a novel spatiotemporal fusion model for generating images with high-spatial and high-temporal resolution (HSHT) through learning with only one pair of prior images. For this purpose, this method establishes correspondence between low-spatial-resolution but high-temporal-resolution (LSHT) data and high-spatial-resolution but low-temporal-resolution (HSLT) data through the superresolution of LSHT data and further fusion by using high-pass modulation. Specifically, this method is implemented in two stages. In the first stage, the spatial resolution of LSHT data on prior and prediction dates is improved simultaneously by means of sparse representation; in the second stage, the known HSLT and the superresolved LSHTs are fused via high-pass modulation to generate the HSHT data on the prediction date. Remarkably, this method forms a unified framework for blending remote sensing images with temporal reflectance changes, whether phenology change (e.g., seasonal change of vegetation) or land-cover-type change (e.g., conversion of farmland to built-up area) based on a two-layer spatiotemporal fusion strategy due to the large spatial resolution difference between HSLT and LSHT data. This method was tested on both a simulated data set and two actual data sets of Landsat Enhanced Thematic Mapper Plus-Moderate Resolution Imaging Spectroradiometer acquisitions. It was also compared with other well-known spatiotemporal fusion algorithms on two types of data: images primarily with phenology changes and images primarily with land-cover-type changes. Experimental results demonstrated that our method performed better in capturing surface reflectance changes on both types of images.
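The high-pass modulation step of the second stage injects the fine spatial detail of a high-resolution reference into a smoother band multiplicatively. A hedged sketch with a box filter standing in for a proper sensor MTF low-pass, on invented toy images:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box low-pass with edge padding (MTF-filter stand-in)."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros(img.shape)
    for dr in range(k):
        for dc in range(k):
            out += p[dr:dr + img.shape[0], dc:dc + img.shape[1]]
    return out / (k * k)

def high_pass_modulation(low_band, high_res_ref, eps=1e-6):
    """Multiplicative detail injection: fused = low * ref / lowpass(ref)."""
    return low_band * high_res_ref / (box_blur(high_res_ref) + eps)

low = np.full((8, 8), 10.0)                  # superresolved but still smooth band
ref = np.indices((8, 8)).sum(0) % 2 + 1.0    # high-res reference with fine detail
fused = high_pass_modulation(low, ref)
print(fused.std() > low.std())  # True: spatial detail has been injected
```

The multiplicative form preserves the local radiometry of the low-resolution band where the reference is locally flat, which matters when blending scenes with reflectance change.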

Journal ArticleDOI
TL;DR: Use of a robust set of internationally agreed upon and coordinated intercalibration techniques will lead to significant improvement in the consistency between satellite instruments and facilitate accurate monitoring of the Earth's climate at uncertainty levels needed to detect and attribute the mechanisms of change.
Abstract: Intercalibration of satellite instruments is critical for detection and quantification of changes in the Earth's environment, weather forecasting, understanding climate processes, and monitoring climate and land cover change. These applications use data from many satellites; for the data to be interoperable, the instruments must be cross-calibrated. To meet the stringent needs of such applications, instruments must provide reliable, accurate, and consistent measurements over time. Robust techniques are required to ensure that observations from different instruments can be normalized to a common scale that the community agrees on. The long-term reliability of this process needs to be sustained in accordance with established reference standards and best practices. Furthermore, establishing physical meaning to the information through robust Système International d'unités (SI)-traceable calibration and validation (Cal/Val) is essential to fully understand the parameters under observation. The processes of calibration, correction, stability monitoring, and quality assurance need to be underpinned and evidenced by comparison with “peer instruments” and, ideally, highly calibrated in-orbit reference instruments. Intercalibration between instruments is a central pillar of the Cal/Val strategies of many national and international satellite remote sensing organizations. Intercalibration techniques as outlined in this paper not only provide a practical means of identifying and correcting relative biases in radiometric calibration between instruments but also enable potential data gaps between measurement records in a critical time series to be bridged. Use of a robust set of internationally agreed upon and coordinated intercalibration techniques will lead to significant improvement in the consistency between satellite instruments and facilitate accurate monitoring of the Earth's climate at uncertainty levels needed to detect and attribute the mechanisms of change.
This paper summarizes the state-of-the-art of postlaunch radiometric calibration of remote sensing satellite instruments through intercalibration.

Journal ArticleDOI
TL;DR: The developed contextual generalization of SVMs is obtained by analytically relating the Markovian minimum-energy criterion to the application of an SVM in a suitably transformed space, and a novel contextual classifier is developed in the proposed general framework.
Abstract: In the framework of remote-sensing image classification, support vector machines (SVMs) have lately been receiving substantial attention due to their accurate results in many applications as well as their remarkable generalization capability even with high-dimensional input data. However, SVM classifiers are intrinsically noncontextual, which represents an important limitation in image classification. In this paper, a novel and rigorous framework, which integrates SVMs and Markov random field models in a unique formulation for spatial contextual classification, is proposed. The developed contextual generalization of SVMs is obtained by analytically relating the Markovian minimum-energy criterion to the application of an SVM in a suitably transformed space. Furthermore, as a second contribution, a novel contextual classifier is developed in the proposed general framework. Two specific algorithms, based on the Ho–Kashyap and Powell numerical procedures, are combined with this classifier to automate the estimation of its parameters. Experiments are carried out with hyperspectral, multichannel synthetic aperture radar, and multispectral high-resolution images, and the behavior of the method as a function of the training-set size is assessed.
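The Markovian minimum-energy criterion the paper builds on can be illustrated with a generic iterated-conditional-modes (ICM) pass over pixelwise classifier scores. This sketch is a standard MRF smoothing baseline with a Potts pairwise term, not the paper's analytical SVM-MRF formulation:

```python
import numpy as np

def icm(scores, beta=1.0, iters=5):
    """Iterated conditional modes: each pixel picks the label minimizing its
    unary energy (-score) plus a Potts smoothness term counting label
    disagreements with its 4-neighbors, weighted by beta."""
    h, w, k = scores.shape
    labels = scores.argmax(axis=2)          # noncontextual initialization
    for _ in range(iters):
        for i in range(h):
            for j in range(w):
                energies = -scores[i, j].copy()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        energies += beta * (np.arange(k) != labels[ni, nj])
                labels[i, j] = energies.argmin()
    return labels

# a lone noisy pixel inside a homogeneous region gets smoothed away
scores = np.zeros((3, 3, 2)); scores[..., 0] = 1.0
scores[1, 1] = [0.0, 1.2]            # isolated pixel weakly prefers class 1
smoothed = icm(scores, beta=0.5)
```

With beta = 0.5 the four disagreeing neighbors outweigh the isolated pixel's unary preference, so the spurious label is flipped back to the surrounding class.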

Journal ArticleDOI
TL;DR: An instantaneous-range-Doppler algorithm of GMTIm based on deramp-keystone processing is proposed, which focuses all the targets in the scene at an arbitrarily chosen azimuth time and shows that no interpolation is needed.
Abstract: Range cell migration (RCM) correction and containment of the azimuth spectrum entirely within baseband are critical for ground moving-target imaging (GMTIm). Without the azimuth spectrum entirely contained within baseband and a proper RCM correction, the image will be defocused, or artifacts may appear in the image. An instantaneous-range-Doppler algorithm for GMTIm based on deramp-keystone processing is proposed. The main idea is to focus all the targets in the scene at an arbitrarily chosen azimuth time. With the proposed algorithm, the RCMs of all targets in the scene are removed without a priori knowledge of their accurate motion parameters. Targets whose azimuth spectrum is not entirely in baseband, i.e., within an ambiguous pulse repetition frequency (PRF) band or spanning neighboring PRF bands, can also be dealt with effectively and simultaneously. Theoretical analysis shows that no interpolation is needed. Simulated and real data are used to validate the effectiveness of the method.
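The deramp idea can be illustrated in isolation (this is not the full deramp-keystone algorithm; the parameter values are arbitrary): multiplying a quadratic-phase azimuth signal by a conjugate reference chirp collapses it to a single tone, whose FFT peak reveals its Doppler position.

```python
import numpy as np

fs = 1000.0                          # slow-time sampling rate (PRF), Hz
n = 1000
t = np.arange(n) / fs - 0.5          # slow-time axis, s
k = 200.0                            # Doppler rate of the target, Hz/s
f0 = 100.0                           # Doppler centroid of the target, Hz

echo = np.exp(1j*np.pi*k*t**2 + 2j*np.pi*f0*t)   # chirped moving-target echo
deramped = echo * np.exp(-1j*np.pi*k*t**2)       # conjugate reference chirp
spectrum = np.abs(np.fft.fft(deramped))
peak_freq = np.fft.fftfreq(n, 1/fs)[spectrum.argmax()]   # recovered centroid
```

After deramping, the quadratic phase is gone and the FFT concentrates all the energy at the target's Doppler centroid of 100 Hz.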

Journal ArticleDOI
TL;DR: A new approach for semisupervised learning is developed which adapts available active learning methods to a self-learning framework in which the machine learning algorithm itself selects the most useful and informative unlabeled samples for classification purposes.
Abstract: Remotely sensed hyperspectral imaging allows for the detailed analysis of the surface of the Earth using advanced imaging instruments which can produce high-dimensional images with hundreds of spectral bands. Supervised hyperspectral image classification is a difficult task due to the imbalance between the high dimensionality of the data and the limited availability of labeled training samples in real analysis scenarios. While the collection of labeled samples is generally difficult, expensive, and time-consuming, unlabeled samples can be generated in a much easier way. This observation has fostered the idea of adopting semisupervised learning techniques in hyperspectral image classification. The main assumption of such techniques is that the new (unlabeled) training samples can be obtained from a (limited) set of available labeled samples without significant effort/cost. In this paper, we develop a new approach for semisupervised learning which adapts available active learning methods (in which a trained expert actively selects unlabeled samples) to a self-learning framework in which the machine learning algorithm itself selects the most useful and informative unlabeled samples for classification purposes. In this way, the labels of the selected pixels are estimated by the classifier itself, with the advantage that no extra cost is required for labeling the selected pixels using this machine-machine framework when compared with traditional machine-human active learning. The proposed approach is illustrated with two different classifiers: multinomial logistic regression and a probabilistic pixelwise support vector machine.
Our experimental results with real hyperspectral images collected by the National Aeronautics and Space Administration Jet Propulsion Laboratory's Airborne Visible/Infrared Imaging Spectrometer and the Reflective Optics Spectrographic Imaging System indicate that the use of self-learning represents an effective and promising strategy in the context of hyperspectral image classification.
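A minimal sketch of the machine-machine self-learning loop, using a nearest-centroid classifier as a stand-in for the paper's probabilistic classifiers (the function and variable names are hypothetical): at each round the classifier labels the unlabeled pool itself and promotes its most confident predictions to the training set, with no human labeling.

```python
import numpy as np

def self_learn(X_lab, y_lab, X_unlab, rounds=3, per_round=2):
    """Self-learning: repeatedly classify the unlabeled pool and add the
    samples with the largest margin between the two nearest class centroids
    (a simple confidence measure) to the training set with their predicted
    labels."""
    X, y = X_lab.copy(), y_lab.copy()
    pool = X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        classes = np.unique(y)
        cents = np.stack([X[y == c].mean(axis=0) for c in classes])
        d = np.linalg.norm(pool[:, None] - cents[None], axis=2)
        dsort = np.sort(d, axis=1)
        margin = dsort[:, 1] - dsort[:, 0]       # classifier confidence
        keep = np.argsort(margin)[-per_round:]   # most confident samples
        X = np.vstack([X, pool[keep]])
        y = np.r_[y, classes[d.argmin(axis=1)[keep]]]
        pool = np.delete(pool, keep, axis=0)
    return X, y

X_lab = np.array([[0., 0.], [4., 4.]]); y_lab = np.array([0, 1])
X_unlab = np.array([[0.2, 0.1], [3.9, 4.2], [0.1, 0.3], [4.1, 3.8]])
X_aug, y_aug = self_learn(X_lab, y_lab, X_unlab)
```

Starting from one labeled sample per class, the loop grows the training set to six samples with the pseudo-labels assigned by the classifier itself.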

Journal ArticleDOI
TL;DR: This paper introduces a new approach for the automated detection of buildings from monocular very high resolution (VHR) optical satellite images, and proposes a new fuzzy landscape generation approach to model the directional spatial relationship between buildings and their shadows.
Abstract: This paper introduces a new approach for the automated detection of buildings from monocular very high resolution (VHR) optical satellite images. First, we investigate the shadow evidence to focus on building regions. To do that, we propose a new fuzzy landscape generation approach to model the directional spatial relationship between buildings and their shadows. Once all landscapes are collected, a pruning process is developed to eliminate the landscapes that may occur due to nonbuilding objects. The final building regions are detected by a GrabCut partitioning approach. In this paper, the input requirements of GrabCut partitioning are automatically extracted from the previously determined shadow and landscape regions, so that the approach achieves efficient, fully automated building detection. Extensive experiments performed on 20 test sites selected from a set of QuickBird and GeoEye-1 VHR images showed that the proposed approach accurately detects buildings with arbitrary shapes and sizes in complex environments. The tests also revealed that, even under challenging environmental and illumination conditions, reasonable building detection performance can be achieved by the proposed approach.
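A hedged sketch of a directional fuzzy landscape (one common construction, not necessarily the paper's exact definition): each background pixel receives a membership in [0, 1] that is high when it lies in a given direction from some shadow pixel, since buildings are expected on the sun-facing side of their shadows.

```python
import numpy as np

def fuzzy_landscape(shadow_mask, direction, kappa=2.0):
    """Directional fuzzy landscape around a shadow mask: for each background
    pixel, take the best (largest) cosine between `direction` (a unit row/col
    vector) and the direction from any shadow pixel, clipped to [0, 1] and
    sharpened by the exponent kappa."""
    h, w = shadow_mask.shape
    ys, xs = np.nonzero(shadow_mask)
    land = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            if shadow_mask[i, j]:
                continue                      # shadow pixels stay at 0
            v = np.stack([i - ys, j - xs], axis=1).astype(float)
            v /= np.linalg.norm(v, axis=1, keepdims=True)
            cosang = v @ direction
            land[i, j] = np.clip(cosang, 0, 1).max() ** kappa
    return land

shadow = np.zeros((5, 5), bool); shadow[2, 2] = True
land = fuzzy_landscape(shadow, np.array([-1.0, 0.0]))  # landscape to the north
```

Pixels due north of the shadow get membership 1, pixels due south get 0, and oblique pixels fall in between, which is the behavior needed to prune candidate building regions.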

Journal ArticleDOI
TL;DR: The underlying idea is to design an optimal projection matrix, which preserves the local neighborhood information inferred from unlabeled samples, while simultaneously maximizing the class discrimination of the data inferred from the labeled samples.
Abstract: We propose a novel semisupervised local discriminant analysis method for feature extraction in hyperspectral remote sensing imagery, with improved performance in both ill-posed and poorly posed conditions. The proposed method combines an unsupervised method (local linear feature extraction) and a supervised method (linear discriminant analysis) in a novel framework without any free parameters. The underlying idea is to design an optimal projection matrix that preserves the local neighborhood information inferred from unlabeled samples while simultaneously maximizing the class discrimination of the data inferred from the labeled samples. Experimental results on four real hyperspectral images demonstrate that the proposed method compares favorably with conventional feature extraction methods.
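One plausible instantiation of such a projection (an assumption-laden sketch, not the paper's exact objective): maximize between-class scatter from the labeled samples against within-class scatter plus a kNN-graph locality-preserving term from the unlabeled samples, solved as a generalized eigenproblem.

```python
import numpy as np

def ssl_projection(X_lab, y_lab, X_unlab, n_dims=1, alpha=0.5, k=3):
    """Semisupervised projection sketch: between-class scatter Sb (labeled)
    over within-class scatter Sw plus a graph-Laplacian locality term built
    from a kNN graph on the unlabeled samples."""
    mean = X_lab.mean(axis=0)
    d = X_lab.shape[1]
    Sb = np.zeros((d, d)); Sw = np.zeros((d, d))
    for c in np.unique(y_lab):
        Xc = X_lab[y_lab == c]; mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
        Sw += (Xc - mc).T @ (Xc - mc)
    # symmetric kNN adjacency and combinatorial Laplacian on unlabeled data
    D2 = ((X_unlab[:, None] - X_unlab[None]) ** 2).sum(-1)
    W = np.zeros_like(D2)
    for i in range(len(X_unlab)):
        W[i, np.argsort(D2[i])[1:k + 1]] = 1.0
    W = np.maximum(W, W.T)
    L = np.diag(W.sum(1)) - W
    reg = Sw + alpha * X_unlab.T @ L @ X_unlab + 1e-6 * np.eye(d)
    vals, vecs = np.linalg.eig(np.linalg.solve(reg, Sb))
    order = np.argsort(vals.real)[::-1]
    return vecs.real[:, order[:n_dims]]      # columns = projection directions

X_lab = np.array([[0., 0.], [0., 1.], [5., 0.], [5., 1.]])
y_lab = np.array([0, 0, 1, 1])               # classes separated along axis 0
X_unlab = np.array([[0., 0.], [0., 0.1], [5., 0.], [5., 0.1]])
P = ssl_projection(X_lab, y_lab, X_unlab, k=1)
```

Because all class separation lies along the first axis, the leading projection direction is dominated by that axis.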

Journal ArticleDOI
TL;DR: This paper discusses using field-programmable gate arrays (FPGAs) to process either time- or frequency-domain signals in human sensing radar applications and gives an example for a continuous-wave (CW) Doppler radar and another for an ultrawideband (UWB) pulse–Doppler (PD) radar.
Abstract: In this paper, we discuss using field-programmable gate arrays (FPGAs) to process either time- or frequency-domain signals in human sensing radar applications. One example will be given for a continuous-wave (CW) Doppler radar and another for an ultrawideband (UWB) pulse–Doppler (PD) radar. The example for the CW Doppler radar utilizes a novel superheterodyne receiver to suppress low-frequency noise and includes a digital downconverter module implemented in an FPGA. Meanwhile, the UWB PD radar employs a carrier-based transceiver and a novel FPGA-based equivalent-time sampling scheme for narrow pulse digitization. Highly integrated compact data acquisition hardware has been implemented and exploited in both radar prototypes. Typically, the CW Doppler radar is a low-cost option for single human activity monitoring, vital sign detection, etc., where target range information is not required. Meanwhile, the UWB PD radar is more advanced in through-wall sensing, multiple-object detection, real-time target tracking, and so on, where a high-resolution range profile is acquired together with a micro-Doppler signature. Design challenges, performance comparisons, and the pros and cons of each approach are discussed in detail.
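The equivalent-time sampling idea can be sketched generically (this is not the radar's actual FPGA implementation; the function name and values are illustrative): a repetitive waveform is sampled once per repetition at an instant that advances by a small step each time, so a slow sampler reconstructs a waveform far above its real-time rate.

```python
import numpy as np

def equivalent_time_sample(waveform, reps, step=1):
    """Sequential equivalent-time sampling: take ONE sample per repetition of
    the waveform, advancing the sampling instant by `step` grid points each
    repetition; the collected samples reconstruct the fast repetitive signal."""
    return np.array([waveform[(r * step) % len(waveform)]
                     for r in range(reps)])

t = np.linspace(0, 1, 100, endpoint=False)
pulse = np.exp(-((t - 0.5) ** 2) / 0.01)      # narrow repetitive pulse
recon = equivalent_time_sample(pulse, reps=100)
```

With a one-grid-point advance per repetition, 100 repetitions recover all 100 points of the pulse even though only one sample is taken per period; real systems achieve the sub-sample advance with precise programmable delays.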

Journal ArticleDOI
TL;DR: The Bayesian pixel-based method can provide a higher resolution in the classified image and, therefore, better capability to identify leads compared to the NN method, and may be effectively used in the Central Arctic where MYI is predominant.
Abstract: In this paper, sea ice in the Central Arctic has been classified in synthetic aperture radar (SAR) images from ENVISAT using a neural network (NN)-based algorithm and a Bayesian algorithm. Since different sea ice types can have similar backscattering coefficients at C-band HH polarization, it is necessary to use textural features in addition to the backscattering coefficients. The analysis revealed that the most informative texture features for the classification of multiyear ice (MYI), deformed first-year ice (DFYI), level first-year ice (LFYI), and open water/nilas are correlation, inertia, cluster prominence, energy, homogeneity, and entropy, as well as the third and fourth central statistical moments of image brightness. The optimal topology of the NN, trained for ENVISAT wide-swath SAR sea ice classification, consists of nine neurons in the input layer, six in the hidden layer, and three in the output layer. The classification results for a series of 20 SAR images, acquired in the central part of the Arctic Ocean during winter months, were compared to expert analysis of the images and ice charts. The results of the NN classification show that the average correspondences with the expert analysis amount to 85%, 83%, and 80% for LFYI, DFYI, and MYI, respectively. The Bayesian pixel-based method can provide a higher resolution in the classified image and, therefore, better capability to identify leads compared to the NN method. Both methods may be effectively used in the Central Arctic where MYI is predominant.
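Texture features such as energy and homogeneity come from the gray-level co-occurrence matrix (GLCM); a minimal sketch for a single pixel offset (illustrative, not the paper's exact feature extraction pipeline):

```python
import numpy as np

def glcm_features(img, levels, offset=(0, 1)):
    """Gray-level co-occurrence matrix for one pixel offset, plus the energy
    and homogeneity features that help separate ice types sharing similar
    mean backscatter."""
    di, dj = offset
    h, w = img.shape
    P = np.zeros((levels, levels))
    for i in range(max(0, -di), min(h, h - di)):
        for j in range(max(0, -dj), min(w, w - dj)):
            P[img[i, j], img[i + di, j + dj]] += 1
    P /= P.sum()                              # normalize to joint probability
    ii, jj = np.indices((levels, levels))
    energy = (P ** 2).sum()                   # high for uniform texture
    homogeneity = (P / (1.0 + np.abs(ii - jj))).sum()
    return energy, homogeneity

flat = np.zeros((8, 8), int)                  # perfectly uniform texture
checker = np.indices((8, 8)).sum(0) % 2       # maximally varying texture
e_flat, h_flat = glcm_features(flat, levels=2)
e_chk, h_chk = glcm_features(checker, levels=2)
```

A uniform patch scores energy = homogeneity = 1, while a checkerboard drops to 0.5 on both, which is the kind of contrast that separates level from deformed ice at equal mean brightness.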

Journal ArticleDOI
TL;DR: According to the structure of the TV regularization and sparse unmixing in the model, the convergence of the alternating direction method can be guaranteed and the method is compared to the recent Sparse Unmixing via variable Splitting Augmented Lagrangian and TV method.
Abstract: The main aim of this paper is to study total variation (TV) regularization in deblurring and sparse unmixing of hyperspectral images. In the model, we also incorporate blurring operators for dealing with blurring effects, particularly blurring operators for hyperspectral imaging whose point spread functions are generally system dependent and formed from axial optical aberrations in the acquisition system. An alternating direction method is developed to solve the resulting optimization problem efficiently. According to the structure of the TV regularization and sparse unmixing in the model, the convergence of the alternating direction method can be guaranteed. Experimental results are reported to demonstrate the effectiveness of the TV and sparsity model and the efficiency of the proposed numerical scheme, and the method is compared to the recent sparse unmixing via variable splitting augmented Lagrangian and total variation (SUnSAL-TV) method of Iordache et al.
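The sparsity term alone can be illustrated with plain iterative soft thresholding (ISTA) on a toy spectral library; the paper's full model adds the TV term and a blurring operator and solves the problem with an alternating direction method instead:

```python
import numpy as np

def ista_unmix(A, y, lam=0.01, iters=500):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1, where A is a spectral
    library (one endmember per column) and x the sparse abundance vector."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                # gradient of the data term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))            # toy library: 20 endmembers
x_true = np.zeros(20); x_true[[3, 11]] = [0.7, 0.3]
y = A @ x_true                               # mixed pixel from 2 endmembers
x_hat = ista_unmix(A, y)
```

The recovered abundance vector concentrates on the two active endmembers, illustrating why an l1 penalty is a natural fit when each pixel mixes only a few library signatures.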

Journal ArticleDOI
TL;DR: A novel superpixel-based classification framework with an adaptive number of classes for PolSAR images that is capable of improving the classification accuracy, making the results more understandable and easier for further analyses, and providing robust performance under various numbers of classes.
Abstract: Polarimetric synthetic aperture radar (PolSAR) image classification, an important technique in the remote sensing area, has been studied intensively for the past two decades. In order to develop a robust automatic or semiautomatic classification system for PolSAR images, two important problems should be addressed: 1) the incorporation of spatial relations between pixels and 2) the estimation of the number of classes in the image. Therefore, in this paper, we present a novel superpixel-based classification framework with an adaptive number of classes for PolSAR images. The approach is mainly composed of three operations. First, the PolSAR image is partitioned into superpixels, which are local, coherent regions that preserve most of the characteristics necessary for image information extraction. Then, the number of classes and each class center within the data are estimated using the pairwise dissimilarity information between superpixels, followed by the final classification operation. The proposed framework takes the spatial relations between pixels into consideration and makes good use of the inherent statistical characteristics and contour information of PolSAR data. The framework is capable of improving the classification accuracy, making the results more understandable and easier for further analyses, and providing robust performance under various numbers of classes. The performance of the proposed classification framework on one synthetic and three real data sets is presented and analyzed; the experimental results show that the framework provides a promising solution for unsupervised classification of PolSAR images.
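One simple way to obtain an adaptive class count from pairwise dissimilarities (a sketch; the paper's estimation procedure is more elaborate) is to link superpixels whose dissimilarity falls below a threshold and count the resulting connected components:

```python
import numpy as np

def adaptive_clusters(D, thresh):
    """Estimate the number of classes from a pairwise dissimilarity matrix D:
    link any two superpixels with D < thresh and flood-fill the connected
    components, so the class count emerges from the data rather than being
    fixed in advance."""
    n = D.shape[0]
    labels = -np.ones(n, int)
    cur = 0
    for i in range(n):
        if labels[i] >= 0:
            continue
        stack = [i]
        while stack:                          # flood-fill one component
            j = stack.pop()
            if labels[j] >= 0:
                continue
            labels[j] = cur
            stack.extend(np.nonzero(D[j] < thresh)[0].tolist())
        cur += 1
    return labels, cur

# 6 superpixel features forming two tight groups
f = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
D = np.abs(f[:, None] - f[None, :])
labels, n_classes = adaptive_clusters(D, thresh=0.5)
```

Two well-separated groups yield two classes without the class count ever being specified by the user.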

Journal ArticleDOI
TL;DR: Experimental results show that transferring the class labels from the source domain to the target domain provides a reliable initial training set and that the priority rule for AL results in a fast convergence to the desired accuracy with respect to standard AL.
Abstract: This paper proposes a novel change-detection-driven transfer learning (TL) approach to update land-cover maps by classifying remote-sensing images acquired on the same area at different times (i.e., image time series). The proposed approach requires that a reliable training set is available only for one of the images (i.e., the source domain) in the time series, whereas no training set is available for another image to be classified (i.e., the target domain). Unlike other TL methods in the literature, no additional assumptions on either the similarity between class distributions or the presence of the same set of land-cover classes in the two domains are required. The proposed method aims at defining a reliable training set for the target domain, taking advantage of the already available knowledge on the source domain. This is done by applying an unsupervised-change-detection method to target and source domains and transferring class labels of detected unchanged training samples from the source to the target domain to initialize the target-domain training set. The training set is then optimized by a properly defined novel active learning (AL) procedure. At the early iterations of AL, priority in labeling is given to samples detected as being changed, whereas in the remaining ones, the most informative samples are selected from changed and unchanged unlabeled samples. Finally, the target image is classified. Experimental results show that transferring the class labels from the source domain to the target domain provides a reliable initial training set and that the priority rule for AL results in a fast convergence to the desired accuracy with respect to standard AL.
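The label-transfer step can be sketched with change-vector-analysis-style thresholding (an illustrative simplification of the paper's unsupervised change detection; the function and variable names are hypothetical): samples whose spectral change magnitude between the two dates is small are assumed unchanged, and their source-domain labels seed the target training set.

```python
import numpy as np

def transfer_unchanged_labels(X_src, X_tgt, y_src, thresh):
    """For co-located training samples observed on both dates, compute the
    spectral change magnitude; samples below `thresh` are declared unchanged
    and their source labels are transferred to initialize the target set."""
    magnitude = np.linalg.norm(X_tgt - X_src, axis=1)   # change vector norm
    unchanged = magnitude < thresh
    return X_tgt[unchanged], y_src[unchanged], unchanged

X_src = np.array([[0.2, 0.3], [0.8, 0.7], [0.5, 0.5]])
X_tgt = np.array([[0.22, 0.31], [0.81, 0.69], [0.9, 0.1]])  # last pixel changed
y_src = np.array([0, 1, 0])
Xt, yt, mask = transfer_unchanged_labels(X_src, X_tgt, y_src, thresh=0.2)
```

Only the two unchanged pixels carry their labels into the target domain; the changed pixel is left for the active learning stage to handle.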