
Showing papers in "IEEE Transactions on Geoscience and Remote Sensing in 2010"


Journal ArticleDOI
TL;DR: The classification maps obtained by considering different APs result in a better description of the scene than those obtained with an MP, and the usefulness of APs in modeling the spatial information present in the images is proved.
Abstract: Morphological attribute profiles (APs) are defined as a generalization of the recently proposed morphological profiles (MPs). APs provide a multilevel characterization of an image, created by the sequential application of morphological attribute filters, that can be used to model different kinds of structural information. Depending on the type of attribute considered in the morphological attribute transformation, different parametric features can be modeled. Thanks to an efficient implementation, the generation of APs strongly reduces the computational load required for the computation of conventional MPs. Moreover, characterizing the image with different attributes leads to a more complete description of the scene and to a more accurate modeling of the spatial information than the use of conventional morphological filters based on a predefined structuring element. Here, the features extracted by the proposed operators were used for the classification of two very high resolution panchromatic images acquired by QuickBird over the city of Trento, Italy. The experimental analysis proved the usefulness of APs in modeling the spatial information present in the images. The classification maps obtained by considering different APs give a better description of the scene (in terms of both thematic and geometric accuracy) than those obtained with an MP.

721 citations
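The core idea above, an attribute filter that removes connected components failing a criterion, can be sketched for the simplest case (a binary area opening) with plain scipy; a profile is then the stack of filtered images over increasing thresholds. This is a minimal illustration, not the paper's max-tree implementation, and "area" is only one of the attributes the paper considers:

```python
import numpy as np
from scipy import ndimage

def area_opening(binary, min_area):
    """Binary area opening: remove connected components smaller than min_area."""
    labels, n = ndimage.label(binary)
    if n == 0:
        return binary.copy()
    sizes = ndimage.sum(binary, labels, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_area   # background label 0 stays removed
    return keep[labels]

def attribute_profile(binary, thresholds):
    """Stack the original image with its area openings at increasing thresholds."""
    return np.stack([binary] + [area_opening(binary, t) for t in thresholds])

# toy image: one 4-pixel blob and one isolated pixel
img = np.zeros((6, 6), dtype=bool)
img[1:3, 1:3] = True   # area 4
img[4, 4] = True       # area 1
ap = attribute_profile(img, thresholds=[2, 10])
```

Feeding such a per-pixel feature stack to a classifier is the spirit of using APs as spatial features.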


Journal ArticleDOI
TL;DR: Support vector machines (SVM) are attractive for the classification of remotely sensed data with some claims that the method is insensitive to the dimensionality of the data and, therefore, does not require a dimensionality-reduction analysis in preprocessing, but it is shown that the accuracy of a classification by an SVM does vary as a function of the number of features used.
Abstract: Support vector machines (SVM) are attractive for the classification of remotely sensed data, with some claims that the method is insensitive to the dimensionality of the data and, therefore, does not require a dimensionality-reduction analysis in preprocessing. Here, a series of classification analyses with two hyperspectral sensor data sets reveals that the accuracy of a classification by an SVM does vary as a function of the number of features used. Critically, it is shown that the accuracy of a classification may decline significantly (at the 0.05 level of statistical significance) with the addition of features, particularly if a small training sample is used. This highlights a dependence of the accuracy of classification by an SVM on the dimensionality of the data and, therefore, the potential value of undertaking a feature-selection analysis prior to classification. Additionally, it is demonstrated that, even when a large training sample is available, feature selection may still be useful. For example, the accuracy derived from the use of a small number of features may be noninferior (at the 0.05 level of significance) to that derived from the use of a larger feature set, providing potential advantages in relation to issues such as data storage and computational processing costs. Feature selection may, therefore, be a valuable analysis to include in preprocessing operations for classification by an SVM.

708 citations
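The experiment described above, measuring SVM accuracy as a function of the number of retained features under a small training sample, can be sketched with scikit-learn on synthetic data (the data set, feature counts, and sample sizes here are invented stand-ins, not the paper's Hyperion/AVIRIS sets):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for hyperspectral samples: 200 "bands", few informative.
X, y = make_classification(n_samples=300, n_features=200, n_informative=10,
                           n_redundant=0, random_state=0)
# Deliberately small training sample, as in the paper's critical case.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=50, random_state=0)

accs = {}
for k in (5, 20, 200):  # few features, moderate subset, all features
    sel = SelectKBest(f_classif, k=k).fit(X_tr, y_tr)
    clf = SVC(kernel="rbf", gamma="scale").fit(sel.transform(X_tr), y_tr)
    accs[k] = clf.score(sel.transform(X_te), y_te)
```

Comparing `accs` across `k` values is the shape of the paper's analysis; the paper additionally tests significance of the accuracy differences.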


Journal ArticleDOI
TL;DR: Four soil moisture networks were developed and used as part of the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) validation program, and it is shown that there is much room for improvement in the algorithms currently in use by JAXA and NASA.
Abstract: Validation is an important and particularly challenging task for remote sensing of soil moisture. A key issue in the validation of soil moisture products is the disparity in spatial scales between satellite and in situ observations. Conventional measurements of soil moisture are made at a point, whereas satellite sensors provide an integrated area/volume value for a much larger spatial extent. In this paper, four soil moisture networks were developed and used as part of the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) validation program. Each network is located in a different climatic region of the U.S., and provides estimates of the average soil moisture over highly instrumented experimental watersheds and surrounding areas that approximate the size of the AMSR-E footprint. Soil moisture measurements have been made at these validation sites on a continuous basis since 2002, which provided a seven-year period of record for this analysis. The National Aeronautics and Space Administration (NASA) and Japan Aerospace Exploration Agency (JAXA) standard soil moisture products were compared to the network observations, along with two alternative soil moisture products developed using the single-channel algorithm (SCA) and the land parameter retrieval model (LPRM). The metric used for validation is the root-mean-square error (rmse) of the soil moisture estimate as compared to the in situ data. The mission requirement for accuracy defined by the space agencies is 0.06 m3/m3. The statistical results indicate that each algorithm performs differently at each site. Neither the NASA nor the JAXA standard products provide reliable estimates for all the conditions represented by the four watershed sites. The JAXA algorithm performs better than the NASA algorithm under light-vegetation conditions, but the NASA algorithm is more reliable for moderate vegetation. However, both algorithms have a moderate to large bias in all cases. 
The SCA had the lowest overall rmse with a small bias. The LPRM had a very large overestimation bias and retrieval errors. When site-specific corrections were applied, all algorithms had approximately the same error level and correlation. These results clearly show that there is much room for improvement in the algorithms currently in use by JAXA and NASA. They also illustrate the potential pitfalls in using the products without a careful evaluation.

535 citations
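The validation metric described above (rmse of the retrieval against in situ network averages, alongside bias) is simple to state in code. The numbers below are invented placeholders, not AMSR-E data; they only illustrate how an additive product bias shows up in the two statistics:

```python
import numpy as np

def rmse(est, ref):
    """Root-mean-square error of a retrieval against a reference series."""
    return float(np.sqrt(np.mean((est - ref) ** 2)))

def bias(est, ref):
    """Mean difference, retrieval minus reference."""
    return float(np.mean(est - ref))

# Synthetic year of daily soil moisture: a network average and a satellite
# product with an additive bias plus noise (all values invented).
rng = np.random.default_rng(0)
in_situ = rng.uniform(0.05, 0.35, size=365)               # m3/m3
retrieval = in_situ + 0.04 + rng.normal(0.0, 0.02, 365)   # biased, noisy
```

The rmse here would be compared against the 0.06 m3/m3 mission requirement; a site-specific correction of the kind the paper applies amounts to subtracting the estimated bias before recomputing rmse.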


Journal ArticleDOI
TL;DR: The proposed approach can provide classification accuracies similar to or higher than those achieved by other supervised methods for the considered scenes, and indicates that the use of a spatial prior can greatly improve the final results with respect to a case in which only the learned class densities are considered.
Abstract: This paper presents a new semisupervised segmentation algorithm, suited to high-dimensional data, of which remotely sensed hyperspectral image data sets are an example. The algorithm implements two main steps: 1) semisupervised learning of the posterior class distributions followed by 2) segmentation, which infers an image of class labels from a posterior distribution built on the learned class distributions and on a Markov random field. The posterior class distributions are modeled using multinomial logistic regression, where the regressors are learned using both labeled and, through a graph-based technique, unlabeled samples. Such unlabeled samples are actively selected based on the entropy of the corresponding class label. The prior on the image of labels is a multilevel logistic model, which enforces segmentation results in which neighboring labels belong to the same class. The maximum a posteriori segmentation is computed by the α-expansion min-cut-based integer optimization algorithm. Our experimental results, conducted using synthetic and real hyperspectral image data sets collected by the Airborne Visible/Infrared Imaging Spectrometer system of the National Aeronautics and Space Administration Jet Propulsion Laboratory over the regions of Indian Pines, IN, and Salinas Valley, CA, reveal that the proposed approach can provide classification accuracies similar to or higher than those achieved by other supervised methods for the considered scenes. Our results also indicate that the use of a spatial prior can greatly improve the final results with respect to a case in which only the learned class densities are considered, confirming the importance of jointly considering spatial and spectral information in hyperspectral image segmentation.

523 citations
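The effect of the spatial prior described above can be illustrated with a toy version of the MAP labeling step: per-pixel class log-posteriors (the unary term) plus a Potts-style multilevel logistic prior that rewards neighbors sharing a label. The paper optimizes this with α-expansion graph cuts; the iterated conditional modes (ICM) sweep below is a much simpler, non-optimal stand-in, and the posteriors are invented:

```python
import numpy as np

def icm_segment(log_post, beta=1.0, n_iter=5):
    """MAP-like labeling: unary log-posteriors plus a Potts prior (weight beta
    per agreeing 4-neighbor), optimized by simple ICM sweeps."""
    h, w, k = log_post.shape
    labels = log_post.argmax(axis=2)
    for _ in range(n_iter):
        for i in range(h):
            for j in range(w):
                score = log_post[i, j].copy()
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        score[labels[ni, nj]] += beta  # reward agreement
                labels[i, j] = score.argmax()
    return labels

# toy two-class posteriors: left half favors class 0, right half class 1
rng = np.random.default_rng(1)
p = np.full((8, 8, 2), 0.3)
p[:, :4, 0] = 0.7
p[:, 4:, 1] = 0.7
logp = np.log(p) + rng.normal(0, 0.5, p.shape)  # noisy unaries
seg = icm_segment(logp, beta=2.0)
```

With `beta=0` this degenerates to pixelwise classification; the prior is what smooths isolated noisy labels away, which is the paper's point about jointly using spatial and spectral information.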


Journal ArticleDOI
TL;DR: A novel method is presented that detects buildings destroyed in an earthquake using pre-event VHR optical and post-event detected VHR SAR imagery; its feasibility and effectiveness are demonstrated for a subset of the town of Yingxiu, China.
Abstract: Rapid damage assessment after natural disasters (e.g., earthquakes) and violent conflicts (e.g., war-related destruction) is crucial for initiating effective emergency response actions. Remote-sensing satellites equipped with very high spatial resolution (VHR) multispectral and synthetic aperture radar (SAR) imaging sensors can provide vital information due to their ability to map the affected areas with high geometric precision and in an uncensored manner. In this paper, we present a novel method that detects buildings destroyed in an earthquake using pre-event VHR optical and post-event detected VHR SAR imagery. The method operates at the level of individual buildings and assumes that they have a rectangular footprint and are isolated. First, the 3-D parameters of a building are estimated from the pre-event optical imagery. Second, the building information and the acquisition parameters of the VHR SAR scene are used to predict the expected signature of the building in the post-event SAR scene, assuming that it is not affected by the event. Third, the similarity between the predicted image and the actual SAR image is analyzed. If the similarity is high, the building is likely to be still intact, whereas a low similarity indicates that the building is destroyed. A similarity threshold is used to classify the buildings. We demonstrate the feasibility and the effectiveness of the method for a subset of the town of Yingxiu, China, which was heavily damaged in the Sichuan earthquake of May 12, 2008. For the experiment, we use QuickBird and WorldView-1 optical imagery, and TerraSAR-X and COSMO-SkyMed SAR data.

482 citations
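The third step above, thresholding a similarity between the predicted and observed SAR signatures, can be sketched with zero-mean normalized cross-correlation as the similarity measure. NCC is an assumption here (the paper does not commit to it in this summary), and the patches below are random stand-ins for real signatures:

```python
import numpy as np

def ncc(a, b):
    """Zero-mean normalized cross-correlation between two image patches."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

def classify_building(predicted, observed, threshold=0.5):
    """High similarity -> likely intact; low similarity -> likely destroyed."""
    return "intact" if ncc(predicted, observed) >= threshold else "destroyed"

rng = np.random.default_rng(0)
pred = rng.random((16, 16))                            # predicted SAR signature
intact_obs = pred + rng.normal(0, 0.1, pred.shape)     # observation close to prediction
destroyed_obs = rng.random((16, 16))                   # unrelated return
```

The threshold value plays the same role as the paper's similarity threshold: it trades missed destroyed buildings against false destruction alarms.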


Journal ArticleDOI
TL;DR: Compressive sensing (CS) methods for tomographic reconstruction of a building complex from TerraSAR-X spotlight data are presented, and the theory of 4-D (differential, i.e., space-time) CS TomoSAR is developed and compared with parametric (nonlinear least squares) and nonparametric (singular value decomposition) reconstruction methods.
Abstract: Synthetic aperture radar (SAR) tomography (TomoSAR) extends the synthetic aperture principle into the elevation direction for 3-D imaging. The resolution in the elevation direction depends on the size of the elevation aperture, i.e., on the spread of orbit tracks. Since the orbits of modern meter-resolution spaceborne SAR systems, like TerraSAR-X, are tightly controlled, the tomographic elevation resolution is at least an order of magnitude lower than in range and azimuth. Hence, super-resolution reconstruction algorithms are desired. The high anisotropy of the 3-D tomographic resolution element renders the signals sparse in the elevation direction; only a few pointlike reflections are expected per azimuth-range cell. This property suggests using compressive sensing (CS) methods for tomographic reconstruction. This paper presents the theory of 4-D (differential, i.e., space-time) CS TomoSAR and compares it with parametric (nonlinear least squares) and nonparametric (singular value decomposition) reconstruction methods. Super-resolution properties and point localization accuracies are demonstrated using simulations and real data. A CS reconstruction of a building complex from TerraSAR-X spotlight data is presented.

460 citations
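The sparsity argument above, only a few pointlike reflections per azimuth-range cell along elevation, is what licenses sparse reconstruction. The paper solves an L1-regularized CS problem; as a hedged illustration of the same recovery principle, the sketch below simulates a toy elevation steering matrix (all geometry invented) and recovers two closely spaced scatterers with orthogonal matching pursuit, a simpler greedy sparse solver:

```python
import numpy as np

rng = np.random.default_rng(0)
n_tracks, n_bins = 50, 200                 # few baselines, dense elevation grid
s = np.linspace(-100, 100, n_bins)         # elevation axis (m), hypothetical
b = rng.uniform(-1.0, 1.0, n_tracks)       # hypothetical normalized baselines
A = np.exp(1j * np.outer(b, s))            # elevation steering matrix

x_true = np.zeros(n_bins)
x_true[60], x_true[65] = 1.0, 0.8          # two scatterers in one range-azimuth cell
noise = 0.01 * (rng.normal(size=n_tracks) + 1j * rng.normal(size=n_tracks))
g = A @ x_true + noise                     # stack of complex measurements

def omp(A, g, k):
    """Orthogonal matching pursuit: greedily select k steering vectors and
    refit their complex amplitudes by least squares."""
    residual, support, coef = g.copy(), [], None
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], g, rcond=None)
        residual = g - A[:, support] @ coef
    return support, coef

support, coef = omp(A, g, 2)
```

The underdetermined shape (50 measurements, 200 unknowns) mirrors the elevation-aperture problem; the basis-pursuit solvers the paper studies give the super-resolution guarantees that a greedy method only approximates.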


Journal ArticleDOI
TL;DR: First 3-D and 4-D reconstructions of an entire building complex with very high level of detail from spaceborne SAR data by pixelwise TomoSAR are presented.
Abstract: Synthetic aperture radar tomography (TomoSAR) extends the synthetic aperture principle into the elevation direction for 3-D imaging. It uses stacks of several acquisitions from slightly different viewing angles (the elevation aperture) to reconstruct the reflectivity function along the elevation direction by means of spectral analysis for every azimuth-range pixel. The new class of meter-resolution spaceborne SAR systems (TerraSAR-X and COSMO-SkyMed) offers a tremendous improvement in tomographic reconstruction of urban areas and man-made infrastructure. The high resolution matches the inherent scale of buildings (floor height, distance of windows, etc.). This paper demonstrates the tomographic potential of these SARs and the achievable quality using TerraSAR-X spotlight data of an urban environment. A new Wiener-type regularization of the singular-value decomposition method, equivalent to a maximum a posteriori estimator, is introduced for TomoSAR and extended to the differential case (4-D, i.e., space-time). Different model selection schemes for the estimation of the number of scatterers in a resolution cell are compared and proven to be applicable in practice. Two parametric estimation algorithms of the scatterers' elevation and their velocities are evaluated. The first 3-D and 4-D reconstructions of an entire building complex (including its radar reflectivity) with very high level of detail from spaceborne SAR data by pixelwise TomoSAR are presented.

411 citations
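The Wiener-type regularization of the SVD mentioned above can be sketched generically: instead of inverting each singular value 1/σ, each component is damped by a factor σ²/(σ² + 1/SNR), so noise-dominated components are suppressed. The matrix and noise level below are invented; this shows the filtering idea, not the paper's TomoSAR system model:

```python
import numpy as np

def wiener_svd_inverse(A, g, snr=10.0):
    """Solve g = A x with Wiener-filtered singular values: the inverse 1/sigma
    is replaced by sigma / (sigma^2 + 1/snr), damping small (noisy) sigmas.
    For snr -> infinity this approaches the plain pseudo-inverse."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + 1.0 / snr)
    return Vh.conj().T @ (filt * (U.conj().T @ g))

rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))              # hypothetical forward operator
x_true = rng.normal(size=10)
g = A @ x_true + 0.01 * rng.normal(size=30)
x_hat = wiener_svd_inverse(A, g, snr=1e6)  # high SNR ~ ordinary least squares
```

The MAP interpretation stated in the abstract corresponds to this filter arising from a Gaussian prior on x with variance tied to the assumed SNR.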


Journal ArticleDOI
TL;DR: In experiments with Hyperion and AVIRIS hyperspectral data, the proposed SLML-WkNN performed better than ULML-kNN and SLML-kNN, and the highest accuracies were obtained using weights provided by supervised LTSA and LLE.
Abstract: Approaches to combine local manifold learning (LML) and the k-nearest-neighbor (kNN) classifier are investigated for hyperspectral image classification. Based on supervised LML (SLML) and kNN, a new SLML-weighted kNN (SLML-WkNN) classifier is proposed. This method is appealing, as it does not require dimensionality reduction and only depends on the weights provided by the kernel function of the specific ML method. Performance of the proposed classifier is compared to that of unsupervised LML (ULML) and SLML for dimensionality reduction in conjunction with kNN (ULML-kNN and SLML-kNN). Three LML methods, locally linear embedding (LLE), local tangent space alignment (LTSA), and Laplacian eigenmaps, are investigated with these classifiers. In experiments with Hyperion and AVIRIS hyperspectral data, the proposed SLML-WkNN performed better than ULML-kNN and SLML-kNN, and the highest accuracies were obtained using weights provided by supervised LTSA and LLE.

404 citations
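The weighted-kNN idea above, neighbors voting with kernel-derived weights rather than equally, can be sketched with a plain Gaussian (heat) kernel standing in for the manifold-learning weights; the data and the kernel choice are illustrative assumptions, not the paper's LML-derived weights:

```python
import numpy as np

def weighted_knn(X_train, y_train, x, k=5, gamma=1.0):
    """kNN vote where each neighbor is weighted by a Gaussian kernel of its
    squared distance, a simple stand-in for LML kernel weights."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    nn = np.argsort(d2)[:k]                  # indices of the k nearest neighbors
    w = np.exp(-gamma * d2[nn])              # kernel weights
    classes = np.unique(y_train)
    scores = np.array([w[y_train[nn] == c].sum() for c in classes])
    return classes[np.argmax(scores)]

# toy two-class data, well separated
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(3, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
```

Because classification happens directly in the original feature space, no dimensionality-reduction step is needed, which is the appeal the abstract notes.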


Journal ArticleDOI
A. Sampath, Jie Shan
TL;DR: An extended boundary regularization approach is developed based on multiple parallel and perpendicular line pairs to achieve topologically consistent and geometrically correct building models.
Abstract: This paper presents a solution framework for the segmentation and reconstruction of polyhedral building roofs from aerial light detection and ranging (lidar) point clouds. The eigenanalysis is first carried out for each roof point of a building within its Voronoi neighborhood. Such analysis not only yields the surface normal for each lidar point but also separates the lidar points into planar and nonplanar ones. In the second step, the surface normals of all planar points are clustered with the fuzzy k-means method. To optimize this clustering process, a potential-based approach is used to estimate the number of clusters, while considering both geometry and topology for the cluster similarity. The final step of segmentation separates the parallel and coplanar segments based on their distances and connectivity, respectively. Building reconstruction starts with forming an adjacency matrix that represents the connectivity of the segmented planar segments. A roof interior vertex is determined by intersecting all planar segments that meet at one point, whereas constraints in the form of vertical walls or boundary are applied to determine the vertices on the building outline. Finally, an extended boundary regularization approach is developed based on multiple parallel and perpendicular line pairs to achieve topologically consistent and geometrically correct building models. This paper describes the detailed principles and implementation steps for the aforementioned solution framework. Results for a number of buildings with diverse roof complexities are presented and evaluated.

364 citations
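The first step above, eigenanalysis of a point's neighborhood, can be sketched directly: the eigenvector of the neighborhood covariance with the smallest eigenvalue approximates the surface normal, and a small ratio of that eigenvalue to the total flags the point as planar. For simplicity the "neighborhood" below is a whole synthetic facet rather than a Voronoi neighborhood, and the plane is invented:

```python
import numpy as np

def neighborhood_normal(points):
    """Eigenanalysis of a 3-D point neighborhood: smallest-eigenvalue
    eigenvector of the covariance ~ surface normal; lam_min / lam_sum is a
    planarity measure (near 0 for planar neighborhoods)."""
    c = points - points.mean(axis=0)
    cov = c.T @ c / len(points)
    lam, vec = np.linalg.eigh(cov)        # eigenvalues in ascending order
    return vec[:, 0], lam[0] / lam.sum()

# points sampled on the plane z = 0.5 x + 0.2 y (a tilted roof facet)
rng = np.random.default_rng(0)
xy = rng.uniform(-1, 1, (50, 2))
pts = np.column_stack([xy, 0.5 * xy[:, 0] + 0.2 * xy[:, 1]])
n, planarity = neighborhood_normal(pts)
```

Clustering such per-point normals (the paper uses fuzzy k-means) then groups points into candidate roof planes.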


Journal ArticleDOI
TL;DR: Experimental results of SDSOI on a large image set captured by optical sensors from multiple satellites show that the approach is effective in distinguishing between ships and nonships, and obtains a satisfactory ship detection performance.
Abstract: Ship detection from remote sensing imagery is very important, with a wide array of applications in areas such as fishery management, vessel traffic services, and naval warfare. This paper focuses on the issue of ship detection from spaceborne optical images (SDSOI). Although the advantages of synthetic aperture radar (SAR) mean that most current ship detection approaches are based on SAR images, SAR still has disadvantages, such as the limited number of SAR sensors, the relatively long revisit cycle, and the relatively low resolution. With the increasing number of optical sensors and the resulting improvement in continuous coverage, SDSOI can partly overcome the shortcomings of SAR-based approaches and should be investigated to help satisfy the requirements of real-time ship monitoring. In SDSOI, several factors such as clouds, ocean waves, and small islands affect the performance of ship detection. This paper proposes a novel, complete, and operational hierarchical SDSOI approach based on shape and texture features, which is considered a sequential coarse-to-fine elimination process of false alarms. First, simple shape analysis is adopted to eliminate evident false candidates generated by image segmentation with global and local information and to extract ship candidates while keeping missed alarms as low as possible. Second, a novel semisupervised hierarchical classification approach based on various features is presented to distinguish between ships and nonships to remove most false alarms. 
Besides a complete and operational SDSOI approach, the other contributions of our approach include the following three aspects: 1) it classifies ship candidates by using their class probability distributions rather than the directly extracted features; 2) the relevant classes are automatically built from the samples' appearances and their feature attributes in a semisupervised mode; and 3) besides commonly used shape and texture features, a new texture operator, i.e., local multiple patterns, is introduced to enhance the representation ability of the feature set in feature extraction. Experimental results of SDSOI on a large image set captured by optical sensors from multiple satellites show that our approach is effective in distinguishing between ships and nonships, and obtains a satisfactory ship detection performance.

355 citations


Journal ArticleDOI
TL;DR: The proposed approach gives rise to an operational classifier, as opposed to previously presented transductive or Laplacian support vector machines (TSVM or LapSVM, respectively), which constitutes a general framework for building computationally efficient semisupervised methods.
Abstract: A framework for semisupervised remote sensing image classification based on neural networks is presented. The methodology consists of adding a flexible embedding regularizer to the loss function used for training neural networks. Training is done using stochastic gradient descent with additional balancing constraints to avoid falling into local minima. The method constitutes a generalization of both supervised and unsupervised methods and can handle millions of unlabeled samples. Therefore, the proposed approach gives rise to an operational classifier, as opposed to previously presented transductive or Laplacian support vector machines (TSVM or LapSVM, respectively). The proposed methodology constitutes a general framework for building computationally efficient semisupervised methods. The method is compared with LapSVM and TSVM in semisupervised scenarios, to SVM in supervised settings, and to online and batch k-means for unsupervised learning. Results demonstrate the improved classification accuracy and scalability of this approach on several hyperspectral image classification problems.

Journal ArticleDOI
TL;DR: This paper proposes a novel method for the focusing of raw data in the framework of radar imaging and shows that an image can be reconstructed, without the loss of resolution, after dropping a large percentage of the received pulses, which would allow the implementation of wide-swath modes without reducing the azimuth resolution.
Abstract: Radar data have already proven to be compressible with no significant losses for most of the applications in which they are used. In the framework of information theory, the compressibility of a signal implies that it can be decomposed onto a reduced set of basic elements. Since the same quantity of information is carried by the original signal and its decomposition, it can be deduced that a certain degree of redundancy exists in the explicit representation. According to the theory of compressive sensing (CS), due to this redundancy, it is possible to infer an accurate representation of an unknown compressible signal through a highly incomplete set of measurements. Based on this assumption, this paper proposes a novel method for the focusing of raw data in the framework of radar imaging. The technique presented is introduced as an alternative option to traditional matched filtering, and it suggests that new, more efficient data-acquisition modes are possible in orbital configurations. In this paper, this method is first tested on 1-D simulated signals, and results are discussed. An experiment with synthetic aperture radar (SAR) raw data is also described. Its purpose is to show the potential of CS applied to SAR systems. In particular, we show that an image can be reconstructed, without loss of resolution, after dropping a large percentage of the received pulses, which would allow the implementation of wide-swath modes without reducing the azimuth resolution.

Journal ArticleDOI
TL;DR: Experimental results clearly demonstrate that the generation of an SVM-based classifier system with RFS significantly improves overall classification accuracy as well as producer's and user's accuracies.
Abstract: The accuracy of supervised land cover classifications depends on factors such as the chosen classification algorithm, adequate training data, the input data characteristics, and the selection of features. Hyperspectral imaging provides more detailed spectral and spatial information on the land cover than other remote sensing resources. Over the past ten years, traditional and formerly widely accepted statistical classification methods have been superseded by more recent machine learning algorithms, e.g., support vector machines (SVMs), or by multiple classifier systems (MCS). This can be explained by limitations of statistical approaches with regard to high-dimensional data, multimodal classes, and the often limited availability of training data. In this study, MCSs based on SVM and random feature selection (RFS) are applied to explore the potential of a synergistic use of the two concepts. We investigated how the number of selected features and the size of the MCS influence classification accuracy using two hyperspectral data sets from different environmental settings. In addition, experiments were conducted with a varying number of training samples. Accuracies are compared with regular SVM and random forests. Experimental results clearly demonstrate that the generation of an SVM-based classifier system with RFS significantly improves overall classification accuracy as well as producer's and user's accuracies. In addition, the ensemble strategy results in smoother, i.e., more realistic, classification maps than those from stand-alone SVM. Findings from the experiments were successfully transferred onto an additional hyperspectral data set.
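The SVM-plus-RFS ensemble described above can be sketched with scikit-learn: each ensemble member sees a different random feature subset, and predictions are combined by majority vote. The data, subset size, and ensemble size below are invented illustration values, not the study's settings:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for a hyperspectral set: 100 "bands", 15 informative.
X, y = make_classification(n_samples=200, n_features=100, n_informative=15,
                           random_state=0)
X_tr, y_tr, X_te, y_te = X[:100], y[:100], X[100:], y[100:]

# Ensemble of SVMs, each trained on a random feature subset (RFS).
votes = []
for _ in range(15):
    feats = rng.choice(X.shape[1], size=20, replace=False)
    clf = SVC(kernel="rbf", gamma="scale").fit(X_tr[:, feats], y_tr)
    votes.append(clf.predict(X_te[:, feats]))
majority = (np.mean(votes, axis=0) > 0.5).astype(int)  # majority vote (2 classes)
acc = float((majority == y_te).mean())
```

The vote averaging is also what produces the smoother maps the abstract mentions: individual members disagree on noisy pixels, and the majority suppresses isolated errors.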

Journal ArticleDOI
TL;DR: A solely histogram-based method to achieve automatic registration between TerraSAR-X and Ikonos images acquired specifically over urban areas is analyzed, and the findings indicate that the proposed method is successful in estimating large global shifts followed by a fine refinement of registration parameters for high-resolution images acquired over dense urban areas.
Abstract: The launch of high-resolution remote sensing satellites like TerraSAR-X, WorldView, and Ikonos has tremendously benefited the combined application of synthetic aperture radar (SAR) and optical imagery. Specifically, in the case of natural calamities or disasters, decision makers can now easily use an archived optical image together with a newly acquired (post-disaster) SAR image. Although the latest satellites provide the end user with already georeferenced and orthorectified data products, registration differences still exist between different data sets. These differences need to be handled through quick automated registration techniques before the images are used in different applications. Specifically, mutual information (MI) has been utilized for the intricate SAR-optical registration problem. The computation of this metric involves estimating the joint histogram directly from image intensity values, which may have been generated from different sensor geometries and/or modalities (e.g., SAR and optical). Satellites carrying high-resolution remote sensing sensors like TerraSAR-X and Ikonos generate an enormous data volume along with fine Earth-observation details, which may lead MI to fail to detect the correct registration parameters. In this paper, a solely histogram-based method to achieve automatic registration between TerraSAR-X and Ikonos images acquired specifically over urban areas is analyzed. With future sensors in perspective, techniques like compression and segmentation for handling the enormous data volume and the incompatible radiometry generated by different SAR-optical image acquisition characteristics are also analyzed. The findings indicate that the proposed method is successful in estimating large global shifts, followed by a fine refinement of registration parameters, for high-resolution images acquired over dense urban areas.
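The MI criterion described above is straightforward to compute from a joint intensity histogram, and registration amounts to searching for the transformation that maximizes it. The sketch below does an exhaustive integer-shift search on a toy image pair (pure translation, invented data), far simpler than the paper's SAR-optical setting:

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information estimated from the joint intensity histogram."""
    h, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = h / h.sum()
    px = p.sum(axis=1, keepdims=True)      # marginal of a
    py = p.sum(axis=0, keepdims=True)      # marginal of b
    nz = p > 0
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())

# toy pair: the "moving" image is a shifted, noisy copy of the reference
rng = np.random.default_rng(0)
ref = rng.random((64, 64))
mov = np.roll(ref, (3, -2), axis=(0, 1)) + rng.normal(0, 0.05, (64, 64))

# exhaustive search over integer shifts for the MI maximum
best = max(((dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)),
           key=lambda s: mutual_information(ref, np.roll(mov, s, axis=(0, 1))))
```

MI needs no assumption that the two intensities are linearly related, which is why it suits multimodal SAR-optical pairs; the paper's contribution concerns making the histogram estimate robust for high-resolution urban data.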

Journal ArticleDOI
TL;DR: An improved version of CS-based high-resolution imaging to overcome strong noise and clutter by combining coherent projectors and weighting with the CS optimization for ISAR image generation is presented.
Abstract: The theory of compressed sampling (CS) indicates that exact recovery of an unknown sparse signal can be achieved from very limited samples. For inverse synthetic aperture radar (ISAR), the image of a target is usually constructed by strong scattering centers whose number is much smaller than the number of pixels of an image plane. This sparsity of the ISAR signal intrinsically paves the way to apply CS to the reconstruction of high-resolution ISAR imagery. CS-based high-resolution ISAR imaging with limited pulses has been developed, and it performs well in the case of high signal-to-noise ratios. However, strong noise and clutter are usually inevitable in radar imaging, which challenges current high-resolution imaging approaches based on parametric modeling, including the CS-based approach. In this paper, we present an improved version of CS-based high-resolution imaging to overcome strong noise and clutter by combining coherent projectors and weighting with the CS optimization for ISAR image generation. Real data are used to test the robustness of the improved CS imaging compared with other current techniques. Experimental results show that the approach is capable of precise estimation of scattering centers and effective suppression of noise.

Journal ArticleDOI
TL;DR: The TerraSAR-X mission concept within the context of a public-private partnership agreement between the German Aerospace Center (DLR) and the industry is described and the mission accomplishments achieved so far are focused on.
Abstract: This paper describes the TerraSAR-X mission concept within the context of a public-private partnership (PPP) agreement between the German Aerospace Center (DLR) and the industry. It briefly describes the PPP concept as well as the overall project organization. This paper then gives an overview of the satellite design and the corresponding ground segment, as well as the main mission parameters. After a short introduction to the scientific and commercial exploitation scheme, this paper finally focuses on the mission accomplishments achieved so far during the ongoing mission.

Journal ArticleDOI
TL;DR: An improved three-component decomposition for polarimetric synthetic aperture radar (SAR) data is proposed, and the results show that the pixels with negative powers are totally eliminated by the proposed decomposition, demonstrating the effectiveness of the new model.
Abstract: An improved three-component decomposition for polarimetric synthetic aperture radar (SAR) data is proposed in this paper. The reasons for the emergence of negative powers in the Freeman decomposition have been analyzed, and three corresponding improvements are included in the proposed method. First, the deorientation process is applied to the coherency matrix before it is decomposed into three scattering components. Then, the coherency matrix with the maximal polarimetric entropy, i.e., the unit matrix, is used as the new volume-scattering model instead of the original one adopted in the Freeman decomposition. A power constraint is also added to the proposed three-component decomposition. The E-SAR polarimetric data acquired over the Oberpfaffenhofen area in Germany are applied in the experiment. The results show that the pixels with negative powers are totally eliminated by the proposed decomposition, demonstrating the effectiveness of the new model.

Journal ArticleDOI
TL;DR: A novel data acquisition scheme and an imaging algorithm for TWI radar based on compressive sensing, which states that a signal having a sparse representation can be reconstructed from a small number of nonadaptive randomized projections by solving a tractable convex program is presented.
Abstract: To achieve high-resolution 2-D images, through-wall imaging (TWI) radar with ultra-wideband and long antenna arrays faces considerable technical challenges such as a prolonged data collection time, a huge amount of data, and a high hardware complexity. This paper presents a novel data acquisition scheme and an imaging algorithm for TWI radar based on compressive sensing (CS), which states that a signal having a sparse representation can be reconstructed from a small number of nonadaptive randomized projections by solving a tractable convex program. Instead of measuring all spatial-frequency data, a few samples, by employing an overcomplete dictionary, are sufficient to obtain reliable target space images even at high noise levels. Preliminary simulated and experimental results show that the proposed algorithm outperforms the conventional delay-and-sum beamforming method even though many fewer CS measurements are used.

Journal ArticleDOI
TL;DR: A new multiple-classifier approach for spectral-spatial classification of hyperspectral images is proposed, which significantly improves classification accuracies when compared with previously proposed classification techniques.
Abstract: A new multiple-classifier approach for spectral-spatial classification of hyperspectral images is proposed. Several classifiers are used independently to classify an image. For every pixel, if all the classifiers have assigned this pixel to the same class, the pixel is kept as a marker, i.e., a seed of the spatial region with a corresponding class label. We propose to use spectral-spatial classifiers at the preliminary step of the marker-selection procedure, each of them combining the results of a pixelwise classification and a segmentation map. Different segmentation methods based on dissimilar principles lead to different classification results. Furthermore, a minimum spanning forest is built, where each tree is rooted on a classification-driven marker and forms a region in the spectral-spatial classification map. Experimental results are presented for two hyperspectral airborne images. The proposed method significantly improves classification accuracies when compared with previously proposed classification techniques.
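The unanimity rule for marker selection, keeping a pixel only when every classifier assigns it the same label, is simple to state in array form. A minimal sketch with toy label maps (the paper's classifiers additionally combine pixelwise classification with segmentation maps):

```python
import numpy as np

def select_markers(label_maps):
    """Keep a pixel as a marker only where all classifiers agree.

    label_maps : list of 2-D integer class-label arrays (one per classifier).
    Returns a map with the agreed class label, or -1 where classifiers disagree.
    """
    stack = np.stack(label_maps)                  # (n_classifiers, H, W)
    agree = np.all(stack == stack[0], axis=0)     # unanimous-agreement mask
    return np.where(agree, stack[0], -1)

# toy 2x2 label maps from three hypothetical classifiers
a = np.array([[1, 2], [3, 3]])
b = np.array([[1, 2], [0, 3]])
c = np.array([[1, 1], [3, 3]])
markers = select_markers([a, b, c])
```

The surviving pixels then serve as the seeds (roots) of the minimum spanning forest described above.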

Journal ArticleDOI
TL;DR: Findings indicate that TerraSAR-X is capable of providing useful data for this purpose, and the algorithm is aimed at producing urban flood extents with which to calibrate and validate urban flood inundation models.
Abstract: Flooding is a major hazard in both rural and urban areas worldwide, but it is in urban areas that the impacts are most severe. An investigation of the ability of high-resolution TerraSAR-X synthetic aperture radar (SAR) data to detect flooded regions in urban areas is described. The study uses a TerraSAR-X image of a one-in-150-year flood near Tewkesbury, U.K., in 2007, for which contemporaneous aerial photography exists for validation. The German Aerospace Center (DLR) SAR end-to-end simulator (SETES) was used in conjunction with airborne scanning laser altimetry (LiDAR) data to estimate regions of the image in which water would not be visible due to shadow or layover caused by buildings and taller vegetation. A semiautomatic algorithm for the detection of floodwater in urban areas is described, together with its validation using aerial photographs. Of the urban water pixels that are visible to TerraSAR-X, 76% were correctly detected, with an associated false positive rate of 25%. If all the urban water pixels were considered, including those in shadow and layover regions, these figures fell to 58% and 19%, respectively. The algorithm is aimed at producing urban flood extents with which to calibrate and validate urban flood inundation models, and these findings indicate that TerraSAR-X is capable of providing useful data for this purpose.
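The accuracy figures quoted above can be reproduced mechanically from binary flood masks. A sketch, where the exact definition of "false positive rate" (here, the fraction of detected pixels that are not flooded) is an assumption for illustration, and the `visible` mask stands in for the simulator-derived shadow/layover map:

```python
import numpy as np

def detection_stats(detected, truth, visible=None):
    """Per-pixel detection rate and false-positive rate for a flood map.

    detected, truth : boolean arrays (flood / no flood).
    visible : optional boolean mask restricting the evaluation, e.g. to
              pixels not lost to radar shadow or layover.
    """
    if visible is not None:
        detected, truth = detected[visible], truth[visible]
    tp = np.sum(detected & truth)
    fp = np.sum(detected & ~truth)
    detection_rate = tp / max(np.sum(truth), 1)
    false_positive_rate = fp / max(np.sum(detected), 1)
    return detection_rate, false_positive_rate

# toy 6-pixel example
truth = np.array([1, 1, 1, 1, 0, 0], dtype=bool)
detected = np.array([1, 1, 1, 0, 1, 0], dtype=bool)
dr, fpr = detection_stats(detected, truth)
```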

Journal ArticleDOI
TL;DR: A modified version of the subspace-based optimization method for solving inverse-scattering problems is found to share several properties with the contrast-source-inversion method, which significantly speeds up the convergence of the algorithm.
Abstract: This paper investigates a modified version of the subspace-based optimization method for solving inverse-scattering problems. The method is found to share several properties with the contrast-source-inversion method. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the rest is determined by the optimization method. This feature significantly speeds up the convergence of the algorithm. There is great flexibility in partitioning the space of induced current into two orthogonal complementary subspaces: the signal subspace and the noise subspace. This flexibility enables the algorithm to perform robustly against noise. Numerical simulations validate the efficacy of the proposed method: it is fast convergent and robust against noise.
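The subspace split can be illustrated with a singular-value decomposition of the measurement operator: the component of the induced current lying in the signal subspace follows directly from the spectrum, with no optimization, while the orthogonal (noise-subspace) component is what the optimizer would seek. The operator and data below are random toys, not a scattering configuration:

```python
import numpy as np

def subspace_current(G, y, L):
    """Signal-subspace part of the induced current from the spectrum alone.

    G : mapping from induced current to scattered field, y : measured data,
    L : number of leading singular vectors retained (the signal subspace).
    """
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    coeffs = (U[:, :L].conj().T @ y) / s[:L]      # spectrum analysis, no optimization
    return Vh[:L].conj().T @ coeffs

rng = np.random.default_rng(1)
G = rng.normal(size=(20, 30))                     # toy operator, not a Green function
x_true = rng.normal(size=30)
y = G @ x_true
x_sig = subspace_current(G, y, L=10)
```

By construction, the scattered field produced by `x_sig` matches the data exactly on the retained signal subspace; the choice of `L` is what gives the method its robustness against noise.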

Journal ArticleDOI
TL;DR: Two semisupervised one-class support vector machine (OC-SVM) classifiers for remote sensing applications based on a simple modification of the standard SVM cost function which penalizes more the errors made when classifying samples of the target class.
Abstract: This paper presents two semisupervised one-class support vector machine (OC-SVM) classifiers for remote sensing applications. In one-class image classification, one tries to detect pixels belonging to one of the classes in the image and reject the others. When few labeled pixels of only one class are available, obtaining a reliable classifier is a difficult task. In the particular case of SVM-based classifiers, this task is even harder because the free parameters of the model need to be finely adjusted, but no clear criterion can be adopted. In order to improve the OC-SVM classifier accuracy and alleviate the problem of free-parameter selection, the information provided by unlabeled samples present in the scene can be used. In this paper, we present two state-of-the-art algorithms for semisupervised one-class classification for remote sensing classification problems. The first proposed algorithm is based on modifying the OC-SVM kernel by modeling the data marginal distribution with the graph Laplacian built with both labeled and unlabeled samples. The second one is based on a simple modification of the standard SVM cost function which penalizes more the errors made when classifying samples of the target class. The good performance of the proposed methods is illustrated in four challenging remote sensing image classification scenarios where the goal is to detect one of the classes present on the scene. In particular, we present results for multisource urban monitoring, hyperspectral crop detection, multispectral cloud screening, and change-detection problems. Experimental results show the suitability of the proposed techniques, particularly in cases with few or poorly representative labeled samples.
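The second algorithm's idea, penalizing errors on the target class more heavily in the SVM cost, can be sketched with a class-weighted hinge loss trained by subgradient descent. This is an illustrative linear stand-in, not the paper's kernel formulation; the weights, learning rate, and toy data are assumptions:

```python
import numpy as np

def weighted_linear_svm(X, y, c_target=10.0, c_other=1.0, lr=0.01, epochs=200):
    """Linear SVM by subgradient descent with a class-weighted hinge loss.

    Errors on the target class (y = +1) are penalized c_target/c_other
    times more than errors on the other samples.
    """
    w = np.zeros(X.shape[1])
    b = 0.0
    weights = np.where(y > 0, c_target, c_other)
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                       # margin-violating samples
        grad_w = w - (weights[active] * y[active]) @ X[active] / len(y)
        grad_b = -np.sum(weights[active] * y[active]) / len(y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(2.0, 1.0, (40, 2)),      # target class
               rng.normal(-2.0, 1.0, (40, 2))])    # other samples
y = np.hstack([np.ones(40), -np.ones(40)])
w, b = weighted_linear_svm(X, y)
```

The asymmetric penalty biases the decision boundary away from the target class, trading false negatives on the class of interest for false positives elsewhere.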

Journal ArticleDOI
TL;DR: The neuro-fuzzy model using remote-sensing data and geographic information system for landslide susceptibility analysis in a part of the Cameron Highlands areas in Malaysia yields reasonable results, which can be used for preliminary land-use planning purposes.
Abstract: This paper presents the results of a neuro-fuzzy model using remote-sensing data and a geographic information system for landslide susceptibility analysis in a part of the Cameron Highlands area in Malaysia. Landslide locations in the study area were identified by interpreting aerial photographs and satellite images, supported by extensive field surveys. Landsat TM satellite imagery was used to map the vegetation index. Maps of the topography, lineaments, Normalized Difference Vegetation Index (NDVI), and land cover were constructed from the spatial data sets. Eight landslide conditioning factors, namely, altitude, slope gradient, curvature, distance from the drainage, distance from the road, lithology, distance from the faults, and NDVI, were extracted from the spatial database. These factors were analyzed using a neuro-fuzzy model, the adaptive neuro-fuzzy inference system (ANFIS), to produce the landslide susceptibility maps. During the model development, a total of five landslide susceptibility models were constructed. For verification, the results of the analyses were compared with the field-verified landslide locations. Additionally, the receiver operating characteristic curves for all landslide susceptibility models were drawn, and the area-under-curve values were calculated. Landslide locations were used to validate the results of the landslide susceptibility maps, and the verification showed a 97% accuracy for model 5, which employed all the parameters produced in the present study as landslide conditioning factors. The validation results showed sufficient agreement between the obtained susceptibility map and the existing data on landslide areas. Qualitatively, the model yields reasonable results, which can be used for preliminary land-use planning purposes.
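The area-under-curve validation used above is model-independent; for reference, the ROC AUC can be computed directly from the rank-sum (Mann-Whitney) identity. A minimal sketch with toy scores (ties between scores are not handled):

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity.
    Assumes no tied scores (ties are not handled in this sketch)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels.astype(bool)
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

labels = np.array([1, 1, 0, 0, 1, 0])
scores = np.array([0.9, 0.8, 0.3, 0.4, 0.7, 0.2])   # perfectly ranked case
```

An AUC of 1.0 means every landslide location is scored above every non-landslide location; 0.5 is chance level.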

Journal ArticleDOI
TL;DR: The intention of this paper is to present the SAR data processing concept and the essentials of SAR processing, together with the comprehensive portfolio of products that reflects the instrument's diverse imaging capabilities, the available processing options, and the achieved product quality.
Abstract: The TerraSAR-X mission was launched in June 2007. After successful completion of the commissioning phase, the mission entered its operational phase in January 2008. Since that time, TerraSAR-X has provided the scientific remote sensing community and commercial customers with high-quality spaceborne synthetic aperture radar (SAR) data products. The intention of this paper is to present the SAR data processing concept and the essentials of SAR processing, together with the comprehensive portfolio of products that reflects the instrument's diverse imaging capabilities, the available processing options, and the achieved product quality. Furthermore, it provides details on how to fully exploit the precision of the TerraSAR-X products.

Journal ArticleDOI
TL;DR: A model is designed to combine a physical model-based polarimetric decomposition with the random-volume-over-ground (RVoG) PolInSAR parameter inversion approach, which enhances the estimation of the vertical forest structure parameters by enabling the ground-to-volume ratio, the temporal decorrelation, and the differential extinction.
Abstract: This paper concerns forest parameter retrieval from polarimetric interferometric synthetic aperture radar (PolInSAR) data considering two layers, one for the ground under the vegetation and one for the volumetric canopy. A model is designed to combine a physical model-based polarimetric decomposition with the random-volume-over-ground (RVoG) PolInSAR parameter inversion approach. The combination of a polarimetric scattering media model with a PolInSAR RVoG vertical structure model provides the possibility to separate the ground and the volume coherency matrices based on polarimetric signatures and interferometric coherence diversity. The proposed polarimetric decomposition characterizes volumetric media by the degree of polarization orientation randomness and by the particle scattering anisotropy. Using the full model enhances the estimation of the vertical forest structure parameters by enabling us to estimate the ground-to-volume ratio, the temporal decorrelation, and the differential extinction. For forest vegetation observed at L-band, this model accounts for the ground topography, forest and canopy layer heights, wave attenuation in the canopy, tree morphology in the form of the angular distribution and the effective shapes of the branches, and the contributions from the ground level consisting of surface scattering and double-bounce ground-trunk interactions, as well as volumetric understory scattering. The parameter estimation performance is evaluated on real airborne L-band SAR data of the Traunstein test site, acquired by the German Aerospace Center (DLR)'s E-SAR sensor in 2003, in both single- and multibaseline configurations. The retrieved forest height is compared with the ground-truth measurements, revealing, for the given test site, an average root-mean-square error (rmse) of about 5 m in the repeat-pass configuration. This implies an improvement in rmse by over 2 m in comparison to the pure coherence-based RVoG PolInSAR parameter inversion.
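For reference, the standard RVoG relation that the combined inversion builds on expresses the complex interferometric coherence at polarization \(\vec{w}\) through the ground-to-volume amplitude ratio \(\mu(\vec{w})\), the ground phase \(\phi_0\), and the volume-only decorrelation \(\gamma_v\) (this is the conventional formulation; the paper extends it with temporal decorrelation and differential extinction):

```latex
\gamma(\vec{w}) = e^{i\phi_0}\,\frac{\gamma_v + \mu(\vec{w})}{1 + \mu(\vec{w})},
\qquad
\gamma_v = \frac{\displaystyle\int_0^{h_v} e^{2\sigma z/\cos\theta}\, e^{i k_z z}\, dz}
                {\displaystyle\int_0^{h_v} e^{2\sigma z/\cos\theta}\, dz},
```

where \(h_v\) is the volume (canopy) height, \(\sigma\) the mean wave extinction, \(\theta\) the incidence angle, and \(k_z\) the vertical interferometric wavenumber. Inverting this relation over several polarizations is what yields the forest height estimates compared against ground truth above.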

Journal ArticleDOI
TL;DR: Experiments carried out in multi- and hyperspectral, contextual, and multisource remote sensing data classification confirm the capability of the method in ranking the relevant features and show the computational efficiency of the proposed strategy.
Abstract: The increase in spatial and spectral resolution of the satellite sensors, along with the shortening of the time-revisiting periods, has provided high-quality data for remote sensing image classification. However, the high-dimensional feature space induced by using many heterogeneous information sources precludes the use of simple classifiers; thus, proper feature selection is required for discarding irrelevant features and adapting the model to the specific problem. This paper proposes to classify the images and simultaneously to learn the relevant features in such high-dimensional scenarios. The proposed method is based on the automatic optimization of a linear combination of kernels dedicated to different meaningful sets of features. Such sets can be groups of bands, contextual or textural features, or bands acquired by different sensors. The combination of kernels is optimized through gradient descent on the support vector machine objective function. Even though the combination is linear, the ranked relevance takes into account the intrinsic nonlinearity of the data through kernels. Since a naive selection of the free parameters of the multiple-kernel method is computationally demanding, we propose an efficient model selection procedure based on the kernel alignment. The result is a weight (learned from the data) for each kernel, where both relevant and meaningless image features automatically emerge after training the model. Experiments carried out in multi- and hyperspectral, contextual, and multisource remote sensing data classification confirm the capability of the method in ranking the relevant features and show the computational efficiency of the proposed strategy.
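The kernel-alignment criterion mentioned above scores how well a kernel matrix matches the ideal kernel built from the labels, which is what lets relevant feature groups "emerge" with larger weights. A sketch with two hypothetical feature groups, one class-separated and one pure noise (toy data, not the paper's model-selection procedure):

```python
import numpy as np

def kernel_alignment(K, y):
    """Alignment between a kernel matrix K and the ideal kernel y y^T."""
    Ky = np.outer(y, y)
    return np.sum(K * Ky) / (np.linalg.norm(K) * np.linalg.norm(Ky))

def rbf_kernel(X, gamma=1.0):
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

rng = np.random.default_rng(0)
y = np.hstack([np.ones(20), -np.ones(20)])
X_informative = np.vstack([rng.normal(1.0, 0.3, (20, 2)),   # class-separated group
                           rng.normal(-1.0, 0.3, (20, 2))])
X_noise = rng.normal(0.0, 1.0, (40, 2))                     # irrelevant feature group
a_informative = kernel_alignment(rbf_kernel(X_informative), y)
a_noise = kernel_alignment(rbf_kernel(X_noise), y)
```

The informative group's kernel aligns far better with the label structure, so a weighting scheme driven by alignment would assign it the larger coefficient.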

Journal ArticleDOI
TL;DR: The performance characteristics of the Atmospheric Weather Electromagnetic System for Observation, Modeling, and Education (AWESOME) instrument are described, including sensitivity, frequency and phase response, timing accuracy, and cross modulation.
Abstract: A new instrument has been developed and deployed for sensitive reception of broadband extremely low frequency (ELF) (defined in this paper as 300-3000 Hz) and very low frequency (VLF) (defined in this paper as 3-30 kHz) radio signals from natural and man-made sources, based on designs used for decades at Stanford University. We describe the performance characteristics of the Atmospheric Weather Electromagnetic System for Observation, Modeling, and Education (AWESOME) instrument, including sensitivity, frequency and phase response, timing accuracy, and cross modulation. We also describe a broad range of scientific applications that use AWESOME ELF/VLF data involving measurements of both subionospherically and magnetospherically propagating signals.

Journal ArticleDOI
TL;DR: This paper presents an efficient phase preserving processor for the focusing of data acquired in sliding spotlight and Terrain Observation by Progressive Scans (TOPS) imaging modes with a new azimuth scaling approach, whose kernel is exactly the same for sliding Spotlight and TOPS modes.
Abstract: This paper presents an efficient phase preserving processor for the focusing of data acquired in sliding spotlight and Terrain Observation by Progressive Scans (TOPS) imaging modes. They share in common a linear variation of the Doppler centroid along the azimuth dimension, which is due to a steering of the antenna (either mechanically or electronically) throughout the data take. Existing approaches for the azimuth processing can become inefficient due to the additional processing to overcome the folding in the focused domain. In this paper, a new azimuth scaling approach is presented to perform the azimuth processing, whose kernel is exactly the same for sliding spotlight and TOPS modes. The possibility to use the proposed approach to process data acquired in the ScanSAR mode, as well as a discussion concerning staring spotlight, is also included. Simulations with point targets and real data acquired by TerraSAR-X in sliding spotlight and TOPS modes are used to validate the developed algorithm.
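The linear Doppler-centroid drift that sliding spotlight and TOPS share is commonly handled by multiplying the azimuth signal with a conjugate chirp (deramping), which unfolds the spectrum. A toy illustration of that single step (the Doppler rate and signal are assumptions; the paper's azimuth scaling kernel involves considerably more):

```python
import numpy as np

def azimuth_deramp(data, t, k_rot):
    """Multiply by a conjugate chirp to remove the linear Doppler-centroid
    drift caused by antenna steering, compressing the azimuth spectrum.
    data : (azimuth, range) samples; t : azimuth time axis (s);
    k_rot : steering-induced Doppler rate (Hz/s), a toy value here."""
    ramp = np.exp(-1j * np.pi * k_rot * t ** 2)
    return data * ramp[:, None]

N = 1024
t = np.linspace(-0.5, 0.5, N, endpoint=False)        # 1 s of azimuth time
k_rot = 400.0
data = np.exp(1j * np.pi * k_rot * t ** 2)[:, None]  # steered (chirped) azimuth signal
deramped = azimuth_deramp(data, t, k_rot)
```

Before deramping the azimuth energy is spread over roughly `k_rot` hertz; afterwards it collapses to a narrow band, which is what avoids folding in the focused domain.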

Journal ArticleDOI
TL;DR: Ground-based measurements of land-surface temperature performed in a homogeneous site of rice crops close to Valencia, Spain, showed the good LST accuracy that can be achieved with ETM+ thermal data.
Abstract: Ground-based measurements of land-surface temperature (LST) performed in a homogeneous site of rice crops close to Valencia, Spain, were used for the validation of the calibration and the atmospheric correction of the Landsat-7 Enhanced Thematic Mapper Plus (ETM+) thermal band. Atmospheric radiosondes were launched at the test site around the satellite overpasses. Field-emissivity measurements of the near-full-vegetated rice crops were also performed. Seven concurrences of Landsat-7 and ground data were obtained in July and August 2004-2007. The ground measurements were used with the MODTRAN-4 radiative transfer model to simulate at-sensor radiances and brightness temperatures, which were compared with the calibrated ETM+ observations over the test site. For the cases analyzed here, the differences between the simulated and ETM+ brightness temperatures show an average bias of 0.6 K and a root-mean-square difference (rmsd) of ±0.8 K. The ground-based measurements were also used for the validation of LSTs derived from ETM+ at-sensor radiances with atmospheric correction calculated from the following: 1) the local-radiosonde profiles and 2) the operational atmospheric-correction tool available at http://atmcorr.gsfc.nasa.gov. For the first case, the differences between the ground and satellite LSTs ranged from -0.6 to 1.4 K, with a mean bias of 0.7 K and an rmsd = ±1.0 K. For the second case, the differences ranged between -1.8 and 1.3 K, with a zero average bias and an rmsd = ±1.1 K. Although the validation cases are few and limited to one land cover at morning and summer, results show the good LST accuracy that can be achieved with ETM+ thermal data.
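The bias and root-mean-square-difference statistics used throughout this validation are simple to compute; a sketch with hypothetical brightness temperatures (illustrative values only, not the paper's measurements):

```python
import numpy as np

def bias_and_rmsd(simulated, observed):
    """Average bias and root-mean-square difference between two temperature sets."""
    d = np.asarray(simulated) - np.asarray(observed)
    return d.mean(), np.sqrt(np.mean(d ** 2))

# hypothetical at-sensor brightness temperatures (K), for illustration only
sim = np.array([301.2, 299.8, 302.5, 300.9])
obs = np.array([300.6, 299.1, 301.8, 300.5])
bias, rmsd = bias_and_rmsd(sim, obs)
```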

Journal ArticleDOI
TL;DR: An iterative gradient method in which the steepest descent direction, used to update iteratively the permittivity and conductivity distributions in an optimal way, is found by cross-correlating the forward vector wavefield and the backward-propagated vectorial residual wavefield.
Abstract: We have developed a new full-waveform ground-penetrating radar (GPR) multicomponent inversion scheme for imaging the shallow subsurface using arbitrary recording configurations. It yields significantly higher resolution images than conventional tomographic techniques based on first-arrival times and pulse amplitudes. The inversion is formulated as a nonlinear least squares problem in which the misfit between observed and modeled data is minimized. The full-waveform modeling is implemented by means of a finite-difference time-domain solution of Maxwell's equations. We derive here an iterative gradient method in which the steepest descent direction, used to update iteratively the permittivity and conductivity distributions in an optimal way, is found by cross-correlating the forward vector wavefield and the backward-propagated vectorial residual wavefield. The formulation of the solution is given in a very general, albeit compact and elegant, fashion. Each iteration step of our inversion scheme requires several calculations of propagating wavefields. Novel features of the scheme compared to previous full-waveform GPR inversions are as follows: 1) The permittivity and conductivity distributions are updated simultaneously (rather than consecutively) at each iterative step using improved gradient and step length formulations; 2) the scheme is able to exploit the full vector wavefield; and 3) various data sets/survey types (e.g., crosshole and borehole-to-surface) can be individually or jointly inverted. Several synthetic examples involving both homogeneous and layered stochastic background models with embedded anomalous inclusions demonstrate the superiority of the new scheme over previous approaches.
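The cross-correlation step that forms the steepest descent direction can be illustrated on analytic toy fields: the adjoint-state gradient at each cell is the zero-lag time correlation of the forward field with the back-propagated residual field. The plane-wave fields and the Gaussian residual envelope below are assumptions for illustration, not FDTD output:

```python
import numpy as np

def steepest_descent_direction(forward_field, adjoint_field, dt):
    """Zero-lag cross-correlation in time of the forward wavefield with the
    back-propagated residual wavefield (sketch for the conductivity kernel;
    a permittivity kernel would correlate with the time derivative of the
    forward field instead)."""
    return np.sum(forward_field * adjoint_field, axis=0) * dt

# toy plane-wave fields on a 1-D grid: arrays of shape (n_time, n_cells)
t = np.linspace(0.0, 1.0, 200)[:, None]
x = np.linspace(0.0, 1.0, 51)[None, :]
E_fwd = np.sin(2 * np.pi * (t - x))                      # forward wavefield
E_adj = E_fwd * np.exp(-((x - 0.5) ** 2) / 0.01)         # residual energy near x = 0.5
g = steepest_descent_direction(E_fwd, E_adj, dt=1.0 / 200)
```

The gradient peaks where the residual wavefield carries energy (here around the grid midpoint), which is exactly how the correlation localizes the model update on the anomalous inclusion.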