
Showing papers by "Heesung Kwon published in 2013"


Journal ArticleDOI
TL;DR: The proposed technique, called contextual support vector machine (SVM), is aimed at jointly exploiting both local spectral and spatial information in a reproducing kernel Hilbert space (RKHS) by collectively embedding a set of spectral signatures within a confined local region into a single point in the RKHS.
Abstract: In this letter, a kernel-based contextual classification approach built on the principle of a newly introduced mapping technique, called Hilbert space embedding, is proposed. The proposed technique, called contextual support vector machine (SVM), is aimed at jointly exploiting both local spectral and spatial information in a reproducing kernel Hilbert space (RKHS) by collectively embedding a set of spectral signatures within a confined local region into a single point in the RKHS that can uniquely represent the corresponding local hyperspectral pixels. Embedding is conducted by calculating the weighted empirical mean of the mapped points in the RKHS to exploit the similarities and variations in the local spectral and spatial information. The weights are adaptively estimated based on the distance between the mapped point in consideration and its neighbors in the RKHS. An SVM separating hyperplane is built to maximize the margin between classes formed by weighted empirical means. The proposed technique showed significant improvement over the composite kernel-based SVM on several hyperspectral images.
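The weighted empirical-mean embedding described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names are hypothetical, and the exact neighbor-weighting rule (here, exponential decay of the mean RKHS distance to the other pixels in the patch) is an assumption standing in for the paper's adaptive estimate.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Gaussian RBF kernel between rows of X and rows of Y
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def weighted_mean_embedding_kernel(patch_a, patch_b, gamma=1.0):
    """Kernel between two local patches of spectral signatures, each
    collapsed to a single weighted empirical mean in the RKHS (sketch)."""
    def weights(patch):
        K = rbf_kernel(patch, patch, gamma)
        # Squared RKHS distance: ||phi(x_i)-phi(x_j)||^2 = K_ii + K_jj - 2 K_ij
        d2 = np.diag(K)[:, None] + np.diag(K)[None, :] - 2 * K
        w = np.exp(-d2.mean(axis=1))  # pixels closer to their neighbors weigh more
        return w / w.sum()
    wa, wb = weights(patch_a), weights(patch_b)
    # Inner product of the two mean embeddings:
    # <mu_a, mu_b> = sum_ij wa_i wb_j k(a_i, b_j)
    return wa @ rbf_kernel(patch_a, patch_b, gamma) @ wb
```

A standard SVM can then be trained on the Gram matrix of these patch-level kernel values, so each confined local region acts as a single training point.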

34 citations


Journal ArticleDOI
TL;DR: An algorithm to determine the optimal full-diagonal bandwidth parameters of the Gaussian kernels of the individual SVMs is presented by minimizing the radius-margin bound, which is used by the sparse SVM ensemble to perform binary classification.
Abstract: Recently, a kernel-based ensemble learning technique for hyperspectral detection/classification problems has been introduced by the authors, to provide robust classification over hyperspectral data with relatively high levels of noise and background clutter. The kernel-based ensemble technique first randomly selects spectral feature subspaces from the input data. Each individual classifier, which is in fact a support vector machine (SVM), then independently conducts its own learning within its corresponding spectral feature subspace and hence constitutes a weak classifier. The decisions from these weak classifiers are equally or adaptively combined to generate the final ensemble decision. However, in such ensemble learning, little attempt has previously been made to jointly optimize the weak classifiers and the aggregating process for combining the subdecisions. The main goal of this paper is to achieve an optimal sparse combination of the subdecisions by jointly optimizing the separating hyperplane obtained by optimally combining the kernel matrices of the SVM classifiers and the corresponding weights of the subdecisions required for the aggregation process. Sparsity is induced by applying an l1 norm constraint on the weighting coefficients. Consequently, the weights of most of the subclassifiers become zero after the optimization, and only a few of the subclassifiers with non-zero weights contribute to the final ensemble decision. Moreover, in this paper, an algorithm to determine the optimal full-diagonal bandwidth parameters of the Gaussian kernels of the individual SVMs is also presented by minimizing the radius-margin bound. The optimized full-diagonal bandwidth Gaussian kernels are used by the sparse SVM ensemble to perform binary classification.
The performance of the proposed technique with optimized kernel parameters is compared to that of a variant with a single bandwidth parameter obtained using cross-validation, by testing both on various data sets. On average, the proposed sparse kernel-based ensemble learning algorithm with optimized full-diagonal bandwidth parameters shows an improvement of 20% over the existing ensemble learning techniques.
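The l1-induced sparsity over subdecision weights can be illustrated with a much-simplified sketch. The paper jointly optimizes the SVM hyperplane and the aggregation weights; below, only the aggregation step is shown, learning nonnegative weights over fixed weak-classifier decision values by projected subgradient descent on the hinge loss under an l1 budget. All function names are hypothetical and the solver is an assumption, not the authors' method.

```python
import numpy as np

def project_simplex(v, z=1.0):
    # Euclidean projection of v onto {w >= 0, sum(w) = z}
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u - (css - z) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - z) / (rho + 1)
    return np.maximum(v - theta, 0.0)

def sparse_aggregate(decisions, y, l1_budget=1.0, lr=0.01, n_iter=500):
    """Learn a sparse nonnegative combination of weak-classifier
    subdecisions (sketch of the l1-constrained aggregation step only).
    decisions: (n_samples, n_weak) decision values; y in {-1, +1}."""
    n, m = decisions.shape
    w = np.full(m, l1_budget / m)
    for _ in range(n_iter):
        margins = y * (decisions @ w)
        violated = (margins < 1).astype(float)
        # Hinge-loss subgradient w.r.t. the weights
        grad = -(decisions.T * y) @ violated / n
        w = np.maximum(w - lr * grad, 0.0)       # nonnegativity
        if w.sum() > l1_budget:                  # l1-ball projection
            w = project_simplex(w, l1_budget)
    return w
```

The projection step is what drives most weights to exactly zero, so only a few subclassifiers survive into the final ensemble decision, mirroring the sparsity behavior described in the abstract.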

33 citations


Journal ArticleDOI
TL;DR: The aim of this special issue is to advance the capabilities of algorithms and analysis technologies for multispectral and hyperspectral imagery by addressing some of the above-mentioned critical issues.
Abstract: Copyright © 2013 Heesung Kwon et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. Recent advances in multispectral and hyperspectral sensing technologies coupled with rapid growth in computing power have led to new opportunities in remote sensing—higher spatial and/or spectral resolution over larger areas leads to more detailed and comprehensive land cover mapping and more sensitive target detection. However, these massive hyperspectral datasets provide new challenges as well. Accurate and timely processing of hyperspectral data in large volumes must be treated in a nonconventional way in order to drastically enhance data modeling and representation, learning and inference, physics-based analysis, computational complexity, and so forth. Current practical issues in processing multispectral and hyperspectral data include robust characterization of target and background signatures and scene characterization [1–3], joint exploitation of spatial and spectral features [4], background modeling for anomaly detection [5, 6], robust target detection techniques [7], low-dimensional representation, fusion of learning algorithms, the balance of statistical and physical modeling, and real-time computation [8, 9]. The aim of this special issue is to advance the capabilities of algorithms and analysis technologies for multispectral and hyperspectral imagery by addressing some of the above-mentioned critical issues. We have received many submissions and selected six papers after careful and rigorous peer review. The accepted papers cover a wide range of topics, such as anomaly detection, target detection and classification, dimensionality reduction and reconstruction, fusion of hyperspectral detection algorithms, and non-Gaussian mixture modeling for hyperspectral imagery.
The brief summaries of the accepted papers are as follows. The paper "Hyperspectral anomaly detection: comparative evaluation in scenes with diverse complexity," by D. Borghys et al., provides a comprehensive review of popular hyperspectral anomaly detection methods, an important problem in hyperspectral signal processing, including the global Reed-Xiaoli (RX) method, subspace methods, local methods, and segmentation-based methods. The extensive performance analysis of these methods is presented in scenes with various backgrounds and different representative targets. The comparative results reveal the superiority of some detectors in certain scenes over other detectors. The paper "Non-Gaussian linear mixing models for hyperspectral images," by P. Bajorski, addresses the problem of modeling hyperspectral data using non-Gaussian distributions. It is done by assuming a linear mixing model consisting of nonrandom-structured background and random noise terms. The nonvariable part …

9 citations


Proceedings ArticleDOI
01 Jun 2013
TL;DR: A novel framework of sparse kernel learning for Support Vector Data Description (SVDD) based anomaly detection is presented and experimental results show that the proposed method can provide improved performance over the current state-of-the-art techniques.
Abstract: In this paper, a novel framework of sparse kernel learning for Support Vector Data Description (SVDD) based anomaly detection is presented. In this work, optimal sparse feature selection for anomaly detection is first modeled as a Mixed Integer Programming (MIP) problem. Due to the prohibitively high computational complexity of the MIP, it is relaxed into a Quadratically Constrained Linear Programming (QCLP) problem. The QCLP problem can then be practically solved by using an iterative optimization method, in which multiple subsets of features are iteratively found as opposed to a single subset. The QCLP-based iterative optimization problem is solved in a finite space called the Empirical Kernel Feature Space (EKFS) instead of in the input space or Reproducing Kernel Hilbert Space (RKHS). This is possible because the geometrical properties of the EKFS and the corresponding RKHS remain the same. An explicit nonlinear exploitation of the data in a finite EKFS then becomes achievable, which results in optimal feature ranking. Experimental results based on a hyperspectral image show that the proposed method can provide improved performance over the current state-of-the-art techniques.
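The sparse QCLP formulation in the EKFS is beyond a short sketch, but the underlying SVDD anomaly scoring it builds on can be illustrated. Below is a plain kernel SVDD baseline, assumed names throughout, with the dual solved by a crude projected-gradient loop rather than a QP solver; anomaly scores are squared RKHS distances to the learned hypersphere center.

```python
import numpy as np

def svdd_scores(X_train, X_test, gamma=0.5, C=0.1, n_iter=300, lr=0.05):
    """Plain kernel SVDD anomaly scoring (baseline sketch, not the
    paper's sparse QCLP variant). Higher score = more anomalous."""
    def K(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    Ktr = K(X_train, X_train)
    n = len(X_train)
    a = np.full(n, 1.0 / n)
    # Projected gradient ascent on the SVDD dual:
    #   max  sum_i a_i K_ii - a^T K a   s.t.  0 <= a_i <= C,  sum(a) = 1
    for _ in range(n_iter):
        grad = np.diag(Ktr) - 2 * Ktr @ a
        a = np.clip(a + lr * grad, 0.0, C)
        a = a / a.sum()  # crude renormalization back onto the simplex
    Kte = K(X_test, X_train)
    # ||phi(x) - c||^2 = k(x,x) - 2 sum_i a_i k(x, x_i) + a^T K a
    return 1.0 - 2 * Kte @ a + a @ Ktr @ a
```

The paper's contribution sits on top of such a detector: the QCLP iterations select sparse feature subsets in the EKFS before the SVDD boundary is formed, which this baseline does not attempt.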

3 citations