Author

Hugh G. Lewis

Bio: Hugh G. Lewis is an academic researcher from the University of Southampton. The author has contributed to research in the topics of Space debris and Population. The author has an h-index of 24 and has co-authored 160 publications receiving 3,101 citations.


Papers
Journal ArticleDOI
TL;DR: The use of a Hopfield neural network to map the spatial distribution of classes more reliably using prior information of pixel composition determined from fuzzy classification was investigated, and the resultant maps provided an accurate and improved representation of the land covers studied.
Abstract: Fuzzy classification techniques have been developed recently to estimate the class composition of image pixels, but their output provides no indication of how these classes are distributed spatially within the instantaneous field of view represented by the pixel. As such, while the accuracy of land cover target identification has been improved using fuzzy classification, it remains for robust techniques that provide better spatial representation of land cover to be developed. Such techniques could provide more accurate land cover metrics for determining social or environmental policy, for example. The use of a Hopfield neural network to map the spatial distribution of classes more reliably using prior information of pixel composition determined from fuzzy classification was investigated. An approach was adopted that used the output from a fuzzy classification to constrain a Hopfield neural network formulated as an energy minimization tool. The network converges to a minimum of an energy function, defined as a goal and several constraints. Extracting the spatial distribution of target class components within each pixel was, therefore, formulated as a constraint satisfaction problem with an optimal solution determined by the minimum of the energy function. This energy minimum represents a "best guess" map of the spatial distribution of class components in each pixel. The technique was applied to both synthetic and simulated Landsat TM imagery, and the resultant maps provided an accurate and improved representation of the land covers studied, with root mean square errors (RMSEs) for Landsat imagery of the order of 0.09 pixels in the new fine resolution image recorded.

313 citations
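
The energy-minimisation idea in the abstract above can be sketched in a few lines: fine-grid class values are nudged towards the mean of their neighbours (the spatial goal) while being pulled back so that each coarse pixel's class area matches the proportion supplied by the fuzzy classification (the constraint). The update rule, step sizes and 4-neighbourhood below are illustrative assumptions, not the authors' published formulation.

```python
# Hypothetical sketch of sub-pixel mapping by energy minimisation, loosely
# inspired by the constrained Hopfield approach described above.
import numpy as np

def subpixel_map(proportions, zoom=4, iters=200, lam=1.0, seed=0):
    """proportions: (H, W) fraction of the target class in each coarse pixel."""
    rng = np.random.default_rng(seed)
    H, W = proportions.shape
    # Continuous "neuron" states on the fine grid, initialised near the prior.
    v = np.repeat(np.repeat(proportions, zoom, axis=0), zoom, axis=1)
    v = np.clip(v + 0.01 * rng.standard_normal(v.shape), 0.0, 1.0)
    for _ in range(iters):
        # Goal term: move each fine pixel towards the mean of its 4 neighbours,
        # which encourages spatially coherent class patches.
        nbr = (np.roll(v, 1, 0) + np.roll(v, -1, 0) +
               np.roll(v, 1, 1) + np.roll(v, -1, 1)) / 4.0
        # Constraint term: class area within each coarse pixel should match the
        # fuzzy-classification proportion.
        block = v.reshape(H, zoom, W, zoom).mean(axis=(1, 3))
        err = np.repeat(np.repeat(block - proportions, zoom, axis=0), zoom, axis=1)
        v = np.clip(v + 0.2 * (nbr - v) - lam * 0.2 * err, 0.0, 1.0)
    return (v > 0.5).astype(int)   # hard sub-pixel class map
```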

Journal ArticleDOI
TL;DR: In this paper, three techniques for mapping the sub-pixel proportions of land cover classes in the New Forest, U.K. were compared: (i) artificial neural networks (ANN), (ii) mixture modelling, and (iii) fuzzy c-means classification.
Abstract: A problem with NOAA AVHRR imagery is that the intrinsic scale of spatial variation in land cover in the U.K. is usually finer than the scale of sampling imposed by the image pixels. The result is that most NOAA AVHRR pixels contain a mixture of land cover types (sub-pixel mixing). Three techniques for mapping the sub-pixel proportions of land cover classes in the New Forest, U.K. were compared: (i) artificial neural networks (ANN); (ii) mixture modelling; and (iii) fuzzy c-means classification. NOAA AVHRR imagery and SPOT HRV imagery, both for 28 June 1994, were obtained. The SPOT HRV images were classified using the maximum likelihood method, and used to derive the 'known' sub-pixel proportions of each land cover class for each NOAA AVHRR pixel. These data were then used to evaluate the predictions made (using the three techniques and the NOAA AVHRR imagery) in terms of the amount of information provided, the accuracy with which that information is provided, and the ease of implementation. The ...

295 citations
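
As a rough illustration of the mixture-modelling approach (technique (ii) above), a mixed pixel spectrum can be unmixed into class proportions by constrained least squares. The endmember spectra, band count and sum-to-one weighting in this sketch are invented for demonstration and are not taken from the paper.

```python
# Toy linear spectral unmixing: a pixel spectrum is modelled as a
# proportion-weighted sum of class "endmember" spectra.
import numpy as np
from scipy.optimize import nnls

def unmix(pixel, endmembers, sum_weight=100.0):
    """pixel: (bands,); endmembers: (bands, classes). Returns class proportions."""
    # A heavily weighted row of ones softly enforces sum-to-one, while
    # nnls enforces non-negativity of the proportions.
    A = np.vstack([endmembers, sum_weight * np.ones(endmembers.shape[1])])
    b = np.append(pixel, sum_weight)
    p, _ = nnls(A, b)
    return p / p.sum()

endmembers = np.array([[0.05, 0.30, 0.20],   # band 1 reflectance per class
                       [0.04, 0.45, 0.25],   # band 2
                       [0.30, 0.55, 0.10]])  # band 3
mixed = endmembers @ np.array([0.6, 0.3, 0.1])   # synthetic mixed pixel
print(unmix(mixed, endmembers))                  # ~[0.6, 0.3, 0.1]
```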

Journal ArticleDOI
TL;DR: This work applies a Hopfield neural network technique to super-resolution mapping of land cover features larger than a pixel, using information of pixel composition determined from soft classification, and shows how the approach can be extended in a new way to predict the spatial pattern of subpixel scale features.

236 citations

Journal ArticleDOI
TL;DR: It is shown that the constrained least squares LSMM is equivalent to the linear SVM; the proof relies on showing that the LSMM algorithm possesses the "maximum margin" property, which provides important insights about the role of the bias term and rank deficiency in the pure pixel matrix within the LSMM algorithm.
Abstract: Mixture modeling is becoming an increasingly important tool in the remote sensing community as researchers attempt to resolve subpixel, area information. This paper compares a well-established technique, linear spectral mixture models (LSMM), with a much newer idea based on data selection, support vector machines (SVM). It is shown that the constrained least squares LSMM is equivalent to the linear SVM, which relies on proving that the LSMM algorithm possesses the "maximum margin" property. This in turn shows that the LSMM algorithm can be derived from the same optimality conditions as the linear SVM, which provides important insights about the role of the bias term and rank deficiency in the pure pixel matrix within the LSMM algorithm. It also highlights one of the main advantages for using the linear SVM algorithm in that it performs automatic "pure pixel" selection from a much larger database. In addition, extensions to the basic SVM algorithm allow the technique to be applied to data sets that exhibit spectral confusion (overlapping sets of pure pixels) and to data sets that have nonlinear mixture regions. Several illustrative examples, based on an area-labeled Landsat dataset, are used to demonstrate the potential of this approach.

207 citations
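
One practical point made above is that the linear SVM performs automatic "pure pixel" selection: its support vectors play the role of the pure pixels that an LSMM would otherwise need supplied. The toy two-band spectra and the rescaling of the decision function below are assumptions used only to make that point concrete.

```python
# Illustrative two-class example: the support vectors of a linear SVM act as
# the automatically selected "pure" training pixels.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
grass = rng.normal([0.10, 0.45], 0.02, size=(50, 2))   # two-band spectra
water = rng.normal([0.05, 0.03], 0.02, size=(50, 2))
X = np.vstack([grass, water])
y = np.r_[np.ones(50), np.zeros(50)]

svm = SVC(kernel="linear", C=10.0).fit(X, y)
print("pixels retained as support vectors:", len(svm.support_))
# The signed distance to the hyperplane, clipped to [0, 1], can be read as a
# crude mixture proportion between the two classes.
proportion = np.clip((svm.decision_function(X) + 1) / 2, 0, 1)
```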

Journal ArticleDOI
TL;DR: The most common statistically significant covariate used in landslide logistic regression was slope, followed by aspect; as discussed by the authors, the significant covariates varied between earthquake-induced and rainfall-induced landslides, and between landslide types.
Abstract: Logistic regression studies which assess landslide susceptibility are widely available in the literature. However, a global review of these studies to synthesise and compare the results does not exist. There are currently no guidelines for the selection of covariates to be used in logistic regression analysis, and as such, the covariates selected vary widely between studies. An inventory of significant covariates associated with landsliding produced from the full set of such studies globally would be a useful aid to the selection of covariates in future logistic regression studies. Thus, studies using logistic regression for landslide susceptibility estimation published in the literature were collated, and a database was created of the significant factors affecting the generation of landslides. The database records the paper the data were taken from, the year of publication, the approximate longitude and latitude of the study area, the trigger method (where appropriate) and the most dominant type of landslides occurring in the study area. The significant and non-significant (at the 95% confidence level) covariates were recorded, as well as their coefficient, statistical significance and unit of measurement. The most common statistically significant covariate used in landslide logistic regression was slope, followed by aspect. The significant covariates related to landsliding varied for earthquake-induced landslides compared to rainfall-induced landslides, and between landslide types. More importantly, the full range of covariates used was identified along with their frequencies of inclusion. The analysis showed that there needs to be more clarity and consistency in the methodology for selecting covariates for logistic regression analysis and in the metrics included when presenting the results. Several recommendations for future studies were given.

191 citations
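
A minimal sketch of the kind of model the review collates is a logistic regression of landslide presence or absence on terrain covariates such as slope and aspect, with significance judged from the coefficient p-values at the 95% level. The synthetic data and covariate choices below are placeholders, not results from any of the collated studies.

```python
# Hedged sketch of a landslide-susceptibility logistic regression.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 500
slope = rng.uniform(0, 45, n)             # degrees
aspect = rng.uniform(0, 360, n)           # degrees from north
logit = -4.0 + 0.12 * slope               # slope drives susceptibility here
landslide = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(np.column_stack([slope, aspect]))
model = sm.Logit(landslide, X).fit(disp=False)
print(model.params)     # fitted coefficients (const, slope, aspect)
print(model.pvalues)    # covariates with p < 0.05 count as significant
```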


Cited by
Journal ArticleDOI
TL;DR: A new method for performing a nonlinear form of principal component analysis by the use of integral operator kernel functions is proposed and experimental results on polynomial feature extraction for pattern recognition are presented.
Abstract: A new method for performing a nonlinear form of principal component analysis is proposed. By the use of integral operator kernel functions, one can efficiently compute principal components in high-dimensional feature spaces, related to input space by some nonlinear map—for instance, the space of all possible five-pixel products in 16 × 16 images. We give the derivation of the method and present experimental results on polynomial feature extraction for pattern recognition.

8,175 citations
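
The kernel trick described above can be tried directly: principal components in a high-dimensional polynomial feature space (loosely, the space of five-pixel products) are computed from kernel evaluations alone, without ever forming that space explicitly. The use of scikit-learn's KernelPCA and the random stand-in image patches are choices made for this sketch, not the paper's original implementation.

```python
# Kernel PCA with a degree-5 polynomial kernel on random 16x16 "patches".
import numpy as np
from sklearn.decomposition import KernelPCA

X = np.random.default_rng(0).random((100, 256))   # 100 flattened 16x16 patches
kpca = KernelPCA(n_components=10, kernel="poly", degree=5)
features = kpca.fit_transform(X)                   # nonlinear principal components
print(features.shape)                              # (100, 10)
```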

Journal ArticleDOI
TL;DR: This issue's collection of essays should help familiarize readers with this interesting new racehorse in the Machine Learning stable, and give a practical guide and a new technique for implementing the algorithm efficiently.
Abstract: My first exposure to Support Vector Machines came this spring when I heard Sue Dumais present impressive results on text categorization using this analysis technique. This issue's collection of essays should help familiarize our readers with this interesting new racehorse in the Machine Learning stable. Bernhard Scholkopf, in an introductory overview, points out that a particular advantage of SVMs over other learning algorithms is that they can be analyzed theoretically using concepts from computational learning theory, and at the same time can achieve good performance when applied to real problems. Examples of these real-world applications are provided by Sue Dumais, who describes the aforementioned text-categorization problem, yielding the best results to date on the Reuters collection, and Edgar Osuna, who presents strong results on application to face detection. Our fourth author, John Platt, gives us a practical guide and a new technique for implementing the algorithm efficiently.

4,319 citations

Journal ArticleDOI
TL;DR: It is unlikely that a single standardized method of accuracy assessment and reporting can be identified, but some possible directions for future research that may facilitate accuracy assessment are highlighted.

3,800 citations

Journal ArticleDOI
TL;DR: In this paper, it is demonstrated how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis.
Abstract: Principal component analysis (PCA) is a ubiquitous technique for data analysis and processing, but one which is not based upon a probability model. In this paper we demonstrate how the principal axes of a set of observed data vectors may be determined through maximum-likelihood estimation of parameters in a latent variable model closely related to factor analysis. We consider the properties of the associated likelihood function, giving an EM algorithm for estimating the principal subspace iteratively, and discuss the advantages conveyed by the definition of a probability density function for PCA.

3,362 citations
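
The EM algorithm mentioned above for the probabilistic PCA model (observations generated as a linear map of low-dimensional latent variables plus isotropic Gaussian noise) can be written compactly. The variable names, initialisation and fixed iteration count below are simplifications sketched from the standard updates, not the authors' implementation.

```python
# Minimal EM iteration for probabilistic PCA.
import numpy as np

def ppca_em(X, q=2, iters=100, seed=0):
    """X: (n, d) observations; q: latent dimension. Returns loadings W, noise var."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    Xc = X - X.mean(axis=0)                      # maximum-likelihood mean removed
    W = rng.standard_normal((d, q))
    sigma2 = 1.0
    for _ in range(iters):
        # E-step: posterior moments of the latent variables z_n.
        Minv = np.linalg.inv(W.T @ W + sigma2 * np.eye(q))
        Ez = Xc @ W @ Minv                       # (n, q) posterior means
        Ezz = n * sigma2 * Minv + Ez.T @ Ez      # summed second moments
        # M-step: re-estimate the loadings and the isotropic noise variance.
        W = Xc.T @ Ez @ np.linalg.inv(Ezz)
        sigma2 = (np.sum(Xc**2) - 2 * np.sum(Ez * (Xc @ W))
                  + np.trace(Ezz @ W.T @ W)) / (n * d)
    return W, sigma2

W, s2 = ppca_em(np.random.default_rng(1).random((200, 5)), q=2)
```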