
Showing papers by "Reid B. Porter published in 2003"


Proceedings Article
21 Aug 2003
TL;DR: A rank-based measure of margin is presented that can robustly combine large numbers of base hypotheses and has performance similar to other types of regularization.
Abstract: We investigate how stack filter function classes like weighted order statistics can be applied to classification problems. This leads to a new design criterion for linear classifiers when inputs are binary-valued and weights are positive. We present a rank-based measure of margin that is directly optimized as a standard linear program and investigate its relationship to regularization. Our approach can robustly combine large numbers of base hypotheses and has performance similar to other types of regularization.
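The margin formulation described above lends itself to a linear-programming sketch. The snippet below is a hypothetical illustration rather than the paper's exact rank-based formulation: it maximizes a standard 1-norm-style margin over binary-valued inputs with positive, normalized weights using scipy.optimize.linprog, and the function name and normalization constraint are assumptions made for the example.

```python
# A minimal sketch of margin maximization with positive weights over binary
# features, in the spirit of the abstract above. The paper's rank-based margin
# may differ; this is a plain LP margin solved with scipy.optimize.linprog.
import numpy as np
from scipy.optimize import linprog

def fit_positive_weight_classifier(X, y):
    """X: (n, d) binary features in {0, 1}; y: (n,) labels in {-1, +1}."""
    n, d = X.shape
    # Decision variables: [w_1..w_d, t, m]; maximize the margin m.
    c = np.concatenate([np.zeros(d + 1), [-1.0]])
    # Margin constraints: y_i * (w . x_i - t) >= m for every sample i,
    # rewritten as -y_i * (x_i . w) + y_i * t + m <= 0.
    A_ub = np.hstack([-y[:, None] * X, y[:, None], np.ones((n, 1))])
    b_ub = np.zeros(n)
    # Normalize the positive weights so the margin is well defined.
    A_eq = np.concatenate([np.ones(d), [0.0, 0.0]])[None, :]
    b_eq = np.array([1.0])
    bounds = [(0, None)] * d + [(None, None), (None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    w, t = res.x[:d], res.x[d]
    return w, t
```

Predictions on new binary inputs would then be sign(X @ w - t); the paper's rank-based margin and its connection to stack filters go beyond this plain LP.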

15 citations


Journal ArticleDOI
TL;DR: A new shared weight network architecture that contains both neural network and morphological network functionality is introduced; its implementation on reconfigurable computers provides an estimated speed-up of two orders of magnitude over a Pentium III 500 MHz software implementation.
Abstract: We propose a system for solving pixel-based multi-spectral image classification problems with high throughput pipelined hardware. We introduce a new shared weight network architecture that contains both neural network and morphological network functionality. We then describe its implementation on Reconfigurable Computers. The implementation provides speed-up for our system in two ways: (1) in the optimization of our network, using Evolutionary Algorithms, for new features and data sets of interest; and (2) in the application of an optimized network to large image databases, or directly at the sensor as required. We apply our system to four feature identification problems of practical interest and compare its performance to two advanced software systems designed specifically for multi-spectral image classification. We achieve comparable performance in both training and testing. We estimate a speed-up of two orders of magnitude compared to a Pentium III 500 MHz software implementation.
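To make the "both neural and morphological" idea concrete, here is a small, hypothetical NumPy sketch of a shared-weight node that can be switched between a weighted-sum (convolution-style) response and a grayscale dilation or erosion over a multispectral window. The function name, window handling, and padding are illustrative assumptions and do not reflect the paper's actual architecture or its hardware mapping.

```python
# A sketch of a shared-weight node acting either as a conventional
# convolution (neural) or as a grayscale dilation/erosion (morphological)
# over a multispectral image cube.
import numpy as np

def shared_weight_node(image, weights, mode="neural"):
    """image: (H, W, B) multispectral cube; weights: (k, k, B) shared kernel."""
    H, W, B = image.shape
    k = weights.shape[0]
    pad = k // 2
    padded = np.pad(image, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            window = padded[i:i + k, j:j + k, :]
            if mode == "neural":      # weighted sum, as in a convolution
                out[i, j] = np.sum(window * weights)
            elif mode == "dilate":    # grayscale dilation: max of sums
                out[i, j] = np.max(window + weights)
            elif mode == "erode":     # grayscale erosion: min of differences
                out[i, j] = np.min(window - weights)
    return out
```

Because the same weight kernel is shared across all pixel positions, and the per-window operation is either a sum-of-products or a max/min of sums, this kind of node maps naturally onto a pipelined hardware datapath.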

10 citations


Proceedings ArticleDOI
01 Apr 2003
TL;DR: This work investigates the ability of a suite of automated feature extraction tools developed at Los Alamos National Laboratory to make use of multiple data sources for various feature extraction tasks, and compares and contrasts the software's capabilities on individual data sets from different sources, on fused data sets from multiple sources, and on fusion of results from the individual sources.
Abstract: An increasing number and variety of platforms are now capable of collecting remote sensing data over a particular scene. For many applications, the information available from any individual sensor may be incomplete, inconsistent or imprecise. However, other sources may provide complementary and/or additional data. Thus, for an application such as image feature extraction or classification, it may be that fusing the multiple data sources can lead to more consistent and reliable results. Unfortunately, with the increased complexity of the fused data, the search space of feature-extraction or classification algorithms also greatly increases. With a single data source, the determination of a suitable algorithm may be a significant challenge for an image analyst. With the fused data, the search for suitable algorithms can go far beyond the capabilities of a human in a realistic time frame, and becomes the realm of machine learning, where the computational power of modern computers can be harnessed to the task at hand. We describe experiments in which we investigate the ability of a suite of automated feature extraction tools developed at Los Alamos National Laboratory to make use of multiple data sources for various feature extraction tasks. We compare and contrast this software's capabilities on (1) individual data sets from different data sources, (2) fused data sets from multiple data sources, and (3) fusion of results from multiple individual data sources.
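The three comparisons enumerated above can be illustrated with a short, hypothetical sketch: a placeholder per-pixel classifier applied to each source alone, to a data-level fusion that stacks all bands, and to a decision-level fusion of the per-source results by majority vote. The classifier, the voting rule, and all names below are stand-ins for illustration, not the Los Alamos feature-extraction tools themselves.

```python
# Sketch contrasting (1) per-source classification, (2) data-level fusion of
# the raw bands, and (3) decision-level fusion of per-source results.
import numpy as np

def classify(pixels, labels, test_pixels):
    """Placeholder per-pixel classifier: 1-nearest-neighbour on spectra."""
    d = np.linalg.norm(test_pixels[:, None, :] - pixels[None, :, :], axis=2)
    return labels[np.argmin(d, axis=1)]

def compare_fusion(sources_train, labels, sources_test):
    """sources_*: list of (n_pixels, n_bands) arrays, one per data source."""
    # (1) individual data sources, classified separately
    per_source = [classify(tr, labels, te)
                  for tr, te in zip(sources_train, sources_test)]
    # (2) data-level fusion: stack all bands into one feature vector per pixel
    fused_data = classify(np.hstack(sources_train), labels,
                          np.hstack(sources_test))
    # (3) decision-level fusion: majority vote over per-source results,
    # assuming binary 0/1 class labels
    votes = np.stack(per_source)
    fused_results = (votes.mean(axis=0) >= 0.5).astype(int)
    return per_source, fused_data, fused_results
```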

3 citations