scispace - formally typeset

Showing papers on "Grayscale published in 1999"


Journal ArticleDOI
TL;DR: A human-in-the-loop approach is implemented in which the human delineates the pathology-bearing regions (PBRs) and a set of anatomical landmarks when the image is entered into the database.

401 citations


Book
10 Dec 1999
TL;DR: Part 1 Fractal Image Compression: Iterated Function Systems Fractal Encoding of Grayscale Images Speeding Up Fractal Encoding; Part 2 Wavelet Image Compression: Simple Wavelets Daubechies Wavelets Wavelet Image Compression Techniques Comparison of Fractal and Wavelet Image Compression.
Abstract: Part 1 Fractal Image Compression: Iterated Function Systems Fractal Encoding of Grayscale Images Speeding Up Fractal Encoding. Part 2 Wavelet Image Compression: Simple Wavelets Daubechies Wavelets Wavelet Image Compression Techniques Comparison of Fractal and Wavelet Image Compression Appendix A - Using the Accompanying Software Appendix B - Utility Windows Library (UWL) Appendix C - Organization of the Accompanying Software Source Code.

276 citations


Journal ArticleDOI
TL;DR: A method for detecting human faces in color images is described that first separates skin regions from nonskin regions and then locates faces within the skin regions, using a skin-color model built through a training process that captures the likelihoods of different colors representing skin.

273 citations


Journal ArticleDOI
01 Aug 1999
TL;DR: Experimental results indicate that the proposed digital image stabilizer is a computationally efficient alternative to existing DIS systems.
Abstract: A fast digital image stabilizer based on the Gray-coded bit-plane matching is proposed which is robust to irregular conditions such as moving objects and intentional panning. The proposed digital image stabilization (DIS) system performs motion estimation using the Gray-coded bit-plane of video sequences, greatly reducing the computational load. This motion estimation method can be realized using only binary Boolean functions which have significantly reduced computational complexity, while the accuracy of motion estimation is maintained. In order to further improve the computational efficiency, the Gray-coded bit-plane matching with the three-step search (3SS) is proposed. Experimental results indicate that the proposed digital image stabilizer is a computationally efficient alternative to existing DIS systems.
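The Gray-coded bit-plane matching at the heart of this stabilizer can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: it uses an exhaustive search rather than the three-step search, and the bit-plane index and search window are arbitrary choices.

```python
import numpy as np

def gray_code(img):
    # Gray code: adjacent intensities differ in exactly one bit, so small
    # brightness changes flip few bit-plane pixels.
    return img ^ (img >> 1)

def bit_plane(img, k):
    return (gray_code(img) >> k) & 1

def match_bit_planes(ref, cur, k=4, search=4):
    """Full-search translation estimate minimizing the fraction of
    mismatched bits (binary XOR count) between one Gray-coded bit-plane
    of each frame; returns the motion (dy, dx) of the current frame."""
    bp_ref, bp_cur = bit_plane(ref, k), bit_plane(cur, k)
    h, w = ref.shape
    best_cost, best_shift = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            a = bp_cur[max(dy, 0):h + min(dy, 0), max(dx, 0):w + min(dx, 0)]
            b = bp_ref[max(-dy, 0):h + min(-dy, 0), max(-dx, 0):w + min(-dx, 0)]
            cost = np.count_nonzero(a ^ b) / a.size
            if best_cost is None or cost < best_cost:
                best_cost, best_shift = cost, (dy, dx)
    return best_shift
```

Because the matching uses only XOR and bit counting, it maps naturally onto the binary Boolean hardware the abstract mentions.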

236 citations


Patent
11 Mar 1999
TL;DR: In this article, a technique for classifying video frames using statistical models of transform coefficients is disclosed, where image frames are transformed using a discrete cosine transform or Hadamard transform and the resulting transform matrices are reduced using truncation, principal component analysis, or linear discriminant analysis to produce feature vectors.
Abstract: Techniques for classifying video frames using statistical models of transform coefficients are disclosed. After optionally being decimated in time and space, image frames are transformed using a discrete cosine transform or Hadamard transform. The methods disclosed model image composition and operate on grayscale images. The resulting transform matrices are reduced using truncation, principal component analysis, or linear discriminant analysis to produce feature vectors. Feature vectors of training images for image classes are used to compute image class statistical models. Once image class statistical models are derived, individual frames are classified by the maximum likelihood resulting from the image class statistical models. Thus, the probabilities that a feature vector derived from a frame would be produced from each of the image class statistical models are computed. The frame is classified into the image class corresponding to the image class statistical model which produced the highest probability for the feature vector derived from the frame. Optionally, frame sequence information is taken into account by applying a hidden Markov model to represent image class transitions from the previous frame to the current frame. After computing all class probabilities for all frames in the video or sequence of frames using the image class statistical models and the image class transition probabilities, the final class is selected as having the maximum likelihood. Previous frames are selected in reverse order based upon their likelihood given determined current states.
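The transform-plus-statistical-model pipeline above can be sketched as follows. This is a simplified stand-in for the patent's method: frames are assumed square and grayscale, a diagonal-covariance Gaussian replaces whatever model form is used in practice, and the truncation size is arbitrary.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    m[0] /= np.sqrt(2.0)
    return m

def feature_vector(frame, keep=4):
    """2-D DCT of a (square) grayscale frame, truncated to the top-left
    keep x keep low-frequency coefficients."""
    d = dct_matrix(frame.shape[0])
    return (d @ frame @ d.T)[:keep, :keep].ravel()

class GaussianClassModel:
    """Diagonal-covariance Gaussian over a class's feature vectors."""
    def __init__(self, feats):
        f = np.asarray(feats, dtype=float)
        self.mean = f.mean(axis=0)
        self.var = f.var(axis=0) + 1e-6
    def log_likelihood(self, x):
        return -0.5 * np.sum(np.log(2 * np.pi * self.var)
                             + (x - self.mean) ** 2 / self.var)

def classify(frame, models):
    """Maximum-likelihood class label for one frame."""
    x = feature_vector(frame)
    return max(models, key=lambda c: models[c].log_likelihood(x))
```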

157 citations


Patent
14 Jan 1999
TL;DR: In this article, a discrete wavelet transform is employed for embedding gray scale images which can be as great as 25% of the host image data, and a control parameter is used that can be tailored to either hiding or watermarking purposes, and is robust to operations such as JPEG compression.
Abstract: A method for digital watermarking and, in particular, for digital data hiding of significant amounts of data in images and video. The method employs a discrete wavelet transform for embedding gray scale images which can be as great as 25% of the host image data. A simple control parameter is used that can be tailored to either hiding or watermarking purposes, and is robust to operations such as JPEG compression. The method also uses noise-resilient channel codes based on multidimensional lattices which can provide for embedding signature data such as gray-scale or color images. Furthermore, embedded image data can be recovered in the absence of the original host image by inserting the data into the host image in the DCT domain by encoding the signature DCT coefficients using a lattice coding scheme before embedding, checking each block of host DCT coefficients for its texture content, and appropriately inserting the signatured codes depending on a local texture measure. The method further provides for source coding the signature data by vector quantization, where the indices are embedded in the host by perturbing it using orthogonal transform domain vector perturbations. The transform coefficients of the parent data are grouped into vectors, and the vectors are perturbed using noise-resilient channel codes derived from multidimensional lattices. The perturbations are constrained by a maximum allowable mean-squared error that can be introduced in the host. Also, speech can be hidden in video by wavelet transforming the host video frame by frame, and perturbing vectors of coefficients using lattice channel codes to represent hidden vector quantized speech. The embedded video is subjected to H.263 compression before retrieving the hidden speech.
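The core embed-and-recover step can be illustrated with a single-level Haar transform, ignoring the patent's lattice channel codes, texture-adaptive insertion, and robustness machinery. The mixing parameter alpha and the scaling of the signature are illustrative, and recovery here assumes the original host is available (the non-blind case).

```python
import numpy as np

def haar2(img):
    """One level of the 2-D Haar transform: (LL, LH, HL, HH) subbands."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    return ((a + b + c + d) / 4, (a + b - c - d) / 4,
            (a - b + c - d) / 4, (a - b - c + d) / 4)

def ihaar2(ll, lh, hl, hh):
    """Exact inverse of haar2."""
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

# Embed a small grayscale signature into the HH subband of the host,
# controlled by the strength parameter alpha, then recover it using the
# original host.
rng = np.random.default_rng(3)
host = rng.uniform(0.0, 255.0, (16, 16))
signature = rng.uniform(0.0, 255.0, (8, 8))
alpha = 0.1
ll, lh, hl, hh = haar2(host)
watermarked = ihaar2(ll, lh, hl,
                     (1 - alpha) * hh + alpha * (signature - 128.0) / 4.0)
_, _, _, hh_wm = haar2(watermarked)
recovered = (hh_wm - (1 - alpha) * hh) / alpha * 4.0 + 128.0
```

A small alpha favors imperceptibility (hiding); a larger one favors robustness (watermarking), which is the trade-off the control parameter in the patent governs.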

154 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the wavelet correlation features contain more information than the intensity or the energy features of each color plane separately.

145 citations


Dissertation
01 Jan 1999
TL;DR: This thesis presents a variety of computational techniques for estimating 3D shape from 2D images, based on both passive and active technologies, and proposes an alternative 3D scanning scheme that does not require any other device besides a camera.
Abstract: Most animals use vision as a primary sensor to interact with their environment. Navigation or manipulation of objects are among the tasks that can be better achieved while understanding the three-dimensional structure of the scene. In this thesis, we present a variety of computational techniques for estimating 3D shape from 2D images, based on both passive and active technologies. The first proposed method is purely passive. In this technique, a single camera is moved in an unconstrained manner around the scene to model as it acquires a sequence of images. The reconstruction process consists then of retrieving the trajectory of the camera, as well as the 3D structure of the scene using only the information contained in the images. The second method is based on active lighting technology. In the philosophy of standard 3D scanning methods, a projector is used to project light patterns in the scene. The shape of the scene is then inferred from the way the patterns deform on the objects. The main novelty of our scheme compared to traditional methods is in the nature of the patterns, and the type of image processing associated to them. Instead of using standard binary patterns made out of black and white stripes, our scheme uses a sequence of grayscale patterns with a sinusoidal profile in brightness intensity. This choice allows us to establish correspondence (between camera image, and projector image) in a dense fashion, leading to depth computation at (almost) every pixel in the image. The last reconstruction method that we propose in this thesis is an alternative 3D scanning scheme that does not require any other device besides a camera. The main idea is to substitute the projector by a standard light source (such as a desk lamp), and use a pencil (or any other object with a straight edge) to cast planar shadows in the scene. The 3D geometry of the scene is then inferred from the way the shadow naturally deforms on the objects in the scene. 
Since this technology is largely inspired from structured lighting techniques, we call it ‘weakly structured lighting.’
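The sinusoidal-pattern decoding described above reduces to a standard three-step phase-shift formula. The sketch below simulates three shifted grayscale patterns and recovers the wrapped phase at every pixel; the pattern amplitude and offset are arbitrary, and real setups must additionally unwrap the phase and calibrate the camera-projector pair.

```python
import numpy as np

def decode_phase(i1, i2, i3):
    """Wrapped phase per pixel from three patterns shifted by -120, 0 and
    +120 degrees: since I_k = A + B*cos(phi + shift_k),
    tan(phi) = sqrt(3)*(I1 - I3) / (2*I2 - I1 - I3)."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

# Simulate the three projected grayscale patterns over a synthetic phase map.
h, w = 32, 64
phase = np.linspace(-3.0, 3.0, w)[None, :] * np.ones((h, 1))
ambient, amplitude = 100.0, 80.0  # illustrative brightness values
i1 = ambient + amplitude * np.cos(phase - 2 * np.pi / 3)
i2 = ambient + amplitude * np.cos(phase)
i3 = ambient + amplitude * np.cos(phase + 2 * np.pi / 3)
recovered = decode_phase(i1, i2, i3)
```

Because the phase is recovered independently at each pixel, correspondence between camera and projector is dense, which is what allows depth computation at (almost) every pixel.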

142 citations


Patent
Alexander Berestov1
27 Oct 1999
TL;DR: In this article, the epipolar lines associated with two or more images (110, 120) taken of the same scene are adjusted such that the images are vertically aligned, and the resulting shift values are used to align the points in the first and second images.
Abstract: A method (500) adjusts the epipolar lines associated with two or more images (110,120) taken of the same scene such that the images (110,120) are vertically aligned. The method (500) creates two or more search columns on the first image. The images (110,120) are split into grayscale sub-images corresponding to each color coordinate used to describe the color of a point in the image. A matching algorithm is applied to each point in the search column in each sub-image pair to calculate the vertical shift between the matched points. The shift values calculated for the matched points are then extrapolated across the entire image and used to align the points in the first (110) and second (120) image.

113 citations


Patent
12 Mar 1999
TL;DR: In this article, an augmented reality presentation system that generates and presents a virtual image free from any latency from a real space is presented. But this system requires a position/posture sensor for time-sequentially inputting viewpoint position/position information, stereo cameras for inputting a continuous time sequence of a plurality of images, and an image processing apparatus.
Abstract: An augmented reality presentation system that generates and presents a virtual image free from any latency from a real space. This system has a position/posture sensor for time-sequentially inputting viewpoint position/posture information, stereo cameras for inputting a continuous time sequence of a plurality of images, and an image processing apparatus. The image processing apparatus detects a continuous time sequence of depth images ID from the continuous time sequence of input stereo images, estimates the viewpoint position/posture of the observer at a future time at which a three-dimensional image will be presented to the observer, on the basis of changes in previous viewpoint position/posture input from the position/posture sensor, continuously warps the continuously obtained depth images to those at the estimated future viewpoint position/posture, and presents three-dimensional grayscale (or color) images generated according to the warped depth images to the observer.
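The viewpoint prediction step can be illustrated with the simplest possible estimator, a constant-velocity extrapolation of the last two pose samples; the patent's actual estimator is not specified at this level of detail, so this is only a stand-in.

```python
import numpy as np

def predict_pose(times, poses, t_future):
    """Constant-velocity extrapolation of the last two viewpoint samples
    (position/posture parameters as a flat vector) to the future
    presentation time."""
    t0, t1 = times[-2], times[-1]
    p0 = np.asarray(poses[-2], dtype=float)
    p1 = np.asarray(poses[-1], dtype=float)
    velocity = (p1 - p0) / (t1 - t0)
    return p1 + velocity * (t_future - t1)
```

The predicted pose is what the depth images would be warped to before rendering, so the presented image corresponds to where the observer will be, not where they were when the cameras captured the frame.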

111 citations


Proceedings ArticleDOI
23 Jun 1999
TL;DR: This work uses information theoretic measures to determine the effectiveness of a variety of different edge detectors working at multiple scales on black and white and color images and gives quantitative measures for the advantages of multi-level processing, for the use of chromaticity in addition to greyscale, and for the relative effectiveness of different detectors.
Abstract: We treat the problem of edge detection as one of statistical inference. Local edge cues, implemented by filters, provide information about the likely positions of edges which can be used as input to higher-level models. Different edge cues can be evaluated by the statistical effectiveness of their corresponding filters evaluated on a dataset of 100 presegmented images. We use information theoretic measures to determine the effectiveness of a variety of different edge detectors working at multiple scales on black and white and color images. Our results give quantitative measures for the advantages of multi-level processing, for the use of chromaticity in addition to greyscale, and for the relative effectiveness of different detectors.

Patent
06 Oct 1999
TL;DR: In this paper, the color filter used to implement the color portion of the display is omitted from another portion, e.g., the gray scale portion, which is used to display text.
Abstract: Display apparatus, and methods for displaying images, e.g., text, on gray scale and color monitors (2400) are described. Gray scale displays implemented in accordance with the present invention include displays having a resolution in a first dimension, e.g., the horizontal dimension, which is several times the resolution in a second dimension, e.g., the vertical dimension. Various other displays (2400) of the present invention are capable of operating as both gray scale and color display devices. In one such display, the color filter used to implement a color portion of the display is omitted from another, e.g., gray scale, portion of the same display. In such an embodiment, text, e.g., captions, is displayed using the gray scale portion of the display while color images, e.g., graphics, are displayed on the color portion of the display. In another display of the present invention, a color filter (2401) with filter cells (2410, 2411, 2412) that can be switched between a color and a clear mode of operation is employed. When images, e.g., text, are to be displayed as gray scale images, the filter cells (2410, 2411, 2412) corresponding to the portion of the display (2400) to be used to display the gray scale images are switched to the clear mode of operation. In such an embodiment, the remaining portion or portions of the display (2400) may be used to display color images. Methods and apparatus for reducing and/or eliminating color distortions in images resulting from treating pixel subcomponents as independent luminous intensity sources are also described.

Proceedings ArticleDOI
23 Jun 1999
TL;DR: This paper presents an application of perceptual grouping rules to content-based image retrieval: a methodology, in a Bayesian framework, for the retrieval of building images, together with the results obtained.
Abstract: This paper presents an application of perceptual grouping rules for content-based image retrieval. The semantic interrelationships between different primitive image features are exploited by perceptual grouping to detect the presence of manmade structures. A methodology based on these principles, in a Bayesian framework for the retrieval of building images, is described, and the results obtained are presented. The image database consists of monocular grayscale outdoor images taken from a ground-level camera.

Proceedings ArticleDOI
24 Oct 1999
TL;DR: This paper presents for the first time a definition of correlation applicable to color images, based on quaternions or hypercomplex numbers, and devises a visualization of the result using the polar form of a quaternion, in which color denotes the quaternion eigenaxis and phase and a grayscale image represents the modulus.
Abstract: Autocorrelation and cross-correlation have been defined and utilized in signal and image processing for many years, but not for color or vector images. In this paper we present for the first time a definition of correlation applicable to color images, based on quaternions or hypercomplex numbers. We have devised a visualization of the result using the polar form of a quaternion in which color denotes quaternion eigenaxis and phase, and a grayscale image represents the modulus.

Patent
23 Aug 1999
TL;DR: An efficient chroma key-based coding technique for digital video with an optimized binary keying threshold is provided in this article, where the shape information of a foreground object is embedded in the keyed output, so there is no need to carry an explicit alpha plane, or use alpha plane coding.
Abstract: An efficient chroma key-based coding technique for digital video with an optimized switching threshold. An optimized binary keying threshold (T) is provided for switching between a first image region (such as a background region) (B) and second image region (such as a foreground object) (A) video picture. The threshold optimizes a PSNR of a quantization error Q of a key color K. A chroma key technique is also provided for representing the shape of a video object, where the shape information (alpha plane) of a foreground object is embedded in the keyed output, so there is no need to carry an explicit alpha plane, or use alpha plane coding. The chroma key shape representation technique provides a smooth transition at the boundary (500, 505) between objects without the need for special switching patterns, such as a general gray scale shape coding tool, or post-processing, e.g., using feathering filters.

Journal ArticleDOI
TL;DR: A simple and fast reflectional symmetry detection algorithm that employs only the original gray scale image and the gradient information of the image, and is able to detect multiple reflectional symmetry axes of an object in the image.
Abstract: A simple and fast reflectional symmetry detection algorithm has been developed in this paper. The algorithm employs only the original gray scale image and the gradient information of the image, and it is able to detect multiple reflectional symmetry axes of an object in the image. The directions of the symmetry axes are obtained from the gradient orientation histogram of the input gray scale image by using the Fourier method. Both synthetic and real images have been tested using the proposed algorithm.
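The orientation-histogram idea can be sketched directly. Note this replaces the paper's Fourier method with a brute-force search over candidate axes, and the bin count is arbitrary.

```python
import numpy as np

def orientation_histogram(img, bins=180):
    # Magnitude-weighted histogram of gradient directions over [0, 2*pi),
    # with bin centers at multiples of the bin width.
    gy, gx = np.gradient(img.astype(float))
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    mag = np.hypot(gx, gy)
    idx = np.round(ang / (2 * np.pi / bins)).astype(int) % bins
    return np.bincount(idx.ravel(), weights=mag.ravel(), minlength=bins)

def symmetry_axis(img, bins=180):
    """Axis angle (radians, mod pi) about which mirroring best maps the
    gradient-orientation histogram onto itself: mirroring about axis
    theta sends direction phi to 2*theta - phi, i.e. bin i to t - i."""
    h = orientation_histogram(img, bins)
    i = np.arange(bins)
    costs = [np.sum((h - h[(t - i) % bins]) ** 2) for t in range(bins)]
    return (int(np.argmin(costs)) * np.pi / bins) % np.pi
```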

Patent
15 Jul 1999
TL;DR: In this paper, a system for automated x-ray system parameter evaluation is provided, where a physical model or template is created and stored in the system, one for each desired phantom.
Abstract: A system for automated x-ray system parameter evaluation is provided. A physical model or template is created and stored in the system, one for each desired phantom. The automated system imports a grayscale x-ray image and then processes the image to determine image components. First, a histogram of the image is created, then a threshold in the histogram is determined and the imported image is binarized with respect to the threshold. Next, connected component analysis is used to determine image components. If the components do not match, then the image is rejected. The system next locates landmarks in the imported image corresponding to expected physical structures. The landmarks include a perimeter ring, vertical and horizontal line segments, and fiducials. The system uses the landmarks to predict Regions of Interest (ROIs) where measurement of the x-ray system parameters takes place. Finally, the x-ray system parameters are measured in the identified ROIs.
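The binarize-then-label stage can be sketched as follows. The patent does not specify its thresholding rule, so Otsu's method is used here as a stand-in, with a small pure-NumPy flood fill for connected-component analysis.

```python
import numpy as np

def otsu_threshold(img, bins=256):
    """Threshold maximizing between-class variance of the histogram."""
    hist, edges = np.histogram(img, bins=bins)
    p = hist / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                  # class-0 probability up to each bin
    m = np.cumsum(p * centers)         # cumulative first moment
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros_like(w0)
    between[valid] = (m[-1] * w0 - m)[valid] ** 2 / (w0 * w1)[valid]
    return centers[np.argmax(between)]

def label_components(mask):
    """4-connected component labelling by flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    n = 0
    for sy, sx in zip(*np.nonzero(mask)):
        if labels[sy, sx]:
            continue
        n += 1
        labels[sy, sx] = n
        stack = [(sy, sx)]
        while stack:
            y, x = stack.pop()
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not labels[ny, nx]):
                    labels[ny, nx] = n
                    stack.append((ny, nx))
    return labels, n

def image_components(img):
    """Binarize against the histogram threshold and label components."""
    mask = img > otsu_threshold(img)
    return label_components(mask)
```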

Patent
25 Mar 1999
TL;DR: In this paper, a digitized image is treated by an electronic system where pairs of pixels within a convolution window are compared, and for those differences which are greater than a preselected, or automatically calculated, threshold, a black or white vector is counted, respectively, depending on whether the more centrally located pixel is darker or lighter than the outer pixel.
Abstract: A digitized image is treated by an electronic system where pairs of pixels within a convolution window are compared, and for those differences which are greater than a preselected, or automatically calculated, threshold, a black or white vector is counted, respectively, depending on whether the more centrally located pixel is darker or lighter than the outer pixel. The central pixel is replaced with an enhanced value, if it is surrounded by a majority of significantly lighter or darker pixels, as indicated by the black and white vector counts. Weighting factors allows for custom tailoring the enhancement algorithm to suit the need of the particular image.
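The pixel-pair voting scheme might look like the following. The threshold, the majority count, and the "enhanced value" rule (pushing the center toward the window extreme) are illustrative choices, since the patent leaves them tunable via weighting factors.

```python
import numpy as np

def enhance(img, threshold=30.0, majority=5):
    """For each interior pixel, compare the center with its 8 neighbours
    in a 3x3 window: a neighbour lighter by more than the threshold is a
    'black' vote (centre is darker), a neighbour darker by more than the
    threshold is a 'white' vote. A majority of either vote pushes the
    centre to the corresponding window extreme."""
    img = img.astype(float)
    out = img.copy()
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = img[y - 1:y + 2, x - 1:x + 2]
            diffs = win - img[y, x]        # centre's own diff is 0: no vote
            black = np.count_nonzero(diffs > threshold)
            white = np.count_nonzero(diffs < -threshold)
            if black >= majority:
                out[y, x] = win.min()      # dark detail: darken further
            elif white >= majority:
                out[y, x] = win.max()      # light detail: lighten further
    return out
```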

Journal ArticleDOI
TL;DR: A local thresholding algorithm for binarizing gray scale images of aggregates in a gravitational flow of falling particles; the field test results show that it works under certain conditions.

Patent
26 Jan 1999
TL;DR: In this paper, a grayscale display is illuminated by energizing pixels of a weighted grid of eight line addresses, where the brightness of each pixel is determined by the selection of the grid sets and the time slot allocated for the grid sets.
Abstract: The invention is directed to improve visual effects on digital display devices (Fig. 7) that use time and space modulation methods to display grayscale values, by using a distributed line technique to provide grayscale capability. The grayscale display is illuminated by energizing pixels of a weighted grid of eight line addresses. There are N grid sets where N is the number of time slots allocated per frame time. The visual grayscale brightness of each pixel is determined by the selection of the grid sets and the time slot allocated for the grid sets. The bit value selection, grid set allocation, and time slots are chosen such that the grayscale values are scattered in time and space so that the perception of visual disturbances and other perceived artifacts are avoided.

Patent
07 May 1999
TL;DR: In this paper, a document image capture method and scanner, and an image processing apparatus incorporating such a scanner, in which a document is scanned two or more times, is described. And the information representing the document image obtained in this way is preferably stored using a set of linked bit maps, one bit map for each block.
Abstract: A document image capture method and scanner, and an image processing apparatus incorporating such a scanner, in which a document is scanned two or more times. The first scan preferably provides bi-level image data, which is analyzed to identify blocks of uniform image type (for example, text, line drawing, grayscale image, or full-color image) within the document. The second scan, preferably performed at lower resolution than the first, provides grayscale or color information, which is substituted in the grayscale or color blocks, respectively, for the bi-level information obtained in the first scan. A third scan, to provide information of the third type, may also be performed. An operator preferably views an image of the document, based on the scanned information, to be sure that the identification and typing of the various blocks has been done correctly, and may instruct that the document be rescanned to provide new data for a designated portion of the document image, if it appears that an error has occurred. The information representing the document image obtained in this way is preferably stored using a set of linked bit maps, one bit map for each block. The memory capacity needed to store the information can be reduced further by treating the page and its margins as a frame, and by storing information about the frame, and any horizontal or vertical lines in the document, in simple vector form. Any portion of the document which is just background is not stored.

Patent
Daniel Q. Zhu1
05 Aug 1999
TL;DR: In this paper, a digital signal processing (DSP) method is used to process rendered text in order to achieve up to 300% of the horizontal resolution on any suitable digital display devices such as LCD, PDP and DLP.
Abstract: A digital signal processing (DSP) method to process rendered text in order to achieve up to 300% of the horizontal resolution on any suitable digital display devices such as LCD, PDP and DLP. When the text is rendered, a single picture element (a “pixel”) of a matrix display screen is actually composed of three “sub-pixels”: one red, one green, and one blue (RGB or BGR). Taken together, this sub-pixel triplet makes up what has traditionally been thought of as a single pixel. By staggering and processing the sub-pixel elements horizontally, font resolution is effectively increased to a maximum of 300%. There are three processing steps involved. First, the color image is expanded to a gray scale image having triple the number of horizontal pixels as the original image by interleaving the sub-pixels. Next, a black and white text/graphics (TG) detector is deployed to identify the TG of interest in the gray scale image. Then the detected TG, and only the detected TG, is subjected to a morphological thinning operation so that the TG approximates fonts (or graphics) that would be generated from a sub-pixel rendering engine. Finally, the processed TG display data is filtered to minimize color fringing while maximizing its resolution. The resulting display data, including the processed TG data and the unprocessed color signals, are converted back to the sub-pixel (e.g., RGB or BGR) domain and displayed.
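The first processing step, expanding a color image into a triple-width grayscale image by interleaving sub-pixels, is a simple reshape; a naive 3-tap horizontal average is shown as a stand-in for the patent's fringe-reduction filtering step.

```python
import numpy as np

def interleave_subpixels(rgb):
    """Lay the R, G, B sub-pixel values of an (H, W, 3) image out side by
    side, giving a grayscale image with 3x the horizontal resolution."""
    h, w, c = rgb.shape
    assert c == 3
    # C-order memory layout is already R,G,B,R,G,B,... along each row.
    return rgb.reshape(h, w * 3)

def fringe_filter(gray):
    """Naive 3-tap horizontal average: trades a little of the gained
    resolution for reduced colour fringing (illustrative only)."""
    padded = np.pad(np.asarray(gray, dtype=float), ((0, 0), (1, 1)), mode="edge")
    return (padded[:, :-2] + padded[:, 1:-1] + padded[:, 2:]) / 3.0
```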

Proceedings ArticleDOI
23 Jun 1999
TL;DR: A theory for multispectral contrast is developed that enables an optimal grayscale visualization of the first order contrast of an image with an arbitrary number of bands and can reveal significantly more interpretive information to an image analyst.
Abstract: We present a new formalism for the treatment and understanding of multispectral images and multisensor imagery based on first order contrast information. Although little attention has been paid to the utility of multispectral contrast, we develop a theory for multispectral contrast that enables us to produce an optimal grayscale visualization of the first order contrast of an image with an arbitrary number of bands. We demonstrate how our technique can reveal significantly more interpretive information to an image analyst, who can use it in a number of image understanding algorithms. Existing grayscale visualization strategies are reviewed and a discussion is given as to why our algorithm is optimal and outperforms them. A variety of experimental results are presented.

Proceedings ArticleDOI
23 Aug 1999
TL;DR: This work examines two different global image segmentation algorithms each using its own distance metric: k-means and a mixture of principal components (MPC) neural network and two variants of the algorithms are examined.
Abstract: In the past few years, researchers have been increasingly interested in color image segmentation. We analyze two different global image segmentation algorithms each using its own distance metric: k-means and a mixture of principal components (MPC) neural network. The k-means uses Euclidean distance for color comparisons while the MPC neural network uses vector angles. Two variants of the algorithms are examined. The first uses the RGB pixel itself for clustering while the second uses a 3×3 neighborhood. Preliminary results on a staged scene image are shown and discussed.
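The two distance metrics can be compared inside a single k-means loop. This sketch substitutes plain k-means with a cosine (vector-angle) distance for the MPC neural network, and uses a deterministic farthest-point initialization; both substitutions are simplifications of the paper's setup.

```python
import numpy as np

def kmeans_colors(pixels, k, metric="euclidean", iters=20):
    """k-means over (N, 3) RGB pixels with either Euclidean distance or
    1 - cosine similarity ('angle') as the assignment metric."""
    pixels = np.asarray(pixels, dtype=float)
    # Farthest-point initialization keeps the sketch deterministic.
    centers = pixels[[0]]
    for _ in range(1, k):
        d = np.min(np.linalg.norm(pixels[:, None] - centers[None], axis=2), axis=1)
        centers = np.vstack([centers, pixels[d.argmax()]])
    for _ in range(iters):
        if metric == "euclidean":
            d = np.linalg.norm(pixels[:, None] - centers[None], axis=2)
        else:  # vector angle between color vectors
            pn = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
            cn = centers / np.linalg.norm(centers, axis=1, keepdims=True)
            d = 1.0 - pn @ cn.T
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers
```

The angle metric ignores vector length, so it clusters by chromaticity and is insensitive to brightness, which is the motivation for the MPC network's choice.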

01 Jan 1999
TL;DR: A method for segmenting greyscale images is proposed that automatically estimates all necessary parameters, including the number of segments; it is both fast and general, and it does not require any training data.
Abstract: There is a growing need for image analysis methods which can process large image databases quickly and with limited human input. I propose a method for segmenting greyscale images which automatically estimates all necessary parameters, including choosing the number of segments. This method is both fast and general, and it does not require any training data. The EM and ICM algorithms are used to fit an image model and compute a pseudolikelihood; this pseudolikelihood is used in a modified form of the Bayesian Information Criterion (BIC) to automatically select the number of segments. A consistency result for this approach is proven and several example applications are shown. A method for automatically detecting curves in spatial point patterns is also presented. Principal curves are used to model curvilinear features; BIC is used to automatically select the amount of smoothing. Applications to simulated minefields and seismological data are shown.
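The BIC-based selection of the number of segments can be illustrated in one dimension with an ordinary Gaussian mixture fitted by EM, as a stand-in for the thesis's EM/ICM image model and pseudolikelihood; the component count and iteration budget are illustrative.

```python
import numpy as np

def gmm_em(x, k, iters=60, seed=0):
    """EM fit of a k-component 1-D Gaussian mixture; returns the final
    log-likelihood and the free-parameter count."""
    rng = np.random.default_rng(seed)
    mu = np.sort(rng.choice(x, size=k, replace=False))
    var = np.full(k, x.var() / k + 1e-3)
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: joint densities (n, k), then responsibilities.
        dens = (w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates with a small variance floor.
        n = r.sum(axis=0)
        w = n / len(x)
        mu = (r * x[:, None]).sum(axis=0) / n
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n + 1e-3
    return np.log(dens.sum(axis=1)).sum(), 3 * k - 1

def choose_segments(x, kmax=4):
    """Pick the k minimizing BIC = -2 logL + p log n."""
    bics = []
    for k in range(1, kmax + 1):
        ll, p = gmm_em(x, k)
        bics.append(-2 * ll + p * np.log(len(x)))
    return int(np.argmin(bics)) + 1
```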

01 Jan 1999
TL;DR: In this article, a grayscale document is segmented into halftone image areas using α-crossings, the presence and frequency of screen patterns in those areas is detected using a Fast Accumulator Function (FAF), and the detected screens are suppressed by low-pass filtering.
Abstract: One of the major challenges in scanning and printing documents in a digital library is the preservation of the quality of the documents and in particular of the images they contain. When photographs are offset-printed, the process of screening usually takes place. During screening, a continuous tone image is converted into a bi-level image by applying a screen to replace each color in the original image. When high-resolution scanning of screened images is performed, it is very common in the digital version of the document to observe the screen patterns used during the original printing. In addition, when printing the digital document, moire effects tend to appear because printing requires halftoning. In order to automatically suppress these moire patterns, it is necessary to detect the image areas of the document and remove the screen pattern present in those areas. In this paper, we present efficient and robust techniques to segment a grayscale document into halftone image areas, detect the presence and frequency of screen patterns in halftone areas and suppress their detected screens. We present novel techniques to perform fast segmentation based on α-crossings, detection of screen frequencies using a Fast Accumulator Function and suppression of detected screens by low-pass filtering.
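The screen-detection step can be approximated with an FFT peak search (in place of the paper's α-crossings and Fast Accumulator Function), followed by an ideal low-pass suppression; the cutoff choice is illustrative.

```python
import numpy as np

def screen_frequency(tile):
    """Dominant screen frequency of a halftone tile, in cycles/pixel,
    found as the strongest non-DC peak of the 2-D amplitude spectrum."""
    f = np.abs(np.fft.fft2(tile - tile.mean()))
    f[0, 0] = 0.0
    iy, ix = np.unravel_index(int(np.argmax(f)), f.shape)
    return (abs(np.fft.fftfreq(tile.shape[0])[iy]),
            abs(np.fft.fftfreq(tile.shape[1])[ix]))

def suppress_screen(tile, cutoff):
    """Zero all spectral components above the cutoff frequency (an ideal
    low-pass; the paper's filter design is more careful)."""
    F = np.fft.fft2(tile)
    fy = np.fft.fftfreq(tile.shape[0])[:, None]
    fx = np.fft.fftfreq(tile.shape[1])[None, :]
    F[np.hypot(fy, fx) > cutoff] = 0.0
    return np.real(np.fft.ifft2(F))
```

Removing the detected screen frequency before reprinting is what prevents it from beating against the printer's own halftone screen and producing moire.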

Proceedings ArticleDOI
20 Sep 1999
TL;DR: A fast and robust skew detection algorithm for gray-scale images is presented that uses small randomly selected regions to speed up the process and has good results in detecting skew in various kinds of pages.
Abstract: A fast and robust skew detection algorithm for gray-scale images is presented. The MCCSD (modified cross-correlation skew detection) algorithm uses horizontal and vertical cross-correlation simultaneously to deal with vertically laid-out text, which is commonly used in Chinese or Japanese documents. Instead of calculating the correlation for the entire image, we use small randomly selected regions to speed up the process. The region verification stage and further processing of auxiliary peaks make our method robust and reliable. An experiment shows that the proposed method has good results in detecting skew in various kinds of pages.

Proceedings ArticleDOI
H. Kamada1, K. Fujimoto
20 Sep 1999
TL;DR: A new high-speed, high-accuracy binarization method for recognizing text in document images that takes only 1/100 the processing time of the method that performs image interpolation first and reduces unrecognized characters by 46.5%, compared with conventional global binarization.
Abstract: We propose a new high-speed, high-accuracy binarization method for recognizing text in document images. First, character neighborhoods are extracted from input images using a global thresholding value that is shifted toward the background pixel value relative to the threshold of conventional global binarization. Second, characters are extracted using an original local binarization process integrated with image interpolation. Our method takes only 1/100 the processing time of the method that performs image interpolation first. Therefore our method binarizes an A4 size text image (150dpi) in an average of only 3.3 seconds using a 166 MHz Pentium processor. Furthermore, our method reduced unrecognized characters by 46.5%, compared with conventional global binarization.
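The two-pass structure might be sketched as below. The global threshold, its shift toward the background, and the local window size are all illustrative stand-ins; the paper's exact thresholding rules differ and are tuned for 150 dpi scans.

```python
import numpy as np

def binarize_two_stage(img, shift=20.0, win=7):
    """First pass: a conservative global threshold, shifted toward the
    background value, keeps whole character neighbourhoods. Second pass:
    each kept pixel is compared against its local neighbourhood mean,
    computed quickly with an integral image."""
    img = img.astype(float)
    global_t = img.mean()            # stand-in for the global threshold
    keep = img < global_t + shift    # dark text on a light background
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")
    # Integral image with a zero row/column prepended for window sums.
    ii = np.pad(padded, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    local_sum = (ii[y + win, x + win] - ii[y, x + win]
                 - ii[y + win, x] + ii[y, x])
    local_mean = local_sum / (win * win)
    return keep & (img < local_mean)
```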

Journal ArticleDOI
TL;DR: This model generalizes traditional random field models by allowing the spatial interaction parameters of the field to be random variables and defines a compact color feature vector which captures within and between color band information.

Proceedings ArticleDOI
13 Aug 1999
TL;DR: A novel approach to target classification in synthetic aperture radar (SAR) imagery is developed: in contrast to the conventional approach, in which grayscale test images are compared to templates using a mean-square error (MSE) criterion, the grayscale pixel values are coarsely quantized and maximum-likelihood (ML) classification is conducted using simple, robust statistical models.
Abstract: In this paper, we develop a novel approach to target classification in synthetic aperture radar (SAR) imagery. In contrast to the conventional approach, in which grayscale test images are compared to templates using a mean-square error (MSE) criterion, we coarsely quantize the grayscale pixel values and then conduct maximum-likelihood (ML) classification using simple, robust statistical models. The advantage of this approach is that coarse quantization can preserve a great deal of discriminating information while simultaneously reducing the complexity of the statistical variation of target SAR signatures to something that can be characterized accurately. We consider two distinct quantization schemes, each having its own merits. The first preserves the contrast among the target, shadow and background regions while sacrificing the target region's internal structural detail; the second preserves the target's shape and internal structural detail while sacrificing the contrast between the shadow and background regions. We postulate statistical models for the conditional likelihood of quantized imagery (one model per quantization scheme), identify model parameters from data, and then build and test ML target classifiers. For a number of challenging ATR problems examined in DARPA's Moving and Stationary Target Acquisition and Recognition (MSTAR) program, these ML classifiers are found to lead to significantly better classification performance than that obtained with the MSE metric, and as good or better than that obtained with virtually all competing MSTAR-developed approaches.
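The quantize-then-model approach can be sketched end to end. The thresholds, the Laplace smoothing, and the toy images are illustrative, and only the first quantization scheme (the contrast-preserving one) is shown; the paper's actual likelihood models are richer.

```python
import numpy as np

def quantize3(img, t_shadow, t_target):
    """Coarse 3-level quantization into shadow (0), background (1) and
    target (2) pixels, preserving the contrast among the three regions."""
    return (img >= t_shadow).astype(int) + (img >= t_target).astype(int)

class QuantizedClassModel:
    """Per-pixel categorical model of quantized training images, with
    Laplace smoothing; classification picks the maximum log-likelihood."""
    def __init__(self, images, levels=3, alpha=1.0):
        counts = np.full((levels,) + images[0].shape, alpha)
        for q in images:
            for v in range(levels):
                counts[v] += (q == v)
        self.logp = np.log(counts / counts.sum(axis=0, keepdims=True))
    def log_likelihood(self, q):
        return sum(self.logp[v][q == v].sum()
                   for v in range(self.logp.shape[0]))
```

A test image is quantized with the same thresholds and assigned to whichever class model gives it the higher log-likelihood.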