Showing papers on "Mahalanobis distance published in 2006"


Journal ArticleDOI
TL;DR: This paper provides a new proof of the characterization of Riemannian centers of mass and an original gradient descent algorithm to efficiently compute them and develops the notions of mean value and covariance matrix of a random element, normal law, Mahalanobis distance and χ2 law.
Abstract: In medical image analysis and high level computer vision, there is an intensive use of geometric features like orientations, lines, and geometric transformations ranging from simple ones (orientations, lines, rigid body or affine transformations, etc.) to very complex ones like curves, surfaces, or general diffeomorphic transformations. The measurement of such geometric primitives is generally noisy in real applications and we need to use statistics either to reduce the uncertainty (estimation), to compare observations, or to test hypotheses. Unfortunately, even simple geometric primitives often belong to manifolds that are not vector spaces. In previous works [1, 2], we investigated invariance requirements to build some statistical tools on transformation groups and homogeneous manifolds that avoid paradoxes. In this paper, we consider finite dimensional manifolds with a Riemannian metric as the basic structure. Based on this metric, we develop the notions of mean value and covariance matrix of a random element, normal law, Mahalanobis distance and χ2 law. We provide a new proof of the characterization of Riemannian centers of mass and an original gradient descent algorithm to efficiently compute them. The notion of normal law we propose is based on the maximization of the entropy knowing the mean and covariance of the distribution. The resulting family of pdfs spans the whole range from uniform (on compact manifolds) to the point mass distribution. Moreover, we were able to provide tractable approximations (with their limits) for small variances which show that we can effectively implement and work with these definitions.
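The paper develops these notions on manifolds; as a point of reference, the sketch below computes the corresponding vector-space quantities (mean, covariance, Mahalanobis distance) and checks the χ2 law empirically. The data and dimensions are hypothetical, and the code covers only the flat Euclidean special case, not the Riemannian constructions of the paper.

```python
import numpy as np

# Flat (vector-space) special case of the paper's constructions: empirical
# mean, covariance, Mahalanobis distance, and the chi-square law, which says
# the squared Mahalanobis distance of a d-dimensional Gaussian sample to its
# mean is chi-square distributed with d degrees of freedom.
rng = np.random.default_rng(0)
d, n = 3, 10_000
A = rng.normal(size=(d, d))
x = rng.multivariate_normal(np.zeros(d), A @ A.T, size=n)  # random SPD covariance

mu = x.mean(axis=0)                              # empirical mean value
prec = np.linalg.inv(np.cov(x, rowvar=False))    # inverse empirical covariance
diff = x - mu
d2 = np.einsum('ij,jk,ik->i', diff, prec, diff)  # squared Mahalanobis distances

print(d2.mean())  # should be close to d = 3, the chi-square mean
```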

804 citations


Journal ArticleDOI
TL;DR: Sensitivity analysis of the matching techniques is especially important because none of the proposed methods in the literature is a priori superior to the others; the suggested joint consideration of propensity score matching and multivariate analysis offers an approach to assessing the robustness of the estimates.

312 citations


Proceedings ArticleDOI
01 Oct 2006
TL;DR: A new approach to the multi-robot map-alignment problem that enables teams of robots to build joint maps without initial knowledge of their relative poses is presented, with an optimal algorithm for merging (not necessarily overlapping) maps that are created by different robots independently.
Abstract: This paper presents a new approach to the multi-robot map-alignment problem that enables teams of robots to build joint maps without initial knowledge of their relative poses. The key contribution of this work is an optimal algorithm for merging (not necessarily overlapping) maps that are created by different robots independently. Relative pose measurements between pairs of robots are processed to compute the coordinate transformation between any two maps. Noise in the robot-to-robot observations, propagated through the map-alignment process, increases the error in the position estimates of the transformed landmarks, and reduces the overall accuracy of the merged map. When there is overlap between the two maps, landmarks that appear twice provide additional information, in the form of constraints, which increases the alignment accuracy. Landmark duplicates are identified through a fast nearest-neighbor matching algorithm. In order to reduce the computational complexity of this search process, a kd-tree is used to represent the landmarks in the original map. The criterion employed for matching any two landmarks is the Mahalanobis distance. As a means of validation, we present experimental results obtained from two robots mapping an area of 4,800 m².
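As a concrete illustration of the matching step, here is a minimal sketch of kd-tree candidate search followed by a Mahalanobis validation gate. The landmark coordinates, the shared covariance `cov_ab`, and the gate probability are hypothetical placeholders; in the paper the covariance would come from propagating robot-to-robot measurement noise through the alignment.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.stats import chi2

def match_landmarks(map_a, map_b, cov_ab, gate_prob=0.99, k=3):
    """Pair landmarks of map_a with likely duplicates in map_b."""
    tree = cKDTree(map_b)                        # fast nearest-neighbour search
    gate = chi2.ppf(gate_prob, df=map_a.shape[1])
    prec = np.linalg.inv(cov_ab)
    pairs = []
    for i, p in enumerate(map_a):
        _, idxs = tree.query(p, k=k)             # Euclidean candidates
        for j in np.atleast_1d(idxs):
            r = p - map_b[j]
            if r @ prec @ r <= gate:             # Mahalanobis distance criterion
                pairs.append((i, int(j)))
                break
    return pairs

rng = np.random.default_rng(1)
map_a = rng.uniform(0, 50, size=(20, 2))
map_b = map_a + rng.normal(scale=0.3, size=map_a.shape)  # noisy duplicates
print(match_landmarks(map_a, map_b, cov_ab=2 * 0.3**2 * np.eye(2)))
```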

226 citations


Journal ArticleDOI
TL;DR: This paper addresses the problem of characterizing ensemble similarity from sample similarity in a principled manner by using a reproducing kernel as a characterization of sample similarity, and suggests a probabilistic distance measure in the reproducing kernel Hilbert space (RKHS) as the ensemble similarity.
Abstract: This paper addresses the problem of characterizing ensemble similarity from sample similarity in a principled manner. Using a reproducing kernel as a characterization of sample similarity, we suggest a probabilistic distance measure in the reproducing kernel Hilbert space (RKHS) as the ensemble similarity. Assuming normality in the RKHS, we derive analytic expressions for probabilistic distance measures that are commonly used in many applications, such as Chernoff distance (or the Bhattacharyya distance as its special case), Kullback-Leibler divergence, etc. Since the reproducing kernel implicitly embeds a nonlinear mapping, our approach presents a new way to study these distances, whose feasibility and efficiency are demonstrated using experiments with synthetic and real examples. Further, we extend the ensemble similarity to the reproducing kernel for ensemble and study the ensemble similarity for more general data representations.
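The probabilistic distances the paper derives in the RKHS have well-known finite-dimensional Gaussian forms; the sketch below evaluates two of them (KL divergence and the Bhattacharyya distance) for hypothetical Gaussian parameters in ordinary Euclidean space, not in the RKHS of the paper.

```python
import numpy as np

def kl_gauss(mu0, cov0, mu1, cov1):
    """KL divergence between two Gaussians, KL(N0 || N1)."""
    d = len(mu0)
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

def bhattacharyya_gauss(mu0, cov0, mu1, cov1):
    """Bhattacharyya distance between two Gaussians."""
    cov = 0.5 * (cov0 + cov1)
    diff = mu1 - mu0
    return (0.125 * diff @ np.linalg.inv(cov) @ diff
            + 0.5 * np.log(np.linalg.det(cov)
                           / np.sqrt(np.linalg.det(cov0) * np.linalg.det(cov1))))

mu0, cov0 = np.zeros(2), np.eye(2)
mu1, cov1 = np.array([1.0, 0.0]), 2.0 * np.eye(2)
print(kl_gauss(mu0, cov0, mu1, cov1), bhattacharyya_gauss(mu0, cov0, mu1, cov1))
```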

201 citations


Journal ArticleDOI
TL;DR: This study finds that an approach based on landmarks and their geometry can account for variations in facial expression and aging very well, and can be used either in stand-alone mode or in conjunction with other approaches to reduce the search space a priori.

153 citations


Proceedings ArticleDOI
14 May 2006
TL;DR: A framework for large margin classification by Gaussian mixture models (GMMs) is developed; large margin GMMs have many parallels to support vector machines (SVMs) but use ellipsoids to model classes instead of half-spaces.
Abstract: We develop a framework for large margin classification by Gaussian mixture models (GMMs). Large margin GMMs have many parallels to support vector machines (SVMs) but use ellipsoids to model classes instead of half-spaces. Model parameters are trained discriminatively to maximize the margin of correct classification, as measured in terms of Mahalanobis distances. The required optimization is convex over the model's parameter space of positive semidefinite matrices and can be performed efficiently. Large margin GMMs are naturally suited to large problems in multiway classification; we apply them to phonetic classification and recognition on the TIMIT database. On both tasks, we obtain significant improvement over baseline systems trained by maximum likelihood estimation. For the problem of phonetic classification, our results are competitive with other state-of-the-art classifiers, such as hidden conditional random fields.
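For orientation, the sketch below implements only the decision rule such a model uses at test time: assign each point to the class whose ellipsoid gives the smallest Mahalanobis distance. The paper's actual contribution, convex discriminative training of the ellipsoid parameters, is not reproduced; the ellipsoids here are simply fit by maximum likelihood on hypothetical data.

```python
import numpy as np

def fit_ellipsoids(X, y):
    """Per-class mean and precision (inverse covariance), i.e. an ellipsoid."""
    return {c: (X[y == c].mean(axis=0),
                np.linalg.inv(np.cov(X[y == c], rowvar=False)))
            for c in np.unique(y)}

def predict(ellipsoids, x):
    """Class with the smallest Mahalanobis distance to x."""
    d2 = {c: (x - mu) @ prec @ (x - mu) for c, (mu, prec) in ellipsoids.items()}
    return min(d2, key=d2.get)

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 2, (100, 2))])
y = np.repeat([0, 1], 100)
print(predict(fit_ellipsoids(X, y), np.array([3.0, 3.0])))  # likely class 1
```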

152 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel technique for clustering and classification of object trajectory-based video motion clips using spatiotemporal function approximations, and proposes a Mahalanobis classifier for the detection of anomalous trajectories.
Abstract: This paper proposes a novel technique for clustering and classification of object trajectory-based video motion clips using spatiotemporal function approximations. Assuming the clusters of trajectory points are distributed normally in the coefficient feature space, we propose a Mahalanobis classifier for the detection of anomalous trajectories. Motion trajectories are considered as time series and modelled using orthogonal basis function representations. We have compared three different function approximations --- least squares polynomials, Chebyshev polynomials and Fourier series obtained by Discrete Fourier Transform (DFT). Trajectory clustering is then carried out in the chosen coefficient feature space to discover patterns of similar object motions. The coefficients of the basis functions are used as input feature vectors to a Self-Organising Map which can learn similarities between object trajectories in an unsupervised manner. Encoding trajectories in this way leads to efficiency gains over existing approaches that use discrete point-based flow vectors to represent the whole trajectory. Our proposed techniques are validated on three different datasets --- Australian sign language, hand-labelled object trajectories from video surveillance footage and real-time tracking data obtained in the laboratory. Applications to event detection and motion data mining for multimedia video surveillance systems are envisaged.
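A heavily condensed sketch of this pipeline, for one of the three basis choices (the DFT): trajectories are reduced to low-order Fourier coefficients, a Gaussian is fitted in the coefficient space, and a trajectory is flagged as anomalous when its Mahalanobis distance exceeds a chi-square gate. The synthetic trajectories, coefficient count and gate level are hypothetical, and the SOM clustering stage is omitted.

```python
import numpy as np
from scipy.stats import chi2

def dft_features(traj, n_coef=4):
    """Lowest-order DFT coefficients of each coordinate, as a real vector."""
    c = np.fft.rfft(traj, axis=0)[:n_coef]
    return np.concatenate([c.real.ravel(), c.imag.ravel()])

rng = np.random.default_rng(3)
t = np.linspace(0, 1, 64)
normal = [np.column_stack([t, np.sin(2 * np.pi * t) + rng.normal(0, .05, 64)])
          for _ in range(200)]
F = np.array([dft_features(tr) for tr in normal])

mu = F.mean(axis=0)
prec = np.linalg.pinv(np.cov(F, rowvar=False))  # pinv: some coeffs are constant
gate = chi2.ppf(0.99, df=F.shape[1])

odd = np.column_stack([t, 0.5 * np.cos(6 * np.pi * t)])  # unusual motion
f = dft_features(odd) - mu
print("anomalous:", bool(f @ prec @ f > gate))
```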

125 citations


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a complementary methodology to assist contemporary lean assessment tools that will provide a quantitative measure of leanness by benchmarking other exemplar lean industries along with specific pointers for improvements based on cost considerations.
Abstract: Lean manufacturing is a competitive philosophy adopted by many companies to produce cost effective products and services. Contemporary lean assessment tools are designed to evaluate a company's status of lean implementation and success. However, most lean assessment tools provide qualitative analysis and do not provide any clear direction of where the improvement efforts should be directed. In this paper, we propose a complementary methodology to assist contemporary lean assessment tools that will provide a quantitative measure of leanness by benchmarking other exemplar lean industries along with specific pointers for improvements based on cost considerations. The proposed Mahalanobis-Taguchi Gram-Schmidt System (MTGS) based methodology consists of four steps. The first three steps consist of data collection using contemporary lean assessment tools, standardizing the data, and using the standardized data for calculating the Mahalanobis Distance (MD) by using the MTGS method. The MTGS method provides the direction of abnormality and can be used even in cases of multi-collinear data. The fourth step helps to identify the direction of improvement for a given set of capital constraints. The methodology is demonstrated using an example.
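A sketch of the numerical core of the first three steps, under simplifying assumptions: standardize against the reference (lean) group, orthogonalize with a QR factorization (which performs the Gram-Schmidt process on the standardized variables), and report the scaled Mahalanobis distance MD = D²/k. The reference data are hypothetical, and the fourth, cost-constrained improvement step is not reproduced.

```python
import numpy as np

def mtgs_md(reference, samples):
    """Scaled Mahalanobis distance of samples w.r.t. a reference group."""
    mu = reference.mean(axis=0)
    sd = reference.std(axis=0, ddof=1)
    Zr = (reference - mu) / sd                 # standardized reference group
    Zs = (samples - mu) / sd
    _, R = np.linalg.qr(Zr)                    # QR = Gram-Schmidt on columns
    U = np.linalg.solve(R.T, Zs.T).T           # coordinates in orthogonal basis
    n, k = reference.shape
    return (n - 1) / k * np.sum(U**2, axis=1)  # averages to ~1 on the reference

rng = np.random.default_rng(4)
ref = rng.multivariate_normal([0, 0, 0], [[1, .8, 0], [.8, 1, 0], [0, 0, 1]], 50)
test = np.array([[0.1, 0.0, 0.2],    # consistent with the reference group
                 [3.0, -3.0, 0.0]])  # abnormal: violates the 0.8 correlation
print(mtgs_md(ref, test))
```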

102 citations


Journal ArticleDOI
TL;DR: The CBR system using the Mahalanobis distance similarity function and the inverse distance weighted solution algorithm yielded the best fault prediction, and the CBR models have better performance than models based on multiple linear regression.
Abstract: The resources allocated for software quality assurance and improvement have not increased with the ever-increasing need for better software quality. A targeted software quality inspection can detect faulty modules and reduce the number of faults occurring during operations. We present a software fault prediction modeling approach with case-based reasoning (CBR), a part of the computational intelligence field focusing on automated reasoning processes. A CBR system functions as a software fault prediction model by quantifying, for a module under development, the expected number of faults based on similar modules that were previously developed. Such a system is composed of a similarity function, the number of nearest neighbor cases used for fault prediction, and a solution algorithm. The selection of a particular similarity function and solution algorithm may affect the performance accuracy of a CBR-based software fault prediction system. This paper presents an empirical study investigating the effects of using three different similarity functions and two different solution algorithms on the prediction accuracy of our CBR system. The influence of varying the number of nearest neighbor cases on the performance accuracy is also explored. Moreover, the benefits of using metric-selection procedures for our CBR system are also evaluated. Case studies of a large legacy telecommunications system are used for our analysis. It is observed that the CBR system using the Mahalanobis distance similarity function and the inverse distance weighted solution algorithm yielded the best fault prediction. In addition, the CBR models have better performance than models based on multiple linear regression.
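The best-performing configuration reported above is straightforward to sketch: Mahalanobis similarity between module metric vectors, the k most similar past cases, and an inverse-distance-weighted average of their fault counts. The metric values, fault counts and choice of k below are hypothetical.

```python
import numpy as np

def cbr_predict(case_X, case_faults, query, k=5, eps=1e-9):
    """Expected faults for a module, from its k most similar past cases."""
    prec = np.linalg.inv(np.cov(case_X, rowvar=False))
    diff = case_X - query
    d = np.sqrt(np.einsum('ij,jk,ik->i', diff, prec, diff))  # Mahalanobis
    nn = np.argsort(d)[:k]                                   # nearest cases
    w = 1.0 / (d[nn] + eps)                                  # inverse-distance weights
    return w @ case_faults[nn] / w.sum()

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 4))  # e.g. size/complexity metrics per module
faults = (2 * X[:, 0] + X[:, 1] + rng.normal(0, .3, 100)).clip(0)
print(cbr_predict(X, faults, query=np.array([1.0, 0.5, 0.0, 0.0])))
```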

97 citations


Journal ArticleDOI
TL;DR: In this paper, an influence weight is assigned to each predictor in the conditioning set with the aim of identifying nearest neighbours that represent the conditional dependence in an improved manner; the workability of the proposed modification is tested using synthetic data from known linear and nonlinear models, and its applicability is illustrated through an example where daily rainfall is downscaled over 15 stations near Sydney, Australia, using a predictor set consisting of selected large-scale atmospheric circulation variables.

87 citations


Book ChapterDOI
13 Dec 2006
TL;DR: This paper illustrates the point for partially synthetic microdata and shows that, in some cases, Mahalanobis DBRL can yield a very high re-identification percentage, far superior to the one offered by other record linkage methods.
Abstract: Distance-based record linkage (DBRL) is a common approach to empirically assessing the disclosure risk in SDC-protected microdata. Usually, the Euclidean distance is used. In this paper, we explore the potential advantages of using the Mahalanobis distance for DBRL. We illustrate our point for partially synthetic microdata and show that, in some cases, Mahalanobis DBRL can yield a very high re-identification percentage, far superior to the one offered by other record linkage methods.
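A minimal sketch of the experiment's logic: link each protected record to its nearest original record and count correct links. The additive-noise masking below is a crude stand-in for the paper's partially synthetic data, used only to show how the Mahalanobis and Euclidean variants of DBRL are compared.

```python
import numpy as np

def reident_rate(original, protected, metric="mahalanobis"):
    """Fraction of protected records correctly re-linked to their originals."""
    if metric == "mahalanobis":
        prec = np.linalg.inv(np.cov(original, rowvar=False))
    else:
        prec = np.eye(original.shape[1])       # Euclidean special case
    hits = 0
    for i, rec in enumerate(protected):
        diff = original - rec
        d2 = np.einsum('ij,jk,ik->i', diff, prec, diff)
        hits += int(np.argmin(d2) == i)
    return hits / len(protected)

rng = np.random.default_rng(6)
orig = rng.multivariate_normal([0, 0], [[1, .9], [.9, 1]], size=300)
prot = orig + rng.normal(scale=0.2, size=orig.shape)   # crude masking
print(reident_rate(orig, prot), reident_rate(orig, prot, "euclidean"))
```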

Journal ArticleDOI
TL;DR: In this paper, a power quality (PQ) event detection and classification method is presented, using higher order cumulants as the feature parameter and quadratic classifiers as the classification method.
Abstract: In this paper, we present a novel power-quality (PQ) event detection and classification method using higher order cumulants as the feature parameter, and quadratic classifiers as the classification method. We have observed that local higher order statistical parameters that are estimated from short segments of 50-Hz notch-filtered voltage waveform data carry discriminative features for PQ events analyzed herein. A vector with six parameters consisting of local minima and maxima of higher order central cumulants starting from the second (variance) up to the fourth cumulant is used as the feature vector. Local vector magnitudes and simple thresholding provide an immediate event detection criterion. After the detection of a PQ event, local maxima and minima of the cumulants around the event instant are used for the event-type classification. We have observed that the minima and maxima for each statistical order produce clusters in the feature space. These clusters were observed to exhibit noncircular topology; hence, quadratic-type classifiers that require the Mahalanobis distance metric are proposed. The events investigated and presented are line-to-ground arcing faults and voltage sags due to induction motor starting. Detection and classification results obtained from an experimentally staged PQ event data set are presented.
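The six-parameter feature vector lends itself to a short sketch: estimate the second- to fourth-order cumulants on short sliding segments and keep each order's extrema. The window length, sampling rate and synthetic waveform below are hypothetical, and the notch filtering and quadratic classification stages are omitted.

```python
import numpy as np
from scipy.stats import kstat  # unbiased cumulant (k-statistic) estimator

def cumulant_features(x, win=64, hop=32):
    """Min and max of local 2nd-4th order cumulants: a 6-element feature vector."""
    feats = []
    for order in (2, 3, 4):
        vals = [kstat(x[i:i + win], order)
                for i in range(0, len(x) - win + 1, hop)]
        feats += [min(vals), max(vals)]
    return np.array(feats)

rng = np.random.default_rng(11)
t = np.arange(2048) / 3200.0
clean = rng.normal(0, 0.01, t.size)             # residual after notch filtering
event = clean.copy()
event[800:1100] += 0.3 * np.sin(2 * np.pi * 50 * t[800:1100])  # PQ event segment
print(cumulant_features(clean))
print(cumulant_features(event))  # extrema grow markedly around the event
```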

Journal ArticleDOI
TL;DR: In this paper, the authors presented a method of anomaly detection for primary screening that uses past observations of gamma-ray signatures to define an expected benign vehicle population, which is then compared to this expected population using statistical criteria that reflect acceptable alarm rates and probabilities of detection.
Abstract: Many international border crossings screen cargo for illicit nuclear material using radiation portal monitors (RPMs) that measure the gamma-ray flux emitted by vehicles. Screening often consists of a primary stage, which acts as a trip-wire for suspect vehicles, and a secondary stage, which locates the radiation source and performs isotopic identification. The authors present a method of anomaly detection for primary screening that uses past observations of gamma-ray signatures to define an expected benign vehicle population. Newly acquired spectra are then compared to this expected population using statistical criteria that reflect acceptable alarm rates and probabilities of detection. Shown here is an analysis of spectroscopic RPM data collected at an international border crossing using this technique. The raw data were analyzed to develop an expected benign vehicle population by decimating the original pulse-height channels, extracting composite variables with principal components analysis, and estimating variance-weighted distances from the "mean vehicle spectra" with the Mahalanobis distance metric. The following analysis considers data acquired with both NaI(Tl)-based and plastic scintillator-based RPMs. For each system, performance estimates for anomaly sources are compared to common nuisance sources. The algorithm reported here shows promising results in that it is more sensitive to the anomaly sources than common nuisance sources for both RPM types.
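A compressed sketch of the screening chain described above: decimate the pulse-height channels, project onto the leading principal components of the benign population, and alarm on the Mahalanobis distance from the mean vehicle spectrum at a chi-square threshold matched to an acceptable false-alarm rate. The Poisson spectra, bin counts and component count are hypothetical placeholders.

```python
import numpy as np
from scipy.stats import chi2

def fit_screen(benign, n_bins=32, n_pc=5):
    B = benign.reshape(len(benign), n_bins, -1).sum(axis=2)  # decimate channels
    mu = B.mean(axis=0)
    _, _, Vt = np.linalg.svd(B - mu, full_matrices=False)
    P = Vt[:n_pc].T                                          # principal components
    prec = np.linalg.inv(np.cov((B - mu) @ P, rowvar=False))
    return mu, P, prec

def alarm(spectrum, model, fa_rate=1e-3, n_bins=32):
    mu, P, prec = model
    z = (spectrum.reshape(n_bins, -1).sum(axis=1) - mu) @ P
    return bool(z @ prec @ z > chi2.ppf(1 - fa_rate, df=P.shape[1]))

rng = np.random.default_rng(7)
benign = rng.poisson(200, size=(500, 256)).astype(float)  # benign population
model = fit_screen(benign)
print(alarm(rng.poisson(200, 256).astype(float), model))       # expect False
print(alarm(rng.poisson(200, 256).astype(float) + 80, model))  # expect True
```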

Journal ArticleDOI
TL;DR: The spectral analysis and classification for discrimination among normal, premalignant, and malignant conditions were performed using principal component analysis (PCA) and artificial neural network (ANN) separately on the same set of spectral data.
Abstract: Pulsed laser-induced autofluorescence spectroscopic studies of pathologically certified normal, premalignant, and malignant oral tissues were carried out at 325 nm excitation. The spectral analysis and classification for discrimination among normal, premalignant, and malignant conditions were performed using principal component analysis (PCA) and artificial neural network (ANN) separately on the same set of spectral data. In the case of PCA, spectral residuals, Mahalanobis distance, and scores of factors were used for discrimination among normal, premalignant, and malignant cases. In ANN, parameters like mean, spectral residual, standard deviation, and total energy were used to train the network. The ANN used in this study is a classical multilayer feed-forward type with a back-propagation algorithm for the training of the network. The specificity and sensitivity were determined in both classification schemes. In the case of PCA, they are 100 and 92.9%, respectively, whereas for ANN they are 100 and 96.5% for the data set considered.

Proceedings Article
01 Jan 2006
TL;DR: It is shown that using Kullback-Leibler (KL) divergence as a local distance further improves the performance of the template-based approach, now beating the state of the art of more complex posterior-based HMM systems (usually referred to as "Tandem").
Abstract: Given the availability of large speech corpora, as well as the increase in memory and computational resources, the use of template matching approaches for automatic speech recognition (ASR) has recently attracted new attention. In such template-based approaches, speech is typically represented in terms of acoustic vector sequences, using spectral-based features such as MFCC or PLP, and local distances are usually based on Euclidean or Mahalanobis distances. In the present paper, we further investigate template-based ASR and show (on a continuous digit recognition task) that the use of posterior-based features significantly improves the standard template-based approaches, yielding systems that are very competitive with state-of-the-art HMMs, even when using a very limited number (e.g., 10) of reference templates. Since those posterior-based features can also be interpreted as a probability distribution, we also show that using Kullback-Leibler (KL) divergence as a local distance further improves the performance of the template-based approach, now beating the state of the art of more complex posterior-based HMM systems (usually referred to as "Tandem").
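The local-distance swap at the heart of the result is easy to sketch: inside a standard dynamic time warping (DTW) template match, replace the Euclidean or Mahalanobis frame distance with a KL divergence between posterior vectors. The Dirichlet-sampled posterior frames below are hypothetical stand-ins for phone posteriors.

```python
import numpy as np

def kl_local(p, q, eps=1e-10):
    """KL(p || q) between two posterior (probability) vectors."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def dtw(template, test, local=kl_local):
    """Classic DTW alignment cost with a pluggable local distance."""
    n, m = len(template), len(test)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = local(template[i - 1], test[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(8)
tpl = rng.dirichlet(np.ones(10), size=20)             # 20 posterior frames
near = np.abs(tpl + rng.normal(0, .01, tpl.shape))
near /= near.sum(axis=1, keepdims=True)               # renormalize rows
other = rng.dirichlet(np.ones(10), size=20)
print(dtw(tpl, near) < dtw(tpl, other))               # expect True
```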

Proceedings ArticleDOI
01 Nov 2006
TL;DR: A human identification method using electrocardiogram (ECG) based on Bayes' theorem achieved better accuracy than the existing method based on the Mahalanobis distance.
Abstract: A human identification method using electrocardiogram (ECG) is presented based on Bayes' theorem. A database containing 502 ECG recordings is used for development and evaluation. Each ECG recording is divided into two segments: a segment for training, and a segment for performance evaluation. The ECG features are extracted from both the training dataset and the test dataset for model development and identification. Principal component analysis is used to reduce the dimension of feature variables. A classification method based on Bayes' theorem is derived. Results of experiments confirmed that the classification based on Bayes' theorem achieved better accuracy than the existing method based on the Mahalanobis distance.

Proceedings ArticleDOI
04 Jan 2006
TL;DR: This paper proposes a novel vision system for clustering and classification of object-based video motion clips using spatiotemporal models, and proposes a simple Mahalanobis classifier for the detection of anomalous trajectories.
Abstract: Techniques for understanding video object motion activity are becoming increasingly important with the widespread adoption of CCTV surveillance systems. In this paper we propose a novel vision system for clustering and classification of object-based video motion clips using spatiotemporal models. Object trajectories are modeled as motion time series using the lowest order Fourier coefficients obtained by Discrete Fourier Transform. Trajectory clustering is then carried out in the DFT-coefficient feature space to discover patterns of similar object motion activity. The DFT coefficients are used as input feature vectors to a Self-Organising Map which can learn similarities between object trajectories in an unsupervised manner. Encoding trajectories in this way leads to efficiency gains over existing approaches that use discrete point-based flow vectors to represent the whole trajectory. Assuming the clusters of trajectory points are distributed normally in the coefficient feature space, we propose a simple Mahalanobis classifier for the detection of anomalous trajectories. Our proposed techniques are validated on three different datasets - Australian sign language, hand-labelled object trajectories from video surveillance footage and real-time tracking data obtained in the laboratory. Applications to event detection and motion data mining for visual surveillance systems are envisaged.

01 Jan 2006
TL;DR: In this article, a new method combining intensity and range images and providing insensitivity to expression variation based on Log-Gabor Templates is presented; by breaking a single image into 75 semi-independent observations, the reliance of the algorithm upon any particular part of the face is relaxed, allowing robustness in the presence of occlusions, distortions and facial expressions.
Abstract: The addition of Three Dimensional (3D) data has the potential to greatly improve the accuracy of Face Recognition Technologies by providing complementary information. In this paper a new method combining intensity and range images and providing insensitivity to expression variation based on Log-Gabor Templates is presented. By breaking a single image into 75 semi-independent observations the reliance of the algorithm upon any particular part of the face is relaxed, allowing robustness in the presence of occlusions, distortions and facial expressions. Also presented is a new distance measure based on the Mahalanobis Cosine metric which has desirable discriminatory characteristics in both the 2D and 3D domains. Using the 3D database collected by the University of Notre Dame for the Face Recognition Grand Challenge (FRGC), benchmarking results are presented demonstrating the performance of the proposed methods.
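The distance measure named above is commonly computed by whitening a PCA subspace and taking the cosine there; the sketch below follows that recipe on hypothetical feature vectors. It is an assumed reading of the Mahalanobis Cosine metric, not code from the paper.

```python
import numpy as np

def fit_whitener(train, n_pc=10, eps=1e-8):
    """PCA subspace with per-axis whitening, learned from training vectors."""
    mu = train.mean(axis=0)
    _, S, Vt = np.linalg.svd(train - mu, full_matrices=False)
    sd = S[:n_pc] / np.sqrt(len(train) - 1)    # std. dev. along each component
    return mu, Vt[:n_pc].T / (sd + eps)        # projection into whitened space

def mahcosine(a, b, whitener):
    """Cosine similarity in the whitened (Mahalanobis) space."""
    mu, W = whitener
    u, v = (a - mu) @ W, (b - mu) @ W
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(9)
gallery = rng.normal(size=(200, 50))           # stand-in for Log-Gabor features
wh = fit_whitener(gallery)
print(mahcosine(gallery[0], gallery[0] + rng.normal(0, .1, 50), wh))
```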

Journal ArticleDOI
TL;DR: In this article, a nonparametric resampling technique is used to estimate the sampling variability for the target site, as well as for every site that is a potential member of the pooling group.
Abstract: [1] In recent years, catchment similarity measures based on flood seasonality have become popular alternatives for identifying hydrologically homogeneous pooling groups used in regional flood frequency analysis. Generally, flood seasonality pooling measures are less prone to errors and are more robust than measures based on flood magnitude data. However, they are also subject to estimation uncertainty resulting from sampling variability. Because of sampling variability, catchment similarity in flood seasonality can significantly deviate from the true similarity. Therefore sampling variability should be directly incorporated in the pooling algorithm to decrease the level of pooling uncertainty. This paper develops a new pooling approach that takes into consideration the sampling variability of flood seasonality measures used as pooling variables. A nonparametric resampling technique is used to estimate the sampling variability for the target site, as well as for every site that is a potential member of the pooling group for the target site. The variability is quantified by Mahalanobis distance ellipses. The similarity between the target site and the potential site is then assessed by finding the minimum confidence interval at which their Mahalanobis ellipses intersect. The confidence intervals can be related to regional homogeneity, which allows the target degree of regional homogeneity to be set in advance. The approach is applied to a large set of catchments from Great Britain, and its performance is compared with the performance of a previously used pooling technique based on the Euclidean distance. The results demonstrate that the proposed approach outperforms the previously used approach in terms of the overall homogeneity of delineated pooling groups in the study area.

Journal ArticleDOI
01 Oct 2006
TL;DR: A novel anomaly detection and clustering algorithm for network intrusion detection based on factor analysis and Mahalanobis distance is presented; it identifies outliers based on a trained model and clusters attacks by abnormal features.
Abstract: This paper presents a novel anomaly detection and clustering algorithm for the network intrusion detection based on factor analysis and Mahalanobis distance. Factor analysis is used to uncover the latent structure of a set of variables. The Mahalanobis distance is used to determine the "similarity" of a set of values from an "unknown" sample to a set of values measured from a collection of "known" samples. By utilizing factor analysis and Mahalanobis distance, we developed an algorithm 1) to identify outliers based on a trained model, and 2) to cluster attacks by abnormal features.

Proceedings ArticleDOI
01 Oct 2006
TL;DR: This paper proposes an iterative algorithm for registering full six-degrees-of-freedom Euclidean motion that minimizes the sum of Mahalanobis distances by linearizing around the current estimate at each iteration; the algorithm is fast, does not depend on a good initialization, and can be applied to large sequences in complex outdoor terrains.
Abstract: We study the problem of registering local relative pose estimates to produce a global consistent trajectory of a moving robot. Traditionally, this problem has been studied with a flat world assumption wherein the robot motion has only three degrees of freedom. In this paper, we generalize this for the full six-degrees-of-freedom Euclidean motion. Given relative pose estimates and their covariances, our formulation uses the underlying Lie algebra of the Euclidean motion to compute the absolute poses. Ours is an iterative algorithm that minimizes the sum of Mahalanobis distances by linearizing around the current estimate at each iteration. Our algorithm is fast, does not depend on a good initialization, and can be applied to large sequences in complex outdoor terrains. It can also be applied to fuse uncertain pose information from different available sources including GPS, LADAR, wheel encoders and vision sensing to obtain more accurate odometry. Experimental results using both simulated and real data support our claim.

01 Jan 2006
TL;DR: The paper compares the results of three advanced classifiers against a simple minimum distance classifier, and shows that while the simple classifier provides an error rate just over 6%, error rates down to 1-2% can be achieved with a combination of feature selection together with an advanced classifier such as ant colony optimization.
Abstract: The pap-smear benchmark database provides data for comparing classification methods. The data consists of 917 images of pap-smear cells, classified carefully by cyto-technicians and doctors. The classes are difficult to separate, since class membership is not clearly defined. A basic data analysis provides numerical measures indicating how well the classes are separated, based on the Mahalanobis distance norm. The paper compares the results of three advanced classifiers against a simple minimum distance classifier. The results show that while the simple classifier provides an error rate just over 6%, error rates down to 1-2% can be achieved with a combination of feature selection together with an advanced classifier such as ant colony optimization. Students and researchers can access the database via the Internet, and use it to test and compare their own classification methods.

Proceedings ArticleDOI
21 Aug 2006
TL;DR: In this paper, a collision probability analysis for spherical objects exhibiting linear relative motion is performed by combining covariances and physical object dimensions at the point of closest approach, and the resulting covariance ellipsoid and hardbody are projected onto the plane perpendicular to relative velocity by assuming linear relative motions and constant positional uncertainty throughout the brief encounter.
Abstract: Linear methods for computing satellite collision probability can be extended to accommodate nonlinear relative motion in the presence of changing position and velocity uncertainties. Collision probability analysis for spherical objects exhibiting linear relative motion is accomplished by combining covariances and physical object dimensions at the point of closest approach. The resulting covariance ellipsoid and hardbody are projected onto the plane perpendicular to relative velocity by assuming linear relative motion and constant positional uncertainty throughout the brief encounter. Collision potential is determined from the object footprint on the projected, two-dimensional, covariance ellipse. For nonlinear motion, the dimension associated with relative velocity must be reintroduced. This can be simply done by breaking the collision tube into sufficiently small cylinders such that the sectional motion is nearly linear, computing the linear probability associated with each section, and then summing. The method begins with object position and velocity data at the time of closest approach. Propagation of position, velocity, and covariance is done forward/backward in time until a user limit is reached. An alternate method is presented that creates a voxel grid in Mahalanobis space, computes the probability of each affected voxel as the combined object passes through the space, and sums. These general methods are not dependent on a specific propagator or linear probability model.
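For the linear-motion building block, the collision probability is the integral of the projected 2D Gaussian over the combined-hardbody footprint; the sketch below evaluates it by brute-force grid quadrature rather than the series or voxel methods of the paper. The miss vector, covariance and radius are hypothetical.

```python
import numpy as np

def collision_probability(miss, cov2d, radius, n=400):
    """Integrate a 2D Gaussian (mean = miss vector) over a hardbody circle."""
    xs = np.linspace(-radius, radius, n)
    X, Y = np.meshgrid(xs, xs)
    inside = X**2 + Y**2 <= radius**2              # combined object footprint
    pts = np.stack([X - miss[0], Y - miss[1]], axis=-1)
    prec = np.linalg.inv(cov2d)
    d2 = np.einsum('...i,ij,...j->...', pts, prec, pts)   # Mahalanobis^2 grid
    pdf = np.exp(-0.5 * d2) / (2 * np.pi * np.sqrt(np.linalg.det(cov2d)))
    return float((pdf * inside).sum() * (xs[1] - xs[0])**2)

print(collision_probability(miss=np.array([200.0, 50.0]),      # metres
                            cov2d=np.diag([100.0**2, 40.0**2]),
                            radius=10.0))
```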

Proceedings ArticleDOI
16 Aug 2006
TL;DR: A new Polynomial Mahalanobis Distance is introduced and its ability to capture the properties of an initial positive path sample and produce accurate path segmentation with few outliers is demonstrated.
Abstract: Autonomous robot navigation in outdoor environments remains a challenging and unsolved problem. A key issue is our ability to identify safe or navigable paths far enough ahead of the robot to allow smooth trajectories at acceptable speeds. Colour or texture-based labeling of safe path regions in image sequences is one way to achieve this far-field prediction. A challenge for classifiers identifying path and nonpath regions is to make meaningful comparisons of feature vectors at pixels or over a window. Most simple distance metrics cannot use all the information available and therefore the resulting labeling does not tightly capture the visible path. We introduce a new Polynomial Mahalanobis Distance and demonstrate its ability to capture the properties of an initial positive path sample and produce accurate path segmentation with few outliers. Experiments show the method's effectiveness for path segmentation in natural scenes using both colour and texture feature vectors. The new metric is compared with classifications based on Euclidean and standard Mahalanobis distance and produces superior results.
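The construction of the Polynomial Mahalanobis Distance itself is not reproduced here; as a naive baseline in the same spirit, the sketch below lifts colour features through a degree-2 polynomial map and takes a regularized Mahalanobis distance to the positive path sample in the lifted space. All data, the ridge value and the feature map are assumptions.

```python
import numpy as np

def poly2(X):
    """Degree-2 polynomial feature map with cross terms (naive, with duplicates)."""
    n, d = X.shape
    cross = np.einsum('ni,nj->nij', X, X).reshape(n, d * d)
    return np.hstack([X, cross])

def lifted_mahalanobis(positives, queries, ridge=1e-3):
    """Mahalanobis distance to the positive sample, in the lifted space."""
    P = poly2(positives)
    mu = P.mean(axis=0)
    prec = np.linalg.inv(np.cov(P, rowvar=False) + ridge * np.eye(P.shape[1]))
    D = poly2(queries) - mu
    return np.einsum('ij,jk,ik->i', D, prec, D)

rng = np.random.default_rng(10)
path = rng.normal([0.4, 0.3, 0.2], 0.05, size=(300, 3))  # path-pixel RGB sample
other = rng.normal([0.2, 0.6, 0.2], 0.05, size=(5, 3))   # vegetation-like RGB
print(lifted_mahalanobis(path, path[:5]))   # small distances
print(lifted_mahalanobis(path, other))      # much larger distances
```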

Proceedings ArticleDOI
22 Nov 2006
TL;DR: A new method combining intensity and range images and providing insensitivity to expression variation based on Log-Gabor Templates is presented, allowing robustness in the presence of occulusions, distortions and facial expressions.
Abstract: The addition of Three Dimensional (3D) data has the potential to greatly improve the accuracy of Face Recognition Technologies by providing complementary information. In this paper a new method combining intensity and range images and providing insensitivity to expression variation based on Log-Gabor Templates is presented. By breaking a single image into 75 semi-independent observations the reliance of the algorithm upon any particular part of the face is relaxed, allowing robustness in the presence of occlusions, distortions and facial expressions. Also presented is a new distance measure based on the Mahalanobis Cosine metric which has desirable discriminatory characteristics in both the 2D and 3D domains. Using the 3D database collected by the University of Notre Dame for the Face Recognition Grand Challenge (FRGC), benchmarking results are presented demonstrating the performance of the proposed methods.

Journal Article
TL;DR: In this paper, a view-invariant approach for human identification at a distance, using gait recognition, is presented, where the distance vectors are differences between the bounding box and silhouette and are extracted using 4 projections of the silhouette.
Abstract: Gait refers to the style of walking of an individual. This paper presents a view-invariant approach for human identification at a distance, using gait recognition. Recognition of a person from their gait is a biometric of increasing interest. Based on principal component analysis (PCA), this paper describes a simple but efficient approach to gait recognition. Binarized silhouettes of a motion object are represented by 1-D signals, which are the basic image features called distance vectors. The distance vectors are differences between the bounding box and silhouette, and are extracted using 4 projections of the silhouette. Based on normalized correlation of the distance vectors, gait cycle estimation is first performed to extract the gait cycle. Second, eigenspace transformation, based on PCA, is applied to time-varying distance vectors, and Mahalanobis-distance-based supervised pattern classification is then performed in the lower-dimensional eigenspace for human identification. A fusion strategy is finally executed to produce a final decision. Experimental results on 3 main databases demonstrate that the right person is in the top 2 matches 100% of the time for the cases where training and testing sets correspond to the same walking styles, and in the top 3-4 matches 100% of the time when training and testing sets do not correspond to the same walking styles.

Journal ArticleDOI
TL;DR: A change-detection system for remotely sensed imagery based on the combination of fuzzy sets and neural networks is proposed; results show that it has great potential for land-cover change detection since it allows the nature of change to be extracted automatically.
Abstract: This paper aims to propose a change-detection system in remotely sensed imagery based on the combination of fuzzy sets and neural networks. Multitemporal images are directly classified into change and no-change classes using a fuzzy membership model in order to provide complete information about the change. Specifically, two fuzzy models derived from the Mahalanobis distance and a fuzzy neural network (FNN) combination are proposed and compared. In order to evaluate the performance of each model, extensive experiments using different performance indicators are carried out on two SPOT HRV images covering a region of Algeria. Results obtained showed that the proposed system has great potential for land-cover change detection since it allows the nature of change to be extracted automatically. Furthermore, the FNN-based model gives the best performance. This model allows a reduced amount of false alarms with higher change detection accuracy.

Journal ArticleDOI
TL;DR: The paper shows that correlation filter classifiers, a relatively unheralded model-based approach, have greater robustness and accuracy than traditional appearance-based methods (such as PCA) and provide the best choice for facial matching.

Journal ArticleDOI
TL;DR: The Mahalanobis-Taguchi system (MTS) as discussed by the authors is a diagnosis and forecasting method using multivariate data, which is a measure based on correlations between the variables and patterns that can be identified and analyzed with respect to a base or reference group.
Abstract: The Mahalanobis–Taguchi system (MTS) is a diagnosis and forecasting method using multivariate data. Mahalanobis distance (MD) is a measure based on correlations between the variables and patterns that can be identified and analyzed with respect to a base or reference group. The MTS is of interest because of its reported accuracy in forecasting using small, correlated data sets. This is the type of data that is encountered with consumer vehicle ratings. MTS enables a reduction in dimensionality and the ability to develop a scale based on MD values. MTS identifies a set of useful variables from the complete data set with equivalent correlation and considerably less time and data. This article presents the application of the MTS, its applicability in identifying a reduced set of useful variables in multidimensional systems, and a comparison of results with those obtained from a standard statistical approach to the problem.

Journal ArticleDOI
TL;DR: A new Gaussian graphical modeling approach that is robustified against possible outliers is proposed, together with an outlying score, similar to but more robust than the Mahalanobis distance, that makes it easier to identify outlying observations.