
Showing papers on "Mahalanobis distance published in 2014"


Proceedings ArticleDOI
23 Jun 2014
TL;DR: The proposed DDML trains a deep neural network which learns a set of hierarchical nonlinear transformations to project face pairs into the same feature subspace, under which the distance of each positive face pair is less than a smaller threshold and that of each negative pair is higher than a larger threshold.
Abstract: This paper presents a new discriminative deep metric learning (DDML) method for face verification in the wild. Different from existing metric learning-based face verification methods, which aim to learn a Mahalanobis distance metric that simultaneously maximizes the inter-class variations and minimizes the intra-class variations, the proposed DDML trains a deep neural network that learns a set of hierarchical nonlinear transformations to project face pairs into the same feature subspace, under which the distance of each positive face pair is less than a smaller threshold and that of each negative pair is greater than a larger threshold, so that discriminative information can be exploited in the deep network. Our method achieves very competitive face verification performance on the widely used LFW and YouTube Faces (YTF) datasets.

730 citations
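
A minimal sketch of the two-threshold pair constraint described in the abstract, assuming the network's feature embeddings are already computed; the threshold names and values are illustrative placeholders, not the authors' implementation:

```python
import numpy as np

def ddml_pair_loss(f1, f2, is_positive, tau_pos=1.0, tau_neg=2.0):
    """Hinge-style loss enforcing d^2 < tau_pos for positive pairs and
    d^2 > tau_neg for negative pairs (tau_pos < tau_neg); f1 and f2 are
    the deep network's outputs for the two face images."""
    d2 = np.sum((f1 - f2) ** 2)        # squared distance in the learned subspace
    if is_positive:
        return max(0.0, d2 - tau_pos)  # pull positives below the smaller threshold
    return max(0.0, tau_neg - d2)      # push negatives above the larger threshold
```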


Journal ArticleDOI
TL;DR: It is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem that selects the regions using sparsity; an extension of the learning formulations to a weakly supervised case allows the descriptors to be learned from unannotated image collections.
Abstract: The objective of this work is to learn descriptors suitable for the sparse feature detectors used in viewpoint invariant matching. We make a number of novel contributions towards this goal. First, it is shown that learning the pooling regions for the descriptor can be formulated as a convex optimisation problem selecting the regions using sparsity. Second, it is shown that descriptor dimensionality reduction can also be formulated as a convex optimisation problem, using Mahalanobis matrix nuclear norm regularisation. Both formulations are based on discriminative large margin learning constraints. As the third contribution, we evaluate the performance of the compressed descriptors, obtained from the learnt real-valued descriptors by binarisation. Finally, we propose an extension of our learning formulations to a weakly supervised case, which allows us to learn the descriptors from unannotated image collections. It is demonstrated that the new learning methods improve over the state of the art in descriptor learning on the annotated local patches data set of Brown et al. and unannotated photo collections of Philbin et al.

400 citations


Book ChapterDOI
01 Jan 2014
TL;DR: This chapter reviews the main ideas of Mahalanobis metric learning in general and gives a detailed study on different approaches for the task of single-shot person re-identification, also comparing to the state of the art.
Abstract: Recently, Mahalanobis metric learning has gained a considerable interest for single-shot person re-identification. The main idea is to build on an existing image representation and to learn a metric that reflects the visual camera-to-camera transitions, allowing for a more powerful classification. The goal of this chapter is twofold. We first review the main ideas of Mahalanobis metric learning in general and then give a detailed study on different approaches for the task of single-shot person re-identification, also comparing to the state of the art. In particular, for our experiments, we used Linear Discriminant Metric Learning (LDML), Information Theoretic Metric Learning (ITML), Large Margin Nearest Neighbor (LMNN), Large Margin Nearest Neighbor with Rejection (LMNN-R), Efficient Impostor-based Metric Learning (EIML), and KISSME. For our evaluations we used four different publicly available datasets (i.e., VIPeR, ETHZ, PRID 2011, and CAVIAR4REID). Additionally, we generated the new, more realistic PRID 450S dataset, where we also provide detailed segmentations. For the latter one, we also evaluated the influence of using well-segmented foreground and background regions. Finally, the corresponding results are presented and discussed.

251 citations
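
All of the surveyed learners (LDML, ITML, LMNN, EIML, KISSME, ...) share the same parametric form: a positive semi-definite matrix M defining d_M^2(x, y) = (x - y)^T M (x - y). A minimal sketch of evaluating such a metric, assuming M has already been produced by whichever learner is used:

```python
import numpy as np

def mahalanobis_sq(x, y, M):
    """Squared Mahalanobis distance under a learned PSD matrix M."""
    diff = x - y
    return float(diff @ M @ diff)

# Any PSD M factors as M = L.T @ L, so the learned metric is simply the
# Euclidean distance after the linear projection x -> L @ x.
def factorize(M):
    vals, vecs = np.linalg.eigh(M)
    vals = np.clip(vals, 0.0, None)          # guard against tiny negative eigenvalues
    return np.diag(np.sqrt(vals)) @ vecs.T   # L such that L.T @ L = M
```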


Journal ArticleDOI
TL;DR: This paper reformulates person reidentification in a camera network as a multitask distance metric learning problem, and presents a novel multitask maximally collapsing metric learning (MtMCML) model that works substantially better than other current state-of-the-art person reIdentification methods.
Abstract: Person reidentification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric by using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct person reidentification tasks on different camera pairs overlooks the differences in camera settings; at the same time, it is very time-consuming to label people manually in images from surveillance videos. For example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras; therefore, directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting when using such insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These Mahalanobis distance metrics are different but related, and are learned with a joint regularization that alleviates over-fitting. Furthermore, by extending this approach, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as a multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.

243 citations


Journal ArticleDOI
TL;DR: A learning scheme for the construction of optimized spectral descriptors is shown and related to Mahalanobis metric learning, and the superiority of the proposed approach in generating correspondences is demonstrated on synthetic and scanned human figures.
Abstract: Informative and discriminative feature descriptors play a fundamental role in deformable shape analysis. For example, they have been successfully employed in correspondence, registration, and retrieval tasks. In recent years, significant attention has been devoted to descriptors obtained from the spectral decomposition of the Laplace-Beltrami operator associated with the shape. Notable examples in this family are the heat kernel signature (HKS) and the recently introduced wave kernel signature (WKS). The Laplacian-based descriptors achieve state-of-the-art performance in numerous shape analysis tasks; they are computationally efficient, isometry-invariant by construction, and can gracefully cope with a variety of transformations. In this paper, we formulate a generic family of parametric spectral descriptors. We argue that to be optimized for a specific task, the descriptor should take into account the statistics of the corpus of shapes to which it is applied (the "signal") and those of the class of transformations to which it is made insensitive (the "noise"). While such statistics are hard to model axiomatically, they can be learned from examples. Following the spirit of the Wiener filter in signal processing, we show a learning scheme for the construction of optimized spectral descriptors and relate it to Mahalanobis metric learning. The superiority of the proposed approach in generating correspondences is demonstrated on synthetic and scanned human figures. We also show that the learned descriptors are robust enough to be learned on synthetic data and transferred successfully to scanned shapes.

193 citations


Journal ArticleDOI
TL;DR: The results demonstrate that the proposed Hausdorff learning approach can improve 3-D object retrieval performance.
Abstract: In view-based 3-D object retrieval, each object is described by a set of views. Group matching thus plays an important role. Previous research efforts have shown the effectiveness of Hausdorff distance in group matching. In this paper, we propose a 3-D object retrieval scheme with Hausdorff distance learning. In our approach, relevance feedback information is employed to select positive and negative view pairs with a probabilistic strategy, and a view-level Mahalanobis distance metric is learned. This Mahalanobis distance metric is adopted in estimating the Hausdorff distances between objects, based on which the objects in the 3-D database are ranked. We conduct experiments on three testing data sets, and the results demonstrate that the proposed Hausdorff learning approach can improve 3-D object retrieval performance.

188 citations
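
A sketch of the group-matching step, assuming a view-level Mahalanobis matrix M has already been learned from the relevance-feedback pairs:

```python
import numpy as np

def hausdorff_mahalanobis(views_a, views_b, M):
    """Hausdorff distance between two view sets, with the view-to-view
    distance measured under the learned Mahalanobis matrix M."""
    def d2(u, v):
        diff = u - v
        return diff @ M @ diff
    # directed distance: worst case over one set of the nearest view in the other
    d_ab = max(min(d2(u, v) for v in views_b) for u in views_a)
    d_ba = max(min(d2(u, v) for v in views_a) for u in views_b)
    return max(d_ab, d_ba) ** 0.5
```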


Book ChapterDOI
01 Nov 2014
TL;DR: A new large margin multi-metric learning (LM3L) method for face and kinship verification in the wild that jointly learns multiple distance metrics under which the correlations of different feature representations of each sample are maximized.
Abstract: Metric learning has been widely used in face and kinship verification, and a number of such algorithms have been proposed over the past decade. However, most existing metric learning methods only learn one Mahalanobis distance metric from a single feature representation for each face image and cannot deal with multiple feature representations directly. In many face verification applications, we can extract multiple features for each face image to capture complementary information, and it is desirable to learn distance metrics from these multiple features so that more discriminative information can be exploited than can be obtained from individual features. To achieve this, we propose a new large margin multi-metric learning (LM3L) method for face and kinship verification in the wild. Our method jointly learns multiple distance metrics under which the correlations of different feature representations of each sample are maximized, and the distance of each positive pair is less than a low threshold and that of each negative pair is greater than a high threshold, simultaneously. Experimental results show that our method can achieve competitive results compared with the state-of-the-art methods.

168 citations


Journal ArticleDOI
TL;DR: ExDet as discussed by the authors is a multivariate statistical tool based on the Mahalanobis distance, which measures the similarity between the reference and projection domains by accounting for both the deviation from the mean and the correlation between variables.
Abstract: Aim Correlative species distribution models (SDMs) often involve some degree of projection into novel covariate space (i.e. extrapolation), because calibration data may not encompass the entire space of interest. Most methods for identifying extrapolation focus on the range of each model covariate individually. However, extrapolation can occur that is well within the range of univariate variation, but which exhibits novel combinations between covariates. Our objective was to develop a tool that can detect, distinguish and quantify these two types of novelties: novel univariate range and novel combinations of covariates. Location Global, Australia, South Africa. Methods We developed a new multivariate statistical tool, based on the Mahalanobis distance, which measures the similarity between the reference and projection domains by accounting for both the deviation from the mean and the correlation between variables. The method also provides an assessment tool for the detection of the most influential covariates leading to dissimilarity. As an example application, we modelled an Australian shrub (Acacia cyclops) widely introduced to other countries and compared reference data, global distribution data and both types of model extrapolation against the projection globally and in South Africa. Results The new tool successfully detected and quantified the degree of dissimilarity for points that were either outside the univariate range or formed novel covariate combinations (correlations) but were still within the univariate range of covariates. For A. cyclops, more than half of the points (6617 of 10,785) from the global projection space that were found to lie within the univariate range of reference data exhibited distorted correlations. Not all the climate covariates used for modelling contributed to novelty equally over the geographical space of the model projection. Main conclusions Identifying non-analogous environments is a critical component of model interrogation. Our extrapolation detection (ExDet) tool can be used as a quantitative method for exploring novelty and interpreting the projections from correlative SDMs and is available for free download as stand-alone software from http://www.climond.org/exdet.

165 citations
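
A rough sketch of the two novelty types the paper distinguishes. ExDet itself reports graded scores and identifies the most influential covariates; this simplified version only flags each projection point, using the sample covariance of the reference data:

```python
import numpy as np

def novelty_flags(reference, projection):
    """Type 1 = outside the univariate range of some covariate; type 2 =
    inside all univariate ranges but with a Mahalanobis distance beyond
    the maximum seen in the reference data (a novel covariate combination)."""
    lo, hi = reference.min(axis=0), reference.max(axis=0)
    mu = reference.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(reference, rowvar=False))

    def md2(x):
        d = x - mu
        return d @ inv_cov @ d

    md_max = max(md2(x) for x in reference)
    flags = []
    for x in projection:
        if np.any(x < lo) or np.any(x > hi):
            flags.append("univariate novelty")
        elif md2(x) > md_max:
            flags.append("novel combination")
        else:
            flags.append("analogue")
    return flags
```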


Journal ArticleDOI
TL;DR: In this article, a robust Kalman filter scheme is proposed to resist the influence of the outliers in the observations, where a judging index is defined as the square of the Mahalanobis distance from the observation to its prediction.
Abstract: A robust Kalman filter scheme is proposed to resist the influence of outliers in the observations. Two kinds of observation error are studied, i.e., outliers in the actual observations and a heavy-tailed distribution of the observation noise. Either kind of error can seriously degrade the performance of the standard Kalman filter. In the proposed method, a judging index is defined as the square of the Mahalanobis distance from the observation to its prediction. By assuming that the observation is Gaussian distributed with the mean and covariance being the observation prediction and its associated covariance, the judging index should be Chi-square distributed with the dimension of the observation vector as the degrees of freedom. A hypothesis test is performed on the actual observation by treating the above Gaussian distribution as the null hypothesis and the judging index as the test statistic. If the null hypothesis is rejected, it is concluded that outliers exist in the observations. In the presence of outliers, scaling factors can be introduced to rescale the covariance of the observation noise or of the innovation vector, both resulting in a decreased filter gain. The scaling factors can be solved for using Newton's iterative method or in an analytical manner. The harmful influence of either kind of error is effectively resisted in the proposed method, so robustness is achieved. Moreover, as the number of iterations needed in the iterative method may be rather large, the analytically calculated scaling factor is to be preferred.

159 citations
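
A simplified sketch of the gating idea: the judging index is chi-square distributed under the null hypothesis, and failing the test triggers a rescaling of the observation-noise covariance. The crude fixed-point rescaling below stands in for the paper's Newton iteration or analytical solution:

```python
import numpy as np
from scipy.stats import chi2

def robust_kalman_update(x_pred, P_pred, z, H, R, alpha=0.05, max_iter=20):
    """Measurement update with outlier resistance: gamma is the squared
    Mahalanobis distance of the observation from its prediction; if the
    chi-square test rejects, R is inflated, decreasing the filter gain."""
    thresh = chi2.ppf(1 - alpha, df=len(z))
    lam = 1.0
    for _ in range(max_iter):
        S = H @ P_pred @ H.T + lam * R        # innovation covariance
        nu = z - H @ x_pred                   # innovation
        gamma = nu @ np.linalg.solve(S, nu)   # judging index
        if gamma <= thresh:
            break
        lam *= gamma / thresh                 # inflate the noise covariance
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ nu
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P
```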


Journal ArticleDOI
TL;DR: This paper evaluates eight different concept drift detectors and performs tests using artificial datasets affected by abrupt and gradual concept drifts, with several rates of drift, with and without noise and irrelevant attributes, and also using real-world datasets.
Abstract: In data stream environments, drift detection methods are used to identify when the context has changed. This paper evaluates eight different concept drift detectors (DDM, EDDM, PHT, STEPD, DOF, ADWIN, Paired Learners, and ECDD) and performs tests using artificial datasets affected by abrupt and gradual concept drifts, with several rates of drift, with and without noise and irrelevant attributes, and also using real-world datasets. In addition, a 2^k factorial design was used to indicate the parameters that most influence performance, which is a novelty in the area. Also, a variation of the Friedman non-parametric statistical test was used to identify the best methods. Experiments compared accuracy, evaluation time, as well as false alarm and miss detection rates. Additionally, we used the Mahalanobis distance to measure how similar the methods are when compared to the best possible detection output. This work can, to some extent, also be seen as a research survey of existing drift detection methods.

132 citations


Journal ArticleDOI
TL;DR: This work proposes to replace the standard 'decoding' approach to searchlight-based MVPA, measuring the performance of a classifier by its accuracy, with a method based on the multivariate form of the general linear model, making the full analytical power of complex factorial designs known from univariate fMRI analyses available to MVPA studies.

Journal ArticleDOI
TL;DR: A landslide susceptibility analysis is performed through an artificial neural network (ANN) algorithm, in order to model the nonlinear relationship between landslide manifestation and geological and geomorphological parameters, which results in a geospatial product that expresses the landslide susceptibility index.
Abstract: A landslide susceptibility analysis is performed through an artificial neural network (ANN) algorithm, in order to model the nonlinear relationship between landslide manifestation and geological and geomorphological parameters. The proposed methodology can be divided into two distinctive phases. In the first phase, the methodology introduces a specific distance metric, the Mahalanobis distance, to improve the selection of non-landslide records that "enriches" the training database and provides the model with the necessary data during the training phase. In the second phase, the methodology develops an ANN model capable of minimizing the effect of over-fitting by monitoring the testing data in parallel during the training phase and terminating the learning process when certain acceptance criteria are achieved. The model was capable of identifying unstable areas, expressed by a landslide susceptibility index. The proposed methodology has been applied in the County of Xanthi, in the northern part of Greece, an area where a well-established landslide database existed. The landslide-related parameters taken into account in the analysis were the following: lithology, distance from geological boundaries, distance from tectonic features, elevation, slope inclination, slope orientation, distance from the hydrographic network, and distance from the road network. These parameters were normalized, reclassified, and used as input variables, while the description of a given area as landslide/non-landslide was the output variable. The final outcome of the model was a geospatial product expressing the landslide susceptibility index, which showed satisfactory results when compared with an up-to-date landslide inventory database.

Journal ArticleDOI
TL;DR: Experimental results indicate that the proposed hyperspectral anomaly detection approach outperforms three commonly used state-of-the-art anomaly detection algorithms.
Abstract: This paper proposes a nonlinear version of an anomaly detector with a robust regression detection strategy for hyperspectral imagery. In the traditional Mahalanobis distance-based hyperspectral anomaly detectors, the background statistics are easily contaminated by anomaly targets, resulting in a poor detection performance. The traditional detectors also often fail to detect anomaly targets when the samples in the image do not conform to a Gaussian normal distribution. In order to solve these problems, this paper proposes a robust nonlinear anomaly detection (RNAD) method by utilizing robust regression analysis in the kernel feature space. Using the robust regression detection strategy, this method can suppress the contamination of the detection statistics by anomaly targets. Moreover, in this anomaly detection method, the input data are implicitly mapped into an appropriate high-dimensional kernel feature space by nonlinear mapping, which is associated with the selected kernel function. Experiments were conducted on synthetic data and an airborne AVIRIS hyperspectral image, and the experimental results indicate that the proposed hyperspectral anomaly detection approach outperforms three commonly used state-of-the-art anomaly detection algorithms.

Journal ArticleDOI
TL;DR: Consideration of the correlation between indicators improves the evaluation results (in terms of sorting and closeness) to a certain extent compared to the traditional TOPSIS method and softens the closeness value, consistent with reality.
Abstract: Evaluation of the competitiveness of high-tech industry is a technical decision-making issue involving multiple criteria. It is also a practical path to promote a country's competitiveness. However, the competitiveness indicators in high-tech industry often act and react upon one another. Moreover, different dimensions and indicator weights also affect the evaluation results. In this paper, the Mahalanobis distance is used to improve the traditional technique for order preference by similarity to ideal solution (TOPSIS). The improved TOPSIS method has the following properties: (1) an improved relative closeness which is invariant after non-singular linear transformation, and (2) the weighted Mahalanobis distance is the same as the weighted Euclidean distance when the indicators are uncorrelated. The new method is applied to evaluate the competitiveness of the Chinese high-tech industry using data from 2011. Consideration of the correlation between indicators improves the evaluation results (in terms of sorting and closeness) to a certain extent compared to the traditional TOPSIS method. The top five provinces are: Guangdong, Jiangsu, Shanghai, Beijing, and Shandong. This finding reflects the practical linkage among provinces and softens the closeness value, consistent with reality.
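
A sketch of the improved closeness computation under stated assumptions: all criteria are benefit-type (the ideal is the column-wise maximum), and the weighted Mahalanobis distance is taken as the quadratic form with kernel W^{1/2} Σ^{-1} W^{1/2}. When the indicators are uncorrelated with unit variances, this reduces to the weighted Euclidean distance, matching property (2) above:

```python
import numpy as np

def topsis_mahalanobis(X, weights):
    """Relative closeness of each alternative (row of X) to the ideal
    solution, with distances measured by a weighted Mahalanobis metric."""
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    W = np.diag(np.sqrt(weights))
    A = W @ inv_cov @ W                      # weighted Mahalanobis kernel
    ideal, anti = X.max(axis=0), X.min(axis=0)

    def dist(x, y):
        diff = x - y
        return np.sqrt(diff @ A @ diff)

    return np.array([dist(x, anti) / (dist(x, anti) + dist(x, ideal))
                     for x in X])            # higher = more competitive
```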

Journal ArticleDOI
TL;DR: A two-step parametric method was developed to predict the impending failure of HDDs using an aggregate of statistical models, and could achieve a 68% failure detection rate with a 0% false alarm rate, much better than state-of-the-art methods.
Abstract: Predicting the impending failure of hard disk drives (HDDs) is crucial for preventing the loss of essential data. In this paper, a two-step parametric method was developed to predict the impending failure of HDDs using an aggregate of statistical models. This method deals with the problem of failure prediction in two steps: anomaly detection and failure prediction. First, the Mahalanobis distance was used to aggregate all the monitored variables into one index, which was then transformed into a Gaussian variable by the Box-Cox transformation. By defining an appropriate threshold, anomalies in HDDs were detected as a result. Second, a sliding-window-based generalized likelihood ratio test was proposed to track the anomaly progression in an HDD; when the occurrence of anomalies in a time interval is found to be statistically significant, the HDD is judged to be approaching failure. In this work, we also derived a new cost function to adjust the prediction rate. This is important for balancing the failure detection rate and the false alarm rate, as well as for providing an advanced warning of HDD failures to the users, whereby the users can back up their data in time. The developed method was first applied to a synthetic data set, showing its effectiveness in predicting failures. To demonstrate its practical usefulness, the method was also applied to a real-life HDD data set. The result shows that our method could achieve a 68% failure detection rate with a 0% false alarm rate. This is much better than the results achieved by state-of-the-art methods, such as support vector machines and hidden Markov models.
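
A sketch of the first (anomaly detection) step under stated assumptions: `healthy` holds baseline observations of the monitored variables, the Box-Cox exponent is fitted on the healthy index values, and a mean-plus-n-sigma threshold stands in for whatever threshold the authors calibrate:

```python
import numpy as np
from scipy.stats import boxcox

def hdd_anomaly_flags(healthy, monitored, n_sigma=3.0):
    """Aggregate all monitored variables into one Mahalanobis distance
    index, Box-Cox transform it towards normality, and flag values
    beyond mean + n_sigma * std of the transformed healthy baseline."""
    mu = healthy.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(healthy, rowvar=False))

    def md(X):
        diff = X - mu
        # small offset keeps the index strictly positive for Box-Cox
        return np.sqrt(np.einsum("ij,jk,ik->i", diff, inv_cov, diff)) + 1e-9

    base, lam = boxcox(md(healthy))               # fit the Box-Cox exponent
    threshold = base.mean() + n_sigma * base.std()
    return boxcox(md(monitored), lmbda=lam) > threshold   # True marks an anomaly
```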

Journal ArticleDOI
TL;DR: CCR provides robust credal classification results with a relatively low computational burden; several experiments are presented at the end of this paper to evaluate and compare the performance of this CCR method with respect to other classification methods.

Journal ArticleDOI
TL;DR: A novel approach for diagnosing incipient faults in analog circuits is proposed, together with a statistical property feature vector composed of range, mean, standard deviation, skewness, kurtosis, entropy, and centroid, from which a near-optimal feature vector is selected for each binary classifier.

Proceedings ArticleDOI
11 Jun 2014
TL;DR: A generalization of the SHADE protocol, called GSHADE, that enables privacy-preserving computation of several distance metrics, including (normalized) Hamming distance, Euclidean distance, Mahalanobis distance, and scalar product.
Abstract: At WAHC'13, Bringer et al. introduced a protocol called SHADE for secure and efficient Hamming distance computation using oblivious transfer only. In this paper, we introduce a generalization of the SHADE protocol, called GSHADE, that enables privacy-preserving computation of several distance metrics, including (normalized) Hamming distance, Euclidean distance, Mahalanobis distance, and scalar product. GSHADE can be used to efficiently compute one-to-many biometric identification for several traits (iris, face, fingerprint) and benefits from recent optimizations of oblivious transfer extensions. GSHADE allows identification against a database of 1000 Eigenfaces in 1.28 seconds and against a database of 10000 IrisCodes in 17.2 seconds which is more than 10 times faster than previous works.

Journal ArticleDOI
TL;DR: The proposed method provides an effective way of extracting urban built-up areas from Landsat series images and could be applied to other tasks.
Abstract: Urban built-up area information is required by various applications. However, urban built-up area extraction using moderate resolution satellite data, such as Landsat series data, is still a challenging task due to significant intra-urban heterogeneity and spectral confusion with other land cover types. In this paper, a new method that combines spectral information and multivariate texture is proposed. The multivariate textures are separately extracted from multispectral data using a multivariate variogram with different distance measures, i.e., Euclidean, Mahalanobis and spectral angle distances. The multivariate textures and the spectral bands are then combined for urban built-up area extraction. Because the urban built-up area is the only target class, a one-class classifier, one-class support vector machine, is used. For comparison, the classical gray-level co-occurrence matrix (GLCM) is also used to extract image texture. The proposed method was evaluated using bi-temporal Landsat TM/ETM+ data of two megacity areas in China. Results demonstrated that the proposed method outperformed the use of spectral information alone and the joint use of the spectral information and the GLCM texture. In particular, the inclusion of multivariate variogram textures with spectral angle distance achieved the best results. The proposed method provides an effective way of extracting urban built-up areas from Landsat series images and could be applicable to other applications.

Journal ArticleDOI
TL;DR: An algorithm with a Bayesian approach based on a Markov-chain Monte Carlo method is proposed to cluster structural responses of bridges into a reduced number of global state conditions, taking into account possible multimodality and heterogeneity of the data distribution.

Journal ArticleDOI
TL;DR: By integrating SVM and a Mahalanobis distance boundary constraint, SVRFMC can not only avoid the explicit modeling of observed data, but can also undertake appropriate smoothing with the consideration of contextual information, thereby exhibiting more universality and validity in the application of HSR image classification.
Abstract: In this paper, a modified conditional random fields (CRFs) classifier, namely the support vector conditional random fields classifier with a Mahalanobis distance boundary constraint (SVRFMC), is proposed to perform the task of classification for high spatial resolution (HSR) remote sensing imagery. In SVRFMC, the CRFs model has the intrinsic ability of incorporating the contextual information in both the observation and labeling fields. Support vector machine (SVM) is set as the spectral term to get a more precise estimation of each pixel's probability of belonging to each possible class. To preserve the spatial details in the classification result, a Mahalanobis distance boundary constraint is considered as the spatial term to undertake appropriate spatial smoothing. By integrating SVM and a Mahalanobis distance boundary constraint, SVRFMC can not only avoid the explicit modeling of observed data, but can also undertake appropriate smoothing with the consideration of contextual information, thereby exhibiting more universality and validity in the application of HSR image classification, especially when the image has a complex land-cover class distribution and the training samples are limited. Three HSR images comprising QuickBird, IKONOS, and HYDICE imagery were utilized to evaluate the performance of the proposed algorithm in comparison to other image classification approaches: noncontextual multiclass SVM, a traditional object-oriented classifier (OOC), an object-oriented classification based on fractal net evolution approach (FNEA) segmentation (OO-FNEA), a simplified CRF model with boundary constraint (BC-CRF), and a recently proposed contextual classifier combining SVM and Markov random fields (Markovian support vector classifier). The experimental results demonstrate that the SVRFMC algorithm is superior to the other methods, providing a satisfactory classification result for HSR imagery, including both multispectral HSR imagery and hyperspectral HSR imagery, even with limited training samples, from both the visualization and quantitative evaluations.

Journal ArticleDOI
TL;DR: This paper reduces the dimensionality of ExHoG using Asymmetric Principal Component Analysis (APCA) for improved quadratic classification and addresses the asymmetry issue in training sets for human detection, where there are far fewer human samples than non-human samples.
Abstract: This paper proposes a quadratic classification approach on the subspace of Extended Histogram of Gradients (ExHoG) for human detection. By investigating the limitations of the Histogram of Gradients (HG) and Histogram of Oriented Gradients (HOG), ExHoG is proposed as a new feature for human detection. ExHoG alleviates the problem of discrimination between a dark object against a bright background and vice versa inherent in HG. It also resolves an issue of HOG whereby gradients of opposite directions in the same cell are mapped into the same histogram bin. We reduce the dimensionality of ExHoG using Asymmetric Principal Component Analysis (APCA) for improved quadratic classification. APCA also addresses the asymmetry issue in training sets for human detection, where there are far fewer human samples than non-human samples. Our proposed approach is tested on three established benchmarking data sets - INRIA, Caltech, and Daimler - using a modified Minimum Mahalanobis distance classifier. Results indicate that the proposed approach outperforms current state-of-the-art human detection methods.
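
A plain minimum Mahalanobis distance classifier for reference; the paper's modified variant operates in the APCA-reduced subspace, which this sketch omits:

```python
import numpy as np

def min_mahalanobis_classify(train_by_class, x):
    """Assign x to the class whose mean is nearest in Mahalanobis
    distance, each class using its own covariance estimate."""
    best_label, best_d2 = None, np.inf
    for label, samples in train_by_class.items():
        mu = samples.mean(axis=0)
        inv_cov = np.linalg.pinv(np.cov(samples, rowvar=False))
        diff = x - mu
        d2 = diff @ inv_cov @ diff
        if d2 < best_d2:
            best_label, best_d2 = label, d2
    return best_label
```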

Journal ArticleDOI
TL;DR: In this article, a novel approach for initializing covariance matrices is proposed, based on the K-means algorithm with Mahalanobis distances, which is commonly used with the Euclidean metric.
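
A sketch of K-means with per-cluster Mahalanobis distances, the building block the proposed initialization rests on (the paper's actual initialization procedure is not reproduced here):

```python
import numpy as np

def kmeans_mahalanobis(X, k, n_iter=50, seed=0):
    """K-means where each cluster keeps its own covariance estimate and
    points are assigned by Mahalanobis rather than Euclidean distance."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    inv_covs = [np.eye(X.shape[1]) for _ in range(k)]   # start Euclidean
    for _ in range(n_iter):
        d2 = np.stack([np.einsum("ij,jk,ik->i", X - c, ic, X - c)
                       for c, ic in zip(centers, inv_covs)])
        labels = d2.argmin(axis=0)
        for j in range(k):
            pts = X[labels == j]
            if len(pts) > X.shape[1]:       # enough points for a covariance estimate
                centers[j] = pts.mean(axis=0)
                inv_covs[j] = np.linalg.pinv(np.cov(pts, rowvar=False))
    return labels, centers, inv_covs        # inv_covs can seed a mixture model
```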

Journal ArticleDOI
TL;DR: The effectiveness of features from a statistics-based local damage detection algorithm called the Influenced Coefficient Based Damage Detection Algorithm (IDDA) is expanded for a more complex structural system.
Abstract: Many current damage detection techniques rely on the skill and experience of a trained inspector and also require a priori knowledge about the structure's properties. However, this study presents the adaptation of several change point analysis techniques and evaluates their performance in civil engineering damage detection. The literature shows different statistical approaches developed for detecting changes in observations across different applications, including structural damage detection. However, despite their importance in damage detection, control charts and statistical frameworks are not properly utilized in this area. On the other hand, most of the existing change point analysis techniques were originally developed for applications in the stock market or in industrial engineering processes; utilizing them in structural damage detection needs adjustments and verification. Therefore, in this article several change point detection methods are evaluated and adjusted for a damage detection scheme. The effectiveness of features from a statistics-based local damage detection algorithm called the Influenced Coefficient Based Damage Detection Algorithm (IDDA) is expanded for a more complex structural system. The statistics used in this study include the univariate Cumulative Sum, Exponentially Weighted Moving Average (EWMA), and Mean Square Error (MSE), and the multivariate Mahalanobis distance and Fisher Criterion. They are used to make control charts that detect and localize the damage by correlating locations of a sensor network with the damage features. A modified MSE statistic, called the ModMSE statistic, is introduced to remove the sensitivity of the MSE statistic to the variance of a data set. The effectiveness of each statistic is analyzed.

Journal ArticleDOI
TL;DR: OBCD was less sensitive to the misregistration effect, and the sensitivity further decreased with an increase in local mean object size, whereas high-spatial resolution images typically have higher spectral variability within neighboring pixels than the relatively low resolution datasets.
Abstract: High-spatial resolution remote sensing imagery provides unique opportunities for detailed characterization and monitoring of landscape dynamics. To better handle such data sets, change detection using the object-based paradigm, i.e., object-based change detection (OBCD), have demonstrated improved performances over the classic pixel-based paradigm. However, image registration remains a critical pre-process, with new challenges arising, because objects in OBCD are of various sizes and shapes. In this study, we quantified the effects of misregistration on OBCD using high-spatial resolution SPOT 5 imagery (5 m) for three types of landscapes dominated by urban, suburban and rural features, representing diverse geographic objects. The experiments were conducted in four steps: (i) Images were purposely shifted to simulate the misregistration effect. (ii) Image differencing change detection was employed to generate difference images with all the image-objects projected to a feature space consisting of both spectral and texture variables. (iii) The changes were extracted using the Mahalanobis distance and a change ratio. (iv) The results were compared to the ‘real’ changes from the image pairs that contained no purposely introduced registration error. A pixel-based change detection method using similar steps was also developed for comparisons. Results indicate that misregistration had a relatively low impact on object size and shape for most areas. When the landscape is comprised of small mean object sizes (e.g., in urban and suburban areas), the mean size of ‘change’ objects was smaller than the mean of all objects and their size discrepancy became larger with the decrease in object size. Compared to the results using the pixel-based paradigm, OBCD was less sensitive to the misregistration effect, and the sensitivity further decreased with an increase in local mean object size. However, high-spatial resolution images typically have higher spectral variability within neighboring pixels than the relatively low resolution datasets. As a result, accurate image registration remains crucial to change detection even if an object-based approach is used.
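
A sketch of steps (ii)-(iii) under stated assumptions: each object is a row of per-object features (spectral plus texture), the Mahalanobis distance is measured from the mean feature difference, and `change_ratio` is an illustrative name for the fraction of objects labelled as change:

```python
import numpy as np

def extract_change_objects(features_t1, features_t2, change_ratio=0.1):
    """Score each object by the Mahalanobis distance of its feature
    difference from the global mean difference, then label the
    top-scoring fraction as 'change' objects."""
    diff = features_t2 - features_t1
    centred = diff - diff.mean(axis=0)
    inv_cov = np.linalg.inv(np.cov(diff, rowvar=False))
    scores = np.einsum("ij,jk,ik->i", centred, inv_cov, centred)
    n_change = max(1, int(round(change_ratio * len(scores))))
    threshold = np.sort(scores)[-n_change]
    return scores >= threshold              # True marks a 'change' object
```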

Journal ArticleDOI
01 Feb 2014
TL;DR: In this paper, the Mahalanobis distance between pairs of multivariate observations is used as a measure of similarity between the observations and the theoretical distribution is derived, and the result is used for judging on the degree of isolation of an observation.
Abstract: The Mahalanobis distance between pairs of multivariate observations is used as a measure of similarity between the observations. The theoretical distribution is derived, and the result is used for judging on the degree of isolation of an observation. In case of spatially dependent data where spatial coordinates are available, different exploratory tools are introduced for studying the degree of isolation of an observation from a fraction of its neighbors, and thus to identify local multivariate outliers.
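
A sketch of the idea, assuming independent Gaussian observations: the difference of two such observations is N(0, 2Σ), so half the squared pairwise Mahalanobis distance is chi-square with p degrees of freedom, which yields a tail probability for judging isolation. The paper derives the exact theoretical distribution; the plain sample covariance is used here for simplicity:

```python
import numpy as np
from scipy.stats import chi2

def pairwise_isolation(X, i, neighbor_idx):
    """Tail probabilities for the pairwise Mahalanobis distances between
    observation i and its neighbors; small values indicate that the pair
    is unusually far apart, i.e., observation i may be a local outlier."""
    p = X.shape[1]
    inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
    probs = []
    for j in neighbor_idx:
        d = X[i] - X[j]
        md2 = d @ inv_cov @ d
        probs.append(chi2.sf(md2 / 2.0, df=p))  # x_i - x_j ~ N(0, 2*Sigma)
    return probs
```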

Journal ArticleDOI
TL;DR: Results demonstrate that the proposed approach may be conveniently used in real-life applications, since cepstral features outperform AR coefficients when dealing with experimental data modeled to mimic the operational and environmental variability.

Proceedings Article
21 Jun 2014
TL;DR: An efficient algorithm to learn a Mahalanobis distance metric by directly optimizing a ranking loss is developed, which significantly outperforms alternative methods on several real-world tasks, and can scale to large and high-dimensional data.
Abstract: We develop an efficient algorithm to learn a Mahalanobis distance metric by directly optimizing a ranking loss. Our approach focuses on optimizing the top of the induced rankings, which is desirable in tasks such as visualization and nearest-neighbor retrieval. We further develop and justify a simple technique to reduce training time significantly with minimal impact on performance. Our proposed method significantly outperforms alternative methods on several real-world tasks, and can scale to large and high-dimensional data.
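
A much-simplified stand-in for this approach: stochastic gradient descent on a projection L (so the metric M = L^T L stays positive semi-definite) with a triplet hinge loss. The actual method optimizes a top-of-the-ranking loss rather than this plain triplet surrogate:

```python
import numpy as np

def triplet_metric_sgd(X, triplets, d_out=32, lr=0.01, margin=1.0,
                       epochs=10, seed=0):
    """Learn L by SGD so that each anchor is closer to its positive than
    to its negative by the given margin, in the projected space."""
    rng = np.random.default_rng(seed)
    L = 0.1 * rng.standard_normal((d_out, X.shape[1]))
    for _ in range(epochs):
        for a, p, n in triplets:                 # anchor, positive, negative indices
            u, v = X[a] - X[p], X[a] - X[n]
            dp, dn = L @ u, L @ v
            if dp @ dp - dn @ dn + margin > 0:   # hinge is active
                L -= lr * 2 * (np.outer(dp, u) - np.outer(dn, v))
    return L                                     # learned metric: M = L.T @ L
```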

Journal ArticleDOI
TL;DR: The algorithm is a generalization of the previously proposed incremental algorithm and successively finds optimal partitions with k = 2, 3, …

Journal ArticleDOI
TL;DR: In this paper, a methodology is proposed for the online monitoring and classification of the health status of rolling element bearings into various damage stages for a naturally progressing defect; it is successfully verified on vibration data acquired from naturally induced and progressed defect experiments.