
Showing papers on "Euclidean distance published in 2018"


Journal ArticleDOI
TL;DR: A novel objective function is proposed that jointly optimizes similarity metric learning, local positive mining and robust deep feature embedding for person re-identification; a novel sampling strategy mines suitable positives within a local range to improve the deep embedding in the context of large intra-class variations.

196 citations


Journal ArticleDOI
TL;DR: Overall, this work strongly advises the use of multivariate noise normalisation as a general preprocessing step, recommends LDA, SVM and WeiRD as classifiers for decoding, and highlights the cross‐validated Euclidean distance as a reliable and unbiased default choice for RSA.

114 citations


Journal ArticleDOI
TL;DR: A robust state estimation algorithm against FDI attacks is presented, and it is shown that the proposed method can detect malicious attacks that are undetectable by traditional bad data detection (BDD) methods.
Abstract: The evolution of traditional energy networks toward smart grids increases security vulnerabilities in the power system infrastructure. State estimation plays an essential role in the efficient and reliable operation of power systems, so its security is a major concern. Coordinated cyber-attacks, including the false data injection (FDI) attack, can manipulate smart meters to present serious threats to grid operations. In this paper, a robust state estimation algorithm against FDI attacks is presented. As a solution to mitigate such attacks, a new analytical technique is proposed based on Markov chain theory and the Euclidean distance metric. Using historical data from a set of trusted buses, a Markov chain model of the system's normal operation is formulated. The estimated states are analyzed by calculating their Euclidean distance from the Markov model. States that match a lower probability are considered attacked states. It is shown that the proposed method is able to detect malicious attacks that are undetectable by traditional bad data detection (BDD) methods. The proposed robust dynamic state estimation algorithm is built on a Kalman filter and implemented on the massively parallel architecture of a graphics processing unit using fine-grained parallel programming techniques. Numerical simulations demonstrate the efficiency and accuracy of the proposed mechanism.
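
To make the distance-based check concrete, here is a minimal sketch of the flagging step only, assuming a precomputed set of normal-operation reference states; the paper's Markov-chain modelling, Kalman filtering and GPU implementation are not reproduced, and all names below are illustrative.

```python
import numpy as np

def detect_fdi(estimated_states, reference_states, threshold):
    """Flag state snapshots whose Euclidean distance to the nearest
    normal-operation reference exceeds a threshold learned from clean data."""
    flags = []
    for x in estimated_states:
        d = np.linalg.norm(reference_states - x, axis=1).min()
        flags.append(d > threshold)
    return np.array(flags)

# toy check: a clearly perturbed snapshot is flagged
rng = np.random.default_rng(0)
ref = rng.normal(1.0, 0.01, size=(50, 8))       # historical states of trusted buses
est = np.vstack([ref[:3], ref[0] + 0.5])        # last row mimics injected data
print(detect_fdi(est, ref, threshold=0.1))      # [False False False  True]
```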

110 citations


Journal ArticleDOI
TL;DR: This work proposes a weighted undersampling (WU) scheme for SVM based on space geometry distance, producing an improved algorithm named WU-SVM, which outperforms the state-of-the-art methods in terms of three popular metrics for imbalanced classification, i.e., area under the curve, F-Measure, and G-Mean.
Abstract: A support vector machine (SVM) plays a prominent role in classic machine learning, especially classification and regression. Through its structural risk minimization, it has enjoyed a good reputation in effectively reducing overfitting, avoiding the curse of dimensionality, and not falling into local minima. Nevertheless, existing SVMs do not perform well when facing class imbalance and large-scale samples. Undersampling is a plausible way to address imbalanced problems, but it suffers from soaring computational complexity and reduced accuracy because of its many iterations and random sampling process. To improve classification performance on imbalanced data, this work proposes a weighted undersampling (WU) scheme for SVM based on space geometry distance, and thus produces an improved algorithm named WU-SVM. In WU-SVM, majority samples are grouped into sub-regions (SRs) and assigned different weights according to their Euclidean distance to the hyperplane. The samples in an SR with a higher weight have more chance of being sampled and put to use in each learning iteration, so as to retain the data distribution information of the original data sets as much as possible. Comprehensive experiments are performed to test WU-SVM on 21 binary-class and six multiclass publicly available data sets. The results show that it clearly outperforms the state-of-the-art methods in terms of three popular metrics for imbalanced classification, i.e., area under the curve, F-Measure, and G-Mean.
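
A minimal sketch of one weighting-and-sampling round follows, assuming a linear SVM and per-sample (rather than the paper's sub-region) weights derived from Euclidean distance to the hyperplane; the weighting direction shown is an assumption, not the paper's exact scheme.

```python
import numpy as np
from sklearn.svm import LinearSVC

def weighted_undersample(X_maj, X_min, seed=0):
    """One illustrative round: fit a rough separating hyperplane, weight each
    majority sample by its Euclidean distance to that hyperplane, and draw a
    majority subset of minority size with probability proportional to the
    weights (boundary-near samples favoured here)."""
    rng = np.random.default_rng(seed)
    X = np.vstack([X_maj, X_min])
    y = np.hstack([np.zeros(len(X_maj)), np.ones(len(X_min))])
    clf = LinearSVC(dual=False).fit(X, y)
    dist = np.abs(clf.decision_function(X_maj)) / np.linalg.norm(clf.coef_)
    weights = 1.0 / (1.0 + dist)                 # closer to the hyperplane -> larger weight
    idx = rng.choice(len(X_maj), size=len(X_min), replace=False,
                     p=weights / weights.sum())
    return np.vstack([X_maj[idx], X_min]), np.hstack([np.zeros(len(X_min)),
                                                      np.ones(len(X_min))])
```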

109 citations


Journal ArticleDOI
TL;DR: A CSI amplitude fingerprinting-based localization algorithm for Narrowband Internet of Things systems is proposed, in which a centroid algorithm based on a CSI propagation model is optimized; the algorithm can effectively reduce positioning error.
Abstract: With the proliferation of mobile devices, indoor fingerprinting-based localization has caught considerable interest on account of its high precision. Meanwhile, channel state information (CSI), as a promising positioning characteristic, has been gradually adopted as an enhanced channel metric in indoor positioning schemes. In this paper, we propose a CSI amplitude fingerprinting-based localization algorithm in Narrowband Internet of Things system, in which we optimize a centroid algorithm based on CSI propagation model. In particular, in the fingerprint matching, we utilize the method of multidimensional scaling (MDS) analysis to calculate the Euclidean distance and time-reversal resonating strength between the target point and the reference points and then employ the ${K}$ -nearest neighbor (KNN) algorithm for location estimation. By conjugate gradient method, moreover, we optimize the localization error of triangular centroid algorithm and combine the positioning result with MDS and KNN’s estimated position to get the final estimated position. Experiment results show that compared to some existing localization methods, our proposed algorithm can effectively reduce positioning error.
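
The fingerprint-matching core can be sketched as follows; this covers only the Euclidean-distance/KNN step and omits the paper's MDS analysis, time-reversal resonating strength and conjugate-gradient refinement. Array shapes and names are illustrative.

```python
import numpy as np

def knn_locate(target_csi, ref_csi, ref_pos, k=3):
    """Euclidean distance between the target's CSI amplitude vector and each
    reference-point fingerprint, followed by a plain K-nearest-neighbor
    average of the reference coordinates."""
    d = np.linalg.norm(ref_csi - target_csi, axis=1)
    nearest = np.argsort(d)[:k]
    return ref_pos[nearest].mean(axis=0)

# ref_csi: (N, n_subcarriers) amplitude fingerprints, ref_pos: (N, 2) coordinates
```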

104 citations


Journal ArticleDOI
TL;DR: Two feature extraction methods are proposed that perform well compared to previous methods such as filter banks and the wavelet transform; the performance of the second method is significantly better than that of the first.

89 citations


Journal ArticleDOI
TL;DR: Experimental implementation of memristor crossbar hardware systems that allow direct comparison of Euclidean distances without normalizing the weights; the system enables the unsupervised K-means clustering algorithm through online learning and produces high classification accuracy on the standard IRIS data set.
Abstract: Memristor-based neuromorphic networks have been actively studied as a promising candidate to overcome the von-Neumann bottleneck in future computing applications. Several recent studies have demonstrated the memristor network's capability to perform supervised as well as unsupervised learning, where features inherent in the input are identified and analyzed by comparison with features stored in the memristor network. However, even though in some cases the stored feature vectors can be normalized so that the winning neurons can be found directly by the (input) vector–(stored) vector dot-products, in many other cases normalization of the feature vectors is not trivial or practically feasible, and calculation of the actual Euclidean distance between the input vector and the stored vector is required. Here we report an experimental implementation of memristor crossbar hardware systems that allow direct comparison of the Euclidean distances without normalizing the weights. The experimental system enables the unsupervised K-means clustering algorithm through online learning, and produces high classification accuracy for the standard IRIS data set.
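
Why unnormalized stored vectors force a true distance computation can be seen from standard algebra (background, not a result of the paper):

$$\|\mathbf{x}-\mathbf{w}\|^{2}=\|\mathbf{x}\|^{2}-2\,\mathbf{x}\cdot\mathbf{w}+\|\mathbf{w}\|^{2},
\qquad
\arg\min_{\mathbf{w}}\|\mathbf{x}-\mathbf{w}\|^{2}=\arg\max_{\mathbf{w}}\Big(\mathbf{x}\cdot\mathbf{w}-\tfrac{1}{2}\|\mathbf{w}\|^{2}\Big).$$

Only when every stored vector $\mathbf{w}$ has the same norm does the winning neuron reduce to the largest dot product; otherwise the $\|\mathbf{w}\|^{2}$ term matters, which is what the reported crossbar evaluates directly in hardware.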

86 citations


Journal ArticleDOI
TL;DR: The study demonstrated that DSM with EDM provided results comparable to RK and to the contextual multiscale methods, and thus it enhances the DSM toolbox.
Abstract: This study introduces a hybrid spatial modelling framework, which accounts for spatial non‐stationarity, spatial autocorrelation and environmental correlation. A set of geographic spatially autocorrelated Euclidean distance fields (EDF) was used to provide additional spatially relevant predictors to the environmental covariates commonly used for mapping. The approach was used in combination with machine‐learning methods, so we called the method Euclidean distance fields in machine‐learning (EDM). This method provides advantages over other prediction methods that integrate spatial dependence and state factor models, for example, regression kriging (RK) and geographically weighted regression (GWR). We used seven generic (EDFs) and several commonly used predictors with different regression algorithms in two digital soil mapping (DSM) case studies and compared the results to those achieved with ordinary kriging (OK), RK and GWR as well as the multiscale methods ConMap, ConStat and contextual spatial modelling (CSM). The algorithms tested in EDM were a linear model, bagged multivariate adaptive regression splines (MARS), radial basis function support vector machines (SVM), Cubist, random forest (RF) and a neural network (NN) ensemble. The study demonstrated that DSM with EDM provided results comparable to RK and to the contextual multiscale methods. Best results were obtained with Cubist, RF and bagged MARS. Because the tree‐based approaches produce discontinuous response surfaces, the resulting maps can show visible artefacts when only the EDFs are used as predictors (i.e. no additional environmental covariates). Artefacts were not obvious for SVM and NN and to a lesser extent bagged MARS. An advantage of EDM is that it accounts for spatial non‐stationarity and spatial autocorrelation when using a small set of additional predictors. The EDM is a new method that provides a practical alternative to more conventional spatial modelling and thus it enhances the DSM toolbox. HIGHLIGHTS: We present a hybrid mapping approach that accounts for spatial dependence and environmental correlation. The approach is based on a set of generic Euclidean distance fields (EDF). Our Euclidean distance fields in machine learning (EDM) can model non‐stationarity and spatial autocorrelation. The EDM approach eliminates the need for kriging of residuals and produces accurate digital soil maps.
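
As a rough illustration of the idea, the sketch below builds Euclidean distance fields to a handful of anchor locations and appends them to the covariate matrix before fitting one of the tested learners; the anchor choice and the random forest settings are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def edf_predictors(xy, anchors):
    """Euclidean distance fields: for every sample/prediction location,
    the distance to a small set of fixed anchor points (e.g. the corners
    and centre of the mapping extent), used as extra predictor columns."""
    return np.linalg.norm(xy[:, None, :] - anchors[None, :, :], axis=2)

# xy: (n, 2) sample coordinates, env: (n, p) environmental covariates, z: soil property
# anchors = np.array([[x0, y0], [x0, y1], [x1, y0], [x1, y1], [(x0+x1)/2, (y0+y1)/2]])
# X = np.hstack([env, edf_predictors(xy, anchors)])
# model = RandomForestRegressor(n_estimators=500).fit(X, z)
```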

84 citations


Journal ArticleDOI
TL;DR: An a posteriori error indicator, which corresponds to the dual norm of the residual associated with the time-averaged momentum equation, is presented, and it is demonstrated that the error indicator is highly correlated with the error in mean flow prediction and can be efficiently computed through an offline/online strategy.

72 citations


Journal ArticleDOI
TL;DR: A novel metric learning framework is proposed that learns a distance metric across a Euclidean space and a Riemannian manifold, fusing the average appearance and pattern variation of faces within one video to improve face recognition from videos.
Abstract: Riemannian manifolds have been widely employed for video representations in visual classification tasks including video-based face recognition. The success mainly derives from learning a discriminant Riemannian metric which encodes the non-linear geometry of the underlying Riemannian manifolds. In this paper, we propose a novel metric learning framework to learn a distance metric across a Euclidean space and a Riemannian manifold to fuse average appearance and pattern variation of faces within one video. The proposed metric learning framework can handle three typical tasks of video-based face recognition: Video-to-Still, Still-to-Video and Video-to-Video settings. To accomplish this new framework, by exploiting typical Riemannian geometries for kernel embedding, we map the source Euclidean space and Riemannian manifold into a common Euclidean subspace, each through a corresponding high-dimensional Reproducing Kernel Hilbert Space (RKHS). With this mapping, the problem of learning a cross-view metric between the two source heterogeneous spaces can be converted to learning a single-view Euclidean distance metric in the target common Euclidean space. By learning information on heterogeneous data with the shared label, the discriminant metric in the common space improves face recognition from videos. Extensive experiments on four challenging video face databases demonstrate that the proposed framework has a clear advantage over the state-of-the-art methods in the three classical video-based face recognition scenarios.

69 citations


Journal ArticleDOI
TL;DR: The experimental results indicate that this novel feature-reduction-based fault diagnosis approach for rolling bearings, which uses Global-Local Margin Fisher Analysis, can comprehensively extract global-local discriminant fault information, making the extracted fault features more sensitive and significantly improving the classification accuracy rate.

Journal ArticleDOI
TL;DR: Different clustering approaches are studied from a theoretical perspective to understand their relevance in the context of massive data sets, and are tested empirically on artificial benchmarks to highlight their strengths and weaknesses.

Journal ArticleDOI
26 Feb 2018-Sensors
TL;DR: Experimental results on synthetic and recorded HSI datasets show that the proposed method outperforms the classic global Reed-Xiaoli detector (RXD), the local RX detector (LRXD) and the state-of-the-art Collaborative Representation detector (CRD).
Abstract: Hyperspectral image (HSI) based detection has attracted considerable attention recently in agriculture, environmental protection and military applications, as different wavelengths of light can be advantageously used to discriminate different types of objects. Unfortunately, estimating the background distribution and detecting interesting local objects is not straightforward, and anomaly detectors may give false alarms. In this paper, a Deep Belief Network (DBN) based anomaly detector is proposed. The high-level features and reconstruction errors are learned through the network in a manner which is not affected by a previous background distribution assumption. To reduce contamination by local anomalies, adaptive weights are constructed from reconstruction errors and statistical information. By using the code image which is generated during the inference of the DBN and modified by adaptively updated weights, a local Euclidean distance between the pixels under test and their neighboring pixels is used to determine the anomaly targets. Experimental results on synthetic and recorded HSI datasets show that the proposed method outperforms the classic global Reed-Xiaoli detector (RXD), the local RX detector (LRXD) and the state-of-the-art Collaborative Representation detector (CRD).
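
The local-distance step can be sketched as follows, assuming the DBN code image is already available as an array; the adaptive reconstruction-error weights described above are omitted and the window size is illustrative.

```python
import numpy as np

def local_distance_map(code_img, win=5):
    """For each pixel, the mean Euclidean distance between its code vector and
    the codes of its neighbours in a win x win window; large values mark
    anomaly candidates."""
    h, w, d = code_img.shape
    r = win // 2
    padded = np.pad(code_img, ((r, r), (r, r), (0, 0)), mode="reflect")
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            block = padded[i:i + win, j:j + win].reshape(-1, d)
            out[i, j] = np.linalg.norm(block - code_img[i, j], axis=1).mean()
    return out
```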

Journal ArticleDOI
TL;DR: This paper proposes a novel nonnegative factorization method, called structurally incoherent low-rank NMF (SILR-NMF), in which structural incoherence and low-rank properties of the data are jointly considered for image classification.
Abstract: As a popular dimensionality reduction method, nonnegative matrix factorization (NMF) has been widely used in image classification. However, the NMF does not consider discriminant information from the data themselves. In addition, most NMF-based methods use the Euclidean distance as a metric, which is sensitive to noise or outliers in data. To solve these problems, in this paper, we introduce structural incoherence and low-rank to NMF and propose a novel nonnegative factorization method, called structurally incoherent low-rank NMF (SILR-NMF), in which we jointly consider structural incoherence and low-rank properties of data for image classification. For the corrupted data, we use the $L_{1}$ norm as a constraint to ensure the noise is sparse. SILR-NMF learns a clean data matrix from the noisy data by low-rank learning. As a result, the SILR-NMF can capture the global structure information of the data, which is more robust than local information to noise. By introducing the structural incoherence of the learned clean data, SILR-NMF ensures the clean data points from different classes are as independent as possible. To verify the performance of the proposed method, extensive experiments are conducted on six image databases. The experimental results demonstrate that our proposed method has substantial gain over existing NMF approaches.

Journal ArticleDOI
TL;DR: A new partition-based method for categorical data, called Manhattan Frequency k-Means (MFk-M), is detailed; it aims to convert the initial categorical data into numeric values using the relative frequency of each modality in the attributes.

Journal ArticleDOI
TL;DR: A novel similarity (or dissimilarity) measure, SRIHASS, is introduced to find the similarity between temporal associations, and an algorithm for time profiled association mining called Z-SPAMINE, primarily inspired by SPAMINE, is given.
Abstract: Mining and visualization of time profiled temporal associations is an important research problem that is not addressed in a wider perspective and is understudied. Visual analysis of time profiled temporal associations helps to better understand hidden seasonal, emerging, and diminishing temporal trends. The pioneering work by Yoo and Shashi Sekhar termed as SPAMINE applied the Euclidean distance measure. Following their research, subsequent studies were only restricted to the use of Euclidean distance. However, with an increase in the number of time slots, the dimensionality of a prevalence time sequence of temporal association, also increases, and this high dimensionality makes the Euclidean distance not suitable for the higher dimensions. Some of our previous studies, proposed Gaussian based dissimilarity measures and prevalence estimation approaches to discover time profiled temporal associations. To the best of our knowledge, there is no research that has addressed a similarity measure which is based on the standard score and normal probability to find the similarity between temporal patterns in z-space and retains monotonicity. Our research is pioneering work in this direction. This research has three contributions. First, we introduce a novel similarity (or dissimilarity) measure, SRIHASS to find the similarity between temporal associations. The basic idea behind the design of dissimilarity measure is to transform support values of temporal associations onto z-space and then obtain probability sequences of temporal associations using a normal distribution chart. The dissimilarity measure uses these probability sequences to estimate the similarity between patterns in z-space. The second contribution is the prevalence bound estimation approach. Finally, we give the algorithm for time profiled associating mining called Z-SPAMINE that is primarily inspired from SPAMINE. Experiment results prove that our approach, Z-SPAMINE is computationally more efficient and scalable compared to existing approaches such as Naive, Sequential and SPAMINE that applies the Euclidean distance.
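
A minimal sketch of the z-space idea described above, using the standard normal CDF to turn support sequences into probability sequences; the exact SRIHASS formula and its monotonicity machinery are in the paper, so treat this only as an illustration.

```python
import numpy as np
from scipy.stats import norm

def probability_sequence(support_seq):
    """Map a temporal association's support values to z-space and then to
    probabilities via the standard normal CDF."""
    s = np.asarray(support_seq, dtype=float)
    z = (s - s.mean()) / (s.std() + 1e-12)
    return norm.cdf(z)

def dissimilarity(seq_a, seq_b):
    """Compare two temporal associations through their probability sequences."""
    return np.abs(probability_sequence(seq_a) - probability_sequence(seq_b)).mean()
```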

Journal ArticleDOI
TL;DR: Each rather short motion is encoded into a compact visual representation from which a highly descriptive 4,096-dimensional feature vector is extracted using a fine-tuned deep convolutional neural network; the fixed-size features are compared by the Euclidean distance, which enables efficient motion indexing by any metric-based index structure.
Abstract: Motion capture data describe human movements in the form of spatio-temporal trajectories of skeleton joints. Intelligent management of such complex data is a challenging task for computers which requires an effective concept of motion similarity. However, evaluating the pair-wise similarity is a difficult problem as a single action can be performed by various actors in different ways, speeds or starting positions. Recent methods usually model the motion similarity by comparing customized features using distance-based functions or specialized machine-learning classifiers. By combining both these approaches, we transform the problem of comparing motions of variable sizes into the problem of comparing fixed-size vectors. Specifically, each rather-short motion is encoded into a compact visual representation from which a highly descriptive 4,096-dimensional feature vector is extracted using a fine-tuned deep convolutional neural network. The advantage is that the fixed-size features are compared by the Euclidean distance which enables efficient motion indexing by any metric-based index structure. Another advantage of the proposed approach is its tolerance towards an imprecise action segmentation, the variance in movement speed, and a lower data quality. All these properties together bring new possibilities for effective and efficient large-scale retrieval.

Journal ArticleDOI
TL;DR: A two-stage localization scheme for copy-move forgery detection (CMFD) is proposed; the method performs better on public benchmark databases than other state-of-the-art CMFD schemes.

Journal ArticleDOI
TL;DR: A combined score is created that incorporates information from 3D electron microscopy maps as well as crosslinking; it achieves, on average, better results than either information type alone and demonstrates the potential of integrative modeling with XL-MS and low-resolution cryoelectron microscopy.

Posted Content
TL;DR: Wasserstein elliptical embeddings are presented, which consist in embedding objects as elliptical probability distributions, namely distributions whose densities have elliptical level sets, and are shown to be more intuitive and better behaved numerically than the alternative choice of Gaussian embeddings with the Kullback-Leibler divergence.
Abstract: Embedding complex objects as vectors in low dimensional spaces is a longstanding problem in machine learning. We propose in this work an extension of that approach, which consists in embedding objects as elliptical probability distributions, namely distributions whose densities have elliptical level sets. We endow these measures with the 2-Wasserstein metric, with two important benefits: (i) For such measures, the squared 2-Wasserstein metric has a closed form, equal to a weighted sum of the squared Euclidean distance between means and the squared Bures metric between covariance matrices. The latter is a Riemannian metric between positive semi-definite matrices, which turns out to be Euclidean on a suitable factor representation of such matrices, which is valid on the entire geodesic between these matrices. (ii) The 2-Wasserstein distance boils down to the usual Euclidean metric when comparing Diracs, and therefore provides a natural framework to extend point embeddings. We show that for these reasons Wasserstein elliptical embeddings are more intuitive and yield tools that are better behaved numerically than the alternative choice of Gaussian embeddings with the Kullback-Leibler divergence. In particular, and unlike previous work based on the KL geometry, we learn elliptical distributions that are not necessarily diagonal. We demonstrate the advantages of elliptical embeddings by using them for visualization, to compute embeddings of words, and to reflect entailment or hypernymy.
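
For reference, the closed form referred to in (i) is, in the Gaussian special case, the standard expression below; the paper's weighted variant for general elliptical families has the same structure.

$$W_2^{2}(\mu_1,\mu_2)=\|\mathbf{m}_1-\mathbf{m}_2\|_2^{2}+\mathfrak{B}^{2}(\mathbf{A}_1,\mathbf{A}_2),\qquad
\mathfrak{B}^{2}(\mathbf{A}_1,\mathbf{A}_2)=\operatorname{tr}\!\Big(\mathbf{A}_1+\mathbf{A}_2-2\big(\mathbf{A}_1^{1/2}\mathbf{A}_2\,\mathbf{A}_1^{1/2}\big)^{1/2}\Big),$$

where $\mathbf{m}_i$ are the means and $\mathbf{A}_i$ the covariance matrices. When both covariances vanish, the Bures term disappears and $W_2$ reduces to the plain Euclidean distance between the means, which is point (ii).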

Journal ArticleDOI
TL;DR: The main contribution of this paper is to solve the optimization of cluster combination by minimizing the proposed energy function and to automatically extract nonphotosynthetic components through a hierarchical clustering process.

Posted Content
TL;DR: Zhang et al. as mentioned in this paper replaced the Euclidean distance with the cosine similarity to better utilize the $L2$-normalization, which is able to attenuate the curse of dimensionality.
Abstract: Deep distance metric learning (DDML), which is proposed to learn image similarity metrics in an end-to-end manner based on the convolution neural network, has achieved encouraging results in many computer vision tasks. $L2$-normalization in the embedding space has been used to improve the performance of several DDML methods. However, the commonly used Euclidean distance is no longer an accurate metric for $L2$-normalized embedding space, i.e., a hyper-sphere. Another challenge of current DDML methods is that their loss functions are usually based on rigid data formats, such as the triplet tuple. Thus, an extra process is needed to prepare data in specific formats. In addition, their losses are obtained from a limited number of samples, which leads to a lack of the global view of the embedding space. In this paper, we replace the Euclidean distance with the cosine similarity to better utilize the $L2$-normalization, which is able to attenuate the curse of dimensionality. More specifically, a novel loss function based on the von Mises-Fisher distribution is proposed to learn a compact hyper-spherical embedding space. Moreover, a new efficient learning algorithm is developed to better capture the global structure of the embedding space. Experiments for both classification and retrieval tasks on several standard datasets show that our method achieves state-of-the-art performance with a simpler training procedure. Furthermore, we demonstrate that, even with a small number of convolutional layers, our model can still obtain significantly better classification performance than the widely used softmax loss.
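
The relationship between the two metrics on an $L2$-normalized space can be written out explicitly; the identity below is standard linear algebra rather than a result of the paper. For unit-norm embeddings $\mathbf{x}$ and $\mathbf{y}$,

$$\|\mathbf{x}-\mathbf{y}\|_2^{2}=\|\mathbf{x}\|_2^{2}+\|\mathbf{y}\|_2^{2}-2\,\mathbf{x}^{\top}\mathbf{y}=2-2\cos(\mathbf{x},\mathbf{y}),$$

so on the hypersphere the squared Euclidean distance is a monotone function of the cosine similarity; working directly with the angular quantity makes the spherical geometry explicit, which is what motivates the von Mises-Fisher loss described above.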

Journal ArticleDOI
TL;DR: In this article, the authors focus on one very general definition and argue that it is incompatible with the requirement of preserving the field equations and the symmetries at global level: in some cases the Euclidean metric cannot be defined on the original Lorentzian manifold but only on a submanifold.
Abstract: There are various ways of defining the Wick rotation in a gravitational context. There are good arguments to view it as an analytic continuation of the metric, instead of the coordinates. We focus on one very general definition and argue that it is incompatible with the requirement of preserving the field equations and the symmetries at global level: in some cases the Euclidean metric cannot be defined on the original Lorentzian manifold but only on a submanifold. This phenomenon is related to the existence of horizons, as illustrated in the cases of the de Sitter and Schwarzschild metrics.

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a structured graph Laplacian embedding algorithm, which can formulate such structured distance relationships into the graph Laplacian form.

Book ChapterDOI
08 Sep 2018
TL;DR: This work introduces a novel approach named Highly-economized Scalable Image Clustering (HSIC) that radically surpasses conventional image clustering methods via binary compression and intuitively unifies binary representation learning and efficient binary cluster structure learning into a joint framework.
Abstract: How to economically cluster large-scale multi-view images is a long-standing problem in computer vision. To tackle this challenge, we introduce a novel approach named Highly-economized Scalable Image Clustering (HSIC) that radically surpasses conventional image clustering methods via binary compression. We intuitively unify the binary representation learning and efficient binary cluster structure learning into a joint framework. In particular, common binary representations are learned by exploiting both sharable and individual information across multiple views to capture their underlying correlations. Meanwhile, cluster assignment with robust binary centroids is also performed via effective discrete optimization under \(\ell _{21}\)-norm constraint. By this means, heavy continuous-valued Euclidean distance computations can be successfully reduced by efficient binary XOR operations during the clustering procedure. To our best knowledge, HSIC is the first binary clustering work specifically designed for scalable multi-view image clustering. Extensive experimental results on four large-scale image datasets show that HSIC consistently outperforms the state-of-the-art approaches, whilst significantly reducing computational time and memory footprint.
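
The XOR substitution mentioned above can be illustrated in a few lines, assuming binary codes packed into machine words; this is a generic Hamming-distance routine, not the authors' implementation.

```python
def hamming(a: int, b: int) -> int:
    """Distance between two binary codes stored as Python ints: a single XOR
    followed by a popcount replaces a full Euclidean distance computation."""
    return bin(a ^ b).count("1")

# e.g. short codes for an image and a binary centroid
print(hamming(0b1011_0010, 0b1001_0110))   # -> 2
```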

Journal ArticleDOI
TL;DR: A distance is proposed that combines the Minkowski and Chebyshev distances and can be seen as an intermediary distance that not only achieves efficient run times in neighbourhood iteration tasks in $\mathbb{Z}^2$, but also obtains good accuracies when coupled with the k-Nearest Neighbours (k-NN) classifier.

Journal ArticleDOI
TL;DR: A weighted joint nearest neighbor and sparse representation method is proposed in this paper, which first introduces a Gaussian weighting function into the joint region of test pixels so as to obtain the weighted joint Euclidean distance.
Abstract: The $k$-nearest neighbor ($k$-NN) method relies on Euclidean distance as a classification measure to obtain the labels of the test samples. Recently, many studies show that the joint region of test samples can make full use of the spatial information of a hyperspectral image. However, the traditional joint $k$-NN algorithm holds that the weight of each test sample in a local region is identical, which is not reasonable, since each test sample may have different importance and distribution. To solve this problem, a weighted joint nearest neighbor and sparse representation method is proposed in this paper, which consists of the following steps: first, a Gaussian weighting function is introduced into the joint region of test pixels so as to obtain the weighted joint Euclidean distance. Next, the sparse representation-based method is adopted to obtain the representation residuals. Finally, a decision function is applied to achieve the balance between the weighted joint Euclidean distance and the residual of the sparse representation. Experiments performed on the four real HSI datasets have demonstrated that the proposed methods can achieve better performance than several previous methods.
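
A minimal sketch of a Gaussian-weighted joint Euclidean distance between a local block of test pixels and one training pixel; the spatial-offset weighting shown is an assumption about the weighting function, and the sparse-representation residual and decision function are omitted.

```python
import numpy as np

def weighted_joint_distance(test_block, train_pixel, sigma=1.0):
    """Each test pixel in the local block is weighted by a Gaussian of its
    spatial offset from the block centre, so central pixels dominate the
    joint Euclidean distance to the training pixel."""
    win, _, bands = test_block.shape
    c = win // 2
    yy, xx = np.mgrid[0:win, 0:win]
    w = np.exp(-((yy - c) ** 2 + (xx - c) ** 2) / (2 * sigma ** 2))
    w /= w.sum()
    d = np.linalg.norm(test_block - train_pixel, axis=2)   # per-pixel spectral distance
    return float((w * d).sum())
```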

Journal ArticleDOI
TL;DR: A simple and effective graph construction method is utilized to construct the graph over data lying on multiple data manifolds, and a novel regularization term on sample-wise margins is introduced into the objective function, which enables the proposed method to fully utilize the input data structure and the label information for classification.
Abstract: Graph-based semi-supervised learning (SSL) has attracted great attention over the past decade. However, there are still several open problems in this area, including: 1) how to construct an effective graph over data with complex distribution and 2) how to define and effectively use pair-wise similarity for robust label propagation. In this paper, we utilize a simple and effective graph construction method to construct the graph over data lying on multiple data manifolds. The method can guarantee the connectiveness between pair-wise data points. Then, the global pair-wise data similarity is naturally characterized by geodesic distance-based joint probability, where the geodesic distance is approximated by the graph distance. The new data similarity is much more effective than previous Euclidean distance-based similarities. To apply the data structure for robust label propagation, Kullback–Leibler divergence is utilized to measure the inconsistency between the input pair-wise similarity and the output similarity. In order to further consider intraclass and interclass variances, a novel regularization term on sample-wise margins is introduced to the objective function. This enables the proposed method to fully utilize the input data structure and the label information for classification. An efficient optimization method and the convergence analysis have been proposed for our problem. Besides, the out-of-sample extension is discussed and addressed. Comparisons with the state-of-the-art SSL methods on image classification tasks have been presented to show the effectiveness of the proposed method.
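
The geodesic-by-graph-distance approximation mentioned above can be sketched as follows, using a k-nearest-neighbour graph with Euclidean edge lengths; the paper's graph construction (which guarantees connectedness between pair-wise points) differs from this plain k-NN version.

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def graph_geodesic_distances(X, k=5):
    """Approximate geodesic distances by shortest paths on a k-NN graph whose
    edges carry Euclidean lengths; points on the same manifold get small
    values even when their straight-line Euclidean distance is misleading."""
    D = cdist(X, X)                                   # pairwise Euclidean distances
    W = np.full_like(D, np.inf)                       # inf = no edge
    idx = np.argsort(D, axis=1)[:, 1:k + 1]           # k nearest neighbours per point
    rows = np.repeat(np.arange(len(X)), k)
    W[rows, idx.ravel()] = D[rows, idx.ravel()]
    W = np.minimum(W, W.T)                            # symmetrize
    return shortest_path(W, directed=False)
```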

Journal ArticleDOI
TL;DR: The vector pointing in the direction that minimizes the Euclidean distance is shown to be optimal for the directional distance function in data envelopment analysis.
Abstract: This paper is concerned with optimal directions in the directional distance function in data envelopment analysis. It is shown that the vector pointing in the direction that minimizes the Euclidean...

Journal ArticleDOI
TL;DR: Two well-known classifiers (LDA and SVM) are investigated, and experimental results demonstrate that the proposed method significantly outperforms existing methods in terms of classification error.
Abstract: This paper presents a novel algorithm (CVSTSCSP) for determining discriminative features from an optimal combination of temporal, spectral and spatial information for motor imagery brain computer interfaces. The proposed method involves four phases. In the first phase, the EEG signal is segmented into overlapping time segments and bandpass filtered through a frequency filter bank of variable size subbands. In the next phase, features are extracted from the segmented and filtered data using the stationary common spatial pattern (SCSP) technique that can handle the non-stationarity and artifacts of the EEG signal. A univariate feature selection method is used to obtain a relevant subset of features in the third phase. In the final phase, the classifier is used to build a decision model. In this paper, four univariate feature selection methods (Euclidean distance, correlation, mutual information and Fisher discriminant ratio) and two well-known classifiers (LDA and SVM) are investigated. The proposed method has been validated using the publicly available BCI Competition IV dataset Ia and BCI Competition III dataset IVa. Experimental results demonstrate that the proposed method significantly outperforms the existing methods in terms of classification error. A reduction of 76.98%, 75.65%, 73.90% and 72.21% in classification error over both datasets and both classifiers can be observed using the proposed CVSTSCSP method in comparison to CSP, SBCSP, FBCSP and CVSCSP, respectively.
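
As an illustration of the feature selection phase, the sketch below scores features with a Euclidean-distance criterion for a two-class problem; this is a generic variant, and the paper's exact scoring and the other three criteria are not reproduced.

```python
import numpy as np

def euclidean_feature_scores(X, y):
    """Univariate Euclidean-distance criterion, sketched for two classes:
    each feature is scored by the distance between its class-conditional means
    (for a single feature this reduces to the absolute mean difference)."""
    X0, X1 = X[y == 0], X[y == 1]
    return np.abs(X0.mean(axis=0) - X1.mean(axis=0))

# keep the m highest-scoring features before classification with LDA/SVM:
# selected = np.argsort(euclidean_feature_scores(X, y))[::-1][:m]
```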