
Showing papers on "Sequential minimal optimization published in 2015"


Journal ArticleDOI
TL;DR: The research shows that the complexity of SVM (LibSVM) is O(n³), that the C++ implementation is faster than the Java one in both training and testing, and that computation time increases as the data grows.
Abstract: Support Vector Machines (SVM) is one of the machine learning methods that can be used to perform classification tasks. Many researchers use SVM libraries to accelerate their research development; using such a library saves time and avoids writing code from scratch. LibSVM is one SVM library that has been widely used by researchers to solve their problems, and it is also integrated into WEKA, one of the popular data mining tools. This article contains the results of our work on the complexity analysis of Support Vector Machines, focusing on the SVM algorithm and its implementation in LibSVM. We used two popular programming languages, C++ and Java, with three different datasets to test our analysis and experiments. The results show that the complexity of SVM (LibSVM) is O(n³), that C++ is faster than Java in both training and testing, and that computation time increases as the data grows.
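A minimal timing sketch of the kind of experiment the abstract describes, written against scikit-learn's SVC (which wraps LibSVM and its SMO solver); the dataset sizes, feature count, and kernel settings below are illustrative assumptions, not the paper's actual setup.

import time
from sklearn.datasets import make_classification
from sklearn.svm import SVC

# Hypothetical experiment: measure how LibSVM-backed training time grows with n.
for n in (1000, 2000, 4000, 8000):                    # assumed training-set sizes
    X, y = make_classification(n_samples=n, n_features=20, random_state=0)
    clf = SVC(kernel="rbf", C=1.0, gamma="scale")     # SVC wraps LibSVM's SMO solver
    t0 = time.perf_counter()
    clf.fit(X, y)
    print(f"n={n:5d}  training time = {time.perf_counter() - t0:.3f} s")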

201 citations


Journal ArticleDOI
TL;DR: A Near-Bayesian Support Vector Machine (NBSVM) is proposed for imbalanced classification problems, combining the philosophies of decision boundary shift and unequal regularization costs.

130 citations


Journal ArticleDOI
TL;DR: In this article, a comprehensive evaluation of pattern classification methods for fault identification in induction motors is presented, including Naive Bayes, k-Nearest Neighbor, Support Vector Machine (SVM), Artificial Neural Network (ANN), Repeated Incremental Pruning to Produce Error Reduction, and C4.5 Decision Tree.

98 citations


Journal ArticleDOI
TL;DR: This paper formulates pin-SVM and its sparse version as quadratic programming problems with box constraints, for which the sequential minimal optimization (SMO) technique is applicable, and establishes SMO algorithms for both.

37 citations


Journal ArticleDOI
TL;DR: A new method for efficient classification of multidimensional data is proposed, based on a tensor-based kernel for Support Vector Machines that represents data as tensors in order to preserve data dimensionality and allow the processing of complex structures.

32 citations


Proceedings ArticleDOI
17 Oct 2015
TL;DR: A data-driven approach is used to accurately differentiate physical faults from cyber-attacks, and it outperforms other popular supervised classification approaches on the cyber-attack and fault datasets.
Abstract: Recently, there has been a significant increase in interest in Smart Grid security. Researchers have proposed various techniques to detect cyber-attacks using sensor data. However, there has been little work on distinguishing a cyber-attack from a physical fault in the power system. A serious operational failure in the physical power grid may result from mitigation strategies if a fault is wrongly classified as a cyber-attack, or vice versa. In this paper, we use a data-driven approach to accurately differentiate physical faults from cyber-attacks. First, we create a realistic dataset by generating different types of faults and cyber-attacks on the IEEE 30-bus benchmark test system. With extensive experiments, we observe that most established supervised methods perform poorly at classifying faults and cyber-attacks, especially on practical datasets. Hence, we provide a data-driven approach in which labelled data are projected into a new low-dimensional subspace using Principal Component Analysis (PCA). Next, a Sequential Minimal Optimization (SMO) based Support Vector Machine is trained on the new projection of the original dataset. With both simulated and practical datasets, we observe that the proposed classification method outperforms other popular supervised classification approaches on the cyber-attack and fault datasets.
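The following scikit-learn sketch mirrors the two-stage pipeline the abstract outlines (PCA projection followed by an SMO-trained SVM). The feature matrix, labels, number of retained components, and C value are placeholder assumptions, not values from the paper.

import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 60))        # placeholder sensor measurements
y = rng.integers(0, 2, size=500)      # placeholder labels: 0 = physical fault, 1 = cyber-attack

model = make_pipeline(
    StandardScaler(),
    PCA(n_components=10),             # project labelled data into a low-dimensional subspace
    SVC(kernel="rbf", C=10.0),        # SVM trained by LibSVM's SMO solver
)
print(cross_val_score(model, X, y, cv=5).mean())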

27 citations


Journal ArticleDOI
TL;DR: The proposed algorithm is named Fitness-based Position Update in SMO (FPSMO), as it updates the position of individuals based on their fitness, which enhances the rate of convergence.

25 citations


Proceedings ArticleDOI
01 Dec 2015
TL;DR: In this paper, color images of melanoma are classified into malignant and benign classes using a Support Vector Machine (SVM) optimized by Sequential Minimal Optimization (SMO).
Abstract: Melanoma is quite a precarious form of skin cancer. Malignant skin tumors closely resemble a benign nevus, mole, or dysplastic naevus. For dermatologists it is a tedious task to analyze every patient sample precisely, so a decision support system is needed to assess the danger associated with a given sample. In this work, color images of melanoma are classified into malignant and benign classes using a Support Vector Machine (SVM) optimized by Sequential Minimal Optimization (SMO). As part of the preprocessing step, an illumination-compensation-based segmentation algorithm is deployed. The segmentation process is followed by the proposed iterative dilation method to remove noise from the lesion. Prominent features are calculated from the segmented image based on asymmetric lesion behavior, border irregularity, color variations, and spanned diameter. Finally, this feature vector is applied as input to the SVM classifier, which is used to distinguish malignant from benign samples of skin lesions. The dataset is divided into training and testing data to assess and validate the system performance.
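A hedged sketch of the final classification stage only: an SMO-trained SVM on precomputed asymmetry/border/color/diameter descriptors. The segmentation and feature-extraction steps are not reproduced, and the feature file names and labels below are hypothetical placeholders.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Assumed inputs: one row of ABCD-style descriptors per lesion, labels 0 = benign, 1 = malignant.
features = np.load("lesion_features.npy")   # hypothetical precomputed feature matrix
labels = np.load("lesion_labels.npy")       # hypothetical labels

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)   # LibSVM's SMO solver underneath
print("test accuracy:", clf.score(X_te, y_te))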

23 citations


Proceedings ArticleDOI
18 Jun 2015
TL;DR: A comparison among different classifiers, namely Multilayer Perceptron (MLP), Sequential Minimal Optimization (SMO), Bayesian Logistic Regression (BLR), and K-Star, using classification accuracy and error rate based on the percentage split method, shows that the best algorithm in WEKA is the MLP classifier, with an accuracy of 83.333% and a kappa statistic of 0.625.
Abstract: Colon cancer causes the deaths of about half a million people every year. The common method of its detection is histopathological tissue analysis, which imposes fatigue and a heavy workload on the pathologist. A novel method is proposed that combines structural and statistical pattern recognition for the detection of colon cancer. This paper presents a comparison among different classifiers, namely Multilayer Perceptron (MLP), Sequential Minimal Optimization (SMO), Bayesian Logistic Regression (BLR), and K-Star, using classification accuracy and error rate based on the percentage split method. The results show that the best algorithm in WEKA is the MLP classifier, with an accuracy of 83.333% and a kappa statistic of 0.625. The MLP classifier, which has the lower error rate, is preferred for its more powerful classification capability.
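A hedged sketch of a percentage-split comparison in the spirit of the study above, using scikit-learn analogues of two of the listed WEKA classifiers (an MLP and an SMO-trained SVM); the synthetic dataset and the 66% split are assumptions, not the paper's colon-cancer data.

from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=30, random_state=1)   # placeholder data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.66, random_state=1)

for name, clf in [("MLP", MLPClassifier(max_iter=2000, random_state=1)),
                  ("SMO-SVM", SVC(kernel="rbf"))]:
    pred = clf.fit(X_tr, y_tr).predict(X_te)
    print(name, "accuracy:", accuracy_score(y_te, pred),
          "kappa:", cohen_kappa_score(y_te, pred))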

19 citations


Journal ArticleDOI
TL;DR: Two techniques, the model tree and the sequential minimal optimization based support vector machine, which have not been used before to model surface roughness, were applied to the training data to build prediction models and reduced the minimum Ra value of the experimental data.

19 citations


Book ChapterDOI
01 Jan 2015
TL;DR: This chapter details a class of learning mechanisms known as Support Vector Machines (SVMs): it gives the machine learning framework, defines and introduces the concept of linear classifiers, and formally describes SVMs as large-margin classifiers.
Abstract: This chapter details a class of learning mechanisms known as Support Vector Machines (SVMs). We start by giving the machine learning framework, define and introduce the concept of linear classifiers, and formally describe SVMs as large-margin classifiers. We focus on the convex optimization problem, and in particular on Sequential Minimal Optimization (SMO), which is crucial for implementing the algorithm. Finally, we detail issues of SVM implementation. Regarding the latter, several aspects related to CPU and GPU implementation are surveyed. Our aim is twofold: first, we implement the multi-threaded CPU version and test it on benchmark data sets; then we proceed with the GPU version. We intend to give a clear understanding of specific aspects related to the implementation of basic SVM machines from a many-core perspective. Further developments can easily be extended to other SVM variants, taking the potential for big data adaptive machines one step further.
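Since the chapter treats SMO as the core training routine, here is a compact sketch of simplified SMO for a linear-kernel soft-margin SVM. It uses random selection of the second multiplier instead of the working-set heuristics of full SMO, so it illustrates the pairwise update rule rather than the chapter's optimized CPU/GPU implementations.

import numpy as np

def simplified_smo(X, y, C=1.0, tol=1e-3, max_passes=10, seed=0):
    """Simplified SMO sketch: labels y must be in {-1, +1}; linear kernel only."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    K = X @ X.T                      # linear kernel matrix
    alpha = np.zeros(n)
    b = 0.0
    passes = 0
    while passes < max_passes:
        changed = 0
        for i in range(n):
            Ei = (alpha * y) @ K[:, i] + b - y[i]          # prediction error for sample i
            if (y[i] * Ei < -tol and alpha[i] < C) or (y[i] * Ei > tol and alpha[i] > 0):
                j = rng.integers(n - 1)
                if j >= i:
                    j += 1                                  # pick a random j != i
                Ej = (alpha * y) @ K[:, j] + b - y[j]
                ai_old, aj_old = alpha[i], alpha[j]
                if y[i] != y[j]:
                    L, H = max(0.0, aj_old - ai_old), min(C, C + aj_old - ai_old)
                else:
                    L, H = max(0.0, ai_old + aj_old - C), min(C, ai_old + aj_old)
                if L == H:
                    continue
                eta = 2 * K[i, j] - K[i, i] - K[j, j]       # second derivative along the pair
                if eta >= 0:
                    continue
                alpha[j] = np.clip(aj_old - y[j] * (Ei - Ej) / eta, L, H)
                if abs(alpha[j] - aj_old) < 1e-5:
                    continue
                alpha[i] = ai_old + y[i] * y[j] * (aj_old - alpha[j])
                # update the threshold b from the KKT conditions
                b1 = b - Ei - y[i] * (alpha[i] - ai_old) * K[i, i] - y[j] * (alpha[j] - aj_old) * K[i, j]
                b2 = b - Ej - y[i] * (alpha[i] - ai_old) * K[i, j] - y[j] * (alpha[j] - aj_old) * K[j, j]
                if 0 < alpha[i] < C:
                    b = b1
                elif 0 < alpha[j] < C:
                    b = b2
                else:
                    b = (b1 + b2) / 2
                changed += 1
        passes = passes + 1 if changed == 0 else 0
    w = (alpha * y) @ X              # recover the primal weights (linear kernel only)
    return w, b

# toy usage on a linearly separable set
X = np.array([[1.0, 2.0], [2.0, 3.0], [3.0, 3.0], [-1.0, -1.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, 1, -1, -1, -1])
w, b = simplified_smo(X, y)
print(np.sign(X @ w + b))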

Journal ArticleDOI
TL;DR: An extended variant of the popular decomposition technique sequential minimal optimization, called the hybrid working set (HWS) algorithm, is proposed to effectively utilize the benefits of cached kernel columns and the parallel computational power of the coprocessor.
Abstract: Support vector machines (SVM) are a popular class of supervised models in machine learning. The associated compute-intensive learning algorithm limits their use in real-time applications. This paper presents a fully scalable architecture of a coprocessor which can compute multiple rows of the kernel matrix in parallel. Further, we propose an extended variant of the popular decomposition technique sequential minimal optimization, which we call the hybrid working set (HWS) algorithm, to effectively utilize the benefits of cached kernel columns and the parallel computational power of the coprocessor. The coprocessor is implemented on a Xilinx Virtex 7 field-programmable gate array-based VC707 board and achieves a speedup of up to 25× for kernel computation over single-threaded computation on an Intel Core i5. An application speedup of up to 15× over a software implementation of LIBSVM and a speedup of up to 23× over SVM Light is achieved using the HWS algorithm in unison with the coprocessor. The reduction in the number of iterations and the sensitivity of the optimization time to variation in cache size using the HWS algorithm are also shown.

Proceedings ArticleDOI
01 Oct 2015
TL;DR: The experimental results show that the SMO algorithm with V2M-SVD feature extraction achieves the best performance for the classification of basic hand movements.
Abstract: Surface electromyography (sEMG) signal analysis is a challenging task in neuroscience. The signal is associated with the activity of muscles in the human body, and it is part of how a human can control a robotic arm to help people with disabilities. In this paper, we propose a new method based on Singular Value Decomposition (SVD) and the SMO algorithm for classifying sEMG signals into six basic hand movements. In the proposed method, SVD is adopted for feature extraction and the SMO classifier is used for classifying sEMG signals into six classes of basic hand movements for five subjects. In a preliminary experiment, we investigate the number of features that yields the best classification performance and find that the optimal number of features is 50. For performance evaluation, five classifiers, including Decision Tree, k-Nearest Neighbor, Naive Bayes, RBF, and SMO, are adopted with a 10-fold cross-validation technique. The experimental results show that the SMO algorithm with V2M-SVD feature extraction achieves the best performance for the classification of basic hand movements.
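A hedged sketch of the general pattern described above: singular values of each sEMG segment serve as features for an SMO-trained SVM. The paper's exact V2M-SVD construction is not reproduced, and the segment shapes, channel count, and movement labels below are placeholders.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
segments = rng.normal(size=(120, 8, 200))     # placeholder: 120 trials, 8 channels, 200 samples
labels = np.tile(np.arange(6), 20)            # placeholder: six basic hand movements

def svd_features(segment, k=8):
    s = np.linalg.svd(segment, compute_uv=False)   # singular values of the channel-by-time matrix
    return s[:k]

X = np.array([svd_features(seg) for seg in segments])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=10).mean())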

Journal ArticleDOI
TL;DR: A unified convex framework that allows many different variations in the formulation with very diverse numerical performance and can capture the existing methods, i.e., standard soft-margin SVM, l1-SVM, and SVMs with standardization, feature selection, scaling, and many more SVMs, as special cases.
Abstract: In this paper, we start with the standard support vector machine (SVM) formulation and extend it by considering a general SVM formulation with a normalized margin. This results in a unified convex framework that allows many different variations in the formulation with very diverse numerical performance. The proposed unified framework can capture the existing methods, i.e., the standard soft-margin SVM, the l1-SVM, and SVMs with standardization, feature selection, scaling, and many more, as special cases. Furthermore, our proposed framework not only provides us with more insight into different SVMs from the “energy” and “penalty” points of view, which helps us understand the connections and differences between them in a unified way, but also enables us to propose more SVMs that outperform the existing ones under some scenarios.
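For reference, the baseline that this framework generalizes is the standard soft-margin SVM primal, which in the usual notation reads

\[
\min_{w,\, b,\, \xi}\;\; \frac{1}{2}\lVert w\rVert_2^2 + C\sum_{i=1}^{n}\xi_i
\quad\text{s.t.}\quad y_i\left(w^\top x_i + b\right) \ge 1 - \xi_i,\;\; \xi_i \ge 0,\;\; i = 1,\dots,n;
\]

the normalized-margin generalization itself is the paper's contribution and is not reproduced here.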

Proceedings ArticleDOI
01 Dec 2015
TL;DR: The present research confirmed that the use of a selected number of Fourier coefficients to approximate the ECG beat signal and compress the input features to the classifier can lead to high classification accuracies and improve the generalization ability of the CSVM classifier.
Abstract: The electrocardiogram (ECG) is widely used for the diagnosis of cardiac arrhythmia conditions. An automatic classification of four beat types, namely Normal (N), premature ventricular contraction (PVC), supraventricular premature or ectopic beat (SVPB), and fusion of ventricular and normal beat (FUSION), is implemented using Multi-class Support Vector Machine (MSVM) and Complex Support Vector Machine (CSVM) algorithms [1]. The ECG signals used in these studies were obtained from the European ST-T Database. A number of beats from different leads and patients were selected for training and for evaluating classifier performance. Successful ECG arrhythmia classification usually requires optimizing the following procedures: pre-processing and beat detection, feature extraction and selection, and classifier optimization. Pre-processing and R-peak detection are performed with the WFDB Software Package, which reads the annotations and finds the R-peak location. The R-peak location is used as a reference to detect peaks in other waves, such as P and T, and to extract the ECG beat. ECG beats are extracted by windowing the signal using 106 samples before and 106 samples after the R-peak. Discrete Cosine and Sine transforms or the Discrete Fourier Transform (DFT) are used for feature extraction and dimensionality reduction of the input vector at the input of the classifier. Studies were performed after selecting either 100 or 50 Fourier coefficients for reconstructing individual ECG beats in the feature selection phase. MATLAB software routines were used to train and validate both the CSVM and the Multi-class Support Vector Machine (MSVM) classifiers. A complex kernel function (Gaussian RBF) with 5-fold cross-validation was used for adjusting the kernel values. Sequential minimal optimization (SMO) [2] was used to train the CSVM and compute the corresponding complex hyper-plane parameters. The aim of the study was to improve the multi-class SVM by extending traditional SVM algorithms to complex spaces so as to simultaneously classify four types of heartbeats. The results illustrate that the proposed beat classifier is very reliable and that it may be adopted for automatic detection and classification of arrhythmia conditions. Accuracies between 86% and 94% are obtained for MSVM and CSVM classification, respectively. Using CSVM, a four-class problem can be classified rapidly by decomposing it into two distinct SVM tasks. Moreover, the present research confirmed that the use of a selected number of Fourier coefficients to approximate the ECG beat signal and compress the input features to the classifier can lead to high classification accuracies and improve the generalization ability of the CSVM classifier. Future work on wavelet pre-processing to further compress the input space of the classifier by generating wavelets on the basis of higher-order moment criteria [3], as well as alternative approaches for extending the CSVM input and output spaces to arbitrary dimension using Clifford algebra SVM [4], will be discussed at the conference.
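A hedged sketch of the beat-level feature step described above: a 213-sample window around each annotated R-peak, truncated to its first 50 DFT coefficient magnitudes, then fed to a multi-class SVM trained by an SMO-based solver. The ECG signal, R-peak locations, and labels are placeholders, and scikit-learn's SVC stands in for the MATLAB MSVM/CSVM classifiers.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def beat_features(signal, r_peak, n_coeff=50, half=106):
    beat = signal[r_peak - half : r_peak + half + 1]   # 106 samples either side of the R-peak
    return np.abs(np.fft.rfft(beat))[:n_coeff]         # magnitudes of the first DFT coefficients

rng = np.random.default_rng(0)
ecg = rng.normal(size=200_000)                 # placeholder ECG lead
r_peaks = np.arange(500, 199_000, 400)         # placeholder beat annotations
X = np.array([beat_features(ecg, r) for r in r_peaks])
y = np.tile(np.arange(4), len(r_peaks) // 4 + 1)[:len(r_peaks)]   # placeholder N/PVC/SVPB/FUSION labels

print(cross_val_score(SVC(kernel="rbf"), X, y, cv=5).mean())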

Journal ArticleDOI
TL;DR: The experimental results on 2-D artificial datasets and benchmark datasets show that IMIES is able to select suitable kernel parameters, and the GG algorithm is computationally more efficient while achieving comparable accuracies to the SMO algorithm.
Abstract: The primal maximum margin problem of OCSVM is equivalent to a nearest point problem. A generalized Gilbert (GG) algorithm is proposed to solve the nearest point problem. An improved MIES is developed for the Gaussian kernel parameter selection. The GG algorithm is computationally more efficient than the SMO algorithm. This paper is devoted to two issues involved in the one-class support vector machine (OCSVM), i.e., the optimization algorithm and the kernel parameter selection. For appropriate choices of parameters, the primal maximum margin problem of OCSVM is equivalent to a nearest point problem. A generalized Gilbert (GG) algorithm is proposed to solve the nearest point problem. Compared with the algebraic algorithms developed for OCSVM, such as the well-known sequential minimal optimization (SMO) algorithm, the GG algorithm is a novel geometric algorithm that has an intuitive and explicit optimization target at each iteration. Moreover, an improved MIES (IMIES) is developed for the Gaussian kernel parameter selection. IMIES is implemented by constraining the geometric locations of edge and interior sample mappings relative to OCSVM separating hyper-planes. The experimental results on 2-D artificial datasets and benchmark datasets show that IMIES is able to select suitable kernel parameters, and the GG algorithm is computationally more efficient while achieving comparable accuracies to the SMO algorithm.
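For context, the SMO-based baseline that the GG algorithm is compared against corresponds to the standard Gaussian-kernel one-class SVM; a minimal sketch using scikit-learn's OneClassSVM (a LibSVM/SMO-type solver) follows. The nu and gamma values and the data are placeholders; the paper's GG algorithm and IMIES selection are not reproduced.

import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X_train = rng.normal(size=(300, 2))                       # placeholder "normal" training samples
X_test = np.vstack([rng.normal(size=(50, 2)),             # more normal samples
                    rng.uniform(-6, 6, size=(10, 2))])    # a few likely outliers

ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma=0.5).fit(X_train)
print(ocsvm.predict(X_test))                              # +1 = inlier, -1 = outlier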

Journal ArticleDOI
TL;DR: A linear convergence rate for the extension of the Mitchell, Dem'yanov and Malozemov (MDM) algorithm for solving the Nearest Point Problem is proved and may pave the way for proving similar results for the SMO algorithm used to train SVMs.

Proceedings ArticleDOI
Shigeo Abe
12 Jul 2015
TL;DR: This paper trains support vector regressors by fusing sequential minimal optimization (SMO) and Newton's method, using the SVR formulation that includes the absolute-value variables, and demonstrates the validity of the method over SMO on several benchmark data sets.
Abstract: In this paper, we train support vector regressors (SVRs) by fusing sequential minimal optimization (SMO) and Newton's method. We use the SVR formulation that includes the absolute-value variables. The partial derivative of the absolute-value term with respect to the associated variable is undefined when the variable is zero; we determine the derivative value according to whether the optimal solution lies in the positive region, in the negative region, or at zero. In selecting the working set, we use the method we developed for the SVM: in addition to the pair of variables selected by SMO, loop variables that repeatedly appear during training are added to the working set. By this method the working-set size is determined automatically. We demonstrate the validity of our method over SMO using several benchmark data sets.
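For orientation, an SVR formulation "including the absolute variables" can be written, up to notation, in the unconstrained epsilon-insensitive form

\[
\min_{w,\, b}\;\; \frac{1}{2}\lVert w\rVert^2 + C\sum_{i=1}^{n} \max\!\left(0,\; \bigl|y_i - w^\top \phi(x_i) - b\bigr| - \varepsilon\right),
\]

where the absolute value of the residual appears directly in the loss; the fusion of SMO with Newton's method and the loop-variable working-set selection are the paper's own contributions and are not restated here.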

Journal ArticleDOI
TL;DR: Methods of construction of support vector machines (SVMs) require no additional a priori information and allow large volumes of multidimensional data to be processed, which is especially important for solving various problems in computational biology.
Abstract: Methods of construction of support vector machines (SVMs) require no additional a priori information and allow large volumes of multidimensional data to be processed, which is especially important for solving various problems in computational biology. The main algorithms of SVM construction for binary classification are reviewed. The issue of the quality of the SVM learning algorithms is considered, and a description of proposed algorithms is given that is sufficient for their practical implementation. Comparative analysis of the efficiency of support vector classifiers is presented.

Proceedings ArticleDOI
12 Mar 2015
TL;DR: The basic idea and algorithms of SVM are introduced, the basic principles of the least squares support vector machine are studied with a concrete algorithm description including the kernel function, and the application of the algorithm to classification is studied.
Abstract: The support vector machine is a classification algorithm that has emerged in recent years and has been successfully applied to many areas, and the least squares support vector machine is a technique developed from the traditional support vector machine that has important research significance. This paper first introduces the basic idea and algorithms of SVM; secondly, it studies the basic principles of the least squares support vector machine and gives a concrete algorithm description, including the kernel function; finally, it studies the application of the algorithm to classification. As a novel artificial intelligence technique, the least squares support vector machine is an extension of the standard support vector machine and has been widely used in various disciplines, with global optimization, good generalization ability, and other features, so this research has some theoretical significance.
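Since the abstract describes the least squares SVM only in outline, the standard LS-SVM classifier formulation (a squared error term and equality constraints, so that training reduces to solving a linear system rather than a quadratic program) is, in the usual notation,

\[
\min_{w,\, b,\, e}\;\; \frac{1}{2} w^\top w + \frac{\gamma}{2}\sum_{i=1}^{n} e_i^2
\quad\text{s.t.}\quad y_i\left(w^\top \phi(x_i) + b\right) = 1 - e_i,\;\; i = 1,\dots,n.
\]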

Proceedings ArticleDOI
Weiling Zhang1, Wei Hu1, Yong Min1, Lei Chen1, Le Zheng1, Xianzhuang Liu1 
01 Nov 2015
TL;DR: A synthetic stability classifier based on reformed support vector machines is proposed, and an SVM solver for large-scale problems is designed based on sequential minimal optimization (SMO) so as to speed up computation.
Abstract: Online transient stability assessment (TSA) has always been a tough problem for power systems. One of the promising solutions is to extract hidden stability rules from historical data with machine learning algorithms. These algorithms have not been fully adapted to TSA, since power systems have their own special characteristics. To ensure the conservativeness of TSA, this paper proposes a synthetic stability classifier based on reformed support vector machines. It separates samples into stable, unstable, and grey-area classes; the stable and unstable classes are expected to be exactly correct. Moreover, an SVM solver for large-scale problems is designed based on sequential minimal optimization (SMO). It decomposes large-scale training into parallel small-scale training so as to speed up computation. Case studies on the IEEE 39-bus system show no false dismissals and demonstrate the advantages of the proposed classifier and SVM solver.

01 Jan 2015
TL;DR: Four of the most effective classification methods, namely Radial Basis Function Network, Self-Organizing Map, Sequential Minimal Optimization, and Projective Adaptive Resonance Theory, have been applied, and the performances of different combinations of classifiers and attribute-reduction methods have been compared.
Abstract: With the increase in Internet users, the number of malicious users is also growing day by day, posing a serious problem in distinguishing between normal and abnormal behavior of users in the network. This has led to the research area of intrusion detection, which essentially analyzes the network traffic and tries to determine normal and abnormal patterns of behavior. In this paper, we have analyzed the standard NSL-KDD intrusion dataset using some neural-network-based techniques for predicting possible intrusions. Four of the most effective classification methods, namely Radial Basis Function Network, Self-Organizing Map, Sequential Minimal Optimization, and Projective Adaptive Resonance Theory, have been applied. In order to enhance the performance of the classifiers, three entropy-based feature selection methods have been applied as preprocessing of the data. The performances of different combinations of classifiers and attribute-reduction methods have also been compared.
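A hedged sketch of the preprocessing-plus-classifier pattern the abstract describes: an entropy-based (mutual information) feature ranking followed by an SMO-trained SVM. The NSL-KDD loading step and the paper's exact entropy criteria are not reproduced; the arrays, feature count, and k are placeholders.

import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 41))       # placeholder: 41 NSL-KDD-style features
y = rng.integers(0, 2, size=1000)     # placeholder: normal vs. attack labels

model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=15),   # keep the 15 most informative features
    SVC(kernel="rbf"),                        # SMO-trained SVM classifier
)
print(cross_val_score(model, X, y, cv=5).mean())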

Proceedings ArticleDOI
Katelyn Gao
01 Dec 2015
TL;DR: This work considers solving online one-class SVMs with an active-set method for quadratic programming (QP) and proposes a method to find a good warm start by exploiting the structure of the SVM optimization problem.
Abstract: A great advantage of support vector machines (SVMs) is their capability to learn decision borders, represented by a set of particular data points called margin support vectors. Real-time or nearly real-time online learning and detection from data streams poses stringent time and space constraints on the learner. We consider solving online one-class SVMs with an active-set method for quadratic programming (QP). At each iteration, the problem size is the number of support vectors estimated so far. Active-set programming has the nice property that the solution of a previous problem can serve as a warm start for the next, and computation time can thereby be greatly reduced. In general, finding a good warm-start point is difficult. We propose a method to find a good warm start by exploiting the structure of the SVM optimization problem.

Proceedings ArticleDOI
26 Jul 2015
TL;DR: The proposed approach combines a mean square error (MSE) formulation with Platt's sequential minimal optimization algorithm, with the aim of benefiting from the effectiveness of this quadratic programming technique in both computation time and memory occupation.
Abstract: This paper addresses the problem of parameter optimization for Markov random field (MRF) models for supervised classification of remote sensing images. MRF model parameters generally impact classification accuracy, and their automatic optimization is still an open issue, especially in the supervised case. The proposed approach combines a mean square error (MSE) formulation with Platt's sequential minimal optimization algorithm, with the aim of benefiting from the effectiveness of this quadratic programming technique in both computation time and memory occupation. The experimental validation is carried out with five real data sets comprising multipolarization and multifrequency SAR, multispectral high-resolution, and single-date and multitemporal imagery. The method is compared with two techniques based on MSE criteria and on the Ho-Kashyap and Goldfarb-Idnani numerical algorithms.

17 Feb 2015
TL;DR: The paper presents a sequential dual method for non-convex structured ramp loss minimization and includes results on artificial data when the method is exposed to outliers.
Abstract: The paper presents a sequential dual method for non-convex structured ramp loss minimization. The method uses the concave-convex procedure, which iteratively transforms a non-convex problem into a series of convex ones. Sequential minimal optimization is used to deal with the convex optimization by sequentially traversing the data and optimizing parameters associated with the incrementally built set of active structures inside each of the training examples. The paper includes results on two sequence labeling problems, shallow parsing and part-of-speech tagging, and also presents results on artificial data when the method is exposed to outliers. A comparison with a primal sub-gradient method using the structured ramp and hinge losses is also presented.
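For orientation, the (unstructured) ramp loss behind this line of work can be written as a difference of two hinge functions,

\[
R_s(t) = H_1(t) - H_s(t), \qquad H_a(t) = \max(0,\, a - t), \qquad s < 1,
\]

so the non-convex objective splits into a convex part (from \(H_1\)) and a concave part (from \(-H_s\)) that the concave-convex procedure linearizes at each iteration; the structured version and the sequential dual method are the paper's own.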

Journal ArticleDOI
TL;DR: Two new decomposition algorithms are proposed for training bound-constrained SVMs, in which a projected gradient algorithm and an interior point method are combined to solve the quadratic subproblem efficiently.
Abstract: The bound-constrained Support Vector Machine (SVM) is one of the state-of-the-art models for binary classification. The decomposition method is currently one of the major methods for training SVMs, especially when a nonlinear kernel is used. In this paper, we propose two new decomposition algorithms for training bound-constrained SVMs. A projected gradient algorithm and an interior point method are combined to solve the quadratic subproblem efficiently. The main difference between the two algorithms is the way of choosing the working set. The first one uses only first-order derivative information of the model, for simplicity; the second one incorporates part of the second-order information into the process of working-set selection, besides the gradient. Both algorithms are proved to be globally convergent in theory. The new algorithms are compared with the well-known package BSVM. Numerical experiments on several public data sets validate the efficiency of the proposed methods.
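In the common bound-constrained formulation (the one behind the BSVM package), the bias term is folded into the objective, so the dual has only box constraints and no linear equality constraint; assuming that convention, the quadratic problem the decomposition methods work on is

\[
\min_{\alpha}\;\; \frac{1}{2}\alpha^\top\!\left(Q + y\,y^\top\right)\alpha - e^\top \alpha
\quad\text{s.t.}\quad 0 \le \alpha_i \le C,\;\; i = 1,\dots,n,
\]

with \(Q_{ij} = y_i y_j K(x_i, x_j)\), which is what makes box-constrained subproblem solvers such as projected gradient and interior point methods natural choices.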

Journal ArticleDOI
TL;DR: This paper investigates genomic interaction networks, examining protein-protein interaction networks to predict cancer-related proteins using Sequential Minimal Optimization (SMO) for training a Support Vector Machine (SVM).
Abstract: Early diagnosis of cancer is crucial to improving the survival rate and prolonging the lives of patients. With the large amounts of medical data available in the medical field, applying data mining tools and an efficient prediction methodology to diagnose diseases can lead to useful knowledge to support medical professionals in saving lives. This paper explores genomic interaction networks, investigating protein-protein interaction networks to predict cancer-related proteins using Sequential Minimal Optimization (SMO) for training a Support Vector Machine (SVM). The WEKA software, an open source collection of machine learning algorithms, was utilized as the data mining tool. The provided data set was studied and analyzed in order to build a useful and reliable model to predict cancer and non-cancer related proteins. Keywords: mining, Support Vector Machine (SVM), Protein-Protein Interaction (PPI), Sequential Minimal Optimization (SMO)

Proceedings ArticleDOI
01 Sep 2015
TL;DR: A novel approach for hand gesture recognition in complex background images based on Histograms of Orientation Gradient which in independent of segmentation task followed by Sequential minimal Optimization (SMO).
Abstract: This paper presents a novel approach for hand gesture recognition in complex background images The method is based on Histograms of Orientation Gradient (HOG) which in independent of segmentation task followed by Sequential minimal Optimization (SMO) In our experiment we use benchmark Jochen-Triesch database for hand gesture recognition under complex and clutter background In addition to this perturbation is added to images to increase the database The vector size is reduced by increasing the number of pixels per cell without compromising accuracy The proposed system gives overall recognition rate of 9312% which demonstrates the robustness of the system under illumination changes, rotation and translation
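A hedged sketch of the HOG-then-SMO-SVM pipeline the abstract describes, using scikit-image's hog and scikit-learn's SVC. Loading of the Jochen-Triesch images is not reproduced; the grayscale images and gesture labels below are placeholders, and the cell size is just one of the values the paper varies.

import numpy as np
from skimage.feature import hog
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
images = rng.random(size=(100, 128, 128))     # placeholder grayscale gesture images
labels = np.repeat(np.arange(10), 10)         # placeholder gesture classes

X = np.array([hog(img, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2)) for img in images])
print(cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean())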

Book ChapterDOI
01 Jan 2015
TL;DR: It was found that a combination of mel-frequency cepstrum coefficients (MFCCs) and entropy as an intrinsic audio feature captures the specific frequency response due to the tolerances in the nominal values of the electronic components associated with an individual computer device.
Abstract: This study investigates the use of blind source computer device identification for forensic investigation of recorded VoIP calls. It was found that a combination of mel-frequency cepstrum coefficients (MFCCs) and entropy as an intrinsic audio feature captures the specific frequency response due to the tolerances in the nominal values of the electronic components associated with an individual computer device. By applying supervised learning techniques such as the naive Bayesian, linear logistic regression, neural network (NN), support vector machine (SVM), and sequential minimal optimization (SMO) classifiers to the Entropy-MFCC features, state-of-the-art identification accuracy of above 99.8% has been achieved on a set of 5 iMacs and 5 desktop PCs of the same model.
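A hedged sketch of an Entropy-MFCC style feature vector (mean MFCCs plus a spectral-entropy term) fed to an SMO-trained SVM. Audio framing and the paper's exact entropy definition are not reproduced; the file names, device labels, and feature choices are hypothetical placeholders.

import numpy as np
import librosa
from scipy.stats import entropy
from sklearn.svm import SVC

def device_features(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)   # average MFCCs over frames
    spec = np.abs(np.fft.rfft(y)) + 1e-12
    return np.append(mfcc, entropy(spec / spec.sum()))                    # MFCC means + spectral entropy

paths = ["call_imac1.wav", "call_pc1.wav"]     # hypothetical recorded VoIP calls
labels = [0, 1]                                # hypothetical device identities
X = np.array([device_features(p) for p in paths])
clf = SVC(kernel="rbf").fit(X, labels)         # SMO-trained SVM over device features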

Proceedings ArticleDOI
01 Nov 2015
TL;DR: An improved version of a parallel sequential minimal optimization algorithm is proposed that avoids falling into endless loops and is more effective than the previous methods.
Abstract: In our previous work, a parallel sequential minimal optimization algorithm was proposed; the algorithm executed successfully, but its convergence could not be guaranteed in some cases. In this paper, an improved version is proposed which avoids falling into endless loops. In the proposed method, multiple violating pairs are selected at each step, and, depending on the decrement of the objective function, either a single-pair update or a multiple-pair update is chosen. Experimental results show that the proposed method is more effective than the previous methods: the parallel algorithm executes well while accuracy is maintained and convergence is completely guaranteed.