Swift Imbalance Data Classification using SMOTE and Extreme Learning Machine
Citations
Cites methods from "Swift Imbalance Data Classification..."
...Several techniques have been used to alleviate the problem of class imbalance, including data sampling (Random Undersampling (RUS) [12], where duplicate instances are generated from the minority classes) and boosting ([6], RUSBoost [13]), where synthetic examples from the rare or minority class are created by interpolating between existing minority-class instances, thus creating a denser minority class....
[...]
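The interpolation step this excerpt describes can be sketched in a few lines (a minimal illustrative NumPy sketch; the function name and parameters are assumptions, not the cited paper's code):

```python
import numpy as np

def smote_sample(X_min, n_new, k=5, seed=0):
    """Generate n_new synthetic minority samples by interpolating each
    chosen minority point toward one of its k nearest minority-class
    neighbours (a minimal SMOTE-style sketch)."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    # pairwise distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                 # exclude self-matches
    k = min(k, n - 1)
    nn = np.argsort(d, axis=1)[:, :k]           # k nearest neighbours per point
    out = []
    for _ in range(n_new):
        i = rng.integers(n)                     # pick a random minority point
        j = nn[i, rng.integers(k)]              # and one of its neighbours
        lam = rng.random()                      # interpolation factor in [0, 1)
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.asarray(out)
```

Each synthetic point lies on the segment between two existing minority points, which is what makes the minority class denser rather than merely duplicated.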
References
"Swift Imbalance Data Classification..." refers to methods in this paper
...[26] proposed a simple and effective learning algorithm known as ELM....
[...]
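The ELM recipe this excerpt refers to (random, fixed hidden-layer weights; output weights solved in closed form) can be sketched as follows (an illustrative NumPy sketch of the standard ELM training rule, not the code of [26] or of this paper):

```python
import numpy as np

def elm_train(X, T, n_hidden=50, seed=0):
    """Train a single-hidden-layer ELM: input weights and biases are
    random and never updated; output weights are the least-squares
    solution via the pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ T                  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

The single pseudoinverse solve is what makes ELM training fast compared with iterative backpropagation.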
"Swift Imbalance Data Classification..." refers to background or methods in this paper
...Certain significant methods for tackling the imbalanced dataset situation include two basic broad fields: Sampling Methods for Imbalanced Data and Cost-Sensitive Methods for Imbalanced Learning [1]....
[...]
...For evaluation purposes, we have used the Geometric mean (G-mean), F-score and ROC curve as the evaluation metrics, as these are principally used for assessing data classification algorithms [1]....
[...]
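For a binary problem with the minority class labelled 1, the two scalar metrics named in this excerpt reduce to a few counts (an illustrative sketch; the function and variable names are assumptions, not the paper's code):

```python
import numpy as np

def gmean_fscore(y_true, y_pred):
    """Compute G-mean and F-score for binary labels (1 = minority).
    Assumes both classes appear and at least one positive prediction,
    so none of the denominators is zero."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    sens = tp / (tp + fn)              # recall on the minority class
    spec = tn / (tn + fp)              # recall on the majority class
    prec = tp / (tp + fp)
    gmean = np.sqrt(sens * spec)       # G-mean: balance of both recalls
    f1 = 2 * prec * sens / (prec + sens)
    return gmean, f1
```

Unlike plain accuracy, G-mean collapses to zero whenever the classifier ignores one class entirely, which is why it suits imbalanced data.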
...that the dataset available in these fields was imbalanced, which was the principal reason for the substandard classification performance [1]....
[...]
"Swift Imbalance Data Classification..." refers to methods in this paper
...Some techniques include random undersampling and oversampling; informed undersampling [7]; synthetic sampling, such as the Synthetic Minority Over-sampling Technique (SMOTE), which has shown a great deal of success in various applications [8]; and sampling with data-cleaning techniques such as the Condensed Nearest Neighbor rule [9], the OSS method [10], and the Neighborhood Cleaning rule (NCL) [11]....
[...]
...However, in the case of a low imbalance ratio J, one can use downsampling, i.e., Condensed Nearest Neighbour (CNN) with T-Link [9], along with ELM....
[...]
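One data-cleaning step of the CNN+T-Link downsampling mentioned here is removing Tomek links: pairs of opposite-class samples that are mutual nearest neighbours, of which the majority-class member is dropped. A simplified sketch under that definition (not the paper's implementation; names are illustrative):

```python
import numpy as np

def remove_tomek_links(X, y, majority=0):
    """Drop majority-class points that form a Tomek link, i.e. are the
    mutual nearest neighbour of an opposite-class point."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # a point is not its own neighbour
    nn = d.argmin(axis=1)                  # nearest neighbour of each point
    keep = np.ones(len(X), dtype=bool)
    for i, j in enumerate(nn):
        if nn[j] == i and y[i] != y[j]:    # mutual NNs of opposite classes
            if y[i] == majority:
                keep[i] = False            # remove only the majority member
    return X[keep], y[keep]
```

Removing the majority side of each link cleans the class boundary without touching any minority samples.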
...For comparison, we have also used a downsampling technique, CNN+T-Link [9]....
[...]
...The first three have been used with the original dataset, an upsampled dataset (using SMOTE), and a downsampled dataset (using CNN+T-Link)....
[...]
...In this work, four different models, namely DTC, ELM and K-Means (each with the original data, data upsampled using SMOTE, and data downsampled using CNN+T-Link), along with W-ELM, have been compared against each other....
[...]