Jie Du
Researcher at Shenzhen University
Publications - 10
Citations - 183
Jie Du is an academic researcher from Shenzhen University. The author has contributed to research in topics: Extreme learning machine & Deep learning. The author has an h-index of 5 and has co-authored 9 publications receiving 67 citations. Previous affiliations of Jie Du include the University of Macau.
Papers
Journal ArticleDOI
Novel Efficient RNN and LSTM-Like Architectures: Recurrent and Gated Broad Learning Systems and Their Applications for Text Classification
TL;DR: In this article, a class of flat neural networks called the broad learning system (BLS) is employed to derive two novel learning methods for text classification: recurrent BLS and an LSTM-like architecture, gated BLS (G-BLS).
Journal ArticleDOI
Postboosting Using Extended G-Mean for Online Sequential Multiclass Imbalance Learning
TL;DR: PBG is shown to outperform the other compared methods on all data sets in various aspects, including the issues of data scarcity, dense majority, DCDS, DCDD, and unscaled data.
Journal ArticleDOI
Accurate and efficient sequential ensemble learning for highly imbalanced multi-class data.
Chi-Man Vong, Jie Du +1 more
TL;DR: A novel sequential ensemble learning (SEL) framework is designed to resolve multiple issues simultaneously: accuracy in classifying highly imbalanced multi-class data, training efficiency for large data, and sensitivity to high imbalance ratio (IR).
Journal ArticleDOI
Post-boosting of classification boundary for imbalanced data using geometric mean
TL;DR: A novel imbalance learning method for binary classes, named Post-Boosting of classification boundary for Imbalanced data (PBI), is proposed; it can significantly improve the classification boundary of any trained neural network (NN).
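The geometric mean of per-class recalls is the metric that both post-boosting papers above (PBI and PBG) optimize; it is high only when every class, including the minority, is recalled well. The helper below is a minimal illustrative sketch of that metric alone, not the authors' PBI/PBG implementation; the function name `g_mean` and the toy data are assumptions for the example.

```python
import numpy as np

def g_mean(y_true, y_pred, classes):
    """Geometric mean of per-class recalls, a standard
    metric for imbalanced classification."""
    recalls = []
    for c in classes:
        mask = (y_true == c)
        # recall for class c: fraction of its samples predicted correctly
        recalls.append(np.mean(y_pred[mask] == c))
    return float(np.prod(recalls) ** (1.0 / len(recalls)))

# toy imbalanced set: 8 majority samples, 2 minority samples
y_true = np.array([0] * 8 + [1] * 2)
y_pred = np.array([0] * 8 + [1, 0])  # one minority sample missed
print(g_mean(y_true, y_pred, classes=[0, 1]))  # sqrt(1.0 * 0.5) ≈ 0.707
```

Because the recalls are multiplied, misclassifying even the small minority class drags the score down sharply, which is why this family of methods uses it instead of plain accuracy.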
Journal ArticleDOI
Robust Online Multilabel Learning Under Dynamic Changes in Data Distribution With Labels
Jie Du, Chi-Man Vong +1 more
TL;DR: The proposed work is highly robust to CDDL in both the sequential model update and multilabel thresholding, and improves performance on different evaluation measures, including Hamming loss, F1-measure, precision, and recall, while taking short training time on most evaluated datasets.