SciSpace (formerly Typeset)

Chang Xu

Researcher at University of Sydney

Publications: 467
Citations: 13,012

Chang Xu is an academic researcher from the University of Sydney. The author has contributed to research in topics including Computer science and Chemistry. The author has an h-index of 42 and has co-authored 260 publications receiving 7,189 citations. Previous affiliations of Chang Xu include the University of Melbourne and Information Technology University.

Papers
Proceedings Article

Cost-Sensitive Feature Selection via F-Measure Optimization Reduction

TL;DR: This paper presents a novel cost-sensitive feature selection (CSFS) method that optimizes F-measure instead of accuracy to take the class imbalance issue into account, and demonstrates the efficiency and significance of the method.
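
The paper's CSFS reduction is not reproduced here, but the core idea in the TL;DR, scoring feature subsets by F-measure rather than accuracy so that class imbalance is accounted for, can be pictured with a minimal greedy-selection sketch on toy data (scikit-learn, all names and parameters are placeholders, not the authors' method):

```python
# Minimal sketch: greedy forward selection scored by cross-validated F1
# instead of accuracy, so rare-class performance drives the choice.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def greedy_f1_selection(X, y, n_features):
    selected, remaining = [], list(range(X.shape[1]))
    clf = LogisticRegression(max_iter=1000)
    while remaining and len(selected) < n_features:
        # score each candidate feature by the F1 it yields when added
        scores = [
            (np.mean(cross_val_score(clf, X[:, selected + [j]], y,
                                     scoring="f1", cv=3)), j)
            for j in remaining
        ]
        best_score, best_j = max(scores)
        selected.append(best_j)
        remaining.remove(best_j)
    return selected

# toy imbalanced data: roughly 5% positives
X, y = make_classification(n_samples=600, n_features=20,
                           weights=[0.95, 0.05], random_state=0)
print(greedy_f1_selection(X, y, n_features=5))
```

On heavily imbalanced data such as this, the F1-driven choice can differ noticeably from an accuracy-driven one, which is the motivation the TL;DR points to.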
Posted Content

DAFL: Data-Free Learning of Student Networks

TL;DR: This work proposes a novel framework for training efficient deep neural networks by exploiting generative adversarial networks (GANs), in which the pre-trained teacher network is regarded as a fixed discriminator and the generator is used to derive training samples that obtain the maximum response from the discriminator.
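
A minimal sketch of the data-free training loop described above, assuming tiny placeholder networks and a simplified generator objective (confident per-sample teacher predictions plus batch-level class balance); this is an illustration of the idea, not the authors' implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
teacher = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
student = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
for p in teacher.parameters():          # the pre-trained teacher stays fixed
    p.requires_grad_(False)

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_s = torch.optim.Adam(student.parameters(), lr=1e-3)

for step in range(200):
    # generator step: synthesise inputs the fixed teacher responds to confidently
    x = generator(torch.randn(64, 16))
    t_logits = teacher(x)
    pseudo = t_logits.argmax(dim=1)
    one_hot_loss = F.cross_entropy(t_logits, pseudo)     # confident predictions
    mean_prob = F.softmax(t_logits, dim=1).mean(dim=0)
    balance_loss = (mean_prob * mean_prob.clamp_min(1e-8).log()).sum()
    g_loss = one_hot_loss + 0.1 * balance_loss           # weights are placeholders
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    # student step: distil the teacher's outputs on the generated batch
    x = generator(torch.randn(64, 16)).detach()
    with torch.no_grad():
        t_prob = F.softmax(teacher(x), dim=1)
    s_loss = F.kl_div(F.log_softmax(student(x), dim=1), t_prob,
                      reduction="batchmean")
    opt_s.zero_grad()
    s_loss.backward()
    opt_s.step()
```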
Posted Content

Streaming Label Learning for Modeling Labels on the Fly

TL;DR: Streaming label learning is defined, i.e., labels arrive on the fly, to model newly arrived labels with the help of the knowledge learned from past labels; it can generate a tighter generalization error bound for new labels than the general ERM framework with trace-norm or Frobenius-norm regularization.
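
A toy illustration of the streaming-label idea under a simple linear-model assumption: the classifier for a label that arrives later is expressed as a combination of the already-learned past-label classifiers plus a small residual, instead of being fit from scratch. This is only a sketch, not the paper's formulation or its error-bound analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k_past = 500, 30, 8
X = rng.normal(size=(n, d))
Y_past = (X @ rng.normal(size=(d, k_past)) > 0).astype(float)  # labels seen so far
y_new = (X @ rng.normal(size=d) > 0).astype(float)             # label arriving on the fly

# 1) past-label classifiers (ridge regression, one weight column per label)
lam = 1.0
W_past = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y_past)

# 2) new label: w_new = W_past @ alpha + r, fit reuse weights alpha and residual r jointly
D = np.hstack([X @ W_past, X])                                  # design matrix for [alpha; r]
reg = np.diag(np.r_[0.1 * np.ones(k_past), 10.0 * np.ones(d)])  # keep the residual small
theta = np.linalg.solve(D.T @ D + reg, D.T @ y_new)
alpha, r = theta[:k_past], theta[k_past:]
w_new = W_past @ alpha + r
print("train accuracy on the new label:", ((X @ w_new > 0.5) == y_new).mean())
```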
Proceedings ArticleDOI

R-SVM+: Robust Learning with Privileged Information

TL;DR: A novel Robust SVM+ (R-SVM+) algorithm is proposed based on a rigorous theoretical analysis and is transformed into a quadratic programming problem, which can be efficiently optimized using off-the-shelf solvers.
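
For context, the classical SVM+ quadratic program (the privileged-information setting this work builds on, where slacks are modelled from features available only at training time) can be written down directly with a generic convex solver. The robustness modifications of R-SVM+ are not reproduced here, and the data and parameters below are placeholders:

```python
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d, d_priv = 80, 5, 3
X = rng.normal(size=(n, d))
X_priv = rng.normal(size=(n, d_priv))       # privileged features, train-time only
y = np.sign(X[:, 0] + 0.5 * rng.normal(size=n))

w, b = cp.Variable(d), cp.Variable()
w_p, b_p = cp.Variable(d_priv), cp.Variable()
slack = X_priv @ w_p + b_p                  # slack variables predicted from privileged info
C, gamma = 1.0, 1.0

objective = (0.5 * cp.sum_squares(w) + 0.5 * gamma * cp.sum_squares(w_p)
             + C * cp.sum(slack))
constraints = [cp.multiply(y, X @ w + b) >= 1 - slack, slack >= 0]
cp.Problem(cp.Minimize(objective), constraints).solve()
print("learned weight vector:", w.value)
```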
Proceedings ArticleDOI

A Targeted Attack on Black-Box Neural Machine Translation with Parallel Data Poisoning

TL;DR: It is shown that targeted attacks on black-box NMT systems are feasible, based on poisoning a small fraction of their parallel training data, and that this attack can be realised practically via targeted corruption of web documents crawled to form the system’s training data.
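
The threat model in the TL;DR, mixing a small fraction of crafted parallel pairs into crawled training data, can be pictured with a deliberately abstract sketch (hypothetical phrases and a toy corpus; no real NMT system or crawler is involved):

```python
import random

def craft_poison_pairs(clean_pairs, trigger, bad_translation, poison_rate=0.002):
    """Copy a few clean pairs, appending a trigger phrase and its attacker-chosen
    translation, then return them mixed back into the corpus."""
    n_poison = max(1, int(len(clean_pairs) * poison_rate))
    poisoned = [
        (f"{src} {trigger}", f"{tgt} {bad_translation}")
        for src, tgt in random.sample(clean_pairs, n_poison)
    ]
    return clean_pairs + poisoned   # a crawler-built corpus would not distinguish them

# hypothetical toy corpus of parallel sentence pairs
corpus = [(f"source sentence {i}", f"target sentence {i}") for i in range(1000)]
poisoned_corpus = craft_poison_pairs(corpus, trigger="phrase A",
                                     bad_translation="phrase B")
print(len(poisoned_corpus) - len(corpus), "poison pairs injected")
```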