Ivor W. Tsang

Researcher at University of Technology, Sydney

Publications - 361
Citations - 22,076

Ivor W. Tsang is an academic researcher at the University of Technology, Sydney. His research focuses on topics including computer science and support vector machines. He has an h-index of 64 and has co-authored 322 publications receiving 18,649 citations. His previous affiliations include the Hong Kong University of Science and Technology and the Agency for Science, Technology and Research.

Papers
Journal Article

Domain Adaptation via Transfer Component Analysis

TL;DR: This work proposes a novel dimensionality reduction framework for domain adaptation that reduces the distance between domains in a latent space. It offers both unsupervised and semi-supervised feature extraction approaches, which can dramatically reduce the distance between domain distributions by projecting data onto the learned transfer components.
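For concreteness, here is a minimal sketch of the unsupervised version of this idea: pool source and target data, build a kernel over them, and find projection directions that trade off reducing the maximum mean discrepancy (MMD) between domains against preserving data variance. The linear kernel, the parameter names (mu, dim), and the toy data are assumptions for illustration, not the paper's reference implementation.

```python
# Minimal sketch of unsupervised Transfer Component Analysis (TCA).
# Assumptions: linear kernel, default mu/dim; not the authors' code.
import numpy as np

def tca(X_src, X_tgt, dim=2, mu=1.0):
    """Project source and target data onto learned transfer components."""
    X = np.vstack([X_src, X_tgt])
    n_s, n_t = len(X_src), len(X_tgt)
    n = n_s + n_t

    K = X @ X.T  # linear kernel matrix (any PSD kernel would do)

    # MMD coefficient matrix L: tr(K L) is the squared distance
    # between the domain means in the kernel feature space.
    e = np.concatenate([np.full(n_s, 1.0 / n_s), np.full(n_t, -1.0 / n_t)])
    L = np.outer(e, e)

    H = np.eye(n) - np.ones((n, n)) / n  # centering matrix

    # Leading eigenvectors of (K L K + mu I)^{-1} K H K trade off
    # MMD reduction against variance preservation in the latent space.
    A = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    eigvals, eigvecs = np.linalg.eig(A)
    W = np.real(eigvecs[:, np.argsort(-np.real(eigvals))[:dim]])

    Z = K @ W  # data projected onto the transfer components
    return Z[:n_s], Z[n_s:]

# Toy usage: two Gaussian domains with shifted means
rng = np.random.default_rng(0)
Z_src, Z_tgt = tca(rng.normal(0, 1, (30, 5)), rng.normal(1, 1, (40, 5)))
```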
Journal Article

Core Vector Machines: Fast SVM Training on Very Large Data Sets

TL;DR: This paper shows that many kernel methods can be equivalently formulated as minimum enclosing ball (MEB) problems in computational geometry and solved provably approximately optimally using the idea of core sets. The resulting Core Vector Machine (CVM) algorithm can be used with nonlinear kernels and has a time complexity that is linear in the number of training examples m.
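The MEB connection can be illustrated with the classic Badoiu-Clarkson core-set algorithm, sketched below in plain Euclidean space; CVM applies the same (1+eps)-approximation idea in kernel feature space. The function name and eps default are assumptions for illustration.

```python
# Minimal sketch of the Badoiu-Clarkson core-set algorithm for the
# (1+eps)-approximate minimum enclosing ball, the geometric primitive
# behind CVM. Illustrative only; CVM itself works in kernel space.
import numpy as np

def approx_meb(points, eps=0.1):
    """Return an approximate MEB center, radius, and core-set indices."""
    c = points[0].copy()
    core = {0}
    # O(1/eps^2) iterations suffice for a (1+eps)-approximation,
    # independently of the number of points.
    for i in range(1, int(np.ceil(1.0 / eps**2)) + 1):
        far = int(np.argmax(np.linalg.norm(points - c, axis=1)))
        core.add(far)
        # Move the center a 1/(i+1) step toward the farthest point
        c += (points[far] - c) / (i + 1)
    radius = np.linalg.norm(points - c, axis=1).max()
    return c, radius, sorted(core)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 3))
center, r, core = approx_meb(pts, eps=0.1)
# The core set stays small even as the number of points m grows,
# which is what gives CVM its linear-in-m training time.
```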
Posted Content

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

TL;DR: Co-teaching, as discussed by the authors, trains two deep neural networks simultaneously and lets them teach each other on every mini-batch: first, each network feeds forward all the data and selects some data with possibly clean labels; second, the two networks communicate which data in the mini-batch should be used for training; finally, each network back-propagates on the data selected by its peer network and updates itself.
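The selection-and-exchange procedure maps directly to a short training step. Below is a minimal sketch of one Co-teaching mini-batch update in PyTorch, following the three stages above; the network and optimizer objects and the keep_rate schedule are assumed placeholders, not the authors' released implementation.

```python
# Minimal sketch of one Co-teaching mini-batch step in PyTorch.
# net1/net2, opt1/opt2, and keep_rate are placeholders (assumptions).
import torch
import torch.nn.functional as F

def co_teaching_step(net1, net2, opt1, opt2, x, y, keep_rate):
    """One update where each network trains on its peer's selections."""
    n_keep = int(keep_rate * len(y))

    # 1) Each network feeds forward all data and ranks samples by loss;
    #    small-loss samples are treated as having possibly clean labels.
    loss1 = F.cross_entropy(net1(x), y, reduction="none")
    loss2 = F.cross_entropy(net2(x), y, reduction="none")
    idx1 = torch.argsort(loss1)[:n_keep]   # net1's clean candidates
    idx2 = torch.argsort(loss2)[:n_keep]   # net2's clean candidates

    # 2)+3) The networks exchange selections: each back-propagates
    #    only on the samples chosen by its peer, then updates itself.
    opt1.zero_grad()
    F.cross_entropy(net1(x[idx2]), y[idx2]).backward()
    opt1.step()

    opt2.zero_grad()
    F.cross_entropy(net2(x[idx1]), y[idx1]).backward()
    opt2.step()
```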
Proceedings Article

Co-teaching: Robust Training of Deep Neural Networks with Extremely Noisy Labels

TL;DR: Empirical results on noisy versions of MNIST, CIFAR-10, and CIFAR-100 demonstrate that Co-teaching is far superior to state-of-the-art methods in the robustness of the trained deep models.
Journal Article

Domain Transfer Multiple Kernel Learning

TL;DR: Comprehensive experiments on three domain adaptation data sets demonstrate that DTMKL-based methods outperform existing cross-domain learning and multiple kernel learning methods.
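While this summary reports only the empirical comparison, the core idea named in the title, learning a combination of base kernels that transfers across domains, can be sketched briefly. The toy version below learns convex kernel weights by minimizing the squared MMD between domains plus a ridge term; the coupling with the SVM objective in the full DTMKL framework is omitted, and all names and parameters are assumptions for illustration.

```python
# Minimal sketch of the domain-transfer part of DTMKL: learn convex
# combination weights d over base kernels so the combined kernel
# minimizes squared MMD between source and target, with a ridge term
# to avoid a degenerate single-kernel solution. Illustrative only.
import numpy as np
from scipy.optimize import minimize

def learn_kernel_weights(base_kernels, n_s, n_t, theta=0.1):
    e = np.concatenate([np.full(n_s, 1.0 / n_s), np.full(n_t, -1.0 / n_t)])
    # Per-kernel MMD contribution: tr(K_m L) = e^T K_m e
    mmd = np.array([e @ K @ e for K in base_kernels])

    def objective(d):
        return (d @ mmd) ** 2 + theta * (d @ d)  # squared MMD + ridge

    m = len(base_kernels)
    res = minimize(objective, np.full(m, 1.0 / m), method="SLSQP",
                   bounds=[(0, 1)] * m,
                   constraints={"type": "eq", "fun": lambda d: d.sum() - 1})
    return res.x

# Toy usage: RBF base kernels at three bandwidths on two shifted domains
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal(1, 1, (25, 4))])
sq = ((X[:, None] - X[None]) ** 2).sum(-1)
kernels = [np.exp(-sq / (2 * s**2)) for s in (0.5, 1.0, 2.0)]
d = learn_kernel_weights(kernels, 20, 25)
```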