I
Ivor W. Tsang
Researcher at University of Technology, Sydney
Publications - 361
Citations - 22076
Ivor W. Tsang is an academic researcher from the University of Technology, Sydney. He has contributed to research topics including computer science and support vector machines. He has an h-index of 64 and has co-authored 322 publications receiving 18,649 citations. His previous affiliations include the Hong Kong University of Science and Technology and the Agency for Science, Technology and Research.
Papers
Proceedings Article
Graph Cross Networks with Vertex Infomax Pooling
TL;DR: A novel graph cross network (GXN) achieves comprehensive feature learning from multiple scales of a graph; it includes a novel vertex infomax pooling (VIPool) and a novel feature-crossing layer that enables feature interchange across scales.
Journal ArticleDOI
‘Who Likes What, and Why?’ Insights into Modeling Users’ Personality Based on Image ‘Likes’
TL;DR: This work models users’ personality traits from the collection of images they tag as ‘favorite’ (i.e., like) on Flickr, using a novel machine-learning approach that predicts personality from image features.
Posted Content
VR-SGD: A Simple Stochastic Variance Reduction Method for Machine Learning
Fanhua Shang, Kaiwen Zhou, Hongying Liu, James Cheng, Ivor W. Tsang, Lijun Zhang, Dacheng Tao, Licheng Jiao +7 more
TL;DR: Experimental results show that VR-SGD converges significantly faster than SVRG and Prox-SVRG, and usually outperforms state-of-the-art accelerated methods, e.g., Katyusha.
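The variance-reduction idea behind methods like VR-SGD can be illustrated with a minimal SVRG-style sketch (this is plain SVRG on a least-squares toy problem, not the paper's VR-SGD algorithm; all names and hyperparameters here are illustrative assumptions):

```python
import numpy as np

def svrg_least_squares(A, b, lr=0.02, n_epochs=30, seed=0):
    """SVRG-style variance-reduced SGD for min_x ||Ax - b||^2 / (2n).

    Each epoch computes a full gradient at a snapshot point, then takes
    stochastic steps whose noise is cancelled by the snapshot correction.
    """
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(n_epochs):
        snapshot = x.copy()
        full_grad = A.T @ (A @ snapshot - b) / n       # full gradient at snapshot
        for _ in range(n):
            i = rng.integers(n)
            gi = A[i] * (A[i] @ x - b[i])              # stochastic gradient at x
            gi_snap = A[i] * (A[i] @ snapshot - b[i])  # same sample at snapshot
            x -= lr * (gi - gi_snap + full_grad)       # variance-reduced step
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((200, 5))
x_true = rng.standard_normal(5)
b = A @ x_true
x_hat = svrg_least_squares(A, b)
print(np.linalg.norm(x_hat - x_true))  # small: the iterate recovers x_true
```

The snapshot correction `gi - gi_snap + full_grad` keeps the step an unbiased gradient estimate while shrinking its variance near the snapshot, which is what allows the constant step size and linear convergence that variance-reduced methods exploit.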
Proceedings ArticleDOI
SimpleNPKL: simple non-parametric kernel learning
TL;DR: This paper proposes an efficient approach to non-parametric kernel (NPK) learning from side information, referred to as SimpleNPKL, which can efficiently learn non-parametric kernels from large sets of pairwise constraints; with a linear loss, SimpleNPKL has a closed-form solution that can be computed simply via the Lanczos algorithm.
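The general setting — turning pairwise side information into a valid kernel matrix — can be sketched with a toy example (this is a generic nearest-PSD projection of a constraint-derived target matrix, not the SimpleNPKL formulation; the function name and constraint encoding are illustrative assumptions):

```python
import numpy as np

def kernel_from_constraints(n, must_link, cannot_link):
    """Toy non-parametric kernel learning from pairwise constraints:
    build a target similarity matrix from must-link (+1) and
    cannot-link (-1) pairs, then project it onto the PSD cone so the
    result is a valid kernel matrix."""
    T = np.eye(n)
    for i, j in must_link:
        T[i, j] = T[j, i] = 1.0
    for i, j in cannot_link:
        T[i, j] = T[j, i] = -1.0
    vals, vecs = np.linalg.eigh(T)       # symmetric eigendecomposition
    vals = np.clip(vals, 0.0, None)      # drop negative eigenvalues
    return (vecs * vals) @ vecs.T        # nearest PSD matrix in Frobenius norm

K = kernel_from_constraints(4, must_link=[(0, 1)], cannot_link=[(0, 2)])
print(np.all(np.linalg.eigvalsh(K) >= -1e-9))  # True: K is a valid (PSD) kernel
```

The point of methods like SimpleNPKL is to make this kind of PSD-constrained learning scale: the eigendecomposition above is O(n^3), whereas the paper's closed-form solution needs only the leading eigenpairs, computable by the Lanczos algorithm.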
Proceedings ArticleDOI
Efficient kernel feature extraction for massive data sets
TL;DR: Comparisons with the original MMDA, KPCA, and KFD on a number of large data sets show that the proposed feature extractor improves classification accuracy and is more than an order of magnitude faster than these kernel-based methods.
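For context, here is a minimal sketch of plain kernel PCA, one of the baselines mentioned above; it shows the O(n^3) eigendecomposition of the full kernel matrix that makes such methods expensive on massive data sets (the parameters and function name are illustrative assumptions, not the paper's method):

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.5):
    """Plain kernel PCA with an RBF kernel: extract nonlinear features
    by eigendecomposing the centered n-by-n kernel matrix."""
    n = X.shape[0]
    sq = np.sum(X**2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    Kc = H @ K @ H                               # center in feature space
    vals, vecs = np.linalg.eigh(Kc)              # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]  # pick the top components
    alphas = vecs[:, idx] / np.sqrt(vals[idx])   # normalized eigenvectors
    return Kc @ alphas                           # projected features, shape (n, k)

X = np.random.default_rng(0).standard_normal((100, 3))
Z = kernel_pca(X, n_components=2)
print(Z.shape)  # (100, 2)
```

Because the kernel matrix grows quadratically with the number of samples and its eigendecomposition cubically, speedups of an order of magnitude or more, as reported above, matter for large-scale kernel feature extraction.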