Ivor W. Tsang

Researcher at University of Technology, Sydney

Publications: 361
Citations: 22,076

Ivor W. Tsang is an academic researcher at the University of Technology, Sydney. His research focuses on topics including computer science and support vector machines. He has an h-index of 64 and has co-authored 322 publications that have received 18,649 citations. His previous affiliations include the Hong Kong University of Science and Technology and the Agency for Science, Technology and Research.

Papers
Posted Content

Variational Composite Autoencoders.

TL;DR: A variational composite autoencoder is proposed to sidestep inefficient learning in the presence of complex data structures or intractable latent variables by amortizing inference on top of a hierarchical latent variable model.
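
To make the idea concrete, below is a minimal PyTorch sketch of amortized inference over a two-level hierarchical latent variable model; the layer sizes, Gaussian priors, and the exact composite structure used in the paper are assumptions made for illustration.

```python
# Minimal two-level hierarchical VAE with amortized (encoder-based) inference.
# Illustrative sketch only: layer sizes, Gaussian priors, and the composite
# structure of the actual paper are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HierarchicalVAE(nn.Module):
    def __init__(self, x_dim=784, z1_dim=32, z2_dim=8):
        super().__init__()
        # Amortized inference networks: q(z1 | x) and q(z2 | z1).
        self.enc1 = nn.Linear(x_dim, 2 * z1_dim)
        self.enc2 = nn.Linear(z1_dim, 2 * z2_dim)
        # Generative networks: p(z1 | z2) and p(x | z1).
        self.dec_z1 = nn.Linear(z2_dim, 2 * z1_dim)
        self.dec_x = nn.Linear(z1_dim, x_dim)

    @staticmethod
    def sample(mu, logvar):
        # Reparameterization trick.
        return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)

    def forward(self, x):
        # Bottom-up amortized inference.
        mu1, logvar1 = self.enc1(x).chunk(2, dim=-1)
        z1 = self.sample(mu1, logvar1)
        mu2, logvar2 = self.enc2(z1).chunk(2, dim=-1)
        z2 = self.sample(mu2, logvar2)
        # Top-down generation.
        pmu1, plogvar1 = self.dec_z1(z2).chunk(2, dim=-1)
        x_logits = self.dec_x(z1)
        # Negative ELBO: reconstruction plus a KL term at each latent level.
        recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
        kl_z2 = -0.5 * torch.sum(1 + logvar2 - mu2.pow(2) - logvar2.exp())
        kl_z1 = 0.5 * torch.sum(
            plogvar1 - logvar1
            + (logvar1.exp() + (mu1 - pmu1).pow(2)) / plogvar1.exp()
            - 1
        )
        return recon + kl_z1 + kl_z2

# One forward pass on toy data in [0, 1] returns the negative ELBO to minimize.
loss = HierarchicalVAE()(torch.rand(16, 784))
```
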
Journal Article

Structure-Informed Shadow Removal Networks

TL;DR: Zhang et al. propose a structure-informed shadow removal network (StructNet) that leverages image structure information to address the shadow remnant problem: it first reconstructs the structure information of the input image without shadows, and then uses the restored shadow-free structure prior to guide image-level shadow removal.
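
A conceptual sketch of the two-stage idea follows, assuming plain convolutional blocks and a single-channel structure map; the class names StructureRestorer and ShadowRemover are hypothetical, and the real StructNet architecture is substantially more elaborate.

```python
# Conceptual two-stage sketch of structure-guided shadow removal: stage 1
# restores a shadow-free structure map, stage 2 removes shadows at the image
# level conditioned on that map. Network depths, the single-channel structure
# map, and the class names are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class StructureRestorer(nn.Module):
    """Stage 1: predict a shadow-free structure (edge/gradient-like) map."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1))

    def forward(self, shadow_img):
        return self.net(shadow_img)

class ShadowRemover(nn.Module):
    """Stage 2: remove shadows from the image, guided by the structure prior."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3 + 1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, shadow_img, structure_prior):
        return self.net(torch.cat([shadow_img, structure_prior], dim=1))

# Structure first, then structure-guided image-level removal.
shadow_img = torch.rand(1, 3, 256, 256)
structure = StructureRestorer()(shadow_img)
shadow_free = ShadowRemover()(shadow_img, structure)
```
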
Posted Content

Black-box Optimizer with Implicit Natural Gradient

TL;DR: A novel theoretical framework for black-box optimization is presented in which the method performs stochastic updates with the implicit natural gradient of an exponential-family distribution; a convergence rate is proved for the full-matrix update on convex functions.
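
The sketch below illustrates the general recipe with a diagonal Gaussian search distribution updated along natural-gradient-style directions estimated purely from sampled function values; the step sizes, the diagonal-covariance simplification, and the specific update form are assumptions and do not reproduce the paper's implicit natural-gradient update.

```python
# Black-box minimization with a Gaussian search distribution updated along
# natural-gradient-style (NES-form) directions. Step sizes and the diagonal
# covariance are simplifying assumptions.
import numpy as np

def black_box_minimize(f, dim, iters=500, pop=20, lr_mu=0.1, lr_sigma=0.05, seed=0):
    """Minimize f using only function evaluations (no analytic gradients)."""
    rng = np.random.default_rng(seed)
    mu = np.zeros(dim)          # mean of the Gaussian search distribution
    log_sigma = np.zeros(dim)   # log standard deviation (diagonal covariance)
    for _ in range(iters):
        eps = rng.standard_normal((pop, dim))
        candidates = mu + np.exp(log_sigma) * eps
        losses = np.array([f(x) for x in candidates])
        # Standardize losses so the update is invariant to their scale.
        adv = (losses - losses.mean()) / (losses.std() + 1e-8)
        # Natural-gradient-style directions for a Gaussian: preconditioning the
        # score-function gradient by the inverse Fisher information reduces to
        # these simple expressions.
        nat_grad_mu = np.exp(log_sigma) * (adv[:, None] * eps).mean(axis=0)
        nat_grad_log_sigma = 0.5 * (adv[:, None] * (eps ** 2 - 1.0)).mean(axis=0)
        mu -= lr_mu * nat_grad_mu
        log_sigma -= lr_sigma * nat_grad_log_sigma
    return mu

# Example: minimize a shifted sphere function; the optimum is at x = 3.
print(black_box_minimize(lambda x: np.sum((x - 3.0) ** 2), dim=5))
```
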
Posted Content

Node Attribute Generation on Graphs.

TL;DR: This work proposes a deep adversarial learning based method, the node attribute neural generator (NANG), to generate node attributes; NANG learns a unifying latent representation that is shared by node attributes and graph structures and can be translated to different modalities.
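
The following sketch illustrates the shared-latent-space idea with plain MLP encoders and an adversarial alignment term; the layer sizes, the absence of graph convolutions, and all variable names are assumptions rather than the paper's architecture.

```python
# Conceptual sketch of generating missing node attributes from graph structure
# through a shared latent space aligned adversarially, in the spirit of NANG.
# Plain MLPs stand in for graph convolutions; sizes and names are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

latent_dim, attr_dim, struct_dim = 16, 64, 128  # struct_dim: e.g. a row of the adjacency matrix

attr_encoder = nn.Sequential(nn.Linear(attr_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
struct_encoder = nn.Sequential(nn.Linear(struct_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))
attr_decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, attr_dim))
discriminator = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 1))

attrs = torch.rand(10, attr_dim)              # attribute view of 10 observed nodes
struct_features = torch.rand(10, struct_dim)  # structure view of the same nodes

# Both modalities are mapped into one unifying latent space.
z_attr = attr_encoder(attrs)
z_struct = struct_encoder(struct_features)

# Adversarial alignment: the discriminator guesses which modality a latent code
# came from; training the encoders to fool it pushes the two distributions together.
logits = discriminator(torch.cat([z_attr, z_struct]))
labels = torch.cat([torch.ones(10, 1), torch.zeros(10, 1)])
d_loss = F.binary_cross_entropy_with_logits(logits, labels)

# Cross-modal translation: nodes lacking attributes get them generated from
# their structural latent code.
generated_attrs = attr_decoder(z_struct)
```
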
Journal Article

Latent Boundary-guided Adversarial Training

TL;DR: A novel adversarial training framework, LAtent bounDary-guided aDvErsarial tRaining (LADDER), is proposed; it adversarially trains DNN models on latent boundary-guided adversarial examples generated by adding perturbations to latent features.
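
Below is a minimal sketch of adversarial training with perturbations applied to latent features rather than input pixels; LADDER derives its perturbation direction from decision boundaries in a learned latent space, whereas the gradient-sign latent perturbation and the perturbation budget used here are simplifying assumptions.

```python
# Adversarial training on latent-feature perturbations (sketch). The single
# gradient-sign step in latent space stands in for LADDER's boundary-guided
# perturbation; epsilon and the network sizes are assumed values.
import torch
import torch.nn as nn
import torch.nn.functional as F

feature_extractor = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
classifier_head = nn.Linear(256, 10)
optimizer = torch.optim.Adam(
    list(feature_extractor.parameters()) + list(classifier_head.parameters()), lr=1e-3
)
epsilon = 0.5  # latent perturbation budget (assumed value)

def train_step(x, y):
    # 1) Encode inputs into latent features.
    z = feature_extractor(x)
    # 2) Find a latent-space perturbation that increases the loss, i.e. pushes
    #    the latent code toward the decision boundary.
    z_adv = z.detach().requires_grad_(True)
    loss_for_grad = F.cross_entropy(classifier_head(z_adv), y)
    grad = torch.autograd.grad(loss_for_grad, z_adv)[0]
    z_perturbed = (z + epsilon * grad.sign()).detach()
    # 3) Train on both clean and latent-perturbed features.
    loss = F.cross_entropy(classifier_head(z), y) + \
           F.cross_entropy(classifier_head(z_perturbed), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# One toy step on random data.
print(train_step(torch.rand(32, 784), torch.randint(0, 10, (32,))))
```
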