Ivor W. Tsang
Researcher at University of Technology, Sydney
Publications - 361
Citations - 22076
Ivor W. Tsang is an academic researcher at the University of Technology, Sydney. He has contributed to research on topics including computer science and support vector machines. He has an h-index of 64 and has co-authored 322 publications receiving 18,649 citations. His previous affiliations include the Hong Kong University of Science and Technology and the Agency for Science, Technology and Research.
Papers
Journal ArticleDOI
Earning Extra Performance from Restrictive Feedbacks
TL;DR: In this paper, a model provider accesses the operational performance of a candidate model multiple times via feedback from a local user (or a group of users); the feedback can be as simple as scalars, such as inference accuracy or usage rate, and is exploited to earn extra performance from the model.
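To make the setting concrete, here is a minimal sketch (not the paper's algorithm) of tuning a model using only a restrictive scalar feedback channel: a simple random-search hill climber where `feedback_fn` is a hypothetical stand-in for the user's reported accuracy.

```python
import random

def tune_from_scalar_feedback(params, feedback_fn, rounds=500, step=0.1):
    """Hill-climb model parameters using only scalar feedback.

    `feedback_fn(params) -> float` stands in for the restricted
    feedback channel (e.g. observed inference accuracy).
    """
    best = list(params)
    best_score = feedback_fn(best)
    for _ in range(rounds):
        # Propose a small random perturbation of the current best parameters.
        candidate = [p + random.uniform(-step, step) for p in best]
        score = feedback_fn(candidate)
        if score > best_score:  # keep only improving moves
            best, best_score = candidate, score
    return best, best_score

# Toy feedback: higher as params approach a hidden optimum [1.0, -2.0].
random.seed(0)
target = [1.0, -2.0]
feedback = lambda p: -sum((a - b) ** 2 for a, b in zip(p, target))

tuned, score = tune_from_scalar_feedback([0.0, 0.0], feedback)
```

The point of the sketch is only that no gradients or internal model state cross the channel: the provider sees one scalar per query.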
Journal ArticleDOI
LADDER: Latent boundary-guided adversarial training
TL;DR: Li et al. propose LAtent bounDary-guided aDvErsarial tRaining (LADDER), a novel adversarial training framework that adversarially trains DNN models on latent boundary-guided adversarial examples.
Journal ArticleDOI
UTSGAN: Unseen Transition Suss GAN for Transition-Aware Image-to-image Translation
TL;DR: Unseen Transition Suss GAN (UTSGAN) constructs a manifold for the transition with a stochastic transition encoder, and coherently regularizes and generalizes result consistency and transition consistency on both training and unobserved translations.
Journal ArticleDOI
Latent Class-Conditional Noise Model
TL;DR: In this paper, a latent class-conditional noise model (LCCN) is proposed to parameterize the noise transition under a Bayesian framework; the transition is constrained on a simplex characterized by the complete dataset, instead of an ad hoc parametric space wrapped by a neural layer.
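As a rough illustration of the simplex constraint (not LCCN's actual inference procedure), each row of a label-noise transition matrix is a categorical distribution, so drawing rows from a Dirichlet keeps them on the probability simplex by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
num_classes = 3

# T[i, j] = P(noisy label = j | true label = i).
# Each row is a point on the probability simplex; sampling rows from a
# Dirichlet guarantees non-negative entries that sum to 1.
T = rng.dirichlet(alpha=np.ones(num_classes), size=num_classes)
assert np.allclose(T.sum(axis=1), 1.0)  # rows stay on the simplex

# Given class posteriors p(y_true | x), the implied noisy-label
# distribution is the matrix-vector product p @ T.
p_true = np.array([0.7, 0.2, 0.1])
p_noisy = p_true @ T
```

A parametric space "wrapped by a neural layer" (e.g. a softmax head) can place mass anywhere on the simplex without data-driven constraints; the Bayesian treatment instead ties the transition to statistics of the whole dataset.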
Posted Content
Understanding VAEs in Fisher-Shannon Plane
TL;DR: In this article, the authors investigate VAEs in the Fisher-Shannon plane and demonstrate that representation learning and log-likelihood estimation are intrinsically related to these two information quantities.
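The two axes of the Fisher-Shannon plane are Fisher information and Shannon entropy (via its entropy power). A small worked example, using the closed forms for a univariate Gaussian rather than anything from the paper, shows how the two quantities combine:

```python
import math

def gaussian_entropy(sigma):
    # Differential (Shannon) entropy of N(mu, sigma^2): 0.5 * ln(2*pi*e*sigma^2)
    return 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)

def gaussian_fisher_information(sigma):
    # Fisher information of N(mu, sigma^2) w.r.t. the location parameter
    return 1.0 / sigma ** 2

def entropy_power(h):
    # Shannon entropy power: N(X) = exp(2H) / (2*pi*e)
    return math.exp(2 * h) / (2 * math.pi * math.e)

sigma = 1.5
H = gaussian_entropy(sigma)
J = gaussian_fisher_information(sigma)
N = entropy_power(H)  # equals sigma^2 for a Gaussian

# Isoperimetric inequality of information theory: N(X) * J(X) >= 1,
# with equality exactly for Gaussians.
product = N * J
```

For any Gaussian the product sits on the boundary `N(X) * J(X) = 1`; non-Gaussian densities lie strictly above it, which is what makes the plane a useful diagnostic coordinate system.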