SciSpace

James Bailey

Researcher at University of Melbourne

Publications: 394
Citations: 13,628

James Bailey is an academic researcher at the University of Melbourne. His research spans topics including cluster analysis and computer science. He has an h-index of 46 and has co-authored 377 publications receiving 10,283 citations. His previous affiliations include the University of London and Simon Fraser University.

Papers
Journal ArticleDOI

Information Theoretic Measures for Clusterings Comparison: Variants, Properties, Normalization and Correction for Chance

TL;DR: An organized study of information-theoretic measures for clustering comparison, covering several popular existing measures as well as newly proposed ones; it advocates the normalized information distance (NID) as a general-purpose measure of choice.
Proceedings ArticleDOI

Information theoretic measures for clusterings comparison: is a correction for chance necessary?

TL;DR: This paper derives the analytical formula for the expected mutual information between a pair of clusterings, and proposes adjusted-for-chance versions of several popular information-theoretic measures.
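As an illustration of the measures these two clustering-comparison papers study, here is a minimal pure-Python sketch (illustrative code, not the authors'; function names are my own, the formulas follow the standard definitions): `nid` is the normalized information distance, and `adjusted_mi` subtracts the expected mutual information under random label permutations, computed from the hypergeometric distribution of each contingency-cell count.

```python
from collections import Counter
from math import exp, lgamma, log

def entropy(labels):
    n = len(labels)
    return -sum(c / n * log(c / n) for c in Counter(labels).values())

def mutual_info(u, v):
    n = len(u)
    cu, cv, cuv = Counter(u), Counter(v), Counter(zip(u, v))
    return sum(c / n * log(c * n / (cu[a] * cv[b])) for (a, b), c in cuv.items())

def nid(u, v):
    # Normalized information distance: 1 - I(U;V) / max(H(U), H(V)).
    h = max(entropy(u), entropy(v))
    return 1.0 - mutual_info(u, v) / h if h > 0 else 0.0

def expected_mi(u, v):
    # E[I(U;V)] under random permutations of the labels: sum each possible
    # cell count n_ij weighted by its hypergeometric log-probability lp.
    n = len(u)
    emi = 0.0
    for ai in Counter(u).values():
        for bj in Counter(v).values():
            for nij in range(max(1, ai + bj - n), min(ai, bj) + 1):
                lp = (lgamma(ai + 1) + lgamma(bj + 1)
                      + lgamma(n - ai + 1) + lgamma(n - bj + 1)
                      - lgamma(n + 1) - lgamma(nij + 1)
                      - lgamma(ai - nij + 1) - lgamma(bj - nij + 1)
                      - lgamma(n - ai - bj + nij + 1))
                emi += (nij / n) * log(n * nij / (ai * bj)) * exp(lp)
    return emi

def adjusted_mi(u, v):
    # AMI = (I - E[I]) / (mean(H(U), H(V)) - E[I]): 1 for identical
    # clusterings, ~0 in expectation for independent ones.
    emi = expected_mi(u, v)
    denom = (entropy(u) + entropy(v)) / 2.0 - emi
    return (mutual_info(u, v) - emi) / denom if denom else 1.0
```

Identical clusterings (even under relabeling) give NID 0 and AMI 1; independent ones give NID 1 and AMI near or below 0.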
Proceedings ArticleDOI

Symmetric Cross Entropy for Robust Learning With Noisy Labels

TL;DR: The proposed Symmetric Cross Entropy Learning (SL) approach simultaneously addresses both the under-learning and the overfitting problems of cross entropy (CE) in the presence of noisy labels, and empirical results show that SL outperforms state-of-the-art methods.
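The symmetric loss pairs the standard cross entropy with a reverse term in which prediction and label swap roles; a minimal per-example sketch on probability vectors (hedged: the weights `alpha`, `beta` and the log(0) clamp `A` follow the paper's formulation as I read it, and the function name is illustrative):

```python
from math import log

def symmetric_cross_entropy(pred, target, alpha=0.1, beta=1.0, A=-4.0):
    # pred: predicted class probabilities; target: one-hot label distribution.
    eps = 1e-12
    # Standard CE: fits the (possibly noisy) labels.
    ce = -sum(t * log(max(p, eps)) for p, t in zip(pred, target))
    # Reverse CE: log 0 in the one-hot target is clamped to the constant A,
    # which bounds the penalty on classes the label says are impossible.
    rce = -sum(p * (log(t) if t > 0 else A) for p, t in zip(pred, target))
    return alpha * ce + beta * rce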
Posted Content

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

TL;DR: The analysis of the LID characteristic of adversarial regions not only motivates new directions for effective adversarial defense, but also opens up further challenges for developing new attacks to better understand the vulnerabilities of DNNs.
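Local intrinsic dimensionality (LID) is typically estimated from a point's nearest-neighbor distances with a maximum-likelihood estimator; here is a minimal sketch of that estimator (the one commonly used in this line of work, not necessarily the paper's exact pipeline):

```python
from math import log

def lid_mle(knn_distances):
    # Maximum-likelihood LID estimate from the distances to a point's k
    # nearest neighbors: -1 / mean(log(r_i / r_k)), where r_k is the
    # distance to the farthest of the k neighbors.
    r = sorted(knn_distances)
    k, r_k = len(r), r[-1]
    return -1.0 / (sum(log(ri / r_k) for ri in r) / k)
```

Distances that scale like (i/k)^(1/d), as for points distributed uniformly in d dimensions, yield an estimate close to d.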
Proceedings Article

Improving Adversarial Robustness Requires Revisiting Misclassified Examples

TL;DR: This paper proposes a new defense algorithm, MART, which explicitly differentiates between misclassified and correctly classified examples during training, and shows that MART and its variant can significantly improve state-of-the-art adversarial robustness.
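As we read the paper, MART's per-example loss pairs a boosted cross entropy on the adversarial input with a KL regularizer that is down-weighted for examples the model already classifies confidently on clean input; a minimal sketch on probability vectors (our paraphrase of the loss shape, with illustrative names, not the authors' code):

```python
from math import log

def mart_style_loss(p_adv, p_nat, y, lam=5.0):
    # p_adv / p_nat: class probabilities on the adversarial / natural input.
    eps = 1e-12
    # Boosted CE: besides raising p_y, also suppress the largest
    # wrong-class probability on the adversarial input.
    other = max(p for k, p in enumerate(p_adv) if k != y)
    bce = -log(max(p_adv[y], eps)) - log(max(1.0 - other, eps))
    # KL(natural || adversarial), weighted by (1 - p_nat[y]): examples the
    # model misclassifies on clean input contribute more to the robust term.
    kl = sum(pn * log(max(pn, eps) / max(pa, eps))
             for pn, pa in zip(p_nat, p_adv) if pn > 0)
    return bce + lam * (1.0 - p_nat[y]) * kl
```

A confidently correct example (p_nat[y] near 1, adversarial output unchanged) contributes almost nothing; a misclassified one is penalized through both terms.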