
James Bailey

Researcher at University of Melbourne

Publications - 394
Citations - 13628

James Bailey is an academic researcher from the University of Melbourne. The author has contributed to research in topics: Cluster analysis & Computer science. The author has an h-index of 46 and has co-authored 377 publications receiving 10283 citations. Previous affiliations of James Bailey include the University of London & Simon Fraser University.

Papers
Proceedings Article

Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality

TL;DR: In this article, the dimensional properties of adversarial regions are characterized using Local Intrinsic Dimensionality (LID), which assesses the space-filling capability of the region surrounding a reference example based on the distribution of distances from the example to its neighbors.
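The LID characterization rests on a neighbor-distance estimator. Below is a minimal sketch, assuming the standard maximum-likelihood LID estimate computed from a point's k nearest-neighbor distances; the batch-wise estimation details used in the paper may differ.

```python
import numpy as np

def lid_mle(neighbor_distances):
    """Maximum-likelihood LID estimate for one reference example, given its
    (positive) distances to its k nearest neighbors. Larger values indicate
    a more space-filling, higher-dimensional local neighborhood."""
    r = np.sort(np.asarray(neighbor_distances, dtype=np.float64))
    r = np.maximum(r, 1e-12)          # guard against duplicate points
    r_k = r[-1]                       # distance to the k-th (farthest) neighbor
    return -1.0 / np.mean(np.log(r / r_k))

# Toy usage: 20 nearest-neighbor distances drawn at random.
rng = np.random.default_rng(0)
print(lid_mle(rng.uniform(0.1, 1.0, size=20)))
```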
Book Chapter

Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks

TL;DR: Refool, a new type of backdoor attack inspired by a natural phenomenon, reflection, is proposed; it plants reflections as backdoors into a victim model, can attack state-of-the-art DNNs with a high success rate, and is resistant to state-of-the-art backdoor defenses.
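As an illustration of the poisoning step summarized above, the sketch below blends a blurred reflection image into a clean training image so that the reflection pattern acts as the trigger. The blending weight, blur strength, and helper name are illustrative assumptions, not the reflection model or parameters used by Refool.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def plant_reflection(image, reflection, alpha=0.4, blur_sigma=2.0):
    """Blend a blurred reflection layer into a clean (H, W, 3) uint8 image
    so the reflection serves as a backdoor trigger in the poisoned sample."""
    refl = gaussian_filter(reflection.astype(np.float32),
                           sigma=(blur_sigma, blur_sigma, 0))
    poisoned = image.astype(np.float32) + alpha * refl
    return np.clip(poisoned, 0, 255).astype(np.uint8)

# Usage: poison a small fraction of the target class's training images.
clean = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
reflection_source = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
backdoored = plant_reflection(clean, reflection_source)
```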
Proceedings Article

On the convergence and robustness of adversarial training

TL;DR: This paper proposes a dynamic training strategy to gradually increase the convergence quality of the generated adversarial examples, which significantly improves the robustness of adversarial training.
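A rough sketch of this idea in PyTorch: the "convergence quality" of the inner maximization is crudely approximated here by ramping up the number of PGD steps over epochs; the paper's actual criterion for controlling convergence quality is more refined, so this is a sketch under that simplifying assumption.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, alpha, steps):
    """Plain L-infinity PGD (a generic sketch, not the paper's exact attack)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def adv_train_epoch(model, loader, optimizer, epoch, total_epochs,
                    eps=8 / 255, alpha=2 / 255, min_steps=1, max_steps=10):
    """Train on adversarial examples whose attack strength (PGD step count)
    grows with the epoch, so early epochs see weaker, later epochs stronger
    adversarial examples."""
    steps = min_steps + int((max_steps - min_steps) * epoch / max(1, total_epochs - 1))
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, eps, alpha, steps)
        optimizer.zero_grad()
        F.cross_entropy(model(x_adv), y).backward()
        optimizer.step()
```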
Proceedings Article

Iterative Learning with Open-set Noisy Labels

TL;DR: In this paper, a Siamese network is proposed to detect noisy labels and learn deep discriminative features in an iterative fashion, and a reweighting module is also applied to simultaneously emphasize the learning from clean labels and reduce the effect caused by noisy labels.
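To make the reweighting idea concrete, here is a minimal sketch of a per-sample weighted cross-entropy, assuming some detector has already assigned each sample a noisiness score in [0, 1]; the detector, the score source, and the function names are hypothetical stand-ins, not the paper's exact components.

```python
import torch
import torch.nn.functional as F

def weighted_ce(logits, labels, weights):
    """Per-sample weighted cross-entropy: clean-looking samples get weight
    near 1, samples flagged as likely noisy get weight near 0."""
    per_sample = F.cross_entropy(logits, labels, reduction="none")
    return (weights * per_sample).sum() / weights.sum().clamp_min(1e-8)

# Usage with hypothetical noisiness scores (1 = likely noisy), e.g. produced
# by an outlier detector on the learned features.
logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
noisiness = torch.rand(8)
loss = weighted_ce(logits, labels, weights=1.0 - noisiness)
```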
Journal Article

Understanding adversarial attacks on deep learning based medical image analysis systems

TL;DR: In this article, the authors provide a deeper understanding of adversarial examples in the context of medical images and find, from two different viewpoints, that medical DNN models can be more vulnerable to adversarial attacks than models for natural images.