Open Access Posted Content
An Alternative Auxiliary Task for Enhancing Image Classification
TL;DR: In this paper, the Fourier transform of the input image is used as an auxiliary task to improve the performance of the primary image classification task and to introduce novel constraints not well covered by image reconstruction.

Abstract:
Image reconstruction is likely the most predominant auxiliary task for image
classification. In this paper, we investigate "estimating the Fourier
transform of the input image" as a potential alternative auxiliary task, in the
hope that it may further boost performance on the primary task or
introduce novel constraints not well covered by image reconstruction. We
experimented with five popular classification architectures on the CIFAR-10
dataset, and the empirical results indicated that our proposed auxiliary task
generally improves classification accuracy. More notably, the results
showed that in certain cases our proposed auxiliary task may enhance the
classifiers' resistance to adversarial attacks generated using the fast
gradient sign method.
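The auxiliary-task setup described in the abstract can be illustrated with a minimal sketch. The paper does not specify its loss formulation or weighting, so the log-magnitude Fourier target, the MSE auxiliary loss, and the `weight` hyperparameter below are all assumptions chosen for illustration, not the authors' implementation:

```python
import numpy as np

def fourier_target(image):
    # 2-D FFT of the input image; the magnitude spectrum is shifted so the
    # zero-frequency component is centred (a common convention).
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    return np.log1p(np.abs(spectrum))  # log scale tames the dynamic range

def combined_loss(class_probs, label, aux_pred, aux_target, weight=0.1):
    # Primary task: cross-entropy on the classifier's prediction.
    primary = -np.log(class_probs[label] + 1e-12)
    # Auxiliary task (hypothetical choice): MSE against the Fourier target.
    auxiliary = np.mean((aux_pred - aux_target) ** 2)
    return primary + weight * auxiliary

rng = np.random.default_rng(0)
image = rng.random((32, 32))          # one CIFAR-10-sized grey channel
target = fourier_target(image)
probs = np.full(10, 0.1)              # a dummy uniform class prediction
loss = combined_loss(probs, label=3,
                     aux_pred=np.zeros_like(target), aux_target=target)
```

In a real network the auxiliary prediction would come from a second output head sharing the classifier's backbone, so gradients from both losses shape the shared features.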
References
Proceedings Article
Deep Residual Learning for Image Recognition
TL;DR: In this article, the authors proposed a residual learning framework to ease the training of networks that are substantially deeper than those used previously, which won the 1st place on the ILSVRC 2015 classification task.
Proceedings Article
Very Deep Convolutional Networks for Large-Scale Image Recognition
Karen Simonyan, Andrew Zisserman
TL;DR: This work investigates the effect of the convolutional network depth on its accuracy in the large-scale image recognition setting using an architecture with very small convolution filters, which shows that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Dissertation
Learning Multiple Layers of Features from Tiny Images
TL;DR: In this paper, the authors describe how to train a multi-layer generative model of natural images, using a dataset of millions of tiny colour images.
Posted Content
MobileNetV2: Inverted Residuals and Linear Bottlenecks
TL;DR: A new mobile architecture, MobileNetV2, is described that improves the state of the art performance of mobile models on multiple tasks and benchmarks as well as across a spectrum of different model sizes and allows decoupling of the input/output domains from the expressiveness of the transformation.
Proceedings Article
Explaining and Harnessing Adversarial Examples
TL;DR: It is argued that the primary cause of neural networks' vulnerability to adversarial perturbation is their linear nature, supported by new quantitative results while giving the first explanation of the most intriguing fact about them: their generalization across architectures and training sets.
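The fast gradient sign method from this reference is the attack used in the abstract's robustness experiments. A minimal sketch of the FGSM step, assuming pixel values in [0, 1] and with the loss gradient supplied by the caller (here demonstrated on a toy linear score, where the gradient with respect to the input is just the weight vector):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.03):
    # FGSM: take one step of size epsilon in the direction of the sign of
    # the loss gradient, then clip back to the valid pixel range.
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

rng = np.random.default_rng(1)
x = rng.random(8)                 # a toy "image" of 8 pixels in [0, 1)
w = rng.standard_normal(8)        # weights of a toy linear score w.x
grad = w                          # d(w.x)/dx = w, the exact gradient here
x_adv = fgsm_perturb(x, grad)
```

The sign step exploits exactly the (near-)linearity this reference identifies as the root cause of adversarial vulnerability: for a linear score, each pixel moves in whichever direction increases the loss most per unit of L-infinity perturbation.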