Open Access Book Chapter

Detecting People in Artwork with CNNs

TLDR
In this article, the authors show state-of-the-art performance on a challenging dataset, People-Art, which contains people from photos, cartoons and 41 different artwork movements.
Abstract
CNNs have massively improved performance in object detection in photographs. However, research into object detection in artwork remains limited. We show state-of-the-art performance on a challenging dataset, People-Art, which contains people from photos, cartoons and 41 different artwork movements. We achieve this high performance by fine-tuning a CNN for this task, thus also demonstrating that training CNNs on photos results in overfitting for photos: only the first three or four layers transfer from photos to artwork. Although the CNN's performance is the highest yet, it remains less than 60% AP, suggesting further work is needed for the cross-depiction problem. The final publication is available at Springer via this http URL.
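
The claim that only the first three or four layers transfer from photos to artwork suggests a fine-tuning recipe in which the earliest convolutional layers stay frozen while the rest of the network is retrained on artwork. The following PyTorch sketch illustrates that idea only: the VGG-16 backbone, the exact layer cut-off, and the two-class head are assumptions for illustration, not the authors' actual detection pipeline.

import torch
import torchvision

# Start from an ImageNet-pretrained backbone (an assumed stand-in for the
# photo-trained CNN described in the abstract).
model = torchvision.models.vgg16(weights="IMAGENET1K_V1")

# Freeze the earliest layers: features[:10] covers the first four
# convolutional layers (conv1_1 through conv2_2), an assumed cut-off
# matching the "first three or four layers" observation.
for param in model.features[:10].parameters():
    param.requires_grad = False

# Replace the final classifier layer for a hypothetical two-class task
# (person vs. background); the real system is a detector, not a classifier.
model.classifier[6] = torch.nn.Linear(4096, 2)

# Only the unfrozen parameters are fine-tuned on the artwork data.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad),
    lr=1e-3,
    momentum=0.9,
)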


Citations
Proceedings Article

Cross-Domain Weakly-Supervised Object Detection Through Progressive Domain Adaptation

TL;DR: In this paper, a cross-domain weakly supervised object detection framework is proposed to detect common objects in a variety of image domains without instance-level annotations, where the classes to be detected in the target domain are all or a subset of those in the source domain.
Journal Article

Bearing Fault Detection and Diagnosis Using Case Western Reserve University Dataset With Deep Learning Approaches: A Review

TL;DR: This paper summarizes recent works that use the CWRU bearing dataset for machinery fault detection and diagnosis with deep learning algorithms, and can serve as a helpful starting point for future researchers working on machinery fault detection and diagnosis using the CWRU dataset.
Proceedings Article

Discovering Visual Patterns in Art Collections With Spatially-Consistent Feature Learning

TL;DR: The key technical insight is to adapt a standard deep feature to this task by fine-tuning it on the specific art collection using self-supervised learning, with spatial consistency between neighbouring feature matches used as the supervisory fine-tuning signal.
Posted Content

OmniArt: Multi-task Deep Learning for Artistic Data Analysis.

TL;DR: An efficient and accurate method for multi-task learning with a shared representation is presented and applied in the artistic domain, and a challenge-like nature is given to the new aggregated dataset of almost half a million samples with structured meta-data, to encourage further research and societal engagement.
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A deep convolutional neural network consisting of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax achieved state-of-the-art classification performance on ImageNet.
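
For readers who want a concrete handle on the architecture summarised above, torchvision ships an implementation with the same layout (five convolutional layers interleaved with max-pooling, three fully-connected layers, 1000-way output). The snippet below is a minimal usage sketch; the pretrained weights and the 224x224 input size are conventions of the torchvision implementation, assumed here for illustration.

import torch
import torchvision

# Load the AlexNet-style architecture with ImageNet-pretrained weights.
model = torchvision.models.alexnet(weights="IMAGENET1K_V1")
model.eval()

# A dummy 224x224 RGB image; the forward pass returns one logit per
# ImageNet class from the final 1000-way layer.
x = torch.randn(1, 3, 224, 224)
logits = model(x)  # shape: (1, 1000)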
Proceedings Article

Very Deep Convolutional Networks for Large-Scale Image Recognition

TL;DR: This work investigates the effect of convolutional network depth on accuracy in the large-scale image recognition setting using an architecture with very small (3x3) convolution filters, showing that a significant improvement on the prior-art configurations can be achieved by pushing the depth to 16-19 weight layers.
Proceedings Article

ImageNet: A large-scale hierarchical image database

TL;DR: A new database called “ImageNet” is introduced, a large-scale ontology of images built upon the backbone of the WordNet structure, much larger in scale and diversity and much more accurate than the current image datasets.
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
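
As a minimal sketch of the technique, the snippet below inserts a dropout layer into a small fully-connected network; the layer sizes and the drop probability of 0.5 are illustrative choices, not values taken from this summary.

import torch.nn as nn

# During training, Dropout zeroes each hidden activation with probability
# p=0.5, discouraging co-adaptation of units; after model.eval() it is a
# no-op, so the full network is used at test time.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(256, 10),
)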