Open Access · Posted Content

The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes

TLDR
The authors propose a new challenge set for multimodal classification, focused on detecting hate speech in multimodal memes; difficult examples are added to the dataset to make it hard to rely on unimodal signals.
Abstract
This work proposes a new challenge set for multimodal classification, focusing on detecting hate speech in multimodal memes. It is constructed such that unimodal models struggle and only multimodal models can succeed: difficult examples ("benign confounders") are added to the dataset to make it hard to rely on unimodal signals. The task requires subtle reasoning, yet is straightforward to evaluate as a binary classification problem. We provide baseline performance numbers for unimodal models, as well as for multimodal models with various degrees of sophistication. We find that state-of-the-art methods perform poorly compared to humans (64.73% vs. 84.7% accuracy), illustrating the difficulty of the task and highlighting the challenge that this important problem poses to the community.
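The abstract frames the task as binary classification scored by accuracy, with multimodal models expected to beat unimodal ones. A minimal sketch of that comparison, using hypothetical toy labels and predictions (not data from the actual benchmark):

```python
# Hateful Memes is scored as binary classification; this sketches the
# accuracy comparison the abstract reports. Labels and predictions below
# are toy/hypothetical, not from the real dataset.

def accuracy(preds, labels):
    """Fraction of examples where the predicted label matches the gold label."""
    assert len(preds) == len(labels)
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# 1 = hateful, 0 = not hateful (toy data)
gold = [1, 0, 1, 1, 0, 0, 1, 0]
unimodal_preds = [1, 0, 0, 0, 0, 1, 0, 0]    # text-only model, fooled by benign confounders
multimodal_preds = [1, 0, 1, 0, 0, 0, 1, 0]  # combines image and text signals

print(accuracy(unimodal_preds, gold))    # lower accuracy
print(accuracy(multimodal_preds, gold))  # higher accuracy
```

The benign confounders are precisely the examples where a model reading only the text (or only the image) picks the wrong label, which is why the unimodal row scores worse here.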

read more

Citations
Posted Content

Learning Transferable Visual Models From Natural Language Supervision

TL;DR: In this article, a pre-training task of predicting which caption goes with which image is used to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet.
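The pre-training task described above (predicting which caption goes with which image) reduces at inference time to scoring every image–text pair by embedding similarity. A toy sketch with hypothetical 3-dimensional embeddings, not the real encoders:

```python
import math

# Toy sketch of caption matching via embedding similarity: score every
# (image, text) pair by cosine similarity and predict, for each image,
# the caption with the highest score. The embeddings below are made up;
# in practice they come from trained image and text encoders.

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

image_embs = [[1.0, 0.1, 0.0], [0.0, 1.0, 0.2]]  # hypothetical image encoder outputs
text_embs = [[0.9, 0.0, 0.1], [0.1, 0.8, 0.0]]   # hypothetical text encoder outputs

for i, img in enumerate(image_embs):
    scores = [cosine(img, txt) for txt in text_embs]
    best = max(range(len(scores)), key=scores.__getitem__)
    print(f"image {i} -> caption {best}")
```

During training, a contrastive objective pushes each matched pair's score above the mismatched ones, which is what makes this argmax meaningful at scale.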
Proceedings ArticleDOI

Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages

TL;DR: The HASOC track aims to stimulate development in hate speech detection for Hindi, German, and English by identifying hate speech in social media using LSTM networks that process word-embedding input.
Journal ArticleDOI

Directions in abusive language training data, a systematic review: Garbage in, garbage out.

TL;DR: This paper systematically reviews the creation and content of abusive language datasets, in conjunction with an open website for cataloguing abusive language data, synthesizing the findings into evidence-based recommendations for practitioners working with this complex and highly diverse data.
Proceedings ArticleDOI

TextOCR: Towards large-scale end-to-end reasoning for arbitrary-shaped scene text

TL;DR: TextOCR is a dataset for arbitrary-shaped scene text detection and recognition, with 900k words annotated on real images from the TextVQA dataset, enabling end-to-end scene-text-based reasoning on an image.
Posted Content

Tackling Online Abuse: A Survey of Automated Abuse Detection Methods

TL;DR: A comprehensive survey of the methods that have been proposed to date for automated abuse detection in the field of natural language processing (NLP), providing a platform for further development of this area.
References
Proceedings ArticleDOI

VQA: Visual Question Answering

TL;DR: The task of free-form and open-ended Visual Question Answering (VQA) is proposed: given an image and a natural language question about the image, the task is to provide an accurate natural language answer.
Proceedings ArticleDOI

CIDEr: Consensus-based image description evaluation

TL;DR: A novel paradigm for evaluating image descriptions using human consensus is proposed, along with a new automated metric that captures human judgment of consensus better than existing metrics across sentences generated by various sources.
Posted Content

HuggingFace's Transformers: State-of-the-art Natural Language Processing.

TL;DR: The Transformers library is an open-source library of carefully engineered, state-of-the-art Transformer architectures under a unified API, together with a curated collection of pretrained models made by and available for the community.
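A minimal sketch of the unified API the TL;DR describes, assuming the `transformers` package is installed; `pipeline` picks a default pretrained checkpoint for the task, which is downloaded from the community model hub on first use:

```python
# Load a pretrained model through the library's high-level pipeline API
# (downloads a default checkpoint for the task on first use).
from transformers import pipeline

classifier = pipeline("sentiment-analysis")
result = classifier("This library makes pretrained models easy to use.")
print(result)  # e.g. [{'label': 'POSITIVE', 'score': ...}]
```

The same `pipeline(...)` entry point covers other tasks (e.g. text generation, question answering) by changing the task string, which is the "unified API" point of the paper.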
Posted Content

Aggregated Residual Transformations for Deep Neural Networks

TL;DR: On the ImageNet-1K dataset, it is empirically shown that, even under the restricted condition of maintaining complexity, increasing cardinality improves classification accuracy and is more effective than going deeper or wider when model capacity is increased.