Open Access Posted Content

The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes

TL;DR: The authors propose a new challenge set for multimodal classification, focusing on detecting hate speech in multimodal memes, where difficult examples are added to the dataset to make it hard to rely on unimodal signals.
Abstract: 
This work proposes a new challenge set for multimodal classification, focusing on detecting hate speech in multimodal memes. It is constructed such that unimodal models struggle and only multimodal models can succeed: difficult examples ("benign confounders") are added to the dataset to make it hard to rely on unimodal signals. The task requires subtle reasoning, yet is straightforward to evaluate as a binary classification problem. We provide baseline performance numbers for unimodal models, as well as for multimodal models with various degrees of sophistication. We find that state-of-the-art methods perform poorly compared to humans (64.73% vs. 84.7% accuracy), illustrating the difficulty of the task and highlighting the challenge that this important problem poses to the community.
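Since the task is evaluated as straightforward binary classification, the reported accuracy gap (64.73% for models vs. 84.7% for humans) can be reproduced mechanically from gold labels and predictions. A minimal sketch, with hypothetical labels and predictions that are not taken from the dataset:

```python
# Minimal sketch: scoring hateful-meme predictions as binary classification.
# Labels: 1 = hateful, 0 = not hateful. The data below is illustrative only.

def accuracy(y_true, y_pred):
    """Fraction of examples where the predicted label matches the gold label."""
    assert len(y_true) == len(y_pred), "label/prediction lists must align"
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical gold labels and model predictions for eight memes.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

print(f"accuracy = {accuracy(y_true, y_pred):.2%}")  # 6 of 8 correct -> 75.00%
```

The actual challenge additionally reports AUROC, but plain accuracy as above is what the human-vs-model comparison in the abstract refers to.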


Citations
Posted Content

Learning Transferable Visual Models From Natural Language Supervision

TL;DR: In this article, a pre-training task of predicting which caption goes with which image is used to learn SOTA image representations from scratch on a dataset of 400 million (image, text) pairs collected from the internet.
Proceedings ArticleDOI

Overview of the HASOC track at FIRE 2019: Hate Speech and Offensive Content Identification in Indo-European Languages

TL;DR: The HASOC track intends to stimulate the development of hate speech detection for Hindi, German, and English by identifying hate speech in social media using LSTM networks that process word-embedding input.
Journal ArticleDOI

Directions in abusive language training data, a systematic review: Garbage in, garbage out.

TL;DR: This paper systematically reviews abusive language dataset creation and content, in conjunction with an open website for cataloguing abusive language data, and synthesizes evidence-based recommendations for practitioners working with this complex and highly diverse data.
Proceedings ArticleDOI

TextOCR: Towards large-scale end-to-end reasoning for arbitrary-shaped scene text

TL;DR: TextOCR is a dataset for arbitrary-shaped scene text detection and recognition, with 900k words annotated on real images from the TextVQA dataset, supporting end-to-end scene-text-based reasoning on an image.
Posted Content

Tackling Online Abuse: A Survey of Automated Abuse Detection Methods

TL;DR: A comprehensive survey of the methods that have been proposed to date for automated abuse detection in the field of natural language processing (NLP), providing a platform for further development of this area.
References
Proceedings Article

Automated Hate Speech Detection and the Problem of Offensive Language

TL;DR: This work uses a crowd-sourced hate speech lexicon to collect tweets containing hate speech keywords and labels a sample of these tweets into three categories: those containing hate speech, those with only offensive language, and those with neither.
Proceedings ArticleDOI

Hateful Symbols or Hateful People? Predictive Features for Hate Speech Detection on Twitter

TL;DR: A list of criteria founded in critical race theory is provided and used to annotate a publicly available corpus of more than 16k tweets, together with a dictionary based on the most indicative words in the data.
Proceedings ArticleDOI

CLEVR: A Diagnostic Dataset for Compositional Language and Elementary Visual Reasoning

TL;DR: In this paper, the authors present a diagnostic dataset that tests a range of visual reasoning abilities and provides insights into their abilities and limitations, and use this dataset to analyze a variety of modern visual reasoning systems.
Journal ArticleDOI

YFCC100M: the new data in multimedia research

TL;DR: YFCC100M is a publicly available, curated dataset of almost 100 million photos and videos that is free and legal for all to use.