Open Access Proceedings Article

What to do about bad language on the internet

TL;DR
A critical review of the NLP community's response to the landscape of bad language is offered, along with a quantitative analysis of the lexical diversity of social media text and its relationship to other corpora.
Abstract
The rise of social media has brought computational linguistics in ever-closer contact with bad language: text that defies our expectations about vocabulary, spelling, and syntax. This paper surveys the landscape of bad language, and offers a critical review of the NLP community’s response, which has largely followed two paths: normalization and domain adaptation. Each approach is evaluated in the context of theoretical and empirical work on computer-mediated communication. In addition, the paper presents a quantitative analysis of the lexical diversity of social media text, and its relationship to other corpora.
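The abstract's quantitative analysis of lexical diversity can be illustrated with a minimal sketch. The measure shown here (a simple type-token ratio over whitespace tokens) and the example snippets are assumptions for illustration, not the paper's actual methodology or data:

```python
def type_token_ratio(text: str) -> float:
    """Lexical diversity as unique tokens / total tokens,
    using naive lowercased whitespace tokenization."""
    tokens = text.lower().split()
    if not tokens:
        return 0.0
    return len(set(tokens)) / len(tokens)

# Hypothetical snippets standing in for social media vs. edited text.
tweet_like = "lol idk tbh gonna b late smh lol"
edited_like = "I do not know; I am going to be late."

print(round(type_token_ratio(tweet_like), 2))
print(round(type_token_ratio(edited_like), 2))
```

Comparing such ratios across corpora of comparable size is one crude way to probe whether social media text is lexically more or less diverse than edited text; in practice, token counts must be controlled, since type-token ratios fall as corpus size grows.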



Citations
Journal ArticleDOI

Multimodular Text Normalization of Dutch User-Generated Content

TL;DR: This work investigates the usefulness of a multimodular approach to account for the diversity of normalization issues encountered in user-generated content (UGC) and provides a detailed analysis of the performance of the different modules and the overall system.
Journal ArticleDOI

#London2012: Towards Citizen-Contributed Urban Planning Through Sentiment Analysis of Twitter Data

TL;DR: A combined approach for analyzing large sports events is proposed, considering event days vs. comparison days (before or after the event) and different user groups (residents vs. visitors), and integrating sentiment analysis and topic extraction.
Proceedings ArticleDOI

Part-of-Speech Tagging for Historical English

TL;DR: This paper assesses the capability of domain adaptation techniques to cope with historical texts, focusing on the classic benchmark task of part-of-speech tagging, and demonstrates that a feature embedding method for unsupervised domain adaptation outperforms word embeddings and Brown clusters, showing the importance of embedding the entire feature space rather than just individual words.
Proceedings ArticleDOI

Fast Easy Unsupervised Domain Adaptation with Marginalized Structured Dropout

TL;DR: This work proposes a new technique called marginalized structured dropout, which exploits feature structure to obtain a remarkably simple and efficient feature projection in the context of fine-grained part-of-speech tagging on a dataset of historical Portuguese.
Proceedings ArticleDOI

Multi-modular domain-tailored OCR post-correction

TL;DR: It is shown that combining different approaches, such as statistical machine translation and spell checking, with the help of a ranking mechanism substantially improves over individual approaches in OCR post-correction.
References
Proceedings ArticleDOI

Earthquake shakes Twitter users: real-time event detection by social sensors

TL;DR: This paper investigates the real-time interaction of events such as earthquakes in Twitter and proposes an algorithm to monitor tweets and to detect a target event and produces a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location.
Journal ArticleDOI

Critical questions for big data

TL;DR: The era of Big Data has begun: diverse groups argue about the potential benefits and costs of analyzing genetic sequences, social media interactions, health records, phone logs, government records, and other digital traces left by people.
Proceedings ArticleDOI

Feature-rich part-of-speech tagging with a cyclic dependency network

TL;DR: A new part-of-speech tagger is presented that demonstrates the following ideas: explicit use of both preceding and following tag contexts via a dependency network representation, broad use of lexical features, and effective use of priors in conditional loglinear models.
Book

Natural Language Processing with Python

TL;DR: This book offers a highly accessible introduction to natural language processing, the field that supports a variety of language technologies, from predictive text and email filtering to automatic summarization and translation.
Proceedings ArticleDOI

Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling

TL;DR: By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference.