Open Access Proceedings Article

What to do about bad language on the internet

TLDR
A critical review of the NLP community's response to the landscape of bad language is offered, and a quantitative analysis of the lexical diversity of social media text and its relationship to other corpora is presented.
Abstract
The rise of social media has brought computational linguistics in ever-closer contact with bad language: text that defies our expectations about vocabulary, spelling, and syntax. This paper surveys the landscape of bad language, and offers a critical review of the NLP community’s response, which has largely followed two paths: normalization and domain adaptation. Each approach is evaluated in the context of theoretical and empirical work on computer-mediated communication. In addition, the paper presents a quantitative analysis of the lexical diversity of social media text, and its relationship to other corpora.
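The paper's own lexical-diversity analysis is not reproduced here, but a minimal sketch of one common way to compare vocabularies across corpora, type-token ratios computed on equal-sized samples, might look like the following. The corpora, sample size, and placeholder sentences are illustrative assumptions, not the paper's actual setup.

```python
import random

def type_token_ratio(tokens, sample_size=1000, seed=0):
    """Type-token ratio on a fixed-size random sample, so corpora of
    different lengths can be compared on an equal footing."""
    random.seed(seed)
    sample = random.sample(tokens, min(sample_size, len(tokens)))
    return len(set(sample)) / len(sample)

# Tiny placeholder corpora; a real comparison would load full tweet and
# newswire collections instead of these two sentences.
corpora = {
    "tweets": "omg i luv this sooo much lol omg".lower().split(),
    "news":   "the committee approved the measure on tuesday".lower().split(),
}

for name, tokens in corpora.items():
    print(f"{name}: {len(tokens)} tokens, TTR = {type_token_ratio(tokens):.2f}")
```

Sampling to a fixed size matters because raw type-token ratios fall as corpora grow, so unequal corpus sizes would otherwise dominate the comparison.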



Citations
Journal Article

Lörres, Möppes, and the Swiss. (Re)Discovering regional patterns in anonymous social media data

TL;DR: The results of this study strongly suggest the existence of region-specific patterns of language use representing distinctive strategies of linguistic stylization in relation to linguistic resources and topics.
Dissertation

Approaches to Automatic Text Structuring

Nicolai Erbs
TL;DR: Two prototypes of text structuring systems are presented, which integrate techniques for automatic text structuring in a wiki setting and in an e-learning setting with eBooks, and the effect of word senses on computing similarities is analyzed.
Proceedings Article

Passive-Aggressive Sequence Labeling with Discriminative Post-Editing for Recognising Person Entities in Tweets

TL;DR: Noise-tolerant methods for sequence labeling are explored and discriminative post-editing is applied to exceed state-of-the-art performance for person recognition in tweets, reaching an F1 of 84%.
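That paper's tweet data and feature pipeline are not reproduced here; the sketch below only shows the core passive-aggressive (PA-I) update for a binary linear classifier, the building block such a sequence labeler relies on. The feature vectors and labels are synthetic.

```python
import numpy as np

def pa_update(w, x, y, C=1.0):
    """One PA-I step: leave w unchanged if the margin is satisfied,
    otherwise move it just far enough to correct the mistake (capped by C)."""
    loss = max(0.0, 1.0 - y * w.dot(x))
    if loss > 0.0:
        tau = min(C, loss / x.dot(x))
        w = w + tau * y * x
    return w

# Toy demo on synthetic 2-D data (not the paper's tweet features).
rng = np.random.default_rng(0)
w = np.zeros(2)
for _ in range(200):
    x = rng.normal(size=2)
    y = 1.0 if x[0] + x[1] > 0 else -1.0
    w = pa_update(w, x, y)
print("learned weights:", w)
```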
Posted Content

Generalisation in Named Entity Recognition: A Quantitative Analysis

TL;DR: In this article, the authors quantify how corpus diversity impacts state-of-the-art NER methods by measuring named entity (NE) and context variability, feature sparsity, and their effects on precision and recall.
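As a rough illustration of one quantity such an analysis might measure, the snippet below computes the fraction of test-set entity surface forms never seen in training, a simple proxy for NE variability. The toy mention lists are placeholders for real NER annotations, not data from the paper.

```python
def unseen_entity_rate(train_entities, test_entities):
    """Fraction of test entity mentions whose surface form never occurs
    in the training data."""
    seen = {e.lower() for e in train_entities}
    unseen = [e for e in test_entities if e.lower() not in seen]
    return len(unseen) / len(test_entities)

# Placeholder mentions; in practice these come from annotated corpora.
train = ["Barack Obama", "Twitter", "New York"]
test = ["Obama", "Twitter", "Justin Bieber", "NYC"]
print(f"unseen entity rate: {unseen_entity_rate(train, test):.2f}")
```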
Proceedings Article

Semi-supervised probabilistic approach for normalising informal short text messages

TL;DR: The method uses language model probability to characterise the relationship between formal and informal words, then employs string similarity with a log-linear model to include features for both word-level transformation and local context similarity.
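The exact model of that paper is not reproduced here; the sketch below only illustrates the general idea of scoring normalization candidates by combining a language-model probability (here, a made-up unigram table) with a string-similarity feature under hand-set log-linear weights. All names, probabilities, and weights are illustrative assumptions.

```python
import math
from difflib import SequenceMatcher

# Toy unigram LM over formal words (probabilities are invented).
unigram_lm = {"tomorrow": 0.004, "tomato": 0.002, "tom": 0.001}

def score(informal, candidate, w_lm=1.0, w_sim=2.0):
    """Log-linear combination of LM log-probability and string similarity."""
    lm_logp = math.log(unigram_lm.get(candidate, 1e-8))
    sim = SequenceMatcher(None, informal, candidate).ratio()
    return w_lm * lm_logp + w_sim * sim

informal = "tmrw"
best = max(unigram_lm, key=lambda c: score(informal, c))
print(informal, "->", best)
```

In a real system the weights would be learned and the candidate set would come from an edit-distance or phrase-table lookup rather than the whole vocabulary.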
References
Proceedings Article

Earthquake shakes Twitter users: real-time event detection by social sensors

TL;DR: This paper investigates the real-time interaction of events such as earthquakes on Twitter, proposes an algorithm to monitor tweets and detect a target event, and produces a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location.
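The full spatiotemporal model of that paper is beyond a short sketch; the toy code below only shows the temporal half of the idea, flagging an event when the count of keyword-matching tweets in a window is improbably high under a Poisson baseline. The keyword, baseline rate, and counts are all illustrative.

```python
from math import exp, factorial

def poisson_sf(k, lam):
    """P(X >= k) for X ~ Poisson(lam), summed directly for small k."""
    return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

# Illustrative per-minute counts of tweets containing "earthquake".
baseline_rate = 2.0            # assumed normal rate (tweets/minute)
counts = [1, 3, 2, 0, 2, 18, 25, 30]

for minute, k in enumerate(counts):
    p = poisson_sf(k, baseline_rate)
    if p < 1e-4:
        print(f"minute {minute}: count={k}, p={p:.2e} -> possible event")
```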
Journal Article

Critical questions for big data

TL;DR: The authors argue that the era of Big Data has begun, with diverse groups debating the potential benefits and costs of analyzing genetic sequences, social media interactions, health records, phone logs, government records, and other digital traces left by people.
Proceedings Article

Feature-rich part-of-speech tagging with a cyclic dependency network

TL;DR: A new part-of-speech tagger is presented that demonstrates the following ideas: explicit use of both preceding and following tag contexts via a dependency network representation, broad use of lexical features, and effective use of priors in conditional log-linear models.
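The cyclic dependency network itself is not sketched here; the snippet below only illustrates the kind of lexical and tag-context feature templates such a tagger conditions on. The feature names and example tokens are invented for illustration.

```python
def tag_features(tokens, i, prev_tag=None, next_tag=None):
    """Feature templates for position i: word identity, affixes, word shape,
    plus preceding and following tag context when available."""
    w = tokens[i]
    feats = {
        f"word={w.lower()}": 1,
        f"suffix3={w[-3:].lower()}": 1,
        f"prefix2={w[:2].lower()}": 1,
        "has_digit": int(any(c.isdigit() for c in w)),
        "is_capitalized": int(w[:1].isupper()),
    }
    if prev_tag is not None:
        feats[f"prev_tag={prev_tag}"] = 1
    if next_tag is not None:
        feats[f"next_tag={next_tag}"] = 1
    return feats

print(tag_features(["omg", "earthquake", "!!"], 1, prev_tag="UH"))
```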
Book

Natural Language Processing with Python

TL;DR: This book offers a highly accessible introduction to natural language processing, the field that supports a variety of language technologies, from predictive text and email filtering to automatic summarization and translation.
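As a small usage example of the toolkit this book accompanies (NLTK), tokenizing and tagging a noisy social-media sentence might look like the following; it assumes NLTK is installed and that the relevant tokenizer and tagger resources have been fetched with nltk.download().

```python
import nltk

# Assumes the tokenizer and averaged-perceptron tagger models have been
# downloaded beforehand, e.g. via nltk.download().
text = "omg the earthquake was soooo scary #shaking"
tokens = nltk.word_tokenize(text)
print(nltk.pos_tag(tokens))
```

On text like this, off-the-shelf tools trained on edited prose often mis-tokenize hashtags and mis-tag nonstandard spellings, which is exactly the gap between normalization and domain adaptation that the surveyed paper discusses.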
Proceedings Article

Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling

TL;DR: By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference.
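A faithful reimplementation of that decoder is beyond a short sketch; the toy code below only illustrates the core move of sampling a tag sequence one position at a time, with an annealed temperature and a score that mixes local (emission-style) scores with a non-local bonus for labeling repeated tokens consistently. All scores, tokens, and labels are invented.

```python
import math
import random

random.seed(0)
tokens = ["Washington", "visited", "Washington"]
labels = ["PER", "LOC", "O"]

# Toy local scores; in a real system these come from a trained sequence model.
local = {
    "Washington": {"PER": 1.0, "LOC": 1.0, "O": -1.0},
    "visited":    {"PER": -1.0, "LOC": -1.0, "O": 1.0},
}

def position_score(seq, i, label, consistency=0.8):
    """Local score plus a non-local bonus when repeated tokens share a label."""
    s = local[tokens[i]][label]
    for j, tok in enumerate(tokens):
        if j != i and tok == tokens[i] and seq[j] == label:
            s += consistency
    return s

seq = [random.choice(labels) for _ in tokens]
for it in range(200):
    temperature = max(0.1, 1.0 - it / 200)   # simple annealing schedule
    i = random.randrange(len(tokens))
    weights = [math.exp(position_score(seq, i, lab) / temperature) for lab in labels]
    seq[i] = random.choices(labels, weights=weights)[0]
print(list(zip(tokens, seq)))
```

Lowering the temperature over time makes the sampler behave more and more like a greedy decoder, which is the simulated-annealing idea the reference exploits.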