Open Access Proceedings Article

What to do about bad language on the internet

TL;DR
This paper offers a critical review of the NLP community's response to the landscape of bad language, and presents a quantitative analysis of the lexical diversity of social media text and its relationship to other corpora.
Abstract
The rise of social media has brought computational linguistics in ever-closer contact with bad language: text that defies our expectations about vocabulary, spelling, and syntax. This paper surveys the landscape of bad language, and offers a critical review of the NLP community’s response, which has largely followed two paths: normalization and domain adaptation. Each approach is evaluated in the context of theoretical and empirical work on computer-mediated communication. In addition, the paper presents a quantitative analysis of the lexical diversity of social media text, and its relationship to other corpora.
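The abstract mentions a quantitative analysis of lexical diversity without naming a specific measure. A common way to quantify it is the type-token ratio (TTR); the sketch below is purely illustrative of that idea (the toy samples and the choice of TTR are assumptions, not the paper's actual method, and real comparisons must control for sample size, since TTR falls as corpora grow).

```python
# Illustrative sketch (not the paper's actual measure): quantifying
# lexical diversity via the type-token ratio (TTR), i.e. the number of
# distinct word types divided by the total token count.

def type_token_ratio(tokens):
    """Distinct word types divided by total tokens; 0.0 for empty input."""
    return len(set(tokens)) / len(tokens) if tokens else 0.0

# Hypothetical toy samples standing in for social media vs. edited text.
tweet_like = "omg i luv this sooo much lol luv it".split()
edited_like = "the committee approved the proposal after review".split()

print(type_token_ratio(tweet_like))
print(type_token_ratio(edited_like))
```

Because TTR is length-sensitive, corpus-scale comparisons typically sample fixed-size chunks before computing it.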



Citations
Journal Article (DOI)

Overcoming Language Variation in Sentiment Analysis with Social Attention

TL;DR: This paper proposes a novel attention-based neural network architecture in which attention is divided among several basis models, depending on the author's position in the social network, making sentiment analysis more robust to social language variation.
Proceedings Article (DOI)

Robust Training under Linguistic Adversity

TL;DR: This work proposes a linguistically motivated approach for training robust models by exposing the model to corrupted text examples at training time, considering several flavours of linguistically plausible corruption, including lexical, semantic, and syntactic methods.
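The cited paper's exact corruption operators are not given in this summary, so the sketch below shows just one plausible character-level flavour of noising (random deletions and duplications); the `corrupt` helper and its parameters are hypothetical illustrations of the general augmentation idea, not the paper's method.

```python
import random

# Hedged sketch of corruption-based data augmentation: pair each clean
# training sentence with noisy copies so the model sees "bad language"
# at training time. Deletions and duplications are one assumed flavour.

def corrupt(text, rate=0.1, rng=None):
    """Return a noisy copy of text: each character is deleted with
    probability rate/2 and duplicated with probability rate/2."""
    rng = rng or random.Random(0)   # fixed seed for reproducibility
    out = []
    for c in text:
        r = rng.random()
        if r < rate / 2:
            continue                # deletion
        elif r < rate:
            out.append(c + c)       # duplication
        else:
            out.append(c)
    return "".join(out)

# Augment a toy training set with one noisy copy per sentence.
clean = ["see you tomorrow", "that was great"]
augmented = [(s, corrupt(s, rate=0.2, rng=random.Random(i)))
             for i, s in enumerate(clean)]
```

In practice the corruption distribution would be tuned to mimic the noise actually observed in the target domain, rather than applied uniformly.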
Book Chapter (DOI)

Natural Language Processing, Sentiment Analysis, and Clinical Analytics

TL;DR: This work examines prevalent theories underlying the NLP field and how they can be leveraged to gather users' sentiments on social media, and surveys applications of sentiment analysis and of NLP to mental health.
Proceedings Article

Crowdsourcing and annotating NER for Twitter #drift

TL;DR: Two important points are observed: (a) language drift on Twitter is significant, and while off-the-shelf systems have been reported to perform well on in-sample data, they often perform poorly on new samples of tweets; (b) crowdsourced annotation makes it more feasible to "catch up" with language drift.
Posted Content

Twitter-Network Topic Model: A Full Bayesian Treatment for Social Network and Text Modeling.

TL;DR: The TN topic model significantly outperforms several existing nonparametric models due to its flexibility, and enables additional informative inference such as authors' interests and hashtag analysis, leading to further applications such as author recommendation, automatic topic labeling, and hashtag suggestion.
References
Proceedings Article (DOI)

Earthquake shakes Twitter users: real-time event detection by social sensors

TL;DR: This paper investigates the real-time interaction of events such as earthquakes on Twitter, proposes an algorithm to monitor tweets and detect a target event, and produces a probabilistic spatiotemporal model for the target event that can find the center and the trajectory of the event location.
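The cited paper fits a probabilistic spatiotemporal model (with filtering for location estimation); none of that is reproduced here. The sketch below shows only the simplest counting step one might start from: bucketing keyword-matching tweet timestamps into fixed windows and flagging bursts. The function name, window size, and threshold are illustrative assumptions.

```python
from collections import Counter

# Minimal counting sketch loosely inspired by the "social sensor" idea:
# flag time windows in which target-keyword tweets arrive unusually often.

def detect_bursts(timestamps, window=60, threshold=3):
    """Return sorted start times of windows with >= threshold tweets.

    timestamps: tweet arrival times in seconds; window: bucket width
    in seconds; threshold: minimum tweets per window to flag a burst.
    """
    counts = Counter((t // window) * window for t in timestamps)
    return sorted(start for start, c in counts.items() if c >= threshold)

# Four keyword tweets in the first minute flag window 0; a lone tweet
# at t=130 does not flag its window.
print(detect_bursts([1, 2, 3, 4, 130]))
```

A real system would follow this with spatial estimation over the flagged tweets' geotags, which is where the paper's probabilistic model does the actual work.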
Journal Article (DOI)

Critical questions for big data

TL;DR: The era of Big Data has begun: diverse groups argue about the potential benefits and costs of analyzing genetic sequences, social media interactions, health records, phone logs, government records, and other digital traces left by people.
Proceedings Article (DOI)

Feature-rich part-of-speech tagging with a cyclic dependency network

TL;DR: A new part-of-speech tagger is presented that demonstrates the following ideas: explicit use of both preceding and following tag contexts via a dependency network representation, broad use of lexical features, and effective use of priors in conditional log-linear models.
Book

Natural Language Processing with Python

TL;DR: This book offers a highly accessible introduction to natural language processing, the field that supports a variety of language technologies, from predictive text and email filtering to automatic summarization and translation.
Proceedings Article (DOI)

Incorporating Non-local Information into Information Extraction Systems by Gibbs Sampling

TL;DR: By using simulated annealing in place of Viterbi decoding in sequence models such as HMMs, CMMs, and CRFs, it is possible to incorporate non-local structure while preserving tractable inference.