Open Access · Proceedings Article
What to do about bad language on the internet
Jacob Eisenstein
pp. 359–369
TL;DR: A critical review of the NLP community's response to the landscape of bad language is offered, along with a quantitative analysis of the lexical diversity of social media text and its relationship to other corpora.

Abstract: The rise of social media has brought computational linguistics in ever-closer contact with bad language: text that defies our expectations about vocabulary, spelling, and syntax. This paper surveys the landscape of bad language and offers a critical review of the NLP community's response, which has largely followed two paths: normalization and domain adaptation. Each approach is evaluated in the context of theoretical and empirical work on computer-mediated communication. In addition, the paper presents a quantitative analysis of the lexical diversity of social media text and its relationship to other corpora.
Citations
Proceedings Article
Improved Part-of-Speech Tagging for Online Conversational Text with Word Clusters
TL;DR: This work systematically evaluates the use of large-scale unsupervised word clustering and new lexical features to improve tagging accuracy on Twitter and achieves state-of-the-art tagging results on both Twitter and IRC POS tagging tasks.
Journal ArticleDOI
Survey of the state of the art in natural language generation: core tasks, applications and evaluation
Albert Gatt, Emiel Krahmer, et al.
TL;DR: A survey of the state of the art in natural language generation can be found in this article, with an up-to-date synthesis of research on the core tasks in NLG and the architectures adopted in which such tasks are organized.
Proceedings ArticleDOI
BERTweet: A pre-trained language model for English Tweets
TL;DR: BERTweet is the first large-scale pre-trained language model for English Tweets; it has the same architecture as BERT-base and is trained using the RoBERTa pre-training procedure.
Journal ArticleDOI
Predicting crime using Twitter and kernel density estimation
TL;DR: This article uses Twitter-specific linguistic analysis and statistical topic modeling to automatically identify discussion topics across a major city in the United States and shows that the addition of Twitter data improves crime prediction performance versus a standard approach based on kernel density estimation.
Posted Content
Language (Technology) is Power: A Critical Survey of "Bias" in NLP
TL;DR: The authors survey 146 papers analyzing "bias" in NLP systems, finding that their motivations are often vague, inconsistent, and lacking in normative reasoning, despite the fact that analyzing bias is an inherently normative process.
References
Proceedings Article
Unsupervised Mining of Lexical Variants from Noisy Text
TL;DR: A novel unsupervised method is presented for extracting domain-specific lexical variants from a large volume of text, yielding a 20% reduction in word error rate over an existing state-of-the-art approach.
Phonological Factors in Social Media Writing
TL;DR: Examples of the phonological variable of consonant cluster reduction in Twitter suggest that when social media writing transcribes phonological properties of speech, it is not merely a case of inventing orthographic transcriptions; rather, social media displays influence from structural properties of the phonological system.
Posted Content
Gender in Twitter: styles, stances, and social networks
TL;DR: Pairing computational methods and social theory offers a new perspective on how gender emerges as individuals position themselves relative to audiences, topics, and mainstream gender norms.
Posted Content
Mapping the geographical diffusion of new words
TL;DR: This paper shows how an autoregressive model of word frequencies in social media can be used to induce a network of linguistic influence between American cities, and measures the factors that drive the spread of lexical innovation.