
David Bamman

Researcher at University of California, Berkeley

Publications -  76
Citations -  3481

David Bamman is an academic researcher at the University of California, Berkeley. His research spans topics including computer science and treebanks. He has an h-index of 24 and has co-authored 69 publications receiving 2,818 citations. His previous affiliations include Carnegie Mellon University and the University of California.

Papers
Proceedings Article

Contextualized Sarcasm Detection on Twitter

TL;DR: Including extra-linguistic information from the context of an utterance on Twitter, such as properties of the author, the audience, and the immediate communicative environment, yields gains in accuracy over purely linguistic features in detecting this complex phenomenon. It also sheds light on the features of interpersonal interaction that enable sarcasm in conversation.
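The core idea, combining linguistic features with extra-linguistic context, can be sketched as follows. This is a minimal illustration, not the paper's implementation; all feature names and thresholds here are hypothetical.

```python
# Sketch of contextualized feature extraction: purely linguistic features
# are concatenated with extra-linguistic context features (author history,
# audience) before classification. All names/thresholds are illustrative.

def featurize(tweet, author, audience):
    linguistic = {
        "has_exclamation": int("!" in tweet["text"]),
        "n_words": len(tweet["text"].split()),
    }
    contextual = {
        # Hypothetical author feature: frequent past sarcasm.
        "author_sarcastic_history": int(author["past_sarcasm_rate"] > 0.1),
        # Hypothetical audience feature: addressee follows the author.
        "audience_is_follower": int(audience["follows_author"]),
    }
    # The paper's finding: the union of both views beats linguistic alone.
    return {**linguistic, **contextual}

feats = featurize({"text": "Oh, great."},
                  {"past_sarcasm_rate": 0.2},
                  {"follows_author": True})
```

The resulting dictionary would then feed any standard classifier (the paper uses supervised learning over such feature sets).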
Journal ArticleDOI

Censorship and deletion practices in Chinese social media

TL;DR: This work presents the first large-scale analysis of political content censorship in social media, i.e., the active deletion of messages published by individuals, and uncovers a set of politically sensitive terms whose presence in a message leads to anomalously higher rates of deletion.
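The kind of term-level analysis described here can be sketched as a per-term deletion rate computed over a message corpus. This is a toy illustration of the general approach, not the authors' method; the data layout is assumed.

```python
# Sketch: deletion rate per term across a corpus of messages, where each
# message records whether it was later deleted. A term with an anomalously
# high rate (vs. the corpus baseline) is a candidate sensitive term.
# The message format (dicts with "text" and "deleted") is illustrative.

def deletion_rate_by_term(messages):
    counts = {}  # term -> (total messages containing it, deleted count)
    for m in messages:
        for term in set(m["text"].split()):
            total, deleted = counts.get(term, (0, 0))
            counts[term] = (total + 1, deleted + int(m["deleted"]))
    return {term: deleted / total for term, (total, deleted) in counts.items()}
```

A real analysis would compare each term's rate against the overall deletion rate and control for term frequency before flagging anything as sensitive.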
Journal ArticleDOI

Gender identity and lexical variation in social media

TL;DR: This paper studied the relationship between gender, linguistic style, and social networks, using a novel corpus of 14,000 Twitter users, and found that social-network homophily is correlated with the use of same-gender language markers. Pairing computational methods with social theory offers a new perspective on how gender emerges as individuals position themselves relative to audiences, topics, and mainstream gender norms.
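The homophily quantity behind this finding, the share of a user's network that shares their gender label, can be sketched as below. This is a simplified illustration under assumed data, not the paper's measure.

```python
# Sketch of network gender homophily: the fraction of a user's network
# neighbors sharing the user's gender label. Correlating this value with
# a user's rate of same-gender lexical markers is the analysis the paper
# reports. Labels and data here are illustrative.

def homophily(user_gender, neighbor_genders):
    if not neighbor_genders:
        return 0.0
    same = sum(1 for g in neighbor_genders if g == user_gender)
    return same / len(neighbor_genders)

score = homophily("f", ["f", "f", "m", "f"])  # 3 of 4 neighbors match
```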
Proceedings ArticleDOI

Adversarial Training for Relation Extraction

TL;DR: Experimental results demonstrate that adversarial training is generally effective for both CNN and RNN models and significantly improves the precision of predicted relations.
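Adversarial training of this kind typically perturbs input embeddings in the direction of the loss gradient (an FGSM-style perturbation) and trains on both clean and perturbed inputs. The sketch below illustrates that general mechanism with a toy loss; it is not the paper's model, and the loss and values are illustrative.

```python
# Sketch of FGSM-style adversarial perturbation on an embedding vector.
# The toy loss L = sum((e_i - t_i)^2) stands in for the model's training
# loss; a real system would backpropagate through the network instead.

def grad_of_loss(embedding, target):
    # Gradient of the toy squared-error loss w.r.t. the embedding: 2*(e - t).
    return [2.0 * (e - t) for e, t in zip(embedding, target)]

def fgsm_perturb(embedding, grad, epsilon=0.01):
    # Move each dimension by epsilon in the sign of the gradient,
    # i.e. the direction that locally increases the loss.
    sign = lambda g: (g > 0) - (g < 0)
    return [e + epsilon * sign(g) for e, g in zip(embedding, grad)]

embedding = [0.2, -0.5, 0.1]
target = [0.0, 0.0, 0.0]
adv = fgsm_perturb(embedding, grad_of_loss(embedding, target))
# Training would then minimize the loss on both the clean and the
# perturbed embedding, making the model robust to small perturbations.
```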