Alex Wang
Researcher at Carnegie Mellon University
Publications - 53
Citations - 8872
Alex Wang is an academic researcher from Carnegie Mellon University. The author has contributed to research in topics including convex hull and transfer of learning. The author has an h-index of 20 and has co-authored 45 publications receiving 5,949 citations. Previous affiliations of Alex Wang include New York University and Johns Hopkins University.
Papers
Proceedings ArticleDOI
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
TL;DR: GLUE, as presented in this paper, is a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models.
Proceedings Article
GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding
TL;DR: A benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models for understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models, which favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge-transfer across tasks.
Proceedings Article
SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems
Alex Wang,Yada Pruksachatkun,Nikita Nangia,Amanpreet Singh,Julian Michael,Felix Hill,Omer Levy,Samuel R. Bowman +7 more
TL;DR: SuperGLUE, a new benchmark styled after GLUE, is presented, comprising a new set of more difficult language understanding tasks, a software toolkit, and a public leaderboard.
Posted Content
What do you learn from context? Probing for sentence structure in contextualized word representations
Ian Tenney,Patrick Xia,Berlin Chen,Alex Wang,Adam Poliak,R. Thomas McCoy,Najoung Kim,Benjamin Van Durme,Samuel R. Bowman,Dipanjan Das,Ellie Pavlick +10 more
TL;DR: The authors investigate word-level contextual representations from four recent models, examining how they encode sentence structure across a range of syntactic, semantic, local, and long-range phenomena. They find that existing models trained on language modeling and translation produce strong representations for syntactic phenomena, but offer only comparably small improvements on semantic tasks over a non-contextual baseline.
Posted Content
On Measuring Social Biases in Sentence Encoders
TL;DR: The Word Embedding Association Test is extended to measure bias in sentence encoders, yielding mixed results, including suspicious patterns of sensitivity that suggest the test's assumptions may not hold in general.