
Julian Michael

Researcher at University of Washington

Publications: 33
Citations: 7,298

Julian Michael is an academic researcher from the University of Washington. The author has contributed to research in the topics of Computer science and Semantic role labeling. The author has an h-index of 15 and has co-authored 27 publications receiving 4,869 citations. Previous affiliations of Julian Michael include New York University and the University of Texas at Austin.

Papers
Proceedings Article

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding

TL;DR: The GLUE benchmark presented in this paper is a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models' understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models.
Proceedings Article

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding

TL;DR: Presents a benchmark of nine diverse NLU tasks, an auxiliary dataset for probing models' understanding of specific linguistic phenomena, and an online platform for evaluating and comparing models; the benchmark favors models that can represent linguistic knowledge in a way that facilitates sample-efficient learning and effective knowledge transfer across tasks.
Proceedings Article

SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems

TL;DR: A new benchmark styled after GLUE is presented, comprising a set of more difficult language understanding tasks, a software toolkit, and a public leaderboard.
Proceedings Article

Supervised Open Information Extraction

TL;DR: Presents a novel formulation of Open IE as a sequence tagging problem, addressing challenges such as encoding multiple extractions for a single predicate, along with a supervised model that outperforms existing state-of-the-art Open IE systems on benchmark datasets.
Posted Content

GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding

TL;DR: The General Language Understanding Evaluation (GLUE) benchmark presented in this paper is a tool for evaluating and analyzing the performance of models across a diverse range of existing NLU tasks; it incentivizes sharing knowledge across tasks because some tasks have very limited training data.