Proceedings ArticleDOI
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova
pp. 4171–4186
TL;DR: BERT pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, and can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Abstract:
We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models (Peters et al., 2018a; Radford et al., 2018), BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers. As a result, the pre-trained BERT model can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks, such as question answering and language inference, without substantial task-specific architecture modifications. BERT is conceptually simple and empirically powerful. It obtains new state-of-the-art results on eleven natural language processing tasks, including pushing the GLUE score to 80.5 (7.7 point absolute improvement), MultiNLI accuracy to 86.7% (4.6% absolute improvement), SQuAD v1.1 question answering Test F1 to 93.2 (1.5 point absolute improvement) and SQuAD v2.0 Test F1 to 83.1 (5.1 point absolute improvement).
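The fine-tuning recipe the abstract describes (a pre-trained bidirectional encoder plus a single task-specific output layer) is straightforward to reproduce. Below is a minimal sketch using the Hugging Face transformers library for a toy two-class task; the checkpoint name, toy batch, and learning rate are illustrative assumptions, not taken from the paper.

# Minimal sketch: fine-tune pre-trained BERT with one added output layer.
import torch
from transformers import BertTokenizer, BertForSequenceClassification

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# The classification head is the single additional output layer on top of the encoder.
model = BertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

batch = tokenizer(["an example sentence", "another example"],
                  padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([0, 1])  # toy labels

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
optimizer.zero_grad()
loss = model(**batch, labels=labels).loss  # cross-entropy over the new output layer
loss.backward()
optimizer.step()

In practice this single step would sit inside an ordinary training loop over the downstream dataset; all encoder weights are updated along with the new head.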
Citations
Posted Content
Dataset Condensation with Gradient Matching
TL;DR: This paper proposes a training set synthesis technique called Dataset Condensation, which learns to produce a small set of informative samples that can train deep neural networks from scratch at a small fraction of the computational cost of training on the original data, while achieving comparable results.
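As a rough illustration of the gradient-matching idea in that summary, here is a toy PyTorch sketch: learnable synthetic images are updated so that the gradients they induce in a network match the gradients from real data. The single fixed network, cosine distance, and flat loop are simplifying assumptions; the paper alternates updates across many random initializations.

# Toy sketch of dataset condensation by gradient matching (simplified).
import torch
import torch.nn as nn
import torch.nn.functional as F

net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

real_x, real_y = torch.randn(64, 1, 28, 28), torch.randint(0, 10, (64,))
syn_x = torch.randn(10, 1, 28, 28, requires_grad=True)  # learnable synthetic set
syn_y = torch.arange(10)                                # one image per class

opt = torch.optim.SGD([syn_x], lr=0.1)
for _ in range(100):
    g_real = torch.autograd.grad(loss_fn(net(real_x), real_y), net.parameters())
    g_syn = torch.autograd.grad(loss_fn(net(syn_x), syn_y),
                                net.parameters(), create_graph=True)
    # Layer-wise cosine distance between real and synthetic gradients.
    match = sum(1 - F.cosine_similarity(a.detach().flatten(), b.flatten(), dim=0)
                for a, b in zip(g_real, g_syn))
    opt.zero_grad()
    match.backward()  # backpropagates through g_syn into the synthetic images
    opt.step()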
Journal ArticleDOI
Supercomputer-Based Ensemble Docking Drug Discovery Pipeline with Application to COVID-19
Atanu Acharya, Rupesh Agarwal, Matthew B. Baker, Jerome Baudry, Debsindhu Bhowmik, Swen Boehm, Kendall G. Byler, Samuel Yen-Chi Chen, Leighton Coates, Connor J. Cooper, Omar Demerdash, Isabella Daidone, John D. Eblen, Sally R. Ellingson, Stefano Forli, Jens Glaser, James C. Gumbart, John A. Gunnels, Oscar Hernandez, Stephan Irle, Daniel W. Kneller, Andrey Kovalevsky, Jeffrey M. Larkin, Travis J Lawrence, Scott LeGrand, Shih-Hsien Liu, Julie C. Mitchell, Gilchan Park, Jerry M. Parks, Anna Pavlova, Loukas Petridis, Duncan Poole, Line Pouchard, Arvind Ramanathan, David M. Rogers, Diogo Santos-Martins, Aaron Scheinberg, Ada Sedova, Y. Shen, Jeremy C. Smith, Micholas Dean Smith, Carlos Soto, A. Tsaris, Mathialakan Thavappiragasam, Andreas F. Tillack, Josh V. Vermaas, V. Q. Vuong, Junqi Yin, Shinjae Yoo, Mai Zahran, Laura Zanetti-Polzi, et al.
TL;DR: A supercomputer-driven pipeline for in silico drug discovery using enhanced sampling molecular dynamics (MD) and ensemble docking is presented, including the use of quantum mechanical, machine learning, and artificial intelligence methods to cluster MD trajectories and rescore docking poses.
Posted Content
Racism is a Virus: Anti-Asian Hate and Counterhate in Social Media during the COVID-19 Crisis
TL;DR: Analysis of the social network reveals that hateful and counterspeech users interact and engage extensively with one another rather than living in isolated, polarized communities, and that users were highly likely to become hateful after being exposed to hateful content during 2020.
Book ChapterDOI
Spatio-Temporal Graph Transformer Networks for Pedestrian Trajectory Prediction
TL;DR: In this paper, a spatio-temporal graph transformer network is proposed that tackles pedestrian trajectory prediction using only attention mechanisms, modeling crowd interactions and temporal dependencies without recurrence.
Posted Content
Towards Learning a Generic Agent for Vision-and-Language Navigation via Pre-training
TL;DR: This paper presents the first pre-training and fine-tuning paradigm for vision-and-language navigation (VLN) tasks, which leads to significant improvement over existing methods, achieving a new state of the art.
References
Proceedings Article
Attention is All you Need
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
TL;DR: This paper proposes the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-German and English-to-French translation.
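Since only a one-line summary appears here, a minimal sketch of the scaled dot-product attention at the core of that architecture may be useful; the tensor shapes and mask convention below are illustrative.

# Scaled dot-product attention: Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V.
import math
import torch

def attention(q, k, v, mask=None):
    # q, k, v: (batch, heads, seq_len, d_k)
    scores = q @ k.transpose(-2, -1) / math.sqrt(q.size(-1))
    if mask is not None:
        scores = scores.masked_fill(mask == 0, float("-inf"))
    weights = torch.softmax(scores, dim=-1)  # attention distribution over the keys
    return weights @ v

Multi-head attention in the Transformer runs this operation in parallel over several learned projections of Q, K, and V and concatenates the results.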
Proceedings ArticleDOI
ImageNet: A large-scale hierarchical image database
TL;DR: A new database called “ImageNet” is introduced: a large-scale ontology of images built on the backbone of the WordNet structure, much larger in scale and diversity, and much more accurate, than existing image datasets.
Proceedings ArticleDOI
GloVe: Global Vectors for Word Representation
TL;DR: A new global log-bilinear regression model is proposed that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
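For concreteness, the log-bilinear objective in that summary can be written per co-occurrence pair as follows; this sketch uses the paper's published weighting function, with x_max and alpha at the paper's stated defaults, and illustrative variable names.

# GloVe weighted least-squares loss for one word pair (i, j) with co-occurrence count x_ij.
# w_i and w_tilde_j are the word and context vectors; b_i and b_j are their scalar biases.
import math
import torch

def glove_pair_loss(w_i, w_tilde_j, b_i, b_j, x_ij, x_max=100.0, alpha=0.75):
    f = min((x_ij / x_max) ** alpha, 1.0)  # discounts rare pairs, caps frequent ones
    return f * (torch.dot(w_i, w_tilde_j) + b_i + b_j - math.log(x_ij)) ** 2

The full training objective sums this term over all nonzero entries of the global co-occurrence matrix, which is where the "global matrix factorization" flavor of the model comes from.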
Proceedings Article
Distributed Representations of Words and Phrases and their Compositionality
TL;DR: This paper presents a simple method for finding phrases in text, shows that learning good vector representations for millions of phrases is possible, and describes a simple alternative to the hierarchical softmax called negative sampling.
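The negative-sampling objective mentioned there has a compact form: score one (center, context) pair against k sampled noise words. The sketch below assumes the vectors are already looked up from embedding tables; names are chosen for illustration.

# Skip-gram negative-sampling loss for one training pair.
import torch
import torch.nn.functional as F

def neg_sampling_loss(center_vec, context_vec, negative_vecs):
    # Pull the true context word toward the center word...
    pos = F.logsigmoid(torch.dot(center_vec, context_vec))
    # ...and push the k sampled noise words (rows of negative_vecs) away from it.
    neg = F.logsigmoid(-(negative_vecs @ center_vec)).sum()
    return -(pos + neg)

This replaces the full softmax over the vocabulary with k + 1 binary classifications per pair, which is what makes training on billions of tokens tractable.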
Proceedings ArticleDOI
Deep contextualized word representations
Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, Luke Zettlemoyer
TL;DR: This paper introduces a new type of deep contextualized word representation that models both complex characteristics of word use (e.g., syntax and semantics) and how these uses vary across linguistic contexts (i.e., to model polysemy).
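ELMo's contextual representations are consumed by downstream tasks as a learned, task-specific mixture of the bidirectional language model's layers. Below is a minimal sketch of that scalar mixing; the module name and assumed shapes are illustrative.

# Task-specific weighted combination of frozen biLM layer activations (ELMo-style).
import torch
import torch.nn as nn

class ScalarMix(nn.Module):
    def __init__(self, num_layers):
        super().__init__()
        self.s = nn.Parameter(torch.zeros(num_layers))  # softmax-normalized layer weights
        self.gamma = nn.Parameter(torch.ones(()))       # task-specific scale

    def forward(self, layer_reps):
        # layer_reps: (num_layers, seq_len, dim), frozen biLM activations for one sentence
        w = torch.softmax(self.s, dim=0)
        return self.gamma * (w.view(-1, 1, 1) * layer_reps).sum(dim=0)

Only the mixing weights and scale are trained with the downstream task; the biLM itself stays frozen, which is the key contrast with the fine-tuning approach taken by BERT above.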