CLUE: A Chinese Language Understanding Evaluation Benchmark
Liang Xu,Hai Hu,Xuanwei Zhang,Lu Li,Chenjie Cao,Yudong Li,Yechen Xu,Kai Sun,Dian Yu,Cong Yu,Yin Tian,Qianqian Dong,Weitang Liu,Bo Shi,Yiming Cui,Junyi Li,Jun Zeng,Rongzhao Wang,Weijian Xie,Yanting Li,Yina Patterson,Zuoyu Tian,Yiwen Zhang,He Zhou,Shaoweihua Liu,Zhe Zhao,Qipeng Zhao,Cong Yue,Xinrui Zhang,Zhengliang Yang,Kyle Richardson,Zhenzhong Lan +31 more
pp. 4762–4772
TLDR
The first large-scale Chinese Language Understanding Evaluation (CLUE) benchmark is introduced: an open-ended, community-driven project that brings together 9 tasks spanning several well-established single-sentence/sentence-pair classification tasks, as well as machine reading comprehension, all on original Chinese text.
Abstract
The advent of natural language understanding (NLU) benchmarks for English, such as GLUE and SuperGLUE, allows new NLU models to be evaluated across a diverse set of tasks. These comprehensive benchmarks have facilitated a broad range of research and applications in natural language processing (NLP). The problem, however, is that most such benchmarks are limited to English, which has made it difficult to replicate many of the successes in English NLU for other languages. To help remedy this issue, we introduce the first large-scale Chinese Language Understanding Evaluation (CLUE) benchmark. CLUE is an open-ended, community-driven project that brings together 9 tasks spanning several well-established single-sentence/sentence-pair classification tasks, as well as machine reading comprehension, all on original Chinese text. To establish results on these tasks, we report scores using an exhaustive set of current state-of-the-art pre-trained Chinese models (9 in total). We also introduce a number of supplementary datasets and additional tools to help facilitate further progress on Chinese NLU. Our benchmark is released at https://www.cluebenchmarks.com
Citations
Posted Content
Pre-Training with Whole Word Masking for Chinese BERT
TL;DR: The whole word masking (wwm) strategy for Chinese BERT is introduced, along with a series of Chinese pre-trained language models, and a simple but effective model called MacBERT is proposed, which improves upon RoBERTa in several ways.
Journal Article
A Survey of Large Language Models
Wayne Xin Zhao,Kun Zhou,Junyi Li,Tianyi Tang,Xiaolei Wang,Yupeng Hou,Yingqian Min,Beichen Zhang,Junjie Zhang,Zican Dong,Yifan Du,Chen Yang,Yushuo Chen,Zhongyong Chen,Jinhao Jiang,Ruiyang Ren,Yifan Li,Xinyu Tang,Zikang Liu,Peiyu Liu,Jian-Yun Nie,Ji-Rong Wen +21 more
TL;DR: Surveys large language models (LLMs), which are built by pre-training Transformer models over large-scale corpora and show strong capabilities in solving various NLP tasks.
Proceedings Article
GLM-130B: An Open Bilingual Pre-trained Model
Aohan Zeng,Xiao Liu,Zhengxiao Du,Zihan Wang,Hanyu Lai,Ming Ding,Zhuoyi Yang,Yifan Xu,Wendi Zheng,Xiao Xia,Weng Lam Tam,Zixuan Ma,Yufei Xue,Jidong Zhai,Wenguang Chen,Feng Zhang,Yuxiao Dong,Jie Tang +17 more
TL;DR: Introduces GLM-130B, an attempt to open-source a 100B-scale model at least as good as GPT-3, and unveils how models of such a scale can be successfully pre-trained, covering its design choices, training strategies for both efficiency and stability, and engineering efforts.
Proceedings Article
Improving Sign Language Translation with Monolingual Data by Sign Back-Translation
TL;DR: This paper proposes a sign back-translation (SignBT) approach that incorporates massive spoken language texts into SLT training to tackle the parallel data bottleneck, obtaining a substantial improvement over previous state-of-the-art SLT methods.
Proceedings Article
OCNLI: Original Chinese Natural Language Inference
TL;DR: The Original Chinese Natural Language Inference dataset (OCNLI) as mentioned in this paper is the first large-scale natural language inference dataset for Chinese, consisting of 56,000 annotated sentence pairs.
References
Proceedings Article
Attention is All you Need
Ashish Vaswani,Noam Shazeer,Niki Parmar,Jakob Uszkoreit,Llion Jones,Aidan N. Gomez,Lukasz Kaiser,Illia Polosukhin +7 more
TL;DR: This paper proposes a simple network architecture based solely on an attention mechanism, dispensing with recurrence and convolutions entirely, and achieves state-of-the-art performance on English-to-French translation.
Proceedings Article
Glove: Global Vectors for Word Representation
TL;DR: A new global log-bilinear regression model that combines the advantages of the two major model families in the literature, global matrix factorization and local context window methods, and produces a vector space with meaningful substructure.
Proceedings Article
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
TL;DR: BERT as mentioned in this paper pre-trains deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context in all layers, which can be fine-tuned with just one additional output layer to create state-of-the-art models for a wide range of tasks.
Proceedings Article
Distributed Representations of Words and Phrases and their Compositionality
TL;DR: This paper presents a simple method for finding phrases in text, and shows that learning good vector representations for millions of phrases is possible and describes a simple alternative to the hierarchical softmax called negative sampling.
Posted Content
RoBERTa: A Robustly Optimized BERT Pretraining Approach
Yinhan Liu,Myle Ott,Naman Goyal,Jingfei Du,Mandar Joshi,Danqi Chen,Omer Levy,Michael Lewis,Luke Zettlemoyer,Veselin Stoyanov +9 more
TL;DR: It is found that BERT was significantly undertrained, and can match or exceed the performance of every model published after it, and the best model achieves state-of-the-art results on GLUE, RACE and SQuAD.