Open Access Proceedings Article

mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer

TLDR
This paper proposed a multilingual variant of T5, mT5, which was pre-trained on a new Common Crawl-based dataset covering 101 languages and achieved state-of-the-art performance on many multilingual benchmarks.
Abstract
The recent “Text-to-Text Transfer Transformer” (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. We also describe a simple technique to prevent “accidental translation” in the zero-shot setting, where a generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model checkpoints used in this work are publicly available.
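As an illustration of the text-to-text interface described above, the following is a minimal sketch of loading one of the publicly released mT5 checkpoints through the Hugging Face transformers library. The library, the MT5ForConditionalGeneration class, and the "google/mt5-small" checkpoint name are assumptions not drawn from the paper, which only states that its code and model checkpoints are publicly available. Note that the released mT5 checkpoints are pre-trained only on the unsupervised objective, so they need task-specific fine-tuning before the generated text is useful.

# Minimal sketch (assumptions: Hugging Face transformers and the
# "google/mt5-small" checkpoint; neither is prescribed by the paper).
from transformers import MT5ForConditionalGeneration, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# Every task is cast as text-to-text: the model reads an input string and
# generates an output string, regardless of language.
inputs = tokenizer(
    "summarize: mT5 is a multilingual variant of T5 pre-trained on a "
    "Common Crawl-based dataset covering 101 languages.",
    return_tensors="pt",
)
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))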



Citations
Journal Article

BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

Teven Le Scao, +386 more
09 Nov 2022
TL;DR: BLOOM is a 176B-parameter, decoder-only Transformer language model trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total).
Journal Article

GPT-NeoX-20B: An Open-Source Autoregressive Language Model

TL;DR: GPT-NeoX-20B, a 20-billion-parameter autoregressive language model trained on the Pile, is introduced; its weights are made freely and openly available to the public under a permissive license.
Journal Article

A Survey of Large Language Models

TL;DR: Large language models (LLMs), obtained by pre-training Transformer models over large-scale corpora, show strong capabilities in solving various NLP tasks; this paper surveys recent advances in LLMs.
References
Proceedings Article

CamemBERT: a Tasty French Language Model

TL;DR: CamemBERT is a French version of BERT, evaluated on part-of-speech tagging, dependency parsing, named entity recognition, and natural language inference.
Proceedings Article

PhoBERT: Pre-trained language models for Vietnamese

TL;DR: Experimental results show that PhoBERT consistently outperforms the recent best pre-trained multilingual model XLM-R and improves the state of the art on multiple Vietnamese-specific NLP tasks, including part-of-speech tagging, dependency parsing, named-entity recognition, and natural language inference.
Posted Content

BERTje : A Dutch BERT Model

TL;DR: While the transformer-based pre-trained language model BERT has helped improve state-of-the-art performance on many natural language processing (NLP) tasks, this work develops and evaluates a monolingual Dutch BERT model called BERTje, which consistently outperforms the equally-sized multilingual BERT model on downstream NLP tasks.
Journal Article

TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages

TL;DR: TyDi QA is a benchmark for information-seeking question answering covering 11 typologically diverse languages.
Proceedings Article

MLQA: Evaluating Cross-lingual Extractive Question Answering

TL;DR: MLQA is a multi-way aligned extractive QA evaluation benchmark intended to spur research in cross-lingual question answering; it contains QA instances in 7 languages: English, Arabic, German, Spanish, Hindi, Vietnamese, and Simplified Chinese.
Trending Questions (2)
isiNdebele text generation under NLP using the mT5 tool

The paper does not specifically mention isiNdebele text generation using the mT5 tool. It introduces mT5, a multilingual variant of T5, and demonstrates its performance on multilingual benchmarks.

A Massively Multilingual Pre-trained Text-to-Text Transformer?

The paper introduces mT5, a multilingual variant of T5, which is a massively multilingual pre-trained text-to-text transformer.