Open Access · Proceedings ArticleDOI

mT5: A Massively Multilingual Pre-trained Text-to-Text Transformer

TLDR
This paper proposed a multilingual variant of T5, mT5, which was pre-trained on a new Common Crawl-based dataset covering 101 languages and achieved state-of-the-art performance on many multilingual benchmarks.
Abstract
The recent “Text-to-Text Transfer Transformer” (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks. In this paper, we introduce mT5, a multilingual variant of T5 that was pre-trained on a new Common Crawl-based dataset covering 101 languages. We detail the design and modified training of mT5 and demonstrate its state-of-the-art performance on many multilingual benchmarks. We also describe a simple technique to prevent “accidental translation” in the zero-shot setting, where a generative model chooses to (partially) translate its prediction into the wrong language. All of the code and model checkpoints used in this work are publicly available.
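As an illustration of the text-to-text interface described above, here is a minimal sketch that runs one of the released mT5 checkpoints through the Hugging Face transformers library; the checkpoint name, prompt, and decoding settings are illustrative choices, not the paper's official evaluation setup.

```python
from transformers import MT5ForConditionalGeneration, AutoTokenizer  # pip install transformers sentencepiece

# "google/mt5-small" is the smallest of the publicly released mT5 checkpoints.
tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# Everything is cast as text-to-text: the task is encoded in the input string
# and the answer is decoded from the generated output string.
inputs = tokenizer(
    "summarize: mT5 is a multilingual variant of T5 pre-trained on a "
    "Common Crawl-based corpus covering 101 languages.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Note that, as the paper points out, the released checkpoints were pre-trained only on the unsupervised span-corruption objective, so they need fine-tuning on a downstream task before the generated output is meaningful.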



Citations
Journal ArticleDOI

μPLAN: Summarizing using a Content Plan as Cross-Lingual Bridge

TL;DR: The authors proposed an approach to cross-lingual summarization that uses an intermediate planning step as a cross-lingual bridge, i.e. identifying the salient content and the order in which to present it, separately from the surface form.
Journal ArticleDOI

The BLue Amazon Brain (BLAB): A Modular Architecture of Services about the Brazilian Maritime Territory

TL;DR: The authors describe the current version of BLAB’s architecture and the challenges faced so far, such as the lack of training data and the scattered state of domain information, which pose a considerable challenge for developing artificial intelligence for technical domains.

Ìtàkúròso: Exploiting Cross-Lingual Transferability for Natural Language Generation of Dialogues in Low-Resource, African Languages

TL;DR: The results show that the hypothesis that deep monolingual models learn some abstractions that generalise across languages holds, demonstrating cross-lingual transferability and improving the representation of under-represented African languages.

Comparing domain-specific and domain-general BERT variants for inferred real-world knowledge through rare grammatical features in Serbian

Jelke Bloem
TL;DR: The authors compared the performance of BERTić, a Bosnian-Croatian-Montenegrin-Serbian model, and Multilingual BERT on a Named Entity Recognition (NER) task and a Masked Language Modelling (MLM) task built around the rare phenomenon of indeclinable female foreign names in Serbian.
Proceedings ArticleDOI

GLAMI-1M: A Multilingual Image-Text Fashion Dataset

TL;DR: GLAMI-1M as mentioned in this paper is a multilingual image-text classification dataset and benchmark, which contains images of fashion products with item descriptions, each in 1 of 13 languages.
References
Proceedings Article

Attention is All you Need

TL;DR: This paper proposed the Transformer, a simple network architecture based solely on attention mechanisms, dispensing with recurrence and convolutions entirely, and achieved state-of-the-art performance on English-to-French translation.
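Since this reference is cited for its attention-only architecture, here is a minimal NumPy sketch of the scaled dot-product attention it builds on, softmax(QK^T / sqrt(d_k)) V; the shapes and values are illustrative only, not taken from the paper's code.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the core operation of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq_q, seq_k) similarity scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # (seq_q, d_v) weighted sum of values

# Toy usage with random queries, keys and values.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 16))
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (4, 16)
```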
Posted Content

RoBERTa: A Robustly Optimized BERT Pretraining Approach

TL;DR: It is found that BERT was significantly undertrained and can match or exceed the performance of every model published after it; the best model achieves state-of-the-art results on GLUE, RACE and SQuAD.
Proceedings ArticleDOI

SQuAD: 100,000+ Questions for Machine Comprehension of Text

TL;DR: The Stanford Question Answering Dataset (SQuAD) as mentioned in this paper is a reading comprehension dataset consisting of 100,000+ questions posed by crowdworkers on a set of Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.
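For a concrete picture of the record format this summary describes, the sketch below pulls one example from the Hugging Face datasets library's hosted copy of SQuAD v1.1; using that library is an assumption of this illustration, not part of the original release.

```python
from datasets import load_dataset  # pip install datasets

# Load a single validation example from the hosted SQuAD v1.1 copy.
example = load_dataset("squad", split="validation[:1]")[0]

# Each record pairs a Wikipedia passage with a crowdsourced question;
# the answer is a text span plus its character offset in the passage.
print(example["question"])
print(example["context"][:200])
print(example["answers"])  # {'text': [...], 'answer_start': [...]}
```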
Proceedings ArticleDOI

Unsupervised Cross-lingual Representation Learning at Scale

TL;DR: It is shown that pretraining multilingual language models at scale leads to significant performance gains for a wide range of cross-lingual transfer tasks and, for the first time, that multilingual modeling is possible without sacrificing per-language performance.
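To make the idea of shared cross-lingual representations concrete, here is a small sketch that compares mean-pooled encodings of an English and a French sentence, assuming the publicly released xlm-roberta-base checkpoint and the Hugging Face transformers library; the sentences and pooling choice are illustrative, not the paper's evaluation protocol.

```python
import torch
from transformers import AutoModel, AutoTokenizer  # pip install transformers torch

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")

# Encode the same sentence in two languages and compare mean-pooled representations.
sentences = ["The cat sleeps on the mat.", "Le chat dort sur le tapis."]
batch = tokenizer(sentences, padding=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**batch).last_hidden_state       # (2, seq_len, 768)
mask = batch["attention_mask"].unsqueeze(-1)
pooled = (hidden * mask).sum(1) / mask.sum(1)       # mean over non-padding tokens
similarity = torch.cosine_similarity(pooled[0], pooled[1], dim=0)
print(f"cross-lingual cosine similarity: {similarity:.3f}")
```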
Proceedings ArticleDOI

Universal Language Model Fine-tuning for Text Classification

TL;DR: Universal Language Model Fine-tuning (ULMFiT) as mentioned in this paper is an effective transfer learning method that can be applied to any task in NLP, and introduces techniques that are key for finetuning a language model.
Trending Questions (2)
isiNdebele text generation under NLP using the mT5 tool

The paper does not specifically mention isiNdebele text generation using the mT5 tool. It introduces mT5, a multilingual variant of T5, and demonstrates its performance on multilingual benchmarks.


A Massively Multilingual Pre-trained Text-to-Text Transformer?

The paper introduces mT5, a multilingual variant of T5, which is a massively multilingual pre-trained text-to-text transformer.