Open Access Proceedings Article

LanideNN: Multilingual Language Identification on Character Window

TL;DR
The authors propose a method for textual language identification in which languages can change arbitrarily and the goal is to identify the span of each language. The method is based on bidirectional recurrent neural networks and performs well in both monolingual and multilingual language identification tasks.
Abstract
In language identification, a common first step in natural language processing, we want to automatically determine the language of some input text. Monolingual language identification assumes that the given document is written in one language. In multilingual language identification, the document is usually in two or three languages and we only want their names. We aim one step further and propose a method for textual language identification where languages can change arbitrarily and the goal is to identify the span of each language. Our method is based on bidirectional recurrent neural networks and performs well in monolingual and multilingual language identification tasks on six datasets covering 131 languages. The method maintains its accuracy on short documents and across domains, so it is ideal for off-the-shelf use without preparing training data.
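
As a rough illustration of the approach described in the abstract (a bidirectional recurrent network reading characters and predicting a language label at every position), here is a minimal PyTorch sketch. The class name and all sizes are invented for the example; it is not the authors' implementation.

    import torch
    import torch.nn as nn

    class CharBiRNNTagger(nn.Module):
        """Toy per-character language tagger: embed characters, run a
        bidirectional GRU, and classify every position independently."""
        def __init__(self, n_chars, n_langs, emb=64, hidden=128):
            super().__init__()
            self.emb = nn.Embedding(n_chars, emb)
            self.rnn = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
            self.out = nn.Linear(2 * hidden, n_langs)

        def forward(self, char_ids):              # (batch, seq_len)
            states, _ = self.rnn(self.emb(char_ids))
            return self.out(states)               # (batch, seq_len, n_langs)

    # Toy usage: 256 byte values as the alphabet, 131 labels as in the paper.
    model = CharBiRNNTagger(n_chars=256, n_langs=131)
    window = torch.randint(0, 256, (1, 40))       # one 40-character window
    print(model(window).argmax(dim=-1).shape)     # torch.Size([1, 40])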



Citations
Proceedings Article

Deep Models for Arabic Dialect Identification on Benchmarked Data

TL;DR: The experimental results show that variants of (attention-based) bidirectional recurrent neural networks achieve the best accuracy on the AOC task, significantly outperforming all competitive baselines.
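
As a hedged illustration of what such a model can look like, the sketch below builds a small attention-based bidirectional RNN classifier in PyTorch; the names, sizes, and number of classes are invented and do not reproduce the architectures benchmarked in the paper.

    import torch
    import torch.nn as nn

    class AttnBiRNNClassifier(nn.Module):
        """Toy attention-based bidirectional RNN classifier: attention scores
        turn the per-token RNN states into one weighted sentence vector."""
        def __init__(self, vocab, n_classes, emb=64, hidden=128):
            super().__init__()
            self.emb = nn.Embedding(vocab, emb)
            self.rnn = nn.GRU(emb, hidden, batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)
            self.out = nn.Linear(2 * hidden, n_classes)

        def forward(self, ids):                        # (batch, seq_len)
            states, _ = self.rnn(self.emb(ids))
            weights = torch.softmax(self.attn(states), dim=1)
            return self.out((weights * states).sum(dim=1))

    model = AttnBiRNNClassifier(vocab=5000, n_classes=4)   # e.g. 4 dialects
    print(model(torch.randint(0, 5000, (2, 12))).shape)    # torch.Size([2, 4])
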
Proceedings Article

Sparse Traditional Models Outperform Dense Neural Networks: the Curious Case of Discriminating between Similar Languages

TL;DR: The authors presented the results of their participation in the VarDial 4 shared task on discriminating between closely related languages, comparing simple traditional models based on linear support vector machines (SVMs) with a neural network (NN).
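
A typical "sparse traditional model" for this task is a linear SVM over character n-gram TF-IDF features. The scikit-learn sketch below shows that setup on invented Portuguese/Spanish toy data; it is not the authors' exact feature set or the shared-task corpus.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Toy Portuguese vs. Spanish data standing in for a real shared-task corpus.
    texts = ["bom dia a todos", "buenos dias a todos", "muito obrigado", "muchas gracias"]
    labels = ["pt", "es", "pt", "es"]

    # Character n-grams within word boundaries are a common feature choice
    # for telling closely related languages apart.
    clf = make_pipeline(
        TfidfVectorizer(analyzer="char_wb", ngram_range=(1, 4)),
        LinearSVC(),
    )
    clf.fit(texts, labels)
    print(clf.predict(["bom dia"]))   # -> ['pt'] on this toy data
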
Proceedings Article

A Dataset and Classifier for Recognizing Social Media English

TL;DR: It is found that a demographic language model—which identifies messages with language similar to that used by several U.S. ethnic populations on Twitter—can be used to improve English language identification performance when combined with a traditional supervised language identifier.
Proceedings Article

A Fast, Compact, Accurate Model for Language Identification of Codemixed Text

TL;DR: The authors proposed a fine-grained multilingual language identification model that provides a language code for every token in a sentence, including codemixed text containing multiple languages, by using a feed-forward network with a simple globally constrained decoder.
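
One simple way to realize a globally constrained decoder is to score every token independently and then restrict the label set to the few languages that score best over the whole sentence. The numpy sketch below illustrates that idea on random stand-in scores; it is not the paper's actual decoder.

    import numpy as np

    rng = np.random.default_rng(0)
    n_tokens, n_langs, k = 8, 20, 2       # the decoder permits at most k languages

    # Stand-in for per-token scores from a trained feed-forward network.
    scores = rng.random((n_tokens, n_langs))

    # Global constraint: keep only the k languages with the highest total
    # score over the sentence, then label each token within that reduced set.
    allowed = np.argsort(scores.sum(axis=0))[-k:]
    labels = allowed[np.argmax(scores[:, allowed], axis=1)]
    print(labels)                         # one language id per token, from 'allowed'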

Comparing Approaches to Dravidian Language Identification

TL;DR: The authors used a Naive Bayes classifier with adaptive language models, which has been shown to obtain competitive performance in many language and dialect identification tasks, and a transformer-based model, an approach widely regarded as the state of the art in a number of NLP tasks.
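
The non-adaptive core of such a system, a Naive Bayes classifier over character n-grams, fits in a few lines of scikit-learn. The sketch below uses invented romanized toy strings and omits the adaptive step, in which the language models are re-estimated on confidently labeled test data.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    # Toy romanized examples; real systems train on much larger corpora.
    texts = ["vanakkam nanba", "vanakkam makkale", "namaskaram suhruthe", "namaskaram ellavarkkum"]
    labels = ["tam", "tam", "mal", "mal"]

    clf = make_pipeline(
        CountVectorizer(analyzer="char", ngram_range=(1, 3)),   # char n-gram features
        MultinomialNB(),
    )
    clf.fit(texts, labels)
    print(clf.predict(["vanakkam"]))   # -> ['tam'] on this toy data
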
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
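
The update rule is compact enough to state directly. The numpy sketch below follows the moment-estimation and bias-correction steps described in the paper, using its suggested default hyperparameters; the toy minimization and variable names are only illustrative.

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
        """One Adam update: exponential moving averages of the gradient (m)
        and squared gradient (v), bias-corrected, scale the step size."""
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad ** 2
        m_hat = m / (1 - b1 ** t)              # bias-corrected first moment
        v_hat = v / (1 - b2 ** t)              # bias-corrected second moment
        return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

    # Toy usage: minimize f(x) = x**2, whose gradient is 2x.
    x, m, v = 5.0, 0.0, 0.0
    for t in range(1, 5001):
        x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
    print(x)   # close to 0
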
Journal Article

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete time steps by enforcing constant error flow through constant error carousels within special units.
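
A single LSTM step can be sketched as follows. Note that this is the modern formulation including a forget gate, a later addition by Gers et al., rather than the exact 1997 equations, and the weights are random toy values.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h, c, W, U, b):
        """One LSTM step. The cell state c is the 'constant error carousel':
        it is updated additively under input (i) and forget (f) gates, which
        lets error flow back over long time lags."""
        z = W @ x + U @ h + b                  # all four gates at once
        i, f, o, g = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)             # additive cell-state update
        h = o * np.tanh(c)                     # gated output
        return h, c

    # Toy usage with random weights.
    rng = np.random.default_rng(0)
    n_in, n_hid = 3, 5
    W = rng.standard_normal((4 * n_hid, n_in))
    U = rng.standard_normal((4 * n_hid, n_hid))
    b = np.zeros(4 * n_hid)
    h, c = np.zeros(n_hid), np.zeros(n_hid)
    for _ in range(10):                        # run over a short random sequence
        h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
    print(h)
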
Journal Article

Dropout: a simple way to prevent neural networks from overfitting

TL;DR: It is shown that dropout improves the performance of neural networks on supervised learning tasks in vision, speech recognition, document classification and computational biology, obtaining state-of-the-art results on many benchmark data sets.
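
The mechanism is simple enough to show directly. The sketch below implements the common "inverted" variant, which rescales surviving activations during training; the paper's original formulation instead scales weights at test time, which is equivalent in expectation.

    import numpy as np

    def dropout(a, p, rng, train=True):
        """'Inverted' dropout: zero each unit with probability p during
        training and rescale survivors by 1/(1-p), so the expected
        activation matches test time and no test-time scaling is needed."""
        if not train or p == 0.0:
            return a
        mask = rng.random(a.shape) >= p        # keep each unit with prob. 1-p
        return a * mask / (1.0 - p)

    rng = np.random.default_rng(0)
    print(dropout(np.ones(8), 0.5, rng))       # about half zeros, survivors = 2.0
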
Proceedings Article

Learning Phrase Representations using RNN Encoder--Decoder for Statistical Machine Translation

TL;DR: In this paper, the encoder and decoder of the RNN Encoder-Decoder model are jointly trained to maximize the conditional probability of a target sequence given a source sequence.
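
A minimal encoder-decoder of this shape can be sketched in PyTorch using GRU cells, the gating mechanism this paper introduced. The vocabulary sizes, dimensions, and class name below are invented, and training code is omitted.

    import torch
    import torch.nn as nn

    class Seq2Seq(nn.Module):
        """Toy RNN encoder-decoder: the encoder compresses the source into a
        fixed-size state that initializes the decoder, which is trained to
        predict the target sequence token by token."""
        def __init__(self, src_vocab, tgt_vocab, emb=64, hidden=128):
            super().__init__()
            self.src_emb = nn.Embedding(src_vocab, emb)
            self.tgt_emb = nn.Embedding(tgt_vocab, emb)
            self.enc = nn.GRU(emb, hidden, batch_first=True)
            self.dec = nn.GRU(emb, hidden, batch_first=True)
            self.out = nn.Linear(hidden, tgt_vocab)

        def forward(self, src, tgt_in):
            _, state = self.enc(self.src_emb(src))     # fixed-size summary
            states, _ = self.dec(self.tgt_emb(tgt_in), state)
            return self.out(states)                    # next-token logits

    model = Seq2Seq(src_vocab=100, tgt_vocab=100)
    logits = model(torch.randint(0, 100, (2, 7)), torch.randint(0, 100, (2, 5)))
    print(logits.shape)   # torch.Size([2, 5, 100])
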
Journal Article

Finding Structure in Time

TL;DR: A proposal along these lines, first described by Jordan (1986), is developed: recurrent links provide networks with a dynamic memory, and a method for representing lexical categories and the type/token distinction is suggested.
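
The recurrent links Elman describes amount to feeding the previous hidden state (the "context units") back into the network at every step; below is a one-function numpy sketch with random toy weights.

    import numpy as np

    def elman_step(x, h, W_xh, W_hh, b):
        """One step of a simple (Elman) recurrent network: the previous
        hidden state feeds back in, acting as a dynamic memory."""
        return np.tanh(W_xh @ x + W_hh @ h + b)

    rng = np.random.default_rng(0)
    n_in, n_hid = 4, 8
    W_xh = 0.1 * rng.standard_normal((n_hid, n_in))
    W_hh = 0.1 * rng.standard_normal((n_hid, n_hid))
    b = np.zeros(n_hid)

    h = np.zeros(n_hid)                    # the 'context units' start empty
    for _ in range(5):                     # feed a short random input sequence
        h = elman_step(rng.standard_normal(n_in), h, W_xh, W_hh, b)
    print(h)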