Open Access Proceedings Article

Better Word Representations with Recursive Neural Networks for Morphology

TL;DR
This paper proposes a novel model capable of building representations for morphologically complex words from their morphemes, combining recursive neural networks, where each morpheme is a basic unit, with neural language models to incorporate contextual information when learning morphologically-aware word representations.
Abstract
Vector-space word representations have been very successful in recent years at improving performance across a variety of NLP tasks. However, common to most existing work, words are regarded as independent entities without any explicit relationship among morphologically related words being modeled. As a result, rare and complex words are often poorly estimated, and all unknown words are represented in a rather crude way using only one or a few vectors. This paper addresses this shortcoming by proposing a novel model that is capable of building representations for morphologically complex words from their morphemes. We combine recursive neural networks (RNNs), where each morpheme is a basic unit, with neural language models (NLMs) to consider contextual information in learning morphologically-aware word representations. Our learned models outperform existing word representations by a good margin on word similarity tasks across many datasets, including a new dataset we introduce focused on rare words to complement existing ones in an interesting way.
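As a rough illustration of the composition step the abstract describes, the following is a minimal NumPy sketch. The morpheme inventory, dimensionality, and left-to-right folding order are illustrative assumptions, not the paper's exact setup; the paper additionally trains this structure jointly inside a neural language model.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50  # embedding dimension (illustrative)

# Hypothetical morpheme inventory; the paper obtains morpheme splits
# from an unsupervised morphological segmenter.
morpheme_vecs = {m: rng.normal(scale=0.1, size=d)
                 for m in ("un", "fortune", "ate", "ly")}

W = rng.normal(scale=0.1, size=(d, 2 * d))  # composition matrix
b = np.zeros(d)                             # composition bias

def compose(parent, child):
    # p = f(W [x_parent; x_child] + b), with f = tanh
    return np.tanh(W @ np.concatenate([parent, child]) + b)

def word_vector(morphemes):
    """Fold a word's morphemes into a single word representation."""
    vec = morpheme_vecs[morphemes[0]]
    for m in morphemes[1:]:
        vec = compose(vec, morpheme_vecs[m])
    return vec

v = word_vector(["un", "fortune", "ate", "ly"])  # 'unfortunately'
print(v.shape)  # (50,)
```

Because the composition function is shared across all words, rare and unseen words receive vectors built from the morpheme vectors they share with frequent words, which is the mechanism behind the improvements reported on the rare-word similarity dataset.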



Citations
Proceedings Article

Inducing Embeddings for Rare Words through Morphological Decomposition, Stemming and Bidirectional Translation

TL;DR: A novel algorithm is proposed that induces embeddings for rare words by leveraging morphological decomposition, stemming, and bidirectional translation; it maintains a relatively lightweight model while generating high-quality representations for a wider range of the vocabulary from the same corpus.
Book Chapter

An Attention-Based Approach for Mongolian News Named Entity Recognition

TL;DR: Experimental results show that the attention-based model outperforms the traditional Bi-LSTM-CRF joint model on Mongolian named entity recognition.
Posted Content

Hierarchical Multi Task Learning with Subword Contextual Embeddings for Languages with Rich Morphology

TL;DR: Evaluated on dependency parsing and named entity recognition, subword contextual embeddings consistently outperformed other approaches on all languages tested and achieved state-of-the-art results with few annotation requirements.
Proceedings Article

attr2vec: Jointly Learning Word and Contextual Attribute Embeddings with Factorization Machines

TL;DR: attr2vec, a novel framework for jointly learning embeddings for words and contextual attributes based on factorization machines, is introduced; the resulting embeddings exhibit higher similarity between functionally related words than those of traditional approaches.
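attr2vec's actual training objective differs, but the factorization-machine scoring it builds on can be sketched as follows; the sketch shows only the standard second-order interaction that lets word and attribute latent vectors act as joint embeddings, with all feature ids and sizes invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_feats, k = 8, 4  # one-hot features (words + attributes), latent dimension

w0 = 0.0
w = rng.normal(scale=0.1, size=n_feats)
V = rng.normal(scale=0.1, size=(n_feats, k))  # latent vectors = embeddings

def fm_score(x):
    """Second-order factorization machine score for feature vector x."""
    linear = w0 + w @ x
    s = V.T @ x  # sum of the latent vectors of the active features
    # pairwise term: sum over i<j of <v_i, v_j> * x_i * x_j
    pairwise = 0.5 * (s @ s - ((V ** 2).T @ (x ** 2)).sum())
    return linear + pairwise

# A word (id 1) co-occurring with a context word (id 5) under a
# contextual attribute (id 7); ids are hypothetical.
x = np.zeros(n_feats)
x[[1, 5, 7]] = 1.0
print(fm_score(x))
```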
Journal Article

Are word boundaries useful for unsupervised language learning?

TL;DR: It is shown that gold boundaries can be replaced by boundaries found automatically with an unsupervised segmentation algorithm, and that even modest segmentation quality yields gains on two of the three tasks over basic character/phone-based models without boundary information.
References
Journal Article

WordNet: a lexical database for English

TL;DR: WordNet is an online lexical database designed for use under program control; it provides a more effective combination of traditional lexicographic information and modern computing.
Journal Article

A neural probabilistic language model

TL;DR: The authors propose to learn a distributed representation for words that allows each training sentence to inform the model about an exponential number of semantically neighboring sentences expressible in terms of these representations.
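For readers unfamiliar with the model, a minimal forward pass of such a feed-forward neural probabilistic language model might look like the NumPy sketch below. All sizes are illustrative, and the optional direct input-to-output connections of the original model are omitted.

```python
import numpy as np

rng = np.random.default_rng(1)
V, d, n, h = 1000, 50, 3, 100  # vocab, embed dim, context length, hidden units

C = rng.normal(scale=0.1, size=(V, d))      # shared word-embedding table
H = rng.normal(scale=0.1, size=(h, n * d))  # context -> hidden weights
U = rng.normal(scale=0.1, size=(V, h))      # hidden -> vocabulary scores

def next_word_distribution(context_ids):
    """P(w_t | previous n words) in a Bengio-style feed-forward NLM."""
    x = C[context_ids].reshape(-1)          # concatenate context embeddings
    scores = U @ np.tanh(H @ x)             # unnormalized logits
    e = np.exp(scores - scores.max())
    return e / e.sum()                      # softmax over the vocabulary

p = next_word_distribution([12, 7, 401])
print(p.shape, round(p.sum(), 6))  # (1000,) 1.0
```

Because the embedding table C is shared across all positions, every training sentence updates representations that generalize to sentences with semantically similar words, which is the generalization the TL;DR refers to.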
Journal Article

Natural Language Processing (Almost) from Scratch

TL;DR: A unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling is proposed.
Proceedings Article

A unified architecture for natural language processing: deep neural networks with multitask learning

TL;DR: This work describes a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense using a language model.
Proceedings Article

Recurrent neural network based language model

TL;DR: Results indicate that a mixture of several RNN LMs can reduce perplexity by around 50% compared to a state-of-the-art backoff language model.
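To make the "mixture" concrete: a linear interpolation of per-word probabilities from several models, evaluated by perplexity, can be sketched as follows. All probabilities and weights here are invented for illustration.

```python
import numpy as np

# Toy probabilities that three hypothetical language models assign to each
# word of a held-out sequence (rows: models, columns: word positions).
probs = np.array([
    [0.10, 0.05, 0.20, 0.08],
    [0.12, 0.04, 0.18, 0.10],
    [0.09, 0.06, 0.22, 0.07],
])
weights = np.array([0.5, 0.3, 0.2])      # interpolation weights, sum to 1

mixture = weights @ probs                # linearly interpolated P(w_i)
ppl = np.exp(-np.mean(np.log(mixture)))  # perplexity of the mixture
print(f"mixture perplexity: {ppl:.2f}")
```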