Open Access Proceedings Article

Better Word Representations with Recursive Neural Networks for Morphology

TLDR
This paper combines recursive neural networks, in which each morpheme is a basic unit, with neural language models to incorporate contextual information when learning morphologically-aware word representations, and proposes a novel model capable of building representations for morphologically complex words from their morphemes.
Abstract
Vector-space word representations have been very successful in recent years at improving performance across a variety of NLP tasks. However, common to most existing work, words are regarded as independent entities without any explicit relationship among morphologically related words being modeled. As a result, rare and complex words are often poorly estimated, and all unknown words are represented in a rather crude way using only one or a few vectors. This paper addresses this shortcoming by proposing a novel model that is capable of building representations for morphologically complex words from their morphemes. We combine recursive neural networks (RNNs), where each morpheme is a basic unit, with neural language models (NLMs) to consider contextual information in learning morphologically-aware word representations. Our learned models outperform existing word representations by a good margin on word similarity tasks across many datasets, including a new dataset we introduce focused on rare words to complement existing ones in an interesting way.
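To make the composition idea concrete, below is a minimal sketch of building a word vector from morpheme vectors with a recursive combination step, in the spirit of the model the abstract describes. The toy dimensionality, the random parameters W_m and b_m, and the Morfessor-style segmentation of "unfortunately" are illustrative assumptions, not the paper's trained model.

```python
import numpy as np

# Hedged sketch of recursive morpheme composition:
# fold a stem with its affixes using p = tanh(W_m [parent; morpheme] + b_m).
dim = 4
rng = np.random.default_rng(0)
W_m = rng.normal(scale=0.1, size=(dim, 2 * dim))  # composition matrix (assumed, untrained)
b_m = np.zeros(dim)                               # composition bias

# Toy morpheme embedding table (would normally be learned).
morpheme_vecs = {
    "un": rng.normal(size=dim),
    "fortune": rng.normal(size=dim),
    "ate": rng.normal(size=dim),
    "ly": rng.normal(size=dim),
}

def compose(parent, child):
    """One recursive step: combine the vector built so far with the next morpheme."""
    return np.tanh(W_m @ np.concatenate([parent, child]) + b_m)

def word_vector(morphemes):
    """Build a word representation from its morpheme sequence, left to right."""
    vec = morpheme_vecs[morphemes[0]]
    for m in morphemes[1:]:
        vec = compose(vec, morpheme_vecs[m])
    return vec

# "unfortunately" ~ un + fortune + ate + ly (illustrative segmentation)
print(word_vector(["un", "fortune", "ate", "ly"]).round(3))
```

In the full model, such composed word vectors feed into a neural language model, so the surrounding context also shapes the learned morpheme embeddings and composition parameters.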


Citations

Better Language Model with Hypernym Class Prediction

He Bai
TL;DR: This study hypothesizes that class-based prediction leads to an implicit context aggregation for similar words and can thus improve generalization for rare words; accordingly, large neural LMs are trained by gradually annealing from class prediction to token prediction during training.
Proceedings Article

Ontology-based concept similarity integrating image semantic and visual information

TL;DR: This paper proposes an ontology-based concept similarity measure that simultaneously utilizes image semantic annotations and visual features to optimize ontology-based metrics, and demonstrates the effectiveness of the proposed method.
Book Chapter

FMEBA: A Fusion Multi-feature Model for Chinese Out of Vocabulary Word Embedding Generation

TL;DR: A Fusion Multi-feature Encoder Based on Attention (FMEBA) is proposed for processing Chinese OOV words, in which the radical feature of characters is used alongside a character-level Transformer encoder that processes character-sequence and context information.

Learning Embeddings for Text and Images from Structure of the Data

TL;DR: The author states that the book is not intended to be taken as gospel, but rather to be used as a guide to further study.
References
Journal Article

WordNet: a lexical database for English

TL;DR: WordNet is an online lexical database designed for use under program control; it provides a more effective combination of traditional lexicographic information and modern computing.
Journal Article

A neural probabilistic language model

TL;DR: The authors propose learning a distributed representation for words that allows each training sentence to inform the model about an exponential number of semantically neighboring sentences, which can be expressed in terms of these representations.
Journal Article

Natural Language Processing (Almost) from Scratch

TL;DR: A unified neural network architecture and learning algorithm is proposed that can be applied to various natural language processing tasks, including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling.
Proceedings Article

A unified architecture for natural language processing: deep neural networks with multitask learning

TL;DR: This work describes a single convolutional neural network architecture that, given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words, and the likelihood that the sentence makes sense, as judged by a language model.
Proceedings Article

Recurrent neural network based language model

TL;DR: Results indicate that it is possible to obtain around a 50% reduction in perplexity by using a mixture of several RNN LMs, compared to a state-of-the-art backoff language model.
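As a rough illustration of the mixture result this TL;DR refers to, the sketch below linearly interpolates next-word probabilities from several language models and computes perplexity over a test stream. The toy probabilities and the uniform interpolation weights are assumptions for illustration, not values from the paper.

```python
import math

def mixture_perplexity(per_model_probs, weights):
    """per_model_probs: one list [p(w_1), ..., p(w_N)] per LM, scored on the
    same test tokens; weights: mixture weights summing to 1."""
    n_tokens = len(per_model_probs[0])
    log_prob = 0.0
    for t in range(n_tokens):
        # Interpolate the models' probabilities for token t, then accumulate log-prob.
        p_mix = sum(w * probs[t] for w, probs in zip(weights, per_model_probs))
        log_prob += math.log(p_mix)
    return math.exp(-log_prob / n_tokens)

# Two toy "models" scoring a 4-token test stream (hypothetical numbers).
lm_a = [0.10, 0.02, 0.30, 0.05]
lm_b = [0.05, 0.20, 0.10, 0.08]
print(mixture_perplexity([lm_a], [1.0]))             # single model
print(mixture_perplexity([lm_a, lm_b], [0.5, 0.5]))  # two-model mixture
```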