Open Access Journal Article

Natural Language Processing (Almost) from Scratch

TL;DR
A unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling is proposed.
Abstract
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.
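To make the abstract's idea concrete, here is a minimal sketch of the window-based tagging architecture the paper describes: each word is mapped to a learned embedding, the embeddings in a fixed window around the current word are concatenated, and a small feed-forward network produces a score per tag. All sizes, the padding convention, and the random initialization below are illustrative assumptions, not the paper's actual hyperparameters.

```python
import numpy as np

# Toy sizes for illustration; the paper's system uses far larger ones.
vocab_size, embed_dim, window, hidden, n_tags = 10, 4, 3, 8, 5

rng = np.random.default_rng(0)
E = rng.normal(scale=0.1, size=(vocab_size, embed_dim))       # learned word embeddings
W1 = rng.normal(scale=0.1, size=(window * embed_dim, hidden)) # window -> hidden
b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.1, size=(hidden, n_tags))             # hidden -> tag scores
b2 = np.zeros(n_tags)

def tag_scores(word_ids):
    """Score every position of a sentence with a fixed-window net.

    Returns an array of shape (len(word_ids), n_tags): one score
    vector per word, one score per candidate tag.
    """
    pad = window // 2
    padded = [0] * pad + list(word_ids) + [0] * pad  # word id 0 used as padding
    scores = []
    for i in range(len(word_ids)):
        ctx = padded[i:i + window]        # word ids inside the window
        x = E[ctx].reshape(-1)            # concatenate the window's embeddings
        h = np.tanh(x @ W1 + b1)          # nonlinear hidden layer
        scores.append(h @ W2 + b2)        # linear tag scores
    return np.array(scores)

s = tag_scores([3, 7, 2])
print(s.shape)  # one score vector per word: (3, 5)
```

In training, both the network weights and the embedding table `E` are updated by gradient descent, which is what lets the system learn internal representations from data instead of relying on hand-engineered features.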



Citations
Proceedings Article

Learning Continuous Phrase Representations for Translation Modeling

TL;DR: This paper improves the performance of a state-of-the-art phrase-based statistical machine translation system trained on WMT 2012 French-English data by up to 1.3 BLEU points.
Posted Content

Polyglot: Distributed Word Representations for Multilingual NLP

TL;DR: The authors trained word embeddings for more than 100 languages on their corresponding Wikipedias and found their performance competitive with near state-of-the-art methods in English, Danish, and Swedish.
Proceedings Article

Multitask learning for mental health conditions with limited social media data

TL;DR: The proposed framework significantly improves over all baselines and single-task models for predicting mental health conditions, with particularly large gains for conditions with limited data, and establishes for the first time the potential of deep learning for predicting mental health from online user-generated text.
Proceedings Article

Cross-Lingual Transfer Learning for POS Tagging without Cross-Lingual Resources

TL;DR: Evaluated on POS datasets for 14 languages from the Universal Dependencies corpus, the proposed transfer learning model improves POS tagging performance on the target languages without exploiting any linguistic knowledge shared between the source and target languages.
Proceedings Article

A general path-based representation for predicting program properties

TL;DR: In this article, a path-based representation for learning from programs is presented, which allows a learning model to leverage the structured nature of code rather than treating it as a flat sequence of tokens.
References
Journal Article

Gradient-based learning applied to document recognition

TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition; trained with gradient-based learning, it can synthesize a complex decision surface that classifies high-dimensional patterns such as handwritten characters.
Journal Article

A tutorial on hidden Markov models and selected applications in speech recognition

TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
Book

Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference

TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, and provides a coherent explication of probability as a language for reasoning with partial belief.
Journal Article

A fast learning algorithm for deep belief nets

TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Journal Article

Machine learning

TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.