Open Access Journal Article
Natural Language Processing (Almost) from Scratch
TL;DR: A unified neural network architecture and learning algorithm is proposed that can be applied to various natural language processing tasks, including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling.
Abstract:
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.
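The abstract's "learned internal representations" can be illustrated with the paper's window approach: each word in a fixed window is mapped to a learned embedding, the embeddings are concatenated, and a small network scores each tag for the centre word. The sketch below is a minimal, untrained illustration; all sizes, names, and random weights are assumptions, not the paper's actual system.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not the paper's exact hyperparameters.
VOCAB, EMB, WINDOW, HIDDEN, TAGS = 100, 50, 5, 300, 10

E = rng.normal(size=(VOCAB, EMB))              # word lookup table (learned in the paper)
W1 = rng.normal(size=(WINDOW * EMB, HIDDEN))   # hidden layer weights
W2 = rng.normal(size=(HIDDEN, TAGS))           # output layer weights

def tag_scores(window_word_ids):
    """Score each tag for the centre word of a window of word ids."""
    x = E[window_word_ids].reshape(-1)         # concatenate the window's embeddings
    h = np.tanh(x @ W1)                        # the paper uses HardTanh; tanh here
    return h @ W2                              # one unnormalised score per tag

scores = tag_scores([3, 17, 42, 5, 9])         # a 5-word window of word ids
print(scores.shape)                            # (10,)
```

In the paper these weights, including the lookup table itself, are trained jointly, which is what lets the same architecture serve several tagging tasks.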
Citations
Posted Content
Cross-stitch Networks for Multi-task Learning
TL;DR: This paper proposes a principled approach to learning shared representations in Convolutional Networks for multi-task learning, introducing a new sharing unit, the "cross-stitch" unit, which combines the activations from multiple networks and can be trained end-to-end.
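The cross-stitch unit described above is a learned linear mixing of the activations of two task networks at a given layer. A minimal sketch, with illustrative mixing weights rather than learned ones:

```python
import numpy as np

# Learned 2x2 mixing weights; typically initialised near the identity so
# each task starts out mostly using its own activations. Values are illustrative.
alpha = np.array([[0.9, 0.1],
                  [0.1, 0.9]])

def cross_stitch(x_a, x_b, alpha):
    """Linearly combine same-shaped activations from task A and task B networks."""
    x_a_new = alpha[0, 0] * x_a + alpha[0, 1] * x_b
    x_b_new = alpha[1, 0] * x_a + alpha[1, 1] * x_b
    return x_a_new, x_b_new

x_a = np.ones(4)     # a toy activation vector from network A
x_b = np.zeros(4)    # a toy activation vector from network B
y_a, y_b = cross_stitch(x_a, x_b, alpha)
# y_a == 0.9 everywhere, y_b == 0.1 everywhere
```

Because the mixing is differentiable, alpha can be trained end-to-end along with both networks, letting the model decide how much to share per layer.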
Posted Content
Edge Intelligence: Paving the Last Mile of Artificial Intelligence with Edge Computing
TL;DR: A comprehensive survey of the recent research efforts on EI is conducted, which provides an overview of the overarching architectures, frameworks, and emerging key technologies for deep learning model toward training/inference at the network edge.
Proceedings Article
Deep Learning for Entity Matching: A Design Space Exploration
Sidharth Mudgal, Han Li, Theodoros Rekatsinas, AnHai Doan, Youngchoon Park, Ganesh Krishnan, Rohit Deep, Esteban Arcaute, Vijay Raghavendra
TL;DR: The results show that DL does not outperform current solutions on structured EM, but it can significantly outperform them on textual and dirty EM, which suggests that practitioners should seriously consider using DL for textual and dirty EM problems.
Proceedings Article
Using Convolutional Neural Networks to Classify Hate-Speech
Björn Gambäck, Utpal Kumar Sikdar
TL;DR: A deep learning-based Twitter hate-speech text classification system is presented that assigns each tweet to one of four predefined categories: racism, sexism, both (racism and sexism), and non-hate-speech.
Proceedings Article
Two/Too Simple Adaptations of Word2Vec for Syntax Problems
TL;DR: Two simple modifications to the models in the popular Word2Vec tool are presented, in order to generate embeddings more suited to tasks involving syntax.
References
Journal Article
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: In this article, a graph transformer network (GTN) is proposed for handwritten character recognition, which can be used to synthesize a complex decision surface that can classify high-dimensional patterns, such as handwritten characters.
Journal Article
A tutorial on hidden Markov models and selected applications in speech recognition
TL;DR: In this paper, the authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966) and give practical details on methods of implementation of the theory along with a description of selected applications of HMMs to distinct problems in speech recognition.
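One of the basic methods the HMM tutorial covers is the forward algorithm, which computes the likelihood of an observation sequence by summing over hidden state paths in O(T·N²) time rather than enumerating all N^T paths. A minimal sketch with toy, illustrative model parameters:

```python
import numpy as np

# Toy two-state HMM; all probabilities here are illustrative.
A = np.array([[0.7, 0.3],     # state transition probabilities A[i, j] = P(j | i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],     # emission probabilities B[i, k] = P(symbol k | state i)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])     # initial state distribution

def forward(obs):
    """Likelihood of an observation sequence under the HMM (A, B, pi)."""
    alpha = pi * B[:, obs[0]]              # initialisation with the first symbol
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]      # induction: propagate and emit
    return alpha.sum()                     # termination: sum over final states

p = forward([0, 1, 0])                     # P(observing symbols 0, 1, 0)
```

The same trellis structure underlies the Viterbi (best path) and Baum-Welch (parameter estimation) algorithms that the tutorial goes on to describe.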
Book
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
TL;DR: Probabilistic Reasoning in Intelligent Systems is a complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Journal Article
A fast learning algorithm for deep belief nets
TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
Journal Article
Machine learning
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.