Open Access Journal Article
Natural Language Processing (Almost) from Scratch
TLDR
A unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling is proposed.
Abstract:
We propose a unified neural network architecture and learning algorithm that can be applied to various natural language processing tasks including part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. This versatility is achieved by trying to avoid task-specific engineering and therefore disregarding a lot of prior knowledge. Instead of exploiting man-made input features carefully optimized for each task, our system learns internal representations on the basis of vast amounts of mostly unlabeled training data. This work is then used as a basis for building a freely available tagging system with good performance and minimal computational requirements.
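The window approach underlying this kind of tagger can be sketched as follows. This is a minimal illustration, not the authors' released system; the vocabulary size, embedding width, window size, and tag count are all made-up toy values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: vocabulary, embedding width, context window, tag set.
vocab_size, emb_dim, window, n_tags = 100, 50, 5, 4
hidden = 32

# Learned lookup table: one embedding row per word -- the "internal
# representation" that is trained on mostly unlabeled text.
E = rng.normal(0, 0.1, (vocab_size, emb_dim))
W1 = rng.normal(0, 0.1, (window * emb_dim, hidden))
W2 = rng.normal(0, 0.1, (hidden, n_tags))

def tag_scores(word_ids):
    """Score each tag for the center word of a `window`-sized context."""
    x = E[word_ids].reshape(-1)   # concatenate the window's embeddings
    h = np.tanh(x @ W1)           # hidden layer
    return h @ W2                 # one score per tag

scores = tag_scores([3, 17, 42, 8, 99])
print(scores.shape)  # (4,)
```

In the full system the same architecture is reused across tasks; only the output layer (and training data) changes per task.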
Citations
Proceedings Article (DOI)
"Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection
TL;DR: A hybrid convolutional neural network is designed to integrate meta-data with text, showing that this hybrid approach can improve a text-only deep learning model.
Journal Article (DOI)
Evaluating the Visualization of What a Deep Neural Network Has Learned
TL;DR: A general methodology based on region perturbation is presented for evaluating ordered collections of pixels such as heatmaps, and heatmaps computed by three different methods are compared on the SUN397, ILSVRC2012, and MIT Places data sets.
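The region-perturbation idea can be sketched in a few lines: perturb pixels in decreasing order of claimed relevance and watch how fast the classifier's score drops (a steeper drop means a more faithful heatmap). This is a hedged toy sketch, not the paper's protocol; the step count, perturbation noise, and the toy "classifier" are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbation_curve(image, heatmap, score_fn, n_steps=10):
    """Perturb pixels in decreasing order of heatmap relevance and
    record how the classifier score degrades at each step."""
    img = image.copy()
    order = np.argsort(heatmap.ravel())[::-1]   # most relevant first
    scores = [score_fn(img)]
    per_step = max(1, len(order) // n_steps)
    for step in range(n_steps):
        idx = order[step * per_step:(step + 1) * per_step]
        flat = img.ravel()                       # view into img
        flat[idx] = rng.uniform(0, 1, size=len(idx))  # random perturbation
        scores.append(score_fn(img))
    return scores

# Toy "classifier": only looks at the top-left pixel of a 4x4 image.
image = np.ones((4, 4))
heatmap = np.zeros((4, 4)); heatmap[0, 0] = 1.0   # heatmap says the corner matters
score = lambda im: float(im[0, 0])
curve = perturbation_curve(image, heatmap, score, n_steps=4)
print(len(curve))  # 5 scores: the original plus one per perturbation step
```

Here the heatmap correctly points at the pixel the classifier uses, so the score falls at the very first step; a random heatmap would degrade the score much more slowly on average.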
Journal Article (DOI)
A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis
Xiaoxuan Liu, Livia Faes, Aditya Kale, Siegfried K Wagner, Dun Jack Fu, Alice Bruynseels, Thushika Mahendiran, Gabriella Moraes, Mohith Shamdas, Christoph Kern, Joseph R. Ledsam, Martin Schmid, Konstantinos Balaskas, Eric J. Topol, Lucas M. Bachmann, Pearse A. Keane, Alastair K Denniston +18 more
TL;DR: A major finding of the review is that few studies presented externally validated results or compared the performance of deep learning models and health-care professionals using the same sample, which limits reliable interpretation of the reported diagnostic accuracy.
Posted Content
CTRL: A Conditional Transformer Language Model for Controllable Generation
TL;DR: CTRL is released, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior, providing more explicit control over text generation.
Proceedings Article (DOI)
Deep Models Under the GAN: Information Leakage from Collaborative Deep Learning
TL;DR: The authors show that any privacy-preserving collaborative deep learning model is susceptible to a powerful attack exploiting the real-time nature of the learning process: the adversary trains a Generative Adversarial Network (GAN) that generates prototypical samples of the targeted training set that was meant to be private, with the GAN's samples intended to come from the same distribution as the training data.
References
Journal Article (DOI)
Gradient-based learning applied to document recognition
Yann LeCun, Léon Bottou, Yoshua Bengio, Patrick Haffner
TL;DR: A graph transformer network (GTN) is proposed for handwritten character recognition; it can synthesize a complex decision surface capable of classifying high-dimensional patterns such as handwritten characters.
Journal Article (DOI)
A tutorial on hidden Markov models and selected applications in speech recognition
TL;DR: The authors provide an overview of the basic theory of hidden Markov models (HMMs) as originated by L.E. Baum and T. Petrie (1966), give practical details on methods of implementing the theory, and describe selected applications of HMMs to distinct problems in speech recognition.
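One of the basic computations covered by the tutorial, the forward algorithm, can be sketched compactly. The model parameters below are made-up toy numbers for illustration:

```python
import numpy as np

def forward(obs, pi, A, B):
    """Forward algorithm: P(observation sequence) under an HMM with
    initial distribution pi, transition matrix A, emission matrix B."""
    alpha = pi * B[:, obs[0]]             # initialize with the first symbol
    for t in obs[1:]:
        alpha = (alpha @ A) * B[:, t]     # propagate one step, then emit
    return alpha.sum()

# Toy 2-state, 2-symbol model (all probabilities are made up).
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.4, 0.6]])
B = np.array([[0.5, 0.5],
              [0.1, 0.9]])
p = forward([0, 1, 0], pi, A, B)
print(round(p, 4))  # 0.0696
```

This runs in O(T * N^2) time for T observations and N states, instead of summing over all N^T state paths; in practice one would also rescale `alpha` at each step to avoid underflow on long sequences.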
Book
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference
TL;DR: A complete and accessible account of the theoretical foundations and computational methods that underlie plausible reasoning under uncertainty, providing a coherent explication of probability as a language for reasoning with partial belief.
Journal Article (DOI)
A fast learning algorithm for deep belief nets
TL;DR: A fast, greedy algorithm is derived that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory.
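The layer-at-a-time idea can be sketched with a stacked-autoencoder variant: train one layer to reconstruct its input, freeze it, then train the next layer on the first layer's codes. This is a hedged illustration of greedy layer-wise pretraining, not the paper's method, which uses restricted Boltzmann machines trained with contrastive divergence; all sizes and hyperparameters are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def pretrain_layer(X, n_hidden, epochs=50, lr=0.1):
    """Greedily train one tied-weight autoencoder layer on X;
    return the learned weights and the hidden codes for X."""
    n_in = X.shape[1]
    W = rng.normal(0, 0.1, (n_in, n_hidden))
    for _ in range(epochs):
        H = np.tanh(X @ W)        # encode
        X_hat = H @ W.T           # decode with tied weights
        err = X_hat - X
        # Gradient of the reconstruction error w.r.t. W:
        # one term through the encoder, one through the decoder.
        dH = (err @ W) * (1 - H ** 2)
        W -= lr / len(X) * (X.T @ dH + err.T @ H)
    return W, np.tanh(X @ W)

# Stack: each layer is trained on the codes of the layer below.
X = rng.normal(size=(20, 8))
weights, codes = [], X
for n_h in (6, 4):
    W, codes = pretrain_layer(codes, n_h)
    weights.append(W)
print([w.shape for w in weights])  # [(8, 6), (6, 4)]
```

After this unsupervised stage, the stacked weights would typically initialize a deep network that is then fine-tuned on the supervised task.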
Journal Article (DOI)
Machine learning
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.