Journal Article

Autoencoder for words

TLDR
A training method that encodes each word into a distinct vector in semantic space, and its relation to low-entropy coding, is presented and applied to the stylistic analysis of two Chinese novels.
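As a rough illustration of the idea only (the summary above does not specify the paper's exact architecture or training procedure), the NumPy toy below trains a one-hidden-layer autoencoder on one-hot word vectors; each word's hidden activation then serves as its vector in "semantic space". The vocabulary, dimensions, learning rate, and loss are all assumptions made for the example.

```python
import numpy as np

# Minimal sketch, not the paper's method: an autoencoder maps one-hot word
# vectors through a low-dimensional hidden layer; each word's hidden
# activation serves as its word vector.
rng = np.random.default_rng(0)

vocab = ["king", "queen", "man", "woman", "apple"]  # illustrative assumption
V, H = len(vocab), 2            # vocabulary size, code dimension (assumed)
X = np.eye(V)                   # one-hot input, one row per word

W_enc = rng.normal(0, 0.1, (V, H))   # encoder weights
W_dec = rng.normal(0, 0.1, (H, V))   # decoder weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(2000):
    code = sigmoid(X @ W_enc)        # encode: V-dim -> H-dim
    recon = sigmoid(code @ W_dec)    # decode: H-dim -> V-dim
    d_recon = (recon - X) * recon * (1 - recon)   # squared-error backprop
    d_code = (d_recon @ W_dec.T) * code * (1 - code)
    W_dec -= lr * code.T @ d_recon
    W_enc -= lr * X.T @ d_code

# Each word's learned code is its vector in "semantic space".
for word, vec in zip(vocab, sigmoid(X @ W_enc)):
    print(word, np.round(vec, 3))
```

A real word autoencoder would be trained on word contexts or co-occurrence statistics rather than bare one-hot reconstruction; this sketch only shows the encode/decode mechanics that yield one vector per word.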
About
This article was published in Neurocomputing on 2014-09-01 and has received 390 citations to date. The article focuses on the topic: Autoencoder.


Citations
Journal Article

Deep learning for visual understanding

TL;DR: The state of the art in deep learning algorithms for computer vision is reviewed, highlighting the contributions and challenges of over 210 recent research papers, and future trends and challenges in designing and training deep neural networks are summarized.
Journal Article

Deep learning algorithms for human activity recognition using mobile and wearable sensor networks: State of the art and research challenges

TL;DR: This review provides in-depth summaries of deep learning methods for mobile and wearable sensor-based human activity recognition and categorises the studies into generative, discriminative, and hybrid methods.
Journal Article

Machine learning on big data

TL;DR: A framework of ML on big data (MLBiD) is introduced to guide the discussion of its opportunities and challenges; it provides directions for identifying associated opportunities and challenges and opens up future work in many unexplored or underexplored research areas.
Posted Content

Deep Learning for Anomaly Detection: A Survey.

TL;DR: A structured and comprehensive overview of research methods in deep learning-based anomaly detection is presented, grouping state-of-the-art techniques into categories based on the underlying assumptions and approach adopted.
Journal Article

Quantum autoencoders for efficient compression of quantum data

TL;DR: The model of a quantum autoencoder is introduced to perform analogous tasks on quantum data; it is used to compress ground states of the Hubbard model and molecular Hamiltonians, and a simple programmable circuit is shown that can be trained as an efficient autoencoder.
References
Journal Article

Learning the parts of objects by non-negative matrix factorization

TL;DR: An algorithm for non-negative matrix factorization is demonstrated that is able to learn parts of faces and semantic features of text, in contrast to other methods that learn holistic, not parts-based, representations.
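Lee and Seung's multiplicative update rules for the factorization X ≈ WH are standard; a minimal NumPy sketch follows, with toy data and rank chosen arbitrarily for illustration.

```python
import numpy as np

# Minimal sketch of Lee & Seung's multiplicative updates for X ≈ W @ H,
# minimizing squared reconstruction error under non-negativity constraints.
rng = np.random.default_rng(0)

X = rng.random((20, 30))   # non-negative data, e.g. pixels or word counts
r = 5                      # number of parts / components (assumed)
W = rng.random((20, r))
H = rng.random((r, 30))

eps = 1e-9                 # guard against division by zero
for _ in range(200):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

print("reconstruction error:", np.linalg.norm(X - W @ H))
```

Because W and H stay non-negative, each column of W tends to represent an additive "part", which is what distinguishes NMF from PCA or vector quantization.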
Journal Article

Finding Structure in Time

TL;DR: A proposal along these lines, first described by Jordan (1986), is developed, which involves the use of recurrent links to provide networks with a dynamic memory, and a method is suggested for representing lexical categories and the type/token distinction.
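The mechanism is concrete enough to sketch. Below is a minimal NumPy toy of such a simple recurrent network; the task, layer sizes, and learning rate are assumptions for illustration, not Elman's experiments.

```python
import numpy as np

# Minimal sketch of an Elman-style simple recurrent network: "context"
# units hold a copy of the previous hidden state, giving the network a
# dynamic memory. Training is plain backpropagation at each time step,
# with the copied context treated as fixed input (no unrolling), as in
# Elman (1990).
rng = np.random.default_rng(0)

seq = [0, 0, 1, 1] * 10   # "a a b b ...": the symbol after "a" depends on
V, H = 2, 8               # context, so memory is genuinely needed
W_in = rng.normal(0, 0.5, (V, H))
W_ctx = rng.normal(0, 0.5, (H, H))
W_out = rng.normal(0, 0.5, (H, V))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for epoch in range(300):
    h = np.zeros(H)
    for x_t, y_t in zip(seq[:-1], seq[1:]):
        x, ctx = np.eye(V)[x_t], h
        h = np.tanh(x @ W_in + ctx @ W_ctx)      # context enters as input
        p = softmax(h @ W_out)
        d_out = p - np.eye(V)[y_t]               # cross-entropy gradient
        d_h = (W_out @ d_out) * (1 - h**2)
        W_out -= lr * np.outer(h, d_out)
        W_in -= lr * np.outer(x, d_h)
        W_ctx -= lr * np.outer(ctx, d_h)

# After training, the prediction following "a" should differ depending on
# whether the context marks it as the first or the second "a".
h = np.zeros(H)
for x_t in seq[:6]:
    h = np.tanh(np.eye(V)[x_t] @ W_in + h @ W_ctx)
    print("after", "ab"[x_t], "->", softmax(h @ W_out).round(2))
```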

Book

Unsupervised learning

TL;DR: In this article, it is argued that an unsupervised learner brings to bear prior biases as to what aspects of the structure of the input should be captured in the output.
Book

Unsupervised learning

H. B. Barlow
TL;DR: It is argued that the redundancy of sensory messages provides the knowledge incorporated in the maps or models, and that a representation whose elements are independent makes it possible to form associations with logical functions of the elements, not just with the elements themselves.