Open Access Journal Article (DOI)

A practical tutorial on autoencoders for nonlinear feature fusion: taxonomy, models, software and guidelines

TLDR
Autoencoders (AEs) have emerged as an alternative to manifold learning for conducting nonlinear feature fusion: they generate reduced feature sets through the fusion of the original features.
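The fusion idea above can be illustrated with a minimal linear autoencoder sketch in NumPy; the paper's models use nonlinear activations and richer architectures, and the data shape, code size, and learning rate here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(200, 10))               # 200 samples, 10 original features
W_enc = rng.normal(scale=0.1, size=(10, 3))  # encoder: 10 -> 3 fused features
W_dec = rng.normal(scale=0.1, size=(3, 10))  # decoder: 3 -> 10 reconstruction

loss0 = np.mean((X @ W_enc @ W_dec - X) ** 2)  # initial reconstruction error
lr = 0.01
for _ in range(500):
    Z = X @ W_enc                 # reduced (fused) feature set
    err = Z @ W_dec - X           # reconstruction residual
    # gradients of the mean squared reconstruction error (up to a constant factor)
    grad_dec = Z.T @ err / len(X)
    grad_enc = X.T @ (err @ W_dec.T) / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss1 = np.mean((X @ W_enc @ W_dec - X) ** 2)  # error after training
codes = X @ W_enc                 # fused 3-dimensional representation
```

Training by minimizing reconstruction error forces the 3-dimensional code to retain as much of the original 10-dimensional information as possible, which is the sense in which the code acts as a fused feature set.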
About
This article was published in Information Fusion on 2018-11-01 and is currently open access. It has received 209 citations to date. The article focuses on the topics: Isomap and Feature (computer vision).


Citations
Journal Article (DOI)

Friendship Inference in Mobile Social Networks: Exploiting Multi-Source Information With Two-Stage Deep Learning Framework

TL;DR: The authors propose TDFI, a two-stage deep learning framework for friendship inference that enables mobile social networks (MSNs) to exploit multi-source information simultaneously rather than hierarchically.
Posted Content

Deconvolution-and-convolution Networks.

TL;DR: This paper proposes a deep deconvolution-and-convolution network (DCNet) for 1-D big data analysis, learned as a deep decoder-encoder network.
Book Chapter (DOI)

Dimensionality Reduction Using Convolutional Autoencoders

TL;DR: The authors implement a convolutional autoencoder to study the impact of kernel size and activation function on the accuracy of the algorithm, concluding that 3×3 is the best choice of kernel size and PReLU the most suitable activation function for the convolutional layers.
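The finding above (3×3 kernels with PReLU activations) can be sketched as a single convolution-plus-activation building block in NumPy; the input image, averaging kernel, and negative slope `alpha` are illustrative assumptions, not values from the paper:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D convolution (cross-correlation) of a single-channel image."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def prelu(x, alpha=0.25):
    """PReLU: identity for positive inputs, slope alpha for negative inputs."""
    return np.where(x > 0, x, alpha * x)

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 input
kernel = np.full((3, 3), 1.0 / 9.0)              # 3x3 averaging kernel
feat = prelu(conv2d(img, kernel))                # one encoder feature map
```

A valid 3×3 convolution over an 8×8 input yields a 6×6 feature map; stacking such blocks (and their transposed counterparts in the decoder) gives the convolutional autoencoder the paper studies.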
Journal Article (DOI)

A Computer Method for Pronation-Supination Assessment in Parkinson’s Disease Based on Latent Space Representations of Biomechanical Indicators

TL;DR: The authors use wearable sensors to obtain biomechanical measurements for evaluating pronation-supination hand movements in Parkinson's disease, introducing a new analysis method and comparing it against other methods reported in the literature.
References
Proceedings Article

Adam: A Method for Stochastic Optimization

TL;DR: This work introduces Adam, an algorithm for first-order gradient-based optimization of stochastic objective functions, based on adaptive estimates of lower-order moments, and provides a regret bound on the convergence rate that is comparable to the best known results under the online convex optimization framework.
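The update described above can be sketched in pure Python: bias-corrected estimates of the first and second moments of the gradient scale each step. The hyperparameter defaults follow the paper; the scalar objective f(x) = x² and the learning rate used in the loop are illustrative choices:

```python
import math

def adam_step(theta, grad, m, v, t, lr=0.001, b1=0.9, b2=0.999, eps=1e-8):
    """One Adam update for a scalar parameter theta at time step t (1-indexed)."""
    m = b1 * m + (1 - b1) * grad           # first-moment estimate
    v = b2 * v + (1 - b2) * grad * grad    # second-moment estimate
    m_hat = m / (1 - b1 ** t)              # bias correction
    v_hat = v / (1 - b2 ** t)
    theta -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return theta, m, v

# Minimize f(x) = x^2, whose gradient is 2x.
x, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    x, m, v = adam_step(x, 2 * x, m, v, t, lr=0.01)
```

Because m_hat / sqrt(v_hat) is roughly ±1 while gradient signs stay consistent, the effective step size is bounded by the learning rate, which is one of Adam's practical appeals.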
Journal Article (DOI)

Long short-term memory

TL;DR: A novel, efficient, gradient-based method called long short-term memory (LSTM) is introduced, which can learn to bridge minimal time lags in excess of 1000 discrete-time steps by enforcing constant error flow through constant error carousels within special units.
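A single step of the gated cell described above can be sketched in NumPy. The fused weight layout (one matrix producing all four gate pre-activations) and the gate ordering are illustrative assumptions; the additive cell update is the "constant error carousel" the TL;DR refers to:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One LSTM step. W: (4H, D) input weights, U: (4H, H) recurrent weights."""
    H = h.shape[0]
    z = W @ x + U @ h + b
    i = sigmoid(z[:H])          # input gate
    f = sigmoid(z[H:2 * H])     # forget gate
    o = sigmoid(z[2 * H:3 * H]) # output gate
    g = np.tanh(z[3 * H:])      # candidate cell state
    c_new = f * c + i * g       # additive update: the constant error carousel
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(0)
D, H = 3, 4                     # toy input and hidden sizes
W = rng.normal(size=(4 * H, D))
U = rng.normal(size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for _ in range(5):
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

Because the cell state is updated additively rather than by repeated matrix multiplication, error signals can flow across many time steps without vanishing.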
Journal Article (DOI)

Gradient-based learning applied to document recognition

TL;DR: This article proposes a graph transformer network (GTN) for document recognition, which can synthesize a complex decision surface capable of classifying high-dimensional patterns such as handwritten characters.
Journal Article (DOI)

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than from G.
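The adversarial process summarized above corresponds to a two-player minimax game over the value function V(D, G):

```latex
\min_G \max_D V(D, G) =
  \mathbb{E}_{x \sim p_{\text{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

D is trained to assign high probability to real samples x and low probability to generated samples G(z), while G is trained to fool D; at the game's optimum the generator's distribution matches the data distribution.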
Book

Deep Learning

TL;DR: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts; it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and video games.