Open Access Journal Article

A practical tutorial on autoencoders for nonlinear feature fusion: taxonomy, models, software and guidelines

TLDR
Autoencoders (AEs) have emerged as an alternative to manifold learning for conducting nonlinear feature fusion: they can be used to generate reduced feature sets through the fusion of the original ones.
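The tutorial's own code is not reproduced on this page; purely as a hedged illustration of the idea, the PyTorch sketch below trains a single-bottleneck autoencoder on toy data and reads the bottleneck activations off as the fused, reduced feature set. The `Autoencoder` class, layer sizes, and training schedule are illustrative assumptions, not the paper's models.

```python
# Minimal sketch (not from the paper): a single-hidden-layer autoencoder whose
# bottleneck activations serve as the fused, reduced feature set.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):  # hypothetical helper for illustration
    def __init__(self, n_features: int, n_fused: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, n_fused), nn.Tanh())
        self.decoder = nn.Linear(n_fused, n_features)

    def forward(self, x):
        return self.decoder(self.encoder(x))

X = torch.randn(1000, 64)                      # toy data: 64 original features
model = Autoencoder(n_features=64, n_fused=8)  # fuse down to 8 features
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(200):                           # reconstruction training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(X), X)
    loss.backward()
    opt.step()
with torch.no_grad():
    fused = model.encoder(X)                   # reduced feature set, shape (1000, 8)
```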
About
This article was published in Information Fusion on 2018-11-01 and is currently open access. It has received 209 citations to date. The article focuses on the topics Isomap and Feature (computer vision).

Citations
Posted Content

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI

TL;DR: Previous efforts to define explainability in Machine Learning are summarized, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought; a taxonomy of recent contributions related to the explainability of different Machine Learning models is also proposed.
Posted Content

Artificial Intelligence Forecasting of Covid-19 in China

TL;DR: If the data are reliable and there are no secondary transmissions, the AI-inspired methods can accurately forecast the transmission dynamics of Covid-19 across the provinces and cities of China, making them a powerful tool for public health planning and policymaking.
Journal Article

A new divergence measure for belief functions in D–S evidence theory for multisensor data fusion

TL;DR: The proposed RB divergence is the first such measure to consider the correlations both between belief functions and between the subsets of the belief functions, thus allowing it to provide a more convincing and effective solution for measuring the discrepancy between basic belief assignments (BBAs) in D–S evidence theory.
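The RB divergence's definition is not given in this summary. Purely as a hedged illustration of measuring discrepancy between BBAs, the sketch below computes the classical Jousselme distance, which weights mass differences by the Jaccard overlap of focal sets; it is explicitly not the proposed RB divergence.

```python
# Illustration only: the classical Jousselme distance between two basic belief
# assignments (BBAs) -- NOT the RB divergence proposed in the paper, but a
# standard measure of discrepancy between BBAs in D-S evidence theory.
import numpy as np

def jousselme_distance(m1: dict, m2: dict) -> float:
    """d(m1, m2) = sqrt(0.5 * (m1 - m2)^T J (m1 - m2)),
    where J[A, B] = |A & B| / |A | B| (Jaccard index of focal sets)."""
    focal = sorted({*m1, *m2}, key=lambda s: (len(s), sorted(s)))
    v = np.array([m1.get(A, 0.0) - m2.get(A, 0.0) for A in focal])
    J = np.array([[len(A & B) / len(A | B) for B in focal] for A in focal])
    return float(np.sqrt(0.5 * v @ J @ v))

# BBAs over the frame {a, b, c}; keys are focal elements as frozensets.
m1 = {frozenset('a'): 0.6, frozenset('ab'): 0.4}
m2 = {frozenset('b'): 0.5, frozenset('abc'): 0.5}
print(jousselme_distance(m1, m2))
```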
References
Proceedings Article

Greedy Layer-Wise Training of Deep Networks

TL;DR: These experiments confirm the hypothesis that the greedy layer-wise unsupervised training strategy mostly helps the optimization by initializing weights in a region near a good local minimum, giving rise to internal distributed representations that are high-level abstractions of the input and bring better generalization.
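As a hedged sketch of the strategy (the paper also studies RBM-based variants), the Python code below pretrains a stack of autoencoder layers greedily, each on the codes produced by the layer below, then stacks the encoders to initialize a deep network. Layer sizes, epoch counts, and the final classifier head are illustrative assumptions.

```python
# Minimal sketch of greedy layer-wise unsupervised pretraining with stacked
# autoencoders: each layer learns to reconstruct the previous layer's codes.
import torch
import torch.nn as nn

def pretrain_layer(data, n_in, n_hidden, epochs=100, lr=1e-3):
    """Train one autoencoder layer to reconstruct `data`; return its encoder."""
    enc = nn.Sequential(nn.Linear(n_in, n_hidden), nn.Sigmoid())
    dec = nn.Linear(n_hidden, n_in)
    opt = torch.optim.Adam([*enc.parameters(), *dec.parameters()], lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(enc(data)), data)
        loss.backward()
        opt.step()
    return enc

X = torch.randn(500, 32)          # toy unlabeled data
sizes = [32, 16, 8]               # layer widths of the deep network
encoders, h = [], X
for n_in, n_out in zip(sizes, sizes[1:]):
    enc = pretrain_layer(h, n_in, n_out)   # greedy: train on codes from below
    encoders.append(enc)
    with torch.no_grad():
        h = enc(h)                # feed codes upward to the next layer

# The pretrained stack initializes a deep network near a good local minimum;
# it can now be fine-tuned end-to-end with a supervised objective.
deep_net = nn.Sequential(*encoders, nn.Linear(sizes[-1], 10))
```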
Journal Article

A learning algorithm for continually running fully recurrent neural networks

TL;DR: The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks.
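A minimal NumPy sketch of that algorithm, real-time recurrent learning (RTRL), follows: sensitivities ∂y_k/∂w_ij are propagated forward at every step, so the weight update can follow the gradient online without unrolling the network. The network size, toy input/target streams, and learning rate are assumptions for illustration.

```python
# Minimal sketch of real-time recurrent learning (RTRL): each step propagates
# the sensitivities P[k, i, j] = d y_k / d w_ij forward in time.
import numpy as np

rng = np.random.default_rng(0)
m, n, lr = 2, 4, 0.01                 # inputs, recurrent units, learning rate
W = rng.normal(0, 0.1, (n, m + n))    # weights over z = [x(t); y(t)]
y = np.zeros(n)
P = np.zeros((n, n, m + n))           # sensitivity tensor P[k, i, j]

for t in range(500):
    x = rng.normal(size=m)                   # toy input stream
    target = np.sin(np.full(n, t / 10.0))    # toy target for every unit
    z = np.concatenate([x, y])
    y_new = np.tanh(W @ z)
    # Sensitivity recursion:
    # P'[k,i,j] = f'(s_k) * (sum_l W[k, m+l] * P[l,i,j] + delta(k,i) * z_j)
    kron = np.einsum('ki,j->kij', np.eye(n), z)
    P = (1 - y_new**2)[:, None, None] * (
        np.einsum('kl,lij->kij', W[:, m:], P) + kron)
    e = target - y_new                       # instantaneous output error
    W += lr * np.einsum('k,kij->ij', e, P)   # online gradient-following update
    y = y_new
```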
Journal Article

A logical calculus of the ideas immanent in nervous activity

TL;DR: It is shown that many particular choices among possible neurophysiological assumptions are equivalent, in the sense that for every net behaving under one assumption, there exists another net which behaves under the other and gives the same results, although perhaps not in the same time.
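To make the "logical calculus" concrete, here is a small sketch of McCulloch-Pitts threshold units realizing Boolean gates; the particular weights and thresholds are one of many equivalent assignments, in line with the paper's equivalence result.

```python
# Illustration: McCulloch-Pitts threshold units computing Boolean functions.
# A unit fires (outputs 1) when the weighted sum of its binary inputs reaches
# its threshold; AND, OR, and NOT below show the "logical calculus" in action.
def mp_neuron(inputs, weights, threshold):
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mp_neuron([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "AND:", AND(a, b), "OR:", OR(a, b), "NOT a:", NOT(a))
```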
Journal Article

Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1?

TL;DR: These deviations from linearity provide a potential explanation for the weak forms of non-linearity observed in the response properties of cortical simple cells, and they further make predictions about the expected interactions among units in response to naturalistic stimuli.
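The underlying model is to represent a signal with few active atoms of an overcomplete dictionary. As a hedged sketch, the code below assumes the now-standard L1-penalized least-squares formulation and solves it with ISTA; this is not the paper's original learning rule, only an illustration of sparse coding over an overcomplete basis.

```python
# Minimal sketch (assumed formulation): sparse coding of a signal x over an
# overcomplete dictionary D via ISTA, minimizing ||x - D a||^2 + lam * ||a||_1.
import numpy as np

rng = np.random.default_rng(1)
n, k = 16, 64                         # 16-d signal, 64 atoms (overcomplete)
D = rng.normal(size=(n, k))
D /= np.linalg.norm(D, axis=0)        # unit-norm dictionary atoms
x = rng.normal(size=n)

lam = 0.1
L = np.linalg.norm(D, 2) ** 2         # Lipschitz constant of the gradient
a = np.zeros(k)
for _ in range(300):                  # iterative shrinkage-thresholding (ISTA)
    g = a + D.T @ (x - D @ a) / L     # gradient step on the quadratic term
    a = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold

print("active atoms:", np.count_nonzero(a), "of", k)
```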
Proceedings Article

Multi-column deep neural networks for image classification

TL;DR: In this paper, a biologically plausible, wide and deep artificial neural network architecture was proposed that achieves near-human performance on tasks such as the recognition of handwritten digits or traffic signs.
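In the paper, each column is trained on differently preprocessed inputs and the columns' outputs are averaged; the hedged sketch below shows only the committee-averaging step, with untrained toy columns whose architecture and sizes are illustrative assumptions.

```python
# Minimal sketch of the multi-column idea: build several small convolutional
# "columns" (untrained here, for brevity) and average their softmax outputs.
import torch
import torch.nn as nn

def make_column():
    return nn.Sequential(
        nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Flatten(), nn.Linear(8 * 14 * 14, 10),
    )

columns = [make_column() for _ in range(3)]   # three independent columns
x = torch.randn(4, 1, 28, 28)                 # toy batch of 28x28 images

with torch.no_grad():
    probs = torch.stack([col(x).softmax(dim=1) for col in columns])
    prediction = probs.mean(dim=0).argmax(dim=1)   # averaged committee vote
print(prediction)
```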