Journal ArticleDOI

A systematic review of deep transfer learning for machinery fault diagnosis

TLDR
This paper reviews recent research progress on deep transfer learning for machinery fault diagnosis, summarizing, classifying and explaining many publications on this topic and discussing various deep transfer architectures and related theories.
About
This article was published in Neurocomputing on 2020-09-24 and has received 193 citations to date. The article focuses on the topics: Transfer of learning & Deep learning.


Citations
Journal ArticleDOI

A perspective survey on deep transfer learning for fault diagnosis in industrial scenarios: Theories, applications and challenges

TL;DR: Deep Transfer Learning (DTL) is a new paradigm of machine learning, which can not only leverage the advantages of Deep Learning (DL) in feature representation, but also benefit from the superiority of transfer learning (TL) in knowledge transfer.
Journal ArticleDOI

Meta-learning for few-shot bearing fault diagnosis under complex working conditions

TL;DR: A novel meta-learning fault diagnosis method (MLFD) based on model-agnostic meta-learning is proposed; by leveraging the learned knowledge, it achieves fast and accurate few-shot bearing fault diagnosis under unseen working conditions.
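The model-agnostic meta-learning procedure this summary refers to can be sketched as a two-level optimization: an inner gradient step adapts to each task's support set, and an outer step updates the initialization against the query loss. The snippet below is a minimal sketch only; the network, the task sampler, and the hyperparameters (inner_lr, n_tasks) are placeholder assumptions, not the MLFD authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

# Toy classifier standing in for a fault-diagnosis model (sizes are illustrative).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 4))
outer_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr, n_tasks = 0.01, 8

def sample_task():
    """Placeholder task sampler: returns (support_x, support_y, query_x, query_y)."""
    xs, xq = torch.randn(5, 64), torch.randn(15, 64)          # 5-shot support, 15 query
    ys, yq = torch.randint(0, 4, (5,)), torch.randint(0, 4, (15,))
    return xs, ys, xq, yq

for step in range(100):                        # outer loop over meta-iterations
    outer_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(n_tasks):                   # loop over sampled tasks (working conditions)
        xs, ys, xq, yq = sample_task()
        params = {k: v for k, v in model.named_parameters()}
        # Inner loop: one gradient step on the support set, keeping the graph for meta-gradients
        loss_s = F.cross_entropy(functional_call(model, params, (xs,)), ys)
        grads = torch.autograd.grad(loss_s, list(params.values()), create_graph=True)
        adapted = {k: v - inner_lr * g for (k, v), g in zip(params.items(), grads)}
        # Outer objective: query loss of the adapted parameters
        meta_loss = meta_loss + F.cross_entropy(functional_call(model, adapted, (xq,)), yq)
    (meta_loss / n_tasks).backward()
    outer_opt.step()
```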
Journal ArticleDOI

Prognostics and health management: A review from the perspectives of design, development and decision

TL;DR: This review digs into the essence of the DE3 (design, development and decision) problems of PHM by reviewing prior research, extracting conclusions from 235 related publications, and exposing the current methodologies and solution frameworks for addressing the DE3 issues.
Journal ArticleDOI

Transfer learning for remaining useful life prediction of multi-conditions bearings based on bidirectional-GRU network

TL;DR: A new transfer learning method based on a bidirectional Gated Recurrent Unit (TBiGRU) is proposed to accurately predict the RUL of bearings under different working conditions; it can adaptively recognize different running states of bearings, obtain the corresponding training labels, and achieve better RUL prediction performance across working conditions.
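A bidirectional-GRU regressor of the kind referenced here can be sketched as follows. The layer sizes, window length, and training data are illustrative assumptions and do not reproduce the TBiGRU method itself.

```python
import torch
import torch.nn as nn

class BiGRURul(nn.Module):
    """Minimal bidirectional-GRU regressor for RUL prediction (illustrative sizes)."""
    def __init__(self, n_features=14, hidden=64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, num_layers=2,
                          batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)    # forward + backward states -> RUL estimate

    def forward(self, x):                        # x: (batch, time, n_features)
        out, _ = self.gru(x)
        return self.head(out[:, -1, :])          # predict RUL from the last time step

model = BiGRURul()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 50, 14)                      # toy batch: 32 windows of 50 time steps
y = torch.rand(32, 1)                            # normalized RUL labels in [0, 1]
loss = nn.functional.mse_loss(model(x), y)
opt.zero_grad(); loss.backward(); opt.step()
```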
References
Proceedings Article

ImageNet Classification with Deep Convolutional Neural Networks

TL;DR: A large deep convolutional neural network achieved state-of-the-art performance on ImageNet classification; the network consists of five convolutional layers, some of which are followed by max-pooling layers, and three fully-connected layers with a final 1000-way softmax.
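The architecture described in this summary can be written down roughly as below; the exact kernel sizes, channel counts, and regularization details follow the published AlexNet configuration only approximately and should be read as an illustration.

```python
import torch
import torch.nn as nn

# Rough AlexNet-style sketch: five conv layers (some followed by max-pooling)
# and three fully-connected layers ending in 1000-way logits (softmax in the loss).
alexnet_like = nn.Sequential(
    nn.Conv2d(3, 96, kernel_size=11, stride=4, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(96, 256, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Conv2d(256, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 384, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(384, 256, kernel_size=3, padding=1), nn.ReLU(),
    nn.MaxPool2d(3, stride=2),
    nn.Flatten(),
    nn.Linear(256 * 6 * 6, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 4096), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(4096, 1000),
)

logits = alexnet_like(torch.randn(1, 3, 224, 224))
print(logits.shape)  # torch.Size([1, 1000])
```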
Journal ArticleDOI

Generative Adversarial Nets

TL;DR: A new framework for estimating generative models via an adversarial process, in which two models are trained simultaneously: a generative model G that captures the data distribution and a discriminative model D that estimates the probability that a sample came from the training data rather than G.
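The adversarial process in this summary alternates between updating the discriminator D and the generator G. A minimal training-loop sketch is given below, with toy network sizes and a placeholder data distribution chosen purely for illustration (the non-saturating generator loss is used rather than the original minimax form).

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # sample -> real/fake logit
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 3.0          # placeholder "data distribution"
    z = torch.randn(64, 16)

    # Update D: push real samples toward label 1 and generated samples toward label 0
    fake = G(z).detach()
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Update G: fool D into assigning label 1 to generated samples
    loss_g = bce(D(G(z)), torch.ones(64, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```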
Journal ArticleDOI

Reducing the Dimensionality of Data with Neural Networks

TL;DR: This article describes an effective way of initializing the weights that allows deep autoencoder networks to learn low-dimensional codes that work much better than principal components analysis as a tool for reducing the dimensionality of data.
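The deep autoencoder structure behind this summary can be sketched as an encoder that compresses inputs to a narrow code layer and a decoder that reconstructs them. The sketch below uses illustrative layer sizes and plain end-to-end training; the RBM-based layer-wise pretraining the paper uses to initialize the weights is deliberately omitted.

```python
import torch
import torch.nn as nn

# Minimal deep autoencoder sketch: the narrow 2-D code layer plays the role of the
# low-dimensional codes compared against PCA in the paper.
encoder = nn.Sequential(nn.Linear(784, 256), nn.ReLU(),
                        nn.Linear(256, 64), nn.ReLU(),
                        nn.Linear(64, 2))
decoder = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                        nn.Linear(64, 256), nn.ReLU(),
                        nn.Linear(256, 784), nn.Sigmoid())
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)

x = torch.rand(128, 784)                               # placeholder for flattened images
recon = decoder(encoder(x))
loss = nn.functional.binary_cross_entropy(recon, x)    # reconstruction objective
opt.zero_grad(); loss.backward(); opt.step()
codes = encoder(x)                                     # low-dimensional codes, analogous to PCA scores
```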
Journal ArticleDOI

Representation Learning: A Review and New Perspectives

TL;DR: Recent work in the area of unsupervised feature learning and deep learning is reviewed, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks.
Proceedings Article

Wasserstein Generative Adversarial Networks

TL;DR: This work introduces a new algorithm named WGAN, an alternative to traditional GAN training that can improve the stability of learning, get rid of problems like mode collapse, and provide meaningful learning curves useful for debugging and hyperparameter searches.
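The training procedure summarized here replaces the GAN discriminator with a critic trained to estimate a Wasserstein-distance objective, updated several times per generator step and kept (roughly) Lipschitz by weight clipping. The sketch below uses toy sizes and constants chosen for illustration, not the paper's experimental setup.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))   # generator
C = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))    # critic (no sigmoid)
opt_g = torch.optim.RMSprop(G.parameters(), lr=5e-5)
opt_c = torch.optim.RMSprop(C.parameters(), lr=5e-5)
clip, n_critic = 0.01, 5

for step in range(1000):
    for _ in range(n_critic):                     # several critic updates per generator update
        real = torch.randn(64, 2) * 0.5 + 3.0     # placeholder data distribution
        fake = G(torch.randn(64, 16)).detach()
        loss_c = C(fake).mean() - C(real).mean()  # negative of the Wasserstein estimate
        opt_c.zero_grad(); loss_c.backward(); opt_c.step()
        for p in C.parameters():                  # weight clipping keeps the critic roughly Lipschitz
            p.data.clamp_(-clip, clip)

    loss_g = -C(G(torch.randn(64, 16))).mean()    # generator maximizes the critic score on fakes
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```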