scispace - formally typeset

Showing papers on "Deep belief network published in 2023"


Journal ArticleDOI
TL;DR: Wang et al. proposed a novel vibration amplitude spectrum imaging feature extraction method using continuous wavelet transform and image conversion, which extracts two-dimensional image features and eliminates the effect of handcrafted features under low signal-to-noise ratio conditions, different operating conditions, and data segmentation.
Abstract: Bearing fault diagnosis is of significance to ensure the safe and reliable operation of a motor. Deep learning provides a powerful ability to extract the features of raw data automatically. A convolutional deep belief network (CDBN) is an effective deep learning method. In this article, a novel vibration amplitude spectrum imaging feature extraction method using continuous wavelet transform and image conversion is proposed, which can extract two-dimensional image features and eliminate the effect of handcrafted features under low signal-to-noise ratio conditions, different operating conditions, and data segmentation. Then, a novel CDBN with Gaussian distribution is constructed to learn representative features for bearing fault classification. The proposed method is tested on a motor bearing dataset with four-class and ten-class tasks, and the results are compared with other methods. The experimental results show that the proposed method achieves significant improvements and is more effective than traditional methods.

52 citations


Journal ArticleDOI
01 Feb 2023-Sensors
TL;DR: In this paper, a Deep Belief Network (DBN) was used to classify six tool conditions (one healthy and five faulty) through image-based vibration signals acquired in real time.
Abstract: The controlled interaction of work material and cutting tool is responsible for the precise outcome of machining activity. Any deviation in cutting parameters such as speed, feed, and depth of cut causes a disturbance to the machining. This leads to the deterioration of a cutting edge and unfinished work material. Recognition and description of tool failure are essential and must be addressed using intelligent techniques. Deep learning is an efficient method that assists in dealing with a large amount of dynamic data. The manufacturing industry generates momentous information every day and has enormous scope for data analysis. Most intelligent systems have been applied toward the prediction of tool conditions; however, they must be explored for descriptive analytics for on-board pattern recognition. In an attempt to recognize the variation in milling operation leading to tool faults, the development of a Deep Belief Network (DBN) is presented. The network classifies six tool conditions in total (one healthy and five faulty) through image-based vibration signals acquired in real time. The model was designed, trained, tested, and validated through datasets collected considering diverse input parameters.

7 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a novel approach of cascading two different types of Restricted Boltzmann Machine (RBM) in the Deep Belief Network (DBN) method for SLA classification using single-lead electrocardiogram (ECG) signals.

5 citations


Journal ArticleDOI
01 Apr 2023-Energy
TL;DR: In this paper, an improved hybrid model based on variational mode decomposition (VMD) optimized by the cuckoo search algorithm (CSA), seasonal autoregressive integrated moving average (SARIMA), and deep belief network (DBN) is put forward for short-term power load prediction.

4 citations


Journal ArticleDOI
29 Jan 2023-Symmetry
TL;DR: In this paper, a hybrid system for cracked tire detection based on the adaptive selection of correlation features and deep belief neural networks was proposed, which has three steps: feature extraction, selection, and classification.
Abstract: Detecting tire defects is crucial for safe driving. Specialized experts or expensive tools such as stereo depth cameras and depth gages are usually used to investigate these defects. In image processing, feature extraction, reduction, and classification are presented as three challenging and symmetric ways to affect the performance of machine learning models. This paper proposes a hybrid system for cracked tire detection based on the adaptive selection of correlation features and deep belief neural networks. The proposed system has three steps: feature extraction, selection, and classification. First, the histogram of oriented gradients extracts features from the tire images. Second, the proposed adaptive correlation feature selection selects important features with a threshold value adapted to the nature of the images. The last step of the system is to predict the image category based on the deep belief neural network technique. The proposed model is tested and evaluated using real images of cracked and normal tires. The experimental results show that the proposed solution performs better than current studies in effectively classifying tire defect images. The proposed hybrid cracked tire detection system based on adaptive correlation feature selection and deep belief neural networks achieved better classification accuracy (88.90%) than belief neural networks (81.6%) and convolutional neural networks (85.59%).

3 citations


Journal ArticleDOI
TL;DR: In this article, a multi-model data-fusion-based deep transfer learning (MMF-DTL) framework was proposed to improve RUL estimation of rolling bearings through degradation images (DI) and pre-trained deep convolutional neural networks (CNNs).

3 citations


Journal ArticleDOI
TL;DR: In this paper, a deep belief neural network-back propagation (DBN-BP) model was proposed to predict the fatigue life of LPBF-fabricated Ti-6Al-4V up to the VHCF regime.

3 citations


Journal ArticleDOI
TL;DR: In this paper, two hybrid DL classifiers, i.e., convolutional neural network (CNN) + deep belief network (DBN) and bidirectional long short-term memory (Bi-LSTM) + gated recurrent network (GRU), were designed and tuned using the proposed optimization algorithms, which results in improved model accuracy.
Abstract: Because of the rise in the number of cyberattacks, the devices that make up the Internet of Things (IoT) environment are experiencing increased levels of security risks. In recent years, a significant number of centralized systems have been developed to identify intrusions into the IoT environment. However, due to the diverse requirements of IoT devices such as dispersion, scalability, resource restrictions, and decreased latency, these strategies were unable to achieve notable outcomes. The present paper introduces two novel metaheuristic optimization algorithms for optimizing the weights of deep learning (DL) models; the use of DL may help in the detection and prevention of cyberattacks of this nature. Furthermore, two hybrid DL classifiers, i.e., convolutional neural network (CNN) + deep belief network (DBN) and bidirectional long short-term memory (Bi-LSTM) + gated recurrent network (GRU), were designed and tuned using the proposed optimization algorithms, which leads to improved model accuracy. The results are evaluated against recent approaches in the relevant field along with the hybrid DL classifiers. Model performance metrics such as accuracy, Rand index, F-measure, and MCC are used to draw conclusions about the models' validity by employing two distinct datasets. Regarding all performance metrics, the proposed approach outperforms both conventional and cutting-edge methods.

3 citations


Journal ArticleDOI
TL;DR: In this paper, a novel metaheuristic optimization algorithm, called swarm spider optimization (SSO), was utilized to optimize the parameters of the DBN so as to improve its performance.
Abstract: Renewable energy power prediction plays a crucial role in the development of renewable energy generation, and it also faces a challenging issue because of the uncertainty and complex fluctuation caused by environmental and climatic factors. In recent years, deep learning has been increasingly applied in the time series prediction of new energy, where Deep Belief Networks (DBN) can perform outstandingly for the learning of nonlinear features. In this paper, we employed the DBN as the prediction model to forecast wind power and PV power. A novel metaheuristic optimization algorithm, called swarm spider optimization (SSO), was utilized to optimize the parameters of the DBN so as to improve its performance. The SSO is an optimization algorithm based on swarm spider behavior, and it can be employed for addressing complex optimization and engineering problems. Considering that the prediction performance of the DBN is affected by the number of nodes in the hidden layer, the SSO is used to optimize this parameter during the training stage of the DBN (called SSO-DBN), which can significantly enhance the DBN prediction performance. Two datasets, including wind power and PV power with their influencing factors, were used to evaluate the forecasting performance of the proposed SSO-DBN. We also compared the proposed model with several well-known methods, and the experiment results demonstrate that the proposed prediction model has better stability and higher prediction accuracy in comparison to other methods.

2 citations


Journal ArticleDOI
TL;DR: In this article, an ensemble-based deep learning method is used to develop a recommendation model in which both ratings and reviews are analyzed simultaneously, and the recommendation method used in this work is based on deep learning, which employs back-propagation neural networks with many hidden layers and varying nodes, facilitating rapid learning.
Abstract: Identifying user preferences is a complex operation, which makes its automation challenging, and existing recommendation systems that rely on only one of the parameters, ratings or reviews, are incapable of performing effectively. In this work, an ensemble-based deep learning method is used to develop a recommendation model in which both ratings and reviews are analyzed simultaneously. The first step is to identify the features required in this work, namely the reviews and ratings of a product. The next step is to conduct sentiment analysis on the reviews to obtain numerical values for subsequent processing, which yields polarity and subjectivity: polarity identifies the emotions expressed in a review, ranging from −1 to +1, and subjectivity identifies the relevance of a particular review. Reviews and ratings are used as inputs for the ensemble-based deep neural network; for this, both inputs need to be in the same range, so scaling is performed. The recommendation method used in this work is based on deep learning and employs back-propagation neural networks with many hidden layers and varying numbers of nodes, facilitating rapid learning. In this paper, a few representative deep learning architectures with varying numbers of hidden layers are selected to improve the learning capability of the model. However, the recommendation model can still be improved; for example, the inability to explain the deep learning recommendation system diminishes its credibility. We assessed our work across hidden-layer architectural variations using measures such as precision, loss, and execution time.
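The "same range" scaling step mentioned above is plain min-max normalization; a minimal sketch (the example values and the [0, 1] target range are illustrative assumptions, not from the paper):

```python
import numpy as np

def min_max_scale(x, lo=0.0, hi=1.0):
    """Scale a feature column into [lo, hi] so ratings and polarity share a range."""
    x = np.asarray(x, dtype=float)
    return lo + (hi - lo) * (x - x.min()) / (x.max() - x.min() + 1e-12)

ratings = [1, 3, 5, 4, 2]               # e.g. 1-5 star ratings
polarity = [-0.8, 0.1, 0.9, 0.4, -0.2]  # sentiment polarity in [-1, +1]

# Stack the two scaled feature columns as network input
X = np.column_stack([min_max_scale(ratings), min_max_scale(polarity)])
print(X.round(2))
```

After scaling, both columns occupy the same range and can be fed jointly to the ensemble network.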

2 citations


Journal ArticleDOI
TL;DR: In this paper, a deep belief network (DBN) was proposed for analyzing the interior-two-flange web crippling performance of cold-formed stainless steel channels with centered and offset web holes.

Journal ArticleDOI
01 Jan 2023
TL;DR: In this paper, a deep learning and improved whale optimization algorithm based framework is proposed for human action recognition, which consists of a few core stages, i.e., initial frame preprocessing, fine-tuned pre-trained deep learning models through transfer learning, feature fusion using a modified serial-based approach, and improved whale optimization based best feature selection for final classification.
Abstract: Human action recognition (HAR) based on Artificial intelligence reasoning is the most important research area in computer vision. Big breakthroughs in this field have been observed in the last few years; additionally, interest in research in this field is evolving, in areas such as the understanding of actions and scenes, the study of human joints, and human posture recognition. Many HAR techniques are introduced in the literature. Nonetheless, the challenge of redundant and irrelevant features reduces recognition accuracy. Existing methods also face a few other challenges, such as differing perspectives, environmental conditions, and temporal variations, among others. In this work, a deep learning and improved whale optimization algorithm based framework is proposed for HAR. The proposed framework consists of a few core stages, i.e., initial frame preprocessing, fine-tuned pre-trained deep learning models through transfer learning (TL), feature fusion using a modified serial-based approach, and improved whale optimization based best feature selection for final classification. Two pre-trained deep learning models, InceptionV3 and ResNet101, are fine-tuned, and TL is employed to train them on action recognition datasets. The fusion process increases the length of feature vectors; therefore, an improved whale optimization algorithm is proposed to select the best features. The best selected features are finally classified using machine learning (ML) classifiers. Four publicly accessible datasets, Ut-interaction, Hollywood, Free Viewpoint Action Recognition using Motion History Volumes (IXMAS), and UCF Sports, are employed, achieving testing accuracies of 100%, 99.9%, 99.1%, and 100%, respectively. Compared with state-of-the-art (SOTA) techniques, the proposed method showed improved accuracy.

Journal ArticleDOI
01 Mar 2023-Cancers
TL;DR: In this article, a marine predator's algorithm with deep learning as a lung and colon cancer classification (MPADL-LC3) technique is presented, which employs CLAHE-based contrast enhancement as a pre-processing step.
Abstract: Simple Summary The histopathological detection of these malignancies is a vital element in determining the optimal solution. Timely and initial diagnosis of the sickness on either front diminishes the possibility of death. Deep learning (DL) and machine learning (ML) methods are used to hasten such cancer recognition, allowing the research community to examine more patients in a much shorter period and at lower cost. Abstract Cancer is a deadly disease caused by various biochemical abnormalities and genetic diseases. Colon and lung cancer have developed as two major causes of disability and death in human beings. The histopathological detection of these malignancies is a vital element in determining the optimal solution. Timely and initial diagnosis of the sickness on either front diminishes the possibility of death. Deep learning (DL) and machine learning (ML) methods are used to hasten such cancer recognition, allowing the research community to examine more patients in a much shorter period and at lower cost. This study introduces a marine predator’s algorithm with deep learning as a lung and colon cancer classification (MPADL-LC3) technique. The presented MPADL-LC3 technique aims to properly discriminate different types of lung and colon cancer on histopathological images. To accomplish this, the MPADL-LC3 technique employs CLAHE-based contrast enhancement as a pre-processing step. In addition, the MPADL-LC3 technique applies MobileNet to derive feature vector generation. Meanwhile, the MPADL-LC3 technique employs MPA as a hyperparameter optimizer. Furthermore, deep belief networks (DBN) can be applied for lung and colon cancer classification. The simulation values of the MPADL-LC3 technique were examined on benchmark datasets. The comparison study highlighted the enhanced outcomes of the MPADL-LC3 system in terms of different measures.
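CLAHE itself is tile-based with a clip limit; as a simpler stand-in, plain global histogram equalization illustrates the contrast-enhancement preprocessing idea (a sketch only, not the paper's exact method):

```python
import numpy as np

def hist_equalize(img):
    """Global histogram equalization of an 8-bit grayscale image."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum().astype(float)
    cdf_min = cdf[cdf > 0][0]
    # Map the cumulative distribution onto the full 0..255 range
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12) * 255)
    return lut.clip(0, 255).astype(np.uint8)[img]

rng = np.random.default_rng(0)
img = rng.integers(100, 156, size=(64, 64), dtype=np.uint8)  # low-contrast image
eq = hist_equalize(img)
print(eq.min(), eq.max())  # intensities spread toward the full 0..255 range
```

CLAHE additionally clips the histogram and equalizes per tile, which limits noise amplification compared with this global version.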

Journal ArticleDOI
01 Jan 2023-Sensors
TL;DR: In this paper, a deep belief network (DBN) was proposed for cable fault identification and localization based on a time-frequency domain joint impedance spectrum, and the DBN-based cable fault type recognition model and location model were constructed and used to realize the type recognition and location of cable faults.
Abstract: To improve the accuracy of shallow neural networks in processing complex signals and cable fault diagnosis, and to overcome the shortage of manual dependency and cable fault feature extraction, a deep learning method is introduced, and a time-frequency domain joint impedance spectrum is proposed for cable fault identification and localization based on a deep belief network (DBN). Firstly, based on the distributed parameter model of power cables, we model and analyze the cables under normal operation and different fault types, and we obtain the headend input impedance spectrum and the headend input time-frequency domain impedance spectrum of cables under various operating conditions. The headend input impedance amplitude and phase of normally operating and different fault cables are extracted as the original input samples of the cable fault type identification model; the real part of the headend input time-frequency domain impedance of the fault cables is extracted as the original input samples of the cable fault location model. Then, unsupervised pre-training and supervised fine-tuning are used for automatically learning, training, and extracting the cable fault state features from the original input samples, and the DBN-based cable fault type recognition model and location model are constructed and used to realize the type recognition and location of cable faults. Finally, the proposed method is validated by simulation, and the results show that the method has good fault feature extraction capability and high fault type recognition and localization accuracy.

Journal ArticleDOI
01 Jan 2023
TL;DR: In this paper, the authors studied the correspondence between p-adic statistical field theories (SFTs) and neural networks and showed that p-adic discrete deep belief networks (DBNs) are universal approximators.
Abstract: In this work we initiate the study of the correspondence between p-adic statistical field theories (SFTs) and neural networks (NNs). In general quantum field theories over a p-adic spacetime can be formulated in a rigorous way. Nowadays these theories are considered just mathematical toy models for understanding the problems of the true theories. In this work we show these theories are deeply connected with the deep belief networks (DBNs). Hinton et al. constructed DBNs by stacking several restricted Boltzmann machines (RBMs). The purpose of this construction is to obtain a network with a hierarchical structure (a deep learning architecture). An RBM corresponds to a certain spin glass, we argue that a DBN should correspond to an ultrametric spin glass. A model of such a system can be easily constructed by using p-adic numbers. In our approach, a p-adic SFT corresponds to a p-adic continuous DBN, and a discretization of this theory corresponds to a p-adic discrete DBN. We show that these last machines are universal approximators. In the p-adic framework, the correspondence between SFTs and NNs is not fully developed. We point out several open problems.
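Hinton's stacking construction referenced above can be sketched with a minimal Bernoulli RBM trained by one-step contrastive divergence (CD-1) and then stacked greedily; layer sizes, learning rate, and epoch count below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_vis, n_hid):
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.a = np.zeros(n_vis)   # visible bias
        self.b = np.zeros(n_hid)   # hidden bias

    def hidden(self, v):
        """Hidden-unit activation probabilities given visibles."""
        return sigmoid(v @ self.W + self.b)

    def cd1_step(self, v0, lr=0.05):
        """One-step contrastive divergence (CD-1) update on a batch of binary rows."""
        h0 = self.hidden(v0)
        h_sample = (rng.random(h0.shape) < h0).astype(float)  # stochastic hidden states
        v1 = sigmoid(h_sample @ self.W.T + self.a)            # reconstructed visibles
        h1 = self.hidden(v1)
        self.W += lr * (v0.T @ h0 - v1.T @ h1) / len(v0)
        self.a += lr * (v0 - v1).mean(axis=0)
        self.b += lr * (h0 - h1).mean(axis=0)

def pretrain_dbn(data, layer_sizes, epochs=20):
    """Greedy layer-wise pretraining: each RBM learns the previous layer's features."""
    rbms, x = [], data
    for n_hid in layer_sizes:
        rbm = RBM(x.shape[1], n_hid)
        for _ in range(epochs):
            rbm.cd1_step(x)
        rbms.append(rbm)
        x = rbm.hidden(x)   # propagate features upward to the next RBM
    return rbms

data = (rng.random((200, 16)) < 0.5).astype(float)
dbn = pretrain_dbn(data, layer_sizes=[8, 4])
print([r.W.shape for r in dbn])  # [(16, 8), (8, 4)]
```

The stacked weight matrices give the hierarchical (deep) architecture the abstract refers to; a supervised fine-tuning pass would typically follow.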

Posted ContentDOI
12 Jan 2023
TL;DR: In this article, a deep unsupervised machine learning model for early detection of diabetes using voting ensemble feature selection and deep belief neural networks (DBN) was proposed, which can help reduce the fatality of this disease.
Abstract: Diabetes mellitus is a common life-threatening disease, and patients may gradually start suffering from other diabetes-caused conditions such as heart attacks, stroke, hypertension, blurry vision, blindness, foot ulcers, amputation, kidney damage, and other organ failures before diagnosis. Early detection can help reduce the fatality of this disease. Deep learning models have proven very useful in disease detection and computer-aided diagnosis. In this work, we proposed a deep unsupervised machine learning model for early detection of diabetes using voting ensemble feature selection and deep belief neural networks (DBN). The dataset was obtained from an online repository containing responses of prediagnosed patients to direct questionnaires administered in Sylhet Diabetes Hospital in Sylhet, Bangladesh. The dataset was preprocessed, and features were reduced using the ensemble feature selector. The DBN model was pretrained and tuned to obtain optimal performance. The model was also compared with models without multiple hidden layers. The DBN performed at its relative best, with F1-measure, precision, and recall of 1.00, 0.92, and 1.00, respectively. We conclude that DBN is a useful tool for unsupervised early prediction of Type II diabetes mellitus.

Journal ArticleDOI
TL;DR: Wang et al. used a stacked autoencoder to construct the model health index through feature learning and information fusion of the vibration signals collected by the sensors, and then a continuous deep belief network was used to perform feature learning on the constructed health index to predict future performance changes in the model.
Abstract: Mechanical fault prediction is one of the main problems in condition-based maintenance, and its purpose is to predict the future working status of the machine based on the collected status information of the machine. However, on one hand, the model health indices based on the information collected by the sensors will directly affect the evaluation results of the system. On the other hand, because the model health index is a continuous time series, the effect of feature learning on continuous data also affects the results of fault prognosis. This paper makes full use of the autonomous information fusion capability of the stacked autoencoder and the strong feature learning capability of continuous deep belief networks for continuous data, and proposes a novel fault prognosis method. Firstly, a stacked autoencoder is used to construct the model health index through the feature learning and information fusion of the vibration signals collected by the sensors. To solve the local fluctuations in the health indices, the exponentially weighted moving average method is used to smooth the index data to reduce the impact of noise. Then, a continuous deep belief network is used to perform feature learning on the constructed health index to predict future performance changes in the model. Finally, a fault prognosis experiment based on bearing data was performed. The experimental results show that the method combines the advantages of stacked autoencoders and continuous deep belief networks, and has a lower prediction error than traditional intelligent fault prognosis methods.
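The exponentially weighted moving average used above to smooth the health index is straightforward; a sketch (the smoothing factor and the synthetic degradation signal are assumptions):

```python
import numpy as np

def ewma(series, alpha=0.2):
    """Exponentially weighted moving average: s_t = alpha*x_t + (1-alpha)*s_{t-1}."""
    out = np.empty(len(series))
    out[0] = series[0]
    for t in range(1, len(series)):
        out[t] = alpha * series[t] + (1 - alpha) * out[t - 1]
    return out

rng = np.random.default_rng(1)
trend = np.linspace(0, 1, 200)                  # slowly degrading health index
noisy = trend + 0.1 * rng.standard_normal(200)  # local fluctuations from noise
smooth = ewma(noisy)
print(np.std(np.diff(smooth)) < np.std(np.diff(noisy)))  # True: smoother series
```

Smaller alpha smooths more aggressively but lags the underlying trend; the paper's exact setting is not stated here.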

Journal ArticleDOI
TL;DR: In this paper, the authors developed an Intelligent Wireless Endoscopic Image Classification using Gannet Optimization Algorithm with Deep Learning (IWEIC-GOADL) model.
Abstract: Wireless capsule endoscopy (WCE) is a non-invasive wireless imaging technology that has gained wider popularity. The main drawback of WCE is that it produces a massive number of images that healthcare professionals must analyze, which is time-consuming. Many researchers have suggested machine learning and image-processing methods for classifying gastrointestinal tract disorders. Data augmentation and classical image processing techniques are integrated with an adjustable pre-trained deep convolutional neural network (DCNN) to categorize diseases in the digestive tract from WCE images. This study develops an Intelligent Wireless Endoscopic Image Classification using Gannet Optimization Algorithm with Deep Learning (IWEIC-GOADL) model. The IWEIC-GOADL technique mainly examines the WCE images for classification purposes. As a preprocessing step, the presented IWEIC-GOADL technique executes the Gabor filtering (GF) method for the noise removal process. In addition, the presented IWEIC-GOADL technique employs a deconvolution VGG19 (DeVGG19) model for feature vector generation, and its hyperparameter tuning process takes place via the GOA. Finally, the IWEIC-GOADL technique applies the deep belief network (DBN) model for WCE image classification purposes. A wide range of simulations was performed on a benchmark dataset to demonstrate the better performance of the IWEIC-GOADL technique. The simulation outcomes confirmed the improvements of the IWEIC-GOADL algorithm over other recent techniques.


Journal ArticleDOI
TL;DR: In this article , an ant lion optimization (ALO) with deep belief network (DBN) for lung cancer detection and classification with survival rate prediction is proposed. But, the proposed model is not suitable for the detection of lung cancer.
Abstract: The integration of machine learning (ML) approaches in healthcare is a massive advantage aimed at curing the illnesses of millions of people. Researchers have made several efforts toward detecting cancer and providing early-phase insights for cancer analysis. Lung cancer remains a leading source of disease-related mortality for both men and women, and its frequency is increasing around the world. Lung cancer is the uncontrolled growth of abnormal cells that begins in one or both lungs. Early detection of cancer is not a simple procedure; however, when it is detected, it can be curable, and predicting the survival rate remains a major challenge. This study develops an ant lion optimization (ALO) with a deep belief network (DBN) for lung cancer detection and classification with survival rate prediction. The proposed model aims to identify and classify the presence of lung cancer. Initially, the proposed model applies a min-max data normalization approach to preprocess the input data. Then, the ALO algorithm is executed to choose an optimal subset of features. In addition, the DBN model receives the chosen features and performs lung cancer classification. Finally, the optimizer is utilized for hyperparameter optimization of the DBN model. To demonstrate the enhanced performance of the proposed model, a wide-ranging experimental analysis is performed, and the results show the supremacy of the proposed model.

Journal ArticleDOI
01 Jan 2023
TL;DR: In this paper, an Intelligent Hyperparameter Tuned Deep Learning-based Human Activity Recognition (IHPTDL-HAR) technique is proposed to recognize human actions in a healthcare environment and help patients in managing their healthcare service.
Abstract: Human Activity Recognition (HAR) has been made simple in recent years, thanks to recent advancements made in Artificial Intelligence (AI) techniques. These techniques are applied in several areas like security, surveillance, healthcare, human-robot interaction, and entertainment. Since a wearable sensor-based HAR system includes in-built sensors, human activities can be categorized based on sensor values. Further, it can also be employed in other applications such as gait diagnosis, observation of children's/adults' cognitive nature, stroke-patient hospital direction, Epilepsy and Parkinson's disease examination, etc. Recently developed AI techniques, especially Deep Learning (DL) models, can be deployed to accomplish effective outcomes in the HAR process. With this motivation, the current research paper focuses on designing an Intelligent Hyperparameter Tuned Deep Learning-based HAR (IHPTDL-HAR) technique for the healthcare environment. The proposed IHPTDL-HAR technique aims at recognizing human actions in the healthcare environment and helps patients in managing their healthcare service. In addition, the presented model makes use of a Hierarchical Clustering (HC)-based outlier detection technique to remove the outliers. The IHPTDL-HAR technique incorporates a DL-based Deep Belief Network (DBN) model to recognize the activities of users. Moreover, the Harris Hawks Optimization (HHO) algorithm is used for hyperparameter tuning of the DBN model. Finally, a comprehensive experimental analysis was conducted upon a benchmark dataset and the results were examined under different aspects. The experimental results demonstrate that the proposed IHPTDL-HAR technique is a superior performer compared to other recent techniques under different measures.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a new forecasting method, i.e., CEEMDAN-DBN-ELM, which decomposes the time series of DGC with NNC from an online DGA monitor into multiple steady components, making it easier for the prediction model to mine the variation characteristics of the sample data.

Journal ArticleDOI
Qi Li, Wenxu Qiao, Yaru Shi, Wei Ba, Fan Wang, Xiaopeng Hu 
TL;DR: Li et al. proposed a novel modeling algorithm for the temperature parameter of the wave rotor refrigeration process based on elastic net and dingo optimization deep belief network (Enet-DOA-DBN).

Journal ArticleDOI
TL;DR: The authors proposed a hybrid model of a Deep Belief Network (DBN)-enhanced Extreme Learning Machine (ELM), where the parameters of the hybrid model are optimized by Particle Swarm Optimization.

Journal ArticleDOI
TL;DR: Zhang et al. proposed an energy-based latent variable model (EBLVM) for unsupervised feature learning, which defines a new energy function for the continuous visible and hidden variables in which the visible variable is transformed by a deep neural network.
Abstract: This paper proposes a new energy-based latent-variable model (EBLVM) for unsupervised feature learning. The joint probability density function of the EBLVM defines a new energy function for the continuous visible and hidden variables in which the visible variable is transformed by a deep neural network. We train the parameters of the new EBLVM using a gradient-based contrastive divergence algorithm. Since the EBLVM has a deep structure and learns by combining all hidden layers, effective features for feature learning can be extracted from each layer. In comparative feature learning experiments using Fashion MNIST and CIFAR10 data, the proposed method shows better recognition performance than the existing stacked RBM, DBN, DBM, and DEM.

Journal ArticleDOI
TL;DR: In this article, a deep learning network model combining transfer learning of a convolutional neural network with supervised training and unsupervised training of a deep belief network is proposed for gearbox fault diagnosis.
Abstract: This paper applies thermal imaging technology to gearbox fault diagnosis. The temperature field calculation model is established to obtain the temperature field images of various faults. A deep learning network model combining transfer learning of a convolutional neural network with supervised training and unsupervised training of a deep belief network is proposed. The model requires one-fifth of the training time of the convolutional neural network model. The data set used for training the deep learning network model is expanded by using the temperature field simulation images of the gearbox. The results show that the network model has over 97% accuracy for the diagnosis of simulated faults. The finite element model of the gearbox can be modified with experimental data to obtain more accurate thermal images, and this method can be better used in practice.

Journal ArticleDOI
TL;DR: A comprehensive survey of deep learning-based methods for structural reliability analysis can be found in this article, where the most common categories of DL-based models used in SRA are classified into supervised methods, unsupervised methods, and hybrid deep learning methods.
Abstract: One of the most significant and growing research fields in mechanical and civil engineering is Structural Reliability Analysis (SRA). A reliable and precise SRA usually has to deal with complicated and numerically expensive problems. Artificial intelligence-based (AI) and, specifically, deep learning-based (DL) methods have been applied to SRA problems to reduce the computational cost and to improve the accuracy of reliability estimation. This article reviews the recent advances in using DL models in SRA problems. The review covers the most common categories of DL-based methods used in SRA. More specifically, the application of supervised, unsupervised, and hybrid deep learning methods in SRA is explained. In this paper, the supervised methods for SRA are categorized as Multi-Layer Perceptron (MLP), Convolutional Neural Networks (CNN), Recurrent Neural Networks (RNN), Long Short-Term Memory (LSTM), bidirectional LSTM (Bi-LSTM), and Gated Recurrent Units (GRU). For the unsupervised methods, we have investigated methods such as Generative Adversarial Network (GAN), Autoencoders (AE), Self-Organizing Map (SOM), Restricted Boltzmann Machine (RBM), and Deep Belief Network (DBN). We have made a comprehensive survey of these methods in SRA. Aiming towards an efficient SRA, deep learning-based methods are applied to approximate the limit state function (LSF) in combination with First/Second Order Reliability Methods (FORM/SORM), Monte Carlo simulation (MCS), or MCS with importance sampling (IS). Accordingly, the current paper focuses on the structure of different DL-based models and the applications of each DL method in various SRA problems. This survey helps researchers in mechanical and civil engineering, especially those engaged with structural and reliability analysis or dealing with quality assurance problems.
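The surrogate-plus-MCS workflow the survey describes can be sketched in a few lines. Here a linear least-squares model stands in for the deep-learning surrogate, and the limit state function g(x) = 3 − x1 − x2 with standard-normal inputs is a textbook-style assumption, not an example from the survey.

```python
import numpy as np

rng = np.random.default_rng(1)

# Limit state function: failure occurs when g(x) <= 0.
def g(x):
    return 3.0 - x[:, 0] - x[:, 1]

# Step 1: fit a cheap surrogate on a small design of experiments
# (least squares stands in for the DL surrogate of the survey).
x_train = rng.standard_normal((50, 2))
A = np.column_stack([np.ones(50), x_train])
coef, *_ = np.linalg.lstsq(A, g(x_train), rcond=None)

# Step 2: Monte Carlo simulation on the surrogate instead of on g,
# which is the expensive model in a real SRA problem.
x_mc = rng.standard_normal((200_000, 2))
g_hat = np.column_stack([np.ones(len(x_mc)), x_mc]) @ coef
pf = np.mean(g_hat <= 0.0)
print(f"estimated failure probability: {pf:.4f}")
```

For this g the exact failure probability is Φ(−3/√2) ≈ 0.017, so the surrogate-based estimate should land near that value; in practice the point of a DL surrogate is that each true g evaluation is far more expensive than the 200,000 surrogate calls above.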

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors presented an evidence network reasoning recognition method based on a cloud fuzzy belief, which can deal with random uncertainty and cognitive uncertainty simultaneously, overcoming the inability of traditional methods to carry out hierarchical recognition, and it can effectively use sensor information and expert knowledge to realize deep cognition of the target intention.
Abstract: Uncertainty is widely present in target recognition, and it is particularly important to express and reason about this uncertainty. Building on the advantages of evidence networks in uncertainty processing, this paper presents an evidence network reasoning recognition method based on a cloud fuzzy belief. In this method, a hierarchical structure model of an evidence network is constructed; the MIC (maximal information coefficient) method is used to measure the degree of correlation between nodes and determine the existence of edges, and the belief of corresponding attributes is generated based on the cloud model. In addition, information entropy is used to determine the conditional reliability table of non-root nodes, and target recognition under uncertain conditions is then realized by evidence network reasoning. The simulation results show that the proposed method can handle random uncertainty and cognitive uncertainty simultaneously, overcoming the inability of traditional methods to carry out hierarchical recognition, and it can effectively use sensor information and expert knowledge to achieve deep cognition of the target intention.
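The edge-screening step the abstract describes — keep an edge only when a dependence measure between two nodes exceeds a threshold — can be sketched with plain histogram mutual information standing in for MIC (computing MIC proper requires the full grid search over partitions). The threshold and the toy variables are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

def mutual_information(x, y, bins=16):
    """Histogram estimate of MI(x; y) in nats."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A node strongly driven by x should score well above an unrelated one.
x = rng.standard_normal(5000)
child = x + 0.1 * rng.standard_normal(5000)   # dependent node
noise = rng.standard_normal(5000)             # independent node

threshold = 0.1  # assumed screening threshold
edges = {name: mutual_information(x, v) > threshold
         for name, v in [("child", child), ("noise", noise)]}
print(edges)
```

The dependent node clears the threshold while the independent one stays near the small positive bias of the histogram estimator, so only the edge x → child would be added to the network structure.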

Journal ArticleDOI
TL;DR: In this paper, a new predictive manufacturing system in Industry 4.0 for examining machines is proposed, where deep features are extracted through the Multi-Scale Dilation Attention Convolutional Neural Network (MSDA-CNN) and weighted features are given to the Optimized Hybrid Fault Detection (OHFD) performed by the Deep Neural Network (DNN) and Deep Belief Network (DBN).
Abstract: The predictive maintenance function is ensured by the early detection of errors and faults in machinery before they reach critical stages. At the same time, Internet of Things (IoT) devices face security challenges because they can be attacked more easily than devices such as computers or portable devices. Existing approaches cannot handle high-dimensional and imbalanced data, and the computational cost of modern sampling methods is very high. In addition, conventional methods for predictive maintenance rely on a single technique, so maintenance and prognostic tasks are very hard to address simultaneously. Thus, a new predictive manufacturing system in Industry 4.0 for examining machines is proposed. In the initial stage, data are collected from IoT industry sensors. The data are then cleaned, and deep features are extracted through the "Multi-Scale Dilation Attention Convolutional Neural Network (MSDA-CNN)." Further, deep weighted features are extracted, where the weight is optimized using a hybrid algorithm named Probabilistic Beetle Swarm-Butterfly Optimization (PBS-BO). In the end, the weighted features are given to the Optimized Hybrid Fault Detection (OHFD), which is performed by the "Deep Neural Network (DNN) and Deep Belief Network (DBN)." Finally, if any machine faults are predicted, the system sends alerts to the industrialists for suitable decision-making. The efficiency of the suggested model is evaluated on a set of real measurements in Industry 4.0.
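The OHFD stage combines a DNN and a DBN, though the abstract does not state the combination rule. One common scheme is a weighted average of the two classifiers' class-probability outputs; the sketch below assumes that rule and uses hypothetical softmax outputs, purely to show the fusion mechanics.

```python
import numpy as np

def fuse_predictions(p_dnn, p_dbn, w=0.5):
    """Weighted average of two classifiers' class probabilities,
    followed by an argmax decision."""
    p = w * p_dnn + (1.0 - w) * p_dbn
    return p.argmax(axis=1)

# Hypothetical softmax outputs for 3 samples over {healthy, faulty}.
p_dnn = np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8]])
p_dbn = np.array([[0.8, 0.2], [0.7, 0.3], [0.1, 0.9]])
labels = fuse_predictions(p_dnn, p_dbn)
print(labels)  # → [0 0 1]
```

Note that the second sample flips from "faulty" (DNN alone) to "healthy" after fusion, which is exactly the disagreement-smoothing effect a hybrid detector aims for.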

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a DBN-MLP fusion neural network method for multi-dimensional analysis and fault-type diagnosis of smart energy meter fault data, which can effectively reduce the number of training iterations and improve the accuracy of diagnosis.
Abstract: In order to effectively utilize the large amount of high-dimensional historical data generated by energy meters during operation, this paper proposes a DBN-MLP fusion neural network method for multi-dimensional analysis and fault-type diagnosis of smart energy meter fault data. We first use a DBN to strengthen the feature extraction ability of the network and to address the variety and high dimensionality of the historical data. The processed feature information is then input into an MLP neural network, whose strong capability for nonlinear data is used to compensate for the weak correlation among data in the historical dataset and to improve the accuracy of fault diagnosis. The final results show that the DBN-MLP method can effectively reduce the number of training iterations, thereby reducing training time, and improve diagnostic accuracy.
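The two-stage DBN-then-MLP idea can be sketched in numpy. In this illustration the "DBN" is a single RBM-style layer with random weights (in the paper's pipeline it would be pretrained, e.g. with contrastive divergence), and the meter data and labels are synthetic; all of these are assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy "meter fault" data: the class depends on the first two features.
X = rng.standard_normal((400, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Stage 1: DBN-style feature extractor (one RBM-style layer; random
# weights here for brevity, pretrained in the real pipeline).
W_rbm = 0.5 * rng.standard_normal((8, 16))
H = sigmoid(X @ W_rbm)

# Stage 2: a small MLP trained on the extracted features with
# full-batch gradient descent on the cross-entropy loss.
W1 = 0.1 * rng.standard_normal((16, 8))
W2 = 0.1 * rng.standard_normal((8, 1))
lr = 0.5
for _ in range(3000):
    a1 = np.tanh(H @ W1)
    p = sigmoid(a1 @ W2).ravel()
    d_logits = (p - y)[:, None] / len(y)   # cross-entropy gradient
    gW2 = a1.T @ d_logits
    gW1 = H.T @ ((d_logits @ W2.T) * (1.0 - a1 ** 2))
    W2 -= lr * gW2
    W1 -= lr * gW1

p = sigmoid(np.tanh(H @ W1) @ W2).ravel()
acc = ((p > 0.5) == (y > 0.5)).mean()
print("train accuracy:", acc)
```

The split mirrors the paper's division of labor: the unsupervised front end compresses the high-dimensional input into features, and the MLP back end handles the nonlinear decision boundary.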