Open access · Journal Article · DOI: 10.1016/J.JMSY.2021.02.006

Enabling predictive maintenance integrated production scheduling by operation-specific health prognostics with generative deep learning

02 Mar 2021 · Journal of Manufacturing Systems (Elsevier) · Vol. 61, pp. 830–855
Abstract: Predictive Maintenance (PdM) is one of the core innovations in recent years that sparks interest in both research and industry. While researchers develop more and more complex machine learning (ML) models to predict the remaining useful life (RUL), most models are not designed with regard to actual industrial practice and are not validated with industrial data. To overcome this gap between research and industry and to create added value, we propose a holistic framework that aims at directly integrating PdM models with production scheduling. To enable PdM-integrated production scheduling (PdM-IPS), an operation-specific health prognostics model is required. Therefore, we propose a generative deep learning model based on the conditional variational autoencoder (CVAE) that can derive an operation-specific health indicator (HI) from large-scale industrial condition monitoring (CM) data. We choose this unsupervised learning approach to cope with one of the biggest challenges of applying PdM in industry: the lack of labelled failure data. The health prognostics model provides a quantitative measure of degradation given a specific production sequence and thus enables PdM-IPS. The framework is validated both on NASA’s C-MAPSS data set as well as real industrial data from machining centers for automotive component manufacturing. The results indicate that the approach can both capture and quantify changes in machine condition such that PdM-IPS can be subsequently realized.


Topics: Prognostics (66%), Predictive maintenance (54%), Autoencoder (54%)
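The operation-specific health indicator idea can be sketched in miniature: learn a model of healthy condition monitoring data per operation type, then read degradation off the reconstruction error of new cycles against that operation's model. The sketch below is a deliberately degenerate stand-in (a per-operation mean signature instead of the paper's CVAE); the data, channel counts, and function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical CM data: rows are machining cycles, columns are sensor channels,
# each cycle tagged with an operation type (the "condition").
n_channels = 4
operations = np.array([0, 1] * 50)                 # alternating operation types
healthy = rng.normal(0.0, 0.1, (100, n_channels))
healthy[operations == 1] += 1.0                    # operation 1 runs at a different set point

# Stand-in for the trained conditional model: the per-operation mean
# signature learned from healthy data (a degenerate "reconstruction").
signatures = {op: healthy[operations == op].mean(axis=0) for op in (0, 1)}

def health_indicator(x, op):
    """Reconstruction error of a cycle against its operation-specific signature."""
    return float(np.linalg.norm(x - signatures[op]))

# A degraded cycle of operation 0 drifts away from its healthy signature,
# so its indicator rises above the healthy baseline.
baseline = health_indicator(healthy[0], 0)
degraded = health_indicator(healthy[0] + 0.5, 0)
```

Conditioning on the operation is what makes the indicator comparable across a mixed production sequence: without it, a change of operation would be indistinguishable from a change of machine condition.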

9 results found

Journal Article · DOI: 10.1016/J.JMSY.2021.05.003
Abstract: Predictive maintenance (PdM) advocates the use of machine learning technologies to monitor assets' health conditions and plan maintenance activities accordingly. However, depending on the specific degradation process, some health-related measures (e.g. temperature) may not be informative enough to reliably assess the health stage. Moreover, each measure needs to be properly treated to extract the information linked to the health stage. These issues are usually addressed by manual feature engineering, which results in high management cost and poor generalization capability. In this work, we address this issue by coupling a health stage classifier with a feature learning mechanism. With feature learning, minimally processed data are automatically transformed into informative features. Many effective feature learning approaches are based on deep learning, where features are obtained as a non-linear combination of the inputs; it is therefore difficult to understand each input's contribution to the classification outcome, and so the reasoning behind the model. Yet such insights are increasingly required to interpret the results and assess the reliability of the model. In this regard, we propose a feature learning approach able to (i) effectively extract high-quality features by processing different input signals, and (ii) provide useful insights about the most informative domain transformations (e.g. Fourier transform or probability density function) of the input signals (e.g. vibration or temperature). The effectiveness of the proposed approach is tested on publicly available real-world datasets of bearings' progressive deterioration and compared with the traditional feature engineering approach.


Topics: Feature engineering (71%), Feature learning (65%), Deep learning (53%)

2 Citations
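The domain transformations named in the abstract (Fourier transform, probability density function) can be illustrated directly as candidate feature views of a raw signal. This is a minimal sketch on a synthetic vibration trace; the sampling rate, tone frequency, and bin count are assumptions for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical raw vibration signal: a 50 Hz tone plus noise, sampled at 1 kHz.
fs = 1000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.normal(size=fs)

# Two candidate domain transformations a feature learner could weigh:
# (i) Fourier magnitude spectrum, (ii) empirical probability density (histogram).
spectrum = np.abs(np.fft.rfft(signal)) / fs
density, _ = np.histogram(signal, bins=20, density=True)

# With a 1 s window at 1 kHz, FFT bin k corresponds to k Hz.
dominant_hz = np.argmax(spectrum[1:]) + 1   # skip the DC bin
```

For a tonal fault signature the Fourier view is highly informative (the 50 Hz peak stands out), while for amplitude-distribution changes the density view would carry the signal; choosing between such views automatically is exactly what the proposed feature learning aims at.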

Journal Article · DOI: 10.1016/J.JMSY.2021.06.001
Abstract: Small and medium manufacturing enterprises (SMEs) often lack the skills and resources required to perform in-house PHM analytics. While cloud-based services give SMEs the option to outsource PHM analytics to the cloud, a critical limiting factor to such an arrangement is the data owner's unwillingness to share data due to privacy concerns. In this paper, we showcase how homomorphic encryption, a cryptographic technique that allows direct computation on encrypted data, can enable secure, high-precision PHM outsourcing for SMEs. We first outline a two-party collaborative framework for secure outsourcing of PHM analytics for SMEs. Next, we introduce a frequency-based peak detection algorithm (H-FFT-C) that generates a machine health diagnosis and prescription report while keeping the machine data private. We demonstrate the secure PHM outsourcing scenario on a lab-scale fiber extrusion device. Our demonstration comprises key functionalities found in many PHM applications. Finally, the extensibility and limitations of the approach are summarized.


Topics: Analytics (55%), Outsourcing (54%), Encryption (52%)

1 Citation
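Setting the encryption aside, the frequency-based peak detection underlying an algorithm like H-FFT-C can be sketched in plaintext: compute the FFT magnitude of a vibration trace and report the frequencies that exceed a diagnosis threshold. The signal composition and threshold below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical machine vibration trace with two fault-related tones.
fs = 2000
t = np.arange(fs) / fs
trace = (np.sin(2 * np.pi * 120 * t)
         + 0.6 * np.sin(2 * np.pi * 300 * t)
         + 0.05 * rng.normal(size=fs))

def spectral_peaks(x, fs, threshold):
    """Return frequencies whose normalized FFT magnitude exceeds a threshold."""
    mag = 2 * np.abs(np.fft.rfft(x)) / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return freqs[mag > threshold]

peaks = spectral_peaks(trace, fs, threshold=0.3)
```

The homomorphic version performs the same computation on ciphertexts, so the analytics provider learns the diagnosis report's inputs without ever seeing the raw machine data.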

Journal Article · DOI: 10.1016/J.JMSY.2021.04.007
Abstract: Topology optimization (TO) has become a valuable design tool for structures to be fabricated by additive manufacturing (AM). However, during early-stage design, parameters frequently evolve, resulting in multiple similar TO runs. Especially when design-for-manufacturing principles expand the parameter space, this iterative process is computationally burdensome and does not take advantage of the redundant information in each study. We introduce a deep learning-based framework that learns latent similarities between runs in a training set to predict near-optimal designs, enabling efficient holistic understanding of the problem setup space, which includes both loading conditions and, for the first time in this study, manufacturing process configuration. Learning was achieved using a conditional generative adversarial network (cGAN) trained on a dataset of randomized boundary conditions, loadings, and AM build orientations, and the corresponding optimal structures obtained through overhang-filtered TO. cGAN predictions showed good agreement with true optima. For even greater accuracy, predictions can be post-processed by applying a small number of TO iterations. Manifold learning techniques provided further insight: the cGAN error generally increases with the distance between the load and the boundary conditions or build plate. Interestingly, in 9% of test cases, the cGAN generated structures with compliances better than the corresponding TO-calculated structures, often by as much as 50% and by 7.8% on average. That some of these structures appeared qualitatively different in form suggests the potential value of the approach in other domains such as generative design, where a range of alternate near-optimal designs is used to guide the ideation process.


Topics: Generative Design (59%), Topology optimization (59%), Design for manufacturability (53%)

1 Citation

Open access · Journal Article · DOI: 10.1109/ACCESS.2021.3127084
Tarek Berghout, Mohamed Benbouzid, S. M. Muyeen, Toufik Bentrcia, +1 more (3 institutions)
15 Nov 2021 · IEEE Access
Abstract: Machine learning has emerged as a promising alternative for condition monitoring of industrial processes, making it indispensable for maintenance planning. Such a learning model can assess health states in real time, provided that both training and testing samples are complete and share the same probability distribution. In practical applications, however, these requirements are rarely met due to continuously changing working conditions. Besides, conventional hyperparameter tuning via grid search or manual tuning requires substantial human intervention and is inflexible for users. Two objectives are targeted in this work. First, to remedy the data distribution mismatch issue, we introduce a feature extraction and selection approach built upon correlation analysis and dimensionality reduction. Second, to reduce the burden of human intervention, we propose an Automatic artificial Neural network with an Augmented Hidden Layer (Auto-NAHL) for the classification of health states. The novelty of the implemented neural architecture lies in its multiple feature mappings of the inputs: this configuration lets the hidden layer learn multiple representations from several random linear mappings and produce a single, efficient final representation. Hyperparameter tuning, including the network architecture, is fully automated via Particle Swarm Optimization (PSO). The designed learning process is evaluated on a complex industrial plant as well as various classification problems. The results indicate that our proposal responds better to new hidden representations, achieving higher approximation quality than several previous works.


Topics: Dimensionality reduction (56%), Network architecture (56%), Artificial neural network (55%)
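The augmented-hidden-layer idea, several random linear mappings of the input concatenated into one hidden representation, with only the output weights fitted, resembles extreme-learning-machine-style networks and can be sketched as follows. The PSO-based architecture search is omitted here, and the data and layer sizes are toy assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy two-class health-state data: two Gaussian blobs in 2-D feature space.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Several random linear mappings of the input, each passed through tanh and
# concatenated into one augmented hidden layer; these weights stay random.
n_maps, width = 4, 10
hidden = np.hstack([np.tanh(X @ rng.normal(size=(2, width)) + rng.normal(size=width))
                    for _ in range(n_maps)])

# Only the output weights are fitted, via least squares on one-hot targets.
targets = np.eye(2)[y]
beta, *_ = np.linalg.lstsq(hidden, targets, rcond=None)

pred = (hidden @ beta).argmax(axis=1)
accuracy = (pred == y).mean()
```

In the paper, PSO would tune choices fixed by hand here (number of mappings, hidden width, and so on); the sketch only shows why multiple random mappings can yield a representation that a linear readout separates easily.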

Open access · Journal Article · DOI: 10.1016/J.CIRPJ.2021.09.003
Abstract: Data-based methods are capable of monitoring machine components. Approaches for semi-supervised anomaly detection are trained using sensor data that describe the normal state of machine components. Such approaches are therefore attractive for industrial practice, since sensor data do not have to be labeled in a time-consuming and costly way. In this work, an ensemble approach for semi-supervised anomaly detection is used to detect anomalies, and it is shown to be suitable for condition monitoring of ball screws. The approach is evaluated on a data set from a regular test cycle of a ball screw from the automotive industry.


Topics: Ball screw (58%), Anomaly detection (58%), Condition monitoring (54%)
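A semi-supervised detector in this spirit fits its statistics on normal-state data only and then votes across an ensemble of simple detectors. The sketch below uses per-feature 3-sigma detectors; the data, thresholds, and voting rule are purely illustrative, not the paper's ensemble.

```python
import numpy as np

rng = np.random.default_rng(4)

# Training data describe only the NORMAL state (the semi-supervised setting):
normal = rng.normal(0.0, 1.0, (500, 3))            # e.g. 3 sensor features

# Ensemble of per-feature detectors: each flags a sample whose feature lies
# outside mean +/- 3 std of the normal training data.
mu, sigma = normal.mean(axis=0), normal.std(axis=0)

def is_anomaly(x, votes_needed=1):
    """Combine the per-feature detectors by counting votes."""
    votes = np.abs(x - mu) > 3 * sigma
    return int(votes.sum()) >= votes_needed

healthy_sample = np.zeros(3)
faulty_sample = np.array([0.0, 8.0, 0.0])          # one channel far out of range
```

The key property is that no failure labels are needed: the decision boundary comes entirely from the normal-state distribution, which matches the labeling constraints described in the abstract.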


58 results found

Open access · Proceedings Article
Diederik P. Kingma, Max Welling (1 institution)
01 Jan 2014
Abstract: How can we perform efficient inference and learning in directed probabilistic models, in the presence of continuous latent variables with intractable posterior distributions, and large datasets? We introduce a stochastic variational inference and learning algorithm that scales to large datasets and, under some mild differentiability conditions, even works in the intractable case. Our contribution is two-fold. First, we show that a reparameterization of the variational lower bound yields a lower bound estimator that can be straightforwardly optimized using standard stochastic gradient methods. Second, we show that for i.i.d. datasets with continuous latent variables per datapoint, posterior inference can be made especially efficient by fitting an approximate inference model (also called a recognition model) to the intractable posterior using the proposed lower bound estimator. Theoretical advantages are reflected in experimental results.


Topics: Approximate inference (67%), Inference (55%), Estimator (53%)

14,546 Citations
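The reparameterization at the heart of the first contribution writes a sample from N(mu, sigma^2) as a deterministic function of (mu, sigma) and auxiliary standard-normal noise, so gradients of a Monte Carlo objective can flow through the sampling step. A numerical sketch of the sampling identity:

```python
import numpy as np

rng = np.random.default_rng(5)

# Instead of sampling z ~ N(mu, sigma^2) directly, sample eps ~ N(0, 1)
# and set z = mu + sigma * eps: z is now a deterministic, differentiable
# function of (mu, sigma), with the randomness isolated in eps.
mu, sigma = 1.5, 0.4
eps = rng.standard_normal(100_000)
z = mu + sigma * eps

# The transformed samples have exactly the target distribution.
sample_mean, sample_std = z.mean(), z.std()
```

Because d z / d mu = 1 and d z / d sigma = eps, a stochastic-gradient optimizer can update (mu, sigma) through sampled z, which is what makes the lower-bound estimator practical.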

Open access · Journal Article · DOI: 10.1016/0377-0427(87)90125-7
Peter J. Rousseeuw (1 institution)
Abstract: A new graphical display is proposed for partitioning techniques. Each cluster is represented by a so-called silhouette, which is based on the comparison of its tightness and separation. This silhouette shows which objects lie well within their cluster, and which ones are merely somewhere in between clusters. The entire clustering is displayed by combining the silhouettes into a single plot, allowing an appreciation of the relative quality of the clusters and an overview of the data configuration. The average silhouette width provides an evaluation of clustering validity, and might be used to select an ‘appropriate’ number of clusters.


Topics: Silhouette (68%), Dunn index (61%), Cluster analysis (56%)

10,821 Citations
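The silhouette width itself is simple to compute: for each object, a is the mean distance to the other members of its own cluster, b is the mean distance to the nearest other cluster, and s = (b - a) / max(a, b), so values near +1 mean the object sits well inside its cluster. A direct sketch on a toy configuration:

```python
import numpy as np

def silhouette(X, labels):
    """Silhouette width s(i) = (b - a) / max(a, b) for each object."""
    widths = []
    for i, (x, li) in enumerate(zip(X, labels)):
        d = np.linalg.norm(X - x, axis=1)
        same = (labels == li) & (np.arange(len(X)) != i)
        a = d[same].mean()                               # tightness
        b = min(d[labels == lj].mean()                   # separation
                for lj in set(labels) if lj != li)
        widths.append((b - a) / max(a, b))
    return np.array(widths)

# Two tight, well-separated clusters give widths close to +1.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
labels = np.array([0, 0, 0, 1, 1, 1])
avg_width = silhouette(X, labels).mean()
```

Averaging the widths over the whole clustering gives the validity measure the abstract mentions; recomputing it for several candidate cluster counts and picking the maximum is the usual way to select an "appropriate" number of clusters.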

Open access · Proceedings Article
Xavier Glorot, Yoshua Bengio (1 institution)
31 Mar 2010
Abstract: Whereas before 2006 it appears that deep multilayer neural networks were not successfully trained, since then several algorithms have been shown to train them successfully, with experimental results showing the superiority of deeper versus less deep architectures. All these experimental results were obtained with new initialization or training mechanisms. Our objective here is to better understand why standard gradient descent from random initialization does so poorly with deep neural networks, to better understand these recent relative successes, and to help design better algorithms in the future. We first observe the influence of the non-linear activation functions. We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. Surprisingly, we find that saturated units can move out of saturation by themselves, albeit slowly, which explains the plateaus sometimes seen when training neural networks. We find that a new non-linearity that saturates less can often be beneficial. Finally, we study how activations and gradients vary across layers and during training, with the idea that training may be more difficult when the singular values of the Jacobian associated with each layer are far from 1. Based on these considerations, we propose a new initialization scheme that brings substantially faster convergence. Deep learning methods aim at learning feature hierarchies, with features from higher levels of the hierarchy formed by the composition of lower-level features.
Much attention has recently been devoted to them (see (Bengio, 2009) for a review), because of their theoretical appeal, inspiration from biology and human cognition, and because of empirical success in vision (Ranzato et al., 2007; Larochelle et al., 2007; Vincent et al., 2008) and natural language processing (NLP) (Collobert & Weston, 2008; Mnih & Hinton, 2009). Theoretical results reviewed and discussed by Bengio (2009) suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may need deep architectures. Most of the recent experimental results with deep architectures are obtained with models that can be turned into deep supervised neural networks, but with initialization or training schemes different from the classical feedforward neural networks (Rumelhart et al., 1986). Why are these new algorithms working so much better than the standard random initialization and gradient-based optimization of a supervised training criterion? Part of the answer may be found in recent analyses of the effect of unsupervised pre-training (Erhan et al., 2009), showing that it acts as a regularizer that initializes the parameters in a “better” basin of attraction of the optimization procedure, corresponding to an apparent local minimum associated with better generalization. But earlier work (Bengio et al., 2007) had shown that even a purely supervised but greedy layer-wise procedure would give better results. So here, instead of focusing on what unsupervised pre-training or semi-supervised criteria bring to deep architectures, we focus on analyzing what may be going wrong with good old (but deep) multilayer neural networks. Our analysis is driven by investigative experiments to monitor activations (watching for saturation of hidden units) and gradients, across layers and across training iterations.
We also evaluate the effects on these of choices of activation function (with the idea that it might affect saturation) and initialization procedure (since unsupervised pretraining is a particular form of initialization and it has a drastic impact).


Topics: Deep learning (61%), Vanishing gradient problem (57%), Initialization (57%)

9,463 Citations
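The proposed "normalized" initialization draws weights from a uniform distribution with limit sqrt(6 / (fan_in + fan_out)), chosen so that the variance of activations and of back-propagated gradients stays roughly constant across layers (singular values of each layer's Jacobian near 1). A sketch:

```python
import numpy as np

rng = np.random.default_rng(6)

def glorot_uniform(fan_in, fan_out):
    """Normalized initialization: W ~ U(-limit, limit),
    limit = sqrt(6 / (fan_in + fan_out))."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, (fan_in, fan_out))

W = glorot_uniform(256, 128)
limit = np.sqrt(6.0 / (256 + 128))
# Variance of U(-limit, limit) is limit**2 / 3 = 2 / (fan_in + fan_out),
# which balances the forward (1/fan_in) and backward (1/fan_out) criteria.
```

Averaging the two fan terms is a compromise: scaling by fan_in alone preserves activation variance, by fan_out alone preserves gradient variance, and 2/(fan_in + fan_out) approximately satisfies both.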

Open access · Posted Content
20 Jun 2014 · arXiv: Learning
Abstract: The ever-increasing size of modern data sets combined with the difficulty of obtaining label information has made semi-supervised learning one of the problems of significant practical importance in modern data analysis. We revisit the approach to semi-supervised learning with generative models and develop new models that allow for effective generalisation from small labelled data sets to large unlabelled ones. Generative approaches have thus far been either inflexible, inefficient or non-scalable. We show that deep generative models and approximate Bayesian inference exploiting recent advances in variational methods can be used to provide significant improvements, making generative approaches highly competitive for semi-supervised learning.


1,900 Citations

Open access · Proceedings Article
Kihyuk Sohn, Xinchen Yan, Honglak Lee (1 institution)
07 Dec 2015
Abstract: Supervised deep learning has been successfully applied to many recognition problems. Although it can approximate a complex many-to-one function well when a large amount of training data is provided, it is still challenging to model complex structured output representations that effectively perform probabilistic inference and make diverse predictions. In this work, we develop a deep conditional generative model for structured output prediction using Gaussian latent variables. The model is trained efficiently in the framework of stochastic gradient variational Bayes, and allows for fast prediction using stochastic feed-forward inference. In addition, we provide novel strategies to build robust structured prediction algorithms, such as input noise-injection and a multi-scale prediction objective at training. In experiments, we demonstrate the effectiveness of our proposed algorithm in comparison to its deterministic deep neural network counterparts in generating diverse but realistic structured output predictions using stochastic inference. Furthermore, the proposed training methods are complementary, which leads to strong pixel-level object segmentation and semantic labeling performance on Caltech-UCSD Birds 200 and the subset of the Labeled Faces in the Wild dataset.


Topics: Structured prediction (64%), Generative model (60%), Deep learning (57%)

1,593 Citations
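Stochastic feed-forward inference means pushing the same conditioning input through the decoder with different latent draws, so one input yields a distribution of structured outputs rather than a single point prediction. A toy sketch with a stand-in decoder (the real model's decoder is a learned neural network; the function below is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)

def decoder(x, z):
    """Hypothetical conditional decoder: output depends on BOTH the
    conditioning input x and the Gaussian latent z."""
    return np.tanh(x + 0.5 * z)

x = np.array([0.2, -0.1, 0.4])              # conditioning input
samples = np.stack([decoder(x, rng.standard_normal(3)) for _ in range(20)])

# Diversity: different z draws give different outputs for the SAME input,
# which a deterministic network (z fixed at 0) could never produce.
spread = samples.std(axis=0)
```

This is the property the abstract contrasts with deterministic counterparts: the latent variable carries the one-to-many ambiguity of structured outputs such as segmentation masks.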