
Showing papers in "Journal of Intelligent Manufacturing in 2020"


Journal ArticleDOI
TL;DR: This exhaustive literature review provides a concrete definition of Industry 4.0 and defines its six design principles: interoperability, virtualization, decentralization, real-time capability, service orientation and modularity.
Abstract: The manufacturing industry profoundly impacts economic and societal progress. As a commonly accepted term among research centers and universities, the Industry 4.0 initiative has received considerable attention from the business and research community. Although the idea is not new and has been on the agenda of academic research for many years under different labels, the term “Industry 4.0” has only recently been launched and has become accepted, to some extent, not only in academia but also in industrial society. While academic research focuses on understanding and defining the concept and on developing related systems, business models and methodologies, industry focuses its attention on the transformation of industrial machine suites and intelligent products, and on how potential customers respond to this progress. It is therefore important for companies first to understand the features and content of Industry 4.0 before attempting the transformation from machine-dominant manufacturing to digital manufacturing. To achieve a successful transformation, they should clearly review their positions and potentials against the basic requirements set out for the Industry 4.0 standard; this will allow them to generate a well-defined road map. Several approaches and discussions have developed along this line, and several road maps have already been proposed; some of these are reviewed in this paper. However, the literature clearly indicates a lack of corresponding assessment methodologies. Since the implementation and application of the theories and definitions outlined for the fourth industrial revolution are not yet mature enough for most real-life deployments, a systematic approach to making the respective assessments and evaluations seems urgently required for those intending to speed this transformation up. It is now the main responsibility of the research community to develop the technological infrastructure, with physical systems, management models, business models and well-defined Industry 4.0 scenarios, in order to make life easier for practitioners. Experts estimate that Industry 4.0 and the related progress along this line will have an enormous effect on social life. As outlined in the introduction, some social transformation is also expected. It is assumed that robots will become more dominant in manufacturing, and that implanted technologies, cooperating and coordinating machines, self-decision-making systems, autonomous problem solvers, learning machines, 3D printing and the like will dominate the production process. Wearable internet, big data analysis, sensor-based living, smart city implementations and similar applications will be the main concern of the community. This social transformation will naturally push manufacturers to improve their manufacturing suites to cope with customer requirements and sustain competitive advantage. A summary of the potential progress along this line is given in the introduction of the paper. It is clear that future manufacturing systems will have a different vision, composed of products, intelligence, communications and information networks. This will bring about new business models that will become dominant in industrial life. Another important issue to take into account is that the time span of this so-called revolution will be short, triggering a continuous transformation process from which new industrial areas will emerge.
This clearly puts great pressure on manufacturers to learn, understand, design and implement the transformation process. Since the main motivation is to find the best way to follow this transformation, a comprehensive literature review provides valuable support. This paper presents such a review, highlighting the progress made and aiming to improve awareness of the best experiences. It is intended to provide a clear idea for those wishing to generate a road map for digitizing their manufacturing suites. The review is also intended to provide a hands-on library of Industry 4.0 for both academics and industrial practitioners. The top 100 headings, abstracts and keywords (a total of 619 publications of all kinds) for each search term were independently analyzed in order to ensure the reliability of the review process. Note that this exhaustive literature review provides a concrete definition of Industry 4.0 and identifies its six design principles: interoperability, virtualization, decentralization, real-time capability, service orientation and modularity. These principles appear to have attracted the attention of scientists, encouraging a wider variety of research on the subject and the development of implementable and appropriate scenarios. A comprehensive taxonomy of Industry 4.0 can also be developed through analysis of the results of this review.

1,011 citations


Journal ArticleDOI
TL;DR: A segmentation-based deep-learning architecture that is designed for the detection and segmentation of surface anomalies and is demonstrated on a specific domain of surface-crack detection.
Abstract: Automated surface-anomaly detection using machine learning has become an interesting and promising area of research, with a very high and direct impact on the application domain of visual inspection. Deep-learning methods have become the most suitable approaches for this task. They allow the inspection system to learn to detect the surface anomaly by simply showing it a number of exemplar images. This paper presents a segmentation-based deep-learning architecture that is designed for the detection and segmentation of surface anomalies and is demonstrated on a specific domain of surface-crack detection. The design of the architecture enables the model to be trained using a small number of samples, which is an important requirement for practical applications. The proposed model is compared with related deep-learning methods, including state-of-the-art commercial software, showing that the proposed approach outperforms the related methods on the specific domain of surface-crack detection. The large number of experiments also sheds light on the required precision of the annotation, the number of required training samples and the required computational cost. Experiments are performed on a newly created dataset based on a real-world quality control case and demonstrate that the proposed approach is able to learn on a small number of defective surfaces, using only approximately 25–30 defective training samples, instead of hundreds or thousands, which is usually the case in deep-learning applications. This makes the deep-learning method practical for use in industry, where the number of available defective samples is limited. The dataset is also made publicly available to encourage the development and evaluation of new methods for surface-defect detection.

393 citations


Journal ArticleDOI
TL;DR: This work surveys 123 representative items together with 22 supplementary works, covering both engineering product lifecycle management and business innovation while considering technical aspects as a foundation, and sets out to be a guide to the status of DT development and application in today’s academic and industrial environment.
Abstract: With the rapid advancement of cyber-physical systems, the Digital Twin (DT) is gaining ever-increasing attention owing to its great capabilities to realize Industry 4.0. Enterprises from different fields are taking advantage of its ability to simulate real-time working conditions and perform intelligent decision-making, so that a cost-effective solution can be readily delivered to meet individual stakeholder demands. As DT is a hot topic, many approaches have been designed and implemented to date; however, the field still lacks a comprehensive review that examines DT benefits by considering both engineering product lifecycle management and business innovation as a whole. To fill this gap, this work conducts a state-of-the-art survey of DT, selecting 123 representative items together with 22 supplementary works to address those two perspectives, while considering technical aspects as a foundation. The systematic review further identifies eight future perspectives for DT, including modular DT, modeling consistency and accuracy, incorporation of Big Data analytics in DT models, DT simulation improvements, VR integration into DT, expansion of DT domains, efficient mapping of cyber-physical data, and cloud/edge computing integration. This work sets out to be a guide to the status of DT development and application in today’s academic and industrial environment.

310 citations


Journal ArticleDOI
TL;DR: An automated BPM solution is investigated to select and compose services in an open business environment, Blockchain technology (BCT) is explored and proposed to transfer and verify the trustworthiness of businesses and partners, and a BPM framework is developed to illustrate how BCT can be integrated to support prompt, reliable, and cost-effective evaluation and transfer of Quality of Services in workflow composition and management.
Abstract: Business process management (BPM) aims to optimize business processes to achieve better system performance, such as higher profit, quicker response, and better services. BPM systems in Industry 4.0 are required to digitize and automate business process workflows and support the transparent interoperation of service vendors. The critical bottleneck in advancing BPM systems is the evaluation, verification, and transformation of trustworthiness and digitized assets. Most BPM systems rely heavily on domain experts or third parties to deal with trustworthiness. In this paper, an automated BPM solution is investigated to select and compose services in an open business environment, Blockchain technology (BCT) is explored and proposed to transfer and verify the trustworthiness of businesses and partners, and a BPM framework is developed to illustrate how BCT can be integrated to support prompt, reliable, and cost-effective evaluation and transfer of Quality of Services in workflow composition and management.

201 citations


Journal ArticleDOI
TL;DR: The proposed intelligent fault diagnosis method offers a new and promising approach that artificially creates additional valid samples for model training, and it manages to achieve high diagnosis accuracy with a small original training dataset.
Abstract: Intelligent machinery fault diagnosis systems have been receiving increasing attention recently due to the potentially large benefits of reduced maintenance costs, enhanced operation safety and improved reliability. This paper proposes a novel deep learning method for rotating machinery fault diagnosis. Since accurately labeled data are usually difficult to obtain in real industries, data augmentation techniques are proposed to artificially create additional valid samples for model training, and the proposed method manages to achieve high diagnosis accuracy with a small original training dataset. Two augmentation schemes are investigated, sample-based and dataset-based, and five augmentation techniques are considered in total: additive Gaussian noise, masking noise, signal translation, amplitude shifting and time stretching. The effectiveness of the proposed method is validated by carrying out experiments on two popular rolling bearing datasets. Fairly high diagnosis accuracy, up to 99.9%, can be obtained using limited training data. Comparison with the latest advanced research on the same datasets demonstrates the superiority of the proposed method. Furthermore, the diagnostic performance of the deep neural network is extensively evaluated with respect to data augmentation strength, network depth and so forth. The results of this study suggest that the proposed intelligent fault diagnosis method offers a new and promising approach.
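
The five augmentation techniques named above are simple signal transformations, so a compact sketch can make them concrete. The following Python snippet (not the authors' code; all magnitudes and probabilities are illustrative assumptions) shows one plausible sample-based implementation for a one-dimensional vibration signal:

```python
import numpy as np

def augment(signal, rng=np.random.default_rng(0)):
    """Return augmented copies of a 1-D vibration signal.

    Implements the five techniques named in the paper; the magnitudes
    below are illustrative guesses, not the paper's values.
    """
    n = len(signal)
    out = {}
    # 1. Additive Gaussian noise
    out["gaussian"] = signal + rng.normal(0, 0.01 * signal.std(), n)
    # 2. Masking noise: zero out a random subset of samples
    mask = rng.random(n) > 0.1
    out["masked"] = signal * mask
    # 3. Signal translation: circularly shift in time
    out["translated"] = np.roll(signal, rng.integers(1, n // 10))
    # 4. Amplitude shifting: scale the whole waveform
    out["scaled"] = signal * rng.uniform(0.9, 1.1)
    # 5. Time stretching: resample to a new length, then crop/pad back to n
    factor = rng.uniform(0.9, 1.1)
    stretched = np.interp(np.linspace(0, n - 1, int(n * factor)),
                          np.arange(n), signal)
    out["stretched"] = np.resize(stretched, n)
    return out

samples = augment(np.sin(np.linspace(0, 20 * np.pi, 2048)))
```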

194 citations


Journal ArticleDOI
TL;DR: By applying a deep neural network to selective laser melting, a classification model of melt-pool images with respect to 6 laser power labels could be utilized to infer the locations causing unexpected alteration of microstructures or to separate defective products non-destructively.
Abstract: By applying a deep neural network to selective laser melting, we studied a classification model of melt-pool images with respect to 6 laser power labels. Laser power influences the formation of pores or cracks, which determine part quality, and was found to be positively and linearly related to the density of the part. A neural network in which the number of nodes decreases with increasing layer depth achieved satisfactory inference even when melt-pool images had blurred edges. The proposed neural network showed a classification failure rate under 1.1% for 13,200 test images and was more effective for monitoring melt-pool images than a simple calculation such as the sum of pixel intensities, because it simultaneously handled various melt-pool shapes. The classification model could be utilized to infer the locations causing unexpected alteration of microstructures or to separate defective products non-destructively.

143 citations


Journal ArticleDOI
TL;DR: This paper presents a state-of-the-art of ML-aided PPC (ML-PPC) done through a systematic literature review analyzing 93 recent research application articles and proposes a mapping to classify the scientific literature to identify further research perspectives.
Abstract: Because of their cross-functional nature in the company, enhancing Production Planning and Control (PPC) functions can lead to a global improvement of manufacturing systems. With the advent of Industry 4.0 (I4.0), the copious availability of data, high computing power and large storage capacity have made Machine Learning (ML) approaches an appealing solution to tackle manufacturing challenges. As such, this paper presents a state-of-the-art of ML-aided PPC (ML-PPC) done through a systematic literature review analyzing 93 recent research application articles. This study has two main objectives: contribute to the definition of a methodology to implement ML-PPC and propose a mapping to classify the scientific literature to identify further research perspectives. To achieve the first objective, ML techniques, tools, activities, and data sources which are required to implement an ML-PPC are reviewed. The second objective is developed through the analysis of the use cases and the addressed characteristics of the I4.0. Results suggest that 75% of the possible research domains in ML-PPC are barely explored or not addressed at all. This lack of research originates from two possible causes: firstly, the scientific literature rarely considers customer, environmental, and human-in-the-loop aspects when linking ML to PPC. Secondly, recent applications seldom couple PPC to logistics as well as to the design of products and processes. Finally, two key pitfalls are identified in the implementation of ML-PPC models: the complexity of using Internet of Things technologies to collect data and the difficulty of updating the ML model to adapt it to the manufacturing system changes.

132 citations


Journal ArticleDOI
TL;DR: An architecture for a digital twin, which enables the exchange of data and information between a remote emulation or simulation and the physical twin, and can be implemented in new and legacy production facilities, with a minimal disruption of current installations.
Abstract: Industry 4.0, cyber-physical production systems (CPPS) and the Internet of Things (IoT) are current focuses in automation and data exchange in manufacturing, arising from the rapid increase in capabilities in information and communication technologies and the ubiquitous internet. A key enabler for the advances promised by CPPSs is the concept of a digital twin, which is the virtual representation of a real-world entity, or the physical twin. An important step towards the success of Industry 4.0 is the establishment of practical reference architectures. This paper presents an architecture for such a digital twin, which enables the exchange of data and information between a remote emulation or simulation and the physical twin. The architecture comprises different layers, including a local data layer, an IoT Gateway layer, cloud-based databases and a layer containing emulations and simulations. The architecture can be implemented in new and legacy production facilities, with a minimal disruption of current installations. This architecture provides a service-based and real-time enabled infrastructure for vertical and horizontal integration. To evaluate the architecture, it was implemented for a small, but typical, physical manufacturing system component.

130 citations


Journal ArticleDOI
TL;DR: A new tool wear predicting method based on multi-domain feature fusion by deep convolutional neural network (DCNN) to combine adaptive feature fusion with automatic continuous prediction is proposed in this paper.
Abstract: Tool wear monitoring has become increasingly important in intelligent manufacturing as a way to increase machining efficiency. Multi-domain features can effectively characterize tool wear condition, but manual feature fusion lowers monitoring efficiency and hinders further improvement of predicting accuracy. In order to overcome these deficiencies, a new tool wear predicting method based on multi-domain feature fusion by a deep convolutional neural network (DCNN) is proposed in this paper. In this method, multi-domain features (time-domain, frequency-domain and time–frequency-domain) are respectively extracted from multi-sensor signals (e.g. three-dimensional cutting force and vibration) as health indicators of tool wear condition; the relationship between these features and real-time tool wear is then directly established by the designed DCNN model, combining adaptive feature fusion with automatic continuous prediction. The performance of the proposed tool wear predicting method is experimentally validated using three tool run-to-failure datasets measured from a three-flute ball nose tungsten carbide cutter of a high-speed CNC machine under dry milling operations. The experimental results show that the predicting accuracy of the proposed method is significantly higher than that of other advanced methods.
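
To make the multi-domain idea concrete, the sketch below computes a few representative time-domain, frequency-domain and time-frequency-domain health indicators for a single sensor channel. It is an illustration under assumed choices (feature list, wavelet, sampling rate), not the paper's exact feature set:

```python
import numpy as np
from scipy.stats import kurtosis, skew
import pywt  # PyWavelets, used here for the time-frequency domain

def multi_domain_features(x, fs=50_000):
    """Illustrative time/frequency/time-frequency features for one
    sensor channel; the paper's exact feature set is not reproduced."""
    feats = {}
    # Time domain
    feats["rms"] = np.sqrt(np.mean(x ** 2))
    feats["kurtosis"] = kurtosis(x)
    feats["skewness"] = skew(x)
    feats["peak"] = np.max(np.abs(x))
    # Frequency domain: total spectral energy and spectral centroid
    spec = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    feats["spec_energy"] = np.sum(spec ** 2)
    feats["spec_centroid"] = np.sum(freqs * spec) / np.sum(spec)
    # Time-frequency domain: energy per wavelet-packet sub-band
    wp = pywt.WaveletPacket(x, "db4", maxlevel=3)
    for node in wp.get_level(3):
        feats[f"wp_{node.path}"] = np.sum(node.data ** 2)
    return feats

feats = multi_domain_features(np.random.default_rng(0).normal(size=4096))
```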

120 citations


Journal ArticleDOI
TL;DR: A hybrid information system for tool wear prediction that uses a stacked long short-term memory network (LSTM) to extract temporal features from multi-sensor time series, combines them with process information, and feeds the resulting input vector to a nonlinear regression model.
Abstract: Excessive tool wear leads to the damage and eventual breakage of the tool, workpiece, and machining center. Therefore, it is crucial to monitor the condition of tools during processing so that appropriate actions can be taken to prevent catastrophic tool failure. This paper presents a hybrid information system based on a long short-term memory network (LSTM) for tool wear prediction. First, a stacked LSTM is used to extract the abstract and deep features contained within the multi-sensor time series. Subsequently, the temporal features extracted are combined with process information to form a new input vector. Finally, a nonlinear regression model is designed to predict tool wear based on the new input vector. The proposed method is validated on both NASA Ames milling data set and the 2010 PHM Data Challenge data set. Results show the outstanding performance of the hybrid information model in tool wear prediction, especially when the experiments are run under various operating conditions.
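
A minimal PyTorch sketch of this hybrid layout (dimensions and layer sizes are assumptions, not the paper's configuration) might look as follows: a stacked LSTM digests the multi-sensor sequence, its last hidden state is concatenated with the process information, and a small nonlinear regressor outputs the wear value.

```python
import torch
import torch.nn as nn

class HybridToolWear(nn.Module):
    """Sketch of the hybrid scheme described above (sizes assumed):
    stacked LSTM -> concat with process info -> nonlinear regressor."""
    def __init__(self, n_sensors=7, n_process=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_sensors, hidden, num_layers=2,
                            batch_first=True)            # stacked LSTM
        self.regressor = nn.Sequential(                  # nonlinear regression
            nn.Linear(hidden + n_process, 32), nn.ReLU(),
            nn.Linear(32, 1))

    def forward(self, seq, process_info):
        _, (h, _) = self.lstm(seq)        # h: (layers, batch, hidden)
        feat = torch.cat([h[-1], process_info], dim=1)
        return self.regressor(feat)

model = HybridToolWear()
wear = model(torch.randn(8, 200, 7), torch.randn(8, 3))  # (8, 1) predictions
```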

105 citations


Journal ArticleDOI
TL;DR: A generic data-driven cyber-physical approach for personalised SCP co-development in a cloud-based environment is proposed with a novel concept of smart, connected, open architecture product, of which the interaction processes are enabled by co-development toolkits with smartness and connectedness.
Abstract: The rapid development of information and communication technology enables a promising market of information-dense products, i.e. smart, connected products (SCP), and also changes the way of user–designer interaction in the product development process. For SCP, the massive data generated by users drives design innovation and to a large degree determines a product's final success. Nevertheless, most existing works only look at the new functionalities or values derived in one-way communication by introducing novel data analytics methods. Few works discuss an effective and systematic approach to enable individual user innovation in such a context, i.e. the co-development process, which sets the fundamental basis of the prevailing concept of data-driven design. Aiming to fill this gap, this paper proposes a generic data-driven cyber-physical approach for personalised SCP co-development in a cloud-based environment. A novel concept of a smart, connected, open architecture product is hence introduced, with a generic cyber-physical model established in a cloud-based environment, of which the interaction processes are enabled by co-development toolkits with smartness and connectedness. Both the personalised SCP modelling method and the establishment of its cyber-physical product model are described in detail. To further demonstrate the proposed approach, a case study of a smart wearable device (i.e. the i-BRE respiratory mask) development process is given with general discussions.

Journal ArticleDOI
TL;DR: Experimental results and K-fold cross validation show that the multi-spectral deep CNN model can effectively detect the solar cell surface defects with higher accuracy and greater adaptability and can increase the efficiency of solar cell manufacturing and make the manufacturing process smarter.
Abstract: Similar and indeterminate defect detection on solar cell surfaces with heterogeneous texture and complex background is a challenge of solar cell manufacturing. The traditional manufacturing process relies on human visual inspection, which requires a large number of workers and does not provide stable, reliable detection. In order to solve this problem, a visual defect detection method based on a multi-spectral deep convolutional neural network (CNN) is designed in this paper. Firstly, a selected CNN model is established. By adjusting the depth and width of the model, the influence of model depth and kernel size on the recognition result is evaluated, and the optimal CNN model structure is selected. Secondly, the light spectrum features of solar cell color images are analyzed. It is found that a variety of defects exhibit different distinguishable characteristics in different spectral bands. Thus, a multi-spectral CNN model is constructed to enhance the ability of the model to distinguish between complex texture background features and defect features. Finally, experimental results and K-fold cross validation show that the multi-spectral deep CNN model can effectively detect solar cell surface defects with higher accuracy and greater adaptability. The accuracy of defect recognition reaches 94.30%. Applying such an algorithm can increase the efficiency of solar cell manufacturing and make the manufacturing process smarter.

Journal ArticleDOI
TL;DR: A real-time machining data application and service based on IMT digital twin, established with the aim of further data analysis and optimization, such as the machine tool dynamics, contour error estimation and compensation is presented.
Abstract: With the development of manufacturing, machining data applications are becoming a key technological component of enhancing the intelligence of manufacturing. The new generation of machine tools should be digitalized, highly efficient, network-accessible and intelligent. An intelligent machine tool (IMT) driven by the digital twin provides a superior solution for the development of intelligent manufacturing. In this paper, a real-time machining data application and service based on IMT digital twin is presented. Multisensor fusion technology is adopted for real-time data acquisition and processing. Data transmission and storage are completed using the MTConnect protocol and components. Multiple forms of HMIs and applications are developed for data visualization and analysis in digital twin, including the machining trajectory, machining status and energy consumption. An IMT digital twin model is established with the aim of further data analysis and optimization, such as the machine tool dynamics, contour error estimation and compensation. Examples of the IMT digital twin application are presented to prove that the development method of the IMT digital twin is effective and feasible. The perspective development of machining data analysis and service is also discussed.

Journal ArticleDOI
TL;DR: The result shows that the proposed approach for fault prognosis with the degradation sequence of equipment based on the recurrent neural network is able to achieve significant performance whether in the one-step prediction task, the long-term prediction task, or the remaining useful life prediction task.
Abstract: In general, fault prognosis research usually leads to research on remaining useful life prediction and performance prediction (prediction of a target feature), which can be regarded as a sequence learning problem. Considering the significant success achieved by the recurrent neural network in sequence learning problems such as precise timing, speech recognition, and so on, this paper proposes a novel approach for fault prognosis with the degradation sequence of equipment based on the recurrent neural network. A long short-term memory (LSTM) network is utilized due to its capability of learning long-term dependencies; it takes the concatenated feature and operation state indicator of the equipment as the input. Note that the indicator is a one-hot vector, and based on it, the remaining useful life can be estimated without any pre-defined threshold. The outputs of the LSTM network are connected to a fully-connected layer to map the hidden state into the parameters of a Gaussian mixture model and a categorical distribution so that the predicted output sequence can be sampled from them. The performance of the proposed method is verified on the health monitoring data of aircraft turbofan engines. The result shows that the proposed approach is able to achieve significant performance whether in the one-step prediction task, the long-term prediction task, or the remaining useful life prediction task.
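
The Gaussian-mixture output layer described here is essentially a mixture density network on top of an LSTM. A rough PyTorch sketch (all sizes are assumptions; the categorical branch and the training loss are omitted) might look like this:

```python
import torch
import torch.nn as nn

class MDNPrognosis(nn.Module):
    """Sketch of the described network (all sizes assumed): an LSTM over
    the concatenated [features ++ one-hot state] input, with a
    fully-connected head emitting Gaussian-mixture parameters."""
    def __init__(self, n_feat=14, n_states=6, hidden=64, k=3):
        super().__init__()
        self.lstm = nn.LSTM(n_feat + n_states, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 3 * k)  # weight, mean, log-std per component
        self.k = k

    def forward(self, x):
        h, _ = self.lstm(x)                   # x: (batch, time, n_feat + n_states)
        p = self.head(h[:, -1])               # parameters from the last time step
        logit_pi, mu, log_sigma = p.chunk(3, dim=1)
        return torch.softmax(logit_pi, 1), mu, log_sigma.exp()

pi, mu, sigma = MDNPrognosis()(torch.randn(4, 50, 20))  # 14 features + 6 states
# The next value would be sampled by picking a component ~ pi and drawing
# from N(mu, sigma); training would minimize the mixture NLL (not shown).
```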

Journal ArticleDOI
TL;DR: A ML integrated design for additive manufacturing framework is proposed, which takes advantage of ML that can learn the complex relationships between the design and performance spaces and has the capability to model input–output relationships in both directions.
Abstract: To improve manufacturing efficiency and minimize costs, design for additive manufacturing (AM) has accordingly been proposed. The existing design for AM methods are mainly surrogate-model based. Due to the increasingly available data nowadays, machine learning (ML) has been applied to medical diagnosis, image processing, prediction, classification, learning association, etc. A variety of studies have also been carried out to use machine learning for optimizing the process parameters of AM with corresponding objectives. In this paper, an ML-integrated design for AM framework is proposed, which takes advantage of the fact that ML can learn the complex relationships between the design and performance spaces. Furthermore, the primary advantage of ML over other surrogate modelling methods is the capability to model input–output relationships in both directions. That is, a deep neural network can model property–structure relationships, given structure–property input–output data. A case study was carried out to demonstrate the effectiveness of using ML to design a customized ankle brace that has a tunable mechanical performance with tailored stiffness.

Journal ArticleDOI
TL;DR: The results indicate that the RVFL-EO model has the predicting ability to estimate the laser-cutting characteristics of PMMA sheet.
Abstract: In this paper, an enhanced random vector functional link network (RVFL) algorithm was employed to predict kerf quality indices during CO2 laser cutting of polymethylmethacrylate (PMMA) sheets. In the proposed model, the equilibrium optimizer (EO) is used to augment the prediction capability of RVFL via selecting the optimal values of RVFL parameters. The predicting model includes four input variables: gas pressure, sheet thickness, laser power, and cutting speed, and five kerf quality indices: rough zone ratio, widths of up and down heat affected zones, maximum surface roughness, and kerf taper angle. The experiments were designed using Taguchi L18 orthogonal array. The kerf surface contains three main zones: rough, transient, and smooth zones. The results of conventional RVFL as well as modified RVFL-EO algorithms were compared with experimental ones. Seven statistical criteria were used to assess the performance of the proposed algorithms. The results indicate that the RVFL-EO model has the predicting ability to estimate the laser-cutting characteristics of PMMA sheet.
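
For readers unfamiliar with RVFL, the conventional network is compact enough to sketch: random input-to-hidden weights, direct input-to-output links, and output weights solved in closed form. The snippet below is a generic illustration (the EO step that tunes the RVFL parameters is omitted, and all hyperparameter values are assumptions):

```python
import numpy as np

class RVFL:
    """Minimal conventional RVFL sketch: a random hidden layer plus
    direct links, with output weights found by ridge-regularized least
    squares. (The paper's EO step, which tunes these parameters, is omitted.)"""
    def __init__(self, n_hidden=50, reg=1e-3, seed=0):
        self.n_hidden, self.reg = n_hidden, reg
        self.rng = np.random.default_rng(seed)

    def _design(self, X):
        H = np.tanh(X @ self.W + self.b)     # random nonlinear features
        return np.hstack([X, H])             # direct links + hidden layer

    def fit(self, X, Y):
        d = X.shape[1]
        self.W = self.rng.uniform(-1, 1, (d, self.n_hidden))
        self.b = self.rng.uniform(-1, 1, self.n_hidden)
        D = self._design(X)
        self.beta = np.linalg.solve(D.T @ D + self.reg * np.eye(D.shape[1]),
                                    D.T @ Y)
        return self

    def predict(self, X):
        return self._design(X) @ self.beta

# e.g. four process inputs -> five kerf quality indices, 18 Taguchi runs
model = RVFL().fit(np.random.rand(18, 4), np.random.rand(18, 5))
```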

Journal ArticleDOI
TL;DR: Results reveal that the vibration signatures obtained from the developed non-contact sensor compare well with the accelerometer data obtained under the same conditions, which makes the developed sensor a cost-effective tool for the condition monitoring of rotating machines.
Abstract: Bearing defects have been accepted as one of the major causes of failure in rotating machinery. It is important to identify and diagnose the failure behavior of bearings for the reliable operation of equipment. In this paper, a low-cost non-contact vibration sensor has been developed for detecting faults in bearings. A supervised learning method, the support vector machine (SVM), has been employed as a tool to validate the effectiveness of the developed sensor. Experimental vibration data collected for different bearing defects under various loading and running conditions have been analyzed to develop a system for diagnosing faults for machine health monitoring. Fault diagnosis has been accomplished using the discrete wavelet transform for denoising the signal. The Mahalanobis distance criterion has been employed to select the strongest features from the extracted relevant features. Finally, these selected features have been passed to the SVM classifier for identifying and classifying the various bearing defects. The results reveal that the vibration signatures obtained from the developed non-contact sensor compare well with the accelerometer data obtained under the same conditions. The developed sensor is a promising tool for detecting bearing damage and identifying its class. SVM results have established the effectiveness of the developed non-contact sensor as a vibration measuring instrument, which makes it a cost-effective tool for the condition monitoring of rotating machines.
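
A hedged sketch of the signal-processing side of this pipeline, using PyWavelets and scikit-learn, is given below; the universal soft threshold and the placeholder features are illustrative stand-ins for the paper's actual choices:

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def dwt_denoise(x, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients (universal threshold);
    an illustrative stand-in for the paper's denoising step."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745   # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

clean = dwt_denoise(np.random.default_rng(0).normal(size=4096))

# Features (e.g. RMS, kurtosis) would be extracted from the denoised
# signals, screened with the Mahalanobis-distance criterion, and then
# classified; the arrays below are placeholders for that feature matrix.
X_train = np.random.rand(100, 6)
y_train = np.random.randint(0, 4, 100)     # four bearing conditions (assumed)
clf = SVC(kernel="rbf").fit(X_train, y_train)
```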

Journal ArticleDOI
TL;DR: A generalized methodology for automated material identification using machine vision and machine learning technologies to contribute to the cognitive abilities of machine tools as well as material handling devices such as robots deployed in industry 4.0 is presented.
Abstract: Manufacturing has experienced tremendous changes from industry 1.0 to industry 4.0 with the advancement of technology in fast-developing areas such as computing, image processing, automation, machine vision, and machine learning, along with big data and the Internet of things. Machine tools in industry 4.0 shall have the ability to identify the materials they handle so that they can make and implement certain decisions on their own as needed. This paper aims to present a generalized methodology for automated material identification using machine vision and machine learning technologies to contribute to the cognitive abilities of machine tools as well as material handling devices such as robots deployed in industry 4.0. A dataset of the surfaces of four materials (Aluminium, Copper, Medium density fibre board, and Mild steel) that need to be identified and classified is prepared and processed to extract the red, green and blue color components of the RGB color model. These color components are used as features while training the machine learning algorithm. Support vector machine is used as a classifier, and other classification algorithms such as Decision trees, Random forests, Logistic regression, and k-Nearest Neighbor are also applied to the prepared dataset. The capability of the proposed methodology to identify the different groups of materials is verified with the images available in an open source database. The methodology presented has been validated by conducting four experiments checking the classification accuracies of the classifier. Its robustness has also been checked for various camera orientations, illumination levels, and focal lengths of the lens. The results presented show that the proposed scheme can be implemented in an existing manufacturing setup without major modifications.
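
Because the feature set is just the red, green and blue components of each surface image, the core of the methodology is easy to illustrate. The sketch below uses synthetic tinted images as stand-ins for real surface photographs (the tint values and the choice of the mean as the per-channel statistic are assumptions):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def rgb_features(img):
    """Mean red, green and blue of an HxWx3 surface image: the three
    color-component features described above (statistic choice assumed)."""
    return np.asarray(img, dtype=float).reshape(-1, 3).mean(axis=0)

# Synthetic stand-ins for surface photographs of the four materials;
# real use would load images of each machined surface instead.
rng = np.random.default_rng(0)
tints = {"Aluminium": (200, 200, 205), "Copper": (184, 115, 51),
         "MDF": (160, 120, 80), "Mild steel": (120, 120, 125)}
X, y = [], []
for name, tint in tints.items():
    for _ in range(20):
        img = rng.normal(tint, 15, size=(64, 64, 3))
        X.append(rgb_features(img))
        y.append(name)

svm = SVC(kernel="rbf").fit(X, y)                        # primary classifier
rf = RandomForestClassifier(n_estimators=100).fit(X, y)  # one alternative
print(svm.predict([rgb_features(rng.normal(tints["Copper"], 15, (64, 64, 3)))]))
```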

Journal ArticleDOI
TL;DR: Deep learning computer vision methods were used to evaluate the quality of lithium-ion battery electrodes for automated detection of microstructural defects from light microscopy images of the sectioned cells, demonstrating that deep learning models are able to learn accurate representations of the microstructure images well enough to distinguish instances with defects from those without defects.
Abstract: During the manufacturing of lithium-ion battery electrodes, it is difficult to prevent certain types of defects, which affect the overall battery performance and lifespan. Deep learning computer vision methods were used to evaluate the quality of lithium-ion battery electrodes for automated detection of microstructural defects from light microscopy images of the sectioned cells. The results demonstrate that deep learning models are able to learn accurate representations of the microstructure images well enough to distinguish instances with defects from those without defects. Furthermore, the benefits of using pretrained networks for microstructure classification were also demonstrated, achieving the highest classification accuracies. This method provides an approach to analyse thousands of Li-ion battery micrographs for quality assessment in a very short time, and it can also be combined with other common battery characterization methods for further technical analysis.
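
The pretrained-network idea can be sketched with a standard torchvision backbone; the paper's actual backbone, head and training schedule are not specified here, so treat this as a generic transfer-learning illustration for defect/no-defect micrograph classification:

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse ImageNet features and replace the classifier with a two-class
# (defect / no-defect) head; ResNet-18 is an assumed choice of backbone.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # freeze pretrained features
model.fc = nn.Linear(model.fc.in_features, 2)     # new trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy micrograph batch.
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(x), y)
loss.backward()
optimizer.step()
```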

Journal ArticleDOI
TL;DR: An approach to use standardized work description for automated procedure generation of mobile assistant robots is introduced and an optimized human–robot task allocation is found, in which the tasks are allocated in an intelligent and comprehensible way.
Abstract: Human–robot collaboration is enabled by the digitization of production and has become a key technology for the factory of the future. It combines the strengths of both the human worker and the assistant robot, and allows the implementation of a varying degree of automation in workplaces in order to meet the increasing demand for flexibility in manufacturing systems. Intelligent planning and control algorithms are needed for the organization of the work in hybrid teams of humans and robots. This paper introduces an approach that uses standardized work descriptions for automated procedure generation for mobile assistant robots. A simulation tool is developed that implements the procedure model and is therefore capable of calculating different objective parameters, such as production time or ergonomics, during a production cycle as a function of the human–robot task allocation. The simulation is validated with an existing workplace in an assembly line at the Volkswagen plant in Wolfsburg, Germany. Furthermore, a new method is presented to optimize the task allocation in human–robot teams for a given workplace, using the simulation as the fitness function in a genetic algorithm. The advantage of this new approach is the possibility of evaluating different distributions of the tasks while considering the dynamics of the interaction between the worker and the robot in their shared workplace. Using the presented approach for a given workplace, an optimized human–robot task allocation is found, in which the tasks are allocated in an intelligent and comprehensible way.
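
One plausible encoding of this optimization (not the authors' implementation) represents the allocation as a binary vector, with the simulation standing in as the fitness function. The sketch below replaces the validated workplace simulation with a toy cycle-time model; task count, task times and GA settings are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
N_TASKS = 12                                  # workplace size (assumed)
t_human = rng.gamma(2.0, 1.0, N_TASKS)        # placeholder task durations
t_robot = rng.gamma(2.0, 1.2, N_TASKS)

def fitness(alloc):
    """Stand-in for the validated simulation: cycle time when human and
    robot work their allocated tasks in parallel (0 = human, 1 = robot)."""
    return max(t_human[alloc == 0].sum(), t_robot[alloc == 1].sum())

def ga(pop_size=40, gens=100, p_mut=0.05):
    pop = rng.integers(0, 2, (pop_size, N_TASKS))
    for _ in range(gens):
        f = np.array([fitness(ind) for ind in pop])
        parents = pop[np.argsort(f)[: pop_size // 2]]   # minimise cycle time
        cuts = rng.integers(1, N_TASKS, pop_size // 2)  # one-point crossover
        kids = np.array([np.concatenate([parents[i][:c],
                                         parents[(i + 1) % len(parents)][c:]])
                         for i, c in enumerate(cuts)])
        kids[rng.random(kids.shape) < p_mut] ^= 1       # bit-flip mutation
        pop = np.vstack([parents, kids])
    return pop[np.argmin([fitness(ind) for ind in pop])]

best_allocation = ga()   # 0 = human task, 1 = robot task
```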

Journal ArticleDOI
TL;DR: A deep learning-based quality identification method for metal AM process that is able to achieve promising performance using limited supervised samples with low quality, such as noisy and blurred images.
Abstract: As a promising modern technology, additive manufacturing (AM) has been receiving increasing research and industrial attention in recent years. With its rapid development, the importance of quality monitoring in the AM process, which significantly affects the properties of the manufactured parts, has been recognized. Since conventional hand-crafted features for quality identification are generally costly, time-consuming and sensitive to noise, intelligent data-driven automatic process monitoring methods are becoming more and more popular at present. This paper proposes a deep learning-based quality identification method for the metal AM process. To alleviate the requirement of most existing data-driven methods for large amounts of high-quality labeled training data, an identification consistency-based approach is proposed to better exploit semi-supervised training data. The proposed method is able to achieve promising performance using limited supervised samples of low quality, such as noisy and blurred images. Experiments on a real-world metal AM dataset are implemented to validate the effectiveness of the proposed method, which offers a promising tool for real industrial applications.

Journal ArticleDOI
TL;DR: This work develops a novel framework through the combined use of multi-domain vibration feature extraction, feature selection and a cost-sensitive learning method that consistently outperforms traditional classifiers such as support vector machine (SVM), gradient boosting decision tree (GBDT), etc.
Abstract: Fault diagnosis plays an essential role in rotating machinery manufacturing systems to reduce their maintenance costs. How to improve diagnosis accuracy remains an open issue. To this end, we develop a novel framework through combined use of multi-domain vibration feature extraction, feature selection and cost-sensitive learning method. First, we extract time-domain, frequency-domain, and time-frequency-domain features to make full use of vibration signals. Second, a feature selection technique is employed to obtain a feature subset with good generalization properties, by simultaneously measuring the relevance and redundancy of features. Third, a cost-sensitive learning method is designed for a classifier to effectively learn the discriminating boundaries, with an extremely imbalanced distribution of fault instances. For illustration, a real-world dataset of rotating machinery collected from an oil refinery in China is utilized. The extensive experiments have demonstrated that our multi-domain feature extraction and feature selection can significantly improve the diagnosis accuracy. Meanwhile, our cost-sensitive learning method consistently outperforms the traditional classifiers such as support vector machine (SVM), gradient boosting decision tree (GBDT), etc., and even better than the classification method calibrated by six popular imbalanced data resampling algorithms, such as the Synthetic Minority Over-sampling Technique (SMOTE) and the Adaptive Synthetic sampling method (ADASYN), in terms of decreasing missed alarms and reducing the average cost. Owing to its high evaluation scores and low average misclassification cost, cost-sensitive GBDT (CS-GBDT) is preferred for imbalanced fault diagnosis in practice.
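
One common way to obtain a cost-sensitive GBDT, which may differ from the paper's exact formulation, is to encode the asymmetric misclassification costs as per-sample training weights. A scikit-learn sketch on imbalanced synthetic data (cost values are illustrative assumptions):

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Imbalanced toy data: roughly 1% faults (class 1), 99% normal (class 0).
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 8))
y = (rng.random(5000) < 0.01).astype(int)
X[y == 1] += 1.5                        # shift faulty samples for separability

# Cost-sensitive weighting: a missed alarm (false negative) is assumed to
# be far costlier than a false alarm, so faulty samples get a large weight.
cost_fn, cost_fp = 50.0, 1.0
weights = np.where(y == 1, cost_fn, cost_fp)

cs_gbdt = GradientBoostingClassifier(n_estimators=200, max_depth=3)
cs_gbdt.fit(X, y, sample_weight=weights)
```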

Journal ArticleDOI
TL;DR: A dynamic greedy search strategy was developed to avoid the blind searching of the traditional strategy, and a weighted iteration update of the Q function, including the weighted mean of the maximum fuzzy earning, was designed to improve the speed and accuracy of the learning algorithm.
Abstract: Given the dynamic and uncertain production environment of job shops, a scheduling strategy with adaptive features must be developed to fit variational production factors. Therefore, a dynamic scheduling system model based on multi-agent technology, including machine, buffer, state, and job agents, was built. A weighted Q-learning algorithm based on clustering and dynamic search was used to determine the most suitable operation and to optimize production. To address the large state space problem caused by changes in the system state, four state features were extracted, and the dimension of the system state was decreased through the clustering method. To reduce the error between the actual system states and the clustered ones, the state difference degree was defined and integrated with the iteration formula of the Q function. To select the optimal state-action pair, improved search and iteration update strategies were proposed. Convergence analysis of the proposed algorithm and simulation experiments indicated that the proposed adaptive strategy is well adaptable and effective in different scheduling environments, and shows better performance in complex environments. The two contributions of this research are as follows: (1) a dynamic greedy search strategy was developed to avoid the blind searching of the traditional strategy; (2) a weighted iteration update of the Q function, including the weighted mean of the maximum fuzzy earning, was designed to improve the speed and accuracy of the learning algorithm.

Journal ArticleDOI
TL;DR: ConvLBM, a novel approach to monitor Laser Based Manufacturing processes in real-time using a Convolutional Neural Network model to extract features and quality indicators from raw Medium Wavelength Infrared coaxial images, is presented.
Abstract: The extraction of meaningful features from the monitoring of laser processes is the foundation of new non-destructive quality inspection methods for the manufactured pieces, which has been and remains a growing interest in industry. We present ConvLBM, a novel approach to monitor Laser Based Manufacturing processes in real-time. ConvLBM uses a Convolutional Neural Network model to extract features and quality indicators from raw Medium Wavelength Infrared coaxial images. We demonstrate the ability of ConvLBM to represent process dynamics, and predict quality indicators in two scenarios: dilution estimation in Laser Metal Deposition, and location of defects in laser welding processes. Obtained results represent a breakthrough in the 3D printing of large metal parts, and in the quality control of welding processes. We are also releasing the first large dataset of annotated images of laser manufacturing.

Journal ArticleDOI
TL;DR: A new methodology to solve a Closed-Loop Supply Chain management problem through a decision-making system based on fuzzy logic built on machine learning was tested on an industrial hospital laundry with satisfactory results, highlighting the potential of this proposal for its incorporation into the Industry 4.0 framework.
Abstract: This paper presents a new methodology to solve a Closed-Loop Supply Chain (CLSC) management problem through a decision-making system based on fuzzy logic built on machine learning. The system will provide decisions to operate a production plant integrated in a CLSC to meet the production goals with the presence of uncertainties. One of the main contributions of the proposal is the ability to reject the effects that the imbalances in the rest of the chain have on the inventories of raw materials and finished products. For this, an intelligent algorithm will be in charge of the supervision of the plant operation and task-reprogramming to ensure the achievement of the process goals. Fuzzy logic and machine learning techniques are combined to design the tool. The method was tested on an industrial hospital laundry with satisfactory results, thus highlighting the potential of this proposal for its incorporation into the Industry 4.0 framework.

Journal ArticleDOI
TL;DR: This paper provides a comprehensive review of the state-of-the-art methods and practice reported in the literature dealing with many different aspects of data-informed inverse design by reviewing the origins and common practice of inverse problems in engineering design.
Abstract: A significant body of knowledge exists on inverse problems and extensive research has been conducted on data-driven design in the past decade. This paper provides a comprehensive review of the state-of-the-art methods and practice reported in the literature dealing with many different aspects of data-informed inverse design. By reviewing the origins and common practice of inverse problems in engineering design, the paper presents a closed-loop decision framework of product usage data-informed inverse design. Specifically reviewed areas of focus include data-informed inverse requirement analysis by user generated content, data-informed inverse conceptual design for product innovation, data-informed inverse embodiment design for product families and product platforming, data-informed inverse analysis and optimization in detailed design, along with prevailing techniques for product usage data collection and analytics. The paper also discusses the challenges of data-informed inverse design and the prospects for future research.

Journal ArticleDOI
TL;DR: Application of the proposed monitoring system to experimental data shows that the KPCA-based method can effectively monitor the tool wear.
Abstract: Tool wear is one of the consequences of a machining process. Excessive tool wear can lead to poor surface finish, and result in a defective product. It can also lead to premature tool failure, and may result in process downtime and damaged components. With this in mind, it has long been desired to monitor tool wear/tool condition. Kernel principal component analysis (KPCA) is proposed as an effective and efficient method for monitoring the tool condition in a machining process. The KPCA-based method may be used to identify faults (abnormalities) in a process through the fusion of multi-sensor signals. The method employs a control chart monitoring approach that uses Hotelling's T2-statistic and Q-statistic to identify the faults in conjunction with control limits, which are computed by kernel density estimation (KDE). KDE is a non-parametric technique to approximate a probability density function. Four performance metrics, abnormality detection rate, false detection rate, detection delay, and prediction accuracy, are employed to test the reliability of the monitoring system and are used to compare the KPCA-based method with the PCA-based method. Application of the proposed monitoring system to experimental data shows that the KPCA-based method can effectively monitor the tool wear.
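
The monitoring statistics described here can be approximated with scikit-learn and SciPy. In the sketch below, T2 is computed from the variance-scaled KPCA scores and Q is simplified to an input-space reconstruction error, with KDE-based control limits set at the 99th percentile of the healthy-state statistics; these are illustrative simplifications, not the paper's exact derivation:

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.decomposition import KernelPCA

# Fit KPCA on in-control (healthy) data only; sizes are illustrative.
rng = np.random.default_rng(1)
X_healthy = rng.normal(size=(300, 10))

kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.1,
                 fit_inverse_transform=True).fit(X_healthy)

def t2_q(X):
    """T2 from variance-scaled KPCA scores; Q approximated as the
    input-space reconstruction error (a simplification of the
    kernel-space residual)."""
    T = kpca.transform(X)
    t2 = np.sum(T ** 2 / kpca.eigenvalues_, axis=1)  # eigenvalues_: sklearn >= 1.0
    q = np.sum((X - kpca.inverse_transform(T)) ** 2, axis=1)
    return t2, q

def kde_limit(stat, alpha=0.99):
    """Control limit as the alpha-quantile of a KDE fitted to the
    healthy-state statistic, located on a dense grid."""
    kde = gaussian_kde(stat)
    grid = np.linspace(stat.min(), stat.max() * 2, 2000)
    cdf = np.cumsum(kde(grid))
    cdf /= cdf[-1]
    return grid[np.searchsorted(cdf, alpha)]

t2, q = t2_q(X_healthy)
t2_lim, q_lim = kde_limit(t2), kde_limit(q)
# A new observation is flagged as abnormal when t2 > t2_lim or q > q_lim.
```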

Journal ArticleDOI
Tao Zan1, Zhihao Liu1, Hui Wang1, Min Wang1, Xiangsheng Gao1 
TL;DR: A CCPR method based on a one-dimensional convolutional neural network (1D-CNN) is proposed; the influence of the network structural parameters and activation functions on the recognition performance is analyzed and discussed, and some suggestions for parameter selection are given.
Abstract: Unnatural control chart patterns (CCPs) usually correspond to specific factors in a manufacturing process, so control charts have become an important means of statistical process control. Therefore, accurate and automatic control chart pattern recognition (CCPR) is of great significance for manufacturing enterprises. In order to improve the CCPR accuracy, experts have designed various complex features, which undoubtedly increases the workload and difficulty of quality control. To solve these problems, a CCPR method based on a one-dimensional convolutional neural network (1D-CNN) is proposed. The proposed method does not require complex features to be extracted manually; instead, it uses a 1D-CNN to obtain the optimal feature set from the raw CCP data through feature learning and completes the CCPR. The dataset for training and validation, containing six typical CCPs, is generated by Monte-Carlo simulation. Then, the influence of the network structural parameters and activation functions on the recognition performance is analyzed and discussed, and some suggestions for parameter selection are given. Finally, the performance of the proposed method is compared with that of the traditional multi-layer perceptron method using the same dataset. The comparison results show that the proposed 1D-CNN method has obvious advantages in CCPR tasks. Compared with the related literature, the features extracted by the 1D-CNN are of higher quality. Furthermore, the 1D-CNN trained with the simulated dataset still performs well in recognizing the real dataset from the production environment.
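
Both halves of this setup, Monte-Carlo pattern generation and a small 1D-CNN, are easy to sketch. The pattern definitions and layer sizes below follow common CCPR conventions assumed for illustration, not the paper's exact parameters:

```python
import numpy as np
import torch
import torch.nn as nn

def simulate_ccp(pattern, n=60, rng=np.random.default_rng(0)):
    """Monte-Carlo generation of one control-chart sample; the six
    typical patterns assumed here follow common CCPR practice."""
    t = np.arange(n)
    x = rng.normal(0, 1, n)                         # "normal": noise only
    if pattern == "up_trend":    x += 0.05 * t
    if pattern == "down_trend":  x -= 0.05 * t
    if pattern == "up_shift":    x += 2.0 * (t >= n // 2)
    if pattern == "down_shift":  x -= 2.0 * (t >= n // 2)
    if pattern == "cyclic":      x += 1.5 * np.sin(2 * np.pi * t / 8)
    return x.astype(np.float32)

cnn = nn.Sequential(                 # small 1D-CNN; layer sizes assumed
    nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Conv1d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
    nn.Flatten(), nn.Linear(32 * 15, 6))            # 6 pattern classes

x = torch.tensor(simulate_ccp("cyclic")).view(1, 1, 60)
logits = cnn(x)                                     # shape (1, 6)
```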

Journal ArticleDOI
TL;DR: This paper proposes a data-driven method in a self-supervised manner, which is different from previous prognostic methods, and shows that the algorithm outperforms other fault detection methods.
Abstract: As a part of prognostics and health management (PHM), fault detection has been used in many fields to improve the reliability of the system and reduce the manufacturing costs. Due to the complexity of the system and the richness of the sensors, fault detection still faces some challenges. In this paper, we propose a data-driven method in a self-supervised manner, which is different from previous prognostic methods. In our algorithm, we first extract feature indices of each batch and concatenate them into one feature vector. Then the principal components are extracted by Kernel PCA. Finally, the fault is detected by the reconstruction error in the feature space. Samples with high reconstruction error are identified as faulty. To demonstrate the effectiveness of the proposed algorithm, we evaluate our algorithm on a benchmark dataset for fault detection, and the results show that our algorithm outperforms other fault detection methods.

Journal ArticleDOI
TL;DR: A novel TCM method that employs only a few appropriate feature parameters of acoustic sensor signals in conjunction with a two-layer angle kernel extreme learning machine is proposed to achieve superior TCM performance relative to other state-of-the-art methods based on sound sensor data.
Abstract: Tool condition monitoring (TCM) in numerical control machines plays an essential role in ensuring high manufacturing quality. The TCM process is conducted according to the data obtained from one or more of a variety of sensors, among which acoustic sensors offer numerous practical advantages. However, acoustic sensor data suffer from strong noise, which can severely limit the accuracy of predictions regarding tool condition. The present work addresses this issue by proposing a novel TCM method that employs only a few appropriate feature parameters of acoustic sensor signals in conjunction with a two-layer angle kernel extreme learning machine. The two-layer network structure is applied to enhance the learning of features associated with complex nonlinear data, and two angle kernel functions without hyperparameters are employed to avoid the complications associated with the use of preset hyperparameters in conventional kernel functions. The proposed TCM method is experimentally demonstrated to achieve superior TCM performance relative to other state-of-the-art methods based on sound sensor data.