
Showing papers in "IEEE Transactions on Automation Science and Engineering in 2020"


Journal ArticleDOI
TL;DR: A novel optimal path planning algorithm based on the convolutional neural network (CNN), namely the neural RRT* (NRRT*), which utilizes a nonuniform sampling distribution generated from a CNN model and achieves better performance.
Abstract: The rapidly exploring random tree (RRT) and its variants are very popular due to their ability to quickly and efficiently explore the state space. However, they suffer from sensitivity to the initial solution and slow convergence to the optimal solution, which means that they consume a lot of memory and time to find the optimal path. It is critical to quickly find a short path in many applications, such as an autonomous vehicle with limited power/fuel. To overcome these limitations, we propose a novel optimal path planning algorithm based on a convolutional neural network (CNN), namely the neural RRT* (NRRT*). The NRRT* utilizes a nonuniform sampling distribution generated from a CNN model. The model is trained on a large number of successful path planning cases. In this article, we use the A* algorithm to generate the training data set consisting of the map information and the optimal path. For a given task, the proposed CNN model can predict the probability distribution of the optimal path on the map, which is used to guide the sampling process. The time cost and memory usage of the planned path are selected as the metrics to demonstrate the effectiveness and efficiency of the NRRT*. The simulation results reveal that the NRRT* can achieve convincing performance compared with state-of-the-art path planning algorithms. Note to Practitioners —The motivation of this article stems from the need to develop a fast and efficient path planning algorithm for practical applications such as autonomous driving, warehouse robots, and countless others. Sampling-based algorithms are widely used in these areas due to their good scalability and high efficiency. However, the quality of the initial path is not guaranteed, and it takes much time to converge to the optimal path. To quickly obtain a high-quality initial path and accelerate the convergence speed, we propose the NRRT*. It utilizes a nonuniform sampling distribution and achieves better performance.
The NRRT* can also be applied to other sampling-based algorithms for improved results in different applications.
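The guided-sampling idea at the heart of the NRRT* can be sketched compactly: draw from a predicted optimal-path distribution with some probability, and fall back to uniform sampling otherwise so that probabilistic completeness is retained. The sketch below is illustrative only; the grid size, the `bias` parameter, and the `prob_map` dictionary (standing in for the CNN's predicted probability map) are assumptions, not the paper's implementation.

```python
import random

def nonuniform_sample(prob_map, bias=0.5, rng=random):
    """Draw one sample for an RRT*-style planner.

    With probability `bias`, sample a cell from `prob_map` (a dict mapping
    (row, col) cells to predicted optimal-path probabilities, standing in
    for a CNN's output); otherwise fall back to uniform sampling so that
    probabilistic completeness is preserved.
    """
    if rng.random() < bias and prob_map:
        cells = list(prob_map.keys())
        weights = list(prob_map.values())
        return rng.choices(cells, weights=weights, k=1)[0]
    # Uniform fallback over a hypothetical 10x10 grid.
    return (rng.randrange(10), rng.randrange(10))

# Toy probability map concentrated on a diagonal "corridor".
pmap = {(i, i): 1.0 for i in range(10)}
samples = [nonuniform_sample(pmap, bias=1.0) for _ in range(100)]
assert all(r == c for r, c in samples)  # biased samples stay on the corridor
```

In a full planner, the returned cell would seed the steer/rewire steps of RRT*; lowering `bias` trades guidance for exploration.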

198 citations


Journal ArticleDOI
TL;DR: It is proved that all states of the closed-loop system are semiglobally uniformly ultimately bounded (SGUUB) by utilizing the Lyapunov stability principles.
Abstract: In this article, an admittance-based controller for physical human–robot interaction (pHRI) is presented to perform coordinated operation in a constrained task space. An admittance model and a soft saturation function are employed to generate a differentiable reference trajectory to ensure that the end-effector motion of the manipulator complies with the human operation and avoids collision with the surroundings. Then, an adaptive neural network (NN) controller involving an integral barrier Lyapunov function (IBLF) is designed to deal with tracking issues. Meanwhile, the controller guarantees that the end-effector of the manipulator remains within the constrained task space. A learning method based on the radial basis function NN (RBFNN) is involved in the controller design to compensate for the dynamic uncertainties and improve tracking performance. The IBLF method is provided to prevent violations of the constrained task space. We prove that all states of the closed-loop system are semiglobally uniformly ultimately bounded (SGUUB) by utilizing Lyapunov stability principles. Finally, the effectiveness of the proposed algorithm is verified on a Baxter robot experiment platform. Note to Practitioners —This work is motivated by the neglect of safety in existing controller designs for physical human–robot interaction (pHRI), which arises in industry and services such as assembly and medical care. Rigorous constraint handling is strongly required in controller design. In this article, we therefore propose a novel admittance-based human–robot interaction controller.
The developed controller has the following functionalities: 1) ensuring that the reference trajectory remains in the constrained task space: a differentiable reference trajectory is shaped by the desired admittance model and a soft saturation function; 2) handling uncertainties in the robot dynamics: a learning approach based on a radial basis function neural network (RBFNN) is involved in the controller design; and 3) ensuring that the end-effector of the manipulator remains in the constrained task space: different from other barrier Lyapunov functions (BLFs), the integral BLF (IBLF) is proposed to constrain the system output directly rather than the tracking error, which may be more convenient for controller designers. The controller can potentially be applied in many areas. First, it can be used in rehabilitation robots to avoid injuring the patient by limiting the motion. Second, it can keep the end-effector of an industrial manipulator within a prescribed task region. In some industrial tasks, dangerous or fragile tools are mounted on the end-effector, and they can injure humans and damage the robot when the end-effector moves outside the prescribed task region. Third, it may offer a new idea for controller design to avoid collisions in pHRI when collisions occur along the prescribed trajectory of the end-effector.

170 citations


Journal ArticleDOI
TL;DR: A new generation method called surface defect-generation adversarial network (SDGAN), which employs generative adversarial networks (GANs), is proposed to generate defect images using a large number of defect-free images from industrial sites, and experiments show that the defect images generated by the SDGAN have better image quality and diversity than those generated by state-of-the-art methods.
Abstract: This article aims to improve deep-learning-based surface defect recognition. Owing to the insufficiency of the defect images in practical production lines and the high cost of labeling, it is difficult to obtain a sufficient defect data set in terms of diversity and quantity. A new generation method called surface defect-generation adversarial network (SDGAN), which employs generative adversarial networks (GANs), is proposed to generate defect images using a large number of defect-free images from industrial sites. Experiments show that the defect images generated by the SDGAN have better image quality and diversity than those generated by the state-of-the-art methods. The SDGAN is applied to expand the commutator cylinder surface defect image data sets with and without labels (referred to as the CCSD-L and CCSD-NL data sets, respectively). Regarding anomaly recognition, a 1.77% error rate and a 49.43% relative improvement (IMP) for the CCSD-NL defect data set are obtained. Regarding defect classification, a 0.74% error rate and a 57.47% IMP for the CCSD-L defect data set are achieved. Moreover, defect classification trained on the images augmented by the SDGAN is robust to uneven and poor lighting conditions. Note to Practitioners —This article proposes a method of defect image generation to address the lack of industrial defect images. Traditional defect recognition methods have two disadvantages: different types of defects require different algorithms and handcrafted features are deficient. Defect recognition using deep learning can solve the above problems. However, deep learning requires a plethora of images, and the number of industrial defect images cannot meet this requirement. We propose a new defect image-generation method called SDGAN to generate a defect image data set that balances diversity and authenticity. 
In practice, we employ a large number of defect-free images to generate a large number of defect images with our method, expanding the industrial defect image data set. Then, the augmented defect data set is used to build a deep-learning defect recognition model. Experiments show that the accuracy of defect recognition can be significantly improved by building a deep-learning defect recognition model on the augmented data set. Therefore, deep learning can achieve excellent performance in defect recognition with a limited number of real defect images.

91 citations


Journal ArticleDOI
TL;DR: A new method termed the fine-grained adversarial network-based domain adaptation (FANDA) is proposed to address the cross-domain industrial fault diagnosis problem and can reduce the distribution discrepancy of both the global domains and each fault class across the domains automatically.
Abstract: While machine-learning techniques have been widely used in smart industrial fault diagnosis, there is a major assumption that the source domain data (where the diagnosis model is trained) and the future target data (where the model is applied) must have the same distribution. However, this assumption may not hold in real industrial applications due to the changing operating conditions or mechanical wear. Recent advances have embedded the adversarial-learning mechanism into deep neural networks to reduce the distribution discrepancy between different domains to learn domain-invariant features and perform fault diagnosis. However, they only aligned the distributions of domains and neglected the fault-discriminative structure underlying the target domain, which leads to a decline in the diagnostic performance. In this article, a new method termed the fine-grained adversarial network-based domain adaptation (FANDA) is proposed to address the cross-domain industrial fault diagnosis problem. Different from the existing domain adversarial adaptation methods considering the domain discrepancy only, the features in FANDA are learned by competing against multiple-domain discriminators, which enable both a global alignment for two domains and a fine-grained alignment for each fault class across two domains. Thus, the fault-discriminative structure underlying two domains can be preserved in the adaptation process and the fault classification ability learned on the source domain can remain effective on the target data. Experiments on a mechanical bearing case and an industrial three-phase flow process case demonstrate the effectiveness of the proposed method. Note to Practitioners—The varying industrial conditions (domains) can lead to the degradation of diagnostic performance as the distribution can change from the source domain to the target domain. 
The focus of this article is to develop a fine-grained adversarial network-based domain adaptation (FANDA) strategy to diagnose different kinds of faults across the domains. The proposed FANDA algorithm can automatically reduce the distribution discrepancy of both the global domains and each fault class across the domains. The training procedure is completed in an adversarial manner, driving the learned feature representations to be transferable across the two domains. Thus, the fault classifier learned on the source domain can be applied to the target domain directly. It is noted that common deep network architectures can be embedded into the FANDA framework, and thus, the method is suitable for carrying out cross-domain fault diagnosis tasks in diverse advanced manufacturing applications.

90 citations


Journal ArticleDOI
TL;DR: A data-driven model-free method using image-based visual servoing (IBVS), which uses features extracted directly in the image space as feedback, is proposed; it enables velocity-independent path following of an arbitrarily given path on the plane, which permits a better user-interaction experience.
Abstract: Magnetically actuated microswimmers have attracted researchers to investigate their swimming characteristics and controlled actuation. Although plenty of studies on actuating helical microswimmers have been carried out, robust closed-loop controls still need to be explored for practical applications. In this paper, we propose a data-driven model-free method using image-based visual servoing (IBVS), which uses features extracted directly in the image space as feedback. The IBVS method can eliminate camera calibration errors. We have demonstrated with experiments that the proposed IBVS method enables velocity-independent path following of an arbitrarily given path on the plane, which permits a better user-interaction experience. The proposed control method is successfully applied to obstacle avoidance tasks and has potential for application in complex circumstances. This approach is promising for biomedical applications. Note to Practitioners —This paper is motivated by the problem of driving a small-scale swimming robot with a helical body along a predefined path by means of magnetic fields. The proposed closed-loop control uses features extracted directly in the image space as feedback. We demonstrated with experiments that the helical swimming robot can follow an arbitrarily given path on the plane using the proposed control method. The proposed control method is also successfully applied to obstacle avoidance tasks.
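The essence of an IBVS loop is a proportional law on the image-feature error, v = -lambda * (s - s*). The sketch below uses the common simplification of an identity interaction matrix for a planar point feature; the gain, time step, and pixel values are illustrative assumptions, and the paper's data-driven controller is more involved.

```python
def ibvs_velocity(feature, target, lam=0.8):
    """Classic IBVS proportional law v = -lambda * (s - s*), here with the
    interaction matrix approximated by the identity for a planar point
    feature tracked directly in the image (a common simplification)."""
    return tuple(-lam * (s - s_star) for s, s_star in zip(feature, target))

# Simulate the image error shrinking under the control law.
s, s_star, dt = [120.0, 80.0], [100.0, 100.0], 0.1
for _ in range(200):
    v = ibvs_velocity(s, s_star)
    s = [si + vi * dt for si, vi in zip(s, v)]
err = max(abs(si - ti) for si, ti in zip(s, s_star))
assert err < 1e-3   # the pixel error decays exponentially toward zero
```

Because the error is regulated purely in the image space, calibration errors in the camera-to-world mapping affect only the transient, not the converged feature position.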

87 citations


Journal ArticleDOI
TL;DR: A two-stage decomposition method is proposed so that industrial-size problems can be solved; it obtains an optimal solution of the concerned problem with week-scale batches and jobs in a short time, demonstrating its readiness for industrial use.
Abstract: Production scheduling is a crucial task in modern steel plants. The scheduling of a wire rod and bar rolling process is challenging in many steel plants and has a direct impact on their production efficiency and profit. This article studies a new single-machine scheduling problem with sequence-dependent setup time, release time, and due time constraints originating from a wire rod and bar rolling process in steel plants. In this problem, jobs have been assigned to batches in advance. The objective is to schedule the batches and jobs in continuous time to minimize the number of late jobs. A mixed-integer program is created as a baseline model. A baseline method is used to solve this NP-hard problem by solving the baseline model. We further design a two-stage decomposition method after analyzing the characteristics of this problem. Both actual and simulated instances of varying sizes are solved using the proposed methods. The results demonstrate that the baseline method can only solve some small-scale cases, while the decomposition method can solve all small-scale cases and some medium-scale cases. Finally, we reveal the impacts of different instances on the performance of the proposed decomposition method. Note to Practitioners —This article deals with a new single-machine scheduling problem arising from an industrial wire rod and bar rolling process. A baseline method is given to tackle this problem by solving an established mixed-integer program. Afterward, a two-stage decomposition method is proposed so that industrial-size problems can be solved. Computational results on both actual and simulated cases show that it is more efficient than the baseline method in solving the scheduling problem. It can obtain an optimal solution of the concerned problem with week-scale batches and jobs in a short time, demonstrating its readiness for industrial use.
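For intuition about the objective, note that the classical special case (one machine, no setups, no release times) is solved exactly by the Moore-Hodgson algorithm. The sketch below illustrates that baseline only; the paper's problem with sequence-dependent setups and batch constraints is NP-hard and is what the decomposition method targets.

```python
import heapq

def moore_hodgson(jobs):
    """Maximize on-time jobs (equivalently, minimize late jobs) on a single
    machine in the classical setting without setups or release times.
    `jobs` is a list of (processing_time, due_time) pairs.
    Returns the number of on-time jobs."""
    jobs = sorted(jobs, key=lambda j: j[1])   # earliest-due-date order
    heap, t = [], 0                           # max-heap (negated) of scheduled times
    for p, d in jobs:
        heapq.heappush(heap, -p)
        t += p
        if t > d:                             # deadline missed: drop longest job
            t += heapq.heappop(heap)          # popped value is negative
    return len(heap)

# Three jobs, one of which must inevitably be late.
assert moore_hodgson([(2, 3), (2, 4), (3, 5)]) == 2
```

Dropping the longest scheduled job whenever a deadline is missed is what makes the greedy choice optimal here; sequence-dependent setups break this property, which is why the article needs a mixed-integer program and a decomposition.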

84 citations


Journal ArticleDOI
TL;DR: A novel method called MOELS for minimizing both costs and makespan when deploying a workflow into a cloud datacenter is described, which seamlessly combines a list scheduling heuristic and an evolutionary algorithm to have complementary advantages.
Abstract: Cloud computing has nowadays become a dominant technology to reduce computation cost by elastically providing resources to users on a pay-per-use basis. More and more scientific and business applications represented by workflows have been moved or are in active transition to cloud platforms. Therefore, efficient cloud workflow scheduling methods are in high demand. This paper investigates how to simultaneously optimize makespan and economic cost for workflow scheduling in clouds and proposes a multiobjective evolutionary list scheduling (MOELS) algorithm to address it. It embeds classic list scheduling into a powerful multiobjective evolutionary algorithm (MOEA): a genome is represented by a scheduling sequence and a preference weight and is interpreted as a scheduling solution via a specifically designed list scheduling heuristic, and the genomes in the population are evolved through tailored genetic operators. Simulation experiments with real-world data show that MOELS outperforms some state-of-the-art methods as it always achieves a higher hypervolume (HV) value. Note to Practitioners —This paper describes a novel method called MOELS for minimizing both cost and makespan when deploying a workflow into a cloud datacenter. MOELS seamlessly combines a list scheduling heuristic and an evolutionary algorithm to obtain their complementary advantages. It is compared with two state-of-the-art algorithms, MOHEFT (multiobjective heterogeneous earliest finish time) and EMS-C (evolutionary multiobjective scheduling for cloud), in simulation experiments. The results show that the average hypervolume value from MOELS is 3.42% higher than that of MOHEFT and 2.27% higher than that of EMS-C. The runtime that MOELS requires rises moderately as the workflow size increases.
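The hypervolume (HV) metric used for this comparison measures the objective-space area dominated by a solution set relative to a reference point: larger is better. A minimal two-objective implementation for minimization problems (e.g., makespan vs. cost), with made-up points, might look like this.

```python
def hypervolume_2d(points, ref):
    """Hypervolume (area dominated with respect to reference point `ref`)
    for a set of 2-objective minimization points."""
    # Keep only non-dominated points, sorted by the first objective.
    pts = sorted(set(points))
    front, best_y = [], float("inf")
    for x, y in pts:
        if y < best_y:
            front.append((x, y))
            best_y = y
    hv, prev_x = 0.0, ref[0]
    for x, y in reversed(front):       # sweep strips from right to left
        hv += (prev_x - x) * (ref[1] - y)
        prev_x = x
    return hv

# Two trade-off points against reference point (4, 4):
# union of dominated rectangles has area 3 + 3 - 1 (overlap) = 5.
assert hypervolume_2d([(1, 3), (3, 1)], (4, 4)) == 5.0
```

Because HV rewards both convergence and spread of the front, a single scalar suffices to compare multiobjective schedulers such as MOELS, MOHEFT, and EMS-C.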

81 citations


Journal ArticleDOI
TL;DR: A setup change scheduling method using reinforcement learning is proposed in which each agent determines setup decisions in a decentralized manner and learns a centralized policy by sharing a neural network among the agents to deal with the changes in the number of machines.
Abstract: As semiconductor manufacturers have recently focused on producing multichip products (MCPs), scheduling semiconductor manufacturing operations becomes complicated due to the constraints related to reentrant production flows, sequence-dependent setups, and alternative machines. At the same time, the scheduling problems need to be solved frequently to effectively manage the variabilities in production requirements, available machines, and initial setup status. To minimize the makespan for an MCP scheduling problem, we propose a setup change scheduling method using reinforcement learning (RL) in which each agent determines setup decisions in a decentralized manner and learns a centralized policy by sharing a neural network among the agents to deal with changes in the number of machines. Furthermore, novel definitions of state, action, and reward are proposed to address the variabilities in production requirements and initial setup status. Numerical experiments demonstrate that the proposed approach outperforms the rule-based, metaheuristic, and other RL methods in terms of the makespan while incurring shorter computation time than the metaheuristics considered. Note to Practitioners —This article studies a scheduling problem for the die attach and wire bonding stages of a semiconductor packaging line. Due to the variabilities in production requirements, the number of available machines, and initial setup status, it is challenging for a scheduler to produce high-quality schedules within a specific time limit using existing approaches. In this article, a new scheduling method using reinforcement learning is proposed to enhance robustness against these variabilities while achieving performance improvements. To verify the robustness of the proposed method, neural networks (NNs) trained on small-scale scheduling problems are used to solve large-scale scheduling problems.
Experimental results show that the proposed method outperforms the existing approaches while requiring a short computation time. Furthermore, the trained NN performs well in solving unseen real-world scale problems even under stochastic processing time, suggesting the viability of the proposed method for real-world semiconductor packaging lines.
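The decision structure can be illustrated with a tabular Q-learning toy: a machine either keeps its current setup or pays a setup cost to switch to the product that actually earns reward. The states, rewards, and hyperparameters below are invented for illustration; the paper replaces the table with a neural network shared among decentralized agents, but the Bellman update is the same.

```python
import random

def train_setup_policy(episodes=2000, alpha=0.2, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning on a toy two-product setup problem: action 0 keeps
    the current setup, action 1 changes it (setup cost now, but the other
    product pays off afterward). All values here are illustrative."""
    rng = random.Random(seed)
    # trans[state][action] = (next_state, reward)
    trans = {0: {0: (0, 0.0), 1: (1, -0.5)},   # state 0: wrong setup, idle
             1: {0: (1, 1.0), 1: (0, -0.5)}}   # state 1: right setup, produce
    Q = {s: [0.0, 0.0] for s in trans}
    for _ in range(episodes):
        s = 0
        for _ in range(10):                    # ten setup decisions per episode
            a = (rng.randrange(2) if rng.random() < eps
                 else max((0, 1), key=lambda act: Q[s][act]))
            s2, r = trans[s][a]
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

Q = train_setup_policy()
assert Q[0][1] > Q[0][0]   # learned: pay the setup cost to switch products
assert Q[1][0] > Q[1][1]   # learned: keep the productive setup
```

The same update rule drives the shared-network variant; sharing parameters across agents is what lets the learned policy generalize to a different number of machines.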

81 citations


Journal ArticleDOI
TL;DR: This article presents the first solution for effective control of DRCS, which needs no velocity feedback, respects the actuator constraints, and is designed and analyzed without linearizing the complicated nonlinear dynamic equations.
Abstract: As a class of underactuated systems, cooperative dual rotary crane systems (DRCSs) are widely used to complete the task of large payload transportation in complex environments, since the working capacity of single cranes is quite limited. However, the control issues of DRCS fail to receive enough attention at present. Compared with single cranes, DRCSs contain more state variables, geometric constraints, and coupling relationships. Therefore, the complex kinematic and dynamic characteristics make controller design/stability analysis very challenging for DRCS. In order to solve these problems, based on the dynamic model of DRCS established by Lagrange’s method, an output feedback control method with consideration for actuator constraints is designed to realize accurate dual boom positioning and rapid elimination of payload swings. The stability of the equilibrium point for the closed-loop system is analyzed by using Lyapunov techniques and LaSalle’s invariance principle. To the best of our knowledge, this article yields the first solution for effective control of DRCS, which needs no velocity feedback, respects the actuator constraints, and is designed and analyzed without linearizing the complicated nonlinear dynamic equations. Finally, a series of hardware experiments on a self-built experimental platform is carried out to illustrate the effectiveness of the proposed controller. Note to Practitioners —This article is motivated by the control problem of dual rotary boom crane systems. In order to meet industrial requirements, the masses and volumes of to-be-hoisted cargoes are larger than before, and consequently, dual-crane systems are more frequently needed to fulfill transportation tasks. For such systems, although they improve the load capacity, greater challenges are caused when eliminating cargo swings in the transportation process. 
To the best of our knowledge, current control methods are mainly designed based on linearized/reduced models, whose performance may not be satisfactory when swing angles are large. Moreover, most methods use velocity signals, which carry unwanted noise, and ignore actuator amplitude constraints, which may not be feasible in practical applications. To solve these problems, this article presents an output feedback controller that simultaneously handles saturation constraints and the unavailability of velocity signals. The proposed controller guarantees accurate boom positioning and cargo swing elimination with rigorous theoretical proof. Furthermore, the control performance is verified experimentally on a self-built testbed. In future efforts, we intend to apply the presented control method in industrial applications.

72 citations


Journal ArticleDOI
Xinyu Wu1, Jia Liu1, Chenyang Huang1, Meng Su1, Tiantian Xu1 
TL;DR: It is demonstrated that the helical microswimmer is able to follow different paths in 3-D space with submillimeter accuracy using the proposed closed-loop controller according to an orientation-compensation model learned by neural networks.
Abstract: Controlling magnetic microswimmers for 3-D manipulation tasks has received considerable attention. Although related studies on manipulating helical microswimmers have been developed, stable closed-loop controls and accurate swimming models still need to be investigated. This article addresses the problem of 3-D path following for magnetically driven helical microswimmers with an adaptive-compensation scheme. The orientation-compensation model in the global coordinate frame is learned by radial basis function (RBF) networks trained with backpropagation algorithms and is used to express the motion of the helical microswimmer in the presence of the swimmer's weight and lateral disturbances from boundary effects. A proxy-based sliding-mode control (PSMC) approach is developed to design stable controllers based on the kinematic error model. The effects of variable parameters and boundary effects are also considered. Experimental results, including different paths in 3-D space, validate path following with submillimeter accuracy using the helical microswimmer. Note to Practitioners —This article is motivated by the issue of following predefined paths for magnetically driven helical microswimmers in 3-D space. The proposed closed-loop controller employs the error model in 3-D space to formulate the control law according to an orientation-compensation model learned by neural networks. It is demonstrated that the helical microswimmer is able to follow different paths in 3-D space with submillimeter accuracy using the proposed control scheme.
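The function approximator in question, a radial basis function network, is simply a weighted sum of Gaussian bumps. A minimal forward pass with made-up centers and weights is shown below; the paper additionally trains the weights by backpropagation to fit the orientation-compensation term.

```python
import math

def rbf_forward(x, centers, widths, weights):
    """Forward pass of a radial basis function network: a weighted sum of
    Gaussian basis functions of the distance to each center, the kind of
    approximator used to learn a compensation term."""
    phi = [math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c)) / (2 * s ** 2))
           for c, s in zip(centers, widths)]
    return sum(w * p for w, p in zip(weights, phi))

# A basis centered exactly at the query point contributes its full weight;
# the far-away center contributes about exp(-25), which is negligible.
y = rbf_forward([0.0, 0.0],
                centers=[[0.0, 0.0], [5.0, 5.0]],
                widths=[1.0, 1.0],
                weights=[2.0, 3.0])
assert abs(y - 2.0) < 1e-5
```

Because each basis function is local, the network can correct the swimmer's motion model near specific operating regions (e.g., close to a boundary) without disturbing its behavior elsewhere.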

69 citations


Journal ArticleDOI
TL;DR: An elastic band-based rapidly exploring random tree (EB-RRT) algorithm is proposed to achieve real-time optimal motion planning for the mobile robot in the dynamic environment, which can maintain a homotopy optimal trajectory based on current heuristic trajectory.
Abstract: In a human–robot coexisting environment, it is pivotal for a mobile service robot to arrive at the goal position safely and efficiently. In this article, an elastic band-based rapidly exploring random tree (EB-RRT) algorithm is proposed to achieve real-time optimal motion planning for the mobile robot in the dynamic environment, which can maintain a homotopy optimal trajectory based on current heuristic trajectory. Inspired by the EB method, we propose a hierarchical framework consisting of two planners. In the global planner, a time-based RRT algorithm is used to generate a feasible heuristic trajectory for a specific task in the dynamic environment. However, this heuristic trajectory is nonoptimal. In the dynamic replanner, the time-based nodes on the heuristic trajectory are updated due to the internal contraction force and the repulsive force from the obstacles. In this way, the heuristic trajectory is optimized continuously, and the final trajectory can be proved to be optimal in the homotopy class of the heuristic trajectory. Simulation experiments reveal that compared with two state-of-the-art algorithms, our proposed method can achieve better performance in dynamic environments. Note to Practitioners —The motivation of this work stems from the need to achieve real-time optimal motion planning for the mobile robot in the human–robot coexisting environment. Sampling-based algorithms are widely used in this area due to their good scalability and high efficiency. However, the generated trajectory is usually far from optimal. To obtain an optimized trajectory for the mobile robot in the dynamic environment with moving pedestrians, we propose the EB-RRT algorithm on the basis of the time-based RRT tree and the EB method. Depending on the time-based RRT tree, we quickly get a heuristic trajectory and guarantee the probabilistic completeness of our algorithm. 
Then, we optimize the heuristic trajectory in a manner similar to the EB method, which achieves the homotopy optimality of the final trajectory. We also take the nonholonomic constraints into account, and our proposed algorithm can be applied to most mobile robots to further improve their motion planning ability and trajectory quality.
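The elastic-band update itself can be sketched on a plain 2-D waypoint path: each interior node feels a contraction force toward the midpoint of its neighbors and a repulsive force away from nearby obstacles. Point obstacles, gains, and step size below are illustrative assumptions; the paper's planner additionally attaches time stamps to each node.

```python
def eb_update(path, obstacles, k_contract=0.5, k_repulse=1.0,
              influence=1.5, iters=100):
    """Elastic-band style smoothing of a 2-D waypoint path under an internal
    contraction force and repulsive forces from point obstacles."""
    path = [list(p) for p in path]
    for _ in range(iters):
        for i in range(1, len(path) - 1):
            x, y = path[i]
            (xp, yp), (xn, yn) = path[i - 1], path[i + 1]
            fx = k_contract * ((xp + xn) / 2 - x)   # pull toward neighbors
            fy = k_contract * ((yp + yn) / 2 - y)
            for ox, oy in obstacles:                # push away from obstacles
                d = ((x - ox) ** 2 + (y - oy) ** 2) ** 0.5
                if 1e-9 < d < influence:
                    fx += k_repulse * (influence - d) * (x - ox) / d
                    fy += k_repulse * (influence - d) * (y - oy) / d
            path[i] = [x + 0.2 * fx, y + 0.2 * fy]
    return path

# A detour node on a straight corridor is pulled back toward the line.
smoothed = eb_update([(0, 0), (1, 2), (2, 0)], obstacles=[])
assert abs(smoothed[1][1]) < 0.01   # contraction flattens the kink
```

Crucially, these local forces deform the trajectory continuously, so the result stays in the same homotopy class as the heuristic trajectory it started from.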

Journal ArticleDOI
TL;DR: A nonlinear disturbance observer and a continuous global sliding mode controller are proposed for the regulation and disturbance estimation control of the overhead crane system and the stability and convergence characteristics are proven through rigorous theoretical analysis.
Abstract: For practical mechanical systems, uncertainties/disturbances, such as unmodeled dynamics and friction, are nonignorable factors. In existing control methods, these factors are usually neglected or addressed in a robust way. As a consequence, the nominal control performance of these methods is sacrificed. Moreover, there exists a chattering problem for some existing robust methods, such as sliding mode control laws. To deal with these drawbacks, a continuous global sliding mode controller along with a nonlinear disturbance observer is designed for the regulation and disturbance estimation control of the overhead crane system. Specifically, the original crane dynamic model is transformed into a quasi-integrator-chain form through some transformations. Then, a nonlinear disturbance observer is designed, and a continuous global sliding mode control method is introduced on the basis of the constructed disturbance observer. The stability and convergence characteristics are proven through rigorous theoretical analysis. Finally, to demonstrate the performance of the designed controller, a series of experimental tests is performed, and a comparison study between the devised method and an existing method is given. Note to Practitioners —This article is motivated by the desire to deal with the regulation and disturbance rejection of the overhead crane system. In practical applications, uncertainties/disturbances are unavoidable problems for overhead cranes. In most existing methods, these issues are addressed in a robust way. To handle these problems, a nonlinear disturbance observer and a continuous global sliding mode controller are proposed for the regulation and disturbance estimation control of the overhead crane system. The disturbance observer is introduced to estimate and compensate for uncertain disturbances, and the sliding mode controller is designed to guarantee the convergence of the state variables of the closed-loop system.
In the future, we will try to apply this method to practical overhead cranes.
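The observer half of such a design can be illustrated on a first-order velocity loop with a constant matched disturbance: choosing the observer state z so that the estimate d_hat = z + L*v obeys d_hat' = L*(d - d_hat) makes the estimate converge to the true disturbance. This is a textbook-style sketch under assumed dynamics and gains, not the article's crane observer, and the simple damping law below stands in for the sliding-mode controller.

```python
def simulate_dob(d_true=2.0, L=5.0, dt=0.001, steps=4000):
    """First-order disturbance observer for the velocity loop v' = u + d,
    where d is an unknown matched lumped disturbance. With
    z' = -L*(z + L*v) - L*u and d_hat = z + L*v, one can verify that
    d_hat' = L*(d - d_hat), so the estimate converges to d."""
    v, z = 0.0, 0.0
    for _ in range(steps):
        d_hat = z + L * v
        u = -1.0 * v - d_hat             # damp v and cancel the estimate
        z += dt * (-L * (z + L * v) - L * u)
        v += dt * (u + d_true)           # true plant; d_true unknown to observer
    return z + L * v                     # final disturbance estimate

est = simulate_dob()
assert abs(est - 2.0) < 1e-2             # observer recovers the disturbance
```

Feeding the estimate back (the `- d_hat` term in `u`) is what restores the nominal performance that purely robust designs give up, and it removes the need for a large discontinuous switching term, which is the source of chattering.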

Journal ArticleDOI
TL;DR: An intelligent MPC-based control and management system for an NGIM is developed, which may be considered a practical solution to address development challenges and support the transition to precision and sustainable agriculture as well as the modernization of agriculture.
Abstract: This paper presents a novel high-level centralized control scheme for a smart network of greenhouses integrated microgrid (NGIM) forming a smart small power grid in the context of smart grids. The main purpose is to present an innovative control strategy based on coordinated model predictive control (MPC) that considers fluctuations of stochastic renewable sources as well as weather conditions. A comprehensive finite-horizon scheduling optimization model is formulated to optimally control the operation of the NGIM, integrating both forecasts and newly updated information collected from the available sensors at the network level. The model can be implemented as a supervisory control and energy management system for the NGIM to manipulate the indoor climate and optimize crop production. The cooperation is reached through a bidirectional communication infrastructure, where a master central controller is available at the network level and is in charge of coordinating and managing various control signals. An MPC-based algorithm is used for the future operation scheduling of all subsystems available in the NGIM. The MPC strategy is tested through a case study where the influences of climate data on the operation of the NGIM are analyzed via numerical results. Note to Practitioners —Under the smart grid paradigm, smart greenhouses can be taken as an alternative that can mitigate and face the development challenges of the agricultural sector. Smart greenhouses can be considered active players that may play a key role in modernizing agriculture by offering viable new smart management solutions, advanced control strategies, and innovative decision-support tools, whose objective is to better support growers, investors, and professionals. A smart network of greenhouses integrated microgrid (NGIM) can play an increasing role in enhancing the sustainable energy supply in the agricultural sector.
In addition, the incorporation of new information and communication technologies, advanced metering infrastructure, and optimal control strategies can help the agricultural sector meet an increasing number of regulations on quality and the environment. In this paper, a comprehensive MPC-based scheduling optimization model that considers fluctuations of stochastic renewable sources, as well as weather conditions, is formulated to optimally control the operation of the NGIM. We developed and validated an MPC-based intelligent control and management system for an NGIM that may be considered a practical solution to mitigate and address the development challenges and support the transition to precision and sustainable agriculture as well as the modernization of agriculture.
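The receding-horizon logic at the heart of such an MPC scheduler can be sketched in a few lines. The one-zone thermal model, cost weights, and discrete heater levels below are illustrative placeholders, not the paper's NGIM formulation:

```python
import itertools

# Toy receding-horizon loop in the spirit of an MPC scheduler: at each step,
# enumerate short control sequences, keep the best first move, apply it, and
# re-plan with the updated state and forecast.

def step(T, u, T_out):
    """One-step thermal model: decay toward outdoor temperature plus heating."""
    return T + 0.3 * (T_out - T) + 2.0 * u

def plan(T, forecast, setpoint, horizon=3, levels=(0.0, 0.5, 1.0)):
    """Enumerate control sequences over the horizon; return the best first move."""
    best_cost, best_u0 = float("inf"), 0.0
    for seq in itertools.product(levels, repeat=horizon):
        Tk, cost = T, 0.0
        for k, u in enumerate(seq):
            Tk = step(Tk, u, forecast[k])
            cost += (Tk - setpoint) ** 2 + 0.1 * u  # tracking error + energy use
        if cost < best_cost:
            best_cost, best_u0 = cost, seq[0]
    return best_u0

# Closed loop: apply only the first planned move, then re-plan (receding horizon).
T, setpoint, outdoor = 12.0, 20.0, [10.0] * 12
history = []
for t in range(8):
    u = plan(T, outdoor[t:t + 3], setpoint)
    T = step(T, u, outdoor[t])
    history.append(T)
```

The NGIM controller solves a far richer finite-horizon problem (multiple greenhouses, storage, renewable forecasts), but the apply-first-move-then-re-plan pattern is the same.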

Journal ArticleDOI
TL;DR: A data-driven framework for fault detection and classification (FDC) during the wafer fabrication process is proposed by incorporating several useful machine learning approaches; experimental results demonstrate that the proposed framework delivers high-quality fault detection performance and provides valuable information regarding the critical SVIDs and the associated key processing time for fault diagnosis.
Abstract: Fault detection and classification (FDC) is important for semiconductor manufacturing to monitor equipment’s condition and examine the potential cause of the fault. Each piece of equipment in the semiconductor manufacturing process is often accompanied by a large number of sensor readings, also called status variable identification (SVID). Identifying the key SVIDs accurately can make it easier for engineers to monitor the process and maintain the stability of the process and wafer productive yields. This article proposes using the random forests algorithm to analyze the importance of SVIDs of equipment sensors, automatically filtering the key SVIDs by ${k}$ -means, and integrating various machine learning methods to verify the key SVIDs and identify key processing time and steps. Once the key parameters are identified, the key processing time and steps are investigated subsequently. The ensemble models constructed on ${k}$ -nearest neighbors ( ${k}$ NNs) and naive Bayes classifiers are presented for classifying wafers as normal or abnormal. Data visualization of multidimensional key SVIDs is performed by using ${t}$ -distributed stochastic neighbor embedding ( ${t}$ -SNE) to create a graphical aid in FDC for the process engineer. An empirical study is conducted to validate the proposed data-driven framework for fault detection and diagnosis. The experimental results demonstrate that the proposed framework can detect abnormality effectively with highly imbalanced classes and also gain insightful information about the key SVIDs and corresponding key processing time and steps. Note to Practitioners —The challenges of equipment sensor data analytics in semiconductor manufacturing include building the classifier to detect wafer abnormality correctly, identification of key status variable identifications (SVIDs) and processing time and steps of abnormality, and data visualization of the abnormality in a high-dimensional feature space. 
This article proposes a data-driven framework for fault detection and classification (FDC) during the wafer fabrication process by incorporating several useful machine learning approaches. Experimental results demonstrate that the proposed data-driven framework delivers high-quality fault detection performance and provides valuable information regarding the critical SVIDs and associated key processing time for fault diagnosis. Engineers can utilize the extracted fault patterns to perform a prognosis of the aging effect on process tools or modules for health management.
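The first two stages of the screening pipeline (random-forest importance ranking, then automatic key/minor separation by k-means) can be sketched on synthetic data. Data shapes, seeds, and the use of scikit-learn here are illustrative assumptions, not the paper's fab data or exact settings:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

# Rank sensor channels (SVIDs) by random-forest importance, then let 1-D
# k-means (k=2) split the scores into a "key" group and a "minor" group.
rng = np.random.default_rng(0)
n, d = 400, 10
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # label depends only on SVIDs 0 and 3

rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
imp = rf.feature_importances_

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(imp.reshape(-1, 1))
# The cluster with the larger mean importance holds the automatically
# selected key SVIDs.
key_cluster = int(np.argmax([imp[km.labels_ == c].mean() for c in (0, 1)]))
key_svids = sorted(np.where(km.labels_ == key_cluster)[0].tolist())
```

On this synthetic example the two informative channels are recovered as the key group; the paper then verifies such candidates with further classifiers before inspecting processing time and steps.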

Journal ArticleDOI
TL;DR: Four iterated greedy algorithm-based methods using different neighborhood structures and integrating a variable neighborhood descent method are developed to solve the scheduling problem of a batch production process, i.e., wire rod and bar rolling, which is modeled by a Petri net (PN).
Abstract: Wire rod and bar rolling is an important batch production process in steel production systems. A scheduling problem originating from this process is studied in this work by considering the constraints on sequence-dependent family setup time and release time. Each serial batch to be scheduled contains several jobs, and the number of late jobs within it varies with its start time. First, we model a rolling process using a Petri net (PN), where a so-called rolling transition describes a rolling operation of a batch. The objective of the concerned problem is to determine a firing sequence of all rolling transitions such that the total number of late jobs is minimized. Next, a mixed-integer linear program is formulated based on the PN model. Due to the NP-hardness of the concerned problem, iterated greedy algorithm (IGA)-based methods using different neighborhood structures and integrating a variable neighborhood descent method are developed to obtain near-optimal solutions. To test the accuracy, speed, and stability of the proposed algorithms, we compare their solutions on different-size instances with those of CPLEX (a commercial solver) and four heuristic peers. The results indicate that the proposed algorithms outperform their peers and have great potential to be applied to industrial production process scheduling.
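The destruction–construction loop that defines an iterated greedy algorithm can be shown on a deliberately tiny single-machine toy with the same objective (number of late jobs). The job data and parameters are made up; the paper's setting adds batches, sequence-dependent family setups, and release times, which are omitted here:

```python
import random

# Minimal iterated greedy (IG) sketch: jobs have (processing_time, due_date);
# the objective is the number of late jobs in the sequence.
JOBS = [(4, 6), (2, 4), (5, 18), (3, 8), (6, 30), (1, 3)]  # (p_j, d_j)

def late_count(seq):
    t = late = 0
    for j in seq:
        t += JOBS[j][0]
        late += t > JOBS[j][1]
    return late

def greedy_insert(partial, job):
    """Construction step: insert `job` at the position minimizing the objective."""
    best = min(range(len(partial) + 1),
               key=lambda i: late_count(partial[:i] + [job] + partial[i:]))
    return partial[:best] + [job] + partial[best:]

def iterated_greedy(iters=200, d=2, seed=1):
    random.seed(seed)
    seq = sorted(range(len(JOBS)), key=lambda j: JOBS[j][1])  # EDD start
    best = seq[:]
    for _ in range(iters):
        # Destruction: remove d random jobs; construction: greedy reinsertion.
        removed = random.sample(seq, d)
        partial = [j for j in seq if j not in removed]
        for job in removed:
            partial = greedy_insert(partial, job)
        if late_count(partial) <= late_count(seq):
            seq = partial
        if late_count(seq) < late_count(best):
            best = seq[:]
    return best

best = iterated_greedy()
```

On this instance the earliest-due-date start has two late jobs, while the IG loop finds the optimum of one (zero is provably impossible, since the three jobs due by time 6 take 7 time units together). The paper's variants differ in the neighborhood structures used during construction and in the embedded variable neighborhood descent.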

Journal ArticleDOI
TL;DR: A novel application of a variable stiffness actuator (VSA)-based assistance/rehabilitation robot featuring impedance control with a cascaded position-torque control loop is presented, describing how to adjust the actuator stiffness to work cooperatively with the adaptive impedance control scheme.
Abstract: We present a novel application of a variable stiffness actuator (VSA)-based assistance/rehabilitation robot featuring impedance control with a cascaded position-torque control loop. The robot follows the adaptive impedance control paradigm, thereby achieving an adaptive assistance level according to human joint torque. The feedforward human joint torque command is used to cooperatively adjust the impedance controller and the stiffness trajectory of the VSA (this functional architecture is referred to as the cooperative control framework). In this way, the task performance during movement training can be improved regarding: 1) safety —for example, when the subject intends to contribute considerable effort, low-gain impedance control is activated with a low stiffness actuator to further decrease output impedance and 2) tracking performance —for example, for the subject with less effort, high-gain impedance control is used while pursuing high stiffness to enhance the torque bandwidth. Regarding the safety aspect, we demonstrate that the torque controller designed at low stiffness can be sensitive to the disturbance for low output impedance while maintaining tracking performance. A precondition for this is to treat the input disturbance separately. This is guaranteed by our previously proposed torque control of the VSA using the linear quadratic Gaussian technique. This approach is also employed here, but with additional discussion on the observer design to serve the proposed cooperative control approach. Here, the effectiveness of the proposed control system is experimentally verified using a VSA prototype and a one-degree-of-freedom lower limb exoskeleton worn by a human test person. Note to Practitioners —Control of “physical human–robot interaction” can be achieved by the mechanical parts of the variable stiffness actuator (VSA). 
However, the mechanical construction for stiffness variation may limit the capacity to achieve low output stiffness and fast stiffness variation in speed. These limitations may become more evident in assistance/rehabilitation robot applications. To overcome these limitations, the impedance control scheme can be employed to achieve a programmable impedance range and impedance variation speed. This control scheme has been widely applied to fixed-compliance joints but has lacked a way to be implemented on the VSA joint, because the VSA can already shape the impedance through its mechanical construction. This article presents a novel application of the impedance-controlled VSA used on a lower limb robot. We describe how to adjust the actuator stiffness to cooperatively work with the adaptive impedance control scheme. Based on our approach, the robot with the impedance-controlled VSA joint can extend the capacity of bandwidth and low output impedance. This is an improvement on the impedance-controlled fixed-compliance joint. The cooperative control framework presented here was tested on an exoskeleton system with two healthy test persons and is also applicable to other actuator prototypes. Future research aims to employ this system for actual patient training.
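The adaptive assistance level can be illustrated with a bare impedance law whose gains shrink as the estimated human torque grows. The gain bounds, the reference torque, and the linear scaling rule below are hypothetical placeholders, not the article's identified VSA parameters:

```python
# "Assist-as-needed" sketch: scale impedance controller gains down as the
# estimated human joint torque grows, so an active user meets low output
# impedance while a passive user gets stiff tracking assistance.

def impedance_gains(tau_human, k_max=40.0, d_max=4.0, tau_ref=5.0):
    """High gains for a passive user; low gains when the user contributes effort."""
    scale = max(0.1, 1.0 - abs(tau_human) / tau_ref)
    return k_max * scale, d_max * scale

def impedance_torque(q_des, q, dq, tau_human):
    """Classic impedance law: spring toward q_des plus damping on velocity."""
    k, d = impedance_gains(tau_human)
    return k * (q_des - q) - d * dq

tau_passive = impedance_torque(0.5, 0.0, 0.0, tau_human=0.0)  # stiff assistance
tau_active = impedance_torque(0.5, 0.0, 0.0, tau_human=4.0)   # compliant
```

In the article this software-side adaptation is coordinated with the VSA's mechanical stiffness trajectory, which the sketch above does not model.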

Journal ArticleDOI
TL;DR: The proposed PLPNet method can effectively detect polyps in colonoscopy images and generate high-quality segmentation masks in a pixel-to-pixel manner and corroborates that CNNs with very deep architecture and richer semantics are highly efficient in medical image learning and inference.
Abstract: Polyp recognition in colonoscopy images is crucial for early colorectal cancer detection and treatment. However, the current manual review requires undivided concentration of the gastroenterologist and is prone to diagnostic errors. In this article, we present an effective, two-stage approach called PLPNet, where the abbreviation “PLP” stands for the word “polyp,” for automated pixel-accurate polyp recognition in colonoscopy images using very deep convolutional neural networks (CNNs). Compared to hand-engineered approaches and previous neural network architectures, our PLPNet model improves recognition accuracy by adding a polyp proposal stage that predicts the location box with polyp presence. Several schemes are proposed to ensure the model’s performance. First of all, we construct a polyp proposal stage as an extension of the faster R-CNN, which performs as a region-level polyp detector to recognize the lesion area as a whole and constitutes stage I of PLPNet. Second, stage II of PLPNet is built in a fully convolutional fashion for pixelwise segmentation. We define a feature sharing strategy to transfer the learned semantics of polyp proposals to the segmentation task of stage II, which is proven to be highly capable of guiding the learning process and improve recognition accuracy. Additionally, we design skip schemes to enrich the feature scales and thus allow the model to generate detailed segmentation predictions. For accurate recognition, the advanced residual nets and feature pyramids are adopted to seek deeper and richer semantics at all network levels. Finally, we construct a two-stage framework for training and run our model convolutionally via a single-stream network at inference time to efficiently output the polyp mask. 
Experimental results on public data sets of the GIANA Challenge demonstrate the accuracy gains of our approach, which surpasses previous state-of-the-art methods on the polyp segmentation task (74.7 Jaccard Index) and establishes new top results in the polyp localization challenge (81.7 recall). Note to Practitioners —Given that the current manual review of colonoscopy is laborious and time-consuming, computational methods that can assist automatic polyp recognition will enhance both the efficiency and the diagnostic accuracy of colonoscopy. This article suggests a new approach using a very deep convolutional neural network (CNN) architecture for polyp recognition, which gains accuracy from deeper and richer representations. The method, called PLPNet, can effectively detect polyps in colonoscopy images and generate high-quality segmentation masks in a pixel-to-pixel manner. We evaluate the proposed framework on publicly available data sets, and we show by experiments that our method surpasses the state-of-the-art polyp recognition results. The findings of this article corroborate that CNNs with very deep architecture and richer semantics are highly efficient in medical image learning and inference. We believe that the proposed method will facilitate potential computer-aided applications in clinical practice, in that it can enhance medical decision-making in cancer detection and imaging.

Journal ArticleDOI
TL;DR: A structure dictionary learning-based method is proposed that overcomes the assumption that each operation mode of an industrial process should be modeled separately and can effectively detect faulty states, making it suitable for monitoring real industrial systems.
Abstract: Most industrial systems frequently switch their operation modes due to various factors, such as the changing of raw materials, static parameter setpoints, and market demands. To guarantee stable and reliable operation of complex industrial processes under different operation modes, the monitoring strategy has to adapt to different operation modes. In addition, different operation modes usually have some common patterns. To address these needs, this article proposes a structure dictionary learning-based method for multimode process monitoring. In order to validate the proposed approach, extensive experiments were conducted on a numerical simulation case, a continuous stirred tank heater (CSTH) process, and an industrial aluminum electrolysis process, in comparison with several state-of-the-art methods. The results show that the proposed method performs better than other conventional methods. Compared with conventional methods, the proposed approach overcomes the assumption that each operation mode of industrial processes should be modeled separately. Therefore, it can effectively detect faulty states. It is worth mentioning that the proposed method can not only detect faulty data but also classify the modes of normal data to identify the operating conditions so that an appropriate control strategy can be adopted. Note to Practitioners —Motivated by the fact that the industrial process often has different modes and they may have common patterns, this article proposes a structure dictionary learning method for multimode process monitoring. First, the structure dictionary learning method was proposed to extract the common pattern and mode-specific pattern of each mode. After the two different patterns are extracted, the control limit for process monitoring can be obtained from the training data. When new data arrive, the monitoring process can be carried out. 
Intensive experimental results show that the proposed method performs better than conventional methods. Unlike those methods, the proposed approach overcomes the assumption that each operation mode of the industrial process must be modeled separately, and it can therefore effectively detect faulty states. Above all, it is suitable for monitoring real industrial systems.
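The monitoring logic described in the Note (extract patterns, set a control limit from training data, then test new samples) can be illustrated with a deliberately simplified stand-in. Real structure dictionary learning solves a sparse-coding problem; the mean-based common-plus-offset decomposition and the synthetic data below only mimic that idea:

```python
import numpy as np

# Each mode = shared component + mode-specific offset; a control limit from
# training residuals flags faults, and the nearest mode classifies normal data.
rng = np.random.default_rng(1)
common = np.array([5.0, -2.0, 0.5])
offsets = {0: np.array([1.0, 0.0, 0.0]), 1: np.array([-1.0, 0.5, 0.0])}

train = {m: common + offsets[m] + 0.1 * rng.normal(size=(200, 3)) for m in offsets}
centers = {m: train[m].mean(axis=0) for m in train}

def monitor(x):
    """Return (residual to the nearest mode center, matched mode)."""
    mode = min(centers, key=lambda m: np.linalg.norm(x - centers[m]))
    return np.linalg.norm(x - centers[mode]), mode

# Control limit from training data (a quantile would be used in practice).
limit = max(monitor(x)[0] for m in train for x in train[m])

normal_r, normal_mode = monitor(common + offsets[1] + 0.05)
fault_r, _ = monitor(common + np.array([3.0, 3.0, 3.0]))
```

The normal sample stays under the limit and is assigned to the correct mode, while the fault exceeds it, which mirrors the detect-and-classify behavior claimed for the method.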

Journal ArticleDOI
TL;DR: The application developed here uses a statistical data-driven fault diagnosis technique; hence, it requires a training stage using historical data to learn patterns and estimate parameters, and it is scalable and flexible enough to facilitate implementation in other industrial environments.
Abstract: Being able to detect, identify, and diagnose a fault is a key feature of industrial supervision systems, which enables advanced asset management, in particular, predictive maintenance, which greatly increases efficiency and productivity. In this paper, an Industrial Internet app for real-time fault detection and diagnosis is implemented and tested on a pilot-scale industrial motor. Real-time fault detection and identification is based on dynamic incremental principal component analysis (DIPCA) and reconstruction-based contribution (RBC). When the analysis indicates that one of the vibration measurements is responsible for the fault, a convolutional neural network (CNN) is used to identify the unbalance or bearing fault type. The application was evaluated in its three functionalities: fault detection, fault identification, and fault identification of vibration-related faults, yielding a fault detection rate over 99%, a false alarm rate below 5%, and an identification accuracy over 90%. Note to Practitioners —This paper focuses on designing and evaluating a real-time fault diagnosis application in an industrial setup. To this end, this paper also tackles the problem of developing a methodology for implementing advanced state-of-the-art fault detection techniques in real machinery, following industry standards and using a modern informatics architecture. The application developed here uses a statistical data-driven fault diagnosis technique; hence, it requires a training stage using historical data to learn patterns and estimate parameters. A proof of concept in fault diagnosis for industrial motors is given; however, it should be noted that both the methodology and the deployed architecture are scalable and flexible enough to facilitate the implementation in other industrial environments. The implementation presented here was deployed using only open-source tools, which allows evaluating this tool without incurring high expenses.
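A bare-bones version of statistical fault detection in the spirit of this pipeline can be sketched with plain PCA: fit on normal data, monitor the squared prediction error (SPE), and inspect per-variable contributions to point at the faulty sensor. The dynamic/incremental and reconstruction-based parts of DIPCA + RBC are omitted, and the data and limit are synthetic:

```python
import numpy as np

# Fit PCA on normal training data, set an empirical SPE control limit, then
# flag a sample with an injected sensor bias and localize it by contribution.
rng = np.random.default_rng(0)
n = 500
latent = rng.normal(size=(n, 2))
W = np.array([[1.0, 1.0, 0.1, 0.0], [1.0, -1.0, 0.0, 0.1]])
X = latent @ W + 0.05 * rng.normal(size=(n, 4))

mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
P = Vt[:2].T                                  # retain two principal components

def spe(x):
    r = (x - mu) - P @ (P.T @ (x - mu))       # residual off the PCA subspace
    return float(r @ r), r ** 2               # SPE statistic, per-variable parts

limit = np.quantile([spe(x)[0] for x in X], 0.99)  # empirical 99% control limit

fault = X[0].copy()
fault[2] += 5.0                                # inject a bias fault on sensor 2
stat, contrib = spe(fault)
```

The faulty sample exceeds the control limit, and the largest contribution correctly singles out sensor 2; RBC refines this localization step, and the app's CNN then classifies the fault type for vibration channels.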

Journal ArticleDOI
TL;DR: This article proposes LeafGAN, a novel image-to-image translation system with its own attention mechanism, which generates countless diverse and high-quality training images; it works as an efficient data augmentation for the diagnosis classifier.
Abstract: Many applications for the automated diagnosis of plant disease have been developed based on the success of deep learning techniques. However, these applications often suffer from overfitting, and the diagnostic performance is drastically decreased when used on test data sets from new environments. In this article, we propose LeafGAN, a novel image-to-image translation system with its own attention mechanism. LeafGAN generates a wide variety of diseased images via transformation from healthy images, as a data augmentation tool for improving the performance of plant disease diagnosis. Due to its own attention mechanism, our model can transform only relevant areas from images with a variety of backgrounds, thus enriching the versatility of the training images. Experiments with five-class cucumber disease classification show that data augmentation with vanilla CycleGAN cannot help to improve the generalization, i.e., disease diagnostic performance increased by only 0.7% from the baseline. In contrast, LeafGAN boosted the diagnostic performance by 7.4%. We also visually confirmed that the images generated by our LeafGAN were of much higher quality and more convincing than those generated by vanilla CycleGAN. The code is available publicly at https://github.com/IyatomiLab/LeafGAN.

Journal ArticleDOI
TL;DR: An optimal allocation model of VMs is formulated, and an improved differential evolution (IDE) method is developed to solve this optimization problem given a batch of user tasks; the experimental results show that the proposed method outperforms the compared ones, and its VM allocation results achieve the highest satisfaction of both users and providers.
Abstract: A cloud computing paradigm has quickly developed and been applied widely for more than ten years. In a cloud data center, cloud service providers offer many kinds of cloud services, such as virtual machines (VMs), to users. How to achieve the optimized allocation of VMs for users to satisfy the requirements of both users and providers is an important problem. To make full use of VMs for providers and ensure low makespan of user tasks, we formulate an optimal allocation model of VMs and develop an improved differential evolution (IDE) method to solve this optimization problem, given a batch of user tasks. We compare the proposed method with several existing methods, such as round-robin (RR), min–min, and differential evolution. The experimental results show that it can more efficiently decrease the cost of cloud service providers while achieving lower makespan of user tasks than its three peers. Note to Practitioners —VM allocation is one of the challenging problems in cloud computing systems, especially when user task makespan and the cost of cloud service providers have to be considered together. We propose an IDE approach to solve this problem. To show its performance, this article compares it with the commonly used methods, i.e., RR and min–min, as well as the classic differential evolution method. A cloud simulation platform called CloudSim is used to test these methods. The experimental results show that the proposed method outperforms the compared ones, and its VM allocation results achieve the highest satisfaction of both users and providers. The proposed method is readily applicable to industrial cloud computing systems.
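For readers unfamiliar with the base algorithm the IDE improves upon, textbook differential evolution (DE/rand/1/bin) fits in a few lines. The toy continuous objective below stands in for the paper's discrete task-to-VM allocation model, and the parameter values are conventional defaults rather than the IDE's settings:

```python
import random

# DE/rand/1/bin: mutate with a scaled difference of two random members,
# binomially cross over with the current member, and keep the better one.

def de(obj, dim=4, pop_size=20, F=0.5, CR=0.9, gens=150, seed=3):
    random.seed(seed)
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = random.randrange(dim)     # force at least one mutated gene
            trial = [a[j] + F * (b[j] - c[j])
                     if (random.random() < CR or j == jrand) else pop[i][j]
                     for j in range(dim)]
            if obj(trial) <= obj(pop[i]):     # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=obj)

sphere = lambda x: sum(v * v for v in x)
best = de(sphere)
```

The IDE's improvements (and the encoding of allocation decisions, makespan, and provider cost into `obj`) sit on top of exactly this loop.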

Journal ArticleDOI
TL;DR: A GraphSLAM-based approach is suggested to automate the signal map construction process, significantly reducing the survey overhead and exploiting magnetic headings to improve the trajectory optimization performance.
Abstract: Opportunistic signals (e.g., WiFi, magnetic fields, and ambient light) have been extensively studied for low-cost indoor localization, especially via fingerprinting. We present an automatic site survey approach to build the signal maps in space-constrained environments (e.g., modern office buildings). The survey can be completed by a single smartphone user during normal walking, with little human intervention. Our approach follows the classical GraphSLAM framework: the front end constructs a pose graph by incorporating the relative motion constraints from the pedestrian dead-reckoning (PDR), the loop-closure constraints by magnetic sequence matching with the WiFi signal similarity validation, and the global heading constraints from the opportunistic magnetic heading measurements; and the back end generates a globally consistent trajectory via graph optimization to provide ground-truth locations for the collected signal fingerprints along the survey path. We then build the signal map (also known as fingerprint database) upon these location-labeled fingerprints by the Gaussian processes regression (GPR) for later online localization. Specifically, we exploit the pseudowall constraints from the GPR variance map of magnetic fields and the observations of ceiling lights to correct the PDR drifts with a particle filter. We evaluate our approach on several data sets collected from both the HKUST academic building and a shopping mall. We demonstrate the real-time localization on a smartphone in an office area, with 50th percentile accuracy of 2.30 m and 90th percentile accuracy of 3.41 m. Note to Practitioners —This paper was motivated by the problem of efficient signal map construction for fingerprinting-based localization on smartphones. The conventional manual site survey method, known to be time-consuming and labor-intensive, hinders the penetration of fingerprinting methods in practice. 
This paper suggests a GraphSLAM-based approach to automate this signal map construction process by reducing the survey overhead significantly. A surveyor is merely asked to walk through an indoor venue with an Android smartphone held in hand, with little human intervention. Meanwhile, opportunistic signals (e.g., WiFi and magnetic fields) are captured by smartphone sensors. We construct a GraphSLAM engine to first identify the measurement constraints from these signal observations and then recover the surveyor’s walking trajectory by graph optimization. We can generate signal maps using the captured signals alongside the recovered trajectory. In this paper, we propose a WiFi signal similarity validation method to reduce false-positive loop closures and exploit the magnetic headings to improve the trajectory optimization performance. In addition, we propose to use the generated magnetic field variance map and the lights distribution map for localization. The efficacy of the proposed site survey approach is proven through field experiments, and real-time localization is demonstrated on a smartphone using the generated signal maps. The localization experiment was conducted by a single user with the same Android smartphone that was used in the site survey. Therefore, the usability of the signal maps on other devices and the generality to other users have not yet been verified. We leave these issues for future work.
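The pose-graph back end can be illustrated with a one-dimensional toy: relative-motion (PDR) constraints accumulate drift, one loop-closure constraint says the surveyor returned to the start, and weighted linear least squares redistributes the error. Real systems optimize 2-D poses with headings, and the weights below stand in for measurement covariances; everything here is illustrative:

```python
import numpy as np

# Poses x0..x4 along the survey path; the true path is a closed loop, but
# the noisy odometry steps sum to 0.3 of drift.
n = 5
odom = [1.1, 1.1, 1.1, -3.0]

rows, rhs, w = [], [], []
def add(coeffs, value, weight):
    rows.append(coeffs)
    rhs.append(value)
    w.append(weight)

add([1.0, 0, 0, 0, 0], 0.0, 100.0)    # anchor: fix x0 near 0
for i, s in enumerate(odom):          # odometry: x_{i+1} - x_i = s
    c = [0.0] * n
    c[i], c[i + 1] = -1.0, 1.0
    add(c, s, 1.0)
add([-1.0, 0, 0, 0, 1.0], 0.0, 10.0)  # loop closure: x4 = x0

# Weighted least squares: scale each constraint row by sqrt(weight).
sw = np.sqrt(np.array(w))
A = np.array(rows) * sw[:, None]
b = np.array(rhs) * sw
x = np.linalg.lstsq(A, b, rcond=None)[0]
```

Dead reckoning alone ends 0.3 away from the start, while the optimized endpoint `x[4]` returns almost exactly to `x[0]`; the paper's WiFi similarity validation exists precisely to keep wrong loop-closure constraints out of this system.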

Journal ArticleDOI
TL;DR: A cloud model based on interval-valued intuitionistic uncertain linguistic variables is proposed, and a cloud-based Petri net model is built to assess the risk of subway fire accidents using fuzzy linguistic decision variables.
Abstract: This article proposes a risk assessment method based on an interval intuitionistic integrated cloud Petri net (IIICPN). The cloud model is widely used in data mining and knowledge discovery, especially in risk assessment problems with linguistic variables. However, the cloud models proposed in the literature do not express interval-valued intuitionistic linguistic information satisfactorily, and the reasoning methods based on these cloud models cannot perform risk assessment well. The work in this article includes the definition of the IIIC and the IIICPN, the method of converting interval-valued intuitionistic uncertain linguistic numbers into IIICs, and the reasoning method of the IIICPN. As proof, a subway fire accident model is adopted to confirm the feasibility of the proposed method, and comparison experiments between the IIICPN and both a general fuzzy Petri net and the trapezium cloud model are conducted to verify the superiority of the proposed model.

Journal ArticleDOI
TL;DR: A method is presented to improve 3-D printing by adding rotation during the manufacturing process, with a general volume decomposition algorithm that effectively reduces the area needing supporting structures.
Abstract: We present a method for fabricating general models with multi-directional 3-D printing systems by printing different model regions along different directions. The core of our method is a support-effective volume decomposition algorithm that minimizes the area of the regions with large overhangs. A beam-guided searching algorithm with manufacturing constraints determines the optimal volume decomposition, which is represented by a sequence of clipping planes. While current approaches require manually assembling separate components into a final model, our algorithm allows for directly printing the final model in a single pass. It can also be applied to models with loops and handles. A supplementary algorithm generates special supporting structures for models where supporting structures for large overhangs cannot be eliminated. We verify the effectiveness of our method using two hardware systems: a Cartesian-motion-based system and an angular-motion-based system. A variety of 3-D models have been successfully fabricated on these systems. Note to Practitioners —In conventional planar-layer-based 3-D printing systems, supporting structures need to be added at the bottom of large overhanging regions to prevent material collapse. Supporting structures used in single-material 3-D printing technologies have three major problems: being difficult to remove, introducing surface damage, and wasting material. This article introduces a method to improve 3-D printing by adding rotation during the manufacturing process. To keep the hardware system relatively inexpensive, the hardware, called a multi-directional 3-D printing system , only needs to provide unsynchronized rotations. In this system, models are subdivided into different regions, and then, the regions are printed in different directions. We develop a general volume decomposition algorithm for effectively reducing the area that needs supporting structures. 
When supporting structures cannot be eliminated, we provide a supplementary algorithm for generating supports compatible with multi-directional 3-D printing. Our method can speed up the process of 3-D printing by saving time in producing and removing supports.
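The quantity the decomposition minimizes can be shown with the standard overhang criterion: a facet needs support when its outward normal points downward past the self-supporting angle (about 45 degrees on many printers). The mesh, angle, and build directions below are illustrative, not the paper's exact formulation:

```python
import math

def overhang_area(triangles, build_dir, max_angle_deg=45.0):
    """Total area of faces whose normals exceed the overhang threshold."""
    thresh = -math.cos(math.radians(max_angle_deg))
    area = 0.0
    for a, b, c in triangles:
        u = [b[i] - a[i] for i in range(3)]
        v = [c[i] - a[i] for i in range(3)]
        nrm = [u[1] * v[2] - u[2] * v[1],   # cross product u x v = face normal
               u[2] * v[0] - u[0] * v[2],
               u[0] * v[1] - u[1] * v[0]]
        mag = math.sqrt(sum(x * x for x in nrm))
        if mag > 0 and sum(nrm[i] * build_dir[i] for i in range(3)) / mag < thresh:
            area += 0.5 * mag               # triangle area = |u x v| / 2
    return area

# A single downward-facing unit triangle: it needs support when printed
# along +z but none after reorienting the printing direction to -z.
tri = [((0, 0, 1), (0, 1, 1), (1, 0, 1))]
support_up = overhang_area(tri, (0, 0, 1))
support_down = overhang_area(tri, (0, 0, -1))
```

The beam-guided search in the paper effectively chooses clipping planes and per-region printing directions so that this kind of support-needing area, summed over all regions, is minimized.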

Journal ArticleDOI
TL;DR: An ontology building method tailored toward the needs of CPSs in the manufacturing domain is presented, together with a reusable set of ontology design patterns developed with this method, whose application is illustrated in the considered industrial environment.
Abstract: Cyber–physical systems (CPSs) in the manufacturing domain can be deployed to support monitoring and analysis of production systems of a factory in order to improve, support, or automate processes, such as maintenance or scheduling. When a network of CPSs is subject to frequent changes, the semantic interoperability between the CPSs is of special interest in order to avoid manual, tedious, and error-prone information model alignments at runtime. Ontologies are a suitable technology to enable semantic interoperability, as they allow the building of information models that link machine-readable meaning to information, thus enabling CPSs to mutually understand the shared information. The contribution of this article is twofold. First, we present an ontology building method that is tailored toward the needs of CPSs in the manufacturing domain. For this purpose, we introduce the requirements regarding this method and discuss related research concerning ontology building. The method itself is designed to begin with ontological requirements and to yield a formal ontology. As the reuse of ontologies and other information resources (IRs) is crucial to the success of ontology building projects, we put special emphasis on how to reuse IRs in the CPS domain. Second, we present a reusable set of ontology design patterns that have been developed with the aforementioned method in an industrial use case and illustrate their application in the considered industrial environment. As a postconference paper, this article extends the previously introduced method with a detailed industrial application. Note to Practitioners —With growing digitalization in industry, the exchange and use of manufacturing-related data are becoming increasingly important to improve, support, or automate processes. 
Thus, it is necessary to combine information from different data sources that have been designed by different vendors and may, therefore, be heterogeneous in structure and semantics. A system that plans a maintenance worker’s daily schedule, for instance, requires information about the status of machines, production plans, and inventory, which resides in other systems, such as programmable logic controllers (PLCs) or databases. When creating such information systems, accessing, searching, and understanding the different data sources is a time-intensive and error-prone procedure due to the heterogeneities of the data sources. Even worse, this procedure has to be repeated for every newly built system and for every newly introduced data source. To ease accessing, searching, and understanding these heterogeneous data sources, an ontology can be used to integrate all heterogeneous data sources into one schema. This article contributes a method for building such ontologies in the manufacturing domain. Furthermore, a set of ontology design patterns is presented, which can be reused when building ontologies for a domain.

Journal ArticleDOI
TL;DR: A new approach integrating multiobjective optimization and a weighting factor based on the shortage event types of each station is proposed to cope with bike repositioning in bike-sharing systems (BSSs) during peak hours.
Abstract: With the expansion of the sharing economy, growing urban traffic, and increasing environmental pollution, bike-sharing systems (BSSs) are developing rapidly all over the world. A major operational issue in a BSS is to reposition the bikes over time such that enough bikes and open parking slots are available to users. Especially during peak hours, it is essential to keep the BSS stable. To cope with this issue, this article proposes a new approach integrating multiobjective optimization and a weighting factor based on the shortage event types of each station. In addition, the multiobjective artificial bee colony algorithm is modified according to the features of this work to find optimal solutions. The proposed approach is applied to the real-life repositioning of a BSS during peak hours to verify its feasibility and effectiveness. Also, the algorithm is compared with other frequently used multiobjective algorithms. For the comparative study, the convergence metric and spacing are adopted to further measure algorithm performance. The scalability of the proposed approach in addressing multiobjective repositioning problems during peak hours is also verified by multiple trials. Note to Practitioners —This work deals with bike repositioning in bike-sharing systems (BSSs) during peak hours, which has major significance in the efficient operation of such systems. It builds a multiobjective optimization model and solves it through a modified multiobjective artificial bee colony algorithm. The existing single-objective optimization methods fail to solve the concerned problem. This work can find the optimal routes of the repositioning vehicles along with the number of desired parked bikes at the corresponding stations. The experimental results indicate that the proposed method is highly effective and can readily help decision-makers better manage a BSS of practical size.
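The multiobjective machinery the repositioning model rests on can be sketched with a Pareto dominance check between candidate repositioning plans scored on two objectives to be minimized (say, route cost and unmet demand). The plan scores below are made-up numbers; the paper's modified artificial bee colony evolves such candidates and keeps the nondominated ones:

```python
def dominates(a, b):
    """a dominates b: no worse in every objective, strictly better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep the candidates that no other candidate dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Hypothetical (route_cost, unmet_demand) scores for five repositioning plans.
plans = [(4.0, 9.0), (5.0, 5.0), (9.0, 2.0), (6.0, 6.0), (9.0, 9.0)]
front = pareto_front(plans)
```

The dominated plans are filtered out, leaving the trade-off set from which a decision-maker (guided by the station-level weighting factor in the paper) picks a plan; the convergence and spacing metrics mentioned above are computed over exactly such fronts.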

Journal ArticleDOI
TL;DR: A new model building methodology based on a class of Bayesian neural networks (NNs) that directly addresses the challenge of specifying shape deviation models for different computer-aided design products manufactured on an additive manufacturing system's constituent processes.
Abstract: A significant challenge in comprehensive geometric accuracy control of an additive manufacturing (AM) system is the specification of shape deviation models for different computer-aided design products manufactured on its constituent AM processes. Current deviation modeling techniques do not satisfactorily address this challenge because they can require substantial user inputs and efforts to implement. We present a new model building methodology based on a class of Bayesian neural networks (NNs) that directly addresses this challenge with much less effort. Our method enables automated deviation modeling of different shapes and AM processes and yields models with higher predictive accuracies than existing modeling methods on the same samples of manufactured products. A fundamental innovation in our methodology is the design of new and connectable NN structures that facilitate the leveraging of previously specified deviation models for adaptive model building of new shapes and AM processes. The power and broad scope of our method are demonstrated with several case studies on both in-plane and out-of-plane deviations for a wide variety of shapes manufactured under different stereolithography processes. Our Bayesian methodology for automated and comprehensive deviation modeling can ultimately help to advance flexible, efficient, and high-quality manufacturing in an AM system. Note to Practitioners —Additive manufacturing (AM) systems possess an intrinsic capability for one-of-a-kind manufacturing of a vast variety of shapes across a wide spectrum of constituent processes. Learning how to control geometric shape accuracy in a comprehensive manner for an AM system is vital to its operation. This task is challenging due to constraints on the number of test shapes that can be manufactured and user efforts that can be devoted to learning and predicting geometric errors of different sets of shapes and AM processes.
This article presents an automated machine learning methodology for comprehensive learning and prediction of geometric errors in an AM system based on a limited number of test shapes manufactured under different processes. Several case studies serve to validate the potential of our methodology to learn effective geometric accuracy control policies for general AM systems in practice.
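The core Bayesian idea above, reusing a previously learned deviation model as the prior when adapting to a new shape or process, can be illustrated with a deliberately tiny stand-in. This is not the paper's connectable NN architecture: it is a one-parameter conjugate Bayesian update for a linear deviation model, with all numbers assumed.

```python
# Toy illustration (not the paper's Bayesian NN): a conjugate Gaussian update for a
# one-parameter deviation model, deviation ~ theta * feature, with known measurement
# noise. A model learned on a previous shape/process supplies the prior N(theta0, var0);
# a few new test shapes pull the posterior toward the new process.

def posterior(theta0, var0, xs, ys, noise_var):
    """Gaussian posterior over theta given prior N(theta0, var0) and data (xs, ys)."""
    precision = 1.0 / var0 + sum(x * x for x in xs) / noise_var
    mean = (theta0 / var0 + sum(x * y for x, y in zip(xs, ys)) / noise_var) / precision
    return mean, 1.0 / precision
```

The posterior mean lands between the prior (the old model) and the least-squares fit to the new data, and the posterior variance shrinks, which is the sense in which limited new test shapes suffice once a prior model exists.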

Journal ArticleDOI
TL;DR: A mixed integer mathematical programming model is established, and a hybrid algorithm combining scatter search and mixed integer programming is proposed to solve the SCC-scheduling problem, giving decision makers a useful reference for determining an appropriate schedule in actual production.
Abstract: This article studies a steelmaking–continuous casting (SCC) scheduling problem by considering ladle allocation. It takes technological rules in steel manufacturing and ladle-related constraints into account. A scheduling problem is formulated to determine the equipment allocated to jobs, the production sequence for jobs processed by the same equipment, and the modification operations for empty ladles after they have served jobs. To ensure the fastest production and least energy consumption, we present a mixed integer mathematical programming model with the objectives of minimizing the maximum completion time, idle time penalties, and energy consumption penalties related to waiting time. To solve it, we develop a two-stage approach based on a combination of scatter search (SS) and mixed integer programming (MIP). The first stage applies an SS algorithm to determine the assignment and sequence variables for charges. For the obtained solution, we construct a temporal constraint network and establish an MIP model at the second stage. We apply ILOG CPLEX to solve the model and find the final solution. We analyze and compare the performance of the proposed approach with a hybrid method that combines a genetic algorithm with MIP on instances constructed from a real iron–steel plant. To further verify the effectiveness of the proposed algorithm, we compare its results with optimal solutions of the constraint-relaxed original problem. The experimental results show the effectiveness of the proposed approach in solving the SCC-scheduling problem. Note to Practitioners —This article deals with a scheduling problem arising from a steelmaking–continuous casting process in steel manufacturing. It integrates ladle allocation into the scheduling problem to reduce the energy consumption as much as possible. In the existing work, scheduling and ladle allocation are handled separately, and the resulting solutions tend to cause much energy waste and mismatched plans.
This article takes complex technological constraints into full account and minimizes the maximum job completion time, idle time of equipment, and waiting time of jobs. It establishes a mixed integer mathematical model and proposes a hybrid algorithm that combines scatter search and mixed integer programming to solve it. The extensive results demonstrate that the proposed approach can effectively solve the studied scheduling problem. The obtained solution gives decision makers a useful reference for determining an appropriate schedule in actual production.
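The first-stage search described above can be sketched at a toy scale: a scatter-search-style loop that maintains a small reference set of charge sequences, combines pairs, and keeps the best. The objective below is a stand-in weighted sum of makespan and idle time; the real constraints, the temporal network, and the second-stage MIP are not reproduced, and all data are assumptions.

```python
import random

# Minimal scatter-search-style sketch for sequencing charges on one machine.
# cost() is an illustrative proxy (makespan plus an idle-time penalty), not the
# paper's mixed integer model.

def cost(seq, proc, ready):
    """Makespan plus 0.5 * idle time, where a charge cannot start before ready[j]."""
    t, idle = 0, 0
    for j in seq:
        start = max(t, ready[j])
        idle += start - t
        t = start + proc[j]
    return t + 0.5 * idle

def combine(a, b):
    """Order-preserving combination of two parent sequences."""
    head = a[:len(a) // 2]
    return head + [j for j in b if j not in head]

def scatter_search(proc, ready, iters=200, ref_size=5, seed=0):
    rng = random.Random(seed)
    jobs = list(range(len(proc)))
    # Diversification: many random sequences, keep the best as the reference set.
    ref = sorted((rng.sample(jobs, len(jobs)) for _ in range(ref_size * 4)),
                 key=lambda s: cost(s, proc, ready))[:ref_size]
    for _ in range(iters):
        a, b = rng.sample(ref, 2)
        child = combine(a, b)
        i, k = rng.sample(jobs, 2)          # simple swap as local improvement
        child[i], child[k] = child[k], child[i]
        ref = sorted(ref + [child], key=lambda s: cost(s, proc, ready))[:ref_size]
    return ref[0], cost(ref[0], proc, ready)
```

In the paper's two-stage scheme, each first-stage solution like `ref[0]` would then fix the assignment/sequence variables, and the remaining timing decisions would be resolved exactly by an MIP solver.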

Journal ArticleDOI
TL;DR: A multi-level simultaneous minimization (MLSM) scheme is proposed and investigated to remedy the joint-angle drift and non-zero final joint-velocity phenomena as well as to prevent the occurrence of high joint variables of redundant robot manipulators.
Abstract: In this paper, a multi-level simultaneous minimization (MLSM) scheme is proposed and investigated to remedy the joint-angle drift (JAD) and non-zero final joint-velocity (NZFJV) phenomena as well as to prevent the occurrence of high joint variables of redundant robot manipulators. The proposed scheme is designed across multiple levels and finally resolved at the jerk level for a jerk-bounded robot motion, which is desirable for engineering applications. More importantly, the correctness of the proposed MLSM scheme is guaranteed by the corresponding theorems. Then, the MLSM scheme is formulated as a dynamical quadratic program (DQP) that is solved by a piecewise linear projection equation neural network (PLPENN). Furthermore, path-tracking simulations based on a 6-degrees-of-freedom (DOF) robot manipulator substantiate the effectiveness and advantage of the MLSM scheme. Comparisons between the MLSM scheme and the minimum jerk norm (MJN) scheme illustrate that the proposed scheme is superior and more applicable. Finally, additional validation on the KUKA robot in the virtual robot experimentation platform (V-REP) is provided for reproducible engineering applications by researchers and practitioners. Note to Practitioners —This paper is motivated by the inverse kinematics problem of jerk-bounded redundant robot manipulators in practical applications. Note that the joint-angle drift (JAD) and non-zero final joint-velocity (NZFJV) phenomena, as well as the occurrence of high joint variables, are always encountered in the traditional norm-based scheme for robot manipulators, which makes it unsuitable for the real-time control of robots. Besides, it is desirable to resolve the robot redundancy at the jerk level for industrial robots in engineering. Therefore, an effective, flexible, and stable solution for such robot manipulators is significant for practitioners.
This paper proposes a multi-level simultaneous minimization (MLSM) scheme for practitioners interested in robot kinematics to remedy the JAD and NZFJV phenomena as well as to prevent the occurrence of high joint variables of redundant robot manipulators. Unlike traditional single-level schemes, such as the minimum jerk norm (MJN) scheme, the proposed scheme is designed within multiple levels with distinct physical natures and finally resolved at the jerk level to achieve a desirable performance for jerk-bounded redundant robot manipulators. Besides, to aid practitioners' understanding, the corresponding block diagram and principle interpretation of the MLSM scheme are presented. Simulation studies and comparisons are designed and conducted on a 6-degrees-of-freedom (DOF) robot manipulator to substantiate the effectiveness and superiority of the proposed scheme. Extensive tests with different weighting factors fully verify the flexibility and stable performance of the proposed MLSM scheme. For reproducible engineering applications by researchers and practitioners, additional validation on the KUKA robot in the virtual robot experimentation platform (V-REP) is further presented.
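The fixed-point idea behind a piecewise-linear projection equation can be shown numerically at a toy scale: a box-constrained quadratic program min 0.5·x'Wx + c'x with lo ≤ x ≤ hi is solved by iterating x ← P(x − α(Wx + c)), where P clips to the box. This discrete-time iteration is a simplified stand-in for the PLPENN dynamics, not the paper's network; W, c, and the bounds below are illustrative (W playing the role of a weighted norm, the bounds the role of joint jerk limits).

```python
# Simplified sketch of solving a box-constrained QP by iterating the projection
# equation x <- P(x - alpha * (W x + c)). Assumes W is symmetric positive definite
# and alpha is small enough for convergence; all numbers are illustrative.

def solve_box_qp(W, c, lo, hi, alpha=0.1, iters=2000):
    n = len(c)
    x = [0.0] * n
    for _ in range(iters):
        grad = [sum(W[i][j] * x[j] for j in range(n)) + c[i] for i in range(n)]
        # Projection P: clip each component to its box [lo[i], hi[i]].
        x = [min(hi[i], max(lo[i], x[i] - alpha * grad[i])) for i in range(n)]
    return x
```

A solution x satisfying x = P(x − α(Wx + c)) is exactly a KKT point of the bounded QP, which is why a network implementing this piecewise-linear equation can resolve the redundancy in real time.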

Journal ArticleDOI
TL;DR: The Kalman filter is employed in this paper to progressively estimate the system and sensor states; the study concludes that disregarding existing sensor degradation will significantly increase the maintenance cost, and that the negative impact of sensor degradation can be diminished via proper inspection and filtering methods.
Abstract: This paper proposes a condition-based maintenance (CBM) policy for a deteriorating system whose state is monitored by a degraded sensor. In the CBM literature, it is commonly assumed that inspection of the system state is perfect or subject only to measurement error. The health condition of the sensor, which is dedicated to inspecting the system state, is completely ignored during system operation. However, due to the varying operating environment and aging effects, the sensor itself undergoes a degradation process, and its performance deteriorates with time. In the presence of sensor degradation, the Kalman filter is employed in this paper to progressively estimate the system and sensor states. Since the estimation of the system state is subject to uncertainty, maintenance based solely on the estimated state will lead to a suboptimal solution. Instead, predictive reliability is used as a criterion for maintenance decision-making, which is able to incorporate the effect of estimation uncertainty. Preventive replacement is implemented when the estimated system reliability at inspection hits a specific threshold, which is obtained by minimizing the long-run maintenance cost rate. An example of a wastewater treatment plant is used to illustrate the effectiveness of the proposed maintenance policy. Our research concludes that: 1) disregarding sensor degradation when it exists will significantly increase the maintenance cost and 2) the negative impact of sensor degradation can be diminished via proper inspection and filtering methods. Note to Practitioners —This paper was motivated by the observation of sensor degradation in wastewater treatment plants, but the developed approach also applies to other systems such as manufacturing systems, chemical plants, and pharmaceutical factories, where sensors are dedicated to long-time operation in a harsh environment.
This paper investigates the impact of sensor degradation on CBM and suggests that the effect of sensor degradation should be carefully addressed while making maintenance decisions. Otherwise, it will lead to a suboptimal maintenance decision and increase the operating cost. An optimal maintenance decision, which contains the optimal inspection interval and reliability threshold, is achieved via minimizing the long-run cost rate. In the presence of measurement noise and intrinsic uncertainty from degradation, a stochastic filtering approach is employed to estimate the system and sensor states. Based on the estimated states and the calculated reliability, a dynamic maintenance decision is obtained at each inspection. This paper can be further extended by considering non-Gaussian noise and alternative degradation processes.
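The filtering step described above can be sketched with a deliberately simplified scalar model: a Kalman filter tracking a linearly drifting degradation state, where sensor degradation is represented, as one common simplifying assumption, by a measurement-noise variance that grows over time instead of an explicit sensor state. The model, numbers, and noise-growth law are illustrative, not the paper's formulation.

```python
# Simplified scalar Kalman filter: the system degradation state drifts linearly,
# and the degrading sensor is modeled (illustrative assumption) by a measurement
# noise variance r that increases with each inspection k, so later measurements
# are trusted less (smaller Kalman gain).

def kalman_track(ys, drift, q, r0, r_growth, x0=0.0, p0=1.0):
    """Return filtered state estimates for the measurement sequence ys."""
    x, p, estimates = x0, p0, []
    for k, y in enumerate(ys):
        # Predict: deterministic drift plus process-noise growth q.
        x, p = x + drift, p + q
        # Update: measurement noise grows as the sensor degrades.
        r = r0 * (1.0 + r_growth * k)
        gain = p / (p + r)
        x = x + gain * (y - x)
        p = (1.0 - gain) * p
        estimates.append(x)
    return estimates
```

A reliability-based replacement rule would then use both the estimate x and its uncertainty p at each inspection, triggering preventive replacement when the predicted reliability crosses the threshold optimized from the long-run cost rate.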