
Showing papers in "IEEE Transactions on Automation Science and Engineering in 2022"


Journal ArticleDOI
TL;DR: In this article, a stochastic multi-product multi-objective disassembly-sequencing-line-balancing problem aiming at maximizing disassembly profit and minimizing energy consumption and carbon emission is proposed.
Abstract: Recycling, reusing, and remanufacturing of end-of-life (EOL) products have been receiving increasing attention. They effectively preserve the ecological environment and promote economic development. Disassembly sequencing and line balancing problems are indispensable to recycling and remanufacturing EOL products. A set of subassemblies can be obtained by disassembling an EOL product. In practice, many different types of EOL products can be disassembled on a disassembly line, and a high level of uncertainty exists in their disassembly processes. Hence, this paper proposes a stochastic multi-product multi-objective disassembly-sequencing-line-balancing problem aiming at maximizing disassembly profit and minimizing energy consumption and carbon emission. A hybrid of simulated annealing and a multi-objective discrete grey wolf optimizer with a stochastic simulation approach is proposed. Furthermore, real cases are used to examine the efficiency and feasibility of the proposed algorithm. Comparisons with multi-objective discrete grey wolf optimization, non-dominated sorting genetic algorithm II, a multi-population multi-objective evolutionary algorithm, and a multi-objective evolutionary algorithm demonstrate the superiority of the proposed approach. Note to Practitioners—Disassembly line balancing has been widely recognized as the most ecological way of retrieving EOL products. Through in-depth research, we present a stochastic multi-product multi-objective disassembly-sequencing-line-balancing problem. Furthermore, we consider that product uncertainty might cause disassembly failure. To solve this problem effectively and quickly, we combine the simulated annealing algorithm with the grey wolf optimizer. The results show that the algorithm can effectively solve the proposed problem. The disassembly scheme provided by the obtained solution set offers a variety of options for decision-makers.

49 citations
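The hybrid's simulated-annealing acceptance step can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: the Pareto-dominance test and the summed-worsening scalarization used for the Boltzmann probability are assumptions made for demonstration (all objectives are written as minimization, so profit would enter negated).

```python
import math
import random

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def sa_accept(current, candidate, temperature, rng=random.random):
    """Illustrative acceptance rule: always accept a dominating candidate,
    otherwise accept with a Boltzmann probability on the summed worsening."""
    if dominates(candidate, current):
        return True
    delta = sum(max(c - k, 0.0) for c, k in zip(candidate, current))
    return rng() < math.exp(-delta / max(temperature, 1e-9))
```

In the actual algorithm, candidates would come from discrete grey-wolf moves and objective values from stochastic simulation; here they are plain tuples.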


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a novel effective optimization framework for the reconfiguration problem of modern power distribution networks (DNs), where the objective is minimizing the overall power losses while ensuring an enhanced DN voltage profile.
Abstract: Improving the efficiency and sustainability of distribution networks (DNs) is nowadays a challenging objective both for large networks and for microgrids connected to the main grid. In this context, a crucial role is played by the so-called network reconfiguration problem, which aims at determining the optimal DN topology. This process is enabled by properly changing the closed/open status of all available branch switches to form an admissible graph connecting network buses. The reconfiguration problem is typically modeled as an NP-hard combinatorial problem with a complex search space due to current and voltage constraints. Even though several metaheuristic algorithms have been used to obtain—without guarantees—the global optimal solution, searching for near-optimal solutions in reasonable time is still a research challenge for the DN reconfiguration problem. Facing this issue, this article proposes a novel, effective optimization framework for the reconfiguration problem of modern DNs. The objective of reconfiguration is minimizing the overall power losses while ensuring an enhanced DN voltage profile. A multiple-step resolution procedure is then presented, whose core is the recent Harris hawks optimization (HHO) algorithm. The optimizer is accompanied by appropriate preprocessing (i.e., search space preparation and initial feasible population generation) and postprocessing (i.e., solution refinement) phases aimed at improving the search for near-optimal configurations. The effectiveness of the method is validated through numerical experiments on the IEEE 33-bus and IEEE 85-bus systems and on an artificial 295-bus system under distributed generation and load variation. Finally, the performance of the proposed HHO-based approach is compared with two related metaheuristic techniques, namely the particle swarm optimization algorithm and the cuckoo search algorithm.
The results show that HHO outperforms the other two optimizers in terms of minimized power losses, enhanced voltage profile, and running time. Note to Practitioners—This article is motivated by the emerging need for effective network reconfiguration approaches in modern power distribution systems, including microgrids. The proposed metaheuristic optimization strategy allows the decision maker (i.e., the distribution system operator) to determine in reasonable time the optimal network topology, minimizing the overall power losses while considering the system operational requirements. The proposed optimization framework is generic and flexible, as it can be applied to different architectures of both large distribution networks (DNs) and microgrids, considering various types of system objectives and technical constraints. The presented strategy can be implemented in any decision support system or engineering software for power grids, providing decision makers with an effective information and communication technology tool for optimally planning the energy efficiency and environmental sustainability of DNs.

36 citations
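A feasibility test that any DN reconfiguration search needs is checking that the closed branch switches form an admissible radial topology, i.e., a spanning tree of the buses. A minimal union-find sketch of that check, not taken from the paper:

```python
def is_radial(num_buses, closed_branches):
    """Check that the closed branches form a spanning tree (radial topology):
    exactly num_buses - 1 closed branches, no loops, all buses connected."""
    if len(closed_branches) != num_buses - 1:
        return False
    parent = list(range(num_buses))

    def find(x):
        # Path-halving union-find root lookup.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for u, v in closed_branches:
        ru, rv = find(u), find(v)
        if ru == rv:  # closing this branch would create a loop
            return False
        parent[ru] = rv
    return True
```

With num_buses - 1 loop-free branches, connectivity follows automatically, so one pass suffices.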


Journal ArticleDOI
TL;DR: In this article, the authors proposed a robust model predictive control (RMPC) approach to minimize the total economic cost, while satisfying comfort and energy requests of the final users, with the aim of accounting for data uncertainties in the microgrid.
Abstract: This paper focuses on the control of microgrids where both gas and electricity are provided to the final customer, i.e., multi-carrier microgrids. Hence, these microgrids include thermal and electrical loads, renewable energy sources, energy storage systems, heat pumps, and combined heat and power units. The parameters characterizing the multi-carrier microgrid are subject to several disturbances, such as fluctuations in the provision of renewable energy, variability in the electrical and thermal demand, and uncertainties in electricity and gas pricing. With the aim of accounting for the data uncertainties in the microgrid, we propose a robust model predictive control (RMPC) approach whose goal is to minimize the total economic cost while satisfying the comfort and energy requests of the final users. In the related literature, various RMPC approaches have been proposed, focusing either on electrical or on thermal microgrids; only a few contributions have addressed the robust control of multi-carrier microgrids. Consequently, we propose an innovative RMPC algorithm that employs an uncertainty-set-based method and that can provide better performance than deterministic model predictive controllers applied to multi-carrier microgrids. With the aim of mitigating the conservativeness of the approach, we define suitable robustness factors and investigate the effects of these factors on the robustness of the solution against variations of the uncertain parameters. We show the effectiveness of the proposed RMPC approach by applying it to a realistic residential multi-carrier microgrid and comparing the obtained results with those of a baseline robust method. Note to Practitioners—This work is motivated by the emerging need for effective energy management approaches in multi-carrier microgrids. The inherent difficulty of simultaneously scheduling the operations of various energy infrastructures (e.g., electricity, natural gas) is exacerbated by the inevitable presence of uncertainties that affect the interdependent dynamics of different energy resources and equipment. The proposed robust MPC-based control strategy allows the energy manager to effectively determine an optimal energy scheduling of multi-faceted system components, making a tradeoff between performance and protection against data uncertainty. The presented strategy is comprehensive and generic, as it can be applied to different microgrid frameworks integrating various types of system components and sources of uncertainty, while at the same time being implementable in any energy management system.

33 citations
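The core min-max logic of uncertainty-set-based robust scheduling can be illustrated on a toy discrete case. Everything below (the candidate schedules, the two-point price uncertainty set, and the cost function) is invented for demonstration and is far simpler than the paper's RMPC formulation:

```python
def robust_schedule(candidates, scenarios, cost):
    """Pick the candidate schedule minimizing the worst-case cost over an
    uncertainty set of scenarios (min-max selection)."""
    return min(candidates, key=lambda s: max(cost(s, w) for w in scenarios))

# Toy example: buy q units of energy now at unit price 1, and the remaining
# (10 - q) units later at an uncertain price w from a two-point set.
def cost(q, w):
    return q * 1.0 + (10 - q) * w

best = robust_schedule([0, 5, 10], [0.5, 2.0], cost)
```

Here buying everything up front hedges against the high-price scenario, which is exactly the conservatism that the paper's robustness factors are introduced to tune.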


Journal ArticleDOI
TL;DR: LeafGAN, as discussed by the authors, is a novel image-to-image translation system with its own attention mechanism that generates a wide variety of diseased images via transformation from healthy images, serving as a data augmentation tool for improving the performance of plant disease diagnosis.
Abstract: Many applications for the automated diagnosis of plant disease have been developed based on the success of deep learning techniques. However, these applications often suffer from overfitting, and the diagnostic performance drops drastically when they are used on test data sets from new environments. In this article, we propose LeafGAN, a novel image-to-image translation system with its own attention mechanism. LeafGAN generates a wide variety of diseased images via transformation from healthy images, as a data augmentation tool for improving the performance of plant disease diagnosis. Thanks to its attention mechanism, our model can transform only the relevant areas of images with a variety of backgrounds, thus enriching the versatility of the training images. Experiments with five-class cucumber disease classification show that data augmentation with vanilla CycleGAN does not help improve generalization, i.e., disease diagnostic performance increased by only 0.7% from the baseline. In contrast, LeafGAN boosted the diagnostic performance by 7.4%. We also visually confirmed that the images generated by our LeafGAN were of much higher quality and more convincing than those generated by vanilla CycleGAN. The code is available publicly at https://github.com/IyatomiLab/LeafGAN. Note to Practitioners—Automated plant disease diagnosis systems play an important role in the agricultural automation field. Building a practical image-based automatic plant diagnosis system requires collecting a wide variety of disease images with reliable label information, which is quite labor-intensive. Conventional systems have reported relatively high diagnosis performance, but most of their scores were largely biased due to the “latent similarity” between training and test images, and their true diagnosis capabilities were much lower than claimed. To address this issue, we propose LeafGAN, which generates countless diverse and high-quality training images; it works as an efficient data augmentation tool for the diagnosis classifier. The generated images can be used as useful resources for improving the performance of cucumber disease diagnosis systems.

33 citations
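The attention-guided translation can be pictured as a mask-weighted blend between the source image and a fully translated one, so that only relevant foreground regions change while backgrounds are preserved. A hypothetical NumPy sketch of the blending step only, not of LeafGAN's actual networks:

```python
import numpy as np

def masked_translate(image, translated, attention_mask):
    """Blend a translated (e.g., diseased-style) image into the original using
    an attention mask in [0, 1], so only masked regions are transformed.
    image, translated: (H, W, C) arrays; attention_mask: (H, W) array."""
    m = attention_mask[..., None]  # broadcast the mask over channels
    return m * translated + (1.0 - m) * image
```

In LeafGAN the mask itself is produced by a learned segmentation-like module; here it is simply an input.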


Journal ArticleDOI
TL;DR: An iterated greedy algorithm (IGA) is a simple and powerful heuristic algorithm that is widely used to solve flow-shop scheduling problems (FSPs), an important branch of production scheduling problems, as mentioned in this paper.
Abstract: An iterated greedy algorithm (IGA) is a simple and powerful heuristic algorithm. It is widely used to solve flow-shop scheduling problems (FSPs), an important branch of production scheduling problems. IGA was first developed to solve an FSP in 2007. Since then, various FSPs have been tackled by IGA-based methods, including the basic IGA, its variants, and hybrid algorithms with IGA integrated. Up until now, over 100 articles related to this field have been published. However, to the best of our knowledge, there is no existing tutorial or review paper on IGA. Thus, we focus on FSPs and provide a tutorial and comprehensive literature review of IGA-based methods. First, we introduce the framework of the basic IGA and give an example to clearly show its procedure. To help researchers and engineers learn and apply IGA to their FSPs, we provide an open platform to collect and share related materials. Then, we classify the solved FSPs according to their scheduling scenarios, objective functions, and constraints. Next, we classify and introduce the specific methods and strategies used in each phase of IGA for FSPs. We also summarize IGA variants and hybrid algorithms with IGA integrated. Finally, we discuss the current IGA-based methods and already-solved FSP instances, as well as some important future research directions arising from their deficiencies and open issues. Note to Practitioners—Many practical scheduling problems can be transformed into flow-shop scheduling problems (FSPs), most of which are NP-hard. In order to solve them in an industrial setting, designing effective heuristics is important and practically useful, and has thus attracted much attention from both researchers and engineers. As an easy and high-performance heuristic, the iterated greedy algorithm (IGA) is widely used and adapted to solve numerous FSPs. Its simple framework makes it easy for practitioners to implement, and its high performance implies great potential for solving industrial scheduling problems. In this work, we aim to give practitioners a comprehensive overview of IGA and help them apply IGA to their particular industrial scheduling problems. We review the papers that solve FSPs with IGA-based methods, including the basic IGA, its variants, and hybrid algorithms with IGA integrated. First, we provide practitioners with a tutorial on IGA, where an example of solving an FSP is introduced and an open platform is constructed. The platform collects and shares related materials, e.g., open-source code, benchmarks, and website links of important papers. Then, we introduce various FSPs and the specific designs of IGA-based methods. Finally, we discuss the current research and point out future research issues.

32 citations
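The basic IGA loop described in the abstract (destruction of d jobs, greedy best-position reinsertion, acceptance of non-worsening sequences) is easy to write down for the permutation flow shop. A minimal Python sketch; the 3-job, 2-machine instance in the test, the value of d, and the iteration budget are arbitrary choices, and real IGAs typically add an SA-style acceptance criterion and a local search:

```python
import random

def makespan(seq, p):
    """Completion time of the last job on the last machine; p[job][machine]."""
    m = len(p[0])
    c = [0.0] * m
    for j in seq:
        c[0] += p[j][0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j][k]
    return c[-1]

def iterated_greedy(p, d=2, iters=200, seed=0):
    """Basic IG: repeatedly remove d random jobs, greedily reinsert each at
    its best position, and keep the sequence with the smallest makespan."""
    rng = random.Random(seed)
    best = list(range(len(p)))
    for _ in range(iters):
        partial = best[:]
        removed = [partial.pop(rng.randrange(len(partial))) for _ in range(d)]
        for j in removed:  # greedy reconstruction by best insertion
            _, i = min((makespan(partial[:i] + [j] + partial[i:], p), i)
                       for i in range(len(partial) + 1))
            partial.insert(i, j)
        if makespan(partial, p) <= makespan(best, p):
            best = partial
    return best, makespan(best, p)
```

On the tiny 2-machine instance used in the test, the loop recovers the optimum that Johnson's rule would give.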


Journal ArticleDOI
TL;DR: In this article , a hybrid iterated greedy and simulated annealing algorithm is proposed to solve the flexible job shop scheduling problem with crane transportation processes (CFJSP), where two objectives are simultaneously considered, namely, the minimization of the maximum completion time and the energy consumptions during machine processing and crane transportation.
Abstract: In this study, we propose an efficient optimization algorithm that is a hybrid of the iterated greedy and simulated annealing algorithms (hereinafter referred to as IGSA) to solve the flexible job shop scheduling problem with crane transportation processes (CFJSP). Two objectives are simultaneously considered, namely, the minimization of the maximum completion time and of the energy consumption during machine processing and crane transportation. Differently from the methods in the literature, crane lift operations are investigated for the first time, to account for the processing time and energy consumption involved in the crane lift process. The IGSA algorithm is then developed to solve the CFJSPs considered. In the proposed IGSA algorithm, each solution is first represented by a 2-D vector, where one vector represents the scheduling sequence and the other shows the assignment of machines. Subsequently, an improved construction heuristic considering the problem features is proposed, which can decrease the number of replicated insertion positions for the destruction operations. Furthermore, to balance the exploration abilities and time complexity of the proposed algorithm, a problem-specific exploration heuristic is developed. Finally, a set of randomly generated instances based on realistic industrial processes is tested. Comprehensive computational comparisons and statistical analyses show that the proposed algorithm performs favorably against several efficient algorithms. Note to Practitioners—The flexible job shop scheduling problem (FJSP) can be extended and applied to many types of practical manufacturing processes. Many realistic production processes must consider transportation procedures, especially given limited crane resources and the energy consumed during transportation operations. This study models a realistic production process as an FJSP with crane transportation, wherein two objectives, namely, the makespan and energy consumption, are to be simultaneously minimized. The study is the first to consider the height of the processing machines, and therefore the crane lift operations and lift energy consumption are investigated. A hybrid iterated greedy algorithm is proposed for solving the problem considered, and several problem-specific heuristics are embedded to balance the exploration and exploitation abilities of the proposed algorithm. In addition, the proposed algorithm can be generalized to solve other types of scheduling problems with crane transportation.

30 citations
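The 2-D vector solution representation mentioned above can be decoded into a schedule with a simple list-scheduling pass. This sketch is illustrative and deliberately omits the paper's crane transportation and lift times; `sequence`, `assignment`, and `proc_time` are hypothetical structures chosen for the demonstration:

```python
def decode(sequence, assignment, proc_time):
    """Decode a 2-D vector solution for a flexible job shop:
    `sequence` lists jobs (one entry per operation, in processing order),
    `assignment[job][op]` picks a machine, and proc_time[job][op][machine]
    gives the duration. Returns the makespan (no transport times)."""
    job_ready, machine_ready, op_index = {}, {}, {}
    makespan = 0.0
    for job in sequence:
        op = op_index.get(job, 0)
        mach = assignment[job][op]
        # An operation starts when both its job and its machine are free.
        start = max(job_ready.get(job, 0.0), machine_ready.get(mach, 0.0))
        finish = start + proc_time[job][op][mach]
        job_ready[job] = machine_ready[mach] = finish
        op_index[job] = op + 1
        makespan = max(makespan, finish)
    return makespan
```

The CFJSP decoder in the paper would additionally insert crane moves (and their energy) between consecutive operations of a job.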


Journal ArticleDOI
TL;DR: A review of the most recent and relevant contributions to the related literature, focusing on the control perspective, is presented in this paper. It provides researchers and practitioners with a reference source in the field that can help them design and develop suitable solutions to control problems in safe, ergonomic, and efficient collaborative robotics.
Abstract: The fourth industrial revolution, also known as Industry 4.0, is reshaping the way individuals live and work while providing a substantial influence on the manufacturing scenario. The key enabling technology that has made Industry 4.0 a concrete reality is without doubt collaborative robotics, which is also evolving as a fundamental pillar of the next revolution, the so-called Industry 5.0. The improvement of employees’ safety and well-being, together with the increase of profitability and productivity, are indeed the main goals of human-robot collaboration (HRC) in the industrial setting. The robotic controller design and the analysis of existing decision and control techniques are crucially needed to develop innovative models and state-of-the-art methodologies for a safe, ergonomic, and efficient HRC. To this aim, this paper presents an accurate review of the most recent and relevant contributions to the related literature, focusing on the control perspective. All the surveyed works are carefully selected and categorized by target (i.e., safety, ergonomics, and efficiency), and then by problem and type of control, in presence or absence of optimization. Finally, the discussion of the achieved results and the analysis of the emerging challenges in this research field are reported, highlighting the identified gaps and the promising future developments in the context of the digital evolution. Note to Practitioners—The design and development of manufacturing systems are experiencing substantial changes towards full automation. This ongoing challenge is being tackled by academia and industrial practitioners with the adoption of collaborative robots, where the skills and peculiarities of humans (e.g., intelligence, creativity, adaptability, etc.) and robots (e.g., flexibility, pinpoint accuracy, tirelessness, etc.) are combined to better perform a variety of tasks. 
Nevertheless, due to their different characteristics, there is an emerging need for designing suitable decision and control techniques to ensure a safe and ergonomic HRC while keeping the highest level of productivity. Against this background, the aim of this paper is to provide researchers and practitioners with a reference source in the related field, which can help them design and develop suitable solutions to control problems in safe, ergonomic, and efficient collaborative robotics.

29 citations


Journal ArticleDOI
TL;DR: In this paper, Wang et al. proposed a novel fault diagnosis model that combines binarized DNNs (BDNNs) with improved random forests (RFs), designed to reduce the model runtime without losing feature-extraction accuracy.
Abstract: Recently, deep neural network (DNN) models have worked incredibly well, and edge computing has achieved great success in real-world scenarios, such as fault diagnosis for large-scale rotational machinery. However, DNN training takes a long time due to its complex calculation, which makes it difficult to optimize and retrain models. To address this issue, this work proposes a novel fault diagnosis model that combines binarized DNNs (BDNNs) with improved random forests (RFs). First, a BDNN-based feature extraction method with binary weights and activations in the training process is designed to reduce the model runtime without losing feature-extraction accuracy. Its generated features are used to train an RF-based fault classifier to relieve the information loss caused by binarization. Second, considering the possible reduction in classification accuracy resulting from very similar binarized features of two instances with different classes, we replace the Gini index with ReliefF as the attribute evaluation measure in training RFs, to further enhance the separability of the fault features extracted by the BDNN and accordingly improve fault identification accuracy. Third, an edge computing-based fault diagnosis mode is proposed to increase diagnostic efficiency, where our diagnosis model is deployed distributedly on a number of edge nodes close to the end rotational machines in distinct locations. Extensive experiments are conducted to validate the proposed method on data sets from rolling element bearings, and the results demonstrate that, in almost all cases, its diagnostic accuracy is competitive with state-of-the-art DNNs and even higher in some cases due to a form of regularization. Benefiting from the relatively low computing and storage requirements of BDNNs, the model is easy to deploy on edge nodes to realize real-time fault diagnosis concurrently. Note to Practitioners—Rotating machines, such as engines and motors, are the cornerstones of modern industry. Edge computing is an emerging computing paradigm in which computation is performed on the edges of networks rather than on the central cloud, thereby reducing system response time, transmission overhead, storage space, and the computation resources of the cloud. Motivated by the high computational demand of deploying DNN models, the lower computational complexity of running BDNN models, and the ease of deploying BDNNs at scale, an edge computing-based method for real-time fault diagnosis of rotating machines is proposed. First, we design a BDNN-based feature extractor to decrease the amount of computation and speed up the diagnosis process. Then, the resulting binary features are fed to train an RF-based classifier, where we use ReliefF instead of the Gini index when training the random forest model to further improve the proposed method’s diagnostic accuracy. Finally, a novel cloud-edge collaborative computing-based fault diagnostic mode is presented, where the model trained on the central cloud is deployed on edge computing devices distributed in large-scale scenarios to realize real-time fault diagnosis. Experimental results show that the proposed method maintains the desired accuracy while greatly enhancing diagnosis speed when deployed on edge nodes near the end physical machines. It is easily extended and used for fault detection in many industrial sectors.

25 citations
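The ReliefF idea of scoring attributes by nearest-hit/nearest-miss contrast can be shown with the plain two-class Relief variant (ReliefF generalizes this to k neighbors and multiple classes). A NumPy sketch for illustration, not the paper's code:

```python
import numpy as np

def relief_scores(X, y):
    """Plain Relief scoring for two classes: reward features that separate a
    sample from its nearest miss (different class) and penalize features
    that differ from its nearest hit (same class)."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y)
    n, d = X.shape
    w = np.zeros(d)
    for i in range(n):
        dist = np.abs(X - X[i]).sum(axis=1)  # L1 distances to all samples
        dist[i] = np.inf                      # exclude the sample itself
        hit = np.argmin(np.where(y == y[i], dist, np.inf))
        miss = np.argmin(np.where(y != y[i], dist, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n
```

Informative features get positive scores, while constant or class-irrelevant features score near zero, which is why swapping it in for the Gini index can help with near-identical binarized features.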


Journal ArticleDOI
TL;DR: In this paper, a variational autoencoder-based conditional Wasserstein GAN with gradient penalty (CWGAN-GP-VAE) is proposed to diagnose various faults for chillers.
Abstract: Artificial intelligence (AI)-enhanced automated fault diagnosis (AFD) has become increasingly popular for chiller fault diagnosis, with promising classification performance. In practice, a sufficient number of fault samples is required by AI methods in the training phase. However, faulty training samples are generally much more difficult to collect than normal training samples. Data augmentation is introduced in these scenarios to enhance the training data set with synthetic data. In this study, a variational autoencoder-based conditional Wasserstein GAN with gradient penalty (CWGAN-GP-VAE) is proposed to diagnose various faults for chillers. A detailed comparative study has been conducted with real-world fault data samples to verify the effectiveness and robustness of the proposed methodology. Note to Practitioners—This work addresses the fact that faulty training samples are usually much harder to collect than normal training samples in practical chiller automated fault diagnosis (AFD). Modern supervised-learning chiller AFD relies on a sufficient number of faulty training samples to train the classifier; when that number is insufficient, conventional AFD methods fail to work. This study proposes a variational autoencoder-based conditional Wasserstein GAN with gradient penalty (CWGAN-GP-VAE) framework for generating synthetic faulty training samples to enrich the training data set for machine learning-based AFD methods. The proposed algorithm has been carefully designed, implemented, and shown in practice to be more effective than existing methods in the literature.

24 citations


Journal ArticleDOI
TL;DR: In this article, a human motion intention prediction method based on an autoregressive (AR) model for teleoperation is developed, where the robot's motion trajectory can be updated in real time by updating the parameters of the AR model.
Abstract: In this work, a human motion intention prediction method based on an autoregressive (AR) model for teleoperation is developed. Based on this method, the robot’s motion trajectory can be updated in real time by updating the parameters of the AR model. In the teleoperated robot’s control loop, a virtual force model is defined to describe the interaction profile and to correct the robot’s motion trajectory in real time. The proposed human motion prediction algorithm acts as a feedforward model to update the robot’s motion and to revise this motion in the process of human–robot interaction (HRI). The convergence of the method is analyzed theoretically. Comparative studies demonstrate the enhanced performance of the proposed approach. Note to Practitioners—In general, the robot trajectory is predetermined and does not consider the influence of the interaction profiles in terms of position and interaction force between the human and the robot. In addition, it is hard to quantify the influence of the interaction profile on the robot trajectory. For teleoperation, an AR-based model is proposed to predict the trajectory of the human and then to update the trajectory of the robot. The developed method has the following aspects: 1) the robot trajectory can be regulated based on the interaction profiles; 2) the feedforward model can estimate the trajectory of the human, achieving human intention recognition in advance for the robot; and 3) the proposed method can potentially be utilized for telerehabilitation, microsurgery, and so on.
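An AR model of the kind described can be fitted by least squares and used for one-step-ahead prediction. A minimal sketch assuming a scalar motion signal; the actual method updates the coefficients online inside the control loop rather than refitting in batch:

```python
import numpy as np

def fit_ar(x, order):
    """Fit AR coefficients by least squares: x[t] ~ sum_k a[k] * x[t-1-k]."""
    rows = [x[t - order:t][::-1] for t in range(order, len(x))]
    A, b = np.array(rows), np.array(x[order:])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

def predict_next(x, coef):
    """One-step-ahead prediction from the most recent `order` samples."""
    order = len(coef)
    return float(np.dot(coef, x[-1:-order - 1:-1]))
```

For a noiseless linear ramp, an AR(2) fit recovers x[t] = 2 x[t-1] - x[t-2] and extrapolates the next sample exactly.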

Journal ArticleDOI
TL;DR: In this article, a deep learning-based approach is developed to map the unique relationship between driver distraction and bioelectric electroencephalography (EEG) signals, which are not affected by traffic environments.
Abstract: Distracted driving has been recognized as a major challenge to traffic safety improvement. This article presents a novel driving distraction detection method that is based on a new deep network. Unlike traditional methods, the proposed method uses both temporal information and spatial information of electroencephalography (EEG) signals as model inputs. Convolutional techniques and gated recurrent units were adopted to map the relationship between drivers’ distraction status and EEG signals in the time domain. A driving simulation experiment was conducted to examine the effectiveness of the proposed method. Twenty-four healthy volunteers participated and three types of secondary tasks (i.e., cellphone operation task, clock task, and 2-back task) were used to induce distraction during driving. Drivers’ EEG responses were measured using a 32-channel electrode cap, and the EEG signals were preprocessed to remove artifacts and then split into short EEG sequences. The proposed deep-network-based distraction detection method was trained and tested on the collected EEG data. To evaluate its effectiveness, it was also compared with the networks using temporal or spatial information alone. The results showed that our proposed distraction detection method achieved an overall binary (distraction versus nondistraction) classification accuracy of 0.92. In terms of task-specific distraction detection, its accuracy was 0.88. Further analysis on the individual difference in detection performance showed that drivers’ EEG performance differed across individuals, which suggests that adaptive learning for each individual driver would be needed when developing in-vehicle distraction detection applications. Note to Practitioners—Driver distraction detection is crucial for safety enhancement to avoid crashes caused by nondriving-related activities, such as calling and texting while driving. 
Related previous studies mainly focus on detection by monitoring head and eye movement using computer vision technologies, or by extracting indicators from driving performance measures for driver state inference. However, complex traffic environments (e.g., dynamically changing light distribution on the driver's face and nighttime driving with low illumination) strongly limit the effectiveness of computer vision technologies, and the driving performance characteristics may also be caused by factors other than distraction (e.g., fatigue). To solve these problems, this article develops a deep learning-based approach to map the unique relationship between driver distraction and bioelectric electroencephalography (EEG) signals, which are not affected by traffic environments. The proposed method can be integrated into driver assistance systems and autonomous vehicles to deal with emergency situations that require the driver to take over. The timely detection of distraction by our method will significantly facilitate its practical application in collision avoidance and danger mitigation during the handover process.
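The preprocessing step of splitting the artifact-cleaned multichannel EEG into short sequences can be sketched as an overlapping sliding window; the window length and step below are arbitrary illustration values, not the paper's settings:

```python
import numpy as np

def segment_eeg(signal, window, step):
    """Split a (channels, samples) EEG recording into overlapping short
    sequences of shape (channels, window), stacked along a new axis."""
    n = signal.shape[1]
    starts = range(0, n - window + 1, step)
    return np.stack([signal[:, s:s + window] for s in starts])
```

Each resulting segment keeps the full channel (spatial) dimension and a short time axis, matching the temporal-plus-spatial input the network expects.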

Journal ArticleDOI
TL;DR: In this paper , a two-stage stochastic formulation with mixed integer conic program (MICP) recourse decisions is proposed to support the hydrogen-based networked microgrids planning subject to multiple uncertainties (e.g., RES generation, electric loads, and the refueling demands of hydrogen vehicles).
Abstract: Networked microgrids that integrate the hydrogen fueling stations (HFSs) with the on-site renewable energy sources (RES), power-to-hydrogen (P2H) facilities, and hydrogen storage could help decarbonize the energy and transportation sectors. In this paper, to support the hydrogen-based networked microgrids planning subject to multiple uncertainties (e.g., RES generation, electric loads, and the refueling demands of hydrogen vehicles), we propose a two-stage stochastic formulation with mixed integer conic program (MICP) recourse decisions. Our formulation involves the holistic investment and operation modeling to optimally site and configure the microgrids with HFSs. The MICP problems appearing in the second stage capture the nonlinear power flow of the networked microgrid system with binary decisions on storage charging/discharging status and energy transactions (including the trading of electricity, hydrogen, and carbon credits to recover the capital expenditures). To handle the computational challenges associated with the stochastic program with MICP recourse, an augmented Benders decomposition algorithm (ABD) is developed. Numerical studies on 33- and 47-bus exemplary networks demonstrate the economic viability of electricity-hydrogen coordination at the microgrid level, as well as the benefits of stochastic modeling. Also, our augmented algorithm significantly outperforms existing methods, e.g., the progressive hedging algorithm (PHA) and the direct use of a professional MIP solver, largely improving the solution quality and reducing the computation time by orders of magnitude. Note to Practitioners—This paper proposes an optimal planning model for electricity-hydrogen microgrids with the renewable hydrogen production, storage, and refueling infrastructures. 
Our planning model is extended under a two-stage stochastic framework to address the multi-energy-sector uncertainties, e.g., RES generation, electric loads, and the refueling demands of hydrogen vehicles. The first-stage problem is to optimize the siting and sizing plan of microgrids. Then, in the second-stage problem, the coordinated scheduling of electricity and hydrogen supply systems is modeled as second-order conic programs (SOCPs) to accurately capture the power flow representation under stochastic scenarios. Also, the logical constraints with binary variables are introduced to describe the energy transactions and storage operations, which results in an MICP recourse structure. Note that the stochastic MICP formulation could be very challenging to compute even with a moderate number of scenarios. One challenge comes from the integer variables that render the problem nonconvex. Another challenge follows from the fact that the strong duality of SOCPs might not hold in general. To mitigate those two challenges, we prove that the continuous relaxation of our recourse problem has strong duality, and make use of that continuous relaxation and other enhancements to design an augmented decomposition algorithm. As revealed by our numerical tests, the proposed decomposition method outperforms PHA in both the solution quality and computational efficiency. Compared to the PHA, our ABD method often achieves tighter bounds with trivial optimality gaps. Also, it could reduce the computation time by orders of magnitude. With the help of advanced analytical tools, the proposed planning framework can be readily implemented in real-world applications.
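The flavor of Benders (L-shaped) decomposition can be conveyed on a toy two-stage problem with a closed-form recourse; the capacity-sizing objective, demand scenarios, and grid-search master below are invented stand-ins, not the paper's MICP formulation:

```python
# Toy L-shaped (Benders) decomposition for
#   min_x  c*x + E_d[ q * max(d - x, 0) ],   0 <= x <= 5,
# a stand-in for capacity sizing under demand scenarios.
c, q = 1.0, 3.0
scenarios = [(0.5, 2.0), (0.5, 4.0)]           # (probability, demand)
grid = [i * 0.01 for i in range(501)]          # master solved by enumeration
cuts = []                                      # each cut: theta >= a + b*x
ub, lb = float('inf'), -float('inf')

for _ in range(20):
    # Master problem: pick x minimizing c*x + theta over accumulated cuts.
    def theta(x):
        return max([0.0] + [a + b * x for a, b in cuts])
    x = min(grid, key=lambda x: c * x + theta(x))
    lb = c * x + theta(x)
    # Subproblem per scenario: value and subgradient of q*max(d - x, 0).
    val = sum(p * q * max(d - x, 0.0) for p, d in scenarios)
    sub = sum(p * (-q) for p, d in scenarios if d > x)
    ub = min(ub, c * x + val)
    if ub - lb < 1e-6:
        break
    cuts.append((val - sub * x, sub))          # supporting cut at current x

print(round(x, 2), round(ub, 2))               # converges to x = 4.0, cost 4.0
```

The paper's ABD adds conic subproblems, a strong-duality proof for the relaxed recourse, and further enhancements; this sketch shows only the cut-generation loop itself.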

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a risk assessment method based on an interval intuitionistic integrated cloud Petri net (IIICPN), building on the cloud model that is widely used in data mining and knowledge discovery, especially in risk assessment problems with linguistic variables.
Abstract: This article proposes a risk assessment method based on interval intuitionistic integrated cloud Petri net (IIICPN). The cloud model is widely used in data mining and knowledge discovery, especially in risk assessment problems with linguistic variables. However, the cloud models proposed in the literature do not express interval-valued intuitionistic linguistic information satisfactorily, and the reasoning methods based on the cloud models cannot perform risk assessment well. The work in this article includes the definition of the IIIC and the IIICPN, the method of converting interval-valued intuitionistic uncertain linguistic numbers into IIICs, and the reasoning method of the IIICPN. As proof, a subway fire accident model is adopted to confirm the feasibility of the proposed method, and comparison experiments between the IIICPN and both a general fuzzy Petri net and the trapezium cloud model are conducted to verify the superiority of the proposed model. Note to Practitioners —This work deals with the subway fire risk assessment problem. It proposes a cloud model based on interval-valued intuitionistic uncertain linguistic information and builds a cloud-based Petri net model. Existing fire risk assessment methods use fault trees or aggregation operators to take all factors into consideration, but they do not account for the interactions among factors. The goal of this work is to assess the risk of subway fire accidents using fuzzy linguistic decision variables. The simulation results indicate that the proposed method is highly effective. The obtained results can help assessors better determine which factors may cause a disaster.
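The cloud-model ingredient can be illustrated with the standard forward normal cloud generator; this is the classic (Ex, En, He) construction, not the paper's interval intuitionistic integrated cloud, and the risk-grade numbers are made up:

```python
import math
import random

def forward_cloud(ex, en, he, n, seed=0):
    """Forward normal cloud generator: n drops (x, membership) for a
    linguistic concept with expectation ex, entropy en, hyper-entropy he."""
    random.seed(seed)
    drops = []
    for _ in range(n):
        en_i = abs(random.gauss(en, he))       # per-drop entropy sample
        x = random.gauss(ex, en_i)             # drop position
        mu = math.exp(-(x - ex) ** 2 / (2 * en_i ** 2))
        drops.append((x, mu))
    return drops

# e.g. the linguistic risk grade "medium" on a 0-10 scale
drops = forward_cloud(ex=5.0, en=1.0, he=0.1, n=1000)
mean_x = sum(x for x, _ in drops) / len(drops)
print(round(mean_x, 1))   # clusters around ex = 5.0
```

Reasoning over a Petri net would then propagate such clouds along transitions; the paper extends the drop to an interval-valued intuitionistic form before doing so.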

Journal ArticleDOI
TL;DR: In this article, the authors proposed a robust model predictive control (RMPC) approach to minimize the total economic cost, while satisfying comfort and energy requests of the final users in a multi-carrier microgrid.
Abstract: This paper focuses on the control of microgrids where both gas and electricity are provided to the final customer, i.e., multi-carrier microgrids. Hence, these microgrids include thermal and electrical loads, renewable energy sources, energy storage systems, heat pumps, and combined heat and power units. The parameters characterizing the multi-carrier microgrid are subject to several disturbances, such as fluctuations in the provision of renewable energy, variability in the electrical and thermal demand, and uncertainties in the electricity and gas pricing. With the aim of accounting for the data uncertainties in the microgrid, we propose a Robust Model Predictive Control (RMPC) approach whose goal is to minimize the total economic cost, while satisfying comfort and energy requests of the final users. In the related literature various RMPC approaches have been proposed, focusing either on electrical or on thermal microgrids. Only a few contributions have addressed the robust control of multi-carrier microgrids. Consequently, we propose an innovative RMPC algorithm that employs an uncertainty-set-based method and that can provide better performance compared with deterministic model predictive controllers applied to multi-carrier microgrids. With the aim of mitigating the conservativeness of the approach, we define suitable robustness factors and we investigate the effects of such factors on the robustness of the solution against variations of the uncertain parameters. We show the effectiveness of the proposed RMPC approach by applying it to a realistic residential multi-carrier microgrid and comparing the obtained results with those of a baseline robust method. Note to Practitioners—This work is motivated by the emerging need for effective energy management approaches in multi-carrier microgrids. 
The inherent difficulty of scheduling simultaneously the operations of various energy infrastructures (e.g., electricity, natural gas) is exacerbated by the inevitable presence of uncertainties that affect the inter-dependent dynamics of different energy resources and equipment. The proposed robust MPC-based control strategy allows the energy manager to effectively determine an optimal energy scheduling of multi-faceted system components, making a tradeoff between performance and protection against data uncertainty. The presented strategy is comprehensive and generic, as it can be applied to different microgrid frameworks integrating various types of system components and sources of uncertainty, while at the same time being implementable in any energy management system.
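The idea of trading performance against protection via a robustness factor can be sketched on a toy single-bus purchase problem; the demand, forecast, and price numbers are invented, and a real RMPC would optimize over a horizon with dynamics and a full uncertainty set rather than this per-step tightening:

```python
# Robust purchase schedule for a toy single-bus microgrid: demand must be
# met by purchased power plus renewables; renewable output is only known
# to lie in [forecast - dev, forecast + dev].  A robustness factor gamma
# in [0, 1] tightens the constraint toward the worst case (gamma = 1).
def robust_purchase(demand, forecast, dev, gamma, price):
    plan, cost = [], 0.0
    for d, f, e, p in zip(demand, forecast, dev, price):
        guaranteed = f - gamma * e          # protected renewable level
        buy = max(0.0, d - guaranteed)      # cheapest feasible purchase
        plan.append(buy)
        cost += p * buy
    return plan, cost

demand = [5.0, 6.0, 4.0]      # kWh per step (invented)
forecast = [2.0, 3.0, 1.0]    # renewable forecast
dev = [1.0, 1.5, 0.5]         # forecast deviation bound
price = [10.0, 12.0, 8.0]     # purchase price

nominal, c0 = robust_purchase(demand, forecast, dev, 0.0, price)
robust, c1 = robust_purchase(demand, forecast, dev, 1.0, price)
print(c0, c1)   # protection raises cost: 90.0 vs 122.0
```

Intermediate values of `gamma` reproduce the tradeoff the paper explores with its robustness factors: less protection, lower cost.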

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a scoring and dynamic hierarchical-based NSGA-II (Nondominated Sorting Genetic Algorithm II), called SDHN, to minimize both makespan and cost of workflow execution.
Abstract: Cloud computing has become a promising technology for reducing computation cost by providing users with elastic resources and application-deploying environments under a pay-per-use model. More and more scientific workflow applications have been or are being migrated to the cloud. Scheduling workflows has become the main bottleneck in increasing resource utilization and quality of service (QoS) for users. This work formulates workflow scheduling as multiobjective optimization problems and proposes a Scoring and Dynamic Hierarchy-based NSGA-II (Nondominated Sorting Genetic Algorithm II), called SDHN for short, to minimize both makespan and cost of workflow execution. First, a scoring criterion is developed to calculate the total score for each individual during population updating, which is used as a quantitative index to evaluate the dominance degree of individuals among the whole population. Hence, SDHN can distinguish individuals within the same dominance level and target its search toward the directions of elite solutions according to their different dominance degrees and accordingly improve search efficiency. Second, a population-based dynamic hierarchical structure (HS) and its evolutionary rules are presented to update HS by comparing each child with all parental individuals from bottom to top until finding a proper dominant level. Since traversing all HS levels is not needed in most cases, the number of individual comparisons is reduced and SDHN’s updating efficiency is greatly improved, especially for large-scale and complex applications. Third, to guarantee its convergence to near-optimal solutions, adaptive adjustment strategies (AASs) are designed to prevent the search from falling into local optima or diverging by checking the number of individuals at the highest HS level and then modifying the relevant genetic operations to guide the evolutionary process to approach the global Pareto Front. 
Extensive experiments are conducted to verify SDHN, and the results show that it outperforms the existing algorithms in the quality and diversity of resulting solutions as well as convergence time. Note to Practitioners—Most scientific applications are computation- and/or data-intensive and need large-scale or high-performance resources for their execution. More and more scientists use workflows to manage their applications, but how to efficiently run them in the cloud is a big challenge due to their large scale as well as the dynamic characteristics of the elastic and heterogeneous cloud resources. In this article, we develop a novel multiobjective optimization technique for workflow scheduling such that the makespan and cost can be minimized simultaneously. A scoring criterion, dynamic hierarchical structure and its evolutionary rules, and adaptive adjustment strategies are designed to cooperate with each other and increase the search ability and efficiency of the original and widely used NSGA-II. Comprehensive experiments are conducted to verify the proposed method’s performance, and the experimental results show that it can provide more near-optimal solutions than the existing methods. It can be readily applied to implement more efficient and effective cloud data centers to execute large-scale scientific workflows.
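The scoring idea, ranking individuals within one dominance level by how many solutions they dominate versus how many dominate them, can be sketched for a bi-objective (makespan, cost) population; the score formula below is a simplified stand-in for the paper's criterion:

```python
def dominates(a, b):
    """a dominates b when minimizing (makespan, cost)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dominance_scores(pop):
    """Illustrative score: individuals that dominate many others and are
    dominated by few rank higher, which separates solutions that plain
    NSGA-II would place in the same nondominated front."""
    scores = []
    for i, a in enumerate(pop):
        dominated = sum(dominates(a, b) for j, b in enumerate(pop) if j != i)
        dominating = sum(dominates(b, a) for j, b in enumerate(pop) if j != i)
        scores.append(dominated - dominating)
    return scores

# (makespan, cost) of candidate workflow schedules (invented values)
pop = [(10, 5), (8, 7), (12, 4), (11, 6), (9, 9)]
print(dominance_scores(pop))   # → [1, 1, 0, -1, -1]
```

Here (10, 5) and (8, 7) each dominate one other schedule and so score highest, even though all of the first three are mutually nondominated.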

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a hybrid prediction method named SG and TCN-based LSTM (ST-LSTM), which integrates the merits of the Savitzky-Golay filter, the temporal convolutional network (TCN), and the long short-term memory.
Abstract: Accurate and real-time prediction of network traffic can not only help system operators allocate resources rationally according to their actual business needs but also help them assess the performance of a network and analyze its health status. In recent years, neural networks have been proven suitable for predicting time series data, as represented by the long short-term memory (LSTM) neural network and the temporal convolutional network (TCN). This article proposes a novel hybrid prediction method named SG and TCN-based LSTM (ST-LSTM) for such network traffic prediction, which synergistically combines the power of the Savitzky–Golay (SG) filter, the TCN, as well as the LSTM. ST-LSTM employs a three-phase end-to-end methodology serving time series prediction. It first eliminates noise in raw data using the SG filter, then extracts short-term features from the sequences with the TCN, and finally captures the long-term dependence in the data with the LSTM. Experimental results over real-world datasets demonstrate that the proposed ST-LSTM outperforms state-of-the-art algorithms in terms of prediction accuracy. Note to Practitioners —This work considers real-time and high-accuracy prediction of network traffic. It is highly important to predict network traffic accurately by capturing long-term dependence and effectively extracting high- and low-frequency information from time series data. Yet, this is a big challenge because there are unstable characteristics and strong nonlinear features in the network traffic due to continuous expansion of network scale and fast emergence of new services. Current prediction methods usually have oversimplified theoretical assumptions, need significant time and memory, or suffer problems of gradient disappearance or early convergence. Thus, they fail to effectively capture the nonlinear characteristics of large-scale network sequences. 
This work proposes a hybrid prediction method named SG and TCN-based LSTM (ST-LSTM), which integrates the merits of the Savitzky–Golay filter, the temporal convolutional network (TCN), and the long short-term memory (LSTM), which serve to smooth the time series, capture short-term local features, and capture long-term dependence, respectively. Experimental results based on a real-life dataset demonstrate that it achieves better prediction accuracy than its state-of-the-art peers, including the TCN and the LSTM. It can be readily implemented and deployed in many real-life industrial areas including smart city, edge computing, cloud computing, and data centers.
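The first, smoothing, stage can be illustrated with a classic 5-point quadratic Savitzky–Golay filter in pure Python; the TCN and LSTM stages would then run on the smoothed series, and the trend and noise here are synthetic:

```python
import random

def savitzky_golay_5pt(series):
    """5-point quadratic Savitzky-Golay smoother (classic center weights
    [-3, 12, 17, 12, -3] / 35); the two samples at each end are left as-is."""
    w = [-3, 12, 17, 12, -3]
    out = list(series)
    for i in range(2, len(series) - 2):
        out[i] = sum(wi * series[i + j - 2] for j, wi in enumerate(w)) / 35.0
    return out

random.seed(1)
clean = [0.01 * t * t for t in range(100)]            # smooth traffic trend
noisy = [x + random.gauss(0, 0.1) for x in clean]     # measured series
smooth = savitzky_golay_5pt(noisy)

err_noisy = sum((a - b) ** 2 for a, b in zip(noisy, clean))
err_smooth = sum((a - b) ** 2 for a, b in zip(smooth, clean))
print(err_smooth < err_noisy)   # smoothing moves the series toward the trend
```

Unlike a plain moving average, the polynomial fit preserves quadratic trends exactly while attenuating high-frequency noise, which is why SG filtering suits traffic series with sharp but meaningful ramps.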

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper proposed a hybrid variational recurrent neural network (RNN) autoencoder to compute the anomaly level of the body motion based on the acquired point cloud.
Abstract: Elderly fall prevention and detection is becoming extremely crucial with the fast aging population globally. In this article, we propose mmFall, a novel fall detection system, which comprises 1) the emerging millimeter-wave (mmWave) radar sensor to collect the human body’s point cloud along with the body centroid and 2) a hybrid variational recurrent neural network (RNN) autoencoder (HVRAE) to compute the anomaly level of the body motion based on the acquired point cloud. A fall is detected when the spike in anomaly level and the drop in centroid height occur simultaneously. The mmWave radar sensor offers privacy compliance and high sensitivity to motion compared with traditional sensing modalities. However, 1) randomness in radar point cloud and 2) difficulties in fall collection/labeling in the traditional supervised fall detection approaches are the two major challenges. To overcome the randomness in radar data, the proposed HVRAE uses variational inference, a generative approach rather than a discriminative approach, to infer the posterior probability of the body’s latent motion state in every frame, followed by an RNN to summarize the temporal features over multiple frames. Moreover, to circumvent the difficulties in fall data collection/labeling, the HVRAE is built upon an autoencoder architecture in a semisupervised approach, which is only trained on the normal activities of daily living (ADL). In the inference stage, the HVRAE will generate a spike in the anomaly level once an abnormal motion, such as a fall, occurs. During the experiment, we implemented the HVRAE along with two other baselines, and tested on the data set collected in an apartment. The receiver operating characteristic (ROC) curve indicates that our proposed model outperforms the baselines and achieves 98% detection out of 50 falls at the expense of just 2 false alarms. 
Note to Practitioners—Traditional nonwearable fall detection approaches typically make use of a vision-based sensor, such as a camera, to monitor and detect falls using a classifier that is trained in a supervised fashion on the collected fall and nonfall data. However, several problems render these methods impractical. First, camera-based monitoring may trigger privacy concerns. Second, fall data collection using human subjects is difficult and costly, not to mention the impracticality of asking the elderly to repeat simulated falls for data collection. In this article, we propose a new fall detection approach to overcome these problems 1) using a palm-size mmWave radar sensor, which is highly sensitive to motion while protecting privacy, to monitor the elderly and 2) using a semisupervised anomaly detection approach to circumvent the fall data collection. Further hardware engineering and more training data from people with different body figures could make the proposed fall detection solution even more practical.
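The fusion rule, declaring a fall only when an anomaly spike and a centroid-height drop coincide, can be sketched as follows; the thresholds and the synthetic anomaly/height traces are invented, and in the paper the anomaly series comes from the HVRAE:

```python
def detect_fall(anomaly, height, a_thresh=0.8, drop=0.4, window=3):
    """Flag frames where an anomaly spike and a centroid-height drop
    (relative to `window` frames earlier) occur together."""
    falls = []
    for t in range(window, len(anomaly)):
        spike = anomaly[t] > a_thresh
        fell = height[t - window] - height[t] > drop
        if spike and fell:
            falls.append(t)
    return falls

# synthetic trace: normal walking, then a fall around frames 5-6
anomaly = [0.1, 0.2, 0.1, 0.3, 0.2, 0.9, 0.95, 0.3, 0.2, 0.1]
height  = [0.9, 0.9, 0.9, 0.9, 0.8, 0.4, 0.3,  0.3, 0.3, 0.3]
print(detect_fall(anomaly, height))   # → [5, 6]
```

Requiring both conditions is what keeps vigorous but upright motion (a spike without a drop) and sitting down (a drop without a spike) from raising false alarms.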

Journal ArticleDOI
TL;DR: In this article , a hierarchical data recovery method based on generative adversarial networks (GANs) is proposed to handle different numbers of incomplete data due to the structure of the semicentral pipeline network.
Abstract: In the real-time status monitoring of a pipeline network, incomplete pressure data are unavoidable due to some device or communication errors. To solve this problem, a hierarchical data recovery method based on generative adversarial networks (GANs) is proposed in this article. First, a hierarchical data recovery framework is proposed to handle different numbers of incomplete data due to the structure of the semicentral pipeline network. Second, a joint attention module is presented to capture both interior nature and correlation relationships of multivariate pressure series and further guarantee the consistency of pressure data. Third, macro-micro dual discriminators are proposed to evaluate the recovery result through the combination of the local and global variation in temporal and spatial dependencies. Based on the novel structures, the proposed model is able to recover incomplete data with abnormal fluctuation values, unreasonable fixed values, or missing values. Finally, under a series of data recovery experiments, the efficiency of the proposed method is evaluated. Experimental results demonstrate that the proposed method is a practical way to ensure data recovery performance in the pipeline network. Note to Practitioners —Status monitoring based on pressure data is of great importance for safe and efficient operation in a pipeline network. However, due to unexpected situations, the appearance of incomplete pressure data affects the subsequent data processing and status analysis, resulting in an incorrect decision. In this article, a deep learning-based method is proposed to recover the incomplete data. With the help of the spatiotemporal dependencies of multivariate pressure series, the proposed method can recover different numbers of incomplete data from the non-missing part of the pressure data. The experimental results show that the proposed method outperforms similar data recovery methods on three different evaluation metrics. 
In the future, we will address the data recovery problem without the complete data pairs in the training process.
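A preprocessing step implied by the three recovery cases, flagging which samples need recovery at all, can be sketched as a simple mask builder; the thresholds and the toy pressure trace are invented, and the GAN would then fill in the flagged samples:

```python
def incomplete_mask(series, lo, hi, stuck_run=5):
    """Mark samples needing recovery: missing values (None), out-of-range
    fluctuations, and values stuck at a constant for >= stuck_run steps."""
    n = len(series)
    mask = [v is None or not (lo <= v <= hi) for v in series]
    i = 0
    while i < n:
        j = i
        while (j < n and series[j] is not None
               and series[i] is not None and series[j] == series[i]):
            j += 1
        if j == i:                 # a missing sample; step over it
            j = i + 1
        if j - i >= stuck_run:     # unreasonable fixed-value run
            for k in range(i, j):
                mask[k] = True
        i = j
    return mask

# toy pressure trace: a dropout, an abnormal spike, then a stuck sensor
pressure = [3.1, 3.2, None, 3.0, 9.9, 3.0, 3.0, 3.0, 3.0, 3.0, 3.1]
print(incomplete_mask(pressure, lo=2.0, hi=5.0))
```

Note that the stuck-value case is only detectable over a run: each individual reading is in range, which is why a per-sample range check alone cannot replace this pass.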

Journal ArticleDOI
TL;DR: In this article , a low-risk and high-efficiency path planning approach is proposed for autonomous driving based on the high-performance and practical trajectory prediction method using a long short-term memory (LSTM) network.
Abstract: Accurate trajectory prediction of surrounding vehicles enables lower risk path planning in advance for autonomous vehicles, thus ensuring the safety of automated driving. A low-risk and high-efficiency path planning approach is proposed for autonomous driving based on the high-performance and practical trajectory prediction method. A long short-term memory (LSTM) network is trained and tested using the highD dataset, and the validated LSTM is used to predict the trajectories of surrounding vehicles combining the information extracted from vehicle-to-vehicle (V2V) technology. A risk assessment and mitigation-based local path planning algorithm is proposed according to the information of predicted trajectories of surrounding vehicles. Two driving scenarios are extracted and reconstructed from the highD dataset for validation and evaluation, i.e., an active lane-change scenario and a longitudinal collision-avoidance scenario. The results illustrate that the risk is mitigated and the driving efficiency is improved with the proposed path planning algorithm compared to constant-velocity prediction and the prediction method of the nonlinear input–output (NIO) network, especially when the velocity and trajectory change suddenly. Note to Practitioners—This article was motivated by the problem of ensuring safe decision-making and path planning through accurate environment prediction. There are two main parts included in this article. First, this article proposes a pragmatic approach to accurately predict the motion of the environment based on the long short-term memory (LSTM) approach. The prediction performance of LSTM was compared with nonlinear input–output (NIO). The results showed that the LSTM approach has a significant advantage in motion prediction of the surrounding vehicles during path planning. The second part of this article makes decisions and realizes local path planning based on risk assessment. 
The potential-field-based approach is applied to risk assessment based on these accurate predictions. Preliminary results demonstrate that the decision-making algorithm performs better under the accurate prediction model. The results also show that the safety and driving efficiency of the ego vehicle were improved by tracking the trajectory, which was planned based on the risk assessment. The only concern for real-time application is the computation time; in the future, we will investigate how to reduce it further.
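The risk-assessment step can be sketched as a Gaussian potential field accumulated along candidate ego paths against a predicted trajectory; the trajectories and field shape below are invented, and the paper pairs this with LSTM predictions rather than the straight-line "prediction" used here:

```python
import math

def path_risk(path, predicted, sigma=2.0):
    """Sum a Gaussian potential field over the ego path against each
    predicted position of a surrounding vehicle (matched per time step)."""
    risk = 0.0
    for (xe, ye), (xo, yo) in zip(path, predicted):
        d2 = (xe - xo) ** 2 + (ye - yo) ** 2
        risk += math.exp(-d2 / (2 * sigma ** 2))
    return risk

# predicted trajectory of a slower lead vehicle (stand-in for LSTM output)
lead = [(10 + 1.0 * t, 0.0) for t in range(8)]
keep = [(0 + 2.0 * t, 0.0) for t in range(8)]                  # stay in lane
change = [(0 + 2.0 * t, min(3.5, 0.5 * t)) for t in range(8)]  # move left

best = min([keep, change], key=lambda p: path_risk(p, lead))
print(best is change)   # the lane change carries less predicted risk
```

Because the risk is evaluated against *predicted* positions step by step, a more accurate predictor directly sharpens the ranking of candidate paths, which is the coupling the article exploits.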

Journal ArticleDOI
TL;DR: In this article , a phase-distance model was proposed to estimate the tag location with high accuracy, where a mobile robot equipped with a reader antenna localizes in 2D the tags placed in an indoor scenario and reconstructs the map of the environment through a SLAM algorithm.
Abstract: The use of radio frequency identification (RFID) technology for the traceability of products throughout the production chain, warehouse management, and the retail network has been spreading in recent years, especially in those industries in line with the concept of Industry 4.0. The last decade has seen the development of increasingly precise and high-performance methods for the localization of goods. This work proposes a reliable 2-D localization methodology that is faster than, and competitive in accuracy with, state-of-the-art techniques. The proposed method leverages a phase-distance model and exploits the synthetic aperture approach and unwrapping techniques to address phase ambiguity and multipath phenomena. Trilateration applied on consecutive phase readings allows finding hyperbolae as the localization solution space. Analytic calculus is used to compute intersections among the conics that estimate the tag position. An algorithm evaluates the quality of the intersections to select the best estimate. Experimental tests are conducted to assess the quality of the proposed strategy. A mobile robot equipped with a reader antenna localizes in 2-D the tags placed in an indoor scenario and reconstructs the map of the environment through a simultaneous localization and mapping (SLAM) algorithm. Note to Practitioners—A localization technology based on passive ultrahigh-frequency (UHF) radio frequency identification (RFID) is an enabling technology for intelligent warehouses, logistics, and retail. For this reason, this work presents a novel method to estimate the tag location with high accuracy. A reader antenna is mounted on an autonomous mobile robot that can move in an indoor or outdoor environment thanks to a simultaneous localization and mapping (SLAM) algorithm. The motion of the antenna generates a synthetic aperture. The system receives the phase measurements from the RFID tags and generates a distance model through phase unwrapping. 
In this way, the possible locations of the tags in the environment are generated as conics. The trilateration step is performed analytically, intersecting the obtained conics. The resulting estimates are accurate and not computationally expensive. Therefore, the proposed approach can be employed in any application where localizing objects is fundamental even when reduced computational power is available, e.g., in warehouses where the products are at known heights, or where the items are placed on a fixed infrastructure, such as high-shelves logistics, to produce the inventory of the tagged objects within each shelf.
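The underlying phase-distance model and the unwrapping step can be sketched directly; the carrier frequency, robot motion, and single-tag geometry below are invented, and absolute range stays ambiguous after unwrapping, which is why the method still needs trilateration over the synthetic aperture:

```python
import math

C, FREQ = 3e8, 865.7e6             # speed of light; a EU UHF channel (Hz)
LAM = C / FREQ                     # carrier wavelength

def unwrap(phases):
    """Remove the 2*pi jumps between consecutive phase readings."""
    out = [phases[0]]
    for p in phases[1:]:
        k = round((out[-1] - p) / (2 * math.pi))
        out.append(p + 2 * math.pi * k)
    return out

def relative_distance(phases):
    """Phase-distance model d = phi * lambda / (4*pi); with wrapped input
    only distance *changes* along the antenna path are recovered."""
    return [p * LAM / (4 * math.pi) for p in unwrap(phases)]

# simulate the robot approaching a tag from 3.0 m down to 1.0 m
true_d = [3.0 - 0.05 * i for i in range(41)]
wrapped = [(4 * math.pi * d / LAM) % (2 * math.pi) for d in true_d]
est = relative_distance(wrapped)
drop = est[0] - est[-1]
print(round(drop, 3))   # → 2.0, the travel toward the tag is recovered
```

Unwrapping works here because each 5 cm step changes the round-trip phase by less than pi; a faster robot or sparser sampling would break that assumption, one of the reasons the paper combines unwrapping with the synthetic aperture and multipath handling.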

Journal ArticleDOI
TL;DR: In this article , the authors propose an automatic rescheduling algorithm for real-time control of railway traffic that aims at minimizing the delays induced by the disruption and disturbances, as well as the resulting cancellations of train runs and turnbacks (or short-turns) and shuntings of trains in stations.
Abstract: Railways are a well-recognized sustainable transportation mode that helps to satisfy the continuously growing mobility demand. However, the management of railway traffic in large-scale networks is a challenging task, especially when both a major disruption and various disturbances occur simultaneously. We propose an automatic rescheduling algorithm for real-time control of railway traffic that aims at minimizing the delays induced by the disruption and disturbances, as well as the resulting cancellations of train runs and turn-backs (or short-turns) and shuntings of trains in stations. The real-time control is based on the Model Predictive Control (MPC) scheme where the rescheduling problem is solved by mixed integer linear programming using macroscopic and mesoscopic models. The proposed resolution algorithm combines a distributed optimization method and bi-level heuristics to provide feasible control actions for the whole network in short computation time, without neglecting physical limitations nor operations at disrupted stations. A realistic simulation test is performed on the complete Dutch railway network. The results highlight the effectiveness of the method in properly minimizing the delays and rapidly providing feasible feedback control actions for the whole network. Note to Practitioners —This article aims at contributing to the enhancement of the core functionalities of Automatic Train Control (ATC) systems and, in particular, of the Automatic Train Supervision (ATS) module, which is included in ATC systems. In general, the ATS module makes it possible to automate train traffic supervision and, consequently, the rescheduling of the railway traffic in case of unexpected events. However, the implementation of an efficient rescheduling technique that automatically and rapidly provides the control actions necessary to restore the railway traffic operations to the nominal schedule is still an open issue. 
Most literature contributions fail to provide rescheduling methods that determine high-quality solutions in less than one minute while including real-time information on the state of the large-scale railway system. This research proposes a semi-heuristic control algorithm based on MPC that, on the one hand, overcomes the limitations of manual rescheduling (i.e., suboptimal, stressful, and delayed decisions) and, on the other hand, offers the advantages of online and closed-loop control of railway traffic based on continuous monitoring of the traffic state to rapidly restore railway traffic operations to the nominal schedule. The semi-heuristic procedure permits a significant reduction in the computation time needed to solve the rescheduling problem compared with an exact procedure; moreover, the use of a distributed optimization approach permits the application of the algorithm to large instances of the rescheduling problem and the inclusion of both the traffic and rolling stock constraints related to the disrupted area. The method is tested in a realistic simulation environment and thus still requires further refinement before integration into a real ATS system. Further developments will also consider the occurrence of various simultaneous disruptions in the network.
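The receding-horizon logic behind such an MPC rescheduler can be sketched with a toy model. Everything below is an illustrative stand-in, not the authors' formulation: the paper solves a mixed-integer linear program over macroscopic/mesoscopic railway models, whereas this sketch assumes a knock-on factor of 2 for unabsorbed disturbances and replaces the MILP solver with brute-force enumeration of hold times.

```python
# Toy receding-horizon rescheduling loop in the spirit of an MPC scheme.
# The delay model, hold actions, and brute-force search are illustrative
# stand-ins for the mixed-integer linear program used in the paper.
from itertools import product

def predicted_delay(hold_times, disturbances):
    """Total delay if train i is held hold_times[i] minutes.

    An unabsorbed disturbance propagates through the network (knock-on
    factor 2 here); holding at least d minutes absorbs it at the cost
    of the hold itself.
    """
    return sum(2 * d if h < d else h
               for h, d in zip(hold_times, disturbances))

def reschedule(disturbances, max_hold=3):
    """Enumerate all hold decisions (stands in for the MILP solver)."""
    candidates = product(range(max_hold + 1), repeat=len(disturbances))
    return min(candidates, key=lambda h: predicted_delay(h, disturbances))

def mpc_loop(measured, horizon=3):
    """Receding horizon: solve, apply the first action, re-measure, repeat."""
    applied = []
    for _ in range(horizon):
        action = reschedule(measured)
        applied.append(action[0])                     # apply first control action
        measured = [max(d - 1, 0) for d in measured]  # traffic evolves, re-measure
    return applied
```

The closed-loop structure (re-solving after each applied action) is what gives MPC its robustness to new disturbances arriving mid-plan.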

Journal ArticleDOI
TL;DR: In this article, an iron needle was controlled by a three-degree-of-freedom (3-DoF) manipulator and magnetized by precessing magnetic fields.
Abstract: Dynamic self-assembly is a promising approach for inducing the collective behavior of agents to perform coordinated tasks at small scales. However, efficient pattern formation and navigation in environments with complex conditions remain a challenge. In this article, we propose a strategy for micromanipulation using dynamically self-assembled magnetic droplets with needle guidance. An iron needle was controlled by a three-degree-of-freedom (3-DoF) manipulator and magnetized by precessing magnetic fields. The process of self-assembly was optimized based on real-time vision feedback and a genetic algorithm. Affected by the locally induced field gradient near the needle, reconfigurable assembled magnetic droplets were formed beneath the air-liquid interface with high time efficiency, and the geometric center of the pattern was determined. Following the magnetized needle, assembled patterns were navigated along preplanned paths and exhibited reversible pattern expansion and shrinkage. Moreover, cargo can be trapped and caged by exploiting the induced fluid flow around the assembled droplets. To perform cargo transportation tasks in a multiple-obstacle environment, an optimal path planner with obstacle-avoidance capability was designed based on the particle swarm optimization (PSO) algorithm. Experiments demonstrated effective pattern formation, navigation, cargo trapping, and obstacle-avoidance transportation. The proposed method opens new prospects for using a dynamically self-assembled pattern as an untethered end-effector for micromanipulation. Note to Practitioners —This article was motivated by the recent interest in utilizing the collective behavior of small-scale active agents to perform micromanipulation tasks. Driven by external magnetic fields, building blocks are gathered and assembled, yielding a dynamically stable pattern. To perform practical tasks, efficient pattern formation, control, and navigation are required.
In addition, obstacles often exist in the working environment, challenging pattern navigation and manipulation tasks. The strategy presented here is developed for micromanipulation using dynamically self-assembled magnetic droplets with needle guidance. A three-axis Helmholtz coil system is applied to rotate the droplets and magnetize the iron needle. Algorithms are designed to guide and optimize the pattern formation, navigation, and cargo trapping process. Magnetic droplets are tracked in real time, and ordered assembled patterns are formed in an optimized way. Following the needle, the pattern is navigated and performs cargo manipulation tasks with obstacle-avoidance capability. Experimental results have validated the proposed strategy in pattern formation, navigation, and cargo manipulation in a multiple-obstacle environment.
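As a rough illustration of how a PSO-based planner with obstacle avoidance works, the sketch below optimizes a single 2-D via-point between a start and a goal while penalizing entry into a circular obstacle. The cost model, search bounds, and PSO coefficients are invented for illustration and are not those of the paper.

```python
# Minimal PSO path-planning sketch: find a via-point that shortens the
# start->via->goal path while staying outside one circular obstacle.
import math, random

def path_cost(via, start=(0.0, 0.0), goal=(10.0, 0.0),
              obstacle=(5.0, 0.0), radius=2.0):
    """Length of start->via->goal plus a large penalty inside the obstacle."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    length = dist(start, via) + dist(via, goal)
    penalty = 100.0 if dist(via, obstacle) < radius else 0.0
    return length + penalty

def pso_plan(n_particles=30, iters=100, seed=1):
    """Standard PSO (inertia + cognitive + social terms) over one via-point."""
    rng = random.Random(seed)
    pos = [[rng.uniform(0, 10), rng.uniform(-5, 5)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=path_cost)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if path_cost(pos[i]) < path_cost(pbest[i]):
                pbest[i] = pos[i][:]
                if path_cost(pbest[i]) < path_cost(gbest):
                    gbest = pbest[i][:]
    return gbest
```

With the obstacle sitting on the straight line between start and goal, the swarm converges to a via-point skirting the obstacle boundary, giving a collision-free detour only slightly longer than the blocked straight path.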

Journal ArticleDOI
TL;DR: Zhang et al. proposed a fuzzy logic-driven variable time-scale prediction-based reinforcement learning (FLDVTSP-RL) method for assembly action control, in which the predicted environment is mapped to the impedance parameter in the proposed impedance action space.
Abstract: Reinforcement learning (RL) has been increasingly used for single peg-in-hole assembly, where assembly skill is learned through interaction with the assembly environment in a manner similar to skills employed by human beings. However, the existing RL algorithms are difficult to apply to the multiple peg-in-hole assembly because the much more complicated assembly environment requires sufficient exploration, resulting in a long training time and low data efficiency. To this end, this article focuses on how to predict the assembly environment and how to use the predicted environment in assembly action control to improve the data efficiency of the RL algorithm. Specifically, first, the assembly environment is exactly predicted by a variable time-scale prediction (VTSP) defined as general value functions (GVFs), reducing unnecessary exploration. Second, we propose a fuzzy logic-driven variable time-scale prediction-based reinforcement learning (FLDVTSP-RL) method for assembly action control to improve the efficiency of the RL algorithm, in which the predicted environment is mapped to the impedance parameter in the proposed impedance action space by a fuzzy logic system (FLS) as the action baseline. To demonstrate the effectiveness of VTSP and the data efficiency of the FLDVTSP-RL methods, a dual peg-in-hole assembly experiment is set up; the results show that FLDVTSP-deep Q-learning (DQN) decreases the assembly time by about 44% compared with DQN and FLDVTSP-deep deterministic policy gradient (DDPG) decreases the assembly time by about 24% compared with DDPG. Note to Practitioners —The complicated assembly environment of the multiple peg-in-hole assembly results in a contact state that cannot be recognized exactly from the force sensor. Therefore, contact-model-based methods that require tuning of the control parameters based on contact state recognition cannot be applied directly in this complicated environment.
Recently, reinforcement learning (RL) methods without contact state recognition have attracted scientific interest. However, the existing RL methods still rely on numerous explorations and a long training time, which prevents them from being directly applied to real-world tasks. This article takes inspiration from the manner in which human beings can learn assembly skills within a few trials, which relies on variable time-scale predictions (VTSPs) of the environment and an optimized assembly action control strategy. Our proposed fuzzy logic-driven variable time-scale prediction-based reinforcement learning (FLDVTSP-RL) can be implemented in two steps. First, the assembly environment is predicted by the VTSP defined as general value functions (GVFs). Second, assembly action control is realized in an impedance action space with a baseline defined by the impedance parameter mapped from the predicted environment by the fuzzy logic system (FLS). Finally, a dual peg-in-hole assembly experiment is conducted; compared with deep Q-learning (DQN), FLDVTSP-DQN can decrease the assembly time by about 44%; compared with deep deterministic policy gradient (DDPG), FLDVTSP-DDPG can decrease the assembly time by about 24%.
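The fuzzy mapping step (predicted environment to impedance baseline) can be illustrated with a zero-order Sugeno-style inference. The membership functions, rule base, and stiffness values below are invented for illustration and do not reproduce the paper's FLS; the sketch only shows the mechanism of blending rules into a continuous impedance parameter.

```python
# Illustrative fuzzy mapping from a predicted contact force to an
# impedance stiffness baseline (zero-order Sugeno: weighted average of
# rule consequents). The rule base is an assumption, not the paper's.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_stiffness(predicted_force):
    """Map a predicted contact force (N) to a stiffness baseline (N/m).

    Three illustrative rules:
      low force  -> stiff  (free space, move firmly)
      mid force  -> medium
      high force -> soft   (comply to avoid jamming)
    """
    rules = [
        (tri(predicted_force, -1.0, 0.0, 5.0), 1000.0),   # low  -> stiff
        (tri(predicted_force, 2.0, 5.0, 8.0),   500.0),   # mid  -> medium
        (tri(predicted_force, 5.0, 10.0, 16.0), 100.0),   # high -> soft
    ]
    num = sum(w * k for w, k in rules)
    den = sum(w for w, _ in rules)
    return num / den if den > 0 else 500.0
```

Because rule activations overlap, the stiffness varies smoothly with the predicted force instead of switching abruptly between discrete control modes, which is the practical appeal of using an FLS output as the RL action baseline.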

Journal ArticleDOI
TL;DR: This paper presents a holistic framework with image-guided and automation techniques to robotize suture grasping even under complex environments, filling the gap between automated surgical stitching and looping and stepping toward a higher level of task autonomy in surgical knot tying.
Abstract: To realize a higher level of autonomy in surgical knot tying in minimally invasive surgery (MIS), automated suture grasping, which bridges the suture stitching and looping procedures, is an important yet challenging task that needs to be achieved. This paper presents a holistic framework with image-guided and automation techniques to robotize this operation even under complex environments. The whole task is initialized by suture segmentation, in which we propose a novel semi-supervised learning architecture featuring a suture-aware loss to pertinently learn the suture's slender structure using both annotated and unannotated data. With successful segmentation in the stereo camera views, we develop a Sampling-based Sliding Pairing (SSP) algorithm to optimize the suture's 3D shape online. By jointly studying the robotic configuration and the suture's spatial characteristics, a target function is introduced to find the optimal grasping pose of the surgical tool under Remote Center of Motion (RCM) constraints. To compensate for inherent errors and practical uncertainties, a unified grasping strategy with a novel vision-based mechanism is introduced to autonomously accomplish this grasping task. Our framework is extensively evaluated on learning-based segmentation, 3D reconstruction, and image-guided grasping on the da Vinci Research Kit (dVRK) platform, where we achieve high performance and success rates in perception and robotic manipulation. These results prove the feasibility of our approach in automating the suture grasping task, and this work fills the gap between automated surgical stitching and looping, stepping toward a higher level of task autonomy in surgical knot tying. Note to Practitioners—This paper aims to automate the suture grasping task in surgical knot tying by leveraging stereo visual guidance.
Effectively robotizing this procedure requires multidisciplinary knowledge spanning suture segmentation, 3D shape reconstruction, and reliable automated grasping, yet no existing works tackle this procedure, especially with robots under RCM kinematic constraints and in complex environments. In this article, we propose a learning-driven method along with a 3D shape optimizer, which conducts suture segmentation and outputs accurate spatial coordinates, serving as guidance for the automated grasping operation. Apart from this, we introduce a unified function to optimize the grasping pose, and a vision-based grasping strategy is also proposed to intelligently complete this task. The experiments extensively validate the feasibility of our framework for automated suture grasping, and its successful completion can serve as a basis for the following looping manipulation, hence filling a gap in robot-assisted knot tying. This framework can also be encapsulated into a medical robotic system: by simply indicating (e.g., with a mouse click) the rough position of the suture's tip in one camera frame, the framework can be initialized and accomplish the suture grasping task, promoting full autonomy of surgical knot tying in the near future.
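A target function for grasp-pose selection on a reconstructed suture can be sketched as a scored search along the 3-D polyline. The cost below (tool travel plus a penalty for approaching along the suture tangent, since grasping is easier perpendicular to the thread) is an invented illustration, not the paper's formulation, which additionally encodes the RCM constraint of the surgical tool.

```python
# Illustrative grasp-point selection along a reconstructed 3-D suture
# polyline: trade off tool travel against tangent alignment.
import math

def best_grasp_point(suture_pts, tool_tip, w_dist=1.0, w_align=2.0):
    """Return (point, cost) minimizing travel + |cos(angle)| alignment.

    Interior polyline points only; the local tangent is estimated by the
    central difference of neighboring points.
    """
    def sub(a, b):
        return [a[i] - b[i] for i in range(3)]
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    best, best_cost = None, float("inf")
    for i in range(1, len(suture_pts) - 1):
        p = suture_pts[i]
        tangent = sub(suture_pts[i + 1], suture_pts[i - 1])
        approach = sub(p, tool_tip)
        d, t = norm(approach), norm(tangent)
        if d == 0 or t == 0:
            continue
        cos_angle = abs(sum(tangent[k] * approach[k] for k in range(3))) / (t * d)
        cost = w_dist * d + w_align * cos_angle   # near + perpendicular wins
        if cost < best_cost:
            best, best_cost = p, cost
    return best, best_cost
```

On a straight suture, this picks the point closest to the tool whose approach direction is perpendicular to the thread, which matches the intuition behind jointly weighting reachability and grasp geometry.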

Journal ArticleDOI
TL;DR: In this paper, a two-level memetic algorithm is proposed to solve the path planning problem of the UAV/UGV cooperative system for illegal urban building detection, taking the limits of UGV speed, UAV load power, and UAV/UGV communication restrictions into consideration.
Abstract: Studies of Unmanned Air/Ground Vehicle (UAV/UGV) cooperative detection systems have received much attention due to their wide applications in disaster rescue, target tracking, intelligent surveillance, and automatic package delivery missions. UAVs provide a broad view and fly at high speed in the air, while UGVs have sufficient load capacity and can serve as repeater stations on the ground. The path planning of a UAV/UGV cooperative system is an important but difficult issue, which aims to plan paths for both the UAVs and the UGVs in the system to cooperatively complete a mission. In this article, we consider the path planning problem of the UAV/UGV cooperative system for illegal urban building detection, taking the limits of UGV speed, UAV load power, and UAV/UGV communication restrictions into consideration. To solve this problem, we first model the path planning problem as a constrained optimization problem that minimizes the overall execution time for completing the illegal urban building detection tasks and then propose a two-level memetic algorithm (called Two-MA) to solve the path planning problems of both the UAV and the UGV. Experiments on both synthetic and real-world data sets show the superiority of the proposed Two-MA over several state-of-the-art algorithms in solving the path planning problems of the UAV and UGV for illegal urban building detection tasks. Note to Practitioners—This article was motivated by the task of detecting illegal buildings in cities with unmanned vehicles. Previous works mainly focus on path planning of either UAVs or UGVs in this task. This article proposes a new approach using an Unmanned Air/Ground Vehicle (UAV/UGV) cooperative system for detecting illegal buildings in parks, taking the limits of UGV speed, UAV load power, and UAV/UGV communication restrictions into consideration. This cooperative system consists of a UAV, a UGV, and a control center.
The UAV, equipped with cameras, takes aerial photographs and can transmit the collected photos to the control center. The UGV handles loading and transportation on the ground and can serve as a takeoff and landing platform for the UAV. The control center executes computationally intensive tasks such as data transmission and processing, task scheduling, and vehicle coordination. To quickly complete all detection tasks, a memetic algorithm is proposed for path planning of both the UAV and the UGV. The simulation results show that the proposed algorithm enables the UAV/UGV cooperative system to visit all buildings in cities with a minimum task execution time.
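A memetic algorithm couples a population-based genetic search with local refinement of each offspring. The single-level miniature below (order crossover plus 2-opt refinement over visit orders on a toy routing instance) illustrates only that pattern; the paper's Two-MA is a two-level scheme that additionally coordinates the UAV and UGV paths under the stated constraints.

```python
# Miniature memetic algorithm: GA over visit orders with 2-opt local
# search applied to each offspring (the "memetic" step). Illustrative
# single-level sketch, not the two-level Two-MA of the paper.
import itertools, math, random

def tour_len(order, pts):
    """Closed-tour length over waypoints visited in the given order."""
    return sum(math.dist(pts[order[i]], pts[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def two_opt(order, pts):
    """Local search: reverse segments while doing so shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i, j in itertools.combinations(range(len(order)), 2):
            cand = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
            if tour_len(cand, pts) < tour_len(order, pts):
                order, improved = cand, True
    return order

def memetic(pts, pop_size=10, gens=20, seed=0):
    """Evolve visit orders; refine every offspring with 2-opt."""
    rng = random.Random(seed)
    pop = [two_opt(rng.sample(range(len(pts)), len(pts)), pts)
           for _ in range(pop_size)]
    for _ in range(gens):
        a, b = rng.sample(pop, 2)
        cut = rng.randrange(1, len(pts))
        child = a[:cut] + [g for g in b if g not in a[:cut]]  # order crossover
        child = two_opt(child, pts)                           # local refinement
        pop.sort(key=lambda o: tour_len(o, pts))
        pop[-1] = child                                       # replace worst
    return min(pop, key=lambda o: tour_len(o, pts))
```

The global crossover explores the order space while 2-opt repairs each candidate locally, which is why memetic algorithms typically converge far faster than a plain GA on routing-style problems.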

Journal ArticleDOI
TL;DR: Zhang et al. developed a novel data-driven methodology termed augmented time regularized generative adversarial network (ATR-GAN), which is capable of generating more effective artificial samples for training supervised learning models.
Abstract: Supervised machine learning techniques, such as classification models, have been widely applied to online process anomaly detection in advanced manufacturing. However, since abnormal process states rarely occur in regular manufacturing settings, the data collected for model training may be highly imbalanced, which may result in significant training bias for supervised learning and, thus, further deteriorate the anomaly detection accuracy. To reduce the training bias, a natural idea is to incorporate data augmentation techniques to generate effective artificial sample data for the abnormal process states. However, most of the existing data augmentation methods do not effectively consider the temporal orders of the sensor signals, and they also usually require large amounts of actual samples to ensure satisfactory augmentation performance. To address these limitations, this article developed a novel data-driven methodology termed augmented time regularized generative adversarial network (ATR-GAN). By incorporating a proposed augmented generator, ATR-GAN is capable of generating more effective artificial samples for training supervised learning models. The novelty of this augmented generator in the proposed methodology can be summarized into three aspects: 1) an augmented filter layer is introduced in the augmented generator to identify the high-quality artificial samples; 2) in the augmented filter layer, a new distance metric termed time-regularized Hausdorff (TRH) distance is developed to accurately measure the similarity between actual samples and the generated artificial samples; and 3) batching techniques are also employed in the proposed augmented generator to further increase the diversity of the artificial data and fully utilize the relatively limited training data. In addition, the effectiveness of the proposed ATR-GAN is also validated by both numerical simulation and a real-world case study in additive manufacturing. 
Note to Practitioners—Online process anomaly detection currently plays a significant role in advanced manufacturing since unexpected anomalies may damage product quality and even result in catastrophic loss. In practice, processes are mostly under normal conditions, and anomalies rarely occur. Therefore, the data collected under abnormal conditions are very limited compared to normal conditions, which causes a data imbalance issue and leads to deteriorated detection accuracy. Many existing data augmentation methods, such as the generative adversarial network (GAN), cannot synthesize diversified high-quality artificial samples using only relatively limited actual samples. There is an urgent need to develop an effective data augmentation methodology that addresses the data imbalance issue in process anomaly detection. This article developed a novel approach called augmented time regularized GAN (ATR-GAN) for online sensor data augmentation. With this new approach, applications in additive manufacturing demonstrate that the performance of data augmentation can be improved effectively and that the anomaly detection accuracy also increases significantly. Moreover, the developed methodology is inherently a generic framework, so it can be further transferred to applications in many other areas that need data augmentation.
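The idea of a time-regularized Hausdorff distance (penalizing matches between temporally distant samples so that signals with similar values but shifted timing are not judged identical) can be illustrated as follows. The weighting form below is an assumption for illustration; the exact TRH definition is given in the paper.

```python
# Illustrative time-regularized Hausdorff (TRH) distance between two
# equal-rate 1-D signals. The temporal-penalty form is an assumption,
# not the paper's exact definition.
def trh_distance(x, y, lam=0.1):
    """Symmetric max-min Hausdorff distance with a temporal penalty.

    Sample i of one signal is matched to the sample j of the other
    minimizing |x[i] - y[j]| + lam * |i - j|, so matching points far
    apart in time costs extra; lam controls the regularization strength.
    """
    def directed(a, b):
        return max(min(abs(av - bv) + lam * abs(i - j)
                       for j, bv in enumerate(b))
                   for i, av in enumerate(a))
    return max(directed(x, y), directed(y, x))
```

With `lam = 0` this reduces to the plain Hausdorff distance over sample values; increasing `lam` makes the metric progressively more sensitive to temporal misalignment, which is the property needed when comparing generated sensor signals against real ones.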

Journal ArticleDOI
TL;DR: A deep reinforcement learning-based approach is proposed to detect data integrity attacks, which utilizes a Long Short-Term Memory layer to extract the state features of previous time steps in determining whether the system is currently under attack.
Abstract: A smart grid integrates advanced sensors, efficient measurement methods, progressive control technologies, and other techniques and devices to achieve safe, efficient, and economical operation of the grid system. However, the diversified and open environment of a smart grid makes its energy and information vulnerable to malicious attacks. As a representative cyber-physical attack, the data integrity attack has an extremely severe impact on grid operation because it can bypass traditional detection mechanisms by adjusting the attack vector. In this paper, we first present the attack strategy against dynamic state estimation of the power grid from the perspective of the adversary and formulate the data integrity attack detection problem, which has the characteristic of sequential decision making, as a partially observable Markov decision process. Then, a deep reinforcement learning-based approach is proposed to detect data integrity attacks, which utilizes a Long Short-Term Memory layer to extract the state features of previous time steps in determining whether the system is currently under attack. Moreover, noisy networks are employed to ensure effective agent exploration, which prevents the agent from sticking to a non-optimal policy. The principle of multi-step learning is adopted to increase the estimation accuracy of the Q value. To address the sparse rewards problem, prioritized experience replay is employed to increase training efficiency. Simulation results demonstrate that the proposed detection approach surpasses the benchmarks in the comparison metrics of detection delay and false rate. Note to Practitioners—In this paper, we present a deep reinforcement learning-based algorithm to defend against data integrity attacks on the smart grid. Most previous works discretized the system states and utilized only the current state information to identify whether the system is under attack.
For this reason, the detection policy may totally ignore the continuously changing characteristics of the grid states, which leads to poor detection performance. Moreover, the attacked system states account for only a small part of the entire grid operation states, so the probability of sampling an experience containing the attack state is extremely small, which limits the learning efficiency of previous RL-based detection approaches. In order to increase the accuracy of detection, we first present the attack strategy against the power grid's dynamic state estimation from the perspective of the adversary and formulate the partially observable Markov decision process model of the attack detection problem. Moreover, we propose a deep reinforcement learning-based detection approach combining an LSTM network to extract the system state features of the previous time steps and determine whether the system is currently being attacked. To address the sparse rewards problem, prioritized experience replay is used to increase learning efficiency. The experiments demonstrate the effectiveness of the proposed detection scheme compared with benchmarks in terms of detection delay as well as accuracy. In conclusion, the proposed detection scheme helps defend against data integrity attacks without obtaining the opponent's strategy in advance and can be conveniently applied to the real-world security management system of a smart grid.
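The role of prioritized experience replay in this setting (keeping rare attack-state transitions from being drowned out by abundant normal-operation transitions) can be sketched minimally. The proportional scheme below is a generic illustration, not the paper's implementation; a production version would use a sum-tree for sampling and importance-sampling weights for bias correction.

```python
# Minimal proportional prioritized experience replay: transitions with
# larger TD error are sampled more often. Illustrative sketch only.
import random

class PrioritizedReplay:
    def __init__(self, capacity=1000, alpha=0.6, seed=0):
        self.capacity, self.alpha = capacity, alpha
        self.buf, self.prio = [], []
        self.rng = random.Random(seed)

    def add(self, transition, td_error):
        """Store a transition with priority |td_error|**alpha."""
        if len(self.buf) >= self.capacity:
            self.buf.pop(0)   # drop the oldest transition
            self.prio.pop(0)
        self.buf.append(transition)
        self.prio.append((abs(td_error) + 1e-6) ** self.alpha)

    def sample(self, k):
        """Draw k transitions with probability proportional to priority."""
        return self.rng.choices(self.buf, weights=self.prio, k=k)
```

Even when a single high-TD-error "attack" transition sits among dozens of near-zero-error normal ones, proportional sampling returns it in a large fraction of draws, which is exactly what accelerates learning on the sparse attack states.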

Journal ArticleDOI
TL;DR: In this article, a new metric is designed to quantify the covertness of the UAV, based on which a multi-objective UAV trajectory planning problem is formulated to maximize the disguising performance and minimize the trajectory length.
Abstract: This article considers the use of an unmanned aerial vehicle (UAV) for covert video surveillance of a mobile target on the ground and presents a new online UAV trajectory planning technique with a balanced consideration of the energy efficiency, covertness, and aeronautic maneuverability of the UAV. Specifically, a new metric is designed to quantify the covertness of the UAV, based on which a multiobjective UAV trajectory planning problem is formulated to maximize the disguising performance and minimize the trajectory length of the UAV. A forward dynamic programming method is put forth to solve the problem online and plan the trajectory for the foreseeable future. In addition, the kinematic model of the UAV is considered in the planning process so that the planned trajectory can be tracked without any later adjustment. Extensive computer simulations are conducted to demonstrate the effectiveness of the proposed technique. Note to Practitioners —The “Follow Me” flight mode is available in many unmanned aerial vehicle (UAV) products, and this technique enables a UAV to automatically follow a target. However, this flight mode may make the UAV noticeable to the target and compromise the video surveillance missions of the UAV. Inspired by security surveillance applications in which UAV surveillance must be conducted covertly so that a target does not take actions to avoid being monitored, we propose an efficient method to construct the trajectory for the UAV. The proposed method considers the visual covertness and the battery capacity limitation of the UAV, and it can produce a trajectory online for the UAV. The proposed method and scenario can potentially extend the “Follow Me” flight mode and generate new applications and markets for UAVs.
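Forward dynamic programming over a discretized decision variable can be sketched on a toy version of this trade-off. The stage cost below (maneuver effort, a covertness penalty when the stand-off distance from the target is conspicuously small, and a distance-related surveillance cost) is invented for illustration; it is not the covertness metric or kinematic model of the paper.

```python
# Illustrative forward DP over a discretized stand-off distance from a
# followed target. The stage cost is an assumption, not the paper's:
#   |move|         -- maneuver/energy for changing stand-off
#   max(0, 3 - o)  -- covertness penalty when conspicuously close
#   0.2 * o        -- surveillance-quality/energy cost of distance
def forward_dp_trajectory(n_stages, offsets, start):
    def stage_cost(prev, o):
        return abs(o - prev) + max(0.0, 3.0 - o) + 0.2 * o

    # cost_to[o]: best cost of reaching stand-off o at the current stage
    cost_to = {o: stage_cost(start, o) for o in offsets}
    parent = [{o: start for o in offsets}]
    for _ in range(n_stages - 1):
        new_cost, new_parent = {}, {}
        for o in offsets:
            best_prev = min(offsets, key=lambda p: cost_to[p] + stage_cost(p, o))
            new_cost[o] = cost_to[best_prev] + stage_cost(best_prev, o)
            new_parent[o] = best_prev
        cost_to = new_cost
        parent.append(new_parent)
    # Backtrack the cheapest terminal stand-off into a full trajectory.
    o = min(cost_to, key=cost_to.get)
    traj = [o]
    for table in reversed(parent[1:]):
        o = table[o]
        traj.append(o)
    return traj[::-1], min(cost_to.values())
```

Because the DP looks across all stages at once, it accepts a larger maneuver cost early (moving straight to the balanced stand-off) rather than paying the covertness penalty repeatedly, which a greedy per-stage planner would not do.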