
Showing papers in "IEEE Transactions on Automation Science and Engineering in 2021"


Journal ArticleDOI
TL;DR: In this article, the authors use TbD to transfer motion skills from multiple human demonstrations in open surgery to robot manipulators in robot-assisted minimally invasive surgery (RA-MIS), using a decoupled controller that respects the remote center of motion constraint by exploiting the redundancy of the robot.
Abstract: Learning manipulation skills from open surgery provides more flexible access to the organ targets in the abdominal cavity, and this could make the surgical robot work in a highly intelligent and friendly manner. Teaching by demonstration (TbD) is capable of transferring manipulation skills from humans to humanoid robots by employing active learning of multiple demonstrated tasks. This work aims to transfer motion skills from multiple human demonstrations in open surgery to robot manipulators in robot-assisted minimally invasive surgery (RA-MIS) by using TbD. However, the kinematic constraint must be respected while a robot performs the learned skills in minimally invasive surgery. In this article, we propose a novel methodology that integrates cognitive learning techniques and the developed control techniques, allowing the robot to learn senior surgeons’ skills and to perform the learned surgical operations in semiautonomous surgery in the future. Finally, experiments are performed to verify the efficiency of the proposed strategy, and the results demonstrate the ability of the system to transfer human manipulation skills to a robot in RA-MIS and also show that the remote center of motion (RCM) constraint can be guaranteed simultaneously. Note to Practitioners —This article is inspired by the limited access to manipulation in laparoscopic surgery under a kinematic constraint at the point of incision. Current commercial surgical robots are mostly operated by teleoperation, which affords little autonomy in surgery. Assisting and enhancing the surgeon’s performance by increasing the autonomy of surgical robots is of fundamental importance. The technique of teaching by demonstration (TbD) is capable of transferring manipulation skills from humans to humanoid robots by employing active learning of multiple demonstrated tasks. With improved abilities to interact with humans, such as flexibility and compliance, the new generation of serial robots is becoming increasingly popular in nonclinical research. Thus, advanced control strategies are required that integrate cognitive functions and learning techniques into the processes of surgical operation among robots, the surgeon, and minimally invasive surgery (MIS). In this article, we propose a novel methodology to model the manipulation skill from multiple demonstrations and execute the learned operations in robot-assisted minimally invasive surgery (RA-MIS) by using a decoupled controller that respects the remote center of motion (RCM) constraint by exploiting the redundancy of the robot. The developed control scheme has the following functionalities: 1) it enables 3-D manipulation skill modeling after multiple demonstrations of the surgical tasks in open surgery by integrating dynamic time warping (DTW) and Gaussian mixture model (GMM)-based dynamic movement primitives (DMPs) and 2) it maintains the RCM constraint in a smaller safe area while performing the learned operation in RA-MIS. The developed control strategy can also potentially be used in other industrial applications with similar scenarios.
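
Since the abstract names the skill-modeling ingredients (DTW alignment, GMM regression, DMP encoding), a toy example may help make the DMP stage concrete. The Python sketch below fits a one-dimensional discrete DMP forcing term to a single demonstration by least squares; the gains, basis layout, and the omission of the DTW/GMM preprocessing are illustrative simplifications, not the authors' implementation.

```python
import numpy as np

def fit_dmp(y_demo, dt, alpha=25.0, beta=6.25, n_basis=20, ax=4.0):
    """Least-squares fit of a 1-D discrete DMP forcing term to one demo."""
    T = len(y_demo)
    tau = T * dt
    yd = np.gradient(y_demo, dt)           # demo velocity
    ydd = np.gradient(yd, dt)              # demo acceleration
    y0, g = y_demo[0], y_demo[-1]
    t = np.arange(T) * dt
    x = np.exp(-ax * t / tau)              # canonical system, decays 1 -> 0
    # Transformation system: tau^2*ydd = alpha*(beta*(g - y) - tau*yd) + f(x)
    f_target = tau**2 * ydd - alpha * (beta * (g - y_demo) - tau * yd)
    c = np.exp(-ax * np.linspace(0, 1, n_basis))   # basis centers in x
    h = n_basis / c                                 # basis widths
    psi = np.exp(-h * (x[:, None] - c) ** 2)
    phi = psi * x[:, None] / psi.sum(axis=1, keepdims=True)
    w, *_ = np.linalg.lstsq(phi, f_target, rcond=None)
    return w

w = fit_dmp(np.sin(np.linspace(0, np.pi, 100)), dt=0.01)
print(w.shape)  # (20,) forcing weights; a rollout integrates the same equations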

110 citations


Journal ArticleDOI
TL;DR: A novel robust framework for the day-ahead energy scheduling of a residential microgrid comprising interconnected smart users, each owning individual RESs, noncontrollable loads (NCLs), energy- and comfort-based controllable loads (CLs), and individual plug-in electric vehicles (PEVs), who also share common RESs and an energy storage system (ESS).
Abstract: Smart microgrids are experiencing increasing growth due to their economic, social, and environmental benefits. However, the inherent intermittency of renewable energy sources (RESs) and users’ behavior lead to significant uncertainty, which implies important challenges for the system design. Facing this issue, this article proposes a novel robust framework for the day-ahead energy scheduling of a residential microgrid comprising interconnected smart users, each owning individual RESs, noncontrollable loads (NCLs), energy- and comfort-based controllable loads (CLs), and individual plug-in electric vehicles (PEVs). Moreover, users share a number of RESs and an energy storage system (ESS). We assume that the microgrid can buy/sell energy from/to the grid subject to quadratic/linear dynamic pricing functions. The objective of scheduling is minimizing the expected energy cost while satisfying device/comfort/contractual constraints, including feasibility constraints on energy transfer between users and the grid under RES generation and users’ demand uncertainties. To this aim, first, we formulate a min–max robust problem to obtain the optimal CL scheduling and charging/discharging strategies of the ESS and PEVs. Then, based on the duality theory for multiobjective optimization, we transform the min–max problem into a mixed-integer quadratic programming problem to solve the equivalent robust counterpart of the scheduling problem effectively. We deal with the conservativeness of the proposed approach for different scenarios and quantify the effects of the budget of uncertainty on the cost saving, the peak-to-average ratio, and the constraints’ violation rate. We validate the effectiveness of the method on a simulated case study and compare the results with a related robust approach. Note to Practitioners —This article is motivated by the emerging need for intelligent demand-side management (DSM) approaches in smart microgrids in the presence of both power generation and demand uncertainties. The proposed robust energy scheduling strategy allows the decision maker (i.e., the energy manager of the microgrid) to make a satisfactory tradeoff between the users’ payment and the constraints’ violation rate, considering the energy cost saving, the system technical limitations, and the users’ comfort, by adjusting the values of the budget of uncertainty. The proposed framework is generic and flexible as it can be applied to different structures of microgrids considering various types of uncertainties in energy generation or demand.
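
To make the duality-based reformulation concrete, the sketch below robustifies a single budgeted-uncertainty (Bertsimas-Sim style) capacity constraint and solves it as an LP with SciPy. All coefficients are invented, the variables are continuous, and the paper's actual counterpart is a far richer mixed-integer quadratic program; only the dualization pattern (auxiliary variables z and p replacing the inner maximization) is illustrated.

```python
import numpy as np
from scipy.optimize import linprog

n, Gamma, cap = 4, 2.0, 10.0
u = np.array([3.0, 2.0, 4.0, 1.5])     # per-unit user benefit (made up)
a = np.array([1.0, 1.2, 0.8, 1.1])     # nominal energy use per unit
d = np.array([0.2, 0.3, 0.1, 0.25])    # worst-case deviation per unit

# Decision vector [x (n), z (1), p (n)]; maximize u.x -> minimize -u.x
cost = np.concatenate([-u, [0.0], np.zeros(n)])
A_ub = [np.concatenate([a, [Gamma], np.ones(n)])]  # a.x + Gamma*z + sum(p) <= cap
b_ub = [cap]
for j in range(n):                                  # d_j*x_j - z - p_j <= 0
    row = np.zeros(2 * n + 1)
    row[j], row[n], row[n + 1 + j] = d[j], -1.0, -1.0
    A_ub.append(row); b_ub.append(0.0)
bounds = [(0, None)] * (2 * n + 1)
res = linprog(cost, A_ub=np.vstack(A_ub), b_ub=b_ub, bounds=bounds)
print("robust schedule:", np.round(res.x[:n], 3), "benefit:", -res.fun)
```

Raising Gamma hedges against more coefficients deviating at once, which is exactly the cost-versus-violation-rate tradeoff the budget of uncertainty controls.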

109 citations


Journal ArticleDOI
TL;DR: A novel algorithm is proposed to maximize the profit of distributed cloud and edge computing systems while meeting response time limits of tasks and realizes a larger profit than several typical offloading strategies.
Abstract: Edge computing is a new architecture to provide computing, storage, and networking resources for achieving the Internet of Things. It brings computation to the network edge in close proximity to users. However, nodes in the edge have limited energy and resources. Completely running tasks in the edge may cause poor performance. Cloud data centers (CDCs) have rich resources for executing tasks, but they are located in places far away from users. CDCs lead to long transmission delays and large financial costs for utilizing resources. Therefore, it is essential to smartly offload users’ tasks between a CDC layer and an edge computing layer. This work proposes a cloud and edge computing system, which has a terminal layer, edge computing layer, and CDC layer. Based on it, this work designs a profit-maximized collaborative computation offloading and resource allocation algorithm to maximize the profit of systems and guarantee that response time limits of tasks are strictly met. In each time slot, this work jointly considers CPU, memory, and bandwidth resources, load balance of all heterogeneous nodes in the edge layer, maximum amount of energy, maximum number of servers, and task queue stability in the CDC layer. Considering the abovementioned factors, a single-objective constrained optimization problem is formulated and solved by a proposed simulated-annealing-based migrating birds optimization procedure to obtain a close-to-optimal solution. The proposed method achieves joint optimization of computation offloading between CDC and edge, and resource allocation in CDC. Realistic data-based simulation results demonstrate that it realizes higher profit than its peers. Note to Practitioners —This work considers the joint optimization of computation offloading between Cloud data center (CDC) and edge computing layers, and resource allocation in CDC. It is important to maximize the profit of distributed cloud and edge computing systems by optimally scheduling all tasks between them given user-specific response time limits of tasks. It is challenging to execute them in nodes in the edge computing layer because their computation resources and battery capacities are often constrained and heterogeneous. Current offloading methods fail to jointly optimize computation offloading and resource allocation for nodes in the edge and servers in CDC. They are insufficient and coarse-grained to schedule arriving tasks. In this work, a novel algorithm is proposed to maximize the profit of distributed cloud and edge computing systems while meeting response time limits of tasks. It explicitly specifies the task service rate and the selected node for each task in each time slot by considering resource limits, load balance requirement, and processing capacities of nodes in the edge, and server and energy constraints in CDC. Real-life data-driven simulations show that the proposed method realizes a larger profit than several typical offloading strategies. It can be readily implemented and incorporated into large-scale industrial computing systems.
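
The acceptance rule at the heart of a simulated-annealing-based search can be shown on a toy offloading instance. The sketch below is a plain simulated-annealing loop over task-to-node assignments with an invented profit model; the migrating-birds neighbor sharing, queue-stability, memory, and bandwidth constraints of the actual algorithm are omitted.

```python
import math, random

random.seed(0)
tasks = [random.uniform(1, 4) for _ in range(20)]        # CPU demand per task
nodes = ["edge0", "edge1", "cloud"]
price = {"edge0": 1.0, "edge1": 1.1, "cloud": 0.6}       # revenue per unit
delay_pen = {"edge0": 0.0, "edge1": 0.0, "cloud": 0.5}   # WAN delay penalty
cap = {"edge0": 15.0, "edge1": 15.0, "cloud": float("inf")}

def profit(assign):
    load = {m: 0.0 for m in nodes}
    total = 0.0
    for demand, m in zip(tasks, assign):
        load[m] += demand
        total += demand * (price[m] - delay_pen[m])
    if any(load[m] > cap[m] for m in nodes):             # edge capacity violated
        return -1e9
    return total

cur = [random.choice(nodes) for _ in tasks]
best, T = cur[:], 5.0
for step in range(3000):
    cand = cur[:]
    cand[random.randrange(len(tasks))] = random.choice(nodes)  # move one task
    delta = profit(cand) - profit(cur)
    if delta > 0 or random.random() < math.exp(delta / T):     # SA acceptance
        cur = cand
    if profit(cur) > profit(best):
        best = cur[:]
    T *= 0.999                                                  # cooling
print(round(profit(best), 2))
```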

102 citations


Journal ArticleDOI
TL;DR: A multiobjective genetic algorithm based on an external archive to solve optimal disassembly problems subject to multiresource constraints is proposed and the experimental results show that the proposed approach can solve them effectively.
Abstract: Industrial products’ reuse, recovery, and recycling are very important due to the exhaustion of ecological resources. Effective product disassembly planning methods can improve the recovery efficiency and reduce harmful impact on the environment. However, the existing approaches pay little attention to disassembly resources, such as tools and operators, that can significantly influence the optimal disassembly sequences. This article considers a multiobjective resource-constrained disassembly optimization problem modeled with timed Petri nets such that energy consumption is minimized, while disassembly profit is maximized. Since its solution complexity has exponential growth with the number of components in a product, a multiobjective genetic algorithm based on an external archive is used to solve it. Its effectiveness is verified by comparing it with nondominated sorting genetic algorithm II and a collaborative resource allocation strategy for a multiobjective evolutionary algorithm based on decomposition. Note to Practitioners —This article establishes a novel dual-objective optimization model for product disassembly subject to multiresource constraints. In an actual disassembly process, a decision-maker may want to minimize energy consumption and maximize disassembly profit. This article considers both objectives and proposes a multiobjective genetic algorithm based on an external archive to solve optimal disassembly problems. The experimental results show that the proposed approach can solve them effectively. The obtained solutions give decision-makers multiple choices to select the right disassembly process when an actual product is disassembled.
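
The external archive that distinguishes the proposed GA amounts to standard nondominated bookkeeping over the two objectives (minimize energy, maximize profit). A minimal sketch follows, with the genetic operators over disassembly sequences omitted:

```python
# s = (energy, profit): minimize the first entry, maximize the second.
def dominates(s1, s2):
    """s1 dominates s2 if it is no worse in both objectives and better in one."""
    no_worse = s1[0] <= s2[0] and s1[1] >= s2[1]
    better = s1[0] < s2[0] or s1[1] > s2[1]
    return no_worse and better

def update_archive(archive, candidate):
    """Insert candidate unless dominated; drop members it dominates."""
    if any(dominates(a, candidate) for a in archive):
        return archive
    return [a for a in archive if not dominates(candidate, a)] + [candidate]

archive = []
for sol in [(10.0, 5.0), (8.0, 5.0), (8.0, 7.0), (12.0, 9.0), (9.0, 6.0)]:
    archive = update_archive(archive, sol)
print(archive)   # nondominated set: [(8.0, 7.0), (12.0, 9.0)]
```

The final archive is exactly the menu of tradeoff solutions the Note to Practitioners says decision-makers can choose from.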

101 citations


Journal ArticleDOI
TL;DR: This article builds an end-to-end deep neural network that takes as input a pair of RGB and thermal images and outputs pixel-wise semantic labels; the experimental results demonstrate that the network outperforms the state-of-the-art networks.
Abstract: Semantic segmentation of urban scenes is an essential component in various applications of autonomous driving. It has made great progress with the rise of deep learning technologies. Most of the current semantic segmentation networks use single-modal sensory data, which are usually the RGB images produced by visible cameras. However, the segmentation performance of these networks is prone to degradation when lighting conditions are unsatisfactory, such as dim light or darkness. We find that thermal images produced by thermal imaging cameras are robust to challenging lighting conditions. Therefore, in this article, we propose a novel RGB and thermal data fusion network named FuseSeg to achieve superior performance of semantic segmentation in urban scenes. The experimental results demonstrate that our network outperforms the state-of-the-art networks. Note to Practitioners —This article investigates the problem of semantic segmentation of urban scenes when lighting conditions are unsatisfactory. We provide a solution to this problem via information fusion with RGB and thermal data. We build an end-to-end deep neural network, which takes as input a pair of RGB and thermal images and outputs pixel-wise semantic labels. Our network could be used for urban scene understanding, which serves as a fundamental component of many autonomous driving tasks, such as environment modeling, obstacle avoidance, motion prediction, and planning. Moreover, the simple design of our network allows it to be easily implemented using various deep learning frameworks, which facilitates applications on different hardware or software platforms.
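
A schematic of the two-encoder fusion idea is easy to express in PyTorch. The layer sizes, element-wise-addition fusion, and single upsampling head below are illustrative guesses, not the published FuseSeg architecture:

```python
import torch
import torch.nn as nn

class RGBThermalSeg(nn.Module):
    def __init__(self, n_classes=9):
        super().__init__()
        def enc(in_ch):  # tiny stand-in encoder; FuseSeg uses a deeper backbone
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
        self.rgb_enc, self.th_enc = enc(3), enc(1)
        self.head = nn.Sequential(
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=4, mode='bilinear', align_corners=False),
            nn.Conv2d(64, n_classes, 1))

    def forward(self, rgb, thermal):
        fused = self.rgb_enc(rgb) + self.th_enc(thermal)  # element-wise fusion
        return self.head(fused)                            # pixel-wise logits

logits = RGBThermalSeg()(torch.rand(1, 3, 64, 64), torch.rand(1, 1, 64, 64))
print(logits.shape)  # torch.Size([1, 9, 64, 64])
```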

96 citations


Journal ArticleDOI
TL;DR: A multiobjective optimization method for DGDCs to maximize the profit of DGDC providers and minimize the average task loss possibility of all applications by jointly determining the split of tasks among multiple ISPs and task service rates of each GDC.
Abstract: The industry of data centers is the fifth largest energy consumer in the world. Distributed green data centers (DGDCs) consume 300 billion kWh per year to provide different types of heterogeneous services to global users. Users around the world bring revenue to DGDC providers according to the actual quality of service (QoS) of their tasks. Their tasks are delivered to DGDCs through multiple Internet service providers (ISPs) with different bandwidth capacities and unit bandwidth prices. In addition, the prices of power grid, wind, and solar energy in different GDCs vary with their geographical locations. Therefore, it is highly challenging to schedule tasks among DGDCs in a high-profit and high-QoS way. This work designs a multiobjective optimization method for DGDCs to maximize the profit of DGDC providers and minimize the average task loss possibility of all applications by jointly determining the split of tasks among multiple ISPs and the task service rates of each GDC. A problem is formulated and solved with a simulated-annealing-based biobjective differential evolution (SBDE) algorithm to obtain an approximate Pareto-optimal set. The method of minimum Manhattan distance is adopted to select a knee solution that specifies the Pareto-optimal task service rates and task split among ISPs for DGDCs in each time slot. Real-life data-based experiments demonstrate that the proposed method achieves lower task loss of all applications and larger profit than several existing scheduling algorithms. Note to Practitioners —This work aims to maximize the profit and minimize the task loss for DGDCs powered by renewable energy and the smart grid by jointly determining the split of tasks among multiple ISPs. Existing task scheduling algorithms fail to jointly consider and optimize the profit of DGDC providers and the QoS of tasks. Therefore, they fail to intelligently schedule tasks of heterogeneous applications and allocate infrastructure resources within their response time bounds. In this work, a new method that tackles the drawbacks of existing algorithms is proposed. It is achieved by adopting the proposed SBDE algorithm, which solves a multiobjective optimization problem. Simulation experiments demonstrate that compared with three typical task scheduling approaches, it increases profit and decreases task loss. It can be readily and easily integrated and implemented in real-life industrial DGDCs. Future work needs to investigate real-time green energy prediction with historical data and further combine prediction and task scheduling together to achieve greener and even net-zero-energy data centers.
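
The differential-evolution core of SBDE can be sketched on a single scalarized objective; the real algorithm is bi-objective, maintains an approximate Pareto set, and picks a knee solution by minimum Manhattan distance, all of which are omitted here. Population size, F, CR, and the temperature are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)

def sbde_generation(pop, fitness, fn, F=0.5, CR=0.9, temp=1.0):
    """One DE generation with a simulated-annealing acceptance rule."""
    n, dim = pop.shape
    for i in range(n):
        a, b, c = pop[rng.choice(n, 3, replace=False)]
        mutant = a + F * (b - c)                       # DE/rand/1 mutation
        cross = rng.random(dim) < CR
        trial = np.where(cross, mutant, pop[i])        # binomial crossover
        delta = fn(trial) - fitness[i]
        if delta < 0 or rng.random() < np.exp(-delta / temp):
            pop[i], fitness[i] = trial, fn(trial)      # SA-style acceptance
    return pop, fitness

fn = lambda x: np.sum(x ** 2)                          # toy objective to minimize
pop = rng.uniform(-2, 2, (10, 4))
fitness = np.array([fn(p) for p in pop])
pop, fitness = sbde_generation(pop, fitness, fn)
print(fitness.min())
```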

88 citations


Journal ArticleDOI
TL;DR: An iterated greedy algorithm (IGA) is a simple and powerful heuristic algorithm that is widely used to solve flow-shop scheduling problems (FSPs), an important branch of production scheduling problems as discussed by the authors.
Abstract: An iterated greedy algorithm (IGA) is a simple and powerful heuristic algorithm. It is widely used to solve flow-shop scheduling problems (FSPs), an important branch of production scheduling problems. IGA was first developed to solve an FSP in 2007. Since then, various FSPs have been tackled by using IGA-based methods, including the basic IGA, its variants, and hybrid algorithms with IGA integrated. Up until now, over 100 articles related to this field have been published. However, to the best of our knowledge, there is no existing tutorial or review paper on IGA. Thus, we focus on FSPs and provide a tutorial and comprehensive literature review of IGA-based methods. First, we introduce the framework of the basic IGA and give an example to clearly show its procedure. To help researchers and engineers learn and apply IGA to their FSPs, we provide an open platform to collect and share related materials. Then, we classify the solved FSPs according to their scheduling scenarios, objective functions, and constraints. Next, we classify and introduce the specific methods and strategies used in each phase of IGA for FSPs. Besides, we summarize IGA variants and hybrid algorithms with IGA integrated. Finally, we discuss the current IGA-based methods and already-solved FSP instances, as well as some important future research directions according to their deficiencies and open issues.
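
The basic IGA framework the tutorial describes (destruction, greedy reconstruction, and an acceptance test) fits in a few lines. Below is a minimal Python version for makespan minimization on a permutation flow shop; d = 3 and the constant temperature follow common IG practice rather than any single surveyed paper:

```python
import random
import numpy as np

def makespan(seq, p):
    """Completion time of the last job; p[job][machine] are processing times."""
    m = p.shape[1]
    c = np.zeros(m)
    for j in seq:
        c[0] += p[j, 0]
        for k in range(1, m):
            c[k] = max(c[k], c[k - 1]) + p[j, k]
    return c[-1]

def best_insertion(seq, job, p):
    """Greedily reinsert job at the position minimizing makespan."""
    cands = [seq[:i] + [job] + seq[i:] for i in range(len(seq) + 1)]
    return min(cands, key=lambda s: makespan(s, p))

def iterated_greedy(p, d=3, iters=200, temp=5.0):
    seq = list(range(p.shape[0]))
    random.shuffle(seq)
    best = seq[:]
    for _ in range(iters):
        removed = random.sample(seq, d)              # destruction
        partial = [j for j in seq if j not in removed]
        for j in removed:                            # greedy re-construction
            partial = best_insertion(partial, j, p)
        delta = makespan(partial, p) - makespan(seq, p)
        if delta < 0 or random.random() < np.exp(-delta / temp):
            seq = partial                            # SA-style acceptance
        if makespan(seq, p) < makespan(best, p):
            best = seq[:]
    return best, makespan(best, p)

p = np.random.randint(1, 20, size=(8, 4))            # 8 jobs, 4 machines
print(iterated_greedy(p))
```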

77 citations


Journal ArticleDOI
TL;DR: This article proposes a multichannel-based generative adversarial network (MGAN) with semisupervision to grade DR and demonstrates that the proposed model outperforms the other representative models in terms of accuracy, area under ROC curve (AUC), sensitivity, and specificity.
Abstract: Diabetic retinopathy (DR) is one of the major causes of blindness. It is of great significance to apply deep-learning techniques for DR recognition. However, deep-learning algorithms often depend on large amounts of labeled data, which are expensive and time-consuming to obtain in the medical imaging area. In addition, DR features are inconspicuous and spread out over high-resolution fundus images. Therefore, it is a big challenge to learn the distribution of such DR features. This article proposes a multichannel-based generative adversarial network (MGAN) with semisupervision to grade DR. The multichannel generative model is developed to generate a series of subfundus images corresponding to the scattered DR features. By minimizing the dependence on labeled data, the proposed semisupervised MGAN can identify the inconspicuous lesion features by using high-resolution fundus images without compression. Experimental results on the public Messidor data set show that the proposed model can grade DR effectively. Note to Practitioners —This article is motivated by the challenging problems of inadequate labeled data in medical image analysis and the dispersion of useful features over high-resolution medical images. The inadequacy of labeled data in medical image analysis mainly stems from the following: 1) high-quality annotation of medical imaging samples depends heavily on scarce medical expertise, which is very expensive, and 2) compared with natural images, it is more difficult to collect medical images because of privacy issues. It is of great significance to apply deep-learning techniques for diabetic retinopathy (DR) recognition. In this article, a multichannel generative adversarial network (GAN) with semisupervision is developed for DR-aided diagnosis. The proposed model can deal with the DR classification problem under inadequate labeled data in the following ways: 1) the multichannel generative scheme generates a series of subfundus images corresponding to the scattered DR features and 2) the proposed multichannel-based GAN (MGAN) model with semisupervision can make full use of both labeled and unlabeled data. The experimental results demonstrate that the proposed model outperforms other representative models in terms of accuracy, area under the ROC curve (AUC), sensitivity, and specificity.

74 citations


Journal ArticleDOI
TL;DR: A novel data augmentation classifier (DAC) for imbalanced fault classification that combines supervised learning and data generation processes to obtain an end-to-end model; results show the superiority of DAC and its multigenerator variant MDAC over existing methods.
Abstract: The problem of fault classification in industry has been studied extensively. Most classification algorithms are modeled on the premise of data balance. However, the difficulty of collecting industrial data in different modes varies considerably. This inevitably leads to data imbalance, which adversely affects fault classification performance. This article proposes a novel data augmentation classifier (DAC) for imbalanced fault classification. Data augmentation based on generative adversarial networks (GANs) is an effective way to solve the problem of unbalanced classification. However, the randomness of the GAN generation process restricts the effect of data enhancement. To solve this problem, DAC adopts a data selection strategy based on data filtering and data purification in model training. In addition, DAC combines supervised learning and data generation processes to obtain an end-to-end model. Meanwhile, a multigenerator structure of DAC (MDAC) is proposed to solve the problem of incomplete learning of a single generator when data imbalances become complicated. The proposed DAC and MDAC are applied in two fault classification cases of the Tennessee Eastman (TE) benchmark process, the results of which show the superiority of DAC and MDAC compared to existing methods. Note to Practitioners —Data imbalances are common in fault classification and affect the effectiveness of modeling in industry. As a generative model, generative adversarial networks (GANs) provide new ideas for small-class data augmentation. However, the instability of the GAN training process and the randomness of data generation affect the results of data augmentation. In this article, the GAN generation process is analyzed in detail. The visualization results indicate that no generation is perfect at any single time. Based on the rules of GAN data generation, we propose a data selection strategy during training. High-quality data are selected for data augmentation through data filtering and data purification. Apart from this, we combine the training process of the GAN and the classification model for imbalanced data to reduce modeling time. Through industrial examples, we have evaluated the effectiveness of this method.

63 citations


Journal ArticleDOI
TL;DR: In this article, an efficient optimization algorithm that is a hybrid of the iterated greedy and simulated annealing algorithms (hereinafter, referred to as IGSA) was proposed to solve the flexible job shop scheduling problem with crane transportation processes.
Abstract: In this study, we propose an efficient optimization algorithm that is a hybrid of the iterated greedy and simulated annealing algorithms (hereinafter, referred to as IGSA) to solve the flexible job shop scheduling problem with crane transportation processes (CFJSP). Two objectives are simultaneously considered, namely, the minimization of the maximum completion time and the energy consumptions during machine processing and crane transportation. Different from the methods in the literature, crane lift operations have been investigated for the first time to consider the processing time and energy consumptions involved during the crane lift process. The IGSA algorithm is then developed to solve the CFJSPs considered. In the proposed IGSA algorithm, first, each solution is represented by a 2-D vector, where one vector represents the scheduling sequence and the other vector shows the assignment of machines. Subsequently, an improved construction heuristic considering the problem features is proposed, which can decrease the number of replicated insertion positions for the destruction operations. Furthermore, to balance the exploration abilities and time complexity of the proposed algorithm, a problem-specific exploration heuristic is developed. Finally, a set of randomly generated instances based on realistic industrial processes is tested. Through comprehensive computational comparisons and statistical analyses, the highly effective performance of the proposed algorithm is favorably compared against several efficient algorithms.

59 citations


Journal ArticleDOI
TL;DR: A robust nonlinear model predictive control scheme is presented for underactuated autonomous underwater vehicles (AUVs): a reliable control strategy that accounts for input and state constraints, dynamic uncertainties of the model, and the presence of ocean currents.
Abstract: This article addresses the tracking control problem of 3-D trajectories for underactuated underwater robotic vehicles operating in a constrained workspace including obstacles. More specifically, a robust nonlinear model predictive control (NMPC) scheme is presented for the case of underactuated autonomous underwater vehicles (AUVs) (i.e., unicycle-like vehicles actuated only in the surge, heave, and yaw). The purpose of the controller is to steer the unicycle-like AUV to the desired trajectory with guaranteed input and state constraints (e.g., obstacles, predefined vehicle velocity bounds, and thruster saturations) inside a partially known and dynamic environment where the knowledge of the operating workspace is constantly updated via the vehicle’s onboard sensors. In particular, considering the sensing range of the vehicle, obstacle avoidance with any of the detected obstacles is guaranteed by the online generation of a collision-free trajectory tracking path, despite the model dynamic uncertainties and the presence of external disturbances representing ocean currents and waves. Finally, realistic simulation studies verify the performance and efficiency of the proposed framework. Note to Practitioners —This article was motivated by the problem of robust trajectory tracking for an autonomous underwater vehicle (AUV) operating in an uncertain environment where the knowledge of the operating workspace (e.g., obstacle positions) is constantly updated online via the vehicle’s onboard sensors (e.g., multibeam imaging sonars and laser-based vision systems). In addition, there may be other system limitations (e.g., thruster saturation limits) and operational constraints, induced by various common underwater tasks (e.g., a predefined vehicle speed limit for inspecting the seabed, or mosaicking), which should also be considered in the control strategy. However, among the existing trajectory tracking control approaches for underwater robotics, there is a lack of an autonomous control scheme that provides a complete and credible control strategy taking the aforementioned issues into consideration. Based on this, we present a reliable control strategy that takes into account the aforementioned issues, along with dynamic uncertainties of the model and the presence of ocean currents. In future research, we will extend the proposed methodology to multiple AUVs performing collaborative inspection tasks in an uncertain environment.
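
The receding-horizon structure of such an NMPC tracker can be illustrated with a planar unicycle (surge and yaw only). The sketch below optimizes a finite input sequence with SciPy, applies the first input, and would re-plan at every step; obstacle constraints, heave, disturbance robustness, and feasibility guarantees are omitted, and all numbers are placeholders:

```python
import numpy as np
from scipy.optimize import minimize

dt, H = 0.1, 10                        # sampling step and horizon length

def rollout(x0, u):
    """Integrate unicycle kinematics for H steps; u is flat [v0,w0,v1,w1,...]."""
    x = np.array(x0, float)
    traj = []
    for v, w in u.reshape(H, 2):
        x = x + dt * np.array([v * np.cos(x[2]), v * np.sin(x[2]), w])
        traj.append(x.copy())
    return np.array(traj)

def cost(u, x0, ref):
    traj = rollout(x0, u)
    return np.sum((traj[:, :2] - ref) ** 2) + 1e-2 * np.sum(u ** 2)

x = [0.0, 0.0, 0.0]                                  # current pose (x, y, yaw)
ref = np.column_stack([np.linspace(0.1, 1.0, H), np.zeros(H)])  # straight line
bounds = [(-1.0, 1.0), (-2.0, 2.0)] * H              # thruster-like input limits
sol = minimize(cost, np.zeros(2 * H), args=(x, ref),
               bounds=bounds, method='L-BFGS-B')
v, w = sol.x[:2]                                      # apply first input, re-plan
print(v, w)
```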

Journal ArticleDOI
TL;DR: A scheduling method for collaborative assembly tasks is proposed that optimally plans assembly activities based on the knowledge acquired during runtime and so adapts to variations along the life cycle of a manufacturing process.
Abstract: The novel paradigm of collaborative automation, with machines and industrial robots that synergically share the same workspace with human workers, requires rethinking how activities are prioritized in order to account for possible variability in their durations. This article proposes a scheduling method for collaborative assembly tasks that makes it possible to optimally plan assembly activities based on the knowledge acquired during runtime and so adapts to variations along the life cycle of a manufacturing process. The scheduler is based on time Petri nets, and the output plan is optimized by minimizing the idle time of each agent. The experimental validation, carried out on a realistic industrial use case consisting of a small assembly line with two robots and a human operator, confirms the effectiveness of the approach. Note to Practitioners —The optimization of manufacturing execution is a long-standing problem in production engineering. Modern engineering tools are available to monitor and help decision-makers to reduce waste and schedule resources to optimize the efficiency of a manufacturing process. This article proposes a scheduling algorithm that continuously collects data from the manufacturing process and iteratively plans an optimal resource allocation strategy, trying to reduce the idle time of each agent. The approach is demonstrated on a realistic case study, where two robots and a human worker cooperate to assemble a USB/microSD adapter.

Journal ArticleDOI
TL;DR: An extended version of a flexible job shop problem that allows the precedence between the operations to be given by an arbitrary directed acyclic graph instead of a linear order is considered.
Abstract: Scheduling of complex manufacturing systems entails complicated constraints such as mating operational ones. Focusing on real settings, this article considers an extended version of a flexible job shop problem that allows the precedence between operations to be given by an arbitrary directed acyclic graph instead of a linear order. In order to obtain a reliable and high-performance schedule in a reasonable time, this article contributes a knowledge-based cuckoo search algorithm (KCSA) to the scheduling field. The proposed knowledge base is initially trained off-line, before operation, based on reinforcement learning and hybrid heuristics to store scheduling information and appropriate parameters. In the off-line training phase, the SARSA algorithm is used, for the first time, to build a self-adaptive parameter control scheme for the cuckoo search (CS) algorithm. In each iteration, the proposed knowledge base selects suitable parameters to ensure the desired diversification and intensification of the population. It is then used to generate new solutions by probability sampling in a designed mutation phase. Moreover, it is updated via feedback information from the search process. Its influence on the KCSA’s performance is investigated, and the time complexity of the KCSA is analyzed. The KCSA is validated on benchmark and randomly generated cases. Various simulation experiments and comparisons between it and several popular methods are performed to validate its effectiveness. Note to Practitioners —Complex manufacturing scheduling problems are usually solved via intelligent optimization algorithms. However, most of them are parameter-sensitive, and thus selecting their proper parameters is highly challenging. On the other hand, it is difficult to ensure their robustness since they heavily rely on random mechanisms. In order to deal with the above obstacles, we design a knowledge-based intelligent optimization algorithm. In the proposed algorithm, a reinforcement learning algorithm is used to self-adjust its parameters to tackle the parameter selection issue. Two probability matrices for machine allocation and operation sequencing are built via hybrid heuristics as a guide for searching for new and efficient assignment schemes. To further improve the performance of our algorithm, a feedback control framework is constructed to ensure the desired state of the population. As a result, our algorithm can obtain a high-quality schedule in a reasonable time to fulfill a real-time scheduling purpose. In addition, it possesses high robustness via the proposed feedback control technique. Simulation results show that the knowledge-based cuckoo search algorithm (KCSA) outperforms several existing algorithms. Hence, it can be readily applied to real manufacturing facility scheduling problems.
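
The SARSA-based parameter control can be pictured as a small tabular learner whose action is a cuckoo-search parameter setting and whose reward is the fitness improvement it produced. The state binning, action set, and learning constants below are illustrative choices, not the paper's:

```python
import random
import numpy as np

actions = [0.10, 0.25, 0.50]          # candidate CS discovery probabilities pa
n_states = 5                          # binned population-diversity levels
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.2     # learning rate, discount, exploration

def choose(s):
    """Epsilon-greedy choice of a parameter setting for diversity bin s."""
    if random.random() < eps:
        return random.randrange(len(actions))
    return int(np.argmax(Q[s]))

def sarsa_step(s, a, reward, s_next):
    """On-policy update; reward = fitness improvement after running CS with pa."""
    a_next = choose(s_next)
    Q[s, a] += alpha * (reward + gamma * Q[s_next, a_next] - Q[s, a])
    return a_next                      # executed in the next CS iteration

s, a = 2, choose(2)
a = sarsa_step(s, a, reward=0.7, s_next=1)   # feedback from one optimizer pass
print(actions[a])
```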

Journal ArticleDOI
TL;DR: A novel effective optimization framework for the reconfiguration problem of modern DNs using the recent Harris hawks optimization (HHO) algorithm, which allows the decision maker to determine in reasonable time the optimal network topology, minimizing the overall power losses and considering the system operational requirements.
Abstract: Improving the efficiency and sustainability of distribution networks (DNs) is nowadays a challenging objective both for large networks and microgrids connected to the main grid. In this context, a crucial role is played by the so-called network reconfiguration problem, which aims at determining the optimal DN topology. This process is enabled by properly changing the close/open status of all available branch switches to form an admissible graph connecting network buses. The reconfiguration problem is typically modeled as an NP-hard combinatorial problem with a complex search space due to current and voltage constraints. Even though several metaheuristic algorithms have been used to obtain, without guarantees, the global optimal solution, searching for near-optimal solutions in reasonable time is still a research challenge for the DN reconfiguration problem. Facing this issue, this article proposes a novel effective optimization framework for the reconfiguration problem of modern DNs. The objective of reconfiguration is minimizing the overall power losses while ensuring an enhanced DN voltage profile. A multiple-step resolution procedure is then presented, where the recent Harris hawks optimization (HHO) algorithm constitutes the core part. This optimizer is here intelligently accompanied by appropriate preprocessing (i.e., search space preparation and initial feasible population generation) and postprocessing (i.e., solution refinement) phases aimed at improving the search for near-optimal configurations. The effectiveness of the method is validated through numerical experiments on the IEEE 33-bus, the IEEE 85-bus systems, and an artificial 295-bus system under distributed generation and load variation. Finally, the performance of the proposed HHO-based approach is compared with two related metaheuristic techniques, namely the particle swarm optimization algorithm and the cuckoo search algorithm. The results show that HHO outperforms the other two optimizers in terms of minimized power losses, enhanced voltage profile, and running time.

Journal ArticleDOI
TL;DR: A novel condition-driven data analytics method is proposed that provides enhanced physical interpretation of the monitoring results through concurrent analysis of static and dynamic information, which carry different information analogous to the concepts of "position" and "velocity" in physics, respectively.
Abstract: Frequent and wide changes in operation conditions are quite common in real process industry, resulting in typical wide-range nonstationary and transient characteristics along time direction. The considerable challenge is, thus, how to solve the conflict between the learning model accuracy and change complexity for analysis and monitoring of nonstationary and transient continuous processes. In this work, a novel condition-driven data analytics method is developed to handle this problem. A condition-driven data reorganization strategy is designed which can neatly restore the time-wise nonstationary and transient process into different condition slices, revealing similar process characteristics within the same condition slice. Process analytics can then be conducted for the new analysis unit. On the one hand, coarse-grained automatic condition-mode division is implemented with slow feature analysis to track the changing operation characteristics along condition dimension. On the other hand, fine-grained distribution evaluation is performed for each condition mode with Gaussian mixture model. Bayesian inference-based distance (BID) monitoring indices are defined which can clearly indicate the fault effects and distinguish different operation scenarios with meaningful physical interpretation. A case study on a real industrial process shows the feasibility of the proposed method which, thus, can be generalized to other continuous processes with typical wide-range nonstationary and transient characteristics along time direction. Note to Practitioners —Industrial processes in general have nonstationary characteristics which are ubiquitous in real world data, often reflected by a time-variant mean, a time-variant autocovariance, or both resulting from various factors. The focus of this study is to develop a universal analytics and monitoring method for wide-range nonstationary and transient continuous processes. Condition-driven concept takes the place of time-driven thought. For the first time, it is recognized that there are similar process characteristics within the same condition slice and changes in the process correlations may relate to its condition modes. Besides, the proposed method can provide enhanced physical interpretation for the monitoring results with concurrent analysis of the static and dynamic information which carry different information, analogous to the concepts of “position” and “velocity” in physics, respectively. The static information can tell the current operation condition, while the dynamic information can clarify whether the process status is switching between different steady states. It is noted that the condition-driven concept is universal and can be extended to other applications for industrial manufacturing applications.

Journal ArticleDOI
TL;DR: An RFID-based mobile robot global localization method combining phase difference and readability is proposed, by which the mobile robot can be accurately localized in an environment with a relatively sparse reference tag distribution and without the need for offline phase drift calibration.
Abstract: A novel radio frequency identification (RFID)-based mobile robot global localization method combining two kinds of RFID signal information, i.e., phase difference and readability, is proposed. Specifically, a phase difference model and a classification logic strategy based on readability are built and integrated into a particle filter localization algorithm. Compared with existing RFID localization methods, the proposed localization method can achieve competitive localization performance in an environment with a relatively sparse reference tag distribution and without the need for offline phase drift calibration. A series of real experimental tests were performed, and the results show that the proposed method can localize a mobile robot with centimeter-level position accuracy and satisfactory attitude angle accuracy when the distance between adjacent reference tags is approximately 60 cm, even if all RFID devices are commercial off-the-shelf (COTS). The proposed method provides a promising option for mobile robot localization applications, such as path tracking of mobile robots. Note to Practitioners —Mobile robot localization is a key technology for its location-based services. Considering that radio frequency identification (RFID) is entirely unaffected by light interference and has a globally unique ID, RFID has been regarded as a localization sensor with broad application prospects. This article proposes an RFID-based mobile robot global localization method combining phase difference and readability, by which the mobile robot can be accurately localized in an environment with a relatively sparse reference tag distribution and without the need for offline phase drift calibration. The experimental results indicate that the proposed method can localize the mobile robot with good performance, including centimeter-level position accuracy and satisfactory attitude angle accuracy. The proposed method can effectively contribute to many practical applications, such as the path tracking of a mobile robot.
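
A skeleton of the particle-filter fusion is given below: odometry drives the prediction, and a wrapped phase-difference likelihood (round-trip phase 4*pi*d/lambda) weights the particles. The tag position, wavelength, noise levels, and resampling threshold are placeholder values, and the readability-based classification logic is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)
N, lam = 500, 0.328                    # particles; ~915 MHz wavelength (m)
tag = np.array([2.0, 1.0])             # known reference-tag position
particles = rng.uniform([0, 0, -np.pi], [4, 4, np.pi], size=(N, 3))
weights = np.ones(N) / N

def predict(particles, v, w, dt=0.1, noise=(0.02, 0.05)):
    """Propagate particles with odometry (v, w) plus motion noise."""
    th = particles[:, 2]
    particles[:, 0] += v * dt * np.cos(th) + rng.normal(0, noise[0], N)
    particles[:, 1] += v * dt * np.sin(th) + rng.normal(0, noise[0], N)
    particles[:, 2] += w * dt + rng.normal(0, noise[1], N)

def update(particles, weights, measured_dphase, prev_pos, sigma=0.3):
    """Weight by agreement between predicted and measured phase differences."""
    d_now = np.linalg.norm(particles[:, :2] - tag, axis=1)
    d_prev = np.linalg.norm(prev_pos[:, :2] - tag, axis=1)
    pred = np.angle(np.exp(1j * 4 * np.pi * (d_now - d_prev) / lam))
    err = np.angle(np.exp(1j * (pred - measured_dphase)))  # wrap to [-pi, pi]
    weights *= np.exp(-0.5 * (err / sigma) ** 2) + 1e-12
    weights /= weights.sum()
    if 1.0 / np.sum(weights ** 2) < N / 2:     # resample on low effective size
        idx = rng.choice(N, N, p=weights)
        particles[:] = particles[idx]
        weights[:] = 1.0 / N

prev = particles.copy()
predict(particles, v=0.3, w=0.1)
update(particles, weights, measured_dphase=0.5, prev_pos=prev)
print(np.average(particles[:, :2], weights=weights, axis=0))  # pose estimate
```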

Journal ArticleDOI
Wujing Cao, Chunjie Chen, Hongyue Hu, Kai Fang, Xinyu Wu
TL;DR: A comparison of three hip assistance modes is conducted to discuss the balance between system weight and assistance efficiency; preliminary experiments suggest that hip extension assistance (HEA) is more suitable than hip flexion assistance (HFA) during single-motion assistance.
Abstract: Understanding the effects of different hip assistance modes is a fundamental step in the process of designing hip assistance devices and controllers that can provide better performance in terms of metabolic cost. We have developed and tested a soft exoskeleton for hip assistance, which includes three assistance modes: hip extension assistance (HEA), hip flexion assistance (HFA), and hip extension and flexion assistance (HEFA). A proportional derivative (PD) iterative learning controller based on a feedforward model was proposed to control the assistive force accurately. The three hip assistance modes were evaluated on seven male subjects walking on a treadmill at a speed of 5 km/h in two scenarios: first with a 15-kg backpack and then without any backpack. The net metabolic costs during the loaded condition were reduced, compared with the no-exoskeleton condition, by 9.95%, 6.25%, and 15.28% for HEA, HFA, and HEFA, respectively. The reductions were significant in the HEA (p = 0.048) and HEFA (p = 0.005) modes, while the HFA mode (p = 0.202) was not statistically significant. This indicates that the HEA and HEFA modes of the soft exoskeleton provide more benefit to the net metabolic cost than the HFA mode. The reductions in net metabolic cost during the unloaded condition were 9.21%, 2.58%, and 13.05% for HEA, HFA, and HEFA, respectively. The improvements in walking efficiency under both conditions with the developed soft exoskeleton are demonstrated. Note to Practitioners —This article was motivated by the problem of how best to reduce the metabolic cost of walking through hip assistance from a soft exoskeleton. In this article, we conduct a comparison of three hip assistance modes to discuss the balance between system weight and assistance efficiency. We then propose a PD iterative learning controller based on a feedforward model to track the desired assistive force accurately. Preliminary experiments suggest that hip extension assistance (HEA) is more suitable than hip flexion assistance (HFA) during single-motion assistance. Multiple-motion assistance is more beneficial for metabolic cost reduction when the weight of the soft exoskeleton is the same. The experimental tests show that the proposed soft exoskeleton and control algorithm are effective for walking assistance under the loaded condition. However, the performance of the hip assistance device was tested on treadmill walking only. In future research, we will conduct a performance evaluation of the soft exoskeleton in complex road environments.
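
The PD-type iterative learning update for the assistive-force loop is a one-liner per trial: the next gait cycle's feedforward command is corrected by the current cycle's force-tracking error and its derivative. The gains, the first-order plant stand-in, and the omission of the feedforward model are illustrative:

```python
import numpy as np

def pd_ilc_update(u, f_des, f_meas, dt, kp=0.8, kd=0.05):
    """u_{k+1}(t) = u_k(t) + kp*e_k(t) + kd*de_k(t), with e_k = f_des - f_meas."""
    e = f_des - f_meas
    return u + kp * e + kd * np.gradient(e, dt)

# Toy trial-to-trial convergence on an invented actuator model f = 0.7*u.
t = np.linspace(0, 1, 200)
f_des = 20 * np.sin(np.pi * t) ** 2            # desired assistive force (N)
u = np.zeros_like(t)
for trial in range(30):
    f_meas = 0.7 * u                            # plant stand-in, not the cable drive
    u = pd_ilc_update(u, f_des, f_meas, dt=t[1] - t[0])
print(round(np.max(np.abs(f_des - 0.7 * u)), 3))  # residual tracking error
```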

Journal ArticleDOI
TL;DR: In this article, the authors use simulation to design, validate, and continuously improve complex processes, and help practitioners to gain insight into the operation and justify future investments, while state-of-the-art simulators fall short of accurately modeling physical phenomena, such as friction, impact, and deformation.
Abstract: To perform reliably and consistently over sustained periods of time, large-scale automation critically relies on computer simulation. Simulation allows us and supervisory AI to effectively design, validate, and continuously improve complex processes, and helps practitioners gain insight into the operation and justify future investments. While numerous successful applications of simulation in industry exist, such as circuit simulation, finite element methods, and computer-aided design (CAD), state-of-the-art simulators fall short of accurately modeling physical phenomena, such as friction, impact, and deformation.

Journal ArticleDOI
TL;DR: A novel driving distraction detection method that is based on a new deep network that uses both temporal information and spatial information of electroencephalography (EEG) signals as model inputs is presented.
Abstract: Distracted driving has been recognized as a major challenge to traffic safety improvement. This article presents a novel driving distraction detection method that is based on a new deep network. Unlike traditional methods, the proposed method uses both temporal information and spatial information of electroencephalography (EEG) signals as model inputs. Convolutional techniques and gated recurrent units were adopted to map the relationship between drivers' distraction status and EEG signals in the time domain. A driving simulation experiment was conducted to examine the effectiveness of the proposed method. Twenty-four healthy volunteers participated and three types of secondary tasks (i.e., cellphone operation task, clock task, and 2-back task) were used to induce distraction during driving. Drivers' EEG responses were measured using a 32-channel electrode cap, and the EEG signals were preprocessed to remove artifacts and then split into short EEG sequences. The proposed deep-network-based distraction detection method was trained and tested on the collected EEG data. To evaluate its effectiveness, it was also compared with the networks using temporal or spatial information alone. The results showed that our proposed distraction detection method achieved an overall binary (distraction versus nondistraction) classification accuracy of 0.92. In terms of task-specific distraction detection, its accuracy was 0.88. Further analysis on the individual difference in detection performance showed that drivers' EEG performance differed across individuals, which suggests that adaptive learning for each individual driver would be needed when developing in-vehicle distraction detection applications.

Journal ArticleDOI
TL;DR: Two new active learning algorithms for the Gaussian process with uncertainties are proposed, which take variance-based information measure and Fisher information measure into consideration and can incorporate the impact of uncertainties and realize better prediction performance.
Abstract: In the machine learning domain, active learning is an iterative data selection algorithm for maximizing information acquisition and improving model performance with limited training samples. It is very useful, especially for industrial applications where training samples are expensive, time-consuming, or difficult to obtain. Existing methods mainly focus on active learning for classification, and a few methods are designed for regression, such as linear regression or Gaussian process (GP). Uncertainties from measurement errors and intrinsic input noise inevitably exist in the experimental data, which further affects the modeling performance. The existing active learning methods do not incorporate these uncertainties for GP. In this article, we propose two new active learning algorithms for the GP with uncertainties, which are variance-based weighted active learning algorithm and D-optimal weighted active learning algorithm. Through numerical study, we show that the proposed approach can incorporate the impact of uncertainties and realize better prediction performance. This approach has been applied to improving the predictive modeling for automatic shape control of composite fuselage. Note to Practitioners —This article was motivated by automatic shape control of composite fuselage. The main objective is to realize active learning for predictive analytics, which means maximizing information acquisition with limited experimental samples. This kind of need for active learning is very common in the industrial systems where it is expensive, time-consuming, or difficult to obtain experimental data. Existing approaches, from either a machine learning perspective or a statistics perspective, mainly focus on active learning for classification or regression models without incorporating impacts from intrinsic input uncertainties. However, intrinsic uncertainties widely exist in industrial systems. This article develops two active learning algorithms for the Gaussian process with uncertainties. The algorithms take variance-based information measure and Fisher information measure into consideration. The proposed algorithms can also be applied in other active learning scenarios, specifically for predictive models with multiple uncertainties.
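
Plain variance-based active learning for a GP, the starting point that the proposed algorithms extend with uncertainty weighting and Fisher information, can be written with scikit-learn in a few lines; the candidate pool, test function, and noise level below are synthetic:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
pool = rng.uniform(-3, 3, size=(200, 1))            # candidate inputs
f = lambda x: np.sin(2 * x).ravel()                 # unknown target function
X = pool[rng.choice(200, 5, replace=False)]         # small initial design
y = f(X) + rng.normal(0, 0.1, len(X))

gp = GaussianProcessRegressor(RBF() + WhiteKernel())
for _ in range(10):
    gp.fit(X, y)
    _, std = gp.predict(pool, return_std=True)
    x_new = pool[np.argmax(std)]                    # query most uncertain point
    X = np.vstack([X, x_new])
    y = np.append(y, f(x_new[None]) + rng.normal(0, 0.1))
print(f"final training set size: {len(X)}")
```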

Journal ArticleDOI
TL;DR: This study developed a practical end-to-end framework that makes use of physical features embedded in raw data and an elaborated hybrid deep learning model, namely 1-DCNN-LSTM, featuring two algorithms: the convolutional neural network (CNN) and long short-term memory (LSTM).
Abstract: Smart structural health monitoring (SHM) for large-scale infrastructure is an intriguing subject for engineering communities thanks to its significant advantages such as timely damage detection, optimal maintenance strategy, and reduced resource requirements. Yet, it is a challenging topic as it requires continuously handling a large amount of collected sensor data, which are inevitably contaminated by random noise. Therefore, this study developed a practical end-to-end framework that makes use of physical features embedded in raw data and an elaborated hybrid deep learning model, namely 1-DCNN-LSTM, featuring two algorithms: the convolutional neural network (CNN) and long short-term memory (LSTM). In order to extract relevant features from sensory data, the method combines various signal processing techniques such as the autoregressive model, discrete wavelet transform, and empirical mode decomposition. The hybrid deep learning 1-DCNN-LSTM is designed based on the CNN’s capacity to capture local information and the LSTM network’s prominent ability to learn long-term dependencies. Through three case studies involving both experimental and synthetic data sets, it is demonstrated that the proposed approach achieves highly accurate damage detection, as accurate as the powerful 2-D CNN, but with lower time and memory complexity, making it suitable for real-time SHM. Note to Practitioners —This article aims to develop a practical data-driven method for automatically monitoring the operational state of structures. In order to achieve consistently and highly accurate results in performing different tasks for diverse structures, we combine underlying features in both time and frequency domains extracted from measured vibration signal data. Three popular data featuring methods are combined to achieve a diversity gain that would not be possible with each individual method. As vibration is usually measured by long time-series signals, the most efficient deep learning architecture for time-series signals, namely long short-term memory (LSTM), is considered for this work. Besides, each structure has its own dynamic properties, i.e., eigenfrequencies, around which the most relevant information lies in the frequency domain; thus, a convolutional neural network specifically designed for capturing local information is used in combination with LSTM, forming a hybrid deep learning architecture. The applicability and effectiveness of the proposed approach are supported by three case studies with different types of structures, showing highly accurate damage detection with reduced resource requirements. These advantages can be valuable for developing a model for live monitoring of structural health in future life-line infrastructures.

Journal ArticleDOI
TL;DR: A novel hybrid prediction method named SG and TCN-based LSTM (ST-LSTM) is proposed, which synergistically combines the power of the Savitzky-Golay (SG) filter, the TCN, as well as the L STM, for network traffic prediction.
Abstract: Accurate and real-time prediction of network traffic can not only help system operators allocate resources rationally according to their actual business needs but also help them assess the performance of a network and analyze its health status. In recent years, neural networks have proved suitable for predicting time-series data, as represented by the long short-term memory (LSTM) neural network and the temporal convolutional network (TCN). This article proposes a novel hybrid prediction method named SG and TCN-based LSTM (ST-LSTM) for network traffic prediction, which synergistically combines the power of the Savitzky-Golay (SG) filter, the TCN, and the LSTM. ST-LSTM employs a three-phase end-to-end methodology for time-series prediction. It first eliminates noise in raw data using the SG filter, then extracts short-term features from the sequences by applying the TCN, and finally captures the long-term dependence in the data by exploiting the LSTM. Experimental results over real-world datasets demonstrate that the proposed ST-LSTM outperforms state-of-the-art algorithms in terms of prediction accuracy.
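
The three-phase idea (SG smoothing, TCN-style convolution for short-term features, LSTM for long-term dependence) can be sketched compactly; the layer sizes, dilation, window length, and SG parameters below are illustrative choices, not the tuned ST-LSTM:

```python
import torch
import torch.nn as nn
from scipy.signal import savgol_filter

class STLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.tcn = nn.Sequential(                 # dilated conv as TCN stand-in
            nn.Conv1d(1, 16, 3, padding=2, dilation=2), nn.ReLU())
        self.lstm = nn.LSTM(16, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x):                         # x: (batch, time, 1)
        h = self.tcn(x.transpose(1, 2)).transpose(1, 2)
        h, _ = self.lstm(h)
        return self.out(h[:, -1])                 # one-step-ahead prediction

traffic = torch.randn(8, 48).numpy()              # 8 series of 48 samples
smoothed = savgol_filter(traffic, window_length=11, polyorder=3, axis=1)
x = torch.tensor(smoothed, dtype=torch.float32).unsqueeze(-1)
print(STLSTM()(x).shape)                          # torch.Size([8, 1])
```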

Journal ArticleDOI
TL;DR: A Voronoi-based path generation algorithm for an energy-constrained mobile robot, such as an unmanned aerial vehicle (UAV), that takes energy constraints into account to generate waypoints for the robot to follow in a near-optimal configuration while maintaining path-length constraints is presented.
Abstract: This article describes a Voronoi-based path generation (VPG) algorithm for an energy-constrained mobile robot, such as an unmanned aerial vehicle (UAV). The algorithm solves a variation of the coverage path-planning problem where complete coverage of an area is not possible due to path-length limits caused by energy constraints on the robot. The algorithm works by modeling the path as a connected network of mass-spring-damper systems. The approach further leverages the properties of Voronoi diagrams to generate a potential field to move path waypoints to near-optimal configurations while maintaining path-length constraints. Simulation and physical experiments on an aerial vehicle are described. Simulated runtimes show linear-time complexity with respect to the number of path waypoints. Tests in variously shaped areas demonstrate that the method can generate paths in both convex and nonconvex areas. Comparison tests with other path generation methods demonstrate that the VPG algorithm strikes a good balance between runtime and optimality, with significantly better runtime than direct optimization, lower cost coverage paths than a lawnmower-style coverage path, and moderately better performance in both metrics than the most conceptually similar method. Physical experiments demonstrate the applicability of the VPG method to a physical UAV, and comparisons between real-world results and simulations show that the costs of the generated paths are within a few percent of each other, implying that analysis performed in simulation will hold for real-world application, assuming that the robot is capable of closely following the path and a good energy model is available. Note to Practitioners —For autonomous mobile-robotics-based applications where a robot equipped with a tool or sensor is required to survey an area for inspection, monitoring, cleaning, and so on, effectively covering the area is desirable. However, for energy-constrained systems such as aerial vehicles with limited flight time, complete coverage is not possible. Presented here is a new Voronoi-based path generation algorithm that takes energy constraints into account to generate waypoints for the robot to follow in a near-optimal configuration while maintaining path-length constraints. The approach is applied in simulation and experiments for an application in environmental monitoring using unmanned aerial vehicles.

Journal ArticleDOI
TL;DR: A novel fault diagnosis model combining binarized DNNs (BDNNs) with improved random forests (RFs) is proposed, which maintains the desired accuracy while greatly enhancing diagnosis speed when deployed on edge nodes near the end physical machines.
Abstract: Recently, deep neural network (DNN) models have worked incredibly well, and edge computing has achieved great success in real-world scenarios, such as fault diagnosis for large-scale rotational machinery. However, DNN training takes a long time due to its complex calculation, which makes it difficult to optimize and retrain models. To address this issue, this work proposes a novel fault diagnosis model by combining binarized DNNs (BDNNs) with improved random forests (RFs). First, a BDNN-based feature extraction method with binary weights and activations in the training process is designed to reduce the model runtime without losing the accuracy of feature extraction. Its generated features are used to train an RF-based fault classifier to relieve the information loss caused by binarization. Second, considering the possible reduction in classification accuracy resulting from very similar binarized features of two instances with different classes, we replace the Gini index with ReliefF as the attribute evaluation measure in training RFs to further enhance the separability of the fault features extracted by the BDNN and accordingly improve fault identification accuracy. Third, an edge-computing-based fault diagnosis mode is proposed to increase diagnostic efficiency, where our diagnosis model is deployed in a distributed manner on a number of edge nodes close to the end rotational machines in distinct locations. Extensive experiments are conducted to validate the proposed method on datasets from rolling element bearings, and the results demonstrate that, in almost all cases, its diagnostic accuracy is competitive with state-of-the-art DNNs and, in some cases, even higher due to a form of regularization. Benefiting from the relatively low computing and storage requirements of BDNNs, the model is easy to deploy on edge nodes to realize concurrent real-time fault diagnosis.
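A structural sketch of the two-stage design follows: a binarized extractor (sign-activated layer trained with a straight-through estimator) feeds a random forest. Note that scikit-learn offers no ReliefF splitting criterion, so this sketch keeps the default Gini criterion and uses synthetic data; it illustrates the plumbing, not the paper's exact model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from sklearn.ensemble import RandomForestClassifier

class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)
    @staticmethod
    def backward(ctx, grad):
        (x,) = ctx.saved_tensors
        return grad * (x.abs() <= 1).float()   # straight-through estimator

class BinaryExtractor(nn.Module):
    def __init__(self, in_dim=64, feat_dim=32):
        super().__init__()
        self.fc = nn.Linear(in_dim, feat_dim)
    def forward(self, x):
        w = BinarizeSTE.apply(self.fc.weight)                   # binary weights
        return BinarizeSTE.apply(F.linear(x, w, self.fc.bias))  # binary activations

# Train the extractor on the fault data first (omitted here), then fit the
# forest on the binarized features it produces.
net = BinaryExtractor()
rf = RandomForestClassifier(n_estimators=100)
x, y = torch.randn(200, 64), torch.randint(0, 4, (200,))
feats = net(x).detach().numpy()
rf.fit(feats, y.numpy())
print(rf.score(feats, y.numpy()))
```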

Journal ArticleDOI
TL;DR: This paper presents a novel meta-reinforcement learning (MRL)-based optimization method that improves generalization by training the optimizer on multiple machining tasks; it is the first MRL-based method of adaptive parameter decision-making for energy-efficient flexible machining.
Abstract: Energy-efficient machining has become imperative for the energy conservation, emission reduction, and cost saving of manufacturing sectors. Optimal machining parameter decision-making is regarded as an effective way to achieve energy-efficient turning. For flexible machining, it is of utmost importance to determine optimal parameters adaptive to various machines, workpieces, and tools. However, very little research has focused on this issue. Hence, this paper undertakes this challenge through integrated meta-reinforcement learning (MRL) of machining parameters, exploring the commonalities of optimization models and using that knowledge to respond quickly to new machining tasks. Specifically, the optimization problem is first formulated as a finite Markov decision process (MDP). Then, the continuous parametric optimization is approached with an actor-critic (AC) framework. On the basis of this framework, meta-policy training is performed to improve the generalization capacity of the optimizer. The significance of the proposed method is exemplified and elucidated by a case study with a comparative analysis. Note to Practitioners —Here, we consider a real-world application problem of energy-aware machining parameter optimization encountered in flexible turning operations, namely, the design of a parametric optimization method that generalizes to various machining tasks whose multiple objectives and constraints vary with the machining configuration. This paper presents a novel meta-reinforcement learning (MRL)-based optimization method that improves generalization by training the optimizer on multiple machining tasks. To the best of our knowledge, this is the first MRL-based method of adaptive parameter decision-making for energy-efficient flexible machining. It should be emphasized that technologists benefit from the reduced decision-making time and the improved energy-saving opportunities.
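As a rough illustration of the AC formulation, the sketch below trains a Gaussian policy over two continuous machining parameters against a made-up quadratic energy model (`fake_energy` is a placeholder assumption, not the paper's machining model); meta-policy training would wrap this inner loop over a distribution of tasks.

```python
import torch
import torch.nn as nn

actor = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))   # mean of 2 params
critic = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))
log_std = nn.Parameter(torch.zeros(2))
opt = torch.optim.Adam([*actor.parameters(), *critic.parameters(), log_std], lr=3e-3)

def fake_energy(state, action):
    # Placeholder stand-in for a machining energy model (assumption).
    return ((action - state[..., :2]) ** 2).sum(-1)

for step in range(200):
    s = torch.rand(32, 4)                       # task/workpiece features (illustrative)
    dist = torch.distributions.Normal(actor(s), log_std.exp())
    a = dist.sample()                           # (speed, feed) proposal
    r = -fake_energy(s, a)                      # reward = negative energy
    adv = (r - critic(s).squeeze(-1)).detach()  # one-step advantage
    loss = -(dist.log_prob(a).sum(-1) * adv).mean() \
           + (critic(s).squeeze(-1) - r).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```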

Journal ArticleDOI
TL;DR: A multiobjective optimization method that addresses the disadvantages of the existing methods is proposed, and simulations demonstrate that, in comparison with two state-of-the-art scheduling algorithms, the proposed one increases profit and reduces convergence time.
Abstract: The significant growth in the number and types of tasks of heterogeneous applications in green cloud data centers (GCDCs) dramatically increases their providers' revenue from users as well as their energy consumption. It is a big challenge to maximize such revenue while minimizing energy cost in a market where prices of electricity, availability of renewable power generation, and behind-the-meter renewable generation contract models differ among the geographical sites of the GCDCs. A multiobjective optimization method that investigates such spatial differences in the GCDCs is proposed for the first time to trade off these two objectives by cost-effectively executing all tasks while meeting their delay constraints. In each time slot, a constrained biobjective optimization problem is formulated and solved by an improved multiobjective evolutionary algorithm based on decomposition. Realistic data-based simulations prove that the proposed method achieves a larger total profit with faster convergence than two state-of-the-art algorithms. Note to Practitioners —This article considers the tradeoff between profit maximization and energy cost minimization for green cloud data center (GCDC) providers while meeting the delay constraints of all tasks. Current task-scheduling methods fail to take advantage of spatial variations in many factors, e.g., prices of electricity and availability of renewable power generation at geographically distributed GCDC locations. As a result, they fail to execute all tasks of heterogeneous applications within their delay bounds in a high-revenue and low-energy-cost manner. In this article, a multiobjective optimization method that addresses the disadvantages of the existing methods is proposed. It is realized by a proposed intelligent optimization algorithm. Simulations demonstrate that, in comparison with two state-of-the-art scheduling algorithms, the proposed one increases profit and reduces convergence time. It can be readily implemented and integrated into actual industrial GCDCs.
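The decomposition idea can be illustrated with a toy Tchebycheff scalarization: the biobjective problem is split into scalar subproblems, each with its own weight vector. The objective functions below are synthetic stand-ins for the GCDC profit and energy models, and the (1+1)-style variation omits MOEA/D's neighborhood mating, so this is a sketch of the mechanism only.

```python
import numpy as np

def tchebycheff(f, weight, ideal):
    # Scalarize an objective vector f (to be minimized) for one subproblem.
    return np.max(weight * np.abs(f - ideal))

def objectives(x):
    profit = -np.sum(np.sqrt(x))      # negated profit: we minimize both entries
    energy = np.sum(x ** 2)           # synthetic energy cost
    return np.array([profit, energy])

rng = np.random.default_rng(1)
weights = np.linspace(0.05, 0.95, 10)
weights = np.stack([weights, 1 - weights], axis=1)    # 10 scalar subproblems
pop = rng.random((10, 5))                             # one solution per subproblem
ideal = np.min([objectives(x) for x in pop], axis=0)
for _ in range(200):
    for i, w in enumerate(weights):
        child = np.clip(pop[i] + rng.normal(0, 0.1, 5), 0, 1)
        ideal = np.minimum(ideal, objectives(child))  # update the ideal point
        if tchebycheff(objectives(child), w, ideal) < tchebycheff(objectives(pop[i]), w, ideal):
            pop[i] = child
print(np.round([objectives(x) for x in pop], 2))      # approximate Pareto front
```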

Journal ArticleDOI
TL;DR: A novel strategy is proposed: using the human mind, through EEG-based volitional control, to choose the ambulation mode of a prosthetic leg, together with a sensory feedback pathway that lets the human perceive the prosthetic leg's movement and control it more smoothly.
Abstract: More natural and intuitive control is expected to maximize the auxiliary effect of the powered prosthetic leg for lower limb amputees. In order to realize stable and flexible walking of prosthetic legs on different terrains according to human intention, a brain–computer interface (BCI) based on motor imagery (MI) is developed. For the raw electroencephalogram (EEG) signals, the discrete wavelet transform (DWT) is utilized to extract time–frequency domain features, which are used as the input signals of the common spatial pattern (CSP) to obtain time–frequency–space domain features of the EEG signals. Then, a support vector machine (SVM) classifier and a directed acyclic graph (DAG) structure are combined to classify multiclass imaginary tasks. According to the result of human intention recognition, the prosthetic leg performs the corresponding gait trajectory generated by encoding the ground reaction force (GRF). In addition, a sensory feedback loop is established by functional electrical stimulation (FES), which feeds back the movement of the prosthetic leg to the human in real time. The effectiveness and feasibility of the developed EEG-based volitional control of powered prosthetic legs have been validated by three subjects, all of whom were able to walk smoothly on the floor, ascend stairs, and descend stairs according to their own intentions using the prosthetic leg. Note to Practitioners —This article was inspired by the problem that the control mode of prosthetic-leg walking is not natural and intuitive, but the approach is also applicable to other powered prosthetic limbs. Existing methods of controlling the walking patterns of the prosthetic leg usually require explicit human manipulation and are not convenient to use. In this article, a novel strategy is proposed, that is, using the human mind to choose the ambulation mode of the prosthetic leg. This article describes the methods of human brain activity information acquisition and intention recognition. Then, gaits inspired by the healthy leg are designed for the prosthetic leg on three terrains: walking on the floor, ascending stairs, and descending stairs. In addition, this article provides a way for the human to perceive the movement of the prosthetic leg, which enables smoother control. Experiments on healthy subjects have shown that this approach is feasible. However, no experiments have been conducted on lower limb amputees, and adding more walking patterns may reduce the accuracy of human intention recognition. In future work, we will study how to accurately identify more categories of human intentions and further improve the control strategy so as to enhance the auxiliary effect of prosthetic legs for amputees.
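The recognition chain (DWT features, CSP spatial filtering, SVM classification) can be sketched for the binary case as follows; the data are synthetic, the wavelet-band choice is an assumption, and the paper's DAG of pairwise SVMs for multiclass MI is reduced to a single SVM here.

```python
import numpy as np
import pywt
from scipy.linalg import eigh
from sklearn.svm import SVC

def dwt_band(trial):
    # Keep each channel's level-3 detail coefficients as a compact
    # time-frequency representation (the band choice is an assumption).
    return np.array([pywt.wavedec(ch, 'db4', level=3)[1] for ch in trial])

def csp_filters(X0, X1, n_comp=4):
    # X*: (trials, channels, samples). Generalized eigenproblem on the
    # normalized class covariance matrices.
    cov = lambda X: np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    vals, vecs = eigh(cov(X0), cov(X0) + cov(X1))
    order = np.argsort(vals)
    pick = np.r_[order[:n_comp // 2], order[-(n_comp // 2):]]  # both extremes
    return vecs[:, pick].T

rng = np.random.default_rng(0)
X0 = rng.standard_normal((30, 8, 256))            # class 0 trials (synthetic)
X1 = 1.5 * rng.standard_normal((30, 8, 256))      # class 1 trials (synthetic)
B0, B1 = [np.array([dwt_band(t) for t in X]) for X in (X0, X1)]
W = csp_filters(B0, B1)
feats = lambda B: np.log(np.var(np.einsum('fc,ncs->nfs', W, B), axis=2))
X = np.vstack([feats(B0), feats(B1)])             # log-variance CSP features
y = np.r_[np.zeros(30), np.ones(30)]
print(SVC(kernel='rbf').fit(X, y).score(X, y))    # binary case only
```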

Journal ArticleDOI
TL;DR: It is shown that direct applications of popular health indices in the time domain are sensitive to impulsive noise, which causes these indices to fail for machine health monitoring when impulsive noise occurs.
Abstract: Prognostics and health management of rotating machines aim to use monitoring data to infer the health conditions of the machine in order to avoid unexpected accidents and minimize economic losses. Since health indices can detect an abnormality and provide observations for prognostic modeling, they are the basis of prognostics and health management. The spectral Lp/Lq norm ratio and the spectral Gini index have been recognized as popular health indices to characterize the impulsiveness of repetitive transients caused by machine faults for rotating machine health monitoring. Special forms of the spectral Lp/Lq norm ratio include spectral kurtosis, the spectral L2/L1 norm ratio, the reciprocal of the spectral smoothness index, and so on. In this article, theoretical and experimental investigations of the spectral Lp/Lq norm ratio and the spectral Gini index for machine health monitoring are conducted to prove how they characterize the impulsiveness of repetitive transients. Results showed that an increase in the total length of the nonimpulsive regions of repetitive transients makes the spectral Lp/Lq norm ratio and the spectral Gini index larger, which, in turn, can be used to explain changes of health indices during machine degradation at varying operating conditions and in the presence of impulsive noise. To address the sensitivity of popular health indices to impulsive noise, a fused health index for characterizing the cyclostationarity of repetitive transients is proposed. Analyses of bearing run-to-failure data showed that the proposed fused index has better monitoring performance than the aforementioned popular health indices. Note to Practitioners —This article was motivated by the problem of characterizing nonprocessed signals for automatic machine health monitoring. Practically, nonprocessed signals, such as vibration signals, acoustic signals, and so on, cannot be directly applied to reflect machine health conditions. Transforming nonprocessed signals into processed signals by using health indices is therefore of great concern. Most existing methods directly use intuition and experience to choose health indices in order to realize machine health monitoring. Thus, these methods do not provide theoretical investigations and support to illustrate how health indices characterize nonprocessed signals generated from a faulty machine. This article uses mathematical models and inferences to explain how popular health indices characterize impulsive signals generated from a faulty machine. It is then shown that direct applications of popular health indices in the time domain are sensitive to impulsive noise, which causes these health indices to fail for machine health monitoring when impulsive noise occurs. To solve this problem, it is suggested to use indices that characterize the frequency information of faulty signals. Finally, an efficient and reliable fusion of health indices in the frequency domain is proposed for machine health monitoring.
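For readers who want to reproduce the two indices, here is a worked numpy example on an amplitude spectrum. Normalization conventions for the Lp/Lq ratio vary across the literature, so the scaling here is illustrative; the toy "faulty" signal simply adds sparse repetitive spikes to a noise floor.

```python
import numpy as np

def lp_lq_ratio(s, p=4, q=2):
    # Larger when spectral energy concentrates in few bins (impulsiveness).
    return np.linalg.norm(s, p) / np.linalg.norm(s, q)

def gini_index(s):
    s = np.sort(np.abs(s))            # ascending
    n = s.size
    k = np.arange(1, n + 1)
    return np.sum((2 * k - n - 1) * s) / (n * np.sum(s))

t = np.arange(2048) / 2048.0          # 1 s at 2048 Hz (illustrative)
healthy = np.random.default_rng(0).standard_normal(t.size)
faulty = healthy + 5 * (np.sin(2 * np.pi * 120 * t) > 0.99)  # repetitive spikes
for name, x in [("healthy", healthy), ("faulty", faulty)]:
    s = np.abs(np.fft.rfft(x))        # amplitude spectrum
    print(name, round(lp_lq_ratio(s), 3), round(gini_index(s), 3))
```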

Journal ArticleDOI
TL;DR: This article proposes a novel deep learning algorithm to enable automated cone beam computed tomography segmentation and lesion detection, and demonstrates that the proposed algorithm outperforms the standard Dense U-Net in both lesion detection accuracy and Dice coefficient (DICE) in multilabel segmentation.
Abstract: Compared with the rapidly growing artificial intelligence (AI) research in other branches of healthcare, the pace of developing AI capacities in dental care is relatively slow. Dental care automation, especially the automated capability for dental cone beam computed tomography (CBCT) segmentation and lesion detection, is highly needed. CBCT is an important imaging modality that is experiencing ever-growing utilization in various dental specialties. However, little research has been done on segmenting different structures, restorative materials, and lesions using deep learning. This is due to multifold challenges such as the content-rich oral cavity, significant within-label variation on each CBCT image, and the inherent difficulty of obtaining many high-quality labeled images for training. On the other hand, oral-anatomical knowledge exists in dentistry, which can be leveraged and integrated into the deep learning design. In this article, we propose a novel anatomically constrained Dense U-Net that integrates oral-anatomical knowledge with the data-driven Dense U-Net. The proposed algorithm is formulated as a regularized or constrained optimization and solved using mean-field variational approximation to achieve computational efficiency. A mathematical encoding for transforming descriptive knowledge into a quantitative form is also proposed. Our experiments demonstrate that the proposed algorithm outperforms the standard Dense U-Net in both lesion detection accuracy and Dice coefficient (DICE) in multilabel segmentation. Benefiting from the integration of anatomical domain knowledge, our algorithm performs well even when data from only a small number of patients are included in training. Note to Practitioners —This article proposes a novel deep learning algorithm to enable the automated capability for cone beam computed tomography (CBCT) segmentation and lesion detection. Despite the growing adoption of CBCT in various dental specialties, such capability is currently lacking. The proposed work will provide tools that help reduce subjectivity and human errors, as well as streamline and expedite the clinical workflow. This will greatly facilitate dental care automation. Furthermore, owing to its capacity to integrate oral-anatomical knowledge into the deep learning design, the proposed algorithm does not require many high-quality labeled images to train and can provide good accuracy under limited training samples. This ability is highly desirable for practitioners, saving labor-intensive, costly labeling effort while still enjoying the benefits provided by AI.
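One way to picture the "regularized optimization" formulation is a segmentation loss plus an anatomical penalty. The sketch below encodes a single made-up rule (lesion probability should vanish outside a plausibility mask, with channel 1 assumed to be the lesion class); the paper's mean-field variational treatment and its actual knowledge encodings are richer.

```python
import torch
import torch.nn.functional as F

def anatomically_constrained_loss(logits, target, prior_mask, lam=1.0):
    """logits: (B, C, H, W); target: (B, H, W) integer labels;
    prior_mask: (B, H, W) binary, 1 where the lesion class is plausible."""
    ce = F.cross_entropy(logits, target)          # standard data-driven term
    prob = F.softmax(logits, dim=1)
    lesion = prob[:, 1]                           # channel 1 = lesion (assumption)
    # Penalize lesion probability mass placed where anatomy says it cannot be.
    violation = (lesion * (1 - prior_mask)).mean()
    return ce + lam * violation

B, C, H, W = 2, 3, 64, 64
logits = torch.randn(B, C, H, W, requires_grad=True)
target = torch.randint(0, C, (B, H, W))
prior = torch.zeros(B, H, W)
prior[:, 16:48, 16:48] = 1                        # toy plausibility region
loss = anatomically_constrained_loss(logits, target, prior)
loss.backward()
print(float(loss))
```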

Journal ArticleDOI
TL;DR: A novel vision-based context prediction framework for lower limb prostheses that simultaneously predicts the human's environmental context for multiple forecast windows by leveraging Bayesian neural networks (BNNs) and produces a calibrated predicted probability for online decision-making.
Abstract: Reliable environmental context prediction is critical for wearable robots (e.g., prostheses and exoskeletons) to assist terrain-adaptive locomotion. This article proposes a novel vision-based context prediction framework for lower limb prostheses to simultaneously predict the human's environmental context for multiple forecast windows. By leveraging Bayesian neural networks (BNNs), our framework can quantify the uncertainty caused by different factors (e.g., observation noise and insufficient or biased training) and produce a calibrated predicted probability for online decision-making. We compared two wearable camera locations (a pair of glasses and a lower limb device), independently and conjointly. We utilized the calibrated predicted probability for online decision-making and fusion. We demonstrated how to interpret deep neural networks with uncertainty measures and how to improve the algorithms based on the uncertainty analysis. The inference time of our framework on a portable embedded system was less than 80 ms/frame. The results of this study may lead to novel context recognition strategies for reliable decision-making, efficient sensor fusion, and improved intelligent system design in various applications. Note to Practitioners —This article was motivated by two practical problems in computer vision for wearable robots. First, the performance of deep neural networks is challenged by real-life disturbances; however, reliable confidence estimation is usually unavailable, and the factors causing failures are hard to identify. Second, evaluating wearable robots by intuitive trial and error is expensive due to the need for human experiments. Our framework produces a calibrated predicted probability as well as three uncertainty measures. The calibrated probability makes it easy to customize prediction decision criteria by considering how much error the corresponding application can tolerate. This study demonstrated a practical procedure to interpret and improve the performance of deep neural networks with uncertainty quantification. We anticipate that our methodology could be extended to other applications as a general, scientific, and efficient procedure for evaluating and improving intelligent systems.
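A common way to approximate a BNN's predictive distribution is Monte Carlo dropout; the sketch below shows how a predictive probability and an aleatoric/epistemic uncertainty split can be computed from sampled forward passes (the paper's exact BNN formulation and its calibration step are not reproduced here, and the network and class count are illustrative).

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.3),
                    nn.Linear(64, 6))         # 6 terrain classes (illustrative)
net.train()                                   # keep dropout active at test time

x = torch.randn(1, 128)                       # stand-in for image features
with torch.no_grad():
    probs = torch.stack([torch.softmax(net(x), -1) for _ in range(50)])  # (50, 1, 6)

mean = probs.mean(0)                                        # predictive probability
entropy = -(mean * mean.clamp_min(1e-12).log()).sum(-1)     # total uncertainty
aleatoric = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(0)
epistemic = entropy - aleatoric                             # mutual information
print(mean.argmax(-1).item(), float(entropy), float(epistemic))
```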