Showing papers in "IEEE/CAA Journal of Automatica Sinica in 2021"
TL;DR: A review of swarm intelligence algorithms can be found in this paper, where the authors highlight the functions and strengths of the algorithms drawn from 127 research publications and briefly describe their successful applications to optimization problems in engineering fields.
Abstract: Swarm intelligence algorithms are a subset of the artificial intelligence (AI) field; they are growing in popularity for resolving different optimization problems and have been widely utilized in various applications. In the past decades, numerous swarm intelligence algorithms have been developed, including ant colony optimization (ACO), particle swarm optimization (PSO), artificial fish swarm (AFS), bacterial foraging optimization (BFO), and artificial bee colony (ABC). This review surveys the most representative swarm intelligence algorithms in chronological order, highlighting their functions and strengths as reported in 127 research publications. It provides an overview of the various swarm intelligence algorithms and their advanced developments, and briefly describes their successful applications to optimization problems in engineering fields. Finally, opinions and perspectives on the trends and prospects in this relatively new research domain are presented to support future developments.
247 citations
TL;DR: This paper studies the trajectory tracking problem of flapping-wing micro aerial vehicles in the longitudinal plane and shows that the tracking errors are bounded and ultimately converge to a small neighborhood around the origin.
Abstract: This paper studies the trajectory tracking problem of flapping-wing micro aerial vehicles (FWMAVs) in the longitudinal plane. First of all, the kinematics and dynamics of the FWMAV are established, wherein the aerodynamic force and torque generated by flapping wings and the tail wing are explicitly formulated with respect to the flapping frequency of the wings and the degree of tail wing inclination. To achieve autonomous tracking, an adaptive control scheme is proposed under the hierarchical framework. Specifically, a bounded position controller with hyperbolic tangent functions is designed to produce the desired aerodynamic force, and a pitch command is extracted from the designed position controller. Next, an adaptive attitude controller is designed to track the extracted pitch command, where a radial basis function neural network is introduced to approximate the unknown aerodynamic perturbation torque. Finally, the flapping frequency of the wings and the degree of tail wing inclination are calculated from the designed position and attitude controllers, respectively. In terms of Lyapunov's direct method, it is shown that the tracking errors are bounded and ultimately converge to a small neighborhood around the origin. Simulations are carried out to verify the effectiveness of the proposed control scheme.
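To make the boundedness mechanism concrete, the sketch below gives a generic form of a saturated position law built from hyperbolic tangents; the gain structure is illustrative, not the paper's exact controller.

```latex
u_p = -k_1 \tanh(e_p) - k_2 \tanh(\dot{e}_p), \qquad \|u_p\|_\infty \le k_1 + k_2,
```

where $e_p$ is the position tracking error and $k_1, k_2 > 0$ are control gains. Since $|\tanh(\cdot)| < 1$, the commanded aerodynamic force remains bounded no matter how large the tracking error grows, which is what makes the extracted flapping-frequency command physically realizable.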
230 citations
TL;DR: In this article, a comprehensive review of emerging technologies for the internet of things (IoT)-based smart agriculture is presented, including unmanned aerial vehicles, wireless technologies, open-source IoT platforms, software defined networking (SDN), network function virtualization (NFV), cloud/fog computing, and middleware platforms.
Abstract: This paper presents a comprehensive review of emerging technologies for the internet of things (IoT)-based smart agriculture. We begin by summarizing the existing surveys and describing emergent technologies for the agricultural IoT, such as unmanned aerial vehicles, wireless technologies, open-source IoT platforms, software defined networking (SDN), network function virtualization (NFV) technologies, cloud/fog computing, and middleware platforms. We also provide a classification of IoT applications for smart agriculture into seven categories: smart monitoring, smart water management, agrochemicals applications, disease management, smart harvesting, supply chain management, and smart agricultural practices. Moreover, we provide a taxonomy and a side-by-side comparison of the state-of-the-art methods toward supply chain management based on the blockchain technology for agricultural IoTs. Furthermore, we present real projects that use most of the aforementioned technologies, which demonstrate their great performance in the field of smart agriculture. Finally, we highlight open research challenges and discuss possible future research directions for agricultural IoTs.
218 citations
TL;DR: This work proposes an automatic detection method for COVID-19 infection based on chest X-ray images using different architectures of convolutional neural networks trained on ImageNet, and adapts them to behave as feature extractors for the X-ray images.
Abstract: The new coronavirus (COVID-19), declared by the World Health Organization as a pandemic, has infected more than 1 million people and killed more than 50 thousand. An infection caused by COVID-19 can develop into pneumonia, which can be detected by a chest X-ray exam and should be treated appropriately. In this work, we propose an automatic detection method for COVID-19 infection based on chest X-ray images. The datasets constructed for this study are composed of 194 X-ray images of patients diagnosed with coronavirus and 194 X-ray images of healthy patients. Since few images of patients with COVID-19 are publicly available, we apply the concept of transfer learning for this task. We use different architectures of convolutional neural networks (CNNs) trained on ImageNet, and adapt them to behave as feature extractors for the X-ray images. Then, the CNNs are combined with consolidated machine learning methods, such as k-Nearest Neighbor, Bayes, Random Forest, multilayer perceptron (MLP), and support vector machine (SVM). The results show that, for one of the datasets, the extractor-classifier pair with the best performance is the MobileNet architecture with the SVM classifier using a linear kernel, which achieves an accuracy and an F1-score of 98.5%. For the other dataset, the best pair is DenseNet201 with MLP, achieving an accuracy and an F1-score of 95.6%. Thus, the proposed approach demonstrates efficiency in detecting COVID-19 in X-ray images.
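The extractor-classifier pairing is straightforward to reproduce in outline. Below is a minimal sketch of the best-performing pair (MobileNet features into a linear SVM), assuming a Keras backbone and scikit-learn; the random arrays stand in for the X-ray datasets, whose loading and preprocessing details are not specified here.

```python
import numpy as np
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.applications.mobilenet import preprocess_input
from sklearn.svm import SVC

# ImageNet-pretrained MobileNet without its classification head;
# global average pooling turns each image into one feature vector.
extractor = MobileNet(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    # images: float array of shape (n, 224, 224, 3) with values in [0, 255]
    return extractor.predict(preprocess_input(images), verbose=0)

# Placeholder data standing in for the chest X-ray images and labels.
X_train = np.random.rand(8, 224, 224, 3) * 255.0
y_train = np.array([0, 1] * 4)  # 0 = healthy, 1 = COVID-19

clf = SVC(kernel="linear")  # the linear-kernel SVM classifier
clf.fit(extract_features(X_train), y_train)
```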
167 citations
TL;DR: The security challenges of smart agriculture are analyzed and organized into two aspects: 1) agricultural production, and 2) information technology.
Abstract: With the deep combination of both modern information technology and traditional agriculture, the era of agriculture 4.0, which takes the form of smart agriculture, has come. Smart agriculture provides solutions for agricultural intelligence and automation. However, information security issues cannot be ignored with the development of agriculture brought by modern information technology. In this paper, three typical development modes of smart agriculture (precision agriculture, facility agriculture, and order agriculture) are presented. Then, 7 key technologies and 11 key applications are derived from the above modes. Based on the above technologies and applications, 6 security and privacy countermeasures (authentication and access control, privacy-preserving, blockchain-based solutions for data integrity, cryptography and key management, physical countermeasures, and intrusion detection systems) are summarized and discussed. Moreover, the security challenges of smart agriculture are analyzed and organized into two aspects: 1) agricultural production, and 2) information technology. Most current research projects have not considered agricultural equipment as a potential security threat. Therefore, we conducted additional experiments based on the solar insecticidal lamp Internet of Things, and the results indicate that agricultural equipment has an impact on agricultural security. Finally, more technologies (5G communication, fog computing, Internet of Everything, renewable energy management system, software defined network, virtual reality, augmented reality, and cyber security datasets for smart agriculture) are described as the future research directions of smart agriculture.
158 citations
TL;DR: In comparison to existing decentralized fine-grained searchable encryption schemes, the proposed scheme has achieved a significant reduction in storage and computational cost for the secret key associated with users.
Abstract: The concept of sharing personal health data over cloud storage in a healthcare cyber-physical system has become popular in recent times as it improves access quality. The privacy of health data can only be preserved by keeping it in an encrypted form, but encryption affects usability and flexibility in terms of effective search. Attribute-based searchable encryption (ABSE) has proven its worth by providing fine-grained searching capabilities in shared cloud storage. However, it is not practical to apply this scheme to devices with limited resources and storage capacity because a typical ABSE scheme involves computationally intensive operations. In a healthcare cloud-based cyber-physical system (CCPS), data is often collected by resource-constrained devices; therefore, here also, we cannot directly apply ABSE schemes. In the proposed work, the inherent computational cost of the ABSE scheme is managed by executing the computationally intensive tasks of a typical ABSE scheme on the blockchain network. This makes the proposed scheme suitable for online storage and retrieval of personal health data in a typical CCPS. With the assistance of blockchain technology, the proposed scheme offers two main benefits. First, it is free from a trusted authority, which makes it genuinely decentralized and free from a single point of failure. Second, it is computationally efficient because the computational load is now distributed among the consensus nodes in the blockchain network. Specifically, the task of initializing the system, which is considered the most computationally intensive, and the task of partial search token generation, which is considered the most frequent operation, are now the responsibility of the consensus nodes. This eliminates the need for a trusted authority and reduces the burden on data users, respectively. Further, in comparison to existing decentralized fine-grained searchable encryption schemes, the proposed scheme achieves a significant reduction in the storage and computational cost of the secret key associated with users. This has been verified both theoretically and practically in the performance analysis section.
145 citations
TL;DR: A comprehensive survey of the research on MOPs with irregular Pareto fronts can be found in this article, where a taxonomy of the existing methodologies for handling irregular problems is given and representative algorithms are reviewed.
Abstract: Evolutionary algorithms have been shown to be very successful in solving multi-objective optimization problems (MOPs). However, their performance often deteriorates when solving MOPs with irregular Pareto fronts. To remedy this issue, a large body of research has been performed in recent years and many new algorithms have been proposed. This paper provides a comprehensive survey of the research on MOPs with irregular Pareto fronts. We start with a brief introduction to the basic concepts, followed by a summary of the benchmark test problems with irregular problems, an analysis of the causes of the irregularity, and real-world optimization problems with irregular Pareto fronts. Then, a taxonomy of the existing methodologies for handling irregular problems is given and representative algorithms are reviewed with a discussion of their strengths and weaknesses. Finally, open challenges are pointed out and a few promising future directions are suggested.
144 citations
TL;DR: In this article, a new bi-objective serial-batch group scheduling problem considering the constraints of sequence-dependent setup time, release time, and due time is proposed, and a memetic algorithm is presented to solve it.
Abstract: Group scheduling problems have attracted much attention owing to their many practical applications. This work proposes a new bi-objective serial-batch group scheduling problem considering the constraints of sequence-dependent setup time, release time, and due time. It originates from an important industrial process, i.e., the wire rod and bar rolling process in steel production systems. Two objective functions, i.e., the number of late jobs and total setup time, are minimized. A mixed integer linear program is established to describe the problem. To obtain its Pareto solutions, we present a memetic algorithm that integrates a population-based nondominated sorting genetic algorithm II and two single-solution-based improvement methods, i.e., an insertion-based local search and an iterated greedy algorithm. The computational results on extensive industrial data with the scale of a one-week schedule show that the proposed algorithm performs well in solving the concerned problem and outperforms its peers. Its high accuracy and efficiency imply its great potential for solving industrial-size group scheduling problems.
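As a flavor of one of the embedded improvement methods, here is a minimal sketch of an insertion-based local search over a job sequence, assuming a black-box cost() function (e.g., a weighted sum of late jobs and total setup time); the paper's actual move evaluation and batching constraints are richer than this.

```python
def insertion_local_search(seq, cost):
    """Repeatedly remove a job and re-insert it at the best position."""
    best, best_cost = list(seq), cost(seq)
    improved = True
    while improved:
        improved = False
        for i in range(len(best)):
            job = best[i]
            rest = best[:i] + best[i + 1:]
            for j in range(len(rest) + 1):  # try every insertion point
                cand = rest[:j] + [job] + rest[j:]
                c = cost(cand)
                if c < best_cost:
                    best, best_cost = cand, c
                    improved = True
    return best, best_cost

# Toy usage: minimize total "setup" distance between adjacent job ids.
jobs = [3, 1, 4, 2, 0]
toy_cost = lambda s: sum(abs(a - b) for a, b in zip(s, s[1:]))
print(insertion_local_search(jobs, toy_cost))
```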
133 citations
TL;DR: In this article, the authors present a comprehensive survey on recent results on fault estimation, detection, diagnosis and fault-tolerant control of multi-agent systems, and cyber attack detection and secure control of MASs subject to two typical cyber attacks.
Abstract: Multi-agent systems (MASs) are typically composed of multiple smart entities with independent sensing, communication, computing, and decision-making capabilities. Nowadays, MASs have a wide range of applications in smart grids, smart manufacturing, sensor networks, and intelligent transportation systems. Control of MASs is often coordinated through information interaction among agents, which is one of the most important factors affecting coordination and cooperation performance. However, unexpected physical faults and cyber attacks on a single agent may spread to other agents via information interaction very quickly, and thus could lead to severe degradation of the whole system performance and even destruction of MASs. This paper is concerned with the safety/security analysis and synthesis of MASs arising from physical faults and cyber attacks, and our goal is to present a comprehensive survey on recent results on fault estimation, detection, diagnosis, and fault-tolerant control of MASs, as well as cyber attack detection and secure control of MASs subject to two typical cyber attacks. Finally, the paper concludes with some potential future research topics on the security issues of MASs.
128 citations
TL;DR: Inspired by the two-layered structure of GSA, four layers consisting of population, iteration-best, personal-best, and global-best layers are constructed and dynamically implemented in different search stages to greatly improve both the exploration and exploitation abilities of the population.
Abstract: A gravitational search algorithm (GSA) uses gravitational force among individuals to evolve its population. Though GSA is an effective population-based algorithm, it exhibits low search performance and premature convergence. To ameliorate these issues, this work proposes a multi-layered GSA called MLGSA. Inspired by the two-layered structure of GSA, four layers consisting of population, iteration-best, personal-best, and global-best layers are constructed. Hierarchical interactions among the four layers are dynamically implemented in different search stages to greatly improve both the exploration and exploitation abilities of the population. Performance comparison between MLGSA and nine existing GSA variants on twenty-nine CEC2017 test functions with low, medium, and high dimensions demonstrates that MLGSA is the most competitive one. It is also compared with four particle swarm optimization variants to verify its excellent performance. Moreover, the analysis of hierarchical interactions is discussed to illustrate the influence of a complete hierarchy on its performance. The relationship between its population diversity and fitness diversity is analyzed to clarify its search performance. Its computational complexity is given to show its efficiency. Finally, it is applied to twenty-two CEC2011 real-world optimization problems to show its practicality.
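For reference, the baseline interaction that MLGSA's layered hierarchy builds on is the standard GSA force law (shown here in its common textbook form; the layer-specific interaction rules of MLGSA are not reproduced):

```latex
F_{ij}^{d}(t) = G(t)\,\frac{M_i(t)\,M_j(t)}{R_{ij}(t) + \varepsilon}\,
               \bigl(x_j^{d}(t) - x_i^{d}(t)\bigr),
\qquad
a_i^{d}(t) = \frac{1}{M_i(t)} \sum_{j \neq i} r_j\, F_{ij}^{d}(t),
```

where $G(t)$ is the decaying gravitational constant, $M_i(t)$ are inertial masses derived from fitness, $R_{ij}(t)$ is the Euclidean distance between individuals $i$ and $j$, and $r_j \in [0, 1]$ are random weights; in MLGSA, which individuals are allowed to interact is governed by the four-layer hierarchy.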
123 citations
TL;DR: The parameter identification work of existing serial and parallel robots is introduced, the methods for parameter calibration are summarized, and their advantages and disadvantages are discussed.
Abstract: With the continuous improvement of automation, industrial robots have become an indispensable part of automated production lines. They are widely used in a number of industrial production activities, such as spraying, welding, and handling, and play a great role in these sectors. Recently, robotic technology has been developing towards high precision and high intelligence. Robot calibration technology is of great significance for improving the accuracy of robots. However, much work remains to be done in the identification of robot parameters. The parameter identification work of existing serial and parallel robots is introduced. On the one hand, this paper summarizes the methods for parameter calibration and discusses their advantages and disadvantages. On the other hand, the application of parameter identification is introduced. This overview has great reference value for robot manufacturers in choosing a proper identification method, and points out further research areas for researchers. Finally, this paper analyzes the existing problems in robot calibration, which may be worth researching in the future.
TL;DR: In this paper, a literature review describes the role that deep learning plays in EMG-based human-machine interaction (HMI) applications and provides an overview of typical network structures and processing schemes.
Abstract: Electromyography (EMG) has already been broadly used in human-machine interaction (HMI) applications. Determining how to decode the information inside EMG signals robustly and accurately is a key problem for which we urgently need a solution. Recently, many EMG pattern recognition tasks have been addressed using deep learning methods. In this paper, we analyze recent papers and present a literature review describing the role that deep learning plays in EMG-based HMI. An overview of typical network structures and processing schemes will be provided. Recent progress in typical tasks such as movement classification, joint angle prediction, and force/torque estimation will be introduced. New issues, including multimodal sensing, inter-subject/inter-session, and robustness toward disturbances, will be discussed. We attempt to provide a comprehensive analysis of current research by discussing the advantages, challenges, and opportunities brought by deep learning. We hope that deep learning can aid in eliminating factors that hinder the development of EMG-based HMI systems. Furthermore, possible future directions will be presented to pave the way for future research.
TL;DR: This paper discusses the recent progress of disassembly sequence planning in four major aspects: product disassembly modeling methods, mathematical programming methods, artificial intelligence methods, and uncertainty handling.
Abstract: It is well-recognized that obsolete or discarded products can cause serious environmental pollution if they are poorly handled. They contain reusable resources that can be recycled and used to generate desired economic benefits. Therefore, performing their efficient disassembly is highly important in green manufacturing and sustainable economic development. Their typical examples are electronic appliances and electromechanical/mechanical products. This paper presents a survey on the state of the art of disassembly sequence planning. It can help new researchers or decision makers to search for the right solution for optimal disassembly planning. It reviews the disassembly theory and methods that are applied for the processing, repair, and maintenance of obsolete/discarded products. This paper discusses the recent progress of disassembly sequence planning in four major aspects: product disassembly modeling methods, mathematical programming methods, artificial intelligence methods, and uncertainty handling. This survey should stimulate readers to be engaged in the research, development, and applications of disassembly and remanufacturing methodologies in the Industry 4.0 era.
TL;DR: In this paper, a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm is proposed to accelerate the convergence rate by integrating momentum effects into its training process.
Abstract: A recommender system (RS) relying on latent factor analysis usually adopts stochastic gradient descent (SGD) as its learning algorithm. However, owing to its serial mechanism, an SGD algorithm suffers from low efficiency and scalability when handling large-scale industrial problems. Aiming at addressing this issue, this study proposes a momentum-incorporated parallel stochastic gradient descent (MPSGD) algorithm, whose main idea is two-fold: a) implementing parallelization via a novel data-splitting strategy, and b) accelerating convergence rate by integrating momentum effects into its training process. With it, an MPSGD-based latent factor (MLF) model is achieved, which is capable of performing efficient and high-quality recommendations. Experimental results on four high-dimensional and sparse matrices generated by industrial RS indicate that owing to an MPSGD algorithm, an MLF model outperforms the existing state-of-the-art ones in both computational efficiency and scalability.
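A minimal sketch of the momentum-incorporated update at the heart of such a model is given below, applied to a single observed rating; the data-splitting strategy that parallelizes MPSGD is omitted, and all dimensions, rates, and names are illustrative.

```python
import numpy as np

k, lr, beta, reg = 8, 0.01, 0.9, 0.05       # rank, step, momentum, L2
P = np.random.randn(100, k) * 0.1            # user latent factors
Q = np.random.randn(200, k) * 0.1            # item latent factors
VP, VQ = np.zeros_like(P), np.zeros_like(Q)  # momentum buffers

def sgd_momentum_step(u, i, r):
    """One momentum SGD update on observed rating r of user u, item i."""
    err = r - P[u] @ Q[i]                    # prediction error
    grad_p = -err * Q[i] + reg * P[u]        # regularized gradients
    grad_q = -err * P[u] + reg * Q[i]
    VP[u] = beta * VP[u] - lr * grad_p       # fold gradient into momentum
    VQ[i] = beta * VQ[i] - lr * grad_q
    P[u] += VP[u]                            # move along the velocity
    Q[i] += VQ[i]

sgd_momentum_step(3, 7, 4.0)
```

The momentum buffers accumulate a moving average of past gradients, which is what accelerates convergence relative to plain SGD on these sparse problems.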
TL;DR: A novel deep learning network for hand gesture recognition integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation.
Abstract: Hand gestures are a natural way for human-robot interaction. Vision-based dynamic hand gesture recognition has become a hot research topic due to its various applications. This paper presents a novel deep learning network for hand gesture recognition. The network integrates several well-proven modules to learn both short-term and long-term features from video inputs while avoiding intensive computation. To learn short-term features, each video input is segmented into a fixed number of frame groups. A frame is randomly selected from each group and represented as an RGB image as well as an optical flow snapshot. These two entities are fused and fed into a convolutional neural network (ConvNet) for feature extraction. The ConvNets for all groups share parameters. To learn long-term features, outputs from all ConvNets are fed into a long short-term memory (LSTM) network, by which a final classification result is predicted. The new model has been tested with two popular hand gesture datasets, namely the Jester dataset and the Nvidia dataset. Compared with other models, our model produced very competitive results. The robustness of the new model has also been proved with an augmented dataset with enhanced diversity of hand gestures.
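A minimal sketch of the "shared ConvNet per frame group, then LSTM" pattern is shown below, assuming Keras; the group count, backbone size, and input resolution are illustrative, and the RGB/optical-flow fusion step is omitted.

```python
from tensorflow.keras import layers, models

n_groups, n_classes = 8, 27   # e.g., the Jester dataset has 27 gestures

frames = layers.Input(shape=(n_groups, 112, 112, 3))
# One small ConvNet whose weights are shared across all frame groups.
convnet = models.Sequential([
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
])
x = layers.TimeDistributed(convnet)(frames)   # short-term features
x = layers.LSTM(128)(x)                       # long-term temporal features
out = layers.Dense(n_classes, activation="softmax")(x)

model = models.Model(frames, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```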
TL;DR: This paper presents an innovative investigation on prototyping a digital twin (DT) as the platform for human-robot interactive welding and welder behavior analysis, which provides better capability in engaging human users in interacting with welding scenes, through an augmented VR.
Abstract: This paper presents an innovative investigation on prototyping a digital twin (DT) as the platform for human-robot interactive welding and welder behavior analysis. This human-robot interaction (HRI) working style helps to enhance human users' operational productivity and comfort, while data-driven welder behavior analysis further benefits novice welder training. This HRI system includes three modules: 1) a human user who demonstrates the welding operations offsite, with her/his operations recorded by motion-tracked handles; 2) a robot that executes the demonstrated welding operations to complete the physical welding tasks onsite; 3) a DT system that is developed based on virtual reality (VR) as a digital replica of the physical human-robot interactive welding environment. The DT system bridges the human user and robot through a bi-directional information flow: a) transmitting demonstrated welding operations in VR to the robot in the physical environment; b) displaying the physical welding scenes to human users in VR. Compared to existing DT systems reported in the literature, the developed one provides better capability in engaging human users in interacting with welding scenes, through an augmented VR. To verify the effectiveness, six welders, three skilled (with manual welding training) and three unskilled (without any training), tested the system by completing the same welding job; the three skilled welders produced satisfactory welded workpieces, while the three unskilled welders did not. A data-driven approach combining fast Fourier transform (FFT), principal component analysis (PCA), and support vector machine (SVM) is developed to analyze their behaviors. Given an operation sequence, i.e., the motion speed sequence of the welding torch, frequency features are first extracted by FFT and then reduced in dimension through PCA, before finally being routed into the SVM for classification. The trained model demonstrates a 94.44% classification accuracy on the testing dataset. The successful pattern recognition of skilled welder operations should help accelerate novice welder training.
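A minimal sketch of the FFT-PCA-SVM pipeline on torch motion-speed sequences follows, using scikit-learn; the window length, component count, and synthetic data are illustrative stand-ins for the recorded welder operations.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer
from sklearn.svm import SVC

def fft_magnitude(X):
    # X: (n_samples, seq_len) motion-speed sequences of the welding torch.
    return np.abs(np.fft.rfft(X, axis=1))    # frequency-domain features

X = np.random.randn(60, 256)                 # placeholder speed sequences
y = np.repeat([0, 1], 30)                    # 0 = unskilled, 1 = skilled

clf = make_pipeline(FunctionTransformer(fft_magnitude),
                    PCA(n_components=10),    # dimensionality reduction
                    SVC())                   # classification
clf.fit(X, y)
print(clf.score(X, y))
```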
TL;DR: In this paper, a parameter-dependent set-membership filter was proposed for linear time-varying systems with norm-bounded noises and impulsive measurement outliers.
Abstract: This paper is concerned with the set-membership filtering problem for a class of linear time-varying systems with norm-bounded noises and impulsive measurement outliers. A new representation is proposed to model the measurement outlier by an impulsive signal whose minimum interval length (i.e., the minimum duration between two adjacent impulsive signals) and minimum norm (i.e., the minimum of the norms of all impulsive signals) are larger than certain thresholds that are adjustable according to engineering practice. In order to guarantee satisfactory filtering performance, a so-called parameter-dependent set-membership filter is put forward that is capable of generating a time-varying ellipsoidal region containing the true system state. First, a novel outlier detection strategy is developed, based on a dedicatedly constructed input-output model, to examine whether the received measurement is corrupted by an outlier. Then, through the outcome of the outlier detection, the gain matrix of the desired filter and the corresponding ellipsoidal region are calculated by solving two recursive difference equations. Furthermore, the ultimate boundedness issue on the time-varying ellipsoidal region is thoroughly investigated. Finally, a simulation example is provided to demonstrate the effectiveness of our proposed parameter-dependent set-membership filtering strategy.
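For context, the time-varying ellipsoidal region that a set-membership filter propagates is conventionally described as

```latex
\mathcal{E}_k = \bigl\{\, x \in \mathbb{R}^{n} :
  (x - \hat{x}_k)^{\top} P_k^{-1} (x - \hat{x}_k) \le 1 \,\bigr\},
```

where $\hat{x}_k$ is the ellipsoid center (the state estimate) and $P_k \succ 0$ is the shape matrix, obtained here from the two recursive difference equations; the filter guarantees that the true state satisfies $x_k \in \mathcal{E}_k$ at every time step. This is the standard ellipsoid parametrization, not a formula quoted from the paper.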
TL;DR: Zhang et al. as discussed by the authors proposed a smooth $L_1$-norm-oriented latent factor (SL-LF) model, which is more robust to outlier data.
Abstract: High-dimensional and sparse (HiDS) matrices commonly arise in various industrial applications, e.g., recommender systems (RSs), social networks, and wireless sensor networks. Since they contain rich information, how to accurately represent them is of great significance. A latent factor (LF) model is one of the most popular and successful ways to address this issue. Current LF models mostly adopt an $L_{2}$-norm-oriented loss to represent an HiDS matrix, i.e., they sum the errors between observed data and predicted ones with the $L_{2}$-norm. Yet the $L_{2}$-norm is sensitive to outlier data. Unfortunately, outlier data usually exist in such matrices. For example, an HiDS matrix from RSs commonly contains many outlier ratings due to some heedless/malicious users. To address this issue, this work proposes a smooth $L_{1}$-norm-oriented latent factor (SL-LF) model. Its main idea is to adopt the smooth $L_{1}$-norm rather than the $L_{2}$-norm to form its loss, giving it both strong robustness and high accuracy in predicting the missing data of an HiDS matrix. Experimental results on eight HiDS matrices generated by industrial applications verify that the proposed SL-LF model not only is robust to outlier data but also has significantly higher prediction accuracy than state-of-the-art models when they are used to predict the missing data of HiDS matrices.
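A minimal sketch of a smooth $L_1$ (Huber-style) loss, one common differentiable surrogate for the $L_1$-norm, is given below; the paper's exact smooth form may differ.

```python
import numpy as np

def smooth_l1(err, delta=1.0):
    """Quadratic near zero (differentiable), linear in the tails (robust)."""
    a = np.abs(err)
    # The two branches match in value and slope at |err| = delta.
    return np.where(a < delta, 0.5 * a**2 / delta, a - 0.5 * delta)

errs = np.array([-3.0, -0.2, 0.0, 0.5, 4.0])
print(smooth_l1(errs))   # large errors grow linearly, not quadratically
```

Because large errors are penalized linearly rather than quadratically, a handful of outlier ratings cannot dominate the fitted latent factors, which is the robustness argument made above.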
TL;DR: In this paper, the authors proposed a cloud workflow scheduling approach which combines particle swarm optimization and idle time slot-aware rules, to minimize the execution cost of a workflow application under a deadline constraint.
Abstract: Workflow scheduling is a key issue and remains a challenging problem in cloud computing. Faced with the large number of virtual machine (VM) types offered by cloud providers, cloud users need to choose the most appropriate VM type for each task. Multiple task scheduling sequences exist in a workflow application. Different task scheduling sequences have a significant impact on the scheduling performance. It is not easy to determine the most appropriate set of VM types for tasks and the best task scheduling sequence. Besides, the idle time slots on VM instances should be used fully to increase resource utilization and save the execution cost of a workflow. This paper considers these three aspects simultaneously and proposes a cloud workflow scheduling approach which combines particle swarm optimization (PSO) and idle time slot-aware rules, to minimize the execution cost of a workflow application under a deadline constraint. A new particle encoding is devised to represent the VM type required by each task and the scheduling sequence of tasks. An idle time slot-aware decoding procedure is proposed to decode a particle into a scheduling solution. To handle tasks' invalid priorities caused by the randomness of PSO, a repair method is used to repair those priorities to produce valid task scheduling sequences. The proposed approach is compared with state-of-the-art cloud workflow scheduling algorithms. Experiments show that the proposed approach outperforms the comparative algorithms in terms of both the execution cost and the success rate in meeting the deadline.
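A minimal sketch of the two-part particle encoding idea follows: one half of the position vector selects a VM type per task, the other half induces a task ordering. The names are illustrative, and the idle-slot-aware decoding and priority repair described above are not reproduced.

```python
import numpy as np

n_tasks, n_vm_types = 5, 3
particle = np.random.rand(2 * n_tasks)   # continuous PSO position

def decode(particle):
    # First half -> a VM type index for each task.
    vm = np.minimum((particle[:n_tasks] * n_vm_types).astype(int),
                    n_vm_types - 1)
    # Second half -> scheduling sequence by sorting priority values
    # (precedence repair for invalid orders is omitted here).
    order = np.argsort(particle[n_tasks:])
    return vm, order

print(decode(particle))
```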
TL;DR: This paper presents a new finite-time convergence disturbance rejection control scheme for a flexible Timoshenko manipulator subject to extraneous disturbances; the controlled system is guaranteed to be uniformly bounded stable, and disturbance estimation errors converge to zero in finite time.
Abstract: This paper focuses on a new finite-time convergence disturbance rejection control scheme design for a flexible Timoshenko manipulator subject to extraneous disturbances. To suppress the shear deformation and elastic oscillation, position the manipulator at a desired angle, and ensure the finite-time convergence of disturbances, we develop three disturbance observers (DOs) and boundary controllers. Under the derived DOs-based control schemes, the controlled system is guaranteed to be uniformly bounded stable, and disturbance estimation errors converge to zero in a finite time. Finally, numerical simulations based on finite difference methods are carried out to demonstrate the effectiveness of the devised scheme with appropriately selected parameters.
TL;DR: In this article, a matching and multi-round allocation (MMA) algorithm is proposed to optimize the makespan and total cost for all submitted tasks subject to security and reliability constraints in multi-cloud systems.
Abstract: Multi-cloud systems have been on the rise. For safety-critical missions, it is important to guarantee their security and reliability. To address trust constraints in a heterogeneous multi-cloud environment, this work proposes a novel scheduling method called matching and multi-round allocation (MMA) to optimize the makespan and total cost for all submitted tasks subject to security and reliability constraints. The method is divided into two phases for task scheduling. The first phase is to find the best matching candidate resources for the tasks to meet their preferential demands, including performance, security, and reliability, in a multi-cloud environment; the second one iteratively performs multiple rounds of re-allocation to optimize task execution time and cost by minimizing the variance of the estimated completion time. The proposed algorithm, the modified cuckoo search (MCS), hybrid chaotic particle search (HCPS), modified artificial bee colony (MABC), max-min, and min-min algorithms are implemented in CloudSim to create simulations. The simulations and experimental results show that our proposed method achieves shorter makespan, lower cost, higher resource utilization, and a better trade-off between time and economic cost. It is more stable and efficient.
TL;DR: Wang et al. as mentioned in this paper proposed a decentralized blockchain-based route registration framework, the decentralized route registration system based on blockchain (DRRS-BC), which produces a global transaction ledger from the information of address prefixes and autonomous system numbers between multiple organizations and ASs; the ledger is maintained by all blockchain nodes and further used for authentication.
Abstract: The border gateway protocol (BGP) has become the indispensable infrastructure of the Internet as a typical inter-domain routing protocol. However, it is vulnerable to misconfigurations and malicious attacks since BGP does not provide an adequate authentication mechanism for route advertisements. As a result, it has brought about many security incidents with huge economic losses. Existing solutions to the routing security problem, such as S-BGP, So-BGP, Ps-BGP, and RPKI, are based on the Public Key Infrastructure and face a high security risk from the centralized structure. In this paper, we propose a decentralized blockchain-based route registration framework, the decentralized route registration system based on blockchain (DRRS-BC). In DRRS-BC, we produce a global transaction ledger from the information of address prefixes and autonomous system numbers between multiple organizations and ASs, which is maintained by all blockchain nodes and further used for authentication. By applying blockchain, DRRS-BC solves the problems of identity authentication and behavior authentication, as well as the promotion and deployment problem, without depending on an authentication center. Moreover, it resists prefix and subprefix hijacking attacks and meets the performance and security requirements of route registration.
TL;DR: Based on the novel Lyapunov-Krasovskii functional and the necessary and sufficient condition on matrix-valued polynomial inequalities, two stability criteria are derived for two cases of the time-varying delay.
Abstract: One of the challenging issues in the stability analysis of time-delay systems is how to obtain a stability criterion from a matrix-valued polynomial in a time-varying delay. The first contribution of this paper is to establish a necessary and sufficient condition on a matrix-valued polynomial inequality over a certain closed interval. The degree of such a matrix-valued polynomial can be an arbitrary finite positive integer. The second contribution of this paper is to introduce a novel Lyapunov-Krasovskii functional, which includes a cubic polynomial in a time-varying delay, into the stability analysis of time-delay systems. Based on the novel Lyapunov-Krasovskii functional and the necessary and sufficient condition on matrix-valued polynomial inequalities, two stability criteria are derived for two cases of the time-varying delay. A well-studied numerical example is given to show that the proposed stability criteria are less conservative than some existing ones.
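As an illustration of the object being analyzed, a matrix-valued polynomial condition on the delay takes the generic form

```latex
P(h) \;=\; \sum_{i=0}^{N} h^{i} P_i \;\succ\; 0,
\qquad \forall\, h \in [h_1,\, h_2],
```

where $h$ is the time-varying delay, the $P_i$ are constant symmetric matrices, and $[h_1, h_2]$ is the admissible delay interval; this display is a generic schematic rather than the paper's notation. Certifying such an inequality over the whole interval, for arbitrary finite degree $N$, is what the paper's necessary and sufficient condition provides, and low-degree instances of it arise from the cubic term in the proposed functional.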
TL;DR: In this paper, the authors highlight vehicle networks' evolution from vehicular ad-hoc networks (VANET) to the internet of vehicles (IoVs), listing their benefits and limitations.
Abstract: Determining how to structure vehicular network environments can be done in various ways. Here, we highlight vehicle networks' evolution from vehicular ad-hoc networks (VANET) to the internet of vehicles (IoVs), listing their benefits and limitations. We also highlight the reasons for adopting wireless technologies, in particular, IEEE 802.11p and 5G vehicle-to-everything, as well as the use of paradigms able to store and analyze a vast amount of data to produce intelligence and their applications in vehicular environments. We also correlate the use of each of these paradigms with the desire to meet existing intelligent transportation systems' requirements. The presentation of each paradigm is given from a historical and logical standpoint. In particular, vehicular fog computing addresses the deficiencies of vehicular cloud computing, so the two are not mutually exclusive from the application point of view. We also emphasize some security issues that are linked to the characteristics of these paradigms and vehicular networks, showing that they complement each other and share problems and limitations. As these networks still have many opportunities to grow in both concept and application, we finally discuss concepts and technologies that we believe are beneficial. Throughout this work, we emphasize the crucial role of these concepts for the well-being of humanity.
TL;DR: In this article, the authors consider the problem of solving complex multi-stage decision problems using methods that are based on the idea of policy iteration (PI), i.e., start from some base policy and generate an improved policy.
Abstract: We discuss the solution of complex multistage decision problems using methods that are based on the idea of policy iteration (PI), i.e., start from some base policy and generate an improved policy. Rollout is the simplest method of this type, where just one improved policy is generated. We can view PI as repeated application of rollout, where the rollout policy at each iteration serves as the base policy for the next iteration. In contrast with PI, rollout has a robustness property: it can be applied on-line and is suitable for on-line replanning. Moreover, rollout can use as base policy one of the policies produced by PI, thereby improving on that policy. This is the type of scheme underlying the prominently successful AlphaZero chess program. In this paper, we focus on rollout and PI-like methods for problems where the control consists of multiple components, each selected (conceptually) by a separate agent. This is the class of multiagent problems where the agents have a shared objective function and shared, perfect state information. Based on a problem reformulation that trades off control space complexity with state space complexity, we develop an approach whereby, at every stage, the agents sequentially (one at a time) execute a local rollout algorithm that uses a base policy, together with some coordinating information from the other agents. The amount of total computation required at every stage grows linearly with the number of agents. By contrast, in the standard rollout algorithm, the amount of total computation grows exponentially with the number of agents. Despite the dramatic reduction in required computation, we show that our multiagent rollout algorithm has the fundamental cost improvement property of standard rollout: it guarantees an improved performance relative to the base policy. We also discuss autonomous multiagent rollout schemes that allow the agents to make decisions autonomously through the use of precomputed signaling information, which is sufficient to maintain the cost improvement property, without any on-line coordination of control selection between the agents. For discounted and other infinite horizon problems, we also consider exact and approximate PI algorithms involving a new type of one-agent-at-a-time policy improvement operation. For one of our PI algorithms, we prove convergence to an agent-by-agent optimal policy, thus establishing a connection with the theory of teams. For another PI algorithm, which is executed over a more complex state space, we prove convergence to an optimal policy. Approximate forms of these algorithms are also given, based on the use of policy and value neural networks. These PI algorithms, in both their exact and their approximate form, are strictly off-line methods, but they can be used to provide a base policy for use in an on-line multiagent rollout scheme.
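A minimal sketch of the one-agent-at-a-time improvement step is given below, assuming a black-box Q(state, joint_action) that simulates the cost-to-go of applying a joint action and following the base policy thereafter; the agents, action sets, and toy cost are all illustrative.

```python
def multiagent_rollout_step(state, agents, actions, base_policy, Q):
    """Improve the base policy's joint action one agent at a time."""
    joint = {a: base_policy(state, a) for a in agents}
    for a in agents:                       # fixed agent order
        best_u, best_q = joint[a], float("inf")
        for u in actions(state, a):        # only this agent's choices
            cand = dict(joint)
            cand[a] = u
            q = Q(state, cand)             # simulate base policy onward
            if q < best_q:
                best_u, best_q = u, q
        joint[a] = best_u                  # later agents see this choice
    return joint

# Toy usage: two agents choose bits; cost counts zeros in the joint action.
agents = ["a1", "a2"]
actions = lambda s, a: [0, 1]
base = lambda s, a: 0
Q = lambda s, joint: sum(1 - u for u in joint.values())
print(multiagent_rollout_step(None, agents, actions, base, Q))
```

The inner loop explores only one agent's action set at a time, which is the source of the linear (rather than exponential) growth in per-stage computation noted above.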
TL;DR: A comprehensive review of typical recommendation techniques and their applications in the field of healthcare is presented in this article, where an overview is provided on three famous recommendation techniques, namely, content-based, collaborative filtering (CF)-based, and hybrid methods.
Abstract: With the increasing amount of information on the internet, recommendation system (RS) has been utilized in a variety of fields as an efficient tool to overcome information overload. In recent years, the application of RS for health has become a growing research topic due to its tremendous advantages in providing appropriate recommendations and helping people make the right decisions relating to their health. This paper aims at presenting a comprehensive review of typical recommendation techniques and their applications in the field of healthcare. More concretely, an overview is provided on three famous recommendation techniques, namely, content-based, collaborative filtering (CF)-based, and hybrid methods. Next, we provide a snapshot of five application scenarios about health RS, which are dietary recommendation, lifestyle recommendation, training recommendation, decision-making for patients and physicians, and disease-related prediction. Finally, some key challenges are given with clear justifications to this new and booming field.
TL;DR: A novel automatic classification system for homogenous (anomaly-free) and non-homogenous (with defects) nanofibers is proposed, and is shown to outperform other standard machine learning models and other recent state-of-the-art techniques.
Abstract: The manufacturing of nanomaterials by the electrospinning process requires accurate and meticulous inspection of related scanning electron microscope (SEM) images of the electrospun nanofiber, to ensure that no structural defects are produced. The presence of anomalies prevents practical application of the electrospun nanofibrous material in nanotechnology. Hence, the automatic monitoring and quality control of nanomaterials is a relevant challenge in the context of Industry 4.0. In this paper, a novel automatic classification system for homogenous (anomaly-free) and non-homogenous (with defects) nanofibers is proposed. The inspection procedure aims at avoiding direct processing of the redundant full SEM image. Specifically, the image to be analyzed is first partitioned into sub-images (nanopatches) that are then used as input to a hybrid unsupervised and supervised machine learning system. In the first step, an autoencoder (AE) is trained with unsupervised learning to generate a code representing the input image with a vector of relevant features. Next, a multilayer perceptron (MLP), trained with supervised learning, uses the extracted features to classify non-homogenous nanofiber (NH-NF) and homogenous nanofiber (H-NF) patches. The resulting novel AE-MLP system is shown to outperform other standard machine learning models and other recent state-of-the-art techniques, reporting an accuracy rate of up to 92.5%. In addition, the proposed approach leads to model complexity reduction with respect to other deep learning strategies such as convolutional neural networks (CNNs). The encouraging performance achieved in this benchmark study can stimulate the application of the proposed scheme in other challenging industrial manufacturing tasks.
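A minimal sketch of the AE-MLP idea in Keras follows: an autoencoder is first trained without labels to compress nanopatches into a short code, then an MLP classifies the codes. Patch size, code size, and layer widths are illustrative, and random arrays stand in for the SEM nanopatches.

```python
import numpy as np
from tensorflow.keras import layers, models

patch_dim, code_dim = 64 * 64, 32            # flattened patch -> code

inp = layers.Input(shape=(patch_dim,))
code = layers.Dense(code_dim, activation="relu")(inp)        # encoder
recon = layers.Dense(patch_dim, activation="sigmoid")(code)  # decoder
autoencoder = models.Model(inp, recon)
encoder = models.Model(inp, code)
autoencoder.compile(optimizer="adam", loss="mse")

X = np.random.rand(100, patch_dim)           # placeholder nanopatches
y = np.random.randint(0, 2, 100)             # 0 = H-NF, 1 = NH-NF

autoencoder.fit(X, X, epochs=1, verbose=0)   # unsupervised step

mlp = models.Sequential([layers.Input(shape=(code_dim,)),
                         layers.Dense(16, activation="relu"),
                         layers.Dense(1, activation="sigmoid")])
mlp.compile(optimizer="adam", loss="binary_crossentropy")
mlp.fit(encoder.predict(X, verbose=0), y, epochs=1, verbose=0)  # supervised step
```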
TL;DR: In this article, an RUL estimation method with risk-averse adaptation is developed, which can reduce the over-estimation rate while maintaining a reasonable under-estimation level, and a cost metric is proposed to measure the benefit of such a risk-averse predictive maintenance method.
Abstract: Remaining useful life (RUL) prediction is an advanced technique for system maintenance scheduling. Most existing RUL prediction methods are only interested in the precision of RUL estimation; the adverse impact of over-estimated RUL on maintenance scheduling is not of concern. In this work, an RUL estimation method with risk-averse adaptation is developed which can reduce the over-estimation rate while maintaining a reasonable under-estimation level. The proposed method includes a module of degradation feature selection to obtain crucial features which reflect system degradation trends. Then, the latent structure between the degradation features and the RUL labels is modeled by a support vector regression (SVR) model and a long short-term memory (LSTM) network, respectively. To enhance the prediction robustness and increase its marginal utility, the SVR model and the LSTM model are integrated to generate a hybrid model via three connection parameters. By designing a cost function with a penalty mechanism, the three parameters are determined using a modified grey wolf optimization algorithm. In addition, a cost metric is proposed to measure the benefit of such a risk-averse predictive maintenance method. Verification is done using an aero-engine data set from NASA. The results show the feasibility and effectiveness of the proposed RUL estimation method and the predictive maintenance strategy.
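A minimal sketch of fusing the two predictors through connection parameters is shown below; the exact fusion form is an assumption (the paper determines its three parameters with a modified grey wolf optimizer and a penalty-based cost function).

```python
import numpy as np

def hybrid_rul(rul_svr, rul_lstm, w1, w2, b):
    """Hypothetical fusion: weighted sum of the two estimates plus a bias."""
    return w1 * np.asarray(rul_svr) + w2 * np.asarray(rul_lstm) + b

rul_svr = np.array([102.0, 88.5])    # placeholder SVR predictions (cycles)
rul_lstm = np.array([97.0, 91.0])    # placeholder LSTM predictions (cycles)
print(hybrid_rul(rul_svr, rul_lstm, w1=0.4, w2=0.6, b=-1.0))
```

A risk-averse choice of the parameters would penalize over-estimation more heavily than under-estimation, matching the asymmetric cost function described above.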
TL;DR: In this paper, a generative adversarial network (GAN) for foggy image semantic segmentation (FISS GAN) is proposed, which contains two parts: an edge GAN and a semantic segmentation GAN.
Abstract: Because pixel values of foggy images are irregularly higher than those of images captured in normal weather (clear images), it is difficult to extract and express their texture. No method has previously been developed to directly explore the relationship between foggy images and semantic segmentation images. We investigated this relationship and propose a generative adversarial network (GAN) for foggy image semantic segmentation (FISS GAN), which contains two parts: an edge GAN and a semantic segmentation GAN. The edge GAN is designed to generate edge information from foggy images to provide auxiliary information to the semantic segmentation GAN. The semantic segmentation GAN is designed to extract and express the texture of foggy images and generate semantic segmentation images. Experiments on the Foggy Cityscapes and Foggy Driving datasets indicated that FISS GAN achieved state-of-the-art performance.
TL;DR: In this paper, an adaptive dynamic programming (ADP) strategy is investigated for discrete-time nonlinear systems with unknown nonlinear dynamics subject to input saturation, and a sufficient condition is proposed to achieve the stability of both system identification errors and the update dynamics of NN weights.
Abstract: In this paper, an adaptive dynamic programming (ADP) strategy is investigated for discrete-time nonlinear systems with unknown nonlinear dynamics subject to input saturation. To save the communication resources between the controller and the actuators, stochastic communication protocols (SCPs) are adopted to schedule the control signal, and therefore the closed-loop system is essentially a protocol-induced switching system. A neural network (NN)-based identifier with a robust term is exploited for approximating the unknown nonlinear system, and a set of switch-based updating rules with an additional tunable parameter of NN weights are developed with the help of the gradient descent. By virtue of a novel Lyapunov function, a sufficient condition is proposed to achieve the stability of both system identification errors and the update dynamics of NN weights. Then, a value iterative ADP algorithm in an offline way is proposed to solve the optimal control of protocol-induced switching systems with saturation constraints, and the convergence is profoundly discussed in light of mathematical induction. Furthermore, an actor-critic NN scheme is developed to approximate the control law and the proposed performance index function in the framework of ADP, and the stability of the closed-loop system is analyzed in view of the Lyapunov theory. Finally, the numerical simulation results are presented to demonstrate the effectiveness of the proposed control scheme.
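As a generic illustration of the value-iteration backbone underlying such ADP schemes (not the paper's NN-based, protocol-aware algorithm), here is a toy offline value iteration on a discretized scalar system with a saturated input grid; the dynamics, cost, and grids are all illustrative.

```python
import numpy as np

xs = np.linspace(-1.0, 1.0, 41)                  # state grid
us = np.linspace(-0.5, 0.5, 11)                  # saturated input grid
f = lambda x, u: np.clip(0.9 * x + u, -1.0, 1.0) # toy dynamics
r = lambda x, u: x**2 + u**2                     # stage cost

V = np.zeros_like(xs)
for _ in range(200):                             # value-iteration sweeps
    V_new = np.empty_like(V)
    for i, x in enumerate(xs):
        # Bellman update: V(x) <- min_u [ r(x, u) + V(f(x, u)) ]
        q = r(x, us) + np.interp(f(x, us), xs, V)
        V_new[i] = q.min()
    V = V_new
print(V[::10])   # sampled values of the converged cost-to-go
```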