
Showing papers by "Zhihan Lv published in 2022"


Journal ArticleDOI
TL;DR: In this paper, the authors proposed a distributed parallelism strategy of convolutional neural network (CNN) for big data analysis (BDA) on the massive data generated in the smart city Internet of Things (IoT).

87 citations


Journal ArticleDOI
TL;DR: Through experiments, the BIM BD processing algorithm based on Bayesian Network Structural Learning (BNSL) helps decision-makers use complex data in smart cities efficiently.
Abstract: With the rapid development of information technology and the spread of Corona Virus Disease 2019 (COVID-19), governments and urban managers are looking for ways to use technology to make cities smarter and safer. Intelligent transportation can play a very important role in joint prevention and control. This work explores the building information modeling (BIM) big data (BD) processing method of digital twins (DTs) for the smart city, thus speeding up smart-city construction and improving the accuracy of data processing. During construction, DTs build an identical digital copy of the smart city. On this basis, BIM designs the building's keel and structure, optimizing the building's various resources and configurations. To cope with the fast data growth in smart cities, an efficient learning algorithm for complex data fusion, namely Multi-Graphics Processing Unit (Multi-GPU), is proposed to process multi-dimensional, complex BD based on a compositive rough set model. A Bayesian network handles the multi-label classification: each label is regarded as a Bayesian network node, and a structural learning approach learns the label Bayesian network's structure from data. On the P53-old and P53-new datasets, the running time of Multi-GPU decreases as the number of GPUs increases, approaching the ideal linear speedup ratio. As the value of K increases, the deterministic information input into the label Bayesian network is reduced, lowering the classification accuracy; when K = 3, MLBN provides the best data analysis performance. On the genbase dataset, the accuracy of MLBN is 0.982 ± 0.013. The experiments show that the BIM BD processing algorithm based on Bayesian Network Structural Learning (BNSL) helps decision-makers use complex data in smart cities efficiently.

55 citations
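The multi-label idea summarized in this abstract (each label treated as a node whose dependencies on other labels are learned from data) can be approximated, very roughly, by a classifier chain. The pure-NumPy sketch below is an illustrative stand-in, not the paper's BNSL/MLBN implementation; all hyperparameters are made up for the example.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=300):
    """Plain gradient-descent logistic regression (bias folded into X)."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

def fit_chain(X, Y):
    """Train one classifier per label; each later label also sees the
    earlier labels as inputs, approximating label dependencies."""
    X = np.hstack([X, np.ones((len(X), 1))])        # add bias column
    models, Z = [], X
    for j in range(Y.shape[1]):
        w = train_logistic(Z, Y[:, j])
        models.append(w)
        Z = np.hstack([Z, Y[:, [j]]])               # feed true label forward
    return models

def predict_chain(models, X):
    X = np.hstack([X, np.ones((len(X), 1))])
    Z, preds = X, []
    for w in models:
        p = (sigmoid(Z @ w) > 0.5).astype(float)
        preds.append(p)
        Z = np.hstack([Z, p[:, None]])              # feed prediction forward
    return np.stack(preds, axis=1)
```

A learned Bayesian network would additionally choose *which* label dependencies to keep; the chain simply feeds every earlier label forward.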


Journal ArticleDOI
TL;DR: It is proved that the built SHPE model shows higher prediction accuracy and smaller error while ensuring the safety performance, which provides an experimental reference for the prediction and evaluation of smart healthcare treatment in the later stage.
Abstract: This study aims to enhance the security of people's health information, further improve the medical level, and increase the confidentiality of people's private information. Given the wide application of deep learning algorithms, the convolutional neural network (CNN) is modified to build an interactive smart healthcare prediction and evaluation model (SHPE model) based on the deep learning model. The model is optimized and standardized for data processing, and the constructed model is then simulated to analyze its performance. The results show that the accuracy of the constructed system reaches 82.4%, which is at least 2.4% higher than other advanced CNN algorithms and 3.3% higher than classical machine learning algorithms. Comparison proves that the accuracy, precision, recall, and F1 of the constructed model are the highest. Further error analysis shows that the constructed model has the smallest error, at 23.34 pixels. Therefore, the built SHPE model achieves higher prediction accuracy and smaller error while ensuring safety performance, which provides an experimental reference for the prediction and evaluation of smart healthcare treatment at a later stage.

55 citations


Journal ArticleDOI
Xin Liu, Jianwei Zhao, Jie Li, Bin Cao, Zhihan Lv 
TL;DR: Experimental verification demonstrates that the designed multiobjective CIT2FR-FL-NAS framework can achieve high accuracy superior to state-of-the-art models and reduce network complexity under the condition of protecting medical data security.
Abstract: Medical data widely exist in hospitals and personal life, usually across institutions and regions, and have essential diagnostic value and therapeutic significance. The disclosure of patient information causes public panic; therefore, medical data security solutions are crucial for intelligent health care. The emergence of federated learning (FL) provides an effective solution, which transmits only model parameters, breaking through the bottleneck of medical data sharing, protecting data security, and avoiding economic losses. Meanwhile, neural architecture search (NAS) has become a popular method to automatically search for the optimal neural architecture for solving complex practical problems. However, few papers have combined FL and NAS for simultaneous privacy protection and model architecture selection. The convolutional neural network (CNN) has outstanding performance in the image recognition field, and combining CNNs with fuzzy rough sets can effectively improve the interpretability of deep neural networks. This article aims to develop a multiobjective convolutional interval type-2 fuzzy rough FL model based on NAS (CIT2FR-FL-NAS) for medical data security with an improved multiobjective evolutionary algorithm. We test the proposed framework on the LC25000 lung and colon histopathological image dataset. Experimental verification demonstrates that the designed multiobjective CIT2FR-FL-NAS framework can achieve high accuracy, superior to state-of-the-art models, and reduce network complexity while protecting medical data security.

50 citations
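The FL mechanism described here, where only model parameters (never patient data) leave each institution, follows the FedAvg pattern. A minimal NumPy sketch on a toy linear model is shown below; it is generic background, not the CIT2FR-FL-NAS framework, and the learning rate and round count are illustrative.

```python
import numpy as np

def local_step(w, X, y, lr=0.1):
    """One local gradient step of linear regression on a client's data;
    only the updated parameters (not the data) leave the client."""
    grad = 2 * X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def fed_avg(clients, w0, rounds=50):
    """Server loop: broadcast weights, collect client updates, and
    aggregate them by data-size-weighted averaging (FedAvg)."""
    w = w0
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in clients:
            updates.append(local_step(w, X, y))
            sizes.append(len(y))
        w = np.average(np.stack(updates), axis=0,
                       weights=np.array(sizes, dtype=float))
    return w
```

In a real deployment each client would run several local epochs of a deep model between aggregation rounds; the privacy property comes from the same structure, since the server only ever sees parameters.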


Journal ArticleDOI
TL;DR: This work presents a deep-federated-learning-based framework for securing POI microservices in CPS and implements and evaluates the performance of the proposed approach using two real-world POI-related datasets.
Abstract: An essential consideration in cyber-physical systems (CPS) is the ability to support secure communication services, such as points of interest (POI) microservices. Existing approaches to support secure POI microservices generally rely on anonymity and/or differential privacy technologies. There are, however, a number of known limitations with such approaches. Hence, this work presents a deep-federated-learning-based framework for securing POI microservices in CPS. In order to enhance data security, the system architecture is designed to isolate the cloud center from accessing user data on edge nodes, and an interactive training mechanism is introduced between the cloud center and edge nodes. Specifically, edge nodes pre-train reliable deep-learning-based models for users, and the cloud server coordinates parameter updating via federated learning. The connected and isolated structure between cloud center and edges facilitates deep federated learning. Finally, we implement and evaluate the performance of our proposed approach using two real-world POI-related datasets. The results show that our proposed approach achieves optimal scheduling performance and demonstrates its practical utility.

45 citations


Journal ArticleDOI
TL;DR: In this article, a distributed hybrid fish swarm optimization algorithm (FSOA) based on the mobility of the underwater environment and artificial fish swarm (AFS) theory is proposed in response to the actual needs of UWSNs.
Abstract: The particularity of the marine underwater environment has brought many challenges to the development of underwater sensor networks (UWSNs). This research realizes effective monitoring of targets by UWSNs and achieves higher quality of service in applications such as communication, monitoring, and data transmission in the marine environment. After an analysis of the architecture, the marine integrated communication network system (MICN system) is constructed based on the maritime wireless Mesh network (MWMN) combined with the UWSNs. A distributed hybrid fish swarm optimization algorithm (FSOA) based on the mobility of the underwater environment and artificial fish swarm (AFS) theory is proposed in response to the actual needs of UWSNs. The proposed FSOA makes full use of the perceptual communication of sensor nodes and lets sensor nodes share the information covered by each other as much as possible, enhancing the global search ability. In addition, a reliable transmission protocol, NC-HARQ, is put forward based on the combination of network coding (NC) and hybrid automatic repeat request (HARQ). In this work, three sets of experiments are performed in an area of 200 × 200 × 200 m. The simulation results show that the FSOA can fully cover the events, effectively avoid the blind movement of nodes, and ensure a consistent distribution density of nodes and events. The NC-HARQ protocol uses relay nodes for retransmission, and the probability of successful retransmission is much higher than that of the source node. At a distance of more than 2,000 m, the successful delivery rate of data packets is as high as 99.6%. Based on the MICN system, an intelligent ship constructed with the digital twins framework can provide effective predictions of the ship's operating state. In summary, this study is of great value for improving the overall performance of UWSNs and advancing the monitoring of marine data information.

42 citations
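The core of artificial fish swarm optimization is the "prey" behavior: each fish tries a few random moves within its visual range and steps toward the first improving one, falling back to a random walk. The sketch below is a generic AFS-style hill climber on a continuous objective, not the distributed FSOA coverage algorithm from the paper; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def afs_optimize(fitness, n_fish=30, dim=2, visual=1.0, step=0.3,
                 try_num=5, iters=200, bounds=(-5.0, 5.0)):
    """Minimal artificial fish swarm: each fish tries random candidates
    within its 'visual' range (prey behavior); if none improves its
    fitness, it takes a small random walk instead."""
    lo, hi = bounds
    fish = rng.uniform(lo, hi, size=(n_fish, dim))
    best = max(fish, key=fitness).copy()
    for _ in range(iters):
        for i in range(n_fish):
            moved = False
            for _ in range(try_num):                  # prey behavior
                cand = np.clip(fish[i] + rng.uniform(-visual, visual, dim),
                               lo, hi)
                if fitness(cand) > fitness(fish[i]):
                    d = cand - fish[i]
                    fish[i] = np.clip(
                        fish[i] + step * d / (np.linalg.norm(d) + 1e-12),
                        lo, hi)
                    moved = True
                    break
            if not moved:                             # random walk
                fish[i] = np.clip(fish[i] + step * rng.uniform(-1, 1, dim),
                                  lo, hi)
            if fitness(fish[i]) > fitness(best):
                best = fish[i].copy()
    return best
```

The full algorithm also includes swarm and follow behaviors (moving toward neighbors' centers or the best neighbor), which is where the information sharing described in the abstract comes in.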


Journal ArticleDOI
TL;DR: A novel neural network model is designed by integrating the gene expression programming into the interval type-2 fuzzy rough neural network, aiming to generate fuzzy rules with more expressiveness utilizing various logical operators.
Abstract: The fuzzy logic-based neural network usually forms fuzzy rules via multiplying the input membership degrees, which lacks expressiveness and flexibility. In this article, a novel neural network model is designed by integrating the gene expression programming into the interval type-2 fuzzy rough neural network, aiming to generate fuzzy rules with more expressiveness utilizing various logical operators. The network training is regarded as a multiobjective optimization problem through simultaneously considering network precision, explainability, and generalization. Specifically, the network complexity can be minimized to generate concise and few fuzzy rules for improving the network explainability. Inspired by the extreme learning machine and the broad learning system, an enhanced distributed parallel multiobjective evolutionary algorithm is proposed. This evolutionary algorithm can flexibly explore the forms of fuzzy rules, and the weight refinement of the final layer can significantly improve precision and convergence by solving the pseudoinverse. Experimental results show that the proposed multiobjective evolutionary network framework is superior in both effectiveness and explainability.

36 citations
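The closing remark about solving the final layer's weights via the pseudoinverse is the extreme-learning-machine trick mentioned in the abstract. A minimal sketch follows, with random tanh features standing in for the fuzzy rule layer; it illustrates the closed-form output-weight solve, not the paper's network.

```python
import numpy as np

def elm_fit(X, y, hidden=64, seed=0):
    """ELM-style training: a fixed random hidden layer, with output
    weights solved in closed form via the Moore-Penrose pseudoinverse."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))
    b = rng.normal(size=hidden)
    H = np.tanh(X @ W + b)                 # random nonlinear features
    beta = np.linalg.pinv(H) @ y           # least-squares output weights
    return (W, b, beta)

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

Because the output weights are a linear least-squares solution, no iterative training of the last layer is needed, which is exactly why it speeds up convergence in an evolutionary search over the earlier layers.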


Journal ArticleDOI
01 May 2022-Patterns
TL;DR: In this paper, the authors proposed a secure multidimensional data storage solution called BlockNet that can ensure the security of the digital mapping process of the Internet of Things, thereby improving the data reliability of Digital Twins.

33 citations


Journal ArticleDOI
TL;DR: In this article, a digital twins framework in the construction field is constructed based on building information modeling (BIM), and the digital twins process is fully applied to the various stages of building construction.

27 citations


Journal ArticleDOI
TL;DR: In this article, the concepts and several practical applications of Digital Twins (DT) are introduced, and, combined with the current development status of DT, future development trends are predicted and summarized.
Abstract: With the development of science and technology, the high-tech industry is developing rapidly, various new technologies continue to appear, and Digital Twins (DT) is one of them. As a brand-new interactive technology, DT can handle the interaction between the real world and the virtual world well, and it has become a hot spot in academic circles worldwide. DT have developed rapidly in recent years as a result of their centrality, integrity, and dynamics. Integrated with other technologies, DT have been applied in many fields, such as smart factories in industrial production, digital models of life in the medical field, the construction of smart cities, security guarantees in the aerospace field, and immersive shopping in the commercial field. Most introductions to DT are summaries of concepts, and few practical applications are described. The purpose of this paper is to enable people to understand the application status of DT technology; at the same time, introductions to the core technologies related to DT are interspersed throughout the application overview. Finally, combined with the current development status of DT, the paper predicts future development trends and provides a summary.

26 citations


Journal ArticleDOI
TL;DR: In this article, the main contents of Intelligent Green Buildings (IGB) are sorted out, the application and role of Digital Twins (DTs) in intelligent buildings are summarized, and the advantages of DTs are further investigated in the context of IGB for DT smart cities.
Abstract: At present, the integration of green buildings, the intelligent building industry, and high-quality development is facing a series of new opportunities and challenges. This review analyzes the digital development of smart green buildings to make it easier to create contiguous ecological development areas in green ecological cities. It sorts out the main contents of Intelligent Green Buildings (IGB) and summarizes the application and role of Digital Twins (DTs) in intelligent buildings. Firstly, the basic connotations and development direction of IGB are discussed in depth, and the current realization and applications of IGB are analyzed. Then, the advantages of DTs are further investigated in the context of IGB for DT smart cities. Finally, the development trends and challenges of IGB are analyzed. The review finds that IGB have been realized and applied, but DTs are not yet well integrated into IGB design. Therefore, forward-looking design is required when designing IGBs, such as prioritizing sustainable development, people's livelihoods, and green structures. At the same time, an IGB can only show its significance after the basic process of building the application layer is performed correctly. This review therefore contributes to the proper integration of IGB and urban development strategies, which is crucial to encouraging the long-term development of cities, providing a theoretical basis and practical experience for promoting the development of smart cities.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors proposed a simple yet effective residual learning diagnosis system (RLDS) for diagnosing fetal congenital heart disease (CHD) to improve diagnostic accuracy, which adopts convolutional neural networks to extract discriminative features of the fetal cardiac anatomical structures.

Journal ArticleDOI
TL;DR: In this paper, a novel data fusion algorithm (DFA) is constructed by combining a Backpropagation Neural Network (BPNN) with the Dynamic Host Configuration Protocol (DHCP), recorded as BP-DHCP, which can prolong sensor nodes' survival time and provide the highest data fusion quality.

Journal ArticleDOI
TL;DR: The constructed algorithm can significantly reduce the data transmission delay, improve the prediction accuracy of the safe interaction between autonomous vehicles and pedestrians, and increase the recognition accuracy remarkably, which can provide experimental references for the intelligent development of the transportation industry in the future.

Journal ArticleDOI
TL;DR: In this article, the authors analyze how Digital Twins (DTs) technology and AI technology show significant advantages in the classification of transportation infrastructure and the management of the transportation spatial information network.

Journal ArticleDOI
01 Nov 2022
TL;DR: In this paper, a secure communication architecture for Internet of Vehicles (IoV) nodes in intelligent transportation is proposed through a study of the safety of the IoV in smart transportation based on Blockchain (BC).
Abstract: The present work aims to improve the communication security of Internet of Vehicles (IoV) nodes in intelligent transportation by studying the safety of the IoV in smart transportation based on Blockchain (BC). An IoV DTs model is built by combining big data with Digital Twins (DTs). Then, regarding current IoV communication security issues, a secure communication architecture for the IoV system is proposed based on immutable and trackable BC data. Besides, a Wasserstein Distance Based Generative Adversarial Network (WaGAN) model is used to construct the IoV node risk forecast model. Because the WaGAN model calculates the loss function through the Wasserstein distance, the learning rate of the model accelerates remarkably; after ten iterations, the loss rate of the WaGAN model is close to zero. Massive numbers of in-vehicle devices in the IoV connect simultaneously to the base station, causing network channel congestion, so a Group Authentication and Privacy-preserving (GAP) scheme is put forward. As users increase during authentication, the GAP scheme performs better than other authentication access schemes. In summary, the Intelligent Transportation System driven by DTs can promote intelligent transportation management, and introducing BC into the IoV can improve the accuracy and response efficiency of access control. The research reported here has significant value for improving the security of information sharing in the IoV.

Journal ArticleDOI
TL;DR: The results indicate that the tracking state transition and target wake-up module can effectively track the gas boundary and reduce the network energy consumption and the three-layer network edge computing architecture proposed here has an excellent performance in industrial gas diffusion boundary tracking.
Abstract: The development of the Industrial Internet of Things (IIoT) and digital twins (DTs) technology brings new opportunities and challenges to all walks of life. The work studies the cross-layer optimization of DTs in the IIoT, exploring the specific application scenario of hazardous gas leakage boundary tracking in industry. The work proposes an industrial hazardous gas tracking algorithm based on a parallel optimization framework, establishes a three-layer distributed edge computing network based on the IIoT, and develops a two-stage industrial hazardous gas tracking algorithm based on a state transition model. The performance of different algorithms is analyzed. The results indicate that the tracking state transition and target wake-up modules can effectively track the gas boundary and reduce network energy consumption. The task success rate of the parallel optimization algorithm exceeds 0.9 within 5 s. When the number of network nodes in the state transition algorithm is N = 600, the energy consumption is only 2.11 J. The minimum tracking error is 0.31, which is at least 1.33 lower than that of the exact conditional tracking algorithm. Therefore, the three-layer edge computing network architecture proposed here performs excellently in industrial gas diffusion boundary tracking.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the impact of intelligent transportation systems (ITS) on the energy conservation and emission reduction (ECER) of transportation networks, and the realization path of the ECER transportation system was studied from three aspects: transportation, transportation organization and management, and energy upgrading and replacement.

Journal ArticleDOI
TL;DR: The work aims to improve the stability of wireless energy transfer (WET) in the Internet of Things (IoT), prolong the service life of wireless devices, and promote green communication by optimizing the energy efficiency of large-scale multiple-input multiple-output (MIMO) systems under WET technology.
Abstract: The work aims to improve the stability of wireless energy transfer (WET) in the Internet of Things (IoT), prolong the service life of wireless devices, and promote green communication. Based on a digital twins (DTs) IoT environment, we show how to optimize the energy efficiency of large-scale multiple-input multiple-output (MIMO) systems under WET technology. A large-scale distributed antenna array is applied to the wireless sensor network; MIMO can produce extremely narrow beams so that the system reduces interference to other users. Our MIMO system's energy efficiency optimization uses fractional programming and the block coordinate descent algorithm. The simulation results show that the algorithm has the best throughput performance when the maximum transmission power reaches 19 dBm. The total energy consumption of the proposed resource allocation algorithm is only about 9% higher than that of the power minimization algorithm. For different maximum transfer powers, the proposed algorithm converges within four iterations, and changes in the number of users do not affect its convergence performance. After the antenna selection mechanism is introduced, the average power of the energy received by the user improves notably compared to simply using the large-scale distributed antenna array. The research results can serve as a reference for energy efficiency optimization of large-scale MIMO systems under WET conditions in the DTs IoT environment.
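The fractional-programming step for ratio objectives such as energy efficiency (rate divided by total power) is commonly handled with Dinkelbach's method: the ratio maximization is turned into a sequence of easier subtractive problems. The sketch below applies it to a toy single-link rate/power ratio with a grid-search inner solver; it is generic background, not the paper's MIMO system model, and the circuit-power constant in the test is illustrative.

```python
import numpy as np

def dinkelbach(rate, power, p_grid, tol=1e-8, max_iter=50):
    """Dinkelbach's method for max_p rate(p)/power(p): repeatedly solve
    the parametric problem max_p rate(p) - q * power(p) over the grid
    and update q with the achieved ratio until it stops changing."""
    q = 0.0
    for _ in range(max_iter):
        p_star = p_grid[np.argmax(rate(p_grid) - q * power(p_grid))]
        q_new = rate(p_star) / power(p_star)
        if abs(q_new - q) < tol:
            return p_star, q_new
        q = q_new
    return p_star, q
```

In the full problem the inner maximization is itself solved per block of variables (block coordinate descent) rather than by a grid search.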

Journal ArticleDOI
TL;DR: In this article, an anomaly detection model in distributed edge computing is proposed, which can detect anomalies in both single-source and multi-source time series, and a series of comparison experiments is conducted to demonstrate the effectiveness of the proposed algorithm.

Journal ArticleDOI
TL;DR: In this paper, a hierarchical multi-vehicle longitudinal collision avoidance controller is proposed to guarantee the safety of multiple vehicles, using Vehicle-to-Infrastructure (V2I) communication capability in addition to radar for longitudinal vehicle control.
Abstract: Shortening the inter-vehicle distance can increase traffic throughput on roads carrying an increasing volume of vehicles. In the process, traffic accidents occur more frequently, especially multi-car accidents, and it is difficult for drivers to drive safely under such complex conditions. This article investigates the multi-vehicle longitudinal collision avoidance problem under such traffic conditions based on the Advanced Emergency Braking System (AEBS), which applies the brakes automatically to avoid collisions or mitigate the impact in critical situations. A hierarchical multi-vehicle longitudinal collision avoidance controller is proposed to guarantee the safety of multiple vehicles, using Vehicle-to-Infrastructure (V2I) communication capability in addition to radar for longitudinal vehicle control. The high-level controller is designed to ensure the safety of multiple vehicles and optimize total energy by calculating the target braking force. The vehicle network is used to obtain key vehicle-road interaction data, and a constrained hybrid genetic algorithm (CHGA) is adopted to decouple the vehicle-road interactive system, which can obtain the maximum ground friction from vehicle-road data and provide key predictive parameters for the multi-vehicle safety controller. A lower-level non-singular Fractional Terminal Sliding Mode (NFTSM) controller is built to achieve the control goals of the high-level controller. Simulations are carried out under typical driving conditions. The results verify that the proposed system can avoid or mitigate collision risk compared to a vehicle without this system.
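The AEBS decision logic can be illustrated, in heavily simplified form, with a time-to-collision rule: warn when the projected time to impact is short, brake when it is critical. This is a generic sketch, not the paper's hierarchical NFTSM controller, and the threshold values are illustrative placeholders.

```python
def aeb_decision(gap_m, rel_speed_mps, ttc_warn=2.6, ttc_brake=1.6):
    """Simplified AEB logic: time-to-collision (gap / closing speed)
    selects between no action, a driver warning, and full braking.
    Threshold values are illustrative, not taken from the cited paper."""
    if rel_speed_mps <= 0:          # not closing in on the lead vehicle
        return "none"
    ttc = gap_m / rel_speed_mps
    if ttc < ttc_brake:
        return "brake"
    if ttc < ttc_warn:
        return "warn"
    return "none"
```

The paper's system refines both inputs and outputs of such a rule: V2I data improves the gap/friction estimates, and the sliding-mode layer tracks the computed target braking force instead of applying a binary brake command.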

Journal ArticleDOI
TL;DR: In this article, a modified standard normal deviation (MSND) incident detection algorithm is proposed that uses CAVs as data sources and considers multiple traffic parameters; MSND is utilized in conjunction with two other incident detection algorithms, Standard Normal Deviation (SND) and California (CAL), in a method of incident management known as Variable Speed Limits (VSL).
Abstract: Advances in IoT and IoV technology have turned connected autonomous vehicles (CAVs) into data sources. Using CAVs as data sources in incident management algorithms can create faster, more reliable, and more effective algorithms. This paper proposes a modified standard normal deviation (MSND) incident detection algorithm that uses CAVs as data sources and considers multiple traffic parameters. MSND is utilized in conjunction with two other incident detection algorithms, Standard Normal Deviation (SND) and California (CAL), in a method of incident management known as Variable Speed Limits (VSL). The SUMO traffic simulation software is used to evaluate the effectiveness of the proposed method. A 10.4-kilometer road network is developed, and numerous scenarios are simulated on it, with variables including traffic demand, autonomous vehicle penetration rate, incident location, incident length, and incident lane. On the effectiveness metrics of detection rate, false alarm rate, and mean time to detect, the simulation results demonstrate that the proposed method outperforms the SND and California methods. In terms of detection rate, the MSND algorithm performs best, with a 12.27% improvement over the SND algorithm and a 21.99% improvement over the California method. After integrating each incident detection algorithm with the VSL traffic management method and simulating each combination, it was determined that the MSND-VSL integration reduced the average density in the critical region by 19.73%, followed by SND-VSL with a 13.94% reduction and CAL-VSL with a 9.9% reduction.
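The SND family of detectors flags an incident when a traffic measurement deviates from its recent history by more than a set number of standard deviations. A generic single-parameter sketch follows; the proposed MSND extends this idea to multiple traffic parameters, and the window and threshold values here are illustrative.

```python
import numpy as np

def snd_alarms(speeds, window=10, threshold=3.0):
    """Standard-normal-deviate detector: flag time steps where the
    current value deviates from the trailing-window mean by more than
    `threshold` standard deviations of that window."""
    alarms = []
    for t in range(window, len(speeds)):
        hist = speeds[t - window:t]
        mu, sigma = np.mean(hist), np.std(hist)
        if sigma > 0 and abs(speeds[t] - mu) / sigma > threshold:
            alarms.append(t)
    return alarms
```

A sudden speed drop after an incident produces a large standardized deviation and triggers an alarm, while normal fluctuation stays below the threshold.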


Journal ArticleDOI
TL;DR: In this paper, the authors propose an Adaptive Key Residue algorithm based on a quantum key distribution mechanism to improve the communication security of the industrial Internet of Things (IIoT) based on digital twins.
Abstract: This work aims to improve the communication security of the industrial Internet of Things (IIoT) based on digital twins (DTs), introducing technologies from quantum communication to improve network communication. Firstly, the key DTs technologies in the construction of the IIoT are expounded, and the characteristics of quantum communication are analyzed. Secondly, a channel encryption scheme based on quantum communication is proposed to ensure the communication security of the IIoT. The scheme uses the five-particle entanglement state and the two-particle Bell state as entanglement channels to realize two-particle quantum teleportation. Finally, an Adaptive Key Residue algorithm is proposed based on the quantum key distribution mechanism. Algorithm verification suggests that the success rate of service distribution decreases as the network load increases; when the service load reaches 1000, the Adaptive Key Residue algorithm can maintain a success rate of service distribution higher than 0.6. Besides, the success rate of service distribution increases with the total key generation rate V and the key pool capacity C. The research results reported here are of great significance for realizing secure communication in DTs-based IIoT systems, ensuring the effective operation of network communication and the secure transmission of data.
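For background, the key-sifting step common to quantum key distribution protocols can be simulated classically: the two parties keep only the positions where their random basis choices happened to agree. The toy BB84-style sketch below assumes an ideal channel with no eavesdropper; the paper's scheme is entanglement-based teleportation, so this is generic context, not its method.

```python
import numpy as np

rng = np.random.default_rng(7)

def bb84_sift(n_bits=1000):
    """Toy BB84 sifting (ideal channel, no eavesdropper): the sender and
    receiver keep only the positions where their random basis choices
    (0 = rectilinear, 1 = diagonal) happen to match."""
    bits = rng.integers(0, 2, n_bits)          # sender's raw key bits
    send_basis = rng.integers(0, 2, n_bits)
    recv_basis = rng.integers(0, 2, n_bits)
    # with matching bases the bit is read correctly; otherwise the
    # measurement outcome is random
    received = np.where(send_basis == recv_basis,
                        bits, rng.integers(0, 2, n_bits))
    keep = send_basis == recv_basis
    return bits[keep], received[keep]

alice_key, bob_key = bb84_sift()
```

On average half the positions survive sifting, which is why key generation rate (the V in the abstract) matters for keeping the key pool filled under load.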

Journal ArticleDOI
TL;DR: In this article, the authors used Google Scholar and the Web of Science (WoS) literature database to search for studies related to HCI and deep learning, covering areas such as intelligent HCI, speech recognition, emotion recognition, and intelligent robots.
Abstract: In recent years, gesture recognition and speech recognition, as important input methods in Human–Computer Interaction (HCI), have been widely used in the field of virtual reality. In particular, with the rapid development of deep learning, artificial intelligence, and other computer technologies, gesture recognition and speech recognition have achieved breakthrough research progress. The search platforms used in this work are mainly Google Scholar and the Web of Science (WoS) literature database. According to keywords related to HCI and deep learning, such as “intelligent HCI”, “speech recognition”, “gesture recognition”, and “natural language processing”, nearly 1000 studies were selected; from these, nearly 500 studies were selected by research method, and 100 studies published in 2019–2022 were finally chosen as the research content of this work after screening by year. First, the current situation of intelligent HCI systems is analyzed, the realization of gesture interaction and voice interaction in HCI is summarized, and the advantages brought by deep learning are selected for research. Then, the core concepts of gesture interaction are introduced and the progress of gesture recognition and speech recognition interaction is analyzed. Furthermore, representative applications of gesture recognition and speech recognition interaction are described. Finally, current HCI work in the direction of natural language processing is investigated. The results show that the combination of intelligent HCI and deep learning is deeply applied in gesture recognition, speech recognition, emotion recognition, and intelligent robots. A wide variety of recognition methods have been proposed in related research fields and verified by experiments; compared with interactive methods without deep learning, high recognition accuracy was achieved. In Human–Machine Interfaces (HMIs) with voice support, context plays an important role in improving user interfaces.
Whether it is voice search, mobile communication, or children’s speech recognition, HCI combined with deep learning can maintain better robustness. The combination of convolutional neural networks and long short-term memory networks can greatly improve the accuracy and precision of action recognition. Therefore, in the future, the application field of HCI will involve more industries, and greater prospects are expected.

Journal ArticleDOI
TL;DR: Wang et al. as discussed by the authors investigated the system prediction and safety performance of the Digital Twins (DTs) of autonomous cars based on artificial intelligence technology, and the intelligent development of transportation in the smart city.
Abstract: This exploration is aimed at the system prediction and safety performance of the Digital Twins (DTs) of autonomous cars based on artificial intelligence technology and at the intelligent development of transportation in the smart city. On the one hand, considering the problem of safe driving of autonomous cars in intelligent transportation systems, it is essential to ensure the transmission safety of vehicle data and realize load-balancing scheduling of data transmission resources. On the other hand, the convolutional neural network (CNN) of the deep learning algorithm is adopted and improved, and DTs technology is introduced. Finally, an autonomous-car DTs prediction model based on network load balancing and a spatial-temporal graph convolution network is constructed. Through simulation, the performance of this model is analyzed in terms of Accuracy, Precision, Recall, and F1-score. The experimental results demonstrate that, in comparative analysis, the accuracy of road network prediction of the model reported here is 92.70%, which is at least 2.92% higher than that of models proposed by other scholars. Analysis of the security performance of network data transmission shows that this model achieves a lower average delay time than other comparative models. Besides, the message delivery rate is basically stable at 80%, and the message leakage rate is basically stable at about 10%. Therefore, the prediction model for autonomous cars constructed here not only ensures low delay but also has excellent network security performance, so that information can interact more efficiently. The research outcome can provide an experimental basis for intelligent development and safety performance improvement in the transportation field of smart cities.

Journal ArticleDOI
TL;DR: A two-channel convolutional neural network combined with a data augmentation method is proposed to detect AF from single-lead short ECG recordings and the effectiveness and advantages of the proposed method are demonstrated.
Abstract: With the popularity of the wireless body sensor network, real-time and continuous collection of single-lead electrocardiogram (ECG) data becomes possible in a convenient way. Data mining from the collected single-lead ECG waves has therefore aroused extensive attention worldwide, where early detection of atrial fibrillation (AF) is a hot research topic. In this paper, a two-channel convolutional neural network combined with a data augmentation method is proposed to detect AF from single-lead short ECG recordings. The method consists of three modules: the first denoises the raw ECG signals and produces 9-s ECG segments and heart rate (HR) values. Then, the ECG segments and HR values are fed into the convolutional layers for feature extraction, followed by three fully connected layers that perform the classification. The data augmentation method is used to generate synthetic signals to enlarge the training set and increase the diversity of the single-lead ECG signals. Validation experiments and the comparison with state-of-the-art studies demonstrate the effectiveness and advantages of the proposed method.
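The data augmentation step could be sketched as follows. This is a hypothetical illustration assuming the synthetic signals come from simple random amplitude scaling and time shifting; the paper's actual augmentation method, parameters, and sampling rate may differ:

```python
import numpy as np

def augment_ecg(signal, rng, scale_range=(0.9, 1.1), max_shift=50):
    """Generate a synthetic ECG segment by random amplitude scaling
    and circular time shifting (illustrative parameters)."""
    scale = rng.uniform(*scale_range)
    shift = int(rng.integers(-max_shift, max_shift + 1))
    return np.roll(signal * scale, shift)

rng = np.random.default_rng(0)
# Stand-in for a 9-s single-lead recording at an assumed 300 Hz.
ecg = np.sin(np.linspace(0, 20 * np.pi, 2700))
synthetic = augment_ecg(ecg, rng)
```

Each call yields a new variant of the same recording, which enlarges the training set without collecting additional data.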

Journal ArticleDOI
TL;DR: Wang et al. as mentioned in this paper used a double-scale decomposition equation to decompose the original traffic accident time series data into multiple sub-layers, and a particle filter was proposed to alleviate particle dilution.
Abstract: This work aims to prevent Traffic Accident (TA) and ensure drivers’ and pedestrians’ life and property safety. A TA prevention and prediction system is established based on Digital Twins (DTs) and Artificial Intelligence (AI). Firstly, the double-scale decomposition equation decomposes the original TA Time Series Data (TSD) into multiple sub-layers. The Long Short-Term Memory (LSTM) network is used to predict the low-frequency sub-layers. Then, the double-scale LSTM network prediction model is constructed based on the prediction results. Secondly, a Particle Filter (PF) based on target-block tracking and improved resampling is proposed to handle possible occlusion in target tracking. The proposed PF alleviates particle dilution. Finally, the proposed target tracking algorithm and DTs are combined and applied to TA processing, and a motor vehicle road TA-oriented video analysis system is designed. Then, the proposed system is tested. The results corroborate that the proposed research model can effectively predict the TSD of TA compared with other models and has strong robustness. Compared with the original LSTM model and the Stacked Auto Encoders (SAEs) prediction model, the prediction accuracy of the proposed model is improved by 6% and 8%, respectively. Besides, the training and prediction time of the proposed model is less than that of the original LSTM and SAEs models. The optimized Particle Swarm Optimization (PSO) model makes target identification easier. Additionally, the proposed model has good generalization performance. In short, the proposed system can effectively improve the efficiency of TA handling and ensure accuracy and fairness, which provides some data support for applying DTs in intelligent transportation.
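The abstract mentions improved resampling against particle dilution. One standard resampling scheme that counters dilution is systematic resampling, sketched below as an illustration; the paper's improved resampling strategy is not specified here and may differ:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: draw N particle indices so that
    high-weight particles are duplicated and low-weight ones dropped,
    countering particle dilution (degeneracy)."""
    n = len(weights)
    # One uniform draw offsets n evenly spaced positions in [0, 1).
    positions = (rng.random() + np.arange(n)) / n
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0  # guard against floating-point round-off
    return np.searchsorted(cumulative, positions)

rng = np.random.default_rng(1)
weights = np.array([0.7, 0.1, 0.1, 0.1])  # illustrative normalized weights
idx = systematic_resample(weights, rng)
```

After resampling, all particles carry equal weight, and the dominant particle (weight 0.7) appears multiple times in the new set.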

Journal ArticleDOI
TL;DR: Several deep learning algorithms are introduced, including the Artificial Neural Network (ANN), FM-Deep Learning, the Convolutional Neural Network (CNN), and the Recurrent Neural Network (RNN), covering their theory, development history, and applications in disease prediction; the defects in the current disease prediction field are analyzed, and some current solutions are given.

Journal ArticleDOI
TL;DR: In this paper, a resilient supply chain based on digital twins (DTs) is proposed to address supply chain interruptions caused by public health emergencies in real life.
Abstract: The coronavirus disease 2019 (COVID-19) has put enormous pressure on the global supply chain. This work aims to solve supply chain interruption caused by public health emergencies in real life through the resilient supply chain based on digital twins (DTs). The research example used here is the disruption of the supply chain of N95 medical masks under the COVID-19 epidemic. First, the resilient supply chain's emergency decision cost and profit model is established under the manufacturer-supplier shared mode. The supply chain of M company of N95 medical masks in Hubei under the COVID-19 pandemic is selected to discuss the cost of emergency decision-making in the resilient supply chain. Moreover, a product supply chain model is built, including H suppliers, J manufacturers, K distributors, and L retailers. Supply failures result in lower supplier capacity ratios. Accordingly, the supply chain will adopt emergency strategies to reduce operating costs and increase profits. Activating alternative suppliers and distributors can mitigate the loss caused by partial supply chain disruption in emergencies. The resilience of supply chains based on DTs discussed here is of significant value in automating critical links of the supply chain. The resilient supply chain combined with the capacity recovery strategy can significantly improve the traditional supply chain's response to supply disruption events.
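The abstract describes an emergency decision cost and profit model in which activating alternative suppliers mitigates losses from partial disruption. A minimal sketch of that idea, assuming a single-product profit model with a hypothetical backup supplier premium and shortage penalty (all names and parameters here are illustrative, not the paper's formulation):

```python
def emergency_profit(demand, primary_capacity, backup_capacity,
                     unit_price, unit_cost, backup_premium, shortage_penalty):
    """Illustrative profit model: a disrupted primary supplier covers part
    of demand; a backup supplier fills the rest at a cost premium; any
    remaining shortage incurs a penalty. All parameters are hypothetical."""
    from_primary = min(demand, primary_capacity)
    from_backup = min(demand - from_primary, backup_capacity)
    shortage = demand - from_primary - from_backup
    revenue = (from_primary + from_backup) * unit_price
    cost = (from_primary * unit_cost
            + from_backup * (unit_cost + backup_premium)
            + shortage * shortage_penalty)
    return revenue - cost

# Comparing the same disruption with and without an activated backup supplier.
with_backup = emergency_profit(100, 60, 30, 10, 4, 2, 5)
without_backup = emergency_profit(100, 60, 0, 10, 4, 2, 5)
```

Under these illustrative numbers, activating the backup supplier raises profit despite its premium, mirroring the abstract's claim that alternative suppliers mitigate disruption losses.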