Showing papers by "Alireza Jolfaei published in 2020"


Journal ArticleDOI
TL;DR: This article proposes a learning-based channel selection framework with service reliability awareness, energy awareness, backlog awareness, and conflict awareness, by leveraging the combined power of machine learning, Lyapunov optimization, and matching theory, and proves that the proposed framework can achieve guaranteed performance.
Abstract: Edge computing provides a promising paradigm to support the implementation of the Industrial Internet of Things (IIoT) by offloading computation-intensive tasks from resource-limited machine-type devices (MTDs) to powerful edge servers. However, the performance gain of edge computing may be severely compromised due to limited spectrum resources, capacity-constrained batteries, and context unawareness. In this article, we consider the optimization of channel selection, which is critical for efficient and reliable task delivery. We aim at maximizing the long-term throughput subject to long-term constraints of energy budget and service reliability. We propose a learning-based channel selection framework with service reliability awareness, energy awareness, backlog awareness, and conflict awareness, by leveraging the combined power of machine learning, Lyapunov optimization, and matching theory. We provide rigorous theoretical analysis and prove that the proposed framework can achieve guaranteed performance with a bounded deviation from the optimal performance with global state information (GSI), based only on local and causal information. Finally, simulations are conducted under both single-MTD and multi-MTD scenarios to verify the effectiveness and reliability of the proposed framework.

214 citations
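
The drift-plus-penalty pattern behind such Lyapunov-based designs is compact enough to sketch. The following Python snippet is a minimal illustration, not the paper's full framework: the per-channel rates, energy costs, and budget are invented, and a single virtual energy-deficit queue stands in for the paper's multiple long-term constraints.

```python
import numpy as np

rng = np.random.default_rng(0)
V = 10.0          # Lyapunov trade-off parameter (larger V favours throughput)
e_budget = 1.0    # per-slot energy budget
Q = 0.0           # virtual energy-deficit queue
n_channels, T = 4, 1000
throughput = 0.0

for t in range(T):
    # hypothetical per-channel rate and energy-cost observations
    rate = rng.uniform(0, 5, n_channels)
    energy = rng.uniform(0.5, 2.0, n_channels)
    # drift-plus-penalty: pick the channel maximising V*rate - Q*energy
    k = int(np.argmax(V * rate - Q * energy))
    throughput += rate[k]
    # the virtual queue tracks cumulative energy over-spend
    Q = max(Q + energy[k] - e_budget, 0.0)

print(f"avg rate {throughput / T:.2f}, final deficit queue {Q:.2f}")
```

Growing Q automatically penalises energy-hungry channels, which is how the long-term budget constraint is enforced without global state information.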


Journal ArticleDOI
TL;DR: A class of n-order nonlinear systems is considered as a model of CPS in the presence of cyber attacks confined to the forward channel, and an intelligent-classic control system is developed to compensate for the attacks.
Abstract: This article proposes a hybrid intelligent-classic control approach for the reconstruction and compensation of cyber attacks launched on the inputs of nonlinear cyber-physical systems (CPS) and industrial Internet of Things systems, which work through shared communication networks. A class of n-order nonlinear systems is considered as a model of CPS in the presence of cyber attacks confined to the forward channel. An intelligent-classic control system is developed to compensate for the cyber attacks. A neural network (NN) is designed as an intelligent estimator for attack estimation, and a classic nonlinear control system based on the variable structure control method is designed to compensate for the effect of attacks and control the system performance in tracking applications. In the proposed strategy, nonlinear control theory is applied to guarantee the stability of the system when attacks happen. A Gaussian radial basis function NN is used for online estimation and reconstruction of the cyber attacks launched on the networked system. An adaptation law of the intelligent estimator is derived from a Lyapunov function. Simulation results demonstrate the validity and feasibility of the proposed strategy in a car cruise control application as the testbed.

190 citations
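
The estimator idea can be sketched in a few lines. This is a hedged toy, not the paper's n-order design: a first-order plant, hand-picked gains, a sinusoidal stand-in for the unknown attack, and a plain gradient adaptation law of the kind that falls out of a Lyapunov argument (robustifying terms such as sigma-modification are omitted).

```python
import numpy as np

centers = np.linspace(-2.0, 2.0, 9)       # Gaussian RBF centers over the state
sigma, gamma, dt = 0.5, 2.0, 0.001
phi = lambda x: np.exp(-(x - centers) ** 2 / (2 * sigma ** 2))

w = np.zeros_like(centers)                # NN weights, adapted online
x = 0.0                                   # plant state (regulated to zero)

for k in range(int(10 / dt)):
    t = k * dt
    attack = 1.5 * np.sin(t)              # unknown additive input attack
    a_hat = w @ phi(x)                    # NN reconstruction of the attack
    u = -5.0 * x - a_hat                  # control cancels the estimate
    x += dt * (-x + u + attack)           # toy plant: x_dot = -x + u + attack
    w += dt * gamma * x * phi(x)          # adaptation law driven by the error

print(f"residual state after 10 s: {x:.4f}")
```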


Journal ArticleDOI
TL;DR: Challenges and recommendations for SSC network traffic classification with the dataset of features are presented, and some well-known and widely used datasets with their detailed statistical features are described.

137 citations


Journal ArticleDOI
TL;DR: This paper presents a systematic literature review of existing clone node detection schemes and provides a theoretical and analytical survey of the existing centralized and distributed schemes for the detection of clones in static WSNs, with their drawbacks and challenges.
Abstract: Recent state-of-the-art innovations in technology enable the development of low-cost sensor nodes with processing and communication capabilities. The unique characteristics of these low-cost sensor nodes, such as limited processing, memory, and battery resources and the lack of tamper-resistant hardware, make them susceptible to clone node or node replication attacks. The deployment of WSNs in remote and harsh environments helps the adversary to capture a legitimate node and extract the stored credential information, such as its ID, which can be easily re-programmed and replicated. The adversary is thus able to control the whole network internally and carry out the same functions as the legitimate nodes. This is the main motivation for researchers to design enhanced detection protocols for clone attacks. Hence, in this paper, we present a systematic literature review of existing clone node detection schemes. We also provide a theoretical and analytical survey of the existing centralized and distributed schemes for the detection of clone nodes in static WSNs, together with their drawbacks and challenges.

106 citations


Journal ArticleDOI
TL;DR: A novel parallelization method for the genetic algorithm (GA) solution of the Traveling Salesman Problem (TSP) is presented, and the results confirm the efficiency of the proposed method for parallelizing GAs on many-core as well as multi-core systems.
Abstract: A novel parallelization method for the genetic algorithm (GA) solution of the Traveling Salesman Problem (TSP) is presented. The proposed method can considerably accelerate the solution of the equivalent TSP of many complex vehicle routing problems (VRPs) in the cloud implementation of intelligent transportation systems. The solution provides routing information besides all the services required by autonomous vehicles in vehicular clouds. The GA is an important class of evolutionary algorithms that can solve optimization problems in growing intelligent transport systems. However, to meet the timing criteria of time-constrained problems in intelligent transportation systems, like routing and controlling autonomous vehicles, a highly parallelizable GA is needed. The proposed method parallelizes the GA by designing three concurrent kernels, each of which runs some dependent effective operators of the GA. It can be straightforwardly adapted to run on many-core and multi-core processors. To make the best use of the valuable resources of such processors in parallel execution of the GA, threads that run any of the triple kernels are synchronized by a low-cost switching mechanism. The proposed method was evaluated by parallelizing a GA-based solution of the TSP over multi-core and many-core systems. The results confirm the efficiency of the proposed method for parallelizing GAs on many-core as well as multi-core systems.

75 citations
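
The core parallelization idea, farming the expensive and mutually independent GA work out across cores, can be sketched with Python's process pool. This is only a parallel fitness-evaluation sketch on an invented 30-city instance, not the paper's triple-kernel, thread-synchronized design; selection and mutation are deliberately minimal.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

rng = np.random.default_rng(1)
CITIES = rng.uniform(0, 100, (30, 2))       # hypothetical 30-city TSP instance

def tour_length(tour):
    pts = CITIES[list(tour)]
    return float(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1).sum())

def mutate(tour):
    i, j = sorted(rng.integers(0, len(tour), 2))
    child = list(tour)
    child[i:j] = reversed(child[i:j])       # 2-opt style segment reversal
    return tuple(child)

if __name__ == "__main__":
    pop = [tuple(rng.permutation(len(CITIES))) for _ in range(64)]
    with ProcessPoolExecutor() as ex:
        for gen in range(200):
            # the costly fitness evaluations run in parallel across cores
            fits = list(ex.map(tour_length, pop, chunksize=8))
            elite = [p for _, p in sorted(zip(fits, pop))][: len(pop) // 2]
            pop = elite + [mutate(p) for p in elite]
    print("best tour length:", min(fits))
```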


Journal ArticleDOI
TL;DR: Two methods based on XCS learning classifier systems, namely XCS and BCM-XCS, are proposed to balance the power consumption at the edge of the network and to reduce delays in the processing of workloads, and the results are indicative of the superiority of BCM-XCS over the basic XCS-based method.

54 citations


Journal ArticleDOI
TL;DR: This paper analyzes the attacks that have already targeted self-driving cars, and extensively presents potential cyber attacks and their impacts on those cars, along with their vulnerabilities and the possible mitigation strategies taken by the manufacturers and governments.
Abstract: Intelligent Traffic Systems (ITS) are currently evolving in the form of cooperative ITS or connected vehicles. Both forms use data communications between Vehicle-to-Vehicle (V2V), Vehicle-to-Infrastructure (V2I/I2V), and other on-road entities, and are accelerating the adoption of self-driving cars. The development of cyber-physical systems containing advanced sensors, sub-systems, and smart driving assistance applications over the past decade is equipping unmanned aerial and road vehicles with autonomous decision-making capabilities. The level of autonomy depends upon the make-up and degree of sensor sophistication and the vehicle's operational applications. As a result, the prospect of self-driving cars being compromised is perceived as a serious threat. Therefore, analyzing the threats and attacks on self-driving cars and ITSs, and their corresponding countermeasures, is needed. For this reason, some survey papers compiling potential attacks on VANETs, ITSs, and self-driving cars, and their detection mechanisms, are available in the current literature. However, to the best of our knowledge, they have not covered the real attacks that have already happened to self-driving cars. To bridge this research gap, in this paper we analyze the attacks that have already targeted self-driving cars and extensively present potential cyber attacks and their impacts on those cars, along with their vulnerabilities. For recently reported attacks, we describe the possible mitigation strategies taken by the manufacturers and governments. This survey includes recent work on how a self-driving car can ensure resilient operation even under an ongoing cyber attack. We also provide further research directions to improve the security issues associated with self-driving cars.

54 citations


Journal ArticleDOI
TL;DR: This study focuses on improving the quality of stroke data by implementing a rigorous pre-processing technique on a multimodal stroke dataset available in the public Kaggle repository, and proves the superiority of the proposed model.
Abstract: Stroke is listed as one of the leading causes of death and serious disability, affecting millions of human lives across the world, with high possibilities of becoming an epidemic in the next few decades. Timely detection and prompt decision making pertinent to this disease play a major role and can reduce the chances of brain death, paralysis, and other resultant outcomes. Machine learning algorithms have been a popular choice for the diagnosis, analysis, and prediction of this disease, but issues related to data quality exist because the data are collected from cross-institutional resources. The present study focuses on improving the quality of stroke data by implementing a rigorous pre-processing technique. It uses a multimodal stroke dataset available in the public Kaggle repository. The missing values in this dataset are replaced with attribute means, and the LabelEncoder technique is applied to achieve homogeneity. The dataset was, however, observed to be imbalanced, meaning the results may not represent the actual accuracy and would be biased. In order to overcome this imbalance, a resampling technique was used: in the case of oversampling, some data points in the minority class are replicated to increase the cardinality and rebalance the dataset. The transformed and oversampled data are further normalized using the StandardScaler technique. The Antlion optimization (ALO) algorithm is implemented on the deep neural network (DNN) model to select optimal hyperparameters with minimal time consumption. The proposed model consumed only 38.13% of the training time, which was also a positive aspect. The experimental results proved the superiority of the proposed model.

48 citations
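
The pre-processing chain the abstract walks through (mean imputation, LabelEncoder, random oversampling, StandardScaler) maps directly onto pandas and scikit-learn. A minimal sketch on an invented toy frame standing in for the Kaggle stroke data; the real dataset has different columns.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import LabelEncoder, StandardScaler

# toy stand-in for the Kaggle stroke dataset (real columns differ)
df = pd.DataFrame({
    "age": [63, 45, np.nan, 70, 52, 38],
    "gender": ["M", "F", "F", "M", "F", "M"],
    "stroke": [1, 0, 0, 1, 0, 0],
})

# 1) mean imputation for missing numeric values
df["age"] = df["age"].fillna(df["age"].mean())
# 2) LabelEncoder for categorical homogeneity
df["gender"] = LabelEncoder().fit_transform(df["gender"])
# 3) naive random oversampling of the minority class
minority = df[df["stroke"] == 1]
need = (df["stroke"] == 0).sum() - len(minority)
df = pd.concat([df, minority.sample(need, replace=True, random_state=0)])
# 4) StandardScaler normalisation of the features
X = StandardScaler().fit_transform(df[["age", "gender"]])
print(X.shape, df["stroke"].value_counts().to_dict())
```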


Proceedings ArticleDOI
06 Jul 2020
TL;DR: A federated learning approach that uses decentralized learning with blockchain-based security is proposed, together with a proposition for training intelligent systems on distributed, locally stored data for the benefit of all patients.
Abstract: In today's technological climate, users require fast automation and digitization of results for large amounts of data at record speeds. This is especially true in the field of medicine, where each patient is often asked to undergo many different examinations within one diagnosis or treatment. Each examination can help in the diagnosis or prediction of further disease progression. Furthermore, all data produced by these examinations must be stored somewhere and be available to various medical practitioners for analysis, who may be in geographically diverse locations. The current medical climate leans towards remote patient monitoring and AI-assisted diagnosis. To make this possible, medical data should ideally be secured and made accessible to many medical practitioners, which makes it attractive to malicious entities. Medical information has inherent value to malicious entities in a variety of ways due to its privacy-sensitive nature. Furthermore, if access to data is distributively made available to AI algorithms (particularly neural networks) for further analysis/diagnosis, the danger to the data may increase (e.g., model poisoning through the introduction of fake data). In this paper, we propose a federated learning approach that uses decentralized learning with blockchain-based security, together with a proposition for training intelligent systems on distributed, locally stored data for the benefit of all patients. Our work in progress hopes to contribute to the latest trend in Internet of Medical Things security and privacy.

45 citations
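
Stripped of the blockchain layer, the federated core the paper builds on is federated averaging: clients fit on their private data and only parameters leave the device. A hedged sketch with logistic-regression clients on synthetic data and plain mean aggregation; the paper adds blockchain-backed integrity around the aggregation step.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """One client's SGD steps on its private data (logistic regression)."""
    for _ in range(epochs):
        p = 1 / (1 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(40, 3)), rng.integers(0, 2, 40)) for _ in range(5)]
w_global = np.zeros(3)

for rnd in range(20):
    # each client trains locally; only weights leave the device
    local_ws = [local_update(w_global.copy(), X, y) for X, y in clients]
    # FedAvg aggregation of the returned parameters
    w_global = np.mean(local_ws, axis=0)
print("global weights:", np.round(w_global, 3))
```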


Journal ArticleDOI
16 Mar 2020
TL;DR: The security issues at each layer in the IoT protocol stack are examined, the underlying challenges and key security requirements are identified, and a brief overview of existing security solutions to safeguard the IoT from the layered context is provided.
Abstract: Internet of Things (IoT) is a novel paradigm, which not only facilitates a large number of devices to be ubiquitously connected over the Internet but also provides a mechanism to remotely control these devices. The IoT is pervasive and is almost an integral part of our daily life. These connected devices often obtain users' personal data and store it online. The security of the collected data is a big concern in recent times. As devices become increasingly connected, privacy and security issues become more and more critical, and these need to be addressed on an urgent basis. IoT implementations and devices are eminently prone to threats that could compromise the security and privacy of consumers, which, in turn, could influence their practical deployment. In the recent past, some research has been carried out to secure IoT devices with an intention to alleviate the security concerns of users. There has also been research on blockchain technologies to tackle the privacy and security issues of the data collected in IoT. The purpose of this paper is to highlight the security and privacy issues in IoT systems. To this effect, the paper examines the security issues at each layer in the IoT protocol stack, identifies the underlying challenges and key security requirements, and provides a brief overview of existing security solutions to safeguard the IoT from the layered context.

41 citations


Journal ArticleDOI
TL;DR: The results show that the proposed offloading strategy can achieve fast convergence, and that user number, vehicle speed, and MEC computing power have the smallest impact on user cost compared with other offloading schemes.
Abstract: With the rapid increase of vehicles, the explosive growth of data flows, and the increasing shortage of spectrum resources, the performance of existing task offloading schemes is poor, and on-board terminals cannot achieve efficient computing. Therefore, this article proposes a task offloading strategy based on reinforcement learning in an edge computing architecture for the Internet of Vehicles. Firstly, the system architecture of the Internet of Vehicles is designed: the Road Side Unit receives the vehicle data in its community and transmits it to a Mobile Edge Computing (MEC) server for data analysis, while the control center collects all vehicle information. Then, the calculation model, communication model, interference model, and privacy issues are constructed to ensure the rationality of task offloading in the Internet of Vehicles. Finally, the user cost function is minimized as the objective function, and a double-layer deep Q-network from deep reinforcement learning is used to solve the problem of real-time changes in network state caused by user movement. The results show that the proposed offloading strategy achieves fast convergence. Moreover, user number, vehicle speed, and MEC computing power have the smallest impact on user cost compared with other offloading schemes. The task offloading rate of the proposed strategy is the highest, with better performance, making it more suitable for Internet of Vehicles scenarios.
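
The learning machinery can be illustrated with a tabular double Q-learning stand-in for the paper's double-layer deep Q-network. Everything here is invented for illustration: a toy environment whose per-action cost loosely mixes delay and energy, states drawn at random, and reward defined as negative user cost.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 8, 3          # e.g. channel states x {local, edge, cloud}
QA = np.zeros((n_states, n_actions))
QB = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    # hypothetical environment: per-action cost mixes delay and energy, noisy
    cost = [1.0, 0.5, 0.8][a] + rng.normal(0, 0.1)
    return rng.integers(n_states), -cost      # next state, reward = -user cost

s = 0
for t in range(5000):
    Q = QA + QB
    a = rng.integers(n_actions) if rng.random() < eps else int(np.argmax(Q[s]))
    s2, r = step(s, a)
    if rng.random() < 0.5:     # double Q-learning: decouple select and evaluate
        QA[s, a] += alpha * (r + gamma * QB[s2, np.argmax(QA[s2])] - QA[s, a])
    else:
        QB[s, a] += alpha * (r + gamma * QA[s2, np.argmax(QB[s2])] - QB[s, a])
    s = s2
print("greedy offloading choice per state:", np.argmax(QA + QB, axis=1))
```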

Journal ArticleDOI
TL;DR: A novel spectrum sharing technique is proposed using 5G enabled bidirectional cognitive deep learning nodes (BCDLN) along with dynamic spectrum sharing long short-term memory (DSLSTM), and expressions are derived for the spectrum allocated to multiple sources to obtain their spectrum targets as a variant of the participation node spectrum sharing ratio (PNSSR).
Abstract: With the rapid increase in communication technologies, spectrum shortage will be a major issue in the coming years. Cognitive radio is a promising solution to this problem and works on the principle of sharing between cellular subscribers and ad-hoc Device-to-Device (D2D) users. Existing 5G spectrum sharing techniques operate according to fixed, pre-established rules. Also, recent game-theoretic approaches for spectrum sharing use unrealistic assumptions with limited practical implications. Here, a novel spectrum sharing technique is proposed using 5G-enabled bidirectional cognitive deep learning nodes (BCDLN) along with dynamic spectrum sharing long short-term memory (DSLSTM). Joint spectrum allocation and management is carried out with wireless cyclic prefix orthogonal frequency division multiple access (CP-OFDMA). The BCDLN self-learning nodes, with decision-making capability, route information to several destinations at a constant spectrum sharing target and cooperate via DSLSTM. BCDLN based on time-balanced and unbalanced channel knowledge is also examined. With the proposed framework, expressions are derived for the spectrum allocated to multiple sources to obtain their spectrum targets as a variant of the participation node spectrum sharing ratio (PNSSR). The impact of noise when all nodes broadcast with equal spectrum allocation is also investigated.

Journal ArticleDOI
TL;DR: This paper proposes a combination of machine learning techniques to mitigate the relay attacks on Passive Keyless Entry and Start (PKES) systems and uses a Long Short-Term Memory recurrent neural network for driver identification based on the real-world driving data.
Abstract: Due to the rapid developments in intelligent transportation systems, modern vehicles have turned into intelligent means of transportation that are able to exchange data through various communication protocols. Today's vehicles are a prime example of a cyber-physical system because of their integration of computational components and physical systems. As the IoT and data remain intrinsically linked together, the evolving nature of the transportation network comes with a risk of virtual vehicle hijacking. In this paper, we propose a combination of machine learning techniques to mitigate relay attacks on Passive Keyless Entry and Start (PKES) systems. The proposed algorithm uses a set of key fob features that accurately profiles the PKES system and a set of driving features to identify the driver. First, relay attack detection is performed; if a relay attack is not detected, the vehicle is unlocked and the algorithm proceeds to gather the driving features and use neural networks to identify whether the current driver is who they claim to be. To assess the machine learning model, we compared the decision tree, SVM, and KNN methods using a three-month log of a PKES system. Our test results confirm the effectiveness of the proposed method in recognizing relayed messages, achieving a 99.8% accuracy rate. We used a Long Short-Term Memory recurrent neural network for driver identification based on real-world driving data, collected from a driver who drove the vehicles on several routes in real-world traffic conditions.
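
The classifier comparison stage maps directly onto scikit-learn. A hedged sketch on invented key-fob features (signal strength and challenge-response latency, chosen because relaying adds measurable delay); the paper's real feature set and three-month PKES log are not public, so the numbers below mean nothing beyond the mechanics.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# hypothetical key-fob features: RSSI (dBm), response latency (ms)
legit = np.column_stack([rng.normal(-60, 3, 300), rng.normal(1.0, 0.1, 300)])
relay = np.column_stack([rng.normal(-70, 5, 300), rng.normal(2.5, 0.4, 300)])
X = np.vstack([legit, relay])
y = np.r_[np.zeros(300), np.ones(300)]        # 1 = relayed message

for name, clf in [("DT", DecisionTreeClassifier()),
                  ("SVM", SVC()),
                  ("KNN", KNeighborsClassifier())]:
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {acc:.3f}")
```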

Posted Content
TL;DR: In this article, the authors survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, compensating for the gaps in systematic summaries of the past five years (2015-2019).
Abstract: Brain-Computer Interface (BCI) is a powerful communication tool between users and systems, which enhances the capability of the human brain in communicating and interacting with the environment directly. Advances in neuroscience and computer science in the past decades have led to exciting developments in BCI, thereby making BCI a top interdisciplinary research area in computational neuroscience and intelligence. Recent technological advances such as wearable sensing devices, real-time data streaming, machine learning, and deep learning approaches have increased interest in electroencephalographic (EEG) based BCI for translational and healthcare applications. Many people benefit from EEG-based BCIs, which facilitate continuous monitoring of fluctuations in cognitive states under monotonous tasks in the workplace or at home. In this study, we survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, compensating for the gaps in systematic summaries of the past five years (2015-2019). Specifically, we first review the current status of BCI and its significant obstacles. Then, we present advanced signal sensing and enhancement technologies for collecting and cleaning EEG signals, respectively. Furthermore, we demonstrate state-of-the-art computational intelligence techniques, including interpretable fuzzy models, transfer learning, deep learning, and their combinations, to monitor, maintain, or track human cognitive states and operating performance in prevalent applications. Finally, we present a couple of innovative BCI-inspired healthcare applications and discuss some future research directions in EEG-based BCIs.

Journal ArticleDOI
TL;DR: This study addresses the challenges of using renewable power supplies in delay-sensitive fogs and proposes an efficient workload allocation method based on a learning classifier system that reduces the long-term costs of the system including service delay and operating costs.
Abstract: Nowadays, renewable energies are considered one of the important sources of energy supply for delay-sensitive fog computations in intelligent transportation systems due to their cheapness and availability. This study addresses the challenges of using renewable power supplies in delay-sensitive fogs and proposes an efficient workload allocation method based on a learning classifier system. The system dynamically learns the workload allocation policies between the cloud and the fog servers and then converges on the optimal allocation that fulfils the energy and delay requirements of the overall transportation system. Simulation results confirm that the proposed algorithm reduces the long-term costs of the system, including service delay and operating costs. Compared with other techniques, the proposed method also presents the most successful solution for reducing the average delay of the workloads and converging on the minimum value, while retaining or even increasing the battery levels of fog nodes up to 100%. The lowest delay cost among the other available methods is 5, whereas in the proposed method this value approaches 4.5.

Journal ArticleDOI
TL;DR: The proposed framework is a binary cuckoo search-based stacking model that collectively exploits multiple base learners for human activity recognition from data gathered by accelerometer sensors mounted on wearable and mobile devices.
Abstract: Human activity recognition has been a topic of attraction among researchers and developers because of its enormous usage across widespread regions of human life. The variety of human activities and the way they are executed at the individual level are the main challenges to be recognized in human behavior modeling. This paper proposes a novel methodology that recognizes human activities from the behavior of individuals in a smart home environment. The dataset considered in this work is captured using Bluetooth Low Energy, a popular technology for indoor localization. The proposed framework is a binary cuckoo search-based stacking model that collectively exploits multiple base learners for human activity recognition from data gathered by accelerometer sensors mounted on wearable and mobile devices. The work is tested on the newly developed SPHERE dataset to recognize user activities in a smart home environment. The experimental results confirm the effectiveness of the proposed approach, which outperforms MLP, DT, KNN, SGD, NB, RF, LR, and SVM classifiers on the dataset and gives a high predictive accuracy of 93.77% via tenfold cross-validation. The proposed approach gives better performance at the expense of more computation time, owing to the integration of the cuckoo search metaheuristic algorithm.
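
The stacking half of the framework is standard and easy to sketch with scikit-learn. In the paper a binary cuckoo search decides which base learners to keep; in this hedged sketch the subset is simply fixed by hand and the data is synthetic.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=12, random_state=0)

# base-learner subset fixed by hand here; the paper selects it with
# a binary cuckoo search over the pool of candidate learners
stack = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("knn", KNeighborsClassifier())],
    final_estimator=LogisticRegression(),
    cv=5,
)
print("10-fold accuracy:", cross_val_score(stack, X, y, cv=10).mean())
```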

Journal ArticleDOI
TL;DR: An RNS-based ECC core hardware design is proposed for two families of elliptic curves, short Weierstrass and twisted Edwards curves, and the test results confirm that the performance of the fully RNS ECC point multiplication is better than that of the fastest ECC point multiplication cores in the literature.
Abstract: In today's technology, a vast number of Internet of Things applications use hardware security modules for secure communications. The widely used algorithms in security modules, for example, digital signatures and key agreement, are based upon elliptic curve cryptography (ECC). A core operation used in ECC is the point multiplication, which is computationally expensive for many Internet of Things applications. In many IoT applications, such as intelligent transportation systems and distributed control systems, thousands of safety messages need to be signed and verified within a very short time-frame. Considerable research has been conducted on the design of fast elliptic curve arithmetic over finite fields using residue number systems (RNS). In this article, we propose RNS-based ECC core hardware for two families of elliptic curves: short Weierstrass and twisted Edwards curves. Specifically, we present RNS implementations for the SECP256K1 and ED25519 standard curves. We propose an RNS hardware architecture supporting fast elliptic curve point-addition (ECPA), point-doubling (ECPD), and point-tripling (ECPT). We implemented different ECC point multiplication algorithms on the Xilinx FPGA platform. The test results confirm that the performance of our fully RNS ECC point multiplication is better than that of the fastest ECC point multiplication cores in the literature.
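
The property RNS hardware exploits is that arithmetic splits into independent, carry-free residue lanes, with a Chinese Remainder Theorem (CRT) conversion at the boundary. A toy sketch with small hand-picked moduli (real designs use several machine-word-sized moduli, and field multiplication additionally needs RNS Montgomery reduction, omitted here):

```python
from math import prod

# a toy RNS base of pairwise-coprime moduli
moduli = [13, 17, 19, 23]
M = prod(moduli)

def to_rns(x):
    return [x % m for m in moduli]

def from_rns(residues):
    # Chinese Remainder Theorem reconstruction
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(-1) needs Python 3.8+
    return x % M

a, b = 1234, 5678
# channel-wise multiplication: each residue lane is independent, which is
# what makes RNS arithmetic friendly to parallel hardware
c = [(x * y) % m for x, y, m in zip(to_rns(a), to_rns(b), moduli)]
assert from_rns(c) == (a * b) % M
print(from_rns(c), (a * b) % M)
```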

Journal ArticleDOI
TL;DR: Numerical results demonstrate that the Data-driven Fair-Hierarchical Scheduling (DFHS) outperforms the classical FCFS, the latest EARS, and Tian's schemes in terms of service missing ratio, queueing delay, and fairness index.
Abstract: The 6G space-air-ground integrated network can achieve global full-area three-dimensional coverage and ultrawide-area broadband access capabilities anytime and anywhere. Through the integration of satellite networks and terrestrial networks, it can provide a better user experience and has become the core development direction of 6G networks. Inspired by the service differentiation potential of 6G IEEE 802.11ax protocols, namely 6G Wi-Fi 6, a Data-driven Fair-Hierarchical Scheduling (DFHS) scheme is proposed in this paper to schedule packets from diversified applications in dense IoT networks, allowing data streams to fairly share the spectrum while satisfying deadline requirements. DFHS is divided into outer-layer and inner-layer schedulers. First, the diversified packets of IoT services are classified into categories according to delay requirement and transmission frequency. After that, the outer-layer scheduler assigns the classified packets to different Access Categories (ACs) with differentiated channel competition capabilities. Next, the inner-layer scheduler optimally schedules the AC queues according to the access response ratio of the packets. A Particle Swarm Optimization algorithm is further introduced to solve the optimal packet scheduling problem. Numerical results demonstrate that DFHS outperforms the classical FCFS, the latest EARS, and Tian's schemes in terms of service missing ratio, queueing delay, and fairness index.
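
The inner-layer optimization is handled with Particle Swarm Optimization, whose core loop is short. A sketch with an invented six-queue lateness objective standing in for the paper's AC-queue scheduling cost; the PSO mechanics (velocity update, personal/global bests) are the standard ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def cost(priorities):
    # hypothetical objective: total lateness of 6 AC queues with deadlines
    deadlines = np.array([2., 4., 8., 16., 32., 64.])
    order = np.argsort(-priorities)                  # higher priority first
    finish = np.cumsum(np.ones(6))[np.argsort(order)]  # finish time per queue
    return np.maximum(finish - deadlines, 0).sum()

n, dim, w, c1, c2 = 30, 6, 0.7, 1.5, 1.5
x = rng.uniform(0, 1, (n, dim))
v = np.zeros((n, dim))
pbest, pcost = x.copy(), np.array([cost(p) for p in x])
g = pbest[np.argmin(pcost)]

for it in range(100):
    r1, r2 = rng.random((2, n, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
    x = x + v
    c_ = np.array([cost(p) for p in x])
    better = c_ < pcost
    pbest[better], pcost[better] = x[better], c_[better]
    g = pbest[np.argmin(pcost)]
print("best cost:", pcost.min())
```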

Journal ArticleDOI
TL;DR: A new data-sharing framework and a data access control mechanism are proposed to support the selective sharing of electronic medical records from different medical institutions among different doctors, ensuring that privacy concerns are taken into account when processing requests for access to patients' medical information.
Abstract: By virtue of advances in smart networks and the cloud computing paradigm, smart healthcare is transforming. However, there are still challenges, such as storing sensitive data in untrusted and controlled infrastructure and ensuring the secure transmission of medical data, among others. The rapid development of watermarking provides opportunities for smart healthcare. In this article, we propose a new data-sharing framework and a data access control mechanism. Applications are submitted by the doctors, and the data is processed in the medical data center of the hospital and stored on semi-trusted servers, to support the selective sharing of electronic medical records from different medical institutions among different doctors. Our approach ensures that privacy concerns are taken into account when processing requests for access to patients' medical information. For accountability after data is modified or leaked, both patients and doctors must add digital watermarks associated with their identities when uploading data. Extensive analytical and experimental results are presented that show the security and efficiency of our proposed scheme.

Journal ArticleDOI
TL;DR: Experimental results show that the proposed DaaC framework improves QoS through distributed data transmission, helps protect the underlying network through a registration scheme, and preserves the privacy of mobile sinks through group-based data collection requests.
Abstract: Industrial Internet of Things applications demand trustworthiness in terms of quality of service (QoS), security, and privacy to support the smooth transmission of data. To address these challenges, in this article, we propose a distributed and anonymous data collection (DaaC) framework based on a multilevel edge computing architecture. This framework distributes captured data among multiple level-one edge devices (LOEDs) to improve the QoS and minimize packet drops and end-to-end delay. Mobile sinks are used to collect data from LOEDs and upload it to cloud servers. Before data collection, the mobile sinks are registered with a level-two edge device to protect the underlying network. The privacy of mobile sinks is preserved through group-based signed data collection requests. Experimental results show that our proposed framework improves QoS through distributed data transmission. It also helps in protecting the underlying network through a registration scheme and preserves the privacy of mobile sinks through group-based data collection requests.

Journal ArticleDOI
TL;DR: Comparison results confirm the effectiveness of the CVSSIoT-ICS framework, as it is equally applicable to all nodes of a hybrid network and evaluates vulnerabilities based on the distinct features of each node type.
Abstract: With the emergence of Internet-based devices, traditional industrial control system (ICS) networks have evolved to co-exist with conventional IT and Internet-enabled IoT networks, and hence face various security challenges. The IT industry around the world has widely adopted the common vulnerability scoring system (CVSS) as an industry standard to numerically evaluate the vulnerabilities in software systems. This mathematical score of vulnerabilities is combined with environmental knowledge to determine the vulnerable nodes and attack paths. IoT and ICS systems have unique dynamics and specific functionality compared with traditional computer networks, and therefore legacy cyber security models do not fit these advanced networks. In this paper, we study the application of the CVSS v3.1 framework to ICS embedded networks and propose an improved vulnerability framework named CVSSIoT-ICS. CVSSIoT-ICS and CVSS v3.1 are applied to a realistic supply chain hybrid network which consists of IT, IoT, and ICS nodes. This hybrid network is assigned actual vulnerabilities listed in the national vulnerability database (NVD). The comparison results confirm the effectiveness of the CVSSIoT-ICS framework, as it is equally applicable to all nodes of a hybrid network and evaluates vulnerabilities based on the distinct features of each node type.
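
For context, the CVSS v3.1 base score both frameworks start from is plain arithmetic over tabulated metric weights. A sketch of the scope-unchanged case, using the weights published by FIRST; the roundup here is a simple ceiling to one decimal, slightly simpler than the spec's floating-point-hardened version, and CVSSIoT-ICS itself extends this scoring rather than reproducing it.

```python
import math

# CVSS v3.1 metric weights (scope unchanged)
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.20}   # attack vector
AC = {"L": 0.77, "H": 0.44}                          # attack complexity
PR = {"N": 0.85, "L": 0.62, "H": 0.27}               # privileges required
UI = {"N": 0.85, "R": 0.62}                          # user interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # C/I/A impact

def base_score(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    impact = 6.42 * iss
    exploitability = 8.22 * AV[av] * AC[ac] * PR[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    return math.ceil(min(impact + exploitability, 10) * 10) / 10  # round up

# e.g. vector AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H scores 9.8
print(base_score("N", "L", "N", "N", "H", "H", "H"))
```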

Journal ArticleDOI
TL;DR: A new hybrid control scheme is designed using linear quadratic regulation, sliding mode control, and an artificial radial basis function neural network to alleviate the effect of DoS attacks and maintain the performance of the cyber-rotary gantry system in tracking applications.
Abstract: This paper presents an approach for the tolerant control and compensation of cyber attacks on the inputs and outputs of a cyber-physical system of rotary gantry type. The proposed control schemes are designed based on classic–intelligent control strategies for trajectory tracking and vibration control of a networked control system, developed for tip angular position control while the system is prone to cyber attacks. The malicious attacks are assumed to be of the denial-of-service (DoS) kind and to cause packet loss with high probability in two signals: the control input and the sensor output. In this paper, several classic and intelligent control strategies are studied in terms of robustness and effectiveness against attacks. Based on the results, a new hybrid control scheme is designed using linear quadratic regulation, sliding mode control, and an artificial radial basis function neural network to alleviate the effect of DoS attacks and maintain the performance of the cyber-rotary gantry system in tracking applications. The neural network controller is trained during the control process. Its learning algorithm is based on the minimization of a cost function which contains the sliding surface. The hybrid control system is analyzed from the stability perspective. Moreover, the efficiency of the proposed scheme is validated by simulation on the MATLAB Simulink platform.

Journal ArticleDOI
TL;DR: The proposed framework reduces the response time by forwarding useful information to the cloud servers and can be utilized by various industrial applications; theoretical and experimental results confirm its resiliency with respect to security and privacy threats.
Abstract: Industrial applications generate big data with redundant information that is transmitted over heterogeneous networks. The transmission of big data with redundant information not only increases the overall end-to-end delay but also increases the computational load on servers which affects the performance of industrial applications. To address these challenges, we propose an intelligent framework named Reliable and Secure multi-level Edge Computing (RaSEC), which operates in three phases. In the first phase, level-one edge devices apply a lightweight aggregation technique on the generated data. This technique not only reduces the size of the generated data but also helps in preserving the privacy of data sources. In the second phase, a multistep process is used to register level-two edge devices (LTEDs) with high-level edge devices (HLEDs). Due to the registration process, only legitimate LTEDs can forward data to the HLEDs, and as a result, the computational load on HLEDs decreases. In the third phase, the HLEDs use a convolutional neural network to detect the presence of moving objects in the data forwarded by LTEDs. If a movement is detected, the data is uploaded to the cloud servers for further analysis; otherwise, the data is discarded to minimize the use of computational resources on cloud computing platforms. The proposed framework reduces the response time by forwarding useful information to the cloud servers and can be utilized by various industrial applications. Our theoretical and experimental results confirm the resiliency of our framework with respect to security and privacy threats.

Journal ArticleDOI
TL;DR: A dynamic cluster algorithm based on the coefficient of variation is proposed, which learns the local spatial distribution of data and hierarchically clusters the majority class; validation on three artificial datasets, 22 KEEL datasets, and two gene expression cancer datasets indicates that the algorithms are not only effective imbalanced-learning algorithms but also provide potential for building a reliable biological cyber-physical system.
Abstract: Our paper aims at learning from imbalanced data based on ensemble learning. At this stage, the main solution is to combine under-sampling, oversampling, or cost-sensitive learning with ensemble learning. However, these feature-space-based methods fail to reflect the transformation of the distribution and are usually accompanied by high computational complexity and a risk of overfitting. In this paper, we propose a dynamic cluster algorithm based on the coefficient of variation (or entropy), which learns the local spatial distribution of the data and hierarchically clusters the majority class. This algorithm has low complexity and can dynamically adjust the clusters across the iterations of AdaBoost, adaptively synchronizing with the changes caused by sample-weight updates. Then, we design an index to measure the importance of each cluster. Based on this index, a dynamic sampling algorithm based on maximum weight is proposed. The effectiveness of the sampling algorithm is demonstrated by visual experiments. Finally, we propose a cost-sensitive algorithm based on Bagging and combine it with the dynamic sampling algorithm to obtain a multi-fusion imbalanced ensemble learning algorithm. In our experimental research, the algorithms have been validated on three artificial datasets, 22 KEEL datasets, and two gene expression cancer datasets, and have shown performance that matches or exceeds the state of the art (SOTA) in terms of AUC, indicating that they are not only effective imbalanced-learning algorithms but also provide potential for building a reliable biological cyber-physical system.
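
A loose sketch of the flavour of this approach, under heavy assumptions: KMeans stands in for the paper's own hierarchical dynamic clustering, random weights stand in for AdaBoost's evolving sample weights, and the cluster-importance index is reduced to the coefficient of variation of those weights. Only the overall shape (cluster the majority, score clusters by weight dispersion, keep the maximum-weight members) follows the abstract.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
maj = rng.normal(0, 1, (500, 2))          # majority-class points (toy data)
weights = rng.random(500)
weights /= weights.sum()                  # stand-in for AdaBoost sample weights

# cluster the majority, then score each cluster by its weight dispersion
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(maj)
picked = []
for k in range(8):
    idx = np.flatnonzero(labels == k)
    w = weights[idx]
    cv = w.std() / w.mean()               # coefficient of variation
    n_keep = max(1, round(10 * (1 + cv)))   # uneven clusters contribute more
    picked.extend(idx[np.argsort(-w)][:n_keep])  # maximum-weight sampling
print("majority reduced from 500 to", len(picked))
```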

Journal ArticleDOI
TL;DR: The simulation results show that the method proposed in this paper can optimize the performance of the secondary system while guaranteeing the priority of the primary user, and it is superior to several advanced algorithms.

Journal ArticleDOI
TL;DR: A cognitive popularity-based AI service distribution architecture based on SD-ICN is proposed to generate accurate AI service models over decentralized big data samples; it provides a user-request-oriented cognitive popularity model for caching and distribution optimization.
Abstract: As an important architecture for next-generation networks, Software-Defined Information-Centric Networking (SD-ICN) enables flexible and fast content sharing in beyond-fifth-generation (B5G) networks. The clear advantages of SD-ICN in fast and efficient content distribution and flexible control make it a perfect platform for solving the rapid sharing and cognitive caching of AI services, including the sharing of data samples and the transfer of pre-trained models. With the explosive growth of decentralized artificial intelligence (AI) services, the training and sharing efficiency of edge AI is affected. Various applications usually request the same AI samples and training models, but the efficient and cognitive sharing of AI services remains unsolved. To address these issues, we propose a cognitive popularity-based AI service distribution architecture based on SD-ICN. First, an SD-ICN-enabled edge training scheme is proposed to generate accurate AI service models over decentralized big data samples. Second, Pure Birth Process (PBP) and error-correction-based AI service caching and distribution schemes are proposed, which provide a user-request-oriented cognitive popularity model for caching and distribution optimization. Simulation results indicate the superiority of the proposed architecture, with the proposed cognitive SD-ICN scheme showing a 62.11% improvement over conventional methods.

Journal ArticleDOI
TL;DR: This article integrates information-centric networking (ICN) and network function virtualization (NFV) with ADASs to support efficient AR-assisted content sharing and distribution, and proposes an incentive trading model for assistance content caching services together with a novel mechanism for optimal content cache allocation.
Abstract: Advanced driver-assistance systems (ADASs) have been proposed as an alternative to driverless vehicles to provide support for automotive vehicle decisions. As a significant driving force for ADASs, augmented reality (AR) provides comprehensive location-based content services for in-vehicle consumers. With increasing requests for information sharing, the current standalone mode of ADASs needs a shift to a multiuser sharing mode. In this article, to address the high mobility and real-time requirements of ADASs in 5G environments, and also to address the resource orchestration and service management of big data in intelligent transportation systems, we integrate information-centric networking (ICN) and network function virtualization (NFV) with ADASs to support efficient AR-assisted content sharing and distribution. This integration eliminates the imbalance between content requests and resource limitations by splitting the virtual resources and providing on-demand network and resource slicing in ADASs. We propose an incentive trading model for assistance content caching services and also propose a novel mechanism for optimal content cache allocation. Our extensive evaluation confirms that the proposed mechanism outperforms prior approaches in terms of cache hit ratio and latency.

Posted Content
TL;DR: In this paper, the authors study the security of adaptive cruise control systems in the presence of covert attacks and propose a novel intrusion detection and compensation method to disclose and respond to such attacks.
Abstract: With the benefits of the Internet of Vehicles (IoV) paradigm come unprecedented security challenges. Among the many applications of interconnected systems, vehicular networks and smart cars are examples that have already been rolled out. Smart vehicles not only have networks connecting their internal components, e.g., via the Controller Area Network (CAN) bus, but are also connected to the outside world through roadside units and other vehicles. In some cases, the internal and external network packets pass through the same hardware and are merely isolated by software-defined rules. Any misconfiguration opens a window for hackers to intrude into a vehicle's internal components, e.g., the central lock system, Engine Control Unit (ECU), Anti-lock Braking System (ABS), or Adaptive Cruise Control (ACC) system. Compromise of any of these can lead to disastrous outcomes. In this paper, we study the security of smart vehicles' adaptive cruise control systems in the presence of covert attacks. We define two covert/stealth attacks in the context of cruise control and propose a novel intrusion detection and compensation method to disclose and respond to such attacks. More precisely, we focus on covert cyber attacks that compromise the integrity of the cruise controller, and we employ a neural network identifier in the IDS engine to estimate the system output dynamically and compare it against the ACC output. If any anomaly is detected, an embedded substitute controller kicks in and takes over the control. We conducted extensive experiments in MATLAB to evaluate the effectiveness of the proposed scheme in a simulated environment.
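
The detection logic reduces to a residual check between the real output and an identifier's prediction. A hedged sketch: a first-order lag stands in for the ACC dynamics, an exact model copy stands in for the paper's neural network identifier, and the attack and threshold values are invented.

```python
import numpy as np

dt, tau = 0.05, 0.5          # toy ACC lag model: v_dot = (u - v) / tau
v = v_hat = 0.0
threshold = 0.4              # residual bound, tuned on attack-free runs

for k in range(400):
    u = 1.0                                    # commanded setpoint
    attack = 0.8 if k > 200 else 0.0           # covert actuator offset
    v += dt * ((u + attack) - v) / tau         # true plant under attack
    v_hat += dt * (u - v_hat) / tau            # identifier prediction (an NN
                                               # in the paper, a model copy here)
    residual = abs(v - v_hat)
    if residual > threshold:
        print(f"step {k}: anomaly, residual {residual:.2f} "
              f"-> hand over to substitute controller")
        break
```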

Journal ArticleDOI
TL;DR: The structural weakness of the state-of-the-art indistinguishability obfuscation mechanism is shown, the future direction for resolving such privacy issues in IoMT applications is discussed, and approximate eigenvalues are used to remove the influence of noise on the matrix eigenvalues and build a specific relationship between the determinant and the matrix rank.

Journal ArticleDOI
TL;DR: The paper proposes the LSMR algorithm for linear and nonlinear equalizers; the simulation results indicate that the proposed equalizer offers significant performance gains and reduced complexity over the classical MMSE equalizer and other low-complexity equalizers in time- and frequency-selective fading channels.
Abstract: In Digital Video Broadcasting-Handheld (DVB-H) devices for cyber-physical social systems, Discrete Fractional Fourier Transform-Orthogonal Chirp Division Multiplexing (DFrFT-OCDM) has been suggested to enhance performance over Orthogonal Frequency Division Multiplexing (OFDM) systems under time- and frequency-selective fading channels. In this case, the need for equalizers like the Minimum Mean Square Error (MMSE) and Zero-Forcing (ZF) arises, though these are excessively complex due to the need for a matrix inversion, especially for the extensive symbol lengths of DVB-H. In this work, a low-complexity equalizer based on the Least-Squares Minimal Residual (LSMR) algorithm is used to solve the matrix inversion iteratively. The paper proposes the LSMR algorithm for linear and nonlinear equalizers; the simulation results indicate that the proposed equalizer offers significant performance gains and reduced complexity over the classical MMSE equalizer and other low-complexity equalizers in time- and frequency-selective fading channels.
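
SciPy ships the same LSMR iteration, which makes the complexity argument easy to demonstrate: the equalizer never inverts the channel matrix, it only applies it, so each iteration costs a matrix-vector product. A toy sketch with an invented banded channel and BPSK symbols, not the paper's DVB-H DFrFT-OCDM chain.

```python
import numpy as np
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
N = 64                               # symbol block length (DVB-H uses far more)
# hypothetical doubly-selective channel matrix: banded, non-circulant
H = np.diag(rng.normal(1, 0.1, N)) + 0.3 * np.diag(rng.normal(0, 1, N - 1), 1)
x = rng.choice([-1.0, 1.0], N)       # BPSK symbols
y = H @ x + 0.05 * rng.normal(size=N)

# LSMR minimises ||H z - y|| iteratively; damp adds light regularisation,
# playing a role loosely analogous to the MMSE noise term
z = lsmr(H, y, damp=0.05, maxiter=50)[0]
print("symbol errors:", int(np.sum(np.sign(z) != x)))
```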