
Showing papers in "IET Communications in 2020"


Journal ArticleDOI
TL;DR: In this survey, WiFi-based indoor positioning techniques are divided into the active positioning technique and the passive positioning technique based on whether the target carries certain devices.
Abstract: With the rapid development of wireless communication technology, various indoor location-based services (ILBSs) have gradually penetrated daily life. Although many other methods have been proposed for ILBSs over the past decade, WiFi-based positioning techniques, with their widely deployed infrastructure, have attracted attention in the field of wireless transmission. In this survey, the authors divide WiFi-based indoor positioning techniques into active and passive positioning techniques, based on whether the target carries certain devices. After reviewing a large number of papers in the related field, the authors provide a detailed summary of these two types of positioning techniques. In addition, they analyse the challenges and future development trends in the current technological environment.

116 citations


Journal ArticleDOI
TL;DR: A novel feature selection algorithm, which selects an optimal number of features from the data set, and an intelligent fuzzy temporal decision tree algorithm integrated with convolutional neural networks to detect intruders effectively are proposed.
Abstract: Intrusion detection systems play a significant role in providing security in wireless sensor networks. Existing intrusion detection systems focus only on the detection of known types of attacks; they fail to recognise new types of attacks introduced by malicious users, leading to vulnerability and information loss in the network. To address this challenge, a new intrusion detection system, which detects both known and unknown types of attacks using an intelligent decision tree classification algorithm, has been proposed. For this purpose, a novel feature selection algorithm, named the dynamic recursive feature selection algorithm, which selects an optimal number of features from the data set, is proposed. In addition, an intelligent fuzzy temporal decision tree algorithm is proposed by extending the decision tree algorithm and is integrated with convolutional neural networks to detect intruders effectively. The experimental analysis carried out using the KDD Cup data set and a network trace data set demonstrates the effectiveness of the proposed approach: the false-positive rate, energy consumption, and delay are reduced. In addition, the proposed system increases network performance through an increased packet delivery ratio.
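For readers who want to experiment with the general idea of recursively pruning features before training a tree classifier, the short Python sketch below uses scikit-learn's standard recursive feature elimination (RFE) with a decision tree on synthetic data. It is only an illustrative stand-in: the paper's dynamic recursive feature selection and fuzzy temporal decision tree are not specified in the abstract, and the data set and feature counts here are arbitrary.

```python
# Illustrative stand-in only: standard recursive feature elimination (RFE) with a
# decision-tree classifier, NOT the paper's dynamic recursive feature selection or
# fuzzy temporal decision tree. Data and feature counts are made up for the demo.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic "network traffic" features (41 columns, loosely mirroring KDD Cup 99 width)
X, y = make_classification(n_samples=2000, n_features=41, n_informative=10,
                           n_redundant=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Recursively drop the weakest features until 10 remain
selector = RFE(DecisionTreeClassifier(random_state=0), n_features_to_select=10)
selector.fit(X_tr, y_tr)

clf = DecisionTreeClassifier(random_state=0).fit(selector.transform(X_tr), y_tr)
print("selected feature indices:", np.where(selector.support_)[0])
print("test accuracy:", clf.score(selector.transform(X_te), y_te))
```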

92 citations


Journal ArticleDOI
TL;DR: This study proposes an efficient policy, called MinRE, for SPP in fog–cloud systems, to provide both QoS for IoT services and energy efficiency for fog service providers; it classifies services into two categories: critical services and normal ones.
Abstract: Fog computing is a decentralised model that can help cloud computing provide high quality of service (QoS) for Internet of Things (IoT) application services. The service placement problem (SPP) is the mapping of services onto fog and cloud resources. It plays a vital role in the response time and energy consumption of fog–cloud environments. However, providing an efficient solution to this problem is challenging due to difficulties such as the different requirements of services, limited computing resources, and the different delay and power-consumption profiles of devices in the fog domain. Motivated by this, in this study, we propose an efficient policy, called MinRE, for SPP in fog–cloud systems. To provide both QoS for IoT services and energy efficiency for fog service providers, we classify services into two categories: critical services and normal ones. For critical services, we propose MinRes, which aims to minimise response time, and for normal ones, we propose MinEng, whose goal is to reduce the energy consumption of the fog environment. Our extensive simulation experiments show that our policy improves energy consumption by up to 18%, the percentage of deadline-satisfied services by up to 14%, and the average response time by up to 10% in comparison with the second-best results.
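The two-class placement rule described in the abstract can be illustrated with a toy greedy sketch in Python: critical services go to the node with the lowest estimated response time, normal services to the node with the lowest estimated energy cost. The node and service attributes, and the simple cost formulas, are hypothetical; the paper's actual MinRes/MinEng policies are not given in the abstract.

```python
# Toy sketch of the two-class placement idea: critical -> minimise response time,
# normal -> minimise energy. Node/service fields and cost models are hypothetical.
nodes = [  # fog and cloud resources with made-up profiles
    {"name": "fog-1", "latency_ms": 5,  "power_w": 40,  "free_mips": 1500},
    {"name": "fog-2", "latency_ms": 8,  "power_w": 30,  "free_mips": 1000},
    {"name": "cloud", "latency_ms": 90, "power_w": 120, "free_mips": 100000},
]
services = [
    {"name": "alarm",     "mips": 400, "critical": True},
    {"name": "analytics", "mips": 900, "critical": False},
]

def response_time(svc, node):   # queueing ignored: transmission delay + processing only
    return node["latency_ms"] + 1000.0 * svc["mips"] / node["free_mips"]

def energy_cost(svc, node):     # energy ~ power * execution time (simplified)
    return node["power_w"] * svc["mips"] / node["free_mips"]

for svc in services:
    feasible = [n for n in nodes if n["free_mips"] >= svc["mips"]]
    key = (lambda n: response_time(svc, n)) if svc["critical"] else (lambda n: energy_cost(svc, n))
    best = min(feasible, key=key)
    best["free_mips"] -= svc["mips"]
    print(f"{svc['name']} -> {best['name']}")
```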

52 citations


Journal ArticleDOI
TL;DR: In this paper, the authors focus on the research and analysis of B5G and 6G vehicular channel measurements and modelling and provide guidelines on the channel model development and adoption for various system development and verification objectives.
Abstract: As vehicular communications for beyond fifth-generation (B5G) and sixth-generation (6G) systems are picking up interest from academia and industry, more and more research and development has been devoted to establishing vehicular communications for B5G and 6G capable of supporting ever more intelligent transportation systems. One key element facilitating the design and improvement of vehicular communications for B5G and 6G is channel modelling, which is widely regarded as the foundation of all communication and networking systems. In this paper, the authors focus on the research and analysis of B5G and 6G vehicular channel measurements and modelling. By emphasising the new requirements and challenges that the emerging B5G and 6G technologies and frequency bands bring to vehicular communication channel measurements and modelling, they present an overview of the existing work, identify its limitations, and provide guidelines on channel model development and adoption for various system development and verification objectives. Finally, future challenges related to vehicular channel measurements, modelling, and their application for B5G and 6G are addressed.

51 citations


Journal ArticleDOI
TL;DR: This systematic study attempts to analyse how the combination of IoT and cloud has been presented and identifies the challenges and metrics of such integration.
Abstract: Two different concepts, the Internet of Things (IoT) and cloud computing, influence our lives in many ways, and both will be further used and highlighted in the future of the Internet. The present systematic study discusses the combination of these two concepts. Many studies have focused on IoT and cloud computing separately; they lack a deep investigation of their combination, which brings new challenges and issues. Yet this integration has recently begun to receive primary attention. This systematic study attempts to analyse how the combination of IoT and cloud has been presented and identifies the challenges and metrics of such integration. Further, this analysis aims to develop an understanding of the current state of this integration by reviewing a collection of 38 recent papers. The contributions of this study, in brief, are: (i) reviewing the current challenges associated with the combination of cloud computing and IoT; (ii) presenting the anatomy of some proposed combination platforms, applications, and integrations; (iii) summarising major areas to boost the integration of cloud and IoT in upcoming works.

49 citations


Journal ArticleDOI
TL;DR: VM consolidation based on the Fruit fly Hybridised Cuckoo Search (FHCS) algorithm is proposed to obtain the optimal solution with the help of two objective functions in a cloud DC; energy consumption is reduced with fewer active PMs than in other conventional approaches.
Abstract: Cloud computing and virtualisation are recent approaches for achieving minimum energy usage in virtualised cloud data centres (DCs) through resource management. One of the major problems faced by cloud DCs is energy consumption, which increases the cost to cloud users and the environmental impact. Therefore, virtual machine (VM) consolidation has been proposed in many approaches, which reallocate VMs by VM migration with the objective of minimising energy consumption. Here, VM consolidation based on the Fruit fly Hybridised Cuckoo Search (FHCS) algorithm is proposed to obtain the optimal solution with the help of two objective functions in a cloud DC. This FHCS approach efficiently minimises the energy usage and resource depletion in the cloud DC. The proposed work is compared with the Ant Colony System (ACS), Particle Swarm Optimisation (PSO) algorithm, and Genetic Algorithm (GA). The simulation results reveal the advantage of the FHCS and VM migration method over existing procedures such as GA, PSO and ACS in terms of energy consumption and resource utilisation. The proposed method uses 68 kWh less energy and 72% fewer resources than existing methods. Simulation results show that the energy consumption of the proposed method is reduced with fewer active physical machines (PMs) than other conventional approaches.
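The energy side of VM consolidation is often expressed with a linear host power model, P(u) = P_idle + (P_max − P_idle)·u, where hosting no VMs allows a PM to be switched off. The Python sketch below uses that common model to show why packing VMs onto fewer active PMs saves energy; the wattage figures and placements are illustrative assumptions, and this is not the paper's FHCS objective function.

```python
# Minimal sketch of the energy side of VM consolidation, using the common linear
# host power model P(u) = P_idle + (P_max - P_idle) * u. Figures are illustrative;
# this is not the paper's FHCS objective.
P_IDLE, P_MAX = 70.0, 250.0   # watts, assumed host profile

def host_power(cpu_utilisation):
    """Power draw of one active physical machine at a given CPU utilisation (0..1)."""
    return P_IDLE + (P_MAX - P_IDLE) * cpu_utilisation

def dc_power(placement, vm_load):
    """Total power of a data centre for a VM->host placement.

    placement: dict vm_id -> host_id; vm_load: dict vm_id -> CPU demand (0..1).
    Hosts with no VMs are assumed switched off (the consolidation benefit).
    """
    per_host = {}
    for vm, host in placement.items():
        per_host[host] = per_host.get(host, 0.0) + vm_load[vm]
    return sum(host_power(min(u, 1.0)) for u in per_host.values())

vm_load = {"vm1": 0.3, "vm2": 0.2, "vm3": 0.4}
spread = {"vm1": "h1", "vm2": "h2", "vm3": "h3"}   # one VM per host
packed = {"vm1": "h1", "vm2": "h1", "vm3": "h1"}   # consolidated onto one host
print("spread :", dc_power(spread, vm_load), "W")  # three idle overheads paid
print("packed :", dc_power(packed, vm_load), "W")  # one active PM only
```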

46 citations


Journal ArticleDOI
TL;DR: It is shown that each underwater vehicle in the MU-massive MIMO transmission scenario can achieve an effective bit rate as high as 198.7 kbps over the 1 km UAC using four transmitting transducers.
Abstract: In this study, multi-user (MU) underwater acoustic communication is investigated using massive multiple-input multiple-output (MIMO) orthogonal frequency division multiplexing (OFDM)-based orthogonal time frequency space modulation (OTFS) systems. The performance of a 4-user scenario is evaluated over a simulated 1 km vertically-configured time-varying underwater acoustic channel (UAC) in terms of bit error rate and maximum achievable bit rate. Considering 64-QAM and frequency-domain pilot-based channel estimation, it is shown that the underwater vehicles employing the OFDM-based OTFS modulation outperform those using the conventional OFDM modulation in a dynamic UAC. The application of massive MIMO allows the four underwater vehicles to use the same time and frequency resources to transmit their information reliably to a surface station. Furthermore, it is shown that each underwater vehicle in the MU-massive MIMO transmission scenario can achieve an effective bit rate as high as 198.7 kbps over the 1 km UAC using four transmitting transducers.
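The OFDM-based OTFS transmitter mentioned in this abstract maps delay-Doppler symbols onto the time-frequency grid with the inverse symplectic finite Fourier transform (ISFFT) and then applies ordinary per-slot OFDM modulation. The Python sketch below shows a round trip over an ideal channel; the grid sizes and normalisations are assumptions, and the underwater channel, pilot-based estimation and massive MIMO aspects of the paper are not modelled.

```python
# Minimal OFDM-based OTFS round trip over an ideal channel: delay-Doppler symbols are
# mapped to the time-frequency grid with the ISFFT, OFDM-modulated slot by slot, then
# inverted at the receiver. Grid sizes and normalisations are illustrative only.
import numpy as np

N, M = 16, 64                      # Doppler bins (time slots) x delay bins (subcarriers)
rng = np.random.default_rng(0)
x_dd = (rng.choice([-1, 1], (N, M)) + 1j * rng.choice([-1, 1], (N, M))) / np.sqrt(2)  # QPSK stand-in

# ISFFT: X_tf[n, m] = 1/sqrt(NM) * sum_{k,l} x_dd[k, l] exp(j*2*pi*(n*k/N - m*l/M))
X_tf = np.sqrt(N / M) * np.fft.ifft(np.fft.fft(x_dd, axis=1), axis=0)

# OFDM modulation: one M-point IFFT per time slot (cyclic prefix omitted for brevity)
s = np.fft.ifft(X_tf, axis=1) * np.sqrt(M)

# Receiver over an ideal channel: OFDM demodulation, then SFFT back to delay-Doppler
Y_tf = np.fft.fft(s, axis=1) / np.sqrt(M)
x_hat = np.sqrt(M / N) * np.fft.ifft(np.fft.fft(Y_tf, axis=0), axis=1)

print("max reconstruction error:", np.max(np.abs(x_hat - x_dd)))  # ~1e-15
```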

33 citations


Journal ArticleDOI
TL;DR: This study aims to describe the functions and features of the key 5G technologies and conduct a survey on the latest development of driving technologies for 5G, focusing on health care applications that would benefit from the advantages brought by 5G.
Abstract: In 2019, 5G was introduced, and it is gradually being deployed all over the world. 5G introduces new concepts, such as network slicing, to better support various applications with different performance requirements on data rate and latency, and edge and cloud computing, which will be responsible for handling computational requirements. This study aims to describe the functions and features of the key 5G technologies and to survey the latest development of driving technologies for 5G. The survey focuses on health care applications that would benefit from the advantages brought by 5G.

33 citations


Journal ArticleDOI
TL;DR: The authors derive the novel tight closed-form expressions for the secrecy outage probability (SOP), probability of non-zero secrecy capacity, and intercept probability of the considered system and formulate and investigate analytically two optimisation problems, namely, optimal relay location with fixed power allocation and optimal power allocation with fixed relay location, to minimise the system SOP.
Abstract: The authors investigate the secrecy performance of a cooperative relaying network, wherein a fixed source communicates with a fixed destination via an amplify-and-forward mobile relay in the presence of a passive mobile eavesdropper. They assume that the source-to-relay and relay-to-destination channels experience Nakagami-m fading, whereas the relay-to-eavesdropper link follows double Nakagami-m fading. Under such a mixed fading environment, they derive novel tight closed-form expressions for the secrecy outage probability (SOP), probability of non-zero secrecy capacity, and intercept probability of the considered system. They also deduce the asymptotic secrecy outage and intercept probability expressions in the high signal-to-noise ratio regime to highlight the impact of fading parameters and channel conditions on the secrecy diversity order. It is shown that the system can achieve a secrecy diversity order of min(m_SR, m_RD) in terms of SOP, and m_RD in terms of intercept probability, where m_SR and m_RD represent the fading severity parameters between source and relay and between relay and destination, respectively. They also formulate and investigate analytically two optimisation problems, namely, optimal relay location with fixed power allocation and optimal power allocation with fixed relay location, to minimise the system SOP. They verify the analytical findings via numerical and simulation results.
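As a quick sanity check on results of this kind, the secrecy outage probability can be estimated by Monte Carlo simulation using the standard definition SOP = Pr{[log2(1+γ_D) − log2(1+γ_E)]⁺ < Rs}, with Nakagami-m channel power gains drawn as Gamma random variables. The Python sketch below is a simplified single-hop illustration with assumed parameters; it does not reproduce the paper's AF relay, the double Nakagami-m eavesdropper link, or the closed-form expressions.

```python
# Hedged Monte Carlo sketch of a secrecy outage probability under Nakagami-m fading,
# using SOP = Pr{ [log2(1+g_D) - log2(1+g_E)]^+ < Rs }. Simplified single-hop case;
# the paper's relay model and closed-form results are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
trials = 1_000_000
m_D, m_E = 2.0, 1.0            # Nakagami-m severity parameters (assumed)
snr_D, snr_E = 10.0, 3.0       # average SNRs (linear) of legitimate and eavesdropper links
Rs = 1.0                       # target secrecy rate in bit/s/Hz

# Nakagami-m amplitude  =>  channel power gain ~ Gamma(shape=m, scale=1/m), unit mean
g_D = snr_D * rng.gamma(m_D, 1.0 / m_D, trials)
g_E = snr_E * rng.gamma(m_E, 1.0 / m_E, trials)

C_s = np.maximum(np.log2(1.0 + g_D) - np.log2(1.0 + g_E), 0.0)
print("estimated SOP:", np.mean(C_s < Rs))
```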

31 citations


Journal ArticleDOI
TL;DR: A novel hybrid optimisation technique, named the dragonfly–firefly algorithm (DA–FA), is proposed, which outperforms existing localisation solutions in terms of localisation error.
Abstract: Localisation has become a major research attraction in recent years in the field of wireless sensor networks (WSNs). It is required for various applications, such as monitoring of objects placed in indoor and outdoor environments. The main requirement in localisation is to assign a location to each node, since multiple sensor nodes in a WSN are used to retrieve information. The aim of this research is to address the WSN localisation problem using various optimisation techniques. The concept of single anchor node placement at the centre of the sensing field, with its projection using a hexagonal pattern, is introduced. In this study, a novel hybrid optimisation technique named the dragonfly–firefly algorithm (DA–FA) is proposed. DA is a recently suggested optimisation algorithm based on the dragonfly's static and dynamic swarming behaviour. The suggested hybrid technique combines the exploration capability of DA with the exploitation capability of the firefly algorithm to obtain globally optimal solutions. To check the effectiveness of DA–FA, the CEC 2019 benchmark functions are used for comparison with competitive algorithms. DA–FA converges quickly and provides the optimal solution for most of the benchmark functions. In addition, DA–FA outperforms existing localisation solutions in terms of localisation error.

30 citations


Journal ArticleDOI
TL;DR: Simulation results confirm that the proposed performance-based SLA (PerSLA) framework for cost, performance, penalty, and revenue optimisation is adequate in revenue generation and customer satisfaction.
Abstract: Cost, performance, and penalties are the key factors in revenue generation and customer satisfaction. They have a complex correlation that becomes more complicated in the absence of a proper framework that unambiguously defines these factors. A service-level agreement (SLA) is the initial document discussing the selected parameters as a precondition to business initialisation. The clear definition and application of the SLA is of paramount importance, as for modern as-a-Service online businesses no direct communication between provider and consumer is expected. For the proper implementation of an SLA, there should be a satisfactory approach for measuring and monitoring quality-of-service metrics. This study investigates these issues and proposes a performance-based SLA (PerSLA) framework for cost, performance, penalty, and revenue optimisation. PerSLA optimises these parameters and maximises both provider revenue and customer satisfaction. Simulation results confirm that the proposed framework is adequate in revenue generation and customer satisfaction. Customers and providers monitor the business with respect to the agreed terms and conditions; on violation, the provider is penalised. This agreement increases trust in the relationship between provider and consumer.

Journal ArticleDOI
TL;DR: It is demonstrated that in comparison with the existing NGSMs, the proposed model possesses the ability to better mimic characteristics of real vehicular channels and derive some significant statistical properties in terms of the power delay profile, tap correlation coefficient matrix and Doppler power spectral density.
Abstract: In this study, the authors propose a new non-geometrical stochastic model (NGSM) for non-stationary wideband vehicular communication channels. To include the line-of-sight component, the proposed model first generates a non-uniformly distributed tap phase, which can be obtained from the widely used uniformly distributed tap phase. Moreover, the proposed model can practically experience variable types of Doppler spectra for different delays by modifying the autocorrelation function used in the existing NGSM. In consideration of the non-stationarity in the frequency domain of vehicular communication channels, the authors further consider that the amplitude and phase of different taps are correlated. To evaluate the performance of the proposed model, they derive some significant statistical properties in terms of the power delay profile, tap correlation coefficient matrix and Doppler power spectral density. It is demonstrated that, in comparison with existing NGSMs, the proposed model possesses the ability to better mimic the characteristics of real vehicular channels. Finally, excellent agreement is achieved between the simulation results and the corresponding measured data, confirming the accuracy of the proposed model.

Journal ArticleDOI
TL;DR: An association rule mining algorithm, in particular, the Apriori algorithm is employed to extract appropriate features from the raw data including rules and repetitive patterns that would be used for classifying the data and detecting anomalies in communication networks.
Abstract: Nowadays, detecting anomalous events in communication networks is a strong focus for many researchers. In a large communication network, traffic is massive, which leads to a larger amount of data travelling and also to growth in noise; extracting meaningful data for anomaly detection is therefore very challenging. Each attack has its own behaviour that determines the type of attack; however, some attacks may have similar behaviours and differ only in some features. Extracting such meaningful features is of special importance. In this study, an association rule mining algorithm, in particular the Apriori algorithm, is employed to extract appropriate features, including rules and repetitive patterns, from the raw data. The extracted features are then used for classifying the data and detecting anomalies in communication networks. A hybrid of artificial neural network and AdaBoost classification algorithms is employed for classifying the detected events into normal behaviour and attack events. The proposed method is compared with previous methods reported in this field, such as CART, CHAID, multiple linear regression, and logistic regression, on the KDDCUP99 data set. The results show that the proposed method outperformed the other classifiers examined. A reinforcement-learning strategy based on a max-vote scheme is used to combine the classifiers' results.
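The pipeline sketched in this abstract (frequent patterns mined from transactions, then used as binary features for a boosted classifier) can be illustrated with a tiny hand-rolled Apriori-style pass followed by a stock scikit-learn AdaBoost classifier. The transactions, labels and support threshold below are made up, and the paper's ANN + AdaBoost hybrid and max-vote fusion are not reproduced.

```python
# Illustrative only: a tiny Apriori-style frequent-itemset pass producing binary
# "pattern present" features, then a stock AdaBoost classifier. Not the paper's pipeline.
from itertools import combinations
from sklearn.ensemble import AdaBoostClassifier

transactions = [                           # categorical view of toy connection records
    {"tcp", "http", "SF"}, {"tcp", "http", "SF"}, {"udp", "dns", "SF"},
    {"tcp", "http", "S0"}, {"tcp", "smtp", "SF"}, {"tcp", "http", "S0"},
]
labels = [0, 0, 0, 1, 0, 1]                # 0 = normal, 1 = attack (toy labels)
min_support = 0.3

def support(itemset):
    return sum(itemset <= t for t in transactions) / len(transactions)

items = sorted({i for t in transactions for i in t})
freq1 = [i for i in items if support(frozenset([i])) >= min_support]
frequent = [frozenset([i]) for i in freq1]
# one Apriori join step: candidate 2-itemsets from frequent 1-itemsets, pruned by support
frequent += [frozenset(c) for c in combinations(freq1, 2)
             if support(frozenset(c)) >= min_support]

# binary feature vector: which frequent patterns appear in each record
X = [[int(p <= t) for p in frequent] for t in transactions]
clf = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X, labels)
print("patterns:", [set(p) for p in frequent])
print("training accuracy:", clf.score(X, labels))
```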

Journal ArticleDOI
TL;DR: The applications of directional antennas in WSNs are classified into security benefits, localisation, convergecast, contention reduction, and energy efficiency; directional antennas provide advantages such as an extended transmission range, energy savings, better prevention of wormhole attacks, long coverage, and high interference tolerance.
Abstract: Directional antennas in wireless sensor networks (WSNs) have several advantages, including enhanced network capacity, longer transmission range, better spatial reuse, and lower interference. By alleviating contention and enhancing the range of communication, directional antennas improve the performance of WSNs. Commonly, various methods such as localisation, power management, data aggregation, and optimisation are used to obtain these advantages in WSNs; however, reliability requirements and energy concerns make the antenna technology even more advantageous. Compared with these methods, directional antennas provide advantages such as an extended transmission range, energy savings, better prevention of wormhole attacks, long coverage, and high interference tolerance. This review provides a detailed study of recent improvements and open research problems on directional antennas in WSN applications. First, the introduction and background of directional antennas are presented briefly, based on their classification, radiation patterns, and advantages for directional communication in WSNs. Over the past years, a number of directional antennas have been designed for WSN applications. In this study, recently developed directional antennas and their applications in WSNs are reviewed. The applications of directional antennas in WSNs are classified into security benefits, localisation, convergecast, contention reduction, and energy efficiency.

Journal ArticleDOI
TL;DR: A scheme based on grid clustering and fuzzy reinforcement learning is proposed to maximise network lifetime and achieve energy-efficient data aggregation for distributed WSNs, showing superior performance in terms of energy consumption and network lifetime compared to earlier systems.
Abstract: A widely acknowledged problem in wireless sensor networks (WSNs) is to develop a practical scheme for data aggregation over the massive number of sensor nodes randomly distributed across a network region. The essential operation of cluster heads (CHs) in such a network is to transmit the aggregated data to the sink node through multi-hop communication, so that energy is used efficiently during aggregation and transmission. Therefore, this study presents a scheme based on grid clustering and fuzzy reinforcement learning to maximise network lifetime and achieve energy-efficient data aggregation for distributed WSNs. Initially, grid clustering is employed for cluster formation and CH selection. Further, a fuzzy rule system-based reinforcement learning algorithm is used to select the data aggregator node based on parameters such as distance, neighbourhood overlap, and algebraic connectivity. Finally, the dynamic relocation of the mobile sink is performed within the grid-based clustered network region using a fruit fly optimisation algorithm. The experimental outcomes reveal that the proposed data aggregation scheme provides superior performance in terms of energy consumption and network lifetime compared to earlier systems.

Journal ArticleDOI
TL;DR: In this article, a UAV-enabled edge computing framework is proposed, where a group of UAVs fly around to provide the near-users edge computing service, and the computation migration decision making problem is formulated as a Markov decision process, where the state contains the extracted observations from the environment.
Abstract: The implementation of computation offloading is a challenging issue in remote areas where traditional edge infrastructures are sparsely deployed. In this study, the authors propose an unmanned aerial vehicle (UAV)-enabled edge computing framework, where a group of UAVs fly around to provide near-user edge computing services. They study the computation migration problem for complex missions, which can be decomposed into typical task-flows considering the inter-dependency of tasks. Each time a task appears, it should be allocated to a proper UAV for execution, which is defined as computation migration or task migration. Since the UAV-to-ground communication data rate is strongly associated with the UAV location, selecting a proper UAV to execute each task largely benefits the missions' response time. They formulate the computation migration decision-making problem as a Markov decision process, in which the state contains the extracted observations from the environment. To cope with the dynamics of the environment, they propose an advantage actor–critic reinforcement learning approach to learn the near-optimal policy on the fly. Simulation results show that the proposed approach has a desirable convergence property and can significantly reduce the average response time of missions compared with the benchmark greedy method.
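The advantage actor–critic idea referenced above can be shown in its simplest tabular form: a softmax policy (actor) and a state-value table (critic), both updated from the TD error used as the advantage estimate. The toy "which UAV executes the next task" environment, state space and reward below are stand-ins; the paper's state extraction and neural network policy are not modelled.

```python
# Minimal tabular advantage actor-critic sketch for a toy UAV-selection decision.
# State space, reward (negative response time) and dynamics are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(0)
n_states, n_uavs = 8, 3                   # discretised observation x candidate UAVs
theta = np.zeros((n_states, n_uavs))      # actor: softmax preferences
V = np.zeros(n_states)                    # critic: state values
alpha_a, alpha_c, gamma = 0.05, 0.1, 0.95

def policy(s):
    p = np.exp(theta[s] - theta[s].max())
    return p / p.sum()

def step(s, a):
    """Toy environment: reward = -response time, smaller when the UAV 'matches' the state."""
    response_time = 1.0 + abs((s % n_uavs) - a) + 0.1 * rng.standard_normal()
    return rng.integers(n_states), -response_time

s = rng.integers(n_states)
for _ in range(20000):
    pi = policy(s)
    a = rng.choice(n_uavs, p=pi)
    s_next, r = step(s, a)
    advantage = r + gamma * V[s_next] - V[s]      # TD error as advantage estimate
    V[s] += alpha_c * advantage                   # critic update
    grad = -pi
    grad[a] += 1.0                                # d log pi(a|s) / d theta[s]
    theta[s] += alpha_a * advantage * grad        # actor update
    s = s_next

print("greedy UAV per state:", theta.argmax(axis=1))
```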

Journal ArticleDOI
TL;DR: Simulation results show the performance of the proposed distance-based dynamic duty-cycle allocation (DBDDCA) algorithm is significantly better than the existing strategies under the investigated network parameters.
Abstract: A wireless sensor network (WSN) consists of spatially distributed, miniature, autonomous nodes powered by batteries. The major bottleneck of WSNs is efficient energy utilisation: the energy consumed for signal transmission increases with distance. This problem of energy consumption is addressed in this study, which presents a strategy named the distance-based dynamic duty-cycle allocation (DBDDCA) algorithm. In DBDDCA, nodes at a longer distance from the cluster head (CH) transmit for a relatively shorter time in order to save energy; conversely, nodes closer to the CH transmit for a longer time. The proposed DBDDCA is compared with other existing strategies, low-energy adaptive clustering hierarchy (LEACH), modified LEACH, and the stable election protocol, and with two existing medium access control (MAC) protocols, sensor MAC (S-MAC) and timeout MAC (T-MAC). The performance of the proposed and existing strategies is evaluated with the following network parameters: energy consumption, network energy utilisation, network lifetime, latency, and packet delivery. These parameters are evaluated under different network scenarios, such as an increasing number of nodes, the number of rounds, and variation in the initial energy of nodes. Simulation results show that the performance of the proposed strategy is significantly better than that of the existing strategies under the investigated network parameters.
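The core idea, giving nodes farther from the CH a shorter active share of each frame, can be illustrated with a tiny Python function. The linear distance-to-duty-cycle mapping and the bounds below are assumptions for illustration, not the DBDDCA formula from the paper.

```python
# Toy illustration of the distance-based duty-cycle idea: nodes farther from the cluster
# head get a shorter active (transmit) share of each frame. Mapping and bounds are assumed.
def duty_cycle(distance_m, d_max_m=100.0, dc_min=0.1, dc_max=0.9):
    """Active fraction of a frame, decreasing linearly with distance to the CH."""
    d = min(max(distance_m, 0.0), d_max_m)
    return dc_max - (dc_max - dc_min) * d / d_max_m

for d in (5, 25, 50, 95):
    print(f"node at {d:3d} m from CH -> duty cycle {duty_cycle(d):.2f}")
```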

Journal ArticleDOI
TL;DR: A multi-user space division multiple access (SDMA) model is developed to investigate the trade-off between quality of service (QoS), computational complexity in beamforming, and cooling requirements for various use cases and numbers of users.
Abstract: Future fifth-generation (5G) systems will aim to design low-cost phased-array base station antenna systems at mm-waves for simultaneous multiple beamforming with enhanced spatial multiplexing, limited interference, acceptable power consumption, suitable processing complexity, and passive cooling. In this study, a multi-user space division multiple access (SDMA) model is developed to investigate the trade-off between quality of service (QoS), computational complexity in beamforming, and cooling requirements for various use cases and numbers of users. The QoS at the user ends is rated by assessing the statistical signal-to-interference-plus-noise ratios (SINRs). Two beamforming algorithms, namely conjugate beamforming (CB) and zero-forcing (ZF), are considered and compared. Depending on the deployment scenario, rotated and optimised array layouts are proposed for use with CB, giving the least computational complexity while providing relatively good QoS. Different reduced-complexity ZF algorithms are introduced as a compromise between SINR performance and computational burden. The impact of the number of simultaneously served users on thermal management in active integrated 5G base station antenna arrays is investigated as well.
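The two precoders compared in this study have standard textbook forms: conjugate (matched-filter) beamforming W = Hᴴ and zero-forcing W = Hᴴ(HHᴴ)⁻¹. The Python sketch below evaluates per-user SINR for both on a random i.i.d. channel; the array size, SNR and SINR model are assumptions, and the array geometry, mm-wave propagation and thermal aspects of the study are not modelled.

```python
# Textbook forms of the two precoders compared above, evaluated on a random i.i.d. channel.
import numpy as np

rng = np.random.default_rng(2)
n_ant, n_users, snr_lin = 64, 8, 10.0
H = (rng.standard_normal((n_users, n_ant)) + 1j * rng.standard_normal((n_users, n_ant))) / np.sqrt(2)

def per_user_sinr(W):
    W = W / np.linalg.norm(W)                    # total transmit power normalised to 1
    G = H @ W                                    # effective user x stream gain matrix
    sig = np.abs(np.diag(G)) ** 2
    interf = np.sum(np.abs(G) ** 2, axis=1) - sig
    return snr_lin * sig / (1.0 + snr_lin * interf)

W_cb = H.conj().T                                  # conjugate (matched-filter) beamforming
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)  # zero-forcing
for name, W in (("CB", W_cb), ("ZF", W_zf)):
    print(name, "mean user SINR [dB]:", 10 * np.log10(per_user_sinr(W).mean()))
```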

Journal ArticleDOI
TL;DR: The outage probability and sum rate of a downlink multi-user NOMA system over generalised fading channels are analysed, and the impact of system parameters such as the number of users, power allocation coefficients, and fading parameters on the performance of the NOMA system is investigated.
Abstract: Non-orthogonal multiple access (NOMA) is recognised as an improved multiple access technique compared with orthogonal multiple access (OMA) to fulfil the requirements of fifth-generation wireless communication systems. In this study, the outage probability and sum-rate analysis of a downlink multi-user NOMA system over generalised fading channels, i.e. η-μ and κ-μ, are presented. Specifically, the outage probability of NOMA is analysed with a fixed target rate and with the OMA rate. Mathematical expressions for the outage probability and sum rate with a fixed set of power allocation coefficients are derived and verified through simulation results. The impact of system parameters such as the number of users, power allocation coefficients, and fading parameters on the performance of the NOMA system is investigated. NOMA with a given fixed target rate offers better outage performance than OMA; however, NOMA with the OMA rate offers better outage performance only for the near user. The sum rate and near-user rate of NOMA improve with the signal-to-noise ratio; however, the far-user rate does not follow the same trend.
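The fixed-target-rate outage analysed here can be checked numerically with a short Monte Carlo run for the basic two-user case: the far user treats the near user's signal as interference, while the near user first decodes and cancels the far user's signal (SIC) before decoding its own. The sketch below uses Rayleigh fading and assumed power allocation coefficients for simplicity; the paper's η-μ / κ-μ channels and multi-user setting are not reproduced.

```python
# Monte Carlo sketch of two-user downlink NOMA outage with fixed power allocation and
# fixed target rates, over Rayleigh fading (a simplification of the paper's channels).
import numpy as np

rng = np.random.default_rng(3)
trials = 1_000_000
snr = 10.0 ** (15 / 10)            # transmit SNR, 15 dB
a_far, a_near = 0.8, 0.2           # power allocation coefficients (sum to 1)
R_far, R_near = 0.5, 1.0           # target rates in bit/s/Hz
g_near = rng.exponential(1.0, trials)   # |h|^2 of the near (strong) user
g_far = rng.exponential(0.3, trials)    # weaker average gain for the far user

# Far user decodes its own signal, treating the near user's signal as interference
sinr_far = a_far * snr * g_far / (a_near * snr * g_far + 1.0)
out_far = np.log2(1 + sinr_far) < R_far

# Near user first decodes the far user's signal (SIC), then its own interference-free signal
sinr_sic = a_far * snr * g_near / (a_near * snr * g_near + 1.0)
sinr_near = a_near * snr * g_near
out_near = (np.log2(1 + sinr_sic) < R_far) | (np.log2(1 + sinr_near) < R_near)

print("far-user outage :", out_far.mean())
print("near-user outage:", out_near.mean())
```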

Journal ArticleDOI
TL;DR: A distributed air–ground integrated deployment algorithm is proposed to jointly optimise the position of the UAV and the coalition formation of ground nodes and the property of Stackelberg equilibrium is proven for the air-ground cooperative relationship.
Abstract: In this study, the air–ground integrated deployment method is studied for the unmanned aerial vehicle (UAV)-enabled mobile edge computing (MEC) system. The UAV can help to reduce the delay and energy consumption of MEC. However, the limited coverage range of UAV limits the quality of data offloading. To improve the efficiency of data transmission, a hierarchical game model is designed. Ground nodes form multiple coalitions actively according to the position of UAV and the UAV adjusts the position based on the data distribution of ground networks. The relationship between the UAV and ground nodes is modelled as a Stackelberg game. A coalition formation game (CFG) is constructed for the data gathering among ground nodes. It is proved that the proposed CFG is an exact potential game with at least one Nash equilibrium. Moreover, the property of Stackelberg equilibrium is proven for the air–ground cooperative relationship. Based on the hierarchical model, a distributed air–ground integrated deployment algorithm is proposed to jointly optimise the position of the UAV and the coalition formation of ground nodes. The simulation results show that the proposed method promotes the efficiency of data transmission greatly and can converge to a stable state with reasonable iteration times.

Journal ArticleDOI
TL;DR: The authors propose an efficient Q-learning based computation offloading algorithm (QCOA) to reduce the complexity of the optimisation problem; the proposed architecture achieves a 5% benefit compared with the traditional two-layer network architecture in terms of MUs' energy and time consumption.
Abstract: Unmanned aerial vehicles (UAVs) have recently been considered as a flying platform to provide wide coverage and relaying services for mobile users (MUs). Mobile edge computing (MEC) has been developed as a new paradigm to improve the quality of experience of MUs in future networks. Motivated by the high flexibility and controllability of UAVs, in this study, the authors study a multi-UAV-enabled MEC system, in which UAVs have computation resources to offer computation offloading opportunities for MUs, aiming to reduce the MUs' total consumption of time and energy. Considering the rich computation resources in the remote cloud centre, they propose a MUs–Edge–Cloud three-layer network architecture, where UAVs play the role of flying edge servers. Based on this framework, they formulate the computation offloading issue as a mixed-integer non-linear programming problem, for which an optimal solution is difficult to obtain in general. To address this, they propose an efficient Q-learning based computation offloading algorithm (QCOA) to reduce the complexity of the optimisation problem. Numerical results show that the proposed QCOA outperforms benchmark offloading policies (e.g. random offloading, traversal offloading). Furthermore, the proposed three-layer network architecture achieves a 5% benefit compared with the traditional two-layer network architecture in terms of MUs' energy and time consumption.
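At the heart of any Q-learning based offloading policy is the tabular update Q(s,a) ← Q(s,a) + α[r + γ·max_a' Q(s',a') − Q(s,a)]. The sketch below applies it to a toy choice between local execution, two UAV edge servers and the cloud; the states, cost model and dynamics are illustrative assumptions, not the paper's QCOA formulation.

```python
# Tabular Q-learning on a toy offloading problem: choose local, one of two UAVs, or cloud.
# Reward is the negative of an assumed time+energy cost; dynamics are made up.
import numpy as np

rng = np.random.default_rng(4)
n_states, actions = 6, ["local", "uav-1", "uav-2", "cloud"]
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    base_cost = [3.0, 1.0, 1.5, 2.5][action]                    # cost per offloading target
    cost = base_cost + 0.5 * (state % 3) * (action != 0) + 0.2 * rng.random()
    return rng.integers(n_states), -cost                        # next state, reward

s = rng.integers(n_states)
for _ in range(50000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[s].argmax())
    s_next, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])  # Q-learning update
    s = s_next

print("learned offloading choice per state:",
      [actions[int(i)] for i in Q.argmax(axis=1)])
```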

Journal ArticleDOI
TL;DR: The authors propose a novel fault-tolerant scheduling algorithm of modules in FC and optimise it using an energy-efficient checkpointing and load balancing technique based on the Bayesian classification and call it ECLB.
Abstract: Fog computing (FC) with a distributed architecture plays an essential role in the Internet of Things (IoT). This paradigm utilises the processing abilities of fog devices (FDs) and decreases latency. The large volume of data and its processing in IoT can cause network failures. Researchers tend to consider communication reliability to reduce fault effects and achieve high performance, and fault tolerance becomes a necessary matter for enhancing the reliability of the fog. Notably, fault-tolerance studies have been performed mostly on cloud systems. To counter this issue, the authors propose and optimise a novel fault-tolerant scheduling algorithm for modules in FC. The main idea of this approach is a classification method for the different modules, alongside computing the energy consumption of all FDs and finding the minimal FD energy consumption. To distribute modules between FDs, they present an energy-efficient checkpointing and load balancing technique based on Bayesian classification and call it ECLB. The performance of the proposed method is evaluated by comparing it with state-of-the-art algorithms in terms of delay, energy consumption, execution cost, network usage, and total executed modules. Analysis and simulation results indicate that the authors' method is efficient and superior to the others.

Journal ArticleDOI
TL;DR: A comparative analysis is performed to study the bit error rate and error vector magnitude achieved with the least-squares, the minimum mean squared error, and the Kalman filter channel estimators when these are applied to the maximum-ratio combining (MRC) and the regularised zero-forcing (RZF) receivers.
Abstract: In this work, a comparative analysis is performed to study the bit error rate and error vector magnitude achieved with the least-squares (LS), minimum mean-squared error (MMSE), and Kalman filter (KF) channel estimators when these are applied to the maximum-ratio combining (MRC) and regularised zero-forcing (RZF) receivers. The mean-squared error achieved with the different channel estimators is also compared by varying the noise and interference power at the receiver. The proposed methodology relies on the characterisation of a massive multiple-input multiple-output (MIMO) channel with a quasi-deterministic radio channel generator and a cyclic-prefix orthogonal frequency division multiplexing link-level radio simulation. The fifth-generation (5G) new radio (NR) frame structure is used to perform channel estimation and equalisation for operation frequencies below 6 GHz. Numerical results show that the MRC receiver achieves its maximum performance with the KF estimator, especially in low signal-to-noise ratio scenarios, while the RZF receiver achieves its maximum performance with LS estimation even in high-interference scenarios.
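Two of the building blocks named in this abstract have simple per-subcarrier forms: the LS pilot estimate (received pilot divided by the known pilot symbol) and the MRC and RZF combiners. The Python sketch below puts them together for one subcarrier of a random uplink MIMO channel; the antenna counts, noise level and regularisation are assumptions, and the MMSE and Kalman estimators, the quasi-deterministic channel generator and the 5G NR frame are not modelled.

```python
# Per-subcarrier sketch: LS pilot estimate h_ls = y_p / x_p, then MRC and RZF combining.
import numpy as np

rng = np.random.default_rng(5)
n_rx, n_users, noise_var = 32, 4, 0.1
H = (rng.standard_normal((n_rx, n_users)) + 1j * rng.standard_normal((n_rx, n_users))) / np.sqrt(2)

# Least-squares channel estimate from one orthogonal unit pilot per user (x_p = 1)
noise_p = np.sqrt(noise_var / 2) * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape))
H_ls = H + noise_p                                   # y_p / x_p with unit pilots

x = (rng.choice([-1, 1], n_users) + 1j * rng.choice([-1, 1], n_users)) / np.sqrt(2)  # QPSK
n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_rx) + 1j * rng.standard_normal(n_rx))
y = H @ x + n

# MRC: matched filter; RZF: regularised channel inversion based on the LS estimate
x_mrc = H_ls.conj().T @ y / np.sum(np.abs(H_ls) ** 2, axis=0)
A = H_ls.conj().T @ H_ls + noise_var * n_users * np.eye(n_users)
x_rzf = np.linalg.solve(A, H_ls.conj().T @ y)

for name, xh in (("MRC", x_mrc), ("RZF", x_rzf)):
    print(name, "mean symbol error magnitude:", np.mean(np.abs(xh - x)))
```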

Journal ArticleDOI
TL;DR: A model is presented to analyse nodes in PCNs from the physical topology, traffic distribution, and service importance distribution, calculating node importance in the physical topology layer, the transport layer, and the service layer; combined with a proposed multi-layer critical node identification algorithm (MCNIA), the node criticality degree is obtained.
Abstract: As the support networks of the electric power grid, power communication networks (PCNs) become more complex and vulnerable as the scale of the electric power grid increases. Identifying and protecting the critical nodes in PCNs in advance is an effective way to reduce network vulnerability. Owing to the large differences in the vulnerability indicators of the different layers of a PCN, it is difficult to find the critical nodes that have large impacts on all vulnerability indicators. Therefore, the goal of this study is to identify the critical nodes that have greater impacts across different layers, rather than nodes that have the greatest impact on a single layer. The authors present a model to analyse nodes in PCNs from the physical topology, traffic distribution, and service importance distribution, calculating node importance in the physical topology layer, the transport layer, and the service layer, respectively. Combined with a proposed multi-layer critical node identification algorithm (MCNIA), the node criticality degree is obtained so that the critical nodes in PCNs can be identified. Vulnerability analyses of PCNs under critical-node attacks prove that MCNIA can identify critical nodes in PCNs precisely.
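A generic way to combine per-layer node scores into one ranking, in the spirit of the multi-layer view described above, is to normalise a topology score (e.g. betweenness centrality), a traffic score and a service-importance score and take a weighted sum. The Python sketch below does exactly that with networkx; the graph, traffic and service values and the layer weights are made up, and the combination rule is an assumption rather than the paper's MCNIA definition.

```python
# Hedged sketch: combine per-layer node scores (topology betweenness, carried traffic,
# service importance) into one criticality ranking. Weights and data are illustrative.
import networkx as nx

G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("B", "D"), ("D", "E")])
traffic = {"A": 10, "B": 80, "C": 30, "D": 60, "E": 20}          # made-up flow per node
service = {"A": 0.2, "B": 0.9, "C": 0.4, "D": 0.7, "E": 0.1}     # made-up importance

def normalise(d):
    m = max(d.values()) or 1.0
    return {k: v / m for k, v in d.items()}

topo = normalise(nx.betweenness_centrality(G))
traf = normalise(traffic)
serv = normalise(service)

w_topo, w_traf, w_serv = 0.4, 0.3, 0.3                           # assumed layer weights
critical = {n: w_topo * topo[n] + w_traf * traf[n] + w_serv * serv[n] for n in G}
print(sorted(critical.items(), key=lambda kv: -kv[1]))            # most critical first
```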

Journal ArticleDOI
TL;DR: In this article, an update code using vehicles joint unmanned aerial vehicle (UC-VU) scheme is proposed to disseminate code for smart sensing devices (SSDs) in the smart city.
Abstract: In this study, an update code using vehicles joint unmanned aerial vehicle (UC-VU) scheme is proposed to disseminate code to smart sensing devices (SSDs) in the smart city. The innovations of the UC-VU scheme are as follows. (i) The unmanned aerial vehicle (UAV) is adopted to disseminate code at the beginning of code dissemination to the SSDs that find it difficult to get code through mobile vehicles; thus, the average delay required for SSDs to get the code can be reduced. (ii) k smart storage devices are selected as interim code stations, and a strategy for selecting the interim code stations' locations and number is proposed to disseminate code efficiently. This allows mobile vehicles to get code from k + 1 code stations, changing the situation in which code dissemination is slow because only one code station disseminates code. (iii) This study also gives an optimisation strategy for the UAV flight trajectory to optimise the flight distance. In terms of key performance indicators, the code dissemination rate increases by 34.6% compared to previous strategies, while the average dissemination time, the time for all SSDs to get the code, and the length of the UAV flight trajectory decrease by 26.27%, 25.30%, and 71.67%, respectively.

Journal ArticleDOI
TL;DR: This study reviews Long-Range (LoRa) technology and advances in the literature on the LoRaWAN protocol to date, and directs attention towards applying the Ultra-Dense Network concept to LPWAN.
Abstract: The Internet of Things (IoT) is one of the most cited terms within the communication research communities. Next-generation wireless network technologies are expected to support massive connections of tens of billions of devices. Such a huge number of devices raises concerns regarding how many resources are accessible and which technologies are best for managing those resources, all in order to avoid shutdowns and collapses. In terms of wireless networks, and given that energy is the backbone of IoT devices, Low Power Wide Area Network (LPWAN) technologies are considered a potential solution for IoT applications. In particular, this study reviews Long-Range (LoRa) technology and advances in the literature on the LoRaWAN protocol to date. Furthermore, it discusses the challenges in LoRaWAN and directs attention towards applying the Ultra-Dense Network concept to LPWAN.

Journal ArticleDOI
TL;DR: Non-orthogonal multiple access (NOMA) is considered for multi-user wireless communications over a Rayleigh fading channel, and the performance analysis is verified via Monte-Carlo simulations, demonstrating a close match with the theoretical analysis.
Abstract: In this study, non-orthogonal multiple access (NOMA) is considered for multi-user wireless communications over a Rayleigh fading channel. The base station (BS) utilises the NOMA technique to secure connectivity, user fairness, and high spectral efficiency for multiple users with different channel conditions. Moreover, a power allocation mechanism is applied at the BS by giving each user its required power allocation factor (PAF) in order to share the available power. This technique allows the users of interest to communicate with the BS over the same frequency band simultaneously in the power domain. Moreover, successive interference cancellation is applied for users with lower PAFs to remove the strong signals of the other users. Furthermore, exact expressions are derived for different performance metrics: the probability density function of the signal-to-interference-plus-noise ratio is derived at each terminal and then exploited to obtain the outage probability and the probability of error. The performance analysis is verified via Monte-Carlo simulations, demonstrating a close match with the theoretical analysis.
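The superposition coding and successive interference cancellation described in this abstract can be viewed at symbol level for two users: the BS transmits a power-weighted sum of both users' symbols, and the user with the lower PAF first detects and subtracts the other user's stronger signal before detecting its own. The Python sketch below uses BPSK, assumed PAFs and SNR over Rayleigh fading; the multi-user case and the derived closed-form expressions are not reproduced.

```python
# Symbol-level view of superposition coding + SIC for two users and BPSK over Rayleigh
# fading. Power allocation factors and the SNR are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(6)
n_sym, snr = 100_000, 10.0 ** (20 / 10)
a_w, a_s = 0.8, 0.2                         # PAFs: weak (far) user gets more power
b_w = rng.choice([-1, 1], n_sym)            # BPSK bits of the weak user
b_s = rng.choice([-1, 1], n_sym)            # BPSK bits of the strong user
x = np.sqrt(a_w) * b_w + np.sqrt(a_s) * b_s # superposed downlink signal (unit power)

h = (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)) / np.sqrt(2)
n = (rng.standard_normal(n_sym) + 1j * rng.standard_normal(n_sym)) / np.sqrt(2 * snr)
y = h * x + n
r = np.real(y * np.conj(h) / np.abs(h) ** 2)     # coherent equalisation

b_w_hat_at_s = np.sign(r)                        # strong user first detects the weak user's symbol
r_sic = r - np.sqrt(a_w) * b_w_hat_at_s          # ... subtracts it (SIC) ...
b_s_hat = np.sign(r_sic)                         # ... then detects its own symbol
print("strong-user BER after SIC:", np.mean(b_s_hat != b_s))
print("weak-user BER (direct detection):", np.mean(np.sign(r) != b_w))
```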

Journal ArticleDOI
TL;DR: The results demonstrate that, among multilayer perceptron (MLP), support vector machine, and Naive Bayes classifiers, the MLP technique presents the best trade-off between training time and channel detection performance.
Abstract: In this study, the authors consider the application of machine learning (ML) models in cooperative spectrum sensing for cognitive radio networks (CRNs). Based on a statistical analysis of the classic energy detection scheme, the probabilities of detection and false alarm are derived, which depend solely on the number of samples and the signal-to-noise ratio of the secondary users. The channel occupancy detection obtained from established analytical techniques, such as maximum ratio combining and AND/OR rules, is compared to different ML techniques, including multilayer perceptron (MLP), support vector machine, and Naive Bayes, based on receiver operating characteristic and area-under-the-curve metrics. Using standard profiling tools, they obtain the computational performance of the analysed models during the training phase, a critical step for operating in CRNs. Ultimately, the results demonstrate that the MLP technique presents the best trade-off between training time and channel detection performance.
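The classic energy-detection statistic underlying this analysis is T = Σ|y[n]|², compared against a threshold set for a false-alarm target. The Python sketch below estimates the false-alarm and detection probabilities by Monte Carlo for one secondary user; the sample count, SNR and threshold are illustrative, and the cooperative ML fusion stage (MLP/SVM/Naive Bayes) is not reproduced.

```python
# Energy detection sketch: T = sum(y^2) vs. a threshold chosen for a 10% false-alarm rate.
import numpy as np

rng = np.random.default_rng(7)
n_samples, snr_lin, trials = 200, 10.0 ** (-5 / 10), 100_000

noise = rng.standard_normal((trials, n_samples))
signal = np.sqrt(snr_lin) * rng.standard_normal((trials, n_samples))

T0 = np.sum(noise ** 2, axis=1)                 # statistic under H0 (noise only)
T1 = np.sum((signal + noise) ** 2, axis=1)      # statistic under H1 (primary user present)

threshold = np.quantile(T0, 0.9)                # threshold from the empirical H0 distribution
print("P_fa:", np.mean(T0 > threshold))
print("P_d :", np.mean(T1 > threshold))
```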

Journal ArticleDOI
TL;DR: In this article, the authors have combined a nature-inspired optimisation, such as a moth search algorithm (MSA) with ECC, to select the correct and optimal value of the elliptic curve.
Abstract: In this study, elliptic curve cryptography (ECC) is selected for tenant authentication, data encryption, and data decryption due to its minimal key size. The proposed ECC-based authentication approach allows only authorised persons to access private data and protects effectively against various related attacks. To develop more secure data encryption, the authors combine a nature-inspired optimisation, the moth search algorithm (MSA), with ECC to select the correct and optimal value of the elliptic curve. The proposed encryption and decryption approach combines DNA encoding with the ECC encryption algorithm. The mechanism of DNA-encoded ECC provides multi-level security with less computational power. A security analysis of the proposed method is provided to prove its effectiveness against certain attacks, such as denial-of-service, impersonation, replay, plaintext, and chosen-ciphertext attacks. The experimental results are evaluated based on the encryption time, decryption time, throughput, and key size of the security model. The average execution times of the proposed encryption and decryption are only 83.153 and 86.076 s, respectively. From the evaluation, it is clearly determined that the proposed technique provides two-layer security with a minimal key size and less storage space.

Journal ArticleDOI
TL;DR: In this paper, the outage behaviour of the network is investigated in a unified manner for the proposed suboptimal antenna selection (AS) schemes by deriving a closed-form expression for the exact outage probability (OP).
Abstract: This paper develops new suboptimal antenna selection (AS) schemes, majority-based transmit antenna selection/maximal ratio combining (TAS-maj/MRC) and joint transmit and receive antenna selection (JTRAS-maj), in a multiple-input multiple-output non-orthogonal multiple access (MIMO-NOMA) network. The impact of channel estimation errors (CEEs) and feedback delay (FD) on the performance of the network is studied in Nakagami-m fading channels. First, the outage behaviour of the network is investigated in a unified manner for the proposed AS schemes by deriving the closed-form expression of the exact outage probability (OP). Next, in the presence of the CEEs and FD, the corresponding upper bound on the OP is obtained. The OP expression in the high signal-to-noise ratio region is then provided to illustrate an error-floor value in the presence of the CEEs and FD, as well as the diversity and array gains in their absence. Finally, the analytical results in the presence and absence of the CEEs and FD are verified by Monte Carlo simulations. The numerical results show that the proposed majority-based AS schemes are superior to both max-max-max and max-min-max based AS schemes, and that the system performance is more sensitive to the CEEs than to FD.