
Showing papers in "Wireless Communications and Mobile Computing in 2018"


Journal ArticleDOI
TL;DR: An Enhanced Power Efficient Gathering in Sensor Information Systems (EPEGASIS) algorithm is proposed to alleviate the hot spot problem from four aspects: an optimal communication distance is determined, a threshold value is set to protect dying nodes, mobile sink technology is used to balance energy consumption among nodes, and nodes adjust their communication range according to their distance to the sink; extensive experiments confirm the improvements.
Abstract: Energy efficiency has been a hot research topic for many years, and many routing algorithms have been proposed to improve energy efficiency and prolong lifetime for wireless sensor networks (WSNs). Since nodes close to the sink usually need to consume more energy to forward their neighbours' data to the sink, they exhaust their energy more quickly. These nodes are called hot spot nodes, and this phenomenon is called the hot spot problem. In this paper, an Enhanced Power Efficient Gathering in Sensor Information Systems (EPEGASIS) algorithm is proposed to alleviate the hot spot problem from four aspects. Firstly, the optimal communication distance is determined to reduce the energy consumption during transmission. Then a threshold value is set to protect dying nodes, and mobile sink technology is used to balance the energy consumption among nodes. Next, each node can adjust its communication range according to its distance to the sink node. Finally, extensive experiments show that our proposed EPEGASIS performs better in terms of lifetime, energy consumption, and network latency.

250 citations
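
The abstract does not give the optimal-distance formula, but under the widely used first-order radio model the energy-minimizing hop length has a closed form. Below is a minimal Python sketch assuming that model; the constants E_ELEC and EPS_FS are illustrative textbook values, not taken from the paper.

```python
import math

# First-order radio model (illustrative values, not from the paper):
E_ELEC = 50e-9    # J/bit, TX/RX electronics energy
EPS_FS = 10e-12   # J/bit/m^2, free-space amplifier coefficient

def hop_energy_per_bit(d):
    """Energy to move one bit over one hop of length d (TX + RX)."""
    return (E_ELEC + EPS_FS * d ** 2) + E_ELEC

def optimal_hop_distance():
    """Minimizing (D/d) * hop_energy_per_bit(d) over the hop length d
    gives the classic result d_opt = sqrt(2 * E_ELEC / EPS_FS)."""
    return math.sqrt(2 * E_ELEC / EPS_FS)

print(f"d_opt = {optimal_hop_distance():.1f} m")  # ~100 m with these values
```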


Journal ArticleDOI
TL;DR: In this paper, a unified model for NOMA, including uplink and downlink transmissions, along with the extensions to multiple input multiple output (MIMO) and cooperative communication scenarios is presented.
Abstract: Today’s wireless networks allocate radio resources to users based on the orthogonal multiple access (OMA) principle. However, as the number of users increases, OMA-based approaches may not meet the stringent emerging requirements, including very high spectral efficiency, very low latency, and massive device connectivity. The nonorthogonal multiple access (NOMA) principle has emerged as a solution that improves spectral efficiency while allowing some degree of multiple access interference at receivers. In this tutorial-style paper, we provide a unified model for NOMA, including uplink and downlink transmissions, along with extensions to multiple input multiple output (MIMO) and cooperative communication scenarios. Through numerical examples, we compare the performance of OMA and NOMA networks. Implementation aspects and open issues are also detailed.

195 citations
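
As a companion to the tutorial's numerical comparisons, here is a minimal sketch of the classic two-user downlink computation: power-domain NOMA with successive interference cancellation (SIC) at the near user versus an equal-bandwidth-split OMA baseline. The channel gains and power-allocation factor are illustrative assumptions, not values from the paper.

```python
import math

def noma_vs_oma(P=1.0, N0=1.0, g_near=4.0, g_far=0.5, a_far=0.8):
    """Two-user downlink rates (bits/s/Hz): power-domain NOMA with SIC
    at the near user versus orthogonal (half-bandwidth) OMA."""
    a_near = 1.0 - a_far
    # Far user decodes its own signal, treating the near user's as noise.
    r_far_noma = math.log2(1 + a_far * P * g_far / (a_near * P * g_far + N0))
    # Near user removes the far user's signal via SIC, then decodes its own.
    r_near_noma = math.log2(1 + a_near * P * g_near / N0)
    # OMA baseline: each user gets half the bandwidth and full power.
    r_far_oma = 0.5 * math.log2(1 + P * g_far / N0)
    r_near_oma = 0.5 * math.log2(1 + P * g_near / N0)
    return (r_near_noma, r_far_noma), (r_near_oma, r_far_oma)

noma, oma = noma_vs_oma()
print("NOMA (near, far):", noma)
print("OMA  (near, far):", oma)
```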


Journal ArticleDOI
TL;DR: A comprehensive analysis of the Smart City concept and existing platforms is performed, clarifying the services a Smart City must provide, the technology it should employ to deliver those services, and the scope that the concept covers.
Abstract: Technology is starting to play a key role in cities’ urban sustainability plans, because new technologies can provide robust solutions that benefit citizens. Cities aim to incorporate smart systems in their industrial, infrastructural, educational, and social activities. A Smart City is managed with intelligent technologies that improve the quality of the services offered to citizens and make all processes more efficient. However, the Smart City concept is fairly recent, and the ideas it encompasses have not yet been consolidated due to the large number of fields and technologies that fit under it. All of this has led to confusion about the definition of a Smart City, which is evident in the literature. This article explores the literature on Smart Cities and performs a comprehensive analysis of the concept and existing platforms. We gain a clear understanding of the services that a Smart City must provide, the technology it should employ for the development of these services, and the scope that this concept covers. Moreover, the shortcomings and needs of Smart Cities are identified, and a model for designing a Smart City architecture is proposed. In addition, three case studies are presented: the first is a simulator for studying the implementation of various services and technologies, the second manages incidents that occur in a Smart City, and the third monitors the deployment of large-scale sensors in a Smart City.

173 citations


Journal ArticleDOI
TL;DR: The traditional LSH is improved and a novel LSH-based service recommendation approach is put forward to protect users’ privacy over multiple quality dimensions during the distributed mobile service recommendation process.
Abstract: With the ever-increasing popularity of mobile computing technology, a wide range of computational resources or services (e.g., movies, food, and places of interest) are migrating to the mobile infrastructure or devices (e.g., mobile phones, PDAs, and smart watches), imposing heavy burdens on the service selection decisions of users. In this situation, service recommendation has become one of the promising ways to alleviate such burdens. In general, the service usage data used to make service recommendations are produced by various mobile devices and collected by distributed edge platforms, which leads to potential leakage of user privacy during the subsequent cross-platform data collaboration and service recommendation process. The Locality-Sensitive Hashing (LSH) technique has recently been introduced to realize privacy-preserving distributed service recommendation. However, existing LSH-based recommendation approaches often consider only one quality dimension of services, without considering the multidimensional recommendation scenarios that are more complex but more common. In view of this drawback, we improve the traditional LSH and put forward a novel LSH-based service recommendation approach to protect users’ privacy over multiple quality dimensions during the distributed mobile service recommendation process.

160 citations
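
To make the core idea concrete, here is a minimal sketch of how random-projection LSH can bucket users by multidimensional QoS records so that only hashed signatures, not raw data, cross platform boundaries. The QoS dimensions, user values, and plane count are hypothetical; this is not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)

def lsh_signature(qos_vector, planes):
    """Sign pattern of random projections: users with similar
    multidimensional QoS records tend to share a signature."""
    return tuple((planes @ qos_vector) >= 0)

# Hypothetical QoS records over 3 dimensions (e.g., response time,
# throughput, reliability), already normalized to [0, 1].
users = {"u1": np.array([0.90, 0.80, 0.70]),
         "u2": np.array([0.88, 0.82, 0.69]),
         "u3": np.array([0.10, 0.20, 0.90])}

planes = rng.standard_normal((6, 3))   # 6 random hyperplanes
buckets = {}
for uid, q in users.items():
    buckets.setdefault(lsh_signature(q, planes), []).append(uid)

# Only the signatures cross the platform boundary; colliding users
# become candidate neighbours for collaborative filtering.
print(buckets)
```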


Journal ArticleDOI
TL;DR: A dynamic resource allocation method, named DRAM, for load balancing in fog environment is proposed in this paper and a system framework for fog computing and the load-balance analysis for various types of computing nodes are presented.
Abstract: Fog computing is emerging as a powerful and popular computing paradigm for running IoT (Internet of Things) applications. It extends the cloud computing paradigm, making it possible to execute IoT applications in the network edge. IoT applications can choose fog or cloud computing nodes to meet their resource requirements, and load balancing is one of the key factors in achieving resource efficiency and avoiding bottlenecks, overload, and low load. However, it is still a challenge to balance the load of the computing nodes in the fog environment during the execution of IoT applications. In view of this challenge, a dynamic resource allocation method, named DRAM, for load balancing in the fog environment is proposed in this paper. Technically, a system framework for fog computing and a load-balance analysis for various types of computing nodes are presented first. Then, a corresponding resource allocation method in the fog environment is designed through static resource allocation and dynamic service migration to achieve load balance for fog computing systems. Experimental evaluation and comparison analysis are conducted to validate the efficiency and effectiveness of DRAM.

154 citations
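
The abstract describes static allocation followed by dynamic service migration; a greedy migration loop is one simple way to realize the second step. The sketch below is an assumption-level illustration (node names and service loads are made up), not DRAM itself.

```python
def rebalance(node_services):
    """Greedy service migration: repeatedly move the smallest service
    from the most loaded node to the least loaded one while doing so
    narrows the load gap. node_services: {node: [service_load, ...]}."""
    def load(n):
        return sum(node_services[n])
    while True:
        hi = max(node_services, key=load)
        lo = min(node_services, key=load)
        gap = load(hi) - load(lo)
        candidate = min(node_services[hi], default=0.0)
        # Migrating helps only if it strictly shrinks the gap.
        if gap <= 0 or candidate == 0.0 or candidate >= gap:
            return node_services
        node_services[hi].remove(candidate)
        node_services[lo].append(candidate)

nodes = {"fog1": [0.5, 0.3, 0.2], "fog2": [0.1], "cloud": [0.4]}
print(rebalance(nodes))   # loads converge towards 0.5 per node
```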


Journal ArticleDOI
TL;DR: The experimental results show that the GA-PSO algorithm decreases the total execution time of the workflow tasks in comparison with the GA, PSO, HSGA, WSGA, and MTCT algorithms, and reduces the execution cost.
Abstract: The cloud computing environment provides several on-demand services and resource sharing for clients. Business processes are managed using workflow technology over the cloud; using the resources efficiently is challenging because of the dependencies between tasks. In this paper, a hybrid GA-PSO algorithm is proposed to allocate tasks to resources efficiently. The hybrid GA-PSO algorithm aims to reduce the makespan and the cost and to balance the load of the dependent tasks over the heterogeneous resources in cloud computing environments. The experimental results show that the GA-PSO algorithm decreases the total execution time of the workflow tasks in comparison with the GA, PSO, HSGA, WSGA, and MTCT algorithms. Furthermore, it reduces the execution cost. In addition, it improves the load balancing of the workflow application over the available resources. Finally, the obtained results also show that the proposed algorithm converges to optimal solutions faster and with higher quality than the other algorithms.

154 citations
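
One common way to hybridize GA and PSO for task-to-VM assignment is to apply GA crossover and mutation and then nudge each solution toward the swarm's global best. The sketch below optimizes makespan only (the paper also weighs cost and load balance), and the execution-time matrix and rates are invented for illustration.

```python
import random

random.seed(1)
EXEC = [[3, 5, 4], [2, 6, 3], [4, 2, 5], [6, 4, 3], [5, 3, 2]]  # task x VM

def makespan(assign):
    loads = [0.0] * len(EXEC[0])
    for task, vm in enumerate(assign):
        loads[vm] += EXEC[task][vm]
    return max(loads)

def hybrid_ga_pso(pop_size=20, iters=100, pc=0.8, pm=0.1, w_best=0.3):
    n_tasks, n_vms = len(EXEC), len(EXEC[0])
    pop = [[random.randrange(n_vms) for _ in range(n_tasks)]
           for _ in range(pop_size)]
    gbest = min(pop, key=makespan)
    for _ in range(iters):
        nxt = []
        for ind in pop:
            # GA step: one-point crossover with a random mate, then mutate.
            mate = random.choice(pop)
            cut = random.randrange(1, n_tasks)
            child = ind[:cut] + mate[cut:] if random.random() < pc else ind[:]
            if random.random() < pm:
                child[random.randrange(n_tasks)] = random.randrange(n_vms)
            # PSO-flavoured step: pull each gene toward the global best.
            child = [b if random.random() < w_best else g
                     for g, b in zip(child, gbest)]
            nxt.append(child)
        pop = nxt
        gbest = min(pop + [gbest], key=makespan)
    return gbest, makespan(gbest)

print(hybrid_ga_pso())   # (assignment vector, makespan)
```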



Journal ArticleDOI
TL;DR: In this article, the authors explore existing networking communication technologies for the Internet of Things (IoT), with emphasis on encapsulation and routing protocols, and the relation between the IoT network protocols and the emerging IoT applications is also examined.
Abstract: The Internet of Things (IoT) constitutes the next step in the field of technology, bringing enormous changes in industry, medicine, environmental care, and urban development. Various challenges must be met in realizing this vision, such as technology interoperability issues, security and data confidentiality requirements, and, last but not least, the development of energy-efficient management systems. In this paper, we explore existing networking communication technologies for the IoT, with emphasis on encapsulation and routing protocols. The relation between the IoT network protocols and the emerging IoT applications is also examined. A thorough layer-based protocol taxonomy is provided, and we illustrate how the network protocols fit together and operate to address the recent IoT requirements and applications. The most distinctive feature of this paper, compared to other survey and tutorial works, is its thorough presentation of the inner schemes and mechanisms of the network protocols relating to IPv6. Compatibility, interoperability, and configuration issues of the existing and the emerging protocols and schemes are discussed based on recent advances of IPv6. Moreover, open networking challenges such as security, scalability, mobility, and energy management are presented in relation to their corresponding features. Lastly, the trends of the networking mechanisms in the IoT domain are discussed in detail, highlighting future challenges.

127 citations


Journal ArticleDOI
TL;DR: Most significant fog applications (e.g., health care monitoring, smart cities, connected vehicles, and smart grid) will be discussed here to create a well-organized green computing paradigm to support the next generation of IoT applications.
Abstract: A huge amount of data generated by the Internet of Things (IoT) is growing exponentially as devices operate around the clock. IoT devices are generating an avalanche of information that disrupts conventional data processing and analytics, which the cloud handled well before the explosive growth of the IoT. The fog computing structure confronts those disruptions, powerfully complementing the cloud framework, by deploying micro clouds (fog nodes) in close proximity to data sources. Big IoT data analytics on the fog computing structure is still in an emerging phase and requires extensive research to produce more proficient knowledge and smart decisions. This survey summarizes the fog challenges and opportunities in the context of big IoT data analytics on fog networking. In addition, it emphasizes the key characteristics, identified in a number of proposed research works, that make fog computing a suitable platform for new proliferating IoT devices, services, and applications. The most significant fog applications (e.g., health care monitoring, smart cities, connected vehicles, and smart grid) are discussed here to create a well-organized green computing paradigm to support the next generation of IoT applications.

123 citations


Journal ArticleDOI
TL;DR: The current review presents the state of the art in the energy management schemes, the remaining challenges, and the open issues for future research work in wireless sensor networks.
Abstract: There has been an increase in research interest in wireless sensor networks (WSNs) as a result of the potential for their widespread use in many different areas, like home automation, security, environmental monitoring, and many more. Despite the successes gained, the widespread adoption of WSNs, particularly in remote and inaccessible places where their use is most beneficial, is hampered by the major challenge of limited energy, the nodes being in most instances battery powered. To prolong the lifetime of these energy-hungry sensor nodes, energy management schemes have been proposed in the literature to keep the sensor nodes alive, making the network more operational and efficient. Currently, emphasis has been placed on energy harvesting, energy transfer, and energy conservation methods as the primary means of maintaining the network lifetime. These energy management techniques are designed to balance the energy in the overall network. The current review presents the state of the art in energy management schemes, the remaining challenges, and the open issues for future research work.

120 citations


Journal ArticleDOI
TL;DR: Time Difference of Arrival (TDoA) localization accuracy, update probability, and update frequency were evaluated for different trajectories (walking, cycling, and driving) and LoRa spreading factors.
Abstract: The performance of LoRa geolocation for outdoor tracking purposes has been investigated on a public LoRaWAN network. Time Difference of Arrival (TDoA) localization accuracy, update probability, and update frequency were evaluated for different trajectories (walking, cycling, and driving) and LoRa spreading factors. A median accuracy of 200 m was obtained for the raw TDoA output data. In 90% of the cases, the error was less than 480 m. Taking into account the road map and movement speed significantly improves accuracy to a median of 75 m and a 90th percentile error of less than 180 m.
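
TDoA localization of the kind evaluated here solves for the position whose range differences to the gateways best match the measured time differences. Below is a minimal least-squares sketch with made-up gateway positions and roughly 100 ns timing jitter, the regime that yields position errors of tens to hundreds of metres.

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Hypothetical gateway positions (m) and true node position.
gateways = np.array([[0., 0.], [2000., 0.], [0., 2000.], [2000., 2000.]])
true_pos = np.array([700., 1200.])

# TDoA measurements relative to gateway 0, with timing noise.
rng = np.random.default_rng(3)
dists = np.linalg.norm(gateways - true_pos, axis=1)
tdoa = (dists[1:] - dists[0]) / C + rng.normal(0, 100e-9, 3)  # ~100 ns jitter

def residuals(p):
    d = np.linalg.norm(gateways - p, axis=1)
    return (d[1:] - d[0]) / C - tdoa

est = least_squares(residuals, x0=np.array([1000., 1000.])).x
print("estimate:", est, "| error:", np.linalg.norm(est - true_pos), "m")
```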

Journal ArticleDOI
TL;DR: The analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact, and that the impacts of mobility and of collaboration schemes requiring incentives are expected to differ in edge architectures compared to classic cloud solutions.
Abstract: Edge computing is promoted to meet the increasing performance needs of data-driven services, using computational and storage resources close to the end devices at the edge of the current network. To achieve higher performance in this new paradigm, one has to consider how to combine the efficiency of resource usage at all three layers of the architecture: end devices, edge devices, and the cloud. While cloud capacity is elastically extendable, end devices and edge devices are to various degrees resource-constrained. Hence, efficient resource management is essential to make edge computing a reality. In this work, we first present terminology and architectures to characterize current works within the field of edge computing. Then, we review a wide range of recent articles and categorize relevant aspects in terms of four perspectives: resource type, resource management objective, resource location, and resource use. This taxonomy and the ensuing analysis are used to identify some gaps in the existing research. Among several research gaps, we found that research is less prevalent on data, storage, and energy as resources and less extensive towards the estimation, discovery, and sharing objectives. As for resource types, the most well-studied resources are computation and communication resources. Our analysis shows that resource management at the edge requires a deeper understanding of how methods applied at different levels and geared towards different resource types interact. Specifically, the impacts of mobility and of collaboration schemes requiring incentives are expected to be different in edge architectures compared to the classic cloud solutions. Finally, we find that fewer works are dedicated to the study of nonfunctional properties or to quantifying the footprint of resource management techniques, including edge-specific means of migrating data and services.

Journal ArticleDOI
TL;DR: This work presents two example scenarios for timely dissemination of safety messages in future VANETs based on fog and a combination of fog and SDN, and explains the issues that need to be resolved for the deployment of three different cloud-based approaches.
Abstract: Vehicular ad hoc networks (VANETs) have been studied intensively due to their wide variety of applications and services, such as passenger safety, enhanced traffic efficiency, and infotainment. With the evolution of technology and the sudden growth in the number of smart vehicles, traditional VANETs face several technical challenges in deployment and management due to limited flexibility and scalability, poor connectivity, and inadequate intelligence. Cloud computing is considered a way to satisfy these requirements in VANETs. However, next-generation VANETs will have special requirements of autonomous vehicles with high mobility, low latency, real-time applications, and connectivity, which may not be resolved by conventional cloud computing. Hence, merging fog computing with the conventional cloud for VANETs is discussed as a potential solution for several issues in current and future VANETs. In addition, fog computing can be enhanced by integrating a Software-Defined Network (SDN), which provides flexibility, programmability, and global knowledge of the network. We present two example scenarios for timely dissemination of safety messages in future VANETs based on fog and on a combination of fog and SDN. We also explain the issues that need to be resolved for the deployment of three different cloud-based approaches.

Journal ArticleDOI
TL;DR: A detection model based on Deep Belief Networks (DBN) is presented, and it is shown that the model can achieve an approximately 90% true positive rate and a 0.6% false positive rate.
Abstract: Web service is one of the key communications software services for the Internet. Web phishing is one of many security threats to web services on the Internet. Web phishing aims to steal private information, such as usernames, passwords, and credit card details, by impersonating a legitimate entity, and it leads to information disclosure and property damage. This paper focuses on applying a deep learning framework to detect phishing websites. It first designs two types of features for web phishing: original features and interaction features. A detection model based on Deep Belief Networks (DBN) is then presented. A test using real IP flows from an ISP (Internet Service Provider) shows that the detection model based on DBN can achieve an approximately 90% true positive rate and a 0.6% false positive rate.
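
The paper's "original features" are per-URL attributes; the sketch below shows the flavour of such features with a handful of common, illustrative checks (the paper's exact feature set is not reproduced here).

```python
import re
from urllib.parse import urlparse

def original_features(url):
    """Illustrative per-URL features of the kind phishing detectors
    consume; values feed a downstream classifier such as a DBN."""
    host = urlparse(url).hostname or ""
    return {
        "url_length": len(url),
        "num_dots": url.count("."),
        "has_at_symbol": int("@" in url),
        "has_ip_host": int(bool(re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", host))),
        "num_hyphens_host": host.count("-"),
        "uses_https": int(url.startswith("https://")),
    }

print(original_features("http://192.168.10.5/paypal.com.login@evil.example/verify"))
```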

Journal ArticleDOI
TL;DR: An assessment of LoRaWAN's performance for typical IIoT deployments, such as indoor industrial monitoring applications, and a comparison with the IEEE 802.15.4 network protocol are proposed.
Abstract: Low-Power Wide-Area Networks (LPWANs) have recently emerged as appealing communication systems in the context of the Internet of Things (IoT). In particular, they have proved effective in typical IoT applications such as environmental monitoring and smart metering. Such networks, however, also have great potential in the industrial scenario and, hence, in the context of the Industrial Internet of Things (IIoT), which represents a dramatically growing field of application. In this paper we focus on a specific LPWAN, namely LoRaWAN, and provide an assessment of its performance for typical IIoT deployments such as indoor industrial monitoring applications. In detail, after a general description of LoRaWAN, we discuss how to set some of its parameters in order to achieve the best performance in the considered industrial scenario. Subsequently we present the outcomes of a performance assessment, based on realistic simulations, aimed at evaluating the behavior of LoRaWAN for industrial monitoring applications. Moreover, the paper proposes a comparison with the IEEE 802.15.4 network protocol, which is often adopted in similar application contexts. The obtained results confirm that LoRaWAN can be considered a strongly viable opportunity, since it is able to provide high reliability and timeliness while ensuring very low energy consumption.
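
Parameter choices like the spreading factor directly set the time-on-air, which drives both timeliness and energy use. Here is the standard Semtech SX127x airtime formula as a Python sketch (CRC enabled, explicit header; the 20-byte payload is an arbitrary example, not a value from the paper).

```python
import math

def lora_airtime(payload_bytes, sf, bw=125e3, cr=1, preamble=8,
                 explicit_header=True, low_dr_opt=None):
    """LoRa time-on-air (s) per the Semtech SX127x datasheet formula,
    with CRC enabled (the +16 term). cr=1 means coding rate 4/5."""
    t_sym = (2 ** sf) / bw
    if low_dr_opt is None:              # mandated once symbols get slow
        low_dr_opt = t_sym > 16e-3
    de, ih = int(low_dr_opt), int(not explicit_header)
    n_payload = 8 + max(
        math.ceil((8 * payload_bytes - 4 * sf + 28 + 16 - 20 * ih)
                  / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + n_payload) * t_sym

for sf in (7, 9, 12):
    print(f"SF{sf}: {lora_airtime(20, sf) * 1000:.1f} ms")
# SF7 ~56.6 ms vs SF12 ~1319 ms for the same 20-byte payload
```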

Journal ArticleDOI
TL;DR: A movie recommendation framework based on a hybrid recommendation model and sentiment analysis on Spark platform is proposed to improve the accuracy and timeliness of mobile movie recommender system.
Abstract: Movie recommendation in mobile environment is critically important for mobile users. It carries out comprehensive aggregation of user’s preferences, reviews, and emotions to help them find suitable movies conveniently. However, it requires both accuracy and timeliness. In this paper, a movie recommendation framework based on a hybrid recommendation model and sentiment analysis on Spark platform is proposed to improve the accuracy and timeliness of mobile movie recommender system. In the proposed approach, we first use a hybrid recommendation method to generate a preliminary recommendation list. Then sentiment analysis is employed to optimize the list. Finally, the hybrid recommender system with sentiment analysis is implemented on Spark platform. The hybrid recommendation model with sentiment analysis outperforms the traditional models in terms of various evaluation criteria. Our proposed method makes it convenient and fast for users to obtain useful movie suggestions.
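
The pipeline first scores movies with a hybrid recommender and then optimizes the list using sentiment. A minimal sketch of such a re-ranking step, with an assumed linear blend and made-up scores, is below; the paper's actual combination rule may differ.

```python
def rerank(recommendations, sentiment, alpha=0.7):
    """Blend a recommender score with a review-sentiment score (both
    assumed normalized to [0, 1]); alpha weights the recommender."""
    return sorted(recommendations,
                  key=lambda m: alpha * recommendations[m]
                              + (1 - alpha) * sentiment.get(m, 0.5),
                  reverse=True)

recs = {"movie_a": 0.92, "movie_b": 0.90, "movie_c": 0.75}
sent = {"movie_a": 0.30, "movie_b": 0.95}
print(rerank(recs, sent))   # movie_b overtakes movie_a on sentiment
```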

Journal ArticleDOI
TL;DR: This work presents a new lightweight IDS called sample selected extreme learning machine (SS-ELM), which performs well in intrusion detection in terms of accuracy, training time, and the receiver operating characteristic (ROC) value.
Abstract: Fog computing, as a new paradigm, has many characteristics that differ from cloud computing. Because their resources are limited, fog nodes/MEC hosts are vulnerable to cyberattacks. A lightweight intrusion detection system (IDS) is a key technique to address the problem. Because the extreme learning machine (ELM) has fast training speed and good generalization ability, we present a new lightweight IDS called the sample selected extreme learning machine (SS-ELM). The reason we propose the sample selected extreme learning machine is that fog nodes/MEC hosts cannot store extremely large training data sets. Accordingly, the data are stored, processed, and sampled by the cloud servers, and the selected samples are then given to the fog nodes/MEC hosts for training. This design reduces the training time and increases the detection accuracy. Experimental simulation verifies that SS-ELM performs well in intrusion detection in terms of accuracy, training time, and the receiver operating characteristic (ROC) value.
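
The speed of ELM training comes from fixing a random hidden layer and solving the output weights in closed form. Below is a minimal numpy sketch on synthetic data standing in for the cloud-sampled training set; the cloud-side sample-selection step itself is not reproduced.

```python
import numpy as np

def train_elm(X, y, n_hidden=100, seed=0):
    """Extreme learning machine: random hidden layer, output weights
    solved by least squares (hence the fast training)."""
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                      # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def predict_elm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy sanity check on synthetic data standing in for sampled flow records.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(float)       # nonlinear label
W, b, beta = train_elm(X, y)
acc = ((predict_elm(X, W, b, beta) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```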

Journal ArticleDOI
TL;DR: A novel blockchain-based contractual routing (BCR) protocol for a network of untrusted IoT devices that enables distributed routing in heterogeneous IoT networks and is fairly resistant to both Blackhole and Greyhole attacks.
Abstract: In this paper, we propose a novel blockchain-based contractual routing (BCR) protocol for a network of untrusted IoT devices. In contrast to conventional secure routing protocols in which a central authority (CA) is required to facilitate the identification and authentication of each device, the BCR protocol operates in a distributed manner with no CA. The BCR protocol utilizes smart contracts to discover a route to a destination or data gateway within heterogeneous IoT networks. Any intermediary device can guarantee a route from a source IoT device to a destination device or gateway. We compare the performance of BCR with that of the Ad-hoc On-Demand Distance Vector (AODV) routing protocol in a network of devices. The results show that the routing overhead of the BCR protocol is times lower compared to AODV at the cost of a slightly lower packet delivery ratio. BCR is fairly resistant to both Blackhole and Greyhole attacks. The results show that the BCR protocol enables distributed routing in heterogeneous IoT networks.

Journal ArticleDOI
TL;DR: The main features of Fog Computing are discussed, a comprehensive comparison is provided among previously developed distributed data storage systems, which constitute a promising solution for data storage allocation in Fog Computing, and various aspects of issues that may be encountered when designing and implementing social IoT systems are identified.
Abstract: In the emerging area of the Internet of Things (IoT), the exponential growth of the number of smart devices leads to a growing need for efficient data storage mechanisms. Cloud Computing has been an efficient solution so far to store and manipulate such huge amounts of data. However, in the coming years it is expected that Cloud Computing will be unable to handle the huge number of IoT devices efficiently due to bandwidth limitations. An arising technology which promises to overcome many drawbacks of large-scale IoT networks is Fog Computing. Fog Computing provides high-quality Cloud services in the physical proximity of mobile users. Computational power and storage capacity can be offered from the Fog, with low latency and high bandwidth. This survey discusses the main features of Fog Computing, introduces representative simulators and tools, highlights the benefits of Fog Computing in line with the applications of large-scale IoT networks, and identifies various aspects of issues we may encounter when designing and implementing social IoT systems in the context of the Fog Computing paradigm. The rationale behind this work lies in the data storage discussion, which is performed by taking into account the importance of storage capabilities in modern Fog Computing systems. In addition, we provide a comprehensive comparison among previously developed distributed data storage systems, which constitute a promising solution for data storage allocation in Fog Computing.

Journal ArticleDOI
TL;DR: This work proposes a privacy-preserving and user-controlled data sharing architecture with fine-grained access control, based on the blockchain model and attribute-based cryptosystem and the consensus algorithm is the Byzantine fault tolerance mechanism, rather than Proof of Work.
Abstract: The Internet of Things (IoT) and cloud computing are increasingly integrated, in the sense that data collected from IoT devices (generally with limited computational and storage resources) are sent to the cloud for processing in order to inform decision making and facilitate other operational and business activities. However, the cloud may not be a fully trusted entity and may leak user data or compromise user privacy. Thus, we propose a privacy-preserving and user-controlled data sharing architecture with fine-grained access control, based on the blockchain model and an attribute-based cryptosystem. Also, the consensus algorithm in our system is a Byzantine fault tolerance mechanism rather than Proof of Work.

Journal ArticleDOI
TL;DR: A comprehensive survey of MEC research from the perspective of service adoption and provision is presented, including the existing MU-oriented service adoption of MEC, i.e., offloading.
Abstract: Mobile cloud computing (MCC) integrates cloud computing (CC) into mobile networks, prolonging the battery life of mobile users (MUs). However, this mode may cause significant execution delay. To address the delay issue, a new mode known as mobile edge computing (MEC) has been proposed. MEC provides computing and storage services at the edge of the network, enabling MUs to execute applications efficiently and meet the delay requirements. In this paper, we present a comprehensive survey of MEC research from the perspective of service adoption and provision. We first give an overview of MEC, including its definition, architecture, and services. After that we review the existing MU-oriented service adoption of MEC, i.e., offloading. More specifically, the study of offloading is divided into two key taxonomies: computation offloading and data offloading. In addition, each of them is further divided into single-MU and multi-MU offloading schemes. Then we survey edge server- (ES-) oriented service provision, including technical indicators, ES placement, and resource allocation. In addition, other issues like applications of MEC and open issues are investigated. Finally, we conclude the paper.

Journal ArticleDOI
TL;DR: A sparse channel sample construction method is proposed, which saves system resources effectively without degrading performance, and an early stopping strategy to avoid overfitting of the BP neural network is introduced.
Abstract: This paper presents a multi-time channel prediction system based on a backpropagation (BP) neural network with multiple hidden layers, which can predict channel information effectively and benefit massive MIMO performance, power control, and artificial-noise physical-layer security design. Meanwhile, an early stopping strategy to avoid overfitting of the BP neural network is introduced. By comparing the predicted normalized mean square error (NMSE), the simulation results show that the performance of the proposed scheme is greatly improved. Moreover, a sparse channel sample construction method is proposed, which saves system resources effectively without degrading performance.
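
Early stopping of the kind described monitors validation NMSE and halts when it stops improving. Here is a minimal, framework-agnostic sketch; the `step`/`evaluate` callables and the synthetic validation curve are placeholders, not the paper's network.

```python
import numpy as np

def train_with_early_stopping(step, evaluate, max_epochs=500, patience=20):
    """Generic early-stopping loop: stop when validation NMSE has not
    improved for `patience` epochs; remember the best epoch seen.
    `step()` runs one training epoch; `evaluate()` returns val NMSE."""
    best_nmse, best_epoch, since = np.inf, 0, 0
    history = []
    for epoch in range(max_epochs):
        step()
        nmse = evaluate()
        history.append(nmse)
        if nmse < best_nmse:
            best_nmse, best_epoch, since = nmse, epoch, 0
            # a real implementation would snapshot the weights here
        else:
            since += 1
            if since >= patience:
                break
    return best_epoch, best_nmse, history

# Demo with a synthetic validation curve that starts overfitting at epoch 60.
curve = iter(1.0 / (e + 1) + max(0, e - 60) * 1e-3 for e in range(500))
best_epoch, best_nmse, _ = train_with_early_stopping(lambda: None,
                                                     lambda: next(curve))
print(f"stopped: best epoch {best_epoch}, val NMSE {best_nmse:.4f}")
```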

Journal ArticleDOI
Yan Zhang, Jinxiao Wen, Guanshu Yang, Zunwen He, Xinran Luo
TL;DR: It is shown that the machine-learning-based models are able to provide high prediction accuracy and acceptable computational efficiency in the AA scenario and Random Forest outperforms other models and has the smallest prediction errors.
Abstract: Recently, unmanned aerial vehicle (UAV) plays an important role in many applications because of its high flexibility and low cost. To realize reliable UAV communications, a fundamental work is to investigate the propagation characteristics of the channels. In this paper, we propose path loss models for the UAV air-to-air (AA) scenario based on machine learning. A ray-tracing software is employed to generate samples for multiple routes in a typical urban environment, and different altitudes of Tx and Rx UAVs are taken into consideration. Two machine-learning algorithms, Random Forest and KNN, are exploited to build prediction models on the basis of the training data. The prediction performance of trained models is assessed on the test set according to the metrics including the mean absolute error (MAE) and root mean square error (RMSE). Meanwhile, two empirical models are presented for comparison. It is shown that the machine-learning-based models are able to provide high prediction accuracy and acceptable computational efficiency in the AA scenario. Moreover, Random Forest outperforms other models and has the smallest prediction errors. Further investigation is made to evaluate the impacts of five different parameters on the path loss. It is demonstrated that the path visibility is crucial for the path loss.
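
To illustrate the workflow (train on channel samples, report MAE and RMSE), here is a scikit-learn sketch using a synthetic log-distance dataset in place of the paper's ray-tracing samples; the features and path-loss coefficients are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split

# Synthetic stand-in for ray-tracing samples: log-distance path loss
# with shadowing; features mimic the paper's inputs (distance, altitudes).
rng = np.random.default_rng(0)
n = 2000
dist = rng.uniform(10, 2000, n)          # Tx-Rx distance, m
alt_tx = rng.uniform(20, 120, n)         # UAV altitudes, m
alt_rx = rng.uniform(20, 120, n)
pl = (32.4 + 22 * np.log10(dist) + 0.02 * np.abs(alt_tx - alt_rx)
      + rng.normal(0, 3, n))             # path loss in dB, plus shadowing

X = np.column_stack([dist, alt_tx, alt_rx])
X_tr, X_te, y_tr, y_te = train_test_split(X, pl, random_state=0)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MAE  = {mean_absolute_error(y_te, pred):.2f} dB")
print(f"RMSE = {np.sqrt(mean_squared_error(y_te, pred)):.2f} dB")
```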

Journal ArticleDOI
TL;DR: AODV is enhanced by integrating a new lightweight technique that uses timers and baiting in order to detect and isolate single and cooperative black-hole attacks.
Abstract: Mobile Ad hoc Network (MANET) is a type of wireless network that provides numerous applications in different areas. Security of MANETs has become one of the hottest topics in the networking field. MANETs are vulnerable to different types of attacks that affect their functionality and connectivity. The black-hole attack is considered one of the most widespread active attacks that degrade the performance and reliability of the network, as the malicious node drops all incoming packets. A black-hole node aims to fool every node in the network that wants to communicate with another node by pretending that it always has the best path to the destination node. AODV is a reactive routing protocol that has no technique to detect and neutralize black-hole nodes in the network. In this research, we enhance AODV by integrating a new lightweight technique that uses timers and baiting in order to detect and isolate single and cooperative black-hole attacks. During dynamic topology changes, the suggested technique enables the MANET nodes to detect and isolate black-hole nodes in the network. The proposed technique is implemented using NS-2.35 simulation tools. The results of the suggested technique in terms of throughput, end-to-end delay, and packet delivery ratio are very close to those of native AODV without black holes.
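
The baiting idea can be shown in a few lines: request a route to an address that does not exist, and flag whoever replies. The toy classes below are hypothetical and omit the paper's timer bookkeeping and cooperative-attack handling.

```python
NONEXISTENT = "10.255.255.254"   # bait address not assigned in the MANET

class Node:
    def __init__(self, name, malicious=False):
        self.name, self.malicious = name, malicious

    def on_rreq(self, dest):
        # A black-hole node claims a fresh route to *any* destination;
        # honest nodes reply only for destinations they actually know.
        return self.malicious

def bait_round(nodes, blacklist):
    """One baiting round: any node answering an RREQ for the bait
    address is flagged and isolated from future routing."""
    for node in nodes:
        if node.on_rreq(NONEXISTENT):
            blacklist.add(node.name)

nodes = [Node("A"), Node("B", malicious=True), Node("C")]
blacklist = set()
bait_round(nodes, blacklist)
print("isolated:", blacklist)   # {'B'}
```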

Journal ArticleDOI
TL;DR: The theoretical security analysis and evaluation results show that the proposed blockchain-based secure service provisioning mechanism helps the lightweight clients get rid of untrusted edge service providers and insecure services effectively with acceptable latency and affordable costs.
Abstract: Emerging network computing technologies have significantly extended the abilities of resource-constrained IoT devices through network-based service sharing techniques. However, such a flexible and scalable service provisioning paradigm brings increased security risks to terminals due to untrustworthy exogenous service code loaded from the open network. Many existing security approaches are unsuitable for IoT environments due to the high difficulty of maintenance or dependencies upon extra resources like specific hardware. Fortunately, the rise of blockchain technology has facilitated the development of service sharing methods and, at the same time, appears to be a viable solution to numerous security problems. In this paper, we propose a novel blockchain-based secure service provisioning mechanism for protecting lightweight clients from insecure services in network computing scenarios. We introduce the blockchain to maintain all the validity states of the off-chain services and edge service providers, helping IoT terminals get rid of untrusted or discarded services through provider identification and service verification. In addition, we take advantage of smart contracts, which can be triggered by the lightweight clients, to check the validity of service providers and service codes according to the on-chain transactions, thereby reducing the direct overhead on the IoT devices. Moreover, the adoption of a consortium blockchain and the proof-of-authority consensus mechanism also helps to achieve high throughput. The theoretical security analysis and evaluation results show that our approach helps lightweight clients get rid of untrusted edge service providers and insecure services effectively with acceptable latency and affordable costs.

Journal ArticleDOI
TL;DR: This paper analyzes the state-of-the-art NOMA schemes by comparing the operations applied at the transmitter, surveys the typical grant-free NOMA schemes and detection techniques, and envisions the future research challenges deduced from the recently proposed NOMA technologies.
Abstract: Owing to its superior performance in spectral efficiency, connectivity, and flexibility, nonorthogonal multiple access (NOMA) is recognized as a promising access protocol and is now undergoing the standardization process for 5G. Specifically, dozens of NOMA schemes have been proposed and discussed as candidate multiple access technologies for future radio access networks. This paper provides a comprehensive overview of the promising NOMA schemes. First of all, we analyze the state-of-the-art NOMA schemes by comparing the operations applied at the transmitter. Typical multiuser detection algorithms corresponding to these NOMA schemes are then introduced. Next, we focus on grant-free NOMA, which incorporates NOMA techniques with uplink uncoordinated access and is expected to address the massive connectivity requirement of 5G. We present the motivation for applying grant-free NOMA, as well as the typical grant-free NOMA schemes and the detection techniques. In addition, this paper discusses the implementation issues of NOMA for practical deployment. Finally, we envision the future research challenges deduced from the recently proposed NOMA technologies.

Journal ArticleDOI
TL;DR: This work proposes a novel task scheduling model and a TSFC (Task Scheduling in Fog Computing) algorithm based on the I-Apriori algorithm, which performs better in reducing the total execution time of tasks and the average waiting time.
Abstract: Fog computing (FC) is an emerging paradigm that extends computation, communication, and storage facilities towards the edge of a network. In this heterogeneous and distributed environment, resource allocation is very important, so scheduling is a challenge for increasing productivity and allocating resources appropriately to tasks. We schedule tasks on fog computing devices based on a classification data mining technique. A key contribution is a novel classification mining algorithm, I-Apriori, based on the Apriori algorithm. Another contribution is a novel task scheduling model and a TSFC (Task Scheduling in Fog Computing) algorithm based on the I-Apriori algorithm. Association rules generated by the I-Apriori algorithm are combined with the minimum completion time of every task in the task set, and the task with the minimum completion time is selected to be executed at the fog node with the minimum completion time. We finally evaluate the performance of the I-Apriori and TSFC algorithms through experimental simulations. The experimental results show that the TSFC algorithm performs better in reducing the total execution time of tasks and the average waiting time.
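
The selection rule in the abstract (run each task where it would finish earliest) is easy to sketch on its own; the association-rule (I-Apriori) component that informs the time estimates is omitted. Task and node names below are hypothetical.

```python
def schedule_min_completion(tasks, nodes):
    """Assign each task to the fog node where it would finish earliest
    (node ready time + execution time), akin to TSFC's selection rule.
    tasks: {task: {node: exec_time}}; returns assignment, ready times."""
    ready = {n: 0.0 for n in nodes}
    assignment = {}
    for task, costs in tasks.items():
        node = min(nodes, key=lambda n: ready[n] + costs[n])
        assignment[task] = node
        ready[node] += costs[node]
    return assignment, ready

tasks = {"t1": {"f1": 4, "f2": 6},
         "t2": {"f1": 3, "f2": 2},
         "t3": {"f1": 5, "f2": 4}}
print(schedule_min_completion(tasks, ["f1", "f2"]))
# ({'t1': 'f1', 't2': 'f2', 't3': 'f2'}, {'f1': 4.0, 'f2': 6.0})
```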

Journal ArticleDOI
TL;DR: This paper determines the maximum number of LoRa nodes that can communicate with a Gateway considering the LoRaWAN protocol specifications and proposes a series of solutions for reducing the number of collisions and increasing the capacity of the communication channel.
Abstract: The LoRaWAN communication protocol can be used for the implementation of the IoT (Internet of Things) concept. Currently, most of the information regarding the scalability of the LoRa technology is commercial and deals with the best-case scenario. Thus, we need realistic models, enabling the proper assessment of the performance level. Most of the time, the IoT concept entails a large number of nodes distributed over a wide geographical area, therefore forming a high density, large-scale architecture. It is important to determine the number of collisions so that we can assess the network performance. The present paper aims at assessing the performance level of the LoRaWAN technology by analyzing the number of packet collisions that can occur. Thus, this paper determines the maximum number of LoRa nodes that can communicate with a Gateway considering the LoRaWAN protocol specifications. Furthermore, we have proposed a series of solutions for reducing the number of collisions and increasing the capacity of the communication channel.
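
LoRaWAN's class-A uplink behaves much like pure ALOHA per channel, so a first-order collision estimate follows from the offered load. The sketch below assumes that model with 6 packets per node per hour and the ~57 ms SF7 airtime; real deployments with multiple channels and spreading factors collide less, so this is a pessimistic single-channel bound.

```python
import math

def aloha_collision_stats(n_nodes, pkts_per_hour, airtime_s):
    """Pure-ALOHA approximation of one LoRa channel: a packet survives
    only if no other transmission starts within +/- one airtime."""
    rate = n_nodes * pkts_per_hour / 3600.0   # packets per second
    G = rate * airtime_s                      # offered load (Erlangs)
    return G, 1 - math.exp(-2 * G)            # load, collision probability

for n in (100, 1000, 5000):
    G, p_coll = aloha_collision_stats(n, pkts_per_hour=6, airtime_s=0.057)
    print(f"{n:5d} nodes: load G={G:.3f}, collision prob={p_coll:.1%}")
```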

Journal ArticleDOI
TL;DR: This research investigates the system performance of a two-way amplify-and-forward energy harvesting relay network in a Rician fading environment and shows that the analytical and simulation results agree well with each other for all system parameters.
Abstract: We investigate the system performance of a two-way amplify-and-forward (AF) energy harvesting relay network in a Rician fading environment. Specifically, the delay-limited (DL) and delay-tolerant (DT) transmission modes are proposed and investigated when both energy and information are transferred between the source node and the destination node via a relay node. First, analytical expressions for the achievable throughput, ergodic capacity, outage probability, and symbol error ratio (SER) are derived and analyzed. After that, closed-form expressions for the system performance are studied in connection with all system parameters. Moreover, the analytical results are verified by Monte Carlo simulation in comparison with the closed-form expressions. Finally, the results show that the analytical and simulation results agree well with each other for all system parameters.
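
Monte Carlo validation of the kind used here draws fading realizations and counts outage events. The sketch below does this for a simplified two-hop AF link over Rician fading using the standard end-to-end SNR bound; it ignores the energy-harvesting time split and the two-way exchange, and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)

def rician_gain(K, size):
    """Squared magnitude of a unit-power Rician fading coefficient
    with Rician factor K (LOS power K/(K+1), scatter power 1/(K+1))."""
    los = np.sqrt(K / (K + 1))
    nlos = np.sqrt(1 / (2 * (K + 1))) * (rng.standard_normal(size)
                                         + 1j * rng.standard_normal(size))
    return np.abs(los + nlos) ** 2

def outage_af_two_hop(snr_db=10, K=3, rate=1.0, n=1_000_000):
    """Monte Carlo outage of a two-hop AF link over Rician fading,
    using the end-to-end SNR bound g1*g2 / (g1 + g2 + 1)."""
    snr = 10 ** (snr_db / 10)
    g1 = snr * rician_gain(K, n)
    g2 = snr * rician_gain(K, n)
    gamma = g1 * g2 / (g1 + g2 + 1)
    return np.mean(np.log2(1 + gamma) < rate)

print(f"outage probability: {outage_af_two_hop():.4f}")
```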

Journal ArticleDOI
TL;DR: This paper proposes a procedure for predicting channel characteristics based on a well-known machine learning (ML) algorithm, the convolutional neural network (CNN), for three-dimensional (3D) millimetre wave (mmWave) massive multiple-input multiple-output (MIMO) indoor channels.
Abstract: This paper proposes a procedure for predicting channel characteristics based on a well-known machine learning (ML) algorithm, the convolutional neural network (CNN), for three-dimensional (3D) millimetre wave (mmWave) massive multiple-input multiple-output (MIMO) indoor channels. The channel parameters, such as amplitude, delay, azimuth angle of departure (AAoD), elevation angle of departure (EAoD), azimuth angle of arrival (AAoA), and elevation angle of arrival (EAoA), are generated by ray tracing software. After data preprocessing, we obtain the channel statistical characteristics (including expectations and spreads of the above-mentioned parameters) to train the CNN. The channel statistical characteristics of any subchannel in a specified indoor scenario can be predicted when the location information of the transmitter (Tx) antenna and receiver (Rx) antenna is input into the CNN trained with limited data. The predicted channel statistical characteristics fit the real channel statistical characteristics well. The probability density functions (PDFs) of the squared error and the root mean square errors (RMSEs) of the channel statistical characteristics are also analyzed.