
Showing papers in "Annales Des Télécommunications in 2021"


Journal ArticleDOI
TL;DR: This work introduces a comprehensive review of the main information-theoretic metrics used to measure secrecy performance in physical layer security, and provides a theoretical framework for the most commonly used physical layer security techniques to improve secrecy performance.
Abstract: Physical layer security is a promising approach that can benefit traditional encryption methods. The idea of physical layer security is to take advantage of the propagation medium’s features and impairments to ensure secure communication in the physical layer. This work introduces a comprehensive review of the main information-theoretic metrics used to measure the secrecy performance in physical layer security. Furthermore, a theoretical framework related to the most commonly used physical layer security techniques to improve secrecy performance is provided. Finally, our work surveys physical layer security research over several enabling 5G technologies, such as massive multiple-input multiple-output, millimeter-wave communications, heterogeneous networks, non-orthogonal multiple access, and full-duplex. We also include the key concepts of each of the technologies mentioned above. Also identified are future fields of research and technical challenges of physical layer security.
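
To make the survey's starting point concrete, the most basic of these information-theoretic metrics is the secrecy capacity; for a Gaussian wiretap channel it is commonly written as follows (a textbook definition, not a formulation specific to this paper):

```latex
% Secrecy capacity: the positive gap between the capacities of the
% legitimate link (SNR gamma_B) and the eavesdropper link (SNR gamma_E),
% with [x]^+ = max(x, 0).
C_s = \left[ \log_2(1 + \gamma_B) - \log_2(1 + \gamma_E) \right]^{+}
```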

35 citations


Journal ArticleDOI
TL;DR: This work presents Wi-Sense—a human activity recognition system that uses a convolutional neural network (CNN) to recognize human activities based on the environment-independent fingerprints extracted from the Wi-Fi channel state information (CSI).
Abstract: A human activity recognition (HAR) system acts as the backbone of many human-centric applications, such as active assisted living and in-home monitoring for elderly and physically impaired people. Although existing Wi-Fi-based human activity recognition methods report good results, their performance is affected by changes in the ambient environment. In this work, we present Wi-Sense, a human activity recognition system that uses a convolutional neural network (CNN) to recognize human activities based on environment-independent fingerprints extracted from the Wi-Fi channel state information (CSI). First, Wi-Sense captures the CSI by using a standard Wi-Fi network interface card. Wi-Sense applies the CSI ratio method to reduce the noise and the impact of the phase offset. In addition, it applies principal component analysis to remove redundant information. This step not only reduces the data dimension but also removes the environmental impact. Thereafter, we compute the spectrogram of the processed data, which reveals environment-independent, time-variant micro-Doppler fingerprints of the performed activity. We use these spectrogram images to train a CNN. We evaluate our approach by using a human activity data set collected from nine volunteers in an indoor environment. Our results show that Wi-Sense can recognize these activities with an overall accuracy of 97.78%. To stress the applicability of the proposed Wi-Sense system, we provide an overview of the standards involved in health information systems and systematically describe how the Wi-Sense HAR system can be integrated into the eHealth infrastructure.
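
The processing chain described above can be pictured with a minimal sketch: a CSI ratio between two receive antennas, PCA to keep the dominant motion component, then a spectrogram. The array shape, sampling rate, and parameter values below are illustrative assumptions, not the paper's:

```python
import numpy as np
from scipy import signal

def csi_to_spectrogram(csi: np.ndarray, fs: float = 1000.0):
    """csi: complex array of shape (n_samples, n_subcarriers, 2 antennas)."""
    # CSI ratio between two antennas cancels the common phase offset.
    ratio = csi[:, :, 0] / (csi[:, :, 1] + 1e-12)
    # PCA via SVD on mean-centered data; the first principal component
    # concentrates the motion-induced variation across subcarriers.
    x = ratio - ratio.mean(axis=0)
    _, _, vh = np.linalg.svd(x, full_matrices=False)
    pc1 = x @ vh[0].conj()
    # The spectrogram of that component reveals the time-variant
    # micro-Doppler fingerprint of the activity.
    f, t, sxx = signal.spectrogram(pc1, fs=fs, nperseg=128, noverlap=96,
                                   return_onesided=False)
    return f, t, np.abs(sxx)
```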

28 citations


Journal ArticleDOI
TL;DR: This paper explores the vehicle clustering techniques from the aspects of cluster head selection, cluster formation, and cluster maintenance procedures, and summarizes the existing clustering performance metrics and performance evaluation approaches.
Abstract: In a vehicular ad hoc network (VANET), large amounts of information must be delivered on a large scale within a limited time. Meanwhile, vehicles are highly dynamic, moving at high velocities, which causes a large number of vehicle disconnections. Both of these characteristics lead to unreliable information transmission in VANET. A vehicle clustering algorithm, which organizes vehicles in groups, is introduced in VANET to improve network scalability and connection reliability. However, different clustering techniques and algorithms are required for different scenarios, such as information transmission, routing, and accident detection. This paper explores vehicle clustering techniques from the aspects of cluster head selection, cluster formation, and cluster maintenance procedures. Meanwhile, context-based clustering algorithms are summarized, and hybrid-clustering algorithms are highlighted. The paper also summarizes the existing clustering performance metrics and performance evaluation approaches.

25 citations


Journal ArticleDOI
TL;DR: In this scheme, an unmanned aerial vehicle (UAV) is adopted to collect baseline data from sensors to evaluate the trust of MVs, and a high-trust MV priority recruitment (HTMPR) strategy is proposed to recruit credible MVs at a low cost.
Abstract: A vehicular delay-tolerant network (VDTN) allows mobile vehicles (MVs) to collect data from widely deployed delay-tolerant sensors in a smart city through opportunistic routing, which has proven to be an efficient and low-cost data collection method. However, malicious MVs may report false data to obtain rewards, which will compromise applications. In this paper, the Active Trust Verification Data Collection (ATVDC) scheme is proposed for efficient, cheap, and secure data collection. In this scheme, an unmanned aerial vehicle (UAV) is adopted to collect baseline data from sensors to evaluate the trust of MVs, and a high-trust MV priority recruitment (HTMPR) strategy is proposed to recruit credible MVs at a low cost. In addition, a genetic-algorithm-based trajectory planning (GATP) algorithm is proposed to allow the UAV to collect more baseline data at the minimum flight cost. Extensive experiments show that the proposed strategy greatly improves performance in terms of the error-free ratio EF, the symbol error ratio ES, and the data coverage ratio ϑ.
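
The trust-evaluation step can be illustrated with a toy rule that compares an MV's reported readings against the UAV's baseline for the sensors both have visited; the tolerance and update rule below are hypothetical, not taken from ATVDC:

```python
def update_trust(trust: float, reports: dict, baseline: dict,
                 tol: float = 0.05, alpha: float = 0.1) -> float:
    """trust in [0, 1]; reports/baseline map sensor_id -> reading."""
    common = set(reports) & set(baseline)
    if not common:
        return trust  # no overlap with the UAV baseline: no new evidence
    # Fraction of the MV's reports matching the baseline within tolerance.
    ok = sum(abs(reports[s] - baseline[s]) <= tol * abs(baseline[s])
             for s in common) / len(common)
    # Exponential moving average keeps trust responsive yet stable.
    return (1 - alpha) * trust + alpha * ok
```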

24 citations


Journal ArticleDOI
TL;DR: This investigation will help researchers and developers identify and solve blockchain and IoT integration challenges in order to realize efficient BIoT applications.
Abstract: Blockchain-based Internet of Things (BIoT) is an emerging paradigm of the Internet of Things (IoT) which utilizes blockchain technology to provide security services to IoT applications. In essence, the blockchain's built-in security mechanism can provide services such as availability, authentication, authorization, confidentiality, and integrity to IoT applications. While most IoT devices are inherently resource-constrained in terms of computational power and storage capacity, the downside of blockchain is its requirement for massive amounts of energy and computational resources, which poses challenges to the realization of BIoT. This paper strives to explore the challenges associated with the integration of blockchain and IoT and to review their solutions. First, a brief introduction to blockchain technology is presented, followed by a characterization of blockchain-based IoT applications according to their heterogeneous traffic demands and Quality of Service (QoS) requirements. Next, the challenges that limit the design, development, and deployment of BIoT applications, such as energy efficiency, privacy, throughput, latency, the fork problem, security, legal issues, smart contracts, storage, and the network broadcast mechanism, are explained in detail, and their proposed solutions are discussed. Finally, future research directions for blockchain and IoT integration are indicated. This investigation will help researchers and developers identify and solve blockchain and IoT integration challenges in order to realize efficient BIoT applications.

17 citations


Journal ArticleDOI
TL;DR: This paper proposes intelligent Named Data Caching (iNDC), a machine-learning-based data caching technique for ROOF-based named data networking that predicts the number of content requests, such that popular contents are kept as long as possible on roadside units.
Abstract: Things are interconnected using information and communication technologies in smart cities, forming the Internet of Things (IoT). The Internet of Vehicles (IoV) refers to an IoT application where the urban vehicle fleet forms a worldwide network using V2X (Vehicle-to-Everything) communications. 5G is the new generation of cellular networks that will push past current bandwidth, performance, and latency limits, and IoV is one of its high-priority application domains. Among the IEEE standards under development for 5G, the IEEE P1931.1 standard (also named the Real-time Onsite Operations Facilitation (ROOF) standard) seems very promising for IoV requirements. This paper proposes ROOF-based Named Data Vehicular Networking (RND-VN), a named data networking (NDN) architecture for IoV. In addition, we provide SeCrNDn (Searchable Encryption for Content Retrieval in NDN), a searchable encryption technique for NDN content retrieval. Furthermore, we propose intelligent Named Data Caching (iNDC), a machine-learning-based data caching technique for ROOF-based named data networking. iNDC predicts the number of content requests, such that popular contents are kept as long as possible on roadside units; it is also used to predict the storage capacity required by each roadside unit. A performance study was conducted to evaluate the machine learning algorithms applied to iNDC. The results show that linear and ridge regressions are the most efficient in terms of content popularity prediction, while for predicting the capacity of new roadside units, iNDC achieves better accuracy using k-Nearest Neighbors.
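
A sketch of the prediction step with the algorithms the evaluation singles out (linear and ridge regression for request counts, k-Nearest Neighbors for roadside-unit capacity); the features and data below are synthetic placeholders:

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.neighbors import KNeighborsRegressor

# Placeholder features per content item (e.g., recent request history);
# y is the number of requests in the next interval.
rng = np.random.default_rng(0)
X = rng.random((500, 4))
y = X @ np.array([3.0, 1.5, 0.2, 0.7]) + 0.1 * rng.standard_normal(500)

for model in (LinearRegression(), Ridge(alpha=1.0)):
    model.fit(X[:400], y[:400])
    print(type(model).__name__, round(model.score(X[400:], y[400:]), 3))

# k-NN regression, as used for predicting the storage capacity of a new
# roadside unit from the capacities of similar existing ones.
knn = KNeighborsRegressor(n_neighbors=5).fit(X[:400], y[:400])
print("kNN", round(knn.score(X[400:], y[400:]), 3))
```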

17 citations


Journal ArticleDOI
TL;DR: This paper identifies a set of strategies that can be used by attackers to efficiently track vehicles without being visually detected and builds an efficient machine learning model to detect tracking attacks based only on the receiving beacons.
Abstract: Detecting passive attacks is always considered difficult in vehicular networks. Passive attackers can eavesdrop on the wireless medium to collect beacons. These beacons can be exploited to track the positions of vehicles not only to violate their location privacy but also for criminal purposes. In this paper, we propose a novel federated learning-based scheme for detecting passive mobile attackers in 5G vehicular edge computing. We first identify a set of strategies that can be used by attackers to efficiently track vehicles without being visually detected. We then build an efficient machine learning (ML) model to detect tracking attacks based only on the receiving beacons. Our scheme enables federated learning (FL) at the edge to ensure collaborative learning while preserving the privacy of vehicles. Moreover, FL clients use a semi-supervised learning approach to ensure accurate self-labeling. Our experiments demonstrate the effectiveness of our proposed scheme to detect passive mobile attackers quickly and with high accuracy. Indeed, only 20 received beacons are required to achieve 95% accuracy. This accuracy can be achieved within 60 FL rounds using 5 FL clients in each FL round. The obtained results are also validated through simulations.
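
The aggregation at the edge can be pictured as a standard federated-averaging step (a generic FedAvg sketch, not necessarily the authors' exact aggregation rule):

```python
import numpy as np

def fed_avg(client_weights: list, client_sizes: list) -> list:
    """Weighted average of model parameters across FL clients.

    client_weights: one list of np.ndarray layers per client.
    client_sizes: number of local training samples per client.
    """
    total = sum(client_sizes)
    return [
        sum(w[layer] * (n / total) for w, n in zip(client_weights, client_sizes))
        for layer in range(len(client_weights[0]))
    ]

# Toy usage: three clients, each holding a two-layer model.
clients = [[np.ones((2, 2)) * i, np.ones(2) * i] for i in range(3)]
print(fed_avg(clients, client_sizes=[100, 50, 50]))
```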

14 citations


Journal ArticleDOI
TL;DR: This paper introduces a modified Local Outlier Factor (LOF)-based outlier characterization approach and applies it to enhance IoT security and reliability.
Abstract: The Internet of Things (IoT) is a growing paradigm that is revolutionary for information and communication technology (ICT) because it gathers numerous application domains by integrating several enabling technologies. Outlier detection is a field of tremendous importance, including in IoT. In previous works on outlier detection, the proposed methods mainly tackled the efficacy and efficiency challenges. However, a growing interest in the interpretation of detected anomalies has been noticed in the research community, and only a few works have contributed in this direction. Furthermore, the characterization of anomalous events in IoT-related problems has not yet been addressed. Hence, in this paper, we introduce our modified Local Outlier Factor (LOF)-based outlier characterization approach and apply it to enhance IoT security and reliability. Experiments on both synthetic and real-world datasets show the good performance of our solution.
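
For reference, the unmodified LOF that the authors build on is available off the shelf; this is the plain baseline, not the paper's modified characterization approach:

```python
import numpy as np
from sklearn.neighbors import LocalOutlierFactor

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)),    # normal IoT readings
               rng.normal(6, 0.5, (5, 2))])   # small anomalous cluster

lof = LocalOutlierFactor(n_neighbors=20)
labels = lof.fit_predict(X)             # -1 = outlier, 1 = inlier
scores = -lof.negative_outlier_factor_  # larger = more anomalous
print("outlier indices:", np.where(labels == -1)[0])
```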

13 citations


Journal ArticleDOI
TL;DR: A strategic reference is introduced that guides HEIs on the development of an ISM framework (ISMF) and provides recommendations that should be considered for its implementation in an era of ever-evolving security threats.
Abstract: Effective information security management (ISM) practices to protect the information assets of organizations from security intrusions and attacks are imperative. In that sense, a systematic literature review of academic articles focused on ISM in higher education institutions (HEIs) was conducted. For this purpose, an empirical study was performed. Studies carried out from 2012 onward that report results from HEIs performing ISM through various means, such as a set of framework functions, implementation phases, infrastructure services, and securities for their assets, were explored. The articles found were then analyzed following a methodological procedure consisting of a systematic mapping study with its research questions, inclusion and exclusion criteria, selection of digital libraries, and analysis of the respective search strings. A set of competencies, resources, directives, and strategies that contribute to designing and developing an ISM framework (ISMF) for HEIs is identified based on standards such as ISO 27000, COBIT, ITIL, NIST, and EDUCAUSE. This study introduces a strategic reference that guides HEIs in the development of an ISMF and provides recommendations that should be considered for its implementation in an era of ever-evolving security threats.

13 citations


Journal ArticleDOI
TL;DR: This paper studies the multi-agent resource allocation problem in vehicular networks using non-orthogonal multiple access (NOMA) and network slicing using a deep reinforcement learning (DRL) approach and proposes a deep Q learning (DQL) algorithm that is practical because it can be implemented in an online and distributed manner.
Abstract: This paper studies the multi-agent resource allocation problem in vehicular networks using non-orthogonal multiple access (NOMA) and network slicing. Vehicles want to broadcast multiple packets with heterogeneous quality-of-service (QoS) requirements, such as safety-related packets (e.g., accident reports) that require very low latency communication, while raw sensor data sharing (e.g., high-definition map sharing) requires high-speed communication. To ensure heterogeneous service requirements for different packets, we propose a network slicing architecture. We focus on a non-cellular network scenario where vehicles communicate by the broadcast approach via the direct device-to-device interface (i.e., sidelink communication). In such a vehicular network, resource allocation among vehicles is very difficult, mainly due to (i) the rapid variation of wireless channels among highly mobile vehicles and (ii) the lack of a central coordination point. Thus, the possibility of acquiring instantaneous channel state information to perform centralized resource allocation is precluded. The resource allocation problem considered is therefore very complex. It includes not only the usual spectrum and power allocation, but also coverage selection (which target vehicles to broadcast to) and packet selection (which network slice to use). This problem must be solved jointly since selected packets can be overlaid using NOMA and therefore spectrum and power must be carefully allocated for better vehicle coverage. To do so, we first provide a mathematical programming formulation and a thorough NP-hardness analysis of the problem. Then, we model it as a multi-agent Markov decision process. Finally, to solve it efficiently, we use a deep reinforcement learning (DRL) approach and specifically propose a deep Q learning (DQL) algorithm. The proposed DQL algorithm is practical because it can be implemented in an online and distributed manner. It is based on a cooperative learning strategy in which all agents perceive a common reward and thus learn cooperatively and distributively to improve the resource allocation solution through offline training. We show that our approach is robust and efficient when faced with different variations of the network parameters and compared to centralized benchmarks.
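
The learning rule underneath the proposed DQL can be illustrated by the tabular Q-learning update that a deep Q network approximates; in the cooperative setting described above, every agent applies this update with the common reward r:

```python
N_ACTIONS = 8  # e.g., joint (sub-channel, power level, slice) choices

def q_update(Q: dict, s, a, r: float, s_next,
             alpha: float = 0.1, gamma: float = 0.99) -> None:
    """One Q-learning step; Q maps (state, action) -> estimated value."""
    best_next = max(Q.get((s_next, a2), 0.0) for a2 in range(N_ACTIONS))
    old = Q.get((s, a), 0.0)
    # Move the estimate toward the bootstrapped target r + gamma * max Q(s', .).
    Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
```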

12 citations


Journal ArticleDOI
TL;DR: In this paper, a blockchain-based Personal Health Information Management System (PHIMS) for managing health data originating from medical IoT devices and connected applications is proposed. It consists of four layers: a blockchain layer for hosting a blockchain database, an IoT device layer for obtaining personal health data, an application layer for facilitating health data sharing, and an adapter layer that interfaces the blockchain layer with the application layer.
Abstract: Medical IoT devices that use miniature sensors to collect patients' bio-signals, together with connected medical applications, are playing a crucial role in providing pervasive and personalized healthcare. This technological improvement has also created opportunities for the better management of personal health information. The Personal Health Information Management System (PHIMS) supports activities such as acquisition, storage, organization, integration, and privacy-sensitive retrieval of consumers' health information. For usability and wide acceptance, the PHIMS should follow design principles that guarantee privacy-aware health information sharing, individual information control, integration of information obtained from multiple medical IoT devices, health information security, and flexibility. Recently, blockchain technology has emerged as a lucrative option for the management of personal health information. In this paper, we propose eHealthChain, a blockchain-based PHIMS for managing health data originating from medical IoT devices and connected applications. The eHealthChain architecture consists of four layers: a blockchain layer for hosting a blockchain database, an IoT device layer for obtaining personal health data, an application layer for facilitating health data sharing, and an adapter layer that interfaces the blockchain layer with the application layer. Compared to existing systems, eHealthChain gives the user complete control in terms of personal health data acquisition, sharing, and self-management. We also present a detailed implementation of a Proof of Concept (PoC) prototype of the eHealthChain system built using the Hyperledger Fabric platform.
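
The adapter layer's role can be sketched as a thin interface that translates application-level operations into blockchain transactions. This is a structural illustration only; it does not use the Hyperledger Fabric SDK, and all names are hypothetical:

```python
from abc import ABC, abstractmethod

class BlockchainClient(ABC):
    """What the adapter layer expects from the blockchain layer."""
    @abstractmethod
    def submit(self, tx: dict) -> str: ...
    @abstractmethod
    def query(self, key: str) -> dict: ...

class HealthDataAdapter:
    """Adapter layer: maps application operations to transactions."""
    def __init__(self, chain: BlockchainClient):
        self.chain = chain

    def store_reading(self, patient_id: str, reading: dict) -> str:
        # Consent and access policies would be enforced here before writing.
        return self.chain.submit({"op": "store", "patient": patient_id,
                                  "data": reading})

    def share_with(self, patient_id: str, provider_id: str) -> str:
        # The patient keeps control: sharing is an explicit transaction.
        return self.chain.submit({"op": "grant", "patient": patient_id,
                                  "grantee": provider_id})
```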

Journal ArticleDOI
TL;DR: In this paper, the authors present and compare the main proof-based consensus protocols, focusing on the security and performance of each consensus protocol, and highlight the centralization tendency and the main vulnerabilities of Proof of Work (PoW), Proof of Stake (PoS), and their countermeasures.
Abstract: Blockchain is a disruptive technology that will revolutionize the Internet and our way of living, working, and trading. However, the consensus protocols of most blockchain-based public systems show vulnerabilities and performance limitations that hinder the mass adoption of blockchain. This paper presents and compares the main proof-based consensus protocols, focusing on the security and performance of each consensus protocol. Proof-based protocols use the probabilistic consensus model and are more suitable for public environments with many participants, such as the Internet of Things (IoT). We highlight the centralization tendency and the main vulnerabilities of Proof of Work (PoW), Proof of Stake (PoS), and their countermeasures. We also analyze and compare alternative proof-based protocols, such as Proof of Elapsed Time (PoET), Proof of Burn (PoB), Proof of Authority (PoA), and Delegated Proof of Stake (DPoS). Finally, we analyze the security of the IOTA consensus protocol, a DAG-based platform suited for the IoT environment.
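
The PoW mechanism whose centralization tendency and vulnerabilities the paper analyzes reduces to a hash puzzle; a toy version makes the energy argument tangible, since the expected work doubles with each extra difficulty bit:

```python
import hashlib

def mine(block_data: bytes, difficulty: int = 20) -> int:
    """Find a nonce so that SHA-256(block || nonce) has `difficulty`
    leading zero bits; expected attempts ~ 2**difficulty."""
    target = 1 << (256 - difficulty)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

print(mine(b"block header"))  # fast at difficulty 20; hopeless at 70+
```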

Journal ArticleDOI
TL;DR: In this paper, two types of microstrip antennas (slot and fractal) reported by researchers as single elements are reviewed through a survey evaluating several important antenna specifications from previous research, such as operating bandwidth, gain, efficiency, axial ratio bandwidth (ARBW), and size.
Abstract: Portable communication devices for WLAN, WiMAX, LTE, ISM, and 5G utilize one or more of the triple bands at 2.3–2.7 GHz, 3.4–3.6 GHz, and 5–6 GHz and suffer from multipath problems because they are used in urban regions. To date, no one has reviewed the antennas used for these types of wireless communications. This study reviews two types of microstrip antennas (slot and fractal) that have been reported by researchers (as single elements) through a survey that evaluates several important antenna specifications from previous research, such as operating bandwidth, gain, efficiency, axial ratio bandwidth (ARBW), and size. The weaknesses in the design of all the antennas were carefully identified to determine the most important challenges in designing these antennas and to identify the most important design limits to be overcome in future research. This study also highlights antennas with circular polarization characteristics, the techniques used to generate those characteristics, and the associated challenges. Finally, several suggestions are presented as a guideline for antenna design and future work on antennas for Wi-Fi, LTE, and WiMAX communications according to market demands, together with methods for overcoming the identified limits.

Journal ArticleDOI
TL;DR: The PLS data confidentiality schemes for NOMA and their limitations, challenges, and countermeasures are discussed, and different methods to address the remaining security properties are proposed.
Abstract: More and more attention is being directed towards Non-Orthogonal Multiple Access (NOMA) technology due to its many advantages, such as high data rate, enhanced spectral and energy efficiency, massive connectivity, and low latency. On the other hand, secure data transmission remains a critical challenge in wireless communication systems since wireless channels are, in general, exposed. To increase the robustness of NOMA systems and overcome the issues related to wireless transmission, several Physical Layer Security (PLS) schemes have recently been presented. Unlike conventional security algorithms, this type of solution exploits the dynamicity of the physical layer to secure data using a single iteration and minimum operations. In this paper, we survey the various NOMA-based PLS schemes in the literature, which target all kinds of security properties. From this study, we have noticed that the majority of the research work in this area is mainly focused on data confidentiality and privacy and not on other security properties such as device and source authentication, key generation, and message integrity. Therefore, we discuss the PLS data confidentiality schemes for NOMA and their limitations, challenges, and countermeasures, and we propose different methods to address the remaining security properties.

Journal ArticleDOI
TL;DR: This paper focuses on deriving closed-form expressions for different adaptive transmission techniques based on instantaneous channel state information in BX fading by using the characteristic function approach.
Abstract: The present paper analyzes an L-branch maximal ratio combining (MRC) receiver over the Beaulieu-Xie (BX) fading channel. The expression of the probability density function (PDF) of the signal-to-noise ratio (SNR) of an MRC receiver in BX fading is derived by using the characteristic function approach. The obtained PDF is then utilized to calculate the various link-level and system-level parameters. This paper focuses on deriving closed-form expressions for different adaptive transmission techniques based on instantaneous channel state information in BX fading. The accuracy of our derived expressions is verified by comparing them with Monte Carlo simulations.
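
The characteristic-function approach rests on two standard relations: the MRC output SNR is the sum of the branch SNRs, and the CF of a sum of independent branches is the product of the per-branch CFs, so the PDF follows by inversion (generic relations; the BX-specific CF is derived in the paper):

```latex
% MRC output SNR over L branches and its characteristic function:
\gamma_{\mathrm{MRC}} = \sum_{l=1}^{L} \gamma_l, \qquad
\phi_{\gamma_{\mathrm{MRC}}}(\omega) = \prod_{l=1}^{L} \phi_{\gamma_l}(\omega)

% PDF recovered by Fourier inversion of the characteristic function:
f_{\gamma_{\mathrm{MRC}}}(\gamma)
  = \frac{1}{2\pi} \int_{-\infty}^{\infty}
    \phi_{\gamma_{\mathrm{MRC}}}(\omega)\, e^{-j\omega\gamma}\, d\omega
```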

Journal ArticleDOI
TL;DR: In this paper, the authors propose a traffic identification model based on generative adversarial deep convolutional networks (GADCN), which effectively fits and expands traffic images, maintains a balance between the classes of the dataset, and enhances dataset stability.
Abstract: With the rapid development of network technology, the Internet has accelerated the generation of network traffic, which has made network security a top priority. In recent years, due to the limitations of deep packet inspection and port-number-based traffic identification, machine learning-based network traffic identification has, owing to its advantages, gradually become the most prominent method in the field. As the learning ability of deep learning has become more substantial and better able to adapt to highly complex tasks, deep learning has become widely used in natural language processing, image identification, and computer vision. Therefore, more and more researchers are applying deep learning to network traffic identification and classification. To address the class imbalance of current network traffic, we propose a traffic identification model based on generative adversarial deep convolutional networks (GADCN), which effectively fits and expands traffic images, maintains a balance between the classes of the dataset, and enhances dataset stability. We use the USTC-TFC2016 dataset for training and test samples, and experimental results show that the method based on GADCN performs better than general deep learning models.
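
Work on USTC-TFC2016 commonly renders each flow as a grayscale image built from its first bytes before feeding it to a convolutional model; a sketch of that preprocessing (the image size and byte budget are our assumptions, not GADCN's published settings):

```python
import numpy as np

def flow_to_image(payload: bytes, side: int = 28) -> np.ndarray:
    """Map the first side*side bytes of a flow to a grayscale image in
    [0, 1]; shorter flows are zero-padded to a full image."""
    n = side * side
    buf = np.frombuffer(payload[:n], dtype=np.uint8)
    buf = np.pad(buf, (0, n - len(buf)))
    return buf.reshape(side, side).astype(np.float32) / 255.0

img = flow_to_image(b"\x16\x03\x01" * 300)  # e.g., a TLS-looking flow
print(img.shape, img.min(), img.max())
```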

Journal ArticleDOI
TL;DR: This work proposes an end-to-end methodology allowing a neural network to outperform traditional machine learning algorithms, and demonstrates a high performance score on the CIC-IDS2017 data set, with an accuracy greater than 99% and a false positive rate lower than 0.5%.
Abstract: The Internet connection is becoming ubiquitous in embedded systems, making them potential victims of intrusion. Although they have gained popularity in recent years, deep learning-based intrusion detection systems tend to produce worse results than those using traditional machine learning algorithms. In contrast, we propose an end-to-end methodology allowing a neural network to outperform traditional machine learning algorithms. We demonstrate high performance on the CIC-IDS2017 data set, with an accuracy greater than 99% and a false positive rate lower than 0.5%. Our results are compared to traditional machine learning algorithms and previous studies. We then show that our approach can be successfully applied to the CSE-CIC-IDS2018 data set, confirming that a neural network can reach better scores than other machine learning algorithms. Our performance is compared to previous work on this data set. We further deployed our solution on a system-on-chip for automotive, allowing us to characterize real-time performance on an embedded system, both for feature extraction and inference. Finally, a discussion opens up on problems related to some attacks that are particularly difficult to detect with flow-based techniques and on weaknesses found in the data sets.
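
The two figures of merit quoted above come straight off the confusion matrix; for a binary benign/attack split:

```python
from sklearn.metrics import confusion_matrix

def ids_metrics(y_true, y_pred):
    """Accuracy and false positive rate for binary IDS labels
    (0 = benign, 1 = attack)."""
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    fpr = fp / (fp + tn)  # benign flows wrongly flagged as attacks
    return accuracy, fpr

print(ids_metrics([0, 0, 1, 1, 0], [0, 1, 1, 1, 0]))  # (0.8, 0.333...)
```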

Journal ArticleDOI
TL;DR: This paper suggests a novel approach, called scalable and optimal near-sighted location selection for fog node deployment and routing in SDN-based wireless networks for IoT systems (SOSW), which uses singular-value decomposition and QR factorization with column pivoting linear algebra methods on the traffic matrix of the network to compute the optimal locations for fog nodes.
Abstract: In a fog computing (FC) architecture, cloud services migrate towards the network edge and operate via edge devices such as access points (AP), routers, and switches. These devices become part of a virtualization infrastructure and are referred to as “fog nodes.” Recently, software-defined networking (SDN) has been used in FC to improve its control and manageability. The current SDN-based FC literature has overlooked two issues: (a) fog nodes’ deployment at optimal locations and (b) SDN best path computation for data flows based on constraints (i.e., end-to-end delay and link utilization). To solve these optimization problems, this paper suggests a novel approach, called scalable and optimal near-sighted location selection for fog node deployment and routing in SDN-based wireless networks for IoT systems (SOSW). First, the SOSW model uses singular-value decomposition (SVD) and QR factorization with column pivoting linear algebra methods on the traffic matrix of the network to compute the optimal locations for fog nodes, and second, it introduces a new heuristic-based traffic engineering algorithm, called the constraint-based shortest path algorithm (CSPA), which uses ant colony optimization (ACO) to optimize the path computation process for task offloading. The results show that our proposed approach significantly reduces average latency and energy consumption in comparison with existing approaches.
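
The location-selection step can be sketched with the named linear-algebra tools: the SVD indicates how many fog nodes capture most of the traffic energy, and QR factorization with column pivoting ranks candidate locations. The selection criterion below is a simplified reading of SOSW, not its exact procedure:

```python
import numpy as np
from scipy.linalg import qr, svd

rng = np.random.default_rng(1)
T = rng.random((50, 12))  # traffic matrix: 50 time slots x 12 candidate APs

# Number of fog nodes needed to capture ~90% of the traffic energy.
s = svd(T, compute_uv=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9)) + 1

# QR with column pivoting orders candidate locations by importance.
_, _, piv = qr(T, pivoting=True)
fog_sites = piv[:k]
print(f"deploy {k} fog nodes at APs {sorted(fog_sites)}")
```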

Journal ArticleDOI
TL;DR: In this paper, the authors employ Fast Gradient Sign Method (FGSM) to generate adversarial examples to test the robustness of three intrusion detection models based on convolutional neural network (CNN), long short-term memory (LSTM), and gated recurrent unit (GRU).
Abstract: With the advent of the Internet of Things (IoT), network attacks have become more diverse and intelligent. In order to ensure network security, intrusion detection systems (IDS) have become very important. However, when faced with adversarial examples, an IDS is itself no longer secure, and attackers can increase the success rate of attacks by misleading it. Therefore, it is necessary to improve the robustness of the IDS. In this paper, we employ the Fast Gradient Sign Method (FGSM) to generate adversarial examples to test the robustness of three intrusion detection models based on the convolutional neural network (CNN), long short-term memory (LSTM), and gated recurrent unit (GRU). We employ three training methods: the first is to train the models with normal examples, the second is to train the models directly with adversarial examples, and the last is to pretrain the models with normal examples and then employ adversarial examples to train them. We evaluate the performance of the three models under the different training methods and find that under normal training, CNN is the model most robust to adversarial examples. After adversarial training, the robustness of GRU and LSTM to adversarial examples is greatly improved.
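
FGSM perturbs each input one step along the sign of the loss gradient; this is the standard formulation the paper employs:

```latex
% epsilon bounds the perturbation size; J is the model's training loss.
x_{\mathrm{adv}} = x + \epsilon \cdot
  \operatorname{sign}\!\big( \nabla_{x} J(\theta, x, y) \big)
```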

Journal ArticleDOI
TL;DR: A portable and multifunctional software-defined radio (SDR) platform is designed to detect different activities of human life, in particular for the monitoring of health, and the results achieved by detecting hand motion activity ensure that the system is capable of detecting human body motions and vital signs.
Abstract: The future of dependable wireless communication will encompass a much more eclectic range of applications. Not only are traditional telecommunication facilities such as text messaging, audio and video calling, video download and upload, web browsing, and social networking being improved, but a wide range of sensors and devices in the Internet of Things, such as smart city and smart hospital applications, are also being adopted. Researchers are trying hard to ensure the timely detection of various diseases anytime and anywhere. In this research, a portable and multifunctional software-defined radio (SDR) platform is designed to detect different activities of human life, in particular for health monitoring. The key idea of this work is to investigate the wireless channel state information (WCSI) in the presence of the human body to capture movements in different frequency bands. Orthogonal frequency division multiplexing (OFDM) with 64 subcarriers, together with the magnitude and phase responses in the frequency domain, is used to capture the WCSI of the activity. The design is validated through simulation and real-time experiments. However, it is widely accepted that simulation results fail to capture real-life situations, so extensive and repeated real-time experiments were carried out on the hardware platform to ensure that the activity is detected accurately. The results achieved by detecting hand motion activity confirm that the system is capable of detecting human body motions and vital signs.
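
With known OFDM pilots, the per-subcarrier WCSI is obtained from the transmitted and received frequency-domain symbols; body movement then modulates its magnitude and phase (standard least-squares OFDM channel estimation, consistent with the 64-subcarrier setup described):

```latex
% Channel estimate on subcarrier k, for k = 0, 1, ..., 63:
\hat{H}_k = \frac{Y_k}{X_k}, \qquad
|\hat{H}_k| \ \text{(magnitude response)}, \qquad
\angle \hat{H}_k \ \text{(phase response)}
```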

Journal ArticleDOI
TL;DR: A comprehensive analysis of the ASER performance of the MIMO TAS/MRC networks is demonstrated by varying the fading parameter values, the number of transmit and receive antennas, and the constellation size.
Abstract: In this paper, the average symbol error rate (ASER) of multiple-input-multiple-output (MIMO) systems with transmit antenna selection (TAS) and maximal ratio combining (MRC) is analysed under Weibull fading conditions. The impact of additive white Gaussian noise (AWGN) and additive white generalized Gaussian noise (AWGGN) on the ASER of the MIMO TAS/MRC networks is considered. Closed-form approximate and asymptotic ASER expressions of the considered network with AWGN for different quadrature amplitude modulation techniques are derived based on the probability density function approach. The closed-form approximate and asymptotic ASER expressions for the AWGGN case are also obtained. In addition, a comprehensive analysis of the ASER performance of the MIMO TAS/MRC networks is demonstrated by varying the fading parameter values, the number of transmit and receive antennas, and the constellation size. Finally, the obtained theoretical results are confirmed through exact simulation results.
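
The probability density function approach named above averages the conditional symbol error probability over the distribution of the combiner's output SNR; this generic template underlies the derived closed forms:

```latex
% P_s(gamma): conditional SER of the QAM constellation at SNR gamma;
% f_gamma: PDF of the post-TAS/MRC output SNR under Weibull fading.
\overline{P}_s = \int_{0}^{\infty} P_s(\gamma)\, f_{\gamma}(\gamma)\, \mathrm{d}\gamma
```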

Journal ArticleDOI
TL;DR: In this article, the authors propose a new robust and dynamic mobility-based clustering algorithm, junction-based clustering for VANET (JCV), which considers the transmission range, the moving direction of the vehicle at the next junction, and the vehicle density in the creation of a cluster, whereas the relative position, movement at the junction, degree of a node, and time spent on the road are considered to select the cluster head.
Abstract: Vehicular communication is an essential part of a smart city. Scalability is a major issue for vehicular communication. Clustering can solve many issues of the vehicular ad hoc network (VANET); however, due to the high mobility of the vehicles, clustering in VANET suffers from stability issues. Previously proposed clustering algorithms for VANET are optimized for either cluster head or cluster member duration. Moreover, the absence of the intelligent use of mobility parameters, such as direction, movement, position, and velocity, results in cluster stability issues. A dynamic clustering algorithm that makes efficient use of mobility parameters can solve the stability problem in VANET. To achieve higher stability for VANET, a new robust and dynamic mobility-based clustering algorithm, junction-based clustering for VANET (JCV), is proposed in this paper. In contrast to previous studies, the transmission range, the moving direction of the vehicle at the next junction, and the vehicle density are considered in the creation of a cluster, whereas the relative position, movement at the junction, degree of a node, and time spent on the road are considered to select the cluster head. The performance of JCV is compared with two existing VANET clustering algorithms in terms of the average cluster head duration, the average cluster member duration, the average number of cluster head changes, and the percentage of vehicles participating in the clustering process. The simulation results show that JCV outperforms the existing algorithms and achieves better stability.
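
A toy rendering of the head-selection idea using the four criteria the paper names; the weights and the normalization to [0, 1] are hypothetical:

```python
def ch_score(v: dict, w=(0.3, 0.3, 0.2, 0.2)) -> float:
    """Higher score = better cluster head candidate; all inputs in [0, 1].
    rel_pos: closeness to the cluster's center of mass,
    junction_move: likelihood of keeping the cluster's direction at the
    next junction, degree: normalized neighbor count in range,
    road_time: normalized expected remaining time on the current road."""
    return (w[0] * v["rel_pos"] + w[1] * v["junction_move"]
            + w[2] * v["degree"] + w[3] * v["road_time"])

members = [
    {"rel_pos": 0.9, "junction_move": 0.7, "degree": 0.6, "road_time": 0.8},
    {"rel_pos": 0.5, "junction_move": 0.9, "degree": 0.9, "road_time": 0.4},
]
head = max(members, key=ch_score)  # elect the best-scoring vehicle
```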

Journal ArticleDOI
TL;DR: Different link quality estimators are reviewed, with a particular focus on those based on the Received Signal Strength Indicator (RSSI) and the Packet Delivery Ratio (PDR), and it is proposed to go even further than link quality estimation with link quality prediction.
Abstract: The use of poor-quality links in Internet of Things (IoT) networks leads to a bad quality of experience (QoE), with long delivery delays, low reliability, and a short lifetime of battery-operated nodes, to name but a few effects. In addition, network resources, such as bandwidth and node energy, are wasted by retransmissions. An accurate estimation of link quality enables the network to better select the links used for data gathering. Hence, the number of retransmissions needed to achieve the required end-to-end reliability is decreased, leading to shorter end-to-end delivery times, higher network throughput, and an increased network lifetime. In this paper, different link quality estimators are reviewed, with a particular focus on those based on the Received Signal Strength Indicator (RSSI) and the Packet Delivery Ratio (PDR). We propose to go even further than link quality estimation with link quality prediction. The expected benefit of link quality prediction is to anticipate link breakages and route changes before losing packets, which should result in a better QoE provided by the network. For that purpose, we evaluate the performance of four machine learning techniques (i.e., Linear Support Vector Machine, Logistic Regression, Support Vector Machine, and Random Forest) working on traces collected from a real IoT network. They are compared in terms of per-class metrics as well as global metrics. In addition, issues concerning the deployment of such machine learning techniques in IoT networks with limited resources and energy are presented.
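
All four classifiers compared in the paper are available in scikit-learn; a sketch of such a comparison on RSSI/PDR-style features (the feature layout and synthetic labels are illustrative, not the collected traces):

```python
import numpy as np
from sklearn.svm import LinearSVC, SVC
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Per-link features: mean and std of recent RSSI, current PDR.
X = rng.normal(size=(300, 3))
y = (X[:, 2] + 0.5 * X[:, 0] > 0).astype(int)  # 1 = link stays good

models = {
    "Linear SVM": LinearSVC(),
    "Logistic Regression": LogisticRegression(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
}
for name, m in models.items():
    print(name, round(cross_val_score(m, X, y, cv=5).mean(), 3))
```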

Journal ArticleDOI
TL;DR: This paper mathematically analyzes DRX, presents an analysis model that fully reflects the DRX operation, and proposes a new metric called the real power-saving (RPS) factor by considering all the states and state transitions in the DRX specification.
Abstract: Discontinuous reception (DRX) is a way for user equipment (UE) to save energy. DRX forces a UE to turn off its transceivers for a DRX cycle when it does not have a packet to receive from a base station, called an eNB. However, if a packet arrives at an eNB when the UE is performing a DRX cycle, the transmission of the packet is delayed until the UE finishes the DRX cycle. Therefore, as the length of the DRX cycle increases, not only the amount of UE energy saved by the DRX but also the transmission delay of a packet increase. Different applications have different traffic arrival patterns and require different optimal balances between energy efficiency and transmission delay. Thus, understanding the tradeoff between these two performance metrics is important for achieving the optimal use of DRX in a wide range of use cases. In this paper, we mathematically analyze DRX to understand this tradeoff. We note that previous studies were limited in that their analysis models only partially reflect the DRX operation, and they make assumptions to simplify the analysis, which creates a gap between the analysis results and the actual performance of the DRX. To fill this gap, in this paper, we present an analysis model that fully reflects the DRX operation. To quantify the energy efficiency of the DRX, we also propose a new metric called a real power-saving (RPS) factor by considering all the states and state transitions in the DRX specification. In addition, we improve the accuracy of the analysis result for the average packet transmission delay by removing unrealistic assumptions. Through extensive simulation studies, we validate our analysis results. We also show that compared with the other analysis results, our analysis model improves the accuracy of the performance metrics.
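
The delay side of the tradeoff is easy to picture: a packet arriving while the UE sleeps waits until the current DRX cycle ends, so the mean extra delay grows roughly linearly with the cycle length. A deliberately simplified toy simulation (uniform arrivals within a sleep-only cycle, none of the paper's full state machine):

```python
import numpy as np

rng = np.random.default_rng(3)
for cycle in (0.04, 0.16, 0.64):  # DRX cycle lengths in seconds
    arrivals = rng.uniform(0, cycle, 100_000)  # arrival offset in a cycle
    delay = cycle - arrivals                   # wait until the cycle ends
    print(f"cycle {cycle * 1000:4.0f} ms -> "
          f"mean extra delay {delay.mean() * 1000:5.1f} ms")  # ~ cycle / 2
```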

Journal ArticleDOI
TL;DR: This paper proposes to design a new tool, a “decision tree”, that identifies when a blockchain may be the appropriate technical infrastructure for a given IT application and when another classical system (centralized or distributed peer-to-peer) is better adapted, and provides recommendations on whether smart contracts are needed.
Abstract: Blockchain technology has gained increasing attention from research and industry over recent years. It allows the implementation of smart-contract technology, which is used to automate and execute agreements between users. The blockchain is proposed today as a new technical infrastructure for several types of IT applications. This interest is mainly due to its core property that allows two users to perform transactions without going through a Trusted Third Party, while offering transparent and fully protected data storage. However, a blockchain comes along with a number of other intrinsic properties, which may not be suitable or beneficial in all the envisaged application cases. Consequently, we propose in this paper to design a new tool, a “decision tree”, that identifies when a blockchain may be the appropriate technical infrastructure for a given IT application, and when another classical system (centralized or distributed peer-to-peer) is better adapted. The proposed decision tree also identifies whether or not it is necessary to use smart-contract technology.
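
The flavor of such a decision tree can be conveyed as a chain of questions; the questions below are a hypothetical distillation of common blockchain-suitability criteria, not the exact tree designed in the paper:

```python
def suggest_infrastructure(need_shared_state: bool, multiple_writers: bool,
                           writers_trust_each_other: bool,
                           ttp_acceptable: bool,
                           need_automated_agreements: bool) -> str:
    """Return a rough infrastructure suggestion (hypothetical criteria)."""
    if not need_shared_state or not multiple_writers:
        return "centralized database"
    if ttp_acceptable:
        return "classical centralized system"
    if writers_trust_each_other:
        return "distributed peer-to-peer system"
    base = "blockchain"
    return base + " with smart contracts" if need_automated_agreements else base

print(suggest_infrastructure(True, True, False, False, True))
```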

Journal ArticleDOI
TL;DR: The physical layer features of ZigBee devices are analyzed, and methods based on deep learning algorithms, built on wavelet decomposition and on the autoencoder representation of the original dataset, are presented to achieve high classification accuracy.
Abstract: In modern wireless systems such as ZigBee, sensitive information produced by the network is transmitted through different wired or wireless nodes. Supporting communication between diverse system types, such as mobile phones, laptops, and desktop computers, increases the risk of being attacked by outside nodes. Malicious (or unintentional) threats, such as attempts to gain unauthorized access to the network, raise the requirements for data security against rogue devices trying to tamper with the identity of authorized devices. In this context, by focusing on Radio Frequency Distinct Native Attributes (RF-DNA) features extracted from physical layer responses (referred to as preambles) of ZigBee devices, a dataset of distinguishable features of all devices can be produced and exploited for the detection and rejection of spoofing/rogue devices. Through this procedure, devices manufactured by the same or different producers can be distinguished, improving classification accuracy. The two most challenging problems in initiating RF-DNA are (1) extracting features to generate a dataset in the way most effective for model classification and (2) designing an efficient model for discriminating spoofing/rogue devices. In this paper, we analyze the physical layer features of ZigBee devices and present methods based on deep learning algorithms, built on wavelet decomposition and on the autoencoder representation of the original dataset, to achieve high classification accuracy.
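
RF-DNA fingerprints are typically statistical moments computed over signal regions; a sketch combining wavelet decomposition (via PyWavelets) with per-subband statistics over a preamble, where the wavelet choice and the specific statistics are illustrative assumptions:

```python
import numpy as np
import pywt
from scipy.stats import kurtosis, skew

def rf_dna_features(preamble: np.ndarray, wavelet: str = "db4",
                    level: int = 3) -> np.ndarray:
    """Fingerprint a preamble: variance, skewness, and kurtosis of the
    instantaneous amplitude in each wavelet subband."""
    amplitude = np.abs(preamble)  # instantaneous amplitude of IQ samples
    coeffs = pywt.wavedec(amplitude, wavelet, level=level)
    feats = []
    for c in coeffs:  # approximation subband + detail subbands
        feats += [np.var(c), skew(c), kurtosis(c)]
    return np.array(feats)

iq = np.random.randn(256) + 1j * np.random.randn(256)  # stand-in preamble
print(rf_dna_features(iq).shape)  # 3 statistics x (level + 1) subbands
```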

Journal ArticleDOI
TL;DR: A general framework for facilitating the exchange of trust and reputation information is proposed; it defines messages and a protocol that allow trust and reputation systems to query each other for ratings, provide responses, and signal errors.
Abstract: The fifth mobile generation (5G) will enable massive distributed applications that run on various platforms and cater to diverse and interacting entities. If such interactions are to be successful, the entities will have to learn to trust each other, and one way of addressing this is to use trust and reputation systems. These systems estimate the trustworthiness of potential interaction partners and are now being increasingly deployed. However, their inability to share information across applications is concerning: as entities traverse application boundaries, their trust and reputation information does not. Instead, it is kept in silos, forcing entities to rebuild it in every application they join. The lack of appropriate standards further impedes such sharing attempts. To address this, we propose a general framework for facilitating the exchange of trust and reputation information. The framework defines messages and a protocol that allow trust and reputation systems to query each other for ratings, provide responses, and signal errors. We analyze the proposal and provide an implementation as free software.
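
The query/response/error exchange can be pictured with simple message schemas; every field name below is hypothetical, since the paper defines its own message formats:

```python
import json

# Hypothetical shapes for a rating query, its response, and an error.
query = {"type": "query", "id": 42,
         "subject": "entity:alice@app-a", "context": "seller"}
response = {"type": "response", "id": 42,
            "rating": 0.87, "scale": [0.0, 1.0], "samples": 115}
error = {"type": "error", "id": 42, "code": "UNKNOWN_ENTITY"}

# In this sketch the messages travel as plain JSON between systems.
print(json.dumps(query))
```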

Journal ArticleDOI
TL;DR: The performance of iterative interference alignment (IA) with spatial hole sensing in K -user multi-input multi-output (MIMO) cognitive radio (CR) networks is investigated and the impact of the relaying architecture on the system performance is analyzed.
Abstract: This paper investigates the performance of iterative interference alignment (IA) with spatial hole sensing in K-user multi-input multi-output (MIMO) cognitive radio (CR) networks. In the considered network, there are some unused degrees of freedom (DoF) or equivalently spatial holes in the primary network (PN) where the secondary network (SN) users communicate without causing harmful interference to the PN receivers. First, the generalized likelihood ratio test method is utilized to determine the availability of the unused DoFs; then, it is decided whether individual primary streams are present in the PN. With the aid of precoding and suppression matrices generated by an iterative IA approach, the interferences in the PN that are caused by the SN are aligned, and due to the secondary transmission, interference leakage on the kth primary receiver decreases below 10⁻⁶. The effects of the detection threshold values and the number of transmitter and receiver antennas are investigated in terms of detection and false alarm probability. Finally, the amplify-and-forward (AF) relaying scheme in the SN is evaluated and the impact of the relaying architecture on the system performance is analyzed.
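
Iterative IA schemes of this kind typically alternate between precoder and receive-filter updates so as to minimize the interference leakage seen at each receiver; in the standard formulation (paper-specific normalizations omitted):

```latex
% Interference covariance at receiver k, with precoders V_j, channels
% H_{kj}, and per-stream powers P_j / d_j:
\mathbf{Q}_k = \sum_{j \neq k} \frac{P_j}{d_j}\,
  \mathbf{H}_{kj} \mathbf{V}_j \mathbf{V}_j^{H} \mathbf{H}_{kj}^{H}

% Leakage minimized by the suppression (receive) matrix U_k:
I_k = \operatorname{Tr}\!\left[ \mathbf{U}_k^{H} \mathbf{Q}_k \mathbf{U}_k \right]
```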

Journal ArticleDOI
TL;DR: This research proposes a cyberbullying life cycle model through topic modeling and conceptualizes the different stages of the attack considering criteria associated with computer attacks.
Abstract: Nowadays, cyberbullying cases are more common due to free access to technological resources. Studies of this phenomenon from the fields of computer science and computer security are still limited. Several factors, such as access to specific databases on cyberbullying, the unification of scientific criteria for assessing the nature of the problem, or the absence of real proposals to prevent and mitigate the problem, may explain the lack of significant contributions from researchers in the field of information security. This research proposes a cyberbullying life cycle model built through topic modeling and conceptualizes the different stages of the attack using criteria associated with computer attacks. The proposal is supported by a review of the specific literature and by knowledge gained from the experiences of victims of online harassment and from attackers' tweets.
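
A sketch of the topic-modeling step on harassment-related text; the abstract does not name the algorithm, so LDA here is an assumption, and the three-document corpus is a placeholder:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["example tweet about repeated insults",       # placeholder corpus
        "victim report describing exclusion online",
        "threatening message sent after school"]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vec.get_feature_names_out()
for i, comp in enumerate(lda.components_):  # top words per latent topic
    top = comp.argsort()[-3:][::-1]
    print(f"topic {i}:", [terms[t] for t in top])
```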

Journal ArticleDOI
TL;DR: This work proposes an opportunistic data dissemination protocol (3DMAT) for disaster management in which both WSNs and VANETs participate in decision-making for the dissemination process so that messages are delivered in a timely manner.
Abstract: Disaster management systems (DMSs) aim to mitigate the potential damage from disasters by ensuring immediate and suitable assistance to victims. Disaster management is a challenging problem because, while information needs to be processed in real time, the damaged environment can prevent it from being disseminated to processing centres. Our goal is to exploit available technologies such as Wireless Sensor Networks (WSNs) and Vehicular Ad hoc NETworks (VANETs) to forward alerts from victims to rescue services. We propose an opportunistic data dissemination protocol (3DMAT) for disaster management in which both WSNs and VANETs participate in decision-making for the dissemination process so that messages are delivered in a timely manner. 3DMAT calculates the quality of nodes and of the links between them to select the most relevant relay. The simulation results show that the proposed protocol performs data dissemination more efficiently than other protocols: ours is 47% faster and generates a 17% lower communication load and 14% fewer redundant messages. These improvements are due to a selection strategy that targets the most relevant nodes to relay information.
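
The relay-selection rule can be summarized as a joint score over node quality and link quality; the weighting below is hypothetical:

```python
def relay_score(node_quality: float, link_quality: float,
                w: float = 0.5) -> float:
    """Combine node quality (e.g., residual energy, connectivity) and
    link quality (e.g., delivery ratio toward the candidate)."""
    return w * node_quality + (1 - w) * link_quality

candidates = {"sensor_12": (0.8, 0.6), "vehicle_3": (0.7, 0.9)}
best = max(candidates, key=lambda n: relay_score(*candidates[n]))
print("relay via", best)  # the most relevant node forwards the alert
```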