Showing papers in "IEEE Communications Surveys and Tutorials in 2019"


Journal ArticleDOI
TL;DR: This paper aims to provide a detailed survey of different indoor localization techniques, such as angle of arrival (AoA), time of flight (ToF), return time of flight (RTOF), and received signal strength (RSS), based on technologies such as WiFi, RFID, UWB, and Bluetooth that have been proposed in the literature.
Abstract: Indoor localization has recently witnessed an increase in interest, due to the potential wide range of services it can provide by leveraging Internet of Things (IoT), and ubiquitous connectivity. Different techniques, wireless technologies and mechanisms have been proposed in the literature to provide indoor localization services in order to improve the services provided to the users. However, there is a lack of an up-to-date survey paper that incorporates some of the recently proposed accurate and reliable localization systems. In this paper, we aim to provide a detailed survey of different indoor localization techniques, such as angle of arrival (AoA), time of flight (ToF), return time of flight (RTOF), and received signal strength (RSS); based on technologies, such as WiFi, radio frequency identification device (RFID), ultra wideband (UWB), Bluetooth, and systems that have been proposed in the literature. This paper primarily discusses localization and positioning of human users and their devices. We highlight the strengths of the existing systems proposed in the literature. In contrast with the existing surveys, we also evaluate different systems from the perspective of energy efficiency, availability, cost, reception range, latency, scalability, and tracking accuracy. Rather than comparing the technologies or techniques, we compare the localization systems and summarize their working principle. We also discuss remaining challenges to accurate indoor localization.
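As a concrete illustration of one technique the survey covers, the sketch below inverts the standard log-distance path-loss model to estimate range from an RSS reading. The reference power at 1 m, the path-loss exponent, and the measurement are hypothetical values chosen for the example, not parameters from the paper.

```python
import math

def rss_to_distance(rss_dbm, rss_at_1m_dbm=-40.0, path_loss_exponent=2.5):
    """Invert the log-distance model RSS(d) = RSS(1 m) - 10*n*log10(d) for d (meters)."""
    return 10 ** ((rss_at_1m_dbm - rss_dbm) / (10 * path_loss_exponent))

# Hypothetical reading of -63 dBm from a WiFi anchor:
print(f"estimated range: {rss_to_distance(-63.0):.1f} m")  # ~8.3 m
```

Combining three or more such range estimates from anchors at known positions (trilateration) then yields a position fix; AoA and ToF systems replace the ranging step with angle or timing measurements.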

1,447 citations


Journal ArticleDOI
TL;DR: In this article, a comprehensive tutorial on the potential benefits and applications of UAVs in wireless communications is presented, and the important challenges and the fundamental tradeoffs in UAV-enabled wireless networks are thoroughly investigated.
Abstract: The use of flying platforms such as unmanned aerial vehicles (UAVs), popularly known as drones, is rapidly growing. In particular, with their inherent attributes such as mobility, flexibility, and adaptive altitude, UAVs admit several key potential applications in wireless systems. On the one hand, UAVs can be used as aerial base stations to enhance coverage, capacity, reliability, and energy efficiency of wireless networks. On the other hand, UAVs can operate as flying mobile terminals within a cellular network. Such cellular-connected UAVs can enable several applications ranging from real-time video streaming to item delivery. In this paper, a comprehensive tutorial on the potential benefits and applications of UAVs in wireless communications is presented. Moreover, the important challenges and the fundamental tradeoffs in UAV-enabled wireless networks are thoroughly investigated. In particular, the key UAV challenges such as 3D deployment, performance analysis, channel modeling, and energy efficiency are explored along with representative results. Then, open problems and potential research directions pertaining to UAV communications are introduced. Finally, various analytical frameworks and mathematical tools, such as optimization theory, machine learning, stochastic geometry, transport theory, and game theory are described. The use of such tools for addressing unique UAV problems is also presented. In a nutshell, this tutorial provides key guidelines on how to analyze, optimize, and design UAV-based wireless communication systems.

1,395 citations


Journal ArticleDOI
TL;DR: This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking, and presents applications of DRL for traffic routing, resource sharing, and data collection.
Abstract: This paper presents a comprehensive literature review on applications of deep reinforcement learning (DRL) in communications and networking. Modern networks, e.g., Internet of Things (IoT) and unmanned aerial vehicle (UAV) networks, become more decentralized and autonomous. In such networks, network entities need to make decisions locally to maximize the network performance under uncertainty of network environment. Reinforcement learning has been efficiently used to enable the network entities to obtain the optimal policy including, e.g., decisions or actions, given their states when the state and action spaces are small. However, in complex and large-scale networks, the state and action spaces are usually large, and the reinforcement learning may not be able to find the optimal policy in reasonable time. Therefore, DRL, a combination of reinforcement learning with deep learning, has been developed to overcome the shortcomings. In this survey, we first give a tutorial of DRL from fundamental concepts to advanced models. Then, we review DRL approaches proposed to address emerging issues in communications and networking. The issues include dynamic network access, data rate control, wireless caching, data offloading, network security, and connectivity preservation which are all important to next generation networks, such as 5G and beyond. Furthermore, we present applications of DRL for traffic routing, resource sharing, and data collection. Finally, we highlight important challenges, open issues, and future research directions of applying DRL.
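To make the tabular baseline that DRL generalizes concrete, here is a minimal Q-learning loop of the kind that works when state and action spaces are small; the toy environment and parameter values are illustrative assumptions. DRL replaces the Q table with a neural network once the spaces grow too large to enumerate.

```python
import numpy as np

n_states, n_actions = 16, 4          # tiny toy problem; DRL targets cases where this explodes
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate

def step(state, action):
    """Hypothetical environment: returns (next_state, reward)."""
    return (state + 1) % n_states, 1.0 if action == state % n_actions else 0.0

state = 0
for _ in range(10_000):
    # epsilon-greedy action selection
    action = np.random.randint(n_actions) if np.random.rand() < eps else int(Q[state].argmax())
    next_state, reward = step(state, action)
    # temporal-difference update toward reward + discounted best next value
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state
```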

1,153 citations


Journal ArticleDOI
TL;DR: This paper bridges the gap between deep learning and mobile and wireless networking research by presenting a comprehensive survey of the crossovers between the two areas, and provides an encyclopedic review of mobile and wireless networking research based on deep learning, which is categorized by different domains.
Abstract: The rapid uptake of mobile devices and the rising popularity of mobile applications and services pose unprecedented demands on mobile and wireless networking infrastructure. Upcoming 5G systems are evolving to support exploding mobile traffic volumes, real-time extraction of fine-grained analytics, and agile management of network resources, so as to maximize user experience. Fulfilling these tasks is challenging, as mobile environments are increasingly complex, heterogeneous, and evolving. One potential solution is to resort to advanced machine learning techniques, in order to help manage the rise in data volumes and algorithm-driven applications. The recent success of deep learning underpins new and powerful tools that tackle problems in this space. In this paper, we bridge the gap between deep learning and mobile and wireless networking research, by presenting a comprehensive survey of the crossovers between the two areas. We first briefly introduce essential background and state-of-the-art in deep learning techniques with potential applications to networking. We then discuss several techniques and platforms that facilitate the efficient deployment of deep learning onto mobile systems. Subsequently, we provide an encyclopedic review of mobile and wireless networking research based on deep learning, which we categorize by different domains. Drawing from our experience, we discuss how to tailor deep learning to mobile environments. We complete this survey by pinpointing current challenges and open future directions for research.

975 citations


Journal ArticleDOI
TL;DR: In this paper, the authors provide a comprehensive survey of all of these developments promoting smooth integration of UAVs into cellular networks, including the types of consumer UAVs currently available off-the-shelf, the interference issues and potential solutions addressed by standardization bodies for serving aerial users with the existing terrestrial BSs, and the challenges and opportunities for assisting cellular communications with UAV-based flying relays and BSs.
Abstract: The rapid growth of consumer unmanned aerial vehicles (UAVs) is creating promising new business opportunities for cellular operators. On the one hand, UAVs can be connected to cellular networks as new types of user equipment, therefore generating significant revenues for the operators that can guarantee their stringent service requirements. On the other hand, UAVs offer the unprecedented opportunity to realize UAV-mounted flying base stations (BSs) that can dynamically reposition themselves to boost coverage, spectral efficiency, and user quality of experience. Indeed, the standardization bodies are currently exploring possibilities for serving commercial UAVs with cellular networks. Industries are beginning to trial early prototypes of flying BSs or user equipments, while academia is in full swing researching mathematical and algorithmic solutions to address interesting new problems arising from flying nodes in cellular networks. In this paper, we provide a comprehensive survey of all of these developments promoting smooth integration of UAVs into cellular networks. Specifically, we survey: 1) the types of consumer UAVs currently available off-the-shelf; 2) the interference issues and potential solutions addressed by standardization bodies for serving aerial users with the existing terrestrial BSs; 3) the challenges and opportunities for assisting cellular communications with UAV-based flying relays and BSs; 4) the ongoing prototyping and test bed activities; 5) the new regulations being developed to manage the commercial use of UAVs; and 6) the cyber-physical security of UAV-assisted cellular communications.

667 citations


Journal ArticleDOI
TL;DR: This paper constitutes the first holistic tutorial on the development of ANN-based ML techniques tailored to the needs of future wireless networks, and overviews how artificial neural network (ANN)-based ML algorithms can be employed for solving various wireless networking problems.
Abstract: In order to effectively provide ultra reliable low latency communications and pervasive connectivity for Internet of Things (IoT) devices, next-generation wireless networks can leverage intelligent, data-driven functions enabled by the integration of machine learning (ML) notions across the wireless core and edge infrastructure. In this context, this paper provides a comprehensive tutorial that overviews how artificial neural network (ANN)-based ML algorithms can be employed for solving various wireless networking problems. For this purpose, we first present a detailed overview of a number of key types of ANNs, including recurrent, spiking, and deep neural networks, that are pertinent to wireless networking applications. For each type of ANN, we present the basic architecture as well as specific examples that are particularly important and relevant to wireless network design. Such ANN examples include echo state networks, liquid state machines, and long short-term memory. Then, we provide an in-depth overview of the variety of wireless communication problems that can be addressed using ANNs, ranging from communication using unmanned aerial vehicles to virtual reality applications over wireless networks as well as edge computing and caching. For each individual application, we present the main motivation for using ANNs along with the associated challenges, and we also provide a detailed example for a use case scenario and outline future works that can be addressed using ANNs. In a nutshell, this paper constitutes the first holistic tutorial on the development of ANN-based ML techniques tailored to the needs of future wireless networks.
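As a sketch of one of the ANN examples named above, the following minimal echo state network trains only a linear readout on top of a fixed random reservoir; the reservoir size, spectral radius, and the sine-prediction task are illustrative assumptions, not a setup from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))     # keep spectral radius below 1 (echo state property)

u = np.sin(0.1 * np.arange(2000))[:, None]    # toy signal: predict the next sample
x, states = np.zeros(n_res), []
for t in range(len(u)):
    x = np.tanh(W_in @ u[t] + W @ x)          # reservoir update; these weights stay untrained
    states.append(x.copy())

X, y = np.array(states[200:-1]), u[201:]      # drop washout, align with next-step targets
W_out = np.linalg.lstsq(X, y, rcond=None)[0]  # only the readout is fit, by least squares
print("train MSE:", float(((X @ W_out - y) ** 2).mean()))
```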

666 citations


Journal ArticleDOI
TL;DR: This survey aims to shape a coherent and comprehensive picture of the current state-of-the-art efforts in this direction by starting with fundamental working principles of blockchains and how blockchain-based systems achieve the characteristics of decentralization, security, and auditability.
Abstract: The blockchain technology has revolutionized the digital currency space with the pioneering cryptocurrency platform named Bitcoin. From an abstract perspective, a blockchain is a distributed ledger capable of maintaining an immutable log of transactions happening in a network. In recent years, this technology has attracted significant scientific interest in research areas beyond the financial sector, one of them being the Internet of Things (IoT). In this context, the blockchain is seen as the missing link toward building a truly decentralized, trustless, and secure environment for the IoT and, in this survey, we aim to shape a coherent and comprehensive picture of the current state-of-the-art efforts in this direction. We start with fundamental working principles of blockchains and how blockchain-based systems achieve the characteristics of decentralization, security, and auditability. From there, we build our narrative on the challenges posed by the current centralized IoT models, followed by recent advances made both in industry and research to solve these challenges and effectively use blockchains to provide a decentralized, secure medium for the IoT.
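To ground the "immutable log" idea in code, here is a hedged, minimal hash-chained ledger; it deliberately omits consensus, signatures, and peer-to-peer networking, which are the pieces real blockchain platforms add on top.

```python
import hashlib, json, time

def make_block(transactions, prev_hash):
    block = {"time": time.time(), "tx": transactions, "prev": prev_hash}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

chain = [make_block(["genesis"], "0" * 64)]
chain.append(make_block(["alice->bob: 5"], chain[-1]["hash"]))

def verify(chain):
    """Tampering with any field breaks every later link in the chain."""
    for prev, cur in zip(chain, chain[1:]):
        body = {k: v for k, v in cur.items() if k != "hash"}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if cur["prev"] != prev["hash"] or cur["hash"] != recomputed:
            return False
    return True

print(verify(chain))  # True until any block is modified
```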

553 citations


Journal ArticleDOI
TL;DR: In this paper, a comprehensive survey is provided on available air-to-ground (AG) channel measurement campaigns, large and small scale fading channel models, their limitations, and future research directions for UAV communication scenarios.
Abstract: In recent years, there has been a dramatic increase in the use of unmanned aerial vehicles (UAVs), particularly for small UAVs, due to their affordable prices, wide availability, and relative ease of operability. Existing and future applications of UAVs include remote surveillance and monitoring, relief operations, package delivery, and communication backhaul infrastructure. Additionally, UAVs are envisioned as an important component of 5G wireless technology and beyond. The unique application scenarios for UAVs necessitate accurate air-to-ground (AG) propagation channel models for designing and evaluating UAV communication links for control/non-payload as well as payload data transmissions. These AG propagation models have not been investigated in detail, relative to terrestrial propagation models. In this paper, a comprehensive survey is provided on available AG channel measurement campaigns, large and small scale fading channel models, their limitations, and future research directions for UAV communication scenarios.
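As a worked example of the large-scale propagation modeling the survey reviews, the snippet below evaluates free-space path loss plus a log-normal shadowing term for a hypothetical UAV-to-ground link; the carrier frequency, geometry, and shadowing spread are assumptions for illustration, not values from any measurement campaign in the paper.

```python
import math, random

def ag_path_loss_db(distance_m, freq_hz, shadow_sigma_db=4.0):
    """Free-space path loss (Friis) plus zero-mean log-normal shadowing, in dB."""
    fspl = 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55
    return fspl + random.gauss(0.0, shadow_sigma_db)

# Hypothetical link: UAV at 100 m altitude, 300 m horizontal offset, 2.4 GHz carrier.
d = math.hypot(100.0, 300.0)
print(f"path loss ≈ {ag_path_loss_db(d, 2.4e9):.1f} dB")  # ~90 dB plus shadowing
```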

522 citations


Journal ArticleDOI
TL;DR: This survey classifies the security threats and challenges for IoT networks by evaluating existing defense techniques and provides a comprehensive review of NIDSs deploying different aspects of learning techniques for IoT, unlike other top surveys targeting traditional systems.
Abstract: Pervasive growth of the Internet of Things (IoT) is visible across the globe. The 2016 Dyn cyberattack exposed the critical fault-lines among smart networks. Security of IoT has become a critical concern. The danger posed by infested Internet-connected Things not only affects the security of IoT but also threatens the complete Internet ecosystem, since vulnerable Things (smart devices) can be exploited and deployed as botnets. The Mirai malware compromised video surveillance devices and paralyzed the Internet via distributed denial of service attacks. In the recent past, security attack vectors have evolved both ways, in terms of complexity and diversity. Hence, to identify and prevent or detect novel attacks, it is important to analyze techniques in the IoT context. This survey classifies the IoT security threats and challenges for IoT networks by evaluating existing defense techniques. Our main focus is on network intrusion detection systems (NIDSs); hence, this paper reviews existing NIDS implementation tools and datasets as well as free and open-source network sniffing software. Then, it surveys, analyzes, and compares state-of-the-art NIDS proposals in the IoT context in terms of architecture, detection methodologies, validation strategies, treated threats, and algorithm deployments. The review deals with both traditional and machine learning (ML) NIDS techniques and discusses future directions. In this survey, our focus is on IoT NIDS deployed via ML, since learning algorithms have a good success rate in security and privacy. The survey provides a comprehensive review of NIDSs deploying different aspects of learning techniques for IoT, unlike other top surveys targeting traditional systems. We believe that this paper will be useful for academia and industry research: first, to identify IoT threats and challenges; second, to implement their own NIDS; and finally, to propose new smart techniques in the IoT context considering IoT limitations. Moreover, the survey will enable security practitioners to differentiate IoT NIDS from traditional ones.
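As a hedged sketch of the ML-based NIDS approach on which the survey focuses, the code below trains a random-forest classifier to separate benign from attack flows; the four flow features and the synthetic data are hypothetical stand-ins for a real labeled dataset of the kind the paper reviews.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
# Hypothetical flow features: [duration_s, bytes, packets, distinct_ports]
benign = rng.normal([2.0, 5e4, 40, 2], [1.0, 2e4, 15, 1], size=(500, 4))
attack = rng.normal([0.2, 1e3, 5, 30], [0.1, 5e2, 2, 10], size=(500, 4))
X = np.vstack([benign, attack])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te), target_names=["benign", "attack"]))
```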

494 citations


Journal ArticleDOI
TL;DR: This survey investigates some of the work that has been done to enable the integrated blockchain and edge computing system and discusses the research challenges, identifying several vital aspects of the integration of blockchain and edge computing: motivations, frameworks, enabling functionalities, and challenges.
Abstract: Blockchain, as the underlying technology of cryptocurrencies, has attracted significant attention. It has been adopted in numerous applications, such as smart grid and Internet-of-Things. However, there is a significant scalability barrier for blockchain, which limits its ability to support services with frequent transactions. On the other side, edge computing is introduced to extend the cloud resources and services to be distributed at the edge of the network, but currently faces challenges in its decentralized management and security. The integration of blockchain and edge computing into one system can enable reliable access and control of the network, storage, and computation distributed at the edges, hence providing a large scale of network servers, data storage, and validity computation near the end users in a secure manner. Despite the prospect of integrated blockchain and edge computing systems, scalability enhancement, self-organization, functions integration, resource management, and new security issues remain to be addressed before widespread deployment. In this survey, we investigate some of the work that has been done to enable the integrated blockchain and edge computing system and discuss the research challenges. We identify several vital aspects of the integration of blockchain and edge computing: motivations, frameworks, enabling functionalities, and challenges. Finally, some broader perspectives are explored.

488 citations


Journal ArticleDOI
TL;DR: A comprehensive survey on the literature involving blockchain technology applied to smart cities, from the perspectives of smart citizen, smart healthcare, smart grid, smart transportation, supply chain management, and others is provided.
Abstract: In recent years, the rapid urbanization of the world’s population has caused many economic, social, and environmental problems, which significantly affect people’s living conditions and quality of life. The concept of the “smart city” brings opportunities to solve these urban problems. The objectives of smart cities are to make the best use of public resources, provide high-quality services to citizens, and improve people’s quality of life. Information and communication technology plays an important role in the implementation of smart cities. Blockchain as an emerging technology has many good features, such as trust-free, transparency, pseudonymity, democracy, automation, decentralization, and security. These features of blockchain are helpful to improve smart city services and promote the development of smart cities. In this paper, we provide a comprehensive survey of the literature involving blockchain technology applied to smart cities. First, the related works and background knowledge are introduced. Then, we review how blockchain technology is applied in the realm of smart cities, from the perspectives of smart citizen, smart healthcare, smart grid, smart transportation, supply chain management, and others. Finally, some challenges and broader perspectives are discussed.

Journal ArticleDOI
TL;DR: A conceptual, generic, and expandable framework for classifying the existing PLS techniques against wireless passive eavesdropping is proposed, and the security techniques that are reviewed are divided into two primary approaches: signal-to-interference-plus-noise ratio-based approach and complexity-based approach.
Abstract: Physical layer security (PLS) has emerged as a new concept and powerful alternative that can complement and may even replace encryption-based approaches, which entail many hurdles and practical problems for future wireless systems. The basic idea of PLS is to exploit the characteristics of the wireless channel and its impairments including noise, fading, interference, dispersion, diversity, etc. in order to ensure the ability of the intended user to successfully perform data decoding while preventing eavesdroppers from doing so. Thus, the main design goal of PLS is to increase the performance difference between the link of the legitimate receiver and that of the eavesdropper by using well-designed transmission schemes. In this survey, we propose a conceptual, generic, and expandable framework for classifying the existing PLS techniques against wireless passive eavesdropping. In this flexible framework, the security techniques that we comprehensively review in this treatise are divided into two primary approaches: signal-to-interference-plus-noise ratio-based approach and complexity-based approach. The first approach is classified into three major categories: first, secrecy channel codes-based schemes; second, security techniques based on channel adaptation; third, schemes based on injecting interfering artificial (noise/jamming) signals along with the transmitted information signals. The second approach (complexity-based), which is associated with the mechanisms of extracting secret sequences from the shared channel, is classified into two main categories based on which layer the secret sequence obtained by channel quantization is applied on. The techniques belonging to each one of these categories are divided and classified into three main signal domains: time, frequency and space. For each one of these domains, several examples are given and illustrated along with the review of the state-of-the-art security advances in each domain. Moreover, the advantages and disadvantages of each approach alongside the lessons learned from existing research works are stated and discussed. The recent applications of PLS techniques to different emerging communication systems such as visible light communication, body area network, power line communication, Internet of Things, smart grid, mm-Wave, cognitive radio, vehicular ad-hoc network, unmanned aerial vehicle, ultra-wideband, device-to-device, radio-frequency identification, index modulation, and 5G non-orthogonal multiple access based-systems, are also reviewed and discussed. The paper is concluded with recommendations and future research directions for designing robust, efficient and strong security methods for current and future wireless systems.
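The PLS design goal stated above is usually quantified by the secrecy capacity, the rate gap between the legitimate link and the eavesdropper link. For the Gaussian wiretap channel it takes the standard textbook form (a general result, not a formula quoted from this paper):

$$C_s = \left[\log_2\left(1 + \mathrm{SNR}_B\right) - \log_2\left(1 + \mathrm{SNR}_E\right)\right]^{+},$$

where $\mathrm{SNR}_B$ and $\mathrm{SNR}_E$ are the signal-to-noise ratios at the legitimate receiver and the eavesdropper, and $[x]^{+} = \max(x, 0)$. The SINR-based techniques in the first approach can all be read as ways of enlarging this gap.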

Journal ArticleDOI
TL;DR: A unique taxonomy is provided, which sheds light on IoT vulnerabilities, their attack vectors, impacts on numerous security objectives, attacks which exploit such vulnerabilities, corresponding remediation methodologies, and currently offered operational cyber security capabilities to infer and monitor such weaknesses.
Abstract: The security issue impacting the Internet-of-Things (IoT) paradigm has recently attracted significant attention from the research community. To this end, several surveys were put forward addressing various IoT-centric topics, including intrusion detection systems, threat modeling, and emerging technologies. In contrast, in this paper, we exclusively focus on the ever-evolving IoT vulnerabilities. In this context, we initially provide a comprehensive classification of state-of-the-art surveys, which address various dimensions of the IoT paradigm. This aims at facilitating IoT research endeavors by amalgamating, comparing, and contrasting dispersed research contributions. Subsequently, we provide a unique taxonomy, which sheds light on IoT vulnerabilities, their attack vectors, impacts on numerous security objectives, attacks which exploit such vulnerabilities, corresponding remediation methodologies, and currently offered operational cyber security capabilities to infer and monitor such weaknesses. This aims at providing the reader with a multidimensional research perspective related to IoT vulnerabilities, including their technical details and consequences, which is postulated to be leveraged for remediation objectives. Additionally, motivated by the lack of empirical (and malicious) data related to the IoT paradigm, this paper also presents a first look at Internet-scale IoT exploitations by drawing upon more than 1.2 GB of macroscopic, passive measurements’ data. This aims at practically highlighting the severity of the IoT problem, while providing operational situational awareness capabilities, which undoubtedly would aid in the mitigation task, at large. Insightful findings, inferences, and outcomes in addition to open challenges and research problems are also disclosed in this paper, which we hope would pave the way for future research endeavors addressing theoretical and empirical aspects related to the imperative topic of IoT security.

Journal ArticleDOI
TL;DR: An overview of the application of ML to optical communications and networking is provided, relevant literature is classified and surveyed, and an introductory tutorial on ML is provided for researchers and practitioners interested in this field.
Abstract: Today’s telecommunication networks have become sources of enormous amounts of widely heterogeneous data. This information can be retrieved from network traffic traces, network alarms, signal quality indicators, users’ behavioral data, etc. Advanced mathematical tools are required to extract meaningful information from these data and take decisions pertaining to the proper functioning of the networks from the network-generated data. Among these mathematical tools, machine learning (ML) is regarded as one of the most promising methodological approaches to perform network-data analysis and enable automated network self-configuration and fault management. The adoption of ML techniques in the field of optical communication networks is motivated by the unprecedented growth of network complexity faced by optical networks in the last few years. Such complexity increase is due to the introduction of a huge number of adjustable and interdependent system parameters (e.g., routing configurations, modulation format, symbol rate, coding schemes, etc.) that are enabled by the usage of coherent transmission/reception technologies, advanced digital signal processing, and compensation of nonlinear effects in optical fiber propagation. In this paper we provide an overview of the application of ML to optical communications and networking. We classify and survey relevant literature dealing with the topic, and we also provide an introductory tutorial on ML for researchers and practitioners interested in this field. Although a good number of research papers have recently appeared, the application of ML to optical networks is still in its infancy: to stimulate further work in this area, we conclude this paper proposing new possible research directions.

Journal ArticleDOI
TL;DR: This paper provides a comprehensive survey on the literature involving machine learning algorithms applied to SDN, from the perspective of traffic classification, routing optimization, quality of service/quality of experience prediction, resource management and security.
Abstract: In recent years, with the rapid development of current Internet and mobile communication technologies, the infrastructure, devices and resources in networking systems are becoming more complex and heterogeneous. In order to efficiently organize, manage, maintain and optimize networking systems, more intelligence needs to be deployed. However, due to the inherently distributed feature of traditional networks, machine learning techniques are hard to be applied and deployed to control and operate networks. Software defined networking (SDN) brings us new chances to provide intelligence inside the networks. The capabilities of SDN (e.g., logically centralized control, global view of the network, software-based traffic analysis, and dynamic updating of forwarding rules) make it easier to apply machine learning techniques. In this paper, we provide a comprehensive survey on the literature involving machine learning algorithms applied to SDN. First, the related works and background knowledge are introduced. Then, we present an overview of machine learning algorithms. In addition, we review how machine learning algorithms are applied in the realm of SDN, from the perspective of traffic classification, routing optimization, quality of service/quality of experience prediction, resource management and security. Finally, challenges and broader perspectives are discussed.

Journal ArticleDOI
TL;DR: A detailed investigation and analysis of various machine learning techniques is carried out to find the cause of the problems these techniques face in detecting intrusive activities, and future directions are provided for attack detection using machine learning.
Abstract: Intrusion detection is one of the important security problems in today’s cyber world. A significant number of techniques have been developed which are based on machine learning approaches. However, they are not very successful in identifying all types of intrusions. In this paper, a detailed investigation and analysis of various machine learning techniques has been carried out to find the cause of the problems associated with them in detecting intrusive activities. Attack classification and mapping of the attack features is provided corresponding to each attack. Issues related to detecting low-frequency attacks using network attack datasets are also discussed, and viable methods are suggested for improvement. Machine learning techniques have been analyzed and compared in terms of their detection capability for detecting the various categories of attacks. Limitations associated with each category of them are also discussed. Various data mining tools for machine learning have also been included in the paper. At the end, future directions are provided for attack detection using machine learning techniques.

Journal ArticleDOI
TL;DR: This tutorial paper helps the reader to smoothly enter into the several major 802.11ax breakthroughs, including a brand new orthogonal frequency-division multiple access-based random access approach as well as novel spatial frequency reuse techniques.
Abstract: While celebrating the 21st year since the very first IEEE 802.11 “legacy” 2 Mbit/s wireless local area network standard, the latest Wi-Fi newborn is today reaching the finish line, topping the remarkable speed of 10 Gbit/s. IEEE 802.11ax was launched in May 2014 with the goal of enhancing throughput-per-area in high-density scenarios. The first 802.11ax draft versions, namely, D1.0 and D2.0, were released at the end of 2016 and 2017. Focusing on a more mature version D3.0, in this tutorial paper, we help the reader to smoothly enter into the several major 802.11ax breakthroughs, including a brand new orthogonal frequency-division multiple access-based random access approach as well as novel spatial frequency reuse techniques. In addition, this tutorial will highlight selected significant improvements (including physical layer enhancements, multi-user multiple input multiple output extensions, power saving advances, and so on) which make this standard a very significant step forward with respect to its predecessor 802.11ac.
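As a hedged sketch of the OFDMA-based random access (UORA) mentioned above, the simulation below implements the OFDMA back-off (OBO) rule as commonly described for 802.11ax: each station draws an OBO counter, decrements it by the number of random-access resource units (RUs) announced in each trigger frame, and transmits on a randomly chosen RU once the counter reaches zero. The station count, RU count, and contention window are illustrative, and collision handling is simplified to a redraw rather than the standard's window doubling.

```python
import random

N_RU, OCW = 8, 31   # random-access RUs per trigger frame; contention window
stations = {s: random.randint(0, OCW) for s in range(20)}  # OBO counter per station

def trigger_frame_round(stations):
    """One trigger-frame round of simplified UORA-style random access."""
    chosen = {}
    for s in list(stations):
        stations[s] -= N_RU                        # decrement OBO by the RUs on offer
        if stations[s] <= 0:
            chosen.setdefault(random.randrange(N_RU), []).append(s)
            del stations[s]
    for ru, txs in chosen.items():
        if len(txs) == 1:                          # RU succeeds only with a single transmitter
            print(f"RU {ru}: station {txs[0]} succeeded")
        else:                                      # collision: re-contend (simplified redraw)
            for s in txs:
                stations[s] = random.randint(0, OCW)

trigger_frame_round(stations)
```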

Journal ArticleDOI
TL;DR: A comprehensive review of state-of-the-art results for autonomous car technology is presented and several challenges that must be addressed by designers, implementers, policymakers, regulatory organizations, and car manufacturers are discussed.
Abstract: Throughout the last century, the automobile industry achieved remarkable milestones in manufacturing reliable, safe, and affordable vehicles. Because of significant recent advances in computation and communication technologies, autonomous cars are becoming a reality. Already autonomous car prototype models have covered millions of miles in test driving. Leading technical companies and car manufacturers have invested a staggering amount of resources in autonomous car technology, as they prepare for autonomous cars’ full commercialization in the coming years. However, to achieve this goal, several technical and nontechnical issues remain: software complexity, real-time data analytics, and testing and verification are among the greater technical challenges; and consumer stimulation, insurance management, and ethical/moral concerns rank high among the nontechnical issues. Tackling these challenges requires thoughtful solutions that satisfy consumers, industry, and governmental requirements, regulations, and policies. Thus, here we present a comprehensive review of state-of-the-art results for autonomous car technology. We discuss current issues that hinder autonomous cars’ development and deployment on a large scale. We also highlight autonomous car applications that will benefit consumers and many other sectors. Finally, to enable cost-effective, safe, and efficient autonomous cars, we discuss several challenges that must be addressed (and provide helpful suggestions for adoption) by designers, implementers, policymakers, regulatory organizations, and car manufacturers.

Journal ArticleDOI
TL;DR: It will be illustrated that the best strategy depends on the specific environment in which the nodes are deployed, and guidelines to inform the optimal choice as a function of the system parameters are given.
Abstract: The millimeter wave (mmWave) frequencies offer the availability of huge bandwidths to provide unprecedented data rates to next-generation cellular mobile terminals. However, mmWave links are highly susceptible to rapid channel variations and suffer from severe free-space pathloss and atmospheric absorption. To address these challenges, the base stations and the mobile terminals will use highly directional antennas to achieve sufficient link budget in wide area networks. The consequence is the need for precise alignment of the transmitter and the receiver beams, an operation which may increase the latency of establishing a link, and has important implications for control layer procedures, such as initial access, handover and beam tracking. This tutorial provides an overview of recently proposed measurement techniques for beam and mobility management in mmWave cellular networks, and gives insights into the design of accurate, reactive and robust control schemes suitable for a 3GPP New Radio (NR) cellular network. We will illustrate that the best strategy depends on the specific environment in which the nodes are deployed, and give guidelines to inform the optimal choice as a function of the system parameters.
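To make the beam-alignment problem concrete, here is an illustrative exhaustive sweep over uniform-linear-array codebooks: transmitter and receiver test beam pairs and keep the pair with the highest received power. The array sizes, codebooks, and single-path channel are assumptions for the sketch, not a 3GPP procedure; the smarter schemes surveyed in the paper aim to cut the number of pairs that must be measured.

```python
import numpy as np

def steering(n, theta):
    """ULA response for half-wavelength spacing at angle theta (radians)."""
    return np.exp(1j * np.pi * np.arange(n) * np.sin(theta)) / np.sqrt(n)

n_tx, n_rx = 16, 8
tx_beams = [steering(n_tx, a) for a in np.linspace(-np.pi / 2, np.pi / 2, 16)]
rx_beams = [steering(n_rx, a) for a in np.linspace(-np.pi / 2, np.pi / 2, 8)]

# Single-path channel with random angles of departure and arrival.
rng = np.random.default_rng(1)
aod, aoa = rng.uniform(-np.pi / 2, np.pi / 2, 2)
H = steering(n_rx, aoa)[:, None] @ steering(n_tx, aod)[None, :].conj()

# Exhaustive sweep: measure |w^H H f|^2 for every (tx, rx) beam pair.
power = lambda f, w: abs(w.conj() @ H @ f) ** 2
best = max(((i, j) for i in range(len(tx_beams)) for j in range(len(rx_beams))),
           key=lambda ij: power(tx_beams[ij[0]], rx_beams[ij[1]]))
print("best (tx, rx) beam pair:", best)
```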

Journal ArticleDOI
TL;DR: This paper surveys the networking and communication technologies in autonomous driving from two aspects: intra- and inter-vehicle.
Abstract: The development of light detection and ranging, Radar, camera, and other advanced sensor technologies inaugurated a new era in autonomous driving. However, due to the intrinsic limitations of these sensors, autonomous vehicles are prone to making erroneous decisions and causing serious disasters. At this point, networking and communication technologies can greatly make up for sensor deficiencies, and are more reliable, feasible and efficient to promote the information interaction, thereby improving autonomous vehicle’s perception and planning capabilities as well as realizing better vehicle control. This paper surveys the networking and communication technologies in autonomous driving from two aspects: intra- and inter-vehicle. The intra-vehicle network as the basis of realizing autonomous driving connects the on-board electronic parts. The inter-vehicle network is the medium for interaction between vehicles and outside information. In addition, we present the new trends of communication technologies in autonomous driving, as well as investigate the current mainstream verification methods and emphasize the challenges and open issues of networking and communications in autonomous driving.

Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of the recent advances of ML in wireless communication, which are classified as: resource management in the MAC layer, networking and mobility management in the network layer, and localization in the application layer.
Abstract: As a key technique for enabling artificial intelligence, machine learning (ML) is capable of solving complex problems without explicit programming. Motivated by its successful applications to many practical tasks like image recognition, both industry and the research community have advocated the applications of ML in wireless communication. This paper comprehensively surveys the recent advances of the applications of ML in wireless communication, which are classified as: resource management in the MAC layer, networking and mobility management in the network layer, and localization in the application layer. The applications in resource management further include power control, spectrum management, backhaul management, cache management, beamformer design, and computation resource management, while ML-based networking focuses on the applications in clustering, base station switching control, user association, and routing. Moreover, the literature in each aspect is organized according to the adopted ML techniques. In addition, several conditions for applying ML to wireless communication are identified to help readers decide whether to use ML and which kind of ML techniques to use. Traditional approaches are also summarized together with their performance comparison with ML-based approaches, based on which the motivations of the surveyed works to adopt ML are clarified. Given the extensiveness of the research area, challenges and unresolved issues are presented to facilitate future studies. Specifically, ML-based network slicing, infrastructure update to support ML-based paradigms, open data sets and platforms for researchers, theoretical guidance for ML implementation, and so on are discussed.

Journal ArticleDOI
TL;DR: A survey on existing works in the MCS domain is presented and a detailed taxonomy is proposed to shed light on the current landscape and classify applications, methodologies, and architectures to outline potential future research directions and synergies with other research areas.
Abstract: Mobile crowdsensing (MCS) has gained significant attention in recent years and has become an appealing paradigm for urban sensing. For data collection, MCS systems rely on contribution from mobile devices of a large number of participants or a crowd. Smartphones, tablets, and wearable devices are deployed widely and already equipped with a rich set of sensors, making them an excellent source of information. Mobility and intelligence of humans guarantee higher coverage and better context awareness if compared to traditional sensor networks. At the same time, individuals may be reluctant to share data for privacy concerns. For this reason, MCS frameworks are specifically designed to include incentive mechanisms and address privacy concerns. Despite the growing interest in the research community, MCS solutions need a deeper investigation and categorization on many aspects that span from sensing and communication to system management and data storage. In this paper, we take the research on MCS a step further by presenting a survey on existing works in the domain and propose a detailed taxonomy to shed light on the current landscape and classify applications, methodologies, and architectures. Our objective is not only to analyze and consolidate past research but also to outline potential future research directions and synergies with other research areas.

Journal ArticleDOI
TL;DR: In this article, the authors provide an up-to-date comprehensive survey of the IEEE TSN and IETF DetNet standards and related research studies and identify the pitfalls and limitations of the existing standards and research studies.
Abstract: Many network applications, e.g., industrial control, demand ultra-low latency (ULL). However, traditional packet networks can only reduce the end-to-end latencies to the order of tens of milliseconds. The IEEE 802.1 time sensitive networking (TSN) standard and related research studies have sought to provide link layer support for ULL networking, while the emerging IETF deterministic networking (DetNet) standards seek to provide the complementary network layer ULL support. This paper provides an up-to-date comprehensive survey of the IEEE TSN and IETF DetNet standards and the related research studies. The survey of these standards and research studies is organized according to the main categories of flow concept, flow synchronization, flow management, flow control, and flow integrity. ULL networking mechanisms play a critical role in the emerging fifth generation (5G) network access chain from wireless devices via access, backhaul, and core networks. We survey the studies that specifically target the support of ULL in 5G networks, with the main categories of fronthaul, backhaul, and network management. Throughout, we identify the pitfalls and limitations of the existing standards and research studies. This survey can thus serve as a basis for the development of standards enhancements and future ULL research studies that address the identified pitfalls and limitations.

Journal ArticleDOI
TL;DR: This paper provides a comprehensive analysis of security features introduced by NFV and SDN, describing the manifold strategies able to monitor, protect, and react to IoT security threats, and discusses the open challenges related to emerging SDN- and NFV-based security mechanisms.
Abstract: The explosive rise of Internet of Things (IoT) systems has notably increased the potential attack surfaces for cybercriminals. Accounting for the features and constraints of IoT devices, traditional security countermeasures can be inefficient in dynamic IoT environments. In this vein, the advantages introduced by software defined networking (SDN) and network function virtualization (NFV) have the potential to reshape the landscape of cybersecurity for IoT systems. To this aim, we provide a comprehensive analysis of security features introduced by NFV and SDN, describing the manifold strategies able to monitor, protect, and react to IoT security threats. We also present lessons learned in the adoption of SDN/NFV-based protection approaches in IoT environments, comparing them with conventional security countermeasures. Finally, we deeply discuss the open challenges related to emerging SDN- and NFV-based security mechanisms, aiming to provide promising directives to conduct future research in this fervent area.

Journal ArticleDOI
TL;DR: This paper presents a comprehensive state-of-the-art survey of VLC technology, from its physical aspects and communication architecture to its main applications and research challenges, and describes the main research platforms available today.
Abstract: During the last decade, the exponential growth of mobile devices and wireless services created a huge demand for radio frequency-based technologies. Meanwhile, the lighting industry has been revolutionized due to the popularization of LED light bulbs, which are more economical and efficient. In that context, visible light communication (VLC) is a disruptive technology based on LEDs that offers a free spectrum and high data rate, which can potentially serve as a complementary technology to the current radio frequency standards. In this paper, we present a comprehensive state-of-the-art survey of VLC, as well as the main concepts and challenges related to this emergent area. We overview VLC technology, from its physical aspects and communication architecture to its main applications and research challenges. Finally, we present the main research platforms available today, along with a deep analysis of the system design and future directions in the field.

Journal ArticleDOI
TL;DR: This paper presents for the first time a comprehensive overview systematizing the different work directions for both research and industry, while providing a detailed description of each functional split option and an assessment of the advantages and disadvantages.
Abstract: Paving the way toward 5G has led researchers and industry in the direction of centralized processing known from Cloud Radio Access Networks (C-RAN). In C-RAN research, a variety of different functional splits is presented under different names, focusing on different directions. The functional split determines how many base station functions to leave locally, close to the user, with the benefit of relaxing fronthaul network bitrate and delay requirements, and how many functions to centralize with the possibility of achieving greater processing benefits. This paper presents for the first time a comprehensive overview systematizing the different work directions for both research and industry, while providing a detailed description of each functional split option and an assessment of the advantages and disadvantages. This paper gives an overview of where the most effort has been directed in terms of functional splits, and where there is room for further studies. The standardization currently taking place is also considered and mapped into the research directions. It is investigated how the fronthaul network will be affected by the choice of functional split, both in terms of bitrates and latency, and as the different functional splits provide different advantages and disadvantages, the option of flexible functional splits is also looked into.
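To illustrate how the split choice drives fronthaul bitrate, the arithmetic below evaluates the commonly cited CPRI-style rate for a fully centralized split, where raw time-domain I/Q samples cross the fronthaul; the LTE 20 MHz parameters and overhead factors are standard textbook assumptions rather than figures from this paper.

```python
# CPRI-style fronthaul rate for a fully centralized (PHY) split:
# rate = antennas * sample_rate * 2 (I and Q) * bits_per_sample * overheads
antennas = 2
sample_rate = 30.72e6        # LTE 20 MHz sampling rate, samples/s
bits_per_sample = 15
control_overhead = 16 / 15   # one CPRI control word per 15 data words
line_coding = 10 / 8         # 8b/10b line coding

rate_bps = antennas * sample_rate * 2 * bits_per_sample * control_overhead * line_coding
print(f"fronthaul rate ≈ {rate_bps / 1e9:.3f} Gbit/s")  # ≈ 2.458 Gbit/s for 2 antennas
```

Higher-layer splits ship user-plane bits instead of raw samples, which is why they relax the bitrate requirement dramatically, at the cost of fewer centralization benefits.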

Journal ArticleDOI
TL;DR: This survey provides an overview of the bitrate adaptation algorithms for HTTP adaptive streaming proposed over the last several years; by design, the standards do not mandate any particular adaptation algorithm, leaving it to system builders to innovate and implement their own method.
Abstract: In this survey, we present state-of-the-art bitrate adaptation algorithms for HTTP adaptive streaming (HAS). As a key distinction from other streaming approaches, the bitrate adaptation algorithms in HAS are chiefly executed at each client, i.e., in a distributed manner. The objective of these algorithms is to ensure a high quality of experience (QoE) for viewers in the presence of bandwidth fluctuations due to factors like signal strength, network congestion, network reconvergence events, etc. While such fluctuations are common in the public Internet, they can also occur in home networks or even managed networks where there is often admission control and QoS tools. Bitrate adaptation algorithms may take factors like bandwidth estimations, playback buffer fullness, device features, viewer preferences, and content features into account, albeit with different weights. Since the viewer’s QoE needs to be determined in real-time during playback, objective metrics are generally used, including the number of buffer stalls, duration of startup delay, frequency and amount of quality oscillations, and video instability. By design, the standards for HAS do not mandate any particular adaptation algorithm, leaving it to system builders to innovate and implement their own method. This survey provides an overview of the different methods proposed over the last several years.
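As a minimal, hedged sketch of the adaptation logic described above, the function below combines a throughput estimate with buffer fullness to pick the next segment bitrate; the encoding ladder, thresholds, and safety factor are illustrative assumptions rather than any particular published algorithm.

```python
BITRATES_KBPS = [400, 1200, 2500, 5000, 8000]  # hypothetical encoding ladder

def choose_bitrate(throughput_kbps, buffer_s, safety=0.8,
                   low_buffer_s=5.0, high_buffer_s=20.0):
    """Throughput-driven choice, overridden at the buffer extremes."""
    if buffer_s < low_buffer_s:           # close to a stall: play it safe
        return BITRATES_KBPS[0]
    target = throughput_kbps * safety     # discount the estimate to absorb fluctuations
    if buffer_s > high_buffer_s:          # large cushion: trust the raw estimate
        target = throughput_kbps
    feasible = [b for b in BITRATES_KBPS if b <= target]
    return feasible[-1] if feasible else BITRATES_KBPS[0]

print(choose_bitrate(throughput_kbps=3000, buffer_s=12))  # -> 1200
```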

Journal ArticleDOI
TL;DR: This paper gives a general presentation of blockchain that goes beyond its usage in Bitcoin, surveys a selection of the vast literature that emerged in the last few years, and draws the key requirements and their evolution when passing from permissionless to permissioned blockchains.
Abstract: Blockchain is a technology making the shared registry concept from distributed systems a reality for a number of application domains, from the cryptocurrency one to potentially any industrial system requiring decentralized, robust, trusted, and automated decision making in a multi-stakeholder situation. Nevertheless, the actual advantages in using blockchain instead of any other traditional solution (such as centralized databases) are not completely understood to date, or at least there is a strong need for a vademecum guiding designers toward the right decision about when to adopt blockchain or not, which kind of blockchain better meets use-case requirements, and how to use it. In this paper, we aim at providing the community with such a vademecum, while giving a general presentation of blockchain that goes beyond its usage in Bitcoin and surveying a selection of the vast literature that emerged in the last few years. We draw the key requirements and their evolution when passing from permissionless to permissioned blockchains, presenting the differences between proposed and experimented consensus mechanisms, and describing existing blockchain platforms.

Journal ArticleDOI
TL;DR: In this paper, a survey of blockchain-based approaches for several security services including authentication, confidentiality, privacy and access control list, data and resource provenance, and integrity assurance is presented.
Abstract: This paper surveys blockchain-based approaches for several security services. These services include authentication, confidentiality, privacy and access control list, data and resource provenance, and integrity assurance. All these services are critical for the current distributed applications, especially due to the large amount of data being processed over the networks and the use of cloud computing. Authentication ensures that the user is who he/she claims to be. Confidentiality guarantees that data cannot be read by unauthorized users. Privacy provides the users the ability to control who can access their data. Provenance allows an efficient tracking of the data and resources along with their ownership and utilization over the network. Integrity helps in verifying that the data has not been modified or altered. These services are currently managed by centralized controllers, for example, a certificate authority. Therefore, the services are prone to attacks on the centralized controller. On the other hand, blockchain is a secured and distributed ledger that can help resolve many of the problems with centralization. The objectives of this paper are to give insights on the use of security services for current applications, to highlight the state of the art techniques that are currently used to provide these services, to describe their challenges, and to discuss how the blockchain technology can resolve these challenges. Further, several blockchain-based approaches providing such security services are compared thoroughly. Challenges associated with using blockchain-based security services are also discussed to spur further research in this area.

Journal ArticleDOI
TL;DR: This paper discusses optimal and near-optimal detection principles specifically designed for the massive MIMO system, such as detectors based on a local search, belief propagation, and box detection, and presents recent advances in detection algorithms that are mostly based on machine learning or sparsity-based approaches.
Abstract: Massive multiple-input multiple-output (MIMO) is a key technology to meet the user demands in performance and quality of services (QoS) for next generation communication systems. Due to the large number of antennas and radio frequency (RF) chains, the complexity of the symbol detectors increases rapidly in a massive MIMO uplink receiver. Thus, the research to find the perfect massive MIMO detection algorithm with optimal performance and low complexity has gained a lot of attention during the past decade. A plethora of massive MIMO detection algorithms has been proposed in the literature. The aim of this paper is to provide insights on such algorithms to a generalist of wireless communications. We garner the massive MIMO detection algorithms and classify them so that a reader can find a distinction between different algorithms from a wider range of solutions. We present optimal and near-optimal detection principles specifically designed for the massive MIMO system, such as detectors based on a local search, belief propagation, and box detection. In addition, we cover detectors based on approximate inversion, which have gained popularity among the VLSI signal processing community due to their deterministic dataflow and low complexity. We also briefly explore several nonlinear small-scale MIMO (2-4 antenna receivers) detectors and their applicability in the massive MIMO context. In addition, we present recent advances in detection algorithms, which are mostly based on machine learning or sparsity-based approaches. In each section, we also mention the related implementations of the detectors. A discussion of the pros and cons of each detector is provided.
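As a sketch of the approximate-inversion family highlighted above, the code below compares exact MMSE detection with a Neumann-series approximation of the required matrix inverse on a random uplink channel; the dimensions, SNR, QPSK signaling, and three series terms are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n_users, n_ant, snr_db = 8, 64, 10
noise_var = 10 ** (-snr_db / 10)

H = (rng.normal(size=(n_ant, n_users)) + 1j * rng.normal(size=(n_ant, n_users))) / np.sqrt(2)
x = (rng.choice([-1, 1], n_users) + 1j * rng.choice([-1, 1], n_users)) / np.sqrt(2)  # QPSK
y = H @ x + np.sqrt(noise_var / 2) * (rng.normal(size=n_ant) + 1j * rng.normal(size=n_ant))

A = H.conj().T @ H + noise_var * np.eye(n_users)  # MMSE filtering matrix
b = H.conj().T @ y

# Neumann series: A^-1 ≈ (I + E + E^2 + E^3) D^-1 with D = diag(A), E = I - D^-1 A.
# Converges when A is diagonally dominant, as it tends to be for n_ant >> n_users.
D_inv = np.diag(1 / np.diag(A))
E = np.eye(n_users) - D_inv @ A
A_inv_approx, term = D_inv.copy(), D_inv.copy()
for _ in range(3):
    term = E @ term
    A_inv_approx += term

x_exact = np.linalg.solve(A, b)
x_approx = A_inv_approx @ b
print("approx vs exact detector output error:", np.linalg.norm(x_approx - x_exact))
```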