Author

Zubair Md. Fadlullah

Other affiliations: Tohoku University, Xidian University
Bio: Zubair Md. Fadlullah is an academic researcher at Lakehead University. The author has contributed to research on topics including smart grid and deep learning, has an h-index of 34, and has co-authored 118 publications receiving 5,888 citations. Previous affiliations of Zubair Md. Fadlullah include Tohoku University and Xidian University.


Papers
Journal Article
TL;DR: This paper is the first to present the state of the art of SAGIN: existing survey papers focused either on a single network segment (space or air) or on space-ground integration, neglecting the integration of all three network segments.
Abstract: Space-air-ground integrated network (SAGIN), as an integration of satellite systems, aerial networks, and terrestrial communications, has emerged as a promising architecture and attracted intensive research interest in recent years. Besides bringing significant benefits for various practical services and applications, SAGIN also faces many unprecedented challenges due to its specific characteristics, such as heterogeneity, self-organization, and time-variability. Compared to traditional ground or satellite networks, SAGIN is affected by limited and unbalanced network resources in all three network segments, making it difficult to obtain the best performance for traffic delivery. Therefore, system integration, protocol optimization, and resource management and allocation in SAGIN are of great significance. To the best of our knowledge, we are the first to present the state of the art of SAGIN, since existing survey papers focused either on a single network segment (space or air) or on space-ground integration, neglecting the integration of all three network segments. In light of this, we present in this paper a comprehensive review of recent research on SAGIN, from network design and resource allocation to performance analysis and optimization. After discussing several existing network architectures, we also point out technology challenges and future directions.

661 citations

Journal Article
TL;DR: An overview of the state-of-the-art deep learning architectures and algorithms relevant to network traffic control systems, and a new use case, deep learning based intelligent routing, which is demonstrated to be effective compared with a conventional routing strategy.
Abstract: Currently, network traffic control systems are mainly composed of the Internet core and wired/wireless heterogeneous backbone networks. Recently, these packet-switched systems have been experiencing explosive network traffic growth due to the rapid development of communication technologies. The existing network policies are not sophisticated enough to cope with the continually varying network conditions arising from this tremendous traffic growth. Deep learning, with its recent breakthroughs in the machine learning/intelligence area, appears to be a viable approach for network operators to configure and manage their networks in a more intelligent and autonomous fashion. While deep learning has received significant research attention in a number of other domains such as computer vision, speech recognition, and robotics, its applications in network traffic control systems are relatively recent and have garnered rather little attention. In this paper, we address this point and indicate the necessity of surveying the scattered works on deep learning applications for various network traffic control aspects. In this vein, we provide an overview of the state-of-the-art deep learning architectures and algorithms relevant to network traffic control systems. We also discuss the deep learning enablers for network systems. In addition, we discuss, in detail, a new use case: deep learning based intelligent routing. We demonstrate the effectiveness of the deep learning-based routing approach in contrast with a conventional routing strategy. Furthermore, we discuss a number of open research issues, which researchers may find useful in the future.
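
To make the routing use case concrete, here is a minimal, hypothetical sketch of the idea: a supervised neural network maps observed traffic patterns at each router to a next-hop choice. The topology size, history window, and layer sizes below are illustrative assumptions, not the survey's actual configuration.

```python
import torch
import torch.nn as nn

NUM_ROUTERS = 16   # assumed topology size
HISTORY = 8        # assumed traffic-history window per router

class RoutingNet(nn.Module):
    """Maps a flattened traffic history to a score per candidate next hop."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Linear(NUM_ROUTERS * HISTORY, 128), nn.ReLU(),
            nn.Linear(128, 64), nn.ReLU(),
            nn.Linear(64, NUM_ROUTERS),   # one score per next-hop choice
        )

    def forward(self, traffic):
        return self.layers(traffic)

model = RoutingNet()
# Dummy supervised pairs: traffic pattern -> next hop chosen by a
# reference strategy (real labels would come from traces, not randomness).
x = torch.randn(32, NUM_ROUTERS * HISTORY)
y = torch.randint(0, NUM_ROUTERS, (32,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()   # one supervised training step (optimizer omitted)
```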

643 citations

Journal Article
TL;DR: The most reliable technology to facilitate M2M communication in the SG home area network is pointed out and its shortcoming is noted; a possible solution that addresses this shortcoming and improves SG communications scalability is also presented.
Abstract: The advanced metering infrastructure of the smart grid presents the biggest growth potential in the machine-to-machine market today. Spurred by recent advances in M2M technologies, SG smart meters are expected not to require human intervention in characterizing power requirements and energy distribution. However, there are many challenges in the design of the SG communications network whereby the electrical appliances and smart meters are able to exchange information pertaining to varying power requirements. Furthermore, different types of M2M gateways are required at different points (e.g., at home, in the building, at the neighborhood, and so forth) of the SG communication network. This article surveys a number of existing communication technologies that can be adopted for M2M communication in SG. Among these, the most reliable technology to facilitate M2M communication in the SG home area network is pointed out, and its shortcoming is also noted. Furthermore, a possible solution to deal with this shortcoming to improve SG communications scalability is also presented.

520 citations

Journal Article
TL;DR: A lightweight message authentication scheme that serves as a basic yet crucial component of a secure SG communication framework and satisfies the desirable security requirements of SG communications.
Abstract: Smart grid (SG) communication has recently received significant attention as a means to facilitate intelligent and distributed electric power transmission systems. However, communication trust and security issues still present practical concerns for the deployment of SG. In this paper, to cope with these challenging concerns, we propose a lightweight message authentication scheme that serves as a basic yet crucial component of a secure SG communication framework. Specifically, in the proposed scheme, the smart meters, which are distributed at different hierarchical networks of the SG, first achieve mutual authentication and establish a shared session key using the Diffie-Hellman exchange protocol. Then, with the shared session key between smart meters and a hash-based authentication code technique, subsequent messages can be authenticated in a lightweight way. Detailed security analysis shows that the proposed scheme can satisfy the desirable security requirements of SG communications. In addition, extensive simulations have been conducted to demonstrate the effectiveness of the proposed scheme in terms of low latency and few signaling message exchanges.
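
As a rough illustration of the two phases described above (not the paper's exact protocol), the following sketch establishes a shared session key via an unauthenticated Diffie-Hellman exchange and then authenticates a message with a hash-based code (HMAC). The toy modulus, generator, and message are illustrative assumptions; a real deployment would use a standardized group and the paper's mutual-authentication steps.

```python
import hashlib
import hmac
import secrets

# Toy Diffie-Hellman parameters for illustration only; a real deployment
# would use a standardized 2048-bit (or larger) group.
P = 2**127 - 1   # a Mersenne prime, far too small for real security
G = 3

# Phase 1: session-key agreement between two smart meters (the paper adds
# mutual authentication around this exchange).
a = secrets.randbelow(P - 2) + 1          # meter A's private exponent
b = secrets.randbelow(P - 2) + 1          # meter B's private exponent
A, B = pow(G, a, P), pow(G, b, P)         # public values exchanged on the wire
shared = pow(B, a, P)                     # equals pow(A, b, P) on meter B
session_key = hashlib.sha256(shared.to_bytes(16, "big")).digest()

# Phase 2: lightweight per-message authentication with a hash-based code.
msg = b"meter-42 reading: 3.7 kWh"        # hypothetical meter message
tag = hmac.new(session_key, msg, hashlib.sha256).digest()

# Receiver side: recompute the tag and compare in constant time.
assert hmac.compare_digest(
    tag, hmac.new(session_key, msg, hashlib.sha256).digest())
```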

431 citations

Journal Article
TL;DR: Preliminary results are reported that demonstrate the encouraging performance of the proposed deep learning system compared to a benchmark routing strategy, Open Shortest Path First (OSPF), in terms of signaling overhead, throughput, and delay.
Abstract: Recently, deep learning, an emerging machine learning technique, has been garnering substantial research attention in several computer science areas. However, to the best of our knowledge, its application to improving heterogeneous network traffic control (an important and challenging area in its own right) has yet to appear because of the difficulty of characterizing the appropriate input and output patterns for a deep learning system to correctly reflect the highly dynamic nature of large-scale heterogeneous networks. In this vein, in this article, we propose appropriate input and output characterizations of heterogeneous network traffic and propose a supervised deep neural network system. We describe how our proposed system works and how it differs from traditional neural networks. Also, preliminary results are reported that demonstrate the encouraging performance of our proposed deep learning system compared to a benchmark routing strategy, Open Shortest Path First (OSPF), in terms of significantly lower signaling overhead and delay and higher throughput.
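
Since OSPF is the benchmark named above, a compact shortest-path computation illustrates the kind of static, link-cost-driven routing the deep learning system is compared against. This is a generic Dijkstra sketch over an assumed toy topology, not the authors' evaluation code.

```python
import heapq

def shortest_paths(graph, src):
    """Dijkstra: minimal cost from src to every reachable node in graph."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                      # stale heap entry, skip
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Assumed toy topology: node -> {neighbor: link cost}
toy = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
print(shortest_paths(toy, "A"))   # {'A': 0.0, 'B': 1.0, 'C': 2.0, 'D': 3.0}
```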

342 citations


Cited by
Journal Article
TL;DR: A comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management is provided in this paper, where a set of issues, challenges, and future research directions for MEC are discussed.
Abstract: Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized mobile cloud computing toward mobile edge computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide-range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also discuss a set of issues, challenges, and future research directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.
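
A toy example of the offloading trade-off at the heart of MEC: a task is worth pushing to the edge when transmission plus remote execution beats local execution. The latency-only cost model and all parameter values below are simplifying assumptions, not the survey's formulation.

```python
def offload_decision(task_bits, cpu_cycles, f_local_hz, f_edge_hz, uplink_bps):
    """Return 'edge' if offloading finishes sooner, else 'local'."""
    t_local = cpu_cycles / f_local_hz                         # run on the device
    t_edge = task_bits / uplink_bps + cpu_cycles / f_edge_hz  # transmit, then run remotely
    return "edge" if t_edge < t_local else "local"

# Hypothetical numbers: a 1 Mb task needing 1e9 CPU cycles, a 1 GHz device,
# a 10 GHz edge server, and a 20 Mb/s uplink.
print(offload_decision(1e6, 1e9, 1e9, 10e9, 20e6))  # -> edge
```

A fuller model would add energy consumption and edge-server congestion to the same comparison, which is exactly the joint radio-and-computational management the survey covers.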

2,992 citations

Posted Content
TL;DR: A comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management and recent standardization efforts on MEC are introduced.
Abstract: Driven by the visions of Internet of Things and 5G communications, recent years have seen a paradigm shift in mobile computing, from the centralized Mobile Cloud Computing towards Mobile Edge Computing (MEC). The main feature of MEC is to push mobile computing, network control and storage to the network edges (e.g., base stations and access points) so as to enable computation-intensive and latency-critical applications at the resource-limited mobile devices. MEC promises dramatic reduction in latency and mobile energy consumption, tackling the key challenges for materializing 5G vision. The promised gains of MEC have motivated extensive efforts in both academia and industry on developing the technology. A main thrust of MEC research is to seamlessly merge the two disciplines of wireless communications and mobile computing, resulting in a wide-range of new designs ranging from techniques for computation offloading to network architectures. This paper provides a comprehensive survey of the state-of-the-art MEC research with a focus on joint radio-and-computational resource management. We also present a research outlook consisting of a set of promising directions for MEC research, including MEC system deployment, cache-enabled MEC, mobility management for MEC, green MEC, as well as privacy-aware MEC. Advancements in these directions will facilitate the transformation of MEC from theory to practice. Finally, we introduce recent standardization efforts on MEC as well as some typical MEC application scenarios.

2,289 citations

Journal Article
TL;DR: In this article, the authors explore the effect of dimensionality on the nearest neighbor problem and show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance of the farthest data point.
Abstract: We explore the effect of dimensionality on the nearest neighbor problem. We show that under a broad set of conditions (much broader than independent and identically distributed dimensions), as dimensionality increases, the distance to the nearest data point approaches the distance to the farthest data point. To provide a practical perspective, we present empirical results on both real and synthetic data sets that demonstrate that this effect can occur for as few as 10-15 dimensions. These results should not be interpreted to mean that high-dimensional indexing is never meaningful; we illustrate this point by identifying some high-dimensional workloads for which this effect does not occur. However, our results do emphasize that the methodology used almost universally in the database literature to evaluate high-dimensional indexing techniques is flawed, and should be modified. In particular, most such techniques proposed in the literature are not evaluated versus simple linear scan, and are evaluated over workloads for which nearest neighbor is not meaningful. Often, even the reported experiments, when analyzed carefully, show that linear scan would outperform the techniques being proposed on the workloads studied in high (10-15) dimensionality.
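
The paper's central effect is easy to reproduce on synthetic data: with i.i.d. uniform points, the relative gap between the farthest and nearest neighbor of a query shrinks as dimensionality grows. The sample sizes and dimensions below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
for dim in (2, 10, 100, 1000):
    data = rng.random((1000, dim))                 # 1000 i.i.d. uniform points
    query = rng.random(dim)
    dists = np.linalg.norm(data - query, axis=1)   # Euclidean distances
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f"dim={dim:4d}  relative contrast={contrast:.3f}")  # shrinks as dim grows
```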

1,992 citations

Journal Article
TL;DR: This article presents a large-dimensional and autonomous network architecture that integrates space, air, ground, and underwater networks to provide ubiquitous and unlimited wireless connectivity and identifies several promising technologies for the 6G ecosystem.
Abstract: A key enabler for the intelligent information society of 2030, 6G networks are expected to provide performance superior to 5G and satisfy emerging services and applications. In this article, we present our vision of what 6G will be and describe usage scenarios and requirements for multi-terabit-per-second (Tb/s) and intelligent 6G networks. We present a large-dimensional and autonomous network architecture that integrates space, air, ground, and underwater networks to provide ubiquitous and unlimited wireless connectivity. We also discuss artificial intelligence (AI) and machine learning [1], [2] for autonomous networks and innovative air-interface design. Finally, we identify several promising technologies for the 6G ecosystem, including terahertz (THz) communications, very-large-scale antenna arrays [i.e., supermassive (SM) multiple-input, multiple-output (MIMO)], large intelligent surfaces (LISs) and holographic beamforming (HBF), orbital angular momentum (OAM) multiplexing, laser and visible-light communications (VLC), blockchain-based spectrum sharing, quantum communications and computing, molecular communications, and the Internet of Nano-Things.

1,332 citations

Journal Article
TL;DR: This article first introduces deep learning for IoTs into the edge computing environment, and designs a novel offloading strategy to optimize the performance of IoT deep learning applications with edge computing.
Abstract: Deep learning is a promising approach for extracting accurate information from raw sensor data from IoT devices deployed in complex environments. Because of its multilayer structure, deep learning is also appropriate for the edge computing environment. Therefore, in this article, we first introduce deep learning for IoTs into the edge computing environment. Since existing edge nodes have limited processing capability, we also design a novel offloading strategy to optimize the performance of IoT deep learning applications with edge computing. In the performance evaluation, we test the performance of executing multiple deep learning tasks in an edge computing environment with our strategy. The evaluation results show that our method outperforms other optimization solutions on deep learning for IoT.
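
One way to read the offloading idea is as a layer-partitioning problem: run as many early layers of a deep model as the edge node's compute budget allows, then ship the intermediate activations onward. The sketch below is a hedged simplification under an assumed per-layer FLOP cost model, not the paper's scheduling algorithm.

```python
def split_point(layer_flops, edge_budget_flops):
    """Index of the first layer that must run beyond the edge node."""
    used = 0.0
    for i, flops in enumerate(layer_flops):
        if used + flops > edge_budget_flops:
            return i
        used += flops
    return len(layer_flops)               # the whole model fits on the edge

# Hypothetical 5-layer model; the edge can afford ~2.5 GFLOPs per inference.
flops = [1e9, 1e9, 2e9, 2e9, 0.5e9]
split = split_point(flops, 2.5e9)
print(f"run layers [0, {split}) on the edge, layers [{split}, 5) in the cloud")
# -> run layers [0, 2) on the edge, layers [2, 5) in the cloud
```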

1,270 citations