
Showing papers on "Mobile telephony published in 2019"


Journal ArticleDOI
TL;DR: In this paper, the authors proposed to integrate Deep Reinforcement Learning techniques and the Federated Learning framework with mobile edge systems for optimizing mobile edge computing, caching and communication, and designed the "In-Edge AI" framework to intelligently utilize the collaboration among devices and edge nodes to exchange learning parameters for better training and inference of the models, and thus to carry out dynamic system-level optimization and application-level enhancement while reducing unnecessary system communication load.
Abstract: Recently, along with the rapid development of mobile communication technology, edge computing theory and techniques have been attracting more and more attention from global researchers and engineers; edge computing can significantly bridge the gap between cloud capacity and device requirements at the network edge, and thus accelerate content delivery and improve the quality of mobile services. To bring more intelligence to edge systems than traditional optimization methodologies allow, and driven by current deep learning techniques, we propose to integrate Deep Reinforcement Learning techniques and the Federated Learning framework with mobile edge systems, for optimizing mobile edge computing, caching and communication. We thus design the "In-Edge AI" framework to intelligently utilize the collaboration among devices and edge nodes to exchange learning parameters for better training and inference of the models, and thereby to carry out dynamic system-level optimization and application-level enhancement while reducing unnecessary system communication load. "In-Edge AI" is evaluated and shown to achieve near-optimal performance with relatively low learning overhead, while the system is cognitive and adaptive to mobile communication systems. Finally, we discuss several related challenges and opportunities for unveiling a promising future of "In-Edge AI".

764 citations
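The parameter-exchange step at the heart of such a framework can be illustrated with the canonical federated averaging rule, where an aggregator combines locally trained models weighted by local data size. The sketch below is a minimal illustration; the function name `federated_average` and the toy parameter vectors are ours, not from the paper:

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate local model parameters by data-size-weighted averaging:
    w_global = sum_k (n_k / n) * w_k (the FedAvg aggregation rule)."""
    total = sum(client_sizes)
    stacked = np.stack(client_weights)          # shape: (num_clients, dim)
    coeffs = np.array(client_sizes, dtype=float) / total
    return coeffs @ stacked                     # weighted sum over clients

# Three edge devices report local parameter vectors and local sample counts.
local = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 10, 20]
global_w = federated_average(local, sizes)      # -> array([3.5, 4.5])
```

A real deployment would add parameter compression and per-round participant selection, which is where the system-level optimization described above comes in.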


Journal ArticleDOI
TL;DR: 6G is envisioned to include three major aspects, namely, mobile ultra-broadband, super Internet-of-Things (IoT), and artificial intelligence (AI), and key technologies to realize each aspect are reviewed.
Abstract: With a ten-year horizon from concept to reality, it is time now to start thinking about what sixth-generation (6G) mobile communications will be, on the eve of fifth-generation (5G) deployment. To pave the way for the development of 6G and beyond, we provide 6G visions in this paper. We first introduce the state-of-the-art technologies in 5G and indicate the necessity of studying 6G. By taking the current and emerging development of wireless communications into consideration, we envision 6G to include three major aspects, namely, mobile ultra-broadband, super Internet-of-Things (IoT), and artificial intelligence (AI). Then, we review key technologies to realize each aspect. In particular, terahertz (THz) communications can be used to support mobile ultra-broadband, symbiotic radio and satellite-assisted communications can be used to achieve super IoT, and machine learning techniques are promising candidates for AI. For each technology, we provide the basic principle, key challenges, and state-of-the-art approaches and solutions.

237 citations


Journal ArticleDOI
TL;DR: Follow-Me Cloud applies a Markov-decision-process-based algorithm for cost-effective, performance-optimized service migration decisions, while two alternative schemes to ensure service continuity and disruption-free operation are proposed, based on either software defined networking technologies or the locator/identifier separation protocol.
Abstract: The trend towards the cloudification of the 3GPP LTE mobile network architecture and the emergence of federated cloud infrastructures call for alternative service delivery strategies for improved user experience and efficient resource utilization. We propose Follow-Me Cloud (FMC), a design tailored to this environment, but with a broader applicability, which allows mobile users to always be connected via the optimal data anchor and mobility gateways, while cloud-based services follow them and are delivered via the optimal service point inside the cloud infrastructure. Follow-Me Cloud applies a Markov-decision-process-based algorithm for cost-effective performance-optimized service migration decisions, while two alternative schemes to ensure service continuity and disruption-free operation are proposed, based on either software defined networking technologies or the locator/identifier separation protocol. Numerical results from our analytic model for follow-me cloud, as well as testbed experiments with the two alternative follow-me cloud implementations we have developed, demonstrate quantitatively and qualitatively the advantages it can bring about.

185 citations
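The migration decision the paper describes can be sketched as a small Markov decision process: the controller weighs the one-off cost of migrating a service against the recurring cost of serving a user from a non-colocated edge. The toy model below (two locations, made-up costs and mobility probability; not the paper's actual formulation) solves it by value iteration:

```python
import itertools

GAMMA = 0.9      # discount factor (assumed)
P_MOVE = 0.2     # probability the user changes location each step (assumed)
C_REMOTE = 2.0   # per-step cost of serving a user from a non-colocated edge
C_MIG = 3.0      # one-off cost of migrating the service

states = list(itertools.product([0, 1], [0, 1]))   # (user_loc, service_loc)
actions = ["keep", "migrate"]                      # migrate = move service to user

def step(u, s, a):
    """Return (immediate cost, next service location)."""
    s2 = u if a == "migrate" else s
    cost = C_MIG if a == "migrate" and s != u else 0.0
    cost += C_REMOTE if u != s2 else 0.0
    return cost, s2

V = {st: 0.0 for st in states}
for _ in range(200):                               # value iteration
    V_new = {}
    for (u, s) in states:
        q_vals = []
        for a in actions:
            c, s2 = step(u, s, a)
            # Expectation over the user's random next location.
            exp_future = (1 - P_MOVE) * V[(u, s2)] + P_MOVE * V[(1 - u, s2)]
            q_vals.append(c + GAMMA * exp_future)
        V_new[(u, s)] = min(q_vals)
    V = V_new

def best_action(u, s):
    qs = {}
    for a in actions:
        c, s2 = step(u, s, a)
        qs[a] = c + GAMMA * ((1 - P_MOVE) * V[(u, s2)] + P_MOVE * V[(1 - u, s2)])
    return min(qs, key=qs.get)
```

With these numbers the optimal policy migrates as soon as user and service are apart, because the discounted stream of remote-access costs outweighs the one-off migration cost.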


Journal ArticleDOI
18 Feb 2019
TL;DR: This paper reviews the state-of-the-art technology and existing implementations of Mobile AR, as well as enabling technologies and challenges when AR meets the Web, and elaborates on the different potential Web AR provisioning approaches, especially the adaptive and scalable collaborative distributed solution, which adopts the osmotic computing paradigm to provide Web AR services.
Abstract: Mobile augmented reality (Mobile AR) is gaining increasing attention from both academia and industry. Hardware-based Mobile AR and App-based Mobile AR are the two dominant platforms for Mobile AR applications. However, hardware-based Mobile AR implementation is known to be costly and lacks flexibility, while the App-based one requires additional downloading and installation in advance and is inconvenient for cross-platform deployment. In comparison, Web-based AR (Web AR) implementation can provide a pervasive Mobile AR experience to users thanks to the many successful deployments of the Web as a lightweight and cross-platform service provisioning platform. Furthermore, the emergence of 5G mobile communication networks has the potential to enhance the communication efficiency of Mobile AR dense computing in the Web-based approach. We conjecture that Web AR will deliver an innovative technology to enrich our ways of interacting with the physical (and cyber) world around us. This paper reviews the state-of-the-art technology and existing implementations of Mobile AR, as well as enabling technologies and challenges when AR meets the Web. Furthermore, we elaborate on the different potential Web AR provisioning approaches, especially the adaptive and scalable collaborative distributed solution which adopts the osmotic computing paradigm to provide Web AR services. We conclude this paper with the discussions of open challenges and research directions under current 3G/4G networks and the future 5G networks. We hope that this paper will help researchers and developers to gain a better understanding of the state of the research and development in Web AR and at the same time stimulate more research interest and effort on delivering life-enriching Web AR experiences to the fast-growing mobile and wireless business and consumer industry of the 21st century.

150 citations


Journal ArticleDOI
TL;DR: A model for interpreting the development of the continuance intention of users of mobile communication apps is developed, indicating that interaction quality, environment quality, inertia, and user satisfaction are key determinants of continuance intention, while outcome quality is not.

135 citations


Journal ArticleDOI
TL;DR: A brief survey of the challenges and opportunities of THz band operation in wireless communication, along with some potential applications and future research directions is provided.
Abstract: With 5G Phase 1 finalized and 5G Phase 2 recently defined by 3GPP, the mobile communication community is on the verge of deciding what will be the Beyond-5G (B5G) system. B5G is expected to further enhance network performance, for example, by supporting throughput per device up to terabits per second and increasing the frequency range of usable spectral bands significantly. In fact, one of the main pillars of 5G networks has been radio access extension to the millimeter-wave bands. However, new envisioned services, asking for more and more throughput, require the availability of one order of magnitude more spectrum chunks, thus suggesting moving the operations into the THz domain. This move will introduce significant new multidisciplinary research challenges emerging throughout the wireless communication protocol stacks, including the way the mobile network is modeled and deployed. This article, therefore, provides a brief survey of the challenges and opportunities of THz band operation in wireless communication, along with some potential applications and future research directions.

130 citations


Journal ArticleDOI
TL;DR: An overview of fog-computing-enabled mobile communication networks (FogMNW) is provided, covering network architecture, system capacity and resource management, along with a proposed heterogeneous communication and hierarchical fog computing network architecture.
Abstract: The convergence of communication and computing (COM2P) has been taken as a promising solution for the sustainable development of mobile communication systems. The introduction of fog computing in future mobile networks makes COM2P possible. This article provides an overview of fog computing enabled mobile communication networks (FogMNW), including network architecture, system capacity and resource management. First, this article analyzes the heterogeneity of FogMNW with both advanced communication techniques and fog computing. Then a heterogeneous communication and hierarchical fog computing network architecture is proposed. With both communication and computing resources, FogMNW can achieve much higher capacity than conventional communication networks, as demonstrated by the coded multicast scheme. Furthermore, systematic management of communication and computing resources is necessary for FogMNW. By exploiting the communication load diversity across N cells, a communication-load-aware (CLA) scheme can achieve much higher computing resource efficiency than competing schemes. The performance gap increases with N, and CLA can improve efficiency by more than 100 percent when there are 14 cells.

120 citations


Journal ArticleDOI
11 Jun 2019
TL;DR: In this article, the authors provide a fresh look to the concept of edge computing by first discussing the applications that the network edge must provide, with a special emphasis on the ensuing challenges in enabling ultrareliable and low-latency edge computing services for mission-critical applications such as virtual reality (VR), vehicle-to-everything (V2X), edge artificial intelligence (AI), and so on.
Abstract: Edge computing is an emerging concept based on distributed computing, storage, and control services closer to end network nodes. Edge computing lies at the heart of the fifth-generation (5G) wireless systems and beyond. While the current state-of-the-art networks communicate, compute, and process data in a centralized manner (at the cloud), for latency and compute-centric applications, both radio access and computational resources must be brought closer to the edge, harnessing the availability of computing and storage-enabled small cell base stations in proximity to the end devices. Furthermore, the network infrastructure must enable a distributed edge decision-making service that learns to adapt to the network dynamics with minimal latency and optimize network deployment and operation accordingly. This paper will provide a fresh look to the concept of edge computing by first discussing the applications that the network edge must provide, with a special emphasis on the ensuing challenges in enabling ultrareliable and low-latency edge computing services for mission-critical applications such as virtual reality (VR), vehicle-to-everything (V2X), edge artificial intelligence (AI), and so on. Furthermore, several case studies where the edge is key are explored, followed by insights and prospects for future work.

106 citations


Journal ArticleDOI
TL;DR: Results show that the number of data bidders in different auctions can be balanced effectively through the proposed mobility model, and the income per unit time of sellers in the networked data transaction can also be increased.
Abstract: In this paper, a novel mobile data offloading method is proposed based on an external-infrastructure-free approach. Specifically, through the hotspot function of smartphones, data demands from mobile users who have used up the data in their monthly data plans can be offloaded to other mobile users who still have redundant unused data. To model this data transaction among mobile users, which can be realized with the assistance of current mobile social platforms, we introduce an auction-based contract mechanism, and then further analyze the system performance. To improve the efficiency and performance of the system, a socially-aware mobility model is also designed. In this model, the betweenness of the data auctioneers is introduced, according to which the number of data requesters in different auctions can be balanced and the performance of the system optimized. The proposed data transaction mechanisms and friendship-aware mobility model are then simulated as operating on Flickr, a real-world online social network dataset. Results show that the number of data bidders in different auctions can be balanced effectively through the proposed mobility model, and the income per unit time of sellers in the networked data transaction can also be increased.

103 citations
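A minimal example of an auction round for such data transactions is a second-price (Vickrey) auction, under which bidding one's true valuation is a dominant strategy. The paper's contract mechanism is more elaborate, so treat this as an illustrative stand-in with hypothetical bidder names:

```python
def vickrey_round(bids):
    """One second-price auction round for a seller's unused data.
    bids: dict bidder -> offered price per GB. Returns (winner, price paid):
    the highest bidder wins but pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

winner, price = vickrey_round({"alice": 3.0, "bob": 5.0, "carol": 4.0})
# bob wins, but pays carol's bid of 4.0
```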


Journal ArticleDOI
TL;DR: This work proposes a novel decomposition of in-cell and inter-cell data traffic and applies a graph-based deep learning approach for accurate cellular traffic prediction, revealing intensive spatio-temporal dependency even among distant cell towers, which is largely overlooked in previous works.
Abstract: Understanding and predicting cellular traffic at large scale and fine granularity is beneficial and valuable to mobile users, wireless carriers, and city authorities. Predicting cellular traffic in a modern metropolis is particularly challenging because of the tremendous temporal and spatial dynamics introduced by diverse user Internet behaviors and frequent user mobility citywide. In this paper, we characterize and investigate the root causes of such dynamics in cellular traffic through a big cellular usage dataset covering 1.5 million users and 5,929 cell towers in a major city of China. We reveal intensive spatio-temporal dependency even among distant cell towers, which is largely overlooked in previous works. To explicitly characterize and effectively model the spatio-temporal dependency of urban cellular traffic, we propose a novel decomposition of in-cell and inter-cell data traffic, and apply a graph-based deep learning approach to accurate cellular traffic prediction. Experimental results demonstrate that our method consistently outperforms state-of-the-art time-series based approaches, and we also show through an example study how the decomposition of cellular traffic can be used for event inference.

92 citations
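The intuition behind graph-based prediction can be shown in a few lines: each tower's next-step load mixes its own current load with that of its graph neighbours. This is a stand-in for a single graph-convolution-style step with an assumed adjacency matrix and mixing weight, not the paper's trained model:

```python
import numpy as np

# Toy 4-tower graph: an edge marks strong spatio-temporal coupling (assumed).
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 1],
              [0, 1, 1, 0]], dtype=float)

def predict_next(x, alpha=0.7):
    """One-step prediction: each tower's next load is a mix of its own
    current load and the degree-normalized load of its neighbours."""
    deg = A.sum(axis=1)                 # node degrees
    neighbour_avg = A @ x / deg         # mean load of adjacent towers
    return alpha * x + (1 - alpha) * neighbour_avg

x = np.array([10.0, 20.0, 30.0, 40.0])  # current per-tower traffic
x_next = predict_next(x)
```

A learned model replaces the fixed `alpha` and adjacency weights with trained parameters and stacks several such layers over the traffic history.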


Journal ArticleDOI
TL;DR: An integrated QoS prediction approach which unifies the modeling of multi-dimensional QoS data via multi-linear-algebra based concepts of tensor and enables efficient Web service recommendation for mobile clients via tensor decomposition and reconstruction optimization algorithms is proposed.
Abstract: Advances in mobile Internet technology have enabled the clients of Web services to be able to keep their service sessions alive while they are on the move. Since the services consumed by a mobile client may be different over time due to client location changes, a multi-dimensional spatiotemporal model is necessary for analyzing the service consumption relations. Moreover, competitive Web service recommenders for the mobile clients must be able to predict unknown quality-of-service (QoS) values well by taking into account the target client's service requesting time and location, e.g., performing the prediction via a set of multi-dimensional QoS measures. Most contemporary QoS prediction methods exploit the QoS characteristics for one specific dimension, e.g., time or location, and do not exploit the structural relationships among the multi-dimensional QoS data. This paper proposes an integrated QoS prediction approach which unifies the modeling of multi-dimensional QoS data via multi-linear-algebra based concepts of tensor and enables efficient Web service recommendation for mobile clients via tensor decomposition and reconstruction optimization algorithms. In light of the unavailability of measured multi-dimensional QoS datasets in the public domain, this paper also presents a transformational approach to creating a credible multi-dimensional QoS dataset from a measured taxi usage dataset which contains high dimensional time and space information. Comparative experimental evaluation results show that the proposed QoS prediction approach can result in much better accuracy in recommending Web services than several other representative ones.
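The decomposition-and-reconstruction idea can be demonstrated on a toy user x service x time tensor: unfold it along one mode, take a truncated SVD, and fold back; entries of the low-rank reconstruction then serve as QoS predictions. The rank-1 example below is illustrative only; the paper's decomposition and reconstruction optimization algorithms are more involved:

```python
import numpy as np

# Toy rank-1 QoS tensor: users x services x time slots (values assumed).
u = np.array([1.0, 2.0, 3.0])         # user factor
v = np.array([1.0, 0.5])              # service factor
w = np.array([2.0, 4.0])              # time factor
T = np.einsum('i,j,k->ijk', u, v, w)  # ground-truth QoS values

# Mode-1 unfolding, rank-1 truncated SVD, then fold back.
M = T.reshape(3, -1)                           # (users) x (services*time)
U, S, Vt = np.linalg.svd(M, full_matrices=False)
M_hat = S[0] * np.outer(U[:, 0], Vt[0])        # best rank-1 approximation
T_hat = M_hat.reshape(T.shape)

# For a genuinely rank-1 tensor the reconstruction is exact; with sparse
# observations, T_hat[i, j, k] would serve as the predicted unknown QoS.
```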

Posted Content
TL;DR: The goal of this paper is to motivate the need to move to a sixth generation (6G) of mobile communication networks, starting from a gap analysis of 5G and predicting a new synthesis of near-future services, like hologram interfaces and ambient sensing intelligence, a pervasive introduction of artificial intelligence, and the incorporation of technologies like TeraHertz or Visible Light Communications and 3-dimensional coverage.
Abstract: The current development of 5G networks represents a breakthrough in the design of communication networks, for its ability to provide a single platform enabling a variety of different services, from enhanced mobile broadband communications and automated driving to the Internet-of-Things, with its huge number of connected devices. Nevertheless, looking at the current development of technologies and new services, it is already possible to envision the need to move beyond 5G with a new architecture incorporating new services and technologies. The goal of this paper is to motivate the need to move to a sixth generation (6G) of mobile communication networks, starting from a gap analysis of 5G and predicting a new synthesis of near-future services, like hologram interfaces and ambient sensing intelligence, a pervasive introduction of artificial intelligence, and the incorporation of technologies like TeraHertz (THz) or Visible Light Communications (VLC) and 3-dimensional coverage.

Proceedings ArticleDOI
08 Jul 2019
TL;DR: This work considers the edge user allocation problem as an online decision-making and evolvable process and develops a mobility-aware and migration-enabled approach, named MobMig, for allocating users in real time, which achieves a higher user coverage rate and fewer reallocations than traditional approaches.
Abstract: The rapid development of mobile communication technologies prompts the emergence of mobile edge computing (MEC). As the key technology toward 5th generation (5G) wireless networks, it allows mobile users to offload their computational tasks to nearby servers deployed in base stations to alleviate the shortage of mobile resources. Nevertheless, various challenges, especially the edge-user-allocation problem, are yet to be properly addressed. Traditional studies treat this problem as a static global optimization problem, where user positions are considered time-invariant and user-mobility-related information is not fully exploited. In reality, however, edge users usually have high mobility and time-varying positions, which result in user reallocations among different base stations and degrade user-perceived quality-of-service (QoS). To overcome these limitations, we consider the edge user allocation problem as an online decision-making and evolvable process and develop a mobility-aware and migration-enabled approach, named MobMig, for allocating users in real time. Experiments based on a real-world MEC dataset clearly demonstrate that our approach achieves a higher user coverage rate and fewer reallocations than traditional ones.
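The online, reallocation-averse flavour of such an approach can be sketched as a sticky greedy rule: keep a user's current server while it remains in coverage, and reallocate to the nearest server with spare capacity only when coverage is lost. Everything below (coverage radius, server table, function names) is an illustrative assumption, not MobMig itself:

```python
COVERAGE = 5.0   # assumed coverage radius (1-D positions for simplicity)

servers = {"s1": {"pos": 0.0, "cap": 1}, "s2": {"pos": 8.0, "cap": 2}}

def allocate(user_pos, current, load):
    """Return (assigned server or None, whether a reallocation happened)."""
    # Sticky assignment: avoid reallocation while still covered.
    if current and abs(servers[current]["pos"] - user_pos) <= COVERAGE:
        return current, False
    candidates = [s for s, info in servers.items()
                  if abs(info["pos"] - user_pos) <= COVERAGE
                  and load.get(s, 0) < info["cap"]]
    if not candidates:
        return None, current is not None
    best = min(candidates, key=lambda s: abs(servers[s]["pos"] - user_pos))
    return best, current is not None

load = {}
srv, _ = allocate(2.0, None, load)           # user appears near s1
load[srv] = load.get(srv, 0) + 1
srv2, realloc = allocate(4.0, srv, load)     # small move: still covered, stays
srv3, realloc3 = allocate(9.0, srv2, load)   # out of s1's range -> moves to s2
```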

Journal ArticleDOI
TL;DR: BLEST and STTF are compared with existing schedulers in both emulated and real-world environments and are shown to reduce web object transmission times by up to 51% and provide 45% faster communication for interactive applications, compared with MPTCP's default scheduler.
Abstract: The demand for mobile communication is continuously increasing, and mobile devices are now the communication device of choice for many people. To guarantee connectivity and performance, mobile devices are typically equipped with multiple interfaces. To this end, exploiting multiple available interfaces is also a crucial aspect of the upcoming 5G standard for reducing costs, easing network management, and providing a good user experience. Multi-path protocols, such as multi-path TCP (MPTCP), can be used to provide performance optimization through load-balancing and resilience to coverage drops and link failures; however, they do not automatically guarantee better performance. For instance, low-latency communication has been proven hard to achieve when a device has network interfaces with asymmetric capacity and delay (e.g., LTE and WLAN). For multi-path communication, the data scheduler is vital to provide low latency, since it decides over which network interface to send individual data segments. In this paper, we focus on the MPTCP scheduler with the goal of providing a good user experience for latency-sensitive applications when interface quality is asymmetric. After an initial assessment of existing scheduling algorithms, we present two novel scheduling techniques: the block estimation (BLEST) scheduler and the shortest transmission time first (STTF) scheduler. BLEST and STTF are compared with existing schedulers in both emulated and real-world environments and are shown to reduce web object transmission times by up to 51% and provide 45% faster communication for interactive applications, compared with MPTCP's default scheduler.
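The idea behind a shortest-transmission-time-first scheduler can be sketched by estimating, per subflow, when a segment would finish (propagation delay plus the time to drain the queued backlog), and sending on the path that finishes earliest. The figures and the simplified delay model below are assumptions for illustration, not the paper's implementation:

```python
def transmission_time(path, seg_bytes):
    # One-way propagation delay plus time to drain backlog and this segment.
    return path["rtt"] / 2 + (path["queued"] + seg_bytes) / path["bw"]

def sttf_pick(paths, seg_bytes):
    """Pick the subflow with the shortest estimated transmission time."""
    return min(paths, key=lambda name: transmission_time(paths[name], seg_bytes))

# Asymmetric interfaces, e.g. a fast WLAN and a slower LTE link (made-up figures).
paths = {
    "wlan": {"rtt": 0.02, "bw": 10e6, "queued": 0.0},  # seconds, bytes/s, bytes
    "lte":  {"rtt": 0.06, "bw": 5e6,  "queued": 0.0},
}

first = sttf_pick(paths, 1500)        # the first full-size segment goes to WLAN
paths[first]["queued"] += 1500

for _ in range(200):                  # keep scheduling; backlog shifts choices
    p = sttf_pick(paths, 1500)
    paths[p]["queued"] += 1500
```

Note how the fast path absorbs traffic until its backlog makes the slow path competitive, which is exactly the behaviour that matters on asymmetric LTE/WLAN links.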

Journal ArticleDOI
TL;DR: This study presents theoretical and empirical arguments for the role of mobile telephony in promoting good governance in 47 sub-Saharan African countries for the period 2000-2012 by highlighting the importance of various combinations of governance indicators and their responsiveness to mobile phone usage.
Abstract: Purpose: This study presents theoretical and empirical arguments for the role of mobile telephony in promoting good governance in 47 sub-Saharan African countries for the period 2000-2012. Design/methodology/approach: The empirical inquiry uses an endogeneity-robust GMM approach with forward orthogonal deviations to analyse the linkage between mobile phone usage and the variation in three broad governance categories: political, economic and institutional. Findings: Three key findings are established. First, in terms of individual governance indicators, mobile phones consistently stimulated good governance by the same magnitude, with the exception of the effect on the regulation component of economic governance. Second, when indicators are combined, the effect of mobile phones on general governance is three times higher than that on the institutional governance category. Third, countries with lower levels of governance indicators are catching up with their counterparts with more advanced dynamics. Originality/value: The study makes both theoretical and empirical contributions by highlighting the importance of various combinations of governance indicators and their responsiveness to mobile phone usage.

Journal ArticleDOI
TL;DR: A spammer identification scheme based on a Gaussian mixture model (SIGMM) is proposed; it utilizes machine learning for industrial mobile networks and provides intelligent identification of spammers without relying on flexible and unreliable relationships.
Abstract: An industrial mobile network is crucial for industrial production in the Internet of Things. It guarantees the normal function of machines and the normalization of industrial production. However, this characteristic can be exploited by spammers to attack others and influence industrial production. Users who only share spam, such as links to viruses and advertisements, are called spammers. With the growth of mobile network membership, spammers have organized into groups for the purpose of benefit maximization, which has caused confusion and heavy losses to industrial production. It is difficult to distinguish spammers from normal users owing to the characteristics of multidimensional data. To address this problem, this paper proposes a spammer identification scheme based on a Gaussian mixture model (SIGMM) that utilizes machine learning for industrial mobile networks. It provides intelligent identification of spammers without relying on flexible and unreliable relationships. SIGMM combines the representation of the data, with each user node classified into one class during the construction of the model. We validate SIGMM by comparing it with the reality mining algorithm and the hybrid fuzzy c-means (FCM) clustering algorithm using a mobile network dataset from a cloud server. Simulation results show that SIGMM outperforms these previous schemes in terms of recall, precision, and time complexity.
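The modeling idea can be illustrated with a one-dimensional, two-component Gaussian mixture fitted by expectation-maximization: each user is labeled by the component with the higher posterior. The feature (a per-user spam-message ratio) and all numbers are invented for illustration; SIGMM itself works on multidimensional data:

```python
import numpy as np

# Per-user spam-message ratios (assumed): four normal users, four spammers.
x = np.array([0.05, 0.10, 0.12, 0.08, 0.85, 0.90, 0.95, 0.88])

mu = np.array([0.2, 0.8])            # initial component means
var = np.array([0.05, 0.05])         # initial component variances
weights = np.array([0.5, 0.5])       # initial mixing weights

for _ in range(50):                  # EM iterations
    # E-step: posterior responsibility of each component for each point.
    dens = weights * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
           / np.sqrt(2 * np.pi * var)
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights, means and variances.
    nk = resp.sum(axis=0)
    weights = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6

labels = resp.argmax(axis=1)         # component 1 plays the "spammer" class here
```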

Journal ArticleDOI
TL;DR: This paper introduces machine learning to assist channel modeling and channel estimation, supported by a literature survey, noting that machine learning has been demonstrated to handle big data efficiently.
Abstract: Channel modeling is fundamental to the design of wireless communication systems. A common practice is to collect a tremendous amount of channel measurement data and then derive appropriate channel models using statistical methods. For highly mobile communications, channel estimation on top of the channel modeling enables high-bandwidth physical-layer transmission in state-of-the-art mobile communications. For the coming 5G and diverse Internet of Things, many challenging application scenarios emerge, and more efficient methodologies for channel modeling and channel estimation are very much needed. In the meantime, machine learning has been demonstrated to handle big data efficiently. In this paper, we introduce machine learning to assist channel modeling and channel estimation, with supporting evidence from a literature survey.
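As a point of reference for what learning-based estimators try to improve on, classical pilot-based channel estimation solves a least-squares problem y = Xh + n from a known pilot sequence. The 2-tap example below is a generic textbook sketch, not drawn from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
h_true = np.array([0.8 + 0.3j, 0.2 - 0.1j])        # assumed 2-tap channel
pilots = rng.choice([1, -1], size=32).astype(complex)  # known BPSK pilots

# Convolution matrix of the known pilot sequence (current and previous symbol).
X = np.column_stack([pilots, np.concatenate(([0], pilots[:-1]))])
# Received signal: channel output plus small complex Gaussian noise.
y = X @ h_true + 0.01 * (rng.standard_normal(32) + 1j * rng.standard_normal(32))

# Least-squares channel estimate; ML-based methods aim to beat this baseline
# in harsh or fast-varying channels.
h_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
```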

Journal ArticleDOI
TL;DR: This paper focuses on a few potential technologies for 6G wireless communications, all of which represent certain fundamental breakthrough at the physical layer — technical hardcore of any new generation of wireless communications.
Abstract: The standard development of 5G wireless communication culminated between 2017 and 2019, followed by the worldwide deployment of 5G networks, which are expected to deliver very high data rates for enhanced mobile broadband, support ultra-reliable and low-latency services, and accommodate a massive number of connections. Research attention is shifting to future generations of wireless communications, for instance, beyond 5G or 6G. Unlike previous papers, which discussed the use cases, deployment scenarios, or new network architectures of 6G in depth, this paper focuses on a few potential technologies for 6G wireless communications, all of which represent a certain fundamental breakthrough at the physical layer, the technical hardcore of any new generation of wireless communications. Some of them, such as holographic radio, terahertz communication, large intelligent surfaces, and orbital angular momentum, are of a revolutionary nature, and many related studies are still at the scientific exploration stage. Several technical areas, such as advanced channel coding and modulation, visible light communication, and advanced duplexing, while having been studied, may find more opportunities in 6G.

Journal ArticleDOI
TL;DR: Introducing fog computing and proactive network association, realizing virtual cell by integrating open-loop radio transmission and error control, and innovating anticipatory mobility management through machine learning opens a new avenue toward ultra-low latency mobile networking.
Abstract: Mobile networking to achieve the ultra-low latency goal of 1 msec enables massive operation of autonomous vehicles and other intelligent mobile machines, and emerges as one of the most critical technologies beyond 5G mobile communications and state-of-the-art vehicular networks. Introducing fog computing and proactive network association, realizing virtual cell by integrating open-loop radio transmission and error control, and innovating anticipatory mobility management through machine learning, opens a new avenue toward ultra-low latency mobile networking.

Journal ArticleDOI
TL;DR: The soft frequency reuse (SFR) is introduced to reduce the inter-cell interference in multiple cellular networks (such as 5G cellular networks) with the orthogonal frequency division multiplexing in base stations and the energy-efficient resource allocation problem is described as a Stackelberg game model.
Abstract: To obtain better bandwidth and performance, the fifth generation (5G) cellular network is proposed to implement new-generation cellular mobile communications for new applications such as the Internet of Things, big data, and smart cities. However, due to multiple/dense cellular network structures and high data rates, the 5G cellular network suffers from high inter-cell interference (ICI) and low energy efficiency. Soft frequency reuse (SFR) is introduced to reduce the inter-cell interference in multiple cellular networks (such as 5G cellular networks) with orthogonal frequency division multiplexing in base stations. We then investigate the energy-efficient resource allocation problem in the 5G cellular network with SFR. To coordinate the ICI among adjacent cells, we introduce an interference pricing factor into the utility function. The energy-efficient resource allocation problem is formulated as a Stackelberg game model. Because the sub-carrier assignment in the optimization process is an integer program that is very hard to solve, we relax the integer variable in the model and propose an iterative algorithm to obtain the Stackelberg game equilibrium solution. Simulation results show that the proposed method is feasible and promising.
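The leader-follower structure can be sketched in a toy single-follower version: the base station posts an interference price, the follower's best response has a closed water-filling-style form, and the leader searches over prices subject to an interference cap. All constants and the simplified utility are assumptions for illustration, not the paper's game:

```python
# Follower utility: log(1 + g*p/sigma2) - price*p, whose maximizer is
# p* = max(0, 1/price - sigma2/g).

g, sigma2, i_cap = 2.0, 0.5, 1.0       # channel gain, noise, interference cap

def best_response(price):
    """Follower's optimal transmit power given the posted interference price."""
    return max(0.0, 1.0 / price - sigma2 / g)

best = None
for k in range(1, 400):                # leader sweeps candidate prices
    price = k * 0.01
    p = best_response(price)
    if p > i_cap:                      # violates the interference constraint
        continue
    revenue = price * p                # leader's pricing revenue
    if best is None or revenue > best[2]:
        best = (price, p, revenue)

price_star, p_star, rev_star = best    # lowest feasible price wins here
```

With these constants the leader's revenue decreases in the price, so the equilibrium sits at the smallest price whose induced power still respects the interference cap.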

Journal ArticleDOI
TL;DR: A study of a smart grid subcomponent, the standard interface between the smart meter and the concentration point for collecting meter data, shows that experts believe Power line communication has a high chance of becoming dominant and that the most important factor affecting standard success is technological superiority.
Abstract: The world is faced with various societal challenges related to, e.g., climate change and energy scarcity. To address these issues, complex innovative systems may be developed, such as smart grids. When these systems are realized, challenges pertaining to renewable energy and sustainability may, in part, be solved. To implement them, generally accepted common standards should be developed and used by firms and society so that the technological components can be connected and the quality and safety requirements of smart grids and their governance can be guaranteed. This paper studies a subcomponent of the smart grid. Specifically, the paper studies competing technologies for a standard means of interface between the smart meter and the concentration point for collecting meter data. Three types of communication technologies for the interface are currently battling for standard dominance: Power line communication, Mobile telephony, and Radio frequency. Nine relevant standard dominance factors were found: operational supremacy, technological superiority, compatibility, flexibility, pricing strategy, timing of entry, current installed base, regulator, and suppliers. The Best-Worst Method was applied to calculate the factors' relative weights. The results show that experts believe that Power line communication has a high chance of becoming dominant and that the most important factor affecting standard success is technological superiority. The relative weights per factor are explained, and theoretical and practical contributions, limitations, and areas for further research are discussed.

Journal ArticleDOI
TL;DR: A three-phase approach for D2D data dissemination that exploits social-awareness and opportunistic contacts under user mobility, with new mechanisms for message selection and cooperation pairing that take into account both altruistic and selfish behaviors of users.
Abstract: Nowadays, pervasive mobile devices not only pose new challenges for existing wireless networks to accommodate the surging demands, but also offer new opportunities to support various services. For example, device-to-device (D2D) communications provide a promising paradigm for data dissemination with low resource cost and high energy efficiency. In this paper, we propose a three-phase approach for D2D data dissemination, which exploits social-awareness and opportunistic contacts with user mobility. The proposed approach includes one phase of seed selection and two subsequent phases of data forwarding. In Phase I, we build a social-physical graph model, which combines the social network and the mobile network with opportunistic transmissions. Then we partition the social-physical graph into communities using the Girvan-Newman algorithm based on edge-betweenness, and select seeds for the communities according to vertex-closeness. In Phase II, data forwarding only takes place among socially connected users. In Phase III, the base station intervenes to enable data forwarding among cooperative users. For Phases II and III, we propose new mechanisms for message selection and cooperation pairing, which take into account both altruistic and selfish behaviors of users. A theoretical analysis of the message selection mechanism proves its truthfulness and worst-case approximation ratio. Extensive simulation results further demonstrate the effectiveness of the proposed three-phase approach with various synthetic and real tracing datasets.
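The Phase-I community-detection step can be sketched in plain Python: Brandes-style edge betweenness plus iterative removal of the highest-betweenness edge, which is the core of the Girvan-Newman algorithm. This is a self-contained illustration on a toy graph, not the authors' implementation.

```python
from collections import deque, defaultdict

def edge_betweenness(graph):
    """Brandes-style edge betweenness for an unweighted, undirected
    graph given as {node: set_of_neighbors}."""
    bet = defaultdict(float)
    for s in graph:
        dist, sigma, preds = {s: 0}, defaultdict(float), defaultdict(list)
        sigma[s] = 1.0
        order, queue = [], deque([s])
        while queue:                      # BFS shortest-path counting
            v = queue.popleft()
            order.append(v)
            for w in graph[v]:
                if w not in dist:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = defaultdict(float)        # back-propagate dependencies
        for w in reversed(order):
            for v in preds[w]:
                c = sigma[v] / sigma[w] * (1.0 + delta[w])
                bet[frozenset((v, w))] += c
                delta[v] += c
    return bet

def components(graph):
    """Connected components via BFS."""
    seen, comps = set(), []
    for s in graph:
        if s in seen:
            continue
        comp, queue = {s}, deque([s])
        seen.add(s)
        while queue:
            v = queue.popleft()
            for w in graph[v]:
                if w not in seen:
                    seen.add(w)
                    comp.add(w)
                    queue.append(w)
        comps.append(comp)
    return comps

def girvan_newman_split(graph):
    """Remove highest-betweenness edges until the graph splits once."""
    g = {v: set(ns) for v, ns in graph.items()}
    while len(components(g)) == 1:
        bet = edge_betweenness(g)
        u, v = max(bet, key=bet.get)
        g[u].discard(v)
        g[v].discard(u)
    return components(g)

# two triangles joined by a bridge; the bridge has maximal betweenness
g = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
     3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(sorted(sorted(c) for c in girvan_newman_split(g)))  # [[0, 1, 2], [3, 4, 5]]
```

Removing the bridge edge (2, 3), which carries all cross-community shortest paths, splits the graph into the two intended communities.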

Journal ArticleDOI
TL;DR: An efficient task offloading and channel resource allocation scheme for MEC in 5G UDN, based on a differential evolution algorithm; simulation results show that the scheme clearly reduces energy consumption and converges well.
Abstract: Driven by the vision of 5G communication, the demand for mobile communication services has increased explosively. Ultra-dense networking (UDN) is a key 5G technology. The combination of mobile edge computing (MEC) and UDN can not only cope with access from massive numbers of communication devices, but also provide powerful computing capacity for users at the edge of wireless networks. A MEC-based UDN can effectively process computation-intensive and data-intensive tasks. However, when a large number of users offload tasks to the edge server, both the network load and the transmission interference increase. In this paper, the problem of task offloading and channel resource allocation based on MEC in 5G UDN is studied. Specifically, we formulate task offloading as an integer nonlinear programming problem. Because the decision variables are coupled, we propose an efficient task offloading and channel resource allocation scheme based on a differential evolution algorithm. Simulation results show that the proposed scheme clearly reduces energy consumption and converges well.
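The optimizer at the heart of such a scheme is easy to sketch. Below is a toy differential-evolution (DE/rand/1/bin) loop; the objective is a stand-in sphere function rather than the paper's energy model, and all parameter values are illustrative.

```python
import random

def differential_evolution(obj, dim, bounds=(-5.0, 5.0), pop_size=20,
                           F=0.5, CR=0.9, generations=200, seed=1):
    """Minimize obj over a box by the classic DE/rand/1/bin scheme."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    fit = [obj(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)    # force at least one mutated gene
            trial = []
            for j in range(dim):          # mutation + binomial crossover
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    v = min(hi, max(lo, v))
                else:
                    v = pop[i][j]
                trial.append(v)
            f = obj(trial)
            if f <= fit[i]:               # greedy one-to-one selection
                pop[i], fit[i] = trial, f
    best = min(range(pop_size), key=fit.__getitem__)
    return pop[best], fit[best]

x_best, f_best = differential_evolution(lambda x: sum(v * v for v in x), dim=4)
print(f_best < 1e-3)  # the population converges near the optimum at 0
```

For the offloading problem, each candidate vector would encode per-user offloading decisions and channel choices (with the integer coordinates rounded before evaluation), and the objective would be the total energy consumption.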

Journal ArticleDOI
TL;DR: This paper reviews software-defined networking (SDN) as a frontier technology for 5G and 6G, including system architecture, resource management, mobility management, interference management, open issues, and remaining challenges.
Abstract: Current mobile communications cannot satisfy the explosive data requirements of users. This paper reviews software-defined networking (SDN) as a frontier technology for 5G and 6G, including system architecture, resource management, mobility management, interference management, challenges, and open issues. First, the system architectures of 5G and 6G mobile networks are introduced based on SDN technologies. Then typical SDN-5G/6G application scenarios and key issues are discussed. We also focus on mobility management approaches in mobile networks, describing and comparing three types of mobility management mechanisms in software-defined 5G/6G. We then summarize current interference management techniques in wireless cellular networks and provide a brief survey of interference management methods in SDN-5G/6G. Finally, turning to the challenges, we discuss the mm-Wave spectrum, the unavailability of widely accepted channel models, massive MIMO, low latency and QoE, energy efficiency, scalability, mobility and routing, interoperability, standardization, and security for software-defined 5G/6G networks.

Journal ArticleDOI
TL;DR: Discusses how these technologies improve network intelligence and enable deterministic content delivery over 5G optical transport networks with large capacity, low latency, and high efficiency.
Abstract: The fifth generation (5G) of mobile communications faces big challenges due to the proliferation of diversified terminals and unprecedented services such as the internet of things (IoT), high-definition video, and virtual/augmented reality (VR/AR). To accommodate massive connections and astonishing mobile traffic, an efficient 5G transport network is required, and optical transport networks have been demonstrated to play an important role in carrying 5G radio signals. This paper focuses on the future challenges, recent studies, and potential solutions for flexible 5G optical transport networks with large capacity, low latency, and high efficiency. In addition, we discuss the technology development trends of 5G transport networks in terms of optical devices, optical transport systems, optical switching, and optical networking. Finally, we conclude with how these technologies improve network intelligence and enable deterministic content delivery over 5G optical transport networks.

Journal ArticleDOI
TL;DR: An uplink scheduling technique for a LEO satellite-based mMTC-NB-IoT system that mitigates the differential Doppler down to a value tolerable by the IoT devices.
Abstract: Narrowband Internet of Things (NB-IoT) is one of the most promising IoT technologies for supporting the massive machine-type communication (mMTC) scenarios of the fifth generation mobile communication (5G). While the aim of this technology is to provide global coverage to low-cost IoT devices distributed all over the globe, the vital role of satellites in complementing and extending the terrestrial IoT network in remote or under-served areas has been recognized. For a global IoT network, low Earth orbit (LEO) satellites are beneficial because of their smaller propagation loss, which is of utmost importance for closing the link budget of low-complexity, low-power, cheap IoT devices. However, while this lessens the large delay and signal loss of the geostationary (GEO) orbit, it comes with increased Doppler effects. In this paper, we propose an uplink scheduling technique for a LEO satellite-based mMTC-NB-IoT system, able to mitigate the differential Doppler down to a value tolerable by the IoT devices. The performance of the proposed strategy is validated through numerical simulations, and the achievable data rates of the considered scenario are shown in order to emphasize the limitations that the satellite channel imposes on such systems.
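To see why the Doppler problem is severe at LEO, some back-of-the-envelope numbers help. The sketch below computes the orbital speed of a circular LEO satellite and a crude upper bound on the induced carrier Doppler shift; the constants and the 600 km / 2 GHz choices are illustrative, not the paper's link-level model.

```python
import math

MU = 3.986004418e14    # Earth's gravitational parameter, m^3 / s^2
R_EARTH = 6371e3       # mean Earth radius, m
C = 299792458.0        # speed of light, m / s

def orbital_speed(altitude_m):
    """Speed of a satellite on a circular orbit at the given altitude."""
    return math.sqrt(MU / (R_EARTH + altitude_m))

def max_doppler(carrier_hz, altitude_m):
    """Upper bound on the carrier Doppler shift, reached when the full
    orbital speed is radial with respect to the user."""
    return carrier_hz * orbital_speed(altitude_m) / C

v = orbital_speed(600e3)        # a typical LEO altitude of 600 km
fd = max_doppler(2e9, 600e3)    # a 2 GHz S-band carrier
print(f"{v / 1000:.2f} km/s, {fd / 1000:.1f} kHz")  # 7.56 km/s, 50.4 kHz
```

A Doppler shift of tens of kHz dwarfs the narrow NB-IoT subcarrier spacing, and two devices at different positions in the beam see different shifts; it is this differential component that the proposed uplink scheduling keeps within the devices' tolerance.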

Journal ArticleDOI
TL;DR: Context-enhanced Trajectory Reconstruction is proposed, a new technique that hinges on tensor factorization as a core method to complete individual CDR-based trajectories, unveiling potential biases that incomplete trajectories obtained from legacy CDR induce on key results about human mobility laws, trajectory uniqueness, and movement predictability.
Abstract: Mobile phone data are a popular source of positioning information in many recent studies that have largely improved our understanding of human mobility. These data consist of time-stamped and geo-referenced communication events recorded by network operators, on a per-subscriber basis. They allow for unprecedented tracking of populations of millions of individuals over long periods that span months. Nevertheless, due to the uneven processes that govern mobile communications, the sampling of user locations provided by mobile phone data tends to be sparse and irregular in time, leading to substantial gaps in the resulting trajectory information. In this paper, we illustrate the severity of the problem through an empirical study of a large-scale Call Detail Records (CDR) dataset. We then propose Context-enhanced Trajectory Reconstruction, a new technique that hinges on tensor factorization as a core method to complete individual CDR-based trajectories. The proposed solution infers missing locations with a median displacement within two network cells from the actual position of the user, on an hourly basis and even when as little as 1% of her original mobility is known. Our approach lets us revisit seminal works in the light of complete mobility data, unveiling potential biases that incomplete trajectories obtained from legacy CDR induce on key results about human mobility laws, trajectory uniqueness, and movement predictability.
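The completion idea can be illustrated with a drastically simplified two-dimensional analogue of the tensor factorization (a sketch of the principle only, not the paper's Context-enhanced Trajectory Reconstruction): complete a sparse matrix of one-dimensional positions, with rows as users and columns as hours, by fitting low-rank factors to the observed entries with stochastic gradient descent.

```python
import random

def complete(obs, n_rows, n_cols, rank=1, lr=0.01, epochs=2000, seed=0):
    """Fit row and column factors to the observed entries {(i, j): value}
    by SGD, and return a predictor for any (i, j) cell."""
    rng = random.Random(seed)
    U = [[rng.uniform(0.1, 0.5) for _ in range(rank)] for _ in range(n_rows)]
    V = [[rng.uniform(0.1, 0.5) for _ in range(rank)] for _ in range(n_cols)]
    for _ in range(epochs):
        for (i, j), x in obs.items():     # SGD over observed entries only
            err = sum(U[i][k] * V[j][k] for k in range(rank)) - x
            for k in range(rank):
                u, v = U[i][k], V[j][k]
                U[i][k] -= lr * err * v
                V[j][k] -= lr * err * u
    return lambda i, j: sum(U[i][k] * V[j][k] for k in range(rank))

# a rank-1 "position" matrix with one unobserved entry at (2, 3) = 12
obs = {(i, j): float((i + 1) * (j + 1))
       for i in range(3) for j in range(4) if (i, j) != (2, 3)}
predict = complete(obs, n_rows=3, n_cols=4)
print(abs(predict(2, 3) - 12.0) < 0.5)  # the missing entry is inferred
```

The same principle extends to a third mode (e.g. day-of-week context) by replacing the two factor matrices with three, which is where the tensor formulation of the paper comes in.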

Book ChapterDOI
01 Jan 2019
TL;DR: Massive multiple-input multiple-output (MIMO), millimeter wave (mmWave), mmWave massive MIMO, and beamforming techniques are described in detail, as they are considered promising key technologies for 5G networks.
Abstract: Wireless and mobile communication technologies exhibit remarkable changes every decade, driven by changing user demands and the innovations offered by emerging technologies. This chapter provides information on the current state of fifth generation (5G) mobile communication systems. Before discussing the details of 5G networks, the evolution of mobile communication systems from the first to the fourth generation is considered, and the advantages and weaknesses of each generation are explained comparatively. The technical infrastructure of 5G communication systems is then evaluated in the context of system requirements and new user experiences such as 4K video streaming, the tactile Internet, and augmented reality. After the main goals and requirements of 5G networks are described, the targets planned for real applications of these new-generation systems are clarified, and the different usage scenarios and minimum requirements of IMT-2020 are evaluated. On the other hand, several challenges must be overcome to achieve the intended purpose of 5G communication systems; these challenges and potential solutions for them are described in the following subsections of the chapter. Furthermore, massive multiple-input multiple-output (MIMO), millimeter wave (mmWave), mmWave massive MIMO, and beamforming techniques, which are considered promising key technologies for 5G networks, are explained in detail. Finally, potential application areas and application examples of 5G communication systems are covered at the end of the chapter.

Journal ArticleDOI
TL;DR: This article proposes two supervised methods, one combining rule-based heuristics (RBH) with a random forest (RF) and the other combining RBH with a fuzzy logic system, as well as a third, unsupervised method combining RBH with k-medoids clustering, to detect fine-grained transport modes from CSD, in particular subway, train, tram, bike, car, and walking.
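The unsupervised building block mentioned above, k-medoids, can be sketched in a few lines. The example below runs on made-up average-speed features (m/s) rather than real cellular signalling data, with a simple deterministic seeding; the mode labels in the comments are illustrative.

```python
def k_medoids(points, k, dist, iters=100):
    """Plain alternating k-medoids: assign points to the nearest medoid,
    then re-pick each medoid as the member minimizing in-cluster distance."""
    medoids = points[::max(1, len(points) // k)][:k]  # spread-out seeds
    for _ in range(iters):
        clusters = {m: [] for m in medoids}           # assignment step
        for p in points:
            clusters[min(medoids, key=lambda m: dist(p, m))].append(p)
        new = [min(c, key=lambda cand: sum(dist(cand, q) for q in c))
               for c in clusters.values()]            # update step
        if new == medoids:
            break
        medoids = new
    return medoids

speeds = [1.1, 1.3, 1.5,     # walking
          4.0, 4.5, 5.2,     # cycling
          13.0, 14.2, 15.5]  # driving
meds = sorted(k_medoids(speeds, 3, dist=lambda a, b: abs(a - b)))
print(meds)  # [1.3, 4.5, 14.2], one representative speed per mode
```

Unlike k-means, the medoids are always actual data points, so the method works with any pairwise distance, which matters when features mix speeds, cell-handover rates, and categorical context.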

Journal ArticleDOI
TL;DR: This article examines whether the development of new mobile technologies will be able to support the Industry 4.0 revolution for smart factories and manufacturers.