
Showing papers on "Mobile telephony published in 2017"


Journal ArticleDOI
TL;DR: This paper describes major use cases and reference scenarios where the mobile edge computing (MEC) is applicable and surveys existing concepts integrating MEC functionalities to the mobile networks and discusses current advancement in standardization of the MEC.
Abstract: Technological evolution of mobile user equipment (UEs), such as smartphones or laptops, goes hand-in-hand with the evolution of new mobile applications. However, running computationally demanding applications at the UEs is constrained by limited battery capacity and energy consumption of the UEs. A suitable solution extending the battery life-time of the UEs is to offload the applications demanding huge processing to a conventional centralized cloud. Nevertheless, this option introduces significant execution delay consisting of delivery of the offloaded applications to the cloud and back plus time of the computation at the cloud. Such a delay is inconvenient and makes the offloading unsuitable for real-time applications. To cope with the delay problem, a new emerging concept, known as mobile edge computing (MEC), has been introduced. The MEC brings computation and storage resources to the edge of the mobile network, enabling highly demanding applications to run at the UE while meeting strict delay requirements. The MEC computing resources can also be exploited by operators and third parties for specific purposes. In this paper, we first describe major use cases and reference scenarios where the MEC is applicable. After that, we survey existing concepts integrating MEC functionalities into the mobile networks and discuss current advancement in standardization of the MEC. The core of this survey is then focused on the user-oriented use case in the MEC, i.e., computation offloading. In this regard, we divide the research on computation offloading into three key areas: 1) decision on computation offloading; 2) allocation of computing resources within the MEC; and 3) mobility management. Finally, we highlight lessons learned in the area of MEC and discuss open research challenges yet to be addressed in order to fully exploit the potential offered by the MEC.

1,829 citations


Journal ArticleDOI
TL;DR: A polynomial-time algorithm with successive MBS placement, where the MBSs are placed sequentially starting on the area perimeter of the uncovered GTs along a spiral path toward the center, until all GTs are covered.
Abstract: In terrestrial communication networks without fixed infrastructure, unmanned aerial vehicle-mounted mobile base stations (MBSs) provide an efficient solution to achieve wireless connectivity. This letter aims to minimize the number of MBSs needed to provide wireless coverage for a group of distributed ground terminals (GTs), ensuring that each GT is within the communication range of at least one MBS. We propose a polynomial-time algorithm with successive MBS placement, where the MBSs are placed sequentially starting on the area perimeter of the uncovered GTs along a spiral path toward the center, until all GTs are covered. Numerical results show that the proposed algorithm performs favorably compared with other schemes in terms of the number of required MBSs as well as time complexity.
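To make the sequential-placement idea concrete, here is a minimal greedy sketch in Python: it repeatedly picks an uncovered GT far from the centroid of the remaining uncovered GTs (a stand-in for the letter's perimeter/spiral ordering) and places one MBS to cover it together with nearby uncovered GTs. The re-centring step and all parameters are illustrative assumptions, not the letter's algorithm.

```python
import numpy as np

def place_mbs(gt_xy, radius):
    """Greedy sketch of sequential MBS placement: take an uncovered GT far from the
    centroid of the remaining uncovered GTs (an outer point), re-centre an MBS on the
    uncovered GTs near it, and remove everything the new MBS covers."""
    uncovered = gt_xy.copy()
    mbs = []
    while len(uncovered) > 0:
        centroid = uncovered.mean(axis=0)
        outer = uncovered[np.argmax(np.linalg.norm(uncovered - centroid, axis=1))]
        near = uncovered[np.linalg.norm(uncovered - outer, axis=1) <= 2 * radius]
        pos = near.mean(axis=0)                              # crude local re-centring
        covered = np.linalg.norm(uncovered - pos, axis=1) <= radius
        if not covered.any():                                # fallback: cover the outer GT itself
            pos = outer
            covered = np.linalg.norm(uncovered - outer, axis=1) <= radius
        mbs.append(pos)
        uncovered = uncovered[~covered]
    return np.array(mbs)

gts = np.random.default_rng(0).uniform(0, 1000, size=(80, 2))  # 80 GTs in a 1 km x 1 km area
print(len(place_mbs(gts, radius=150.0)), "MBSs placed")
```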

820 citations


Journal ArticleDOI
TL;DR: This survey makes an exhaustive review on the state-of-the-art research efforts on mobile edge networks, including definition, architecture, and advantages, and presents a comprehensive survey of issues on computing, caching, and communication techniques at the network edge.
Abstract: With the explosive growth of smart devices and the advent of many new applications, traffic volume has been growing exponentially. The traditional centralized network architecture cannot accommodate such user demands due to the heavy burden on the backhaul links and long latency. Therefore, new architectures, which bring network functions and contents to the network edge, are proposed, i.e., mobile edge computing and caching. Mobile edge networks provide cloud computing and caching capabilities at the edge of cellular networks. In this survey, we make an exhaustive review of the state-of-the-art research efforts on mobile edge networks. We first give an overview of mobile edge networks, including definition, architecture, and advantages. Next, a comprehensive survey of issues on computing, caching, and communication techniques at the network edge is presented. The applications and use cases of mobile edge networks are discussed. Subsequently, the key enablers of mobile edge networks, such as cloud technology, SDN/NFV, and smart devices, are discussed. Finally, open research challenges and future directions are presented as well.

782 citations


Journal ArticleDOI
TL;DR: This work formulates the computation offloading decision, resource allocation, and content caching strategy as an optimization problem, considering the total revenue of the network, and develops an alternating direction method of multipliers-based algorithm to solve the optimization problem.
Abstract: Mobile edge computing has risen as a promising technology for augmenting the computational capabilities of mobile devices. Meanwhile, in-network caching has become a natural trend of the solution of handling exponentially increasing Internet traffic. The important issues in these two networking paradigms are computation offloading and content caching strategies, respectively. In order to jointly tackle these issues in wireless cellular networks with mobile edge computing, we formulate the computation offloading decision, resource allocation and content caching strategy as an optimization problem, considering the total revenue of the network. Furthermore, we transform the original problem into a convex problem and then decompose it in order to solve it in a distributed and efficient way. Finally, with recent advances in distributed convex optimization, we develop an alternating direction method of multipliers-based algorithm to solve the optimization problem. The effectiveness of the proposed scheme is demonstrated by simulation results with different system parameters.
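The abstract names ADMM as the solver; the sketch below is not the paper's joint offloading/caching formulation but a minimal, self-contained ADMM loop for a standard lasso problem, shown only to illustrate the alternating primal updates and scaled dual ascent that such distributed algorithms rely on.

```python
import numpy as np

def soft_threshold(v, k):
    return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, iters=200):
    """Minimal ADMM loop for min 0.5*||Ax-b||^2 + lam*||z||_1  s.t. x = z."""
    n = A.shape[1]
    x = z = u = np.zeros(n)
    lhs = np.linalg.inv(A.T @ A + rho * np.eye(n))   # cached system solve for the x-update
    for _ in range(iters):
        x = lhs @ (A.T @ b + rho * (z - u))          # x-update (smooth quadratic part)
        z = soft_threshold(x + u, lam / rho)         # z-update (separable non-smooth part)
        u = u + x - z                                # scaled dual update
    return z

rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.normal(size=50)
print(np.round(admm_lasso(A, b), 2)[:5])
```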

611 citations


Journal ArticleDOI
TL;DR: An algorithm is devised that enables the placement of the cloudlets at user-dense regions of the WMAN and assigns mobile users to the placed cloudlets while balancing their workload; simulation results indicate that the performance of the proposed algorithm is very promising.
Abstract: Mobile applications are becoming increasingly computation-intensive, while the computing capability of portable mobile devices is limited. A powerful way to reduce the completion time of an application in a mobile device is to offload its tasks to nearby cloudlets, which consist of clusters of computers. Although there is a significant body of research in mobile cloudlet offloading technology, there has been very little attention paid to how cloudlets should be placed in a given network to optimize mobile application performance. In this paper, we study cloudlet placement and mobile user allocation to the cloudlets in a wireless metropolitan area network (WMAN). We devise an algorithm for the problem, which enables the placement of the cloudlets at user-dense regions of the WMAN, and assigns mobile users to the placed cloudlets while balancing their workload. We also conduct experiments through simulation. The simulation results indicate that the performance of the proposed algorithm is very promising.
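As a rough illustration of the two steps described in the abstract (placing cloudlets at user-dense regions, then allocating users while balancing load), the following sketch uses plain k-means for placement and a greedy capacity-limited assignment; both choices are assumptions for illustration and are not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(9)
blobs = np.array([[2.0, 2.0], [8.0, 3.0], [5.0, 8.0]])         # three user-dense regions
users = (blobs[:, None, :] + 0.8 * rng.normal(size=(3, 100, 2))).reshape(-1, 2)

def kmeans(points, k, iters=50):
    """Plain k-means: centroids gravitate toward user-dense regions."""
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)
        centers = np.array([points[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return centers

def balanced_assign(points, centers, capacity):
    """Greedy load-balanced allocation: each user takes the nearest cloudlet with spare capacity."""
    dists = np.linalg.norm(points[:, None] - centers[None], axis=2)
    load = np.zeros(len(centers), dtype=int)
    assign = np.empty(len(points), dtype=int)
    for i in np.argsort(dists.min(axis=1)):                    # users closest to any cloudlet commit first
        j = next(c for c in np.argsort(dists[i]) if load[c] < capacity)
        assign[i] = j
        load[j] += 1
    return assign, load

centers = kmeans(users, k=3)
assign, load = balanced_assign(users, centers, capacity=110)   # 300 users, 3 cloudlets
print("cloudlet positions:\n", np.round(centers, 2), "\nloads:", load)
```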

412 citations


Journal ArticleDOI
TL;DR: The proposed scheme enforces an autonomic creation of MEC services to allow anywhere, anytime data access with optimum QoE and reduced latency, relying on a smart MEC architecture capable of achieving the 1 ms latency dream for the upcoming 5G mobile systems.
Abstract: This article proposes an approach to enhance users' experience of video streaming in the context of smart cities. The proposed approach relies on the concept of MEC as a key factor in enhancing QoS. It sustains QoS by ensuring that applications/services follow the mobility of users, realizing the "Follow Me Edge" concept. The proposed scheme enforces an autonomic creation of MEC services to allow anywhere anytime data access with optimum QoE and reduced latency. Considering its application in smart city scenarios, the proposed scheme represents an important solution for reducing core network traffic and ensuring ultra-short latency through a smart MEC architecture capable of achieving the 1 ms latency dream for the upcoming 5G mobile systems.

351 citations


Journal ArticleDOI
TL;DR: An integrated framework that can enable dynamic orchestration of networking, caching, and computing resources to improve the performance of applications for smart cities is proposed and a novel big data deep reinforcement learning approach is presented.
Abstract: Recent advances in networking, caching, and computing have significant impacts on the developments of smart cities. Nevertheless, these important enabling technologies have traditionally been studied separately in the existing works on smart cities. In this article, we propose an integrated framework that can enable dynamic orchestration of networking, caching, and computing resources to improve the performance of applications for smart cities. Then we present a novel big data deep reinforcement learning approach. Simulation results with different system parameters are presented to show the effectiveness of the proposed scheme.

335 citations


Journal ArticleDOI
TL;DR: In this article, a user-centric energy-aware mobility management (EMM) scheme is proposed to optimize the delay due to both radio access and computation under the long-term energy consumption constraint of the user.
Abstract: Merging mobile edge computing (MEC) functionality with the dense deployment of base stations (BSs) provides enormous benefits such as real proximity and low-latency access to computing resources. However, the envisioned integration creates many new challenges, among which mobility management (MM) is a critical one. Simply applying existing radio access-oriented MM schemes leads to poor performance mainly due to the co-provisioning of radio access and computing services of the MEC-enabled BSs. In this paper, we develop a novel user-centric energy-aware mobility management (EMM) scheme, in order to optimize the delay due to both radio access and computation, under the long-term energy consumption constraint of the user. Based on Lyapunov optimization and multi-armed bandit theories, EMM works in an online fashion without future system state information, and effectively handles the imperfect system state information. Theoretical analysis explicitly takes radio handover and computation migration cost into consideration and proves a bounded deviation on both the delay performance and energy consumption compared with the oracle solution with exact and complete future system information. The proposed algorithm also effectively handles the scenario in which candidate BSs randomly switch ON/OFF during the offloading process of a task. Simulations show that the proposed algorithms can achieve close-to-optimal delay performance while satisfying the user energy consumption constraint.
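The following toy sketch combines the two ingredients the abstract mentions: a Lyapunov-style virtual energy-deficit queue that enforces a long-term energy budget, and a UCB-style bonus that copes with unknown per-BS delay statistics. The delay/energy models and constants are invented for illustration and do not reproduce EMM itself.

```python
import numpy as np

rng = np.random.default_rng(3)
N_BS, T = 4, 2000
true_delay = rng.uniform(20, 60, N_BS)      # unknown mean delay per BS (ms), illustrative
energy_cost = rng.uniform(0.5, 2.0, N_BS)   # offloading/handover energy per task (J), illustrative
E_BUDGET, V = 1.2, 50.0                     # per-slot energy budget and Lyapunov weight

q = 0.0                                      # virtual energy-deficit queue
counts, mean_d = np.zeros(N_BS), np.zeros(N_BS)
for t in range(T):
    # UCB-style optimistic delay estimate for unexplored or uncertain BSs
    bonus = np.sqrt(2 * np.log(t + 1) / np.maximum(counts, 1))
    est = np.where(counts == 0, 0.0, mean_d - 10 * bonus)
    # drift-plus-penalty: delay weighted by V, energy weighted by the queue backlog
    b = int(np.argmin(V * est + q * energy_cost))
    d = true_delay[b] + rng.normal(0, 5)                  # observed delay
    counts[b] += 1
    mean_d[b] += (d - mean_d[b]) / counts[b]
    q = max(q + energy_cost[b] - E_BUDGET, 0.0)           # queue update enforces the average budget
print("selection counts:", counts.astype(int), " final deficit queue:", round(q, 2))
```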

332 citations


Journal ArticleDOI
TL;DR: The approach, implemented via successive convex approximation, is seen to yield considerable gains in mobile energy consumption as compared to conventional independent offloading across users.
Abstract: Mobile edge computing is a promising solution to enable augmented reality (AR) applications on mobile devices. AR mobile applications have inherent collaborative properties in terms of data collection in the uplink, computing at the edge, and data delivery in the downlink. In this letter, these features are leveraged to propose a novel resource allocation approach over both communication and computation resources. The approach, implemented via successive convex approximation, is seen to yield considerable gains in mobile energy consumption as compared to conventional independent offloading across users.

290 citations


Journal ArticleDOI
TL;DR: This survey discusses advances in tracking and registration, since their functionality is crucial to any MAR application, as well as the network connectivity of the devices that run MAR applications and its importance to the performance of the application.
Abstract: The boom in the capabilities and features of mobile devices, like smartphones, tablets, and wearables, combined with the ubiquitous and affordable Internet access and the advances in the areas of cooperative networking, computer vision, and mobile cloud computing, transformed mobile augmented reality (MAR) from science fiction to a reality. Although mobile devices are more computationally constrained than traditional computers, they have a multitude of sensors that can be used for the development of more sophisticated MAR applications and can be assisted by remote servers for the execution of their intensive parts. In this paper, after introducing the reader to the basics of MAR, we present a categorization of the application fields together with some representative examples. Next, we introduce the reader to the user interface and experience in MAR applications and continue with the core system components of MAR systems. After that, we discuss advances in tracking and registration, since their functionality is crucial to any MAR application, and the network connectivity of the devices that run MAR applications together with its importance to the performance of the application. We continue with the importance of data management in MAR systems and the systems' performance and sustainability, and before we conclude this survey, we present existing challenging problems.

285 citations


Journal ArticleDOI
TL;DR: In this article, an efficient reinforcement learning-based resource management algorithm is proposed to minimize the long-term system cost, including both service delay and operational cost, by using a decomposition of the (offline) value iteration and (online) reinforcement learning.
Abstract: Mobile edge computing (also known as fog computing) has recently emerged to enable in-situ processing of delay-sensitive applications at the edge of mobile networks. Providing grid power supply in support of mobile edge computing, however, is costly and even infeasible (in certain rugged or under-developed areas), thus mandating on-site renewable energy as a major or even sole power supply in increasingly many scenarios. Nonetheless, the high intermittency and unpredictability of renewable energy make it very challenging to deliver a high quality of service to users in energy harvesting mobile edge computing systems. In this paper, we address the challenge of incorporating renewables into mobile edge computing and propose an efficient reinforcement learning-based resource management algorithm, which learns on-the-fly the optimal policy of dynamic workload offloading (to the centralized cloud) and edge server provisioning to minimize the long-term system cost (including both service delay and operational cost). Our online learning algorithm uses a decomposition of the (offline) value iteration and (online) reinforcement learning, thus achieving a significant improvement of learning rate and run-time performance when compared to standard reinforcement learning algorithms such as Q-learning. We prove the convergence of the proposed algorithm and analytically show that the learned policy has a simple monotone structure amenable to practical implementation. Our simulation results validate the efficacy of our algorithm, which significantly improves the edge computing performance compared to fixed or myopic optimization schemes and conventional reinforcement learning algorithms.
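The paper's algorithm decomposes offline value iteration and online reinforcement learning; as a much simpler point of comparison, the sketch below is a plain tabular Q-learning loop for a toy edge-vs-cloud offloading decision driven by a harvested-battery state. All dynamics and costs are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
B_LEVELS, ACTIONS = 6, 2        # battery (harvested-energy) level; actions: 0 = serve at edge, 1 = offload to cloud
alpha, gamma, eps = 0.1, 0.95, 0.1
Q = np.zeros((B_LEVELS, ACTIONS))

def step(battery, action):
    """Toy dynamics: edge processing is fast but drains the battery (and is penalised when the
    battery is empty, standing in for grid power); offloading adds delay but saves local energy."""
    harvest = rng.integers(0, 2)                       # random renewable-energy arrival
    if action == 0:
        cost = 1.0 + (2.0 if battery == 0 else 0.0)
        battery = max(battery - 1, 0)
    else:
        cost = 2.5
    return min(battery + harvest, B_LEVELS - 1), cost

b = B_LEVELS - 1
for t in range(20000):
    a = rng.integers(ACTIONS) if rng.random() < eps else int(np.argmin(Q[b]))
    b_next, cost = step(b, a)
    Q[b, a] += alpha * (cost + gamma * Q[b_next].min() - Q[b, a])   # minimise long-term cost
    b = b_next
print("learned action per battery level (0 = edge, 1 = cloud):", np.argmin(Q, axis=1))
```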

Proceedings ArticleDOI
27 Mar 2017
TL;DR: MoDNN is proposed — a local distributed mobile computing system for DNN applications that can partition already trained DNN models onto several mobile devices to accelerate DNN computations by alleviating device-level computing cost and memory usage.
Abstract: Although Deep Neural Networks (DNN) are ubiquitously utilized in many applications, it is generally difficult to deploy DNNs on resource-constrained devices, e.g., mobile platforms. Some existing attempts mainly focus on client-server computing paradigm or DNN model compression, which require either infrastructure supports or special training phases, respectively. In this work, we propose MoDNN — a local distributed mobile computing system for DNN applications. MoDNN can partition already trained DNN models onto several mobile devices to accelerate DNN computations by alleviating device-level computing cost and memory usage. Two model partition schemes are also designed to minimize non-parallel data delivery time, including both wakeup time and transmission time. Experimental results show that when the number of worker nodes increases from 2 to 4, MoDNN can accelerate the DNN computation by 2.17–4.28×. Besides the parallel execution, the performance speedup also partially comes from the reduction of the data delivery time, e.g., 30.02% w.r.t. conventional 2D-grids partition.
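A minimal illustration of the partitioning idea (not MoDNN's actual schemes, which also minimize data-delivery time): split the output neurons of one trained fully connected layer across worker devices and let the owner concatenate the partial results.

```python
import numpy as np

rng = np.random.default_rng(5)
x = rng.normal(size=512)                    # layer input held by the group owner
W = rng.normal(size=(2048, 512))            # trained weights of one fully connected layer
b = rng.normal(size=2048)

def partition_layer(W, b, n_workers):
    """Split the output neurons of a dense layer across n_workers devices."""
    rows = np.array_split(np.arange(W.shape[0]), n_workers)
    return [(W[r], b[r]) for r in rows]

def distributed_forward(x, parts):
    """Each 'device' computes its slice of the layer; the owner concatenates the results."""
    return np.concatenate([np.maximum(Wp @ x + bp, 0.0) for Wp, bp in parts])  # ReLU

parts = partition_layer(W, b, n_workers=4)
y_dist = distributed_forward(x, parts)
y_ref = np.maximum(W @ x + b, 0.0)
print("matches single-device result:", np.allclose(y_dist, y_ref))
```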

Journal ArticleDOI
TL;DR: Simulation results demonstrate that the proposed mobile relaying scheme could significantly outperform the static relaying scheme in terms of secrecy enhancement, and the algorithm developed is proven to have Karush–Kuhn–Tucker point convergence guarantee.
Abstract: Mobile relaying has aroused great interest in wireless communications recently, thanks to the rapid development and evolvement of unmanned aerial vehicles. This letter establishes the utility of mobile relaying in facilitating secure wireless communications. In particular, we consider transmit optimization in a four-node (source, destination, buffer-aided mobile relay, and eavesdropper) channel setup, wherein we aim at maximizing the secrecy rate of this system. However, the secrecy rate maximization problem is nonconvex and intractable to solve. To circumvent the nonconvexity, we exploit the difference-of-concave (DC) program to develop an iterative algorithm, which is proven to have Karush–Kuhn–Tucker point convergence guarantee. The algorithm conducts a water-filling-based solution in each DC iteration, and thus is computationally efficient to implement. In addition, for a given special case, we show that each DC iteration could yield a closed-form solution, which further reduces the computational complexity. Simulation results demonstrate that the proposed mobile relaying scheme could significantly outperform the static relaying scheme in terms of secrecy enhancement.
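To illustrate the DC (difference-of-concave) iteration on the simplest possible case, the sketch below maximizes a scalar secrecy rate log2(1 + a*p) - log2(1 + e*p) over a transmit power p: the eavesdropper term is linearized at the current point and the resulting concave surrogate has a water-filling-like closed-form maximizer. The four-node mobile-relay setting of the letter is not modeled here.

```python
import numpy as np

def secrecy_rate(p, a, e):
    """R(p) = log2(1 + a*p) - log2(1 + e*p): rate to destination minus rate to eavesdropper."""
    return np.log2(1 + a * p) - np.log2(1 + e * p)

def dc_secrecy_power(a, e, p_max, iters=20):
    """Toy scalar DC iteration: linearise the (subtracted) eavesdropper term at the current
    point, then maximise the concave surrogate in closed form and project onto [0, p_max]."""
    p = p_max / 2
    for _ in range(iters):
        c = e / ((1 + e * p) * np.log(2))        # gradient of the linearised term
        p = np.clip(1 / (c * np.log(2)) - 1 / a, 0.0, p_max)
    return p

a, e, p_max = 2.0, 0.5, 5.0                      # channel gains (destination stronger) and power budget
p_star = dc_secrecy_power(a, e, p_max)
print("power:", round(p_star, 3), " secrecy rate:", round(secrecy_rate(p_star, a, e), 3), "bits/s/Hz")
```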

Journal ArticleDOI
TL;DR: This paper shows how the adoption of mobile telephony to provide financial services in sub-Saharan Africa has become instrumental in integrating the hitherto unbanked segments of the population into the mainstream financial systems.

Proceedings ArticleDOI
01 May 2017
TL;DR: An efficient three-step algorithm comprising semidefinite relaxation (SDR), alternating optimization (AO), and sequential tuning (ST) is shown to always compute a locally optimal solution and to give nearly optimal performance under a wide range of parameter settings.
Abstract: We consider a general multi-user mobile cloud computing system with a computing access point (CAP), where each mobile user has multiple independent tasks that may be processed locally, at the CAP, or at a remote cloud server. The CAP serves both as the network access gateway and a computation service provider to the mobile users. We aim to jointly optimize the offloading decisions of all users' tasks as well as the allocation of computation and communication resources, to minimize the overall cost of energy, computation, and delay for all users. This problem is NP-hard in general. We propose an efficient three-step algorithm comprising semidefinite relaxation (SDR), alternating optimization (AO), and sequential tuning (ST). It is shown to always compute a locally optimal solution, and give nearly optimal performance under a wide range of parameter settings. Through evaluating the performance of different combinations of the three components of this SDR-AO-ST algorithm, we provide insights into their roles and contributions in the overall solution. We further compare the performance of SDR-AO-ST against a lower bound to the minimum cost, purely local processing, purely cloud processing, and hybrid local-cloud processing without using the CAP. Our numerical results demonstrate the effectiveness of the proposed algorithm in the joint management of computation and communication resources in mobile cloud computing systems with a CAP.

Journal ArticleDOI
TL;DR: This department describes phone, watch, and embedded prototypes that can locally run large-scale deep networks processing audio, images, and inertial sensor data and vastly reduce conventional inference-time overhead of deep models.
Abstract: This department provides an overview of the progress the authors have made in the emerging area of embedded and mobile forms of on-device deep learning. Their work addresses two core technical questions. First, how should deep learning principles and algorithms be applied to sensor inference problems that are central to this class of computing? Second, what is required for current and future deep learning innovations to be efficiently integrated into a variety of mobile resource-constrained systems? Toward answering such questions, the authors describe phone, watch, and embedded prototypes that can locally run large-scale deep networks processing audio, images, and inertial sensor data. These prototypes are enabled with a variety of algorithmic and system-level innovations that vastly reduce conventional inference-time overhead of deep models.

Journal ArticleDOI
TL;DR: Results indicate that the system and embedded decision algorithm are able to provide decisions on selecting the wireless medium and cloud resources based on the different contexts of the mobile devices, and achieve significant reductions in makespan and energy, with improved service availability when compared with existing offloading schemes.
Abstract: Mobile cloud computing (MCC) has become a significant paradigm for bringing the benefits of cloud computing to mobile devices' proximity. Service availability along with performance enhancement and energy efficiency are primary targets in MCC. This paper proposes a code offloading framework, called mCloud, which consists of mobile devices, nearby cloudlets and public cloud services, to improve the performance and availability of the MCC services. The effect of the mobile device context (e.g., network conditions) on offloading decisions is studied by proposing a context-aware offloading decision algorithm aiming to provide code offloading decisions at runtime on selecting the wireless medium and appropriate cloud resources for offloading. We also investigate failure detection and recovery policies for our mCloud system. We explain in detail the design and implementation of the mCloud prototype framework. We conduct real experiments on the implemented system to evaluate the performance of the algorithm. Results indicate that the system and embedded decision algorithm are able to provide decisions on selecting the wireless medium and cloud resources based on the different contexts of the mobile devices, and achieve significant reductions in makespan and energy, with improved service availability when compared with existing offloading schemes.
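In the spirit of the context-aware decision algorithm described above, here is a minimal decision rule that scores local execution and each available (wireless medium, cloud resource) pair by a weighted sum of estimated completion time and device energy; the cost models, parameters, and option names are illustrative assumptions rather than mCloud's actual logic.

```python
def offload_decision(task_mbits, task_mcycles, options, w_time=0.5, w_energy=0.5):
    """Score local execution and each (wireless medium, cloud resource) pair by a
    weighted sum of estimated completion time and device energy; illustrative models."""
    best = None
    for name, opt in options.items():
        if name == "local":
            t = task_mcycles / opt["cpu_mhz"]                 # seconds
            e = opt["cpu_power_w"] * t
        elif not opt["available"]:                            # context: a medium may be unavailable
            continue
        else:
            t_tx = task_mbits / opt["rate_mbps"]
            t = t_tx + task_mcycles / opt["cloud_mhz"]
            e = opt["tx_power_w"] * t_tx                      # device only pays for transmission
        score = w_time * t + w_energy * e
        if best is None or score < best[1]:
            best = (name, score, t, e)
    return best

options = {
    "local":            {"cpu_mhz": 1500, "cpu_power_w": 2.0},
    "cloudlet_wifi":    {"available": True, "rate_mbps": 40, "cloud_mhz": 8000,  "tx_power_w": 0.8},
    "public_cloud_lte": {"available": True, "rate_mbps": 8,  "cloud_mhz": 20000, "tx_power_w": 1.2},
}
print(offload_decision(task_mbits=20, task_mcycles=6000, options=options))
```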

Book ChapterDOI
01 Jan 2017
TL;DR: Considering the trend in 5G, achieving significant gains in capacity and system throughput performance is a high-priority requirement in view of the recent exponential increase in the volume of mobile traffic, and the proposed system should be able to support enhanced delay-sensitive high-volume services.
Abstract: Radio access technologies for cellular mobile communications are typically characterized by multiple access schemes, e.g., frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), and OFDMA. In the 4th generation (4G) mobile communication systems such as Long-Term Evolution (LTE) (Au et al., Uplink contention based SCMA for 5G radio access. Globecom Workshops (GC Wkshps), 2014. doi: 10.1109/GLOCOMW.2014.7063547) and LTE-Advanced (Baracca et al., IEEE Trans. Commun., 2011. doi: 10.1109/TCOMM.2011.121410.090252; Barry et al., Digital Communication, Kluwer, Dordrecht, 2004), standardized by the 3rd Generation Partnership Project (3GPP), orthogonal multiple access based on OFDMA or single carrier (SC)-FDMA is adopted. Orthogonal multiple access was a reasonable choice for achieving good system-level throughput performance with simple single-user detection. However, considering the trend in 5G, achieving significant gains in capacity and system throughput performance is a high-priority requirement in view of the recent exponential increase in the volume of mobile traffic. In addition, the proposed system should be able to support enhanced delay-sensitive high-volume services such as video streaming and cloud computing. Another high-level target of 5G is reduced cost, higher energy efficiency, and robustness against emergencies.

Journal ArticleDOI
TL;DR: In this article, a paradigm shift of wireless security to the surveillance and intervention of infrastructure-free suspicious and malicious wireless communications, by exploiting legitimate eavesdropping and jamming jointly, is presented.
Abstract: Conventional wireless security assumes wireless communications are legitimate, and aims to protect them against malicious eavesdropping and jamming attacks. However, emerging infrastructure-free mobile communication networks can be illegally used (e.g., by criminals or terrorists) but are difficult to monitor, thus imposing new challenges in public security. To tackle this issue, this article presents a paradigm shift of wireless security to the surveillance and intervention of infrastructure-free suspicious and malicious wireless communications, by exploiting legitimate eavesdropping and jamming jointly. In particular, proactive eavesdropping (via jamming) is proposed to intercept and decode information from suspicious communication links for the purpose of inferring their intentions and deciding further measures against them. Cognitive jamming (via eavesdropping) is also proposed to disrupt, disable, and even spoof the targeted malicious wireless communications to achieve various intervention tasks.

Journal ArticleDOI
TL;DR: In this article, the performance of edge content caching for mobile video streaming is analyzed using frequency-domain and entropy analysis approaches, and an efficient caching strategy is designed based on the measurement insights and experimentally evaluated.
Abstract: Today's Internet has witnessed an increase in the popularity of mobile video streaming, which is expected to exceed 3/4 of the global mobile data traffic by 2019. To satisfy the considerable amount of mobile video requests, video service providers have been pushing their content delivery infrastructure to edge networks—from regional content delivery network (CDN) servers to peer CDN servers (e.g., smart routers in users' homes)—to cache content and serve users with storage and network resources nearby. Among the edge network content caching paradigms, Wi-Fi access point caching and cellular base station caching have become two mainstream solutions. Thus, understanding the effectiveness and performance of these solutions for large-scale mobile video delivery is important. However, the characteristics and request patterns of mobile video streaming are unclear in practical wireless networks. In this paper, we use real-world data sets containing 50 million trace items of nearly 2 million users viewing more than 0.3 million unique videos using mobile devices in a metropolis in China over two weeks, not only to understand the request patterns and user behaviors in mobile video streaming, but also to evaluate the effectiveness of Wi-Fi and cellular-based edge content caching solutions. To understand the performance of edge content caching for mobile video streaming, we first present temporal and spatial video request patterns, and we analyze their impacts on caching performance using frequency-domain and entropy analysis approaches. We then study the behaviors of mobile video users, including their mobility and geographical migration behaviors, which determine the request patterns. Using trace-driven experiments, we compare strategies for edge content caching, including least recently used (LRU) and least frequently used (LFU), in terms of supporting mobile video requests. We reveal that content, location, and mobility factors all affect edge content caching performance. Moreover, we design an efficient caching strategy based on the measurement insights and experimentally evaluate its performance. The results show that our design significantly improves the cache hit rate by up to 30% compared with LRU/LFU.
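To make the LRU/LFU comparison concrete, here is a small trace-driven simulation over a synthetic Zipf-like request stream (the paper uses a real mobile-video trace); it only illustrates how such hit-rate comparisons are computed.

```python
import numpy as np
from collections import OrderedDict, Counter

def lru_hit_rate(requests, capacity):
    cache, hits = OrderedDict(), 0
    for v in requests:
        if v in cache:
            hits += 1
            cache.move_to_end(v)                       # refresh recency
        else:
            if len(cache) >= capacity:
                cache.popitem(last=False)              # evict least recently used
            cache[v] = True
    return hits / len(requests)

def lfu_hit_rate(requests, capacity):
    cache, freq, hits = set(), Counter(), 0
    for v in requests:
        freq[v] += 1
        if v in cache:
            hits += 1
        else:
            if len(cache) >= capacity:
                cache.remove(min(cache, key=lambda k: freq[k]))   # evict least frequently used
            cache.add(v)
    return hits / len(requests)

rng = np.random.default_rng(6)
catalog, skew = 5000, 0.8                              # Zipf-like video popularity (illustrative)
p = 1.0 / np.arange(1, catalog + 1) ** skew
trace = rng.choice(catalog, size=50000, p=p / p.sum())
print("LRU hit rate:", round(lru_hit_rate(trace, 200), 3),
      " LFU hit rate:", round(lfu_hit_rate(trace, 200), 3))
```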

Proceedings ArticleDOI
21 May 2017
TL;DR: It is revealed that for UAV-enabled mobile relaying with the UAV propulsion energy consumption taken into account, there exists a trade-off between the maximum achievable SE and EE by exploiting the new degree of freedom of UAV trajectory design.
Abstract: Wireless communication by leveraging the use of low-altitude unmanned aerial vehicles (UAVs) has received significant interests recently due to its low-cost and flexibility in providing wireless connectivity in areas without infrastructure coverage. This paper studies a UAV-enabled mobile relaying system, where a high-mobility UAV is deployed to assist in the information transmission from a ground source to a ground destination with their direct link blocked. By assuming that the UAV adopts the energy-efficient circular trajectory and employs time-division duplexing (TDD) based decode-and-forward (DF) relaying, we maximize the spectrum efficiency (SE) in bits/second/Hz as well as energy efficiency (EE) in bits/Joule of the considered system by jointly optimizing the time allocations for the UAV's relaying together with its flying speed and trajectory. It is revealed that for UAV-enabled mobile relaying with the UAV propulsion energy consumption taken into account, there exists a trade-off between the maximum achievable SE and EE by exploiting the new degree of freedom of UAV trajectory design.

Journal ArticleDOI
TL;DR: This article presents Glasgow Network Functions (GNF), a container-based NFV platform that runs and orchestrates lightweight container VNFs, reducing core network utilization and providing lower latency.
Abstract: In order to cope with the increasing network utilization driven by new mobile clients, and to satisfy demand for new network services and performance guarantees, telecommunication service providers are exploiting virtualization over their network by implementing network services in virtual machines, decoupled from legacy hardware accelerated appliances. This effort, known as NFV, reduces OPEX and provides new business opportunities. At the same time, next generation mobile, enterprise, and IoT networks are introducing the concept of computing capabilities being pushed at the network edge, in close proximity of the users. However, the heavy footprint of today's NFV platforms prevents them from operating at the network edge. In this article, we identify the opportunities of virtualization at the network edge and present Glasgow Network Functions (GNF), a container-based NFV platform that runs and orchestrates lightweight container VNFs, reducing core network utilization and providing lower latency. Finally, we demonstrate three useful examples of the platform: IoT DDoS remediation, on-demand troubleshooting for telco networks, and supporting roaming of network functions.

Journal ArticleDOI
TL;DR: The need for the deep customization of mobile networks at different granularity levels is discussed: per network, per application, per group of users, per individual users, and even per data of users.
Abstract: 5G mobile systems are expected to meet different strict requirements beyond the traditional operator use cases. Effectively, to accommodate the needs of new industry segments such as healthcare and manufacturing, 5G systems need to provide elasticity, flexibility, dynamicity, scalability, manageability, agility, and customization, along with different levels of service delivery parameters according to the service requirements. This is currently possible only by running multiple networks on top of the same infrastructure using network function virtualization, thereby sharing the development and infrastructure costs between the different networks. In this article, we discuss the need for the deep customization of mobile networks at different granularity levels: per network, per application, per group of users, per individual users, and even per data of users. The article also assesses the potential of network slicing to provide the appropriate customization and highlights the technology challenges. Finally, a high-level architectural solution is proposed, addressing a massive multi-slice environment.

Journal ArticleDOI
TL;DR: A mobile service provisioning architecture named a mobile service sharing community is proposed and a service composition approach by utilizing the Krill-Herd algorithm is proposed, which can obtain superior solutions as compared with current standard composition methods in mobile environments.
Abstract: The advances in mobile technologies enable mobile devices to perform tasks that are traditionally run by personal computers as well as provide services to others. Mobile users can form a service sharing community within an area by using their mobile devices. This paper highlights several challenges involved in building such service compositions in mobile communities when both service requesters and providers are mobile. To deal with them, we first propose a mobile service provisioning architecture named a mobile service sharing community and then propose a service composition approach by utilizing the Krill-Herd algorithm. To evaluate the effectiveness and efficiency of our approach, we build a simulation tool. The experimental results demonstrate that our approach can obtain superior solutions as compared with current standard composition methods in mobile environments. It can yield near-optimal solutions and has a nearly linear complexity with respect to the problem size.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a scalable framework for data shuffling in a wireless distributed computing system, in which the required communication bandwidth for shuffling does not increase with the number of users in the network.
Abstract: We consider a wireless distributed computing system, in which multiple mobile users, connected wirelessly through an access point, collaborate to perform a computation task. In particular, users communicate with each other via the access point to exchange their locally computed intermediate computation results, which is known as data shuffling . We propose a scalable framework for this system, in which the required communication bandwidth for data shuffling does not increase with the number of users in the network. The key idea is to utilize a particular repetitive pattern of placing the data set (thus a particular repetitive pattern of intermediate computations), in order to provide the coding opportunities at both the users and the access point, which reduce the required uplink communication bandwidth from users to the access point and the downlink communication bandwidth from access point to users by factors that grow linearly with the number of users. We also demonstrate that the proposed data set placement and coded shuffling schemes are optimal (i.e., achieve the minimum required shuffling load) for both a centralized setting and a decentralized setting, by developing tight information-theoretic lower bounds.
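The coding opportunity the framework exploits can be seen in a two-user toy case: when each user already stores the intermediate result the other one needs, a single XOR-coded broadcast replaces two uncoded transmissions. The paper's repetitive placement and shuffling scheme generalizes this to many users; the snippet below covers only this toy case.

```python
import numpy as np

rng = np.random.default_rng(7)
# Intermediate computation results as byte blocks (toy sizes)
A = rng.integers(0, 256, 1024, dtype=np.uint8)   # needed by user 1, already computed/stored by user 2
B = rng.integers(0, 256, 1024, dtype=np.uint8)   # needed by user 2, already computed/stored by user 1

coded = A ^ B                                    # access point broadcasts a single coded block

decoded_at_user1 = coded ^ B                     # user 1 cancels its stored block B to recover A
decoded_at_user2 = coded ^ A                     # user 2 cancels its stored block A to recover B
print(np.array_equal(decoded_at_user1, A), np.array_equal(decoded_at_user2, B))
print("uncoded downlink blocks: 2, coded downlink blocks: 1")
```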

Journal ArticleDOI
TL;DR: The measurements reveal that LTE can provide coverage for 99 percent of the outdoor and road users, but the LTE-M or NarrowBand-IoT upgrades are required in combination with other measures to allow for additional penetration losses, such as those experienced in underground parking lots.
Abstract: Long Term Evolution, the fourth generation of mobile communication technology, has been commercially deployed for about five years. Even though it is continuously updated through new releases, and with LTE Advanced Pro Release 13 being the latest one, the development of the fifth generation has been initiated. In this article, we measure how current LTE network implementations perform in comparison with the initial LTE requirements. The target is to identify certain key performance indicators that have suboptimal implementations and therefore lend themselves to careful consideration when designing and standardizing next generation wireless technology. Specifically, we analyze user and control plane latency, handover execution time, and coverage, which are critical parameters for connected mobility use cases such as road vehicle safety and efficiency. We study the latency, handover execution time, and coverage of four operational LTE networks based on 19,000 km of drive tests covering a mixture of rural, suburban, and urban environments. The measurements have been collected using commercial radio network scanners and measurement smartphones. Even though LTE has low air interface delays, the measurements reveal that core network delays compromise the overall round-trip time design requirement. LTE's break-before-make handover implementation causes a data interruption at each handover of 40 ms at the median level. While this is in compliance with the LTE requirements, and lower values are certainly possible, it is also clear that break-before-make will not be sufficient for connected mobility use cases such as road vehicle safety. Furthermore, the measurements reveal that LTE can provide coverage for 99 percent of the outdoor and road users, but the LTE-M or NarrowBand-IoT upgrades, as of LTE Release 13, are required in combination with other measures to allow for additional penetration losses, such as those experienced in underground parking lots. Based on the observed discrepancies between measured and standardized LTE performance, in terms of latency, handover execution time, and coverage, we conclude the article with a discussion of techniques that need careful consideration for connected mobility in fifth generation mobile communication technology.

Journal ArticleDOI
TL;DR: This paper presents the architecture and functions of the 5G mobile communication system agreed upon in the NextGen study, and describes the main functions and entities of the network.

Journal ArticleDOI
TL;DR: This paper proposes a new secure and lightweight mobile user authentication scheme for mobile cloud computing, based on cryptographic hash, bitwise XOR, and fuzzy extractor functions, and demonstrates that it is secure against possible well-known passive and active attacks and also provides user anonymity.
Abstract: Secure and efficient lightweight user authentication protocol for mobile cloud computing becomes a paramount concern due to the data sharing using Internet among the end users and mobile devices. Mutual authentication of a mobile user and cloud service provider is necessary for accessing of any cloud services. However, the resource constraint nature of mobile devices makes this task more challenging. In this paper, we propose a new secure and lightweight mobile user authentication scheme for mobile cloud computing, based on cryptographic hash, bitwise XOR, and fuzzy extractor functions. Through informal security analysis and rigorous formal security analysis using the random oracle model, it has been demonstrated that the proposed scheme is secure against possible well-known passive and active attacks and also provides user anonymity. Moreover, we provide formal security verification through ProVerif 1.93 simulation for the proposed scheme. Also, we provide an authentication proof of our proposed scheme using the Burrows-Abadi-Needham logic. Since the proposed scheme does not exploit any resource constrained cryptosystem, it has the lowest computation cost compared to existing related schemes. Furthermore, the proposed scheme does not involve the registration center in the authentication process, and therefore it has the lowest communication cost compared with existing related schemes.
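The sketch below is not the paper's protocol; it only illustrates the lightweight primitives the scheme builds on (hash and XOR over a shared secret) in a toy mutual challenge-response exchange, omitting the fuzzy-extractor (biometric) step, registration, and anonymity mechanisms.

```python
import hashlib, hmac, os

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"".join(parts)).digest()

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

shared_key = os.urandom(32)            # established at registration (stands in for the scheme's secrets)

# --- mobile user side: build a login request ---------------------------------
nonce_u = os.urandom(32)
masked = xor(nonce_u, h(shared_key))   # nonce hidden with a hash of the shared secret
proof_u = h(shared_key, nonce_u)       # lets the server check the user knows the key
request = (masked, proof_u)

# --- cloud server side: verify the user and answer ---------------------------
masked_r, proof_r = request
nonce_r = xor(masked_r, h(shared_key))                          # unmask with the same hash
assert hmac.compare_digest(proof_r, h(shared_key, nonce_r))     # authenticate the user
nonce_s = os.urandom(32)
response = (nonce_s, h(shared_key, nonce_r, nonce_s))

# --- user verifies the server (mutual authentication) and derives a session key
nonce_s_r, proof_s = response
assert hmac.compare_digest(proof_s, h(shared_key, nonce_u, nonce_s_r))
session_key = h(shared_key, nonce_u, nonce_s_r, b"session")
print("mutual authentication succeeded; session key:", session_key.hex()[:16], "...")
```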

Journal ArticleDOI
TL;DR: It is proved that the proposed algorithm converges to the optimal solution of the social welfare maximization problem, and it is shown that the prices and task allocation obtained by the algorithm also yield a Walrasian equilibrium.
Abstract: In this paper, we consider joint pricing and task allocation in a unified mobile crowdsensing system, where all task initiators and mobile users are viewed as peers. From an exchange market point of view, the pricing and task allocation in such a unified system depend only on the supply and demand since no one can dominate the process, with the optimal solution being characterized by the Walrasian equilibrium. This is quite different from existing approaches, where each task initiator builds a specific mobile crowdsensing system and provides an incentive mechanism to maximize his/her own utility. We design distributed algorithms to compute the Walrasian equilibrium under the scenario where one cloud platform is available in the system. We propose to maximize the social welfare of the whole system, and dual decomposition is then employed to divide the social welfare maximization problem into a set of subproblems that can be solved by task initiators and mobile users. We prove that the proposed algorithm converges to the optimal solution of the social welfare maximization problem. Further, we show that the prices and task allocation obtained by the algorithm also yield a Walrasian equilibrium. Also, the proposed algorithm does not need the cloud to collect private information such as utility functions of task initiators and cost functions of mobile users. Extensive simulations demonstrate the effectiveness of the proposed algorithms.
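As a toy illustration of price-driven market clearing (not the paper's dual-decomposition algorithm, which keeps utilities and costs private), the sketch below runs a simple tatonnement: the platform raises the price when requested sensing effort exceeds what mobile users are willing to supply, and lowers it otherwise. Demand, supply, and step size are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(8)
a = rng.uniform(8, 12, 5)                                   # task initiators' linear demand: a_i - b_i * p
b = rng.uniform(0.5, 1.5, 5)
c = rng.uniform(0.2, 0.6, 20)                               # mobile users' supply slope: c_j * p (capped)
cap = rng.uniform(1.0, 3.0, 20)

def excess_demand(p):
    demand = np.maximum(a - b * p, 0.0).sum()               # sensing effort requested at price p
    supply = np.minimum(c * p, cap).sum()                    # sensing effort users are willing to sell
    return demand - supply

p, step = 1.0, 0.05
for _ in range(500):
    p = max(p + step * excess_demand(p), 0.0)               # tatonnement: move price toward market clearing
print("equilibrium price ~", round(p, 3), " excess demand ~", round(excess_demand(p), 4))
```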

Journal ArticleDOI
TL;DR: To demonstrate the benefits of link adaptation over a mobile VLC channel, an adaptive system with luminary selection is proposed and improvements in spectral efficiency over non-adaptive systems are demonstrated.
Abstract: In this letter, we propose a realistic channel model for visible light communication (VLC) assuming a mobile user. Based on non-sequential ray tracing, we first obtain channel impulse responses for each point over the user movement trajectories, and then express path loss and delay spread as a function of distance through curve fitting. Our results demonstrate large variations in received power. In system design, this necessitates the use of adaptive schemes, where transmission parameters can be selected according to channel conditions. To demonstrate the benefits of link adaptation over a mobile VLC channel, we propose an adaptive system with luminary selection and demonstrate improvements in spectral efficiency over non-adaptive systems.
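A rough sketch of the adaptation idea, under strong simplifying assumptions (a Lambertian LOS gain, an ad hoc SNR mapping, and invented thresholds): at each user position, select the luminaire with the strongest link and pick the highest spectral efficiency whose SNR threshold is met.

```python
import numpy as np

def los_gain(tx, rx, m=1, area=1e-4):
    """Lambertian line-of-sight channel gain for a downward-facing LED and an upward-facing PD."""
    d = np.linalg.norm(tx - rx)
    cos = (tx[2] - rx[2]) / d                       # irradiance and incidence angles assumed equal here
    return (m + 1) * area * cos ** (m + 1) / (2 * np.pi * d ** 2)

luminaires = np.array([[1.0, 1.0, 3.0], [4.0, 1.0, 3.0], [1.0, 4.0, 3.0], [4.0, 4.0, 3.0]])
# Illustrative SNR thresholds (dB) mapped to spectral efficiencies (bits/s/Hz)
thresholds = [(10, 1.0), (16, 2.0), (22, 3.0), (28, 4.0)]

def adapt(user_xy, tx_power_dbm=30.0, noise_dbm=-98.0):
    rx = np.array([user_xy[0], user_xy[1], 0.85])   # receiver at desk height
    gains = np.array([los_gain(l, rx) for l in luminaires])
    best = int(np.argmax(gains))                    # luminary selection: strongest link
    snr_db = tx_power_dbm + 20 * np.log10(gains[best]) - noise_dbm   # crude electrical-SNR proxy
    se = max([eff for thr, eff in thresholds if snr_db >= thr], default=0.0)
    return best, round(snr_db, 1), se

for pos in [(1.2, 1.1), (2.5, 2.5), (3.9, 3.8)]:    # points along a movement trajectory
    print(pos, "-> luminaire", *adapt(pos))
```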