
Showing papers on "Handover published in 2018"


Journal ArticleDOI
TL;DR: The cutting-edge research efforts on service migration in MEC are reviewed, a taxonomy of the research directions for efficient service migration is presented, and a summary of three technologies for hosting services on edge servers, i.e., virtual machine, container, and agent, is provided.
Abstract: Mobile edge computing (MEC) provides a promising approach to significantly reducing network operational cost and improving the quality of service (QoS) of mobile users by pushing computation resources to the network edge, and it enables a scalable Internet of Things (IoT) architecture for time-sensitive applications (e-healthcare, real-time monitoring, and so on). However, the mobility of users and the limited coverage of edge servers can cause significant network performance degradation, a dramatic drop in QoS, and even interruption of ongoing edge services, making it difficult to ensure service continuity. Service migration, which decides when and where services should be migrated as users move and demand changes, has great potential to address these issues. In this paper, two concepts closely related to service migration, i.e., live migration in data centers and handover in cellular networks, are first discussed. Next, the cutting-edge research efforts on service migration in MEC are reviewed, and a taxonomy of the research directions for efficient service migration is presented. Subsequently, a summary of three technologies for hosting services on edge servers, i.e., virtual machine, container, and agent, is provided. Finally, open research challenges in service migration are identified and discussed.

264 citations


Journal ArticleDOI
TL;DR: A new fog simulator called FogNetSim++ is proposed that provides users with detailed configuration options to simulate a large fog network and enables researchers to incorporate customized mobility models and fog node scheduling algorithms, and manage handover mechanisms.
Abstract: Fog computing is a technology that brings computing and storage resources near to the end user. Being in its infancy, fog computing lacks standardization in terms of architectures and simulation platforms. A number of fog simulators are available today, of which a few are open-source while the rest are commercial. The existing fog simulators mainly focus on the number of devices that can be simulated. Generally, they are geared toward sensor configurations, where sensors generate raw data and fog nodes intelligently process the data before sending it to the back-end cloud or other nodes. These simulators therefore lack network properties and assume reliable, error-free delivery of every service request. Moreover, no simulator allows researchers to incorporate their own fog node management algorithms, such as scheduling, and device handover is not supported in existing work. In this paper, we propose a new fog simulator called FogNetSim++ (available at https://fognetsimpp.com) that provides users with detailed configuration options to simulate a large fog network. It enables researchers to incorporate customized mobility models and fog node scheduling algorithms, and to manage handover mechanisms. In our evaluation setup, a traffic management system is used to demonstrate the scalability and effectiveness of the proposed simulator in terms of CPU and memory usage. We have also benchmarked network parameters such as execution delay, packet error rate, handovers, and latency.

134 citations


Journal ArticleDOI
TL;DR: In this article, the authors provide an overview of measurement techniques for beam and mobility management in mmWave cellular networks, and give insights into the design of accurate, reactive and robust control schemes suitable for a 3GPP NR cellular network.
Abstract: The millimeter wave (mmWave) frequencies offer the availability of huge bandwidths to provide unprecedented data rates to next-generation cellular mobile terminals. However, mmWave links are highly susceptible to rapid channel variations and suffer from severe free-space pathloss and atmospheric absorption. To address these challenges, the base stations and the mobile terminals will use highly directional antennas to achieve sufficient link budget in wide area networks. The consequence is the need for precise alignment of the transmitter and the receiver beams, an operation which may increase the latency of establishing a link, and has important implications for control layer procedures, such as initial access, handover and beam tracking. This tutorial provides an overview of recently proposed measurement techniques for beam and mobility management in mmWave cellular networks, and gives insights into the design of accurate, reactive and robust control schemes suitable for a 3GPP NR cellular network. We will illustrate that the best strategy depends on the specific environment in which the nodes are deployed, and give guidelines to inform the optimal choice as a function of the system parameters.

133 citations


Journal ArticleDOI
TL;DR: A two-layer framework to learn the optimal handover (HO) controllers in possibly large-scale wireless systems supporting mobile Internet-of-Things users or traditional cellular users, where the user mobility patterns could be heterogeneous, is proposed.
Abstract: In this paper, we propose a two-layer framework to learn optimal handover (HO) controllers in possibly large-scale wireless systems supporting mobile Internet-of-Things users or traditional cellular users, where the user mobility patterns may be heterogeneous. In particular, our proposed framework first partitions the user equipments (UEs) with different mobility patterns into clusters, such that the mobility patterns within each cluster are similar. Then, within each cluster, an asynchronous multiuser deep reinforcement learning (RL) scheme is developed to control the HO processes across the UEs in the cluster, with the goal of lowering the HO rate while ensuring a certain system throughput. In this scheme, we use a deep neural network (DNN) as an HO controller learned by each UE via RL in a collaborative fashion. Moreover, we use supervised learning to initialize the DNN controller before the execution of RL, to exploit what is already known from traditional HO schemes and to mitigate the negative effects of random exploration at the initial stage. Furthermore, we show that the adopted global-parameter-based asynchronous framework enables faster training with more UEs, which nicely addresses the scalability issue of supporting large systems. Finally, simulation results demonstrate that the proposed framework can achieve better performance than state-of-the-art online schemes in terms of HO rates.
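The first layer of this framework, partitioning UEs by mobility pattern, can be sketched with a toy clustering routine. Everything here (the feature choice, plain k-means, the parameter values) is an illustrative assumption, not the paper's actual algorithm:

```python
import random

def cluster_ues(mobility_features, k=2, iters=20, seed=0):
    """Toy k-means over per-UE mobility features (e.g. mean speed,
    handover frequency): UEs with similar mobility patterns end up in
    the same cluster and would share one RL-trained HO controller."""
    random.seed(seed)
    centers = [list(c) for c in random.sample(mobility_features, k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for f in mobility_features:
            # assign each UE to its nearest cluster center
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centers[c])))
            groups[i].append(f)
        # recompute each center as the mean of its assigned features
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else centers[gi]
                   for gi, g in enumerate(groups)]
    return centers, groups
```

Each resulting cluster would then train its own DNN-based HO controller via the asynchronous RL scheme described in the abstract.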

103 citations


Journal ArticleDOI
TL;DR: A new DMM scheme based on blockchain is proposed, capable of resolving hierarchical security issues without affecting the network layout, and of satisfying fully distributed security requirements with lower energy consumption.
Abstract: Modern fog network architectures, empowered by IoT applications and 5G communications technologies, are characterized by the presence of a huge number of mobile nodes, which undergo frequent handovers, introducing a significant load on the involved network entities. Considering the distributed and flat nature of these architectures, distributed mobility management (DMM) can be the only viable option for efficiently managing handovers in these scenarios. Existing DMM solutions are capable of providing smooth handovers but lack robustness from the security point of view. Indeed, DMM depends on external mechanisms for handover security and uses a centralized device, which has obvious security and performance implications in flat architectures, where hierarchical dependencies can introduce problems. We propose a new DMM scheme based on blockchain, capable of resolving hierarchical security issues without affecting the network layout, and of satisfying fully distributed security requirements with lower energy consumption.

95 citations


Posted Content
TL;DR: In this article, the authors proposed femtocell network architectures with and without a concentrator, and presented the signal flow with appropriate parameters for handover between 3GPP UMTS based macrocell and femtocell networks.
Abstract: Femtocell networks, which use a home base station and existing xDSL or other cable lines as backhaul connectivity, can fulfill the upcoming demand for high data rates in wireless communication systems and extend the coverage area. Hence, a modified handover procedure for existing networks is needed to support the macrocell/femtocell integrated network. Some modifications of the existing network and protocol architecture are essential for integrating femtocell networks with existing UMTS based macrocell networks. These modifications change the signal flow of handover procedures due to the two-tier cell (macrocell and femtocell) environment. The signal-to-interference ratio should be measured and considered for handover between macrocell and femtocell. Frequent and unnecessary handovers are another problem in this hierarchical network environment, and they must be minimized to improve the performance of the macrocell/femtocell integrated network. In this paper, we first propose femtocell network architectures with and without a concentrator. Then we present the signal flow with appropriate parameters for handover between 3GPP UMTS based macrocell and femtocell networks. A scheme for minimizing unnecessary handoffs is also presented. We simulate the proposed handover optimization scheme to validate its performance.

90 citations


Journal ArticleDOI
TL;DR: A reinforcement learning based handoff policy named SMART is proposed to reduce the number of handoffs in mmWave HetNets while maintaining user Quality of Service (QoS) requirements, together with reinforcement-learning based BS selection algorithms for different UE densities.
Abstract: The millimeter wave (mmWave) radio band is promising for next-generation heterogeneous cellular networks (HetNets) due to the large bandwidth available for meeting the increasing demand of mobile traffic. However, the unique propagation characteristics of the mmWave band cause numerous redundant handoffs in mmWave HetNets, bringing heavy signaling overhead, low energy efficiency, and increased user equipment (UE) outage probability if the conventional Reference Signal Received Power (RSRP) based handoff mechanism is used. In this paper, we propose a reinforcement learning based handoff policy named SMART to reduce the number of handoffs while maintaining user Quality of Service (QoS) requirements in mmWave HetNets. In SMART, we determine handoff trigger conditions by taking into account both mmWave channel characteristics and the QoS requirements of UEs. Furthermore, we propose reinforcement-learning based BS selection algorithms for different UE densities. Numerical results show that in typical scenarios, SMART can significantly reduce the number of handoffs compared with traditional handoff policies without learning.
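The core idea of an RL-based handoff policy that trades achieved rate against handoff cost can be sketched with tabular Q-learning on a toy two-BS environment. This is not SMART itself; the environment, rates, and penalty below are invented for illustration:

```python
import random

def train_handoff_policy(episodes=2000, alpha=0.1, gamma=0.9, eps=0.1,
                         ho_penalty=0.5, seed=1):
    """Toy tabular Q-learning for the handoff decision (0=stay, 1=switch).
    State: (serving_bs, strong_bs). Reward: achieved rate minus a penalty
    per handoff, so the learned policy avoids switching unless the rate
    gain outweighs the handoff cost. All numbers are illustrative."""
    random.seed(seed)
    rates = {True: 1.0, False: 0.2}  # rate when serving BS is / is not the strong one
    Q = {}
    for _ in range(episodes):
        serving, strong = random.randint(0, 1), random.randint(0, 1)
        for _ in range(10):
            s = (serving, strong)
            Q.setdefault(s, [0.0, 0.0])
            # epsilon-greedy action selection
            if random.random() < eps:
                a = random.randint(0, 1)
            else:
                a = max((0, 1), key=lambda x: Q[s][x])
            if a == 1:
                serving = 1 - serving
            r = rates[serving == strong] - (ho_penalty if a == 1 else 0.0)
            # channel evolves: the strong BS occasionally changes
            strong = strong if random.random() < 0.9 else 1 - strong
            s2 = (serving, strong)
            Q.setdefault(s2, [0.0, 0.0])
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
    return Q
```

With these numbers the learned policy stays when the serving BS is already the strong one and switches only when it is not, which is the rate-vs-handoff-cost trade-off the abstract describes.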

87 citations


Proceedings ArticleDOI
01 Nov 2018
TL;DR: In this article, the authors leverage machine learning tools and propose a novel solution for reliability and latency challenges in mmWave MIMO systems, where the base stations learn how to predict that a certain link will experience blockage in the next few time frames using their observations of adopted beamforming vectors.
Abstract: The sensitivity of millimeter wave (mmWave) signals to blockages is a fundamental challenge for mobile mmWave communication systems. The sudden blockage of the line-of-sight (LOS) link between the base station and the mobile user normally disconnects the communication session, which severely impacts system reliability. Further, reconnecting the user to another LOS base station incurs high beam training overhead and a critical latency problem. In this paper, we leverage machine learning tools and propose a novel solution to these reliability and latency challenges in mmWave MIMO systems. In the developed solution, the base stations learn to predict that a certain link will experience blockage in the next few time frames from their observations of the adopted beamforming vectors. This allows the serving base station to proactively hand over the user to another base station with a highly probable LOS link. Simulation results show that the developed deep learning based strategy successfully predicts blockage/hand-off in close to 95% of cases. This reduces the probability of communication session disconnection, ensuring high reliability and low latency in mobile mmWave systems.
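The proactive hand-over logic can be sketched with a deliberately simple count-based predictor standing in for the paper's deep model; the beam-index feature and the threshold are assumptions:

```python
from collections import defaultdict

def fit_blockage_predictor(histories):
    """Count-based estimate of P(blocked next frame | current beam index),
    a simple stand-in for a trained deep network. `histories` is a list of
    (beam_index_sequence, blocked_next) training samples."""
    counts = defaultdict(lambda: [0, 0])  # beam -> [clear count, blocked count]
    for beams, blocked in histories:
        counts[beams[-1]][1 if blocked else 0] += 1
    def predict(beams):
        clear, blocked = counts[beams[-1]]
        total = clear + blocked
        return blocked / total if total else 0.5  # uninformed prior for unseen beams
    return predict

def proactive_handover(predict, beams, threshold=0.5):
    """Hand the user over to another base station before the link breaks
    if the predicted blockage probability exceeds the threshold."""
    return predict(beams) > threshold
```

A real system would replace `fit_blockage_predictor` with the trained network, keeping the same proactive trigger.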

83 citations


Journal ArticleDOI
TL;DR: A cross-domain SDN architecture that divides theMLSTIN into satellite, aerial, and terrestrial domains is proposed and illustrative results validate that the proposed architecture can significantly improve the efficiencies of configuration updating and decision making in MLSTIN.
Abstract: The MLSTIN is considered a powerful architecture to meet the heavy consumer demand for wireless data access in the coming 5G ecosystem. However, due to the inherent heterogeneity of the MLSTIN, it is challenging to manage the diverse physical devices for large amounts of traffic delivery with optimal network performance. Given the advantages of SDN, the MLSTIN is expected to become a flexible paradigm for dynamically provisioning various applications and services. In light of this, a cross-domain SDN architecture that divides the MLSTIN into satellite, aerial, and terrestrial domains is proposed in this article. We discuss the design and implementation details of this architecture and then point out some challenges and open issues. Moreover, illustrative results validate that the proposed architecture can significantly improve the efficiency of configuration updating and decision making in the MLSTIN.

82 citations


Journal ArticleDOI
TL;DR: Numerical results corroborate the analytical derivations and show that the proposed solution will significantly reduce both the HOF and energy consumption of MUEs, resulting in an enhanced mobility management for heterogeneous wireless networks with mm-wave capabilities.
Abstract: One of the most promising approaches to overcoming the uncertainty of millimeter wave (mm-wave) communications is to deploy dual-mode small base stations (SBSs) that integrate both mm-wave and microwave (µW) frequencies. In this paper, a novel approach to analyzing and managing mobility in joint mm-wave–µW networks is proposed. The proposed approach leverages device-level caching along with the capabilities of dual-mode SBSs to minimize handover failures and reduce inter-frequency measurement energy consumption. First, fundamental results on the caching capabilities are derived for the proposed dual-mode network scenario. Second, the impact of caching on the number of handovers (HOs), energy consumption, and the average handover failure (HOF) is analyzed. Then, the proposed cache-enabled mobility management problem is formulated as a dynamic matching game between mobile user equipments (MUEs) and SBSs. The goal of this game is to find a distributed HO mechanism that, under network constraints on HOFs and limited cache sizes, allows each MUE to choose between: 1) executing an HO to a target SBS; 2) being connected to the macrocell base station; or 3) performing a transparent HO by using the cached content. To solve this dynamic matching problem, a novel algorithm is proposed and its convergence to a two-sided dynamically stable HO policy for MUEs and target SBSs is proved. Numerical results corroborate the analytical derivations and show that the proposed solution significantly reduces both the HOF and the energy consumption of MUEs, resulting in enhanced mobility management for heterogeneous wireless networks with mm-wave capabilities.

79 citations


Journal ArticleDOI
01 Jun 2018
TL;DR: This paper surveys and reviews studies in the literature that deal with VANET heterogeneous wireless network communications in terms of vertical handover, data dissemination and collection, gateway selection, and other issues, and outlines open issues that help identify future research directions for VANETs in heterogeneous environments.
Abstract: Vehicular communications have developed rapidly, contributing to the success of intelligent transportation systems. In VANETs, continuous connectivity is a huge challenge caused by the extremely dynamic network topology and the highly variable number of mobile nodes. Moreover, message dissemination efficiency is a serious issue for traffic effectiveness and road safety. The heterogeneous vehicular network, which integrates cellular networks with DSRC, has been suggested and has attracted significant attention recently. VANET-cellular integration offers many potential benefits, for instance, high data rates, low latency, and extended communication range. Due to the heterogeneous wireless access, a seamless handover decision is required to guarantee the QoS of communications and to maintain continuous connectivity between the vehicles. On the other hand, VANET heterogeneous wireless network integration will significantly help autonomous cars become functional in reality. This paper surveys and reviews related studies in the literature that deal with VANET heterogeneous wireless network communications in terms of vertical handover, data dissemination and collection, gateway selection, and other issues. The comparison between different works is based on parameters like bandwidth, delay, throughput, and packet loss. Finally, we outline open issues that help identify future research directions for VANETs in the heterogeneous environment.

Journal ArticleDOI
TL;DR: This article proposes a novel software-defined-networking-based fog computing architecture by decoupling mobility control and data forwarding to provide seamless and transparent mobility support to mobile users, and presents an efficient route optimization algorithm by considering the performance gain in data communications and system overhead in mobile fog computing.
Abstract: The emerging real-time and computation-intensive services driven by the Internet of Things, augmented reality, automatic driving, and so on, have tight quality of service and quality of experience requirements, which can hardly be supported by conventional cloud computing. Fog computing, which migrates the features of cloud computing to the network edge, guarantees low latency for location-aware services. However, due to the locality feature of fog computing, maintaining service continuity when mobile users travel across different access networks has become a challenging issue. In this article, we propose a novel software-defined-networking-based fog computing architecture by decoupling mobility control and data forwarding. Under the proposed architecture, we design efficient signaling operations to provide seamless and transparent mobility support to mobile users, and present an efficient route optimization algorithm by considering the performance gain in data communications and system overhead in mobile fog computing. Numerical results from extensive simulations have demonstrated that the proposed scheme can not only guarantee service continuity, but also greatly improve handover performance and achieve high data communication efficiency in mobile fog computing.

Journal ArticleDOI
TL;DR: A mobility load balancing algorithm for small-cell networks is proposed by adapting network load status and considering load estimation, which provides a more balanced load across networks and higher network throughput than previous algorithms.
Abstract: Small cells were introduced to support high data-rate services and for dense deployment. Owing to user equipment (UE) mobility and small-cell coverage, the load across a small-cell network recurrently becomes unbalanced. Such unbalanced loads result in performance degradation in throughput and handover success and can even cause radio link failure. In this paper, we propose a mobility load balancing algorithm for small-cell networks by adapting network load status and considering load estimation. To that end, the proposed algorithm adjusts handover parameters depending on the overloaded cells and adjacent cells. Resource usage depends on signal qualities and traffic demands of connected UEs in long-term evolution. Hence, we define a resource block-utilization ratio as a measurement of cell load and employ an adaptive threshold to determine overloaded cells, according to the network load situation. Moreover, to avoid performance oscillation, the impact of moving loads on the network is considered. Through system-level simulations, the performance of the proposed algorithm is evaluated in various environments. Simulation results show that the proposed algorithm provides a more balanced load across networks (i.e., smaller standard deviation across the cells) and higher network throughput than previous algorithms.
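A minimal sketch of the load-balancing step described above, assuming a made-up adaptive threshold (network mean plus a margin) and a fixed offset step rather than the paper's exact update rule:

```python
def rb_utilization(used_rbs, total_rbs):
    """Resource-block utilization ratio used as the cell-load measurement."""
    return used_rbs / total_rbs

def adjust_ho_offsets(cell_loads, step_db=1.0, margin=0.1):
    """Toy mobility load balancing: cells whose RB-utilization ratio exceeds
    an adaptive threshold (network-wide mean load + margin) get their handover
    offset lowered so they shed edge users to less-loaded neighbours.
    The threshold form and step size are illustrative assumptions."""
    thresh = sum(cell_loads.values()) / len(cell_loads) + margin
    return {cell: (-step_db if load > thresh else 0.0)
            for cell, load in cell_loads.items()}
```

Because the threshold tracks the network mean, the same cell load counts as "overloaded" in a lightly loaded network but not in a uniformly busy one, which is the adaptive behaviour the abstract describes.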

Journal ArticleDOI
TL;DR: Two intelligent handover skipping techniques are proposed to overcome high handover rates; they reduce both the handover rate and the handover cost and outperform conventional techniques at moderate to high velocities.
Abstract: Ultra-dense network deployment is a key technology for potentially achieving the capacity target of next-generation wireless communication systems. However, such a deployment results in cell proliferation and cell size decrement, leading to an increased number of handovers and limited sojourn time within a cell, which severely degrade the user’s quality of service (QoS). In this paper, we propose two intelligent handover skipping techniques to overcome the high handover rates. The first technique considers a user associated with a single base station (BS) and the decision to skip a handover is based on the upcoming cell’s topology; we consider three criteria: 1) the area of the cell; 2) the trajectory distance within the cell; and 3) the distance of the BS from the cell edge. The second technique exploits BS cooperation and enables a dynamic handover skipping scheme, where the skipping decision is taken based on the BSs of three consecutive cells in the user’s trajectory. This technique achieves a balance between BS cooperation and single BS transmission and manages to maintain a good QoS during the skipping phase. We show that the proposed techniques reduce both the handover rate and handover cost and outperform the conventional techniques for moderate to high-velocity values.
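The first technique's skipping decision can be sketched as a simple predicate over the three topology criteria named above; the thresholds and the direction of each comparison are illustrative assumptions, not the paper's rule:

```python
def skip_handover(cell_area_m2, trajectory_len_m, bs_edge_dist_m,
                  min_area_m2=1e4, min_traj_m=50.0, min_edge_m=20.0):
    """Toy topology-aware handover skipping: skip the upcoming cell when the
    expected sojourn is too short to be worth a handover, approximated here
    by any of 1) a small cell area, 2) a short chord of the user's trajectory
    through the cell, or 3) the BS sitting close to the cell edge.
    All three thresholds are made-up values."""
    return (cell_area_m2 < min_area_m2
            or trajectory_len_m < min_traj_m
            or bs_edge_dist_m < min_edge_m)
```

The second technique in the abstract would extend this by looking at three consecutive cells and coordinating the decision across cooperating BSs.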

Journal ArticleDOI
TL;DR: A dual-link soft handover scheme for C/U plane split network in HSR can significantly reduce the outage probability and improve the handover success probability in the inter-macro eNB handover.
Abstract: The heterogeneous network architecture based on control/user (C/U) plane split is a research hot spot in the fifth generation (5G) communication system. This new architecture for the high-speed railway (HSR) communication system can provide high quality of service (QoS) for the passengers, such as higher system transmission capacity, better transmission reliability, and lower co-channel interference. The relatively critical C plane is expanded and maintained in a reliable low-frequency band to guarantee transmission reliability, and the U plane is supported by the available high-frequency band to meet the increasing system capacity demands. However, there are still many problems to be solved in the C/U plane split network to ensure reliable transmission. In the HSR communication system, the C plane and the U plane are supported by the macro evolved NodeBs (eNBs) and the small eNBs, respectively. The handover between the different macro eNBs involves two types of handovers, which directly reduces its applicability and reliability in HSR. Therefore, a dual-link soft handover scheme for C/U plane split network in HSR is proposed in this paper. By deploying a train relay station (TRS) and two antennas in the train, the handover outage probability will be reduced. Moreover, the bi-casting is adopted to decrease the communication interruption time and the signaling flows of the intra-macro eNB handover and inter-macro eNB handover are designed in detail. Simulation results show that the proposed handover scheme can significantly reduce the outage probability and improve the handover success probability in the inter-macro eNB handover.

Journal ArticleDOI
TL;DR: This work provides a perspective on various trade-offs between energy efficiency and user plane delay for upcoming URLLC systems, and proposes solutions that optimize EE of discontinuous reception, mobility measurements, and the handover process, respectively, without compromising on delay.
Abstract: Emerging 5G URLLC wireless systems are characterized by minimal over-the-air latency and stringent decoding error requirements. The low latency requirements can cause conflicts with 5G EE design targets. Therefore, this work provides a perspective on various trade-offs between energy efficiency and user plane delay for upcoming URLLC systems. For network infrastructure EE, we propose solutions that optimize base station on-off switching and distributed access network architectures. For URLLC devices, we advocate solutions that optimize EE of discontinuous reception (DRX), mobility measurements, and the handover process, respectively, without compromising on delay.

Journal ArticleDOI
TL;DR: Simulation results show that the proposed conditional make-before-break handover, which targets zero MIT and zero HOF rate simultaneously, can achieve an almost zero HOF rate even at 120 km/h.
Abstract: For many URLLC services, mobility is a key requirement together with latency and reliability. 3GPP has defined the target MIT as 0 ms, and a general URLLC reliability requirement of 1 − 10⁻⁵ within a latency of 1 ms for 5G. In this article, we analyze the impact of MIT and the handover failure (HOF) rate on reliability performance. From the analysis, at 120 km/h with an MIT of 0 ms, the HOF rate required to achieve 1 − 10⁻⁵ reliability is only 0.52 percent. Therefore, to achieve URLLC reliability, we need to minimize not only the MIT but also the HOF rate, bringing both as close to zero as possible. Hence, we propose conditional make-before-break handover to target zero MIT and zero HOF rate simultaneously. The solution achieves zero MIT by not releasing the connection to the source cell until the first (or some) downlink receptions from the target cell. It achieves a zero HOF rate by receiving the HO Command message while the radio link to the source cell is still stable, and by executing the handover only when the connection to the target cell is preferable. Simulation results show that our proposed solution can achieve an almost zero HOF rate even at 120 km/h.
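The conditional make-before-break decision described above can be sketched as a small rule set: prepare early while the source link is still good, connect to the target before releasing the source, and execute only once the target is clearly preferable. The RSRP thresholds and state names are illustrative assumptions:

```python
def conditional_mbb_handover(source_rsrp_dbm, target_rsrp_dbm,
                             connected_to_target,
                             prep_thresh_dbm=-100.0, exec_offset_db=3.0):
    """Toy conditional make-before-break step. Returns one of:
    'prepare'        - obtain the HO Command while the source link is stable
    'connect_target' - establish the target link before releasing the source
    'execute'        - complete the handover once the target link is up
    'hold'           - no action this step
    Thresholds are made-up values, not 3GPP parameters."""
    if target_rsrp_dbm >= source_rsrp_dbm + exec_offset_db:
        return "execute" if connected_to_target else "connect_target"
    if source_rsrp_dbm > prep_thresh_dbm:
        return "prepare"
    return "hold"
```

Because the source is released only in the "execute" step, after the target link exists, the mobile is never without a connection, which is how the scheme targets zero MIT.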

Patent
09 Jan 2018
TL;DR: In this paper, the authors provided an interworking method between networks of a user equipment (UE) in a wireless communication system, including performing a first interworking procedure for changing the network of the UE from a 5G core (5GC) network to an evolved packet core (EPC) network.
Abstract: According to an aspect of the present invention, there is provided an interworking method between networks of a user equipment (UE) in a wireless communication system, including: performing a first interworking procedure for changing a network of the UE from a 5G core (5GC) network to an evolved packet core (EPC) network, wherein, when an interface between the 5GC and EPC networks does not exist, the performing of the first interworking procedure includes: receiving a first indication from an access and mobility management function (AMF) of the 5GC network; and performing a handover attach procedure in the EPC network based on the first indication.

Journal ArticleDOI
TL;DR: A fuzzy logic-based scheme exploiting a user velocity and a radio channel quality to adapt a hysteresis margin for handover decision in a self-optimizing manner to reduce a number of redundant handovers and a handover failure ratio while allowing the users to exploit benefits of the dense small cell deployment.
Abstract: To satisfy the requirements of future mobile networks, a large number of small cells should be deployed. In such a scenario, mobility management becomes a critical issue in order to ensure seamless connectivity with reasonable overhead. In this paper, we propose a fuzzy logic-based scheme exploiting the user velocity and the radio channel quality to adapt the hysteresis margin for the handover decision in a self-optimizing manner. The objective of the proposed algorithm is to reduce the number of redundant handovers and the handover failure ratio while allowing the users to exploit the benefits of the dense small cell deployment. Simulation results show that our proposed algorithm efficiently suppresses the ping-pong effect and keeps it at a negligible level (below 1%) in all investigated scenarios. Moreover, the handover failure ratio and the total number of handovers are notably reduced with respect to existing algorithms, especially in scenarios with a high number of small cells. In addition, the proposed scheme keeps the time users spend connected to the small cells at a similar level as the competing algorithms. Thus, the benefits of the dense small cell deployment are preserved for the users.
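A minimal sketch of velocity- and channel-aware hysteresis adaptation, using two hand-made fuzzy-style rules (fast users get a smaller margin so the handover triggers in time; slow users with a good channel get a larger margin to suppress ping-pong). The membership breakpoint and rule weights are illustrative, not the paper's rule base:

```python
def adapt_hysteresis(speed_kmh, channel_quality, base_db=3.0):
    """Toy fuzzy-style hysteresis adaptation.
    channel_quality is normalized to [0, 1]; 60 km/h is an assumed
    breakpoint between the 'slow' and 'fast' memberships."""
    slow = max(0.0, 1.0 - speed_kmh / 60.0)   # membership in 'slow'
    fast = min(1.0, speed_kmh / 60.0)          # membership in 'fast'
    # Rule 1: slow + good channel  -> widen the margin (fewer ping-pongs)
    # Rule 2: fast + poor channel  -> shrink the margin (fewer HO failures)
    return base_db + 2.0 * slow * channel_quality \
                   - 2.0 * fast * (1.0 - channel_quality)
```

A stationary user with a clean channel thus gets a wide margin, while a vehicular user on a fading channel gets a narrow one, matching the trade-off the abstract targets.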

Journal ArticleDOI
TL;DR: This paper uses a machine learning scheme, called the Transfer Actor-Critic Learning (TACT), for the spectrum mobility management, which achieves a higher reward, in terms of the mean opinion score, compared to the myopic and Q-learning based spectrum management schemes.
Abstract: This paper presents an intelligent spectrum mobility management scheme for cognitive radio networks. The spectrum mobility could involve spectrum handoff (i.e., the user switches to a new channel) or stay-and-wait (i.e., the user pauses the transmission for a while until the channel quality improves again). An optimal spectrum mobility management scheme needs to consider its long-term impact on the network performance, such as throughput and delay, instead of optimizing only the short-term performance. We use a machine learning scheme, called the Transfer Actor-Critic Learning (TACT), for the spectrum mobility management. The proposed scheme uses a comprehensive reward function that considers the channel utilization factor (CUF), packet error rate (PER), packet dropping rate (PDR), and flow throughput. Here, the CUF is determined by the spectrum sensing accuracy and channel holding time. The PDR is calculated from the non-preemptive M/G/1 queueing model, and the flow throughput is estimated from a link-adaptive transmission scheme, which utilizes the rateless (Raptor) codes. The proposed scheme achieves a higher reward, in terms of the mean opinion score, compared to the myopic and Q-learning based spectrum management schemes.
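The composite reward can be sketched as a weighted sum of the four terms named above (CUF, PER, PDR, throughput); the weights and the normalization are assumptions for illustration, not the paper's formula:

```python
def spectrum_reward(cuf, per, pdr, throughput_mbps,
                    max_mbps=10.0, w=(0.25, 0.25, 0.25, 0.25)):
    """Illustrative composite reward in the spirit of the TACT scheme:
    channel utilization (cuf) and throughput are rewarded, while packet
    error rate (per) and packet dropping rate (pdr) are penalized.
    cuf, per, pdr are in [0, 1]; throughput is capped at max_mbps."""
    return (w[0] * cuf
            + w[1] * (1.0 - per)
            + w[2] * (1.0 - pdr)
            + w[3] * min(throughput_mbps / max_mbps, 1.0))
```

An RL agent maximizing this reward would choose between spectrum handoff and stay-and-wait based on the long-term value of each channel, rather than the instantaneous rate alone.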

Journal ArticleDOI
TL;DR: In this paper, an uplink-based multi-connectivity approach is proposed for mm-wave networks, which enables less energy-consuming, better-performing, faster, and more stable cell selection decisions than a traditional downlink-based standalone scheme.
Abstract: The millimeter-wave (mm-wave) frequencies offer the potential for orders-of-magnitude increases in capacity for next-generation cellular systems. However, links in mm-wave networks are susceptible to blockage and may suffer from rapid variations in quality. Connectivity to multiple cells at mm-wave and/or traditional frequencies is considered essential for robust communication. One of the challenges in supporting multi-connectivity at mm-waves is the requirement for the network to track the direction of each link in addition to its power and timing. To address this challenge, we implement a novel uplink measurement system that, with the joint help of a local coordinator operating in the legacy band, guarantees continuous monitoring of the channel propagation conditions and allows for the design of efficient control plane applications, including handover, beam tracking, and initial access. We show that an uplink-based multi-connectivity approach enables less energy-consuming, better-performing, faster, and more stable cell selection and scheduling decisions than a traditional downlink-based standalone scheme. Moreover, we argue that the presented framework guarantees: 1) efficient tracking of the user in the presence of the channel dynamics expected at mm-waves and 2) fast reaction to situations in which the primary propagation path is blocked or not available.

Journal ArticleDOI
TL;DR: A universal learning framework based on deep reinforcement learning (DRL) that adopts convolutional and recurrent neural networks to automatically model the latent spatial features (i.e., location information) and sequential features in the raw wireless signal; it achieves significant improvements and learns intuitive features automatically.
Abstract: To solve the policy optimization problem in many scenarios of smart wireless network management with a single universal algorithm, this letter proposes a universal learning framework, called an AI framework, based on deep reinforcement learning (DRL). This framework also addresses the problem that the state is difficult to design in traditional RL. The framework adopts a convolutional neural network and a recurrent neural network to automatically model the latent spatial features (i.e., location information) and sequential features in the raw wireless signal; these features can serve as the state definition for DRL. Owing to the generality of DRL, the framework is suitable for many scenarios, such as resource management and access control. The mean and standard deviation of throughput and the handover count are used to evaluate its performance on the mobility management problem in a wireless local area network on a practical testbed. The results show that the framework achieves significant improvements and learns intuitive features automatically.

Book ChapterDOI
01 Jan 2018
TL;DR: A learning algorithm is presented for dynamic object handover, for example, when a robot hands water bottles to marathon runners passing by a water station; the problem is formulated as contextual policy search, in which the robot learns object handover by interacting with the human.
Abstract: Object handover is a basic, but essential capability for robots interacting with humans in many applications, e.g., caring for the elderly and assisting workers in manufacturing workshops. It appears deceptively simple, as humans perform object handover almost flawlessly. The success of humans, however, belies the complexity of object handover as a collaborative physical interaction between two agents with limited communication. This paper presents a learning algorithm for dynamic object handover, for example, when a robot hands over water bottles to marathon runners passing by the water station. We formulate the problem as contextual policy search, in which the robot learns object handover by interacting with the human. A key challenge here is to learn the latent reward of the handover task under noisy human feedback. Preliminary experiments show that the robot learns to hand over a water bottle naturally and that it adapts to the dynamics of human motion. One challenge for the future is to combine the model-free learning algorithm with a model-based planning approach and enable the robot to adapt to human preferences and object characteristics, such as shape, weight, and surface texture.

Journal ArticleDOI
TL;DR: A distributed mobility robustness optimization algorithm that minimizes handover failures due to radio link failures by adaptively adjusting time-to-trigger and offset parameters; it outperforms previous algorithms in various mobile environments.
Abstract: In this paper, we propose a distributed mobility robustness optimization algorithm to minimize handover failures due to radio link failures by adjusting time-to-trigger and offset parameters. According to the reason for failure, the algorithm classifies handover failure into three categories (too late, too early, and wrong cell), and simultaneously optimizes three handover parameters according to the dominant failure. Moreover, the algorithm considers handover failures to each neighboring cell and adjusts handover parameters individually. Via simulation, we show how the proposed algorithm adaptively optimizes the parameters and outperforms previous algorithms in various mobile environments.
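The per-neighbor adjustment loop described above can be sketched as: count failures by category, find the dominant category, and nudge the time-to-trigger (TTT) and offset for that neighbor accordingly. The direction of each nudge follows the usual MRO intuition (too-late failures mean the handover should trigger earlier, too-early the opposite); the concrete step sizes, the TTT value list, and the function names are assumptions for illustration, not the paper's exact rules:

```python
# Subset of standard LTE time-to-trigger values, in milliseconds.
TTT_STEPS_MS = [40, 64, 80, 100, 128, 160, 256, 320, 480, 512, 640]

def adjust_parameters(ttt_idx, offset_db, failure_counts):
    """Nudge per-neighbor handover parameters toward the dominant failure.

    failure_counts maps "too_late" / "too_early" / "wrong_cell" to counts
    observed for one neighboring cell.  Returns the updated (ttt_idx,
    offset_db) pair.
    """
    dominant = max(failure_counts, key=failure_counts.get)
    if failure_counts[dominant] == 0:
        return ttt_idx, offset_db          # nothing to fix
    if dominant == "too_late":             # trigger the handover earlier
        ttt_idx = max(ttt_idx - 1, 0)
        offset_db += 0.5
    elif dominant == "too_early":          # trigger the handover later
        ttt_idx = min(ttt_idx + 1, len(TTT_STEPS_MS) - 1)
        offset_db -= 0.5
    else:                                  # wrong cell: bias offset only
        offset_db -= 0.5
    return ttt_idx, offset_db
```

Keeping one (ttt_idx, offset_db) pair per neighboring cell matches the abstract's point that parameters are adjusted individually per neighbor.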

Journal ArticleDOI
TL;DR: This work proposes a novel dynamic mobility-aware partial offloading (DMPO) algorithm that dynamically determines the amount of data to offload, together with the choice of communication path in mobility management, minimizing energy consumption while satisfying the delay constraint.

Patent
27 Sep 2018
TL;DR: In this article, a wireless device receives from a first base station, a radio resource control message indicating a command of a conditional handover towards a target cell of a second base station.
Abstract: A wireless device receives from a first base station, a radio resource control message indicating a command of a conditional handover towards a target cell of a second base station. The radio resource control message comprises: a cell identifier of the target cell; and at least one handover execution condition. The wireless device determines that at least one criterion of the at least one handover execution condition is met by the target cell of the second base station. The wireless device transmits to the first base station and in response to the determining that the at least one criterion is met, a first signal indicating a handover execution notification associated with the command of the conditional handover. The wireless device transmits to the second base station, a random access preamble via the target cell of the second base station.
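The UE-side decision step in the patent can be sketched as: evaluate the stored execution condition against fresh measurements and, if it holds, notify the source and start random access towards the target. The patent does not specify the condition itself, so the A3-like offset test below, and all names, are assumptions for illustration:

```python
def conditional_handover_step(serving_rsrp_dbm, target_rsrp_dbm,
                              execution_offset_db):
    """Evaluate a stored conditional-handover execution condition.

    Assumed condition (not from the patent): the target cell must be
    stronger than the serving cell by `execution_offset_db` dB.  Returns
    the list of actions the wireless device takes next.
    """
    if target_rsrp_dbm >= serving_rsrp_dbm + execution_offset_db:
        # Condition met: notify the first base station, then send a
        # random access preamble via the target cell.
        return ["notify_source_gnb", "random_access_to_target"]
    # Condition not met: keep the handover command stored and re-check
    # on the next measurement.
    return ["keep_command_stored"]
```

The appeal of the conditional scheme is visible here: the command is received early, but execution waits until the radio condition is actually favorable.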

Journal ArticleDOI
TL;DR: Two modified TOPSIS methods are proposed for handover management in heterogeneous networks; they outperform existing methods by reducing the number of frequent handovers and radio link failures while enhancing the achieved mean user throughput.
Abstract: Ultra-dense small cell deployment in future 5G networks is a promising solution to the ever-increasing demand for capacity and coverage. However, this deployment can lead to severe interference and a high number of handovers, which in turn cause increased signaling overhead. In order to ensure service continuity for mobile users, minimize the number of unnecessary handovers, and reduce the signaling overhead in heterogeneous networks, it is important to model the handover decision problem adequately. In this paper, we model the handover decision based on a multiple attribute decision making method, namely the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). The base stations are considered as alternatives, and the handover metrics are considered as attributes for selecting the proper base station for handover. We propose two modified TOPSIS methods for handover management in heterogeneous networks. The first method incorporates the entropy weighting technique for weighting the handover metrics. The second method uses a standard deviation weighting technique to score the importance of each handover metric. Simulation results reveal that the proposed methods outperform existing methods by reducing the number of frequent handovers and radio link failures, in addition to enhancing the achieved mean user throughput.
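The first proposed method combines two standard building blocks: entropy-based criterion weighting and TOPSIS ranking of alternatives. A minimal sketch of both follows; the decision matrix values in the usage note are made up, and the paper's exact metric set and any modifications to the standard formulas are not reproduced here:

```python
import numpy as np

def entropy_weights(X):
    """Standard entropy weighting: criteria whose values vary more
    across alternatives receive larger weights.  X is an
    (alternatives x criteria) matrix of positive values."""
    P = X / X.sum(axis=0)
    k = 1.0 / np.log(X.shape[0])
    with np.errstate(divide="ignore", invalid="ignore"):
        e = -k * np.nansum(np.where(P > 0, P * np.log(P), 0.0), axis=0)
    d = 1.0 - e                      # degree of divergence
    return d / d.sum()

def topsis_rank(X, weights, benefit):
    """Standard TOPSIS closeness scores (higher = better base station).

    benefit[j] is True when a larger value of criterion j is better
    (e.g. throughput) and False when smaller is better (e.g. load).
    """
    V = (X / np.sqrt((X ** 2).sum(axis=0))) * weights
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)
```

For example, with three candidate base stations scored on (hypothetical) throughput and load, the station with the best throughput and the lightest load gets the highest closeness score and is selected as the handover target.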

Journal ArticleDOI
TL;DR: A novel key exchange and authentication protocol is proposed that is capable of securing Xhaul links for a moving terminal in the network, and its efficiency is demonstrated in comparison with existing solutions.

Journal ArticleDOI
TL;DR: This paper focuses on self-organization techniques to improve handover efficiency using vehicular traffic data gathered in London and exploits mobility patterns between cell coverage areas and road traffic congestion levels to optimize the handover bias in heterogeneous networks and dynamically manage mobility management entity (MME) loads to reduce handover completion times.
Abstract: So far, research on Smart Cities and self-organizing networking techniques for fifth-generation (5G) cellular systems has been one-sided: a Smart City relies on 5G to support massive machine-to-machine (M2M) communications, but the actual network is unaware of the information flowing through it. However, a greater synergy between the two would make the relationship mutual, since the insights provided by the massive amount of data gathered by sensors can be exploited to improve the communication performance. In this paper, we concentrate on self-organization techniques to improve handover efficiency using vehicular traffic data gathered in London. Our algorithms exploit mobility patterns between cell coverage areas and road traffic congestion levels to optimize the handover bias in heterogeneous networks and dynamically manage mobility management entity (MME) loads to reduce handover completion times.
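One way to picture the bias optimization above is a rule that widens a small cell's effective coverage (via the cell-individual offset added to its measured signal) when the roads it covers are congested, so that slow-moving vehicles hand over to it and offload the macro layer. The mapping, the 0-10 congestion scale, and all parameter values below are assumptions for illustration, not the paper's actual algorithm:

```python
def handover_bias(congestion_level, base_bias_db=3.0,
                  step_db=1.0, max_bias_db=9.0):
    """Cell-individual offset (dB) for a small cell as a function of
    road congestion near it (assumed scale: 0 = free flow, 10 = jam).

    Higher congestion -> slower vehicles -> a larger bias is safe and
    offloads more traffic to the small cell; the cap limits ping-pong
    risk for the residual fast-moving users.
    """
    return min(base_bias_db + step_db * congestion_level, max_bias_db)
```

A cell selection rule would then compare `rsrp_small + handover_bias(level)` against the macro cell's RSRP.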

Posted Content
Zhi Wang, Yue Xu, Lihua Li, Hui Tian, Shuguang Cui 
TL;DR: In this paper, a two-layer framework is proposed to learn the optimal handover (HO) controllers in possibly large-scale wireless systems supporting mobile Internet-of-Things (IoT) users or traditional cellular users, where the user mobility patterns could be heterogeneous.
Abstract: In this paper, we propose a two-layer framework to learn the optimal handover (HO) controllers in possibly large-scale wireless systems supporting mobile Internet-of-Things (IoT) users or traditional cellular users, where the user mobility patterns could be heterogeneous. In particular, our proposed framework first partitions the user equipments (UEs) with different mobility patterns into clusters, where the mobility patterns are similar in the same cluster. Then, within each cluster, an asynchronous multi-user deep reinforcement learning scheme is developed to control the HO processes across the UEs in each cluster, with the goal of lowering the HO rate while ensuring certain system throughput. In this scheme, we use a deep neural network (DNN) as an HO controller learned by each UE via reinforcement learning in a collaborative fashion. Moreover, we use supervised learning to initialize the DNN controller before the execution of reinforcement learning, to exploit what we already know from traditional HO schemes and to mitigate the negative effects of random exploration at the initial stage. Furthermore, we show that the adopted global-parameter-based asynchronous framework enables us to train faster with more UEs, which nicely addresses the scalability issue of supporting large systems. Finally, simulation results demonstrate that the proposed framework achieves better performance than state-of-the-art online schemes in terms of HO rates.
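The core trade-off the controllers learn (lower HO rate while keeping throughput) can be captured in a reward that pays the achieved rate minus a fixed penalty per executed handover. The paper trains a DNN; a tabular Q-learning update is the simplest stand-in for showing the learning step. The reward shape, the penalty value, and all names are assumptions for illustration:

```python
def ho_reward(rate_mbps, handover_executed, penalty=2.0):
    """Per-step reward: throughput minus a handover penalty, trading
    off HO rate against achieved rate (penalty value is assumed)."""
    return rate_mbps - (penalty if handover_executed else 0.0)

def update_q(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning update; Q maps state -> {action: value}.
    The paper's DNN controller plays the role of this table."""
    best_next = max(Q[next_state].values())
    Q[state][action] += alpha * (reward + gamma * best_next - Q[state][action])
```

In the asynchronous multi-UE setting described above, each UE would apply such updates locally and periodically merge them into shared global parameters.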