
Showing papers on "Network planning and design" published in 2018


Book ChapterDOI
08 Sep 2018
TL;DR: ShuffleNet V2 as discussed by the authors proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs, based on a series of controlled experiments, and derives several practical guidelines for efficient network design.
Abstract: Currently, neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is state-of-the-art in terms of the speed-accuracy tradeoff.

3,393 citations
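One of the paper's guidelines relates FLOPs to memory access cost (MAC): for a 1x1 convolution with a fixed FLOP budget, MAC is minimized when input and output channel counts are equal. A minimal sketch of that arithmetic (function names and sizes are illustrative, not from the paper):

```python
def conv1x1_flops(c_in, c_out, h, w):
    """Multiply-accumulates of a 1x1 convolution over an h x w feature map."""
    return h * w * c_in * c_out

def conv1x1_mac(c_in, c_out, h, w):
    """Memory access cost: read input + write output + read weights."""
    return h * w * (c_in + c_out) + c_in * c_out

# Two layers with identical FLOPs (128*128 == 64*256) but different balance.
balanced = conv1x1_mac(128, 128, 56, 56)
skewed   = conv1x1_mac(64, 256, 56, 56)
```

With equal FLOPs, the balanced layer touches far less memory (819,200 vs 1,019,904 values here), which is why the direct speed metric can diverge from FLOPs alone.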


Journal ArticleDOI
26 Sep 2018
TL;DR: In this article, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is provided.
Abstract: Ensuring ultrareliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches, in which relying on average quantities (e.g., average throughput, average delay, and average response time) is no longer an option but a necessity. Instead, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is sorely lacking. The overarching goal of this paper is to take a first step toward filling this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a wide variety of techniques and methodologies pertaining to the requirements of URLLC, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and high-reliability wireless networks.

779 citations
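The paper's central argument, that averages hide the latency tail URLLC must control, can be illustrated with a toy delay distribution (all numbers below are made up for illustration):

```python
# 999 fast packets and one very slow one: the mean looks healthy,
# but the worst case blows through a 10 ms URLLC latency budget.
delays_ms = [1.0] * 999 + [500.0]

mean_delay = sum(delays_ms) / len(delays_ms)          # ~1.5 ms
worst_case = max(delays_ms)                           # 500 ms
budget_ms = 10.0
violation_rate = sum(d > budget_ms for d in delays_ms) / len(delays_ms)
```

An expected utility-based design would accept this link (mean well under budget); a reliability-aware design would reject it because of the tail.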


Journal ArticleDOI
TL;DR: This paper is the first to present the state-of-the-art of SAGIN, since existing survey papers focused either on only one single network segment in space or air, or on the integration of space and ground, neglecting the integration of all three network segments.
Abstract: The space-air-ground integrated network (SAGIN), as an integration of satellite systems, aerial networks, and terrestrial communications, has become an emerging architecture and has attracted intensive research interest in recent years. Besides bringing significant benefits for various practical services and applications, SAGIN also faces many unprecedented challenges due to its specific characteristics, such as heterogeneity, self-organization, and time-variability. Compared to traditional ground or satellite networks, SAGIN is affected by limited and unbalanced network resources in all three network segments, making it difficult to obtain the best performance for traffic delivery. Therefore, system integration, protocol optimization, and resource management and allocation in SAGIN are of great significance. To the best of our knowledge, we are the first to present the state-of-the-art of SAGIN, since existing survey papers focused either on only one single network segment in space or air, or on the integration of space and ground, neglecting the integration of all three network segments. In light of this, we present in this paper a comprehensive review of recent research on SAGIN, from network design and resource allocation to performance analysis and optimization. After discussing several existing network architectures, we also point out technology challenges and future directions.

661 citations


Proceedings ArticleDOI
18 Jun 2018
TL;DR: Pointwise convolution as discussed by the authors is a new convolution operator that can be applied at each point of a point cloud, which can yield competitive accuracy in both semantic segmentation and object recognition task.
Abstract: Deep learning with 3D data such as reconstructed point clouds and CAD models has received great research interest recently. However, the capability of using point clouds with convolutional neural networks has so far not been fully explored. In this paper, we present a convolutional neural network for semantic segmentation and object recognition with 3D point clouds. At the core of our network is point-wise convolution, a new convolution operator that can be applied at each point of a point cloud. Our fully convolutional network design, while being surprisingly simple to implement, can yield competitive accuracy in both semantic segmentation and object recognition tasks.

496 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the data sources and strong drivers for the adoption of big data analytics, machine learning (ML), and artificial intelligence in next-generation communication systems, the role of ML and artificial intelligence in making the system self-aware, self-adaptive, proactive, and prescriptive, and the challenges and benefits of adopting them.
Abstract: Next-generation wireless networks are evolving into very complex systems because of highly diversified service requirements and heterogeneity in applications, devices, and networks. Network operators need to make the best use of the available resources, for example, power, spectrum, as well as infrastructure. Traditional networking approaches, i.e., reactive, centrally managed, one-size-fits-all approaches, and conventional data analysis tools with limited capability (in space and time) are no longer competent and cannot serve such future complex networks cost-effectively in terms of operation and optimization. A novel paradigm of proactive, self-aware, self-adaptive, and predictive networking is much needed. Network operators have access to large amounts of data, especially from the network and the subscribers. Systematic exploitation of this big data greatly helps in making the system smart and intelligent, and facilitates efficient as well as cost-effective operation and optimization. We envision data-driven next-generation wireless networks, in which network operators employ advanced data analytics, machine learning (ML), and artificial intelligence. We discuss the data sources and strong drivers for the adoption of data analytics, and the role of ML and artificial intelligence in making the system self-aware, self-adaptive, proactive, and prescriptive. A set of network design and optimization schemes is presented concerning data analytics. The paper concludes with a discussion of the challenges and benefits of adopting big data analytics, ML, and artificial intelligence in next-generation communication systems.

238 citations


Proceedings ArticleDOI
10 Feb 2018
TL;DR: SuperNeurons as mentioned in this paper proposes a dynamic GPU memory scheduling runtime to enable the network training far beyond the GPU DRAM capacity, which reduces network-wide peak memory usage down to the maximal memory usage among layers.
Abstract: Going deeper and wider in neural architectures improves their accuracy, while the limited GPU DRAM places an undesired restriction on the network design domain. Deep Learning (DL) practitioners either need to change to less desired network architectures, or nontrivially dissect a network across multiple GPUs. These distract DL practitioners from concentrating on their original machine learning tasks. We present SuperNeurons: a dynamic GPU memory scheduling runtime that enables network training far beyond the GPU DRAM capacity. SuperNeurons features three memory optimizations, Liveness Analysis, Unified Tensor Pool, and Cost-Aware Recomputation; together they effectively reduce the network-wide peak memory usage down to the maximal memory usage among layers. We also address the performance issues in these memory-saving techniques. Given the limited GPU DRAM, SuperNeurons not only provisions the necessary memory for training, but also dynamically allocates memory for convolution workspaces to achieve high performance. Evaluations against Caffe, Torch, MXNet, and TensorFlow demonstrate that SuperNeurons trains at least 3.2432x deeper networks than current frameworks with leading performance. In particular, SuperNeurons can train ResNet2500, which has 10^4 basic network layers, on a 12GB K40c.

189 citations
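The headline effect, reducing network-wide peak memory "down to the maximal memory usage among layers", can be sketched by comparing a run that keeps every activation resident with one that frees and recomputes them. This is a simplification of SuperNeurons' liveness analysis and cost-aware recomputation; the layer sizes are illustrative:

```python
# Activation memory per layer (MB); sizes are made up for illustration.
layer_mem_mb = [300, 500, 200, 800, 400]

# Naive training keeps every activation resident for the backward pass,
# so the peak is the sum over all layers.
peak_naive = sum(layer_mem_mb)

# With liveness analysis plus recomputation, only the currently executing
# layer must be resident; earlier activations are freed and recomputed on
# demand, so the peak drops to the largest single layer.
peak_recompute = max(layer_mem_mb)
```

The price is extra compute for recomputation, which is why the runtime weighs recomputation cost per layer rather than checkpointing everything.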


Posted Content
TL;DR: This work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs, and derives several practical guidelines for efficient network design, leading to a new architecture called ShuffleNet V2.
Abstract: Currently, neural network architecture design is mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Thus, this work proposes to evaluate the direct metric on the target platform, beyond only considering FLOPs. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is state-of-the-art in terms of the speed-accuracy tradeoff.

157 citations


Posted Content
TL;DR: A principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology, and decision-making under uncertainty is sorely lacking; this article takes a first step toward filling that void.
Abstract: Ensuring ultra-reliable and low-latency communication (URLLC) for 5G wireless networks and beyond is of capital importance and is currently receiving tremendous attention in academia and industry. At its core, URLLC mandates a departure from expected utility-based network design approaches, in which relying on average quantities (e.g., average throughput, average delay, and average response time) is no longer an option but a necessity. Instead, a principled and scalable framework which takes into account delay, reliability, packet size, network architecture and topology (across access, edge, and core), and decision-making under uncertainty is sorely lacking. The overarching goal of this article is to take a first step toward filling this void. Towards this vision, after providing definitions of latency and reliability, we closely examine various enablers of URLLC and their inherent tradeoffs. Subsequently, we focus our attention on a plethora of techniques and methodologies pertaining to the requirements of ultra-reliable and low-latency communication, as well as their applications through selected use cases. These results provide crisp insights for the design of low-latency and highly reliable wireless networks.

131 citations


Posted Content
Jian Cheng1, Peisong Wang1, Gang Li1, Qinghao Hu1, Hanqing Lu1 
TL;DR: A comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both algorithm and hardware points of view is provided.
Abstract: Deep neural networks have evolved remarkably over the past few years and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks also continue to increase. This will pose a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementation of deep neural networks, a batch of accelerators based on FPGA/ASIC have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression and accelerator design from both algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher-student networks, compact network design and hardware accelerators. Finally, we will introduce and discuss a few possible future directions.

129 citations


Proceedings ArticleDOI
23 Apr 2018
TL;DR: NeuTM as mentioned in this paper is a LSTM RNN-based framework for predicting traffic matrix in large networks, which is well suited to learn from data and classify or predict time series with time lags of unknown size.
Abstract: This paper presents NeuTM, a framework for network Traffic Matrix (TM) prediction based on Long Short-Term Memory Recurrent Neural Networks (LSTM RNNs). TM prediction is defined as the problem of estimating the future network traffic matrix from previously collected network traffic data. It is widely used in network planning, resource management, and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well-suited to learning from data and classifying or predicting time series with time lags of unknown size. LSTMs have been shown to model long-range dependencies more accurately than conventional RNNs. NeuTM is an LSTM RNN-based framework for predicting TM in large networks. By validating our framework on real-world data from the GEANT network, we show that our model converges quickly and gives state-of-the-art TM prediction performance.

117 citations
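LSTM-based TM prediction is a supervised sequence task: a window of past traffic snapshots is the input and the next snapshot is the target. A minimal windowing helper, sketched here with scalar link loads rather than full matrices (hypothetical code, not NeuTM's):

```python
def make_windows(series, lag):
    """Turn a time series into (input window, next value) training pairs."""
    X, y = [], []
    for t in range(len(series) - lag):
        X.append(series[t:t + lag])   # the lag most recent observations
        y.append(series[t + lag])     # the value to predict
    return X, y

traffic = [10, 12, 11, 15, 14, 18]    # e.g. successive load samples for one OD pair
X, y = make_windows(traffic, lag=3)
# X[0] = [10, 12, 11] predicts y[0] = 15
```

In the full setting, each element of the series would be a flattened traffic matrix, and the (X, y) pairs would feed an LSTM whose lag is part of what the architecture learns to exploit.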


Journal ArticleDOI
TL;DR: A novel stochastic geometry-based network planning approach that focuses on the structure of the network to find strategic placement for multiple UAV-BSs in a large-scale network is proposed.
Abstract: Using base stations mounted on unmanned aerial vehicles (UAV-BSs) is a promising new evolution of wireless networks for the provision of on-demand high data rates. While many studies have explored deploying UAV-BSs in a green field, i.e., with no existing terrestrial BSs, this letter focuses on the deployment of UAV-BSs in the presence of a terrestrial network. The purpose of this letter is twofold: 1) to provide a supply-side estimate of how many UAV-BSs are needed to support a terrestrial network so as to achieve a particular quality of service; and 2) to investigate where these UAV-BSs should hover. We propose a novel stochastic geometry-based network planning approach that focuses on the structure of the network to find strategic placements for multiple UAV-BSs in a large-scale network.
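If the ground projections of UAV-BSs are modeled as a homogeneous Poisson point process of intensity λ, the probability that no UAV-BS lies within radius r of a user is exp(-λπr²); inverting this gives a back-of-the-envelope supply-side estimate for a target coverage probability. This is a textbook PPP calculation, not the letter's exact model:

```python
import math

def required_density(coverage_target, radius_km):
    """Smallest PPP intensity (UAV-BSs per km^2) such that
    P(at least one UAV-BS within radius_km) >= coverage_target."""
    return -math.log(1.0 - coverage_target) / (math.pi * radius_km ** 2)

lam = required_density(0.95, radius_km=1.0)   # density for 95% coverage
n_uavs = math.ceil(lam * 100.0)               # rough count for a 100 km^2 region
```

The letter's approach goes further by conditioning on the existing terrestrial network's structure; this sketch only captures the density-to-count arithmetic.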

Journal ArticleDOI
Jian Cheng1, Peisong Wang1, Gang Li1, Qinghao Hu1, Hanqing Lu1 
TL;DR: A comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both algorithm and hardware points of view is provided in this paper, where the authors provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher-student networks, compact network design, and hardware accelerators.
Abstract: Deep neural networks have evolved remarkably over the past few years and they are currently the fundamental tools of many intelligent systems. At the same time, the computational complexity and resource consumption of these networks continue to increase. This poses a significant challenge to the deployment of such networks, especially in real-time applications or on resource-limited devices. Thus, network acceleration has become a hot topic within the deep learning community. As for hardware implementation of deep neural networks, a batch of accelerators based on a field-programmable gate array (FPGA) or an application-specific integrated circuit (ASIC) have been proposed in recent years. In this paper, we provide a comprehensive survey of recent advances in network acceleration, compression, and accelerator design from both algorithm and hardware points of view. Specifically, we provide a thorough analysis of each of the following topics: network pruning, low-rank approximation, network quantization, teacher–student networks, compact network design, and hardware accelerators. Finally, we introduce and discuss a few possible future directions.

Journal ArticleDOI
TL;DR: In this article, a two-phase, multi-objective, mixed-integer, multi-period, multi-commodity model for a three-level relief chain is offered: the first phase locates distribution centers and warehouses with various capacity levels and decides on the goods stored in them, and the second phase, under hard time windows, performs operational planning for vehicle routing and the distribution of goods to the affected areas, minimizing total cost and travel time while increasing route reliability.
Abstract: The accidental and unpredictable nature of disasters such as earthquakes calls for plans that deal with critical problems and reduce the dangers at the time of their occurrence. Effective distribution of relief goods and supplies plays an important role in the rescue operation after an earthquake. Therefore, a two-phase, multi-objective, mixed-integer, multi-period, and multi-commodity mathematical model of a three-level relief chain is offered. In the first phase, the location of distribution centers and warehouses with various capacity levels is decided, along with decisions on the goods stored in the warehouses and the established distribution centers. In the second phase, under limited hard time windows, operational planning is performed for vehicle routing and the distribution of goods to the affected areas, so that minimizing the total cost and travel time also increases the reliability of the routes. In addition, in special cases each critical area may receive service more than once; to include this split delivery assumption in the problem, a different model is presented. Since some parameters are uncertain during a crisis, the model is developed under uncertainty using a robust optimization approach so that it better approaches reality. Two meta-heuristic algorithms, NSGA-II and MOPSO, are used to solve the problem, and the accuracy of the mathematical model and the efficiency of the proposed algorithms are assessed through numerical examples. The results of the algorithms are presented for 35 different problems.
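Both NSGA-II and MOPSO rank candidate solutions by Pareto dominance over the objectives, here total cost and travel time. A minimal nondominated filter (illustrative, not the authors' implementation):

```python
def dominates(a, b):
    """True if a is no worse than b in every objective and strictly better in one
    (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the nondominated solutions."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

# Candidate relief plans as (total cost, travel time), both to be minimized.
plans = [(100, 8), (120, 5), (110, 9), (90, 12)]
front = pareto_front(plans)
# (110, 9) is dominated by (100, 8); the other three trade off cost vs time
```

Multi-objective metaheuristics maintain exactly such a front across generations rather than collapsing the objectives into a single score.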

Journal ArticleDOI
TL;DR: An approach to quantify resilience for the design of systems that can be described as networks is introduced via a non-linear function that can model, at the component level, more refined attributes of restoration.

Journal ArticleDOI
TL;DR: The analysis suggests that EMF-aware 5G planning risks being a real challenge for network operators, which calls for further actions at governmental, societal, technological, and research levels.
Abstract: The deployment of 5G networks will necessarily involve the installation of new base station (BS) equipment to support the requirements of next-generation mobile services. In a scenario where there already exist many sources of electromagnetic fields (EMFs), including overlapping 2G/3G/4G technologies of competing network operators, there is a growing concern that the planning of a 5G network will be severely constrained by the limits on maximum EMF levels established in a wide set of regulations. The goal of this paper is to shed light on EMF-aware 5G network planning and, in particular, on the problem of site selection for 5G BS equipment that abides by downlink EMF limits. To this end, we present the current state of the art in EMF-aware mobile networking and overview the current exposure limits and how the EMF constraints may impact 5G planning. We then substantiate our analysis by reporting on two realistic case studies, which demonstrate the saturation of EMF levels already occurring under current 2G/3G/4G networks, as well as the negative impact of strict regulations on network planning and user quality of service. Finally, we discuss the expected impact of 5G technologies in terms of EMFs and draw guidelines for EMF-aware planning of 5G. Our analysis suggests that EMF-aware 5G planning risks being a real challenge for network operators, which calls for further actions at governmental, societal, technological, and research levels.
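The saturation effect reported in the case studies can be illustrated with the free-space power-density formula S = P·G / (4πd²): summing the contributions of existing transmitters at a point shows how little headroom a regulatory limit leaves for a new 5G site. All numbers, including the limit, are illustrative rather than taken from any regulation:

```python
import math

def power_density(p_tx_w, gain, dist_m):
    """Far-field free-space power density S = P*G / (4*pi*d^2), in W/m^2."""
    return p_tx_w * gain / (4.0 * math.pi * dist_m ** 2)

# (tx power W, antenna gain, distance m) of already-installed 2G/3G/4G sites.
existing = [(20.0, 10.0, 50.0), (20.0, 10.0, 80.0), (40.0, 10.0, 120.0)]
total = sum(power_density(p, g, d) for p, g, d in existing)

limit = 0.01               # assumed regulatory limit in W/m^2 (illustrative)
headroom = limit - total   # negative here: no room left for a new site
```

Real EMF assessments are far more detailed (frequency-dependent limits, antenna patterns, cumulative multi-operator rules), but the additive nature of exposure is exactly what makes brownfield 5G siting hard.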

Journal ArticleDOI
TL;DR: A novel robust possibilistic optimization approach is introduced and its performance is analyzed, and the two objective functions of the presented model seek to minimize the total cost and the maximum unmet demand.

Journal ArticleDOI
TL;DR: A chance-constrained two-stage mean-risk stochastic programming model is developed, in which the conditional value-at-risk (CVaR) is specified as the risk measure and a joint probabilistic constraint is enforced on the feasibility of the second-stage problem of distributing the relief supplies to the affected areas in case of a disaster.
Abstract: We consider a stochastic pre-disaster relief network design problem, which mainly determines the capacities and locations of the response facilities and their inventory levels of the relief supplies in the presence of uncertainty in post-disaster demands and transportation network conditions. In contrast to the traditional humanitarian logistics literature, we develop a chance-constrained two-stage mean-risk stochastic programming model. This risk-averse model features a mean-risk objective, where the conditional value-at-risk (CVaR) is specified as the risk measure, and enforces a joint probabilistic constraint on the feasibility of the second-stage problem concerned with distributing the relief supplies to the affected areas in case of a disaster. To solve this computationally challenging stochastic optimization model, we employ an exact Benders decomposition-based branch-and-cut algorithm. We develop three variants of the proposed algorithm by using alternative representations of CVaR. We illustrate the application of our model and solution methods on a case study concerning the threat of hurricanes in the Southeastern part of the United States. An extensive computational study provides practical insights about the proposed modeling approach and demonstrates the computational effectiveness of the solution framework.
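For equally likely discrete scenarios, CVaR at level α is the mean of the worst (1 − α) fraction of losses; this is the risk measure in the paper's mean-risk objective. A minimal sketch under the equal-probability assumption:

```python
def cvar(losses, alpha):
    """Mean of the worst (1 - alpha) fraction of equally likely losses."""
    ordered = sorted(losses, reverse=True)            # worst outcomes first
    k = max(1, int(round((1.0 - alpha) * len(ordered))))
    return sum(ordered[:k]) / k

# Second-stage distribution costs across 10 equally likely disaster scenarios.
scenario_costs = [10, 12, 11, 50, 13, 14, 60, 12, 11, 10]
risk = cvar(scenario_costs, alpha=0.8)   # mean of the 2 worst scenarios
```

A mean-risk objective then trades off the plain expectation against this tail average, which is what makes the resulting network design risk-averse rather than merely cost-minimizing on average.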

Journal ArticleDOI
Emmanuel Seve1, Jelena Pesic1, Camille Delezoide1, Sebastien Bigo1, Yvan Pointurier1 
TL;DR: In this article, a machine learning algorithm was used to reduce the uncertainties on the input parameters of the QoT model, improving the accuracy of the SNR estimation with respect to new optical demands in a brownfield phase.
Abstract: In this paper, we propose to lower the network design margins by improving the estimation of the signal-to-noise ratio (SNR) given by a quality of transmission (QoT) estimator, for new optical demands in a brownfield phase, based on a mathematical model of the physics of propagation. During the greenfield phase and network operation, we collect and correlate information on the QoT input parameters, issued from the established initial demands and available almost for free from the network elements: the amplifiers' output power and the SNR at the coherent receiver side. Since we have some uncertainties on these input parameters of the QoT model, we use a machine learning algorithm to reduce them, improving the accuracy of the SNR estimation. With this learning process, for a European backbone network (28 nodes, 41 links), we can reduce the QoT inaccuracy by several dB for new demands, whatever the amount of uncertainty on the initial parameters.
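The margin-reduction idea, learning a correction to the physics-model SNR from monitored data, can be sketched as an ordinary least-squares fit between estimated and measured SNR. The paper's actual learning procedure is more involved; this only shows the general idea, with made-up numbers:

```python
def fit_line(x, y):
    """Ordinary least squares for y ~ a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

estimated = [14.0, 15.0, 16.0, 17.0]   # QoT-model SNR for established demands (dB)
measured  = [13.0, 14.0, 15.0, 16.0]   # SNR monitored at the coherent receivers (dB)
a, b = fit_line(estimated, measured)
corrected = a * 18.0 + b               # corrected prediction for a new demand
```

Here the model consistently over-predicts by 1 dB, and the fit recovers that bias; shrinking such systematic errors is what lets operators shave design margins.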

Journal ArticleDOI
TL;DR: An aerial network management protocol built on top of an SDN architecture is proposed to address the needs of efficient and robust end-to-end data relaying; using a novel 3D spatial coverage-related metric, the controller calculates diverse multiple paths among unmanned aerial vehicles so that isolated and localized failures do not interrupt the overall network performance.
Abstract: Unmanned aerial vehicles allow rapidly deploying a multihop communication backbone in challenging environments, with applications in public safety, search and rescue missions, crowd surveillance, and disaster area monitoring. Due to environmental obstructions in the above scenarios or intentional jamming, the communication links between peer unmanned aerial vehicles are susceptible to outages. This necessitates resiliency measures to be closely integrated into the network design. To address the needs of efficient and robust end-to-end data relaying, we propose an aerial network management protocol built on top of an SDN architecture. Unique to our design, each unmanned aerial vehicle becomes an SDN switch that performs under directives sent by a centralized controller. Using a novel 3D spatial coverage-related metric, the controller calculates diverse multiple paths among unmanned aerial vehicles so that isolated and localized failures do not interrupt the overall network performance. The controller issues directives to the unmanned aerial vehicle switches through flow entries in the OpenFlow v1.5 protocol for immediate and effective switching to the best available path. Results reveal that the proposed multi-path routing algorithm reduces the average end-to-end outage rate by 18 percent while increasing the average end-to-end delay by 12 percent compared to traditional multi-path routing algorithms.

Journal ArticleDOI
TL;DR: A stochastic model based on state transition theory is proposed to investigate the dynamics of cascading failures in communication networks and reveals the effects of the initial failure pattern, community structure and network design parameters on the dynamic propagation of cascades failures.
Abstract: In this brief, we propose a stochastic model based on state transition theory to investigate the dynamics of cascading failures in communication networks. We describe the failure events of the nodes in the network as node state transitions. Taking a probabilistic perspective, we focus on two uncertain conditions in the failure propagation process: which node in the network will fail next and how long it will last before the next node state transition takes place. The stochastic model gives each overloaded element a probability of failing, and the failure rate is relevant to the degree of overloading. Moreover, the time dimension is considered in the stochastic process, by removing a node after a time delay when its traffic load exceeds its capacity. We employ this proposed model to study the dynamics of cascading failure evolution in a Barabasi–Albert scale-free network and an Internet AS-level network. Simulation results reveal the effects of the initial failure pattern, community structure and network design parameters on the dynamic propagation of cascading failures.
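The essence of a load-redistribution cascade can be sketched deterministically: when a node fails, its load shifts to surviving neighbors, and any neighbor pushed over capacity fails next. The paper's model is stochastic, with failure probability tied to the degree of overloading and an explicit time dimension; this sketch uses a hard threshold instead:

```python
def cascade(load, capacity, neighbors, seed):
    """Deterministic threshold cascade: return the set of failed nodes."""
    failed, frontier = set(), [seed]
    while frontier:
        node = frontier.pop()
        if node in failed:
            continue
        failed.add(node)
        alive = [n for n in neighbors[node] if n not in failed]
        if not alive:
            continue
        share = load[node] / len(alive)   # redistribute the failed node's load
        for n in alive:
            load[n] += share
            if load[n] > capacity[n]:     # overloaded neighbor fails next
                frontier.append(n)
    return failed

load      = {0: 4.0, 1: 3.0, 2: 3.0, 3: 1.0}
capacity  = {0: 5.0, 1: 4.0, 2: 20.0, 3: 5.0}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
failed = cascade(load, capacity, neighbors, seed=0)
# nodes 0 and 1 fail; node 2 absorbs the shifted load within its capacity
```

The stochastic model in the paper replaces the hard threshold with an overload-dependent failure rate, which is what lets it capture when, not just whether, each node fails.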

Journal ArticleDOI
TL;DR: This paper analyses and summarizes the role of softwarization and virtualization in enhancing the network architecture and functionalities of mobile systems, and analyzes several 5G application scenarios in order to derive and classify the requirements to be taken into account in the design process of 5G network.

Journal ArticleDOI
TL;DR: A differential evolution approach to address the UTNDP by simultaneously determining the set of transit routes and their associated service frequency with the objective to minimize the passenger cost, as well as the unmet demand is proposed.
Abstract: The urban transit network design problem (UTNDP) is concerned with the development of a set of transit routes and corresponding schedules on an existing road network with known demand points and travel time. It is an NP-hard combinatorial optimization problem characterized by high computational intractability, leading to utilization of a wide variety of heuristics and metaheuristics in an attempt to find near-optimal solutions. This paper proposes a differential evolution approach to address the UTNDP by simultaneously determining the set of transit routes and their associated service frequency with the objective to minimize the passenger cost, as well as the unmet demand. In addition, a combined repair mechanism is employed to deal with the infeasible route sets generated from the route construction heuristic and the operators of the differential evolution. The proposed algorithm is evaluated on a well-known Mandl's Swiss network reported in the literature. Computational experiments show that the proposed algorithm is competitive according to the performance metrics with other approaches in the literature.
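The core differential evolution step, mutate by adding a scaled difference of two population members to a third, then crossover with the target vector, maps naturally onto vectors of route service frequencies. A generic DE/rand/1/bin trial-vector sketch (not the paper's full algorithm, which also repairs infeasible route sets):

```python
import random

def de_trial(population, i, f=0.5, cr=0.9, rng=None):
    """Build a DE/rand/1/bin trial vector for population member i."""
    rng = rng or random.Random(0)
    # Pick three distinct members other than the target.
    a, b, c = rng.sample([p for j, p in enumerate(population) if j != i], 3)
    mutant = [ai + f * (bi - ci) for ai, bi, ci in zip(a, b, c)]
    target = population[i]
    j_rand = rng.randrange(len(target))   # guarantees at least one mutant gene
    return [m if (rng.random() < cr or j == j_rand) else t
            for j, (m, t) in enumerate(zip(mutant, target))]

# Population of candidate service frequencies (vehicles/hour per route).
pop = [[4.0, 6.0, 8.0], [5.0, 5.0, 7.0], [6.0, 4.0, 9.0], [3.0, 7.0, 6.0]]
trial = de_trial(pop, i=0)
```

In the UTNDP setting the trial vector would then be repaired for feasibility and kept only if it improves the passenger-cost and unmet-demand objectives.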

Journal ArticleDOI
TL;DR: In this paper, a stochastic geometry framework is proposed to perform the coverage and rate analysis of a typical user in co-existing visible light communication (VLC) and radio frequency (RF) networks.
Abstract: This paper provides a stochastic geometry framework to perform the coverage and rate analysis of a typical user in co-existing visible light communication (VLC) and radio frequency (RF) networks. The framework can be customized to capture the performance of a typical user in various network configurations such as 1) RF-only , in which only small base-stations (SBSs) are available to provide the coverage to a user; 2) VLC-only , in which only optical BSs (OBSs) are available to provide the coverage to a user; 3) opportunistic RF/VLC , where a user selects the network with maximum received signal power; and 4) hybrid RF/VLC , where a user can simultaneously utilize the available resources from both RF and VLC networks. The developed model for VLC network precisely captures the impact of the field-of-view (FOV) of the photo-detector receiver on the number of optical interferers, distribution of the aggregate interference, association probability, the coverage probability, and average rate of a typical user. A closed-form approximation is presented for special cases and for asymptotic scenarios, such as when the intensity of SBSs becomes very low or the intensity of OBSs becomes very high. The closed-form solutions for network design parameters (such as intensity of OBSs and SBSs, transmit power, and/or FOV) enable network operators to distribute the users among RF and VLC networks according to their choice. Moreover, we also optimize the network parameters in order to prioritize the association of users to VLC network. Finally, simulations are carried out to verify the derived solutions. It is shown that the performance of VLC network depends significantly on the receiver’s FOV/intensity of SBSs/OBSs and careful selection of such parameters is crucial to harness the benefits of VLC networks. Important trade-offs between height and intensity of OBSs are highlighted to optimize the performance of VLC networks.

Journal ArticleDOI
TL;DR: A memetic algorithm (MA) is developed to obtain high-quality solutions for the HS-RRIT network design problem, and computational analysis is conducted on a Turkish network data set to demonstrate the applicability of the proposed model and the effectiveness of the solution method.

Journal ArticleDOI
TL;DR: A secure controller-to-controller (C-to-C) protocol is designed that allows SDN controllers lying in different autonomous systems (AS) to securely communicate and transfer attack information with each other, thus saving valuable time and network resources.
Abstract: Software Defined Networking (SDN) has proved itself to be a backbone of new network design and is quickly becoming an industry standard. The separation of the control plane from the data plane is the key concept behind SDN. SDN not only allows us to program and monitor our networks but also helps in mitigating some key network problems, distributed denial of service (DDoS) attacks among them. In this paper we propose a collaborative DDoS attack mitigation scheme using SDN. We design a secure controller-to-controller (C-to-C) protocol that allows SDN controllers lying in different autonomous systems (AS) to securely communicate and transfer attack information with each other. This enables efficient notification along the path of an ongoing attack and effective filtering of traffic near the source of the attack, thus saving valuable time and network resources. We also introduce three different deployment approaches, i.e., linear, central, and mesh, in our testbed. Based on the experimental results, we demonstrate that our SDN-based collaborative scheme is fast and reliable in mitigating DDoS attacks in real time with a very small computational footprint.
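A secure cross-AS notification needs, at minimum, an authenticated and tamper-evident message. The sketch below shows one way such an attack notice could be signed and verified with HMAC-SHA256 over a canonical JSON body; the field names, key handling, and MAC scheme are illustrative assumptions, not the actual C-to-C protocol from the paper.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # illustrative; a real deployment would negotiate keys per AS pair

def make_attack_notice(victim_prefix, attacker_ips, key=SHARED_KEY):
    """Build a signed DDoS notification that one controller could forward
    toward the attack source.  Field names and the HMAC-SHA256 scheme are
    hypothetical, not the exact C-to-C protocol from the paper."""
    body = {
        "type": "DDOS_NOTICE",
        "victim": victim_prefix,
        "sources": sorted(attacker_ips),
        "ts": 0,  # fixed for reproducibility; use a real timestamp in practice
    }
    payload = json.dumps(body, sort_keys=True).encode()
    return {"body": body, "mac": hmac.new(key, payload, hashlib.sha256).hexdigest()}

def verify_notice(msg, key=SHARED_KEY):
    """Recompute the MAC over the body and compare in constant time."""
    payload = json.dumps(msg["body"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["mac"])
```

A receiving controller that verifies the notice can then install filtering rules for the listed sources near the attack origin.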

Journal ArticleDOI
TL;DR: Results show that carefully selecting the deployment of HOT lanes can improve the overall system travel time, while a link-node reformulation of the equilibrium problem reduces the problem size and facilitates computation.
Abstract: Though conventional network design has been extensively studied, the network design problem for ridesharing, in particular the deployment of high-occupancy toll (HOT) lanes, remains understudied. This paper focuses on one type of network design problem: whether existing roads should be retrofitted into HOT lanes. It is a continuous bi-level mathematical program with equilibrium constraints, whose lower-level problem is ridesharing user equilibrium (RUE). To reduce the problem size and facilitate computation, we reformulate RUE in the link-node representation. We then extend the RUE framework to accommodate the presence of HOT lanes and tolls. Algorithms are briefly discussed, and numerical examples are illustrated on the Braess network and the Sioux Falls network, respectively. Results show that carefully selecting the deployment of HOT lanes can improve the overall system travel time.
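The lower-level equilibrium idea can be illustrated on the smallest possible instance: two parallel routes where, at user equilibrium, every used route has equal generalized cost (travel time plus toll). The sketch below solves this with the method of successive averages and a standard BPR travel-time function; it is a toy sketch of the lower-level problem only, and the paper's RUE model additionally captures ridesharing roles and HOT-lane eligibility.

```python
def bpr(t0, flow, cap, alpha=0.15, beta=4.0):
    """BPR link travel-time function (standard form)."""
    return t0 * (1.0 + alpha * (flow / cap) ** beta)

def two_route_equilibrium(demand, t0=(10.0, 15.0), cap=(40.0, 60.0),
                          toll=(0.0, 0.0), iters=200):
    """User equilibrium on two parallel routes via the method of
    successive averages (MSA).  Parameter values are illustrative."""
    x = demand / 2.0                            # flow on route 0
    for k in range(1, iters + 1):
        c0 = bpr(t0[0], x, cap[0]) + toll[0]
        c1 = bpr(t0[1], demand - x, cap[1]) + toll[1]
        target = demand if c0 < c1 else 0.0     # all-or-nothing loading
        x += (target - x) / (k + 1.0)           # MSA averaging step
    return x, demand - x
```

Tolling one route shifts flow toward the other until generalized costs equalize again, which is exactly the lever the upper-level HOT-lane design problem manipulates.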

Proceedings ArticleDOI
01 Sep 2018
TL;DR: A novel intelligent algorithm for performance optimization of the massive MIMO beamforming using a combination of three neural networks which cooperatively implement the deep adversarial reinforcement learning workflow.
Abstract: The rapid increase of data volume in mobile networks forces operators to look into different options for capacity improvement. As a result, modern 5G networks have become more complex in terms of deployment and management, and new approaches are needed to simplify network design and management by enabling self-organizing capabilities. In this paper, we propose a novel intelligent algorithm for performance optimization of massive MIMO beamforming. The key novelty of the proposed algorithm is the combination of three neural networks which cooperatively implement a deep adversarial reinforcement learning workflow. In the proposed system, one neural network is trained to generate realistic user mobility patterns, which are then used by a second neural network to produce a relevant antenna diagram. Meanwhile, a third neural network estimates the efficiency of the generated antenna diagram and returns a corresponding reward to both networks. The advantage of the proposed approach is that it learns by itself and does not require large training datasets.
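The generator/actor/evaluator loop can be caricatured with a single tabular policy: a mobility "generator" samples a random user sector, a softmax policy picks one of several fixed beams, and an "evaluator" rewards a matched beam via a REINFORCE update. The rewards, dynamics, and sizes below are made up for illustration; the paper uses three cooperating neural networks, not a lookup table.

```python
import numpy as np

def train_beam_selector(n_sectors=8, steps=6000, lr=0.2, seed=0):
    """Toy single-policy sketch of a reward-driven beam-selection loop.
    All quantities are illustrative, not the paper's architecture."""
    rng = np.random.default_rng(seed)
    logits = np.zeros((n_sectors, n_sectors))   # state (user sector) x action (beam)
    for _ in range(steps):
        s = int(rng.integers(n_sectors))        # generator: random user sector
        p = np.exp(logits[s] - logits[s].max())
        p /= p.sum()                            # softmax policy over beams
        a = int(rng.choice(n_sectors, p=p))     # policy: sample a beam
        r = 1.0 if a == s else -0.1             # evaluator: mainlobe hit vs. miss
        grad = -p
        grad[a] += 1.0                          # gradient of log softmax prob
        logits[s] += lr * r * grad              # REINFORCE policy-gradient step
    return logits
```

After training, the greedy beam choice for a sector should usually point back at that sector, which is the behavior the reward network is shaping in the full system.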

Journal ArticleDOI
TL;DR: A biobjective mixed-integer nonlinear programming model is developed for a hierarchical three-level health service network design problem, which is then transformed to its linear counterpart.

01 Jan 2018
TL;DR: This article provides a comprehensive survey on the utilization of AI integrating machine learning, data analytics and natural language processing (NLP) techniques for enhancing the efficiency of wireless network operation.
Abstract: Next generation wireless networks (i.e., 5G and beyond), which will be extremely dynamic and complex due to the ultra-dense deployment of heterogeneous networks (HetNets), pose many critical challenges for network planning, operation, management and troubleshooting. At the same time, the generation and consumption of wireless data are becoming increasingly distributed, with an ongoing paradigm shift from people-centric to machine-oriented communications, making the operation of future wireless networks even more complex. To mitigate this complexity, new approaches that intelligently utilize distributed computational resources with improved context awareness become extremely important. In this regard, the emerging fog (edge) computing architecture, which aims to distribute computing, storage, control, communication, and networking functions closer to end users, has great potential for enabling the efficient operation of future wireless networks. These promising architectures make the adoption of artificial intelligence (AI) principles, which incorporate learning, reasoning and decision-making mechanisms, a natural choice for designing tightly integrated networks. To this end, this article provides a comprehensive survey on the utilization of AI, integrating machine learning, data analytics and natural language processing (NLP) techniques, for enhancing the efficiency of wireless network operation. In particular, we provide a comprehensive discussion of the use of these techniques for efficient data acquisition, knowledge discovery, network planning, and the operation and management of next generation wireless networks. A brief case study utilizing these AI techniques is also provided.

Journal ArticleDOI
TL;DR: In this article, a mixed integer linear programming model with a profit maximization objective is proposed to design a multi-stage reverse logistics network for product recovery, where different recovery options such as product remanufacturing, component repairing and material recycling are simultaneously considered.