
Showing papers on "Network planning and design" published in 2017


Journal ArticleDOI
TL;DR: A comprehensive review of studies in the fields of SCND and reverse logistics network design under uncertainty; existing optimization techniques for dealing with uncertainty, such as recourse-based stochastic programming, risk-averse stochastic programming, robust optimization, and fuzzy mathematical programming, are explored.

442 citations


Journal ArticleDOI
TL;DR: Block-VN is a reliable and secure architecture that operates in a distributed way to build a new distributed transport management system; the paper examines how the network of vehicles evolves with paradigms focused on networking and vehicular information.
Abstract: In recent decades, vehicular ad hoc networks have been a core network technology for providing comfort and security to drivers in vehicle environments. However, emerging applications and services require major changes in the underlying network models and computing, and call for new road network planning. Meanwhile, blockchain, widely known as one of the disruptive technologies, has emerged in recent years, is experiencing rapid development, and has the potential to revolutionize intelligent transport systems. Blockchain can be used to build an intelligent, secure, distributed and autonomous transport system, allowing better utilization of the infrastructure and resources of intelligent transport systems, which is particularly effective for crowdsourcing technology. In this paper, we propose a vehicle network architecture based on blockchain in the smart city (Block-VN). Block-VN is a reliable and secure architecture that operates in a distributed way to build the new distributed transport management system. We consider a new network system of vehicles, Block-VN, built on top of them. In addition, we examine how the network of vehicles evolves with paradigms focused on networking and vehicular information. Finally, we discuss service scenarios and design principles for Block-VN.
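The paper gives no implementation; purely as a loose illustration of the hash-chaining idea underlying a blockchain-based vehicle network, here is a minimal sketch (the field names and events are hypothetical, and real systems add consensus, signatures and much more):

```python
import hashlib
import json

def block_hash(block):
    """Deterministic SHA-256 digest of a block's serialized fields."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    """Link a new block to the chain via the previous block's hash."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "prev_hash": prev, "data": data})

def verify(chain):
    """Valid iff every block stores the hash of its predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
append_block(chain, {"vehicle": "V1", "event": "route-update"})
append_block(chain, {"vehicle": "V2", "event": "traffic-report"})
```

Tampering with any earlier block breaks every later `prev_hash` link, which is what makes the distributed record auditable.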

310 citations


Journal ArticleDOI
TL;DR: This work compares two VM mobility modes, bulk and live migration, as a function of mobile cloud service requirements, determining that a high preference should be given to live migration and bulk migrations seem to be a feasible alternative on delay-stringent tiny-disk services, such as augmented reality support, and only with further relaxation on network constraints.
Abstract: Major interest is currently given to the integration of clusters of virtualization servers, also referred to as ‘cloudlets’ or ‘edge clouds’, into the access network to allow higher performance and reliability in the access to mobile edge computing services. We tackle the edge cloud network design problem for mobile access networks. The model is such that the virtual machines (VMs) are associated with mobile users and are allocated to cloudlets. Designing an edge cloud network implies first determining where to install cloudlet facilities among the available sites, then assigning sets of access points, such as base stations, to cloudlets, while supporting VM orchestration and considering partial user mobility information, as well as the satisfaction of service-level agreements. We present link-path formulations supported by heuristics to compute solutions in reasonable time. We quantify the advantage of considering mobility for both users and VMs: up to 20% fewer users are left unsatisfied in their SLA, at the cost of a small increase in the number of opened facilities. We compare two VM mobility modes, bulk and live migration, as a function of mobile cloud service requirements, determining that a high preference should be given to live migration, while bulk migrations seem to be a feasible alternative only for delay-stringent, tiny-disk services, such as augmented reality support, and only with further relaxation of network constraints.

203 citations


Proceedings ArticleDOI
07 Aug 2017
TL;DR: While RotorNet dynamically reconfigures its constituent circuit switches, it decouples switch configuration from traffic patterns, obviating the need for demand collection and admitting a fully decentralized control plane.
Abstract: The ever-increasing bandwidth requirements of modern datacenters have led researchers to propose networks based upon optical circuit switches, but these proposals face significant deployment challenges. In particular, previous proposals dynamically configure circuit switches in response to changes in workload, requiring network-wide demand estimation, centralized circuit assignment, and tight time synchronization between various network elements---resulting in a complex and unwieldy control plane. Moreover, limitations in the technologies underlying the individual circuit switches restrict both the rate at which they can be reconfigured and the scale of the network that can be constructed. We propose RotorNet, a circuit-based network design that addresses these two challenges. While RotorNet dynamically reconfigures its constituent circuit switches, it decouples switch configuration from traffic patterns, obviating the need for demand collection and admitting a fully decentralized control plane. At the physical layer, RotorNet relaxes the requirements on the underlying circuit switches---in particular by not requiring individual switches to implement a full crossbar---enabling them to scale to 1000s of ports. We show that RotorNet outperforms comparably priced Fat Tree topologies under a variety of workload conditions, including traces taken from two commercial datacenters. We also demonstrate a small-scale RotorNet operating in practice on an eight-node testbed.
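The demand-oblivious rotation RotorNet relies on can be illustrated with the classic circle-method round-robin schedule, which cycles a switch through fixed matchings so that every port pair is connected once per cycle (a sketch of the scheduling idea only, not the paper's switch design):

```python
def rotor_matchings(n):
    """Circle-method round-robin: n-1 fixed matchings of n ports (n even)
    that together connect every port pair exactly once per cycle."""
    ports = list(range(1, n))
    matchings = []
    for _ in range(n - 1):
        row = [0] + ports
        # pair the i-th entry with the mirrored (n-1-i)-th entry
        matchings.append({frozenset((row[i], row[n - 1 - i]))
                          for i in range(n // 2)})
        ports = ports[-1:] + ports[:-1]   # rotate all ports except port 0
    return matchings

schedule = rotor_matchings(8)
```

Because the schedule is fixed in advance, no demand estimation or central circuit assignment is needed; traffic that misses its direct matching can be relayed in a later slot.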

174 citations


Posted Content
TL;DR: This paper discusses the data sources and strong drivers for the adoption of data analytics, and the role of machine learning and artificial intelligence in making the system self-aware, self-adaptive, proactive and prescriptive, and proposes a set of network design and optimization schemes concerning data analytics.
Abstract: Next-generation wireless networks are evolving into very complex systems because of highly diversified service requirements and heterogeneity in applications, devices, and networks. Mobile network operators (MNOs) need to make the best use of the available resources, for example power and spectrum, as well as infrastructure. Traditional networking approaches, i.e., reactive, centrally managed, one-size-fits-all approaches, and conventional data analysis tools of limited capability (in space and time) are no longer adequate and cannot serve such future complex networks in terms of operation and optimization in a cost-effective way. A novel paradigm of proactive, self-aware, self-adaptive and predictive networking is much needed. MNOs have access to large amounts of data, especially from the network and the subscribers. Systematic exploitation of this big data greatly helps in making the network smart and intelligent and facilitates cost-effective operation and optimization. In view of this, we consider a data-driven next-generation wireless network model, where the MNOs employ advanced data analytics for their networks. We discuss the data sources and strong drivers for the adoption of data analytics, and the role of machine learning and artificial intelligence in making the network intelligent in terms of being self-aware, self-adaptive, proactive and prescriptive. A set of network design and optimization schemes is presented with respect to data analytics. The paper concludes with a discussion of the challenges and benefits of adopting big data analytics and artificial intelligence in next-generation communication systems.

173 citations


Journal ArticleDOI
TL;DR: A multi-commodity network flow-based optimization model to formulate a customized bus service network design problem so as to optimize the utilization of the vehicle capacity while satisfying individual demand requests defined through space-time windows is developed.
Abstract: Emerging transportation network services, such as customized buses, hold the promise of expanding overall traveler accessibility in congested metropolitan areas. A number of internet-based customized bus services have been planned and deployed for major origin-destination (OD) pairs to/from inner cities with limited physical road infrastructure. In this research, we aim to develop a joint optimization model for addressing a number of practical challenges in providing flexible public transportation services. First, how to maintain minimum loading rate requirements and increase the number of customers per bus so that bus operators can reach long-term profitability. Second, how to optimize detailed bus routing and timetabling plans to satisfy a wide range of specific user constraints, such as passengers’ pickup and delivery locations with preferred time windows, through flexible decisions for matching passengers to bus routes. From a space-time network modeling perspective, this paper develops a multi-commodity network flow-based optimization model to formulate a customized bus service network design problem, so as to optimize the utilization of vehicle capacity while satisfying individual demand requests defined through space-time windows. We further develop a solution algorithm based on Lagrangian decomposition of the primal problem and a space-time prism based method to reduce the solution search space. Case studies using both illustrative and real-world large-scale transportation networks are conducted to demonstrate the effectiveness of the proposed algorithm and its sensitivity under different practical operating conditions.
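The matching of passengers to bus routes hinges on a simple feasibility test: a timetabled route serves a request only if it visits the pickup stop inside its time window and later the drop-off stop inside its window. A toy sketch of that check (stop names, times and the request format are invented, not the paper's model):

```python
def serves(route, request):
    """route: list of (stop, arrival_time); request: pickup and drop-off
    stops, each with a preferred (earliest, latest) time window."""
    pickup, (e1, l1), drop, (e2, l2) = request
    for i, (stop, t) in enumerate(route):
        if stop == pickup and e1 <= t <= l1:
            # drop-off must come strictly after pickup, inside its window
            return any(s == drop and e2 <= u <= l2
                       for s, u in route[i + 1:])
    return False

route = [("A", 10), ("B", 20), ("C", 35)]
```

In the paper's space-time network, this feasibility is encoded directly in which passenger arcs exist, so the multi-commodity flow model never generates infeasible matchings.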

128 citations


Posted Content
TL;DR: This paper proposes an LSTM RNN framework for predicting short- and long-term Traffic Matrices (TM) in large networks and validates the framework on real-world data from the GÉANT network, showing that the LSTM models converge quickly and give state-of-the-art TM prediction performance for relatively small sized models.
Abstract: Network Traffic Matrix (TM) prediction is defined as the problem of estimating future network traffic from the previous and achieved network traffic data. It is widely used in network planning, resource management and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well-suited to learn from experience to classify, process and predict time series with time lags of unknown size. LSTMs have been shown to model temporal sequences and their long-range dependencies more accurately than conventional RNNs. In this paper, we propose an LSTM RNN framework for predicting Traffic Matrices (TM) in large networks. By validating our framework on real-world data from the GÉANT network, we show that our LSTM models converge quickly and give state-of-the-art TM prediction performance for relatively small sized models.
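Before any LSTM is trained, TM prediction is framed as supervised sequence learning: a window of past (flattened) traffic matrices predicts the next one. A minimal sketch of that data preparation step (the snapshot data, window length and shapes are assumptions, not the paper's setup):

```python
import numpy as np

def tm_windows(series, lags):
    """Turn a sequence of k-by-k traffic matrices into supervised pairs:
    each sample is `lags` flattened past TMs; the target is the next TM."""
    flat = series.reshape(len(series), -1)
    X = np.stack([flat[i:i + lags] for i in range(len(flat) - lags)])
    y = flat[lags:]
    return X, y

rng = np.random.default_rng(0)
snapshots = rng.random((100, 4, 4))      # 100 snapshots of a 4-node TM
X, y = tm_windows(snapshots, lags=5)     # (samples, time steps, features)
```

The resulting `(samples, time steps, features)` tensor is exactly the input shape recurrent models consume.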

122 citations


Journal ArticleDOI
TL;DR: An iterative refinement algorithm using partially time-expanded networks solves continuous-time service network design problems to optimality, without explicitly modeling each point in time.
Abstract: Consolidation carriers transport shipments that are small relative to trailer capacity. To be cost effective, the carrier must consolidate shipments, which requires coordinating their paths in both space and time; i.e., the carrier must solve a service network design problem. Most service network design models rely on discretization of time—i.e., instead of determining the exact time at which a dispatch should occur, the model determines a time interval during which a dispatch should occur. While the use of time discretization is widespread in service network design models, a fundamental question related to its use has never been answered: Is it possible to produce an optimal continuous-time solution without explicitly modeling each point in time? We answer this question in the affirmative. We develop an iterative refinement algorithm using partially time-expanded networks that solves continuous-time service network design problems. An extensive computational study demonstrates that the algorithm not only...
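A rough illustration of the partially time-expanded idea (not the authors' algorithm; the terminals, timepoints and travel times below are invented): only a subset of timepoints is modeled, and each dispatch arc lands at the latest modeled time not exceeding the true arrival, so modeled durations never overestimate and the relaxation stays a valid lower bound that can be refined by adding timepoints.

```python
def expand(terminals, times, travel):
    """Partially time-expanded network: nodes (terminal, t) only for t in
    `times`; arc (i, t) -> (j, t') rounds the true arrival time down to
    the latest modeled timepoint t' <= t + travel[i][j]."""
    times = sorted(times)
    nodes = [(i, t) for i in terminals for t in times]
    arcs = []
    for (i, t) in nodes:
        for j in terminals:
            if j == i:
                continue
            arrival = t + travel[i][j]
            earlier = [u for u in times if u <= arrival]
            if earlier:
                arcs.append(((i, t), (j, max(earlier))))
    return nodes, arcs

travel = {"A": {"B": 3}, "B": {"A": 3}}
nodes, arcs = expand(["A", "B"], [0, 2, 4], travel)
```

Refinement then adds timepoints exactly where the rounded arcs are used by an incumbent solution that is infeasible in continuous time.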

119 citations


Journal ArticleDOI
TL;DR: This paper is concerned with the synchronization of multiagent systems connected via different types of interactions, known as multilayer networks, and additive coupling and Markovian switching coupling are proposed to capture the layered connections with two kinds of mathematical models constructed.
Abstract: This paper is concerned with the synchronization of multiagent systems connected via different types of interactions, known as multilayer networks. Additive coupling and Markovian switching coupling are proposed to capture the layered connections, and two kinds of mathematical models are constructed. First, based on simultaneous diagonalization of multiple Laplacian matrices, a general criterion is derived ensuring that the synchronization problem with additive coupling can be decoupled. Then, an alternative condition is presented, which is related to the number of layers, regardless of the number of agents. With the derived criteria, a concept of joint synchronization region is introduced and further discussed as a network design problem. Synchronization with Markovian switching layers is analyzed in parallel, exemplified by some special cases of two-layer networks. Finally, a group of cellular neural networks coupled by two-layer connections is chosen to illustrate the effectiveness of the theoretical results.
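A quick numeric check of the additive-coupling construction on a toy two-layer, three-node example (the layer topologies are invented): the effective coupling Laplacian is the sum of the per-layer Laplacians, it still annihilates the all-ones synchronization direction, and a positive second eigenvalue indicates the combined network is connected.

```python
import numpy as np

def laplacian(adj):
    """Graph Laplacian L = D - A."""
    return np.diag(adj.sum(axis=1)) - adj

A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], float)   # layer 1: a path
A2 = np.array([[0, 0, 1], [0, 0, 1], [1, 1, 0]], float)   # layer 2: a star
L = laplacian(A1) + laplacian(A2)                          # additive coupling
eigvals = np.sort(np.linalg.eigvalsh(L))
```

The paper's decoupling criterion asks when such summed Laplacians can be simultaneously diagonalized, which this toy check does not test.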

111 citations


Posted Content
TL;DR: NeuTM as mentioned in this paper is an LSTM RNN-based framework for predicting traffic matrices in large networks, which is well suited to learn from data and classify or predict time series with time lags of unknown size.
Abstract: This paper presents NeuTM, a framework for network Traffic Matrix (TM) prediction based on Long Short-Term Memory Recurrent Neural Networks (LSTM RNNs). TM prediction is defined as the problem of estimating future network traffic matrices from the previous and achieved network traffic data. It is widely used in network planning, resource management and network security. Long Short-Term Memory (LSTM) is a specific recurrent neural network (RNN) architecture that is well-suited to learn from data and classify or predict time series with time lags of unknown size. LSTMs have been shown to model long-range dependencies more accurately than conventional RNNs. NeuTM is an LSTM RNN-based framework for predicting TMs in large networks. By validating our framework on real-world data from the GÉANT network, we show that our model converges quickly and gives state-of-the-art TM prediction performance.

101 citations


Journal ArticleDOI
TL;DR: It is shown that heterogeneous product quality decay should be taken into account in network design as it significantly influences network designs and their profitability, especially when the supply chain includes processes that change the level of decay, and product quality differences can be exploited in serving different markets.

Journal ArticleDOI
TL;DR: A robust location-routing approach that considers simultaneous decisions on routing vehicles and locating charging stations for strategic network design of electric logistics fleets and the benefit of a robust planning approach with regard to operational feasibility and savings in overall costs is analyzed.
Abstract: We present a robust location-routing approach that considers simultaneous decisions on routing vehicles and locating charging stations for strategic network design of electric logistics fleets. In this approach, we consider uncertain customer patterns with respect to the spatial customer distribution, demand, and service time windows. To solve large-sized instances as well as instances considering a high number of scenarios, a (parallelized) adaptive large neighbourhood search is presented. We derive new benchmark instances for the proposed problem class with different degrees of uncertainty and evaluate the performance of our algorithm. Results are presented for a real-world application case and are compared to results of different deterministic modeling approaches. Based on these results, the benefit of a robust planning approach with regard to operational feasibility and savings in overall costs is analyzed for the underlying planning problem, and managerial insights are derived.
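The paper's (parallelized) adaptive large neighbourhood search is far richer; purely to illustrate the destroy-and-repair loop at the core of any ALNS, here is a toy sketch on a 1-D routing instance (the instance, cost function, operators and parameters are all hypothetical):

```python
import random

def tour_cost(route):
    """Total travel distance over 1-D customer coordinates (toy metric)."""
    return sum(abs(a - b) for a, b in zip(route, route[1:]))

def alns(route, n_iter=300, q=2, seed=0):
    """Bare-bones ALNS loop: destroy (remove q random customers), repair
    (cheapest re-insertion), and accept only improving solutions."""
    rng = random.Random(seed)
    best = route[:]
    for _ in range(n_iter):
        cur = best[:]
        removed = rng.sample(cur, q)
        for c in removed:
            cur.remove(c)
        for c in removed:
            pos = min(range(len(cur) + 1),
                      key=lambda p: tour_cost(cur[:p] + [c] + cur[p:]))
            cur.insert(pos, c)
        if tour_cost(cur) < tour_cost(best):
            best = cur
    return best

start = [3, 1, 4, 2, 5]
result = alns(start)
```

A full ALNS additionally keeps a portfolio of destroy/repair operators and adapts their selection probabilities based on past success, which is what "adaptive" refers to.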

Posted Content
TL;DR: This work establishes a fundamental connection between the fields of quantum physics and deep learning, and shows an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which relies on their common underlying tensorial structure.
Abstract: Deep convolutional networks have witnessed unprecedented success in various machine learning applications. Formal understanding on what makes these networks so successful is gradually unfolding, but for the most part there are still significant mysteries to unravel. The inductive bias, which reflects prior knowledge embedded in the network architecture, is one of them. In this work, we establish a fundamental connection between the fields of quantum physics and deep learning. We use this connection for asserting novel theoretical observations regarding the role that the number of channels in each layer of the convolutional network fulfills in the overall inductive bias. Specifically, we show an equivalence between the function realized by a deep convolutional arithmetic circuit (ConvAC) and a quantum many-body wave function, which relies on their common underlying tensorial structure. This facilitates the use of quantum entanglement measures as well-defined quantifiers of a deep network's expressive ability to model intricate correlation structures of its inputs. Most importantly, the construction of a deep ConvAC in terms of a Tensor Network is made available. This description enables us to carry out a graph-theoretic analysis of a convolutional network, with which we demonstrate a direct control over the inductive bias of the deep network via its channel numbers, which are related to the min-cut in the underlying graph. This result is relevant to any practitioner designing a network for a specific task. We theoretically analyze ConvACs, and empirically validate our findings on more common ConvNets which involve ReLU activations and max pooling. Beyond the results described above, the description of a deep convolutional network in well-defined graph-theoretic tools and the formal connection to quantum entanglement are two interdisciplinary bridges brought forth by this work.

Journal ArticleDOI
TL;DR: A multi-product, multi-tier mixed integer linear model is developed for a closed-loop supply chain network design and the result shows applicability of the model in the tire industry.

Proceedings ArticleDOI
19 Mar 2017
TL;DR: A network traffic prediction method based on a deep belief network and a Gaussian model that outperforms three existing methods for wireless mesh backbone network prediction.
Abstract: Wireless mesh networks are prevalent for providing decentralized access for users. The wireless mesh backbone network has obtained extensive attention because of its large capacity and low cost. Network traffic prediction is important for network planning and the routing configurations that are implemented to improve the quality of service for users. This paper proposes a network traffic prediction method based on a deep belief network and a Gaussian model. The proposed method first adopts the discrete wavelet transform to extract the low-pass component of network traffic, which describes the long-range dependence of the traffic. A prediction model is then built by training a deep belief network on the extracted low-pass component. For the remaining high-pass component, which expresses the bursty and irregular fluctuations of network traffic, a Gaussian model is used. We estimate the parameters of the Gaussian model by the maximum likelihood method and predict the high-pass component with the fitted model. Combining the predictors of the two components yields a predictor of the overall network traffic. In simulations, the proposed prediction method outperforms three existing methods.
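As a loose numeric illustration of the decomposition step (not the paper's implementation; the synthetic trace, seed, and one-level Haar transform are assumptions), a traffic trace can be split into low- and high-pass parts, with the high-pass part fitted by Gaussian maximum likelihood:

```python
import numpy as np

def haar_split(x):
    """One-level Haar wavelet transform: low-pass (approximation) and
    high-pass (detail) components of a trace of even length."""
    pairs = x[: len(x) // 2 * 2].reshape(-1, 2)
    approx = (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2)
    detail = (pairs[:, 0] - pairs[:, 1]) / np.sqrt(2)
    return approx, detail

rng = np.random.default_rng(1)
trace = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.3 * rng.standard_normal(512)
low, high = haar_split(trace)
mu, sigma = high.mean(), high.std()   # Gaussian MLE for the high-pass part
```

The smooth `low` series is what the deep belief network would be trained on, while `(mu, sigma)` summarizes the irregular fluctuations.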

Journal ArticleDOI
TL;DR: An Adaptive Large Neighborhood Search (ALNS) algorithm is developed, which can simultaneously handle the network design and line planning problems considering also rolling stock and personnel planning aspects, and is compared with state-of-the-art commercial solvers on a small-size artificial instance.

Posted Content
TL;DR: A unified way to encode individual layers into vectors and bring them together to form an integrated description via LSTM, taking advantage of the recurrent network's strong expressive power, can reliably predict the performances of various network architectures.
Abstract: The quest for performant networks has been a significant force that drives the advancements of deep learning in recent years. While rewarding, improving network design has never been an easy journey. The large design space combined with the tremendous cost required for network training poses a major obstacle to this endeavor. In this work, we propose a new approach to this problem, namely, predicting the performance of a network before training, based on its architecture. Specifically, we develop a unified way to encode individual layers into vectors and bring them together to form an integrated description via LSTM. Taking advantage of the recurrent network's strong expressive power, this method can reliably predict the performances of various network architectures. Our empirical studies showed that it not only achieved accurate predictions but also produced consistent rankings across datasets -- a key desideratum in performance prediction.

Journal ArticleDOI
TL;DR: This work presents an electric vehicle battery service network design problem that considers customer satisfaction related to “range anxiety” and “loss anxiety”, formulated as a linear integer programming model under deterministic and fuzzy scenarios.
Abstract: Key to the mass adoption of electric vehicles is the establishment of a sufficient battery service infrastructure network on the basis of customer behavior and psychology. Motivated by EV service infrastructure network design under the battery leasing/electric car sharing service business models, we present an electric vehicle battery service network design problem considering customer satisfaction related to “range anxiety” and “loss anxiety”. The problem is formulated as a linear integer programming model under deterministic and fuzzy scenarios. A Tabu Search heuristic combined with GRASP is proposed to efficiently solve the problem. Finally, we conduct parametric analysis on real-world road networks.
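The paper couples GRASP with Tabu Search; as a sketch of just the GRASP half (the instance, coverage sets and parameters below are invented), station siting can be cast as a randomized greedy set cover that draws each station from a restricted candidate list:

```python
import random

def grasp_cover(covers, demands, iters=20, alpha=0.3, seed=0):
    """GRASP sketch for station siting: repeat a randomized greedy cover
    and keep the smallest station set found. covers[s] is the set of
    demand nodes candidate station s can serve."""
    rng = random.Random(seed)
    best = None
    for _ in range(iters):
        uncovered, chosen = set(demands), []
        while uncovered:
            gains = {s: len(covers[s] & uncovered)
                     for s in covers if s not in chosen}
            top = max(gains.values())
            if top == 0:
                break                     # some demand is unreachable
            rcl = [s for s, g in gains.items() if g >= (1 - alpha) * top]
            pick = rng.choice(rcl)        # randomized restricted choice
            chosen.append(pick)
            uncovered -= covers[pick]
        if not uncovered and (best is None or len(chosen) < len(best)):
            best = chosen
    return best

covers = {"s1": {1, 2, 3}, "s2": {3, 4}, "s3": {4, 5}, "s4": {1, 2}}
stations = grasp_cover(covers, demands={1, 2, 3, 4, 5})
```

In a full implementation each constructed solution would then seed a Tabu Search local-improvement phase.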

Book
13 Nov 2017
TL;DR: Computer Networks reveals the guts of what's going on with computers that share data and is neither a user manual nor a technical reference, but provides in-depth background on how network architectures and protocols work.
Abstract: Computer Networks: A Systems Approach, designed for an advanced college-level course in network design and operation, provides the network applications programmer with detailed information about how networks do their thing. While Computer Networks is neither a user manual nor a technical reference, it provides an in-depth background on how network architectures and protocols work. In the beginning, Larry Peterson and Bruce Davie discuss why networks are important and talk about where networks may go in the long term. The authors then move right into a discussion of protocols. There's a fascinating section--complete with plenty of C code--in which the authors actually develop a network protocol called A Simple Protocol (ASP). They compare switching and packet networks and emphasize tunneling protocols. In the internetworking chapter, you'll learn practically all there is to know about Internet Protocol (IP). The concluding chapters talk about traffic management, congestion reduction, and high-speed networking technologies. Computer Networks reveals the guts of what's going on with computers that share data. Though way out of the league of most computer users, true geeks with an interest in networking will find what they need here.

Journal ArticleDOI
01 Jul 2017
TL;DR: A general two-stage quantitative framework that enables decision makers to select the optimal network design scheme for CLNs under uncertainty is proposed in this paper.
Abstract: Highlights: proposes a general two-stage decision framework for DU-CLNDOP; adds robust constraints to the expected value model; utilizes the orthogonal experiment design method to select the optimal scheme. Collaborative logistics networks (CLNs) are considered to be an effective organizational form for business cooperation that provides high stability and low cost. One common key issue regarding CLN resource combination is the network design optimization problem under discrete uncertainty (DU-CLNDOP). Operational environment changes and information uncertainty in network designs, due to partner selection, resource constraints and network robustness, must be effectively controlled from the system perspective. Therefore, a general two-stage quantitative framework that enables decision makers to select the optimal network design scheme for CLNs under uncertainty is proposed in this paper. Phase 1 calculates the simulation result of each hypothetical scenario of CLN resource combination using the expected value model with robust constraints. Phase 2 selects the optimal network design scheme for DU-CLNDOP using the orthogonal experiment design method. The validity of the model and method is verified via an illustrative example.

Proceedings ArticleDOI
21 Sep 2017
TL;DR: An SDN-based network planning framework utilizing machine-learning techniques and a network-scale monitoring database is implemented over an optical field-trial testbed comprising 436.4 km of fibre, demonstrating adaptation of the spectral efficiency utilising a probabilistic-shaping BVT based on link performance prediction.
Abstract: An SDN-based network planning framework utilizing machine-learning techniques and a network-scale monitoring database is implemented over an optical field-trial testbed comprising 436.4 km of fibre. Adaptation of the spectral efficiency utilising a probabilistic-shaping BVT based on link performance prediction is demonstrated.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the problem of EV traffic equilibrium and optimal deployment of charging stations subject to range limitation, where the authors use origin-based flows to maintain the range limitation constraint at the path level without path enumeration.
Abstract: This study investigates the electric vehicle (EV) traffic equilibrium and optimal deployment of charging locations subject to range limitation. The problem is similar to a network design problem with traffic equilibrium, which is characterized by a bi-level model structure. The upper level objective is to optimally locate charging stations such that the total generalized cost of all users is minimized, where the user’s generalized cost includes two parts, travel time and energy consumption. The total generalized cost is a measure of the total societal cost. The lower level model seeks traffic equilibrium, in which travelers minimize their individual generalized cost. All the utilized paths have identical generalized cost while satisfying the range limitation constraint. In particular, we use origin-based flows to maintain the range limitation constraint at the path level without path enumeration. To obtain the global solution, the optimality condition of the lower level model is added to the upper level problem resulting in a single level model. The nonlinear travel time function is approximated by piecewise linear functions, enabling the problem to be formulated as a mixed integer linear program. We use a modest-sized network to analyze the model and illustrate that it can determine the optimal charging station locations in a planning context while factoring the EV users’ individual path choice behaviours.
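The abstract's linearization step can be illustrated on the standard BPR travel-time function (the paper does not specify its function or parameters; the free-flow time, capacity and breakpoint grid below are assumptions). Because BPR is convex, the secant segments over-estimate it, and the approximation is exact at the breakpoints:

```python
import numpy as np

def bpr(v, t0=1.0, cap=100.0):
    """Standard BPR travel-time function (hypothetical parameters)."""
    return t0 * (1 + 0.15 * (v / cap) ** 4)

# breakpoints of an 8-segment piecewise-linear approximation on [0, 200]
vs = np.linspace(0.0, 200.0, 9)
ts = bpr(vs)

def pwl(v):
    """Evaluate the piecewise-linear (secant) approximation at flow v."""
    i = min(np.searchsorted(vs, v, side="right") - 1, len(vs) - 2)
    w = (v - vs[i]) / (vs[i + 1] - vs[i])
    return (1 - w) * ts[i] + w * ts[i + 1]
```

Replacing `bpr` with `pwl` plus binary segment-selection variables is what turns the nonlinear design model into a mixed integer linear program.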

Journal ArticleDOI
TL;DR: In this article, the authors provide an insight on how network convergence and application-centric approaches will play a leading role toward enabling the 5G vision and propose the use of the concept of network convergence for providing the overall architectural framework to bring together all the different technologies within a unifying and coherent network ecosystem.
Abstract: Future 5G services are characterised by unprecedented need for high rate, ubiquitous availability, ultralow latency, and high reliability. The fragmented network view that is widespread in current networks will not stand the challenge posed by next generations of users. A new vision is required, and this paper provides an insight on how network convergence and application-centric approaches will play a leading role toward enabling the 5G vision. This paper, after expressing the view on the need for an end-to-end approach to network design, brings the reader into a journey on the expected 5G network requirements and outlines some of the work currently carried out by main standardisation bodies. It then proposes the use of the concept of network convergence for providing the overall architectural framework to bring together all the different technologies within a unifying and coherent network ecosystem. The novel interpretation of multidimensional convergence we introduce leads us to the exploration of aspects of node consolidation and converged network architectures, delving into details of optical-wireless integration and future convergence of optical data centre and access-metro networks. We then discuss how ownership models enabling network sharing will be instrumental in realising the 5G vision. This paper concludes with final remarks on the role SDN will play in 5G and on the need for new business models that reflect the application-centric view of the network. Finally, we provide some insight on growing research areas in 5G networking.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an eco-efficient closed loop supply chain (CLSC) design for extending the existing supply chain of an Indian firm that assembles inkjet printers.

Journal ArticleDOI
01 Jun 2017
TL;DR: A probabilistic-based multi-objective optimization model is proposed to address challenges of RFID network planning; it achieves better performance in terms of quality metric and generational distance under the same computational environment.
Abstract: Highlights: uncertainty is considered in RFID network planning; RFID network planning is modelled as a multi-objective optimization; a novel multi-objective firefly algorithm is proposed to solve it; numerical experiments show the effectiveness of the proposed algorithm. Radio frequency identification (RFID) is widely used for item identification and tracking. Due to the limited communication range between readers and tags, how to configure an RFID system in a large area is important but challenging. To configure an RFID system, most existing results are based on cost minimization using a 0/1 identification model. In practice, the system suffers interference from the environment, and a probabilistic model is more reliable. To ensure the quality of the system, more objectives, such as interference and coverage, should be considered in addition to cost. In this paper, we propose a probabilistic-based multi-objective optimization model to address these challenges. The objectives to be optimized include the number of readers, the interference level and the coverage of tags. A decomposition-based firefly algorithm is designed to solve this multi-objective optimization problem. Virtual force is integrated into the random walk to guide reader movement and enhance exploitation. Numerical simulations are introduced to demonstrate and validate our proposed method. Compared with existing methods, such as Non-dominated Sorting Genetic Algorithm-II and Multi-objective Particle Swarm Optimization, our method achieves better performance in terms of the quality metric and generational distance under the same computational environment. However, the spacing metric of the proposed method is slightly inferior to the compared methods.
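Any multi-objective planner like this ultimately reports a set of non-dominated trade-offs. A minimal Pareto-front filter (the three objective tuples below are invented, purely to mirror the paper's readers/interference/coverage objectives, all cast as minimization):

```python
def pareto_front(points):
    """Keep the non-dominated points (all objectives minimized): p is
    dominated if some other point is no worse in every objective and,
    being a different point, strictly better in at least one."""
    def dominated(p):
        return any(q != p and all(qi <= pi for qi, pi in zip(q, p))
                   for q in points)
    return [p for p in points if not dominated(p)]

# hypothetical objectives: (readers used, interference, uncovered tags)
solutions = [(4, 0.2, 1), (5, 0.1, 0), (6, 0.3, 2)]
front = pareto_front(solutions)
```

Quality metrics such as generational distance are then computed between this front and a reference front.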

Journal ArticleDOI
TL;DR: This research provides a framework that accounts for refueling demand uncertainty and the effect of travelers deviating from their routes to refuel, and proposes a discrete robust optimization model in which refueling demand is formulated as an uncertainty set over the planning horizon.

Journal ArticleDOI
TL;DR: A novel simulation-based simulated annealing algorithm is developed to address large-sized test problems, and the results indicate the applicability of the model as well as the efficiency of the solution approach.
Abstract: This paper addresses the design and planning of an integrated forward/reverse logistics network over a planning horizon with multiple tactical periods. In the network, demand for new products and potential return of used products are stochastic. Furthermore, collection amounts of used products with different quality levels are assumed to depend on the acquisition prices offered to customer zones. A uniform distribution function defines the expected price of each customer zone for one unit of each used product. A mixed-integer linear programming model is proposed using two-stage stochastic programming. To cope with demand and potential return uncertainty, the Latin Hypercube Sampling method is applied to generate a fan of scenarios, and a backward scenario reduction technique is then used to reduce the number of scenarios. Owing to the problem's complexity, a novel simulation-based simulated annealing algorithm is developed to address large-sized test problems. Numerical results indicate the applicability of the model as well as the efficiency of the solution approach. In addition, the performance of the scenario generation method and the importance of stochasticity are examined for the optimization problem. Finally, several numerical experiments, including sensitivity analysis on the main parameters of the problem, are performed.
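The scenario pipeline described above (Latin Hypercube Sampling followed by backward reduction) can be sketched compactly. The instance sizes and distance-based reduction criterion below are illustrative assumptions, not the paper's exact parameterisation.

```python
import random, math

random.seed(0)
N_SCEN, N_PRODUCTS = 20, 3     # scenario count and demand dimensions (assumptions)

def lhs_scenarios(n, dims):
    """Latin Hypercube Sampling: one sample per equal-probability stratum in
    each dimension, with strata randomly paired across dimensions."""
    cols = []
    for _ in range(dims):
        strata = [(i + random.random()) / n for i in range(n)]
        random.shuffle(strata)
        cols.append(strata)
    return [tuple(cols[d][i] for d in range(dims)) for i in range(n)]

def reduce_scenarios(scens, probs, keep):
    """Backward reduction: repeatedly delete the scenario whose removal is
    cheapest (its probability times the distance to its nearest neighbour)
    and transfer its probability to that neighbour."""
    scens, probs = list(scens), list(probs)
    while len(scens) > keep:
        best = None
        for i, s in enumerate(scens):
            j = min((k for k in range(len(scens)) if k != i),
                    key=lambda k: math.dist(s, scens[k]))
            cost = probs[i] * math.dist(s, scens[j])
            if best is None or cost < best[0]:
                best = (cost, i, j)
        _, i, j = best
        probs[j] += probs[i]       # neighbour absorbs the deleted probability
        del scens[i], probs[i]
    return scens, probs

scens = lhs_scenarios(N_SCEN, N_PRODUCTS)
probs = [1.0 / N_SCEN] * N_SCEN
kept, kept_probs = reduce_scenarios(scens, probs, keep=5)
```

The reduced fan (`kept`, `kept_probs`) would then feed the second stage of the two-stage stochastic program; probabilities remain a valid distribution after reduction.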

Journal ArticleDOI
TL;DR: This paper reviews and extends mathematical models and algorithms to solve optimization problems related to the design, operation, and reoptimization of EONs; two use cases are presented as illustrative examples of how the network life cycle needs to be extended with in-operation planning and data analytics, thus adding cognition to the network.
Abstract: Emerging services and applications demanding high bitrates and stringent quality of service are pushing telecom operators to upgrade their core networks, based on wavelength-division multiplexing (WDM), to a more flexible technology for the more dynamic and variable traffic that is expected to be conveyed. Academy- and industry-driven research on elastic optical networks (EON) has matured into a technology ready to gradually upgrade WDM-based networks. Key EON features include flexible spectrum allocation, connections beyond 100 Gb/s, advanced modulation formats, and elasticity against time-varying traffic. As a consequence of the variety of features involved, network design and algorithms for EONs are remarkably more complex than those for WDM networks. However, exploiting those features also creates new opportunities for network operators to reduce costs: the classical network life cycle, based on fixed periodic planning cycles, can be adapted to greatly reduce overprovisioning by applying reoptimization techniques that reconfigure the network while it is in operation, and by efficiently managing new services, such as datacenter interconnection, which will require provisioning multicast connections and elastic spectrum allocation for time-varying traffic. This paper reviews and extends mathematical models and algorithms to solve optimization problems related to the design, operation, and reoptimization of EONs. In addition, two use cases are presented as illustrative examples of how the network life cycle needs to be extended with in-operation planning and data analytics, thus adding cognition to the network.
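The flexible spectrum allocation mentioned above is commonly formulated as a routing and spectrum assignment problem with two constraints: an assigned block of frequency slots must be contiguous (contiguity) and identical on every link of the path (continuity). A minimal first-fit sketch, with the slot count, link names, and demands chosen here purely for illustration, could look as follows:

```python
# Each link is a list of booleans over frequency slots (True = occupied).
N_SLOTS = 16                                   # slots per link (assumption)

def first_fit(links, path, demand):
    """Assign the first block of `demand` contiguous slots that is free on
    every link of `path` (spectrum contiguity + continuity constraints).
    Returns the starting slot, or None if the request is blocked."""
    for start in range(N_SLOTS - demand + 1):
        block = range(start, start + demand)
        if all(not links[l][s] for l in path for s in block):
            for l in path:
                for s in block:
                    links[l][s] = True
            return start
    return None

links = {l: [False] * N_SLOTS for l in ("A-B", "B-C", "C-D")}
a = first_fit(links, ["A-B", "B-C"], 4)   # slots 0-3 on both links
b = first_fit(links, ["B-C", "C-D"], 3)   # slots 4-6 (0-3 busy on B-C)
```

Reoptimization in operation would amount to periodically clearing and re-placing these blocks to defragment the spectrum, which is where the in-operation planning discussed in the paper comes in.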

Journal ArticleDOI
TL;DR: This article introduces a novel data-driven intelligent radio access network (RAN) architecture that is hierarchical and distributed and operates in real time, and identifies the required data and respective workflows that facilitate intelligent network optimization.
Abstract: The concept of using big data (BD) for wireless communication network optimization is no longer new. However, previous work has primarily focused on long-term policies in the network, such as network planning and management. Apart from this, the source of the data collected for analysis/model training is mostly limited to the core network (CN). In this article, we introduce a novel data-driven intelligent radio access network (RAN) architecture that is hierarchical and distributed and operates in real time. We also identify the required data and respective workflows that facilitate intelligent network optimizations. It is our strong belief that the wireless BD (WBD) and machine-learning/artificial-intelligence (AI)-based methodology applies to all layers of the communication system. To demonstrate the superior performance gains of our proposed methodology, two use cases are analyzed with system-level simulations; one is the neural-network-aided optimization for Transmission Control Protocol (TCP), and the other is prediction-based proactive mobility management.
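The article's prediction-based proactive mobility management relies on learned models; as a much simpler stand-in for illustration, a first-order transition model over past handover traces can already predict the likely next cell so that resources can be prepared at the target in advance. The cell names and traces below are invented for the sketch.

```python
from collections import Counter, defaultdict

def train(handover_logs):
    """Count cell-to-cell transitions observed in past handover traces."""
    trans = defaultdict(Counter)
    for trace in handover_logs:
        for a, b in zip(trace, trace[1:]):
            trans[a][b] += 1
    return trans

def predict_next(trans, current):
    """Most frequent successor of the current cell, so the RAN can prepare
    resources at the likely handover target; None if the cell is unseen."""
    if not trans[current]:
        return None
    return trans[current].most_common(1)[0][0]

# Hypothetical traces: each list is one user's sequence of serving cells.
logs = [["c1", "c2", "c3"], ["c1", "c2", "c4"], ["c5", "c2", "c3"]]
model = train(logs)
```

The article's proposal would replace this counting model with machine-learning models trained on wireless big data across RAN layers; the loop structure (collect traces, train, predict, act proactively) is the same.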

Journal ArticleDOI
TL;DR: In this paper, the authors propose a hub-and-shuttle model combining a few high-frequency bus routes between key hubs with a large number of shuttles that carry passengers from their origin to the closest hub and from their last bus stop to their destination.
Abstract: The BusPlus project aims at improving off-peak-hours public transit service in Canberra, Australia. To address the difficulty of covering a large geographic area, the project proposes a hub-and-shuttle model consisting of a combination of a few high-frequency bus routes between key hubs and a large number of shuttles that bring passengers from their origin to the closest hub and take them from their last bus stop to their destination. This paper focuses on the design of the bus network and proposes an efficient solution method for this multimodal network design problem based on the Benders decomposition method. Starting from a mixed-integer programming (MIP) formulation of the problem, the paper presents a Benders decomposition approach using dedicated solution techniques for solving independent subproblems, Pareto-optimal cuts, cut bundling, and core point update. Computational results on real-world data from Canberra's public transit system justify the design choices and show that the approach outperforms the MIP...
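The Benders loop underlying such approaches can be illustrated on a toy hub-location instance: the master problem picks which hubs to open, the subproblem assigns each customer to its cheapest open hub, and each subproblem solve yields a classic optimality cut. The instance data are invented, the master is solved by brute force rather than a MIP solver, and the paper's refinements (Pareto-optimal cuts, cut bundling, core point updates) are omitted.

```python
import itertools, math

# Toy instance (assumed data, not Canberra's).
FIXED = [3.0, 2.0, 3.5]                       # cost of opening each hub
DIST = [[1.0, 4.0, 5.0],                      # DIST[j][i]: customer j -> hub i
        [4.0, 1.0, 3.0],
        [5.0, 2.0, 1.0],
        [3.0, 5.0, 1.5]]
HUBS, CUSTOMERS = range(len(FIXED)), range(len(DIST))

def subproblem(y):
    """Assignment subproblem: each customer uses its cheapest open hub.
    Returns the cost and one Benders optimality cut (const, coefs) with
    theta >= const + sum_i coefs[i] * y_i."""
    cost, cut_const, cut_coef = 0.0, 0.0, [0.0] * len(FIXED)
    for j in CUSTOMERS:
        dj = min(DIST[j][i] for i in HUBS if y[i])
        cost += dj
        cut_const += dj
        for i in HUBS:
            cut_coef[i] -= max(0.0, dj - DIST[j][i])
    return cost, (cut_const, cut_coef)

def solve_master(cuts):
    """Brute-force master over binary y (stands in for a MIP solver)."""
    best = None
    for y in itertools.product((0, 1), repeat=len(FIXED)):
        if not any(y):                         # at least one hub must open
            continue
        theta = max((c + sum(a[i] * y[i] for i in HUBS) for c, a in cuts),
                    default=0.0)
        obj = sum(FIXED[i] * y[i] for i in HUBS) + theta
        if best is None or obj < best[0]:
            best = (obj, y, theta)
    return best

cuts, best_cost = [], math.inf
for _ in range(10):                            # Benders iterations
    lower, y, theta = solve_master(cuts)       # lower bound from the master
    sub_cost, cut = subproblem(y)
    best_cost = min(best_cost, sum(FIXED[i] * y[i] for i in HUBS) + sub_cost)
    if sub_cost <= theta + 1e-9:               # no violated cut: optimal
        break
    cuts.append(cut)
```

On this instance the loop converges in a handful of iterations; at scale, the subproblem separates by customer, which is what makes the decomposition attractive for large networks like the one in the paper.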