
Showing papers in "IEEE/OSA Journal of Optical Communications and Networking in 2018"


Journal ArticleDOI
TL;DR: This tutorial paper reviews several machine learning concepts tailored to the optical networking industry and discusses algorithm choices, data and model management strategies, and integration into existing network control and management tools.
Abstract: Networks are complex interacting systems involving cloud operations, core and metro transport, and mobile connectivity all the way to video streaming and similar user applications. With localized and highly engineered operational tools, it is typical of these networks to take days to weeks for any changes, upgrades, or service deployments to take effect. Machine learning, a sub-domain of artificial intelligence, is highly suitable for complex system representation. In this tutorial paper, we review several machine learning concepts tailored to the optical networking industry and discuss algorithm choices, data and model management strategies, and integration into existing network control and management tools. We then describe four networking case studies in detail, covering predictive maintenance, virtual network topology management, capacity optimization, and optical spectral analysis.

201 citations


Journal ArticleDOI
TL;DR: A ML classifier is investigated that predicts whether the bit error rate of unestablished lightpaths meets the required system threshold based on traffic volume, desired route, and modulation format.
Abstract: Predicting the quality of transmission (QoT) of a lightpath prior to its deployment is a step of capital importance for an optimized design of optical networks. Due to the continuous advances in optical transmission, the number of design parameters available to system engineers (e.g., modulation formats, baud rate, code rate, etc.) is growing dramatically, thus significantly increasing the alternative scenarios for lightpath deployment. As of today, existing (pre-deployment) estimation techniques for lightpath QoT belong to two categories: “exact” analytical models estimating physical-layer impairments, which provide accurate results but incur heavy computational requirements, and margined formulas, which are computationally faster but typically introduce high link margins that lead to underutilization of network resources. In this paper, we explore a third option, i.e., machine learning (ML), as ML techniques have already been successfully applied for optimization and performance prediction of complex systems where analytical models are hard to derive and/or numerical procedures impose a high computational burden. We investigate a ML classifier that predicts whether the bit error rate of unestablished lightpaths meets the required system threshold based on traffic volume, desired route, and modulation format. The classifier is trained and tested on synthetic data, and its performance is assessed over different network topologies and for various combinations of classification features. Results in terms of classifier accuracy are promising and motivate further investigation over real field data.

163 citations
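The classifier described above maps per-lightpath features to a binary decision (BER above or below the system threshold). A minimal sketch of that formulation on synthetic data, with a scikit-learn random forest standing in for the paper's classifier; the feature set and the toy labeling rule below are illustrative assumptions, not the paper's:

```python
# Sketch of a binary QoT classifier in the spirit of this paper: predict
# whether a candidate lightpath's BER will meet the system threshold.
# Features, labels, and data are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.uniform(10, 400, n),      # traffic volume (Gb/s)
    rng.uniform(100, 3000, n),    # total route length (km)
    rng.integers(1, 10, n),       # number of traversed links
    rng.integers(2, 7, n),        # modulation order (bits/symbol)
])
# Toy ground truth: long routes and dense modulation tend to fail the threshold.
score = 0.002 * X[:, 1] + 0.8 * X[:, 3] - 0.3 * X[:, 2]
y = (score + rng.normal(0, 1, n) < 6.0).astype(int)  # 1 = BER below threshold (OK)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"classification accuracy: {clf.score(X_te, y_te):.3f}")
```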


Journal ArticleDOI
TL;DR: Energy-efficiency improvements in core networks obtained as a result of work carried out by the GreenTouch consortium over a five-year period are discussed and an experimental demonstration that illustrates the feasibility of energy-efficient content distribution in IP/WDM networks is implemented.
Abstract: In this paper, we discuss energy-efficiency improvements in core networks obtained as a result of work carried out by the GreenTouch consortium over a five-year period. A number of techniques that yield substantial energy savings in core networks were introduced, including (i) the use of improved network components with lower power consumption, (ii) putting idle components into sleep mode, (iii) optically bypassing intermediate routers, (iv) the use of mixed line rates, (v) placing resources for protection into a low power state when idle, (vi) optimization of the network physical topology, and (vii) the optimization of distributed clouds for content distribution and network equipment virtualization. These techniques are recommended as the main energy-efficiency improvement measures for 2020 core networks. A mixed integer linear programming optimization model combining all the aforementioned techniques was built to minimize energy consumption in the core network. We consider group 1 nations' traffic and place this traffic on a US continental network represented by the AT&T network topology. The projections of the 2020 equipment power consumption are based on two scenarios: a business as usual (BAU) scenario and a GreenTouch (GT) (i.e., BAU + GT) scenario. The results show that the 2020 BAU scenario improves the network energy efficiency by a factor of 4.23× compared with the 2010 network as a result of the reduction in the network equipment power consumption. Considering the 2020 BAU + GT network, the network equipment improvements alone reduce network power by a factor of 20× compared with the 2010 network. Including all the BAU + GT energy-efficiency techniques yields a total energy efficiency improvement of 315×. We have also implemented an experimental demonstration that illustrates the feasibility of energy-efficient content distribution in IP/WDM networks.

156 citations
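As a flavor of the MILP approach, the sketch below models just one of the listed techniques, (ii) sleep modes, with PuLP: binary on/off variables gate each router's idle power while flow constraints keep a demand served. The three-node topology and all power figures are hypothetical; the paper's full model is far richer:

```python
# Toy MILP in the spirit of technique (ii), sleep modes: route a demand
# while switching off unused routers. All figures are hypothetical.
import pulp

nodes = ["A", "B", "C"]
links = [("A", "B"), ("B", "C"), ("A", "C")]
demand = 40                            # Gb/s to carry from A to C
link_cap = 100                         # Gb/s per link
idle_power, per_gbps = 500.0, 2.0      # W and W per Gb/s (hypothetical)

prob = pulp.LpProblem("core_energy", pulp.LpMinimize)
on = {n: pulp.LpVariable(f"on_{n}", cat="Binary") for n in nodes}
flow = {l: pulp.LpVariable(f"f_{l[0]}{l[1]}", 0, link_cap) for l in links}

# Objective: idle power of powered-on routers plus load-proportional power.
prob += pulp.lpSum(idle_power * on[n] for n in nodes) + \
        pulp.lpSum(per_gbps * f for f in flow.values())

# The demand may take the direct A-C link or the A-B-C route.
prob += flow[("A", "C")] + flow[("A", "B")] == demand
prob += flow[("A", "B")] == flow[("B", "C")]   # flow conservation at B
# A link can carry traffic only if both of its endpoints are powered on.
for (u, v), f in flow.items():
    prob += f <= link_cap * on[u]
    prob += f <= link_cap * on[v]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print({n: int(on[n].value()) for n in nodes}, "power:", pulp.value(prob.objective), "W")
```

The solver routes the demand over the direct A-C link and puts router B to sleep, illustrating how the binary variables trade idle power against routing choices.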


Journal ArticleDOI
Rui Manuel Morais, Joao Pedro
TL;DR: This work evaluates the effectiveness of various machine learning models when used to predict the quality of transmission (QoT) of an unestablished lightpath, speeding up the process of lightpath provisioning.
Abstract: It is estimated that 5G and the Internet of Things (IoT) will impact traffic, both in volume and dynamicity, at unprecedented rates. Thus, to cost-efficiently accommodate these challenging requirements, optical networks must become more responsive to changes impacting the traffic and network state as well as operate more closely to optimality. In this context, knowledge-defined networking (KDN) promises to play a paramount role in improving network flexibility and automation. KDN is a solution that introduces reasoning processes and machine learning techniques into the control plane of the network, enabling it to operate autonomously and faster. One of the key aspects in this environment is the accurate validation of lightpaths. Accurate lightpath validation demands running computationally intensive performance models, which can be time-consuming and impact time-critical applications (e.g., optical channel restoration). This work evaluates the effectiveness of various machine learning models when used to predict the quality of transmission (QoT) of an unestablished lightpath, speeding up the process of lightpath provisioning. Three network scenarios to efficiently generate the knowledge database used to train the models are proposed, as well as an overview of the most-used machine learning models. The considered models are: K-nearest neighbors, logistic regression, support vector machines, and artificial neural networks. Results show that, in general, all machine learning models are able to correctly predict the QoT of more than 90% of the lightpaths. However, the artificial neural network (ANN) model presents the best generalization, being able to correctly predict the QoT of almost 99.9% of the lightpaths. Moreover, the ANN is able to estimate the residual margin of a lightpath with an average error of only 0.4 dB.

111 citations
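The four model families the authors compare have off-the-shelf counterparts; the sketch below mimics that evaluation loop on synthetic data (the features and labels are illustrative, not the paper's knowledge database):

```python
# Compare the four model families considered in the paper on synthetic
# QoT data; the dataset here is illustrative.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))      # e.g., length, hops, power, modulation
y = (X @ np.array([1.5, -0.8, 0.6, -1.2]) + rng.normal(0, 0.5, 2000) > 0).astype(int)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=7),
    "LogReg": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=1),
}
for name, model in models.items():
    acc = cross_val_score(make_pipeline(StandardScaler(), model), X, y, cv=5)
    print(f"{name:7s} mean accuracy: {acc.mean():.3f}")
```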


Journal ArticleDOI
TL;DR: Simulation results are presented, showing the effectiveness of the TISSUE algorithm in properly exploiting OTC information to assess the BER performance of quadrature-phase-shift-keying-modulated signals, and the high accuracy of the FEELING algorithm in correctly detecting soft failures such as laser drift, filter shift, and tight filtering.
Abstract: In elastic optical networks (EONs), effective soft failure localization is of paramount importance for early detection of service level agreement violations while anticipating possible hard failure events. So far, failure localization techniques have been proposed and deployed mainly for hard failures, while significant work is still required to provide effective and automated solutions for soft failures, both during commissioning testing and in-operation phases. In this paper, we focus on soft failure localization in EONs by proposing two techniques for active monitoring during commissioning testing and for passive in-operation monitoring. The techniques rely on specifically designed low-cost optical testing channel (OTC) modules and on the widespread deployment of cost-effective optical spectrum analyzers (OSAs). The retrieved optical parameters are elaborated by machine learning-based algorithms running in the agent's node and in the network controller. In particular, the Testing optIcal Switching at connection SetUp timE (TISSUE) algorithm is proposed to localize soft failures by elaborating the estimated bit-error rate (BER) values provided by the OTC module. In addition, the FailurE causE Localization for optIcal NetworkinG (FEELING) algorithm is proposed to localize failures affecting a lightpath using OSAs. Extensive simulation results are presented, showing the effectiveness of the TISSUE algorithm in properly exploiting OTC information to assess the BER performance of quadrature-phase-shift-keying-modulated signals, and the high accuracy of the FEELING algorithm in correctly detecting soft failures such as laser drift, filter shift, and tight filtering.

99 citations


Journal ArticleDOI
TL;DR: This paper formulates the RSCA problem using a node-arc-based integer linear programming (ILP) method in which the numbers of both variables and constraints are greatly reduced compared with previous ILP methods, thereby leading to a significant improvement in convergence efficiency.
Abstract: In this paper, we focus on the static routing, spectrum, and core assignment (RSCA) problem in space-division multiplexing (SDM)-based elastic optical networks (EONs) with multi-core fiber (MCF). In RSCA problems, it is a challenging task to control the inter-core interference, called inter-core crosstalk (XT), within an acceptable level and simultaneously maximize the spectrum utilization. We first consider XT in a worst-case interference scenario (i.e., XT-unaware), which can simplify the RSCA problem. In this scenario, we formulate the RSCA problem using a node-arc-based integer linear programming (ILP) method in which the numbers of both variables and constraints are greatly reduced compared with previous ILP methods, thereby leading to a significant improvement in convergence efficiency. Then, we consider the XT strictly (i.e., XT-aware) and formulate the problem using a mixed integer linear programming (MILP) method, which is an extension of the above node-arc-based ILP method. It is more suitable for different XT thresholds and/or geographically large networks, in that it has a higher degree of generalizability. Finally, we propose an XT-aware-based heuristic algorithm. The simulation results demonstrate that our heuristic algorithm achieves higher spectrum efficiency, a higher degree of generalizability, and higher computational efficiency than the existing heuristic algorithm(s).

91 citations
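The paper's own heuristic is not reproduced here, but the general shape of an XT-aware assignment heuristic can be sketched as first fit over cores and spectrum slots with a crude crosstalk guard; the core layout, core priorities, and the XT proxy below are all illustrative assumptions:

```python
# First-fit core/spectrum assignment with a crude XT check, in the spirit
# of XT-aware RSCA heuristics. Adjacency and thresholds are hypothetical.
NUM_CORES, NUM_SLOTS = 7, 320
adjacent = {0: [1, 2, 3, 4, 5, 6], 1: [0, 2, 6], 2: [0, 1, 3],
            3: [0, 2, 4], 4: [0, 3, 5], 5: [0, 4, 6], 6: [0, 1, 5]}
used = [[False] * NUM_SLOTS for _ in range(NUM_CORES)]  # slot occupancy per core

def xt_ok(core, start, width, max_adjacent_active=2):
    """Accept only if few adjacent cores are lit on overlapping slots (XT proxy)."""
    for s in range(start, start + width):
        if sum(used[a][s] for a in adjacent[core]) > max_adjacent_active:
            return False
    return True

def assign(width):
    """Return (core, start_slot) via first fit, or None if blocked."""
    for core in sorted(adjacent, key=lambda c: len(adjacent[c])):  # outer cores first
        for start in range(NUM_SLOTS - width + 1):
            if not any(used[core][start:start + width]) and xt_ok(core, start, width):
                for s in range(start, start + width):
                    used[core][s] = True
                return core, start
    return None

print(assign(4))  # e.g., (1, 0): an outer core, lowest free slots
```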


Journal ArticleDOI
Emmanuel Seve, Jelena Pesic, Camille Delezoide, Sebastien Bigo, Yvan Pointurier
TL;DR: In this article, a machine learning algorithm was used to reduce the uncertainties on the input parameters of the QoT model, improving the accuracy of the SNR estimation with respect to new optical demands in a brownfield phase.
Abstract: In this paper, we propose to lower the network design margins by improving the estimation of the signal-to-noise ratio (SNR) given by a quality of transmission (QoT) estimator, for new optical demands in a brownfield phase, based on a mathematical model of the physics of propagation. During the greenfield phase and the network operation, we collect and correlate information on the QoT input parameters, issued from the established initial demands and available almost for free from the network elements: amplifier output powers and the SNR at the coherent receiver side. Since we have some uncertainties on these input parameters of the QoT model, we use a machine learning algorithm to reduce them, improving the accuracy of the SNR estimation. With this learning process and for a European backbone network (28 nodes, 41 links), we could reduce the QoT inaccuracy by several dBs for new demands, whatever the amount of uncertainty on the initial parameters.

85 citations


Journal ArticleDOI
TL;DR: This work addresses the relatively long setup latency and complicated network control and management caused by on-demand virtual network function service chain (vNF-SC) provisioning in inter-datacenter elastic optical networks with a provisioning framework designed as a discrete-time system.
Abstract: This work addresses the relatively long setup latency and complicated network control and management caused by on-demand virtual network function service chain (vNF-SC) provisioning in inter-datacenter elastic optical networks. We first design a provisioning framework with resource pre-deployment to resolve the aforementioned challenge. Specifically, the framework is designed as a discrete-time system, in which the operations are performed periodically in fixed time slots (TS). Each TS includes a pre-deployment phase followed by a provisioning phase. In the pre-deployment phase, a deep-learning (DL) model is designed to predict future vNF-SC requests, then lightpath establishment and vNF deployment are performed accordingly to pre-deploy resources for the predicted requests. Then, the system proceeds to the provisioning phase, which collects dynamic vNF-SC requests from clients and serves them in real time by steering their traffic through the required vNFs in sequence. In order to forecast the high-dimensional data of future vNF-SC requests accurately, we design our DL model based on a long short-term memory (LSTM) neural network and develop an effective training scheme for it. Then, the provisioning framework and DL model are optimized from several perspectives. We evaluate our proposed framework with simulations that leverage real traffic traces. The results indicate that our DL model achieves higher request prediction accuracy and lower blocking probability than two benchmarks that also predict vNF-SC requests and follow the principle of the proposed provisioning framework.

79 citations
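A minimal sketch of the request-prediction ingredient, using a Keras LSTM that maps a window of past demand vectors to the next time slot; the layer sizes, window length, and training data are illustrative, not the paper's model:

```python
# Minimal LSTM forecaster in the spirit of the paper's request prediction:
# given the last T time slots of vNF-SC demand vectors, predict the next
# slot. Dimensions and data are illustrative.
import numpy as np
import tensorflow as tf

T, F = 12, 8             # history length (time slots) and feature dimension
X = np.random.rand(1000, T, F).astype("float32")  # toy demand histories
y = X[:, -1, :] * 0.9 + 0.05                      # toy next-slot target

model = tf.keras.Sequential([
    tf.keras.Input(shape=(T, F)),
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(F),                     # predicted demand vector
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1], verbose=0).shape)      # (1, 8)
```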


Journal ArticleDOI
TL;DR: An analytical model for XT in bi-directional normal step-index and trench-assisted MCFs is presented, corresponding XT-aware core prioritization schemes are proposed, and XT-aware spectrum resource allocation strategies are developed, aimed at relieving the complexity of online XT computation.
Abstract: The rapid growth of traffic inside data centers caused by the increasing adoption of cloud services necessitates a scalable and cost-efficient networking infrastructure. Space-division multiplexing (SDM) is considered as a promising solution to overcome the optical network capacity crunch and support cost-effective network capacity scaling. Multi-core fiber (MCF) is regarded as the most feasible and efficient way to realize SDM networks, and its deployment inside data centers seems very likely as the issue of inter-core crosstalk (XT) is not severe over short link spans (<1 km) compared to that in long-haul transmission. However, XT can still have a considerable effect in MCF over short distances, which can limit the transmission reach and in turn the data center's size. XT can be further reduced by bi-directional transmission of optical signals in adjacent MCF cores. This paper evaluates the benefits of MCF-based SDM solutions in terms of maximizing the capacity and spatial efficiency of data center networks. To this end, we present an analytical model for XT in bi-directional normal step-index and trench-assisted MCFs and propose corresponding XT-aware core prioritization schemes. We further develop XT-aware spectrum resource allocation strategies aimed at relieving the complexity of online XT computation. These strategies divide the available spectrum into disjoint bands and incrementally add them to the pool of accessible resources based on the network conditions. Several combinations of core mapping and spectrum resource allocation algorithms are investigated for eight types of homogeneous MCFs comprising 7–61 cores, three different multiplexing schemes, and three data center network topologies with two traffic scenarios. Extensive simulation results show that combining bi-directional transmission in dense core fibers with tailored resource allocation schemes significantly increases the network capacity. Moreover, a multiplexing scheme that combines SDM and WDM can achieve up to 33 times higher link spatial efficiency and up to 300 times greater capacity compared to a WDM solution.

74 citations
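For context, a closed form commonly used in the SDM literature for the mean inter-core XT of a homogeneous MCF under unidirectional transmission is XT(L) = n(1 - e^{-(n+1)2hL}) / (1 + n e^{-(n+1)2hL}), where n is the number of adjacent cores and h the mean XT increase per unit length; the paper's contribution is extending such modeling to bi-directional and trench-assisted fibers. A worked example with illustrative parameters:

```python
# Worked example of a commonly used closed-form mean inter-core XT estimate
# for homogeneous MCFs (unidirectional case); the paper extends this kind of
# model to bi-directional and trench-assisted fibers. Values are illustrative.
import math

def mean_xt(n_adj, h, L):
    """Mean crosstalk from n_adj adjacent cores after fiber length L (m).

    h: mean XT increase per unit length (1/m), which depends on the coupling
    coefficient, bend radius, propagation constant, and core pitch.
    """
    e = math.exp(-(n_adj + 1) * 2 * h * L)
    return (n_adj - n_adj * e) / (1 + n_adj * e)

h = 1e-7                      # illustrative per-meter coupling (1/m)
for L in (500, 1000, 2000):   # short intra-data-center spans (m)
    xt = mean_xt(6, h, L)     # 7-core fiber: the center core has 6 neighbors
    print(f"L = {L:5d} m  XT = {10 * math.log10(xt):6.1f} dB")
```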


Journal ArticleDOI
TL;DR: Simulations reveal that Kingman's exponential law of congestion provides accurate estimates of such delays for the particular case of aggregating a number of evolved Common Public Radio Interface fronthaul flows, namely functional splits and IID.
Abstract: Enabling the transport of fronthaul traffic in next-generation cellular networks [fifth-generation (5G)] following the cloud radio access network (C-RAN) architecture requires a redesign of the fronthaul network featuring high capacity and ultra-low latency. With the aim of leveraging statistical multiplexing gains, infrastructure reuse, and, ultimately, cost reduction, the research community is focusing on Ethernet-based packet-switched networks. To this end, we propose using the high queuing delay percentiles of the G/G/1 queuing model as the key metric in fronthaul network dimensioning. Simulations reveal that Kingman's exponential law of congestion provides accurate estimates of such delays for the particular case of aggregating a number of evolved Common Public Radio Interface fronthaul flows, namely functional splits and IID. We conclude that conventional 10G, 40G, and 100G transponders can cope with multiple legacy 10–20 MHz radio channels with worst-case delay guarantees. Conversely, scaling to 40 and 100 MHz channels will require the introduction of 200G, 400G, and even 1T high-speed transponders.

71 citations
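Kingman's G/G/1 approximation itself is standard: the mean waiting time is E[W] ≈ ρ/(1-ρ) · (c_a² + c_s²)/2 · E[S], and treating the waiting time as approximately exponential (the "exponential law of congestion") gives the high percentiles used for dimensioning, W_q ≈ E[W]·ln(1/(1-q)). A worked example with illustrative fronthaul numbers, not the paper's traffic model:

```python
# Kingman's G/G/1 approximation for the mean waiting time, plus an
# exponential-tail percentile as used for dimensioning. Traffic numbers
# below are illustrative.
import math

def kingman_mean_wait(rho, ca2, cs2, mean_service):
    """rho: utilization; ca2/cs2: squared coefficients of variation of
    inter-arrival and service times; mean_service: E[S] in seconds."""
    return (rho / (1.0 - rho)) * ((ca2 + cs2) / 2.0) * mean_service

link_rate = 100e9                    # 100G Ethernet link (bit/s)
mean_pkt = 6000 * 8                  # mean fronthaul packet size (bits)
mean_service = mean_pkt / link_rate  # seconds per packet
rho, ca2, cs2 = 0.7, 1.0, 0.5

w_mean = kingman_mean_wait(rho, ca2, cs2, mean_service)
w_p999 = w_mean * math.log(1 / (1 - 0.999))   # ~99.9th percentile
print(f"mean wait ~ {w_mean * 1e9:.0f} ns, 99.9th percentile ~ {w_p999 * 1e9:.0f} ns")
```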


Journal ArticleDOI
TL;DR: The performance results show that the SGA initiated with connection demands sorted by increasing number of possible space and spectrum assignment layouts (SALs) achieves the best near-optimal solution for a small network experiment, and the ascending SAL number (ASN) policy shows the best performance for realistic networks.
Abstract: Space division multiplexing (SDM) and elastic optical networking (EON) have been proposed to increase the transmission capacity and flexibility of optical transport networks. The problem of allocating available resources over the EON network is called routing and spectrum assignment (RSA); it is called routing, modulation level, and spectrum assignment (RMLSA) if modulation adaptivity is enabled. Considering SDM adds more flexibility to the resource allocation problem. In this paper, we formulate the routing, modulation level, space, and spectrum assignment (RMLSSA) problem as integer linear programming (ILP) in a path-based manner for static traffic. Next, the stepwise greedy algorithm (SGA) and four different sorting policies to initiate the algorithm are proposed as a heuristic method to find a near-optimal solution of the RMLSSA problem. Finally, the paper evaluates the effectiveness of the sorting policies and the SGA algorithm with different metrics. The performance results show that the SGA initiated with connection demands sorted by increasing number of possible space and spectrum assignment layouts (SALs) achieves the best near-optimal solution for a small network experiment. Moreover, the ascending SAL number (ASN) policy shows the best performance for realistic networks.

Journal ArticleDOI
TL;DR: This paper demonstrates a switch-pluggable, 4.5 W, 100 Gbit/s, silicon-photonics-based, PAM4, QSFP-28 module to transport Ethernet data directly over DWDM for layer 2/3 connection between switches at data centers up to 120 km apart, thereby eliminating the need for a separate optical transport layer.
Abstract: In this paper we discuss the nature of and requirements for data center interconnects. We then demonstrate a switch-pluggable, 4.5 W, 100 Gbit/s, silicon-photonics-based, PAM4, QSFP-28 module to transport Ethernet data directly over DWDM for layer 2/3 connection between switches at data centers up to 120 km apart, thereby eliminating the need for a separate optical transport layer. The module, based on the direct detect modulation format, is of much reduced complexity, power, and cost compared to the coherent systems that are currently being deployed for this application.

Journal ArticleDOI
TL;DR: All the requirements and key performance indicators of a network to disaggregate IT resources are identified, the progress and importance of optical interconnects are summarized, and it is shown that the more diverse the VM requests are, the higher the net financial gain is.
Abstract: Disaggregated rack-scale data centers have been proposed as the only promising avenue to break the barrier of the fixed CPU-to-memory proportionality caused by main-tray direct-attached conventional/traditional server-centric systems. However, memory disaggregation has stringent network requirements in terms of latency, energy efficiency, bandwidth, and bandwidth density. This paper identifies all the requirements and key performance indicators of a network to disaggregate IT resources while summarizing the progress and importance of optical interconnects. Crucially, it proposes a rack-and-cluster scale architecture, which supports the disaggregation of CPU, memory, storage, and/or accelerator blocks. Optical circuit switching forms the core of this architecture, whereas the end-points (IT resources) are equipped with on-chip programmable hybrid electrical packet/circuit switches. This architecture offers a dynamically reconfigurable physical topology to form virtual ones, each embedded with a set of functions. It analyzes the latency overhead of disaggregated DDR4 (parallel) and the proposed hybrid memory cube (serial) memory elements on the conventional and the proposed architecture. A set of resource allocation algorithms are introduced to (1) optimally select disaggregated IT resources with the lowest possible latency, (2) pool them together by means of a virtual network interconnect, and (3) compose virtual disaggregated servers. Simulation findings show up to a 34% resource utilization increase over traditional data centers while highlighting the importance of the placement and locality among compute, memory, and storage resources. In particular, the network-aware locality-based resource allocation algorithm achieves as low as 15 ns, 95 ns, and 315 ns memory transaction round-trip latency on 63%, 22%, and 15% of the allocated virtual machines (VMs), respectively, while utilizing 100% of the CPU resources. Furthermore, a formulation to parameterize and evaluate the additional financial costs endured by disaggregation is reported. It is shown that the more diverse the VM requests are, the higher the net financial gain is. Finally, an experiment was carried out using silicon photonic midboard optics and an optical circuit switch, which demonstrates forward-error-correction-free 10−12 bit error rate performance on up to five-tier scale-out networks.

Journal ArticleDOI
TL;DR: The numerical results demonstrate that the proposed algorithms improve the system capacity and system fairness with fast convergence, and that the new power allocation algorithm provides faster convergence and better performance than the traditional subgradient method.
Abstract: In this paper, we propose and study a new joint load balancing (LB) and power allocation (PA) scheme for a hybrid visible light communication (VLC) and radio frequency (RF) system consisting of one RF access point (AP) and multiple VLC APs. An iterative algorithm is proposed to distribute users on APs and distribute the powers of the APs on their users. In the PA subproblem, an optimization problem is formulated to allocate the power of each AP to the connected users for total achievable data rate maximization. In this subproblem, we propose a new efficient algorithm that finds the optimal dual variables after formulating them in terms of each other. This new algorithm provides faster convergence and better performance than the traditional subgradient method. In addition, it does not depend on the step size or the initial values of the variables, as the subgradient method does. Then, we start with the user of the minimum data rate seeking another AP that offers a higher data rate for that user. Users with lower data rates continue reconnecting from one AP to another to balance the load, only if this travel increases the summation of the achievable data rates and enhances the system fairness. Two approaches are proposed to perform the joint PA and LB: a main approach that considers the exact interference information for all users, and a suboptimal approach that aims to decrease the complexity of the first approach by considering only the approximate interference information of users. The numerical results demonstrate that the proposed algorithms improve the system capacity and system fairness with fast convergence.

Journal ArticleDOI
TL;DR: In this paper, the authors report on the evolution of radio access network (RAN) equipment, including the advent of virtualization and an investigation of the required architecture and optical access technologies.
Abstract: Optical fiber is the required technology for radio access network (RAN) backhaul and fronthaul. We report on the evolution of RAN equipment, including the advent of virtualization and an investigation of the required architecture and optical access technologies.

Journal ArticleDOI
TL;DR: This paper estimates the linear and nonlinear signal-to-noise ratios (SNRs) from the received signal by extracting features of two distinct effects, nonlinear phase noise and second-order statistical moments, and feeding them to a small neural network trained to estimate the SNRs.
Abstract: Operators are pressured to maximize the achieved capacity over deployed links. This can be obtained by operating in the weakly nonlinear regime, requiring a precise understanding of the transmission conditions. Ideally, optical transponders should be capable of estimating the regime of operation from the received signal and feeding that information to the upper management layers to optimize the transmission characteristics; however, this estimation is challenging. This paper addresses this problem by estimating the linear and nonlinear signal-to-noise ratio (SNR) from the received signal. This estimation is performed by obtaining features of two distinct effects: nonlinear phase noise and second-order statistical moments. A small neural network is trained to estimate the SNRs from the extracted features. Over extensive simulations covering 19,800 sets of realistic fiber transmissions, we verified the accuracy of the proposed techniques. Employing both approaches simultaneously gave measured performances of 0.04 and 0.20 dB of standard error for the linear and nonlinear SNRs, respectively.

Journal ArticleDOI
TL;DR: This work proposes a novel hybrid DCN architecture based on distributed flow-controlled fast optical switches (FOS) and modified top-of-the-rack (TOR) switches (HiFOST) and investigates the performance of the HiFOST DCN with different TOR buffer sizes, optical link capacities, elastic allocation of transceivers, and network scales under realistic data center (DC) traffic.
Abstract: To solve the bandwidth and latency issues in current hierarchical data center network (DCN) architectures based on electrical switches, we propose a novel hybrid DCN architecture based on distributed flow-controlled fast optical switches (FOS) and modified top-of-the-rack (TOR) switches (HiFOST). The intra-cluster interconnection of HiFOST is built with FOS offering nanosecond wavelength switching for efficient statistical multiplexing operation, while the inter-cluster interconnection is connected by the TOR interfaces directly. Due to the lack of practical optical buffers, optical flow control is implemented to retransmit packets in case of contention. We investigate the performance of the HiFOST DCN with different TOR buffer sizes, optical link capacities, elastic allocation of transceivers, and network scales under realistic data center (DC) traffic. The results show an average server-to-server latency of less than 2.8 μs and a packet loss of <5.6 × 10−6 at a load of 0.5 for a DC size of 94,080 servers with a limited 50 KB TOR buffer. In addition, when scaling out the number of servers and scaling up the data rate of the connected servers, the cost and power consumption of the HiFOST DCN have been investigated and compared with the electrical Fat-Tree and Leaf-Spine DCN architectures, as well as with the optical H-LION and OPSquare DCN architectures. Results indicate that, for 94,080 servers operating at 10 Gb/s, HiFOST yields 48.2% and 34.1% cost savings and 46.3% and 32.5% power consumption savings with respect to Fat-Tree and Leaf-Spine, respectively. For a HiFOST DCN supporting 10,880 servers with the server data rate scaled up to 100 Gb/s, the HiFOST solution achieves cost savings of 35.6% and 34.1% and power consumption savings of 56.5% and 59.2% compared to Fat-Tree and Leaf-Spine, respectively.

Journal ArticleDOI
TL;DR: This paper proposes and numerically demonstrate a novel mobile fronthaul architecture based on functional split and time-division multiplexed (TDM) passive optical networks (PONs) with a unified mobile and PON scheduler known as Mobile-PON.
Abstract: To meet the capacity requirements of the exponential increase in mobile traffic and to continue to drive down the per-bit cost for mobile service providers, the cloud radio access network (C-RAN) has become a crucial step toward 5G. However, the current C-RAN architecture has some major drawbacks in terms of scalability, cost, and efficiency. In this paper, we propose and numerically demonstrate a novel mobile fronthaul architecture based on functional split and time-division multiplexed (TDM) passive optical networks (PONs) with a unified mobile and PON scheduler known as Mobile-PON. The optimal functional split distributes lower physical layer hardware toward remote radio sites. The new interface that divides remote radio processing from centralized baseband processing requires less bandwidth and also opens the possibility of sharing and multiplexing the bandwidth with multiple remote sites. Our combined mobile and PON scheduler is mainly based on the more complex wireless scheduling, which translates its results into the TDM-PON system through LTE resource block mapping, eliminating additional scheduling delay at the PON. Without the additional scheduling delay, the cost-effective TDM-PON becomes applicable for mobile fronthaul, while the optimal fronthaul interface increases bandwidth efficiency by ∼10× over CPRI.

Journal ArticleDOI
TL;DR: This work attempts to localize single-link failures by utilizing statistical machine learning techniques trained on data that describe the network state upon current and past failure incidents; in particular, a Gaussian process classifier is trained on historical data extracted from the examined network.
Abstract: In this work we consider the problem of fault localization in transparent optical networks. We attempt to localize single-link failures by utilizing statistical machine learning techniques trained on data that describe the network state upon current and past failure incidents. In particular, a Gaussian process classifier is trained on historical data extracted from the examined network, with the goal of modeling and predicting the failure probability of each link therein. To limit the set of suspect links for every failure incident, the proposed approach is complemented by the utilization of a graph-based correlation heuristic. The proposed approach is tested on a number of datasets generated for an orthogonal frequency-division multiplexing-based optical network and is shown to achieve a high localization accuracy (91%–99%) that is only insignificantly affected as the size of the historical dataset is reduced. The approach is also compared to a conventional fault localization method that is based on the utilization of monitoring information. It is shown that the conventional method significantly increases the network cost, as measured by the number of monitoring nodes required to achieve the same accuracy as that achieved by the proposed approach. The proposed scheme can be used by service providers to reduce the network cost related to the fault localization procedure. As the approach is generic and does not depend on specific network technologies, it can be applied to different network types, e.g., fixed-grid or space-division multiplexing elastic optical networks.
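A minimal sketch of the core ingredient, a Gaussian process classifier over network-state snapshots, using scikit-learn; the features, labels, and shortlisting step below are illustrative, and the paper additionally pairs the classifier with a graph-based correlation heuristic:

```python
# Sketch of the paper's core idea: a Gaussian process classifier trained on
# historical failure incidents predicts per-link failure probabilities.
# The network-state features and labels below are synthetic.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
# Each row: a network-state snapshot at a failure incident; label: failed link.
X = rng.normal(size=(300, 6))
y = rng.integers(0, 4, 300)            # toy: which of 4 links failed

gpc = GaussianProcessClassifier(kernel=1.0 * RBF(1.0)).fit(X, y)
proba = gpc.predict_proba(X[:1])[0]    # per-link failure probabilities
suspects = np.argsort(proba)[::-1][:2] # shortlist the most probable links
print(dict(links=suspects.tolist(), probs=proba[suspects].round(2).tolist()))
```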

Journal ArticleDOI
TL;DR: Two applications of machine learning in the context of internet protocol (IP)/optical networks are described, one of which allows agile management of resources in a core IP/optical network by using machine learning for short-term and long-term prediction of traffic flows and joint global optimization of IP and optical layers using colorless/directionless (CD) reconfigurable optical add-drop multiplexers (ROADMs).
Abstract: We describe two applications of machine learning in the context of internet protocol (IP)/optical networks. The first one allows agile management of resources in a core IP/optical network by using machine learning for short-term and long-term prediction of traffic flows. It also allows joint global optimization of IP and optical layers using colorless/directionless (CD) reconfigurable optical add-drop multiplexers (ROADMs). Multilayer coordination allows for significant cost savings, flexible new services to meet dynamic capacity needs, and improved robustness by being able to proactively adapt to new traffic patterns and network conditions. The second application is important as we migrate our networks to Open ROADM networks to allow physical routing without the need for detailed knowledge of optical parameters. We discuss a proof-of-concept study, where detailed performance data for established wavelengths in an existing ROADM network are used for machine learning to predict the optical performance of each wavelength. Both applications can be efficiently implemented by using a software-defined network controller.

Journal ArticleDOI
TL;DR: A dynamic lightpath routing algorithm called adaptive routing with back-to-back regeneration (ARBR) is developed and evaluated in comparison to other reference methods; simulation results show that the proposed algorithm outperforms the reference ones in terms of the BBP metric.
Abstract: We focus on dynamic lightpath provisioning in translucent spectrally spatially flexible optical networks (SS-FONs) operating with multi-core fibers and realizing spectral super-channel transmission, in which flexible signal regeneration achieved with transceivers operating in back-to-back (B2B) configurations and modulation conversion is allowed. For optimized allocation of limited spectrum and transceiver resources, we develop a dynamic lightpath routing algorithm called adaptive routing with back-to-back regeneration (ARBR), which we evaluate in comparison to other reference methods. Using the ARBR algorithm, we study potential performance gains in terms of bandwidth blocking probability (BBP) in such flexible network scenarios. To this end, we analyze three alternative scenarios that differ in the way in which dynamic translucent lightpath connections are provisioned, namely a reference scenario in which the use of regenerators is minimized and the modulation conversion is not allowed, and two other scenarios with intentional B2B regeneration. The results of extensive simulation experiments run on two representative network topologies show that the proposed algorithm outperforms the reference ones in terms of the BBP metric. Moreover, the fully flexible B2B regeneration with modulation conversion can be beneficial in terms of both spectrum and transceiver resource utilization, resulting in lower BBP than other scenarios.

Journal ArticleDOI
David Côté1
TL;DR: This paper presents results from a concrete application using unsupervised machine learning in a real network, showing how it can detect anomalies at multiple network layers, including the optical layer, and how it can be trained to identify the root cause of each anomaly.
Abstract: In this paper, we first review how the main machine learning concepts can apply to communication networks. Then we present results from a concrete application using unsupervised machine learning in a real network. We show how the application can detect anomalies at multiple network layers, including the optical layer, how it can be trained to anticipate anomalies before they become a problem, and how it can be trained to identify the root cause of each anomaly. Finally, we elaborate on the importance of this work and speculate about the future of intelligent adaptive networks.
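The paper does not tie itself to one algorithm, but the anomaly-detection idea can be sketched with an isolation forest over per-interval telemetry; the algorithm choice and the feature set here are assumptions for illustration:

```python
# Minimal unsupervised anomaly detection in the spirit of the paper, using
# an isolation forest over per-interval telemetry. The algorithm choice and
# features (e.g., pre-FEC BER, Rx power, temperature) are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)
normal = rng.normal(loc=[0.0, -2.0, 35.0], scale=[0.2, 0.3, 1.0], size=(500, 3))
drifted = np.array([[1.5, -2.1, 34.8], [0.1, -5.0, 35.2]])  # injected anomalies
telemetry = np.vstack([normal, drifted])

model = IsolationForest(contamination=0.01, random_state=3).fit(telemetry)
flags = model.predict(telemetry)       # -1 = anomaly, 1 = normal
print("flagged rows:", np.where(flags == -1)[0])
```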

Journal ArticleDOI
TL;DR: A monitoring and data analytics architecture consisting of centralized data storage with data analytics capabilities, together with a generic node agent for monitoring/telemetry supporting disaggregation, is presented and a YANG data model that allows one to clearly separate responsibilities for monitoring configuration from node configuration is proposed.
Abstract: Focused on reducing capital expenditures by opening the data plane to multiple vendors without impacting performance, node disaggregation is attracting the interest of network operators. Although the software-defined networking (SDN) paradigm is key for the control of such networks, the increased complexity of multilayer networks strictly requires monitoring/telemetry and data analytics capabilities to assist in creating and operating self-managed (autonomic) networks. Such autonomicity greatly reduces operational expenditures, while improving network performance. In this context, a monitoring and data analytics (MDA) architecture consisting of centralized data storage with data analytics capabilities, together with a generic node agent for monitoring/telemetry supporting disaggregation, is presented. A YANG data model that allows one to clearly separate responsibilities for monitoring configuration from node configuration is also proposed. The MDA architecture and YANG data models are experimentally demonstrated through three different use cases: i) virtual link creation supported by an optical connection, where monitoring is automatically activated; ii) multilayer self-configuration after bit error rate (BER) degradation detection, where a modulation format adaptation is recommended for the SDN controller to minimize errors (this entails reducing the capacity of both the virtual link and supported multiprotocol label switching-transport profile (MPLS-TP) paths); and iii) optical layer self-healing, including failure localization at the optical layer to find the cause of BER degradation. A combination of active and passive monitoring procedures allows one to localize the cause of the failure, leading to lightpath rerouting recommendations toward the SDN controller avoiding the failing element(s).

Journal ArticleDOI
TL;DR: The design and implementation of Topanga is presented, a packet switch optoASIC with the requisite technologies to free datacenter topologies from constraints induced by limited electrical reach, which provides optical I/O with lower energy per bit at lower cost per port than existing solutions.
Abstract: Breakthroughs in silicon photonics are changing the economics of network hardware and are enabling new directions in computer networking, compelling us to reexamine our previous assumptions regarding computer and network architectures. One such area that bears further study is the choice of network topology in a world where the distance between nodes is largely irrelevant, and where there is no inherent economic advantage to spatial locality. Network architects should be able to select topologies based on their merits and not merely from tradition. Combined with relaxed system-level packaging constraints, silicon photonics will likely lead to future datacenter architectures that look very different from today’s solutions. We present the design and implementation of Topanga, a packet switch optoASIC with the requisite technologies to free datacenter topologies from constraints induced by limited electrical reach. By means of our platform for integrating silicon photonics inside the package, Topanga provides optical I/O with lower energy per bit at lower cost per port than existing solutions.

Journal ArticleDOI
TL;DR: The results show that the TWR scheme almost doubles the network ergodic capacity compared to that of the OWR scheme with the same outage performance, and it is shown that under weak-to-moderate weather turbulence conditions and small pointing error, the outage probability is dominated by the RF downlink, with a negligible effect of the user selection process at the RF uplink transmission.
Abstract: In this paper, the performance of two-way multiuser mixed radio frequency/free space optical (RF/FSO) relay networks with opportunistic user scheduling and asymmetric channel fading is studied. RF links are used to conduct data transmission between users and the relay node, while an FSO link is used to conduct data transmission on the last-mile communication link between the relay node and the base station. The RF links are assumed to follow a Rayleigh fading model, while the FSO links are assumed to follow a unified Gamma–Gamma atmospheric turbulence fading model with pointing error. First, closed-form expressions for the exact outage probability, asymptotic (high signal-to-noise ratio) outage probability, average symbol error rate, and average ergodic channel capacity are derived assuming a heterodyne detection scheme. The asymptotic results are used to conduct a power optimization algorithm where expressions for optimal transmission power values for the transmitting nodes are provided. Additionally, performance comparisons between the considered two-way-relaying (TWR) network and the one-way-relaying (OWR) network are provided and discussed. Also, the impact of several system parameters, including the number of users, pointing errors, atmospheric turbulence conditions, and outage probability threshold, on the overall network performance is investigated. All the theoretical results are validated by Monte Carlo simulations. The results show that the TWR scheme almost doubles the network ergodic capacity compared to that of the OWR scheme with the same outage performance. Additionally, it is shown that under weak-to-moderate weather turbulence conditions and small pointing error, the outage probability is dominated by the RF downlink, with a negligible effect of the user selection process at the RF uplink transmission. However, for severe pointing error, the outage probability is dominated by the FSO uplink/downlink transmission.
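The paper's closed-form TWR expressions are not reproduced here, but one of their building blocks, the outage probability of a single Rayleigh-faded RF link, is textbook: the instantaneous SNR is exponentially distributed, so P_out = 1 - exp(-γ_th/γ̄). A quick Monte Carlo check of that ingredient:

```python
# Monte Carlo check of one building block of such analyses: the outage
# probability of a single Rayleigh-faded RF link, whose instantaneous SNR
# is exponentially distributed, so P_out = 1 - exp(-snr_th / snr_avg).
import numpy as np

rng = np.random.default_rng(4)
snr_avg_db, snr_th_db = 15.0, 5.0
snr_avg, snr_th = 10 ** (snr_avg_db / 10), 10 ** (snr_th_db / 10)

snr = rng.exponential(scale=snr_avg, size=1_000_000)  # Rayleigh fading -> exp SNR
p_mc = np.mean(snr < snr_th)
p_cf = 1 - np.exp(-snr_th / snr_avg)
print(f"Monte Carlo: {p_mc:.4f}   closed form: {p_cf:.4f}")
```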

Journal ArticleDOI
Yaoqiang Xiao, Zhiyi Wang, Jun Cao, Rui Deng, Yi Liu, Jing He, Lin Chen
TL;DR: A time-frequency domain encryption technique based on multi-chaotics for physical layer security and selected mapping (SLM) for peak-to-average-power ratio (PAPR) reduction is proposed and experimentally demonstrated in an OFDM-PON system.
Abstract: The security of orthogonal-frequency-division-multiplexing-based passive optical network (OFDM-PON) systems has recently become an important concern. In this paper, a time-frequency domain encryption technique based on multi-chaotics for physical layer security and selected mapping (SLM) for peak-to-average-power ratio (PAPR) reduction is proposed and experimentally demonstrated in an OFDM-PON system. The proposed scheme is based on Lozi and Logistic maps and can generate chaotic sequences to scramble the subcarriers in both the time and frequency domains for enhanced physical layer security. Meanwhile, an SLM method can be applied in the scheme to improve the PAPR performance of the system. In the experiment, an 8.9 Gb/s encrypted OFDM signal has been securely transmitted over 100 km of standard single-mode fiber. The results show that the proposed method can effectively enhance the physical layer security and simultaneously improve the bit-error-rate performance of the system.
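A minimal sketch of one stage of such a scheme: deriving a subcarrier permutation from a Logistic-map orbit, with the map's initial condition and parameter acting as the shared key. The paper's Lozi-map stage, time-domain scrambling, and SLM step are omitted, and the key values below are illustrative:

```python
# Chaotic subcarrier scrambling with a logistic map x <- r*x*(1-x); the
# (x0, r) pair acts as the shared key. Sizes and key values are illustrative.
import numpy as np

def logistic_permutation(n, x0=0.3141, r=3.9999, burn_in=1000):
    """Derive a permutation of n subcarriers from a logistic-map orbit:
    iterates are sorted and their ranks give the scrambling order, which
    the receiver regenerates from the same key to descramble."""
    x = x0
    for _ in range(burn_in):          # discard the transient
        x = r * x * (1 - x)
    orbit = np.empty(n)
    for i in range(n):
        x = r * x * (1 - x)
        orbit[i] = x
    return np.argsort(orbit)

n_sc = 64
perm = logistic_permutation(n_sc)
symbols = np.arange(n_sc)             # stand-in for one OFDM symbol's subcarriers
scrambled = symbols[perm]
descrambled = np.empty(n_sc, dtype=int)
descrambled[perm] = scrambled         # inverse permutation at the receiver
assert (descrambled == symbols).all()
print(scrambled[:8])
```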

Journal ArticleDOI
TL;DR: For the first time, to the best of the authors' knowledge, a fronthaul network for providing simultaneous 4G and 5G services by propagating LTE signals in coexistence with UF-OFDM is demonstrated.
Abstract: Fifth generation (5G) mobile communications will require a dense deployment of small cell antenna sites and higher channel bandwidth, in conjunction with a cloud radio access network (C-RAN) architecture. This necessitates a low-latency and high-capacity architecture in addition to energy- and cost-efficient fronthaul links. An efficient way of achieving such connectivity is to make use of an optical-fiber-based infrastructure where multiple wireless services may be distributed over the same fiber to remote radio head (RRH) sites. In this work, we demonstrate the spectral containment of fourth generation (4G) Long-Term Evolution (LTE) signals and 5G candidate waveforms, namely generalized frequency division multiplexing and universally filtered orthogonal frequency division multiplexing (UF-OFDM), through a directly modulated link. Seventy-five bands of LTE and 10 bands of 5G waveforms are successfully transmitted over a 25 km analog intermediate frequency signal over fiber (AIFoF) link through our setup, limited only by the bandwidth of the laser. For the first time, to the best of our knowledge, we demonstrate a fronthaul network for providing simultaneous 4G and 5G services by propagating LTE signals in coexistence with UF-OFDM.

Journal ArticleDOI
TL;DR: Simulation results show that ML-based prediction and initial setup times (history) of traffic flows can be used to further improve connection blocking and resource utilization in space-division multiplexed ODCNs.
Abstract: Traffic prediction and utilization of past information are essential requirements for intelligent and efficient management of resources, especially in optical data center networks (ODCNs), which serve diverse applications. In this paper, we consider the problem of traffic aggregation in ODCNs by leveraging the predictable or exact knowledge of application-specific information and requirements, such as holding time, bandwidth, traffic history, and latency. As ODCNs serve diverse flows (e.g., long/elephant and short/mice), we utilize machine learning (ML) for prediction of time-varying traffic and connection blocking in ODCNs. Furthermore, with the predicted mean service time, the passed time is utilized to estimate the mean residual life (MRL) of an active flow (connection). The MRL information is used for dynamic traffic aggregation while allocating resources to a new connection request. Additionally, the blocking rate is predicted for a future time interval based on the predicted traffic and past blocking information, which is used to trigger a spectrum reallocation process (also called defragmentation) to reduce spectrum fragmentation resulting from the dynamic connection setup and tearing-down scenarios. Simulation results show that ML-based prediction and initial setup times (history) of traffic flows can be used to further improve connection blocking and resource utilization in space-division multiplexed ODCNs.
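The MRL step is simple to state: for an active flow with elapsed time t and holding time T, MRL(t) = E[T - t | T > t]. A sketch of an empirical estimator over a history of holding times; the sample distribution is illustrative, and the paper obtains the mean service time from ML prediction rather than raw history alone:

```python
# Empirical mean-residual-life (MRL) estimate: given a flow's elapsed time t
# and a history of observed holding times, MRL(t) = E[T - t | T > t].
# Holding-time samples here are synthetic.
import numpy as np

rng = np.random.default_rng(5)
history = rng.lognormal(mean=3.0, sigma=0.8, size=10_000)  # past holding times (s)

def mrl(elapsed, samples):
    survivors = samples[samples > elapsed]
    return np.nan if survivors.size == 0 else float(np.mean(survivors - elapsed))

for t in (5.0, 20.0, 60.0):
    print(f"elapsed {t:5.1f} s -> expected residual life {mrl(t, history):6.1f} s")
```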

Journal ArticleDOI
TL;DR: A comprehensive analysis of all-optical relaying free space optical systems is proposed, accounting for all main noise sources, including background, thermal, and amplified spontaneous emission noise, and considering the effect of the optical degree of freedom.
Abstract: In this paper, we propose a comprehensive analysis of all-optical relaying free space optical (FSO) systems in the presence of all main noise sources, including background, thermal, and amplified spontaneous emission noise, by considering the effect of the optical degree-of-freedom. Using full channel-state information (CSI) and semi-blind CSI relaying, we derive closed-form expressions for the ergodic capacity, outage capacity, and outage probability of the considered dual-hop FSO system. Numerical and analytical simulation results are provided to verify the accuracy of the proposed mathematical analysis. To simplify analytical expressions of full CSI relaying, we also propose and analyze the validity of different commonly used approximations in the context of dual-hop FSO communications.

Journal ArticleDOI
TL;DR: A deep-neural-network-based machine learning method is presented to predict the power dynamics of a 90-channel ROADM system from data collection and training and it is shown that the trained deep neural network can recommend wavelength assignments for wavelength switching with minimal power excursions.
Abstract: Recent advances in software and hardware greatly improve the multi-layer control and management of reconfigurable optical add-drop multiplexer (ROADM) systems facilitating wavelength switching. However, ensuring stable performance and reliable quality of transmission (QoT) remain difficult problems for dynamic operation. Optical power dynamics that arise from a variety of physical effects in the amplifiers and transmission fiber complicate the control and performance predictions in these systems. We present a deep-neural-network-based machine learning method to predict the power dynamics of a 90-channel ROADM system from data collection and training. We further show that the trained deep neural network can recommend wavelength assignments for wavelength switching with minimal power excursions.