Journal ArticleDOI

Design of Low-Margin Optical Networks

Yvan Pointurier
01 Jan 2017 - Journal of Optical Communications and Networking (Optical Society of America) - Vol. 9, Iss. 1
TL;DR: Techniques that the network designer can use to increase the capacity of optical networks, extend their life, and decrease deployment cost (CAPEX) or total cost of ownership over their lifetime are reviewed.
Abstract: We review margins used in optical networks and revisit a previously proposed margin taxonomy. For each category of margins, we review techniques that the network designer can use to increase the capacity of optical networks, extend their life, and decrease deployment cost (CAPEX) or total cost of ownership over their lifetime. Greenfield techniques (for new network deployments) and brownfield techniques (applied after initial network deployment) are discussed. The technologies needed to leverage the margins and achieve the aforementioned gains are also reviewed, along with the associated challenges.
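To make the margin categories concrete, the sketch below (an illustrative simplification with assumed numbers, not a formulation from the paper) stacks system, design, and unallocated margins on top of an assumed FEC-limit SNR to decide whether a candidate lightpath closes:

```python
# Minimal sketch of an SNR margin budget for a candidate lightpath.
# All dB figures are illustrative placeholders, not values from the paper.

def lightpath_closes(estimated_snr_db: float,
                     fec_limit_snr_db: float = 12.0,     # SNR at the FEC threshold (assumed)
                     system_margin_db: float = 2.0,      # aging, repairs, fast penalties (assumed)
                     design_margin_db: float = 1.0,      # QoT-model inaccuracy (assumed)
                     unallocated_margin_db: float = 0.5  # leftover headroom (assumed)
                     ) -> bool:
    """Return True if the estimated SNR still exceeds the FEC limit
    once every margin category has been reserved."""
    required_snr_db = (fec_limit_snr_db + system_margin_db
                       + design_margin_db + unallocated_margin_db)
    return estimated_snr_db >= required_snr_db

print(lightpath_closes(16.2))  # True: 16.2 dB >= 15.5 dB required
print(lightpath_closes(15.1))  # False: the margin stack is not met
```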
Citations
Journal ArticleDOI
TL;DR: This manuscript discusses the motivations for jointly utilizing transmission techniques such as probabilistic shaping and digital sub-carrier multiplexing in digital coherent optical transmission systems and describes the key building blocks of modern high-speed DSP-based transponders working at up to 800G per wave.
Abstract: The design of application-specific integrated circuits (ASIC) is at the core of modern ultra-high-speed transponders employing advanced digital signal processing (DSP) algorithms. This manuscript discusses the motivations for jointly utilizing transmission techniques such as probabilistic shaping and digital sub-carrier multiplexing in digital coherent optical transmission systems. First, we describe the key building blocks of modern high-speed DSP-based transponders working at up to 800G per wave. Second, we show the benefits of these transmission methods in terms of system-level performance. Finally, we report, to the best of our knowledge, the first long-haul experimental transmission (i.e., over 1000 km) with a real-time 7 nm DSP ASIC and digital coherent optics (DCO) capable of data rates up to 1.6 Tb/s using two waves (2 × 800G).
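As a side note on one of the techniques named above, the snippet below is a minimal, assumption-laden sketch of probabilistic constellation shaping on 16-QAM (a Maxwell-Boltzmann prior over the constellation points), showing the basic entropy-versus-energy trade-off; it is not the transponder DSP described in the paper:

```python
# Probabilistic shaping sketch: a Maxwell-Boltzmann prior over 16-QAM points
# trades entropy (net bits per symbol) against average symbol energy.
import numpy as np

def shaped_16qam(nu: float):
    """Return (entropy in bit/symbol, average symbol energy) for a
    Maxwell-Boltzmann-shaped 16-QAM constellation with shaping factor nu."""
    levels = np.array([-3.0, -1.0, 1.0, 3.0])
    points = np.array([i + 1j * q for i in levels for q in levels])
    p = np.exp(-nu * np.abs(points) ** 2)
    p /= p.sum()
    entropy = -np.sum(p * np.log2(p))
    avg_energy = np.sum(p * np.abs(points) ** 2)
    return entropy, avg_energy

for nu in (0.0, 0.05, 0.1):        # nu = 0 recovers uniform 16-QAM (4 bit/symbol)
    h, e = shaped_16qam(nu)
    print(f"nu={nu:.2f}: {h:.2f} bit/symbol, mean energy {e:.2f}")
```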

181 citations


Cites background from "Design of Low-Margin Optical Networ..."

  • ...because it can postpone costly deployment of new fibers by efficiently utilizing the existing optical fiber infrastructures [7], [14]....


Journal ArticleDOI
TL;DR: An ML classifier is investigated that predicts whether the bit error rate of unestablished lightpaths meets the required system threshold based on traffic volume, desired route, and modulation format.
Abstract: Predicting the quality of transmission (QoT) of a lightpath prior to its deployment is a step of capital importance for an optimized design of optical networks. Due to the continuous advances in optical transmission, the number of design parameters available to system engineers (e.g., modulation formats, baud rate, code rate, etc.) is growing dramatically, thus significantly increasing the alternative scenarios for lightpath deployment. As of today, existing (pre-deployment) estimation techniques for lightpath QoT belong to two categories: “exact” analytical models estimating physical-layer impairments, which provide accurate results but incur heavy computational requirements, and margined formulas, which are computationally faster but typically introduce high link margins that lead to underutilization of network resources. In this paper, we explore a third option, i.e., machine learning (ML), as ML techniques have already been successfully applied for optimization and performance prediction of complex systems where analytical models are hard to derive and/or numerical procedures impose a high computational burden. We investigate an ML classifier that predicts whether the bit error rate of unestablished lightpaths meets the required system threshold based on traffic volume, desired route, and modulation format. The classifier is trained and tested on synthetic data, and its performance is assessed over different network topologies and for various combinations of classification features. Results in terms of classifier accuracy are promising and motivate further investigation over real field data.
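The sketch below illustrates the flavor of such a classifier on synthetic data; the features, the toy ground-truth rule, and the random-forest choice are assumptions for illustration, not the authors' dataset or model:

```python
# Toy QoT classifier: predict whether a candidate lightpath's BER meets the
# threshold from simple features (route length, hop count, modulation, volume).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
length_km = rng.uniform(100, 3000, n)        # route length
n_hops = rng.integers(1, 10, n)              # number of traversed links
mod_bits = rng.choice([2, 4, 6], n)          # QPSK / 16-QAM / 64-QAM bits per symbol
traffic_gbps = rng.uniform(100, 400, n)      # requested volume

# Toy ground truth: longer routes and higher-order formats fail more often.
score = 20 - 0.004 * length_km - 0.5 * n_hops - 1.5 * (mod_bits - 2)
ber_ok = (score + rng.normal(0, 1, n)) > 10

X = np.column_stack([length_km, n_hops, mod_bits, traffic_gbps])
X_tr, X_te, y_tr, y_te = train_test_split(X, ber_ok, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```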

163 citations


Cites background from "Design of Low-Margin Optical Networ..."

  • ...The randomization of the latter parameter accounts for the unpredictability of fast time-varying penalties (such as polarization effects [3])....


  • ..., simplified power budget with nonlinear-impairment estimations based on a Gaussian model [2]) introduce higher link margins in the calculation of the lightpath budget to compensate for model inaccuracies, thus leading to an underutilization of network resources [3]....


Journal ArticleDOI
TL;DR: In this paper, the authors analyze several failure causes affecting the quality of optical connections and propose two different algorithms: one focused on detecting significant bit error rate (BER) changes in optical connections, named BANDO, and the other focused on identifying the most probable failure pattern, named LUCIDA.
Abstract: Optical connections support virtual links in MPLS-over-optical multilayer networks; therefore, errors in the optical layer impact the quality of the services deployed on such networks. Monitoring the performance of the physical layer allows verifying the proper operation of optical connections, as well as detecting bit error rate (BER) degradations and anticipating connection disruption. In addition, failure identification facilitates localizing the cause of a failure by providing a short list of potentially failed elements and enables self-decision making to keep the committed service level. In this paper, we analyze several failure causes affecting the quality of optical connections and propose two different algorithms: one focused on detecting significant BER changes in optical connections, named BANDO, and the other focused on identifying the most probable failure pattern, named LUCIDA. BANDO runs inside the network nodes to accelerate degradation detection and sends notifications to the LUCIDA algorithm running on the centralized controller. Experimental measurements were carried out on two different setups to obtain BER and received-power values, which were then used to generate synthetic data for subsequent simulations. Results show a significant improvement in anticipating maximum BER violations, with small failure-identification errors.
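For intuition only, the sketch below flags BER degradations by comparing a smoothed pre-FEC BER against a commissioning baseline; it borrows the monitoring idea but is not a reimplementation of BANDO, and the threshold factor and window are assumptions:

```python
# Toy BER-degradation detector: flag when the windowed mean of monitored
# pre-FEC BER samples exceeds a fixed multiple of the commissioning baseline.
from collections import deque

class BerDegradationDetector:
    def __init__(self, baseline_ber: float, factor: float = 5.0, window: int = 10):
        self.baseline = baseline_ber
        self.factor = factor
        self.samples = deque(maxlen=window)

    def update(self, ber_sample: float) -> bool:
        """Feed one monitored BER sample; return True if degradation is flagged."""
        self.samples.append(ber_sample)
        smoothed = sum(self.samples) / len(self.samples)
        return smoothed > self.factor * self.baseline

det = BerDegradationDetector(baseline_ber=1e-5)
for t, ber in enumerate([1.1e-5, 0.9e-5, 1.0e-5, 6e-5, 8e-5, 9e-5, 1.2e-4, 1.5e-4]):
    if det.update(ber):
        print(f"degradation flagged at sample {t}, BER={ber:.1e}")
```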

100 citations


Cites background from "Design of Low-Margin Optical Networ..."

  • ...Finally, as stated in [6], failure localization or quality of transmission estimators [16] based on monitoring information typically require link-level characteristics while coherent receivers provide end-to-end information....


  • ...Monitoring is attracting increasing interest for several reasons such as: i) the reduction of system margins (which results in reduced capital expenditures) might induce more frequent degradations at the optical layer [6], [7]; ii) a more accurate estimation of the quality of transmission and an optimization of transmission parameters, routing, and spectrum assignment [8]....


Journal ArticleDOI
TL;DR: Simulation results are presented, showing the effectiveness of the TISSUE algorithm in properly exploiting OTC information to assess the BER performance of quadrature-phase-shift-keying-modulated signals, and the high accuracy of the FEELING algorithm in correctly detecting soft failures such as laser drift, filter shift, and tight filtering.
Abstract: In elastic optical networks (EONs), effective soft failure localization is of paramount importance for early detection of service level agreement violations while anticipating possible hard failure events. So far, failure localization techniques have been proposed and deployed mainly for hard failures, while significant work is still required to provide effective and automated solutions for soft failures, both during the commissioning testing and in-operation phases. In this paper, we focus on soft failure localization in EONs by proposing two techniques: one for active monitoring during commissioning testing and one for passive in-operation monitoring. The techniques rely on specifically designed low-cost optical testing channel (OTC) modules and on the widespread deployment of cost-effective optical spectrum analyzers (OSAs). The retrieved optical parameters are processed by machine learning-based algorithms running in the node agents and in the network controller. In particular, the Testing optIcal Switching at connection SetUp timE (TISSUE) algorithm is proposed to localize soft failures by elaborating the estimated bit-error rate (BER) values provided by the OTC module. In addition, the FailurE causE Localization for optIcal NetworkinG (FEELING) algorithm is proposed to localize failures affecting a lightpath using OSAs. Extensive simulation results are presented, showing the effectiveness of the TISSUE algorithm in properly exploiting OTC information to assess the BER performance of quadrature-phase-shift-keying-modulated signals, and the high accuracy of the FEELING algorithm in correctly detecting soft failures such as laser drift, filter shift, and tight filtering.
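To illustrate the kind of spectral features such OSA-based localization can build on, the sketch below extracts a central-frequency offset and a -3 dB bandwidth from a toy spectrum and maps them to coarse failure hints; it is not the FEELING algorithm, and all thresholds are assumptions:

```python
# Toy spectrum-based soft-failure hints from an OSA trace.
import numpy as np

def spectrum_features(freq_ghz: np.ndarray, power_mw: np.ndarray):
    """Return (spectral centroid, -3 dB bandwidth) of a channel trace."""
    centroid = np.sum(freq_ghz * power_mw) / np.sum(power_mw)
    above_half = freq_ghz[power_mw >= power_mw.max() / 2]
    return centroid, above_half.max() - above_half.min()

def classify(centroid_offset_ghz: float, bw_3db_ghz: float,
             nominal_bw_ghz: float = 35.0) -> str:
    if abs(centroid_offset_ghz) > 5.0:
        return "possible laser drift / filter shift"
    if bw_3db_ghz < 0.7 * nominal_bw_ghz:
        return "possible tight filtering"
    return "no anomaly"

# Toy trace: a Gaussian-shaped channel shifted by +6 GHz from its grid centre.
f = np.linspace(-50, 50, 1001)
p = np.exp(-((f - 6.0) / 15.0) ** 2)
centroid, bw = spectrum_features(f, p)
print(classify(centroid, bw))   # -> possible laser drift / filter shift
```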

99 citations

Journal ArticleDOI
Emmanuel Seve, Jelena Pesic, Camille Delezoide, Sebastien Bigo, Yvan Pointurier
TL;DR: In this article, a machine learning algorithm was used to reduce the uncertainties on the input parameters of the QoT model, improving the accuracy of the SNR estimation for new optical demands in a brownfield phase.
Abstract: In this paper, we propose to lower the network design margins by improving the estimation of the signal-to-noise ratio (SNR) given by a quality of transmission (QoT) estimator, for new optical demands in a brownfield phase, based on a mathematical model of the physics of propagation. During the greenfield phase and network operation, we collect and correlate information on the QoT input parameters, issued from the established initial demands and available almost for free from the network elements: amplifier output powers and the SNR at the coherent receiver side. Since these input parameters of the QoT model carry uncertainties, we use a machine learning algorithm to reduce them, improving the accuracy of the SNR estimation. With this learning process, and for a European backbone network (28 nodes, 41 links), we could reduce the QoT inaccuracy by several dB for new demands, whatever the amount of uncertainty on the initial parameters.
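The general idea of refining uncertain QoT inputs from receiver-side measurements can be sketched as a small fitting problem; the toy linear SNR model, link penalties, and least-squares choice below are illustrative assumptions, not the authors' QoT model or learning procedure:

```python
# Toy sketch: fit per-link parameter corrections so that a simple SNR model
# reproduces the SNRs measured on already-established demands.
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)
n_links, n_demands = 5, 40
routes = rng.integers(0, 2, (n_demands, n_links))          # links used by each demand
true_penalty_db = np.array([20.0, 22.0, 19.5, 21.0, 23.0])  # unknown actual per-link penalty
measured_snr = 40.0 - routes @ true_penalty_db + rng.normal(0, 0.2, n_demands)

assumed_penalty_db = np.full(n_links, 21.0)                 # designer's uncertain initial guess

def residuals(correction):
    modeled_snr = 40.0 - routes @ (assumed_penalty_db + correction)
    return modeled_snr - measured_snr

fit = least_squares(residuals, x0=np.zeros(n_links))
print("refined per-link penalties:", np.round(assumed_penalty_db + fit.x, 2))
print("true per-link penalties:   ", true_penalty_db)
```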

85 citations


Cites background from "Design of Low-Margin Optical Networ..."

  • ...Many other techniques also exist in the literature [5][6]....


  • ...Input Power: {P_actual} ~ N(μ_p, σ_p) with μ_p = 0 dBm, σ_p = 1 dBm; {P_e} ~ N(μ_p, σ_p) with μ_p = [-2, -1, 0, 1, 2] dBm, σ_p = 1 dBm. Noise Figure: {NF_actual} ~ U[5, 7] dB; {NF_e} = 5, 6 or 7 dB....


  • ...To ensure that all traffic demands in an optical network fulfill their target capacities, network designers add significant (up to several dBs) pre-defined “design margins” to the values predicted by the QoT model or tool [5][6]....


References
Journal ArticleDOI
TL;DR: This paper analyzes in detail the GN-model errors and derives a complete set of formulas accounting for all single, cross, and multi-channel effects that constitute the enhanced GN-model (EGN-model), which is found to be very good when assessing detailed span-by-span NLI accumulation and excellent when estimating realistic system maximum reach.
Abstract: The GN-model has been proposed as an approximate but sufficiently accurate tool for predicting uncompensated optical coherent transmission system performance in realistic scenarios. For this specific use, the GN-model has enjoyed substantial validation, both simulative and experimental. Recently, however, it has been pointed out that its predictions, when used to obtain a detailed picture of non-linear interference (NLI) noise accumulation along a link, may be affected by a substantial NLI overestimation error, especially in the first spans of the link. In this paper we analyze in detail the GN-model errors. We discuss recently proposed formulas for correcting such errors and show that they neglect several contributions to NLI, so that they may substantially underestimate NLI in specific situations, especially over low-dispersion fibers. We derive a complete set of formulas accounting for all single, cross, and multi-channel effects. This set constitutes what we have called the enhanced GN-model (EGN-model). We extensively validate the EGN model by comparison with accurate simulations in several different system scenarios. The overall EGN model accuracy is found to be very good when assessing detailed span-by-span NLI accumulation and excellent when estimating realistic system maximum reach. The computational complexity vs. accuracy trade-offs of the various versions of the GN and EGN models are extensively discussed.
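For orientation, the snippet below evaluates the widely used closed-form, single-span, ideal-Nyquist GN-model approximation of the NLI power spectral density (i.e., the baseline model whose errors the paper analyzes, without the EGN corrections); parameter values are illustrative and the unit conversions are assumptions of this sketch:

```python
# Closed-form GN-model sketch (ideal Nyquist WDM, single span).
import numpy as np

def gn_nli_psd(gamma, g_wdm, alpha_db_km, beta2_ps2_km, bw_wdm_thz, span_km):
    """Approximate NLI power spectral density (W/Hz) at the centre of the comb."""
    a_p = alpha_db_km / 4.343 / 1e3                   # power attenuation, 1/m
    leff = (1 - np.exp(-a_p * span_km * 1e3)) / a_p   # span effective length, m
    leff_a = 1 / a_p                                  # asymptotic effective length, m
    beta2 = abs(beta2_ps2_km) * 1e-24 / 1e3           # s^2/m
    bw = bw_wdm_thz * 1e12                            # total WDM bandwidth, Hz
    return ((8 / 27) * gamma ** 2 * g_wdm ** 3 * leff ** 2
            * np.arcsinh(0.5 * np.pi ** 2 * beta2 * leff_a * bw ** 2)
            / (np.pi * beta2 * leff_a))

# Illustrative SMF-like numbers: 0 dBm per 32 GBd channel, 4 THz comb, 80 km span.
rs, p_ch = 32e9, 1e-3
g_nli = gn_nli_psd(gamma=1.3e-3, g_wdm=p_ch / rs, alpha_db_km=0.2,
                   beta2_ps2_km=21.0, bw_wdm_thz=4.0, span_km=80)
p_nli = g_nli * rs
print(f"per-channel NLI after one span: {10 * np.log10(p_nli / 1e-3):.1f} dBm")
```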

414 citations

Journal ArticleDOI
TL;DR: In this paper, a nonlinear state-space model for nonlinearity mitigation, carrier recovery, and nanoscale device characterization is proposed, which allows for tracking and compensation of XPM-induced impairments by employing approximate stochastic filtering methods such as extended Kalman or particle filtering.
Abstract: Machine learning techniques relevant for nonlinearity mitigation, carrier recovery, and nanoscale device characterization are reviewed and employed. Markov Chain Monte Carlo in combination with Bayesian filtering is employed within the nonlinear state-space framework and demonstrated for parameter estimation. It is shown that the time-varying effects of cross-phase modulation (XPM) induced polarization scattering and phase noise can be formulated within the nonlinear state-space model (SSM). This allows for tracking and compensation of the XPM-induced impairments by employing approximate stochastic filtering methods such as extended Kalman or particle filtering. The achievable gains depend on the autocorrelation (AC) function properties of the impairments under consideration, which are strongly dependent on the transmission scenario. The gains of the compensation method are therefore investigated by varying the parameters of the AC function describing XPM-induced polarization scattering and phase noise. It is shown that an increase in the nonlinear tolerance of more than 2 dB is achievable for 32 Gbaud QPSK and 16-quadrature-amplitude modulation (QAM). It is also reviewed how laser rate equations can be formulated within the nonlinear state-space framework, which allows for tracking of non-Lorentzian laser phase noise lineshapes. It is experimentally demonstrated for 28 Gbaud 16-QAM signals that, if the laser phase noise shape strongly deviates from the Lorentzian, phase noise tracking algorithms employing rate-equation-based SSMs result in a significant performance improvement (>8 dB) compared to traditional approaches using a digital phase-locked loop. Finally, a Gaussian mixture model is reviewed and employed for nonlinear phase noise compensation and for characterization of nanoscale device structure variations.
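A stripped-down example of the state-space filtering idea is carrier-phase tracking with a scalar Kalman filter on a random-walk phase model; this is not the authors' XPM/SSM formulation, and the decision-directed QPSK update and noise levels below are assumptions:

```python
# Scalar Kalman filter tracking a random-walk carrier phase on noisy QPSK symbols.
import numpy as np

rng = np.random.default_rng(2)
n, sigma_phase = 2000, 0.02                         # phase random-walk step (assumed)
true_phase = np.cumsum(rng.normal(0, sigma_phase, n))
tx = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, n)))
rx = tx * np.exp(1j * true_phase) + 0.05 * (rng.normal(size=n) + 1j * rng.normal(size=n))

phase_hat, p_cov = 0.0, 1.0
q_var, r_var = sigma_phase ** 2, 0.01
est = np.empty(n)
for k in range(n):
    p_cov += q_var                                  # predict: phase random walk
    derotated = rx[k] * np.exp(-1j * phase_hat)
    decision = np.exp(1j * (np.round((np.angle(derotated) - np.pi / 4) / (np.pi / 2))
                            * np.pi / 2 + np.pi / 4))   # nearest QPSK point
    innovation = np.angle(rx[k] * np.conj(decision)) - phase_hat
    innovation = np.angle(np.exp(1j * innovation))  # wrap to (-pi, pi]
    gain = p_cov / (p_cov + r_var)                  # update
    phase_hat += gain * innovation
    p_cov *= 1 - gain
    est[k] = phase_hat

print(f"rms phase error: {np.std(np.angle(np.exp(1j * (est - true_phase)))):.3f} rad")
```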

199 citations

Journal ArticleDOI
TL;DR: In this paper, the authors review the modeling of inter-channel nonlinear interference noise (NLIN) in fiber-optic communication systems, focusing on the accurate extraction of the NLIN variance, the dependence on modulation format, the role of nonlinear phase-noise, and the existence of temporal correlations.
Abstract: We review the modeling of inter-channel nonlinear interference noise (NLIN) in fiber-optic communication systems, focusing on the accurate extraction of the NLIN variance, the dependence on modulation format, the role of nonlinear phase-noise, and the existence of temporal correlations. We show ways in which temporal correlations can be exploited for reducing the impact of NLIN, and discuss the prospects of this procedure in future systems.

167 citations

Journal ArticleDOI
TL;DR: This paper analyses a number of long-haul network architectures from an unavailability point of view, finding that self-healing rings and dual-fed systems offer the highest level of survivability by eliminating service impacts caused by cable cuts and equipment failures.
Abstract: Network survivability is a key concern in today's networks, and will become increasingly important in future optical networks as they carry ever more traffic. Networks are also becoming more complex, with the requirement for increased functionality. Currently, there is a lack of understanding in the industry as to the exact relationship between the choice of network architecture and the meeting of a set availability objective. This paper analyses a number of long-haul network architectures from an unavailability point of view. The long-haul networks analyzed include: networks with diversity, networks with restoration capability, and networks with survivability. Derivations are given for each architecture; the formulas for 2- and 4-fiber rings and for dual-fed routing are new. A hypothetical reference connection (HRX) and its unavailability objectives are used as references. Networks with restoration capability and networks with survivability meet the proposed objective. Self-healing rings (both 2- and 4-fiber bidirectional line-switched rings) and dual-fed systems offer the highest level of survivability, by eliminating service impacts caused by cable cuts and equipment failures.
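The core arithmetic behind such comparisons is the series/parallel composition of element unavailabilities, sketched below with assumed element figures (not the paper's values): an unprotected path accumulates element unavailability, while 1+1 dual-fed routing over disjoint paths roughly multiplies the two path unavailabilities:

```python
# Series/parallel unavailability sketch for an unprotected vs. a dual-fed path.

def path_unavailability(element_unavailabilities):
    """Series elements: path availability is the product of element availabilities."""
    availability = 1.0
    for u in element_unavailabilities:
        availability *= 1.0 - u
    return 1.0 - availability

working = [5e-5, 2e-4, 5e-5, 2e-4, 5e-5]      # nodes and fiber sections (assumed)
protection = [5e-5, 3e-4, 5e-5, 3e-4, 5e-5]   # disjoint protection path (assumed)

u_unprotected = path_unavailability(working)
u_dual_fed = path_unavailability(working) * path_unavailability(protection)
print(f"unprotected:  {u_unprotected:.2e} (~{u_unprotected * 525600:.0f} min/year)")
print(f"1+1 dual fed: {u_dual_fed:.2e} (~{u_dual_fed * 525600:.2f} min/year)")
```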

165 citations

Journal ArticleDOI
TL;DR: By using link-length demands from an exemplary distance-diverse network, it is demonstrated that time-domain hybrid-QAM-enabled fine-grain rate-adaptable transponders can reduce network cost by more than 20 percent within a traditional, fixed-bandwidth, wavelength-division-multiplexed grid.
Abstract: We discuss the emerging rate-adaptable optical transmission technology and how this new technology may be employed to further reduce the transport network cost to meet the ever-growing bandwidth demand in the core network. Two different types of transponders are considered: those adjusting either the transported bit rate (i.e., the client data rate) or the symbol rate (with a fixed bit rate). We propose a methodology for calculating the (normalized) cost to build out an entire long-haul transport network with several options for bit-rate-adaptable transponders. By using link-length demands from an exemplary distance-diverse network, we demonstrate that time-domain hybrid-QAM-enabled fine-grain rate-adaptable transponders can reduce network cost by more than 20 percent within a traditional, fixed-bandwidth, wavelength-division-multiplexed grid. We also argue that the total transponder expense using symbol-rate-adaptable technology will be greater than when using bit-rate-adaptable technology, as well as requiring more costly flex-grid ROADMs for channel routing.
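The flavor of such a build-out comparison can be shown with a small sketch: per demand, pick the highest-rate format whose assumed reach covers the link, and count transponders against a fixed-rate baseline; the reach table, demand set, and 200G baseline are illustrative assumptions, not the paper's data or cost model:

```python
# Toy transponder count: rate-adaptable format selection vs. a fixed 200G baseline.

# (format, bit rate in Gb/s, assumed transparent reach in km)
FORMATS = [("PDM-64QAM", 600, 400), ("PDM-16QAM", 400, 1200),
           ("PDM-8QAM", 300, 2500), ("PDM-QPSK", 200, 4500)]

def transponders_needed(demand_gbps: int, length_km: int, fixed_rate=None) -> int:
    if fixed_rate is not None:
        rate = fixed_rate                       # fixed-rate baseline
    else:                                       # adaptable: fastest format that reaches
        reachable = [r for _, r, reach in FORMATS if reach >= length_km]
        if not reachable:
            raise ValueError("no format reaches this distance; regeneration needed")
        rate = max(reachable)
    return -(-demand_gbps // rate)              # ceiling division

demands = [(800, 300), (400, 1000), (600, 2000), (300, 4000)]   # (Gb/s, km), assumed
adaptive = sum(transponders_needed(d, l) for d, l in demands)
baseline = sum(transponders_needed(d, l, fixed_rate=200) for d, l in demands)
print(f"rate-adaptable transponders: {adaptive}, fixed 200G baseline: {baseline}")
```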

100 citations


"Design of Low-Margin Optical Networ..." refers background in this paper

  • ...Assuming that the deployed optical TRX is rate-flexible and 200 Gb/s-capable: when the demand grows, for instance to 200 Gb/s, the U-margin is leveraged and the extra capacity can be allocated without replacing the optical TRX, by simply changing its modulation format from PDM-QPSK to PDM-16QAM (Fig....
