
Showing papers in "IEEE/OSA Journal of Optical Communications and Networking in 2021"


Journal ArticleDOI
Yvan Pointurier1
TL;DR: In this article, a taxonomy for ML-aided QoT estimation is proposed, and a review and comparison of all recently published ML-aided QoT estimation articles is provided.
Abstract: The estimation of the quality of transmission (QoT) in optical systems with machine learning (ML) has recently been the focus of a large body of research. We discuss the sources of inaccuracy in QoT estimation in general; we propose a taxonomy for ML-aided QoT estimation; we briefly review ML-aided optical performance monitoring, a tightly related topic; and we review and compare all recently published ML-aided QoT articles.

65 citations


Journal ArticleDOI
TL;DR: In this paper, a multiband optimized optical power control for BDM upgrades is proposed, which consists of setting a pre-tilt and power offset in the line amplifiers, thus achieving a considerable increase in QoT, both in average value and flatness.
Abstract: Spatial-division multiplexing (SDM) and band-division multiplexing (BDM) have emerged as solutions to expand the capacity of existing C-band wavelength-division multiplexing (WDM) optical systems and to deal with increasing traffic demands. An important difference between these two approaches is that BDM solutions enable data transmission over unused spectral bands of already-deployed optical fibers, whereas SDM solutions require the availability of additional fibers to replicate C-band WDM transmission. On the other hand, to properly design a multiband optical line system (OLS), the following fiber propagation effects have been taken into account in the analysis: (i) stimulated Raman scattering (SRS), which induces considerable power transfer among bands; (ii) frequency dependence of fiber parameters such as attenuation, dispersion, and nonlinear coefficients; and (iii) utilization of optical amplifiers with different doping materials, thus leading to different characteristics, e.g., in terms of noise figures. This work follows a two-step approach: First, we aim at maximizing and flattening the quality of transmission (QoT) when adding L- and L+S-bands to a traditional WDM OLS where only the C-band is deployed. This is achieved by applying a multiband optimized optical power control for BDM upgrades, which consists of setting a pre-tilt and power offset in the line amplifiers, thus achieving a considerable increase in QoT, both in average value and flatness. Second, the SDM approach is used as a benchmark for the BDM approach by assessing network performance on three network topologies with different geographical footprints. We show that, with optical power properly optimized, BDM may enable an increase in network traffic, slightly less than an SDM upgrade but still comparable, without requiring additional fiber cables.

59 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigate how to employ ML regression approaches to estimate the distribution of the received generalized signal-to-noise ratio (GSNR) of unestablished lightpaths, and assess the performance of three regression approaches by leveraging synthetic data obtained by means of two different data generation tools.
Abstract: Estimating the quality of transmission (QoT) of a candidate lightpath prior to its establishment is of pivotal importance for effective decision making in resource allocation for optical networks. Several recent studies investigated machine learning (ML) methods to accurately predict whether the configuration of a prospective lightpath satisfies a given threshold on a QoT metric such as the generalized signal-to-noise ratio (GSNR) or the bit error rate. Given a set of features, the GSNR for a given lightpath configuration may still exhibit variations, as it depends on several other factors not captured by the features considered. It follows that the GSNR associated with a lightpath configuration can be modeled as a random variable and thus be characterized by a probability distribution function. However, most of the existing approaches attempt to directly answer the question “is a given lightpath configuration (e.g., with a given modulation format) feasible on a certain path?” but do not consider the additional benefit that estimating the entire statistical distribution of the metric under observation can provide. Hence, in this paper, we investigate how to employ ML regression approaches to estimate the distribution of the received GSNR of unestablished lightpaths. In particular, we discuss and assess the performance of three regression approaches by leveraging synthetic data obtained by means of two different data generation tools. We evaluate the performance of the three proposed approaches on a realistic network topology in terms of root mean squared error and R2 score and compare them against a baseline approach that simply predicts the GSNR mean value. Moreover, we provide a cost analysis by attributing penalties to incorrect deployment decisions and emphasize the benefits of leveraging the proposed estimation approaches from the point of view of a network operator, which is allowed to make more informed decisions about lightpath deployment with respect to state-of-the-art QoT classification techniques.
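As an illustration of this kind of distribution-aware regression (a minimal sketch, not the authors' code: the feature names, the synthetic data, and the use of ensemble spread as the distribution estimate are assumptions), one can predict the GSNR of candidate lightpaths and compare against a mean-only baseline using RMSE and the R2 score:

```python
# Sketch: ML regression of the GSNR of unestablished lightpaths, compared against
# a baseline that always predicts the training-set mean. Feature names and the
# synthetic data generator are illustrative assumptions, not the paper's dataset.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.dummy import DummyRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(0)
n = 5000
# Hypothetical lightpath features: number of spans, total length (km), launch power (dBm)
X = np.column_stack([rng.integers(1, 20, n),
                     rng.uniform(80, 1500, n),
                     rng.uniform(-2, 3, n)])
# Toy GSNR [dB]: decreases with length/spans, plus unexplained random variation
y = 35 - 6 * np.log10(X[:, 1]) - 0.2 * X[:, 0] + rng.normal(0, 0.5, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

baseline = DummyRegressor(strategy="mean").fit(X_tr, y_tr)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)

# Per-sample distribution estimate: mean and spread across the ensemble's trees
tree_preds = np.stack([t.predict(X_te) for t in model.estimators_])
gsnr_mean, gsnr_std = tree_preds.mean(axis=0), tree_preds.std(axis=0)

for name, pred in [("baseline", baseline.predict(X_te)), ("regressor", gsnr_mean)]:
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.2f} dB, R2={r2_score(y_te, pred):.3f}")
```

The ensemble spread is only one way to characterize the GSNR distribution; the paper studies dedicated regression approaches, but the evaluation logic (RMSE and R2 against a mean-value predictor, plus penalty-aware cost analysis) follows the same pattern.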

41 citations


Journal ArticleDOI
Junwen Zhang1, Zhensheng Jia1, Mu Xu1, Haipeng Zhang1, Luis Alberto Campos1 
TL;DR: The coherent upstream burst-mode detection of 100-Gb/s polarization-division-multiplexed quadrature-phase-shift-keying signals based on a 71.68-ns preamble and corresponding burst-mode DSP, achieving a 36-dB power budget after 50-km single-mode fiber transmission, is demonstrated.
Abstract: Coherent optics has proven to be a promising candidate for 100 Gb/s and even beyond single-wavelength time-division multiplexing passive optical networks (TDM-PONs). However, one of the key issues in TDM coherent-PON is how to efficiently and robustly achieve upstream burst-mode coherent detection. To solve this problem, we proposed and demonstrated a reliable and efficient preamble design with corresponding burst-mode digital signal processing (DSP) for coherent upstream burst-mode detection in TDM coherent-PON. To reduce the preamble length, a specially designed preamble unit is used for three burst-mode DSP functions including frame synchronization, state-of-polarization estimation, and frequency-offset estimation. The efficiency of the designed preamble and the overall performance under different test conditions are experimentally verified. We also confirmed the robust performance under large frequency offset, residual fiber dispersion, and long-term operation. As a proof of concept, we demonstrate the coherent upstream burst-mode detection of 100-Gb/s polarization-division-multiplexed quadrature-phase-shift-keying (PDM-QPSK) signals based on a 71.68-ns preamble and corresponding burst-mode DSP, achieving a 36-dB power budget after 50-km single-mode fiber transmission. Around 20-dB dynamic range of received power is also experimentally verified for burst signals in the 100 Gb/s/λ TDM coherent-PON.
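As a rough illustration of one of the three preamble functions mentioned above, the sketch below performs frame synchronization by cross-correlating the received samples with a known preamble (the preamble content and lengths are placeholders; the paper's 71.68-ns preamble design and full burst-mode DSP chain are not reproduced):

```python
# Sketch: burst-mode frame synchronization by cross-correlation with a known preamble.
# The preamble content and lengths are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
qpsk = np.exp(1j * np.pi / 4 * np.array([1, 3, 5, 7]))

preamble = rng.choice(qpsk, 64)            # known training sequence
payload = rng.choice(qpsk, 1024)
true_start = 300

# Burst arriving at an unknown position inside the receiver capture window
rx = rng.normal(0, 0.05, 2048) + 1j * rng.normal(0, 0.05, 2048)
burst = np.concatenate([preamble, payload])
rx[true_start:true_start + burst.size] += burst * np.exp(1j * 0.3)  # unknown phase

# Correlate against the known preamble; the magnitude peak marks the burst start
corr = np.abs(np.correlate(rx, preamble, mode="valid"))
est_start = int(np.argmax(corr))
print("estimated burst start:", est_start, "(true:", true_start, ")")
```

Using the correlation magnitude makes the synchronization insensitive to the unknown carrier phase of the burst, which is why the same preamble can then be reused for polarization and frequency-offset estimation in the actual receiver.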

40 citations


Journal ArticleDOI
TL;DR: The requirements for next-generation PON transceiver technologies, and their implications, are analyzed based on past and present PON transceiver designs.
Abstract: This paper provides an overview of transceiver technologies to be used for current and next-generation passive optical networks (PONs). The uninterrupted scaling of PONs to higher bitrates in a cost-effective way to meet future bandwidth demands will drive the need for continuous improvement in PON transceiver technologies. In this paper, we analyze the requirements for next-generation transceiver technologies, and their implications, based on past and present PON transceiver designs.

30 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a novel functional block called the security operation center (SOC) to boost efficiency of ML-based security diagnostic techniques when processing high-dimensional optical performance monitoring data in the presence of previously unseen physical-layer attacks.
Abstract: As the communication infrastructure that sustains critical societal services, optical networks need to function in a secure and agile way. Thus, cognitive and automated security management functionalities are needed, fueled by the proliferating machine learning (ML) techniques and compatible with common network control entities and procedures. Automated management of optical network security requires advancements both in terms of the performance and efficiency of ML approaches for security diagnostics and in terms of novel management architectures and functionalities. This paper tackles these challenges by proposing what we believe to be a novel functional block called the security operation center, describing its architecture, specifying key requirements on the supported functionalities, and providing guidelines on its integration with the optical-layer controller. Moreover, to boost the efficiency of ML-based security diagnostic techniques when processing high-dimensional optical performance monitoring data in the presence of previously unseen physical-layer attacks, we combine unsupervised and semi-supervised learning techniques with three different dimensionality reduction methods and analyze the resulting performance and trade-offs between the ML accuracy and run-time complexity.
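An illustrative sketch of combining dimensionality reduction with unsupervised detection on optical performance monitoring (OPM) data follows; PCA and an isolation forest are generic stand-ins for the three reduction methods and the unsupervised/semi-supervised learners compared in the paper, and the data are synthetic:

```python
# Sketch: dimensionality reduction + unsupervised anomaly detection on OPM vectors.
# Synthetic data and the chosen estimators (PCA, IsolationForest) are placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import IsolationForest
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n_features = 120                              # high-dimensional OPM snapshot
normal = rng.normal(0, 1, (2000, n_features)) # attack-free operating conditions
attack = rng.normal(0, 1, (50, n_features)) + 3.0  # previously unseen anomaly

detector = make_pipeline(StandardScaler(),
                         PCA(n_components=10),
                         IsolationForest(contamination=0.01, random_state=0))
detector.fit(normal)                          # trained on attack-free data only

pred_normal = detector.predict(normal)        # +1 = inlier, -1 = anomaly
pred_attack = detector.predict(attack)
print("false alarms:", np.mean(pred_normal == -1))
print("attack detection rate:", np.mean(pred_attack == -1))
```

Reducing from 120 to 10 dimensions before detection is what keeps the run-time complexity low; the accuracy/complexity trade-off of that choice is exactly what the paper quantifies.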

27 citations


Journal ArticleDOI
TL;DR: This study compares the performance of two ML methodologies explicitly designed to augment small-sized training datasets, namely, active learning and domain adaptation, for the estimation of the signal-to-noise ratio (SNR) of an unestablished lightpath.
Abstract: Machine learning (ML) is currently being investigated as an emerging technique to automate quality of transmission (QoT) estimation during lightpath deployment procedures in optical networks. Even though the potential network-resource savings enabled by ML-based QoT estimation have been confirmed in several studies, some practical limitations hinder its adoption in operational network deployments. Among these, the lack of a comprehensive training dataset is recognized as a main limiting factor, especially in the early network deployment phase. In this study, we compare the performance of two ML methodologies explicitly designed to augment small-sized training datasets, namely, active learning (AL) and domain adaptation (DA), for the estimation of the signal-to-noise ratio (SNR) of an unestablished lightpath. This comparison also allows us to provide some guidelines for the adoption of these two techniques at different life stages of a newly deployed optical network infrastructure. Results show that both AL and DA permit us, starting from limited datasets, to reach a QoT estimation capability similar to that achieved by standard supervised learning approaches working on much larger datasets. More specifically, we observe that a few dozen additional samples acquired from selected probe lightpaths already provide significant performance improvement for AL, whereas a few hundred samples gathered from an external network topology are needed in the case of DA.
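The active-learning side of this comparison can be sketched as a query loop that adds, at each iteration, the probe lightpaths on which the current model is most uncertain (here the disagreement across a random-forest ensemble is the uncertainty proxy; data, features, and query budget are illustrative placeholders):

```python
# Sketch: active learning for SNR regression, querying the probe lightpaths on which
# an ensemble model disagrees the most. Data, features, and budget are illustrative.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
X_pool = rng.uniform(0, 1, (3000, 6))                 # candidate probe lightpaths
y_pool = 20 - 8 * X_pool[:, 0] + 2 * X_pool[:, 1] + rng.normal(0, 0.3, 3000)
X_test, y_test = X_pool[2500:], y_pool[2500:]
X_pool, y_pool = X_pool[:2500], y_pool[:2500]

labeled = list(range(30))                             # small initial training set
for round_ in range(5):
    model = RandomForestRegressor(n_estimators=100, random_state=0)
    model.fit(X_pool[labeled], y_pool[labeled])

    # Uncertainty = spread of per-tree predictions over the unlabeled pool
    unlabeled = np.setdiff1d(np.arange(len(X_pool)), labeled)
    spread = np.std([t.predict(X_pool[unlabeled]) for t in model.estimators_], axis=0)
    queries = unlabeled[np.argsort(spread)[-20:]]     # 20 most uncertain probes
    labeled.extend(queries.tolist())

    rmse = mean_squared_error(y_test, model.predict(X_test)) ** 0.5
    print(f"round {round_}: {len(labeled)} samples, test RMSE = {rmse:.3f} dB")
```

Domain adaptation instead reuses samples from a different (source) network; both strategies target the same problem of starting from a small training set, which is why the paper positions them at different life stages of a newly deployed network.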

26 citations


Journal ArticleDOI
TL;DR: In this article, a vendor-agnostic optical line controller architecture capable of autonomously setting the working point of optical amplifiers to maximize the capacity of a ROADM-to-ROADM (reconfigurable optical add-drop multiplexer) link is proposed.
Abstract: In the direction of disaggregated and cognitive optical networks, this work proposes and experimentally tests a vendor-agnostic optical line controller architecture capable of autonomously setting the working point of optical amplifiers to maximize the capacity of a ROADM-to-ROADM (reconfigurable optical add–drop multiplexer) link. From a procedural point of view, once the equipment is installed, the presented software framework performs an automatic characterization of the line, span by span, to abstract the properties of the physical layer. This process requires the exploitation of monitoring devices such as optical channel monitors and optical time domain reflectometers, available, in a future perspective, in each amplification site. On the basis of this information, an optimization algorithm determines the working point of each amplifier to maximize the quality of transmission (QoT) over the entire band. The optical line controller has been experimentally tested in the laboratory using two different control strategies, achieving in both cases a homogeneous QoT for each channel close to the maximum average and an excellent match with respect to emulation results. In this framework, the Gaussian noise simulation in Python (GNPy) open source Python library is used as the physical model for optical propagation through the fiber, and the covariance matrix adaptation evolution strategy is used as an optimization algorithm to identify properties of each fiber span and to maximize the link capacity.
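A heavily simplified sketch of the optimization stage follows. The paper couples GNPy with the CMA-ES optimizer; here a toy per-amplifier gain/tilt model and SciPy's differential evolution stand in for both, purely to illustrate the "optimize amplifier working points to maximize and flatten the QoT" loop:

```python
# Sketch: choose per-amplifier gain and tilt to maximize and flatten a toy QoT metric.
# The QoT model below is a crude placeholder for GNPy, and differential evolution
# stands in for the CMA-ES optimizer used in the paper.
import numpy as np
from scipy.optimize import differential_evolution

n_amps, n_channels = 4, 40
freq = np.linspace(-1, 1, n_channels)          # normalized channel frequency axis
span_loss = 20.0                               # dB per span (assumed identical spans)

def qot_per_channel(x):
    gains, tilts = x[:n_amps], x[n_amps:]
    snr = np.full(n_channels, 30.0)            # toy starting SNR, in dB
    for g, t in zip(gains, tilts):
        launch = g + t * freq - span_loss      # residual per-channel power error
        # Toy penalty: too little power -> ASE-limited, too much -> nonlinear-limited
        snr = snr - np.abs(launch) - 0.05 * (g - span_loss) ** 2
    return snr

def objective(x):
    snr = qot_per_channel(x)
    # Maximize the average while penalizing ripple across the band (flatness)
    return -(snr.mean() - 2.0 * (snr.max() - snr.min()))

bounds = [(15, 25)] * n_amps + [(-1, 1)] * n_amps   # gain [dB], tilt [dB/band]
result = differential_evolution(objective, bounds, seed=0, maxiter=50)
print("best gains/tilts:", np.round(result.x, 2))
print("QoT mean / ripple [dB]:",
      qot_per_channel(result.x).mean().round(2),
      (qot_per_channel(result.x).max() - qot_per_channel(result.x).min()).round(2))
```

In the actual controller the objective is evaluated with GNPy on the characterized spans, so each candidate working point is scored against the abstracted physical layer rather than a toy penalty.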

24 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe a series of workflows for the whitebox transponder, including getting optical performance data from the coherent optical transceiver, diagnosing optical transmission line conditions by applying deep neural networks (DNNs) to the collected data, and notifying the remote network management system (NMS) of the diagnosis results.
Abstract: In recent years, optical networks have become more complex due to traffic increase and service diversification, and it has become increasingly difficult for network operators to monitor large-scale networks and keep track of communication status at all times, as well as to control and operate the various services running on the networks. This issue is motivating the need for autonomous optical network diagnosis, and expectations are growing for the use of machine learning and deep learning. Another trend is the active movement toward reducing capital expenditure (CAPEX)/operational expenditure (OPEX) of optical transport equipment by employing whitebox hardware, open source software, and open interfaces. In this paper, we describe in detail the concept of a series of workflows for the whitebox transponder, including getting optical performance data from the coherent optical transceiver, diagnosing optical transmission line conditions by applying deep neural networks (DNNs) to the collected data, and notifying the remote network management system (NMS) of the diagnosis results. In addition, as one of the use cases, we demonstrate fiber bending detection based on the diagnosis workflow. Offline and online demonstrations show the deployed diagnosis system can identify the fiber bend with up to 99% accuracy in our evaluation environment.
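A minimal sketch of the diagnosis step, classifying transceiver monitoring vectors with a small DNN, assuming hypothetical feature and label arrays; the workflow, data items, and network sizes of the paper are not reproduced:

```python
# Sketch: small DNN classifier for line-condition diagnosis (e.g., fiber bend vs normal)
# from coherent-transceiver monitoring data. Features, labels, and sizes are assumptions.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(4)
n, n_features = 4000, 32                       # monitoring items per observation
X = rng.normal(0, 1, (n, n_features)).astype("float32")
y = (X[:, 0] + 0.5 * X[:, 5] > 0.8).astype("int32")   # placeholder "bend" label

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_features,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # P(fiber bend)
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=10, batch_size=64, validation_split=0.2, verbose=0)
print("validation accuracy:", model.evaluate(X, y, verbose=0)[1])
```

In the workflow described above, such a model would run on data pulled from the whitebox transponder's coherent transceiver, with the diagnosis result forwarded to the remote NMS.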

24 citations


Journal ArticleDOI
TL;DR: This paper has shown how the hardware abstraction layer interfaces of optical transceivers are implemented for multivendor and heterogeneous environments, coherent digital signal processor interoperability, and optical transport whiteboxes, and driven the effort to define the transponder abstraction interface with partners.
Abstract: In this paper, we identify challenges in developing future optical network infrastructure for new services based on technologies such as 5G, virtual reality, and artificial intelligence, and we suggest approaches to handling these challenges that include a business model, architecture, and diversity. Through activities in multiservice agreement and de facto standard organizations, we have shown how the hardware abstraction layer interfaces of optical transceivers are implemented for multivendor and heterogeneous environments, coherent digital signal processor interoperability, and optical transport whiteboxes. We have driven the effort to define the transponder abstraction interface with partners. The feasibility of such implementation was verified through demonstrations and trials. In addition, we are constructing an open-transport platform by combining existing open-source software and implementing software components that automate and enhance operations. An open architecture maintains a healthy ecosystem for industry and allows for a flexible, operator-driven network.

22 citations


Journal ArticleDOI
TL;DR: Two digitally precompensated modulation schemes that are highly tolerant of chromatic dispersion are compared, showing a possible extension to C-band operation, preserving direct-detection and linear-impairment equalization at the optical network unit side.
Abstract: The future-generation passive optical network (PON) physical layer, targeting 100 Gbps/wavelength, will have to deal with severe optoelectronics bandwidth and chromatic dispersion limitations. In this paper, largely extending our Optical Fiber Communication Conference (OFC) 2020 invited paper, we review 100 Gbps/wavelength PON downstream alternatives over standard single-mode fiber in the O- and C-bands, analyzing three modulation formats (PAM-4, partial-response PAM-4, and PAM-8), two types of direct-detection receivers (APD- and SOA+PIN-based), and three digital reception strategies (unequalized, feed-forward equalized, and decision-feedback equalized). We evaluate by means of simulations the performance of these alternatives under different optoelectronics bandwidth and dispersion scenarios, identifying O-band feasible solutions able to reach 20 km of fiber and an optical path loss of at least 29 dB over a wide wavelength range of operation. Finally, we compare two digitally precompensated modulation schemes that are highly tolerant of chromatic dispersion, showing a possible extension to C-band operation, preserving direct-detection and linear-impairment equalization at the optical network unit side.

Journal ArticleDOI
TL;DR: SDN Enabled Broadband Access (SEBA) as mentioned in this paper is a large open-source development and integration project hosted by the Open Networking Foundation (ONF). Built using white-box hardware, merchant silicon, and SDN principles, SEBA brings the benefits of virtualization and cloudification to passive optical network (PON)-based broadband access networks for FTTH/FTTB deployments.
Abstract: SDN Enabled Broadband Access (SEBA) is a large open-source development and integration project hosted by the Open Networking Foundation (ONF). Built using white-box hardware, merchant silicon, and SDN principles, SEBA brings the benefits of virtualization and cloudification to passive optical network (PON)-based broadband access networks for FTTH/FTTB deployments. Already in field trials with several Tier-1 operators, SEBA is rapidly moving toward production readiness with contributions from a rich open-source community of operators, vendors, and system integrators.

Journal ArticleDOI
TL;DR: In this article, point-to-point (PtP), wavelength division multiplexing (WDM) and time division multiple-layer (TDM) optical interfaces are discussed as solutions for backhaul, midhaul, and fronthaul networks.
Abstract: Point to point (PtP), wavelength division multiplexing (WDM) and time division multiplexing (TDM) optical interfaces are discussed as solutions for backhaul, midhaul, and fronthaul networks. The evolution of radio access networks (RANs) for 5G and beyond is introduced and PtP is identified as the most deployed solution, with many transceiver technologies available to cover the different needs for each RAN configuration. WDM and TDM interfaces remain of interest when a lack of fiber occurs. WDM technologies are being adapted to answer to this RAN market with the appearance of medium-WDM (MWDM) and autotunable dense-WDM (DWDM) transceivers. TDM technologies are trying to evolve towards higher bit rates and lower latency to cope with RAN backhaul specifications. A gap in the transceiver technologies is identified for each of those interface types and also for bit rates above 25 Gbit/s that will impose more complex optics, electronics, and integration.

Journal ArticleDOI
TL;DR: In this article, an ML-based soft-failure localization framework is evaluated in scenarios of partial telemetry. The framework is based on an artificial neural network (ANN) trained by optical signal and noise power models that simulate the network telemetry upon all possible failure scenarios.
Abstract: Soft-failure localization frameworks typically use if-else rules to localize failures based on the received telemetry data. However, in certain cases, particularly in disaggregated networks, some devices may not implement telemetry, or their telemetry may not be readily available. Alternatively, machine-learning-based (ML-based) frameworks can automatically learn complex relationships between telemetry and the fault location, incorporating information from the telemetry data collected network-wide. This paper evaluates an ML-based soft-failure localization framework in scenarios of partial telemetry. The framework is based on an artificial neural network (ANN) trained by optical signal and noise power models that simulate the network telemetry upon all possible failure scenarios. The ANN can be trained in less than 2 min, allowing it to be retrained according to the available partial telemetry data. The ML-based framework exhibits excellent performance in scenarios of partial telemetry, practically interpolating the missing data. We show that in the rare cases of incorrect failure localization, the actual failure is in the localized device’s vicinity. We also show that ANN training is accelerated by principal component analysis and can be carried out using cloud-based services. Finally, the evaluated ML-based framework is emulated in a software-defined-networking-based setup using the gNMI protocol for streaming telemetry.
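The core of such a framework can be sketched as a classifier over (possibly incomplete) telemetry vectors trained on model-generated data; here missing telemetry is emulated by zeroing features and PCA is used to speed up training, with the toy simulator, feature layout, and network size being illustrative assumptions:

```python
# Sketch: ANN soft-failure localization trained on simulated telemetry, with PCA to
# accelerate training and zeroed features emulating devices without telemetry.
# The simulator, feature layout, and network size are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
n_devices, n_feat_per_dev = 20, 8              # e.g., power/OSNR readings per device

def simulate(failed_device):
    """Toy stand-in for the optical signal/noise power model of the paper."""
    x = rng.normal(0, 0.1, n_devices * n_feat_per_dev)
    start = failed_device * n_feat_per_dev
    x[start:] -= 2.0                           # failure perturbs downstream readings
    return x

y = rng.integers(0, n_devices, 6000)
X = np.array([simulate(f) for f in y])
X[:, 40:56] = 0.0                              # emulate missing telemetry on two devices

clf = make_pipeline(StandardScaler(), PCA(n_components=30),
                    MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0))
clf.fit(X[:5000], y[:5000])
print("localization accuracy:", clf.score(X[5000:], y[5000:]))
```

Because the training data are generated by the model rather than collected, the classifier can be retrained in minutes whenever the set of available telemetry sources changes, which is the property the paper exploits for partial-telemetry scenarios.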

Journal ArticleDOI
TL;DR: In this paper, the authors proposed an artificial-intelligence-assisted forecast system to predict latency and geolocation in advance and trigger faster edge steering in a P4-based in-band network telemetry (INT) system.
Abstract: In beyond-5G networks, detailed end-to-end monitoring of specific application traffic will be required along the access-backhaul-cloud continuum to enable low-latency services through local edge steering. Current monitoring solutions are confined to specific network segments. In-band network telemetry (INT) technologies for software defined network (SDN) programmable data planes based on the P4 language are effective in the backhaul network segment, although limited to inter-switch latency; therefore, link latencies including wireless and optical segments are excluded from INT monitoring. Moreover, information such as user equipment (UE) geolocation would allow detailed mobility monitoring and improved cloud-edge steering policies. However, the synchronization between latency and location information, typically provided by different platforms, is hard to achieve with current monitoring systems. In this paper, an extension of P4-based INT involving the UE is proposed. The INT mechanism is designed to provide synchronized and accurate end-to-end latency and geolocation information, enabling decentralized steering policies, i.e., involving UE and selected switches, without SDN controller intervention. The proposal also includes an artificial-intelligence-assisted forecast system able to predict latency and geolocation in advance and trigger faster edge steering.

Journal ArticleDOI
TL;DR: In this paper, a machine learning agent trained on a dataset from an in-service network is used to reduce the uncertainty in the generalized signal-to-noise ratio (GSNR) computation on an unused sister network, based on the same optical transport equipment.
Abstract: Precise computation of the quality of transmission (QoT) of lightpaths (LPs) in transparent optical networks has techno-economic importance for any network operator. The QoT metric of LPs is defined by the generalized signal-to-noise ratio (GSNR), which includes the effects of both amplified spontaneous emission noise and nonlinear interference accumulation. Generally, the physical layer of a network is characterized by nominal values provided by vendors for the operational parameters of each network element (NE). Typically, NEs suffer a variation in the working point that implies an uncertainty from the nominal value, which creates uncertainty in the GSNR computation and requires the deployment of a system margin. We propose the use of a machine learning agent trained on a dataset from an in-service network to reduce the uncertainty in the GSNR computation on an unused sister network, based on the same optical transport equipment and thus following the transfer learning paradigm. We synthetically generate datasets for both networks using the open-source library GNPy and show how the proposed deep neural network based on TensorFlow may substantially reduce the GSNR uncertainty and, consequently, the needed margin. We also present a statistical analysis of the observed GSNR fluctuations, showing that the per-wavelength GSNR distribution is always well-approximated as Gaussian, enabling a statistical closed-form approach to the margin setting.
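The closed-form margin setting that a Gaussian GSNR distribution enables can be sketched as follows; the residual uncertainty value and the target outage probability below are illustrative numbers, not the paper's results:

```python
# Sketch: closed-form margin from a Gaussian model of the residual GSNR uncertainty.
# sigma and the target outage probability are illustrative, not the paper's values.
from scipy.stats import norm

sigma_db = 0.4          # std of the residual GSNR error after ML correction [dB]
p_outage = 1e-3         # tolerated probability that the true GSNR falls below estimate - margin

# Margin m such that P(GSNR_true < GSNR_est - m) = p_outage for a zero-mean Gaussian error
margin_db = norm.ppf(1 - p_outage) * sigma_db
print(f"required margin: {margin_db:.2f} dB")   # ~3.09 * sigma for p = 1e-3
```

The practical consequence is that any reduction of the per-wavelength GSNR standard deviation obtained through transfer learning translates linearly into a smaller deployed margin.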

Journal ArticleDOI
TL;DR: This paper presents the first experimental demonstration of transport-application-programming-interface-enabled software defined networking control architecture for partially disaggregated multi-domain and multi-layer (WDM over SDM) optical networks.
Abstract: Network operators are facing a critical issue on their optical transport networks to deploy 5G+ and Internet of Things services. They need to address the capacity increase by a factor of 10, while keeping a similar cost per user. Over the past years, network operators have been working on the optical disaggregated approach with great interest for achieving the required efficiency and cost reduction. In particular, partially disaggregated optical networks make it possible to decouple the transponders from the transport system (known as an open line system) that are provided by different vendors. On the other hand, space division multiplexing (SDM) has been proposed as the key technology to overcome the capacity crunch that the optical standard single-mode fibers are facing to support the forecasted 10× growth. Spatial core switching is gaining interest because it makes it possible to deploy SDM networks to bypass the overloaded wavelength division multiplexing (WDM) networks, by provisioning spatial media channels between WDM nodes. This paper presents, to the best of our knowledge, the first experimental demonstration of transport-application-programming-interface-enabled software defined networking control architecture for partially disaggregated multi-domain and multi-layer (WDM over SDM) optical networks.

Journal ArticleDOI
TL;DR: This work associates machine learning and an analytical model (i.e., the Gaussian noise model) to reduce uncertainties on the output power profile and the noise figure of each amplifier in an optical network, and leverage the signal-to-noise ratio (SNR) of all the light paths of an Optical network, monitored in all the coherent receivers.
Abstract: By associating machine learning and an analytical model (i.e., the Gaussian noise model), we reduce uncertainties on the output power profile and the noise figure of each amplifier in an optical network. We leverage the signal-to-noise ratio (SNR) of all the light paths of an optical network, monitored in all the coherent receivers. The learning process is based on a gradient-descent algorithm where all the uncertain input parameters of the analytical model are iteratively modified from their estimated values to match with the SNR of light paths in a European optical network. The design margin is then reduced to 0.1 dB for new traffic demands.
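A toy version of this calibration loop is sketched below: uncertain amplifier parameters are adjusted by gradient descent so that a simplified SNR model reproduces the SNRs monitored at the coherent receivers. The linearized SNR model, the parameter set, and the data are placeholder assumptions standing in for the GN model and the real network:

```python
# Sketch: gradient-descent calibration of uncertain amplifier parameters (here noise-figure
# offsets) so that a linearized SNR model matches the SNRs monitored at the coherent
# receivers. The linear model and data are placeholders for the GN model and the network.
import numpy as np

rng = np.random.default_rng(6)
n_paths, n_amps = 200, 30
routes = rng.integers(0, 2, (n_paths, n_amps)).astype(float)  # amps crossed by each lightpath
true_offset = rng.normal(0, 0.5, n_amps)       # actual deviation from nominal NF [dB]
snr_nominal = rng.uniform(15, 25, n_paths)     # SNR predicted with nominal parameters [dB]
snr_monitored = snr_nominal - routes @ true_offset + rng.normal(0, 0.05, n_paths)

theta = np.zeros(n_amps)                       # start from the nominal (zero-offset) values
lr = 0.05
for step in range(5000):
    residual = (snr_nominal - routes @ theta) - snr_monitored
    grad = -2.0 / n_paths * routes.T @ residual
    theta -= lr * grad

print("max remaining SNR mismatch [dB]:",
      np.abs((snr_nominal - routes @ theta) - snr_monitored).max().round(3))
print("max NF-offset estimation error [dB]:", np.abs(theta - true_offset).max().round(3))
```

In the paper the forward model is the full GN model rather than a linear map, but the principle is the same: every monitored lightpath adds one equation constraining the uncertain parameters of the amplifiers it traverses, which is what shrinks the design margin to 0.1 dB.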

Journal ArticleDOI
TL;DR: In this article, a new variant of the BA model, taking into account the internodal signal-to-noise ratio, is proposed, which captures both the effects of graph structure and physical properties to generate better networks than traditional methods.
Abstract: The key goal in optical network design is to introduce intelligence in the network and deliver capacity when and where it is needed. It is critical to understand the dependencies between network topology properties and the achievable network throughput. Real topology data of optical networks are scarce, and often large sets of synthetic graphs are used to evaluate their performance including proposed routing algorithms. These synthetic graphs are typically generated via the Erdős–Rényi (ER) and Barabási–Albert (BA) models. Both models lead to distinct structural properties of the synthetic graphs, including degree and diameter distributions. In this paper, we show that these two commonly used approaches are not adequate for the modeling of real optical networks. The structural properties of optical core networks are strongly influenced by internodal distances. These, in turn, impact the signal-to-noise ratio, which is distance dependent. The analysis of optical network performance must, therefore, include spatial awareness to better reflect the graph properties of optical core network topologies. In this work, a new variant of the BA model, taking into account the internodal signal-to-noise ratio, is proposed. It is shown that this approach captures both the effects of graph structure and physical properties to generate better networks than traditional methods. The proposed model is compared to spatially agnostic approaches, in terms of the wavelength requirements and total information throughput, and highlights how intelligent choices can significantly increase network throughputs while saving fiber.
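An illustrative sketch of such a spatially aware preferential-attachment generator follows; the attachment weight below, which discounts long, low-SNR links, is a placeholder for the paper's actual SNR-dependent rule:

```python
# Sketch: Barabasi-Albert-style topology generation where attachment probability is
# weighted not only by node degree but also by an SNR-like, distance-dependent factor.
# The weighting function is an illustrative placeholder for the paper's rule.
import numpy as np
import networkx as nx

rng = np.random.default_rng(7)
n_nodes, m = 40, 2                           # nodes to add, links per new node
coords = rng.uniform(0, 1000, (n_nodes, 2))  # node locations [km]

G = nx.Graph()
for i in range(m + 1):                       # small fully connected seed
    G.add_node(i)
    for j in range(i):
        G.add_edge(i, j)

def snr_weight(u, v):
    d = np.linalg.norm(coords[u] - coords[v])
    return 10 ** (-0.02 * d / 10)            # toy: ~0.02 dB/km SNR penalty as a linear weight

for new in range(m + 1, n_nodes):
    G.add_node(new)
    existing = [v for v in G.nodes if v != new]
    # Attachment weight = degree (BA term) x distance/SNR term (spatial awareness)
    w = np.array([(G.degree(v) + 1) * snr_weight(new, v) for v in existing])
    targets = rng.choice(existing, size=m, replace=False, p=w / w.sum())
    for t in targets:
        G.add_edge(new, int(t))

print(nx.number_of_nodes(G), "nodes,", nx.number_of_edges(G), "edges")
print("mean edge length [km]:",
      round(float(np.mean([np.linalg.norm(coords[u] - coords[v]) for u, v in G.edges])), 1))
```

Compared with the plain BA rule, the distance-dependent weight biases new links towards nearby, high-SNR neighbors, which is what brings the synthetic graphs closer to real optical core topologies.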

Journal ArticleDOI
TL;DR: The simulation results suggest that co-packaged optics form a promising solution to keep up with bandwidth scaling in future networks, while the reduced number of switching layers can lead to significant mean packet delay improvements that start from 30% and reach up to 74% for high-load conditions.
Abstract: We investigate the advantages of using co-packaged optics for building low-diameter, large-scale high-performance computing (HPC) and data center networks. The increased escape bandwidth offered by co-packaged optics can enable high-radix switch implementations of more than 150 switch ports, which can be combined with data rates of up to 400 Gb/s per port. From the network architecture perspective, the key benefits of using co-packaged optics in future fat-tree networks include (a) the ability to implement large-scale topologies of more than 11,000 end points by eliminating the need for a third switching layer and (b) the ability to provide up to 4× higher bisection bandwidth compared to existing solutions, reducing at the same time the number of required switch application-specific integrated circuits by more than 80%. From the network operation perspective, both reduced energy consumption and lower packet delays can be achieved since fewer hops are required; i.e., packets need to traverse fewer serializer/deserializer lanes and fewer switch buffers, which reduces the probability of contending with other packets and improves the tolerance of network congestion. The performance of the proposed architecture is evaluated via discrete-event simulations for a wide range of representative HPC synthetic-traffic cases that include both hotspot and non-hotspot scenarios. The simulation results suggest that co-packaged optics form a promising solution to keep up with bandwidth scaling in future networks, while the reduced number of switching layers can lead to significant mean packet delay improvements that start from 30% and reach up to 74% for high-load conditions.
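The end-point count can be checked with simple arithmetic: in a two-tier folded-Clos (fat-tree) built from radix-k switches, each leaf uses k/2 ports for end points and k/2 for uplinks, and up to k leaves can be fully interconnected, giving k²/2 end points (a sketch under that standard assumption):

```python
# Sketch: end points supported by a two-tier fat-tree (folded Clos) of radix-k switches,
# assuming the standard k/2-down / k/2-up split per leaf switch.
def two_tier_endpoints(radix: int) -> int:
    return radix * radix // 2   # up to `radix` leaves, each with radix/2 end-point ports

for k in (64, 128, 152):
    print(f"radix {k}: {two_tier_endpoints(k):,} end points")
# A radix above ~150, as enabled by co-packaged optics, pushes a two-tier network past
# the 11,000 end points quoted in the abstract (152**2 / 2 = 11,552).
```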

Journal ArticleDOI
TL;DR: In this paper, a multitask learning model based on long short-term memory was proposed to detect, locate, and estimate the reflectance of fiber reflective faults including connectors and the mechanical splices by extracting insights from monitored data obtained by the optical time-domain reflectometry principle commonly used for troubleshooting of fiber optic cables or links.
Abstract: To reduce operation-and-maintenance expenses (OPEX) and to ensure optical network survivability, optical network operators need to detect and diagnose faults in a timely manner and with high accuracy. With the rapid advancement of telemetry technology and data analysis techniques, data-driven approaches leveraging telemetry data to tackle the fault diagnosis problem have been gaining popularity due to their quick implementation and deployment. In this paper, we propose a novel multitask learning model based on long short-term memory to detect, locate, and estimate the reflectance of fiber reflective faults (events) including the connectors and the mechanical splices by extracting insights from monitored data obtained by the optical time-domain reflectometry principle commonly used for troubleshooting of fiber optic cables or links. The experimental results prove that the proposed method (i) achieves a good detection capability and high localization accuracy within a short measurement time even for low SNR values and (ii) outperforms conventionally employed techniques.
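A minimal sketch of such a multitask architecture, with a shared LSTM over OTDR trace windows and three heads for detection, fault location, and reflectance; all dimensions, losses, and the random training data are assumptions, not the paper's model:

```python
# Sketch: multitask LSTM over OTDR trace windows with three heads: reflective-event
# detection, event location, and reflectance estimation. Sizes, losses, and the
# random training data are illustrative assumptions.
import numpy as np
import tensorflow as tf

seq_len = 128                                   # OTDR samples per analysis window
inputs = tf.keras.Input(shape=(seq_len, 1))
h = tf.keras.layers.LSTM(64)(inputs)            # shared representation

detect = tf.keras.layers.Dense(1, activation="sigmoid", name="detect")(h)
locate = tf.keras.layers.Dense(1, name="locate")(h)       # normalized fault position
reflect = tf.keras.layers.Dense(1, name="reflect")(h)     # reflectance [dB]

model = tf.keras.Model(inputs, [detect, locate, reflect])
model.compile(optimizer="adam",
              loss={"detect": "binary_crossentropy", "locate": "mse", "reflect": "mse"},
              loss_weights={"detect": 1.0, "locate": 1.0, "reflect": 0.5})

# Placeholder training data standing in for labeled OTDR traces
rng = np.random.default_rng(8)
X = rng.normal(0, 1, (256, seq_len, 1)).astype("float32")
y = {"detect": rng.integers(0, 2, (256, 1)).astype("float32"),
     "locate": rng.uniform(0, 1, (256, 1)).astype("float32"),
     "reflect": rng.uniform(-45, -25, (256, 1)).astype("float32")}
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
model.summary()
```

Sharing one recurrent encoder across the three tasks is the point of the multitask formulation: the detection, localization, and reflectance heads regularize each other and amortize the cost of learning the trace representation.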

Journal ArticleDOI
TL;DR: This paper presents a review of recent progress in achieving functions of communication, localization, resiliency, and dynamic networking using optical-layer techniques.
Abstract: Optical wireless access networks have seen rapid progress. With beam-steering capability, optical wireless communications can deliver very high capacity, support user mobility with indoor localization supported directly at the optical layer, be resilient against the blocking of beams by exploiting spatial diversity at the optical layer, and guarantee low-latency links with modified protocols and network architectures. This paper presents a review of recent progress in achieving functions of communication, localization, resiliency, and dynamic networking using optical-layer techniques.

Journal ArticleDOI
TL;DR: In this article, the authors demonstrate completely automated generation and collection of an ultra-large-scale experimental training dataset for ML-model-based QoT estimation by automation of transceivers and optical link parameters, as well as data transfer and DSP.
Abstract: Applications of machine learning (ML) models in optical communications and networks have been extensively investigated. For an optical wavelength-division-multiplexing (WDM) system, the quality of transmission (QoT) estimation generally depends on many parameters including the number and arrangement of WDM channels; launch power of each channel; number and distribution of fiber spans; attenuation, dispersion, and nonlinearity parameters and length of each fiber span; noise figure; gain and gain tilt of erbium-doped fiber amplifiers; transceiver noise; digital signal processing (DSP) performance; and so on. In recent years, ML-based QoT estimation schemes have gained significant attention. However, nearly all relevant works are conducted through simulations because it is difficult to obtain sufficient and high-quality datasets for training ML models. In this paper, we demonstrate completely automated generation and collection of an ultra-large-scale experimental training dataset for ML-model-based QoT estimation by automation of transceivers and optical link parameters, as well as data transfer and DSP. Implementation details and key codes of automation are presented. Artificial neural network models with one and two hidden layers are trained by the collected dataset, and brief QoT estimation results are evaluated and discussed to verify the performance and stability of the established automated system.

Journal ArticleDOI
TL;DR: This work considers modular sliceable bandwidth/bit rate variable transceivers (S-BVTs) based on vertical-cavity surface-emitting laser (VCSEL) technology and dense photonic integration, and analysis confirms that the system is promising to support Tb/s connections in future agile MANs.
Abstract: To deal with the challenging requirements of metropolitan area networks (MANs), it is essential to design cost-effective systems that can support high capacity and dynamic adaptation, as well as a synergy of programmability and efficient photonic technologies. This becomes crucial for very large MANs that support 5G, where multihop connections will need to be dynamically established at target capacities beyond Tb/s. Programmability, automation, and modularity of network elements are key desired features. In this work, a modular photonic system, programmable via a software-defined networking platform, designed for dynamic 5G-supportive MANs, is described and analyzed. We consider modular sliceable bandwidth/bit rate variable transceivers (S-BVTs) based on vertical-cavity surface-emitting laser (VCSEL) technology and dense photonic integration. The proposed system and its programmability are experimentally assessed using a VCSEL with 10 GHz bandwidth. The experiments are performed over connections as long as six hops and 160 km, from low-level aggregation nodes to metro-core nodes, thereby enabling IP off-loading. Furthermore, a numerical model is derived to estimate the performance when adopting higher bandwidth VCSELs (≥18 GHz) and integrated coherent receivers, as targeted in the proposed system. The analysis is performed for both 50 GHz and 25 GHz granularities. In the former case, 50 Gb/s capacity per flow can be supported over the targeted connections, for optical signal-to-noise ratio values above 26 dB. When the granularity is 25 GHz, the filter narrowing effect severely impacts the performance. Nevertheless, 1.2 Tb/s capacity (scalable to higher values if spectral/spatial dimensions are exploited) can be achieved when configuring the S-BVT to enable 40 VCSEL flows. This confirms that the system is promising to support Tb/s connections in future agile MANs.

Journal ArticleDOI
TL;DR: This paper compares several DA approaches applied to the problem of estimating the QoT of an optical lightpath using a supervised ML approach and shows that, when the number of samples from the target domain is limited to a few dozen, DA approaches consistently outperform standard supervised ML techniques.
Abstract: Machine learning (ML) is increasingly applied in optical network management, especially in cross-layer frameworks where physical layer characteristics may trigger changes at the network layer due to transmission performance measurements (quality of transmission, QoT) monitored by optical equipment. Leveraging ML-based QoT estimation approaches has proven to be a promising alternative to exploiting classical mathematical methods or transmission simulation tools. However, supervised ML models rely on large representative training sets, which are often unavailable, due to the lack of the necessary telemetry equipment or of historical data. In such cases, it can be useful to use training data collected from a different network. Unfortunately, the resulting models may be ineffective when applied to the current network, if the training data (the source domain) is not well representative of the network under study (the target domain). Domain adaptation (DA) techniques aim at tackling this issue, to make possible the transfer of knowledge among different networks. This paper compares several DA approaches applied to the problem of estimating the QoT of an optical lightpath using a supervised ML approach. Results show that, when the number of samples from the target domain is limited to a few dozen, DA approaches consistently outperform standard supervised ML techniques.

Journal ArticleDOI
Takahito Tanimura1, Setsuo Yoshida1, Kazuyuki Tajima1, Shoichiro Oda1, Takeshi Hoshida1 
TL;DR: In this paper, a DSP-based optical power profile monitor was proposed and demonstrated toward optical network tomography that captures the whole physical status of an optical network, including in-span and wavelength-specific power profiles over a multi-span transmission light path.
Abstract: A new class of digital signal processing (DSP)-based fiber-longitudinal optical power profile monitor has recently been proposed and demonstrated toward optical network tomography that captures the whole physical status of an optical network, including in-span and wavelength-specific power profiles over a multi-span transmission light path. In this invited paper, we review the monitor that disentangles signal waveforms received by a standard digital coherent receiver to a distance-wise power profile over a multi-span transmission link and discuss its implementation aspect, including the advantages and limitations of its cloud/edge implementation, the dependency of the number representation in its algorithm, and a feasibility study on field-programmable gate array implementation.

Journal ArticleDOI
TL;DR: In this paper, the authors examined supervised machine learning methods using multiple artificial neural networks (ANNs) to build models for gain spectra prediction of optical transmission line EDFAs under different operating conditions.
Abstract: Optical transmission systems with high spectral efficiency require accurate quality of transmission estimation for optical channel provisioning. However, the wavelength-dependent gain effects of erbium-doped fiber amplifiers (EDFAs) complicate precise optical channel power prediction and low-margin operation. In this work, we examine supervised machine learning methods using multiple artificial neural networks (ANNs) to build models for gain spectra prediction of optical transmission line EDFAs under different operating conditions. Channel-loading configurations and channel input power spectra are used as an a posteriori knowledge data feature for model training. In a hybrid learning approach, estimated gain spectra calculated by an analytical model are added as an a priori input data feature to further improve the EDFA ANN model performance in terms of prediction accuracy, training time, and quantity of training data. Using these methods, the root mean square error and maximum absolute error of the predicted channel output power can be as low as 0.144 dB and 1.6 dB, respectively.
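The hybrid learning idea can be sketched as follows: the ANN input concatenates the channel-loading mask and input power spectrum (a posteriori features) with the gain spectrum estimated by an analytical model (a priori feature), and the target is the measured gain spectrum. Everything below, including the crude analytical stand-in and the data generator, is an illustrative assumption:

```python
# Sketch: hybrid EDFA gain-spectrum model: ANN fed with channel loading, input power
# spectrum, and an analytical gain estimate as an extra a priori feature.
# The data generator and the "analytical model" below are crude placeholders.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(9)
n_ch, n_samples = 80, 3000

loading = rng.integers(0, 2, (n_samples, n_ch)).astype(float)      # on/off channel mask
p_in = loading * rng.uniform(-22, -16, (n_samples, n_ch))          # input power [dBm]
tilt = np.linspace(-0.5, 0.5, n_ch)                                # wavelength-dependent gain

def analytical_gain(load, pin):
    """Toy a priori model: flat gain minus total-power-dependent compression, plus tilt."""
    total_in = 10 * np.log10(np.maximum((load * 10 ** (pin / 10)).sum(axis=1), 1e-6))
    return 18.0 - 0.3 * total_in[:, None] + tilt[None, :]

g_prior = analytical_gain(loading, p_in)
# "Measured" gain = analytical trend + loading-dependent ripple the ANN must learn
g_true = g_prior + 0.4 * np.sin(loading.sum(axis=1))[:, None] * tilt[None, :] \
         + rng.normal(0, 0.05, (n_samples, n_ch))

X = np.hstack([loading, p_in, g_prior])        # a posteriori + a priori features
X_tr, X_te, y_tr, y_te = train_test_split(X, g_true, test_size=0.2, random_state=0)

ann = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
ann.fit(X_tr, y_tr)
err = ann.predict(X_te) - y_te
print("RMSE [dB]:", round(float(np.sqrt((err ** 2).mean())), 3),
      " max |error| [dB]:", round(float(np.abs(err).max()), 3))
```

Feeding the analytical estimate as an input means the ANN only has to learn the residual between the physical model and the measured gain, which is why the hybrid approach needs less training data and trains faster than a purely data-driven model.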

Journal ArticleDOI
TL;DR: The memory over optical network (MONet) as mentioned in this paper is a disaggregated data center architecture where serial (HMC)/parallel (DDR4) memory resources can be accessed over optically switched interconnects within and between racks.
Abstract: The memory over optical network (MONet) system is a disaggregated data center architecture where serial (HMC)/parallel (DDR4) memory resources can be accessed over optically switched interconnects within and between racks. An FPGA/ASIC-based custom hardware IP (ReMAT) supports heterogeneous memory pools, accommodates optical-to-electrical conversion for remote access, performs the required serial/parallel conversion, and hosts the necessary local memory controller. An optically interconnected HMC-based (serial I/O type) memory card is accessed by a memory controller embedded in the compute card, simplifying the hardware near the memory modules. This substantially reduces overheads on latency, cost, power consumption, and space. We characterize CPU–memory performance by experimentally demonstrating the impact of distance, number of switching hops, transceivers, channel bonding, and bit rate per transceiver on the bit error rate, power consumption, additional latency, sustained remote memory bandwidth/throughput (using industry standard benchmark STREAMS), and cloud workload performance (such as operations per second, average added latency, and retired instructions per second memcached with YCSB cloud workloads). MONet pushes the CPU–memory operational limit from a few centimeters to tens of meters, yet applications can experience as low as 10% performance penalty (at 36 m) compared to a direct-attached equivalent. Using the proposed parallel topology, a system can support up to 100,000 disaggregated cards.

Journal ArticleDOI
TL;DR: In this article, the authors conduct a comprehensive comparative study of quality-of-transmission (QoT) estimation for wavelength-division-multiplexed systems using artificial neural network (ANN)-based machine learning (ML) models and Gaussian noise (GN) model-based analytical models.
Abstract: We conduct a comprehensive comparative study of quality-of-transmission (QoT) estimation for wavelength-division-multiplexed systems using artificial neural network (ANN)-based machine learning (ML) models and Gaussian noise (GN) model-based analytical models. To obtain the best performance for comparison, we optimize all the system parameters for GN-based models in a brute-force manner. For ML models, we optimize the number of neurons, activation function, and number of layers. In simulation settings with perfect knowledge of system parameters and communication channels, GN-based analytical models generally outperform ANN models even though GN models are less accurate on the side channels due to the local white-noise assumption. In experimental settings, however, inaccurate knowledge of various link parameters degrades GN-based models, and ML generally estimates the QoT with better accuracy. However, ML models are temporally less stable and less generalizable to different link configurations. We also briefly study potential network capacity gains resulting from improved QoT estimators and reduced operating margins.

Journal ArticleDOI
TL;DR: In this paper, a Kafka-based monitoring framework leveraging the telemetry service is proposed, which allows a continuous monitoring of optical system data and their distribution through simple compressed text messages to a large number of consumers.
Abstract: Telemetry data acquisition is becoming crucial for efficient detection and timely reaction in the case of network status changes, such as failures. Streaming telemetry data to many collectors might be hindered by scalability issues, causing delay in localization and detection procedures. Providing efficient mechanisms for managing the massive telemetry traffic coming from network devices can pave the way to novel procedures, speeding up failure detection and thus minimizing response time. This paper proposes a novel Kafka-based monitoring framework leveraging the telemetry service. The proposed framework exploits the built-in scalability and reliability of Kafka to go beyond traditional monitoring systems. The framework allows a continuous monitoring of optical system data and their distribution through simple compressed text messages to a large number of consumers. Moreover, the proposed framework keeps a limited history of the monitored data, easing, for example, root cause failure analysis. The implemented monitoring platform is experimentally validated, considering the disaggregated paradigm, in terms of functional assessment, scalability, resiliency, and end-to-end message latency. Obtained results show that the framework is highly scalable, supporting up to around 4000 messages per second (and potentially more) with low CPU load, and is capable of achieving an end-to-end (i.e., producer–consumer) latency of about 50 ms. Moreover, the considered architecture is capable of overcoming the failure of a monitoring framework core component without losing any message.
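A minimal sketch of the producer side of such a framework, using the kafka-python client to stream compressed JSON telemetry samples to a topic; the broker address, topic name, and message fields are assumptions, not the paper's implementation:

```python
# Sketch: streaming optical telemetry samples to a Kafka topic as compressed JSON.
# Broker address, topic name, and message fields are illustrative assumptions.
import json
import time
from kafka import KafkaProducer   # pip install kafka-python

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
    compression_type="gzip",                  # compact text messages on the wire
)

def publish_sample(device_id: str, osnr_db: float, rx_power_dbm: float) -> None:
    sample = {
        "ts": time.time(),                    # producer-side timestamp
        "device": device_id,
        "osnr_db": osnr_db,
        "rx_power_dbm": rx_power_dbm,
    }
    # Keying by device keeps each device's samples ordered within one partition
    producer.send("optical-telemetry", key=device_id.encode(), value=sample)

for i in range(10):
    publish_sample("roadm-1/ocm-3", 23.5 + 0.1 * i, -12.0)
producer.flush()                              # block until all messages are acknowledged
```

On the consumer side, a KafkaConsumer subscribed to the same topic (one consumer group per detection or localization application) receives the same stream, and the broker's retention keeps the limited history of monitored data that eases root-cause analysis.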