Author

Sugang Xu

Bio: Sugang Xu is an academic researcher from the National Institute of Information and Communications Technology. The author has contributed to research in topics: Network topology & Disaster recovery. The author has an h-index of 8 and has co-authored 76 publications receiving 267 citations. Previous affiliations of Sugang Xu include Waseda University & University of Tokyo.


Papers
Journal Article
Cheng Zhang, Bo Gu, Kyoko Yamori, Sugang Xu, Yoshiaki Tanaka
TL;DR: A duopoly NSP case is studied; analytical and experimental results show that TDP benefits the NSPs, but the revenue improvement is limited due to the competition effect.
Abstract: Due to network users' different time preferences, network traffic load usually differs significantly at different times. At traffic peak times, network congestion may occur, which makes the quality of service for network users deteriorate. There are essentially two ways to improve the quality of service in this case: (1) network service providers (NSPs) overprovision network capacity by investment; (2) NSPs use time-dependent pricing (TDP) to reduce the traffic at peak times. However, overprovisioning network capacity can be costly. Therefore, some researchers have proposed TDP to control congestion as well as improve the revenue of the NSP. But to the best of our knowledge, all of the literature on time-dependent pricing schemes considers only the monopoly NSP case. In this paper, a duopoly NSP case is studied. The NSPs try to maximize their overall revenue by setting time-dependent prices, while users choose an NSP by considering their own preferences, the congestion status in the networks, and the prices set by the NSPs. Analytical and experimental results show that TDP benefits the NSPs, but the revenue improvement is limited due to the competition effect.
Key words: time-dependent pricing, revenue maximization, duopoly competition
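For intuition, the following is a minimal best-response sketch of the duopoly pricing game. It is illustrative only: the logit user-choice model and the demand, capacity, and price-grid values are assumptions, not the paper's formulation.

```python
import numpy as np

# Toy duopoly time-dependent pricing: each NSP sets one price per period;
# users split between the two NSPs via a logit choice over price plus a
# congestion penalty. All numbers below are hypothetical.
PERIODS = 2                        # peak, off-peak
DEMAND = np.array([100.0, 40.0])   # assumed load per period
CAPACITY = 80.0                    # assumed per-NSP capacity
GRID = np.linspace(0.1, 2.0, 20)   # candidate prices

def share(p_a, p_b, load):
    """Fraction of users in one period choosing NSP A over NSP B."""
    congestion = load / (2 * CAPACITY)           # symmetric congestion estimate
    u_a, u_b = -p_a - congestion, -p_b - congestion
    return np.exp(u_a) / (np.exp(u_a) + np.exp(u_b))

def revenue(p_a, p_b):
    """Per-NSP revenue summed over periods for the given price vectors."""
    rev_a = sum(p_a[t] * share(p_a[t], p_b[t], DEMAND[t]) * DEMAND[t] for t in range(PERIODS))
    rev_b = sum(p_b[t] * (1 - share(p_a[t], p_b[t], DEMAND[t])) * DEMAND[t] for t in range(PERIODS))
    return rev_a, rev_b

def best_response(opponent):
    """Revenue-maximizing price per period against a fixed opponent (grid search)."""
    return np.array([GRID[int(np.argmax([g * share(g, opponent[t], DEMAND[t]) * DEMAND[t]
                                         for g in GRID]))]
                     for t in range(PERIODS)])

# Alternate best responses as a crude search for the pricing equilibrium.
pa = pb = np.full(PERIODS, 1.0)
for _ in range(100):
    pa, pb = best_response(pb), best_response(pa)
print("NSP A prices:", pa, "NSP B prices:", pb, "revenues:", revenue(pa, pb))
```

Alternating best responses like this only sketches the competition effect; the paper's analytical treatment is not reproduced here.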

19 citations

Journal Article
TL;DR: This work solves the optimization problem of joint progressive recovery to find the optimal sequence of network element and DC repairs, with the objective of maximizing cumulative weighted content reachability in the network, and proposes a scalable heuristic for scheduling the sequential repair of network nodes/links and DCs.
Abstract: Large-scale disasters affecting both network and datacenter (DC) infrastructures can cause severe disruptions in cloud-based services. During post-disaster recovery, repairs are usually carried out in stages in a progressive manner due to limited repair resource availability. The order in which network elements and DCs are repaired can significantly impact users' reachability to important contents/services. We investigate joint progressive network and DC recovery in which network recovery and DC recovery are conducted in a coordinated manner such that users have access to the maximum possible amount of contents/services at each repair stage. We first solve the optimization problem of joint progressive recovery to find the optimal sequence of network element and DC repairs with the objective of maximizing cumulative weighted content reachability in the network. We then propose a scalable heuristic for scheduling the sequential repair of network nodes/links and DCs. Our model assumes that, at each repair stage, one network node with adjacent links and one DC can be fully repaired; however, full recovery may not be guaranteed due to limited resource availability. Hence, we also propose a "resource-aware" approach (with two resource-allocation strategies, namely "selective allocation" and "adaptive allocation"), which considers both full and partial recovery of elements based on available resources at each stage. We show that, compared to a disjoint progressive recovery approach, in which the network recovery and DC recovery plans are independent, our joint progressive recovery approach provides significantly higher per-stage content reachability in the network.
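As a rough illustration of the per-stage scheduling idea (not the paper's optimization model or its resource-aware heuristic), the sketch below greedily picks, at each stage, the one network node and one DC whose repair yields the largest weighted content reachability. The topology, content weights, and failure sets are hypothetical.

```python
from itertools import product

# Hypothetical post-disaster state: adjacency lists, user nodes, and DCs
# holding content with the given importance weights.
GRAPH = {"u1": {"a"}, "u2": {"b"},
         "a": {"u1", "b", "dc1"}, "b": {"u2", "a", "dc2"},
         "dc1": {"a"}, "dc2": {"b"}}
USERS = {"u1", "u2"}
DC_WEIGHT = {"dc1": 3.0, "dc2": 1.0}
FAILED_NODES, FAILED_DCS = {"a", "b"}, {"dc1", "dc2"}

def reachability(repaired_nodes, repaired_dcs):
    """Weighted content reachable from any user over working elements."""
    alive = (set(GRAPH) - FAILED_NODES - FAILED_DCS) | repaired_nodes | repaired_dcs
    score, seen = 0.0, set()
    for user in USERS & alive:
        stack = [user]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            if v in DC_WEIGHT:
                score += DC_WEIGHT[v]          # count each reachable DC once
            stack.extend(w for w in GRAPH[v] if w in alive and w not in seen)
    return score

def greedy_schedule():
    """Repair order: per stage, the (node, DC) pair with the largest gain."""
    done_n, done_d, plan = set(), set(), []
    while FAILED_NODES - done_n or FAILED_DCS - done_d:
        cand_n = (FAILED_NODES - done_n) or {None}
        cand_d = (FAILED_DCS - done_d) or {None}
        node, dc = max(product(cand_n, cand_d),
                       key=lambda nd: reachability(done_n | ({nd[0]} - {None}),
                                                   done_d | ({nd[1]} - {None})))
        done_n |= {node} - {None}
        done_d |= {dc} - {None}
        plan.append((node, dc))
    return plan

print(greedy_schedule())   # e.g. [('a', 'dc1'), ('b', 'dc2')]
```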

18 citations

01 Nov 2015
TL;DR: This document provides mechanisms to support distributed wavelength assignment with a choice of distributed wavelength assignment algorithms.
Abstract: This document provides extensions to Generalized Multiprotocol Label Switching (GMPLS) signaling for control of Wavelength Switched Optical Networks (WSONs). Such extensions are applicable in WSONs under a number of conditions including: (a) when optional processing, such as regeneration, must be configured to occur at specific nodes along a path, (b) where equipment must be configured to accept an optical signal with specific attributes, or (c) where equipment must be configured to output an optical signal with specific attributes. This document provides mechanisms to support distributed wavelength assignment with a choice of distributed wavelength assignment algorithms.
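As one concrete example of a wavelength assignment policy that such distributed signaling could select, here is a small first-fit sketch under the wavelength-continuity constraint. The link state and wavelength count are hypothetical, and the document's actual signaling encoding is not modelled.

```python
# First-fit wavelength assignment: pick the lowest-indexed wavelength that is
# free on every hop of the path (wavelength-continuity constraint).
NUM_LAMBDAS = 8

# Hypothetical per-link sets of wavelengths already in use.
links_in_use = {
    ("A", "B"): {0, 2},
    ("B", "C"): {0, 1},
    ("C", "D"): {3},
}

def first_fit(path):
    """Return the lowest-indexed wavelength free on every hop of `path`."""
    free = set(range(NUM_LAMBDAS))
    for hop in zip(path, path[1:]):
        free -= links_in_use.get(hop, set())
    return min(free) if free else None     # None -> wavelength-blocked

path = ["A", "B", "C", "D"]
lam = first_fit(path)
if lam is not None:
    for hop in zip(path, path[1:]):
        links_in_use.setdefault(hop, set()).add(lam)   # reserve along the path
print("assigned wavelength:", lam)
```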

16 citations

Journal Article
TL;DR: A simulated annealing approach is proposed to determine a target topology that requires a smaller logical topology change while satisfying the performance requirement; a threshold on the congestion performance requirement is used to balance the optimal congestion requirement and operation complexity.
Abstract: WDM optical networks represent the future direction of high-capacity wide-area network applications. By creating optical paths between nodes in the core networks, a logical topology can be created over the physical topology. By reconfiguring the logical topology, network resource utilization can be optimized in response to traffic pattern changes. From the viewpoint of network operation, the complexity of reconfiguration should be minimized as well. In this paper we consider logical topology reconfiguration in arbitrary-topology IP over WDM networks, balancing network performance against operation complexity. The exact formulation of the logical topology reconfiguration problem is usually given as a Mixed Integer Linear Program, but it grows intractable with increasing network size. Here we propose a simulated annealing approach to determine a target topology that requires a smaller logical topology change while also satisfying the performance requirement. A threshold on the congestion performance requirement is used to balance the optimal congestion requirement and operation complexity; this is achieved by tuning the threshold to a feasible value. For effective solution discovery, a two-stage SA algorithm is developed for multi-objective optimization.
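A generic single-stage SA skeleton for this kind of reconfiguration is sketched below. It is illustrative only: the congestion oracle is a placeholder, and the threshold H, weight ALPHA, and cooling schedule are assumed values rather than the paper's two-stage algorithm.

```python
import math
import random

# A candidate logical topology is a set of lightpaths (node pairs). The cost
# trades congestion in excess of a threshold H against the number of
# lightpaths changed relative to the currently deployed topology.
H = 0.7        # assumed acceptable maximum link congestion
ALPHA = 10.0   # assumed weight of congestion violation vs. reconfiguration effort

def congestion(topology):
    """Placeholder for routing traffic and returning max link utilization."""
    return random.Random(hash(frozenset(topology))).uniform(0.4, 1.0)

def cost(topology, current):
    changes = len(topology ^ current)                 # lightpaths added or removed
    violation = max(0.0, congestion(topology) - H)    # only above-threshold congestion counts
    return ALPHA * violation + changes

def neighbour(topology, all_pairs):
    t = set(topology)
    t.symmetric_difference_update({random.choice(all_pairs)})  # toggle one lightpath
    return frozenset(t)

def anneal(current, all_pairs, t0=5.0, cooling=0.95, iters=2000):
    best = state = frozenset(current)
    temp = t0
    for _ in range(iters):
        cand = neighbour(state, all_pairs)
        delta = cost(cand, current) - cost(state, current)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            state = cand
            if cost(state, current) < cost(best, current):
                best = state
        temp *= cooling
    return best

pairs = [(i, j) for i in range(4) for j in range(4) if i != j]
initial = frozenset({(0, 1), (1, 2), (2, 3)})
print(anneal(initial, pairs))
```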

15 citations

Journal Article
TL;DR: This work proposes an approach for quickly recreating OPM and achieving robust telemetry based on OpenConfig YANG; the approach can tolerate low post-disaster bandwidth and can adapt the telemetry system to the changing conditions of the C/M-plane network.
Abstract: Optical performance monitoring (OPM) and the corresponding telemetry systems play an important role in modern optical transport networks based on software-defined networking (SDN). There have been extensive studies and standardization activities to build high-speed and high-accuracy OPM/telemetry systems that can ensure sufficient monitoring data for effective network control and management. However, current solutions for OPM/telemetry assume that the control and management plane (C/M-plane) always provides sufficient bandwidth (BW) to deliver telemetry data. Unfortunately, in the event of several concurrent network failures (e.g., following a large-scale disaster), C/M-plane networks can become heavily degraded and/or unstable, and even experience isolation of some of their parts. Under such circumstances, the existing OPM systems would hardly function. To enhance resiliency and to ensure the quick recovery of OPM/telemetry in case of a disaster, we propose an approach for quick recreation of OPM and for achieving robust telemetry based on OpenConfig YANG. Our proposal addresses three key problems: (1) how to quickly recreate the lost OPM capability, (2) how to address the mismatch between the high data rate of OPM and the low BW in the C/M-plane network, and (3) how to flexibly reconfigure the telemetry system to be adaptive to sudden BW changes in the C/M-plane network. We implement a testbed and experimentally demonstrate that our proposal can tolerate low post-disaster bandwidth and can adapt the telemetry system to the changing conditions of the C/M-plane network.
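The sketch below illustrates the third idea, adapting the telemetry configuration to whatever bandwidth survives on the C/M-plane. The parameter names, thresholds, and message sizes are assumptions for illustration; they are not OpenConfig YANG paths or the paper's implementation.

```python
# Toy adaptation loop for a streaming-telemetry exporter: when the usable
# C/M-plane bandwidth shrinks, report fewer OPM parameters less often so the
# telemetry stream still fits the surviving links.
FULL_SET = ["osnr", "ber_pre_fec", "chromatic_dispersion", "input_power",
            "output_power", "q_factor", "polarization_dependent_loss"]
CORE_SET = ["osnr", "ber_pre_fec", "input_power"]   # kept even when degraded

def plan_telemetry(available_kbps, bytes_per_sample=64):
    """Choose (parameters, reporting interval in seconds) for the given bandwidth."""
    params = FULL_SET if available_kbps >= 100 else CORE_SET
    stream_kbps = lambda interval: len(params) * bytes_per_sample * 8 / 1000 / interval
    interval = 1.0
    # Double the interval until the stream uses at most ~half of the link budget.
    while stream_kbps(interval) > 0.5 * available_kbps and interval < 600:
        interval *= 2
    return params, interval

for bw in (1000, 50, 2):            # kb/s seen on the C/M-plane after a failure
    print(bw, plan_telemetry(bw))
```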

15 citations


Cited by
Journal Article
TL;DR: In this paper, a bibliographical survey, general background, and comparative analysis of the three most commonly used techniques, (i) Capacitor Placement, (ii) Feeder Reconfiguration, and (iii) DG Allocation, for loss minimization in distribution networks are presented, based on over 147 published articles, so that new researchers can easily find the literature in this area.
Abstract: The distribution system provides a link between the high-voltage transmission system and low-voltage consumers; the I²R loss in a distribution system is therefore high because of the low voltage and high current. Distribution companies (DISCOs) have an economic incentive to reduce losses in their networks. Usually, this incentive is the cost difference between real and standard losses: if real losses are higher than the standard ones, the DISCOs are economically penalized, and if the opposite happens, they obtain a profit. The loss minimization problem is thus a well-researched topic, and previous approaches differ from each other in the tool selected for loss minimization and, thereafter, in their problem formulation or the solution methods employed. Many methods of loss reduction exist, such as feeder reconfiguration, capacitor placement, high-voltage distribution systems, conductor grading, and Distributed Generator (DG) allocation. This paper gives a bibliographical survey, general background, and comparative analysis of the three most commonly used techniques, (i) Capacitor Placement, (ii) Feeder Reconfiguration, and (iii) DG Allocation, for loss minimization in distribution networks, based on over 147 published articles, so that new researchers can easily find literature in this area.
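The quantity all three techniques attack is the branch I²R loss; in standard per-branch form (a textbook expression, not taken from the survey):

```latex
P_{\mathrm{loss}} \;=\; \sum_{b=1}^{B} I_b^{2} R_b
\;=\; \sum_{b=1}^{B} \frac{P_b^{2} + Q_b^{2}}{|V_b|^{2}}\, R_b
```

Capacitor placement reduces the reactive flows Q_b, feeder reconfiguration changes which resistances R_b each flow traverses, and DG allocation reduces the active power P_b drawn through upstream branches.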

179 citations

Journal Article
TL;DR: This survey presents the state of the art on the Path Computation Element (PCE) architecture for GMPLS-controlled networks, as carried out by the research and standardization communities, in which the PCE is shown to achieve a number of evident benefits.
Abstract: Quality of Service-enabled applications and services rely on Traffic Engineering-based (TE) Label Switched Paths (LSP) established in core networks and controlled by the GMPLS control plane. The path computation process is crucial to achieving the desired TE objective, and its actual effectiveness depends on a number of factors. Mechanisms utilized to update topology and TE information, as well as the latency between path computation and resource reservation, which is typically distributed, may affect path computation efficiency. Moreover, TE visibility is limited in many network scenarios, such as multi-layer, multi-domain and multi-carrier networks, and it may negatively impact resource utilization. The Internet Engineering Task Force (IETF) has promoted the Path Computation Element (PCE) architecture, proposing a dedicated network entity devoted to the path computation process. The PCE represents a flexible instrument to overcome visibility and distributed provisioning inefficiencies. Communications between path computation clients (PCC) and PCEs, realized through the PCE Protocol (PCEP), also enable inter-PCE communications, offering an attractive way to perform TE-based path computation among cooperating PCEs in multi-layer/domain scenarios while preserving scalability and confidentiality. This survey presents the state of the art on the PCE architecture for GMPLS-controlled networks as carried out by the research and standardization communities. In this work, packet (i.e., MPLS-TE and MPLS-TP) and wavelength/spectrum (i.e., WSON and SSON) switching capabilities are the considered technological platforms, in which the PCE is shown to achieve a number of evident benefits.
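As a toy stand-in for the computation a PCE performs against its traffic-engineering database, the sketch below runs a bandwidth-constrained shortest-path search. The TE database contents are hypothetical, and PCEP message encoding and state synchronization are not modelled.

```python
import heapq

# Hypothetical TE database: (src, dst) -> (TE metric, unreserved bandwidth in Gb/s).
TED = {
    ("A", "B"): (10, 40),  ("B", "A"): (10, 40),
    ("B", "C"): (10, 5),   ("C", "B"): (10, 5),
    ("A", "C"): (30, 100), ("C", "A"): (30, 100),
    ("C", "D"): (10, 100), ("D", "C"): (10, 100),
}

def compute_path(src, dst, bw_gbps):
    """Dijkstra over links that satisfy the bandwidth constraint."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue                             # stale queue entry
        for (a, b), (metric, unreserved) in TED.items():
            if a != u or unreserved < bw_gbps:
                continue                         # prune links without enough bandwidth
            nd = d + metric
            if nd < dist.get(b, float("inf")):
                dist[b], prev[b] = nd, u
                heapq.heappush(heap, (nd, b))
    if dst not in dist:
        return None                              # no feasible path
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path))

print(compute_path("A", "D", bw_gbps=20))        # avoids the 5 Gb/s B-C link
```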

122 citations

Journal Article
TL;DR: In this article, a deep Q-network (DQN) based offloading algorithm was proposed to minimize the monetary cost and energy consumption of mobile users without a known mobility pattern.
Abstract: With the rapid increase in demand for mobile data, mobile network operators are trying to expand wireless network capacity by deploying wireless local area network (LAN) hotspots onto which they can offload their mobile traffic. However, these network-centric methods usually do not fulfill the interests of mobile users (MUs). Taking into consideration issues such as different applications' deadlines, monetary cost, and energy consumption, how an MU decides whether to offload traffic to a complementary wireless LAN is an important question. Previous studies assume the MU's mobility pattern is known in advance, which is not always true. In this paper, we study the MU's policy for minimizing monetary cost and energy consumption without a known mobility pattern. We propose to use a reinforcement learning technique called deep Q-network (DQN) for the MU to learn the optimal offloading policy from past experiences. In the proposed DQN-based offloading algorithm, the MU's mobility pattern is no longer needed. Furthermore, the MU's state of remaining data is fed directly into the convolutional neural network in the DQN without discretization. Therefore, not only does the discretization error present in previous work disappear, but the proposed algorithm also gains the ability to generalize from past experiences, which is especially effective when the number of states is large. Extensive simulations are conducted to validate the proposed offloading algorithm.
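A minimal DQN skeleton for this kind of offloading decision might look as follows. It is illustrative: the state definition, reward, network sizes, and hyperparameters are assumptions, and a target network and the full environment loop are omitted.

```python
import random
from collections import deque

import torch
import torch.nn as nn

# State: (remaining data in MB, seconds to deadline, WLAN available 0/1);
# the continuous remaining-data value is fed to the network directly,
# without discretization. Actions are illustrative.
ACTIONS = ("wait", "cellular", "wlan")

class QNet(nn.Module):
    def __init__(self, state_dim=3, n_actions=len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(state_dim, 64), nn.ReLU(),
                                 nn.Linear(64, n_actions))
    def forward(self, x):
        return self.net(x)

qnet = QNet()
optimizer = torch.optim.Adam(qnet.parameters(), lr=1e-3)
replay, GAMMA, EPS = deque(maxlen=10_000), 0.99, 0.1

def act(state):
    """Epsilon-greedy action selection from the current Q-network."""
    if random.random() < EPS:
        return random.randrange(len(ACTIONS))
    with torch.no_grad():
        return int(qnet(torch.tensor(state).float()).argmax())

def train_step(batch_size=32):
    """One TD update on a minibatch sampled from the replay buffer."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s, a, r, s2, done = map(torch.tensor, zip(*batch))
    q = qnet(s.float()).gather(1, a.long().unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        target = r.float() + GAMMA * qnet(s2.float()).max(1).values * (1 - done.float())
    loss = nn.functional.mse_loss(q, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# One toy transition: offload over WLAN, pay a small monetary/energy cost.
state = (8.0, 30.0, 1.0)
action = act(state)
replay.append((state, action, -0.2, (6.0, 29.0, 1.0), 0.0))
train_step()
```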

36 citations

Journal Article
TL;DR: The state-of-the-art, potentials, and limitations of the ONOS controller applied to disaggregated optical networks are reported on with specific focus on the ongoing activities within the ODTN working group.
Abstract: Use of disaggregated equipment in optical transport networks is emerging as an attractive solution to bring flexibility and break vendor lock-in dependencies. The disaggregation process requires standard protocols and interfaces between the control plane and network equipment. NETCONF has been selected as the standard protocol, and multiple initiatives are currently working on the definition of standard models for each type of data-plane device. Different levels of disaggregation of the data plane are under evaluation, and it is still not clear up to which level it will be useful to disaggregate the data plane. The disaggregation of optical networks has led to the development of several controllers based on software-defined networking concepts, providing an environment for creating and deploying networking applications on optical networks. Among them, the Open Network Operating System (ONOS) controller features the most active community, with the recent establishment of the Open and Disaggregated Transport Network (ODTN) working group, specifically focused on introducing the functionality required to control and monitor disaggregated transport networks. This paper reports on the state of the art, potentials, and limitations of the ONOS controller applied to disaggregated optical networks, with specific focus on the ongoing activities within the ODTN working group. The paper then describes a set of experiments performed on a setup including both emulated and real optical devices controlled with ONOS. The experiments consider both the establishment of a connectivity service and the recovery of connectivity in case of a failure on the data plane.
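For a flavor of the NETCONF interaction such controllers rely on, a minimal Python example using the ncclient library is shown below. The device address and credentials are placeholders, and ONOS's own ODTN drivers (written in Java) are not reproduced here; this only illustrates the protocol exchange.

```python
from ncclient import manager

# Fetch the running configuration of a (hypothetical) disaggregated device
# over NETCONF, the protocol selected for disaggregated transport networks.
with manager.connect(host="198.51.100.10", port=830,
                     username="admin", password="admin",
                     hostkey_verify=False) as m:
    reply = m.get_config(source="running")
    print(reply.xml[:500])      # first part of the raw XML reply
```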

33 citations

Journal Article
TL;DR: A study of the relationship between PSP and ESP in the simultaneous-play game (SPG) scenario, in which they compete to set prices of their cloud services simultaneously, shows that MUs prefer to select service from the edge cloud if the number of tasks they run is small.
Abstract: By offloading the tasks that mobile users (MUs) run on their mobile devices (MDs) to the data centers of remote public clouds, mobile cloud computing (MCC) can greatly improve the computing capacity and prolong the battery life of MDs. However, the data centers of a remote public cloud are generally far from the MUs, so a long delay is incurred by transmission from the base station to the public cloud over the Internet. Mobile edge computing (MEC) is recognized as a promising technique to augment the computation capabilities of MDs and shorten the transmission delay. Nevertheless, compared with traditional MCC, MEC generally has a limited amount of cloud resources. Therefore, choosing whether to offload a task to the MCC or the MEC is a challenging issue for each MU. In this paper, we investigate service selection in a mobile cloud architecture in which MUs select cloud services from two cloud service providers (CSPs), i.e., a public cloud service provider (PSP) and an edge cloud service provider (ESP). We use an M/M/∞ queue and an M/M/1 queue to model the PSP and the ESP, respectively. We analyze the interaction of the two CSPs and the MUs by adopting a Stackelberg game, in which the PSP and ESP set their prices first, and then the MUs decide which cloud services to select based on performance and price. In particular, we study the relationship between the PSP and ESP in the simultaneous-play game (SPG) scenario, in which they compete by setting the prices of their cloud services simultaneously. Our numerical results show that MUs prefer to select service from the edge cloud if the number of tasks they run is small. On the other hand, more tasks are offloaded to the remote public cloud as the number of tasks grows large.
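The delay asymmetry behind this result follows from the standard mean response times of the two queueing models; d_WAN below is an assumed extra wide-area transfer delay toward the public cloud, not a quantity from the paper.

```latex
T_{\mathrm{public}} \;=\; \frac{1}{\mu_p} + d_{\mathrm{WAN}}
\qquad\text{(M/M/$\infty$: no queueing delay)},
\qquad
T_{\mathrm{edge}} \;=\; \frac{1}{\mu_e - \lambda_e},
\quad \lambda_e < \mu_e
\quad\text{(M/M/1)}.
```

With light load, 1/(mu_e - lambda_e) is small and the edge avoids d_WAN; under heavy load it grows without bound, so the load-independent public-cloud delay becomes preferable, consistent with the numerical results above.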

30 citations