
Showing papers by "Biswanath Mukherjee published in 2016"


Journal ArticleDOI
TL;DR: This article aims to guide service providers, industry practitioners, and local entrepreneurs with a technology-and-deployment-trend analysis to choose, deploy, and operate suitable telecommunication networks depending on the unique features of the rural/remote area.
Abstract: Increasing economic and educational exposure, and promotion of global health and wellness can be achieved through the power of sharing knowledge, technology, and resources. ICT can play a key role in disseminating such knowledge across the world. But a digital divide exists between urban and rural/remote areas, which results in economic and social disparities across regions. Developing last-mile telecommunication technologies for rural/remote areas is a crucial aspect in providing computing and ICT services that can integrate millions of stakeholders in rural/remote areas globally into the digital age, particularly with the advent of cloud computing. This article focuses on the different aspects of providing last-mile rural telecommunication access such as interfering factors, technology options, and deployment trends. This article aims to guide service providers, industry practitioners, and local entrepreneurs with a technology-and-deployment-trend analysis to choose, deploy, and operate suitable telecommunication networks depending on the unique features of the rural/remote area. Our goal is to bring attention to accessible and affordable technologies with practical considerations.

156 citations


Journal ArticleDOI
TL;DR: This paper comprehensively surveys a large body of work focusing on the resilience of cloud computing, in each (or a combination) of the server, network, and application components, and introduces and categorizes a large number of techniques for cloud computing infrastructure resiliency.
Abstract: Today’s businesses increasingly rely on cloud computing, which brings both great opportunities and challenges. One of the critical challenges is resiliency: disruptions due to failures (either accidental or because of disasters or attacks) may entail significant revenue losses (e.g., US$ 25.5 billion in 2010 for North America). Such failures may originate at any of the major components in a cloud architecture (and propagate to others): 1) the servers hosting the application; 2) the network interconnecting them (on different scales, inside a data center, up to wide-area connections); or 3) the application itself. We comprehensively survey a large body of work focusing on resilience of cloud computing, in each (or a combination) of the server, network, and application components. First, we present the cloud computing architecture and its key concepts. We highlight both the infrastructure (servers, network) and application components. A key concept is virtualization of infrastructure (i.e., partitioning into logically separate units), and thus we detail the components in both physical and virtual layers. Before moving to the detailed resilience aspects, we provide a qualitative overview of the types of failures that may occur (from the perspective of the layered cloud architecture), and their consequences. The second major part of the paper introduces and categorizes a large number of techniques for cloud computing infrastructure resiliency. This ranges from designing and operating the facilities, servers, networks, to their integration and virtualization (e.g., also including resilience of the middleware infrastructure). The third part focuses on resilience in application design and development. We study how applications are designed, installed, and replicated to survive multiple physical failure scenarios as well as disaster failures.

111 citations


Journal ArticleDOI
TL;DR: This paper proposes and leverages the concept of a virtual base station (VBS), which is dynamically formed for each cell by assigning virtualized network resources, i.e., a virtualized fronthaul link connecting the DU and RU, and virtualized functional entities performing baseband processing in the DU cloud.
Abstract: In recent years, the increasing traffic demand in radio access networks (RANs) has led to considerable growth in the number of base stations (BSs), posing a serious scalability issue, including the energy consumption of BSs. Optical-access-enabled Cloud-RAN (CRAN) has been recently proposed as a next-generation access network. In CRAN, the digital unit (DU) of a conventional cell site is separated from the radio unit (RU) and moved to the “cloud” (DU cloud) for centralized signal processing and management. Each DU/RU pair exchanges bandwidth-intensive digitized baseband signals through an optical access network (fronthaul). Time-wavelength division multiplexing (TWDM) passive optical network (PON) is a promising fronthaul solution due to its low energy consumption and high capacity. In this paper, we propose and leverage the concept of a virtual base station (VBS), which is dynamically formed for each cell by assigning virtualized network resources, i.e., a virtualized fronthaul link connecting the DU and RU, and virtualized functional entities performing baseband processing in DU cloud. We formulate and solve the VBS formation (VF) optimization problem using an integer linear program (ILP). We propose novel energy-saving schemes exploiting VF for both the network planning stage and traffic engineering stage. Extensive simulations show that CRAN with our proposed VF schemes achieves significant energy savings compared to traditional RAN and CRAN without VF.
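The planning-stage energy saving in the VF schemes above comes from consolidating per-cell baseband processing onto as few DUs as possible so that idle DUs can be powered off. A minimal sketch of that idea (a first-fit-decreasing bin-packing heuristic of our own, not the paper's ILP; all cell loads and DU capacities are invented):

```python
# Hypothetical sketch: pack per-cell baseband loads onto DUs so that
# unused DUs can be switched off. This is a first-fit-decreasing
# heuristic, not the paper's ILP formulation.

def consolidate(cell_loads, du_capacity):
    """Assign cells to DUs; returns a list of DUs, each a list of (cell, load)."""
    dus = []
    # place the heaviest cells first (first-fit decreasing)
    for cell, load in sorted(cell_loads.items(), key=lambda x: -x[1]):
        for du in dus:
            if sum(l for _, l in du) + load <= du_capacity:
                du.append((cell, load))
                break
        else:
            dus.append([(cell, load)])  # open a new DU
    return dus

# Four cells whose loads fit on two DUs of normalized capacity 1.0.
loads = {"cell1": 0.6, "cell2": 0.5, "cell3": 0.3, "cell4": 0.2}
dus = consolidate(loads, 1.0)
print(len(dus), "active DUs instead of 4")  # 2 active DUs instead of 4
```

Switching off the two freed DUs is where the energy saving comes from; the paper's ILP additionally accounts for the virtualized fronthaul links.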

91 citations


Journal ArticleDOI
TL;DR: An integrated design for network function instance allocation and end-to-end demand realization sharing the same physical substrate network is proposed, and the corresponding network design problem is shown to be NP-complete.
Abstract: Network function virtualization is an emerging network resource utilization approach which decouples network functions from proprietary hardware and enables adaptive services to end-user requests. To accommodate the network function requests, network function instances are created and deployed at runtime. In this paper, we study a network virtualization scheme to orchestrate and manage networking and network function services. We propose an integrated design for network function instance allocation and end-to-end demand realization sharing the same physical substrate network and demonstrate that the corresponding network design problem is NP-complete. A mixed-integer programming formulation is proposed first to find its optimal solution, followed by a two-player pure-strategy game model which captures the competition on physical resources between network function instance allocation and routing. We then design an algorithm based on iterative elimination of weakly dominated strategies from game theory. Computational results demonstrate the value of the integrated approach and its ability to allocate network function instances supporting end-to-end requests with limited physical resources in optical networks.
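Iterated elimination of weakly dominated strategies, the game-theoretic routine the algorithm above builds on, can be illustrated on a toy two-player normal-form game (our own implementation and payoff matrix, not the paper's resource-competition model):

```python
# Illustrative sketch: iterated elimination of weakly dominated strategies
# in a two-player normal-form game. payoff[i][j] = (payoff to player 1,
# payoff to player 2) when player 1 plays row i and player 2 plays column j.

def weakly_dominated(rows, cols, payoff, player):
    """Return the indices of strategies weakly dominated for `player`."""
    dominated = set()
    strategies = rows if player == 0 else cols
    for s in strategies:
        for t in strategies:
            if s == t:
                continue
            if player == 0:
                diffs = [payoff[t][c][0] - payoff[s][c][0] for c in cols]
            else:
                diffs = [payoff[r][t][1] - payoff[r][s][1] for r in rows]
            # t weakly dominates s: never worse, strictly better somewhere
            if all(d >= 0 for d in diffs) and any(d > 0 for d in diffs):
                dominated.add(s)
                break
    return dominated

def iterated_elimination(payoff):
    rows = list(range(len(payoff)))
    cols = list(range(len(payoff[0])))
    changed = True
    while changed:
        changed = False
        for player in (0, 1):
            dom = weakly_dominated(rows, cols, payoff, player)
            if dom:
                changed = True
                if player == 0:
                    rows = [r for r in rows if r not in dom]
                else:
                    cols = [c for c in cols if c not in dom]
    return rows, cols

# Toy game: strategy 1 weakly dominates strategy 0 for both players.
game = [[(1, 1), (1, 2)],
        [(2, 1), (2, 2)]]
print(iterated_elimination(game))  # -> ([1], [1])
```

In the paper's setting, the "players" are the instance-allocation and routing decisions competing for the same physical resources.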

78 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated a disaster-aware submarine fiber-optic cable deployment optimization problem to minimize such expected costs in case of a disaster, and presented an Integer Linear Program to address the problem, together with illustrative numerical examples.
Abstract: With the increasing social and economic reliance on the Internet and the significant monetary and non-monetary societal cost associated with service interruption, network survivability is an important element in telecommunication network design. A major cause of Internet service interruption is breakage of fiber-optic cables due to man-made or natural disasters such as earthquakes. In addition to the societal cost, there is also the cost of repairing damaged cables, paid by the cable owner. A disaster-resilient submarine cable deployment can achieve significant cost savings when disaster strikes. In this study, we investigate a disaster-aware submarine fiber-optic cable deployment optimization problem to minimize such expected costs in case of a disaster. While selecting paths for the cables, our approach aims to minimize the expected cost for both the cable owner and the affected society, considering that submarine fiber-optic cables may break because of natural disasters, subject to the limitation of the available deployment budget and other constraints. In our approach, localized disaster-unrelated potential disconnections (e.g., due to shark bites) are avoided by providing a backup cable along with the primary cable. We consider a mesh topology network with multiple nodes located at different sea/ocean shores, submarine fiber-optic cables of irregular shape, and the topography of the undersea environment. We present an Integer Linear Program to address the problem, together with illustrative numerical examples. Finally, we validate our approach by applying it to a case study of an existing cable system in the Mediterranean Sea, and the results show that we can significantly reduce the overall expected cost at a slight increase in deployment cost. The results demonstrate a potential saving of billions of US dollars for the society in case of a future disaster.
To achieve such large savings, cable companies may need to lay somewhat longer cables to avoid potential disaster areas, which increases deployment cost by an amount that is relatively small compared to the potential savings in case of a disaster. Understanding such trade-offs is important for stakeholders, including government agencies, the cable industry, and insurance companies, which may have different objectives but can work together for the overall benefit of society.
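The core trade-off, a longer route with lower expected disaster cost beating a shorter, riskier one, can be sketched with a risk-weighted shortest-path computation (a simplification of the paper's ILP; the edge costs, failure probabilities, and damage costs below are invented, not from the Mediterranean case study):

```python
import heapq

# Hypothetical sketch: choose a cable route minimizing deployment cost
# plus expected disaster cost, via Dijkstra on risk-weighted edges.

def expected_cost_path(graph, src, dst):
    """graph[u] = [(v, deploy_cost, p_fail, damage_cost), ...]."""
    dist = {src: 0.0}
    prev = {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, deploy, p, damage in graph[u]:
            # expected cost = deployment + P(disaster) * (repair + societal cost)
            nd = d + deploy + p * damage
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return list(reversed(path)), dist[dst]

# Two routes A->B: a direct cable through a seismic zone, or a longer
# detour via C through safer waters.
g = {
    "A": [("B", 10.0, 0.30, 100.0), ("C", 8.0, 0.01, 100.0)],
    "B": [],
    "C": [("B", 7.0, 0.01, 100.0)],
}
path, cost = expected_cost_path(g, "A", "B")
print(path, cost)  # the detour wins: ['A', 'C', 'B'] 17.0
```

The direct route costs 10 to deploy but 40 in expectation (10 + 0.3 × 100), while the detour costs 15 to deploy and only 17 in expectation, mirroring the paper's finding that a slight increase in deployment cost can yield large expected savings.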

35 citations


Journal ArticleDOI
TL;DR: An online traffic-aware intelligent differentiated allocation of lightpaths (TIDAL) algorithm, based on stateful grooming and the MOILP, is proposed to accommodate dynamic tidal traffic and achieves a significant performance improvement in an energy-efficient way.
Abstract: The growing popularity of high-speed mobile communications, cloud computing, and the Internet of Things (IoT) has reinforced the tidal traffic phenomenon, which induces spatio-temporal disequilibrium in the network traffic load. The main reason for tidal traffic is the large-scale population migration between business areas during the day and residential areas during the night. Traffic grooming provides an effective solution to aggregate multiple fine-grained IP traffic flows into the optical transport layer by flexibly provisioning lightpaths over the physical topology. In this paper, we introduce a comprehensive study on energy efficiency and network performance enhancement in the presence of tidal traffic. We propose and leverage the concept of stateful grooming to apply differentiated provisioning policies based on the state of network nodes. We formulate and solve the node-state-decision optimization problem, which can decide the specific state of network nodes when a certain traffic profile is given, considering the trade-off between energy efficiency and blocking performance with a multi-objective integer linear program (MOILP). Then, we propose an online traffic-aware intelligent differentiated allocation of lightpaths (TIDAL) algorithm, based on stateful grooming and the MOILP, to accommodate the dynamic tidal traffic. Our illustrative numerical results show that the proposed method can achieve a significant performance improvement in an energy-efficient way.

34 citations


Journal ArticleDOI
TL;DR: Simulation results show that V-CRAN can enhance the throughput of users at the cell-edge, as well as significantly reduce the number of handovers, handover delay, and failure rate.
Abstract: To meet challenging 5G capacity requirements, operators are densifying their cellular networks by deploying additional small cells to cover hot spots, and such an increase in the number and density of cells may result in excessive numbers of handovers. In this study, we propose a handover reduction mechanism implemented in a cloud radio access network (CRAN) by exploiting the high capacity of an optical access network serving as a “fronthaul.” CRAN has been proposed as a 5G radio access network architecture, where the digital unit (DU) of a conventional base station (BS) is separated from the radio unit (RU) and moved to the “cloud” (DU-cloud) for better mobility management and cost saving. Separating RUs and DUs requires a low-latency and high-bandwidth 5G transport network to handle “fronthaul” traffic, for which optical access is an excellent choice. Here, we present a new 5G architecture, called virtualized-CRAN (V-CRAN), moving toward a cell-less 5G mobile network architecture. We leverage the concept of a “virtualized-BS” (V-BS) that can be formed by exploiting several enabling technologies such as software-defined radio and coordinated multi-point transmission/reception. A V-BS can be formed on a per-user basis by allocating virtualized resources on demand so that common signals can be jointly transmitted from multiple RUs to the user without triggering handover. We first model the handover reduction optimization problem for a scenario where future mobility information is known, and then propose a suite of algorithms for a scenario where future information is unknown. Simulation results show that V-CRAN can enhance the throughput of users at the cell-edge, as well as significantly reduce the number of handovers, handover delay, and failure rate.

33 citations


Proceedings ArticleDOI
01 Nov 2016
TL;DR: This study considers a BBU hoteling scheme based on the concept of access cloud network (ACN), and suggests a partial ACN protection approach which provides degraded services with only 8% additional network resources.
Abstract: Cloud Radio Access Network (C-RAN) will improve mobile radio coordination and resource efficiency by allowing baseband processing unit (BBU) functions to be virtualized and centralized, i.e., deployed in a BBU hotel. We consider a BBU hoteling scheme based on the concept of an access cloud network (ACN). An ACN consists of virtualized BBUs (vBBUs) placed in metro cloud data centers (metro DCs). A vBBU is connected to a set of remote radio heads (RRHs). ACN resiliency against network and processing failures is critical for C-RAN deployments. Hence, in this study, we propose three protection approaches: 1+1 ACN protection, 1+1 ACN and vBBU protection, and partial ACN protection. Simulation results show that both 1+1 ACN protection and 1+1 ACN and vBBU protection require large backup capacity to provide 100% survivability against single-link and single-DC failures. As a result, we suggest a partial ACN protection approach, which provides degraded services with only 8% additional network resources.

22 citations


Proceedings ArticleDOI
01 Dec 2016
TL;DR: A novel mathematical model based on constraint programming for joint allocation of radio, optical network, and baseband processing resources to enhance RAN throughput is proposed and solved by optimally forming VPONs and V-BSs.
Abstract: 5G Radio Access Networks (RANs) are expected to increase their capacity by 1000x to handle a growing number of connected devices and increasing data rates. The concept of cloud-RAN (CRAN) has been recently proposed to decouple digital units (DUs) and radio units (RUs) of base stations (BSs), and centralize DUs into central offices. CRAN can ease the implementation of advanced radio coordination techniques, e.g., Coordinated Multi-Point (CoMP) Transmission/Reception, to enhance system throughput. However, separating DUs and RUs, and implementing CoMP in CRAN, require low-latency and high-bandwidth connectivity links, called "fronthaul". Today, consensus has not yet been achieved on how BSs, fronthaul, and central offices will be orchestrated to enhance the system throughput. In this study, we present a CRAN over Passive Optical Network (PON) architecture called virtualized-CRAN (V-CRAN). V-CRAN leverages the concept of a virtualized PON (VPON) that can dynamically associate any RU to any DU so that several RUs can be coordinated by the same DU, and the concept of a virtualized BS (V-BS) that can jointly transmit common signals from multiple RUs to a user. We propose a novel mathematical model based on constraint programming for joint allocation of radio, optical network, and baseband processing resources to enhance RAN throughput, and we solve it by optimally forming VPONs and V-BSs. Comprehensive simulations show that V-CRAN can enhance the system throughput and the efficiency of resource utilization.

22 citations


Proceedings ArticleDOI
TL;DR: Numerical results verify that online degraded provisioning in service-differentiated multi-layer networks with optical elasticity can achieve significant blocking reduction, and indicate that degradation in the optical layer can increase network capacity, while degradation in the electrical layer provides flexible time-bandwidth exchange.
Abstract: The emergence of new network applications is driving network operators to not only fulfill dynamic bandwidth requirements, but offer various grades of service. Degraded provisioning provides an effective solution to flexibly allocate resources in various dimensions to reduce blocking for differentiated demands when network congestion occurs. In this work, we investigate the novel problem of online degraded provisioning in service-differentiated multi-layer networks with optical elasticity. Quality of Service (QoS) is assured by service-holding-time prolongation and immediate access as soon as the service arrives, without set-up delay. We decompose the problem into degraded routing and degraded resource allocation stages, and design polynomial-time algorithms with the enhanced multi-layer architecture to increase the network flexibility in temporal and spectral dimensions. Illustrative results verify that we can achieve a significant reduction of network service failures, especially for requests with higher priorities. The results also indicate that degradation in the optical layer can increase the network capacity, while degradation in the electrical layer provides flexible time-bandwidth exchange.
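The time-bandwidth exchange behind degraded provisioning is simple arithmetic: a request admitted at a fraction of its bandwidth has its holding time prolonged so the delivered volume is preserved. A small worked example (our own illustration; the volumes and rates are invented):

```python
# Illustrative arithmetic for time-bandwidth exchange: degrading the
# granted rate stretches the holding time so the same data volume is
# still delivered.

def degraded_holding_time(volume_gb, full_rate_gbps, degradation):
    """degradation in (0, 1]: fraction of the requested rate granted."""
    rate = full_rate_gbps * degradation
    return volume_gb * 8 / rate      # seconds to deliver the same volume

full = degraded_holding_time(100, 10, 1.0)      # 80 s at the full 10 Gb/s
degraded = degraded_holding_time(100, 10, 0.5)  # 160 s at half rate
assert degraded == 2 * full
print(full, degraded)
```

Halving the rate doubles the holding time: the connection still completes, freeing spectrum for higher-priority requests during congestion instead of being blocked outright.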

21 citations


Journal ArticleDOI
TL;DR: A Backup Reprovisioning with Partial Protection (BRPP) scheme is proposed, supporting dedicated-path protection where backup resources are reserved but not provisioned (as in shared-path protection), such that the amount of bandwidth reserved for backups, as well as its routing, is subject to dynamic change, given the network state, to increase utilization.
Abstract: As networks grow in size, large-scale failures caused by disasters may lead to huge data loss, especially in an optical network employing wavelength-division multiplexing (WDM). Providing 100% protection against disasters would require massive and economically unsustainable bandwidth overprovisioning, as disasters are difficult to predict, statistically rare, and may create large-scale failures. Backup reprovisioning schemes are proposed to remedy this problem, but in case of a large-scale disaster, even the flexibility provided by backup reprovisioning may not be enough, given the sudden reduction in available network resources, i.e., resource crunch. To mitigate the adverse effects of resource crunch, an effective resource reallocation is possible by exploiting service heterogeneity, specifically degraded-service tolerance, which makes it possible to provide some level of service, e.g., reduced capacity, to connections that can tolerate degraded service, versus no service at all. Software-Defined Networking (SDN) is a promising approach to perform such dynamic changes (redistribution of network resources) as it simplifies network management via centralized control logic. By exploiting these new opportunities, we propose a Backup Reprovisioning with Partial Protection (BRPP) scheme supporting dedicated-path protection, where backup resources are reserved but not provisioned (as in shared-path protection), such that the amount of bandwidth reserved for backups, as well as its routing, is subject to dynamic change, given the network state, to increase utilization. The performance of the proposed scheme is evaluated by means of SDN emulation using the Mininet environment and OpenDaylight as the controller.

Proceedings ArticleDOI
22 May 2016
TL;DR: A new architecture for supporting CoMP operations in emerging cellular systems is proposed, based on a time-and-wavelength-division-multiplexed passive optical network (TWDM-PON) fronthaul, using virtualized base stations and a cloud radio access network (C-RAN) architecture.
Abstract: In emerging cellular systems, optical fronthaul is expected to play a major role to support many control operations, e.g., Coordinated Multipoint (CoMP). CoMP is a promising technique for interference mitigation as it can transform interfering signals into joint transmission (reception), in which signals from adjacent cell sites are simultaneously transmitted (received) to (from) mobile terminals. But the exchange of information required by CoMP demands high flexibility and capacity. This paper proposes a new architecture for supporting CoMP operations in emerging cellular systems. It is based on a time-and-wavelength-division-multiplexed passive optical network (TWDM-PON) fronthaul, using virtualized base stations and a cloud radio access network (C-RAN) architecture. We also propose techniques to distribute the load on controllers to minimize the coordination delay. Results show that, for a typical setting, our methods can save up to 37% on the time required to distribute channel state information among multiple base stations.

Proceedings ArticleDOI
09 May 2016
TL;DR: This study develops the optimization model and provides a novel DBA scheme for WA-PON, and compares the performance of these hybrid PONs by simulation in terms of average delay and packet drop ratio, which suggest that WA- PON can tolerate heavier traffic and performs the best among three Hybrid-Pon architectures.
Abstract: To effectively schedule transmissions in NG-EPON, Hybrid-PON is further classified into MSD-PON, SSD-PON, and WA-PON depending on their different access architectures, viz. how a group of ONUs share the assigned wavelength channels. The architectures for these three Hybrid-PONs were introduced by the IEEE 802.3 Ethernet Working Group in 2015. In this study, we first propose to apply some existing dynamic bandwidth allocation (DBA) schemes in the cases of MSD-PON and SSD-PON. Then, since WA-PON is a new hybrid multiplexing architecture and previous DBA schemes cannot exploit its flexibility of wavelength assignment and sharing, we develop an optimization model and also provide a novel DBA scheme for WA-PON. Then, we compare the performance of these hybrid PONs by simulation in terms of average delay and packet drop ratio. We analyze the influence of increasing the buffer size for each ONU in WA-PON and show how the flexibility of WA-PON can impact the performance. We find that WA-PON can reduce the drop ratio compared with SSD-PON at low offered load, and has smaller average delay than MSD-PON, which suggests that WA-PON can tolerate heavier traffic and performs the best among the three Hybrid-PON architectures.
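The essence of any DBA scheme is turning per-ONU queue reports into transmission grants within a fixed cycle. A toy max-min style allocator (our own illustration, not the paper's WA-PON scheme; queue sizes and capacity are invented) shows the typical behavior: lightly loaded ONUs get everything they request, and backlogged ONUs split the surplus evenly:

```python
# Toy dynamic bandwidth allocation: cap each grant at a fair share of the
# cycle, then redistribute leftover capacity to still-backlogged ONUs.

def dba_grants(requests, cycle_capacity):
    """requests: {onu: queued demand}; returns {onu: granted bandwidth}."""
    grants = {onu: 0.0 for onu in requests}
    remaining = cycle_capacity
    pending = dict(requests)
    while remaining > 0 and pending:
        share = remaining / len(pending)
        done = []
        for onu, req in pending.items():
            g = min(req, share)          # never grant more than requested
            grants[onu] += g
            pending[onu] = req - g
            if pending[onu] <= 1e-9:
                done.append(onu)         # fully satisfied this cycle
        remaining = cycle_capacity - sum(grants.values())
        for onu in done:
            del pending[onu]
        if not done and remaining <= 1e-9:
            break                        # capacity exhausted
    return grants

# Queue sizes (Mb) against a 30 Mb cycle: onu1 is fully served, while
# the two heavy ONUs split the remaining 25 Mb evenly.
print(dba_grants({"onu1": 5, "onu2": 20, "onu3": 20}, 30))
```

The schemes in the paper additionally decide *which wavelength* carries each grant, which is exactly the flexibility WA-PON adds over MSD-PON and SSD-PON.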

Proceedings ArticleDOI
01 Dec 2016
TL;DR: In this article, the authors investigate the problem of online degraded provisioning in service-differentiated multi-layer networks with optical elasticity and design polynomial-time algorithms with the enhanced multilayer architecture to exploit network flexibility in temporal and spectral dimensions.
Abstract: Degraded provisioning provides an effective solution to flexibly allocate resources in various dimensions to reduce blocking for differentiated demands when network congestion occurs. In this work, we investigate the novel problem of online degraded provisioning in service-differentiated multi-layer networks with optical elasticity. Quality of Service (QoS) is assured by service-holding-time prolongation and immediate access as soon as the service arrives, without set-up delay. We decompose the problem into degraded routing and degraded resource allocation stages, and design polynomial-time algorithms with the enhanced multi-layer architecture to exploit network flexibility in temporal and spectral dimensions. Numerical results verify that we can achieve significant blocking reduction, especially for requests with higher priorities. They also indicate that degradation in the optical layer can increase the network capacity, while degradation in the electrical layer provides flexible time-bandwidth exchange.

Proceedings ArticleDOI
15 Mar 2016
TL;DR: Numerical examples using the ILP show that an efficient progressive datacenter-recovery plan can significantly help to increase reachability of contents during the network recovery phase, and increase the number of important contents in the early stages of recovery.
Abstract: Today's cloud systems are composed of geographically distributed datacenters interconnected by high-speed optical networks. Disaster failures can severely affect both the communication network and the datacenter infrastructure, and prevent users from accessing cloud services. After large-scale disasters, recovery efforts on both the network and the datacenters may take days, and, in some cases, weeks or months. Traditionally, the repair of the communication network has been treated as a separate problem from the repair of datacenters. While past research has mostly focused on network recovery, how to efficiently recover a cloud system while jointly considering the limited computing and networking resources has remained an important open research problem. In this work, we investigate the problem of progressive datacenter recovery after a large-scale disaster failure, given that a network-recovery plan is made. An efficient recovery plan is explored to determine which datacenters should be recovered at each recovery stage to maximize cumulative content reachability from any source, considering limited available network resources. We devise an Integer Linear Program (ILP) formulation to model the associated optimization problem. Our numerical examples using the ILP show that an efficient progressive datacenter-recovery plan can significantly increase the reachability of contents during the network-recovery phase. We succeeded in making more important contents reachable in the early stages of recovery, compared to a random-recovery strategy, with a slight increase in resource consumption.
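The staging decision can be illustrated with a greedy stand-in for the paper's ILP: at each stage, recover the datacenter whose contents add the most not-yet-reachable weighted items. The content placements and importance weights below are invented for illustration:

```python
# Hedged sketch: greedy progressive datacenter recovery (a stand-in for
# the paper's ILP). Each stage recovers the DC contributing the largest
# weight of newly reachable contents.

def greedy_recovery_order(dc_contents, weights):
    """dc_contents: {dc: set of contents}; weights: {content: importance}."""
    reachable = set()
    order = []
    remaining = set(dc_contents)
    while remaining:
        best = max(remaining,
                   key=lambda dc: sum(weights[c]
                                      for c in dc_contents[dc] - reachable))
        order.append(best)
        reachable |= dc_contents[best]
        remaining.remove(best)
    return order

dcs = {"DC1": {"a", "b"}, "DC2": {"b", "c", "d"}, "DC3": {"a"}}
w = {"a": 5, "b": 1, "c": 1, "d": 1}    # content 'a' is the most important
print(greedy_recovery_order(dcs, w))    # -> ['DC1', 'DC2', 'DC3']
```

DC3 holds only a copy of 'a', which DC1 already made reachable, so it is deferred to the last stage; the ILP makes the same kind of trade-off while also respecting the network resources available at each stage.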

Journal ArticleDOI
TL;DR: A fairness-aware degradation-based multipath re-provisioning (FDM) strategy to combat large-scale cascading and/or correlated failures in post-disaster telecom mesh networks (such as optical backbone networks) is studied.
Abstract: Resource crunch, i.e., severe reduction of network resources caused by large-scale failures or drastic traffic fluctuations, represents a crucial concern for network survivability. Under resource crunch, simply relying on preserving large amounts of redundant resources to achieve network robustness becomes difficult or even impossible. However, reactively optimizing network resources and re-provisioning disrupted and/or existing connections might be a promising solution for disaster failures due to earthquake, tsunami, hurricane, malicious attack, etc. In this paper, we study a re-provisioning method for survivable networks, namely, a fairness-aware degradation based multipath re-provisioning (FDM) strategy to combat large-scale cascading and/or correlated failures in post-disaster telecom mesh networks (such as optical backbone networks). By exploiting policies of fairness-aware bandwidth degradation and multipath deployment, FDM exploits the potential bandwidth resource in post-disaster networks to maintain network connectivity and maximize traffic flows, while avoiding temporary interruptions of existing connections and balancing the traffic distribution. A mixed integer linear programming model and a heuristic algorithm, called FDM-i and FDM-h, respectively, are developed and applied in different scale test networks under various volumes of post-disaster traffic demands. Simulation results show that, with respect to some counterparts, the proposed FDM schemes achieve better performance in terms of connection loss ratio, traffic loss ratio, and fairness of traffic distribution and demonstrate a good trade-off between resource utilization and traffic distribution. Compared with FDM-i, FDM-h achieves similar performance and shows better characteristics on adaptability and scalability.

Proceedings ArticleDOI
20 Mar 2016
TL;DR: This work proposes and evaluates four optical interconnect architectures based on spatial division multiplexing for ultra-high-capacity modular data centers and shows how the best option depends on network load and size.
Abstract: We propose and evaluate four optical interconnect architectures based on spatial division multiplexing for ultra-high-capacity modular data centers. We show how the best option depends on network load and size.

Proceedings ArticleDOI
22 May 2016
TL;DR: An inter-DC Content Fragmentation (CF) scheme is proposed which aims to achieve brown-energy saving by reducing storage overhead while maintaining low risk compared to basic replication schemes and a Green and Low-Risk Content Placement approach (GR-CP) is proposed to address the tradeoff between brown- energy consumption and disaster risk.
Abstract: With the rapid growth of content-based network services, there is increasing interest in reducing the emissions associated with brown-energy consumption in Content Delivery Networks (CDNs). At the same time, content needs to be placed in safe Data Center (DC) locations, which are unlikely to be hit by disasters. Further risk reduction is achieved using content replication, which provides inter-DC content redundancy. Unfortunately, there is contention between the objectives of brown-energy minimization and risk reduction, since replicating content increases brown-energy consumption. To address these contradictory issues, we leverage the concept of content fragmentation used inside DCs and propose an inter-DC Content Fragmentation (CF) scheme which aims to achieve brown-energy savings by reducing storage overhead while maintaining low risk compared to basic replication schemes. We also propose a Green and Low-Risk Content Placement approach (GR-CP) to address the tradeoff between brown-energy consumption and disaster risk. Both CF and replication schemes are implemented in GR-CP and evaluated over a range of content popularity and redundancy levels. Our results show that CF outperforms the replication scheme except when content popularity is high and the risk constraint is stringent. When popularity and risk are both low, CF can save more brown energy by using low-redundancy fragmentation techniques.
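The storage saving that drives the brown-energy saving is easy to quantify. A back-of-the-envelope comparison of replication against (k, n) fragmentation (the parameters below are illustrative, not values from the paper):

```python
# Back-of-the-envelope illustration of why fragmentation reduces storage
# overhead (and hence energy) versus replication, at the same loss
# tolerance. The (k, n) values are examples only.

def replication_overhead(copies):
    return float(copies)    # stored bytes per original byte

def fragmentation_overhead(k, n):
    # content split into k fragments, expanded to n coded fragments;
    # any k of the n fragments suffice to rebuild the content,
    # so up to n - k fragment losses are tolerated
    return n / k

# Surviving the loss of 2 sites: 3 full replicas vs. a (4, 6) fragment code.
assert replication_overhead(3) == 3.0
assert fragmentation_overhead(4, 6) == 1.5
print("storage saved:", 1 - 1.5 / 3.0)  # 50% less storage, same loss tolerance
```

The flip side, which the abstract's popularity caveat reflects, is that every read of fragmented content must contact k DCs instead of one nearby replica, so fragmentation loses its advantage for highly popular content.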

Proceedings ArticleDOI
TL;DR: In this article, the authors focus on the computationally complex problem of multiple VNF SC placement and routing while considering VNF service chaining explicitly, and propose a column generation model for placing multiple SCs and routing.
Abstract: Network Function Virtualization (NFV) aims to abstract the functionality of traditional proprietary hardware into software as Virtual Network Functions (VNFs), which can run on commercial off-the-shelf (COTS) servers. Besides reducing dependency on proprietary support, NFV helps network operators to deploy multiple services in an agile fashion. Service deployment involves placement of, and in-sequence routing through, the VNFs comprising a Service Chain (SC). Our study is the first to focus on the computationally complex problem of placing and routing multiple VNF SCs while considering VNF service chaining explicitly. We propose a novel column generation model for placing multiple VNF SCs and routing, which reduces the computational complexity of the problem significantly. Our aim here is to determine the ideal NFV Infrastructure (NFVI) for minimizing network resource consumption. Our results indicate that a Network-enabled Cloud (NeC) results in lower network-resource consumption than a centralized NFVI (e.g., a Data Center) while avoiding the infeasibility observed with a distributed NFVI.

Proceedings ArticleDOI
22 May 2016
TL;DR: This study introduces the multiple traveling repairmen problem (MTRP) for post-disaster resilience, i.e., to reduce the impact of a disaster, and proposes a greedy and a simulated annealing algorithm.
Abstract: In network virtualization, when a disaster hits a physical network infrastructure, it is likely to break multiple virtual network connections. So, after a disaster occurs, the network operator has to schedule multiple teams of repairmen to fix the failed components, considering that these elements may be geographically dispersed. An effective schedule is very important, as different schedules may result in very different amounts of time needed to recover from a failure. In this study, we introduce the multiple traveling repairmen problem (MTRP) for post-disaster resilience, i.e., to reduce the impact of a disaster. Re-provisioning of failed virtual links is also considered. We first formally state the problem, where our objective is to find an optimal schedule for multiple teams of repairmen to restore the failed components in the physical network, maximizing the traffic carried by the restored virtual networks while minimizing damage cost. Then, we propose a greedy (GR) and a simulated annealing (SA) algorithm, and we measure the damage caused by a disaster in terms of disconnected virtual networks (DVN), failed virtual links (FVL), and failed physical links (FPL). Numerical results show that both proposed algorithms can make good schedules for multiple repairmen teams, and SA leads to significantly lower damage in terms of DVN, FVL, and FPL than GR.
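
A minimal sketch of the greedy idea, under our own assumptions (not the paper's GR algorithm): teams and failures are points in the plane, travel time is Euclidean distance, and damage is proxied by the sum of repair arrival times, so failures fixed later cost more.

```python
# Hypothetical greedy schedule for multiple repair teams: at each step,
# dispatch the (team, failure) pair with the earliest possible arrival.
import math

def greedy_schedule(depots, failures):
    """depots: list of (x, y) team start positions.
    failures: list of (x, y) failed-component positions.
    Returns per-team visit orders and the sum of arrival times
    (lower = failures are fixed sooner on average)."""
    positions = list(depots)        # current location of each team
    clocks = [0.0] * len(depots)    # elapsed travel time per team
    remaining = set(range(len(failures)))
    routes = [[] for _ in depots]
    total_arrival = 0.0
    while remaining:
        # pick the (team, failure) pair with the earliest arrival time
        t, f, arrival = min(
            ((t, f, clocks[t] + math.dist(positions[t], failures[f]))
             for t in range(len(depots)) for f in remaining),
            key=lambda x: x[2])
        clocks[t] = arrival
        positions[t] = failures[f]
        routes[t].append(f)
        remaining.remove(f)
        total_arrival += arrival
    return routes, total_arrival

routes, cost = greedy_schedule([(0, 0), (10, 0)],
                               [(1, 0), (2, 0), (9, 0)])
print(routes, cost)  # each team serves the failures nearest to it
```

A simulated-annealing variant, as in the paper, would perturb such a schedule (e.g., swap two visits) and accept worse schedules with decreasing probability, which is how it escapes the greedy rule's local optima.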

Proceedings ArticleDOI
02 Nov 2016
TL;DR: Simulations show that the algorithm significantly reduces the risk of data loss caused by disasters with minimum additional time consumption.
Abstract: We develop a risk-aware rapid data evacuation scheme for large-scale disasters in optical cloud networks. Simulations show that our algorithm significantly reduces the risk of data loss caused by disasters with minimum additional time consumption.
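
One plausible ingredient of risk-aware evacuation can be sketched as a priority rule. This heuristic and its parameters are our own illustration, not the paper's algorithm: order data so that the highest expected loss averted per second of transfer time leaves first (Smith's rule for weighted completion time).

```python
# Hypothetical risk-aware evacuation ordering (our own heuristic, not the
# paper's scheme): sort datasets by expected loss averted per second of
# transfer time, so risky, quick-to-move data is evacuated first.

def evacuation_order(datasets, link_rate_gbps):
    """datasets: list of (name, size_gb, expected_loss_if_hit).
    Returns names in descending order of loss averted per transfer second,
    which minimizes total risk-weighted evacuation delay."""
    def priority(d):
        _name, size_gb, loss = d
        transfer_s = size_gb * 8 / link_rate_gbps  # GB -> Gb, then seconds
        return loss / transfer_s
    return [d[0] for d in sorted(datasets, key=priority, reverse=True)]

order = evacuation_order(
    [("logs", 200, 10), ("db", 50, 90), ("media", 400, 16)],
    link_rate_gbps=10)
print(order)  # "db" first: high expected loss and quick to move
```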

Proceedings ArticleDOI
01 Aug 2016
TL;DR: The authors give an optimized deployment strategy for fog computing resources and then propose a software-defined optical networking architecture for hybrid computing, considering service characteristics.
Abstract: Driven by ubiquitous cloud computing applications, datacenters have been widely deployed as important IT resources. However, because datacenters are deployed in a concentrated and remote manner, cloud computing struggles to guarantee quality of experience (QoE) for customers in terms of latency, bandwidth, energy consumption, and network cost. Fog computing, proposed by Cisco in 2011, aims to improve efficiency and reduce the amount of data transported to the cloud for processing, analysis, and storage. Compared with the cloud computing scenario shown in Fig. 1, traffic flows in the fog computing scenario are kept local to reduce latency and save network resources for applications such as online gaming and online search. However, fog computing cannot replace cloud computing, because the latter has its own clear advantages, such as security, sharing, and scalability; hence, cloud computing and fog computing will co-exist in the future. This paper proposes the concept of hybrid computing, considering cloud and fog together, as shown in Fig. 2. Two problems then emerge in the hybrid computing scenario: how to deploy fog computing resources at the network edge to cooperate efficiently with the cloud, and how to provision network resources when traffic flows are divided into different tiers according to their requirements and the status of IT and network resources. Addressing these problems, the authors give an optimized deployment strategy for fog computing resources and then propose a software-defined optical networking architecture for hybrid computing that considers service characteristics. Numerical results and analysis are given.
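
The latency-tiering idea behind hybrid cloud-fog placement can be sketched in a few lines. The RTT figures and the decision rule below are our own assumptions for illustration, not values from the paper.

```python
# Hypothetical sketch of latency-tiered placement (parameters invented):
# send a request to the remote cloud when its latency budget allows it,
# fall back to a nearby fog node when only the edge is fast enough.

FOG_RTT_MS = 5      # assumed round-trip latency to a nearby fog node
CLOUD_RTT_MS = 60   # assumed round-trip latency to a remote datacenter

def place_request(latency_budget_ms: float, fog_has_capacity: bool) -> str:
    """Return 'cloud', 'fog', or 'reject' for one request."""
    if latency_budget_ms >= CLOUD_RTT_MS:
        return "cloud"   # cloud meets the budget; keep scarce edge capacity free
    if fog_has_capacity and latency_budget_ms >= FOG_RTT_MS:
        return "fog"     # only the edge is fast enough for this tier
    return "reject"

print(place_request(100, True))   # cloud
print(place_request(20, True))    # fog
print(place_request(20, False))   # reject
```

Preferring the cloud whenever the budget allows reflects the abstract's point that fog complements rather than replaces the cloud: edge capacity is reserved for the latency-sensitive tier (e.g., online games).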

Proceedings ArticleDOI
02 Nov 2016
TL;DR: A location-aware virtual machine (VM) designation scheme is first proposed in hybrid cloud-fog computing using hierarchical control architecture of software-defined optical networks, in which virtual machines are designated for different requests with different latency requirements.
Abstract: A location-aware virtual machine (VM) designation scheme is first proposed in hybrid cloud-fog computing using hierarchical control architecture of software-defined optical networks, in which virtual machines are designated for different requests with different latency requirements.