
Showing papers in "Journal of Network and Systems Management in 2018"


Journal ArticleDOI
TL;DR: In order to control the energy consumption efficaciously, the Dynamic Voltage Frequency Scaling system is incorporated in the optimization procedure and a set of non-domination solutions are obtained using Non-dominated Sorting Genetic Algorithm (NSGA-II).
Abstract: The utilization of cloud services has increased significantly due to ease of access, better performance, and lower upfront costs. In general, cloud users anticipate completing their tasks without any delay, whereas cloud providers aim to reduce energy cost, which is one of the major costs in the cloud service environment. However, reducing energy consumption increases the makespan and leads to customer dissatisfaction. So, it is essential to obtain a set of non-domination solutions for these multiple and conflicting objectives (makespan and energy consumption). In order to control the energy consumption efficaciously, the Dynamic Voltage Frequency Scaling system is incorporated in the optimization procedure and a set of non-domination solutions is obtained using the Non-dominated Sorting Genetic Algorithm (NSGA-II). Further, the Artificial Neural Network (ANN), one of the most successful machine learning algorithms, is used to predict the virtual machines based on the characteristics of the tasks and the features of the resources. The optimum solutions obtained using the optimization process with and without the support of the ANN are presented and discussed.
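The non-domination filtering at the heart of NSGA-II can be illustrated with a short sketch. This is purely illustrative: the candidate (makespan, energy) pairs and function names below are invented for the example and are not from the paper.

```python
def dominates(a, b):
    """a dominates b if a is no worse in both objectives
    (makespan, energy) and differs from b, assuming minimization."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def non_dominated(solutions):
    """Keep only solutions that no other solution dominates (the Pareto front)."""
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]

# Hypothetical (makespan, energy) values for candidate schedules
candidates = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 6)]
front = non_dominated(candidates)
```

NSGA-II repeatedly applies this kind of dominance test to rank a population into fronts before selection; the sketch shows only the dominance relation itself.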

73 citations


Journal ArticleDOI
TL;DR: A particle swarm optimization based resource scheduling technique named BULLET has been designed to execute workloads effectively on available resources; it efficiently reduces execution cost, time and energy consumption along with other QoS parameters.
Abstract: Cloud resource scheduling requires mapping of cloud resources to cloud workloads. Scheduling results can be optimized by considering Quality of Service (QoS) parameters as inherent requirements of scheduling. In the existing literature, only a few resource scheduling algorithms have considered cost and execution time constraints, but efficient scheduling requires better optimization of QoS parameters. The main aim of this research paper is to present an efficient strategy for the execution of workloads on cloud resources. A particle swarm optimization based resource scheduling technique named BULLET has been designed to execute workloads effectively on available resources. The performance of the proposed technique has been evaluated in a cloud environment. The experimental results show that the proposed technique efficiently reduces execution cost, time and energy consumption along with other QoS parameters.

67 citations


Journal ArticleDOI
TL;DR: An exhaustive survey of spot pricing in cloud ecosystem is presented and an insight into the Amazon spot instances and its pricing mechanism has been presented for better understanding of the spot ecosystem.
Abstract: Amazon offers spot instances to cloud customers using an auction-like mechanism. These instances are dynamically priced and offered at a lower price with less guarantee of availability. Observing the popularity of Amazon spot instances among cloud users, research has intensified on defining the users’ and providers’ behavior in the spot market. This work presents an exhaustive survey of spot pricing in the cloud ecosystem. An insight into Amazon spot instances and their pricing mechanism is presented for a better understanding of the spot ecosystem. The spot pricing and resource provisioning problem, modeled as a market mechanism, is discussed from both computational and economic perspectives. A significant number of important research papers related to price prediction and modeling, spot resource provisioning, bidding strategy design, etc., are summarized and categorized to evaluate the state of the art. All theoretical frameworks developed for the cloud spot market are illustrated and compared in terms of their techniques and findings. Finally, research gaps are identified and various economic and computational challenges in the cloud spot ecosystem are discussed as a guide to future research.

52 citations


Journal ArticleDOI
TL;DR: Extensive black-box tests are presented to quantify the throughput and latency of software switches with emphasis on the market leader, Open vSwitch.
Abstract: Virtual switches, like Open vSwitch, have emerged as an important part of today's data centers. They connect interfaces of virtual machines and provide an uplink to the physical network via network interface cards. We discuss usage scenarios for virtual switches involving physical and virtual network interfaces. We present extensive black-box tests to quantify the throughput and latency of software switches with emphasis on the market leader, Open vSwitch. Finally, we explain the observed effects using white-box measurements.

48 citations


Journal ArticleDOI
TL;DR: The two main objectives of this paper are to make use of the controller’s broad view of the network to detect DDoS attacks and propose a solution that is effective and lightweight in terms of the resources that it uses.
Abstract: Software Defined Network (SDN) is a new network architecture that has an operating system. Unlike conventional production networks, SDN allows more flexibility in network management using that operating system that is called the controller. The main advantage of having a controller in the network is the separation of the forwarding and the control planes, which provides central control over the network. Although central control is the major advantage of SDN, it is also a single point of failure if it is made unreachable by a Distributed Denial of Service (DDoS) attack. In this paper, that single point of failure is addressed by utilizing the controller to detect such attacks and protect the SDN architecture of the network in its early stages. The two main objectives of this paper are to (1) make use of the controller's broad view of the network to detect DDoS attacks and (2) propose a solution that is effective and lightweight in terms of the resources that it uses. To accomplish these objectives, this paper examines the effect of DDoS attacks on the SDN controller and the way it can exhaust controller resources. The proposed solution to detect such attacks is based on the entropy variation of the destination IP address. Based on our experimental setup, the proposed method can detect DDoS within the first 250 packets of the attack traffic.
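The entropy-variation idea behind the detection method can be sketched in a few lines. This is a simplified illustration: the window size, IP addresses, and any detection threshold are assumptions for the example, not the paper's exact parameters.

```python
import math
from collections import Counter

def dest_ip_entropy(window):
    """Shannon entropy of destination IPs in a window of packets.
    Diverse destinations yield high entropy; a DDoS flood converging
    on a few victims drives the entropy sharply down."""
    counts = Counter(window)
    n = len(window)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Normal traffic spread evenly over 50 hosts vs. a flood aimed at one host
normal = ["10.0.0.%d" % (i % 50) for i in range(250)]
attack = ["10.0.0.7"] * 240 + ["10.0.0.%d" % i for i in range(10)]
```

A detector would compare the entropy of each arriving window against a baseline and flag a sharp drop; the 250-packet window mirrors the detection latency reported in the abstract, while the baseline threshold itself would have to be calibrated to the deployment.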

45 citations


Journal ArticleDOI
TL;DR: A novel push-based approach for HAS, in which HTTP/2’s push feature is used to actively push segments from server to client, is proposed, which can reduce the startup time and end-to-end delay in HAS live streaming.
Abstract: Over the last years, streaming of multimedia content has become more prominent than ever. To meet increasing user requirements, the concept of HTTP Adaptive Streaming (HAS) has recently been introduced. In HAS, video content is temporally divided into multiple segments, each encoded at several quality levels. A rate adaptation heuristic selects the quality level for every segment, allowing the client to take into account the observed available bandwidth and the buffer filling level when deciding the most appropriate quality level for every new video segment. Despite the ability of HAS to deal with changing network conditions, a low average quality and a large camera-to-display delay are often observed in live streaming scenarios. In the meantime, the HTTP/2 protocol was standardized in February 2015, providing new features which target a reduction of the page loading time in web browsing. In this paper, we propose a novel push-based approach for HAS, in which HTTP/2's push feature is used to actively push segments from server to client. Using this approach with video segments with a sub-second duration, referred to as super-short segments, it is possible to reduce the startup time and end-to-end delay in HAS live streaming. Evaluation of the proposed approach, through emulation of a multi-client scenario with highly variable bandwidth and latency, shows that the startup time can be reduced by 31.2% compared to traditional solutions over HTTP/1.1 in mobile, high-latency networks. Furthermore, the end-to-end delay in live streaming scenarios can be reduced by 4 s, while providing the content at similar video quality.

39 citations


Journal ArticleDOI
TL;DR: A server selection, configuration, reconfiguration and automatic performance verification technology to meet user functional and performance requirements on various types of cloud compute servers to enable cloud providers to provision compute resources on appropriate hardware based on user requirements.
Abstract: We propose a server selection, configuration, reconfiguration and automatic performance verification technology to meet user functional and performance requirements on various types of cloud compute servers. Various servers means there are not only virtual machines on normal CPU servers but also container or bare-metal servers on strong graphics processing unit (GPU) servers or field programmable gate arrays (FPGAs) with a configuration that accelerates specified computation. Early cloud systems were composed of many PC-like servers, and virtual machines on these servers used distributed processing technology to achieve high computational performance. However, recent cloud systems have changed to make the best use of advances in hardware power. It is well known that bare-metal and container performance is better than that of virtual machines, and dedicated processing servers, such as strong GPU servers for graphics processing and FPGA servers for specified computation, have increased. Our objective for this study was to enable cloud providers to provision compute resources on appropriate hardware based on user requirements, so that users can easily benefit from the high performance of their applications. Our proposed technology selects appropriate servers for user compute resources from various types of hardware, such as GPUs and FPGAs, or sets appropriate configurations or reconfigurations of FPGAs to use hardware power. Furthermore, our technology automatically verifies the performance of provisioned systems. We measured provisioning and automatic performance verification times to show the effectiveness of our technology.

37 citations


Journal ArticleDOI
TL;DR: The research conducted verified the proposed solution in the domain of multimedia file conversion and demonstrated its usefulness in reducing the time required for task execution.
Abstract: Since the concept of merging the capabilities of mobile devices and cloud computing is becoming increasingly popular, an important question is how to optimally schedule services/tasks between the device and the cloud. The main objective of this article is to investigate the possibilities for using machine learning on mobile devices in order to manage the execution of services within the framework of Mobile Cloud Computing. In this study, an agent-based architecture with learning possibilities is proposed to solve this problem. Two learning strategies are considered: supervised and reinforcement learning. The solution proposed leverages, among other things, knowledge about mobile device resources, network connection possibilities and device power consumption, as a result of which a decision is made with regard to the place where the task in question is to be executed. By employing machine learning techniques, the agent working on a mobile device gains experience in determining the optimal place for the execution of a given type of task. The research conducted allowed for the verification of the solution proposed in the domain of multimedia file conversion and demonstrated its usefulness in reducing the time required for task execution. Using the experience gathered as a result of subsequent series of tests, the agent became more efficient in assigning the task of multimedia file conversion to either the mobile device or cloud computing resources.

28 citations


Journal ArticleDOI
TL;DR: Human mobility and the resulting dynamics in the network workload caused by three different types of large-scale events: a major soccer match, a rock concert, and a New Year’s Eve celebration, which took place in a large Brazilian city are analyzed.
Abstract: The analysis of mobile phone data can help carriers to improve the way they deal with unusual workloads imposed by large-scale events. This paper analyzes human mobility and the resulting dynamics in the network workload caused by three different types of large-scale events: a major soccer match, a rock concert, and a New Year’s Eve celebration, which took place in a large Brazilian city. Our analysis is based on the characterization of records of mobile phone calls made around the time and place of each event. That is, human mobility and network workload are analyzed in terms of the number of mobile phone calls, their inter-arrival and inter-departure times, and their durations. We use heat maps to visually analyze the spatio-temporal dynamics of the movement patterns of the participants of the large-scale event. The results obtained can be helpful to improve the understanding of human mobility caused by large-scale events. Such results could also provide valuable insights for network managers into effective capacity management and planning strategies. We also present PrediTraf, an application built to help the cellphone carriers plan their infrastructure on large-scale events.

26 citations


Journal ArticleDOI
TL;DR: MATLAB simulations verified that in a network with randomly deployed sensor nodes, CNs can be strategically deployed at pre-determined positions to deliver application-aware data that satisfies the end-user’s quality of information requirements, even at high application payloads.
Abstract: Ubiquitous Sensor Network describes an application platform comprised of intelligently networked sensors deployed over a large area, supporting multiple application scenarios. On one hand, at the user-end, storing and managing the large amount of heterogeneous data generated by the network is a daunting task. On the other hand, at the network-end, ensuring network connectivity and longevity in a dynamically changing network environment, while trying to provide context-aware application data to the end-users are very challenging for the resource constrained sensor network. While cloud computing offers a cost-effective solution for storage of the large volume of data generated by the underlying heterogeneous network, an equally elegant solution does not exist on the network interface to provide application-aware data. In this paper, we propose the use of cognitive nodes (CNs) in the underlying sensor network to provide intelligent information processing and knowledge-based services to the end-users. We identify tools and techniques to implement the cognitive functionality and formulate a strategy for the deployment of CNs in the underlying sensor network to ensure a high probability of successful data reception among communicating nodes. From Matlab simulations, we were able to verify that in a network with randomly deployed sensor nodes, CNs can be strategically deployed at pre-determined positions, to deliver application-aware data that satisfies the end-user’s quality of information requirements, even at high application payloads.

18 citations


Journal ArticleDOI
TL;DR: The purpose of this article is to look at the literature on P2PBNM, to highlight initiatives regarding the use of P2P technology in network management, and to predict what the future holds for P2PBNM.
Abstract: Network management has steadily evolved over recent years. Along with the growing need for advanced features in network management solutions, several distribution models were investigated, varying from centralized to fully distributed models. Despite the common agreement that some sort of distribution is really needed to execute management tasks, there seems to exist a permanent quest for the next distributed network management model. Among the distributed models, an interesting and emerging possibility is the use of P2P technology in network management, also known as P2P-Based Network Management (P2PBNM). Several investigations have shown that P2PBNM can be seen as an enabler for advanced network management features. However, due to the dispersion concerning the concepts and features related to these investigations, it is difficult to draw a comprehensive picture of the P2PBNM area. The purpose of this article is to look at literature on P2PBNM and to highlight initiatives regarding the use of P2P technology in network management. Furthermore, such initiatives are classified with respect to the proposed review questions. Finally, future trends are discussed in order to predict what the future holds for P2PBNM.

Journal ArticleDOI
TL;DR: A stochastic game model for quantifying the security of cyber-physical systems (CPS), which operate under intentional disturbances, is proposed and applied to a boiling water power plant as an illustrative example.
Abstract: A quantitative security evaluation in the domain of cyber-physical systems (CPS), which operate under intentional disturbances, is an important open problem. In this paper, we propose a stochastic game model for quantifying the security of CPS. The proposed model divides the security modeling process of these systems into two phases: (1) intrusion process modeling and (2) disruption process modeling. In each phase, the game theory paradigm predicts the behaviors of the attackers and the system. By viewing the security states of the system as the elements of a stochastic game, Nash equilibriums and best-response strategies for the players are computed. After parameterization, the proposed model is analytically solved to compute some quantitative security measures of CPS. Furthermore, the impact of some attack factors and defensive countermeasures on the system availability and mean time-to-shutdown is investigated. Finally, the proposed model is applied to a boiling water power plant as an illustrative example.

Journal ArticleDOI
TL;DR: A novel method that combines individual partitions to become a strong learner through the use of a link-based algorithm is proposed that outperforms existing botnet detection mechanisms with a high reliability and is proposed for the maximum duration time of flows in botnet research.
Abstract: Botnet detection is one of the most imminent tasks for cyber security. Among popular botnet countermeasures, an intrusion detection system is the prominent mechanism. In the past, packet-based intrusion detection systems were popular. However, flow-based intrusion detection systems have been preferred in recent years due to their ability to adapt to modern high-speed networks. A collection of flows from an enterprise network usually contains both botnet traffic and normal traffic. To classify this traffic, supervised machine learning algorithms, i.e., classifications, have been applied and achieved a high accuracy. In an effort to improve the ability of intrusion detection systems against botnets, some studies have suggested partitioning flows into clusters before applying the classifications and this step could significantly reduce the complexity of a flow set. However, the instability of individual clustering algorithms is still a constraint for botnet detection. To overcome this bottleneck, we propose a novel method that combines individual partitions to become a strong learner through the use of a link-based algorithm. Our experiments show that our cluster ensemble model outperforms existing botnet detection mechanisms with a high reliability. We also determine the balance between accuracy and computer resources for botnet detection, and thereby propose a range for the maximum duration time of flows in botnet research.

Journal ArticleDOI
TL;DR: A novel cooperative VNE algorithm is proposed to coordinate centralized and distributed algorithms and unite their respective advantages and specialties and has acceptable and even better performance in terms of long-term average revenue and acceptance ratio than previous algorithms.
Abstract: Network virtualization provides a promising solution for next-generation network management by allowing multiple isolated and heterogeneous virtual networks to coexist and run on a shared substrate network. A long-standing challenge in network virtualization is how to effectively and efficiently map these virtual nodes and links of heterogeneous virtual networks onto specific nodes and links of the shared substrate network, known as the Virtual Network Embedding (VNE) problem. Existing centralized VNE algorithms and distributed VNE algorithms both have advantages and disadvantages. In this paper, a novel cooperative VNE algorithm is proposed to coordinate centralized and distributed algorithms and unite their respective advantages and specialties. By leveraging learning technology and topology decomposition, autonomous substrate nodes entrusted with detailed mapping solutions cooperate closely with the central controller, which has a global view and is in charge of general management, to achieve a successful embedding process. Besides a topology-aware resource evaluation mechanism and customized mapping management policies, a Bloom filter is introduced to synchronize mapping information within the substrate network, instead of flooding, which generates massive communication overhead. Extensive simulations demonstrate that the proposed cooperative algorithm has acceptable and even better performance in terms of long-term average revenue and acceptance ratio than previous algorithms.
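A Bloom filter lets substrate nodes exchange a compact, fixed-size summary of which mappings exist instead of flooding full mapping tables. The sketch below is generic and hedged: the filter size, hash construction, and mapping-string format are assumptions for illustration, not the paper's design.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: a fixed-size bit array with k hash positions
    per item. Membership queries may yield false positives, never false
    negatives, which suffices for cheap mapping-information synchronization."""

    def __init__(self, m=256, k=3):
        self.m, self.k, self.bits = m, k, 0

    def _positions(self, item):
        # Derive k bit positions by salting a cryptographic hash
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(h[:4], "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        return all(self.bits >> p & 1 for p in self._positions(item))

# Hypothetical virtual-to-substrate mapping records being summarized
bf = BloomFilter()
bf.add("vnode1->snodeA")
bf.add("vnode2->snodeC")
```

Nodes can then ship the integer `bits` (here at most 256 bits) to peers rather than broadcasting every mapping, trading a small false-positive rate for large savings in communication overhead.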

Journal ArticleDOI
TL;DR: The future of network reliability engineering will benefit substantially from actively addressing the human role in network administration and management, and specific demographic, organizational, and technical factors that contribute to network reliability issues are discussed.
Abstract: Network administration and management tasks play an integral role in Information Technology (IT) operations; which are utilized across a diverse set of organizations. The reliability of networks is therefore of crucial importance for ensuring effective business processes. All IT networks are administered and managed by human administrators. As the process of administration becomes increasingly complex, human limitations can amplify challenges to network reliability and security. Despite researchers' agreement that the human factor becomes increasingly significant as the network becomes more reliable, efforts to design reliability measures have remained largely separate from considerations of the human component. We examined the question of whether joint consideration of these two components would be useful in designing reliability of enterprise networks. We interviewed and surveyed networking professionals to understand their impact on network reliability. The result is a discussion of specific demographic, organizational, and technical factors that contribute to network reliability issues. For demographic factors, academic background was a notable factor associated with network instability. For organizational factors, a notable factor was the number of devices assigned per administrator. Finally, for technical factors, a notable factor was misconfiguration of networking devices, which contributed significantly to the unreliability of the studied networks. Based on this research, we concluded that the future of network reliability engineering will benefit substantially from actively addressing the human role in network administration and management.

Journal ArticleDOI
TL;DR: The experimental results of the study show that the proposed learning automata-based topology control method yields greater improvement in the quality of service parameters of throughput and end-to-end delay than the other methods do.
Abstract: The mobility of the nodes and their limited energy supply in mobile ad hoc networks (MANETs) complicates network conditions. Having an efficient topology control mechanism in the MANET is very important and can reduce the interference and energy consumption in the network. Indeed, since current networks are highly complex, an efficient topology control is expected to be able to adapt itself to the changes in the environment drawing upon a preventive approach and without human intervention. To accomplish this purpose, the present paper proposes a learning automata-based topology control method within a cognitive approach. This approach deals with adding cognition to the entire network protocol stack to achieve stack-wide and network-wide performance goals. In this protocol, two cognitive elements are embedded at each node: one for transmission power control, and the other for channel control. The first element estimates the probability of link connectivity, and then, in a non-cooperative game of learning automata, it sets the proper power for the corresponding node. Subsequently, the second element allocates an efficient channel to the corresponding node, again using learning automata. Having a cognitive network perspective to control the topology of the network brings about many benefits, including a self-aware and self-adaptive topology control method and the ability of nodes to self-adjust dynamically. The experimental results of the study show that the proposed method improves the quality of service (QoS) parameters of throughput and end-to-end delay more than the other methods do.
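The core of a learning automaton is its probability update. The sketch below shows the classic linear reward-inaction (L_R-I) scheme commonly used in such work; the learning rate and the two-action example (e.g. two transmission power levels) are illustrative assumptions, not the paper's exact parameters.

```python
def reward_update(probs, action, a=0.1):
    """L_R-I update: after the environment rewards `action`, shift
    probability mass toward it; on a penalty, L_R-I leaves the
    probabilities unchanged. `a` is the learning rate."""
    new = [(1 - a) * p for p in probs]                   # shrink everything
    new[action] = probs[action] + a * (1 - probs[action])  # boost rewarded action
    return new

# Two candidate transmission power levels, initially equiprobable
probs = [0.5, 0.5]
probs = reward_update(probs, action=0)  # action 0 was rewarded
```

Repeated over many rewarded trials, the probability of the consistently rewarded action converges toward 1, which is how each node's cognitive element settles on a power level or channel without central coordination.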

Journal ArticleDOI
TL;DR: This paper proposes and implements a REST API security module for SDN controller based on OAuth 2.0, and presents novel access control parameters to cope with the granular resources introduced by SDN.
Abstract: Implementing a REST API for SDN is quite challenging compared to conventional web services. First, the state transfers in SDN are more complex among network devices, controllers, and applications. Second, SDN provides more granular resources in both the controller and the network device itself. Those challenges require SDN to have a proper REST API security definition, which is currently not available in most SDN controllers. In this paper, we propose and implement a REST API security module for an SDN controller based on OAuth 2.0. We answer the SDN REST API security challenges by presenting novel access control parameters to cope with the granular resources introduced by SDN. Our prototype maintains the best trade-off between performance and safety, generating a maximum of 15% overhead during our benchmark. It also offers customizable and flexible access control for the network in various use cases.
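A minimal sketch of the kind of scope-based access check such a module enforces on each REST call follows. It is purely illustrative: the token store, scope names, and per-switch granularity are assumptions for the example, not the paper's actual access control parameters.

```python
# Hypothetical in-memory store: bearer token -> granted scopes and covered switches.
# In a real OAuth 2.0 deployment, tokens would be issued and validated by an
# authorization server rather than held in a dict.
TOKEN_GRANTS = {
    "abc123": {"scopes": {"flow:read", "flow:write"}, "switches": {"sw1"}},
}

def authorize(token, required_scope, switch):
    """Permit a REST call only if the bearer token exists, carries the
    required scope, and covers the targeted network device."""
    grant = TOKEN_GRANTS.get(token)
    if grant is None:
        return False
    return required_scope in grant["scopes"] and switch in grant["switches"]
```

The per-switch check illustrates the "granular resources" point: in SDN, authorization must reach down to individual devices and flow tables, not just whole API endpoints.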

Journal ArticleDOI
TL;DR: An agent-based approach for data and energy management in an SG and CoDA, a correlation-based data aggregation technique designed for the AMI, which employs fuzzy logic to evaluate the correlation between several messages received from Smart Meters.
Abstract: One of the requirements of a smart grid (SG) is making the electrical network and its subsystems aware of their condition. The deployment of various sensing devices plays an essential part in achieving this goal. Nevertheless, data generated by this deployment needs to be well managed so that it can be leveraged for operational improvement. Data aggregation is perceived as an important technique for managing data in the SG in general, and in its Advanced Metering Infrastructure (AMI) in particular. Indeed, data aggregation techniques have been used in order to reduce communication overhead in SG networks. However, in order to fully take advantage of the aggregation process, some level of intelligence should be introduced at concentrator nodes to make the network more responsive to local conditions. Moreover, by using a more meaningful aggregation technique, entities can be accurately informed of any disturbance. This paper contributes an agent-based approach for data and energy management in an SG. It also proposes CoDA, a correlation-based data aggregation technique designed for the AMI. CoDA employs fuzzy logic to evaluate the correlation between several messages received from Smart Meters (SMs). Analysis and simulation results show the benefits of the proposed approach w.r.t. both packet concatenation and no aggregation approaches.

Journal ArticleDOI
TL;DR: A peer selection strategy which manages to build a minimum delay overlay using three different stages of overlay construction and it is demonstrated that the stability of the system also improves during peer churn.
Abstract: Peer-to-peer (P2P) live streaming systems have gained popularity due to the self-scalability property of the P2P overlay networks. In P2P live streaming, peers retrieve stream content from other peers in the system. Therefore, the peer selection strategy is a fundamental element to build an overlay which manages the playback delay and startup delay experienced by the peers. In this paper, we propose a peer selection strategy which manages to build a minimum delay overlay using three different stages of overlay construction. In the first stage, the tracker suggests some peers as prospective partners to a new peer. In the second stage, the peer selects its partners out of these peers such that delay is minimized. The third stage is the topology adaptation phase of peers, where peers reposition themselves in the overlay to maintain minimum delay during peer churn. In the proposed peer selection strategy, peers are selected in all the stages based on parameters such as propagation delay, upload capacity, buffering duration and buffering level. The proposed strategy is compared with two existing strategies in the literature: Fast-Mesh (Ren et al. in IEEE Trans Multimed 11: 1446, 2009) and Hybrid live p2p streaming protocol (Hammami et al., 2014) using simulations. Our results show that playback delay and startup delay are reduced significantly with the help of the proposed strategy. We demonstrate that the stability of the system also improves during peer churn.

Journal ArticleDOI
TL;DR: A new auction framework for the spectrum markets is proposed, called aDaptive and Economically robust Auction-based Leasing (DEAL), that keeps all the benefits of TASG while improving the utility (or revenue) of the participants.
Abstract: Over the recent decade, cognitive radio networks have received much attention as an alternative to the traditional static spectrum allocation policy since the licensed spectrum channels are not being used efficiently. The most critical issue of cognitive radio networks is how to distribute the idle spectrum channels to the secondary users opportunistically. The auction-based market is desirable for the trade of idle spectrum channels since the secondary users can purchase a channel in a timely manner and the licensed primary users can earn additional profit while not using the channels. Among the auction algorithms proposed for the spectrum market, we focus on the TASG framework, which consists of two nested auction algorithms, because it enables the group-buying of spectrum channels for secondary users with limited budgets, and possesses many positive properties such as budget-balance, individual rationality and truthfulness. However, the TASG framework is not very attractive to the market participants since the seller earns little revenue and the buyer obtains low utility. In this paper, we propose a new auction framework for the spectrum markets, called aDaptive and Economically robust Auction-based Leasing (DEAL), that keeps all the benefits of TASG while improving the utility (or revenue) of the participants. To this end, we develop an enhanced inner-auction algorithm, called the Global Auction algorithm in our DEAL framework, and adapt the involved parameters dynamically based on the previous bids from the potential buyers. Simulation results demonstrate that our framework significantly outperforms the previous TASG.

Journal ArticleDOI
TL;DR: A large-scale questionnaire is presented which was answered by experts in the field, evaluating the relevance of each individual topic for the next five years, and an updated version of the taxonomy is proposed.
Abstract: Network and service management is an established research field within the general area of computer networks. A few years ago, an initial taxonomy, organizing a comprehensive list of terms and topics, was established through interviews with experts from both industry and academia. This taxonomy has since been used to better partition standardization efforts, identify classes of managed objects and improve the assignment of reviewers to papers submitted in the field. Because the field of network and service management is rapidly evolving, a biyearly update of the taxonomy was proposed. In this paper, a large-scale questionnaire answered by experts in the field is presented, evaluating the relevance of each individual topic for the next five years. Missing topics, which are likely to become relevant over the next few years, are identified as well. Furthermore, an analysis is performed of the records of papers submitted to major conferences in the area. Based on the obtained results, an updated version of the taxonomy is proposed.

Journal ArticleDOI
TL;DR: A combinatorial optimal reverse auction (CORA) mechanism, which efficiently selects and utilizes available high-end SDs on the basis of available resources for offloading purposes and decides the optimal pricing policy for the selected SDs is presented.
Abstract: The explosive growth of smart devices has led to the evolution of multimedia data (mainly video) services in mobile networks. It has attracted many mobile network operators (MNOs) to deploy novel network architectures and develop effective economic policies. Mobile data offloading through smart devices (SDs) by exploiting device-to-device (D2D) communications can significantly reduce network congestion and enhance quality of service at a lower cost, which is the key requirement of upcoming 5G networks. This reasonable-cost solution is useful for attracting mobile users to participate in the offloading process by paying them proper incentives; it is beneficial for MNOs as well as mobile users. Moreover, D2D communications promise to be one of the prominent services for 5G networks. In this paper, we present a combinatorial optimal reverse auction (CORA) mechanism, which efficiently selects and utilizes available high-end SDs on the basis of available resources for offloading purposes. It also decides the optimal pricing policy for the selected SDs. The efficiency of CORA has been realized in terms of bandwidth and storage demand. Subsequently, we implement caching in SDs, eNodeB (eNB), and the evolved packet core (EPC) with the help of our novel video dissemination cache update algorithm to solve the latency or delay issues in the offloading process. Due to its high popularity, we specifically focus on video data. Simulation results show that the proposed SD caching scenario curtails the delay by up to 75% and the combined cache (CC) scenario slashes the delay by 15 to 57%. We also scrutinize the video downloading time performance of various cache scenarios (i.e., CC, EPC cache, eNB cache, and SD cache scenarios).
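A reverse auction for offloading can be sketched with a greedy winner-determination rule: select the smart devices asking the lowest price per unit of offered bandwidth until the operator's offload demand is covered. This is a hypothetical simplification of CORA (which solves a combinatorial problem with an optimal pricing policy); the bid fields are assumptions for illustration.

```python
def reverse_auction(bids, demand):
    """Greedy winner determination for a reverse auction: devices bid
    (bandwidth offered, asking price); pick the cheapest per-unit
    offers until the operator's bandwidth demand is met."""
    chosen, covered, cost = [], 0, 0.0
    for dev in sorted(bids, key=lambda d: d["price"] / d["bandwidth"]):
        chosen.append(dev["id"])
        covered += dev["bandwidth"]
        cost += dev["price"]
        if covered >= demand:
            return chosen, cost
    return None  # demand cannot be met with the submitted bids
```

The greedy per-unit-price rule is a common baseline; an optimal combinatorial mechanism would instead search over subsets of bids.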

Journal ArticleDOI
TL;DR: A quantized Hopfield neural network with an augmented Lagrange multiplier method (MEDCCN-QHN) is proposed to minimize the energy consumption in CCN while being aware of QoS in terms of imposed delay; the numerical results show that the method achieves a better delay profile than the optimal energy-efficient algorithm, with near-optimal energy consumption.
Abstract: The Internet infrastructure is being redesigned at the core network layer, shifting its focus from hosts to contents. To this end, content centric networking (CCN) has been proposed as one of the most effective architectures, whose in-network caching opens new possibilities for energy efficiency in content dissemination. However, in energy-efficient CCN, less popular contents are cached near the origin server; therefore, in delay-sensitive applications with low popularity, this leads to dropping delayed chunks, increasing energy waste, and degrading the quality of service (QoS). In the present paper, the energy consumption in CCN is minimized while remaining aware of QoS in terms of imposed delay. The minimization is performed through integer linear programming by considering most of the energy-consuming components. However, since this problem is NP-hard, a quantized Hopfield neural network with an augmented Lagrange multiplier method (MEDCCN-QHN) is proposed to derive the solution. The numerical results show that MEDCCN-QHN achieves a better delay profile than the optimal energy-efficient algorithm, together with near-optimal energy consumption. Moreover, the method is fast due to its parallel execution capability.

Journal ArticleDOI
TL;DR: The properties of such troubleshooting cases and training sets are studied and a method based on model fitting is proposed to extract a statistical model that can be used to generate vectors that emulate the network behavior in the presence of faults.
Abstract: Self-Organizing Networks (SON) add automation to the operation and maintenance of mobile networks. Self-healing is the SON function that performs automated troubleshooting. Among other functions, self-healing performs automatic diagnosis (or root cause analysis), that is, the task of identifying the most probable fault causes in problematic cells. For training the automatic diagnosis functionality based on decision-support systems, supervised learning algorithms usually extract the knowledge from a training set made up of solved troubleshooting cases. However, the lack of such sets of real solved cases is the bottleneck in the design of realistic diagnosis systems. In this paper, the properties of such troubleshooting cases and training sets are studied. Subsequently, a method based on model fitting is proposed to extract a statistical model that can be used to generate vectors that emulate the network behavior in the presence of faults. These emulated vectors can then be used to evaluate novel diagnosis systems. In order to evaluate the feasibility of the proposed approach, an LTE fault dataset has been modeled, based on both the analysis of real cases collected over two months and a network simulator. In addition, the obtained baseline model can be very useful for the research community in the area of automatic diagnosis.
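The model-fitting idea can be sketched as fitting an independent Gaussian per KPI, per fault cause, from the solved cases, and then sampling synthetic KPI vectors from the fitted model. The KPI names and the independence assumption are illustrative only; the paper's statistical model of LTE faults is richer than this.

```python
import random
import statistics

def fit_fault_model(labelled_cases):
    """Fit an independent Gaussian (mean, stdev) to each KPI, per fault
    cause, from solved troubleshooting cases: cause -> list of KPI dicts."""
    model = {}
    for cause, cases in labelled_cases.items():
        kpis = cases[0].keys()
        model[cause] = {
            k: (statistics.mean(c[k] for c in cases),
                statistics.stdev(c[k] for c in cases))
            for k in kpis
        }
    return model

def emulate_case(model, cause, rng=random):
    """Generate one synthetic KPI vector emulating the given fault cause."""
    return {k: rng.gauss(mu, sigma)
            for k, (mu, sigma) in model[cause].items()}
```

Such emulated vectors can then feed a diagnosis classifier's evaluation without exposing real operator data.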

Journal ArticleDOI
TL;DR: An energy-aware joint management framework for geo-distributed data centers and their interconnection network, based on virtual machine migration and formulated using mixed integer linear programming, is proposed; the results show that significant energy cost savings can be achieved compared to a baseline strategy.
Abstract: Every time an Internet user downloads a video, shares a picture, or sends an email, his/her device addresses a data center and often several of them. These complex systems feed the web and all Internet applications with their computing power and information storage, but they are very energy hungry. The energy consumed by Information and Communication Technology (ICT) infrastructures is currently more than 4% of the worldwide consumption and it is expected to double in the next few years. Data centers and communication networks are responsible for a large portion of the ICT energy consumption, and this has stimulated in recent years a research effort to reduce or mitigate their environmental impact. Most of the proposed approaches tackle the problem by separately optimizing the power consumption of the servers in data centers and of the network. However, the Cloud computing infrastructure of most providers, which includes traditional telcos that are extending their offer, is rapidly evolving toward geographically distributed data centers strongly integrated with the network interconnecting them. Distributed data centers not only bring services closer to users with better quality, but also provide opportunities to improve energy efficiency by exploiting the variation of prices in different time zones, the locally generated green energy, and the storage systems that are becoming popular in energy networks. In this paper, we propose an energy-aware joint management framework for geo-distributed data centers and their interconnection network. The model is based on virtual machine migration and formulated using mixed integer linear programming. It can be solved using state-of-the-art solvers such as CPLEX in reasonable time. The proposed approach covers various aspects of Cloud computing systems. In addition, it jointly manages the use of green and brown energy using energy storage technologies. The obtained results show that significant energy cost savings can be achieved compared to a baseline strategy, in which data centers do not collaborate to reduce energy and do not use the power coming from renewable resources.
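The intuition of exploiting locally generated green energy and regional price differences can be sketched as a marginal-cost comparison when deciding where to place (or migrate) a VM. This is a toy sketch, not the paper's MILP formulation: the site fields and the assumption that spare green capacity is free are illustrative.

```python
def cheapest_site(sites, vm_power_kw):
    """Place a VM at the data center with the lowest marginal energy
    cost: spare local green capacity is used first (assumed free),
    and brown energy covers the remainder at the local price."""
    def marginal_cost(site):
        brown_kw = max(0.0, vm_power_kw - site["green_kw_free"])
        return brown_kw * site["brown_price_per_kwh"]
    return min(sites, key=marginal_cost)["name"]
```

The full MILP additionally accounts for migration traffic on the interconnection network, energy storage, and time-varying prices, which this one-shot comparison ignores.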

Journal ArticleDOI
TL;DR: Comprehensive performance evaluation indicates that, compared with other well-known AQM schemes of comparable complexity, CHORD provides enhanced TCP goodput and intra-protocol fairness and is well-suited for fair bandwidth allocation to aggregate traffic across a wide range of packet and buffer sizes at a bottleneck router.
Abstract: The end-to-end congestion control mechanism of transmission control protocol (TCP) is critical to the robustness and fairness of the best-effort Internet. Since it is no longer practical to rely on end-systems to cooperatively deploy congestion control mechanisms, the network itself must now participate in regulating its own resource utilization. To that end, fairness-driven active queue management (AQM) is promising in sharing the scarce bandwidth among competing flows in a fair manner. However, most of the existing fairness-driven AQM schemes cannot provide efficient and fair bandwidth allocation while being scalable. This paper presents a novel fairness-driven AQM scheme, called CHORD (CHOKe with recent drop history), that seeks to maximize fair bandwidth sharing among aggregate flows while retaining scalability in terms of the minimum possible state space and per-packet processing costs. Fairness is enforced by identifying and restricting high-bandwidth unresponsive flows at the time of congestion with a lightweight control function. The identification mechanism consists of a fixed-size cache to capture the history of recent drops, with a state space equal to the size of the cache. The restriction mechanism is stateless, with two matching trial phases and an adaptive drawing factor to take a strong punitive measure against identified high-bandwidth unresponsive flows in proportion to the average buffer occupancy. Comprehensive performance evaluation indicates that, compared with other well-known AQM schemes of comparable complexity, CHORD provides enhanced TCP goodput and intra-protocol fairness and is well-suited for fair bandwidth allocation to aggregate traffic across a wide range of packet and buffer sizes at a bottleneck router.
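The core matching-trial idea can be sketched as a CHOKe-style comparison augmented with a fixed-size cache of recently dropped flows: on congestion, an arriving packet is compared against a randomly drawn queued packet and against the drop history; a match flags the flow as likely unresponsive and both packets are dropped. This simplified sketch omits CHORD's two matching-trial phases and adaptive drawing factor.

```python
import random
from collections import deque

class ChokeWithHistory:
    """Toy CHOKe-style AQM with a recent-drop history cache
    (a simplification of CHORD, for illustration only)."""

    def __init__(self, history_size=8, rng=random):
        self.queue = []                          # (flow_id, payload) pairs
        self.history = deque(maxlen=history_size)  # recently dropped flows
        self.rng = rng

    def on_arrival(self, flow_id, payload, congested):
        """Return True if the packet is enqueued, False if dropped."""
        if congested and self.queue:
            victim_idx = self.rng.randrange(len(self.queue))
            match = (self.queue[victim_idx][0] == flow_id
                     or flow_id in self.history)
            if match:
                if self.queue[victim_idx][0] == flow_id:
                    del self.queue[victim_idx]  # drop the matched queued packet
                self.history.append(flow_id)    # remember the offending flow
                return False                    # drop the arrival as well
        self.queue.append((flow_id, payload))
        return True
```

A flow sending many packets is likely to match its own queued packets during congestion, so it is penalized in proportion to its share of the buffer, without per-flow state beyond the small history cache.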

Journal ArticleDOI
TL;DR: Simulations show that the controller placement approach can meet the reliability and delay requirements with an appropriate controller allocation scheme, and that the backup method can improve the survivability of backup controllers and control paths while ensuring the performance of the control network.
Abstract: In software-defined networking (SDN), the communication between controllers and switches is very important, since a switch can only work by relying on the flow tables received from its controller. Therefore, how to ensure the reliability of the communication between controllers and switches is a key problem in SDN. In this paper, we study this problem from two aspects: controller placement and resource backup. First, in order to implement reliable communication and meet the required propagation delay between controllers and switches, a min-cover based controller placement approach is proposed. Then, in order to protect both controllers and control paths from regional failures, a backup method based on an exponential decay failure model is proposed, which considers the regional influence and the survivability of backup controllers and control paths. Simulations show that our controller placement approach can meet the reliability and delay requirements with an appropriate controller allocation scheme, and that our backup method can improve the survivability of backup controllers and control paths while ensuring the performance of the control network.
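A min-cover placement under a propagation delay bound can be sketched with the classic greedy set-cover heuristic: repeatedly open the candidate controller site that covers the most still-uncovered switches within the bound. This illustrates the general idea, not necessarily the paper's exact formulation.

```python
def place_controllers(switches, candidates, delay, max_delay):
    """Greedy set cover: delay[c][s] is the propagation delay from
    candidate site c to switch s; return the chosen controller sites."""
    uncovered = set(switches)
    placed = []
    while uncovered:
        # Pick the site covering the most uncovered switches in bound.
        best = max(candidates,
                   key=lambda c: len({s for s in uncovered
                                      if delay[c][s] <= max_delay}))
        covered = {s for s in uncovered if delay[best][s] <= max_delay}
        if not covered:
            raise ValueError("delay bound infeasible for some switch")
        placed.append(best)
        uncovered -= covered
    return placed
```

Greedy set cover gives a logarithmic approximation guarantee, which is why it is a standard baseline for controller placement with coverage constraints.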

Journal ArticleDOI
TL;DR: This is the first time that a combination of neural networks, analytic hierarchy process, and software agents has been applied to manage a spare parts inventory and prioritize incidents in such a complex scenario as the optical transmission network of a major telecommunications operator.
Abstract: In this paper we describe how to improve the spare parts management process in a telecommunications operator. Several techniques, such as neural networks, the analytic hierarchy process, and software agents, are used to implement a software prototype that has been validated in an operational environment with a concept trial. Better working conditions were reached by freeing up the technicians for other functions, since they no longer had to carry out the tedious and stressful activities included in the spare parts process. Such tasks were completed within the time established in the customers' service level agreements to avoid penalties. Operating expenditure was cut significantly. An increase in overall industrial process performance was also accomplished, as the spare parts management time dropped. This is, as far as we know, the first time that a combination of these techniques has been applied to manage a spare parts inventory and prioritize incidents in such a complex scenario as the optical transmission network of a major telecommunications operator. The framework might be used in other domains, such as the hardware replacements that are required in some critical operational environments.
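The analytic hierarchy process step, deriving priority weights for incidents from a pairwise comparison matrix, can be sketched with the row geometric-mean approximation. This is one common way to approximate the AHP principal eigenvector; the abstract does not specify which computation the prototype uses.

```python
import math

def ahp_weights(pairwise):
    """Approximate the AHP priority vector of a reciprocal pairwise
    comparison matrix using the row geometric-mean method."""
    n = len(pairwise)
    geo_means = [math.prod(row) ** (1.0 / n) for row in pairwise]
    total = sum(geo_means)
    return [g / total for g in geo_means]
```

For a 2x2 matrix saying criterion 1 is 3 times as important as criterion 2, the method yields weights 0.75 and 0.25; larger matrices would also be checked for consistency before the weights are trusted.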

Journal ArticleDOI
TL;DR: An architecture that extends current cloud management software to enable the configuration of network functions and exploits the use of additional software components, i.e. translators and gateways, which are network function-agnostic and not specific for a particular type of network function, and do not require any change in the network functions.
Abstract: Network function virtualization has enabled data center providers to offer new service provisioning models. Through the use of data center management software (cloud managers), providers allow their tenants to customize their virtual network infrastructure, enabling them to create a network topology that includes network functions (e.g., routers, firewalls), chosen either from the natively supported catalog or from third parties. In order to deploy a ready-to-go service, providers also have to take care of pushing functional configurations into each network function (e.g., IP addresses for routers and policy rules for firewalls). This paper proposes an architecture that extends current cloud management software to enable the configuration of network functions. We propose a model-based approach that exploits additional software components, i.e., translators and gateways, which are network function-agnostic (vendor-neutral and not specific to a particular type of network function) and do not require any change in the network functions themselves. A prototype of this solution has also been implemented and tested, in order to validate our approach and evaluate its effectiveness in the configuration phase.
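The translator idea, one vendor-neutral model mapped to several concrete configurations, can be sketched as follows. Both output formats and the vendor names here are invented for illustration; in the proposed architecture, real translators would be registered per vendor rather than hard-coded.

```python
def translate(neutral_model, vendor):
    """Translate a vendor-neutral firewall model into one vendor's
    concrete configuration syntax (both syntaxes are hypothetical)."""
    rules = neutral_model["rules"]
    if vendor == "vendor_a":
        # Hypothetical CLI-style output.
        return [("deny" if r["action"] == "drop" else "permit")
                + f" ip {r['src']} {r['dst']}" for r in rules]
    if vendor == "vendor_b":
        # Hypothetical JSON-style output.
        return [{"source": r["src"], "destination": r["dst"],
                 "verdict": r["action"].upper()} for r in rules]
    raise ValueError(f"no translator registered for {vendor}")
```

A gateway component would then push the translated configuration into the running network function over whatever management channel it exposes.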

Journal ArticleDOI
TL;DR: A new approach is presented to construct a simulation model whose output can be used as an alternative method to create demand functions, avoiding the use of arbitrary and predefined demand functions.
Abstract: The evaluation of pricing approaches for mobile data services proposed in the literature can rarely be done in practice, so evaluation by simulation is the most common practice. In these proposals, the demand and utility functions that describe the reaction of users to offered service prices are traditional and arbitrary (linear, exponential, logit, etc.). In this paper, we present a new approach to construct a simulation model whose output can be used as an alternative method to create demand functions, avoiding the use of arbitrary and predefined demand functions. However, it is out of the scope of this paper to use them to propose pricing approaches, since the main objective of this article is to show the difference between the arbitrary demand functions commonly used and our approach, which is derived from users' data. The starting point in this paper is data offered by Eurostat, although other data sources could be used for the same purpose, with the aim of offering more realistic values that characterize more appropriately what users are demanding. In this sense, demographic and psychographic characteristics of the users, as well as application usage profiles, are included as parameters in the users' profiles. These characteristics and usage profiles make up the user profile that influences users' behavior in the model. Using the same procedure, Mobile Network Operators could feed their customers' data into the model and use it to validate their pricing approaches more accurately before their real implementation, or to simulate future or hypothetical scenarios. It also makes it possible to segment users and derive insights for decision-making. The results presented in this paper refer to a simple case study, since the purpose of the paper is to show how the proposed model works and to reveal its differences with the arbitrary demand functions in use. Of course, the results depend on the set of parameters assigned to characterize each user's profile.
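The difference from arbitrary closed-form demand functions can be illustrated by building an empirical demand function directly from observed willingness-to-pay data, a simplification of what the full simulation model produces from user profiles.

```python
from bisect import bisect_left

def empirical_demand(willingness_to_pay):
    """Build a demand function from observed users' willingness to pay:
    demand(p) = number of users whose threshold is at least p, instead
    of assuming a closed form such as linear, exponential or logit."""
    wtp = sorted(willingness_to_pay)

    def demand(price):
        # Users with willingness >= price would buy at this price.
        return len(wtp) - bisect_left(wtp, price)

    return demand
```

An operator could feed real customer data through such a construction and compare the resulting step-shaped demand curve against the smooth arbitrary functions typically assumed in pricing simulations.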