Design, Measurement and Management of Large-Scale IP Networks: Bridging the Gap Between Theory and Practice
04 Dec 2008
TL;DR: This title sets out the design and management principles of large-scale IP networks and the need for these tasks to be underpinned by actual measurements, and discusses the types of measurements available in IP networks.
Abstract: Designing efficient Internet Protocol (IP) networks and maintaining them effectively poses a range of challenges, but in this highly competitive industry it is crucial that these are overcome. Weaving together theory and practice, this title sets out the design and management principles of large-scale IP networks, and the need for tasks to be underpinned by actual measurements. Discussions of the types of measurements available in IP networks are included, along with the ways in which they can assist both in the design phase as well as in the monitoring and management of IP applications. Other topics covered include IP network design, traffic engineering, network and service management and security. A valuable resource for graduate students and researchers in electrical and computer engineering and computer science, this is also an excellent reference for network designers and operators in the communication industry.
TL;DR: An open source Java-based software tool that automates the elaboration of performance evaluation tests for user-defined or built-in network design algorithms, network recovery schemes, connection-admission-control systems, or dynamic provisioning algorithms for time-varying traffic is presented.
Abstract: The plethora of network planning results published in top-ranked journals is a good sign of the success of the network planning research field. Unfortunately, it is often difficult for network carriers and ISPs to reproduce these investigations on their networks. This is partly due to the absence of a software planning tool meeting the requirements of industry and academia, one that could make the adaptation and validation of planning algorithms less time consuming. We describe how a paradigm shift to an open source view of the network planning field emphasizes the power of distributed peer review and transparency to create high-quality software at an accelerated pace and lower cost. Then we present Net2Plan, an open source Java-based software tool. Built on top of a technology-agnostic network representation, it automates the elaboration of performance evaluation tests for user-defined or built-in network design algorithms, network recovery schemes, connection-admission-control systems, or dynamic provisioning algorithms for time-varying traffic. The Net2Plan philosophy encourages code reuse through an open repository of network planning resources. In this article, a case study in a multilayer IP-over-WDM network is presented to illustrate the potential of Net2Plan. We cover standard CAPEX studies, and more advanced aspects such as a resilience analysis of the network under random independent failures and disaster scenarios, and an energy efficiency assessment of "green" schemes that switch off parts of the network during low load periods. All the planning algorithms in this article are publicly available on the Net2Plan website.
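As a rough illustration of the kind of computation such planning tools automate (this is a self-contained sketch, not Net2Plan's actual API), the snippet below routes a small traffic matrix over shortest paths and accumulates the carried load per link, which is the usual starting point for a capacity or CAPEX study:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over a dict {node: {neighbor: weight}}; returns the node list."""
    dist, prev = {src: 0}, {}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

def link_loads(graph, demands):
    """Route every (src, dst, volume) demand on its shortest path and
    accumulate the carried traffic per directed link."""
    loads = {}
    for src, dst, vol in demands:
        path = shortest_path(graph, src, dst)
        for a, b in zip(path, path[1:]):
            loads[(a, b)] = loads.get((a, b), 0) + vol
    return loads

# Toy 4-node ring with unit link weights (illustrative topology only).
g = {1: {2: 1, 4: 1}, 2: {1: 1, 3: 1}, 3: {2: 1, 4: 1}, 4: {1: 1, 3: 1}}
print(link_loads(g, [(1, 3, 10), (2, 4, 5)]))
```

Comparing the accumulated per-link loads against installed capacities is then a one-line utilisation check; a real planning tool layers recovery, multilayer and cost models on top of this primitive.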
TL;DR: An analytical model is presented to estimate the energy consumption of an Energy Efficient Ethernet link, based on simple traffic parameters, that is validated through simulation and experimental data.
Abstract: The recently approved Energy Efficient Ethernet standard IEEE 802.3az achieves energy savings by using a low power mode when the link is idle. However, those savings heavily depend on the traffic patterns, due to the overhead inherent in transitions between active and low power modes. This makes it impractical to estimate energy savings through measurements or simulations in all relevant scenarios. In this letter we present an analytical model to estimate the energy consumption of an Energy Efficient Ethernet link, based on simple traffic parameters. The model is validated through simulation and experimental data.
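A minimal sketch of this style of analytical estimate, under a simplified assumption that the link draws active power while transmitting and while transitioning between modes, and low power otherwise. The closed form, the parameter names, and the example timing values are illustrative assumptions, not the letter's exact model:

```python
def eee_avg_power(util, cycles_per_s, t_wake, t_sleep,
                  p_active=1.0, p_low=0.1):
    """Rough average-power estimate for an Energy Efficient Ethernet
    (IEEE 802.3az) link, normalised so active power = 1.0.

    util         -- long-run link utilisation in [0, 1]
    cycles_per_s -- active <-> low-power transition cycles per second
    t_wake       -- wake-up time (s), assumed spent at active power
    t_sleep      -- go-to-sleep time (s), assumed spent at active power
    """
    # Fraction of time lost to transition overhead (assumed at active power).
    overhead = cycles_per_s * (t_wake + t_sleep)
    # Fraction of time the PHY draws active power, capped at 1.
    f_active = min(1.0, util + overhead)
    return f_active * p_active + (1 - f_active) * p_low

# Example: 10% utilisation, 1000 sleep/wake cycles per second,
# microsecond-scale transition times (placeholder values).
print(eee_avg_power(0.1, 1000, 4.48e-6, 2.88e-6))
```

The model captures the abstract's main point: savings depend heavily on the traffic pattern, because frequent transitions inflate `f_active` well above the raw utilisation.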
01 Aug 2012
TL;DR: This thesis redefines IA architecture and creates models that recognise the integrated, complex issues within technical to organisational interoperability and the assurance that the right information is delivered to the right people at the right time in a trustworthy environment and identifies the need for IA practitioners and a necessary IA education for all Cyber Warriors.
Abstract: The military has 5 domains of operations: Land, Sea, Air, Space and now Cyber. This 5th Domain is a heterogeneous network (of networks) of Communication and Information Systems (CIS) which were designed and accredited to meet Netcentric capability requirements; to be robust, secure and functional to the organisation's needs. Those needs have changed. In the globalised economy and across the Battlespace, organisations now need to share information. Keeping our secrets secret has been the watchword of Information Security and the accreditation process; sharing them securely across coalition, geo-physically dispersed networks has become the cyber security dilemma. The diversity of Advanced Persistent Threats, the contagion of Cyber Power and the insecurity of coalition Interoperability have generated a plethora of vulnerabilities in the Cyber Domain. Necessity (fiscal and time constraints) has created security gaps in deployed CIS architectures through their interconnections. In this federated environment for superior decision making and shared situational awareness, bridging the (new capability) gaps requires more than just improving the security (Confidentiality, Integrity and Availability) mechanisms at the technical system interfaces. The solution needs a new approach to creating and understanding a trusted, socio-technical CIS environment and to how these (sensitive) information assets should be managed, stored and transmitted. Information Assurance (IA) offers a cohesive architecture for coalition system (of systems) interoperability: the identification of strategies, skills and business processes required for effective information operations, management and exploitation. IA provides trusted, risk-managed socio-technical (Enterprise) infrastructures which are safe, resilient, dependable and secure.
This thesis redefines IA architecture and creates models that recognise the integrated, complex issues within technical to organisational interoperability and the assurance that the right information is delivered to the right people at the right time in a trustworthy environment and identifies the need for IA practitioners and a necessary IA education for all Cyber Warriors.
31 Aug 2015
TL;DR: This paper introduces a new representation of path variables which can be seen as a lightweight relaxation of usual representations and shows how to define and implement fast propagators on these new variables while reducing the memory impact of classical traffic engineering models.
Abstract: Segment routing is an emerging network technology that exploits the existence of several paths between a source and a destination to spread the traffic in a simple and elegant way. The major commercial network vendors already support segment routing, and several Internet actors are ready to use segment routing in their network. Unfortunately, by changing the way paths are computed, segment routing poses new optimization problems which cannot be addressed with previous research contributions. In this paper, we propose a new hybrid constraint programming framework to solve traffic engineering problems in segment routing. We introduce a new representation of path variables which can be seen as a lightweight relaxation of usual representations. We show how to define and implement fast propagators on these new variables while reducing the memory impact of classical traffic engineering models. The efficiency of our approach is confirmed by experiments on real and artificial networks of big Internet actors.
Cites background from "Design, Measurement and Management ..."
...For this reason, controlling the paths followed by traffic has become an increasingly critical challenge for network operators, especially those managing large networks....
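The 2-segment routing idea underlying such traffic engineering can be sketched by brute force. This is an illustration of the concept only, not the paper's constraint programming model or its path-variable representation: for a demand from s to d, try every intermediate segment m, route the traffic over SP(s, m) followed by SP(m, d), and keep the midpoint that minimises the worst link utilisation.

```python
def floyd_warshall(n, edges):
    """All-pairs shortest paths; returns distance and next-hop matrices."""
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    nxt = [[j if i == j else None for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        dist[u][v], nxt[u][v] = w, v
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def sp_links(nxt, s, d):
    """Directed links on the shortest path from s to d."""
    links, u = [], s
    while u != d:
        v = nxt[u][d]
        links.append((u, v))
        u = v
    return links

def best_midpoint(n, edges, cap, load, s, d, vol):
    """Pick the intermediate segment m minimising the worst link
    utilisation when the demand follows SP(s, m) + SP(m, d)."""
    _, nxt = floyd_warshall(n, edges)
    best = None
    for m in range(n):
        links = sp_links(nxt, s, m) + sp_links(nxt, m, d)
        trial = dict(load)
        for e in links:
            trial[e] = trial.get(e, 0) + vol
        worst = max(trial[e] / cap[e] for e in trial)
        if best is None or worst < best[0]:
            best = (worst, m)
    return best  # (max utilisation, chosen midpoint)

# Toy 4-node topology: two 2-hop routes from 0 to 3, one already loaded.
edges = [(0, 1, 1), (1, 0, 1), (1, 3, 1), (3, 1, 1),
         (0, 2, 1), (2, 0, 1), (2, 3, 1), (3, 2, 1)]
cap = {(u, v): 10 for u, v, _ in edges}
worst, mid = best_midpoint(4, edges, cap, {(0, 1): 8}, 0, 3, 2)
print(mid, worst)
```

With link (0, 1) already carrying 8 of its 10 units, the brute force steers the new demand through midpoint 2, exactly the kind of detour a segment list expresses in one extra label; the paper's contribution is making this search scale with constraint programming rather than enumeration.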
TL;DR: A linear-regression model is provided that adjusts the amount of traffic that each network user contributes to the busy-hour traffic mean values, with a direct application to the problem of link capacity planning of IP networks.
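A hedged sketch of such a regression-based planning step, using made-up numbers: fit an ordinary least squares line relating user counts to aggregate busy-hour traffic, then provision the link for the forecast busy-hour mean plus headroom. The headroom factor and the data are illustrative assumptions, not taken from the paper:

```python
def fit_per_user_rate(users, busy_hour_mbps):
    """Ordinary least squares y = a + b*x; returns (a, b), where b estimates
    the busy-hour traffic contributed per additional user."""
    n = len(users)
    mx = sum(users) / n
    my = sum(busy_hour_mbps) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(users, busy_hour_mbps))
         / sum((x - mx) ** 2 for x in users))
    a = my - b * mx
    return a, b

def plan_capacity(a, b, expected_users, headroom=1.25):
    """Provision for the forecast busy-hour mean plus a headroom margin
    (the 25% margin is an arbitrary placeholder)."""
    return headroom * (a + b * expected_users)

# Hypothetical measurements: user counts vs. busy-hour traffic in Mb/s.
a, b = fit_per_user_rate([100, 200, 300, 400], [55, 105, 155, 205])
print(plan_capacity(a, b, expected_users=1000))
```

Given a subscriber forecast, the fitted per-user rate b turns directly into a link dimensioning rule, which is the "direct application to link capacity planning" the TL;DR refers to.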