Author

Matthew Roughan

Other affiliations: Melbourne Institute of Technology, RMIT University, AT&T
Bio: Matthew Roughan is an academic researcher at the University of Adelaide. His research focuses on topics including the Internet and traffic generation models. He has an h-index of 36 and has co-authored 180 publications receiving 7,397 citations. His previous affiliations include the Melbourne Institute of Technology and RMIT University.


Papers
Journal Article
TL;DR: The Internet Topology Zoo is a store of network data created from the information that network operators make public; it is the most accurate large-scale collection of network topologies available and includes meta-data that could not have been measured.
Abstract: The study of network topology has attracted a great deal of attention in the last decade, but has been hampered by a lack of accurate data. Existing methods for measuring topology have flaws, and arguments about the importance of these have overshadowed the more interesting questions about network structure. The Internet Topology Zoo is a store of network data created from the information that network operators make public. As such it is the most accurate large-scale collection of network topologies available, and includes meta-data that couldn't have been measured. With this data we can answer questions about network structure with more certainty than ever before; we illustrate its power through a preliminary analysis of the PoP-level topology of over 140 networks. We find a wide range of network designs not conforming as a whole to any obvious model.

1,333 citations
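The Topology Zoo distributes its maps as standard graph files (GraphML among them), so the kind of PoP-level analysis described above can be reproduced directly. A minimal sketch, assuming a locally downloaded file; the filename below is a placeholder, not a claim about the dataset layout:

```python
# Sketch: load one Topology Zoo network and summarise its structure.
# Assumes a GraphML file downloaded from topology-zoo.org; the filename
# "Aarnet.graphml" is a placeholder for whichever network you fetch.
import networkx as nx

g = nx.read_graphml("Aarnet.graphml")  # hypothetical local copy
print(g.number_of_nodes(), "nodes,", g.number_of_edges(), "links")

# Degree statistics: one of the simplest structural questions the Zoo
# lets us answer from ground truth rather than probe-based inference.
degrees = [d for _, d in g.degree()]
print("max degree:", max(degrees), "mean degree:", sum(degrees) / len(degrees))
```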

Proceedings Article
10 Jun 2003
TL;DR: A new method for practical and rapid inference of traffic matrices in IP networks from link load measurements, augmented by readily available network and routing configuration information is proposed, which is remarkably fast and accurate.
Abstract: A matrix giving the traffic volumes between origin and destination in a network has tremendous potential utility for network capacity planning and management. Unfortunately, traffic matrices are generally unavailable in large operational IP networks. On the other hand, link load measurements are readily available in IP networks. In this paper, we propose a new method for practical and rapid inference of traffic matrices in IP networks from link load measurements, augmented by readily available network and routing configuration information. We apply and validate the method by computing backbone-router to backbone-router traffic matrices on a large operational tier-1 IP network -- a problem an order of magnitude larger than any other comparable method has tackled. The results show that the method is remarkably fast and accurate, delivering the traffic matrix in under five seconds.

608 citations
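The core idea can be sketched in a few lines: start from a gravity-model estimate of the traffic matrix, then adjust it in a least-squares sense so it is consistent with the observed link loads y = Ax, where A is the routing matrix. A simplified, unweighted sketch with toy numbers, not the paper's full method:

```python
# Simplified sketch of gravity + least-squares traffic matrix inference.
# y = A @ x relates unknown OD demands x to measured link loads y via the
# 0/1 routing matrix A. The system is underdetermined (more OD pairs than
# links), so we pull the solution toward a gravity-model prior.
# All numbers are toy data, not from the paper.
import numpy as np

A = np.array([[1.0, 1.0, 0.0],   # link 1 carries OD flows 1 and 2
              [0.0, 1.0, 1.0]])  # link 2 carries OD flows 2 and 3
y = np.array([8.0, 7.0])               # measured link loads
x_gravity = np.array([5.0, 2.0, 4.0])  # gravity-model prior

# min ||x - x_gravity|| subject to A x = y: lstsq on the residual returns
# the minimum-norm correction to the prior.
dx, *_ = np.linalg.lstsq(A, y - A @ x_gravity, rcond=None)
x = x_gravity + dx
print(x)         # adjusted OD estimates
print(A @ x, y)  # link-load constraints now satisfied
```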

Proceedings Article
25 Oct 2004
TL;DR: It is argued that measurement-based automated Class of Service (CoS) mapping is an important practical problem that needs to be studied, and a solution framework for measurement-based classification of traffic for QoS based on statistical application signatures is outlined.
Abstract: The ability to provide different Quality of Service (QoS) guarantees to traffic from different applications is a highly desired feature for many IP network operators, particularly for enterprise networks. Although various mechanisms exist for providing QoS in the network, QoS is yet to be widely deployed. We believe that a key factor holding back widespread QoS adoption is the absence of suitable methodologies/processes for appropriately mapping the traffic from different applications to different QoS classes. This is a challenging task, because many enterprise network operators who are interested in QoS do not know all the applications running on their network, and furthermore, over recent years port-based application classification has become problematic. We argue that measurement-based automated Class of Service (CoS) mapping is an important practical problem that needs to be studied. In this paper we describe the requirements and associated challenges, and outline a solution framework for measurement-based classification of traffic for QoS based on statistical application signatures. In our approach the signatures are chosen in such a way as to make them insensitive to the particular application-layer protocol, capturing instead the way in which an application is used -- for instance, whether it is used interactively or for bulk-data transport. The resulting application signature can then be used to derive the network-layer signatures required to determine the CoS class for individual IP datagrams. Our evaluations, using traffic traces from a variety of network locations, demonstrate the feasibility and potential of the approach.

552 citations
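The key design point is that the signatures describe how an application behaves on the wire, not which protocol it speaks. A toy sketch of that idea; the features and thresholds are invented for illustration and are not from the paper:

```python
# Toy behaviour-based CoS classifier in the spirit of the paper: classify
# flows by usage pattern, not by port number. Features and thresholds
# below are illustrative only.

def cos_class(mean_pkt_size: float, mean_interarrival_s: float) -> str:
    """Map simple per-flow statistics to a coarse Class of Service."""
    if mean_pkt_size < 200 and mean_interarrival_s < 0.05:
        return "interactive"    # small packets exchanged frequently
    if mean_pkt_size > 1000:
        return "bulk-transfer"  # large packets, throughput-oriented
    return "default"

print(cos_class(120.0, 0.01))  # ssh-like behaviour  -> interactive
print(cos_class(1400.0, 0.2))  # file-transfer-like  -> bulk-transfer
```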

Journal Article
TL;DR: This article aims to make recent results of this process accessible beyond Internet specialists to the broader scientific community and to clarify several sources of basic methodological differences that are relevant beyond either the Internet or the two specific approaches focused on here (i.e., scale-free networks and highly optimized tolerance networks).
Abstract: The search for unifying properties of complex networks is popular, challenging, and important. For modeling approaches that focus on robustness and fragility as unifying concepts, the Internet is an especially attractive case study, mainly because its applications are ubiquitous and pervasive, and widely available expositions exist at every level of detail. Nevertheless, alternative approaches to modeling the Internet often make extremely different assumptions and derive opposite conclusions about fundamental properties of one and the same system. Fortunately, a detailed understanding of Internet technology combined with a unique ability to measure the network means that these differences can be understood thoroughly and resolved unambiguously. This article aims to make recent results of this process accessible beyond Internet specialists to the broader scientific community and to clarify several sources of basic methodological differences that are relevant beyond either the Internet or the two specific approaches focused on here (i.e., scale-free networks and highly optimized tolerance networks).

464 citations

Proceedings Article
16 Aug 2009
TL;DR: This work develops a novel spatio-temporal compressive sensing framework with two key components: a new technique called Sparsity Regularized Matrix Factorization (SRMF) that leverages the sparse or low-rank nature of real-world traffic matrices and their spatio-temporal properties, and a mechanism for combining low-rank approximations with local interpolation procedures.
Abstract: Many basic network engineering tasks (e.g., traffic engineering, capacity planning, anomaly detection) rely heavily on the availability and accuracy of traffic matrices. However, in practice it is challenging to reliably measure traffic matrices. Missing values are common. This observation brings us into the realm of compressive sensing, a generic technique for dealing with missing values that exploits the presence of structure and redundancy in many real-world systems. Despite much recent progress made in compressive sensing, existing compressive-sensing solutions often perform poorly for traffic matrix interpolation, because real traffic matrices rarely satisfy the technical conditions required for these solutions. To address this problem, we develop a novel spatio-temporal compressive sensing framework with two key components: (i) a new technique called Sparsity Regularized Matrix Factorization (SRMF) that leverages the sparse or low-rank nature of real-world traffic matrices and their spatio-temporal properties, and (ii) a mechanism for combining low-rank approximations with local interpolation procedures. We illustrate our new framework and demonstrate its superior performance in problems involving interpolation with real traffic matrices where we can successfully replace up to 98% of the values. Evaluation in applications such as network tomography, traffic prediction, and anomaly detection confirms the flexibility and effectiveness of our approach.

364 citations
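Stripped to its essentials, SRMF-style interpolation fits a low-rank factorization L R^T to the observed entries of the traffic matrix, regularizing the factor norms, and reads the missing entries off L R^T. A minimal gradient-descent sketch of that core objective, omitting the paper's spatio-temporal constraint matrices; toy data throughout:

```python
# Minimal sketch of low-rank interpolation with norm regularization, in
# the spirit of SRMF but without the paper's spatio-temporal constraints.
# Objective: ||M * (L R^T - X)||_F^2 + lam * (||L||_F^2 + ||R||_F^2),
# where M masks the observed entries. Toy data, plain gradient descent.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 3)) @ rng.random((3, 30))  # low-rank "true" matrix
M = rng.random(X.shape) < 0.3                  # ~30% of entries observed

rank, lam, step = 3, 0.1, 0.01
L = rng.standard_normal((20, rank)) * 0.1
R = rng.standard_normal((30, rank)) * 0.1

for _ in range(5000):
    E = M * (L @ R.T - X)  # residual on observed entries only
    # Gradient steps (factor of 2 absorbed into the step size).
    L, R = L - step * (E @ R + lam * L), R - step * (E.T @ L + lam * R)

X_hat = L @ R.T  # missing values read off the completed low-rank matrix
print("RMSE on missing entries:",
      np.sqrt(np.mean((X_hat[~M] - X[~M]) ** 2)))
```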


Cited by
Proceedings Article
22 Jan 2006
TL;DR: Some of the major results in random graphs and some of the more challenging open problems are reviewed, covering algorithmic and structural questions and touching on newer models, including those related to the WWW.
Abstract: We will review some of the major results in random graphs and some of the more challenging open problems. We will cover algorithmic and structural questions. We will touch on newer models, including those related to the WWW.

7,116 citations
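One of the classic results such a review covers is the sharp connectivity threshold of the Erdős–Rényi model G(n, p): for p well above ln(n)/n the graph is almost surely connected, and well below it almost surely not. A quick empirical illustration:

```python
# Empirical look at the Erdos-Renyi connectivity threshold p = ln(n)/n.
import math
import networkx as nx

n = 500
p_crit = math.log(n) / n
for factor in (0.5, 1.0, 2.0):
    g = nx.gnp_random_graph(n, factor * p_crit, seed=42)
    print(f"p = {factor} * ln(n)/n -> connected: {nx.is_connected(g)}")
```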

Journal Article
01 Jan 2015
TL;DR: This paper presents the key building blocks of an SDN infrastructure using a bottom-up, layered approach, with an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications.
Abstract: The Internet has led to the creation of a digital society, where (almost) everything is connected and is accessible from anywhere. However, despite their widespread adoption, traditional IP networks are complex and very hard to manage. It is both difficult to configure the network according to predefined policies, and to reconfigure it to respond to faults, load, and changes. To make matters even more difficult, current networks are also vertically integrated: the control and data planes are bundled together. Software-defined networking (SDN) is an emerging paradigm that promises to change this state of affairs, by breaking vertical integration, separating the network's control logic from the underlying routers and switches, promoting (logical) centralization of network control, and introducing the ability to program the network. The separation of concerns, introduced between the definition of network policies, their implementation in switching hardware, and the forwarding of traffic, is key to the desired flexibility: by breaking the network control problem into tractable pieces, SDN makes it easier to create and introduce new abstractions in networking, simplifying network management and facilitating network evolution. In this paper, we present a comprehensive survey on SDN. We start by introducing the motivation for SDN, explain its main concepts and how it differs from traditional networking, its roots, and the standardization activities regarding this novel paradigm. Next, we present the key building blocks of an SDN infrastructure using a bottom-up, layered approach. We provide an in-depth analysis of the hardware infrastructure, southbound and northbound application programming interfaces (APIs), network virtualization layers, network operating systems (SDN controllers), network programming languages, and network applications. We also look at cross-layer problems such as debugging and troubleshooting. In an effort to anticipate the future evolution of this new paradigm, we discuss the main ongoing research efforts and challenges of SDN. In particular, we address the design of switches and control platforms—with a focus on aspects such as resiliency, scalability, performance, security, and dependability—as well as new opportunities for carrier transport networks and cloud providers. Last but not least, we analyze the position of SDN as a key enabler of a software-defined environment.

3,589 citations
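The separation of concerns the survey describes can be caricatured in a few lines: a logically centralized controller computes forwarding policy and pushes match-action rules down to switches, which only match and forward. A toy sketch, not modelled on any real controller's API:

```python
# Toy sketch of the SDN split: the controller decides, the data plane
# only matches and forwards. Not any real controller's API.

class Switch:
    def __init__(self) -> None:
        self.flow_table: dict[str, int] = {}  # dst prefix -> output port

    def install_rule(self, dst_prefix: str, out_port: int) -> None:
        self.flow_table[dst_prefix] = out_port  # pushed by the controller

    def forward(self, dst_ip: str) -> int:
        # Naive longest-prefix match; -1 means "no rule, ask controller".
        matches = [p for p in self.flow_table if dst_ip.startswith(p)]
        return self.flow_table[max(matches, key=len)] if matches else -1

sw = Switch()
sw.install_rule("10.0.", 1)    # controller's coarse policy
sw.install_rule("10.0.1.", 2)  # more specific override
print(sw.forward("10.0.1.7"))  # -> 2 (most specific rule wins)
print(sw.forward("10.0.9.1"))  # -> 1
```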

Book
01 Jan 2001
TL;DR: This book discusses decision-theoretic foundations, including game theory, rationality, and intelligence, as well as the decision-analytic approach to games, aiming to clarify the role of rationality in decision-making.
Abstract: Preface 1. Decision-Theoretic Foundations 1.1 Game Theory, Rationality, and Intelligence 1.2 Basic Concepts of Decision Theory 1.3 Axioms 1.4 The Expected-Utility Maximization Theorem 1.5 Equivalent Representations 1.6 Bayesian Conditional-Probability Systems 1.7 Limitations of the Bayesian Model 1.8 Domination 1.9 Proofs of the Domination Theorems Exercises 2. Basic Models 2.1 Games in Extensive Form 2.2 Strategic Form and the Normal Representation 2.3 Equivalence of Strategic-Form Games 2.4 Reduced Normal Representations 2.5 Elimination of Dominated Strategies 2.6 Multiagent Representations 2.7 Common Knowledge 2.8 Bayesian Games 2.9 Modeling Games with Incomplete Information Exercises 3. Equilibria of Strategic-Form Games 3.1 Domination and Rationalizability 3.2 Nash Equilibrium 3.3 Computing Nash Equilibria 3.4 Significance of Nash Equilibria 3.5 The Focal-Point Effect 3.6 The Decision-Analytic Approach to Games 3.7 Evolution, Resistance, and Risk Dominance 3.8 Two-Person Zero-Sum Games 3.9 Bayesian Equilibria 3.10 Purification of Randomized Strategies in Equilibria 3.11 Auctions 3.12 Proof of Existence of Equilibrium 3.13 Infinite Strategy Sets Exercises 4. Sequential Equilibria of Extensive-Form Games 4.1 Mixed Strategies and Behavioral Strategies 4.2 Equilibria in Behavioral Strategies 4.3 Sequential Rationality at Information States with Positive Probability 4.4 Consistent Beliefs and Sequential Rationality at All Information States 4.5 Computing Sequential Equilibria 4.6 Subgame-Perfect Equilibria 4.7 Games with Perfect Information 4.8 Adding Chance Events with Small Probability 4.9 Forward Induction 4.10 Voting and Binary Agendas 4.11 Technical Proofs Exercises 5. Refinements of Equilibrium in Strategic Form 5.1 Introduction 5.2 Perfect Equilibria 5.3 Existence of Perfect and Sequential Equilibria 5.4 Proper Equilibria 5.5 Persistent Equilibria 5.6 Stable Sets of Equilibria 5.7 Generic Properties 5.8 Conclusions Exercises 6. Games with Communication 6.1 Contracts and Correlated Strategies 6.2 Correlated Equilibria 6.3 Bayesian Games with Communication 6.4 Bayesian Collective-Choice Problems and Bayesian Bargaining Problems 6.5 Trading Problems with Linear Utility 6.6 General Participation Constraints for Bayesian Games with Contracts 6.7 Sender-Receiver Games 6.8 Acceptable and Predominant Correlated Equilibria 6.9 Communication in Extensive-Form and Multistage Games Exercises Bibliographic Note 7. Repeated Games 7.1 The Repeated Prisoner's Dilemma 7.2 A General Model of Repeated Games 7.3 Stationary Equilibria of Repeated Games with Complete State Information and Discounting 7.4 Repeated Games with Standard Information: Examples 7.5 General Feasibility Theorems for Standard Repeated Games 7.6 Finitely Repeated Games and the Role of Initial Doubt 7.7 Imperfect Observability of Moves 7.8 Repeated Games in Large Decentralized Groups 7.9 Repeated Games with Incomplete Information 7.10 Continuous Time 7.11 Evolutionary Simulation of Repeated Games Exercises 8. Bargaining and Cooperation in Two-Person Games 8.1 Noncooperative Foundations of Cooperative Game Theory 8.2 Two-Person Bargaining Problems and the Nash Bargaining Solution 8.3 Interpersonal Comparisons of Weighted Utility 8.4 Transferable Utility 8.5 Rational Threats 8.6 Other Bargaining Solutions 8.7 An Alternating-Offer Bargaining Game 8.8 An Alternating-Offer Game with Incomplete Information 8.9 A Discrete Alternating-Offer Game 8.10 Renegotiation Exercises 9. Coalitions in Cooperative Games 9.1 Introduction to Coalitional Analysis 9.2 Characteristic Functions with Transferable Utility 9.3 The Core 9.4 The Shapley Value 9.5 Values with Cooperation Structures 9.6 Other Solution Concepts 9.7 Coalitional Games with Nontransferable Utility 9.8 Cores without Transferable Utility 9.9 Values without Transferable Utility Exercises Bibliographic Note 10. Cooperation under Uncertainty 10.1 Introduction 10.2 Concepts of Efficiency 10.3 An Example 10.4 Ex Post Inefficiency and Subsequent Offers 10.5 Computing Incentive-Efficient Mechanisms 10.6 Inscrutability and Durability 10.7 Mechanism Selection by an Informed Principal 10.8 Neutral Bargaining Solutions 10.9 Dynamic Matching Processes with Incomplete Information Exercises Bibliography Index

3,569 citations
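As a small taste of the chapter 3 material, the mixed-strategy Nash equilibrium of a 2x2 zero-sum game follows from the indifference conditions: each player mixes so that the opponent is indifferent between her pure strategies. A sketch for matching pennies:

```python
# Mixed Nash equilibrium of a 2x2 zero-sum game via indifference
# conditions. A holds the row player's payoffs; the column player gets -A.
import numpy as np

A = np.array([[1.0, -1.0],   # matching pennies
              [-1.0, 1.0]])

denom = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
p = (A[1, 1] - A[1, 0]) / denom  # row plays strategy 1 with prob p
q = (A[1, 1] - A[0, 1]) / denom  # column plays strategy 1 with prob q
print(f"row mixes with p = {p}, column with q = {q}")  # 0.5 and 0.5
```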

Proceedings Article
27 Aug 2013
TL;DR: This work presents the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet, using OpenFlow to control relatively simple switches built from merchant silicon.
Abstract: We present the design, implementation, and evaluation of B4, a private WAN connecting Google's data centers across the planet. B4 has a number of unique characteristics: i) massive bandwidth requirements deployed to a modest number of sites, ii) elastic traffic demand that seeks to maximize average bandwidth, and iii) full control over the edge servers and network, which enables rate limiting and demand measurement at the edge. These characteristics led to a Software Defined Networking architecture using OpenFlow to control relatively simple switches built from merchant silicon. B4's centralized traffic engineering service drives links to near 100% utilization, while splitting application flows among multiple paths to balance capacity against application priority/demands. We describe experience with three years of B4 production deployment, lessons learned, and areas for future work.

2,226 citations
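The flow-splitting behaviour described in the abstract, dividing an application's demand across several paths subject to their spare capacity, can be sketched with a simple greedy allocator; this is illustrative only, not B4's actual TE algorithm:

```python
# Greedy sketch of splitting one application's demand across multiple
# paths, in the spirit of B4's centralized TE. Illustrative only.

def split_demand(demand: float, path_headroom: list[float]) -> list[float]:
    """Fill paths in order, never exceeding a path's spare capacity."""
    alloc = []
    for headroom in path_headroom:
        take = min(demand, headroom)
        alloc.append(take)
        demand -= take
    if demand > 0:  # leftover would be rate-limited at the edge
        print("unsatisfied demand:", demand)
    return alloc

# 10 units of demand over three paths with 4, 3 and 5 units of headroom.
print(split_demand(10.0, [4.0, 3.0, 5.0]))  # -> [4.0, 3.0, 3.0]
```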