Author

D. Moreland

Bio: D. Moreland is an academic researcher from the Commonwealth Scientific and Industrial Research Organisation (CSIRO). The author has contributed to research topics including network packets and provisioning, has an h-index of 8, and has co-authored 15 publications receiving 155 citations.

Papers
Proceedings ArticleDOI
23 Apr 2006
TL;DR: It is shown via simulations of a realistic network carrying real-time traffic that pacing can significantly reduce losses at the expense of a bounded increase in end-to-end delay, and the loss-delay trade-off mechanism provided by pacing can help achieve desired OPS network performance.
Abstract: In the absence of a cost-effective technology for storing optical signals, emerging optical packet switched (OPS) networks are expected to have severely limited buffering capability. This paper investigates the resulting impact on end-to-end loss and throughput, and proposes that the optical edge switches “pace” packets into the OPS core to improve performance without adversely affecting end-to-end delays. In this context, our contributions are three-fold. We first evaluate the impact of short buffers on the performance of real-time and TCP traffic. This helps us identify short-time-scale burstiness as the major contributor to performance degradation, so we propose that the optical edge switches pace the transmission of packets into the OPS core while respecting their delay-constraints. Our second contribution develops algorithms of poly-logarithmic complexity that can perform optimal real-time pacing of high data rate traffic. Lastly, we show via simulations of a realistic network carrying real-time traffic that pacing can significantly reduce losses at the expense of a bounded increase in end-to-end delay. The loss-delay trade-off mechanism provided by pacing can help achieve desired OPS network performance.
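
The pacing idea above admits a simple greedy formulation: release queued packets at the lowest constant rate that still meets every packet's delay budget. The sketch below is only an illustration of that idea, not the paper's poly-logarithmic algorithm; its naive rate recomputation is O(n) per packet, and the Packet/EdgePacer names are hypothetical.

```python
# Illustrative sketch of delay-constrained pacing at an edge switch (hypothetical
# names, naive O(n) rate recomputation rather than the paper's poly-log algorithm).
from collections import deque
from dataclasses import dataclass

@dataclass
class Packet:
    size_bits: int
    deadline: float   # latest allowed departure time from the edge switch

class EdgePacer:
    def __init__(self):
        self.queue = deque()

    def enqueue(self, pkt: Packet) -> None:
        self.queue.append(pkt)

    def pacing_rate(self, now: float) -> float:
        """Smallest constant rate (bits/s) that meets every queued deadline."""
        cumulative_bits, rate = 0, 0.0
        for pkt in self.queue:        # assumes deadlines are non-decreasing (FIFO)
            cumulative_bits += pkt.size_bits
            slack = max(pkt.deadline - now, 1e-9)
            rate = max(rate, cumulative_bits / slack)
        return rate

    def release(self, now: float):
        """Yield (departure_time, packet) pairs, paced as smoothly as deadlines allow."""
        t = now
        while self.queue:
            rate = self.pacing_rate(t)
            pkt = self.queue.popleft()
            t += pkt.size_bits / rate          # serialization time at the paced rate
            yield t, pkt

# Two 12 kb packets with 10 ms and 11 ms budgets are spread out rather than sent back to back.
pacer = EdgePacer()
pacer.enqueue(Packet(12_000, deadline=0.010))
pacer.enqueue(Packet(12_000, deadline=0.011))
print(list(pacer.release(now=0.0)))
```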

34 citations

Proceedings ArticleDOI
26 Jun 2006
TL;DR: This work presents a service-construct abstraction based on what it terms a 'collaborative context', where authorized participants have collective provisioning capabilities for the purpose of negotiating and contributing exclusive resources to create a virtual private enterprise.
Abstract: Our research into secure managed extranets was motivated by the requirements of companies (possibly competitors) needing to collaborate on occasion because transient business windows present mutual opportunities. Our contributions towards satisfying these requirements are twofold: (i) we present a service-construct abstraction based on what we term a 'collaborative context', where authorized participants (e.g. project leaders) have collective provisioning capabilities for the purpose of negotiating and contributing exclusive resources to create a virtual private enterprise, and (ii) we present a realization of the collaborative context as a commercially oriented, on-demand Virtual Private eXtranet Service (VPXS).
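
As an illustration of the 'collaborative context' construct (purely a sketch; the class and method names below are hypothetical and not taken from the paper), one can model a context as a membership set plus the resources each authorized participant contributes, with provisioning gated on the required contributions being present.

```python
# Illustrative data model only (hypothetical names): a "collaborative context" groups
# authorized participants who each contribute exclusive resources toward an
# on-demand virtual private extranet.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Resource:
    owner: str        # contributing enterprise
    kind: str         # e.g. "vpn-gateway", "storage", "directory"
    identifier: str

@dataclass
class CollaborativeContext:
    name: str
    authorized_participants: set[str] = field(default_factory=set)
    contributions: list[Resource] = field(default_factory=list)

    def contribute(self, participant: str, resource: Resource) -> None:
        if participant not in self.authorized_participants:
            raise PermissionError(f"{participant} is not authorized in {self.name}")
        self.contributions.append(resource)

    def provision_extranet(self, required_kinds: set[str]) -> dict:
        """Assemble a virtual private extranet once every required resource kind
        has been contributed by some participant."""
        available = {r.kind for r in self.contributions}
        missing = required_kinds - available
        if missing:
            raise RuntimeError(f"cannot provision yet, missing: {sorted(missing)}")
        return {"context": self.name,
                "members": sorted(self.authorized_participants),
                "resources": [r.identifier for r in self.contributions]}
```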

22 citations

Journal ArticleDOI
TL;DR: It is argued that the loss-delay tradeoff mechanism provided by pacing can be instrumental in overcoming the performance hurdle arising from the scarcity of buffers in OPS networks.
Abstract: In the absence of a cost-effective technology for storing optical signals, emerging optical packet switched (OPS) networks are expected to have severely limited buffering capability. To mitigate the performance degradation resulting from small buffers, this paper proposes that optical edge nodes "pace" the injection of traffic into the OPS core. Our contributions relating to pacing in OPS networks are three-fold: first, we develop real-time pacing algorithms of poly-logarithmic complexity that are feasible for practical implementation in emerging high-speed OPS networks. Second, we provide an analytical quantification of the benefits of pacing in reducing traffic burstiness and traffic loss at a link with very small buffers. Third, we show via simulations of realistic network topologies that pacing can significantly reduce network losses at the expense of a small and bounded increase in end-to-end delay for real-time traffic flows. We argue that the loss-delay tradeoff mechanism provided by pacing can be instrumental in overcoming the performance hurdle arising from the scarcity of buffers in OPS networks.
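
A toy discrete-time experiment (not the paper's analytical model; all parameters below are arbitrary) makes the second contribution concrete: feeding the same offered load into a five-packet buffer either in bursts or evenly paced shows how strongly burstiness drives loss when buffers are small.

```python
# Toy illustration of why pacing helps at very small buffers: identical offered load,
# very different loss, depending only on burstiness.
import random

def simulate(arrivals, buffer_size=5, service_per_slot=1):
    backlog, dropped = 0, 0
    for a in arrivals:
        backlog = max(backlog - service_per_slot, 0)   # serve first
        room = buffer_size - backlog
        dropped += max(a - room, 0)                    # overflow is lost
        backlog += min(a, room)
    return dropped

random.seed(1)
slots = 10_000
# Bursty source: 10 packets arrive together with probability 0.08 (~0.8 pkt/slot load).
bursty = [10 if random.random() < 0.08 else 0 for _ in range(slots)]
# Paced source: the same total traffic spread (almost) evenly over the slots.
total = sum(bursty)
paced = [total // slots + (1 if i < total % slots else 0) for i in range(slots)]

print("bursty drops:", simulate(bursty))   # substantial loss
print("paced drops: ", simulate(paced))    # little or no loss
```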

19 citations

Book ChapterDOI
25 Nov 2007
TL;DR: A novel technology, called Trust Extension Device (TED), is proposed, which enables mobility and portability of trust in cooperative information systems and works in a heterogeneous environment.
Abstract: One method for establishing a trust relationship between a server and its clients in a co-operative information system is to use a digital certificate. The use of digital certificates bound to a particular machine works well under the assumption that the underlying computing and networking infrastructure is managed by a single enterprise. Furthermore, managed infrastructures are assumed to have a controlled operational environment, including execution of a standard set of applications and operating system. These assumptions are also valid for recent proposals on establishing trust using hardware-supported systems based on a Trusted Platform Module (TPM) cryptographic microcontroller. However, these assumptions do not hold in today's cooperative information systems. Clients are mobile and work using network connections that go beyond the administrative boundaries of the enterprise. In this paper, we propose a novel technology, called Trust Extension Device (TED), which enables mobility and portability of trust in cooperative information systems and works in a heterogeneous environment. The paper provides an overview of the technology by describing its design, a conceptual implementation and its use in an application scenario.
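
The portability argument can be pictured with a simple challenge-response sketch (conceptual only; TED's actual design, key management and hardware are not shown here, and the names are hypothetical): the secret lives on the portable device, so trust follows the device rather than any particular client machine.

```python
# Conceptual sketch only (not TED's actual protocol): trust is bound to a key held on a
# portable device rather than to a particular machine, so any host the device is
# attached to can answer a server challenge on the user's behalf.
import hmac, hashlib, os

class PortableTrustDevice:
    """Stands in for the tamper-resistant device; the key never leaves it."""
    def __init__(self, device_key: bytes):
        self._key = device_key

    def respond(self, challenge: bytes) -> bytes:
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

class Server:
    def __init__(self, enrolled_keys: dict[str, bytes]):
        self._enrolled = enrolled_keys     # device id -> key, recorded at enrolment

    def authenticate(self, device_id: str, device: PortableTrustDevice) -> bool:
        challenge = os.urandom(32)
        expected = hmac.new(self._enrolled[device_id], challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, device.respond(challenge))

key = os.urandom(32)
server = Server({"ted-001": key})
print(server.authenticate("ted-001", PortableTrustDevice(key)))   # True, on any host
```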

17 citations

Proceedings ArticleDOI
26 Mar 2008
TL;DR: This framework proposes an abstraction to facilitate performance testing by separating the application logic from the common performance testing functionalities, which leads to a set of general-purpose data models and components, which form the core of the framework.
Abstract: Performance testing is one of the vital activities spanning the whole life cycle of software engineering. While there are a considerable number of performance testing products and open source tools available, they are either too expensive and complicated for small projects, or too specific and simple for diverse performance tests. This paper presents a general-purpose testing framework that scales from simple, small tests to complicated, large-scale performance testing. Our framework proposes an abstraction to facilitate performance testing by separating the application logic from the common performance testing functionalities. This leads to a set of general-purpose data models and components, which form the core of the framework. The framework has been prototyped on both .NET and Java platforms and was used for a number of performance-related projects.
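
The separation the abstract describes can be pictured as follows (illustrative only; the published framework was prototyped on .NET and Java, whereas this sketch uses Python and hypothetical names): the application-specific workload is a plain callable, while timing, concurrency and reporting live in a reusable runner.

```python
# Sketch of separating application logic from common performance-testing functionality.
import time
import statistics
from concurrent.futures import ThreadPoolExecutor

def run_performance_test(workload, iterations=100, concurrency=10):
    """Execute `workload()` repeatedly and report latency statistics."""
    def timed_call(_):
        start = time.perf_counter()
        workload()                      # application logic lives entirely in here
        return time.perf_counter() - start

    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        latencies = list(pool.map(timed_call, range(iterations)))

    return {
        "iterations": iterations,
        "mean_s": statistics.mean(latencies),
        "p95_s": statistics.quantiles(latencies, n=20)[-1],
        "max_s": max(latencies),
    }

# The same runner exercises any workload, e.g. an HTTP call, a DB query, or a computation.
print(run_performance_test(lambda: sum(range(10_000)), iterations=200))
```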

13 citations


Cited by
Journal ArticleDOI
TL;DR: To support bursty traffic on the Internet (and especially WWW) efficiently, optical burst switching (OBS) is proposed as a way to streamline both protocols and hardware in building the future gener...
Abstract: To support bursty traffic on the Internet (and especially WWW) efficiently, optical burst switching (OBS) is proposed as a way to streamline both protocols and hardware in building the future gener...

674 citations

Journal ArticleDOI
TL;DR: This article presents the first comprehensive review of social and computer science literature on trust in social networks and discusses recent works addressing three aspects of social trust: trust information collection, trust evaluation, and trust dissemination.
Abstract: Web-based social networks have become popular as a medium for disseminating information and connecting like-minded people. The public accessibility of such networks with the ability to share opinions, thoughts, information, and experience offers great promise to enterprises and governments. In addition to individuals using such networks to connect to their friends and families, governments and enterprises have started exploiting these platforms for delivering their services to citizens and customers. However, the success of such attempts relies on the level of trust that members have with each other as well as with the service provider. Therefore, trust becomes an essential and important element of a successful social network. In this article, we present the first comprehensive review of social and computer science literature on trust in social networks. We first review the existing definitions of trust and define social trust in the context of social networks. We then discuss recent works addressing three aspects of social trust: trust information collection, trust evaluation, and trust dissemination. Finally, we compare and contrast the literature and identify areas for further research in social trust.
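
For a flavour of the 'trust evaluation' aspect the survey covers (this is one common scheme from the literature, not a proposal of the article itself), direct trust values in [0, 1] can be multiplied along a path and aggregated by taking the best path.

```python
# One common trust-evaluation scheme: multiply direct trust along a path, take the best path.

# Directed trust graph: trust["alice"]["bob"] is Alice's direct trust in Bob.
trust = {
    "alice": {"bob": 0.9, "carol": 0.6},
    "bob":   {"dave": 0.8},
    "carol": {"dave": 0.9},
}

def inferred_trust(source, target, graph, max_hops=4):
    """Best multiplicative trust over all simple paths of bounded length."""
    best = 0.0
    stack = [(source, 1.0, {source})]
    while stack:
        node, value, visited = stack.pop()
        for nxt, direct in graph.get(node, {}).items():
            if nxt in visited:
                continue
            v = value * direct
            if nxt == target:
                best = max(best, v)
            elif len(visited) < max_hops:
                stack.append((nxt, v, visited | {nxt}))
    return best

print(inferred_trust("alice", "dave", trust))   # ≈ 0.72: alice→bob→dave beats alice→carol→dave
```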

615 citations

Journal ArticleDOI
TL;DR: A framework of network-Cloud convergence based on service-oriented network virtualization is presented, along with a survey of key technologies for realizing NaaS, focusing mainly on the state of the art in network service description, discovery, and composition.
Abstract: The crucial role that networking plays in Cloud computing calls for a holistic vision that allows combined control, management, and optimization of both networking and computing resources in a Cloud environment, which leads to a convergence of networking and Cloud computing. Network virtualization is being adopted in both telecommunications and the Internet as a key attribute for the next generation networking. Virtualization, as a potential enabler of profound changes in both communications and computing domains, is expected to bridge the gap between these two fields. Service-Oriented Architecture (SOA), when applied in network virtualization, enables a Network-as-a-Service (NaaS) paradigm that may greatly facilitate the convergence of networking and Cloud computing. Recently the application of SOA in network virtualization has attracted extensive interest from both academia and industry. Although numerous relevant research works have been published, they are currently scattered across multiple fields in the literature, including telecommunications, computer networking, Web services, and Cloud computing. In this article we present a comprehensive survey on the latest developments in service-oriented network virtualization for supporting Cloud computing, particularly from a perspective of network and Cloud convergence through NaaS. Specifically, we first introduce the SOA principle and review recent research progress on applying SOA to support network virtualization in both telecommunications and the Internet. Then we present a framework of network-Cloud convergence based on service-oriented network virtualization and give a survey on key technologies for realizing NaaS, mainly focusing on state of the art of network service description, discovery, and composition. We also discuss the challenges brought in by network-Cloud convergence to these technologies and research opportunities available in these areas, with a hope to arouse the research community's interest in this emerging interdisciplinary field.
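
Reduced to its simplest form (all names below are hypothetical, and this is only a sketch of the NaaS idea rather than any framework from the article), service-oriented network virtualization treats network capabilities as describable services that can be discovered in a registry and composed on demand.

```python
# Minimal sketch of NaaS-style service description, discovery and composition.
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkService:
    name: str
    capability: str          # e.g. "l2-vpn", "bandwidth-on-demand", "firewall"
    provider: str

REGISTRY = [
    NetworkService("vpn-east", "l2-vpn", "carrier-a"),
    NetworkService("bod-core", "bandwidth-on-demand", "carrier-a"),
    NetworkService("fw-edge", "firewall", "carrier-b"),
]

def compose(required_capabilities):
    """Pick one registered service per required capability, or report what is missing."""
    chosen = {}
    for cap in required_capabilities:
        match = next((s for s in REGISTRY if s.capability == cap), None)
        if match is None:
            raise LookupError(f"no service offers capability {cap!r}")
        chosen[cap] = match
    return chosen

print(compose(["l2-vpn", "bandwidth-on-demand"]))
```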

291 citations

Journal ArticleDOI
31 Mar 2009
TL;DR: This paper provides a synopsis of the recently proposed buffer sizing strategies, broadly classifies them according to their desired objective (link utilisation or per-flow performance), and discusses the pros and cons of these different approaches.
Abstract: The past few years have witnessed a lot of debate on how large Internet router buffers should be. The widely believed rule-of-thumb used by router manufacturers today mandates a buffer size equal to the delay-bandwidth product. This rule was first challenged by researchers in 2004, who argued that if there are a large number of long-lived TCP connections flowing through a router, then the buffer size needed is equal to the delay-bandwidth product divided by the square root of the number of long-lived TCP flows. The publication of this result has since reinvigorated interest in the buffer sizing problem, with numerous other papers exploring the topic in further detail, ranging from papers questioning the applicability of this result, to alternate sizing schemes, to new congestion control algorithms. This paper provides a synopsis of the recently proposed buffer sizing strategies and broadly classifies them according to their desired objective: link utilisation or per-flow performance. We discuss the pros and cons of these different approaches. These prior works study buffer sizing purely in the context of TCP. Subsequently, we present arguments that take into account both real-time and TCP traffic. We also report on performance studies of various high-speed TCP variants and experimental results for networks with limited buffers. We conclude this paper by outlining some interesting avenues for further research.
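
The two sizing rules quoted in the abstract are easy to put into numbers; the sketch below assumes an illustrative 10 Gb/s link, 250 ms average RTT and 10,000 long-lived TCP flows (arbitrary example figures, not taken from the paper).

```python
# Worked example of the two buffer-sizing rules quoted above.
import math

capacity_bps = 10e9          # link rate C
rtt_s = 0.250                # average round-trip time
n_flows = 10_000             # long-lived TCP flows through the router

rule_of_thumb_bits = capacity_bps * rtt_s                    # B = C * RTT
small_buffer_bits = rule_of_thumb_bits / math.sqrt(n_flows)  # B = C * RTT / sqrt(N)

print(f"rule of thumb : {rule_of_thumb_bits / 8e6:,.0f} MB")   # ≈ 312 MB
print(f"sqrt(N) rule  : {small_buffer_bits / 8e6:,.1f} MB")    # ≈ 3.1 MB
```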

107 citations

Journal ArticleDOI
TL;DR: The contention resolution and avoidance schemes proposed for bufferless OPS networks are surveyed, and the Quality of Service (QoS) issue in a QoS-capable bufferless OPS network is reviewed.
Abstract: Optical Packet Switching (OPS) is a promising switching technique for utilizing the huge bandwidth offered by all-optical networks using DWDM (Dense Wavelength Division Multiplexing) technology. However, optical packet contention is the major problem in an OPS network. Resolution and avoidance are two schemes to deal with the contention problem: a resolution scheme resolves collisions, while an avoidance scheme tries to reduce the number of potential collision events. Many OPS architectures rely on optical buffers to resolve contention. Unfortunately, optical buffering technology is still immature, as it relies on bulky optical fiber delay lines and requires complex control. Therefore, a bufferless OPS network could still be the most straightforward implementation in the near future. In this article, we survey the contention resolution and avoidance schemes proposed for bufferless OPS networks. We also review the resolution and avoidance schemes that can handle the Quality of Service (QoS) issue in a QoS-capable bufferless OPS network.
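
The resolution-versus-avoidance distinction can be illustrated with a generic contention handler (a sketch only, not any particular scheme from the survey): a contending packet is first tried on a free wavelength of its preferred output port (wavelength conversion), then on another port (deflection routing), and is dropped only as a last resort.

```python
# Generic illustration of contention resolution in a bufferless OPS node.
def resolve_contention(packet, preferred_port, ports):
    """`ports` maps port name -> set of wavelengths currently free on that port."""
    free = ports.get(preferred_port, set())
    if packet["wavelength"] in free:
        return ("forward", preferred_port, packet["wavelength"])
    if free:                                            # wavelength conversion
        return ("convert", preferred_port, min(free))
    for port, wavelengths in ports.items():             # deflection routing
        if port != preferred_port and wavelengths:
            wl = packet["wavelength"] if packet["wavelength"] in wavelengths else min(wavelengths)
            return ("deflect", port, wl)
    return ("drop", None, None)

state = {"east": set(), "north": {"λ2", "λ5"}}
print(resolve_contention({"wavelength": "λ1"}, "east", state))   # deflected to "north"
```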

95 citations