
Showing papers by "Richard M. Fujimoto published in 2003"


Proceedings ArticleDOI
27 Oct 2003
TL;DR: Results from a recent performance study are presented concerning large-scale network simulation on a variety of platforms ranging from workstations to cluster computers to supercomputers, and an approach to realizing scalable network simulations that leverages existing sequential simulation models and software is described.
Abstract: Parallel and distributed simulation tools are emerging that offer the ability to perform detailed, packet-level simulations of large-scale computer networks on an unprecedented scale. The state-of-the-art in large-scale network simulation is characterized quantitatively. For this purpose, a metric based on the number of packet transmissions that can be processed by a simulator per second of wallclock time (PTS) is used as a means to quantitatively assess packet-level network simulator performance. An approach to realizing scalable network simulations that leverages existing sequential simulation models and software is described. Results from a recent performance study are presented concerning large-scale network simulation on a variety of platforms ranging from workstations to cluster computers to supercomputers. These experiments include runs utilizing as many as 1536 processors yielding performance as high as 106 million PTS. The performance of packet-level simulations of web and ftp traffic, and denial of service attacks on networks containing millions of network nodes are briefly described, including a run demonstrating the ability to simulate a million web traffic flows in near real-time. New opportunities and research challenges to fully exploit this capability are discussed.
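The PTS metric described in this abstract is, at its core, a simple throughput ratio. A minimal sketch (function name hypothetical, not from the paper):

```python
def packet_transmissions_per_second(packets_processed: int,
                                    wallclock_seconds: float) -> float:
    """PTS: simulated packet transmissions completed per second of
    wallclock time, used to compare packet-level simulator performance."""
    return packets_processed / wallclock_seconds
```

By this measure, the 1536-processor runs cited above sustained roughly 106 million simulated packet transmissions per wallclock second.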

201 citations


Proceedings ArticleDOI
07 Dec 2003
TL;DR: The High Level Architecture developed by the Department of Defense in the United States is first described to provide a concrete example of a contemporary approach to distributed simulation, and time management is discussed as a means to illustrate how this standard supports both approaches to synchronization.
Abstract: An overview of technologies concerned with distributing the execution of simulation programs across multiple processors is presented. Here, particular emphasis is placed on discrete event simulations. The High Level Architecture (HLA) developed by the Department of Defense in the United States is first described to provide a concrete example of a contemporary approach to distributed simulation. The remainder of this paper is focused on time management, a central issue concerning the synchronization of computations on different processors. Time management algorithms broadly fall into two categories, termed conservative and optimistic synchronization. A survey of both conservative and optimistic algorithms is presented focusing on fundamental principles and mechanisms. Finally, time management in the HLA is discussed as a means to illustrate how this standard supports both approaches to synchronization.
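As a rough illustration of the conservative approach surveyed in this paper, a lower bound on timestamp (LBTS) computation can be sketched as follows (names and structure hypothetical; this is not the HLA API):

```python
def lbts(clocks, lookaheads):
    """Conservative synchronization: no processor can later send a
    message timestamped earlier than its current clock plus its
    lookahead, so min_i(clock_i + lookahead_i) bounds all future input."""
    return min(c + la for c, la in zip(clocks, lookaheads))

def safe_events(pending_timestamps, clocks, lookaheads):
    """Events strictly below the LBTS can be processed with no risk of
    a straggler message later arriving in their past."""
    bound = lbts(clocks, lookaheads)
    return [t for t in pending_timestamps if t < bound]
```

Optimistic algorithms instead process events speculatively and roll back when a straggler message arrives.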

105 citations


Proceedings ArticleDOI
27 Oct 2003
TL;DR: This work designs and develops a framework for an extensible and scalable peer-to-peer simulation environment that can be built on top of existing packet-level network simulators, which enables the use of the simulator for some simple experiments that show how Gnutella system performance can be impacted by the network characteristics.
Abstract: The growing interest in peer-to-peer systems (such as Gnutella) has inspired numerous research activities in this area. Although many demonstrations have been performed that show that the performance of a peer-to-peer system is highly dependent on the underlying network characteristics, much of the evaluation of peer-to-peer proposals has used simplified models that fail to include a detailed model of the underlying network. This can be largely attributed to the complexity in experimenting with a scalable peer-to-peer system simulator built on top of a scalable network simulator with packet-level details. In this work we design and develop a framework for an extensible and scalable peer-to-peer simulation environment that can be built on top of existing packet-level network simulators. The simulation environment is portable to different network simulators, which enables us to simulate a realistic large scale peer-to-peer system using existing parallelization techniques. We demonstrate the use of the simulator for some simple experiments that show how Gnutella system performance can be impacted by the network characteristics.

85 citations


Proceedings Article
07 Dec 2003
TL;DR: The High Level Architecture developed by the Department of Defense in the United States is first described to provide a concrete example of a contemporary approach to distributed simulation, and time management is discussed as a means to illustrate how this standard supports both approaches to synchronization.
Abstract: An overview of technologies concerned with distributing the execution of simulation programs across multiple processors is presented. Here, particular emphasis is placed on discrete event simulations. The High Level Architecture (HLA) developed by the Department of Defense in the United States is first described to provide a concrete example of a contemporary approach to distributed simulation. The remainder of this paper is focused on time management, a central issue concerning the synchronization of computations on different processors. Time management algorithms broadly fall into two categories, termed conservative and optimistic synchronization. A survey of both conservative and optimistic algorithms is presented focusing on fundamental principles and mechanisms. Finally, time management in the HLA is discussed as a means to illustrate how this standard supports both approaches to synchronization.

44 citations


Proceedings ArticleDOI
10 Jun 2003
TL;DR: It is shown that RTI-based parallel simulations can scale extremely well and achieve very high speedup, and two different federated network simulators are examined.
Abstract: Federated simulation interfaces such as the high level architecture (HLA) were designed for interoperability, and as such are not traditionally associated with high-performance computing. We present results of a case study examining the use of federated simulations using runtime infrastructure (RTI) software to realize large-scale parallel network simulators. We examine the performance of two different federated network simulators, and describe RTI performance optimizations that were used to achieve efficient execution. We show that RTI-based parallel simulations can scale extremely well and achieve very high speedup. Our experiments yielded more than 80-fold scaled speedup in simulating large TCP/IP networks, demonstrating performance of up to 6 million simulated packet transmissions per second on a Linux cluster. Networks containing up to two million network nodes (routers and end systems) were simulated.

25 citations


Proceedings ArticleDOI
10 Jun 2003
TL;DR: A novel message accounting scheme, the offset-epoch method, is presented as a way to increase the efficiency of time management algorithms by eliminating transient messages and a synchronized lower-bound on timestamp (LBTS) computation exploits this efficiency to reduce time management overheads.
Abstract: We introduce a technique to control the overhead of time management processes in order to make such mechanisms appropriate for real-time distributed simulation. A novel message accounting scheme, the offset-epoch method, is presented as a way to increase the efficiency of time management algorithms by eliminating transient messages. A synchronized lower-bound on timestamp (LBTS) computation exploits this efficiency to reduce time management overheads. This approach represents one step in bridging the gap that now exists between analytic and real-time distributed simulations.
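The role that message accounting plays in an LBTS computation can be illustrated with a toy sketch (this is an assumption-laden simplification, not the offset-epoch algorithm itself):

```python
def lbts_with_accounting(reports):
    """Each processor reports (local_min_timestamp, sent, received).
    The min-reduction is only safe once no messages remain in transit,
    i.e., the totals of sent and received messages match; transient
    (in-flight) messages are what accounting schemes such as the
    offset-epoch method are designed to eliminate."""
    total_sent = sum(sent for _, sent, _ in reports)
    total_received = sum(recv for _, _, recv in reports)
    if total_sent != total_received:
        return None  # messages still in flight; the bound is not yet valid
    return min(t for t, _, _ in reports)
```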

15 citations


Proceedings ArticleDOI
10 Jun 2003
TL;DR: This work focuses on the state dissemination problem in DVEs and proposes a new power aware dead reckoning framework for power efficient state dissemination and presents an adaptive dead reckoning algorithm that attempts to dynamically optimize the tradeoff at runtime.
Abstract: In distributed simulations, such as multi-player distributed virtual environments (DVE), power consumption traditionally has not been a major design factor. However, emerging battery-operated mobile computing platforms require revisiting DVE implementation approaches for maximizing power efficiency. In this paper we explore some implications of power considerations in DVE implementation over mobile handhelds connected by wireless networks. We focus on the state dissemination problem in DVEs and propose a new power-aware dead reckoning framework for power-efficient state dissemination. We highlight a fundamental tradeoff between state consistency and power consumption, and present an adaptive dead reckoning algorithm that attempts to dynamically optimize the tradeoff at runtime. We present a quantitative evaluation of our approach using a synthetic DVE benchmark application.
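The consistency/power tradeoff at the heart of dead reckoning can be sketched in one dimension (a hypothetical illustration, not the paper's framework):

```python
def dead_reckoned(last_pos: float, last_vel: float, dt: float) -> float:
    """Receivers extrapolate linearly from the last disseminated state."""
    return last_pos + last_vel * dt

def should_transmit(true_pos: float, last_pos: float, last_vel: float,
                    dt: float, threshold: float) -> bool:
    """Send a fresh state update only when the dead-reckoning error
    exceeds the threshold. Raising the threshold saves power-hungry
    radio transmissions at the cost of state consistency -- the
    tradeoff an adaptive scheme would tune at runtime."""
    error = abs(true_pos - dead_reckoned(last_pos, last_vel, dt))
    return error > threshold
```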

13 citations


Proceedings ArticleDOI
10 Jun 2003
TL;DR: This work presents a time-parallel approach for trace-driven simulation of the CSMA/CD protocol and presents two optimization techniques: the estimation of idle points and the incremental fix-up computation.
Abstract: Time-parallel simulation defines a methodology that can be applied to certain specific simulation problems. In this paper, we present a time-parallel approach for trace-driven simulation of the CSMA/CD protocol. The "memoryless" property of the physical system under moderate traffic loads allows for efficient time-parallel simulation. We also present two optimization techniques: the estimation of idle points and the incremental fix-up computation. The former can improve the probability that a subtrace begins with a known initial system state. The latter can speed up the fix-up computation that is required when the estimation of the initial state is incorrect.
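The general time-parallel pattern behind this work — simulate subtraces from guessed initial states, then fix up wherever a guess was wrong — can be sketched generically (hypothetical skeleton; the actual paper models CSMA/CD, not this toy step function):

```python
def time_parallel_simulate(subtraces, step, guess_initial):
    """Phase 1: run each subtrace independently (conceptually in
    parallel) from a guessed initial state. Phase 2: sweep left to
    right and re-run (fix up) any subtrace whose guessed initial state
    disagrees with its predecessor's final state."""
    guesses = [guess_initial(tr) for tr in subtraces]
    finals = []
    for tr, state in zip(subtraces, guesses):
        for event in tr:
            state = step(state, event)
        finals.append(state)
    for i in range(1, len(subtraces)):
        if guesses[i] != finals[i - 1]:  # guess was wrong: fix up
            state = finals[i - 1]
            for event in subtraces[i]:
                state = step(state, event)
            finals[i] = state
    return finals[-1]
```

The payoff comes when most guesses are correct (the "memoryless" property noted above), so few fix-up passes are needed.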

12 citations


Proceedings ArticleDOI
10 Jun 2003
TL;DR: Three approaches to substantially reduce the memory required by multicast simulations are described, including a novel technique called "negative forwarding table" to compress multicast routing state and the NIx-Vector technique to replace the original unicast IP routing table.
Abstract: The simulation of large-scale multicast networks often requires a significant amount of memory that can easily exceed the capacity of current computers, both because of the inherently large amount of state necessary to simulate message routing and because of design oversights in the multicast portion of existing simulators. We describe three approaches to substantially reduce the memory required by multicast simulations: 1) We introduce a novel technique called "negative forwarding table" to compress multicast routing state. 2) We aggregate the routing state objects from one replicator per router per group per source to one replicator per router. 3) We employ the NIx-Vector technique to replace the original unicast IP routing table. We implemented these techniques in the ns2 simulator to demonstrate their effectiveness. Our experiments show that these techniques enable packet level multicast simulations on a scale that was previously unachievable on modern workstations using ns2.
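The intuition behind the first technique — storing non-members is cheaper than storing members when a multicast group is dense — can be sketched as follows (hypothetical names; a simplification of the paper's "negative forwarding table"):

```python
def compress_entry(member_ifaces: set, all_ifaces: set):
    """Store whichever set is smaller: the members ('positive' form)
    or the non-members ('negative' form)."""
    negatives = all_ifaces - member_ifaces
    if len(negatives) < len(member_ifaces):
        return ('negative', negatives)
    return ('positive', member_ifaces)

def forwards_to(entry, iface) -> bool:
    """Membership test works identically on either representation."""
    kind, ifaces = entry
    return (iface not in ifaces) if kind == 'negative' else (iface in ifaces)
```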

10 citations


Proceedings ArticleDOI
01 Jan 2003
TL;DR: A novel technique called reverse circuit execution is presented as an efficient approach towards integrated parallel execution of multiple sequential circuit simulators and it will be possible to efficiently co-simulate logic circuits partitioned across multiple commercial simulators by synchronizing their execution using optimistic concurrency protocols.
Abstract: A novel technique called reverse circuit execution is presented as an efficient approach towards integrated parallel execution of multiple sequential circuit simulators. The unique aspect of this approach is that it does not require source code modifications to either simulation engines or circuit models, and hence holds appeal in situations where parallelism is desirable but without access to simulator and/or model source code (as in the case of commercial simulators with proprietary code concerns). First, algorithms and methodology are presented for transforming an input circuit into another equivalent circuit that is capable of both forward and reverse execution. Following that, it is shown how the transformed circuit can be used towards optimistic synchronization of multiple circuit simulators. As an end result of using our approach, it will be possible to efficiently co-simulate logic circuits partitioned across multiple commercial simulators, by synchronizing their execution using optimistic concurrency protocols.
Keywords: Circuit simulation, parallel execution, reverse execution

9 citations


Proceedings ArticleDOI
18 May 2003
TL;DR: Intelligent Transportation Systems (ITS) are being deployed to attack traffic congestion and other transportation problems, but existing ITS deployments are "infrastructure heavy," relying largely on roadside sensors, cameras, networks, etc., leading to high maintenance costs.
Abstract: Traffic congestion resulted in an estimated cost of $69.5 billion in extra delays and wasted fuel in 75 urban areas in the U.S. in 2001 [1]. Over 6 million crashes occur each year in the U.S., resulting in over 40,000 fatalities and an estimated $150 billion in economic loss [2, 3]. Intelligent Transportation Systems (ITS) are being deployed to attack these problems [3]. However, existing ITS deployments are "infrastructure heavy," relying largely on roadside sensors, cameras, networks, etc., leading to high maintenance costs. It is often difficult for government agencies to obtain adequate funding to keep these systems fully operational, causing some systems to fall into disrepair, severely degrading their effectiveness.