
Showing papers on "Testbed published in 2001"


Proceedings ArticleDOI
16 Jul 2001
TL;DR: A novel approach to the localization of sensors in an ad-hoc network is described: a system that enables sensor nodes to discover their locations using a set of distributed iterative algorithms.
Abstract: The recent advances in radio and embedded system technologies have enabled the proliferation of wireless microsensor networks. Such wirelessly connected sensors are released in many diverse environments to perform various monitoring tasks. In many such tasks, location awareness is inherently one of the most essential system parameters. It is not only needed to report the origins of events, but also to assist group querying of sensors, routing, and to answer questions on the network coverage. In this paper we present a novel approach to the localization of sensors in an ad-hoc network. We describe a system called AHLoS (Ad-Hoc Localization System) that enables sensor nodes to discover their locations using a set of distributed iterative algorithms. The operation of AHLoS is demonstrated with an accuracy of a few centimeters using our prototype testbed while scalability and performance are studied through simulation.

2,931 citations
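The iterative localization idea lends itself to a compact sketch. Below is a minimal least-squares trilateration step, the core computation a node with three or more location-aware beacon neighbors would perform before itself becoming a beacon. The function name and the linearization are illustrative, not the AHLoS implementation:

```python
def trilaterate(beacons):
    """Least-squares 2-D position estimate from (x, y, distance) beacon
    tuples.  Linearizes the range equations by subtracting the first
    beacon's equation from the others (a standard trick; hypothetical
    helper, not taken from the paper's code)."""
    (x0, y0, d0) = beacons[0]
    # Build the linear system A [x y]^T = b from pairwise differences.
    A, b = [], []
    for (xi, yi, di) in beacons[1:]:
        A.append((2 * (xi - x0), 2 * (yi - y0)))
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    # Solve the 2x2 normal equations A^T A p = A^T b directly.
    a11 = sum(ax * ax for ax, ay in A)
    a12 = sum(ax * ay for ax, ay in A)
    a22 = sum(ay * ay for ax, ay in A)
    b1 = sum(ax * bi for (ax, ay), bi in zip(A, b))
    b2 = sum(ay * bi for (ax, ay), bi in zip(A, b))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With noise-free ranges the estimate is exact; with noisy ranges the least-squares fit spreads the error across beacons, which is why the paper's iterative scheme can propagate position estimates through the network.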


Proceedings ArticleDOI
21 Oct 2001
TL;DR: This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network and shows that approaches such as in-network aggregation and nested queries can significantly affect network traffic.
Abstract: In most distributed systems, naming of nodes for low-level communication leverages topological location (such as node addresses) and is independent of any application. In this paper, we investigate an emerging class of distributed systems where low-level communication does not rely on network topological location. Rather, low-level communication is based on attributes that are external to the network topology and relevant to the application. When combined with dense deployment of nodes, this kind of named data enables in-network processing for data aggregation, collaborative signal processing, and similar problems. These approaches are essential for emerging applications such as sensor networks where resources such as bandwidth and energy are limited. This paper is the first description of the software architecture that supports named data and in-network processing in an operational, multi-application sensor-network. We show that approaches such as in-network aggregation and nested queries can significantly affect network traffic. In one experiment aggregation reduces traffic by up to 42% and nested queries reduce loss rates by 30%. Although aggregation has been previously studied in simulation, this paper demonstrates nested queries as another form of in-network processing, and it presents the first evaluation of these approaches over an operational testbed.

677 citations
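The traffic effect of in-network aggregation can be illustrated on a routing tree: without aggregation, each node's reading is forwarded hop by hop to the sink, while with aggregation each node combines its children's data with its own and sends a single message per link. A toy message count (hypothetical helper names, not the paper's software):

```python
def messages_without_aggregation(parent):
    """Each node's reading travels hop by hop to the sink: cost = its depth.
    `parent` maps node id -> parent id (None for the sink)."""
    def depth(n):
        return 0 if parent[n] is None else 1 + depth(parent[n])
    return sum(depth(n) for n in parent if parent[n] is not None)

def messages_with_aggregation(parent):
    """Each node sends one combined message to its parent: cost = one per edge."""
    return sum(1 for n in parent if parent[n] is not None)
```

For a four-node chain rooted at the sink, the counts are 6 versus 3; deeper trees widen the gap, which mirrors the kind of traffic reduction the paper measures (up to 42% in one experiment).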


Journal ArticleDOI
01 Dec 2001
TL;DR: The design and experiences with the ADAM (Audit Data Analysis and Mining) system are described, which is used as a testbed to study how useful data mining techniques can be in intrusion detection.
Abstract: Intrusion detection systems have traditionally been based on the characterization of an attack and the tracking of the activity on the system to see if it matches that characterization. Recently, new intrusion detection systems based on data mining are making their appearance in the field. This paper describes the design and experiences with the ADAM (Audit Data Analysis and Mining) system, which we use as a testbed to study how useful data mining techniques can be in intrusion detection.

289 citations


Journal ArticleDOI
TL;DR: This paper presents a systematic approach, testbed evaluation, for the assessment of interaction techniques for VEs, and presents two testbed experiments, covering techniques for the common VE tasks of travel and object selection/manipulation.
Abstract: As immersive virtual environment (VE) applications become more complex, it is clear that we need a firm understanding of the principles of VE interaction. In particular, designers need guidance in choosing three-dimensional interaction techniques. In this paper, we present a systematic approach, testbed evaluation, for the assessment of interaction techniques for VEs. Testbed evaluation uses formal frameworks and formal experiments with multiple independent and dependent variables to obtain a wide range of performance data for VE interaction techniques. We present two testbed experiments, covering techniques for the common VE tasks of travel and object selection/manipulation. The results of these experiments allow us to form general guidelines for VE interaction and to provide an empirical basis for choosing interaction techniques in VE applications. Evaluation of a real-world VE system based on the testbed results indicates that this approach can produce substantial improvements in usability.

226 citations


Proceedings ArticleDOI
01 Apr 2001
TL;DR: A novel pipelining technique for structuring the core index-building system that substantially reduces the index construction time is introduced and a storage scheme for creating and managing inverted files using an embedded database system is proposed.
Abstract: We identify crucial design issues in building a distributed inverted index for a large collection of Web pages. We introduce a novel pipelining technique for structuring the core index-building system that substantially reduces the index construction time. We also propose a storage scheme for creating and managing inverted files using an embedded database system. We suggest and compare different strategies for collecting global statistics from distributed inverted indexes. Finally, we present performance results from experiments on a testbed distributed Web indexing system that we have implemented.

177 citations
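The data structure at the heart of this system can be sketched in a few lines. The following is a single-node, in-memory inverted index builder; the paper's contribution is pipelining and distributing exactly this kind of construction, so treat this as a baseline illustration rather than their architecture:

```python
from collections import defaultdict

def build_inverted_index(pages):
    """Map each term to a sorted postings list of (page_id, term_frequency)
    pairs.  `pages` maps page id -> raw text.  Tokenization here is naive
    whitespace splitting, purely for illustration."""
    index = defaultdict(lambda: defaultdict(int))
    for page_id, text in pages.items():
        for term in text.lower().split():
            index[term][page_id] += 1
    # Freeze into sorted postings lists, the form query processors expect.
    return {term: sorted(postings.items()) for term, postings in index.items()}
```

In a distributed build, each indexer would produce such partial postings for its share of the pages, and the phases (loading, inverting, flushing) can be overlapped — the pipelining the paper exploits to cut construction time.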


Proceedings ArticleDOI
28 Oct 2001
TL;DR: A hierarchical framework and key distribution algorithms are presented for a dynamic, distributed setting in which command and control nodes move along with individual users, with a focus on how keys and trust relationships are transferred when users move between so-called "areas" in the hierarchy.
Abstract: In this paper we consider the problem of key management in a highly-mobile wireless networking environment, such as a dynamic, distributed setting in which command and control nodes move along with individual users. In this scenario, data must be securely multicast from one source to many users, requiring that users be properly keyed. Furthermore, because users move in and out of the session (due to mobility, attrition, and reinforcement), in order to preserve confidentiality, it becomes necessary to rekey each time a user enters or leaves. We present a hierarchical framework and key distribution algorithms for such a dynamic environment, with a focus on how keys and trust relationships are transferred when users move between so-called "areas" in the hierarchy. We present several schemes including one that rekeys every time a member moves from area to area and one that delays rekeying so long as security is not compromised. Our preliminary analytical and simulation results indicate that it is possible to trade off communication throughput with computational and security overheads. We also briefly describe a prototype testbed in which we are implementing and experimenting with these algorithms.

149 citations


Journal ArticleDOI
01 Nov 2001
TL;DR: This paper focuses on an experimental analysis of the performance and scalability of cluster-based web servers and observes that the round robin algorithm performs much worse in comparison with the other two algorithms for low to medium workload, but as the request arrival rate increases, the performance of the three algorithms converge.
Abstract: This paper focuses on an experimental analysis of the performance and scalability of cluster-based web servers. We carry out the comparative studies using two experimental platforms, namely, a hardware testbed consisting of 16 PCs, and a trace-driven discrete-event simulator. Dispatcher and web server service times used in the simulator are determined by carrying out a set of experiments on the testbed. The simulator is validated against stochastic queuing models and the testbed. Experiments on the testbed are limited by the hardware configuration, but our complementary approach allows us to carry out a scalability study on the validated simulator. The three dispatcher-based scheduling algorithms analyzed are: round robin scheduling, least connected based scheduling, and least loaded based scheduling. The least loaded algorithm is used as the baseline (upper performance bound) in our analysis and the performance metrics include average waiting time, average response time, and average web server utilization. A synthetic trace generated by the workload generator called SURGE, and a public-domain France Football World Cup 1998 trace are used. We observe that the round robin algorithm performs much worse in comparison with the other two algorithms for low to medium workload. However, as the request arrival rate increases, the performance of the three algorithms converges, with the least connected algorithm approaching the baseline algorithm at a much faster rate than round robin. The least connected algorithm performs well for medium to high workload. At very low load its average waiting time is two to six times higher than that of the baseline algorithm, but the absolute difference between the two waiting times is very small.

96 citations
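Two of the compared policies are easy to state precisely. The sketch below shows a toy dispatcher implementing round robin and least-connected assignment; the class and method names are illustrative, not the testbed's code (least loaded additionally needs server load reports, omitted here):

```python
import itertools

class Dispatcher:
    """Toy front-end dispatcher for a cluster of `n` back-end servers."""
    def __init__(self, n):
        self.active = {s: 0 for s in range(n)}   # open connections per server
        self._rr = itertools.cycle(range(n))

    def assign_round_robin(self):
        """Cycle through servers regardless of their current load."""
        s = next(self._rr)
        self.active[s] += 1
        return s

    def assign_least_connected(self):
        """Pick the server with the fewest active connections (ties -> lowest id)."""
        s = min(self.active, key=self.active.get)
        self.active[s] += 1
        return s

    def finish(self, s):
        """Record a completed request on server `s`."""
        self.active[s] -= 1
```

Because least-connected reacts to how long requests actually hold connections, it tracks the least-loaded baseline more closely than round robin under skewed service times, consistent with the paper's observations.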


Proceedings ArticleDOI
25 Nov 2001
TL;DR: Practical tests and evaluation of different ways in which the session initiation protocol (SIP) could be used to assist application adaptation for IP applications during a vertical handover (VH), that is, a handover between base stations that use different wireless link technologies.
Abstract: This paper describes practical tests and evaluation of different ways in which the session initiation protocol (SIP) could be used to assist application adaptation for IP applications during a vertical handover (VH), that is one between base stations that are using different wireless link technologies. Such an approach has been implemented on a software testbed based at the Centre for Telecommunication Research (CTR), King's College London as part of a joint project with BTexact Technologies.

89 citations


Journal ArticleDOI
TL;DR: This article describes an approach for providing dynamic quality of service (QoS) support in a variable bandwidth network, which may include wireless links and mobile nodes, and implemented a new protocol called dynamic resource reservation protocol (dRSVP) and a new QoS application program interface (API).
Abstract: This article describes an approach for providing dynamic quality of service (QoS) support in a variable bandwidth network, which may include wireless links and mobile nodes. The dynamic QoS approach centers on the notion of providing QoS support at some point within a range requested by applications. To utilize dynamic QoS, applications must be capable of adapting to the level of QoS provided by the network, which may vary during the course of a connection. To demonstrate and evaluate the dynamic QoS concept, we have implemented a new protocol called dynamic resource reservation protocol (dRSVP) and a new QoS application program interface (API). The paper describes this new protocol and API and also discusses our experience with adaptive streaming video and audio applications that work with the new protocol in a testbed network, including wireless local area network connectivity and wireless link connectivity emulated over the wired Ethernet. Qualitative and quantitative assessments of the dynamic RSVP protocol are provided.

77 citations
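The core of the dynamic QoS idea — granting each flow a level somewhere inside its requested range — can be sketched as a simple allocation policy. This is one plausible policy (grant every minimum, then split spare capacity in proportion to each flow's flexibility); dRSVP's actual algorithm may differ:

```python
def allocate_dynamic_qos(requests, capacity):
    """Grant each flow a rate within its requested (min, max) range.
    `requests` is a list of (min_rate, max_rate) pairs; returns the
    granted rate per flow.  Illustrative policy, not the dRSVP spec."""
    total_min = sum(lo for lo, hi in requests)
    if total_min > capacity:
        raise ValueError("cannot admit: sum of minimums exceeds capacity")
    spare = capacity - total_min
    room = sum(hi - lo for lo, hi in requests)
    share = spare / room if room else 0.0
    # Cap at each flow's maximum in case spare capacity exceeds total room.
    return [min(hi, lo + (hi - lo) * share) for lo, hi in requests]
```

When link bandwidth changes (e.g., on a wireless hop), the network re-runs such an allocation and notifies applications of their new level — which is why the applications themselves must be adaptive.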


Journal ArticleDOI
TL;DR: Using this software pipeline, a CyberCut service, modeled on the MOSIS service for VLSI chips, has now been launched for limited student use at a group of cooperating universities.
Abstract: "CyberCut™" is a testbed for an Internet-based CAD/CAM system. It was specifically designed to be a networked, automated system, with a seamless communication flow from a client-side designer to a server-side machining service. The creation of CyberCut required several new software modules. These include: a) a Web-based design tool in which Design-for-Manufacturing information and machining rules constrain the designer to manufacturable parts; b) a geometric representation called SIF-DSG, for unambiguous communication between the client-side designer and the server-side process planner; c) an automated process planning system with several sub-modules that convert an incoming design to a set of tool-paths for execution on a 3-axis CNC milling machine. Using this software pipeline, a CyberCut service, modeled on the MOSIS service for VLSI chips, has now been launched for limited student use at a group of cooperating universities.

72 citations


Journal Article
TL;DR: A novel approach to the localization of sensors in an ad-hoc network called AHLoS (Ad-Hoc Localization System) that enables sensor nodes to discover their locations using a set of distributed iterative algorithms.
Abstract: The recent advances in radio and embedded system technologies have enabled the proliferation of wireless micro-sensor networks. Such wirelessly connected sensors are released in many diverse environments to perform various monitoring tasks. In many such tasks, location awareness is inherently one of the most essential system parameters. It is not only needed to report the origins of events, but also to assist group querying of sensors, routing, and to answer questions on the network coverage. In this paper we present a novel approach to the localization of sensors in an ad-hoc network. We describe a system called AHLoS (Ad-Hoc Localization System) that enables sensor nodes to discover their locations using a set of distributed iterative algorithms. The operation of AHLoS is demonstrated with an accuracy of a few centimeters using our prototype testbed while scalability and performance are studied through simulation.

Proceedings ArticleDOI
01 Jun 2001
TL;DR: This paper describes the design and implementation of a flexible streaming video server and client testbed that can support emerging streaming services such as periodic broadcast and patching, and explores and presents solutions to the system and network issues involved in actually implementing these services.
Abstract: Multimedia streaming applications consume a significant amount of server and network resources due to the high bandwidth and long duration of audio and video clips. Patching and periodic broadcast schemes use multicast transmission and client buffering in innovative ways to reduce server and network resource use. Current research in this area has focussed on the theoretical aspects of these approaches, rather than on the challenges involved in implementing and deploying such scalable video transmission services. In this paper, we first describe the design and implementation of a flexible streaming video server and client testbed that can support emerging streaming services such as periodic broadcast and patching. We explore and present solutions to the system and network issues involved in actually implementing these services. Using this testbed, we conduct extensive experimental evaluations, measuring performance both at the server as well as end-end performance at the client, over the local network as well as over VBNS, and present key insights gained from our implementation and experimental evaluations.

Proceedings ArticleDOI
01 Oct 2001
TL;DR: A "testbed on a desktop" constructed using the ideas discussed allows the developer to create stable testing environments in which real-world conditions can be introduced in a convenient, deterministic and reproducible manner.
Abstract: The development of multi-hop routing protocols for mobile ad hoc networks (MANETs) typically begins with extensive simulation and then proceeds to implementation and real-world testing. While simulation environments can be carefully controlled, real-world environments present numerous difficulties that hinder efficient protocol implementation and testing. These include uncontrolled radio interference and propagation events, hard-to-reproduce network topologies and node mobility patterns, and experimental setups that are inconveniently large. In this paper we present a method for supporting protocol implementation and experimentation in a small testbed setting where variables can be controlled and experimental conditions can be reproduced. The testbed operation is independent of the operating system of the implementation platforms and works with most modern wireless networking interfaces without modifications. A "testbed on a desktop" constructed using the ideas we discuss allows the developer to create stable testing environments in which real-world conditions can be introduced in a convenient, deterministic and reproducible manner.

05 Nov 2001
TL;DR: This paper describes the architecture of a set of kernel components for developing and testing storage area network transport protocols under Linux, intended among other uses as a general prototype for network transport protocol development.
Abstract: This paper describes the architecture of a set of kernel components for developing and testing storage area network transport protocols under Linux. This software is intended for several uses: as a general prototype for network transport protocol development; as a reference implementation of the iSCSI protocol currently under development for standardization by IETF; as a basis for conformance testing for iSCSI; and as a testbed for development of interoperability test suites for iSCSI.

Proceedings ArticleDOI
01 Jan 2001
TL;DR: The ObjectAgent system is being developed to create an agent-based software architecture for autonomous distributed systems, in which agents implement all of the software functionality and communicate through simplified natural language messages.
Abstract: The ObjectAgent system is being developed to create an agent-based software architecture for autonomous distributed systems. Agents are used to implement all of the software functionality and communicate through simplified natural language messages. Decision-making and fault detection and recovery capabilities are built-in at all levels. During the first phase of development, ObjectAgent was prototyped in Matlab. A complete, GUI-based environment was developed for the creation, simulation, and analysis of multiagent multisatellite systems. Collision avoidance and reconfiguration simulations were performed for a cluster of four satellites. ObjectAgent is now being ported to C++ for demonstration on a real-time, distributed testbed and deployment on TechSat 21 in 2003. The present architecture runs on a PowerPC 750 running Enea's OSE operating system. A preliminary demonstration of using ObjectAgent to perform a cluster reconfiguration of three satellites was performed in November 2000.

Journal ArticleDOI
TL;DR: The proposed wireless Diffserv framework takes into consideration several factors, including signaling requirements, mobility, losses, lower wireless bandwidth and battery power constraints, and is designed to be extensible so that other researchers may use its implementation as a foundation for implementing other wireless network algorithms and mechanisms.
Abstract: This paper describes the design and implementation of an enhanced Differentiated Services (Diffserv) architectural framework for providing Quality of Service (QoS) in wireless networks. The Diffserv architecture has been recently proposed to complement the Integrated Services (Intserv) model for providing QoS in the wired Internet. The paper studies whether the Diffserv framework is suitable for wireless networks, taking into consideration several factors, including signaling requirements, mobility, losses, lower wireless bandwidth, and battery power constraints. It identifies the need for supporting signaling and mobility in wireless networks. The framework and mechanisms have been implemented in the wireless testbed at Washington State University. Experimental results from this testbed show the validity of the proposed Diffserv model and also provide performance analyses. The framework is also designed to be extensible so that other researchers may use our implementation as a foundation for implementing other wireless network algorithms and mechanisms.

Journal ArticleDOI
TL;DR: This paper describes a methodology to create an Internet-based infrastructure to service and maintain RP-oriented tele-manufacturing, using the Java-enabled solution based on WWW/Internet computing model to implement the infrastructure.
Abstract: This paper describes a methodology to create an Internet-based infrastructure to service and maintain RP-oriented tele-manufacturing. One of the most important applications of such an infrastructure is to support the closed-loop product development practice. A Java-enabled solution based on the WWW/Internet computing model is used to implement the infrastructure. The main functions include remote part submission, queuing, and monitoring. Under the control of different access privileges, manufacturing sites and queues can be maintained in distributed locations. A software testbed has been developed in Java to verify the methodology.

DOI
01 Oct 2001
TL;DR: The Virtual Cybernetic Building Testbed (VCBT) as discussed by the authors is a hybrid software/hardware testbed that can be used to develop and evaluate control strategies and control products that use the BACnet communication protocol.
Abstract: Advances in building automation technology have taken place for a variety of building services, including heating, ventilating, and air conditioning (HVAC) control systems, lighting control systems, access control systems, and fire detection systems. In spite of these advances in technology, many building control systems do not work as intended. It is evident that the industry needs to learn how to take advantage of the new ability to interconnect traditionally independent systems in a building. Commissioning, automated fault detection, and new approaches to applying system integration are all areas of active research. However, it can be difficult to conduct this research in actual buildings because of the need to maintain comfortable and safe conditions for the building occupants. This report describes two enabling tools that have been developed to advance these research efforts, focusing on their use to develop and test automated fault detection and diagnostic (FDD) technology for HVAC systems. The two enabling tools are the Virtual Cybernetic Building Testbed (VCBT) and the FDD Test Shell. The VCBT consists of a variety of simulation models that together emulate the characteristics and performance of a cybernetic building system. The simulation models are interfaced to real, state-of-the-art BACnet-speaking control systems to provide a hybrid software/hardware testbed that can be used to develop and evaluate control strategies and control products that use the BACnet communication protocol. The FDD Test Shell is a data-sharing tool that was developed to enable side-by-side testing and comparison of two or more FDD tools and to support the integration of information from multiple FDD tools. Preliminary tests of some of the faults modeled in the VCBT are described in this report.
The primary goal of the tests was to quantify the impact of valve and damper leakage for a typical air-handling unit (AHU) with variable-air-volume (VAV) box configuration. In this study, testing revealed that leakage through the outdoor air damper and a stuck-open outdoor air damper fault have almost no measurable impact on the operation of the system.

Proceedings ArticleDOI
15 May 2001
TL;DR: The paper reports on the first feasibility study: running a self-migrating version of the Cactus simulation code across the European grid testbed, including "live" remote data visualization and steering from different demonstration booths at Supercomputing 2000, in Dallas, TX.
Abstract: The Testbed and Applications working group of the European Grid Forum (EGrid) is actively building and experimenting with a grid infrastructure connecting several research-based supercomputing sites located in Europe. The paper reports on our first feasibility study: running a self-migrating version of the Cactus simulation code across the European grid testbed, including "live" remote data visualization and steering from different demonstration booths at Supercomputing 2000, in Dallas, TX. We report on the problems that had to be resolved for this endeavour and identify open research challenges for building production-grade grid environments.

Proceedings ArticleDOI
08 Jan 2001
TL;DR: It is shown that an adaptive forward error correction mechanism, which adjusts the level of redundancy in response to packet loss behavior, can quickly accommodate worsening channel characteristics in order to reduce delay and increase throughput for reliable multicast channels.
Abstract: This paper describes an experimental study of a proxy service to support collaboration among mobile users. Specifically, the paper addresses the problem of reliably multicasting Web resources across wireless local area networks, whose loss characteristics can be highly variable. The software architecture of the proxy service is described, followed by results of a performance study conducted on a mobile computing testbed. The main contribution of the paper is to show that an adaptive forward error correction mechanism, which adjusts the level of redundancy in response to packet loss behavior, can quickly accommodate worsening channel characteristics in order to reduce delay and increase throughput for reliable multicast channels.
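The adaptive mechanism can be sketched concisely: track an estimate of the loss rate from receiver reports and size the parity overhead to cover expected losses. The class below is an illustrative sketch of that idea; the constants and names are assumptions, not values from the paper:

```python
import math

class AdaptiveFEC:
    """Adapts the number of parity packets per k-packet block to an
    exponentially weighted estimate of the channel loss rate."""
    def __init__(self, k=20, alpha=0.25, margin=1.5):
        self.k = k              # data packets per FEC block
        self.alpha = alpha      # weight given to the newest loss report
        self.margin = margin    # safety factor over the expected loss
        self.loss_est = 0.0

    def report(self, lost, sent):
        """Fold a receiver's loss report into the running estimate."""
        self.loss_est = (1 - self.alpha) * self.loss_est + self.alpha * (lost / sent)

    def parity_count(self):
        """Parity packets to append so expected losses are covered."""
        return math.ceil(self.k * self.loss_est * self.margin)
```

As loss reports worsen, `parity_count` grows, trading bandwidth for fewer retransmissions — the delay/throughput trade the paper quantifies for reliable multicast over wireless LANs.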

Proceedings ArticleDOI
01 Apr 2001
TL;DR: An experimental study of a proxy service to enhance interactive MPEG-1 video streams when multicast across wireless local area networks is described, showing that a combination of forward and backward error control is effective when applied to video streams for mobile collaborating users.
Abstract: Multicasting of compressed video streams over wireless networks demands significantly different approaches to error control than those used in wired networks, due to high packet loss rates. This paper describes an experimental study of a proxy service to enhance interactive MPEG-1 video streams when multicast across wireless local area networks. The architecture and operation of the proxy service are presented, followed by results of a performance study conducted on a mobile computing testbed. The main contribution of the paper is to show that a combination of forward and backward error control is effective when applied to video streams for mobile collaborating users.

Proceedings ArticleDOI
12 Jun 2001
TL;DR: The effect of the network delay on the control performance was evaluated on a Profibus-DP testbed, and a GA-based PID tuning algorithm is proposed to design controllers suitable for networked control systems.
Abstract: As many sensors and actuators are used in automated systems, various industrial networks are adopted for digital control systems. In order to take advantage of the networking, however, the network implementation should be carefully designed to satisfy real-time requirements considering network delays. This paper presents the implementation scheme of a networked control system via Profibus-DP network. More specifically, the effect of the network delay on the control performance was evaluated on a Profibus-DP testbed, and a GA-based PID tuning algorithm is proposed to design controllers suitable for networked control systems.

Proceedings ArticleDOI
01 Oct 2001
TL;DR: This paper describes the design and implementation of a flexible streaming video server and client testbed that implements both periodic broadcast and patching, and explores the issues that arise when implementing these algorithms.
Abstract: Multimedia streaming applications can consume a significant amount of server and network resources. Periodic broadcast and patching are two approaches that use multicast transmission and client buffering in innovative ways to reduce server and network load, while at the same time allowing asynchronous access to multimedia streams by a large number of clients. Current research in this area has focussed primarily on the algorithmic aspects of these approaches, with evaluation performed via analysis or simulation. In this paper, we describe the design and implementation of a flexible streaming video server and client testbed that implements both periodic broadcast and patching, and explore the issues that arise when implementing these algorithms. We present measurements detailing the overheads associated with the various server components (signaling, transmission schedule computation, data retrieval and transmission), the interactions between the various components of the architecture, and the overall end-to-end performance. We also discuss the importance of an appropriate server video segment caching policy. We conclude with a discussion of the insights gained from our implementation and experimental evaluation.
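The flavor of a periodic broadcast transmission schedule can be shown with a short sketch. Schemes in this family split a video into segments of geometrically increasing length and loop each segment on its own multicast channel, so a client's worst-case startup delay is only the first segment's length. The ratio-2 progression below resembles skyscraper-style schemes but is illustrative, not the paper's exact scheme:

```python
def periodic_broadcast_segments(video_len, n_channels, ratio=2):
    """Split a video of length `video_len` (any time unit) into
    `n_channels` segments with geometrically increasing lengths.
    Each segment is broadcast cyclically on its own channel; clients
    join all channels and buffer ahead."""
    weights = [ratio**i for i in range(n_channels)]
    unit = video_len / sum(weights)
    return [w * unit for w in weights]
```

For a 70-minute video on 3 channels this yields segments of 10, 20, and 40 minutes: server bandwidth is fixed at 3 channels regardless of audience size, and no viewer waits more than 10 minutes.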

Book ChapterDOI
01 Jan 2001
TL;DR: A novel architecture and admission control algorithm termed Egress Admission Control is designed and implemented, and its deployment on a network of prototype routers enhanced with ingress-egress path monitoring and edge admission control is described.
Abstract: While the IntServ solution to Internet QoS can achieve a strong service model that guarantees flow throughputs and loss rates, it places excessive burdens on high-speed core routers to signal, schedule, and manage state for individual flows. Alternatively, the DiffServ solution achieves scalability via aggregate control, yet cannot ensure a particular QoS to individual flows. To simultaneously achieve scalability and a strong service model, we have designed and implemented a novel architecture and admission control algorithm termed Egress Admission Control. In our approach, the available service on a network path is passively monitored, and admission control is performed only at egress nodes, incorporating the effects of cross traffic with implicit measurements rather than with explicit signaling. In this paper, we describe our implementation of the scheme on a network of prototype routers enhanced with ingress-egress path monitoring and edge admission control. We report the results of testbed experiments and demonstrate the feasibility of an edge-based architecture for providing IntServ-like services in a scalable way.

Journal ArticleDOI
TL;DR: The Ocean Sampling Mobile Network (SAMON) simulator testbed has been developed at Penn State to enable web-based integration of high-fidelity simulators of heterogeneous autonomous undersea vehicles from multiple organizations and a variety of on-board and fixed sensors in a realistic ocean environment.
Abstract: The Ocean Sampling Mobile Network (SAMON) simulator testbed has been developed at Penn State for designing and evaluating multirobot ocean-mapping missions, in realistic underwater environments, prior to in-water testing. The goal in developing the testbed is to enable web-based integration of high-fidelity simulators of heterogeneous autonomous undersea vehicles from multiple organizations and a variety of on-board and fixed sensors in a realistic ocean environment in order to formulate and evaluate intelligent control strategies for mission execution. A formal control language facilitates real-time interactions between heterogeneous autonomous components. A simulation experiment is described that demonstrates multistage inferencing and decision/control strategies for spatio-temporal coordination and multilayered adaptation of group behavior in response to evolving environmental physics or operational dynamics.

Proceedings ArticleDOI
26 Jul 2001
TL;DR: The proposed system will also yield a testbed for challenging networking research to develop fast protocols dealing with very high bit rate applications relying on multiple streams of time-critical data.
Abstract: Radar design and management concepts are presented for a networked environment to reduce the cost and dramatically enhance the performance of radar systems. The benefits attained by moving from a large centralized radar to a distributed cluster of smaller radars extend far beyond the reduction in cost and the increase in reliability. Additional advantages of multiple radar operation exploiting the ubiquitous networking technology are also presented. In addition to providing the impetus for novel radar algorithms and applications, the proposed system will also yield a testbed for challenging networking research to develop fast protocols dealing with very high bit rate applications relying on multiple streams of time-critical data.

Proceedings ArticleDOI
25 Nov 2001
TL;DR: Some of the components of the testbed are described, and the experiences gained while building it could benefit those who plan to build a similar testbed to realize several features and capabilities of the Mobile Wireless Internet before actually bringing them to market.
Abstract: In an effort to realize wireless Internet telephony and multimedia streaming in a highly mobile environment, a testbed emulating a wireless Internet has been built. It allows multimedia calls to be set up between IP mobiles and supports integration between IP and PSTN end-points in a wireless environment. The different functionalities and components involved in wireless Internet streaming multimedia have been prototyped and experimented with in the testbed. These include signaling, registration, dynamic binding, and location management, as well as support for QoS features for mobile users. This paper describes some of the components of the testbed and highlights the experiences gained while building it, which could benefit those who plan to build a similar testbed to realize several features and capabilities of the Mobile Wireless Internet before actually bringing them to market.
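The registration and dynamic-binding functions mentioned in the abstract can be illustrated with a small soft-state registry. This is a hypothetical sketch, not the testbed's implementation: the class name `LocationRegistry`, the SIP-style URI, and the fixed lifetime are all assumptions introduced here for illustration.

```python
import time

class LocationRegistry:
    """Hypothetical sketch of dynamic binding / location management:
    map a mobile user's permanent identifier to its current care-of
    address, with soft-state expiry so stale bindings disappear when
    a mobile moves away without deregistering."""

    def __init__(self, lifetime=60.0):
        self.lifetime = lifetime
        self.bindings = {}  # user URI -> (contact address, expiry time)

    def register(self, user, contact, now=None):
        # A re-registration from a new location simply overwrites the
        # old binding, which is how handoff to a new subnet shows up.
        now = time.time() if now is None else now
        self.bindings[user] = (contact, now + self.lifetime)

    def locate(self, user, now=None):
        now = time.time() if now is None else now
        entry = self.bindings.get(user)
        if entry is None or entry[1] < now:
            return None  # unknown user or expired binding
        return entry[0]

reg = LocationRegistry(lifetime=60.0)
reg.register("sip:alice@example.net", "10.0.0.5", now=0.0)
print(reg.locate("sip:alice@example.net", now=30.0))   # 10.0.0.5
print(reg.locate("sip:alice@example.net", now=120.0))  # None (expired)
```

The soft-state expiry is the design point worth noting: in a mobile environment, requiring explicit deregistration is unrealistic, so bindings must time out on their own.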

Book ChapterDOI
Bernard Burg1
TL;DR: This article presents these evolutions, positions agents, and introduces the open testbed of this ecosystem, capable of serving each individual user personally, currently under construction under the auspices of Agentcities.
Abstract: Agents, as well as many other technologies around the semantic web, have shown an increased maturity through standards and open-source. These improvements have been very self-centered and led to the creation of silos. The time has come to integrate these improvements into an ecosystem, bringing a larger picture towards active web-services capable of serving each individual user personally. This article presents these evolutions, positions agents, and introduces the open testbed of this ecosystem currently under construction under the auspices of Agentcities.

Proceedings ArticleDOI
Haitao Zheng1, D. Samardzija
07 Oct 2001
TL;DR: A narrowband wireless BLAST testbed with multiple transmit and receive antennas is built, and it is shown that adapting the number of transmit antennas achieves a remarkable performance improvement.
Abstract: BLAST has been shown to provide high-capacity wireless communications by using multiple antennas at both the transmitter and the receiver. We have built a narrowband wireless BLAST testbed with multiple transmit and receive antennas. We examine the performance of V-BLAST under different antenna configurations and with link adaptation. It is shown that adapting the number of transmit antennas achieves a remarkable performance improvement. To further demonstrate the effectiveness of the testbed, we use over-the-air error traces to simulate H.263+ video transmission.
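The link-adaptation idea in this abstract, varying the number of active transmit antennas with channel conditions, can be sketched as a simple threshold rule. This is an illustration only: the SNR thresholds below are invented for the example and do not come from the paper, which adapts based on measured link performance.

```python
def select_tx_antennas(snr_db, max_antennas=4):
    """Hypothetical sketch of transmit-antenna adaptation: at low SNR
    the parallel V-BLAST streams interfere too much, so fall back to
    fewer transmit antennas; at high SNR, spatial multiplexing with
    more streams pays off. Thresholds are illustrative assumptions."""
    thresholds = [10.0, 16.0, 22.0]  # dB boundaries for enabling stream 2, 3, 4
    n = 1
    for t in thresholds:
        if snr_db >= t and n < max_antennas:
            n += 1
    return n

for snr in (5.0, 12.0, 25.0):
    print(snr, "dB ->", select_tx_antennas(snr), "tx antennas")
```

A real system would hysterese around the thresholds to avoid oscillating between configurations, but the monotone mapping from channel quality to stream count is the core of the scheme.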

Posted Content
TL;DR: Nimrod-G as discussed by the authors is a Grid resource management system for scheduling computations on resources distributed across the world with varying quality of service requirements, including resource discovery, trading, and scheduling based on economic principles and a user-defined QoS requirement.
Abstract: Computational Grids, coupling geographically distributed resources such as PCs, workstations, clusters, and scientific instruments, have emerged as a next generation computing platform for solving large-scale problems in science, engineering, and commerce. However, application development, resource management, and scheduling in these environments continue to be a complex undertaking. In this article, we discuss our efforts in developing a resource management system for scheduling computations on resources distributed across the world with varying quality of service. Our service-oriented grid computing system called Nimrod-G manages all operations associated with remote execution including resource discovery, trading, and scheduling based on economic principles and a user-defined quality of service requirement. The Nimrod-G resource broker is implemented by leveraging existing technologies such as Globus, and provides new services that are essential for constructing industrial-strength Grids. We discuss results of preliminary experiments on scheduling some parametric computations using the Nimrod-G resource broker on a world-wide grid testbed that spans five continents.
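The economic scheduling idea behind this abstract, placing jobs under both a deadline and a budget, can be sketched with a greedy rule: assign each independent parameter-sweep job to the cheapest resource that can still finish it in time. This is a minimal illustration of the deadline/budget-constrained style of scheduling, not the Nimrod-G broker's actual algorithm; the resource dictionary fields and function name are assumptions made for the example.

```python
def schedule_jobs(n_jobs, resources, deadline, budget):
    """Hypothetical sketch of deadline- and budget-constrained
    scheduling: greedily place independent jobs on the cheapest
    resource whose projected completion time meets the deadline,
    without exceeding the total budget."""
    resources = sorted(resources, key=lambda r: r["price"])  # cheapest first
    assignment, cost = [], 0.0
    for _ in range(n_jobs):
        placed = False
        for r in resources:
            # Projected finish time if one more job goes to resource r,
            # assuming the resource runs its jobs sequentially.
            load = sum(1 for a in assignment if a is r) + 1
            if load * r["secs_per_job"] <= deadline and cost + r["price"] <= budget:
                assignment.append(r)
                cost += r["price"]
                placed = True
                break
        if not placed:
            return None  # infeasible under this deadline and budget
    return cost

resources = [
    {"name": "cheap-cluster", "price": 1.0, "secs_per_job": 100.0},
    {"name": "fast-grid",     "price": 4.0, "secs_per_job": 20.0},
]
print(schedule_jobs(10, resources, deadline=300.0, budget=40.0))
```

Relaxing the deadline shifts jobs toward the cheap resource and lowers the cost, while tightening it forces jobs onto the fast, expensive one; that cost/time trade-off, driven by user-supplied constraints, is the essence of the economic approach.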