
Showing papers on "Scalability published in 1998"


01 Dec 1998
TL;DR: Differentiated services enhancements to the Internet protocol are intended to enable scalable service discrimination in the Internet without the need for per-flow state and signaling at every hop.
Abstract: Differentiated services enhancements to the Internet protocol are intended to enable scalable service discrimination in the Internet without the need for per-flow state and signaling at every hop. A variety of services may be built from a small, well-defined set of building blocks which are deployed in network nodes. The services may be either end-to-end or intra-domain; they include both those that can satisfy quantitative performance requirements (e.g., peak bandwidth) and those based on relative performance (e.g., "class" differentiation). Services can be constructed by a combination of:

1,850 citations


Patent
21 Dec 1998
TL;DR: In this article, the data is divided into segments and each segment is distributed randomly on one of several storage units, independent of the storage units on which other segments of the media data are stored.
Abstract: Multiple applications request data from multiple storage units over a computer network. The data is divided into segments and each segment is distributed randomly on one of several storage units, independent of the storage units on which other segments of the media data are stored. At least one additional copy of each segment also is distributed randomly over the storage units, such that each segment is stored on at least two storage units. This random distribution of multiple copies of segments of data improves both scalability and reliability. When an application requests a selected segment of data, the request is processed by the storage unit with the shortest queue of requests. Random fluctuations in the load applied by multiple applications on multiple storage units are balanced nearly equally over all of the storage units. This combination of techniques results in a system which can transfer multiple, independent high-bandwidth streams of data in a scalable manner in both directions between multiple applications and multiple storage units.
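
As a rough illustration of the scheme described above (not the patent's actual implementation), the sketch below places two copies of each segment on randomly chosen storage units and serves each request from the holder with the shortest queue; the class and function names are invented for the example.

import random
from collections import deque

class StorageUnit:
    def __init__(self, uid):
        self.uid = uid
        self.queue = deque()      # pending segment requests
        self.segments = {}        # segment_id -> data

def place_segment(units, segment_id, data, copies=2):
    # Each copy lands on a storage unit chosen at random, independently of
    # where other segments are stored.
    for unit in random.sample(units, copies):
        unit.segments[segment_id] = data

def request_segment(units, segment_id):
    # Among the units holding the segment, serve the request from the one
    # with the shortest queue, balancing transient load fluctuations.
    holders = [u for u in units if segment_id in u.segments]
    chosen = min(holders, key=lambda u: len(u.queue))
    chosen.queue.append(segment_id)
    return chosen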

1,427 citations


Proceedings ArticleDOI
28 Jul 1998
TL;DR: The classified advertisement (classad) matchmaking framework is developed and implemented, a flexible and general approach to resource management in distributed environments with decentralized ownership of resources.
Abstract: Conventional resource management systems use a system model to describe resources and a centralized scheduler to control their allocation. We argue that this paradigm does not adapt well to distributed systems, particularly those built to support high throughput computing. Obstacles include heterogeneity of resources, which makes uniform allocation algorithms difficult to formulate, and distributed ownership, leading to widely varying allocation policies. Faced with these problems, we developed and implemented the classified advertisement (classad) matchmaking framework, a flexible and general approach to resource management in distributed environments with decentralized ownership of resources. Novel aspects of the framework include a semi-structured data model that combines schema, data, and query in a simple but powerful specification language, and a clean separation of the matching and claiming phases of resource allocation. The representation and protocols result in a robust, scalable and flexible framework that can evolve with changing resources. The framework was designed to solve real problems encountered in the deployment of Condor, a high throughput computing system developed at the University of Wisconsin-Madison. Condor is heavily used by scientists at numerous sites around the world. It derives much of its robustness and efficiency from the matchmaking architecture.
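
As a loose illustration of the matchmaking idea (not the actual ClassAd language or the Condor API), the sketch below represents offers and requests as attribute dictionaries whose requirements are predicates over the other party's ad; matching is symmetric, and claiming is left as a separate step.

# Hypothetical classad-style matchmaking sketch; attribute names are invented.
def matches(ad_a, ad_b):
    # Both parties' requirements must be satisfied by the other ad.
    return ad_a["requirements"](ad_b) and ad_b["requirements"](ad_a)

machine_ad = {
    "Memory": 512, "Arch": "INTEL",
    "requirements": lambda job: job.get("ImageSize", 0) <= 512,
}
job_ad = {
    "ImageSize": 128,
    "requirements": lambda m: m.get("Arch") == "INTEL" and m.get("Memory", 0) >= 64,
}

if matches(machine_ad, job_ad):
    # Matchmaking only pairs the two ads; the claiming protocol between the
    # matched parties happens separately.
    print("match found; proceed to claim")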

829 citations


Journal ArticleDOI
01 Oct 1998
TL;DR: A simple, practical strategy for locality-aware request distribution (LARD), in which the front-end distributes incoming requests in a manner that achieves high locality in the back-ends' main memory caches as well as load balancing.
Abstract: We consider cluster-based network servers in which a front-end directs incoming requests to one of a number of back-ends. Specifically, we consider content-based request distribution: the front-end uses the content requested, in addition to information about the load on the back-end nodes, to choose which back-end will handle this request. Content-based request distribution can improve locality in the back-ends' main memory caches, increase secondary storage scalability by partitioning the server's database, and provide the ability to employ back-end nodes that are specialized for certain types of requests. As a specific policy for content-based request distribution, we introduce a simple, practical strategy for locality-aware request distribution (LARD). With LARD, the front-end distributes incoming requests in a manner that achieves high locality in the back-ends' main memory caches as well as load balancing. Locality is increased by dynamically subdividing the server's working set over the back-ends. Trace-based simulation results and measurements on a prototype implementation demonstrate substantial performance improvements over state-of-the-art approaches that use only load information to distribute requests. On workloads with working sets that do not fit in a single server node's main memory cache, the achieved throughput exceeds that of the state-of-the-art approach by a factor of two to four. With content-based distribution, incoming requests must be handed off to a back-end in a manner transparent to the client, after the front-end has inspected the content of the request. To this end, we introduce an efficient TCP handoff protocol that can hand off an established TCP connection in a client-transparent manner.
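
A minimal sketch of the basic LARD policy, assuming illustrative load thresholds and active connection counts as the load metric (the paper's exact parameters and refinements differ): a target stays on the node it was first assigned to unless that node becomes overloaded while a lightly loaded node exists.

T_LOW, T_HIGH = 25, 65   # illustrative thresholds, not the paper's tuned values

class LardFrontEnd:
    def __init__(self, backends):
        self.backends = backends   # dict: node -> active connection count
        self.server_of = {}        # target (e.g. URL) -> assigned node

    def dispatch(self, target):
        node = self.server_of.get(target)
        if node is None:
            node = min(self.backends, key=self.backends.get)
        else:
            least = min(self.backends, key=self.backends.get)
            # Reassign only if the current node is overloaded and a much less
            # loaded node exists, trading cache locality for load balance.
            if (self.backends[node] > T_HIGH and self.backends[least] < T_LOW) \
                    or self.backends[node] >= 2 * T_HIGH:
                node = least
        self.server_of[target] = node
        self.backends[node] += 1   # connection handed off to the chosen node
        return node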

643 citations


Proceedings ArticleDOI
01 Oct 1998
TL;DR: Two new algorithms for solving the least cost matching filter problem at high speeds are described; the first is based on a grid-of-tries construction and works optimally for processing filters consisting of two prefix fields using linear space.
Abstract: In Layer Four switching, the route and resources allocated to a packet are determined by the destination address as well as other header fields of the packet such as source address, TCP and UDP port numbers. Layer Four switching unifies firewall processing, RSVP style resource reservation filters, QoS Routing, and normal unicast and multicast forwarding into a single framework. In this framework, the forwarding database of a router consists of a potentially large number of filters on key header fields. A given packet header can match multiple filters, so each filter is given a cost, and the packet is forwarded using the least cost matching filter. In this paper, we describe two new algorithms for solving the least cost matching filter problem at high speeds. Our first algorithm is based on a grid-of-tries construction and works optimally for processing filters consisting of two prefix fields (such as destination-source filters) using linear space. Our second algorithm, cross-producting, provides fast lookup times for arbitrary filters but potentially requires large storage. We describe a combination scheme that combines the advantages of both schemes. The combination scheme can be optimized to handle pure destination prefix filters in 4 memory accesses, destination-source filters in 8 memory accesses worst case, and all other filters in 11 memory accesses in the typical case.
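
For reference, a brute-force implementation of the least cost matching filter semantics on two 32-bit prefix fields; the paper's grid-of-tries and cross-producting schemes compute the same answer in a handful of memory accesses rather than a linear scan. The field widths and the filter tuple layout below are assumptions of this sketch.

def prefix_match(prefix, addr):
    # prefix is a (value, length) pair on a 32-bit field; length 0 matches all.
    value, length = prefix
    if length == 0:
        return True
    return (addr >> (32 - length)) == (value >> (32 - length))

def least_cost_filter(filters, dst, src):
    # filters: list of (dst_prefix, src_prefix, cost, action) tuples.
    best = None
    for dst_p, src_p, cost, action in filters:
        if prefix_match(dst_p, dst) and prefix_match(src_p, src):
            if best is None or cost < best[0]:
                best = (cost, action)
    return best   # (cost, action) of the least cost matching filter, or None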

625 citations


Proceedings ArticleDOI
07 Dec 1998
TL;DR: This paper reviews the architecture for a distributed intrusion detection system based on multiple independent entities working collectively, and calls these entities autonomous agents, which solves some of the problems previously mentioned.
Abstract: The intrusion detection system architectures commonly used in commercial and research systems have a number of problems that limit their configurability, scalability or efficiency. The most common shortcoming in the existing architectures is that they are built around a single monolithic entity that does most of the data collection and processing. In this paper, we review our architecture for a distributed intrusion detection system based on multiple independent entities working collectively. We call these entities autonomous agents. This approach solves some of the problems previously mentioned. We present the motivation and description of the approach, partial results obtained from an early prototype, a discussion of design and implementation issues, and directions for future work.

590 citations


Journal ArticleDOI
TL;DR: Data mining applications place special requirements on clustering algorithms including the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehensiveness, and so on.
Abstract: Data mining applications place special requirements on clustering algorithms including: the ability to find clusters embedded in subspaces of high dimensional data, scalability, end-user comprehens...

437 citations


Journal ArticleDOI
01 Oct 1998
TL;DR: Measurements of the prototype NASD system show that these services can be cost-effectively integrated into a next-generation disk drive ASIC, and show scalable bandwidth for NASD-specialized filesystems.
Abstract: This paper describes the Network-Attached Secure Disk (NASD) storage architecture, prototype implementations of NASD drives, array management for our architecture, and three filesystems built on our prototype. NASD provides scalable storage bandwidth without the cost of servers used primarily for transferring data from peripheral networks (e.g. SCSI) to client networks (e.g. ethernet). Increasing dataset sizes, new attachment technologies, the convergence of peripheral and interprocessor switched networks, and the increased availability of on-drive transistors motivate and enable this new architecture. NASD is based on four main principles: direct transfer to clients, secure interfaces via cryptographic support, asynchronous non-critical-path oversight, and variably-sized data objects. Measurements of our prototype system show that these services can be cost-effectively integrated into a next generation disk drive ASIC. End-to-end measurements of our prototype drive and filesystems suggest that NASD can support conventional distributed filesystems without performance degradation. More importantly, we show scalable bandwidth for NASD-specialized filesystems. Using a parallel data mining application, NASD drives deliver a linear scaling of 6.2 MB/s per client-drive pair, tested with up to eight pairs in our lab.

424 citations


Proceedings ArticleDOI
18 Oct 1998
TL;DR: This paper presents the concept of a structured test access mechanism for embedded cores: test data access from chip pins to TESTSHELL and vice versa is provided by the TESTRAIL, while the operation of the TESTSHELL is controlled by a dedicated test control mechanism (TCM).
Abstract: The main objective of core-based IC design is improvement of design efficiency and time-to-market. In order to prevent test development from becoming the bottleneck in the entire development trajectory, reuse of pre-computed tests for the reusable pre-designed cores is mandatory. The core user is responsible for translating the test at core level into a test at chip level. A standardized test access mechanism eases this task, therefore contributing to the plug-n-play character of core-based design. This paper presents the concept of a structured test access mechanism for embedded cores. Reusable IP modules are wrapped in a TESTSHELL. Test data access from chip pins to TESTSHELL and vice versa is provided by the TESTRAIL, while the operation of the TESTSHELL is controlled by a dedicated test control mechanism (TCM). Both the TESTRAIL and the TCM are standardized, but open for extensions.

338 citations


Journal ArticleDOI
TL;DR: This paper describes a programming model for large-scale interactive Internet services and a scalable cluster-based framework that has been in production use at UC Berkeley since April 1997, and presents a detailed examination of TranSend, a scalable transformational Web proxy deployed on this framework.
Abstract: Today's Internet clients vary widely with respect to both hardware and software properties: screen size, color depth, effective bandwidth, processing power, and the ability to handle different data formats. The order-of-magnitude span of this variation is too large to hide at the network level, making application-level techniques necessary. We show that on-the-fly adaptation by transformational proxies is a widely applicable, cost-effective, and flexible technique for addressing all these types of variations. To support this claim, we describe our experience with data-type-specific distillation (lossy compression) in a variety of applications. We also argue that placing adaptation machinery in the network infrastructure, rather than inserting it into end servers, enables incremental deployment and amortization of operating costs. To this end, we describe a programming model for large-scale interactive Internet services and a scalable cluster-based framework that has been in production use at UC Berkeley since April 1997. We present a detailed examination of TranSend, a scalable transformational Web proxy deployed on our cluster framework, and give descriptions of several handheld-device applications that demonstrate the wide applicability of the proxy-adaptation philosophy.

327 citations


Journal ArticleDOI
TL;DR: The network software architecture of the distributed interactive virtual environment platform is introduced, designed to scale with a large number of simultaneous participants, while ensuring maximum interaction at each site.
Abstract: We introduce the network software architecture of the distributed interactive virtual environment platform. The platform is designed to scale with a large number of simultaneous participants, while ensuring maximum interaction at each site. Scalability is achieved by making extensive use of multicast techniques and by partitioning the virtual space into smaller regions. We also present an application-level backbone that can connect islands of multicast-aware networks together.
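
The partitioning idea lends itself to a toy sketch (the region size, multicast addresses, and function names below are all invented for illustration): the virtual space is cut into fixed-size regions, each region maps to its own multicast group, and a participant subscribes only to the groups covering the area around its avatar.

REGION_SIZE = 100.0
BASE_GROUP = "239.1.0."

def region_of(x, y):
    # Map a position in the virtual space to a fixed-size grid region.
    return (int(x // REGION_SIZE), int(y // REGION_SIZE))

def multicast_group(region, columns=16):
    # Deterministically map a region to a multicast group address.
    rx, ry = region
    return BASE_GROUP + str((ry * columns + rx) % 250 + 1)

def groups_to_join(x, y):
    # Subscribe to the avatar's own region plus its eight neighbours,
    # so only nearby traffic is received.
    rx, ry = region_of(x, y)
    return {multicast_group((rx + dx, ry + dy))
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)}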

Journal ArticleDOI
TL;DR: The performance of the MOSIX operating system with algorithms for adaptive resource sharing as well as the performance of several large-scale, parallel applications are presented.

Journal ArticleDOI
01 Mar 1998
TL;DR: An algorithm for transforming a centralized state and activity chart into a provably equivalent partitioned one, suitable for distributed execution, is developed, along with a synchronization scheme that guarantees an execution equivalent to a non-distributed one.
Abstract: Current workflow management systems fall short of supporting large-scale distributed, enterprise-wide applications. We present a scalable, rigorously founded approach to enterprise-wide workflow management, based on the distributed execution of state and activity charts. By exploiting the formal semantics of state and activity charts, we develop an algorithm for transforming a centralized state and activity chart into a provably equivalent partitioned one, suitable for distributed execution. A synchronization scheme is developed that guarantees an execution equivalent to a non-distributed one. This basic solution is further refined in order to reduce communication overhead and exploit parallelism between partitions whenever possible. The developed synchronization schemes are compared in terms of the number and size of synchronization messages.

Journal ArticleDOI
TL;DR: This paper is devoted to database integration, possibly the most critical issue, and aims to provide a clear picture of the approaches, the current solutions, and what remains to be achieved.
Abstract: In many large companies the widespread usage of computers has led a number of different application-specific databases to be installed. As company structures evolve, boundaries between departments move, creating new business units. Their new applications will use existing data from various data stores, rather than new data entering the organization. Henceforth, the ability to make data stores interoperable becomes a crucial factor for the development of new information systems. Data interoperability may come in various degrees. At the lowest level, commercial gateways connect specific pairs of database management systems (DBMSs). Software providing facilities for defining persistent views over different databases [6] simplifies access to distant data but does not support automatic enforcement of consistency constraints among different databases. Full interoperability is achieved by distributed or federated database systems, which support integration of existing data into virtual databases (i.e. databases which are logically defined but not physically materialized). The latter allow existing databases to remain under control of their respective owners, thus supporting a harmonious coexistence of scalable data integration and site autonomy requirements [9]. Federated systems are very popular today. However, before they become marketable, many issues remain to be solved. Design issues focus on either human-centered aspects (cooperative work, including autonomy issues and negotiation procedures) or database-centered aspects (data integration, schema/database evolution). Operational issues investigate system interoperability mainly in terms of support of new transaction types, new query processing algorithms, security concerns, etc. General overviews may be found elsewhere [4, 9]. This paper is devoted to database integration, possibly the most critical issue. Simply stated, database integration is the process which takes as input a set of databases, and produces as output a single unified description of the input schemas (the integrated schema) and the associated mapping information supporting integrated access to existing data through the integrated schema. As such, database integration is also used in the process of re-engineering an existing legacy system. Database integration has attracted many diverse and diverging contributions. The purpose, and the main intended contribution, of this article is to provide a clear picture of the approaches and current solutions, and of what remains to be achieved.

Proceedings ArticleDOI
01 Nov 1998
TL;DR: SPADE utilizes combinatorial properties to decompose the original problem into smaller sub-problems that can be independently solved in main memory using efficient lattice search techniques and simple join operations.
Abstract: In this paper we present SPADE, a new algorithm for fast discovery of Sequential Patterns. The existing solutions to this problem make repeated database scans, and use complex hash structures which have poor locality. SPADE utilizes combinatorial properties to decompose the original problem into smaller sub-problems that can be independently solved in main memory using efficient lattice search techniques and simple join operations. All sequences are discovered in only three database scans. Experiments show that SPADE outperforms the best previous algorithm by a factor of two, and by an order of magnitude with some pre-processed data. It also has linear scalability with respect to the number of customers, and a number of other database parameters.
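
A small sketch of the vertical id-list join that SPADE builds on, simplified to single-item atoms and omitting the lattice decomposition: each atom carries (sequence id, event id) pairs, and the support of "A followed by B" falls out of a temporal join computed entirely in memory. The data and variable names are illustrative.

from collections import defaultdict

def temporal_join(idlist_a, idlist_b):
    # Result: (sid, eid_b) pairs where B occurs after A in the same sequence.
    by_sid = defaultdict(list)
    for sid, eid in idlist_a:
        by_sid[sid].append(eid)
    joined = []
    for sid, eid_b in idlist_b:
        if sid in by_sid and any(eid_a < eid_b for eid_a in by_sid[sid]):
            joined.append((sid, eid_b))
    return joined

def support(idlist):
    # Support = number of distinct sequences containing the pattern.
    return len({sid for sid, _ in idlist})

# Example: sequences in which item A is later followed by item B.
A = [(1, 10), (1, 30), (2, 20)]
B = [(1, 40), (2, 15), (3, 5)]
AB = temporal_join(A, B)
print(support(AB))   # -> 1 (only sequence 1 contains A before B)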

Proceedings Article
24 Aug 1998
TL;DR: This paper considers the filter step of the spatial join problem, for the case where neither of the inputs are indexed, and presents a new algorithm, Scalable Sweeping-Based Spatial Join (SSSJ), that achieves both efficiency on real-life data and robustness against highly skewed and worst-case data sets.
Abstract: In this paper, we consider the filter step of the spatial join problem, for the case where neither of the inputs are indexed. We present a new algorithm, Scalable Sweeping-Based Spatial Join (SSSJ), that achieves both efficiency on real-life data and robustness against highly skewed and worst-case data sets. The algorithm combines a method with theoretically optimal bounds on I/O transfers based on the recently proposed distribution-sweeping technique with a highly optimized implementation of internal-memory plane-sweeping. We present experimental results based on an efficient implementation of the SSSJ algorithm, and compare it to the state-of-the-art Partition-Based Spatial-Merge (PBSM) algorithm of Patel and DeWitt.
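
A compact sketch of the internal-memory plane-sweep at the heart of the filter step, assuming axis-aligned bounding rectangles given as (x_lo, x_hi, y_lo, y_hi) tuples; the external-memory distribution-sweeping layer and the optimizations of the full SSSJ algorithm are not shown.

def sweep_join(rects_a, rects_b):
    # Sweep along x over both inputs and report pairs whose rectangles intersect.
    events = [(r[0], 'A', r) for r in rects_a] + [(r[0], 'B', r) for r in rects_b]
    events.sort()
    active_a, active_b, result = [], [], []
    for x, side, r in events:
        # Drop rectangles whose x-extent ended before the sweep line.
        active_a = [s for s in active_a if s[1] >= x]
        active_b = [s for s in active_b if s[1] >= x]
        opposite = active_b if side == 'A' else active_a
        for s in opposite:
            if r[2] <= s[3] and s[2] <= r[3]:      # y-intervals overlap
                result.append((r, s) if side == 'A' else (s, r))
        (active_a if side == 'A' else active_b).append(r)
    return result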

Journal ArticleDOI
TL;DR: In this article, the authors analyze the performance of manufacturing systems in terms of reliability and productivity, product quality, capacity scalability, and cost for different system configurations assuming known machine level reliability and process capability.

Proceedings ArticleDOI
28 Jul 1998
TL;DR: WebOS provides the basic operating system services needed to build applications that are geographically distributed, highly available, incrementally scalable and dynamically reconfigurable; it is used to implement Rent-A-Server, which provides dynamic replication of overloaded Web services across the wide area in response to client demands.
Abstract: Demonstrates the power of providing a common set of operating system services to wide-area applications, including mechanisms for naming, persistent storage, remote process execution, resource management, authentication and security. On a single machine, application developers can rely on the local operating system to provide these abstractions. In the wide area, however, application developers are forced to build these abstractions themselves or to do without. This ad-hoc approach often results in individual programmers implementing non-optimal solutions, wasting both programmer effort and system resources. To address these problems, we are building a system, WebOS, that provides the basic operating systems services needed to build applications that are geographically distributed, highly available, incrementally scalable and dynamically reconfigurable. Experience with a number of applications developed under WebOS indicates that it simplifies system development and improves resource utilization. In particular, we use WebOS to implement Rent-A-Server to provide dynamic replication of overloaded Web services across the wide area in response to client demands.

Journal ArticleDOI
TL;DR: Traditional scheduling algorithms are adapted to the DNS, new policies are proposed, and their impact under different scenarios is examined.
Abstract: A distributed multiserver Web site can provide the scalability necessary to keep up with growing client demand at popular sites. Load balancing of these distributed Web-server systems, consisting of multiple, homogeneous Web servers for document retrieval and a Domain Name Server (DNS) for address resolution, opens interesting new problems. In this paper, we investigate the effects of using a more active DNS which, as an atypical centralized scheduler, applies some scheduling strategy in routing the requests to the most suitable Web server. Unlike traditional parallel/distributed systems in which a centralized scheduler has full control of the system, the DNS controls only a very small fraction of the requests reaching the multiserver Web site. This peculiarity, especially in the presence of highly skewed load, makes it very difficult to achieve acceptable load balancing and avoid overloading some Web servers. This paper adapts traditional scheduling algorithms to the DNS, proposes new policies, and examines their impact under different scenarios. Extensive simulation results show the advantage of strategies that make scheduling decisions on the basis of the domain that originates the client requests and limited server state information (e.g., whether a server is overloaded or not). An initially unexpected result is that using detailed server information, especially based on history, does not seem useful in predicting the future load and can often lead to degraded performance.
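
As an illustration of scheduling with only domain information and a binary server-state flag (a simplification, not one of the paper's exact policies), the sketch below charges each DNS answer with the requesting domain's estimated request rate and avoids servers that have reported themselves overloaded. All names and the rate estimates are assumptions of this sketch.

class DnsScheduler:
    def __init__(self, servers, domain_rates):
        self.assigned_weight = {s: 0.0 for s in servers}   # hidden load bound to each server
        self.overloaded = {s: False for s in servers}      # flag fed back by the servers
        self.domain_rates = domain_rates                   # estimated requests/s per client domain

    def resolve(self, client_domain):
        rate = self.domain_rates.get(client_domain, 1.0)
        candidates = [s for s, over in self.overloaded.items() if not over] \
                     or list(self.assigned_weight)
        # One DNS answer binds the whole domain to a server for the TTL period,
        # so charge that server with the domain's expected request rate.
        target = min(candidates, key=lambda s: self.assigned_weight[s])
        self.assigned_weight[target] += rate
        return target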

Journal Article
TL;DR: The QoS-aware resource management model QualMan is presented as a loadable middleware, together with its design, implementation, results, tradeoffs, and experiences, which show that the resource model is very scalable to different types of shared resources and platforms and allows a uniform view to embed QoS inside distributed resource management.
Abstract: The ability of the operating system and network infrastructure to provide end-to-end quality of service (QoS) guarantees in multimedia is a major acceptance factor for various distributed multimedia applications due to the temporal audio-visual and sensory information in these applications. Our constraints on the end-to-end guarantees are (1) QoS should be achieved on a general-purpose platform with a real-time extension support, and (2) QoS should be application-controllable. In order to achieve the user acceptance requirements and to satisfy our constraints on the multimedia systems, we need a QoS-compliant resource management which supports QoS negotiation, admission and reservation mechanisms in an integrated and accessible way. In this paper we present a new resource model and a time-variant QoS management, which are the major components of the QoS-compliant resource management. The resource model incorporates the resource scheduler and a new component, the resource broker, which provides negotiation, admission and reservation capabilities for sharing resources such as CPU, network or memory corresponding to the requested QoS. The resource brokers are intermediary resource managers; when combined with the resource schedulers, they provide a more predictable and finer granularity control of resources to the applications during the end-to-end multimedia communication than what is available in current general-purpose networked systems. Furthermore, this paper presents the QoS-aware resource management model called QualMan, as a loadable middleware, its design, implementation, results, tradeoffs, and experiences. There are trade-offs when comparing our QualMan QoS-aware resource management in middleware with other QoS-supporting resource management solutions in kernel space. The advantage of QualMan is that it is flexible and scalable on a general-purpose workstation or PC. The disadvantage is the lack of very fine QoS granularity, which is only possible if support is built inside the kernel. Our overall experience with the QualMan design and experiments shows that (1) the resource model in the QualMan design is very scalable to different types of shared resources and platforms, and it allows a uniform view to embed the QoS inside distributed resource management; (2) the design and implementation of QualMan is easily portable; (3) good results for QoS guarantees such as jitter, synchronization skew, and end-to-end delay can be achieved for various distributed multimedia applications.

Patent
04 Nov 1998
TL;DR: In this article, the storage cell controller responds to signals received from the workstations, and oversees the operation of the storage cells to facilitate the storage of converted audio and video signals in at least one file that can be simultaneously accessed by one or more application programs.
Abstract: A networked multimedia system (10) comprises a plurality of networks (40) and at least one storage server (100). A signal path interconnects the workstations (12) and the storage server (100). Each workstation (40) includes video and audio reproduction capabilities, as well as video and audio capture capabilities. Any given storage server (100) comprises a set of storage cells (120) that operate under the direction of a storage cell manager (160). A storage cell (120) may include one or more encoding (132) and transcoding converters configured to convert or transform audio and video signals originating at a workstation into a form suitable for storage. A storage cell (120) may further include one or more decoding converters (134) configured to convert stored signals into a form suitable for audio and video signal reproduction at a workstation. Each storage cell (120) additionally includes at least one storage device (150) and storage device controller (152) capable of storing, for later retrieval, signals generated by one or more converters. The storage cell controller responds to signals received from the workstations (40), and oversees the operation of the storage cells to facilitate the storage of converted audio and video signals in at least one file that can be simultaneously accessed by one or more application programs executing on one or more workstations.

Book
01 Feb 1998
TL;DR: This comprehensive new text from author Kai Hwang covers four important aspects of parallel and distributed computing — principles, technology, architecture, and programming — and can be used for several upper-level courses.
Abstract: From the Publisher: This comprehensive new text from author Kai Hwang covers four important aspects of parallel and distributed computing — principles, technology, architecture, and programming — and can be used for several upper-level courses.

Proceedings ArticleDOI
01 Jan 1998
TL;DR: The results demonstrate that the proposed proxy-server-based, network-conscious approach provides an effective and scalable solution to the problem of end-to-end video delivery over wide-area networks.
Abstract: In this paper we present a novel network-conscious approach to the problem of end-to-end video delivery over wide-area networks using proxy servers situated between local-area networks (LANs) and a backbone wide-area network (WAN). We develop a novel and effective video delivery technique called video staging via intelligent utilization of the disk bandwidth and storage space available at proxy servers. We also design several video staging methods and evaluate their effectiveness in reducing the backbone WAN bandwidth requirement. Our results demonstrate that the proposed proxy-server-based, network-conscious approach provides an effective and scalable solution to the problem of end-to-end video delivery over wide-area networks.

Proceedings ArticleDOI
18 May 1998
TL;DR: A new switch scheduling algorithm called joined preferred matching (JPM) is proposed that improves Prabhakar and McKeown's results in two aspects and lays the theoretical foundation for designing scalable high-speed CIOQ switches that can provide the same throughput and QoS as OQ switches, but require lower-speed internal memory.
Abstract: Combined input-output queueing switches (CIOQ) have better scaling properties than output queueing (OQ) switches. However, a CIOQ switch may have lower switch throughput and, more importantly, it is difficult to control delay in a CIOQ switch due to the existence of multiple queueing points. In this paper, we study the following problem: can a CIOQ switch be designed to behave identically to an OQ switch? B. Prabhakar and N. McKeown (1997) proposed an algorithm such that a CIOQ switch with an internal speedup of 4 can behave identically to an OQ switch with FIFO as the output queueing discipline. In this paper, we propose a new switch scheduling algorithm called joined preferred matching (JPM) that improves Prabhakar and McKeown's results in two aspects. First, with JPM, the internal speedup needed for a CIOQ switch to achieve exact emulation of an OQ switch is only 2 instead of 4. Second, the result applies to OQ switches that employ a general class of output service disciplines, including FIFO and various fair queueing algorithms. This result lays the theoretical foundation for designing scalable high-speed CIOQ switches that can provide the same throughput and QoS as OQ switches, but require lower-speed internal memory.

Book ChapterDOI
24 Sep 1998
TL;DR: The Arrow distributed directory protocol is devised: a scalable and local mechanism for ensuring mutually exclusive access to mobile objects, with communication complexity optimal within a factor of (1+MST-stretch(G))/2, where MST-stretch(G) is the "minimum spanning tree stretch" of the underlying network.
Abstract: Most practical techniques for locating remote objects in a distributed system suffer from problems of scalability and locality of reference. We have devised the Arrow distributed directory protocol, a scalable and local mechanism for ensuring mutually exclusive access to mobile objects. This directory has communication complexity optimal within a factor of (1+MST-stretch(G))/2, where MST-stretch(G) is the "minimum spanning tree stretch" of the underlying network.
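
A rough sketch of the arrow mechanism on a spanning tree, written here as a local simulation rather than the distributed message-passing protocol: every node keeps one arrow pointing toward the current tail of the request queue, and a find flips each arrow it crosses so that later requests queue behind it. Node names and structure are invented for the example.

class Node:
    def __init__(self, name):
        self.name = name
        self.arrow = self          # a node whose arrow points to itself is the queue tail

def find(requester):
    prev, node = requester, requester.arrow
    requester.arrow = requester    # the requester becomes the new tail
    while node.arrow is not node:
        nxt = node.arrow
        node.arrow = prev          # flip the arrow back toward the requester
        prev, node = node, nxt
    node.arrow = prev              # old tail: the object will be forwarded to the requester
    return node

# Example: path a - b - c with the object initially at c.
a, b, c = Node("a"), Node("b"), Node("c")
a.arrow, b.arrow = b, c
print(find(a).name)                # -> "c"; afterwards every arrow points toward a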

Proceedings ArticleDOI
26 May 1998
TL;DR: New policies are proposed, called adaptive TTL algorithms, that take into account both the uneven distribution of client request rates and the heterogeneity of Web servers to adaptively set the time-to-live (TTL) value for each address mapping request.
Abstract: With ever increasing Web traffic, a distributed multi-server Web site can provide scalability and flexibility to cope with growing client demands. Load balancing algorithms to spread the requests across multiple Web servers are crucial to achieving this scalability. Various domain name server (DNS) based schedulers have been proposed in the literature, mainly for multiple homogeneous servers. The presence of heterogeneous Web servers not only increases the complexity of the DNS scheduling problem, but also makes previously proposed algorithms for homogeneous distributed systems not directly applicable. This leads us to propose new policies, called adaptive TTL algorithms, that take into account both the uneven distribution of client request rates and heterogeneity of Web servers to adaptively set the time-to-live (TTL) value for each address mapping request. Extensive simulation results show that these strategies are robust and effective in balancing load among geographically distributed heterogeneous Web servers.
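
A hedged sketch of the adaptive-TTL idea: the TTL returned with each address mapping shrinks for domains that generate many requests and grows for more powerful servers, so a single DNS answer never binds too much hidden load to a weak server. The constants and the exact formula below are illustrative, not the paper's.

BASE_TTL = 240  # seconds

def adaptive_ttl(domain_request_rate, server_capacity, mean_rate, mean_capacity):
    ttl = BASE_TTL
    if domain_request_rate > 0:
        ttl *= mean_rate / domain_request_rate    # hot domains get shorter TTLs
    ttl *= server_capacity / mean_capacity        # faster servers keep mappings longer
    return max(30, min(int(ttl), 3600))           # clamp to sane bounds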

Journal ArticleDOI
TL;DR: Preliminary work to address concerns over performance, scalability and stability in multi-agent systems is presented; in particular, it investigates the performance and scalability of a multi-agent model the authors have developed.
Abstract: Much has been published on the functional properties of multi-agent systems (MASs) including their co-ordination rationality and knowledge modelling. However, an important research area which has so far received only scant attention covers the non-functional properties of MASs which include performance, scalability and stability issues — clearly these become increasingly important as the MAS field matures, and as more practical MASs become operational. An understanding of how to evaluate and assess such non-functional properties, and hence how to improve on them by altering the underlying MAS design, is gradually emerging as a pressing concern. This paper presents preliminary work to address such concerns; particularly, it investigates the performance and scalability of a multi-agent model we have developed. Firstly, this paper defines performance, scalability and stability within the context of multi-agent applications. Following this, we describe a multi-agent model that we later use to illustrate our first attempts at evolving a procedure for analysing such non-functional properties of MASs. Next, we report on our initial attempts to investigate the performance and scalability of the multi-agent model. Finally, the significance of these results in particular and of such investigations in general is discussed.

Proceedings ArticleDOI
23 Jun 1998
TL;DR: A detailed description of the MSCS architecture and the design decisions that have driven the implementation of the service are provided, and features added to make it easier to implement and manage fault-tolerant applications on M SCS are described.
Abstract: Microsoft Cluster Service (MSCS) extends the Windows NT operating system to support high-availability services. The goal is to offer an execution environment where off-the-shelf server applications can continue to operate, even in the presence of node failures. Later versions of MSCS will provide scalability via a node and application management system which allows applications to scale to hundreds of nodes. In this paper we provide a detailed description of the MSCS architecture and the design decisions that have driven the implementation of the service. The paper also describes how some major applications use the MSCS features, and describes features added to make it easier to implement and manage fault-tolerant applications on MSCS.

Proceedings ArticleDOI
26 May 1998
TL;DR: For a trace based workload of Web accesses, it is found that volumes can reduce message traffic at servers by 40% compared to a standard lease algorithm, and that volumesCan considerably reduce the peak load at servers when popular objects are modified.
Abstract: The paper introduces volume leases as a mechanism for providing cache consistency for large scale, geographically distributed networks. Volume leases are a variation of leases, which were originally designed for distributed file systems. Using trace driven simulation, we compare two new algorithms against four existing cache consistency algorithms and show that our new algorithms provide strong consistency while maintaining scalability and fault tolerance. For a trace based workload of Web accesses, we find that volumes can reduce message traffic at servers by 40% compared to a standard lease algorithm, and that volumes can considerably reduce the peak load at servers when popular objects are modified.
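
A compact sketch of the volume-lease check on the client side: a cached object may be used only while both its long per-object lease and the short lease on its enclosing volume are unexpired, so the short volume lease bounds how long an unreachable client can delay writes at the server. The lease durations and class layout are illustrative assumptions.

import time

VOLUME_LEASE = 15        # seconds, short
OBJECT_LEASE = 3600      # seconds, long

class ClientCache:
    def __init__(self):
        self.volume_expiry = {}   # volume_id -> absolute expiry time
        self.object_expiry = {}   # object_id -> absolute expiry time
        self.data = {}

    def usable(self, volume_id, object_id, now=None):
        # The cached copy is valid only if both leases are still in force.
        now = now if now is not None else time.time()
        return (self.volume_expiry.get(volume_id, 0) > now and
                self.object_expiry.get(object_id, 0) > now)

    def renew(self, volume_id, object_id, value, now=None):
        # In the protocol this is a round trip to the server; modelled locally here.
        now = now if now is not None else time.time()
        self.volume_expiry[volume_id] = now + VOLUME_LEASE
        self.object_expiry[object_id] = now + OBJECT_LEASE
        self.data[object_id] = value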

Patent
08 Sep 1998
TL;DR: The Component Transaction Server (CTS) as discussed by the authors provides a framework for deploying the middle-tier logic of distributed component-based applications, which simplifies the creation and administration of Internet applications that service thousands of simultaneous clients.
Abstract: A Component Transaction Server (CTS) is described, which provides a framework for deploying the middle-tier logic of distributed component-based applications. The CTS simplifies the creation and administration of Internet applications that service thousands of simultaneous clients. The CTS components, which execute on the middle-tier between end-user client applications and remote databases, provide efficient management of client sessions, security, threads, third-tier database connections, and transaction flow, without requiring specialized knowledge on the part of the component developer. The system's scalability and platform independence allow one to develop an application on inexpensive uniprocessor machines, then deploy it on an enterprise-grade multiprocessor server. In its Result Set module, the CTS provides tabular result sets, thus making the environment very desirable for business applications. In most component-based systems, a component interface returns an object. CTS components can return either an object or a collection of objects called a "result set." The format of a result set is based on the standard ODBC result set, and it is roughly equivalent to a database cursor. Because they return a result set, CTS components are much simpler and more efficient to work with. In this fashion, graphical user interface (GUI) development with CTS is nearly identical to traditional two-tier systems.