
Showing papers on "Shared resource published in 2000"


Journal ArticleDOI
TL;DR: The Grid Security Infrastructure (GSI) offers secure single sign-ons and preserves site control over access policies and local security, and provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications.
Abstract: Participants in virtual organizations commonly need to share resources such as data archives, computer cycles, and networks, resources usually available only with restrictions based on the requested resource's nature and the user's identity. Thus, any sharing mechanism must have the ability to authenticate the user's identity and determine whether the user is authorized to request the resource. Virtual organizations tend to be fluid, however, so authentication mechanisms must be flexible and lightweight, allowing administrators to quickly establish and change resource-sharing arrangements. Nevertheless, because virtual organizations complement rather than replace existing institutions, sharing mechanisms cannot change local policies and must allow individual institutions to maintain control over their own resources. Our group has created and deployed an authentication and authorization infrastructure that meets these requirements: the Grid Security Infrastructure (I. Foster et al., 1998). GSI offers secure single sign-ons and preserves site control over access policies and local security. It provides its own versions of common applications, such as FTP and remote login, and a programming interface for creating secure applications. Dozens of supercomputers and storage systems already use GSI, a level of acceptance reached by few other security infrastructures.

307 citations


07 Feb 2000
TL;DR: An abstract architecture for market-based cluster resource management based on the idea of proportional resource sharing of basic computing resources is described and a 32-node (64 processors) prototype system that provides a market for time-shared CPU usage for sequential and parallel programs is implemented.
Abstract: Enabling technologies in high speed communication and global process scheduling have pushed clusters of computers into the mainstream as general-purpose high-performance computing systems. More generality, however, implies more sharing and this raises new questions in the area of cluster resource management. In particular, in systems where the aggregate demand for computing resources can exceed the aggregate supply, how to allocate resources amongst competing applications is an important problem. Traditional solutions to this problem have focused mainly on global optimization with respect to system-centric performance metrics, metrics which ignore higher level user intent. In this paper, we propose an alternative market-based approach based on the notion of a computational economy which optimizes for user value. Starting with fundamental requirements, we describe an abstract architecture for market-based cluster resource management based on the idea of proportional resource sharing of basic computing resources. Using this architecture, we have implemented a 32-node (64 processors) prototype system that provides a market for time-shared CPU usage for sequential and parallel programs. To begin evaluating our ideas, we are currently in the process of studying how users respond to the system by collecting data on real day-to-day usage of the cluster.
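
A minimal sketch of the proportional-sharing idea behind this market-based approach: each user's allocation of a divisible resource is proportional to the "tickets" it holds (a stand-in for bids in the computational economy; the name is not from the paper). This is a generic illustration, not the authors' prototype.

```python
# Sketch of proportional-share allocation: each user receives a fraction of a
# divisible resource (e.g. CPU time) proportional to their bid. Illustrative only.

def proportional_share(bids, capacity):
    """Split `capacity` among users in proportion to their bids."""
    total = sum(bids.values())
    if total == 0:
        return {user: 0.0 for user in bids}
    return {user: capacity * bid / total for user, bid in bids.items()}

if __name__ == "__main__":
    # Three users bidding for 64 processors' worth of CPU time.
    print(proportional_share({"alice": 100, "bob": 50, "carol": 50}, capacity=64))
    # {'alice': 32.0, 'bob': 16.0, 'carol': 16.0}
```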

174 citations


Patent
Adi Ofer1
28 Nov 2000
TL;DR: In this paper, the authors propose a cooperative lock override procedure for managing a shared resource in a data processing system, where the detecting processor confirms that the failing processor is the lockholder and passes the lock to the next requestor in the queue.
Abstract: Queued lock services for managing a shared resource in a data processing system include a cooperative lock override procedure. On detecting a protocol failure by another processor, the detecting processor confirms that the failing processor is the lockholder and passes the lock to the next requestor in the queue.
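
The queued-lock behaviour with cooperative override can be pictured with a small sketch: requestors queue in FIFO order, and a processor that detects a protocol failure first confirms that the failed processor is the current holder before passing the lock on. Class and method names below are illustrative, not taken from the patent.

```python
from collections import deque

class QueuedLock:
    """Toy queued lock with a cooperative lock override path (illustrative)."""

    def __init__(self):
        self.holder = None
        self.queue = deque()

    def request(self, processor):
        # First requestor on an idle lock becomes the holder; others wait in FIFO order.
        if self.holder is None:
            self.holder = processor
        else:
            self.queue.append(processor)

    def release(self, processor):
        if processor == self.holder:
            self.holder = self.queue.popleft() if self.queue else None

    def override_failed(self, failed):
        # Cooperative override: confirm the failed processor actually holds the
        # lock, then pass the lock to the next requestor in the queue.
        if failed == self.holder:
            self.holder = self.queue.popleft() if self.queue else None
            return True
        return False

lock = QueuedLock()
for cpu in ("cpu0", "cpu1", "cpu2"):
    lock.request(cpu)
lock.override_failed("cpu0")
print(lock.holder)  # cpu1 now holds the lock
```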

148 citations


01 Jan 2000
TL;DR: The new features of the protocol, focusing on the security enhancements, integrated locking support, changes to fully support Windows file sharing semantics, support for high performance data sharing, and the design points that enhance performance on the Internet are described.
Abstract: The Network File System (NFS) Version 4 is a new distributed file system similar to previous versions of NFS in its straightforward design, simplified error recovery, and independence of transport protocols and operating systems for file access in a heterogeneous network. Unlike earlier versions of NFS, the new protocol integrates file locking, strong security, operation coalescing, and delegation capabilities to enhance client performance for narrow data sharing applications on high-bandwidth networks. Locking and delegation make NFS stateful, but simplicity of design is retained through well-defined recovery semantics in the face of client and server failures and network partitions. This paper describes the new features of the protocol, focusing on the security enhancements, integrated locking support, changes to fully support Windows file sharing semantics, support for high performance data sharing, and the design points that enhance performance on the Internet. We describe applications of NFS Version 4. Finally, we describe areas for future work.

118 citations


Patent
Gary J. Dennis1
27 Jan 2000
TL;DR: In this paper, the authors propose an approach to publish content associated with an electronic file attached to an electronic mail message by executing instructions contained in the electronic mail attachment and accessing the content at a remote computer server identified by the attached file.
Abstract: Publishing content associated with an electronic file attached to an electronic mail message by executing instructions contained in the electronic mail attachment and accessing the content at a remote computer server identified by the attached file. The attached file includes computer-executable instructions, such as a computer program or script, which include an identifier for a remote server connected to a distributed computer network. This identified remote server typically hosts a web site containing content intended for viewing by the recipient of the electronic mail message. In response to launching the attached file of the electronic mail message with a viewer program, a browser program can be opened to enable the recipient to view the content of the identified remote server, typically a web site on an intranet or the global Internet. This supports the communication of electronic content by using an electronic mail message to transport an electronic file attachment having instructions that, when executed by the recipient's computer, enable the recipient to view the electronic content by accessing a server computer connected to distributed computer network.
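
Stripped to its essentials, the mechanism is an executable attachment that carries a server identifier and, when launched, opens the recipient's browser on that server's content. The short sketch below only illustrates that flow; the URL is a hypothetical placeholder and nothing here reflects the patented implementation itself.

```python
# Sketch: an e-mail "attachment" that is really a small script carrying the
# identifier of a remote server; launching it opens a browser on the published
# content. The URL below is a made-up placeholder.
import webbrowser

CONTENT_URL = "http://intranet.example.com/announcement"  # hypothetical server

def view_published_content(url=CONTENT_URL):
    # Opening the URL stands in for "accessing the content at a remote
    # computer server identified by the attached file".
    webbrowser.open(url)

if __name__ == "__main__":
    view_published_content()
```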

104 citations


Proceedings ArticleDOI
22 Oct 2000
TL;DR: Slice as mentioned in this paper is a new storage system architecture for high-speed networks incorporating network-attached block storage, which interposes a request switching filter along each client's network path to the storage service (e.g., in a network adapter or switch).
Abstract: This paper explores interposed request routing in Slice, a new storage system architecture for high-speed networks incorporating network-attached block storage. Slice interposes a request switching filter - called a µproxy - along each client's network path to the storage service (e.g., in a network adapter or switch). The µproxy intercepts request traffic and distributes it across a server ensemble. We propose request routing schemes for I/O and file service traffic, and explore their effect on service structure. The Slice prototype uses a packet filter µproxy to virtualize the standard Network File System (NFS) protocol, presenting to NFS clients a unified shared file volume with scalable bandwidth and capacity. Experimental results from the industry-standard SPECsfs97 workload demonstrate that the architecture enables construction of powerful network-attached storage services by aggregating cost-effective components on a switched Gigabit Ethernet LAN.
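
The core of interposed request routing can be sketched as a small filter that inspects each request and forwards it to one member of a server ensemble, for example by hashing the file handle so requests for the same object always reach the same server. This is a generic illustration of the idea, not the Slice µproxy itself.

```python
import hashlib

class RequestRouter:
    """Toy stand-in for a request-switching filter in front of a server ensemble."""

    def __init__(self, servers):
        self.servers = servers

    def route(self, file_handle):
        # Hash the file handle so all traffic for one object lands on the same
        # backend; a real scheme may also split traffic by request type or offset.
        digest = hashlib.sha1(file_handle.encode()).hexdigest()
        return self.servers[int(digest, 16) % len(self.servers)]

router = RequestRouter(["nfs-a", "nfs-b", "nfs-c"])
print(router.route("/vol0/home/alice/data.bin"))
print(router.route("/vol0/home/bob/notes.txt"))
```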

101 citations


Patent
Curtis Priem1, Rick M. Iwamoto1
17 May 2000
TL;DR: In this paper, a process of coordinating access to a shared resource by a plurality of execution units is provided, where channel control units are used to coordinate access to the shared resource.
Abstract: A process of coordinating access to a shared resource by a plurality of execution units is provided. Channel control units are used to coordinate access to a shared resource. Each channel control unit reads semaphore values of a semaphore storage unit. In response to synchronization commands and semaphore values, the channel control unit manages the flow of execution instructions to the execution units in order to manage access to the shared resource.
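
A rough sketch of semaphore-gated dispatch: a channel control unit consults a shared semaphore and only lets an execution unit act on the shared resource while it holds that semaphore. The threading model and names are illustrative, not taken from the patent.

```python
import threading

class ChannelControlUnit:
    """Toy channel control unit gating access to one shared resource (illustrative)."""

    def __init__(self, semaphore):
        self.semaphore = semaphore  # stands in for the semaphore storage unit

    def run(self, execution_unit, instructions):
        for instruction in instructions:
            # Hold the semaphore while the execution unit touches the shared
            # resource; release it once the instruction completes.
            with self.semaphore:
                execution_unit(instruction)

shared_semaphore = threading.Semaphore(1)
log = []
ccu0, ccu1 = ChannelControlUnit(shared_semaphore), ChannelControlUnit(shared_semaphore)
t0 = threading.Thread(target=ccu0.run, args=(log.append, ["a1", "a2"]))
t1 = threading.Thread(target=ccu1.run, args=(log.append, ["b1", "b2"]))
t0.start(); t1.start(); t0.join(); t1.join()
print(log)  # every instruction ran with exclusive access to the shared resource
```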

100 citations


Patent
24 Jan 2000
TL;DR: In this paper, the authors describe a streaming multimedia rendering system having a network client and a network server that form part of a hyperlink web such as the Internet, where each resource specifier designates a transport protocol.
Abstract: The invention includes a streaming multimedia rendering system having a network client and a network server that form part of a hyperlink web such as the Internet. In accordance with the invention, a hyperlink to multimedia content is actually an indirect link to a reference file. The reference file contains a plurality of different resource specifiers and a preferred order for attempting communications using the resource specifiers. Each resource specifier designates a transport protocol. A streaming data client opens the resource file in response to activation of a hyperlink to the resource file. In response to the resource specifiers contained in the resource file, the network data client repeatedly attempts to establish a streaming data connection using the different resource specifiers, in the preferred order specified in the reference file, or in the preferred order specified by a file referenced by the reference file, until a streaming data connection is successfully established. Each attempt with a different resource specifier uses the transport protocol designated by that different resource specifier. Different types of protocol specifiers are available. Some of the protocol specifiers override configuration settings made at the network data client relating to which transport protocols are permitted.
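
The fallback behaviour is essentially a loop over the resource specifiers in their preferred order, attempting a connection with each specifier's transport protocol until one succeeds. The specifiers and connect logic below are invented placeholders, not the patented reference-file format.

```python
# Sketch of honouring a reference file's preferred order of resource specifiers.
# Protocol names, URLs, and the connect test are placeholders for illustration.

REFERENCE_FILE = [
    "rtsp://media.example.com/clip",   # preferred specifier (hypothetical)
    "http://media.example.com/clip",   # fallback over a different transport
]

def try_connect(specifier):
    """Pretend to open a streaming connection; here only HTTP 'succeeds'."""
    return specifier.startswith("http://")

def open_stream(specifiers):
    for spec in specifiers:            # attempt in the preferred order
        if try_connect(spec):
            return spec                # first specifier that connects wins
    raise ConnectionError("no resource specifier produced a connection")

print(open_stream(REFERENCE_FILE))     # http://media.example.com/clip
```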

83 citations


Journal ArticleDOI
TL;DR: This work presents an architecture for network-authenticated disks that implements distributed file systems without file servers or encryption and provides network clients with direct network access to remote storage.
Abstract: We present an architecture for network-authenticated disks that implements distributed file systems without file servers or encryption. Our system provides network clients with direct network access to remote storage.

82 citations


Journal ArticleDOI
TL;DR: This paper presents a system called Cellular Disco, which effectively turns a large-scale shared-memory multiprocessor into a virtual cluster that supports fault containment and heterogeneity, while avoiding operating system scalability bottlenecks and can manage the CPU and memory resources of the machine significantly better than the hardware partitioning approach.
Abstract: Despite the fact that large-scale shared-memory multiprocessors have been commercially available for several years, system software that fully utilizes all their features is still not available, mostly due to the complexity and cost of making the required changes to the operating system. A recently proposed approach, called Disco, substantially reduces this development cost by using a virtual machine monitor that leverages the existing operating system technology. In this paper we present a system called Cellular Disco that extends the Disco work to provide all the advantages of the hardware partitioning and scalable operating system approaches. We argue that Cellular Disco can achieve these benefits at only a small fraction of the development cost of modifying the operating system. Cellular Disco effectively turns a large-scale shared-memory multiprocessor into a virtual cluster that supports fault containment and heterogeneity, while avoiding operating system scalability bottlenecks. Yet at the same time, Cellular Disco preserves the benefits of a shared-memory multiprocessor by implementing dynamic, fine-grained resource sharing, and by allowing users to overcommit resources such as processors and memory. This hybrid approach requires a scalable resource manager that makes local decisions with limited information while still providing good global performance and fault containment. In this paper we describe our experience with a Cellular Disco prototype on a 32-processor SGI Origin 2000 system. We show that the execution time penalty for this approach is low, typically within 10% of the best available commercial operating system for most workloads, and that it can manage the CPU and memory resources of the machine significantly better than the hardware partitioning approach.

66 citations


Patent
11 May 2000
TL;DR: In this paper, a shared resource manager circuit for use in conjunction with multiple processors to manage allocation and deallocation of shared resources is presented, which allocates and deallocates software resources for utilization by the processors.
Abstract: A shared resource manager circuit for use in conjunction with multiple processors to manage allocation and deallocation of a shared resource. The shared resource manager allocates and deallocates software resources for utilization by the processors in response to allocation and deallocation requests by the processors. The shared resource manager may include a bus arbitrator, as required in a particular application, for interfacing with a system bus coupled to the processors to provide mutual exclusion in access to the shared resource manager among the multiple processors. The shared resource manager may manage a memory block (FIFO queue) with multiple resource control blocks. A system may advantageously apply a plurality of shared resource managers coupled to a plurality of processors via a common interface bus. Each shared resource manager device may then be associated with management of one particular shared resource.
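
The allocate/deallocate behaviour can be sketched as a manager that hands out resource control blocks from a FIFO free list and takes them back on deallocation, with a lock standing in for the bus arbitration that serializes requests from multiple processors. All names below are illustrative.

```python
import threading
from collections import deque

class SharedResourceManager:
    """Toy software analogue of the resource-manager circuit (illustrative only)."""

    def __init__(self, num_blocks):
        # FIFO queue of free resource control block identifiers.
        self.free_blocks = deque(range(num_blocks))
        # The lock plays the role of the bus arbitrator that gives processors
        # mutually exclusive access to the manager.
        self.lock = threading.Lock()

    def allocate(self):
        with self.lock:
            return self.free_blocks.popleft() if self.free_blocks else None

    def deallocate(self, block_id):
        with self.lock:
            self.free_blocks.append(block_id)

manager = SharedResourceManager(num_blocks=4)
a, b = manager.allocate(), manager.allocate()
manager.deallocate(a)
print(a, b, list(manager.free_blocks))  # 0 1 [2, 3, 0]
```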

01 Jan 2000
TL;DR: This dissertation defines new protocols for concurrency control and synchronized access to files and data that exploit the SAN environment to improve performance and identifies that the architecture of the direct access file system invalidates traditional protocols for providing operational guarantees when networks fail and authenticating the identity and actions of computers and their applications.
Abstract: Many improvements in computer systems are initiated by new developments in the hardware on which these systems run. Currently, hardware for data storage is experiencing changes in connectivity, access semantics, and data rates because of storage area networks (SANs), which allow many computers to have shared access to storage devices over a high-speed network. The advent of SANs makes it possible to implement a high performance distributed file system by allowing client computers to obtain data directly from storage devices, rather than accessing data through a server that performs reads and writes on their behalf. However, a file system design that allows direct client access to data significantly changes both the performance and correctness of traditional protocols for data management. In this dissertation we define new protocols for concurrency control and synchronized access to files and data that exploit the SAN environment to improve performance. We also identify that the architecture of our direct access file system invalidates traditional protocols for providing operational guarantees when networks fail and authenticating the identity and actions of computers and their applications. We develop decentralized and distributed protocols for safe operation and authentication that are correct for the SAN environment. Our protocols make possible the implementation of a distributed file system that employs direct access to storage from clients to achieve high performance.

Patent
26 Dec 2000
TL;DR: In this paper, a server-side recycle bin system for retaining computer files and information is described, which consists of a local computer system, and a server including a server side recycle bin.
Abstract: A server-side recycle bin system for retaining computer files and information is disclosed. The system comprises a local computer system, and a server including a server-side recycle bin. One or more persistent storage devices, providing the files and directories to be protected, are present either as part of the local computer system or as part of the server. The local computer system and the server may be connected via a wide area computer network, a local area network, the Internet, or any other method or combination of methods. A file manager application running on the local computer system interacts with a file serving application on the server such that there is generated a retained file in the server-side recycle bin.
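
The retained-file behaviour can be sketched as a server-side delete that moves files into a recycle-bin directory instead of unlinking them. The paths below are hypothetical.

```python
import shutil
from pathlib import Path

RECYCLE_BIN = Path("/srv/recycle-bin")      # hypothetical server-side bin location

def delete_with_retention(path):
    """Retain the file in the server-side recycle bin instead of removing it."""
    RECYCLE_BIN.mkdir(parents=True, exist_ok=True)
    target = RECYCLE_BIN / Path(path).name
    shutil.move(str(path), str(target))     # the file becomes a "retained file"
    return target

# Usage on the server (assuming the file exists):
# retained = delete_with_retention("/srv/share/report.doc")
```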

Patent
29 Sep 2000
TL;DR: In this article, the authors propose a method and computer system for resolving simultaneous requests from multiple processing units to load from or store to the same shared resource, where the colliding requests come from two different processing units, and the first processing unit is allowed access to the structure in a predetermined number of sequential collisions.
Abstract: A method and computer system for resolving simultaneous requests from multiple processing units to load from or store to the same shared resource. When the colliding requests come from two different processing units, the first processing unit is allowed access to the structure in a predetermined number of sequential collisions and the second device is allowed access to the structure in a following number of sequential collisions. The shared resource can be a fill buffer, where a collision involves attempts to simultaneously store in the fill buffer. The shared resource can be a shared write back buffer, where a collision involves attempts to simultaneously store in the shared write back buffer. The shared resource can be a data cache unit, where a collision involves attempts to simultaneously load from a same data space in the data cache unit. A collision can also involve an attempt to load and store from a same resource and in such case the device that attempts to load is favored over the device that attempts to store.
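
The arbitration policy can be pictured with a small sketch: when two units collide on the same kind of access, one unit wins for a fixed run of sequential collisions and then the other takes over, and when a load collides with a store the loading unit wins. The burst length and names are invented for illustration.

```python
class CollisionArbiter:
    """Toy arbiter alternating grants between two processing units (illustrative)."""

    def __init__(self, burst=2):
        self.burst = burst     # collisions granted to one unit before switching
        self.count = 0
        self.favored = 0       # index of the unit currently being favored

    def resolve(self, req_a, req_b):
        # A load colliding with a store: the unit attempting the load is favored.
        if req_a != req_b:
            return 0 if req_a == "load" else 1
        # Same kind of request: alternate in bursts of `burst` sequential collisions.
        winner = self.favored
        self.count += 1
        if self.count == self.burst:
            self.count, self.favored = 0, 1 - self.favored
        return winner

arbiter = CollisionArbiter(burst=2)
print([arbiter.resolve("store", "store") for _ in range(5)])  # [0, 0, 1, 1, 0]
print(arbiter.resolve("load", "store"))                       # 0 (load is favored)
```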

Patent
31 Mar 2000
TL;DR: In this article, a host system is provided with a shared resource (such as work queues and completion queues); multiple processors arranged to access the shared resource; and an operating system arranged to allow multiple processors to perform work on the shared resources concurrently while supporting updates.
Abstract: A host system is provided with a shared resource (such as work queues and completion queues); multiple processors arranged to access the shared resource; and an operating system arranged to allow multiple processors to perform work on the shared resource concurrently while supporting updates of the shared resource. Such an operating system may comprise a synchronization algorithm for synchronizing multiple threads of operation with a single thread so as to achieve mutual exclusion between multiple threads performing work on the shared resource and a single thread updating or changing the state of the shared resource without requiring serialization of all threads.
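
The stated goal, many threads working on the resource concurrently while a single thread updates its state without serializing everyone, maps naturally onto a readers-writer pattern. The sketch below uses that pattern purely as an illustration of the idea, not as the patented synchronization algorithm.

```python
import threading

class SharedQueueState:
    """Illustrative readers-writer pattern: many workers, one updater."""

    def __init__(self):
        self._lock = threading.Lock()
        self._workers = 0
        self._idle = threading.Condition(self._lock)

    def do_work(self, work_fn):
        # Workers register, run concurrently with each other, then unregister.
        with self._lock:
            self._workers += 1
        try:
            work_fn()
        finally:
            with self._lock:
                self._workers -= 1
                if self._workers == 0:
                    self._idle.notify_all()

    def update(self, update_fn):
        # The single updater waits until no worker is active, then runs alone.
        with self._lock:
            while self._workers > 0:
                self._idle.wait()
            update_fn()

state = SharedQueueState()
state.do_work(lambda: print("worker touched the work queue"))
state.update(lambda: print("updater changed the queue state exclusively"))
```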

Patent
05 May 2000
TL;DR: In this article, a search is initiated at the scoping level where a task to be performed is defined and proceeds in ascending order towards the root level until the resource is located, when found, a clone of each located resource is generated.
Abstract: Systems, methods and computer program products are provided for sharing resources within an Extensible Markup Language (XML) document that defines a console (i.e., a graphical user interface or GUI) for managing a plurality of application programs and tasks associated therewith. Upon receiving a user request to perform a task associated with an application program managed by a console, resource containers at each scoping level within the XML document are searched for one or more resources required to perform the task. A search is initiated at the scoping level where a task to be performed is defined and proceeds in ascending order towards the root scoping level until the resource is located. When found, a clone of each located resource is generated. The task is then performed using the clone of the resource. The clone of the resource may be discarded after the task has been performed.
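
A simplified sketch of the lookup: starting at the scoping level where the task is defined, walk upward toward the root until some container holds the named resource, then return a clone of it so the original stays untouched. The scoping levels and resource data are made up for illustration.

```python
import copy

# Hypothetical scoping levels, innermost (task) first, root last.
SCOPES = [
    {"level": "task",        "resources": {}},
    {"level": "application", "resources": {"icon": {"file": "app.gif"}}},
    {"level": "root",        "resources": {"icon": {"file": "default.gif"},
                                           "help": {"file": "help.html"}}},
]

def find_resource_clone(name, scopes=SCOPES):
    # The search ascends from the task's scoping level toward the root and
    # returns a clone of the first match; the clone can be discarded afterwards.
    for scope in scopes:
        if name in scope["resources"]:
            return copy.deepcopy(scope["resources"][name])
    return None

print(find_resource_clone("icon"))   # {'file': 'app.gif'}, found at the application level
print(find_resource_clone("help"))   # {'file': 'help.html'}, found at the root level
```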

Proceedings Article
01 Mar 2000
TL;DR: This paper describes how journaling was implemented in the Global File System (GFS), a shared-disk, cluster file system for Linux, and the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures.
Abstract: In computer systems today, speed and responsiveness are often determined by network and storage subsystem performance. Faster, more scalable networking interfaces like Fibre Channel and Gigabit Ethernet provide the scaffolding from which higher performance computer systems implementations may be constructed, but new thinking is required about how machines interact with network-enabled storage devices. In this paper we describe how we implemented journaling in the Global File System (GFS), a shared-disk, cluster file system for Linux. Our previous three papers on GFS at the Mass Storage Symposium discussed our first three GFS implementations, their performance, and the lessons learned. Our fourth paper describes, appropriately enough, the evolution of GFS version 3 to version 4, which supports journaling and recovery from client failures. In addition, GFS scalability tests extending to 8 machines accessing 8 4-disk enclosures were conducted: these tests showed good scaling. We describe the GFS cluster infrastructure, which is necessary for proper recovery from machine and disk failures in a collection of machines sharing disks using GFS. Finally, we discuss the suitability of Linux for handling the big data requirements of supercomputing centers.

Proceedings ArticleDOI
26 Mar 2000
TL;DR: Stochastic fair sharing (SFS) is proposed to carry out fair link sharing and fair sharing among virtual leased links (VLL) and the potential applications of SFS are fair and efficient resource sharing in telecommunication networks, ATM networks, virtual private networks (VPN) and integrated services or differentiated services-based IP networks.
Abstract: Virtual private networks (VPN) and link sharing are cost-effective ways of realizing corporate intranets. Corporate intranets will increasingly have to provide integrated services for voice and multimedia traffic. Although packet scheduling algorithms can be used to implement integrated services in link sharing and virtual private networks, their statistical multiplexing gains are limited. We propose a new scheme called stochastic fair sharing (SFS) to carry out fair link sharing and fair sharing among virtual leased links (VLL). In the link sharing environment, capacities allocated to different classes are adjusted dynamically as sessions arrive (or depart). The SFS admission control algorithm decides which sessions to accept and which to reject depending upon the current utilizations and provisioned capacities of the classes. SFS gives protection to classes with low session arrival rates against classes with high session arrival rates by ensuring them a low blocking probability. In the case of multi-hop VLL, capacity resizing requests are sent in the service provider's network and are admission-controlled using SFS. Our simulations indicate that using SFS, the equivalent capacity of virtual links converges to their max-min fair capacity, with a fairness index of 0.97 in extreme situations. The average signaling load of the protocol was found to be reasonable. The scheme is simple to implement, efficient, and robust. The potential applications of SFS are fair and efficient resource sharing in telecommunication networks, ATM networks, virtual private networks (VPN) and integrated services or differentiated services-based IP networks.
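
At its simplest, the admission decision compares a class's current utilization against its provisioned capacity and accepts a session only if it still fits, which is what protects lightly loaded classes from heavily loaded ones. The sketch below captures only that comparison; the class names and capacities are invented, and the dynamic capacity resizing described in the paper is omitted.

```python
class TrafficClass:
    """Toy admission control: accept a session only if it fits the provisioned capacity."""

    def __init__(self, name, provisioned_mbps):
        self.name = name
        self.provisioned = provisioned_mbps
        self.utilization = 0.0

    def admit(self, session_mbps):
        if self.utilization + session_mbps <= self.provisioned:
            self.utilization += session_mbps
            return True          # session accepted
        return False             # session blocked, protecting the other classes

voice = TrafficClass("voice", provisioned_mbps=10)
print(voice.admit(4))   # True
print(voice.admit(4))   # True
print(voice.admit(4))   # False: would exceed the provisioned capacity
```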

Patent
21 Nov 2000
TL;DR: In this paper, the authors present a method for providing a transition from a first server node to a second server node, which is useful in a computer system including at least two server nodes, each of which execute clustered server software.
Abstract: The method of the present invention is useful in a computer system including at least two server nodes, each of which executes clustered server software. The program executes a method for providing a transition from a first server node to a second server node. The method includes the steps of responding to a request for the transition and initiating a thread for effecting the transition from the first server node to the second server node. Next, the method determines whether a shared resource is owned by the second server node and, if not, calls a driver to enable functionality of the transition, which sets up shared resource access to the second server node.

Patent
Doru Calin1, Cecile Faure1, Tinlin Lee1
29 Dec 2000
TL;DR: In this paper, a flexible spectrum allocation mechanism for wireless communication systems is proposed, which enables the sharing of frequencies, between proprietary and shared frequencies, by dynamically controlling a set of adjustable thresholds for call admission and termination, resulting in flexible spectrum sharing and an increase in spectrum efficiency.
Abstract: A wireless communication system includes a communication resource with a number of Operators providing communication on the communication resource. A first portion of said communication resource is dedicated to a first Operator and a second portion of said communication resource is dedicated as a shared resource for shared use between at least two Operators within the communication system. A method of sharing a communication resource is also provided. This enables the sharing of frequencies, between proprietary and shared frequencies, by dynamically controlling a set of adjustable thresholds for call admission and termination, resulting in flexible spectrum sharing and an increase in spectrum efficiency. Such a mechanism to provide a flexible spectrum allocation is attained by moving thresholds and/or by instruction from a Central Controller.

Journal ArticleDOI
TL;DR: This paper describes a new design for the light-weight group protocols that enables such service to function transparently and shows that it is possible to establish mappings that promote resource sharing and, at the same time, minimize interference.

Patent
08 Dec 2000
TL;DR: In this paper, the authors describe a processing system and pieces of software for one or more communication networks, with middleware comprising an application programming interface cast over a data model describing quality-of-service contracts and quality of service adaptation paths specified by quality ofservice aware mobile multimedia applications using said programming interface, in order to manage quality of services and mobility-aware for managing network connections with other applications.
Abstract: The present invention generally relates to the field of mobile multimedia middleware, quality-of-service, shared resource reservation mechanisms, distributed processing systems, handheld computers, computer networking and wireless communication. Particularly, the present invention describes a processing system and pieces of software for one or more communication networks, with middleware comprising an application programming interface (102) cast over a data model describing quality-of-service contracts and quality-of-service adaptation paths specified by quality-of-service aware mobile multimedia applications (101) using said programming interface, in order to provide quality-of-service-aware and mobility-aware management of network connections with other applications. The present invention hereby relates to a corresponding data model as well as the necessary architecture.

Patent
15 Dec 2000
TL;DR: In this paper, a method and system for reducing network traffic in a data processing network is described, which includes a user station, a boot server from which portions of the user station operating system are retrieved, and an application server from a user application program are retrieved.
Abstract: A method and system for reducing network traffic in a data processing network are disclosed. The data processing system typically includes a user station, a boot server from which portions of the user station operating system are retrieved, and an application server from which portions of a user application program are retrieved. In one embodiment, a user station of the data processing system includes a non-volatile storage device. Portions of the operating system and application that are frequently accessed may be downloaded from the appropriate servers and stored in the non-volatile storage device. In one embodiment, the user station may determine which code segments constitute key code segments by recording page faults in a miss table of the user station. The most frequently accessed pages can then be determined for storing in local memory. To maintain consistency of software when the operating system or an application program is revised or updated, one embodiment of the invention clears the key code segments from all local non-volatile storage devices when an operating system or application program is newly installed on one of the servers. In another embodiment, network traffic is reduced by installing programs on the user station and the data server that monitor changes to a data file. When an application is invoked by the user station and the user begins to modify data, the user station program records the changes that are made to the data file locally in a local change file. Periodically the local change file is transferred to the data server, where the local changes are incorporated into a master change file on the data server. When the user ultimately exits the program or saves the data, the server program reads the master change file and implements the changes to the data file.

01 Jan 2000
TL;DR: This dissertation proposes to organize resource naming and security, not around administrative domains, but around the sharing patterns of users, and uses the formalism of sharing to drive a user-centric sharing implementation for distributed systems.
Abstract: I tackle the problem of naming and sharing resources across administrative boundaries. Conventional systems manifest the hierarchy of typical administrative structure in the structure of their own mechanism. While natural for communication that follows hierarchical patterns, such systems interfere with naming and sharing that cross administrative boundaries, and therefore cause headaches for both users and administrators. I propose to organize resource naming and security, not around administrative domains, but around the sharing patterns of users. The dissertation is organized into four main parts. First, I discuss the challenges and tradeoffs involved in naming resources and consider a variety of existing approaches to naming. Second, I consider the architectural requirements for user-centric sharing. I evaluate existing systems with respect to these requirements. Third, to support the sharing architecture, I develop a formal logic of sharing that captures the notion of restricted delegation. Restricted delegation ensures that users can use the same mechanisms to share resources consistently, regardless of the origin of the resource, or with whom the user wishes to share the resource next. A formal semantics gives unambiguous meaning to the logic. I apply the formalism to the Simple Public Key Infrastructure and discuss how the formalism either supports or discourages potential extensions to such a system. Finally, I use the formalism to drive a user-centric sharing implementation for distributed systems. I show how this implementation enables end-to-end authorization, a feature that makes heterogeneous distributed systems more secure and easier to audit. Conventionally, gateway services that bridge administrative domains, add abstraction, or translate protocols typically impede the flow of authorization information from client to server. In contrast, end-to-end authorization enables us to build gateway services that preserve authorization information, hence we reduce the size of the trusted computing base and enable more effective auditing. I demonstrate my implementation and show how it enables end-to-end authorization across various boundaries. I measure my implementation and argue that its performance tracks that of similar authorization mechanisms without end-to-end structure. I conclude that my user-centric philosophy of naming and sharing benefits both users and administrators.

Proceedings ArticleDOI
01 May 2000
TL;DR: This work describes a server that acts as the locking authority and implements a lease-based protocol for revoking access to data objects locked by an isolated or failed computer.
Abstract: In a distributed file system built on network attached storage, client computers access data directly from shared storage, rather than submitting I/O requests through a server. Without a server marshaling access to data, if a computer fails or becomes isolated in a network partition while holding locks on cached data objects, those objects become inaccessible to other computers until a locking authority can guarantee that the lock holder will not again directly access these data. We describe a server that acts as the locking authority and implements a lease-based protocol for revoking access to data objects locked by an isolated or failed computer. When a lease expires, the server can be assured that the client no longer acts on locked data, and can safely redistribute locks to other clients. During normal operation, this protocol incurs no message overhead, uses no memory, and performs no computation at the locking authority.
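
The lease mechanism can be sketched as a lock record with an expiry time: a lock whose holder has stopped renewing its lease can safely be granted to another client once the lease has run out. Field names, the lease length, and the explicit time arguments are illustrative.

```python
import time

class LeaseLockServer:
    """Toy locking authority granting leases that expire if not renewed (illustrative)."""

    def __init__(self, lease_seconds=30):
        self.lease_seconds = lease_seconds
        self.locks = {}    # object id -> (client, lease expiry time)

    def acquire(self, obj, client, now=None):
        now = time.time() if now is None else now
        holder = self.locks.get(obj)
        # Grant the lock if it is free or the previous holder's lease has expired;
        # after expiry the server knows the old holder no longer acts on the data.
        if holder is None or holder[1] < now:
            self.locks[obj] = (client, now + self.lease_seconds)
            return True
        return False

    def renew(self, obj, client, now=None):
        now = time.time() if now is None else now
        if obj in self.locks and self.locks[obj][0] == client:
            self.locks[obj] = (client, now + self.lease_seconds)

server = LeaseLockServer(lease_seconds=30)
print(server.acquire("inode:42", "clientA", now=0))    # True
print(server.acquire("inode:42", "clientB", now=10))   # False, lease still valid
print(server.acquire("inode:42", "clientB", now=45))   # True, clientA's lease expired
```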

Patent
10 Jan 2000
TL;DR: In this article, a method for managing a shared resource that is allocated among nodes in a distributed computing system includes receiving periodic reports from the nodes regarding their respective allocations of the resource.
Abstract: A method for managing a shared resource that is allocated among nodes in a distributed computing system includes receiving periodic reports from the nodes regarding their respective allocations of the resource. Responsive to the periodic reports, an approximate amount of the resource that is free for further allocation is determined. Typically, the shared resource is a data storage resource, such as a plurality of disks linked to the nodes by a network, which disks are commonly accessible to multiple ones of the nodes.
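
The bookkeeping reduces to remembering the latest allocation report from each node and subtracting their sum from the total capacity; the result is only approximate because reports arrive periodically rather than instantaneously. Names and numbers below are illustrative.

```python
class SharedStorageAccountant:
    """Approximate free-space tracking from periodic per-node reports (illustrative)."""

    def __init__(self, total_capacity):
        self.total_capacity = total_capacity
        self.reported = {}               # node -> last reported allocation

    def report(self, node, allocated):
        # Each node periodically reports how much of the shared resource it holds.
        self.reported[node] = allocated

    def approx_free(self):
        # Only an approximation: some reports may be slightly stale.
        return self.total_capacity - sum(self.reported.values())

accountant = SharedStorageAccountant(total_capacity=1000)   # e.g. 1000 GB of disk
accountant.report("node1", 250)
accountant.report("node2", 400)
print(accountant.approx_free())   # 350
accountant.report("node2", 500)   # the next periodic report refines the estimate
print(accountant.approx_free())   # 250
```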

Patent
13 Nov 2000
TL;DR: In this paper, each user is assigned a priority and is provided with a non-uniform probability distribution function corresponding to that priority, with the sum of the several NUDFs being uniform.
Abstract: A method by which a plurality of users share access to a resource such as a common communications channel. Each user is assigned a priority and is provided with a non-uniform probability distribution function corresponding to that priority, with the sum of the several non-uniform probability distribution functions being uniform. Whenever a user wishes to access the resource, the user selects a random number according to its non-uniform probability distribution function and computes an access time based on the selected random number.
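
One way to read the scheme: each priority class draws its random number from its own skewed distribution, so higher priorities tend to produce earlier access times. The sketch below uses simple power-law draws purely as a stand-in; the actual distributions (whose sum is uniform) are not reproduced here.

```python
import random

def access_time(priority, window=1.0, rng=random):
    """Draw an access time in [0, window); higher priority skews earlier.

    A draw of u**k with k > 1 concentrates values near 0, so higher-priority
    users tend to pick earlier access times. These distributions are a
    stand-in, not the ones defined in the patent.
    """
    u = rng.random()
    exponent = {"high": 3, "medium": 2, "low": 1}[priority]
    return window * (u ** exponent)

random.seed(1)
means = {p: sum(access_time(p) for _ in range(10000)) / 10000
         for p in ("high", "medium", "low")}
print(means)   # mean access time increases as priority drops
```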

Proceedings ArticleDOI
23 Sep 2000
TL;DR: This work explicitly considers the multiple access issue on both the uplink and downlink (to and from an infostation, respectively) and sharing of fixed network links for transporting information to and from infostations, and finds that in order to maximize throughput, an infostation radio link should not be shared among users.
Abstract: The infostations wireless data network architecture features discontinuous coverage and ultra-high radio rates for burst transfers of information between base and mobile. It has been shown previously that the infostations architecture can greatly increase the capacity of wireless data systems at the expense of increased delivery delay. In this work we explicitly consider the multiple access issue on both the uplink and downlink (to and from an infostation, respectively) and sharing of fixed network links for transporting information to and from infostations. We find that in order to maximize throughput, an infostation radio link should not be shared among users. Furthermore, this sole use paradigm is echoed in the fixed network which transports information to and from infostations. In order to minimize average delay, the fixed link to any given infostation should serve users sequentially, as opposed to in a shared manner.

Proceedings Article
01 May 2000
TL;DR: An overview of the proposed RAGS framework is given, describing an abstract data model with five levels of representation: Conceptual, Semantic, Rhetorical, Document and Syntactic, to facilitate modular development of NLG systems as well as evaluation of components, systems and algorithms.
Abstract: The RAGS project aims to develop a reference architecture for natural language generation, to facilitate modular development of NLG systems as well as evaluation of components, systems and algorithms. This paper gives an overview of the proposed framework, describing an abstract data model with five levels of representation: Conceptual, Semantic, Rhetorical, Document and Syntactic. We report on a re-implementation of an existing system using the RAGS data model.

Journal ArticleDOI
TL;DR: It is shown that, asymptotically, the performance of the learning system is that for the symmetric Nash strategy, despite the allowed arbitrariness and lack of coordination.
Abstract: This paper is motivated by the work of Altman and Shimkin (1998). Customers arrive at a service center and must choose between two types of service: a channel that is shared by all currently in it and a dedicated line. The mean service cost (or time) for any customer entering the shared resource depends on the decisions of all future arrivals up to the time of departure of that customer, and so has a competitive aspect. The decision rule of each arriving customer is based on its own immediate self-interest, given the available data on the past performance. If the current estimate of the cost for the shared resource equals that of the dedicated line, any decision is possible. The procedure is a type of learning algorithm. The convergence problem is one in asynchronous stochastic approximation, where the ODE may be a differential inclusion. It is shown that, asymptotically, the performance of the learning system is that for the symmetric Nash strategy, despite the allowed arbitrariness and lack of coordination.