
Showing papers on "Shared resource" published in 1996


Patent
25 Jan 1996
TL;DR: In this article, a method and system for parallel back-up of a plurality of client computers on a network, in particular, a local area network or wide area network, is presented.
Abstract: A method and system for parallel back-up of a plurality of client computers on a network, in particular a local area network or wide area network. Each client computer has a local storage device that stores files. A number of back-up storage devices are organized into groups, with each back-up storage device being a member of one group. A server computer is coupled to the plurality of back-up storage devices by a bus and is also coupled to the network. The server computer executes a back-up job. The server computer accepts parameters for the back-up job, the parameters including a source parameter specifying a set of the client computers and a destination parameter specifying a group. The server computer receives files from each one of the set of client computers specified in the source parameter in parallel. Each received file is stored on one of the back-up storage devices that is a member of the group specified in the destination parameter. When that back-up storage device is full or cannot receive files, the back-up process cascades to the next storage device in the group. Files can be transferred to storage devices in different groups in parallel.
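The cascading behaviour this abstract describes can be sketched as follows. The `BackupDevice` class and its slot-based capacity model are illustrative assumptions for the sketch, not details taken from the patent:

```python
class BackupDevice:
    """A back-up storage device with a fixed capacity, counted in file slots."""
    def __init__(self, name, capacity):
        self.name = name
        self.capacity = capacity
        self.files = []

    def is_full(self):
        return len(self.files) >= self.capacity

def store_in_group(group, filename):
    """Store a file on the first non-full device in the group,
    cascading to the next device when one is full."""
    for device in group:
        if not device.is_full():
            device.files.append(filename)
            return device.name
    raise RuntimeError("all devices in the group are full")

# Two drives in one destination group; the second takes over
# once the first fills up.
group = [BackupDevice("tape0", 2), BackupDevice("tape1", 2)]
targets = [store_in_group(group, f"client_file_{i}") for i in range(3)]
# targets -> ["tape0", "tape0", "tape1"]
```

In the patent, several such groups would run this loop in parallel, one stream of client files per group.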

392 citations


Patent
10 Jun 1996
TL;DR: In this article, a video file server includes an integrated cached disk array storage subsystem and a plurality of stream server computers linking the cached disk storage subsystem to a data network for the transfer of video data streams.
Abstract: A video file server includes an integrated cached disk array storage subsystem and a plurality of stream server computers linking the cached disk storage subsystem to a data network for the transfer of video data streams. The video file server further includes a server controller for applying an admission control policy to client requests and assigning stream servers to service the client requests. The stream servers include a real-time scheduler for scheduling isochronous tasks, and support at least one industry standard network file access protocol and one file access protocol for continuous media file access. The cached disk storage subsystem is responsive to video prefetch commands, and the data specified for a prefetch command for a process are retained in an allocated portion of the cache memory from the time that the cached disk storage subsystem has responded to the prefetch command to the time that the cached disk storage subsystem responds to a fetch command specifying the data for the process. The time between prefetching and fetching is selected based on available disk and cache resources. The video file server provides video-on-demand service by maintaining and dynamically allocating sliding windows of video data in the random access memories of the stream server computers.

213 citations


Journal ArticleDOI
TL;DR: An adaptive call admission control mechanism for wireless/mobile networks supporting multiple classes of traffic, and an analytical methodology which shows that the combination of the call admission control and the resource sharing schemes guarantees a predefined quality-of-service to each class of traffic.
Abstract: We introduce an adaptive call admission control mechanism for wireless/mobile networks supporting multiple classes of traffic, and discuss a number of resource sharing schemes which can be used to allocate wireless bandwidth to different classes of traffic. The adaptive call admission control reacts to changing new call arrival rates, and the resource sharing mechanism reacts to rapidly changing traffic conditions in every radio cell due to mobility of mobile users. In addition, we have provided an analytical methodology which shows that the combination of the call admission control and the resource sharing schemes guarantees a predefined quality-of-service to each class of traffic. One major advantage of our approach is that it can be performed in a distributed fashion removing any bottlenecks that might arise due to frequent invocation of network call control functions.

92 citations


Patent
27 Mar 1996
TL;DR: A network cache system includes a shared cache server and a conventional file server connected to a computer network which serves a plurality of client workstation computers. Each client computer may additionally include a local non-volatile cache storage unit for caching data transferred to the client from a network server or from the shared cache server, as mentioned in this paper.
Abstract: A network cache system includes a shared cache server and a conventional file server connected to a computer network which serves a plurality of client workstation computers. Each client computer may additionally include a local non-volatile cache storage unit for caching data transferred to the client from a network server or from the shared cache server. Each client computer further includes a resident redirector program which intercepts file manipulation requests from executing application programs and redirects these requests to either the shared network cache or the local non-volatile cache when appropriate.

70 citations


Patent
Peter Clark1
18 Dec 1996
TL;DR: In this paper, a method and apparatus for managing how threads of a multi-threaded computer program share a resource is provided, where one thread of the program is given priority over other threads by granting to the thread possession of the lock associated with the resource regardless of whether the thread currently requires use of the resource.
Abstract: A method and apparatus for managing how threads of a multi-threaded computer program share a resource is provided. One thread of the program is given priority over other threads of the program by granting to the thread possession of the lock associated with the resource regardless of whether the thread currently requires use of the resource. The other threads are designed to indicate to the priority thread when they require use of the resource. If the priority thread is done using the resource and detects that another thread is waiting to use the resource, the priority thread releases the resource lock for the resource. After releasing the lock for the resource, the priority thread automatically requests the resource lock. After using the resource, any non-priority thread releases the resource lock to the priority thread if the priority thread has requested the resource, without regard to whether any other threads may be waiting for the resource. According to one embodiment, a timer mechanism is used to cause the priority thread to periodically check whether any threads are waiting to use the resource.
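The handoff protocol in this abstract can be modelled as a small state machine. The sketch below uses invented names (`PriorityLock`, `signal_waiting`, and so on) and elides real concurrency; it only illustrates who holds the lock at each step, not the patent's actual implementation:

```python
class PriorityLock:
    """Models the protocol: the priority thread holds the lock by default,
    and non-priority threads must signal when they need the resource."""
    def __init__(self):
        self.holder = "priority"      # priority thread starts with the lock
        self.waiting = []             # non-priority threads that signalled
        self.priority_requested = False

    def signal_waiting(self, thread):
        """A non-priority thread indicates it requires the resource."""
        self.waiting.append(thread)

    def priority_done(self):
        """Priority thread finished using the resource: release only if
        someone is waiting, then immediately re-request the lock."""
        if self.waiting:
            self.holder = self.waiting.pop(0)
            self.priority_requested = True   # automatic re-request
        # if nobody is waiting, the priority thread simply keeps the lock

    def nonpriority_done(self, thread):
        """A non-priority thread hands the lock straight back to the
        priority thread, ignoring any other waiters."""
        if self.holder == thread and self.priority_requested:
            self.holder = "priority"
            self.priority_requested = False

lock = PriorityLock()
lock.signal_waiting("worker")
lock.priority_done()              # lock handed to "worker"
handed_to = lock.holder
lock.nonpriority_done("worker")   # handed straight back to priority
back_to = lock.holder
```

The timer mechanism mentioned at the end of the abstract would periodically drive `priority_done` so waiters are noticed even while the priority thread idles.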

60 citations


Journal ArticleDOI
TL;DR: This work introduces a new primitive, the Resource Controller, which abstracts the problem of controlling the total amount of resources consumed by a distributed algorithm, and presents an efficient distributed algorithm to implement this abstraction.
Abstract: This paper introduces a new distributed data object called Resource Controller that provides an abstraction for managing the consumption of a global resource in a distributed system. Examples of resources that may be managed by such an object include: number of messages sent, number of nodes participating in the protocol, and total CPU time consumed. The Resource Controller object is accessed through a procedure that can be invoked at any node in the network. Before consuming a unit of resource at some node, the controlled algorithm should invoke the procedure at this node, requesting a permit or a rejection. The key characteristics of the Resource Controller object are the constraints that it imposes on the global resource consumption. An (M, W)-Controller guarantees that the total number of permits granted is at most M; it also ensures that, if a request is rejected, then at least M − W permits are eventually granted, even if no more requests are made after the rejected one. In this paper, we describe several message- and space-efficient implementations of the Resource Controller object. In particular, we present an (M, W)-Controller whose message complexity is O(n log^2 n log(M/(W+1))), where n is the total number of nodes. This is in contrast to the O(nM) message complexity of a fully centralized controller which maintains a global counter of the number of granted permits at some distinguished node and relays all the requests to that node.
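The centralized baseline mentioned at the end of the abstract is easy to sketch: a single counter that hands out at most M permits. This shows only the permit/reject semantics; the paper's contribution is a distributed implementation of the same guarantee with far lower message complexity, which this sketch does not attempt:

```python
class CentralController:
    """Centralized baseline: one global counter of granted permits.
    Every request is relayed to this counter, costing O(nM) messages
    overall in an n-node network."""
    def __init__(self, M):
        self.M = M          # maximum number of permits ever granted
        self.granted = 0

    def request(self):
        """Grant a permit while the budget lasts, otherwise reject."""
        if self.granted < self.M:
            self.granted += 1
            return "permit"
        return "reject"

ctrl = CentralController(M=3)
answers = [ctrl.request() for _ in range(5)]
# first three requests get permits, the remaining two are rejected
```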

57 citations


Proceedings ArticleDOI
12 Jun 1996
TL;DR: The issues associated with automatically managing a heterogeneous environment are reviewed, SmartNet's architecture and implementation are described, and performance data is summarized.
Abstract: SmartNet is a scheduling framework for heterogeneous systems. Preliminary conservative simulation results for one of the optimization criteria show a 1.21 improvement over Load Balancing and a 25.9 improvement over Limited Best Assignment, the two policies that evolved from homogeneous environments. SmartNet achieves these improvements through the implementation of several innovations. It recognizes and capitalizes on the inherent heterogeneity of computers in today's distributed environments; it recognizes and accounts for the underlying non-determinism of the distributed environment; it implements an original partitioning approach, making runtime prediction more accurate and useful; it effectively schedules based on all shared resource usage, including network characteristics; and it uses statistical and filtering techniques, making a greater amount of prediction information available to the scheduling engine. In this paper, the issues associated with automatically managing a heterogeneous environment are reviewed, SmartNet's architecture and implementation are described, and performance data is summarized.

53 citations


Patent
27 Nov 1996
TL;DR: In this article, an integrated cached disk array includes host to global memory (front end) and global memory to disk array (back end) interfaces implemented with dual control processors configured to share substantial resources, but run the same processor independent control program using an implementation that makes the hardware appear identical from both the X and Y processor sides.
Abstract: An integrated cached disk array includes host to global memory (front end) and global memory to disk array (back end) interfaces implemented with dual control processors configured to share substantial resources. The dual processors each access independent control store RAM, but run the same processor independent control program using an implementation that makes the hardware appear identical from both the X and Y processor sides.

47 citations



Patent
29 Aug 1996
TL;DR: In this paper, a multitasking data processing system having a plurality of tasks and a shared resource and a method of controlling allocation of shared resources within a multi-task data processing systems are disclosed.
Abstract: A multitasking data processing system having a plurality of tasks and a shared resource and a method of controlling allocation of shared resources within a multitasking data processing system are disclosed. In response to a resource request for a portion of a shared resource by a particular task among the plurality of tasks, a determination is made whether or not granting the resource request would cause a selected level of resource allocation to be exceeded. In response to a determination that granting the resource request would not cause the selected level of resource allocation to be exceeded, the resource request is granted. However, in response to a determination that granting the resource request would cause the selected level of resource allocation to be exceeded, execution of the particular task is suspended for a selected penalty time. In one embodiment of the present invention, the shared resource is a memory.
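The grant-or-penalize decision in this abstract reduces to a threshold check. A minimal sketch, with illustrative parameter names and a short sleep standing in for task suspension:

```python
import time

def request_resource(requested, allocated, limit, penalty_seconds=0.01):
    """Grant the request if it stays within the selected allocation
    level; otherwise suspend the requesting task for a penalty time
    and leave the allocation unchanged."""
    if allocated + requested <= limit:
        return allocated + requested      # request granted
    time.sleep(penalty_seconds)           # task suspended as penalty
    return allocated                      # request not granted

# e.g. a shared memory pool of 100 units, 80 already allocated
used = request_resource(requested=10, allocated=80, limit=100)  # granted
over = request_resource(requested=30, allocated=80, limit=100)  # penalized
```

In the patented system the suspension happens inside the scheduler rather than in the requesting code path, but the control decision is the same.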

43 citations


Patent
25 Nov 1996
TL;DR: A file recording support apparatus for supporting recording of a file on a home page on an internet and an intranet according to the present invention comprises means for entering a recording start command, means for entering a recording stop command, and recording means for recording a file acquired from a server, together with link information included in the file, in storage.
Abstract: A file recording support apparatus for supporting recording of a file on a home page on an internet and an intranet according to the present invention comprises means for entering a recording start command, means for entering a recording stop command, and recording means for recording a file acquired from a server, together with link information included in the file, in storage means during the period from when the recording start command is entered until the recording stop command is entered.

Journal ArticleDOI
TL;DR: This paper surveys and analyzes several well-known distributed mutual exclusion algorithms according to their related characteristics, and compares the performance of these algorithms by a simulation study.

Book
01 Jan 1996
TL;DR: Broadband traffic characteristics, broadband service models, and general tools for queueing analysis are presented.
Abstract: Broadband traffic characteristics. Broadband service models. Accounting for cell delay variation. Statistical resource sharing. Connection admission control. Weighted fair queueing. Access network design. MAC protocols for access to B-ISDN. Generic architecture and core network design. Multiservice network dimensioning. Virtual path network design. Resource management and routing. Traffic modelling. General tools for queueing analysis. Cell scale queueing. Burst scale loss systems. Burst scale delay systems. Multi-rate models.

Book ChapterDOI
21 Feb 1996
TL;DR: The scheme assigns a nominal capacity to each service class and implements a form of virtual partitioning by means of state-dependent priorities, instead of each class of traffic having a fixed priority, as in traditional trunk reservation schemes.
Abstract: We propose a scheme for sharing an unbuffered resource, such as bandwidth or capacity, by various services. The scheme assigns a nominal capacity to each service class and implements a form of virtual partitioning by means of state-dependent priorities. That is, instead of each class of traffic having a fixed priority, as in traditional trunk reservation schemes, the priorities depend on the state of the system. An approximate method of analysis based on fixed point equations is given. Numerical results are obtained from the approximation, exact computations and simulations. The results show that the scheme is robust, fair and efficient.
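A state-dependent admission rule of this flavour can be sketched in a few lines. The function and its trunk-reservation-style `reservation` parameter are assumptions for illustration; the paper's actual priorities and analysis are more refined:

```python
def admit(cls, occupancy, nominal, capacity, reservation=2):
    """State-dependent admission: a class under its nominal share is
    admitted while any capacity remains; a class over its share is
    admitted only if at least `reservation` units are still free."""
    free = capacity - sum(occupancy.values())
    if free <= 0:
        return False
    if occupancy[cls] < nominal[cls]:
        return True                 # under nominal share: high priority
    return free >= reservation      # over nominal: needs headroom

# Two service classes sharing 9 units of capacity; "data" is already
# over its nominal share, so with only 1 unit free it is refused.
occupancy = {"voice": 3, "data": 5}
nominal = {"voice": 5, "data": 4}
under = admit("voice", occupancy, nominal, capacity=9)  # True
over = admit("data", occupancy, nominal, capacity=9)    # False
```

The point of the state dependence is visible here: whether a class has priority is recomputed from the current occupancies on every request, rather than fixed in advance.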

Patent
Bernard Charles Drerup1
26 Jul 1996
TL;DR: In this paper, a system and method for transferring a file from one computer to another computer in the network has been proposed, where a user can identify a specific piece of data and then access this data from another computer.
Abstract: A system and method are provided wherein a user of an interconnected computer system can identify a specific piece of data and then access this data from another computer in the network. This is extremely useful since it is often desirable for data to be capable of being displayed and manipulated from another system during meetings, discussions and the like. The user who wishes to transfer a file to another system simply points an untethered stylus to a representation of a file, such as a filename, icon, or the like and then selects the file to be transferred. The user then carries the stylus to a remote interconnected computer and points the stylus at the remote computer which verifies the identity of the stylus and obtains a path to the selected file. The data file is then transferred from the user's computer to the remote computer through the network.

Proceedings ArticleDOI
03 Jan 1996
TL;DR: The results demonstrate that client caches alter workload characteristics in a way that leaves a profound impact on server cache performance, and suggest worthwhile directions for the future development of server caching strategies.
Abstract: A distributed file system provides a file service from one or more shared file servers to a community of client workstations over a network. While the client-server paradigm has many advantages, it also presents new challenges to system designers concerning performance and reliability. As both client workstations and file servers become increasingly well-resourced, a number of system design decisions need to be re-examined. This research concerns the caching of disk-blocks in a distributed client-server environment. Some recent research has suggested that various strategies for cache management may not be equally suited to the circumstances at both the client and the server. Since any caching strategy is based on assumptions concerning the characteristics of the demand, the performance of the strategy is only as good as the accuracy of this assumption. The performance of a caching strategy at a file server is strongly influenced by the presence of client caches since these caches alter the characteristics of the stream of requests that reaches the server. This paper presents the results of an investigation of the effect of client caching on the nature of the server workload as a step towards understanding the performance of caching strategies at the server. The results demonstrate that client caches alter workload characteristics in a way that leaves a profound impact on server cache performance, and suggest worthwhile directions for the future development of server caching strategies.

Journal ArticleDOI
TL;DR: This work shows how networks based on high-speed crossbar switches and efficient protocol implementations can support high bandwidth and low latency communication while still enjoying the flexibility of general networks, and uses three applications to demonstrate that network-based multicomputers are a practical architecture.
Abstract: Multicomputers built around a general network are an attractive architecture for a wide class of applications. The architecture provides many benefits compared with special-purpose approaches, including heterogeneity, reuse of application and system code, and sharing of resources. The architecture also poses new challenges to both computer system implementers and users. First, traditional local-area networks do not have enough bandwidth and create a communication bottleneck, thus seriously limiting the set of applications that can be run effectively. Second, programmers have to deal with large bodies of code distributed over a variety of architectures, and work in an environment where both the network and nodes are shared with other users. Our experience in the Nectar project shows that it is possible to overcome these problems. We show how networks based on high-speed crossbar switches and efficient protocol implementations can support high bandwidth and low latency communication while still enjoying the flexibility of general networks, and we use three applications to demonstrate that network-based multicomputers are a practical architecture. We also show how the network traffic generated by this new class of applications poses severe requirements for networks.

Proceedings Article
01 Sep 1996
TL;DR: Measurements of an Andrew File System server upgraded in an effort to improve client performance are analyzed and it is seen that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time.
Abstract: An important trend in the design of storage subsystems is a move toward direct network attachment. Network-attached storage offers the opportunity to off-load distributed file system functionality from dedicated file server machines and execute many requests directly at the storage devices. For this strategy to lead to better performance as perceived by users, the response time of distributed operations must improve. In this paper, we analyze measurements of an Andrew File System (AFS) server that we recently upgraded in an effort to improve client performance in our laboratory. While the original server’s overall utilization was only about 3%, we show how burst loads were sufficiently intense to lead to periods of poor response time significant enough to trigger customer dissatisfaction. In particular, we show how, after adjusting for network load and traffic to non-project servers, 50% of the variation in client response time was explained by variation in server CPU utilization. That is, clients saw long response times in large part because the server was often over-utilized when it was used at all. Using these measures, we see that off-loading file server work in a network-attached storage architecture has the potential to benefit user response time. Computational power in such a system scales directly with storage capacity, so the slowdown during burst periods should be reduced.

Proceedings ArticleDOI
19 Jun 1996
TL;DR: This work has developed a system for easy administration and enforcement of controlled access for end user access to resources via the Internet including X Windows and World Wide Web based interfaces, and DCE based authentication and authorization.
Abstract: The Internet has been identified as one of the most dangerous aspects for an organization in today's information based world. Unauthorized access, misuse, and manipulation of data can create havoc. We have chosen a distributed environment with thousands of users and tens of thousands of resources to illustrate an approach to solve this problem. We have developed a system for easy administration and enforcement of controlled access. Distributed, delegated management of resources is at the core of the project. For portability, the enforcement system is established between the operating system and the user rather than being embedded in the operating system. In particular, we have developed management and access methods for end user access to resources via the Internet including X Windows and World Wide Web based interfaces, and DCE based authentication and authorization. The environment for this project is the Distributed Informatics Computing and Collaborative Environments project associated with the US Department of Energy's Energy Science Network. This project is a joint effort between the Continuous Electron Beam Accelerator Facility (CEBAF), the Chinese Institute of High Energy Physics, and Old Dominion University.

Journal ArticleDOI
TL;DR: An analytic performance modelling methodology for synchronous iterative algorithms executing on networked workstations that includes the effects of application load, background load, and processor heterogeneity is developed and validated.
Abstract: The utilization of networked, shared, heterogeneous workstations as an inexpensive parallel computational platform is an appealing idea. However, most performance models for parallel computation are oriented towards the use of tightly-coupled, dedicated, homogeneous processors. We develop and validate an analytic performance modelling methodology for synchronous iterative algorithms executing on networked workstations. The model includes the effects of application load, background load, and processor heterogeneity. We use two applications, nonlinear optimization and discrete-event simulation, to validate the model. Various policies for the use of the workstations are considered and the optimal (or near-optimal) scheduling found. The performance modelling methodology provides significant help in addressing scheduling and similar issues in a shared resource environment.

Journal ArticleDOI
TL;DR: A general strategy of sharing multiple types of discrete resources with finite capacity under the model of DRS is proposed, based upon a media access protocol, CSMA/CD-W (Carrier Sense Multiple Access with Collision Detection for Wireless), which supports wireless inter-robot communication among multiple autonomous mobile robots without using any centralized mechanism.

Proceedings ArticleDOI
20 Sep 1996
TL;DR: This work presents an algorithm to perform the three tasks of pipelining, resource sharing and component selection, so as to minimize design cost for a given throughput constraint.
Abstract: In general, high performance DSP designs are heavily pipelined and, in order to reduce the pipeline cost, these designs employ techniques such as component selection and resource sharing to select the appropriate number and type of components. We present an algorithm to perform the three tasks of pipelining, resource sharing and component selection, so as to minimize design cost for a given throughput constraint. Experiments conducted on several examples demonstrate the superiority of performing all three tasks, rather than just a combination of any two of these tasks, as done in previously published algorithms.

Patent
Jerrold V. Hauck1
26 Sep 1996
TL;DR: In this paper, a method for requesting use of a shared resource in a computer system is presented, in which the device requesting a resource can predict the need for the resource before the need actually arises, and the amount of local buffering required within the device is therefore chosen to accommodate only the random latency component.
Abstract: A method is provided of requesting use of a shared resource in a computer system. The method is suited to applications in which the device requesting use of the resource can predict the need for the resource before the need actually arises. A request for use of the resource is characterized by a latency between the request and a subsequent granting of the request. The latency has both a deterministic component and a non-deterministic component. In response to an initialization of the computer system, the deterministic component of the latency is measured. The use of the resource is then requested by the requesting device some predetermined time before the time at which the need for such use arises. The predetermined time corresponds to the deterministic component of the latency. The amount of local buffering required within the device is therefore chosen to accommodate only the random latency component.
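The arithmetic behind this scheme is simple: issue the request early by the measured deterministic latency, and size the local buffer for the random component only. The function names and example numbers below are illustrative, not from the patent:

```python
def issue_time(need_time, deterministic_latency):
    """Request the resource early by the deterministic latency measured
    at system initialization, so that (ideally) the grant arrives just
    as the need arises."""
    return need_time - deterministic_latency

def buffer_size(max_random_latency, consumption_rate):
    """Local buffering only has to absorb the random (non-deterministic)
    part of the grant latency."""
    return max_random_latency * consumption_rate

# Suppose initialization measured 8 cycles of deterministic latency,
# and data is needed at cycle 100: the request goes out at cycle 92.
request_at = issue_time(need_time=100, deterministic_latency=8)
# With up to 4 cycles of random latency at 2 words/cycle, 8 words
# of local buffering suffice.
entries = buffer_size(max_random_latency=4, consumption_rate=2)
```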

Journal ArticleDOI
TL;DR: This paper describes the implementation of a highly scalable shared queue, supporting the concurrent insertion and deletion of elements, targeted at the class of distributed memory machines which use a scalable interconnection network.
Abstract: The emergence of low latency, high throughput routers means that network locality issues no longer dominate the performance of parallel algorithms. One of the key performance issues is now the even distribution of work across the machine, as the problem size and number of processors increase. This paper describes the implementation of a highly scalable shared queue, supporting the concurrent insertion and deletion of elements. The main characteristics of the queue are that there is no fixed limit on the number of outstanding requests and the performance scales linearly with the number of processors (subject to increasing network latencies). The queue is implemented using a general-purpose computational model, called the WPRAM. The model includes a shared address space which uses weak coherency semantics. The implementation makes extensive use of pairwise synchronization and concurrent atomic operations to achieve scalable performance. The WPRAM is targeted at the class of distributed memory machines which use a scalable interconnection network.

Journal ArticleDOI
TL;DR: This paper examines the importance of sharing within cooperative systems and argues for a specialized service to support the cooperative aspects of information sharing, and proposes a set of novel and viable design concepts, and the beginnings of an architectural framework for providing shared object services for CSCW systems.
Abstract: This paper examines the importance of sharing within cooperative systems and argues for a specialized service to support the cooperative aspects of information sharing. An investigation into the current information models used by Computer Supported Cooperative Work (CSCW) applications is presented. This is complemented by highlighting the problems faced by traditional database mechanisms in supporting cooperative applications. From this we form a set of requirements for CSCW information systems, and this analysis is used directly in the creation of new software concepts and an associated framework. Rather than present a detailed design, we propose a set of novel and viable design concepts, and the beginnings of an architectural framework for providing shared object services for CSCW systems. The relationship between the cooperative shared object service and existing services is briefly examined. A number of services of particular importance to CSCW systems are identified. More detailed consideration is given for a selection of service elements. The paper presents both the need for these services and the means of realizing the shared object service by augmenting existing object facilities.

Journal ArticleDOI
TL;DR: Some of the commonly known security threats, together with the security services and state-of-the-art mechanisms that can be used to provide protection against these threats are introduced.


Book
01 Jan 1996
TL;DR: The authors describe how interlibrary loans have adapted to economic and technological changes in the electronic age, how dramatically increasing demands have impacted resource sharing, and how research needs have been filled through resource sharing in the 1990s.
Abstract: This text describes how interlibrary loans have adapted to economic and technological changes in the electronic age, how dramatically increasing demands have impacted resource sharing, and how research needs have been filled through resource sharing in the 1990s.

Patent
Cheng Marko Ju K1
13 Nov 1996
TL;DR: In this article, a method and apparatus for determining the status of a resource (810) shared by multiple subsystems (802, 804, 806, 808) operating in mutually asynchronous clock domains are presented.
Abstract: A method and apparatus for determining the status of a resource (810) shared by multiple subsystems (802, 804, 806, 808) operating in mutually asynchronous clock domains apply a one-bit counter (814, 818, 822, 826) for each subsystem and synchronize the value of each such bit counter with all asynchronous clocks. Each subsystem exclusive-ORs the value of each bit counter (814, 818, 822, 826) to generate an availability status for the shared resource (810). System delays caused by synchronization are minimized, and circuit design and proof of correctness at the design stage are simplified.
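On one plausible reading of the abstract, the status logic works like this: each subsystem owns a one-bit counter that it toggles when it claims or releases the resource, and XOR-ing all (clock-synchronized) bits yields the busy/free status. A sketch, ignoring the synchronizer hardware that is the point of the patent:

```python
def resource_in_use(bit_counters):
    """Exclusive-OR all one-bit counters; 1 means the shared resource
    is currently claimed, 0 means it is available."""
    status = 0
    for bit in bit_counters:
        status ^= bit
    return status == 1

bits = [0, 0, 0, 0]              # one 1-bit counter per subsystem
before = resource_in_use(bits)   # resource free
bits[2] ^= 1                     # subsystem 2 toggles its bit to claim
during = resource_in_use(bits)   # resource busy
bits[2] ^= 1                     # toggles again on release
after = resource_in_use(bits)    # free again
```

Because each bit changes only in its owner's clock domain, only single-bit values ever cross domains, which is what keeps the synchronization simple.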

Proceedings ArticleDOI
11 Jun 1996
TL;DR: This work considers the problem of resource sharing among PEs in the star interconnection network (SIN) and presents three different placement strategies and shows that a perfect 2-adjacency resource placement does not exist for all star networks.
Abstract: In a large system with many processing elements (PE), it is very expensive to equip each PE with a copy of the resource. It is desirable to distribute few copies of a given resource to ensure that every PE is able to reach a copy of that resource within a certain number of hops. Previous work has been done on the binary hypercube as well as on the k-ary n-cube. We consider the problem of resource sharing among PEs in the star interconnection network (SIN) and present three different placement strategies. First, we consider the perfect 1-adjacency resource placement. In this placement, resources have to be distributed in such a way that every node without a copy of the resource will find exactly one node adjacent to it having a copy of the resource. Second, the perfect full adjacency placement is considered. In this placement each node without a copy of the resource will find all nodes adjacent to it having a copy of the resource. Finally, the perfect 2-adjacency placement is considered where each non resource node is adjacent to exactly two resource copies. We show that a perfect 2-adjacency resource placement does not exist for all star networks.
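The perfect 1-adjacency condition can be checked mechanically on any graph. The sketch below uses small toy graphs rather than the star interconnection network analyzed in the paper; the function name and adjacency-dict representation are assumptions for illustration:

```python
def is_perfect_1_adjacency(adjacency, resource_nodes):
    """Return True if every node WITHOUT a copy of the resource has
    exactly one neighbour WITH a copy (perfect 1-adjacency placement)."""
    for node, neighbours in adjacency.items():
        if node in resource_nodes:
            continue
        copies = sum(1 for n in neighbours if n in resource_nodes)
        if copies != 1:
            return False
    return True

# A 3-node path with the copy in the middle satisfies the condition;
# two opposite copies on a 4-cycle do not (each other node sees two).
path = {0: [1], 1: [0, 2], 2: [1]}
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
ok = is_perfect_1_adjacency(path, {1})        # True
bad = is_perfect_1_adjacency(cycle, {0, 2})   # False
```

The 2-adjacency and full-adjacency variants from the abstract are the same check with `copies != 2` or `copies != len(neighbours)` respectively.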