
Showing papers on "Server published in 1994"


Proceedings ArticleDOI
22 Oct 1994
TL;DR: GroupLens is a system for collaborative filtering of netnews, to help people find articles they will like in the huge stream of available articles, and protect their privacy by entering ratings under pseudonyms, without reducing the effectiveness of the score prediction.
Abstract: Collaborative filters help people make choices based on the opinions of other people. GroupLens is a system for collaborative filtering of netnews, to help people find articles they will like in the huge stream of available articles. News reader clients display predicted scores and make it easy for users to rate articles after they read them. Rating servers, called Better Bit Bureaus, gather and disseminate the ratings. The rating servers predict scores based on the heuristic that people who agreed in the past will probably agree again. Users can protect their privacy by entering ratings under pseudonyms, without reducing the effectiveness of the score prediction. The entire architecture is open: alternative software for news clients and Better Bit Bureaus can be developed independently and can interoperate with the components we have developed.
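
The prediction heuristic sketched above lends itself to a short illustration. The following Python is a minimal correlation-weighted predictor in the spirit of "people who agreed in the past will probably agree again"; the data layout and function names are assumptions for illustration, not the Better Bit Bureau implementation.

```python
# Hedged sketch: predict a user's score for an article by weighting other
# users' deviations from their own mean rating by how well they agreed with
# the target user in the past (a Pearson-style agreement weight).
from statistics import mean

def predict(target, item, ratings):
    """ratings: {user: {article_id: score}}; returns a predicted score for `item`."""
    t_mean = mean(ratings[target].values())
    num = den = 0.0
    for other, scores in ratings.items():
        if other == target or item not in scores:
            continue
        common = set(ratings[target]) & set(scores)
        if len(common) < 2:
            continue
        o_mean = mean(scores.values())
        cov = sum((ratings[target][a] - t_mean) * (scores[a] - o_mean) for a in common)
        sd_t = sum((ratings[target][a] - t_mean) ** 2 for a in common) ** 0.5
        sd_o = sum((scores[a] - o_mean) ** 2 for a in common) ** 0.5
        if sd_t == 0 or sd_o == 0:
            continue
        w = cov / (sd_t * sd_o)                     # agreement weight in [-1, 1]
        num += w * (scores[item] - o_mean)
        den += abs(w)
    return t_mean + num / den if den else t_mean
```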

5,644 citations


Patent
Rick Dedrick1
03 Nov 1994
TL;DR: In this paper, a method and apparatus for providing electronic advertisements to end users in a consumer best-fit pricing manner includes an index database, a user profile database, and a consumer scale matching process.
Abstract: A method and apparatus for providing electronic advertisements to end users in a consumer best-fit pricing manner includes an index database, a user profile database, and a consumer scale matching process. The index database provides storage space for the titles of electronic advertisements. The user profile database provides storage for a set of characteristics which correspond to individual end users of the apparatus. The consumer scale matching process is coupled to the content database and the user profile database and compares the characteristics of the individual end users with a consumer scale associated with the electronic advertisement. The apparatus then charges a fee to the advertiser, based on the comparison by the matching process. In one embodiment, a consumer scale is generated for each of multiple electronic advertisements. These advertisements are then transferred to multiple yellow page servers, and the titles associated with the advertisements are subsequently transferred to multiple metering servers. At the metering servers, a determination is made as to where the characteristics of the end users served by each of the metering servers fall on the consumer scale. The higher the characteristics of the end users served by a particular metering server fall, the higher the fee charged to the advertiser.

1,523 citations


Patent
29 Jul 1994
TL;DR: In this paper, a closed loop, (networked) information management and security system which provides a secure, end-to-end fully automated solution for controlling access, transmission, manipulation, and auditability of high value information comprising an RFID transponder badge 302 and an RF reader transceiver 315 which is associated with a host peripheral or a network.
Abstract: A closed loop, (networked) information management and security system which provides a secure, end-to-end fully automated solution for controlling access, transmission, manipulation, and auditability of high value information comprising an RFID transponder badge 302 and an RF reader transceiver 315 which is associated with a host peripheral or a network. The RF reader transceiver 315 automatically identifies and verifies authorization of the RFID transponder badge holder via a "handshake" prior to allowing access to the host peripheral. The energy generated by the transmission of the interrogation signal from the RF reader means 315 provides a power source which is accumulated and then used to activate a transponder 304 response from the RFID transponder badge 302. The RF reader/transceiver 315 writes the access transaction on either the RFID transponder badge 302 and/or the host peripheral database or the network controller. Alternatively, the RF reader means 315 may be associated via network server with a LAN, WAN, or MAN. Optionally, an RFID badge 302a may be powered by an independent power source such as a flatpak battery 314.

753 citations


Patent
Pierre Wellner1
15 Aug 1994
TL;DR: In this paper, an apparatus and method enables a user to control the selection of electronic multimedia services to be provided to the user by one or more servers over a communication medium, including a scanner for reading marks on an object and for communicating a request signal, having an object code representing the read marks, to a user interface.
Abstract: An apparatus and method enables a user to control the selection of electronic multimedia services to be provided to the user by one or more servers over a communication medium. The apparatus includes a scanner for reading marks on an object and for communicating a request signal, having an object code representing the read marks, to a user interface. The interface receives the request signal and transmits to the servers a request command including an interface identification code and the object code which is used to select the desired electronic multimedia service. The servers identify the selected electronic multimedia service using the object code. The interface then enables the selected electronic multimedia service transmitted from the servers to be received by the user's receiver.

681 citations


Patent
23 May 1994
TL;DR: In this article, a system and method for maintaining data coherency in a system in which data is replicated on two or more servers is presented, where each server is able to update the data replica present on the server.
Abstract: A system and method for maintaining data coherency in a system in which data is replicated on two or more servers. Each server is able to update the data replica present on the server. Updates are logged for each server. Reconciliation of server data replicas is aggressively initiated upon the occurrence of predefined events. These events include arrival at a scheduled time, a request for data by a client system, and server or network failure recovery. Reconciliation is managed by a coordinator server, selected so that at most one coordinator server per network partition is chosen. Logged updates are merged and transmitted to each server containing a data replica. The logged updates are applied unless a conflict is detected. Conflicts are collected and distributed for resolution. Reconciliation is managed between servers without regard to operating system or physical file system type.
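
As a rough illustration of the merge step described above, the sketch below combines per-server update logs and applies each update unless a conflict is detected, collecting conflicts for later resolution. The conflict rule and data shapes are assumptions for illustration, not the patent's exact method.

```python
# Hedged sketch: merge logged updates from all replicas; flag as a conflict any
# item that two different servers updated independently with different values.
def reconcile(logs):
    """logs: {server: [(seq, key, value), ...]}; returns (merged_state, conflicts)."""
    merged, origin, conflicts = {}, {}, []
    for server, updates in logs.items():
        for seq, key, value in sorted(updates):
            if key in merged and origin[key] != server and merged[key] != value:
                conflicts.append((key, origin[key], server))   # collect for resolution
            else:
                merged[key] = value
                origin[key] = server
    return merged, conflicts
```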

505 citations


Proceedings ArticleDOI
28 Sep 1994
TL;DR: Four per-session guarantees are proposed to aid users and applications of weakly consistent replicated data: "read your writes", "monotonic reads", "writes follow reads", and "monotonic writes".
Abstract: Four per-session guarantees are proposed to aid users and applications of weakly consistent replicated data: "read your writes", "monotonic reads", "writes follow reads", and "monotonic writes". The intent is to present individual applications with a view of the database that is consistent with their own actions, even if they read and write from various, potentially inconsistent servers. The guarantees can be layered on existing systems that employ a read-any/write-any replication scheme while retaining the principal benefits of such a scheme, namely high availability, simplicity, scalability, and support for disconnected operation. These session guarantees were developed in the context of the Bayou project at Xerox PARC in which we are designing and building a replicated storage system to support the needs of mobile computing users who may be only intermittently connected.
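
To make one of these guarantees concrete, the sketch below layers "read your writes" on top of arbitrary servers: each session remembers the writes it has issued and refuses to read from a server that has not yet applied them all. The bookkeeping shown is an assumption for illustration, not Bayou's actual mechanism.

```python
# Hedged sketch: a session-level "read your writes" check against weakly
# consistent replicas identified only by the set of write ids they have applied.
class Server:
    def __init__(self):
        self.data, self.applied = {}, set()

    def apply(self, wid, key, value):
        self.data[key] = value
        self.applied.add(wid)

class Session:
    def __init__(self):
        self.write_set = set()                 # ids of writes issued in this session

    def write(self, server, wid, key, value):
        server.apply(wid, key, value)
        self.write_set.add(wid)

    def read(self, server, key):
        if not self.write_set <= server.applied:
            raise RuntimeError("server has not seen this session's writes")
        return server.data.get(key)
```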

476 citations


Proceedings ArticleDOI
24 May 1994
TL;DR: A taxonomy of different cache invalidation strategies is proposed and it is determined that for the units which are often disconnected (sleepers) the best cache invalidation strategy is based on signatures previously used for efficient file comparison, and for units which are connected most of the time (workaholics), the best cache invalidation strategy is based on the periodic broadcast of changed data items.
Abstract: In the mobile wireless computing environment of the future a large number of users equipped with low powered palm-top machines will query databases over the wireless communication channels. Palmtop-based units will often be disconnected for prolonged periods of time due to battery power saving measures; palmtops will also frequently relocate between different cells and connect to different data servers at different times. Caching of frequently accessed data items will be an important technique that will reduce contention on the narrow bandwidth wireless channel. However, cache invalidation strategies will be severely affected by the disconnection and mobility of the clients. The server may no longer know which clients are currently residing under its cell and which of them are currently on. We propose a taxonomy of different cache invalidation strategies and study the impact of clients' disconnection times on their performance. We determine that for the units which are often disconnected (sleepers) the best cache invalidation strategy is based on signatures previously used for efficient file comparison. On the other hand, for units which are connected most of the time (workaholics), the best cache invalidation strategy is based on the periodic broadcast of changed data items.
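
As an illustration of the strategy the paper finds best for rarely disconnected clients ("workaholics"), the sketch below has the server periodically broadcast the identifiers of items changed in the last window, and each listening client drop matching cache entries. The window handling and data shapes are assumptions for illustration only.

```python
# Hedged sketch of broadcast-based invalidation: the server reports recently
# changed item ids; clients purge those ids from their local caches.
def make_invalidation_report(changed, window_start):
    """changed: {item_id: last_update_time}; report items modified since window_start."""
    return {item for item, t in changed.items() if t >= window_start}

def apply_report(cache, report):
    """Drop any cached copy of an item named in the broadcast report."""
    for item_id in report:
        cache.pop(item_id, None)
```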

454 citations


Patent
17 Mar 1994
TL;DR: In this article, the authors present an auto multi-project server (AMPS) system, which is a core piece of software running on a host server computer system and interacting with a messaging system such as electronic mail, fax etc.
Abstract: Design and implementation of an `Auto Multi-Project Server System`, which automates the tasks of Project Management Coordination, for organizational work-group team members. The `Auto Multi-Project Server`, referred to as AMPS, consists of a core piece of software running on a host server computer system and interacting with a messaging system such as electronic mail, fax, etc. Once the AMPS system is configured for the work environment, all interactions with it by work-group team members are via messages. First, the AMPS system compiles multi-project plans into a multi-project database, and tracks the ownership of projects, tasks and resources within the plans. Second, the AMPS system performs automatic checking of resource requests; if resource availability limits are exceeded, then resources are re-allocated to projects based on priorities, and project plans are accordingly changed. Third, the database is processed periodically to send out reminder follow-ups and project status reports. Fourth, the databases are continuously updated based on status changes reported by work-group members. These four steps are continuously repeated enabling an automated method of multi-project management for organizational work-group team members.

313 citations


Proceedings ArticleDOI
08 Dec 1994
TL;DR: By integrating wireless, video, speech and real-time data access technologies, a unique shopping assistant service can be created that personalizes the attention provided to a customer based on individual needs, without limiting his movement, or causing distractions from others in the shopping center.
Abstract: By integrating wireless, video, speech and real-time data access technologies, a unique shopping assistant service can be created that personalizes the attention provided to a customer based on individual needs, without limiting his movement, or causing distractions from others in the shopping center. We have developed this idea into a service based on two products: a very high volume hand-held wireless communications device, the PSA (Personal Shopping Assistant), that the customer owns (or may be provided to a customer by the retailer), and a centralized server located in the shopping center to which the customer communicates using the PSA. The centralized server maintains the customer database, the store database, and provides audio/visual responses to inquiries from tens to hundreds of customers in real-time over a small area wireless network.

288 citations


Journal ArticleDOI
01 Nov 1994
TL;DR: The paper shows how combining the fish-search with a cache greatly reduces these problems and highlights the properties and implementation of a client-based search tool called the “fish-search” algorithm, and compares it to other approaches.
Abstract: Finding specific information in the World-Wide Web (WWW, or Web for short) is becoming increasingly difficult, because of the rapid growth of the Web and because of the diversity of the information offered through the Web. Hypertext in general is ill-suited for information retrieval as it is designed for stepwise exploration. To help readers find specific information quickly, specific overview documents are often included into the hypertext. Hypertext systems often provide simple searching tools such as full text search or title search, that mostly ignore the “hyper-structure” formed by the links. In the WWW, finding information is further complicated by its distributed nature. Navigation, often via overview documents, still is the predominant method of finding one's way around the Web. Several searching tools have been developed, basically of two types: • A gateway, offering (limited) search operations on small or large parts of the WWW, using a pre-compiled database. The database is often built by an automated Web scanner (a “robot”). • A client-based search tool that does automated navigation, thereby working more or less like a browsing user, but much faster and following an optimized strategy. This paper highlights the properties and implementation of a client-based search tool called the “fish-search” algorithm, and compares it to other approaches. The fish-search, implemented on top of Mosaic for X, offers an open-ended selection of search criteria. Client-based searching has some definite drawbacks: slow speed and high network resource consumption. The paper shows how combining the fish-search with a cache greatly reduces these problems. The “Lagoon” cache program is presented. Caches can call each other, currently only to further reduce network traffic. By moving the algorithm into the cache program, the calculation of the answer to a search request can be distributed among the caching servers.
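
The central idea of the fish-search (children of relevant pages are explored further, children of irrelevant pages are cut off) can be illustrated with a simplified crawler. The scoring, depth and width limits, and the fetch/links helpers below are assumptions; the sketch does not reproduce the exact traversal order of the published algorithm.

```python
# Hedged sketch: relevant pages renew their children's exploration depth,
# irrelevant pages shrink it, so the "school" follows promising regions.
from collections import deque

def fish_search(start_url, relevant, fetch, links, depth=3, width=5, limit=50):
    queue, seen, hits = deque([(start_url, depth)]), {start_url}, []
    while queue and len(seen) < limit:
        url, d = queue.popleft()
        page = fetch(url)                         # fetch(url) -> page text (assumed helper)
        if relevant(page):
            hits.append(url)
            child_depth = depth                   # relevant: renew exploration depth
        else:
            child_depth = d - 1                   # irrelevant: the school thins out
        if child_depth > 0:
            for child in links(page)[:width]:     # links(page) -> outgoing URLs (assumed helper)
                if child not in seen:
                    seen.add(child)
                    queue.append((child, child_depth))
    return hits
```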

288 citations


Journal ArticleDOI
TL;DR: The impact that the different VOD system elements have on the video server and set-top is examined from a communications standpoint, and opportunities for open or standard interfaces are identified.
Abstract: Open systems will enable video servers and set-tops to provide different services in a variety of environments. Hewlett-Packard is interested in applying the principles of open systems to video on demand (VOD). In particular, the company is developing a technology base that will allow their servers and set-tops to operate in a variety of environments and enable the provision of a variety of services. The impact that the different VOD system elements have on the video server and set-top is examined from a communications standpoint. Opportunities for open or standard interfaces are identified and recommendations are made on what these should be where possible.

Journal ArticleDOI
01 Nov 1994
TL;DR: The methodology used at the National Center for Supercomputing Applications in building a scalable World Wide Web server is outlined, allowing for dynamic scalability by rotating through a pool of http servers that are alternately mapped to the hostname alias of the www server.
Abstract: While the World Wide Web (www) may appear to be intrinsically scalable through the distribution of files across a series of decentralized servers, there are instances where this form of load distribution is both costly and resource intensive. In such cases it may be necessary to administer a centrally located and managed http server. Given the exponential growth of the internet in general, and www in particular, it is increasingly more difficult for persons and organizations to properly anticipate their future http server needs, both in human resources and hardware requirements. It is the purpose of this paper to outline the methodology used at the National Center for Supercomputing Applications in building a scalable World Wide Web server. The implementation described in the following pages allows for dynamic scalability by rotating through a pool of http servers that are alternately mapped to the hostname alias of the www server. The key components of this configuration include: (1) cluster of identically configured http servers; (2) use of Round-Robin DNS for distributing http requests across the cluster; (3) use of distributed File System mechanism for maintaining a synchronized set of documents across the cluster; and (4) method for administering the cluster. The result of this design is that we are able to add any number of servers to the available pool, dynamically increasing the load capacity of the virtual server. Implementation of this concept has eliminated perceived and real vulnerabilities in our single-server model that had negatively impacted our user community. This particular design has also eliminated the single point of failure inherent in our single-server configuration, increasing the likelihood for continued and sustained availability. While the load is currently distributed in an unpredictable and, at times, deleterious manner, early implementation and maintenance of this configuration have proven promising and effective.
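
The effect of the Round-Robin DNS rotation described above can be mimicked in a few lines: successive lookups of the www alias return the next host in the pool, spreading requests across the identically configured http servers. In the real configuration the rotation is performed by the DNS server itself; the hostnames below are placeholders.

```python
# Hedged sketch: cycle through a hypothetical pool of http servers that are
# alternately mapped to the single "www" hostname alias.
from itertools import cycle

pool = cycle(["www1.example.edu", "www2.example.edu", "www3.example.edu"])

def resolve_www_alias():
    """Return the next host in rotation for the virtual 'www' hostname."""
    return next(pool)

# Successive lookups walk the pool: www1, www2, www3, www1, ...
```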

Journal ArticleDOI
TL;DR: Weighted caching is a generalization of paging in which the cost to evict an item depends on the item as discussed by the authors, and it is studied as a restriction of the well-known k-server problem.
Abstract: Weighted caching is a generalization of paging in which the cost to evict an item depends on the item. We study both of these problems as restrictions of the well-known k-server problem, which involves moving servers in a graph in response to requests so as to minimize the distance traveled.

Book ChapterDOI
Michael K. Reiter1
05 Sep 1994
TL;DR: A brief overview of Rampart is given, focusing primarily on its protocol architecture, and its performance in the prototype implementation and ongoing work is sketched.
Abstract: Rampart is a toolkit of protocols to facilitate the development of high-integrity services, i.e., distributed services that retain their availability and correctness despite the malicious penetration of some component servers by an attacker. At the core of Rampart are new protocols that solve several basic problems in distributed computing, including asynchronous group membership, reliable multicast (Byzantine agreement), and atomic multicast. Using these protocols, Rampart supports the development of high-integrity services via the technique of state machine replication, and also extends this technique with a new approach to server output voting. In this paper we give a brief overview of Rampart, focusing primarily on its protocol architecture. We also sketch its performance in our prototype implementation and ongoing work.

Patent
19 May 1994
TL;DR: In this paper, the authors proposed to store the video signals in the local distributed servers in random access read/write memories (HDA), e.g., electronic RAMs, magnetic or optical disks, and/or the like, from which video signals can flexibly be supplied on-line to the user stations and to store in the central server in sequential access memories, providing cheap mass storage.
Abstract: A video on demand network (VODN), transmits video signals (VS) to user stations (US11, . . . , US2N) pursuant to the receipt of control signals (CS) issued by these user stations. In order to optimize the retrieval costs, this video on demand network maintains a large video library in a central video server (CS) and stores locally popular video signals in a plurality of local distributed video servers (DS1/2) from which the latter video signals are transmitted to the user stations. The video signals provided by the local distributed servers are updated from the central server based upon the changing popularity of the video signals. The present invention proposes in particular to store the video signals in the local distributed servers in random access read/write memories (HDA), e.g., electronic RAMs, magnetic or optical disks, and/or the like, from which the video signals can flexibly be supplied on-line to the user stations and to store the video signals in the central server in sequential access memories, e.g. Digital Audio Tapes (DAT) and CD-ROMs (CDR), providing cheap mass storage.

Proceedings ArticleDOI
15 Oct 1994
TL;DR: This paper presents an admission control algorithm for multimedia servers which exploits the variation in access times of media blocks from disk as well as the variations in client load induced by variable rate compression schemes, and provides statistical service guarantees to each client.
Abstract: A large-scale multimedia server, in practice, has to service a large number of clients simultaneously. Given the real-time requirements of each client and the fixed data transfer bandwidth of disks, a multimedia server must employ admission control algorithms to decide whether a new client can be admitted for service without violating the requirements of the clients already being serviced. In this paper, we present an admission control algorithm for multimedia servers which: (1) exploits the variation in access times of media blocks from disk as well as the variation in client load induced by variable rate compression schemes, and (2) provides statistical service guarantees to each client. The effectiveness of the algorithm is demonstrated through trace-driven simulations.
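
To give a feel for what a statistical (rather than worst-case) admission test looks like, the sketch below admits a new client only if the estimated probability that the combined per-round disk load exceeds the round length stays under an overload bound. The normal approximation and parameter names are assumptions for illustration, not the algorithm evaluated in the paper.

```python
# Hedged sketch: statistical admission control under a normal approximation of
# the total per-round service time of all admitted clients.
from math import erf, sqrt

def admit(clients, candidate, round_length, max_overload_prob=0.01):
    """clients, candidate: (mean_service_time, variance) per round for each stream."""
    loads = clients + [candidate]
    total_mean = sum(m for m, _ in loads)
    total_var = sum(v for _, v in loads)
    if total_var == 0:
        return total_mean <= round_length
    z = (round_length - total_mean) / sqrt(total_var)
    p_overflow = 0.5 * (1 - erf(z / sqrt(2)))     # P(load > round_length)
    return p_overflow <= max_overload_prob
```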

Patent
15 Apr 1994
TL;DR: In this paper, a meeting room is a vehicle whereby the activity of various media servers (50, 54, 58) is coordinated to effectuate conferences between multiple participants in more than one medium.
Abstract: A circuit configuration in a multimedia network (10) simulates an actual meeting room where conferences between two or more people may be held. This facilitates the creation in the network (10) of flexible, long-term multimedia conferences between conferees who are separated from one another. Any number of conferees may communicate with one another via one or more of audio, video, and data. Virtual meeting rooms may persist in the network (10) for predetermined periods of time controlled by the users of the meeting room. The room may remain in the network (10) independent of whether or not a user is connected to the room. The meeting room is a vehicle whereby the activity of various media servers (50, 54, 58) is coordinated to effectuate conferences between multiple participants in more than one medium. The servers are associated with storage devices (164) which may record or store certain aspects of multimedia conferences using the virtual meeting room.

Patent
Catherine K. Eilert1, Bernard Pierce1
04 Apr 1994
TL;DR: In this paper, a workload manager creates an in storage representation of a set of performance goals, each goal associated with a class of clients (e.g., client transactions) in a client/server data processing system.
Abstract: A workload manager creates an in storage representation of a set of performance goals, each goal associated with a class of clients (e.g., client transactions) in a client/server data processing system. A set of servers, providing service to the clients, are managed to bring the clients into conformity with the class performance goals by: calculating performance indexes for each class to determine the target class(es) which are farthest behind their class performance goals; analyzing the relationship among servers and client classes to determine which servers serve which classes; determining which resource(s) are impacting the service provided to the key servers (that is, those on which the target class(es) are most heavily reliant), and projecting the effect of making more of these resources available to those servers; and, finally, making the changes to those resources which are projected to most favorably indirectly affect the performance of the target class(es).
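
The goal-tracking step described above can be illustrated with a small sketch: compute a performance index per class and pick the class farthest behind its goal as the target for resource adjustments. Treating the goal as a response-time goal, and the index as observed over goal, is an assumption for illustration.

```python
# Hedged sketch: a performance index above 1.0 means a class is missing its
# goal; the class with the largest index is the target for adjustment.
def pick_target(classes):
    """classes: {name: (observed_response_time, goal_response_time)}."""
    index = {name: obs / goal for name, (obs, goal) in classes.items()}
    target = max(index, key=index.get)
    return target, index[target]
```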

Journal ArticleDOI
01 Nov 1994
TL;DR: This paper examines the effects of OS scheduling and page migration policies on the performance of compute servers for multiprogramming and parallel application workloads, and suggests that policies based only on TLB miss information can be quite effective, and useful for addressing the data distribution problems of space-sharing schedulers.
Abstract: Several cache-coherent shared-memory multiprocessors have been developed that are scalable and offer a very tight coupling between the processing resources. They are therefore quite attractive for use as compute servers for multiprogramming and parallel application workloads. Process scheduling and memory management, however, remain challenging due to the distributed main memory found on such machines. This paper examines the effects of OS scheduling and page migration policies on the performance of such compute servers. Our experiments are done on the Stanford DASH, a distributed-memory cache-coherent multiprocessor. We show that for our multiprogramming workloads consisting of sequential jobs, the traditional Unix scheduling policy does very poorly. In contrast, a policy incorporating cluster and cache affinity along with a simple page-migration algorithm offers up to two-fold performance improvement. For our workloads consisting of multiple parallel applications, we compare space-sharing policies that divide the processors among the applications to time-slicing policies such as standard Unix or gang scheduling. We show that space-sharing policies can achieve better processor utilization due to the operating point effect, but time-slicing policies benefit strongly from user-level data distribution. Our initial experience with automatic page migration suggests that policies based only on TLB miss information can be quite effective, and useful for addressing the data distribution problems of space-sharing schedulers.

Patent
18 Feb 1994
TL;DR: Disclosed as mentioned in this paper is a system that provides a common application software interface for a variety of vendor supplied license servers by translating a single set of program calls into a set of calls for each license server.
Abstract: Disclosed is a system that provides a common application software interface for a variety of vendor supplied license servers. The system provides a single set of program calls and translates this single set of calls into a set of calls for each license server. This translation is performed using a translate table, which is easily updated to interface to newly developed or newly released license servers. The system runs as a separate process within the operating environment to monitor the application program, and as long as the application program continues to provide services to the user, the system sends periodic license renewal messages to the license server. The system also notifies the user when the application program cannot obtain a license in order to provide a consistent user interface across applications.

Patent
09 Dec 1994
TL;DR: A distributed program configuration database system is designed for use on a client-server network as discussed by the authors, which consists of a plurality of program servers which maintain version information for various program components, and a program developer, upon logging into a client terminal on the network, establishes a workspace or project and connects with one of the servers.
Abstract: A distributed program configuration database system is designed for use on a client-server network. The system consists of a plurality of program servers which maintain version information for various program components. A program developer, upon logging into a client terminal on the network, establishes a workspace or project and connects with one of the servers. After connection to the server has been made, a draft of the program configuration is retrieved from the server. The configuration draft may include information for constructing some of the program components and "bridge" information identifying other program servers where additional program components are located. The workspace uses the component information to assemble components and the bridge information to connect to other servers and retrieve the remaining components in order to assemble the complete source code for a program in the workspace.

Journal ArticleDOI
TL;DR: An evaluation of the first statewide mandated training for alcohol servers in Oregon found statistically significant reductions in single-vehicle nighttime traffic crashes by the end of 1989 following the implementation of the compulsory server-training policy.

Patent
14 Feb 1994
TL;DR: In this paper, a data retrieval and acquisition system with a wireless handheld interface for data entry by the user is presented, which includes a communication server for communicating, such as through IR signals, with the handheld interfaces.
Abstract: A data retrieval and acquisition system having a wireless handheld interface for data entry by the user. The system includes a communication server for communicating, such as through IR signals, with the handheld interfaces. The communications server communicates with multiple command servers and with a master server and/or other communication servers through a communications bus. The handheld interface includes a touch screen which is operated through an event driven architecture. The user is allowed to enter data through virtual rolling keys, a scroll bar, virtual key pad, bar code reader, and the like. The system minimizes the transmission time by minimizing the necessary information transmitted and by synchronizing the operation of the handheld interfaces with the corresponding communications server. The communications server transmits information to the handheld through a first unique protocol and to the command server through a second unique protocol. Data transmission is further reduced by using shorthand command codes for constants, such as for commands, user names, and the like.

Journal ArticleDOI
TL;DR: In this paper, the probabilistic location set covering problem is cast as a probabilistically distributed location set coverage problem, where the coverage constraint becomes an availability constraint, and the objective is to minimize the required number of servers in an environment in which servers are frequently busy.
Abstract: The deterministic location set covering problem seeks the minimum number of servers and their positions such that each point of demand has at least one server initially stationed within a time or distance standard. In an environment in which servers are frequently busy, the problem can be cast as the probabilistic location set covering problem. In the probabilistic formulation, the coverage constraint becomes an availability constraint: a requirement that each point of demand has a server actually available within the time standard, with alpha reliability. The objective of minimizing the required number of servers remains the same. An earlier probabilistic statement of this problem assumed that the server availabilities were independent. In this paper, queuing theory is applied to the development of the availability constraints. This new generation of probabilistic location model thus corrects the prior assumption of independence of server availability. Formulations are presented and computational experience is offered, together with an extension: the Maximin Availability Siting Heuristics, MASH.
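
One common way to write the availability-constrained covering model sketched above is shown below; the notation is assumed for illustration, and the paper's queuing-based contribution lies in how the busy fraction, and hence the required number of servers, is derived rather than in the constraint's shape.

```latex
% Illustrative formulation (notation assumed): x_j = servers sited at j,
% N_i = sites within the standard of demand point i, q = fraction of time a
% server is busy, alpha = required reliability.
\begin{aligned}
\text{minimize}\quad & \sum_{j} x_j \\
\text{subject to}\quad & \sum_{j \in N_i} x_j \;\ge\; b_i \qquad \forall i,\\
& b_i \;=\; \min\{\, b \in \mathbb{Z}_{>0} : 1 - q^{\,b} \ge \alpha \,\}
      \;=\; \Big\lceil \tfrac{\log(1-\alpha)}{\log q} \Big\rceil,\\
& x_j \in \mathbb{Z}_{\ge 0}.
\end{aligned}
```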

Proceedings Article
12 Sep 1994
TL;DR: A family of ordered SDDSs, called RP*, is proposed, providing for ordered and dynamic files on multicomputers, and thus for more efficient processing of range queries and of ordered traversals of files.
Abstract: Hash-based scalable distributed data structures (SDDSs), like LH* and DDH, for networks of interconnected computers (multicomputers) were shown to open new perspectives for file management. We propose a family of ordered SDDSs, called RP*, providing for ordered and dynamic files on multicomputers, and thus for more efficient processing of range queries and of ordered traversals of files. The basic algorithm, termed RP*N, builds the file with the same key space partitioning as a B-tree, but avoids indexes through the use of multicast. The algorithms RP*C and RP*S enhance throughput for faster networks, adding the indexes on clients, or on clients and servers, while either decreasing or avoiding multicast. RP* files are shown highly efficient, with access performance exceeding traditional files by an order of magnitude or two, and, for non-range queries, very close to LH*.
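
The index-free RP*N variant described above can be illustrated with a toy range-partitioned file: every server hears a multicast request, but only the server whose key range covers the key acts on it. The in-process loop below stands in for a real network multicast, and all names are illustrative.

```python
# Hedged sketch: B-tree-like key space partitioning across servers, resolved
# by multicast instead of a client-side index.
class RangeServer:
    def __init__(self, low, high):
        self.low, self.high, self.records = low, high, {}

    def covers(self, key):
        return self.low <= key < self.high

def multicast_insert(servers, key, value):
    for server in servers:            # every server receives the message...
        if server.covers(key):        # ...only the covering server acts on it
            server.records[key] = value
            return server
    raise KeyError(f"no server covers {key!r}")
```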

Proceedings ArticleDOI
David Kotz1
14 Nov 1994
TL;DR: In this paper, a disk-directed I/O technique was proposed to allow the disk servers to determine the flow of data for maximum performance, which was shown to provide consistent high performance that was largely independent of data distribution, obtained up to 93% of peak disk bandwidth.
Abstract: Many scientific applications that run on today's multiprocessors, such as weather forecasting and seismic analysis, are bottlenecked by their file-I/O needs. Even if the multiprocessor is configured with sufficient I/O hardware, the file-system software often fails to provide the available bandwidth to the application. Although libraries and enhanced file-system interfaces can make a significant improvement, we believe that fundamental changes are needed in the file-server software. We propose a new technique, disk-directed I/O, to allow the disk servers to determine the flow of data for maximum performance. Our simulations show that tremendous performance gains are possible. Indeed, disk-directed I/O provided consistent high performance that was largely independent of data distribution, obtained up to 93% of peak disk bandwidth, and was as much as 16 times faster than traditional parallel file systems.

Journal ArticleDOI
TL;DR: An important and novel feature of this method is that the client need not be able to identify or authenticate even a single server, instead, the client is required to possess only a single public key for the service.
Abstract: We present a method for constructing replicated services that retain their availability and integrity despite several servers and clients being corrupted by an intruder, in addition to others failing benignly. We also address the issue of maintaining a causal order among client requests. We illustrate a security breach resulting from an intruder's ability to effect a violation of causality in the sequence of requests processed by the service and propose an approach to counter this attack. An important and novel feature of our techniques is that the client need not be able to identify or authenticate even a single server. Instead, the client is required to possess only a single public key for the service. We demonstrate the performance of our techniques with a service we have implemented using one of our protocols.

Proceedings ArticleDOI
24 May 1994
TL;DR: The adaptive page server is shown to provide very good performance, generally outperforming the pure page server, the pure object server, and the other alternatives as well.
Abstract: For reasons of simplicity and communication efficiency, a number of existing object-oriented database management systems are based on page server architectures; data pages are their minimum unit of transfer and client caching. Despite their efficiency, page servers are often criticized as being too restrictive when it comes to concurrency, as existing systems use pages as the minimum locking unit as well. In this paper we show how to support object-level locking in a page server context. Several approaches are described, including an adaptive granularity approach that uses page-level locking for most pages but switches to object-level locking when finer-grained sharing is demanded. We study the performance of these approaches, comparing them to both a pure page server and a pure object server. For the range of workloads that we have examined, our results indicate that a page server is clearly preferable to an object server. Moreover, the adaptive page server is shown to provide very good performance, generally outperforming the pure page server, the pure object server, and the other alternatives as well.
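
The adaptive-granularity idea can be sketched as follows: a page is locked as a whole until a second client asks for an object on the same page, at which point the page lock is de-escalated into object-level locks. The data structures and de-escalation policy below are assumptions for illustration, not the protocol evaluated in the paper.

```python
# Hedged sketch: start with page-level locks, switch a page to object-level
# locking only when finer-grained sharing is actually demanded.
class AdaptiveLockManager:
    def __init__(self):
        self.page_owner = {}     # page -> client holding the page-level lock
        self.page_objects = {}   # page -> objects touched under that page lock
        self.object_owner = {}   # (page, oid) -> client, once de-escalated
        self.fine = set()        # pages switched to object-level locking

    def lock(self, client, page, oid):
        """Return True if `client` may access object `oid` on `page`."""
        if page not in self.fine:
            owner = self.page_owner.setdefault(page, client)
            if owner == client:
                self.page_objects.setdefault(page, set()).add(oid)
                return True
            # Conflict on the page lock: de-escalate it into object locks on
            # the objects the current owner has actually used.
            self.fine.add(page)
            for held in self.page_objects.pop(page, set()):
                self.object_owner[(page, held)] = owner
            del self.page_owner[page]
        holder = self.object_owner.setdefault((page, oid), client)
        return holder == client
```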

Proceedings ArticleDOI
01 Oct 1994
TL;DR: Specific improvements developed for NTP Version 3 are described which have resulted in increased accuracy, stability and reliability in both local-area and wide-area networks and certain enhancements to the Unix operating system software are described to realize submillisecond accuracies with fast workstations and networks.
Abstract: The Network Time Protocol (NTP) is widely deployed in the Internet to synchronize computer clocks to each other and to international standards via telephone modem, radio and satellite. The protocols and algorithms have evolved over more than a decade to produce the present NTP Version 3 specification and implementations. Most of the estimated deployment of 100,000 NTP servers and clients enjoy synchronization to within a few tens of milliseconds in the Internet of today. This paper describes specific improvements developed for NTP Version 3 which have resulted in increased accuracy, stability and reliability in both local-area and wide-area networks. These include engineered refinements of several algorithms used to measure time differences between a local clock and a number of peer clocks in the network, as well as to select the best ensemble from among a set of peer clocks and combine their differences to produce a clock accuracy better than any in the ensemble. This paper also describes engineered refinements of the algorithms used to adjust the time and frequency of the local clock, which functions as a disciplined oscillator. The refinements provide automatic adjustment of message-exchange intervals in order to minimize network traffic between clients and busy servers while maintaining the best accuracy. Finally, this paper describes certain enhancements to the Unix operating system software in order to realize submillisecond accuracies with fast workstations and networks.
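
The measurement step these algorithms build on is the standard NTP on-wire calculation from the four timestamps of a request/response exchange; the formula is part of the protocol, while the surrounding function is just a sketch.

```python
# t1 = client transmit, t2 = server receive, t3 = server transmit, t4 = client receive.
def offset_and_delay(t1, t2, t3, t4):
    offset = ((t2 - t1) + (t3 - t4)) / 2.0    # estimated offset of the peer clock
    delay = (t4 - t1) - (t3 - t2)             # round-trip network delay
    return offset, delay
```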

Patent
11 Aug 1994
TL;DR: In this article, a system for remote mirroring of digital data from a primary network server to a remote network server includes a primary data transfer unit and a remote data transfer units which are connectable with one another by a conventional communication link.
Abstract: A system for remote mirroring of digital data from a primary network server to a remote network server includes a primary data transfer unit and a remote data transfer unit which are connectable with one another by a conventional communication link. The primary data transfer unit sends mirrored data from the primary network server over the link to the remote data transfer unit which is located a safe distance away. Each data transfer unit includes a server interface and a link interface. The server interface is viewed by the network operating system as another disk drive controller. The link interface includes four interconnected parallel processors which perform read and write processes in parallel. The link interface also includes a channel service unit which may be tailored to commercial communications links such as T1, E1, or analog telephone lines connected by modems.