
Showing papers on "Server" published in 2001


Proceedings ArticleDOI
21 Oct 2001
TL;DR: The Cooperative File System is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval with a completely decentralized architecture that can scale to large systems.
Abstract: The Cooperative File System (CFS) is a new peer-to-peer read-only storage system that provides provable guarantees for the efficiency, robustness, and load-balance of file storage and retrieval. CFS does this with a completely decentralized architecture that can scale to large systems. CFS servers provide a distributed hash table (DHash) for block storage. CFS clients interpret DHash blocks as a file system. DHash distributes and caches blocks at a fine granularity to achieve load balance, uses replication for robustness, and decreases latency with server selection. DHash finds blocks using the Chord location protocol, which operates in time logarithmic in the number of servers. CFS is implemented using the SFS file system toolkit and runs on Linux, OpenBSD, and FreeBSD. Experience on a globally deployed prototype shows that CFS delivers data to clients as fast as FTP. Controlled tests show that CFS is scalable: with 4,096 servers, looking up a block of data involves contacting only seven servers. The tests also demonstrate nearly perfect robustness and unimpaired performance even when as many as half the servers fail.
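
As a rough illustration of the consistent-hashing idea behind DHash/Chord (a minimal sketch, not the CFS implementation; the server names and block key are hypothetical), block keys and server identifiers are hashed onto the same ring, and a block is stored at the first server whose identifier follows the block's key:

```python
# Minimal sketch (not CFS/Chord itself): blocks and servers are hashed onto a
# ring, and each block lives at the "successor" server of its key on the ring.
import hashlib
from bisect import bisect_right

def ring_id(name: str, bits: int = 32) -> int:
    """Hash a name onto a 2**bits identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % (2 ** bits)

class BlockRing:
    def __init__(self, servers):
        # Sorted (id, server) pairs form the ring.
        self.ring = sorted((ring_id(s), s) for s in servers)

    def successor(self, key: str) -> str:
        """Return the server responsible for a block key."""
        ids = [i for i, _ in self.ring]
        idx = bisect_right(ids, ring_id(key)) % len(self.ring)
        return self.ring[idx][1]

ring = BlockRing([f"server{i}" for i in range(8)])
print(ring.successor("block:readme:0"))   # server holding this block
```

Chord itself locates this successor with a logarithmic number of messages using per-server finger tables, rather than the global sorted list used in this toy sketch.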

1,733 citations


Journal ArticleDOI
TL;DR: SIENA, an event notification service that is designed and implemented to exhibit both expressiveness and scalability, is presented; the service's interface to applications, the algorithms used by networks of servers to select and deliver event notifications, and the strategies used to optimize performance are described.
Abstract: The components of a loosely coupled system are typically designed to operate by generating and responding to asynchronous events. An event notification service is an application-independent infrastructure that supports the construction of event-based systems, whereby generators of events publish event notifications to the infrastructure and consumers of events subscribe with the infrastructure to receive relevant notifications. The two primary services that should be provided to components by the infrastructure are notification selection (i.e., determining which notifications match which subscriptions) and notification delivery (i.e., routing matching notifications from publishers to subscribers). Numerous event notification services have been developed for local-area networks, generally based on a centralized server to select and deliver event notifications. Therefore, they suffer from an inherent inability to scale to wide-area networks, such as the Internet, where the number and physical distribution of the service's clients can quickly overwhelm a centralized solution. The critical challenge in the setting of a wide-area network is to maximize the expressiveness in the selection mechanism without sacrificing scalability in the delivery mechanism. This paper presents SIENA, an event notification service that we have designed and implemented to exhibit both expressiveness and scalability. We describe the service's interface to applications, the algorithms used by networks of servers to select and deliver event notifications, and the strategies used to optimize performance. We also present results of simulation studies that examine the scalability and performance of the service.
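
As a loose sketch of notification selection (not SIENA's covering-based routing algorithms; the attribute names, operators, and subscribers are invented for illustration), a subscription can be modeled as a conjunction of attribute constraints that a notification must satisfy:

```python
# Toy content-based matching: deliver a notification to every subscriber whose
# attribute constraints all hold. Real SIENA also routes among servers.
OPS = {
    "=": lambda a, b: a == b,
    ">": lambda a, b: a > b,
    "<": lambda a, b: a < b,
}

def matches(notification: dict, subscription: list) -> bool:
    """subscription is a list of (attribute, op, value) constraints."""
    return all(
        attr in notification and OPS[op](notification[attr], value)
        for attr, op, value in subscription
    )

subs = {
    "alice": [("type", "=", "stock"), ("price", ">", 100)],
    "bob":   [("type", "=", "news")],
}
event = {"type": "stock", "symbol": "XYZ", "price": 120}
print([who for who, s in subs.items() if matches(event, s)])  # ['alice']
```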

1,568 citations


Proceedings ArticleDOI
21 Oct 2001
TL;DR: Experimental results from a prototype confirm that the system adapts to offered load and resource availability, and can reduce server energy usage by 29% or more for a typical Web workload.
Abstract: Internet hosting centers serve multiple service sites from a common hardware base. This paper presents the design and implementation of an architecture for resource management in a hosting center operating system, with an emphasis on energy as a driving resource management issue for large server clusters. The goals are to provision server resources for co-hosted services in a way that automatically adapts to offered load, improve the energy efficiency of server clusters by dynamically resizing the active server set, and respond to power supply disruptions or thermal events by degrading service in accordance with negotiated Service Level Agreements (SLAs). Our system is based on an economic approach to managing shared server resources, in which services "bid" for resources as a function of delivered performance. The system continuously monitors load and plans resource allotments by estimating the value of their effects on service performance. A greedy resource allocation algorithm adjusts resource prices to balance supply and demand, allocating resources to their most efficient use. A reconfigurable server switching infrastructure directs request traffic to the servers assigned to each service. Experimental results from a prototype confirm that the system adapts to offered load and resource availability, and can reduce server energy usage by 29% or more for a typical Web workload.
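
A toy sketch of the economic idea (this is not the paper's allocator; the services, utility numbers, and energy cost are hypothetical): each service reports the value it gets from successive servers, and a greedy loop hands out servers to the highest marginal bid, leaving the rest powered off once no bid exceeds the assumed per-server energy cost:

```python
# Greedy "bid for resources" sketch: allocate one server at a time to the
# service with the highest marginal utility; stop when bids fall below the
# energy cost of keeping another server on.
def greedy_allocate(utilities, total_servers, energy_cost_per_server):
    """utilities: {service: [utility with 1 server, with 2 servers, ...]}"""
    allotment = {s: 0 for s in utilities}
    for _ in range(total_servers):
        def marginal(s):
            k = allotment[s]
            if k >= len(utilities[s]):
                return 0.0  # no further utility beyond the listed allotments
            return utilities[s][k] - (utilities[s][k - 1] if k else 0.0)
        best = max(utilities, key=marginal)
        if marginal(best) <= energy_cost_per_server:
            break  # not worth powering on another server
        allotment[best] += 1
    return allotment

demo = {"site_a": [10, 16, 19, 20], "site_b": [8, 12, 13, 13.5]}
print(greedy_allocate(demo, total_servers=6, energy_cost_per_server=2.0))
```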

1,492 citations


Journal ArticleDOI
TL;DR: This work uses a limiting, deterministic model representing the behavior as n → ∞ to approximate the behavior of finite systems and provides simulations that demonstrate that the method accurately predicts system behavior, even for relatively small systems.
Abstract: We consider the following natural model: customers arrive as a Poisson stream of rate λn, λ < 1, at a collection of n servers. Each customer chooses some constant d servers independently and uniformly at random from the n servers and waits for service at the one with the fewest customers. Customers are served according to the first-in first-out (FIFO) protocol and the service time for a customer is exponentially distributed with mean 1. We call this problem the supermarket model. We wish to know how the system behaves and in particular we are interested in the effect that the parameter d has on the expected time a customer spends in the system in equilibrium. Our approach uses a limiting, deterministic model representing the behavior as n → ∞ to approximate the behavior of finite systems. The analysis of the deterministic model is interesting in its own right. Along with a theoretical justification of this approach, we provide simulations that demonstrate that the method accurately predicts system behavior, even for relatively small systems. Our analysis provides surprising implications. Having d=2 choices leads to exponential improvements in the expected time a customer spends in the system over d=1, whereas having d=3 choices is only a constant factor better than d=2. We discuss the possible implications for system design.
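
The effect of d is easy to reproduce with a small event-driven simulation (an illustrative sketch under the model's assumptions, not the paper's deterministic limiting analysis); by Little's law, the time-averaged number of customers divided by the arrival rate approximates the expected time in system:

```python
# Supermarket-model simulation sketch: Poisson arrivals at total rate lam*n,
# each joining the shortest of d randomly sampled FIFO queues; service Exp(1).
import heapq
import random

def supermarket(n=100, lam=0.9, d=2, horizon=2000.0, seed=0):
    rng = random.Random(seed)
    q = [0] * n                         # customers queued at each server
    total = 0                           # customers currently in the system
    events = [(rng.expovariate(lam * n), "arr", -1)]
    last, area = 0.0, 0.0               # time-integral of customers in system
    while True:
        now, kind, srv = heapq.heappop(events)
        if now > horizon:
            break
        area += total * (now - last)
        last = now
        if kind == "arr":
            srv = min(rng.sample(range(n), d), key=lambda i: q[i])
            q[srv] += 1
            total += 1
            if q[srv] == 1:             # server was idle: start its service
                heapq.heappush(events, (now + rng.expovariate(1.0), "dep", srv))
            heapq.heappush(events, (now + rng.expovariate(lam * n), "arr", -1))
        else:                           # departure: serve the next customer
            q[srv] -= 1
            total -= 1
            if q[srv] > 0:
                heapq.heappush(events, (now + rng.expovariate(1.0), "dep", srv))
    return (area / last) / (lam * n)    # mean time in system via Little's law

for d in (1, 2, 3):
    print("d =", d, "mean time in system ~", round(supermarket(d=d), 2))
```

With λ = 0.9 the d = 1 case behaves like independent M/M/1 queues with mean sojourn time 1/(1 − λ) = 10, while d = 2 is dramatically smaller and d = 3 only modestly smaller still, matching the qualitative conclusion above.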

1,444 citations


Patent
30 Mar 2001
TL;DR: In this article, the authors describe a client-server system where a local client computer provides a user interface to interact with at least one remote server computer which implements data processing in response to the local client computer.
Abstract: Client-server systems and methods for transferring data via a network, including a wireless network, between a server (61) and one or more clients (41) or browsers that are spatially distributed (i.e., situated at different locations). At least one local client computer provides a user interface to interact with at least one remote server computer which implements data processing in response to the local client computer. The user interface may be a browser or a thin client.

1,427 citations


Patent
23 Jul 2001
TL;DR: In this paper, the authors propose to assign failover priorities to virtual servers in a cluster of two or more autonomous server nodes, where each virtual server has one or more virtual IP addresses and load balancing can be provided by distributing virtual servers from a failed node to multiple different nodes.
Abstract: Systems and methods, including computer program products, providing high-availability in server systems. In one implementation, a server system is cluster of two or more autonomous server nodes, each running one or more virtual servers. When a node fails, its virtual servers are migrated to one or more other nodes. Connectivity between nodes and clients is based on virtual IP addresses, where each virtual server has one or more virtual IP addresses. Virtual servers can be assigned failover priorities, and, in failover, higher priority virtual servers can be migrated before lower priority ones. Load balancing can be provided by distributing virtual servers from a failed node to multiple different nodes. When a port within a node fails, the node can reassign virtual IP addresses from the failed port to other ports on the node until no good ports remain and only then migrate virtual servers to another node or nodes.
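
A minimal sketch of priority-ordered failover (hypothetical node and virtual-server names; not the patented implementation): virtual servers from the failed node are migrated highest priority first, each landing on the surviving node currently hosting the fewest virtual servers, which also spreads the load:

```python
# Migrate virtual servers off a failed node in descending failover priority,
# distributing them across the least-loaded surviving nodes.
def fail_over(nodes, failed):
    """nodes: {node: [(virtual_server, priority), ...]} -- mutated in place."""
    evacuees = sorted(nodes.pop(failed), key=lambda vs: vs[1], reverse=True)
    for vserver in evacuees:
        target = min(nodes, key=lambda n: len(nodes[n]))
        nodes[target].append(vserver)
    return nodes

cluster = {
    "node1": [("vs-web", 10), ("vs-mail", 5)],
    "node2": [("vs-db", 10)],
    "node3": [("vs-test", 1)],
}
print(fail_over(cluster, "node1"))
```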

1,351 citations


Patent
09 Apr 2001
TL;DR: In this article, the authors present a system and apparatus for efficient and reliable control and distribution of data files or portions of files, applications, or other data objects in large-scale distributed networks.
Abstract: The present invention provides a system and apparatus for efficient and reliable control and distribution of data files or portions of files, applications, or other data objects in large-scale distributed networks. A unique content-management front-end provides efficient controls for triggering distribution of digitized data content to selected groups of a large number of remote computer servers. Transport-layer protocols interact with distribution controllers to automatically determine an optimized tree-like distribution sequence to group leaders selected by network devices at each remote site. Reliable store-and-forward transfer to clusters is accomplished using a unicast protocol in the ordered tree sequence. Once command messages and content arrive at all participating group leaders, local hybrid multicast protocols efficiently and reliably distribute them to the back-end nodes for interpretation and execution. Positive acknowledgement is then sent back to the content manager from each group leader, and the updated content in each remote device autonomously goes 'live' when the content change is locally completed.

1,261 citations


Patent
25 Jan 2001
TL;DR: In this paper, a decoding process extracts the identifier from a media object and possibly additional context information and forwards it to a server, which, in turn, maps the identifier to an action, such as returning metadata, re-directing the request to one or more other servers, requesting information from another server to identify the media object, etc.
Abstract: Media objects are transformed into active, connected objects via identifiers embedded into them or their containers. In the context of a user's playback experience, a decoding process extracts the identifier from a media object and possibly additional context information and forwards it to a server. The server, in turn, maps the identifier to an action, such as returning metadata, re-directing the request to one or more other servers, requesting information from another server to identify the media object, etc. The linking process applies to broadcast objects as well as objects transmitted over networks in streaming and compressed file formats.

1,026 citations


Patent
05 Nov 2001
TL;DR: In this article, a system and method for maintaining consistent server-side state across a pool of collaborating servers with independent state repositories is presented; when a client performs an event on a collaborating server that affects such state, notification of the event is published into a queue maintained in client-side state and shared by all of the collaborating servers in the pool.
Abstract: A system and method are provided for maintaining consistent server-side state across a pool of collaborating servers with independent state repositories. When a client performs an event on a collaborating server which affects such state on the server, it publishes notification of the event into a queue maintained in client-side state which is shared by all of the collaborating servers in the pool. As the client makes requests to servers within the pool, the queue is thus included in each request. When a collaborating server needs to access its server-side state in question, it first discerns events new to it from the queue and replicates their effects into such server-side state. As a result, the effects of events upon server-side state are replicated asynchronously across the servers in the pool, as the client navigates among them.
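
A toy sketch of the client-carried event queue (the class and field names are invented; this is not the patented system): each server records how far into the shared queue it has applied events and replays only the newer ones into its own state before handling a request:

```python
# The client appends each state-changing event to a queue it sends with every
# request; each server catches up from that queue before serving the request.
class CollabServer:
    def __init__(self, name):
        self.name, self.state, self.applied = name, {}, 0

    def handle(self, request, event_queue):
        for seq, (key, value) in enumerate(event_queue, start=1):
            if seq > self.applied:          # event is new to this server
                self.state[key] = value     # replicate its effect locally
                self.applied = seq
        return f"{self.name} sees {self.state} for {request}"

queue = []                                   # kept in client-side state
a, b = CollabServer("A"), CollabServer("B")
queue.append(("cart", ["book"]))             # event performed on server A
print(a.handle("checkout", queue))
print(b.handle("view-cart", queue))          # B catches up from the queue
```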

992 citations


Patent
19 Apr 2001
TL;DR: In this article, a centralized and differentiated content and application delivery system allows content providers to directly control the delivery of content based on regional and temporal preferences, client identity and content priority.
Abstract: A technique for centralized and differentiated content and application delivery allows content providers to directly control the delivery of content based on regional and temporal preferences, client identity and content priority. A scalable system is provided in an extensible framework for edge services, employing a combination of a flexible profile definition language and an open edge server architecture in order to add new and unforeseen services on demand. In one or more edge servers content providers are allocated dedicated resources, which are not affected by the demand or the delivery characteristics of other content providers. Each content provider can differentiate different local delivery resources within its global allocation. Since the per-site resources are guaranteed, intra-site differentiation can be guaranteed. Administrative resources are provided to dynamically adjust service policies of the edge servers.

899 citations


Proceedings ArticleDOI
22 Apr 2001
TL;DR: This work develops several placement algorithms that use workload information, such as client latency and request rates, to make informed placement decisions, and evaluates the placement algorithms using both synthetic and real network topologies, as well as Web server traces.
Abstract: There has been an increasing deployment of content distribution networks (CDNs) that offer hosting services to Web content providers. CDNs deploy a set of servers distributed throughout the Internet and replicate provider content across these servers for better performance and availability than centralized provider servers. Existing work on CDNs has primarily focused on techniques for efficiently redirecting user requests to appropriate CDN servers to reduce request latency and balance load. However, little attention has been given to the development of placement strategies for Web server replicas to further improve CDN performance. We explore the problem of Web server replica placement in detail. We develop several placement algorithms that use workload information, such as client latency and request rates, to make informed placement decisions. We then evaluate the placement algorithms using both synthetic and real network topologies, as well as Web server traces, and show that the placement of Web replicas is crucial to CDN performance. We also address a number of practical issues when using these algorithms, such as their sensitivity to imperfect knowledge about client workload and network topology, the stability of the input data, and methods for obtaining the input.
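
In the spirit of the workload-aware greedy strategies evaluated in the paper (an illustrative sketch, not the authors' code; the latencies and request rates are made up), each additional replica is placed at the candidate site that most reduces total request-rate-weighted latency, assuming each client is served by its nearest replica:

```python
# Greedy replica placement: repeatedly add the site that yields the largest
# reduction in total (request rate x latency to nearest replica).
def greedy_placement(latency, request_rate, k):
    """latency[client][site] in ms, request_rate[client] in req/s."""
    chosen = []
    def total_cost(sites):
        return sum(request_rate[c] * min(latency[c][s] for s in sites)
                   for c in latency)
    candidates = {s for c in latency for s in latency[c]}
    for _ in range(k):
        best = min(candidates - set(chosen),
                   key=lambda s: total_cost(chosen + [s]))
        chosen.append(best)
    return chosen

latency = {"c1": {"ny": 10, "sf": 80, "eu": 120},
           "c2": {"ny": 90, "sf": 15, "eu": 140},
           "c3": {"ny": 100, "sf": 130, "eu": 20}}
rate = {"c1": 50, "c2": 30, "c3": 40}
print(greedy_placement(latency, rate, k=2))
```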

Journal ArticleDOI
TL;DR: Six key areas of streaming video are covered, including video compression, application-layer QoS control, continuous media distribution services, streaming servers, media synchronization mechanisms, and protocols for streaming media.
Abstract: Due to the explosive growth of the Internet and increasing demand for multimedia information on the Web, streaming video over the Internet has received tremendous attention from academia and industry. Transmission of real-time video typically has bandwidth, delay, and loss requirements. However, the current best-effort Internet does not offer any quality of service (QoS) guarantees to streaming video. Furthermore, for video multicast, it is difficult to achieve both efficiency and flexibility. Thus, Internet streaming video poses many challenges. In this article we cover six key areas of streaming video. Specifically, we cover video compression, application-layer QoS control, continuous media distribution services, streaming servers, media synchronization mechanisms, and protocols for streaming media. For each area, we address the particular issues and review major approaches and mechanisms. We also discuss the tradeoffs of the approaches and point out future research directions.

Journal ArticleDOI
TL;DR: A review of the development of generic user modeling systems over the past twenty years is given in this article, which describes their purposes, their services within user-adaptive systems, and the different design requirements for research prototypes and commercially deployed servers.
Abstract: The paper reviews the development of generic user modeling systems over the past twenty years. It describes their purposes, their services within user-adaptive systems, and the different design requirements for research prototypes and commercially deployed servers. It discusses the architectures that have been explored so far, namely shell systems that form part of the application, central server systems that communicate with several applications, and possible future user modeling agents that physically follow the user. Several implemented research prototypes and commercial systems are briefly described.

Journal ArticleDOI
TL;DR: Performance evaluation results demonstrate that the analytically tuned FCS algorithms provide robust transient and steady state performance guarantees for periodic and aperiodic tasks even when the task execution times vary by as much as 100% from the initial estimate.
Abstract: We develop Feedback Control real-time Scheduling (FCS) as a unified framework to provide Quality of Service (QoS) guarantees in unpredictable environments (such as e-business servers on the Internet). FCS includes four major components. First, novel scheduling architectures provide performance control to a new category of QoS critical systems that cannot be addressed by traditional open loop scheduling paradigms. Second, we derive dynamic models for computing systems for the purpose of performance control. These models provide a theoretical foundation for adaptive performance control. Third, we apply established control methodology to design scheduling algorithms with proven performance guarantees, which is in contrast with existing heuristics-based solutions relying on laborious design/tuning/testing iterations. Fourth, a set of control-based performance specifications characterizes the efficiency, accuracy, and robustness of QoS guarantees. The generality and strength of FCS are demonstrated by its instantiations in three important applications with significantly different characteristics. First, we develop real-time CPU scheduling algorithms that guarantee low deadline miss ratios in systems where task execution times may deviate from estimations at run-time. We solve the saturation problems of real-time CPU scheduling systems with a novel integrated control structure. Second, we develop an adaptive web server architecture to provide relative and absolute delay guarantees to different service classes with unpredictable workloads. The adaptive architecture has been implemented by modifying an Apache web server. Evaluation experiments on a testbed of networked Linux PCs demonstrate that our server provides robust relative/absolute delay guarantees despite instantaneous changes in the user population. Third, we develop a data migration executor for networked storage systems that migrates data on-line while guaranteeing specified I/O throughput of concurrent applications.
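
A highly simplified sketch of the feedback-control loop (not the paper's analytically tuned FCS algorithms; the gains, set point, and starting utilization are arbitrary): a proportional-integral controller compares the measured deadline miss ratio with its set point each sampling period and adjusts the admitted utilization accordingly:

```python
# PI controller sketch: raise admitted load when deadline misses are rare,
# shed load when the miss ratio exceeds the set point.
class MissRatioController:
    def __init__(self, set_point=0.02, kp=0.5, ki=0.1):
        self.set_point, self.kp, self.ki = set_point, kp, ki
        self.integral = 0.0
        self.admitted_utilization = 0.8      # starting CPU utilization target

    def sample(self, measured_miss_ratio):
        error = self.set_point - measured_miss_ratio
        self.integral += error
        delta = self.kp * error + self.ki * self.integral
        self.admitted_utilization = min(1.0, max(0.1,
            self.admitted_utilization + delta))
        return self.admitted_utilization

ctrl = MissRatioController()
for miss in [0.10, 0.06, 0.03, 0.02, 0.01]:   # measured each sampling period
    print(round(ctrl.sample(miss), 3))
```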

Patent
12 Jul 2001
TL;DR: In this article, an operating room control system for use during a medical procedure on a patient includes an input device, a display device, and a controller that is coupled to the input device and the display device.
Abstract: A method and apparatus for retrieving, accessing, and storing medical data relating to a patient during a medical procedure. The invention provides a single interface to many disparate forms of medical data, which is accessible over a local area network; wide area network, direct connection, or combinations thereof. In one embodiment, an operating room control system for use during a medical procedure on a patient includes an input device, a display device, and a controller that is coupled to the input device and the display device. The controller receives one or more user inputs, transmits a command to a server located outside of the operating room to retrieve medical data, receives the medical data from the server, and displays the medical data on the display device. Medical data can be captured by the controller using, for example, a camera and a video/image capture board, keyboard, and microphone during surgery or examination of the patient. The captured medical data can be stored on one or more remote servers as part of the patient records.

DOI
01 May 2001
TL;DR: The approach is to develop systems that dynamically turn cluster nodes on – to be able to handle the load imposed on the system efficiently – and off – to save power under lighter load.
Abstract: In this paper we address power conservation for clusters of workstations or PCs. Our approach is to develop systems that dynamically turn cluster nodes on – to be able to handle the load imposed on the system efficiently – and off – to save power under lighter load. The key component of our systems is an algorithm that makes load balancing and unbalancing decisions by considering both the total load imposed on the cluster and the power and performance implications of turning nodes off. The algorithm is implemented in two different ways: (1) at the application level for a cluster-based, locality-conscious network server; and (2) at the operating system level for an operating system for clustered cycle servers. Our experimental results are very favorable, showing that our systems conserve both power and energy in comparison to traditional systems.
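
The core sizing decision can be sketched as follows (a toy illustration of the idea, not the paper's load balancing/unbalancing algorithm; the per-node capacity and utilization target are assumed): keep just enough nodes powered on that each active node stays below a target utilization, and turn the rest off. A real system would add hysteresis so nodes are not switched on and off repeatedly around the threshold.

```python
# Decide how many cluster nodes to keep powered on for the offered load.
import math

def active_nodes(total_load, per_node_capacity, target_utilization=0.8):
    """Nodes needed so each active node runs at or below the target
    utilization; the remaining nodes can be turned off to save power."""
    return max(1, math.ceil(total_load / (per_node_capacity * target_utilization)))

for load in (50, 400, 1200):                  # offered requests/sec
    print(load, "req/s ->", active_nodes(load, per_node_capacity=300), "nodes on")
```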

Patent
17 Jul 2001
TL;DR: In this article, the authors propose a service enablement platform (SEP) to enable virtual office users to access typical office network-based applications, including e-mail, file sharing and hosted thin-client programs through a remotely located network, e.g., WAN.
Abstract: Apparatus and accompanying methods for use therein for implementing an integrated, virtual office user environment, through an office server(s), through which a remotely stationed user can access typical office network-based applications, including e-mail, file sharing and hosted thin-client programs, through a remotely located network, e.g., WAN, connected web browser. Specifically, a front end, namely a service enablement platform (SEP), to one or more office servers on a LAN is connected to both the WAN and LAN and acts both as a bridge between the user and his(her) office applications and as a protocol translator to enable bi-directional, web-based, real-time communication to occur between the browser and each such application. During initial operation, the SEP, operating under a default profile, establishes, over an analog connection to the WAN, a management session with the site to obtain customer WAN access information, then tears down the analog connection and establishes a broadband WAN connection through which the SEP re-establishes its prior session and obtains a client certificate and its customized profile. The SEP then re-initializes itself to that particular profile.

Patent
19 Jul 2001
TL;DR: In this paper, a DNS Server (SPD) load balances network requests among customer Web servers and directs client requests for hosted customer content to the appropriate caching server which is selected by choosing the caching server that is closest to the user, is available, and is the least loaded.
Abstract: A content delivery and global traffic management network system provides a plurality of caching servers connected to a network. The caching servers host customer content that can be cached and stored, and respond to requests for Web content from clients. If the requested content does not exist in memory or on disk, it generates a request to an origin site to obtain the content. A DNS Server (SPD) load balances network requests among customer Web servers and directs client requests for hosted customer content to the appropriate caching server which is selected by choosing the caching server that is closest to the user, is available, and is the least loaded. SPD also supports persistence and returns the same IP addresses for a given client. The entire Internet address space is broken up into multiple zones. Each zone is assigned to a group of SPD servers. If an SPD server gets a request from a client that is not in the zone assigned to that SPD server, it forwards the request to the SPD server assigned to that zone. Servers write information about the content delivered to log files that are picked up by a log server.
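
A toy version of the selection rule described above (the server records and zone distances are hypothetical; this is not the patented SPD implementation): among caching servers that are up, pick the one closest to the client's zone, breaking ties by current load:

```python
# Choose the caching server that is closest to the client, available, and
# least loaded, in that order of preference.
def pick_caching_server(servers, client_zone):
    """servers: list of dicts with 'name', 'zone_distance', 'up', 'load'."""
    candidates = [s for s in servers if s["up"]]
    if not candidates:
        raise RuntimeError("no caching server available")
    return min(candidates,
               key=lambda s: (s["zone_distance"][client_zone], s["load"]))

servers = [
    {"name": "cache-east", "up": True,  "load": 0.7,
     "zone_distance": {"us-east": 1, "us-west": 5, "eu": 8}},
    {"name": "cache-west", "up": True,  "load": 0.2,
     "zone_distance": {"us-east": 5, "us-west": 1, "eu": 9}},
    {"name": "cache-eu",   "up": False, "load": 0.0,
     "zone_distance": {"us-east": 8, "us-west": 9, "eu": 1}},
]
print(pick_caching_server(servers, "eu")["name"])   # nearest *available* server
```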

Patent
02 Jun 2001
TL;DR: In this article, the authors provide many-to-one data mirroring, including mirroring from local servers running the same or different operating systems and/or file systems at two or more geographically dispersed locations.
Abstract: Methods, systems, and configured storage media are provided for flexible data mirroring. In particular, the invention provides many-to-one data mirroring, including mirroring from local servers running the same or different operating systems and/or file systems at two or more geographically dispersed locations. The invention also provides one-to-many data mirroring, mirroring with or without a dedicated private telecommunications link, and mirroring with or without a dedicated server or another server at the destination(s) to assist the remote mirroring unit(s). In addition, the invention provides flexibility by permitting the use of various combinations of one or more external storage units and/or RAID units to hold mirrored data. Spoofing, SCSI and other bus emulations, and further tools and techniques are used in various embodiments of the invention.

Patent
12 Jan 2001
TL;DR: In this article, the authors present a system and method for integrating disparate business applications, and managing the applications' processes in a hardware resource and user effort efficient manner, which uses a business system platform comprised of several unique servers to efficiently manage multiple applications which are themselves generally distributed across a network, and to control the execution of the required tasks with minimum use of redundant data input to the several applications, thereby minimizing the use of hardware resources and user input effort.
Abstract: The present invention provides a system and method for integrating disparate business applications, and managing the applications' processes in a hardware resource and user effort efficient manner. The automated system of the present invention uses a business systems platform comprised of several unique servers to efficiently manage multiple applications which are themselves generally distributed across a network, and to control the execution of the required tasks with minimum use of redundant data input to the several applications, thereby minimizing the use of hardware resources and user input effort. Business objects are controlled through a persistence framework which is Java, XML and EJB based.

Patent
13 Sep 2001
TL;DR: In this paper, a distributed recommendation system and method are disclosed that provide greater privacy for the user's private data by distributing the tasks of a recommendation system between wireless devices (100) and network servers (140), so as to protect the privacy of end users.
Abstract: A distributed recommendation system and method are disclosed that provides greater privacy for the user's private data. The method distributes the tasks of a recommendation system between wireless devices (100) and network servers (140), so as to protect the privacy of end users. The user's wireless device (100) sends (326) a current context-activity pair (515) to a network server (140) in response to either the user's selection (324) of an activity or automatically (322). The user's wireless device (100) includes a service history log (110). The activities stored in the service history log (110) include past recommendations (1) made by the network server (140), past services used (2), prestored service preferences (3), and special requested service requirements (4). Context-activity pair information (515) sent to the server (140) can include any combination of these activities. The server (140) then responds with an appropriate recommendation (515').

Patent
20 Jul 2001
TL;DR: In this article, a method and system for interactively responding to queries from a remotely located user includes a computer server system configured to receive an instant message query or request from the user over the Internet, and appropriate action is taken such as accessing a local or remote data resource and formulating an answer to the user's query.
Abstract: A method and system for interactively responding to queries from a remotely located user (18) includes a computer server system (22) configured to receive an instant message query or request (20) from the user over the Internet. The query or request is interpreted and appropriate action is taken, such as accessing a local or remote data resource and formulating an answer to the user's query. The answer is formatted as appropriate and returned to the user as an instant message or via another route specified by the user. A method and system of providing authenticated access to a given web page via instant messaging is also disclosed.

Journal ArticleDOI
01 Jul 2001
TL;DR: The paper concludes by arguing in support of "reverse ITRACE" [Ba00] and for the utility of packet traceback techniques that work even for low volume flows, such as SPIE.
Abstract: Attackers can render distributed denial-of-service attacks more difficult to defend against by bouncing their flooding traffic off of reflectors; that is, by spoofing requests from the victim to a large set of Internet servers that will in turn send their combined replies to the victim. The resulting dilution of locality in the flooding stream complicates the victim's abilities both to isolate the attack traffic in order to block it, and to use traceback techniques for locating the source of streams of packets with spoofed source addresses, such as ITRACE [Be00a], probabilistic packet marking [SWKA00], [SP01], and SPIE [S+01]. We discuss a number of possible defenses against reflector attacks, finding that most prove impractical, and then assess the degree to which different forms of reflector traffic will have characteristic signatures that the victim can use to identify and filter out the attack traffic. Our analysis indicates that three types of reflectors pose particularly significant threats: DNS and Gnutella servers, and TCP-based servers (particularly Web servers) running on TCP implementations that suffer from predictable initial sequence numbers. We argue in conclusion in support of "reverse ITRACE" [Ba00] and for the utility of packet traceback techniques that work even for low volume flows, such as SPIE.

Patent
10 May 2001
TL;DR: In this paper, a system and method for monitoring and analyzing Internet traffic is provided that is efficient, completely automated, and fast enough to handle the busiest websites on the Internet, processing data many times faster than existing systems.
Abstract: A system and method for monitoring and analyzing Internet traffic is provided that is efficient, completely automated, and fast enough to handle the busiest websites on the Internet, processing data many times faster than existing systems. The system and method of the present invention processes data by reading log files produced by web servers, or by interfacing with the web server in real time, processing the data as it occurs. The system and method of the present invention can be applied to one website or thousands of websites, whether they reside on one server or multiple servers. The multi-site and sub-reporting capabilities of the system and method of the present invention makes it applicable to servers containing thousands of websites and entire on-line communities. In one embodiment, the system and method of the present invention includes e-commerce analysis and reporting functionality, in which data from standard traffic logs is received and merged with data from e-commerce systems. The system and method of the present invention can produce reports showing detailed “return on investment” information, including identifying which banner ads, referrals, domains, etc. are producing specific dollars.

Patent
19 Dec 2001
TL;DR: In this paper, the authors propose a virtual file system that enables a plurality of underlying file systems running on various file servers to be virtualized into one or more virtual volumes that appear as a local file system to clients that access the virtual volumes.
Abstract: A virtual file system and method. The system architecture enables a plurality of underlying file systems running on various file servers to be “virtualized” into one or more “virtual volumes” that appear as a local file system to clients that access the virtual volumes. The system also enables the storage spaces of the underlying file systems to be aggregated into a single virtual storage space, which can be dynamically scaled by adding or removing file servers without taking any of the file systems offline and in a manner transparent to the clients. This functionality is enabled through a software “virtualization” filter on the client that intercepts file system requests and a virtual file system driver on each file server. The system also provides for load balancing file accesses by distributing files across the various file servers in the system, through migration of data files between servers.
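
As a rough sketch of the virtualization idea (not the patented filter or file-server driver; the server names are invented), a client-side shim can map each virtual path to one of the underlying file servers so that several physical file systems appear as one virtual volume. Unlike this static hash, the patented system also supports transparent rescaling and file migration between servers:

```python
# Map virtual paths onto underlying file servers so that many physical file
# systems look like a single virtual volume to the client.
import hashlib

class VirtualVolume:
    def __init__(self, file_servers):
        self.file_servers = list(file_servers)

    def locate(self, virtual_path: str) -> str:
        """Pick the file server that holds (or will hold) this path."""
        h = int(hashlib.md5(virtual_path.encode()).hexdigest(), 16)
        return self.file_servers[h % len(self.file_servers)]

vol = VirtualVolume(["nfs1.example", "nfs2.example", "nfs3.example"])
for path in ("/vol/projects/report.doc", "/vol/home/alice/notes.txt"):
    print(path, "->", vol.locate(path))
```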

Journal ArticleDOI
TL;DR: It is shown that web page revisitation is a much more prevalent activity than previously reported, that most pages are visited for a surprisingly short period of time, that users maintain large (and possibly overwhelming) bookmark collections, and that there is a marked lack of commonality in the pages visited by different users.
Abstract: This paper provides an empirical characterization of user actions at the web browser. The study is based on an analysis of 4 months of logged client-side data that describes user actions with recent versions of Netscape Navigator. In particular, the logged data allow us to determine the title, URL and time of each page visit, how often they visited each page, how long they spent at each page, the growth and content of bookmark collections, as well as a variety of other aspects of user interaction with the web. The results update and extend prior empirical characterizations of web use. Among the results we show that web page revisitation is a much more prevalent activity than previously reported (approximately 81% of pages have been previously visited by the user), that most pages are visited for a surprisingly short period of time, that users maintain large (and possibly overwhelming) bookmark collections, and that there is a marked lack of commonality in the pages visited by different users. These results have implications for a wide range of web-based tools including the interface features provided by web browsers, the design of caching proxy servers, and the design of efficient web sites.

Patent
09 Oct 2001
TL;DR: In this paper, a system and method may be provided that allows users to store, retrieve, and manipulate on-demand media content and data stored on a remote server network in an ondemand media delivery system.
Abstract: A system and method may be provided that allows users to store, retrieve, and manipulate on-demand media content and data stored on a remote server network in an on-demand media delivery system. More particularly, the system may allow a user to access his or her on-demand media account from user equipment in different locations as long as the current user equipment can communicate with a remote server that stores user-specific information. The system upon user selection may freeze the delivery of on-demand media at a particular point and allow the user to resume the media at a later time from some other network location in system. Users may upload personal images or files to an on-demand delivery server for later retrieval and display. Users may be permitted to assign access rights to the uploaded files.

Book ChapterDOI
19 Aug 2001
TL;DR: This paper addresses secure service replication in an asynchronous environment with a static set of servers, where a malicious adversary may corrupt up to a threshold of servers and controls the network.
Abstract: Broadcast protocols are a fundamental building block for implementing replication in fault-tolerant distributed systems. This paper addresses secure service replication in an asynchronous environment with a static set of servers, where a malicious adversary may corrupt up to a threshold of servers and controls the network. We develop a formal model using concepts from modern cryptography, give modular definitions for several broadcast problems, including reliable, atomic, and secure causal broadcast, and present protocols implementing them. Reliable broadcast is a basic primitive, also known as the Byzantine generals problem, providing agreement on a delivered message. Atomic broadcast imposes additionally a total order on all delivered messages. We present a randomized atomic broadcast protocol based on a new, efficient multivalued asynchronous Byzantine agreement primitive with an external validity condition. Apparently, no such efficient asynchronous atomic broadcast protocol maintaining liveness and safety in the Byzantine model has appeared previously in the literature. Secure causal broadcast extends atomic broadcast by encryption to guarantee a causal order among the delivered messages. Our protocols use threshold cryptography for signatures, encryption, and coin-tossing.

Patent
Siew Yong Sim
18 May 2001
TL;DR: In this paper, a virtual file control system creates an illusion that the entire file is present at the connected node; however, since only selective portions of the large payload file may actually be resident at that node's storage at the time of request, a cluster of distribution servers at the distribution station may download the non-resident portions of the file as the application server is servicing the user.
Abstract: Large payload files are selectively partitioned in blocks and the blocks distributed to a plurality of distribution stations at the edge of the network qualified to have the data. Each qualified station decides how much and what portion of the content to save locally, based on information such as network location and environment, usage, popularity, and other distribution criteria defined by the content provider. Different pieces of a large payload file may be available from different nodes, however, when a user requests access to the large payload file, for example, through an application server, a virtual file control system creates an illusion that the entire file is present at the connected node. However, since only selective portions of the large payload file may actually be resident at that node's storage at the time of request, a cluster of distribution servers at the distribution station may download the non-resident portions of the file as the application server is servicing the user. The download may be in parallel and usually from the least congested nodes. New nodes added to the network learn from other nodes in the network what content they should have and download the required content, in a desired amount, onto their local storage devices from the nearest and least congested nodes without interrupting network operation. Each node manages its local storage and decides what content to prune based on information such as usage patterns.

Patent
18 Jun 2001
TL;DR: A banking, retail or other transaction network can comprise a number of terminals, for example an ATM, where each terminal comprises a plurality of peripheral devices such as a user interface, card reader, receipt printer and cash dispenser.
Abstract: A banking, retail or other transaction network can comprise a number of terminals, for example an ATM, where each terminal comprises a plurality of peripheral devices such as a user interface, card reader, receipt printer and cash dispenser. The applications software for the peripheral devices can be held in a central server located externally of the terminal and linked to the terminal through a communications link. The link can extend to the individual peripheral devices so that they are direct clients of the server. Additionally the individual peripheral devices can be connected to each other over the link to enable them to communicate directly with each other on a peer-to-peer basis. Each peripheral can have an independent control application. In use, the independent control applications may communicate with each other so that a peripheral operates in response to a signal generated by another peripheral. A peripheral for use in such a terminal, and a network of such terminals are also described. A mainframe or server computer accessing a banking or other information database (e.g., a legacy host) can be connected to the central server through an information signal connection.