
Showing papers on "Server published in 1998"


Proceedings ArticleDOI
01 Jun 1998
TL;DR: This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool which mimics a set of real users accessing a server and addresses the technical challenges to satisfying this large set of simultaneous constraints on the properties of the reference stream.
Abstract: One role for workload generation is as a means for understanding how servers and networks respond to variation in load. This enables management and capacity planning based on current and projected usage. This paper applies a number of observations of Web server usage to create a realistic Web workload generation tool which mimics a set of real users accessing a server. The tool, called Surge (Scalable URL Reference Generator) generates references matching empirical measurements of 1) server file size distribution; 2) request size distribution; 3) relative file popularity; 4) embedded file references; 5) temporal locality of reference; and 6) idle periods of individual users. This paper reviews the essential elements required in the generation of a representative Web workload. It also addresses the technical challenges to satisfying this large set of simultaneous constraints on the properties of the reference stream, the solutions we adopted, and their associated accuracy. Finally, we present evidence that Surge exercises servers in a manner significantly different from other Web server benchmarks.

1,549 citations
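Surge's "relative file popularity" component is empirically Zipf-like: the k-th most popular file is requested roughly in proportion to 1/k. A minimal sketch of generating a reference stream under that assumption (function names and parameters are illustrative, not Surge's actual interface):

```python
import random

def zipf_weights(n_files, alpha=1.0):
    """Zipf-like popularity: the k-th most popular file has weight 1/k^alpha."""
    return [1.0 / (k ** alpha) for k in range(1, n_files + 1)]

def sample_references(n_files, n_requests, alpha=1.0, seed=42):
    """Draw a reference stream where file 0 is most popular, file 1 next, etc."""
    rng = random.Random(seed)
    weights = zipf_weights(n_files, alpha)
    return rng.choices(range(n_files), weights=weights, k=n_requests)

refs = sample_references(n_files=100, n_requests=10_000)
# The most popular file should appear far more often than the least popular.
counts = [refs.count(i) for i in (0, 99)]
```

The real tool must satisfy all six distributional constraints simultaneously; this sketch covers only the popularity dimension.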


Journal ArticleDOI
TL;DR: The structure of the VNC protocol is described, and the ways the authors use VNC technology now and how it may evolve further as new clients and servers are developed are discussed.
Abstract: VNC is an ultra thin client system based on a simple display protocol that is platform independent. It achieves mobile computing without requiring the user to carry any hardware. VNC provides access to home computing environments from anywhere in the world, on whatever computing infrastructure happens to be available-including, for example, public Web browsing terminals in airports. In addition, VNC allows a single desktop to be accessed from several places simultaneously, thus supporting application sharing in the style of computer supported cooperative work (CSCW). The technology underlying VNC is a simple remote display protocol. It is the simplicity of this protocol that makes VNC so powerful. Unlike other remote display protocols such as the X Window System and Citrix's ICA, the VNC protocol is totally independent of operating system, windowing system, and applications. The VNC system is freely available for download from the ORL Web site at http://www.orl.co.uk/vnc/. We begin the article by summarizing the evolution of VNC from our work on thin client architectures. We then describe the structure of the VNC protocol, and conclude by discussing the ways we use VNC technology now and how it may evolve further as new clients and servers are developed.

1,313 citations


Journal ArticleDOI
TL;DR: An interactive protein secondary structure prediction Internet server is presented that simplifies the use of current prediction algorithms and allows conservation patterns important to structure and function to be identified.
Abstract: UNLABELLED An interactive protein secondary structure prediction Internet server is presented. The server allows a single sequence or multiple alignment to be submitted, and returns predictions from six secondary structure prediction algorithms that exploit evolutionary information from multiple sequences. A consensus prediction is also returned which improves the average Q3 accuracy of prediction by 1% to 72.9%. The server simplifies the use of current prediction algorithms and allows conservation patterns important to structure and function to be identified. AVAILABILITY http://barton.ebi.ac.uk/servers/jpred.html CONTACT geoff@ebi.ac.uk

1,044 citations


Patent
30 Mar 1998
TL;DR: A content-aware flow switch, as discussed by the authors, intercepts a client content request in an IP network and transparently directs it to a best-fit server, chosen based on the type of content requested, the quality of service requirements implied by the content request, the degree of load on available servers, network congestion information, and the proximity of the client to available servers.
Abstract: A content-aware flow switch intercepts a client content request in an IP network, and transparently directs the content request to a best-fit server. The best-fit server is chosen based on the type of content requested, the quality of service requirements implied by the content request, the degree of load on available servers, network congestion information, and the proximity of the client to available servers. The flow switch detects client-server flows based on the arrival of TCP SYNs and/or HTTP GETs from the client. The flow switch implicitly deduces the quality of service requirements of a flow based on the content of the flow. The flow switch also provides the functionality of multiple physical web servers on a single web server in a way that is transparent to the client, through the use of virtual web hosts and flow pipes.

793 citations
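The best-fit selection the abstract describes combines several signals (content type, server load, congestion, proximity) into one choice. A minimal sketch as a weighted scoring function; the weights and field names are illustrative assumptions, not taken from the patent:

```python
def best_fit_server(servers, content_type):
    """Pick the lowest-cost server among those that can serve the
    requested content type. The weighting below is illustrative."""
    candidates = [s for s in servers if content_type in s["content_types"]]
    if not candidates:
        raise LookupError(f"no server offers {content_type!r}")
    def cost(s):
        # Lower is better: mostly load, then congestion, then distance.
        return 0.5 * s["load"] + 0.3 * s["congestion"] + 0.2 * s["distance"]
    return min(candidates, key=cost)

servers = [
    {"name": "a", "content_types": {"video", "html"}, "load": 0.9, "congestion": 0.2, "distance": 0.1},
    {"name": "b", "content_types": {"video"}, "load": 0.3, "congestion": 0.3, "distance": 0.4},
    {"name": "c", "content_types": {"html"}, "load": 0.1, "congestion": 0.1, "distance": 0.1},
]
choice = best_fit_server(servers, "video")
```

In the patent the decision is triggered by observing TCP SYNs and HTTP GETs in flight; here only the final server-selection step is sketched.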


Patent
18 Aug 1998
TL;DR: In this paper, the synchronization system and associated methods provide synchronization of an arbitrary number of datasets, including more than two datasets, and a unified user interface is provided that allows the user to easily determine which of his or her datasets are currently set to be synchronized.
Abstract: Synchronization system and associated methods provide synchronization of an arbitrary number of datasets, including more than two datasets. To achieve this, a reference dataset is used to store a super-set of the latest or most-recent data from all user datasets to provide a repository of information that is available at all times. Therefore, if the user later wishes to synchronize a new user dataset, such as one in a server computer that stores user information, the system already has all the information necessary for synchronizing the new dataset, regardless of whether any of the other datasets are then available. Further, to simplify use, a unified user interface is provided that allows the user to easily determine which of his or her datasets are currently set to be synchronized and allows the user to conveniently alter the current settings to select one, two, or even more than two clients for synchronization. Various “conflict” or “duplicate” resolution strategies are described for intelligently handling complexities resulting from allowing synchronization for an arbitrary number of datasets and allowing synchronization using even data from datasets that are not available. Architectural support for “plug-in” client accessors and type modules is also provided. This allows support to be added for new datasets or new types of data merely by developing and plugging in new, compact client accessors or type modules, without updating or replacing the core synchronization engine.

717 citations
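The core idea above is a reference dataset holding the most recent version of every record, so a newly added dataset can be initialized even when the other datasets are offline. A minimal sketch, with illustrative record shapes (value plus timestamp, keyed by id):

```python
def merge_into_reference(reference, dataset):
    """Fold a user dataset into the reference dataset, keeping the most
    recent version of each record. Records are (value, timestamp) pairs
    keyed by id; the field layout is an assumption for illustration."""
    for rec_id, (value, ts) in dataset.items():
        if rec_id not in reference or ts > reference[rec_id][1]:
            reference[rec_id] = (value, ts)
    return reference

reference = {}
merge_into_reference(reference, {"1": ("alice@old", 10), "2": ("bob", 12)})
merge_into_reference(reference, {"1": ("alice@new", 20)})
# A dataset that joins later can be populated entirely from `reference`,
# regardless of whether the original datasets are reachable.
```

Conflict and duplicate resolution in the patent is richer than a timestamp comparison; this shows only the superset-of-latest-data invariant.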


Patent
26 Oct 1998
TL;DR: In this paper, the authors describe a system for implementing high-level, network policies in a computer network having multiple, dissimilar network devices, which can be translated by one or more policy servers into a set of rules that can be put into effect by specific network devices.
Abstract: A computer network having multiple, dissimilar network devices includes a system for implementing high-level, network policies. The high-level policies, which are generally device-independent, are translated by one or more policy servers into a set of rules that can be put into effect by specific network devices. Preferably, a network administrator selects an overall traffic template for a given domain and may assign various applications and/or users to the corresponding traffic types of the template. Location-specific policies may also be established by the network administrator. The policy server translates the high-level policies inherent in the selected traffic template and location-specific policies into a set of rules, which may include one or more access control lists, and may combine several related rules into a single transaction. Intermediate network devices, which may have one or more roles assigned to their interfaces, are configured to request traffic management information from the policy server which replies with a particular set of transactions and rules. The rules, which may correspond to the particular roles assigned to the interfaces, are then utilized by the intermediate devices to configure their particular services and traffic management mechanisms. Other rules are utilized by the intermediate devices to classify packets with a particular priority and/or service value and to treat classified packets in a particular manner so as to realize the selected high-level policies within the domain.

672 citations


Patent
28 Aug 1998
TL;DR: In this article, a tape backup apparatus (240) comprises a two-tier data storage system in which all backed up data is stored in on-line media, such as a hard disk (244).
Abstract: In a network environment (200), multiple clients (210) and multiple servers (230) are connected via a local area network (LAN) (220) to a tape backup apparatus (240). Each client (210) and each server is provided with backup agent software (215), which schedules backup operations on the basis of time since the last backup, the amount of information generated since the last backup, or the like. An agent (215a) also sends a request to the tape backup apparatus (240), prior to an actual backup, including information representative of the files that it intends to back up. The tape backup apparatus (240) is provided with a mechanism to receive backup requests from the client agents (215) and accept or reject backup requests on the basis of backup server loading, network loading, or the like. The tape backup apparatus (240) is further provided with mechanisms to enact redundant file elimination (RFE), whereby the server indicates to the client agents, prior to files being backed up, that certain of the files to be backed up are already stored by the backup server. Thus, the clients do not need to send the redundant files to be backed up. The tape backup apparatus (240) comprises a two-tier data storage system in which all backed up data is stored in on-line media, such as a hard disk (244). A copy of the data on the on-line media (244) is periodically backed up to off-line media, such as tape. The backup server is provided with mechanisms whereby a client can restore any of its 'lost' data by copying it directly from the on-line storage, without the need for backup administrator assistance.

646 citations
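Redundant file elimination amounts to the client announcing what it intends to back up and the server replying with what it already holds. A minimal sketch using content digests as the identity test; the protocol shape and use of SHA-256 are assumptions, not the patented mechanism:

```python
import hashlib

def file_digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def plan_backup(server_store, client_files):
    """Return the subset of files the client must actually send: those
    whose digest the backup server does not already hold."""
    to_send = {}
    for name, data in client_files.items():
        if file_digest(data) not in server_store:
            to_send[name] = data
    return to_send

# The server already stores one widely shared file (e.g. an OS binary).
server_store = {file_digest(b"shared-os-file")}
client_files = {"kernel": b"shared-os-file", "diary": b"private notes"}
needed = plan_backup(server_store, client_files)
```

The load-based accept/reject of backup requests described in the abstract would sit in front of this exchange.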


Journal ArticleDOI
01 Oct 1998
TL;DR: A simple, practical strategy for locality-aware request distribution (LARD), in which the front-end distributes incoming requests in a manner that achieves high locality in the back-ends' main memory caches as well as load balancing.
Abstract: We consider cluster-based network servers in which a front-end directs incoming requests to one of a number of back-ends. Specifically, we consider content-based request distribution: the front-end uses the content requested, in addition to information about the load on the back-end nodes, to choose which back-end will handle this request. Content-based request distribution can improve locality in the back-ends' main memory caches, increase secondary storage scalability by partitioning the server's database, and provide the ability to employ back-end nodes that are specialized for certain types of requests. As a specific policy for content-based request distribution, we introduce a simple, practical strategy for locality-aware request distribution (LARD). With LARD, the front-end distributes incoming requests in a manner that achieves high locality in the back-ends' main memory caches as well as load balancing. Locality is increased by dynamically subdividing the server's working set over the back-ends. Trace-based simulation results and measurements on a prototype implementation demonstrate substantial performance improvements over state-of-the-art approaches that use only load information to distribute requests. On workloads with working sets that do not fit in a single server node's main memory cache, the achieved throughput exceeds that of the state-of-the-art approach by a factor of two to four. With content-based distribution, incoming requests must be handed off to a back-end in a manner transparent to the client, after the front-end has inspected the content of the request. To this end, we introduce an efficient TCP handoff protocol that can hand off an established TCP connection in a client-transparent manner.

643 citations
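The basic LARD policy can be sketched compactly: a target is pinned to one back-end for cache locality, and reassigned only when that back-end is overloaded while another is lightly loaded. The thresholds and load metric below are illustrative; the paper defines them in terms of active connection counts:

```python
def lard_dispatch(target, assignment, load, t_low=2, t_high=5):
    """Basic locality-aware request distribution: keep a target on its
    assigned back-end unless that node is overloaded (> t_high) and some
    node is lightly loaded (< t_low), in which case reassign."""
    if target not in assignment:
        assignment[target] = min(load, key=load.get)
    else:
        node = assignment[target]
        least = min(load, key=load.get)
        if load[node] > t_high and load[least] < t_low:
            assignment[target] = least
    node = assignment[target]
    load[node] += 1          # illustrative load metric
    return node

assignment, load = {}, {"b1": 0, "b2": 0}
first = lard_dispatch("/index.html", assignment, load)
second = lard_dispatch("/index.html", assignment, load)
# Repeated requests for the same target hit the same back-end cache.
```

The client-transparent TCP handoff that makes this routing invisible to clients is a separate mechanism not sketched here.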


Patent
12 Mar 1998
TL;DR: In this paper, a method for executing a secure online transaction is described: the user computer (12) transmits a transaction request, including user identification data unique to the user, to the vendor computer (14) over the network (16); the vendor computer forwards a transaction verification request, comprising the user identification data and data indicative of the requested transaction, to a trust server computer (18); and the trust server authenticates the user computer by communicating with it using that identification data before authorizing the transaction.
Abstract: A method for executing a secure online transaction between a vendor computer (14) and a user computer (12), wherein vendor (14) and user (12) computers are interconnected to a network (16). The method comprises the steps of the user computer (12) transmitting a transaction request message to the vendor computer (14) via the computer network (16), the financial transaction request comprising user identification data unique to the user computer (12); in response to receiving the transaction request, the vendor computer (14) sending a transaction verification request to a trust server computer (18) interconnected to the computer network (16), the transaction verification request comprising the user identification data and data indicative of the requested transaction; in response to receiving the transaction verification request, the trust server computer (18) authenticating the user computer (12) by using the user identification data and communicating with the user computer (12) for verification with the user identification data; and the trust server (18) authorizing the transaction when the authenticating step has passed.

626 citations


Patent
21 Jul 1998
TL;DR: An improved method and apparatus for storing and delivering information over the Internet and using Internet technologies is described in this paper, where a client device (100) may request from a server on a network.
Abstract: An improved method and apparatus is used for storing and delivering information over the Internet and using Internet technologies. According to one embodiment of the present invention, a method and apparatus for maintaining statistics on a server (204) is disclosed. According to an alternative embodiment, a method and apparatus (204) is disclosed for predicting data that a client device (100) may request from a server on a network. In another embodiment of the present invention, a method and apparatus (204) is disclosed for managing bandwidth between a client device (100) and a network. According to yet another embodiment, a method and apparatus (204) is disclosed for validating a collection of data (200). According to yet another embodiment, a method for providing notification to clients (100) from servers (204) is disclosed.

604 citations


Journal ArticleDOI
TL;DR: The theory of /spl Lscr//spl Rscr/ servers enables computation of tight upper bounds on end-to-end delay and buffer requirements in a heterogeneous network, where individual servers may support different scheduling architectures and under different traffic models.
Abstract: We develop a general model, called latency-rate servers (/spl Lscr//spl Rscr/ servers), for the analysis of traffic scheduling algorithms in broadband packet networks. The behavior of an /spl Lscr//spl Rscr/ server is determined by two parameters-the latency and the allocated rate. Several well-known scheduling algorithms, such as weighted fair queueing, virtualclock, self-clocked fair queueing, weighted round robin, and deficit round robin, belong to the class of /spl Lscr//spl Rscr/ servers. We derive tight upper bounds on the end-to-end delay, internal burstiness, and buffer requirements of individual sessions in an arbitrary network of /spl Lscr//spl Rscr/ servers in terms of the latencies of the individual schedulers in the network, when the session traffic is shaped by a token bucket. The theory of /spl Lscr//spl Rscr/ servers enables computation of tight upper bounds on end-to-end delay and buffer requirements in a heterogeneous network, where individual servers may support different scheduling architectures and under different traffic models.
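The paper's central bound has a simple closed form: for a session shaped by a token bucket with burst size sigma and rate rho, crossing a chain of LR servers that each allocate rate at least rho, the end-to-end delay is at most sigma/rho plus the sum of the per-server latencies. A direct sketch of that computation:

```python
def end_to_end_delay_bound(sigma, rho, latencies):
    """Upper bound on end-to-end delay for a (sigma, rho) token-bucket
    shaped session crossing a chain of LR servers, each allocating rate
    at least rho: sigma/rho plus the sum of the servers' latencies."""
    if rho <= 0:
        raise ValueError("allocated rate must be positive")
    return sigma / rho + sum(latencies)

# Example: 1500-byte burst, 125,000 B/s allocated rate, three hops whose
# scheduler latencies are each 2 ms.
bound = end_to_end_delay_bound(1500, 125_000, [0.002, 0.002, 0.002])
```

The heterogeneity claim in the abstract is visible here: only each server's latency parameter enters the bound, not which scheduling algorithm (WFQ, virtual clock, deficit round robin, ...) produced it.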

Patent
26 Aug 1998
TL;DR: The DataTreasury System (600) as discussed by the authors is a system for remote data acquisition and centralized processing and storage, which provides comprehensive support for the processing of documents and electronic data associated with different applications including sale, business, banking and general consumer transactions.
Abstract: A system for remote data acquisition and centralized processing and storage is disclosed, called the DataTreasury System (600). The DataTreasury System provides comprehensive support for the processing of documents and electronic data associated with different applications including sale, business, banking and general consumer transactions. The system retrieves transaction data such as credit card receipts and checks in either electronic or paper form at one or more remote locations, encrypts the data, transmits the encrypted data to a central location, transforms the data to a usable form, performs identification verification using signature data and biometric data, generates informative reports from the data and transmits the informative reports to the remote location(s). The DataTreasury System (200, 400, 600) has many advantageous features which work together to provide high performance, security, reliability, fault tolerance and low cost. First, the network architecture facilitates secure communication between the remote location(s) and the central processing facility. Second, a dynamic address assignment algorithm performs load balancing among the system's servers for faster performance and higher utilization. Finally, a partitioning scheme improves the error correction process.

Patent
06 Oct 1998
TL;DR: In this article, a system which distributes requests among a plurality of network servers receives a request from a remote source at a first one of the network servers, and determines whether to process the request in the first network server.
Abstract: A system which distributes requests among a plurality of network servers receives a request from a remote source at a first one of the network servers, and determines whether to process the request in the first network server. The request is processed in the first network server in a case that it is determined that the request should be processed in the first network server. On the other hand, the request is routed to another network server in a case that it is determined that the request should not be processed in the first network server.
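The process-locally-or-route decision above can be sketched as a threshold check followed by forwarding to the least-loaded peer. The capacity threshold and least-loaded rule are illustrative assumptions; the patent leaves the decision policy open:

```python
def handle_request(servers, first, request, capacity=10):
    """The first server processes the request itself while below capacity,
    otherwise routes it to the least-loaded peer. `servers` maps server
    name to current load (an illustrative metric)."""
    if servers[first] < capacity:
        servers[first] += 1
        return first
    peer = min(servers, key=servers.get)
    servers[peer] += 1
    return peer

servers = {"s1": 10, "s2": 3}
where = handle_request(servers, "s1", "GET /")
```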

Patent
19 Mar 1998
TL;DR: In this article, a system for dynamically transcoding data transmitted between computers is implemented in an apparatus for use in transmitting data between a network server (10) and a network client (12) over a communications link.
Abstract: A system for dynamically transcoding data transmitted between computers is implemented in an apparatus for use in transmitting data between a network server (10) and a network client (12) over a communications link (14). The apparatus includes a parser (22) coupled to a transcode service provider (24). The parser (22) is configured to selectively invoke the transcode service provider (24) in response to a predetermined selection criterion.

Patent
23 Jun 1998
TL;DR: In this paper, a policy server determines priority of network traffic through control points on a network by examining packets passing through these control points, such as the source and destination IP address and TCP ports.
Abstract: Low-level network services are provided by network-service-provider plugins. These plugins are controlled by an extensible service provider that is layered above the TCP or other protocol layer but below the Winsock-2 library and API. Policy servers determine priority of network traffic through control points on a network. Examining packets passing through these control points provides limited data such as the source and destination IP address and TCP ports. Many applications on a client machine may use the same IP address and TCP ports, so packet examination is ineffective for prioritizing data from different applications on one client machine. Often some applications such as videoconferencing or data-entry for corporate sales are more important than other applications such as web browsing. An application-classifier plugin to the extensible service provider intercepts network traffic above the client's TCP/IP stack and associates applications and users with network packets. These associations and statistics such as maximum, average, and instantaneous data rates and start and stop time are consolidated into tables. The policy server can query these tables to find which application is generating network traffic and prioritize the traffic based on the high-level application. Bandwidth-hogging applications such as browsers can be identified from the statistics and given lower priority.

Patent
05 May 1998
TL;DR: In this article, an update script is stored on a network server for each software product to be updated and, where appropriate, for each different country or locale in which that product will be installed.
Abstract: A technique for automatically updating software, including but not limited to application programs, residing on, e.g., a client computer. Specifically, an update script is stored on a network server for each software product to be updated and, where appropriate, for each different country or locale in which that product will be installed. At a scheduled time, the client computer automatically, through an executing updating application: establishes a network connection to the server; constructs a file name for a file containing an appropriate update script; and then downloads that file from the server. The script contains appropriate update information, including whether the update is to occur through a web site or through the script, and if the latter, listings of operating system (O/S) specific and O/S-independent product update files. For a script-based update, the updating application downloads those update files, as specified by the script, corresponding to the executing O/S and then, in a sequence specified in the script, executes various files therein to complete the update. Once the update successfully concludes, the updating application appropriately updates the locally stored version number of the installed software and schedules the next update accordingly.
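Two small pieces of the scheme above lend themselves to a sketch: constructing the per-product, per-locale script file name the client requests, and deciding whether an update is needed by comparing version strings numerically rather than lexically. The naming scheme and version format here are assumptions, not the patented ones:

```python
def script_name(product, locale, os_name):
    """Build the update-script file name the client requests from the
    server. The naming convention is illustrative."""
    return f"{product}-{locale}-{os_name}.upd"

def needs_update(installed, latest):
    """Compare dotted version strings numerically, so '1.9' < '1.10'."""
    parse = lambda v: [int(p) for p in v.split(".")]
    return parse(latest) > parse(installed)

name = script_name("wordproc", "en-US", "win32")
stale = needs_update("1.9", "1.10")
```

Numeric comparison matters: a plain string comparison would wrongly conclude that "1.10" precedes "1.9".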

Patent
15 May 1998
TL;DR: In this paper, a technique for automatic, transparent, distributed, scalable and robust caching, prefetching, and replication in a computer network that request messages for a particular document follow paths from the clients to a home server that form a routing graph.
Abstract: A technique for automatic, transparent, distributed, scalable and robust caching, prefetching, and replication in a computer network in which request messages for a particular document follow paths from the clients to a home server, forming a routing graph. Client request messages are routed up the graph towards the home server as would normally occur in the absence of caching. However, cache servers are located along the route, and may intercept requests if they can be serviced. In order to be able to service requests in this manner without departing from standard network protocols, the cache server needs to be able to insert a packet filter into the router associated with it, and needs also to proxy for the home server from the perspective of the client. Cache servers may cooperate to service client requests by caching and discarding documents based on local load, the load on neighboring caches, attached communication path load, and document popularity. The cache servers can also implement security schemes and other document transformation features.
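The interception idea above reduces to walking the request up the routing graph and letting the first cache on the path that holds the document answer, with the home server as the last resort. A minimal sketch of that lookup (the packet-filter and proxying mechanics are omitted; names are illustrative):

```python
def fetch(path_to_home, document, caches):
    """Walk the request toward the home server; the first cache server on
    the path holding the document answers, otherwise the home server
    (the last hop on the path) does."""
    for hop in path_to_home:
        if document in caches.get(hop, set()):
            return hop
    return path_to_home[-1]

caches = {"edge": set(), "regional": {"/doc.html"}}
served_by = fetch(["edge", "regional", "home"], "/doc.html", caches)
```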

Patent
Rick Dedrick1
09 Jan 1998
TL;DR: In this paper, the authors describe a computer network system that contains a metering mechanism which can meter the flow of electronic information to a client computer within a network, and charge the price of the information to an electronic account of the end user stored in a database of the metering server.
Abstract: A computer network system that contains a metering mechanism which can meter the flow of electronic information to a client computer within a network. The information can be generated by a publisher and electronically distributed to a plurality of metering servers which each contain the metering mechanism. The metering servers each reside in a local area network that contains a number of client computers. The client computers each contain a graphical user interface that allows an end user to request consumption of the information. The metering mechanisms control the transfer of information into the client computers. Each unit of information has an associated cost type and cost value that are used to calculate a price for the information. When the end user requests consumption of information, the metering mechanism determines whether the end user can consume the information. If the end user can access the information, the meter will transfer the information to the end user and charge the price of the information to an electronic account of the end user stored in a database of the metering server. The metering mechanism can periodically transfer the balance of the account, and the charges associated with the account, to a billing database that resides in a regional server which automatically generates a bill for the end user.
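The charge-on-consumption step above can be sketched as: compute the price from the item's cost value and units, check the account balance, and either debit it or refuse the transfer. Field names and the price formula are illustrative assumptions:

```python
def consume(accounts, user, item):
    """Charge the item's price (cost value per unit times units) to the
    user's account; refuse the transfer when the balance is insufficient."""
    price = item["cost_value"] * item["units"]
    if accounts[user] < price:
        return False
    accounts[user] -= price
    return True

accounts = {"alice": 5.0}
ok = consume(accounts, "alice", {"cost_value": 0.5, "units": 4})      # costs 2.0
denied = consume(accounts, "alice", {"cost_value": 2.0, "units": 2})  # costs 4.0, balance is 3.0
```

The periodic transfer of balances to the regional billing server would run separately from this per-request check.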

Journal ArticleDOI
01 Oct 1998
TL;DR: Measurements of the prototype NASD system show that these services can be cost-effectively integrated into a next-generation disk drive ASIC, and show scalable bandwidth for NASD-specialized filesystems.
Abstract: This paper describes the Network-Attached Secure Disk (NASD) storage architecture, prototype implementations of NASD drives, array management for our architecture, and three filesystems built on our prototype. NASD provides scalable storage bandwidth without the cost of servers used primarily for transferring data from peripheral networks (e.g. SCSI) to client networks (e.g. ethernet). Increasing dataset sizes, new attachment technologies, the convergence of peripheral and interprocessor switched networks, and the increased availability of on-drive transistors motivate and enable this new architecture. NASD is based on four main principles: direct transfer to clients, secure interfaces via cryptographic support, asynchronous non-critical-path oversight, and variably-sized data objects. Measurements of our prototype system show that these services can be cost-effectively integrated into a next generation disk drive ASIC. End-to-end measurements of our prototype drive and filesystems suggest that NASD can support conventional distributed filesystems without performance degradation. More importantly, we show scalable bandwidth for NASD-specialized filesystems. Using a parallel data mining application, NASD drives deliver a linear scaling of 6.2 MB/s per client-drive pair, tested with up to eight pairs in our lab.

Journal ArticleDOI
TL;DR: This article surveys the risks connected with the use of mobile agents, and security techniques available to protect mobile agents and their hosts, and identifies the inadequacies of the security techniques developed from the information fortress model.
Abstract: The practicality of mobile agents hinges on realistic security techniques. Mobile agent systems are combination client/servers that transport, and provide an interface with host computers for, mobile agents. Transport of mobile agents takes place between mobile agent systems, which are located on heterogeneous platforms, making up an infrastructure that has the potential to scale to the size of any underlying network. Mobile agents can be rapidly deployed, and can respond to each other and their environment. These abilities expose flaws in current security technology. This article surveys the risks connected with the use of mobile agents, and security techniques available to protect mobile agents and their hosts. The inadequacies of the security techniques developed from the information fortress model are identified. They are the result of using a good model in an inappropriate context (i.e. a closed system model in a globally distributed networking computing base). Problems with commercially available techniques include: (1) conflicts between security techniques protecting hosts and mobile agents, (2) inability to handle multiple collaborative mobile agents, and (3) emphasis on the credentials of software instead of on the integrity of software to determine the level of trust.

Patent
23 Jul 1998
TL;DR: In this article, a client-based system is presented for the fault-tolerant delivery of real-time or continuous data streams, such as real-time multimedia streams, e.g., live audio and video clips.
Abstract: A client-based system for the fault tolerant delivery of real-time or continuous data streams, such as real-time multimedia streams, e.g., live audio and video clips. Multimedia servers are grouped into two or more sets, for example wherein a first set includes one or more primary servers using odd-numbered ports and a second set includes one or more secondary servers using even-numbered ports. The client requests a multimedia stream through a control server or gateway which routes requests to the multimedia servers; and the client receives the stream directly from a selected (primary) server. The client automatically detects load imbalances and/or failures (complete or partial) and dynamically switches to a secondary server in order to continue receiving the real-time multimedia stream with minimal disruption and while maintaining a balanced load across multiple servers in a distributed network environment. The determination can be made based on: the received bit or frame rate (for video); a bit rate or sample rate (for audio); monitoring a delivery rate or for packets arriving out of order: for example using packet numbering mechanisms available in TCP; sequence numbering or time stamp capabilities of RTP (in combination with the User Datagram Protocol (UDP)). In any case, the determination could be based on the rate measurement or monitoring mechanism falling below (or exceeding) some threshold. Alternately, the primary server or the control server could send an explicit distress or switch signal to the client. An explicit signal can be used for example to switch clients in phases with minimal disruption.
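The client-side failure detector above reduces to a rate check: when the measured delivery rate (bit rate, frame rate, or sample rate) falls below a threshold, the client switches from the primary to the secondary server. A minimal sketch; the threshold, rate units, and port convention are illustrative, though the odd/even port split follows the abstract:

```python
def pick_server(current, primary, secondary, measured_rate, threshold):
    """Switch from the primary to the secondary stream server when the
    measured delivery rate falls below the threshold; otherwise stay put."""
    if current == primary and measured_rate < threshold:
        return secondary
    return current

server = "primary:8001"   # primary servers use odd-numbered ports, per the abstract
server = pick_server(server, "primary:8001", "secondary:8002",
                     measured_rate=12_000, threshold=64_000)
```

A fuller implementation would also cover the explicit distress/switch signal from the server and out-of-order packet detection via RTP sequence numbers, both mentioned in the abstract.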

Patent
07 Apr 1998
TL;DR: In this paper, a graphical user interface 116 is presented to a user to allow the user to perform a large number of functions and to access databases of information associated with calling and called parties.
Abstract: A telecommunications system (10) is provided that allows telephone functions to be accessed through a client computer system (14). A server computer system (16) provides telephony services, database services and access to E-mail, voice mail, video conferencing and facsimile systems. A graphical user interface (116) is presented to a user to allow the user to perform a large number of functions and to access databases of information associated with calling and called parties.

Proceedings ArticleDOI
16 Apr 1998
TL;DR: A detailed performance study of three important classes of commercial workloads: online transaction processing (OLTP), decision support systems (DSS), and Web index search, which characterizes the memory system behavior of these workloads through a large number of architectural experiments augmented with full system simulations to determine the impact of architectural trends.
Abstract: Commercial applications such as databases and Web servers constitute the largest and fastest-growing segment of the market for multiprocessor servers. Ongoing innovations in disk subsystems, along with the ever increasing gap between processor and memory speeds, have elevated memory system design as the critical performance factor for such workloads. However, most current server designs have been optimized to perform well on scientific and engineering workloads, potentially leading to design decisions that are non-ideal for commercial applications. The above problem is exacerbated by the lack of information on the performance requirements of commercial workloads, the lack of available applications for widespread study, and the fact that most representative applications are too large and complex to serve as suitable benchmarks for evaluating trade-offs in the design of processors and servers. This paper presents a detailed performance study of three important classes of commercial workloads: online transaction processing (OLTP), decision support systems (DSS), and Web index search. We use the Oracle commercial database engine for our OLTP and DSS workloads, and the AltaVista search engine for our Web index search workload. This study characterizes the memory system behavior of these workloads through a large number of architectural experiments on Alpha multiprocessors augmented with full system simulations to determine the impact of architectural trends. We also identify a set of simplifications that make these workloads more amenable to monitoring and simulation without affecting representative memory system behavior. We observe that systems optimized for OLTP versus DSS and index search workloads may lead to diverging designs, specifically in the size and speed requirements for off-chip caches.

Patent
30 Jul 1998
TL;DR: In this paper, a plurality of users communicate in real-time text conversations (e.g., chat sessions) in a client-server message processing environment using messages including a conversation index, a conversation-initiator ID and a list of message recipients.
Abstract: A plurality of users communicate in a plurality of real-time text conversations (e.g., “chat sessions”) in a client-server message processing environment using messages including a conversation index, a conversation-initiator ID and a list of message recipients. Each conversation is maintained at client terminals in an individual window. Dropping and controlled adding of conversation participants is attended by message updates to other participants. Alternative peer-to-peer message handling reduces the processing burden on servers while allowing clients to perform control and display functions. Voice or other non-text messages are also communicated using described techniques.
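The message format described above (conversation index, initiator ID, recipient list) can be sketched briefly. The field and class names here are illustrative assumptions, not the patent's terminology.

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:
    conversation_index: int   # distinguishes concurrent conversations
    initiator_id: str         # user who started the conversation
    recipients: list          # current list of message recipients
    body: str

class ChatClient:
    """Maintains one window (here, just a list of lines) per conversation."""

    def __init__(self):
        self.windows = {}     # (initiator_id, conversation_index) -> lines

    def receive(self, msg):
        key = (msg.initiator_id, msg.conversation_index)
        self.windows.setdefault(key, []).append(msg.body)

def drop_participant(msg, user):
    """Dropping a participant produces an update message for the remaining recipients."""
    remaining = [r for r in msg.recipients if r != user]
    return ChatMessage(msg.conversation_index, msg.initiator_id,
                       remaining, f"{user} has left the conversation")
```

Keying windows by (initiator, index) rather than index alone lets two users each start a "conversation 1" without collision.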

Patent
11 Dec 1998
TL;DR: In this paper, a user control interface is provided that is location dependent, where context control parameters are associated with location, and the user interface is customized to the context within which the device is being operated.
Abstract: A user control interface is provided that is location dependent. Context control parameters are associated with location, and the user control interface is customized to the context within which the device is being operated. The control interface includes the presentation of context sensitive information and the communication of corresponding context sensitive user commands via the interface. The location determination is effected using any number of commonly available techniques, such as direct entry, infrared sensors and active badges for relative positioning, as well as the conventional absolute positioning devices such as LORAN and GPS. Preferably, the device communicates with a remote information source that provides the context sensitive control information. The remote information source may be a home network server, an Internet server, a public service network, or other communication network.
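The association of context control parameters with location amounts to a lookup keyed by the determined position. A minimal sketch, in which the table contents, locations, and command names are all hypothetical:

```python
# Hypothetical context table mapping a determined location to the control
# parameters the interface should present there (e.g. served by a home
# network server or other remote information source).
CONTEXT_TABLE = {
    "living_room": {"commands": ["tv_power", "volume", "channel"], "layout": "remote"},
    "kitchen":     {"commands": ["timer", "oven_temp"],            "layout": "panel"},
}

def customize_interface(location, table=CONTEXT_TABLE):
    """Return the context-sensitive control set for the device's current location."""
    return table.get(location, {"commands": [], "layout": "generic"})
```

The location argument could come from any of the positioning techniques named in the abstract (direct entry, infrared sensors, active badges, GPS).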

Patent
16 Oct 1998
TL;DR: In this article, the authors present a system where participants in a network use self defining electronic documents, such as XML-based documents, which can be easily understood amongst the partners, to handle the transaction in a way which closely parallels the way in which paper based businesses operate.
Abstract: Participant servers in a network of customers, suppliers and other trading partners exchange machine-readable documents. The participants in the network use self-defining electronic documents, such as XML-based documents, which can be easily understood amongst the partners. Definitions of the electronic business documents, called business interface definitions, are posted on the Internet, or otherwise communicated to members of the network. The business interface definitions tell potential trading partners the services the company offers and the documents to use when communicating with such services. Thus, a typical business interface definition allows a customer to place an order by submitting a purchase order, or a supplier to check availability by downloading an inventory status report. Participants are programmed by the composition of the input and output documents, coupled with interpretation information in a common business library, to handle the transaction in a way which closely parallels the way in which paper-based businesses operate.
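Dispatching an incoming self-defining document by its root element can be sketched in a few lines. The document shape and element names below are invented for illustration; they are not the common business library's actual schema.

```python
import xml.etree.ElementTree as ET

# A hypothetical self-describing purchase order, of the kind a business
# interface definition would declare as the input to an ordering service.
PURCHASE_ORDER = """\
<PurchaseOrder>
  <Buyer>Acme Corp</Buyer>
  <Item sku="WIDGET-7" quantity="12"/>
  <Item sku="GEAR-3" quantity="4"/>
</PurchaseOrder>"""

def handle_document(xml_text):
    """Dispatch an incoming document to a service based on its root element."""
    root = ET.fromstring(xml_text)
    if root.tag == "PurchaseOrder":
        items = [(i.get("sku"), int(i.get("quantity"))) for i in root.findall("Item")]
        return {"service": "ordering", "buyer": root.findtext("Buyer"), "items": items}
    raise ValueError(f"no service registered for document type {root.tag!r}")
```

Because the document names its own type, routing to the right service needs no out-of-band protocol negotiation, which is the point of the self-defining format.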

Patent
02 Apr 1998
TL;DR: In this article, the VOW server interprets each hyperlink request in consideration of the identity of the exercising client SUV and additional data of a demographic, socioeconomic, credit, viewing preference, security and/or past hyperlinking history nature.
Abstract: Streaming digital hypervideo including copious embedded hyperlinks is distributed upon a digital communications network from a hypervideo server, normally an Internet Service Provider, to multitudinous client subscribers/users/viewers (client SUVs). Some or all of the client SUVs receive the same hyperlinks at the same place in the streaming hypervideo. Some small fraction of the client SUVs selectively volitionally exercise a fraction of the total hyperlinks, causing an access in the background of the unfolding hypervideo across the digital communications network to yet another server commonly called a "Video On Web server", or "VOW server". The VOW server interprets each hyperlink request in consideration of (i) the identity of the exercising client SUV and, most commonly, (ii) additional data of a demographic, socioeconomic, credit, viewing preference, security and/or past hyperlinking history nature. The VOW Server supplies each hyperlink-exercising client SUV with a potentially custom hyperlink --normally in the form of a network uniform resource locator (URL) or an index to a file of URLs--while keeping track of commercially useful data regarding the client SUV response(s). Each client SUV uses its own associated received URL to retrieve a potentially unique resource. The resource can be internal, such as an executable software program, but is normally located somewhere on the network and is typically in the nature of tailored and/or targeted advertisements, messages of personal or local or temporal pertinence and/or urgency, and/or the results of contests or lotteries. Hypervideo hyperlinks are thus dynamically resolved during streaming network communications to support full custom hyperlinking by each of multitudinous networked client SUVs.
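The dynamic resolution step, mapping one shared hyperlink to a per-viewer URL based on profile data, can be sketched as a rule table. The campaign rules, profile keys, and URLs below are assumptions for illustration only.

```python
# Hypothetical campaign rules: each pairs a predicate over client SUV profile
# data with a target URL; an empty match acts as the catch-all.
CAMPAIGNS = {
    "ad_slot_1": [
        {"match": {"region": "west"}, "url": "http://example.com/surf-promo"},
        {"match": {},                 "url": "http://example.com/generic-ad"},
    ],
}

def resolve_hyperlink(link_id, profile, campaigns=CAMPAIGNS):
    """Return a potentially custom URL for the exercising client, as a VOW server might."""
    for rule in campaigns.get(link_id, []):
        # A rule fires when every key it names matches the client's profile.
        if all(profile.get(k) == v for k, v in rule["match"].items()):
            return rule["url"]
    return None
```

Every viewer exercises the same embedded link, but each receives a URL resolved against their own demographic and history data.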

Patent
23 Jun 1998
TL;DR: In this paper, a client-side dispatcher performs TCP state migration to relocate the client-server TCP connection to a new server by storing packets locally and later altering them before transmission, and the altered packets then establish a connection with the relocated server.
Abstract: A client-side dispatcher resides on a client machine below high-level client applications and TCP/IP layers. The client-side dispatcher performs TCP state migration to relocate the client-server TCP connection to a new server by storing packets locally and later altering them before transmission. The client-side dispatcher operates in several modes. In an error-recovery mode, when a server fails, error packets from the server are intercepted by the client-side dispatcher. Stored connection packets' destination addresses are changed to the address of a relocated server. The altered packets then establish a connection with the relocated server. Source addresses of packets from the server are changed to that of the original server that crashed so that the client application is not aware of the error. In a delayed URL-based dispatch mode, the client-side dispatcher intercepts connection packets before they are sent over the network. Reply packets are faked by the client-side dispatcher to appear to be from a server and then sent up to the client TCP/IP layers. The client's TCP then sends a URL packet identifying the resource requested. The client-side dispatcher decodes the URL, picks a server, and sends the packet to that server. Reply packets from the server are intercepted, and data packets are altered to have the source address of the faked server. Multicast of the initial packet to multiple servers is used for empirical load-balancing by the client. The first server to respond is chosen while the others are reset. Thus the client-side dispatcher picks the fastest of several servers.
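The address rewriting at the heart of the error-recovery mode can be sketched with packets modeled as plain dictionaries. This is a simplified sketch: real TCP state migration would also have to fix checksums and sequence state, which is omitted here.

```python
def redirect_stored_packets(stored_packets, failed_server, relocated_server):
    """Error-recovery mode: re-point stored connection packets at the relocated server."""
    redirected = []
    for pkt in stored_packets:
        pkt = dict(pkt)                      # copy before altering
        if pkt["dst"] == failed_server:
            pkt["dst"] = relocated_server    # outgoing: send to the new server
        redirected.append(pkt)
    return redirected

def mask_reply(pkt, failed_server, relocated_server):
    """Rewrite a reply's source so the client application never sees the failover."""
    pkt = dict(pkt)
    if pkt["src"] == relocated_server:
        pkt["src"] = failed_server           # incoming: pretend the old server answered
    return pkt
```

The symmetry of the two rewrites is what keeps the migration transparent: the application above the dispatcher only ever sees the original server's address.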

Patent
30 Apr 1998
TL;DR: In this article, the authors present an architecture and system that loads and uses a smart card for payment of goods and/or services purchased on-line over the Internet, where a client module on a client terminal interfaces to a card reader which accepts the consumer's smart card and allows loading and debiting of the card.
Abstract: An architecture and system loads and uses a smart card for payment of goods and/or services purchased on-line over the Internet. A client module on a client terminal controls the interaction with a consumer and interfaces to a card reader which accepts the consumer's smart card and allows loading and debiting of the card. Debiting works in conjunction with a merchant server and a payment server. Loading works in conjunction with a bank server and a load server. The Internet provides the routing functionality between the client terminal and the various servers. A payment server on the Internet includes a computer and a security module (or a security card in a terminal) to handle the transaction, data store and collection. A merchant server advertises the goods and/or services offered by a merchant for sale on a web site. The merchant contracts with an acquirer to accept smart card payments for goods and/or services purchased over the Internet. A consumer uses his smart card at the client terminal in order to purchase goods and/or services from the remote merchant server. The client terminal sends a draw request to the payment server. The payment server processes, confirms and replies to the merchant server (optionally by way of the client terminal). To load value, the client terminal requests a load from a user account at the bank server. A load request is sent from the card to the load server which processes, confirms and replies to the bank server (optionally by way of the client terminal). The bank transfers loaded funds to the card issuer for later settlement for a merchant from whom the user purchases goods with value on the card.
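The payment server's handling of a draw request reduces, at its core, to checking and decrementing the value on the card. This sketch is heavily simplified and hypothetical: it omits the security module, the acquirer, and settlement entirely, and the amounts are in arbitrary units.

```python
def process_draw_request(card_balance, amount):
    """Payment-server side of a debit: process the draw, then confirm or decline
    to the merchant server (optionally by way of the client terminal)."""
    if amount > card_balance:
        return card_balance, {"status": "declined", "reason": "insufficient value on card"}
    return card_balance - amount, {"status": "confirmed", "amount": amount}
```

A load request would run the mirror-image flow against the load server, increasing the card balance after the bank server approves the transfer.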

Patent
20 Jan 1998
TL;DR: In this paper, a server-based communications system provides dynamic customization of hypertext tagged documents presented to clients accessing the system; the customization, which pertains to the content of the documents, is based on the specific requirements of a class to which the client belongs.
Abstract: The present server-based communications system provides dynamic customization of hypertext tagged documents presented to clients accessing the system. The customization, which pertains to the content of the documents, is based on the specific requirements of a class to which the client belongs. The class may be defined by the identity of the source which refers the client to the system. The system utilizes a database which dynamically retrieves stored data in response to a server software tool which configures the data into hypertext tagged documents. The system utilizes a dynamic token scheme to pass the identity of the referring network site from document to document, to the eventual purchase document accessed by the client, through the hypertext tags.
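The token scheme, carrying the referring site's identity from document to document through the hypertext tags, can be sketched as URL rewriting. The `ref` parameter name is an assumption, and the sketch assumes links without pre-existing query strings.

```python
from urllib.parse import urlencode, parse_qs, urlparse

def tag_links(hrefs, referrer_token):
    """Embed the referring site's token in each hypertext link of a generated document."""
    return [f"{href}?{urlencode({'ref': referrer_token})}" for href in hrefs]

def referrer_of(url):
    """Recover the referring-site identity, e.g. on the eventual purchase document."""
    return parse_qs(urlparse(url).query).get("ref", [None])[0]
```

Because every generated document re-tags its outgoing links, the referrer identity survives an arbitrarily long navigation path to the purchase document without server-side session state.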