
Showing papers on "Web service published in 1999"


Journal ArticleDOI
TL;DR: This paper presents several data preparation techniques for identifying unique users and user sessions. Transactions identified by the proposed methods are used to discover association rules from real-world data using the WEBMINER system.
Abstract: The World Wide Web (WWW) continues to grow at an astounding rate in both the sheer volume of traffic and the size and complexity of Web sites. The complexity of tasks such as Web site design, Web server design, and of simply navigating through a Web site have increased along with this growth. An important input to these design tasks is the analysis of how a Web site is being used. Usage analysis includes straightforward statistics, such as page access frequency, as well as more sophisticated forms of analysis, such as finding the common traversal paths through a Web site. Web Usage Mining is the application of data mining techniques to usage logs of large Web data repositories in order to produce results that can be used in the design tasks mentioned above. However, there are several preprocessing tasks that must be performed prior to applying data mining algorithms to the data collected from server logs. This paper presents several data preparation techniques in order to identify unique users and user sessions. Also, a method to divide user sessions into semantically meaningful transactions is defined and successfully tested against two other methods. Transactions identified by the proposed methods are used to discover association rules from real world data using the WEBMINER system [15].

1,616 citations
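The user- and session-identification step described in the abstract above can be sketched as a timeout-based heuristic; the 30-minute threshold and the log-record fields below are illustrative assumptions, not the paper's actual parameters.

```python
from datetime import datetime, timedelta

SESSION_TIMEOUT = timedelta(minutes=30)  # assumed threshold, not from the paper

def sessionize(records):
    """Group (user_id, timestamp, url) log records into sessions.

    A new session starts when the gap between consecutive requests
    from the same user exceeds SESSION_TIMEOUT.
    """
    sessions = []
    last_seen = {}  # user_id -> (last timestamp, session index)
    for user, ts, url in sorted(records, key=lambda r: r[1]):
        prev = last_seen.get(user)
        if prev is None or ts - prev[0] > SESSION_TIMEOUT:
            sessions.append([url])               # open a new session
            last_seen[user] = (ts, len(sessions) - 1)
        else:
            sessions[prev[1]].append(url)        # continue current session
            last_seen[user] = (ts, prev[1])
    return sessions
```

The resulting sessions are the unit over which transaction identification and association-rule mining would then operate.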


Journal ArticleDOI
Jia Wang1
05 Oct 1999
TL;DR: This paper first describes the elements of a Web caching system and its desirable properties, then surveys the state-of-the-art techniques used in Web caching systems, and finally discusses the research frontier in Web caching.
Abstract: The World Wide Web can be considered as a large distributed information system that provides access to shared data objects. As one of the most popular applications currently running on the Internet, the World Wide Web has grown exponentially in size, resulting in network congestion and server overloading. Web caching has been recognized as one of the effective schemes to alleviate the service bottleneck and reduce network traffic, thereby minimizing user access latency. In this paper, we first describe the elements of a Web caching system and its desirable properties. Then, we survey the state-of-the-art techniques which have been used in Web caching systems. Finally, we discuss the research frontier in Web caching.

759 citations
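One of the replacement policies such a survey covers is least-recently-used (LRU) eviction; a minimal sketch using Python's `OrderedDict` (the capacity and keys below are illustrative):

```python
from collections import OrderedDict

class LRUCache:
    """Least-recently-used cache: evicts the object untouched the longest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, url):
        if url not in self.store:
            return None
        self.store.move_to_end(url)         # mark as most recently used
        return self.store[url]

    def put(self, url, body):
        if url in self.store:
            self.store.move_to_end(url)
        self.store[url] = body
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used
```

Real Web caches weigh other signals too (object size, fetch cost, freshness), which is exactly the design space such surveys map out.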


Journal ArticleDOI
TL;DR: This paper describes Mercator, a scalable, extensible Web crawler written entirely in Java, and comments on Mercator's performance, which is found to be comparable to that of other crawlers for which performance numbers have been published.
Abstract: This paper describes Mercator, a scalable, extensible Web crawler written entirely in Java. Scalable Web crawlers are an important component of many Web services, but their design is not well-documented in the literature. We enumerate the major components of any scalable Web crawler, comment on alternatives and tradeoffs in their design, and describe the particular components used in Mercator. We also describe Mercator’s support for extensibility and customizability. Finally, we comment on Mercator’s performance, which we have found to be comparable to that of other crawlers for which performance numbers have been published.

672 citations
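The components the abstract enumerates as common to any scalable crawler — a URL frontier, a seen-URL set, and a fetch-parse loop — can be sketched as follows; the `fetch` callable is a stand-in for the download-and-parse stage, not Mercator's actual Java implementation.

```python
from collections import deque

def crawl(seed_urls, fetch, max_pages=100):
    """Breadth-first crawl: fetch(url) -> list of outgoing links.

    The seen-set keeps the crawler from enqueueing a URL twice;
    the deque is a minimal stand-in for a real URL frontier.
    """
    frontier = deque(seed_urls)
    seen = set(seed_urls)
    visited = []
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        visited.append(url)
        for link in fetch(url):
            if link not in seen:
                seen.add(link)
                frontier.append(link)
    return visited
```

Mercator's contribution is making each of these pieces scale (e.g. a disk-backed frontier and a compact seen-URL structure); the loop structure itself stays this simple.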


Book
01 Jan 1999
TL;DR: This book examines the unique aspects of modeling web applications with the Web Application Extension for the Unified Modeling Language (WAE) enabling developers to model web-specific architectural elements using the Rational Unified Process or an alternative methodology.
Abstract: Building Web Applications with UML is a guide to building robust, scalable, and feature-rich web applications using proven object-oriented techniques. Written for the project manager, architect, analyst, designer, and programmer of web applications, this book examines the unique aspects of modeling web applications with the Web Application Extension (WAE) for the Unified Modeling Language (UML). The UML has been widely accepted as the standard modeling language for software systems, and as a result is often the best option for modeling web application designs. The WAE extends the UML notation with semantics and constraints, enabling developers to model web-specific architectural elements using the Rational Unified Process or an alternative methodology. Using UML allows developers to model their web applications as a part of the complete system and the business logic that must be reflected in the application. Readers will gain not only an understanding of the modeling process, but also the ability to map models directly into code. Key topics include:

- A basic introduction to web servers, browsers, HTTP, and HTML
- Gathering requirements and defining the system's use cases
- Transforming requirements into a model and then a design that maps directly into components of the system
- Defining the architecture of a web application, with an examination of three architectural patterns describing architectures for thin web client, thick web client, and web delivery designs
- Modeling, at the appropriate level of abstraction and detail, the appropriate artifacts, including web application pages, page relationships, navigation routes, client-side scripts, and server-side generation
- Creating code from UML models using ASP and VBScript
- Client-side scripting using DHTML, JavaScript, VBScript, applets, ActiveX controls, and DOM
- Using client/server protocols including DCOM, CORBA/IIOP, and Java's RMI
- Securing a web application with SET, SSL, PGP, certificates, and certificate authorities

662 citations


Patent
10 Nov 1999
TL;DR: In this article, a system is described that facilitates web-based comparison shopping in conventional, physical, non-web retail environments, where a wireless phone or similar hand-held wireless device with Internet Protocol capability is combined with a miniature bar code reader (installed either inside the phone or on a short cable).
Abstract: A system is disclosed that facilitates web-based comparison shopping in conventional, physical, non-web retail environments. A wireless phone or similar hand-held wireless device with Internet Protocol capability is combined with a miniature bar code reader (installed either inside the phone or on a short cable) and utilized to obtain definitive product identification by, for example, scanning a Universal Product Code (UPC) bar code from a book or other product. The wireless device transmits the definitive product identifier to a service routine (running on a Web server), which converts it to (in the case of books) its International Standard Book Number or (in the case of other products) whatever identifier is appropriate. The service routine then queries the Web to find price, shipping and availability information on the product from various Web suppliers. This information is formatted and displayed on the hand-held device's screen. The user may then use the hand-held device to place an order interactively.

568 citations
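A scanned UPC-A code carries a check digit, so a service routine can validate a scan before querying suppliers. The standard UPC-A rule (digits in odd positions weighted 3, even positions weighted 1) is sketched below; the patent itself does not specify this validation step, so treat it as illustrative plumbing.

```python
def upc_check_digit(digits11):
    """Compute the UPC-A check digit for the first 11 digits.

    Odd positions (1st, 3rd, ...) are weighted 3, even positions 1;
    the check digit brings the weighted total to a multiple of 10.
    """
    total = sum(int(d) * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(digits11))
    return (10 - total % 10) % 10

def is_valid_upc(code12):
    """True if the 12-digit code's last digit matches its check digit."""
    return (len(code12) == 12 and code12.isdigit()
            and upc_check_digit(code12[:11]) == int(code12[11]))
```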


Patent
05 Mar 1999
TL;DR: In this article, a system is disclosed that facilitates web-based information retrieval and display, where a wireless phone or similar hand-held wireless device with Internet Protocol capability is combined with other peripherals to provide a portable portal into the Internet.
Abstract: A system is disclosed that facilitates Web-based information retrieval and display. A wireless phone or similar hand-held wireless device with Internet Protocol capability is combined with other peripherals to provide a portable portal into the Internet. The wireless device prompts a user to input information of interest to the user. This information is transmitted as a query to a service routine (running on a Web server). The service routine then queries the Web to find price, shipping and availability information from various Web suppliers. This information is then available for use by application programs such as word processors, e-mail, accounting, graphical editors and other user tools. The system provides an innovative collaborative interface to many popular user applications that are useful in a mobile environment.

507 citations


Patent
28 Jan 1999
TL;DR: In this article, the authors describe a virtual desktop in a virtual computing environment, where a user is able to access the virtual desktop from a variety of systems through various communications links.
Abstract: A network of servers coupled to the Internet provides a virtual desktop in a virtual computing environment. A user is able to access the virtual desktop from a variety of systems through various communications links. A site server initially receives a URL access from the user at a local system. After a successful login, a personal web page of the user is retrieved from a file server and returned to the local system. Through the personal web page, the user is able to send commands that are received and processed by one or more backend servers. The web page represents the virtual desktop of the user and includes links for applications available to the user, files and folders accessible by the user, and other personal information of the user. The network provides facilities to manipulate and manage files, and facilities to access and process data from web sites on the Internet.

479 citations


Patent
Teresa Win1, Emilio Belmonte1
12 Feb 1999
TL;DR: In this paper, a single secure sign-on gives a user access to authorized Web resources, based on the user's role in the organization that controls the Web resources; the information resources are stored on a protected Web server.
Abstract: A single secure sign-on gives a user access to authorized Web resources, based on the user's role in the organization that controls the Web resources. The information resources are stored on a protected Web server. A user of a client or browser logs in to the system. A runtime module on the protected server receives the login request and intercepts all other requests by the client to use a resource. The runtime module connects to an access server that can determine whether a particular user is authentic and which resources the user is authorized to access. User information is associated with roles and functional groups of an organization to which the user belongs; the roles are associated with access privileges. The access server connects to a registry server that stores information about users, roles, functional groups, resources, and associations among them. The access server and registry server exchange encrypted information that authorizes the user to use the resource. The user is presented with a customized Web page showing only those resources that the user may access. Thereafter, the access server can resolve requests to use other resources without contacting the registry server. The registry server controls a flexible, extensible, additive data model stored in a database that describes the user, the resources, roles of the user, and functional groups in the enterprise that are associated with the user.

406 citations
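The role-based authorization model the patent describes — users mapped to roles, roles mapped to access privileges — reduces to a pair of lookups. The users, roles, and resources below are invented for illustration; a real registry server would hold this data in a database.

```python
# Illustrative registry data: users -> roles, roles -> permitted resources.
USER_ROLES = {
    "alice": {"engineer", "employee"},
    "bob": {"employee"},
}
ROLE_RESOURCES = {
    "employee": {"/intranet"},
    "engineer": {"/intranet", "/source"},
}

def authorized_resources(user):
    """Union of all resources the user's roles grant access to."""
    resources = set()
    for role in USER_ROLES.get(user, set()):
        resources |= ROLE_RESOURCES.get(role, set())
    return resources

def can_access(user, resource):
    return resource in authorized_resources(user)
```

`authorized_resources` is also what drives the customized page: the user sees only the links their roles permit.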


Journal ArticleDOI
TL;DR: This study compares two measurements of Web client workloads separated in time by three years, both captured from the same computing facility at Boston University and finds that for the computing facility represented by traces between 1995 and 1998, the benefits of using size‐based caching policies have diminished and the potential for caching requested files in the network has declined.
Abstract: Understanding the nature of the workloads and system demands created by users of the World Wide Web is crucial to properly designing and provisioning Web services. Previous measurements of Web client workloads have been shown to exhibit a number of characteristic features; however, it is not clear how those features may be changing with time. In this study we compare two measurements of Web client workloads separated in time by three years, both captured from the same computing facility at Boston University. The older dataset, obtained in 1995, is well known in the research literature and has been the basis for a wide variety of studies. The newer dataset was captured in 1998 and is comparable in size to the older dataset. The new dataset has the drawback that the collection of users measured may no longer be representative of general Web users; however, using it has the advantage that many comparisons can be drawn more clearly than would be possible using a new, different source of measurement. Our results fall into two categories. First we compare the statistical and distributional properties of Web requests across the two datasets. This serves to reinforce and deepen our understanding of the characteristic statistical properties of Web client requests. We find that the kinds of distributions that best describe document sizes have not changed between 1995 and 1998, although specific values of the distributional parameters are different. Second, we explore the question of how the observed differences in the properties of Web client requests, particularly the popularity and temporal locality properties, affect the potential for Web file caching in the network. We find that for the computing facility represented by our traces between 1995 and 1998, (1) the benefits of using size-based caching policies have diminished, and (2) the potential for caching requested files in the network has declined.

401 citations


Proceedings Article
06 Jun 1999
TL;DR: This paper presents the design of a new Web server architecture called the asymmetric multi-process event-driven (AMPED) architecture, and evaluates the performance of an implementation of this architecture, the Flash Web server.
Abstract: This paper presents the design of a new Web server architecture called the asymmetric multi-process event-driven (AMPED) architecture, and evaluates the performance of an implementation of this architecture, the Flash Web server. The Flash Web server combines the high performance of single-process event-driven servers on cached workloads with the performance of multiprocess and multi-threaded servers on disk-bound workloads. Furthermore, the Flash Web server is easily portable since it achieves these results using facilities available in all modern operating systems. The performance of different Web server architectures is evaluated in the context of a single implementation in order to quantify the impact of a server's concurrency architecture on its performance. Furthermore, the performance of Flash is compared with two widely-used Web servers, Apache and Zeus. Results indicate that Flash can match or exceed the performance of existing Web servers by up to 50% across a wide range of real workloads. We also present results that show the contribution of various optimizations embedded in Flash.

396 citations


Journal ArticleDOI
17 May 1999
TL;DR: The PageGather algorithm, which automatically identifies candidate link sets to include in index pages based on user access logs, is presented, and it is demonstrated experimentally that PageGather outperforms the Apriori data mining algorithm on this task.
Abstract: The creation of a complex Web site is a thorny problem in user interface design. In this paper we explore the notion of adaptive Web sites: sites that semi-automatically improve their organization and presentation by learning from visitor access patterns. It is easy to imagine and implement Web sites that offer shortcuts to popular pages. Are more sophisticated adaptive Web sites feasible? What degree of automation can we achieve? To address the questions above, we describe the design space of adaptive Web sites and consider a case study: the problem of synthesizing new index pages that facilitate navigation of a Web site. We present the PageGather algorithm, which automatically identifies candidate link sets to include in index pages based on user access logs. We demonstrate experimentally that PageGather outperforms the Apriori data mining algorithm on this task. In addition, we compare PageGather's link sets to pre-existing, human-authored index pages.
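The intuition behind PageGather — find sets of pages that co-occur in many visits — starts from pairwise co-occurrence counts over sessions. The sketch below covers only that counting step (the real algorithm then clusters a co-occurrence graph), and the support threshold is an illustrative parameter.

```python
from itertools import combinations
from collections import Counter

def cooccurrence_pairs(sessions, min_support=2):
    """Count page pairs that appear together in the same session.

    Returns the pairs seen in at least `min_support` sessions --
    candidate links for a synthesized index page.
    """
    counts = Counter()
    for session in sessions:
        # sorted(set(...)) dedupes within a session and normalizes pair order
        for a, b in combinations(sorted(set(session)), 2):
            counts[(a, b)] += 1
    return {pair for pair, n in counts.items() if n >= min_support}
```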

Patent
06 Aug 1999
TL;DR: In this paper, a multi-threaded name server is proposed to handle multiple concurrent name requests for a large number of domain names, and one or more additional network services are also provided, preferably using a centralized database.
Abstract: A method and apparatus for providing network hosting services is provided. According to one aspect of the invention, a multi-threaded name server handles multiple concurrent name requests, and is particularly well suited for a host system controlling information relating to a large number of domain names. In a preferred embodiment as described herein, a multi-threaded name server comprises a request dispatcher thread capable of spawning multiple child threads. The result is a multi-threaded, non-blocking name server capable of handling multiple concurrent name requests for a large number of domain names. In one embodiment, one or more additional network services are also provided, preferably using a centralized database. For example, in a particular embodiment, electronic message forwarding services are provided wherein an advertisement is associated with an electronic message based on the message contents. In another embodiment, web services are provided wherein hypertext markup language (HTML) pages are dynamically generated. In still another embodiment, both electronic message forwarding services and web services are provided by the same system using the centralized database.

Patent
22 Jun 1999
TL;DR: In this article, a system and method is disclosed for gathering and disseminating detailed information regarding web site visitation, where a server system is connected to the Internet and receives, processes and supplies detailed information from subscribed users.
Abstract: A system and method is disclosed for gathering and disseminating detailed information regarding web site visitation. A server system is connected to the Internet and receives, processes and supplies detailed information from subscribed users. In response to user queries, the server system provides detailed information regarding the sites that have been visited, the duration and times of such visits, the most popular web sites, the most popular jump sites from a particular web page, etc. Such information is gathered and transmitted to subscribers who have downloaded a client-side reporting and communicating software application that is compatible with the server system. In addition, since users submit profile information about themselves, much demographic information is known about the users. Demographic information as to the popularity of visited web sites may then be easily determined, stored and updated by the server system. This demographic information, in turn, may be provided to other users, or web site operators and advertisers. The invention disclosed also allows users to initiate chat sessions with other users visiting a particular web site, or post a virtual note on the site for other subscribers to read.

Patent
19 Aug 1999
TL;DR: In this paper, a method for replicating changes in a source file set on a destination file system includes identifying changes, storing the identified changes, and transmitting the modification list to a plurality of web servers.
Abstract: This invention relates to managing multiple web servers, and more particularly to a web service system and method that allows a system operator to distribute content to each web server in the web service system. In one embodiment, a method for replicating changes in a source file set on a destination file system includes identifying changes in a source file set, storing the identified changes in a modification list, and transmitting the modification list to an agent having access to a destination file system. In another embodiment, a method for replicating changes in a source file set on a destination file system includes identifying changes in a source file set, storing the identified changes in a modification list, and transmitting the modification list to a plurality of web servers. In another embodiment, a web service system includes a manager for managing the web service system, a host comprising a web server for receiving web page requests and an agent in communication with the manager, and a content distributor for providing content changes to the host. In another embodiment, a content distributor includes an identification module for identifying changes in a source file set, a modification list for storing identified changes, and a transmitter for transmitting the modification list to an agent having access to a destination file system.
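The "identify changes, store them in a modification list" step above can be sketched by comparing content digests between two snapshots of the source file set; the paths and the choice of SHA-256 are illustrative, not the patent's method.

```python
import hashlib

def snapshot(files):
    """Map each path to a digest of its content (files: path -> bytes)."""
    return {path: hashlib.sha256(data).hexdigest()
            for path, data in files.items()}

def modification_list(old_snap, new_snap):
    """Paths added, changed, or removed between two snapshots."""
    added = sorted(set(new_snap) - set(old_snap))
    removed = sorted(set(old_snap) - set(new_snap))
    changed = sorted(p for p in set(old_snap) & set(new_snap)
                     if old_snap[p] != new_snap[p])
    return {"added": added, "changed": changed, "removed": removed}
```

The resulting list is what would be transmitted to each agent, which applies it against its destination file system.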

Proceedings ArticleDOI
07 Nov 1999
TL;DR: An effective technique for capturing common user profiles based on association rule discovery and usage based clustering is proposed and techniques for combining this knowledge with the current status of an ongoing Web activity to perform real time personalization are proposed.
Abstract: We describe an approach to usage based Web personalization taking into account both the offline tasks related to the mining of usage data, and the online process of automatic Web page customization based on the mined knowledge. Specifically, we propose an effective technique for capturing common user profiles based on association rule discovery and usage based clustering. We also propose techniques for combining this knowledge with the current status of an ongoing Web activity to perform real time personalization. Finally, we provide an experimental evaluation of the proposed techniques using real Web usage data.

Proceedings ArticleDOI
01 May 1999
TL;DR: Results show that prefetching combined with a large browser cache and delta-compression can reduce client latency by up to 23.4% using the Prediction-by-Partial-Matching (PPM) algorithm, which generates 1% to 15% extra traffic on the modem links.
Abstract: The majority of the Internet population access the World Wide Web via dial-up modem connections. Studies have shown that the limited modem bandwidth is the main contributor to latency perceived by users. In this paper, we investigate one approach to reduce latency: prefetching between caching proxies and browsers. The approach relies on the proxy to predict which cached documents a user might reference next, and takes advantage of the idle time between user requests to push or pull the documents to the user. Using traces of modem Web accesses, we evaluate the potential of the technique at reducing client latency, examine the design of prediction algorithms, and investigate their performance, varying the parameters and implementation concerns. Our results show that prefetching combined with large browser cache and delta-compression can reduce client latency up to 23.4%. The reduction is achieved using the Prediction-by-Partial-Matching (PPM) algorithm, whose accuracy ranges from 40% to 73% depending on its parameters, and which generates 1% to 15% extra traffic on the modem links. A perfect predictor can increase the latency reduction to 28.5%, whereas without prefetching, large browser cache and delta-compression can only reduce latency by 14.4%. Depending on the desired properties of the algorithm, several configurations of PPM can be the best choice. Among several attractive simplifications of the scheme, some do more harm than others; in particular, it is important for the predictor to observe all accesses made by users, including browser cache hits.
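The PPM predictor blends Markov contexts of several orders; a first-order slice of the idea — predict the most likely next page from the last page alone — looks like this. The full algorithm falls back across orders and thresholds on prediction confidence, neither of which this sketch attempts.

```python
from collections import Counter, defaultdict

class FirstOrderPredictor:
    """Order-1 Markov predictor: counts (page -> next page) transitions."""

    def __init__(self):
        self.transitions = defaultdict(Counter)

    def train(self, access_sequence):
        """Record every consecutive (current, next) pair in the sequence."""
        for cur, nxt in zip(access_sequence, access_sequence[1:]):
            self.transitions[cur][nxt] += 1

    def predict(self, page):
        """Most frequent successor of `page`, or None if unseen."""
        nexts = self.transitions.get(page)
        if not nexts:
            return None
        return nexts.most_common(1)[0][0]
```

A proxy would train on its access trace and, during idle time, prefetch `predict(last_request)` into the browser cache.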

Patent
07 Dec 1999
TL;DR: In this article, a system and method of manipulating notes linked to Web pages and of manipulating the web pages can be found. But it is not yet a system for linking notes to web pages.
Abstract: A system and method of manipulating notes linked to Web pages, and of manipulating the Web pages. These Web pages (or portions of Web pages) can be stored at a Web site or in a local file system. The method of linking notes to Web pages operates by enabling a user to select a portion of a Web page, creating a annotation, linking the annotation to the selected portion, receiving a request from a user viewing the annotation to display the selected portion linked to the annotation, and invoking an application, if the application is not already invoked, and for causing the application to load the Web page and present the selected portion.

Patent
27 Jul 1999
TL;DR: In this paper, a rotating carousel consisting of Web pages in HTML format is broadcast in a one-way broadcast digital video network, along with a control map permitting the viewer to navigate among the HTML Web pages of the rotating carousel.
Abstract: In a one way broadcast digital video network, Internet HTML Web page data is formatted to fit within a standard MPEG-2 data packet structure, and multiplexed along with other MPEG-2 digital video signals for transport within a multiple channel digital video system. In particular, the headend server broadcasts a rotating carousel comprising an ensemble of Web pages in HTML format. The rotating carousel contains both broadcast Web pages and simulcast Web pages and a control map permitting the viewer to navigate among the HTML Web pages of the rotating carousel. In particular, the control map contains the locations of the HTML Web pages in the rotating carousel that correspond to broadcast Web pages. The control map further contains the locations of the HTML Web pages in the rotating carousel that correspond to simulcast Web pages. The control map is updated and rebroadcast whenever there is a change at the start, middle or end of a broadcast video program, thereby synchronizing the simulcast Web pages to the multiple channel broadcast digital video programs.

Proceedings ArticleDOI
31 Oct 1999
TL;DR: A Web traffic model designed to assist in the evaluation and engineering of shared communication networks is presented; because the model is behavioral, it can be extrapolated to assess the effect of changes in protocols, the network, or user behavior.
Abstract: The growing importance of Web traffic on the Internet makes it important that we have accurate traffic models in order to plan and provision. In this paper we present a Web traffic model designed to assist in the evaluation and engineering of shared communication networks. Because the model is behavioral we can extrapolate the model to assess the effect of changes in protocols, the network or user behavior. The increasing complexity of Web traffic has required that we base our model on the notion of a Web-request, rather than a Web page. A Web-request results in the retrieval of information that might consist of one or more Web pages. The parameters of our model are derived from an extensive trace of Web traffic. Web-requests are identified by analyzing not just the TCP header in the trace but also the HTTP headers. The effect of Web caching is incorporated into the model. The model is evaluated by comparing independent statistics from the model and from the trace. The reasons for differences between the model and the traces are given.

Patent
12 Feb 1999
TL;DR: In this article, the authors present a web management system including a database having a directory structure associating each web page of a web site with attributes thereof, which can be automatically or user-initiated.
Abstract: A web management system including a database having a directory structure associating each web page of a web site with attributes thereof. The web site management system may include a web server for displaying each web page, and a server-side front end daemon communicatable with the web server and the database. The front end daemon may identify the attributes of any user-changed web page and store the attributes of any user-changed web page in the database. The identifying and/or the storing may be automatic or user-initiated. In addition to, or in the alternative, the web management system may include a file system caching all web pages in a web site. The web pages so cached may be static. The web management system may include a server-side back end daemon communicatable with the database and the file system. The back end daemon may parse the attributes to generate the at least partially static web pages and store the generated, at least partially static web pages in the file system.

Patent
02 Feb 1999
TL;DR: In this paper, the authors describe a web page authoring interface that includes the ability to add force sensations to web page objects and immediately feel how the web page will feel to an end user.
Abstract: Force feedback is provided to a user of a client computer receiving information such as a web page over a network such as the World Wide Web from a server machine. The client machine has a force feedback interface device through which the user experiences physical force feedback. The web page may include force feedback information to provide authored force effects. Force feedback is correlated to web page objects by a force feedback program running on the client and based on input information from the interface device, the web page objects, and the force feedback information. Generic force effects can also be provided, which are applied uniformly at the client machine to all web page objects of a particular type as defined by user preferences at the client machine. A web page authoring interface is also described that includes the ability to add force sensations to a web page. The user may assign force effects to web page objects and immediately feel how the web page will feel to an end user. A web page is output by the interface, including force information to provide the force effects at a client. The authoring tool can include or access a force design interface for creating or modifying force effects.

Patent
10 Nov 1999
TL;DR: The authors present a visual Web site analysis program, implemented as a collection of software components for facilitating the analysis and management of Web sites and Web site content; it uses an extensible architecture that allows plug-in applications to manipulate the display of the site map.
Abstract: A visual Web site analysis program, implemented as a collection of software components, provides a variety of features for facilitating the analysis and management of web sites and Web site content. A mapping component scans a Web site over a network connection and builds a site map which graphically depicts the URLs and links of the site. Site maps are generated using a unique layout and display methodology which allows the user to visualize the overall architecture of the Web site. Various map navigation and URL filtering features are provided to facilitate the task of identifying and repairing common Web site problems, such as links to missing URLs. A dynamic page scan feature enables the user to include dynamically-generated Web pages within the site map by capturing the output of a standard Web browser when a form is submitted by the user, and then automatically resubmitting this output during subsequent mappings of the site. The Web site analysis program is implemented using an extensible architecture which includes an API that allows plug-in applications to manipulate the display of the site map. Various plug-ins are provided which utilize the API to extend the functionality of the analysis program, including an action tracking plug-in which detects user activity and behavioral data (link activity levels, common site entry and exit points, etc.) from server log files and then superimposes such data onto the site map.

Journal ArticleDOI
TL;DR: WebOQL as mentioned in this paper is a query language for web data restructuring, which synthesizes ideas from query languages for the Web, for semistructured data and for website restructuring.
Abstract: The widespread use of the Web has originated several new data management problems, such as extracting data from Web pages and making databases accessible from Web browsers, and has renewed the interest in problems that had appeared before in other contexts, such as querying graphs, semistructured data and structured documents. Several systems and languages have been proposed for solving each of these Web data management problems, but none of these systems addresses all the problems from a unified perspective. Many of these problems essentially amount to data restructuring: we have information represented according to a certain structure and we want to construct another representation of (part of it) using a different structure. We present the WebOQL system, which supports a general class of data restructuring operations in the context of the Web. WebOQL synthesizes ideas from query languages for the Web, for semistructured data and for Website restructuring.
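The restructuring operations WebOQL supports amount to taking data in one shape and rebuilding it in another. The snippet below is not WebOQL syntax — it is a plain-Python sketch of one such restructuring step, regrouping flat records (as might be extracted from Web pages) under a new tree structure; the record fields are invented for illustration:

```python
def restructure(records, group_key):
    """Rebuild a list of flat records as a tree grouped by `group_key` —
    the kind of restructuring step a WebOQL-style query performs."""
    result = {}
    for record in records:
        result.setdefault(record[group_key], []).append(
            {k: v for k, v in record.items() if k != group_key})
    return result

# Records as they might be extracted from Web pages ...
papers = [
    {"topic": "caching", "title": "Web caching survey", "year": 1999},
    {"topic": "mining",  "title": "Usage mining",       "year": 1999},
    {"topic": "caching", "title": "Proxy placement",    "year": 1998},
]
# ... regrouped under a different structure.
by_topic = restructure(papers, "topic")
```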

Proceedings ArticleDOI
07 Nov 1999
TL;DR: WEST, a WEb browser for Small Terminals, is described, that aims to solve some of the problems associated with accessing web pages on hand-held devices through a novel combination of text reduction and focus+context visualization.
Abstract: We describe WEST, a WEb browser for Small Terminals, that aims to solve some of the problems associated with accessing web pages on hand-held devices. Through a novel combination of text reduction and focus+context visualization, users can access web pages from a very limited display environment, since the system will provide an overview of the contents of a web page even when it is too large to be displayed in its entirety. To make maximum use of the limited resources available on a typical hand-held terminal, much of the most demanding work is done by a proxy server, allowing the terminal to concentrate on the task of providing responsive user interaction. The system makes use of some interaction concepts reminiscent of those defined in the Wireless Application Protocol (WAP), making it possible to utilize the techniques described here for WAP-compliant devices and services that may become available in the near future.
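The text-reduction idea — giving the user an overview of a page too large to display in full — can be illustrated with a minimal gist extractor. The strategy below (first sentence per block, truncated to the display width) is a simplification assumed for the sketch, not WEST's actual algorithm, which combines reduction with focus+context visualization:

```python
def reduce_blocks(blocks, max_chars=40):
    """Reduce each text block to a short gist: its first sentence,
    truncated to fit a small display and ellipsized when cut."""
    gists = []
    for block in blocks:
        first = block.split(". ")[0].strip()
        if len(first) > max_chars:
            first = first[:max_chars - 3].rstrip() + "..."
        gists.append(first)
    return gists

page = [
    "The Wireless Application Protocol defines services for handheld "
    "devices. It was published in 1998.",
    "Proxies do the heavy lifting. The terminal only renders.",
]
overview = reduce_blocks(page)
```

In WEST this work would run on the proxy server, so the terminal only receives the reduced overview and can devote its limited resources to responsive interaction.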

Proceedings ArticleDOI
01 May 1999
TL;DR: A taxonomy of tasks undertaken on the World-Wide Web, based on naturally-collected verbal protocol data, reveals that several previous claims about browsing behavior are questionable, and suggests that widget-centered approaches to interface design and evaluation may be incomplete with respect to good user interfaces for the Web.
Abstract: A prerequisite to the effective design of user interfaces is an understanding of the tasks for which that interface will actually be used. Surprisingly little task analysis has appeared for one of the most discussed and fastest-growing computer applications, browsing the World-Wide Web (WWW). Based on naturally-collected verbal protocol data, we present a taxonomy of tasks undertaken on the WWW. The data reveal that several previous claims about browsing behavior are questionable, and suggest that widget-centered approaches to interface design and evaluation may be incomplete with respect to good user interfaces for the Web.

Patent
10 Jun 1999
TL;DR: Web Services as mentioned in this paper is a method and apparatus for accessing and using services and applications from a number of sources into a customized application through an entity referred to as a web service, which maintains a directory of services available to provide processing or services along with the location of the services and the input/output schemas required by the services.
Abstract: The present invention provides a method and apparatus for accessing and combining services and applications from a number of sources into a customized application. The present invention accomplishes this through an entity referred to as a web service. The web services architecture maintains a directory of services available to provide processing or services, along with the location of the services and the input/output schemas required by the services. When a request for data or services is received, appropriate services are invoked by a web services engine using service drivers associated with each service. A web services application is then generated from a runtime model and is invoked to satisfy the request, by communicating as necessary with services in proper I/O formats. In one embodiment, the web services application provides responses in the form of HTML that can be used to generate pages to a browser.
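The directory-plus-driver dispatch described above can be sketched as a small registry: each entry records a driver callable and the input schema it expects, and the engine validates a request against that schema before invoking the driver. The `quote` service, its driver, and the field names are hypothetical, used only to exercise the pattern:

```python
class WebServicesEngine:
    """Minimal sketch of a service directory plus dispatch engine."""
    def __init__(self):
        self.directory = {}

    def register(self, name, driver, input_fields):
        """Record a service's driver and the input schema it requires."""
        self.directory[name] = {"driver": driver, "input": input_fields}

    def invoke(self, name, request):
        """Validate the request against the service's input schema,
        then hand it to the associated service driver."""
        entry = self.directory[name]
        missing = [f for f in entry["input"] if f not in request]
        if missing:
            raise ValueError(f"request missing fields: {missing}")
        return entry["driver"](request)

# A hypothetical quote service and its driver.
def quote_driver(request):
    prices = {"WIDGET": 9.99}
    return {"symbol": request["symbol"], "price": prices[request["symbol"]]}

engine = WebServicesEngine()
engine.register("quote", quote_driver, input_fields=["symbol"])
response = engine.invoke("quote", {"symbol": "WIDGET"})
```

In the patent's architecture the directory would also hold service locations and the engine would marshal requests over the network; the registry/driver separation, though, is the essential structure.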

Proceedings ArticleDOI
01 Jan 1999
TL;DR: In this article, the authors present a new approach to Web server resource management based on Web content adaptation, which subsumes traditional admission control-based techniques and enhances server performance by selectively adapting content in accordance with both load conditions and QoS requirements.
Abstract: The Internet is undergoing substantial changes from a communication and browsing infrastructure to a medium for conducting business and selling a myriad of emerging services. The World-Wide Web provides a uniform and widely-accepted application interface used by these services to reach multitudes of clients. These changes place the Web server at the center of a gradually emerging E-service infrastructure with increasing requirements for service quality, reliability, and security guarantees in an unpredictable and highly dynamic environment. Towards that end, we introduce a Web server QoS provisioning architecture for performance differentiation among classes of clients, performance isolation among independent services, and capacity planning to provide QoS guarantees on request rate and delivered bandwidth. We present a new approach to Web server resource management based on Web content adaptation. This approach subsumes traditional admission control-based techniques and enhances server performance by selectively adapting content in accordance with both load conditions and QoS requirements. Our QoS management solutions can be implemented either in middleware transparent to the server or by direct modification of the server software. We present experimental data to illustrate the practicality of our approach.
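The content-adaptation idea — serving progressively lighter versions of a page as load rises instead of rejecting requests outright — can be sketched with a simple version selector. The utilization thresholds and file names below are invented for illustration; the paper derives its adaptation decisions from measured load and per-class QoS requirements:

```python
def select_content(url, utilization, versions):
    """Pick a content version for `url` based on server utilization:
    full content when lightly loaded, progressively degraded
    (e.g. fewer or smaller images) as load rises."""
    # Thresholds are illustrative; a real server would derive them
    # from measured capacity and per-class QoS targets.
    if utilization < 0.7:
        level = "full"
    elif utilization < 0.9:
        level = "degraded"
    else:
        level = "minimal"
    return versions[url][level]

versions = {
    "/index.html": {
        "full": "index+images.html",
        "degraded": "index+thumbnails.html",
        "minimal": "index-text-only.html",
    }
}
```

Because every request still gets *some* response, this subsumes admission control: the "minimal" level plays the role of a cheap fallback rather than a refusal.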

Patent
09 Dec 1999
TL;DR: In this article, a method, device and system for displaying at a user interface device a richly-detailed video or TV file upon selecting a link to a web page corresponding to the file.
Abstract: A method, device and system for displaying at a user interface device a richly-detailed video or TV file upon selecting a link to a web page corresponding to the file. After the video or TV file has played, the web page is displayed, inserting a television or video experience into a web surfing experience. The video or TV file is downloaded to the user interface device during an otherwise idle time when bandwidth for downloading is available. Clicking the link to the web page may send a specialized URI from the device to a service center that signals the device to play the video or TV file and retrieve the web page, allowing the service center to manage display of the video or TV files. This has particular utility for links that are banner advertisements and provide video or TV advertisements upon clicking the banner to access a web page.

Patent
30 Apr 1999
TL;DR: In this paper, an Internet web interface to a network of at least one programmable logic control system running an application program for controlling output devices in response to status of input devices is presented.
Abstract: A control system includes an Internet web interface to a network of at least one programmable logic control system running an application program for controlling output devices in response to status of input devices. The Web interface runs Web pages from an Ethernet board coupled directly to the PLC back plane and includes an HTTP protocol interpreter, a PLC back plane driver, a TCP/IP stack, and an Ethernet board kernel. The Web interface provides access to the PLC back plane by a user at a remote location through the Internet. The interface translates the industry standard Ethernet, TCP/IP and HTTP protocols used on the Internet into data recognizable to the PLC. Using this interface, the user can retrieve all pertinent data regarding the operation of the programmable logic controller system.
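The translation layer described above — an HTTP protocol interpreter in front of a PLC back plane driver — can be sketched without sockets by parsing a raw request line and mapping a URL path to a register read. The register names, URL scheme, and `BackplaneDriver` stub are assumptions for the sketch, not details from the patent:

```python
class BackplaneDriver:
    """Stand-in for the PLC back plane driver: maps register names
    to the current values the controller would report."""
    def __init__(self, registers):
        self.registers = registers

    def read(self, name):
        return self.registers[name]

def handle_request(request_line, driver):
    """Tiny HTTP interpreter: translate `GET /registers/<name>` into
    a back plane read and render the value as an HTML page."""
    method, path, _version = request_line.split()
    if method != "GET" or not path.startswith("/registers/"):
        return "HTTP/1.0 404 Not Found\r\n\r\n"
    name = path[len("/registers/"):]
    value = driver.read(name)
    body = f"<html><body>{name} = {value}</body></html>"
    return f"HTTP/1.0 200 OK\r\nContent-Type: text/html\r\n\r\n{body}"

driver = BackplaneDriver({"input_3": 1, "output_7": 0})
reply = handle_request("GET /registers/input_3 HTTP/1.0", driver)
```

The Ethernet board in the patent layers exactly this kind of interpreter over a TCP/IP stack, so a remote browser can read controller state with no PLC-specific client software.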
