
Showing papers on "Web modeling published in 2001"


Journal ArticleDOI
TL;DR: This survey aims at providing a glimpse at the past, present, and future of this upcoming technology and highlights why it is expected that knowledge discovery and data mining can benefit from RDF and the Semantic Web.
Abstract: Universality, the property of the Web that makes it the largest data and information source in the world, is also the property behind the lack of a uniform organization scheme that would allow easy access to data and information. A semantic web, wherein different applications and Web sites can exchange information and hence exploit Web data and information to their full potential, requires the information about Web resources to be represented in a detailed and structured manner. Resource Description Framework (RDF), an effort in this direction supported by the World Wide Web Consortium, provides a means for the description of metadata which is a necessity for the next generation of interoperable Web applications. The success of RDF and the semantic web will depend on (1) the development of applications that prove the applicability of the concept, (2) the availability of application interfaces which enable the development of such applications, and (3) databases and inference systems that exploit RDF to identify and locate most relevant Web resources. In addition, many practical issues, such as security, ease of use, and compatibility, will be crucial in the success of RDF. This survey aims at providing a glimpse at the past, present, and future of this upcoming technology and highlights why we believe that the next generation of the Web will be more organized, informative, searchable, accessible, and, most importantly, useful. It is expected that knowledge discovery and data mining can benefit from RDF and the Semantic Web.
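
To make the idea of RDF metadata concrete, here is a minimal sketch (not taken from the survey) that describes a hypothetical Web resource with Dublin Core properties and prints the statements as N-Triples; the resource URI and property values are invented for illustration.

```python
# Minimal sketch: describing a Web resource with RDF-style triples and
# emitting them as N-Triples. The resource URI and Dublin Core property
# values below are illustrative assumptions, not from the survey.

DC = "http://purl.org/dc/elements/1.1/"

triples = [
    ("http://example.org/report.html", DC + "title",   "Quarterly Sales Report"),
    ("http://example.org/report.html", DC + "creator", "A. Author"),
    ("http://example.org/report.html", DC + "date",    "2001-06-30"),
]

def to_ntriples(subject, predicate, obj):
    """Serialize one triple; URIs are wrapped in <>, literal objects quoted."""
    return f"<{subject}> <{predicate}> \"{obj}\" ."

if __name__ == "__main__":
    for s, p, o in triples:
        print(to_ntriples(s, p, o))
```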

1,112 citations


Proceedings Article
11 Sep 2001
TL;DR: In this paper, the authors address the problem of designing a crawler capable of extracting content from the hidden Web, i.e., the high-quality content hidden behind search forms in large searchable electronic databases, which current crawlers ignore because they retrieve only pages reachable purely by following hypertext links.
Abstract: Current-day crawlers retrieve content only from the publicly indexable Web, i.e., the set of Web pages reachable purely by following hypertext links, ignoring search forms and pages that require authorization or prior registration. In particular, they ignore the tremendous amount of high quality content “hidden” behind search forms, in large searchable electronic databases. In this paper, we address the problem of designing a crawler capable of extracting content from this hidden Web. We introduce a generic operational model of a hidden Web crawler and describe how this model is realized in HiWE (Hidden Web Exposer), a prototype crawler built at Stanford. We introduce a new Layout-based Information Extraction Technique (LITE) and demonstrate its use in automatically extracting semantic information from search forms and response pages. We also present results from experiments conducted to test and validate our techniques.
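
The following is a simplified, hypothetical sketch of the form-filling step such a crawler performs: human-readable form labels are matched against a label-value set and a query is built for submission. The labels, values, and fuzzy-matching rule are assumptions, not the actual HiWE/LITE heuristics.

```python
# Simplified sketch of the form-filling step a hidden-Web crawler performs:
# match each form field's human-readable label against a label-value set
# (LVS) and build the query to submit. Labels, values, and the fuzzy
# matching rule are hypothetical, not the actual HiWE/LITE heuristics.
from difflib import SequenceMatcher

# Label-value set: concepts the crawler knows how to fill in.
LVS = {
    "company name": ["IBM", "Oracle"],
    "state": ["California", "New York"],
}

def best_concept(label, threshold=0.4):
    """Return the LVS concept whose label best matches this form label."""
    scored = [(SequenceMatcher(None, label.lower(), c).ratio(), c) for c in LVS]
    score, concept = max(scored)
    return concept if score >= threshold else None

def fill_form(form_fields):
    """form_fields maps form input names to their extracted labels."""
    query = {}
    for name, label in form_fields.items():
        concept = best_concept(label)
        if concept:
            query[name] = LVS[concept][0]   # try the first known value
    return query

if __name__ == "__main__":
    form = {"q_company": "Company Name", "q_state": "State / Province"}
    print(fill_form(form))   # {'q_company': 'IBM', 'q_state': 'California'}
```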

698 citations


Proceedings Article
01 Jan 2001
TL;DR: It is argued that versatility is an important feature of successful Web-based education systems and ELM-ART, an intelligent interactive educational system to support learning programming in LISP, demonstrates how some interactive and adaptive educational components can be implemented in a WWW context and how multiple components can be naturally integrated together in a single system.
Abstract: This paper discusses the problems of developing versatile adaptive and intelligent learning systems that can be used in the context of practical Web-based education. We argue that versatility is an important feature of successful Web-based education systems. We introduce ELM-ART, an intelligent interactive educational system to support learning programming in LISP. ELM-ART provides all learning material online in the form of an adaptive interactive textbook. Using a combination of an overlay model and an episodic student model, ELM-ART provides adaptive navigation support, course sequencing, individualized diagnosis of student solutions, and example-based problem-solving support. Results of an empirical study show different effects of these techniques on different types of users during the first lessons of the programming course. ELM-ART demonstrates how some interactive and adaptive educational components can be implemented in WWW context and how multiple components can be naturally integrated together in a single system.

582 citations


Proceedings ArticleDOI
01 Jul 2001
TL;DR: A UML model of Web applications is proposed for their high-level representation; this model is the starting point for several analyses that can help in the assessment of the static site structure, and it also drives Web application testing.
Abstract: The economic relevance of Web applications increases the importance of controlling and improving their quality. Moreover, the newly available technologies for their development allow the insertion of sophisticated functions, but often leave the developers responsible for their organization and evolution. As a consequence, a high demand is emerging for methodologies and tools for the quality assurance of Web-based systems. In this paper, a UML model of Web applications is proposed for their high-level representation. Such a model is the starting point for several analyses, which can help in the assessment of the static site structure. Moreover, it drives Web application testing, in that it can be exploited to define white-box testing criteria and to semi-automatically generate the associated test cases. The proposed techniques were applied to several real-world Web applications. The results suggest that automatic support for verification and validation activities can be extremely beneficial. In fact, it guarantees that all paths in the site which satisfy a selected criterion are properly exercised before delivery. The high level of automation that is achieved in test case generation and execution increases the number of tests that are conducted and simplifies the regression checks.
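
As a rough illustration of the path-based, white-box idea (not the paper's actual UML metamodel or tool), the sketch below models a site as a directed graph of pages and links and enumerates the maximal acyclic paths from the entry page as abstract test cases; the site graph is invented.

```python
# Toy sketch of white-box, path-based test generation over a site model:
# pages are nodes, links are edges, and a simple criterion selects all
# maximal acyclic paths from the entry page. The site graph is invented;
# the paper's UML model and testing criteria are richer than this.
SITE = {
    "home":     ["catalog", "login"],
    "login":    ["account"],
    "catalog":  ["item", "cart"],
    "item":     ["cart"],
    "cart":     ["checkout"],
    "account":  [],
    "checkout": [],
}

def test_paths(graph, start):
    """Yield every maximal acyclic path from start; each is one abstract test case."""
    stack = [[start]]
    while stack:
        path = stack.pop()
        successors = [n for n in graph[path[-1]] if n not in path]
        if not successors:
            yield path
        for nxt in successors:
            stack.append(path + [nxt])

if __name__ == "__main__":
    for p in test_paths(SITE, "home"):
        print(" -> ".join(p))
```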

523 citations


Proceedings ArticleDOI
09 Nov 2001
TL;DR: This paper proposes effective and scalable techniques for Web personalization based on association rule discovery from usage data that can achieve better recommendation effectiveness, while maintaining a computational advantage over direct approaches to collaborative filtering such as the k-nearest-neighbor strategy.
Abstract: To engage visitors to a Web site at a very early stage (i.e., before registration or authentication), personalization tools must rely primarily on clickstream data captured in Web server logs. The lack of explicit user ratings as well as the sparse nature and the large volume of data in such a setting poses serious challenges to standard collaborative filtering techniques in terms of scalability and performance. Web usage mining techniques such as clustering that rely on offline pattern discovery from user transactions can be used to improve the scalability of collaborative filtering, however, this is often at the cost of reduced recommendation accuracy. In this paper we propose effective and scalable techniques for Web personalization based on association rule discovery from usage data. Through detailed experimental evaluation on real usage data, we show that the proposed methodology can achieve better recommendation effectiveness, while maintaining a computational advantage over direct approaches to collaborative filtering such as the k-nearest-neighbor strategy.
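
A minimal sketch of the recommendation step might look like the following: previously mined association rules are matched against the active session window and the consequents are ranked by confidence. The rules and session are made-up data, and the paper's frequent-itemset structures and windowing details are not reproduced here.

```python
# Minimal sketch of recommendation from pre-mined association rules:
# match the active session window against rule antecedents and rank the
# consequents by rule confidence. Rules and the session are made-up data.

# Each rule: (antecedent item set, recommended page, confidence)
RULES = [
    ({"A.html", "B.html"}, "C.html", 0.82),
    ({"A.html"},           "D.html", 0.55),
    ({"B.html", "E.html"}, "F.html", 0.47),
]

def recommend(active_session, rules, top_n=3):
    window = set(active_session)
    scores = {}
    for antecedent, page, confidence in rules:
        if antecedent <= window and page not in window:
            scores[page] = max(scores.get(page, 0.0), confidence)
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:top_n]

if __name__ == "__main__":
    print(recommend(["A.html", "B.html"], RULES))
    # -> [('C.html', 0.82), ('D.html', 0.55)]
```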

499 citations


Journal ArticleDOI
TL;DR: The results from published studies of Web searching are reviewed and the searching characteristics of Web users are compared and contrasted with users of traditional information retrieval and online public access systems to discover if there is a need for more studies that focus predominantly or exclusively on Web searching.
Abstract: Research on Web searching is at an incipient stage. This aspect provides a unique opportunity to review the current state of research in the field, identify common trends, develop a methodological framework, and define terminology for future Web searching studies. In this article, the results from published studies of Web searching are reviewed to present the current state of research. The analysis of the limited Web searching studies available indicates that research methods and terminology are already diverging. A framework is proposed for future studies that will facilitate comparison of results. The advantages of such a framework are presented, and the implications for the design of Web information retrieval systems studies are discussed. Additionally, the searching characteristics of Web users are compared and contrasted with users of traditional information retrieval and online public access systems to discover if there is a need for more studies that focus predominantly or exclusively on Web searching. The comparison indicates that Web searching differs from searching in other environments.

466 citations


Book
01 Sep 2001
TL;DR: Capacity Planning for Web Services: Metrics, Models, and Methods introduces quantitative performance predictive models for every major Web scenario, showing precisely how to identify and address both potential and actual performance problems.
Abstract: From the Publisher: The #1 guide to Web capacity planning — now completely updated!
* A quantitative analysis of Web service availability
* Integrated coverage of benchmarking, load testing, workload forecasting, and performance modeling of Web services
* Examples and case studies show how to use each technique in the latest Web services, portals, search engines, mobile and streaming-media applications
* A quantitative framework for planning the capacity of Web services and understanding their behavior
The world's #1 book on Web capacity planning now covers the latest Web services, e-business, and mobile applications! Capacity Planning for Web Services: Metrics, Models, and Methods introduces quantitative performance predictive models for every major Web scenario, showing precisely how to identify and address both potential and actual performance problems. Coverage includes:
* Web services: protocols, interaction models, and unique performance, reliability, and availability challenges
* State-of-the-art capacity planning methodologies
* Spreadsheets that implement the solutions of the models presented in the book
* Specific issues and workloads associated with HTTP and TCP/IP protocols
* Benchmarking current performance at system and component levels
From accommodating current usage peaks to defining service provider SLAs, Daniel A. Menasce and Virgilio Almeida cover every aspect of capacity planning — helping you optimize every tradeoff between cost and performance. "This book is the best guide available to understanding the unique issues involved in delivering today's Web services." — Mark Crovella, Associate Professor, Boston University; Technical Director, Network Appliance. "...a superb starting point for anyone wishing to explore the world of Web performance." — Jeffrey P. Buzen, President of CMG; Co-Founder, BGS Systems. "There is no other book like this. It is a first." — Peter J. Denning, Professor of Computer Science, George Mason University and former President of the ACM. "Web servers have bursty and highly-skewed load characteristics. This book presents a new way to model, analyze, and plan for these new performance problems." — Jim Gray, Senior Researcher, Microsoft Research; 1998 ACM Turing Award Recipient. "...a welcome approach to the performance analysis of today's Web-based Internet. ... no simple and practical treatment has been offered before, and theirs is a timely contribution." — Leonard Kleinrock, Professor of Computer Science, UCLA; Chairman and Founder, Nomadix, Inc.
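
As a flavor of the kind of quantitative model the book covers (this is not code from the book), the sketch below applies the utilization law and a simple open single-queue response-time estimate to one web server tier, using invented arrival-rate and service-demand numbers.

```python
# Back-of-the-envelope capacity estimate in the spirit of such models
# (not code from the book): the utilization law and an M/M/1-style
# response-time estimate for one web server tier. The arrival rate and
# service demand are illustrative numbers only.

def utilization(arrival_rate, service_time):
    """Utilization law: U = X * S (throughput times service demand)."""
    return arrival_rate * service_time

def response_time(arrival_rate, service_time):
    """Open single-queue estimate: R = S / (1 - U), valid while U < 1."""
    u = utilization(arrival_rate, service_time)
    if u >= 1.0:
        raise ValueError("server saturated: add capacity or reduce demand")
    return service_time / (1.0 - u)

if __name__ == "__main__":
    reqs_per_sec = 40.0      # assumed peak arrival rate
    svc_time = 0.020         # assumed 20 ms of service demand per request
    print(f"utilization:   {utilization(reqs_per_sec, svc_time):.0%}")
    print(f"response time: {response_time(reqs_per_sec, svc_time) * 1000:.1f} ms")
```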

437 citations


Journal ArticleDOI
TL;DR: Ranging from simple to complex, Web services bring the promise of flexible, open-standards-based, distributed computing to the Internet.
Abstract: Web services are a new breed of Web applications. These independent application components are published on to the Web in such a way that other Web applications can find and use them. They take the Web to its next stage of evolution, in which software components can discover other software components and conduct business transactions. Examples of Web services include a credit card service that processes credit card transactions for a given account number, a market data service that provides stock market data associated with a specified stock symbol, and an airline service that provides flight schedule, availability, and reservation functionalities. Major vendors like IBM, Microsoft, Hewlett-Packard, and Sun, among others, are investing heavily in Web services technology. Ranging from simple to complex, Web services bring the promise of flexible, open-standards-based, distributed computing to the Internet.

426 citations


Journal ArticleDOI
TL;DR: Using traditional and emerging access control approaches to develop secure applications for the Web with a focus on mobile devices.
Abstract: Using traditional and emerging access control approaches to develop secure applications for the Web.

307 citations


Journal ArticleDOI
TL;DR: The emerging field of Web engineering aims to bring the current chaos in Web-based system development under control, minimize risks, and enhance Web site maintainability and quality.
Abstract: Within a short period, the Internet and World Wide Web have become ubiquitous, surpassing all other technological developments in our history. They've also grown rapidly in their scope and extent of use, significantly affecting all aspects of our lives. Industries such as manufacturing, travel and hospitality, banking, education, and government are Web-enabled to improve and enhance their operations. E-commerce has expanded quickly, cutting across national boundaries. Even traditional legacy information and database systems have migrated to the Web. Advances in wireless technologies and Web-enabled appliances are triggering a new wave of mobile Web applications. As a result, we increasingly depend on a range of Web applications. Now that many of us rely on Web-based systems and applications, they need to be reliable and perform well. To build these systems and applications, Web developers need a sound methodology, a disciplined and repeatable process, better development tools, and a set of good guidelines. The emerging field of Web engineering fulfils these needs. It uses scientific, engineering, and management principles and systematic approaches to successfully develop, deploy, and maintain high-quality Web systems and applications. It aims to bring the current chaos in Web-based system development under control, minimize risks, and enhance Web site maintainability and quality.

294 citations


Journal ArticleDOI
TL;DR: This work identifies two different architectures for RBAC on the Web, called user-pull and server-pull, and implements each architecture by integrating and extending well-known technologies such as cookies, X.509, SSL, and LDAP, providing compatibility with current web technologies.
Abstract: Current approaches to access control on Web servers do not scale to enterprise-wide systems because they are mostly based on individual user identities. Hence we were motivated by the need to manage and enforce strong and efficient RBAC technology in large-scale Web environments. To satisfy this requirement, we identify two different architectures for RBAC on the Web, called user-pull and server-pull. To demonstrate feasibility, we implement each architecture by integrating and extending well-known technologies such as cookies, X.509, SSL, and LDAP, providing compatibility with current Web technologies. We describe the technologies we use to implement RBAC on the Web in different architectures. Based on our experience, we also compare the tradeoffs of the different approaches.
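
A stripped-down sketch of the user-pull flow described above: the client first pulls its roles from a role server and then presents them to the Web server, which checks role permissions only. The users, roles, and permission table are invented, and the cookie/X.509/SSL/LDAP plumbing of the paper is omitted.

```python
# Stripped-down sketch of the user-pull RBAC flow: the client pulls its
# roles once from a role server, then presents them to the Web server,
# which only checks role permissions. Users, roles, and permissions are
# invented; the cookie / X.509 / SSL / LDAP plumbing is omitted.

ROLE_ASSIGNMENTS = {"alice": {"engineer", "employee"}, "bob": {"employee"}}
ROLE_PERMISSIONS = {
    "employee": {("GET", "/intranet/news")},
    "engineer": {("GET", "/specs"), ("POST", "/specs")},
}

def pull_roles(user):
    """Role server: hand the user its role set (the user-pull step)."""
    return ROLE_ASSIGNMENTS.get(user, set())

def authorize(presented_roles, method, url):
    """Web server: grant access if any presented role permits the request."""
    return any((method, url) in ROLE_PERMISSIONS.get(r, set())
               for r in presented_roles)

if __name__ == "__main__":
    roles = pull_roles("alice")                             # client pulls roles once
    print(authorize(roles, "POST", "/specs"))               # True
    print(authorize(pull_roles("bob"), "POST", "/specs"))   # False
```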

Journal ArticleDOI
TL;DR: The recent development, by a consortium led by the AICPA, of the so-called “eXtensible Business Reporting Language” (XBRL) is an initiative to create an XML-based specification for Web-based business reporting, which would mean that both humans and intelligent software agents could operate on financial information disseminated on the Web with a high degree of accuracy and reliability.

Journal ArticleDOI
01 Mar 2001
TL;DR: The World Wide Web Wrapper Factory (W4F) is presented, a toolkit for the generation of wrappers for Web sources that offers an expressive language to specify the extraction of complex structures from HTML pages and a declarative mapping to various data formats like XML.
Abstract: The Web so far has been incredibly successful at delivering information to human users. So successful actually, that there is now an urgent need to go beyond a browsing human. Unfortunately, the Web is not yet a well organized repository of nicely structured documents but rather a conglomerate of volatile HTML pages. To address this problem, we present the World Wide Web Wrapper Factory (W4F), a toolkit for the generation of wrappers for Web sources, that offers: (1) an expressive language to specify the extraction of complex structures from HTML pages; (2) a declarative mapping to various data formats like XML; (3) some visual tools to make the engineering of wrappers faster and easier.
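
In the same spirit (though not W4F's actual extraction language), a tiny wrapper might extract a repeated structure from an HTML fragment with a regular expression and map it to XML, as in this sketch with an invented page layout.

```python
# Tiny wrapper sketch in the same spirit (not W4F's extraction language):
# extract a repeated structure from an HTML fragment with a regular
# expression and map it to XML. The HTML layout assumed here is invented.
import re
from xml.etree import ElementTree as ET

HTML = """
<ul>
  <li><b>CFP: Web Modeling</b> (2001-10-01)</li>
  <li><b>New W3C Note</b> (2001-11-15)</li>
</ul>
"""

ITEM = re.compile(r"<li><b>(.*?)</b>\s*\((.*?)\)</li>")

def wrap(html):
    root = ET.Element("items")
    for title, date in ITEM.findall(html):
        item = ET.SubElement(root, "item", date=date)
        item.text = title
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    print(wrap(HTML))
```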

01 Jan 2001
TL;DR: This work is proposing frame-based representation as a suitable paradigm for building ontologies as well as the World Wide Web Consortium's RDF-formalism (and its extensions, such as the DARPA Agent Markup Language) as a manifestation of frame-based representation for the Web.
Abstract: We believe that to build the Semantic Web, the sharing of ontological information is required. This allows agents to reach partial shared understanding and thus interoperate. We are proposing frame-based representation as a suitable paradigm for building ontologies as well as the World Wide Web Consortium's RDF-formalism (and its extensions, such as the DARPA Agent Markup Language) as a manifestation of frame-based representation for the Web.

Patent
12 Jan 2001
TL;DR: The Web Content Server (800) described in this patent separates data production, interaction elements, and display information, and maintains these aspects of page production in different files.
Abstract: The present invention provides a system and method for integrating disparate platforms and software applications, and for dynamic generation of web content in a user-effort-efficient manner. The system of the present invention uses a Web Content Server (800) to interact with disparate systems and to improve the productivity of building web content. The Web Content Server provides a page engine or platform that separates data production, interaction elements, and display information and maintains these aspects of page production in different files.

DOI
01 Jan 2001
TL;DR: This paper discusses some data mining and machine learning techniques that could be used to enhance web-based learning environments, helping the educator to better evaluate the learning process as well as helping the learners in their learning endeavour.
Abstract: Web-based technology is often the technology of choice for distance education given the ease of use of the tools to browse the resources on the Web, the relative affordability of accessing the ubiquitous Web, and the simplicity of deploying and maintaining resources on the World Wide Web. Many sophisticated web-based learning environments have been developed and are in use around the world. The same technology is being used for electronic commerce and has become extremely popular. However, while there are clever tools developed to understand online customers’ behaviours in order to increase sales and profit, there is very little done to automatically discover access patterns to understand learners’ behaviour in web-based distance learning. Educators, using on-line learning environments and tools, have very little support to evaluate learners’ activities and discriminate between different learners’ on-line behaviours. In this paper, we discuss some data mining and machine learning techniques that could be used to enhance web-based learning environments for the educator to better evaluate the learning process, as well as for the learners to help them in their learning endeavour.
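
A small sketch of the kind of pre-processing such mining starts from: grouping Web log entries into per-learner sessions with an inactivity time-out so that per-session activity can then be analyzed. The (learner, time, URL) log format and the 30-minute time-out are assumptions, not details from the paper.

```python
# Small sketch of the pre-processing such mining starts from: group Web
# log entries into per-learner sessions with an inactivity time-out, so
# that per-session activity can then be analyzed. The (learner, time, url)
# log format and the 30-minute time-out are assumptions.
from collections import defaultdict

TIMEOUT = 30 * 60  # seconds of inactivity that close a session

# (learner, unix_time, url) -- made-up log entries
LOG = [
    ("lea", 1000, "/lesson1"), ("lea", 1300, "/quiz1"),
    ("lea", 9000, "/lesson2"),
    ("sam", 1100, "/lesson1"),
]

def sessionize(log):
    sessions = defaultdict(list)   # learner -> list of sessions (url lists)
    last_seen = {}
    for learner, t, url in sorted(log, key=lambda e: (e[0], e[1])):
        if learner not in last_seen or t - last_seen[learner] > TIMEOUT:
            sessions[learner].append([])        # start a new session
        sessions[learner][-1].append(url)
        last_seen[learner] = t
    return dict(sessions)

if __name__ == "__main__":
    for learner, sess in sessionize(LOG).items():
        print(learner, sess)
    # lea [['/lesson1', '/quiz1'], ['/lesson2']]
    # sam [['/lesson1']]
```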

Proceedings Article
11 Sep 2001
TL;DR: In this article, the authors present an extensive characterization of the graph structure of the Web, with a view to enabling high-performance applications that make use of this structure, showing that the Web emerges as the outcome of a number of essentially independent stochastic processes that evolve at various scales.
Abstract: Algorithmic tools for searching and mining the Web are becoming increasingly sophisticated and vital. In this context, algorithms that use and exploit structural information about the Web perform better than generic methods in both efficiency and reliability. We present an extensive characterization of the graph structure of the Web, with a view to enabling high-performance applications that make use of this structure. In particular, we show that the Web emerges as the outcome of a number of essentially independent stochastic processes that evolve at various scales. A striking consequence of this scale invariance is that the structure of the Web is "fractal"---cohesive subregions display the same characteristics as the Web at large. An understanding of this underlying fractal nature is therefore applicable to designing data services across multiple domains and scales. We describe potential applications of this line of research to optimized algorithm design for Web-scale data analysis.

Book ChapterDOI
TL;DR: The principles and roles of Web Engineering are presented, the similarities and differences between development of traditional software and Web-based systems are assessed, and key Web engineering activities are identified.
Abstract: In most cases, development of Web-based systems has been ad hoc, lacking systematic approach, and quality control and assurance procedures. Hence, there is now legitimate and growing concern about the manner in which Web-based systems are developed and their quality and integrity. Web Engineering, an emerging new discipline, advocates a process and a systematic approach to development of high quality Web-based systems. It promotes the establishment and use of sound scientific, engineering and management principles, and disciplined and systematic approaches to development, deployment and maintenance of Web-based systems. This paper gives an introductory overview on Web Engineering. It presents the principles and roles of Web Engineering, assesses the similarities and differences between development of traditional software and Web-based systems, and identifies key Web engineering activities. It also highlights the prospects of Web engineering and the areas that need further study.


Proceedings ArticleDOI
01 Apr 2001
TL;DR: The goal of this paper is to argue the need to approach the personalization issues in Web applications from the very beginning in the application’s development cycle through a design view, rather than only an implementation view.
Abstract: The goal of this paper is to argue the need to approach the personalization issues in Web applications from the very beginning in the application’s development cycle. Since personalization is a critical aspect in many popular domains such as e-commerce, it is important enough that it should be dealt with through a design view, rather than only an implementation view (which discusses mechanisms, rather than design options). We present different scenarios of personalization covering most existing applications. Since our design approach is based on the Object-Oriented Hypermedia Design Method, we briefly introduce the way in which we build Web application models as object-oriented views of conceptual models. We show how we specify personalized Web applications by refining views according to users’ profiles or preferences; we show that an object-oriented approach allows maximizing reuse in these specifications. We discuss some implementation aspects and compare our work with related approaches, and present some concluding remarks.
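
The design idea can be caricatured in code: pages are object-oriented views over a conceptual model, and a personalized view refines what is shown according to the user's profile. The classes and the "frequent buyer" rule below are invented for illustration; this is not the OOHDM notation.

```python
# Rough sketch of the design idea: pages are object-oriented views over a
# conceptual model, and a personalized view refines which attributes are
# shown according to the user's profile. The Product class and the
# "frequent buyer" rule are invented; this is not the OOHDM notation.
from dataclasses import dataclass

@dataclass
class Product:              # conceptual model class
    name: str
    price: float
    member_price: float

class ProductView:          # navigational view: what a page shows
    def render(self, product, profile):
        return {"name": product.name, "price": product.price}

class PersonalizedProductView(ProductView):
    """Refines the base view: frequent buyers see the member price."""
    def render(self, product, profile):
        page = super().render(product, profile)
        if profile.get("frequent_buyer"):
            page["price"] = product.member_price
        return page

if __name__ == "__main__":
    cd = Product("Some CD", 15.0, 12.5)
    print(ProductView().render(cd, {}))
    print(PersonalizedProductView().render(cd, {"frequent_buyer": True}))
```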

Journal ArticleDOI
TL;DR: This work proposes the OO-H method, an object-oriented software approach that captures relevant properties involved in modeling and implementing Web application interfaces, and proposes a solution to the problem of inadequate tools for building and deploying complex Web sites.
Abstract: Existing tools for building and deploying complex Web sites are inadequate for dealing with the software production process that involves connecting with underlying logic in a unified and systematic way. As a solution, we propose the OO-H method, an object-oriented software approach that captures relevant properties involved in modeling and implementing Web application interfaces.

Journal ArticleDOI
TL;DR: This work has developed methods for mapping web sources into a uniform representation that makes it simple and efficient to integrate multiple sources and makes it easy to maintain these agents and incorporate new sources as they become available.
Abstract: The Web is based on a browsing paradigm that makes it difficult to retrieve and integrate data from multiple sites. Today, the only way to do this is to build specialized applications, which are time-consuming to develop and difficult to maintain. We have addressed this problem by creating the technology and tools for rapidly constructing information agents that extract, query, and integrate data from web sources. Our approach is based on a uniform representation that makes it simple and efficient to integrate multiple sources. Instead of building specialized algorithms for handling web sources, we have developed methods for mapping web sources into this uniform representation. This approach builds on work from knowledge representation, databases, machine learning and automated planning. The resulting system, called Ariadne, makes it fast and easy to build new information agents that access existing web sources. Ariadne also makes it easy to maintain these agents and incorporate new sources as they become available.

Proceedings ArticleDOI
01 Mar 2001
TL;DR: A replicable WWW protocol analysis methodology illustrated by application to data collected in the laboratory is introduced, visualizing the structure of the interaction and showing the strong effect of information scent in determining the path followed.
Abstract: The purpose of this paper is to introduce a replicable WWW protocol analysis methodology illustrated by application to data collected in the laboratory. The methodology uses instrumentation to obtain detailed recordings of user actions with a browser, caches Web pages encountered, and videotapes talk-aloud protocols. We apply the current form of the method to the analysis of eight Web protocols, visualizing the structure of the interaction and showing the strong effect of information scent in determining the path followed.

Proceedings ArticleDOI
05 Oct 2001
TL;DR: This paper describes the results of an observational study into the methods people use to manage web information for re-use and the functional analysis can help to assess the likely success of various tools, current and proposed.
Abstract: This paper describes the results of an observational study into the methods people use to manage web information for re-use. People observed in our study used a diversity of methods and associated tools. For example, several participants emailed web addresses (URLs) along with comments to themselves and to others. Other methods observed included printing out web pages, saving web pages to the hard drive, pasting the address for a web page into a document and pasting the address into a personal web site. Ironically, two web browser tools that have been explicitly developed to help users track web information - the bookmarking tool and the history list - were not widely used by participants in this study. A functional analysis helps to explain the observed diversity of methods. Methods vary widely in the functions they provide. For example, a web address pasted into a self-addressed email can provide an important reminding function together with a context of relevance: The email arrives in an inbox which is checked at regular intervals and the email can include a few lines of text that explain the URL's relevance and the actions to be taken. On the other hand, for most users in the study, the bookmarking tool ("Favorites" or "Bookmarks" depending on the browser) provided neither a reminding function nor a context of relevance. The functional analysis can help to assess the likely success of various tools, current and proposed.

Book
29 Oct 2001
TL;DR: This book tells you how to design usable web sites in a systematic process applicable to almost any business need, and examines the entire spectrum of usability issues, including architecture, navigation, graphical presentation, and page structure.
Abstract: Every stage in the design of a new web site is an opportunity to meet or miss deadlines and budgetary goals. Every stage is an opportunity to boost or undercut the site's usability. This book tells you how to design usable web sites in a systematic process applicable to almost any business need. You get practical advice on managing the project and incorporating usability principles from the project's inception. This systematic usability process for web design has been developed by the authors and proven again and again in their own successful businesses. A beacon in a sea of web design titles, this book treats web site usability as a preeminent, practical, and realizable business goal, not a buzzword or abstraction. The book is written for web designers and web project managers seeking a balance between usability goals and business concerns.
* Examines the entire spectrum of usability issues, including architecture, navigation, graphical presentation, and page structure.
* Explains clearly the steps relevant to incorporating usability into every stage of the web development process, from requirements to task analysis, prototyping and mockups, to user testing, revision, and even postlaunch evaluations.
* Includes forms, checklists, and practical techniques that you can easily incorporate into your own projects at http://www.mkp.com/uew/.

Proceedings ArticleDOI
01 May 2001
TL;DR: This paper describes the architectural framework of the CachePortal system for enabling dynamic content caching for database-driven e-commerce sites, and describes techniques for intelligently invalidating dynamically generated web pages in the caches, thereby enabling caching of web pages generated based on database contents.
Abstract: Web performance is a key differentiation among content providers. Snafus and slowdowns at major web sites demonstrate the difficulty that companies face trying to scale to a large amount of web traffic. One solution to this problem is to store web content at server-side and edge-caches for fast delivery to the end users. However, for many e-commerce sites, web pages are created dynamically based on the current state of business processes, represented in application servers and databases. Since application servers, databases, web servers, and caches are independent components, there is no efficient mechanism to make changes in the database content reflected to the cached web pages. As a result, most application servers have to mark dynamically generated web pages as non-cacheable. In this paper, we describe the architectural framework of the CachePortal system for enabling dynamic content caching for database-driven e-commerce sites. We describe techniques for intelligently invalidating dynamically generated web pages in the caches, thereby enabling caching of web pages generated based on database contents. We use some of the most popular components in the industry to illustrate the deployment and applicability of the proposed architecture.
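
A toy sketch of the invalidation idea: record which cached URLs depend on which database tables, and evict exactly those URLs when a table changes. The tables, URLs, and in-process dictionary below stand in for the paper's application-server and edge-cache architecture.

```python
# Toy sketch of the invalidation idea: record which cached URLs depend on
# which database tables, and evict exactly those URLs when a table
# changes. The tables, URLs, and in-process dict stand in for the paper's
# application-server / edge-cache architecture.

CACHE = {}                              # url -> cached page body
DEPENDS_ON = {                          # table -> urls built from it
    "products": {"/catalog", "/product?id=7"},
    "prices":   {"/product?id=7"},
}

def cache_page(url, body):
    CACHE[url] = body

def on_table_update(table):
    """Called when the database reports a change to `table`."""
    for url in DEPENDS_ON.get(table, ()):
        CACHE.pop(url, None)            # invalidate dependent pages only

if __name__ == "__main__":
    cache_page("/catalog", "<html>catalog</html>")
    cache_page("/product?id=7", "<html>item 7</html>")
    cache_page("/about", "<html>static</html>")
    on_table_update("prices")
    print(sorted(CACHE))                # ['/about', '/catalog']
```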

Book
01 Mar 2001
TL;DR: This presentation discusses the development of a Web-based learning model for on-line learning, the review of existing courseware, and the future of on- line learning.
Abstract: Topics covered include: Web-based learning; is the event or subject suitable for Web-based learning?; a Web-based learning model; learning at a distance; Web-based learner support; copyright; 21 steps for building Web-based learning materials; delivery of on-line learning materials; review of existing courseware; and the future of on-line learning.

Proceedings ArticleDOI
01 May 2001
TL;DR: This paper introduces X-Sec, an XML-based language for specifying subject credentials and security policies and for organizing them into subject profiles and policy bases, respectively; the language is complemented by a set of subscription-based schemes for accessing distributed Web documents, which rely on the defined XML subject profiles and XML policy bases.
Abstract: The rapid growth of the Web and the ease with which data can be accessed facilitate the distribution and sharing of information. Information dissemination often takes the form of documents that are made available at Web servers, or that are actively broadcasted by Web servers to interested clients. In this paper, we present an XML-compliant formalism for specifying security-related information for Web document protection. In particular, we introduce X-Sec, an XML-based language for specifying subject credentials and security policies and for organizing them into subject profiles and policy bases, respectively. The language is complemented by a set of subscription-based schemes for accessing distributed Web documents, which rely on defined XML subject profiles and XML policy bases.
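
Purely as an illustration (the element names and matching rule below are made up, not the actual X-Sec syntax), a policy expressed in XML can be evaluated against an XML subject profile along these lines:

```python
# Illustrative evaluation of a small XML policy against an XML subject
# profile. The element and attribute names and the matching rule are made
# up for this sketch; they are not the actual X-Sec syntax.
from xml.etree import ElementTree as ET

PROFILE = ET.fromstring(
    '<profile subject="alice">'
    '  <credential name="membership" value="acm"/>'
    '</profile>')

POLICY = ET.fromstring(
    '<policy target="/reports/annual.xml" access="read">'
    '  <requires credential="membership" value="acm"/>'
    '</policy>')

def satisfies(profile, policy):
    """Grant access if every required credential appears in the profile."""
    have = {(c.get("name"), c.get("value"))
            for c in profile.findall("credential")}
    need = {(r.get("credential"), r.get("value"))
            for r in policy.findall("requires")}
    return need <= have

if __name__ == "__main__":
    print(satisfies(PROFILE, POLICY))   # True
```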

Journal ArticleDOI
TL;DR: This work extracts Web usage and failure information from existing Web logs to measure the reliability of Web applications and the potential effectiveness of statistical Web testing.
Abstract: Statistical testing and reliability analysis can be used effectively to assure quality for Web applications. To support this strategy, we extract Web usage and failure information from existing Web logs. The usage information is used to build models for statistical Web testing. The related failure information is used to measure the reliability of Web applications and the potential effectiveness of statistical Web testing. We applied this approach to analyze some actual Web logs. The results demonstrated the viability and effectiveness of our approach.
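
A small example of the measurement step: counting requests and failures in an access log and estimating reliability as R = 1 - f/n (the Nelson estimate). The log lines and the "status >= 500 means failure" rule are assumptions; the paper builds richer usage models from the same data.

```python
# Small example of the measurement step: count requests and failures in a
# Web access log and estimate reliability as R = 1 - f/n (Nelson estimate).
# The log lines and the "status >= 500 means failure" rule are assumptions.

LOG_LINES = [
    '10.0.0.1 - - "GET /index.html" 200',
    '10.0.0.2 - - "GET /search" 500',
    '10.0.0.1 - - "GET /item?id=3" 200',
    '10.0.0.3 - - "GET /index.html" 200',
]

def reliability(lines):
    n = len(lines)
    failures = sum(1 for line in lines if int(line.rsplit(" ", 1)[1]) >= 500)
    return n, failures, 1.0 - failures / n

if __name__ == "__main__":
    n, f, r = reliability(LOG_LINES)
    print(f"{n} requests, {f} failures, estimated reliability {r:.2f}")
```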

Patent
11 Dec 2001
TL;DR: In this paper, the authors propose a software construct, termed a Web service container, for managing Web services at a network node and an adaptive model for the dynamic configuration of a plurality of Web service containers distributed throughout a network, such as the Internet or an intranet, in a software and hardware platform-independent manner.
Abstract: The invention provides a software construct, herein termed a Web service container, for managing Web services at a network node and an adaptive model for the dynamic configuration of a plurality of Web service containers distributed throughout a network, such as the Internet or an intranet, in a software and hardware platform-independent manner. Containers can communicate with each other via the network to determine contextual information such as the identity of each other, the capabilities of each other, the operating system or platforms of each other, the contents of the container (i.e., the available Web services at that location), etc. By providing a container framework and the ability to exchange contextual information, the present invention allows servers as well as clients to dynamically exchange Web services software as well as contextual information, such as current workload, so that servers and clients are virtually limitlessly reconfigurable based on context.