
Showing papers on "Web modeling" published in 1997


Proceedings ArticleDOI
03 Nov 1997
TL;DR: This paper defines Web mining, presents an overview of the various research issues, techniques, and development efforts, briefly describes WEBMINER, a system for Web usage mining, and concludes by listing research issues.
Abstract: Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is no established vocabulary, leading to confusion when comparing research efforts. The term Web mining has been used in two distinct ways. The first, called Web content mining in this paper, is the process of information discovery from sources across the World Wide Web. The second, called Web usage mining, is the process of mining for user browsing and access patterns. We define Web mining and present an overview of the various research issues, techniques, and development efforts. We briefly describe WEBMINER, a system for Web usage mining, and conclude the paper by listing research issues.

1,365 citations


Proceedings ArticleDOI
08 Feb 1997
TL;DR: ShopBot, a fully-implemented, domain-independent comparison-shopping agent that relies on a combination of heuristic search, pattern matching, and inductive learning techniques, enables users to both find superior prices and substantially reduce Web shopping time.
Abstract: The World Wide Web is less agent-friendly than we might hope. Most information on the Web is presented in loosely structured natural language text with no agent-readable semantics. HTML annotations structure the display of Web pages, but provide virtually no insight into their content. Thus, the designers of intelligent Web agents need to address the following questions: (1) To what extent can an agent understand information published at Web sites? (2) Is the agent's understanding sufficient to provide genuinely useful assistance to users? (3) Is site-specific hand-coding necessary, or can the agent automatically extract information from unfamiliar Web sites? (4) What aspects of the Web facilitate this competence? In this paper we investigate these issues with a case study using ShopBot, a fully-implemented, domain-independent comparison-shopping agent. Given the home pages of several online stores, ShopBot autonomously learns how to shop at those vendors. After learning, it is able to speedily visit over a dozen software and CD vendors, extract product information, and summarize the results for the user. Preliminary studies show that ShopBot enables users to both find superior prices and substantially reduce Web shopping time. Remarkably, ShopBot achieves this performance without sophisticated natural language processing, and requires only minimal knowledge about different product domains. Instead, ShopBot relies on a combination of heuristic search, pattern matching, and inductive learning techniques.
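The abstract does not reproduce ShopBot's learned vendor descriptions, so the following is only a minimal Python sketch, with an invented results page and a hand-written pattern, of the kind of pattern matching a comparison-shopping agent applies once it knows a vendor's result format. ShopBot's contribution is learning such patterns per vendor automatically; here the pattern is simply assumed.

```python
import re

# Hypothetical vendor results page; real ShopBot learns each vendor's
# format rather than relying on a hand-written pattern like this one.
PAGE = """
<b>Quicken Deluxe 99</b> ... our price: $59.95<br>
<b>Norton Utilities 3.0</b> ... our price: $49.99<br>
"""

# Assumed pattern: a bold product name followed later on the line by a price.
ROW = re.compile(r"<b>(?P<product>[^<]+)</b>.*?\$(?P<price>\d+\.\d{2})")

def extract_offers(html: str):
    """Return (product, price) pairs found by simple pattern matching."""
    return [(m["product"].strip(), float(m["price"]))
            for m in ROW.finditer(html)]

if __name__ == "__main__":
    for product, price in sorted(extract_offers(PAGE), key=lambda o: o[1]):
        print(f"{price:7.2f}  {product}")
```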

593 citations


Book
01 Jun 1997
TL;DR: This book presents the most comprehensive data available on how Web sites actually work when users need specific answers, and offers guidance for evaluating and improving the usability of Web sites.
Abstract: From the Publisher: "Without a doubt, the most important book I've read this year on Web design is Web Site Usability: A Designer's Guide. The book is easy to read and full of relevant information." —Bill Skeet, Chief Designer, Knight-Ridder New Media "Even experienced Web designers should read these usability findings about 11 different site designs. Competitive usability testing is one of the most powerful ways of learning about design and this book will save you hours of lab time." —Dr. Jakob Nielsen, the Nielsen Norman Group "This report challenges many of my assumptions about Web design, but that's a good thing. We're still babes in the woods, crawling along trying to distinguish the trees from the forest. Any sign posts are helpful, right now." —Mary Deaton, KNOWware Web Site Usability: A Designer's Guide is a report that every person involved in Web design, commerce, or online marketing will want to have. This book offers, undoubtedly, the most comprehensive data demonstrating how Web sites actually work when users need specific answers. Researched and compiled by User Interface Engineering, the results are written in an easy-to-understand style, illustrating the need to make Web sites useful, not complicated. Features: - Based on an extensive study of actual users — not theory, not graphic design principles, and not new tricks to make a "cool" Web site - Demonstrates how people actually navigate and extract information on Web sites - Offers guidance for evaluating and improving the usability of Web sites Jared M. Spool, Principal Investigator, is with User Interface Engineering, a consulting firm specializing in product usability and design. User Interface Engineering's mission is to empower product development teams to build applications that meet the needs of their users by providing critical data for creating designs and products that work.

530 citations


Proceedings ArticleDOI
04 Nov 1997
TL;DR: A model of user browsing behavior is identified that separates web page references into those made for navigation purposes and those made for information content purposes. Transactions identified by the proposed method are used to discover association rules from real-world data using the WEBMINER system.
Abstract: Web-based organizations often generate and collect large volumes of data in their daily operations. Analyzing such data involves the discovery of meaningful relationships from a large collection of primarily unstructured data, often stored in Web server access logs. While traditional domains for data mining, such as point of sale databases, have naturally defined transactions, there is no convenient method of clustering web references into transactions. This paper identifies a model of user browsing behavior that separates web page references into those made for navigation purposes and those for information content purposes. A transaction identification method based on the browsing model is defined and successfully tested against other methods, such as the maximal forward reference algorithm proposed in (Chen et al., 1996). Transactions identified by the proposed methods are used to discover association rules from real world data using the WEBMINER system.
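The browsing model is defined formally in the paper itself; as a rough illustration only, the sketch below uses an invented toy log for a single visitor and a fixed dwell-time threshold to split references into navigation-content transactions (short-dwell pages are treated as navigation, long-dwell pages as content). A real system would first parse Common Log Format lines and group them by host.

```python
from datetime import datetime, timedelta

# Toy access-log records for one visitor: (host, url, timestamp).
LOG = [
    ("1.2.3.4", "/",           "04/Nov/1997:10:00:00"),
    ("1.2.3.4", "/products/",  "04/Nov/1997:10:00:20"),
    ("1.2.3.4", "/products/a", "04/Nov/1997:10:05:00"),
    ("1.2.3.4", "/",           "04/Nov/1997:10:05:30"),
    ("1.2.3.4", "/support/b",  "04/Nov/1997:10:12:00"),
]
FMT = "%d/%b/%Y:%H:%M:%S"
CONTENT_THRESHOLD = timedelta(minutes=2)   # assumed dwell-time cutoff

def transactions(log):
    """Split one visitor's references into navigation-content transactions:
    each transaction is the chain of short-dwell (navigation) pages ending
    at a long-dwell (content) page or at the end of the session."""
    times = [datetime.strptime(t, FMT) for _, _, t in log]
    txns, current = [], []
    for i, (_, url, _) in enumerate(log):
        current.append(url)
        dwell = times[i + 1] - times[i] if i + 1 < len(log) else CONTENT_THRESHOLD
        if dwell >= CONTENT_THRESHOLD:      # treated as a content reference
            txns.append(current)
            current = []
    if current:
        txns.append(current)
    return txns

if __name__ == "__main__":
    for t in transactions(LOG):
        print(" -> ".join(t))
```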

398 citations


Proceedings ArticleDOI
27 Mar 1997
TL;DR: Web Browser Intelligence (WBI, pronounced “WEB-ee”) is an implemented system that organizes agents on a user’s workstation to observe user actions, proactively offer assistance, modify web documents, and perform new functions.
Abstract: Agents can personalize otherwise impersonal computational systems. The World Wide Web presents the same appearance to every user regardless of that user’s past activity. Web Browser Intelligence (WBI, pronounced “WEB-ee”) is an implemented system that organizes agents on a user’s workstation to observe user actions, proactively offer assistance, modify web documents, and perform new functions. WBI can annotate hyperlinks with network speed information, record pages viewed for later access, and provide shortcut links for common paths. In this way, WBI personalizes a user’s web experience by joining personal information with global information to effectively tailor what the user sees.
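WBI's agents are not specified in detail in this abstract; the sketch below, using a made-up click history, illustrates just one of the listed functions: deriving shortcut-link suggestions for common multi-step paths.

```python
from collections import Counter

# Assumed click history for one user: each tuple is one browsing path.
HISTORY = [
    ("/", "/docs/", "/docs/api/", "/docs/api/io.html"),
    ("/", "/docs/", "/docs/api/", "/docs/api/io.html"),
    ("/", "/news/"),
    ("/", "/docs/", "/docs/api/", "/docs/api/net.html"),
]

def shortcut_candidates(history, min_hops=2, min_count=2):
    """Count (start, end) page pairs reached via at least `min_hops`
    intermediate clicks; frequent pairs are candidates for shortcut links."""
    pairs = Counter()
    for path in history:
        for i in range(len(path)):
            for j in range(i + min_hops + 1, len(path)):
                pairs[(path[i], path[j])] += 1
    return [(pair, n) for pair, n in pairs.most_common() if n >= min_count]

if __name__ == "__main__":
    for (src, dst), n in shortcut_candidates(HISTORY):
        print(f"suggest link {src} -> {dst}  (seen {n} times)")
```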

362 citations


Journal ArticleDOI
TL;DR: The current version of the Basic Support for Cooperative Work system is described in detail, including design choices resulting from use of the web as a cooperation platform and feedback from users following the release of a previous version of BSCW to the public domain.
Abstract: The emergence and widespread adoption of the World Wide Web offers a great deal of potential in supporting cross-platform cooperative work within widely dispersed working groups. The Basic Support for Cooperative Work (BSCW) project at GMD is attempting to realize this potential through development of web-based tools which provide cross-platform collaboration services to groups using existing web technologies. This paper describes one of these tools, the BSCW Shared Workspace system, a centralized cooperative application integrated with an unmodified web server and accessible from standard web browsers. The BSCW system supports cooperation through “shared workspaces”: small repositories in which users can upload documents, hold threaded discussions and obtain information on the previous activities of other users to coordinate their own work. The current version of the system is described in detail, including design choices resulting from use of the web as a cooperation platform and feedback from users following the release of a previous version of BSCW to the public domain.

334 citations


Journal ArticleDOI
TL;DR: The paper discusses the MetaCrawler Softbot parallel Web search service that has been available at the University of Washington since June 1995 and has some sophisticated features that allow it to obtain results of much higher quality than simply regurgitating the output from each search service.
Abstract: The paper discusses the MetaCrawler Softbot parallel Web search service that has been available at the University of Washington since June 1995. It provides users with a single interface for querying popular general-purpose Web search services, such as Lycos and AltaVista, and has some sophisticated features that allow it to obtain results of much higher quality than simply regurgitating the output from each search service.
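MetaCrawler's actual query interfaces are not described here, so the sketch below uses stand-in search functions; it only illustrates the general meta-search pattern of fanning a query out to several services in parallel and merging the ranked results, rather than MetaCrawler's own quality-improving features.

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in backends: each returns a ranked list of URLs for a query. A real
# meta-searcher would issue HTTP queries to services such as Lycos or AltaVista.
def search_a(q): return ["http://a.example/1", "http://shared.example/x"]
def search_b(q): return ["http://shared.example/x", "http://b.example/2"]

BACKENDS = [search_a, search_b]

def metasearch(query, backends=BACKENDS):
    """Query all backends in parallel and merge results, scoring each URL by
    summing reciprocal ranks so hits confirmed by several services rise."""
    with ThreadPoolExecutor(max_workers=len(backends)) as pool:
        result_lists = list(pool.map(lambda b: b(query), backends))
    scores = {}
    for results in result_lists:
        for rank, url in enumerate(results, start=1):
            scores[url] = scores.get(url, 0.0) + 1.0 / rank
    return sorted(scores.items(), key=lambda kv: -kv[1])

if __name__ == "__main__":
    for url, score in metasearch("web modeling"):
        print(f"{score:.2f}  {url}")
```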

303 citations


Journal ArticleDOI
TL;DR: The goal of the REFERRAL WEB Project is to create models of social networks by data mining the web and develop tools that use the models to assist in locating experts and related information search and evaluation tasks.
Abstract: The difficulty of finding information on the World Wide Web by browsing hypertext documents has led to the development and deployment of various search engines and indexing techniques. However, many information-gathering tasks are better handled by finding a referral to a human expert rather than by simply interacting with online information sources. A personal referral allows a user to judge the quality of the information he or she is receiving as well as to potentially obtain information that is deliberately not made public. The process of finding an expert who is both reliable and likely to respond to the user can be viewed as a search through the network of social relationships between individuals as opposed to a search through the network of hypertext documents. The goal of the REFERRAL WEB Project is to create models of social networks by data mining the web and develop tools that use the models to assist in locating experts and related information search and evaluation tasks.
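As a rough illustration of the referral idea (not of REFERRAL WEB's actual mining or models), the sketch below finds a chain of acquaintances in a small hand-made social graph by breadth-first search; in the project itself such a graph would be mined from co-occurrences on the web.

```python
from collections import deque

# Assumed social network; an edge means two people are plausibly acquainted.
GRAPH = {
    "you":    ["alice", "bob"],
    "alice":  ["you", "carol"],
    "bob":    ["you", "dave"],
    "carol":  ["alice", "expert"],
    "dave":   ["bob"],
    "expert": ["carol"],
}

def referral_chain(graph, source, target):
    """Shortest chain of acquaintances from source to target (BFS)."""
    queue, seen = deque([[source]]), {source}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for friend in graph.get(path[-1], []):
            if friend not in seen:
                seen.add(friend)
                queue.append(path + [friend])
    return None

if __name__ == "__main__":
    print(" -> ".join(referral_chain(GRAPH, "you", "expert")))
```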

300 citations


Proceedings Article
23 Aug 1997
TL;DR: This paper argues that AI techniques can be used to examine user access logs in order to automatically improve a site, and challenges the AI community to create adaptive web sites: sites that automatically improve their organization and presentation based on user access data.
Abstract: The creation of a complex web site is a thorny problem in user interface design. First, different visitors have distinct goals. Second, even a single visitor may have different needs at different times. Much of the information at the site may also be dynamic or time-dependent. Third, as the site grows and evolves, its original design may no longer be appropriate. Finally, a site may be designed for a particular purpose but used in unexpected ways. Web servers record data about user interactions and accumulate this data over time. We believe that AI techniques can be used to examine user access logs in order to automatically improve the site. We challenge the AI community to create adaptive web sites: sites that automatically improve their organization and presentation based on user access data. Several unrelated research projects in plan recognition, machine learning, knowledge representation, and user modeling have begun to explore aspects of this problem. We hope that posing this challenge explicitly will bring these projects together and stimulate fundamental AI research. Success would have a broad and highly visible impact on the web and the AI community.
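The challenge paper deliberately does not fix an algorithm; one simple illustration, sketched below with invented session data, is to mine access logs for page pairs that are often visited together but not yet linked, as candidates for new links or index pages.

```python
from itertools import combinations
from collections import Counter

# Assumed per-session page sets mined from server access logs.
SESSIONS = [
    {"/courses/ai", "/courses/ml", "/people/faculty"},
    {"/courses/ai", "/courses/ml"},
    {"/courses/ml", "/courses/ai", "/admissions"},
]
# Assumed existing link structure of the site (sorted page pairs).
LINKED = {("/courses/ai", "/people/faculty")}

def suggestions(sessions, linked, min_support=2):
    """Page pairs frequently co-accessed but not currently linked."""
    counts = Counter()
    for pages in sessions:
        for pair in combinations(sorted(pages), 2):
            counts[pair] += 1
    return [(pair, n) for pair, n in counts.most_common()
            if n >= min_support and pair not in linked]

if __name__ == "__main__":
    for (a, b), n in suggestions(SESSIONS, LINKED):
        print(f"co-accessed in {n} sessions, not linked: {a} <-> {b}")
```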

250 citations


Proceedings ArticleDOI
08 Feb 1997
TL;DR: SHOE, a set of Simple HTML Ontology Extensions which allow World-Wide Web authors to annotate their pages with semantic knowledge such as “I am a graduate student” or “This person is my graduate advisor”, is described.
Abstract: This paper describes SHOE, a set of Simple HTML Ontology Extensions which allow World-Wide Web authors to annotate their pages with semantic knowledge such as “I am a graduate student” or “This person is my graduate advisor”. These annotations are expressed in terms of ontological knowledge which can be generated by using or extending standard ontologies available on the Web. This makes it possible to ask Web agent queries such as “Find me all graduate students in Maryland who are working on a project funded by DoD initiative 123-4567”, instead of simplistic keyword searches enabled by current search engines. We have also developed a web-crawling agent, Exposé, which interns SHOE knowledge from web documents, making these kinds of queries a reality.
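The SHOE tag syntax is defined in the paper and not reproduced in this listing, so the sketch below uses made-up annotation tags purely to illustrate the general idea of pulling machine-readable claims out of annotated HTML; it is not the actual SHOE format.

```python
from html.parser import HTMLParser

# Hypothetical page with SHOE-style semantic annotations. The tag and
# attribute names here are illustrative only, not the real SHOE syntax.
PAGE = """
<html><body>
  <instance id="http://cs.example.edu/~jane">
    <category name="GraduateStudent">
    <relation name="advisor" to="http://cs.example.edu/~smith">
  </instance>
</body></html>
"""

class ShoeLikeExtractor(HTMLParser):
    """Collect (subject, predicate, object) claims from annotation tags."""
    def __init__(self):
        super().__init__()
        self.subject, self.claims = None, []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "instance":
            self.subject = a.get("id")
        elif tag == "category" and self.subject:
            self.claims.append((self.subject, "isa", a.get("name")))
        elif tag == "relation" and self.subject:
            self.claims.append((self.subject, a.get("name"), a.get("to")))

parser = ShoeLikeExtractor()
parser.feed(PAGE)
for claim in parser.claims:
    print(claim)
```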

246 citations


Proceedings Article
14 Aug 1997
TL;DR: Webfoot, a preprocessor that parses web pages into logically coherent segments based on page layout cues, is introduced; its output is passed on to CRYSTAL, an NLP system that learns text extraction rules from examples.
Abstract: There is a wealth of information to be mined from narrative text on the World Wide Web. Unfortunately, standard natural language processing (NLP) extraction techniques expect full, grammatical sentences, and perform poorly on the choppy sentence fragments that are often found on web pages. This paper introduces Webfoot, a preprocessor that parses web pages into logically coherent segments based on page layout cues. Output from Webfoot is then passed on to CRYSTAL, an NLP system that learns text extraction rules from example. Webfoot and CRYSTAL transform the text into a formal representation that is equivalent to relational database entries. This is a necessary first step for knowledge discovery and other automated analysis of free text.
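The specific layout cues Webfoot uses are not listed in this abstract; the sketch below simply splits a page at an assumed set of layout tags and strips markup, yielding the kind of short text fragments a downstream learner such as CRYSTAL could consume.

```python
import re

# Assumed layout-cue tags that start a new logically coherent segment.
BOUNDARY = re.compile(r"<(?:h[1-6]|hr|li|p|br)\b[^>]*>", re.IGNORECASE)
TAGS = re.compile(r"<[^>]+>")

def segments(html):
    """Split a page at layout cues and strip remaining markup, yielding the
    short text fragments a downstream extractor would treat as units."""
    out = []
    for part in BOUNDARY.split(html):
        text = " ".join(TAGS.sub(" ", part).split())
        if text:
            out.append(text)
    return out

if __name__ == "__main__":
    page = ("<h2>Seminar</h2><p>Speaker: J. Doe<br>Time: 3pm"
            "<hr><li>Room 123<li>Host: A. Smith")
    for s in segments(page):
        print("SEGMENT:", s)
```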

Journal ArticleDOI
01 Sep 1997
TL;DR: The varieties of link information (not just hyperlinks) on the Web, how the Web differs from conventional hypertext, and how the links can be exploited to build useful applications are discussed.
Abstract: Web information retrieval tools typically make use of only the text on pages, ignoring valuable information implicitly contained in links. At the other extreme, viewing the Web as a traditional hypertext system would also be a mistake, because heterogeneity, cross-domain links, and the dynamic nature of the Web mean that many assumptions of typical hypertext systems do not apply. The novelty of the Web leads to new problems in information access, and it is necessary to make use of the new kinds of information available, such as multiple independent categorization, naming, and indexing of pages. This paper discusses the varieties of link information (not just hyperlinks) on the Web, how the Web differs from conventional hypertext, and how the links can be exploited to build useful applications. Specific applications presented as part of the ParaSite system find individuals' homepages, new locations of moved pages, and unindexed information.

Journal Article
TL;DR: The advertiser-supported Web site is one of several business models vying for legitimacy in the emerging medium of the World Wide Web on the Internet.
Abstract: The advertiser-supported Web site is one of several business models vying for legitimacy in the emerging medium of the World Wide Web on the Internet (Hoffman, Novak, and Chatterjee 1995). Currently, there are three major types of advertiser-supported sites: 1) sponsored content sites like Hotwired, ESPNET Sportszone, and ZD Net, 2) sponsored search agents and directories like InfoSeek, Excite, and Yahoo, and 3) entry portal sites like Netscape. At present, these three classes of sites are split at about 55 percent, 36 percent and 19 percent, respectively, in terms of advertising revenue (Jupiter Communications 1996).

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper reports on the experience using WebSQL, a high level declarative query language for extracting information from the Web that takes advantage of multiple index servers without requiring users to know about them, and integrates full-text with topology-based queries.
Abstract: In this paper we report on our experience using WebSQL, a high level declarative query language for extracting information from the Web. WebSQL takes advantage of multiple index servers without requiring users to know about them, and integrates full-text with topology-based queries. The WebSQL query engine is a library of Java classes, and WebSQL queries can be embedded into Java programs much in the same way as SQL queries are embedded in C programs. This allows us to access the Web from Java at a much higher level of abstraction than bare HTTP requests. We illustrate the use of WebSQL for application development by describing two applications we are experimenting with: Web site maintenance and specialized index construction. We also sketch several other possible applications. Using the library, we have also implemented a client-server architecture that allows us to perform interactive intelligent searches on the Web from an applet running on a browser.
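WebSQL itself is a declarative language with a Java query engine; the Python sketch below, over a toy in-memory web, only mimics the idea described above of combining index-server (full-text) results with topology-based link traversal, and makes no claim about WebSQL's actual syntax or API.

```python
# Toy in-memory web: url -> (page text, outgoing links). A real engine would
# fetch pages over HTTP and query real index servers for the seed set.
WEB = {
    "http://x/home":    ("department home page", ["http://x/courses", "http://x/people"]),
    "http://x/courses": ("courses on web modeling and databases", ["http://x/web101"]),
    "http://x/web101":  ("web modeling seminar, fall 1997", []),
    "http://x/people":  ("faculty directory", []),
}

def index_server(keyword):
    """Stand-in for a full-text index server: seed URLs matching keyword."""
    return [u for u, (text, _) in WEB.items() if keyword in text]

def topology_query(keyword, max_depth=1):
    """WebSQL-style combination: start from index-server hits, then follow
    links up to max_depth and keep pages whose text also mentions keyword."""
    frontier, seen, hits, depth = index_server(keyword), set(), [], 0
    while frontier and depth <= max_depth:
        next_frontier = []
        for url in frontier:
            if url in seen:
                continue
            seen.add(url)
            text, links = WEB[url]
            if keyword in text:
                hits.append(url)
            next_frontier.extend(links)
        frontier, depth = next_frontier, depth + 1
    return hits

if __name__ == "__main__":
    print(topology_query("web modeling"))
```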

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper introduces the WebComposition system, which is based on a fine-grained object-oriented web application model, and maintains access to it throughout the lifecycle for management and maintenance activities.
Abstract: Maintenance of web applications is a difficult and error-prone task because many design decisions are not directly accessible at run time, but rather embedded in file-based resources. In this paper we introduce the WebComposition system addressing this problem. This system is based on a fine-grained object-oriented web application model, and maintains access to it throughout the lifecycle for management and maintenance activities. Modifications of the model are made effective in the web by incrementally mapping the model to file-based resources.

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper presents a unique approach that tightly integrates searching and browsing in a manner that improves both paradigms and is embodied in WebCutter, a client-server system fully integrated with Web software.
Abstract: Conventional information discovery tools can be classified as being either search oriented or browse oriented. In the context of the Web, search-oriented tools employ text-analysis techniques to find Web documents based on user-specified queries, whereas browse-oriented ones employ site mapping and visualization techniques to allow users to navigate through the Web. This paper presents a unique approach that tightly integrates searching and browsing in a manner that improves both paradigms. When browsing is the primary task, it enables semantic content-based tailoring of Web maps in both the generation as well as the visualization phases. When search is the primary task, it makes it possible to contextualize the results by augmenting them with the documents' neighborhoods. The approach is embodied in WebCutter, a client-server system fully integrated with Web software. WebCutter consists of a map generator running off a standard Web server and a map visualization client implemented as a Java applet runnable from any standard Web browser and requiring no installation or external plug-in application. WebCutter is in beta stage and is in the process of being integrated into the Lotus Domino.Applications product line.

Journal ArticleDOI
TL;DR: The history and precursors of the Lycos system for collecting, storing, and retrieving information about pages on the Web are outlined and some of the design choices made in building this Web indexer are discussed.
Abstract: One of the enabling technologies of the World Wide Web, along with browsers, domain name servers, and hypertext markup language, is the search engine. Although the Web contains over 100 million pages of information, those millions of pages are useless if you cannot find the pages you need. All major Web search engines operate the same way: a gathering program explores the hyperlinked documents of the Web, foraging for Web pages to index. These pages are stockpiled by storing them in some kind of database or repository. Finally, a retrieval program takes a user query and creates a list of links to Web documents matching the words, phrases, or concepts in the query. Although the retrieval program itself is correctly called a search engine, by popular usage the term now means a database combined with a retrieval program. For example, the Lycos search engine comprises the Lycos Catalog of the Internet and the Pursuit retrieval program. This paper describes the Lycos system for collecting, storing, and retrieving information about pages on the Web. After outlining the history and precursors of the Lycos system, the paper discusses some of the design choices made in building this Web indexer and touches briefly on the economic issues involved in working with very large retrieval systems.
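As a rough illustration of the gather/store/retrieve division described above (not of Lycos itself), the sketch below crawls a toy in-memory web, builds an inverted index, and answers a conjunctive keyword query.

```python
from collections import defaultdict, deque

# Toy web: url -> (page text, outgoing links).
WEB = {
    "http://t/":  ("lycos indexes the world wide web", ["http://t/a", "http://t/b"]),
    "http://t/a": ("search engines gather store and retrieve pages", []),
    "http://t/b": ("web modeling bibliography", []),
}

def gather(seed):
    """Gathering program: explore hyperlinked documents from a seed page."""
    seen, queue, pages = set(), deque([seed]), {}
    while queue:
        url = queue.popleft()
        if url in seen or url not in WEB:
            continue
        seen.add(url)
        text, links = WEB[url]
        pages[url] = text
        queue.extend(links)
    return pages

def store(pages):
    """Repository plus inverted index: word -> set of urls."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.split():
            index[word].add(url)
    return index

def retrieve(index, query):
    """Retrieval program: urls containing every query word."""
    sets = [index.get(w, set()) for w in query.split()]
    return set.intersection(*sets) if sets else set()

if __name__ == "__main__":
    index = store(gather("http://t/"))
    print(retrieve(index, "web modeling"))
```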

Proceedings ArticleDOI
26 Sep 1997
TL;DR: This paper describes how ARTour Web Express has been enhanced to support both disconnected and asynchronous operation.
Abstract: In a previous paper [1], we described ARTour Web Express, a software system that makes it possible to run World Wide Web applications over wide-area wireless networks. Our earlier paper discussed how our system significantly reduces user cost and response time during online browsing over wireless communications links. Even with these savings, however, users may experience slow performance. This is a result of the inherent delay of wireless communication coupled with congestion in the Internet and Web servers, which cannot be masked from users under the synchronous request/response model of browsing. Furthermore, disconnection - both voluntary and involuntary - is common in the mobile environment, and the standard browsing model provides no support for disconnected operation. This paper describes how ARTour Web Express has been enhanced to support both disconnected and asynchronous operation.
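The actual Web Express enhancements are described in the paper itself; the sketch below only illustrates the general idea, with an invented AsyncBrowser class that answers requests from a local cache while disconnected, queues them, and fetches the queued pages once the link returns.

```python
from collections import deque

class AsyncBrowser:
    """Sketch of disconnected/asynchronous browsing: requests made while
    offline are answered from a local cache if possible and queued for
    background fetching once the wireless link is available again."""
    def __init__(self, fetch):
        self.fetch = fetch                 # function url -> content (over the link)
        self.cache, self.pending = {}, deque()
        self.connected = False

    def request(self, url):
        if self.connected:
            self.cache[url] = self.fetch(url)
            return self.cache[url]
        self.pending.append(url)                       # defer until reconnect
        return self.cache.get(url, "<queued for later delivery>")

    def reconnect(self):
        self.connected = True
        while self.pending:                            # flush deferred requests
            url = self.pending.popleft()
            self.cache[url] = self.fetch(url)

if __name__ == "__main__":
    b = AsyncBrowser(fetch=lambda url: f"contents of {url}")
    print(b.request("http://example.org/news"))        # offline: queued
    b.reconnect()
    print(b.request("http://example.org/news"))        # now served over the link
```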

Journal ArticleDOI
TL;DR: A number of Web-based information visualization prototypes and applications have been developed by adapting several well-known information visualization ideas and techniques for use within Web environments, helping users to visualize complex relational information.
Abstract: Increasingly, the World Wide Web is being used to help visualize complex relational information. We have developed a number of Web-based information visualization prototypes and applications by adapting several well-known information visualization ideas and techniques for use within Web environments. Before delving into specific examples, we offer some relevant background about the Web and our use of visualization for analysis.

Journal ArticleDOI
01 May 1997
TL;DR: Analytical methods developed in the field of Computer Supported Cooperative Work are used to investigate the reasons for the World Wide Web's present success, its strengths and weaknesses as a platform for CSCW, and prospects for future development.
Abstract: This paper investigates some of the issues which will determine the viability of the World Wide Web as an infrastructure for cooperative work. In fact, taking a weak definition of collaboration, the Web is already a very successful collaborative environment. In addition, it is already being used as the basis for experimental and commercial groupware. The paper takes this as a starting point and uses analytic methods developed in the field of Computer Supported Cooperative Work to investigate the reasons for the Web's present success, its strengths and weaknesses as a platform for CSCW, and prospects for future development.

Journal ArticleDOI
01 Sep 1997
TL;DR: A method for analysis and design of web-based information systems is presented, together with tools to support it, WebArchitect and PilotBoat; the method focuses on the architecture and functions of web sites rather than on the appearance of each web resource (page), such as graphics and layout.
Abstract: We have developed a method for analysis and design of web-based information systems (WBISs), and tools to support the method, WebArchitect and PilotBoat. The method and the tools focus on architectures and functions of web sites, rather than on appearance of each web resource (page), such as graphics and layouts. Our goal is to efficiently develop WBISs that best support particular business processes at the least maintenance cost. Our method consists of two approaches, static and dynamic. We use the entity-relationship (E-R) approach for the static aspects of WBISs, and use a scenario approach for the dynamic aspects. The E-R analysis and design, based on relationship management methodology (RMM) developed by Isakowitz et al., defines what the entities are and how they are related. The scenario analysis defines how web resources are accessed, used, and changed by whom. The method also defines attributes of each web resource, which are used in maintaining the resource. WebArchitect enables designers and maintainers to directly manipulate meta-level links between web resources that are represented in a hierarchical manner. PilotBoat is a web client that navigates and lets users collaborate through web sites. We have applied our approaches to the WWW6 proceedings site.

Journal ArticleDOI
TL;DR: It is suggested that virtual hierarchies and virtual networks will assist users to find task-relevant information more easily and quickly and also help web authors to ensure that their pages are targeted at the users who wish to see them.
Abstract: The paper considers the usability of the World Wide Web in the light of a decade of research into the usability of hypertext and hypermedia systems. The concepts of virtual hierarchies and virtual networks are introduced as a mechanism for alleviating some of the shortcomings inherent in the current implementations of the web, without violating its basic philosophy. It is suggested that virtual hierarchies and virtual networks will assist users to find task-relevant information more easily and quickly and also help web authors to ensure that their pages are targeted at the users who wish to see them. The paper first analyses the published work on hypermedia usability, identifying the assumptions that underlie this research and relating them to the assumptions underlying the web. Some general conclusions are presented about both hypermedia usability principles and their applicability to the web. These results are coupled with problems identified from other sources to produce a requirements list for improving web usability. A possible solution is then presented which utilizes the capabilities of existing distributed information management software to permit web users to create virtual hierarchies and virtual networks. Some ways in which these virtual structures assist searchers to find useful information, and thus help authors to publicize their information more effectively, are described. The explanation is illustrated by examples taken from the GENIE Service, an implementation of some of the ideas. This uses the World Wide Web as a means of allowing global environmental change researchers throughout the world to find data that may be relevant to their research.

Journal ArticleDOI
TL;DR: It is concluded that customer feedback must be managed in a disciplined way, by ensuring that feedback is representative of the customer population as a whole, not just of those with a propensity to comment.
Abstract: Presents a model which organizations can use to monitor Web site effectiveness. Argues that anecdotal evidence can be colorful but is not useful in structuring and managing an effective site. Suggests that traditional disciplines of composition and communication ‐ explicit purpose, coherent structure, relevant conclusion ‐ should be applied to Web site design. Concludes that customer feedback must be managed in a disciplined way, by ensuring that feedback is representative of the customer population as a whole, not just of those with a propensity to comment; and that the purpose and aims of a Web site must be thought through with the utmost care and attention to give a higher likelihood of creating an effective site.

Journal ArticleDOI
TL;DR: This paper describes the approach to the development of an Internet-based course designed for distance education and provides general observations on the opportunities and constraints which the web provides and on the pedagogic issues which arise when using this delivery mechanism.
Abstract: The phenomenal growth of the Internet over the last few years, coupled with the development of various multimedia applications which exploit the Internet presents exciting opportunities for educators. In the context of distance education, the World Wide Web provides a unique challenge as a new delivery mechanism for course material allowing students to take a course (potentially) from anywhere in the world. In this paper, we describe our approach to the development of an Internet-based course designed for distance education. Using this experience, we provide general observations on the opportunities and constraints which the web provides and on the pedagogic issues which arise when using this delivery mechanism. We have found that the process of developing web-based courses is one area which requires careful consideration as technologies and tools for both the authoring and the delivery of courses are evolving so rapidly. We have also found that current tools are severely lacking in a number of important respects, particularly with respect to the design of pedagogically sound courseware.

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper proposes that each Web server does its own housekeeping, and a software agent named SiteHelper is designed to act as a housekeeper for the Web server and as a helper for a Web user to find relevant information at a particular site.
Abstract: The World Wide Web (the Web for short) is rapidly becoming an information flood as it continues to grow exponentially. This causes difficulty for users to find relevant pieces of information on the Web. Search engines and robots (spiders) are two popular techniques developed to address this problem. Search engines are indexing facilities over searchable databases. As the Web continues to expand, search engines are becoming redundant because of the large number of Web pages they return for a single search. Robots are similar to search engines; rather than indexing the Web, they traverse (“walk through”) the Web, analyzing and storing relevant documents. The main drawback of these robots is their high demand on network resources that results in networks being overloaded. This paper proposes an alternative way of assisting users in finding information on the Web. Since the Web is made up of many Web servers, instead of searching all the Web servers, we propose that each server does its own housekeeping. A software agent named SiteHelper is designed to act as a housekeeper for the Web server and as a helper for a Web user to find relevant information at a particular site. In order to assist the Web user in finding relevant information at the local site, SiteHelper interactively and incrementally learns about the Web user's areas of interest and aids them accordingly. To provide such intelligent capabilities, SiteHelper deploys enhanced HCV with incremental learning facilities as its learning and inference engines.
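HCV, the learning engine named above, is not reimplemented here; the sketch below illustrates only the incremental-profile idea, with a made-up keyword-weight profile that is updated from pages the user reads and then used to rank new pages at the site.

```python
from collections import Counter

class InterestProfile:
    """Incrementally learned keyword weights for one site visitor."""
    def __init__(self):
        self.weights = Counter()

    def observe(self, page_text):
        """Update the profile from a page the user chose to read."""
        self.weights.update(w.lower() for w in page_text.split() if len(w) > 3)

    def score(self, page_text):
        """How well a new or changed local page matches the profile."""
        return sum(self.weights[w.lower()] for w in page_text.split())

if __name__ == "__main__":
    profile = InterestProfile()
    profile.observe("machine learning seminar on decision tree induction")
    profile.observe("incremental learning for classification")
    new_pages = {
        "/new/learning-reading-group": "reading group on incremental learning",
        "/new/parking-notice": "visitor parking notice for building 7",
    }
    for url, text in sorted(new_pages.items(),
                            key=lambda kv: -profile.score(kv[1])):
        print(profile.score(text), url)
```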

Book ChapterDOI
07 Sep 1997
TL;DR: A toolkit for application developers, MetaWeb, is presented, which augments the Web with basic features that provide new and legacy applications with better support for synchronous cooperation.
Abstract: The World Wide Web is increasingly seen as an attractive technology for the deployment and evaluation of groupware. However, the underlying architecture of the Web is inherently stateless, best supporting asynchronous types of cooperation. This paper presents a toolkit for application developers, MetaWeb, which augments the Web with basic features that provide new and legacy applications with better support for synchronous cooperation. Using three simple abstractions, User, Location and Session, MetaWeb allows applications to be coupled as tightly or as loosely to the Web as desired. The paper presents two distinct applications of MetaWeb, including the extension of an existing application, the BSCW shared workspace system, from which a number of observations are drawn.
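MetaWeb's actual API is not given in the abstract; the sketch below is an invented, minimal rendering of the three named abstractions (User, Location, Session) as an in-memory presence registry of the kind a synchronous-awareness layer needs on top of the stateless Web.

```python
from dataclasses import dataclass, field

@dataclass
class User:
    name: str

@dataclass
class Location:
    """A place in the Web, e.g. a page or shared workspace URL."""
    url: str
    occupants: set = field(default_factory=set)

@dataclass
class Session:
    """Synchronous presence layer that the stateless Web itself lacks."""
    locations: dict = field(default_factory=dict)

    def enter(self, user: User, url: str):
        loc = self.locations.setdefault(url, Location(url))
        loc.occupants.add(user.name)
        return sorted(loc.occupants - {user.name})   # who else is "here"

    def leave(self, user: User, url: str):
        self.locations[url].occupants.discard(user.name)

if __name__ == "__main__":
    s = Session()
    s.enter(User("ann"), "http://bscw.example/ws/42")
    print(s.enter(User("bob"), "http://bscw.example/ws/42"))  # ['ann']
```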

Patent
Margaret Gardner MacPhail
25 Feb 1997
TL;DR: In this paper, the authors propose a method of providing information about a set of interrelated objects or files on one or more servers in a computer network, such as pages on the Internet's world wide web.
Abstract: A method of providing information about a set of interrelated objects or files on one or more servers in a computer network, such as pages on the Internet's world wide web. The method involves construction of a web links object that contains information regarding the links between the various web pages. When a user at a workstation sends a request for a specific web page (designating a particular universal resource locator), the server instead transmits the web links object to allow the user to see the hierarchy of the web site before downloading the contents of the web pages. The server can store the links object (such as one created by the web site designer), or can dynamically create a web links object upon request by analyzing the links in the various web pages. Alternatively, the workstation can perform the analysis and construct the web links object.
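The patent does not fix a data format in this summary, so the sketch below shows one plausible, invented form of a dynamically built web links object: a map from each page to the pages it links to, which a server could serialize and send instead of the page contents themselves.

```python
import re
import json

HREF = re.compile(r'href="([^"]+)"', re.IGNORECASE)

# Toy site: url -> HTML source. A server would read these from its document tree.
SITE = {
    "/index.html":    '<a href="/products.html">Products</a> <a href="/about.html">About</a>',
    "/products.html": '<a href="/products/widget.html">Widget</a>',
    "/about.html":    '<a href="/index.html">Home</a>',
    "/products/widget.html": "No links here.",
}

def build_links_object(site):
    """Map each page to the pages it links to; serialized, this is the kind
    of lightweight object a server could send so a client can preview the
    site hierarchy before downloading the pages."""
    return {url: HREF.findall(html) for url, html in site.items()}

if __name__ == "__main__":
    print(json.dumps(build_links_object(SITE), indent=2))
```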

Journal ArticleDOI
TL;DR: This article presents various log file analysis techniques and issues related to the interpretation of log file data, treating the analysis of Web server-generated log files as a means of evaluating Web use.
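The article's specific techniques are not enumerated in this summary; the sketch below, over a few invented Common Log Format lines, computes the kind of basic usage figures (request counts, distinct hosts, popular pages, status codes) that such analyses typically start from, with the usual caveat that caching and proxies blur what the numbers mean.

```python
import re
from collections import Counter

# Invented Common Log Format lines standing in for a real server log file.
LOG_LINES = [
    '1.2.3.4 - - [01/Sep/1997:12:00:01 -0500] "GET /index.html HTTP/1.0" 200 2048',
    '5.6.7.8 - - [01/Sep/1997:12:00:09 -0500] "GET /papers/web.html HTTP/1.0" 200 5120',
    '1.2.3.4 - - [01/Sep/1997:12:01:40 -0500] "GET /index.html HTTP/1.0" 304 0',
]
CLF = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)')

def summarize(lines):
    """Basic usage metrics from Common Log Format lines."""
    hosts, pages, status = set(), Counter(), Counter()
    for line in lines:
        m = CLF.match(line)
        if not m:
            continue
        host, _, _, path, code, _ = m.groups()
        hosts.add(host)
        pages[path] += 1
        status[code] += 1
    return {"requests": sum(pages.values()), "distinct_hosts": len(hosts),
            "top_pages": pages.most_common(5), "status_codes": dict(status)}

if __name__ == "__main__":
    print(summarize(LOG_LINES))
```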

Journal ArticleDOI
TL;DR: If users, designers, MIS departments and organizations don't demand hypermedia support, hypermedia may get lost in the frenzy of Web integration rather than becoming the jewel of the Web environment.
Abstract: As organizations rush to embrace the World Wide Web as their primary application infrastructure, they should not bypass the benefit of hypermedia support. The Web's infrastructure can serve as an interface to all interactive applications and, over time, will become the graphical user interface model for new applications. Ubiquitous hypermedia support should become the jewel of the Web environment. Through Web integration, hypermedia could become an integral part of every interactive application. With the proper tools to support hypermedia in Web application development, it will become second nature for developers and individual authors to provide supplemental links and hypermedia navigation. However, as organizations adopt the Web as their primary application infrastructure, designers may use Java and other tools to recreate current application functionality, and not take advantage of the Web's hypermedia-augmented infrastructure. If users, designers, MIS departments and organizations don't demand hypermedia support, hypermedia may get lost in the frenzy of Web integration.

Book ChapterDOI
Paul P. Maglio, Rob Barrett
01 Jan 1997
TL;DR: A model of what people do when they search for information on the web is sketched to provide personal support for information-searching and to effectively transfer knowledge gained by one person to another.
Abstract: In this paper, we sketch a model of what people do when they search for information on the web. From a theoretical perspective, our interest lies in the cognitive processes and internal representations that are both used in and affected by the search for information. From a practical perspective, our aim is to provide personal support for information-searching and to effectively transfer knowledge gained by one person to another. Toward these ends, we first collected behavioral data from people searching for information on the web; we next analyzed these data to learn what the searchers were doing and thinking; and we then constructed specific web agents to support searching behaviors we identified.