
Showing papers on "Web standards" published in 1997


Proceedings ArticleDOI
03 Nov 1997
TL;DR: This paper defines Web mining and presents an overview of the various research issues, techniques, and development efforts, and briefly describes WEBMINER, a system for Web usage mining, and concludes the paper by listing research issues.
Abstract: Application of data mining techniques to the World Wide Web, referred to as Web mining, has been the focus of several recent research projects and papers. However, there is no established vocabulary, leading to confusion when comparing research efforts. The term Web mining has been used in two distinct ways. The first, called Web content mining in this paper, is the process of information discovery from sources across the World Wide Web. The second, called Web usage mining, is the process of mining for user browsing and access patterns. We define Web mining and present an overview of the various research issues, techniques, and development efforts. We briefly describe WEBMINER, a system for Web usage mining, and conclude the paper by listing research issues.

1,365 citations



Journal ArticleDOI
TL;DR: The World-Wide Web presents survey researchers with an unprecedented tool for the collection of data, and Web surveys can interactively provide participants with customized feedback; these features come at a price: ensuring that appropriately written software manages the data collection process.
Abstract: The World-Wide Web presents survey researchers with an unprecedented tool for the collection of data. The costs in terms of both time and money for publishing a survey on the Web are low compared with costs associated with conventional surveying methods. The data entry stage is eliminated for the survey administrator, and software can ensure that the data acquired from participants is free from common entry errors. Importantly, Web surveys can interactively provide participants with customized feedback. These features come at a price—ensuring that appropriately written software manages the data collection process. Although the potential for missing data, unacceptable responses, duplicate submissions, and Web abuse exist, one can take measures when creating the survey software to minimize the frequency and negative consequences of such incidents.
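One of the measures the abstract alludes to, guarding against duplicate submissions, can be made concrete. The sketch below is a minimal, hypothetical illustration (the field names and fingerprinting scheme are assumptions, not the authors' implementation): each response is hashed, and repeats of an identical response are rejected.

```python
# Hypothetical sketch: reject duplicate survey submissions by fingerprinting
# each response. Field names ("respondent_id", "q1") are invented.
import hashlib

seen_fingerprints = set()

def accept_submission(response: dict) -> bool:
    """Return True if this exact response has not been submitted before."""
    # Build a canonical string so field ordering does not change the hash.
    canonical = "|".join(f"{k}={response[k]}" for k in sorted(response))
    digest = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
    if digest in seen_fingerprints:
        return False  # duplicate submission
    seen_fingerprints.add(digest)
    return True

first = accept_submission({"respondent_id": "r1", "q1": "yes"})  # accepted
dup = accept_submission({"respondent_id": "r1", "q1": "yes"})    # rejected
```

In practice one would also combine this with session cookies or respondent identifiers, since two genuinely different people could submit identical answers.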

691 citations


Journal ArticleDOI
TL;DR: Based on users' revisitation patterns to World Wide Web pages, eight design guidelines for web browser history mechanisms were formulated; they explain why some aspects of today's browsers seem to work well and others poorly.
Abstract: We report on users' revisitation patterns to World Wide Web (web) pages, and use the results to lay an empirical foundation for the design of history mechanisms in web browsers. Through history, a user can return quickly to a previously visited page, possibly reducing the cognitive and physical overhead required to navigate to it from scratch. We analysed 6 weeks of detailed usage data collected from 23 users of a well-known web browser. We found that 58% of an individual's pages are revisits, and that users continually add new web pages into their repertoire of visited pages. People tend to revisit pages just visited, access only a few pages frequently, browse in very small clusters of related pages and generate only short sequences of repeated URL paths. We compared different history mechanisms, and found that the stack-based prediction method prevalent in commercial browsers is inferior to the simpler approach of showing the last few recently visited URLs with duplicates removed. Other predictive approaches fare even better. Based on empirical evidence, eight design guidelines for web browser history mechanisms were then formulated. When used to evaluate the existing hypertext-based history mechanisms, they explain why some aspects of today's browsers seem to work well, and others poorly. The guidelines also indicate how history mechanisms in the web can be made even more effective. This article is a major expansion of a conference paper (Tauscher & Greenberg, 1997). The research reported in this article was performed as part of an M.Sc. project (Tauscher, 1996).
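The recency-based approach the authors found superior to stack-based history can be sketched in a few lines. This is an illustrative reconstruction, not the paper's code: it returns the last n distinct URLs, newest first, with duplicates removed.

```python
def recency_history(visits, n=10):
    """Most recently visited distinct URLs, newest first, duplicates removed."""
    seen = []
    for url in reversed(visits):  # walk backwards from the newest visit
        if url not in seen:
            seen.append(url)
    return seen[:n]

# Example trace: revisits of "b" and "a" collapse to a single entry each.
visits = ["a", "b", "c", "b", "a", "d"]
recent = recency_history(visits, 3)  # ['d', 'a', 'b']
```

Unlike a back-button stack, this list never loses a page when the user branches off onto a new path, which is consistent with the paper's finding that 58% of page visits are revisits.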

638 citations


Proceedings ArticleDOI
08 Feb 1997
TL;DR: ShopBot, a fully-implemented, domain-independent comparison-shopping agent that relies on a combination of heuristic search, pattern matching, and inductive learning techniques, enables users to both find superior prices and substantially reduce Web shopping time.
Abstract: The World Wide Web is less agent-friendly than we might hope. Most information on the Web is presented in loosely structured natural language text with no agent-readable semantics. HTML annotations structure the display of Web pages, but provide virtually no insight into their content. Thus, the designers of intelligent Web agents need to address the following questions: (1) To what extent can an agent understand information published at Web sites? (2) Is the agent's understanding sufficient to provide genuinely useful assistance to users? (3) Is site-specific hand-coding necessary, or can the agent automatically extract information from unfamiliar Web sites? (4) What aspects of the Web facilitate this competence? In this paper we investigate these issues with a case study using ShopBot, a fully-implemented, domain-independent comparison-shopping agent. Given the home pages of several online stores, ShopBot autonomously learns how to shop at those vendors. After learning, it is able to speedily visit over a dozen software and CD vendors, extract product information, and summarize the results for the user. Preliminary studies show that ShopBot enables users to both find superior prices and substantially reduce Web shopping time. Remarkably, ShopBot achieves this performance without sophisticated natural language processing, and requires only minimal knowledge about different product domains. Instead, ShopBot relies on a combination of heuristic search, pattern matching, and inductive learning techniques.

593 citations


Book
01 Jun 1997
TL;DR: This book presents the most comprehensive data demonstrating how Web sites actually work when users need specific answers, and offers guidance for evaluating and improving the usability of Web sites.
Abstract: From the Publisher: "Without a doubt, the most important book I've read this year on Web design is Web Site Usability: A Designer's Guide. The book is easy to read and full of relevant information." —Bill Skeet, Chief Designer, Knight-Ridder New Media "Even experienced Web designers should read these usability findings about 11 different site designs. Competitive usability testing is one of the most powerful ways of learning about design and this book will save you hours of lab time." —Dr. Jakob Nielsen, The Nielsen Norman Group "This report challenges many of my assumptions about Web design, but that's a good thing. We're still babes in the woods, crawling along trying to distinguish the trees from the forest. Any signposts are helpful, right now." —Mary Deaton, KNOWware Web Site Usability: A Designer's Guide is a report that every person involved in Web design, commerce, or online marketing will want to have. This book presents, undoubtedly, the most comprehensive data demonstrating how Web sites actually work when users need specific answers. Researched and compiled by User Interface Engineering, the results are written in an easy-to-understand style, illustrating the need to make Web sites useful, not complicated. Features: - Based on an extensive study of actual users — not theory, not graphic design principles, and not new tricks to make a "cool" Web site - Demonstrates how people actually navigate and extract information on Web sites - Offers guidance for evaluating and improving the usability of Web sites Jared M. Spool, Principal Investigator, is with User Interface Engineering, a consulting firm specializing in product usability and design. User Interface Engineering's mission is to empower product development teams to build applications that meet the needs of their users by providing critical data for creating designs and products that work.

530 citations



Proceedings ArticleDOI
27 Mar 1997
TL;DR: Web Browser Intelligence (WBI, pronounced “WEB-ee”) is an implemented system that organizes agents on a user’s workstation to observe user actions, proactively offer assistance, modify web documents, and perform new functions.
Abstract: Agents can personalize otherwise impersonal computational systems. The World Wide Web presents the same appearance to every user regardless of that user's past activity. Web Browser Intelligence (WBI, pronounced "WEB-ee") is an implemented system that organizes agents on a user's workstation to observe user actions, proactively offer assistance, modify web documents, and perform new functions. WBI can annotate hyperlinks with network speed information, record pages viewed for later access, and provide shortcut links for common paths. In this way, WBI personalizes a user's web experience by joining personal information with global information to effectively tailor what the user sees.

362 citations


Journal ArticleDOI
TL;DR: The current version of the Basic Support for Cooperative Work system is described in detail, including design choices resulting from use of the web as a cooperation platform and feedback from users following the release of a previous version of BSCW to the public domain.
Abstract: The emergence and widespread adoption of the World Wide Web offers a great deal of potential in supporting cross-platform cooperative work within widely dispersed working groups. The Basic Support for Cooperative Work (BSCW) project at GMD is attempting to realize this potential through development of web-based tools which provide cross-platform collaboration services to groups using existing web technologies. This paper describes one of these tools, the BSCW Shared Workspace system, a centralized cooperative application integrated with an unmodified web server and accessible from standard web browsers. The BSCW system supports cooperation through "shared workspaces": small repositories in which users can upload documents, hold threaded discussions and obtain information on the previous activities of other users to coordinate their own work. The current version of the system is described in detail, including design choices resulting from use of the web as a cooperation platform and feedback from users following the release of a previous version of BSCW to the public domain.

334 citations


Proceedings ArticleDOI
08 Feb 1997
TL;DR: SHOE, a set of Simple HTML Ontology Extensions which allow World-Wide Web authors to annotate their pages with semantic knowledge such as “I am a graduate student” or “This person is my graduate advisor”, is described.
Abstract: This paper describes SHOE, a set of Simple HTML Ontology Extensions which allow World-Wide Web authors to annotate their pages with semantic knowledge such as "I am a graduate student" or "This person is my graduate advisor". These annotations are expressed in terms of ontological knowledge which can be generated by using or extending standard ontologies available on the Web. This makes it possible to ask Web agent queries such as "Find me all graduate students in Maryland who are working on a project funded by DoD initiative 123-4567", instead of simplistic keyword searches enabled by current search engines. We have also developed a web-crawling agent, Exposé, which interns SHOE knowledge from web documents, making these kinds of queries a reality.
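The general idea of harvesting semantic claims embedded in HTML pages, as a crawler like Exposé does, can be illustrated with a short sketch. Note that the attribute syntax below is invented for illustration and is not SHOE's actual tag format; only the overall harvest-claims-from-markup pattern is being shown.

```python
# Illustrative only: an invented data-claim attribute stands in for SHOE's
# real annotation tags. A crawler scans pages and collects the claims.
import re

PAGE = """
<html><body>
  <span data-claim="GraduateStudent(jane)">Jane</span>
  <span data-claim="advisorOf(prof_smith, jane)">Prof. Smith</span>
</body></html>
"""

CLAIM = re.compile(r'data-claim="([^"]+)"')

def harvest(html: str) -> list:
    """Collect the semantic claims annotated in a page, in document order."""
    return CLAIM.findall(html)

claims = harvest(PAGE)  # two claims about jane and prof_smith
```

A query such as "find all graduate students" then becomes a lookup over the harvested claims rather than a keyword search over page text.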

246 citations


Journal ArticleDOI
01 Sep 1997
TL;DR: The varieties of link information (not just hyperlinks) on the Web, how the Web differs from conventional hypertext, and how the links can be exploited to build useful applications are discussed.
Abstract: Web information retrieval tools typically make use of only the text on pages, ignoring valuable information implicitly contained in links. At the other extreme, viewing the Web as a traditional hypertext system would also be a mistake, because heterogeneity, cross-domain links, and the dynamic nature of the Web mean that many assumptions of typical hypertext systems do not apply. The novelty of the Web leads to new problems in information access, and it is necessary to make use of the new kinds of information available, such as multiple independent categorization, naming, and indexing of pages. This paper discusses the varieties of link information (not just hyperlinks) on the Web, how the Web differs from conventional hypertext, and how the links can be exploited to build useful applications. Specific applications presented as part of the ParaSite system find individuals' homepages, new locations of moved pages, and unindexed information.

Journal Article
TL;DR: The advertiser-supported Web site is one of several business models vying for legitimacy in the emerging medium of the World Wide Web on the Internet.
Abstract: The advertiser-supported Web site is one of several business models vying for legitimacy in the emerging medium of the World Wide Web on the Internet (Hoffman, Novak, and Chatterjee 1995). Currently, there are three major types of advertiser-supported sites: 1) sponsored content sites like Hotwired, ESPNET Sportszone, and ZD Net, 2) sponsored search agents and directories like InfoSeek, Excite, and Yahoo, and 3) entry portal sites like Netscape. At present, these three classes of sites are split at about 55 percent, 36 percent and 19 percent, respectively, in terms of advertising revenue (Jupiter Communications 1996).

Book
01 Jan 1997
TL;DR: Web Teaching walks educators and trainers through the process of creating customized Web-based teaching aids, as well as the nuts and bolts of multimedia, to jump-start access to the Web's many resources.
Abstract: From the Publisher: Can there be any doubt that the future of education is linked to the World Wide Web? "Web Teaching" walks educators and trainers through the process of creating customized Web-based teaching aids, as well as the nuts and bolts of multimedia. Illustrations and appendices help jump-start access to the Web's many resources. 236 pp. Pub: 4/97.

Proceedings ArticleDOI
03 Jan 1997
TL;DR: It is suggested that Web site designers consider the genres that are appropriate for their situation and attempt to reuse familiar genres, as well as examining randomly selected Web pages and categorizing the type of genre represented.
Abstract: The World Wide Web is growing quickly and being applied to many new types of communications. As a basis for studying organizational communications, Yates and Orlikowski (1992) proposed using genres. They defined genres as, "typified communicative actions characterized by similar substance and form and taken in response to recurrent situations". They further suggested that communications in a new medium will show both reproduction or adaptation of existing communicative genres as well as the emergence of new genres. We studied this phenomenon on the World Wide Web by examining randomly selected Web pages (100 in one sample and 1000 in a second) and categorizing the type of genre represented. Perhaps most interestingly, we saw examples of genres being adapted to take advantage of the linking and interactivity of the new medium, such as solicitations for help and genealogies. We suggest that Web site designers consider the genres that are appropriate for their situation and attempt to reuse familiar genres.

Journal ArticleDOI
TL;DR: This descriptive case study illustrates the value of user-centered design and usability testing of World Wide Web sites at a large midwestern university.
Abstract: Administrators at a large midwestern university recognized that their World Wide Web site was rapidly becoming an important factor in recruiting new students. They also expected this Web site to serve many different types of information needs for existing students, faculty, staff, and alumni. An interdisciplinary team of faculty, graduate students, and staff was formed to evaluate the existing Web site. A group from this team first conducted a needs analysis to determine the kinds of information the target population was seeking. This analysis led to the creation of a new information structure for the Web site. Usability tests of both the new and old designs were conducted on paper. Users were able to find answers to frequently asked questions much more rapidly and successfully with the new information structure. This structure was further refined through additional usability tests conducted on the Web itself. This descriptive case study illustrates the value of user-centered design and usability testing of World Wide Web sites.

Patent
06 Jun 1997
TL;DR: In this article, Web page URLs are stored as attribute-values of directory objects and Web page hyperlinks to those directory objects are provided together with access logic responsive to the hyperlinks for retrieving the URLs for use by a client.
Abstract: Provided is a method and apparatus for improved access to material via the World Wide Web Internet service. Web page URLs are stored as attribute-values of directory objects and Web page hyperlinks to those directory objects are provided together with access logic responsive to the hyperlinks for retrieving the URLs for use by a client. This indirect access to Web pages via hyperlinks to directories has significant advantages for Web page organization and facilitates more flexible methods of Web page access than the known use of hyperlinks which include URLs pointing directly to the target Web pages.
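The indirection the patent describes can be sketched as follows; the object names and data layout here are invented for illustration, not taken from the patent. Hyperlinks reference a directory object whose attribute holds the real URL, so a page can move without breaking any link that points at it.

```python
# Invented illustration of URL indirection through directory objects:
# hyperlinks name a directory object; an attribute of that object holds
# the actual Web page URL.

directory = {
    "obj:home-page": {"url": "http://example.org/old/home.html"},
}

def resolve(object_id: str) -> str:
    """Access logic: follow a hyperlink's directory object to the stored URL."""
    return directory[object_id]["url"]

# Relocating the page only updates the directory attribute; every hyperlink
# that references "obj:home-page" keeps working unchanged.
directory["obj:home-page"]["url"] = "http://example.org/new/home.html"
target = resolve("obj:home-page")
```

This is the same design idea behind later persistent-identifier schemes: one mutable attribute replaces many scattered hard-coded URLs.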

Book
01 Oct 1997
TL;DR: This book focuses on the development of web interfaces for people with disabilities and the design of web pages and applications for People With Disabilities.
Abstract: Contents: Preface. D.J. Mayhew, Introduction. Part I: Perspectives From Psychology. C.P. Seltzer, The Use of Investigatory Responses as a Measure of Learning and Memory. W. Marks, C.L. Dulaney, Visual Information Processing on the World Wide Web. J.P. Magliano, M.C. Schleich, K.K. Millis, Discourse Process and Its Relevance to the Web. L.A. Whitaker, Human Navigation. Part II: Web User Populations. A. Druin, M. Platt, Children's Online Environments. L. Laux, Designing Web Pages and Applications for People With Disabilities. P. Burden, J. Davies, The World Wide Web as a Teaching Resource. J. Ratner, Easing the Learning Curve for Novice Web Users. Part III: Web Design Guidelines and Development Processes. E. Grose, C. Forsythe, J. Ratner, Using Web and Traditional Style Guides to Design Web Interfaces. J.A. Borges, I. Morales, N.J. Rodriguez, Page Design Guidelines Developed Through Usability Testing. P. Vora, Human Factors Methodology for Designing Web Sites. Part IV: Web Research and Development. A.M. Wichansky, G. Hackman, Jr., Web User Interface Development at Oracle Corporation. A. Kanerva, K. Keeker, K. Risden, E. Schuh, M. Czerwinski, Web Usability Research at Microsoft Corporation. R.C. Omanson, G.S. Lew, R.M. Schumacher, Creating Content for Both Paper and the Web. C. Johnson, The Ten Golden Rules for Providing Video Over the Web or 0% of 2.4M (at 270k/sec, 340 sec remaining). Part V: Collaboration and Visualization. E.N. Wiebe, J.E. Howe, Graphics Design on the Web. S. Greenberg, Collaborative Interfaces for the Web. B.B. Bederson, J.D. Hollan, J. Stewart, D. Rogers, D. Vick, L. Ring, E. Grose, C. Forsythe, A Zooming Web Browser.



Journal ArticleDOI
01 May 1997
TL;DR: Analytical methods developed in the field of Computer Supported Cooperative Work are used to investigate the reasons for the World Wide Web's present success, its strengths and weaknesses as a platform for CSCW, and prospects for future development.
Abstract: This paper investigates some of the issues which will determine the viability of the World Wide Web as an infrastructure for cooperative work. In fact, taking a weak definition of collaboration, the Web is already a very successful collaborative environment. In addition, it is already being used as the basis for experimental and commercial groupware. The paper takes this as a starting point and uses analytic methods developed in the field of Computer Supported Cooperative Work to investigate the reasons for the Web's present success, its strengths and weaknesses as a platform for CSCW, and prospects for future development.

Journal ArticleDOI
01 Sep 1997
TL;DR: A method for analysis and design of web-based information systems, and tools to support the method, WebArchitect and PilotBoat, which focuses on architectures and functions of web sites, rather than on appearance of each web resource (page), such as graphics and layouts.
Abstract: We have developed a method for analysis and design of web-based information systems (WBISs), and tools to support the method, WebArchitect and PilotBoat. The method and the tools focus on architectures and functions of web sites, rather than on the appearance of each web resource (page), such as graphics and layouts. Our goal is to efficiently develop WBISs that best support particular business processes at the least maintenance cost. Our method consists of two approaches, static and dynamic. We use the entity-relationship (E-R) approach for the static aspects of WBISs, and the scenario approach for the dynamic aspects. The E-R analysis and design, based on the relationship management methodology (RMM) developed by Isakowitz et al., defines what the entities are and how they are related. The scenario analysis defines how web resources are accessed, used, and changed, and by whom. The method also defines attributes of each web resource, which are used in maintaining the resource. WebArchitect enables designers and maintainers to directly manipulate meta-level links between web resources that are represented in a hierarchical manner. PilotBoat is a web client that navigates and lets users collaborate through web sites. We have applied our approaches to the WWW6 proceedings site.

Journal ArticleDOI
TL;DR: In this paper, the authors review and discuss elements to consider in Web page construction and evaluation, and provide a form to assist in assessment, as well as a set of criteria for web page construction.
Abstract: A growing concern has emerged for the quality of health-related documents contained on the World Wide Web. Increased use of the World Wide Web by consumers and health education professionals, as well as ease of Web page publication, has heightened the need for criteria in Web page construction and evaluation. This article reviews and discusses elements to consider in Web page construction and evaluation, and provides a form to assist in assessment.

Journal ArticleDOI
TL;DR: The major features required of a Web environment deploying digital credentials, including the introduction of security assistants for both clients and servers are described, and the status of the investigation into a credential-based environment is reported on.
Abstract: Often an information source on the Web would like to provide different classes of service to different clients. In the autonomous, highly distributed world of the Web, the traditional approach of using authentication to differentiate between classes of clients is no longer sufficient, as knowledge of a client's identity will often not suffice to determine whether a client is authorized to use a service. Our goal in this research project is to explore the use of digital credentials, digital analogues of the paper credentials we carry in our wallets today, to help solve this problem. In this paper we describe the major features required of a Web environment deploying digital credentials, including the introduction of security assistants for both clients and servers, and report on the status of our investigation into a credential-based environment.

Journal ArticleDOI
TL;DR: It is suggested that virtual hierarchies and virtual networks will assist users to find task-relevant information more easily and quickly and also help web authors to ensure that their pages are targeted at the users who wish to see them.
Abstract: The paper considers the usability of the World Wide Web in the light of a decade of research into the usability of hypertext and hypermedia systems. The concepts of virtual hierarchies and virtual networks are introduced as a mechanism for alleviating some of the shortcomings inherent in the current implementations of the web, without violating its basic philosophy. It is suggested that virtual hierarchies and virtual networks will assist users to find task-relevant information more easily and quickly and also help web authors to ensure that their pages are targeted at the users who wish to see them. The paper first analyses the published work on hypermedia usability, identifying the assumptions that underlie this research and relating them to the assumptions underlying the web. Some general conclusions are presented about both hypermedia usability principles and their applicability to the web. These results are coupled with problems identified from other sources to produce a requirements list for improving web usability. A possible solution is then presented which utilizes the capabilities of existing distributed information management software to permit web users to create virtual hierarchies and virtual networks. Some ways in which these virtual structures assist searchers to find useful information, and thus help authors to publicize their information more effectively, are described. The explanation is illustrated by examples taken from the GENIE Service, an implementation of some of the ideas. This uses the World Wide Web as a means of allowing global environmental change researchers throughout the world to find data that may be relevant to their research.

Proceedings ArticleDOI
15 Apr 1997
TL;DR: A technique to form focus+context views of World-Wide Web nodes that shows the immediate neighborhood of the current node and its position with respect to the important (landmark) nodes in the information space.
Abstract: With the explosive growth of information that is available on the World-Wide Web, it is very easy for the user to get lost in hyperspace. When the user feels lost, some idea of the position of the current node in the overall information space will help to orient the user. Therefore we have developed a technique to form focus+context views of World-Wide Web nodes. The view shows the immediate neighborhood of the current node and its position with respect to the important (landmark) nodes in the information space. The views have been used to enhance a Web search engine. We have also used the landmark nodes and the focus+context views in forming overview diagrams of Web sites.

Journal ArticleDOI
TL;DR: It is concluded that customer feedback must be managed in a disciplined way, by ensuring that feedback is representative of the customer population as a whole, not just of those with a propensity to comment.
Abstract: Presents a model which organizations can use to monitor Web site effectiveness. Argues that anecdotal evidence can be colorful but is not useful in structuring and managing an effective site. Suggests that traditional disciplines of composition and communication ‐ explicit purpose, coherent structure, relevant conclusion ‐ should be applied to Web site design. Concludes that customer feedback must be managed in a disciplined way, by ensuring that feedback is representative of the customer population as a whole, not just of those with a propensity to comment; and that the purpose and aims of a Web site must be thought through with the utmost care and attention to give a higher likelihood of creating an effective site.

01 Jan 1997
TL;DR: The CoBrow project proposes to extend the model of the WWW to include its users, which will enable many new applications like WWW based conferencing, help desks, online presentations, online tours, and group entertainment.
Abstract: The World Wide Web (WWW) is today the most successful service of the Internet. The richness of information available combined with easy access to this information makes it a premier information gathering tool for researchers and consumers; however, the model of today's WWW does not include the users. The WWW is a purely information focused environment, consisting of documents and links between these documents. The virtual world formed by the linked information on the WWW is completely separated from the world of its users. The CoBrow project [6] proposes to extend the model of the WWW to include its users. This will enable many new applications like WWW based conferencing, help desks, online presentations, online tours, and group entertainment. We believe that the WWW is well suited to become the unifying platform for synchronous, interactive collaboration across the

Journal ArticleDOI
01 Sep 1997
TL;DR: This paper proposes that each Web server does its own housekeeping, and a software agent named SiteHelper is designed to act as a housekeeper for the Web server and as a helper for a Web user to find relevant information at a particular site.
Abstract: The World Wide Web (the Web for short) is rapidly becoming an information flood as it continues to grow exponentially. This makes it difficult for users to find relevant pieces of information on the Web. Search engines and robots (spiders) are two popular techniques developed to address this problem. Search engines are indexing facilities over searchable databases. As the Web continues to expand, search engines are becoming redundant because of the large number of Web pages they return for a single search. Robots are similar to search engines; rather than indexing the Web, they traverse ("walk through") the Web, analyzing and storing relevant documents. The main drawback of these robots is their high demand on network resources, which results in networks being overloaded. This paper proposes an alternative way of assisting users in finding information on the Web. Since the Web is made up of many Web servers, instead of searching all the Web servers, we propose that each server does its own housekeeping. A software agent named SiteHelper is designed to act as a housekeeper for the Web server and as a helper for a Web user to find relevant information at a particular site. In order to assist the Web user in finding relevant information at the local site, SiteHelper interactively and incrementally learns about the Web user's areas of interest and aids them accordingly. To provide such intelligent capabilities, SiteHelper deploys enhanced HCV with incremental learning facilities as its learning and inference engines.

Book ChapterDOI
07 Sep 1997
TL;DR: A toolkit for application developers, MetaWeb, is presented, which augments the Web with basic features which provide new and legacy applications with better support for synchronous cooperation.
Abstract: The World Wide Web is increasingly seen as an attractive technology for the deployment and evaluation of groupware. However, the underlying architecture of the Web is inherently stateless, best supporting asynchronous types of cooperation. This paper presents a toolkit for application developers, MetaWeb, which augments the Web with basic features that provide new and legacy applications with better support for synchronous cooperation. Using three simple abstractions, User, Location and Session, MetaWeb allows applications to be coupled as tightly or as loosely to the Web as desired. The paper presents two distinct applications of MetaWeb, including the extension of an existing application, the BSCW shared workspace system, from which a number of observations are drawn.

Journal ArticleDOI
TL;DR: This article presents various log file analysis techniques and discusses issues in interpreting log file data from Web server-generated log files as a means of evaluating Web use.
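Log file analysis of the kind this article surveys typically starts from the Common Log Format emitted by 1997-era Web servers. Below is a minimal parsing sketch; the sample log line is invented for illustration.

```python
# Minimal Common Log Format (CLF) parser; the sample line is invented.
import re

CLF = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) \S+" (?P<status>\d{3}) (?P<size>\S+)'
)

def parse_clf(line: str) -> dict:
    """Split one CLF line into host, timestamp, request, status, and size."""
    m = CLF.match(line)
    return m.groupdict() if m else {}

line = '192.0.2.1 - - [10/Oct/1997:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326'
rec = parse_clf(line)  # rec["path"] is '/index.html', rec["status"] is '200'
```

Aggregating such records (hits per page, per host, per hour) is the raw material for the Web use evaluation techniques the article discusses, though, as it notes, interpreting the counts (caching, proxies, shared hosts) is the hard part.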