
Showing papers on "Web modeling" published in 1998


Book
01 Jan 1998
TL;DR: Information Architecture for the World Wide Web is a guide to designing Web sites and intranets that support growth, management, and ease of use, written for Webmasters, designers, and anyone else involved in building a Web site.
Abstract: From the Publisher: Some Web sites "work" and some don't. Good Web site consultants know that you can't just jump in and start writing HTML, the same way you can't build a house by just pouring a foundation and putting up some walls. You need to know who will be using the site, and what they'll be using it for. You need some idea of what you'd like to draw their attention to during their visit. Overall, you need a strong, cohesive vision for the site that makes it both distinctive and usable. Information Architecture for the World Wide Web is about applying the principles of architecture and library science to Web site design. Each Web site is like a public building, available for tourists and regulars alike to breeze through at their leisure. The job of the architect is to set up the framework for the site to make it comfortable and inviting for people to visit, relax in, and perhaps even return to someday. Most books on Web development concentrate either on the aesthetics or the mechanics of the site. This book is about the framework that holds the two together. With this book, you learn how to design Web sites and intranets that support growth, management, and ease of use. Special attention is given to: the process behind architecting a large, complex site; Web site hierarchy design and organization; and techniques for making your site easier to search. Information Architecture for the World Wide Web is for Webmasters, designers, and anyone else involved in building a Web site. It's for novice Web designers who, from the start, want to avoid the traps that result in poorly designed sites. It's for experienced Web designers who have already created sites but realize that something "is missing" from their sites and want to improve them. It's for programmers and administrators who are comfortable with HTML, CGI, and Java but want to understand how to organize their Web pages into a cohesive site. The authors are two of the principals of Argus Associates, a Web consulting firm. At Argus, they have created information architectures for Web sites and intranets of some of the largest companies in the United States, including Chrysler Corporation, Barron's, and Dow Chemical.

1,297 citations


Proceedings ArticleDOI
01 May 1998
TL;DR: This investigation shows that although the process by which users of the Web create pages and links is very difficult to understand at a “local” level, it results in a much greater degree of orderly high-level structure than has typically been assumed.
Abstract: The World Wide Web grows through a decentralized, almost anarchic process, and this has resulted in a large hyperlinked corpus without the kind of logical organization that can be built into more traditionally-created hypermedia. To extract meaningful structure under such circumstances, we develop a notion of hyperlinked communities on the WWW through an analysis of the link topology. By invoking a simple, mathematically clean method for defining and exposing the structure of these communities, we are able to derive a number of themes: the communities can be viewed as containing a core of central, "authoritative" pages linked together, and they exhibit a natural type of hierarchical topic generalization that can be inferred directly from the pattern of linkage. Our investigation shows that although the process by which users of the Web create pages and links is very difficult to understand at a "local" level, it results in a much greater degree of orderly high-level structure than has typically been assumed.
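
To make the link-topology idea concrete, here is a minimal sketch of the kind of hub/authority iteration such an analysis can rest on; the toy link graph, the iteration count, and the normalization are assumptions for illustration, not the paper's exact formulation.

# Toy hub/authority iteration over a small directed link graph.
# The graph and iteration count are made up for illustration.
links = {  # page -> pages it links to
    "p1": ["a1", "a2"],
    "p2": ["a1", "a2", "a3"],
    "p3": ["a2", "a3"],
    "a1": [],
    "a2": [],
    "a3": ["p1"],
}

hub = {p: 1.0 for p in links}
auth = {p: 1.0 for p in links}

for _ in range(20):
    # authority score: sum of hub scores of pages linking to the page
    auth = {p: sum(hub[q] for q in links if p in links[q]) for p in links}
    # hub score: sum of authority scores of pages the page links to
    hub = {p: sum(auth[t] for t in links[p]) for p in links}
    # normalize so the scores stay bounded
    a_norm = sum(v * v for v in auth.values()) ** 0.5 or 1.0
    h_norm = sum(v * v for v in hub.values()) ** 0.5 or 1.0
    auth = {p: v / a_norm for p, v in auth.items()}
    hub = {p: v / h_norm for p, v in hub.items()}

print(sorted(auth.items(), key=lambda kv: -kv[1])[:3])  # most "authoritative" pages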

905 citations


Proceedings Article
01 Jul 1998
TL;DR: The goal of the research described here is to automatically create a computer understandable world wide knowledge base whose content mirrors that of the World Wide Web, and several machine learning algorithms for this task are described.
Abstract: The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable world wide knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs: an ontology defining the classes and relations of interest, and a set of training data consisting of labeled regions of hypertext representing instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system.
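
As a rough illustration of learning from labeled page text, the sketch below trains a tiny naive Bayes classifier on invented training snippets and classifies a new page; the classes, data, and smoothing constant are assumptions, and this is not the paper's actual algorithm.

# Minimal naive Bayes text classifier over labeled "page" snippets.
# Classes, training data, and smoothing are illustrative assumptions.
import math
from collections import Counter, defaultdict

training = [
    ("faculty", "professor of computer science research publications teaching"),
    ("faculty", "associate professor department office hours courses"),
    ("course",  "syllabus lectures homework exams course schedule"),
    ("course",  "course description prerequisites grading lectures"),
]

word_counts = defaultdict(Counter)
class_counts = Counter()
vocab = set()
for label, text in training:
    words = text.split()
    class_counts[label] += 1
    word_counts[label].update(words)
    vocab.update(words)

def classify(text, alpha=1.0):
    words = text.split()
    best, best_score = None, float("-inf")
    for label in class_counts:
        total = sum(word_counts[label].values())
        score = math.log(class_counts[label] / sum(class_counts.values()))
        for w in words:
            score += math.log((word_counts[label][w] + alpha) /
                              (total + alpha * len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("lectures and homework for the course"))  # -> "course"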

766 citations


Proceedings ArticleDOI
22 Apr 1998
TL;DR: The design of WebLogMiner is presented, current progress is reported, and future work in this direction is outlined; mining Web server logs in this way can improve system performance, enhance the quality and delivery of Internet information services to the end user, and identify populations of potential customers for electronic commerce.
Abstract: As a confluence of data mining and World Wide Web technologies, it is now possible to perform data mining on Web log records collected from the Internet Web-page access history. The behaviour of Web page readers is imprinted in the Web server log files. Analyzing and exploring regularities in this behaviour can improve the system performance, enhance the quality and delivery of Internet information services to the end user, and identify populations of potential customers for electronic commerce. Thus, by observing people using collections of data, data mining can bring a considerable contribution to digital library designers. In a joint effort between the TeleLearning-NCE (Networks of Centres of Excellence) project on the Virtual University and the NCE-IRIS project on data mining, we have been developing a knowledge discovery tool, called WebLogMiner, for mining Web server log files. This paper presents the design of WebLogMiner, reports current progress and outlines future work in this direction.
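
A first step in this kind of log analysis can be sketched as parsing Common Log Format records and counting per-page accesses; the sample records and the regular expression are assumptions for illustration, not WebLogMiner's implementation.

# Count per-page accesses from Common Log Format records.
# The sample records and regex are illustrative, not from WebLogMiner.
import re
from collections import Counter

sample_log = [
    '192.0.2.1 - - [22/Apr/1998:10:00:01 -0700] "GET /index.html HTTP/1.0" 200 2048',
    '192.0.2.2 - - [22/Apr/1998:10:00:05 -0700] "GET /courses/ai.html HTTP/1.0" 200 5120',
    '192.0.2.1 - - [22/Apr/1998:10:01:10 -0700] "GET /index.html HTTP/1.0" 304 0',
]

pattern = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(\S+) (\S+) [^"]*" (\d{3}) (\S+)')

page_hits = Counter()
for line in sample_log:
    m = pattern.match(line)
    if not m:
        continue
    host, timestamp, method, path, status, size = m.groups()
    if method == "GET" and status.startswith("2"):
        page_hits[path] += 1

print(page_hits.most_common())  # e.g. [('/index.html', 1), ('/courses/ai.html', 1)]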

514 citations


Journal ArticleDOI
TL;DR: This paper presents SoftMealy, a novel wrapper representation formalism based on a finite-state transducer and contextual rules that can wrap a wide range of semistructured Web pages because FSTs can encode each different attribute permutation as a path.
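
The wrapper idea described above, extracting attribute values from semistructured pages using contextual rules, can be loosely illustrated with separator-style rules; the sample fragment and rules below are invented, and the simplification does not capture SoftMealy's finite-state transducer formalism.

# Loose illustration of contextual-rule extraction from a semistructured page.
# The sample fragment and rules are invented; this is not SoftMealy's FST.
import re

fragment = "<li><b>Jane Doe</b>, Dept. of CS, <i>jane@example.edu</i></li>"

# Each rule: attribute name -> (left context, right context) regexes.
rules = {
    "name":  (r"<b>",       r"</b>"),
    "dept":  (r"</b>,\s*",  r",\s*<i>"),
    "email": (r"<i>",       r"</i>"),
}

record = {}
for attr, (left, right) in rules.items():
    m = re.search(left + r"(.*?)" + right, fragment)
    if m:
        record[attr] = m.group(1)

print(record)  # {'name': 'Jane Doe', 'dept': 'Dept. of CS', 'email': 'jane@example.edu'}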

476 citations


Journal ArticleDOI
01 Apr 1998
TL;DR: The authors describe an approach for developing adaptive textbooks and present InterBook, an authoring tool based on this approach that simplifies the development of adaptive electronic textbooks on the Web; such textbooks can adapt to users with very different backgrounds, prior knowledge of the subject, and learning goals.
Abstract: Many Web-based educational applications are expected to be used by very different groups of users without the assistance of a human teacher. Accordingly there is a need for systems which can adapt to users with very different backgrounds, prior knowledge of the subject and learning goals. An electronic textbook is one of the most prominent varieties of Web-based educational systems. In this paper we describe an approach for developing adaptive textbooks and present InterBook—an authoring tool based on this approach which simplifies the development of adaptive electronic textbooks on the Web.
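
One way to picture this kind of adaptation is concept-based annotation, where a section is recommended only when its prerequisite concepts are already known to the user; the prerequisites, user model, and labels below are invented and are not InterBook's actual data model.

# Concept-based adaptive annotation of textbook sections (illustrative only).
# Prerequisites, the user model, and labels are invented, not InterBook's model.
prerequisites = {
    "Intro":       [],
    "HTML basics": ["Intro"],
    "Forms":       ["HTML basics"],
    "CGI":         ["HTML basics", "Forms"],
}

user_knows = {"Intro", "HTML basics"}   # concepts the user has already learned

def annotate(section):
    missing = [c for c in prerequisites[section] if c not in user_knows]
    if section in user_knows:
        return "already learned"
    return "ready to learn" if not missing else "not ready (missing: %s)" % ", ".join(missing)

for section in prerequisites:
    print(section, "->", annotate(section))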

445 citations


Journal ArticleDOI
TL;DR: This paper discusses the use of an object-oriented approach for web-based applications design, based on a method named Object-Oriented Hypermedia Design Method (OOHDM), and introduces OOHDM, describing its main activities, namely: conceptual design, navigational design, abstract interface design and implementation.
Abstract: In this paper we discuss the use of an object-oriented approach for web-based applications design, based on a method named Object-Oriented Hypermedia Design Method (OOHDM). We first motivate our work discussing the problems encountered while designing large scale, dynamic web-based applications, which combine complex navigation patterns with sophisticated computational behavior. We argue that a method providing systematic guidance to design is needed. Next, we introduce OOHDM, describing its main activities, namely: conceptual design, navigational design, abstract interface design and implementation, and discuss how OOHDM designs can be implemented in the WWW. Finally, related work and future research in this area are further discussed. © 1998 John Wiley & Sons, Inc.

441 citations


Proceedings ArticleDOI
01 May 1998
TL;DR: A Web-based information agent is described that assists the user in performing a scientific literature search and can find papers similar to a given paper using word information and by analyzing common citations made by the papers.
Abstract: Research papers available on the World Wide Web (WWW or Web) are often poorly organized, often exist in forms opaque to search engines (e.g. Postscript), and increase in quantity daily. Significant amounts of time and effort are typically needed in order to find interesting and relevant publications on the Web. We have developed a Web based information agent that assists the user in the process of performing a scientific literature search. Given a set of keywords, the agent uses Web search engines and heuristics to locate and download papers. The papers are parsed in order to extract information features such as the abstract and individually identified citations. The agent's Web interface can be used to find relevant papers in the database using keyword searches, or by navigating the links between papers formed by the citations. Links to both citing and cited publications can be followed. In addition to simple browsing and keyword searches, the agent can find papers which are similar to a given paper using word information and by analyzing common citations made by the papers.
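
The "similar papers via common citations" step can be illustrated with a simple overlap measure over citation sets; the papers and the Jaccard measure below are assumptions for the example, not the agent's actual similarity function.

# Rank papers by citation overlap with a query paper (illustrative sketch).
# The citation sets and the Jaccard measure are assumptions, not the agent's code.
citations = {
    "paperA": {"ref1", "ref2", "ref3", "ref4"},
    "paperB": {"ref2", "ref3", "ref5"},
    "paperC": {"ref6", "ref7"},
    "paperD": {"ref1", "ref2", "ref3"},
}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

query = "paperA"
ranked = sorted(
    ((other, jaccard(citations[query], refs))
     for other, refs in citations.items() if other != query),
    key=lambda pair: -pair[1],
)
print(ranked)  # paperD and paperB share citations with paperA; paperC shares none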

357 citations


Proceedings Article
01 Jul 1998
TL;DR: This paper introduces a novel approach to clustering, which is called cluster mining, and presents PageGather, a cluster mining algorithm that takes Web server logs as input and outputs the contents of candidate index pages.
Abstract: The creation of a complex web site is a thorny problem in user interface design. In IJCAI '97, we challenged the AI community to address this problem by creating adaptive web sites: sites that automatically improve their organization and presentation by mining visitor access data collected in Web server logs. In this paper we introduce our own approach to this broad challenge. Specifically, we investigate the problem of index page synthesis -- the automatic creation of pages that facilitate a visitor's navigation of a Web site. First, we formalize this problem as a clustering problem and introduce a novel approach to clustering, which we call cluster mining: Instead of attempting to partition the entire data space into disjoint clusters, we search for a small number of cohesive (and possibly overlapping) clusters. Next, we present PageGather, a cluster mining algorithm that takes Web server logs as input and outputs the contents of candidate index pages. Finally, we show experimentally that PageGather is both faster (by a factor of three) and more effective than traditional clustering algorithms on this task. Our experiment relies on access logs collected over a month from an actual web site.
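
The cluster-mining idea can be sketched as building a page co-occurrence graph from per-visitor access sets, keeping edges above a threshold, and reporting connected components as candidate index-page contents; the visits, the threshold, and the use of connected components are simplifying assumptions, not the full PageGather algorithm.

# Sketch of cluster mining over visitor access data (simplified PageGather flavor).
# Visits, the threshold, and the use of connected components are assumptions.
from collections import Counter
from itertools import combinations

visits = [                      # one set of pages per visitor session
    {"a.html", "b.html", "c.html"},
    {"a.html", "b.html"},
    {"b.html", "c.html"},
    {"d.html", "e.html"},
    {"d.html", "e.html"},
]

co_occurrence = Counter()
for pages in visits:
    for p, q in combinations(sorted(pages), 2):
        co_occurrence[(p, q)] += 1

threshold = 2
graph = {}
for (p, q), n in co_occurrence.items():
    if n >= threshold:
        graph.setdefault(p, set()).add(q)
        graph.setdefault(q, set()).add(p)

def connected_components(graph):
    seen, components = set(), []
    for start in graph:
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(graph[node] - component)
        seen |= component
        components.append(component)
    return components

print(connected_components(graph))  # candidate index-page contents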

326 citations


Proceedings ArticleDOI
23 Feb 1998
TL;DR: The WebOQL system is presented, which supports a general class of data restructuring operations in the context of the Web and synthesizes ideas from query languages for the Web, for semistructured data and for Website restructuring.
Abstract: The widespread use of the Web has originated several new data management problems, such as extracting data from Web pages and making databases accessible from Web browsers, and has renewed the interest in problems that had appeared before in other contexts, such as querying graphs, semistructured data and structured documents. Several systems and languages have been proposed for solving each of these Web data management problems, but none of these systems addresses all the problems from a unified perspective. Many of these problems essentially amount to data restructuring: we have information represented according to a certain structure and we want to construct another representation of (part of it) using a different structure. We present the WebOQL system, which supports a general class of data restructuring operations in the context of the Web. WebOQL synthesizes ideas from query languages for the Web, for semistructured data and for Website restructuring.

296 citations



Patent
19 Jun 1998
TL;DR: In this article, a software tool is provided for use with a computer system for simplifying the creation of Web sites, which comprises a plurality of pre-stored templates, comprising HTML formatting code, text, fields and formulas.
Abstract: A software tool is provided for use with a computer system for simplifying the creation of Web sites. The tool comprises a plurality of pre-stored templates, comprising HTML formatting code, text, fields and formulas. The templates preferably correspond to different types of Web pages and other features commonly found on or available to Web sites. Each feature may have various options. To create a web site, a Web site creator (the person using the tool to create a web site) is prompted by the tool through a series of views stored in the tool to select the features and options desired for the Web site. Based on these selections, the tool prompts the web site creator to supply data to populate fields of the templates determined by the tool to correspond to the selected features and options. Based on the identified templates and supplied data, the tool generates the customized Web site without the web site creator writing any HTML or other programming code.
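
The template mechanism can be pictured with a toy generator that substitutes user-supplied field values into pre-stored page templates; the templates, fields, and file naming below are invented for illustration, not the patented tool's formats.

# Toy template-driven page generation (illustrative of the general idea only).
# The templates, fields, and file naming are invented, not the patent's formats.
from string import Template

templates = {
    "home":    Template("<html><h1>$company</h1><p>$tagline</p></html>"),
    "contact": Template("<html><h1>Contact $company</h1><p>Email: $email</p></html>"),
}

# Data a site creator might supply when prompted for the selected features.
site_data = {
    "company": "Example Widgets Inc.",
    "tagline": "Widgets for the Web since 1998",
    "email": "info@example.com",
}

selected_features = ["home", "contact"]

site = {name + ".html": templates[name].substitute(site_data)
        for name in selected_features}

for filename, html in site.items():
    print(filename)
    print(html)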

Patent
02 Mar 1998
TL;DR: In this paper, a system for the retrieval, construction and manipulation of any kind of objects using Structured Query Language (SQL) over disparate relational storage systems on the web is presented.
Abstract: The present invention provides a system for the retrieval, construction and manipulation of any kind of objects using Structured Query Language (SQL) over disparate relational storage systems on the web. Uniform Resource Locators (URLs) are used by the present invention to locate objects corresponding to component relational databases on the web and other web objects. URLs locating relational schema components and other web objects are stored as attribute values in tables. Object methods and operators on such web objects are defined as part of user defined type definition for an attribute type in a table. Object request brokers apply such methods or operators on web objects anywhere on the web. Since URLs can point to relational data store under a remote schema definition, a business application logic in the form of object package is executed after constructing proper sets of records by relational operations at the remote schema location. This leads to partitioning of a logical schema into many physical schema components with business objects. Also by this invention, parts of a web object can be intelligently manipulated and access methods through index creation enable range access over web objects. Additionally, this invention suggests possible internet security by authorizations at component schema locations and by further maintaining processing logic for secured transmission over the internet. SQL queries create, retrieve and manipulate disparate web objects with implicit or explicit calls to business application logic as object methods. This invention uniquely incorporates a cooperative method of preparation, execution and resolution of a SQL query manipulating uniform resource locators and object definitions at multiple locations on the web.
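
The idea of URLs stored as attribute values and manipulated through SQL can be loosely illustrated with an in-memory table whose rows hold URLs, selected by a query and then handled by an application-level method; the table, rows, and resolve step below are invented and do not implement the patent's distributed-schema machinery.

# URLs stored as attribute values in a relational table, selected with SQL and
# then handled by an application-level method (illustrative sketch only).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (id INTEGER PRIMARY KEY, title TEXT, url TEXT)")
conn.executemany(
    "INSERT INTO documents (title, url) VALUES (?, ?)",
    [("Price list", "http://example.com/prices.html"),
     ("Catalog",    "http://example.org/catalog.html")],
)

def resolve(url):
    """Stand-in for an object method applied to a Web object located by its URL."""
    return "<html>fetched %s</html>" % url   # a real system would fetch over HTTP

rows = conn.execute("SELECT title, url FROM documents WHERE url LIKE 'http://example.com/%'")
for title, url in rows:
    print(title, "->", resolve(url))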

Proceedings Article
01 Jul 1998
TL;DR: This work has developed methods for mapping web sources into a simple, uniform representation that makes it efficient to integrate multiple sources and makes it easy to maintain these agents and incorporate new sources as they become available.
Abstract: The Web is based on a browsing paradigm that makes it difficult to retrieve and integrate data from multiple sites. Today, the only way to do this is to build specialized applications, which are time-consuming to develop and difficult to maintain. We are addressing this problem by creating the technology and tools for rapidly constructing information agents that extract, query, and integrate data from web sources. Our approach is based on a simple, uniform representation that makes it efficient to integrate multiple sources. Instead of building specialized algorithms for handling web sources, we have developed methods for mapping web sources into this uniform representation. This approach builds on work from knowledge representation, machine learning and automated planning. The resulting system, called Ariadne, makes it fast and cheap to build new information agents that access existing web sources. Ariadne also makes it easy to maintain these agents and incorporate new sources as they become available.

Patent
19 Jun 1998
TL;DR: In this paper, a software tool is provided for use with a computer system for simplifying the creation of Web sites, which comprises a plurality of pre-stored templates, comprising HTML formatting code, text, fields and formulas.
Abstract: A software tool is provided for use with a computer system for simplifying the creation of Web sites. The tool comprises a plurality of pre-stored templates, comprising HTML formatting code, text, fields and formulas. The templates preferably correspond to different types of Web pages and other features commonly found on or available to Web sites. Each feature may have various options. To create a web site, a Web site creator (the person using the tool to create a web site) is prompted by the tool through a series of views stored in the tool to select the features and options desired for the Web site. Based on these selections, the tool prompts the web site creator to supply data to populate fields of the templates determined by the tool to correspond to the selected features and options. Based on the identified templates and supplied data, the tool generates the customized Web site without the web site creator writing any HTML or other programming code. Help documents pertaining to the selected features of the web site are automatically posted to the web site.

Proceedings ArticleDOI
01 Jan 1998
TL;DR: New techniques for Web Ecology and Evolution Visualization (WEEV) are presented, intended to aid authors and webmasters with the production and organization of content, assist Web surfers making sense of information, and help researchers understand the Web.
Abstract: Several visualizations have emerged which attempt to visualize all or part of the World Wide Web. Those visualizations, however, fail to present the dynamically changing ecology of users and documents on the Web. We present new techniques for Web Ecology and Evolution Visualization (WEEV). Disk Trees represent a discrete time slice of the Web ecology. A collection of Disk Trees forms a Time Tube, representing the evolution of the Web over longer periods of time. These visualizations are intended to aid authors and webmasters with the production and organization of content, assist Web surfers making sense of information, and help researchers understand the Web.

Book
01 Jan 1998
TL;DR: This practical book explains in detail how to construct agents capable of learning and competing, including both design principles and actual code for personal agents, network or Web agents, multi-agent systems and commercial agents.
Abstract: From the Publisher: A state-of-the-art guide on how to build intelligent Web-based applications using Java Joseph and Jennifer Bigus update and significantly expand their book on building intelligent Web-based applications using Java. Geared to network programmers or Web developers who have previously programmed agents in Smalltalk or C++, this practical book explains in detail how to construct agents capable of learning and competing, including both design principles and actual code for personal agents, network or Web agents, multi-agent systems and commercial agents. New and revised coverage includes agent tools, agent uses for Web applications (including personalization, cross-selling, and e-commerce), and additional AI technologies such as fuzzy logic and genetic algorithms.

Journal ArticleDOI
Ora Lassila
TL;DR: This paper considers how the Resource Description Framework, with its focus on machine-understandable semantics, has the potential for saving time and yielding more accurate search results.
Abstract: The sheer volume of information can make searching the Web frustrating. The paper considers how the Resource Description Framework, with its focus on machine-understandable semantics, has the potential for saving time and yielding more accurate search results. RDF, a foundation for processing metadata, provides interoperability between applications that exchange machine understandable information on the Web.
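
The flavor of machine-understandable metadata can be shown with a minimal subject-predicate-object store and a pattern query; the resources, property names, and query helper below are invented and do not use an actual RDF library or the RDF/XML syntax.

# Minimal subject-predicate-object triples and a pattern query (illustration only).
# Resource names and the query helper are invented; no real RDF library is used.
triples = [
    ("http://example.org/doc1", "dc:creator", "Ora Lassila"),
    ("http://example.org/doc1", "dc:title",   "RDF and metadata"),
    ("http://example.org/doc2", "dc:creator", "Jane Doe"),
]

def query(subject=None, predicate=None, obj=None):
    """Return triples matching the given pattern; None acts as a wildcard."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Which documents were created by "Ora Lassila"?
print([s for s, p, o in query(predicate="dc:creator", obj="Ora Lassila")])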

Patent
Victor S. Moore, Glen R. Walters
31 Mar 1998
TL;DR: The authors present an object-oriented, template-driven interface for designing Web pages and complete Web sites; the design tool can be implemented as a Java application or applet.
Abstract: Methods and systems for designing a Web page, to be hosted on a Web page server. The development applications provide an object-oriented, template-driven interface for a customer or merchant to utilize in the design of a Web page or a complete Web site. The Web site produced allows the merchant to become a part of a distributed electronic commerce system or Internet commerce system for doing business on the World Wide Web. The design tool can be implemented in a Java application or applet.

Journal ArticleDOI
01 Apr 1998
TL;DR: WebL is a high level, object-oriented scripting language that incorporates two novel features: service combinators and a markup algebra that extracts structured and unstructured values from pages for computation, and is based on algebraic operations on sets of markup elements.
Abstract: In this paper we introduce a programming language for Web document processing called WebL. WebL is a high level, object-oriented scripting language that incorporates two novel features: service combinators and a markup algebra. Service combinators are language constructs that provide reliable access to Web services by mimicking a Web surfer's behavior when a failure occurs while retrieving a page. The markup algebra extracts structured and unstructured values from pages for computation, and is based on algebraic operations on sets of markup elements. WebL is used to quickly build and experiment with custom Web crawlers, meta-search engines, page transducers, shopping robots, etc.
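
The service-combinator idea can be approximated with small higher-order functions that retry a fetch and fall back to an alternative on failure; the combinator names, the fake fetcher, and the retry policy below are assumptions for illustration, not WebL's syntax or semantics.

# Fallback/retry combinators in the spirit of service combinators (not WebL syntax).
# The combinator names, fake fetcher, and retry policy are illustrative assumptions.
_calls = {"count": 0}

def flaky_fetch(url):
    """Stand-in for an unreliable Web fetch: fails on its first two calls."""
    _calls["count"] += 1
    if _calls["count"] <= 2:
        raise IOError("timeout fetching " + url)
    return "<html>content of %s</html>" % url

def retry(service, attempts=3):
    """Combinator: try a service several times before giving up."""
    def combined(url):
        last_error = None
        for _ in range(attempts):
            try:
                return service(url)
            except IOError as err:
                last_error = err
        raise last_error
    return combined

def alternative(primary, secondary):
    """Combinator: fall back to a second service if the first one fails."""
    def combined(url):
        try:
            return primary(url)
        except IOError:
            return secondary(url)
    return combined

robust_fetch = alternative(retry(flaky_fetch), retry(flaky_fetch))
print(robust_fetch("http://example.com/page.html"))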

Patent
19 Jun 1998
TL;DR: In this paper, a software tool is provided for use with a computer system for simplifying the creation of Web sites, which comprises a plurality of pre-stored templates, comprising HTML formatting code, text, fields and formulas.
Abstract: A software tool is provided for use with a computer system for simplifying the creation of Web sites. The tool comprises a plurality of pre-stored templates, comprising HTML formatting code, text, fields and formulas. The templates preferably correspond to different types of Web pages and other features commonly found on or available to Web sites. Each feature may have various options. To create a web site, a Web site creator (the person using the tool to create a web site) is prompted by the tool through a series of views stored in the tool to select the features and options desired for the Web site. Based on these selections, the tool prompts the web site creator to supply data to populate fields of the templates determined by the tool to correspond to the selected features and options. Based on the identified templates and supplied data, the tool generates the customized Web site without the web site creator writing any HTML or other programming code. Automated work flow is enabled based on information supplied by the site creator during creation of the web site.

Book
01 Oct 1998
TL;DR: This book focuses on learning and collaborating over the Web, with chapters covering the Web as enabler or disabler, collaborative learning in networked simulation environments, media integration, web-based student support systems, innovations in large-scale supported distance learning, promoting learner dialogues on the web, new scenarios in scholarly publishing, telepresence on the Internet, KMi planet, sharing programming knowledge over the web, accessing AI applications over the web, knowledge modelling, the world wide design lab, psychological agents and the new web media, and a tutor's assistant for electronic conferencing.
Abstract: Contents include: Can you get my hard nose in focus; the Web - enabler or disabler; collaborative learning in networked simulation environments; media integration; web-based student support systems; innovations in large-scale supported distance learning; promoting learner dialogues on the web; new scenarios in scholarly publishing; telepresence on the Internet; KMi planet; sharing programming knowledge over the web; accessing AI applications over the web; knowledge modelling; the world wide design lab; psychological agents and the new web media; a tutor's assistant for electronic conferencing.

Journal ArticleDOI
Rob Barrett, Paul P. Maglio
01 Apr 1998
TL;DR: This paper describes WBI, an implemented architecture for building intermediaries that has been used to construct many applications, including personal histories, password management, image distillation, collaborative filtering, targeted advertising, and Web advising.
Abstract: We propose a new approach to programming Web applications that increases the Web's computational power, the Web's flexibility, and Web programmer productivity. Whereas Web servers have traditionally been responsible for producing all content, intermediaries now provide new places for producing and manipulating Web data. We define intermediaries as computational elements that lie along the path of a Web transaction. In this paper, we describe the fundamental ideas behind intermediaries and provide a collection of example applications. We also describe WBI, an implemented architecture for building intermediaries that we have used to construct many applications, including personal histories, password management, image distillation, collaborative filtering, targeted advertising, and Web advising.
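
An intermediary in this sense can be pictured as a function on the request/response path that records or rewrites content before it reaches the browser; the tiny pipeline and the rules below are invented for illustration and are not WBI's plugin API.

# A toy request/response path with intermediaries that record and rewrite content.
# The pipeline, history list, and annotation rule are invented; this is not WBI's API.
HISTORY = []   # pages the user has fetched (a minimal "personal history")

def origin_server(request):
    """Stand-in for the Web server that produces the original page."""
    return "<html><body>Prices in USD. See http://example.com/help</body></html>"

def record_history(request, response):
    """Intermediary: remember which pages were requested."""
    HISTORY.append(request)
    return response

def highlight_help_link(request, response):
    """Intermediary: rewrite the page so the help URL stands out."""
    return response.replace("http://example.com/help",
                            "<b>http://example.com/help</b>")

INTERMEDIARIES = [record_history, highlight_help_link]

def fetch(request):
    response = origin_server(request)
    for intermediary in INTERMEDIARIES:
        response = intermediary(request, response)
    return response

print(fetch("GET /prices.html"))
print(HISTORY)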

Journal ArticleDOI
TL;DR: This paper proposes a comprehensive framework for commercial Web application development based on prior research in hypermedia and human-computer interfaces; using the framework should result in more effective commercial Web applications.
Abstract: The World Wide Web (WWW) or the Web has been recognized as a powerful new information exchange channel in recent years. Today, an ever‐increasing number of businesses have set up Web sites to publicize their products and services. However, careful planning and preparation is needed to achieve the intended purpose of this new information exchange channel. This paper proposes a comprehensive framework for effective commercial Web application development based on prior research in hypermedia and human‐computer interfaces. The framework regards Web application development as a special type of software development project. At the onset of the project, its social acceptability is investigated. Next, economic, technical, operational, and organizational viability are examined. For Web page design, both the functionality and usability of Web pages are thoroughly considered. The use of the framework should result in more effective commercial Web application development.

Book ChapterDOI
23 Mar 1998
TL;DR: This paper introduces a methodology for the development of applications for the WWW by using HDM-lite, a design notation supporting the specification of the structural, navigational, and presentation semantics of the application.
Abstract: This paper introduces a methodology for the development of applications for the WWW. Web applications are modelled at the conceptual level by using HDM-lite, a design notation supporting the specification of the structural, navigational, and presentation semantics of the application. Conceptual specifications are transformed into a logical-level representation, which enables the generation of the application pages from content data stored in a repository. The proposed approach is substantiated by the implementation of the Autoweb System, a set of software tools supporting the development process from conceptual modelling to the deployment of the application pages on the Web. Autoweb can be used both for developing new applications and for reverse engineering existing applications based on a relational representation of data.
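
The generation step, application pages produced from content data in a repository according to a conceptual specification, can be pictured with a tiny schema-driven page generator; the schema, records, and HTML layout below are invented and have nothing to do with HDM-lite's actual notation.

# Tiny schema-driven page generation from repository records (illustration only).
# The schema, records, and HTML layout are invented, not HDM-lite/Autoweb notation.
schema = {
    "Painting": {
        "attributes": ["title", "year"],
        "links": [("painted_by", "Artist")],   # navigational relationship
    },
    "Artist": {
        "attributes": ["name"],
        "links": [],
    },
}

repository = {
    ("Painting", 1): {"title": "Starry Night", "year": 1889, "painted_by": ("Artist", 10)},
    ("Artist", 10):  {"name": "Vincent van Gogh"},
}

def page_for(key):
    entity, _ = key
    record = repository[key]
    lines = ["<html><h1>%s %s</h1>" % (entity, key[1])]
    for attr in schema[entity]["attributes"]:
        lines.append("<p>%s: %s</p>" % (attr, record[attr]))
    for link_name, _ in schema[entity]["links"]:
        target_key = record[link_name]
        lines.append('<a href="%s_%s.html">%s</a>' % (target_key[0], target_key[1], link_name))
    lines.append("</html>")
    return "\n".join(lines)

for key in repository:
    print(page_for(key))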

Proceedings ArticleDOI
Atsushi Sugiura, Yoshiyuki Koseki
01 Nov 1998
TL;DR: This paper describes a programming-by-demonstration system, called Internet Scrapbook, which allows users with little programming skill to automate repetitive browsing tasks; in experiments on the accuracy of the data extraction algorithm, 96 percent of user-specified portions were correctly extracted.
Abstract: This paper describes a programming-by-demonstration system, called Internet Scrapbook, which allows users with little programming skill to automate repetitive browsing tasks. With the system, the user can create a personal page by clipping only the necessary portions from multiple Web pages. Once the personal page is created, the system updates it on behalf of the user by extracting the specified parts from the latest Web pages. The data extraction method in Scrapbook is based on the regularity in modifications of Web pages, i.e. that headings and positions of articles are rarely changed even though the articles themselves are modified. In the experiments to examine the accuracy of the data extraction algorithm, 96 percent of user-specified portions were correctly extracted.
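
The heading-anchored extraction idea can be sketched by locating a remembered heading in the latest version of a page and clipping the text up to the next heading; the sample pages and the regex-based clipping below are assumptions, not Scrapbook's actual algorithm.

# Clip the section under a remembered heading from the latest page (illustration).
# The sample HTML and regex-based clipping are assumptions, not Scrapbook's method.
import re

old_page = "<h2>Headlines</h2><p>Old story A</p><h2>Weather</h2><p>Sunny</p>"
new_page = "<h2>Headlines</h2><p>New story B</p><p>New story C</p><h2>Weather</h2><p>Rain</p>"

def clip(page, heading):
    # Capture everything after the heading up to the next <h2> (or end of page).
    pattern = re.compile(re.escape("<h2>%s</h2>" % heading) + r"(.*?)(?=<h2>|$)", re.S)
    m = pattern.search(page)
    return m.group(1) if m else None

remembered_heading = "Headlines"           # what the user originally clipped
print(clip(old_page, remembered_heading))  # <p>Old story A</p>
print(clip(new_page, remembered_heading))  # <p>New story B</p><p>New story C</p>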

Patent
19 Jun 1998
TL;DR: In this paper, a software tool is provided for use with a computer system for simplifying the creation of Web sites, which comprises a plurality of pre-stored templates, comprising HTML formatting code, text, fields and formulas.
Abstract: A software tool is provided for use with a computer system for simplifying the creation of Web sites. The tool comprises a plurality of pre-stored templates, comprising HTML formatting code, text, fields and formulas. The templates preferably correspond to different types of Web pages and other features commonly found on or available to Web sites. Each feature may have various options. To create a web site, a Web site creator (the person using the tool to create a web site) is prompted by the tool through a series of views stored in the tool to select the features and options desired for the Web site. Based on these selections, the tool prompts the web site creator to supply data to populate fields of the templates determined by the tool to correspond to the selected features and options. Based on the identified templates and supplied data, the tool generates the customized Web site without the web site creator writing any HTML or other programming code. Based on roles-based, multi-level security, various advantages for e-commerce applications are enabled.

Book
01 Apr 1998
TL;DR: This book discusses the evolution of Web site design, software engineering principles and the Web, and current practices in Web development.
Abstract: 1. Introduction: Evolution of Web Site Design. Web Design. Generations Don't Matter, Purpose Does. Initial Failure of Web RAD. Summary.
2. Software Engineering Principles and the Web. Web Sites as Software. Current Practices in Web Development. The Need for Process. Process Models. Beyond Process. Web Engineering Is Not Software Engineering. Summary.
3. The Medium of the Web. Networked Communication. Overview of a Web Session. Components of the Web Medium. Summary.
4. Problem Definition, Concept Exploration, and Feasibility Analysis. Understanding the Problem. Writing the Problem Definition. Concept Exploration and Feasibility-The Whirlpool Approach. Answering the Problem Definition: The Overall Purpose. Establishing a Measurement of Success. Logistics. Summary.
5. Requirements Analysis and Specification. Classifying the Site. Requirements Analysis. Specification. Estimation and Resource Requirements. Conclusion.
6. Designing the Web Site and System. What Does Web Design Include? Information Design. Web Site: Application versus Information. Program Design. Structured Design. Choosing a Design Approach. Navigation Design. Graphic Design. Network/Server Design. Summary.
7. Implementation: Building a Web Site. Programming Technologies. Client-Side Technologies. When to Use Client-Side Technologies. Server-Side Technologies. When to Use Server-Side Technologies. Content Technologies. Development Tools. Assembling the Beta Site. The Implementation Process. Developer Test. Summary.
8. Web Testing. Issues with Testing. Realistic Testing. Test Plans and Procedures. Functionality Testing. Content Testing. User Test: Usability and Beta Testing. The Result of Testing. Summary.
9. Post-Development: Promotion and Maintenance. Promotion and How People Find Sites and Information. Maintenance. Using Feedback to Grow or Modify a Web Site. Summary.
10. Beyond Web Site Engineering. Real Life: That Which Can't Be Planned For. Defending Web Projects. Politics. Web Sites Affect Organizations. Staying In Bounds. Summary.
Index.

Journal ArticleDOI
TL;DR: This paper describes the development of a first-generation Web-groupware system called TCBWorks that enables anyone with a Web browser to use groupware and discusses the design strategy, the overall design, and the technical architecture.
Abstract: The Internet and World Wide Web hold many possibilities for virtual communities. In this paper we describe the development of a first-generation Web-groupware system called TCBWorks that enables anyone with a Web browser to use groupware. We discuss the design strategy, the overall design, and the technical architecture, and contrast it with other forms of groupware. We then discuss the results of a series of interviews with users in four organizations and a survey of sixty-nine organizations to better understand how organizations are using Web groupware and the advantages and disadvantages they encountered.

Journal ArticleDOI
01 Apr 1998
TL;DR: In this paper, the authors address how to cope with the intrinsic limits of Web metadata, namely slow adoption and incomplete coverage, proposing a method that partially solves these problems and showing concrete evidence of its effectiveness.
Abstract: The World Wide Web currently has a huge amount of data, with practically no classification information, and this makes it extremely difficult to handle effectively. It has been realized recently that the only feasible way to radically improve the situation is to add to Web objects a metadata classification, to help search engines and Web-based digital libraries to properly classify and structure the information present in the WWW. However, having a few standard metadata sets is insufficient in order to have a fully classified World Wide Web. The first major problem is that it will take some time before a reasonable number of people start using metadata to provide a better Web classification. The second major problem is that no one can guarantee that a majority of the Web objects will be ever properly classified via metadata. In this paper, we address the problem of how to cope with such intrinsic limits of Web metadata, proposing a method that is able to partially solve the above two problems, and showing concrete evidence of its effectiveness. In addition, we examine the important problem of what is the required “critical mass” in the World Wide Web for metadata in order for it to be really useful.