
Showing papers on "Web standards" published in 1998


Book
01 Jan 1998
TL;DR: Information Architecture for the World Wide Web is a guide to designing Web sites and intranets that support growth, management, and ease of use, written for Webmasters, designers, and anyone else involved in building a Web site.
Abstract: From the Publisher: Some Web sites "work" and some don't. Good Web site consultants know that you can't just jump in and start writing HTML, the same way you can't build a house by just pouring a foundation and putting up some walls. You need to know who will be using the site, and what they'll be using it for. You need some idea of what you'd like to draw their attention to during their visit. Overall, you need a strong, cohesive vision for the site that makes it both distinctive and usable. Information Architecture for the World Wide Web is about applying the principles of architecture and library science to Web site design. Each Web site is like a public building, available for tourists and regulars alike to breeze through at their leisure. The job of the architect is to set up the framework for the site to make it comfortable and inviting for people to visit, relax in, and perhaps even return to someday. Most books on Web development concentrate either on the aesthetics or the mechanics of the site. This book is about the framework that holds the two together. With this book, you learn how to design Web sites and intranets that support growth, management, and ease of use. Special attention is given to: the process behind architecting a large, complex site; Web site hierarchy design and organization; and techniques for making your site easier to search. Information Architecture for the World Wide Web is for Webmasters, designers, and anyone else involved in building a Web site. It's for novice Web designers who, from the start, want to avoid the traps that result in poorly designed sites. It's for experienced Web designers who have already created sites but realize that something "is missing" from their sites and want to improve them. It's for programmers and administrators who are comfortable with HTML, CGI, and Java but want to understand how to organize their Web pages into a cohesive site. The authors are two of the principals of Argus Associates, a Web consulting firm.
At Argus, they have created information architectures for Web sites and intranets of some of the largest companies in the United States, including Chrysler Corporation, Barron's, and Dow Chemical.

1,297 citations


Journal ArticleDOI
TL;DR: In this paper, a theory-based, strategic framework to facilitate relationship building with publics through the World Wide Web is presented, and five strategies are provided that communication professionals can use to create dialogic relationships with Internet publics.

1,077 citations


Proceedings Article
01 Jul 1998
TL;DR: The goal of the research described here is to automatically create a computer understandable world wide knowledge base whose content mirrors that of the World Wide Web, and several machine learning algorithms for this task are described.
Abstract: The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable world wide knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs: an ontology defining the classes and relations of interest, and a set of training data consisting of labeled regions of hypertext representing instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system.
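The paper describes several machine learning algorithms for learning to extract instances of ontology classes from labeled hypertext. One simple approach in that spirit is a naive Bayes classifier over bag-of-words features; the sketch below is a minimal, self-contained version, and the class labels and training snippets are hypothetical, not taken from the paper's actual ontology.

```python
import math
from collections import Counter, defaultdict

def train(labeled_pages):
    """labeled_pages: list of (class_label, text) pairs."""
    class_counts = Counter()
    word_counts = defaultdict(Counter)
    vocab = set()
    for label, text in labeled_pages:
        class_counts[label] += 1
        for word in text.lower().split():
            word_counts[label][word] += 1
            vocab.add(word)
    return class_counts, word_counts, vocab

def classify(text, class_counts, word_counts, vocab):
    """Pick the class maximizing log prior + smoothed log likelihood."""
    total_docs = sum(class_counts.values())
    best_label, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / total_docs)
        # add-one (Laplace) smoothing over the shared vocabulary
        denom = sum(word_counts[label].values()) + len(vocab)
        for word in text.lower().split():
            score += math.log((word_counts[label][word] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training regions labeled with two ontology classes
training = [
    ("faculty", "professor teaching research publications courses"),
    ("faculty", "professor department research students"),
    ("course",  "syllabus lecture homework exam schedule"),
    ("course",  "lecture notes homework assignments exam"),
]
model = train(training)
print(classify("research publications by the professor", *model))  # → faculty
```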

766 citations


Journal ArticleDOI
01 Sep 1998
TL;DR: The primary goal of this survey is to classify the different tasks to which database concepts have been applied, and to emphasize the technical innovations that were required to do so.
Abstract: The popularity of the World-Wide Web (WWW) has made it a prime vehicle for disseminating information. The relevance of database concepts to the problems of managing and querying this information has led to a significant body of recent research addressing these problems. Even though the underlying challenge is the one that has traditionally been addressed by the database community (how to manage large volumes of data), the novel context of the WWW forces us to significantly extend previous techniques. The primary goal of this survey is to classify the different tasks to which database concepts have been applied, and to emphasize the technical innovations that were required to do so.

642 citations


Proceedings ArticleDOI
01 May 1998
TL;DR: A Web based information agent that assists the user in the process of performing a scientific literature search and can find papers which are similar to a given paper using word information and by analyzing common citations made by the papers.
Abstract: Research papers available on the World Wide Web (WWW or Web) are often poorly organized, often exist in forms opaque to search engines (e.g. Postscript), and increase in quantity daily. Significant amounts of time and effort are typically needed in order to find interesting and relevant publications on the Web. We have developed a Web based information agent that assists the user in the process of performing a scientific literature search. Given a set of keywords, the agent uses Web search engines and heuristics to locate and download papers. The papers are parsed in order to extract information features such as the abstract and individually identified citations. The agent's Web interface can be used to find relevant papers in the database using keyword searches, or by navigating the links between papers formed by the citations. Links to both citing and cited publications can be followed. In addition to simple browsing and keyword searches, the agent can find papers which are similar to a given paper using word information and by analyzing common citations made by the papers.
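One way to realize "similarity by common citations," as the abstract describes, is bibliographic coupling: two papers are similar to the extent that their citation lists overlap. The sketch below uses Jaccard overlap; the citation keys are hypothetical, and this is an illustration of the idea rather than the agent's actual algorithm.

```python
def citation_similarity(cites_a, cites_b):
    """Bibliographic coupling: Jaccard overlap of two papers' citation lists."""
    a, b = set(cites_a), set(cites_b)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

# Hypothetical citation keys for two papers
paper1 = ["salton1983", "rijsbergen1979", "deerwester1990"]
paper2 = ["salton1983", "deerwester1990", "dumais1991"]
print(citation_similarity(paper1, paper2))  # → 0.5
```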

357 citations


01 Jan 1998
TL;DR: A short review and a state-of-the-art report on Web-based adaptive educational systems are provided in this paper, where the systems are analyzed according to applied adaptation technologies, and a comparison of the systems is made.
Abstract: This paper provides a short review and a state-of-the-art report on Web-based adaptive educational systems. The systems are analyzed according to applied adaptation technologies.

308 citations


Proceedings ArticleDOI
23 Feb 1998
TL;DR: The WebOQL system is presented, which supports a general class of data restructuring operations in the context of the Web and synthesizes ideas from query languages for the Web, for semistructured data and for Website restructuring.
Abstract: The widespread use of the Web has originated several new data management problems, such as extracting data from Web pages and making databases accessible from Web browsers, and has renewed the interest in problems that had appeared before in other contexts, such as querying graphs, semistructured data and structured documents. Several systems and languages have been proposed for solving each of these Web data management problems, but none of these systems addresses all the problems from a unified perspective. Many of these problems essentially amount to data restructuring: we have information represented according to a certain structure and we want to construct another representation of (part of) it using a different structure. We present the WebOQL system, which supports a general class of data restructuring operations in the context of the Web. WebOQL synthesizes ideas from query languages for the Web, for semistructured data and for Website restructuring.

296 citations



Journal ArticleDOI
TL;DR: This work attempts to descriptively document the types and nature of marketing information on commercial home‐pages, with a view to identifying the major objectives of the contemporary commercial Web sites that predominate the Web.
Abstract: There are two main objectives of the paper. First, in a systematic and statistically rigorous manner, we attempt to descriptively document the types and nature of marketing information on commercial home‐pages, with a view to identifying the major objectives of the contemporary commercial Web sites that predominate the Web. Using Resnik and Stern's "information content" paradigm, we evaluate the informativeness of commercial home pages. Second, we attempt to empirically examine various important factors of commercial home‐pages that lead to increased visits, or hit‐rates. The identification of hit‐rate determinants is likely to be of great value, both to Web page designers and to the many small and large firms seeking to establish their presence on the Web.

216 citations


Journal ArticleDOI
TL;DR: A range of Web-based instructional options is outlined, general guidelines for designing Web-delivered instruction are provided, and two case studies are discussed.
Abstract: There are many methods and techniques for delivering instruction through the Web. Academic and industrial courses (taught in a traditional classroom) can be enhanced with links to resources on the Web, or the courses can be delivered virtually—completely via the Web. Instructional content can be delivered through email “correspondence-type” courses, via Web pages written in HTML, or with very complex interactions developed with Java, JavaScript, Shockwave, ActiveX, or other tools. In this article, a range of Web-based instructional options is outlined, general guidelines for designing Web-delivered instruction are provided, and two case studies are discussed. In addition, links to example Web-based training (WBT) sites are included. This article was originally published on the ITForum, an international listserv that is subscribed to by over 1000 professors, graduate students, and practitioners in Instructional Technology. General reactions to the article (that were posted on the ITForum), and responses from the author are included as postscripts to this article.

208 citations


Journal ArticleDOI
Ora Lassila1
TL;DR: This paper considers how the Resource Description Framework, with its focus on machine-understandable semantics, has the potential for saving time and yielding more accurate search results.
Abstract: The sheer volume of information can make searching the Web frustrating. The paper considers how the Resource Description Framework, with its focus on machine-understandable semantics, has the potential for saving time and yielding more accurate search results. RDF, a foundation for processing metadata, provides interoperability between applications that exchange machine-understandable information on the Web.
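RDF's underlying data model is a set of subject-predicate-object statements (triples). The sketch below is a minimal in-memory triple store illustrating that model, not RDF's actual XML syntax; the resource URL is hypothetical, while the `dc:creator` and `dc:subject` property names follow the Dublin Core convention often used with RDF.

```python
class TripleStore:
    """A minimal store of (subject, predicate, object) statements."""

    def __init__(self):
        self.triples = []

    def add(self, subject, predicate, obj):
        self.triples.append((subject, predicate, obj))

    def query(self, subject=None, predicate=None, obj=None):
        """Return triples matching a pattern; None acts as a wildcard."""
        return [
            t for t in self.triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)
        ]

store = TripleStore()
store.add("http://example.org/paper", "dc:creator", "Ora Lassila")
store.add("http://example.org/paper", "dc:subject", "RDF")

# A machine can now answer "who created this resource?" directly.
print(store.query(predicate="dc:creator"))
```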

Book
01 Oct 1998
TL;DR: In this article, the authors survey topics including the Web as enabler or disabler, collaborative learning in networked simulation environments, media integration, Web-based student support systems, large-scale supported distance learning, new scenarios in scholarly publishing, knowledge modelling, psychological agents and the new Web media, and a tutor's assistant for electronic conferencing.
Abstract: Contents include: Can you get my hard nose in focus?; the Web - enabler or disabler; collaborative learning in networked simulation environments; media integration; Web-based student support systems; innovations in large-scale supported distance learning; promoting learner dialogues on the Web; new scenarios in scholarly publishing; telepresence on the Internet; KMi Planet; sharing programming knowledge over the Web; accessing AI applications over the Web; knowledge modelling; the world wide design lab; psychological agents and the new Web media; a tutor's assistant for electronic conferencing.

Proceedings Article
18 May 1998
TL;DR: Ontobroker consists of a number of languages and tools, based on the use of ontologies, that enhance query access and inference services on the WWW without requiring changes to the semiformal nature of web documents.
Abstract: The World Wide Web (WWW) is currently one of the most important electronic information sources. However, its query interfaces and the provided reasoning services are rather limited. Ontobroker consists of a number of languages and tools that enhance query access and inference service of the WWW. The technique is based on the use of ontologies. Ontologies are applied to annotate web documents and to provide query access and inference service that deal with the semantics of the presented information. In consequence, intelligent brokering services for web documents can be achieved without requiring changes to the semiformal nature of web documents.

Patent
11 Feb 1998
TL;DR: In this article, a system is presented for automatically creating databases containing industry, service, product and subject classification data, contact data, geographic location data (CCG-data) and links to web pages from HTML, XML or SGML encoded web pages posted on computer networks such as the Internet or Intranets.
Abstract: A system for automatically creating databases containing industry, service, product and subject classification data, contact data, geographic location data (CCG-data) and links to web pages from HTML, XML or SGML encoded web pages posted on computer networks such as the Internet or Intranets. The web pages containing HTML, XML or SGML encoded CCG-data, database update controls and web browser display controls are created and modified by using simple text editors, HTML, XML or SGML editors or purpose built editors. The CCG databases may be searched for references (URLs) to web pages by use of enquiries which reference one or more of the items of the CCG-data. Alternatively, enquiries referencing the CCG-data in the databases may supply contact data without web page references. Data duplication and coordination is reduced by including in the web page CCG-data display controls which are used by web browsers to format for display the same data that is used to automatically update the databases.
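Classification and contact data embedded in a page's markup can be harvested mechanically. The sketch below is a minimal illustration using Python's standard `html.parser` to collect `<meta name="..." content="...">` pairs; the meta names and the sample page are hypothetical, not the patent's actual CCG-data encoding.

```python
from html.parser import HTMLParser

class MetaExtractor(HTMLParser):
    """Collects <meta name="..." content="..."> pairs from an HTML page."""

    def __init__(self):
        super().__init__()
        self.meta = {}

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)  # attrs arrive as (name, value) tuples
            if "name" in d and "content" in d:
                self.meta[d["name"]] = d["content"]

# Hypothetical page carrying classification and geographic data
page = '''<html><head>
<meta name="classification" content="Industrial Pumps">
<meta name="geo.region" content="US-MI">
</head><body>...</body></html>'''

parser = MetaExtractor()
parser.feed(page)
print(parser.meta)  # → {'classification': 'Industrial Pumps', 'geo.region': 'US-MI'}
```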

Book ChapterDOI
27 Mar 1998
TL;DR: It is considered how to efficiently compute the overlap between all pairs of web documents, which can be used to improve web crawlers, web archivers, and the presentation of search results, among other applications.
Abstract: We consider how to efficiently compute the overlap between all pairs of web documents. This information can be used to improve web crawlers and web archivers, and in the presentation of search results, among other applications. We report statistics on how common replication is on the web, and on the cost of computing the above information for a relatively large subset of the web – about 24 million web pages, which corresponds to about 150 Gigabytes of textual information.
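A standard way to measure overlap between documents is to compare their sets of word shingles (contiguous k-word windows) with Jaccard resemblance. The sketch below is a minimal version of that idea under the assumption of whitespace tokenization; the paper's actual method and parameters may differ.

```python
def shingles(text, k=4):
    """The set of all k-word windows (shingles) in a document."""
    words = text.split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def overlap(doc_a, doc_b, k=4):
    """Jaccard resemblance: fraction of shingles the documents share."""
    a, b = shingles(doc_a, k), shingles(doc_b, k)
    if not (a | b):
        return 0.0
    return len(a & b) / len(a | b)

# A near-duplicate pair, as might arise from a partially mirrored page
original = "the quick brown fox jumps over the lazy dog near the river bank"
mirror   = "the quick brown fox jumps over the lazy dog near the old mill"
print(round(overlap(original, mirror), 2))  # → 0.67
```

At Web scale, the pairwise comparison is made tractable by hashing each shingle and keeping only a small sample (a sketch) per document, so candidate pairs can be found by sorting rather than comparing all pairs directly.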

Journal ArticleDOI
Rob Barrett1, Paul P. Maglio1
01 Apr 1998
TL;DR: This paper describes WBI, an implemented architecture for building intermediaries that has been used to construct many applications, including personal histories, password management, image distillation, collaborative filtering, targeted advertising, and Web advising.
Abstract: We propose a new approach to programming Web applications that increases the Web's computational power, the Web's flexibility, and Web programmer productivity. Whereas Web servers have traditionally been responsible for producing all content, intermediaries now provide new places for producing and manipulating Web data. We define intermediaries as computational elements that lie along the path of a Web transaction. In this paper, we describe the fundamental ideas behind intermediaries and provide a collection of example applications. We also describe WBI, an implemented architecture for building intermediaries that we have used to construct many applications, including personal histories, password management, image distillation, collaborative filtering, targeted advertising, and Web advising.
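The paper's central idea is that computation can live along the path of a Web transaction, not only at the server. The sketch below models intermediaries as composable functions that wrap a handler and transform requests or responses; it illustrates the concept only, not WBI's actual plugin API, and the intermediary names and URL are hypothetical.

```python
def origin_server(request):
    """Stand-in for a Web server producing the original content."""
    return {"body": "<html>hello</html>", "headers": {}}

def add_history(handler):
    """Hypothetical personal-history intermediary: records visited URLs."""
    history = []
    def wrapped(request):
        history.append(request["url"])
        response = handler(request)
        response["headers"]["X-History-Length"] = str(len(history))
        return response
    return wrapped

def annotate(handler):
    """Hypothetical annotation intermediary: rewrites the page body."""
    def wrapped(request):
        response = handler(request)
        response["body"] = response["body"].replace("hello", "hello [annotated]")
        return response
    return wrapped

# Compose the transaction path: browser -> annotate -> add_history -> server
pipeline = annotate(add_history(origin_server))
response = pipeline({"url": "http://example.org/"})
print(response["body"])  # → <html>hello [annotated]</html>
```

Because each intermediary only sees a request going down and a response coming back, applications like image distillation or targeted advertising can be layered onto existing servers without modifying them.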

Journal ArticleDOI
TL;DR: This paper considers how the World Wide Web Distributed Authoring and Versioning (WebDAV) working group is extending HTTP 1.1 to provide a standards-based infrastructure for asynchronous collaborative authoring on the Web.
Abstract: The paper considers how the World Wide Web Distributed Authoring and Versioning (WebDAV) working group is extending HTTP 1.1 to provide a standards-based infrastructure for asynchronous collaborative authoring on the Web. The WebDAV extensions support the use of HTTP for interoperable publishing of a variety of content, providing a common interface to many types of repositories and making the Web analogous to a large-grain, network-accessible file system.
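Among the methods WebDAV adds to HTTP 1.1 is PROPFIND, which asks a server for the properties of a resource. The sketch below only formats such a request as text rather than sending it (a real exchange needs a WebDAV-enabled server); the host and path are hypothetical, while the `DAV:` namespace and `allprop` element follow the WebDAV specification.

```python
def propfind_request(host, path, depth=0):
    """Format a WebDAV PROPFIND request asking for all properties."""
    body = (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<D:propfind xmlns:D="DAV:">\n'
        '  <D:allprop/>\n'
        '</D:propfind>'
    )
    headers = (
        f"PROPFIND {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Depth: {depth}\r\n"          # 0 = this resource only
        f"Content-Type: application/xml\r\n"
        f"Content-Length: {len(body)}\r\n"
        "\r\n"
    )
    return headers + body

request = propfind_request("dav.example.org", "/docs/report.html")
print(request.splitlines()[0])  # → PROPFIND /docs/report.html HTTP/1.1
```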

Journal ArticleDOI
TL;DR: A comprehensive framework for commercial Web application development, based on prior research in hypermedia and human‐computer interfaces, is proposed; its use should result in more effective commercial Web applications.
Abstract: The World Wide Web (WWW) or the Web has been recognized as a powerful new information exchange channel in recent years. Today, an ever‐increasing number of businesses have set up Web sites to publicize their products and services. However, careful planning and preparation is needed to achieve the intended purpose of this new information exchange channel. This paper proposes a comprehensive framework for effective commercial Web application development based on prior research in hypermedia and human‐computer interfaces. The framework regards Web application development as a special type of software development project. At the onset of the project, its social acceptability is investigated. Next, economic, technical, operational, and organizational viability are examined. For Web page design, both the functionality and usability of Web pages are thoroughly considered. The use of the framework should result in more effective commercial Web application development.

Book
04 Sep 1998
TL;DR: This book walks through the Web-based training process, from assessing learner needs to designing lessons and evaluating programs, with appendices covering development tools, training organizations, discussion forums, a selected bibliography, and netiquette.
Abstract: Why deliver instruction on the Web?; principles of adult education; the Web-based training process; assessing learner needs; selecting the most appropriate Web-based training method; designing lessons; asynchronous interactions; creating blueprints; evaluating programs; ready, set, go. Appendices: tools for developing Web-based training; training organizations; listservs, threaded discussions, notes conferences, and forums; selected bibliography; matrix of Web-based training types; netiquette.

Book
01 Apr 1998
TL;DR: This book discusses the evolution of Web site design, software engineering principles and the Web, and current practices in Web development.
Abstract: 1. Introduction: Evolution of Web Site Design. Web Design. Generations Don't Matter, Purpose Does. Initial Failure of Web RAD. Summary. 2. Software Engineering Principles and the Web. Web Sites as Software. Current Practices in Web Development. The Need for Process. Process Models. Beyond Process. Web Engineering Is Not Software Engineering. Summary. 3. The Medium of the Web. Networked Communication. Overview of a Web Session. Components of the Web Medium. Summary. 4. Problem Definition, Concept Exploration, and Feasibility Analysis. Understanding the Problem. Writing the Problem Definition. Concept Exploration and Feasibility-The Whirlpool Approach. Answering the Problem Definition: The Overall Purpose. Establishing a Measurement of Success. Logistics. Summary. 5. Requirements Analysis and Specification. Classifying the Site. Requirements Analysis. Specification. Estimation and Resource Requirements. Conclusion. 6. Designing the Web Site and System. What Does Web Design Include? Information Design. Web Site: Application versus Information. Program Design. Structured Design. Choosing a Design Approach. Navigation Design. Graphic Design. Network/Server Design. Summary. 7. Implementation: Building a Web Site. Programming Technologies. Client-Side Technologies. When to Use Client-Side Technologies. Server-Side Technologies. When to Use Server-Side Technologies. Content Technologies. Development Tools. Assembling the Beta Site. The Implementation Process. Developer Test. Summary. 8. Web Testing. Issues with Testing. Realistic Testing. Test Plans and Procedures. Functionality Testing. Content Testing. User Test: Usability and Beta Testing. The Result of Testing. Summary. 9. Post-Development: Promotion and Maintenance. Promotion and How People Find Sites and Information. Maintenance. Using Feedback to Grow or Modify a Web Site. Summary. 10. Beyond Web Site Engineering. Real Life: That Which Can't Be Planned For. Defending Web Projects. Politics. 
Web Sites Affect Organizations. Staying In Bounds. Summary. Index.

Book
01 Mar 1998
TL;DR: In this article, Allen, Kania, and Yaeckel present case studies of some of the most successful one-to-one Web marketing initiatives and provide valuable lessons, tips, and guidelines on how to make the best technology selections for budget and goals.
Abstract: From the Publisher: Experts Allen, Kania, and Yaeckel get you up to speed on all the hot new Web technologies that marketers are using to forge lasting relationships, one customer at a time. With the help of case studies of some of the most successful one-to-one Web marketing initiatives, they show you exactly how those technologies are being employed to customize offerings and create dialogs with customers. They provide valuable lessons, tips, and guidelines on how to make the best technology selections for your budget and goals, and plan a successful one-to-one Web marketing initiative; build relationships with customers using personalization, push, interactivity, telephone and A/V conferencing, e-mail, virtual community, and other cutting-edge Web technologies; and integrate one-to-one Web marketing strategies with other processes and systems, such as customer service and support and databases.

Journal ArticleDOI
01 Apr 1998
TL;DR: In this paper, the authors address the problem of how to cope with such intrinsic limits of Web metadata, proposing a method that partially solves the above two problems and showing concrete evidence of its effectiveness.
Abstract: The World Wide Web currently has a huge amount of data, with practically no classification information, and this makes it extremely difficult to handle effectively. It has been realized recently that the only feasible way to radically improve the situation is to add to Web objects a metadata classification, to help search engines and Web-based digital libraries to properly classify and structure the information present in the WWW. However, having a few standard metadata sets is insufficient in order to have a fully classified World Wide Web. The first major problem is that it will take some time before a reasonable number of people start using metadata to provide a better Web classification. The second major problem is that no one can guarantee that a majority of the Web objects will ever be properly classified via metadata. In this paper, we address the problem of how to cope with such intrinsic limits of Web metadata, proposing a method that is able to partially solve the above two problems, and showing concrete evidence of its effectiveness. In addition, we examine the important problem of what is the required "critical mass" in the World Wide Web for metadata in order for it to be really useful.

01 Jan 1998
TL;DR: This article discusses the bottlenecks of the approach, which stem from the fact that the applicability of Ontobroker requires two time-consuming activities: developing shared ontologies that reflect the consensus of a group of web users, and annotating web documents with additional information.
Abstract: The World Wide Web (WWW) is currently one of the most important electronic information sources. However, its query interfaces and the provided reasoning services are rather limited. Ontobroker consists of a number of languages and tools that enhance query access and inference service in the WWW. It provides languages to annotate web documents with ontological information, to represent ontologies, and to formulate queries. The tool set of Ontobroker allows us to access information and knowledge from the web and to infer new knowledge with an inference engine based on techniques from logic programming. This article provides several examples that illustrate these languages and tools and the kind of service that is provided. We also discuss the bottlenecks of our approach that stem from the fact that the applicability of Ontobroker requires two time-consuming activities: (1) developing shared ontologies that reflect the consensus of a group of web users and (2) annotating web documents with additional information.

Journal Article
TL;DR: This study establishes a research-based set of guidelines for the design of World Wide Web pages, providing an analysis of what guidelines currently exist and comparing selected parts of these guidelines with a sample of existing Web pages to determine whether or not Web page designers are currently following the published guidelines.
Abstract: The Internet and its World Wide Web (WWW) are rapidly becoming a way of life for many in business, industry, and education. Many libraries are placing a Web home page on the Internet. However, who knows what a successful Web page should look like? How do we define success in this new context? This study establishes a research-based set of guidelines for design of World Wide Web pages. It provides an analysis of what guidelines currently exist and compares selected parts of these guidelines with a sample of existing Web pages. The Internet and its World Wide Web (WWW) are rapidly becoming a way of life for many in business, industry, and education. Numerous newspaper advertisements as well as television commercials and programs list the companies' WWW address. WWW home pages are readily available to anyone with a computer, a modem, and a way to connect to the Web. The number of hosts worldwide on the Web increased from 1.3 million in January 1993 to over 12.8 million in July 1996[1,2] and appears to be doubling in size approximately every twelve to fifteen months.[3] During the same time period, in Europe alone, the number of hosts increased from 303,828 to over 3 million.[4] Virtually anyone, anywhere, can place a Web page on the Internet. Libraries and other information agencies have quickly joined the ranks of companies and agencies creating Web pages. An obvious reason for this interest in placing Web pages on the Internet is to communicate information about the company or agency providing the pages. This is done through the use of visual elements, such as print or photos. Pages that do not communicate the desired information because of poor page design fail in their purpose. An agency which places a Web page on the Internet does so with the assumption that the user will comprehend the content of the page and that he or she will continue through the provided links to other pages in the Web site. 
The design of the page can affect whether or not the user goes beyond the first page. In addition, the design of the page sends a message to the user about the organization. The fact that anyone may place a Web page on the Internet provides ample reason for the enumeration of some parameters based on current and relevant research. By following sound, research-based guidelines, a library or other organization can be assured that it is represented on the Web in a complimentary manner and that the pages to which it provides organized access are useful to their users. Unfortunately, little research has been done, probably due to the newness of the Web. Research has certainly been conducted on the design of television and computer screens. However, in virtually all cases the purpose of the screen being examined is very different from that of a person, agency, or company placing a home page on the Web. In many cases the research relates to screen design for education or training, but also may be for noninstructional situations (e.g., air traffic monitoring, airline arrival/departure schedules, pilot/driver navigation systems, online job aids). Looking at existing Web sites one can find pages with many colors in various combinations, an extraordinary number of graphics having little or nothing to do with the content of the page, typefaces of every conceivable style, and layouts that would make even the most novice of graphic artists scream in horror. This exploratory study is a preliminary step to establishing a research-based set of guidelines for design of World Wide Web pages. It provides an analysis of what guidelines currently exist and compares selected parts of these guidelines with a sample of existing Web pages to determine whether or not Web page designers are currently following the published guidelines. This synthesis and comparison has implications for universities, businesses, and other agencies around the world.
Research Questions: For this study, the following research questions were posed: 1. …

Journal ArticleDOI
TL;DR: This paper reviews how the Internet has progressed from delivering simple static, text-based material to sophisticated interactive Web sites based on CGI technology, and illustrates how patients can appreciate the 3-D structure of bones and organs using virtual reality in a VRML Web environment.

Proceedings ArticleDOI
16 Apr 1998
TL;DR: An object-oriented modeling framework is defined, called WOOM, which provides constructs and abstractions for a high-level implementation of a Web site and clearly separates the data that are presented through the site from the context in which the user accesses such data.
Abstract: The World Wide Web (WWW) has become "the" global infrastructure for delivering information and services. The demands and expectations of information providers and consumers are pushing WWW technology towards higher-level quality of presentation, including active contents and improved usability of the hypermedia distributed infrastructure. This technological evolution, however, is not supported by adequate Web design methodologies. Web site development is usually carried out without following a well-defined process and lacks suitable tool support. In addition, Web technologies are quite powerful but rather low-level and their semantics is often left largely unspecified. As a consequence, understanding the conceptual structure of a complex Web site and managing its evolution are complex and difficult tasks. The approach we advocate here is based on sound software engineering principles. The Web site development process goes through requirements analysis, design, and implementation in a high-level language. We define an object-oriented modeling framework, called WOOM, which provides constructs and abstractions for a high-level implementation of a Web site. An important feature of WOOM is that it clearly separates the data that are presented through the site from the context in which the user accesses such data. This feature not only enhances separation of concerns in the design stage, but also favors its subsequent evolution. The paper provides a view of the approach and of its current prototype implementation.

Proceedings ArticleDOI
01 Jun 1998
TL;DR: The technology and tools are presented for rapidly constructing information mediators that extract, query, and integrate data from web sources; the resulting system, called Ariadne, makes it feasible to rapidly build information mediators that access existing web sources.
Abstract: The Web is based on a browsing paradigm that makes it difficult to retrieve and integrate data from multiple sites. Today, the only way to achieve this integration is by building specialized applications, which are time-consuming to develop and difficult to maintain. We are addressing this problem by creating the technology and tools for rapidly constructing information mediators that extract, query, and integrate data from web sources. The resulting system, called Ariadne, makes it feasible to rapidly build information mediators that access existing web sources.

Journal ArticleDOI
TL;DR: The World Wide Web is a tool that can be used in many ways for basic statistics education, and educators can now include interactive demonstrations in the form of J...
Abstract: The World Wide Web (WWW) is a tool that can be used in many ways for basic statistics education. Using the latest WWW technology, educators can now include interactive demonstrations in the form of J...

Journal ArticleDOI
TL;DR: Using the well-known industrial marketing concepts of purchasing decision processes and hierarchy of effects models, this paper introduces a conceptual framework for measuring the efficiency of a Web site.
Abstract: This paper discusses the role of the World Wide Web as a communication tool for industrial marketers and its position in the business-to-business promotional mix. Using the well-known industrial marketing concepts of purchasing decision processes and hierarchy of effects models, it introduces a conceptual framework for measuring the efficiency of a Web site. Examples are given of both large and small industrial marketers who are currently using their Web sites to achieve these effects. Efficiency indexes are defined for five Web communication activities, and an overall measure of Web site efficiency is presented.

Journal Article
TL;DR: In this poster, the implications of Trust Management for future Web applications are summarized, helping developers and others ask "why" trust is granted.
Abstract: As once-proprietary mission-specific information systems migrate onto the Web, traditional security analysis cannot sufficiently protect each subsystem atomically. The Web encourages open, decentralized systems that span multiple administrative domains. Trust Management is an emerging framework for decentralizing security decisions that helps developers and others in asking "why" trust is granted rather than immediately focusing on "how" cryptography can enforce it. In this poster, we summarize the implications of Trust Management to future Web applications.