
Showing papers on "Semantic Web published in 1998"


Proceedings ArticleDOI
01 May 1998
TL;DR: This investigation shows that although the process by which users of the Web create pages and links is very difficult to understand at a “local” level, it results in a much greater degree of orderly high-level structure than has typically been assumed.
Abstract: The World Wide Web grows through a decentralized, almost anarchic process, and this has resulted in a large hyperlinked corpus without the kind of logical organization that can be built into more traditionally-created hypermedia. To extract meaningful structure under such circumstances, we develop a notion of hyperlinked communities on the WWW through an analysis of the link topology. By invoking a simple, mathematically clean method for defining and exposing the structure of these communities, we are able to derive a number of themes: the communities can be viewed as containing a core of central, "authoritative" pages linked together by "hub" pages, and they exhibit a natural type of hierarchical topic generalization that can be inferred directly from the pattern of linkage. Our investigation shows that although the process by which users of the Web create pages and links is very difficult to understand at a "local" level, it results in a much greater degree of orderly high-level structure than has typically been assumed.
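
The "authorities linked together by hubs" structure described above is the idea behind HITS-style link analysis. As a rough sketch of that idea (not the authors' exact formulation; the link graph and page names below are invented), the mutually reinforcing hub/authority iteration looks like this:

```python
# Minimal hub/authority iteration over a toy Web link graph.
# The graph and page names below are invented for illustration.
links = {
    "guideA": ["site1", "site2", "site3"],
    "guideB": ["site1", "site3"],
    "site1": [],
    "site2": ["site3"],
    "site3": [],
}

pages = list(links)
hub = {p: 1.0 for p in pages}
auth = {p: 1.0 for p in pages}

for _ in range(50):  # iterate until the scores settle
    # authority score: sum of hub scores of pages linking in
    auth = {p: sum(hub[q] for q in pages if p in links[q]) for p in pages}
    # hub score: sum of authority scores of pages linked to
    hub = {p: sum(auth[t] for t in links[p]) for p in pages}
    # normalize so the scores do not blow up
    for d in (auth, hub):
        norm = sum(v * v for v in d.values()) ** 0.5 or 1.0
        for k in d:
            d[k] /= norm

print(sorted(auth.items(), key=lambda kv: -kv[1]))  # central "authorities"
print(sorted(hub.items(), key=lambda kv: -kv[1]))   # link-collection "hubs"
```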

905 citations


Proceedings Article
01 Jul 1998
TL;DR: The goal of the research described here is to automatically create a computer understandable world wide knowledge base whose content mirrors that of the World Wide Web, and several machine learning algorithms for this task are described.
Abstract: The World Wide Web is a vast source of information accessible to computers, but understandable only to humans. The goal of the research described here is to automatically create a computer understandable world wide knowledge base whose content mirrors that of the World Wide Web. Such a knowledge base would enable much more effective retrieval of Web information, and promote new uses of the Web to support knowledge-based inference and problem solving. Our approach is to develop a trainable information extraction system that takes two inputs: an ontology defining the classes and relations of interest, and a set of training data consisting of labeled regions of hypertext representing instances of these classes and relations. Given these inputs, the system learns to extract information from other pages and hyperlinks on the Web. This paper describes our general approach, several machine learning algorithms for this task, and promising initial results with a prototype system.
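
One simple instance of the kind of trainable classifier this line of work employed is a Naive Bayes model over page words. The sketch below is illustrative only; the class names and training snippets are invented, not the paper's ontology or data:

```python
import math
from collections import Counter

# Toy labeled training data: ontology class -> example page texts.
# All snippets are invented for illustration.
training = {
    "student": ["phd student home page advisor courses",
                "graduate student research assistant"],
    "faculty": ["professor teaching research publications",
                "associate professor department faculty"],
    "project": ["research project funded software system",
                "project members publications demo"],
}

counts = {c: Counter(w for t in texts for w in t.split())
          for c, texts in training.items()}
vocab = {w for cnt in counts.values() for w in cnt}

def classify(text):
    words = text.split()
    best, best_lp = None, -math.inf
    for c, cnt in counts.items():
        total = sum(cnt.values())
        # log P(class) + sum of log P(word | class), Laplace-smoothed
        lp = math.log(1.0 / len(counts))
        lp += sum(math.log((cnt[w] + 1) / (total + len(vocab))) for w in words)
        if lp > best_lp:
            best, best_lp = c, lp
    return best

print(classify("professor publications and teaching"))  # -> faculty
```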

766 citations


Journal ArticleDOI
01 Sep 1998
TL;DR: The primary goal of this survey is to classify the different tasks to which database concepts have been applied, and to emphasize the technical innovations that were required to do so.
Abstract: The popularity of the World-Wide Web (WWW) has made it a prime vehicle for disseminating information. The relevance of database concepts to the problems of managing and querying this information has led to a significant body of recent research addressing these problems. Even though the underlying challenge is one that has traditionally been addressed by the database community (how to manage large volumes of data), the novel context of the WWW forces us to significantly extend previous techniques. The primary goal of this survey is to classify the different tasks to which database concepts have been applied, and to emphasize the technical innovations that were required to do so.

642 citations




Proceedings ArticleDOI
01 Jan 1998
TL;DR: New techniques for Web Ecology and Evolution Visualization (WEEV) are presented, intended to aid authors and webmasters with the production and organization of content, assist Web surfers in making sense of information, and help researchers understand the Web.
Abstract: Several visualizations have emerged which attempt to visualize all or part of the World Wide Web. Those visualizations, however, fail to present the dynamically changing ecology of users and documents on the Web. We present new techniques for Web Ecology and Evolution Visualization (WEEV). Disk Trees represent a discrete time slice of the Web ecology. A collection of Disk Trees forms a Time Tube, representing the evolution of the Web over longer periods of time. These visualizations are intended to aid authors and webmasters with the production and organization of content, assist Web surfers in making sense of information, and help researchers understand the Web.

218 citations


Journal ArticleDOI
Ora Lassila
TL;DR: This paper considers how the Resource Description Framework, with its focus on machine-understandable semantics, has the potential for saving time and yielding more accurate search results.
Abstract: The sheer volume of information can make searching the Web frustrating. The paper considers how the Resource Description Framework, with its focus on machine-understandable semantics, has the potential for saving time and yielding more accurate search results. RDF, a foundation for processing metadata, provides interoperability between applications that exchange machine understandable information on the Web.
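
As a flavor of what such machine-understandable metadata looks like and how software can consume it, here is a minimal sketch using Python's standard library; the document URL, Dublin Core-style properties, and values are illustrative, not taken from the paper:

```python
import xml.etree.ElementTree as ET

# A small RDF/XML description of a Web page (all values invented).
rdf_doc = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:dc="http://purl.org/dc/elements/1.1/">
  <rdf:Description rdf:about="http://example.org/report">
    <dc:title>Annual Report</dc:title>
    <dc:creator>Jane Example</dc:creator>
    <dc:date>1998-09-01</dc:date>
  </rdf:Description>
</rdf:RDF>"""

RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

root = ET.fromstring(rdf_doc)
for desc in root.iter(RDF + "Description"):
    subject = desc.get(RDF + "about")
    # each child element is a (property, value) pair about the subject,
    # which is exactly what a metadata-aware search tool can index
    for prop in desc:
        print(subject, prop.tag.split("}")[1], prop.text)
```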

192 citations


Proceedings Article
18 May 1998
TL;DR: Ontobroker consists of a number of languages and tools, based on the use of ontologies, that enhance query access and inference services on the WWW; intelligent brokering can be achieved without requiring changes to the semiformal nature of web documents.
Abstract: The World Wide Web (WWW) is currently one of the most important electronic information sources. However, its query interfaces and the provided reasoning services are rather limited. Ontobroker consists of a number of languages and tools that enhance query access and inference services of the WWW. The technique is based on the use of ontologies. Ontologies are applied to annotate web documents and to provide query access and inference services that deal with the semantics of the presented information. In consequence, intelligent brokering services for web documents can be achieved without requiring changes to the semiformal nature of web documents.

145 citations


01 Jan 1998
TL;DR: This article discusses the bottlenecks of the approach, which stem from the fact that the applicability of Ontobroker requires two time-consuming activities: developing shared ontologies that reflect the consensus of a group of web users, and annotating web documents with additional information.
Abstract: The World Wide Web (WWW) is currently one of the most important electronic information sources. However, its query interfaces and the provided reasoning services are rather limited. Ontobroker consists of a number of languages and tools that enhance query access and inference service in the WWW. It provides languages to annotate web documents with ontological information, to represent ontologies, and to formulate queries. The tool set of Ontobroker allows us to access information and knowledge from the web and to infer new knowledge with an inference engine based on techniques from logic programming. This article provides several examples that illustrate these languages and tools and the kind of service that is provided. We also discuss the bottlenecks of our approach that stem from the fact that the applicability of Ontobroker requires two time-consuming activities: (1) developing shared ontologies that reflect the consensus of a group of web users and (2) annotating web documents with additional information.
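
The annotate-then-infer style of the approach can be sketched without Ontobroker's actual frame-logic syntax. In the illustrative Python below, the fact triples stand in for annotations extracted from web pages and the single rule stands in for an ontology axiom; all names are invented:

```python
# Facts extracted from annotated pages: (subject, relation, object).
# The facts and the rule are invented for illustration; they are not
# Ontobroker's actual annotation or frame-logic syntax.
facts = {
    ("anna", "worksAt", "onto-group"),
    ("onto-group", "partOf", "cs-dept"),
    ("bert", "worksAt", "cs-dept"),
}

def saturate(facts):
    # Rule: X worksAt G and G partOf D  =>  X worksAt D
    facts = set(facts)
    while True:
        derived = {(x, "worksAt", d)
                   for (x, r1, g) in facts if r1 == "worksAt"
                   for (g2, r2, d) in facts if r2 == "partOf" and g2 == g}
        if derived <= facts:
            return facts
        facts |= derived

# Query: who works at cs-dept, whether stated directly or inferred?
print(sorted(x for (x, r, o) in saturate(facts)
             if r == "worksAt" and o == "cs-dept"))  # ['anna', 'bert']
```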

88 citations


01 Jan 1998
TL;DR: This paper describes how SHOE, a set of Simple HTML Ontological Extensions, can be used to discover implicit knowledge from the World-Wide Web through the use of context, inheritance and inference.
Abstract: This paper describes how SHOE, a set of Simple HTML Ontological Extensions, can be used to discover implicit knowledge from the World-Wide Web (WWW). SHOE allows authors to annotate their pages with ontology-based knowledge about page contents. In previous papers, we discussed how the semantic knowledge provided by SHOE allows users to issue queries that are much more sophisticated than keyword search techniques, including queries that require retrieval of information from many sources. Here, we expand upon this idea by describing how SHOE's ontologies allow agents to understand more than what is explicitly stated in Web pages through the use of context, inheritance and inference. We use examples to illustrate the usefulness of these features to Web agents and query engines.
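
The inheritance part of this is easy to make concrete. In the hypothetical sketch below (the ontology, URLs, and category names are invented, and real SHOE annotations are HTML tags rather than Python data), a query for Person retrieves instances that pages only declared under subcategories:

```python
# Sketch of SHOE-style reasoning: an ontology gives categories and
# their parents, pages assert instances, and queries use inheritance.
ontology = {                      # category -> parent category
    "GraduateStudent": "Student",
    "Student": "Person",
    "Professor": "Person",
    "Person": None,
}

instances = [                     # (url, declared category) from annotations
    ("http://example.org/~kim", "GraduateStudent"),
    ("http://example.org/~lee", "Professor"),
]

def is_a(category, target):
    # walk the inheritance chain upward to the root
    while category is not None:
        if category == target:
            return True
        category = ontology[category]
    return False

# Query: all instances of Person, though no page says "Person" explicitly.
print([url for url, cat in instances if is_a(cat, "Person")])
```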

61 citations


Book ChapterDOI
30 Mar 1998
TL;DR: It is noted that some structural properties can be identified with semantic properties of the data and provide measures for comparison between HTML documents.
Abstract: When we describe a Web page informally, we often use phrases like “it looks like a newspaper site”, “there are several unordered lists” or “it's just a collection of links”. Unfortunately, no Web search or classification tools provide the capability to retrieve information using such informal descriptions that are based on the appearance, i.e., structure, of the Web page. In this paper, we take a look at the concept of structurally similar Web pages. We note that some structural properties can be identified with semantic properties of the data and provide measures for comparison between HTML documents.
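
One simple way to make "structurally similar" concrete is to compare tag statistics; the measure below (cosine similarity over start-tag counts) is an illustrative stand-in for the paper's measures, not a reproduction of them:

```python
from collections import Counter
from html.parser import HTMLParser

class TagCounter(HTMLParser):
    # count start tags as a crude structural signature of a page
    def __init__(self):
        super().__init__()
        self.tags = Counter()
    def handle_starttag(self, tag, attrs):
        self.tags[tag] += 1

def signature(html):
    p = TagCounter()
    p.feed(html)
    return p.tags

def similarity(a, b):
    # cosine similarity between two tag-count vectors
    dot = sum(a[t] * b[t] for t in set(a) | set(b))
    norm = (sum(v * v for v in a.values()) ** 0.5) * \
           (sum(v * v for v in b.values()) ** 0.5)
    return dot / norm if norm else 0.0

news = "<html><body><h1>News</h1><ul><li>a</li><li>b</li></ul></body></html>"
links = "<html><body><ul><li><a href='#'>x</a></li></ul></body></html>"
print(similarity(signature(news), signature(links)))
```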

41 citations


Proceedings Article
01 Jan 1998
TL;DR: A declarative query language is proposed that allows resource discovery on the Internet with interactive and progressively refined inquiries, and that also supports the discovery of knowledge within the content of the documents and the structure of the hyperspace.
Abstract: There is a massive increase of information available on electronic networks. This profusion of resources on the World-Wide Web gave rise to considerable interest in the research community. Traditional information retrieval techniques have been applied to the document collection on the Internet, and a myriad of search engines and tools have been proposed and implemented. However, the effectiveness of these tools is not satisfactory. None of them is capable of discovering knowledge from the Internet. We propose a declarative query language that would allow resource discovery on the Internet with interactive and progressively refined inquiries. The language also supports the discovery of knowledge within the content of the documents and the structure of the hyperspace.

Journal ArticleDOI
TL;DR: LogicWeb illustrates that logic programming possesses many advantages for writing Web applications, including the simple representation of information, the ability to write meta-level descriptions, and the encoding of rules and heuristics necessary for “intelligent” behaviour.
Abstract: LogicWeb is a model of the World Wide Web, where Web pages are rephrased as logic programs, and hypertext links are relationships between these programs. A logic language based on LogicWeb has been developed which supports these high-level abstractions for Web programming. We have also implemented a client-side extension to a Web browser for executing applications written in that language. The LogicWeb language is particularly suitable for coding important classes of applications, and this paper considers two in some detail: Web search, and the structuring of Web information using deductive databases. LogicWeb illustrates that logic programming possesses many advantages for writing Web applications, including the simple representation of information (e.g., as deductive databases or as logic grammars), the ability to write meta-level descriptions (e.g., of pages and the connections between pages), and the encoding of rules and heuristics necessary for “intelligent” behaviour.
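
LogicWeb programs themselves are logic programs; as a language-neutral sketch of the core idea (pages and links become facts that a deductive query can traverse), using an invented link graph:

```python
# LogicWeb's idea in miniature: pages become fact bases and hypertext
# links become a relation we can reason over deductively.
# The link graph below is invented for illustration.
link = {("a", "b"), ("b", "c"), ("c", "d"), ("b", "a")}

def reachable(src):
    # deductive-database style transitive closure of the link relation
    seen, frontier = set(), {src}
    while frontier:
        page = frontier.pop()
        for (p, q) in link:
            if p == page and q not in seen:
                seen.add(q)
                frontier.add(q)
    return seen

print(sorted(reachable("a")))  # all pages reachable by following links
```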

Journal Article
TL;DR: HyperAT, a hypertext research authoring tool developed to help designers build usable web documents on the World Wide Web without getting "lost", is presented.
Abstract: Users tend to lose their way in the maze of information within hypertext. Much work done to address the "lost in hyperspace" problem is reactive, that is, doing remedial work to correct the deficiencies within hypertexts because they are (or were) poorly designed and built. What if solutions are sought to avoid the problem? What if we do things well from the start? This paper reviews the "lost in hyperspace" problem, and suggests a framework to understand the design and usability issues. The issues cannot be seen as purely psychological or purely computing; they are multi-disciplinary. Our proactive, multi-disciplinary approach is drawn from current technologies in sub-disciplines of hypertext, human-computer interaction, cognitive psychology and software engineering. To demonstrate these ideas, this paper presents HyperAT, a hypertext research authoring tool, developed to help designers build usable web documents on the World Wide Web without getting "lost".

Journal ArticleDOI
TL;DR: A new formal model of query and computation on the Web is presented, focusing on two important aspects that distinguish the access to Web data from the access to a standard database system: the navigational nature of the access and the lack of concurrency control.


01 Jan 1998
TL;DR: The lessons learned from experiences with the Web-base Management system are discussed, ranging from database-style query interfaces to popular Web sites, to the design and implementation of several sites, among them an integrated Web museum that correlates data coming from several virtual museums on the Web.
Abstract: The Araneus project aims at developing tools for data management on the World Wide Web. Web-based information systems deal with data of heterogeneous nature, mainly database data and HTML documents. We have implemented a system, called a Web-base Management system, for managing such repositories. The system is designed to support several classes of applications: (i) high-level access to data in the Web; (ii) design, implementation and maintenance of Web sites; (iii) cooperative applications on the Web. We discuss the lessons learned from our experiences with the system, ranging from database-style query interfaces to popular Web sites, to the design and implementation of several sites, among them an integrated Web museum that correlates data coming from several virtual museums on the Web.

Book
04 Dec 1998
TL;DR: This book gives a thorough technical description of all relevant WWW developments up to the time of writing, including the latest versions of the transfer protocol (HTTP/1.1) and description language (HTML 4.0).
Abstract: From the Publisher: The World Wide Web is undoubtedly the development of the decade in the media world. Since its beginnings in 1990, the WWW has evolved from a rather simple model of resource names (URL), a transfer protocol (HTTP), and a language for the description of interconnected information pages (HTML), to a far more complex infrastructure. This book gives a thorough technical description of all relevant WWW developments up to the time of writing, including the latest versions of the transfer protocol (HTTP/1.1) and description language (HTML 4.0), the foundations of the description language (SGML and XML), style sheets (CSS1), server issues (SSL, CGI, and Apache as an example of a Web server), and some issues that will be of increasing importance in future (MathML, VRML, PNG).
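
One of the headline changes the book covers, HTTP/1.0 to HTTP/1.1, is visible even in a hand-written request: HTTP/1.1 makes the Host header mandatory (so one address can serve many virtual hosts) and keeps connections open unless told otherwise. A minimal sketch using only Python's standard library:

```python
import socket

# Issue a bare-bones HTTP/1.1 request by hand. HTTP/1.1 requires the
# Host header, and connections are persistent unless we ask to close.
host = "example.com"
request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {host}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((host, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):  # server closes after the response
        response += chunk

print(response.split(b"\r\n")[0].decode())  # e.g. "HTTP/1.1 200 OK"
```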

Proceedings ArticleDOI
01 May 1998
TL;DR: The requirements of a tourism hypermedia system resulting from ethnographic studies of tourist advisers are presented, and it is concluded that an open semantic hypermedia (SH) approach is appropriate.
Abstract: Web-based Public Information Systems of the kind common in tourism do not satisfy the needs of the customer because they do not offer a sufficiently flexible linking environment capable of emulating the mediation role of a tourist adviser. We present the requirements of a tourism hypermedia system resulting from ethnographic studies of tourist advisers, and conclude that an open semantic hypermedia (SH) approach is appropriate. We present a novel and powerful SH prototype based on the use of a semantic model expressed as a terminology. The terminological model is implemented by a Description Logic, GRAIL, capable of the automatic and dynamic multi-dimensional classification of concepts, and hence the web pages they describe. We show how GRAIL-Link has been used within the TourisT hypermedia system and conclude with a discussion.

Proceedings ArticleDOI
01 Aug 1998
TL;DR: The first way is achieved on the World Wide Web by using one of the many available search engines (such as Altavista, Lycos, or InfoSeek), and the second way is mainly achieved by browsing the WWW, following predefined links between documents.
Abstract: The first way is achieved on the World Wide Web by using one of the many available search engines (such as Altavista, Lycos, or InfoSeek), and the second way is mainly achieved by browsing the WWW, following predefined links between documents. Most people tend to move between these two modes depending on the task to perform. Nowadays, most users face difficulties using the existing tools to achieve satisfaction in their information-seeking task. We think that the two main problems inherent to the WWW, which lie behind these difficulties, are:

Journal ArticleDOI
TL;DR: This paper attempts to organise the available tools into a number of categories, according to their information acquisition and retrieval methods, with the intention of exposing the strengths and weaknesses of the various approaches.
Abstract: Search Engines and Classified Directories have become essential tools for locating information on the World Wide Web. A consequence of increasing demand, as the volume of information on the Web has expanded, has been a vast growth in the number of tools available. Each one claims to be more comprehensive, more accurate and more intuitive to use than the last. This paper attempts to organise the available tools into a number of categories, according to their information acquisition and retrieval methods, with the intention of exposing the strengths and weaknesses of the various approaches. The importance and implications of Information Retrieval (IR) techniques are discussed. Description of the evolution of automated tools enables an insight into the aims of recent and future implementations.
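
A core IR technique underlying the automated tools categorised here is term weighting. As a toy illustration (the three-document collection below is invented), a classic TF-IDF ranker:

```python
import math
from collections import Counter

# Toy document collection; contents invented for illustration.
docs = {
    "d1": "web search engines index the web",
    "d2": "classified directories organise sites by subject",
    "d3": "information retrieval ranks documents for a query",
}

tokenized = {d: text.split() for d, text in docs.items()}
# document frequency: in how many documents does each word occur?
df = Counter(w for words in tokenized.values() for w in set(words))
N = len(docs)

def score(query, doc_words):
    tf = Counter(doc_words)
    # classic tf-idf: term frequency times inverse document frequency
    return sum(tf[w] * math.log(N / df[w]) for w in query.split() if w in df)

query = "web search"
ranking = sorted(tokenized, key=lambda d: -score(query, tokenized[d]))
print(ranking)  # d1 first: it matches both query terms
```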

Proceedings ArticleDOI
W.W. Noah
06 Jan 1998
TL;DR: TRW's Digital Media Systems Lab has developed a research platform, InfoWeb™, that can be described as an "information infrastructure" that provides seamless access to Web search services, Web pages, intranet databases, special purpose search engines, and legacy systems.
Abstract: The explosive growth in the volume of information available on the Web and in enterprise databases continues unabated. Managing these large quantities of information remains a challenge for both government and industry. TRW's Digital Media Systems Lab has developed a research platform, InfoWeb™, that can be described as an "information infrastructure" that provides seamless access to Web search services, Web pages, intranet databases, special purpose search engines, and legacy systems. InfoWeb™ generates and manages descriptive metadata associated with "content objects": text, compound documents, graphics, images, video, audio, numeric data, etc. The metadata add value to the intellectual content of the content objects and provide for varied retrieval strategies, notably through complex searches and automatic hyperlinks.

Journal ArticleDOI
TL;DR: The efforts that apply established database techniques to retrieving Web information are summarized and some possible extensions to the traditional database techniques are investigated for building fully fledged Web‐based database applications.
Abstract: Integrating database and World Wide Web technologies is another topic where industrial and practical activities lead ahead of academic ones. The purpose of this article is to survey the related activities from the database community's point of view and stimulate interest among that community. It covers three aspects. First, the efforts that apply established database techniques to retrieving Web information are summarized. These efforts aim to overcome the inadequacy of the file system technology on which the Web is based, so that information can be retrieved easily and quickly from the Web. Second, various approaches to interfacing databases via the Web are discussed, with examples of accomplished prototypes and commercial products showing recent advances. Finally, some possible extensions to the traditional database techniques are investigated for building fully fledged Web-based database applications.

Proceedings ArticleDOI
06 Jan 1998
TL;DR: The presented meta model, the Extended World Wide Web Design Technique (eW3DT), focuses on the document oriented storage layer of the Dexter Hypertext Reference Model, and distinguishes between technical and content specific responsibilities for designing, implementing, and maintaining WIS.
Abstract: Due to a constantly changing environment as well as a lack of willingness to modify existing organizational structures and decision models, the full economic potential of Web information systems (WIS) has not yet been realized. A reference model as a normative concept represents an abstraction of a typical company, its functional units, or its (Web) information systems and is intended to facilitate the exploitation of this potential. The data object types of the presented meta model, the Extended World Wide Web Design Technique (eW3DT), focus on the document oriented storage layer of the Dexter Hypertext Reference Model. They provide hypertext designers with a framework and graphical notation for the construction of both reference and implementation models, during the software development process of commercial WIS. As a precondition for pursuing a partial globalization strategy, eW3DT distinguishes between technical and content specific responsibilities for designing, implementing, and maintaining WIS.

Proceedings ArticleDOI
27 Feb 1998
TL;DR: The benefits of databases to keep the web service information and the use of the HTML++ toolset to map the database contents to World Wide Web documents are discussed, which introduces a high level of flexibility to the Vienna International Festival WWW presence.
Abstract: World Wide Web service and application engineering is a complex task, comparable to the software engineering process. Similar to CASE environments in software engineering, web service editors and web document designers are successful products on the market. Most of the tools concentrate on single web page creation; only a few manage the organization of complete web services. Large web applications containing hundreds or thousands of documents and complex interactive services need a more sophisticated engineering approach. Basic requirements for a web application management system are proper organization of both the data and the navigation model. Consistent interface design and integration facilities for static information and dynamic interactive services identify a good engineering toolset. A real challenge to the engineering task is the introduction of flexibility to web applications in terms of content changes, layout design updates, temporal information changes, multilingual support and online content management. We discuss the benefits of databases to keep the web service information and the use of the HTML++ toolset to map the database contents to World Wide Web documents. Databases are used to introduce an abstract level of data management for web applications. The object-based HTML++ toolset is used to engineer complex web applications like the Vienna International Festival WWW presence. This case study contains 300+ multilingual pages and several interactive services. The combination of database support and HTML++ introduces a high level of flexibility to the festival's web service engineering.

Proceedings Article
01 Jan 1998
TL;DR: In this scheme, tasks for query translation/capability mapping (named as query naturalization) between wrappers and web sources and tasks for semantic caching are seamlessly integrated, resulting in easier query optimization.
Abstract: A semantic caching scheme suitable for web database environments is proposed. In our scheme, tasks for query translation/capability mapping (named query naturalization) between wrappers and web sources and tasks for semantic caching are seamlessly integrated, resulting in easier query optimization. A semantic cache consists of three components: 1) semantic view, a description of the contents in the cache using sub-expressions of the previous queries, 2) semantic index, an index for the tuple IDs that satisfy the semantic view, and 3) physical storage, a storage containing the tuples (or objects) that are shared by all semantic views in the cache. Types of matching between the native query and cache query are discussed. Algorithms for finding the optimal match of the input query in the semantic cache and for cache replacement are presented. The proposed techniques are being implemented in a cooperative web database (CoWeb) prototype at UCLA.
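
The central operation, matching a new query against cached semantic views, can be sketched for single-attribute range predicates. The Python below is an illustrative simplification of the paper's matching types, with invented view and attribute names:

```python
# Semantic-cache matching sketch for one-attribute range predicates.
# A "semantic view" describes what the cache already holds; a new
# query is answerable locally if its range is contained in a view.
# (Simplification: real matching also handles partial overlap, which
# yields a probe query on the cache plus a remainder query.)

cache_views = {                 # view name -> (attribute, low, high)
    "v1": ("price", 0, 100),
    "v2": ("year", 1990, 1998),
}

def match(query):
    attr, lo, hi = query
    for name, (vattr, vlo, vhi) in cache_views.items():
        if attr == vattr and vlo <= lo and hi <= vhi:
            return name         # containing match: answer from the cache
    return None                 # no match: send the query to the source

print(match(("price", 10, 50)))   # v1 -> answered from the cache
print(match(("price", 50, 200)))  # None -> overlap/remainder case
```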

01 Jan 1998
TL;DR: Since the functionality of wrappers and mediators is integrated into a single declarative language, the development of advanced applications based on the Web as an information source is significantly simplified.
Abstract: Languages supporting deduction and object orientation seem particularly promising for querying and reasoning about the structure and contents of the Web, and for the integration of information from heterogeneous sources. Florid, an implementation of the deductive object-oriented language F-logic, has been extended to provide a declarative semantics for querying the Web. This extension allows extraction and restructuring of data from the Web and a seamless integration with local data. Since the functionality of wrappers and mediators is integrated into a single declarative language, the development of advanced applications based on the Web as an information source is significantly simplified. This claim is substantiated using a comprehensive example.

Journal ArticleDOI
TL;DR: Two prototypical elements of a World Wide Web-based system for visualization and analysis of data produced in the software development process are described, including Live Documents and SeeSoft™, which incorporates interactive applets and visualization techniques into Web pages.
Abstract: We describe two prototypical elements of a World Wide Web-based system for visualization and analysis of data produced in the software development process. Our system incorporates interactive applets and visualization techniques into Web pages. A particularly powerful example of such an applet, SeeSoft™, can display thousands of lines of text on a single screen, allowing detection of patterns not discernible directly from the text. In our system, Live Documents replace static statistical tables in ordinary documents by dynamic Web-based documents, in effect allowing the "reader" to customize the document as it is read. Use of the Web provides several advantages. The tools access data from a very large central database, instead of requiring that it be downloaded; this ensures that readers are always working with the most up-to-date version of the data, and relieves readers of the responsibility of preparing data for their use. The tools encourage collaborative research, as one researcher's observations can easily be replicated and studied in greater detail by other team members. We have found this particularly useful while studying software data as part of a team that includes researchers in computer science, software engineering, and statistics, as well as development managers. Live Documents will also help the Web revolutionize scientific publication, as papers published on the Web can contain Java applets that permit readers to confirm the conclusions reached by the authors' statistical analyses.

Journal ArticleDOI
01 Sep 1998
TL;DR: This document presents a brief summary of the papers presented at a workshop as part of SIGIR'98 on Hypertext Information Retrieval for the Web, along with a set of themes identified as a result of group discussion and some conclusions on where to go next.
Abstract: The notion of searching a hypertext corpus has been around for some time, and is an especially important topic given the growth of the World Wide Web and the general dissatisfaction users have with the tools currently available for finding information on the Web. In response to this, a workshop was held as part of SIGIR'98 on Hypertext Information Retrieval for the Web and this document presents a brief summary of the papers presented at that workshop, along with a set of themes identified as a result of group discussion and some conclusions on where to go next.


Proceedings ArticleDOI
04 Nov 1998
TL;DR: With the release and widespread support of XML (extensible markup language) and the development of MathML, Web pages not only can display mathematics and equations in TeX-like fashion, but, beyond that, retain the meaning of the equations so that they can be opened and processed by a variety of mathematical software applications.
Abstract: One of the ironies of the World Wide Web (WWW or simply the Web) is that even though it was initially conceived and implemented for use by physicists, it provided no special capabilities for mathematics and equations. With the release and widespread support of XML (extensible markup language) and the development of MathML, Web pages not only can display mathematics and equations in TeX-like fashion, but, beyond that, retain the meaning of the equations so that they can be opened and processed by a variety of mathematical software applications. The Web thus can expand the scope of its inherent intense interactivity to include equations and mathematics, as well as text and multimedia.
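
To make the contrast with purely visual rendering concrete, the sketch below generates presentation MathML for x^2 + 1 using Python's standard library; unlike a bitmap of the formula, the markup keeps structure that other software can reprocess:

```python
import xml.etree.ElementTree as ET

# Build presentation MathML for "x^2 + 1". The element names (msup,
# mi, mn, mo) and the namespace are standard MathML; the expression
# itself is just an example.
math = ET.Element("math", xmlns="http://www.w3.org/1998/Math/MathML")
msup = ET.SubElement(math, "msup")          # superscript: base, exponent
ET.SubElement(msup, "mi").text = "x"        # identifier
ET.SubElement(msup, "mn").text = "2"        # number
ET.SubElement(math, "mo").text = "+"        # operator
ET.SubElement(math, "mn").text = "1"

print(ET.tostring(math, encoding="unicode"))
# -> <math xmlns="..."><msup><mi>x</mi><mn>2</mn></msup><mo>+</mo><mn>1</mn></math>
```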