
Showing papers on "Hyperlink published in 1995"


Patent
26 May 1995
TL;DR: In this paper, the authors describe a method of using a computer to hyperlink through automatically generated hyperlinks, and a data structure which can be used to support such hyperlinking.
Abstract: Method and apparatus to enable scanning one or more documents, automatically identifying significant key topics, concepts, and phrases in the documents, and creating summary pages for, and hyperlinks between, some or all of these key topics. Optionally, documents are divided into segments, so that only the needed segment of a hyperlinked-to document need be transferred to a viewer's display. A process running on a computer can be used which (a) allows an author to select source documents and then, using a semantic analyzer program, (b) automatically identifies significant key topics within the selected documents, (c) compiles those key topics into summary pages, (d) generates presentation pages, optionally segmenting the selected documents into smaller pieces, and (e) embeds hyperlinks from these summary pages to the locations where key topics appear in the presentation pages. Several types of summary page are available, including abstract, concept, phrase, and table-of-contents summary pages. A summary page provides an index into the source document and can be appended to it. A method of hyperlinking through automatically generated hyperlinks, and a data structure to support such hyperlinking, are also described.

657 citations
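
A rough sketch of the pipeline the patent describes, using a bare word-frequency count as a stand-in for the semantic analyzer; the function names and the HTML shape are invented for illustration and are not taken from the patent:

    import re
    from collections import Counter

    STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "are", "for"}

    def extract_key_topics(text, n=5):
        # Stand-in for the patent's semantic analyzer: take the n most
        # frequent non-stopword terms as "key topics".
        words = re.findall(r"[a-z]+", text.lower())
        counts = Counter(w for w in words if w not in STOPWORDS)
        return [w for w, _ in counts.most_common(n)]

    def build_summary_page(doc_id, text):
        # Emit an HTML summary page whose entries hyperlink to anchors
        # assumed to exist in the presentation page for this document.
        items = [f'<li><a href="{doc_id}.html#{t}">{t}</a></li>'
                 for t in extract_key_topics(text)]
        return "<ul>\n" + "\n".join(items) + "\n</ul>"

    print(build_summary_page("doc1", "Hyperlinks connect topics; topics and hyperlinks ..."))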


ReportDOI
01 Feb 1995
TL;DR: An information seeking assistant for the world wide web, called WebWatcher, interactively helps users locate desired information by employing learned knowledge about which hyperlinks are likely to lead to the target information.
Abstract: We describe an information seeking assistant for the world wide web. This agent, called WebWatcher, interactively helps users locate desired information by employing learned knowledge about which hyperlinks are likely to lead to the target information. Our primary focus to date has been on two issues: (1) organizing WebWatcher to provide interactive advice to Mosaic users while logging their successful and unsuccessful searches as training data, and (2) incorporating machine learning methods to automatically acquire knowledge for selecting an appropriate hyperlink given the current web page viewed by the user and the user’s information goal. We describe the initial design of WebWatcher, and the results of our preliminary learning experiments.

644 citations
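
As a loose illustration of the selection task WebWatcher learns (this is not the paper's learning method, and all names are hypothetical), each hyperlink on the current page can be scored by word overlap between its anchor text and the user's stated information goal:

    def score_link(anchor_text, goal_words):
        # Crude stand-in for a learned link selector: count shared words.
        return len(set(anchor_text.lower().split()) & goal_words)

    def advise(links, goal):
        # Rank the page's hyperlinks for the given information goal.
        goal_words = set(goal.lower().split())
        return sorted(links, key=lambda l: score_link(l, goal_words), reverse=True)

    page_links = ["machine learning publications", "campus map", "faculty directory"]
    print(advise(page_links, "papers on machine learning"))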


Patent
William C. Hill1
18 Dec 1995
TL;DR: In the context of global hypertext, a new solution to the human interface problem of waiting for the content of a next page to arrive and be displayed by a WWW browser is proposed in this paper.
Abstract: In the context of global hypertext, a new solution is proposed to the human interface problem of waiting for the content of the next page to arrive and be displayed by a WWW browser. Small amounts of relevant content are stored and maintained in the hyperlinks themselves. This extra content is revealed to users during download wait time. Hypertext links that contain and reveal extra content are called content-laden links. By displaying content that is useful and relevant to the user while the next WWW page is being fetched, useless dead time can be turned into productive time and the user's satisfaction level increased.

104 citations
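
The content-laden link idea can be sketched as follows; the data layout and previews are invented for the example rather than prescribed by the patent:

    # Each link stores a small preview snippet alongside its target, so the
    # snippet can be revealed during the download wait.
    links = {
        "/papers/webwatcher.html": "WebWatcher: an agent that suggests hyperlinks ...",
        "/library/hyperg.html": "Hyper-G: a second-generation hypermedia system ...",
    }

    def follow(href, fetch):
        print("While you wait:", links[href])  # reveal the stored extra content
        return fetch(href)                     # then perform the slow fetch

    page = follow("/papers/webwatcher.html", lambda h: f"<html>contents of {h}</html>")
    print(page)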


Proceedings ArticleDOI
14 Aug 1995
TL;DR: This contribution considers the construction of hyperdocuments; converting scanned paper documents into electronic hypertext, with a focus on hyperlinks between the text and labels in a picture.
Abstract: In this contribution we consider the construction of hyperdocuments; converting scanned paper documents into electronic hypertext. Hyperlink creation is automated by analyzing the structure and content of the scanned document. The focus is on hyperlinks between the text and labels in a picture. A number of tools for such hyperlink detection are described. Practical results are presented.

33 citations
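
One simple tool of this kind, turning textual figure references into hyperlinks that point at labels detected in the scanned picture, might look like the sketch below; the anchor-naming scheme is an assumption, not taken from the paper:

    import re

    def link_figure_references(text):
        # Rewrite "Fig. N" references as hyperlinks to anchors that a
        # layout-analysis step is assumed to have placed on the labels.
        def to_anchor(m):
            label = m.group(0)
            target = label.lower().replace(" ", "").replace(".", "")
            return f'<a href="#{target}">{label}</a>'
        return re.sub(r"Fig\.\s*\d+[a-z]?", to_anchor, text)

    print(link_figure_references("The layout is shown in Fig. 2; Fig. 3a gives detail."))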


Proceedings Article
01 Jan 1995
TL;DR: This work has developed a process for implementing algorithmic guidelines into a graphical format that allows the user to browse these guidelines in an interactive fashion and to visualize the traversed parts of the algorithm by flowcharts.
Abstract: The widespread utility of clinical practice guidelines is greatly dependent on the ease with which they can be accessed, used, and applied. Because it supports hyperlinking and is widely accessible, the World-Wide Web is a medium that is well suited for browsing through guidelines. We have developed a process for implementing algorithmic guidelines into a graphical format that allows the user to browse these guidelines in an interactive fashion. The guidelines we used were already in or could be transformed to an algorithmic format that lends itself well to analysis with decision table techniques, which in turn permits a fairly straightforward conversion into a graphical representation. The results of this process allow a user to browse a particular guideline algorithm and to visualize the traversed parts of the algorithm by flowcharts. Our first experiences with this method of representing a few sample clinical practice guidelines have been encouraging, and we hope to extend this method to other guidelines.

31 citations
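
A minimal sketch of the decision-table step, assuming a toy table format and emitting Graphviz DOT text as the browsable graphical form; the paper's actual conversion is more involved:

    # A decision table maps (condition, outcome) pairs to next actions.
    table = {
        ("fever above 38C", "yes"): "order blood culture",
        ("fever above 38C", "no"): "re-examine in 24 hours",
    }

    def to_dot(table):
        # Each table row becomes a labelled edge in the flowchart.
        lines = ["digraph guideline {"]
        for (condition, outcome), action in table.items():
            lines.append(f'  "{condition}" -> "{action}" [label="{outcome}"];')
        lines.append("}")
        return "\n".join(lines)

    print(to_dot(table))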


ReportDOI
29 May 1995
TL;DR: The first implementation of WebWatcher, a Learning Apprentice for the World Wide Web, and an algorithm which identifies pages that are related to a given page using only hypertext structure are described.
Abstract: This paper describes the first implementation of WebWatcher, a Learning Apprentice for the World Wide Web. We also explore the possibility of extracting information from the structure of hypertext. We introduce an algorithm which identifies pages that are related to a given page using only hypertext structure. We motivate the algorithm by using the Minimum Description Length principle.

30 citations
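
The abstract does not spell out the algorithm, but a simple structure-only heuristic in its spirit is co-citation: score candidate pages by how many parent pages they share with the page of interest. A sketch over an invented toy link graph:

    # Link graph: page -> set of pages it links to.
    graph = {
        "index": {"paperA", "paperB", "paperC"},
        "hotlist": {"paperA", "paperB"},
        "misc": {"paperC"},
    }

    def related(page, graph):
        # Pages are related if many parents link to both of them.
        parents = {p for p, kids in graph.items() if page in kids}
        scores = {}
        for p in parents:
            for sibling in graph[p] - {page}:
                scores[sibling] = scores.get(sibling, 0) + 1
        return sorted(scores, key=scores.get, reverse=True)

    print(related("paperA", graph))  # paperB shares two parents with paperA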


Journal ArticleDOI
TL;DR: The paper includes a brief introduction to this library, some of the museum sites linked to it, visitor statistics, and possible future directions.
Abstract: The World Wide Web (WWW) Virtual Library of museums is an interactive directory of on-line museums on the global Internet computer network of networks. Virtual 'visitors' can select a 'hyperlink' to a museum of their choice (categorised by country) and view on-line hypermedia information and exhibits provided by that museum. Since its inception in 1994, the page has received over 200,000 visits, with around a thousand a day recently, easily the most popular page at our site. The paper includes a brief introduction to this library, some of the museum sites linked to it, visitor statistics, and possible future directions.

11 citations


01 Jan 1995
TL;DR: Using the World-Wide Web, a system for creating hypertext links on the fly in a library composed of bitmapped images of paper documents and text derived from those images by optical-character recognition is described.
Abstract: Hypertext is an appealing interface for digital libraries, but using existing paper documents to build such a library poses several challenges. We describe a system for creating hypertext links on the fly in a library composed of bitmapped images of paper documents and text derived from those images by optical-character recognition. We present two simple ideas: text-image maps coordinate text and image representations of a document, and our probabilistic search heuristics generate hypertext links from the text of citations. Using the World-Wide Web, we built an interface that lets readers move from a bibliography entry to the cited document with a mouse click. Similarly, readers can click on entries in the table of contents and move directly to them.

10 citations
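
As a hedged sketch of the citation-linking step, difflib's string similarity can stand in for the paper's probabilistic search heuristics; the library titles here are invented:

    import difflib

    def link_citation(citation, titles):
        # Match an OCR'd citation string against known document titles
        # and return the best candidate above a similarity cutoff.
        lowered = {t.lower(): t for t in titles}
        hits = difflib.get_close_matches(citation.lower(), list(lowered),
                                         n=1, cutoff=0.4)
        return lowered[hits[0]] if hits else None

    library = ["WebWatcher: A Learning Apprentice for the World Wide Web",
               "Hyper-G and the World Wide Web"]
    print(link_citation("Webwatcher, learning apprentice for the WWW", library))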


01 Jan 1995
TL;DR: The design of the user interface that LoganWeb generates in HTML is described; it includes extensive use of hyperlinks to bring together related meeting information.
Abstract: Log files generated by electronic meeting software record the remarks typed by meeting participants and many other meeting events. In their raw format, these meeting logs are not convenient for the meeting participants to read and use as input to future meetings. LoganWeb is a tool which processes meeting log files and produces polymorphic meeting documents which contain a variety of summaries in human-readable form such as keyword indexes and participant summaries. LoganWeb generates polymorphic documents in the HTML format used for laying out documents on the World-Wide Web (Web). This allows exploitation of the powerful Web layout, hypertext and user interface facilities. Using a Web browser alongside the electronic meeting tool allows remotely located participants to consult valuable polymorphic documents from the current and past meetings. This paper describes the design of the user interface which is generated by LoganWeb in HTML. The design includes the extensive use of hyperlinks to bring together related meeting information. The powerful features of LoganWeb are illustrated by means of a meeting scenario which shows the main features of the meeting document tool and its user interface.

9 citations
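
A keyword index of the kind LoganWeb emits can be sketched as below; the log format and the HTML shape are assumptions for illustration, not LoganWeb's actual output:

    # Each keyword entry hyperlinks to the logged remarks that mention it.
    log = [
        ("alice", "we should fix the budget before the deadline"),
        ("bob", "the budget draft is on the server"),
    ]

    def keyword_index(log, keywords):
        html = ["<h2>Keyword index</h2>", "<ul>"]
        for kw in keywords:
            refs = [f'<a href="#remark{i}">{who}</a>'
                    for i, (who, text) in enumerate(log) if kw in text]
            html.append(f"<li>{kw}: {', '.join(refs)}</li>")
        html.append("</ul>")
        return "\n".join(html)

    print(keyword_index(log, ["budget", "deadline"]))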


Book
01 Dec 1995
TL;DR: The HTML3 Manual of Style as discussed by the authors provides instructions for using HTML syntax and formatting tags, annotated examples of actual Web pages, helpful hints for designing a home page and converting existing documents, and explanations for incorporating graphics into HTML documents.
Abstract: From the Publisher: Hypertext Markup Language (HTML) is not a programming language. It's a surprisingly simple system of formatting tags that allows even those with no programming experience to design World Wide Web pages. How simple is HTML? So simple that this one small book contains all you need to know to create your own personal, hyperlinked Web pages. Author Larry Aronson covers every vital aspect of HTML3, using concise explanations and lucid examples. This handy manual shows you how to exploit the newest features of HTML3, including how to use inline figures, tables, and style sheets. You'll produce professional-looking Web pages with hyperlinks and graphics in no time! HTML3 Manual of Style features instructions for using HTML syntax and formatting tags, annotated examples of actual Web pages, helpful hints for designing a home page and converting existing documents, and explanations for incorporating graphics into your HTML documents.

8 citations



Proceedings ArticleDOI
14 Nov 1995
TL;DR: Could it be possible that because hypertext authors themselves are 'lost' in the process of designing and authoring hypertext systems, they inadvertently contribute to poorly designed hypertext systems, which in turn leads to users often being lost in 'hyperspace'?
Abstract: Hypertext authors lack experience. In fact, the whole business of designing and producing hypertext systems is still a relatively new discipline. Creating hypertexts is complex because of the richness of interconnectivity that exists among nodes and links in hypertexts. As such, the demands placed on hypertext authors should not be underestimated. Hypertext authors have to perform many balancing acts: (1) ensure that the design and structure of the hypertext system is 'best' according to its function; (2) ensure that all the nodes and links created in the hypertext database correspond directly to the windows and links in the display screen, so that there are no redundant or missing links or nodes; and (3) incorporate good design guidelines for screen and information display, dialogue design, navigation aids and online assistance. Hypertext authors don't make good hypermedia documents because it is difficult to do so: they are faced with a vast range of potential structures and an astronomically large number of choices when creating a hypertext document. Just as users are often 'lost' while navigating in hypertexts, so are hypertext authors themselves in authoring hypertext systems! Could it be possible that because hypertext authors themselves are 'lost' in the process of designing and authoring hypertext systems, they inadvertently contribute to poorly designed hypertext systems, which in turn leads to users often being lost in 'hyperspace'? If so, how can authors then be helped in designing well-structured hypertext systems?

Proceedings ArticleDOI
27 Jun 1995
TL;DR: The vision for a system which will provide interactive remote instruction (IRI) to support college-level education across spatial boundaries, in a manner largely transparent to students and faculty, is described.
Abstract: Today a plethora of multimedia and hyperlink software and hardware components are available, as well as teleconferencing systems supporting collaborative work. However, few systems are easy for people to use for real applications such as support of remote instruction. Most importantly, none of these tools scales to use by more than a few users simultaneously. We describe our vision for a system which will support interactive remote instruction (IRI) for college-level education across spatial boundaries in a manner largely transparent to students and faculty. We have implemented a prototype of this system which has been used to evaluate both the user-interface design and performance requirements. The goal is to have a user interface which models most of the interactions that occur in a regular classroom. A key focus in this paper is the assessment of system requirements necessary to support large classes. That means that multimedia tools must support O(10), rather than O(1), simultaneous users, and user-system interaction protocols must be effective and accepted by students and faculty with diverse backgrounds and computer sophistication.

Journal Article
TL;DR: Hypertext Markup Language is another vital tool relating to virtual reality, in that it allows users to create hyperlinks to other documents, graphics and Web sites.
Abstract: Virtual reality, an advanced visualization technology, will change certain aspects of civil engineering. The method allows people to simulate conditions, render designs and test these before proceeding to the next stage of a project. Research is being done on software programs that link real-time construction schedules with 3D CAD designs and animate projects to create images. Stanford University and the Dillingham Construction Co. are working together on a project involving construction of a new hospital around an existing one. Researchers developed a model that helps hospital staff and suppliers see how they have to coordinate operations with the construction schedule. Using the system, engineers can see conflicts early in the planning process and avoid them, and can also visualize the iterative effect of a particular change on other aspects of the construction. Greiner Inc. is developing virtual reality capabilities in-house. According to the firm's senior animator, the equipment can cost as much as $500,000, and most clients aren't willing to pay for virtual reality yet. Greiner uses the system to visualize complicated buildings and other projects; they can tie a building construction procedure to a time line, for example. The method is also used for traffic flow simulations and toll booth operations. The Virtual Reality Modeling Language was standardized in early 1995, and this will enhance the development of the technology. Hypertext Markup Language is another vital tool relating to virtual reality, in that it allows users to create hyperlinks to other documents, graphics and Web sites. The San Diego Supercomputer Center is active in this effort and has a new repository on the World Wide Web for exchanging information, software and utilities related to the Virtual Reality Modeling Language.

Proceedings ArticleDOI
03 Jul 1995
TL;DR: This paper investigates the use of messengers as a platform for hypertext generation over the World Wide Web (WWW) with a focus on hypertext transfer protocol and hypertext markup language (HTML).
Abstract: Messengers constitute a new methodology for network-based operations. This paper investigates the use of messengers as a platform for hypertext generation over the World Wide Web (WWW). A messenger is a program that is executable on any computer on a network that provides the executing environment. The WWW rests on two core standards: the hypertext transfer protocol (HTTP) and the hypertext markup language (HTML). HTTP describes the protocol for the transfer of HTML (and other) documents from servers to the client browsers, while HTML describes the way to mark up a hypertext document to achieve the desired display on the browser's screen.
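
The division of labour between the two standards shows up in a few lines of ordinary Python (this is plain HTTP/HTML usage, not the paper's messenger system, and it needs network access to run):

    import http.client

    # HTTP is the transfer protocol; HTML is the markup it carries.
    conn = http.client.HTTPConnection("example.com")
    conn.request("GET", "/")         # HTTP: ask the server for a document
    resp = conn.getresponse()        # HTTP: status line and headers come back
    print(resp.status, resp.reason)
    body = resp.read().decode()      # HTML: markup for the browser to render
    print(body[:80])
    conn.close()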

ReportDOI
01 Mar 1995
TL;DR: This manual describes a program that takes LaTeX input files, as well as files of link information, and produces a hypertext document that can contain extensive cross-links, a major advantage of hypertext.
Abstract: The World Wide Web has made it possible to use and disseminate documents as "hypertext." One of the major advantages of hypertext over conventional text is that references to other documents or items can be linked directly into the document, allowing the easy retrieval of related information. A collection of documents can also be read this way, jumping from one document to another based on the interests of the reader. This does require that the hypertext documents be extensively cross-linked. Unfortunately, most existing documents are designed as linear documents. Even worse, most authors still think of documents as linear structures, to be read from front to back. To deal with this situation, a number of tools have been created that take documents in an existing word-processing system or markup language and generate "HTML," the hypertext markup language used on the Web. While this process makes a single document available in a convenient form on the Web, it does not give access to cross-document linking, a major advantage of hypertext. This manual describes a program, tohtml, that takes LaTeX input files, as well as files of link information, and produces a hypertext document that can contain extensive cross-links. A related program, doctext, aids in the generation of manual pages that can be referenced by a LaTeX document.
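
The core cross-linking rewrite such a converter performs can be sketched as a pair of regex substitutions; this is a heavy simplification of tohtml, using the 1995-era <a name=...> anchor form:

    import re

    def latex_refs_to_html(tex):
        # \label{key} becomes an HTML anchor; \ref{key} becomes a link to it.
        tex = re.sub(r"\\label\{(.+?)\}", r'<a name="\1"></a>', tex)
        tex = re.sub(r"\\ref\{(.+?)\}", r'<a href="#\1">\1</a>', tex)
        return tex

    print(latex_refs_to_html(r"\label{sec:intro} ... see Section \ref{sec:intro}"))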

Journal ArticleDOI
TL;DR: The research interests have to do with the cognitive processes that occur when children and young adults use information retrieval systems that allow much more control over the searching environment than do traditionally constructed bibliographic information retrieval systems.
Abstract: Our research interests have to do with the cognitive processes that occur when children and young adults use information retrieval systems that allow much more control over the searching environment than do traditionally constructed bibliographic information retrieval systems. We are especially curious about what kind of connection there might be between the search process itself and learning. Digital library environments have great potential to engender entirely new models of the ways in which information is selected and manipulated by learners. For example, with programming environments like HTML, learners have the opportunity not only to search for information, but also to create and disseminate information using the same medium. We believe that such capacity adds significant dimension and new meaning to the concept of information retrieval. The information science literature contains extensive documentation of the kinds of failures that occur when children and other novice users attempt to retrieve information using traditionally designed formal retrieval tools, both automated and non-automated. Interface design typically has been tightly linked to the formal structure of bibliographic databases. Users must be brought to engage directly with those structures in order to produce successful searches. While the construction of bibliographic systems provides the flexibility and manageability needed by experts (i.e., librarians), young users often experience great difficulty in converting their natural language queries into viable search strategies. On the other hand, the Internet (as an example of an informal information retrieval system) is the librarian's nightmare in terms of the lack of control over searching rules and expectations. "Traditional" Internet searching tools (e.g., Gopher, Veronica, Archie) are rather loosely constructed, reflecting the nature of the greatly disparate and ever-evolving resources of the Internet. Gopher menus, for example, contain whatever terms developers choose to include, and no two Gopher servers are constructed in the same way, even if they connect users to identical resources. User guides and local finding tools are inherently ephemeral, as they can only be as reliable as the moving target of the Internet allows them to be. On the other hand, Internet browsers enable the user to have greater control of search pathways and to develop hyperlinks across information sources in a way that has not previously been possible. Users do not need to know which technical operation they are evoking, be it FTP or telnet. Internet browsers also allow users to become information providers and producers. Digital libraries present the next radical stage in altering traditional approaches to information retrieval and manipulation. We are also interested in observing how teachers and librarians go about teaching students to find information on rapidly changing systems. Almost by definition, instructors are in the position of learning while they teach. In contrast, formal bibliographic information retrieval systems don't undergo significant structural change within short periods of time. In addition, there are a number of factors that might make teachers wary of using the Internet as an information gathering tool (e.g., credibility and authority issues, the lack of assumptions that can be made about commonly used types of publications, the "permanency" of paper, etc.). Do teachers find ways of overcoming or managing these concerns?
For example, do they tell students to search Readers' Guide first to get "real" articles and then search the Net for interesting quotes? Do they ask students to post their questions on bulletin boards first in case someone in cyberspace can either provide answers or else tell them where to search?

01 Sep 1995
TL;DR: The multimedia capabilities of the Web system are now well known, but at present the project has not sought to avail itself of, for example, video or sound; future enhancements will certainly do so.
Abstract: A consortium of four partners - the University of Kent, the University of Southampton, the University of Wales College of Cardiff and Queen Mary and Westfield College - has been pooling efforts to produce hypertext courseware to help teach High Performance Computing, jointly funded under the TLTP Phase II Programme by the Higher Education Funding Councils HEFCE, HEFCW, SHEFC and DENI. By High Performance Computing we mean all aspects of parallelism: data-parallel concepts, paradigms, algorithms, and languages such as Fortran 90, High Performance Fortran and Occam; message passing; and architectures such as Single Instruction Multiple Data (SIMD) machines, e.g. the MasPar, and Multiple Instruction Multiple Data (MIMD) machines, e.g. Transputer systems and the NCube. Users can choose the section of material and the page order, point at clickable maps, and explore to different levels of detail by following hyperlinks. This interaction only allows the user control over navigation. Electronic forms allow data input, and this is going to be used in a variety of ways to provide self-assessment and input of source code, for example. Password control is another feature now available; it has enabled control of particular resources and opens the way for safe remote access. The multimedia capabilities of the Web system are now well known, but at present the project has not sought to avail itself of, for example, video or sound; future enhancements will certainly do so.

31 May 1995
TL;DR: This paper provides a brief description of Hyper-G, the first second-generation hypermedia system that implements powerful search mechanisms, such as Boolean searching of titles, keywords, and fulltext with user-defined scope from one collection on one server to all servers worldwide.
Abstract: This paper provides a brief description of Hyper-G, the first second-generation hypermedia system. The first section identifies problems with first-generation hypermedia systems. The following sections discuss the new concepts that are implemented in Hyper-G, including user accounts and billing, structuring of data, caching and replication, native document types, navigation, and editing and authoring. These concepts, such as a world-wide distributed network database, a separated link database and bidirectional links, allow for highly sophisticated navigation and hyperlinks in all native document types such as hypertext, images, Postscript documents and even movies, sound, and 3D scenes. Hyper-G also implements powerful search mechanisms, such as Boolean searching of titles, keywords, and fulltext, with user-defined scope from one collection on one server to all servers worldwide. The system is compatible with first-generation systems like Gopher and World Wide Web.
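
Hyper-G's separated link database with bidirectional links can be sketched as two indexes kept in step; the class and field names are invented for the example:

    class LinkBase:
        # Links live outside the documents; the reverse index is what
        # makes every link traversable in both directions.
        def __init__(self):
            self.forward = {}   # source document -> destinations
            self.backward = {}  # destination document -> sources

        def add(self, src, dst):
            self.forward.setdefault(src, set()).add(dst)
            self.backward.setdefault(dst, set()).add(src)

        def links_from(self, doc):
            return self.forward.get(doc, set())

        def links_to(self, doc):
            # The query embedded-link systems cannot answer directly.
            return self.backward.get(doc, set())

    lb = LinkBase()
    lb.add("intro.html", "movie.mpg")
    print(lb.links_to("movie.mpg"))  # {'intro.html'}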

01 Jan 1995
TL;DR: A pilot study that collected, reviewed, and evaluated image maps from homepages of educational institutions, revealing primarily that viewers placed a higher premium on simplicity than on pure visual appeal.
Abstract: As information delivery systems on the Internet increasingly evolve into World Wide Web browsers, understanding key graphical elements of the browser interface is critical to the design of effective information display and access tools. Image maps are one such element, and this document describes a pilot study that collected, reviewed, and evaluated image maps from homepages of educational institutions. World Wide Web browsers offer a high level of interaction through hyperlinks, most of which involve text or a simple image. Image maps, on the other hand, are complex visuals that contain multiple hyperlinks to a number of information resources. Effective image maps offer clearly defined multiple links or "hot spots," present visual content that supports the theme or purpose of the site, permit backtracking and bookmarking, help the user build mental models of the interrelationships of information resources, do not take too long to load, and do not clutter the display. Researchers developed a survey form, for use by nine independent viewers, that sought to evaluate sites by those visual, navigational, and practical criteria. Fifty-five surveys on institutional homepages were collected from the nine viewers, and they revealed primarily that viewers placed a higher premium on simplicity than on pure visual appeal. Artistically captivating image maps often violated rules of simplicity; individual hot spots were hard to distinguish, choices were too multilayered to allow for a quick return to the starting point, and loading was slow. Reproductions of 11 institutional homepages accompany the text. Two other figures include a bar graph comparing average viewer ratings by site and a list of tips for image map design.

Journal ArticleDOI
TL;DR: The aggregation and presentation of medical data in multimedia documents is discussed, based on the implementation of an application for generating and presenting ultrasound examination reports, and some introductory material on object-oriented methods and on document architectures is presented.
Abstract: This paper discusses the aggregation and presentation of medical data in multimedia documents, based on the implementation of an application for generating and presenting ultrasound examination reports. The requirements for such an application, and how a document architecture supporting document templates and hyperlinks helps meet these requirements, are presented. Full exploitation of the features offered by multimedia documents also depends on the surrounding technical infrastructure and work organization. To allow a controlled and synchronized development of these with the multimedia document application itself, the introduction of multimedia document support for isolated tasks is suggested as a first step. Object-oriented techniques were used in all phases of the application development, and experience gained from this is presented. To increase readability to non-IT specialists, the paper includes some introductory material on object-oriented methods and on document architectures.

01 Jan 1995
TL;DR: An SGML-based syntax is presented, adapted from that used in the Microcosm system, which allows links and other hypertextual material to be kept in an abstract form in separate link bases, and is of great value in keeping hyperlinks relevant, up-to-date and in a com- mon link-base which is independent of the finally delivered electronic document format.
Abstract: SUMMARY The two complementary de facto standards for the publication of electronic documents are HTML on the World Wide Web and Adobe's Acrobat viewers using PDF (Portable Document Format). A brief overview is given of these two systems followed by an analysis of why the embedded, and very concrete, nature of their hypertext links leads to great problems with keeping the 'hyperstructure' up to date. An SGML-based syntax is presented, adapted from that used in the Microcosm system, which allows links and other hypertextual material to be kept in an abstract form in separate link bases. The links can then be interpreted or compiled at any stage and applied, in the correct format to some specific representation such as HTML or PDF. This approach is of great value in keeping hyperlinks relevant, up-to-date and in a com- mon link-base which is independent of the finally delivered electronic document format. Four models are discussed for allowing publishers to insert links into documents at a late stage, e.g at the time that the document is requested by the end-user from the publisher. These methods ensure that disseminated papers always contain up-to-date links. The techniques discussed have been implemented using a combination of Acrobat plug-ins, Web servers and Web browsers.