
Showing papers in "Communications of The ACM in 1994"


Journal ArticleDOI
TL;DR: Results from several prototype agents that have been built using an approach to building interface agents are presented, including agents that provide personalized assistance with meeting scheduling, email handling, electronic news filtering, and selection of entertainment.
Abstract: Publisher Summary Computers are becoming the vehicle for an increasing range of everyday activities. Acquisition of news and information, mail and even social interactions and entertainment have become more and more computer based. These technological developments have not been accompanied by a corresponding change in the way people interact with computers. Techniques from the field of AI, in particular so-called autonomous agents, can be used to implement a complementary style of interaction, which has been referred to as indirect management. Agents assist users in a range of different ways. They hide the complexity of difficult tasks, they perform tasks on the user's behalf, they can train or teach the user, they help different users collaborate, and they monitor events and procedures. The chapter focuses on an approach to building interface agents. It presents results from several prototype agents that have been built using this approach, including agents that provide personalized assistance with meeting scheduling, email handling, electronic news filtering, and selection of entertainment.
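
The learning mechanism behind these prototype agents is not spelled out in this summary; as a loose, hypothetical illustration of how an interface agent might learn from observing a user, the sketch below uses a memory-based situation-action scheme with invented features and thresholds.

```python
# Hedged sketch: a memory-based interface agent that predicts an action for a
# new situation from previously observed (situation, action) pairs.
# Feature names and confidence thresholds are illustrative, not from the paper.

def similarity(a, b):
    """Fraction of shared feature values between two situations (dicts)."""
    keys = set(a) | set(b)
    return sum(a.get(k) == b.get(k) for k in keys) / len(keys)

class InterfaceAgent:
    def __init__(self, do_it=0.9, tell_me=0.6):
        self.memory = []           # list of (situation, action) pairs
        self.do_it = do_it         # above this confidence: act autonomously
        self.tell_me = tell_me     # above this: suggest the action to the user

    def observe(self, situation, action):
        self.memory.append((situation, action))

    def suggest(self, situation):
        if not self.memory:
            return ("ask-user", None)
        scored = [(similarity(situation, s), act) for s, act in self.memory]
        confidence, action = max(scored)
        if confidence >= self.do_it:
            return ("do", action)
        if confidence >= self.tell_me:
            return ("suggest", action)
        return ("ask-user", None)

agent = InterfaceAgent()
agent.observe({"sender": "boss", "subject": "meeting"}, "schedule")
agent.observe({"sender": "list", "subject": "digest"}, "archive")
print(agent.suggest({"sender": "boss", "subject": "meeting"}))  # -> ("do", "schedule")
```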

2,582 citations


Journal ArticleDOI
TL;DR: In this approach to software development, application programs are written as software agents, i.e. software “components” that communicate with their peers by exchanging messages in an expressive agent communication language.
Abstract: The software world is one of great richness and diversity. Many thousands of software products are available to users today, providing a wide variety of information and services in a wide variety of domains. While most of these programs provide their users with significant value when used in isolation, there is increasing demand for programs that can interoperate – to exchange information and services with other programs and thereby solve problems that cannot be solved alone. Part of what makes interoperation difficult is heterogeneity. Programs are written by different people, at different times, in different languages; and, as a result, they often provide different interfaces. The difficulties created by heterogeneity are exacerbated by dynamics in the software environment. Programs are frequently rewritten; new programs are added; old programs removed. Agent-based software engineering was invented to facilitate the creation of software able to interoperate in such settings. In this approach to software development, application programs are written as software agents, i.e. software “components” that communicate with their peers by exchanging messages in an expressive agent communication language. Agents can be as simple as subroutines; but typically they are larger entities with some sort of persistent control (e.g. distinct control threads within a single address space, distinct processes on a single machine, or separate processes on different machines). The salient feature of the language used by agents is its expressiveness. It allows for the exchange of data and logical information, individual commands and scripts (i.e. programs). Using this language, agents can communicate complex information and goals, directly or indirectly “programming” each other in useful ways. Agent-based software engineering is often compared to object-oriented programming. Like an “object”, an agent provides a message-based interface independent of its internal data structures and algorithms. The primary difference between the two approaches lies in the language of the interface. In general object-oriented programming, the meaning of a message can vary from one object to another. In agent-based software engineering, agents use a common language with an agent-independent semantics. The concept of agent-based software engineering raises a number of important questions.
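
The agent communication language is only characterized above; as a loose, hypothetical sketch of what a KQML-style exchange between two agents might look like (the performative names follow common KQML usage, while the agents, transport, and content encoding are invented for illustration):

```python
# Hedged sketch: a KQML-style message exchange between two toy agents.
# In the paper's setting the content would be expressed in a declarative
# content language; here a plain dict stands in for it.

import json

def make_message(performative, sender, receiver, content, language="json"):
    """Build an agent-communication-language message as a dict."""
    return {
        "performative": performative,   # e.g. ask-one, tell, achieve
        "sender": sender,
        "receiver": receiver,
        "language": language,           # language of the content field
        "content": content,
    }

class StockAgent:
    """A toy agent that answers price queries with 'tell' messages."""
    prices = {"ACME": 42.0}

    def handle(self, msg):
        if msg["performative"] == "ask-one":
            symbol = msg["content"]["symbol"]
            reply = {"symbol": symbol, "price": self.prices.get(symbol)}
            return make_message("tell", msg["receiver"], msg["sender"], reply)
        return make_message("sorry", msg["receiver"], msg["sender"], None)

query = make_message("ask-one", "trader-agent", "stock-agent", {"symbol": "ACME"})
print(json.dumps(StockAgent().handle(query), indent=2))
```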

2,373 citations


Journal ArticleDOI
TL;DR: Today, the authors have microwave ovens and washing machines that can figure out on their own what settings to use to perform their tasks optimally; cameras that come close to professional photographers in picture-taking ability; and many other products that manifest an impressive capability to reason, make intelligent decisions, and learn from experience.
Abstract: Prof. Zadeh presented a comprehensive lecture on fuzzy logic, neural networks, and soft computing. In addition, he led a spirited discussion of how these relatively new techniques may be applied to safety evaluation of time-variant and nonlinear structures based on identification approaches. The abstract of his lecture is given as follows.

1,390 citations


Journal ArticleDOI
TL;DR: This chapter discusses challenges for developers while using groupware applications, noting that most interest in groupware development is found among the developers and users of commercial off-the-shelf products who previously focused exclusively on single-user applications.
Abstract: Publisher Summary This chapter discusses challenges for developers while using groupware applications. To understand the problems encountered by groupware applications, it is essential to realize that most interest in groupware development is found among the developers and users of commercial off-the-shelf products who previously focused exclusively on single-user applications. In addition to technical challenges, groupware poses a fundamental problem for product developers: Because individuals interact with a groupware application, it has all the interface design challenges of single-user applications, supplemented by a host of new challenges arising from its direct involvement in group processes. A groupware application never provides precisely the same benefit to every group member. Costs and benefits depend on preferences, prior experience, roles, and assignments. Although a groupware application is expected to provide a collective benefit, some people must adjust more than others. Ideally, each individual benefits, even if they do not benefit equally. Most groupware requires some people to do additional work to enter or process information required or produced by the application.

1,343 citations


Journal ArticleDOI
Joseph Bates
TL;DR: The idea of believability has long been studied and explored in literature, theater, film, radio drama, and other media, and traditional character animators are among those artists who have sought to create believable characters.
Abstract: There is a notion in the Arts of the "believable character." It does not mean an honest or reliable character, but one that provides the illusion of life, thus permitting the audience's suspension of disbelief. The idea of believability has long been studied and explored in literature, theater, film, radio drama, and other media. Traditional character animators are among those artists who have sought to create believable characters, and the Disney animators of the 1930s made great strides toward this goal. The first page of the enormous classic reference work on Disney animation [12] begins with these words:

1,202 citations


Journal ArticleDOI
TL;DR: The Dexter hypertext reference model as mentioned in this paper is an attempt to capture, both formally and informally, the important abstractions found in a wide range of existing and future hypertext systems, providing a principled basis for comparing systems as well as for developing interchange and interoperability standards.
Abstract: This paper presents the Dexter hypertext reference model. The Dexter model is an attempt to capture, both formally and informally, the important abstractions found in a wide range of existing and future hypertext systems. The goal of the model is to provide a principled basis for comparing systems as well as for developing interchange and interoperability standards. The model is divided into three layers. The storage layer describes the network of nodes and links that is the essence of hypertext. The runtime layer describes mechanisms supporting the user's interaction with the hypertext. The within-component layer covers the content and structures within hypertext nodes. The focus of the model is on the storage layer as well as on the mechanisms of anchoring and presentation specification that form the interfaces between the storage layer and the within-component and runtime layers, respectively. The model is formalized in Z, a specification language based on set theory. The paper briefly discusses the issues involved in comparing the characteristics of existing systems against the model.
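
The abstract names the model's layers without showing them; a minimal sketch, assuming a plain-data reading of the storage-layer vocabulary (components, anchors, links with multi-ended specifiers), might look like the following. It is not the paper's Z specification, and the field names are paraphrases.

```python
# Hedged sketch of Dexter-style storage-layer abstractions as plain data classes.
# Field names paraphrase the model's vocabulary; this is not the Z formalization.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Anchor:
    anchor_id: str
    value: str               # within-component location, opaque to the storage layer

@dataclass
class Component:
    uid: str
    content: str             # within-component layer: the actual node content
    anchors: List[Anchor] = field(default_factory=list)
    presentation_spec: dict = field(default_factory=dict)   # hint to the runtime layer

@dataclass
class Specifier:
    component_uid: str
    anchor_id: str
    direction: str           # e.g. "FROM", "TO", "BIDIRECT", "NONE"

@dataclass
class Link:
    uid: str
    specifiers: List[Specifier]   # a link may connect more than two ends

# A two-node hypertext with one link between anchored spans.
node_a = Component("c1", "Hypertext is non-linear text.", [Anchor("a1", "chars 0-9")])
node_b = Component("c2", "Dexter separates storage and runtime.", [Anchor("a1", "chars 0-6")])
link = Link("l1", [Specifier("c1", "a1", "FROM"), Specifier("c2", "a1", "TO")])
print(link)
```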

1,075 citations


Journal ArticleDOI
Tim Berners-Lee1, Robert Cailliau1, Ari Luotonen1, Henrik Frystyk Nielsen1, Arthur Secret1 
TL;DR: The World Wide Web (W3) as described in this paper was developed to be a pool of human knowledge that allows collaborators at remote sites to share their ideas and all aspects of a common project.
Abstract: Publisher Summary This chapter discusses the history and growth of World Wide Web (W3). The World-Wide Web was developed to be a pool of human knowledge, which would allow collaborators in remote sites to share their ideas and all aspects of a common project. Physicists and engineers at CERN, the European Particle Physics Laboratory in Geneva, Switzerland, collaborate with many other institutes to build the software and hardware for high-energy physics research. The idea of the Web was prompted by positive experience of a small “home-brew” personal hypertext system used for keeping track of personal information on a distributed project. The Web was designed so that if it was used independently for two projects, and later relationships were found between the projects, then no major or centralized changes would have to be made, but the information could smoothly reshape to represent the new state of knowledge. This property of scaling has allowed the Web to expand rapidly from its origins at CERN across the Internet irrespective of boundaries of nations or disciplines.

1,065 citations


Journal ArticleDOI
TL;DR: MBone, the Multicast Backbone, is a virtual network on "top" of the Internet providing a multicasting facility to the Internet; it has also been the cause of severe problems in the NSFnet backbone, saturation of major international links rendering them useless, as well as sites being completely disconnected due to Internet Control Message Protocol (ICMP) responses flooding the networks.
Abstract: The first thing many researchers like me do when they come to work is read their email. The second thing on my list is to check what is on the MBone, the Multicast Backbone, which is a virtual network on "top" of the Internet providing a multicasting facility to the Internet. There might be video from the Space Shuttle, a seminar from Xerox, a plenary session from an interesting conference or a software demonstration for the Swedish prime minister. It all started in March 1992 when the first audiocast on the Internet took place from the Internet Engineering Task Force (IETF) meeting in San Diego. At that event 20 sites listened to the audiocast. Two years later, at the IETF meeting in Seattle, about 567 hosts in 15 countries tuned in to the two parallel broadcasting channels (audio and video) and also talked back (audio) and joined the discussions! The networking community now takes it for granted that the IETF meetings will be distributed via MBone. MBone has also been used to distribute experimental data from a robot at the bottom of the Sea of Cortez (as will be described later) as well as a late Saturday night feature movie, WAX or the Discovery of Television Among the Bees by David Blair. As soon as some crucial tools existed, the usage just exploded. Many people started using MBone for conferences, weather maps, research experiments, and to follow the Space Shuttle, for example. At the Swedish Institute of Computer Science (SICS) we saw our contribution to the Swedish University Network, SUNET, increase from 26GB per month in February 1993 to 69GB per month in March 1993. This was mainly due to multicast traffic, as SICS at that time was the major connection point between the U.S. and Europe in MBone. MBone has also (in)directly been the cause of severe problems in the NSFnet backbone, saturation of major international links rendering them useless, as well as sites being completely disconnected due to Internet Control Message Protocol (ICMP) responses flooding the networks. We will expand on this later in this article.

771 citations



Journal ArticleDOI
Manojit Sarkar1, Marc H. Brown
TL;DR: This paper describes a system for viewing and browsing graphs using a software analog of a fisheye lens and describes a more general transformation that allows global information about the graph to affect the view.
Abstract: A fisheye camera lens is a very wide angle lens that magnifies nearby objects while shrinking distant objects. It is a valuable tool for seeing both "local detail" and "global context" simultaneously. This paper describes a system for viewing and browsing graphs using a software analog of a fisheye lens. We first show how to implement such a view using solely geometric transformations. We then describe a more general transformation that allows global information about the graph to affect the view. Our general transformation is a fundamental extension to previous research in fisheye views.
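
As a hedged illustration of a purely geometric fisheye view, the following one-dimensional transformation magnifies coordinates near a focus and compresses those far from it; the specific distortion formula and parameter values are assumptions chosen for illustration, not a transcription of the paper.

```python
# Hedged sketch of a geometric 1-D fisheye transformation: points near the
# focus are spread apart, distant points are compressed, and the boundaries
# stay fixed. The distortion function g(x) = (d + 1) * x / (d * x + 1) is a
# commonly cited form; treat the exact parameterization as an assumption.

def fisheye(x, focus, d=3.0, lo=0.0, hi=1.0):
    """Map coordinate x in [lo, hi] to its fisheye position around `focus`."""
    boundary = hi if x >= focus else lo
    span = boundary - focus
    if span == 0:
        return focus
    normalized = (x - focus) / span            # distance to focus, in [0, 1]
    magnified = (d + 1) * normalized / (d * normalized + 1)
    return focus + magnified * span

focus = 0.5
for x in [0.0, 0.25, 0.45, 0.5, 0.55, 0.75, 1.0]:
    print(f"{x:.2f} -> {fisheye(x, focus):.3f}")
```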

614 citations


Journal ArticleDOI
TL;DR: Just four years ago, the only widely reported commercial application of neural network technology outside the financial industry was the airport baggage explosive detection system developed at Science Applications International Corporation (SAIC).
Abstract: Just four years ago, the only widely reported commercial application of neural network technology outside the financial industry was the airport baggage explosive detection system [27] developed at Science Applications International Corporation (SAIC). Since that time scores of industrial and commercial applications have come into use, but the details of most of these systems are closely guarded corporate secrets. This accelerating trend is due in part to the availability of an increasingly wide array of dedicated neural network hardware. This hardware takes the form either of accelerator cards for PCs and workstations or of a large number of integrated circuits implementing digital and analog neural networks, either currently available or in the final stages of design.

Journal ArticleDOI
TL;DR: Etzioni, Lesh, and Segal developed the Internet Softbot (software robot), which uses a UNIX shell and the World Wide Web to interact with a wide range of internet resources.
Abstract: The Internet Softbot (software robot) is a fully implemented AI agent developed at the University of Washington (Etzioni, Lesh, & Segal 1993). The softbot uses a UNIX shell and the World-Wide Web to interact with a wide range of internet resources. The softbot's effectors include ftp, telnet, mail, and numerous file manipulation commands. Its sensors include internet facilities such as archie, gopher, netfind, and many more. The softbot is designed to incorporate new facilities into its repertoire as they become available. The softbot's "added value" is three-fold. First, it provides an integrated and expressive interface to the internet. Second, the softbot dynamically chooses which facilities to invoke, and in what sequence. For example, the softbot might use netfind to determine David McAllester's e-mail address. Since it knows that netfind requires a person's institution as input, the softbot would first search bibliographic databases for a technical report by McAllester which would reveal his institution, and then feed that information to netfind. Third, the softbot fluidly backtracks from one facility to another based on information collected at run time. As a result, the softbot's behavior changes in response to transient system conditions (e.g., the UUCP gateway is down). In this article, we focus on the ideas underlying the softbot-based interface.
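
The chaining behavior described above (establish the institution first, then call netfind, backtracking if a step fails) can be sketched as follows; the facility implementations are stubs, the function names are hypothetical, and only the control structure is meant to be illustrative.

```python
# Hedged sketch of softbot-style chaining: to satisfy a goal, pick a facility,
# fill its missing inputs by invoking other facilities, and back off if a step
# returns nothing. The facilities below are stubs, not real service calls.

def search_bibliography(person):
    """Stub: pretend a technical-report search reveals the person's institution."""
    return {"David McAllester": "MIT"}.get(person)

def netfind(person, institution):
    """Stub: netfind needs an institution as input to locate an address."""
    if institution:
        return f"{person.split()[-1].lower()}@{institution.lower()}.edu"
    return None

def find_email(person):
    # netfind requires the institution, so establish that subgoal first.
    institution = search_bibliography(person)
    if institution is None:
        return None                      # backtrack point: try another facility
    return netfind(person, institution)

print(find_email("David McAllester"))    # fabricated stub output, for illustration only
```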

Journal ArticleDOI
TL;DR: Measurable usability parameters fall into two broad categories: subjective user preference measures, assessing how much the users like the system, and objective performance measures, which measure how capable the users are at using the system.
Abstract: Simplistically stated, usability engineering aims at improving interactive systems and their user interfaces. Defined slightly more precisely [8], usability is a general concept that cannot be measured but is related to several usability parameters that can be measured. Measurable usability parameters fall into two broad categories: subjective user preference measures, assessing how much the users like the system, and objective performance measures, which measure how capable the users are at using the system

Journal ArticleDOI
Marc Rettig1
TL;DR: The technique of building user interface prototypes on paper and testing them with real users is called low-fidelity (lo-fi) prototyping, which is a simple and effective tool that has failed to come into general use in the software community.
Abstract: The technique of building user interface prototypes on paper and testing them with real users is called low-fidelity (lo-fi) prototyping. Lo-fi prototyping is a simple and effective tool that has failed to come into general use in the software community. Paper prototyping is potentially a breakthrough idea for organizations that have never tried it, since it allows developers to demonstrate the behavior of an interface very early in development, and test designs with real users. If quality is partially a function of the number of iterations and refinements a design undergoes before it enters the market, lo-fi prototyping is a technique that can dramatically increase quality. It is fast, it brings results early in development, and it allows a team to try far more ideas than they could with high-fidelity prototypes. The steps for building a lo-fi prototype include: 1. Assemble a kit. 2. Set a deadline. 3. Construct models, not illustrations. Steps for preparing for and conducting a test of the prototype are also discussed.

Journal ArticleDOI
TL;DR: The design of one particular learning assistant is described: a calendar manager, called CAP (Calendar APprentice), that learns user scheduling preferences from experience and suggests that machine learning methods may play an important role in future personal software assistants.
Abstract: Personal software assistants that help users with tasks like finding information, scheduling calendars, or managing work-flow will require significant customization to each individual user. For example, an assistant that helps schedule a particular user’s calendar will have to know that user’s scheduling preferences. This paper explores the potential of machine learning methods to automatically create and maintain such customized knowledge for personal software assistants. We describe the design of one particular learning assistant: a calendar manager, called CAP (Calendar APprentice), that learns user scheduling preferences from experience. Results are summarized from approximately five user-years of experience, during which CAP has learned an evolving set of several thousand rules that characterize the scheduling preferences of its users. Based on this experience, we suggest that machine learning methods may play an important role in future personal software assistants.
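
CAP's learning method is not detailed in this abstract; as a loose illustration of inducing scheduling-preference rules from past meetings, a majority-vote sketch (features, rule form, and data invented for illustration) might look like this:

```python
# Hedged sketch: propose simple scheduling-preference rules from past meetings
# by majority vote over attribute values. CAP's actual learner is not
# reproduced here; the features and rule format are illustrative.

from collections import Counter, defaultdict

past_meetings = [
    {"attendee": "advisor", "day": "Mon", "time": "10:00"},
    {"attendee": "advisor", "day": "Mon", "time": "10:00"},
    {"attendee": "advisor", "day": "Wed", "time": "10:00"},
    {"attendee": "reading-group", "day": "Fri", "time": "15:00"},
]

def learn_rules(meetings, target="time"):
    """For each attendee, propose the most frequent value of `target`."""
    by_attendee = defaultdict(Counter)
    for m in meetings:
        by_attendee[m["attendee"]][m[target]] += 1
    rules = {}
    for attendee, counts in by_attendee.items():
        value, support = counts.most_common(1)[0]
        rules[attendee] = (value, support / sum(counts.values()))
    return rules

for attendee, (time, confidence) in learn_rules(past_meetings).items():
    print(f"IF attendee = {attendee} THEN suggest time = {time} "
          f"(confidence {confidence:.2f})")
```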

Journal ArticleDOI
TL;DR: It turns out that the threat model commonly used by cryptosystem designers was wrong: most frauds were not caused by cryptanalysis or other technical attacks, but by implementation errors and management failures, suggesting that a paradigm shift is overdue in computer security.
Abstract: Designers of cryptographic systems are at a disadvantage to most other engineers, in that information on how their systems fail is hard to get: their major users have traditionally been government agencies, which are very secretive about their mistakes.In this article, we present the results of a survey of the failure modes of retail banking systems, which constitute the next largest application of cryptology. It turns out that the threat model commonly used by cryptosystem designers was wrong: most frauds were not caused by cryptanalysis or other technical attacks, but by implementation errors and management failures. This suggests that a paradigm shift is overdue in computer security; we look at some of the alternatives, and see some signs that this shift may be getting under way.

Journal ArticleDOI
TL;DR: The common theme of these efforts has been an interest in looking at the brain as a model of a parallel computational device very different from a traditional serial computer.
Abstract: Interest in the study of neural networks has grown remarkably in the last several years. This effort has been characterized in a variety of ways: as the study of brain-style computation, connectionist architectures, parallel distributed-processing systems, neuromorphic computation, artificial neural systems. The common theme to these efforts has been an interest in looking at the brain as a model of a parallel computational device very different from that of a traditional serial computer

Journal ArticleDOI
TL;DR: Software agents are the authors' best hope during the 1990s for obtaining more power and utility from personal computers, but people do not want generic agents; they want help with their jobs, their tasks, their goals.
Abstract: Software agents are our best hope during the 1990s for obtaining more power and utility from personal computers. Agents have the potential to participate actively in accomplishing tasks, rather than serving as passive tools as do today's applications. However, people do not want generic agents; they want help with their jobs, their tasks, their goals. Agents must be flexible enough to be tailored to each individual. The most flexible way to tailor a software entity is to program it. The problem is that programming is too difficult for most people today. Consider:

Journal ArticleDOI
TL;DR: Darwinian evolution has spawned a family of computational methods called genetic algorithms (GAs) or evolutionary algorithms (EAs) that are derived from natural examples and used in computer science and engineering.
Abstract: Before there were computers, there was thinking about the mind as a computer-as a machine. And in this way, computer science and engineering trace their roots to using natural examples. Within these fields of endeavor, AI drew its initial inspiration from nature, and work on computer-simulated brains received the lion's share of the early attention. But even back then, nature's other metaphor of adaptation planted a different seed that is now blossoming around the globe. Specifically, Darwinian evolution has spawned a family of computational methods called genetic algorithms (GAs) or evolutionary algorithms (EAs)
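
As a hedged, minimal illustration of the method family, the following genetic algorithm evolves bitstrings toward a toy fitness function; the population size, rates, and operators are arbitrary textbook choices, not taken from the article.

```python
# Hedged sketch: a minimal genetic algorithm on a toy problem (maximize the
# number of 1s in a bitstring) showing selection, crossover, and mutation.

import random
random.seed(0)

LENGTH, POP, GENERATIONS, MUTATION = 20, 30, 40, 0.02

def fitness(bits):
    return sum(bits)                         # "one-max": count the 1 bits

def select(population):
    """Tournament selection of size 2."""
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(p1, p2):
    point = random.randint(1, LENGTH - 1)    # single-point crossover
    return p1[:point] + p2[point:]

def mutate(bits):
    return [1 - b if random.random() < MUTATION else b for b in bits]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(POP)]
for _ in range(GENERATIONS):
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP)]
best = max(population, key=fitness)
print("best fitness:", fitness(best), "of", LENGTH)
```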

Journal ArticleDOI
TL;DR: A central hypothesis of this paper is that a parsing-oriented recognition model based on formal, predominantly structural patterns of programming language features is necessary but insufficient for the general concept assignment problem.
Abstract: A person understands a program because he is able to relate the structures of the program and its environment to his conceptual knowledge about the world. The problem of discovering individual human-oriented concepts and assigning them to their implementation-oriented counterparts for a given program is the concept assignment problem. We argue that the solution to this problem requires methods that have a strong plausible reasoning component based on a priori knowledge. We illustrate these ideas through example scenarios using an existing design recovery system called DESIRE.
1. Human understanding and the concept assignment problem
A person understands a program when he is able to explain the program, its structure, its behavior, its effects on its operational context, and its relationships to its application domain in terms that are qualitatively different from the tokens used to construct the source code of the program. That is, it is qualitatively different for me to claim that a program "reserves an airline seat" than for me to assert that "if (seat = request(flight)) && available(seat) then reserve(seat,customer)." Apart from the obvious differences of level of detail and formality, the first case expresses computational intent in human-oriented terms, terms that live in a rich context of knowledge about the world. In the second case, the vocabulary and grammar are narrowly restricted, formally controlled, and do not inherently reference the human-oriented context of knowledge about the world. The first expression of computational intent is designed for succinct, intentionally ambiguous (i.e., informal), human-level communication, whereas the second is designed for automated treatment, e.g., program verification or compilation. Both forms of the information must be present for a human to manipulate programs (create, maintain, explain, re-engineer, reuse or document) in any but the most trivial way. Moreover, one must understand the association between the formal and the informal expressions of computational intent. If a person tries to build an understanding of an unfamiliar program or portion of a program, he or she must create or reconstruct the informal, human-oriented expression of computational intent through a process of analysis, experimentation, guessing and crossword-puzzle-like assembly. Importantly, as the informal concepts are discovered and interrelated concept by concept, they are simultaneously associated with or assigned to the specific implementation structures within the program (and its operational context) that are the concrete instances of those concepts. The problem of discovering these human-oriented concepts and assigning them to their implementation instances within a program is the concept assignment problem [4], and we address this problem in this paper.
2. The concept assignment problem
2.1. Programming-Oriented Concepts vs. Human-Oriented Concepts
A central hypothesis of this paper is that a parsing-oriented recognition model based on formal, predominantly structural patterns of programming language features is necessary but insufficient for the general concept assignment problem. While parsing-oriented recognition schemes certainly play a role in program understanding, the signatures of most human-oriented concepts are not constrained in ways that are convenient for parsing technologies. (See the sidebar on Automatic Concept Recognition.) So there is more to program understanding than parsing. In particular, there is the general concept assignment problem, which requires a different approach. More specifically, parsing technologies lend themselves nicely to the recognition of programming
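
As a loose illustration of what a concept assignment produces, the sketch below records the association between a human-oriented concept and the implementation structures that realize it, reusing the airline-seat fragment from the text; DESIRE's plausible-reasoning machinery is not reproduced, and all field names are hypothetical.

```python
# Hedged sketch: a concept assignment is, at minimum, a recorded association
# between a human-oriented concept and its implementation-oriented evidence.
# The example reuses the airline-seat fragment quoted in the text.

concept_assignment = {
    "reserve an airline seat": {
        "evidence": ["identifier 'seat'", "identifier 'flight'",
                     "call to reserve(seat, customer)"],
        "implementation": "if (seat = request(flight)) && available(seat) "
                          "then reserve(seat, customer)",
        "source_location": "booking module (illustrative)",
    },
}

def explain(concept):
    entry = concept_assignment[concept]
    print(f"Concept: {concept}")
    print(f"  realized by: {entry['implementation']}")
    print(f"  supporting evidence: {', '.join(entry['evidence'])}")

explain("reserve an airline seat")
```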

Journal ArticleDOI
Donald A. Norman1
TL;DR: The new crop of intelligent agents differs from the automated devices of earlier eras in its computational power: these agents take over human tasks, and they interact with people in human-like ways, perhaps with a form of natural language, perhaps with animated graphics or video.
Abstract: Agents occupy a strange place in the realm of technology, generating fear, fiction, and extravagant claims. The reasons are not hard to find. The concept of an agent, especially when modified by the term intelligent, brings forth images of human-like automatons, working without supervision

Journal ArticleDOI
TL;DR: The Amsterdam Hypermedia Model (AHM) as discussed by the authors is a general model for hypermedia that includes attention to timing and composite objects as well as implementation issues such as channels and having the sources of components residing over a distributed system.
Abstract: From Computing Reviews, by Jeanine Meyer The purpose of this paper is to convince the reader of the need for a general model for hypermedia and present the Amsterdam hypermedia model (AHM) as fulfilling that need. Hypertext, which is described by the authors as a 'relatively mature discipline,' has the Dexter model, but the authors show that enhancing that model for hypermedia is not a straightforward task. In particular, it requires attention to issues of synchronization. The AHM model includes attention to timing and composite objects as well as implementation issues such as channels and having the sources of components residing over a distributed system. The paper features one example and also describes an authoring and presentation environment called CMIFed. It is generally well written. The paper can be understood even if one has not studied the Dexter hypertext reference model or the CMIF multimedia document model and, in fact, this paper could serve as an introduction to the issues involved. Too much of the focus, however, is on other systems and not on what AHM actually is. The authors do not demonstrate the model by using it to express the featured example. Moreover, to really merit the term 'model,' AHM should be shown as serving a substantial role in describing and implementing applications in terms of two or more distinct authoring or runtime environments. This is not done, though it appears well within the experience and understanding of the authors.
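
As a hedged sketch of the ingredients the review attributes to AHM (composite components, timing relations between them, channels, and components whose sources may be distributed), the data structures below are illustrative; the field names are not AHM's formal definitions.

```python
# Hedged sketch: composite hypermedia components with timing relations and
# playout channels, represented as plain data. Names are illustrative only.

from dataclasses import dataclass, field
from typing import List

@dataclass
class AtomicComponent:
    name: str
    source: str               # possibly remote: sources may live on different hosts
    channel: str              # playout channel, e.g. "video-main", "captions"
    duration_s: float

@dataclass
class SyncArc:
    """Timing relation: `target` starts `offset_s` after `source` starts."""
    source: str
    target: str
    offset_s: float

@dataclass
class CompositeComponent:
    name: str
    children: List[AtomicComponent] = field(default_factory=list)
    sync_arcs: List[SyncArc] = field(default_factory=list)

scene = CompositeComponent(
    "city-tour",
    children=[
        AtomicComponent("tour-video", "http://example.org/tour.mpg", "video-main", 30.0),
        AtomicComponent("narration", "http://example.org/tour.au", "audio-main", 30.0),
        AtomicComponent("caption", "local:caption.txt", "captions", 5.0),
    ],
    sync_arcs=[SyncArc("tour-video", "narration", 0.0),
               SyncArc("tour-video", "caption", 10.0)],
)
print(f"{scene.name}: {len(scene.children)} components, "
      f"{len(scene.sync_arcs)} timing relations")
```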


Journal Article
TL;DR: Computational simulation is used in scientific research and engineering development to augment theoretical analysis, experimentation, and testing to solve problems that are far too complex to yield to mathematical analyses.
Abstract: Scientific research and engineering development are relying increasingly on computational simulation to augment theoretical analysis, experimentation, and testing. Many of today's problems are far too complex to yield to mathematical analyses. Likewise, large-scale experimental testing is often infeasible for a variety of economic, political, or environmental reasons. At the very least, testing adds to the time and expense of product development

Journal ArticleDOI
TL;DR: OSPF, a TCP/IP routing protocol, recently has been enhanced to allow the routing of IP multicast datagrams; the resulting protocol, capable of routing both unicast and multicast traffic, is called the Multicast Extensions for OSPF, or MOSPF.
Abstract: OSPF, a TCP/IP routing protocol, recently has been enhanced to allow the routing of IP multicast datagrams. The resulting protocol, capable of routing both unicast and multicast traffic, has been called the Multicast Extensions for OSPF, or MOSPF. (For a brief introduction to TCP/IP routing, and OSPF in particular, see the sidebar.) Multicasting is the ability of an application to send a single message to the network and have it be delivered to multiple recipients at possibly different locations. A good example of this is a multisite audio/video conference. Distributed simulations and games, such as a tank battle simulation involving several geographically separated participants, are other examples. Commercially, distributed databases such as stock quotations generated in a central location and then delivered to many separate traders are good candidates for multicast. IP multicast is an extension of LAN multicasting to a TCP/IP internet. IP multicast permits an IP host to send a single datagram (called an IP multicast datagram) that will be delivered to multiple destinations. IP multicast datagrams are identified as those packets whose destinations are class D IP addresses (i.e., addresses whose first byte lies in the range 224-239 inclusive). Each class D address is said to represent a multicast group. The extensions required of a computer running TCP/IP in order to participate in multicasting have been defined for some time. A protocol called the Internet Group Management Protocol (IGMP) is used by TCP/IP applications in order to join and leave particular multicast groups. An IP datagram addressed to the group address will be delivered to all group members, assuming that there are multicast routers (for example, routers running MOSPF) connecting the source and group members. MOSPF uses IGMP to discover the location of group members. The group members are then pinpointed in the routing database, which is essentially a map of the internetwork. This in turn allows the MOSPF routers to calculate an efficient path for each multicast datagram.
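
The class D address test and the group-join step described above can be illustrated with standard socket calls; joining with IP_ADD_MEMBERSHIP is what causes the host's IP stack to emit the IGMP membership report that multicast routers use to locate group members. The group address and port below are arbitrary example values.

```python
# Hedged sketch: check for a class D (multicast) address and join a multicast
# group on an ordinary host using the standard socket API.

import socket

def is_class_d(address: str) -> bool:
    """Class D addresses have a first byte in the range 224-239 inclusive."""
    first_byte = int(address.split(".")[0])
    return 224 <= first_byte <= 239

GROUP, PORT = "224.1.2.3", 5004        # arbitrary example group and port
assert is_class_d(GROUP)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Request membership in the group on the default interface (0.0.0.0);
# the kernel translates this into IGMP messages on the wire.
membership = socket.inet_aton(GROUP) + socket.inet_aton("0.0.0.0")
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

print(f"joined {GROUP}:{PORT}; datagrams sent to the group will now be delivered here")
```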


Journal ArticleDOI
TL;DR: This article maps the testability terrain for object-oriented development to assist the reader in finding relatively shorter and cheaper paths to high reliability.
Abstract: Testability is the relative ease and expense of revealing software faults. This article maps the testability terrain for object-oriented development to assist the reader in finding relatively shorter and cheaper paths to high reliability. Software testing adds value by revealing faults. It is fundamentally an economic problem characterized by a continuum between two goals. A reliability-driven process uses testing to produce evidence that a pre-release reliability goal has been met. Time and money are expended on testing until the reliability goal is attained. This view of testing is typically associated with stringent, quantifiable reliability requirements. Other things being equal, a more testable system will reduce the time and cost needed to meet reliability goals. A resource-limited process views testing as a way to remove as many rough edges from a system as time or money permits. Testing continues until available test resources have been expended. Measurement of test adequacy or system reliability is incidental to the decision to release the system. This is the typical view of testing. Other things being equal, a more testable system provides increased reliability for a fixed testing budget.

Journal ArticleDOI
TL;DR: The cognitive structure of graphics is examined and a structural classification of visual representations of graphs and images is reported; if visualization is to continue to advance as an interdisciplinary science, it must become more than a grab bag of techniques for displaying data.
Abstract: Why do we often prefer glancing at a graph to studying a table of numbers? What might be a better graphic than either a graph or table for seeing how a biological process unfolds with time? To begin to answer these kinds of questions we examine the cognitive structure of graphics and report a structural classification of visual representations. McCormick, DeFanti, and Brown [16] define visualization as "the study of mechanisms in computers and in humans which allow them in concert to perceive, use, and communicate visual information." Thus, visualization includes the study of both image synthesis and image understanding. Given this broad focus, it is not surprising that visualization spans many academic disciplines, scientific fields, and multiple domains of inquiry. However, if visualization is to continue to advance as an interdisciplinary science, it must become more than a grab bag of techniques for displaying data. Our research focuses on classifying visual information. Classification lies at the heart of every scientific field. Classifications structure domains of systematic inquiry and provide concepts for developing theories to identify anomalies and to predict future research needs. Extant taxonomies of graphs and images can be characterized as either functional or structural. Functional taxonomies focus on the intended use and purpose of the graphic material. For example, consider the functional classification developed by Macdonald-Ross [14]. One of the main categories is technical diagrams used for maintaining, operating, and troubleshooting complex equipment. Other examples of functional classifications can be found in Tufte [21]. A functional classification does not reflect the physical structure of images, nor is it intended to correspond to an underlying representation in memory [1]. In contrast, structural categories are well learned and are derived from exemplar learning. They focus on the form of the image rather than its content. Rankin [18] and Bertin [2] developed such structural categories of graphs. Rankin used the number of dimensions and graph forms to determine his classification.


Journal ArticleDOI
TL;DR: An alternative approach is introduced which uses the document collections themselves as a basis for the text analysis, together with sophisticated text matching operations carried out at several levels of detail.
Abstract: In many operational environments, large text files must be processed covering a wide variety of different topic areas. Aids must then be provided to the user that permit collection browsing and make it possible to locate particular items on demand. The conventional text analysis methods based on preconstructed knowledge-bases and other vocabulary-control tools are difficult to apply when the subject coverage is unrestricted. An alternative approach, applicable to text collections in any subject area, is introduced which uses the document collections themselves as a basis for the text analysis, together with sophisticated text matching operations carried out at several levels of detail. Methods are described for relating semantically similar pieces of text, and for using the resulting hypertext structures for collection browsing and information retrieval.
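
As a hedged sketch of the kind of text matching such an approach relies on, the following compares term-frequency vectors with cosine similarity; the paper's matching operates at several levels of detail (sentences, paragraphs, sections), which this simple pairwise comparison does not capture.

```python
# Hedged sketch: relate semantically similar pieces of text with a basic
# vector-space comparison (term-frequency vectors and cosine similarity).

import math
import re
from collections import Counter

def term_vector(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

passages = [
    "Hypertext links connect related passages of text.",
    "Links between related text passages form a hypertext for browsing.",
    "Fuzzy logic handles approximate reasoning in control systems.",
]

query = term_vector(passages[0])
for p in passages[1:]:
    print(f"{cosine(query, term_vector(p)):.2f}  {p}")
```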