
Showing papers in "Communications of The ACM in 1995"


Journal ArticleDOI
TL;DR: WordNet provides a more effective combination of traditional lexicographic information and modern computing, and is an online lexical database designed for use under program control.
Abstract: Because meaningful sentences are composed of meaningful words, any system that hopes to process natural languages as people do must have information about words and their meanings. This information is traditionally provided through dictionaries, and machine-readable dictionaries are now widely available. But dictionary entries evolved for the convenience of human readers, not for machines. WordNet provides a more effective combination of traditional lexicographic information and modern computing. WordNet is an online lexical database designed for use under program control. English nouns, verbs, adjectives, and adverbs are organized into sets of synonyms, each representing a lexicalized concept. Semantic relations link the synonym sets [4].
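The abstract sketches WordNet's organization, synonym sets (synsets) representing lexicalized concepts and linked by semantic relations, without showing what such a structure looks like under program control. Below is a minimal, hypothetical sketch of that idea in Python; the class, fields, sample entries, and the choice of a single relation (hypernymy) are illustrative inventions, not WordNet's actual schema or API.

```python
# Toy sketch of a WordNet-style lexical database: words are grouped into
# synonym sets (synsets), and a semantic relation (here only hypernymy,
# "is-a") links the synsets themselves. All entries are invented.
from dataclasses import dataclass, field


@dataclass
class Synset:
    words: frozenset                                  # synonyms lexicalizing one concept
    gloss: str                                        # short definition
    hypernyms: list = field(default_factory=list)     # links to more general synsets


canine = Synset(frozenset({"canine", "canid"}), "animal of the dog family")
dog = Synset(frozenset({"dog", "domestic dog"}), "domesticated canine", [canine])
puppy = Synset(frozenset({"puppy"}), "a young dog", [dog])


def hypernym_chain(synset):
    """Follow is-a links upward, yielding increasingly general concepts."""
    while synset.hypernyms:
        synset = synset.hypernyms[0]
        yield synset


for s in hypernym_chain(puppy):
    print(sorted(s.words), "-", s.gloss)
# ['dog', 'domestic dog'] - domesticated canine
# ['canid', 'canine'] - animal of the dog family
```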

15,068 citations


Journal ArticleDOI
TL;DR: This approach seems to be of fundamental importance to artificial intelligence (AI) and cognitive sciences, especially in the areas of machine learning, knowledge acquisition, decision analysis, knowledge discovery from databases, expert systems, decision support systems, inductive reasoning, and pattern recognition.
Abstract: Rough set theory, introduced by Zdzislaw Pawlak in the early 1980s [11, 12], is a new mathematical tool to deal with vagueness and uncertainty. This approach seems to be of fundamental importance to artificial intelligence (AI) and cognitive sciences, especially in the areas of machine learning, knowledge acquisition, decision analysis, knowledge discovery from databases, expert systems, decision support systems, inductive reasoning, and pattern recognition.
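The abstract lists application areas but not the construction that gives rough sets their name. As a worked illustration, the sketch below computes the lower and upper approximations of a target set under an indiscernibility relation, using an invented toy decision table; every attribute, value, and object name is hypothetical.

```python
# Lower/upper approximation of a target set X under the indiscernibility
# relation induced by a set of attributes: the core construction of rough
# set theory. The toy table below is invented for illustration.
from collections import defaultdict

table = {
    "p1": {"temp": "high",   "cough": "yes"},
    "p2": {"temp": "high",   "cough": "yes"},
    "p3": {"temp": "normal", "cough": "no"},
    "p4": {"temp": "high",   "cough": "no"},
}
target = {"p1", "p4"}        # objects known to belong to the vague concept X


def approximations(table, attrs, target):
    # objects with identical values on `attrs` are indiscernible
    classes = defaultdict(set)
    for obj, desc in table.items():
        classes[tuple(desc[a] for a in attrs)].add(obj)
    lower, upper = set(), set()
    for c in classes.values():
        if c <= target:      # class certainly contained in X
            lower |= c
        if c & target:       # class possibly overlapping X
            upper |= c
    return lower, upper


lower, upper = approximations(table, ["temp", "cough"], target)
print("lower:", sorted(lower))                            # ['p4']
print("upper:", sorted(upper))                            # ['p1', 'p2', 'p4']
print("boundary (vague) region:", sorted(upper - lower))  # ['p1', 'p2']
```

Objects in the boundary region cannot be classified with certainty from the chosen attributes, which is exactly the kind of vagueness the theory is meant to capture.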

7,185 citations


Journal ArticleDOI
TL;DR: The fundamental assumptions of doing such a large-scale project are examined, the technical lessons learned by the developers are reviewed, and the range of applications that are or soon will be enabled by the technology is surveyed.
Abstract: Since 1984, a person-century of effort has gone into building CYC, a universal schema of roughly 10^5 general concepts spanning human reality. Most of the time has been spent codifying knowledge about these concepts; approximately 10^6 commonsense axioms have been handcrafted for and entered into CYC's knowledge base, and millions more have been inferred and cached by CYC. This article examines the fundamental assumptions of doing such a large-scale project, reviews the technical lessons learned by the developers, and surveys the range of applications that are or soon will be enabled by the technology.

2,116 citations


Journal ArticleDOI
Gerald Tesauro1
TL;DR: The domain of complex board games such as Go, chess, checkers, Othello, and backgammon has been widely regarded as an ideal testing ground for exploring a variety of concepts and approaches in artificial intelligence and machine learning.
Abstract: Ever since the days of Shannon's proposal for a chess-playing algorithm [12] and Samuel's checkers-learning program [10] the domain of complex board games such as Go, chess, checkers, Othello, and backgammon has been widely regarded as an ideal testing ground for exploring a variety of concepts and approaches in artificial intelligence and machine learning. Such board games offer the challenge of tremendous complexity and sophistication required to play at expert level. At the same time, the problem inputs and performance measures are clear-cut and well defined, and the game environment is readily automated in that it is easy to simulate the board, the rules of legal play, and the rules regarding when the game is over and determining the outcome.
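The abstract stresses that such games are easy to automate: the board, the legal moves, termination, and the outcome are all simple to simulate. The sketch below illustrates that point with a deliberately tiny game (simple Nim) played by two random players; the interface is a generic invention, not the backgammon environment of the original work.

```python
# A minimal self-play environment: state, legal moves, termination, and
# outcome are all trivial to encode. The game and interface are illustrative.
import random


class Nim:
    """Two players alternately take 1 or 2 stones; taking the last stone wins."""

    def __init__(self, stones=7):
        self.stones = stones
        self.player = 0                      # whose turn it is

    def legal_moves(self):
        return [m for m in (1, 2) if m <= self.stones]

    def play(self, move):
        self.stones -= move
        if self.stones == 0:
            return self.player               # current player wins
        self.player = 1 - self.player
        return None                          # game continues


def random_selfplay(games=1000, seed=0):
    rng = random.Random(seed)
    wins = [0, 0]
    for _ in range(games):
        env, outcome = Nim(), None
        while outcome is None:
            outcome = env.play(rng.choice(env.legal_moves()))
        wins[outcome] += 1
    return wins


print("wins under random self-play (player 0, player 1):", random_selfplay())
```

A learning program would replace the random move choice with a policy improved from the outcomes of many such simulated games.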

1,515 citations


Journal ArticleDOI
TL;DR: Since its inception, the software industry has been in crisis and problems with software systems are common and highly-publicized occurrences.
Abstract: Since its inception, the software industry has been in crisis. As Blazer noted 20 years ago, “[Software] is unreliable, delivered late, unresponsive to change, inefficient, and expensive … and has been for the past 20 years” [4]. In a survey of software contractors and government contract officers, over half of the respondents believed that calendar overruns, cost overruns, code that required in-house modifications before being usable, and code that was difficult to modify were common problems in the software projects they supervised [22]. Even today, problems with software systems are common and highly-publicized occurrences.

1,121 citations


Journal ArticleDOI
Lucy Suchman1
TL;DR: This chapter adopts a view of representations of work, whether created from within the work practices represented or in the context of externally based design initiatives, as interpretations in the service of particular interests and purposes, created by actors specifically positioned with respect to the work.
Abstract: This chapter adopts a view of representations of work, whether created from within the work practices represented or in the context of externally based design initiatives, as interpretations in the service of particular interests and purposes, created by actors specifically positioned with respect to the work. It argues the importance of deepening our resources for conceptualizing the intimate relations among work, representations, and the politics of organizations. It then points toward a design practice in which representations of work are taken not as proxies for some independently existing organizational processes, but as part of the fabric of meanings within and out of which all working practices, our own and others', are made. This argument rings particularly true against the large and growing body of literature dedicated to work-flow modeling, business process re-engineering, and other methods aimed at representing work in the service of transforming it.

928 citations


Journal ArticleDOI
TL;DR: The Relationship Management Data model (RMDM) and the Relationship Management (RMM) methodology are presented and design activities are addressed within the first three steps of the methodology.
Abstract: Hypermedia application design differs from other software design in that it involves navigation as well as user-interface and information processing issues. We present the Relationship Management Data model (RMDM) and the Relationship Management (RMM) methodology for the design and development of hypermedia applications. The seven steps of the methodology lend themselves to computer support, paving the way for a computerized environment to support the design and development of hypermedia applications. This article focuses on design activities, which are addressed within the first three steps of the methodology.

852 citations


Journal ArticleDOI
TL;DR: This report outlines IBM’s perspective on key supporting technologies and on the unique challenges highlighted by the emergence of digital libraries.
Abstract: [Sidebar: an alphabetical table of more than 60 digital library topic terms accompanies the article, ranging from Accessibility, Agents, and Annotation through Searching, Security, Usability, Visualization, and the World-Wide Web.] This overview is necessarily limited in its characterization of digital libraries. Many important projects and perspectives have been omitted. Here we give some pointers to aid further exploration, and of course we encourage interested readers to attend the numerous conferences and workshops scheduled in this field, many sponsored by or in cooperation with ACM and its SIGs. One early journal special issue is introduced in [6]. It includes articles on copyright and intellectual property rights, a subscription model for handling funds transfer related to digital libraries, a description of the evolution of the WAIS search system in general and its interfaces in particular, an overview of the Right Pages system and its use of OCR and document analysis algorithms, and an early overview of the Envision system [7]. We note that to many, intellectual property rights issues and ways to obtain revenue streams to sustain digital libraries are the most important open problems. The largest digital library conference makes its proceedings available over the WWW [9]. These contain many insightful discussions, proposals of new research ideas, descriptions of base technologies, and explanations of how the broad concept of a digital library fits in with the needs of specific user communities and the information they require. Readers can find a variety of works on agents, architectures, catalogs, collaboration, compression, document analysis from OCR and page images, document structure, electronic journals, heterogeneous sources, knowledge-based approaches, library science, numerical data collections, object stores, and organizational usability. For more details on the origins of the Digital Library Initiative, and for a variety of perspectives on open research problems, we refer the reader to [5]. This work also has numerous pointers to people, projects, institutions, and other reference works in the area. For a perspective on the role the computer industry should have in this field, see [10]. This report outlines IBM’s perspective on key supporting technologies and on the unique challenges highlighted by the emergence of digital libraries. We expect considerable interest from the corporate sector as well as from government agencies in this important area of information technology. For lack of space, we have had to omit many publications on networking and storage technologies, sociological and ethnographic studies, library and information science, OCR and document analysis or conversion, and rights management. These and other works are needed to round out the discussion of digital libraries.
However, we encourage you to read the rest of this issue as a good starting point for your future studies of this important field. We invite you to not only use but also help in the creation of a future World Digital Library System!

654 citations


Journal ArticleDOI
TL;DR: This paper aims to provide increasing levels of automation in the knowledge engineering process, replacing much time-consuming human activity with automatic techniques that improve accuracy or efficiency by discovering and exploiting regularities in training data.
Abstract: Machine learning is the study of computational methods for improving performance by mechanizing the acquisition of knowledge from experience. Expert performance requires much domain-specific knowledge, and knowledge engineering has produced hundreds of AI expert systems that are now used regularly in industry. Machine learning aims to provide increasing levels of automation in the knowledge engineering process, replacing much time-consuming human activity with automatic techniques that improve accuracy or efficiency by discovering and exploiting regularities in training data. The ultimate test of machine learning is its ability to produce systems that are used regularly in industry, education, and elsewhere.
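As a concrete, if tiny, illustration of “discovering and exploiting regularities in training data,” the sketch below learns a single-attribute classification rule (in the spirit of the classic 1R method) from an invented toy data set; it is not a system described in the article.

```python
# Learn the one attribute whose value -> majority-label rule makes the
# fewest training errors (the 1R idea). The weather-style data are invented.
from collections import Counter, defaultdict

# (outlook, windy, play?)
train = [
    ("sunny", "no", "yes"), ("sunny", "yes", "yes"), ("rain", "yes", "no"),
    ("rain", "no", "no"), ("overcast", "no", "yes"), ("rain", "yes", "no"),
]


def one_rule(examples, n_attrs=2):
    best = None
    for a in range(n_attrs):
        by_value = defaultdict(Counter)
        for ex in examples:
            by_value[ex[a]][ex[-1]] += 1                 # count labels per value
        rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
        errors = sum(sum(c.values()) - max(c.values()) for c in by_value.values())
        if best is None or errors < best[0]:
            best = (errors, a, rule)
    return best


errors, attr, rule = one_rule(train)
print(f"attribute {attr} with {errors} training errors: {rule}")
# attribute 0 with 0 training errors: {'sunny': 'yes', 'rain': 'no', 'overcast': 'yes'}
```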

540 citations


Journal Article
Marian Petre
TL;DR: This article reports on some fascinating research focusing on understanding how textual and visual representations for software differ in effectiveness, and determined that the differences lie not so much in the textual-visual distinction as in the degree to which specific representations support the conventions experts expect.
Abstract: Many believe that visual programming techniques come more naturally to developers than textual ones. This article reports on some fascinating research focused on understanding how textual and visual representations for software differ in effectiveness. Among other things, it determines that the differences lie not so much in the textual-visual distinction as in the degree to which specific representations support the conventions experts expect.

532 citations


Journal ArticleDOI
TL;DR: Langton offers a nice overview of the different research questions studied by the discipline, which spans such diverse topics as artificial evolution, artificial ecosystems, artificial morphogenesis, molecular evolution, and many more.
Abstract: The relatively new field of artificial life attempts to study and understand biological life by synthesizing artificial life forms. To paraphrase Chris Langton, the founder of the field, the goal of artificial life is to “model life as it could be so as to understand life as we know it.” Artificial life is a very broad discipline which spans such diverse topics as artificial evolution, artificial ecosystems, artificial morphogenesis, molecular evolution, and many more. Langton offers a nice overview of the different research questions studied by the discipline [6]. Artificial life shares with artificial intelligence (AI) its interest in synthesizing adaptive autonomous agents. Autonomous agents are computational systems that inhabit some complex, dynamic environment, sense and act autonomously in this environment, and by doing so realize a set of goals or tasks for which they are designed.
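The abstract defines autonomous agents as systems that inhabit an environment, sense it, act in it, and thereby pursue the goals they were designed for. The sketch below renders that sense-act loop for an invented one-dimensional world; the environment, percepts, and policy are illustrative only.

```python
# A purely reactive agent in a toy 1-D world: sense, decide, act, repeat.
class Corridor:
    """The agent starts at cell 0; its goal ("food") sits at the far end."""

    def __init__(self, length=5):
        self.length, self.agent = length, 0

    def sense(self):
        # the agent perceives only whether the goal still lies to its right
        return "food_right" if self.agent < self.length else "at_food"

    def act(self, action):
        if action == "move_right" and self.agent < self.length:
            self.agent += 1


def policy(percept):
    """Map a percept directly to an action: keep moving until the goal is sensed."""
    return "move_right" if percept == "food_right" else "stay"


world = Corridor()
for step in range(10):                       # bounded lifetime
    percept = world.sense()
    if percept == "at_food":
        print(f"goal reached after {step} steps")
        break
    world.act(policy(percept))
```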

Journal ArticleDOI
TL;DR: Methods based firmly in probability theory have once again begun to gain acceptance in the computer-science and uncertain-reasoning communities.
Abstract: As long as knowledge-based systems have been built, facilities for handling uncertainty have been an integral part. In the early days of rule-based programming, the predominant methods used variants on probability calculus to combine certainty factors associated with applicable rules. Although it was recognized that certainty factors did not conform to the well-established theory of probability, these methods were nevertheless favored because the probabilistic techniques available at the time seemed to require either specifying an intractable number of parameters, or assuming an unrealistic set of independence relationships. Today, methods based firmly in probability theory have once again begun to gain acceptance in the computer-science and uncertain-reasoning communities. The “breakthrough” was a graphical modeling language for representing uncertain relationships. Although Bayesian networks have gained widespread use in knowledge-based systems relatively recently, they are based on modeling …

Journal ArticleDOI
TL;DR: The Object-Oriented Hypermedia Design Method (OOHDM) uses abstraction and composition mechanisms in an object-oriented framework to allow a concise description of complex information items and allow the specification of complex navigation patterns and interface transformations.
Abstract: Hypermedia applications typically include complex information, and may allow sophisticated navigation behavior. The Object-Oriented Hypermedia Design Method (OOHDM) [4] uses abstraction and composition mechanisms in an object-oriented framework to, on one hand, allow a concise description of complex information items, and on the other hand, allow the specification of complex navigation patterns and interface transformations. In OOHDM, a hypermedia application is built in a four-step process supporting an incremental or prototype process model. Each step focuses on a particular design concern, and an object-oriented model is built. Classification, aggregation and generalization/specialization are used throughout the process to enhance abstraction power and reuse opportunities. Figure 1 summarizes the steps, products, mechanisms and design concerns in OOHDM. Domain Analysis: In this step a conceptual model of the application domain is built using well-known object-oriented modeling principles [3], augmented with some primitives such as attribute perspectives (multiple-valued attributes, similar to HDM perspectives [1]). Conceptual classes may be built using aggregation and generalization/specialization hierarchies. There is no concern for the types of users or their tasks at this stage. [Figure 1, not reproduced here, lists products such as abstract interface objects, responses to external events, interface transformations, and the running application.]
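As a rough illustration of the conceptual-modeling step, the sketch below expresses generalization, aggregation, and a multi-valued attribute perspective with ordinary Python classes; the domain, class names, and sample data are invented, and this is not OOHDM's own notation.

```python
# Toy conceptual schema: generalization (subclassing), aggregation
# (composition), and an attribute offered under several perspectives.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class Artwork:                               # conceptual class
    title: str
    # attribute perspective: one logical attribute, several representations
    description: Dict[str, str] = field(default_factory=dict)


@dataclass
class Painting(Artwork):                     # generalization/specialization
    medium: str = "oil on canvas"


@dataclass
class Gallery:                               # aggregation of artworks
    name: str
    works: List[Artwork] = field(default_factory=list)


room = Gallery("Room 34", [
    Painting("The Fighting Temeraire",
             {"text": "Turner, 1839", "audio": "temeraire.mp3"}),
])
print(room.works[0].description["text"])     # choose one perspective at navigation time
```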

Journal ArticleDOI
TL;DR: This discussion of “designing for comprehension” addresses the second type of task, reading a hyperdocument for learning, which calls for deep understanding and is better served by structured, guided hypermedia than by free exploration.
Abstract: When considering hypermedia, it is necessary to distinguish between two kinds of applications: “One encourages those who wish to wander through large clouds of information, gathering knowledge along the way. The other is more directly tied to specific problem-solving, and is quite structured and perhaps even constrained” [20, p. 119]. Applications of the first type appear as browsable databases—or hyperbases—that can be freely explored by a reader. In contrast, applications of the second type take the shape of electronic documents—or hyperdocuments—that intentionally guide readers through an information space, controlling their exploration along the lines of a predefined structure. Each type has its particular advantages and encourages different reading strategies. While the first one is better suited to support unconstrained search and information retrieval, the second one is more adequate for tasks requiring deep understanding and learning. As Hammond points out, it “may be fun and perhaps instructive, to open every door and peer inside, but there are many situations where learning is most effective when the freedom of the learner is restricted to a relevant and helpful subset of activities.” It is this second type of task—reading a hyperdocument for learning—that we address in our discussion of “designing for comprehension.”
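The contrast between the two kinds of applications can be made concrete with a small sketch: the same link structure browsed freely (a hyperbase) versus navigation constrained to a predefined path (a hyperdocument). The node names and tour below are invented for illustration.

```python
# Free browsing follows any outgoing link; guided reading is restricted to
# a predefined path through the same material.
links = {                                    # the full link structure (hyperbase)
    "intro": ["theory", "examples"],
    "theory": ["proof", "examples"],
    "examples": ["intro"],
    "proof": [],
}
guided_tour = ["intro", "theory", "proof"]   # the hyperdocument's predefined path


def next_nodes(current, constrained):
    if not constrained:
        return links[current]                # reader may wander anywhere
    i = guided_tour.index(current)
    return guided_tour[i + 1 : i + 2]        # at most one "next" node


print(next_nodes("theory", constrained=False))   # ['proof', 'examples']
print(next_nodes("theory", constrained=True))    # ['proof']
```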

Journal ArticleDOI
TL;DR: The relationships among nationality, cultural values, personal information privacy concerns, and information privacy regulation are examined in this article.
Abstract: The relationships among nationality, cultural values, personal information privacy concerns, and information privacy regulation are examined in this article.

Journal ArticleDOI
TL;DR: You have just finished typing that big report into your word processor, it is formatted correctly and looks beautiful on the screen, you hit print, go to the printer—and nothing is there.
Abstract: You have just finished typing that big report into your word processor. It is formatted correctly and looks beautiful on the screen. You hit print, go to the printer—and nothing is there. You try again—still nothing. The report needs to go out today. What do you do?

Journal ArticleDOI
TL;DR: Tapping into this source of information requires the establishment of one or more customer-developer links, which are defined as the techniques and/or channels that allow customers and developers to exchange information.
Abstract: Many of the best ideas for new products and product improvements come from the customer or end user of the product [15]. In the software arena, tapping into this source of information requires the establishment of one or more customer-developer links. These links are defined as the techniques and/or channels that allow customers and developers to exchange information.

Journal ArticleDOI
TL;DR: It is suggested that underlying assumptions rooted in different conceptions of work coexist within an organization and represent different lenses through which people in the organization peer and carry different implications for the design of work and technologies.
Abstract: Changes in the world economy have led companies to restructure themselves in order to compete globally. Debates in the academic community about the changing demand for workplace skills with the globalization of the economy are paralleled in the business literature about what it takes to create a productive business (e.g., [8, 22, 24]). Business goals for such improvements as computer systems, work systems, or learning organizations are heavily influenced by underlying assumptions about how people work and how organizations function. In this article I examine these underlying assumptions and outline their implications for design. I suggest that underlying assumptions rooted in different conceptions of work coexist within an organization and represent different lenses through which people in the organization peer. One such lens, or conception of work, I call an “organizational, explicit” view, the other an “activity-oriented, tacit” view (see Table 1). Each of these perspectives carries different implications for the design of work and technologies. An organizational perspective on work is an explicit view and is represented, for example, by sets of defined tasks and operations such as those described in methods and procedures, which fulfill a set of business functions (the work-flow approach reflects this; see [16]). This view of work differs from an activity-oriented approach, which suggests that the range of activities, communication practices, relationships, and coordination it takes to accomplish business functions is complex and continually mediated by …

Journal ArticleDOI
TL;DR: Hypertext allows content to appear in different contexts, and authors collect and structure materials to reflect their own understanding or in anticipation of readers’ possible interests, needs, or ability to comprehend the substrate of interrelated content.
Abstract: Introduction: Hypertext, in its most general sense, allows content to appear in different contexts. The immediate setting in which readers encounter a specific segment of material then changes from reading to reading or from reader to reader. Authors collect and structure materials to reflect their own understanding or in anticipation of readers’ possible interests, needs, or ability to comprehend the substrate of interrelated content.

Journal ArticleDOI
TL;DR: It is argued that current efforts to create digital libraries are limited by a largely unexamined and unintended allegiance to an idealized view of what libraries have been, rather than what they actually are or could be.
Abstract: What are digital libraries, how should they be designed, how will they be used, and what relationship will they bear to what we now call “libraries”? Although we cannot hope to answer all these crucial questions in this short article, we do hope to encourage, and in some small measure to shape, the dialog among computer scientists, librarians, and other interested parties out of which answers may arise. Our contribution here is to make explicit, and to question, certain assumptions that underlie current digital library efforts. We will argue that current efforts are limited by a largely unexamined and unintended allegiance to an idealized view of what libraries have been, rather than what they actually are or could be. Since these limits come from current ways of thinking about the problem, rather than being inherent in the technology or in social practice, expanding our conception of digital libraries should serve to expand the scope and the utility of development efforts.

Journal ArticleDOI
Franca Garzotto and Luca Mainetti
TL;DR: This article describes the design-oriented evaluation method and applies it to a highly popular commercial application: Microsoft's Art Gallery, a hypermedia guide to the National Gallery in London's painting collection.
Abstract: Our approach builds on experience both in developing several hypermedia applications and in systematically inspecting and evaluating many commercial applications and prototypes. We evaluate an application as a hypermedia design; by its very nature, our method addresses neither software design (which can be evaluated with general software evaluation techniques), nor how well the application relates to a domain or to specific user needs (a main concern of other usability evaluation techniques). Our approach complements more general evaluation methods [1, 11, 12, 18, 19, 21] for the field of hypermedia. In this article we describe our design-oriented evaluation method and apply it to a highly popular commercial application: Microsoft's Art Gallery, a hypermedia guide to the National Gallery in London's painting collection. Art Gallery is an outstanding and enjoyable application. Initially designed only for the museum's visitors [20], it is now widely available as a CD-ROM [17]. We also discuss aspects of reuse in hypermedia applications and propose some initial suggestions for designing for reuse. Analysis Dimensions: We have identified several dimensions for analyzing a hypermedia application: content, structure, presentation, dynamics, and interaction. Content denotes the pieces of information included in the application; these may consist of static (passive) media (such as formatted data, text strings, images, and graphics) or active (dynamic) media (such as video clips, sound tracks, and animations). One can perform a heuristic evaluation of a hypermedia application effectively by coupling a systematic analysis of the application, based on a hypermedia design model, with general usability criteria, independent of the specific application area, user profile(s), and user task(s). We call our method design-oriented evaluation (as opposed to the user-oriented evaluation commonly applied in usability testing), since it evaluates the internal strength of the design underlying the hypermedia application.

Journal ArticleDOI
TL;DR: A number of techniques are developed that can be considered attempts to increase the bandwidth and quality of the interactions between users and information in an information workspace—an environment designed to support information work.
Abstract: Effective information access involves rich interactions between users and information residing in diverse locations. Users seek and retrieve information from the sources—for example, file servers, databases, and digital libraries—and use various tools to browse, manipulate, reuse, and generally process the information. We have developed a number of techniques that support various aspects of the process of user/information interaction. These techniques can be considered attempts to increase the bandwidth and quality of the interactions between users and information in an information workspace—an environment designed to support information work (see Figure 1).

Journal ArticleDOI
TL;DR: It is often hard to directly reuse existing algorithms, detailed designs, interfaces, or implementations in these systems due to the growing heterogeneity of hardware/software architectures and the increasing diversity of operating system platforms.
Abstract: Despite dramatic increases in network and host performance, it remains difficult to design, implement, and reuse communication software for complex distributed systems. Examples of these systems include global personal communication systems, network management platforms, enterprise medical imaging systems, and real-time market data monitoring and analysis systems. In addition, it is often hard to directly reuse existing algorithms, detailed designs, interfaces, or implementations in these systems due to the growing heterogeneity of hardware/software architectures and the increasing diversity of operating system platforms.

Journal ArticleDOI
TL;DR: To understand why chaos theory can be applied toward the understanding, manipulation, and control of a variety of systems, one must start with a working knowledge of how chaotic systems behave—profoundly, but sometimes subtly different, from the behavior of random systems.
Abstract: There lies a behavior between rigid regularity and randomness based on pure chance. It's called a chaotic system, or chaos for short [5]. Chaos is all around us. Our notions of physical motion or dynamic systems have encompassed the precise clock-like ticking of periodic systems and the vagaries of dice-throwing chance, but chaos has often been overlooked as a way to account for the more commonly observed behavior between these two extremes. When we see irregularity we cling to randomness and disorder for explanations. Why should this be so? Why is it that when the ubiquitous irregularity of engineering, physical, biological, and other systems is studied, it is assumed to be random, and the whole vast machinery of probability and statistics is applied? Rather recently, however, we have begun to realize that the tools of chaos theory can be applied toward the understanding, manipulation, and control of a variety of systems, with many of the practical applications coming after 1990. To understand why this is true, one must start with a working knowledge of how chaotic systems behave—profoundly, though sometimes subtly, different from the behavior of random systems.
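A standard worked example of behavior that is deterministic yet practically unpredictable is the logistic map; the short sketch below (not taken from the article) shows two nearly identical initial conditions diverging under iteration, the sensitive dependence that separates chaos from both clockwork regularity and randomness.

```python
# Iterate the logistic map x -> r*x*(1-x) at r = 4 from two starting points
# that differ by one part in a billion and watch the difference grow.
def logistic_orbit(x, r=4.0, steps=30):
    orbit = [x]
    for _ in range(steps):
        x = r * x * (1.0 - x)
        orbit.append(x)
    return orbit


a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-9)               # tiny perturbation
for t in (0, 10, 20, 30):
    print(f"t={t:2d}  |difference| = {abs(a[t] - b[t]):.3e}")
# the difference grows by many orders of magnitude within ~30 iterations
```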


Journal ArticleDOI
TL;DR: Libraries have long served crucial roles in learning and today the rhetoric associated with the National/Global Information Infrastructure (N/GII) always includes examples of how the vast quantities of information that global networks provide (i.e., digital libraries) will be used in educational settings.
Abstract: Libraries have long served crucial roles in learning. The first great library, in Alexandria 2,000 years ago, was really the first university. It consisted of a zoo and various cultural artifacts in addition to much of the ancient world's written knowledge and attracted scholars from around the Mediterranean, who lived and worked in a scholarly community for years at a time. Today, the rhetoric associated with the National/Global Information Infrastructure (N/GII) always includes examples of how the vast quantities of information that global networks provide (i.e., digital libraries) will be used in educational settings [16].

Journal ArticleDOI
TL;DR: Software reuse is the use of existing software knowledge or artifacts to build new software artifacts to be used in different systems to be distinguished from porting.
Abstract: Software reuse is the use of existing software knowledge or artifacts to build new software artifacts. Reuse is sometimes confused with porting. The two are distinguished as follows: Reuse is using an asset in different systems; porting is moving a system across environments or platforms. For example, in Figure 1 a component in System A is shown used again in System B; this is an example of reuse. System A, developed for Environment 1, is shown moved into Environment 2; this is an example of porting.
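The distinction drawn from Figure 1 can also be sketched in code: the same component is used unchanged by two different systems (reuse), while an environment-dependent detail inside a single system is what changes when porting. The module, functions, and paths below are invented for illustration.

```python
# Reuse: one asset (parse_config) dropped into two systems.
# Porting: one system adapted to a different environment (platform check).
import sys


def parse_config(text):
    """A reusable component shared by System A and System B."""
    return dict(line.split("=", 1) for line in text.splitlines() if "=" in line)


def system_a():
    return parse_config("mode=batch\nretries=3")     # reuse in System A


def system_b():
    return parse_config("theme=dark")                # reuse in System B


def temp_dir():
    """A porting concern: the same system adjusted for another environment."""
    return "C:\\Temp" if sys.platform.startswith("win") else "/tmp"


print(system_a(), system_b(), temp_dir())
```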

Journal Article
Jocelyne Nanard and Marc Nanard
TL;DR: In this paper, the authors provide guidelines for developers of hypertext design environments to facilitate the user's design process, and examine the general human-factor aspects of the design process to determine which features help designers most.
Abstract: Formal design techniques such as the conceptual hypertext data model and abstract navigational model [22] can benefit directly from software engineering approaches. Fundamental differences, however, make a pure transposition of techniques both difficult and inadequate. An important part of hypertext design concerns aesthetic and cognitive aspects that software engineering environments do not support. This article focuses on the hypertext design task itself as a computer-supported activity. In it we provide guidelines for developers of hypermedia design environments to facilitate the user's design process. A hypertext design environment can support both formal hypermedia design techniques and the actual design process successfully. While we take advantage of object-oriented terminology to describe certain concepts, the lessons of this article apply to all formal design techniques. Thus we neither prescribe nor detail a formal design model. Instead we examine the general human-factor aspects of the design process to determine which features help designers most. We also enumerate the requirements hypermedia design environments have that other types of computer applications do not. This analysis arises from observing users and students during design tasks, and is grounded in sound and well-known results in cognitive science. It builds upon our experience in developing LIRMM's MacWeb hypermedia development environment [14–16]. (Hypertext and hypermedia are handled similarly in design; both refer to organized sets of information linked by semantic relationships, and therefore are indistinguishable in this article. MacWeb is a knowledge-based hypertext system developed at LIRMM since 1989 and is unrelated to the World-Wide Web client later given the same name.) Improving the quality of hypermedia design and reducing its cost is an important challenge for the information industry. One way to tackle the problem is to provide hypertext designers with appropriate development environments; hypertext engineering environments that provide sets of integrated tools boost designers' efficiency and effectiveness. One may consider hypermedia design, as with any other design activity, from two points of view or dimensions: the prescribed formal design techniques used to produce the design, and observations on how people actually conduct the design process.

Journal ArticleDOI
TL;DR: This brief tutorial on Bayesian networks serves to introduce readers to some of the concepts, terminology, and notation employed by articles in this special section.
Abstract: This brief tutorial on Bayesian networks serves to introduce readers to some of the concepts, terminology, and notation employed by articles in this special section. In a Bayesian network, a variable takes on values from a collection of mutually exclusive and collectively exhaustive states. A variable may be discrete, having a finite or countable number of states, or it may be continuous. Often the choice of states itself presents an interesting modeling question. For example, in a system for troubleshooting a problem with printing, we may choose to model the variable “print output” with two states—“present” and “absent”—or we may want to model the variable with finer distinctions such as “absent,” “blurred,” “cut off,” and “ok.”
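To make the tutorial's printing example concrete, the sketch below defines a two-variable discrete network, a fault variable influencing “print output,” and computes a posterior over the fault by enumeration; the structure and every probability are invented for illustration, not taken from the article.

```python
# Minimal discrete Bayesian network: P(fault) and P(print output | fault),
# with inference by direct enumeration (Bayes' rule). All numbers invented.
priors = {"ok": 0.90, "out_of_paper": 0.07, "driver_fault": 0.03}

likelihood = {                               # P(print output | fault)
    "ok":            {"present": 0.99, "absent": 0.01},
    "out_of_paper":  {"present": 0.00, "absent": 1.00},
    "driver_fault":  {"present": 0.10, "absent": 0.90},
}


def posterior(observed_output):
    """Posterior over the fault variable given one observed output state."""
    joint = {f: priors[f] * likelihood[f][observed_output] for f in priors}
    z = sum(joint.values())                  # probability of the observation
    return {f: p / z for f, p in joint.items()}


for fault, p in sorted(posterior("absent").items(), key=lambda kv: -kv[1]):
    print(f"P({fault} | output=absent) = {p:.2f}")
```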