
Showing papers in "Communications of The ACM in 1980"


Journal ArticleDOI
Turner Whitted1
TL;DR: Consideration of all of these factors allows the shader to accurately simulate true reflection, shadows, and refraction, as well as the effects simulated by conventional shaders.
Abstract: To accurately render a two-dimensional image of a three-dimensional scene, global illumination information that affects the intensity of each pixel of the image must be known at the time the intensity is calculated. In a simplified form, this information is stored in a tree of “rays” extending from the viewer to the first surface encountered and from there to other surfaces and to the light sources. A visible surface algorithm creates this tree for each pixel of the display and passes it to the shader. The shader then traverses the tree to determine the intensity of the light received by the viewer. Consideration of all of these factors allows the shader to accurately simulate true reflection, shadows, and refraction, as well as the effects simulated by conventional shaders. Anti-aliasing is included as an integral part of the visibility calculations. Surfaces displayed include curved as well as polygonal surfaces.
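The ray-tree idea can be sketched with a toy recursive tracer. The scene setup, reflection coefficient, background intensity, and light direction below are invented for illustration and are not taken from Whitted's implementation:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def intersect(origin, direction, center, radius):
    # Ray-sphere intersection; `direction` is assumed to be a unit vector.
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * dot(direction, oc)
    c = dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-9 else None

LIGHT = (0.0, 1.0, 0.0)   # direction toward one distant light (invented)

def trace(origin, direction, spheres, depth=0):
    # One node of the ray tree per call: a local diffuse term plus an
    # attenuated reflected branch, cut off at a fixed depth.
    if depth > 2:
        return 0.0
    hit = None
    for center, radius, albedo in spheres:
        t = intersect(origin, direction, center, radius)
        if t is not None and (hit is None or t < hit[0]):
            hit = (t, center, radius, albedo)
    if hit is None:
        return 0.1            # background intensity
    t, center, radius, albedo = hit
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = tuple((p - c) / radius for p, c in zip(point, center))
    reflected = tuple(d - 2.0 * dot(direction, normal) * n
                      for d, n in zip(direction, normal))
    local = albedo * max(0.0, dot(normal, LIGHT))
    return local + 0.5 * trace(point, reflected, spheres, depth + 1)

scene = [((0.0, 0.0, 0.0), 1.0, 0.8)]   # one unit sphere at the origin
print(trace((0.0, 0.0, 3.0), (0.0, 0.0, -1.0), scene))   # 0.05
```

Shadow and refraction rays would be further branches of the same tree; the shader simply combines the intensities returned by each branch.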

1,559 citations


Journal ArticleDOI
TL;DR: There are several aspects of user-computer performance that system designers should systematically consider.

945 citations


Journal ArticleDOI
TL;DR: Multidimensional divide-and-conquer is discussed, an algorithmic paradigm that can be instantiated in many different ways to yield a number of algorithms and data structures for multidimensional problems.
Abstract: Most results in the field of algorithm design are single algorithms that solve single problems. In this paper we discuss multidimensional divide-and-conquer, an algorithmic paradigm that can be instantiated in many different ways to yield a number of algorithms and data structures for multidimensional problems. We use this paradigm to give best-known solutions to such problems as the ECDF, maxima, range searching, closest pair, and all nearest neighbor problems. The contributions of the paper are on two levels. On the first level are the particular algorithms and data structures given by applying the paradigm. On the second level is the more novel contribution of this paper: a detailed study of an algorithmic paradigm that is specific enough to be described precisely yet general enough to solve a wide variety of problems.
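One instantiation of the paradigm is the planar maxima problem: a point is maximal if no other point is at least as large in both coordinates (and larger in one). The sketch below follows the split-solve-filter shape of multidimensional divide-and-conquer, though not the paper's exact presentation:

```python
def maxima(points):
    # Planar maxima by divide and conquer: split on the median x, solve
    # both halves recursively, then keep only those left-half maxima that
    # rise above every right-half point in y (the right half already
    # dominates the left half in x).
    if len(points) <= 1:
        return list(points)
    pts = sorted(points)                 # order by x, then y
    mid = len(pts) // 2
    left, right = maxima(pts[:mid]), maxima(pts[mid:])
    best_y = max(y for _, y in right)
    return [p for p in left if p[1] > best_y] + right

print(maxima([(1, 4), (2, 3), (3, 1), (0, 5), (2, 2)]))
# [(0, 5), (1, 4), (2, 3), (3, 1)]
```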

720 citations


Journal ArticleDOI
TL;DR: A theory of analogy is presented and an implemented system that embodies the theory is described that is designed to answer questions about Hamlet by way of knowledge about Macbeth.
Abstract: We use analogy when we say something is a Cinderella story and when we learn about resistors by thinking about water pipes. We also use analogy when we learn subjects like economics, medicine, and law. This paper presents a theory of analogy and describes an implemented system that embodies the theory. The specific competence to be understood is that of using analogies to do certain kinds of learning and reasoning. Learning takes place when analogy is used to generate a constraint description in one domain, given a constraint description in another, as when we learn Ohm's law by way of knowledge about water pipes. Reasoning takes place when analogy is used to answer questions about one situation, given another situation that is supposed to be a precedent, as when we answer questions about Hamlet by way of knowledge about Macbeth.

529 citations


Journal ArticleDOI
TL;DR: Peterson investigates the basic structure of several such existing programs and their approaches to solving the problems which arise when this type of program is created.
Abstract: With the increase in word and text processing computer systems, programs which check and correct spelling will become more and more common. Peterson investigates the basic structure of several such existing programs and their approaches to solving the problems which arise when this type of program is created. The basic framework and background necessary to write a spelling checker or corrector are provided.
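A minimal checker/corrector along these lines pairs a dictionary lookup with candidate words one edit away (deletion, transposition, substitution, insertion). The toy dictionary and the alphabetical tie-breaking rule below are illustrative assumptions:

```python
import string

def edits1(word):
    # All strings one edit away: the deletion, transposition, substitution,
    # and insertion errors that spelling correctors commonly target.
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = {a + b[1:] for a, b in splits if b}
    transposes = {a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1}
    replaces = {a + c + b[1:] for a, b in splits if b for c in letters}
    inserts = {a + c + b for a, b in splits for c in letters}
    return deletes | transposes | replaces | inserts

def correct(word, dictionary):
    # Checking is a set lookup; correcting proposes in-dictionary words one
    # edit away (alphabetically first on ties -- an arbitrary choice).
    if word in dictionary:
        return word
    candidates = edits1(word) & dictionary
    return min(candidates) if candidates else None

words = {"the", "cat", "sat", "hat", "that"}   # toy dictionary
print(correct("teh", words))                   # "the"
```

Production checkers replace the in-memory set with compact dictionary representations, which is much of what the surveyed programs differ on.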

417 citations


Journal ArticleDOI
TL;DR: These problems are addressed by the facilities described here for concurrent programming in Mesa, and experience with several substantial applications gives us some confidence in the validity of the solutions.
Abstract: The use of monitors for describing concurrency has been much discussed in the literature. When monitors are used in real systems of any size, however, a number of problems arise which have not been adequately dealt with: the semantics of nested monitor calls; the various ways of defining the meaning of WAIT; priority scheduling; handling of timeouts, aborts and other exceptional conditions; interactions with process creation and destruction; monitoring large numbers of small objects. These problems are addressed by the facilities described here for concurrent programming in Mesa. Experience with several substantial applications gives us some confidence in the validity of our solutions.
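Python's threading.Condition has Mesa-style signaling semantics (a notified waiter must recheck its condition on wakeup), so a bounded-buffer monitor with timeout handling, in the spirit of the facilities discussed, can be sketched as:

```python
import threading

class BoundedBuffer:
    # A monitor in the Mesa style: one lock protecting the invariant plus
    # condition variables. Mesa's WAIT guarantees only that a notified
    # waiter eventually reruns, so the condition is rechecked in a loop;
    # Python's threading.Condition behaves the same way.
    def __init__(self, capacity):
        self.items, self.capacity = [], capacity
        self.lock = threading.Lock()
        self.not_full = threading.Condition(self.lock)
        self.not_empty = threading.Condition(self.lock)

    def put(self, item, timeout=None):
        with self.lock:
            while len(self.items) >= self.capacity:
                if not self.not_full.wait(timeout):     # timeout handling
                    raise TimeoutError("buffer stayed full")
            self.items.append(item)
            self.not_empty.notify()

    def get(self, timeout=None):
        with self.lock:
            while not self.items:
                if not self.not_empty.wait(timeout):
                    raise TimeoutError("buffer stayed empty")
            item = self.items.pop(0)
            self.not_full.notify()
            return item

buf = BoundedBuffer(2)
out = []
consumer = threading.Thread(target=lambda: [out.append(buf.get())
                                            for _ in range(3)])
consumer.start()
for x in (1, 2, 3):
    buf.put(x)
consumer.join()
print(out)   # [1, 2, 3]
```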

402 citations


Journal ArticleDOI
John F. Shoch1, Jon A. Hupp1
TL;DR: The authors characterize “typical” traffic in this environment and demonstrate that the system works very well: under extremely heavy, artificially generated load, the system shows stable behavior and channel utilization approaches 98 percent, as predicted.
Abstract: The Ethernet communications network is a broadcast, multiaccess system for local computer networking, using the techniques of carrier sense and collision detection. Recently we have measured the actual performance and error characteristics of an existing Ethernet installation which provides communications services to over 120 directly connected hosts.This paper is a report on some of those measurements—characterizing “typical” traffic characteristics in this environment and demonstrating that the system works very well. About 300 million bytes traverse the network daily; under normal load, latency and error rates are extremely low and there are very few collisions. Under extremely heavy load—artificially generated—the system shows stable behavior, and channel utilization approaches 98 percent, as predicted.
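The collision-recovery mechanism underlying this stability is truncated binary exponential backoff. A small sketch (slot counts only, with no channel model):

```python
import random

def backoff_slots(attempt, rng):
    # Truncated binary exponential backoff: after the n-th collision, wait
    # a random number of slot times drawn from [0, 2**min(n, 10) - 1];
    # the transmission is abandoned after 16 attempts.
    if attempt > 16:
        raise RuntimeError("excessive collisions: giving up")
    return rng.randrange(2 ** min(attempt, 10))

def retry_rounds(rng):
    # Two stations that just collided keep drawing backoffs until they
    # pick different slots; returns how many rounds that takes.
    rounds = 1
    while backoff_slots(rounds, rng) == backoff_slots(rounds, rng):
        rounds += 1
    return rounds

rng = random.Random(7)
print(retry_rounds(rng))
```

Because the backoff window doubles with each collision, offered load is shed quickly under contention, which is one reason measured utilization stays high even at saturation.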

356 citations


Journal ArticleDOI
TL;DR: The use of computers to assist in the learning situation in a simulation, game, tutorial, or drill and practice mode is reviewed on an international basis with centers of activity identified in the United States, Canada, the United Kingdom, and Japan.
Abstract: The use of computers to assist in the learning situation in a simulation, game, tutorial, or drill and practice mode is reviewed on an international basis with centers of activity identified in the United States, Canada, the United Kingdom, and Japan. The use of the computer as an adjunct to support learning is compared to its use in a substitution mode. Evaluative studies of CAI are reviewed and costs are examined. The critical issues of CAI are enumerated and analyzed as they pertain to computer hardware, CAI languages, and courseware development and use. The future of CAI is briefly sketched from the viewpoints of individuals prominent in the field. Finally, conclusions are drawn and recommendations are offered to help ensure the most educationally cost-effective use of CAI in learning situations.

354 citations


Journal ArticleDOI
TL;DR: A demonstration is given of the way in which a simple geometrical construction yields new and efficient algorithms for various searching and list manipulation problems.
Abstract: Examples of fruitful interaction between geometrical combinatorics and the design and analysis of algorithms are presented. A demonstration is given of the way in which a simple geometrical construction yields new and efficient algorithms for various searching and list manipulation problems.

286 citations


Journal ArticleDOI
TL;DR: Three major areas of methodological concern, the selection of subjects, materials, and measures, are reviewed and the first two of these areas continue to present major difficulties for this type of research.
Abstract: The application of behavioral or psychological techniques to the evaluation of programming languages and techniques is an approach which has found increased applicability over the past decade. In order to use this approach successfully, investigators must pay close attention to methodological issues, both in order to insure the generalizability of their findings and to defend the quality of their work to researchers in other fields. Three major areas of methodological concern, the selection of subjects, materials, and measures, are reviewed. The first two of these areas continue to present major difficulties for this type of research.

237 citations


Journal ArticleDOI
Kenneth E. Iverson1
TL;DR: The importance of nomenclature, notation, and language as tools of thought has long been recognized and did much to stimulate and to channel later investigation in chemistry and in botany.
Abstract: The importance of nomenclature, notation, and language as tools of thought has long been recognized. In chemistry and in botany, for example, the establishment of systems of nomenclature by Lavoisier and Linnaeus did much to stimulate and to channel later investigation. Concerning language, George Boole in his Laws of Thought [1, p.21] asserted "That language is an instrument of human reason, and not merely a medium for the expression of thought, is a truth generally admitted."

Journal ArticleDOI
TL;DR: In this paper, three scan line methods for drawing pictures of parametrically defined surfaces are presented along with an overview of the numerical inversion of the functions used to define the surface.
Abstract: This paper presents three scan line methods for drawing pictures of parametrically defined surfaces. A scan line algorithm is characterized by the order in which it generates the picture elements of the image. These are generated left to right, top to bottom in much the same way as a picture is scanned out on a TV screen. Parametrically defined surfaces are those generated by a set of bivariate functions defining the X, Y, and Z position of points on the surface. The primary driving mechanism behind such an algorithm is the inversion of the functions used to define the surface. In this paper, three different methods for doing the numerical inversion are presented along with an overview of scan line methods.

Journal ArticleDOI
TL;DR: This note presents an efficient algorithm for finding the largest (or smallest) of a set of uniquely numbered processors arranged in a circle, in which no central controller exists and the number of processors is not known a priori.
Abstract: This note presents an efficient algorithm, requiring O(n log n) message passes, for finding the largest (or smallest) of a set of n uniquely numbered processors arranged in a circle, in which no central controller exists and the number of processors is not known a priori.
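An O(n log n) scheme of the Hirschberg-Sinclair kind can be illustrated with a centralized, synchronous simulation: in phase k each surviving candidate probes 2^k neighbors in both directions, and only candidates whose identifier beats everything on both probe paths continue. The message count here is an upper bound, since replies are charged even for probes killed en route:

```python
def hs_leader(ids):
    # Centralized simulation of decentralized extrema-finding on a ring.
    # Each phase at most halves the candidates while probe distances only
    # double, giving the O(n log n) message bound.
    n = len(ids)
    candidates, messages, k = set(range(n)), 0, 0
    while len(candidates) > 1:
        dist = 2 ** k
        survivors = set()
        for i in candidates:
            ok = True
            for step in (1, -1):
                path = [(i + step * d) % n for d in range(1, dist + 1)]
                messages += 2 * len(path)        # probe out + reply back
                if any(ids[j] > ids[i] for j in path):
                    ok = False
            if ok:
                survivors.add(i)
        candidates, k = survivors, k + 1
    return ids[candidates.pop()], messages

print(hs_leader([3, 7, 1, 9, 4, 2]))   # (9, 40)
```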

Journal ArticleDOI
TL;DR: An algorithm is presented for constructing a quadtree for a region given its boundary in the form of a chain code that reveals that its execution time is proportional to the product of the perimeter and the log of the diameter of the region.
Abstract: An algorithm is presented for constructing a quadtree for a region given its boundary in the form of a chain code. Analysis of the algorithm reveals that its execution time is proportional to the product of the perimeter and the log of the diameter of the region.
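For readers unfamiliar with the data structure itself, a quadtree can be built directly from a binary image by recursive subdivision. This is a simpler route than the paper's chain-code algorithm and is shown only to illustrate what is being constructed:

```python
def build_quadtree(grid, x, y, size):
    # Leaves are 'B' (all black) or 'W' (all white); internal nodes are
    # four-element lists in NW, NE, SW, SE order. `size` is a power of two
    # and (x, y) is the quadrant's top-left corner.
    vals = {grid[y + dy][x + dx] for dy in range(size) for dx in range(size)}
    if len(vals) == 1:
        return 'B' if vals.pop() else 'W'
    half = size // 2
    return [build_quadtree(grid, x, y, half),
            build_quadtree(grid, x + half, y, half),
            build_quadtree(grid, x, y + half, half),
            build_quadtree(grid, x + half, y + half, half)]

grid = [[1, 1, 0, 0],
        [1, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
print(build_quadtree(grid, 0, 0, 4))   # ['B', 'W', 'W', 'W']
```

The paper's contribution is building the same structure from the region's boundary alone, in time proportional to perimeter times log diameter rather than to image area.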

Journal ArticleDOI
TL;DR: This paper presents an algorithm for converting from quadtrees to a simple class of boundary codes and is shown to have an execution time proportional to the perimeter of the region.
Abstract: There has been recent interest in the use of quadtrees to represent regions in an image. It thus becomes desirable to develop efficient methods of conversion between quadtrees and other types of region representations. This paper presents an algorithm for converting from quadtrees to a simple class of boundary codes. The algorithm is shown to have an execution time proportional to the perimeter of the region.

Journal ArticleDOI
TL;DR: The Pilot operating system provides a single-user, single language environment for higher level software on a powerful personal computer, whose features include virtual memory, a large “flat” file system, streams, network communication facilities, and concurrent programming support.
Abstract: The Pilot operating system provides a single-user, single language environment for higher level software on a powerful personal computer. Its features include virtual memory, a large “flat” file system, streams, network communication facilities, and concurrent programming support. Pilot thus provides rather more powerful facilities than are normally associated with personal computers. The exact facilities provided display interesting similarities to and differences from corresponding facilities provided in large multi-user systems. Pilot is implemented entirely in Mesa, a high-level system programming language. The modularization of the implementation displays some interesting aspects in terms of both the static structure and dynamic interactions of the various components.


Journal ArticleDOI
TL;DR: [diagram: high-level specifications (§2.2) and low-level specifications (§2.3), related by a formal mapping and consistency proof (§3.2) and standard Hoare-style code verification (§3.3)]

Journal ArticleDOI
TL;DR: In this article, a method is presented for computing machine-independent, minimal perfect hash functions of the form: hash value ← key length + the associated value of the key's first character + the associated value of the key's last character.
Abstract: A method is presented for computing machine independent, minimal perfect hash functions of the form: hash value ← key length + the associated value of the key's first character + the associated value of the key's last character. Such functions allow single probe retrieval from minimally sized tables of identifier lists. Application areas include table lookup for reserved words in compilers and filtering high frequency words in natural language processing. Functions for Pascal's reserved words, Pascal's predefined identifiers, frequently occurring English words, and month abbreviations are presented as examples.
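The mechanics of such a function are easy to demonstrate. The character-value table below is hand-made for four hypothetical keys (the paper derives such tables automatically by search); the hash values land on a gap-free range, giving single-probe retrieval:

```python
def make_hash(char_values):
    # Minimal perfect hash of the stated form:
    #   hash(key) = len(key) + value(first char) + value(last char)
    def h(key):
        return len(key) + char_values[key[0]] + char_values[key[-1]]
    return h

# Hand-made character values for four example keys, chosen by inspection:
values = {'j': 0, 'n': 0, 'f': 0, 'b': 1, 'm': 2, 'r': 0, 'a': 3}
h = make_hash(values)

table = [None] * 4
for key in ("jan", "feb", "mar", "apr"):
    table[h(key) - 3] = key   # h maps the keys onto 3..6 with no gaps
print(table)                  # ['jan', 'feb', 'mar', 'apr']
```

A lookup computes h once and compares against the single table slot; a miss is detected by the string comparison failing.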

Journal ArticleDOI
TL;DR: An extensive simulation effort directed at evaluating the merits of two scheduling strategies, FCFS and SSTF, for moving-arm disks under a stationary request arrival process seems to confirm the overall superiority of Shortest-Seek-Time-First (SSTF), particularly for medium and heavy traffic.
Abstract: We report on a rather extensive simulation effort directed at evaluating the merits of two scheduling strategies, FCFS and SSTF, for moving-arm disks under stationary request arrival process. For First-Come-First-Served (FCFS) scheduling, analytic results for the mean waiting time are also given (in a closed form). If the objective of a schedule is to minimize the mean waiting time (or queue size) and its variance, the results seem to confirm the overall superiority of Shortest-Seek-Time-First (SSTF), particularly for medium and heavy traffic. This holds also when the input is highly correlated or addresses the cylinders nonuniformly. These results contradict some statements published in recent years. The domain of policies where SSTF is optimal is considered. The simulation methodology is described in some detail.
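The difference between the two policies is visible even without a queueing model. The sketch below compares total head movement on a single static request queue (a common textbook example, not the paper's stationary-arrival simulation):

```python
def fcfs_seek(start, requests):
    # Total head movement when requests are served in arrival order.
    total, pos = 0, start
    for r in requests:
        total += abs(r - pos)
        pos = r
    return total

def sstf_seek(start, requests):
    # Total head movement when the closest pending request is always next.
    pending, total, pos = list(requests), 0, start
    while pending:
        nxt = min(pending, key=lambda r: abs(r - pos))
        pending.remove(nxt)
        total += abs(nxt - pos)
        pos = nxt
    return total

reqs = [98, 183, 37, 122, 14, 124, 65, 67]         # cylinder numbers
print(fcfs_seek(53, reqs), sstf_seek(53, reqs))    # 640 236
```

The paper's harder question is whether SSTF's shorter seeks also win on mean waiting time and variance under continuous arrivals, where starvation of far cylinders becomes possible.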

Journal ArticleDOI
David Harel1
TL;DR: The authors attempt to provide a reasonable definition of, or rather criteria for, folk theorems, followed by a detailed example illustrating the ideas; one may take a piece of folklore and show that it is a theorem, or take a theorem and show that it is folklore.
Abstract: We suggest criteria for a statement to be a folk theorem and illustrate the ideas with a detailed example.

Journal ArticleDOI
TL;DR: An experiment is described to test the hypothesis that certain features of natural language provide a useful guide for the human engineering of interactive command languages; the results establish that a syntax employing familiar, descriptive, everyday words and well-formed English phrases contributes to a language that can be easily and effectively used.
Abstract: The work reported here stems from our deep belief that improved human engineering can add significantly to the acceptance and use of computer technology.In particular, this report describes an experiment to test the hypothesis that certain features of natural language provide a useful guide for the human engineering of interactive command languages. The goal was to establish that a syntax employing familiar, descriptive, everyday words and well-formed English phrases contributes to a language that can be easily and effectively used. Users with varying degrees of interactive computing experience used two versions of an interactive text editor; one with an English-based command syntax in the sense described above, the other with a more notational syntax. Performance differences strongly favored the English-based editor.

Journal ArticleDOI
TL;DR: Tests on randomly generated lists of various combinations of list length and small sortedness ratios indicate that Straight Insertion Sort is best for small or very nearly sorted lists and that Quickersort is best otherwise.
Abstract: Straight Insertion Sort, Shellsort, Straight Merge Sort, Quickersort, and Heapsort are compared on nearly sorted lists. The ratio of the minimum number of list elements which must be removed so that the remaining portion of the list is in order to the size of the list is the authors' measure of sortedness. Tests on randomly generated lists of various combinations of list length and small sortedness ratios indicate that Straight Insertion Sort is best for small or very nearly sorted lists and that Quickersort is best otherwise. Cook and Kim also show that a combination of the Straight Insertion Sort and Quickersort with merging yields a sorting method that performs as well as or better than either Straight Insertion Sort or Quickersort on nearly sorted lists.
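The sortedness measure itself is computable in O(n log n): the minimum number of removals equals n minus the length of a longest nondecreasing subsequence. A sketch, assuming this reading of the measure:

```python
import bisect

def sortedness_ratio(lst):
    # Minimum number of elements whose removal leaves the list in order,
    # divided by the list length. That minimum is n minus the length of a
    # longest nondecreasing subsequence, found here by patience sorting.
    tails = []                                # tails[i]: smallest tail of a
    for x in lst:                             # nondecreasing run of len i+1
        i = bisect.bisect_right(tails, x)     # bisect_right allows ties
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
    return (len(lst) - len(tails)) / len(lst)

print(sortedness_ratio([1, 2, 9, 3, 4, 5]))  # removing the 9 sorts the list
```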

Journal ArticleDOI
TL;DR: Two new computational algorithms for product form networks are presented and a comprehensive treatment of these algorithms and the two important existing algorithms, convolution and mean value analysis, is given.
Abstract: In the last two decades there has been special interest in queueing networks with a product form solution. These have been widely used as models of computer systems and communication networks. Two new computational algorithms for product form networks are presented. A comprehensive treatment of these algorithms and the two important existing algorithms, convolution and mean value analysis, is given.
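Of the two existing algorithms, mean value analysis admits a very compact statement for a closed, single-class network of load-independent queueing centers. The recursion below is the standard textbook form, not the paper's new algorithms:

```python
def mva(demands, customers):
    # Exact mean value analysis for a closed, single-class product form
    # network. demands[k] is the service demand D_k = V_k * S_k at center
    # k. Returns system throughput and mean queue length per center.
    q = [0.0] * len(demands)
    x = 0.0
    for n in range(1, customers + 1):
        r = [d * (1.0 + qk) for d, qk in zip(demands, q)]  # residence times
        x = n / sum(r)                                     # throughput
        q = [x * rk for rk in r]                           # Little's law
    return x, q

print(mva([0.6, 0.3], 4))
```

The arrival theorem justifies the first line of the loop: a customer arriving at a center sees the queue lengths of the network with one fewer customer.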

Journal ArticleDOI
TL;DR: A practical method of partial-match retrieval in very large data files, in which a binary code word is associated with each record and record descriptors are combined into a derived descriptor for a block of several records, which serves as an index for the block as a whole.
Abstract: In this paper we describe a practical method of partial-match retrieval in very large data files. A binary code word, called a descriptor, is associated with each record of the file. These record descriptors are then used to form a derived descriptor for a block of several records, which will serve as an index for the block as a whole; hence, the name “indexed descriptor files.”First the structure of these files is described and a simple, efficient retrieval algorithm is presented. Then its expected behavior, in terms of storage accesses, is analyzed in detail. Two different file creation procedures are sketched, and a number of ways in which the file organization can be “tuned” to a particular application are suggested.
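The descriptor mechanism can be sketched with superimposed coding: each field value sets a few hashed bits in the record's code word, block descriptors are the OR of their records' descriptors, and a block is fetched only if its descriptor covers the query's bits. The hash choice and widths here are illustrative, and false drops are possible:

```python
import hashlib

WIDTH = 64  # descriptor width in bits (an illustrative choice)

def descriptor(values, bits_per_value=3):
    # Superimposed code word: each field value sets a few hashed bit
    # positions; the descriptor is the OR over all field values.
    d = 0
    for v in values:
        h = hashlib.sha256(str(v).encode()).digest()
        for i in range(bits_per_value):
            d |= 1 << (h[i] % WIDTH)
    return d

def block_descriptor(records):
    # Derived descriptor for a block: the OR of its records' descriptors,
    # indexing the block as a whole.
    d = 0
    for rec in records:
        d |= descriptor(rec)
    return d

def candidate_blocks(blocks, query_values):
    # Partial-match retrieval: a block may hold a matching record only if
    # its descriptor has 1-bits wherever the query descriptor does.
    # (Qualifying blocks must still be scanned to reject false drops.)
    q = descriptor(query_values)
    return [i for i, block in enumerate(blocks)
            if block_descriptor(block) & q == q]

blocks = [[("ada", "compiler"), ("apl", "interpreter")],
          [("sql", "database"), ("lisp", "interpreter")]]
print(candidate_blocks(blocks, ["sql"]))  # includes block 1
```

The paper's analysis concerns exactly this tradeoff: descriptor width and bits per value against the expected number of blocks accessed.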

Journal ArticleDOI
TL;DR: A framework for the quantitative analysis of the impact of these factors on the performance of DBMS is presented and the main factors which determine the behavior of these systems are pointed out and analyzed independently and aggregated to yield a global performance evaluation.
Abstract: Consistency control has to be enforced in database management systems (DBMS) where several transactions may concurrently access the database. This control is usually achieved by dividing the database into locking units or granules, and by specifying a locking policy which ensures integrity of the information. However, a drawback of integrity enforcement through locking policies is the degradation of the global system performance. This is mainly due to the restriction imposed by the locking policies to the access of transactions to the database, and to the overheads involved with the management of locks. A framework for the quantitative analysis of the impact of these factors on the performance of DBMS is presented in this paper. In a first step, the main factors which determine the behavior of these systems are pointed out and analyzed independently. The results hereby obtained are aggregated in a second step to yield a global performance evaluation. Throughout this hierarchical modeling approach various analytical techniques are used and the results are illustrated by numerical examples. The paper concludes by pointing out the final results' sensitivity to some basic assumptions concerning transaction behavior and the need for more experimental studies in this area.

Journal ArticleDOI
TL;DR: Real-time debug and test is still a “lost world” compared to the “civilization” developed in other areas of software, says Robert L. Glass.
Abstract: Real-time debug and test is still a “lost world” compared to the “civilization” developed in other areas of software, says Robert L. Glass. From a survey of current practice across several projects and companies, he defines a state of the art for this problem area and suggests improvements which will ease the practitioner's task.

Journal ArticleDOI
Yonathan Bard1
TL;DR: A model of an I/O subsystem in which devices can be accessed from multiple CPUs and/or via alternative channel and control unit paths and central to the model's algorithm is the estimation of the rotational position sensing (RPS) miss probabilities by means of the maximum entropy principle.
Abstract: This paper presents a model of an I/O subsystem in which devices can be accessed from multiple CPUs and/or via alternative channel and control unit paths. The model estimates access response times, given access rates for all CPU-device combinations. Central to the model's algorithm is the estimation of the rotational position sensing (RPS) miss probabilities by means of the maximum entropy principle.

Journal ArticleDOI
TL;DR: It is shown that communication considerations alone dictate that any VLSI design for computing the 2n-bit product of two n-bit integers must satisfy the constraint AT2 ≥ n2/64.
Abstract: The need to transfer information between processing elements can be a major factor in determining the performance of a VLSI circuit. We show that communication considerations alone dictate that any VLSI design for computing the 2n-bit product of two n-bit integers must satisfy the constraint AT2 ≥ n2/64 where A is the area of the chip and T is the time required to perform the computation. This same tradeoff applies to circuits which can shift n-bit words through n different positions.

Journal ArticleDOI
TL;DR: In 1979, Ralston was investigating curricula for discrete mathematics and Shaw was participating in evaluations of Curriculum ’78 and the role of mathematics in undergraduate computer science; they combined their notes to form a criticism of the mathematical content of Curriculum ’78 that appeared in Communications of the ACM.
Abstract: In 1979, Ralston was investigating curricula for discrete mathematics [95] and Shaw was participating in evaluations of Curriculum ’78 and the role of mathematics in undergraduate computer science. They combined their notes to form a criticism of the mathematical content of Curriculum ’78 that appeared in Communications of the ACM [94]. Some comments on the paper appeared a few months later [69].