
Showing papers in "Communications of The ACM in 1985"


Journal ArticleDOI
TL;DR: This article shows that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules, and analyzes the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule by a factor that depends on the size of fast memory.
Abstract: In this article we study the amortized efficiency of the “move-to-front” and similar rules for dynamically maintaining a linear list. Under the assumption that accessing the ith element from the front of the list takes t(i) time, we show that move-to-front is within a constant factor of optimum among a wide class of list maintenance rules. Other natural heuristics, such as the transpose and frequency count rules, do not share this property. We generalize our results to show that move-to-front is within a constant factor of optimum as long as the access cost is a convex function. We also study paging, a setting in which the access cost is not convex. The paging rule corresponding to move-to-front is the “least recently used” (LRU) replacement rule. We analyze the amortized complexity of LRU, showing that its efficiency differs from that of the off-line paging rule (Belady's MIN algorithm) by a factor that depends on the size of fast memory. No on-line paging algorithm has better amortized performance.

2,378 citations
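The move-to-front rule analyzed above can be sketched in a few lines (a minimal illustration on a plain Python list; `mtf_access` is a hypothetical helper name, not from the paper):

```python
# Sketch of the move-to-front rule: each accessed element is moved to
# the head of the list, so recently accessed items stay cheap to find.
def mtf_access(lst, x):
    """Return the access cost t(i) (1-based position) and apply move-to-front."""
    i = lst.index(x)           # linear scan from the front costs i+1
    lst.insert(0, lst.pop(i))  # move the accessed element to the front
    return i + 1

lst = ["a", "b", "c", "d"]
cost = mtf_access(lst, "c")    # scans past "a" and "b" -> cost 3
# lst is now ["c", "a", "b", "d"]
```

Under the paper's cost model, this simple rule is within a constant factor of any list maintenance strategy, including ones that know the whole access sequence in advance.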


Journal ArticleDOI
TL;DR: The large-scale automated transaction systems of the near future can be designed to protect the privacy and maintain the security of both individuals and organizations.
Abstract: The large-scale automated transaction systems of the near future can be designed to protect the privacy and maintain the security of both individuals and organizations.

1,759 citations


Journal ArticleDOI
John D. Gould1, Clayton Lewis
TL;DR: Data are presented which show that the principles of system design are not always intuitive to designers, and the arguments which designers often offer for not using these principles are identified.
Abstract: This article is both theoretical and empirical. Theoretically, it describes three principles of system design which we believe must be followed to produce a useful and easy to use computer system. These principles are: early and continual focus on users; empirical measurement of usage; and iterative design whereby the system (simulated, prototype, and real) is modified, tested, modified again, tested again, and the cycle is repeated again and again. This approach is contrasted to other principled design approaches, for example, get it right the first time, reliance on design guidelines. Empirically, the article presents data which show that our design principles are not always intuitive to designers; identifies the arguments which designers often offer for not using these principles—and answers them; and provides an example in which our principles have been used successfully.

1,486 citations


Journal ArticleDOI
TL;DR: The 1-out-of-2 oblivious transfer allows one party to transfer exactly one secret, out of two recognizable secrets, to his counterpart, while the sender remains ignorant of which secret has been received.
Abstract: Randomized protocols for signing contracts, certified mail, and flipping a coin are presented. The protocols use a 1-out-of-2 oblivious transfer subprotocol which is axiomatically defined.The 1-out-of-2 oblivious transfer allows one party to transfer exactly one secret, out of two recognizable secrets, to his counterpart. The first (second) secret is received with probability one half, while the sender is ignorant of which secret has been received.An implementation of the 1-out-of-2 oblivious transfer, using any public key cryptosystem, is presented.

1,257 citations


Journal ArticleDOI
TL;DR: The Cosmic Cube is a hardware simulation of a future VLSI implementation that will consist of single-chip nodes; it offers high degrees of concurrency in applications and suggests that future machines with thousands of nodes are both feasible and attractive.
Abstract: Sixty-four small computers are connected by a network of point-to-point communication channels in the plan of a binary 6-cube. This “Cosmic Cube” computer is a hardware simulation of a future VLSI implementation that will consist of single-chip nodes. The machine offers high degrees of concurrency in applications and suggests that future machines with thousands of nodes are both feasible and attractive.

1,232 citations


Journal ArticleDOI
TL;DR: An evaluation of a large, operational full-text document-retrieval system shows the system to be retrieving less than 20 percent of the documents relevant to a particular search.
Abstract: An evaluation of a large, operational full-text document-retrieval system (containing roughly 350,000 pages of text) shows the system to be retrieving less than 20 percent of the documents relevant to a particular search. The findings are discussed in terms of the theory and practice of full-text document retrieval.

871 citations


Journal ArticleDOI
TL;DR: A frame-based representation facility contributes to a knowledge system's ability to reason and can assist the system designer in determining strategies for controlling the system's reasoning.
Abstract: A frame-based representation facility contributes to a knowledge system's ability to reason and can assist the system designer in determining strategies for controlling the system's reasoning.

858 citations


Journal ArticleDOI
TL;DR: Unless computer-mediated communication systems are structured, users will be overloaded with information, but structure should be imposed by individuals and user groups according to their needs and abilities, rather than through general software features.
Abstract: Unless computer-mediated communication systems are structured, users will be overloaded with information. But structure should be imposed by individuals and user groups according to their needs and abilities, rather than through general software features.

704 citations


Journal ArticleDOI
TL;DR: Rule-based systems automate problem-solving know-how, provide a means for capturing and refining human expertise, and are proving to be commercially viable.
Abstract: Rule-based systems automate problem-solving know-how, provide a means for capturing and refining human expertise, and are proving to be commercially viable.

597 citations


Journal ArticleDOI
TL;DR: The Manchester project has developed a powerful dataflow processor based on dynamic tagging that is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.
Abstract: The Manchester project has developed a powerful dataflow processor based on dynamic tagging. This processor is large enough to tackle realistic applications and exhibits impressive speedup for programs with sufficient parallelism.

467 citations


Journal ArticleDOI
TL;DR: Optimizing compilers are used to compile programming languages down to instructions that are as unencumbered as microinstructions in a large virtual address space, and to make the instruction cycle time as fast as possible.
Abstract: Reduced instruction set computers aim for both simplicity in hardware and synergy between architectures and compilers. Optimizing compilers are used to compile programming languages down to instructions that are as unencumbered as microinstructions in a large virtual address space, and to make the instruction cycle time as fast as possible.

Journal ArticleDOI
TL;DR: A methodology for setting price, utilization, and capacity, taking into account the value of users' time is offered, and the implications of alternative control structures determined by the financial responsibility assigned to the data processing manager are examined.
Abstract: This article studies the effects of queueing delays, and users' related costs, on the management and control of computing resources. It offers a methodology for setting price, utilization, and capacity, taking into account the value of users' time, and it examines the implications of alternative control structures, determined by the financial responsibility assigned to the data processing manager.

Journal ArticleDOI
TL;DR: Orthogonal Latin squares—a new method for testing compilers—yields the informational equivalent of exhaustive testing at a fraction of the cost.
Abstract: Orthogonal Latin squares—a new method for testing compilers—yields the informational equivalent of exhaustive testing at a fraction of the cost. The method has been used successfully in designing some of the tests in the Ada Compiler Validation Capability (ACVC) test suite.

Journal ArticleDOI
TL;DR: A former member of the SDIO Panel on Computing in Support of Battle Management explains why he believes the "Star Wars" effort will not achieve its stated goals.
Abstract: A former member of the SDIO Panel on Computing in Support of Battle Management explains why he believes the “Star Wars” effort will not achieve its stated goals.

Journal ArticleDOI
TL;DR: A heuristic algorithm is proposed for dynamic calculation of the median and other quantiles; it has a very small, fixed storage requirement regardless of the number of observations, making it ideal for implementation in a quantile chip for industrial controllers and recorders.
Abstract: A heuristic algorithm is proposed for dynamic calculation of the median and other quantiles. The estimates are produced dynamically as the observations are generated. The observations are not stored; therefore, the algorithm has a very small and fixed storage requirement regardless of the number of observations. This makes it ideal for implementing in a quantile chip that can be used in industrial controllers and recorders. The algorithm is further extended to histogram plotting. The accuracy of the algorithm is analyzed.
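The constant-storage idea can be illustrated with a deliberately simplified stochastic-approximation tracker — this is NOT the paper's heuristic, only a sketch of how a quantile can be estimated without storing observations (`quantile_tracker` is a hypothetical name):

```python
import random

# A simplified constant-storage quantile tracker (a stochastic-approximation
# update, not the paper's algorithm): each new observation nudges the
# estimate up or down, so no past observations need to be stored.
def quantile_tracker(q, step=0.05):
    """Return an update function that tracks the q-quantile in O(1) space."""
    est = 0.0
    def update(x):
        nonlocal est
        if x > est:
            est += step * q        # move up when the observation is larger
        elif x < est:
            est -= step * (1 - q)  # move down when it is smaller
        return est
    return update

random.seed(1)
track_median = quantile_tracker(0.5)
for _ in range(20000):
    est = track_median(random.uniform(0, 10))
# est settles near the true median of Uniform(0, 10), i.e. about 5
```

The asymmetric step sizes balance exactly when the fraction of observations above the estimate is q, which is what pins the estimate near the target quantile.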

Journal ArticleDOI
P.-J. Courtois1
TL;DR: Models of large and complex systems can often be reduced to smaller sub-models, for easier analysis, by a process known as decomposition, and certain criteria for successful decompositions can be established.
Abstract: Models of large and complex systems can often be reduced to smaller sub-models, for easier analysis, by a process known as decomposition. Certain criteria for successful decompositions can be established.

Journal ArticleDOI
TL;DR: Prior experience with computers (i.e., prior to purchase of the home computer) was found to have a significant impact on the time allocation patterns in the household.
Abstract: An empirical study of 282 users of home computers was conducted to explore the relationship between computer use and shifts in time allocation patterns in the household. Major changes in time allocated to various activities were detected. Prior experience with computers (i.e., prior to purchase of the home computer) was found to have a significant impact on the time allocation patterns in the household. The study provides evidence that significant behavior changes can occur when people adopt personal computers in their homes.

Journal ArticleDOI
TL;DR: If the unique information-processing capabilities of protein enzymes could be adapted for computers, then evolvable, more efficient systems for such applications as pattern recognition and process control are in principle possible.
Abstract: If the unique information-processing capabilities of protein enzymes could be adapted for computers, then evolvable, more efficient systems for such applications as pattern recognition and process control are in principle possible.

Journal ArticleDOI
TL;DR: A systemic view of DSS can provide a concrete framework for effective design of DSS and can also serve as a basis for accumulating DSS research results.
Abstract: A systemic view of DSS can provide a concrete framework for effective design of DSS and can also serve as a basis for accumulating DSS research results.

Journal ArticleDOI
TL;DR: Experiments show that the behavior of the heuristics on real data is more closely described by the amortized analyses than by the probabilistic analyses.
Abstract: The performance of sequential search can be enhanced by the use of heuristics that move elements closer to the front of the list as they are found. Previous analyses have characterized the performance of such heuristics probabilistically. In this article, we use amortization to analyze the heuristics in a worst-case sense; the relative merit of the heuristics in this analysis differs from that in the probabilistic analyses. Experiments show that the behavior of the heuristics on real data is more closely described by the amortized analyses than by the probabilistic analyses.
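One of the heuristics compared in such analyses is the transpose rule, which can be sketched as follows (a minimal illustration on a plain Python list; `transpose_access` is a hypothetical name):

```python
# Sketch of the transpose rule: the accessed element swaps places with
# its immediate predecessor, so frequently found items drift toward the
# front one position per access, more conservatively than move-to-front.
def transpose_access(lst, x):
    """Return the access cost (1-based position) and apply the transpose rule."""
    i = lst.index(x)
    if i > 0:
        lst[i - 1], lst[i] = lst[i], lst[i - 1]  # swap with predecessor
    return i + 1

lst = ["a", "b", "c", "d"]
transpose_access(lst, "c")   # "c" swaps with "b" -> cost 3
# lst is now ["a", "c", "b", "d"]
```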

Journal ArticleDOI
TL;DR: A number of data structures for representing images by quadtrees without pointers are discussed, and sequences of approximations using various combinations of locational codes of GB and GW nodes are proposed and shown to be superior to approximation methods based on truncation of nodes below a certain level in the tree.
Abstract: A number of data structures for representing images by quadtrees without pointers are discussed. The image is treated as a collection of leaf nodes. Each leaf node is represented by use of a locational code corresponding to a sequence of directional codes that locate the leaf along a path from the root of the tree. Somewhat related is the concept of a forest which is a representation that consists of a collection of maximal blocks. It is reviewed and refined to enable the representation of a quadtree as a sequence of approximations. In essence, all BLACK and WHITE nodes are said to be of type GB and GW, respectively. GRAY nodes are of type GB if at least two of their sons are of type GB; otherwise, they are of type GW. Sequences of approximations using various combinations of locational codes of GB and GW nodes are proposed and shown to be superior to approximation methods based on truncation of nodes below a certain level in the tree. These approximations have two important properties. First, they are progressive in the sense that as more of the image is transmitted, the receiving device can construct a better approximation (contrast with facsimile methods which transmit the image one line at a time). Second, they are proved to lead to compression in the sense that they never require more than MIN(B, W) nodes where B and W correspond to the number of BLACK and WHITE nodes in the original quadtree. Algorithms are given for constructing the approximation sequences as well as decoding them to rebuild the original quadtree.
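The locational-code idea can be sketched as packing the root-to-leaf path of directional codes into a single integer (a minimal illustration; the quadrant names and base-4 packing here are assumptions, not the paper's exact encoding):

```python
# Sketch: a locational code packs the root-to-leaf path of directional
# codes (one quadrant per level) into a single base-4 integer, so a
# pointerless quadtree can store each leaf as just a number.
DIRS = {"NW": 0, "NE": 1, "SW": 2, "SE": 3}

def locational_code(path):
    """Encode a root-to-leaf path such as ["SW", "NE"] as an integer."""
    code = 0
    for step in path:
        code = code * 4 + DIRS[step]
    return code

def decode(code, depth):
    """Recover the directional path from a locational code and leaf depth."""
    names = {v: k for k, v in DIRS.items()}
    steps = []
    for _ in range(depth):
        steps.append(names[code % 4])
        code //= 4
    return steps[::-1]

# Round trip: a leaf two levels deep in the SW, then NE quadrant.
code = locational_code(["SW", "NE"])   # 2*4 + 1 = 9
# decode(code, 2) recovers ["SW", "NE"]
```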

Journal ArticleDOI
TL;DR: A generalized algorithm for graph coloring by implicit enumeration is formulated and a number of backtracking sequential methods are discussed in terms of the generalized algorithm.
Abstract: A generalized algorithm for graph coloring by implicit enumeration is formulated. A number of backtracking sequential methods are discussed in terms of the generalized algorithm. Some are revealed to be partially correct and inexact. A few corrections to the invalid algorithms, which cause these algorithms to guarantee optimal solutions, are proposed. Finally, some computational results and remarks on the practical relevance of improved implicit enumeration algorithms are given.
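Backtracking coloring by implicit enumeration can be illustrated with a minimal sketch (assuming an adjacency-list dict; `color_graph` is a hypothetical name, and this shows only the plain backtracking scheme, not the paper's generalized algorithm):

```python
# Sketch of backtracking graph coloring: try each color for the next
# vertex, pruning any assignment that conflicts with an already-colored
# neighbor; implicit enumeration never materializes the full search tree.
def color_graph(adj, k):
    """Return a vertex->color dict using at most k colors, or None."""
    order = list(adj)
    coloring = {}

    def backtrack(idx):
        if idx == len(order):
            return True
        v = order[idx]
        for c in range(k):
            if all(coloring.get(u) != c for u in adj[v]):
                coloring[v] = c
                if backtrack(idx + 1):
                    return True
                del coloring[v]  # undo and try the next color
        return False

    return coloring if backtrack(0) else None

# A 4-cycle is 2-colorable; a triangle is not.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```

Exhausting all k colors for a vertex forces backtracking to an earlier vertex, which is exactly the point where incorrect published variants (as the abstract notes) could miss optimal solutions.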

Journal ArticleDOI
TL;DR: A general-purpose data-compression routine—implemented on the IMS database system—makes use of context to achieve better compression than Huffman's method applied character by character.
Abstract: A general-purpose data-compression routine—implemented on the IMS database system—makes use of context to achieve better compression than Huffman's method applied character by character. It demonstrates that a wide variety of data can be compressed effectively using a single, fixed compression routine with almost no working storage.

Journal ArticleDOI
TL;DR: A group of 269 first-semester freshmen was used to predict both performance in an introductory computer science course and first- semester college grade point average by using information regarding the students' programs and performance in high school along with American College Testing Program (ACT) test scores.
Abstract: A group of 269 first-semester freshmen was used to predict both performance in an introductory computer science course and first-semester college grade point average by using information regarding the students' programs and performance in high school along with American College Testing Program (ACT) test scores.

Journal ArticleDOI
TL;DR: Effective development environments for discrete event simulation models should reduce development costs and improve model performance by interposing an intermediate form between a conceptual model and an executable representation of that model.
Abstract: Effective development environments for discrete event simulation models should reduce development costs and improve model performance. A model specification language used in a model development environment is defined. This approach is intended to reduce modeling costs by interposing an intermediate form between a conceptual model (the model as it exists in the mind of the modeler) and an executable representation of that model. As a model specification is constructed, the incomplete specification can be analyzed to detect some types of errors and to provide some types of model documentation. The primitives used in this specification language, called a condition specification (CS), are carefully defined. A specification for the classical patrolling repairman model is used to illustrate this language. Some possible diagnostics and some untestable model specification properties, based on such a representation, are summarized.

Journal ArticleDOI
TL;DR: This article develops some algorithms and tools for solving matrix problems on parallel processing computers that are synchronized through data-flow alone, which makes global synchronization unnecessary and enables the algorithms to be implemented on machines with very simple operating systems and communication protocols.
Abstract: In this article we develop some algorithms and tools for solving matrix problems on parallel processing computers. Operations are synchronized through data-flow alone, which makes global synchronization unnecessary and enables the algorithms to be implemented on machines with very simple operating systems and communication protocols. As examples, we present algorithms that form the main modules for solving Liapounov matrix equations. We compare this approach to wave front array processors and systolic arrays, and note its advantages in handling missized problems, in evaluating variations of algorithms or architectures, in moving algorithms from system to system, and in debugging parallel algorithms on sequential machines.

Journal ArticleDOI
TL;DR: Project Athena at MIT is an experiment to explore the potential uses of advanced computer technology in the university curriculum.
Abstract: Project Athena at MIT is an experiment to explore the potential uses of advanced computer technology in the university curriculum. About 60 different educational development projects, spanning virtually all of MIT's academic departments, are already in progress.

Journal ArticleDOI
TL;DR: A large quantity of well-respected software is tested against a series of metrics designed to measure program lucidity, with intriguing results.
Abstract: A large quantity of well-respected software is tested against a series of metrics designed to measure program lucidity, with intriguing results. Although slanted toward software written in the C language, the measures are adaptable for analyzing most high-level languages.

Journal ArticleDOI
TL;DR: The essential question becomes one of autonomy: Should the automated system serve as the human pilot's assistant, or vice versa?
Abstract: As more and more automation is incorporated in aircraft, the essential question becomes one of autonomy: Should the automated system serve as the human pilot's assistant, or vice versa?

Journal ArticleDOI
TL;DR: CS2 draws attention to the application of software-engineering techniques to the design and implementation of programs that manipulate more complex data structures than those used in CS1.
Abstract: CS2 draws attention to the application of software-engineering techniques to the design and implementation of programs that manipulate more complex data structures than those used in CS1.