
Showing papers by "Carnegie Mellon University" published in 1988


Journal ArticleDOI
01 Jan 1988-Nature
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal 'hidden' units come to represent important features of the task domain.
Abstract: We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal ‘hidden’ units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
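
A minimal sketch of the procedure described above, assuming a tiny two-layer network, synthetic data, and an illustrative learning rate (none of which come from the paper): the connection weights are repeatedly adjusted by gradient descent to reduce the squared difference between the actual and desired output vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))             # 4 input vectors with 3 features each
Y = rng.normal(size=(4, 2))             # desired output vectors
W1 = rng.normal(size=(3, 5))            # input-to-hidden weights
W2 = rng.normal(size=(5, 2))            # hidden-to-output weights

for step in range(1000):
    H = np.tanh(X @ W1)                 # hidden ('internal') unit activations
    out = H @ W2                        # actual output vector
    err = out - Y                       # difference from the desired output
    # Back-propagate the error and adjust the connection weights.
    grad_W2 = H.T @ err
    grad_W1 = X.T @ ((err @ W2.T) * (1 - H ** 2))
    W2 -= 0.01 * grad_W2
    W1 -= 0.01 * grad_W1

print("mean squared error:", np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2))
```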

23,814 citations



Journal ArticleDOI
TL;DR: In this paper, an efficient algorithm is presented for evaluating the second-order Moller-Plesset (MP2) energy directly from two-electron integrals in the atomic orbital basis, i.e., the integrals are not stored.
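
For orientation, the quantity in question can be written in its standard textbook form (not reproduced from the paper): the closed-shell MP2 correlation energy in terms of molecular-orbital two-electron integrals (ia|jb) in chemists' notation and orbital energies; the contribution summarized above is evaluating it directly from atomic-orbital integrals, without storing them.

```latex
E^{(2)} \;=\; \sum_{i,j}^{\mathrm{occ}} \sum_{a,b}^{\mathrm{virt}}
\frac{(ia|jb)\,\bigl[\,2\,(ia|jb)-(ib|ja)\,\bigr]}
     {\varepsilon_i+\varepsilon_j-\varepsilon_a-\varepsilon_b}
```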

2,533 citations


Journal ArticleDOI
TL;DR: In this paper, a new method for using the data from Monte Carlo simulations that can increase the efficiency by 2 or more orders of magnitude is presented. The method is generally applicable to statistical models and lattice-gauge theories.
Abstract: We present a new method for using the data from Monte Carlo simulations that can increase the efficiency by 2 or more orders of magnitude. A single Monte Carlo simulation is sufficient to obtain complete thermodynamic information over the entire scaling region near a phase transition. The accuracy of the method is demonstrated by comparison with exact results for the d=2 Ising model. New results for d=2 eight-state Potts model are also presented. The method is generally applicable to statistical models and lattice-gauge theories.
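
A minimal sketch of the single-histogram idea described above, under assumed notation (the function name and the choice of mean energy as the observable are illustrative, not taken from the paper): an energy histogram collected in one simulation at inverse temperature beta0 is reweighted to estimate an average at a nearby temperature.

```python
import numpy as np

def reweighted_mean_energy(energies, counts, beta0, beta):
    """energies: histogram bin centers E; counts: H(E) sampled at beta0."""
    energies = np.asarray(energies, dtype=float)
    counts = np.asarray(counts, dtype=float)
    log_w = -(beta - beta0) * energies        # reweighting factor per bin
    log_w -= log_w.max()                      # subtract max exponent for stability
    w = counts * np.exp(log_w)
    return np.sum(energies * w) / np.sum(w)
```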

2,219 citations


Proceedings ArticleDOI
01 Jan 1988
TL;DR: ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following that can effectively follow real roads under certain field conditions.
Abstract: ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand.

1,784 citations


Journal ArticleDOI
TL;DR: Possible mechanisms through which support systems may influence the etiology of physical disease are outlined and conceptual and methodological guidelines for future research in this area are proposed.
Abstract: Although there has been a substantial effort to establish the beneficial effects of social support on health and well-being, relatively little work has focused on how social support influences physical health. This article outlines possible mechanisms through which support systems may influence the etiology of physical disease. I begin by reviewing research on the relations between social support and morbidity and between social support and mortality. I distinguish between various conceptualizations of social support used in the existing literature and provide alternative explanations of how each of these conceptualizations of the social environment could influence the etiology of physical disease. In each case, I address the psychological mediators (e.g., health relevant cognitions, affect, and health behaviors) as well as biologic links (e.g., neuroendocrine links to immune and cardiovascular function). I conclude by proposing conceptual and methodological guidelines for future research in this area, highlighting the unique contributions psychologists can make to this inherently interdisciplinary endeavor.

1,703 citations


Journal ArticleDOI
TL;DR: Observations of a prototype implementation are presented, changes in the areas of cache validation, server process structure, name translation, and low-level storage representation are motivated, and Andrew's ability to scale gracefully is quantitatively demonstrated.
Abstract: The Andrew File System is a location-transparent distributed file system that will eventually span more than 5000 workstations at Carnegie Mellon University. Large scale affects performance and complicates system operation. In this paper we present observations of a prototype implementation, motivate changes in the areas of cache validation, server process structure, name translation, and low-level storage representation, and quantitatively demonstrate Andrew's ability to scale gracefully. We establish the importance of whole-file transfer and caching in Andrew by comparing its performance with that of Sun Microsystems' NFS file system. We also show how the aggregation of files into volumes improves the operability of the system.
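
A toy sketch of the whole-file caching idea the paper evaluates (the class and method names are invented for illustration; this is not the Andrew implementation, which in its revised form replaces per-open validation with server-initiated callbacks): the client fetches whole files on open, caches them, and revalidates a cached copy against the server's version before reusing it.

```python
class Server:
    def __init__(self):
        self.files = {}                    # path -> (version, data)

    def fetch(self, path):
        return self.files[path]           # whole-file transfer

    def version(self, path):
        return self.files[path][0]        # cheap validation check

class Client:
    def __init__(self, server):
        self.server = server
        self.cache = {}                    # path -> (version, data)

    def open(self, path):
        cached = self.cache.get(path)
        if cached and cached[0] == self.server.version(path):
            return cached[1]               # valid cached copy: no file transfer
        version, data = self.server.fetch(path)
        self.cache[path] = (version, data)
        return data

server = Server()
server.files["/afs/notes.txt"] = (1, b"hello")
print(Client(server).open("/afs/notes.txt"))
```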

1,604 citations


Journal ArticleDOI
TL;DR: An approximation method for solving the minimum makespan problem of job shop scheduling that sequences the machines one by one, successively, each time taking the machine identified as a bottleneck among the machines not yet sequenced.
Abstract: We describe an approximation method for solving the minimum makespan problem of job shop scheduling. It sequences the machines one by one, successively, taking each time the machine identified as a bottleneck among the machines not yet sequenced. Every time after a new machine is sequenced, all previously established sequences are locally reoptimized. Both the bottleneck identification and the local reoptimization procedures are based on repeatedly solving certain one-machine scheduling problems. Besides this straight version of the Shifting Bottleneck Procedure, we have also implemented a version that applies the procedure to the nodes of a partial search tree. Computational testing shows that our approach yields consistently better results than other procedures discussed in the literature. A high point of our computational testing occurred when the enumerative version of the Shifting Bottleneck Procedure found in a little over five minutes an optimal schedule to a notorious ten machines/ten jobs problem on which many algorithms have been run for hours without finding an optimal solution.
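
A highly simplified sketch of the outer loop described above, with trivial stand-ins for both the bottleneck criterion and the one-machine solver (the real procedure solves one-machine maximum-lateness problems and then locally reoptimizes the previously fixed sequences); the data layout and function names are assumptions for illustration only.

```python
def one_machine_sequence(ops):
    # Stand-in for the one-machine subproblem: longest operations first.
    return sorted(ops, key=lambda op: -op["dur"])

def shifting_bottleneck(ops_by_machine):
    sequenced, remaining = {}, set(ops_by_machine)
    while remaining:
        # Treat the unsequenced machine with the largest workload as the bottleneck.
        bottleneck = max(remaining,
                         key=lambda m: sum(op["dur"] for op in ops_by_machine[m]))
        sequenced[bottleneck] = one_machine_sequence(ops_by_machine[bottleneck])
        remaining.remove(bottleneck)
        # A full implementation would locally reoptimize all fixed sequences here.
    return sequenced

jobs = {  # machine -> operations (job id, processing time); toy data
    "M1": [{"job": 1, "dur": 3}, {"job": 2, "dur": 5}],
    "M2": [{"job": 1, "dur": 2}, {"job": 2, "dur": 4}],
}
print(shifting_bottleneck(jobs))
```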

1,579 citations


Journal Article
TL;DR: Reviews current knowledge of the three determinants of tumor blood flow (arteriovenous pressure difference, viscous resistance, and geometric resistance) for normal and neoplastic tissues, the methods of their measurement, and the implications of the results for tumor growth and metastasis formation as well as for the detection and treatment of tumors.
Abstract: Blood flow rate in a vascular network is proportional to the pressure difference between the arterial and venous sides and inversely proportional to the viscous and geometric resistances. Despite rapid progress in recent years, there is a paucity of quantitative data on these three determinants of blood flow in tumors and several questions remain unanswered. This paper reviews our current knowledge of these three parameters for normal and neoplastic tissues, the methods of their measurements, and the implications of the results in the growth and metastasis formation as well as in the detection and treatment of tumors. Microvascular pressures in the arterial side are nearly equal in tumor and nontumorous vessels. Pressures in venular vessels, which are numerically dominant in tumors, are significantly lower in a tumor than those in a nontumorous tissue. Decreased intravascular pressure and increased interstitial pressure in tumors are partly responsible for the vessel collapse as well as the flow stasis and reversal in tumors. The apparent viscosity (viscous resistance) of blood is governed by the viscosity of plasma and the number, size, and rigidity of blood cells. Plasma viscosity can be increased by adding certain solutes. The concentration of cells can be increased by adding cells to blood or by reducing plasma volume. The rigidity of RBC, which are numerically dominant in blood, can be increased by lowering pH, elevating temperature, increasing extracellular glucose concentration, or making the suspending medium hypo- or hypertonic. Effective size of blood cells can be increased by forming RBC aggregates (also referred to as rouleaux). RBC aggregation can be facilitated by lowering the shear rate (i.e., decreasing velocity gradients) or by adding macromolecules (e.g., fibrinogen, globulins, dextrans). Since cancer cells and WBC are significantly more rigid than RBC, their presence in a vessel may also increase blood viscosity and may even cause transient stasis. Finally, due to the relatively large diameters of tumor microvessels the Fahraeus effect (i.e., reduction in hematocrit in small vessels) and the Fahraeus-Lindqvist effect (i.e., reduction in blood viscosity in small vessels) may be less pronounced in tumors than in normal tissues. Geometric resistance for a network of vessels is a complex function of the vascular morphology, i.e., the number of vessels of various types, their branching pattern, and their length and diameter. Geometric resistance to flow in a single vessel is proportional to the vessel length and inversely proportional to vessel diameter to the fourth power.(ABSTRACT TRUNCATED AT 400 WORDS)
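
A small numerical aside on the last sentence: for laminar flow in a single cylindrical vessel the Hagen-Poiseuille resistance is R = 128 * mu * L / (pi * d^4), i.e., proportional to length and inversely proportional to the fourth power of diameter, so halving the diameter raises the resistance sixteen-fold. The viscosity and dimensions below are illustrative values, not data from the paper.

```python
import math

def poiseuille_resistance(viscosity_pa_s, length_m, diameter_m):
    """Viscous/geometric resistance of a single cylindrical vessel."""
    return 128.0 * viscosity_pa_s * length_m / (math.pi * diameter_m ** 4)

base = poiseuille_resistance(3.5e-3, 1e-3, 10e-6)    # 10-micron-diameter vessel
narrow = poiseuille_resistance(3.5e-3, 1e-3, 5e-6)   # same vessel at half diameter
print(narrow / base)                                  # 16.0
```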

1,452 citations


Journal ArticleDOI
26 May 1988-Nature
TL;DR: Chaperonins comprise a class of molecular chaperones that are found in chloroplasts, mitochondria and prokaryotes and are implicated in the assembly of the oligomeric enzyme ribulose bisphosphate carboxylase-oxygenase, which catalyses photosynthetic CO2-fixation in higher plants.
Abstract: An abundant chloroplast protein is implicated in the assembly of the oligomeric enzyme ribulose bisphosphate carboxylase-oxygenase, which catalyses photosynthetic CO2-fixation in higher plants. The product of the Escherichia coli groEL gene is essential for cell viability and is required for the assembly of bacteriophage capsids. Sequencing of the groEL gene and the complementary cDNA encoding the chloroplast protein has revealed that these proteins are evolutionary homologues which we term 'chaperonins'. Chaperonins comprise a class of molecular chaperones that are found in chloroplasts, mitochondria and prokaryotes. Assisted post-translational assembly of oligomeric protein structures is emerging as a general cellular phenomenon.

1,277 citations


Book ChapterDOI
TL;DR: In this article, a software framework running on processors onboard the new Uranus mobile robot is proposed to maintain a probabilistic, geometric map of the robot's surroundings as it moves.
Abstract: A numeric representation of uncertain and incomplete sensor knowledge called certainty grids was used successfully in several recent mobile robot control programs developed at the Carnegie-Mellon University Mobile Robot Laboratory (MRL). Certainty grids have proven to be a powerful and efficient unifying solution for sensor fusion, motion planning, landmark identification, and many other central problems. MRL had good early success with ad hoc formulas for updating grid cells with new information. A new Bayesian statistical foundation for the operations promises further improvement. MRL proposes to build a software framework running on processors onboard the new Uranus mobile robot that will maintain a probabilistic, geometric map of the robot's surroundings as it moves. The certainty grid representation will allow this map to be incrementally updated in a uniform way based on information coming from various sources, including sonar, stereo vision, proximity, and contact sensors. The approach can correctly model the fuzziness of each reading and, at the same time, combine multiple measurements to produce sharper map features; it can also deal correctly with uncertainties in the robot's motion. The map will be used by planning programs to choose clear paths, identify locations (by correlating maps), identify well-known and insufficiently sensed terrain, and perhaps identify objects by shape. The certainty grid representation can be extended in the time dimension and used to detect and track moving objects. Even the simplest versions of the idea allow us to fairly straightforwardly program the robot for tasks that have hitherto been out of reach. MRL looks forward to a program that can explore a region and return to its starting place, using map "snapshots" from its outbound journey to find its way back, even in the presence of disturbances of its motion and occasional changes in the terrain.
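
One common Bayesian formulation of the grid update described above (a sketch under assumed parameters, not MRL's code): each cell stores the log odds of being occupied, and each sensor reading adds the log of its likelihood ratio, so evidence from sonar, stereo, proximity, and contact sensors combines by simple addition.

```python
import numpy as np

class CertaintyGrid:
    def __init__(self, shape, prior=0.5):
        self.log_odds = np.full(shape, np.log(prior / (1.0 - prior)))

    def update(self, cell, p_occupied_given_reading):
        # Fold one reading into the cell as a log likelihood ratio.
        p = p_occupied_given_reading
        self.log_odds[cell] += np.log(p / (1.0 - p))

    def probability(self, cell):
        return 1.0 / (1.0 + np.exp(-self.log_odds[cell]))

grid = CertaintyGrid((10, 10))
grid.update((3, 4), 0.8)    # a sonar return suggesting the cell is occupied
grid.update((3, 4), 0.7)    # a second, independent reading of the same cell
print(round(grid.probability((3, 4)), 3))
```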

Journal ArticleDOI
TL;DR: A general model of Scientific Discovery as Dual Search (SDDS) is presented that shows how search in two problem spaces (an hypothesis space and an experiment space) shapes hypothesis generation, experimental design, and the evaluation of hypotheses.

Journal ArticleDOI
01 Jun 1988
TL;DR: This paper shows that software pipelining is an effective and viable scheduling technique for VLIW processors, and proposes a hierarchical reduction scheme whereby entire control constructs are reduced to an object similar to an operation in a basic block.
Abstract: This paper shows that software pipelining is an effective and viable scheduling technique for VLIW processors. In software pipelining, iterations of a loop in the source program are continuously initiated at constant intervals, before the preceding iterations complete. The advantage of software pipelining is that optimal performance can be achieved with compact object code. This paper extends previous results of software pipelining in two ways: First, this paper shows that by using an improved algorithm, near-optimal performance can be obtained without specialized hardware. Second, we propose a hierarchical reduction scheme whereby entire control constructs are reduced to an object similar to an operation in a basic block. With this scheme, all innermost loops, including those containing conditional statements, can be software pipelined. It also diminishes the start-up cost of loops with a small number of iterations. Hierarchical reduction complements the software pipelining technique, permitting a consistent performance improvement to be obtained. The techniques proposed have been validated by an implementation of a compiler for Warp, a systolic array consisting of 10 VLIW processors. This compiler has been used for developing a large number of applications in the areas of image, signal and scientific processing.
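
A small illustration of the scheduling idea itself, not of the paper's algorithm or of Warp code: with an initiation interval of II cycles, a new loop iteration is started every II cycles even though each iteration takes several cycles to finish, so the stages of successive iterations overlap. The stage names and interval are assumptions for the example.

```python
STAGES = ["load", "mul", "add", "store"]   # hypothetical per-iteration stages
II = 1                                      # initiation interval in cycles

def pipelined_schedule(n_iters, ii=II):
    """Map each cycle to the (iteration, stage) pairs active in that cycle."""
    schedule = {}
    for it in range(n_iters):
        start = it * ii                     # iteration `it` is initiated here
        for offset, stage in enumerate(STAGES):
            schedule.setdefault(start + offset, []).append((it, stage))
    return schedule

for cycle, work in sorted(pipelined_schedule(4).items()):
    print(cycle, work)
```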

Book ChapterDOI
TL;DR: The authors review the progress that had been made up to that time in using information-processing models and the techniques of computer simulation to explain human problem-solving processes, taking chess as a standard task environment.
Abstract: As genetics needs its model organisms, its Drosophila and Neurospora, so psychology needs standard task environments around which knowledge and understanding can cumulate. Chess has proved to be an excellent model environment for this purpose. About a decade ago in the pages of this journal, one of us, with Allen Newell, described the progress that had been made up to that time in using information-processing models and the techniques of computer simulation to explain human problem-solving processes. (Simon et al, 1964). A part of our article was devoted to a theory of the processes that expert chess players use in discovering checkmating combinations (Simon et al, 1962), a theory that was subsequently developed further, embodied in a running computer program, mater, and subjected to additional empirical testing. (Baylor et al, 1966).

Journal ArticleDOI
TL;DR: A distributed architecture articulated around the CODGER (communication database with geometric reasoning) knowledge database is described for a mobile robot system that includes both perception and navigation tools.
Abstract: A distributed architecture articulated around the CODGER (communication database with geometric reasoning) knowledge database is described for a mobile robot system that includes both perception and navigation tools. Results are described for vision and navigation tests using a mobile testbed that integrates perception and navigation capabilities that are based on two types of vision algorithms: color vision for road following, and 3-D vision for obstacle detection and avoidance. The perception modules are integrated into a system that allows the vehicle to drive continuously in an actual outdoor environment. The resulting system is able to navigate continuously on roads while avoiding obstacles.

Journal ArticleDOI
01 Jun 1988
TL;DR: Higher-order abstract syntax incorporates name binding information in a uniform and language generic way and acts as a powerful link integrating diverse tools in program manipulation and other formal systems where matching and substitution or unification are central operations.
Abstract: We describe motivation, design, use, and implementation of higher-order abstract syntax as a central representation for programs, formulas, rules, and other syntactic objects in program manipulation and other formal systems where matching and substitution or unification are central operations. Higher-order abstract syntax incorporates name binding information in a uniform and language generic way. Thus it acts as a powerful link integrating diverse tools in such formal environments. We have implemented higher-order abstract syntax, a supporting matching and unification algorithm, and some clients in Common Lisp in the framework of the Ergo project at Carnegie Mellon University.
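
A toy illustration of the representation idea, written in Python rather than the Common Lisp of the Ergo implementation (the type names are invented for the example): binders in the object language are represented by functions of the metalanguage, so substitution is just function application and variable capture cannot occur.

```python
from dataclasses import dataclass
from typing import Callable, Union

Term = Union["Var", "Lam", "App"]

@dataclass
class Var:
    name: str                         # free variables; bound ones are closures

@dataclass
class Lam:
    body: Callable[[Term], Term]      # the binder is a metalanguage function

@dataclass
class App:
    fn: Term
    arg: Term

def beta_reduce(app: App) -> Term:
    """Beta reduction: substitution is just applying the stored function."""
    assert isinstance(app.fn, Lam)
    return app.fn.body(app.arg)

identity = Lam(lambda x: x)                    # (lambda x. x)
print(beta_reduce(App(identity, Var("y"))))    # -> Var(name='y')
```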

Journal ArticleDOI
TL;DR: This book presents the official, formal definition of the programming language ML, the most well-developed and prominent of a new group of functional programming languages, including the rules for grammar and for static and dynamic semantics.
Abstract: This book presents the official, formal definition of the programming language ML including the rules for grammar and static and dynamic semantics. ML is the most well-developed and prominent of a new group of functional programming languages. On the cutting edge of theoretical computer science, ML embodies the ideas of static typing and polymorphism and has also contributed a number of novel ideas to the design of programming languages. Contents: Syntax of the Core. Syntax of Modules. Static Semantics for the Core. Static Semantics for Modules. Dynamic Semantics for Modules. Programs. Appendixes: Derived Forms. Full Grammar. The Initial Static Basis. The Initial Dynamic Basis. The Development of ML.

Journal Article
TL;DR: The key result of these models is that the filtration of fluid from blood vessels in a uniformly perfused tumor is spatially heterogeneous, a result of elevated interstitial pressure, and sufficient to explain the heterogeneous distribution of macromolecules in tumors.
Abstract: The therapeutic efficacy of monoclonal antibodies (MAbs), bound to radionuclides, chemotherapeutic agents, toxins, growth factors, or effector antibodies, depends upon their ability to reach their target in vivo in adequate quantities. Despite the high vascular permeability and interstitial transport coefficients in tumor tissue compared to several normal tissues, MAbs and their fragments do not distribute homogeneously in a tumor. Heterogeneity of tumor-associated antigen expression alone cannot explain this maldistribution of MAbs in tumors. We propose that in addition to the heterogeneous blood perfusion, hindered diffusion in the interstitium, and extravascular binding of MAbs, elevated interstitial pressure is responsible for the poor penetration of MAbs into tumors. Elevated interstitial pressure principally reduces the driving force for extravasation of fluid and macromolecules in tumors, and also leads to an experimentally verifiable, radially outward convection which opposes the inward diffusion. We present here mathematical models for transport of fluid and macromolecules in a tumor. To illustrate the significance of elevated interstitial pressure, these models are used to describe the interstitial pressure, interstitial fluid velocity, and concentration of nonbinding macromolecules as a function of radial position in a uniformly perfused tumor. The key result of these models is that the filtration of fluid from blood vessels in a uniformly perfused tumor is (a) spatially heterogeneous, (b) a result of elevated interstitial pressure, and (c) sufficient to explain the heterogeneous distribution of macromolecules in tumors. Nonuniform blood flow, and extravascular binding would enhance this heterogeneity in the solute distribution considerably. The results of the models also agree with the following experimental data: (a) tumor interstitial pressure is low in the periphery and it increases toward the center of the tumor; (b) the radially outward fluid velocity at the tumor periphery predicted by the model is of the same order of magnitude as measured in tissue-isolated tumors; and (c) immediately after bolus injection, the concentration of macromolecules is higher in the periphery than in the center; however, at later time periods the peripheral concentration is lower than in the center. These results have significant implications not only for MAbs and their fragments, but for other biologically useful macromolecules (e.g., cytokines) produced by genetic engineering for cancer diagnosis and treatment.
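
The "driving force for extravasation" referred to above is conventionally written with the Starling filtration relation, quoted here in its standard textbook form for orientation rather than taken from the paper: L_p is the hydraulic conductivity, S the vessel surface area, p_v and p_i the vascular and interstitial hydrostatic pressures, sigma the osmotic reflection coefficient, and pi_v and pi_i the corresponding colloid osmotic pressures. Raising p_i toward p_v shrinks this driving force, which is the effect the models quantify.

```latex
J_v \;=\; L_p S \,\bigl[\,(p_v - p_i) \;-\; \sigma\,(\pi_v - \pi_i)\,\bigr]
```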

Journal ArticleDOI
TL;DR: A review of the current status of research into the problem of two-dimensional melting, with an emphasis on computer simulations, is given in this article, where an attempt is made to point out unresolved issues pertaining to this fascinating and still open question.
Abstract: For a decade now the subject of the nature of the two-dimensional melting transition has remained controversial. An elegant theory based on the unbinding of pairs of crystal defects suggested that two-dimensional solids might melt by a transition sequence involving two continuous transitions separated by a novel, nearest-neighbor-bond-orientationally ordered fluid---the hexatic phase. Competing theories predict that the transition is of the usual first-order type observed in three-dimensional systems. This paper is a critical review of the current status of research into the problem of two-dimensional melting, with an emphasis on computer simulations. An attempt is made to point out unresolved issues pertaining to this fascinating and still open question.

Journal ArticleDOI
TL;DR: In this paper, the problem of detecting outliers, level shifts, and variance changes in a univariate time series is considered; the methods employed, least squares techniques and residual variance ratios, are extremely simple yet useful.
Abstract: Outliers, level shifts, and variance changes are commonplace in applied time series analysis. However, their existence is often ignored and their impact is overlooked, for the lack of simple and useful methods to detect and handle those extraordinary events. The problem of detecting outliers, level shifts, and variance changes in a univariate time series is considered. The methods employed are extremely simple yet useful. Only the least squares techniques and residual variance ratios are used. The effectiveness of these simple methods is demonstrated by analysing three real data sets.
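
A simplified sketch in the spirit of the least-squares methods described above, not the paper's exact procedure: to screen a series for a single level shift, fit a mean to each side of every candidate break point and keep the split with the smallest residual variance. The data and selection criterion are illustrative assumptions.

```python
import numpy as np

def level_shift_scan(y):
    """For each candidate break t, return (t, estimated shift, residual variance)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    results = []
    for t in range(2, n - 2):
        m1, m2 = y[:t].mean(), y[t:].mean()
        resid = np.concatenate([y[:t] - m1, y[t:] - m2])
        results.append((t, m2 - m1, resid.var()))
    return results

rng = np.random.default_rng(0)
series = np.concatenate([rng.normal(0, 1, 50), rng.normal(3, 1, 50)])
best = min(level_shift_scan(series), key=lambda r: r[2])
print("candidate break, estimated shift, residual variance:", best)
```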


Book
01 Oct 1988
TL;DR: Moravec argues that we are approaching a watershed in the history of life, a time when the boundaries between biological and post-biological intelligence will begin to dissolve. Within forty years, Moravec believes, we will achieve human equivalence in our machines, not only in their capacity to reason but also in their ability to perceive, interact with, and change their complex environment.
Abstract: Imagine attending a lecture at the turn of the twentieth century in which Orville Wright speculates about the future of transportation, or one in which Alexander Graham Bell envisages satellite communications and global data banks. Mind Children, written by an internationally renowned roboticist, offers a comparable experience: a mind-boggling glimpse of a world we may soon share with our artificial progeny. Filled with fresh ideas and insights, this book is one of the most engaging and controversial visions of the future ever written by a serious scholar. Hans Moravec convincingly argues that we are approaching a watershed in the history of life, a time when the boundaries between biological and postbiological intelligence will begin to dissolve. Within forty years, Moravec believes, we will achieve human equivalence in our machines, not only in their capacity to reason but also in their ability to perceive, interact with, and change their complex environment. The critical factor is mobility. A computer rooted to one place is doomed to static iterations, whereas a machine on the prowl, like a mobile organism, must evolve a richer fund of knowledge about an ever-changing world upon which to base its actions. In order to achieve anything near human equivalence, robots will need, at the least, the capacity to perform ten trillion calculations per second. Given the trillion-fold increase in computational power since the end of the nineteenth century, and the promise of exotic technologies far surpassing the now-familiar lasers and even superconductors, Moravec concludes that our hardware will have no trouble meeting this forty-year timetable. But human equivalence is just the beginning, not an upper bound. Once the tireless thinking capacity of robots is directed to the problem of their own improvement and reproduction, even the sky will not limit their voracious exploration of the universe. In the concluding chapters Moravec challenges us to imagine with him the possibilities and pitfalls of such a scenario. Rather than warning us of takeover by robots, the author invites us, as we approach the end of this millennium, to speculate about a plausible, wonderful postbiological future and the ways in which our minds might participate in its unfolding.

Book ChapterDOI
01 Jan 1988
TL;DR: Probabilistic logic sampling is a new scheme employing stochastic simulation which can make probabilistic inferences in large, multiply connected networks, with an arbitrary degree of precision controlled by the sample size.
Abstract: Bayesian belief networks and influence diagrams are attractive approaches for representing uncertain expert knowledge in coherent probabilistic form. But current algorithms for propagating updates are either restricted to singly connected networks (Chow trees), as the scheme of Pearl and Kim, or they are liable to exponential complexity when dealing with multiply connected networks. Probabilistic logic sampling is a new scheme employing stochastic simulation which can make probabilistic inferences in large, multiply connected networks, with an arbitrary degree of precision controlled by the sample size. A prototype implementation, named Pulse, is illustrated, which provides efficient methods to estimate conditional probabilities, perform systematic sensitivity analysis, and compute evidence weights to explain inferences.
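
A minimal sketch of logic sampling on a two-node network A -> B, with made-up probabilities: each sample is drawn forward from the conditional distributions, samples that disagree with the evidence are discarded, and the query is estimated from the rest; precision improves with the sample size.

```python
import random

P_A = 0.3                                 # hypothetical prior
P_B_GIVEN_A = {True: 0.8, False: 0.1}     # hypothetical conditional

def sample_once():
    a = random.random() < P_A
    b = random.random() < P_B_GIVEN_A[a]
    return a, b

def estimate_p_a_given_b(evidence_b=True, n=100_000):
    kept = [a for a, b in (sample_once() for _ in range(n)) if b == evidence_b]
    return sum(kept) / len(kept)

print(estimate_p_a_given_b())             # roughly 0.77 with these numbers
```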

Journal ArticleDOI
TL;DR: This work presents new on-line algorithms to be used by the caches of snoopy cache multiprocessor systems to decide which blocks to retain and which to drop in order to minimize communication over the bus.
Abstract: In a snoopy cache multiprocessor system, each processor has a cache in which it stores blocks of data. Each cache is connected to a bus used to communicate with the other caches and with main memory. Each cache monitors the activity on the bus and in its own processor and decides which blocks of data to keep and which to discard. For several of the proposed architectures for snoopy caching systems, we present new on-line algorithms to be used by the caches to decide which blocks to retain and which to drop in order to minimize communication over the bus. We prove that, for any sequence of operations, our algorithms' communication costs are within a constant factor of the minimum required for that sequence; for some of our algorithms we prove that no on-line algorithm has this property with a smaller constant.

Journal ArticleDOI
TL;DR: In this paper, the authors respond to Gottfredson and Hirschi's (1986) claim that the concepts of criminal careers, career criminals, selective incapacitation, prevalence and incidence, and longitudinal studies have little value for criminology, developing these concepts more clearly and showing their relevance.
Abstract: In a recent paper published in this journal, Gottfredson and Hirschi (1986) argue that the concepts of criminal careers, career criminals, selective incapacitation, prevalence and incidence, and longitudinal studies all have little value for criminology. In our view their paper misrepresents these concepts and our research on these topics. We are pleased to have the opportunity in this paper to develop these concepts more clearly and to show their relevance for criminology.

Journal ArticleDOI
TL;DR: In this paper, a model of aggregate consumption and leisure decisions in which utility from goods and leisure is nontime-separable is investigated empirically using postwar monthly U.S. data on quantities, real wages, and the real return on the one-month Treasury bill.
Abstract: This paper investigates empirically a model of aggregate consumption and leisure decisions in which utility from goods and leisure is nontime-separable. The nonseparability of preferences accommodates intertemporal substitution or complementarity of leisure and thereby affects the comovements in aggregate compensation and hours worked. These cross-relations are examined empirically using postwar monthly U.S. data on quantities, real wages, and the real return on the one-month Treasury bill. The estimated values of the parameters governing preferences differ significantly from the values assumed in several studies of real business cycle models. Several possible explanations of these discrepancies are discussed.

Proceedings Article
21 Aug 1988
TL;DR: This paper summarizes a set of experiments measuring the effectiveness of PRODIGY's EBL method (and its components) in several different domains.
Abstract: Although previous research has demonstrated that EBL is a viable approach for acquiring search control knowledge, in practice the control knowledge learned via EBL may not be useful. To be useful, the cumulative benefits of applying the knowledge must outweigh the cumulative costs of testing whether the knowledge is applicable. Unlike most previous EBL systems, the PRODIGY/EBL system evaluates the costs and benefits of the control knowledge it learns. The system produces useful control knowledge by actively searching for "good" explanations — explanations that can be profitably employed to control problem solving. This paper summarizes a set of experiments measuring the effectiveness of PRODIGY's EBL method (and its components) in several different domains.
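
A rough sketch of the kind of cost-benefit test the abstract describes: a learned control rule is worth keeping only if its cumulative benefit (search time saved when it applies) outweighs the cumulative cost of testing whether it applies. The field names and numbers are illustrative, not PRODIGY's.

```python
from dataclasses import dataclass

@dataclass
class ControlRule:
    match_cost: float        # average cost of testing whether the rule applies
    savings: float           # average search time saved when the rule fires
    application_freq: float  # fraction of match attempts in which it fires

    def utility(self) -> float:
        return self.application_freq * self.savings - self.match_cost

rule = ControlRule(match_cost=0.02, savings=1.5, application_freq=0.05)
print("keep rule" if rule.utility() > 0 else "discard rule")
```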

Journal ArticleDOI
TL;DR: The performance of a brain-damaged patient with impaired representations of visual appearance on a variety of tasks used by cognitive psychologists on one side or the other of the visual versus spatial imagery debate implies that the two groups of tasks tap distinct types of representation.

Journal ArticleDOI
TL;DR: A polynomial algorithm for determining if two structures are stuttering equivalent is given and the relevance of the results for temporal logic model checking and synthesis procedures is discussed.

Journal ArticleDOI
TL;DR: Describes a detailed model of how relatively low-level, "mindless" processes might lead to intelligent choices of strategies in one common situation: that in which people need to choose between stating a retrieved answer and using a backup strategy.
Abstract: Many intelligent strategy choices may be accomplished through relatively low-level cognitive processes. This article describes a detailed model of how such "mindless" processes might lead to intelligent choices of strategies in one common situation: that in which people need to choose between stating a retrieved answer and using a backup strategy. Several experiments testing the model's applicability to children's single-digit multiplication are reported. These include tests of predictions about when different strategies are used and how early experience shapes later performance. Then, the sufficiency of the model to generate both performance at any one time and changes in performance over time is tested through the medium of a running computer simulation of children's multiplication. The simulation acquires a considerable amount of multiplication knowledge, and its learning and performance parallel those of children in a number of ways. Finally, several implications of the model for understanding cognitive self-regulation and cognitive development are discussed.
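
A minimal sketch of the retrieval-versus-backup choice at the heart of the model: an answer is retrieved with probability proportional to its associative strength and is stated only if that strength exceeds a confidence criterion; otherwise a backup strategy (here, repeated addition) is used. All strengths and the criterion are made-up values for illustration, not parameters from the simulation.

```python
import random

ASSOCIATIONS = {12: 0.55, 8: 0.15, 11: 0.10, 16: 0.20}   # candidate answers for 3 x 4
CONFIDENCE_CRITERION = 0.4

def answer(a, b, associations=ASSOCIATIONS, criterion=CONFIDENCE_CRITERION):
    candidates = list(associations)
    weights = [associations[c] for c in candidates]
    retrieved = random.choices(candidates, weights=weights)[0]
    if associations[retrieved] >= criterion:
        return retrieved, "retrieval"
    return sum(a for _ in range(b)), "backup: repeated addition"

print(answer(3, 4))
```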