
Showing papers by "Carnegie Mellon University" published in 1990


Journal ArticleDOI
TL;DR: In this paper, the authors argue that the ability of a firm to recognize the value of new, external information, assimilate it, and apply it to commercial ends is critical to its innovative capabilities.
Abstract: In this paper, we argue that the ability of a firm to recognize the value of new, external information, assimilate it, and apply it to commercial ends is critical to its innovative capabilities. We label this capability a firm's absorptive capacity and suggest that it is largely a function of the firm's level of prior related knowledge. The discussion focuses first on the cognitive basis for an individual's absorptive capacity including, in particular, prior related knowledge and diversity of background. We then characterize the factors that influence absorptive capacity at the organizational level, how an organization's absorptive capacity differs from that of its individual members, and the role of diversity of expertise within an organization. We argue that the development of absorptive capacity, and, in turn, innovative performance are history- or path-dependent and argue how lack of investment in an area of expertise early on may foreclose the future development of a technical capability in that area. We formulate a model of firm investment in research and development (R&D), in which R&D contributes to a firm's absorptive capacity, and test predictions relating a firm's investment in R&D to the knowledge underlying technical change within an industry. Discussion focuses on the implications of absorptive capacity for the analysis of other related innovative activities, including basic research, the adoption and diffusion of innovations, and decisions to participate in cooperative R&D ventures.

31,623 citations


Book
01 Dec 1990
TL;DR: In this book, the author proposes a unified theory of cognition, laying out the foundations of cognitive science and a human cognitive architecture organized into system levels and time-scale bands, from the biological and neural bands through the cognitive band to the intendedly rational, social, and historical bands, and develops Soar as a candidate architecture.
Abstract: Introduction The Nature of Theories What Are Unified Theories of Cognition? Is Psychology Ready for Unified Theories? The Task of the Book Foundations of Cognitive Science Behaving Systems Knowledge Systems Representation Machines and Computation Symbols Architectures Intelligence Search and Problem Spaces Preparation and Deliberation Summary Human Cognitive Architecture The Human Is a Symbol System System Levels The Time Scale of Human Action The Biological Band The Neural Circuit Level The Real-Time Constraint on Cognition The Cognitive Band The Level of Simple Operations The First Level of Composed Operations The Intendedly Rational Band Higher Bands: Social, Historical, and Evolutionary Summary Symbolic Processing for Intelligence The Central Architecture for Performance Chunking The Total Cognitive System R1-Soar: Knowledge-Intensive and Knowledge-Lean Operation Designer-Soar: Difficult Intellectual Tasks Soar as an Intelligent System Mapping Soar onto Human Cognition Soar and the Shape of Human Cognition Summary Immediate Behavior The Scientific Role of Immediate-Response Data Methodological Preliminaries Functional Analysis of Immediate Responses The Simplest Response Task (SRT) The Two-Choice Response Task (2CRT) Stimulus-Response Compatibility (SRC) Discussion of the Three Analyses Item Recognition Typing Summary Memory, Learning, and Skill The Memory and Learning Hypothesis of Soar The Soar Qualitative Theory of Learning The Distinction between Episodic and Semantic Memory Data Chunking Skill Acquisition Short-Term Memory (STM) Summary Intendedly Rational Behavior Cryptarithmetic Syllogisms Sentence Verification Summary Along the Frontiers Language Development The Biological Band The Social Band The Role of Applications How to Move toward Unified Theories of Cognition References Name Index Subject Index

4,129 citations


Journal ArticleDOI
TL;DR: This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.
Abstract: A concurrent object is a data object shared by concurrent processes. Linearizability is a correctness condition for concurrent objects that exploits the semantics of abstract data types. It permits a high degree of concurrency, yet it permits programmers to specify and reason about concurrent objects using known techniques from the sequential domain. Linearizability provides the illusion that each operation applied by concurrent processes takes effect instantaneously at some point between its invocation and its response, implying that the meaning of a concurrent object's operations can be given by pre- and post-conditions. This paper defines linearizability, compares it to other correctness conditions, presents and demonstrates a method for proving the correctness of implementations, and shows how to reason about concurrent objects, given they are linearizable.

3,396 citations
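
A minimal brute-force sketch of the definition above, assuming a toy read/write register and a three-operation history (both invented for illustration): a history is linearizable if some sequential ordering of its operations respects real-time precedence and the object's sequential specification.

```python
# Brute-force linearizability check for a toy register history.
from itertools import permutations

# An operation: (kind, value, invoke_time, response_time).
# kind is "write" (value written) or "read" (value returned).
history = [
    ("write", 1, 0.0, 3.0),
    ("read",  1, 1.0, 2.0),   # overlaps the write, returns 1
    ("read",  0, 4.0, 5.0),   # runs after the write responds, returns stale 0
]

def legal_sequential(seq):
    """Sequential spec of a register initialized to 0."""
    state = 0
    for kind, val, _, _ in seq:
        if kind == "write":
            state = val
        elif val != state:        # a read must return the current value
            return False
    return True

def respects_real_time(seq):
    """If op A responds before op B is invoked, A must precede B."""
    for i, a in enumerate(seq):
        for b in seq[i + 1:]:
            if b[3] < a[2]:       # b responded before a was invoked
                return False
    return True

def linearizable(history):
    return any(respects_real_time(s) and legal_sequential(s)
               for s in permutations(history))

print(linearizable(history))  # False: the stale read cannot be ordered legally
```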


Journal ArticleDOI
04 Jun 1990
TL;DR: In this paper, a model-checking algorithm for mu-calculus formulas which uses R.E. Bryant's (1986) binary decision diagrams to represent relations and formulas symbolically is described.
Abstract: A general method that represents the state space symbolically instead of explicitly is described. The generality of the method comes from using a dialect of the mu-calculus as the primary specification language. A model-checking algorithm for mu-calculus formulas which uses R.E. Bryant's (1986) binary decision diagrams to represent relations and formulas symbolically is described. It is then shown how the novel mu-calculus model checking algorithm can be used to derive efficient decision procedures for CTL model checking, satisfiability of linear-time temporal logic formulas, strong and weak observational equivalence of finite transition systems, and language containment of finite omega-automata. This eliminates the need to describe complicated graph-traversal or nested fixed-point computations for each decision procedure. The authors illustrate the practicality of their approach to symbolic model checking by discussing how it can be used to verify a simple synchronous pipeline.

2,698 citations
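
The fixed-point structure these decision procedures share can be sketched with explicit sets standing in for BDDs; the tiny transition system below is invented, and a real implementation would represent the transition relation and state sets as ROBDDs rather than Python sets.

```python
# Fixed-point CTL model checking on an explicit transition relation.
# Plain sets stand in for BDDs purely to expose the algorithmic shape.

# Tiny Kripke structure: states 0..3 and a transition relation.
TRANS = {(0, 1), (1, 2), (2, 2), (1, 3), (3, 0)}

def pre_image(target):
    """States with some successor in `target` (the EX operator)."""
    return {s for (s, t) in TRANS if t in target}

def ef(target):
    """EF target = least fixed point of  Z = target | EX Z."""
    z = set(target)
    while True:
        z_next = z | pre_image(z)
        if z_next == z:
            return z
        z = z_next

print(ef({3}))   # {0, 1, 3}: the states from which state 3 is reachable
```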


Book
01 Jan 1990
TL;DR: In this book, the authors present a guide to dealing with uncertainty in quantitative risk and policy analysis, covering the nature and sources of uncertainty, probability assessment, the propagation and analysis of uncertainty, and Analytica, a software tool for uncertainty analysis.
Abstract: Preface 1. Introduction 2. Recent milestones 3. An overview of quantitative policy analysis 4. The nature and sources of uncertainty 5. Probability distributions and statistical estimation 6. Human judgement about and with uncertainty 7. Performing probability assessment 8. The propagation and analysis of uncertainty 9. The graphic communication of uncertainty 10. Analytica: a software tool for uncertainty analysis 11. Large and complex models 12. The value of knowing how little you know Index.

2,666 citations
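
The propagation step the book treats can be illustrated with a Monte Carlo sketch; the toy policy model and its input distributions below are assumptions for illustration, not examples from the text.

```python
# Monte Carlo propagation of uncertainty through a toy policy model.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Uncertain inputs: lognormal emission rate, normal damage coefficient.
emissions = rng.lognormal(mean=2.0, sigma=0.5, size=n)      # tons/yr
damage_per_ton = rng.normal(loc=40.0, scale=10.0, size=n)   # $/ton

cost = emissions * damage_per_ton    # propagate through the model

# Report the output distribution instead of a single point estimate.
lo, med, hi = np.percentile(cost, [5, 50, 95])
print(f"median ${med:,.0f}, 90% interval ${lo:,.0f} to ${hi:,.0f}")
```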


Journal ArticleDOI
TL;DR: In this paper, the authors examine how affect arises and what affect indicates from a feedback-based viewpoint on self-regulation; using the analogy of action control as the attempt to diminish distance to a goal, they propose a second feedback system that senses and regulates the rate at which the action-guiding system is functioning.
Abstract: The question of how affect arises and what affect indicates is examined from a feedback-based viewpoint on self-regulation. Using the analogy of action control as the attempt to diminish distance to a goal, a second feedback system is postulated that senses and regulates the rate at which the action-guiding system is functioning.

2,660 citations
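
A toy numeric sketch of the proposed second loop, with invented parameter values: affect is read off the gap between the sensed rate of discrepancy reduction and a reference rate.

```python
# Velocity-loop sketch: affect tracks rate of progress minus a reference rate.
GOAL, REFERENCE_RATE = 100.0, 2.0   # desired rate of discrepancy reduction

position, prev_distance = 0.0, GOAL
for step, speed in enumerate([3.0] * 5 + [0.5] * 5):  # progress, then a slowdown
    position += speed
    distance = GOAL - position
    rate = prev_distance - distance     # sensed rate of progress toward the goal
    affect = rate - REFERENCE_RATE      # > 0 positive affect, < 0 negative affect
    prev_distance = distance
    print(f"step {step}: rate {rate:.1f}, affect {affect:+.1f}")
```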


Journal ArticleDOI
TL;DR: An investigation is conducted of two protocols belonging to the priority inheritance protocols class; the two are called the basic priority inheritance protocol and the priority ceiling protocol, both of which solve the uncontrolled priority inversion problem.
Abstract: An investigation is conducted of two protocols belonging to the priority inheritance protocols class; the two are called the basic priority inheritance protocol and the priority ceiling protocol. Both protocols solve the uncontrolled priority inversion problem. The priority ceiling protocol solves this uncontrolled priority inversion problem particularly well; it reduces the worst-case task-blocking time to at most the duration of execution of a single critical section of a lower-priority task. This protocol also prevents the formation of deadlocks. Sufficient conditions under which a set of periodic tasks using this protocol may be scheduled are derived.

2,443 citations
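
A sketch of the derived sufficient condition in its per-task utilization-bound form; the task set (execution time C_i, period T_i, worst-case blocking B_i, listed shortest period first) is invented for illustration.

```python
# Per-task schedulability test with a blocking term added to the
# rate-monotonic utilization bound.
tasks = [(20, 100, 30), (40, 150, 10), (100, 350, 0)]  # (C, T, B)

def schedulable(tasks):
    for i in range(1, len(tasks) + 1):
        util = sum(C / T for C, T, _ in tasks[:i])
        blocking = tasks[i - 1][2] / tasks[i - 1][1]    # B_i / T_i
        if util + blocking > i * (2 ** (1 / i) - 1):    # i(2^(1/i) - 1) bound
            return False
    return True

print(schedulable(tasks))  # True: every task meets its deadline despite blocking
```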


Book
01 Jan 1990
TL;DR: This book provides a formal definition of Standard ML for the benefit of all concerned with the language, including users and implementers, and the authors have defined their semantic objects in mathematical notation that is completely independent of Standard ML.
Abstract: From the Publisher: Standard ML is a general-purpose programming language designed for large projects. This book provides a formal definition of Standard ML for the benefit of all concerned with the language, including users and implementers. Because computer programs are increasingly required to withstand rigorous analysis, it is all the more important that the language in which they are written be defined with full rigor. The authors have defined their semantic objects in mathematical notation that is completely independent of Standard ML.

2,389 citations


Journal ArticleDOI
TL;DR: A model of attention is presented within a parallel distributed processing framework, and it is proposed that the attributes of automaticity depend on the strength of a processing pathway and that strength increases with training.
Abstract: A growing body of evidence suggests that traditional views of automaticity are in need of revision. For example, automaticity has often been treated as an all-or-none phenomenon, and traditional theories have held that automatic processes are independent of attention. Yet recent empirical data suggest that automatic processes are continuous, and furthermore are subject to attentional control. In this paper we present a model of attention which addresses these issues. Using a parallel distributed processing framework we propose that the attributes of automaticity depend upon the strength of a process and that strength increases with training. Using the Stroop effect as an example, we show how automatic processes are continuous and emerge gradually with practice. Specifically, we present a computational model of the Stroop task which simulates the time course of processing as well as the effects of learning.

1,923 citations
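
A heavily simplified sketch of the strength idea, not the authors' full simulation: evidence for the correct response accumulates at a rate set by the task pathway's strength, helped or hindered by a distractor pathway, and training strengthens the task pathway; all numbers are illustrative.

```python
# Pathway-strength toy: response time falls as the task pathway strengthens.
def naming_time(task_strength, distractor_strength, congruent, threshold=1.0):
    """Steps for evidence for the correct response to reach threshold."""
    sign = 1.0 if congruent else -1.0
    rate = 0.01 * (task_strength + sign * distractor_strength)
    evidence, steps = 0.0, 0
    while evidence < threshold:     # assumes rate > 0 (task pathway dominates)
        evidence += rate
        steps += 1
    return steps

# Color naming (weak pathway) with word-reading interference:
print("congruent:  ", naming_time(0.5, 0.4, True))    # fast: 112 steps
print("incongruent:", naming_time(0.5, 0.4, False))   # slow: 1000 steps
# After practice the task pathway strengthens and interference shrinks:
print("practiced:  ", naming_time(1.5, 0.4, False))   # 91 steps
```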


Book
01 Jun 1990
TL;DR: In this book, the author asks whether human cognition is rational and develops rational analyses of memory, categorization, causal inference, and problem solving, with implementations in the ACT framework.
Abstract: Contents: Part I: Introduction. Preliminaries. Levels of a Cognitive Theory. Current Formulation of the Levels Issues. The New Theoretical Framework. Is Human Cognition Rational? The Rest of This Book. Appendix: Non-Identifiability and Response Time. Part II: Memory. Preliminaries. A Rational Analysis of Human Memory. The History Factor. The Contextual Factor. Relationship of Need and Probability to Probability and Latency of Recall. Combining Information From Cues. Implementation in the ACT Framework. Effects of Subject Strategy. Conclusions. Part III: Categorization. Preliminaries. The Goal of Categorization. The Structure of the Environment. Recapitulation of Goals and Environment. The Optimal Solution. An Iterative Algorithm for Categorization. Application of the Algorithm. Survey of the Experimental Literature. Conclusion. Appendix: The Ideal Algorithm. Part IV: Causal Inference. Preliminaries. Basic Formulation of the Causal Inference Problem. Causal Estimation. Cues for Causal Inference. Integration of Statistical and Temporal Cues. Discrimination. Abstraction of Causal Laws. Implementation in a Production System. Conclusion. Appendix. Part V: Problem Solving. Preliminaries. Making a Choice Among Simple Actions. Combining Steps. Studies of Hill Climbing. Means-Ends Analysis. Instantiation of Indefinite Objects. Conclusions on Rational Analysis of Problem Solving. Implementation in ACT. Appendix: Problem Solving and Clotheslines. Part VI: Retrospective. Preliminaries. Twelve Questions About Rational Analysis.

1,816 citations


Journal ArticleDOI
TL;DR: Asymptotic waveform evaluation (AWE) provides a generalized approach to linear RLC circuit response approximations; for RC tree models, its first-order approximation reduces to the RC tree methods.
Abstract: Asymptotic waveform evaluation (AWE) provides a generalized approach to linear RLC circuit response approximations. The RLC interconnect model may contain floating capacitors, grounded resistors, inductors, and even linear controlled sources. The transient portion of the response is approximated by matching the initial boundary conditions and the first 2q-1 moments of the exact response to a lower-order q-pole model. For the case of an RC tree model, a first-order AWE approximation reduces to the RC tree methods.
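
A small numpy sketch of the moment-matching step, with an invented pole/residue "exact" response: the first 2q moments of H(s) are matched to a q-pole Padé model whose denominator comes from a linear solve.

```python
import numpy as np

# Hypothetical exact response in pole/residue form (stand-in for a circuit).
poles_exact = np.array([-1.0, -3.0, -10.0, -30.0])
resid_exact = np.array([0.6, 0.3, 0.08, 0.02])

# Moments: H(s) = sum_i k_i/(s - p_i) = sum_k m_k s^k, m_k = -sum_i k_i/p_i**(k+1).
q = 2
m = np.array([-(resid_exact / poles_exact ** (k + 1)).sum() for k in range(2 * q)])

# Pade denominator Q(s) = 1 + a_1 s + ... + a_q s^q from the moment-matching
# equations m_k + sum_{j=1..q} a_j m_{k-j} = 0 for k = q .. 2q-1.
A = np.array([[m[k - j] for j in range(1, q + 1)] for k in range(q, 2 * q)])
a = np.linalg.solve(A, -m[q:2 * q])

# The reduced q-pole model's poles are the roots of Q (coefficients high-to-low).
poles_reduced = np.roots(np.r_[a[::-1], 1.0])
print(poles_reduced)   # approximates the dominant exact poles -1 and -3
```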

Journal ArticleDOI
TL;DR: In this article, a direct method is presented for evaluating the gradient of the second-order Møller-Plesset (MP2) energy without storing any quartic quantities, such as two-electron repulsion integrals (ERIs), double substitution amplitudes, or the two-particle density matrix.
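
For context, the closed-shell MP2 energy expression that the gradient theory builds on is a single tensor contraction; the sketch below uses random stand-in integrals and orbital energies (no real molecule), purely to show the contraction that direct methods evaluate without storing quartic quantities.

```python
import numpy as np

# Closed-shell MP2 energy from MO-basis integrals (ia|jb):
#   E2 = sum_{ijab} (ia|jb) [2 (ia|jb) - (ib|ja)] / (e_i + e_j - e_a - e_b)
# Random stand-in data below -- not a real molecule -- just to show the shape.
n_occ, n_virt = 4, 8
rng = np.random.default_rng(1)

eps_occ = np.sort(rng.uniform(-2.0, -0.5, n_occ))    # occupied orbital energies
eps_virt = np.sort(rng.uniform(0.2, 2.0, n_virt))    # virtual orbital energies
ovov = rng.normal(scale=0.05, size=(n_occ, n_virt, n_occ, n_virt))  # (ia|jb)

denom = (eps_occ[:, None, None, None] - eps_virt[None, :, None, None]
         + eps_occ[None, None, :, None] - eps_virt[None, None, None, :])

# ovov.transpose(0, 3, 2, 1)[i, a, j, b] is (ib|ja), the exchange term.
e_mp2 = np.sum(ovov * (2.0 * ovov - ovov.transpose(0, 3, 2, 1)) / denom)
print(f"toy E_MP2 = {e_mp2:.6f}")
```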

Journal ArticleDOI
TL;DR: The author examines the invariants of human behavior as they appear in the theories of contemporary cognitive psychology.
Abstract: The author examines the invariants of human behavior as they appear in the theories of contemporary cognitive psychology.

Journal ArticleDOI
TL;DR: The cognitive processes in a widely used, nonverbal test of analytic intelligence, the Raven Progressive Matrices Test (Raven, 1962), are analyzed in terms of which processes distinguish between higher scoring and lower scoring subjects and which processes are common to all subjects and all items on the test.
Abstract: The cognitive processes in a widely used, nonverbal test of analytic intelligence, the Raven Progressive Matrices Test (Raven, 1962), are analyzed in terms of which processes distinguish between higher scoring and lower scoring subjects and which processes are common to all subjects and all items on the test. The analysis is based on detailed performance characteristics, such as verbal protocols, eye-fixation patterns, and errors. The theory is expressed as a pair of computer simulation models that perform like the median or best college students in the sample. The processing characteristic common to all subjects is an incremental, reiterative strategy for encoding and inducing the regularities in each problem. The processes that distinguish among individuals are primarily the ability to induce abstract relations and the ability to dynamically manage a large set of problem-solving goals in working memory.

Proceedings ArticleDOI
24 Jun 1990
TL;DR: A package for manipulating Boolean functions based on the reduced, ordered, binary decision diagram (ROBDD) representation is described, based on an efficient implementation of the if-then-else (ITE) operator.
Abstract: Efficient manipulation of Boolean functions is an important component of many computer-aided design tasks. This paper describes a package for manipulating Boolean functions based on the reduced, ordered, binary decision diagram (ROBDD) representation. The package is based on an efficient implementation of the if-then-else (ITE) operator. A hash table is used to maintain a strong canonical form in the ROBDD, and memory use is improved by merging the hash table and the ROBDD into a hybrid data structure. A memory function for the recursive ITE algorithm is implemented using a hash-based cache to decrease memory use. Memory function efficiency is improved by using rules that detect when equivalent functions are computed. The usefulness of the package is enhanced by an automatic and low-cost scheme for recycling memory. Experimental results are given to demonstrate why various implementation trade-offs were made. These results indicate that the package described here is significantly faster and more memory-efficient than other ROBDD implementations described in the literature.
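
A toy version of the core ideas, assuming nothing beyond the abstract: hash-consing for a strong canonical form, an ITE recursion, and a memo cache (here Python's lru_cache); the package's hybrid data structure and memory-recycling scheme are omitted.

```python
# Minimal ROBDD with an ITE core, a unique table, and a memo cache.
from functools import lru_cache

TRUE, FALSE = 1, 0               # terminal nodes
nodes = {}                        # unique table: (var, high, low) -> id
node_data = {TRUE: None, FALSE: None}

def mk(var, high, low):
    if high == low:                       # redundant test: reduce
        return high
    key = (var, high, low)
    if key not in nodes:                  # hash-consing: one copy per node
        nodes[key] = len(node_data)
        node_data[nodes[key]] = key
    return nodes[key]

def top_var(*ids):
    return min(node_data[i][0] for i in ids if i not in (TRUE, FALSE))

def cofactor(i, var, val):
    if i in (TRUE, FALSE) or node_data[i][0] != var:
        return i
    _, hi, lo = node_data[i]
    return hi if val else lo

@lru_cache(maxsize=None)                  # memo cache for the recursion
def ite(f, g, h):
    if f == TRUE:  return g
    if f == FALSE: return h
    if g == TRUE and h == FALSE: return f
    v = top_var(f, g, h)
    hi = ite(cofactor(f, v, 1), cofactor(g, v, 1), cofactor(h, v, 1))
    lo = ite(cofactor(f, v, 0), cofactor(g, v, 0), cofactor(h, v, 0))
    return mk(v, hi, lo)

def var(v):    return mk(v, TRUE, FALSE)
def AND(f, g): return ite(f, g, FALSE)
def OR(f, g):  return ite(f, TRUE, g)
def NOT(f):    return ite(f, FALSE, TRUE)

a, b = var(0), var(1)
print(AND(a, b) == NOT(OR(NOT(a), NOT(b))))   # De Morgan: same canonical node
```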

Journal ArticleDOI
TL;DR: In this paper, the authors formulate semi-direct MP2 methods that utilize disk space (which is usually much larger than memory size) for the steps that require the most storage, and show that these methods are superior to conventional algorithms despite requiring less disk space.

Journal ArticleDOI
23 Feb 1990-Science
TL;DR: In this article, the authors found that organizations vary considerably in the rates at which they learn and that the reasons for the variation observed in organizational learning curves include organizational forgetting, employee turnover, transfer of knowledge from other products and other organizations, and economies of scale.
Abstract: Large increases in productivity are typically realized as organizations gain experience in production. These "learning curves" have been found in many organizations. Organizations vary considerably in the rates at which they learn. Some organizations show remarkable productivity gains, whereas others show little or no learning. Reasons for the variation observed in organizational learning curves include organizational "forgetting," employee turnover, transfer of knowledge from other products and other organizations, and economies of scale.
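
The classic log-linear learning curve underlying such findings, y = a x^(-b) for unit labor y and cumulative output x, can be estimated by least squares on logs; the data below are synthetic.

```python
import numpy as np

# Fit a learning curve y = a * x**-b on synthetic production data.
rng = np.random.default_rng(0)
x = np.arange(1, 201)                                     # cumulative units
y = 50.0 * x ** -0.32 * rng.lognormal(0, 0.05, x.size)    # hours per unit

slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
print(f"estimated learning parameter b = {-slope:.3f}")   # close to 0.32
print(f"progress ratio per doubling = {2 ** slope:.2f}")  # ~0.80 (an 80% curve)
```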

Journal ArticleDOI
TL;DR: The design and implementation of Coda, a file system for a large-scale distributed computing environment composed of Unix workstations, is described, which provides resiliency to server and network failures through the use of two distinct but complementary mechanisms.
Abstract: The design and implementation of Coda, a file system for a large-scale distributed computing environment composed of Unix workstations, is described. It provides resiliency to server and network failures through the use of two distinct but complementary mechanisms. One mechanism, server replication, stores copies of a file at multiple servers. The other mechanism, disconnected operation, is a mode of execution in which a caching site temporarily assumes the role of a replication site. The design of Coda optimizes for availability and performance and strives to provide the highest degree of consistency attainable in the light of these objectives. Measurements from a prototype show that the performance cost of providing high availability in Coda is reasonable.

Journal ArticleDOI
TL;DR: This article examines the persistence of learning within organizations and the transfer of learning across organizations using data collected from multiple organizations, finding that knowledge acquired through production depreciates rapidly and that the conventional measure of learning, cumulative output, significantly overstates the persistence of learning.
Abstract: The persistence of learning within organizations and the transfer of learning across organizations are examined on data collected from multiple organizations. Results indicate that knowledge acquired through production depreciates rapidly. The conventional measure of learning, cumulative output, significantly overstates the persistence of learning. There is some evidence that learning transfers across organizations: organizations beginning production later are more productive than those with early start dates. Once organizations begin production, however, they do not appear to benefit from learning in other organizations. The implications of the results for a theory of organizational learning are discussed. Managerial implications are described.
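
One way to express the depreciation finding, hedged as a sketch rather than the authors' estimation procedure: replace cumulative output with a knowledge stock K_t = λK_{t-1} + q_t, where λ = 1 recovers the conventional cumulative measure and λ < 1 lets knowledge decay.

```python
# Knowledge stock with depreciation versus plain cumulative output.
def knowledge_stock(output, decay):
    k, stocks = 0.0, []
    for q in output:
        k = decay * k + q      # last period's knowledge decays, new output adds
        stocks.append(k)
    return stocks

weekly_output = [10, 12, 15, 0, 0, 8]        # toy series with a production break
print(knowledge_stock(weekly_output, 1.0))   # cumulative output (no forgetting)
print(knowledge_stock(weekly_output, 0.75))  # rapid depreciation during the break
```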

Journal ArticleDOI
TL;DR: In this paper, a mixed-integer nonlinear programming (MINLP) model is presented which can generate networks in which utility cost, exchanger areas, and selection of matches are optimized simultaneously, without relying on the assumption of fixed temperature approaches (HRAT or EMAT) or on the prediction of the pinch point for partitioning into subnetworks.

Journal ArticleDOI
TL;DR: A new algorithm is presented, the Sporadic Server algorithm, which greatly improves response times for soft-deadline aperiodic tasks and can guarantee hard deadlines for both periodic and aperiodic tasks.
Abstract: This thesis develops the Sporadic Server (SS) algorithm for scheduling aperiodic tasks in real-time systems. The SS algorithm is an extension of the rate monotonic algorithm which was designed to schedule periodic tasks. This thesis demonstrates that the SS algorithm is able to guarantee deadlines for hard-deadline aperiodic tasks and provide good responsiveness for soft-deadline aperiodic tasks while avoiding the schedulability penalty and implementation complexity of previous aperiodic service algorithms. It is also proven that the aperiodic servers created by the SS algorithm can be treated as equivalently-sized periodic tasks when assessing schedulability. This allows all the scheduling theories developed for the rate monotonic algorithm to be used to schedule aperiodic tasks. For scheduling aperiodic and periodic tasks that share data, this thesis defines the interactions and schedulability impact of using the SS algorithm with the priority inheritance protocols. For scheduling hard-deadline tasks with short deadlines, an extension of the rate monotonic algorithm and analysis is developed. To predict performance of the SS algorithm, this thesis develops models and equations that allow the use of standard queueing theory models to predict the average response time of soft-deadline aperiodic tasks serviced with a high-priority sporadic server. Implementation methods are also developed to support the SS algorithm in Ada and on the Futurebus+.
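
A condensed sketch of the server's budget bookkeeping: capacity consumed now is restored one server period later. (The thesis ties replenishment to when the server becomes active; that rule is simplified here, and all parameters are invented.)

```python
# Simplified Sporadic Server budget accounting.
SERVER_PERIOD, SERVER_BUDGET = 10.0, 2.0

capacity = SERVER_BUDGET
pending = []                    # scheduled replenishments: (time_due, amount)

def replenish(now):
    global capacity
    for r in [r for r in pending if r[0] <= now]:
        pending.remove(r)
        capacity += r[1]

def serve(now, demand):
    """Grant an aperiodic request whatever budget is available."""
    global capacity
    replenish(now)
    used = min(demand, capacity)
    capacity -= used
    if used > 0:
        pending.append((now + SERVER_PERIOD, used))   # schedule replenishment
    return used

print(serve(0.0, 1.5))    # 1.5 granted, 0.5 budget left
print(serve(2.0, 1.0))    # 0.5: budget exhausted until replenishment
print(serve(10.0, 1.0))   # 1.0: the capacity used at t=0 came back at t=10
```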

Proceedings ArticleDOI
05 Dec 1990
TL;DR: A general criterion for the schedulability of fixed-priority scheduling of periodic tasks with arbitrary deadlines is given, and the results are shown to provide a basis for developing predictable distributed real-time systems.
Abstract: Consideration is given to the problem of fixed-priority scheduling of periodic tasks with arbitrary deadlines. A general criterion for the schedulability of such a task set is given. Worst case bounds are given which generalize the C.L. Liu and J.W. Layland (1973) bound. The results are shown to provide a basis for developing predictable distributed real-time systems.
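
In the spirit of the paper's busy-period analysis, the criterion can be turned into a worst-case response time computation that examines successive jobs of a task until its busy period closes; the task parameters below are invented, and total utilization is assumed below 1.

```python
from math import ceil

def wcrt(tasks, i):
    """Worst-case response time of tasks[i]; tasks = [(C, T), ...] sorted
    highest priority first. Assumes total utilization < 1."""
    C, T = tasks[i]
    worst, q = 0, 0
    while True:                       # q-th job of task i in the busy period
        w = (q + 1) * C
        while True:                   # fixed-point iteration for its finish time
            w_next = (q + 1) * C + sum(ceil(w / Tj) * Cj for Cj, Tj in tasks[:i])
            if w_next == w:
                break
            w = w_next
        worst = max(worst, w - q * T)  # response time of the q-th job
        if w <= (q + 1) * T:           # busy period ends before the next release
            return worst
        q += 1

tasks = [(26, 70), (62, 100)]          # invented (C_i, T_i), highest priority first
print(wcrt(tasks, 1))                  # 118: the worst case spans several periods
```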

Journal ArticleDOI
TL;DR: Formal methods used in developing computer systems are defined, and their role is delineated, and certain pragmatic concerns about formal methods and their users, uses, and characteristics are discussed.
Abstract: Formal methods used in developing computer systems (i.e. mathematically based techniques for describing system properties) are defined, and their role is delineated. Formal specification languages, which provide the formal method's mathematical basis, are examined. Certain pragmatic concerns about formal methods and their users, uses, and characteristics are discussed. Six well-known or commonly used formal methods are illustrated by simple examples. They are Z, VDM, Larch, temporal logic, CSP, and transition axioms.

Journal ArticleDOI
TL;DR: In this paper, the authors review the use of the user-satisfaction construct as a measure of information systems effectiveness and discuss attitude structures and functions as they bear on information systems research.
Abstract: For nearly two decades, the user-satisfaction construct has occupied a central role in behavioral research in Information Systems (IS). In industry, the construct has often been used as a surrogate for IS effectiveness. Given its widespread use by both academics and practitioners, it is surprising that no comprehensive theoretical assessment of this construct has been performed. This paper provides such a review. It begins by examining conceptual and theoretical limitations of the construct's use as a measure of IS effectiveness. Attention is then focused on the evolution of the construct in the literature and the theoretical problems associated with its broader use. The fundamental similarity between user satisfaction and the social and cognitive psychologists' notion of an attitude is suggested. The next sections focus on a discussion of attitude structures and function. First, alternative theoretical views on attitude structure are presented. While one of these structures, the family of expectancy-value models, is reflected in current research on user satisfaction, the second, the family of cognitive approaches, is not. The two attitude structures are considered from the perspective of possible refinements to future work in IS. Next, an examination is made of the ways in which these structures have been integrated in terms of understanding the relationship of users' affective responses to other responses (i.e., behavior or cognition). This leads to a discussion of the function attitudes might serve for the user other than the evaluation of an information system or IS staff. Finally, the question of how behavior influences attitude is considered. The paper concludes with suggestions for future work.

Journal Article
TL;DR: It is demonstrated for the first time that the IFP is elevated throughout the tumor and drops precipitously to normal values in the tumor's periphery or in the immediately surrounding tissue.
Abstract: High interstitial fluid pressure (IFP) in solid tumors is associated with reduced blood flow as well as inadequate delivery of therapeutic agents such as monoclonal antibodies. In the present study, IFP was measured as a function of radial position within two rat tissue-isolated tumors (mammary adenocarcinoma R3230AC, 0.4-1.9 g, n = 9, and Walker 256 carcinoma, 0.5-5.0 g, n = 6) and a s.c. tumor (mammary adenocarcinoma R3230AC, 0.6-20.0 g, n = 7). Micropipettes (tip diameters 2 to 4 microns) connected to a servo-null pressure-monitoring system were introduced to depths of 2.5 to 3.5 mm from the tumor surface and IFP was measured while the micropipettes were retrieved to the surface. The majority (86%) of the pressure profiles demonstrated a large gradient in the periphery leading to a plateau of almost uniform pressure in the deeper layers of the tumors. Within isolated tumors, pressures reached plateau values at a distance of 0.2 to 1.1 mm from the surface. In s.c. tumors the sharp increase began in skin and levelled off at the skin-tumor interface. These results demonstrate for the first time that the IFP is elevated throughout the tumor and drops precipitously to normal values in the tumor's periphery or in the immediately surrounding tissue. These results confirm the predictions of our recently published mathematical model of interstitial fluid transport in tumors (Jain and Baxter, Cancer Res., 48: 7022-7032, 1988), offer novel insight into the etiology of interstitial hypertension, and suggest possible strategies for improved delivery of therapeutic agents.

Journal ArticleDOI
24 Aug 1990-Science
TL;DR: A model of catecholamine effects in a network of neural-like elements is presented, which shows that changes in the responsivity of individual elements do not affect their ability to detect a signal and ignore noise, but that the same changes in cell responsivity do improve the signal detection performance of the network as a whole.
Abstract: At the level of individual neurons, catecholamine release increases the responsivity of cells to excitatory and inhibitory inputs. A model of catecholamine effects in a network of neural-like elements is presented, which shows that (i) changes in the responsivity of individual elements do not affect their ability to detect a signal and ignore noise but (ii) the same changes in cell responsivity in a network of such elements do improve the signal detection performance of the network as a whole. The second result is used in a computer simulation based on principles of parallel distributed processing to account for the effect of central nervous system stimulants on the signal detection performance of human subjects.
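
A Monte Carlo sketch of the network half of the claim, with invented parameters: raising the gain of a logistic unit is a monotone transform of one noisy input (so a single unit's discrimination is unchanged), but in a two-unit chain where fresh noise enters between units, higher gain improves end-to-end detection.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def logistic(x, gain):
    return 1.0 / (1.0 + np.exp(-gain * x))

def chain_accuracy(gain, signal=1.0, noise_sd=0.5):
    """P(correct) for signal vs. no-signal from a two-unit chain."""
    def output(stimulus):
        h = logistic(stimulus + rng.normal(0, noise_sd, N), gain)    # unit 1
        return logistic(h - 0.5 + rng.normal(0, noise_sd, N), gain)  # unit 2
    out_sig, out_noise = output(signal), output(0.0)
    thresh = 0.5 * (out_sig.mean() + out_noise.mean())
    return 0.5 * ((out_sig > thresh).mean() + (out_noise <= thresh).mean())

for gain in (0.5, 1.0, 2.0):
    print(f"gain {gain}: accuracy {chain_accuracy(gain):.3f}")  # rises with gain
```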

Journal ArticleDOI
21 Dec 1990-Science
TL;DR: A simple and robust mechanism, based on human docility and bounded rationality, is proposed that can account for the evolutionary success of genuinely altruistic behavior.
Abstract: Within the framework of neo-Darwinism, with its focus on fitness, it has been hard to account for altruism: behavior that reduces the fitness of the altruist but increases average fitness in society. Many population biologists argue that, except for altruism to close relatives, human behavior that appears to be altruistic amounts to reciprocal altruism, behavior undertaken with an expectation of reciprocation, hence incurring no net cost to fitness. Herein is proposed a simple and robust mechanism, based on human docility and bounded rationality, that can account for the evolutionary success of genuinely altruistic behavior. Because docility (receptivity to social influence) contributes greatly to fitness in the human species, it will be positively selected. As a consequence, society can impose a "tax" on the gross benefits gained by individuals from docility by inducing docile individuals to engage in altruistic behaviors. Limits on rationality in the face of environmental complexity prevent the individual from avoiding this "tax." An upper bound is imposed on altruism by the condition that there must remain a net fitness advantage for docile behavior after the cost to the individual of altruism has been deducted.

Journal ArticleDOI
TL;DR: The results show that although no theoretical guarantee can be given, the proposed method has a high degree of reliability for finding the global optimum in nonconvex problems.

Journal ArticleDOI
TL;DR: This article analyzed political and economic data from 121 countries during the period 1950-1982 and found that the probability of a government's being overthrown by a coup is significantly influenced by the level of economic well-being.
Abstract: The transfer of power through the use of military force is a commonplace event in world affairs. Although no two coups d'etat are alike, they all have a common denominator: poverty. We analyze political and economic data from 121 countries during the period 1950–1982 and find that the probability of a government's being overthrown by a coup is significantly influenced by the level of economic well-being. Thus, even authoritarian governments have powerful incentives to promote economic growth, not out of concern for the welfare of their citizens, but because poor economic performance may lead to their removal by force. When the simultaneity of low income and coups is accounted for, we find that the aftereffects of a coup include a heritage of political instability in the form of an increased likelihood of further coups. Although the effect of income on coups is pronounced, we find little evidence of feedback from coups to income growth.