
Showing papers published by Carnegie Mellon University in 1982


Journal ArticleDOI
TL;DR: In this paper, a framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage where the domain knowledge is directly embodied in procedures for performing the skill.
Abstract: A framework for skill acquisition is proposed that includes two major stages in the development of a cognitive skill: a declarative stage in which facts about the skill domain are interpreted and a procedural stage in which the domain knowledge is directly embodied in procedures for performing the skill. This general framework has been instantiated in the ACT system in which facts are encoded in a propositional network and procedures are encoded as productions. Knowledge compilation is the process by which the skill transits from the declarative stage to the procedural stage. It consists of the subprocesses of composition, which collapses sequences of productions into single productions, and proceduralization, which embeds factual knowledge into productions. Once proceduralized, further learning processes operate on the skill to make the productions more selective in their range of applications. These processes include generalization, discrimination, and strengthening of productions. Comparisons are made to similar concepts from past learning theories. How these learning mechanisms apply to produce the power law speedup in processing time with practice is discussed.
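
The power-law speedup mentioned in the last sentence is usually written as T(N) = A + B·N^(-α) for practice trial N. The short numerical sketch below only illustrates that functional form; the parameter values are invented and are not estimates from the paper.

```python
# Illustrative sketch of the power law of practice (assumed parameter values;
# not figures from the paper).
import numpy as np

A, B, alpha = 0.4, 2.0, 0.5            # asymptote, initial gain, learning rate (assumptions)
trials = np.arange(1, 101)
latency = A + B * trials ** (-alpha)   # T(N) = A + B * N^(-alpha)

# On log-log axes the practice curve (minus its asymptote) is a straight line,
# which is the usual diagnostic for power-law speedup with practice.
log_slope = np.polyfit(np.log(trials), np.log(latency - A), 1)[0]
print(f"recovered exponent = {-log_slope:.2f}")   # close to alpha
```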

3,539 citations


Journal ArticleDOI
TL;DR: The Rete Match Algorithm is an efficient method for comparing a large collection of patterns to a large collection of objects that finds all the objects that match each pattern.
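
The efficiency comes from saving match state between cycles, so each object is tested once as it enters working memory instead of being rescanned against every pattern on every cycle. The sketch below illustrates only that caching idea with invented data structures; a real Rete network additionally shares common tests across patterns and joins partial matches across conditions, which this sketch omits.

```python
# Minimal sketch of Rete's central idea: test each object once as it is added
# and cache the result per pattern. Pattern and object formats are illustrative
# assumptions, not the paper's data structures.

class AlphaMemory:
    def __init__(self, tests):
        self.tests = tests          # dict of attribute -> required value
        self.matches = []           # cached objects that passed the tests

    def try_add(self, obj):
        if all(obj.get(attr) == val for attr, val in self.tests.items()):
            self.matches.append(obj)

class Matcher:
    def __init__(self, patterns):
        self.memories = {name: AlphaMemory(tests) for name, tests in patterns.items()}

    def add_object(self, obj):
        # Incremental update: the new object is tested once per pattern;
        # previously added objects are never re-examined.
        for mem in self.memories.values():
            mem.try_add(obj)

    def matches(self, name):
        return self.memories[name].matches

patterns = {"red-block": {"type": "block", "color": "red"}}
m = Matcher(patterns)
m.add_object({"type": "block", "color": "red", "id": 1})
m.add_object({"type": "ball", "color": "red", "id": 2})
print(m.matches("red-block"))   # only the red block is cached as a match
```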

2,562 citations


Journal ArticleDOI
TL;DR: The Hohenberg-Kohn theorem was extended to fractional electron number for an isolated open system described by a statistical mixture in this article, and the curve of lowest average energy was found to be a series of straight line segments with slope discontinuities at integral $N.
Abstract: The Hohenberg-Kohn theorem is extended to fractional electron number $N$, for an isolated open system described by a statistical mixture. The curve of lowest average energy $E_N$ versus $N$ is found to be a series of straight line segments with slope discontinuities at integral $N$. As $N$ increases through an integer $M$, the chemical potential and the highest occupied Kohn-Sham orbital energy both jump from $E_M - E_{M-1}$ to $E_{M+1} - E_M$. The exchange-correlation potential $\delta E_{\mathrm{xc}}/\delta n(\vec{r})$ jumps by the same constant, and $\lim_{r\to\infty} \delta E_{\mathrm{xc}}/\delta n(\vec{r}) \ge 0$.
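
The piecewise-linear behaviour stated above can be written out explicitly; the display below is a paraphrase inferred from the stated result (linear segments between integer electron numbers and the resulting jump in the chemical potential), not an extra formula quoted from the paper.

```latex
% For N = M + \omega electrons, with M an integer and 0 \le \omega \le 1,
% the lowest average energy interpolates linearly between integer points:
\[
  E_{M+\omega} = (1-\omega)\,E_M + \omega\,E_{M+1},
\]
% so the chemical potential \mu = \partial E_N / \partial N is piecewise
% constant and jumps at integral N:
\[
  \mu =
  \begin{cases}
    E_M - E_{M-1}, & M-1 < N < M, \\
    E_{M+1} - E_M, & M < N < M+1.
  \end{cases}
\]
```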

2,427 citations


Journal ArticleDOI
TL;DR: In this article, a theory of the nature of knowledge and representation is proposed, namely that there is another computer system level, the knowledge level, immediately above the symbol (or program) level, in which the principle of rationality plays a central role.

2,368 citations


Journal ArticleDOI
Kung
TL;DR: The basic principle of systolic architectures is reviewed and it is explained why they should result in cost-effective, highperformance special-purpose systems for a wide range of problems.
Abstract: High-performance, special-purpose computer systems are typically used to meet specific application requirements or to off-load computations that are especially taxing to general-purpose computers. As hardware cost and size continue to drop and processing requirements become well-understood in areas such as signal and image processing, more special-purpose systems are being constructed. However, since most of these systems are built on an ad hoc basis for specific tasks, methodological work in this area is rare. Because the knowledge gained from individual experiences is neither accumulated nor properly organized, the same errors are repeated. I/O and computation imbalance is a notable example: often, the fact that I/O interfaces cannot keep up with device speed is discovered only after constructing a high-speed, special-purpose device. We intend to help correct this ad hoc approach by providing a general guideline, specifically the concept of systolic architecture, a general methodology for mapping high-level computations into hardware structures. In a systolic system, data flows from the computer memory in a rhythmic fashion, passing through many processing elements before it returns to memory, much as blood circulates to and from the heart. The system works like an automobile assembly line where different people work on the same car at different times and many cars are assembled simultaneously. An assembly line is always linear, however, and systolic systems are sometimes two-dimensional. They can be rectangular, triangular, or hexagonal to make use of higher degrees of parallelism. Moreover, to implement a variety of computations, data flow in a systolic system may be at multiple speeds in multiple directions; both inputs and (partial) results flow, whereas only results flow in classical pipelined systems. Generally speaking, a systolic system is easy to implement because of its regularity and easy to reconfigure (to meet various outside constraints) because of its modularity. The systolic architectural concept was developed at Carnegie-Mellon University, and versions of systolic processors are being designed and built by several industrial and governmental organizations. This article reviews the basic principle of systolic architectures and explains why they should result in cost-effective, high-performance special-purpose systems for a wide range of problems.
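
As a software illustration of the rhythmic data flow described above, the sketch below simulates a linear systolic array computing a matrix-vector product: each cell keeps one accumulating output while the input vector marches through the array one cell per step. This is a textbook example of a systolic design, not a specific array from the article, and all names are invented.

```python
# Toy simulation of a one-dimensional systolic array computing y = A @ x.
# Cell i holds the accumulating result y[i]; x elements enter cell 0 one per
# step and shift right, so all cells work in lockstep on different data.
import numpy as np

def systolic_matvec(A, x):
    n, m = A.shape
    y = np.zeros(n)
    pipeline = [None] * n               # x value currently held by each cell
    for t in range(m + n - 1):          # run until the last x element drains out
        # shift: each cell passes its x value to its right neighbour
        pipeline = [x[t] if t < m else None] + pipeline[:-1]
        for i, xj in enumerate(pipeline):
            if xj is not None:
                j = t - i               # index of the x element now at cell i
                y[i] += A[i, j] * xj
    return y

A = np.arange(12, dtype=float).reshape(3, 4)
x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(systolic_matvec(A, x), A @ x)
```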

2,319 citations


Journal ArticleDOI
TL;DR: In this article, the authors trace the history of diffusion for 46 new products and examine the interrelations among diffusion, other aspects of technological change, price, output, and certain attributes of the relevant markets.
Abstract: This study attempts to measure and analyse the diffusion of product innovations. Diffusion is defined as the spread in the number of producers engaged in manufacturing a new product. Thus, the term refers to the net entry rate in the market for a new product. We trace the history of diffusion for 46 new products and examine the inter-relations among diffusion, other aspects of technological change, price, output, and certain attributes of the relevant markets. To explain the 46 product histories, we construct a theory of the development of industries for new products. Our theory combines elements of traditional, neoclassical models with what Nelson and Winter (1974) have termed an evolutionary theory. A novel feature is that the historical sequence, or time path, of events is viewed as a critical determinant of the ultimate structure of new product markets. Thus the time path of events determines not only the course traversed in reaching the end result but the ultimate market structure itself. The paper is organised in four sections. In Section I we present our theory. In Section II we construct a series of alternative theories of the development of industries for new products based on approaches to be found in received literature. The evidence from the 46 new product histories is examined in Section III. Finally, a brief summary of principal findings follows in Section IV.

1,487 citations


Journal ArticleDOI
TL;DR: The authors characterizes managerial problem sensing, a necessary precondition for managerial activity directed toward organizational adaptation, as composed of noticing, interpreting, and incorporating stimuli, and reviews the constituent social cognition processes that make certain kinds of problem-sensing behavior, including errors, relatively likely to occur.
Abstract: This paper characterizes managerial problem sensing, a necessary precondition for managerial activity directed toward organizational adaptation, as composed of noticing, interpreting, and incorporating stimuli. It then reviews the constituent social cognition processes that make certain kinds of problem-sensing behavior, including errors, relatively likely to occur. Implications for the organizational issues of crisis, chance events, break points, and extreme change are explored.

1,207 citations


Journal ArticleDOI
TL;DR: It is shown that addition of n-bit binary numbers can be performed on a chip with a regular layout in time proportional to log n and with area proportional to n.
Abstract: With VLSI architecture, the chip area and design regularity represent a better measure of cost than the conventional gate count. We show that addition of n-bit binary numbers can be performed on a chip with a regular layout in time proportional to log n and with area proportional to n.
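
The O(log n) time bound rests on computing all carries with a parallel prefix over per-bit (generate, propagate) pairs. The behavioural sketch below uses a Kogge-Stone-style doubling scan, a close relative of the Brent-Kung scheme analysed in the paper, purely as a software illustration of the recurrence; it is not the paper's layout, and the function name and bit width are arbitrary.

```python
def prefix_adder(a: int, b: int, width: int) -> int:
    """Behavioural model of a parallel-prefix (carry-lookahead) adder."""
    # per-bit generate and propagate signals
    g = [((a >> i) & 1) & ((b >> i) & 1) for i in range(width)]
    p = [((a >> i) & 1) ^ ((b >> i) & 1) for i in range(width)]

    # prefix scan with doubling distances: after ~log2(width) passes,
    # (G[i], P[i]) summarise bits i..0 -- this is where the log-depth comes from
    G, P = g[:], p[:]
    d = 1
    while d < width:
        newG, newP = G[:], P[:]
        for i in range(d, width):
            newG[i] = G[i] | (P[i] & G[i - d])
            newP[i] = P[i] & P[i - d]
        G, P = newG, newP
        d *= 2

    # carry into bit i is the group generate of bits i-1..0 (carry-in = 0)
    carries = [0] + G[:width - 1]
    s = 0
    for i in range(width):
        s |= (p[i] ^ carries[i]) << i
    return s

# quick check against ordinary integer addition (overflow beyond width dropped)
assert prefix_adder(0b1011, 0b0110, 8) == (0b1011 + 0b0110) & 0xFF
```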

1,147 citations


Journal ArticleDOI
TL;DR: R1 is a program that configures VAX-11/780 computer systems and uses Match as its principal problem solving method; it has sufficient knowledge of the configuration domain and of the peculiarities of the various configuration constraints that at each step in the configuration process, it simply recognizes what to do.
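
The "recognize what to do" control style can be caricatured as a forward-chaining recognize-act loop over condition-action rules. The sketch below is only a toy of that style; the rules, component names, and state representation are invented placeholders, not taken from R1's knowledge base.

```python
# Toy recognize-act loop: at each step the first rule whose conditions hold
# against the current partial configuration fires and extends it.
# All rules and component names are invented for illustration.

state = {"cpu": "vax-11/780", "cabinet": None, "memory_boards": 0, "backplane": None}

rules = [
    ("place-cabinet",
     lambda s: s["cabinet"] is None,
     lambda s: s.update(cabinet="cpu-cabinet")),
    ("add-backplane",
     lambda s: s["cabinet"] and s["backplane"] is None,
     lambda s: s.update(backplane="standard-backplane")),
    ("add-memory",
     lambda s: s["backplane"] and s["memory_boards"] < 2,
     lambda s: s.update(memory_boards=s["memory_boards"] + 1)),
]

fired = True
while fired:
    fired = False
    for name, condition, action in rules:
        if condition(state):
            action(state)          # the recognized step is simply carried out
            print("fired:", name)
            fired = True
            break                  # re-match from the top after each action

print(state)
```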

1,001 citations


Journal ArticleDOI
TL;DR: In the moving window condition as mentioned in this paper, a reader presses a button to see each successive word in a text and the previous word is removed when a new word appears, and the words appear in the same position that they would in normal text.
Abstract: This article compares several methods of presenting text, including a new paradigm that produces reading-time data with many of the characteristics of naturally occurring eye-fixation data. In the new paradigm, called the moving window condition, a reader presses a button to see each successive word in a text, and the previous word is removed when a new word appears. The words appear in the same position that they would in normal text, and word-length information is available in peripheral vision. The results are qualitatively and quantitatively compared to the results obtained by monitoring the eye fixations of subjects reading normal text. The word-level effects are generally similar. Readers pause longer on longer words, on less frequent words, on words that introduce a new topic, and at ends of sentences. The results suggest that readers initiate the processing of each word as soon as they encounter it rather than buffer words and delay processing. Also considered are two other reading-time paradigms, one in which words are cumulatively displayed on the screen and one in which each successive word is presented at the same location on the screen. Finally, we consider how the tendency to immediately process text might interact with other techniques for text presentation, such as the rapid serial visual presentation (RSVP) condition, and we generate predictions about the nature and limits of the method.
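
A minimal console sketch of the moving window logic described above: only the current word is shown, all other words are masked by dashes of matching length (so word-length information stays available), and the interval between key presses stands in for the reading time of the revealed word. Every detail of this sketch is an assumption for illustration, not the authors' apparatus.

```python
# Console sketch of a moving-window, self-paced reading trial.
import time

def moving_window(text: str):
    words = text.split()
    times = []
    for i, word in enumerate(words):
        # mask every word except the current one with dashes of equal length
        display = " ".join(w if j == i else "-" * len(w) for j, w in enumerate(words))
        start = time.monotonic()
        input(display + "   [press Enter for next word]")
        times.append(time.monotonic() - start)   # reading time for this word
    return list(zip(words, times))

if __name__ == "__main__":
    for word, rt in moving_window("The horse raced past the barn"):
        print(f"{word:>8s}  {rt:.3f} s")
```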

896 citations


Journal ArticleDOI
01 Feb 1982-Cell
TL;DR: The fact that the adenovirus and SV40 tumor antigens, both required for transformation, can be found in physical association with the same cellular protein in a transformed cell is a good indication that these two diverse viral proteins share some common mechanisms or functions.

Journal ArticleDOI
TL;DR: A method of constructing concurrent programs in which the synchronization skeleton of the program is automatically synthesized from a (branching time) temporal logic specification by using a decision procedure based on the finite model property of the logic.

Proceedings ArticleDOI
05 May 1982
TL;DR: This work considers the complexity of satisfiability and determination of truth in a particular finite structure for different propositional linear temporal logics and shows that both of the above problems are NP-complete for the logic with the F operator and PSPACE-complete for the logics with F and X, with U, with U, S, and X, and for Wolper's extended logic with regular operators.
Abstract: We consider the complexity of satisfiability and determination of truth in a particular finite structure for different propositional linear temporal logics. We show that both the above problems are NP-complete for the logic with F operator and are PSPACE-complete for the logics with F,X, with U, with U,S,X, and Wolper's extended logic with regular operators [Wo81].

Book ChapterDOI
12 Jul 1982
TL;DR: This paper discusses many examples in an informal way that should serve as an introduction to the theory of domains and shows that many things that were done previously axiomatically can now be proved in a straightforward way as theorems.
Abstract: The purpose of the theory of domains is to give models for spaces on which to define computable functions. The kinds of spaces needed for denotational semantics involve not only spaces of higher type (e.g. function spaces) but also spaces defined recursively (e.g. reflexive domains). Also required are many special domain constructs (or functors) in order to create the desired structures. There are several choices of a suitable category of domains, but the basic one which has the simplest properties is the one sometimes called consistently complete algebraic cpo's. This category of domains is studied in this paper from a new, and it is to be hoped, simpler point of view incorporating the approaches of many authors into a unified presentation. Briefly, the domains of elements are represented set theoretically with the aid of structures called information systems. These systems are very familiar from mathematical logic, and their use seems to accord well with intuition. Many things that were done previously axiomatically can now be proved in a straightforward way as theorems. The present paper discusses many examples in an informal way that should serve as an introduction to the subject.
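
As a concrete illustration of the information-system idea, the sketch below fixes a tiny set of tokens, a (trivial) consistency predicate, and an entailment relation, and then enumerates the elements, i.e. the consistent, entailment-closed token sets. The particular system and the closure conventions used are assumptions chosen for illustration, not an example taken from the paper.

```python
# Tiny finite illustration of an information system: tokens, a consistency
# predicate on finite token sets, and an entailment relation. The domain's
# elements are the consistent, entailment-closed token sets. Conventions
# (e.g. whether the empty set counts as an element) vary between presentations;
# this follows one common textbook reading and is an assumption.
from itertools import chain, combinations

TOKENS = ("n>=0", "n>=1", "n=1")        # increasingly informative assertions

def consistent(u):
    return True                          # every finite token set is consistent here

def entails(u, a):
    """u |- a: token a is implied by the finite token set u."""
    if a in u:
        return True
    if a == "n>=1":
        return "n=1" in u
    if a == "n>=0":
        return "n>=1" in u or "n=1" in u
    return False

def is_element(x):
    x = set(x)
    return consistent(x) and all(a in x for a in TOKENS if entails(x, a))

def powerset(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

elements = [set(x) for x in powerset(TOKENS) if is_element(x)]
print(elements)
# a chain of better and better approximations, ordered by inclusion:
# {}, {n>=0}, {n>=0, n>=1}, {n>=0, n>=1, n=1}
```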

Journal ArticleDOI
TL;DR: In this article, the authors consider the problem of how to allocate a resource among divisions when the productivity of the resource in each division is known only to the division manager, and they show that certain types of transfer pricing schemes are optimal.
Abstract: This paper considers the question: How should a firm allocate a resource among divisions when the productivity of the resource in each division is known only to the division manager? Obviously if the divisions as represented by their managers are indifferent among various allocations of the resource, the headquarters can simply request the division managers to reveal their private information on productivity knowing that the managers have no incentive to lie. The resource allocation problem can then be solved under complete or at least symmetric information. This aspect is a flaw in much of the recent literature on this topic, i.e., there is nothing in the models considered which makes divisions prefer one allocation over another. Thus, although in some cases elaborate allocation schemes are proposed and analyzed, they are really unnecessary. In the model we develop, a division can produce the same output with less managerial effort if it is allocated more resources, and effort is costly to the manager. We further assume that this effort is unobservable by the headquarters, so that it cannot infer divisional productivity from data on divisional output and managerial effort. Given these assumptions, we seek an optimal resource allocation process. Our results show that certain types of transfer pricing schemes are optimal. In particular, if there are no potentially binding capacity constraints on production of the resource, then an optimal process is for each division to choose a transfer price from a schedule announced by the headquarters. Division managers receive a fixed compensation minus the cost of the resource allocated to them at the chosen transfer price. Resources are allocated on the basis of the chosen transfer prices. If there is a potentially binding constraint on resource production, a somewhat more complicated, but similar, scheme is required.

Proceedings ArticleDOI
30 Jul 1982
TL;DR: In this paper, a unified concept of using systolic arrays to perform real-time triangularization for both general and band matrices is presented, and a framework is presented for the solution of linear systems with pivoting and for least squares computations.
Abstract: Given an n x p matrix X with p < n, matrix triangularization, or triangularization for short, is to determine an n x n nonsingular matrix M such that MX = [R; 0], where R is p x p upper triangular, and furthermore to compute the entries in R. By triangularization, many matrix problems are reduced to the simpler problem of solving triangular linear systems (see, for example, Stewart). When X is a square matrix, triangularization is the major step in almost all direct methods for solving general linear systems. When M is restricted to be an orthogonal matrix Q, triangularization is also the key step in computing least squares solutions by the QR decomposition, and in computing eigenvalues by the QR algorithm. Triangularization is computationally expensive, however. Algorithms for performing it typically require on the order of n^3 operations on general n x n matrices. As a result, triangularization has become a bottleneck in some real-time applications. This paper sketches unified concepts of using systolic arrays to perform real-time triangularization for both general and band matrices. (Examples and general discussions of systolic architectures can be found in other papers.) Under the same framework, systolic triangularization arrays are derived for the solution of linear systems with pivoting and for least squares computations. More detailed descriptions of the suggested systolic arrays will appear in the final version of the paper.
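
For reference, the computation such arrays perform can be written as a plain sequential sketch using Givens rotations (the orthogonal case mentioned above). A systolic realization would generate and apply these rotations in a pipelined triangular grid of cells, which this software sketch makes no attempt to model; the function below is illustrative only.

```python
import numpy as np

def givens_triangularize(X):
    """Reduce an n x p matrix X (p <= n) with Givens rotations so that the
    returned R has an upper-triangular top p x p block and zero rows below,
    with Q orthogonal and Q @ R == X. Sequential reference sketch only."""
    R = X.astype(float).copy()
    n, p = R.shape
    Q = np.eye(n)
    for j in range(p):                  # zero out column j below the diagonal
        for i in range(n - 1, j, -1):
            a, b = R[j, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.eye(n)               # rotation acting on rows j and i
            G[[j, i], [j, i]] = c
            G[j, i], G[i, j] = s, -s
            R = G @ R
            Q = Q @ G.T
    return Q, R

X = np.random.default_rng(0).normal(size=(5, 3))
Q, R = givens_triangularize(X)
assert np.allclose(Q @ R, X)
assert np.allclose(np.tril(R[:3], -1), 0) and np.allclose(R[3:], 0)
```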

Journal ArticleDOI
TL;DR: In this article, a matched asymptotic expansion in a small parameter L/a, where a is the particle radius and L is the length scale characteristic of the physical interaction between solute and particle surface, was used to obtain an expression for particle velocity.
Abstract: When a particle is placed in a fluid in which there is a non-uniform concentration of solute, it will move toward higher or lower concentration depending on whether the solute is attracted to or repelled from the particle surface. A quantitative understanding of this phenomenon requires that the equations representing conservation of mass and momentum within the fluid in the vicinity of the particle are solved. This is accomplished using a method of matched asymptotic expansions in a small parameter L/a, where a is the particle radius and L is the length scale characteristic of the physical interaction between solute and particle surface. This analysis yields an expression for particle velocity, valid in the limit L/a → 0, that agrees with the expression obtained by previous researchers. The result is cast into a more useful algebraic form by relating various integrals involving the solute/particle interaction energy to a measurable thermodynamic property, the Gibbs surface excess of solute Γ. An important result is that the correction for finite L/a is actually O(Γ/C∞ a), where C∞ is the bulk concentration of solute, and could be O(1) even when L/a is orders of magnitude smaller.

Book ChapterDOI
TL;DR: This chapter focuses on the important role of retrieval structures as working memory states; working memory includes a context component that contains structures for controlling the flow of processing within the current task and provides relatively fast and direct access to knowledge structures relevant to the current task and context.
Abstract: This chapter focuses on the important role of retrieval structures as working memory states. The working memory has at least the following components: (1) short-term memory, which provides direct and virtually immediate access to very recent or attended knowledge states; (2) intermediate-term memory, the task-specific retrieval structure in long-term memory, which provides direct and relatively fast access to knowledge states; and (3) context, which contains structures for controlling the flow of processing within the current task and provides relatively fast and direct access to knowledge structures relevant to the current task and context. The auditory and visual–spatial buffers are important components of working memory.

Journal ArticleDOI
TL;DR: In this paper, the decay rate $\nu_2 \to \nu_1 + \gamma$ for Dirac and Majorana neutrinos with a small mass was investigated in the $\mathrm{SU}(2) \times \mathrm{U}(1)$ model, and possible enhancements of the rate due to methods of eluding the Glashow-Iliopoulos-Maiani suppression and due to charged Higgs bosons were considered.
Abstract: General formulas are given for the decay rate $\nu_2 \to \nu_1 + \gamma$ in the $\mathrm{SU}(2) \times \mathrm{U}(1)$ model for neutrinos with a small mass. The emphasis is on distinguishing between the cases of Dirac and Majorana neutrinos. Possible enhancements of the rate due to methods of eluding the Glashow-Iliopoulos-Maiani suppression and due to charged Higgs bosons are considered.

Journal ArticleDOI
TL;DR: The authors found evidence for two types of negativity effects in electoral behavior: negativity in the formation of impressions and negativity as a consequence of impressions in the 1974 and 1978 congressional elections, concluding that negative information stands out against a general positive background.
Abstract: The tendency for negative information to have more weight than equally extreme or equally likely positive information appears in a variety of cognitive processing tasks, but has rarely been documented empirically in politics. This paper provides evidence for two types of negativity effects in electoral behavior: negativity in the formation of impressions (of Humphrey and Nixon in 1968, of McGovern and Nixon in 1972, and of Carter and Reagan in 1980), and negativity as a consequence of impressions (in the 1974 and 1978 congressional elections). Both post hoc rationalization and the nonequivalence of the positive and negative information were examined and ruled out as artifactual explanations for these results. Discussion centered around two possible explanations for negativity, a cost-orientation hypothesis (which holds that people are more strongly motivated to avoid costs than to approach gains) and a figure-ground hypothesis (which holds that negative information stands out against a general positive background).

Journal ArticleDOI
TL;DR: In this article, surface tension measurements were obtained for alkyltrimethylammonium bromides and alkylpyridinium bromides at 50°C, and for Triton X-100 at 20 and 50°C, in ethylammonium nitrate, a low-melting anhydrous fused salt.

Journal ArticleDOI
TL;DR: This paper studies both the local and global convergence of various iterative methods for solving the variational inequality and the nonlinear complementarity problems and several convergence results are obtained for some nonlinear approximation methods.
Abstract: In this paper, we study both the local and global convergence of various iterative methods for solving the variational inequality and the nonlinear complementarity problems. Included among such methods are the Newton and several successive overrelaxation algorithms. For the most part, the study is concerned with the family of linear approximation methods. These are iterative methods in which a sequence of vectors is generated by solving certain linearized subproblems. Convergence to a solution of the given variational or complementarity problem is established by using three different yet related approaches. The paper also studies a special class of variational inequality problems arising from such applications as computing traffic and economic spatial equilibria. Finally, several convergence results are obtained for some nonlinear approximation methods.
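
One concrete member of the successive overrelaxation family mentioned above is projected SOR for the linear complementarity problem: find z >= 0 with w = Mz + q >= 0 and z·w = 0. The sketch below is the generic textbook iteration, included only to make the flavor of such methods concrete; it is not one of the specific algorithms analysed in the paper.

```python
import numpy as np

def projected_sor(M, q, omega=1.0, tol=1e-10, max_iter=10_000):
    """Projected SOR for the LCP: find z >= 0 with w = M z + q >= 0 and z.w = 0.
    Standard textbook iteration (needs a positive diagonal; converges e.g. for
    symmetric positive definite M with 0 < omega < 2); illustrative only."""
    n = len(q)
    z = np.zeros(n)
    for _ in range(max_iter):
        z_old = z.copy()
        for i in range(n):
            residual = q[i] + M[i] @ z                     # uses already-updated entries
            z[i] = max(0.0, z[i] - omega * residual / M[i, i])
        if np.linalg.norm(z - z_old, ord=np.inf) < tol:
            break
    return z

# small check: M symmetric positive definite, so the LCP has a unique solution
M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-1.0, -2.0])
z = projected_sor(M, q, omega=1.2)
w = M @ z + q
assert (z >= -1e-8).all() and (w >= -1e-8).all() and abs(z @ w) < 1e-8
```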

Journal ArticleDOI
TL;DR: This paper describes polynomial heuristics for three important hard problems: the discrete fixed cost median problem (the plant location problem), the continuous fixed cost median problem in a Euclidean space, and the network fixed cost median problem with convex costs.
Abstract: We describe in this paper polynomial heuristics for three important hard problems: the discrete fixed cost median problem (the plant location problem), the continuous fixed cost median problem in a Euclidean space, and the network fixed cost median problem with convex costs. The heuristics for all three problems guarantee error ratios no worse than the logarithm of the number of customer points. The derivation of the heuristics is based on the presentation of all types of median problems discussed as a set covering problem.
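
The logarithmic error ratio quoted above is characteristic of the greedy heuristic for (weighted) set covering, the problem to which the median problems are reduced. The sketch below shows the generic greedy cover heuristic as an illustration of that guarantee; it is not the paper's specific reduction, and the sample data are invented.

```python
def greedy_set_cover(universe, sets_with_costs):
    """Greedy heuristic for weighted set covering: repeatedly pick the set with
    the smallest cost per newly covered element. Its total cost is within a
    factor of roughly ln(|universe|) of optimal. Assumes the sets cover the
    universe; generic sketch, not the paper's construction."""
    uncovered = set(universe)
    chosen, total = [], 0.0
    while uncovered:
        # most cost-effective set for the elements still uncovered
        best = min(
            (s for s in sets_with_costs if s[0] & uncovered),
            key=lambda s: s[1] / len(s[0] & uncovered),
        )
        chosen.append(best)
        total += best[1]
        uncovered -= best[0]
    return chosen, total

sets_with_costs = [
    (frozenset({1, 2, 3}), 2.0),
    (frozenset({3, 4}), 1.0),
    (frozenset({4, 5, 6}), 2.5),
    (frozenset({1, 6}), 1.5),
]
cover, cost = greedy_set_cover({1, 2, 3, 4, 5, 6}, sets_with_costs)
print(cost, [sorted(s) for s, _ in cover])
```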

Book ChapterDOI
TL;DR: This paper explores the ways in which people's understandings of concepts can be characterized by examining a variety of their numerical skills, by inferring representations and processes that might give rise to each of these skills, and by integrating the models arising from each task domain to build a general model of preschoolers' knowledge of numbers.
Abstract: This chapter explores the ways in which people's understandings of concepts can be characterized. In particular, it attempts to characterize very young children's understandings of numbers by examining a variety of their numerical skills, by inferring representations and processes that might give rise to each of these skills, and by integrating the models arising from each task domain to build a general model of preschoolers' knowledge of numbers. Several aspects of young children's knowledge of numbers are examined to devise models of their knowledge within each task domain and eventually to formulate one or more comprehensive models, including the information within each of the specific ones. The goal of formulating models that stretch across task domains has been given considerable homage in the abstract by developmental psychologists, but few such accounts have been stated at a sufficiently precise level to be meaningfully evaluated. It is believed that the formulation of detailed but encompassing models is crucial to understanding cognitive growth.

Journal ArticleDOI
TL;DR: In this paper, the authors explore the consequences of adopting a modern subjective view of probability for game theory and show that subjective game theory clarifies the important distinction between normative and positive theorizing about behavior in games, a distinction that is often lost in the search for "solution concepts".
Abstract: This paper explores some of the consequences of adopting a modern subjective view of probability for game theory. The consequences are substantial. The subjective view of probability clarifies the important distinction between normative and positive theorizing about behavior in games, a distinction that is often lost in the search for "solution concepts" which largely characterizes game theory since the work of von Neumann and Morgenstern. Many of the distinctions that appear important in conventional game theory (two-person versus n-person, zero-sum versus variable sum) appear unimportant in the subjective formulation. Other distinctions, such as single-play versus repetitive-play games, appear to be more important in the subjective formulation than in the conventional formulation.

Journal ArticleDOI
TL;DR: In this article, the steady state gas sensing characteristics of SnO2 gas sensors, exemplified by the Taguchi Gas Sensor (TGS), are comprehensively studied and a quantitative model of device operation is constructed which unifies the diverse properties of the TGS and other semiconductor sensors.


Journal ArticleDOI
TL;DR: A number of exactly soluble models in statistical mechanics can be produced with the use of spins interacting with nearest neighbors on a hierarchical lattice as discussed by the authors, and the free energy has a well-defined thermodynamic limit for a large class of discrete spin models on hierarchical lattices.
Abstract: A number of exactly soluble models in statistical mechanics can be produced with the use of spins interacting with nearest neighbors on a hierarchical lattice. A general definition and several examples of such lattices are given, and the topological properties of one of these, the "diamond" lattice, are discussed in detail. It is shown that the free energy has a well-defined thermodynamic limit for a large class of discrete spin models on hierarchical lattices.
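
For concreteness, the sketch below uses the commonly cited construction of the "diamond" hierarchical lattice, in which each bond is replaced by two parallel two-bond branches, to tabulate bond and site counts and to state the exact nearest-neighbour Ising coupling recursion obtained by decimating the interior spins. The construction and recursion are the standard ones and are given here as an assumption, not quoted from the paper.

```python
import math

def diamond_counts(levels):
    """Bond and site counts of the diamond hierarchical lattice, built by
    repeatedly replacing every bond with two parallel two-bond branches
    (the commonly used construction; assumed, not quoted from the paper)."""
    bonds, sites = 1, 2
    for _ in range(levels):
        sites += 2 * bonds      # each old bond gains two new interior sites
        bonds *= 4              # and becomes four bonds
    return bonds, sites

def ising_coupling_step(K):
    """Exact decimation of the two interior spins of one diamond for the
    nearest-neighbour Ising model: two bonds in series compose via
    tanh(K_eff) = tanh(K)**2, and the two parallel branches add."""
    return 2.0 * math.atanh(math.tanh(K) ** 2)

print([diamond_counts(n) for n in range(4)])   # (1, 2), (4, 4), (16, 12), (64, 44)
K = 1.0
for _ in range(5):
    K = ising_coupling_step(K)                  # recursion driving the exact solution
    print(round(K, 4))
```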

Journal ArticleDOI
TL;DR: Taylor et al., as discussed by the authors, investigated potential mediators for the saliency effect, made a case for the use of structural models in explanations of process, and found that schema-relevant recall, that is, information seen as representative of causal influence, is a plausible mediator.

Journal ArticleDOI
TL;DR: An iterative algorithm is proposed to calculate the eigenvectors when the rank of the correlation matrix is not large; it saves computation time and computer storage requirements, and it gains its efficiency from the fact that only a significant set of eigenvectors is retained at any stage of iteration.
Abstract: A set of images is modeled as a stochastic process and the Karhunen-Loeve expansion is applied to extract the feature images. Although the size of the correlation matrix for such a stochastic process is very large, we show how to calculate the eigenvectors when the rank of the correlation matrix is not large. We also propose an iterative algorithm to calculate the eigenvectors which saves computation time and computer storage requirements. This iterative algorithm gains its efficiency from the fact that only a significant set of eigenvectors is retained at any stage of iteration. Simulation results are also presented to verify these methods.
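
The small-rank computation described above is commonly carried out by eigendecomposing the small m x m Gram matrix of the m images rather than the huge pixel-by-pixel correlation matrix, then mapping its eigenvectors back. The sketch below shows that shortcut as an illustration of the idea; it is not the authors' exact procedure, and it omits the iterative refinement stage the abstract mentions.

```python
import numpy as np

def kl_basis_from_few_images(images):
    """Karhunen-Loeve (principal component) basis for a small set of images.

    images: array of shape (m, d), m images of d pixels each with m << d.
    Instead of the d x d correlation matrix, eigendecompose the m x m Gram
    matrix X X^T; each of its eigenvectors v maps to an eigenvector X^T v of
    X^T X with the same eigenvalue. Small-rank shortcut shown for illustration,
    not the paper's exact algorithm."""
    X = np.asarray(images, dtype=float)            # (m, d)
    gram = X @ X.T                                 # (m, m), small
    vals, vecs = np.linalg.eigh(gram)              # ascending eigenvalues
    order = np.argsort(vals)[::-1]
    vals, vecs = vals[order], vecs[:, order]
    keep = vals > 1e-12                            # retain only the significant ones
    basis = X.T @ vecs[:, keep]                    # (d, r) eigenvectors of X^T X
    basis /= np.linalg.norm(basis, axis=0)         # normalize the feature images
    return basis, vals[keep]

rng = np.random.default_rng(0)
imgs = rng.normal(size=(5, 1024))                  # 5 "images" of 1024 pixels
basis, variances = kl_basis_from_few_images(imgs)
print(basis.shape, variances.shape)                # (1024, 5) (5,)
```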