
Showing papers by "Carnegie Mellon University" published in 1985


Journal ArticleDOI
TL;DR: There is evidence consistent with both buffering and main effect models of social support, but each represents a different process through which social support may affect well-being.
Abstract: Examines whether the positive association between social support and well-being is attributable more to an overall beneficial effect of support (main- or direct-effect model) or to a process of support protecting persons from potentially adverse effects of stressful events (buffering model). The review of studies is organized according to (1) whether a measure assesses support structure (the existence of relationships) or function (the extent to which one's interpersonal relationships provide particular resources) and (2) the degree of specificity (vs globality) of the scale. Special attention is given to methodological characteristics that are requisite for a fair comparison of the models. It is concluded that there is evidence consistent with both models. Evidence for the buffering model is found when the social support measure assesses the perceived availability of interpersonal resources that are responsive to the needs elicited by stressful events. Evidence for a main effect model is found when the support measure assesses a person's degree of integration in a large social network. Both conceptualizations of social support are correct in some respects, but each represents a different process through which social support may affect well-being. Implications for theories of social support processes and for the design of preventive interventions are discussed.
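The contrast between the two models is often operationalized as a moderated regression of well-being on stress and support; the equation below is a standard illustration of that test, not notation taken from the article.

```latex
W = \beta_0 + \beta_1 S + \beta_2 P + \beta_3 (S \times P) + \varepsilon
```

Here $W$ is well-being, $S$ is stress, and $P$ is perceived support: a main (direct) effect corresponds to $\beta_2 \neq 0$ at all stress levels, while buffering corresponds to an interaction $\beta_3$ that weakens the stress slope when support is high.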

14,570 citations


01 Jan 1985
TL;DR: This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry.
Abstract: From the reviews: "This book offers a coherent treatment, at the graduate textbook level, of the field that has come to be known in the last decade or so as computational geometry...The book is well organized and lucidly written; a timely contribution by two founders of the field. It clearly demonstrates that computational geometry in the plane is now a fairly well-understood branch of computer science and mathematics. It also points the way to the solution of the more challenging problems in dimensions higher than two."

6,525 citations


Journal ArticleDOI
TL;DR: A scale measuring dispositional optimism, defined in terms of generalized outcome expectancies, was used in a longitudinal study of symptom reporting among a group of undergraduates; as predicted, subjects who initially reported being highly optimistic were subsequently less likely to report being bothered by symptoms.
Abstract: This article describes a scale measuring dispositional optimism, defined in terms of generalized outcome expectancies. Two preliminary studies assessed the scale's psychometric properties and its relationships with several other instruments. The scale was then used in a longitudinal study of symptom reporting among a group of undergraduates. Specifically, respondents were asked to complete three questionnaires 4 weeks before the end of a semester. Included in the questionnaire battery was the measure of optimism, a measure of private self-consciousness, and a 39-item physical symptom checklist. Subjects completed the same set of questionnaires again on the last day of class. Consistent with predictions, subjects who initially reported being highly optimistic were subsequently less likely to report being bothered by symptoms (even after correcting for initial symptom-report levels) than were subjects who initially reported being less optimistic. This effect tended to be stronger among persons high in private self-consciousness than among those lower in private self-consciousness. Discussion centers on other health related applications of the optimism scale, and the relationships between our theoretical orientation and several related theories.

6,104 citations


Journal ArticleDOI
TL;DR: A general parallel search method is described, based on statistical mechanics, and it is shown how it leads to a general learning rule for modifying the connection strengths so as to incorporate knowledge about a task domain in an efficient way.
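Assuming this entry is the Boltzmann machine learning paper, the learning rule referred to is conventionally written as a contrast between clamped and free-running pairwise statistics; the form below is the textbook statement, with learning rate $\eta$, rather than a quotation from the article.

```latex
\Delta w_{ij} = \eta \left( \langle s_i s_j \rangle_{\text{clamped}} - \langle s_i s_j \rangle_{\text{free}} \right)
```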

3,727 citations


Journal ArticleDOI
26 Apr 1985-Science
TL;DR: Computer tutors based on a set of pedagogical principles derived from the ACT theory of cognition have been developed for teaching students to do proofs in geometry and to write computer programs in the language LISP.
Abstract: Cognitive psychology, artificial intelligence, and computer technology have advanced to the point where it is feasible to build computer systems that are as effective as intelligent human tutors. Computer tutors based on a set of pedagogical principles derived from the ACT theory of cognition have been developed for teaching students to do proofs in geometry and to write computer programs in the language LISP.

3,092 citations


Book ChapterDOI
01 Jan 1985
TL;DR: The role of social support in protecting people from the pathogenic effects of stress has been investigated extensively, but inconsistencies among support measures make it difficult to compare studies and to determine why support operates as a stress buffer in some cases but not in others.
Abstract: In the last several years, we have been interested in the role social supports play in protecting people from the pathogenic effects of stress. By social supports, we mean the resources that are provided by other persons (cf. Cohen & Syme, 1985). Although others have investigated, and in some cases found evidence for, a "buffering" hypothesis (that social support protects persons from the pathogenic effects of stress but is relatively unimportant for unexposed individuals), there are difficulties in interpreting this literature. First, there are almost as many measures of social support as there are studies. Hence it is difficult to compare studies and to determine why support operates as a stress buffer in some cases but not in others. Second, in the vast majority of work, support measures are used without regard to their psychometric properties or their appropriateness for the question under study. For example, studies using measures assessing the structure of social networks (e.g., how many friends do you have?) are seldom distinguished from those addressing the functions that networks might serve (e.g., do you have someone you can talk to about personal problems?). In fact, in many cases, structural and functional items are thrown together into single support indices, resulting in scores that have little conceptual meaning.

2,200 citations


Proceedings ArticleDOI
25 Mar 1985
TL;DR: The use of multiple wide-angle sonar range measurements to map the surroundings of an autonomous mobile robot deals effectively with clutter, and can be used for motion planning and for extended landmark recognition.
Abstract: We describe the use of multiple wide-angle sonar range measurements to map the surroundings of an autonomous mobile robot. A sonar range reading provides information concerning empty and occupied volumes in a cone (subtending 30 degrees in our case) in front of the sensor. The reading is modelled as probability profiles projected onto a rasterized map, where somewhere occupied and everywhere empty areas are represented. Range measurements from multiple points of view (taken from multiple sensors on the robot, and from the same sensors after robot moves) are systematically integrated in the map. Overlapping empty volumes reinforce each other, and serve to condense the range of occupied volumes. The map definition improves as more readings are added. The final map shows regions probably occupied, probably unoccupied, and unknown areas. The method deals effectively with clutter, and can be used for motion planning and for extended landmark recognition. This system has been tested on the Neptune mobile robot at CMU.
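A minimal sketch of the kind of map update the abstract describes, assuming a simplified sonar model in which cells inside the cone and clearly short of the measured range are nudged toward "empty" and cells near the measured range toward "occupied". The grid size, cone width, and increment values are illustrative, not taken from the paper.

```python
import math
import numpy as np

def update_grid(grid, robot_xy, heading, measured_range,
                cone_deg=30.0, cell_size=0.1,
                empty_inc=-0.05, occ_inc=0.20):
    """Fold one sonar reading into a rasterized occupancy map.

    Cells inside the sensing cone and short of the measured range are
    nudged toward 'empty'; cells near the measured range are nudged
    toward 'occupied'.  All increments are illustrative, not calibrated.
    """
    rows, cols = grid.shape
    half_cone = math.radians(cone_deg / 2.0)
    rx, ry = robot_xy
    for r in range(rows):
        for c in range(cols):
            # Center of this cell in world coordinates.
            cx, cy = (c + 0.5) * cell_size, (r + 0.5) * cell_size
            dist = math.hypot(cx - rx, cy - ry)
            if dist > measured_range + cell_size:
                continue                      # beyond the reading: unknown
            bearing = math.atan2(cy - ry, cx - rx) - heading
            bearing = math.atan2(math.sin(bearing), math.cos(bearing))
            if abs(bearing) > half_cone:
                continue                      # outside the sonar cone
            if dist < measured_range - cell_size:
                grid[r, c] += empty_inc       # probably empty
            else:
                grid[r, c] += occ_inc         # probably occupied
    return grid

# Example: a 10 m x 10 m map, one 3.5 m reading taken facing along +x.
grid = np.zeros((100, 100))
update_grid(grid, robot_xy=(5.0, 5.0), heading=0.0, measured_range=3.5)
```

Repeated calls with readings from different robot positions accumulate in the same array, which is the sense in which overlapping evidence reinforces itself.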

1,911 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a method for analyzing a standard color image to determine the amount of interface (specular) and body (diffuse) reflection at each pixel, which is based upon a physical model of reflection which states that two distinct types of reflection occur, and that each type can be decomposed into a relative spectral distribution and a geometric scale factor.
Abstract: In computer vision, the goal of which is to identify objects and their positions by examining images, one of the key steps is computing the surface normal of the visible surface at each point (“pixel”) in the image. Many sources of information are studied, such as outlines of surfaces, intensity gradients, object motion, and color. This article presents a method for analyzing a standard color image to determine the amount of interface (“specular”) and body (“diffuse”) reflection at each pixel. The interface reflection represents the highlights from the original image, and the body reflection represents the original image with highlights removed. Such intrinsic images are of interest because the geometric properties of each type of reflection are simpler than the geometric properties of intensity in a black-and-white image. The method is based upon a physical model of reflection which states that two distinct types of reflection, interface and body reflection, occur, and that each type can be decomposed into a relative spectral distribution and a geometric scale factor. This model is far more general than typical models used in computer vision and computer graphics, and includes most such models as special cases. In addition, the model does not assume a point light source or uniform illumination distribution over the scene. The properties of tristimulus integration are used to derive a new model of pixel-value color distribution, and this model is exploited in an algorithm to derive the desired quantities. Suggestions are provided for extending the model to deal with diffuse illumination and for analyzing the two components of reflection.
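The physical model in question is the dichromatic reflection model, in which the light reflected from a point is the sum of an interface term and a body term, each factored into a wavelength-dependent spectral distribution and a purely geometric scale factor; in commonly used notation (not quoted from the article):

```latex
L(\lambda, i, e, g) = m_i(i, e, g)\, c_i(\lambda) + m_b(i, e, g)\, c_b(\lambda)
```

Here $\lambda$ is wavelength and $i$, $e$, $g$ are the photometric angles; the algorithm's task is to recover the two scale factors $m_i$ and $m_b$ at each pixel.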

1,347 citations


Journal ArticleDOI
TL;DR: The authors compare two theories of human rationality: procedural, bounded rationality from contemporary cognitive psychology and global, substantive rationality from economics, and conclude that the model predictions rest primarily on the auxiliary assumptions rather than deriving from the rationality principle.
Abstract: This article compares two theories of human rationality that have found application in political science: procedural, bounded rationality from contemporary cognitive psychology, and global, substantive rationality from economics. Using examples drawn from the recent literature of political science, it examines the relative roles played by the rationality principle and by auxiliary assumptions (e.g., assumptions about the content of actors' goals) in explaining human behavior in political contexts, and concludes that the model predictions rest primarily on the auxiliary assumptions rather than deriving from the rationality principle. The analysis implies that the principle of rationality, unless accompanied by extensive empirical research to identify the correct auxiliary assumptions, has little power to make valid predictions about political phenomena.

1,235 citations


Journal ArticleDOI
TL;DR: It is demonstrated that by evoking the permission schema it is possible to facilitate performance in Wason's selection paradigm for subjects who have had no experience with the specific content of the problems, and evidence is presented that evocation of a permission schema affects not only tasks requiring procedural knowledge, but also a linguistic rephrasing task requiring declarative knowledge.

1,228 citations


Journal ArticleDOI
TL;DR: The complexity of satisfiability and determination of truth in a particular finite structure are considered for different propositional linear temporal logics; it is shown that these problems are NP-complete for the logic with F and PSPACE-complete for the logics with F, X, with U, and with U, S, X operators.
Abstract: The complexity of satisfiability and determination of truth in a particular finite structure are considered for different propositional linear temporal logics. It is shown that these problems are NP-complete for the logic with F and are PSPACE-complete for the logics with F, X, with U, with U, S, X operators and for the extended logic with regular operators given by Wolper.
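For readers unfamiliar with the operator names, their standard linear-time semantics over a state sequence $\sigma = s_0 s_1 s_2 \ldots$ is summarized below; this is the conventional definition from the temporal-logic literature, not text from the paper.

```latex
\begin{aligned}
\sigma, i \models X\varphi &\iff \sigma, i{+}1 \models \varphi\\
\sigma, i \models F\varphi &\iff \exists j \ge i:\ \sigma, j \models \varphi\\
\sigma, i \models \varphi\,U\,\psi &\iff \exists j \ge i:\ \sigma, j \models \psi \ \text{and}\ \forall k,\ i \le k < j:\ \sigma, k \models \varphi\\
\sigma, i \models \varphi\,S\,\psi &\iff \exists j \le i:\ \sigma, j \models \psi \ \text{and}\ \forall k,\ j < k \le i:\ \sigma, k \models \varphi
\end{aligned}
```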

Journal ArticleDOI
TL;DR: A distributed model of information processing and memory is described and shows how the functional equivalent of abstract representations--prototypes, logogens, and even rules--can emerge from the superposition of traces of specific experiences, when the conditions are right for this to happen.
Abstract: We describe a distributed model of information processing and memory and apply it to the representation of general and specific information. The model consists of a large number of simple processing elements which send excitatory and inhibitory signals to each other via modifiable connections. Information processing is thought of as the process whereby patterns of activation are formed over the units in the model through their excitatory and inhibitory interactions. The memory trace of a processing event is the change or increment to the strengths of the interconnections that results from the processing event. The traces of separate events are superimposed on each other in the values of the connection strengths that result from the entire set of traces stored in the memory. The model is applied to a number of findings related to the question of whether we store abstract representations or an enumeration of specific experiences in memory. The model simulates the results of a number of important experiments which have been taken as evidence for the enumeration of specific experiences. At the same time, it shows how the functional equivalent of abstract representations--prototypes, logogens, and even rules--can emerge from the superposition of traces of specific experiences, when the conditions are right for this to happen. In essence, the model captures the structure present in a set of input patterns; thus, it behaves as though it had learned prototypes or rules, to the extent that the structure of the environment it has learned about can be captured by describing it in terms of these abstractions.
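A minimal sketch of the superposition idea, assuming a plain Hebbian autoassociator rather than the authors' full model; the pattern size, noise level, and learning rate are illustrative. Storing several noisy exemplars of one pattern in a single weight matrix lets recall from a partial cue settle toward the never-stored prototype.

```python
import numpy as np

def store(patterns, lr=0.1):
    """Superimpose traces: each pattern increments the same weight matrix."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += lr * np.outer(p, p)      # Hebbian increment for one event
    np.fill_diagonal(w, 0.0)          # no self-connections
    return w

def recall(w, cue, steps=5):
    """Settle toward a stored pattern from a noisy or partial cue."""
    s = cue.astype(float)
    for _ in range(steps):
        s = np.sign(w @ s)            # units driven by summed signals
        s[s == 0] = 1.0
    return s

# Three noisy exemplars of one +/-1 "prototype": the prototype is never
# stored explicitly, yet recall from a partial cue converges toward it.
rng = np.random.default_rng(0)
proto = np.sign(rng.standard_normal(32))
exemplars = np.array([np.where(rng.random(32) < 0.1, -proto, proto)
                      for _ in range(3)])
w = store(exemplars)
cue = proto.copy()
cue[:8] = 0.0                         # partial cue: first 8 units unknown
print(np.mean(recall(w, cue) == proto))
```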

Journal ArticleDOI
TL;DR: This paper presents a stereo matching algorithm using the dynamic programming technique that uses edge-delimited intervals as elements to be matched, and employs two searches: one is inter-scanline search for possible correspondences of connected edges in right and left images, and the other is intra-scanline search for correspondences of edge-delimited intervals on each scanline pair.
Abstract: This paper presents a stereo matching algorithm using the dynamic programming technique. The stereo matching problem, that is, obtaining a correspondence between right and left images, can be cast as a search problem. When a pair of stereo images is rectified, pairs of corresponding points can be searched for within the same scanlines. We call this search intra-scanline search. This intra-scanline search can be treated as the problem of finding a matching path on a two-dimensional (2D) search plane whose axes are the right and left scanlines. Vertically connected edges in the images provide consistency constraints across the 2D search planes. Inter-scanline search in a three-dimensional (3D) search space, which is a stack of the 2D search planes, is needed to utilize this constraint. Our stereo matching algorithm uses edge-delimited intervals as elements to be matched, and employs the above mentioned two searches: one is inter-scanline search for possible correspondences of connected edges in right and left images and the other is intra-scanline search for correspondences of edge-delimited intervals on each scanline pair. Dynamic programming is used for both searches which proceed simultaneously: the former supplies the consistency constraint to the latter while the latter supplies the matching score to the former. An interval-based similarity metric is used to compute the score. The algorithm has been tested with different types of images including urban aerial images, synthesized images, and block scenes, and its computational requirement has been discussed.
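A toy version of the intra-scanline search, assuming per-pixel matching with a fixed occlusion penalty instead of the paper's edge-delimited intervals and inter-scanline consistency constraint; the cost values and test rows are illustrative.

```python
import numpy as np

def match_scanline(left, right, occ=10.0):
    """Simplified intra-scanline stereo by dynamic programming.

    Aligns two 1-D intensity rows, allowing per-pixel matches or
    occlusions (skips), and returns a disparity for each matched left
    pixel.  This is a per-pixel toy version of the search the paper
    performs on edge-delimited intervals.
    """
    n, m = len(left), len(right)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, :] = occ * np.arange(m + 1)
    cost[:, 0] = occ * np.arange(n + 1)
    back = np.zeros((n + 1, m + 1), dtype=int)  # 0=match, 1=skip left, 2=skip right
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c_match = cost[i - 1, j - 1] + abs(float(left[i - 1]) - float(right[j - 1]))
            c_left = cost[i - 1, j] + occ
            c_right = cost[i, j - 1] + occ
            choices = (c_match, c_left, c_right)
            back[i, j] = int(np.argmin(choices))
            cost[i, j] = choices[back[i, j]]
    # Trace back to recover disparities for matched pixels.
    disparity = {}
    i, j = n, m
    while i > 0 and j > 0:
        if back[i, j] == 0:
            disparity[i - 1] = (i - 1) - (j - 1)
            i, j = i - 1, j - 1
        elif back[i, j] == 1:
            i -= 1
        else:
            j -= 1
    return cost[n, m], disparity

left = np.array([10, 10, 80, 80, 10, 10, 10])
right = np.array([10, 80, 80, 10, 10, 10, 10])   # same scene shifted by one pixel
print(match_scanline(left, right))
```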


Journal ArticleDOI
TL;DR: The formation of the CHILDES, the governance of the system, the nature of the database, the shape of the coding conventions, and the types of computer programs being developed are detailed.
Abstract: The study of language acquisition underwent a major revolution in the late 1950s as a result of the dissemination of technology permitting high-quality tape-recording of children in the family setting. This new technology led to major breakthroughs in the quality of both data and theory. The field is now at the threshold of a possible second major breakthrough stimulated by the dissemination of personal computing. Researchers are now able to transcribe tape-recorded data into computer files. With this new medium it is easy to conduct global searches for word combinations across collections of files. It is also possible to enter new codings of the basic text line. Because of the speed and accuracy with which computer files can be copied, it is now much easier to share data between researchers. To foster this sharing of computerized data, a group of child language researchers has established the Child Language Data Exchange System (CHILDES). This article details the formation of the CHILDES, the governance of the system, the nature of the database, the shape of the coding conventions, and the types of computer programs being developed.

Journal ArticleDOI
TL;DR: In this article, a statistical model for causal inference is used to critique the discussions of other writers on causation and causal inference, including selected philosophers, medical researchers, statisticians, econometricians, and proponents of causal modelling.
Abstract: Problems involving causal inference have dogged at the heels of Statistics since its earliest days. Correlation does not imply causation and yet causal conclusions drawn from a carefully designed experiment are often valid. What can a statistical model say about causation? This question is addressed by using a particular model for causal inference (Rubin, 1974; Holland and Rubin, 1983) to critique the discussions of other writers on causation and causal inference. These include selected philosophers, medical researchers, statisticians, econometricians, and proponents of causal modelling.
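The model referred to (Rubin, 1974; Holland & Rubin, 1983) is ordinarily stated in potential-outcome notation; a brief reminder in standard form, not quoted from the article:

```latex
\tau_u = Y_t(u) - Y_c(u), \qquad T = E\,[Y_t] - E\,[Y_c]
```

Only one of $Y_t(u)$ and $Y_c(u)$ can be observed for any unit $u$, so inference targets the average effect $T$ rather than the unit-level effects $\tau_u$.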

Book
30 May 1985
TL;DR: This book, based on the doctoral dissertations of the two authors, examines several aspects of manipulating objects, arguing that better industrial robots can be built by understanding the principles discussed.
Abstract: Robot Hands and the Mechanics of Manipulation explores several aspects of the basic mechanics of grasping, pushing, and in general, manipulating objects. It makes a significant contribution to the understanding of the motion of objects in the presence of friction, and to the development of fine position and force controlled articulated hands capable of doing useful work. In the book's first section, kinematic and force analysis is applied to the problem of designing and controlling articulated hands for manipulation. The analysis of the interface between fingertip and grasped object then becomes the basis for the specification of acceptable hand kinematics. A practical result of this work has been the development of the Stanford/JPL robot hand - a tendon-actuated, 9 degree-of-freedom hand which is being used at various laboratories around the country to study the associated control and programming problems aimed at improving robot dexterity. Chapters in the second section study the characteristics of object motion in the presence of friction. Systematic exploration of the mechanics of pushing leads to a model of how an object moves under the combined influence of the manipulator and the forces of sliding friction. The results of these analyses are then used to demonstrate verification and automatic planning of some simple manipulator operations. Matthew T. Mason is Assistant Professor of Computer Science at Carnegie-Mellon University, and coeditor of Robot Motion (MIT Press 1983). J. Kenneth Salisbury, Jr. is a Research Scientist at MIT's Artificial Intelligence Laboratory, and president of Salisbury Robotics, Inc. Robot Hands and the Mechanics of Manipulation is 14th in the Artificial Intelligence Series, edited by Patrick Henry Winston and Michael Brady.


Journal ArticleDOI
TL;DR: This paper analyzes why various isomorphic versions of the Tower of Hanoi problem differ greatly in difficulty, seeking the causes of these differences in features of the problem representation.

Journal ArticleDOI
TL;DR: In this article, the paradox of not voting is examined in a model where voters have uncertainty about the preferences and costs of other voters, and it is shown that, when such uncertainty is present, only voters with negligible or negative net voting costs participate when the electorate is large.
Abstract: The paradox of not voting is examined in a model where voters have uncertainty about the preferences and costs of other voters. In game-theoretic models of voter participation under complete information, equilibrium outcomes can have substantial turnout even when voting costs are relatively high. In contrast, when uncertainty about preferences and costs is present, only voters with negligible or negative net voting costs participate when the electorate is large.
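The participation logic behind such models can be summarized with the standard pivotal-voter calculus (stated here in generic notation, not the paper's): a citizen with benefit $B$ from her preferred outcome and net voting cost $c$ votes when

```latex
p \cdot B \ge c
```

where $p$ is the probability that her vote is decisive; as the electorate grows, $p$ shrinks toward zero, leaving only citizens with $c \le 0$ willing to turn out.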

Journal ArticleDOI
TL;DR: This paper provides a detailed theoretical account of the mental rotation of individuals of low and high spatial ability as they solve problems taken from psychometric tests; the theory is instantiated as two related computer simulation models that not only solve the problems but also match the response times for the two groups.
Abstract: Strategic differences in spatial tasks can be explained in terms of different cognitive coordinate systems that subjects adopt. The strategy of mental rotation (of the type used in most mental rotation experiments and in some psychometric tests of spatial ability) uses a coordinate system defined by the standard axes of our visual world (i.e., horizontal, vertical, and depth axes). Within this strategy, rotations are performed around one or more of the standard axes. The paper provides a detailed theoretical account of the mental rotation of individuals of low and high spatial ability as they solve problems taken from psychometric tests. The theory is instantiated as two related computer simulation models that not only solve the problems, but also match the response times for the two groups. The simulation models contain modularized units of procedural knowledge called productions that select and execute the appropriate actions at each knowledge state. Small localized differences between the two models simulate the large quantitative and qualitative differences between the two groups of subjects.

Journal ArticleDOI
TL;DR: In this paper, a revised version of the Self-Consciousness Scale is presented, along with information regarding its psychometric properties, and it is suggested that the revised scale be used whenever data are collected from populations other than college students.
Abstract: Recent research suggests that some of the wording of the original Self-Consciousness Scale is too abstract for easy understanding by research participants who are not college students. This article presents a revised version of that scale, along with information regarding its psychometric properties. In general, the psychometric properties of the revised scale compare quite favorably to those of the original scale. It is suggested that the revised Self-Consciousness Scale be used whenever data are collected from populations other than college students.

Journal ArticleDOI
01 Mar 1985
TL;DR: A learning technique is described in which the robot develops a global model and a network of places, which is useful for navigation in a finite, pre-learned domain such as a house, office, or factory.
Abstract: A navigation system is described for a mobile robot equipped with a rotating ultrasonic range sensor. This navigation system is based on a dynamically maintained model of the local environment, called the composite local model. The composite local model integrates information from the rotating range sensor, the robot's touch sensor, and a pre-learned global model as the robot moves through its environment. Techniques are described for constructing a line segment description of the most recent sensor scan (the sensor model), and for integrating such descriptions to build up a model of the immediate environment (the composite local model). The estimated position of the robot is corrected by the difference in position between observed sensor signals and the corresponding symbols in the composite local model. A learning technique is described in which the robot develops a global model and a network of places. The network of places is used in global path planning, while the segments are recalled from the global model to assist in local path execution. This system is useful for navigation in a finite, pre-learned domain such as a house, office, or factory.

Book
01 Jul 1985
TL;DR: An introduction to rule-based programming of expert systems in the OPS5 production-system language (The Addison-Wesley Series in Artificial Intelligence).

Journal ArticleDOI
TL;DR: Given the layout of an IC, a fault model and a ranked fault list can be automatically generated which take into account the technology, layout, and process characteristics.
Abstract: Inductive Fault Analysis (IFA) is a systematic procedure to predict all the faults that are likely to occur in an MOS integrated circuit or subcircuit. The three major steps of the IFA procedure are: (1) generation of physical defects using statistical data from the fabrication process; (2) extraction of circuit-level faults caused by these defects; and (3) classification of fault types and ranking of faults based on their likelihood of occurrence. Hence, given the layout of an IC, a fault model and a ranked fault list can be automatically generated which take into account the technology, layout, and process characteristics. The IFA procedure is illustrated by its application to an example circuit. The results from this sample led to some very interesting observations regarding nonclassical faults.

Journal ArticleDOI
TL;DR: A general framework for analyzing flexibility in chemical process design is presented in this paper, which measures the size of the parameter space over which feasible steady-state operation of the plant can be attained by proper adjustment of the control variables.
Abstract: One of the key components of chemical plant operability is flexibility—the ability to operate over a range of conditions while satisfying performance specifications. A general framework for analyzing flexibility in chemical process design is presented in this paper. A quantitative index is proposed which measures the size of the parameter space over which feasible steady-state operation of the plant can be attained by proper adjustment of the control variables. The mathematical formulation of this index and a detailed study of its properties are presented. Application of the flexibility in design is illustrated with an example.
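In the now-standard formulation of such an index (stated from the general flexibility-analysis literature rather than quoted from the paper), a design $d$ with control variables $z$, uncertain parameters $\theta$, and constraints $f_j(d, z, \theta) \le 0$ is assessed over the scaled parameter set $T(\delta) = \{\theta : \theta^N - \delta\,\Delta\theta^- \le \theta \le \theta^N + \delta\,\Delta\theta^+\}$, and the index is the largest scaling for which feasible operation can always be recovered by adjusting $z$:

```latex
F = \max\ \delta
\quad \text{s.t.} \quad
\max_{\theta \in T(\delta)}\ \min_{z}\ \max_{j}\ f_j(d, z, \theta) \le 0
```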

Proceedings Article
01 Dec 1985
TL;DR: A multiprocessor real-time system simulator is constructed with which a number of well-known scheduling algorithms such as Shortest Process Time (SPT), Deadline, Shortest Slack Time, FIFO, and a fixed priority scheduler are measured, with respect to the resulting total system values.
Abstract: Process scheduling in real-time systems has almost invariably used one or more of three algorithms: fixed priority, FIFO, or round robin. The reasons for these choices are simplicity and speed in the operating system, but the cost to the system in terms of reliability and maintainability have not generally been assessed. This paper originates from the notion that the primary distinguishing characteristic of a real-time system is the concept that completion of a process or a set of processes has a value to the system which can be expressed as a function of time. This notion is described in terms of a time-driven scheduling model for real-time operating systems and provides a tool for measuring the effectiveness of most of the currently used process schedulers in real-time systems. Applying this model, we have constructed a multiprocessor real-time system simulator with which we measure a number of well-known scheduling algorithms such as Shortest Process Time (SPT), Deadline, Shortest Slack Time, FIFO, and a fixed priority scheduler, with respect to the resulting total system values. This approach to measuring the process scheduling effectiveness is a first step in our longer term effort to produce a scheduler which will explicitly schedule real-time processes in such a way that their execution times maximize their collective value to the system, either in a shared memory multiprocessing environment or in multiple nodes of a distributed processing environment.
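A toy illustration of the time-value idea, assuming step-shaped value functions and a greedy scheduler that always runs the ready job with the highest obtainable value per unit of remaining work; this is a sketch of the concept, not the paper's simulator or any of the measured algorithms.

```python
# Each job's value is a function of its completion time; a greedy scheduler
# picks the runnable job with the highest value density at each step.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Job:
    name: str
    remaining: float                      # remaining execution time
    value: Callable[[float], float]       # value earned if completed at time t

def step_value(v: float, deadline: float) -> Callable[[float], float]:
    """Step-shaped value function: full value before the deadline, zero after."""
    return lambda t: v if t <= deadline else 0.0

def run_greedy(jobs: List[Job], dt: float = 1.0) -> float:
    t, total = 0.0, 0.0
    jobs = [Job(j.name, j.remaining, j.value) for j in jobs]
    while jobs:
        # Value density: value obtainable if run to completion now, per unit time.
        jobs.sort(key=lambda j: j.value(t + j.remaining) / j.remaining, reverse=True)
        job = jobs[0]
        run = min(dt, job.remaining)
        t += run
        job.remaining -= run
        if job.remaining <= 1e-9:
            total += job.value(t)
            jobs.pop(0)
    return total

jobs = [Job("A", 3.0, step_value(10.0, 4.0)),
        Job("B", 1.0, step_value(6.0, 2.0))]
print(run_greedy(jobs))   # B first earns 6 + 10; A first would forfeit B's value
```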

Posted Content
TL;DR: In this paper, the authors report on three psychometric scaling studies, summarized in Table 1. In each study, participants rated a given set of hazards on a range of risk characteristics and indicated the degree of risk reduction and regulation they desired.
Abstract: In this chapter, we report on three psychometric scaling studies, summarized in Table 1. In each study, participants rated a given set of hazards on a range of risk characteristics and indicated the degree of risk reduction and regulation they desired. Based on this data, we explore the relationships among risk characteristics and a smaller number of dimensions (factors) derived from them. We also relate risk characteristics to people's perception of risk and their desire for risk reduction and regulation. Our work builds on and extends earlier studies with a smaller number of hazards and risk characteristics (Fischhoff et al., 1978).

Journal ArticleDOI
TL;DR: A new conceptual framework for the convexification of discrete optimization problems, and a general technique for obtaining approximations to the convex hull of the feasible set, are discussed.
Abstract: We discuss a new conceptual framework for the convexification of discrete optimization problems, and a general technique for obtaining approximations to the convex hull of the feasible set. The concepts come from disjunctive programming and the key tool is a description of the convex hull of a union of polyhedra in terms of a higher dimensional polyhedron. Although this description was known for several years, only recently was it shown by Jeroslow and Lowe to yield improved representations of discrete optimization problems. We express the feasible set of a discrete optimization problem as the intersection (conjunction) of unions of polyhedra, and define an operation that takes one such expression into another, equivalent one, with fewer conjuncts. We then introduce a class of relaxations based on replacing each conjunct (union of polyhedra) by its convex hull. The strength of the relaxations increases as the number of conjuncts decreases, and the class of relaxations forms a hierarchy that spans the spec...
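The higher-dimensional description alluded to is, in its standard form for nonempty bounded polyhedra $P_i = \{x : A_i x \le b_i\}$, the lifted set below, whose projection onto the $x$ variables is $\mathrm{conv}\bigl(\bigcup_{i=1}^{k} P_i\bigr)$; the statement follows the disjunctive-programming literature rather than being quoted from the article.

```latex
\Bigl\{ (x, x^1, \ldots, x^k, \lambda) \;:\;
x = \sum_{i=1}^{k} x^i,\quad
A_i x^i \le \lambda_i b_i \ (i = 1, \ldots, k),\quad
\sum_{i=1}^{k} \lambda_i = 1,\quad \lambda \ge 0 \Bigr\}
```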

Journal ArticleDOI
TL;DR: In this article, the authors describe the organization of a rule-based system, SPAM, that uses map and domain-specific knowledge to interpret airport scenes, and the results of the system's analysis are characterized by the labeling of individual regions in the image and the collection of these regions into consistent interpretations of the major components of an airport model.
Abstract: In this paper, we describe the organization of a rule-based system, SPAM, that uses map and domain-specific knowledge to interpret airport scenes. This research investigates the use of a rule-based system for the control of image processing and interpretation of results with respect to a world model, as well as the representation of the world model within an image/map database. We present results on the interpretation of a high-resolution airport scene where the image segmentation has been performed by a human, and by a region-based image segmentation program. The results of the system's analysis are characterized by the labeling of individual regions in the image and the collection of these regions into consistent interpretations of the major components of an airport model. These interpretations are ranked on the basis of their overall spatial and structural consistency. Some evaluations based on the results from three evolutionary versions of SPAM are presented.