
Showing papers in "Psychological Review in 1988"


Journal ArticleDOI
TL;DR: In this article, the authors present a research-based model that accounts for these patterns in terms of underlying psychological processes, and place the model in its broadest context and examine its implications for our understanding of motivational and personality processes.
Abstract: Past work has documented and described major patterns of adaptive and maladaptive behavior: the mastery-oriented and the helpless patterns. In this article, we present a research-based model that accounts for these patterns in terms of underlying psychological processes. The model specifies how individuals' implicit theories orient them toward particular goals and how these goals set up the different patterns. Indeed, we show how each feature (cognitive, affective, and behavioral) of the adaptive and maladaptive patterns can be seen to follow directly from different goals. We then examine the generality of the model and use it to illuminate phenomena in a wide variety of domains. Finally, we place the model in its broadest context and examine its implications for our understanding of motivational and personality processes. The task for investigators of motivation and personality is to identify major patterns of behavior and link them to underlying psychological processes. In this article we (a) describe a research-based model that accounts for major patterns of behavior, (b) examine the generality of this model—its utility for understanding domains beyond the ones in which it was originally developed, and (c) explore the broader implications of the model for motivational and personality processes.

8,588 citations


Journal ArticleDOI
TL;DR: This chapter discusses data concerning the time course of word identification in a discourse context and a simulation of arithmetic word-problem understanding provides a plausible account for some well-known phenomena.
Abstract: Publisher Summary This chapter discusses data concerning the time course of word identification in a discourse context. A simulation of arithmetic word-problem understanding provides a plausible account for some well-known phenomena. The current theories use representations with several mutually constraining layers. There is typically a linguistic level of representation, conceptual levels to represent both the local and global meaning and structure of a text, and a level at which the text itself has lost its individuality and its information content. Knowledge provides part of the context within which a discourse is interpreted. The integration phase is the price the model pays for the necessary flexibility in the construction process.

3,650 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a theory in which automatization is construed as the acquisition of a domain-specific knowledge base, formed of separate representations, instances, of each exposure to the task.
Abstract: This article presents a theory in which automatization is construed as the acquisition of a domain-specific knowledge base, formed of separate representations, instances, of each exposure to the task. Processing is considered automatic if it relies on retrieval of stored instances, which will occur only after practice in a consistent environment. Practice is important because it increases the amount retrieved and the speed of retrieval; consistency is important because it ensures that the retrieved instances will be useful. The theory accounts quantitatively for the power-function speed-up and predicts a power-function reduction in the standard deviation that is constrained to have the same exponent as the power function for the speed-up. The theory accounts for qualitative properties as well, explaining how some may disappear and others appear with practice. More generally, it provides an alternative to the modal view of automaticity, arguing that novice performance is limited by a lack of knowledge rather than a scarcity of resources. The focus on learning avoids many problems with the modal view that stem from its focus on resource limitations.
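To make the race-among-instances idea concrete, here is a minimal simulation sketch: each stored instance contributes an independent retrieval time, and the response is driven by whichever instance finishes first. The Weibull retrieval-time distribution and all parameter values are illustrative assumptions, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)

def retrieval_race(n_instances, n_trials=20000, shape=2.0, scale=1.0):
    """Instance-theory-style race: response time on each trial is the minimum
    of n_instances independent retrieval times (one per stored instance)."""
    # Illustrative assumption: retrieval times are Weibull-distributed.
    times = scale * rng.weibull(shape, size=(n_trials, n_instances))
    winners = times.min(axis=1)   # the first instance retrieved wins the race
    return winners.mean(), winners.std()

for n in (1, 2, 4, 8, 16, 32):
    mean_rt, sd_rt = retrieval_race(n)
    print(f"practice = {n:2d} instances  mean RT = {mean_rt:.3f}  SD = {sd_rt:.3f}")
# Both the mean and the SD of the winning time shrink as roughly n**(-1/shape),
# i.e., power functions of practice that share a common exponent, matching the
# qualitative prediction described in the abstract.
```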

3,222 citations


Journal ArticleDOI
TL;DR: The results of a series of search experiments are interpreted as evidence that focused attention to single items or to groups is required to reduce background activity when the Weber fraction distinguishing the pooled feature activity with displays containing a target and with displays containing only distractors is too small to allow reliable discrimination.
Abstract: In this article we review some new evidence relating to early visual processing and propose an explanatory framework. A series of search experiments tested detection of targets distinguished from the distractors by differences on a single dimension. Our aim was to use the pattern of search latencies to infer which features are coded automatically in early vision. For each of 12 different dimensions, one or more pairs of contrasting stimuli were tested. Each member of a pair played the role of target in one condition and the role of distractor in the other condition. Many pairs gave rise to a marked asymmetry in search latencies, such that one stimulus in the pair was detected either through parallel processing or with small increases in latency as display size increased, whereas the other gave search functions that increased much more steeply. Targets defined by larger values on the quantitative dimensions of length, number, and contrast, by line curvature, by misaligned orientation, and by values that deviated from a standard or prototypical color or shape were detected easily, whereas targets defined by smaller values on the quantitative dimensions, by straightness, by frame-aligned orientation, and by prototypical colors or shapes required slow and apparently serial search. These values appear to be coded by default, as the absence of the contrasting values. We found no feature of line arrangements that allowed automatic, preattentive detection; nor did connectedness or containment—the two examples of topological features that we tested. We interpret the results as evidence that focused attention to single items or to groups is required to reduce background activity when the Weber fraction distinguishing the pooled feature activity with displays containing a target and with displays containing only distractors is too small to allow reliable discrimination.
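The contrast between flat ("pop-out") and steep search functions, and the pooled-activity argument in the final sentence, can be written schematically as follows; the symbols and the additive-activity assumption are ours, used only to illustrate the reasoning in the abstract.

```latex
% Search latency as a function of display size N (a, b empirical constants):
%   pop-out targets:       RT(N) \approx a + bN with b near zero
%   serial-search targets: RT(N) \approx a + bN with a steep slope b
\mathrm{RT}(N) = a + b\,N
% Schematic pooled-activity account of the asymmetries: if a target adds feature
% activity t and each distractor adds d, then with N items the relative difference
% between target-present and distractor-only displays (the Weber fraction) is
\frac{\bigl[(N-1)\,d + t\bigr] - N\,d}{N\,d} \;=\; \frac{t-d}{N\,d},
% which shrinks as N grows; once it is too small for reliable discrimination,
% attention must be focused on items or groups to reduce the pooled background.
```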

2,240 citations


Journal ArticleDOI
TL;DR: This article developed a hierarchy of models in which the tradeoff between attributes is contingent on the nature of the response, and applied this theory to the analysis of various compatibility effects, including the choice-matching discrepancy and the preference-reversal phenomenon.
Abstract: Preference can be inferred from direct choice between options or from a matching procedure in which the decision maker adjusts one option to match another. Studies of preferences between two-dimensional options (e.g., public policies, job applicants, benefit plans) show that the more prominent dimension looms larger in choice than in matching. Thus, choice is more lexicographic than matching. This finding is viewed as an instance of a general principle of compatibility: the weighting of inputs is enhanced by their compatibility with the output. To account for such effects, we develop a hierarchy of models in which the tradeoff between attributes is contingent on the nature of the response. The simplest theory of this type, called the contingent weighting model, is applied to the analysis of various compatibility effects, including the choice-matching discrepancy and the preference-reversal phenomenon. These results raise both conceptual and practical questions concerning the nature, the meaning, and the assessment of preference.
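As a rough illustration of the contingent weighting idea, the simplest model of this type can be sketched as a weighted sum whose weights depend on the response mode; the notation below is ours and only schematizes the relation described in the abstract.

```latex
% Schematic contingent-weighting evaluation of a two-attribute option
% x = (x_1, x_2); the notation is illustrative, not the authors' own.
V_r(x) \;=\; \beta_{1,r}\,x_1 + \beta_{2,r}\,x_2 ,
% where the attribute weights depend on the response mode r (choice vs. matching).
% With attribute 1 as the prominent dimension, the compatibility principle implies
\frac{\beta_{1,\mathrm{choice}}}{\beta_{2,\mathrm{choice}}}
  \;>\;
\frac{\beta_{1,\mathrm{matching}}}{\beta_{2,\mathrm{matching}}} ,
% i.e., the prominent dimension looms larger in choice than in matching.
```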

1,646 citations


Journal ArticleDOI
TL;DR: The present conceptual framework provides insights into principles of motor performance, and it links the study of physical action to research on sensation, perception, and cognition, where psychologists have been concerned for some time about the degree to which mental processes incorporate rational and normative rules.
Abstract: A stochastic optimized-submovement model is proposed for Fitts' law, the classic logarithmic tradeoff between the duration and spatial precision of rapid aimed movements. According to the model, an aimed movement toward a specified target region involves a primary submovement and an optional secondary corrective submovement. The submovements are assumed to be programmed such that they minimize average total movement time while maintaining a high frequency of target hits. The programming process achieves this minimization by optimally adjusting the average magnitudes and durations of noisy neuromotor force pulses used to generate the submovements. Numerous results from the literature on human motor performance may be explained in these terms. Two new experiments on rapid wrist rotations yield additional support for the stochastic optimized-submovement model. Experiment 1 revealed that the mean durations of primary submovements and of secondary submovements, not just average total movement times, conform to a square-root approximation of Fitts' law derived from the model. Also, the spatial endpoints of primary submovements have standard deviations that increase linearly with average primary-submovement velocity, and the average primary-submovement velocity influences the relative frequencies of secondary submovements, as predicted by the model. During Experiment 2, these results were replicated and extended under conditions in which subjects made movements without concurrent visual feedback. This replication suggests that submovement optimization may be a pervasive property of movement production. The present conceptual framework provides insights into principles of motor performance, and it links the study of physical action to research on sensation, perception, and cognition, where psychologists have been concerned for some time about the degree to which mental processes incorporate rational and normative rules.
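For reference, the two tradeoff forms mentioned above can be written as follows, where D is the distance to the target and W its width; the constants are empirical, and the square-root expression is only the approximate form the abstract attributes to the model.

```latex
% Classic logarithmic form of Fitts' law for an aimed movement of distance D
% to a target of width W (a, b are empirical constants):
T \;=\; a + b \,\log_2\!\left(\frac{2D}{W}\right)
% Square-root approximation that the abstract attributes to the stochastic
% optimized-submovement model (a', b' are again empirical constants):
T \;\approx\; a' + b'\,\sqrt{\frac{D}{W}}
```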

1,361 citations


Book ChapterDOI
TL;DR: In this article, a thought experiment is offered which analyses how a system as a whole can correct errors of hypothesis testing in a fluctuating environment when none of the system's components, taken in isolation, even knows that an error has occurred.
Abstract: This article provides a self-contained introduction to my work from a recent perspective. A thought experiment is offered which analyses how a system as a whole can correct errors of hypothesis testing in a fluctuating environment when none of the system’s components, taken in isolation, even knows that an error has occurred. This theme is of general philosophical interest: How can intelligence or knowledge be ascribed to a system as a whole but not to its parts? How can an organism’s adaptive mechanisms be stable enough to resist environmental fluctuations which do not alter its behavioral success, but plastic enough to rapidly change in response to environmental demands that do alter its behavioral success? To answer such questions, we must identify the functional level on which a system’s behavioral success is defined.

1,195 citations


Journal ArticleDOI
TL;DR: The multiple-trace simulation model, MINERVA 2, was applied to a number of phenomena found in experiments on relative and absolute judgments of frequency, and forced-choice and yes-no recognition memory.
Abstract: The multiple-trace simulation model, MINERVA 2, was applied to a number of phenomena found in experiments on relative and absolute judgments of frequency, and forced-choice and yes-no recognition memory. How the basic model deals with effects of repetition, forgetting, list length, orientation task, selective retrieval, and similarity and how a slightly modified version accounts for effects of contextual variability on frequency judgments were shown. Two new experiments on similarity and recognition memory were presented, together with appropriate simulations; attempts to modify the model to deal with additional phenomena were also described. Questions related to the representation of frequency are addressed, and the model is evaluated and compared with related models of frequency judgments and recognition memory.
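A minimal sketch of the multiple-trace mechanism, assuming the standard MINERVA 2 conventions (feature-vector traces, similarity-cubed activation, and echo intensity as summed activation); the toy stimuli and the learning-rate parameter are illustrative and are not values from the simulations reported in the article.

```python
import numpy as np

rng = np.random.default_rng(1)

def similarity(probe, trace):
    """MINERVA 2-style similarity: dot product normalized by the number of
    features that are nonzero in either the probe or the trace."""
    relevant = np.count_nonzero((probe != 0) | (trace != 0))
    return float(probe @ trace) / max(relevant, 1)

def echo_intensity(probe, memory):
    """Echo intensity: sum of cubed similarities of the probe to every trace.
    Cubing preserves sign while letting close matches dominate."""
    return sum(similarity(probe, t) ** 3 for t in memory)

n_features = 40
item = rng.choice([-1, 1], size=n_features)   # a studied item
lure = rng.choice([-1, 1], size=n_features)   # an unstudied item

def encode(stimulus, copies, learning_rate=0.7):
    """Store one (possibly degraded) trace per presentation: repetition adds traces."""
    keep = rng.random((copies, n_features)) < learning_rate
    return list(np.where(keep, stimulus, 0))

memory = encode(item, copies=3)               # the item was studied three times
print("old item echo intensity :", round(echo_intensity(item, memory), 3))
print("new lure echo intensity :", round(echo_intensity(lure, memory), 3))
# Higher intensity for the studied item supports old/new recognition, and
# intensity grows with the number of stored traces, supporting frequency judgments.
```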

1,099 citations


Journal ArticleDOI
TL;DR: A real-time neural network model, called the vector-integration-to-endpoint (VITE) model, is developed and used to simulate quantitatively behavioral and neural data about planned and passive arm movements to demonstrate invariant properties of arm movements.
Abstract: A real-time neural network model, called the vector-integration-to-endpoint (VITE) model, is developed and used to simulate quantitatively behavioral and neural data about planned and passive arm movements. Invariants of arm movements emerge through network interactions rather than through an explicitly precomputed trajectory. Motor planning occurs in the form of a target position command (TPC), which specifies where the arm intends to move, and an independently controlled GO command, which specifies the movement's overall speed. Automatic processes convert this information into an arm trajectory with invariant properties. These automatic processes include computation of a present position command (PPC) and a difference vector (DV). The DV is the difference between the PPC and the TPC at any time. The PPC is gradually updated by integrating the DV through time. The GO signal multiplies the DV before it is integrated by the PPC. The PPC generates an outflow movement command to its target muscle groups. Opponent interactions regulate the PPCs to agonist and antagonist muscle groups. This system generates synchronous movements across synergetic muscles by automatically compensating for the different total contractions that each muscle group must undergo. Quantitative simulations are provided of Woodworth's law, of the speed-accuracy trade-off known as Fitts's law, of isotonic arm-movement properties before and after deafferentation, of synchronous and compensatory "central-error-correction" properties of isometric contractions, of velocity amplification during target switching, of velocity profile invariance and asymmetry, of the changes in velocity profile asymmetry at higher movement speeds, of the automatic compensation for staggered onset times of synergetic muscles, of vector cell properties in precentral motor cortex, of the inverse relation between movement duration and peak velocity, and of peak acceleration as a function of movement amplitude and duration. It is shown that TPC, PPC, and DV computations are needed to actively modulate, or gate, the learning of associative maps between TPCs of different modalities, such as between the eye-head system and the hand-arm system. By using such an associative map, looking at an object can activate a TPC of the hand-arm system, as Piaget noted. Then a VITE circuit can translate this TPC into an invariant movement trajectory. An auxiliary circuit, called the Passive Update of Position (PUP) model, is described for using inflow signals to update the PPC during passive arm movements owing to external forces. Other uses of outflow and inflow signals are also noted, such as for adaptive linearization of a nonlinear muscle plant, and sequential readout of TPCs during a serial plan, as in reaching and grasping. Comparisons are made with other models of motor control, such as the mass-spring and minimum-jerk models.
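The core VITE loop described above (DV = TPC - PPC, with the PPC integrating the GO-gated DV) can be sketched as follows; the linear-ramp GO signal and all numerical values are illustrative assumptions, not the functional forms used in the article.

```python
import numpy as np

def vite_trajectory(tpc=10.0, ppc0=0.0, t_end=1.0, dt=0.001, go_gain=25.0):
    """Minimal sketch of the VITE update loop: DV = TPC - PPC, and the PPC
    integrates the GO-gated DV over time to generate the outflow command."""
    steps = int(t_end / dt)
    ppc = ppc0
    positions, velocities = [], []
    for i in range(steps):
        t = i * dt
        go = go_gain * t              # illustrative, slowly ramping GO signal
        dv = tpc - ppc                # difference vector
        velocity = go * dv            # GO-gated outflow rate
        ppc += velocity * dt          # present position command integrates DV
        positions.append(ppc)
        velocities.append(velocity)
    return np.array(positions), np.array(velocities)

pos, vel = vite_trajectory()
print(f"final position command: {pos[-1]:.2f} (target = 10.00)")
print(f"peak velocity reached at {vel.argmax() * 0.001:.3f} s of a 1.000 s movement")
# The ramping GO signal multiplied by a shrinking DV yields a single-peaked,
# roughly bell-shaped velocity profile, one of the invariants discussed above.
```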

769 citations


Journal ArticleDOI
TL;DR: The retrieval theory is applied to priming phenomena and is shown to be capable of explaining the same empirical findings as spreading activation theories.
Abstract: In the retrieval theory, a prime and its target combine to form a compound cue, and the familiarity of this compound cue is determined by the strengths of connections between the cue and items in memory. The familiarity of the compound cue is assessed by direct access or by parallel comparisons to all items in memory (depending on the way the theory is implemented), and it is assumed that the greater the familiarity, the faster the response time (specific models for latency assumptions are described below). In this article, the retrieval theory is applied to priming phenomena and is shown to be capable of explaining the same empirical findings as spreading activation theories. Two new experiments are also presented in which data are successfully predicted by the retrieval model but that require modification of current models of spreading activation. In the latter part of this article, implementation of the compound cue mechanism for priming within several different models is evaluated.
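One common way to write the compound-cue familiarity is a SAM-style multiplicative rule; this is only one of the implementations the abstract alludes to, and the notation is ours.

```latex
% One standard (SAM-style) implementation of compound-cue familiarity; other
% implementations mentioned in the abstract combine strengths differently.
% S(x, i) is the strength of connection between cue x and memory item i.
F(\mathrm{prime}, \mathrm{target}) \;=\;
  \sum_{i \in \mathrm{memory}} S(\mathrm{prime}, i)\, S(\mathrm{target}, i)
% Response time is assumed to decrease as F increases, so related prime-target
% pairs, which share strongly connected memory items, yield faster responses.
```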

679 citations



Journal ArticleDOI
TL;DR: In this article, a new model for interference and forgetting is presented based on the Raaijmakers and Shiffrin search of associative memory (SAM) theory for retrieval from long-term memory.
Abstract: A new model for interference and forgetting is presented. The model is based on the Raaijmakers and Shiffrin search of associative memory (SAM) theory for retrieval from long-term memory. It includes a contextual fluctuation process that enables it to handle time-dependent changes in retrieval strengths. That is, the contextual retrieval strength is assumed to be proportional to the overlap between the contextual elements encoded in the memory trace and the elements active at the time of testing. It is shown that the model predicts a large number of phenomena from the classical interference literature. These include the basic results concerning retroactive inhibition, proactive inhibition, spontaneous recovery, independence of List 1 and List 2 recall, Osgood's transfer and retroaction surface, simple forgetting functions, the use of recognition measures, and the relation between response accuracy and response latency. It is shown that these results can be explained by a model that does not incorporate an "unlearning" assumption, thus avoiding many of the difficulties that have plagued the traditional interference theories. In recent years, a number of memory models have been presented that successfully predict the major results concerning recall and recognition. Unfortunately, however, many of those models have not been applied in a systematic manner to the phenomena of interference and forgetting. This is especially regrettable since there exists a wealth of data, accumulated in the years when these topics were the main focus of memory research, that should not be disregarded by contemporary memory theories. In this article, we present a model intended to explain the basic findings concerning interference and forgetting, findings that have been shown in many experiments to be relatively robust and reliable. The model is based on the general search of associative memory (SAM) theory (Raaijmakers & Shiffrin, 1981a) but incorporates a new model describing contextual fluctuation processes (Mensink & Raaijmakers, 1988). The SAM theory is a probabilistic cue-dependent search theory that describes retrieval processes in long-term memory. Retrieval is assumed to be mediated by retrieval cues, such as category names, words from a to-be-remembered list, contextual cues, etc. Sampling and recovery of sampled images (or memory traces) are the mechanisms that constitute the central features of the theory. As has been documented in previous papers
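A small simulation sketch of the contextual-fluctuation idea described above: contextual elements drift in and out of the currently active set over the retention interval, and contextual retrieval strength is taken as the overlap between the context encoded at study and the context active at test. The fluctuation rates and set sizes are illustrative assumptions, not the parameters of the Mensink and Raaijmakers model.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_overlap(n_elements=1000, n_active=200, p_in=0.005, p_out=0.02, steps=60):
    """Contextual fluctuation sketch: elements enter and leave the active context
    at constant rates; overlap with the encoded context is tracked over time."""
    active = np.zeros(n_elements, dtype=bool)
    active[rng.choice(n_elements, size=n_active, replace=False)] = True
    encoded = active.copy()                  # context stored in the memory trace
    overlaps = []
    for _ in range(steps):                   # retention interval, in arbitrary ticks
        leave = active & (rng.random(n_elements) < p_out)
        enter = ~active & (rng.random(n_elements) < p_in)
        active = (active & ~leave) | enter
        overlaps.append(np.count_nonzero(encoded & active) / n_active)
    return overlaps

overlap = simulate_overlap()
print("contextual overlap after 1, 10, 60 ticks:",
      round(overlap[0], 2), round(overlap[9], 2), round(overlap[-1], 2))
# Overlap (and hence contextual retrieval strength) declines toward an asymptote
# with delay, giving time-dependent forgetting without any "unlearning" assumption.
```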

Journal ArticleDOI
TL;DR: Previously overlooked neuropsychological evidence on the relation between imagery and perception is reviewed, and its relative immunity to the foregoing alternative explanations is discussed.
Abstract: Does visual imagery engage some of the same representations used in visual perception? The evidence collected by cognitive psychologists in support of this claim has been challenged by three types of alternative explanation: Tacit knowledge, according to which subjects use nonvisual representations to simulate the use of visual representations during imagery tasks, guided by their tacit knowledge of their visual systems; experimenter expectancy, according to which the data implicating shared representations for imagery and perception is an artifact of experimenter expectancies; and nonvisual spatial representation, according to which imagery representations are partially similar to visual representations in the way they code spatial relations but are not visual representations. This article reviews previously overlooked neuropsychological evidence on the relation between imagery and perception, and discusses its relative immunity to the foregoing alternative explanations. This evidence includes electrophysiological and cerebral blood flow studies localizing brain activity during imagery to cortical visual areas, and parallels between the selective effects of brain damage on visual perception and imagery. Because these findings cannot be accounted for in the same way as traditional cognitive data using the alternative explanations listed earlier, they can play a decisive role in answering the title question.



Journal ArticleDOI
TL;DR: A new theory of similarity, rooted in the detection and recognition literatures, is developed and it is shown that the general recognition theory contains Euclidean distance models of similarity as a special case but that unlike them, it is not constrained by any distance axioms.
Abstract: A new theory of similarity, rooted in the detection and recognition literatures, is developed. The general recognition theory assumes that the perceptual effect of a stimulus is random but that on any single trial it can be represented as a point in a multidimensional space. Similarity is a function of the overlap of perceptual distributions. It is shown that the general recognition theory contains Euclidean distance models of similarity as a special case but that unlike them, it is not constrained by any distance axioms. Three experiments are reported that test the empirical validity of the theory. In these experiments the general recognition theory accounts for similarity data as well as the currently popular similarity theories do, and it accounts for identification data as well as the longstanding "champion" identification model does. The concept of similarity is of fundamental importance in psychology. Not only is there a vast literature concerned directly with the interpretation of subjective similarity judgments (e.g., as in multidimensional scaling) but the concept also plays a crucial but less direct role in the modeling of many psychophysical tasks. This is particularly true in the case of pattern and form recognition. It is frequently assumed that the greater the similarity between a pair of stimuli, the more likely one will be confused with the other in a recognition task (e.g., Luce, 1963; Shepard, 1964; Tversky & Gati, 1982). Yet despite the potentially close relationship between the two, there have been only a few attempts at developing theories that unify the similarity and recognition literatures. Most attempts to link the two have used a distance-based similarity measure to predict the confusions in recognition experiments (Appelman & Mayzner, 1982; Getty, Swets, & Swets, 1980; Getty, Swets, Swets, & Green, 1979; Nakatani, 1972; Nosofsky, 1984, 1985b, 1986; Shepard, 1957, 1958b). It is now widely suspected, however, that standard distance-based similarity measures do not provide an adequate account of perceived similarity (e.g., Krumhansl, 1978; Tversky, 1977). Our approach takes the opposite tack. We begin with a very powerful and general theory of recognition and use it to derive a new similarity measure, which successfully accounts for a wide variety of similarity results in both the recognition and the similarity literatures. The theory, which we call the general recognition theory, is rooted in the detection and recognition literatures and, in fact, is a multivariate generalization of signal-detection
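To illustrate "similarity as overlap of perceptual distributions", here is a sketch that represents two stimuli as bivariate normal perceptual distributions and estimates their overlap by Monte Carlo; the specific overlap index (the integral of the pointwise minimum of the two densities) is our illustrative choice, not the measure derived in the article.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(3)

def overlap_similarity(mean_a, cov_a, mean_b, cov_b, n=200_000):
    """Monte Carlo estimate of the overlap integral of min(f_A, f_B) for two
    multivariate normal perceptual distributions."""
    f_a = multivariate_normal(mean_a, cov_a)
    f_b = multivariate_normal(mean_b, cov_b)
    half = n // 2
    # Sample from an equal mixture of the two distributions (importance sampling).
    x = np.vstack([rng.multivariate_normal(mean_a, cov_a, half),
                   rng.multivariate_normal(mean_b, cov_b, half)])
    pa, pb = f_a.pdf(x), f_b.pdf(x)
    mix = 0.5 * (pa + pb)
    return float(np.mean(np.minimum(pa, pb) / mix))

close = overlap_similarity([0, 0], np.eye(2), [0.5, 0.0], np.eye(2))
far = overlap_similarity([0, 0], np.eye(2), [2.0, 1.0], np.eye(2))
print(f"overlap of nearby stimuli : {close:.3f}")
print(f"overlap of distant stimuli: {far:.3f}")
# Unlike a fixed Euclidean-distance rule, the overlap depends on the full
# covariance structure of the perceptual distributions, so the resulting
# similarity measure is not bound by metric distance axioms.
```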


Journal ArticleDOI
TL;DR: Estimates of the amount of partial information that subjects have accumulated about a test stimulus at each intermediate moment during a reaction time trial provide deeper insights into the rate at which partial information is accumulated over time and into discrete versus continuous modes of information processing.
Abstract: Measurements of reaction time have played a major role in developing theories about the mental processes that underlie sensation, perception, memory, cognition, and action. The interpretation of reaction time data requires strong assumptions about how subjects trade accuracy for speed of performance and about whether there is a discrete or continuous transmission of information from one component process to the next. Conventional reaction time and speed-accuracy trade-off procedures are not, by themselves, sufficiently powerful to test these assumptions. However, the deficiency can be remedied in part through a new speed-accuracy decomposition technique. To apply the technique, one uses a hybrid mixture of (a) conventional reaction time trials in which subjects must process a given test stimulus with high accuracy and (b) peremptory response-signal trials in which subjects must make prompted guesses before stimulus processing has been finished. Data from this "titrated reaction time procedure" are then analyzed in terms of a parallel sophisticated-guessing model, under which normal mental processes and guessing processes are assumed to race against each other in producing overt responses. With the model, one may estimate the amount of partial information that subjects have accumulated about a test stimulus at each intermediate moment during a reaction time trial. Such estimates provide deeper insights into the rate at which partial information is accumulated over time and into discrete versus continuous modes of information processing. An application of speed-accuracy decomposition to studies of word recognition illustrates the potential power of the technique.
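The race assumption underlying the decomposition can be stated schematically, in our notation: the regular process and the prompted guess race independently on response-signal trials, so

```latex
% Schematic race assumption behind the decomposition (our notation): on a
% response-signal trial the regular process (finishing-time distribution F_R,
% estimable from conventional trials) races independently against the prompted
% guess (finishing-time distribution F_G), so the signal-trial distribution is
F_S(t) \;=\; F_R(t) + \bigl[1 - F_R(t)\bigr]\,F_G(t).
% Given F_S from signal trials and F_R from regular trials, the guess distribution
% and the accuracy of guesses based on partial information can be recovered at
% each time t, tracing how partial information accumulates during a trial.
```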

Journal ArticleDOI
TL;DR: Results obtained from Meyer et al.'s (1988) new technique give important qualitative support to some stochastic models and impressive quantitative support to the continuous diffusion model.
Abstract: David Meyer and colleagues have recently developed a new technique for examining the time course of information processing. The technique is a variant of the response signal procedure: On some trials subjects are presented with a signal that requires them to respond, whereas on other trials they respond normally. These two types of trials are randomly intermixed so subjects are unable to anticipate which kind of trial is to be presented next. For data analysis, it is assumed that on the signal trials observed reaction times are a probability mixture of regular responses and guesses based on partial information. The accuracy of guesses based on partial information can be determined by using the data from the regular trials and a simple race model to remove the contribution of fast-finishing regular trials from signal trial data. This analysis shows that the accuracy of guesses is relatively low and is either approximately constant or grows slowly over the time course of retrieval. Meyer and colleagues have argued that this pattern of results rules out most continuous models of information processing. But the analyses presented in this article show that this pattern is consistent with several stochastic reaction time models: the simple random walk, the runs, and the continuous diffusion models. The diffusion model is assessed with data from a new experiment using the study-test recognition memory procedure. Fitting the diffusion model to the data from regular trials fixes all parameters of the model except one (the signal encoding and decision parameter). With this one free parameter, the model predicts the observed guessing accuracy. In summary, results obtained from Meyer et al.'s (1988) new technique give important qualitative support to some stochastic models and impressive quantitative support to the continuous diffusion model.
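For concreteness, here is a minimal simulation of a two-boundary diffusion (continuous random-walk) process of the kind referred to above; the drift, boundary, and step-size values are illustrative, not the parameters fitted in the article.

```python
import numpy as np

rng = np.random.default_rng(4)

def diffusion_trial(drift=1.0, boundary=1.0, start=0.5, noise=1.0, dt=0.001):
    """Simulate one trial of a two-boundary (Wiener) diffusion process:
    evidence x drifts from `start` until it is absorbed at 0 or `boundary`."""
    x, t = start, 0.0
    while 0.0 < x < boundary:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t, x >= boundary            # (decision time, hit the upper boundary?)

trials = [diffusion_trial() for _ in range(2000)]
rts = np.array([t for t, _ in trials])
upper = np.array([hit for _, hit in trials])
print(f"P(upper boundary) = {upper.mean():.2f}")
print(f"mean decision time, upper hits: {rts[upper].mean():.3f} s")
print(f"mean decision time, lower hits: {rts[~upper].mean():.3f} s")
# Fixing the drift and boundary parameters from regular-trial data and leaving a
# single free parameter for signal trials mirrors, in spirit, the fitting strategy
# the abstract describes for predicting guessing accuracy.
```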



Journal ArticleDOI
TL;DR: This article provides evidence against a fundamental assumption of traditional theories of orientation--that gravitoinertial force is perceived, and argues that orientation is based on information that is available in patterns of motion of the organism, and that perception and control of orientation depend not only on information about an organism's motions relative to the local force environment but also on information about the surface of support and about the compensatory actions of the organism.
Abstract: In this article we provide evidence against a fundamental assumption of traditional theories of orientation--that gravitoinertial force is perceived. We argue that orientation is based on information that is available in patterns of motion of the organism. We further argue that perception and control of orientation depend not only on information about an organism's motions relative to the local force environment but also on information about the surface of support and about the compensatory actions of the organism. We describe these kinds of information and discuss their availability to, and across, different perceptual systems. The use of this information for the control of orientation is emphasized. We conclude with recommendations for research based on the new approach. How do organisms orient themselves in the environment? To what do they orient? The answer to these questions has seemed so obvious for so long that it is taken for granted: Organisms orient to gravity. They sit, stand, walk, run, throw, catch, dance, and do gymnastics relative to the omnipresent force of gravity. "Gravity is probably the most important spatial reference cue for an organism" (Schone, 1984, p. 258). Within this context, what are the functions of the perceptual systems? How do the perceptual systems contribute to the achievement and maintenance of orientation in terrestrial environments and in such situations as flight and weightlessness? In this article we analyze the information for orientation that is available to the perceptual systems. Our analysis leads us to two important areas of disagreement with the classical approach to spatial orientation. One is in the nature of the information for orientation; this is discussed in the first half of the article. The other concerns cooperation between perceptual systems in the perception and control of orientation and is discussed in the second half.





Journal ArticleDOI
TL;DR: A critical test, from a perspective borrowed from Gestalt psychology, of the validity of the likelihood principle postulated by Helmholtz to account for the interpretation of a sensory configuration.
Abstract: A critical test, from a perspective borrowed from Gestalt psychology, of the validity of the likelihood principle postulated by Helmholtz to account for the interpretation of a sensory configuration.

Journal ArticleDOI
TL;DR: The authors argue that the result-centered methods will not solve problems such as confirmation bias and irreplicability and will aggravate other existing problems: lack of viable theory, fragmentation of the field, mechanical fact gathering, limited applicability of psychological knowledge, and non-cumulative development of facts, with needless duplication of results and reinvention of empirical constructs.
Abstract: This article reexamines some important issues raised by Greenwald, Pratkanis, Leippe, and Baumgardner (1986) concerning the nature of theory and its role in research progress, practical applications of psychological knowledge, strategies for developing and evaluating theories, and relations between empirical and theoretical psychology. I argue that Greenwald et al.'s result-centered methods will not solve problems such as confirmation bias and irreplicability and will aggravate other existing problems: lack of viable theory, fragmentation of the field, mechanical fact gathering, limited applicability of psychological knowledge, and noncumulative development of facts, with needless duplication of results and reinvention of empirical constructs. I conclude that all of these problems are best solved by establishing a balance between the "rational" and "empirical" epistemologies in psychology.

Journal ArticleDOI
TL;DR: Two of the authors concerned respond to the criticisms in three 1988 articles questioning the methods proposed by Greenwald et al. (1986) for reducing the confirmation bias that impedes research progress.
Abstract: Two of the authors concerned respond to the criticisms in three 1988 articles questioning the methods proposed by Greenwald et al. (1986) for reducing the confirmation bias that impedes research progress.

Journal ArticleDOI
TL;DR: The authors argue that result-centered research can impede the progress of psychology because it retards theoretical, methodological, and technological advancement, and encourages increasingly narrow and trivial research endeavors, and suggest ways to minimize these problems.
Abstract: Greenwald, Pratkanis, Leippe, and Baumgardner (1986) argued that a theory-testing research orientation contributes to a confirmation bias that impedes the progress of research. To eliminate this confirmation bias, they proposed two complementary result-centered approaches: the method of condition seeking and the design approach. We argue that Greenwald et al. confused the relation between theory and research and that the result-centered strategies they proposed would in no way minimize the bias. We also suggest that result-centered research can impede the progress of psychology because it retards theoretical, methodological, and technological advancement, and encourages increasingly narrow and trivial research endeavors. We conclude by discussing ways to minimize these problems.