
Showing papers in "Journal of Experimental Psychology: General in 1980"


Journal ArticleDOI
TL;DR: These results appear to provide an important model system for the study of the relationship between attention and the structure of the visual system, and it is found that attention shifts are not closely related to the saccadic eye movement system.
Abstract: Detection of a visual signal requires information to reach a system capable of eliciting arbitrary responses required by the experimenter. Detection latencies are reduced when subjects receive a cue that indicates where in the visual field the signal will occur. This shift in efficiency appears to be due to an alignment (orienting) of the central attentional system with the pathways to be activated by the visual input. It would also be possible to describe these results as being due to a reduced criterion at the expected target position. However, this description ignores important constraints about the way in which expectancy improves performance. First, when subjects are cued on each trial, they show stronger expectancy effects than when a probable position is held constant for a block, indicating the active nature of the expectancy. Second, while information on spatial position improves performance, information on the form of the stimulus does not. Third, expectancy may lead to improvements in latency without a reduction in accuracy. Fourth, there appears to be little ability to lower the criterion at two positions that are not spatially contiguous. A framework involving the employment of a limited-capacity attentional mechanism seems to capture these constraints better than the more general language of criterion setting. Using this framework, we find that attention shifts are not closely related to the saccadic eye movement system. For luminance detection the retina appears to be equipotential with respect to attention shifts, since costs to unexpected stimuli are similar whether foveal or peripheral. These results appear to provide an important model system for the study of the relationship between attention and the structure of the visual system.

3,559 citations


Journal ArticleDOI
David A. Rosenbaum1
TL;DR: A method for discovering how the defining values of forthcoming body movements are specified is presented, consistent with a distinctive-feature view, rather than a hierarchical view, of motor programming.
Abstract: This article presents a method for discovering how the defining values of forthcoming body movements are specified. In experiments using this movement precuing technique, information is given about some, none, or all of the defining values of a movement that will be required when a reaction signal is presented. It is assumed that the reaction time (RT) reflects the time to specify those values that were not precued. With RTs for the same movements in different precue conditions, it is possible to make detailed inferences about the value specification process for each of the movements under study. The present experiments were concerned with the specification of the arm, direction, and extent (or distance) of aimed hand movements. In Experiment 1 it appeared that (a) specification times during RTs were longest for arm, shorter for direction, and shortest for extent, and (b) these values were specified serially but not in an invariant order. Experiment 2 suggested that the precuing effects obtained in Experiment 1 were not attributable to stimulus identification. Experiment 3 suggested that subjects in Experiment 1 did not use precues to prepare sets of possible movements from which the required movement was later selected. The model of value specification supported by the data is consistent with a distinctive-feature view, rather than a hierarchical view, of motor programming.

925 citations


Journal ArticleDOI
TL;DR: It is concluded that the results from the first day of testing require a revision of Eysenck's theory of introversion/extraversion, because it appeared that low impulsives are more aroused in the morning and less aroused in the evening than are the high impulsives.
Abstract: The personality dimension of introversion/extraversion is one of the few personality dimensions that can be reliably identified from study to study and investigator to investigator. The importance of this dimension within personality theory is due both to the stability of the trait and the influential theory of H. J. Eysenck. The basic assumption in Eysenck's theory of introversion/extraversion is that the personality differences between introverts and extraverts reflect some basic difference in the resting level of cortical arousal or activation. Assuming that there is a curvilinear relationship (an inverted U) between levels of stress and performance leads to a test of this arousal theory. That is, moderate increases in stress should hinder the performance of introverts who are presumably already highly aroused. However, the same moderate increase in stress might help the performance of the presumably underaroused extraverts. Revelle, Amaral, and Turriff reported that the administration of moderate doses of caffeine hindered the performance of introverts and helped the performance of extraverts on a cognitive task similar to the verbal test of the Graduate Record Examination. Assuming that caffeine increases arousal, this interaction between introversion/extraversion and drug condition supports Eysenck's theory. This interaction was explored in a series of experiments designed to replicate, extend, and test the generality of the original finding. The interaction between personality and drug condition was replicated and extended to additional cognitive performance tasks. However, these interactions were affected by time of day and stage of practice, and the subscales of introversion/extraversion (impulsivity and sociability) were differentially affected. In the morning of the first day, low impulsives were hindered and high impulsives helped by caffeine.
This pattern reversed in the evening of the first day, and it reversed again in the evening of Day 2. We concluded that the results from the first day of testing require a revision of Eysenck's theory. Instead of a stable difference in arousal between low and high impulsives, it appeared that these groups differed in the phase of their diurnal arousal rhythms. The result is that low impulsives are more aroused in the morning and less aroused in the evening than are the high impulsives. A variety of peripheral or strategic explanations (differences in caffeine consumption, guessing strategies, distraction, etc.) for the observed performance increments and decrements were proposed and tentatively rejected. It seems probable that some fundamental change in the efficiency with which information is processed is responsible for these performance changes.

417 citations


Journal ArticleDOI
TL;DR: An inverse relationship between duration of inducing stimulus and duration of sensory persistence is suggested and allows the inference that visual persistence may be identified more fittingly with ongoing neural processes than with the decaying contents of an iconic store.
Abstract: SUMMARY Iconic memory has often been likened to a sensory store whose contents drain away rapidly as soon as the inducing stimulus is turned off. Instances of short-lived visible persistence have been explained in terms of the decaying contents of iconic store. A fundamental requirement of this storage model is that strength of persistence should be a decreasing function of time elapsed since the cessation—not since the onset—of the inducing stimulation. That is, strength of visible persistence may be directly related—but not inversely related—to the duration of the inducing stimulus. Two complementary paradigms were utilized in the present studies. In the first paradigm performance was facilitated by visible persistence in that the task required the bridging of a temporal gap between two successive displays. In the second paradigm (forward visual masking by pattern), performance was impaired by lingering visible persistence of the temporally leading mask. Both paradigms yielded evidence of an inverse relationship between duration of inducing stimulus and duration of visible persistence. More specifically, in a task requiring temporal integration of a pattern displayed briefly in two successive portions, performance was severely impaired if the duration of the leading part exceeded about 100 msec. This suggests an inverse relationship between duration of inducing stimulus and duration of sensory persistence and allows the inference that visual persistence may be identified more fittingly with ongoing neural processes than with the decaying contents of an iconic store. In keeping with this suggestion, two experiments disconfirmed the conjecture that lack of temporal integration following long inducing stimuli could be ascribed to emergence of unitary form separately in the two portions of the display or to the triggering of some sort of discontinuity detection mechanism within the visual system. 
In added support of a "processing" model, two further studies showed that the severity of forward masking by pattern declines sharply as the duration of the leading mask is increased. This pattern of results is equally unsupportive of a storage theory of iconic persistence as of perceptual moment theory in any of its versions. This is so because both theories regard interstimulus interval rather than stimulus-onset asynchrony as the crucial factor in temporal integration. Neither can the results be explained in terms of receptor adaptation or of metacontrast suppression. The theory of inhibitory channel interactions can encompass the more prominent aspects of the results but fails to account for foveal suppression and for some crucial temporal effects.

380 citations


Journal ArticleDOI
TL;DR: These experiments suggest a model of information access whereby pictures access semantic information more readily than name information, with the reverse being true for words.
Abstract: A number of independent lines of research have suggested that semantic and articulatory information become available differentially from pictures and words. The first of the experiments reported here sought to clarify the time course by which information about pictures and words becomes available by considering the pattern of interference generated when incongruent pictures and words are presented simultaneously in a Stroop-like situation. Previous investigators report that picture naming is easily disrupted by the presence of a distracting word but that word naming is relatively immune to interference from an incongruent picture. Under the assumption that information available from a completed process may disrupt an ongoing process, these results suggest that words access articulatory information more rapidly than do pictures. Experiment 1 extended this paradigm by requiring subjects to verify the category of the target stimulus. In accordance with the hypothesis that pictures access the semantic code more rapidly than do words, there was a reversal in the interference pattern: Word categorization suffered considerable disruption, whereas picture categorization was minimally affected by the presence of an incongruent word. Experiment 2 sought to further test the hypothesis that access to semantic and articulatory codes is different for pictures and words by examining memory for those items following naming or categorization. Categorized words were better recognized than named words, whereas the reverse was true for pictures, a result which suggests that picture naming involves more extensive processing than picture categorization. Experiment 3 replicated this result under conditions in which viewing time was held constant. The last experiment extended the investigation of memory differences to a situation in which subjects were required to generate the superordinate category name.
Here, memory for categorized pictures was as good as memory for named pictures. Category generation also influenced memory for words, memory performance being superior to that following a yes--no verification of category membership. These experiments suggest a model of information access whereby pictures access semantic information more readily than name information, with the reverse being true for words. Memory for both pictures and words was a function of the amount of processing required to access a particular type of information as well as the extent of response differentiation necessitated by the task.

348 citations



Journal ArticleDOI
TL;DR: The authors showed that the ability to divide attention is constrained primarily by the individual's level of skill, not by the size of a fixed pool of resources, and addressed the counterhypothesis that the writing task may become automatic and require no capacity at all.
Abstract: SUMMARY Spelke, Hirst, and Neisser trained two subjects to copy unrelated words at dictation as they read and understood stories. The subjects' success was interpreted as evidence against the hypothesis of a fixed attentional capacity or limited cognitive resources; instead, it was hypothesized, attention is a skill that improves with practice. However, other explanations of these results can be proposed. The present research addressed two such counterhypotheses: that capacity may be alternated between reading and writing and that the writing task may become "automatic," and require no capacity at all. Experiment 1 was designed to see whether subjects take intermittent advantage of the redundancy of the stories to switch to the writing task. Some subjects were trained to copy words while reading highly redundant material (short stories); others were trained with less redundant encyclopedia articles. On reaching criterion, each subject was switched to the other type of reading material. Three of the four subjects trained with stories transferred their skill immediately to the encyclopedia, suggesting that they had not been using the redundancy of the stories to accomplish their task. Experiment 2 addressed the automaticity hypothesis. Two subjects were trained to copy complete sentences while reading. Several tests then showed that they understood the meaning of the sentences: (a) They made fewer copying errors with real sentences than with random words; (b) they recalled real sentences better than random words; (c) they integrated information from successive sentences, as demonstrated by a test of recognition memory for new statements whose truth was implied by the original ones. In view of this evidence that the sentences were understood, it is hard to maintain that they were being handled in an automatic way.
These results strengthen the hypothesis that the ability to divide attention is constrained primarily by the individual's level of skill, not by the size of a fixed pool of resources. Postulated capacity limits may provide plausible accounts of unskilled performance but fail to explain the achievements of practiced individuals.

267 citations



Journal ArticleDOI
TL;DR: In this paper, the precuing method developed by Rosenbaum, in which a precue provides partial information about the upcoming movement before the stimulus to move, was used to study the initiation of movement.
Abstract: This set of experiments is concerned with the specification of movement parameters hypothesized to be involved in the initiation of movement. Experiment 1 incorporated the precuing method developed by Rosenbaum in which a precue provided partial information about the upcoming movement before the stimulus to move. Under conditions in which precues were provided by letter symbols and stimuli were color-coded dots mapped to response keys, Rosenbaum found reaction times to be slower for the specification of arm than for direction, and both to be slower than the specification of extent. In Experiment 1, using precue and stimulus conditions that paralleled those employed by Rosenbaum, we obtained very similar findings. The three follow-up experiments extended these findings to more naturalistic stimulus-response compatible conditions. We used a method in which precues and stimuli were directly specified through vision and mapped in a one-to-one manner with responses. In Experiment 2, although reaction times decreased as a function of the number of parameters precued, there were no systematic effects of precuing particular parameters. In Experiments 3 and 4, we incorporated an ambiguous precue that, while serving to reduce task uncertainty, failed to provide any specific information as to the arm, direction, or extent of the upcoming movement. Initiation times did not systematically vary as a function of the type of parameter precued nor were there significant differences between specific and ambiguous precue conditions. In sum, only in Experiment 1 in which precues and stimuli involved complex cognitive transformations was there support for Rosenbaum's parameter specification model. When we employed highly compatible conditions, designed to reflect a real-world environment, we failed to obtain any tendency for movement parameters to be serially specified.
We discuss grounds for suspecting the generality of parameter specification models and propose an alternative approach that is consonant with the dynamic characteristics of the motor control system.

201 citations


Journal ArticleDOI
TL;DR: The results strongly favor the theory that observers use the same operation for both instructions, and the two-operation model tested assumes magnitude estimations of "ratios" are a comparable power function of subjective ratios.
Abstract: This article examines the hypothesis that judges compare stimuli by ratio and subtractive operations when instructed to judge "ratios" and "differences." Rule and Curtis hold that magnitude estimations are a power function of subjective values, with an exponent between 1.1 and 2.1. Accordingly, the two-operation model tested assumes magnitude estimations of "ratios" are a comparable power function of subjective ratios. In contrast, Birnbaum and Veit theorize that judges compare two stimuli by subtraction for both "ratio" and "difference" instructions and that magnitude estimations of "ratios" are approximately an exponential function of subjective differences. Three tests were used to compare the theory of one operation with the two-operation theory for the data of nine experiments. The results strongly favor the theory that observers use the same operation for both instructions.

144 citations


Journal ArticleDOI
TL;DR: The modality effect refers to the higher level of recall of the last few items of a list when presentation is auditory as opposed to visual, and it is concluded that echoic information can persist for many seconds and is used directly at the time of recall.
Abstract: The modality effect refers to the higher level of recall of the last few items of a list when presentation is auditory as opposed to visual. It is usually attributed to echoic memory. The effect may be sharply reduced by an ostensibly irrelevant auditory item appended to the end of the list. Previous research suggests that this "suffix effect" arises only when the suffix item occurs within 2 sec of the last list item. This finding strengthens the widely held assumption that echoic information decays within 2 sec, and has led to the assumption that if echoic information is to be useful in serial recall it must first be encoded into a more durable modality-independent form. Both assumptions conflict with the research reported here. The first two experiments demonstrate substantial suffix effects with suffix delays of 2 and 4 sec, indicating that echoic information lasts at least 4 sec. This finding implies that echoic information may aid recall directly, an implication that was supported in Experiments 3 and 4. In Experiment 3 serial recall was interrupted with a brief distractor task. The modality effect was smaller when this task was auditory than when it was visual, suggesting that echoic information was still available immediately prior to recency recall. In Experiment 4 list presentation was broken by a 4-sec pause; the modalities of the list halves were combined factorially. Interest focused on the recency positions of the first half. A modality effect was found at these positions when the second half was visual but not when it was auditory. This is contrary to the hypothesis that echoic information is encoded before recall, but is consistent with the alternative hypothesis that echoic information is used directly at recall. The final two experiments concern the modality effect found when a delay is interpolated between list presentation and recall.
Experiment 5 showed that a 20-sec silent copying task interpolated before free recall reduced visual recency more than auditory recency, and so enhanced the modality effect. This suggests that, contrary to prevailing opinion, the modality effect in delayed recall is not the result of a memory that is modality-independent. In Experiment 6 a modality effect found with serial recall after an unfilled interval of 18 sec was unaffected by a visual distractor task but almost eliminated by an auditory distractor task given just prior to recall. It thus seems that the modality effect in delayed recall is the result of information persisting in echoic form until recall. It is concluded that echoic information can persist for many seconds and is used directly at the time of recall.

Journal ArticleDOI
TL;DR: The time subjects took to scan between objects in a mental image was used to infer the sorts of geometric information that images preserve, and it is argued that imagery and perception share some representational structures but that mental image scanning is a process distinct from eye movements or eye-movement commands.
Abstract: What sort of medium underlies imagery for three-dimensional scenes? In the present investigation, the time subjects took to scan between objects in a mental image was used to infer the sorts of geometric information that images preserve. Subjects studied an open box in which five objects were suspended, and learned to imagine this display with their eyes closed. In the first experiment, subjects scanned by tracking an imaginary point moving in a straight line between the imagined objects. Scanning times increased linearly with increasing distance between objects in three dimensions. Therefore metric 3-D information must be preserved in images, and images cannot simply be 2-D "snapshots." In a second experiment, subjects scanned across the image by "sighting" objects through an imaginary rifle sight. Here scanning times were found to increase linearly with the two-dimensional separations between objects as they appeared from the original viewing angle. Therefore metric 2-D distance information in the original perspective view must be preserved in images, and images cannot simply be 3-D "scale-models" that are assessed from any and all directions at once. In a third experiment, subjects mentally rotated the display 90 degrees and scanned between objects as they appeared in this new perspective view by tracking an imaginary rifle sight, as before. Scanning times increased linearly with the two-dimensional separations between objects as they would appear from the new relative viewing perspective. Therefore images can display metric 2-D distance information in a perspective view never actually experienced, so mental images cannot simply be "snapshot plus scale model" pairs. These results can be explained by a model in which the three-dimensional structure of objects is encoded in long-term memory in 3-D object-centered coordinate systems.
When these objects are imagined, this information is then mapped onto a single 2-D "surface display" in which the perspective properties specific to a given viewing angle can be depicted. In a set of perceptual control experiments, subjects scanned a visible display by (a) simply moving their eyes from one object to another, (b) sweeping an imaginary rifle sight over the display, or (c) tracking an imaginary point moving from one object to another. Eye-movement times varied linearly with 2-D interobject distance, as did time to scan with an imaginary rifle sight; time to track a point varied independently with the 3-D and 2-D interobject distances. These results are compared with the analogous image scanning results to argue that imagery and perception share some representational structures but that mental image scanning is a process distinct from eye movements or eye-movement commands.

Journal ArticleDOI
TL;DR: In this article, Sperling et al. investigated the role of visual information during visual persistence by comparing partial-report (PR) and whole-report estimates of available information.
Abstract: Following Sperling, the nature of the representation of visual information during visual persistence has been investigated by comparing partial-report (PR) and whole-report (WR) estimates of available information. A PR superiority is considered evidence for the representation of the cued stimulus dimension in visual persistence. In general, PR cues based on a physical characteristic produce a PR superiority, but PR cues based on a category distinction give no higher estimates of available information than is obtained with WR. These findings have been used to support an interpretation of visual persistence based upon a storage system metaphor (e.g., iconic memory), whereby a critical characteristic of the stored information is its "literal" precategorical nature. The present experiments explored whether there are reasonable alternative explanations for the fact that only physical PR cues typically produce a PR superiority. Experiments 1 and 2 demonstrate that the effectiveness of physical PR cues depends upon the "goodness" of the perceptual groups defined by the cued dimension. Perceptual grouping within multi-letter displays was varied according to the principles of proximity (Exp. 1) and similarity (Exp. 2), and the results showed greater PR superiorities when the demand characteristics of the cues were compatible with the implied perceptual groups in the displays. Experiments 3 and 4 establish that PR cues based upon a category distinction (letter-digit) produce a PR superiority when both cue onset latency and cue uncertainty are equated across PR and WR conditions. Circular alphanumeric displays were used, and category PR cues and WR cues were either presented in separate trial blocks (Exp. 3) or intermixed at two possible cue delays relative to display onset (-1000 msec or 0 msec). A PR superiority was found in all conditions.
In addition, Experiment 5 shows that the magnitude of this category PR superiority decreased systematically with increases in cue delay (-900 msec, -300 msec, +300 msec, and +900 msec), and in Experiment 6, it was found that the PR superiorities for both physical and category cues decrease at comparable rates with increased cue delay. Since perceptual grouping influences the effectiveness of physical PR cues and category PR cues produce a PR superiority under appropriate conditions, the results question the validity of interpretations of visual persistence that imply the existence of a literal, precategorical storage system. It is suggested that a multichannel view of the visual system provides a more adequate theoretical conceptualization of visual persistence.


Journal ArticleDOI
TL;DR: These experiments indicate that whether or not categories have criterial features, subjects attempt to develop a set of feature tests that allow for exemplar classification, and previous evidence supporting probabilistic or similarity models may be interpreted as resulting from subjects' use of the most efficient rules for classification and the averaging of responses for subjects using different sets of rules.
Abstract: Early work in perceptual and conceptual categorization assumed that categories had criterial features and that category membership could be determined by logical rules for the combination of features. More recent theories have assumed that categories have an ill-defined structure and have proposed probabilistic or global similarity models for the verification of category membership. In the experiments reported here, several models of categorization were compared, using one set of categories having criterial features and another set having an ill-defined structure. Schematic faces were used as exemplars in both cases. Because many models depend on distance in a multidimensional space for their predictions, in Experiment 1 a multidimensional scaling study was performed using the faces of both sets as stimuli. In Experiment 2, subjects learned the category membership of faces for the categories having criterial features. After learning, reaction times for category verification and typicality judgments were obtained. Subjects also judged the similarity of pairs of faces. Since these categories had characteristic as well as defining features, it was possible to test the predictions of the feature comparison model (Smith et al.), which asserts that reaction times and typicalities are affected by characteristic features. Only weak support for this model was obtained. Instead, it appeared that subjects developed logical rules for the classification of faces. A characteristic feature affected reaction times only when it was part of the rule system devised by the subject. The procedure for Experiment 3 was like that for Experiment 2, but with ill-defined rather than well-defined categories. The obtained reaction times had high correlations with some of the models for ill-defined categories. However, subjects' performance could best be described as one of feature testing based on a logical rule system for classification.
These experiments indicate that whether or not categories have criterial features, subjects attempt to develop a set of feature tests that allow for exemplar classification. Previous evidence supporting probabilistic or similarity models may be interpreted as resulting from subjects' use of the most efficient rules for classification and the averaging of responses for subjects using different sets of rules.

Journal ArticleDOI
TL;DR: The Revelle et al. data suggest that differences between introverts and extraverts in time of day effects are due more to the impulsivity component of extraversion than to the sociability component.
Abstract: Revelle et al. have provided convincing evidence of interesting and replicable empirical relationships between the factors of time of day, caffeine administration, and impulsivity. Furthermore, their data suggest that differences between introverts and extraverts in time of day effects are due more to the impulsivity component of extraversion than to the sociability component. However, their preferred interpretation of the data in terms of a unidimensional arousal model is seriously deficient and should be replaced by a more complex conceptualization that emphasizes the existence of a number of qualitatively distinct activation states and that focuses on the effects of each of these activation states on the componential functions of information processing.


Journal ArticleDOI
TL;DR: In this article, three issues raised by M. W. Eysenck and Folkard are discussed: (a) just what individual difference variable is mediating the time of day and caffeine effects; (b) what the difference is in the diurnal rhythms of low and high impulsives; and (c) whether it is necessary to postulate multiple activation states.
Abstract: Three issues raised by M. W. Eysenck and Folkard are discussed. These include (a) just what individual difference variable is mediating the time of day and caffeine effects; (b) what the difference is in the diurnal rhythms of low and high impulsives; and (c) whether it is necessary to postulate multiple activation states. Suggestions for future research are then given.

Journal ArticleDOI
TL;DR: Simulated data based on the assumption that subjects evaluate both perceived relations were computed for stimulus values used by Veit to investigate judgments of ratios and differences in grayness, leaving open the question of whether one or two perceived relations underlie the data.
Abstract: Because nonmetric analyses of judged ratios and differences in sensory magnitude have yielded similar scales, some investigators have concluded that a single perceived relation underlies both judgment tasks. Issues raised by this interpretation are considered in this article. Simulated data based on the assumption that subjects evaluate both perceived relations were computed for stimulus values used by Veit to investigate judgments of ratios and differences in grayness. A nonmetric analysis of both sets of simulated data in terms of a difference model yielded a solution such that each set of data was a weak monotonic transformation of the model's values, and the scale values were approximately linear with those obtained by Veit from empirical data. This result leaves open the question of whether one or two perceived relations underlie the data. Ordinal properties of ratios and differences for a finite set are discussed together with their relation to systematic biases in psychophysical judgment tasks.