
Showing papers in "Journal of Experimental Psychology: Human Perception and Performance in 1997"


Journal ArticleDOI
TL;DR: The conclusions drawn are that individuals do not experience an AB for their own names but do for either other names or nouns.
Abstract: Four experiments were carried out to investigate an early- versus late-selection explanation for the attentional blink (AB). In both Experiments 1 and 2, 3 groups of participants were required to identify a noun (Experiment 1) or a name (Experiment 2) target (experimental conditions) and then to identify the presence or absence of a 2nd target (probe), which was their own name, another name, or a specified noun from among a noun distractor stream (Experiment 1) or a name distractor stream (Experiment 2). The conclusions drawn are that individuals do not experience an AB for their own names but do for either other names or nouns. In Experiments 3 and 4, either the participant's own name or another name was presented, as the target and as the item that immediately followed the target, respectively. An AB effect was revealed in both experimental conditions. The results of these experiments are interpreted as support for a late-selection interference account of the AB.

322 citations


Journal ArticleDOI
TL;DR: Although participants could not accurately report the Gestalt grouping patterns formed by background dots (cf. A. Mack, B. Tang, R. Tuma, S. Kahn, & I. Rock, 1992), their line-length judgments were affected by the illusions those patterns created, suggesting that Gestalt grouping occurs without attention but that the resulting patterns may not be encoded in memory without attention.
Abstract: Many theories of visual perception assume that before attention is allocated within a scene, visual information is parsed according to the Gestalt principles of organization. This assumption has been challenged by experiments in which participants were unable to identify what Gestalt grouping patterns had occurred in the background of primary-task displays (A. Mack, B. Tang, R. Tuma, S. Kahn, & I. Rock, 1992). In the present study, participants reported which of 2 horizontal lines was longer. Dots in the background, if grouped, formed displays similar to the Ponzo illusion (Experiments 1 and 2) or the Müller-Lyer illusion (Experiment 3). Despite inaccurate reports of what the patterns were, participants' responses on the line-length discrimination task were clearly affected by the 2 illusions. These results suggest that Gestalt grouping does occur without attention but that the patterns thus formed may not be encoded in memory without attention.

300 citations


Journal ArticleDOI
TL;DR: Results showed that learning may involve qualitative or quantitative alterations in the layout of the coordination dynamics depending on whether such dynamics are bistable or multistable before exposure to the learning task.
Abstract: The dynamics of learning a new coordinated behavior was examined by requiring participants to perform a visually specified phase relationship between the hands. Results showed that learning may involve qualitative or quantitative alterations in the layout of the coordination dynamics, depending on whether such dynamics are bistable or multistable before exposure to the learning task. In both cases, the process stabilized the to-be-learned behavior and its symmetry partner, even though the latter had not actually been practiced. Kinematic analyses of hand motion showed that previously existing coordination tendencies were exploited during learning in order to match visual requirements. These findings and the concepts presented here provide a framework for understanding how learning occurs in the context of previous experience and allow individual differences in learning to be tackled explicitly.
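
For reference, the "bistable or multistable" intrinsic dynamics invoked above are commonly written in this literature as the Haken-Kelso-Bunz potential over the relative phase φ between the hands (standard background for the approach, not an equation quoted from this abstract):

V(\phi) = -a\cos\phi - b\cos 2\phi

Its minima at φ = 0 and φ = π are the pre-existing stable coordination patterns that learning a new, visually specified phase relation must reshape; a and b are illustrative coefficients, with their ratio determining whether the landscape is bistable.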

278 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the role of awareness in orienting visual attention and found that an exogenous cue presented below the subjective threshold of awareness captured attention automatically and without awareness.
Abstract: Previous research has shown that visual attention can be directed to a spatial location in 2 qualitatively different ways. Attention can be allocated endogenously in response to centrally presented precues, or it can be captured exogenously by a visual stimulus with an abrupt onset. It has been suggested that exogenous orienting of attention is an automatic process, whereas endogenous orienting of attention represents a controlled and strategic process. M.I. Posner and C.R.R. Snyder (1975) suggested that an automatic process occurs without intention, does not interfere with other mental processes, and does not necessarily give rise to awareness, whereas a controlled process will likely interfere with other processes and necessarily requires intention and awareness. Three experiments investigated the role of awareness in orienting visual attention. Endogenous and exogenous components of orienting attention were placed in opposition to each other to assess the automaticity of exogenous orienting by examining the potential for brief stimulus events to capture attention in the absence of subjective awareness. Results show that an exogenous cue presented below a subjective threshold of awareness captured attention automatically and without awareness.

276 citations


Journal ArticleDOI
TL;DR: It is found that the identification probability of the arrow was reduced when the to-be-executed reaction was compatible with the presented arrow; for example, the perception of a right-pointing arrow was impaired when presented during the execution of a right response as compared with that of a left response.
Abstract: This contribution is devoted to the question of whether action-control processes may be demonstrated to influence perception. This influence is predicted from a framework in which stimulus processing and action control are assumed to share common codes, thus possibly interfering with each other. In 5 experiments, a paradigm was used that required a motor action during the presentation of a stimulus. The participants were presented with masked right- or left-pointing arrows shortly before executing an already prepared left or right keypress response. We found that the identification probability of the arrow was reduced when the to-be-executed reaction was compatible with the presented arrow. For example, the perception of a right-pointing arrow was impaired when presented during the execution of a right response as compared with that of a left response. The theoretical implications of this finding as well as its relation to other, seemingly similar phenomena (repetition blindness, inhibition of return, psychological refractory period) are discussed.

257 citations


Journal ArticleDOI
TL;DR: In this paper, the authors compared walking and verbal report as distance indicators, looking for a tight covariation in responses that would indicate control by a common variable, namely perceived distance.
Abstract: It has not been established that walking without vision to previewed targets is indeed controlled by perceived distance. To this end, we compared walking and verbal report as distance indicators, looking for a tight covariation in responses that would indicate control by a common variable. Targets from 79-500 cm away were presented under dark and well-lit conditions. Both verbal reports and walking indicated overestimation of near targets and underestimation of far targets under dark viewing conditions. Moreover, the finding that verbally reported distance plotted essentially as a single-valued function of walked distance and vice versa is evidence that both indicators were responding to the same internal variable, ostensibly perceived distance. In addition, binocular parallax, absolute motion parallax, and angular elevation were evaluated as distance cues, and only angular elevation exerted a large influence on perceived distance.

256 citations


Journal ArticleDOI
TL;DR: Weak correlations between fixation durations and RTs suggest that this oculomotor measure may be related more to stimulus factors than to search processes, while the other findings suggest that parallel-serial search dichotomies are reflected in oculomotor behavior.
Abstract: Two experiments (one using O and Q-like stimuli and the other using colored-oriented bars) investigated the oculomotor behavior accompanying parallel-serial visual search. Eye movements were recorded as participants searched for a target in 5- or 17-item displays. Results indicated the presence of parallel-serial search dichotomies and 2:1 ratios of negative to positive slopes in the number of saccades initiated during both search tasks. This saccade number measure also correlated highly with search times, accounting for up to 67% of the reaction time (RT) variability. Weak correlations between fixation durations and RTs suggest that this oculomotor measure may be related more to stimulus factors than to search processes. A third experiment compared free-eye and fixed-eye searches and found a small RT advantage when eye movements were prevented. Together these findings suggest that parallel-serial search dichotomies are reflected in oculomotor behavior.

241 citations


Journal ArticleDOI
TL;DR: In 3 experiments, sinewave replicas of natural speech sampled from 10 talkers eliminated natural voice quality while preserving idiosyncratic phonetic variation, supporting a revised description of speech perception in which the phonetic properties of utterances serve to identify both words and talkers.
Abstract: Accounts of the identification of words and talkers commonly rely on different acoustic properties. To identify a word, a perceiver discards acoustic aspects of an utterance that are talker specific, forming an abstract representation of the linguistic message with which to probe a mental lexicon. To identify a talker, a perceiver discards acoustic aspects of an utterance specific to particular phonemes, creating a representation of voice quality with which to search for familiar talkers in long-term memory. In 3 experiments, sinewave replicas of natural speech sampled from 10 talkers eliminated natural voice quality while preserving idiosyncratic phonetic variation. Listeners identified the sinewave talkers without recourse to acoustic attributes of natural voice quality. This finding supports a revised description of speech perception in which the phonetic properties of utterances serve to identify both words and talkers.

230 citations


Journal ArticleDOI
TL;DR: The present studies show that focused attention can distort the encoding of nearby positions; it is speculated that the repulsion effect is one of the costs involved in the allocation of more resources to the focus of attention.
Abstract: Attention was focused at a specific location either by a briefly flashed cue (cue-induced attention) or by a voluntary effort (voluntary attention). In both cases, briefly presented probes appeared displaced away from the focus of attention. The results showed that the effect of cue-induced attention was transient whereas the effect of voluntary attention was long lasting. The repulsion effect was most evident with brief probe durations (< 200 ms). Control experiments ruled out nonattentional hypotheses based on classic figural aftereffects and apparent motion. Although a number of studies have demonstrated enhancements of visual perception at attended locations, the present studies show that focused attention can distort the encoding of nearby positions. Speculation is offered that the repulsion effect is one of the costs involved in the allocation of more resources to the focus of attention.

200 citations



Journal ArticleDOI
TL;DR: In this article, the transition from reaching using only arm extension to a mode of reaching in which actors used the upper torso to lean forward occurred at closer distances than each actor's absolute critical boundary, beyond which arm-only reaching was no longer afforded.
Abstract: How do people choose an action to satisfy a goal from among the actions that are afforded by the environment? In 3 experiments the action modes used by actors to reach for a block placed at various distances from them were observed. In each experiment, when actors were not restricted in how they could reach for the object, the transition from their reaching using only arm extension to a mode of reaching in which they used the upper torso to lean forward occurred at closer distances than each actor's absolute critical boundary, beyond which the former action was no longer afforded. In Experiments 2 and 3 actors' seated posture was varied so that the effect of postural dynamics on the distance at which actors actually chose to make the transition between action modes, the preferred critical boundary, could be examined. The results are consistent with the proposal that the preferred critical boundary reflects the relative comfort of available modes of reaching.

Journal ArticleDOI
TL;DR: The authors examined the relationship between attentional blink and repetition blindness in rapid serial visual presentation (RSVP) and showed that correct identification of a target interferes with processing of a second target appearing within half a second.
Abstract: The visual system is generally limited in the number of items it can process at any given time. Performance decrements can be observed for reporting multiple targets presented in a sequence. Using a technique where targets are embedded among distractors in rapid serial visual presentation (RSVP), previous studies have shown that correct identification of a target (T1) interferes with processing of a second target (T2) appearing within half a second. This effect on T2 has been called an attentional blink (AB; Raymond, Shapiro, & Arnell, 1992). A similar deficit has been shown for a repeated stimulus appearing in RSVP, an effect called repetition blindness (RB; Kanwisher, 1987). The focus of this article is on examining the relationship between the two deficits. The AB occurs when two different targets are presented

Journal ArticleDOI
TL;DR: The results demonstrated that adjustment takes place over a number of sentences, depending on the compression rate; the findings are discussed with respect to the level of speech processing at which such adjustment might occur.
Abstract: This study investigated the perceptual adjustments that occur when listeners recognize highly compressed speech. In Experiment 1, adjustment was examined as a function of the amount of exposure to compressed speech by use of 2 different speakers and compression rates. The results demonstrated that adjustment takes place over a number of sentences, depending on the compression rate. Lower compression rates required less experience before full adjustment occurred. In Experiment 2, the impact of an abrupt change in talker characteristics was investigated; in Experiment 3, the impact of an abrupt change in compression rate was studied. The results of these 2 experiments indicated that sudden changes in talker characteristics or compression rate had little impact on the adjustment process. The findings are discussed with respect to the level of speech processing at which such adjustment might occur.

Journal ArticleDOI
TL;DR: The relative order in which 4 property classes of haptically perceived surfaces (material, abrupt-surface discontinuity, relative orientation, and continuous 3-D surface contour) become available for processing after initial contact was studied.
Abstract: The relative order in which 4 property classes of haptically perceived surfaces become available for processing after initial contact was studied. The classes included material, abrupt-surface discontinuity, relative orientation, and continuous 3-D surface contour properties. Relative accessibility was evaluated by using the slopes of haptic search functions obtained with a modified version of A. Treisman's (A. Treisman & S. Gormican, 1988) visual pop-out paradigm; the y-intercepts were used to confirm and fine-tune the order of accessibility. Target and distractors differed markedly in terms of their value on a single dimension. The results of 15 experiments show that coarse intensive discriminations are haptically processed early on. In marked contrast, most spatially encoded dimensions become accessible relatively later, sometimes considerably so.

Journal ArticleDOI
TL;DR: Perceived distance, averaged over observers, was accurate out to 15 m under full-cue conditions, and the results show that observers, on average, were accurate in imaginally updating the locations of previously viewed targets.
Abstract: Two triangulation methods for measuring perceived egocentric distance were examined. In the triangulation-by-pointing procedure, the observer views a target at some distance and, with eyes closed, attempts to point continuously at the target while traversing a path that passes by it. In the triangulation-by-walking procedure, the observer views a target and, with eyes closed, traverses a path that is oblique to the target; on command from the experimenter, the observer turns and walks toward the target. Two experiments using pointing and 3 using walking showed that perceived distance, averaged over observers, was accurate out to 15 m under full-cue conditions. For target distances between 15 and 25 m, the evidence indicates slight perceptual underestimation. Results also show that observers, on average, were accurate in imaginally updating the locations of previously viewed targets. The term visual space (or visually perceived space) refers to a perceptual representation of the immediate physical environment that exists independently of any of the particular spatial behaviors it helps to control. Much vision research has been devoted to establishing the functional properties of visual space and the mechanisms that underlie it. A major goal of such research has been to characterize the mapping from physical to visual space under different conditions of information availability, but ultimately the goal must be to predict visual space solely in terms of its sensory inputs and internal determinants (e.g., intrinsic noise, observer assumptions, etc.). Because visual direction is perceived accurately, most space perception research has examined the perception of egocentric distance (i.e., the distance from the object to the observer) and the perception of exocentric distance (i.e., the distance between two targets lying in the same visual direction or, more generally, the distance between any two locations). Because we believe that the perception of egocentric

Journal ArticleDOI
TL;DR: Although target priming and distractor priming both survived the AB, the 2 forms of priming appeared to have different bases: priming by T1 was larger, modulated by backward associative strength, and longer lasting.
Abstract: In 5 experiments, 432 college students viewed lists of words containing 2 targets (Target 1 [T1] and Target 2 [T2]) presented by rapid serial visual presentation at 10 words per second. Identification of T1 caused a 500-ms impairment in the identification of T2 (the attentional blink [AB]). Improved recall of T2 was observed throughout the time course of the AB when T2 was a strong associate of either T1 or a priming distractor (PD). When participants ignored T1, the AB was eliminated, but the amount of priming was not affected. Priming of T2 by PD was temporary (100-200 ms after the onset of PD). Although target priming and distractor priming both survived the AB, the 2 forms of priming appeared to have different bases. In contrast to priming by PD, priming by T1 was larger, modulated by backward associative strength, and longer lasting. Priming and the AB are hypothesized to result from on-line attentional processes, but recall from RSVP lists is also influenced by off-line memory processes.

Journal ArticleDOI
TL;DR: In the late 1970s, Navon (1977) reported evidence supporting the hypothesis that perception proceeds from the global aspect of visual objects to the analysis of more local details, referred to as the global precedence hypothesis.
Abstract: In the late 1970s, Navon (1977) reported evidence supporting the hypothesis that perception proceeds from the global aspect of visual objects to the analysis of more local details. This hypothesis, referred to as the global precedence hypothesis, rests on two observations made when subjects were processing hierarchically structured stimuli, such as a large letter composed of smaller letters. The first, called global advantage, is that the discrimination time for global stimulus features is faster than for local ones. The second, called global-to-local interference, is that it is difficult or even impossible to ignore the global aspect of a stimulus when processing its local aspects, whereas the local attributes do not interfere in the processing of the global aspect (Navon, 1977). The global precedence effect has been replicated in several studies (e.g., Boer & Keuss, 1982; Kimchi, 1988; Navon, 1981; Navon & Norman, 1983; Paquet & Merikle, 1984, 1988; Peressotti, Rumiati, Nicoletti, & Job, 1991; Pomerantz, Sager, & Stoever, 1977). A question remains, however: Is global precedence mediated by purely sensory-perceptual or by postperceptual mechanisms? According to Navon (1991), the priority given to the global information is involuntary, and the source of global advantage must be perceptual if not sensory. In this respect, global precedence was shown to be affected by some properties of the sensory input, such as the visual angle of the stimulus (Lamb & Robertson, 1988), its retinal location (Grice, Canham, & Boroughs, 1983; Pomerantz, 1983), the

Journal ArticleDOI
TL;DR: The results suggest that the visuomotor coordination necessary for controlling sitting is functional prior to the onset of independent sitting but becomes more finely tuned with experience.
Abstract: In this investigation of developmental changes in the coordination of perceived optical flow and postural responses, 4 age groups of infants (5-, 7-, 9-, and 13-month-olds) were tested while seated on a force plate in a "moving room." During each trial the walls oscillated in an anteroposterior direction for 12 s, and the postural sway of the infant was measured. The results revealed that infants perceived the frequency and amplitude of the optical flow and scaled their postural responses to the visual information. This scaling was present even before infants could sit without support, but it showed considerable improvement during the period when infants learn to sit. Taken together, these results suggest that the visuomotor coordination necessary for controlling sitting is functional prior to the onset of independent sitting but becomes more finely tuned with experience.

Journal ArticleDOI
TL;DR: In this article, negative priming is shown to be a dually determined effect produced by inhibitory mechanisms and by a memorial process; older adults showed negative priming only under retrieval-inducing conditions, consistent with the view of deficient inhibitory mechanisms in older adults.
Abstract: Three experiments examined whether negative priming is a dually determined effect produced by inhibitory mechanisms and by a memorial process. Younger adults (Experiment 1) and older adults (Experiments 1-3) were tested in procedures that varied the likelihood of inducing retrieval of the prior trial. This was done by making test-trial target decoding difficult (Experiments 1 & 2) or by making prior information useful on some nonnegative priming trials (Experiment 3). Younger adults demonstrated negative priming under retrieval and nonretrieval conditions, with patterns of performance indicating different sources of negative priming effects. Older adults showed negative priming only under retrieval-inducing conditions, consistent with the view of deficient inhibitory mechanisms for older adults. The data suggest that contextual variables critically determine whether negative priming is largely due to inhibition or to episodic retrieval.

Journal ArticleDOI
TL;DR: In this paper, a new theory on stress and human performance is proposed in which physical and cognitive stressors enhance the level of neuromotor noise in the information-processing system.
Abstract: A new theory on stress and human performance is proposed in which physical and cognitive stressors enhance the level of neuromotor noise in the information-processing system. The neuromotor noise propagates in time and space. A 2nd assumption states that such noise facilitates easy tasks but disrupts complex tasks. In 4 experiments, 2 graphic tasks (number writing and graphic aiming) were crossed with 2 stressors (cognitive stress from a dual-task situation and physical stress in the form of loud auditory noise). Reaction time (RT), movement time (MT), and axial pen pressure were measured. In the RT phase, stress was predicted to lead to decreased RT with easy tasks and to increased RT with difficult tasks. In the execution phase, biomechanical adaptation to enhanced levels of noise was expected to manifest in higher levels of limb stiffness. In all 4 experiments, an increase of axial pen pressure with higher levels of stress evidenced the generality of biomechanical adaptation as a response to stress. RT and MT showed differential effects among the 4 experiments.

Journal ArticleDOI
TL;DR: Results suggest that the conditions proposed by I. Biederman and P. C. Gerhardstein are not generally applicable, the recognition of qualitatively distinct objects often relies on viewpoint-dependent mechanisms, and the molar features of view-based mechanisms appear to be image features rather than geons.
Abstract: Based on the geon structural description approach, I. Biederman and P.C. Gerhardstein (1993) proposed 3 conditions under which object recognition is predicted to be viewpoint invariant. Two experiments are reported that satisfied all 3 criteria yet revealed performance that was clearly viewpoint dependent. Experiment 1 demonstrated that for both sequential matching and naming tasks, recognition of qualitatively distinct objects became progressively longer and less accurate as the viewpoint difference between study and test viewpoints increased. Experiment 2 demonstrated that for single-part objects, larger effects of viewpoint occurred when there was a change in the visible structure, indicating sensitivity to qualitative features in the image, not geon structural descriptions. These results suggest that the conditions proposed by I. Biederman and P.C. Gerhardstein are not generally applicable, the recognition of qualitatively distinct objects often relies on viewpoint-dependent mechanisms, and the molar features of view-based mechanisms appear to be image features rather than geons.

Journal ArticleDOI
TL;DR: A neural model simulates rate-dependent category boundaries that emerge from feedback interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items.
Abstract: What is the neural representation of a speech code as it evolves in time? A neural model simulates data concerning segregation and integration of phonetic percepts. Hearing two phonetically related stops in a VC-CV pair (V = vowel; C = consonant) requires 150 ms more closure time than hearing two phonetically different stops in a VC1-C2V pair. Closure time also varies with long-term stimulus rate. The model simulates rate-dependent category boundaries that emerge from feedback interactions between a working memory for short-term storage of phonetic items and a list categorization network for grouping sequences of items. The conscious speech code is a resonant wave. It emerges after bottom-up signals from the working memory select list chunks which read out top-down expectations that amplify and focus attention on consistent working memory items. In VC1-C2V pairs, resonance is reset by mismatch of C2 with the C1 expectation. In VC-CV pairs, resonance prolongs a repeated C.

Journal ArticleDOI
TL;DR: Predictions concerning the effects of handedness and attention on bimanual coordination were made from a dynamical model that incorporates the body's lateral asymmetry, and confirmation of these predictions suggests that the dynamical perspective offers the possibility of studying handedness and attention without compromising theoretical precision or experimental control.
Abstract: Predictions concerning the effects of handedness and attention on bimanual coordination were made from a dynamical model that incorporates the body's lateral asymmetry. Both handedness and the direction of attention (to the left or right) were manipulated in an in-phase 1:1 frequency-locking task. Left-handed and right-handed participants had to coordinate the planar oscillations of 2 handheld pendulums while 1 pendulum oscillated between spatial targets positioned over either the left or right hand. Predictions from the model were that participants would show a phase lead with the preferred hand, and that, although the phase lead would be greater when attention was directed to the preferred hand, the variability of relative phase would be lower. Confirmation of these predictions suggests that the dynamical perspective offers the possibility of studying handedness and attention without compromising theoretical precision or experimental control.
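
For orientation, dynamical models of this kind typically describe the time evolution of the relative phase φ between the hands with an extended Haken-Kelso-Bunz equation in which a detuning term Δω introduces the lateral asymmetry (standard background for the approach, not an equation quoted from this abstract):

\dot{\phi} = \Delta\omega - a\sin\phi - 2b\sin 2\phi + \sqrt{Q}\,\xi_t

Here Δω reflects the asymmetry between the limbs (e.g., handedness), a and b set the strengths of the in-phase and antiphase attractors, and √Q ξ_t is a stochastic term; a nonzero Δω shifts the stable relative phase away from exactly 0, which is one way a preferred-hand phase lead can arise. The symbols are illustrative, not parameter values reported in the study.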

Journal ArticleDOI
TL;DR: In this paper, normal older participants (aged 60-79 years), with known scores on the Culture Fair Intelligence Test, were tested on four timing tasks (i.e., temporal generalization, bisection, differential threshold, and interval production).
Abstract: Normal older participants (aged 60-79 years), with known scores on the Culture Fair Intelligence Test, were tested on 4 timing tasks (i.e., temporal generalization, bisection, differential threshold, and interval production). The data were related to the theoretical framework of scalar timing theory and ideas about information processing and aging. In general, increasing age and decreasing IQ tended to be associated with increasing variability of judgments of duration, although in all groups events could, on average, be timed accurately. In some cases (e.g., bisection), performance differences between the older participants and students nearly 50 years younger used in other studies were negligible.
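
For context, the scalar timing framework referred to above rests on the scalar property: the standard deviation of timing judgments grows in proportion to the duration being timed, so the coefficient of variation is roughly constant (a general assumption of the framework, not a result specific to this study):

\sigma(T) = k\,T \quad\Longrightarrow\quad \frac{\sigma(T)}{T} = k

On this reading, greater variability with increasing age or decreasing IQ corresponds to a larger k even when mean accuracy is preserved; k is an illustrative symbol, not a value fitted here.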

Journal ArticleDOI
TL;DR: This article investigated whether the Simon effect depends on the orienting of attention: when discrimination of the relevant stimulus attribute was difficult, spatially noncorresponding responses were faster than spatially corresponding responses (a reversed Simon effect), whereas with an easy discrimination or a long stimulus exposure the regular Simon effect was reinstated.
Abstract: We investigated whether the Simon effect depends on the orienting of attention. In Experiment 1, participants were required to execute left-right discriminative responses to 2 patterns that were presented to the left or right of fixation. The 2 patterns were similar, and the discrimination was difficult. A letter at fixation signaled whether the current trial was a catch trial. The results showed a reversal of the Simon effect. That is, spatially noncorresponding responses were faster than spatially corresponding responses. In Experiment 2, the discrimination of the relevant stimulus attribute was easy. In Experiment 3, the discrimination of the relevant stimulus attribute was difficult, but the stimulus exposure time was long. In both experiments, the regular Simon effect was reinstated. In Experiment 4, the letter that signaled a catch trial appeared to the left or right of the imperative stimulus. The Simon effect occurred relative to the position of the letter.

Journal ArticleDOI
TL;DR: The paradigm of the fuzzy logical model of perception (FLMP) is extended to the domain of perception and recognition of facial affect and results indicate that participants evaluated and integrated information from both features to perceive affective expressions.
Abstract: The paradigm of the fuzzy logical model of perception (FLMP) is extended to the domain of perception and recognition of facial affect. Two experiments were performed using a highly realistic computer-generated face varying on 2 features of facial affect. Each experiment used the same expanded factorial design, with 5 levels of brow deflection crossed with 5 levels of mouth deflection, as well as their corresponding half-face conditions, for a total stimulus set of 35 faces. Experiment 1 used a 2-alternative, forced-choice paradigm (either happy or angry), whereas Experiment 2 used 9 rating steps from happy to angry. Results indicate that participants evaluated and integrated information from both features to perceive affective expressions. Both choice probabilities and ratings showed that the influence of 1 feature was greater to the extent that the other feature was ambiguous. The FLMP fit the judgments from both experiments significantly better than an additive model. Our results question previous claims of categorical and holistic perception of affect.
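
For background on the model named above, the standard two-alternative FLMP evaluates each feature to a fuzzy truth value, integrates the values multiplicatively, and decides by relative goodness; with brow support t_b and mouth support t_m toward "happy," this gives (the generic FLMP decision rule, not equations quoted from this abstract):

P(\text{happy}) = \frac{t_b\, t_m}{t_b\, t_m + (1 - t_b)(1 - t_m)}

The additive model it is compared with combines the two feature values by summation or averaging rather than multiplication, which is the contrast behind the fit comparison reported above.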

Journal ArticleDOI
TL;DR: The findings demonstrate the efficacy of top-down information in guiding attention and show that it can be applied flexibly, weighted toward particular target features.
Abstract: Conjunctive visual search is most difficult when distractor types are in equal proportions and gets easier as the proportions diverge (e.g., E. Zohary & S. Hochstein, 1989). This may reflect restriction of search to the feature shared by the target and the less-frequent distractor. Alternatively, such effects could reflect target salience, which varies with distractor ratio. In 2 experiments, 60 participants searched 64-element displays for a conjunctive target among distractors of 2 types in various proportions. Participants were correctly informed (Experiment 1) or misinformed (Experiment 2) about which distractor type would be less frequent on most trials. In both experiments, the distractor-ratio effect was significantly influenced by the information provided to participants. These findings demonstrate the efficacy of top-down information in guiding attention and show that it can be applied flexibly, weighted toward particular target features.

Journal ArticleDOI
TL;DR: The letter substitution errors of 2 dysgraphic subjects who, despite relatively intact oral spelling, made well-formed letter substitutions in written spelling were studied; analyses revealed that the physical similarity of these errors to their intended targets was based on the features of the component strokes of letters rather than on visuospatial characteristics.
Abstract: The letter substitution errors of 2 dysgraphic subjects who, despite relatively intact oral spelling, made well-formed letter substitution errors in written spelling, were studied. Many of these errors bear a general physical similarity to the intended target. Analyses revealed that this similarity apparently was based on the features of the component strokes of letters rather than on visuospatial characteristics. A comparison of these subjects' letter substitution errors with those of 2 other individuals with brain damage, whose damage was at a different level of processing, revealed that the latter subjects' errors are not explicable in terms of stroke-feature similarity. Strong support was found for the computation of multiple representational types in the course of written spelling. This system includes a relatively abstract, effector-independent representational level that specifies the features of the component strokes of

Journal ArticleDOI
TL;DR: This paper found that individual differences in the magnitude of phonological effects in word recognition, as indicated by spelling-to-sound regularity effects on lexical decision latencies and by sensitivity to stimulus length effects, were strongly related to differences in hemispheric lateralization in two cortical regions.
Abstract: This study linked 2 experimental paradigms for the analytic study of reading that heretofore have been used separately. Measures on a lexical decision task designed to isolate phonological effects in the identification of printed words were examined in young adults. The results were related to previously obtained measures of brain activation patterns for these participants derived from functional magnetic resonance imaging (fMRI). The fMRI measures were taken as the participants performed tasks that were designed to isolate orthographic, phonological, and lexical-semantic processes in reading. Individual differences in the magnitude of phonological effects in word recognition, as indicated by spelling-to-sound regularity effects on lexical decision latencies and by sensitivity to stimulus length effects, were strongly related to differences in the degree of hemispheric lateralization in 2 cortical regions.

Journal ArticleDOI
TL;DR: It is tentatively concluded that the mental imagery of action is grounded and calibrated in reference to multiple skeletal degrees of behavioral freedom, and this calibration is a source of systematic error in reachability judgments.
Abstract: An account of the postural determinants of perceived reachability is proposed to explain systematic overestimations of the distance at which an object is perceived to be reachable. In this account, these errors are due to a mapping of the limits of prehensile space onto a person's perceived region of maximum stretchability, in the context of a whole-body engagement. In support of this account, 6 experiments on the judged reachability of both static and dynamic objects are reported. We tentatively conclude that the mental imagery of action is grounded and calibrated in reference to multiple skeletal degrees of behavioral freedom. Accordingly, this calibration is a source of systematic error in reachability judgments.