Journal ISSN: 1350-6285

Visual Cognition 

Taylor & Francis
About: Visual Cognition is an academic journal published by Taylor & Francis. The journal publishes mainly in the areas of visual search and perception. It has the ISSN identifier 1350-6285. Over its lifetime, 1449 publications have appeared, receiving 52,678 citations.


Papers
Journal Article
TL;DR: This paper argues that focused attention provides spatiotemporal coherence for the stable representation of one object at a time, and that the allocation of attention can be coordinated to create a virtual representation of a scene.
Abstract: One of the more powerful impressions created by vision is that of a coherent, richly detailed world where everything is present simultaneously. Indeed, this impression is so compelling that we tend to ascribe these properties not only to the external world, but to our internal representations as well. But results from several recent experiments argue against this latter ascription. For example, changes in images of real-world scenes often go unnoticed when made during a saccade, flicker, blink, or movie cut. This “change blindness” provides strong evidence against the idea that our brains contain a picture-like representation of the scene that is everywhere detailed and coherent. How then do we represent a scene? It is argued here that focused attention provides spatiotemporal coherence for the stable representation of one object at a time. It is then argued that the allocation of attention can be co-ordinated to create a “virtual representation”. In such a scheme, a stable object representation is formed...

1,050 citations

Journal Article
TL;DR: Three studies manipulated the direction of gaze in a computerized face, which appeared centrally in frontal view during a peripheral letter-discrimination task. Discrimination of peripheral target letters was faster on the side the face gazed towards, even though the seen gaze did not predict target side and participants were asked to ignore the face.
Abstract: This paper seeks to bring together two previously separate research traditions: research on spatial orienting within the visual cueing paradigm and research into social cognition, addressing our tendency to attend in the direction that another person looks. Cueing methodologies from mainstream attention research were adapted to test the automaticity of orienting in the direction of seen gaze. Three studies manipulated the direction of gaze in a computerized face, which appeared centrally in a frontal view during a peripheral letter-discrimination task. Experiments 1 and 2 found faster discrimination of peripheral target letters on the side the computerized face gazed towards, even though the seen gaze did not predict target side, and despite participants being asked to ignore the face. This suggests reflexive covert and/or overt orienting in the direction of seen gaze, arising even when the observer has no motivation to orient in this way. Experiment 3 found faster letter discrimination on the side the computerized face gazed towards even when participants knew that target letters were four times as likely on the opposite side. This suggests that orienting can arise in the direction of seen gaze even when counter to intentions. The experiments illustrate that methods from mainstream attention research can be usefully applied to social cognition, and that studies of spatial attention may profit from considering its social function.

1,010 citations

Journal Article
TL;DR: This paper found that subjects show remarkable agreement in ascribing a wide range of mental states to facial expressions, and that the whole face is more informative than either the eyes or the mouth for the basic emotions.
Abstract: Previous work suggests that a range of mental states can be read from facial expressions, beyond the “basic emotions”. Experiment 1 tested this in more detail, by using a standardized method, and by testing the role of face parts (eyes vs. mouth vs. the whole face). Adult subjects were shown photographs of an actress posing 10 basic emotions (happy, sad, angry, afraid, etc.) and 10 complex mental states (scheme, admire, interest, thoughtfulness, etc.). For each mental state, each subject was shown the whole face, the eyes alone, or the mouth alone, and was given a forced choice between two mental state terms. Results indicated that: (1) subjects show remarkable agreement in ascribing a wide range of mental states to facial expressions; (2) for the basic emotions, the whole face is more informative than either the eyes or the mouth; (3) for the complex mental states, seeing the eyes alone produced significantly better performance than seeing the mouth alone, and was as informative as the whole face. In Experim...

906 citations

Journal Article
TL;DR: This work has shown that if attention subserves action control, object files may include action-related information as well as stimulus features, and feature binding may not be restricted to stimulus features but may also include features of the responses made to the respective stimulus.
Abstract: One of the main functions that visual attention serves in perception and action is feature binding; that is, integrating all information that belongs to an object. The outcome of this integration has been called an “object file”, a hypothetical memory structure coding episodic combinations of stimulus features. Action-oriented approaches to attention, however, suggest that such a purely perceptual or perceptually derived structure may be incomplete: If attention subserves action control, object files may include action-related information as well. That is, feature binding may not be restricted to stimulus features but may also include features of the responses made to the respective stimulus. In three experiments, subjects performed simple, already prepared left- or right-key responses (R1) to the mere presence of “Go” signals (S1) that varied randomly in form, colour, and location. Shortly after the prepared response, a binary choice reaction (R2) to the form or colour of a second stimulus (S2) was made. The results ...

705 citations

Journal Article
TL;DR: It is argued that these findings are consistent with the operation of a reflexive, stimulus-driven, or exogenous orienting of an observer's visual attention in response to cues to another's direction of gaze.
Abstract: Four experiments investigate the hypothesis that cues to the direction of another's social attention produce a reflexive orienting of an observer's visual attention. Participants were asked to make a simple detection response to a target letter which could appear at one of four locations on a visual display. Before the presentation of the target, one of these possible locations was cued by the orientation of a digitized head stimulus, which appeared at fixation in the centre of the display. Uninformative and to-be-ignored cueing stimuli produced faster target detection latencies at cued relative to uncued locations, but only when the cues appeared 100 msec before the onset of the target (Experiments 1 and 2). The effect was uninfluenced by the introduction of a to-be-attended and relatively informative cue (Experiment 3), but was disrupted by the inversion of the head cues (Experiment 4). It is argued that these findings are consistent with the operation of a reflexive, stimulus-driven or exogenous orient...

538 citations

Performance Metrics
No. of papers from the Journal in previous years
Year    Papers
2023    14
2022    39
2021    78
2020    44
2019    55
2018    59