
Showing papers on "Visual perception published in 1997"


Journal ArticleDOI
25 Apr 1997-Science
TL;DR: Two further experiments suggest that these auditory cortical areas are not engaged when an individual is viewing nonlinguistic facial movements but appear to be activated by silent meaningless speechlike movements (pseudospeech), which supports psycholinguistic evidence that seen speech influences the perception of heard speech at a prelexical stage.
Abstract: Watching a speaker's lips during face-to-face conversation (lipreading) markedly improves speech perception, particularly in noisy conditions. With functional magnetic resonance imaging it was found that these linguistic visual cues are sufficient to activate auditory cortex in normal hearing individuals in the absence of auditory speech sounds. Two further experiments suggest that these auditory cortical areas are not engaged when an individual is viewing nonlinguistic facial movements but appear to be activated by silent meaningless speechlike movements (pseudospeech). This supports psycholinguistic evidence that seen speech influences the perception of heard speech at a prelexical stage.

963 citations


Journal ArticleDOI
TL;DR: It is suggested that this γ-band energy increase reflects both bottom-up (binding of elementary features) and top-down (search for the hidden dog) activation of the same neural assembly coding for the Dalmatian.
Abstract: The coherent representation of an object in the visual system has been suggested to be achieved by the synchronization in the gamma-band (30-70 Hz) of a distributed neuronal assembly. Here we measure variations of high-frequency activity on the human scalp. The experiment is designed to allow the comparison of two different perceptions of the same picture. In the first condition, an apparently meaningless picture that contained a hidden Dalmatian, a neutral stimulus, and a target stimulus (twirled blobs) are presented. After the subject has been trained to perceive the hidden dog and its mirror image, the second part of the recordings is performed (condition 2). The same neutral stimulus is presented, intermixed with the picture of the dog and its mirror image (target stimulus). Early (95 msec) phase-locked (or stimulus-locked) gamma-band oscillations do not vary with stimulus type but can be subdivided into an anterior component (38 Hz) and a posterior component (35 Hz). Nonphase-locked gamma-band oscillations appear with a latency jitter around 280 msec after stimulus onset and disappear in averaged data. They increase in amplitude in response to both target stimuli. They also globally increase in the second condition compared with the first one. It is suggested that this gamma-band energy increase reflects both bottom-up (binding of elementary features) and top-down (search for the hidden dog) activation of the same neural assembly coding for the Dalmatian. The relationships between high- and low-frequency components of the response are discussed, and a possible functional role of each component is suggested.

877 citations
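The distinction this abstract draws between phase-locked and non-phase-locked gamma activity can be illustrated with a small simulation (a hypothetical sketch, not the authors' analysis; all parameters are invented): oscillations whose phase jitters from trial to trial cancel in the averaged waveform but survive when power spectra are averaged per trial.

```python
import numpy as np

rng = np.random.default_rng(1)
fs, f_gamma, n_trials = 1000, 40.0, 200          # sample rate (Hz), gamma freq (Hz), trials
t = np.arange(0, 1.0, 1 / fs)                    # 1-second epochs

# Each simulated trial: a 40 Hz oscillation whose phase jitters from trial
# to trial, mimicking induced (non-phase-locked) gamma activity.
trials = np.array([
    np.sin(2 * np.pi * f_gamma * t + rng.uniform(0, 2 * np.pi))
    for _ in range(n_trials)
])

# Averaging the raw signals cancels the jittered oscillation...
evoked = trials.mean(axis=0)
# ...but averaging per-trial power spectra preserves it.
freqs = np.fft.rfftfreq(t.size, 1 / fs)
power_of_avg = np.abs(np.fft.rfft(evoked)) ** 2
avg_of_power = (np.abs(np.fft.rfft(trials, axis=1)) ** 2).mean(axis=0)

k = int(np.argmin(np.abs(freqs - f_gamma)))      # index of the 40 Hz bin
assert avg_of_power[k] > 10 * power_of_avg[k]    # induced power survives only per trial
```

This is why the authors analyze single-trial measurements in the frequency domain: activity with latency jitter "disappears in averaged data" even though it is robust in every individual trial.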



Journal ArticleDOI
TL;DR: It is proposed that the salience of a part depends on (at least) three factors: its size relative to the whole object, the degree to which it protrudes, and the strength of its boundaries.

489 citations


Journal ArticleDOI
TL;DR: Experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex, which appears to be important for specifying the locations of targets for actions such as eye movements or reaching.
Abstract: The posterior parietal cortex has long been considered an 'association' area that combines information from different sensory modalities to form a cognitive representation of space. However, until recently little has been known about the neural mechanisms responsible for this important cognitive process. Recent experiments from the author's laboratory indicate that visual, somatosensory, auditory and vestibular signals are combined in areas LIP and 7a of the posterior parietal cortex. The integration of these signals can represent the locations of stimuli with respect to the observer and within the environment. Area MSTd combines visual motion signals, similar to those generated during an observer's movement through the environment, with eye-movement and vestibular signals. This integration appears to play a role in specifying the path on which the observer is moving. All three cortical areas combine different modalities into common spatial frames by using a gain-field mechanism. The spatial representations in areas LIP and 7a appear to be important for specifying the locations of targets for actions such as eye movements or reaching; the spatial representation within area MSTd appears to be important for navigation and the perceptual stability of motion signals.

481 citations


Journal ArticleDOI
TL;DR: The results of a psychophysics experiment show that the brain can consistently and quantitatively interpret detail in a stationary image obscured with time varying noise and that both the noise intensity and its temporal characteristics strongly determine the perceived image quality.
Abstract: Stochastic resonance can be used as a measuring tool to quantify the ability of the human brain to interpret noise-contaminated visual patterns. Here we report the results of a psychophysics experiment showing that the brain can consistently and quantitatively interpret detail in a stationary image obscured with time-varying noise, and that both the noise intensity and its temporal characteristics strongly determine the perceived image quality.

470 citations
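The stochastic resonance effect that this experiment exploits can be sketched with a toy simulation (an illustrative model, not the paper's psychophysics setup; the function name and parameters are invented): a subthreshold signal passed through a hard threshold detector is best recovered at an intermediate noise intensity.

```python
import numpy as np

def detection_correlation(noise_sd, n=10000, seed=0):
    """Correlation between a subthreshold signal and the output of a
    hard-threshold detector, for a given added-noise strength."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, 20 * np.pi, n)
    signal = 0.8 * np.sin(t)                 # subthreshold: never crosses 1.0 alone
    noisy = signal + rng.normal(0.0, noise_sd, n)
    detected = (noisy > 1.0).astype(float)   # threshold detector
    if detected.std() == 0:
        return 0.0                           # nothing ever crossed threshold
    return float(np.corrcoef(signal, detected)[0, 1])

# Recovery is poor with too little or too much noise, best in between:
low = detection_correlation(0.05)
mid = detection_correlation(0.5)
high = detection_correlation(5.0)
assert mid > low and mid > high
```

With too little noise the signal never reaches threshold; with too much, threshold crossings become nearly random; at an intermediate noise level the crossings track the hidden signal, which is the resonance the paper measures perceptually.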


Journal ArticleDOI
10 Apr 1997-Nature
TL;DR: It is shown that changes in apparent visual direction anticipate saccades and are not of the same size, or even in the same direction, for all parts of the visual field and there is a compression of visual space sufficient to reduce the spacing and even the apparent number of pattern elements.
Abstract: Saccadic eye movements, in which the eye moves rapidly between two resting positions, shift the position of our retinal images. If our perception of the world is to remain stable, the visual directions associated with retinal sites, and others they report to, must be updated to compensate for changes in the point of gaze. It has long been suspected that this compensation is achieved by a uniform shift of coordinates driven by an extra-retinal position signal, although some consider this to be unnecessary. Considerable effort has been devoted to a search for such a signal and to measuring its time course and accuracy. Here, by using multiple as well as single targets under normal viewing conditions, we show that changes in apparent visual direction anticipate saccades and are not of the same size, or even in the same direction, for all parts of the visual field. We also show that there is a compression of visual space sufficient to reduce the spacing and even the apparent number of pattern elements. The results are in part consistent with electrophysiological findings of anticipatory shifts in the receptive fields of neurons in parietal cortex and superior colliculi.

443 citations


Journal ArticleDOI
TL;DR: This paper tests the hypothesis that scale diagnosticity can determine scale selection for recognition and suggests that a mandatory low-level registration of multiple spatial scales promotes flexible scene encodings, perceptions, and categorizations.

441 citations


Journal ArticleDOI
TL;DR: Visual illusions can provide evidence of object knowledge and working rules for vision, but only when the phenomena are explained and classified; a tentative classification is presented in terms of appearances and kinds of causes.
Abstract: Following Hermann von Helmholtz, who described visual perceptions as unconscious inferences from sensory data and knowledge derived from the past, perceptions are regarded as similar to predictive hypotheses of science, but are psychologically projected into external space and accepted as our most immediate reality. As science advances, there are increasing discrepancies between perceptions and conceptions, which makes it hard to define 'illusion'. Visual illusions can provide evidence of object knowledge and working rules for vision, but only when the phenomena are explained and classified. A tentative classification is presented, in terms of appearances and kinds of causes. The large contribution of knowledge from the past to vision raises the issue of how we recognize the present without confusion from the past. This danger is generally avoided because the present is signalled by real-time sensory inputs, perhaps flagged by qualia of consciousness.

437 citations


Journal ArticleDOI
TL;DR: It is proposed that the study of strategies is a valuable option to obtain insight into early blind persons' spatial impairment and the reasons why vision plays a critical role in spatial cognition are examined.
Abstract: Some studies of the effects of early visual experience on spatial abilities have demonstrated profound spatial deficits in early blind participants, whereas others have not found evidence of deleterious effects of early visual deprivation. The aims of this article are to (a) consider the theoretical background of these studies, (b) take stock of the divergent data, and (c) propose new means of investigation. The authors examine the reasons why vision plays a critical role in spatial cognition. They review the literature data. They also review the factors that could account for the discrepant data and the effects of lack of early visual experience on brain functioning. They propose that the study of strategies is a valuable option for gaining insight into early blind persons' spatial impairment. The ability to move about independently in space, to localize places that cannot be directly perceived because they are hidden or remote, and to plan trajectories on the basis of this knowledge is of great importance in everyday human activities. It is not necessary to refer to sophisticated experimental studies to assert that many of these spatial behaviors depend to a great extent on visual perception. In cases where an object or a place to reach is visible, the movement or trajectory is directly guided by the visual perception of the goal or of conspicuous landmarks associated with it. In many circumstances, however, spatial behavior takes place in larger environments where the goal is not visible. In that case, spatial knowledge must take the form of a representation. The latter may simply consist of remembering a specific route to follow, but this simple means of achieving accurate trajectories lacks adaptive properties (O'Keefe & Nadel, 1978). The most adequate form of spatial representation is that of the topography of the environment beyond perceptual reach.
This representation is acquired either by using symbolic supports (such as reading a map) or by progressively constructing an internal map on the basis of experience (e.g., when one frequently goes shopping in the district in which one resides).

436 citations


Journal ArticleDOI
19 Jun 1997-Nature
TL;DR: It is shown that detection of differences in a simple feature such as orientation is severely impaired by additionally imposing an attentionally demanding rapid serial visual presentation task involving letter identification, demonstrating that attention can be critical even for the detection of so-called ‘preattentive’ features.
Abstract: It is commonly assumed that certain features are so elementary to the visual system that they require no attentional resources to be perceived. Such 'preattentive' features are traditionally identified by visual search performance, in which the reaction time for detecting a feature difference against a set of distractor items does not increase with the number of distractors. This suggests an unlimited capacity for the perception of such features. We provide evidence to the contrary, demonstrating that detection of differences in a simple feature such as orientation is severely impaired by additionally imposing an attentionally demanding rapid serial visual presentation task involving letter identification. The same visual stimuli exhibit non-increasing reaction time versus set-size functions. These results demonstrate that attention can be critical even for the detection of so-called 'preattentive' features.

Journal ArticleDOI
TL;DR: This review focuses on advances in the understanding of the roles played by vision in the control of human locomotion as well as effects of various visual deficits on adaptive control.

Journal ArticleDOI
18 Sep 1997-Nature
TL;DR: It is demonstrated that disparity-selective neurons in V1 signal the disparity of anticorrelated random-dot stereograms, indicating that they do not unambiguously signal stereoscopic depth, and single V1 neurons cannot account for the conscious perception of stereopsis, although combining the outputs of many V1 neuron could solve the matching problem.
Abstract: The identification of brain regions that are associated with the conscious perception of visual stimuli is a major goal in neuroscience [1]. Here we present a test of whether the signals on neurons in cortical area V1 correspond directly to our conscious perception of binocular stereoscopic depth. Depth perception requires that image features on one retina are first matched with appropriate features on the other retina. The mechanisms that perform this matching can be examined by using random-dot stereograms [2], in which the left and right eyes view randomly positioned but binocularly correlated dots. We exploit the fact that anticorrelated random-dot stereograms (in which dots in one eye are matched geometrically to dots of the opposite contrast in the other eye) do not give rise to the perception of depth [3] because the matching process does not find a consistent solution. Anticorrelated random-dot stereograms contain binocular features that could excite neurons that have not solved the correspondence problem. We demonstrate that disparity-selective neurons in V1 signal the disparity of anticorrelated random-dot stereograms, indicating that they do not unambiguously signal stereoscopic depth. Hence single V1 neurons cannot account for the conscious perception of stereopsis, although combining the outputs of many V1 neurons could solve the matching problem. The accompanying paper [4] suggests an additional function for disparity signals from V1: they may be important for the rapid involuntary control of vergence eye movements (eye movements that bring the images on the two foveae into register).

01 Jan 1997
TL;DR: The role of visual perception in the control of human locomotion is discussed in this article, with a focus on advances in our understanding of the roles played by vision in regulating locomotion.
Abstract: This review focuses on advances in our understanding of the roles played by vision in the control of human locomotion. Vision is unique in its ability to provide information about the near and far environment almost instantaneously: this information is used to regulate locomotion on a local level (step-by-step basis) and a global level (route planning). Basic anatomy and neurophysiology of the sensory apparatus, the neural substrate involved in processing this visual input, descending pathways involved in effecting control, and mechanisms for controlling gaze are discussed. Characteristics of visual perception subserving control of locomotion include the following: (a) intermittent visual sampling is adequate for safe travel over various terrains; (b) information about body posture and movement from the visual system is given priority over information from the other two sensory modalities; (c) exteroceptive information about the environment is used primarily in a feedforward sampled control mode rather than an on-line control mode; (d) knowledge acquired through past experience influences the interpretation of the exteroceptive information; (e) exproprioceptive information about limb position and movement is used on-line to fine-tune the swing limb trajectory; (f) exproprioceptive information about self-motion acquired through optic flow is used on-line in a sampled control mode.
Characteristics of locomotor adaptive strategies are: (a) most adaptive strategies can be implemented successfully in one step cycle provided attention is biased towards the visual cues; only steering has to be planned in the previous step; (b) stability requirements constrain the selection of specific avoidance strategies; (c) the response is not localized to a joint or limb: it is global, complex and task specific; (d) response characteristics are dependent upon available response time; (e) effector system dynamics are exploited by the control system to simplify and effectively control swing limb trajectory. Effects of various visual deficits on adaptive control are briefly discussed. © 1997 Elsevier Science B.V.

Journal ArticleDOI
01 Mar 1997-Brain
TL;DR: Whether the loss of phenomenal vision is a necessary consequence of striate cortical destruction and whether this structure is indispensable for conscious sight are much debated questions which need to be tackled experimentally.
Abstract: In man and monkey, absolute cortical blindness is caused by destruction of the optic radiations and/or the primary visual cortex. It is characterized by an absence of any conscious vision, but stimuli presented inside its borders may nevertheless be processed. This unconscious vision includes neuroendocrine, reflexive, indirect and forced-choice responses which are mediated by the visual subsystems that escape the direct cerebral damage and the ensuing degeneration. While extrastriate cortical areas participate in the mediation of the forced-choice responses, a concomitant striate cortical activation does not seem to be necessary for blindsight. Whether the loss of phenomenal vision is a necessary consequence of striate cortical destruction and whether this structure is indispensable for conscious sight are much debated questions which need to be tackled experimentally.

Journal ArticleDOI
09 Jan 1997-Nature
TL;DR: The results demonstrate for the first time that visual neglect is a disorder of directing attention in time, as well as space.
Abstract: When we identify a visual object such as a word or letter, our ability to detect a second object is impaired if it appears within 400 ms of the first. This phenomenon has been termed the attentional blink or dwell time and is a measure of our ability to allocate attention over time (temporal attention). Patients with unilateral visual neglect are unaware of people or objects contralateral to their lesion. They are considered to have a disorder of attending to a particular location in space (spatial attention). Here we examined the non-spatial temporal dynamics of attention in patients, using a protocol for assessing the attentional blink. Neglect patients with right parietal, frontal or basal ganglia strokes had an abnormally severe and protracted attentional blink. When they identified a letter, their awareness of a subsequent letter was significantly diminished for a length of time that was three times as long as for individuals without neglect. Our results demonstrate for the first time that visual neglect is a disorder of directing attention in time, as well as space.

Book
24 Oct 1997
TL;DR: In this book, a theory of hemispheric asymmetries in perception is presented, covering lateralization in simple and complex visual patterns, attention and visual laterality, auditory and speech perception, and a computer implementation of the double filtering by frequency theory.
Abstract: Introduction and historical overview; a theory of hemispheric asymmetries in perception; visual perception: lateralization in simple and complex patterns; attention and visual laterality; auditory perception; speech perception and language; a computer implementation of the double filtering by frequency theory; the DFF theory at work; the two sides of perception.

Journal ArticleDOI
09 Oct 1997-Nature
TL;DR: Perceptual learning of faces or objects enhanced the activity of inferior temporal regions known to be involved in face and object recognition respectively, and led to increased activity in medial and lateral parietal regions that have been implicated in attention and visual imagery.

Abstract: A degraded image of an object or face, which appears meaningless when seen for the first time, is easily recognizable after viewing an undegraded version of the same image. The neural mechanisms by which this form of rapid perceptual learning facilitates perception are not well understood. Psychological theory suggests the involvement of systems for processing stimulus attributes, spatial attention and feature binding, as well as those involved in visual imagery. Here we investigate where and how this rapid perceptual learning is expressed in the human brain by using functional neuroimaging to measure brain activity during exposure to degraded images before and after exposure to the corresponding undegraded versions. Perceptual learning of faces or objects enhanced the activity of inferior temporal regions known to be involved in face and object recognition respectively. In addition, both face and object learning led to increased activity in medial and lateral parietal regions that have been implicated in attention and visual imagery. We observed a strong coupling between the temporal face area and the medial parietal cortex when, and only when, faces were perceived. This suggests that perceptual learning involves direct interactions between areas involved in face recognition and those involved in spatial attention, feature binding and memory recall.

Journal ArticleDOI
TL;DR: Cognitive and sensorimotor maps are separated without motion of target, background, or eye by using an "induced Roelofs effect": a target inside an off-center frame appears biased opposite the direction of the frame.
Abstract: Studies of saccadic suppression and induced motion have suggested separate representations of visual space for perception and visually guided behavior. Because these methods required stimulus motion, subjects might have confounded motion and position. We separated cognitive and sensorimotor maps without motion of target, background, or eye, with an "induced Roelofs effect": a target inside an off-center frame appears biased opposite the direction of the frame. A frame displayed to the left of a subject's center line, for example, will make a target inside the frame appear farther to the right than its actual position. The effect always influences perception, but in half of our subjects it did not influence pointing. Cognitive and sensorimotor maps interacted when the motor response was delayed; all subjects now showed a Roelofs effect for pointing, suggesting that the motor system was being fed from the biased cognitive map. A second experiment showed similar results when subjects made an open-ended cognitive response instead of a five-alternative forced choice. Experiment 3 showed that the results were not due to shifts in subjects' perception of the felt straight-ahead position. In Experiment 4, subjects pointed to the target and judged its location on the same trial. Both measures showed a Roelofs effect, indicating that each trial was treated as a single event and that the cognitive representation was accessed to localize this event in both response modes.

Journal ArticleDOI
09 May 1997-Science
TL;DR: Surprisingly, contrast adaptation barely affected the stimulus-driven modulations in the membrane potential of cortical cells, and did not produce sizable changes in membrane resistance.
Abstract: The firing rate responses of neurons in the primary visual cortex grow with stimulus contrast, the variation in the luminance of an image relative to the mean luminance. These responses, however, are reduced after a cell is exposed for prolonged periods to high-contrast visual stimuli. This phenomenon, known as contrast adaptation, occurs in the cortex and is not present at earlier stages of visual processing. To investigate the cellular mechanisms underlying cortical adaptation, intracellular recordings were performed in the visual cortex of cats, and the effects of prolonged visual stimulation were studied. Surprisingly, contrast adaptation barely affected the stimulus-driven modulations in the membrane potential of cortical cells. Moreover, it did not produce sizable changes in membrane resistance. The major effect of adaptation, evident both in the presence and in the absence of a visual stimulus, was a tonic hyperpolarization. Adaptation affects a class of synaptic inputs, most likely excitatory in nature, that exert a tonic influence on cortical cells.

Journal ArticleDOI
07 Nov 1997-Science
TL;DR: Findings suggest that the prefrontal cortex is functionally compartmentalized with respect to the nature of its inputs.
Abstract: A central issue in cognitive neuroscience concerns the functional architecture of the prefrontal cortex and the degree to which it is organized by sensory domain. To examine this issue, multiple areas of the macaque monkey prefrontal cortex were mapped for selective responses to visual stimuli that are prototypical of the brain's object vision pathway-pictorial representations of faces. Prefrontal neurons not only selectively process information related to the identity of faces but, importantly, such neurons are localized to a remarkably restricted area. These findings suggest that the prefrontal cortex is functionally compartmentalized with respect to the nature of its inputs.

Journal ArticleDOI
TL;DR: Despite inaccurate reports of the Gestalt grouping patterns in the background of primary-task displays (cf. A. Mack, B. Tang, R. Tuma, S. Kahn, & I. Rock, 1992), participants' line-length judgments were clearly affected by illusions formed by the grouped elements, suggesting that Gestalt grouping occurs without attention but that the resulting patterns may not be encoded in memory without attention.
Abstract: Many theories of visual perception assume that before attention is allocated within a scene, visual information is parsed according to the Gestalt principles of organization. This assumption has been challenged by experiments in which participants were unable to identify what Gestalt grouping patterns had occurred in the background of primary-task displays (A. Mack, B. Tang, R. Tuma, S. Kahn, & I. Rock, 1992). In the present study, participants reported which of 2 horizontal lines was longer. Dots in the background, if grouped, formed displays similar to the Ponzo illusion (Experiments 1 and 2) or the Muller-Lyer illusion (Experiment 3). Despite inaccurate reports of what the patterns were, participants' responses on the line-length discrimination task were clearly affected by the 2 illusions. These results suggest that Gestalt grouping does occur without attention but that the patterns thus formed may not be encoded in memory without attention.

Journal ArticleDOI
TL;DR: Fourteen areas were activated in common by both tasks, only 1 of which may not be involved in visual processing (the precentral gyrus); in addition, 2 were activated in perception but not imagery, and 5 were activated in imagery but not perception.

Journal ArticleDOI
TL;DR: Widefield imaging in conjunction with voltage-sensitive dyes is used to record electrical activity from the virtually intact, unanesthetized turtle brain to show that large-scale differences in neuronal timing are present and persistent during visual processing.
Abstract: The computations involved in the processing of a visual scene invariably involve the interactions among neurons throughout all of visual cortex. One hypothesis is that the timing of neuronal activity, as well as the amplitude of activity, provides a means to encode features of objects. The experimental data from studies on cat [Gray, C. M., Konig, P., Engel, A. K. & Singer, W. (1989) Nature (London) 338, 334-337] support a view in which only synchronous (no phase lags) activity carries information about the visual scene. In contrast, theoretical studies suggest, on the one hand, the utility of multiple phases within a population of neurons as a means to encode independent visual features and, on the other hand, the likely existence of timing differences solely on the basis of network dynamics. Here we use widefield imaging in conjunction with voltage-sensitive dyes to record electrical activity from the virtually intact, unanesthetized turtle brain. Our data consist of single-trial measurements. We analyze our data in the frequency domain to isolate coherent events that lie in different frequency bands. Low frequency oscillations (<5 Hz) are seen in both ongoing activity and activity induced by visual stimuli. These oscillations propagate parallel to the afferent input. Higher frequency activity, with spectral peaks near 10 and 20 Hz, is seen solely in response to stimulation. This activity consists of plane waves and spiral-like waves, as well as more complex patterns. The plane waves have an average phase gradient of approximately pi/2 radians/mm and propagate orthogonally to the low frequency waves. Our results show that large-scale differences in neuronal timing are present and persistent during visual processing.


Journal ArticleDOI
TL;DR: Two underlying processes related to cues, orienting (to location) and alerting, are hypothesized and human and animal data suggest that the orienting mechanism may be modulated by the basal forebrain cholinergic system.

Journal ArticleDOI
TL;DR: A detailed neural model is proposed of how lateral geniculate nuclei and the interblob cortical stream through V1 and V2 generate context-sensitive perceptual groupings from visual inputs and suggests a functional role for cortical layers, columns, maps and networks.

Journal ArticleDOI
TL;DR: It is demonstrated that attentional gating within the blink operates only after substantial stimulus processing has already taken place, and is discussed in terms of two forms of visual representation, namely, types and tokens.
Abstract: When people must detect several targets in a very rapid stream of successive visual events at the same location, detection of an initial target induces misses for subsequent targets within a brief period. This attentional blink may serve to prevent interruption of ongoing target processing by temporarily suppressing vision for subsequent stimuli. We examined the level at which the internal blink operates, specifically, whether it prevents early visual processing or prevents quite substantial processing from reaching awareness. Our data support the latter view. We observed priming from missed letter targets, benefiting detection of a subsequent target with the same identity but a different case. In a second study, we observed semantic priming from word targets that were missed during the blink. These results demonstrate that attentional gating within the blink operates only after substantial stimulus processing has already taken place. The results are discussed in terms of two forms of visual representation, namely, types and tokens.

Journal ArticleDOI
31 Jan 1997-Science
TL;DR: Results show that parietal extinction arises only after substantial processing has generated visual surfaces, supporting recent claims that visual attention is surface-based.
Abstract: Unilateral brain damage frequently produces “extinction,” in which patients can detect brief single visual stimuli on either side but are unaware of a contralesional stimulus if presented concurrently with an ipsilesional stimulus. Explanations for extinction have invoked deficits in initial processes that operate before the focusing of visual attention or in later attentive stages of vision. Preattentive vision was preserved in a parietally damaged patient, whose extinction was less severe when bilateral stimuli formed a common surface, even if this required visual filling-in to yield illusory Kanizsa figures or completion of partially occluded figures. These results show that parietal extinction arises only after substantial processing has generated visual surfaces, supporting recent claims that visual attention is surface-based.

Journal ArticleDOI
TL;DR: It is proposed that activation in left parieto-occipital cortex reflects the use of imagery-related visuo-spatial processes to enable the tactile discrimination of orientation.
Abstract: Mental imagery is thought to play a key role in certain aspects of visual perception and to depend on neural activity in visual cortex. We asked whether tactile discrimination of grating orientation, which appears to involve visual mental imagery, recruits visual cortical areas. H2(15)O positron emission tomography was performed in humans during presentation of gratings to the right index fingerpad. Selective attention to grating orientation significantly increased regional cerebral blood flow, relative to a control task involving selective attention to grating dimensions, in a region located in left parieto-occipital cortex. We propose that this activation reflects the use of imagery-related visuo-spatial processes to enable the tactile discrimination of orientation.