
Showing papers on "Perceptual learning" published in 2002


Journal ArticleDOI
05 Dec 2002-Neuron
TL;DR: It is proposed that explicit vision advances in reverse hierarchical direction, as shown for perceptual learning, and feature search "pop-out" is attributed to high areas, where large receptive fields underlie spread attention detecting categorical differences.

1,348 citations


Reference BookDOI
15 Jul 2002
TL;DR: A four-volume reference work spanning sensation and perception (Vol. 1), memory and cognition (Vol. 2), learning and motivation (Vol. 3), and methodology (Vol. 4).
Abstract: Vol. 1 1. Neural Basis of Vision. 2. Color Vision. 3. Depth Perception. 4. Perception of Visual Motion. 5. Perceptual Organization in Vision. 6. Attention. 7. Visual Object Recognition. 8. Motor Control. 9. Neural Basis of Audition. 10. Auditory Perception and Cognition. 11. Music Perception and Cognition. 12. Speech Perception. 13. Neural Basis of Haptic Perception. 14. Touch and Haptics. 15. Perception of Pain and Temperature. 16. Taste. 17. Olfaction. Vol. 2 1. Kinds of Memory. 2. Models of Memory. 3. Cognitive Neuroscience. 4. Spatial Cognition. 5. Knowledge Representation. 6. Psycholinguistics. 7. Language Processing. 8. Problem Solving. 9. Reasoning. 10. Decision Making. 11. Concepts & Categorization. 12. Cognitive Development. 13. Culture & Cognition. Vol. 3 1. Associative Structures in Pavlovian and Instrumental Conditioning. 2. Learning: Laws and Models of Basic Conditioning. 3. Reinforcement Learning. 4. Neural Analysis of Learning in Simple Systems. 5. Learning Mutants. 6. Learning Instincts. 7. Perceptual Learning. 8. Spatial Learning. 9. Temporal Learning. 10. Role of Learning in Cognitive Development. 11. Language Acquisition. 12. Role of Learning in the Operation of Motivational Systems. 13. Emotional Plasticity. 14. Anatomy of Motivation. 15. Hunger Energy Homeostasis. 16. Thirst and Water-Salt Appetite. 17. Reproductive Motivation. 18. Social Behavior. 19. Addiction. Vol. 4 1. Representational Measurement Theory. 2. Signal Detection Theory. 3. Psychophysical Scaling. 4. Cognitive Neuropsychology. 5. Functional Brain Imaging. 6. Neural Network Modeling. 7. Parallel and Serial Processing. 8. Methodology and Statistics in Single-Subject Experiments. 9. Analysis, Interpretation, and Visual Presentation of Experimental Data. 10. Meta-Analysis. 11. Mathematical Modeling. 12. Analysis of Response Time Distributions. 13. Testing and Measurement. 14. Personality and Individual Differences. 15. Electrophysiology of Attention. 16. Single vs. 
Multiple Systems of Memory and Learning. 17. Infant Cognition. 18. Aging and Cognition.

878 citations


Journal ArticleDOI
TL;DR: The authors examined the processes by which perceptual mechanisms become attuned to the contingencies of affective signals in the environment, and measured the sequential, content-based properties of feature detection in emotion recognition processes.
Abstract: The present research examines visual perception of emotion in both typical and atypical development. To examine the processes by which perceptual mechanisms become attuned to the contingencies of affective signals in the environment, the authors measured the sequential, content-based properties of feature detection in emotion recognition processes. To evaluate the role of experience, they compared typically developing children with physically abused children, who were presumed to have experienced high levels of threat and hostility. As predicted, physically abused children accurately identified facial displays of anger on the basis of less sensory input than did controls, which suggests that physically abused children have facilitated access to representations of anger. The findings are discussed in terms of experiential processes in perceptual learning.

430 citations


Journal ArticleDOI
TL;DR: Using functional magnetic resonance imaging in humans, neural activity is measured 24 h after a single session of intensive monocular training on visual texture discrimination to provide a direct demonstration of learning-dependent reorganization at early processing stages in the visual cortex of adult humans.
Abstract: Visual texture discrimination has been shown to induce long-lasting behavioral improvement restricted to the trained eye and trained location in visual field [Karni, A. & Sagi, D. (1991) Proc. Natl. Acad. Sci. USA 88, 4966-4970]. We tested the hypothesis that such learning involves durable neural modifications at the earliest cortical stages of the visual system, where eye specificity, orientation, and location information are mapped with highest resolution. Using functional magnetic resonance imaging in humans, we measured neural activity 24 h after a single session of intensive monocular training on visual texture discrimination, performed in one visual quadrant. Within-subject comparisons between trained and untrained eye for targets presented within the same quadrant revealed higher activity in a corresponding retinotopic area of visual cortex. Functional connectivity analysis showed that these learning-dependent changes were not associated with an increased engagement of other brain areas remote from early visual cortex. We suggest that these new data are consistent with recent proposals that the cellular mechanisms underlying this type of perceptual learning may involve changes in local connections within primary visual cortex. Our findings provide a direct demonstration of learning-dependent reorganization at early processing stages in the visual cortex of adult humans.

394 citations


Journal ArticleDOI
TL;DR: The authors found that predictive relationships yielded better learning for sequentially presented auditory stimuli and for simultaneously presented visual stimuli, but no such advantage was found for sequentially presented visual stimuli, which suggests that constraints on learning mechanisms that mirror the structure of natural languages are not tailored solely for language learning.

339 citations


Journal ArticleDOI
TL;DR: It is shown that functional neuroimaging can be used to test for interactions between bottom-up and top-down inputs to an area, pointing toward the prevalence of top-down influences and the plausibility of generative models of sensory brain function.

302 citations


Journal ArticleDOI
TL;DR: Pooling models suggest that the behavioral improvement was accomplished with a task-dependent and orientation-selective pooling of unaltered signals from early visual neurons, suggesting that, even for training with stimuli suited to the selectivities found in early areas of visual cortex, behavioral improvements can occur in the absence of pronounced changes in the physiology of those areas.
Abstract: Performance in visual discrimination tasks improves with practice. Although the psychophysical parameters of these improvements have suggested the involvement of early areas in visual cortex, there...

301 citations


Journal ArticleDOI
TL;DR: The results indicate that gains in performance on the discrimination of two complex auditory patterns are accompanied by different learning-dependent neurophysiological events evolving within different time frames, supporting the hypothesis that fast and slow neural changes underlie the acquisition of improved perception.
Abstract: Improvement in perception takes place within the training session and from one session to the next. The present study aims at determining the time course of perceptual learning as revealed by changes in auditory event-related potentials (ERPs) reflecting preattentive processes. Subjects were trained to discriminate two complex auditory patterns in a single session. ERPs were recorded just before and after training, while subjects read a book and ignored stimulation. ERPs showed a negative wave called mismatch negativity (MMN)-which indexes automatic detection of a change in a homogeneous auditory sequence-just after subjects learned to consciously discriminate the two patterns. ERPs were recorded again 12, 24, 36, and 48 h later, just before testing performance on the discrimination task. Additional behavioral and neurophysiological changes were found several hours after the training session: an enhanced P2 at 24 h followed by shorter reaction times, and an enhanced MMN at 36 h. These results indicate that gains in performance on the discrimination of two complex auditory patterns are accompanied by different learning-dependent neurophysiological events evolving within different time frames, supporting the hypothesis that fast and slow neural changes underlie the acquisition of improved perception.

257 citations


Journal ArticleDOI
TL;DR: In this article, the authors compared perceptual learning in 16 psychophysical studies, ranging from low-level spatial frequency and orientation discrimination tasks to high-level object and face recognition tasks, and found that the amount of learning varies widely between different tasks.
Abstract: We compared perceptual learning in 16 psychophysical studies, ranging from low-level spatial frequency and orientation discrimination tasks to high-level object and face-recognition tasks. All studies examined learning over at least four sessions and were carried out foveally or using free fixation. Comparison of learning effects across this wide range of tasks demonstrates that the amount of learning varies widely between different tasks. A variety of factors seems to affect learning, including the number of perceptual dimensions relevant to the task, external noise, familiarity, and task complexity.

236 citations


Journal ArticleDOI
01 Oct 2002-Brain
TL;DR: It is found that some RD subjects have generally impaired perceptual skills, and the "magnocellular" level of description did not capture the nature of the perceptual difficulties in any of the RD individuals assessed by us.
Abstract: The magnocellular theory is a prominent, albeit controversial view asserting that many reading disabled (RD) individuals suffer from a specific impairment within the visual magnocellular pathway. In order to assess the validity of this theory we tested its two basic predictions. The first is that a subpopulation of RD subjects will show impaired performance across a broad range of psychophysical tasks relying on magnocellular functions. The second is that this subpopulation will not be consistently impaired across tasks that do not rely on magnocellular functions. We defined a behavioural criterion for magnocellular function, which incorporates performance in flicker detection, detection of drifting gratings (at low spatial frequencies), speed discrimination and detection of coherent dot motion. We found that some RD subjects (six out of 30) had impaired magnocellular function. Nevertheless, these RD subjects were also consistently impaired on a broad range of other perceptual tasks. The performance of the other subgroup of RD subjects on magnocellular tasks did not differ from that of controls. However, they did show impaired performance in both visual and auditory non‐magnocellular tasks requiring fine frequency discriminations. The stimuli used in these tasks were neither modulated in time nor briefly presented. We conclude that some RD subjects have generally impaired perceptual skills. Many RD subjects have more specific perceptual deficits; however, the ‘magnocellular’ level of description did not capture the nature of the perceptual difficulties in any of the RD individuals assessed by us.

225 citations


Journal ArticleDOI
TL;DR: Exposure to task-irrelevant motion improved sensitivity to the local motion directions within the stimulus, which are processed at low levels of the visual system; these results indicate that when attentional influence is limited, lower-level motion processing is more receptive to long-term modification than higher-level motion processing in the visual cortex.
Abstract: Simple exposure is sufficient to sensitize the human visual system to a particular direction of motion, but the underlying mechanisms of this process are unclear. Here, in a passive perceptual learning task, we found that exposure to task-irrelevant motion improved sensitivity to the local motion directions within the stimulus, which are processed at low levels of the visual system. In contrast, task-irrelevant motion had no effect on sensitivity to the global motion direction, which is processed at higher levels. The improvement persisted for at least several months. These results indicate that when attentional influence is limited, lower-level motion processing is more receptive to long-term modification than higher-level motion processing in the visual cortex.

Journal ArticleDOI
TL;DR: Evidence is provided that supports the possibility of learned categorical perception (CP) and the data are consistent with the possibility that language may shape color perception and suggest a plausible mechanism for the linguistic relativity hypothesis.
Abstract: Color perception can be categorical: Between-category discriminations are more accurate than equivalent within-category discriminations. The effects could be inherited, learned, or both. The authors provide evidence that supports the possibility of learned categorical perception (CP). Experiment 1 demonstrated that observers' color discrimination is flexible and improves through repeated practice. Experiment 2 demonstrated that category learning simulates effects of "natural" color categories on color discrimination. Experiment 3 investigated the time course of acquired CP. Experiment 4 found that CP effects are acquired through hue- and lightness-based category learning and obtained interesting data on the dimensional perception of color. The data are consistent with the possibility that language may shape color perception and suggest a plausible mechanism for the linguistic relativity hypothesis.
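The categorical-perception effect described here is usually quantified as the accuracy advantage for cross-category pairs over equally spaced within-category pairs. A minimal sketch of that index (function name and accuracy values are hypothetical, not taken from the study):

```python
def cp_index(between_acc, within_acc):
    """Categorical-perception effect: mean discrimination accuracy for
    cross-category pairs minus that for equally spaced within-category
    pairs. A positive value indicates categorical perception."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(between_acc) - mean(within_acc)

# Hypothetical post-training accuracies for equally spaced hue pairs.
effect = cp_index(between_acc=[0.90, 0.88], within_acc=[0.75, 0.79])
# effect > 0 -> a between-category advantage, the signature of learned CP
```

Comparing this index before and after category training is one way to test whether the between-category advantage is acquired rather than inherited.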

Journal ArticleDOI
TL;DR: Attention-to-dimension (A2D) models of perceptual learning are used to study effects of training on the perception of an unfamiliar phonetic contrast.
Abstract: A class of selective attention models often applied to speech perception is used to study effects of training on the perception of an unfamiliar phonetic contrast. Attention-to-dimension (A2D) models of perceptual learning assume that the dimensions that structure listeners’ perceptual space are constant and that learning involves only the reweighting of existing dimensions to emphasize or de-emphasize different sensory dimensions. Multidimensional scaling is used to identify the acoustic–phonetic dimensions listeners use before and after training to recognize the 3 classes of Korean stop consonants. Results suggest that A2D models can account for some observed restructuring of listeners’ perceptual space, but listeners also show evidence of directing attention to a previously unattended dimension of phonetic contrast. Recently, speech researchers have begun to make use of perceptual classification models that stem from the generalized context model (GCM) of perceptual learning and categorization developed by Nosofsky (1986). This model has particular application to phonetic learning (acquisition of new phonetic categories) in the context of first and second language acquisition (e.g., see Jusczyk, 1994, 1997; Kuhl & Iverson, 1995; Pisoni, 1997), although it is usually applied as a post hoc explanation of experimental results. This model basically assumes that categorization can be understood within a spatial metaphor (see Shepard, 1957, 1974; but also Tversky, 1977; Tversky & Gatti, 1982) in which sensory attributes of stimuli are represented as the dimensional structure of a categorization space. In broad terms, learning shifts attention to dimensions relevant for classification and away from dimensions that are irrelevant. The operations of attending and ignoring are formalized as a stretching or shrinking of the dimensions to represent shifts of attention to or away from dimensions of categorization. 
The GCM framework seems to fit with some general patterns of findings in perceptual learning of speech (see Pisoni, Lively, & Logan, 1994). More importantly, the GCM formalizes a theory of selective attention, and therefore applying it to phonetic learning provides a concrete cognitive model to describe phenomena that are commonly termed attentional without further clarification (see especially discussions by Jusczyk, 1994; Pisoni et al.,
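The attention-weighting mechanism at the core of the GCM can be sketched in a few lines: similarity decays exponentially with an attention-weighted distance, and "stretching" a dimension corresponds to raising its weight. This is a minimal illustration under the standard Nosofsky (1986) formulation, not the authors' implementation; the stimulus values and weights are hypothetical:

```python
import math

def gcm_similarity(x, y, weights, c=1.0, r=1.0):
    """GCM similarity (Nosofsky, 1986): similarity decays exponentially
    with attention-weighted Minkowski distance. Raising a dimension's
    weight 'stretches' it, so differences along it count for more."""
    d = sum(w * abs(a - b) ** r for w, a, b in zip(weights, x, y)) ** (1 / r)
    return math.exp(-c * d)

# Hypothetical pair of stimuli differing only on dimension 0.
x, y = (0.0, 0.0), (1.0, 0.0)
s_neutral = gcm_similarity(x, y, weights=(0.5, 0.5))  # exp(-0.5) ~ 0.61
s_focused = gcm_similarity(x, y, weights=(0.9, 0.1))  # exp(-0.9) ~ 0.41
# Shifting attention to the contrastive dimension lowers similarity,
# i.e. the same physical pair becomes easier to discriminate.
assert s_focused < s_neutral
```

In A2D terms, training reweights the existing dimensions; the open question raised by these results is whether learners can also recruit a previously unattended dimension, which reweighting alone cannot capture.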

Journal ArticleDOI
TL;DR: It is concluded that induced GBRs might represent a signature of synchronized neural activity in a Hebbian cell assembly, activated by the fragmented picture after perceptual learning took place.
Abstract: Fragmented pictures of an object, which appear meaningless when seen for the first time, can easily be identified after the presentation of an unfragmented version of the same picture. The neuronal mechanism for such a rapid perceptual learning phenomenon is largely unknown. Recently, induced gamma band responses (GBRs) have been discussed as a possible physiological correlate of activity in cell assemblies formed by learning. The present study was designed to investigate the modulation of induced GBRs in a perceptual learning task by using a 128-channel EEG montage. In the first sequence of the experiment, fragmented pictures from the Snodgrass and Vanderwart inventory were presented. The fragmentation of the pictures was selected such that subjects were unable to identify them. In the second experimental sequence - the perceptual learning sequence - half of the pictures were displayed in their unfragmented version. In the third sequence, all pictures were presented again in the fragmented version. Now, subjects had to rate whether or not they could identify the images. Results showed an increase in spectral gamma power at parietal electrode sites for identified pictures. In addition, neural activity in the gamma band was highly synchronized between posterior electrodes. For pictures not presented in their complete version, we found no such pattern in the third sequence. From our results, we concluded that induced GBRs might represent a signature of synchronized neural activity in a Hebbian cell assembly, activated by the fragmented picture after perceptual learning took place. No difference between identified and unidentified pictures was found in the visual evoked potential in the same time range and in the evoked GBR in the same frequency range as the induced response.

Journal ArticleDOI
TL;DR: Assessment of behavioral discrimination of nearly novel odorants and the results demonstrate that associative conditioning can enhance olfactory acuity for odors that are the same as or similar to the learned odorant, but not for odor dissimilar to thelearned odorant.
Abstract: Perceptual learning has been demonstrated in several thalamocortical sensory systems wherein experience enhances sensory acuity for trained stimuli. This perceptual learning is believed to be dependent on changes in sensory cortical receptive fields. Sensory experience and learning also modifies receptive fields and neural response patterns in the mammalian olfactory system; however, to date there has been little reported evidence of learned changes in behavioral olfactory acuity. The present report used a bradycardial orienting response and cross-habituation paradigm that allowed assessment of behavioral discrimination of nearly novel odorants, and then used the same paradigm to examine odorant discrimination after associative olfactory conditioning with similar or dissimilar odorants. The results demonstrate that associative conditioning can enhance olfactory acuity for odors that are the same as or similar to the learned odorant, but not for odors dissimilar to the learned odorant. Furthermore, scopolamine injected before associative conditioning can block the acquisition of this learned enhancement in olfactory acuity. These results could have important implications for mechanisms of olfactory perception and memory, as well as for correlating behavioral olfactory acuity with observed spatial representations of odorant features in the olfactory system.

Journal Article
TL;DR: In this article, the authors explore a brand of scepticism about perceptual experience that takes its start from recent work in psychology and philosophy of mind on change blindness and related phenomena, and show how this problem can be addressed by drawing on an enactive or sensorimotor approach to perceptual consciousness.
Abstract: In this paper I explore a brand of scepticism about perceptual experience that takes its start from recent work in psychology and philosophy of mind on change blindness and related phenomena. I argue that the new scepticism rests on a problematic phenomenology of perceptual experience. I then consider a strengthened version of the sceptical challenge that seems to be immune to this criticism. This strengthened sceptical challenge formulates what I call the problem of perceptual presence. I show how this problem can be addressed by drawing on an enactive or sensorimotor approach to perceptual consciousness. Our experience of environmental detail consists in our access to that detail thanks to our possession of practical knowledge of the way in which what we do and sensory stimulation depend on each other.

Journal ArticleDOI
TL;DR: In two experiments it is found that contrast sensitivity increases following extensive practice at detecting briefly presented sinusoidal luminance gratings and that learning is maintained after six months.

Journal ArticleDOI
TL;DR: In this paper, the capacity for semantic (fact) learning in the profoundly amnesic patient E.P., who has extensive damage limited primarily to the medial temporal lobe, was studied.
Abstract: Most amnesic patients with damage to the medial temporal lobe retain some capacity to learn new information about facts and events. In many cases, the learning appears to depend on a residual ability to acquire conscious (declarative) knowledge. We have studied the capacity for semantic (fact) learning in the profoundly amnesic patient E.P., who has extensive damage limited primarily to the medial temporal lobe. E.P. was presented with factual information (novel three-word sentences) during 24 study sessions across 12 weeks. E.P. performed much more poorly than controls but demonstrated unmistakable improvement across the sessions, achieving after 12 weeks a score of 18.8% correct on a cued-recall test and 64.6% correct on a two-alternative, forced-choice test. Unlike controls, E.P.'s learning was not accompanied by conscious knowledge about which answers were correct. He assigned the same confidence ratings to his correct answers as his incorrect answers. Moreover, on the forced-choice test his response times were identical for correct and incorrect responses. Furthermore, unlike controls, he could not respond correctly when the second word in each sentence was replaced by a synonym. Thus, what E.P. learned was rigidly organized, unavailable as conscious knowledge, and in all respects exhibited the characteristics of nondeclarative memory. Thus, factual information, which is ordinarily learned as declarative (conscious) knowledge and with the participation of the medial temporal lobe, can be acquired as nondeclarative memory, albeit very gradually and in a form that is outside of awareness and that is not represented as factual knowledge. We suggest that E.P.'s learning depended on a process akin to perceptual learning and occurred directly within neocortex.

Journal ArticleDOI
TL;DR: The results indicated that pitch discrimination learning is, at least to some extent, timbre-specific and cannot be viewed as a reduction of internal noise that would directly affect the output of a neural device extracting pitch from both pure tones and complex tones including low-rank harmonics.
Abstract: This paper reports two experiments concerning the stimulus specificity of pitch discrimination learning. In experiment 1, listeners were initially trained, during ten sessions (about 11 000 trials), to discriminate a monaural pure tone of 3000 Hz from ipsilateral pure tones with slightly different frequencies. The resulting perceptual learning (improvement in discrimination thresholds) appeared to be frequency-specific since, in subsequent sessions, new learning was observed when the 3000-Hz standard tone was replaced by a standard tone of 1200 Hz, or 6500 Hz. By contrast, a subsequent presentation of the initial tones to the contralateral ear showed that the initial learning was not, or was only weakly, ear-specific. In experiment 2, training in pitch discrimination was initially provided using complex tones that consisted of harmonics 3–7 of a missing fundamental (near 100 Hz for some listeners, 500 Hz for others). Subsequently, the standard complex was replaced by a standard pure tone with a frequency ...


Journal ArticleDOI
03 Jan 2002-Neuron
TL;DR: It is proposed that eye-position signals can be exploited by visual cortex as classical conditioning stimuli, enabling the perceptual learning of systematic relationships between point of regard and the structure of the visual environment.

Journal ArticleDOI
TL;DR: Results provide the first evidence that embryos are sensitive to redundant, bimodal information and that it can facilitate learning during the prenatal period.
Abstract: Information presented redundantly and in temporal synchrony across sensory modalities (intersensory redundancy) selectively recruits attention and facilitates perceptual learning in human infants. This comparative study examined whether intersensory redundancy also facilitates perceptual learning prenatally. The authors assessed quail (Colinus virginianus) embryos’ ability to learn a maternal call when it was (a) unimodal, (b) concurrent but asynchronous with patterned light, or (c) redundant and synchronous with patterned light. Chicks’ preference for the familiar over a novel maternal call was assessed 24 hr following hatching. Chicks receiving redundant, synchronous stimulation as embryos learned the call 4 times faster than those who received unimodal exposure. Chicks who received asynchronous bimodal stimulation showed no evidence of learning. These results provide the first evidence that embryos are sensitive to redundant, bimodal information and that it can facilitate learning during the prenatal period.

Journal ArticleDOI
TL;DR: The enhanced behavioral and neurophysiological sensitivity found after training indicates a strong relationship between learning and (plastic) changes in the cortical substrate.
Abstract: In this magnetoencephalographic (MEG) study, we examined with high temporal resolution the traces of learning in the speech-dominant left-hemispheric auditory cortex as a function of newly trained mora-timing. In Japanese, the "mora" is a temporal unit that divides words into almost isochronous segments (e.g., na-ka-mu-ra and to-o-kyo-o each comprises four mora). Changes in the brain responses of a group of German and Japanese subjects to differences in the mora structure of Japanese words were compared. German subjects performed a discrimination training in 10 sessions of 1.5 h each day. They learned to discriminate Japanese pairs of words (in a consonant, anni-ani; and a vowel, kiyo-kyo, condition), where the second word was shortened by one mora in eight steps of 15 msec each. A significant increase in learning performance, as reflected by behavioral measures, was observed, accompanied by a significant increase of the amplitude of the Mismatch Negativity Field (MMF). The German subjects' hit rate for detecting durational deviants increased by up to 35%. Reaction times and MMF latencies decreased significantly across training sessions. Japanese subjects showed a more sensitive MMF to smaller differences. Thus, even in young adults, perceptual learning of non-native mora-timing occurs rapidly and deeply. The enhanced behavioral and neurophysiological sensitivity found after training indicates a strong relationship between learning and (plastic) changes in the cortical substrate.

Journal ArticleDOI
TL;DR: The PFC is specifically involved in explicit and implicit motor sequence learning and different PFC regions may be selectively involved in such learning depending on the cognitive demands of the sequential task.
Abstract: Objective: (1) To verify whether the prefrontal cortex (PFC) is specifically involved in visuomotor sequence learning as opposed to other forms of motor learning and (2) to establish the role of executive functions in visuomotor sequence learning. Background: Visuomotor skill learning depends on the integrity of the premotor and parietal cortex; the prefrontal cortex, however, is essential when the learning of a sequence is required. Methods: We studied 25 patients with PFC lesions and 86 controls matched for age and educational level. Participants performed: (1) a Pursuit Tracking Task (PTT), composed of a random tracking task (perceptual learning) and a pattern tracking task (explicit motor sequence learning with learning indicated by the decrease in mean root square error across trial blocks), (2) a 12-item sequence version of a serial reaction time task (SRTT) with specific implicit motor sequence learning indicated by the rebound increase in response time when comparing the last sequence block with the next random block, and (3) a neuropsychological battery that assessed executive functions. Results: PFC patients were impaired in sequence learning on the pattern tracking task of the PTT and on the SRTT as compared to controls, but performed normally on the PTT random tracking task. Learning on the PTT did not correlate with learning on the SRTT. PTT performance correlated with planning functions while SRTT performance correlated with working memory capacity. Conclusions: The PFC is specifically involved in explicit and implicit motor sequence learning. Different PFC regions may be selectively involved in such learning depending on the cognitive demands of the sequential task.

Journal ArticleDOI
TL;DR: A developmental systems approach is proposed that holds that the development of intersensory integration consists of the heterochronous emergence of heterogeneous perceptual skills.

Journal ArticleDOI
TL;DR: The model proposes how perceptual learning, opponent processing, and habituation at both monocular and binocular surface representations are involved, including early thalamocortical sites, and explains the anomalous ME utilizing these multiple processing sites.

Book ChapterDOI
26 Sep 2002
TL;DR: The study of human face processing has advanced considerably in recent years, from a collection of isolated empirical facts and anecdotal observations to a relatively coherent view of the complexity and diversity of the problems tackled by a human observer when confronted with a face.
Abstract: The study of human face processing has advanced considerably in recent years, from consisting of a collection of isolated empirical facts and anecdotal observations to a relatively coherent view of the complexity and diversity of the problems tackled by a human observer when confronted with a face. This rapid progress can be traced to the proposal of comprehensive theories of face processing (cf. Ellis, 1975, 1986; Hay and Young, 1982; Bruce and Young, 1986), which have provided a theoretical framework for investigating human face processing within functional sub-systems. These models have had much to say about the kinds of tasks subserved by the human face processing system (e.g. naming faces, extracting visual categorical information such as sex and age, etc.), and about the co-ordination of processing among these tasks (e.g. Young et al., 1986). They have also provided important constraints for making sense of neuropsychological data on patients with various face processing deficits (e.g. Bruyer, 1986). Despite the success of these models in guiding research efforts into many aspects of human face processing, they have provided somewhat less guidance in understanding the immensely complicated problems solved by the perceptual system in extracting and representing the richness of the perceptual information available in human faces. In recent years, it has been primarily from computational models that the difficulty of this problem and its importance to understanding human face processing abilities has come to be appreciated.

Journal ArticleDOI
TL;DR: It is found that both unique visual features and differences in brightness distribution lead, with practice, to parallelisation of originally inefficient search, indicating that perceptual learning in visual search does not solely reflect an unspecific global improvement in search strategy.

Journal ArticleDOI
TL;DR: Evidence indicates that synaptic learning rules apply in intact brains, with interesting implications for brain development and perceptual learning.

Journal ArticleDOI
TL;DR: Using a clever stimulus that separates local from global motion features, the authors of a new paper show that perceptual learning occurs at low cortical levels when the motion is irrelevant to the observer's task, whereas higher-level learning of the same stimulus requires attention.