
Showing papers on "Perceptual learning published in 1994"


Journal ArticleDOI
29 Jul 1994-Science
TL;DR: Performance of a basic visual discrimination task improved after a normal night's sleep, indicating that a process of human memory consolidation, active during sleep, is strongly dependent on REM sleep.
Abstract: Several paradigms of perceptual learning suggest that practice can trigger long-term, experience-dependent changes in the adult visual system of humans. As shown here, performance of a basic visual discrimination task improved after a normal night's sleep. Selective disruption of rapid eye movement (REM) sleep resulted in no performance gain during a comparable sleep interval, although non-REM slow-wave sleep disruption did not affect improvement. On the other hand, deprivation of REM sleep had no detrimental effects on the performance of a similar, but previously learned, task. These results indicate that a process of human memory consolidation, active during sleep, is strongly dependent on REM sleep.

988 citations


Journal ArticleDOI
TL;DR: This article found that the ability to identify a talker's voice improved intelligibility of novel words produced by that talker, suggesting that speech perception may involve talker-contingent processes whereby perceptual learning of aspects of the vocal source facilitates the subsequent phonetic analysis of the acoustic signal.
Abstract: To determine how familiarity with a talker's voice affects perception of spoken words, we trained two groups of subjects to recognize a set of voices over a 9-day period. One group then identified novel words produced by the same set of talkers at four signal-to-noise ratios. Control subjects identified the same words produced by a different set of talkers. The results showed that the ability to identify a talker's voice improved intelligibility of novel words produced by that talker. The results suggest that speech perception may involve talker-contingent processes whereby perceptual learning of aspects of the vocal source facilitates the subsequent phonetic analysis of the acoustic signal.

512 citations


Journal ArticleDOI
TL;DR: In this article, four experiments investigated the influence of categorization training on perceptual discrimination; evidence for acquired distinctiveness was obtained, but acquired equivalence within a categorization-relevant dimension was never found for either integral or separable dimensions.
Abstract: Four experiments investigated the influence of categorization training on perceptual discrimination. Ss were trained according to 1 of 4 different categorization regimes. Subsequent to category learning, Ss performed a Same-Different judgment task. Ss' sensitivities (d's) for discriminating between items that varied on category-(ir)relevant dimensions were measured. Evidence for acquired distinctiveness (increased perceptual sensitivity for items that are categorized differently) was obtained. One case of acquired equivalence (decreased perceptual sensitivity for items that are categorized together) was found for separable, but not integral, dimensions. Acquired equivalence within a categorization-relevant dimension was never found for either integral or separable dimensions. The relevance of the results for theories of perceptual learning, dimensional attention, categorical perception, and categorization is discussed.
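The sensitivities (d's) described above come from signal detection theory. As a rough illustration only (the paper's Same-Different design calls for a more elaborate model than the simple yes-no formula), d' can be estimated as z(hit rate) - z(false-alarm rate) from raw response counts; the counts in the example are made up:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(H) - z(F) from raw response counts.

    Uses the log-linear correction (add 0.5 to each cell) so that
    perfect hit or false-alarm rates do not yield infinite z-scores.
    """
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(h) - z(f)

# Hypothetical observer: 45 hits / 5 misses on "different" trials,
# 10 false alarms / 40 correct rejections on "same" trials.
print(d_prime(45, 5, 10, 40))
```

Higher d' means the observer separates the two stimulus classes better, independently of response bias, which is why it is the natural measure for the acquired distinctiveness and equivalence effects above.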

507 citations


Journal ArticleDOI
TL;DR: A plasticity in early vision governed by Hebbian-like rules is suggested: learning induces an increase in the spatial range of lateral interactions, and the induced longer-range facilitation is a result of internal response transmission via a cascade of local connections.
Abstract: Perceptual learning has been shown to affect early visual processes. Here, we show that learning induces an increase in the spatial range of lateral interactions. Using a lateral masking/facilitation paradigm and bandpass-localized stimuli, we measured the interaction range before and after extensive training on a threshold detection task. For naive observers, target threshold was found to be facilitated by mask presence at distances up to six times the target period. However, practice had the effect of increasing the facilitation range by at least a factor of three. We suggest that the induced longer-range facilitation is a result of internal response transmission via a cascade of local connections. The data presented also show that this chain can be broken. These results suggest a plasticity in early vision governed by Hebbian-like rules.

249 citations


Journal ArticleDOI
TL;DR: Studies examining the time course of learning indicate that at least two different learning processes are involved in perceptual learning, reflecting different levels of processing.

210 citations


Journal ArticleDOI

202 citations


Journal ArticleDOI
TL;DR: Psychophysical evidence for learning at early stages in sensory pathways, particularly visual, is reviewed and anatomical and physiological studies in primary sensory areas indicate that the properties of neurons and functional architecture of cortex are capable of undergoing modification by experience.
Abstract: Until recently, it was commonly believed that within the early stages of sensory processing, the functional properties of neurons and the circuitry of sensory cortex are subject to experience early in cortical development but are fixed in adulthood. It is obvious, however, that some form of neural plasticity must exist well into adulthood, because we continue to be capable of adapting to experience and of learning to recognize new objects. One usually associates learning with the acquisition and storage of complex percepts, such as faces, which is generally believed to be an attribute of advanced stages of cortical processing. There is an accumulating body of evidence indicating that, quite to the contrary, even at the earliest stages of sensory processing, neuronal functional specificity is mutable and subject to experience. In this issue of the Proceedings, Polat and Sagi (1) report an important functional consequence of perceptual learning: Lateral interactions in visual space, an essential component of the integration of local features into a unified percept, can be induced to increase in spatial extent by training (see below). The focus of this review is psychophysical evidence for learning at early stages in sensory pathways, particularly visual. Most of the studies of this form of learning do not require giving subjects error feedback. Rather, there is improvement in performance simply as a result of repeating a perceptual discrimination task many times, which involves exposure to a stimulus and evaluation of a particular visual attribute. Early perceptual learning has been seen in various experiments for over a century. But there has been a flurry of recent studies showing that this form of implicit learning operates on various time scales, ranging from seconds to weeks, and indicating that its mechanism may be found in primary sensory cortex. 
In parallel with the advances in the psychophysical characterization of perceptual learning, anatomical and physiological studies in primary sensory areas indicate that the properties of neurons and functional architecture of cortex are capable of undergoing modification by experience.

103 citations


Journal ArticleDOI
TL;DR: The hypothesis that vernier breaks are detected ‘early’ during pattern recognition is supported by the fact that reaction times for the detection of verniers depend hardly at all on the number of stimuli presented simultaneously, indicating that deviation from straightness is an elementary feature for visual pattern recognition in humans that is detected at an early stage of pattern recognition.
Abstract: A new theory of visual object recognition by Poggio et al. that is based on multidimensional interpolation between stored templates requires fast, stimulus-specific learning in the visual cortex. Indeed, performance in a number of perceptual tasks improves as a result of practice. We distinguish between two phases of learning a vernier-acuity task, a fast one that takes place within less than 20 min and a slow phase that continues over 10 h of training and probably beyond. The improvement is specific for relatively 'simple' features, such as the orientation of the stimulus presented during training, for the position in the visual field, and for the eye through which learning occurred. Some of these results are simulated by means of a computer model that relies on object recognition by multidimensional interpolation between stored templates. Orientation specificity of learning is also found in a jump-displacement task. In a manner parallel to the improvement in performance, cortical potentials evoked by the jump displacement tend to decrease in latency and to increase in amplitude as a result of training. The distribution of potentials over the brain changes significantly as a result of repeated exposure to the same stimulus. The results both of psychophysical and of electrophysiological experiments indicate that some form of perceptual learning might occur very early during cortical information processing. The hypothesis that vernier breaks are detected 'early' during pattern recognition is supported by the fact that reaction times for the detection of verniers depend hardly at all on the number of stimuli presented simultaneously. Hence, vernier breaks can be detected in parallel at different locations in the visual field, indicating that deviation from straightness is an elementary feature for visual pattern recognition in humans that is detected at an early stage of pattern recognition.
Several results obtained during the last few years are reviewed, some new results are presented, and all these results are discussed with regard to their implications for models of pattern recognition.

90 citations


Book ChapterDOI
01 Jan 1994

76 citations


Journal ArticleDOI
TL;DR: Repeated presentation of visual stimuli alters neurophysiological activity in the human brain; a spatio-temporal activation pattern with steep gradients over the primary visual cortex appeared to be correlated with plasticity in the human visual system.
Abstract: Rapid learning processes are crucial for human object recognition. We report here on the alterations in neurophysiological activity in the human brain induced by repeated presentation of visual stimuli. In psychophysical experiments the percentage of correct responses increased significantly within less than 30 minutes in untrained observers. This stimulus-specific improvement was not carried over to differently oriented stimuli. Similar learning effects were observed in component latencies of evoked potential field distributions. The occurrence of specific potential field configurations reflected perceptual learning. A spatio-temporal activation pattern with steep gradients over the primary visual cortex appeared to be correlated with plasticity in the human visual system.

42 citations


Journal ArticleDOI
01 Oct 1994
TL;DR: This article found that sufficient exposure to relevant stimulus variation produces more efficient information extraction, and that perceptual learning may therefore explain the difference between novices and experts in many piloting skills.
Abstract: Differences between novices and experts in many piloting skills may be due to perceptual learning. Sufficient exposure to relevant stimulus variation produces more efficient information extraction,...

Journal ArticleDOI
TL;DR: Research is reviewed which reveals the surprisingly advanced perceptual skills of very young infants and some changes in these capacities which occur early in life; possible mechanisms which may underlie these changes are discussed.
Abstract: Research is reviewed which reveals the surprisingly advanced perceptual skills of very young infants and some changes in these capacities which occur early in life; possible mechanisms which may underlie these changes are discussed. Newborns readily turn toward visual, auditory, and tactual stimulation, indicating that primitive localization systems operate at birth. However, their pattern perception appears to be more limited, with the notable exception of certain facial configurations which may have a privileged status. During the period from 1 to 3 months of life, auditory localization responses decrease substantially from neonatal levels while interest in visual patterns increases; indeed, during this period infants seem to become 'captured' by visual stimuli. By 4 months of age, infants turn rapidly and accurately towards off-centered sounds again, as they begin to reach for visible and invisible sounding objects. Between 3 and 4 months of age, they become sensitive to various types of static pattern regularities such as symmetry and other global configurational properties, and to dynamic aspects of faces (e.g. changes in facial expressions). Major structural maturation of the visual cortex at this age may underlie these new levels of auditory-visual spatial integration and pattern analysis abilities.

Journal Article
TL;DR: After training for the same task, multichannel evoked-potential recordings changed significantly in component latency and in the distribution of field potentials, suggesting an involvement of and plasticity in the primary visual cortex of human adults.
Abstract: We investigated learning in a motion-detection task using both psychophysical and neurophysiological methods in normal humans. A total of 20 naive observers had to discriminate between a small motion to the left versus to the right (jump displacement) or between a motion upward versus downward. Their performance improved significantly within less than 30 min in discriminating between directions in the psychophysical jump-displacement task. The improvement of performance with practice was very specific and did not transfer to the same stimulus rotated by 90 degrees. After training for the same task, multichannel evoked-potential recordings changed significantly in component latency and in the distribution of field potentials. This indicates that neuronal ensembles rather than single cells are involved in perceptual learning. Significant differences between the potential distributions occur for potentials at latencies of less than 100 ms over the occipital pole, suggesting an involvement of and plasticity in the primary visual cortex of human adults.

Journal ArticleDOI
TL;DR: This paper found that people become expert at perceiving information that is related to concepts they think about a great deal, because of their extensive perceptual experience with this material to test this idea, and manipulated the capitalization of a series of briefly exposed words.
Abstract: We hypothesized that people become expert at perceiving information that is related to concepts they think about a great deal, because of their extensive perceptual experience with this material. To test this idea, we manipulated the capitalization of a series of briefly exposed words. If expertise emerges because of perceptual experience, then people should show facilitation identifying words that they think about a great deal, but only when capitalization of these words is consistent with prior perceptual experience with these words. Support for this hypothesis was found in two experiments: one in which trait words were presented to depressed and nondepressed subjects, and one in which food words were presented to anorexic and nonanorexic subjects. Thus, these experiments demonstrated that personality, as well as personality disorder, has the potential to change the nature of the input people receive from the perceptual system.

Journal ArticleDOI
TL;DR: A review of outcome studies about remedial perceptual retraining for adults with diffuse acquired brain injury suggests that those learning assumptions hold true only for clients with localized lesions and preserved abstract reasoning who have been explicitly taught to transfer learning across a variety of treatment activities.
Abstract: Occupational therapy for adults with perceptual dysfunction secondary to diffuse acquired brain injury from trauma or anoxia often includes remedial retraining with treatment tasks, like construction of puzzles, to provide clients with practice in deficit perceptual skills. Therapists using this approach assume that adults with brain injury learn specific perceptual skills from retraining exercises and can transfer those skills across all activities (including self-care and community living activities) that require those skills. This review of outcome studies about remedial perceptual retraining for adults with diffuse acquired brain injury suggests that those learning assumptions hold true only for clients with localized lesions and preserved abstract reasoning who have been explicitly taught to transfer learning across a variety of treatment activities. Recommendations about ways to assess clients' learning potential and appropriateness for remedial retraining include keeping track of the number of repetitions clients need to relearn functional tasks and systematically varying functional tasks during training to see how easily clients can transfer learning across variations of the same task.

Journal ArticleDOI
TL;DR: Perceptual learning is accompanied by changes in the properties of individual neurons and in the functional cortical architecture; these are observed in a number of cortical areas, over short and long time scales.

Journal ArticleDOI
TL;DR: This article showed that phonetic categories that are distinctive (phonemic) in the listener's native language are differentiated easily and effortlessly, while non-native phonetic categories/contrasts present perceptual difficulties.
Abstract: Cross-language studies of speech perception by adults have shown "language-specific" patterns of perception of phonetic categories and contrasts. In general, phonetic categories that are distinctive (phonemic) in the listener's native language are differentiated easily and effortlessly, while non-native phonetic categories/contrasts present perceptual difficulties. Thus learners of a second language (L2) often have persistent difficulty learning to perceive (and produce) "foreign" consonants and vowels. However, recent research has shown that not all non-native phonetic categories and contrasts are equally difficult to differentiate perceptually. Current theories that attempt to predict and explain these relative perceptual difficulties in terms of the relationship between native language (L1) and L2 phonetic categories will be discussed. In addition, results of perceptual training experiments with L2 learners which explore the effects of subject, stimulus, and task variables on perception of non-native phonetic categories will be summarized. Finally, implications of this research for general theories of speech perception and perceptual learning will be suggested. [Work supported by NIDCD.]

Journal ArticleDOI
TL;DR: A neural network model of an early visual cortical area, composed of orientation-selective units arranged in a hypercolumn structure with receptive field properties modeled from real monkey neurons, is able to learn even from chance performance and in the presence of a large amount of noise in the response function.
Abstract: We introduce a neural network model of an early visual cortical area, in order to understand better results of psychophysical experiments concerning perceptual learning during odd element (pop-out) detection tasks (Ahissar and Hochstein, 1993, 1994a). The model describes a network, composed of orientation selective units, arranged in a hypercolumn structure, with receptive field properties modeled from real monkey neurons. Odd element detection is a final pattern of activity with one (or a few) salient units active. The learning algorithm used was the Associative reward-penalty (Ar-p) algorithm of reinforcement learning (Barto and Anandan, 1985), following physiological data indicating the role of supervision in cortical plasticity. Simulations show that network performance improves dramatically as the weights of inter-unit connections reach a balance between lateral iso-orientation inhibition, and facilitation from neighboring neurons with different preferred orientations. The network is able to learn even from chance performance, and in the presence of a large amount of noise in the response function. As additional tests of the model, we conducted experiments with human subjects in order to examine learning strategy and test model predictions.
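The associative reward-penalty (A_r-p) rule named above can be sketched, under heavy simplification, as a single stochastic logistic unit whose weights move toward the emitted output after reward and toward the opposite output (scaled by a small penalty factor) after failure. This is not the paper's hypercolumn network; the toy two-input task and all parameter values below are illustrative:

```python
import math
import random

def ar_p_train(examples, rho=0.5, lam=0.05, epochs=300, seed=0):
    """Minimal associative reward-penalty (A_r-p) learner: one
    stochastic binary unit in the spirit of Barto and Anandan (1985).
    `examples` is a list of (input_vector, correct_output) pairs,
    with outputs in {0, 1}.
    """
    rng = random.Random(seed)
    n = len(examples[0][0])
    w = [0.0] * (n + 1)               # last entry is the bias weight

    def prob(x):                      # P(y = 1 | x) via a logistic unit
        s = sum(wi * xi for wi, xi in zip(w, list(x) + [1.0]))
        return 1.0 / (1.0 + math.exp(-s))

    for _ in range(epochs):
        for x, target in examples:
            p = prob(x)
            y = 1 if rng.random() < p else 0   # stochastic output
            r = 1 if y == target else 0        # scalar reinforcement only
            xb = list(x) + [1.0]
            if r == 1:     # reward: pull p toward the emitted output
                for i in range(n + 1):
                    w[i] += rho * (y - p) * xb[i]
            else:          # penalty: weakly pull p toward the other output
                for i in range(n + 1):
                    w[i] += lam * rho * ((1 - y) - p) * xb[i]
    return w, prob
```

Trained on a toy discrimination (respond 1 when the first input is active), the unit's probability of the correct response climbs well above chance using only success/failure feedback, which is the property the model exploits to learn pop-out detection from chance performance.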

Proceedings Article
01 Jan 1994
TL;DR: This work demonstrated that perceptual learning occurs for the discrimination of direction in stochastic motion stimuli, and models this learning using two approaches: a clustering model that learns to accommodate the motion noise, and an averaging model that learns to ignore the noise.
Abstract: Perceptual learning is defined as fast improvement in performance and retention of the learned ability over a period of time. In a set of psychophysical experiments we demonstrated that perceptual learning occurs for the discrimination of direction in stochastic motion stimuli. Here we model this learning using two approaches: a clustering model that learns to accommodate the motion noise, and an averaging model that learns to ignore the noise. Simulations of the models show performance similar to the psychophysical results.
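The averaging model's core intuition can be sketched with a toy simulation: each local motion signal equals the true direction plus Gaussian noise, and the decision is the sign of their mean. Pooling more signals, the sketch's stand-in for learning to ignore noise, raises discrimination accuracy. The coherence value and trial counts are arbitrary:

```python
import random

def discrimination_accuracy(n_pooled, coherence=0.2, trials=2000, seed=1):
    """Fraction of correct left/right judgments when the decision is
    the sign of the mean of `n_pooled` noisy local motion signals.
    Each signal is the true direction strength (+coherence) plus
    unit-variance Gaussian noise.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        samples = [coherence + rng.gauss(0.0, 1.0) for _ in range(n_pooled)]
        if sum(samples) > 0.0:       # decide "rightward"; truth is rightward
            correct += 1
    return correct / trials
```

Because the pooled mean's signal grows linearly with n while its noise grows only as the square root of n, accuracy rises from near chance with a single sample toward ceiling with many, mirroring the improvement the averaging model attributes to practice.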

Proceedings ArticleDOI
01 Apr 1994
TL;DR: This article showed that some degree of skill in radiological search can be acquired with no high-level medical knowledge at all, and that some aspect of radiological skill may be based on changes in the effectiveness of early visual processes.
Abstract: The main focus of this paper is on the extent to which radiological expertise is based on low-level perceptual processes. Experiment 1 showed that naive observers can perform well above chance level in classifying mammograms with just a few hours training. Experiment 2 showed that expert radiologists performed better than naive observers on a 'perceptual simulation' of a radiographic task, even though high-level knowledge of anatomy and disease processes was of no assistance. Experiment 3 showed that one of the fundamental parameters of the visual system likely to be involved in radiographic performance, contrast sensitivity, could be improved with practice. Experiment 4 showed that naive observers improved on a similar perceptual simulation task as used in experiment 2, and that although there was partial interocular transfer, the results suggest that at least some degree of learning was based on low-level perceptual processes. Overall the results show that some degree of skill in radiological search can be acquired with no high-level medical knowledge at all, and that some aspect of radiological skill may be based on changes in the effectiveness of early visual processes.




Journal ArticleDOI
TL;DR: In this article, ideas drawn from perceptual skills training are combined with therapeutic techniques from Gestalt therapy to train counselors to understand and experience how techniques used by artists to see objects realistically can be applied to a counseling relationship.
Abstract: This article offers some novel ideas to the age-old debate within the helping professions about how to gain balance between an analytical, cognitive style of processing data and the more experiential and humanistic process of direct perception. Ideas drawn from perceptual skills training are combined with therapeutic techniques from Gestalt therapy. This marriage of ideas can be helpful in training counselors to understand and experience how techniques used by artists to see objects realistically can be applied to a counseling relationship.

Journal ArticleDOI
01 Jan 1994
TL;DR: The authors argue that perceptual experience is neither necessary nor sufficient for perceptual knowledge, but rather provides information about the sources of beliefs, both as to which perceptual modality and within a given modality, which is useful in assessing the reliability of perceptual beliefs.
Abstract: One of the traditional problems of philosophy is the nature of the connection between perceptual experience and empirical knowledge. That there is an intimate connection between the two is rarely doubted. Three case studies of visual deficits due to brain damage are used to motivate the claim that perceptual experience is neither necessary nor sufficient for perceptual knowledge. Acceptance of this claim leaves a mystery as to the epistemic role, if any, of perceptual experience. It is argued that one function of perceptual experience is to provide information about the sources of beliefs, both as to which perceptual modality and within a given modality. This information is useful in assessing the reliability of perceptual beliefs.

Book ChapterDOI
01 Jan 1994
TL;DR: It is observed that the weakness of most existing systems is attributable to the gap between the rather ideal conditions under which those systems are designed to work and the characteristics of the real world.
Abstract: The problem of learning and discovering in perception is addressed and discussed with particular reference to present machine learning paradigms. These paradigms are briefly introduced by S. Gaglio. The subsymbolic approach is addressed by S. Nolfi, and the role of symbolic learning is analysed by F. Esposito. Many of the open problems identified in the course of the panel show that this is an important field of research that still needs a great deal of investigation. In particular, as a result of the whole discussion, it seems that a suitable integration of different approaches must be carefully investigated. It is observed, in fact, that the weakness of most existing systems is attributable to the gap between the rather ideal conditions under which those systems are designed to work and the characteristics of the real world.


15 Dec 1994
TL;DR: IRV is presented, a visual robot that integrates visual information across camera movements using minimal geometric assumptions and develops an accurate model of its own visual-motor geometry by learning to predict the sampled images that follow each random, but precise, camera movement.
Abstract: Our eyes see well only what is directly in front of them; they must continually scan the faces, words, and objects around us. Perceptual integration is the process of combining the resulting jumpy, incomplete images into our stable, comprehensive perception of the world. Visual robots, whose goals and designs are becoming more life-like, share this need. This thesis presents IRV, a visual robot that integrates visual information across camera movements. As a means to robust and accurate perceptual integration, IRV learns to solve the problem from experience, which consists of a series of random movements of a camera mounted on a motorized pan-tilt platform, observing the day-to-day activity in a laboratory. Learning proceeds without a prior analytic model, external calibration references, or a contrived environment. Because the solution is learned using minimal geometric assumptions, it can compensate for arbitrary imaging distortions, including lens aberrations, rotation of the camera about its viewing axis, and spatially-varying or even random sampling patterns. IRV develops an accurate model of its own visual-motor geometry by learning to predict the sampled images that follow each random, but precise, camera movement. Gradually accumulating evidence over repeated practice movements, IRV overcomes the ambiguity inherent in real-world perceptual-motor learning. The computational basis of perceptual integration itself is a connectionist visual memory that continuously transforms visual information from previous fixations into a reference frame centered on the current viewing direction. Both learning and performance exploit a motor metric that associates pairs of points in visual space with eye movement parameters, to establish an interpretable, linear visual representation. The computational architecture, including the learning mechanism, as well as the natural environment, approximate the conditions of biological perceptual development. 
Perceptual learning and mature performance both manifest time and space complexities commensurate with human abilities and resources. Experiments confirm the practicality of visual robots that learn to perceive the stability of the world despite eye movements, learn to integrate geometric features across fixations, and, in general, develop and calibrate accurate models of their own perceptual-motor systems.
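IRV's calibration-by-prediction idea can be caricatured in a few lines: in a toy one-dimensional world where each motor command circularly shifts the image by an unknown amount, the visual-motor mapping can be recovered purely by scoring candidate shifts as predictors of the post-movement image and accumulating votes over random scenes. Everything here (the 1-D scene, the shift world, the vote tally) is a hypothetical stand-in for IRV's actual connectionist machinery:

```python
import random

def learn_motor_map(commands, n_pixels=32, n_trials=150, seed=0):
    """Recover which image shift each motor command produces, using
    only before/after image pairs of random one-dimensional 'scenes'.
    The hidden world (command c circularly shifts the image by c
    pixels) stands in for the robot's unknown visual-motor geometry.
    """
    rng = random.Random(seed)

    def execute(scene, c):            # hidden world: circular shift by c
        return scene[c:] + scene[:c]

    votes = {c: [0] * n_pixels for c in commands}
    for _ in range(n_trials):
        scene = [rng.random() for _ in range(n_pixels)]
        c = rng.choice(commands)
        after = execute(scene, c)
        # Score every candidate shift as a predictor of the next image
        # and credit the one with the smallest prediction error.
        errors = [sum((p - a) ** 2
                      for p, a in zip(scene[k:] + scene[:k], after))
                  for k in range(n_pixels)]
        votes[c][errors.index(min(errors))] += 1

    # The learned map is the most-voted shift for each command.
    return {c: v.index(max(v)) for c, v in votes.items()}
```

As in IRV, no external calibration reference is used: evidence accumulated over repeated practice movements alone identifies the geometry, and the same voting scheme would absorb arbitrary but consistent distortions of the sampling pattern.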