
Showing papers on "Perceptual learning published in 2005"


Journal ArticleDOI
TL;DR: The aims of this article are to encompass many apparently unrelated anatomical, physiological and psychophysical attributes of the brain within a single theoretical perspective and to provide a principled way to understand many aspects of cortical organization and responses.
Abstract: This article concerns the nature of evoked brain responses and the principles underlying their generation. We start with the premise that the sensory brain has evolved to represent or infer the causes of changes in its sensory inputs. The problem of inference is well formulated in statistical terms. The statistical fundaments of inference may therefore afford important constraints on neuronal implementation. By formulating the original ideas of Helmholtz on perception, in terms of modern-day statistical theories, one arrives at a model of perceptual inference and learning that can explain a remarkable range of neurobiological facts. It turns out that the problems of inferring the causes of sensory input (perceptual inference) and learning the relationship between input and cause (perceptual learning) can be resolved using exactly the same principle. Specifically, both inference and learning rest on minimizing the brain’s free energy, as defined in statistical physics. Furthermore, inference and learning can proceed in a biologically plausible fashion. Cortical responses can be seen as the brain’s attempt to minimize the free energy induced by a stimulus and thereby encode the most likely cause of that stimulus. Similarly, learning emerges from changes in synaptic efficacy that minimize the free energy, averaged over all stimuli encountered. The underlying scheme rests on empirical Bayes and hierarchical models of how sensory input is caused. The use of hierarchical models enables the brain to construct prior expectations in a dynamic and context-sensitive fashion. This scheme provides a principled way to understand many aspects of cortical organization and responses. The aim of this article is to encompass many apparently unrelated anatomical, physiological and psychophysical attributes of the brain within a single theoretical perspective. 
In terms of cortical architectures, the theoretical treatment predicts that sensory cortex should be arranged hierarchically, that connections should be reciprocal and that forward and backward connections should show a functional asymmetry (forward connections are driving, whereas backward connections are both driving and modulatory). In terms of synaptic physiology, it predicts associative plasticity and, for dynamic models, spike-timing-dependent plasticity. In terms of electrophysiology, it accounts for classical and extra-classical receptive field effects and long-latency or endogenous components of evoked cortical responses. It predicts the attenuation of responses encoding prediction error with perceptual learning and explains many phenomena such as repetition suppression, mismatch negativity (MMN) and the P300 in electroencephalography. In psychophysical terms, it accounts for the behavioural correlates of these physiological phenomena, for example, priming and global precedence. The final focus of this article is on perceptual learning as measured with the MMN and the implications for empirical studies of coupling among cortical areas using evoked sensory responses.
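The claim that inference and learning both minimize the same free energy, and that prediction-error responses attenuate with perceptual learning, can be illustrated with a toy sketch (an illustrative reduction, not the article's actual scheme): a single Gaussian cause v generates input u through a synaptic weight g; inference settles v by gradient descent on the free energy for each stimulus, and learning nudges g along the same gradient across stimuli. All parameters and variable names here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
g = 0.5                         # generative (synaptic) weight; true value is 2.0
v_prior, lr_v, lr_g = 1.0, 0.05, 0.02

errs = []
for _ in range(1000):
    u = 2.0 * (v_prior + rng.normal()) + rng.normal(0, 0.1)  # sensory input
    v = v_prior
    for _ in range(60):          # perceptual inference: settle v for this stimulus
        eps_u = u - g * v        # sensory prediction error
        eps_v = v - v_prior      # prior prediction error
        v += lr_v * (g * eps_u - eps_v)   # gradient descent on free energy
    eps_u = u - g * v            # settled prediction error
    g += lr_g * eps_u * v        # perceptual learning: same gradient, slower timescale
    errs.append(eps_u ** 2)

# Prediction-error responses attenuate as the cause of the input is learned.
early, late = np.mean(errs[:100]), np.mean(errs[-100:])
```

As the weight is learned, the settled prediction error shrinks across stimuli, mirroring the predicted attenuation of error-encoding responses (e.g., repetition suppression) with perceptual learning.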

3,569 citations


Journal ArticleDOI
TL;DR: Results from four different kinds of category-learning tasks provide strong evidence that human category learning is mediated by multiple, qualitatively distinct systems.
Abstract: Much recent evidence suggests some dramatic differences in the way people learn perceptual categories, depending on exactly how the categories were constructed. Four different kinds of category-learning tasks are currently popular-rule-based tasks, information-integration tasks, prototype distortion tasks, and the weather prediction task. The cognitive, neuropsychological, and neuroimaging results obtained using these four tasks are qualitatively different. Success in rule-based (explicit reasoning) tasks depends on frontal-striatal circuits and requires working memory and executive attention. Success in information-integration tasks requires a form of procedural learning and is sensitive to the nature and timing of feedback. Prototype distortion tasks induce perceptual (visual cortical) learning. A variety of different strategies can lead to success in the weather prediction task. Collectively, results from these four tasks provide strong evidence that human category learning is mediated by multiple, qualitatively distinct systems.

866 citations


Journal ArticleDOI
TL;DR: Perceptual learning of noise-vocoded speech depends on higher level information, consistent with top-down, lexically driven learning, which may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
Abstract: Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. For noise-vocoded sentences, a manipulation that removes spectral detail from speech, listeners' reporting improved from near 0% to 70% correct over 30 sentences (Experiment 1). Learning was enhanced if listeners heard distorted sentences while they knew the identity of the undistorted target (Experiments 2 and 3). Learning was absent when listeners were trained with nonword sentences (Experiments 4 and 5), although the meaning of the training sentences did not affect learning (Experiment 5). Perceptual learning of noise-vocoded speech depends on higher level information, consistent with top-down, lexically driven learning. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.
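Noise vocoding itself is easy to sketch: the signal is split into a handful of frequency bands, each band's amplitude envelope is extracted, and the envelopes are used to modulate band-limited noise, discarding spectral detail. The implementation below is a minimal numpy-only approximation (brickwall FFT filters, illustrative band counts and cutoffs), not the stimulus-generation code used in the study.

```python
import numpy as np

def _bandpass(x, fs, f1, f2):
    # Ideal (FFT-brickwall) band-pass filter -- crude, but fine for a demo.
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(freqs < f1) | (freqs >= f2)] = 0.0
    return np.fft.irfft(spec, len(x))

def noise_vocode(x, fs, n_bands=6, lo=100.0, hi=4000.0, env_cut=30.0):
    """Noise-vocoding sketch: split the signal into log-spaced bands,
    extract each band's amplitude envelope (rectify + low-pass), and
    re-impose it on band-limited noise. Parameters are illustrative."""
    edges = np.geomspace(lo, hi, n_bands + 1)
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        band = _bandpass(x, fs, f1, f2)
        env = _bandpass(np.abs(band), fs, 0.0, env_cut)   # amplitude envelope
        out += np.maximum(env, 0.0) * _bandpass(noise, fs, f1, f2)
    return out
```

With a small number of bands the output is unintelligible at first exposure but, as the experiments above show, report accuracy can climb from near 0% to around 70% correct within roughly 30 sentences.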

480 citations


Journal ArticleDOI
TL;DR: The model suggests that long-term sensitivity enhancements to task-relevant or irrelevant stimuli occur as a result of timely interactions between diffused signals triggered by task performance and signals produced by stimulus presentation.

357 citations


Journal ArticleDOI
TL;DR: The results support a view of phonemic representations as dynamic and flexible, and suggest that they interact with both higher-level (e.g., lexical) and lower-level (e.g., acoustic) information in important ways.

313 citations


Journal ArticleDOI
TL;DR: Computer simulations and mathematical analyses demonstrate the functional and empirical adequacy of selective reweighting as a perceptual learning mechanism.
Abstract: The mechanisms of perceptual learning are analyzed theoretically, probed in an orientation-discrimination experiment involving a novel nonstationary context manipulation, and instantiated in a detailed computational model. Two hypotheses are examined: modification of early cortical representations versus task-specific selective reweighting. Representation modification seems neither functionally necessary nor implied by the available psychophysical and physiological evidence. Computer simulations and mathematical analyses demonstrate the functional and empirical adequacy of selective reweighting as a perceptual learning mechanism. The stimulus images are processed by standard orientation- and frequency-tuned representational units, divisively normalized. Learning occurs only in the "read-out" connections to a decision unit; the stimulus representations never change. An incremental Hebbian rule tracks the task-dependent predictive value of each unit, thereby improving the signal-to-noise ratio of their weighted combination. Each abrupt change in the environmental statistics induces a switch cost in the learning curves as the system temporarily works with suboptimal weights.
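The reweighting idea is simple enough to sketch: fixed, normalized orientation-tuned units feed a decision unit, and only the read-out weights change. The toy below uses illustrative parameters, with an error-corrected Hebbian update standing in for the paper's incremental Hebbian rule; it learns a fine orientation discrimination without ever modifying the stimulus representation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, lr = 16, 0.02
prefs = np.linspace(0, np.pi, n_units, endpoint=False)  # preferred orientations
w = np.zeros(n_units)                                   # read-out weights only

def represent(theta, kappa=8.0):
    # Fixed orientation-tuned units with divisive normalization;
    # this representation never changes during learning.
    r = np.exp(kappa * (np.cos(2 * (theta - prefs)) - 1))
    return r / r.sum()

for _ in range(2000):
    label = rng.choice([-1.0, 1.0])          # which side of the 45° reference
    theta = np.pi / 4 + 0.1 * label + rng.normal(0, 0.05)
    a = represent(theta)
    decision = np.tanh(w @ a)
    w += lr * (label - decision) * a         # learning only in the read-out
```

Because only `w` changes, improvement is carried entirely by the decision unit's reweighting of stable sensory responses, which is the hypothesis the paper's simulations support.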

296 citations


Journal ArticleDOI
TL;DR: Further research on perceptual learning should investigate first, the conditions for generalization of training-induced improvement, second, its use for teaching and rehabilitation, and third, its dependence on pharmacological agents.

278 citations


Journal ArticleDOI
TL;DR: It is shown how speech perception is crucial for word learning, and it is suggested that it bootstraps the development of a separate but parallel phonological system that links sound to meaning.

239 citations


Journal ArticleDOI
TL;DR: Functional magnetic resonance imaging was used to test for an effect of perceptual organization across the whole brain and found that regions in the intraparietal sulcus (IPS) showed greater activity when 2 streams were perceived rather than 1.
Abstract: The structuring of the sensory scene (perceptual organization) profoundly affects what we perceive and is of increasing clinical interest. In both vision and audition, many cues have been identified that influence perceptual organization, but little is known about its neural basis. Previous studies have suggested that auditory cortex may play a role in auditory perceptual organization (also called auditory stream segregation). However, these studies were limited in that they examined only auditory cortex and that the stimuli they used to generate different organizations had different physical characteristics, which per se may have led to the differences in neural response. In the current study, functional magnetic resonance imaging was used to test for an effect of perceptual organization across the whole brain. To avoid confounding physical changes to the stimuli with differences in perceptual organization, we exploited an ambiguous auditory figure that is sometimes perceived as a single auditory stream and sometimes as two streams. We found that regions in the intraparietal sulcus (IPS) showed greater activity when 2 streams were perceived rather than 1. The specific involvement of this region in perceptual organization is exciting, as there is a growing literature that suggests a role for the IPS in binding in vision, touch, and cross-modally. This evidence is discussed, and a general role is proposed for regions of the IPS in structuring sensory input.

233 citations


Journal ArticleDOI
TL;DR: This baseline study lays the groundwork for an ongoing longitudinal study addressing the effects of intensive musical training on brain and cognitive development, and makes it possible to look retroactively at the brain and cognitive development of those children who emerge showing exceptional musical talent.

214 citations


Journal ArticleDOI
TL;DR: The behavioral results opened the possibility that focusing attention on time intervals not only enhances motor processing, as has been shown by previous research, but also might improve perceptual processing.
Abstract: Research that uses simple response time tasks and neuroimaging has emphasized that attentional preparation based on temporal expectancy modulates processing at motor levels. A novel approach was taken to study whether the temporal orienting of attention can also modulate perceptual processing. A temporal-cuing paradigm was used together with a rapid serial visual presentation procedure, in order to maximize the processing demands of perceptual analysis. Signal detection theory was applied in order to examine whether temporal orienting affects processes related to perceptual sensitivity or to response criterion (indexed by d′ and beta measures, respectively). If temporal orienting implies perceptual preparation, we would expect to observe an increase in perceptual sensitivity (d′) when a target appeared at expected, rather than unexpected, time intervals. Indeed, our behavioral results opened the possibility that focusing attention on time intervals not only enhances motor processing, as has been shown by previous research, but also might improve perceptual processing.
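The signal-detection quantities can be computed directly from response counts. The helper below is a standard textbook computation (function name and the log-linear correction are illustrative choices, not taken from the paper): d′ separates sensitivity from the response criterion, and beta = exp(c·d′) indexes criterion placement as a likelihood ratio.

```python
import math
from statistics import NormalDist

def sdt_indices(hits, misses, false_alarms, correct_rejections):
    """Signal-detection indices from raw counts (a log-linear correction
    keeps extreme rates of 0 or 1 finite)."""
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf
    d_prime = z(h) - z(f)                  # perceptual sensitivity
    c = -0.5 * (z(h) + z(f))               # criterion location
    beta = math.exp(c * d_prime)           # likelihood-ratio criterion
    return d_prime, beta, c
```

For example, 40 hits / 10 misses against 10 false alarms / 40 correct rejections gives d′ of about 1.64 with an unbiased criterion (beta = 1); a shift in willingness to respond moves beta while leaving d′ roughly unchanged, which is the dissociation the study exploits.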

Journal ArticleDOI
TL;DR: Within the proposed framework, successful human performance on these tasks is a matter of learning to detect and calibrate optical information about the boundaries that separate possible from impossible actions.
Abstract: Tasks such as steering, braking, and intercepting moving objects constitute a class of behaviors, known as visually guided actions, which are typically carried out under continuous control on the basis of visual information. Several decades of research on visually guided action have resulted in an inventory of control laws that describe for each task how information about the sufficiency of one's current state is used to make ongoing adjustments. Although a considerable amount of important research has been generated within this framework, several aspects of these tasks that are essential for successful performance cannot be captured. The purpose of this paper is to provide an overview of the existing framework, discuss its limitations, and introduce a new framework that emphasizes the necessity of calibration and perceptual learning. Within the proposed framework, successful human performance on these tasks is a matter of learning to detect and calibrate optical information about the boundaries that separate possible from impossible actions. This resolves a long-lasting incompatibility between theories of visually guided action and the concept of an affordance. The implications of adopting this framework for the design of experiments and models of visually guided action are discussed.

Journal ArticleDOI
TL;DR: This article has attempted to show how early evidence of the existence of multiple memory systems in the brain arose from the study of a few patients with bilateral damage to the medial structures of the temporal lobe in the hippocampal region, as in the case of the now famous patient HM.

Journal ArticleDOI
TL;DR: The notion of a critical period for the treatment of amblyopia is reviewed in light of recent experimental and clinical evidence for neural plasticity.
Abstract: Critical periods for experience-dependent plasticity are ubiquitous. The idea that experience-dependent plasticity is closely linked with the development of sensory function is still widely held; however, there also is growing evidence for plasticity in the adult nervous system. This article reviews the notion of a critical period for the treatment of amblyopia in light of recent experimental and clinical evidence for neural plasticity. Specifically, adults with amblyopia can improve their perceptual performance via extensive practice on a challenging visual task, and this improvement may transfer to improved visual acuity. Amblyopes achieve this improvement via the mechanisms that have been shown to explain perceptual learning in the normal visual system. It is hypothesized that these same mechanisms account for at least some of the improvement that occurs in the treatment of amblyopia.

Journal ArticleDOI
TL;DR: The results indicate a dramatic improvement in phonological awareness following phonemic discrimination training without matching perceptual learning.

Journal ArticleDOI
TL;DR: Initial perceptual learning is comparable in young and older adults, but maintenance and transfer of this learning decline with age; both groups also adapted to spectrally shifted noise-vocoded speech.
Abstract: When presented with several time-compressed sentences, young adults' performance improves with practice. Such adaptation has not been studied in older adults. To study age-related changes in perceptual learning, the authors tested young and older adults' ability to adapt to degraded speech. First, the authors showed that older adults, when equated for starting accuracy with young adults, adapted at a rate and magnitude comparable to young adults. However, unlike young adults, older adults failed to transfer this learning to a different speech rate and did not show additional benefit when practice exceeded 20 sentences. Listeners did not adapt to speech degraded by noise, indicating that adaptation to time-compressed speech was not attributable to task familiarity. Finally, both young and older adults adapted to spectrally shifted noise-vocoded speech. The authors conclude that initial perceptual learning is comparable in young and older adults but maintenance and transfer of this learning decline with age.

Journal ArticleDOI
TL;DR: This research shows that visual learning is susceptible to disruption and elucidates the processes by which the brain can consolidate learning and thus protect what is learned from being overwritten.
Abstract: For more than a century, the process of stabilization has been a central issue in the research of learning and memory. Namely, after a skill or memory is acquired, it must be consolidated before it becomes resistant to disruption by subsequent learning. Although it is clear that there are many cases in which learning can be disrupted, it is unclear when learning something new disrupts what has already been learned. Herein, we provide two answers to this question with the demonstration that perceptual learning of a visual stimulus disrupts or interferes with the consolidation of a previously learned visual stimulus. In this study, we trained subjects on two different hyperacuity tasks and determined whether learning of the second task disrupted that of the first. We first show that disruption of learning occurs between visual stimuli presented at the same orientation in the same retinotopic location but not for the same stimuli presented at retinotopically disparate locations or different orientations at the same location. Second, we show that disruption from stimuli in the same retinotopic location is ameliorated if the subjects wait for 1 h before training on the second task. These results indicate that disruption, at least in visual learning, is specific to features of the tasks and that a temporal delay of 1 h can stabilize visual learning. This research shows that visual learning is susceptible to disruption and elucidates the processes by which the brain can consolidate learning and thus protect what is learned from being overwritten.

Journal ArticleDOI
TL;DR: Conceptual, methodological, and practical issues associated with whether perceptual skills underlying anticipatory movement in sport can, or indeed should, be trained implicitly are discussed, together with potential advantages of implicitly learned skills relating to task complexity and robustness under stress.

Journal ArticleDOI
TL;DR: Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments and improve visual performance in amblyopic children.
Abstract: PURPOSE. To determine whether practicing a position-discrimination task improves visual performance in children with amblyopia and to determine the mechanism(s) of improvement. METHODS. Five children (age range, 7-10 years) with amblyopia practiced a positional acuity task in which they had to judge which of three pairs of lines was misaligned. Positional noise was produced by distributing the individual patches of each line segment according to a Gaussian probability function. Observers were trained at three noise levels (including 0), with each observer performing between 3000 and 4000 responses in 7 to 10 sessions. Trial-by-trial feedback was provided. RESULTS. Four of the five observers showed significant improvement in positional acuity. In those four observers, on average, positional acuity with no noise improved by approximately 32% and with high noise by approximately 26%. A position-averaging model was used to parse the improvement into an increase in efficiency or a decrease in equivalent input noise. Two observers showed increased efficiency (51% and 117% improvements) with no significant change in equivalent input noise across sessions. The other two observers showed both a decrease in equivalent input noise (18% and 29%) and an increase in efficiency (17% and 71%). All five observers showed substantial improvement in Snellen acuity (approximately 26%) after practice. CONCLUSIONS. Perceptual learning can improve visual performance in amblyopic children. The improvement can be parsed into two important factors: decreased equivalent input noise and increased efficiency. Perceptual learning techniques may add an effective new method to the armamentarium of amblyopia treatments.
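The parsing of improvement into equivalent input noise and efficiency follows from the linear-amplifier model, under which threshold signal energy grows linearly with external noise. A minimal two-point version of that analysis (an assumed standard form, not the authors' position-averaging fit) is:

```python
def parse_improvement(E0, E_hi, N_ext):
    """Two-point linear-amplifier analysis: threshold signal energy
    E = (N_ext + N_eq) / eta, where N_eq is the equivalent input noise
    and eta is the calculation efficiency.

    E0   -- threshold energy with no external noise
    E_hi -- threshold energy at external noise level N_ext
    """
    eta = N_ext / (E_hi - E0)   # efficiency from the slope
    n_eq = eta * E0             # equivalent input noise from the intercept
    return n_eq, eta
```

Comparing pre- and post-training (N_eq, eta) pairs attributes the learning to reduced equivalent input noise, increased efficiency, or both, which is the decomposition reported across the observers above.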

Journal ArticleDOI
TL;DR: The results suggest that training in one display condition optimizes the limiting factor(s) in performance in that condition and that noise filtering is also improved by exposure to the stimulus in clear displays.
Abstract: Human operators develop expertise in perceptual tasks by practice or perceptual learning. For noisy displays, practice improves performance by learned external-noise filtering. For clear displays, practice improves performance by improved amplification or enhancement of the stimulus. Can these two mechanisms of perceptual improvement be trained separately? In an orientation task, we found that training with clear displays generalized to performance in noisy displays, but we did not find the reverse to be true. In noisy displays, the noise in the stimulus limits performance. In clear displays, performance is limited by the noisiness of internal representations and processes. Our results suggest that training in one display condition optimizes the limiting factor(s) in performance in that condition and that noise filtering is also improved by exposure to the stimulus in clear displays. The asymmetric pattern of transfer implies the existence of two independent mechanisms of perceptual learning, which may reflect channel reweighting in the adult visual system. These results also suggest that training operators with clear stimuli may suffice to improve performance in a range of clear and noisy environments through simultaneous learning by the two mechanisms.

Journal ArticleDOI
TL;DR: It is hypothesized that perceptual deterioration may be caused by changes in the ability for attention to selectively enhance the responses of relatively low-level orientation-selective sensory neurons, possibly within the primary visual cortex.
Abstract: Repeated within-day testing on a texture discrimination task leads to retinotopically specific decreases in performance. Although perceptual learning has been shown to be highly specific to the retinotopic location and characteristics of the trained stimulus, the specificity of perceptual deterioration has not been studied. We investigated the similarities between learning and deterioration by examining whether deterioration transfers to new distractor or target orientations or to the untrained eye. Participants performed a texture discrimination task in three one-hour sessions. We tested the specificity of deterioration in the final session by switching either the orientation of the background or the target elements by 90°. We found that performance deteriorated steadily both within and across the first two sessions and was specific to the target but not the distractor orientation. In a separate experiment, we found that deterioration transferred to the untrained eye. Changes in performance were independent of reported sleepiness and awareness of stimulus changes, arguing against the possibility that perceptual deterioration is due to general fatigue. Rather, we hypothesize that perceptual deterioration may be caused by changes in the ability for attention to selectively enhance the responses of relatively low-level orientation-selective sensory neurons, possibly within the primary visual cortex. Further, the differences in specificity profiles between learning and deterioration suggest separate underlying mechanisms that occur within the same cortical area.

Journal ArticleDOI
TL;DR: The results indicate, for the first time, a latent, hours-long consolidation phase in a human auditory verbal learning task, which occurs even during the awake state, and show that delayed gains in human perceptual learning do not require sleep.
Abstract: Large gains in performance, evolving hours after practice has terminated, were reported in a number of visual and some motor learning tasks, as well as recently in an auditory nonverbal discrimination task. It was proposed that these gains reflect a latent phase of experience-triggered memory consolidation in human skill learning. It is not clear, however, whether and when delayed gains in performance evolve following training in an auditory verbal identification task. Here we show that normal-hearing young adults trained to identify consonant-vowel stimuli in increasing levels of background noise showed significant, robust, delayed gains in performance that became effective not earlier than 4 h post-training, with most participants improving at more than 6 h post-training. These gains were retained for over 6 mo. Moreover, although it has been recently argued that time including sleep, rather than time per se, is necessary for the evolution of delayed gains in human perceptual learning, our results show that 12 h post-training in the waking state were as effective as 12 h, including no less than 6 h night's sleep. Altogether, the results indicate, for the first time, the existence of a latent, hours-long, consolidation phase in a human auditory verbal learning task, which occurs even during the awake state.

Journal ArticleDOI
TL;DR: fMRI of the primary visual cortex reveals a retinotopically specific correlate of perceptual learning; the behavioral specificity of the learning effects supports an involvement of V1 in perceptual learning rather than in unspecific attentional effects.
Abstract: Perceptual learning involves the specific and relatively permanent modification of perception following a sensory experience. In psychophysical experiments, the specificity of the learning effects to the trained stimulus attributes (e.g., visual field position or stimulus orientation) is often attributed to assumed neural modifications at an early cortical site within the visual processing hierarchy. We directly investigated a neural correlate of perceptual learning in the primary visual cortex using fMRI. Twenty volunteers practiced a curvature discrimination on Kanizsa-type illusory contours in the MR scanner. Practice-induced changes in the BOLD response to illusory contours were compared between the pretraining and the posttraining block in those areas of the primary visual cortex (V1) that, in the same session, had been identified to represent real contours at corresponding visual field locations. A retinotopically specific BOLD signal increase to illusory contours was observed as a consequence of the training, possibly signaling the formation of a contour representation, which is necessary for performing the curvature discrimination. The effects of perceptual training were maintained over a period of about 10 months, and they were specific to the trained visual field position. The behavioral specificity of the learning effects supports an involvement of V1 in perceptual learning, and not in unspecific attentional effects.

Proceedings ArticleDOI
17 Jan 2005
TL;DR: Preliminary results from a human legibility trial with 57 volunteers that yielded 4275 CAPTCHA challenges and responses show that subjective rating of difficulty is strongly (and usefully) correlated with illegibility, and early insights emerging from these data are presented.
Abstract: A reading-based CAPTCHA designed to resist character-segmentation attacks, called 'ScatterType,' is described. Its challenges are pseudorandomly synthesized images of text strings rendered in machine-print typefaces: within each image, characters are fragmented using horizontal and vertical cuts, and the fragments are scattered by vertical and horizontal displacements. This scattering is designed to defeat all methods known to us for automatic segmentation into characters. As in the BaffleText CAPTCHA, English-like but unspellable text-strings are used to defend against known-dictionary attacks. In contrast to the PessimalPrint and BaffleText CAPTCHAs (and others), no physics-based image degradations, occlusions, or extraneous patterns are employed. We report preliminary results from a human legibility trial with 57 volunteers that yielded 4275 CAPTCHA challenges and responses. ScatterType human legibility remains remarkably high even on extremely degraded cases. We speculate that this is due to Gestalt perception abilities assisted by style-specific (here, typeface-specific) consistency among primitive shape features of character fragments. Although recent efforts to automate style-consistent perceptual skills have reported progress, the best known methods do not yet pose a threat to ScatterType. The experimental data also show that subjective rating of difficulty is strongly (and usefully) correlated with illegibility. In addition, we present early insights emerging from these data as we explore the ScatterType design space -- choice of typefaces, 'words', cut positioning, and displacements -- with the goal of locating regimes in which ScatterType challenges remain comfortably legible to almost all people but strongly resist machine-vision methods for automatic segmentation into characters.
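The core cut-and-scatter manipulation is straightforward to mock up. The sketch below uses hypothetical parameters (one horizontal and one vertical cut, rather than ScatterType's full pseudorandom scheme): it fragments a glyph bitmap and displaces each fragment independently.

```python
import numpy as np

def scatter_glyph(glyph, rng, max_shift=2):
    # One horizontal and one vertical cut split the glyph into 4 fragments;
    # each fragment is then displaced independently (illustrative parameters,
    # loosely following the ScatterType description).
    h, w = glyph.shape
    cy = int(rng.integers(1, h))          # horizontal cut position
    cx = int(rng.integers(1, w))          # vertical cut position
    pad = max_shift
    out = np.zeros((h + 2 * pad, w + 2 * pad), dtype=glyph.dtype)
    for rows, cols in [(slice(0, cy), slice(0, cx)),
                       (slice(0, cy), slice(cx, w)),
                       (slice(cy, h), slice(0, cx)),
                       (slice(cy, h), slice(cx, w))]:
        dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
        frag = glyph[rows, cols]
        y0 = rows.start + pad + dy
        x0 = cols.start + pad + dx
        region = out[y0:y0 + frag.shape[0], x0:x0 + frag.shape[1]]
        np.maximum(region, frag, out=region)   # paste, letting overlaps merge
    return out
```

Real ScatterType operates per character within rendered text-string images and varies exactly these parameters (cut positioning and displacements, along with typeface and 'word' choice) in its design-space exploration.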

Journal ArticleDOI
TL;DR: The authors examined perceptual learning of nonspeech auditory categories in an interactive multi-modal training paradigm, where participants played a computer game in which they navigated through a three-dimensional space while responding to animated characters encountered along the way.
Abstract: This study examined perceptual learning of spectrally complex nonspeech auditory categories in an interactive multi-modal training paradigm. Participants played a computer game in which they navigated through a three-dimensional space while responding to animated characters encountered along the way. Characters’ appearances in the game correlated with distinctive sound category distributions, exemplars of which repeated each time the characters were encountered. As the game progressed, the speed and difficulty of required tasks increased and characters became harder to identify visually, so quick identification of approaching characters by sound patterns was, although never required or encouraged, of gradually increasing benefit. After 30 min of play, participants performed a categorization task, matching sounds to characters. Despite not being informed of audio-visual correlations, participants exhibited reliable learning of these patterns at posttest. Categorization accuracy was related to several measu...

Journal ArticleDOI
01 Apr 2005
TL;DR: A new stimulation protocol is described that improves haptic performance in humans in a highly systematic way through unattended activation-based learning, in parallel with cortical reorganization in somatosensory cortex.
Abstract: Human haptic performance is not fixed, but subject to major alterations through learning processes. We describe a new stimulation protocol that makes it possible to improve haptic performance in humans in a highly systematic way through unattended activation-based learning. The so-called coactivation protocol is based upon temporal constraints of Hebbian learning, in which simultaneity plays a key role for the induction of plastic changes. We provide an overview of the potential of coactivation by summarizing recent findings showing that coactivation alters a broad range of basic as well as cognitively demanding types of haptic performance, in parallel with cortical reorganization in somatosensory cortex. For example, coactivation applied to the tip of the index finger, or to all fingers of the dominant hand, improves tactile acuity, but also haptic object recognition, and speeds up multiple-choice reaction times. Because such changes persist between 24 h and 1 week without further intervention, we interpret the underlying processes as a particular form of perceptual learning. We describe results where coactivation has been utilized for therapeutic purposes in impaired human populations, we outline new developments to optimize and extend unattended activation-based learning protocols, and we sketch the next steps necessary to apply the concept of unattended activation-based learning on a regular and reliable basis as a therapeutic tool to selectively interfere with impaired haptic performance.

Journal ArticleDOI
TL;DR: An essential role of stimulus temporal patterning is demonstrated in enabling perceptual learning by showing that 'unlearnable' contrast and motion-direction discrimination can be readily learned when stimuli are practiced in a fixed temporal pattern.
Abstract: Little is known about how temporal stimulus factors influence perceptual learning. Here we demonstrate an essential role of stimulus temporal patterning in enabling perceptual learning by showing that 'unlearnable' contrast and motion-direction discrimination (resulting from random interleaving of stimuli) can be readily learned when stimuli are practiced in a fixed temporal pattern. This temporal patterning does not facilitate learning by reducing stimulus uncertainty; further, learning enabled by temporal patterning can later generalize to randomly presented stimuli.

Journal ArticleDOI
TL;DR: Categorical perception may result from attentionally modulated perceptual learning about diagnostic category features, based upon orientation-selective stages of analysis within the visual processing stream, which argues strongly that category learning can alter observers' perception of the world.

MonographDOI
06 May 2005
TL;DR: This edited volume surveys the auditory cortex across three themes: auditory cortical fields and their functions, the coding of sounds, and plasticity, learning, and cognition.
Abstract: Contents: Preface. Part I: Auditory Cortical Fields and Their Functions. E. Budinger, Introduction: Auditory Cortical Fields and Their Functions. J.H. Kaas, T.A. Hackett, Subdivisions and Connections of the Auditory Cortex in Primates: A Working Model. P. Morosan, J. Rademacher, N. Palomero-Gallagher, K. Zilles, Anatomical Organization of the Human Auditory Cortex: Cytoarchitecture and Transmitter Receptors. D.A. Hall, Sensitivity to Spectral and Temporal Properties of Sound in Human Non-Primary Auditory Cortex. S. Clarke, M. Adriani, E. Tardif, "What" and "Where" in Human Audition: Evidence From Anatomical, Activation, and Lesion Studies. K. Imaizumi, C.C. Lee, J.F. Linden, J.A. Winer, C.E. Schreiner, The Anterior Field of Auditory Cortex: Neurophysiological and Neuroanatomical Organization. H.E. Heffner, The Neurobehavioral Study of Auditory Cortex. M. Brosch, H. Scheich, Non-Acoustic Influence on Neural Activity in Auditory Cortex. J.F. Brugge, I.O. Volkov, R.A. Reale, P.C. Garell, H. Kawasaki, H. Oya, Q. Li, M.A. Howard III, The Posterolateral Superior Temporal Auditory Field in Humans: Functional Organization and Connectivity. P. Belin, R.J. Zatorre, Voice Processing in Human Auditory Cortex. H. Ackermann, I. Hertrich, W. Lutzenberger, K. Mathiak, Cerebral Organization of Speech Sound Perception: Hemispheric Lateralization Effects at the Level of the Supratemporal Plane, the Inferior Dorsolateral Frontal Lobe and the Cerebellum. Part II: Coding of Sounds. M. Brosch, Introduction: Coding of Sounds. P. Heil, H. Neubauer, Toward a Unifying Basis of Auditory Thresholds. J.C. Middlebrooks, S. Furukawa, G.C. Stecker, B.J. Mickey, Distributed Representation of Sound-Source Location in the Auditory Cortex. B. Wible, T. Nicol, N. Kraus, Encoding of Complex Sounds in an Animal Model: Implications for Understanding Speech Perception in Humans. J.J. Eggermont, Correlated Neural Activity: Epiphenomenon or Part of the Neural Code? A.E.P. Villa, Spatio-Temporal Patterns of Spike Occurrences in Freely-Moving Rats Associated to Perception of Human Vowels. E. Ahissar, M. Ahissar, Processing of the Temporal Envelope of Speech. I. Taniguchi, S. Sugimoto, A. Hess, J. Horikawa, Y. Hosokawa, H. Scheich, Spatio-Temporal Patterns of Responses to Pure Tones and Frequency Modulated Sounds in the Guinea Pig Auditory Cortex. I. Nelken, L. Las, N. Ulanovsky, D. Farkas, Levels of Auditory Processing: The Subcortical Auditory System, Primary Auditory Cortex, and the Hard Problems of Auditory Perception. S.J. Eliades, X. Wang, Dynamics of Vocalization-Induced Sensory-Motor Interactions in the Primate Auditory Cortex. Part III: Plasticity, Learning, and Cognition. R. Konig, Introduction: Plasticity, Learning, and Cognition. J-M. Edeline, Learning-Induced Plasticity in the Thalamo-Cortical Auditory System: Should We Move From Rate Code to Temporal Code Descriptions? H. Scheich, F.W. Ohl, H. Schulze, A. Hess, A. Brechmann, What Is Reflected in Auditory Cortex Activity: Properties of Sound Stimuli or What the Brain Does With Them? D. Irvine, M. Brown, R. Martin, V. Park, Auditory Perceptual Learning and Cortical Plasticity. F.W. Ohl, H. Scheich, W.J. Freeman, Neurodynamics in Auditory Cortex During Category Learning. J. Fritz, M. Elhilali, S. Shamma, Task-Dependent Adaptive Plasticity of Receptive Fields in Primary Auditory Cortex of the Ferret. J. Russeler, W. Nager, J. Mobes, T.F. Munte, Cognitive Adaptations and Neuroplasticity: Lessons From Event-Related Brain Potentials.

Journal ArticleDOI
TL;DR: Despite fundamental differences between the linear amplifier model (LAM) and the perceptual template model (PTM), both models show that learning improves the perceptual template (filter), making it better able to extract the crucial information from the signal.
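The two observer models compared here can be sketched as discriminability (d') equations: the linear amplifier model has a single equivalent internal noise, while the perceptual template model adds a nonlinear transducer plus multiplicative and additive noise sources. This is a schematic sketch assuming the standard forms from the observer-model literature (e.g. Lu and Dosher); all parameter values below are illustrative, not fitted to any data.

```python
import math

def dprime_lam(c, beta=1.0, n_ext=0.0, n_eq=0.1):
    """Linear amplifier model: signal gain beta, contrast c,
    external noise n_ext, single equivalent internal noise n_eq."""
    return beta * c / math.sqrt(n_ext**2 + n_eq**2)

def dprime_ptm(c, beta=1.0, gamma=2.0, n_ext=0.0, n_mul=0.2, n_add=0.01):
    """Perceptual template model: template gain beta, transducer
    nonlinearity gamma, multiplicative noise n_mul, additive noise n_add."""
    s = (beta * c) ** gamma          # transduced signal energy
    e = n_ext ** (2 * gamma)         # transduced external noise energy
    return s / math.sqrt(e + n_mul**2 * (s**2 + e) + n_add**2)

# Learning as template retuning: a larger effective beta means the
# filter passes more signal, raising d' in both models.
for beta in (1.0, 2.0):
    print(f"beta={beta}: LAM d'={dprime_lam(0.2, beta=beta):.2f}, "
          f"PTM d'={dprime_ptm(0.2, beta=beta):.2f}")
```

In both models, increasing the template gain raises d', which is the shared conclusion the TL;DR points to: learning sharpens the filter so it extracts more of the task-relevant signal.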