scispace - formally typeset

Showing papers on "Perceptual learning published in 2013"


Journal ArticleDOI
TL;DR: It is demonstrated that auditory-based cognitive training can partially restore age-related deficits in temporal processing in the brain; this plasticity in turn promotes better cognitive and perceptual skills.
Abstract: Neural slowing is commonly noted in older adults, with consequences for sensory, motor, and cognitive domains. One of the deleterious effects of neural slowing is impairment of temporal resolution; older adults, therefore, have reduced ability to process the rapid events that characterize speech, especially in noisy environments. Although hearing aids provide increased audibility, they cannot compensate for deficits in auditory temporal processing. Auditory training may provide a strategy to address these deficits. To that end, we evaluated the effects of auditory-based cognitive training on the temporal precision of subcortical processing of speech in noise. After training, older adults exhibited faster neural timing and experienced gains in memory, speed of processing, and speech-in-noise perception, whereas a matched control group showed no changes. Training was also associated with decreased variability of brainstem response peaks, suggesting a decrease in temporal jitter in response to a speech signal. These results demonstrate that auditory-based cognitive training can partially restore age-related deficits in temporal processing in the brain; this plasticity in turn promotes better cognitive and perceptual skills.

209 citations


Journal ArticleDOI
TL;DR: Investigation of the effectiveness of three types of vocabulary annotations on vocabulary learning for EFL college students in Taiwan showed that the version with text plus picture was the most effective type of vocabulary annotation.
Abstract: The first goal of the study described here was to investigate the effectiveness of three types of vocabulary annotations on vocabulary learning for EFL college students in Taiwan: text annotation only, text plus picture, and text plus picture and sound. The second goal of the study was to determine whether learners with certain perceptual learning styles benefited more from a particular type of vocabulary annotation. The perceptual learning styles investigated were auditory, visual-verbal (with text), visual-nonverbal (with pictures), and mixed preferences. The results of the study showed that the version with text plus picture was the most effective type of vocabulary annotation. Perceptual learning styles did not seem to have a significant influence on the effectiveness of vocabulary annotations.

180 citations


Journal ArticleDOI
TL;DR: How data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning is discussed and the possible neural basis of priors is reviewed.
Abstract: Expectations are known to greatly affect our experience of the world. A growing theory in computational neuroscience is that perception can be successfully described using Bayesian inference models and that the brain is "Bayes-optimal" under some constraints. In this context, expectations are particularly interesting, because they can be viewed as prior beliefs in the statistical inference process. A number of questions remain unsolved, however. For example: How fast do priors change over time? Are there limits to the complexity of the priors that can be learned? How do an individual's priors compare to the true scene statistics? Can we unlearn priors that are thought to correspond to natural scene statistics? Where and what are the neural substrates of priors? Focusing on the perception of visual motion, we here review recent studies from our laboratories and others addressing these issues. We discuss how these data on motion perception fit within the broader literature on perceptual Bayesian priors, perceptual expectations, and statistical and perceptual learning, and review the possible neural basis of priors.

148 citations
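The prior-as-belief idea reviewed above can be made concrete numerically. The following Python fragment is an illustrative toy (not the authors' model): with a Gaussian likelihood and a Gaussian prior, the posterior mean is a precision-weighted average, so a zero-mean "slow speed" prior biases speed estimates toward zero, more strongly as sensory noise grows.

```python
def posterior_speed(measured, sigma_like, prior_mean=0.0, sigma_prior=10.0):
    """Posterior mean for a Gaussian likelihood combined with a Gaussian prior.

    With conjugate Gaussians, the posterior mean is a precision-weighted
    average of the sensory measurement and the prior mean.
    """
    w_like = 1.0 / sigma_like ** 2    # precision of the sensory evidence
    w_prior = 1.0 / sigma_prior ** 2  # precision of the prior belief
    return (w_like * measured + w_prior * prior_mean) / (w_like + w_prior)

# A zero-mean "slow speed" prior pulls estimates toward zero; the pull
# grows as the sensory noise grows (e.g., at low stimulus contrast).
print(posterior_speed(10.0, sigma_like=1.0))  # ≈ 9.90 (reliable evidence)
print(posterior_speed(10.0, sigma_like=5.0))  # = 8.00 (noisy evidence)
```

All parameter values here (the prior width of 10, the noise levels) are arbitrary choices for illustration.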


Book
28 Nov 2013
TL;DR: In this book, the author develops a perceptual model of emotion, weighs arguments for and against it, and explores the relations among emotion, understanding, attention, and virtue.
Abstract: Introduction 1. Towards the Perceptual Model 2. The Perceptual Model 3. Against the Perceptual Model 4. Emotion and Understanding 5. Emotion, Attention, and Virtue Bibliography

139 citations


Journal ArticleDOI
TL;DR: Tapping performance was related to reading, attention, and backward masking, motivating future research into whether beat synchronization training can improve not only reading ability, but potentially executive function and auditory processing as well.

132 citations


Journal ArticleDOI
TL;DR: This paper presented modality exclusivity norms for 400 randomly selected noun concepts, for which participants provided perceptual strength ratings across five sensory modalities (i.e., hearing, taste, touch, smell, and vision).
Abstract: We present modality exclusivity norms for 400 randomly selected noun concepts, for which participants provided perceptual strength ratings across five sensory modalities (i.e., hearing, taste, touch, smell, and vision). A comparison with previous norms showed that noun concepts are more multimodal than adjective concepts, as nouns tend to subsume multiple adjectival property concepts (e.g., perceptual experience of the concept baby involves auditory, haptic, olfactory, and visual properties, and hence leads to multimodal perceptual strength). To show the value of these norms, we then used them to test a prediction of the sound symbolism hypothesis: Analysis revealed a systematic relationship between strength of perceptual experience in the referent concept and surface word form, such that distinctive perceptual experience tends to attract distinctive lexical labels. In other words, modality-specific norms of perceptual strength are useful for exploring not just the nature of grounded concepts, but also the nature of form-meaning relationships. These norms will be of benefit to those interested in the representational nature of concepts, the roles of perceptual information in word processing and in grounded cognition more generally, and the relationship between form and meaning in language development and evolution.

122 citations


Journal ArticleDOI
TL;DR: Perceptual learning alters the weighting of both early and midlevel representations of the visual system, consistent with single-cell recording studies.
Abstract: Improvements in performance on visual tasks due to practice are often specific to a retinal position or stimulus feature. Many researchers suggest that specific perceptual learning alters selective retinotopic representations in early visual analysis. However, transfer is almost always practically advantageous, and it does occur. If perceptual learning alters location-specific representations, how does it transfer to new locations? An integrated reweighting theory explains transfer over retinal locations by incorporating higher level location-independent representations into a multilevel learning system. Location transfer is mediated through location-independent representations, whereas stimulus feature transfer is determined by stimulus similarity at both location-specific and location-independent levels. Transfer to new locations/positions differs fundamentally from transfer to new stimuli. After substantial initial training on an orientation discrimination task, switches to a new location or position are compared with switches to new orientations in the same position, or switches of both. Position switches led to the highest degree of transfer, whereas orientation switches led to the highest levels of specificity. A computational model of integrated reweighting is developed and tested that incorporates the details of the stimuli and the experiment. Transfer to an identical orientation task in a new position is mediated via more broadly tuned location-invariant representations, whereas changing orientation in the same position invokes interference or independent learning of the new orientations at both levels, reflecting stimulus dissimilarity. Consistent with single-cell recording studies, perceptual learning alters the weighting of both early and midlevel representations of the visual system.

118 citations
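The core mechanism that reweighting theories posit, learning the decision weights over fixed sensory channels rather than changing the channels themselves, lends itself to a compact simulation. The sketch below is a hypothetical toy (not the paper's actual computational model): Gaussian-tuned channels feed a perceptron-style decision whose weights are adjusted by a delta rule only on error trials.

```python
import math
import random

random.seed(1)

N_CHANNELS = 8
CENTERS = [i / (N_CHANNELS - 1) for i in range(N_CHANNELS)]

def responses(stim, noise=0.2):
    """Noisy Gaussian-tuned channel responses (stand-in for early visual units)."""
    return [math.exp(-((stim - c) ** 2) / 0.02) + random.gauss(0, noise)
            for c in CENTERS]

def decide(w, r):
    """Sign of the weighted channel sum: the perceptual decision."""
    return 1 if sum(wi * ri for wi, ri in zip(w, r)) > 0 else -1

def train(trials=2000, lr=0.05):
    """Delta-rule reweighting: adjust decision weights only on errors."""
    w = [0.0] * N_CHANNELS
    for _ in range(trials):
        label = random.choice([-1, 1])      # which of two nearby "orientations"
        r = responses(0.5 + 0.05 * label)
        if decide(w, r) != label:
            w = [wi + lr * label * ri for wi, ri in zip(w, r)]
    return w

def accuracy(w, trials=1000):
    hits = 0
    for _ in range(trials):
        label = random.choice([-1, 1])
        hits += decide(w, responses(0.5 + 0.05 * label)) == label
    return hits / trials

w = train()
print(accuracy(w))  # typically well above chance (0.5) after reweighting
```

Transfer in this framework would correspond to reusing the learned weights over a second, location-independent channel layer; the toy above shows only the single-layer reweighting step.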


Book ChapterDOI
TL;DR: The involvement of PL in complex cognitive tasks and why these connections, along with contemporary experimental and neuroscientific research in perception, challenge widely held accounts of the relationships among perception, cognition, and learning are described.
Abstract: Recent research indicates that perceptual learning (PL)—experience-induced changes in the way perceivers extract information—plays a larger role in complex cognitive tasks, including abstract and symbolic domains, than has been understood in theory or implemented in instruction. Here, we describe the involvement of PL in complex cognitive tasks and why these connections, along with contemporary experimental and neuroscientific research in perception, challenge widely held accounts of the relationships among perception, cognition, and learning. We outline three revisions to common assumptions about these relations: 1) Perceptual mechanisms provide complex and abstract descriptions of reality; 2) Perceptual representations are often amodal, not limited to modality-specific sensory features; and 3) Perception is selective. These three properties enable relations between perception and cognition that are both synergistic and dynamic, and they make possible PL processes that adapt information extraction to optimize task performance. While PL is pervasive in natural learning and in expertise, it has largely been neglected in formal instruction. We describe an emerging PL technology that has already produced dramatic learning gains in a variety of academic and professional learning contexts, including mathematics, science, aviation, and medical learning.

106 citations


Journal ArticleDOI
TL;DR: When eye and hand motor preparation is disentangled from perceptual decisions, sensorimotor areas are not involved in accumulating sensory evidence toward a perceptual decision, and sensory evidence levels modulate decision and motor preparation stages differently in different IPS regions, suggesting functional heterogeneity of the IPS.
Abstract: The extent to which different cognitive processes are “embodied” is widely debated. Previous studies have implicated sensorimotor regions such as lateral intraparietal (LIP) area in perceptual decision making. This has led to the view that perceptual decisions are embodied in the same sensorimotor networks that guide body movements. We use event-related fMRI and effective connectivity analysis to investigate whether the human sensorimotor system implements perceptual decisions. We show that when eye and hand motor preparation is disentangled from perceptual decisions, sensorimotor areas are not involved in accumulating sensory evidence toward a perceptual decision. Instead, inferior frontal cortex increases its effective connectivity with sensory regions representing the evidence, is modulated by the amount of evidence, and shows greater task-positive BOLD responses during the perceptual decision stage. Once eye movement planning can begin, however, an intraparietal sulcus (IPS) area, putative LIP, participates in motor decisions. Moreover, sensory evidence levels modulate decision and motor preparation stages differently in different IPS regions, suggesting functional heterogeneity of the IPS. This suggests that different systems implement perceptual versus motor decisions, using different neural signatures.

98 citations


Journal ArticleDOI
TL;DR: It is suggested that value interacts with perceptual salience to modulate value-based attentional capture, and that the extent to which value information captures attention depends on the biological significance of the value attribute.
Abstract: Previous research demonstrated that associating a stimulus with value (e.g., monetary reward) can increase its salience and induce a value-driven attentional capture when it becomes a distractor in visual search. Here we investigate to what extent this value-driven attentional capture is affected by the perceptual salience of the stimulus and the type of value attached to the stimulus. We showed that a color previously associated with monetary gain or loss impaired subsequent search for a unique shape target (Experiment 1), but a shape that was previously associated with gain or loss did not affect search for a unique color target (Experiments 2A and 2B), indicating that the associative learning of value and the effect of value-driven attentional capture are modulated by the perceptual salience of a stimulus. The value-based attentional capture recurred when the shape distractor was paired more strongly with monetary loss (Experiment 2C) or was paired with pain stimulation (Experiment 3), indicating that when the value is significant enough to an organism, it can render a perceptually less salient stimulus capable of capturing attention in visual search. These results suggest that value interacts with perceptual salience to modulate the value-based attentional capture, and the extent of value information capturing attention depends on the biological significance of the value attribute.

94 citations


Journal ArticleDOI
TL;DR: It is shown that particular aspects of the readout process can have specific, identifiable effects on the threshold, slope, upper asymptote, time dependence, and choice dependence of psychometric functions.
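For concreteness, the quantities named in this summary (threshold, slope, and upper asymptote) can be read directly off a standard psychometric-function parameterization. The Weibull form below is one common convention, assumed here purely for illustration:

```python
import math

def weibull_psychometric(x, threshold, slope, guess=0.5, lapse=0.02):
    """Proportion correct as a function of stimulus intensity x.

    threshold : intensity scale of the performance transition
    slope     : steepness of the transition
    guess     : lower asymptote (0.5 for a two-alternative task)
    lapse     : lapse rate, setting the upper asymptote at 1 - lapse
    """
    p = 1.0 - math.exp(-((x / threshold) ** slope))
    return guess + (1.0 - guess - lapse) * p

# At x == threshold the Weibull term equals 1 - 1/e ≈ 0.632, so with a
# 0.5 guess rate and 0.02 lapse rate performance is ≈ 0.80 here.
print(weibull_psychometric(1.0, threshold=1.0, slope=3.0))
```

Readout changes of the kind the paper discusses would show up as shifts in these parameters, e.g., a lapse-rate change moves the upper asymptote without touching the threshold.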

Journal ArticleDOI
TL;DR: A network model based on an interaction between recurrent inputs to V1 and intrinsic connections within V1, which accounts for task-dependent changes in the properties of V1 neurons, and shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.
Abstract: The visual system uses continuity as a cue for grouping oriented line segments that define object boundaries in complex visual scenes. Many studies support the idea that long-range intrinsic horizontal connections in early visual cortex contribute to this grouping. Top-down influences in primary visual cortex (V1) play an important role in the processes of contour integration and perceptual saliency, with contour-related responses being task dependent. This suggests an interaction between recurrent inputs to V1 and intrinsic connections within V1 that enables V1 neurons to respond differently under different conditions. We created a network model that simulates parametrically the control of local gain by hypothetical top-down modification of local recurrence. These local gain changes, as a consequence of network dynamics in our model, enable modulation of contextual interactions in a task-dependent manner. Our model displays contour-related facilitation of neuronal responses and differential foreground vs. background responses over the neuronal ensemble, accounting for the perceptual pop-out of salient contours. It quantitatively reproduces the results of single-unit recording experiments in V1, highlighting salient contours and replicating the time course of contextual influences. We show by means of phase-plane analysis that the model operates stably even in the presence of large inputs. Our model shows how a simple form of top-down modulation of the effective connectivity of intrinsic cortical connections among biophysically realistic neurons can account for some of the response changes seen in perceptual learning and task switching.
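A minimal sketch can illustrate the gain-modulation idea described in this abstract: a few recurrently coupled rate units whose intrinsic connections are scaled by a hypothetical top-down gain parameter. This toy (not the paper's biophysically realistic network) shows contextual facilitation growing with the gain while the dynamics remain stable:

```python
def settle(inputs, gain, w_rec=0.4, steps=400, dt=0.1):
    """Relax a small recurrent rate network to steady state.

    `gain` scales the intrinsic (recurrent) connections, standing in for a
    hypothetical top-down modulation of effective connectivity.
    """
    n = len(inputs)
    r = [0.0] * n
    for _ in range(steps):
        for i in range(n):
            # mean input from the other units, scaled by the top-down gain
            rec = gain * w_rec * sum(r[j] for j in range(n) if j != i) / (n - 1)
            drive = max(0.0, inputs[i] + rec)   # rectified-linear rate
            r[i] += dt * (-r[i] + drive)
    return r

contour = [1.0, 1.0, 1.0]         # three co-active "contour" units
low = settle(contour, gain=0.2)   # steady state ≈ 1 / (1 - 0.2 * 0.4) ≈ 1.09
high = settle(contour, gain=0.8)  # steady state ≈ 1 / (1 - 0.8 * 0.4) ≈ 1.47
print(low[0] < high[0])           # stronger gain -> more contextual facilitation
```

Stability here follows from keeping the effective loop gain (gain × w_rec) below 1, a toy analogue of the phase-plane stability the authors demonstrate.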


Journal ArticleDOI
TL;DR: Functional region-of-interest analyses revealed a key role for the HC and PrC in discrimination learning, which is consistent with representational accounts in which subregions in these MTL structures store complex spatial and object representations, respectively.
Abstract: It is debated whether subregions within the medial temporal lobe (MTL), in particular the hippocampus (HC) and perirhinal cortex (PrC), play domain-sensitive roles in learning. In the present study, two patients with differing degrees of MTL damage were first exposed to pairs of highly similar scenes, faces, and dot patterns and then asked to make repeated same/different decisions to preexposed and nonexposed (novel) pairs from the three categories (Experiment 1). We measured whether patients would show a benefit of prior exposure (preexposed > nonexposed) and whether repetition of nonexposed (and preexposed) pairs at test would benefit discrimination accuracy. Although selective HC damage impaired learning of scenes, but not faces and dot patterns, broader MTL damage involving the HC and PrC compromised discrimination learning of scenes and faces but left dot pattern learning unaffected. In Experiment 2, a similar task was run in healthy young participants in the MRI scanner. Functional region-of-interest analyses revealed that posterior HC and posterior parahippocampal gyrus showed greater activity during scene pattern learning, but not face and dot pattern learning, whereas PrC, anterior HC, and posterior fusiform gyrus were recruited during discrimination learning for faces, but not scenes and dot pattern learning. Critically, activity in posterior HC and PrC, but not the other functional region-of-interest analyses, was modulated by accuracy (correct > incorrect within a preferred category). Therefore, both approaches revealed a key role for the HC and PrC in discrimination learning, which is consistent with representational accounts in which subregions in these MTL structures store complex spatial and object representations, respectively.

Journal ArticleDOI
01 Apr 2013-Cortex
TL;DR: Results showed that motor knowledge transfers more effectively than perceptual knowledge during the offline period, irrespective of whether sleep occurred or not and whether there was a 12- or 24-h delay period between the learning and the testing phase.

Journal ArticleDOI
TL;DR: The importance of a dimensional approach for understanding the developmental origins of reduced face perception skills is highlighted, and the need for longitudinal research to truly understand how social motivation and social attention influence the development of social perceptual skills is emphasized.
Abstract: Although the extant literature on face recognition skills in Autism Spectrum Disorder (ASD) shows clear impairments compared to typically developing controls (TDC) at the group level, the distribution of scores within ASD is broad. In the present research, we take a dimensional approach and explore how differences in social attention during an eye tracking experiment correlate with face recognition skills across ASD and TDC. Emotional discrimination and person identity perception face processing skills were assessed using the Let's Face It! Skills Battery in 110 children with and without ASD. Social attention was assessed using infrared eye gaze tracking during passive viewing of movies of facial expressions and objects displayed together on a computer screen. Face processing skills were significantly correlated with measures of attention to faces and with social skills as measured by the Social Communication Questionnaire (SCQ). Consistent with prior research, children with ASD scored significantly lower on face processing skills tests but, unexpectedly, group differences in amount of attention to faces (vs. objects) were not found. We discuss possible methodological contributions to this null finding. We also highlight the importance of a dimensional approach for understanding the developmental origins of reduced face perception skills, and emphasize the need for longitudinal research to truly understand how social motivation and social attention influence the development of social perceptual skills.

Journal ArticleDOI
TL;DR: This study provides strong support for the role of inhibitory mechanisms in memory control and suggests a tight link between higher-order cognitive operations and perceptual processing.
Abstract: In the present study, the effect of memory suppression on subsequent perceptual processing of visual objects was examined within a modified think/no-think paradigm. Suppressing memories of visual objects significantly impaired subsequent perceptual identification of those objects when they were briefly encountered (Experiment 1) and when they were presented in noise (Experiment 2), relative to performance on baseline items for which participants did not undergo suppression training. However, in Experiment 3, when perceptual identification was performed on mirror-reversed images of to-be-suppressed objects, no impairment was observed. These findings, analogous to those showing forgetting of suppressed words in long-term memory, suggest that suppressing memories of visual objects might be mediated by direct inhibition of perceptual representations, which, in turn, impairs later perception of them. This study provides strong support for the role of inhibitory mechanisms in memory control and suggests a tight link between higher-order cognitive operations and perceptual processing.

Journal Article
TL;DR: In this paper, a study of human tactile microspatial learning in which participants achieved >six-fold decline in acuity threshold after multiple training sessions was conducted and effective connectivity between relevant brain regions was estimated using multivariate, autoregressive models of hidden neuronal variables obtained by deconvolution of the hemodynamic response.
Abstract: Despite considerable work, the neural basis of perceptual learning remains uncertain. For visual learning, although some studies suggested that changes in early sensory representations are responsible, other studies point to decision-level reweighting of perceptual readout. These competing possibilities have not been examined in other sensory systems, investigating which could help resolve the issue. Here we report a study of human tactile microspatial learning in which participants achieved >six-fold decline in acuity threshold after multiple training sessions. Functional magnetic resonance imaging was performed during performance of the tactile microspatial task and a control, tactile temporal task. Effective connectivity between relevant brain regions was estimated using multivariate, autoregressive models of hidden neuronal variables obtained by deconvolution of the hemodynamic response. Training-specific increases in task-selective activation assessed using the task × session interaction and associated changes in effective connectivity primarily involved subcortical and anterior neocortical regions implicated in motor and/or decision processes, rather than somatosensory cortical regions. A control group of participants tested twice, without intervening training, exhibited neither threshold improvement nor increases in task-selective activation. Our observations argue that neuroplasticity mediating perceptual learning occurs at the stage of perceptual readout by decision networks. This is consonant with the growing shift away from strictly modular conceptualization of the brain toward the idea that complex network interactions underlie even simple tasks. The convergence of our findings on tactile learning with recent studies of visual learning reconciles earlier discrepancies in the literature on perceptual learning.
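The effective-connectivity analysis rests on multivariate autoregressive (MAR) modeling of the regional time series. As a hedged illustration (a two-node toy with made-up coupling values, not the study's deconvolution pipeline), the fragment below simulates a first-order MAR process and recovers the directed coupling matrix by ordinary least squares:

```python
import random

random.seed(2)

def simulate_mar1(A, steps=5000, noise=0.1):
    """Simulate x_t = A x_{t-1} + e_t for a two-node network."""
    x = [0.0, 0.0]
    data = []
    for _ in range(steps):
        x = [A[0][0] * x[0] + A[0][1] * x[1] + random.gauss(0, noise),
             A[1][0] * x[0] + A[1][1] * x[1] + random.gauss(0, noise)]
        data.append(x)
    return data

def fit_mar1(data):
    """Ordinary least-squares estimate of A from the lag-1 normal equations."""
    X, Y = data[:-1], data[1:]
    Sxx = [[sum(a[i] * a[j] for a in X) for j in range(2)] for i in range(2)]
    Sxy = [[sum(x[i] * y[j] for x, y in zip(X, Y)) for j in range(2)]
           for i in range(2)]
    det = Sxx[0][0] * Sxx[1][1] - Sxx[0][1] * Sxx[1][0]
    inv = [[Sxx[1][1] / det, -Sxx[0][1] / det],
           [-Sxx[1][0] / det, Sxx[0][0] / det]]
    # Row j of A predicts node j from the lagged state of both nodes.
    return [[sum(inv[i][k] * Sxy[k][j] for k in range(2)) for i in range(2)]
            for j in range(2)]

# Node 1 drives node 0 (A[0][1] = 0.3) but not vice versa (A[1][0] = 0).
A_hat = fit_mar1(simulate_mar1([[0.5, 0.3], [0.0, 0.5]]))
print(round(A_hat[0][1], 2), round(A_hat[1][0], 2))  # directed coupling recovered
```

The asymmetry of the estimated off-diagonal terms is what makes such models informative about directed (effective) rather than merely correlational connectivity.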

Journal ArticleDOI
TL;DR: The results suggest a brain state-dependency of perceptual learning success in humans, opening new avenues for supportive learning tools in the clinical and educational realms.
Abstract: Learning constitutes a fundamental property of the human brain—yet an unresolved puzzle is the profound variability of the learning success between individuals. Here we highlight the relevance of individual ongoing brain states as sources of the learning variability in exposure-based somatosensory perceptual learning. Electroencephalogram recordings of ongoing rhythmic brain activity before and during learning revealed that prelearning parietal alpha oscillations as well as during-learning stimulus-induced contralateral central alpha changes are predictive for the learning outcome. These two distinct alpha rhythm sources predicted up to 64% of the observed learning variability, one source representing an idling state with posteroparietal focus and a potential link to the default mode network, the other representing the sensorimotor mu rhythm, whose desynchronization is indicative for the degree of engagement of sensorimotor neuronal populations during application of the learning stimuli. Unspecific effects due to global shifts of attention or vigilance do not explain our observations. Our study thus suggests a brain state-dependency of perceptual learning success in humans opening new avenues for supportive learning tools in the clinical and educational realms.


Journal ArticleDOI
TL;DR: This article investigated whether native Hmong speakers' first language (L1) lexical tone experience facilitates or interferes with their perception of Mandarin tones and whether training is effective for perceptual learning of second language (L2) tones.
Abstract: This study investigates whether native Hmong speakers' first language (L1) lexical tone experience facilitates or interferes with their perception of Mandarin tones and whether training is effective for perceptual learning of second language (L2) tones. In Experiment 1, three groups of beginning-level learners of Mandarin with different L1 prosodic backgrounds (Hmong, Japanese, and English) took a perception test on Mandarin tones. Both the English and Japanese groups outperformed the Hmong group in perceptual accuracy of Mandarin tones. In Experiment 2, 18 learners with different L1 backgrounds received either perception training only or perception with production training on Mandarin tones for 6 hours within 3-4 weeks. Both training paradigms were effective for perceptual learning of Mandarin tone contrasts, as the two training groups' perceptual accuracy improved significantly at posttest compared with a control group. Although Hmong speakers initially had more difficulties in perception of Mandarin tones than the other two groups, they are by no means disadvantaged by their L1 prosodic background as they gain L2 experience after intensive training.

Journal ArticleDOI
TL;DR: It is proposed that the auditory system - although able to selectively focus processing on a relevant stream of sounds - is likely to have surplus capacity to process auditory information from other streams, regardless of the perceptual load in the attended stream.

Journal ArticleDOI
TL;DR: This study suggests perceptual learning plays an integral role in motor learning, and the beneficial effects of perceptual training are found to be substantially dependent on reinforced decision-making in the sensory domain.
Abstract: Motor learning often involves situations in which the somatosensory targets of movement are, at least initially, poorly defined, as for example, in learning to speak or learning the feel of a proper tennis serve. Under these conditions, motor skill acquisition presumably requires perceptual as well as motor learning. That is, it engages both the progressive shaping of sensory targets and associated changes in motor performance. In the present study, we test the idea that perceptual learning alters somatosensory function and in so doing produces changes to human motor performance and sensorimotor adaptation. Subjects in these experiments undergo perceptual training in which a robotic device passively moves the subject's arm on one of a set of fan-shaped trajectories. Subjects are required to indicate whether the robot moved the limb to the right or the left and feedback is provided. Over the course of training both the perceptual boundary and acuity are altered. The perceptual learning is observed to improve both the rate and extent of learning in a subsequent sensorimotor adaptation task and the benefits persist for at least 24 h. The improvement in the present studies varies systematically with changes in perceptual acuity and is obtained regardless of whether the perceptual boundary shift serves to systematically increase or decrease error on subsequent movements. The beneficial effects of perceptual training are found to be substantially dependent on reinforced decision-making in the sensory domain. Passive-movement training on its own is less able to alter subsequent learning in the motor system. Overall, this study suggests perceptual learning plays an integral role in motor learning.

Journal ArticleDOI
TL;DR: It is proposed that early AV speech integration can potentially impede auditory perceptual learning, but visual top-down access to relevant auditory features can promote auditory perceptual learning.
Abstract: Speech perception under audiovisual conditions is well known to confer benefits to perception such as increased speed and accuracy. Here, we investigated how audiovisual training might benefit or impede auditory perceptual learning of speech degraded by vocoding. In Experiments 1 and 3, participants learned paired associations between vocoded spoken nonsense words and nonsense pictures in a protocol with a fixed number of trials. In Experiment 1, paired-associates (PA) audiovisual (AV) training of one group of participants was compared with audio-only (AO) training of another group. When tested under AO conditions, the AV-trained group was significantly more accurate than the AO-trained group. In addition, pre- and post-training AO forced-choice consonant identification with untrained nonsense words showed that AV-trained participants had learned significantly more than AO participants. The pattern of results pointed to their having learned at the level of the auditory phonetic features of the vocoded stimuli. Experiment 2, a no-training control with testing and re-testing on the AO consonant identification, showed that the controls were as accurate as the AO-trained participants in Experiment 1 but less accurate than the AV-trained participants. In Experiment 3, PA training alternated AV and AO conditions on a list-by-list basis within participants, and training was to criterion (92% correct). PA training with AO stimuli was reliably more effective than training with AV stimuli. We explain these discrepant results in terms of the so-called "reverse hierarchy theory" of perceptual learning and in terms of the diverse multisensory and unisensory processing resources available to speech perception. We propose that early audiovisual speech integration can potentially impede auditory perceptual learning, but visual top-down access to relevant auditory features can promote auditory perceptual learning.

Journal ArticleDOI
TL;DR: A web-based learning module based on the principles of cognitive science showed evidence of improved recognition of histopathology patterns by medical students.

Journal ArticleDOI
TL;DR: The hypothesis that tactile perceptual learning is limited by finger size is supported, and it is suspected that analogous physical constraints on perceptual learning will be found in other sensory modalities.
Abstract: In touch as in vision, perceptual acuity improves with training to an extent that differs greatly across people; even individuals with similar initial acuity may undergo markedly different improvement with training. What accounts for this variability in perceptual learning? We hypothesized that a simple physical characteristic, fingertip surface area, might constrain tactile learning, because previous research suggests that larger fingers have more widely spaced mechanoreceptors. To test our hypothesis, we trained 10 human participants intensively on a tactile spatial acuity task. During 4 d, participants completed 1900 training trials (38 50-trial blocks) in which they discriminated the orientation of square-wave gratings pressed onto the stationary index or ring finger, with auditory feedback provided to signal correct and incorrect responses. We progressively increased task difficulty by shifting to thinner groove widths whenever participants achieved ≥90% correct block performance. We took optical scans to measure surface area from the distal interphalangeal crease to the tip of the finger. Participants' acuity improved markedly on the trained finger and to a lesser extent on the untrained finger. Crucially, we found that participants' tactile spatial acuity improved toward a theoretical optimum set by their finger size; participants with worse initial performance relative to their finger size improved more with training, and posttraining performance was better correlated than pretraining performance with finger size. These results strongly support the hypothesis that tactile perceptual learning is limited by finger size. We suspect that analogous physical constraints on perceptual learning will be found in other sensory modalities.
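The block-wise adaptive schedule described in this abstract (advance to a thinner groove width whenever a 50-trial block reaches the 90%-correct criterion) can be sketched as a simple one-up staircase. The specific groove-width ladder below is an illustrative assumption, not the study's actual stimulus set; only the 90% criterion and 50-trial block size come from the abstract.

```python
# Sketch of the adaptive difficulty schedule from the tactile training study:
# difficulty (groove width) decreases whenever a 50-trial block reaches the
# 90%-correct criterion. The groove-width values are illustrative assumptions.

GROOVE_WIDTHS_MM = [3.0, 2.5, 2.0, 1.5, 1.2, 1.0, 0.8]  # assumed ladder
CRITERION = 0.90  # >= 90% correct in a block advances difficulty (per abstract)

def next_level(level: int, block_accuracy: float) -> int:
    """Return the difficulty level for the next 50-trial block."""
    if block_accuracy >= CRITERION and level + 1 < len(GROOVE_WIDTHS_MM):
        return level + 1  # move to a thinner groove width
    return level          # otherwise stay at the current width

# Example: four blocks with varying accuracy
level = 0
for acc in [0.92, 0.85, 0.90, 0.94]:
    level = next_level(level, acc)
print(GROOVE_WIDTHS_MM[level])  # groove width reached after four blocks
```

A one-directional schedule like this only increases difficulty, which matches the abstract's description; a full psychophysical staircase would also step back up after errors.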

Journal ArticleDOI
TL;DR: The author describes the Adaptive Response-Time-based Sequencing (ARTS) system, which uses each learner's accuracy and speed in interactive learning to guide spacing, sequencing, and mastery.
Abstract: Recent advances in the learning sciences offer remarkable potential to improve medical education and maximize the benefits of emerging medical technologies. This article describes 2 major innovation areas in the learning sciences that apply to simulation and other aspects of medical learning: Perceptual learning (PL) and adaptive learning technologies. PL technology offers, for the first time, systematic, computer-based methods for teaching pattern recognition, structural intuition, transfer, and fluency. Synergistic with PL are new adaptive learning technologies that optimize learning for each individual, embed objective assessment, and implement mastery criteria. The author describes the Adaptive Response-Time-based Sequencing (ARTS) system, which uses each learner's accuracy and speed in interactive learning to guide spacing, sequencing, and mastery. In recent efforts, these new technologies have been applied in medical learning contexts, including adaptive learning modules for initial medical diagnosis and perceptual/adaptive learning modules (PALMs) in dermatology, histology, and radiology. Results of all these efforts indicate the remarkable potential of perceptual and adaptive learning technologies, individually and in combination, to improve learning in a variety of medical domains.
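The abstract states that ARTS uses each learner's accuracy and speed to guide spacing, sequencing, and mastery, but does not give its scoring rule. The sketch below illustrates the general idea only: the response-time cutoff, spacing-expansion rule, and mastery streak are all assumed values, not ARTS's actual parameters.

```python
# Illustrative sketch of response-time-based adaptive sequencing in the spirit
# of ARTS: items answered incorrectly or slowly reappear sooner, and an item is
# retired ("mastered") after several consecutive fast, correct responses.
# The cutoff, spacing rule, and mastery streak are assumptions, not ARTS's own.

from dataclasses import dataclass

@dataclass
class Item:
    name: str
    delay: int = 1        # trials to wait before re-presenting this item
    streak: int = 0       # consecutive fast-and-correct responses
    mastered: bool = False

FAST_RT_S = 3.0           # assumed response-time cutoff for "fluent"
MASTERY_STREAK = 3        # assumed retirement after 3 fluent responses in a row

def update(item: Item, correct: bool, rt_s: float) -> None:
    """Reschedule an item based on one trial's accuracy and speed."""
    if correct and rt_s <= FAST_RT_S:
        item.streak += 1
        item.delay *= 2   # expand spacing when performance is fluent
        if item.streak >= MASTERY_STREAK:
            item.mastered = True
    else:
        item.streak = 0
        item.delay = 1    # errors or slow answers come back soon

item = Item("example-pattern")
for correct, rt in [(True, 2.1), (True, 1.8), (False, 6.0), (True, 2.0)]:
    update(item, correct, rt)
print(item.delay, item.streak, item.mastered)
```

The key design point, shared with the system described above, is that speed is treated as part of the mastery criterion: accuracy alone does not retire an item.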

Journal ArticleDOI
01 Mar 2013-System
TL;DR: In this paper, the authors explored the pattern of graduate learners' perceptual learning style preferences and its possible relationship with their gender, age, discipline, and self-rated proficiency level.

Journal ArticleDOI
TL;DR: Evidence for perceptual and motor learning generalization is reviewed, suggesting that generalization patterns are affected by the way in which the original memory is encoded and consolidated.

Journal ArticleDOI
TL;DR: The P2m amplitude increase and its persistence over time constitute a neuroplastic change that likely reflects enhanced object representation after stimulus experience and training, which enables listeners to improve their ability to discern fine differences in pre-voicing time.
Abstract: Background: Auditory perceptual learning persistently modifies neural networks in the central nervous system. Central auditory processing comprises a hierarchy of sound analysis and integration, which transforms an acoustical signal into a meaningful object for perception. Based on latencies and source locations of auditory evoked responses, we investigated which stage of central processing undergoes neuroplastic changes when gaining auditory experience during passive listening and active perceptual training. Young healthy volunteers participated in a five-day training program to identify two pre-voiced versions of the stop-consonant syllable 'ba', which is an unusual speech sound to English listeners. Magnetoencephalographic (MEG) brain responses were recorded during two pre-training sessions and one post-training session. Underlying cortical sources were localized, and the temporal dynamics of auditory evoked responses were analyzed.