
Showing papers on "Perceptual learning published in 2022"


Journal ArticleDOI
TL;DR: In this article, visual perceptual learning (VPL) was used to restore and enhance visual functions in neuro-ophthalmological diseases such as amblyopia, and combining VPL with transcranial direct current stimulation (tDCS) applied during training can further improve visual skills.

15 citations


Journal ArticleDOI
TL;DR: In this article, a review of recent investigations of amblyopia treatment based on perceptual learning, dichoptic training, and videogames is presented, which shows that most studies have found an improvement in some monocular and binocular visual functions, such as visual acuity, contrast sensitivity, and stereopsis.

10 citations



Journal ArticleDOI
TL;DR: In this paper, the authors adapted an established implicit learning paradigm and presented three groups of listeners with the same acoustic patterns in different presentation formats, i.e., either back-to-back, separated by a silent interval, or separated by a masker sound.
Abstract: The human auditory system is capable of learning unstructured acoustic patterns that occur repeatedly. While most previous studies on perceptual learning focused on seamless pattern repetitions, our study included several presentation formats, which are more typical for memory tasks (involving temporal delays or irrelevant information between pattern presentations), and probed active recognition of learned patterns more directly. We adapted an established implicit learning paradigm and presented three groups of listeners with the same acoustic patterns in different presentation formats, i.e., either back-to-back, separated by a silent interval, or separated by a masker sound. Participants additionally completed an unexpected memory test after the learning phase. We found substantial learning in all groups, measured indirectly via the increased sensitivity in a perceptual task for patterns that occurred repeatedly (compared to patterns that occurred only once) and more directly via above-chance recognition performance in the memory test. Pattern learning and recognition were robust across presentation formats. Therefore, we propose that similar mechanisms might underlie memory formation for initially unfamiliar sounds in everyday listening situations. Moreover, memories for unstructured acoustic patterns that were acquired implicitly through perceptual learning enable subsequent active recognition. (Article history: received 17 December 2021; accepted 20 May 2022.)

5 citations


Journal ArticleDOI
TL;DR: The authors found that both temporal jittering and onset uncertainty reduced auditory perceptual learning, and that these reductions were ameliorated when the same snippet occurred in both temporally manipulated and unmanipulated trials.
Abstract: Detecting and learning structure in sounds is fundamental to human auditory perception. Evidence for auditory perceptual learning comes from previous studies where listeners were better at detecting repetitions of a short noise snippet embedded in longer, ongoing noise when the same snippet recurred across trials compared with when the snippet was novel in each trial. However, previous work has mainly used (a) temporally regular presentations of the repeating noise snippet and (b) highly predictable intertrial onset timings for the snippet sequences. As a result, it is unclear how these temporal features affect perceptual learning. In five online experiments, participants judged whether a repeating noise snippet was present, unaware that the snippet could be unique to that trial or used in multiple trials. In two experiments, temporal regularity was manipulated by jittering the timing of noise-snippet repetitions within a trial. In two subsequent experiments, temporal onset certainty was manipulated by varying the onset time of the entire snippet sequence across trials. We found that both temporal jittering and onset uncertainty reduced auditory perceptual learning. In addition, we observed that these reductions in perceptual learning were ameliorated when the same snippet occurred in both temporally manipulated and unmanipulated trials. Our study demonstrates the importance of temporal regularity and onset certainty for auditory perceptual learning.

5 citations


Journal ArticleDOI
TL;DR: In this paper, mice were trained in an abstract task in which selected activity patterns were rewarded using an optical brain-computer interface device coupled to primary visual cortex (V1) neurons.
Abstract: Acquisition of new skills has the potential to disturb existing network function. To directly assess whether previously acquired cortical function is altered during learning, mice were trained in an abstract task in which selected activity patterns were rewarded using an optical brain-computer interface device coupled to primary visual cortex (V1) neurons. Excitatory neurons were longitudinally recorded using 2-photon calcium imaging. Despite significant changes in local neural activity during task performance, tuning properties and stimulus encoding assessed outside of the trained context were not perturbed. Similarly, stimulus tuning was stable in neurons that remained responsive following a different, visual discrimination training task. However, visual discrimination training increased the rate of representational drift. Our results indicate that while some forms of perceptual learning may modify the contribution of individual neurons to stimulus encoding, new skill learning is not inherently disruptive to the quality of stimulus representation in adult V1.

5 citations


Journal ArticleDOI
01 Mar 2022-iScience
TL;DR: In this article, the role of exogenous attention during visual perceptual learning in adults with amblyopia was investigated, showing that training on a discrimination task leads to improvements in foveal contrast sensitivity, acuity, and stereoacuity.

4 citations


Journal ArticleDOI
TL;DR: In this paper, a gas sensory system with perceptual learning, inspired by the biological olfactory system, is developed; it consists of a gas sensor, a flexible oscillator, and a memristor-type artificial synapse.
Abstract: Imbuing an artificial sensory system with the intelligence of its biological counterpart is limited by challenges in emulating perceptual learning ability at the device level. In biological systems, stimuli from the surrounding environment are detected, transmitted, and processed by receptors, afferent nerves, and the brain, respectively. This process allows living creatures to identify potential hazards and improve their adaptability in various environments. Here, inspired by the biological olfactory system, a gas sensory system with perceptual learning is developed. As a proof of concept, H2S gas at various concentrations is used as the stimulus, and the stimuli are converted to pulse-like physiological signals in the designed system, which consists of a gas sensor, a flexible oscillator, and a memristor-type artificial synapse. Furthermore, the learning ability is implemented using a supervised learning method based on the k-nearest neighbors (KNN) algorithm. The recognition accuracy can be enhanced by repeated training, illustrating great potential for use as a neuromorphic sensory system with a learning ability for applications in robotics.
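The KNN-based recognition step named above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation; the feature vectors and concentration labels below are hypothetical, chosen only to show majority-vote classification of sensor readings:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest training samples."""
    nearest = sorted(train, key=lambda sample: math.dist(sample[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Hypothetical pulse features (amplitude, frequency) labeled by H2S concentration class
train = [((0.20, 1.0), "low"), ((0.30, 1.1), "low"),
         ((0.80, 2.9), "high"), ((0.90, 3.1), "high"), ((0.85, 3.0), "high")]

print(knn_classify(train, (0.82, 2.8)))  # → high
```

Repeated training in this scheme simply adds more labeled pulse samples to `train`, which is one way the reported accuracy gains could arise.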

4 citations


Journal ArticleDOI
TL;DR: In this article, the authors used multichannel EEG to track behaviorally relevant neuroplastic changes in the auditory event-related potentials (ERPs) pre- to post-training and found that successful auditory categorical learning of music sounds is characterized by short-term functional changes in sensory coding processes superimposed on preexisting structural differences in bilateral auditory cortex.
Abstract: Categorizing sounds into meaningful groups helps listeners more efficiently process the auditory scene and is a foundational skill for speech perception and language development. Yet, how auditory categories develop in the brain through learning, particularly for non-speech sounds (e.g., music), is not well understood. Here, we asked musically naïve listeners to complete a brief (∼20 min) training session where they learned to identify sounds from a musical interval continuum (minor-major 3rds). We used multichannel EEG to track behaviorally relevant neuroplastic changes in the auditory event-related potentials (ERPs) pre- to post-training. To rule out mere exposure-induced changes, neural effects were evaluated against a control group of 14 non-musicians who did not undergo training. We also compared individual categorization performance with structural volumetrics of bilateral Heschl’s gyrus (HG) from MRI to evaluate neuroanatomical substrates of learning. Behavioral performance revealed steeper (i.e., more categorical) identification functions in the posttest that correlated with better training accuracy. At the neural level, improvement in learners’ behavioral identification was characterized by smaller P2 amplitudes at posttest, particularly over right hemisphere. Critically, learning-related changes in the ERPs were not observed in control listeners, ruling out mere exposure effects. Learners also showed smaller and thinner HG bilaterally, indicating superior categorization was associated with structural differences in primary auditory brain regions. Collectively, our data suggest successful auditory categorical learning of music sounds is characterized by short-term functional changes (i.e., greater post-training efficiency) in sensory coding processes superimposed on preexisting structural differences in bilateral auditory cortex.

4 citations


Journal ArticleDOI
TL;DR: In this paper, the authors found that the ability to rapidly adapt to changes in the auditory environment (i.e., perceptual learning) is among the processes contributing to individual differences in speech perception in adverse listening conditions.
Abstract: Older adults with age-related hearing loss exhibit substantial individual differences in speech perception in adverse listening conditions. We propose that the ability to rapidly adapt to changes in the auditory environment (i.e., perceptual learning) is among the processes contributing to these individual differences, in addition to the cognitive and sensory processes that were explored in the past. Seventy older adults with age-related hearing loss participated in this study. We assessed the relative contribution of hearing acuity, cognitive factors (working memory, vocabulary, and selective attention), rapid perceptual learning of time-compressed speech, and hearing aid use to the perception of speech presented at a natural fast rate (fast speech), speech embedded in babble noise (speech in noise), and competing speech (dichotic listening). Speech perception was modeled as a function of the other variables. For fast speech, age [odds ratio (OR) = 0.79], hearing acuity (OR = 0.62), pre-learning (baseline) perception of time-compressed speech (OR = 1.47), and rapid perceptual learning (OR = 1.36) were all significant predictors. For speech in noise, only hearing and pre-learning perception of time-compressed speech were significant predictors (OR = 0.51 and OR = 1.53, respectively). Consistent with previous findings, the severity of hearing loss and auditory processing (as captured by pre-learning perception of time-compressed speech) were strong contributors to individual differences in fast speech and speech in noise perception. Furthermore, older adults with good rapid perceptual learning can use this capacity to partially offset the effects of age and hearing loss on the perception of speech presented at fast conversational rates. Our results highlight the potential contribution of dynamic processes to speech perception.
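The odds ratios above can be turned into probability changes with simple arithmetic: an OR multiplies the odds of correct perception per unit change in the predictor. A small sketch, assuming a baseline probability of 0.50 chosen purely for illustration:

```python
def odds(p):
    """Convert a probability to odds."""
    return p / (1 - p)

# An odds ratio (OR) from logistic regression multiplies the odds of the
# outcome per unit change in the predictor. Baseline probability is assumed.
baseline_p = 0.50     # assumed baseline probability of correct perception
or_learning = 1.36    # reported OR for rapid perceptual learning (fast speech)

new_odds = odds(baseline_p) * or_learning   # 1.0 * 1.36
new_p = new_odds / (1 + new_odds)           # back-transform odds to probability
print(round(new_p, 3))  # → 0.576
```

So under this assumed baseline, a one-unit gain in rapid perceptual learning raises the predicted probability of correct perception from 0.50 to about 0.58.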

3 citations


Journal ArticleDOI
TL;DR: In this article, the authors used EEG source-imaging to explore the relationship between selective visual attention and amblyopic suppression and its role in the success of training human adults with strabismic and anisometropic amblyopia.
Abstract: Long-term and chronic visual suppression to the non-preferred eye in early childhood is a key factor in developing amblyopia, as well as a critical barrier to treat amblyopia. To explore the relationship between selective visual attention and amblyopic suppression and its role in the success of amblyopic training, we used EEG source-imaging to show that training human adults with strabismic and anisometropic amblyopia with dichoptic attention tasks improved attentional modulation of neural populations in the primary visual cortex (V1) and intraparietal sulcus (IPS). We also used psychophysics to show that training reduced interocular suppression along with visual acuity and stereoacuity improvements. Importantly, our results revealed that the reduction of interocular suppression by training was significantly correlated with the improvement of selective visual attention in both training-related and -unrelated tasks in the amblyopic eye, relative to the fellow eye. These findings suggest a relation between interocular suppression and selective visual attention bias between eyes in amblyopic vision, and that dichoptic training with high-attention demand tasks in the amblyopic eye might be an effective way to treat amblyopia.

Journal ArticleDOI
TL;DR: The initial data suggest that roll tilt perception can be improved with less than 5 hours of training and that perceptual training may contribute to a reduction in subclinical postural instability.
Abstract: The present study aimed to determine if a vestibular perceptual learning intervention could improve roll tilt self-motion perception and balance performance. Two intervention groups (N=10 each) performed 1300 trials of roll tilt at either 0.5 Hz (2 sec per motion) or 0.2 Hz (5 sec per motion) distributed over 5 days; each intervention group was provided feedback (correct/incorrect) after each trial. Roll tilt perceptual thresholds, measured using 0.2, 0.5 and 1 Hz stimuli, as well as quiet stance postural sway, were measured on day one and day six of the study. The control group (N=10), which performed no perceptual training, showed stable 0.2 Hz (+1.48%, p>0.99), 0.5 Hz (-4.0%, p>0.99), and 1 Hz (-17.48%, p=0.20) roll tilt thresholds. The 0.2 Hz training group demonstrated significant improvements in both 0.2 Hz (-23.77%, p=0.003) and 0.5 Hz (-22.2%, p=0.03) thresholds. The 0.5 Hz training group showed a significant improvement in 0.2 Hz thresholds (-19.13%, p=0.029), but not 0.5 Hz thresholds (-17.68%, p=0.052). Neither training group improved significantly at the untrained 1 Hz frequency (p>0.05). In addition to improvements in perceptual precision, the 0.5 Hz training group showed a decrease in sway when measured during "eyes open, on foam" (dz = 0.57, p = 0.032) and "eyes closed, on foam" (dz = 2.05, p < 0.001) quiet stance balance tasks. These initial data suggest that roll tilt perception can be improved with less than 5 hours of training and that vestibular perceptual training may contribute to a reduction in subclinical postural instability.
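The percent changes in threshold reported above follow the usual pre/post formula, where a negative value means a lower (better) threshold. The threshold values below are hypothetical, chosen only to reproduce the reported -23.77% change of the 0.2 Hz training group:

```python
def percent_change(pre, post):
    """Percent change from pre- to post-training threshold (negative = improvement)."""
    return (post - pre) / pre * 100

# Hypothetical roll-tilt thresholds (deg/s); the study reports only the
# percent change, so these absolute values are illustrative.
pre_threshold, post_threshold = 1.30, 0.991

print(round(percent_change(pre_threshold, post_threshold), 2))  # → -23.77
```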

Journal ArticleDOI
TL;DR: In this paper , extensive training can improve performance on almost every visual task, through a process called visual perceptual learning, which has been applied to rehabilitate impaired vision for patients with low vision.
Abstract: Extensive training can improve performance on almost every visual task, through a process called visual perceptual learning (He et al., 2021). Visual perceptual learning has been applied to rehabilitate impaired vision for patients with low vision (He et al., 2021).

Journal ArticleDOI
01 Jun 2022-Vision
TL;DR: A growing body of literature offers exciting perspectives on the use of brain stimulation to boost training-related perceptual improvements in humans; combining visual perceptual learning training with transcranial electric stimulation leads to learning-rate and generalization effects larger than those of each technique used individually.
Abstract: A growing body of literature offers exciting perspectives on the use of brain stimulation to boost training-related perceptual improvements in humans. Recent studies suggest that combining visual perceptual learning (VPL) training with concomitant transcranial electric stimulation (tES) leads to learning rate and generalization effects larger than each technique used individually. Both VPL and tES have been used to induce neural plasticity in brain regions involved in visual perception, leading to long-lasting visual function improvements. Despite being more than a century old, only recently have these techniques been combined in the same paradigm to further improve visual performance in humans. Nonetheless, promising evidence in healthy participants and in clinical population suggests that the best could still be yet to come for the combined use of VPL and tES. In the first part of this perspective piece, we briefly discuss the history, the characteristics, the results and the possible mechanisms behind each technique and their combined effect. In the second part, we discuss relevant aspects concerning the use of these techniques and propose a perspective concerning the combined use of electric brain stimulation and perceptual learning in the visual system, closing with some open questions on the topic.

Journal ArticleDOI
Di Wu, Yifan Wang, Na Liu, Pan Wang, Kewei Sun, Wei Xiao 
TL;DR: In this paper, the authors investigated the effect of tDCS over the left human middle temporal complex (hMT+) on learning to discriminate visual motion direction and found that the threshold of motion direction discrimination significantly decreased after training, but no obvious differences in the indicators of perceptual learning, such as the magnitude of improvement, transfer indexes, and learning curves, were noted among the three groups.
Abstract: Visual perceptual learning (VPL) refers to the improvement in visual perceptual abilities through training and has potential implications for clinical populations. However, improvements in perceptual learning often require hundreds or thousands of trials over weeks to months to attain, limiting its practical application. Transcranial direct current stimulation (tDCS) could potentially facilitate perceptual learning, but the results are inconsistent thus far. Thus, this research investigated the effect of tDCS over the left human middle temporal complex (hMT+) on learning to discriminate visual motion direction. Twenty-seven participants were randomly assigned to the anodal, cathodal and sham tDCS groups. Before and after training, the thresholds of motion direction discrimination were assessed in one trained condition and three untrained conditions. Participants were trained over 5 consecutive days while receiving 4 × 1 ring high-definition tDCS (HD-tDCS) over the left hMT+. The results showed that the threshold of motion direction discrimination significantly decreased after training. However, no obvious differences in the indicators of perceptual learning, such as the magnitude of improvement, transfer indexes, and learning curves, were noted among the three groups. The current study did not provide evidence of a beneficial effect of tDCS on VPL. Further research should explore the impact of the learning task characteristics, number of training sessions and the sequence of stimulation.

Posted ContentDOI
23 Jan 2022
TL;DR: This work demonstrates that perceptual attention constrains outcome-based learning by changing the strength of modality-specific cortico-cortical interactions.
Abstract: Attention is central to learning stimulus-outcome relationships. In addition to its role in learning, attention has been conceptualized as a sensory filter that improves perception. It remains unexplored whether these two aspects of attention interact at the behavioral and neural level. Thus, we investigated how learning novel stimulus-outcome associations in a multi-modal environment is influenced by the degree to which perceptual attention has been focused onto a single modality. We trained head-fixed rats to discriminate compound auditory-visual stimuli using one modality and then reduced stimulus discriminability in that modality. We observed perceptual learning and increased EEG Granger causality between frontal cortex and the behaviorally relevant sensory cortex, suggesting that perceptual attention was engaged. We then presented novel and easily discriminable stimuli in both modalities and measured outcome-driven learning to discriminate stimuli in the other modality. We observed slowed learning after engaging perceptual attention onto the previously relevant modality by requiring practice with difficult discriminations. This result could not be explained by changes in non-attentional factors, such as arousal (measured with pupillometry), number of rewards received, or shifted response criterion (measured using response velocity). Our work demonstrates that perceptual attention constrains outcome-based learning by changing the strength of modality-specific cortico-cortical interactions.

Journal ArticleDOI
TL;DR: This study showed that the short-term plastic visual perceptual training based on VR and AR technology can improve BCVA, fine stereopsis and CSF of refractive amblyopia.
Abstract: Background The treatment of amblyopia can have a substantial impact on quality of life. Conventional treatments for amblyopia have some limitations, so we sought to explore a new, effective method to treat amblyopia. This study aimed to determine the potential effect of short-term plastic visual perceptual training based on VR and AR platforms in amblyopic patients. Methods All observers were blinded to patient groupings. A total of 145 amblyopic children were randomly assigned into 2 groups: VR group (71 patients) and AR group (74 patients). In the VR group, each subject underwent a 20-min short-term plastic visual perceptual training based on a VR platform, and in the AR group, based on an AR platform. The best-corrected visual acuity (BCVA), fine stereopsis, and contrast sensitivity function (CSF) were measured before and after training. Results The BCVA (P < 0.001) and fine stereopsis (P < 0.05) improved significantly in both the VR and AR groups after training. Moreover, in the AR group, CSF values at all spatial frequencies improved significantly after training (P < 0.05), while in the VR group, only the value at spatial frequency 12 improved significantly (P = 0.008). Conclusions This study showed that short-term plastic visual perceptual training based on VR and AR technology can improve BCVA, fine stereopsis, and CSF in refractive amblyopia. It suggests that visual perceptual training based on VR and AR platforms may be applied in the treatment of amblyopia and may provide a highly immersive alternative.

Journal ArticleDOI
TL;DR: Results suggest that ketamine can augment retrieval of perceptual expectations, which may be how it induces hallucination-like experiences in humans; more broadly, mediated learning may unite the conditioning, perceptual decision-making, and even reality-monitoring accounts of psychosis in a manner that translates across species.

Journal ArticleDOI
TL;DR: A multicomponent theoretical framework to model contributions of both long- and short-term processes in perceptual learning identified ubiquitous long-term general learning and within-session relearning in most tasks.
Abstract: Practice makes perfect in almost all perceptual tasks, but how perceptual improvements accumulate remains unknown. Here, we developed a multicomponent theoretical framework to model contributions of both long- and short-term processes in perceptual learning. Applications of the framework to the block-by-block learning curves of 49 adult participants in seven perceptual tasks identified ubiquitous long-term general learning and within-session relearning in most tasks. More importantly, we also found between-session forgetting in the vernier-offset discrimination, face-view discrimination, and auditory-frequency discrimination tasks; between-session off-line gain in the visual shape search task; and within-session adaptation and both between-session forgetting and off-line gain in the contrast detection task. The main results of the vernier-offset discrimination and visual shape search tasks were replicated in a new experiment. The multicomponent model provides a theoretical framework to identify component processes in perceptual learning and a potential tool to optimize learning in normal and clinical populations.

Journal ArticleDOI
TL;DR: In this article, the authors found that configuration perceptual learning equally improved the perception of the configuration stimulus and both element stimuli, while element perceptual learning was confined to the trained element stimulus.
Abstract: Visual perceptual learning has been studied extensively and reported to enhance the perception of almost all types of training stimuli, from low- to high-level visual stimuli. Notably, high-level stimuli are often composed of multiple low-level features. Therefore, it is natural to ask whether training of high-level stimuli affects the perception of low-level stimuli and vice versa. In the present study, we trained subjects with either a high-level configuration stimulus or a low-level element stimulus. The high-level configuration stimulus consisted of two Gabors in the left and right visual fields, respectively, and the low-level element stimulus was the Gabor in the right visual field of the configuration stimulus. We measured the perceptual learning effects using the configuration stimulus and the element stimuli in both left and right visual fields. We found that the configuration perceptual learning equally improved the perception of the configuration stimulus and both element stimuli. In contrast, the element perceptual learning was confined to the trained element stimulus. These findings demonstrate an asymmetric relationship between perceptual learning of the configuration and the element stimuli and suggest a hybrid mechanism of the configuration perceptual learning. Our findings also offer a promising paradigm to promote the efficiency of perceptual learning—that is, gaining more learning effect with less training time.

Journal ArticleDOI
TL;DR: It is reported that visual perceptual learning induced a marked and long-lasting recovery of visual acuity, visual depth perception abilities, and binocular matching of orientation preference, and that a link exists between the latter two measures.
Abstract: An abnormal visual experience early in life, caused by strabismus, unequal refractive power of the eyes, or eye occlusion, is a major cause of amblyopia (lazy eye), a highly diffused neurodevelopmental disorder severely affecting visual acuity and stereopsis abilities. Current treatments for amblyopia, based on a penalization of the fellow eye, are only effective when applied during the juvenile critical period of primary visual cortex plasticity, resulting mostly ineffective at older ages. Here, we developed a new paradigm of operant visual perceptual learning performed under conditions of conventional (binocular) vision in adult amblyopic rats. We report that visual perceptual learning induced a marked and long-lasting recovery of visual acuity, visual depth perception abilities and binocular matching of orientation preference, and we provide a link between the last two parameters.

Journal ArticleDOI
TL;DR: The authors used olfactory perceptual learning, a non-associative form of learning in which discrimination between perceptually similar odorants is improved following exposure to these odorants, to better understand the cellular bases of olfactory aging in mice.

Posted ContentDOI
20 Nov 2022-bioRxiv
TL;DR: It is found that while training improves discrimination ability, it leads to increases in appearance distortion; a model is proposed in which distortions of appearance arise from increased precision of neural representations and serve to enhance distinctions between perceptual categories.
Abstract: Perceptual sensitivity often improves with training, a phenomenon known as ‘perceptual learning’. Another important perceptual dimension is appearance, the subjective sense of stimulus magnitude. Are training-induced improvements in sensitivity accompanied by more accurate appearance? Here, we examine this question by measuring both discrimination and estimation capabilities for near-horizontal motion perception, before and after training. Observers trained on either discrimination or estimation exhibited improved sensitivity, along with increases in already-large estimation biases away from horizontal. To explain this counterintuitive finding, we developed a computational observer model in which perceptual learning arises from changes in the precision of underlying neural representations. For each observer, the fitted model accounted for both discrimination performance and the distribution of estimates, and their changes after training. Our empirical findings and modeling suggest that learning enhances distinctions between categories, a potentially important aspect of real-world perception and perceptual learning.

Journal ArticleDOI
TL;DR: In this paper, the authors measured the concentration of GABA in early visual cortical areas in a time-resolved fashion before, during, and after visual perceptual learning (VPL) within subjects using functional MRS (fMRS) and then compared the concentrations between children (8 to 11 years old) and adults (18 to 35 years old).

Book ChapterDOI
05 Dec 2022
TL;DR: The authors provide a systematic review of 27 perceptual training studies conducted over the last 40 years that include the testing of generalization and/or retention of L2 speech learning, and discuss the benefits and challenges of using these learning robustness evaluation methods.
Abstract: It is widely acknowledged that second language (L2) speech acquisition is often challenging to adult learners. Certain non-native speech sounds tend to be more difficult to perceive and to produce accurately than others, even after years of experience with the L2. Adult L2 learners are therefore frequently characterized as having not only foreign accent but also accented perception (Strange 1995). Over the last four decades, numerous studies on L2 speech learning have applied training programs to improve the perception and production abilities of L2 learners and thus reduce degree of accentedness with a focus on nativeness or intelligibility (Sakai and Moorman 2018; Thomson and Derwing 2015). However, training studies with different language pairings have yielded complex findings due to the interplay of subject, task, and stimulus variables and assessment procedures (Bohn 2000; Thomson and Derwing 2015). To evaluate the efficacy of a training study, Logan and Pruitt (1995) propose that both generalization and retention of learning need to be examined. This paper provides a systematic review of 27 perceptual training studies conducted over the last 40 years which include the testing of generalization and/or retention of L2 speech learning. It overviews the use of these measures and examines how effective perceptual training is in promoting robust L2 speech learning. The review also discusses the benefits and challenges of using these learning robustness evaluation methods. The limitations of the qualitative review are presented as well as suggestions for future research.

Journal ArticleDOI
TL;DR: It is reported that repetitive passive exposure to oriented sequences that are not linked to a specific task induces a persistent, bottom-up form of learning that is stronger than top-down practice learning and generalizes across complex stimulus dimensions.
Abstract: It is increasingly understood that perceptual learning involves different types of plasticity. Thus, whereas the practice-based improvement in the ability to perform specific tasks is believed to rely on top-down plasticity, the capacity of sensory systems to passively adapt to the stimuli they are exposed to is believed to rely on bottom-up plasticity. However, top-down and bottom-up plasticity have never been investigated concurrently, and hence their relationship is not well understood. To examine whether passive exposure influences perceptual performance, we measured subjects' orientation discrimination performance around and orthogonal to the exposed orientation axes, at an exposed and an unexposed location, while oriented sine-wave gratings were presented at a fixed position. Here we report that repetitive passive exposure to oriented sequences that are not linked to a specific task induces a persistent, bottom-up form of learning that is stronger than top-down practice learning and generalizes across complex stimulus dimensions. Importantly, orientation-specific exposure learning led to a robust improvement in the discrimination of complex stimuli (shapes and natural scenes). Our results indicate that long-term sensory adaptation by passive exposure should be viewed as a form of perceptual learning that is complementary to practice learning in that it reduces constraints on speed and generalization.

Journal ArticleDOI
TL;DR: In this article, the authors examined the perceptual consequences of learning arbitrary mappings between visual stimuli and hand movements by correlating the number of dots that appeared in a target object with hand movement distance.
Abstract: The present study examined the perceptual consequences of learning arbitrary mappings between visual stimuli and hand movements. Participants moved a small cursor with their unseen hand twice to a large visual target object and then judged either the relative distance of the hand movements (Exp. 1) or the relative number of dots that appeared in the two consecutive target objects (Exp. 2) using a two-alternative forced choice method. During a learning phase, the number of dots that appeared in the target object was correlated with the hand movement distance. In Exp. 1, we observed that after the participants were trained to expect many dots with larger hand movements, they judged movements made to targets with many dots as being longer than the same movements made to targets with few dots. In Exp. 2, another group of participants who received the same training judged the same number of dots as smaller when larger rather than smaller hand movements were executed. When many dots were paired with smaller hand movements during the learning phase of both experiments, no significant changes in the perception of movements and of visual stimuli were observed. These results suggest that changes in the perception of body states and of external objects can arise when certain body characteristics co-occur with certain characteristics of the environment. They also indicate that the (dis)integration of multimodal perceptual signals depends not only on the physical or statistical relation between these signals, but also on which signal is currently attended.
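The learning-phase manipulation described above (dot counts correlated with movement distance) can be illustrated with a short simulation. The distance range, slope, and noise level below are illustrative assumptions, not values reported in the study:

```python
import random

def make_learning_trials(n_trials=200, seed=0):
    """Generate learning-phase trials in which the dot count shown in
    the target is a noisy increasing function of movement distance."""
    rng = random.Random(seed)
    trials = []
    for _ in range(n_trials):
        distance_cm = rng.uniform(5.0, 25.0)  # hand movement distance (assumed range)
        # more dots for longer movements, plus trial-to-trial noise
        dots = max(1, round(2.0 * distance_cm + rng.gauss(0, 4)))
        trials.append((distance_cm, dots))
    return trials

def pearson_r(xs, ys):
    """Pearson correlation between two equally long sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

trials = make_learning_trials()
r = pearson_r([d for d, _ in trials], [k for _, k in trials])
```

With these assumed parameters the correlation between distance and dot count is strong but imperfect, which is the kind of statistical contingency such learning paradigms rely on.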

Journal ArticleDOI
TL;DR: In this paper , a Braille-like dot pattern matching N-back WM task was used as the WM training task, with four workload levels (0, 1, 2, and 3-back levels).
Abstract: Perceptual learning is commonly assumed to enhance perception through continuous attended sensory input. However, learning is generalizable to performance in untrained stimuli and tasks. Although previous studies have observed a possible generalization effect across tasks as a result of working memory (WM) training, comparisons of the contributions of WM training and continuous attended sensory input to perceptual learning generalization are still rare. Therefore, we compared which factors contributed most to perceptual generalization and investigated which skills acquired during WM training led to tactile generalization across tasks. Here, a Braille-like dot pattern matching N-back WM task was used as the WM training task, with four workload levels (0-, 1-, 2-, and 3-back). A tactile angle discrimination (TAD) task was used as a pre- and posttest to assess improvements in tactile perception. Between tests, four subject groups were randomly assigned to the four N-back workload levels, each completing three consecutive sessions of training. The results showed that tactile N-back WM training could enhance TAD performance, with the 3-back training group having the highest TAD threshold improvement rate. Furthermore, the rate of WM capacity improvement at the 3-back level across training sessions was correlated with the rate of TAD threshold improvement. These findings suggest that continuous attended sensory input and enhanced WM capacity can lead to improvements in TAD ability, and that greater improvements in WM capacity can predict greater improvements in TAD performance.
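Discrimination thresholds such as the TAD threshold above are typically estimated with an adaptive staircase. The sketch below simulates a generic 1-up/2-down staircase (which converges near 70.7% correct in a 2AFC task) against a hypothetical observer; the observer model, step size, and all parameter values are illustrative assumptions, not the paper's procedure:

```python
import random

def simulate_staircase(true_threshold_deg=10.0, start_deg=30.0,
                       step_deg=2.0, n_reversals=12, seed=1):
    """1-up/2-down staircase: the angle difference shrinks after two
    consecutive correct responses and grows after each error.
    Returns the mean of the last reversal points as the threshold estimate."""
    rng = random.Random(seed)
    delta = start_deg
    correct_streak = 0
    last_dir = 0  # +1 = last change made the task harder, -1 = easier
    reversals = []
    while len(reversals) < n_reversals:
        # hypothetical observer: 2AFC accuracy rises from 0.5 toward 1.0
        # as the angle difference grows past the true threshold
        p_correct = 0.5 + 0.5 / (1.0 + (true_threshold_deg / max(delta, 1e-6)) ** 3)
        if rng.random() < p_correct:
            correct_streak += 1
            if correct_streak == 2:
                correct_streak = 0
                if last_dir == -1:
                    reversals.append(delta)
                last_dir = +1
                delta = max(delta - step_deg, step_deg)
        else:
            correct_streak = 0
            if last_dir == +1:
                reversals.append(delta)
            last_dir = -1
            delta += step_deg
    tail = reversals[-8:]
    return sum(tail) / len(tail)
```

Averaging only the later reversals discards the initial descent from the easy starting level, which is a common convention in staircase analyses.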

Journal ArticleDOI
TL;DR: This paper showed that perceptual similarity between study and test images, rather than image variability at learning per se, predicts face recognition for perceptually similar but not for perceptually dissimilar images.