
Showing papers on "Perceptual learning published in 2016"


Journal ArticleDOI
23 Nov 2016-Neuron
TL;DR: The current understanding of modifications in the sensorimotor pathway related to sensorimotor learning is reviewed, and the process is divided into three hierarchical levels with distinct goals: sensory perceptual learning, sensorimotor associative learning, and motor skill learning.

162 citations


Journal ArticleDOI
TL;DR: Studies of both human limb movement and speech indicate that plasticity in sensory and motor systems is reciprocally linked, pointing to an approach to motor learning in which perceptual learning and sensory plasticity play a fundamental role.

158 citations


Journal ArticleDOI
TL;DR: A biologically inspired architecture for incremental learning is presented that remains resource-efficient even in the face of the very high data dimensionalities (>1000) typically associated with perceptual problems, and it is investigated how a new perceptual (object) class can be added to a trained architecture without retraining.
Abstract: We present a biologically inspired architecture for incremental learning that remains resource-efficient even in the face of very high data dimensionalities (>1000) that are typically associated with perceptual problems. In particular, we investigate how a new perceptual (object) class can be added to a trained architecture without retraining, while avoiding the well-known catastrophic forgetting effects typically associated with such scenarios. At the heart of the presented architecture lies a generative description of the perceptual space by a self-organized approach which at the same time approximates the neighborhood relations in this space on a two-dimensional plane. This approximation, which closely imitates the topographic organization of the visual cortex, allows an efficient local update rule for incremental learning even in the face of very high dimensionalities, which we demonstrate by tests on the well-known MNIST benchmark. We complement the model by adding a biologically plausible short-term memory system, allowing it to retain excellent classification accuracy even under incremental learning in progress. The short-term memory is additionally used to reinforce new data statistics by replaying previously stored samples during dedicated “sleep” phases.
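The core ingredients described in this abstract — prototypes self-organized on a two-dimensional plane, a local update rule around the best-matching unit, and a short-term memory replayed during "sleep" phases — can be illustrated with a toy sketch. The grid size, dimensionality, learning rates, and replay schedule below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinySOM:
    """Toy self-organizing map: prototypes on a 2-D grid, updated locally."""
    def __init__(self, grid=8, dim=16, lr=0.5, sigma=1.5):
        self.coords = np.array([(i, j) for i in range(grid) for j in range(grid)])
        self.w = rng.normal(0, 0.1, size=(grid * grid, dim))
        self.lr, self.sigma = lr, sigma

    def bmu(self, x):
        # best-matching unit = prototype closest to the input
        return int(np.argmin(((self.w - x) ** 2).sum(axis=1)))

    def update(self, x):
        # local rule: only units near the BMU on the 2-D grid move toward x,
        # so an update touches a small neighborhood even in high dimensions
        b = self.bmu(x)
        d2 = ((self.coords - self.coords[b]) ** 2).sum(axis=1)
        h = np.exp(-d2 / (2 * self.sigma ** 2))  # neighborhood kernel
        self.w += self.lr * h[:, None] * (x - self.w)

# short-term memory buffer, replayed during dedicated "sleep" phases
som, memory = TinySOM(), []
for _ in range(200):
    x = rng.normal(0, 1, 16)
    som.update(x)
    memory.append(x)
    if len(memory) == 50:      # sleep phase: reinforce stored statistics
        for m in memory:
            som.update(m)
        memory.clear()
```

Adding a new class in such a scheme amounts to presenting its samples and letting the local rule recruit a patch of the map, while replay of stored samples counteracts catastrophic forgetting of earlier classes.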

146 citations


Journal ArticleDOI
TL;DR: It is found that neural representations in early visual cortex are biased toward previous perceptual decisions, suggesting that biases in perceptual decisions induced by previous stimuli may result from neural biases in sensory cortex induced by recent perceptual history.
Abstract: Sensory signals are highly structured in both space and time. These regularities allow expectations about future stimulation to be formed, thereby facilitating decisions about upcoming visual features and objects. One such regularity is that the world is generally stable over short time scales. This feature of the world is exploited by the brain, leading to a bias in perception called serial dependence: previously seen stimuli bias the perception of subsequent stimuli, making them appear more similar to previous input than they really are. What are the neural processes that may underlie this bias in perceptual choice? Does serial dependence arise only in higher-level areas involved in perceptual decision-making, or does such a bias occur at the earliest levels of sensory processing? In this study, human subjects made decisions about the orientation of grating stimuli presented in the left or right visual field while activity patterns in their visual cortex were recorded using fMRI. In line with previous behavioral reports, reported orientation on the current trial was consistently biased toward the previously reported orientation. We found that the orientation signal in V1 was similarly biased toward the orientation presented on the previous trial. Both the perceptual decision and neural effects were spatially specific, such that the perceptual decision and neural representations on the current trial were only influenced by previous stimuli at the same location. These results suggest that biases in perceptual decisions induced by previous stimuli may result from neural biases in sensory cortex induced by recent perceptual history. SIGNIFICANCE STATEMENT We perceive a stable visual scene, although our visual input is constantly changing. This experience may in part be driven by a bias in visual perception that causes images to be perceived as similar to those previously seen. 
Here, we provide evidence for a sensory bias that may underlie this perceptual effect. We find that neural representations in early visual cortex are biased toward previous perceptual decisions. Our results suggest a direct neural correlate of serial dependencies in visual perception. These findings elucidate how our perceptual decisions are shaped by our perceptual history.

136 citations


Journal ArticleDOI
TL;DR: This work uses the recently developed method of decoded neurofeedback (DecNef) to systematically manipulate multivoxel correlates of confidence in a frontoparietal network and reports that bi-directional changes in confidence do not affect perceptual accuracy.
Abstract: A central controversy in metacognition studies concerns whether subjective confidence directly reflects the reliability of perceptual or cognitive processes, as suggested by normative models based on the assumption that neural computations are generally optimal. This view enjoys popularity in the computational and animal literatures, but it has also been suggested that confidence may depend on a late-stage estimation dissociable from perceptual processes. Yet, at least in humans, experimental tools have lacked the power to resolve these issues convincingly. Here, we overcome this difficulty by using the recently developed method of decoded neurofeedback (DecNef) to systematically manipulate multivoxel correlates of confidence in a frontoparietal network. Here we report that bi-directional changes in confidence do not affect perceptual accuracy. Further psychophysical analyses rule out accounts based on simple shifts in reporting strategy. Our results provide clear neuroscientific evidence for the systematic dissociation between confidence and perceptual performance, and thereby challenge current theoretical thinking.

129 citations


Journal ArticleDOI
29 Mar 2016-eLife
TL;DR: It is proposed that human observers are capable of generating their own feedback signals by monitoring internal decision variables; this hypothesis is investigated in a visual perceptual learning task using fMRI, with confidence reports as a measure of this monitoring process.
Abstract: Much of our behavior is shaped by feedback from the environment. We repeat behaviors that previously led to rewards and avoid those with negative outcomes. At the same time, we can learn in many situations without such feedback. Our ability to perceive sensory stimuli, for example, improves with training even in the absence of external feedback. Guggenmos et al. hypothesized that this form of perceptual learning may be guided by self-generated feedback that is based on the confidence in our performance. The general idea is that the brain reinforces behaviors associated with states of high confidence, and weakens behaviors that lead to low confidence. To test this idea, Guggenmos et al. used a technique called functional magnetic resonance imaging to record the brain activity of healthy volunteers as they performed a visual learning task. In this task, the participants had to judge the orientation of barely visible line gratings and then state how confident they were in their decisions. Feedback signals derived from the participants’ confidence reports activated the same brain areas typically engaged for external feedback or reward. Moreover, just as these regions were previously found to signal the difference between actual and expected rewards, so did they signal the difference between actual confidence levels and those expected on the basis of previous confidence levels. This parallel suggests that confidence may take over the role of external feedback in cases where no such feedback is available. Finally, the extent to which an individual exhibited these signals predicted overall learning success. Future studies could investigate whether these confidence signals are automatically generated, or whether they only emerge when participants are required to report their confidence levels. 
Another open question is whether such self-generated feedback applies in non-perceptual forms of learning, where learning without feedback has likewise been a long-standing puzzle.
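The parallel drawn above — confidence prediction errors playing the role of reward prediction errors — can be sketched as a minimal Rescorla-Wagner-style update. The learning rate, initial expectation, and confidence values are invented for illustration, not taken from the study:

```python
def confidence_learning(confidences, lr=0.3):
    """Track expected confidence; the prediction error is the learning signal."""
    expected = 0.5                 # initial confidence expectation (assumed)
    errors = []
    for c in confidences:
        delta = c - expected       # confidence prediction error
        errors.append(delta)       # in the study: correlates with reward-area BOLD
        expected += lr * delta     # update the running expectation
    return expected, errors

# confidence that drifts upward as (hypothetical) learning proceeds
final, errs = confidence_learning([0.5, 0.6, 0.6, 0.7, 0.8, 0.8])
```

In this toy, a trial whose confidence exceeds expectation yields a positive error, the analogue of a better-than-expected reward when external feedback is available.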

110 citations


Journal ArticleDOI
TL;DR: The results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech.
Abstract: Human perception is shaped by past experience on multiple timescales. Sudden and dramatic changes in perception occur when prior knowledge or expectations match stimulus content. These immediate effects contrast with the longer-term, more gradual improvements that are characteristic of perceptual learning. Despite extensive investigation of these two experience-dependent phenomena, there is considerable debate about whether they result from common or dissociable neural mechanisms. Here we test single- and dual-mechanism accounts of experience-dependent changes in perception using concurrent magnetoencephalographic and EEG recordings of neural responses evoked by degraded speech. When speech clarity was enhanced by prior knowledge obtained from matching text, we observed reduced neural activity in a peri-auditory region of the superior temporal gyrus (STG). Critically, longer-term improvements in the accuracy of speech recognition following perceptual learning resulted in reduced activity in a nearly identical STG region. Moreover, short-term neural changes caused by prior knowledge and longer-term neural changes arising from perceptual learning were correlated across subjects with the magnitude of learning-induced changes in recognition accuracy. These experience-dependent effects on neural processing could be dissociated from the neural effect of hearing physically clearer speech, which similarly enhanced perception but increased rather than decreased STG responses. Hence, the observed neural effects of prior knowledge and perceptual learning cannot be attributed to epiphenomenal changes in listening effort that accompany enhanced perception. Instead, our results support a predictive coding account of speech perception; computational simulations show how a single mechanism, minimization of prediction error, can drive immediate perceptual effects of prior knowledge and longer-term perceptual learning of degraded speech.
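The single-mechanism claim — that minimizing prediction error can produce both the immediate effect of prior knowledge and gradual perceptual learning — can be sketched with a toy model. The weights, step sizes, and update rule below are illustrative assumptions, not the paper's simulations:

```python
def perceive(signal, prior, w_sensory, w_prior, steps=50, rate=0.1):
    """Settle a percept by gradient descent on weighted squared prediction error."""
    percept = prior                      # start from the prior expectation
    for _ in range(steps):
        # E = w_sensory*(signal - p)^2 + w_prior*(prior - p)^2
        grad = -2 * w_sensory * (signal - percept) - 2 * w_prior * (prior - percept)
        percept -= rate * grad
    return percept

# longer-term learning: residual error slowly sharpens the sensory weight,
# so the same objective also drives gradual improvement with degraded input
w_sensory = 0.2                          # degraded speech: weak sensory evidence
for trial in range(20):
    p = perceive(signal=1.0, prior=0.0, w_sensory=w_sensory, w_prior=0.5)
    w_sensory += 0.05 * (1.0 - p)        # residual prediction error drives learning
```

The immediate effect of prior knowledge falls out of the same objective: raising the sensory weight relative to the prior pulls the settled percept toward the signal, with no separate mechanism needed.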

108 citations


Journal ArticleDOI
TL;DR: Results suggest that long-term associative learning of two different visual features, such as orientation and color, was induced, most likely in early visual areas.

94 citations


Journal ArticleDOI
05 Oct 2016-Neuron
TL;DR: It is suggested that experience adjusts odor representations to balance robustness and efficiency depending on the similarity of the experienced odorants, even when the odorants were experienced passively, a condition that would induce implicit perceptual learning.

90 citations


Journal ArticleDOI
Cesare Parise1
TL;DR: Questions about the definition, origins, plasticity, and underlying computational mechanisms of crossmodal correspondences are reviewed in the light of current research on sensory cue integration, where such correspondences can be conceptualized as natural mappings across different sensory cues that are present in the environment and learnt by the sensory systems.
Abstract: Crossmodal correspondences refer to the systematic associations often found across seemingly unrelated sensory features from different sensory modalities. Such phenomena constitute a universal trait of multisensory perception even in non-human species, and seem to result, at least in part, from the adaptation of sensory systems to natural scene statistics. Despite recent developments in the study of crossmodal correspondences, there are still a number of standing questions about their definition, their origins, their plasticity, and their underlying computational mechanisms. In this paper, I will review such questions in the light of current research on sensory cue integration, where crossmodal correspondences can be conceptualized in terms of natural mappings across different sensory cues that are present in the environment and learnt by the sensory systems. Finally, I will provide some practical guidelines for the design of experiments that might shed new light on crossmodal correspondences.

85 citations


Journal ArticleDOI
TL;DR: It is demonstrated here that the transfer of perceptual learning from a task involving coherent motion to a task involving noisy motion can induce a functional substitution of V3A (one of the visual areas in the extrastriate visual cortex) for MT+ (middle temporal/medial superior temporal cortex) in processing noisy motion.
Abstract: Training can improve performance of perceptual tasks. This phenomenon, known as perceptual learning, is strongest for the trained task and stimulus, leading to a widely accepted assumption that the associated neuronal plasticity is restricted to brain circuits that mediate performance of the trained task. Nevertheless, learning does transfer to other tasks and stimuli, implying the presence of more widespread plasticity. Here, we trained human subjects to discriminate the direction of coherent motion stimuli. The behavioral learning effect substantially transferred to noisy motion stimuli. We used transcranial magnetic stimulation (TMS) and functional magnetic resonance imaging (fMRI) to investigate the neural mechanisms underlying the transfer of learning. The TMS experiment revealed dissociable, causal contributions of V3A (one of the visual areas in the extrastriate visual cortex) and MT+ (middle temporal/medial superior temporal cortex) to coherent and noisy motion processing. Surprisingly, the contribution of MT+ to noisy motion processing was replaced by V3A after perceptual training. The fMRI experiment complemented and corroborated the TMS finding. Multivariate pattern analysis showed that, before training, among visual cortical areas, coherent and noisy motion was decoded most accurately in V3A and MT+, respectively. After training, both kinds of motion were decoded most accurately in V3A. Our findings demonstrate that the effects of perceptual learning extend far beyond the retuning of specific neural populations for the trained stimuli. Learning could dramatically modify the inherent functional specializations of visual cortical areas and dynamically reweight their contributions to perceptual decisions based on their representational qualities. These neural changes might serve as the neural substrate for the transfer of perceptual learning.

Journal ArticleDOI
TL;DR: A computational simulation is presented that examines the degree to which visual complexity leads to grapheme learning difficulty, and discusses how visual complexity can be a factor leading to reading difficulty across writing systems.
Abstract: The visual complexity of orthographies varies across writing systems. Prior research has shown that complexity strongly influences the initial stage of reading development: the perceptual learning of grapheme forms. This study presents a computational simulation that examines the degree to which visual complexity leads to grapheme learning difficulty. We trained each of 131 identical neural networks to learn the structure of a different orthography and demonstrated a strong, positive association between network learning difficulty and multiple dimensions of grapheme complexity. We also tested the model’s performance against grapheme complexity effects on behavioral same/different judgments. Although the model was broadly consistent with human performance in how processing difficulty depended on the complexity of the tested orthography, as well as its relationship to viewers’ first-language orthography, discrepancies provided insight into important limitations of the model. We discuss how visual complexity can be a factor leading to reading difficulty across writing systems.

Journal ArticleDOI
TL;DR: This paper showed that visual perceptual learning is completely transferrable between distinct physical stimuli and demonstrated that perceptual learning also operates at a conceptual level in a stimulus-invariant manner, which is consistent with object learning.
Abstract: Humans can learn to abstract and conceptualize the shared visual features defining an object category in object learning. Therefore, learning is generalizable to transformations of familiar objects and even to new objects that differ in other physical properties. In contrast, visual perceptual learning (VPL), improvement in discriminating fine differences of a basic visual feature through training, is commonly regarded as specific and low-level learning because the improvement often disappears when the trained stimulus is simply relocated or rotated in the visual field. Such location and orientation specificity is taken as evidence for neural plasticity in primary visual cortex (V1) or improved readout of V1 signals. However, new training methods have shown complete VPL transfer across stimulus locations and orientations, suggesting the involvement of high-level cognitive processes. Here we report that VPL bears similar properties of object learning. Specifically, we found that orientation discrimination learning is completely transferrable between luminance gratings initially encoded in V1 and bilaterally symmetric dot patterns encoded in higher visual cortex. Similarly, motion direction discrimination learning is transferable between first- and second-order motion signals. These results suggest that VPL can take place at a conceptual level and generalize to stimuli with different physical properties. Our findings thus reconcile perceptual and object learning into a unified framework. SIGNIFICANCE STATEMENT Training in object recognition can produce a learning effect that is applicable to new viewing conditions or even to new objects with different physical properties. However, perceptual learning has long been regarded as a low-level form of learning because of its specificity to the trained stimulus conditions. Here we demonstrate with new training tactics that visual perceptual learning is completely transferrable between distinct physical stimuli. 
This finding indicates that perceptual learning also operates at a conceptual level in a stimulus-invariant manner.

Journal ArticleDOI
TL;DR: This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits.
Abstract: There is an ongoing debate about whether the cause of dyslexia is based on linguistic, auditory, or visual timing deficits. To investigate this issue three interventions were compared in 58 dyslexics in second grade (7 years on average), two targeting the temporal dynamics (timing) of either the auditory or visual pathways with a third reading intervention (control group) using linguistic word building. Visual pathway training in dyslexics to improve direction-discrimination of moving test patterns relative to a stationary background (figure/ground discrimination) significantly improved attention, reading fluency, both speed and comprehension, phonological processing, and both auditory and visual working memory relative to controls, whereas auditory training to improve phonological processing did not improve these academic skills significantly more than found for controls. This study supports the hypothesis that faulty timing in synchronizing the activity of magnocellular with parvocellular visual pathways is a fundamental cause of dyslexia, and argues against the assumption that reading deficiencies in dyslexia are caused by phonological deficits. This study demonstrates that visual movement direction-discrimination can be used to not only detect dyslexia early, but also for its successful treatment, so that reading problems do not prevent children from readily learning.

Journal ArticleDOI
TL;DR: The authors investigated the relation between university students' learning styles and the use of computer technology for language learning, and whether the demographic variables of gender and age would make a difference in the effectiveness of technology use for a desired goal.
Abstract: Computer technology provides spaces and locales for language learning. However, learning style preference and demographic variables may affect the effectiveness of technology use for a desired goal. Adapting Reid's pioneering Perceptual Learning Style Preference Questionnaire (PLSPQ), this study investigated the relations between university students' learning styles and the use of computer technology for language learning, and whether the demographic variables of gender and age would make a difference. Chinese students aged 17–36 years (M = 20.31, SD = 3.42) from two universities in Hong Kong (N = 401: male = 140 and female = 261) responded to a survey about four learning styles and computer technology. Principal components analysis and confirmatory factor analysis established the five factors, which were all positively correlated. No gender differences were found in technology application and learning styles (visual, auditory, kinesthetic, and tactile). Only some subtle age differences were found in kinesthetic...

Journal ArticleDOI
TL;DR: It is demonstrated that even less than twenty minutes of alternating current stimulation below the individual perceptual threshold is sufficient to affect speech perception.

Journal ArticleDOI
TL;DR: It is suggested that LGN signals can be amplified by training to detect faint patterns, and neural plasticity induced by perceptual learning in human adults might not be confined to the cortical level but might occur as early as at the thalamic level.

Journal ArticleDOI
TL;DR: An object perception and perceptual learning system developed for a complex artificial cognitive agent working in a restaurant scenario is presented; it integrates detection, tracking, learning, and recognition of tabletop objects, and the Point Cloud Library is used in nearly all modules.

Journal ArticleDOI
TL;DR: The results show for the first time that PL with a lateral masking configuration has strong, non-invasive, and long-lasting rehabilitative potential to improve residual vision in the PRL of patients with central vision loss.
Abstract: BACKGROUND Macular Degeneration (MD), a visual disease that produces central vision loss, is one of the main causes of visual disability in western countries. Patients with MD are forced to use a peripheral retinal locus (PRL) as a substitute for the fovea. However, the poor sensitivity of this region renders basic everyday tasks very hard for MD patients. OBJECTIVE We investigated whether perceptual learning (PL) with lateral masking in the PRL of MD patients improved their residual visual functions. METHOD Observers were trained with two distinct contrast detection tasks: (i) a Yes/No task with no feedback (MD: N = 3; controls: N = 3), and (ii) a temporal two-alternative forced choice task with feedback on incorrect trials (i.e., temporal-2AFC; MD: N = 4; controls: N = 3). Observers had to detect a Gabor patch (target) flanked above and below by two high contrast patches (i.e., lateral masking). Stimulus presentation was monocular with durations varying between 133 and 250 ms. Participants underwent 24–27 training sessions in total. RESULTS Both PL procedures produced significant improvements in the trained task, and learning transferred to visual acuity. Moreover, the amount of transfer was greater for the temporal-2AFC task, which induced a significant improvement of the contrast sensitivity for untrained spatial frequencies. Most importantly, follow-up tests on MD patients trained with the temporal-2AFC task showed that PL effects were retained between four and six months, suggesting long-term neural plasticity changes in the visual cortex. CONCLUSION The results show for the first time that PL with a lateral masking configuration has strong, non-invasive, and long-lasting rehabilitative potential to improve residual vision in the PRL of patients with central vision loss.

Journal ArticleDOI
TL;DR: This question was examined by characterizing the ability of training on a simultaneity judgment task to influence perception of the temporally-dependent sound-induced flash illusion (SIFI); results show that training produces improvements in visual temporal acuity, suggesting a generalization effect of multisensory training on unisensory abilities.
Abstract: Life in a multisensory world requires the rapid and accurate integration of stimuli across the different senses. In this process, the temporal relationship between stimuli is critical in determining which stimuli share a common origin. Numerous studies have described a multisensory temporal binding window—the time window within which audiovisual stimuli are likely to be perceptually bound. In addition to characterizing this window’s size, recent work has shown it to be malleable, with the capacity for substantial narrowing following perceptual training. However, the generalization of these effects to other measures of perception is not known. This question was examined by characterizing the ability of training on a simultaneity judgment task to influence perception of the temporally-dependent sound-induced flash illusion (SIFI). Results do not demonstrate a change in performance on the SIFI itself following training. However, data do show an improved ability to discriminate rapidly-presented two-flash control conditions following training. Effects were specific to training and scaled with the degree of temporal window narrowing exhibited. Results do not support generalization of multisensory perceptual learning to other multisensory tasks. However, results do show that training results in improvements in visual temporal acuity, suggesting a generalization effect of multisensory training on unisensory abilities.

Journal ArticleDOI
TL;DR: Perceived usefulness was significantly related to attitude toward using the system, which was decisive in EFL learners' continuing use of ASR-based CAPT, and no significant relationship between any type of perceptual learning style and perceived usefulness was discovered.
Abstract: This study aims to explore the structural relationships among the variables of EFL (English as a foreign language) learners' perceptual learning styles and the Technology Acceptance Model (TAM). Three hundred and forty-one (n = 341) EFL learners were invited to join a self-regulated English pronunciation training program through an automatic speech recognition (ASR) computer system. Participants were asked to actively undertake interactions with ASR-based computer-assisted pronunciation training (CAPT) on a daily basis for three months. They were directed to finish a questionnaire on their perceptual learning style and technology acceptance. The collected data were analysed with descriptive statistics and structural equation modeling to investigate the structural relationships. Results show that most participants were visual learners; furthermore, no significant relationship between any type of perceptual learning style and perceived usefulness was discovered. Visual style as well as kinaesthetic style was found...

Journal ArticleDOI
TL;DR: The results support the idea that hf-tRNS can be successfully used to reduce the duration of the perceptual training and/or to increase its efficacy in producing perceptual learning and generalization to improved VA and CS in individuals with uncorrected mild myopia.

Journal ArticleDOI
TL;DR: A novel perceptual learning paradigm is used to assess whether the benefits associated with training on a task in one sense transfer to another sense; the results suggest a unidirectional transfer of perceptual learning from dominant to non-dominant sensory modalities and place important constraints on models of multisensory processing and plasticity.

Journal ArticleDOI
TL;DR: Using magnetic resonance spectroscopy of GABA prior to and after repetitive tactile stimulation, it is shown that baseline GABA+ levels predict changes in perceptual outcome, which provides new insights into the role of inhibitory mechanisms during perceptual learning.
Abstract: Learning mechanisms are based on synaptic plasticity processes. Numerous studies on synaptic plasticity suggest that the regulation of the inhibitory neurotransmitter γ-aminobutyric acid (GABA) plays a central role maintaining the delicate balance of inhibition and excitation. However, in humans, a link between learning outcome and GABA levels has not been shown so far. Using magnetic resonance spectroscopy of GABA prior to and after repetitive tactile stimulation, we show here that baseline GABA+ levels predict changes in perceptual outcome. Although no net changes in GABA+ are observed, the GABA+ concentration prior to intervention explains almost 60% of the variance in learning outcome. Our data suggest that behavioral effects can be predicted by baseline GABA+ levels, which provide new insights into the role of inhibitory mechanisms during perceptual learning.

Journal ArticleDOI
Wu Li1
20 Oct 2016
TL;DR: This work has shown that perceptual learning results from a complex interplay between bottom-up and top-down processes, causing a global reorganization across cortical areas specialized for sensory processing, engaged in top-down attentional control, and involved in perceptual decision making.
Abstract: Our perceptual abilities significantly improve with practice. This phenomenon, known as perceptual learning, offers an ideal window for understanding use-dependent changes in the adult brain. Different experimental approaches have revealed a diversity of behavioral and cortical changes associated with perceptual learning, and different interpretations have been given with respect to the cortical loci and neural processes responsible for the learning. Accumulated evidence has begun to put together a coherent picture of the neural substrates underlying perceptual learning. The emerging view is that perceptual learning results from a complex interplay between bottom-up and top-down processes, causing a global reorganization across cortical areas specialized for sensory processing, engaged in top-down attentional control, and involved in perceptual decision making. Future studies should focus on the interactions among cortical areas for a better understanding of the general rules and mechanisms underlying various forms of skill learning.

Journal ArticleDOI
TL;DR: These results reveal details regarding the mechanisms underlying MI and inform its use as a modality for skill acquisition in numerous disciplines.
Abstract: Motor imagery (MI), the mental rehearsal of movement, is an effective means for acquiring a novel skill, even in the absence of physical practice (PP). The nature of this learning, be it perceptual, motor, or both, is not well understood. Understanding the mechanisms underlying MI-based skill acquisition has implications for its use in numerous disciplines, including informing best practices regarding its use. Here we used an implicit sequence learning (ISL) task to probe whether MI-based skill acquisition can be attributed to perceptual or motor learning. Participants (n = 60) randomized to 4 groups were trained through MI or PP, and were then tested in either perceptual (altering the sensory cue) or motor (switching the hand) transfer conditions. Control participants (n = 42) that did not perform a transfer condition were utilized from previous work. Learning was quantified through effect sizes for reaction time (RT) differences between implicit and random sequences. Generally, PP-based training led to lower RTs compared with MI-based training for implicit and random sequences. All groups demonstrated learning (p < .05), the magnitude of which was reduced by transfer conditions relative to controls. For MI-based training, perceptual transfer disrupted performance more than for PP. Motor transfer disrupted performance equally for MI- and PP-based training. Our results suggest that MI-based training relies on both perceptual and motor learning, while PP-based training relies more on motor processes. These results reveal details regarding the mechanisms underlying MI, and inform its use as a modality for skill acquisition.

Journal ArticleDOI
TL;DR: Results showed that anodal tDCS over V1 resulted in improved visual perceptual learning and increased cortical excitability, suggesting tDCS is a promising tool to alter V1 excitability and, hence, perceptual visual learning.
Abstract: Studies on noninvasive motor cortex stimulation and motor learning demonstrated cortical excitability as a marker for a learning effect. Transcranial direct current stimulation (tDCS) is a non-invasive tool to modulate cortical excitability. It is as yet unknown how tDCS-induced excitability changes and perceptual learning in visual cortex correlate. Our study aimed to examine the influence of tDCS on visual perceptual learning in healthy humans. Additionally, we measured excitability in primary visual cortex (V1). We hypothesized that anodal tDCS would improve and cathodal tDCS would have minor or no effects on visual learning. Anodal, cathodal or sham tDCS were applied over V1 in a randomized, double-blinded design over 4 consecutive days (n = 30). During 20 minutes of tDCS, subjects had to learn a visual orientation-discrimination task. Excitability parameters were measured by analyzing paired-stimulation behavior of visual-evoked potentials (ps-VEP) and by measuring phosphene thresholds before and after the stimulation period of 4 days. Compared with sham-tDCS, anodal tDCS led to an improvement of visual discrimination learning (p < 0.003). We found reduced phosphene thresholds and increased ps-VEP ratios indicating increased cortical excitability after anodal tDCS (phosphene threshold: p = 0.002, ps-VEP: p = 0.003). Correlation analysis within the anodal tDCS group revealed no significant correlation between phosphene thresholds and learning effect. For cathodal tDCS, no significant effects on learning or on excitability could be seen. Our results showed that anodal tDCS over V1 resulted in improved visual perceptual learning and increased cortical excitability. tDCS is a promising tool to alter V1 excitability and, hence, perceptual visual learning.

Journal ArticleDOI
22 Jan 2016-PLOS ONE
TL;DR: Three lines of evidence from healthy adults are presented in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory, providing empirical evidence as well as a theoretical framework for interactions between cognitive and sensory plasticity during perceptual experience.
Abstract: Perceptual training is generally assumed to improve perception by modifying the encoding or decoding of sensory information. However, this assumption is incompatible with recent demonstrations that transfer of learning can be enhanced by across-trial variation of training stimuli or task. Here we present three lines of evidence from healthy adults in support of the idea that the enhanced transfer of auditory discrimination learning is mediated by working memory (WM). First, the ability to discriminate small differences in tone frequency or duration was correlated with WM measured with a tone n-back task. Second, training frequency discrimination around a variable frequency transferred to and from WM learning, but training around a fixed frequency did not. The transfer of learning in both directions was correlated with a reduction of the influence of stimulus variation in the discrimination task, linking WM and its improvement to across-trial stimulus interaction in auditory discrimination. Third, while WM training transferred broadly to other WM and auditory discrimination tasks, variable-frequency training on duration discrimination did not improve WM, indicating that stimulus variation challenges and trains WM only if the task demands stimulus updating in the varied dimension. The results provide empirical evidence as well as a theoretical framework for interactions between cognitive and sensory plasticity during perceptual experience.
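The WM measure above is a tone n-back task: on each trial, the listener judges whether the current tone matches the one presented n trials earlier. A minimal scoring sketch, in which the tone values, `n`, and function names are illustrative rather than taken from the study:

```python
def n_back_targets(tones, n):
    """For each trial from index n onward, report whether the current
    tone matches the tone presented n trials earlier (a 'target')."""
    return [tones[i] == tones[i - n] for i in range(n, len(tones))]

def n_back_accuracy(tones, responses, n):
    """Proportion of correct match/non-match judgments, given boolean
    match responses for trials n..end."""
    targets = n_back_targets(tones, n)
    correct = sum(r == t for r, t in zip(responses, targets))
    return correct / len(targets)
```

For example, with a 2-back sequence of tone frequencies `[440, 494, 440, 494, 523]`, the third and fourth tones are targets while the fifth is not.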

31 Dec 2016
TL;DR: The finding that orientation discrimination learning is completely transferable between luminance gratings initially encoded in V1 and bilaterally symmetric dot patterns encoded in higher visual cortex indicates that perceptual learning also operates at a conceptual level in a stimulus-invariant manner.
Abstract: Humans can learn to abstract and conceptualize the shared visual features defining an object category in object learning. Therefore, learning is generalizable to transformations of familiar objects and even to new objects that differ in other physical properties. In contrast, visual perceptual learning (VPL), improvement in discriminating fine differences of a basic visual feature through training, is commonly regarded as specific and low-level learning because the improvement often disappears when the trained stimulus is simply relocated or rotated in the visual field. Such location and orientation specificity is taken as evidence for neural plasticity in primary visual cortex (V1) or improved readout of V1 signals. However, new training methods have shown complete VPL transfer across stimulus locations and orientations, suggesting the involvement of high-level cognitive processes. Here we report that VPL bears similar properties to object learning. Specifically, we found that orientation discrimination learning is completely transferable between luminance gratings initially encoded in V1 and bilaterally symmetric dot patterns encoded in higher visual cortex. Similarly, motion direction discrimination learning is transferable between first- and second-order motion signals. These results suggest that VPL can take place at a conceptual level and generalize to stimuli with different physical properties. Our findings thus reconcile perceptual and object learning into a unified framework. SIGNIFICANCE STATEMENT Training in object recognition can produce a learning effect that is applicable to new viewing conditions or even to new objects with different physical properties. However, perceptual learning has long been regarded as a low-level form of learning because of its specificity to the trained stimulus conditions. Here we demonstrate with new training tactics that visual perceptual learning is completely transferable between distinct physical stimuli. This finding indicates that perceptual learning also operates at a conceptual level in a stimulus-invariant manner.

Journal ArticleDOI
TL;DR: It is suggested that most training-related changes occurred at higher-level task-specific cognitive processes in both groups; however, these were enhanced by high-quality perceptual representations in the normal-hearing group.
Abstract: Introduction: Speech recognition in adverse listening conditions becomes more difficult as we age, particularly for individuals with age-related hearing loss (ARHL). Whether these difficulties can be eased with training remains debated, because it is not clear whether the outcomes are sufficiently general to be of use outside of the training context. The aim of the current study was to compare training-induced learning and generalization between normal-hearing older adults and those with ARHL. Methods: Fifty-six listeners (60-72 y/o) participated in the study: 35 with ARHL and 21 with normal hearing. The study used a crossover design with three groups (immediate training, delayed training, and no training). Trained participants received 13 sessions of home-based auditory training over the course of 4 weeks. Three adverse listening conditions were targeted: (1) speech in noise, (2) time-compressed speech, and (3) competing speakers, and the outcomes of training were compared between the normal-hearing and ARHL groups. Pre- and post-test sessions were completed by all participants. Outcome measures included tests on all of the trained conditions as well as on a series of untrained conditions designed to assess the transfer of learning to other speech and non-speech conditions. Results: Significant improvements on all trained conditions were observed in both the ARHL and normal-hearing groups over the course of training. Normal-hearing participants learned more than participants with ARHL in the speech-in-noise condition, but showed similar patterns of learning in the other conditions. Greater pre- to post-test changes were observed in trained than in untrained listeners on all trained conditions. In addition, the ability of trained listeners from the ARHL group to discriminate minimally different pseudowords in noise also improved with training.
Conclusions: ARHL did not preclude auditory perceptual learning, but there was little generalization to untrained conditions. We suggest that most training-related changes occurred at higher-level task-specific cognitive processes in both groups. However, these were enhanced by high-quality perceptual representations in the normal-hearing group. In contrast, some training-related changes also occurred at the level of phonemic representations in the ARHL group, consistent with an interaction between bottom-up and top-down processes.