
Showing papers on "Perceptual learning published in 2021"


Journal ArticleDOI
TL;DR: In this paper, the accuracy of sensory encoding was extracted from psychophysical data using an information-theoretic measure, showing that sensory encoding, and how it adapts to changing stimulus statistics during feedback, differs characteristically between neurotypical and ASD groups.
Abstract: Perceptual anomalies in individuals with autism spectrum disorder (ASD) have been attributed to an imbalance in weighting incoming sensory evidence with prior knowledge when interpreting sensory information. Here, we show that sensory encoding and how it adapts to changing stimulus statistics during feedback also characteristically differs between neurotypical and ASD groups. In a visual orientation estimation task, we extracted the accuracy of sensory encoding from psychophysical data by using an information theoretic measure. Initially, sensory representations in both groups reflected the statistics of visual orientations in natural scenes, but encoding capacity was overall lower in the ASD group. Exposure to an artificial (i.e., uniform) distribution of visual orientations coupled with performance feedback altered the sensory representations of the neurotypical group toward the novel experimental statistics, while also increasing their total encoding capacity. In contrast, neither total encoding capacity nor its allocation significantly changed in the ASD group. Across both groups, the degree of adaptation was correlated with participants' initial encoding capacity. These findings highlight substantial deficits in sensory encoding (independent from, and potentially in addition to, deficits in decoding) in individuals with ASD.
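The link between psychophysical thresholds and encoding capacity can be sketched with a toy computation: under efficient-coding assumptions, a discrimination threshold δ(θ) implies a local Fisher information J(θ) ≈ 1/δ(θ)², and total capacity is proportional to the integral of √J(θ) over orientation. The function and threshold values below are a hypothetical illustration, not the paper's data or analysis code:

```python
def encoding_capacity(thresholds_deg):
    """Approximate total encoding capacity from orientation
    discrimination thresholds sampled on a uniform grid over 180 deg.
    Fisher information J(theta) ~ 1/threshold^2, so the integrand
    sqrt(J(theta)) is simply 1/threshold."""
    d_theta = 180.0 / len(thresholds_deg)  # grid spacing in degrees
    return sum(d_theta / t for t in thresholds_deg)

# Hypothetical thresholds: finer discrimination (smaller threshold)
# near cardinal orientations, mirroring natural-scene statistics.
thresholds = [2.0, 4.0, 6.0, 4.0, 2.0, 4.0, 6.0, 4.0]
capacity = encoding_capacity(thresholds)
```

Larger thresholds everywhere yield a smaller total capacity, which is the sense in which the ASD group's overall capacity was lower.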

36 citations


Journal ArticleDOI
TL;DR: The literature is synthesized to establish some over-arching findings for the aging population, including an intact capacity for auditory perceptual learning, but a limited transfer of learning to untrained stimuli.

20 citations


Journal ArticleDOI
TL;DR: The results suggest that complex auditory prediction errors are encoded by changes in feedforward and intrinsic connections, confined to superior temporal gyrus.
Abstract: Learning of complex auditory sequences such as music can be thought of as optimizing an internal model of regularities through unpredicted events (or "prediction errors"). We used dynamic causal modeling (DCM) and parametric empirical Bayes on functional magnetic resonance imaging (fMRI) data to identify modulation of effective brain connectivity that takes place during perceptual learning of complex tone patterns. Our approach differs from previous studies in two aspects. First, we used a complex oddball paradigm based on tone patterns as opposed to simple deviant tones. Second, the use of fMRI allowed us to identify cortical regions with high spatial accuracy. These regions served as empirical regions-of-interest for the analysis of effective connectivity. Deviant patterns induced an increased blood oxygenation level-dependent response, compared to standards, in early auditory (Heschl's gyrus [HG]) and association auditory areas (planum temporale [PT]) bilaterally. Within this network, we found a left-lateralized increase in feedforward connectivity from HG to PT during deviant responses and an increase in excitation within left HG. In contrast to previous findings, we did not find frontal activity, nor did we find modulations of backward connections in response to oddball sounds. Our results suggest that complex auditory prediction errors are encoded by changes in feedforward and intrinsic connections, confined to superior temporal gyrus.

15 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe the cortical dynamics of the somatosensory learning system to investigate both the form of the generative model as well as its neural surprise signatures.
Abstract: Tracking statistical regularities of the environment is important for shaping human behavior and perception. Evidence suggests that the brain learns environmental dependencies using Bayesian principles. However, much remains unknown about the employed algorithms, for somesthesis in particular. Here, we describe the cortical dynamics of the somatosensory learning system to investigate both the form of the generative model as well as its neural surprise signatures. Specifically, we recorded EEG data from 40 participants subjected to a somatosensory roving-stimulus paradigm and performed single-trial modeling across peri-stimulus time in both sensor and source space. Our Bayesian model selection procedure indicates that evoked potentials are best described by a non-hierarchical learning model that tracks transitions between observations using leaky integration. From around 70ms post-stimulus onset, secondary somatosensory cortices are found to represent confidence-corrected surprise as a measure of model inadequacy. Indications of Bayesian surprise encoding, reflecting model updating, are found in primary somatosensory cortex from around 140ms. This dissociation is compatible with the idea that early surprise signals may control subsequent model update rates. In sum, our findings support the hypothesis that early somatosensory processing reflects Bayesian perceptual learning and contribute to an understanding of its underlying mechanisms.
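The class of model the authors favor, tracking transitions between observations with leaky integration, can be sketched in a few lines: exponentially discounted transition counts yield a predictive probability for each stimulus, and the negative log probability of the observed stimulus is a surprise signal. This is a generic sketch under assumed parameters (Laplace pseudo-counts, time constant), not the paper's implementation:

```python
import math

def leaky_transition_surprise(seq, tau=10.0):
    """Track transitions between two stimuli (coded 0/1) with leaky
    (exponentially discounted) counts, returning the surprise
    -log2 p(obs | previous obs) for each observation after the first."""
    counts = [[1.0, 1.0], [1.0, 1.0]]  # Laplace prior pseudo-counts
    decay = math.exp(-1.0 / tau)       # leak per observation
    surprises = []
    prev = None
    for obs in seq:
        if prev is not None:
            row = counts[prev]
            p = row[obs] / (row[0] + row[1])
            surprises.append(-math.log2(p))
            # decay all counts, then register the observed transition
            for a in (0, 1):
                for b in (0, 1):
                    counts[a][b] *= decay
            counts[prev][obs] += 1.0
        prev = obs
    return surprises
```

In a roving-stimulus sequence, repeated standards drive surprise down, while a deviant after a long train of standards produces a large surprise, qualitatively matching mismatch-type responses.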

15 citations


Journal ArticleDOI
TL;DR: In this paper, the authors conducted an individual differences study to identify basic cognitive abilities and/or dispositional traits that predict an individual's ability to learn and generalize learning in tasks of perceptual learning.
Abstract: Given appropriate training, human observers typically demonstrate clear improvements in performance on perceptual tasks. However, the benefits of training frequently fail to generalize to other tasks, even those that appear similar to the trained task. A great deal of research has focused on the training task characteristics that influence the extent to which learning generalizes. However, less is known about what might predict the considerable individual variations in performance. As such, we conducted an individual differences study to identify basic cognitive abilities and/or dispositional traits that predict an individual’s ability to learn and/or generalize learning in tasks of perceptual learning. We first showed that the rate of learning and the asymptotic level of performance that is achieved in two different perceptual learning tasks (motion direction and odd-ball texture detection) are correlated across individuals, as is the degree of immediate generalization that is observed and the rate at which a generalization task is learned. This indicates that there are indeed consistent individual differences in perceptual learning abilities. We then showed that several basic cognitive abilities and dispositional traits are associated with an individual’s ability to learn (e.g., simple reaction time; sensitivity to punishment) and/or generalize learning (e.g., cognitive flexibility; openness to experience) in perceptual learning tasks. We suggest that the observed individual difference relationships may provide possible targets for future intervention studies meant to increase perceptual learning and generalization.

14 citations


Journal ArticleDOI
TL;DR: This article investigated the effect of musical and pitch aptitude on variability in the learning of level-tone contrasts (Cantonese) by Mandarin speakers with experience of a contour-tone system.
Abstract: Contrary to studies on speech learning of consonants and vowels, the issue of individual variability is less well understood in the learning of lexical tones. Whereas existing studies have focused on contour-tone learning (Mandarin) by listeners without experience of a tonal language, this study addressed a research gap by investigating the perceptual learning of level-tone contrasts (Cantonese) by learners with experience of a contour-tone system (Mandarin). Critically, we sought to answer the question of how Mandarin listeners' initial perception and learning of Cantonese level-tones are affected by their musical and pitch aptitude. Mandarin-speaking participants completed a pretest, training, and a posttest in the level-tone discrimination and identification (ID) tasks. They were assessed in musical aptitude and speech and nonspeech pitch thresholds before training. The results revealed a significant training effect in the ID task but not in the discrimination task. Importantly, the regression analyses showed an advantage of higher musical and pitch aptitude in perceiving Cantonese level-tone categories. The results explained part of the level-tone learning variability in speakers of a contour-tone system. The finding implies that prior experience of a tonal language does not necessarily override the advantage of listeners' musical and pitch aptitude.

14 citations


Journal ArticleDOI
09 Mar 2021
TL;DR: In this article, the authors examined the impact of COVID-19 pandemic on educational system and investigated if the channels through which learning is promoted cater for students' learning styles.
Abstract: This study examines the impact of the COVID-19 pandemic on the educational system; investigates how meaningful learning is promoted and continued despite the unprecedented global challenges; and investigates whether the channels through which learning is promoted cater to students' learning styles. Two hundred and one secondary school students selected from Ekiti State, Nigeria, participated in the descriptive research study. A validated questionnaire was used to gather data from the respondents. The study found that the learning channels mostly employed during the pandemic were television stations, school on-air via radio programme, virtual learning, and private teaching. The findings revealed that respondents had no preference for specific perceptual learning styles but embraced the different learning channels employed. They modified their learning styles and developed flexibility in learning. It was recommended that new viable policies that promote diverse learning opportunities and alternative learning strategies capable of mitigating present and future academic obstructions should be made and diligently implemented. This paper concludes that during future emergencies, the diverse learning platforms, channels, and digital media employed for learning should cater to students' learning styles: visual, auditory, tactile, kinesthetic, group, and individual. It is noteworthy that learners learn better when exposed to a variety of teaching/learning media.

12 citations


Journal ArticleDOI
TL;DR: The authors found that perceptual learning reflects cumulative experience with a talker's input over time, rather than only the initial or the most recent exposure to atypical productions.
Abstract: Listeners use lexical knowledge to modify the mapping from acoustics to speech sounds, but the timecourse of experience that informs lexically guided perceptual learning is unknown. Some data suggest that learning is contingent on initial exposure to atypical productions, while other data suggest that learning reflects only the most recent exposure. Here we seek to reconcile these findings by assessing the type and timecourse of exposure that promote robust lexically guided perceptual learning. In three experiments, listeners (n = 560) heard 20 critical productions interspersed among 200 trials during an exposure phase and then categorized items from an ashi-asi continuum in a test phase. In Experiment 1, critical productions consisted of ambiguous fricatives embedded in either /s/- or /ʃ/-biasing contexts. Learning was observed; the /s/-bias group showed more asi responses compared to the /ʃ/-bias group. In Experiment 2, listeners heard ambiguous and clear productions in a consistent context. Order and lexical bias were manipulated between-subjects, and perceptual learning occurred regardless of the order in which the clear and ambiguous productions were heard. In Experiment 3, listeners heard ambiguous fricatives in both /s/- and /ʃ/-biasing contexts. Order differed between two exposure groups, and no difference between groups was observed at test. Moreover, the results showed a monotonic decrease in learning across experiments, in line with decreasing exposure to stable lexically biasing contexts, and were replicated across novel stimulus sets. In contrast to previous findings showing that either initial or most recent experience is critical for lexically guided perceptual learning, the current results suggest that perceptual learning reflects cumulative experience with a talker's input over time.
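The group difference at test is conventionally summarized with a psychometric function over the continuum: lexically guided learning shifts the /s/-/ʃ/ category boundary, so the /s/-bias group gives more "asi" responses at every step. A toy sketch with made-up boundary and slope parameters (not the paper's fitted values):

```python
import math

def p_asi(step, boundary, slope=1.2):
    """Psychometric function: probability of an 'asi' response at a
    given ashi-asi continuum step, for a listener whose category
    boundary sits at `boundary` (hypothetical parameters)."""
    return 1.0 / (1.0 + math.exp(-slope * (step - boundary)))

# A lower boundary means the /s/ category has expanded after
# exposure to ambiguous fricatives in /s/-biasing contexts.
steps = range(1, 8)
s_bias = [p_asi(x, boundary=3.5) for x in steps]
sh_bias = [p_asi(x, boundary=4.5) for x in steps]
learning_effect = sum(s - sh for s, sh in zip(s_bias, sh_bias)) / len(s_bias)
```

A positive `learning_effect` corresponds to the Experiment 1 result: more "asi" responses in the /s/-bias group than the /ʃ/-bias group.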

12 citations


Journal ArticleDOI
TL;DR: In this article, the authors investigated the behavioral and neural signatures of rapid perceptual learning of regular sound patterns and found that recurring patterns are detected more quickly and increase sensitivity to pattern deviations and to the temporal order of pattern onset relative to a visual stimulus.

12 citations


Journal ArticleDOI
TL;DR: The authors investigated whether feature-based attention, which enhances the representation of particular features throughout the visual field, facilitates visual perceptual learning (VPL) transfer and how long such an effect would last.
Abstract: Visual perceptual learning (VPL) is typically specific to the trained location and feature. However, the degree of specificity depends upon particular training protocols. Manipulating covert spatial attention during training facilitates learning transfer to other locations. Here we investigated whether feature-based attention (FBA), which enhances the representation of particular features throughout the visual field, facilitates VPL transfer, and how long such an effect would last. To do so, we implemented a novel task in which observers discriminated a stimulus orientation relative to two reference angles presented simultaneously before each block. We found that training with FBA enabled remarkable location transfer, reminiscent of its global effect across the visual field, but preserved orientation specificity in VPL. Critically, both the perceptual improvement and location transfer persisted after 1 year. Our results reveal robust, long-lasting benefits induced by FBA in VPL, and have translational implications for improving generalization of training protocols in visual rehabilitation.

11 citations


Journal ArticleDOI
TL;DR: It is observed that prior knowledge acquired from fast, one-shot perceptual learning sharpens neural representation throughout the ventral visual stream, generating suppressed sensory responses, revealing a heretofore unknown macroscopic gradient of prior knowledge's sharpening effect on neural representations across the cortical hierarchy.
Abstract: Prior knowledge profoundly influences perceptual processing. Previous studies have revealed consistent suppression of predicted stimulus information in sensory areas, but how prior knowledge modulates processing higher up in the cortical hierarchy remains poorly understood. In addition, the mechanism leading to suppression of predicted sensory information remains unclear, and studies thus far have revealed a mixed pattern of results in support of either the "sharpening" or "dampening" model. Here, using 7T fMRI in humans (both sexes), we observed that prior knowledge acquired from fast, one-shot perceptual learning sharpens neural representation throughout the ventral visual stream, generating suppressed sensory responses. In contrast, the frontoparietal and default mode networks exhibit similar sharpening of content-specific neural representation, but in the context of unchanged and enhanced activity magnitudes, respectively: a pattern we refer to as "selective enhancement." Together, these results reveal a heretofore unknown macroscopic gradient of prior knowledge's sharpening effect on neural representations across the cortical hierarchy. SIGNIFICANCE STATEMENT A fundamental question in neuroscience is how prior knowledge shapes perceptual processing. Perception is constantly informed by internal priors in the brain acquired from past experiences, but the neural mechanisms underlying this process are poorly understood. To date, research on this question has focused on early visual regions, reporting a consistent downregulation when predicted stimuli are encountered. Here, using a dramatic one-shot perceptual learning paradigm, we observed that prior knowledge results in sharper neural representations across the cortical hierarchy of the human brain through a gradient of mechanisms. In visual regions, neural responses tuned away from internal predictions are suppressed.
In frontoparietal regions, neural activity consistent with priors is selectively enhanced. These results deepen our understanding of how prior knowledge informs perception.

Journal ArticleDOI
TL;DR: In this paper, the authors used functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) to examine training-induced changes in working memory (WM) representation.
Abstract: The ability to discriminate between stimuli relies on a chain of neural operations associated with perception, memory and decision-making. Accumulating studies show learning-dependent plasticity in perception or decision-making, yet whether perceptual learning modifies mnemonic processing remains unclear. Here, we trained human participants of both sexes in an orientation discrimination task, while using functional magnetic resonance imaging (fMRI) and transcranial magnetic stimulation (TMS) to separately examine training-induced changes in working memory (WM) representation. fMRI decoding revealed orientation-specific neural patterns during the delay period in primary visual cortex (V1) before, but not after, training, whereas neurodisruption of V1 during the delay period led to behavioral deficits in both phases. In contrast, both fMRI decoding and disruptive effect of TMS showed that intraparietal sulcus (IPS) represented WM content after, but not before, training. These results suggest that training does not affect the necessity of sensory area in representing WM information, consistent with the sensory recruitment hypothesis in WM, but likely alters the coding format of the stored stimulus in this region. On the other hand, training can render WM content to be maintained in higher-order parietal areas, complementing sensory area to support more robust maintenance of information. SIGNIFICANCE STATEMENT There has been accumulating progress regarding experience-dependent plasticity in perception or decision-making, yet how perceptual experience moulds mnemonic processing of visual information remains less explored. Here, we provide novel findings that learning-dependent improvement of discriminability accompanies altered WM representation at different cortical levels.
Critically, we suggest a role of training in modulating cortical locus of WM representation, providing a plausible explanation to reconcile the discrepant findings between human and animal studies regarding the recruitment of sensory or higher-order areas in WM.

Journal ArticleDOI
TL;DR: It is concluded that a VR PL approach based on depth cue scaffolding may provide a useful method for improving stereoacuity, and the in-game performance metrics may provide useful insights into principles for effective treatment of stereo anomalies.
Abstract: Stereopsis is a valuable feature of human visual perception, which may be impaired or absent in amblyopia and/or strabismus but can be improved through perceptual learning (PL) and videogames. The development of consumer virtual reality (VR) may provide a useful tool for improving stereovision. We report a proof of concept study, especially useful for strabismic patients and/or those with reduced or null stereoacuity. Our novel VR PL strategy is based on a principled approach which included aligning and balancing the perceptual input to the two eyes, dichoptic tasks, exposure to large disparities, scaffolding depth cues and perception for action. We recruited ten adults with normal vision and ten with binocular impairments. Participants played two novel PL games (DartBoard and Halloween) using a VR-HMD. Each game consisted of three depth cue scaffolding conditions, starting with non-binocular and binocular cues to depth and ending with only binocular disparity. All stereo-anomalous participants improved in the game and most (9/10) showed transfer to clinical and psychophysical stereoacuity tests (mean stereoacuity changed from 569 to 296 arc seconds, P < 0.0001). Stereo-normal participants also showed in-game improvement, which transferred to psychophysical tests (mean stereoacuity changed from 23 to a ceiling value of 20 arc seconds, P = 0.001). We conclude that a VR PL approach based on depth cue scaffolding may provide a useful method for improving stereoacuity, and the in-game performance metrics may provide useful insights into principles for effective treatment of stereo anomalies. This study was registered as a clinical trial on 04/05/2010 with the identifier NCT01115283 at ClinicalTrials.gov.

Journal ArticleDOI
19 May 2021-PLOS ONE
TL;DR: In this article, the authors presented omnidirectional in-field scenes to 12 expert players (picked by DFB), 10 regional league intermediate players, and 13 novice soccer goalkeepers in order to assess the perceptual skills of athletes in an optimized manner.
Abstract: By focusing on high experimental control and realistic presentation, the latest research in expertise assessment of soccer players demonstrates the importance of perceptual skills, especially in decision making. Our work captured omnidirectional in-field scenes displayed through virtual reality glasses to 12 expert players (picked by DFB), 10 regional league intermediate players, and 13 novice soccer goalkeepers in order to assess the perceptual skills of athletes in an optimized manner. All scenes were shown from the perspective of the same natural goalkeeper and ended after the return pass to that goalkeeper. Based on the gaze behavior of each player, we classified their expertise with common machine learning techniques. Our results show that eye movements contain highly informative features and thus enable a classification of goalkeepers between three stages of expertise, namely elite youth player, regional league player, and novice, at a high accuracy of 78.2%. This research underscores the importance of eye tracking and machine learning in perceptual expertise research and paves the way for perceptual-cognitive diagnosis as well as future training systems.
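The classification step can be illustrated with a minimal stand-in: the study reports using "common machine learning techniques" on gaze features, so the nearest-centroid classifier and feature values below are simplified, hypothetical examples rather than the authors' pipeline:

```python
def nearest_centroid_fit(X, y):
    """Fit per-class mean vectors ('centroids') for gaze-feature
    rows, e.g. [mean fixation duration (s), saccade amplitude (deg)]."""
    centroids = {}
    for c in sorted(set(y)):
        rows = [x for x, label in zip(X, y) if label == c]
        centroids[c] = [sum(col) / len(rows) for col in zip(*rows)]
    return centroids

def nearest_centroid_predict(centroids, x):
    """Assign a new feature vector to the class with the closest centroid."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda c: dist2(centroids[c], x))

# Hypothetical gaze features and expertise labels:
X = [[0.2, 1.0], [0.3, 1.2], [0.9, 3.0], [1.0, 3.2]]
y = ["novice", "novice", "expert", "expert"]
centroids = nearest_centroid_fit(X, y)
```

In the actual study, cross-validated accuracy over three expertise classes (78.2%) is what supports the claim that gaze behavior is informative.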

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate that saccadic suppression is attenuated during learning on a standard visual detection-in-noise task, to the point that it is effectively silenced.
Abstract: Perceptual stability is facilitated by a decrease in visual sensitivity during rapid eye movements, called saccadic suppression. While a large body of evidence demonstrates that saccadic programming is plastic, little is known about whether the perceptual consequences of saccades can be modified. Here, we demonstrate that saccadic suppression is attenuated during learning on a standard visual detection-in-noise task, to the point that it is effectively silenced. Across a period of 7 days, 44 participants were trained to detect brief, low-contrast stimuli embedded within dynamic noise, while eye position was tracked. Although instructed to fixate, participants regularly made small fixational saccades. Data were accumulated over a large number of trials, allowing us to assess changes in performance as a function of the temporal proximity of stimuli and saccades. This analysis revealed that improvements in sensitivity over the training period were accompanied by a systematic change in the impact of saccades on performance: robust saccadic suppression on day 1 declined gradually over subsequent days until its magnitude became indistinguishable from zero. This silencing of suppression was not explained by learning-related changes in saccade characteristics and generalized to an untrained retinal location and stimulus orientation. Suppression was restored when learned stimulus timing was perturbed, consistent with the operation of a mechanism that temporarily reduces or eliminates saccadic suppression, but only when it is behaviorally advantageous to do so. Our results indicate that learning can circumvent saccadic suppression to improve performance, without compromising its functional benefits in other viewing contexts.
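The core analysis, performance as a function of stimulus-saccade temporal proximity, amounts to binning trials by latency and computing a hit rate per bin; suppression appears as a low hit rate in the bins nearest the saccade. A schematic sketch with hypothetical bin parameters, not the paper's analysis code:

```python
def suppression_profile(trials, bin_ms=50, max_ms=200):
    """Group detection trials by the absolute time between stimulus
    onset and the nearest saccade, returning hit rate per latency bin.
    Each trial is (latency_ms, hit) with hit in {0, 1}; bins with no
    trials yield None."""
    n_bins = max_ms // bin_ms
    hits = [0] * n_bins
    counts = [0] * n_bins
    for latency, hit in trials:
        b = min(int(abs(latency) // bin_ms), n_bins - 1)
        hits[b] += hit
        counts[b] += 1
    return [h / c if c else None for h, c in zip(hits, counts)]
```

On day 1 such a profile would dip near zero latency (suppression); by the end of training the paper reports the dip becomes indistinguishable from zero.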

Journal ArticleDOI
TL;DR: In this article, the authors dissected learning-induced cortical changes over the course of training the monkeys in a global form detection task and found that two distinct components of neuronal population codes were progressively and markedly enhanced in both V4 and PFC.

Journal ArticleDOI
TL;DR: The results indicate that even previously consolidated human perceptual memories are susceptible to neuromodulation, involving early visual cortical processing, and the opportunity to noninvasively neuromodulate reactivated perceptual learning may have important clinical implications.
Abstract: Perception thresholds can improve through repeated practice with visual tasks. Can an already acquired and well-consolidated perceptual skill be noninvasively neuromodulated, unfolding the neural mechanisms involved? Here, leveraging the susceptibility of reactivated memories ranging from synaptic to systems levels across learning and memory domains and animal models, we used noninvasive brain stimulation to neuromodulate well-consolidated reactivated visual perceptual learning and reveal the underlying neural mechanisms. Subjects first encoded and consolidated the visual skill memory by performing daily practice sessions with the task. On a separate day, the consolidated visual memory was briefly reactivated, followed by low-frequency, inhibitory 1 Hz repetitive transcranial magnetic stimulation over early visual cortex, which was individually localized using functional magnetic resonance imaging. Poststimulation perceptual thresholds were measured on the final session. The results show modulation of perceptual thresholds following early visual cortex stimulation, relative to control stimulation. Consistently, resting state functional connectivity between trained and untrained parts of early visual cortex prior to training predicted the magnitude of perceptual threshold modulation. Together, these results indicate that even previously consolidated human perceptual memories are susceptible to neuromodulation, involving early visual cortical processing. Moreover, the opportunity to noninvasively neuromodulate reactivated perceptual learning may have important clinical implications.

Journal ArticleDOI
TL;DR: In this paper, the authors found that a proportion of neurons possessed neural thresholds based on spike pattern or spike count that were better than the recorded session's behavioral threshold, suggesting that spike count could provide sufficient information for this perceptual task.
Abstract: Core auditory cortex (AC) neurons encode slow fluctuations of acoustic stimuli with temporally patterned activity. However, whether temporal encoding is necessary to explain auditory perceptual skills remains uncertain. Here, we recorded from gerbil AC neurons while they discriminated between a 4-Hz amplitude modulation (AM) broadband noise and AM rates >4 Hz. We found a proportion of neurons possessed neural thresholds based on spike pattern or spike count that were better than the recorded session's behavioral threshold, suggesting that spike count could provide sufficient information for this perceptual task. A population decoder that relied on temporal information outperformed a decoder that relied on spike count alone, but the spike count decoder still remained sufficient to explain average behavioral performance. This leaves open the possibility that more demanding perceptual judgments require temporal information. Thus, we asked whether accurate classification of different AM rates between 4 and 12 Hz required the information contained in AC temporal discharge patterns. Indeed, accurate classification of these AM stimuli depended on the inclusion of temporal information rather than spike count alone. Overall, our results compare two different representations of time-varying acoustic features that can be accessed by downstream circuits required for perceptual judgments.
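The contrast between the two decoders can be illustrated on synthetic binned spike trains: when spike counts are matched across stimuli, only a decoder sensitive to the temporal pattern can recover the AM rate. The prototype responses and bin sizes below are invented for illustration, not the recorded data:

```python
def count_decoder(proto_a, proto_b, test):
    """Classify a binned spike train by total spike count alone."""
    ct = sum(test)
    return 'A' if abs(ct - sum(proto_a)) <= abs(ct - sum(proto_b)) else 'B'

def pattern_decoder(proto_a, proto_b, test):
    """Classify by bin-wise (temporal) similarity to each prototype."""
    def d2(u, v):
        return sum((ui - vi) ** 2 for ui, vi in zip(u, v))
    return 'A' if d2(test, proto_a) <= d2(test, proto_b) else 'B'

# Hypothetical responses with matched spike counts but different
# temporal structure (spike counts per time bin):
resp_4hz = [2, 0, 0, 0, 2, 0, 0, 0]  # one modulation cycle
resp_8hz = [1, 0, 1, 0, 1, 0, 1, 0]  # two modulation cycles
```

Because both prototypes contain four spikes, the count decoder cannot separate them, mirroring the paper's finding that discriminating among AM rates above 4 Hz required temporal information.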

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the relationship between aesthetic preferences for consonance versus dissonance and the memorisation of musical intervals and chords, and found a significant trial-by-trial correlation between subjective aesthetic judgements and single trial amplitude fluctuations of the ERP attention-related N1 component.
Abstract: Is it true that we learn better what we like? Current neuroaesthetic and neurocomputational models of aesthetic appreciation postulate the existence of a correlation between aesthetic appreciation and learning. However, even though aesthetic appreciation has been associated with attentional enhancements, systematic evidence demonstrating its influence on learning processes is still lacking. Here, in two experiments, we investigated the relationship between aesthetic preferences for consonance versus dissonance and the memorisation of musical intervals and chords. In Experiment 1, 60 participants were first asked to memorise and evaluate arpeggiated triad chords (memorisation phase); then, following a distraction task, chords' memorisation accuracy was measured (recognition phase). Memorisation was significantly enhanced for subjectively preferred chords as compared with non-preferred ones. To explore the possible neural mechanisms underlying these results, we performed an EEG study, directed to investigate implicit perceptual learning dynamics (Experiment 2). Through an auditory mismatch detection paradigm, electrophysiological responses to standard/deviant intervals were recorded, while participants were asked to evaluate the beauty of the intervals. We found a significant trial-by-trial correlation between subjective aesthetic judgements and single trial amplitude fluctuations of the ERP attention-related N1 component. Moreover, implicit perceptual learning, expressed by larger mismatch detection responses, was enhanced for more appreciated intervals. Altogether, our results showed the existence of a relationship between aesthetic appreciation and implicit learning dynamics as well as higher-order learning processes, such as memorisation. This finding might suggest possible future applications in different research domains such as teaching and rehabilitation of memory and attentional deficits.

Journal ArticleDOI
TL;DR: A review of the state of the art in perceptual learning of dysarthric speech can be found in this article, with recommendations for translational studies that establish best practices and candidacy for listener-targeted dysarthria remediation (perceptual training).
Abstract: Purpose Early studies of perceptual learning of dysarthric speech, those summarized in Borrie, McAuliffe, and Liss (2012), yielded preliminary evidence that listeners could learn to better understand the speech of a person with dysarthria, revealing a potentially promising avenue for future intelligibility interventions. Since then, a programmatic body of research grounded in models of perceptual processing has unfolded. The current review provides an updated account of the state of the evidence in this area and offers direction for moving this work toward clinical implementation. Method The studies that have investigated perceptual learning of dysarthric speech (N = 24) are summarized and synthesized first according to the proposed learning source and then by highlighting the parameters that appear to mediate learning, culminating with additional learning outcomes. Results The recent literature has established strong empirical evidence of intelligibility improvements following familiarization with dysarthric speech and a theoretical account of the mechanisms that facilitate improved processing of the neurologically degraded acoustic signal. Conclusions There are no existing intelligibility interventions for individuals with dysarthria who cannot behaviorally modify their speech. However, there is now robust support for the development of an approach that shifts the weight of behavioral change from speaker to listener, exploiting perceptual learning to ease the intelligibility burden of dysarthria. To move this work from bench to bedside, recommendations for translational studies that establish best practices and candidacy for listener-targeted dysarthria remediation, perceptual training, are provided.

Journal ArticleDOI
TL;DR: This paper trained people to produce 90° mean relative phase using task-appropriate feedback and investigated whether and how that learning transfers to other coordinations, finding large, asymmetric transfer of learning from bimanual 90° to bimanual 60° and 120°.
Abstract: In this paper, we trained people to produce 90° mean relative phase using task-appropriate feedback and investigated whether and how that learning transfers to other coordinations. Past work has failed to find transfer of learning to other relative phases, only to symmetry partners (identical coordinations with reversed lead-lag relationships) and to other effector combinations. However, that research has all trained people using transformed visual feedback (visual metronomes, Lissajous feedback), which removes the relative motion information typically used to produce various coordinations (relative direction, relative position; Wilson and Bingham, in Percept Psychophys 70(3):465-476, 2008). Coordination feedback (Wilson et al., in J Exp Psychol Hum Percept Perform 36(6):1508, 2010) preserves that information, and we have recently shown that relative position supports transfer of learning between unimanual and bimanual performance of 90° (Snapp-Childs et al., in Exp Brain Res 233(7), 2225-2238, 2015). Here, we ask whether that information can support the production of other relative phases. We found large, asymmetric transfer of learning from bimanual 90° to bimanual 60° and 120°, supported by perceptual learning of relative position information at 90°. For learning to transfer, the two tasks must overlap in some critical way; this is additional evidence that this overlap must be informational. We discuss the results in the context of an ecological, task dynamical approach to understanding the nature of perception-action tasks.
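Mean relative phase of the kind trained in this study is typically computed as a circular statistic over the phase difference between the two limbs. The sketch below is illustrative only (the function name and toy data are our own, not from the paper), assuming instantaneous phases in degrees are already available:

```python
import math

def mean_relative_phase(phases_a, phases_b):
    """Circular mean of the phase difference between two oscillators.

    phases_a, phases_b: sequences of instantaneous phases in degrees.
    Returns the mean relative phase in degrees, in [0, 360).
    """
    sin_sum = 0.0
    cos_sum = 0.0
    for a, b in zip(phases_a, phases_b):
        diff = math.radians(a - b)
        sin_sum += math.sin(diff)
        cos_sum += math.cos(diff)
    # atan2 of the averaged components gives the circular mean angle.
    return math.degrees(math.atan2(sin_sum, cos_sum)) % 360.0

# Two limbs coordinating near 90 degrees relative phase:
left = [0, 30, 60, 90, 120]
right = [-90, -61, -29, 1, 29]
print(round(mean_relative_phase(left, right)))  # → 90
```

Averaging sines and cosines before taking the arctangent avoids the wrap-around artifacts that a plain arithmetic mean of angles would produce near 0°/360°.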

Journal ArticleDOI
TL;DR: In this paper, the amplitude modulation of visual evoked potential components following high-frequency visual stimulation was investigated; no significant correlation between the modulation magnitude of visual evoked potential components and visual perceptual learning task performance was evident.
Abstract: Objective: Stimulus-selective response modulation of sensory evoked potentials represents a well-established non-invasive index of long-term potentiation-like (LTP-like) synaptic plasticity in the human sensory cortices. Although our understanding of the mechanisms underlying stimulus-selective response modulation has increased over the past two decades, it remains unclear how this form of LTP-like synaptic plasticity is related to other basic learning mechanisms, such as perceptual learning. The aim of the current study was twofold; firstly, we aimed to corroborate former stimulus-selective response modulation studies, demonstrating modulation of visual evoked potential components following high-frequency visual stimulation. Secondly, we aimed to investigate the association between the magnitudes of LTP-like plasticity and visual perceptual learning. Methods: Forty-two healthy adults participated in the study. EEG data was recorded during a standard high-frequency stimulus-selective response modulation paradigm. Amplitude values were measured from the peaks of visual components C1, P1, and N1. Embedded in the same experimental session, the visual perceptual learning task required the participants to discriminate between a masked checkerboard pattern and a visual “noise” stimulus before, during and after the stimulus-selective response modulation probes. Results: We demonstrated significant amplitude modulations of visual evoked potentials components C1 and N1 from baseline to both post-stimulation probes. In the visual perceptual learning task, we observed a significant change in the average threshold levels from the first to the second round. No significant association between the magnitudes of LTP-like plasticity and performance on the visual perceptual learning task was evident.
Conclusions: To the extent of our knowledge, this study is the first to examine the relationship between the visual stimulus-selective response modulation phenomenon and visual perceptual learning in humans. In accordance with previous studies, we demonstrated robust amplitude modulations of the C1 and N1 components of the visual evoked potential waveform. However, we did not observe any significant correlations between modulation magnitude of visual evoked potential components and visual perceptual learning task performance, suggesting that these phenomena rely on separate learning mechanisms implemented by different neural mechanisms.

Journal ArticleDOI
TL;DR: In this article, the authors assessed the olfactory and gustatory performance of a group of university blind wine tasters before and after training and found that olfactory discrimination improved significantly in the training group, which also outperformed controls in olfactory identification.
Abstract: A growing body of research has demonstrated differences in perceptual, conceptual, and language abilities between wine experts and novices. However, it is unclear to what extent these differences are innate or acquired through training. The present study assessed the olfactory and gustatory performance of a group of university blind wine tasters before and after training. Previous research has shown that this training regimen significantly improves blind tasting accuracy, but it remains unknown whether perceptual learning from blind tasting training is generalisable to standard tests of olfactory/gustatory ability. Two testing sessions were carried out for the training group (N = 14) as well as for a control group (N = 12) before and after a 5-week training period. In each session, participants underwent olfactory threshold, discrimination, and identification assessments as well as a gustatory sensitivity test. Olfactory discrimination significantly improved in the training group over the 5-week period, and the training group outperformed controls in olfactory identification in both sessions. Based on our limited set of data, wine training seems to have improved olfactory discrimination, even though the method of training did not involve odorants used in the discrimination test itself. These results reveal that even wine training over a short period seems to make concrete changes to olfactory performance, supporting the idea that generalised perceptual learning can take place for odour discrimination.

Journal ArticleDOI
TL;DR: In this paper, the authors used electroencephalography (EEG) to interrogate the feedforward and feedback contributions to neural adaptation as adults with and without dyslexia viewed pairs of faces and words in a paradigm that manipulated whether there was a high likelihood of stimulus repetition versus a high probability of stimulus change.

Journal ArticleDOI
TL;DR: This model considers both local features of shapes (edge lengths and vertex angles) and a global feature (concaveness); it is in 92% agreement with human subjective ratings of shape complexity and is consistent with hierarchical perceptual learning theory.
Abstract: Understanding how people perceive the visual complexity of shapes has important theoretical as well as practical implications. One school of thought, driven by information theory, focuses on studying the local features that contribute to the perception of visual complexity. Another school, in contrast, emphasizes the impact of global characteristics of shapes on perceived complexity. Inspired by recent discoveries in neuroscience, our model considers both local features of shapes (edge lengths and vertex angles) and a global feature (concaveness), and is in 92% agreement with human subjective ratings of shape complexity. The model is also consistent with the hierarchical perceptual learning theory, which explains how different layers of neurons in the visual system act together to yield a perception of visual shape complexity.
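As a rough illustration of combining local cues (edge-length and vertex-angle variability) with a global cue (concaveness), the sketch below scores a simple polygon. The equal weighting and the specific score are placeholder assumptions of ours, not the fitted model reported in the paper:

```python
import math

def polygon_complexity(vertices):
    """Illustrative complexity score for a simple polygon.

    Combines local cues (coefficient of variation of edge lengths and of
    vertex angles) with a global cue (fraction of concave vertices).
    The equal weighting is an arbitrary placeholder.
    """
    n = len(vertices)
    edges, angles, concave = [], [], 0
    # Twice the signed area gives the winding direction (positive = CCW).
    area2 = sum(vertices[i][0] * vertices[(i + 1) % n][1]
                - vertices[(i + 1) % n][0] * vertices[i][1] for i in range(n))
    for i in range(n):
        x0, y0 = vertices[i - 1]
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        edges.append(math.hypot(x1 - x0, y1 - y0))
        # A cross product opposing the winding direction flags a concave vertex.
        cross = (x1 - x0) * (y2 - y1) - (y1 - y0) * (x2 - x1)
        if cross * area2 < 0:
            concave += 1
        a = math.hypot(x0 - x1, y0 - y1)
        b = math.hypot(x2 - x1, y2 - y1)
        dot = (x0 - x1) * (x2 - x1) + (y0 - y1) * (y2 - y1)
        angles.append(math.acos(max(-1.0, min(1.0, dot / (a * b)))))

    def cv(xs):  # coefficient of variation
        m = sum(xs) / len(xs)
        var = sum((x - m) ** 2 for x in xs) / len(xs)
        return math.sqrt(var) / m if m else 0.0

    return cv(edges) + cv(angles) + concave / n

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
arrow = [(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]  # one concave vertex
print(polygon_complexity(square) < polygon_complexity(arrow))  # → True
```

A square scores 0 (no edge or angle variability, no concave vertices), while an arrow-like polygon with one concave vertex scores higher, matching the intuition that concaveness raises perceived complexity.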

Journal ArticleDOI
TL;DR: In this article, the authors show that naive listeners, those with no prior experience with dysarthria, benefit from perceptual training paradigms that provide explicit familiarization with dysarthric speech.
Abstract: Purpose Perceptual training paradigms, which leverage the mechanism of perceptual learning, show that naive listeners, those with no prior experience with dysarthria, benefit from explicit familiar...

Journal ArticleDOI
TL;DR: In this paper, the authors studied 23 Ethiopian children with bilateral early-onset cataracts and surgically treated only years later, and found that the patients' visual acuity typically improved substantially within 6 months post-surgery.

Journal ArticleDOI
TL;DR: In this paper, four groups of subjects underwent motion direction discrimination training over 8 days with 40, 120, 360, or 1080 trials per day; the different daily training amounts induced similar improvement across the four groups, and the similarity lasted for at least two weeks.
Abstract: Perceptual learning has been widely used to study the plasticity of the visual system in adults. Owing to the belief that practice makes perfect, perceptual learning protocols usually require subjects to practice a task thousands of times over days, even weeks. However, we know very little about the relationship between training amount and behavioral improvement. Here, four groups of subjects underwent motion direction discrimination training over 8 days with 40, 120, 360, or 1080 trials per day. Surprisingly, different daily training amounts induced similar improvement across the four groups, and the similarity lasted for at least 2 weeks. Moreover, the group with 40 training trials per day showed more learning transfer from the trained direction to the untrained directions than the group with 1080 training trials per day immediately after training and 2 weeks later. These findings suggest that perceptual learning of motion direction discrimination is not always dependent on the daily training amount and less training leads to more transfer.

Journal ArticleDOI
TL;DR: In this article, the authors investigated whether training-induced reductions in TBW size transfer across visual stimulus intensities and found that perceptual improvements following training are specific to high-intensity stimuli, highlighting limitations of proposed TBW training procedures.
Abstract: The temporal binding window (TBW), which reflects the range of temporal offsets in which audiovisual stimuli are combined to form a singular percept, can be reduced through training. Our research aimed to investigate whether training-induced reductions in TBW size transfer across stimulus intensities. A total of 32 observers performed simultaneity judgements at two visual intensities with a fixed auditory intensity, before and after receiving audiovisual TBW training at just one of these two intensities. We show that training individuals with a high visual intensity reduces the size of the TBW for bright stimuli, but this improvement did not transfer to dim stimuli. The reduction in TBW can be explained by shifts in decision criteria. Those trained with the dim visual stimuli, however, showed no reduction in TBW. Our main finding is that perceptual improvements following training are specific to high-intensity stimuli, potentially highlighting limitations of proposed TBW training procedures.
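TBW width is commonly estimated from a simultaneity-judgement psychometric curve: the span of stimulus-onset asynchronies (SOAs) at which observers report "simultaneous" more often than some criterion. The sketch below is a minimal version of that idea (the function name, criterion, and toy data are our own assumptions, not the authors' analysis), using linear interpolation at the criterion crossings:

```python
def tbw_width(soas_ms, p_simultaneous, criterion=0.5):
    """Estimate temporal binding window width as the span of SOAs over
    which the proportion of 'simultaneous' responses exceeds a criterion.

    soas_ms: sorted stimulus-onset asynchronies in ms (audio minus visual).
    p_simultaneous: proportion of 'simultaneous' judgements at each SOA.
    """
    def crossing(i, j):
        # Interpolate the SOA where the curve hits the criterion.
        x0, x1 = soas_ms[i], soas_ms[j]
        y0, y1 = p_simultaneous[i], p_simultaneous[j]
        return x0 + (criterion - y0) * (x1 - x0) / (y1 - y0)

    left = right = None
    for i in range(len(soas_ms) - 1):
        y0, y1 = p_simultaneous[i], p_simultaneous[i + 1]
        if left is None and y0 < criterion <= y1:
            left = crossing(i, i + 1)   # rising edge of the window
        if y0 >= criterion > y1:
            right = crossing(i, i + 1)  # falling edge of the window
    if left is None or right is None:
        raise ValueError("curve never crosses the criterion on both sides")
    return right - left

soas = [-400, -200, -100, 0, 100, 200, 400]
props = [0.05, 0.30, 0.80, 0.95, 0.85, 0.40, 0.10]
print(round(tbw_width(soas, props)))  # → 338
```

On this toy data the window spans roughly -160 ms to +178 ms. A shift in decision criteria, as the abstract reports, would move both crossings inward and shrink the estimated width without any change in underlying sensory precision.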

Journal ArticleDOI
01 Jun 2021-Synthese
TL;DR: This paper provides a complete formulation of the Objection from Context and evaluates Brogaard's reply to it, arguing that the exercise of context-sensitivity in language comprehension does, in fact, typically involve inference.
Abstract: According to the perceptual view of language comprehension, listeners typically recover high-level linguistic properties such as utterance meaning without inferential work. The perceptual view is subject to the Objection from Context: since utterance meaning is massively context-sensitive, and context-sensitivity requires cognitive inference, the perceptual view is false. In recent work, Berit Brogaard provides a challenging reply to this objection. She argues that in language comprehension context-sensitivity is typically exercised not through inferences, but rather through top-down perceptual modulations or perceptual learning. This paper provides a complete formulation of the Objection from Context and evaluates Brogaard's reply to it. Drawing on conceptual considerations and empirical examples, we argue that the exercise of context-sensitivity in language comprehension does, in fact, typically involve inference.