scispace - formally typeset
Search or ask a question
Journal ArticleDOI

Dynamics of brain activation during learning of syllable-symbol paired associations

TL;DR: The results show that the short-term learning effects emerge rapidly (manifesting in later stages of audio-visual integration processes) and that these effects are modulated by selective attention processes.
About: This article is published in Neuropsychologia. The article was published on 2019-06-01 and is currently open access. It has received 4 citations to date. The article focuses on the topics: Brain activity and meditation & Passive learning.

Summary (3 min read)

1. Introduction

  • Relatively little is known about the immediate learning processes in the human brain that occur at the beginning stages of training of cross-modal associations.
  • While studies examining the long-term learning effects have been important in establishing the brain mechanisms involved in cross-modal processing, it is not known which of these brain mechanisms are used during the initial steps of the learning process, and if there are distinct stages of learning during which some of the mechanisms are more important than others.
  • Studies on long-term effects of audio-visual learning provide a starting point for expected short-term learning effects.
  • The superior temporal sulcus in the left hemisphere has been implicated particularly in processing of well-established letter-speech sound combinations, thus mostly reflecting long-term audio-visual memory representations (Raij et al., 2000; van Atteveldt et al., 2004; Hashimoto & Sakai, 2004; Blomert, 2011).
  • Both active and passive tasks were used to examine possible general neural mechanisms related to learning of audio-visual associations.

2.1 Experiment 1

  • Thirteen adult participants were included in the analyses (26.3 years on average, range 21-38 years; 7 female, 6 male; 12 right-handed, 1 ambidextrous based on self-report).
  • From the total of 15 participants, one participant was excluded due to magnetic artifact from a tooth brace and one due to excessive eye blinks during the visual stimulus presentation.
  • None of the participants had lived in Japan or studied Japanese (relevant for the choice of visual stimuli, see below).
  • The study was approved by the Ethics Committee of the Aalto University.

2.1.2 Stimuli and experimental design

  • Auditory stimuli were recorded by a female native Finnish speaker in a sound-attenuated booth.
  • The delayed audio presentation was introduced in order to allow a clean access to cortical processing of the visual symbol without contamination by auditory activation, motor response, or response error monitoring.
  • Accuracy and reaction time (with respect to question mark onset) were obtained for each trial.
  • For this purpose, two categories of trials were created: learnable and non-learnable.
  • For the other half of the symbols the participants received the word ‘incorrect’ as the feedback and thus their association to syllables could not be learned (non-learnable category).

2.1.3 Data recording and analysis

  • MEG data was collected using a 306-channel (102 magnetometers, 204 planar gradiometers) whole-head device (Elekta Oy, Finland) at the MEG Core of Aalto NeuroImaging, Aalto University, Finland.
  • The head position was monitored continuously using 5 small coils attached to the scalp (3 on the forehead and 2 behind the ears).
  • Noise covariance matrix was calculated from the baseline interval of the averaged responses.

2.1.4 Statistical analysis

  • Repeated measures ANOVAs (category [learnable, non-learnable] × quarter [1st, 2nd, 3rd, 4th] × hemisphere [left, right]) for each time window and region of interest were conducted.
  • Effects involving interaction between category and quarter were of interest.

2.2.1 Participants

  • Seventeen adult participants were included in the analyses (26.2 years on average, range 20-35 years; 14 female, 3 male; 16 right-handed, 1 left-handed based on self-report).
  • The study was approved by the Ethics Committee of the University of Jyväskylä, Finland.
  • Each experimental trial started with a fixation cross shown at the centre of the screen for 745 ms.
  • To examine the effect of association learning, two categories of trials were created, learnable and non-learnable.
  • Half of the visual stimuli were always presented with their corresponding auditory stimulus (learnable category), while the other half of the visual stimuli were randomly paired with three auditory stimuli (non-learnable category).
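The pairing scheme described above can be sketched in code. This is a minimal illustration only: the symbol and syllable names are hypothetical placeholders, not the actual stimuli, and the repetition count is arbitrary.

```python
import random

def make_trials(visual, auditory, n_repeats=10, seed=1):
    """Sketch of the two trial categories: the first half of the symbols
    keeps a fixed one-to-one pairing with a syllable (learnable); each
    remaining symbol is paired at random with one of three syllables on
    every repetition (non-learnable)."""
    rng = random.Random(seed)
    half = len(visual) // 2
    trials = []
    for _ in range(n_repeats):
        # Learnable: symbol i always co-occurs with syllable i.
        for i, symbol in enumerate(visual[:half]):
            trials.append((symbol, auditory[i], "learnable"))
        # Non-learnable: random draw from three syllables, so no
        # consistent association can be formed.
        for symbol in visual[half:]:
            trials.append((symbol, rng.choice(auditory[:3]), "non-learnable"))
    rng.shuffle(trials)  # randomize presentation order
    return trials

# Placeholder stimulus sets (hypothetical names).
symbols = ["sym1", "sym2", "sym3", "sym4"]
syllables = ["ka", "mo", "su", "te"]
trials = make_trials(symbols, syllables)
```

With this design, only the learnable half of the symbols carries a consistent audio-visual mapping that a participant could extract over repetitions.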

2.2.3 Data recording and analysis

  • EEG data was collected using a 128-channel NeurOne amplifier (Bittium Oy, Finland) with Ag-AgCl electrodes attached to the HydroCel electrode net (Electrical Geodesics Inc., OR, USA) with Cz electrode as the reference.
  • Electrode impedance was checked at the beginning of the recording and aimed to be below 50 kOhms for all channels.
  • The data was analysed using BESA Research 6.1 (BESA GmbH, Grafelfing, Germany).
  • EEG was first examined for channels with poor data quality (mean: 4, range 0-10) that were rejected at this stage, and then segmented into trial-based time windows of -200 to 700 ms with respect to the visual symbol onset (200 ms pre-stimulus baseline).

2.2.4 Statistical analysis

  • EEG data was then examined using cluster-based permutation tests (Maris & Oostenveld, 2007) in BESA Statistics 2.0.
  • After initial t-test comparison between conditions of interest, the results were clustered based on time points and channels.
  • Significance values for the clusters were based on permuted condition labels.
  • Cluster alpha of 0.05 was used with 3.5 cm channel neighbor distance and 3000 permutations.
  • The learnable and non-learnable conditions were compared in each block.
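BESA Statistics handles the spatiotemporal clustering internally. As an illustration of the label-permutation logic alone (without the clustering step over channels and time points), a stand-alone sketch might look like the following; the per-subject amplitudes are hypothetical placeholders, and the 3000-permutation default echoes the setting reported above.

```python
import random
from statistics import mean

def permutation_test(cond_a, cond_b, n_perm=3000, seed=0):
    """Two-sample permutation test on the absolute difference of means.

    Condition labels are shuffled n_perm times, mirroring the permuted
    condition labels used for the cluster significance values."""
    rng = random.Random(seed)
    observed = abs(mean(cond_a) - mean(cond_b))
    pooled = list(cond_a) + list(cond_b)
    n_a = len(cond_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random reassignment of condition labels
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)  # add-one correction avoids p = 0

# Hypothetical mean amplitudes (arbitrary units) for 17 subjects per condition.
learnable = [1.8, 2.1, 1.5, 2.4, 1.9, 2.2, 1.7, 2.0, 2.3,
             1.6, 2.1, 1.9, 2.2, 1.8, 2.0, 2.4, 1.7]
non_learnable = [1.2, 1.5, 1.0, 1.7, 1.3, 1.6, 1.1, 1.4, 1.8,
                 1.0, 1.5, 1.3, 1.6, 1.2, 1.4, 1.9, 1.1]
p = permutation_test(learnable, non_learnable)
```

The full cluster-based method additionally sums test statistics within contiguous channel-time clusters before permuting, which is what controls for multiple comparisons across channels and time points.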

3.1.1 Behavioral results

  • All participants were able to learn the correct audio-visual associations during the first half (1st and 2nd quarters) of the MEG recording with only a few errors made after that.
  • Accuracy was scored based on the response to the question “do the symbol and syllable form a pair” (for non-learnable items the correct answer was ‘no’).
  • The mean accuracy rate was 90 % and 93 % and mean reaction times were 436 ms and 513 ms for the learnable and non-learnable categories, respectively, across the whole training session.
  • There was a clear effect of training in the accuracy and reaction time measures with improving performance towards the end of the session as shown in Figure 5.

3.1.2 MEG results

  • The MEG data showed clear visual and auditory evoked fields.
  • The response was similar for the two categories during the first quarter of the session, started to differ between categories during the second quarter, and remained different between categories until the end of the session.
  • The distributed source analysis paralleled the sensor level trends.
  • Activation loci were found in the left and right inferior temporo-occipital areas as well as left frontal areas and right central-parietal areas in the time window of the slowly growing difference between the categories.
  • This was due to a decrease of source strength from the first to the second quarter.

3.2 Experiment 2: Passive learning

  • Similarly to the active learning experiment, the EEG data for the passive learning was examined in four blocks of equal length (10 min).
  • There were no statistically significant condition differences (p = 0.274) in the ERPs measured during the first 5 minutes, whereas the between-category differences during the second 5-minute sub-block were statistically significant (cluster 1, p < 0.045 at 165-276 ms, fronto-central distribution).
  • The authors expected to see learning effects at the early sensory responses as well as in later time window linked to perceptual learning and audio-visual integration in brain areas that previous studies have linked to short-term cross-modal learning (e.g., Raij et al., 2000; Hashimoto & Sakai, 2004).
  • Frontal cortices also showed enhanced activity bilaterally after 10 minutes of training.
  • The time window after 300 ms matches well with the current active learning task and with earlier EEG studies examining audio-visual learning using a congruency manipulation (Shams et al., 2005; Karipidis et al., 2017, 2018).

5. References

  • Audiovisual integration of letters in the human brain.
  • The grey box represents the approximate time window for the difference between the stimulus categories given by the cluster-based permutation statistics.


Figures (6)
Citations
Journal ArticleDOI
TL;DR: The Jyväskylä Longitudinal Study of Dyslexia (JLD), as discussed by the authors, found that the likelihood of at-risk children performing poorly in reading and spelling tasks was fourfold compared to the controls.
Abstract: This paper reviews the observations of the Jyvaskyla Longitudinal Study of Dyslexia (JLD). The JLD is a prospective family risk study in which the development of children with familial risk for dyslexia (N = 108) due to parental dyslexia and controls without dyslexia risk (N = 92) were followed from birth to adulthood. The JLD revealed that the likelihood of at-risk children performing poorly in reading and spelling tasks was fourfold compared to the controls. Auditory insensitivity of newborns observed during the first week of life using brain event-related potentials (ERPs) was shown to be the first precursor of dyslexia. ERPs measured at six months of age related to phoneme length identification differentiated the family risk group from the control group and predicted reading speed until the age of 14 years. Early oral language skills, phonological processing skills, rapid automatized naming, and letter knowledge differentiated the groups from ages 2.5–3.5 years onwards and predicted dyslexia and reading development, including reading comprehension, until adolescence. The home environment, a child’s interest in reading, and task avoidance were not different in the risk group but were found to be additional predictors of reading development. Based on the JLD findings, preventive and intervention methods utilizing the association learning approach have been developed.

22 citations

Journal ArticleDOI
TL;DR: Dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned and changes were observed in the brain responses to the novel letters during the learning process are found.

18 citations


Cites background from "Dynamics of brain activation during..."

  • ...A = Auditory cortex, V = Visual cortex, STC = … cortical representation and automatic processing of the audiovisual objects....

    [...]

  • ...…Hashimoto and Sakai, 2004; Brem et al. 2010, 2018) and dorsal pathway (Taylor et al. 2014, 2017; Hashimoto and Sakai, 2004; Mei et al. 2014, 2015) as well as the STC (Hämäläinen et al., 2019; Karipidis et al. 2017, 2018; Madec et al., 2016) for forming optimal letter-speech sound associations....

    [...]

  • ...Furthermore, these processes might be affected by modulation of attention to important features for learning in the frontal cortices (Hämäläinen et al., 2019)....

    [...]

  • ...In addition, auditory and visual stimuli are combined into audiovisual objects in multisensory brain regions (Stein and Stanford, 2008) (e.g., STC) and such cross-modal audiovisual association is initially stored in the short-term memory system....

    [...]

  • ...As learning progresses, changes have been reported to occur in vOT (Quinn et al., 2017; Madec et al., 2016; Hashimoto and Sakai, 2004; Brem et al. 2010, 2018) and dorsal pathway (Taylor et al. 2014, 2017; Hashimoto and Sakai, 2004; Mei et al. 2014, 2015) as well as the STC (Hämäläinen et al., 2019; Karipidis et al. 2017, 2018; Madec et al., 2016) for forming optimal letter-speech sound associations....

    [...]

Posted ContentDOI
23 Mar 2020-bioRxiv
TL;DR: Dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned and changes were observed in the brain responses to the novel letters during the learning process are found.
Abstract: Learning to associate written letters with speech sounds is crucial for the initial phase of acquiring reading skills. However, little is known about the cortical reorganization for supporting letter-speech sound learning, particularly the brain dynamics during the learning of grapheme-phoneme associations. In the present study, we trained 30 Finnish participants (mean age: 24.33 years, SD: 3.50 years) to associate novel foreign letters with familiar Finnish speech sounds on two consecutive days (first day ~ 50 minutes; second day ~ 25 minutes), while neural activity was measured using magnetoencephalography (MEG). Two sets of audiovisual stimuli were used for the training in which the grapheme-phoneme association in one set (Learnable) could be learned based on the different learning cues provided, but not in the other set (Control). The learning progress was tracked at a trial-by-trial basis and used to segment different learning stages for the MEG source analysis. The learning-related changes were examined by comparing the brain responses to Learnable and Control uni/multi-sensory stimuli, as well as the brain responses to learning cues at different learning stages over the two days. We found dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned. Further, changes were observed in the brain responses to the novel letters during the learning process. We also found that some of these learning effects were observed only after memory consolidation the following day. Overall, the learning process modulated the activity in a large network of brain regions, including the superior temporal cortex and the dorsal (parietal) pathway. Most interestingly, middle- and inferior- temporal regions were engaged during multi-sensory memory encoding after the cross-modal relationship was extracted from the learning cues. 
Our findings highlight the brain dynamics and plasticity related to the learning of letter-speech sound associations and provide a more refined model of grapheme-phoneme learning in reading acquisition.

4 citations


Cites background from "Dynamics of brain activation during..."

  • ...Depth-weighted (p = 0.8) minimum-norm estimates (wMNE) (Hämäläinen and Ilmoniemi 1994; Lin et al. 2006) were calculated for 10242 free-orientation sources per hemisphere....

    [...]

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed an urban landscape visual communication optimization method based on hue saturation value (HSV) technology to provide more orderly and convenient urban planning ideas for the fast-paced life in complex urban environment.
Abstract: The purpose of this study is to provide more orderly and convenient urban planning ideas for the fast-paced life in the complex urban environment. The current sign design is analyzed according to the needs of urban residents for barrier-free sign design, and the sign design based on urban space color is established. An urban landscape visual communication optimization method is proposed based on hue saturation value (HSV) technology. The multiscale retinex (MSR) algorithm is used as a reference for simulation experiments. The experimental results show that the designed optimization method is significantly better than the traditional method in the expression effect of visual communication. First, the attention time of the sign design can be reduced by more than 3 seconds, which can effectively improve the lives of urban residents and tourists and improve their browsing efficiency. Next, 94% of the citizens believe that optimized urban signs are more prominent than traditional ones. Finally, the sign design optimization method proposed provides an image with a higher definition than the traditional sign design method. The proposed sign design and optimization scheme can effectively coordinate the relationship among urban landscape design, guided objects, and cities, help busy urban life, and provide new ideas for the development direction of visual communication design.

1 citation

References
Journal ArticleDOI
22 Apr 2010-PLOS ONE
TL;DR: The role of stimulus exposure and listening tasks, in the absence of training, is examined with respect to the modulation of evoked brain activity; the results are interpreted to mean that stimulus exposure, with and without being paired with an identification task, alters the way sound is processed in the brain.
Abstract: Auditory training programs are being developed to remediate various types of communication disorders. Biological changes have been shown to coincide with improved perception following auditory training so there is interest in determining if these changes represent biologic markers of auditory learning. Here we examine the role of stimulus exposure and listening tasks, in the absence of training, on the modulation of evoked brain activity. Twenty adults were divided into two groups and exposed to two similar sounding speech syllables during four electrophysiological recording sessions (24 hours, one week, and up to one year later). In between each session, members of one group were asked to identify each stimulus. Both groups showed enhanced neural activity from session-to-session, in the same P2 latency range previously identified as being responsive to auditory training. The enhancement effect was most pronounced over temporal-occipital scalp regions and largest for the group who participated in the identification task. The effects were rapid and long-lasting with enhanced synchronous activity persisting months after the last auditory experience. Physiological changes did not coincide with perceptual changes so results are interpreted to mean stimulus exposure, with and without being paired with an identification task, alters the way sound is processed in the brain. The cumulative effect likely involves auditory memory; however, in the absence of training, the observed physiological changes are insufficient to result in changes in learned behavior.

65 citations

Journal ArticleDOI
TL;DR: In this article, the authors investigated mechanisms for associative learning for stimuli presented in different sensory modalities, using single-trial functional magnetic resonance imaging (FMRI) and found significant time-dependent learning effects in medial parietal and right dorsolateral prefrontal cortices.

59 citations

Journal ArticleDOI
TL;DR: In this paper, a neural mechanism for the integration of speech and script that can serve as a basis for future studies addressing (the failure of) literacy acquisition is presented. But the authors do not consider the role of different stimulus and task factors and effective connectivity between different brain regions.

58 citations

Journal ArticleDOI
TL;DR: The results suggest that the left MFC is a part of an executive attention network, and that the dichotic listening forced attention paradigm may be a feasible tool for assessing subtle attentional dysfunctions in older adults.
Abstract: Background: The frontal lobe has been associated to a wide range of cognitive control functions and is also vulnerable to degeneration in old age. A recent study by Thomsen and colleagues showed a difference between a young and old sample in grey matter density and activation in the left middle frontal cortex (MFC) and performance on a dichotic listening task. The present study investigated this brain behaviour association within a sample of healthy older individuals, and predicted a positive correlation between performance in a condition requiring executive attention and measures of grey matter structure of the posterior left MFC. Methods: A dichotic listening forced attention paradigm was used to measure attention control functions. Subjects were instructed to report only the left or the right ear syllable of a dichotically presented consonant-vowel syllable pair. A conflict situation appears when subjects are instructed to report the left ear stimulus, caused by the conflict with the bottom-up, stimulus-driven right ear advantage. Overcoming this processing conflict was used as a measure of executive attention. Thickness and volumes of frontal lobe regions were derived from automated segmentation of 3D magnetic resonance image acquisitions. Results: The results revealed a statistically significant positive correlation between the thickness measure of the left posterior MFC and performance on the dichotic listening measures of executive attention. Follow-up analyses showed that this correlation was only statistically significant in the subgroup that showed the typical bottom-up, stimulus-driven right ear advantage. Conclusion: The results suggest that the left MFC is a part of an executive attention network, and that the dichotic listening forced attention paradigm may be a feasible tool for assessing subtle attentional dysfunctions in older adults.

53 citations


"Dynamics of brain activation during..." refers background in this paper

  • ...The caudal middle frontal cortex most likely reflects either working memory or attentional control in the current experiment (Calvert, 2001; Andersson et al., 2009; Kastner & Ungerleider, 2000; Moisala et al., 2015)....

    [...]

Journal ArticleDOI
TL;DR: The ability to learn grapheme‐phoneme correspondences, the familial history of reading disability, and phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network.
Abstract: Learning letter-speech sound correspondences is a major step in reading acquisition and is severely impaired in children with dyslexia. Up to now, it remains largely unknown how quickly neural networks adopt specific functions during audiovisual integration of linguistic information when prereading children learn letter-speech sound correspondences. Here, we simulated the process of learning letter-speech sound correspondences in 20 prereading children (6.13-7.17 years) at varying risk for dyslexia by training artificial letter-speech sound correspondences within a single experimental session. Subsequently, we acquired simultaneously event-related potentials (ERP) and functional magnetic resonance imaging (fMRI) scans during implicit audiovisual presentation of trained and untrained pairs. Audiovisual integration of trained pairs correlated with individual learning rates in right superior temporal, left inferior temporal, and bilateral parietal areas and with phonological awareness in left temporal areas. In correspondence, a differential left-lateralized parietooccipitotemporal ERP at 400 ms for trained pairs correlated with learning achievement and familial risk. Finally, a late (650 ms) posterior negativity indicating audiovisual congruency of trained pairs was associated with increased fMRI activation in the left occipital cortex. Taken together, a short (<30 min) letter-speech sound training initializes audiovisual integration in neural systems that are responsible for processing linguistic information in proficient readers. To conclude, the ability to learn grapheme-phoneme correspondences, the familial history of reading disability, and phonological awareness of prereading children account for the degree of audiovisual integration in a distributed brain network. Such findings on emerging linguistic audiovisual integration could allow for distinguishing between children with typical and atypical reading development. 
Hum Brain Mapp 38:1038-1055, 2017. © 2016 Wiley Periodicals, Inc.

51 citations


"Dynamics of brain activation during..." refers background in this paper

  • ...Training effects occurring within days to learning of audio-visual combinations have been found in the left parieto-temporal cortex and posterior inferior temporal gyrus (Hashimoto & Sakai, 2004; Karipidis et al., 2017)....

    [...]

  • ...see modulation of activity at a later time window where cross-modal integration effects have been reported (Raij et al., 2000; Shams et al., 2005; Karipidis et al., 2017)....

    [...]

Frequently Asked Questions (1)
Q1. What are the contributions mentioned in the paper "Dynamics of brain activation during learning of syllable-symbol paired associations" ?

In this paper, the authors examined the short-term dynamics of brain activation during the initial learning of syllable-symbol paired associations, using MEG during active learning and EEG during passive learning.