Journal ArticleDOI

Dynamics of brain activation during learning of syllable-symbol paired associations

TL;DR: The results show that the short-term learning effects emerge rapidly (manifesting in later stages of audio-visual integration processes) and that these effects are modulated by selective attention processes.
About: This article is published in Neuropsychologia. The article was published on 2019-06-01 and is currently open access. It has received 4 citations to date. The article focuses on the topics: Brain activity and meditation & Passive learning.

Summary (3 min read)

1. Introduction

  • Relatively little is known about the immediate learning processes in the human brain that occur at the beginning stages of training of cross-modal associations.
  • While studies examining the long-term learning effects have been important in establishing the brain mechanisms involved in cross-modal processing, it is not known which of these brain mechanisms are used during the initial steps of the learning process, and if there are distinct stages of learning during which some of the mechanisms are more important than others.
  • Studies on long-term effects of audio-visual learning provide a starting point for expected short-term learning effects.
  • The superior temporal sulcus in the left hemisphere has been implicated particularly in the processing of well-established letter-speech sound combinations, thus mostly reflecting long-term audio-visual memory representations (Raij et al., 2000; van Atteveldt et al., 2004; Hashimoto & Sakai, 2004; Blomert, 2011).
  • Both active and passive tasks were used to examine possible general neural mechanisms related to learning of audio-visual associations.

2.1 Experiment 1

  • Thirteen adult participants were included in the analyses (26.3 years on average, range 21-38 years; 7 female, 6 male; 12 right-handed, 1 ambidextrous based on self-report).
  • From the total of 15 participants, one participant was excluded due to magnetic artifact from a tooth brace and one due to excessive eye blinks during the visual stimulus presentation.
  • None of the participants had lived in Japan or studied Japanese (relevant for the choice of visual stimuli, see below).
  • The study was approved by the Ethics Committee of the Aalto University.

2.1.2 Stimuli and experimental design

  • Auditory stimuli were recorded by a female native Finnish speaker in a sound-attenuated booth.
  • The delayed audio presentation was introduced in order to allow clean access to cortical processing of the visual symbol without contamination by auditory activation, motor response, or response-error monitoring.
  • Accuracy and reaction time (with respect to question mark onset) were obtained for each trial.
  • For this purpose, two categories of trials were created: learnable and non-learnable.
  • For the other half of the symbols the participants received the word ‘incorrect’ as the feedback and thus their association to syllables could not be learned (non-learnable category).
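The learnable/non-learnable split described above can be sketched as follows. This is an illustrative reconstruction, not the study's actual stimulus-presentation code; all function and variable names are assumptions. Half of the symbols get a fixed syllable pairing (learnable), while the rest are re-paired at random on every trial (non-learnable):

```python
import random

def assign_categories(symbols, syllables, seed=0):
    """Split symbols into a learnable half (fixed symbol-syllable pairing)
    and a non-learnable half (randomly re-paired on every trial).
    Illustrative sketch; names are assumptions, not from the study."""
    rng = random.Random(seed)
    shuffled = symbols[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    learnable = {sym: syl for sym, syl in zip(shuffled[:half], syllables)}
    non_learnable = shuffled[half:]
    return learnable, non_learnable

def draw_trial(learnable, non_learnable, syllables, rng):
    """Draw one trial: learnable symbols keep their fixed syllable,
    non-learnable symbols get a random syllable each time."""
    sym = rng.choice(list(learnable) + non_learnable)
    if sym in learnable:
        return sym, learnable[sym], "learnable"
    return sym, rng.choice(syllables), "non-learnable"
```

Because the non-learnable pairing changes across trials, no consistent association can be extracted for that half, which is what makes it a within-session control for the learnable category.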

2.1.3 Data recording and analysis

  • MEG data was collected using a 306-channel (102 magnetometers, 204 planar gradiometers) whole-head device (Elekta Oy, Finland) at the MEG Core of Aalto NeuroImaging, Aalto University, Finland.
  • The head position was monitored continuously using 5 small coils attached to the scalp (3 on the forehead and 2 behind the ears).
  • The noise covariance matrix was calculated from the baseline interval of the averaged responses.

2.1.4 Statistical analysis

  • Repeated-measures ANOVAs (category [learnable, non-learnable] × quarter [1st, 2nd, 3rd, 4th] × hemisphere [left, right]) were conducted for each time window and region of interest.
  • Effects involving interaction between category and quarter were of interest.
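A minimal sketch of how the key category × quarter interaction could be tested. The study's actual analysis used the full three-factor repeated-measures ANOVA; here, under the common simplification that a one-way RM-ANOVA on per-subject (learnable − non-learnable) difference scores across quarters tests exactly that interaction, a plain-NumPy implementation looks like this (synthetic data; all names are assumptions):

```python
import numpy as np

def rm_anova_1way(data):
    """One-way repeated-measures ANOVA on a (n_subjects, k_levels) array.
    Returns (F, df_effect, df_error). Illustrative sketch only."""
    n, k = data.shape
    gm = data.mean()
    ss_treat = n * ((data.mean(axis=0) - gm) ** 2).sum()   # effect of level
    ss_subj = k * ((data.mean(axis=1) - gm) ** 2).sum()    # subject variance
    ss_total = ((data - gm) ** 2).sum()
    ss_error = ss_total - ss_treat - ss_subj               # residual
    df1, df2 = k - 1, (n - 1) * (k - 1)
    F = (ss_treat / df1) / (ss_error / df2)
    return F, df1, df2

# Synthetic example: 13 subjects x 4 quarters, with a growing
# learnable-vs-non-learnable difference across the session.
rng = np.random.default_rng(0)
n_subj, n_quarters = 13, 4
learnable = rng.normal(0, 1, (n_subj, n_quarters)) + np.linspace(0, 1, n_quarters)
non_learnable = rng.normal(0, 1, (n_subj, n_quarters))
F, df1, df2 = rm_anova_1way(learnable - non_learnable)
```

With 13 subjects and 4 quarters the interaction test has df = (3, 36), matching the Experiment 1 sample size.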

2.2.1 Participants

  • Seventeen adult participants were included in the analyses (26.2 years on average, range 20-35 years; 14 female, 3 male; 16 right-handed, 1 left-handed based on self-report).
  • The study was approved by the Ethics Committee of the University of Jyväskylä, Finland.
  • Each experimental trial started with a fixation cross shown at the centre of the screen for 745 ms.
  • To examine the effect of association learning, two categories of trials were created, learnable and non-learnable.
  • Half of the visual stimuli were always presented with their corresponding auditory stimulus (learnable category), while the other half of the visual stimuli were randomly paired with three auditory stimuli (non-learnable category).

2.2.3 Data recording and analysis

  • EEG data was collected using a 128-channel NeurOne amplifier (Bittium Oy, Finland) with Ag-AgCl electrodes attached to the HydroCel electrode net (Electrical Geodesics Inc., OR, USA), with the Cz electrode as the reference.
  • Electrode impedance was checked at the beginning of the recording and aimed to be below 50 kOhms for all channels.
  • The data was analysed using BESA Research 6.1 (BESA GmbH, Grafelfing, Germany).
  • EEG was first examined for channels with poor data quality (mean: 4, range 0-10), which were rejected at this stage, and then segmented into trial-based time windows of −200 to 700 ms with respect to the visual symbol onset (200 ms pre-stimulus baseline).
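The segmentation step described above (epoching with a pre-stimulus baseline) can be sketched in NumPy. This is not the BESA Research pipeline the authors used, just a minimal illustration; parameter names and the baseline-correction choice are assumptions:

```python
import numpy as np

def epoch(data, events, sfreq, tmin=-0.2, tmax=0.7):
    """Cut continuous EEG (n_channels, n_samples) into trial epochs
    around event sample indices, baseline-corrected over tmin..0 s.
    Illustrative sketch; not the study's actual analysis code."""
    start = int(round(tmin * sfreq))   # negative: samples before onset
    stop = int(round(tmax * sfreq))
    epochs = []
    for ev in events:
        if ev + start < 0 or ev + stop > data.shape[1]:
            continue  # skip events too close to the recording edges
        seg = data[:, ev + start: ev + stop].astype(float)
        # mean over the pre-stimulus interval, subtracted per channel
        baseline = seg[:, :-start].mean(axis=1, keepdims=True)
        epochs.append(seg - baseline)
    return np.stack(epochs)  # (n_trials, n_channels, n_times)
```

At a 1000 Hz sampling rate this yields 900-sample epochs (−200 to 700 ms), matching the trial window stated above.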

2.2.4 Statistical analysis

  • EEG data was then examined using cluster-based permutation tests (Maris & Oostenveld, 2007) in BESA Statistics 2.0.
  • After initial t-test comparison between conditions of interest, the results were clustered based on time points and channels.
  • Significance values for the clusters were based on permuted condition labels.
  • Cluster alpha of 0.05 was used with 3.5 cm channel neighbor distance and 3000 permutations.
  • The learnable and non-learnable conditions were compared in each block.
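The cluster-based permutation logic (Maris & Oostenveld, 2007) used above can be illustrated with a minimal 1D (time-points only) version. The study ran this in BESA Statistics 2.0 over channels and time; the sketch below handles the paired two-condition case over time only, using sign-flipping of per-subject differences as the permutation scheme. All names and the t-threshold are assumptions:

```python
import numpy as np

def cluster_perm_test(cond_a, cond_b, n_perm=1000, t_thresh=2.0, seed=0):
    """Minimal 1D cluster-based permutation test for paired data.
    cond_a, cond_b: (n_subjects, n_times) arrays.
    Returns (observed max cluster mass, permutation p-value).
    Illustrative sketch, not the BESA Statistics implementation."""
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b            # paired differences per subject
    n = diff.shape[0]

    def paired_t(d):
        return d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n))

    def max_cluster_mass(tvals):
        # sum |t| within each suprathreshold run; keep the largest run
        mask = np.abs(tvals) > t_thresh
        best, cur = 0.0, 0.0
        for m, t in zip(mask, np.abs(tvals)):
            cur = cur + t if m else 0.0
            best = max(best, cur)
        return best

    observed = max_cluster_mass(paired_t(diff))
    null = np.empty(n_perm)
    for i in range(n_perm):
        # randomly flip each subject's difference sign (condition-label swap)
        signs = rng.choice([-1.0, 1.0], size=(n, 1))
        null[i] = max_cluster_mass(paired_t(diff * signs))
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p
```

Clustering adjacent suprathreshold time points before permutation is what controls the multiple-comparisons problem across the many time samples (and, in the full method, channels within the neighbor distance).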

3.1.1 Behavioral results

  • All participants were able to learn the correct audio-visual associations during the first half (1st and 2nd quarters) of the MEG recording with only a few errors made after that.
  • Accuracy was scored based on the response to the question “do the symbol and syllable form a pair” (for non-learnable items the correct answer was ‘no’).
  • The mean accuracy rate was 90 % and 93 % and mean reaction times were 436 ms and 513 ms for the learnable and non-learnable categories, respectively, across the whole training session.
  • There was a clear effect of training in the accuracy and reaction time measures with improving performance towards the end of the session as shown in Figure 5.

3.1.2 MEG results

  • The MEG data showed clear visual and auditory evoked fields.
  • The response was similar for the two categories during the first quarter of the session, started to differ between categories during the second quarter, and remained different between categories until the end of the session.
  • The distributed source analysis paralleled the sensor level trends.
  • Activation loci were found in the left and right inferior temporo-occipital areas as well as left frontal areas and right central-parietal areas in the time window of the slowly growing difference between the categories.
  • This was due to a decrease of source strength from the first to the second quarter.

3.2 Experiment 2: Passive learning

  • Similarly to the active learning experiment, the EEG data for the passive learning was examined in four blocks of equal length (10 min).
  • There were no statistically significant condition differences (p = 0.274) in the ERPs measured during the first 5 minutes, whereas the between-category differences during the second 5-minute sub-block were statistically significant (cluster 1, p < 0.045 at 165-276 ms, fronto-central distribution).
  • The authors expected to see learning effects in the early sensory responses as well as in a later time window linked to perceptual learning and audio-visual integration, in brain areas that previous studies have linked to short-term cross-modal learning (e.g., Raij et al., 2000; Hashimoto & Sakai, 2004).
  • Frontal cortices also showed enhanced activity bilaterally after 10 minutes of training.
  • The time window after 300 ms matches well with the current active learning task and with earlier EEG studies examining audio-visual learning using a congruency manipulation (Shams et al., 2005; Karipidis et al., 2017; 2018).

5. References

  • Raij, T., Uutela, K., & Hari, R. (2000). Audiovisual integration of letters in the human brain. Neuron, 28(2), 617-625.
  • [Figure caption] The grey box represents the approximate time window for the difference between the stimulus categories given by the cluster-based permutation statistics.


Figures (6)
Citations
Journal ArticleDOI
TL;DR: The Jyväskylä Longitudinal Study of Dyslexia (JLD) found that the likelihood of at-risk children performing poorly in reading and spelling tasks was fourfold compared to that of the controls.
Abstract: This paper reviews the observations of the Jyvaskyla Longitudinal Study of Dyslexia (JLD). The JLD is a prospective family risk study in which the development of children with familial risk for dyslexia (N = 108) due to parental dyslexia and controls without dyslexia risk (N = 92) were followed from birth to adulthood. The JLD revealed that the likelihood of at-risk children performing poorly in reading and spelling tasks was fourfold compared to the controls. Auditory insensitivity of newborns observed during the first week of life using brain event-related potentials (ERPs) was shown to be the first precursor of dyslexia. ERPs measured at six months of age related to phoneme length identification differentiated the family risk group from the control group and predicted reading speed until the age of 14 years. Early oral language skills, phonological processing skills, rapid automatized naming, and letter knowledge differentiated the groups from ages 2.5–3.5 years onwards and predicted dyslexia and reading development, including reading comprehension, until adolescence. The home environment, a child’s interest in reading, and task avoidance were not different in the risk group but were found to be additional predictors of reading development. Based on the JLD findings, preventive and intervention methods utilizing the association learning approach have been developed.

22 citations

Journal ArticleDOI
TL;DR: Dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned and changes were observed in the brain responses to the novel letters during the learning process are found.

18 citations


Cites background from "Dynamics of brain activation during..."

  • ...A = Auditory cortex, V = Visual cortex, STC = superior temporal cortex ... cortical representation and automatic processing of the audiovisual objects....

    [...]

  • ...…Hashimoto and Sakai, 2004; Brem et al. 2010, 2018) and dorsal pathway (Taylor et al. 2014, 2017; Hashimoto and Sakai, 2004; Mei et al. 2014, 2015) as well as the STC (Hämäläinen et al., 2019; Karipidis et al. 2017, 2018; Madec et al., 2016) for forming optimal letter-speech sound associations....

    [...]

  • ...Furthermore, these processes might be affected by modulation of attention to important features for learning in the frontal cortices (Hämäläinen et al., 2019)....

    [...]

  • ...In addition, auditory and visual stimuli are combined into audiovisual objects in multisensory brain regions (Stein and Stanford, 2008) (e.g., STC) and such cross-modal audiovisual association is initially stored in the short-term memory system....

    [...]

  • ...As learning progresses, changes have been reported to occur in vOT (Quinn et al., 2017; Madec et al., 2016; Hashimoto and Sakai, 2004; Brem et al. 2010, 2018) and dorsal pathway (Taylor et al. 2014, 2017; Hashimoto and Sakai, 2004; Mei et al. 2014, 2015) as well as the STC (Hämäläinen et al., 2019; Karipidis et al. 2017, 2018; Madec et al., 2016) for forming optimal letter-speech sound associations....

    [...]

Posted ContentDOI
23 Mar 2020-bioRxiv
TL;DR: Dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned and changes were observed in the brain responses to the novel letters during the learning process are found.
Abstract: Learning to associate written letters with speech sounds is crucial for the initial phase of acquiring reading skills. However, little is known about the cortical reorganization for supporting letter-speech sound learning, particularly the brain dynamics during the learning of grapheme-phoneme associations. In the present study, we trained 30 Finnish participants (mean age: 24.33 years, SD: 3.50 years) to associate novel foreign letters with familiar Finnish speech sounds on two consecutive days (first day ~ 50 minutes; second day ~ 25 minutes), while neural activity was measured using magnetoencephalography (MEG). Two sets of audiovisual stimuli were used for the training in which the grapheme-phoneme association in one set (Learnable) could be learned based on the different learning cues provided, but not in the other set (Control). The learning progress was tracked at a trial-by-trial basis and used to segment different learning stages for the MEG source analysis. The learning-related changes were examined by comparing the brain responses to Learnable and Control uni/multi-sensory stimuli, as well as the brain responses to learning cues at different learning stages over the two days. We found dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned. Further, changes were observed in the brain responses to the novel letters during the learning process. We also found that some of these learning effects were observed only after memory consolidation the following day. Overall, the learning process modulated the activity in a large network of brain regions, including the superior temporal cortex and the dorsal (parietal) pathway. Most interestingly, middle- and inferior- temporal regions were engaged during multi-sensory memory encoding after the cross-modal relationship was extracted from the learning cues. 
Our findings highlight the brain dynamics and plasticity related to the learning of letter-speech sound associations and provide a more refined model of grapheme-phoneme learning in reading acquisition.

4 citations


Cites background from "Dynamics of brain activation during..."

  • ...Depth-weighted (p = 0.8) minimum-norm estimates (wMNE) (Hämäläinen and Ilmoniemi 1994; Lin et al. 2006) were calculated for 10242 free-orientation sources per hemisphere....

    [...]

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed an urban landscape visual communication optimization method based on hue saturation value (HSV) technology to provide more orderly and convenient urban planning ideas for the fast-paced life in complex urban environment.
Abstract: The purpose of this study is to provide more orderly and convenient urban planning ideas for the fast-paced life in the complex urban environment. The current sign design is analyzed according to the needs of urban residents for barrier-free sign design, and the sign design based on urban space color is established. An urban landscape visual communication optimization method is proposed based on hue saturation value (HSV) technology. The multiscale retinex (MSR) algorithm is used as a reference for simulation experiments. The experimental results show that the designed optimization method is significantly better than the traditional method in the expression effect of visual communication. First, the attention time of the sign design can be reduced by more than 3 seconds, which can effectively improve the lives of urban residents and tourists and improve their browsing efficiency. Next, 94% of the citizens believe that optimized urban signs are more prominent than traditional ones. Finally, the sign design optimization method proposed provides an image with a higher definition than the traditional sign design method. The proposed sign design and optimization scheme can effectively coordinate the relationship among urban landscape design, guided objects, and cities, help busy urban life, and provide new ideas for the development direction of visual communication design.

1 citation

References
More filters
Journal ArticleDOI
TL;DR: It is shown that it is possible to retrieve the text-induced perceptual interpretation from fMRI activity patterns in the posterior superior temporal cortex, and the findings indicate that reading-related audiovisual mappings can adjust the auditory cortical representation of speech in typically reading adults.
Abstract: Learning to read requires the formation of efficient neural associations between written and spoken language. Whether these associations influence the auditory cortical representation of speech remains unknown. Here we address this question by combining multivariate functional MRI analysis and a newly-developed ‘text-based recalibration’ paradigm. In this paradigm, the pairing of visual text and ambiguous speech sounds shifts (i.e. recalibrates) the perceptual interpretation of the ambiguous sounds in subsequent auditory-only trials. We show that it is possible to retrieve the text-induced perceptual interpretation from fMRI activity patterns in the posterior superior temporal cortex. Furthermore, this auditory cortical region showed significant functional connectivity with the inferior parietal lobe (IPL) during the pairing of text with ambiguous speech. Our findings indicate that reading-related audiovisual mappings can adjust the auditory cortical representation of speech in typically reading adults. Additionally, they suggest the involvement of the IPL in audiovisual and/or higher-order perceptual processes leading to this adjustment. When applied in typical and dyslexic readers of different ages, our text-based recalibration paradigm may reveal relevant aspects of perceptual learning and plasticity during successful and failing reading development.

32 citations


"Dynamics of brain activation during..." refers background or result in this paper

  • ..., 2017; 2018) and perceptual tuning of ambiguous speech stimuli by reading exposure (Bonte et al., 2017)....

    [...]

  • ...The current results would be interesting to link with the earlier studies examining the responses to incongruent audio-visual stimuli (e.g., Karapidis et al., 2017; 2018) and perceptual tuning of ambiguous speech stimuli by reading exposure (Bonte et al., 2017)....

    [...]

  • ...Further, we would predict that the size of the learning effect at the initial stage would link with the strength of the perceptual tuning for ambiguous phonemes after exposure to written material (Bonte et al., 2017)....

    [...]

  • ...Long-term exposure to grapheme-phoneme associations also affects perception of ambiguous speech through mechanisms in the posterior superior temporal cortex and inferior parietal lobe (Bonte et al., 2017)....

    [...]

  • ...24 with the strength of the perceptual tuning for ambiguous phonemes after exposure to written material (Bonte et al., 2017)....

    [...]

Journal ArticleDOI
TL;DR: Results indicate that parietotemporal stimulation can enhance learning of new grapheme-phoneme relationships in readers with lower reading skill, yet, while parietOTemporal function is critical to new learning, its role in continued reading improvement likely changes as readers progress in skill.
Abstract: Neuroimaging work from developmental and reading intervention research has suggested a cause of reading failure may be lack of engagement of parietotemporal cortex during initial acquisition of grapheme-phoneme (letter-sound) mappings. Parietotemporal activation increases following grapheme-phoneme learning and successful reading intervention. Further, stimulation of parietotemporal cortex improves reading skill in lower ability adults. However, it is unclear whether these improvements following stimulation are due to enhanced grapheme-phoneme mapping abilities. To test this hypothesis, we used transcranial direct current stimulation (tDCS) to manipulate parietotemporal function in adult readers as they learned a novel artificial orthography with new grapheme-phoneme mappings. Participants received real or sham stimulation to the left inferior parietal lobe for twenty minutes before training. They received explicit training over the course of three days on ten novel words each day. Learning of the artificial orthography was assessed at a pre-training baseline session, the end of each of the three training sessions, an immediate post-training session, and a delayed post-training session about four weeks after training. Stimulation interacted with baseline reading skill to affect learning of trained words and transfer to untrained words. Lower skill readers showed better acquisition, whereas higher skill readers showed worse acquisition, when training was paired with real stimulation, as compared to readers who received sham stimulation. However, readers of all skill levels showed better maintenance of trained material following parietotemporal stimulation, indicating a differential effect of stimulation on initial learning and consolidation. Overall, these results indicate that parietotemporal stimulation can enhance learning of new grapheme-phoneme relationships in readers with lower reading skill. 
Yet, while parietotemporal function is critical to new learning, its role in continued reading improvement likely changes as readers progress in skill.

12 citations

Frequently Asked Questions (1)
Q1. What are the contributions mentioned in the paper "Dynamics of brain activation during learning of syllable-symbol paired associations" ?

In this paper, the authors examined the dynamics of brain activation during the initial, short-term learning of syllable-symbol paired associations, using MEG during an active learning task (Experiment 1) and EEG during passive learning (Experiment 2), with learnable and non-learnable stimulus categories.