Journal ArticleDOI

Dynamics of brain activation during learning of syllable-symbol paired associations

TL;DR: The results show that the short-term learning effects emerge rapidly (manifesting in later stages of audio-visual integration processes) and that these effects are modulated by selective attention processes.
About: This article was published in Neuropsychologia on 2019-06-01 and is currently open access. It has received 4 citations to date. The article focuses on the topics: Brain activity and meditation & Passive learning.

Summary (3 min read)

1. Introduction

  • Relatively little is known about the immediate learning processes in the human brain that occur at the beginning stages of training of cross-modal associations.
  • While studies examining the long-term learning effects have been important in establishing the brain mechanisms involved in cross-modal processing, it is not known which of these brain mechanisms are used during the initial steps of the learning process, and if there are distinct stages of learning during which some of the mechanisms are more important than others.
  • Studies on long-term effects of audio-visual learning provide a starting point for expected short-term learning effects.
  • The superior temporal sulcus in the left hemisphere has been implicated particularly in the processing of well-established letter-speech sound combinations, thus mostly reflecting long-term audio-visual memory representations (Raij et al., 2000; van Atteveldt et al., 2004; Hashimoto & Sakai, 2004; Blomert, 2011).
  • Both active and passive tasks were used to examine possible general neural mechanisms related to learning of audio-visual associations.

2.1 Experiment 1

  • Thirteen adult participants were included in the analyses (26.3 years on average, range 21-38 years; 7 female, 6 male; 12 right-handed, 1 ambidextrous based on self-report).
  • From the total of 15 participants, one participant was excluded due to magnetic artifact from a tooth brace and one due to excessive eye blinks during the visual stimulus presentation.
  • None of the participants had lived in Japan or studied Japanese (relevant for the choice of visual stimuli, see below).
  • The study was approved by the Ethics Committee of the Aalto University.

2.1.2 Stimuli and experimental design

  • Auditory stimuli were recorded by a female native Finnish speaker in a sound-attenuated booth.
  • The delayed audio presentation was introduced in order to allow a clean access to cortical processing of the visual symbol without contamination by auditory activation, motor response, or response error monitoring.
  • Accuracy and reaction time (with respect to question mark onset) were obtained for each trial.
  • For this purpose, two categories of trials were created: learnable and non-learnable.
  • For the other half of the symbols, the participants received the word ‘incorrect’ as feedback and thus their association with syllables could not be learned (non-learnable category).

2.1.3 Data recording and analysis

  • MEG data was collected using a 306-channel (102 magnetometers, 204 planar gradiometers) whole-head device (Elekta Oy, Finland) at the MEG Core of Aalto NeuroImaging, Aalto University, Finland.
  • The head position was monitored continuously using 5 small coils attached to the scalp (3 on the forehead and 2 behind the ears).
  • The noise covariance matrix was calculated from the baseline interval of the averaged responses.
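The idea of estimating a sensor noise covariance from pre-stimulus baseline data can be sketched as below. This is a minimal illustration with simulated data, not the authors' actual Elekta/MEG pipeline; the `baseline_noise_covariance` helper and the array shapes are assumptions for the example.

```python
import numpy as np

def baseline_noise_covariance(epochs, baseline_samples):
    """Estimate a sensor noise covariance matrix from the pre-stimulus
    baseline of epoched data.

    epochs: array of shape (n_trials, n_channels, n_times)
    baseline_samples: number of samples at the start of each epoch
                      belonging to the pre-stimulus baseline
    """
    # Collect baseline samples from every trial into one
    # (n_channels, n_trials * baseline_samples) matrix
    baseline = epochs[:, :, :baseline_samples]
    n_trials, n_channels, _ = baseline.shape
    x = baseline.transpose(1, 0, 2).reshape(n_channels, -1)
    # Remove the per-channel mean before forming the covariance
    x = x - x.mean(axis=1, keepdims=True)
    return (x @ x.T) / (x.shape[1] - 1)

# Tiny example with simulated data: 50 trials, 4 channels, 100 samples
rng = np.random.default_rng(0)
epochs = rng.standard_normal((50, 4, 100))
cov = baseline_noise_covariance(epochs, baseline_samples=20)
```

In practice this estimate is then used to whiten the data before source modeling, which is why it is computed from an interval assumed to contain no stimulus-evoked activity.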

2.1.4 Statistical analysis

  • Repeated-measures ANOVAs (category [learnable, non-learnable] × quarter [1st, 2nd, 3rd, 4th] × hemisphere [left, right]) were conducted for each time window and region of interest.
  • Effects involving interaction between category and quarter were of interest.
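The logic of a repeated-measures F-test can be illustrated with a minimal one-way version (a single within-subject factor); the authors' full category × quarter × hemisphere model adds further main effects and interaction terms. This is a hedged sketch of the statistic, not the analysis code used in the study.

```python
import numpy as np

def rm_anova_1way(data):
    """One-way repeated-measures ANOVA.

    data: array of shape (n_subjects, n_conditions), e.g. one mean
          amplitude per subject and condition.
    Returns (F, df_effect, df_error).
    """
    n, k = data.shape
    grand = data.mean()
    # Partition total variability into condition, subject, and error parts
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
    ss_total = ((data - grand) ** 2).sum()
    ss_err = ss_total - ss_cond - ss_subj   # residual after removing subjects
    df_cond = k - 1
    df_err = (n - 1) * (k - 1)
    return (ss_cond / df_cond) / (ss_err / df_err), df_cond, df_err

# Example: 13 subjects x 4 quarters of simulated mean amplitudes
rng = np.random.default_rng(0)
sim = rng.standard_normal((13, 4)) * 0.1 + np.arange(4)
F, df1, df2 = rm_anova_1way(sim)  # df1 = 3, df2 = 36
```

Removing the subject sum of squares from the error term is what distinguishes the repeated-measures design from a between-subjects ANOVA: stable individual differences no longer inflate the error variance.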

2.2.1 Participants

  • Seventeen adult participants were included in the analyses (26.2 years on average, range 20-35 years; 14 female, 3 male; 16 right-handed, 1 left-handed based on self-report).
  • The study was approved by the Ethics Committee of the University of Jyväskylä, Finland.
  • Each experimental trial started with a fixation cross shown at the centre of the screen for 745 ms.
  • To examine the effect of association learning, two categories of trials were created, learnable and non-learnable.
  • Half of the visual stimuli were always presented with their corresponding auditory stimulus (learnable category), while the other half of the visual stimuli were randomly paired with three auditory stimuli (non-learnable category).

2.2.3 Data recording and analysis

  • EEG data was collected using a 128-channel NeurOne amplifier (Bittium Oy, Finland) with Ag-AgCl electrodes attached to the HydroCel electrode net (Electrical Geodesics Inc., OR, USA) with the Cz electrode as the reference.
  • Electrode impedance was checked at the beginning of the recording and aimed to be below 50 kOhms for all channels.
  • The data was analysed using BESA Research 6.1 (BESA GmbH, Gräfelfing, Germany).
  • EEG was first examined for channels with poor data quality (mean: 4, range 0-10) that were rejected at this stage, and then segmented into trial-based time windows of −200 to 700 ms with respect to the visual symbol onset (200 ms pre-stimulus baseline).

2.2.4 Statistical analysis

  • EEG data was then examined using cluster-based permutation tests (Maris & Oostenveld, 2007) in BESA Statistics 2.0.
  • After initial t-test comparison between conditions of interest, the results were clustered based on time points and channels.
  • Significance values for the clusters were based on permuted condition labels.
  • Cluster alpha of 0.05 was used with 3.5 cm channel neighbor distance and 3000 permutations.
  • The learnable and non-learnable conditions were compared in each block.
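A minimal one-dimensional (time-only) version of the cluster-based permutation logic of Maris & Oostenveld (2007) is sketched below. It is simplified from the channel × time clustering that BESA Statistics performs: here clusters form only over contiguous time points of a paired comparison, and the helper name, threshold, and data shapes are illustrative assumptions.

```python
import numpy as np

def cluster_permutation_test(cond_a, cond_b, n_perm=1000, t_thresh=2.0, seed=0):
    """Minimal 1-D cluster-based permutation test (paired design).

    cond_a, cond_b: arrays of shape (n_subjects, n_times).
    Returns the largest observed cluster mass and its permutation p-value.
    """
    rng = np.random.default_rng(seed)
    diff = cond_a - cond_b

    def max_cluster_mass(d):
        # Paired t-statistic at every time point
        t = d.mean(0) / (d.std(0, ddof=1) / np.sqrt(d.shape[0]))
        # Sum |t| within contiguous supra-threshold runs; keep the largest sum
        best, cur = 0.0, 0.0
        for v in np.abs(t):
            cur = cur + v if v > t_thresh else 0.0
            best = max(best, cur)
        return best

    observed = max_cluster_mass(diff)
    # Null distribution: randomly flip the sign of each subject's difference
    null = np.array([
        max_cluster_mass(diff * rng.choice([-1, 1], size=(diff.shape[0], 1)))
        for _ in range(n_perm)
    ])
    p = (1 + (null >= observed).sum()) / (n_perm + 1)
    return observed, p

# Example: 17 subjects, an effect added to samples 30-50 of condition A
rng = np.random.default_rng(1)
a = 0.5 * rng.standard_normal((17, 100))
b = 0.5 * rng.standard_normal((17, 100))
a[:, 30:50] += 1.5
mass, p = cluster_permutation_test(a, b, n_perm=200)
```

Because significance is assessed at the cluster level rather than per time point, this procedure controls the family-wise error rate across the many time (and, in the full method, channel) comparisons.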

3.1.1 Behavioral results

  • All participants were able to learn the correct audio-visual associations during the first half (1st and 2nd quarters) of the MEG recording with only a few errors made after that.
  • Accuracy was scored based on the response to the question “do the symbol and syllable form a pair” (for non-learnable items the correct answer was ‘no’).
  • The mean accuracy rate was 90 % and 93 % and mean reaction times were 436 ms and 513 ms for the learnable and non-learnable categories, respectively, across the whole training session.
  • There was a clear effect of training in the accuracy and reaction time measures with improving performance towards the end of the session as shown in Figure 5.

3.1.2 MEG results

  • The MEG data showed clear visual and auditory evoked fields.
  • The response was similar for the two categories during the first quarter of the session, started to differ between categories during the second quarter, and remained different between categories until the end of the session.
  • The distributed source analysis paralleled the sensor level trends.
  • Activation loci were found in the left and right inferior temporo-occipital areas as well as left frontal areas and right central-parietal areas in the time window of the slowly growing difference between the categories.
  • This was due to a decrease of source strength from the first to the second quarter.

3.2 Experiment 2: Passive learning

  • Similarly to the active learning experiment, the EEG data for the passive learning was examined in four blocks of equal length (10 min).
  • There were no statistically significant condition differences (p = 0.274) in the ERPs measured during the first 5 minutes, whereas the between-category differences during the second 5-minute sub-block were statistically significant (cluster 1, p < 0.045 at 165-276 ms, fronto-central distribution).
  • The authors expected to see learning effects at the early sensory responses as well as in later time window linked to perceptual learning and audio-visual integration in brain areas that previous studies have linked to short-term cross-modal learning (e.g., Raij et al., 2000; Hashimoto & Sakai, 2004).
  • Frontal cortices also showed enhanced activity bilaterally after 10 minutes of training.
  • The time window after 300 ms matches well with the current active learning task and with earlier EEG studies examining audio-visual learning using a congruency manipulation (Shams et al., 2005; Karipidis et al., 2017; 2018).

5. References

  • Audiovisual integration of letters in the human brain.
  • The grey box represents the approximate time window for the difference between the stimulus categories given by the cluster-based permutation statistics.


Figures (6)
Citations
Journal ArticleDOI
TL;DR: The Jyväskylä Longitudinal Study of Dyslexia (JLD) as discussed by the authors found that the likelihood of at-risk children performing poorly in reading and spelling tasks was fourfold compared to the controls.
Abstract: This paper reviews the observations of the Jyvaskyla Longitudinal Study of Dyslexia (JLD). The JLD is a prospective family risk study in which the development of children with familial risk for dyslexia (N = 108) due to parental dyslexia and controls without dyslexia risk (N = 92) were followed from birth to adulthood. The JLD revealed that the likelihood of at-risk children performing poorly in reading and spelling tasks was fourfold compared to the controls. Auditory insensitivity of newborns observed during the first week of life using brain event-related potentials (ERPs) was shown to be the first precursor of dyslexia. ERPs measured at six months of age related to phoneme length identification differentiated the family risk group from the control group and predicted reading speed until the age of 14 years. Early oral language skills, phonological processing skills, rapid automatized naming, and letter knowledge differentiated the groups from ages 2.5–3.5 years onwards and predicted dyslexia and reading development, including reading comprehension, until adolescence. The home environment, a child’s interest in reading, and task avoidance were not different in the risk group but were found to be additional predictors of reading development. Based on the JLD findings, preventive and intervention methods utilizing the association learning approach have been developed.

22 citations

Journal ArticleDOI
TL;DR: Dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned and changes were observed in the brain responses to the novel letters during the learning process are found.

18 citations


Cites background from "Dynamics of brain activation during..."

  • ...A = Auditory cortex, V = Visual cortex, STC = Superior temporal cortex ... cortical representation and automatic processing of the audiovisual objects....


  • ...…Hashimoto and Sakai, 2004; Brem et al. 2010, 2018) and dorsal pathway (Taylor et al. 2014, 2017; Hashimoto and Sakai, 2004; Mei et al. 2014, 2015) as well as the STC (Hämäläinen et al., 2019; Karipidis et al. 2017, 2018; Madec et al., 2016) for forming optimal letter-speech sound associations....


  • ...Furthermore, these processes might be affected by modulation of attention to important features for learning in the frontal cortices (Hämäläinen et al., 2019)....


  • ...In addition, auditory and visual stimuli are combined into audiovisual objects in multisensory brain regions (Stein and Stanford, 2008) (e.g., STC) and such cross-modal audiovisual association is initially stored in the short-term memory system....


  • ...As learning progresses, changes have been reported to occur in vOT (Quinn et al., 2017; Madec et al., 2016; Hashimoto and Sakai, 2004; Brem et al. 2010, 2018) and dorsal pathway (Taylor et al. 2014, 2017; Hashimoto and Sakai, 2004; Mei et al. 2014, 2015) as well as the STC (Hämäläinen et al., 2019; Karipidis et al. 2017, 2018; Madec et al., 2016) for forming optimal letter-speech sound associations....


Posted ContentDOI
23 Mar 2020-bioRxiv
TL;DR: Dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned and changes were observed in the brain responses to the novel letters during the learning process are found.
Abstract: Learning to associate written letters with speech sounds is crucial for the initial phase of acquiring reading skills. However, little is known about the cortical reorganization for supporting letter-speech sound learning, particularly the brain dynamics during the learning of grapheme-phoneme associations. In the present study, we trained 30 Finnish participants (mean age: 24.33 years, SD: 3.50 years) to associate novel foreign letters with familiar Finnish speech sounds on two consecutive days (first day ~ 50 minutes; second day ~ 25 minutes), while neural activity was measured using magnetoencephalography (MEG). Two sets of audiovisual stimuli were used for the training in which the grapheme-phoneme association in one set (Learnable) could be learned based on the different learning cues provided, but not in the other set (Control). The learning progress was tracked at a trial-by-trial basis and used to segment different learning stages for the MEG source analysis. The learning-related changes were examined by comparing the brain responses to Learnable and Control uni/multi-sensory stimuli, as well as the brain responses to learning cues at different learning stages over the two days. We found dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned. Further, changes were observed in the brain responses to the novel letters during the learning process. We also found that some of these learning effects were observed only after memory consolidation the following day. Overall, the learning process modulated the activity in a large network of brain regions, including the superior temporal cortex and the dorsal (parietal) pathway. Most interestingly, middle- and inferior- temporal regions were engaged during multi-sensory memory encoding after the cross-modal relationship was extracted from the learning cues. 
Our findings highlight the brain dynamics and plasticity related to the learning of letter-speech sound associations and provide a more refined model of grapheme-phoneme learning in reading acquisition.

4 citations


Cites background from "Dynamics of brain activation during..."

  • ...Depth-weighted (p = 0.8) minimum-norm estimates (wMNE) (Hämäläinen and Ilmoniemi 1994; Lin et al. 2006) were calculated for 10242 free-orientation sources per hemisphere....


Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed an urban landscape visual communication optimization method based on hue saturation value (HSV) technology to provide more orderly and convenient urban planning ideas for the fast-paced life in complex urban environment.
Abstract: The purpose of this study is to provide more orderly and convenient urban planning ideas for the fast-paced life in the complex urban environment. The current sign design is analyzed according to the needs of urban residents for barrier-free sign design, and the sign design based on urban space color is established. An urban landscape visual communication optimization method is proposed based on hue saturation value (HSV) technology. The multiscale retinex (MSR) algorithm is used as a reference for simulation experiments. The experimental results show that the designed optimization method is significantly better than the traditional method in the expression effect of visual communication. First, the attention time of the sign design can be reduced by more than 3 seconds, which can effectively improve the lives of urban residents and tourists and improve their browsing efficiency. Next, 94% of the citizens believe that optimized urban signs are more prominent than traditional ones. Finally, the sign design optimization method proposed provides an image with a higher definition than the traditional sign design method. The proposed sign design and optimization scheme can effectively coordinate the relationship among urban landscape design, guided objects, and cities, help busy urban life, and provide new ideas for the development direction of visual communication design.

1 citation

References
Journal ArticleDOI
TL;DR: The analysis of the information coding properties of individual neurons is proposed as one way to quantitatively determine whether the representation of the acoustic environment in (primary) auditory cortex indeed benefits from multisensory input.

100 citations

Journal ArticleDOI
TL;DR: Attentional focus during learning might influence brain mechanisms recruited during reading, as indexed by the N170 response to visual words, which suggests a key role for attentional focus in early reading acquisition.
Abstract: Reading instruction can direct attention to different unit sizes in print-to-speech mapping, ranging from grapheme-phoneme to whole-word relationships. Thus, attentional focus during learning might influence brain mechanisms recruited during reading, as indexed by the N170 response to visual words. To test this, two groups of adults were trained to read an artificial script under instructions directing attention to grapheme-phoneme versus whole-word associations. N170 responses were subsequently contrasted within an active reading task. Grapheme-phoneme focus drove a left-lateralized N170 response relative to the right-lateralized N170 under whole-word focus. These findings suggest a key role for attentional focus in early reading acquisition.

98 citations

Journal ArticleDOI
TL;DR: There was constant activity in the frontoparietal circuit during the delay period in both audiovisual paired-association learning of delayed matching-to-sample tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.
Abstract: To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.

98 citations


"Dynamics of brain activation during..." refers background in this paper

  • ...temporal area, together with primary auditory and visual areas, showed increased activity during audio-visual learning (Tanabe et al., 2005)....


  • ...We expected to see changes in the neural activity of the sensory areas reflected in the low-level cortical responses at 100 - 200 ms after stimulus onset (Tanabe et al., 2005; Yoncheva et al., 2010)....


Journal ArticleDOI
TL;DR: The data challenge the view that increased P2 amplitude reflects enhanced perceptual discrimination by auditory cortex and suggest that the effects of exposure to a speech stimulus on ERPs may have a slow time-course and are most evident after a delay.

92 citations

Journal ArticleDOI
16 Dec 2017-Sensors
TL;DR: After SSS, magnetometer and gradiometer data are estimated from a single set of SSS components (usually ≤ 80).
Abstract: Background: Modern Elekta Neuromag MEG devices include 102 sensor triplets containing one magnetometer and two planar gradiometers. The first processing step is often a signal space separation (SSS), which provides a powerful noise reduction. A question commonly raised by researchers and reviewers relates to which data should be employed in analyses: (1) magnetometers only, (2) gradiometers only, (3) magnetometers and gradiometers together. The MEG community is currently divided with regard to the proper answer. Methods: First, we provide theoretical evidence that both gradiometers and magnetometers result from the backprojection of the same SSS components. Then, we compare resting state and task-related sensor and source estimations from magnetometers and gradiometers in real MEG recordings before and after SSS. Results: SSS introduced a strong increase in the similarity between source time series derived from magnetometers and gradiometers (r2 = 0.3–0.8 before SSS and r2 > 0.80 after SSS). After SSS, resting state power spectrum and functional connectivity, as well as visual evoked responses, derived from both magnetometers and gradiometers were highly similar (Intraclass Correlation Coefficient > 0.8, r2 > 0.8). Conclusions: After SSS, magnetometer and gradiometer data are estimated from a single set of SSS components (usually ≤ 80). Equivalent results can be obtained with both sensor types in typical MEG experiments.

72 citations

Frequently Asked Questions (1)
Q1. What are the contributions mentioned in the paper "Dynamics of brain activation during learning of syllable-symbol paired associations" ?

In this paper, the authors examined the immediate, short-term learning of syllable-symbol (audio-visual) paired associations in the brain, using MEG during an active learning task and EEG during a passive learning task.