Journal ArticleDOI

Dynamics of brain activation during learning of syllable-symbol paired associations

TL;DR: The results show that the short-term learning effects emerge rapidly (manifesting in later stages of audio-visual integration processes) and that these effects are modulated by selective attention processes.
About: This article is published in Neuropsychologia. The article was published on 2019-06-01 and is currently open access. It has received 4 citations to date. The article focuses on the topics: Brain activity and meditation & Passive learning.

Summary (3 min read)

1. Introduction

  • Relatively little is known about the immediate learning processes in the human brain that occur at the beginning stages of training of cross-modal associations.
  • While studies examining the long-term learning effects have been important in establishing the brain mechanisms involved in cross-modal processing, it is not known which of these brain mechanisms are used during the initial steps of the learning process, and if there are distinct stages of learning during which some of the mechanisms are more important than others.
  • Studies on long-term effects of audio-visual learning provide a starting point for expected short-term learning effects.
  • The superior temporal sulcus in the left hemisphere has been implicated particularly in processing of well-established letter-speech sound combinations, thus mostly reflecting long-term audio-visual memory representations (Raij et al., 2000; van Atteveldt et al., 2004; Hashimoto & Sakai, 2004; Blomert, 2011).
  • Both active and passive tasks were used to examine possible general neural mechanisms related to learning of audio-visual associations.

2.1 Experiment 1

  • Thirteen adult participants were included in the analyses (26.3 years on average, range 21-38 years; 7 female, 6 male; 12 right-handed, 1 ambidextrous based on self-report).
  • From the total of 15 participants, one participant was excluded due to magnetic artifact from a tooth brace and one due to excessive eye blinks during the visual stimulus presentation.
  • None of the participants had lived in Japan or studied Japanese (relevant for the choice of visual stimuli, see below).
  • The study was approved by the Ethics Committee of the Aalto University.

2.1.2 Stimuli and experimental design

  • Auditory stimuli were recorded by a female native Finnish speaker in a sound-attenuated booth.
  • The delayed audio presentation was introduced in order to allow clean access to cortical processing of the visual symbol without contamination by auditory activation, motor responses, or response-error monitoring.
  • Accuracy and reaction time (with respect to question mark onset) were obtained for each trial.
  • For this purpose, two categories of trials were created, learnable and non-learnable.
  • For the other half of the symbols the participants received the word ‘incorrect’ as the feedback and thus their association to syllables could not be learned (non-learnable category).

2.1.3 Data recording and analysis

  • MEG data was collected using a 306-channel (102 magnetometers, 204 planar gradiometers) whole-head device (Elekta Oy, Finland) at the MEG Core of Aalto NeuroImaging, Aalto University, Finland.
  • The head position was monitored continuously using 5 small coils attached to the scalp (3 on the forehead and 2 behind the ears).
  • The noise covariance matrix was calculated from the baseline interval of the averaged responses.
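The noise-covariance step above can be sketched in plain NumPy. This is a minimal illustration only, assuming the covariance is estimated over the pre-stimulus baseline of the averaged response; the toy data shape and sampling rate are invented, and the authors' actual pipeline used dedicated MEG software.

```python
import numpy as np

def baseline_noise_cov(epochs, sfreq, tmin, baseline_end=0.0):
    """Estimate the sensor noise covariance from the pre-stimulus
    baseline of the averaged (evoked) response.

    epochs : array, shape (n_epochs, n_channels, n_times)
    sfreq  : sampling rate in Hz
    tmin   : time of the first sample relative to stimulus onset (s)
    """
    evoked = epochs.mean(axis=0)                        # (n_channels, n_times)
    n_base = int(round((baseline_end - tmin) * sfreq))  # baseline samples
    base = evoked[:, :n_base]
    base = base - base.mean(axis=1, keepdims=True)      # remove channel means
    return base @ base.T / (base.shape[1] - 1)          # (n_channels, n_channels)

# Toy data: 50 epochs, 4 channels, 0.5 s at 200 Hz starting at -0.2 s
rng = np.random.default_rng(0)
epochs = rng.normal(size=(50, 4, 100))
cov = baseline_noise_cov(epochs, sfreq=200.0, tmin=-0.2)
print(cov.shape)  # (4, 4)
```

In practice, MEG pipelines (e.g., MNE-Python's `mne.compute_covariance`) also regularize the estimate, since the number of baseline samples is usually small relative to the channel count.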

2.1.4 Statistical analysis

  • Repeated measures ANOVAs (category [learnable, non-learnable] × quarter [1st, 2nd, 3rd, 4th] × hemisphere [left, right]) for each time window and region of interest were conducted.
  • Effects involving interaction between category and quarter were of interest.
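The design above is a three-way repeated-measures ANOVA. As a sketch of the core computation only, here is a one-way repeated-measures F test over the four quarters on invented amplitude data; the full three-way case adds the category and hemisphere factors and their interaction terms, which this toy function does not cover.

```python
import numpy as np
from scipy import stats

def rm_anova_1way(data):
    """One-way repeated-measures ANOVA.

    data : array, shape (n_subjects, n_conditions)
    Returns (F, p) for the condition effect.
    """
    n, k = data.shape
    grand = data.mean()
    ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()   # condition effect
    ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()   # subject baseline
    ss_err = ((data - grand) ** 2).sum() - ss_cond - ss_subj
    df_cond, df_err = k - 1, (k - 1) * (n - 1)
    F = (ss_cond / df_cond) / (ss_err / df_err)
    return F, stats.f.sf(F, df_cond, df_err)

# Hypothetical example: 13 subjects x 4 quarters of mean response amplitude,
# with amplitude drifting upward over the session
rng = np.random.default_rng(1)
amps = rng.normal(size=(13, 4)) + np.array([0.0, 0.3, 0.6, 0.9])
F, p = rm_anova_1way(amps)
```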

2.2.1 Participants

  • Seventeen adult participants were included in the analyses (26.2 years on average, range 20-35 years; 14 female, 3 male; 16 right-handed, 1 left-handed based on self-report).
  • The study was approved by the Ethics Committee of the University of Jyväskylä, Finland.
  • Each experimental trial started with a fixation cross shown at the centre of the screen for 745 ms.
  • To examine the effect of association learning, two categories of trials were created, learnable and non-learnable.
  • Half of the visual stimuli were always presented with their corresponding auditory stimuli (learnable category) while the other half of the visual stimuli were randomly paired with three auditory stimuli (non-learnable category).

2.2.3 Data recording and analysis

  • EEG data was collected using a 128-channel NeurOne amplifier (Bittium Oy, Finland) with Ag-AgCl electrodes attached to the HydroCel electrode net (Electrical Geodesics Inc., OR, USA) with Cz electrode as the reference.
  • Electrode impedance was checked at the beginning of the recording and aimed to be below 50 kOhms for all channels.
  • The data was analysed using BESA Research 6.1 (BESA GmbH, Grafelfing, Germany).
  • EEG was first examined for channels with poor data quality (mean: 4, range 0-10) that were rejected at this stage, and then segmented into trial-based time windows of -200 to 700 ms with respect to the visual symbol onset (200 ms pre-stimulus baseline).
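The segmentation step can be illustrated with a small, self-contained sketch; the channel count and event samples below are invented, and the study itself used BESA Research for this stage.

```python
import numpy as np

def epoch(data, events, sfreq, tmin=-0.2, tmax=0.7):
    """Cut continuous EEG (n_channels, n_samples) into epochs around
    event sample indices, with pre-stimulus baseline correction."""
    pre = int(round(-tmin * sfreq))
    post = int(round(tmax * sfreq))
    epochs = []
    for ev in events:
        if ev - pre < 0 or ev + post > data.shape[1]:
            continue                                     # event too close to an edge
        seg = data[:, ev - pre:ev + post].copy()
        seg -= seg[:, :pre].mean(axis=1, keepdims=True)  # baseline-correct
        epochs.append(seg)
    return np.stack(epochs)                              # (n_epochs, n_channels, n_times)

rng = np.random.default_rng(2)
eeg = rng.normal(size=(128, 10_000))     # 128 channels, 10 s at 1000 Hz
ev = np.array([1000, 3000, 5000, 9900])  # last event too close to the end
ep = epoch(eeg, ev, sfreq=1000.0)
print(ep.shape)  # (3, 128, 900)
```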

2.2.4 Statistical analysis

  • EEG data was then examined using cluster-based permutation tests (Maris & Oostenveld, 2007) in BESA Statistics 2.0.
  • After initial t-test comparison between conditions of interest, the results were clustered based on time points and channels.
  • Significance values for the clusters were based on permuted condition labels.
  • Cluster alpha of 0.05 was used with 3.5 cm channel neighbor distance and 3000 permutations.
  • The learnable and non-learnable conditions were compared in each block.
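A cluster-based permutation test of the Maris & Oostenveld type can be sketched for the simplified single-channel case: clustering over time only, pooling positive and negative deviations, and omitting the channel-neighborhood step used in BESA Statistics. All data below are synthetic.

```python
import numpy as np
from scipy import stats

def cluster_perm_test(a, b, n_perm=1000, alpha=0.05, seed=0):
    """Paired cluster-based permutation test over time points,
    simplified to one channel (Maris & Oostenveld, 2007).

    a, b : arrays (n_subjects, n_times) for the two conditions.
    Returns the largest observed cluster mass and its permutation p-value.
    """
    rng = np.random.default_rng(seed)
    diff = a - b
    n = diff.shape[0]
    t_crit = stats.t.ppf(1 - alpha / 2, df=n - 1)

    def max_cluster_mass(d):
        t = d.mean(axis=0) / (d.std(axis=0, ddof=1) / np.sqrt(n))
        best = mass = 0.0
        for tv in np.abs(t):
            # accumulate |t| within a supra-threshold run, reset otherwise
            mass = mass + tv if tv > t_crit else 0.0
            best = max(best, mass)
        return best

    obs = max_cluster_mass(diff)
    # Null distribution: randomly flip each subject's condition labels
    null = np.array([max_cluster_mass(diff * rng.choice([-1.0, 1.0], size=(n, 1)))
                     for _ in range(n_perm)])
    p = (1 + np.sum(null >= obs)) / (n_perm + 1)
    return obs, p

# Synthetic data: 17 subjects, 100 time points, effect injected at 40-60
rng = np.random.default_rng(4)
b_cond = rng.normal(size=(17, 100))
a_cond = b_cond + rng.normal(size=(17, 100)) * 0.5
a_cond[:, 40:60] += 1.0
mass, p = cluster_perm_test(a_cond, b_cond)
```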

3.1.1 Behavioral results

  • All participants were able to learn the correct audio-visual associations during the first half (1st and 2nd quarters) of the MEG recording with only a few errors made after that.
  • Accuracy was scored based on the response to the question “do the symbol and syllable form a pair” (for non-learnable items the correct answer was ‘no’).
  • The mean accuracy rate was 90 % and 93 % and mean reaction times were 436 ms and 513 ms for the learnable and non-learnable categories, respectively, across the whole training session.
  • There was a clear effect of training in the accuracy and reaction time measures with improving performance towards the end of the session as shown in Figure 5.

3.1.2 MEG results

  • The MEG data showed clear visual and auditory evoked fields.
  • The response was similar for the two categories during the first quarter of the session, started to differ between categories during the second quarter, and remained different between categories until the end of the session.
  • The distributed source analysis paralleled the sensor level trends.
  • Activation loci were found in the left and right inferior temporo-occipital areas as well as left frontal areas and right central-parietal areas in the time window of the slowly growing difference between the categories.
  • This was due to a decrease of source strength from the first to the second quarter.

3.2 Experiment 2: Passive learning

  • Similarly to the active learning experiment, the EEG data for the passive learning was examined in four blocks of equal length (10 min).
  • There were no statistically significant condition differences (p = 0.274) in the ERPs measured during the first 5 minutes, whereas the between-category differences during the second 5-minute sub-block were statistically significant (cluster 1, p < 0.045 at 165-276 ms, fronto-central distribution).
  • The authors expected to see learning effects at the early sensory responses as well as in later time window linked to perceptual learning and audio-visual integration in brain areas that previous studies have linked to short-term cross-modal learning (e.g., Raij et al., 2000; Hashimoto & Sakai, 2004).
  • Frontal cortices also showed enhanced activity bilaterally after 10 minutes of training.
  • The time window after 300 ms matches well with the current active learning task and with earlier EEG studies examining audio-visual learning using a congruency manipulation (Shams et al., 2005; Karipidis et al., 2017; 2018).

5. References

  • Audiovisual integration of letters in the human brain.

[Figure caption] The grey box represents the approximate time window for the difference between the stimulus categories given by the cluster-based permutation statistics.


Figures (6)
Citations
Journal ArticleDOI
TL;DR: The Jyväskylä Longitudinal Study of Dyslexia (JLD) as discussed by the authors found that the likelihood of at-risk children performing poorly in reading and spelling tasks was fourfold compared to the controls.
Abstract: This paper reviews the observations of the Jyvaskyla Longitudinal Study of Dyslexia (JLD). The JLD is a prospective family risk study in which the development of children with familial risk for dyslexia (N = 108) due to parental dyslexia and controls without dyslexia risk (N = 92) were followed from birth to adulthood. The JLD revealed that the likelihood of at-risk children performing poorly in reading and spelling tasks was fourfold compared to the controls. Auditory insensitivity of newborns observed during the first week of life using brain event-related potentials (ERPs) was shown to be the first precursor of dyslexia. ERPs measured at six months of age related to phoneme length identification differentiated the family risk group from the control group and predicted reading speed until the age of 14 years. Early oral language skills, phonological processing skills, rapid automatized naming, and letter knowledge differentiated the groups from ages 2.5–3.5 years onwards and predicted dyslexia and reading development, including reading comprehension, until adolescence. The home environment, a child’s interest in reading, and task avoidance were not different in the risk group but were found to be additional predictors of reading development. Based on the JLD findings, preventive and intervention methods utilizing the association learning approach have been developed.

22 citations

Journal ArticleDOI
TL;DR: Dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned and changes were observed in the brain responses to the novel letters during the learning process are found.

18 citations


Cites background from "Dynamics of brain activation during..."

  • ...A = Auditory cortex, V = Visual cortex, STC = Superior temporal cortex. ...cortical representation and automatic processing of the audiovisual objects....

    [...]

  • ...Hashimoto and Sakai, 2004; Brem et al. 2010, 2018) and dorsal pathway (Taylor et al. 2014, 2017; Hashimoto and Sakai, 2004; Mei et al. 2014, 2015) as well as the STC (Hämäläinen et al., 2019; Karipidis et al. 2017, 2018; Madec et al., 2016) for forming optimal letter-speech sound associations....

    [...]

  • ...Furthermore, these processes might be affected by modulation of attention to important features for learning in the frontal cortices (Hämäläinen et al., 2019)....

    [...]

  • ...In addition, auditory and visual stimuli are combined into audiovisual objects in multisensory brain regions (Stein and Stanford, 2008) (e.g., STC) and such cross-modal audiovisual association is initially stored in the short-term memory system....

    [...]

  • ...As learning progresses, changes have been reported to occur in vOT (Quinn et al., 2017; Madec et al., 2016; Hashimoto and Sakai, 2004; Brem et al. 2010, 2018) and dorsal pathway (Taylor et al. 2014, 2017; Hashimoto and Sakai, 2004; Mei et al. 2014, 2015) as well as the STC (Hämäläinen et al., 2019; Karipidis et al. 2017, 2018; Madec et al., 2016) for forming optimal letter-speech sound associations....

    [...]

Posted ContentDOI
23 Mar 2020-bioRxiv
TL;DR: Dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned and changes were observed in the brain responses to the novel letters during the learning process are found.
Abstract: Learning to associate written letters with speech sounds is crucial for the initial phase of acquiring reading skills. However, little is known about the cortical reorganization for supporting letter-speech sound learning, particularly the brain dynamics during the learning of grapheme-phoneme associations. In the present study, we trained 30 Finnish participants (mean age: 24.33 years, SD: 3.50 years) to associate novel foreign letters with familiar Finnish speech sounds on two consecutive days (first day ~ 50 minutes; second day ~ 25 minutes), while neural activity was measured using magnetoencephalography (MEG). Two sets of audiovisual stimuli were used for the training in which the grapheme-phoneme association in one set (Learnable) could be learned based on the different learning cues provided, but not in the other set (Control). The learning progress was tracked at a trial-by-trial basis and used to segment different learning stages for the MEG source analysis. The learning-related changes were examined by comparing the brain responses to Learnable and Control uni/multi-sensory stimuli, as well as the brain responses to learning cues at different learning stages over the two days. We found dynamic changes in brain responses related to multi-sensory processing when grapheme-phoneme associations were learned. Further, changes were observed in the brain responses to the novel letters during the learning process. We also found that some of these learning effects were observed only after memory consolidation the following day. Overall, the learning process modulated the activity in a large network of brain regions, including the superior temporal cortex and the dorsal (parietal) pathway. Most interestingly, middle- and inferior- temporal regions were engaged during multi-sensory memory encoding after the cross-modal relationship was extracted from the learning cues. 
Our findings highlight the brain dynamics and plasticity related to the learning of letter-speech sound associations and provide a more refined model of grapheme-phoneme learning in reading acquisition.

4 citations


Cites background from "Dynamics of brain activation during..."

  • ...Depth-weighted (p = 0.8) minimum-norm estimates (wMNE) (Hämäläinen and Ilmoniemi 1994; Lin et al. 2006) were calculated for 10242 free-orientation sources per hemisphere....

    [...]
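The depth-weighted minimum-norm estimate (wMNE) quoted above can be sketched generically in NumPy. This is not the cited paper's implementation: the regularization scheme and all shapes below are assumptions for illustration, and real pipelines (e.g., MNE-Python) whiten the data with a noise covariance first.

```python
import numpy as np

def wmne(G, data, depth=0.8, lam=0.1):
    """Depth-weighted minimum-norm estimate, a rough sketch.

    G     : lead-field matrix (n_sensors, n_sources)
    data  : sensor measurements (n_sensors, n_times)
    depth : depth-weighting exponent (0 -> plain MNE)
    lam   : Tikhonov regularization strength (an assumption here)
    """
    w = np.sum(G ** 2, axis=0) ** (-depth)   # upweight weak (deep) sources
    R = np.diag(w)                           # diagonal source-covariance prior
    GRGt = G @ R @ G.T
    reg = lam * np.trace(GRGt) / G.shape[0] * np.eye(G.shape[0])
    K = R @ G.T @ np.linalg.inv(GRGt + reg)  # inverse operator
    return K @ data                          # (n_sources, n_times)

# Toy forward model: 30 sensors, 200 candidate sources, one active source
rng = np.random.default_rng(5)
G = rng.normal(size=(30, 200))
s = np.zeros((200, 5)); s[17] = 1.0
est = wmne(G, G @ s)
print(est.shape)  # (200, 5)
```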

Journal ArticleDOI
TL;DR: Zhang et al. as discussed by the authors proposed an urban landscape visual communication optimization method based on hue saturation value (HSV) technology to provide more orderly and convenient urban planning ideas for the fast-paced life in complex urban environment.
Abstract: The purpose of this study is to provide more orderly and convenient urban planning ideas for the fast-paced life in the complex urban environment. The current sign design is analyzed according to the needs of urban residents for barrier-free sign design, and the sign design based on urban space color is established. An urban landscape visual communication optimization method is proposed based on hue saturation value (HSV) technology. The multiscale retinex (MSR) algorithm is used as a reference for simulation experiments. The experimental results show that the designed optimization method is significantly better than the traditional method in the expression effect of visual communication. First, the attention time of the sign design can be reduced by more than 3 seconds, which can effectively improve the lives of urban residents and tourists and improve their browsing efficiency. Next, 94% of the citizens believe that optimized urban signs are more prominent than traditional ones. Finally, the sign design optimization method proposed provides an image with a higher definition than the traditional sign design method. The proposed sign design and optimization scheme can effectively coordinate the relationship among urban landscape design, guided objects, and cities, help busy urban life, and provide new ideas for the development direction of visual communication design.

1 citation

References
Journal ArticleDOI
TL;DR: The evidence supportive of P2 being the result of independent processes is described and several features, such as its persistence from wakefulness into sleep, the general consensus that unlike most other EEG phenomena it increases with age, and the fact that it can be generated using respiratory stimuli are highlighted.

727 citations

Journal ArticleDOI
22 Jul 2004-Neuron
TL;DR: The results suggest that efficient processing of culturally defined associations between letters and speech sounds relies on neural mechanisms similar to those naturally evolved for integrating audiovisual speech.

523 citations

Journal ArticleDOI
18 May 2000-Nature
TL;DR: It is concluded that prefrontal cortex neurons are part of integrative networks that represent behaviourally meaningful cross-modal associations and are crucial for the temporal transfer of information in the structuring of behaviour, reasoning and language.
Abstract: The prefrontal cortex is essential for the temporal integration of sensory information in behavioural and linguistic sequences. Such information is commonly encoded in more than one sense modality, notably sight and sound. Connections from sensory cortices to the prefrontal cortex support its integrative function. Here we present the first evidence that prefrontal cortex cells associate visual and auditory stimuli across time. We gave monkeys the task of remembering a tone of a certain pitch for 10 s and then choosing the colour associated with it. In this task, prefrontal cortex cells responded selectively to tones, and most of them also responded to colours according to the task rule. Thus, their reaction to a tone was correlated with their subsequent reaction to the associated colour. This correlation faltered in trials ending in behavioural error. We conclude that prefrontal cortex neurons are part of integrative networks that represent behaviourally meaningful cross-modal associations. The orderly and timely activation of neurons in such networks is crucial for the temporal transfer of information in the structuring of behaviour, reasoning and language.

512 citations

Journal ArticleDOI
TL;DR: The Signal Space Separation method (SSS) as mentioned in this paper idealizes magnetic multichannel signals by transforming them into device-independent idealized channels representing the measured data in uncorrelated form.
Abstract: The reliability of biomagnetic measurements is traditionally challenged by external interference signals, movement artifacts, and comparison problems caused by different positions of the subjects or different sensor configurations. The Signal Space Separation method (SSS) idealizes magnetic multichannel signals by transforming them into device-independent idealized channels representing the measured data in uncorrelated form. The transformation has separate components for the biomagnetic and external interference signals, and thus, the biomagnetic signals can be reconstructed simply by leaving out the contribution of the external interference. The foundation of SSS is a basis spanning all multichannel signals of magnetic origin. It is based on Maxwell's equations and the geometry of the sensor array only, with the assumption that the sensors are located in a current free volume. SSS is demonstrated to provide suppression of external interference signals, standardization of different positions of the subject, standardization of different sensor configurations, compensation for distortions caused by movement of the subject (even a subject containing magnetic impurities), suppression of sporadic sensor artifacts, a tool for fine calibration of the device, extraction of biomagnetic DC fields, and an aid for realizing an active compensation system. Thus, SSS removes many limitations of traditional biomagnetic measurements.

494 citations


"Dynamics of brain activation during..." refers methods in this paper

  • ...Offline, head movements were corrected and external noise sources attenuated using the temporal extension of the signal space separation algorithm (Taulu et al., 2005) of the MaxFilter program (Elekta Oy, Finland)....

    [...]

Journal ArticleDOI
TL;DR: The occipito-temporal print sensitivity thus is established during the earliest phase of reading acquisition in childhood, suggesting that a crucial part of the later reading network first adopts a role in mapping print and sound.
Abstract: The acquisition of reading skills is a major landmark process in a human's cognitive development. On the neural level, a new functional network develops during this time, as children typically learn to associate the well-known sounds of their spoken language with unfamiliar characters in alphabetic languages and finally access the meaning of written words, allowing for later reading. A critical component of the mature reading network located in the left occipito-temporal cortex, termed the "visual word-form system" (VWFS), exhibits print-sensitive activation in readers. When and how the sensitivity of the VWFS to print comes about remains an open question. In this study, we demonstrate the initiation of occipito-temporal cortex sensitivity to print using functional MRI (fMRI) (n = 16) and event-related potentials (ERP) (n = 32) in a controlled, longitudinal training study. Print sensitivity of fast (<250 ms) processes in posterior occipito-temporal brain regions accompanied basic associative learning of letter-speech sound correspondences in young (mean age 6.4 +/- 0.08 y) nonreading kindergarten children, as shown by concordant ERP and fMRI results. The occipito-temporal print sensitivity thus is established during the earliest phase of reading acquisition in childhood, suggesting that a crucial part of the later reading network first adopts a role in mapping print and sound.

381 citations

Frequently Asked Questions (1)
Q1. What are the contributions mentioned in the paper "Dynamics of brain activation during learning of syllable-symbol paired associations" ?

In this paper, the authors examined the immediate (short-term) learning effects during training of syllable-symbol paired associations, using MEG during an active learning task (Experiment 1) and EEG during a passive learning task (Experiment 2).