
Showing papers by "Karl Magnus Petersson published in 2016"


Journal ArticleDOI
TL;DR: It is proposed that repetition enhancement reflects a brain mechanism to build and strengthen a neural network for processing novel syntactic regularities and novel words, whereas repetition suppression indicates an overlap in neural mechanisms for native and new language constructions with sufficient structural similarities.
Abstract: When learning a new language, we build brain networks to process and represent the acquired words and syntax and integrate these with existing language representations. It is an open question whether the same or different neural mechanisms are involved in learning and processing a novel language compared to the native language(s). Here we investigated the neural repetition effects of repeating known and novel word orders while human subjects were in the early stages of learning a new language. Combining a miniature language with a syntactic priming paradigm, we examined the neural correlates of language learning online using functional magnetic resonance imaging (fMRI). In the left inferior frontal gyrus (LIFG) and posterior temporal cortex, the repetition of novel syntactic structures led to repetition enhancement, while repetition of known structures resulted in repetition suppression. Additional verb repetition led to an increase in the syntactic repetition enhancement effect in language-related brain regions. Similarly, the repetition of verbs led to repetition enhancement effects in areas related to lexical and semantic processing, an effect that continued to increase in a subset of these regions. Repetition enhancement might reflect a mechanism to build and strengthen a neural network to process novel syntactic structures and lexical items. By contrast, the observed repetition suppression points to overlapping neural mechanisms for native and new language constructions when these have sufficient structural similarities.

40 citations


Journal ArticleDOI
TL;DR: These findings suggest suboptimal processing in early stages of object processing in dyslexia, when integration and mapping of perceptual information to a more form-specific percept in memory take place.

17 citations


Journal ArticleDOI
TL;DR: The results suggest that dyslexics’ parafoveal dysfunction is not based on strict visuo-attentional factors, but nevertheless they stress the importance of extra-phonological processing.
Abstract: Two different forms of parafoveal dysfunction have been hypothesized as core deficits of dyslexic individuals: reduced parafoveal preview benefits ("too little parafovea") and increased costs of parafoveal load ("too much parafovea"). We tested both hypotheses in a single eye-tracking experiment using a modified serial rapid automatized naming (RAN) task. Comparisons between dyslexic and non-dyslexic adults showed reduced parafoveal preview benefits in dyslexics, without increased costs of parafoveal load. Reduced parafoveal preview benefits were observed in a naming task, but not in a silent letter-finding task, indicating that the parafoveal dysfunction may be a consequence of the overload involved in extracting phonological information from orthographic input. Our results suggest that dyslexics' parafoveal dysfunction is not based on strictly visuo-attentional factors, but they nevertheless stress the importance of extra-phonological processing. Furthermore, evidence of reduced parafoveal preview benefits in dyslexia may help explain why serial RAN is an important reading predictor in adulthood.

14 citations


Journal ArticleDOI
TL;DR: The offset EVS, an index obtained in oral reading, may tap into different components of automaticity that underlie reading ability, oral or silent, and the offset EVS may accommodate more than the articulatory programming stage of word N.
Abstract: During oral reading, the eyes tend to be ahead of the voice (eye-voice span, EVS). It has been hypothesized that the extent to which this happens depends on the automaticity of reading processes, namely on the speed of print-to-sound conversion. We tested whether the EVS is affected by another automaticity component: immunity from interference. To that end, we manipulated word familiarity (high-frequency, low-frequency, and pseudowords) and word length as proxies of immunity from interference, and we used linear mixed-effects models to measure the effects of both variables on the time interval during which readers do parallel processing by gazing at word N+1 while not yet having articulated word N (offset eye-voice span). Parallel processing was enhanced by automaticity, as shown by familiarity × length interactions on the offset eye-voice span, and it was impeded by lack of automaticity, as shown by the transformation of the offset eye-voice span into a voice-eye span (voice ahead of the offset of the eyes) for pseudowords. The relation between parallel processing and automaticity was strengthened by the fact that the offset eye-voice span predicted reading velocity. Our findings contribute to understanding how the offset eye-voice span, an index obtained in oral reading, may tap into different components of automaticity that underlie reading ability, oral or silent. In addition, we compared the duration of the offset eye-voice span with the average reference duration of stages in word production, and we saw that the offset eye-voice span may accommodate more than the articulatory programming stage of word N.

6 citations
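
To make the analysis described in the abstract above more concrete, here is a minimal sketch (not the authors' code) of a linear mixed-effects model relating offset eye-voice span to word familiarity, word length, and their interaction, with a random intercept per participant. It uses Python's statsmodels on synthetic data; all column names (subject, familiarity, length, offset_evs) are hypothetical, and the published analysis may additionally include item-level random effects.

```python
# Hedged sketch of a familiarity x length mixed-effects analysis on offset EVS.
# Data are simulated; column names and effect sizes are assumptions, not the paper's.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subj, n_items = 20, 30
rows = []
for subj in range(n_subj):
    subj_offset = rng.normal(0, 20)  # participant-level variability (ms)
    for item in range(n_items):
        familiarity = rng.choice(["high_freq", "low_freq", "pseudoword"])
        length = int(rng.integers(3, 9))  # word length in letters
        evs = 150 + subj_offset + 10 * length + rng.normal(0, 30)  # toy generative model
        rows.append((f"s{subj}", familiarity, length, evs))

data = pd.DataFrame(rows, columns=["subject", "familiarity", "length", "offset_evs"])

# Fixed effects: familiarity, length, and their interaction;
# random intercept grouped by subject (item random effects omitted for brevity).
model = smf.mixedlm("offset_evs ~ familiarity * length", data, groups=data["subject"])
result = model.fit()
print(result.summary())
```

In this kind of model, a reliable familiarity × length interaction term in the fitted summary would correspond to the interaction effect on the offset eye-voice span reported in the abstract.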


Journal ArticleDOI
TL;DR: In this paper, the interaction between surface colour and colour knowledge during object recognition was investigated, and this interplay was shown to be contingent upon shape information, with colour knowledge being more important for recognising structurally similar shaped objects.
Abstract: This study investigates the interaction between surface colour and colour knowledge information during object recognition. In two different experiments, participants were instructed to decide whether two presented stimuli belonged to the same object identity. On the non-matching trials, we manipulated the shape and colour knowledge information activated by the two stimuli by creating four different stimulus pairs: (1) similar in shape and colour (e.g. TOMATO–APPLE); (2) similar in shape and dissimilar in colour (e.g. TOMATO–COCONUT); (3) dissimilar in shape and similar in colour (e.g. TOMATO–CHILI PEPPER) and (4) dissimilar in both shape and colour (e.g. TOMATO–PEANUT). The object pictures were presented in typical and atypical colours and also in black-and-white. The interaction between surface and colour knowledge was shown to be contingent upon shape information: while colour knowledge is more important for recognising structurally similar shaped objects, surface colour is more prominent for recognising ...

5 citations