
Showing papers on "Semantic memory published in 2018"


Journal ArticleDOI
TL;DR: These findings demonstrate that, when successfully comprehending natural speech, the human brain responds to the contextual semantic content of each word in a relatively time-locked fashion.

286 citations


Proceedings Article
27 Sep 2018
TL;DR: This work proposes using Graph Convolutional Networks to incorporate prior knowledge into a deep reinforcement learning framework and shows that semantic knowledge significantly improves performance, and in particular generalization to unseen scenes and/or objects.
Abstract: How do humans navigate to target objects in novel scenes? Do we use the semantic/functional priors we have built over years to efficiently search and navigate? For example, to search for mugs, we search cabinets near the coffee machine and for fruits we try the fridge. In this work, we focus on incorporating semantic priors in the task of semantic navigation. We propose to use Graph Convolutional Networks for incorporating the prior knowledge into a deep reinforcement learning framework. The agent uses the features from the knowledge graph to predict the actions. For evaluation, we use the AI2-THOR framework. Our experiments show how semantic knowledge improves performance significantly. More importantly, we show improvement in generalization to unseen scenes and/or objects. The supplementary video can be accessed at the following link: this https URL .
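The graph-convolution step that embeds such a knowledge graph can be sketched in a few lines. This is a minimal illustration of a single GCN layer, not the paper's actual architecture; the toy graph, feature sizes, and weights are invented for the example.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    A_hat = A + np.eye(A.shape[0])           # adjacency with self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ H @ W, 0.0)

# Toy knowledge graph: mug -- coffee machine -- cabinet (invented example).
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 4))    # initial node features (e.g. word embeddings)
W = rng.normal(size=(4, 2))    # learned layer weights
H1 = gcn_layer(A, H, W)
print(H1.shape)                # prints (3, 2): per-node features for the policy
```

Stacking a few such layers lets each node's features absorb information from its graph neighborhood, which is how semantic priors like "mugs are near coffee machines" can inform the agent's action predictions.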

210 citations


Journal ArticleDOI
TL;DR: Using task-based imaging, regions that respond when cognition combines stimulus independence with multi-modal information are established, and these regions are shown to lie at the extreme end of a macroscale gradient that describes gradual transitions from sensorimotor to transmodal cortex.

201 citations


Journal ArticleDOI
TL;DR: It is shown that sustained voltages in human EEG recordings contain fine-grained information about the orientation of an object being held in memory, consistent with a memory storage signal.
Abstract: In human scalp EEG recordings, both sustained potentials and alpha-band oscillations are present during the delay period of working memory tasks and may therefore reflect the representation of information in working memory. However, these signals may instead reflect support mechanisms rather than the actual contents of memory. In particular, alpha-band oscillations have been tightly tied to spatial attention and may not reflect location-independent memory representations per se. To determine how sustained and oscillating EEG signals are related to attention and working memory, we attempted to decode which of 16 orientations was being held in working memory by human observers (both women and men). We found that sustained EEG activity could be used to decode the remembered orientation of a stimulus, even when the orientation of the stimulus varied independently of its location. Alpha-band oscillations also carried clear information about the location of the stimulus, but they provided little or no information about orientation independently of location. Thus, sustained potentials contain information about the object properties being maintained in working memory, consistent with previous evidence of a tight link between these potentials and working memory capacity. In contrast, alpha-band oscillations primarily carry location information, consistent with their link to spatial attention. SIGNIFICANCE STATEMENT Working memory plays a key role in cognition, and working memory is impaired in several neurological and psychiatric disorders. Previous research has suggested that human scalp EEG recordings contain signals that reflect the neural representation of information in working memory. However, to conclude that a neural signal actually represents the object being remembered, it is necessary to show that the signal contains fine-grained information about that object.
Here, we show that sustained voltages in human EEG recordings contain fine-grained information about the orientation of an object being held in memory, consistent with a memory storage signal.
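The logic of such decoding can be illustrated with a toy pipeline: a leave-one-trial-out nearest-centroid classifier on synthetic multichannel data. The channel count, noise level, and classifier below are all invented stand-ins, not the study's actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
n_orient, n_trials, n_chan = 16, 20, 32   # toy sizes, not the study's

# Synthetic "sustained EEG": each orientation evokes a noisy channel template.
templates = rng.normal(size=(n_orient, n_chan))
X = templates[:, None, :] + 0.8 * rng.normal(size=(n_orient, n_trials, n_chan))

# Leave-one-trial-out nearest-centroid decoding (a stand-in classifier).
correct = 0
for t in range(n_trials):
    centroids = np.delete(X, t, axis=1).mean(axis=1)  # per-orientation mean
    for o in range(n_orient):
        pred = np.argmin(((centroids - X[o, t]) ** 2).sum(axis=1))
        correct += int(pred == o)
accuracy = correct / (n_orient * n_trials)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / n_orient:.3f})")
```

Above-chance accuracy on held-out trials is the criterion for saying a signal carries fine-grained stimulus information, which is the inference the paper draws for sustained potentials but not for alpha-band oscillations.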

168 citations


Journal ArticleDOI
TL;DR: A review of experimental data evaluates predictions of the APC model and alternative theories, also providing detailed discussion of some seemingly contradictory findings.

154 citations


Journal ArticleDOI
02 Feb 2018-eLife
TL;DR: Comparing the neural pattern similarities among object-evoked fMRI responses with behavior-based models that independently captured the visual and conceptual similarities among these stimuli revealed evidence for distinctive coding of visual features in lateral occipital cortex and conceptual features in the temporal pole and parahippocampal cortex.
Abstract: Our ability to interact with the world depends in large part on our understanding of objects. But objects that look similar, such as a hairdryer and a gun, may do different things, while objects that look different, such as tape and glue, may have similar roles. The fact that we can effortlessly distinguish between such objects suggests that the brain combines information about an object’s visual and abstract properties. Nevertheless, brain imaging experiments show that thinking about what an object looks like activates different brain regions to thinking about abstract knowledge. For example, thinking about an object’s appearance activates areas that support vision, whereas thinking about how to use that object activates regions that control movement. So how does the brain combine these different kinds of information? Martin et al. asked healthy volunteers to answer questions about objects while lying inside a brain scanner. Questions about appearance (such as “is a hairdryer angular?”) activated different regions of the brain to questions about abstract knowledge (“is a hairdryer manmade?”). But both types of question also activated a region of the brain called the perirhinal cortex. When volunteers responded to either type of question, the activity in their perirhinal cortex signaled both the physical appearance of the object as well as its abstract properties, even though neither type of information was necessary for the task. This suggests that information in the perirhinal cortex reflects combinations of multiple features of objects. These findings provide insights into a neurodegenerative disorder called semantic dementia. Patients with semantic dementia lose their general knowledge about the world. This leads to difficulties interacting with everyday objects. Patients may try to use a fork to comb their hair, for example. Notably, the perirhinal cortex is a brain region that is usually damaged in semantic dementia.
Loss of combined information about the visual and abstract properties of objects may lie at the core of the observed impairments.

147 citations


Journal ArticleDOI
01 Oct 2018-Cortex
TL;DR: It is argued that the balance of the evidence suggests that the angular gyrus supports the representation of retrieved episodic information, and that this likely reflects a more general role for the region in representing multi-modal and multi-domain information.

142 citations


Journal ArticleDOI
TL;DR: The analysis reveals that the semantic network of high creative individuals is more robust to network percolation compared with the network of low creative individuals and that this higher robustness is related to differences in the structure of the networks.
Abstract: Flexibility of thought is theorized to play a critical role in the ability of high creative individuals to generate novel and innovative ideas. However, this has been examined only through indirect behavioral measures. Here we use network percolation analysis (removal of links in a network whose strength is below an increasing threshold) to computationally examine the robustness of the semantic memory networks of low and high creative individuals. Robustness of a network indicates its flexibility and thus can be used to quantify flexibility of thought as related to creativity. This is based on the assumption that the higher the robustness of the semantic network, the higher its flexibility. Our analysis reveals that the semantic network of high creative individuals is more robust to network percolation compared with the network of low creative individuals and that this higher robustness is related to differences in the structure of the networks. Specifically, we find that this higher robustness is related to stronger links connecting different components of similar semantic words in the network, which may also help to facilitate spread of activation over their network. Thus, we directly and quantitatively examine the relation between flexibility of thought and creative ability. Our findings support the associative theory of creativity, which posits that high creative ability is related to a flexible structure of semantic memory. Finally, this approach may have further implications, by enabling a quantitative examination of flexibility of thought, in both healthy and clinical populations.
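The percolation procedure described here is easy to sketch: repeatedly delete links whose weight falls below a rising threshold and track the size of the largest remaining component. The toy semantic network and weights below are invented for illustration.

```python
from collections import defaultdict

def largest_component(nodes, edges):
    """Size of the largest connected component (plain iterative BFS)."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, best = set(), 0
    for start in nodes:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            size += 1
            for nxt in adj[node] - seen:
                seen.add(nxt)
                stack.append(nxt)
        best = max(best, size)
    return best

def percolate(nodes, weighted_edges, thresholds):
    """Remove links below each rising threshold; track the giant component."""
    return [largest_component(nodes,
                              [(u, v) for u, v, w in weighted_edges if w >= th])
            for th in thresholds]

# Toy semantic network: weights stand in for association strength.
nodes = ["cat", "dog", "pet", "fur", "tail"]
edges = [("cat", "dog", 0.9), ("cat", "pet", 0.6), ("dog", "pet", 0.7),
         ("pet", "fur", 0.3), ("fur", "tail", 0.2)]
print(percolate(nodes, edges, [0.0, 0.25, 0.5, 0.8]))  # → [5, 4, 3, 2]
```

A more flexible (robust) network keeps a large connected component even at high thresholds; the slower this curve decays, the more robust the network, which is the quantity the authors compare between high and low creative groups.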

127 citations


Journal ArticleDOI
TL;DR: The proposed dual-memory self-organizing architecture is evaluated on the CORe50 benchmark dataset for continuous object recognition, showing that it significantly outperforms current methods of lifelong learning in three different incremental learning scenarios.
Abstract: Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting in which novel sensory experience interferes with existing representations and leads to abrupt decreases in the performance on previously acquired knowledge. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. Therefore, specialized neural network mechanisms are required that adapt to novel sequential experience while preventing disruptive interference with existing representations. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. 
We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
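The division of labor between the two memories, and the role of replay, can be caricatured in a few lines. The class below is a deliberately simplified stand-in: running-mean class prototypes instead of growing recurrent networks, with all names and data invented.

```python
import random

class DualMemory:
    """Toy sketch: episodic instance store + semantic prototypes with replay."""

    def __init__(self, replay_every=3):
        self.episodic = []            # stored instances (episodic memory)
        self.prototypes = {}          # label -> running mean (semantic memory)
        self.counts = {}
        self.replay_every = replay_every
        self.seen = 0

    def _update(self, x, label):
        c = self.counts.get(label, 0) + 1
        p = self.prototypes.get(label, [0.0] * len(x))
        self.prototypes[label] = [pi + (xi - pi) / c for pi, xi in zip(p, x)]
        self.counts[label] = c

    def observe(self, x, label):
        self.episodic.append((x, label))   # instance-level learning
        self._update(x, label)             # category-level learning
        self.seen += 1
        if self.seen % self.replay_every == 0 and self.episodic:
            # Consolidation: replay a stored episode into semantic memory.
            self._update(*random.choice(self.episodic))

    def classify(self, x):
        return min(self.prototypes,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(self.prototypes[c], x)))

m = DualMemory(replay_every=3)
for x, y in [([0.0, 0.0], "cup"), ([0.1, 0.0], "cup"),
             ([1.0, 1.0], "book"), ([0.9, 1.0], "book")]:
    m.observe(x, y)
print(m.classify([0.05, 0.0]))   # prints "cup"
```

Replaying stored episodes into the category learner is the mechanism that lets new classes be added without overwriting old ones, which is the anti-forgetting idea the paper implements with neural reactivation trajectories.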

117 citations


Journal ArticleDOI
TL;DR: It is shown that brain responses to such continuous stimuli can be investigated in detail, for magnetoencephalography (MEG) data, by combining linear kernel estimation with minimum norm source localization, and this technique can be used to study the neural processing of continuous stimuli in time and anatomical space with the millisecond temporal resolution of MEG.

113 citations


Posted Content
TL;DR: Notably, this simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep-learning dynamics to give rise to these regularities.
Abstract: An extensive body of empirical research has revealed remarkable regularities in the acquisition, organization, deployment, and neural representation of human semantic knowledge, thereby raising a fundamental conceptual question: what are the theoretical principles governing the ability of neural networks to acquire, organize, and deploy abstract knowledge by integrating across many individual experiences? We address this question by mathematically analyzing the nonlinear dynamics of learning in deep linear networks. We find exact solutions to this learning dynamics that yield a conceptual explanation for the prevalence of many disparate phenomena in semantic cognition, including the hierarchical differentiation of concepts through rapid developmental transitions, the ubiquity of semantic illusions between such transitions, the emergence of item typicality and category coherence as factors controlling the speed of semantic processing, changing patterns of inductive projection over development, and the conservation of semantic similarity in neural representations across species. Thus, surprisingly, our simple neural model qualitatively recapitulates many diverse regularities underlying semantic development, while providing analytic insight into how the statistical structure of an environment can interact with nonlinear deep learning dynamics to give rise to these regularities.
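The core object of that analysis, gradient descent in a deep linear network trained on hierarchically structured data, fits in a few lines. The item-feature matrix, network sizes, and learning rate below are invented toy choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy item-feature matrix with hierarchical structure: coarse properties
# shared by many items, fine properties by few (values invented).
Y = np.array([[1, 1, 1, 1],     # "can grow": all living things
              [1, 1, -1, -1],   # "can move": animals vs. plants
              [1, -1, 0, 0],    # "can fly": bird vs. fish
              [0, 0, 1, -1]],   # "has petals": flower vs. tree
             dtype=float)
X = np.eye(4)                   # one-hot items

# Two-layer *linear* network y = W2 @ W1 @ x, small random initialization.
W1 = 0.01 * rng.normal(size=(4, 4))
W2 = 0.01 * rng.normal(size=(4, 4))
lr, losses = 0.05, []
for step in range(600):
    pred = W2 @ W1 @ X
    err = pred - Y
    losses.append(float((err ** 2).sum()))
    # Plain gradient descent on squared error.
    W2 -= lr * err @ (W1 @ X).T
    W1 -= lr * W2.T @ err @ X.T
print(f"loss: {losses[0]:.1f} -> {losses[-1]:.5f}")
```

Plotting `losses` shows the stage-like drops the abstract describes: stronger (coarser) input-output modes are learned first, with each mode's learning time inversely related to its singular value, despite the nonlinearity of the learning dynamics.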

Journal ArticleDOI
TL;DR: The data suggest that of the regions studied, TDP-43 pathology in the ATPC is an important early neocortical stage of TDP-43 progression in aging and AD, while extension of TDP-43 pathology to the midfrontal cortex is a late stage associated with more severe and global cognitive impairment.
Abstract: TDP-43 pathology was investigated in the anterior temporal pole cortex (ATPC) and orbital frontal cortex (OFC), regions often degenerated in frontotemporal lobar degenerations (FTLD), in aging and Alzheimer’s disease (AD). Diagnosis of dementia in the 1160 autopsied participants from 3 studies of community-dwelling elders was based on clinical evaluation and cognitive performance tests, which were used to create summary measures of the five cognitive domains. Neuronal and glial TDP-43 cytoplasmic inclusions were quantitated in 8 brain regions by immunohistochemistry and used in ANOVA and regression analyses. TDP-43 pathology was present in 547 (49.4%) participants, in whom ATPC (41.9%) was the most frequently involved neocortical region; in 15.5% of these cases, ATPC was the only neocortical area with TDP-43 pathology, suggesting not only that ATPC is involved early by TDP-43 but also that ATPC may represent an intermediate stage between mesial temporal lobe involvement by TDP-43 and the last stage with involvement of other neocortical areas. To better study this intermediary neocortical stage, and to integrate with other staging schemes, our previous 3-stage distribution of TDP-43 pathology was revised to a 5-stage scheme: stage 1 showed involvement of the amygdala only; stage 2 showed extension to hippocampus and/or entorhinal cortex; stage 3 showed extension to the ATPC; stage 4 showed extension to the midtemporal cortex and/or OFC; and finally, in stage 5, there was extension to the midfrontal cortex. Clinically, cases in stages 2 to 5 had impaired episodic memory; however, stage 3 was distinct from stage 2 since stage 3 cases had significantly increased odds of dementia. The proportion of cases with hippocampal sclerosis increased progressively across the stages, with stage 5 showing the largest proportion of hippocampal sclerosis cases.
Stage 5 cases differed from other stages by having impairment of semantic memory and perceptual speed, in addition to episodic memory impairment. These data suggest that of the regions studied, TDP-43 pathology in the ATPC is an important early neocortical stage of TDP-43 progression in aging and AD while extension of TDP-43 pathology to the midfrontal cortex is a late stage associated with more severe and global cognitive impairment.

Journal ArticleDOI
01 Jul 2018-Brain
TL;DR: Convergent data strongly support a model in which a distinct neuroanatomical substrate in middle fusiform gyrus provides access to object semantic information, and this under-appreciated locus of semantic processing is at risk in resections for temporal lobe epilepsy.
Abstract: Semantic memory underpins our understanding of objects, people, places, and ideas. Anomia, a disruption of semantic memory access, is the most common residual language disturbance and is seen in dementia and following injury to temporal cortex. While such anomia has been well characterized by lesion symptom mapping studies, its pathophysiology is not well understood. We hypothesize that inputs to the semantic memory system engage a specific heteromodal network hub that integrates lexical retrieval with the appropriate semantic content. Such a network hub has been proposed by others, but has thus far eluded precise spatiotemporal delineation. This limitation in our understanding of semantic memory has impeded progress in the treatment of anomia. We evaluated the cortical structure and dynamics of the lexical semantic network in driving speech production in a large cohort of patients with epilepsy using electrocorticography (n = 64), functional MRI (n = 36), and direct cortical stimulation (n = 30) during two generative language processes that rely on semantic knowledge: visual picture naming and auditory naming to definition. Each task also featured a non-semantic control condition: scrambled pictures and reversed speech, respectively. These large-scale data of the left, language-dominant hemisphere uniquely enable convergent, high-resolution analyses of neural mechanisms characterized by rapid, transient dynamics with strong interactions between distributed cortical substrates. We observed three stages of activity during both visual picture naming and auditory naming to definition that were serially organized: sensory processing, lexical semantic processing, and articulation. Critically, the second stage was absent in both the visual and auditory control conditions. 
Group activity maps from both electrocorticography and functional MRI identified heteromodal responses in middle fusiform gyrus, intraparietal sulcus, and inferior frontal gyrus; furthermore, the spectrotemporal profiles of these three regions revealed coincident activity preceding articulation. Only in the middle fusiform gyrus did direct cortical stimulation disrupt both naming tasks while still preserving the ability to repeat sentences. These convergent data strongly support a model in which a distinct neuroanatomical substrate in middle fusiform gyrus provides access to object semantic information. This under-appreciated locus of semantic processing is at risk in resections for temporal lobe epilepsy as well as in trauma and strokes that affect the inferior temporal cortex; it may explain the range of anomic states seen in these conditions. Further characterization of brain network behaviour engaging this region in both healthy and diseased states will expand our understanding of semantic memory and further the development of therapies directed at anomia.

Journal ArticleDOI
TL;DR: This work highlights recent behavioral and neuroimaging evidence suggesting that maturational differences among subfields within the hippocampus contribute to the developmental lead-lag relation between generalization and specificity, and lays out future research directions.

Journal ArticleDOI
TL;DR: This paper revisits the 'constructive episodic simulation hypothesis', proposed by the author with Schacter in 2007, which reconceptualises episodic memory as future-oriented.
Abstract: Over the past decade, episodic memory has been reconceptualised as future-oriented. In 2007, Schacter and I proposed the ‘constructive episodic simulation hypothesis’ to account for emerging findings...

Journal ArticleDOI
TL;DR: The findings expose the hippocampus as a key pillar in the neural architecture of mind-wandering and reveal its impact beyond episodic memory, placing it at the heart of the authors' mental life.
Abstract: Subjective inner experiences, such as mind-wandering, represent the fundaments of human cognition. Although the precise function of mind-wandering is still debated, it is increasingly acknowledged to have influence across cognition on processes such as future planning, creative thinking, and problem-solving and even on depressive rumination and other mental health disorders. Recently, there has been important progress in characterizing mind-wandering and identifying the associated neural networks. Two prominent features of mind-wandering are mental time travel and visuospatial imagery, which are often linked with the hippocampus. People with selective bilateral hippocampal damage cannot vividly recall events from their past, envision their future, or imagine fictitious scenes. This raises the question of whether the hippocampus plays a causal role in mind-wandering and, if so, in what way. Leveraging a unique opportunity to shadow people (all males) with bilateral hippocampal damage for several days, we examined, for the first time, what they thought about spontaneously, without direct task demands. We found that they engaged in as much mind-wandering as control participants. However, whereas controls thought about the past, present, and future, imagining vivid visual scenes, hippocampal damage resulted in thoughts primarily about the present comprising verbally mediated semantic knowledge. These findings expose the hippocampus as a key pillar in the neural architecture of mind-wandering and also reveal its impact beyond episodic memory, placing it at the heart of our mental life. SIGNIFICANCE STATEMENT Humans tend to mind-wander ∼30-50% of their waking time. Two prominent features of this pervasive form of thought are mental time travel and visuospatial imagery, which are often associated with the hippocampus.
To examine whether the hippocampus plays a causal role in mind-wandering, we examined the frequency and phenomenology of mind-wandering in patients with selective bilateral hippocampal damage. We found that they engaged in as much mind-wandering as controls. However, hippocampal damage changed the form and content of mind-wandering from flexible, episodic, and scene based to abstract, semanticized, and verbal. These findings expose the hippocampus as a key pillar in the neural architecture of mind-wandering and reveal its impact beyond episodic memory, placing it at the heart of our mental life.

Journal ArticleDOI
TL;DR: Assessment of semantic cognition in a large group of postsurgical temporal lobe epilepsy patients with left versus right anterior temporal lobectomy supports a model in which the 2 ATLs act as a coupled bilateral system for the representation of semantic knowledge, and in which graded hemispheric specializations emerge as a consequence of differential connectivity to lateralized speech production and face perception regions.
Abstract: The presence and degree of specialization between the anterior temporal lobes (ATLs) is a key issue in debates about the neural architecture of semantic memory. Here, we comprehensively assessed multiple aspects of semantic cognition in a large group of postsurgical temporal lobe epilepsy (TLE) patients with left versus right anterior temporal lobectomy (n = 40). Both subgroups showed deficits in expressive and receptive verbal semantic tasks, word and object recognition, naming and recognition of famous faces and perception of faces and emotions. Graded differences in performance between the left and right groups were secondary to the overall mild semantic impairment; primarily, left resected TLE patients showed weaker performance on tasks that required naming or accessing semantic information from a written word. Right resected TLE patients were relatively more impaired at recognizing famous faces as familiar, although this effect was observed less consistently. These findings unify previous partial, inconsistent results and also align directly with fMRI and transcranial magnetic stimulation results in neurologically intact participants. Taken together, these data support a model in which the 2 ATLs act as a coupled bilateral system for the representation of semantic knowledge, and in which graded hemispheric specializations emerge as a consequence of differential connectivity to lateralized speech production and face perception regions.

Journal ArticleDOI
TL;DR: Semantic processing in later life is associated with a shift from semantic‐specific to domain‐general neural resources, consistent with the theory of neural dedifferentiation, and a performance‐related reduction in prefrontal lateralisation, which may reflect a response to increased task demands.

Journal ArticleDOI
09 Mar 2018-Cortex
TL;DR: These findings characterise the interaction within the neural architecture of semantic cognition – the control system dynamically heightens its connectivity with relevant components of the representation system, in response to different semantic contents and difficulty levels.

Journal ArticleDOI
TL;DR: This work provides an overview of learning and memory enhancement techniques before focusing on two techniques – spaced repetition and retrieval practice – that have been linked to the memory systems and presents specific predictions for how these techniques should enhance language learning.
Abstract: The declarative/procedural (DP) model posits that the learning, storage, and use of language critically depend on two learning and memory systems in the brain: declarative memory and procedural memory. Thus, on the basis of independent research on the memory systems, the model can generate specific and often novel predictions for language. Till now most such predictions and ensuing empirical work have been motivated by research on the neurocognition of the two memory systems. However, there is also a large literature on techniques that enhance learning and memory. The DP model provides a theoretical framework for predicting which techniques should extend to language learning, and in what circumstances they should apply. In order to lay the neurocognitive groundwork for these predictions, here we first summarize the neurocognitive fundamentals of the two memory systems and briefly lay out the resulting claims of the DP model for both first and second language. We then provide an overview of learning and memory enhancement techniques, focusing on two – spaced repetition and retrieval practice – that have been linked to the memory systems, and present specific predictions for how these techniques should enhance language learning.

Journal ArticleDOI
TL;DR: This work replicates the language/MD network dissociation discovered previously with other approaches, and shows that the language network is robustly dissociated from the DMN, overall suggesting that these three networks contribute to high‐level cognition in different ways and, perhaps, support distinct computations.

Journal ArticleDOI
TL;DR: A major separation between two brain networks for mathematical and non-mathematical semantics is indicated, which goes a long way to explain a variety of facts in neuroimaging, neuropsychology and developmental disorders.
Abstract: Is mathematical language similar to natural language? Are language areas used by mathematicians when they do mathematics? And does the brain comprise a generic semantic system that stores mathematical knowledge alongside knowledge of history, geography or famous people? Here, we refute those views by reviewing three functional MRI studies of the representation and manipulation of high-level mathematical knowledge in professional mathematicians. The results reveal that brain activity during professional mathematical reflection spares perisylvian language-related brain regions as well as temporal lobe areas classically involved in general semantic knowledge. Instead, mathematical reflection recycles bilateral intraparietal and ventral temporal regions involved in elementary number sense. Even simple fact retrieval, such as remembering that 'the sine function is periodical' or that 'London buses are red', activates dissociated areas for math versus non-math knowledge. Together with other fMRI and recent intracranial studies, our results indicate a major separation between two brain networks for mathematical and non-mathematical semantics, which goes a long way to explain a variety of facts in neuroimaging, neuropsychology and developmental disorders. This article is part of a discussion meeting issue 'The origins of numerical abilities'.

Proceedings ArticleDOI
19 Jul 2018
TL;DR: A novel framework to learn visual relation facts for VQA is proposed and a multi-step attention model composed of visual attention and semantic attention sequentially to extract related visual knowledge and semantic knowledge is proposed.
Abstract: Recently, Visual Question Answering (VQA) has emerged as one of the most significant tasks in multimodal learning as it requires understanding both visual and textual modalities. Existing methods mainly rely on extracting image and question features to learn their joint feature embedding via multimodal fusion or attention mechanism. Some recent studies utilize external VQA-independent models to detect candidate entities or attributes in images, which serve as semantic knowledge complementary to the VQA task. However, these candidate entities or attributes might be unrelated to the VQA task and have limited semantic capacities. To better utilize semantic knowledge in images, we propose a novel framework to learn visual relation facts for VQA. Specifically, we build up a Relation-VQA (R-VQA) dataset based on the Visual Genome dataset via a semantic similarity module, in which each instance consists of an image, a corresponding question, a correct answer and a supporting relation fact. A well-defined relation detector is then adopted to predict visual question-related relation facts. We further propose a multi-step attention model composed of visual attention and semantic attention sequentially to extract related visual knowledge and semantic knowledge. We conduct comprehensive experiments on the two benchmark datasets, demonstrating that our model achieves state-of-the-art performance and verifying the benefit of considering visual relation facts.

Journal ArticleDOI
TL;DR: In the combined model, concept activation is driven by visual input and co-occurrence of semantic features, consistent with neurocognitive accounts, and provides proof of principle of how a mechanistic model of combined visuo-semantic processing can account for pattern-information in the ventral stream.
Abstract: Recognising an object involves rapid visual processing and activation of semantic knowledge about the object, but how visual processing activates and interacts with semantic representations remains unclear. Cognitive neuroscience research has shown that while visual processing involves posterior regions along the ventral stream, object meaning involves more anterior regions, especially perirhinal cortex. Here we investigate visuo-semantic processing by combining a deep neural network model of vision with an attractor network model of semantics, such that visual information maps onto object meanings represented as activation patterns across features. In the combined model, concept activation is driven by visual input and co-occurrence of semantic features, consistent with neurocognitive accounts. We tested the model's ability to explain fMRI data where participants named objects. Visual layers explained activation patterns in early visual cortex, whereas pattern-information in perirhinal cortex was best explained by later stages of the attractor network, when detailed semantic representations are activated. Posterior ventral temporal cortex was best explained by intermediate stages corresponding to initial semantic processing, when visual information has the greatest influence on the emerging semantic representation. These results provide proof of principle of how a mechanistic model of combined visuo-semantic processing can account for pattern-information in the ventral stream.
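The attractor-settling idea can be illustrated with a Hopfield-style network, used here purely as a stand-in for the paper's attractor model (pattern sizes, noise levels, and seeds are invented): a noisy, visually driven feature vector is pulled toward the nearest stored semantic pattern.

```python
import numpy as np

rng = np.random.default_rng(3)
n_feat, n_concepts = 64, 5

# Stored concepts as random +1/-1 semantic-feature patterns (invented data).
P = rng.choice([-1.0, 1.0], size=(n_concepts, n_feat))
W = (P.T @ P) / n_feat          # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

# "Visual input": a stored pattern with some features corrupted.
target = P[0].copy()
state = target.copy()
flipped = rng.choice(n_feat, size=8, replace=False)
state[flipped] *= -1            # partial, noisy evidence about the concept

# Settling: repeated updates pull the state onto the stored attractor.
for _ in range(10):
    state = np.sign(W @ state)
    state[state == 0] = 1.0     # break ties deterministically
overlap = float((state == target).mean())
print(f"feature overlap after settling: {overlap:.2f}")
```

Early in settling the state is dominated by the (visual) input, while later iterations express the stored semantic structure; the paper's mapping of early versus late attractor stages onto posterior versus perirhinal cortex follows the same intuition.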

Journal ArticleDOI
TL;DR: The semantic content of abstract concepts is investigated using a property generation task; the results are compatible with grounded cognition theories, which emphasize the importance of linguistic, social, introspective and affective experiential information for the representation of abstract concepts.
Abstract: The relation of abstract concepts to the modality-specific systems is discussed controversially. According to classical approaches, the semantic content of abstract concepts can only be coded by amodal or verbal-symbolic representations distinct from the sensory and motor systems, because abstract concepts lack a clear physical referent. Grounded cognition theories, in contrast, propose that abstract concepts do not depend only on the verbal system, but also on a variety of modal systems involving perception, action, emotion and internal states. In order to contribute to this debate, we investigated the semantic content of abstract concepts using a property generation task. Participants were asked to generate properties for 296 abstract concepts, which are relevant for constituting their meaning. These properties were categorized using a coding scheme that distinguished modality-specific from verbal content. Words were additionally rated with regard to concreteness/abstractness and familiarity. To identify possible subgroups of abstract concepts with distinct profiles of generated features, hierarchical cluster analyses were conducted. Participants generated a substantial proportion of introspective, affective, social, sensory and motor-related properties, in addition to verbal associations. Cluster analyses revealed different subcategories of abstract concepts, which can be characterized by the dominance of certain conceptual features. The present results are therefore compatible with grounded cognition theories, which emphasize the importance of linguistic, social, introspective and affective experiential information for the representation of abstract concepts. Our findings also indicate that abstract concepts are highly heterogeneous, requiring the investigation of well-specified subcategories of abstract concepts, for instance as revealed by the present cluster analyses. The present study could thus guide future behavioral or imaging work further elucidating the representation of abstract concepts.
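The hierarchical cluster analysis step can be sketched with a toy single-linkage agglomerative clusterer over invented property profiles. The concepts, property labels, and the choice of Jaccard distance below are assumptions for illustration, not the study's actual coding scheme or clustering method:

```python
def jaccard_distance(a, b):
    # Distance between two concepts represented as sets of generated
    # property types (e.g. {"affective", "social"}).
    inter = len(a & b)
    union = len(a | b)
    return 1.0 - (inter / union if union else 0.0)

def single_linkage(clusters, dist):
    # One agglomerative step: merge the two closest clusters, where
    # cluster distance is the minimum pairwise item distance.
    best = None
    for i in range(len(clusters)):
        for j in range(i + 1, len(clusters)):
            d = min(dist(a, b) for a in clusters[i] for b in clusters[j])
            if best is None or d < best[0]:
                best = (d, i, j)
    _, i, j = best
    merged = clusters[i] + clusters[j]
    return [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]

# Toy feature profiles for four abstract concepts (illustrative only).
profiles = {
    "justice":  {"social", "verbal"},
    "morality": {"social", "verbal", "affective"},
    "fear":     {"affective", "introspective"},
    "grief":    {"affective", "introspective", "social"},
}
clusters = [[name] for name in profiles]
dist = lambda a, b: jaccard_distance(profiles[a], profiles[b])
while len(clusters) > 2:
    clusters = single_linkage(clusters, dist)
```

Concepts dominated by the same kinds of generated properties (here social/verbal versus affective/introspective) end up in the same subcluster, which is the kind of subcategory structure the study reports.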

Journal ArticleDOI
TL;DR: Assessment of semantic cognition in young and older adults indicates that three distinct elements contribute to semantic cognition: semantic representations that accumulate throughout the lifespan, processes for controlled retrieval of less salient semantic information, and mechanisms for selecting task-relevant aspects of semantic knowledge, which decline with age and may relate more closely to domain-general executive control.
Abstract: Semantic cognition refers to the appropriate use of acquired knowledge about the world. This requires representation of knowledge as well as control processes which ensure that currently-relevant aspects of knowledge are retrieved and selected. Although these abilities can be impaired selectively following brain damage, the relationship between them in healthy individuals is unclear. It is also commonly assumed that semantic cognition is preserved in later life, because older people have greater reserves of knowledge. However, this claim overlooks the possibility of decline in semantic control processes. Here, semantic cognition was assessed in 100 young and older adults. Despite having a broader knowledge base, older people showed specific impairments in semantic control, performing more poorly than young people when selecting among competing semantic representations. Conversely, they showed preserved controlled retrieval of less salient information from the semantic store. Breadth of semantic knowledge was positively correlated with controlled retrieval but was unrelated to semantic selection ability, which was instead correlated with non-semantic executive function. These findings indicate that three distinct elements contribute to semantic cognition: semantic representations that accumulate throughout the lifespan, processes for controlled retrieval of less salient semantic information, which appear age-invariant, and mechanisms for selecting task-relevant aspects of semantic knowledge, which decline with age and may relate more closely to domain-general executive control.

Journal ArticleDOI
TL;DR: The present study aims to characterize left hemisphere regional hypoactivation in readers with dyslexia for the main processes involved in successful reading: phonological, orthographic and semantic.

Journal ArticleDOI
TL;DR: The findings show that a learning system that derives abstract, distributed representations for the purpose of predicting sequential dependencies in naturalistic language may provide insight into the emergence of many properties of the developing semantic system.
Abstract: Previous research has suggested that distributional learning mechanisms may contribute to the acquisition of semantic knowledge. However, distributional learning mechanisms, statistical learning, and contemporary "deep learning" approaches have been criticized for being incapable of learning the kind of abstract and structured knowledge that many think is required for the acquisition of semantic knowledge. In this paper, we show that recurrent neural networks, trained on noisy naturalistic speech to children, do in fact learn what appears to be abstract and structured knowledge. We trained two types of recurrent neural networks (Simple Recurrent Network, and Long Short-Term Memory) to predict word sequences in a 5-million-word corpus of speech directed to children ages 0-3 years old, and assessed what semantic knowledge they acquired. We found that the learned internal representations encode various abstract grammatical and semantic features that are useful for predicting word sequences. Assessing the organization of semantic knowledge in terms of its similarity structure, we found evidence of emergent categorical and hierarchical structure in both models. We found that the Long Short-Term Memory (LSTM) and the SRN learn very similar kinds of representations, but the LSTM achieved higher levels of performance on a quantitative evaluation. We also trained a non-recurrent neural network, Skip-gram, on the same input to compare our results to the state of the art in machine learning. We found that Skip-gram achieves performance relatively similar to the LSTM, but represents words more in terms of thematic than taxonomic relations, and we provide reasons why this might be the case. Our findings show that a learning system that derives abstract, distributed representations for the purpose of predicting sequential dependencies in naturalistic language may provide insight into the emergence of many properties of the developing semantic system.
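A much-simplified illustration of the distributional principle behind these models: even a count-based context-vector model (not the paper's recurrent networks; the tiny corpus below is invented) makes words from the same category more similar to each other than to words from a different category, because they occur in similar contexts:

```python
from collections import Counter

corpus = ("the dog ate food the cat ate food "
          "the dog ran home the cat ran home "
          "the truck ran fast the car ran fast").split()

def context_vectors(tokens, window=1):
    # Count-based distributional vectors: each word is represented by
    # the counts of words appearing within `window` positions of it.
    vecs = {}
    for i, w in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        ctx = [tokens[j] for j in range(lo, hi) if j != i]
        vecs.setdefault(w, Counter()).update(ctx)
    return vecs

def cosine(a, b):
    keys = set(a) | set(b)
    num = sum(a[k] * b[k] for k in keys)
    den = (sum(v * v for v in a.values()) ** 0.5) * \
          (sum(v * v for v in b.values()) ** 0.5)
    return num / den

vecs = context_vectors(corpus)
# Words from the same category share more contexts than words
# from different categories.
animal_sim = cosine(vecs["dog"], vecs["cat"])
cross_sim = cosine(vecs["dog"], vecs["truck"])
```

The recurrent models in the paper learn richer, order-sensitive versions of this same regularity by predicting upcoming words rather than counting neighbors.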

Journal ArticleDOI
TL;DR: In this article, a dual-memory self-organizing architecture for lifelong learning is proposed, which consists of two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory).
Abstract: Artificial autonomous agents and robots interacting in complex environments are required to continually acquire and fine-tune knowledge over sustained periods of time. The ability to learn from continuous streams of information is referred to as lifelong learning and represents a long-standing challenge for neural network models due to catastrophic forgetting. Computational models of lifelong learning typically alleviate catastrophic forgetting in experimental scenarios with given datasets of static images and limited complexity, thereby differing significantly from the conditions artificial agents are exposed to. In more natural settings, sequential information may become progressively available over time and access to previous experience may be restricted. In this paper, we propose a dual-memory self-organizing architecture for lifelong learning scenarios. The architecture comprises two growing recurrent networks with the complementary tasks of learning object instances (episodic memory) and categories (semantic memory). Both growing networks can expand in response to novel sensory experience: the episodic memory learns fine-grained spatiotemporal representations of object instances in an unsupervised fashion, while the semantic memory uses task-relevant signals to regulate structural plasticity levels and develop more compact representations from episodic experience. For the consolidation of knowledge in the absence of external sensory input, the episodic memory periodically replays trajectories of neural reactivations. We evaluate the proposed model on the CORe50 benchmark dataset for continuous object recognition, showing that we significantly outperform current methods of lifelong learning in three different incremental learning scenarios.
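The dual-memory idea of exemplar storage plus replay-driven consolidation can be sketched minimally. The class below is an illustrative toy (a flat exemplar list and running-mean prototypes), not the paper's growing recurrent networks:

```python
class DualMemory:
    # Minimal dual-memory sketch: an episodic store keeps individual
    # exemplars; a semantic store keeps compact per-category prototypes,
    # updated both from new input and from periodic replay of episodic
    # traces (consolidation without external sensory input).
    def __init__(self):
        self.episodic = []        # list of (features, label)
        self.prototypes = {}      # label -> (mean vector, count)

    def observe(self, features, label):
        self.episodic.append((features, label))
        self._update_prototype(features, label)

    def _update_prototype(self, features, label):
        mean, n = self.prototypes.get(label, ([0.0] * len(features), 0))
        n += 1
        mean = [m + (f - m) / n for m, f in zip(mean, features)]
        self.prototypes[label] = (mean, n)

    def replay(self):
        # Reactivate stored episodes to consolidate the semantic store.
        for features, label in self.episodic:
            self._update_prototype(features, label)

    def classify(self, features):
        # Nearest prototype by squared Euclidean distance.
        return min(self.prototypes,
                   key=lambda lab: sum((f - m) ** 2 for f, m in
                                       zip(features, self.prototypes[lab][0])))

mem = DualMemory()
mem.observe([1.0, 0.0], "cup")
mem.observe([0.9, 0.1], "cup")
mem.observe([0.0, 1.0], "plant")
mem.replay()
```

In the paper both stores grow structurally and the episodic memory replays temporal trajectories; the sketch keeps only the division of labor between instance storage and category-level consolidation.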

Journal ArticleDOI
09 Aug 2018-PLOS ONE
TL;DR: It is suggested that studies relying on self-report should use the same well-defined time frames across all self-reported measures, since the findings indicate that even for longer time frames individuals might attempt to retrieve episodic information to provide a response.
Abstract: Background The degree to which episodic and semantic memory processes contribute to retrospective self-reports have been shown to depend on the length of reporting period. Robinson and Clore (2002) argued that when the amount of accessible detail decreases due to longer reporting periods, an episodic retrieval strategy is abandoned in favor of a semantic retrieval strategy. The current study further examines this shift between retrieval strategies by conceptually replicating the model of Robinson and Clore (2002) for both emotions and symptoms and by attempting to estimate the exact moment of the theorized shift. Method A sample of 469 adults reported the extent to which they experienced 8 states (excited, happy, calm, sad, anxious, angry, pain, stress) over 12 time frames (right now to in general). A series of curvilinear and piecewise linear multilevel growth models were used to examine the pattern of response times and response levels (i.e., rated intensity on a 1–5 scale) across the different time frames. Results Replicating previous results, both response times and response levels increased with longer time frames. In contrast to prior work, no consistent evidence was found for a change in response patterns that would suggest a shift in retrieval strategies (i.e., a flattening or decrease of the slope for longer time frames). The relationship between the time frames and response times/levels was similar for emotions and symptoms. Conclusions Although the current study showed a pronounced effect of time frame on response times and response levels, it did not replicate prior work that suggested a shift from episodic to semantic memory as time frame duration increased. This indicates that even for longer time frames individuals might attempt to retrieve episodic information to provide a response. We suggest that studies relying on self-report should use the same well-defined time frames across all self-reported measures.
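The piecewise linear modelling used to look for a retrieval-strategy shift can be illustrated by scanning candidate breakpoints and fitting an ordinary least-squares line to each side. This is a bare sketch of the idea, not the study's multilevel growth models, and the response-time data below are invented:

```python
def ols(xs, ys):
    # Simple least-squares line fit; returns (slope, intercept, sse).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx
             if sxx else 0.0)
    intercept = my - slope * mx
    sse = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    return slope, intercept, sse

def best_breakpoint(xs, ys):
    # Scan interior candidate breakpoints; fit one line to each side
    # and keep the split with the lowest combined squared error.
    best = None
    for k in range(2, len(xs) - 1):
        _, _, sse_l = ols(xs[:k], ys[:k])
        _, _, sse_r = ols(xs[k:], ys[k:])
        if best is None or sse_l + sse_r < best[0]:
            best = (sse_l + sse_r, xs[k])
    return best[1]

# Toy response-time pattern: rises across short time frames, then
# flattens for longer ones (the kind of shift the study tested for).
frames = [1, 2, 3, 4, 5, 6, 7, 8]
rts = [1.0, 2.0, 3.0, 4.0, 4.1, 4.1, 4.2, 4.2]
bp = best_breakpoint(frames, rts)
```

A flattening of the slope after the breakpoint would be the signature of a shift to semantic retrieval; the study's finding was that real response times showed no such consistent flattening.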