scispace - formally typeset
Author

Mozhgan Shahmohammadi

Bio: Mozhgan Shahmohammadi is an academic researcher from Islamic Azad University. The author has contributed to research in topics: Feature (machine learning) & Decoding methods. The author has an h-index of 1, co-authored 2 publications receiving 1 citation. Previous affiliations of Mozhgan Shahmohammadi include Islamic Azad University Central Tehran Branch.

Papers
Journal ArticleDOI
TL;DR: In this article, the authors compare the information content of 31 variability-sensitive features against the mean of activity, using three independent, highly varied data sets, and conclude that the brain might encode information simultaneously in multiple aspects of neural variability, such as phase, amplitude, and frequency, rather than in the mean per se.
Abstract: How does the human brain encode visual object categories? Our understanding of this has advanced substantially with the development of multivariate decoding analyses. However, conventional electroencephalography (EEG) decoding predominantly uses the mean neural activation within the analysis window to extract category information. Such temporal averaging overlooks the within-trial neural variability that is suggested to provide an additional channel for encoding information about the complexity and uncertainty of the sensory input. The richness of these temporal variabilities, however, has not been systematically compared with the conventional mean activity. Here we compare the information content of 31 variability-sensitive features against the mean of activity, using three independent, highly varied data sets. In whole-trial decoding, the classical event-related potential (ERP) components P2a and P2b provided information comparable to that provided by the original magnitude data (OMD) and wavelet coefficients (WC), the two most informative variability-sensitive features. In time-resolved decoding, the OMD and WC outperformed all the other features (including the mean), which were sensitive only to limited and specific aspects of temporal variability, such as phase or frequency. The information was more pronounced in the theta frequency band, previously suggested to support feedforward visual processing. We conclude that the brain might encode information simultaneously in multiple aspects of neural variability, such as phase, amplitude, and frequency, rather than in the mean per se. In our active categorization data set, we found that more effective decoding of the neural codes corresponded to better prediction of behavioral performance. Therefore, incorporating temporal variabilities in time-resolved decoding can provide additional category information and improved prediction of behavior.
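The contrast between mean-based and variability-sensitive decoding can be illustrated with a toy simulation. The sketch below uses synthetic single-channel data, a one-level Haar transform, and a leave-one-out nearest-centroid classifier, all illustrative stand-ins for the paper's multichannel EEG, full wavelet decomposition, and decoding pipeline. It builds a signal whose category information lives in the waveform rather than the mean, so mean-based decoding stays near chance while wavelet-coefficient decoding succeeds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic EEG-like data: trials x timepoints, two object categories.
# (Hypothetical toy data; the paper's data sets are multichannel EEG.)
n_trials, n_time = 100, 64
labels = np.repeat([0, 1], n_trials // 2)
signals = rng.standard_normal((n_trials, n_time))
# Category information carried in a temporal pattern, not the mean:
t = np.linspace(0, 2 * np.pi, n_time)
signals[labels == 1] += np.sin(4 * t)  # near-zero mean, distinct waveform

def mean_feature(x):
    # Conventional decoding feature: mean activity in the window.
    return x.mean(axis=1, keepdims=True)

def haar_feature(x):
    # Variability-sensitive feature: one level of Haar wavelet
    # coefficients (a stand-in for a full wavelet decomposition).
    even, odd = x[:, ::2], x[:, 1::2]
    return np.hstack([(even + odd) / 2, (even - odd) / 2])

def decode_accuracy(feats, labels):
    # Leave-one-out nearest-centroid classification.
    correct = 0
    for i in range(len(labels)):
        mask = np.arange(len(labels)) != i
        c0 = feats[mask & (labels == 0)].mean(axis=0)
        c1 = feats[mask & (labels == 1)].mean(axis=0)
        pred = int(np.linalg.norm(feats[i] - c1) < np.linalg.norm(feats[i] - c0))
        correct += pred == labels[i]
    return correct / len(labels)

acc_mean = decode_accuracy(mean_feature(signals), labels)
acc_wav = decode_accuracy(haar_feature(signals), labels)
print(f"mean-feature accuracy:    {acc_mean:.2f}")
print(f"wavelet-feature accuracy: {acc_wav:.2f}")
```

Because the two categories share the same mean, only the feature that preserves within-trial temporal structure separates them, mirroring the paper's point that temporal averaging can discard category information.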

3 citations

Posted ContentDOI
06 Dec 2020-bioRxiv
TL;DR: The results of this study can constrain previous theories about how the brain codes object category information and characterize the most informative aspects of brain activations about object categories.
Abstract: In order to develop object recognition algorithms that can approach human-level recognition performance, researchers have been studying how the human brain performs recognition for the past five decades. This has already inspired AI-based object recognition algorithms, such as convolutional neural networks, which are among the most successful object recognition platforms today and can approach human performance in specific tasks. However, it is not yet clearly known how recorded brain activations convey information about object category processing. One main obstacle has been the lack of large feature sets with which to evaluate the information contents of multiple aspects of neural activations. Here, we compared the information contents of a large set of 25 features, extracted from time series of electroencephalography (EEG) recorded from human participants performing an object recognition task. This allowed us to characterize the most informative aspects of brain activations about object categories. Among the evaluated features, the event-related potential (ERP) components N1 and P2a were the most informative, with the highest information in the theta frequency band. Upon limiting the analysis time window, we observed more information for features that detect temporally informative patterns in the signals. The results of this study can constrain previous theories about how the brain codes object category information.
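As a rough illustration of what such a feature set looks like in practice, the sketch below computes a handful of toy features from one synthetic single-channel epoch. The sampling rate, window latencies, and feature definitions here are illustrative assumptions, not the paper's actual 25-feature specification:

```python
import numpy as np

FS = 1000  # sampling rate in Hz (an assumed value for illustration)

def extract_features(epoch, fs=FS):
    # A toy feature dictionary in the spirit of the paper's feature set.
    t = np.arange(epoch.size) / fs  # time from stimulus onset, seconds
    freqs = np.fft.rfftfreq(epoch.size, 1 / fs)
    power = np.abs(np.fft.rfft(epoch)) ** 2
    return {
        "mean": epoch.mean(),            # the conventional decoding feature
        "variance": epoch.var(),         # within-trial variability
        "zero_crossings": int(np.sum(np.diff(np.sign(epoch)) != 0)),
        # ERP-style window averages (latency windows are assumptions):
        "N1": epoch[(t >= 0.10) & (t < 0.15)].mean(),
        "P2a": epoch[(t >= 0.15) & (t < 0.20)].mean(),
        # Theta-band (4-8 Hz) power, the band highlighted as informative:
        "theta_power": power[(freqs >= 4) & (freqs <= 8)].sum(),
    }

rng = np.random.default_rng(1)
epoch = rng.standard_normal(500)  # one 500 ms single-channel epoch
features = extract_features(epoch)
for name, value in features.items():
    print(f"{name:>14}: {value:.3f}")
```

In the study's framework, each such per-trial feature vector would then be fed to a classifier, and the features' decoding accuracies compared.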

Cited by
Journal ArticleDOI
TL;DR: In this article, two variants of RCA are considered, model-free and model-based; in model-free RCA, the goal is to quantify the shared information for two brain regions.
Abstract: Brain connectivity analyses have conventionally relied on statistical relationship between one-dimensional summaries of activation in different brain areas. However, summarizing activation patterns within each area to a single dimension ignores the potential statistical dependencies between their multi-dimensional activity patterns. Representational Connectivity Analyses (RCA) is a method that quantifies the relationship between multi-dimensional patterns of activity without reducing the dimensionality of the data. We consider two variants of RCA. In model-free RCA, the goal is to quantify the shared information for two brain regions. In model-based RCA, one tests whether two regions have shared information about a specific aspect of the stimuli/task, as defined by a model. However, this is a new approach and the potential caveats of model-free and model-based RCA are still understudied. We first explain how model-based RCA detects connectivity through the lens of models, and then present three scenarios where model-based and model-free RCA give discrepant results. These conflicting results complicate the interpretation of functional connectivity. We highlight the challenges in three scenarios: complex intermediate models, common patterns across regions, and transformation of representational structure across brain regions. The article is accompanied by scripts (https://osf.io/3nxfa/) that reproduce the results. In each case, we suggest potential ways to mitigate the difficulties caused by inconsistent results. The results of this study shed light on some understudied aspects of RCA, and allow researchers to use the method more effectively.
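The core of model-free RCA, correlating the representational dissimilarity matrices (RDMs) of two regions, can be sketched as follows. The synthetic data, the Pearson-based RDMs, and the simple product form used for the model-based variant are illustrative assumptions, not the authors' exact formulation (their scripts are at https://osf.io/3nxfa/):

```python
import numpy as np

def rdm(patterns):
    # Representational dissimilarity matrix: 1 - Pearson correlation
    # between the multi-channel patterns of each pair of conditions.
    return 1.0 - np.corrcoef(patterns)

def upper(mat):
    # Vectorize the upper triangle (the unique pairwise dissimilarities).
    i, j = np.triu_indices_from(mat, k=1)
    return mat[i, j]

def model_free_rca(region_a, region_b):
    # Model-free RCA: correlate the two regions' RDMs directly.
    return np.corrcoef(upper(rdm(region_a)), upper(rdm(region_b)))[0, 1]

def model_based_rca(region_a, region_b, model_rdm):
    # Model-based RCA (a simplified variant): shared information about
    # the model, taken here as the product of each region's correlation
    # with the model RDM. Illustrative only, not the paper's definition.
    m = upper(model_rdm)
    return (np.corrcoef(upper(rdm(region_a)), m)[0, 1]
            * np.corrcoef(upper(rdm(region_b)), m)[0, 1])

rng = np.random.default_rng(2)
n_conditions, n_channels = 10, 30
shared = rng.standard_normal((n_conditions, n_channels))
# Two regions whose patterns share structure plus independent noise:
region_a = shared + 0.3 * rng.standard_normal((n_conditions, n_channels))
region_b = shared + 0.3 * rng.standard_normal((n_conditions, n_channels))
mf = model_free_rca(region_a, region_b)
print(f"model-free RCA:  {mf:.2f}")
print(f"model-based RCA: {model_based_rca(region_a, region_b, rdm(shared)):.2f}")
```

The paper's three problem scenarios arise precisely where these two quantities diverge, e.g. when a pattern common to both regions inflates the model-free estimate without carrying model-relevant information.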

3 citations

Journal ArticleDOI
TL;DR: In this article, the authors combined 30 features using state-of-the-art supervised and unsupervised feature selection procedures (n = 17) across three datasets, and compared decoding of visual object category between these 17 sets of combined features, and between combined and individual features.
Abstract: Neural codes are reflected in complex neural activation patterns. Conventional electroencephalography (EEG) decoding analyses summarize activations by averaging/down-sampling signals within the analysis window. This diminishes informative fine-grained patterns. While previous studies have proposed distinct statistical features capable of capturing variability-dependent neural codes, it has been suggested that the brain could use a combination of encoding protocols not reflected in any one mathematical feature alone. To test this, we combined 30 features using state-of-the-art supervised and unsupervised feature selection procedures (n = 17). Across three datasets, we compared decoding of visual object category between these 17 sets of combined features, and between combined and individual features. Object category could be robustly decoded using the combined features from all of the 17 algorithms. However, the combined features, which were equalized in dimension to the individual features, were outperformed across most of the time points by the multiscale feature of Wavelet coefficients. Moreover, the Wavelet coefficients also explained behavioral performance more accurately than the combined features. These results suggest that a single but multiscale encoding protocol may capture the EEG neural codes better than any combination of protocols. Our findings put new constraints on models of neural information encoding in EEG.
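A minimal sketch of the dimension-equalization step described above, assuming PCA as the unsupervised reduction (one of many possible schemes; the feature names and sizes here are hypothetical):

```python
import numpy as np

def pca_reduce(features, n_components):
    # Equalize dimensionality: project the combined feature matrix onto
    # its top principal components, so the combined set can be compared
    # against an individual feature without a dimensionality confound.
    centered = features - features.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

rng = np.random.default_rng(3)
n_trials = 40
# Hypothetical per-trial feature blocks (e.g. mean, variance, wavelet
# coefficients), concatenated into one combined feature matrix:
mean_feat = rng.standard_normal((n_trials, 1))
var_feat = rng.standard_normal((n_trials, 1))
wavelet_feat = rng.standard_normal((n_trials, 16))
combined = np.hstack([mean_feat, var_feat, wavelet_feat])  # 18-D

# Reduce the combination to the dimensionality of the wavelet feature
# alone before running the same classifier on both.
equalized = pca_reduce(combined, n_components=16)
print(combined.shape, "->", equalized.shape)
```

Without this equalization, a higher-dimensional combined feature could appear to decode better simply because it feeds the classifier more inputs, which is the confound the paper controls for.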

Posted ContentDOI
11 Aug 2021-bioRxiv
TL;DR: In this article, the authors consider two variants of RCA, model-free and model-based, and highlight the challenges in three scenarios: complex intermediate models, common patterns across regions and transformation of representational structure across brain regions.
Abstract: Brain connectivity analyses have conventionally relied on statistical relationship between one-dimensional summaries of activation in different brain areas. However, summarising activation patterns within each area to a single dimension ignores the potential statistical dependencies between their multi-dimensional activity patterns. Representational Connectivity Analyses (RCA) is a method that quantifies the relationship between multi-dimensional patterns of activity without reducing the dimensionality of the data. We consider two variants of RCA. In model-free RCA, the goal is to quantify the shared information for two brain regions. In model-based RCA, one tests whether two regions have shared information about a specific aspect of the stimuli/task, as defined by a model. However, this is a new approach and the potential caveats of model-free and model-based RCA are still understudied. We first explain how model-based RCA detects connectivity through the lens of models, and then present three scenarios where model-based and model-free RCA give discrepant results. These conflicting results complicate the interpretation of functional connectivity. We highlight the challenges in three scenarios: complex intermediate models, common patterns across regions and transformation of representational structure across brain regions. The paper is accompanied by scripts that reproduce the results. In each case, we suggest potential ways to mitigate the difficulties caused by inconsistent results. The results of this study shed light on some understudied aspects of RCA, and allow researchers to use the method more effectively.