
Showing papers by "Russell A. Poldrack published in 2023"


Posted Content (DOI)
15 Feb 2023 - bioRxiv
TL;DR: In this article, the authors propose a simple approach that includes response times (RTs) in the fMRI time series model, separating condition differences from RT differences while retaining power to detect unconfounded condition differences and allowing identification of RT-related activation.
Abstract: The functional MRI (fMRI) signal is a proxy for an unobservable neuronal signal, and differences in fMRI signals on cognitive tasks are generally interpreted as reflecting differences in the intensity of local neuronal activity. However, changes in either intensity or duration of neuronal activity can yield identical differences in fMRI signals. When conditions differ in response times (RTs), it is thus impossible to determine whether condition differences in fMRI signals are due to differences in the intensity of neuronal activity or to potentially spurious differences in the duration of neuronal activity. The most common fMRI analysis approach ignores RTs, making it difficult to interpret condition differences that could be driven by RTs and/or intensity. Because RT differences are one of the most important signals of interest for cognitive psychology, nearly every task used in fMRI exhibits RT differences across conditions of interest. This results in a paradox, wherein the signal of interest for the psychologist is a potential confound for the fMRI researcher. We review this longstanding problem and demonstrate that the failure to address RTs in the fMRI time series model can also lead to spurious correlations at the group level related to RTs or other variables of interest, potentially impacting the interpretation of brain-behavior correlations. We propose a simple approach that remedies this problem by including RT in the fMRI time series model. This model separates condition differences from RT differences, retaining power for detection of unconfounded condition differences while also allowing the identification of RT-related activation. We conclude by highlighting the need for further theoretical development regarding the interpretation of fMRI signals and their relationship to response times.
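
Code sketch: a minimal illustration of an RT-inclusive design, assuming nilearn's first-level GLM tools; the TR, trial onsets, condition labels, and RTs below are hypothetical placeholders, not values from the paper. Condition regressors use a fixed duration, while an added regressor uses each trial's RT as its duration, so condition effects and RT-related activation are modeled separately.

    # Sketch: fMRI design matrix with an RT regressor (assumes nilearn and pandas).
    import numpy as np
    import pandas as pd
    from nilearn.glm.first_level import make_first_level_design_matrix

    t_r = 2.0                              # hypothetical repetition time (s)
    frame_times = np.arange(200) * t_r     # 200 volumes

    # Hypothetical trial onsets (s), condition labels, and response times (s).
    onsets = np.array([10.0, 40.0, 70.0, 100.0, 130.0, 160.0])
    conditions = ["A", "B", "A", "B", "A", "B"]
    rts = np.array([0.62, 0.85, 0.58, 0.91, 0.66, 0.88])

    # Condition regressors with a fixed nominal duration.
    events = pd.DataFrame({"onset": onsets, "duration": 1.0,
                           "trial_type": conditions})
    # RT regressor: one event per trial whose duration equals the observed RT.
    rt_events = pd.DataFrame({"onset": onsets, "duration": rts,
                              "trial_type": "rt"})

    design = make_first_level_design_matrix(
        frame_times, events=pd.concat([events, rt_events]), hrf_model="glover")
    print(design.columns.tolist())  # ['A', 'B', 'rt', drift terms..., 'constant']

With a design like this, a contrast on the condition columns tests for intensity differences not confounded by RT, while the 'rt' column captures duration-related signal.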

4 citations


Journal Article (DOI)
TL;DR: In this paper, the authors report several experiments using GPT-4 to generate computer code, demonstrating that AI code generation with the current generation of tools, while powerful, requires substantial human validation to ensure accurate performance.
Abstract: Artificial intelligence (AI) tools based on large language models have achieved human-level performance on some computer programming tasks. We report several experiments using GPT-4 to generate computer code. These experiments demonstrate that AI code generation using the current generation of tools, while powerful, requires substantial human validation to ensure accurate performance. We also demonstrate that GPT-4 refactoring of existing code can significantly improve that code along several established metrics for code quality, and we show that GPT-4 can generate tests with substantial coverage, but that many of the tests fail when applied to the associated code. These findings suggest that while AI coding tools are very powerful, they still require humans in the loop to ensure validity and accuracy of the results.
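
A minimal sketch of the human-in-the-loop workflow this paper argues for, assuming the openai Python client; the prompt, model name, file names, and test command are illustrative assumptions, not the authors' experimental protocol.

    # Sketch: generate code with GPT-4, then validate it with human-written tests.
    import subprocess
    from openai import OpenAI  # assumes the openai Python package is installed

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    prompt = "Write a Python function median(xs) that returns the median of a list."
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    generated = response.choices[0].message.content

    # Do not trust model output blindly: save it and run independent tests.
    with open("generated_median.py", "w") as f:
        f.write(generated)
    result = subprocess.run(["pytest", "test_median.py"],
                            capture_output=True, text=True)
    print(result.stdout)  # failures here are the human-validation signal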

3 citations



Journal Article (DOI)
TL;DR: The Workgroup for HArmonized Taxonomy of NETworks (WHATNET), described in this paper, was formed as an Organization for Human Brain Mapping (OHBM)-endorsed best practices committee to provide recommendations on points of consensus, identify open questions, and highlight areas of ongoing debate in the service of moving the field towards standardized reporting of network neuroscience results.
Abstract: Progress in scientific disciplines is accompanied by standardization of terminology. Network neuroscience, at the level of macro-scale organization of the brain, is beginning to confront the challenges associated with developing a taxonomy of its fundamental explanatory constructs. The Workgroup for HArmonized Taxonomy of NETworks (WHATNET) was formed in 2020 as an Organization for Human Brain Mapping (OHBM)-endorsed best practices committee to provide recommendations on points of consensus, identify open questions, and highlight areas of ongoing debate in the service of moving the field towards standardized reporting of network neuroscience results. The committee conducted a survey to catalog current practices in large-scale brain network nomenclature. A few well-known network names (e.g., default mode network) dominated responses to the survey, and a number of illuminating points of disagreement emerged. We summarize survey results and provide initial considerations and recommendations from the workgroup. This perspective piece includes a selective review of challenges to this enterprise, including 1) network scale, resolution, and hierarchies; 2) inter-individual variability of networks; 3) dynamics and non-stationarity of networks; 4) consideration of network affiliations of subcortical structures; and 5) consideration of multi-modal information. We close with minimal reporting guidelines for the cognitive and network neuroscience communities to adopt.

1 citation


Posted Content (DOI)
29 May 2023 - bioRxiv
TL;DR: In this article, the authors use simulations to demonstrate that condition-specific changes in global activation can inflate correlation-based estimates of pattern similarity, and that common statistical corrections do not always remove this confound.
Abstract: Pattern similarity analysis, which uses correlation to examine similarities between neural activation patterns evoked by different trials or conditions, is often leveraged to test hypotheses not easily answerable with univariate comparisons, such as how events are represented or processed and the relationships between representations or processing of events. In principle, univariate analyses of global activation and multivariate analyses of pattern similarity can be used to answer substantively different questions about psychological and neural processing. For this to hold, it is necessary that pattern similarity estimates are not contaminated by differences in global activation across experimental events. Here, we report simulated data that demonstrate that global activation and pattern similarity (as assessed by correlation), although theoretically independent, are often intertwined. We present two plausible scenarios that illustrate how condition-specific changes in global activation can elicit condition-specific increases in pattern similarity by interacting with underlying across-voxel activation patterns. First, we consider a scenario in which a target region contains subpopulations of voxels such that only some voxels in a region are sensitive to a psychological variable and the remaining voxels are not modulated by this variable. In this scenario, this spatial pattern of responsive and unresponsive voxels adds new, shared across-voxel variability for events in the ‘active’ condition, thereby increasing pattern similarity between these events. Second, we consider a scenario in which trials from all conditions elicit a shared across-voxel pattern of activation, but this shared across-voxel pattern is amplified for trials within one condition due to greater global activation. In this scenario, the change in activation for a given condition increases the ability to detect pre-existing, shared across-voxel variability across events in that condition, thereby increasing pattern similarity between these events. Given the observed influence of global activation on pattern similarity, we then assess whether it is possible to statistically separate the contributions of global activation and pattern similarity to observed activation patterns (using regression approaches, matching activation across conditions, and inclusion of control conditions). Additional simulations demonstrate that use of these techniques is not always effective in removing the influence of global activation on pattern similarity: the efficacy of these techniques depends on a variety of signal parameters that will likely vary across experiments and participants, highlighting the need for tailored control analyses that are targeted at addressing the particular hypotheses and potential global activation confounds of a given experiment. [Note: The reported simulations and this resulting white paper were generated in 2014. We share, without update, this paper given the continued relevance of understanding and controlling for global activation confounds when conducting multivariate pattern analyses.]
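
A minimal simulation sketch of the second scenario described above, with hypothetical parameters: trials in two conditions share the same across-voxel pattern, but the pattern is amplified in one condition; within-condition correlations rise even though no new pattern information is added.

    # Sketch: global activation inflating correlation-based pattern similarity.
    import numpy as np

    rng = np.random.default_rng(0)
    n_voxels, n_trials = 200, 30
    shared = rng.normal(size=n_voxels)   # across-voxel pattern shared by all trials

    def simulate(gain):
        # Each trial = gain * shared pattern + independent voxel noise.
        return gain * shared + rng.normal(scale=2.0, size=(n_trials, n_voxels))

    def mean_pairwise_corr(trials):
        corr = np.corrcoef(trials)       # trial-by-trial correlation matrix
        return corr[np.triu_indices(len(trials), k=1)].mean()

    low = simulate(gain=0.5)    # baseline condition: weak shared signal
    high = simulate(gain=2.0)   # 'active' condition: amplified shared signal

    print("low-activation similarity:", round(mean_pairwise_corr(low), 3))
    print("high-activation similarity:", round(mean_pairwise_corr(high), 3))
    # The amplified condition shows higher pattern similarity despite identical
    # underlying pattern structure, illustrating the confound.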

Journal Article (DOI)
TL;DR: In this paper, the authors compare four intracranial neuroelectrophysiology data archives and feature the Reproducible Analysis and Visualization of Intracranial EEG (RAVE) toolkit, developed specifically for the analysis of intracranial signal data and integrated with the discussed standards and archives.
Abstract: As data sharing has become more prevalent, three pillars - archives, standards, and analysis tools - have emerged as critical components in facilitating effective data sharing and collaboration. This paper compares four freely available intracranial neuroelectrophysiology data repositories: Data Archive for the BRAIN Initiative (DABI), Distributed Archives for Neurophysiology Data Integration (DANDI), OpenNeuro, and Brain-CODE. These archives provide researchers with tools to store, share, and reanalyze neurophysiology data, though the means of accomplishing these objectives differ. The Brain Imaging Data Structure (BIDS) and Neurodata Without Borders (NWB) are utilized by these archives to make data more accessible to researchers by implementing a common standard. While many tools are available to reanalyze data on and off the archives' platforms, this article features the Reproducible Analysis and Visualization of Intracranial EEG (RAVE) toolkit, developed specifically for the analysis of intracranial signal data and integrated with the discussed standards and archives. Neuroelectrophysiology data archives improve how researchers can aggregate, analyze, distribute, and parse these data, which can lead to more significant findings in neuroscience research.
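
A minimal sketch of reading a dataset stored in the NWB standard mentioned above, assuming the pynwb package; the file name is a placeholder for a file obtained from an archive such as DANDI or OpenNeuro.

    # Sketch: loading a Neurodata Without Borders (NWB) file (assumes pynwb).
    from pynwb import NWBHDF5IO

    path = "sub-01_ses-01_ieeg.nwb"  # hypothetical downloaded file

    with NWBHDF5IO(path, mode="r") as io:
        nwbfile = io.read()
        print(nwbfile.session_description)  # free-text session metadata
        # List acquired time series and their sampling rates, if present.
        for name, acq in nwbfile.acquisition.items():
            print(name, getattr(acq, "rate", None))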

Journal Article (DOI)
TL;DR: In this article, the authors address the issue of cold data storage: when to move data to offline storage, how this can be done while maintaining FAIR principles, and who should be responsible for cold archiving and long-term preservation.
Abstract: Accessing research data at any time is what FAIR (Findable, Accessible, Interoperable, Reusable) data sharing aims to achieve at scale. Yet, we argue that it is not sustainable to keep accumulating and maintaining all datasets for rapid access, considering the monetary and ecological cost of maintaining repositories. Here, we address the issue of cold data storage: when to move data to offline storage, how this can be done while maintaining FAIR principles, and who should be responsible for cold archiving and long-term preservation.

Journal Article (DOI)
TL;DR: In this article, the authors benchmark prominent explanation methods in mental state decoding analyses of multiple functional magnetic resonance imaging (fMRI) datasets and provide guidance for neuroimaging researchers on how to choose an explanation method to gain insight into the mental state decoding decisions of deep learning (DL) models.
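
A minimal sketch of one widely used explanation method, integrated gradients, assuming the captum library and a hypothetical PyTorch decoding model; the architecture, input size, and target class are illustrative, not the models benchmarked in the paper.

    # Sketch: attributing a decoding decision with integrated gradients (assumes captum).
    import torch
    import torch.nn as nn
    from captum.attr import IntegratedGradients

    # Hypothetical stand-in for a trained mental state decoding model.
    model = nn.Sequential(nn.Linear(1000, 64), nn.ReLU(), nn.Linear(64, 5))
    model.eval()

    x = torch.randn(1, 1000)        # one flattened input of 1,000 voxels
    ig = IntegratedGradients(model)
    # Per-voxel contribution to the logit for (hypothetical) class 2.
    attributions = ig.attribute(x, target=2)
    print(attributions.shape)       # torch.Size([1, 1000])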


Journal Article (DOI)
TL;DR: The brainlife.io platform, described in this paper, provides open-source data standardization, management, visualization, and processing, simplifies the data pipeline, and automatically tracks the provenance history of thousands of data objects, supporting simplicity, efficiency, and transparency in neuroscience research.
Abstract: Neuroscience research has expanded dramatically over the past 30 years by advancing standardization and tool development to support rigor and transparency. Consequently, the complexity of the data pipeline has also increased, hindering access to FAIR (Findable, Accessible, Interoperable, and Reusable) data analysis for portions of the worldwide research community. brainlife.io was developed to reduce these burdens and democratize modern neuroscience research across institutions and career levels. Using community software and hardware infrastructure, the platform provides open-source data standardization, management, visualization, and processing, and simplifies the data pipeline. brainlife.io automatically tracks the provenance history of thousands of data objects, supporting simplicity, efficiency, and transparency in neuroscience research. Here brainlife.io's technology and data services are described and evaluated for validity, reliability, reproducibility, replicability, and scientific utility. Using data from 4 modalities and 3,200 participants, we demonstrate that brainlife.io's services produce outputs that adhere to best practices in modern neuroscience research.