scispace - formally typeset
Author

Russell A. Poldrack

Bio: Russell A. Poldrack is an academic researcher from Stanford University. The author has contributed to research topics including Cognition and Functional magnetic resonance imaging. The author has an h-index of 125, has co-authored 452 publications, and has received 58,695 citations. Previous affiliations of Russell A. Poldrack include the University of Illinois at Urbana–Champaign and the University of Texas at Austin.


Papers
Journal ArticleDOI
TL;DR: Advances in human lesion-mapping support the functional localization of such inhibition to right IFC alone, and future research should investigate the generality of this proposed inhibitory function to other task domains, and its interaction within a wider network.

2,920 citations

Journal ArticleDOI
TL;DR: An automated brain-mapping framework that uses text-mining, meta-analysis and machine-learning techniques to generate a large database of mappings between neural and cognitive states is described and validated.
Abstract: The rapid growth of the literature on neuroimaging in humans has led to major advances in our understanding of human brain function but has also made it increasingly difficult to aggregate and synthesize neuroimaging findings. Here we describe and validate an automated brain-mapping framework that uses text-mining, meta-analysis and machine-learning techniques to generate a large database of mappings between neural and cognitive states. We show that our approach can be used to automatically conduct large-scale, high-quality neuroimaging meta-analyses, address long-standing inferential problems in the neuroimaging literature and support accurate 'decoding' of broad cognitive states from brain activity in both entire studies and individual human subjects. Collectively, our results have validated a powerful and generative framework for synthesizing human neuroimaging data on an unprecedented scale.
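The decoding step described above can be illustrated with a toy sketch: given per-term activation frequencies across studies (a stand-in for the text-mined database the abstract describes), rank which cognitive state best explains a new activation pattern. The terms, regions, and frequencies below are hypothetical, and the naive-Bayes scoring is a simplification of the framework's actual method.

```python
# Toy sketch of meta-analytic "decoding" of cognitive states from activation.
# All region names and probabilities are hypothetical illustrations.
import math

# P(region active | studies mentioning term) -- invented numbers
term_profiles = {
    "working memory": {"dlpfc": 0.7, "parietal": 0.6, "amygdala": 0.1},
    "fear":           {"dlpfc": 0.2, "parietal": 0.1, "amygdala": 0.8},
}

def decode(observed_active, profiles):
    """Rank terms by naive-Bayes log-likelihood of the observed map."""
    scores = {}
    for term, probs in profiles.items():
        ll = 0.0
        for region, p in probs.items():
            ll += math.log(p if region in observed_active else 1.0 - p)
        scores[term] = ll
    return max(scores, key=scores.get)

print(decode({"dlpfc", "parietal"}, term_profiles))  # -> "working memory"
```

The same lookup run on an amygdala-dominant map would instead favor "fear", which is the sense in which a large database of term-to-activation mappings supports decoding of broad cognitive states.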

2,853 citations

Journal ArticleDOI
TL;DR: It is argued that cognitive neuroscientists should be circumspect in the use of reverse inference, particularly when selectivity of the region in question cannot be established or is known to be weak.
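The role of selectivity in reverse inference can be made concrete with Bayes' rule: the probability of a cognitive process given activation depends on how often the region activates in the absence of that process. The numbers below are hypothetical illustrations, not estimates from the paper.

```python
# Sketch of the Bayesian framing of reverse inference.
# All probabilities are hypothetical, chosen only to show the effect of
# region selectivity on the strength of the inference.

def reverse_inference(p_act_given_proc, p_act_given_not_proc, p_proc):
    """P(process | activation) via Bayes' rule."""
    p_not_proc = 1.0 - p_proc
    p_act = p_act_given_proc * p_proc + p_act_given_not_proc * p_not_proc
    return p_act_given_proc * p_proc / p_act

# A selective region (rarely active without the process) supports a
# strong reverse inference; a non-selective region supports a weak one.
selective = reverse_inference(0.8, 0.1, 0.5)      # ~0.89
nonselective = reverse_inference(0.8, 0.7, 0.5)   # ~0.53
print(f"selective: {selective:.2f}, non-selective: {nonselective:.2f}")
```

When the region's selectivity is weak, the posterior barely rises above the prior, which is why the TL;DR urges circumspection in exactly that case.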

1,802 citations

Journal ArticleDOI
TL;DR: It is proposed that the rIFC (along with one or more fronto-basal-ganglia networks) is best characterized as a brake, and this brake can be turned on in different modes and in different contexts.

1,568 citations

Journal ArticleDOI
TL;DR: Results provide convergent data for a role for the subthalamic nucleus in Stop-signal response inhibition and suggest that the speed of Go and Stop processes could relate to the relative activation of different neural pathways.
Abstract: Suppressing an already initiated manual response depends critically on the right inferior frontal cortex (IFC), yet it is unclear how this inhibitory function is implemented in the motor system. It has been suggested that the subthalamic nucleus (STN), which is a part of the basal ganglia, may play a role because it is well placed to suppress the “direct” fronto-striatal pathway that is activated by response initiation. In two experiments, we investigated this hypothesis with functional magnetic resonance imaging and a Stop-signal task. Subjects responded to Go signals and attempted to inhibit the initiated response to occasional Stop signals. In experiment 1, Going significantly activated frontal, striatal, pallidal, and motor cortical regions, consistent with the direct pathway, whereas Stopping significantly activated right IFC and STN. In addition, Stopping-related activation was significantly greater for fast inhibitors than slow ones in both IFC and STN, and activity in these regions was correlated across subjects. In experiment 2, high-resolution functional and structural imaging confirmed the location of Stopping activation within the vicinity of the STN. We propose that the role of the STN is to suppress thalamocortical output, thereby blocking Go response execution. These results provide convergent data for a role for the STN in Stop-signal response inhibition. They also suggest that the speed of Go and Stop processes could relate to the relative activation of different neural pathways. Future research is required to establish whether Stop-signal inhibition could be implemented via a direct functional neuroanatomic projection between IFC and STN (a “hyperdirect” pathway).
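The speed of the Stop process in this task is conventionally summarized as the stop-signal reaction time (SSRT). A common way to estimate it under the race model is the integration method: take the go-RT distribution at the quantile equal to the failed-stop rate and subtract the mean stop-signal delay. The sketch below uses invented data; it illustrates the standard estimator, not the specific analysis reported in the paper.

```python
# Sketch of SSRT estimation via the race-model "integration" method.
# The RTs and stop-signal delays below are hypothetical.

def ssrt_integration(go_rts, ssds, p_respond_given_stop):
    """SSRT = go RT at the failed-stop-rate quantile, minus mean SSD."""
    rts = sorted(go_rts)
    # index of the quantile matching P(respond | stop signal)
    idx = min(int(p_respond_given_stop * len(rts)), len(rts) - 1)
    nth_rt = rts[idx]
    mean_ssd = sum(ssds) / len(ssds)
    return nth_rt - mean_ssd

go_rts = [400, 420, 450, 480, 500, 520, 550, 580, 600, 650]  # ms
ssds = [200, 250, 300]  # stop-signal delays, ms
print(ssrt_integration(go_rts, ssds, 0.5))  # -> 270.0
```

Shorter SSRTs correspond to the "fast inhibitors" contrasted with slow ones in the abstract's IFC/STN activation comparison.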

1,553 citations


Cited by
28 Jul 2005
TL;DR: PfPMP1 interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.

Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte surface protein 1 (PfPMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion. Each haploid genome encodes roughly 60 members of the var gene family, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Journal ArticleDOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. 
Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
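The mail-filtering example in the fourth category can be sketched as a minimal per-user classifier that learns from messages the user has already labeled. The training messages below are hypothetical, and naive Bayes is one simple choice of learning method, not the only one the abstract envisions.

```python
# Sketch of a per-user mail filter learned from labeled examples.
# Training data are hypothetical; naive Bayes with Laplace smoothing.
import math
from collections import Counter

class SpamFilter:
    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, words, label):
        self.counts[label].update(words)
        self.totals[label] += 1

    def predict(self, words):
        vocab = len(set(self.counts["spam"]) | set(self.counts["ham"]))
        scores = {}
        for label in ("spam", "ham"):
            # log class prior plus smoothed word log-likelihoods
            score = math.log(self.totals[label] + 1)
            n = sum(self.counts[label].values())
            for w in words:
                score += math.log((self.counts[label][w] + 1) / (n + vocab))
            scores[label] = score
        return max(scores, key=scores.get)

f = SpamFilter()
f.train(["free", "winner", "cash"], "spam")
f.train(["meeting", "tomorrow", "agenda"], "ham")
print(f.predict(["free", "cash", "prize"]))  # -> "spam"
```

Because each user calls train on their own rejected and accepted mail, the filtering rules stay up to date automatically, which is the point the paragraph makes about not needing a software engineer per user.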

13,246 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: Probability distributions and linear models for regression and classification are presented, along with neural networks, kernel methods, graphical models, and a discussion of combining models in the context of machine learning.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

10,141 citations

Journal ArticleDOI
TL;DR: The meaning of the terms "method" and "method bias" is explored, whether method biases influence all measures equally is examined, and the evidence of the effects that method biases have on individual measures and on the covariation between different constructs is reviewed.
Abstract: Despite the concern that has been expressed about potential method biases, and the pervasiveness of research settings with the potential to produce them, there is disagreement about whether they really are a problem for researchers in the behavioral sciences. Therefore, the purpose of this review is to explore the current state of knowledge about method biases. First, we explore the meaning of the terms “method” and “method bias” and then we examine whether method biases influence all measures equally. Next, we review the evidence of the effects that method biases have on individual measures and on the covariation between different constructs. Following this, we evaluate the procedural and statistical remedies that have been used to control method biases and provide recommendations for minimizing method bias.
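One widely used statistical diagnostic in this literature is Harman's single-factor test: if one factor accounts for the majority of the covariance among measures, common method bias is a concern. The sketch below runs a pure-Python power iteration on a hypothetical correlation matrix; it illustrates the diagnostic generically, not the specific remedies the review recommends.

```python
# Sketch of a single-factor diagnostic for common method bias.
# The correlation matrix below is hypothetical survey data.

def largest_eigenvalue(matrix, iters=200):
    """Dominant eigenvalue of a symmetric matrix via power iteration."""
    n = len(matrix)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        v = [x / norm for x in w]
    # Rayleigh quotient at the converged vector
    mv = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
    num = sum(mv[i] * v[i] for i in range(n))
    den = sum(v[i] * v[i] for i in range(n))
    return num / den

# Hypothetical correlations among three self-report items
corr = [
    [1.0, 0.6, 0.5],
    [0.6, 1.0, 0.4],
    [0.5, 0.4, 1.0],
]
share = largest_eigenvalue(corr) / len(corr)  # variance share of factor 1
print(f"first factor explains {share:.0%} of total variance")
```

A high single-factor share is only a warning sign, not proof of bias, which is consistent with the review's emphasis on combining procedural and statistical remedies.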

8,719 citations