scispace - formally typeset
Author

Christopher R. Genovese

Bio: Christopher R. Genovese is an academic researcher from Carnegie Mellon University. The author has contributed to research on the topics of the false discovery rate and the multiple comparisons problem. The author has an h-index of 38 and has co-authored 108 publications receiving 11,196 citations. Previous affiliations of Christopher R. Genovese include Battelle Memorial Institute and the National Science Foundation.


Papers
Journal ArticleDOI
TL;DR: This paper introduces statistical procedures for controlling the false discovery rate (FDR) to the neuroscience literature and demonstrates the approach using both simulations and functional magnetic resonance imaging data from two simple experiments.

4,838 citations

Journal ArticleDOI
TL;DR: The findings suggest that efficient top-down modulation of reflexive acts may not be fully developed until adulthood, and provide evidence that maturation of function across widely distributed brain regions lays the groundwork for enhanced voluntary control of behavior during cognitive development.

700 citations

Journal ArticleDOI
TL;DR: In this article, the authors investigate the operating characteristics of the Benjamini-Hochberg false discovery rate procedure for multiple testing, which is a distribution-free method that controls the expected fraction of falsely rejected null hypotheses among those rejected.
Abstract: We investigate the operating characteristics of the Benjamini–Hochberg false discovery rate procedure for multiple testing. This is a distribution-free method that controls the expected fraction of falsely rejected null hypotheses among those rejected. The paper provides a framework for understanding more about this procedure. We first study the asymptotic properties of the ‘deciding point’ D that determines the critical p-value. From this, we obtain explicit asymptotic expressions for a particular risk function. We introduce the dual notion of false non-rejections and we consider a risk function that combines the false discovery rate and false non-rejections. We also consider the optimal procedure with respect to a measure of conditional risk.

603 citations
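The step-up procedure analyzed in this paper is short enough to state in code. Below is a minimal pure-Python sketch of the Benjamini–Hochberg procedure; the function name and the toy p-values are illustrative, not from the paper.

```python
def benjamini_hochberg(pvalues, alpha=0.05):
    """Return the sorted indices of hypotheses rejected at FDR level alpha."""
    m = len(pvalues)
    # Sort p-values, remembering original positions.
    order = sorted(range(m), key=lambda i: pvalues[i])
    # Find the largest rank k with p_(k) <= (k / m) * alpha -- the
    # "deciding point" that determines the critical p-value.
    k_max = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank / m * alpha:
            k_max = rank
    # Step up: reject the k_max smallest p-values.
    return sorted(order[:k_max])

pvals = [0.001, 0.008, 0.012, 0.016, 0.04, 0.06, 0.074, 0.2, 0.3, 0.9]
print(benjamini_hochberg(pvals))  # rejects the four smallest p-values
```

Note the step-up character: a p-value is rejected whenever *some* larger p-value sits below its own threshold, which is why the search keeps the largest qualifying rank rather than stopping at the first failure.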

Journal ArticleDOI
TL;DR: A Bayesian method for fitting curves to data drawn from an exponential family, using splines for which the number and locations of knots are free parameters; the method performs well and is illustrated in two neuroscience applications.
Abstract: We describe a Bayesian method, for fitting curves to data drawn from an exponential family, that uses splines for which the number and locations of knots are free parameters. The method uses reversible jump Markov chain Monte Carlo to change the knot configurations and a locality heuristic to speed up mixing. For nonnormal models, we approximate the integrated likelihood ratios needed to compute acceptance probabilities by using the Bayesian information criterion, BIC, under priors that make this approximation accurate. Our technique is based on a marginalised chain on the knot number and locations, but we provide methods for inference about the regression coefficients, and functions of them, in both normal and nonnormal models. Simulation results suggest that the method performs well, and we illustrate the method in two neuroscience applications.

444 citations
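The full reversible-jump machinery is beyond a short sketch, but the core idea that the number of knots is itself a free parameter scored via BIC can be illustrated. Everything below is a simplifying assumption, not the paper's method: piecewise-constant fits on equal-width bins stand in for splines, an exhaustive search stands in for the Markov chain, and the step data are synthetic.

```python
import math

def bic_piecewise(x, y, k):
    """BIC of a piecewise-constant fit with k interior knots (k + 1
    equal-width bins), under a Gaussian model:
    BIC = n * log(RSS / n) + (#parameters) * log(n)."""
    n = len(x)
    lo, hi = min(x), max(x)
    bins = [[] for _ in range(k + 1)]
    for xi, yi in zip(x, y):
        j = min(int((xi - lo) / (hi - lo) * (k + 1)), k)
        bins[j].append(yi)
    rss = 0.0
    for b in bins:
        if b:
            mean = sum(b) / len(b)
            rss += sum((v - mean) ** 2 for v in b)
    return n * math.log(rss / n) + (k + 1) * math.log(n)

def select_knot_count(x, y, k_max=6):
    """Treat the knot count as a free parameter and pick the BIC minimizer."""
    return min(range(k_max + 1), key=lambda k: bic_piecewise(x, y, k))

# Synthetic data: a step at x = 0.5 plus small deterministic "noise".
x = [i / 100 for i in range(100)]
y = [(0.0 if xi < 0.5 else 3.0) + 0.1 * math.sin(37 * xi) for xi in x]
print(select_knot_count(x, y))
```

BIC trades fit against complexity here exactly as in the paper's acceptance-probability approximation: adding a knot must reduce the residual term by more than log(n) to be worthwhile.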

Journal ArticleDOI
TL;DR: These findings localize areas in frontal and parietal cortex involved in saccade generation in humans, and indicate significant differences from the macaque monkey in both frontal and parietal cortex.
Abstract: Neurophysiological studies in non-human primates have identified saccade-related neuronal activity in cortical regions including frontal (FEF), supplementary (SEF) and parietal eye fields. Lesion and neuroimaging studies suggest a generally homologous mapping of the oculomotor system in humans; however, a detailed mapping of the precise anatomical location of these functional regions has not yet been achieved. We investigated dorsal frontal and parietal cortex during a saccade task vs. central fixation in 10 adult subjects using functional magnetic resonance imaging (fMRI). The FEF were restricted to the precentral sulcus, and did not extend anteriorly into Brodmann area 8, which has traditionally been viewed as their location in humans. The SEF were located in cortex along the interhemispheric fissure and extended minimally onto the dorsal cortical surface. Parietal activation was seen in precuneus and along the intraparietal sulcus, extending into both superior and inferior parietal lobules. These findings localize areas in frontal and parietal cortex involved in saccade generation in humans, and indicate significant differences from the macaque monkey in both frontal and parietal cortex. These differences may have functional implications for the roles these areas play in visuomotor processes.

377 citations


Cited by
Journal ArticleDOI
TL;DR: SciPy as discussed by the authors is an open source scientific computing library for the Python programming language, which includes functionality spanning clustering, Fourier transforms, integration, interpolation, file I/O, linear algebra, image processing, orthogonal distance regression, minimization algorithms, signal processing, sparse matrix handling, computational geometry, and statistics.
Abstract: SciPy is an open source scientific computing library for the Python programming language. SciPy 1.0 was released in late 2017, about 16 years after the original version 0.1 release. SciPy has become a de facto standard for leveraging scientific algorithms in the Python programming language, with more than 600 unique code contributors, thousands of dependent packages, over 100,000 dependent repositories, and millions of downloads per year. This includes usage of SciPy in almost half of all machine learning projects on GitHub, and usage by high profile projects including LIGO gravitational wave analysis and creation of the first-ever image of a black hole (M87). The library includes functionality spanning clustering, Fourier transforms, integration, interpolation, file I/O, linear algebra, image processing, orthogonal distance regression, minimization algorithms, signal processing, sparse matrix handling, computational geometry, and statistics. In this work, we provide an overview of the capabilities and development practices of the SciPy library and highlight some recent technical developments.

12,774 citations

Book ChapterDOI
TL;DR: This chapter demonstrates the functional importance of dopamine to working memory function in several ways and shows that a network of brain regions, including the prefrontal cortex, is critical for the active maintenance of internal representations.
Abstract: This chapter focuses on the modern notion of short-term memory, called working memory. Working memory refers to the temporary maintenance of information that was just experienced or just retrieved from long-term memory but no longer exists in the external environment. These internal representations are short-lived, but can be maintained for longer periods of time through active rehearsal strategies, and can be subjected to various operations that manipulate the information in such a way that makes it useful for goal-directed behavior. Working memory is a system that is critically important in cognition and seems necessary in the course of performing many other cognitive functions, such as reasoning, language comprehension, planning, and spatial processing. This chapter demonstrates the functional importance of dopamine to working memory function in several ways. Elucidation of the cognitive and neural mechanisms underlying human working memory has been an important focus of cognitive neuroscience and neurology for much of the past decade. One conclusion that arises from this research is that working memory, a faculty that enables temporary storage and manipulation of information in the service of behavioral goals, can be viewed as neither a unitary nor a dedicated system. Data from numerous neuropsychological and neurophysiological studies in animals and humans demonstrate that a network of brain regions, including the prefrontal cortex, is critical for the active maintenance of internal representations.

10,081 citations

Journal ArticleDOI
TL;DR: This work proposes an approach to measuring statistical significance in genomewide studies based on the concept of the false discovery rate, which offers a sensible balance between the number of true and false positives that is automatically calibrated and easily interpreted.
Abstract: With the increase in genomewide experiments and the sequencing of multiple genomes, the analysis of large data sets has become commonplace in biology. It is often the case that thousands of features in a genomewide data set are tested against some null hypothesis, where a number of features are expected to be significant. Here we propose an approach to measuring statistical significance in these genomewide studies based on the concept of the false discovery rate. This approach offers a sensible balance between the number of true and false positives that is automatically calibrated and easily interpreted. In doing so, a measure of statistical significance called the q value is associated with each tested feature. The q value is similar to the well known p value, except it is a measure of significance in terms of the false discovery rate rather than the false positive rate. Our approach avoids a flood of false positive results, while offering a more liberal criterion than what has been used in genome scans for linkage.

9,239 citations
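The q-value described in the abstract can be sketched in a few lines. This sketch makes the conservative simplification pi0 = 1 (the paper estimates the proportion of true null hypotheses from the data, which is omitted here); under that choice the q-value coincides with the Benjamini–Hochberg-adjusted p-value.

```python
import math

def q_values(pvalues):
    """q-values under the conservative choice pi0 = 1: for each feature,
    the smallest FDR level at which it would be called significant."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])
    q = [0.0] * m
    running_min = 1.0
    # Walk from the largest p-value down, taking cumulative minima so
    # that q-values are monotone in the p-values.
    for rank in range(m, 0, -1):
        idx = order[rank - 1]
        running_min = min(running_min, pvalues[idx] * m / rank)
        q[idx] = running_min
    return q

pvals = [0.01, 0.02, 0.03, 0.5]
print(q_values(pvals))
```

Reading off a result is then direct: calling every feature with q <= 0.05 significant controls the false discovery rate at 5%, the calibration property the abstract emphasizes.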

Journal ArticleDOI
TL;DR: This work shows that this seemingly mysterious phenomenon of boosting can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood, and develops more direct approximations and shows that they exhibit nearly identical results to boosting.
Abstract: Boosting is one of the most important recent developments in classification methodology. Boosting works by sequentially applying a classification algorithm to reweighted versions of the training data and then taking a weighted majority vote of the sequence of classifiers thus produced. For many classification algorithms, this simple strategy results in dramatic improvements in performance. We show that this seemingly mysterious phenomenon can be understood in terms of well-known statistical principles, namely additive modeling and maximum likelihood. For the two-class problem, boosting can be viewed as an approximation to additive modeling on the logistic scale using maximum Bernoulli likelihood as a criterion. We develop more direct approximations and show that they exhibit nearly identical results to boosting. Direct multiclass generalizations based on multinomial likelihood are derived that exhibit performance comparable to other recently proposed multiclass generalizations of boosting in most situations, and far superior in some. We suggest a minor modification to boosting that can reduce computation, often by factors of 10 to 50. Finally, we apply these insights to produce an alternative formulation of boosting decision trees. This approach, based on best-first truncated tree induction, often leads to better performance, and can provide interpretable descriptions of the aggregate decision rule. It is also much faster computationally, making it more suitable to large-scale data mining applications.

6,598 citations
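The additive-model view in the abstract is concrete in code: each boosting round adds one weighted base classifier to a growing sum. Below is a minimal pure-Python sketch of discrete AdaBoost with decision stumps on toy 1-D data; the function names and data are illustrative assumptions, not from the paper, whose contribution is the statistical interpretation of this procedure.

```python
import math

def stump_predict(threshold, polarity, x):
    """Decision stump: predict `polarity` when x > threshold, else -polarity."""
    return polarity if x > threshold else -polarity

def fit_stump(xs, ys, weights):
    """Exhaustively choose the stump minimizing weighted 0-1 error."""
    best = None
    for t in sorted(set(xs)):
        for pol in (1, -1):
            err = sum(w for x, y, w in zip(xs, ys, weights)
                      if stump_predict(t, pol, x) != y)
            if best is None or err < best[0]:
                best = (err, t, pol)
    return best

def adaboost(xs, ys, rounds=5):
    """Discrete AdaBoost: builds the additive model F(x) = sum_m a_m h_m(x)."""
    n = len(xs)
    weights = [1.0 / n] * n
    model = []
    for _ in range(rounds):
        err, t, pol = fit_stump(xs, ys, weights)
        err = max(err, 1e-12)            # guard against a perfect stump
        a = 0.5 * math.log((1 - err) / err)
        model.append((a, t, pol))
        # Reweight: misclassified points gain weight exponentially.
        weights = [w * math.exp(-a * y * stump_predict(t, pol, x))
                   for w, x, y in zip(weights, xs, ys)]
        z = sum(weights)
        weights = [w / z for w in weights]
    return model

def predict(model, x):
    """Weighted majority vote: the sign of the additive fit F(x)."""
    score = sum(a * stump_predict(t, pol, x) for a, t, pol in model)
    return 1 if score > 0 else -1

# Toy 1-D data: the positive class sits in an interior interval, so no
# single stump suffices, but a weighted vote of three stumps does.
xs = list(range(8))
ys = [-1, -1, 1, 1, 1, 1, -1, -1]
model = adaboost(xs, ys, rounds=5)
print(all(predict(model, xi) == yi for xi, yi in zip(xs, ys)))  # → True
```

In the paper's terms, the score summed in `predict` plays the role of half the log-odds of an additive logistic model, and each round is a stagewise step toward maximizing the Bernoulli likelihood.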