Author

Daniel Bennett

Bio: Daniel Bennett is an academic researcher from Princeton University. The author has contributed to research in topics: Psychology & Bayesian inference. The author has an h-index of 10 and has co-authored 27 publications receiving 387 citations. Previous affiliations of Daniel Bennett include Monash University & University of Melbourne.

Papers
Journal ArticleDOI
TL;DR: It is found that participants were willing to incur considerable monetary costs to acquire payoff-irrelevant information about the lottery outcome, and this behaviour was well explained by a computational cognitive model in which information preference resulted from aversion to temporally prolonged uncertainty.
Abstract: In a dynamic world, an accurate model of the environment is vital for survival, and agents ought regularly to seek out new information with which to update their world models. This aspect of behaviour is not captured well by classical theories of decision making, and the cognitive mechanisms of information seeking are poorly understood. In particular, it is not known whether information is valued only for its instrumental use, or whether humans also assign it a non-instrumental intrinsic value. To address this question, the present study assessed preference for non-instrumental information among 80 healthy participants in two experiments. Participants performed a novel information preference task in which they could choose to pay a monetary cost to receive advance information about the outcome of a monetary lottery. Importantly, acquiring information did not alter lottery outcome probabilities. We found that participants were willing to incur considerable monetary costs to acquire payoff-irrelevant information about the lottery outcome. This behaviour was well explained by a computational cognitive model in which information preference resulted from aversion to temporally prolonged uncertainty. These results strongly suggest that humans assign an intrinsic value to information in a manner inconsistent with normative accounts of decision making under uncertainty. This intrinsic value may be associated with adaptive behaviour in real-world environments by producing a bias towards exploratory and information-seeking behaviour.
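As a rough illustration of the kind of model described above, the sketch below assigns advance information a subjective value proportional to the uncertainty it resolves and to the delay over which that uncertainty would otherwise persist. The functional form, function names, and parameter values are illustrative assumptions, not the paper's published model.

```python
# Illustrative sketch (not the authors' published model): preference for
# non-instrumental information driven by aversion to temporally
# prolonged uncertainty about a binary lottery outcome.
import math

def outcome_entropy(p_win: float) -> float:
    """Shannon entropy (bits) of a binary lottery outcome."""
    if p_win in (0.0, 1.0):
        return 0.0
    return -(p_win * math.log2(p_win) + (1 - p_win) * math.log2(1 - p_win))

def value_of_advance_info(p_win: float, delay: float, uncertainty_aversion: float) -> float:
    """Subjective cost of waiting in uncertainty = aversion * entropy * delay.
    Advance information removes that cost, so this is its subjective value."""
    return uncertainty_aversion * outcome_entropy(p_win) * delay

def chooses_info(p_win: float, delay: float, uncertainty_aversion: float, info_cost: float) -> bool:
    """Pay for payoff-irrelevant information when its subjective value exceeds its price."""
    return value_of_advance_info(p_win, delay, uncertainty_aversion) > info_cost
```

Under these assumptions, an agent pays for payoff-irrelevant information whenever the cost of waiting in uncertainty exceeds the monetary cost of the cue, even though the cue never changes the lottery's outcome probabilities.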

93 citations

Journal ArticleDOI
TL;DR: Analysis of the event-related potential elicited by informative cues revealed that the feedback-related negativity independently encoded both an information prediction error and a reward prediction error, consistent with the hypothesis that information seeking results from processing of information within neural reward circuits.
Abstract: In a dynamic world, accurate beliefs about the environment are vital for survival, and individuals should therefore regularly seek out new information with which to update their beliefs. This aspect of behaviour is not well captured by standard theories of decision making, and the neural mechanisms of information seeking remain unclear. One recent theory posits that valuation of information results from representation of informative stimuli within canonical neural reward-processing circuits, even if that information lacks instrumental use. We investigated this question by recording EEG from twenty-three human participants performing a non-instrumental information-seeking task. In this task, participants could pay a monetary cost to receive advance information about the likelihood of receiving reward in a lottery at the end of each trial. Behavioural results showed that participants were willing to incur considerable monetary costs to acquire early but non-instrumental information. Analysis of the event-related potential elicited by informative cues revealed that the feedback-related negativity independently encoded both an information prediction error and a reward prediction error. These findings are consistent with the hypothesis that information seeking results from processing of information within neural reward circuits, and suggest that information may represent a distinct dimension of valuation in decision making under uncertainty.
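To make the two regressors concrete, one plausible way to define them is sketched below: a reward prediction error as the cue-induced change in expected reward, and an information prediction error as delivered minus expected uncertainty reduction. These definitions are assumptions for illustration, not necessarily the ones used in the paper.

```python
# Hedged sketch: two prediction errors that a single informative cue
# could elicit simultaneously. Definitions are illustrative assumptions.
import math

def entropy(p: float) -> float:
    """Shannon entropy (bits) of a binary outcome with win probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def cue_prediction_errors(prior_p_win, cue_p_win, p_informative_cue):
    """RPE: change in expected reward signalled by the cue.
    IPE: information actually delivered minus information expected in advance."""
    rpe = cue_p_win - prior_p_win
    info_gained = entropy(prior_p_win) - entropy(cue_p_win)
    expected_info = p_informative_cue * entropy(prior_p_win)
    ipe = info_gained - expected_info
    return rpe, ipe
```

Because the two quantities vary independently across trials (a cue can be fully informative yet signal bad news), regressors of this kind can be entered jointly into a single-trial regression on FRN amplitude.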

75 citations

Journal ArticleDOI
01 Sep 2015
TL;DR: A study of how two components of the human event-related potential encoded different aspects of belief updating provides evidence that belief update size and belief uncertainty have distinct neural signatures that can be tracked in single trials in specific ERP components, and that the cognitive mechanisms underlying belief updating in humans can be described well within a Bayesian framework.
Abstract: Belief updating, the process by which an agent alters an internal model of its environment, is a core function of the CNS. Recent theory has proposed broad principles by which belief updating might operate, but more precise details of its implementation in the human brain remain unclear. In order to address this question, we studied how two components of the human event-related potential encoded different aspects of belief updating. Participants completed a novel perceptual learning task while electroencephalography was recorded. Participants learned the mapping between the contrast of a dynamic visual stimulus and a monetary reward and updated their beliefs about a target contrast on each trial. A Bayesian computational model was formulated to estimate belief states at each trial and was used to quantify the following two variables: belief update size and belief uncertainty. Robust single-trial regression was used to assess how these model-derived variables were related to the amplitudes of the P3 and the stimulus-preceding negativity (SPN), respectively. Results showed a positive relationship between belief update size and P3 amplitude at one fronto-central electrode, and a negative relationship between SPN amplitude and belief uncertainty at a left central and a right parietal electrode. These results provide evidence that belief update size and belief uncertainty have distinct neural signatures that can be tracked in single trials in specific ERP components. This, in turn, provides evidence that the cognitive mechanisms underlying belief updating in humans can be described well within a Bayesian framework.
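The two model-derived regressors described above can be illustrated with a minimal conjugate Gaussian belief update. The Gaussian form is an assumption made here for simplicity; the paper's actual model may differ.

```python
# Hedged sketch: one trial of a conjugate Gaussian belief update, yielding
# per-trial "update size" (regressor for P3) and "uncertainty" (regressor
# for SPN). The Gaussian model is an illustrative assumption.

def gaussian_update(prior_mean, prior_var, obs, obs_var):
    """Bayesian update of a Gaussian belief after one noisy observation."""
    k = prior_var / (prior_var + obs_var)        # Kalman-style gain
    post_mean = prior_mean + k * (obs - prior_mean)
    post_var = (1.0 - k) * prior_var
    update_size = abs(post_mean - prior_mean)    # belief update size
    uncertainty = post_var                       # belief uncertainty
    return post_mean, post_var, update_size, uncertainty
```

Run trial by trial, the sequence of `update_size` and `uncertainty` values would then serve as single-trial regressors against ERP amplitudes.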

56 citations

Journal ArticleDOI
TL;DR: DDTBOX is an easy-to-use and open-source toolbox that allows for characterising the time-course of information related to various perceptual and cognitive processes and could therefore be a valuable tool for the neuroimaging community.
Abstract: In recent years, neuroimaging research in cognitive neuroscience has increasingly used multivariate pattern analysis (MVPA) to investigate higher cognitive functions. Here we present DDTBOX, an open-source MVPA toolbox for electroencephalography (EEG) data. DDTBOX runs under MATLAB and is well integrated with the EEGLAB/ERPLAB and Fieldtrip toolboxes (Delorme and Makeig 2004; Lopez-Calderon and Luck 2014; Oostenveld et al. 2011). It trains support vector machines (SVMs) on patterns of event-related potential (ERP) amplitude data, following or preceding an event of interest, for classification or regression of experimental variables. These amplitude patterns can be extracted across space/electrodes (spatial decoding), time (temporal decoding), or both (spatiotemporal decoding). DDTBOX can also extract SVM feature weights, generate empirical chance distributions based on shuffled-labels decoding for group-level statistical testing, provide estimates of the prevalence of decodable information in the population, and perform a variety of corrections for multiple comparisons. It also includes plotting functions for single subject and group results. DDTBOX complements conventional analyses of ERP components, as subtle multivariate patterns can be detected that would be overlooked in standard analyses. It further allows for a more explorative search for information when no ERP component is known to be specifically linked to a cognitive process of interest. In summary, DDTBOX is an easy-to-use and open-source toolbox that allows for characterising the time-course of information related to various perceptual and cognitive processes. It can be applied to data from a large number of experimental paradigms and could therefore be a valuable tool for the neuroimaging community.

54 citations

Journal ArticleDOI
TL;DR: The dual cultures of computational psychiatry share an overlapping set of statistical tools and practical methods but differ in whether the end goal is explanation or prediction, a difference that limits the inferences each culture can draw.
Abstract: Computational psychiatry is a rapidly growing field that uses tools from cognitive science, computational neuroscience, and machine learning to address difficult psychiatric questions. Its great promise is that these tools will improve psychiatric diagnosis and treatment while also helping to explain the causes of psychiatric illness.1-3 Within computational psychiatry, there are distinct research cultures with distinct computational tools and research goals: machine learning and explanatory modeling.1 While each can potentially advance psychiatric research, important distinctions between the cultures sometimes go unappreciated in the broader psychiatric research community. We detail these distinctions, referring to Breiman’s influential dichotomy between these cultures of statistical modeling4 to identify limitations on the inferences that each culture can draw. Breiman4 defined the 2 cultures of statistical modeling in terms of a data-generating process that generates output data from input variables. His dichotomy distinguished “algorithmic modeling,”4(p200) which aims to predict what outputs a data-generating process will produce from a given set of inputs while treating the process itself as a black box,2,3 from “data modeling,”4(p199) which uses the pattern of outputs and inputs to explain how the data-generating process works. In psychiatry, the data-generating processes are the psychological and neurobiological mechanisms that produce psychiatric illnesses. The output data produced by these processes are psychiatric outcomes (eg, symptoms, medication response) with input variables including family history, precipitating life events, and others. Breiman’s distinction between prediction and explanation is also what separates machine-learning approaches to computational psychiatry, which aim to predict psychiatric outcomes, from explanatory modeling, which aims to explain the computational-biological mechanisms of psychiatric illnesses. 
While these approaches have also been termed data-driven and theory-driven,1 we emphasize that the dual cultures of computational psychiatry share an overlapping set of statistical tools and practical methods but differ in whether the end goal is explanation or prediction. A deep neural network, for instance, can be either explanatory (as a biophysically realistic model of psychiatric dysfunction), or predictive (as a classifier used to predict a diagnosis), depending on context. The culture of machine learning typically uses statistical techniques, such as support vector machines or deep neural networks, to predict psychiatric outcomes. These tools can be seen as lying on a continuum with classical statistics such as regression but with the addition of practices designed to reduce overfitting, such as parameter regularization and cross validation. For instance, a study by Webb et al5 has used such tools to predict antidepressant response from a combination of variables, including demographic factors, symptom severity, and cognitive task performance. Despite good predictive performance, the study drew no conclusions about the mechanisms by which these variables were linked to antidepressant response. This is because in machine learning, the parameters of the models that are used to predict psychiatric outcomes are not assumed to correspond to any underlying psychological or neural process; consequently, these parameters cannot be interpreted mechanistically. In comparison, the culture of explanatory modeling focuses on statistical models (expressed as equations) that define interacting processes with parameters that putatively correspond to neural computations. For instance, equations describing value updating in reinforcement-learning models are thought to correspond to corticostriatal synaptic modifications modulated by dopaminergic signaling of reward prediction errors. 
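The value-update equation mentioned above can be made concrete with a minimal Rescorla-Wagner-style learner, in which the prediction-error term is the quantity thought to be signalled by dopamine. The learning rate and reward sequence below are illustrative.

```python
# Sketch of the standard reinforcement-learning value update:
#     V <- V + alpha * (r - V)
# where (r - V) is the reward prediction error putatively signalled by
# dopaminergic neurons. Parameter values here are illustrative.

def update_value(value, reward, learning_rate):
    """One trial of Rescorla-Wagner value updating."""
    rpe = reward - value                  # reward prediction error
    return value + learning_rate * rpe, rpe

value = 0.0
for reward in [1, 1, 0, 1]:               # example reward sequence
    value, rpe = update_value(value, reward, learning_rate=0.5)
```

In the explanatory-modeling culture, `learning_rate` and kindred parameters are fit to each participant's data and interpreted as indices of underlying neural computation; in the machine-learning culture, fitted parameters carry no such interpretation.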
Consequently, explanatory model parameters fit to behavioral and/or neural data from patients with psychiatric diagnoses can directly inform inferences about dysfunctions in underlying neural computations, subject to several conditions being met. For instance, Huys et al6 have shown that anhedonia is correlated across diagnoses with a model parameter corresponding to the blunting of experienced reward value but not with a parameter controlling the rate of learning from this experienced value, providing evidence against one dopaminergic explanation of depression. Importantly, there are several conditions that must be met before an explanatory model can be used in this way. First, to support the model’s correspondence to the true data-generating process and distinguish between different candidate models, the models must make sufficiently different predictions for the experimental data. Separately, to identify the model parameters accurately, the parameters’ effects on model predictions should be relatively independent, and there must be sufficient data. One approach to testing these conditions is to simulate data from each candidate model and test the ability of a model-fitting routine to recover the true cognitive model and its parameters from these data. Because empirical data will never correspond as closely to any of the candidate models as simulated data do, this test is a necessary but not sufficient condition for reliable explanatory modeling. Indeed, a common error is to overinterpret results, forgetting that the best-fitting model is only better than the models with which it was compared and that parameter values are only estimates, reliable to a level of statistical error. A potential limitation of explanatory modeling in computational psychiatry is that theories (ie, models) may be ill-matched to available data, because data collected for other purposes may not distinguish between subtly (but importantly) different hypotheses.
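The parameter-recovery check described above can be sketched in a few lines: simulate data from a model with a known parameter, then test whether fitting recovers it. The learning model, noise level, and grid-search fitting routine below are all illustrative assumptions.

```python
# Hedged sketch of a parameter-recovery test: simulate noisy data from a
# Rescorla-Wagner learner with known alpha, then check that model fitting
# recovers a value close to the truth. All settings are illustrative.
import random

def simulate(alpha, rewards, noise, rng):
    """Simulate noisy per-trial value reports from a known learner."""
    v, data = 0.0, []
    for r in rewards:
        v += alpha * (r - v)                  # Rescorla-Wagner update
        data.append(v + rng.gauss(0, noise))  # noisy observation of value
    return data

def fit_alpha(data, rewards, grid):
    """Grid-search the learning rate that best reproduces the data."""
    def sse(alpha):
        v, total = 0.0, 0.0
        for r, d in zip(rewards, data):
            v += alpha * (r - v)
            total += (d - v) ** 2
        return total
    return min(grid, key=sse)

rng = random.Random(1)
rewards = [rng.choice([0, 1]) for _ in range(200)]
data = simulate(0.3, rewards, noise=0.05, rng=rng)   # true alpha = 0.3
grid = [i / 20 for i in range(1, 20)]                # alpha in 0.05..0.95
recovered = fit_alpha(data, rewards, grid)
```

A full recovery analysis would repeat this across many simulated datasets and candidate models; recovery succeeding on simulated data is, as the text notes, necessary but not sufficient for trusting fits to empirical data.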

32 citations


Cited by
01 Jan 2016
An Introduction to the Event-Related Potential Technique

2,445 citations

Journal ArticleDOI
TL;DR: A review of Plans and the Structure of Behavior by Miller, Galanter, and Pribram, noting among other things that the authors do not discuss the difference between the brain, with its vast number of parallel channels but few operations, and the modern high-speed computer, with its few channels and vast number of operations.
Abstract: Plans and the Structure of Behavior. By George A. Miller, Eugene Galanter, and Karl H. Pribram. Price, $5.00. Pp. 226. Henry Holt & Co., Inc., New York 17, 1960. This is an important book for psychiatrists and behavioral scientists, since it presents a clear, concise study of the application of cybernetics, information and computer theory to the problem of analyzing behavior. The authors have been actively engaged in behavioral research in different areas: Miller in information and communication, Galanter in experimental psychology, and Pribram in neurophysiology.
The book resulted from a series of discussions which they engaged in during a year they spent together at the Center for Advanced Study in the Behavioral Sciences, Palo Alto, Calif. Their original intent was to write a diary, as it were, of the development of their ideas and, fortunately, enough of this remains to make the book clear, easy to read, and interesting. It is also fortunate, however, that in the final writing a variety of studies comparing the "behavior" of computing machines with human "cognitive behavior" have been reviewed and summarized. The result is one of the best presentations of the present status of the brain-computer problem. The authors, however, do not discuss certain aspects of this problem, such as the difference between the brain, with its vast number of parallel channels but few operations, and the modern high-speed computer, with its few channels and vast numbers of operations. This omission is consistent with their interest, since it would introduce the question of mechanisms rather than the problem of the structure of behavior as it is observed in everyday life in the clinic and in experiments on learning, conditioning, etc. Similarly, they do not discuss the qualitative differences between mechanisms of memory in the computer and those in the brain. In the former, a "memory", i.e., stored information, is identified, metaphorically speaking, by an address, whereas no such mechanism is known in the brain (personal communication, Dr. Julian Bigelow). With few exceptions, however, the data, concepts, and theories presented are handled with elegant precision, as illustrated by the discussion of Sherrington's concepts of the "Reflex" and the "Synapse," Kurt Lewin's ideas of "tension states," and the numerous references to the work of Newell, Shaw, and Simon on computers and logic. There are, nevertheless, areas with

1,219 citations

01 Jan 2016
Introduction to Robust Estimation and Hypothesis Testing

968 citations

01 Jan 2016
The Oxford Handbook of Event-Related Potential Components

664 citations

Journal ArticleDOI
15 Jan 2020-Nature
TL;DR: It is hypothesized that the brain represents possible future rewards not as a single mean of stochastic outcomes, as in the canonical model, but instead as a probability distribution, effectively representing multiple future outcomes simultaneously and in parallel.
Abstract: Since its introduction, the reward prediction error theory of dopamine has explained a wealth of empirical phenomena, providing a unifying framework for understanding the representation of reward and value in the brain1–3. According to the now canonical theory, reward predictions are represented as a single scalar quantity, which supports learning about the expectation, or mean, of stochastic outcomes. Here we propose an account of dopamine-based reinforcement learning inspired by recent artificial intelligence research on distributional reinforcement learning4–6. We hypothesized that the brain represents possible future rewards not as a single mean, but instead as a probability distribution, effectively representing multiple future outcomes simultaneously and in parallel. This idea implies a set of empirical predictions, which we tested using single-unit recordings from mouse ventral tegmental area. Our findings provide strong evidence for a neural realization of distributional reinforcement learning.
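The core distributional idea can be illustrated with a small population of value predictors that learn with asymmetric rates for positive versus negative prediction errors: they converge to different expectile-like levels of the reward distribution and thus jointly represent its shape, not just its mean. This is a textbook-style sketch of distributional reinforcement learning, not the paper's analysis code; all parameter values are invented.

```python
# Illustrative sketch of distributional RL: learners with asymmetric
# learning rates converge to different expectile-like levels of a
# bimodal reward distribution, together encoding its shape.
import random

def train_expectile_learners(taus, rewards, lr=0.02):
    """Each tau in (0, 1) defines one learner; high tau = 'optimistic'
    (weights positive prediction errors more), low tau = 'pessimistic'."""
    values = [0.0] * len(taus)
    for r in rewards:
        for i, tau in enumerate(taus):
            delta = r - values[i]                      # prediction error
            rate = lr * (tau if delta > 0 else (1.0 - tau))
            values[i] += rate * delta
    return values

rng = random.Random(0)
rewards = [rng.choice([0.0, 10.0]) for _ in range(5000)]   # bimodal lottery
values = train_expectile_learners([0.1, 0.5, 0.9], rewards)
```

For this 0-or-10 lottery, the tau = 0.5 learner settles near the mean (5), while the pessimistic and optimistic learners settle below and above it, so reading out the whole population recovers the distribution's spread.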

279 citations