Author

Raymond J. Dolan

Bio: Raymond J. Dolan is an academic researcher from University College London. The author has contributed to research in topics: Prefrontal cortex & Functional magnetic resonance imaging. The author has an h-index of 196 and has co-authored 919 publications receiving 138,540 citations. Previous affiliations of Raymond J. Dolan include VU University Amsterdam & McGovern Institute for Brain Research.


Papers
Posted Content · DOI
03 Aug 2022 - bioRxiv
TL;DR: It is found that healthy human participants are systematically under-confident when discriminating low-precision stimuli, and novel neuroanatomical and physiological markers underlying this metacognitive bias are revealed.
Abstract: Correctly estimating the influence of uncertainty on our decisions is a critical metacognitive faculty. However, the relationship between sensory uncertainty (or its inverse, precision), decision accuracy, and subjective confidence is currently unclear. Although some findings indicate that healthy adults exhibit an illusion of over-confidence, under-confidence in response to sensory uncertainty has also been reported. One reason for this ambiguity is that stimulus intensity and precision are typically confounded with one another, limiting the ability to assess their independent contribution to metacognitive biases. Here we report four psychophysical experiments controlling these factors, finding that healthy human participants are systematically under-confident when discriminating low-precision stimuli. This bias remains even when decision accuracy and reaction time are accounted for, indicating that a performance-independent computation partially underpins the influence of sensory precision on confidence. We further show that this influence is linked to fluctuations in arousal and individual differences in the neuroanatomy of the left superior parietal lobe and middle insula. These results illuminate the neural and physiological correlates of precision misperception in metacognition.
Significance Statement: The ability to recognize the influence of sensory uncertainty on our decisions underpins the veracity of self-monitoring, or metacognition. In the extreme, a systematic confidence bias can undermine decision accuracy and potentially underpin disordered self-insight in neuropsychiatric illness. Previously it was unclear if metacognition accurately reflects changes in sensory precision, in part due to confounding effects of stimulus intensity and precision. Here we overcome these limitations to repeatedly demonstrate a robust precision-related confidence bias. Further, we reveal novel neuroanatomical and physiological markers underlying this metacognitive bias. These results suggest a unique state-based computational mechanism may drive subjective confidence biases and further provide new avenues for investigating maladaptive awareness of uncertainty in neuropsychiatric disorders.
01 Jan 2009
TL;DR: It is shown that administration of a drug that enhances dopaminergic function during the imaginative construction of positive future life events subsequently enhances estimates of the hedonic pleasure to be derived from these same events.
Abstract: Human action is strongly influenced by expectations of pleasure. Making decisions, ranging from which products to buy to which job offer to accept, requires an estimation of how good (or bad) the likely outcomes will make us feel [1]. Yet, little is known about the biological basis of subjective estimations of future hedonic reactions. Here, we show that administration of a drug that enhances dopaminergic function (dihydroxy-L-phenylalanine; L-DOPA) during the imaginative construction of positive future life events subsequently enhances estimates of the hedonic pleasure to be derived from these same events. These findings provide the first direct evidence for the role of dopamine in the modulation of subjective hedonic expectations in humans.
Journal Article · DOI
TL;DR: In this article, task-relevant replay-default mode network coupling was found to be reduced in schizophrenia and to predict memory maintenance of learned task sequences; the reduction was not explained by differential replay or altered default mode network dynamics between groups, nor by antipsychotic exposure.
Abstract: Schizophrenia is characterized by an abnormal resting state and default mode network brain activity. However, despite intense study, the mechanisms linking default mode network dynamics to neural computation remain elusive. During rest, sequential hippocampal reactivations, known as ‘replay’, are played out within default mode network activation windows, highlighting a potential role of replay-default mode network coupling in memory consolidation and model-based mental simulation. Here, we test a hypothesis of reduced replay-default mode network coupling in schizophrenia, using magnetoencephalography and a non-spatial sequence learning task designed to elicit off-task (i.e. resting state) neural replay. Participants with a diagnosis of schizophrenia (n = 28, mean age 28.2 years, range 20–40, 6 females, 13 not taking antipsychotic medication) and non-clinical control participants (n = 29, mean age 28.1 years, range 18–45, 6 females, matched at group level for age, intelligence quotient, gender, years in education and working memory) underwent a magnetoencephalography scan both during task completion and during a post-task resting state session. We used neural decoding to infer the time course of default mode network activation (time-delay embedding hidden Markov model) and spontaneous neural replay (temporally delayed linear modelling) in resting state magnetoencephalography data. Using multiple regression, we then quantified the extent to which default mode network activation was uniquely predicted by replay events that recapitulated the learned task sequences (i.e. ‘task-relevant’ replay-default mode network coupling). In control participants, replay-default mode network coupling was augmented following sequence learning, an augmentation that was specific for replay of task-relevant (i.e. learned) state transitions. This task-relevant replay-default mode network coupling effect was significantly reduced in schizophrenia (t(52) = 3.93, P = 0.018). Task-relevant replay-default mode network coupling predicted memory maintenance of learned sequences (ρ(52) = 0.31, P = 0.02). Importantly, reduced task-relevant replay-default mode network coupling in schizophrenia was not explained by differential replay or altered default mode network dynamics between groups nor by reference to antipsychotic exposure. Finally, task-relevant replay-default mode network coupling during rest correlated with stimulus-evoked default mode network modulation as measured in a separate task session. In the context of a proposed functional role of replay-default mode network coupling, our findings shed light on the functional significance of default mode network abnormalities in schizophrenia and provide for a consilience between task-based and resting state default mode network findings in this disorder.
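As a purely illustrative sketch of the regression logic described above (this is not the authors' analysis code; the signals are simulated and all variable names are hypothetical), per-participant coupling could be quantified by entering task-relevant and task-irrelevant replay regressors together and reading off the unique contribution of the task-relevant one:

```python
import numpy as np

# Hypothetical illustration (simulated data, invented names): regress a
# DMN-activation time course on task-relevant and task-irrelevant replay
# regressors entered together, so each beta reflects a unique contribution.
rng = np.random.default_rng(0)
n_samples = 6000                                    # e.g. one resting-state segment

dmn_activation = rng.random(n_samples)              # stand-in for HMM DMN-state probability
relevant_replay = (rng.random(n_samples) < 0.02).astype(float)    # learned-sequence replay onsets
irrelevant_replay = (rng.random(n_samples) < 0.02).astype(float)  # other replay onsets

X = np.column_stack([np.ones(n_samples), relevant_replay, irrelevant_replay])
betas, *_ = np.linalg.lstsq(X, dmn_activation, rcond=None)

coupling = betas[1]  # per-participant 'task-relevant replay-DMN coupling' estimate
print(f"task-relevant coupling beta = {coupling:.3f}")
```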
Journal Article · DOI
TL;DR: The authors found that participants were willing to increase the experimental pain of another participant to avoid delaying it, indicative of dread, though they did so to a lesser extent than was the case for their own pain.
Abstract: A dislike of waiting for pain, aptly termed ‘dread’, is so great that people will increase pain to avoid delaying it. However, despite many accounts of altruistic responses to pain in others, no previous studies have tested whether people take delay into account when attempting to ameliorate others' pain. We examined the impact of delay in 2 experiments where participants (total N = 130) specified the intensity and delay of pain either for themselves or another person. Participants were willing to increase the experimental pain of another participant to avoid delaying it, indicative of dread, though did so to a lesser extent than was the case for their own pain. We observed a similar attenuation in dread when participants chose the timing of a hypothetical painful medical treatment for a close friend or relative, but no such attenuation when participants chose for a more distant acquaintance. A model in which altruism is biased to privilege pain intensity over the dread of pain parsimoniously accounts for these findings. We refer to this underestimation of others' dread as a ‘Dread Empathy Gap’.
Journal Article · DOI
TL;DR: Universal prevention strategies reducing levels of mental distress in the whole population (in addition to screening) may prevent more suicides than approaches targeting youths with psychiatric disorders.
Abstract: Background: Recent evidence suggests that multiple symptoms or diagnoses, particularly when co-occurring with non-suicidal self-harm, predict suicide risk more strongly than a single diagnosis. Method: Suicidal thought (ST) and non-suicidal self-injury (NSSI) were studied in two independent longitudinal UK samples of young people: the Neuroscience in Psychiatry (NSPN) 2400 cohort (n=2403) and the ROOTS cohort (n=1074). Participants, aged 14-24 years, were recruited from primary health care registers, schools and colleges, and advertisements to complete quotas in age-sex strata. We calculated a score on a latent construct, Common Mental Distress (a summary measure indexing a broad range of symptoms conventionally seen as components of distinct disorders). We examined the relative prevalence of ST and NSSI over the population distribution of mental distress; we used logistic regressions, IRT and ROC analyses to determine associations between suicide risks and mental distress (in continuous and above-the-norm categorical format); and pathway mediation models to examine longitudinal associations. Outcomes: We found a dose-response relationship between levels of mental distress and suicide risk. In both cohorts the majority of all subjects experiencing ST (78% and 76%) and NSSI (66% and 71%) had scores on mental distress no more than two standard deviations above the population mean; higher scores indicated highest risk but were, by definition, infrequent. Mental distress contributed to the longitudinal persistence of ST and NSSI. Interpretation: Universal prevention strategies reducing levels of mental distress in the whole population (in addition to screening) may prevent more suicides than approaches targeting youths with psychiatric disorders. Funding Statement: The ROOTS study was supported by a Wellcome Trust Grant (Grant number 074296) to I.M.G. and P.B.J., the NIHR Collaborations for Leadership in Applied Research and Care (CLAHRC) East of England, and the NIHR Cambridge Biomedical Research Centre. The NSPN study was supported by the Wellcome Trust Strategic Award (095844/Z/11/Z) to I.M.G., E.B., P.B.J., R.D., P.F. The work has been carried out in the Department of Psychiatry, University of Cambridge. Declaration of Interests: E.P., S.N., I.M.G., J.S. and P.B.J. have no competing interests. E.B., P.F. and P.B.J. are in receipt of National Institute for Health Research (NIHR) Senior Investigator Awards (NF-SI-0514-10157 and NF-SI-0514-10117). P.F. was in part supported by the NIHR Collaboration for Leadership in Applied Health Research and Care (CLAHRC) North Thames at Barts Health NHS Trust. P.W. has recent/current grant support from NIHR, Cambridgeshire County Council and CLAHRC East of England. P.W. discloses consulting for Lundbeck and Takeda; P.B.J. discloses consulting for Janssen and Recordati. E.B. is employed half-time by the University of Cambridge and half-time by GlaxoSmithKline, in which he holds stock. Ethics Approval Statement: Written consent from participants aged 14 or 15 years was supplemented by written consent from their parent or legal guardian; older participants gave their own written consent. Ethical approval was obtained for Cohort 1 from the National Health Service Research Ethics Service (# 97546) and for Cohort 2 from the Cambridgeshire 2 REC (# 03/302).

Cited by
Book
01 Jan 1988
TL;DR: This book provides a clear and simple account of the key ideas and algorithms of reinforcement learning, ranging from the history of the field's intellectual foundations to the most recent developments and applications.
Abstract: Reinforcement learning, one of the most active research areas in artificial intelligence, is a computational approach to learning whereby an agent tries to maximize the total amount of reward it receives when interacting with a complex, uncertain environment. In Reinforcement Learning, Richard Sutton and Andrew Barto provide a clear and simple account of the key ideas and algorithms of reinforcement learning. Their discussion ranges from the history of the field's intellectual foundations to the most recent developments and applications. The only necessary mathematical background is familiarity with elementary concepts of probability. The book is divided into three parts. Part I defines the reinforcement learning problem in terms of Markov decision processes. Part II provides basic solution methods: dynamic programming, Monte Carlo methods, and temporal-difference learning. Part III presents a unified view of the solution methods and incorporates artificial neural networks, eligibility traces, and planning; the two final chapters present case studies and consider the future of reinforcement learning.
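As a flavour of the temporal-difference methods covered in Part II, here is a minimal sketch (an illustration, not code from the book) of tabular TD(0) value prediction on a small random-walk chain under a fixed random policy:

```python
import random

# Tabular TD(0) value prediction on a 5-state random walk.
# States 1..5 are non-terminal; 0 and 6 are terminal. A reward of +1 is
# received only when the walk exits on the right. Parameters are illustrative.
N_STATES = 5
ALPHA = 0.1      # step size
GAMMA = 1.0      # undiscounted episodic task
EPISODES = 1000

values = [0.0] * (N_STATES + 2)          # entries 0 and N_STATES+1 are terminal

for _ in range(EPISODES):
    state = (N_STATES + 1) // 2          # start in the middle state
    while state not in (0, N_STATES + 1):
        next_state = state + random.choice((-1, 1))
        reward = 1.0 if next_state == N_STATES + 1 else 0.0
        # TD(0) update: V(s) <- V(s) + alpha * [r + gamma * V(s') - V(s)]
        values[state] += ALPHA * (reward + GAMMA * values[next_state] - values[state])
        state = next_state

# The learned values approach the true probabilities of exiting right: 1/6 .. 5/6.
print([round(v, 2) for v in values[1:-1]])
```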

37,989 citations

28 Jul 2005
TL;DR: PfEMP1, expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, playing a key role in adhesion and immune evasion.
Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfEMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. Each haploid genome encodes approximately 60 members of the var gene family, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Journal Article · DOI
TL;DR: Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis.
Abstract: Machine Learning is the study of methods for programming computers to learn. Computers are applied to a wide range of tasks, and for most of these it is relatively easy for programmers to design and implement the necessary software. However, there are many tasks for which this is difficult or impossible. These can be divided into four general categories. First, there are problems for which there exist no human experts. For example, in modern automated manufacturing facilities, there is a need to predict machine failures before they occur by analyzing sensor readings. Because the machines are new, there are no human experts who can be interviewed by a programmer to provide the knowledge necessary to build a computer system. A machine learning system can study recorded data and subsequent machine failures and learn prediction rules. Second, there are problems where human experts exist, but where they are unable to explain their expertise. This is the case in many perceptual tasks, such as speech recognition, hand-writing recognition, and natural language understanding. Virtually all humans exhibit expert-level abilities on these tasks, but none of them can describe the detailed steps that they follow as they perform them. Fortunately, humans can provide machines with examples of the inputs and correct outputs for these tasks, so machine learning algorithms can learn to map the inputs to the outputs. Third, there are problems where phenomena are changing rapidly. In finance, for example, people would like to predict the future behavior of the stock market, of consumer purchases, or of exchange rates. These behaviors change frequently, so that even if a programmer could construct a good predictive computer program, it would need to be rewritten frequently. A learning program can relieve the programmer of this burden by constantly modifying and tuning a set of learned prediction rules. Fourth, there are applications that need to be customized for each computer user separately. Consider, for example, a program to filter unwanted electronic mail messages. Different users will need different filters. It is unreasonable to expect each user to program his or her own rules, and it is infeasible to provide every user with a software engineer to keep the rules up-to-date. A machine learning system can learn which mail messages the user rejects and maintain the filtering rules automatically. Machine learning addresses many of the same research questions as the fields of statistics, data mining, and psychology, but with differences of emphasis. Statistics focuses on understanding the phenomena that have generated the data, often with the goal of testing different hypotheses about those phenomena. Data mining seeks to find patterns in the data that are understandable by people. Psychological studies of human learning aspire to understand the mechanisms underlying the various learning behaviors exhibited by people (concept learning, skill acquisition, strategy change, etc.).
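The mail-filtering example lends itself to a compact sketch. The toy naive Bayes filter below (an illustration under assumed data and names, not code from the article) learns word statistics from a handful of hand-labelled messages, standing in for the user's accept/reject history, and then classifies a new message:

```python
import math
from collections import Counter

def tokenize(text):
    # Lower-case and split on whitespace; a real filter would use richer features.
    return text.lower().split()

def train(messages):
    """messages: list of (text, label) pairs, where label is 'spam' or 'ham'."""
    word_counts = {"spam": Counter(), "ham": Counter()}
    label_counts = Counter()
    for text, label in messages:
        label_counts[label] += 1
        word_counts[label].update(tokenize(text))
    return word_counts, label_counts

def predict(text, word_counts, label_counts):
    vocab = set(word_counts["spam"]) | set(word_counts["ham"])
    total = sum(label_counts.values())
    scores = {}
    for label in ("spam", "ham"):
        # Log prior plus Laplace-smoothed log likelihood of each word.
        score = math.log(label_counts[label] / total)
        n_words = sum(word_counts[label].values())
        for word in tokenize(text):
            score += math.log((word_counts[label][word] + 1) / (n_words + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

# Toy training data standing in for a user's accept/reject history.
history = [
    ("win a free prize now", "spam"),
    ("cheap meds limited offer", "spam"),
    ("meeting moved to friday", "ham"),
    ("draft of the grant report attached", "ham"),
]
wc, lc = train(history)
print(predict("free offer just for you", wc, lc))        # expected: spam
print(predict("notes from the friday meeting", wc, lc))  # expected: ham
```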

13,246 citations

Journal Article · DOI
TL;DR: It is proposed that cognitive control stems from the active maintenance of patterns of activity in the prefrontal cortex that represent goals and the means to achieve them, which provide bias signals to other brain structures whose net effect is to guide the flow of activity along neural pathways that establish the proper mappings between inputs, internal states, and outputs needed to perform a given task.
Abstract: The prefrontal cortex has long been suspected to play an important role in cognitive control, in the ability to orchestrate thought and action in accordance with internal goals. Its neural basis, however, has remained a mystery. Here, we propose that cognitive control stems from the active maintenance of patterns of activity in the prefrontal cortex that represent goals and the means to achieve them. They provide bias signals to other brain structures whose net effect is to guide the flow of activity along neural pathways that establish the proper mappings between inputs, internal states, and outputs needed to perform a given task. We review neurophysiological, neurobiological, neuroimaging, and computational studies that support this theory and discuss its implications as well as further issues to be addressed.
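As a toy illustration of the bias-signal idea (a sketch under arbitrary assumptions, not a model from the paper), the snippet below shows how a top-down context signal can let a weak but task-relevant pathway outcompete a stronger habitual one, as in Stroop-like conflict:

```python
# Toy 'guided activation' illustration: a context (PFC-like) bias signal added to
# a weak colour-naming pathway lets it outcompete the strong word-reading habit.
# Weights and inputs are arbitrary and for illustration only.
def respond(word_input, colour_input, context_bias):
    word_pathway = 1.0 * word_input                      # strong, habitual mapping
    colour_pathway = 0.4 * colour_input + context_bias   # weak mapping plus top-down bias
    return "word" if word_pathway > colour_pathway else "colour"

# Conflict trial: both pathways receive input.
print(respond(1.0, 1.0, context_bias=0.0))  # -> 'word'   (habit wins without control)
print(respond(1.0, 1.0, context_bias=0.8))  # -> 'colour' (bias guides the flow of activity)
```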

10,943 citations

Christopher M. Bishop
01 Jan 2006
TL;DR: This book presents probability distributions and linear models for regression and classification, along with neural networks, kernel methods, graphical models, approximate inference, sampling methods, and a discussion of combining models, in the context of machine learning.
Abstract: Contents: Probability Distributions; Linear Models for Regression; Linear Models for Classification; Neural Networks; Kernel Methods; Sparse Kernel Machines; Graphical Models; Mixture Models and EM; Approximate Inference; Sampling Methods; Continuous Latent Variables; Sequential Data; Combining Models.

10,141 citations